DBI database handle with AutoCommit set to 0 not returning proper data with SELECT?
This is a tricky one to explain (and very weird), so bear with me. I'll describe the problem and the fix for it, but I would like to see if anyone can explain why it works the way it does :)

I have a web application that uses mod_perl. It uses a MySQL database, and I write data to the database on a regular basis. The application is modular, so it also has its own 'database' module, where I handle connections, updates, etc. The database::db_connect() subroutine is used to connect to the database, and AutoCommit is set to 0.

I made another Perl application (a standalone daemon) that periodically fetches data from the database and performs various tasks depending on what data is returned. I include the database.pm module in it, so I don't have to rewrite/duplicate everything.

Problem I am experiencing is:

The application connects to the database on startup and then loops forever, fetching data from the database every X seconds. However, if data in the database is updated, my application still receives the 'old' data that it got on the initial connection/query to the database.

For example - I have 3 rows, and the column "Name" has the values 'a', 'b' and 'c', one per record. If I update one of the rows (using the mysql client from the command line, for example) and change Name from 'c' to 'x', my standalone daemon will not see that change - it will still get a/b/c back from MySQL. I captured the DB traffic with tcpdump, and I could definitely see that MySQL was really returning that stale data. I tried using SQL_NO_CACHE with the SELECT as well (since I wasn't sure what was going on), but that didn't help either.

Then I modified the DB connection string in my standalone daemon and set AutoCommit to 1. Suddenly, the application started getting the proper data.

I am puzzled, because I thought AutoCommit only affected INSERT/UPDATE types of statements and had no effect on SELECT statements. But it seemingly does, and I don't understand why.

Does anyone know why a SELECT statement will not return 'updated' rows from the database when AutoCommit is set to 0, but will return updated rows when AutoCommit is set to 1?

Here is simplified code (error checking etc. taken out) that I am using in the standalone daemon, and that doesn't return updated rows.

#!/usr/bin/perl

use strict;
use warnings;
use DBI;
use Data::Dumper;
$|=1;

my $dsn = "dbi:mysql:database=mp;mysql_read_default_file=/etc/mysql/database.cnf";
my $dbh = DBI->connect($dsn, undef, undef, {RaiseError => 0, AutoCommit => 0});
$dbh->{mysql_enable_utf8} = 1;

while(1)
{
    my $sql = "SELECT * FROM queue";
    my $ret_hashref = $dbh->selectall_hashref($sql, "ID");
    print Dumper($ret_hashref);
    sleep(30);
}

exit;

Changing AutoCommit to 1 fixes this. Why?

Thanks :)

P.S.: Not sure if anyone cares, but the DBI version is 1.613, DBD::mysql is 4.017, and perl is 5.10.1 (on Ubuntu 10.04).

Fornicate answered 17/10, 2010 at 6:22
Is the auto_commit setting on or off in your command-line mysql client (where you did the UPDATE operation)? -- Haiku
It is on (it's on by default, and I haven't changed it). I can see the 'new' updated data from the mysql client, or any other 'session' (a new DBI session that connects, or any other client that connects to the DB) - it's only the session with AutoCommit 0 that cannot access the updated data. -- Fornicate
I suppose you are using InnoDB tables and not MyISAM ones. As described in the InnoDB transaction model, all your queries (including SELECT) are taking place inside a transaction.

When AutoCommit is on, a transaction is started for each query and if it is successful, it is implicitly committed (if it fails, the behavior may vary, but the transaction is guaranteed to end). You can see the implicit commits in MySQL's binlog. By setting AutoCommit to false, you are required to manage the transactions on your own.

The default transaction isolation level is REPEATABLE READ, which means that all SELECT queries will read the same snapshot (the one established when the transaction started).

In addition to the solution given in the other answer (a ROLLBACK before starting to read), here are a couple of alternatives:

You can choose another transaction isolation level, like READ COMMITTED, which makes your SELECT queries read a fresh snapshot every time.
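A minimal sketch of that first option, reusing the DSN and table from the question (the credentials and schema are the question's, not verified here); the isolation level is changed right after connecting, before any query has started a transaction:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder DSN from the question -- adjust to your own setup.
my $dbh = DBI->connect(
    "dbi:mysql:database=mp", undef, undef,
    { RaiseError => 1, AutoCommit => 0 },
);

# All subsequent transactions in this session will read a fresh
# snapshot on every SELECT instead of reusing the first one.
$dbh->do("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED");

while (1) {
    my $rows = $dbh->selectall_hashref("SELECT * FROM queue", "ID");
    # ... act on $rows ...
    $dbh->commit;    # still good practice: don't hold a transaction open while sleeping
    sleep(30);
}
```

Even with READ COMMITTED it is worth ending the transaction each iteration, since a long-lived open transaction prevents InnoDB from purging old row versions.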

You could also leave AutoCommit set to true (the default) and start your own transactions by issuing BEGIN WORK. This temporarily disables the AutoCommit behavior until you issue a COMMIT or ROLLBACK statement, after which each query gets its own transaction again (or you start another with BEGIN WORK).
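In DBI that second pattern is usually written with the begin_work() method rather than a literal BEGIN WORK statement; it does the same thing portably. A sketch, with a hypothetical UPDATE against the question's queue table:

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN; AutoCommit is left at its default of on.
my $dbh = DBI->connect(
    "dbi:mysql:database=mp", undef, undef,
    { RaiseError => 1, AutoCommit => 1 },
);

# begin_work() turns AutoCommit off until the next commit() or
# rollback(), equivalent to issuing BEGIN WORK by hand.
$dbh->begin_work;
$dbh->do("UPDATE queue SET Name = ? WHERE ID = ?", undef, 'x', 3);
$dbh->commit;    # AutoCommit is back on; each query is its own transaction again
```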

I, personally, would choose the latter method, as it seems more elegant.

Lithotomy answered 17/10, 2010 at 8:20
This is really an amazing answer, and I thank you very much for taking the time to explain this. I did read the documentation and tried to figure it out, but really didn't come across what you mentioned here (was probably reading the wrong docs, then ;). Thanks once again, this really explains a lot. -- Fornicate
Another question (assuming you even read this again :). We are using stored procedures on the MySQL side, so I don't actually do any transaction work within the Perl code. Could I use AutoCommit = 1, and issue "BEGIN WORK" just before I invoke the stored procedure from the Perl code? -- Fornicate
Yes, you can do that. You can also put the transaction inside a stored procedure (but not inside a stored function), whatever fits your workflow better. -- Lithotomy
I think that when you turn autocommit off, you also start a transaction. And when you start a transaction, you may be protected from other people's changes until you commit it or roll it back. So, if my semi-informed guess is correct, and since you're only querying the data, add a rollback before the sleep (no point in holding locks you aren't using, etc.):

$dbh->rollback;
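Applied to the loop from the question, that is a one-line change (a sketch reusing the question's $dbh and queue table):

```perl
use Data::Dumper;

while (1) {
    my $ret_hashref = $dbh->selectall_hashref("SELECT * FROM queue", "ID");
    print Dumper($ret_hashref);
    # End the read-only transaction; the next SELECT then starts a new
    # transaction and sees a fresh snapshot of the table.
    $dbh->rollback;
    sleep(30);
}
```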
Tantara answered 17/10, 2010 at 7:5
