Is it safe to update a MySQL table while iterating a resultset with Perl DBI?
Specifically, I need to delete some rows from a table while iterating over all rows in the table. Does DBI have something like an updateable resultset in Java? So, if I do something like:

$query_all = $dbh->prepare("select Id, X, Y, Z from MyTable");
$delete = $dbh->prepare("delete from MyTable where Id = ?");
$query_all->execute();

while ( my @res = $query_all->fetchrow_array() ){
    my ($id, $x, $y, $z) = @res;
    # fetch the IDs of any other rows that have the same X, Y, Z values
    foreach my $dup_id ( @the_duplicate_ids ){
        $delete->execute($dup_id); # safe ??
        # also delete from another table using $dup_id
    }
}

...is that OK?

Just to give some context: I'm deleting duplicate rows that have the same values for the X, Y, Z columns, leaving just one row in each case (the first one found). If that were all I was doing, I'd just set up a unique index on those three columns to eliminate the dups, but I also need to delete a row from another table for each duplicate deleted from MyTable.

In the end, I wrote a script to identify and collect all the IDs for the rows that I need to delete into an array, then iterate over that array, deleting the relevant rows from both tables. I'm still interested in finding an answer to the original question though. If I get time I'll try to answer it myself, but if anyone already knows I'd like to hear it.
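For the record, here's a self-contained sketch of that two-pass approach. It uses an in-memory SQLite database purely so the example runs anywhere (the real script talked to MySQL), and the second table's name and link column (OtherTable.MyTableId) are made-up stand-ins:

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite stands in for the real MySQL database here.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "", { RaiseError => 1 });

$dbh->do("CREATE TABLE MyTable (Id INTEGER PRIMARY KEY, X INT, Y INT, Z INT)");
$dbh->do("CREATE TABLE OtherTable (MyTableId INTEGER)");
$dbh->do("INSERT INTO MyTable (X, Y, Z) VALUES (1,1,1), (1,1,1), (2,2,2)");
$dbh->do("INSERT INTO OtherTable (MyTableId) VALUES (1), (2), (3)");

# Pass 1: walk the whole resultset first, collecting the Ids of every
# duplicate beyond the first row seen for each (X, Y, Z) combination.
my (%seen, @doomed);
my $sth = $dbh->prepare("SELECT Id, X, Y, Z FROM MyTable ORDER BY Id");
$sth->execute();
while ( my ($id, $x, $y, $z) = $sth->fetchrow_array() ) {
    push @doomed, $id if $seen{ join "\0", $x, $y, $z }++;
}

# Pass 2: the resultset is fully consumed, so deleting is unambiguously safe.
my $del_main  = $dbh->prepare("DELETE FROM MyTable WHERE Id = ?");
my $del_other = $dbh->prepare("DELETE FROM OtherTable WHERE MyTableId = ?");
for my $id (@doomed) {
    $del_main->execute($id);
    $del_other->execute($id);
}

my ($remaining) = $dbh->selectrow_array("SELECT COUNT(*) FROM MyTable");
print "rows left in MyTable: $remaining\n";   # 2 (one per distinct X, Y, Z)
```

Because the resultset is fully consumed before any DELETE runs, the question of modifying the table mid-iteration never arises.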

Brindled answered 31/8, 2015 at 9:21
You should add a trigger that deletes the corresponding row in your secondary table when a row is removed from the primary table. Then you can add the unique constraint. (Byzantium)
That seems like a good solution, but it actually won't be suitable for me. My requirements have changed slightly in that I need to hold onto the last duplicate added to the table instead of the first, and I think a unique constraint only keeps the first. (Brindled)

You're worried about a concurrent modification error?

If you're using transactions, it won't be a problem as long as you save the commit for AFTER you're done iterating.

However, if you're using autocommit it could be a problem, so the solution would be:

Pseudo-code-ish, not tested:

# Loop through each row and note the records to delete
my @deathrow = ();
foreach my $row ( @resultset ) {
    push @deathrow, $row->{id};
}

# Now that we're done iterating, do all the deletes in one statement
my $placeholders = join ", ", ("?") x @deathrow;
$dbh->do("DELETE FROM MyTable WHERE Id IN ( $placeholders )", undef, @deathrow) if @deathrow; # something like that
Zygoma answered 16/9, 2015 at 4:30

One possible solution would be something along the lines of the following snippet:

my $dbh = DBI->connect("DBI:mysql:database=XXXX;host=localhost", "user", "pass", {'RaiseError' => 1, 'AutoCommit' => 0});

my $query_all = $dbh->prepare("select Id, X, Y, Z from MyTable");

eval {
    my $delete = $dbh->prepare("delete from MyTable where Id = ?");
    $query_all->execute();
    while ( my @res = $query_all->fetchrow_array() ){
        my ($id, $x, $y, $z) = @res;
        # fetch the IDs of any other rows that have the same X, Y, Z values
        foreach my $dup_id (@the_duplicate_ids){
            $delete->execute($dup_id); # safe: nothing is committed until after the loop
            # also delete from another table using $dup_id
        }
    }    
    $dbh->commit();
};

if ($@) {
    print "Transaction rollback: $@";
    $dbh->rollback();
}
Matronage answered 17/2, 2016 at 14:5

© 2022 - 2024 — McMap. All rights reserved.