Retry on deadlock for MySQL / SQLAlchemy

I have searched for quite some time now and can't find a solution to my problem. We are using SQLAlchemy with MySQL in our project, and we regularly run into the dreaded error:

1213, 'Deadlock found when trying to get lock; try restarting transaction'.

We would like to try to restart the transaction at most three times in this case.

I have started to write a decorator that does this, but I don't know how to save the session state before the failure so the same transaction can be retried (SQLAlchemy requires a rollback whenever an exception is raised).

My work so far:

from functools import wraps
import logging

from sqlalchemy.exc import OperationalError

logger = logging.getLogger(__name__)


def retry_on_deadlock_decorator(func):
    lock_messages_error = ['Deadlock found', 'Lock wait timeout exceeded']

    @wraps(func)
    def wrapper(*args, **kwargs):
        attempt_count = 0
        while attempt_count < settings.MAXIMUM_RETRY_ON_DEADLOCK:
            try:
                return func(*args, **kwargs)
            except OperationalError as e:
                # e.message was removed in Python 3; str(e) works everywhere
                if any(msg in str(e) for msg in lock_messages_error) \
                        and attempt_count + 1 < settings.MAXIMUM_RETRY_ON_DEADLOCK:
                    logger.error('Deadlock detected. Retrying the transaction. '
                                 'Attempt count: %s', attempt_count + 1)
                else:
                    raise
            attempt_count += 1
    return wrapper
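Because the decorated function is re-entered from the top on every retry, the state-saving problem disappears if the whole unit of work lives inside that function. A minimal runnable sketch of the same loop, using a stand-in exception and a module-level retry limit in place of the real OperationalError and settings value:

```python
import functools
import logging

logger = logging.getLogger(__name__)

MAX_RETRY_ON_DEADLOCK = 3  # stand-in for settings.MAXIMUM_RETRY_ON_DEADLOCK


class FakeDeadlockError(Exception):
    """Stand-in for sqlalchemy.exc.OperationalError in this sketch."""


def retry_on_deadlock(func):
    lock_messages = ('Deadlock found', 'Lock wait timeout exceeded')

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(1, MAX_RETRY_ON_DEADLOCK + 1):
            try:
                return func(*args, **kwargs)
            except FakeDeadlockError as exc:
                # Re-raise on the last attempt or on non-deadlock errors
                if attempt == MAX_RETRY_ON_DEADLOCK or \
                        not any(msg in str(exc) for msg in lock_messages):
                    raise
                logger.warning('Deadlock detected, retrying (attempt %d)', attempt)
    return wrapper


calls = {'n': 0}


@retry_on_deadlock
def transfer_funds():
    # The whole unit of work is re-run from scratch on each retry,
    # so there is no partial session state to restore.
    calls['n'] += 1
    if calls['n'] < 3:
        raise FakeDeadlockError('Deadlock found when trying to get lock')
    return 'committed'


print(transfer_funds())  # prints "committed" on the third attempt
```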
Granulate answered 16/5, 2014 at 14:32 Comment(1)
Did I help you? Or did you find another solution? Please share any result. – Lori

You can't really do that with the Session from the outside. Session would have to support this internally. It would involve saving a lot of private state, so this may not be worth your time.

I completely ditched most of the ORM in favour of the lower-level SQLAlchemy Core interface. Using that (or any DB-API interface), you can trivially apply the retry_on_deadlock_decorator from the question above to build a retry-aware db.execute wrapper:

 @retry_on_deadlock_decorator
 def deadlock_safe_execute(db, stmt, *args, **kw):
     return db.execute(stmt, *args, **kw)

And instead of

 db.execute("UPDATE users SET active=0")

you do

 deadlock_safe_execute(db, "UPDATE users SET active=0")

which will retry automatically if a deadlock happens.

Asberry answered 4/10, 2016 at 6:33 Comment(2)
pi, what happens if your retry fails with another deadlock? Are two attempts really enough, or should there be some kind of exponential back-off? – Abamp
Judging from experience, two or three attempts should be enough, though it depends heavily on your workload. Three retries were good enough to reduce the noise in the error log dramatically. I didn't see the need for exponential back-off in my code; since it adds complexity, I would only add it if absolutely necessary. – Asberry
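If you do want back-off, it bolts onto the same retry loop with a sleep that doubles before each retry. A hypothetical sketch (the DeadlockError class and the function names are stand-ins, not real SQLAlchemy API):

```python
import time


class DeadlockError(Exception):
    """Stand-in for the deadlock OperationalError raised by the MySQL driver."""


def execute_with_backoff(execute, max_attempts=3, base_delay=0.05):
    """Call execute(), retrying on deadlock with exponentially growing sleeps."""
    for attempt in range(1, max_attempts + 1):
        try:
            return execute()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # out of attempts, give up
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.05s, then 0.1s, ...


# Simulate a statement that deadlocks twice before succeeding.
failures = {'left': 2}


def flaky_update():
    if failures['left'] > 0:
        failures['left'] -= 1
        raise DeadlockError('Deadlock found when trying to get lock')
    return 'ok'


print(execute_with_backoff(flaky_update))  # prints "ok"
```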

Did you use code like this?

    for attempt in range(3):
        try:
            perform_table_transaction()   # your unit of work
            break                         # success, stop retrying
        except OperationalError:
            session.rollback()            # required after the failure
            time.sleep(delay)             # then try the transaction again

The only way to truly handle deadlocks is to write your code to expect them. That generally isn't difficult if your database code is well structured: put a try/except around the query execution logic and check for a deadlock when an error occurs. If you catch one, the normal response is simply to execute the failed query again.
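A runnable version of that rollback-delay-retry pattern, with the table transaction and the session replaced by hypothetical stand-ins:

```python
import time


class OperationalError(Exception):
    """Stand-in for sqlalchemy.exc.OperationalError in this sketch."""


def run_with_retry(session, do_work, attempts=3, delay=0.1):
    """Run do_work(session), rolling back and retrying when it deadlocks."""
    for attempt in range(attempts):
        try:
            do_work(session)       # perform the table transaction
            return
        except OperationalError:
            session.rollback()     # SQLAlchemy requires this after the error
            if attempt == attempts - 1:
                raise              # out of attempts, re-raise to the caller
            time.sleep(delay)      # then the loop tries the transaction again


class FakeSession:
    """Hypothetical session object that just records rollbacks."""
    def __init__(self):
        self.rollbacks = 0

    def rollback(self):
        self.rollbacks += 1


state = {'fails': 1}


def work(session):
    if state['fails'] > 0:
        state['fails'] -= 1
        raise OperationalError('Deadlock found when trying to get lock')


session = FakeSession()
run_with_retry(session, work)
print(session.rollbacks)  # prints 1: one rollback before the successful retry
```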


Lori answered 24/2, 2015 at 10:34 Comment(0)
