How to avoid deadlocks in Postgres?

I run multiple servers, and each of them runs multi-update statements such as these:

UPDATE user SET ... WHERE user_id = 2;
UPDATE user SET ... WHERE user_id = 1;
UPDATE user SET ... WHERE user_id = 3;

If there is a concurrent update like:

UPDATE user SET ... WHERE user_id = 1;
UPDATE user SET ... WHERE user_id = 2;

then I run into the error deadlock detected.

Right now, my fix is to order the update statements on the client side and always guarantee that the IDs are in the same order, i.e. I always sort the statements on the client side in ascending order by user_id.
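
To illustrate, every transaction now issues its updates in the same ascending order, something like this (the SET clause is just a placeholder, and user is quoted because it is a reserved word in Postgres):

BEGIN;
UPDATE "user" SET updated_at = now() WHERE user_id = 1;
UPDATE "user" SET updated_at = now() WHERE user_id = 2;
UPDATE "user" SET updated_at = now() WHERE user_id = 3;
COMMIT;
-- A concurrent transaction also hits user_id = 1 first, so it blocks there
-- until this transaction commits, instead of each transaction holding a row
-- lock the other one needs.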

This seems to have fixed the issue so far, but I still have questions:

  1. Is this (order the statements) a good solution for fixing deadlock errors?
  2. If I start doing multi-table updates, do I have to order the statements across tables as well?
Loupe answered 26/8, 2014 at 13:58 Comment(4)
re 1): Updating the rows in the same order is the usual approach to this problem (at least it's the only one I am aware of). – Trimurti
Are you sure you're getting deadlocks updating single rows in a single table? – Moriah
I am updating multiple rows in each concurrent transaction. – Loupe
The best solution depends on why you are doing it like this. Do your current transaction boundaries constitute natural transactions, or are you implementing them that way for unnatural reasons? What is behind the SET ...? Is it bind variables, or constants, or expressions (increments of a count column?), or subselects? – Liebermann

Ordering the statements at the application level is a good solution in that it avoids database overhead. The statements need to keep a consistent order per table, and if you start doing multi-table updates, the tables themselves should also be updated in a consistent order across transactions. If this is easily workable in the application, it's worthwhile.
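
If you would rather enforce the ordering inside the database than in client code, a common pattern is to lock the target rows in a fixed order up front with SELECT ... FOR UPDATE before updating them. A minimal sketch, assuming the user table from the question (quoted because user is a reserved word; the updated_at column is illustrative):

BEGIN;
-- Acquire the row locks in ascending user_id order, regardless of the
-- order in which the application generated the updates:
SELECT user_id FROM "user" WHERE user_id IN (1, 2, 3) ORDER BY user_id FOR UPDATE;
-- The locks are already held, so the UPDATEs can run in any order:
UPDATE "user" SET updated_at = now() WHERE user_id = 2;
UPDATE "user" SET updated_at = now() WHERE user_id = 1;
UPDATE "user" SET updated_at = now() WHERE user_id = 3;
COMMIT;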

There is also a solution at the database level: serializable isolation.

The Serializable isolation level provides the strictest transaction isolation. This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently. However, like the Repeatable Read level, applications using this level must be prepared to retry transactions due to serialization failures. In fact, this isolation level works exactly the same as Repeatable Read except that it monitors for conditions which could make execution of a concurrent set of serializable transactions behave in a manner inconsistent with all possible serial (one at a time) executions of those transactions.

You can set this isolation level when you start your transactions. This does add some database overhead, but more importantly the application must be ready to catch serialization failures and retry the transaction.
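
A minimal sketch of such a transaction, reusing the illustrative table and column from above (the retry loop itself lives in application code):

BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE "user" SET updated_at = now() WHERE user_id = 1;
UPDATE "user" SET updated_at = now() WHERE user_id = 2;
COMMIT;
-- On conflict, Postgres aborts the transaction with SQLSTATE 40001
-- (serialization_failure); catch that, and 40P01 (deadlock_detected),
-- in the application and re-run the whole transaction.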

Pocosin answered 8/6, 2017 at 15:11 Comment(0)
