Zero SQL deadlock by design - any coding patterns?
I am encountering very infrequent yet annoying SQL deadlocks in a .NET 2.0 web app running on top of MS SQL Server 2005. In the past, we have dealt with SQL deadlocks in a very empirical way: basically tweaking the queries until they work.

Yet I find this approach very unsatisfactory: it is time-consuming and unreliable. I would much prefer to follow deterministic query patterns that ensure, by design, that no SQL deadlock will ever be encountered.

For example, in C# multithreaded programming, a simple design rule such as acquiring locks in lexicographical order ensures that no deadlock will ever happen.

Are there any SQL coding patterns guaranteed to be deadlock-proof?

Aircraftman answered 21/9, 2008 at 18:51 Comment(1)
This is by far one of the most frustrating things I have ever encountered in my programming career. I've spent 5+ years breaking my teeth trying to get databases to behave like imperative programming languages. +1 for the excellent question. I just wish there was a more satisfactory answer.Streetlight
21

Writing deadlock-proof code is really hard. Even when you access tables in the same order you may still get deadlocks [1]. I wrote a post on my blog that walks through some approaches for avoiding and resolving deadlock situations.

If you want to ensure that two statements/transactions will never deadlock, you may be able to achieve it by observing which locks each statement acquires using the sp_lock system stored procedure. To do this you have to either be very fast or use an open transaction with a HOLDLOCK hint, as in the sketch below.
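A minimal sketch of that inspection (the dbo.Orders table and CustomerId column are hypothetical):

    -- Hold the locks a statement takes, list them, then roll back.
    DECLARE @spid INT;
    SET @spid = @@SPID;

    BEGIN TRANSACTION;

    SELECT * FROM dbo.Orders WITH (HOLDLOCK)   -- keep shared locks until commit/rollback
    WHERE CustomerId = 42;

    EXEC sp_lock @spid;                        -- show the locks held by this session

    ROLLBACK TRANSACTION;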


Notes:

  1. Any SELECT statement that needs more than one lock at once can deadlock against an intelligently designed transaction which grabs the locks in reverse order.
Dye answered 21/9, 2008 at 21:41 Comment(2)
Writing deadlock-proof code is really hard: Don't you mean to say impossible since the execution plan of a database might change over its lifetime?Streetlight
@Sam, is it possible to manage our own locking manually - to tell SQL Server/MySQL that we are going to handle our own locks and that it should stop grabbing them automatically?Choroid
13

Zero deadlocks is basically an incredibly costly problem in the general case, because you must know all of the tables/objects that you're going to read and modify for every running transaction (this includes SELECTs). The general philosophy is called ordered strict two-phase locking, not to be confused with two-phase commit (http://en.wikipedia.org/wiki/Two_phase_locking); note that even plain 2PL does not guarantee freedom from deadlocks.

Very few DBMSs actually implement strict 2PL because of the massive performance hit it causes (there are no free lunches): all of your transactions end up waiting around for even simple SELECT statements to execute.

Anyway, if this is something you're really interested in, take a look at SET TRANSACTION ISOLATION LEVEL in SQL Server; you can tweak it as necessary (http://en.wikipedia.org/wiki/Isolation_level), as sketched below.
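A minimal sketch of the relevant commands (SQL Server 2005 syntax; MyAppDb is a placeholder database name):

    -- Let READ COMMITTED readers use row versions instead of shared locks:
    ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Or enable full snapshot isolation and opt in per session:
    ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;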

For more info, see wikipedia on Serializability: http://en.wikipedia.org/wiki/Serializability

That said, a great analogy is source code revision control: check in early and often. Keep your transactions small (in number of SQL statements and rows modified) and quick (wall clock time helps avoid collisions with others). It may be nice and tidy to do a LOT of things in a single transaction, and in general I agree with that philosophy, but if you're experiencing a lot of deadlocks you may want to break the transaction up into smaller ones and check their status in the application as you move along: TRAN 1 - OK Y/N? If Y, send TRAN 2 - OK Y/N? And so on.
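A sketch of that "TRAN 1 then TRAN 2" idea (dbo.Accounts and dbo.AuditLog are hypothetical tables, used only for illustration):

    DECLARE @err INT;

    BEGIN TRANSACTION;                        -- TRAN 1: small and quick
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    SET @err = @@ERROR;
    IF @err = 0
        COMMIT TRANSACTION
    ELSE
        ROLLBACK TRANSACTION;

    IF @err = 0                               -- only send TRAN 2 if TRAN 1 was OK
    BEGIN
        BEGIN TRANSACTION;                    -- TRAN 2
        INSERT INTO dbo.AuditLog (AccountId, Delta) VALUES (1, -100);
        IF @@ERROR = 0
            COMMIT TRANSACTION
        ELSE
            ROLLBACK TRANSACTION;
    END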

As an aside, in my many years of being a DBA and also a developer (of multiuser DB apps measuring thousands of concurrent users), I have never found deadlocks to be such a massive problem that I needed special cognizance of them (or to change isolation levels willy-nilly, etc.).

Glacial answered 22/9, 2008 at 2:34 Comment(1)
However, this seems to be running away from the problem rather than actually solving it.Choroid
3

There is no magic general-purpose solution to this problem that works in practice. You can push concurrency up into the application, but that can be very complex, especially if you need to coordinate with other programs running in separate memory spaces.

General answers to reduce deadlock opportunities:

  1. Basic query optimization (proper index use), hotspot-avoidant design, holding transactions for the shortest possible time, etc.

  2. When possible set reasonable query timeouts so that if a deadlock should occur it is self-clearing after the timeout period expires.

  3. Deadlocks in MSSQL are often due to its default read concurrency model, so it's very important not to depend on it - assume Oracle-style MVCC in all designs. Use snapshot isolation or, if possible, the READ UNCOMMITTED isolation level (see the sketch after this list).
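A sketch of items 2 and 3 (dbo.Orders is a placeholder table; LOCK_TIMEOUT is one way to impose a wait bound inside SQL Server):

    SET LOCK_TIMEOUT 5000;    -- stop waiting on blocked locks after 5 s (error 1222)

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;   -- no shared locks on reads
    SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = 42;
    -- (per-query alternative: FROM dbo.Orders WITH (NOLOCK))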

Blindworm answered 8/2, 2009 at 5:36 Comment(3)
about your 3rd point - just received the same feedback from a consultant, claiming that deadlocks can occur between update and select transactions when using "read committed", due to SQL Server's lock escalation. we're planning to move to read uncommitted and see if the deadlocks disappear. haven't seen this mentioned very often though.Carlottacarlovingian
@YonatanKarni, Are you sure the consultant isn't wrong?Choroid
@Choroid this was long ago, however moving to read uncommitted did seem to solve those issues - not to suggest it won't introduce new issues if not done carefully...Carlottacarlovingian
2

I believe the following read/write pattern is deadlock-proof given some constraints:

Constraints:

  1. One table
  2. An index or PK is used for reads and writes, so the engine does not resort to table locks.
  3. A batch of records can be read using a single SQL WHERE clause.
  4. Using SQL Server terminology.

Write Cycle:

  1. All writes within a single "Read Committed" transaction.
  2. The first update in the transaction is to a specific, always-present record within each update group.
  3. Multiple records may then be written in any order. (They are "protected" by the write to the first record).

Read Cycle:

  1. The default read committed transaction level
  2. No transaction
  3. Read records as a single select statement.

Benefits:

  1. Secondary write cycles are blocked at the write of the first record until the first write transaction completes entirely.
  2. Reads are blocked/queued/executed atomically between the write commits.
  3. Achieves transaction-level consistency without resorting to "Serializable".

I need this to work too, so please comment/correct! A sketch of the write cycle follows.
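A sketch of the write cycle above, assuming a hypothetical dbo.WorkItems table in which each GroupId has one always-present anchor row (IsAnchor = 1):

    DECLARE @g INT;
    SET @g = 7;

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    BEGIN TRANSACTION;

    -- Step 2: writers serialize on the group's anchor row first.
    UPDATE dbo.WorkItems SET TouchedAt = GETDATE()
    WHERE GroupId = @g AND IsAnchor = 1;

    -- Step 3: the remaining rows of the group may be written in any order;
    -- they are "protected" by the exclusive lock on the anchor row.
    UPDATE dbo.WorkItems SET Payload = 'new value'
    WHERE GroupId = @g AND IsAnchor = 0;

    COMMIT TRANSACTION;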

Turn answered 14/11, 2011 at 19:57 Comment(0)
1

As you said, always accessing tables in the same order is a very good way to avoid deadlocks. Furthermore, shorten your transactions as much as possible.

Another cool trick is to combine two SQL statements into one whenever you can, since single statements are always transactional. For example, use "UPDATE ... SELECT" or "INSERT ... SELECT", and use "@@ERROR" and "@@ROWCOUNT" instead of "SELECT COUNT" or "IF (EXISTS ...)", as sketched below.
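A sketch of collapsing check-then-act into one statement (dbo.Inventory and its columns are hypothetical):

    DECLARE @ProductId INT;
    SET @ProductId = 42;

    -- Instead of IF EXISTS(...) followed by a separate UPDATE:
    UPDATE dbo.Inventory
    SET Quantity = Quantity - 1
    WHERE ProductId = @ProductId AND Quantity > 0;   -- test and write atomically

    IF @@ROWCOUNT = 0
        RAISERROR('Out of stock or unknown product.', 16, 1);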

Lastly, make sure that your calling code can handle deadlocks by retrying the query a configurable number of times. Sometimes it just happens; it's normal behaviour, and your application must be able to deal with it.
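The retry normally lives in the C# calling code; here is the same idea sketched in T-SQL (SQL Server 2005 TRY/CATCH; dbo.Counters is a made-up table):

    DECLARE @tries INT, @msg NVARCHAR(2048);
    SET @tries = 0;
    WHILE @tries < 3
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.Counters SET Value = Value + 1 WHERE Name = 'hits';
            COMMIT TRANSACTION;
            BREAK;                        -- success: stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() <> 1205     -- 1205 = chosen as deadlock victim
            BEGIN
                SET @msg = ERROR_MESSAGE();
                RAISERROR(@msg, 16, 1);   -- not a deadlock: surface the error
                BREAK;
            END
            SET @tries = @tries + 1;      -- deadlock: try again
        END CATCH
    END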

Until answered 21/9, 2008 at 18:57 Comment(0)
1

If you have enough design control over your app, restrict your updates/inserts to specific stored procedures and remove update/insert privileges from the database roles used by the app (only allow updates through those stored procedures).

Isolate your database connections to a specific class in your app (every connection must come from this class) and specify that "query only" connections set the isolation level to "dirty read", the equivalent of a (NOLOCK) hint on every join.

That way you isolate the activities that can cause locks (to specific stored procedures) and take "simple reads" out of the "locking loop".

Instrument answered 22/9, 2008 at 2:49 Comment(0)
1

In addition to a consistent sequence of lock acquisition, another path is the explicit use of locking and isolation hints, to avoid wasting time and resources on locks acquired unintentionally, such as the shared and intent-shared locks taken during a read that precedes a write. A sketch follows.
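For example (a sketch assuming a hypothetical dbo.Inventory table): two sessions that SELECT and then UPDATE the same row can each hold a shared lock and deadlock when both try to upgrade it; an UPDLOCK hint takes the stronger lock up front.

    DECLARE @ProductId INT, @qty INT;
    SET @ProductId = 42;

    BEGIN TRANSACTION;

    SELECT @qty = Quantity
    FROM dbo.Inventory WITH (UPDLOCK, ROWLOCK)   -- update lock taken on the read
    WHERE ProductId = @ProductId;

    UPDATE dbo.Inventory SET Quantity = @qty - 1
    WHERE ProductId = @ProductId;

    COMMIT TRANSACTION;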

Km answered 22/9, 2008 at 5:5 Comment(0)
1

Not a direct answer to your question, but food for thought:

http://en.wikipedia.org/wiki/Dining_philosophers_problem

The "Dining philosophers problem" is an old thought experiment for examining the deadlock problem. Reading about it might help you find a solution to your particular circumstance.

Sucker answered 22/9, 2008 at 5:55 Comment(0)
1

Something that no one has mentioned (surprisingly) is that, where SQL Server is concerned, many locking problems can be eliminated with the right set of covering indexes for a DB's query workload. Why? Because covering indexes can greatly reduce the number of bookmark lookups into a table's clustered index (assuming it's not a heap), thereby reducing contention and locking. A sketch follows.
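A sketch of such an index (SQL Server 2005 INCLUDE syntax; the table and columns are hypothetical):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);

    -- A query touching only these columns is answered entirely from the index,
    -- with no bookmark lookup into the clustered index:
    SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerId = 42;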

Excide answered 3/11, 2008 at 1:23 Comment(0)
0

The quick answer is no: there is no guaranteed technique.

I don't see how you can make any application deadlock-proof as a general design principle if it has any non-trivial throughput. If you preemptively lock all the resources you could potentially need in a process, in the same order, even when you don't end up needing them, you trade deadlocks for the more costly problem of every process waiting to acquire the first lock it needs, and your availability suffers. And as the number of resources in your system grows, even trivial processes have to lock them all in the same order to prevent deadlocks.

The best way to solve SQL deadlock problems, like most performance and availability problems, is to look at the workload in the profiler and understand its behavior.

Amenra answered 22/9, 2008 at 3:43 Comment(0)
