SQL Server lock escalation issue
I am reading about lock escalation from the MSDN page about SQL Server lock escalation.

It seems the major reason for lock escalation is to reduce the overhead of maintaining many locks (e.g. when many row locks are acquired on a table, they are escalated to a single table-level lock). My question is: maintaining more locks improves concurrency, which is a benefit, so why is it an overhead? In my humble opinion, a lock should be as small as possible so that the database performs better through improved concurrency. Could anyone explain in a simple way why lock escalation is needed, and what the so-called lock overhead is?

Thanks in advance, George

Dara answered 16/5, 2009 at 16:17 Comment(0)
Could anyone explain in a simple way why lock escalation is needed, and what the so-called lock overhead is?

When you update a table and lock a row, you need to record that fact somehow: this row has been updated and locked.

When you update a million rows, you need to do this a million times, and therefore need space to keep a million locks.
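To get a feel for the scale (a rough sketch: the commonly cited figure for SQL Server's lock structure is about 96 bytes per lock, though it varies by version): a million row locks is 1,000,000 × 96 bytes ≈ 96 MB of RAM spent on bookkeeping alone. You can watch the count grow through the sys.dm_tran_locks view:

    -- Count the locks currently held, grouped by granularity and mode.
    -- Run this from another session while a large UPDATE is in flight.
    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    GROUP BY resource_type, request_mode
    ORDER BY lock_count DESC;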

SQL Server keeps its list of locks in memory, while Oracle keeps them in the tablespaces.

This is probably because Oracle is old (older than me), and SQL Server is young compared to Oracle.

Keeping temporary resources (like locks) in permanent storage is not an obvious solution from a designer's point of view. Just one thing to mention: you may need a disk write to perform a SELECT FOR UPDATE.
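For reference, a minimal example of the statement in question (standard Oracle syntax; the table and column names are made up for illustration):

    -- Locks the selected row until the transaction commits or rolls back.
    -- In Oracle the lock is recorded in the data block holding the row,
    -- which is why even this read may end up causing a disk write.
    SELECT id, balance
    FROM accounts
    WHERE id = 42
    FOR UPDATE;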

Oracle's core features were developed in the early '80s, when keeping things in memory was not an option at all. They just had to use disk space somehow.

If disk space was to be used anyway, you had to place a lock somewhere on disk.

And where to keep a lock for a row if not within the row itself?

The developers of SQL Server's locking system, when inventing the design of their RDBMS (then called Sybase), decided to store temporary things (i.e. locks) in temporary storage (i.e. RAM).

But Oracle's design is always balanced: if you have 1,000,000 rows in your database, then you have storage space for 1,000,000 locks; if you have a billion rows, you can store a billion locks, and so on.

SQL Server's design is flawed in this sense, because your RAM and HDD space may be unbalanced. You may easily have 16 MB of RAM and several terabytes of disk space, and your memory just cannot hold all the locks.

That's why, when the lock count reaches a certain limit, SQL Server decides to escalate the locks: instead of keeping locks on, say, 10 individual rows in a data page (which requires 10 records), it locks the whole data page (which requires 1 record).
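Here is a sketch of how you could watch this happen (the table name is hypothetical, and the escalation threshold -- around 5,000 locks taken by a single statement -- is an internal heuristic that may differ between versions):

    BEGIN TRANSACTION;

    -- Update enough rows to cross the escalation threshold.
    UPDATE big_table SET flag = 1 WHERE id <= 100000;

    -- Before escalation you would see thousands of KEY/PAGE locks here;
    -- after escalation, a single OBJECT (table-level) X lock.
    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID
    GROUP BY resource_type, request_mode;

    ROLLBACK;

On SQL Server 2008 and later you can also control this per table with ALTER TABLE big_table SET (LOCK_ESCALATION = DISABLE); on 2005, trace flags 1211 and 1224 are the usual knobs.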

Oracle, on the other hand, when updating a row, just writes the lock right into the data page.

That's why Oracle's locks are row-level.

Oracle doesn't "manage" locks in the common sense of the word: you can't, say, get a list of locked pages in Oracle.

When a transaction needs to update a row, it just goes to the row and sees if it's locked.

If it is, it looks at which transaction holds the lock (this information is contained within the lock descriptor in the data page) and adds itself to that transaction's notification queue: when the locking transaction dies, the original one gets notified and locks the data.
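You can still see who waits for whom at the session level (a sketch against Oracle's v$session view, which exposes a blocking_session column in recent versions):

    -- For each waiting session, show the session that holds the lock it wants.
    SELECT sid, blocking_session, event
    FROM v$session
    WHERE blocking_session IS NOT NULL;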

From the concurrency point of view, lock escalation is purely a poor man's solution: it adds nothing to concurrency. You can, say, end up holding a lock on a row you didn't even touch.

From the performance's point of view, doing things in memory is, of course, faster than doing them on the disk.

But since Oracle caches the data blocks, and the actual operations described above are performed in memory anyway, the performance is the same or close to it.

Galata answered 16/5, 2009 at 16:55 Comment(7)
I still think more locks will improve concurrency, which will improve overall system performance. It is worth using more memory and more complex logic in code if the final overall system concurrency and performance are better. I would appreciate it if you could show a sample or more description of why you think managing more locks is overhead. Maybe you mean that overall system performance is worse with more locks? More description is appreciated.Dara
Thanks for your comments! So is the point that using more locks will always improve concurrency wrong? Or is the point that using more locks will always improve performance wrong?Dara
Thanks for your updated comments, Quassnoi! I am interested in your comparison between SQL Server and Oracle. You mentioned that SQL Server implements locks as a linked list in memory and Oracle uses the data page. I think their performance should be very similar -- both rely on OS paging to manage memory/disk swapping automatically. Why do you say Oracle is superior to SQL Server? Any more description? :-)Dara
I don't say Oracle is superior to SQL Server. I say that Oracle is more mature than SQL Server. Most of its core features were implemented in the '80s; that's why they rely heavily on disk space instead of memory and, since disk space is slow, they are licked clean to achieve maximal efficiency, which still pays back now in the 2000s. SQL Server, on the other hand, carries much less legacy burden, and some things are implemented more efficiently. Procedural language, for instance, is much, much faster in SQL Server.Galata
Thanks for your great comments again! I want to confirm with you that Oracle uses no lock escalation because it uses the data page to manage row-level locks, which is more efficient to manage than a pure memory-based data structure (e.g. the linked list SQL Server is using)?Dara
I, Quassnoi, hereby confirm that Oracle uses no lock escalation because it uses data pages to manage row-level locking. I don't say it's more efficient in terms of speed than using pure memory (because it is not), but I definitely say that this behavior is, first, more concurrency friendly and, second, more predictable: you don't lock rows you didn't intend to lock, and within your transaction you can always say which rows were locked by you and which were not.Galata
By the way: concurrency and performance generally are antagonists. In MySQL, transactionless MyISAM is faster than transactional InnoDB.Galata
If the SQL Server optimiser estimates/decides that a query will 'visit' all the rows in a certain range, it is more efficient to hold a single lock over that range than to negotiate many locks (each lock has to be tested for type). This is in addition to consuming fewer lock resources (a system-wide resource).

If you have a well-designed schema, and indexes appropriate to your query workload that are regularly maintained, you should not have to worry about the escalation that occurs. In many instances, blocking table locks can be eliminated by appropriate covering indexes.

UPDATE: A covering index for a query means that a lookup into the clustered index will not need to be performed, and this reduces the chance of blocking inserts into the table.
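For example (hypothetical table and column names), if a query reads only a couple of columns, an index that covers them lets the query skip the clustered index entirely:

    -- A query such as:
    --   SELECT customer_id, order_total FROM orders WHERE order_date = '20090516';
    -- is fully covered by this index, so it never touches the clustered index
    -- and does not contend with inserts and updates happening there.
    CREATE NONCLUSTERED INDEX ix_orders_date_covering
    ON orders (order_date)
    INCLUDE (customer_id, order_total);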

Convivial answered 16/5, 2009 at 16:47 Comment(4)
Could you add more comments about what you mean by "In many instances, blocking table locks can be eliminated by the appropriate covering indexes", please? If you mean that escalation to a table-level lock is eliminated in this situation, then my confusion is why escalation to a table-level lock is eliminated in this scenario.Dara
Everything has a cost, including locks. If too many locks are being held, it becomes too expensive; it kills the server. Yes, you get less concurrency, but concurrency is not the target; higher performance/throughput is, and that is limited by hardware. Concurrency that only looks good logically is meaningless if the hardware is already overloaded.Acarpous
@Dennis Cheung: that makes very little sense. You seem to be stating the obvious?Convivial
@Dennis Cheung, I cannot imagine a case where concurrency is better (when using more locks) but system performance is worse. I think concurrency is performance. Any comments or ideas?Dara
The lock overhead means that managing one table lock is better, performance-wise, than managing a lot of row locks. Since each lock takes some memory, a lot of row locks can consume much more memory than one table lock. So lock escalation goes from row -> page -> table lock.
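As a sketch (hypothetical table name), you can nudge SQL Server toward fine-grained locks with a table hint, which is exactly the trade-off described above and in the comments below:

    -- Ask SQL Server to start at row granularity. Note that escalation can
    -- still kick in; trace flags 1211/1224 (or, on SQL Server 2008+,
    -- ALTER TABLE ... SET (LOCK_ESCALATION = DISABLE)) control it directly.
    -- Every row lock taken here costs memory -- the overhead in question.
    UPDATE big_table WITH (ROWLOCK)
    SET flag = 1
    WHERE id <= 100000;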

Stopover answered 16/5, 2009 at 16:23 Comment(4)
Do you mean that managing more locks will impact performance, which will indirectly reduce the benefit of concurrency? Or do you mean that managing more locks will decrease concurrency directly?Dara
BTW: I still think more locks will improve concurrency, which will improve overall system performance. It is worth using more memory and more complex logic in code if the final overall system concurrency and performance are better. I would appreciate it if you could show a sample or more description of why you think managing more locks is overhead. Maybe you mean that overall system performance is worse with more locks?Dara
Yes, more locks will definitely impact performance. When to switch from multiple row locks to a single page or table lock depends entirely on an internal threshold in SQL Server. You can force it to use only row locks, but you will see that this can lead to performance degradation.Stopover
I am still confused about why more locks degrade performance. More locks means more concurrency, which to me is the same as better performance. If the server has enough memory to hold more locks, why do we reduce the number of locks and sacrifice concurrency? Sorry if I'm slow; I'd appreciate it if you could go into more detail with sample scenarios.Dara
The definition of "efficient" is complex. Sometimes it is more efficient to optimize for concurrency, when lots of processes can do their thing without collisions. Sometimes it is more efficient to take a temporary concurrency hit to get a single process done faster. The escalated lock will keep other processes out so THIS process can get its job done and get out of the way.

Calorie answered 16/5, 2009 at 16:54 Comment(1)
I'd appreciate it if you could show some scenarios where "keeping other processes out" is good for overall system performance. I can understand that it is good for the process that is not blocked, but I am not sure about the overall performance.Dara
For specific info on how locks are maintained, see chapter 8 of Microsoft SQL Server 2005: The Storage Engine (I'm not affiliated; this is just the first internals info I came across). If you have a Books24x7 account, it's on there. It shows that, on a machine with more than 16 GB of memory, there are 2^25 (33,554,432) slots in the lock hash table, with an upper limit of 2^31 slots.
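If you want to see how much memory the lock manager is actually consuming, one option is the memory-clerk DMV (a sketch; the pages_kb column is the SQL Server 2012+ name, while older versions split it into single_pages_kb and multi_pages_kb):

    -- Memory consumed by the lock manager, in KB.
    SELECT type, SUM(pages_kb) AS lock_memory_kb
    FROM sys.dm_os_memory_clerks
    WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
    GROUP BY type;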

For a given application you may very well find total throughput to be higher using only fine-grained locks. As you can probably guess, it all depends on how the overhead of lock management compares with the cost of potentially excessive locking.

Candace answered 16/5, 2009 at 18:14 Comment(1)
You mean that when more locks are used, more memory is consumed and more lock-management code is executed, which degrades system performance even with more concurrency? If yes, my confusion is this: lock management is a pure memory operation, while concurrency management deals with thread context switches. A thread context switch is much more expensive than memory management. I cannot imagine lock escalation improving performance -- the time saved by fewer memory-management operations is not of the same magnitude as thread context-switch operations. Any more insights or comments? Sorry for my naive ideas. :-)Dara
