I'm running into a deadlock and I'm trying to understand the reasoning behind it.
The question can be reduced to this:
table:
create table testdl (id int auto_increment, c int, primary key (id), key idx_c (c));
The isolation level is REPEATABLE READ.
(Tx1): begin; delete from testdl where c = 1000; -- nothing is deleted because the table is empty
(Tx2): begin; insert into testdl (c) values (?);
Whatever the value in Tx2 is, it hangs. So this basically means that when delete from testdl where c = 1000
fails to find a match, Tx1 holds a gap lock over the whole range (-∞, +∞), right?
So my question is: is this by design? What's the point of this if it is?
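For what it's worth, on MySQL 8.0 the locks Tx1 is holding can be inspected directly via performance_schema.data_locks (a sketch; run it from a third session while Tx1 is still open and Tx2 is hanging):

```sql
-- Third session, MySQL 8.0+: list InnoDB locks held on testdl.
SELECT engine_transaction_id, index_name, lock_type, lock_mode, lock_data
FROM performance_schema.data_locks
WHERE object_name = 'testdl';
-- On an empty table, the unmatched DELETE leaves a gap lock on the
-- "supremum pseudo-record" of the index, i.e. the gap covering the
-- whole range (-∞, +∞), which is why any INSERT from Tx2 blocks.
```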
Update:
Say we already have a record in testdl:
+----+------+
| id | c |
+----+------+
| 1 | 1000 |
+----+------+
Case 1:
(Tx1): select * from testdl where c = 500 for update; -- no row with c = 500 exists
(Tx2): insert into testdl (c) values (?);
In this case, any value >= 1000 can be inserted, while anything below 1000 blocks, so Tx1 must be locking the gap (-∞, 1000).
Again, is locking (-∞, 1000) necessary? What's the reasoning behind this?
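To make the boundary of the gap concrete, here is a sketch of what I observe (assuming testdl contains only the row (id=1, c=1000) and Tx1 has run its FOR UPDATE select without committing):

```sql
-- (Tx2), while Tx1 holds "select * from testdl where c = 500 for update":
insert into testdl (c) values (1500); -- returns immediately: 1500 is outside the locked gap
insert into testdl (c) values (999);  -- blocks until Tx1 commits: 999 falls inside (-∞, 1000)
```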
Comments:

delete from testdl where c = 1000 doesn't find a match. – Missi

If select * from testdl where c = 500 for update fails to match a record, I can't insert a record with c = 100. After all, I selected c = 500 for update, not c < 500. What would be wrong if inserting a record with c = 100 were allowed? Is there some reason for it? Or is it just an implementation thing, and nothing bad would actually happen? – Missi

If I'm doing select * from testdl where c > 500 for update, it makes sense to me that all concurrent writes with c > 500 should be forbidden until I'm done. But if I'm doing select * from testdl where c = 500 for update, what possible harm can it be if another transaction inserts a record with c = 100? – Missi