SELECT ... FOR UPDATE SKIP LOCKED in REPEATABLE READ transactions

I have the following statement in my PostgreSQL 10.5 database, which I execute in a repeatable read transaction:

delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
  )
  returning
    task.task_id,
    task.created_at

Unfortunately, when I run it, I sometimes get:

[67] ERROR:  could not serialize access due to concurrent update
[67] STATEMENT:  delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit $1
    for update skip locked
  )
  returning
    task.task_id,
    task.created_at

which means the transaction rolled back because some other transaction modified the record in the meantime. (I think?)

I don't quite understand this. How could a different transaction modify a record that was selected with FOR UPDATE SKIP LOCKED and then deleted?

Gerfalcon answered 13/11, 2018 at 19:58

This quote from the manual discusses your case exactly:

UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the transaction start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the repeatable read transaction will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the repeatable read transaction can proceed with updating the originally found row. But if the first updater commits (and actually updated or deleted the row, not just locked it) then the repeatable read transaction will be rolled back with the message

ERROR:  could not serialize access due to concurrent update

In other words, your transaction was unable to lock the row in the first place, because a concurrent write got there first. SKIP LOCKED cannot save you completely here: once the competing transaction commits, its lock is released, so there is no lock left to skip, yet the row has been changed since your transaction's snapshot was taken, and that still triggers a serialization failure.
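
To make the failure mode concrete, here is one way the race can play out across two psql sessions (the session labels and the task_id value are hypothetical):

-- Session A
begin isolation level repeatable read;
select count(*) from task;  -- first query: A's snapshot is taken here

-- Session B, meanwhile
delete from task where task_id = 42;  -- 42 happens to be the oldest task
commit;                               -- B's row lock is released on commit

-- Session A again
delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
  )
  returning task.task_id, task.created_at;
-- ERROR:  could not serialize access due to concurrent update
-- A's snapshot still sees task 42 as the oldest row. The row is no longer
-- locked, so SKIP LOCKED has nothing to skip, but it was deleted after A's
-- snapshot was taken, so A must fail rather than proceed.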

The same statement should work just fine under the default READ COMMITTED transaction isolation.
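As an illustration, a minimal sketch of the same statement in its own READ COMMITTED transaction: each statement then gets a fresh snapshot taken at statement start, so a row that a concurrent transaction has already deleted and committed is simply not found, and the next oldest row is picked instead of raising a serialization error. (A caller may still want to retry when no row comes back, e.g. when another worker claimed the last visible row first.)

begin;  -- READ COMMITTED, the default isolation level
delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
  )
  returning task.task_id, task.created_at;
commit;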

Plethoric answered 13/11, 2018 at 20:47 Comment(2)
Thanks for your reply! I see: so before the lock could be taken, but after the transaction started, a different client managed to delete the row. The reason I wanted REPEATABLE READ isolation is that I insert a row into a second table in the same transaction. Would it still be safe to do so with READ COMMITTED? – Gerfalcon
@Ynv: Yes, a different transaction that deleted or updated the row and already committed. Still safe with READ COMMITTED? Probably yes, but that all depends on your requirements. Safe against what? I suggest you start a new question with the defining details. (If this question is answered properly, consider accepting it.) – Plethoric
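
Picking up the comment thread, here is a minimal sketch of doing both the dequeue and the second-table insert atomically under READ COMMITTED, assuming a hypothetical task_archive table as the second table. Both changes share one transaction, so they commit or roll back together even though each statement gets its own snapshot:

begin;  -- READ COMMITTED is sufficient here
with claimed as (
  delete from task
    where task.task_id = (
      select task.task_id
      from task
      order by task.created_at asc
      limit 1
      for update skip locked
    )
    returning task.task_id, task.created_at
)
insert into task_archive (task_id, created_at)
select task_id, created_at from claimed;
commit;  -- the delete and the insert become visible together, or not at all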
