While reading ZooKeeper's recipe for locks, I got confused. It seems that this recipe for distributed locks cannot guarantee that "at any snapshot in time no two clients think they hold the same lock". But since ZooKeeper is so widely adopted, if there were such a mistake in the reference documentation, someone would have pointed it out long ago. So what did I misunderstand?
Quoting the recipe for distributed locks:
Locks
Fully distributed locks that are globally synchronous, meaning at any snapshot in time no two clients think they hold the same lock. These can be implemented using ZooKeeper. As with priority queues, first define a lock node.
1. Call create( ) with a pathname of "locknode/guid-lock-" and the sequence and ephemeral flags set.
2. Call getChildren( ) on the lock node without setting the watch flag (this is important to avoid the herd effect).
3. If the pathname created in step 1 has the lowest sequence number suffix, the client has the lock and the client exits the protocol.
4. The client calls exists( ) with the watch flag set on the path in the lock directory with the next lowest sequence number.
5. If exists( ) returns false, go to step 2. Otherwise, wait for a notification for the pathname from the previous step before going to step 2.
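For reference, here is a rough sketch of those steps using the plain ZooKeeper Java client. The class, path names, and overall structure are my own assumptions for illustration, not code from the recipe, and error handling and lock release are omitted:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Rough sketch of the recipe quoted above.
public class LockRecipeSketch {
    private final ZooKeeper zk;
    private final String lockDir = "/locknode"; // assumed to exist already
    private String myPath;                      // e.g. /locknode/guid-lock-0000000000

    LockRecipeSketch(ZooKeeper zk) { this.zk = zk; }

    // Blocks until this client believes it holds the lock.
    void acquire() throws KeeperException, InterruptedException {
        // Step 1: create an ephemeral, sequential child under the lock node.
        myPath = zk.create(lockDir + "/guid-lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String myName = myPath.substring(lockDir.length() + 1);

        while (true) {
            // Step 2: list children WITHOUT a watch (avoids the herd effect).
            List<String> children = zk.getChildren(lockDir, false);
            Collections.sort(children);

            // Step 3: the lowest sequence number holds the lock.
            int myIndex = children.indexOf(myName);
            if (myIndex == 0) {
                return; // we "have" the lock
            }

            // Step 4: watch only the znode with the next lowest sequence number.
            String predecessor = children.get(myIndex - 1);
            CountDownLatch gone = new CountDownLatch(1);
            Stat stat = zk.exists(lockDir + "/" + predecessor, event -> gone.countDown());

            // Step 5: if it already disappeared, loop; otherwise wait for the watch.
            if (stat != null) {
                gone.await();
            }
        }
    }
}
```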
Consider the following case:
- Client1 successfully acquired the lock (in step 3), with ZooKeeper node "locknode/guid-lock-0";
- Client2 created node "locknode/guid-lock-1", failed to acquire the lock, and is now watching "locknode/guid-lock-0";
- Later, for some reason (say, network congestion), Client1 fails to send a heartbeat message to the ZooKeeper cluster on time, but Client1 is still working away, mistakenly assuming that it still holds the lock.
But ZooKeeper may decide that Client1's session has timed out, and then
- delete "locknode/guid-lock-0",
- send a notification to Client2 (or maybe send the notification first?),
- but cannot deliver a "session timeout" notification to Client1 in time (say, due to the same network congestion).
- Client2 gets the notification, goes to step 2, and finds only one node, "locknode/guid-lock-1", which it created itself; thus, Client2 assumes it holds the lock.
- But at the same time, Client1 assumes it holds the lock.
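To make the timing concrete, here is a hedged sketch of Client1's side (the host string, session timeout, and flag are my own assumptions). The only way Client1 learns that its session has expired is through the session watcher callback, and as far as I understand, the client library can only deliver the Expired event once it manages to reach the cluster again, so while the network is congested Client1 keeps believing it holds the lock:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical Client1: it only learns about session expiry via this callback.
public class Client1Sketch {
    private volatile boolean thinksItHoldsLock = false;

    void run() throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1:2181", 15000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.Expired) {
                    // Only at this point does Client1 find out that its ephemeral
                    // znode (and therefore the lock) is gone.
                    thinksItHoldsLock = false;
                }
            }
        });

        // ... acquire the lock as in the recipe ...
        thinksItHoldsLock = true;

        while (thinksItHoldsLock) {
            // Critical-section work. If the Expired event is delayed (or can only
            // arrive after reconnecting), this loop keeps running even after
            // ZooKeeper has deleted guid-lock-0 and notified Client2.
            doWork();
        }
    }

    private void doWork() { /* ... */ }
}
```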
Is this a valid scenario?