One of the difficulties in writing algorithms or data structures that satisfy lock-free progress guarantees is dynamic memory allocation: calling something like malloc or new isn't guaranteed to be lock-free in a portable manner. However, many lock-free implementations of malloc and new exist, and there are also a variety of lock-free memory allocators that can be used to implement lock-free algorithms and data structures.
However, I still don't understand how this can actually satisfy lock-free progress guarantees completely, unless you specifically limit your data structure or algorithm to a pre-allocated static memory pool. If you need truly dynamic memory allocation, I don't see how any alleged lock-free memory allocator can remain lock-free in the long run. The problem is that no matter how amazing your lock-free malloc or new might be, you may ultimately run out of memory, at which point you have to fall back on asking the operating system for more. That means you ultimately have to call brk() or mmap() or some such low-level equivalent to actually get access to more memory, and there is simply no guarantee that any of these low-level calls are implemented in a lock-free manner.
There's simply no way around this (unless you're using an ancient OS like MS-DOS that doesn't provide memory protection, or you write your own completely lock-free operating system; neither scenario is practical or likely). So how can any dynamic memory allocator truly be lock-free?
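The usual way out of this bind is to confine the possibly-locking system call to startup (or to rare refill events): reserve one large arena up front, then serve individual allocations from it with a single atomic fetch-and-add, which is wait-free per allocation. A minimal sketch, with illustrative names and std::malloc standing in for the one-time mmap call:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>

// Sketch of a bump arena: the only potentially locking call (here
// std::malloc as a stand-in for mmap) happens once in the constructor.
// After that, allocate() is a single fetch_add -- wait-free.
// Names and sizes are illustrative, not a production design.
class BumpArena {
    unsigned char* base_;
    std::size_t capacity_;
    std::atomic<std::size_t> offset_{0};
public:
    explicit BumpArena(std::size_t capacity)
        : base_(static_cast<unsigned char*>(std::malloc(capacity))),
          capacity_(capacity) {}
    ~BumpArena() { std::free(base_); }
    void* allocate(std::size_t n) {
        // Round the request up to 16-byte granularity.
        n = (n + 15) & ~static_cast<std::size_t>(15);
        std::size_t off = offset_.fetch_add(n, std::memory_order_relaxed);
        if (off + n > capacity_) return nullptr;  // arena exhausted
        return base_ + off;
    }
};
```

The trade-off is exactly the one the question identifies: once the arena is exhausted, you either fail the allocation or go back to the OS, and that refill path may take a lock. Allocators like this are lock-free only on the fast path.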
sbrk() or whatever can be strictly limited to the point where it doesn't matter. This doesn't satisfy hard real-time requirements, of course, but I don't think dynamic allocation would satisfy such requirements even if it was completely lock-free. – Festus

It doesn't much matter if the kernel takes a lock inside mmap, because locks are only a problem if it's possible for a thread to sleep while holding one. A couple of CPU cache misses inside the critical section are probably the worst you'd get. – Dragline