Yes, a dedicated serial queue is a wonderful way to synchronize access to some resource shared amongst multiple threads. And, yes, with a serial queue, each task will wait for the prior one to complete.
Two observations:
While this may sound highly inefficient, serialization is implicitly at the heart of any synchronization technique (whether queue-based or lock-based) whose goal is to minimize concurrent updates of a shared resource.
But in many scenarios, the serial queue technique can yield significantly better performance than other common techniques, such as a simple mutex lock, NSLock, or the @synchronized directive. For a discussion of the alternative synchronization techniques, see the Synchronization section of the Threading Programming Guide. For a discussion of using queues in lieu of locks, see Eliminating Lock-Based Code in the Migrating Away from Threads section of the Concurrency Programming Guide.
A variation of the serial queue pattern is to use the "reader-writer" pattern, where you create a GCD concurrent queue:
dispatch_queue_t queue = dispatch_queue_create("identifier", DISPATCH_QUEUE_CONCURRENT);
You then perform reads using dispatch_sync, but you perform writes using dispatch_barrier_async. The net effect is to permit concurrent read operations, while ensuring that writes are never performed concurrently.
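For example, here is a minimal sketch of the pattern. The Cache class, its accessor methods, and the NSMutableDictionary backing store are all illustrative choices, not part of any particular API:

#import <Foundation/Foundation.h>

@interface Cache : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation Cache {
    dispatch_queue_t _queue;
    NSMutableDictionary *_storage;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("identifier", DISPATCH_QUEUE_CONCURRENT);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

// Reads use dispatch_sync, so multiple readers can run concurrently.
- (id)objectForKey:(NSString *)key {
    __block id result;
    dispatch_sync(_queue, ^{
        result = self->_storage[key];
    });
    return result;
}

// Writes use dispatch_barrier_async, so each write waits for in-flight
// reads to finish and blocks new reads until the write completes.
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(_queue, ^{
        self->_storage[key] = object;
    });
}

@end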
If your resource permits concurrent reads, then the reader-writer pattern can offer a further performance gain over that of a serial queue.
So, in short, while it seems inefficient to have task #24 wait for task #23, that is inherent in any synchronization technique where you strive to minimize concurrent updates of the shared resource. And GCD serial queues are a surprisingly efficient mechanism, often better than many simple locking mechanisms. The reader-writer pattern can, in some circumstances, offer even further performance improvements.
My original answer, below, was in response to the original question, which was confusingly titled "how does a serial dispatch queue guarantee concurrency?" In retrospect, that was just an accidental use of the wrong terminology.
This is an interesting choice of words, "how does a serial dispatch queue guarantee concurrency?"
There are three types of queues: serial, concurrent, and the main queue. A serial queue will, as the name suggests, not start the next dispatched block until the prior one has finished. (Using your example, this means that if task 23 takes a long time, it won't start task 24 until it's done.) Sometimes this is critical (e.g. if task 24 depends on the results of task 23, or if both task 23 and task 24 are trying to access the same shared resource).
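You can see that behavior in a short sketch (the queue label, the task numbering, and the simulated delay are illustrative):

dispatch_queue_t queue = dispatch_queue_create("identifier", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{
    NSLog(@"task 23 starting");
    [NSThread sleepForTimeInterval:5.0];   // simulate slow work
    NSLog(@"task 23 done");
});

dispatch_async(queue, ^{
    NSLog(@"task 24");                     // won't start until task 23 is done
});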
If you want these various dispatched tasks to run concurrently with respect to each other, you use a concurrent queue (either one of the global concurrent queues that you get via dispatch_get_global_queue, or a concurrent queue you create yourself using dispatch_queue_create with the DISPATCH_QUEUE_CONCURRENT option). In a concurrent queue, many of your dispatched tasks may run concurrently. Using concurrent queues requires some care (notably the synchronization of shared resources), but can yield significant performance benefits when implemented properly.
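As a quick sketch of the concurrent case, using one of the global queues (again, the task numbering is illustrative):

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(queue, ^{
    NSLog(@"task 23");   // may run at the same time as task 24
});

dispatch_async(queue, ^{
    NSLog(@"task 24");
});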
And as a compromise between these two approaches, you can use operation queues, which are concurrent, but in which you can also constrain how many operations run concurrently at any given time by setting maxConcurrentOperationCount. A typical scenario where you'll use this is when doing background network tasks, where you do not want more than five concurrent network requests.
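For example, a minimal sketch of that scenario (the urls array is a hypothetical list of NSURL objects, and the synchronous fetch is for illustration only):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 5;   // no more than five operations at a time

for (NSURL *url in urls) {               // `urls` is a hypothetical NSArray of NSURL objects
    [queue addOperationWithBlock:^{
        NSData *data = [NSData dataWithContentsOfURL:url];   // simple synchronous fetch, for illustration
        // ... process `data` ...
    }];
}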
For more information, see the Concurrency Programming Guide.