Does a GCD dispatch_async wait on NSLog()?
From what I've read about Grand Central Dispatch, GCD does not do preemptive multitasking; it is all a single event loop. I'm having trouble making sense of this output. I have two queues just doing some output (at first I was reading/writing some shared state, but I was able to simplify down to this and still get the same result).

dispatch_queue_t authQueue = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t authQueue2 = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);

dispatch_async(authQueue, ^{ 
    NSLog(@"First Block");
    NSLog(@"First Block Incrementing"); 
    NSLog(@"First Block Incremented"); 
});

dispatch_async(authQueue, ^{ 
    NSLog(@"Second Block");
    NSLog(@"Second Block Incrementing");
    NSLog(@"Second Block Incremented"); 
});

dispatch_async(authQueue2,^{ 
    NSLog(@"Third Block"); 
    NSLog(@"Third Block Incrementing");
    NSLog(@"Third Block Incremented"); 
});

I get the following output:

2011-12-15 13:47:17.746 App[80376:5d03] Third Block
2011-12-15 13:47:17.746 App[80376:1503] First Block
2011-12-15 13:47:17.746 App[80376:5d03] Third Block Incrementing
2011-12-15 13:47:17.746 App[80376:1503] First Block Incrementing
2011-12-15 13:47:17.748 App[80376:1503] First Block Incremented
2011-12-15 13:47:17.748 App[80376:5d03] Third Block Incremented
2011-12-15 13:47:17.750 App[80376:1503] Second Block
2011-12-15 13:47:17.750 App[80376:1503] Second Block Incrementing
2011-12-15 13:47:17.751 App[80376:1503] Second Block Incremented

As is evident, the blocks do not execute atomically. My only theory is that GCD writing to stdio via NSLog makes the current execution wait. I can't find anything related to this in the Apple documentation. Can anyone explain this?

Selfheal answered 15/12, 2011 at 19:56 Comment(0)

GCD does not use any kind of "event loop". It is a fairly new technology, built on kernel support added in recent releases of Mac OS X and iOS, and I don't know of any directly comparable technology elsewhere.

The goal is to finish executing all of the code you give it as quickly as the hardware will allow. Note that it's aiming for the quickest finish time, not the quickest start time. A subtle difference, but an important one with real world impact on how it works.

If you only have one idle CPU core, then in theory only one block will be executed at a time, because multitasking within a single core is slower than executing two tasks sequentially. In practice it isn't that simple: if a CPU core becomes idle or under-used for a moment (for example, while reading from the hard drive, or while waiting for some other process to respond, such as Xcode drawing the NSLog output), GCD will quite likely move on to executing a second work item, because the one it's currently doing is stuck.

And of course, most of the time you will have more than one idle CPU core.

It also will not necessarily execute things in the exact order you give it. GCD/the kernel have control over these details.

For your specific example: Xcode's debugger is probably only capable of processing a single NSLog() event at a time (at the very least, it has to do the screen drawing one line at a time). Your two queues may begin executing simultaneously, but if they each send an NSLog() statement at the same moment, one of them has to wait for the other to finish first. Because you're not doing anything but printing to Xcode, the two GCD queues are in a race to be the first to send log data to Xcode. The first one has a head start, but it's an extremely slight one, and often not enough for it to open a connection with Xcode first.

It all depends on what hardware resources are available at that specific nanosecond in time. You can't predict it, and you need to structure your queues appropriately to assume some control.

Exceeding answered 15/12, 2011 at 20:59 Comment(0)

Where did you read that GCD does not do pre-emptive multitasking? I think you are mistaken. It is built upon the thread support provided by the system and so GCD blocks dispatched to queues may be preemptively interrupted.

The behaviour you are seeing is exactly what I would expect. The first and second blocks are dispatched to the same queue so GCD will ensure that the first block completes before the second block starts. However, block three is dispatched to a completely different queue (i.e. will be running on a separate background thread) and so its output is interleaved with the other two blocks as the threads are scheduled by the system.
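The ordering guarantees described above can be sketched with a small Python simulation. This is a hedged analogy only, not GCD itself: `SerialQueue`, `emit`, and the queue names are invented for illustration, with each serial queue modeled as a single worker thread draining a FIFO.

```python
import threading
import queue

class SerialQueue:
    """Rough analogy of a GCD serial queue: a single worker thread
    drains a FIFO, so submitted blocks run one at a time, in order."""

    def __init__(self):
        self._fifo = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            block = self._fifo.get()
            block()
            self._fifo.task_done()

    def dispatch_async(self, block):
        self._fifo.put(block)   # returns immediately, like dispatch_async

    def wait(self):
        self._fifo.join()       # block until all submitted work has run

log = []
log_lock = threading.Lock()

def emit(msg):
    with log_lock:
        log.append(msg)

auth_queue, auth_queue2 = SerialQueue(), SerialQueue()
auth_queue.dispatch_async(lambda: emit("First Block"))
auth_queue.dispatch_async(lambda: emit("Second Block"))
auth_queue2.dispatch_async(lambda: emit("Third Block"))  # free to interleave
auth_queue.wait()
auth_queue2.wait()

# Guaranteed: FIFO order within one serial queue.
assert log.index("First Block") < log.index("Second Block")
# Not guaranteed: where "Third Block" lands relative to the other two.
assert sorted(log) == ["First Block", "Second Block", "Third Block"]
```

Running this repeatedly, "First Block" always precedes "Second Block", while "Third Block" can appear anywhere, which mirrors the interleaving in the question's output.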

Sacchariferous answered 15/12, 2011 at 20:37 Comment(2)
This may be a question of semantics. The kernel can certainly preempt a block as it is executing (due to the thread exhausting its CPU quantum or a context switch), but GCD itself never interrupts a block in progress in order to go execute a different block. – Callow
@Callow – This is not entirely true. If you have two GCD queues of different QoS, items dispatched to a high-priority queue can preempt an item running on a different, lower-priority queue. However, if you dispatch a high-QoS item to a low-QoS queue that already has other low-QoS items running on it, GCD temporarily elevates the QoS of the entire queue and its existing work items rather than preempting them. Needless to say, cancellation is always cooperative. And Swift concurrency (async-await) is also entirely cooperative. – Melisent

Whatever you have read is wrong: unless you use a serial dispatch queue, all blocks will be executed concurrently.
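The nondeterminism this answer describes can be sketched in Python, with a thread pool standing in for a global concurrent queue. This is a hedged analogy only; the block names and pool size are invented: submission order does not fix completion order.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

completed = []

def block(name):
    # Simulate variable scheduling/work time.
    time.sleep(random.uniform(0, 0.01))
    completed.append(name)   # list.append is atomic under the GIL

# A thread pool stands in for dispatch_get_global_queue(): blocks
# submitted in the order A, B, C may finish in any order.
with ThreadPoolExecutor(max_workers=4) as pool:
    for name in ("A", "B", "C"):
        pool.submit(block, name)
# Exiting the `with` waits for all submitted work to finish.

assert sorted(completed) == ["A", "B", "C"]  # all ran; order unspecified
```

Only sorting makes the result predictable; the raw completion order varies from run to run, just as with blocks dispatched to a concurrent queue.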

Imminence answered 15/12, 2011 at 20:38 Comment(7)
He is talking about iOS, where everything but the most recent hardware has only a single CPU core. GCD will never execute two CPU-intensive tasks concurrently on a single core. As I understand it, they will only be concurrent if there aren't any CPU-intensive tasks active anywhere on the system. I think he has slightly misunderstood something he read somewhere; it isn't completely wrong, as you claim. – Exceeding
Your involvement of hardware is irrelevant. It is a fact that you cannot say with any certainty which block will be executed first, or finish first, if you call dispatch_async(dispatch_get_global_queue(0,0), ^{ code here }); twice, even on a single processor. The only time you CAN be certain of serial execution is when you use a serial dispatch queue, where every block you submit WILL be executed in submission order. Bringing the specifics of hardware into the discussion is irrelevant from a code point of view, as you have no idea where you will be executing. – Imminence
And I see in this case @Selfheal is in fact creating TWO serial dispatch queues, submitting two blocks to one queue and one to the other. If you look at the debug output, the two blocks submitted to the first queue do in fact execute serially, while the one submitted to the other queue executes concurrently. It was a misunderstanding on the part of the poster: each serial queue executes its blocks serially, but separate serial queues run concurrently with each other. – Imminence
I agree with everything you've said in your comments here. But I don't agree with your answer, where you state "all blocks will be executed concurrently". This is not true. They will only be executed concurrently if there are hardware resources available. If you schedule 20,000 CPU-intensive blocks, they will not all be executed concurrently, as multi-threading within a core has a huge performance hit and GCD is designed specifically to avoid that performance issue. On an A4 CPU they may very well execute one at a time. – Exceeding
I'm afraid Abhi is wrong. This is not just a hardware-biased decision, and in fact there are opportunities for concurrency even on a single CPU core, which GCD will happily avail itself of. Think of it more as a pool of threads, any of which may or may not be executing concurrently regardless of the hardware configuration, and for which a more key question is "which are blocked?" I/O is one such blocking operation, and GCD will create more threads as necessary (within reason) if a given one context-switches back into the kernel and needs to block until its request is processed. – Callow
@Callow I did specifically say CPU-intensive blocks. My understanding is that GCD will not execute two CPU-intensive blocks at once on a single-core device, especially an ARM device. Where have you seen otherwise? – Exceeding
@AbhiBeckert While it is unlikely, it is simply not possible to categorically say that GCD will not execute two CPU-intensive tasks (submitted to concurrent queues) at once. If you still doubt this, take a look at the libdispatch source code and its contract with pthread_workqueue (the source for which is also available as part of the xnu sources). – Callow

Your queues do their work on two concurrent background threads. They emit NSLog messages concurrently. While one thread produces NSLog output, the other waits.
What's wrong?

Chill answered 15/12, 2011 at 20:34 Comment(0)

To expand on Abhi’s answer (+1):

  • All of the “first block” will finish before the “second block” starts, because the two blocks were submitted to the same serial queue. Perhaps needless to say, if this were a concurrent queue, all bets would be off, but we’re talking about serial queues here;

  • The “third block”, on a separate serial queue, may run in parallel with items dispatched to the first queue (especially in the world of multicore CPUs); and

  • The above notwithstanding, NSLog itself is synchronized; i.e., you will never see the output of one NSLog from one worker thread interrupted mid-line by the output of another NSLog from another thread. As a historical note, back in the early days of Swift the print statement wasn’t synchronized, so you could see garbled output from multiple threads interspersed, but that problem has since been fixed and Swift’s print is now synchronized, too.

My apologies for chiming in to an old question, but I wanted to distinguish between (a) the concurrency/parallelism of two separate serial queues; but also (b) the synchronization of the individual NSLog statements.
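The line-level atomicity described for NSLog can be mimicked with a lock in Python. This is a sketch of the idea only; `ns_log` and the string buffer are invented, and this is not how NSLog is actually implemented:

```python
import io
import threading

_buffer = io.StringIO()
_log_lock = threading.Lock()

def ns_log(message):
    # The lock makes each whole line atomic, so lines written from
    # different threads never interleave mid-line, although their
    # relative order remains unspecified.
    with _log_lock:
        _buffer.write(message + "\n")

threads = [
    threading.Thread(target=ns_log, args=(f"Block {i} " * 50,))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

lines = _buffer.getvalue().splitlines()
# Every line comes out whole: no mid-line interleaving.
assert len(lines) == 8
assert set(lines) == {f"Block {i} " * 50 for i in range(8)}
```

The per-thread ordering of lines still varies run to run; only the integrity of each individual line is guaranteed, which is exactly the distinction between (a) and (b) above.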


On a somewhat unrelated point, there have been comments suggesting that one work item on one GCD queue cannot preempt another item on another queue. (It’s not salient to the OP’s question, but is a source of confusion in the ensuing answers/comments.)

So, I performed an experiment on a 6-core device, running 8 identical, computationally-intensive calculations in parallel, launching the latter four on a separate queue 0.5 seconds after the first four tasks (but while the first four were still underway).

Using Instruments’ “Points of Interest” tool, I can see that when the two queues have the same priority (“default” QoS in my example), no preemption is exhibited. A maximum of six tasks run at a time, and the last two wait for a core to be freed upon completion of the work on one of the other worker threads:

[Instruments screenshot: with equal QoS, six tasks run at once and the last two wait for free cores]

But if I lower the first queue to a “utility” QoS and raise the second queue to a “user initiated” QoS, I see the high-priority tasks that were submitted later preempt the lower priority tasks that started earlier:

[Instruments screenshot: higher-QoS tasks submitted later preempt lower-QoS tasks already running]

And, setting my inspection range to the interval associated with the high-priority tasks, I can see that the low-priority tasks, which had been enjoying full CPU utilization, dropped while the high-priority tasks ran:

[Instruments screenshot: the low-priority tasks’ CPU utilization drops while the high-priority tasks run]

Bottom line, under resource contention, high-priority GCD tasks can preempt lower-priority tasks. As an aside, Swift concurrency (async-await) does not manifest this behavior, but GCD (and manual NSThread code) does.

Melisent answered 3/11, 2023 at 20:8 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.