I've been experimenting with GCD priorities recently. Here's the snippet of code I've been working with:
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        for _ in 1...10000000 {
            let _ = sin(0.64739812)
        }
        print("Finished a default")
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
            for _ in 1...10000 {
                let _ = sin(0.64739812)
            }
            print("Finished a high")
        }
    }
}
I expected it to print
Finished a default
Finished a high
// Repeat default and high alternating back and forth 1000 times (because there are 1000 loops)
But here's what the logs actually printed:
Finished a default
Finished a high
Finished a default x 21
Finished a high
Finished a default
Finished a high x 20
Finished a default x 977
Finished a high x 978
It makes sense at the beginning, alternating a little. Even 21 defaults in a row makes some sense. But then it runs 977 default blocks without processing a single high block. I assume this is happening because the dispatcher is very busy dealing with everything else going on, but still: it's a high-priority queue versus a default-priority queue.
Does anybody have any insights as to what's going on?
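As a sanity check, it's worth verifying which QoS class each block's thread is actually running at, since the legacy priority constants map onto QoS classes under the hood. A minimal sketch, assuming qos_class_self() is available on this platform (these diagnostic lines are my addition, not part of the original experiment):

// Drop prints like these into the blocks above. qos_class_self() reports the
// QoS class of the calling thread; the raw values are opaque numbers, but the
// two queues should report different ones if the priorities are taking effect.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    print("default block runs at QoS raw value \(qos_class_self().rawValue)")
}
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
    print("high block runs at QoS raw value \(qos_class_self().rawValue)")
}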
Edit 1
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        print("Starting a default")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a default")
    }
}
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        print("Starting a high")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a high")
    }
}
print("Done Dispatching Everything")
Here I would expect a couple of defaults and a couple of highs to execute before Done Dispatching Everything prints, and then all the highs to execute, followed by all the defaults.
However, here are the results:
Starting a default x6
Done Dispatching Everything // at this point, all the high and default blocks have been successfully submitted for execution.
Starting a high
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a high
Starting a high
Finished a default
Starting a default
Finished a default
Finished a default
Starting a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a default
Starting a default
Finished a high
Starting a high
Finished a default
Starting a default
Finished a default
Starting a default
// A sequence that looks like the above for around 1500 lines.
Started+Finished a high x ~500
So what's happening is that even after everything is scheduled, default blocks are executing significantly more often than high blocks. Then, after all the defaults have finished, the highs finally start to execute and finish in bulk.
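One way to probe this would be to count how many distinct worker threads service each priority; if the default queue is simply being handed most of the thread pool, the completion skew follows directly. A rough sketch of such a diagnostic (my own construction, not something from the original test):

import Foundation

// Record the distinct worker-thread IDs seen by each priority.
// pthread_mach_thread_np() gives a stable ID for the current thread.
var defaultThreads = Set<mach_port_t>()
var highThreads = Set<mach_port_t>()
let lock = NSLock()   // both sets are touched from many threads

for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        lock.lock(); defaultThreads.insert(pthread_mach_thread_np(pthread_self())); lock.unlock()
        for i in 1...1_000_000 { let _ = sin(Double(i)) }
    }
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        lock.lock(); highThreads.insert(pthread_mach_thread_np(pthread_self())); lock.unlock()
        for i in 1...1_000_000 { let _ = sin(Double(i)) }
    }
}
// Once everything drains, compare defaultThreads.count to highThreads.count.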
Edit 2
Another block
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        print("Starting a high")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a high")
    }
}
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        print("Starting a default")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a default")
    }
}
print("Done Dispatching Everything")
And the results blow my mind: it does the exact same thing as my second example (Edit 1). Even though the highs are all scheduled before the defaults, it still executes the default blocks first!
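To separate when the blocks actually finish from when print gets around to flushing, dispatch groups can report exactly when each batch drains. A minimal sketch using the same legacy API (this is my addition, and it assumes a main run loop is alive so the notify blocks can fire):

import Foundation

// dispatch_group_async tracks every block added to the group;
// dispatch_group_notify fires once after all of them have completed.
let highGroup = dispatch_group_create()
let defaultGroup = dispatch_group_create()

for _ in 1...1000 {
    dispatch_group_async(highGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        for i in 1...10_000_000 { let _ = sin(Double(i)) }
    }
    dispatch_group_async(defaultGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        for i in 1...10_000_000 { let _ = sin(Double(i)) }
    }
}
dispatch_group_notify(highGroup, dispatch_get_main_queue()) { print("all highs drained") }
dispatch_group_notify(defaultGroup, dispatch_get_main_queue()) { print("all defaults drained") }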
Edit 3
Last example, I promise
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        print("Starting a high")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a high")
    }
}
for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
        print("Starting a background")
        for i in 1...10000000 {
            let _ = sin(Double(i))
        }
        print("Finished a background")
    }
}
print("Done Dispatching Everything")
This executes exactly as expected: all the highs run, then all the backgrounds run, with no exceptions at all. Yet this executes very differently from Edit 2, even though in theory the two should behave exactly the same.
Comments

DEFAULT can do 4 at once (4 threads, 1000 in the queue). So it picks up 4 and runs them. This spawns 4 requests to HIGH. Then it grabs the next 4 A's while the 4 B's are running, and so on: they interleave (calling the default blocks A and the spawned high blocks B). The original assumption is, simplified, that the 4 high B tasks run before the next 4 A tasks are picked up from the default queue, and hence that you'd get a somewhat even distribution between A's and B's completing, because the A's take much longer to execute (10000000 vs 10000 iterations). – Flooded

Not sure how long for _ in 1...10000 { sin } (or even 10000000) takes to execute on your machine, but all those prints are quite likely much slower. Your test may be flawed. Try again and capture the events in a preallocated structure (just a threadsafe array with a cursor). – Flooded

Also, the optimizer may compile for _ in 1...10000000 { let _ = sin(0.64739812) } into nothing (or unwrap it into a single sin(0.64) call) … – Flooded

In Edit 2 I schedule all the highs first, and then all the defaults. In that instance, you would think there is no way a default could execute before a high, right? On 9.3.5, it definitely does. I'll grab a 10b device. – Speckle
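For completeness, here's one way the "preallocated structure with a cursor" suggestion could look. This is just my interpretation of the comment (the names are made up), and it also folds the sin() results into a running sum so the optimizer can't delete the loop as dead code:

import Foundation

// Each block reserves a unique slot via an atomic counter, then writes its tag
// into a raw buffer. Distinct slots in raw memory mean the writes never race,
// and there's no print() lock to distort the ordering under test.
let capacity = 2000
let events = UnsafeMutablePointer<Int32>.alloc(capacity)
for i in 0..<capacity { events[i] = 0 }   // 1 = high finished, 2 = default finished
var cursor: Int32 = -1                    // OSAtomicIncrement32 returns the new value

func record(tag: Int32) {
    let slot = Int(OSAtomicIncrement32(&cursor))
    if slot < capacity { events[slot] = tag }
}

for _ in 1...1000 {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        var sum = 0.0
        for i in 1...10_000_000 { sum += sin(Double(i)) }
        record(sum.isNaN ? 0 : 1)   // sum feeds the recorded value, so the loop stays live under -O
    }
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        var sum = 0.0
        for i in 1...10_000_000 { sum += sin(Double(i)) }
        record(sum.isNaN ? 0 : 2)
    }
}
// After the queues drain, dump the buffer in one pass to get the true completion order.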