I am trying to run some Core Image filters as efficiently as possible, and to avoid the memory warnings and crashes I am getting when rendering large images. I have been reading Apple's Core Image Programming Guide. Regarding multithreading, it says: "each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly."
What does this mean?
I am in fact attempting to run my filters on a background thread so I can show a HUD on the main thread (see below). Does this make sense in the context of Core Image? I gather that Core Image inherently uses GCD.
// Start HUD code here, on the main thread

// Get a global concurrent queue from the system
dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(concurrentQueue, ^{
    // Apply the Core Image filter chain on this background queue

    dispatch_async(dispatch_get_main_queue(), ^{
        // Dismiss the HUD and set the filtered image on the
        // image view, back on the main thread
    });
});
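For concreteness, here is a sketch of what the filter step inside that background block might look like; the filter name, inputImage, and the parameter values are placeholders I picked for illustration, not my actual chain:

#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Created once and reused; contexts are expensive to create,
// and per the docs quoted below they are safe to share
CIContext *context = [CIContext contextWithOptions:nil];

dispatch_async(concurrentQueue, ^{
    // The CIFilter is created on this thread, not shared
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey]; // inputImage: some CIImage (placeholder)
    [filter setValue:@0.8 forKey:kCIInputIntensityKey];

    CIImage *output = filter.outputImage;
    CGImageRef cgImage = [context createCGImage:output
                                       fromRect:[output extent]];
    UIImage *filtered = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    dispatch_async(dispatch_get_main_queue(), ^{
        // Dismiss the HUD and hand `filtered` to the image view
    });
});

As I understand it, createCGImage:fromRect: forces the actual render on the background queue, so only the UIImage assignment touches the main thread.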
More from the Apple Docs:
Maintaining Thread Safety
CIContext and CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU CIContext object to render CIImage objects. However, this is not the case for CIFilter objects, which are mutable. A CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly.
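So as I read it, the pattern below should be fine (a sketch of my understanding, not verified): one shared CIContext, and a fresh CIFilter inside each block that might run concurrently:

// Shared once for the whole app: CIContext is thread-safe
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});

// OK: each block creates its own CIFilter before using it
dispatch_async(concurrentQueue, ^{
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    // ...configure, then render via sharedContext...
});

// Not OK: configuring one CIFilter instance from several
// threads at once; CIFilter is mutable, so another thread
// could change its input values mid-render

Is that the right reading of the docs?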