What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface from an XPC service while that surface is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination =
    [[CIRenderDestination alloc] initWithIOSurface:surface];
// Send the IOSurface to the client over an NSXPCConnection.
// In the service, periodically write to the IOSurface.
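For context, a fuller sketch of what the service side might look like. The CIContext, the CIImage being rendered, and the error handling are my assumptions, not part of the original code:

```objc
// In the XPC service: render into the IOSurface with Core Image.
CIContext *ctx = [CIContext contextWithOptions:nil]; // assumed context
CIImage *image = ...;                                // whatever the service draws
CIRenderDestination *dest =
    [[CIRenderDestination alloc] initWithIOSurface:surface];

NSError *error = nil;
CIRenderTask *task = [ctx startTaskToRender:image
                              toDestination:dest
                                      error:&error];
// Blocks the service (not the app) until the GPU work is done.
[task waitUntilCompletedAndReturnError:&error];
```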
In my application I have the following:
IOSurface *surface = ...; // fetched from the NSXPCConnection
id<MTLTexture> texture =
    [device newTextureWithDescriptor:...
                           iosurface:(__bridge IOSurfaceRef)surface
                               plane:0];
// The texture is sampled in a fragment shader (read-only).
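A more complete version of the texture creation, as a sketch. The pixel format is an assumption; the descriptor must match the surface's dimensions and format, and `newTextureWithDescriptor:iosurface:plane:` takes an IOSurfaceRef, hence the bridge cast:

```objc
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                       width:surface.width
                                                      height:surface.height
                                                   mipmapped:NO];
desc.usage = MTLTextureUsageShaderRead; // read-only in the fragment shader
id<MTLTexture> texture =
    [device newTextureWithDescriptor:desc
                           iosurface:(__bridge IOSurfaceRef)surface
                               plane:0];
```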
I have an MTKView running its normal update loop. I want my XPC service to periodically write to the IOSurface using Core Image, and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? Double or triple buffering is one strategy, but it doesn't really work for me: I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image, and an image can be as large as JPG/TIFF/etc. allows.)
WWDC 2010 session 442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in IOSurfaceRef.h suggest that the locking is only for CPU access.)
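For completeness, the CPU-side pattern those header comments describe looks like this; `surfaceRef` stands in for the underlying IOSurfaceRef:

```objc
uint32_t seed;
IOSurfaceLock(surfaceRef, 0, &seed);              // 0 = read/write intent
void *base = IOSurfaceGetBaseAddress(surfaceRef); // only valid while locked
size_t bytesPerRow = IOSurfaceGetBytesPerRow(surfaceRef);
// ... touch pixels through `base` ...
IOSurfaceUnlock(surfaceRef, 0, &seed);
```

None of this helps with GPU-side producers like Core Image, which is exactly the question.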
Can I really just let Core Image's CIRenderDestination write at will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, how is that possible when, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's contents if reading and writing occurred during the same pass.
CIRenderTask's waitUntilCompletedAndReturnError: will ensure that Core Image has finished, but using it requires that I somehow block the Metal render loop in the application when I initiate the Core Image rendering and then unblock it when I'm done. Apple's sample code uses two IOSurfaces to achieve this and requires that the service constantly tell the app which surface it can read from. I'd like to avoid double or triple buffering to reduce memory overhead. MTLSharedEventHandle looks promising, but it's only available on 10.14+ and its documentation is very thin. – Kelly
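If 10.14 is an acceptable floor, the MTLSharedEventHandle route might look roughly like this. The handle conforms to NSSecureCoding, so it can travel over the existing NSXPCConnection; `frameNumber` is an assumed monotonically increasing counter that both sides agree on, not an API:

```objc
// Service: create the event once and send its handle to the app.
id<MTLSharedEvent> event = [device newSharedEvent];
MTLSharedEventHandle *handle = [event newSharedEventHandle];
// ... send `handle` to the app over the NSXPCConnection ...

// After each Core Image render into the IOSurface completes:
event.signaledValue = frameNumber;
```

```objc
// App: rematerialize the event and gate the render loop on it.
id<MTLSharedEvent> event = [device newSharedEventWithHandle:handle];
[commandBuffer encodeWaitForEvent:event value:frameNumber];
// ... encode the draw that samples the IOSurface-backed texture ...
```

This keeps a single surface per tile: the app's command buffer simply stalls on the GPU timeline until the service has signaled that frame's write is complete.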