Capture all NSWindows as active images like Mission Control in Mac OS X

I'm looking to aggregate live representations of all windows. Much like Mission Control (Exposé), I want extremely fast access to the image buffer of any given NSWindow or screen. Ideally, I want to composite these live images in my own OpenGL context so I can manipulate them (scale and move the window captures around).

Things that are too slow:

  • CGDisplayCreateImage
  • CGWindowListCreateImage
  • CGDisplayIDToOpenGLDisplayMask & CGLCreateContext & CGBitmapContextCreate
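
For reference, the CGWindowListCreateImage path boils down to a per-frame call along these lines (a sketch; windowID is a hypothetical CGWindowID):

// Capture a single window once per frame; far too slow for 60 fps.
CGImageRef image = CGWindowListCreateImage(CGRectNull,
                                           kCGWindowListOptionIncludingWindow,
                                           windowID,
                                           kCGWindowImageBoundsIgnoreFraming);
// ...upload to a GL texture, draw, then release...
CGImageRelease(image);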

Any other ideas? I'm trying to achieve 60 fps capture/composite/output, but the best I can get with any of these methods is ~5 fps (on a Retina display, capturing the entire screen).

Thruway asked 25/3, 2014 at 1:25

Unfortunately, I haven't found a way to quickly capture the framebuffers of individual windows, but I figured out the next best thing. This is a method for quickly capturing the live view of the entire screen(s) into OpenGL:

AVFoundation Setup

_session = [[AVCaptureSession alloc] init];
_session.sessionPreset = AVCaptureSessionPresetPhoto;

// Capture the main display, delivering at most 60 frames per second.
AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:kCGDirectMainDisplay];
input.minFrameDuration = CMTimeMake(1, 60);
[_session addInput:input];

// Discard late frames instead of queueing them, so we always draw the newest one.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[output setAlwaysDiscardsLateVideoFrames:YES];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:output];
[_session startRunning];
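
Note that the _textureCache used below has to be created once, from the same OpenGL context you intend to draw with. A minimal sketch, assuming _glContext (NSOpenGLContext) and _glPixelFormat (NSOpenGLPixelFormat) are your own ivars:

// One-time setup: back the texture cache with your GL context.
CGLContextObj cglContext = [_glContext CGLContextObj];
CGLPixelFormatObj cglPixelFormat = [_glPixelFormat CGLPixelFormatObj];
CVReturn err = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                                          cglContext, cglPixelFormat,
                                          NULL, &_textureCache);
NSAssert(err == kCVReturnSuccess, @"Could not create texture cache");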

On Each AVCaptureVideoDataOutput Frame

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
  CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
  const size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
  const size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);

  // Wrap the GPU-resident pixel buffer in a GL texture; no CPU copy is made.
  CVOpenGLTextureRef texture;
  CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, &texture);
  CVOpenGLTextureCacheFlush(_textureCache, 0);

  // On OS X the target is typically GL_TEXTURE_RECTANGLE_ARB, so texture
  // coordinates run in pixels (0..bufferWidth, 0..bufferHeight).
  const GLenum target = CVOpenGLTextureGetTarget(texture);
  const GLuint name = CVOpenGLTextureGetName(texture);

  glEnable(target);
  glBindTexture(target, name);

  // Manipulate and draw the texture however you want...

  CVOpenGLTextureRelease(texture);
}
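
One way to fill in the "draw it however you want" step, as a minimal sketch assuming a legacy fixed-function GL context: CVOpenGLTextureGetCleanTexCoords supplies orientation-correct coordinates (in pixels, since this is a rectangle texture), so the frame can be drawn as a full-screen quad.

// Inside the callback, before CVOpenGLTextureRelease(texture):
GLfloat lowerLeft[2], lowerRight[2], upperRight[2], upperLeft[2];
CVOpenGLTextureGetCleanTexCoords(texture, lowerLeft, lowerRight, upperRight, upperLeft);

glEnable(target);
glBindTexture(target, name);
glBegin(GL_QUADS);
glTexCoord2fv(lowerLeft);  glVertex2f(-1.0f, -1.0f);
glTexCoord2fv(lowerRight); glVertex2f( 1.0f, -1.0f);
glTexCoord2fv(upperRight); glVertex2f( 1.0f,  1.0f);
glTexCoord2fv(upperLeft);  glVertex2f(-1.0f,  1.0f);
glEnd();
glBindTexture(target, 0);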

Cleanup

[_session stopRunning];
CVOpenGLTextureCacheRelease(_textureCache);

The big difference between this and some other implementations that get the AVCaptureVideoDataOutput image into OpenGL as a texture is that those use CVPixelBufferLockBaseAddress, CVPixelBufferGetBaseAddress, glTexImage2D, and CVPixelBufferUnlockBaseAddress. The problem with that approach is that it's terribly redundant and slow: CVPixelBufferLockBaseAddress makes sure the memory it's about to hand you is not GPU memory, so it copies everything to general-purpose CPU memory. This is bad! We'd just be copying it right back to the GPU with glTexImage2D.
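
For comparison, that slow path looks roughly like this (a sketch assuming a BGRA pixel buffer; _slowTexture is a hypothetical pre-created texture name):

// Forces a GPU-to-CPU copy, then copies the pixels right back to the GPU.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
const size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, _slowTexture);
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(bytesPerRow / 4)); // stride in pixels
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA,
             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
             (GLsizei)CVPixelBufferGetHeight(pixelBuffer), 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, baseAddress);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);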

So, we can take advantage of the fact that the CVPixelBuffer is already in GPU memory with CVOpenGLTextureCacheCreateTextureFromImage.

I hope this helps someone else... the CVOpenGLTextureCache suite is poorly documented, and its iOS counterpart, CVOpenGLESTextureCache, is only slightly better.

60 fps at 20% CPU, capturing the 2560×1600 desktop!

Thruway answered 28/3, 2014 at 22:57
