Render dynamic text onto CVPixelBufferRef while recording video

I'm recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput and in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, I want to draw text onto each individual sample buffer I'm receiving from the video connection. The text changes with about every frame (it's a stopwatch label) and I want that to be recorded on top of the video data that's captured.

Here's what I've been able to come up with so far:

//1.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

//2.   
UIImage *textImage = [self createTextImage];
CIImage *maskImage = [CIImage imageWithCGImage:textImage.CGImage];

//3.
CVPixelBufferLockBaseAddress(pixelBuffer, 0); 
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)colorSpace forKey:kCIImageColorSpace];
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];

//4.
CIFilter *filter = [CIFilter filterWithName:@"CIBlendWithMask"];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:maskImage forKey:@"inputMaskImage"];
CIImage *outputImage = [filter outputImage];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

//5.
[self.renderContext render:outputImage toCVPixelBuffer:pixelBuffer bounds:[outputImage extent] colorSpace:colorSpace];
CGColorSpaceRelease(colorSpace);

//6.
[self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:timestamp];
  1. Here I grab the pixel buffer, easy as pie.
  2. I use Core Graphics to write text onto a blank UIImage (that's what createTextImage does; I was able to verify that this step works by saving an image with the text drawn on it to my photos).
  3. I create a CIImage from the pixel buffer.
  4. I make a CIFilter for CIBlendWithMask, setting the input image as the one created from the original pixel buffer and the input mask as the CIImage made from the image with text drawn on it.
  5. Finally, I render the filter output image to the pixelBuffer. The CIContext was created beforehand with [CIContext contextWithOptions:nil];.
  6. After all that, I append the pixel buffer to my pixelBufferAdaptor with the appropriate timestamp.

The video that's saved at the end of recording shows no visible change, i.e. no mask image has been drawn onto the pixel buffers.

Anyone have any idea where I'm going wrong here? I've been stuck on this for days, any help would be so appreciated.

EDIT:

- (UIImage *)createTextImage {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height), NO, 1.0);
    NSMutableAttributedString *timeStamp = [[NSMutableAttributedString alloc]initWithString:self.timeLabel.text attributes:@{NSForegroundColorAttributeName:self.timeLabel.textColor, NSFontAttributeName: self.timeLabel.font}];
    NSMutableAttributedString *countDownString = [[NSMutableAttributedString alloc]initWithString:self.cDownLabel.text attributes:@{NSForegroundColorAttributeName:self.cDownLabel.textColor, NSFontAttributeName:self.cDownLabel.font}];
    [timeStamp drawAtPoint:self.timeLabel.center];
    [countDownString drawAtPoint:self.view.center];
    UIImage *blank = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blank;
}
Goring answered 3/6, 2015 at 1:16 Comment(5)
Hi! Your question is the answer to my question. But what is self.renderContext?Woadwaxen
It's a CIContext objectGoring
How did you get this done finally?Martines
I ended up using GPUImage to record the live video and then applied a filter on the result.Goring
Can you share that sample code? @Goring I am using GPUImage but unable to achieve the same.Tinct

Do you want a result like the one below? (screenshot: stopwatch text overlaid on the recorded video frame)

Instead of using CIBlendWithMask, you should use CISourceOverCompositing, try this:

//4.
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:maskImage forKey:kCIInputImageKey];
[filter setValue:inputImage forKey:kCIInputBackgroundImageKey];
CIImage *outputImage = [filter outputImage];
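
Putting that together with the asker's original steps, a sketch of the full overlay pass might look like the following. This is untested and assumes self.renderContext is the CIContext created up front; the imageByApplyingOrientation: call (suggested in the comments below) is only needed if the text comes out rotated:

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIImage *maskImage = [[CIImage imageWithCGImage:textImage.CGImage] imageByApplyingOrientation:8];

// Composite the text over the video frame.
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:maskImage forKey:kCIInputImageKey];
[filter setValue:inputImage forKey:kCIInputBackgroundImageKey];

// Render the result back into the same pixel buffer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
[self.renderContext render:filter.outputImage toCVPixelBuffer:pixelBuffer bounds:inputImage.extent colorSpace:colorSpace];
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);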
Spaceless answered 3/6, 2015 at 6:55 Comment(9)
That's exactly what I want! I tried that though and no luck still.Goring
Did you set the maskImage to kCIInputImageKey not kCIInputBackgroundImageKey?Spaceless
Yeah just checked to make sure.Goring
Can you post your createTextImage ?Spaceless
Sure thing, just did.Goring
It seems that the orientation of the maskImage is incorrect. try this:CIImage *maskImage = [[CIImage imageWithCGImage:textImage.CGImage] imageByApplyingOrientation:8];Spaceless
Can you post your project on GitHub?Spaceless
Will the overlay be kept if I save the video, anyone?Hurff
@Bannings, this works perfectly. I posted swift version as answerFeverroot

You can also use CoreGraphics and CoreText to draw directly on top of the existing CVPixelBufferRef if it's RGBA (or on a copy if it's YUV). I have some sample code in this answer: https://mcmap.net/q/1624091/-how-do-i-draw-onto-a-cvpixelbufferref-that-is-planar-ycbcr-420f-yuv-nv12-not-rgb
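
For a kCVPixelFormatType_32BGRA buffer, the core of that approach is wrapping the buffer's base address in a CGBitmapContext and drawing straight into it. A minimal sketch (see the linked answer for the full treatment, including the YUV case):

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little); // BGRA layout

// Draw the stopwatch text into `context` with Core Text here;
// the pixels land directly in the buffer, so no extra render pass is needed.

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);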

Tandem answered 2/10, 2017 at 12:2 Comment(0)

I asked Apple DTS about this same issue, since every approach I had was either running really slowly or doing odd things, and they sent me this:

https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest?language=objc

This got me to a working solution really quickly! You can bypass the CVPixelBuffer altogether using CIFilters, which are, in my opinion, much easier to work with. So if you don't actually NEED to touch the CVPixelBuffer, this approach will quickly become your new friend.

A combination of CIFilters that composited the text image I generated for each frame over the source image did the trick.
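
A minimal sketch of that composition-based approach (makeTimestampImage is a hypothetical helper of mine that builds a CIImage from the current stopwatch text; the handler runs once per frame during export or playback):

AVVideoComposition *composition =
    [AVVideoComposition videoCompositionWithAsset:asset
                     applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
        // Overlay the per-frame text on top of the source frame.
        CIImage *timestampImage = [self makeTimestampImage]; // hypothetical helper
        CIFilter *overlay = [CIFilter filterWithName:@"CISourceOverCompositing"];
        [overlay setValue:timestampImage forKey:kCIInputImageKey];
        [overlay setValue:request.sourceImage forKey:kCIInputBackgroundImageKey];
        [request finishWithImage:overlay.outputImage context:nil];
    }];

Assign `composition` to an AVAssetExportSession's (or AVPlayerItem's) videoComposition property, and the overlay is rendered without ever touching a CVPixelBuffer yourself.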

I hope this helps someone else!

Foreconscious answered 17/11, 2017 at 17:16 Comment(1)
any example how to add text to CVPixelBuffer using CIFilters?Significative

Swift version of Bannings's answer.

let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(maskImage.oriented(.left), forKey: kCIInputImageKey)
combinedFilter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)

let outputImage = combinedFilter.outputImage!

let tmpcontext = CIContext(options: nil)
tmpcontext.render(outputImage, to: pixelBuffer, bounds: outputImage.extent, colorSpace: CGColorSpaceCreateDeviceRGB())
Feverroot answered 10/7, 2018 at 12:6 Comment(0)
