iPhone Watermark on recorded Video.

In my application I need to capture a video and put a watermark on it. The watermark should be text (a timestamp and notes). I saw some code that uses the QTKit framework; however, I have read that that framework is not available on the iPhone.

Thanks in advance.

Trona answered 26/8, 2011 at 14:1 Comment(5)
To whoever needs more info on this subject: I know this question is old, but for the sake of providing additional information, see this post (https://mcmap.net/q/584207/-how-to-add-text-on-video-in-iphone)Displacement
@GuntisTreulands Thank you for adding more information; I hope this helps people.Trona
@DilipRajkumar Can you please suggest how to set a proper frame for the CATextLayer?Subadar
@DipenChudasama Sorry, I am not currently doing any iOS development, so I have forgotten how to do this. I hope someone else can help.Trona
Okay, no problem. I solved the issue; thanks for your reply.Subadar
16

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.

Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of these things. I haven't seen any that give code examples for exactly the effect you would like, but you should be able to mix and match pretty easily.
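
As a rough sketch of the capture side of that pipeline in Swift (the class name, queue label, and BGRA pixel format here are my assumptions, not part of the original answer):

    import AVFoundation

    // Minimal frame-grabbing setup; error handling omitted for brevity.
    final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let queue = DispatchQueue(label: "video.frames") // placeholder label

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            session.addInput(try AVCaptureDeviceInput(device: camera))

            let output = AVCaptureVideoDataOutput()
            // BGRA frames are easy to wrap in a CGBitmapContext later.
            output.videoSettings =
                [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            output.setSampleBufferDelegate(self, queue: queue)
            session.addOutput(output)
            session.startRunning()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Each frame arrives here: overlay the watermark, then hand the
            // result to an AVAssetWriter (see below).
        }
    }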

EDIT:

Take a look at these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.

AVCaptureVideoDataOutput will return frames as CMSampleBufferRefs. Convert them to CGImageRefs using this code:

    // Create a CGImageRef from sample buffer data.
    // Note: the caller owns the returned CGImageRef and must free it
    // with CGImageRelease() when done.
    - (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

        // Get information about the image
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);

        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        /* CVBufferRelease(imageBuffer); */ // do not call this -- the sample buffer still owns it!

        return newImage;
    }

From there you would convert it to a UIImage:

    UIImage *img = [UIImage imageWithCGImage:yourCGImage];
    CGImageRelease(yourCGImage); // the UIImage retains it; release our reference

Then use

    [img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame into a graphics context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
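
For the overlay step itself, a minimal Swift sketch (the function and its inputs are placeholders of mine):

    import UIKit

    // Composite a watermark over a single video frame.
    func watermarked(frame: UIImage, watermark: UIImage) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: frame.size)
        return renderer.image { _ in
            frame.draw(in: CGRect(origin: .zero, size: frame.size))
            // Bottom-left corner with a small margin; adjust to taste.
            watermark.draw(in: CGRect(x: 10,
                                      y: frame.size.height - watermark.size.height - 10,
                                      width: watermark.size.width,
                                      height: watermark.size.height),
                           blendMode: .normal,
                           alpha: 0.65)
        }
    }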

How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
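
Reduced to its essentials, the writing side looks roughly like this (a sketch only: it assumes fixed-size 32BGRA buffers and skips error handling):

    import AVFoundation

    // Set up an AVAssetWriter that accepts BGRA pixel buffers.
    func makeWriter(outputURL: URL, width: Int, height: Int) throws
            -> (AVAssetWriter, AVAssetWriterInput, AVAssetWriterInputPixelBufferAdaptor) {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
            ])
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)
        return (writer, input, adaptor)
    }

    // Per frame: append the watermarked pixel buffer at its presentation time.
    // if input.isReadyForMoreMediaData {
    //     adaptor.append(pixelBuffer, withPresentationTime: time)
    // }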

This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30fps is a great way to crash the app.
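
One common pattern for that (my suggestion, not from the original answer) is to wrap the per-frame work in an explicit autorelease pool so temporaries are freed every frame:

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        autoreleasepool {
            // Convert, watermark, and append the frame here so temporary
            // images are released per frame instead of piling up.
        }
    }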

Vannesavanness answered 26/8, 2011 at 14:25 Comment(5)
Thank you James; if you can provide me with a start it would be great. Thanks again.Trona
See my additional comments above.Vannesavanness
Have you had a chance to try any of this out yet? Any luck?Vannesavanness
@Vannesavanness Can you please suggest how to set a proper frame for the CATextLayer? #31780560Subadar
@Vannesavanness How can I add a watermark at a specific time? My video is 60 sec long and I want the watermark from 10 to 50 sec. Please help me.Witchy
49

Adding a watermark is much simpler. You just need to use a CALayer and AVVideoCompositionCoreAnimationTool. The code can be copied and assembled in the same order; I have inserted some comments in between for better understanding.

Let's assume you have already recorded the video, so we are going to create the AVURLAsset first:

AVURLAsset* videoAsset = [[AVURLAsset alloc]initWithURL:outputFileURL options:nil];
AVMutableComposition* mixComposition = [AVMutableComposition composition];

AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo  preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) 
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];

[compositionVideoTrack setPreferredTransform:[[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] preferredTransform]]; 

With just this code you would already be able to export the video, but we want to add the layer with the watermark first. Please note that some of this code may seem redundant, but it is necessary for everything to work.

First we create the layer with the watermark image:

UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here

If we don't want an image and want text instead:

CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica";
titleLayer.fontSize = videoSize.height / 6; //videoSize is defined below -- in practice, compute it before this block
//titleLayer.shadowOpacity = 0.5; //Optional: give the text a shadow
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); //You may need to adjust this for proper display
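
Several commenters ask how to choose a proper frame for the CATextLayer. One reasonable approach (an illustration of mine, not part of the original answer) is to derive it from the video size; sketched in Swift:

    import UIKit

    // `videoSize` is assumed to come from the video track's naturalSize.
    // Pin a text band to the bottom of the video (layer origin is bottom-left
    // when rendered by AVVideoCompositionCoreAnimationTool).
    let textHeight = videoSize.height / 6
    let titleLayer = CATextLayer()
    titleLayer.string = "Text goes here"
    titleLayer.fontSize = textHeight * 0.8
    titleLayer.alignmentMode = .center
    titleLayer.contentsScale = UIScreen.main.scale // keeps the text crisp
    titleLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: textHeight)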

The following code stacks the layers in the proper order:

CGSize videoSize = [clipVideoTrack naturalSize]; //read the size from the track; the asset-level naturalSize is deprecated
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];   
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; //ONLY IF WE ADDED TEXT

Now we create the composition and add the instructions to insert the layer:

AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];

And now we are ready to export:

_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];//AVAssetExportPresetPassthrough   
_assetExport.videoComposition = videoComp;

NSString* videoName = @"mynewwatermarkedvideo.mov";

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL    *exportUrl = [NSURL fileURLWithPath:exportPath];

if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) 
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

_assetExport.outputFileType = AVFileTypeQuickTimeMovie; 
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;

[strRecordedFilename setString: exportPath];

[_assetExport exportAsynchronouslyWithCompletionHandler:
 ^(void ) {
     [_assetExport release];
     //YOUR FINALIZATION CODE HERE
 }       
 ];   

[videoAsset release];
//[audioAsset release]; //only if you also created a separate audio asset, which is not shown above
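
Note: as written, the composition contains only the video track, which is why a commenter below reports that the exported video plays silently. The missing step is to copy the source's audio track into the composition as well; a sketch in Swift (the original answer is Objective-C, but the calls map one-to-one):

    // Copy the source audio into the composition so the export keeps sound.
    if let sourceAudio = videoAsset.tracks(withMediaType: .audio).first {
        let audioTrack = mixComposition.addMutableTrack(
            withMediaType: .audio,
            preferredTrackID: kCMPersistentTrackID_Invalid)
        try? audioTrack?.insertTimeRange(
            CMTimeRange(start: .zero, duration: videoAsset.duration),
            of: sourceAudio,
            at: .zero)
    }
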
Lilley answered 10/1, 2012 at 2:43 Comment(15)
Thank you Julio. I have removed the functionality from my app, but this code will really help someone. If I implement this functionality again I will use your code. Thank you very much.Trona
No problem. Happy to be able to help :)Lilley
One problem I did find with this method is that it crashes if the app is backgrounded.Fermi
Did any of you get the text layer to work? I tried but couldn't get text to show. See my question: #10282372Gladi
When I record video using UIImagePickerController and use the above code, it rotates my video to landscape. If I save directly to the photo album it is saved properly, like the default camera's recordings, but after applying this code it is saved in landscape mode. Any help?Parody
This only works once the recording is already done. Is there any way to do it while recording the video (adding the watermark is possible, but what about saving the video with the watermark)?Monahon
How would I modify this to only show the watermark for a second or two at the start of the video? See this question #21685049Sherry
Like Ruchir, I'm also seeing that it rotates to landscape mode if shot in portrait :(Dianemarie
This code works perfectly in iOS 7 but not in iOS 8. If anyone knows why, please tell me or post updated code.Rici
@JulioBailon Can you please suggest how to set a proper frame for the CATextLayer?Subadar
@DipenChudasama Could you be a little more specific?Lilley
@JulioBailon Thanks for your reply; the issue is solved now :)Subadar
Like @Dianemarie and Ruchir, I am also facing the same problem. Did you find any solution? Please let me know.Holter
Hello, this solved my problem, but when I play the video the audio is missing; it plays silently.Witchy
I didn't see any result at first, but I had missed setting assetExport.videoComposition = videoComp; on the exporter.Rowney
9

The answer given by @Julio works fine for Objective-C. Here is the same code base for Swift 3.0:

WATERMARK & Generating SQUARE or CROPPED video like Instagram

Get the output file location from the Documents directory and create the input asset:

    //output file
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
    let outputPath = documentsURL?.appendingPathComponent("squareVideo.mov")
    if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
        do {
           try FileManager.default.removeItem(atPath: (outputPath?.path)!)
        }
        catch {
            print ("Error deleting file")
        }
    }



    //input file
    let asset = AVAsset.init(url: filePath)
    print (asset)
    let composition = AVMutableComposition.init()
    composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    //input clip
    let clipVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

Create the layer with the watermark image:

    //adding the image layer
    let imglogo = UIImage(named: "video_button")
    let watermarkLayer = CALayer()
    watermarkLayer.contents = imglogo?.cgImage
    watermarkLayer.frame = CGRect(x: 5, y: 25 ,width: 57, height: 57)
    watermarkLayer.opacity = 0.85

Create the layer with Text as watermark instead of image:

    let textLayer = CATextLayer()
    textLayer.string = "Nodat"
    textLayer.foregroundColor = UIColor.red.cgColor
    textLayer.font = UIFont.systemFont(ofSize: 50)
    textLayer.alignmentMode = kCAAlignmentCenter
    textLayer.bounds = CGRect(x: 5, y: 25, width: 100, height: 20)

Add the layers over the video in the proper order for the watermark:

    let videoSize = clipVideoTrack.naturalSize
    let parentlayer = CALayer()
    let videoLayer = CALayer()

    parentlayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    parentlayer.addSublayer(videoLayer)
    parentlayer.addSublayer(watermarkLayer)
    parentlayer.addSublayer(textLayer) //for text layer only

Crop the video to a square format, 300×300 in size:

 //make it square
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 300, height: 300) //change it as per your needs.
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderScale = 1.0

    //Magic line for adding watermark to the video
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30)) //hardcoded to 60 s; use asset.duration to cover the whole clip

Rotate to Portrait

    //rotate to portrait
    let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
    let t1 = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: -(clipVideoTrack.naturalSize.width - clipVideoTrack.naturalSize.height) / 2)
    let t2: CGAffineTransform = t1.rotated(by: .pi/2)
    let finalTransform: CGAffineTransform = t2
    transformer.setTransform(finalTransform, at: kCMTimeZero)
    instruction.layerInstructions = [transformer]
    videoComposition.instructions = [instruction]

Final step to export the video

    let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetMediumQuality)
    exporter?.outputFileType = AVFileTypeQuickTimeMovie
    exporter?.outputURL = outputPath
    exporter?.videoComposition = videoComposition

    //the completion handler takes no parameters
    exporter?.exportAsynchronously { () -> Void in
        if exporter?.status == .completed {
            print("Export complete")
            DispatchQueue.main.async {
                completion(outputPath)
            }
            return
        } else if exporter?.status == .failed {
            print("Export failed - \(String(describing: exporter?.error))")
        }
        completion(nil)
        return
    }

This will export the video, cropped square, with the watermark as text or an image.

Thanks

Orthochromatic answered 25/7, 2017 at 13:3 Comment(2)
Thank you, but this code gives me a rotated and distorted video!Aberrant
Export is too slow for some reason. This only happens when there is a videoComposition.Epstein
2

Simply download the code and use it. It is on the Apple developer documentation page:

http://developer.apple.com/library/ios/#samplecode/AVSimpleEditoriOS/Listings/AVSimpleEditor_AVSERotateCommand_m.html

Purdah answered 6/3, 2013 at 5:4 Comment(0)
0

To add a CALayer to a video, I worked from the Swift example code found on mikitamanko's blog and made a few small changes to fix the following error:

Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The video could not be composed., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x2830559b0 {Error Domain=NSOSStatusErrorDomain Code=-17390 "(null)"}}

The solution is to use the composition's video track instead of the original video track when setting the layer instruction, as in the following Swift 5 code:

    static func addSketchLayer(url: URL, sketchLayer: CALayer, block: @escaping (Result<URL, VideoExportError>) -> Void) {
        let composition = AVMutableComposition()
        let vidAsset = AVURLAsset(url: url)
        
        let videoTrack = vidAsset.tracks(withMediaType: AVMediaType.video)[0]
        let duration = vidAsset.duration
        let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: duration)
        
        let videoRect = CGRect(origin: .zero, size: videoTrack.naturalSize)
        let transformedVideoRect = videoRect.applying(videoTrack.preferredTransform)
        let size = transformedVideoRect.size
                
        let compositionvideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!
        
        try? compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
        
        let videolayer = CALayer()
        videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        videolayer.opacity = 1.0
        sketchLayer.contentsScale = 1
        
        let parentlayer = CALayer()
        parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        sketchLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        parentlayer.addSublayer(videolayer)
        parentlayer.addSublayer(sketchLayer)
        
        let layercomposition = AVMutableVideoComposition()
        layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        layercomposition.renderScale = 1.0
        layercomposition.renderSize = CGSize(width: size.width, height: size.height)

        layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videolayer], in: parentlayer)
        
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
        let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionvideoTrack)
        layerinstruction.setTransform(compositionvideoTrack.preferredTransform, at: CMTime.zero)
        instruction.layerInstructions = [layerinstruction] as [AVVideoCompositionLayerInstruction]
        layercomposition.instructions = [instruction] as [AVVideoCompositionInstructionProtocol]
        
        let compositionAudioTrack:AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
        let audioTracks = vidAsset.tracks(withMediaType: AVMediaType.audio)
        for audioTrack in audioTracks {
            try? compositionAudioTrack?.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: CMTime.zero)
        }
        
        let movieDestinationUrl = URL(fileURLWithPath: NSTemporaryDirectory() + "/exported.mp4")
        try? FileManager().removeItem(at: movieDestinationUrl)
        
        let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality)!
        assetExport.outputFileType = AVFileType.mp4
        assetExport.outputURL = movieDestinationUrl
        assetExport.videoComposition = layercomposition
        
        assetExport.exportAsynchronously(completionHandler: {
            switch assetExport.status {
            case AVAssetExportSessionStatus.failed:
                print(assetExport.error ?? "unknown error")
                block(.failure(.failed))
            case AVAssetExportSessionStatus.cancelled:
                print(assetExport.error ?? "unknown error")
                block(.failure(.canceled))
            default:
                block(.success(movieDestinationUrl))
            }
        })
    }

enum VideoExportError: Error {
    case failed
    case canceled
}

Note that, according to AVFoundation Crash on Exporting Video With Text Layer, this code crashes only on the simulator but works on a real device.

Also note that the width and height are used after applying the preferred video transform.

Unger answered 30/6, 2020 at 7:21 Comment(0)
-2

Here's an example in Swift 3 of how to insert both animated (an array of images/slides/frames) and static image watermarks into a recorded video.

It uses CAKeyframeAnimation to animate the frames, and it uses AVMutableCompositionTrack, AVAssetExportSession and AVMutableVideoComposition together with AVMutableVideoCompositionInstruction to combine everything.
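
The example itself is not reproduced in this post, but the core idea can be sketched as follows (a sketch of mine, with `frames` and `videoSize` as assumed inputs):

    import AVFoundation
    import QuartzCore

    // Build a video composition whose overlay layer cycles through an
    // array of watermark frames via a keyframe animation on `contents`.
    func animatedWatermarkComposition(frames: [CGImage],
                                      videoSize: CGSize) -> AVMutableVideoComposition {
        let overlay = CALayer()
        overlay.frame = CGRect(x: 20, y: 20, width: 120, height: 120) // placeholder position

        let animation = CAKeyframeAnimation(keyPath: "contents")
        animation.values = frames
        animation.calculationMode = .discrete
        animation.duration = 1.0              // one pass through the frames
        animation.repeatCount = .infinity
        // Both settings are required for layers rendered offline by
        // AVVideoCompositionCoreAnimationTool:
        animation.beginTime = AVCoreAnimationBeginTimeAtZero
        animation.isRemovedOnCompletion = false
        overlay.add(animation, forKey: "contents")

        let parentLayer = CALayer()
        let videoLayer = CALayer()
        parentLayer.frame = CGRect(origin: .zero, size: videoSize)
        videoLayer.frame = parentLayer.frame
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(overlay)

        let composition = AVMutableVideoComposition()
        composition.renderSize = videoSize
        composition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        composition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayers: [videoLayer], in: parentLayer)
        return composition
    }

As in the other answers, you would still attach an AVMutableVideoCompositionInstruction for the clip's time range and pass the composition to an AVAssetExportSession.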

Multicolor answered 21/5, 2017 at 20:36 Comment(0)
