How do I export UIImage array as a movie?
I have a serious problem: I have an NSArray with several UIImage objects, and I want to create a movie from those UIImages. But I don't have any idea how to do so.

I hope someone can help me or send me a code snippet that does something like what I want.

Edit: For future reference - After applying the solution, if the video looks distorted, make sure the width of the images/area you are capturing is a multiple of 16. Found after many hours of struggle here:
Why does my movie from UIImages gets distorted?

Here is the complete solution (just ensure the width is a multiple of 16):
http://codethink.no-ip.org/wordpress/archives/673
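
If you need to clamp an arbitrary width before rendering, a minimal Swift sketch (the 470-point input is just a made-up example):

import CoreGraphics

let rawWidth: CGFloat = 470                        // hypothetical capture width
let safeWidth = CGFloat(Int(rawWidth) / 16 * 16)   // 464, the nearest multiple of 16 at or below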

Forequarter answered 18/9, 2010 at 10:29 Comment(4)
@zoul: Tags should cover what the question is about, not possible solutions.Bacchae
Why not? There’s already a post for both AVFoundation and FFmpeg. If you were looking for some AVFoundation related info, wouldn’t you like to see this thread? (Or is that a consensus from Meta?)Woodwork
@zoul: The tags narrow the question down ( "A tag is a keyword or label that categorizes your question" ), with adding those two you'd be changing the context. I thought this to be obvious but if i stumble about something on meta i'll let you know. Alternatively start a discussion there.Bacchae
There is no Dana, there is only Zoul. [sorry for off-topic, but I couldn't resist]Bailor

Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

1) Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain]; //retain should be removed if ARC

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero]; // the source time stamps your first frame; kCMTimeZero is a safe default

3) Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];
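
// A sketch of the adaptor route (assuming `adaptor` is an
// AVAssetWriterInputPixelBufferAdaptor created with writerInput, and
// pixelBuffer/presentTime come from your own conversion and timing code):
// [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];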

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; // optional; you can call finishWriting without specifying an end time
[videoWriter finishWriting]; // deprecated in iOS 6
/*
[videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
*/

You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, 
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, 4*frameSize.width, rgbColorSpace, 
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames (use CGAffineTransformIdentity if you need no transform). Note that the method follows the Core Foundation “create” rule: the caller owns the returned buffer and should CVPixelBufferRelease() it after appending.

Woodwork answered 18/9, 2010 at 15:20 Comment(23)
Ok, I just don't get it. I want to write a UIImage object to the AVAssetWriterInputPixelBufferAdaptor, but I don't know how to convert the image to the appropriate format. Can you help me again, please?Forequarter
Though this does work, drawing into a CGImage only to draw that into a CGBitmapContext backed by CVPixelBuffer is wasteful. Similarly, instead of creating a CVPixelBuffer each time, AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool should be used to recycle buffers.Norword
@rpetrich: Thank you. What is the direct way from a CGImage to CVPixelBuffer then?Woodwork
zoul: There isn't one. My point is that you shouldn't encourage CGImage as a source format.Norword
Well what should you do then, when you have the source data as regular image files?Woodwork
i'm with zoul... what should we do if our data source consists of many images? i run into memory warnings around 30 or less seconds of encoding images to .mov and i can't figure out where the memory is building upAlexandrite
That's an awesome piece of code zoul, i got it working with set of images. Now, I want to add Audio as well, any clues how to do that? I guess I will need something like pixelBufferFromCGImage to get buffer data out of a sound file.Rhymester
I think you’ll have to create a mutable AV composition, insert the video you have created and then insert the sound track.Woodwork
After calling appendSampleBuffer:, if I remember correctly.Woodwork
@huesforalice: I think that background rendering is simply not supported, as the video hardware is probably needed for something else. I think you’ll have to cancel the rendering job and start it from scratch when the app returns to foreground.Woodwork
Zoul, thanks for your comment. I will try to look into this further. There actually is an app (iStopMotion) which does support bg rendering, but possibly on a different / own framework.Sovereign
@Woodwork I'm using this code to capture my screen, but it gives a bluish video, and the image rendered from the screen is correct (I have tested it with the UIImageWriteToSavedPhotosAlbum() method). Why does this happen? How can I solve this?Cathouse
@Norword @Woodwork May I ask why use CVPixelBuffer instead of CMSampleBufferRef? Isn't CMSampleBufferRef the parameter type of appendSampleBuffer:? BTW, I am using AVFoundation in OS X.Skater
Your code works great, but I'm getting a compiler warning about implicit conversion from CGImageAlphaInfo to CGBitmapInfo in CGBitmapContextCreate(). What's the correct parameter to use instead of kCGImageAlphaNoneSkipFirst?Lattimer
@XiaochaoYang, the docs say: “The constants for specifying the alpha channel information are declared with the CGImageAlphaInfo type but can be passed to this parameter safely.” Which means that you can probably just hand-cast the constant to CGBitmapInfo and forget about the issue.Woodwork
@Woodwork Thanks Zoul. I'm doing a casting to suppress warning. It only happens in the new Xcode 5Lattimer
This will help someone, one day #9692146Roaster
@AndrewChang I believe that the CVPixelBuffer only holds the pixel/image data, whereas the CMSampleBufferRef also includes timing information of the frame.Docker
@AndyHin Zoul, can we ignore the compiler warning from your code? Your code converts to a CVPixelBufferRef but the AssetWriter's "appendSampleBuffer:" wants to receive a CMSampleBufferRef - so the compiler is throwing a warning. Is the warning inconsequential?Guipure
Do we repeatedly call "[writerInput appendSampleBuffer:sampleBuffer];" for each and every frame?Guipure
@Guipure see my answer here https://mcmap.net/q/129689/-avfoundation-reverse-an-avasset-and-output-video-file?noredirect=1 it is related and might help you.Docker
Just in case it helps - I took all the feedback from the questions and other posts and expanded Zoul's helpful code above into complete working code for iOS8 and posted it below.Guipure
If you use the image to pixelbuffer here to render with Metal, make sure to add [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey as explained herePentha

Update To Swift 5

Last week I set out to write the iOS code to generate a video from images. I had a little bit of AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and also here. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.

NOTE: If you copy/paste all the code below into a single Swift file, it should compile. You'll just need to tweak loadImages() and the RenderSettings values.

Part 1: Setting things up

Here I group all the export-related settings into a single RenderSettings struct.

import AVFoundation
import UIKit
import Photos

struct RenderSettings {

var size : CGSize = .zero
var fps: Int32 = 6   // frames per second
var avCodecKey = AVVideoCodecType.h264
var videoFilename = "render"
var videoFilenameExt = "mp4"


var outputURL: URL {
    // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
    // Using the CachesDirectory ensures the file won't be included in a backup of the app.
    let fileManager = FileManager.default
    if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
        return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
    }
    fatalError("URLForDirectory() failed")
}
}

Part 2: The ImageAnimator

The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function which gets called at the end of the chain to save the video to the Photo Library.

class ImageAnimator {

// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600

let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!

var frameNum = 0

class func saveToLibrary(videoURL: URL) {
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }

        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        }) { success, error in
            if !success {
                print("Could not save video to photo library:", error)
            }
        }
    }
}

class func removeFileAtURL(fileURL: URL) {
    do {
        try FileManager.default.removeItem(atPath: fileURL.path)
    }
    catch _ as NSError {
        // Assume file doesn't exist.
    }
}

init(renderSettings: RenderSettings) {
    settings = renderSettings
    videoWriter = VideoWriter(renderSettings: settings)
    //images = loadImages()
}

func render(completion: (()->Void)?) {

    // The VideoWriter will fail if a file exists at the URL, so clear it out first.
    ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

    videoWriter.start()
    videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
        ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
        completion?()
    }

}

// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {

    let frameDuration = CMTimeMake(value: Int64(ImageAnimator.kTimescale / settings.fps), timescale: ImageAnimator.kTimescale)
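    // With the defaults above (kTimescale = 600, fps = 6), frameDuration is
    // 100/600 of a second, i.e. each image is on screen for 1/6 s.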

    while !images.isEmpty {

        if writer.isReadyForData == false {
            // Inform writer we have more buffers to write.
            return false
        }

        let image = images.removeFirst()
        let presentationTime = CMTimeMultiply(frameDuration, multiplier: Int32(frameNum))
        let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
        if success == false {
            fatalError("addImage() failed")
        }

        frameNum += 1
    }

    // Inform writer all buffers have been written.
    return true
}
}

Part 3: The VideoWriter

The VideoWriter class does all the AVFoundation heavy lifting. It's mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains fancy code (written by not me) that knows how to translate an image into a CVPixelBuffer.

class VideoWriter {

let renderSettings: RenderSettings

var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!

var isReadyForData: Bool {
    return videoWriterInput?.isReadyForMoreMediaData ?? false
}

class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {

    var pixelBufferOut: CVPixelBuffer?

    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
    if status != kCVReturnSuccess {
        fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
    }

    let pixelBuffer = pixelBufferOut!

    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    let data = CVPixelBufferGetBaseAddress(pixelBuffer)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                            bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

    context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))

    let horizontalRatio = size.width / image.size.width
    let verticalRatio = size.height / image.size.height
    //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
    let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

    let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

    let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
    let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

    context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    return pixelBuffer
}

init(renderSettings: RenderSettings) {
    self.renderSettings = renderSettings
}

func start() {

    let avOutputSettings: [String: Any] = [
        AVVideoCodecKey: renderSettings.avCodecKey,
        AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
        AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
    ]

    func createPixelBufferAdaptor() {
        let sourcePixelBufferAttributesDictionary = [
            kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
            kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
            kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
        ]
        pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                  sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    }

    func createAssetWriter(outputURL: URL) -> AVAssetWriter {
        guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4) else {
            fatalError("AVAssetWriter() failed")
        }

        guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
            fatalError("canApplyOutputSettings() failed")
        }

        return assetWriter
    }

    videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
    videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)

    if videoWriter.canAdd(videoWriterInput) {
        videoWriter.add(videoWriterInput)
    }
    else {
        fatalError("canAddInput() returned false")
    }

    // The pixel buffer adaptor must be created before we start writing.
    createPixelBufferAdaptor()

    if videoWriter.startWriting() == false {
        fatalError("startWriting() failed")
    }

    videoWriter.startSession(atSourceTime: CMTime.zero)

    precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}

func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {

    precondition(videoWriter != nil, "Call start() to initialize the writer")

    let queue = DispatchQueue(label: "mediaInputQueue")
    videoWriterInput.requestMediaDataWhenReady(on: queue) {
        let isFinished = appendPixelBuffers?(self) ?? false
        if isFinished {
            self.videoWriterInput.markAsFinished()
            self.videoWriter.finishWriting() {
                DispatchQueue.main.async {
                    completion?()
                }
            }
        }
        else {
            // Fall through. The closure will be called again when the writer is ready.
        }
    }
}

func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

    precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")

    let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
    return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}
}

Part 4: Make it happen

Once everything is in place, kicking it off takes just a few lines. Note that RenderSettings needs a non-zero size and ImageAnimator needs its images array assigned, or the writer will fail (see the comments below):

var settings = RenderSettings()
settings.size = CGSize(width: 640, height: 480)   // must be non-zero
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = myImages                   // your [UIImage] array
imageAnimator.render() {
    print("yes")
}
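
If you just need throwaway frames to test with, here is a minimal sketch of an image source (the solid-color rendering is my own assumption, not part of the pipeline above; UIGraphicsImageRenderer requires iOS 10+):

import UIKit

// Hypothetical helper: generate `count` solid-color frames for testing.
func makeTestImages(count: Int, size: CGSize) -> [UIImage] {
    let renderer = UIGraphicsImageRenderer(size: size)
    return (0..<count).map { i in
        renderer.image { ctx in
            UIColor(hue: CGFloat(i) / CGFloat(count),
                    saturation: 1, brightness: 1, alpha: 1).setFill()
            ctx.fill(CGRect(origin: .zero, size: size))
        }
    }
}

// e.g. imageAnimator.images = makeTestImages(count: 30, size: settings.size)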
Lightface answered 29/3, 2016 at 17:9 Comment(14)
Oh man, wish you had posted one day sooner. :) Just finished porting a Swift version, added to this thread. Incorporated your save to library function, hope you don't mind.Theorem
@Theorem - I hear ya! I had a feeling it was being worked on. Still, was a good exercise in understanding it all and breaking apart the various bits.Lightface
Crashalot and @Scott Raposa. This is a comment related to this question but in a special test case. Your code is absolutely phenomenal and amazing, but it seems to not be working for a SINGLE image, where images.count == 1. I have changed the code to try and address this issue, but it seems to be incredibly difficult. Any help from you guys would be absolutely amazing. I have also asked a question here at #38036309 …. I was hoping the special case of images.count == 1 could be addressed! Thanks!Broach
@Broach I don't have the code running at the moment so I can't test it specifically for that case. I'm not sure why it wouldn't work though. When you say it doesn't work, what are you seeing? Maybe play with the fps setting. Bring it to 1. Or if you have it working with 2 images, then just duplicate the image.Lightface
Say I have 1 image, right now I am duplicating it and it works. But for whatever reason it doesn't work with a single image. I figured I would just inform you! Using images = [image, image] works but a single image does not. Let's say I change the time to 10sec, then I see a black screen up until the very last frame. Quite strange.Broach
Do you know how to add sound to this video? (like a sound recorded within the app?)Cestode
@RenatoPimpão Here's the basic idea: After the final render is complete, you can load the video from the URL into an AVAsset. You also need your audio in the form of an AVAsset. From there, create a new AVMutableComposition, add the video track (AVMediaTypeVideo) and then audio track (AVMediaTypeAudio) and then export using AVAssetExportSession. Google around and you'll probably find lots of examples.Lightface
Has anyone had issues running this in iOS 10? I was using it before just fine in iOS 8-9, now getting an error about pixel buffer being null. Tried the Obj-C answers and got the same problem.Alimony
For me, it fails at fatalError("canApplyOutputSettings() failed") (iOS 11, Swift 4). Any fixes?Matherne
For the fatalError("canApplyOutputSettings() failed") failure, you need to set RenderSettings's size value to something other than .zero.Bathtub
if you're having trouble copying, the above code is here: github.com/dldnh/make-movie/blob/main/make-movie.swiftEthiopian
One thing to note is you excluded the "if(Int(width) % 16 != 0)" statement from the other implementations. This should probably be added to the RendererSettings? Otherwise, I like you're using the asset writer pixel buffer pool. The other implementations blow up memory because they're creating a pixel buffer for each image. Thanks for posting this!Rustyrut
Looks like some close brackets are missing.Reprisal
How do I feed images continuously?Troy

Here is the latest working code on iOS8 in Objective-C.

We had to make a variety of tweaks to @Zoul's answer above to get it to work on the latest version of Xcode and iOS8. Here is our complete working code that takes an array of UIImages, makes them into a .mov file, saves it to a temp directory, then moves it to the camera roll. We assembled code from multiple different posts to get this working. We have highlighted the traps we had to solve to get the code working in our comments.

(1) Create a collection of UIImages

// Call this to kick off the export:
[self saveMovieToLibrary];


- (IBAction)saveMovieToLibrary
{
    // You just need the height and width of the video here
    // For us, our input and output video was 640 height x 480 width
    // which is what we get from the iOS front camera
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    // You can save a .mov or a .mp4 file        
    //NSString *fileNameOut = @"temp.mp4";
    NSString *fileNameOut = @"temp.mov";

    // We chose to save in the tmp/ directory on the device initially
    NSString *directoryOut = @"tmp/";
    NSString *outFile = [NSString stringWithFormat:@"%@%@",directoryOut,fileNameOut];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:outFile];
    NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];

    // WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
    NSFileManager *fileManager = [NSFileManager defaultManager];
    [fileManager removeItemAtPath:[videoTempURL path]  error:NULL];


    // Create your own array of UIImages        
    NSMutableArray *images = [NSMutableArray array];
    for (int i=0; i<singleton.numberOfScreenshots; i++)
    {
        // This was our routine that returned a UIImage. Just use your own.
        UIImage *image =[self uiimageFromCopyOfPixelBuffersUsingIndex:i];
        // We used a routine to write text onto every image 
        // so we could validate the images were actually being written when testing. This was it below. 
        image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i",i ]];
        [images addObject:image];     
    }

// If you just want to manually add a few images - here is code you can uncomment
// NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
//    NSArray *images = [[NSArray alloc] initWithObjects:
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_en.png"], nil];



    [self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}

This is the main method that creates your AssetWriter and adds images to it for writing.

(2) Wire up an AVAssetWriter

-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{

    NSError *error = nil;

    // FIRST, start up an AVAssetWriter instance to write your video
    // Give it a destination path (for us: tmp/temp.mov)
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];


    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

(3) Start a writing Session (NOTE: the method is continuing from above)

    //Start a SESSION of writing. 
    // After you start a session, you will keep adding image frames 
    // until you are complete - then you will tell it you are done.
    [videoWriter startWriting];
    // This starts your video at time = 0
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;

    // This was just our utility class to get screen sizes etc.    
    ATHSingleton *singleton = [ATHSingleton singletons];

    int i = 0;
    while (1)
    {
        // Check if the writer is ready for more data; if not, just spin.
        // (This busy-waits; requestMediaDataWhenReadyOnQueue:usingBlock: avoids
        // that, as noted in the comments below.)
        if(writerInput.readyForMoreMediaData){

            CMTime frameTime = CMTimeMake(150, 600);
            // CMTime = Value and Timescale.
            // Timescale = the number of tics per second you want
            // Value is the number of tics
            // For us - each frame we add will be 1/4th of a second
            // Apple recommend 600 tics per second for video because it is a 
            // multiple of the standard video rates 24, 30, 60 fps etc.
            CMTime lastTime=CMTimeMake(i*150, 600);
            CMTime presentTime=CMTimeAdd(lastTime, frameTime);

            if (i == 0) {presentTime = CMTimeMake(0, 600);} 
            // This ensures the first frame starts at 0.


            if (i >= [array count])
            {
                buffer = NULL;
            }
            else
            {
                // This grabs the next UIImage, converts it to a CGImage,
                // and wraps it in a CVPixelBufferRef for the adaptor
                buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
            }


            if (buffer)
            {
                // Give the pixel buffer to the adaptor to add to your video
                [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
                // Release the buffer after appending or it will leak
                // (see the memory-leak comment below)
                CVPixelBufferRelease(buffer);
                i++;
            }
            else
            {

(4) Finish the Session (Note: Method continues from above)

                //Finish the session:
                // This is important to be done exactly in this order
                [writerInput markAsFinished];
                // WARNING: finishWriting in the solution above is deprecated. 
                // You now need to give a completion handler.
                [videoWriter finishWritingWithCompletionHandler:^{
                    NSLog(@"Finished writing...checking completion status...");
                    if (videoWriter.status == AVAssetWriterStatusCompleted)
                    {
                        NSLog(@"Video writing succeeded.");

                        // Move video to camera roll
                        // NOTE: You cannot write directly to the camera roll. 
                        // You must first write to an iOS directory then move it!
                        NSURL *videoTempURL = [NSURL fileURLWithPath:path];
                        [self saveToCameraRoll:videoTempURL];

                    } else
                    {
                        NSLog(@"Video writing failed: %@", videoWriter.error);
                    }

                }]; // end videoWriter finishWriting Block

                CVPixelBufferPoolRelease(adaptor.pixelBufferPool);

                NSLog (@"Done");
                break;
            }
        }
    }    
}

(5) Convert your UIImages to a CVPixelBufferRef
This method will give you a CV pixel buffer reference which is needed by the AssetWriter. This is obtained from a CGImageRef which you get from your UIImage (above).

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    // This again was just our utility class for the height & width of the
    // incoming video (640 height x 480 width)
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
                                          height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                          &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, 4*width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0)); // a 0-degree rotation is a no-op; rotate here if your frames need it
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

(6) Move Your Video to the Camera Roll

Because AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you chose above) to the camera roll.

- (void) saveToCameraRoll:(NSURL *)srcURL
{
    NSLog(@"srcURL: %@", srcURL);

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
    ^(NSURL *newURL, NSError *error) {
        if (error) {
            NSLog( @"Error writing image with metadata to Photo Library: %@", error );
        } else {
            NSLog( @"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
        }
    };

    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
    {
        [library writeVideoAtPathToSavedPhotosAlbum:srcURL
                                    completionBlock:videoWriteCompletionBlock];
    }
}
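
Note: ALAssetsLibrary has been deprecated since iOS 9. For reference, a minimal Swift sketch of the equivalent save using the Photos framework (mirroring the PHPhotoLibrary code in the Swift answers on this page):

import Photos

// Save a rendered video file to the photo library via the Photos framework.
func saveToPhotoLibrary(videoURL: URL) {
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        }) { success, error in
            if !success {
                print("Could not save video:", error ?? "unknown error")
            }
        }
    }
}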

Zoul's answer above gives a nice outline of what you will be doing. We extensively commented this code so you can then see how it was done using working code.

Guipure answered 12/6, 2015 at 10:39 Comment(24)
I used the code and it creates the video, i had a problem at first because the pictures would get cropped but i fixed that by doubling the screen size and redoing the singleton. My problem now is that in the video doesnt take up the whole screen and causes the images to be cramped and messes up the aspect ratio. Any suggestions?Merely
@Merely Are you not able to change the output video size using this line: NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoCodecH264, AVVideoCodecKey, [NSNumber numberWithInt:size.width], AVVideoWidthKey, [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];Guipure
Yes i was @Praxiteles, but the aspect ratio gets messed up, how would i include the image? do i have to create the black/white space around the image to keep it looking how its supposed to within the video size? Also, i get the sizes (320 Width x 568 Height, iPhone 5) so i double that and send that to the options but it only goes full screen in landscape view mode?Merely
Also, ive been messing with the time for each image, but in every case, the first image gets displayed for twice as long as the rest. Cant explain why though.Merely
Does anyone have a solution in Swift 2.1?Wreckfish
Does anyone have a solution in Swift ?Blesbok
@KyleKIM did you find a solution in Swift ?Blesbok
Yes. Think of this as creating a stack of images. Just write your middle images with the watermark.Guipure
Are there any solutions to add Audio and text also to the outputted video ?Juno
@Juno To add text, just write text onto the images that you are adding to the video but before you add them. There are lots of examples of how to write text into images on SO. For audio, I expect you could add that to the video as a step after the video aspect is complete.Guipure
@Blesbok did you find a Swift solution?Theorem
@Guipure How many images did you convert and how long did the process take, e.g., ~100 images per second? 30 images per second?Theorem
@Theorem We didn't time it as we were processing about 50 images and the process was virtually undetectable to our end users (i.e. 1 second or less for those 50 images)Guipure
@Guipure cool thanks that performance is more than sufficient. you didn't happen to port this to swift did you?Theorem
@Theorem I have not ported it to Swift yetGuipure
ok thanks. once we're done porting something, would you like a version or no?Theorem
@Theorem Yes that would be great. You could add it as an answer here - notice many others request it - or feel free to message me. It would be great to migrate our code.Guipure
@Guipure in the process of converting your code but wondering why you didn't use requestMediaDataWhenReadyOnQueue:usingBlock: for the avassetwriterinput. seems like you could avoid the while loop by using this?Theorem
Hey all. I needed a Swift solution as well. Just posted my code as an answer here.Lightface
@Guipure finished porting a Swift version, please suggest improvements if you see problems: #3741823Theorem
@user6April That is our own class as described in the comment. Get rid of it and set your own height and width. Something like this: int height = ####; int width =####; using whatever height and width you need.Guipure
Anyone have a solution for memory issues? Around 3 or 4 hundred images, my app is killed. I'm passing an array of image file paths and loading each image on demand, but it seems the memory still stacks upDumanian
Thanks for this awesome answer. Has anyone gotten this working in iOS 10? I had it working in iOS 9, and suddenly, it's always unable to add the pixel buffer ([adaptor appendPixelBuffer:buffer withPresentationTime:presentTime]; returns NO).Alimony
@Guipure HI, thanks for your works, but I find there are some memory leaks in your code, you should CVPixelBufferRelease(buffer); after [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime], sorry for my ugly English.Megdal

I took Zoul's main ideas, incorporated the AVAssetWriterInputPixelBufferAdaptor method, and made the beginnings of a little framework out of it.

Feel free to check it out and improve upon it! CEMovieMaker

Antihistamine answered 17/9, 2014 at 21:3 Comment(3)
@CameronE Good one, but I have an issue: what if my video is 1080*1920, i.e. the iPhone 5s/6/6plus rear camera resolution? The video messes up in this situation. Please help me.Meadowsweet
Hi, can you let me know how I can slow down the rate at which images appear in the video?Cease
How do I add a delay to the video, i.e. change how long each frame appears?Jehanna

Here's a Swift 2.x version tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles along with code from @acj contributed for another question. The code from @acj is here: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b. @TimBull also provided code as well.

Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer and several other functions, let alone understood how to use them.

What you see below was cobbled together mostly by trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.

Usage:

import UIKit
import AVFoundation
import Photos

writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)
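
Here yourImages is your [UIImage] array, yourSize the target video size, and yourPath any writable file path; a minimal Swift 2 sketch of the path (the temp-directory location and file name are my own assumptions):

// Hypothetical: a writable path in the temp directory for the rendered movie.
let yourPath = (NSTemporaryDirectory() as NSString).stringByAppendingPathComponent("render.mp4")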

Code:

func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
    // Create AVAssetWriter to write video
    guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
    let sourceBufferAttributes : [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
        kCVPixelBufferWidthKey as String : videoSize.width,
        kCVPixelBufferHeightKey as String : videoSize.height,
        ]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start writing session
    assetWriter.startWriting()
    assetWriter.startSessionAtSourceTime(kCMTimeZero)
    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")
        return
    }

    // -- Create queue for <requestMediaDataWhenReadyOnQueue>
    let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)

    // -- Set video parameters
    let frameDuration = CMTimeMake(1, videoFPS)
    var frameCount = 0

    // -- Add images to video
    let numImages = allImages.count
    writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
        // Append unadded images to video but only while input ready
        while (writerInput.readyForMoreMediaData && frameCount < numImages) {
            let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
            let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

            if !self.appendPixelBufferForImageAtURL(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
                print("Error converting images to video: AVAssetWriterInputPixelBufferAdapter failed to append pixel buffer")
                return
            }

            frameCount += 1
        }

        // No more images to add? End video.
        if (frameCount >= numImages) {
            writerInput.markAsFinished()
            assetWriter.finishWritingWithCompletionHandler {
                if (assetWriter.error != nil) {
                    print("Error converting images to video: \(assetWriter.error)")
                } else {
                    self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
                    print("Converted images to movie @ \(videoPath)")
                }
            }
        }
    })
}


func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
    // Convert <path> to NSURL object
    let pathURL = NSURL(fileURLWithPath: path)

    // Return new asset writer or nil
    do {
        // Create asset writer
        let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)

        // Define settings for video input
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : size.width,
            AVVideoHeightKey : size.height,
            ]

        // Add video input to writer
        let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        newWriter.addInput(assetWriterVideoInput)

        // Return writer
        print("Created asset writer for \(size.width)x\(size.height) video")
        return newWriter
    } catch {
        print("Error creating asset writer: \(error)")
        return nil
    }
}


func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
    var appendSucceeded = false

    autoreleasepool {
        if  let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
            let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
            let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                kCFAllocatorDefault,
                pixelBufferPool,
                pixelBufferPointer
            )

            if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
                fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
                appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
                pixelBufferPointer.destroy()
            } else {
                NSLog("Error: Failed to allocate pixel buffer from pool")
            }

            pixelBufferPointer.dealloc(1)
        }
    }

    return appendSucceeded
}


func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)

    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()

    // Create CGBitmapContext
    let context = CGBitmapContextCreate(
        pixelData,
        Int(image.size.width),
        Int(image.size.height),
        8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        rgbColorSpace,
        CGImageAlphaInfo.PremultipliedFirst.rawValue
    )

    // Draw image into context
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}


func saveVideoToLibrary(videoURL: NSURL) {
    PHPhotoLibrary.requestAuthorization { status in
        // Return if unauthorized
        guard status == .Authorized else {
            print("Error saving video: unauthorized access")
            return
        }

        // If here, save video to library
        PHPhotoLibrary.sharedPhotoLibrary().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
        }) { success, error in
            if !success {
                print("Error saving video: \(error)")
            }
        }
    }
}
Theorem answered 30/3, 2016 at 0:42 Comment(7)
This needs a completion callback. Otherwise, it returns before it's done writing. I changed that, and it works. Thanks!Alimony
Gist for Swift 3 on macOS - gist.github.com/isthisjoe/7f712512f6efd3f4d7500e98a7c48f8fBathtub
Created video but no image in the video only black screen. any solution??Rexfourd
I get this message "CGBitmapContextCreate: invalid data bytes/row: should be at least 13056 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst." in the function fillPixelBufferFromImage: while creating CGBitmapContextCreate. Any idea why this is happening??Rexfourd
@Bathtub Hey man, thanks for that, but I'm getting the following error: "Error converting images to video: pixelBufferPool nil after starting session". Any idea what I could be doing wrong?Rod
For the "Error converting images to video: pixelBufferPool nil after starting session" With the code above you'll ask the user for permission to access the photo library and create the file, than you'll try again and the file at path will already be created, so check with a FileManager and delete the file if it exists prior to saving it.Tonjatonjes
@Alimony can you please describe how to add the completion callback? I'm getting a file of size 0 bytes – perhaps it returns before it's done writing? Thanks.Rufus

Just translated @Scott Raposa's answer to Swift 3 (with some very small changes):

import AVFoundation
import UIKit
import Photos

struct RenderSettings {

    var size : CGSize = .zero
    var fps: Int32 = 6   // frames per second
    var avCodecKey = AVVideoCodecH264
    var videoFilename = "render"
    var videoFilenameExt = "mp4"


    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}


class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
    var images: [UIImage]!

    var frameNum = 0

    class func saveToLibrary(videoURL: URL) {
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }

            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
            }) { success, error in
                if !success {
                    print("Could not save video to photo library:", error)
                }
            }
        }
    }

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        }
        catch _ as NSError {
            // Assume file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings)
//        images = loadImages()
    }

    func render(completion: (()->Void)?) {

        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

        videoWriter.start()
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
            completion?()
        }

    }

//    // Replace this logic with your own.
//    func loadImages() -> [UIImage] {
//        var images = [UIImage]()
//        for index in 1...10 {
//            let filename = "\(index).jpg"
//            images.append(UIImage(named: filename)!)
//        }
//        return images
//    }

    // This is the callback function for VideoWriter.render()
    func appendPixelBuffers(writer: VideoWriter) -> Bool {

        let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)

        while !images.isEmpty {

            if writer.isReadyForData == false {
                // Inform writer we have more buffers to write.
                return false
            }

            let image = images.removeFirst()
            let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
            let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
            if success == false {
                fatalError("addImage() failed")
            }

            frameNum += 1
        }

        // Inform writer all buffers have been written.
        return true
    }

}


class VideoWriter {

    let renderSettings: RenderSettings

    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!

    var isReadyForData: Bool {
        return videoWriterInput?.isReadyForMoreMediaData ?? false
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {

        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

        context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))

        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

        context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        return pixelBuffer
    }

    init(renderSettings: RenderSettings) {
        self.renderSettings = renderSettings
    }

    func start() {

        let avOutputSettings: [String: Any] = [
            AVVideoCodecKey: renderSettings.avCodecKey,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
        ]

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
                fatalError("AVAssetWriter() failed")
            }

            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
                fatalError("canApplyOutputSettings() failed")
            }

            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        }
        else {
            fatalError("canAddInput() returned false")
        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }

        videoWriter.startSession(atSourceTime: kCMTimeZero)

        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }

    func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {

        precondition(videoWriter != nil, "Call start() to initialze the writer")

        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers?(self) ?? false
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        completion?()
                    }
                }
            }
            else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

        precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")

        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
    }

}
Civilize answered 28/5, 2017 at 19:25 Comment(2)
What would be an example of the usage?Ifc
I receive the error "canApplyOutputSettings() failed" with Swift 5.Adumbrate

Here's the Swift 3 version of how to convert an array of images to a video.

import Foundation
import AVFoundation
import UIKit

typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?


public class ImagesToVideoUtils: NSObject {

    static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    static let tempPath = paths[0] + "/exprotvideo.mp4"
    static let fileURL = URL(fileURLWithPath: tempPath)
//    static let tempPath = NSTemporaryDirectory() + "/exprotvideo.mp4"
//    static let fileURL = URL(fileURLWithPath: tempPath)


    var assetWriter:AVAssetWriter!
    var writeInput:AVAssetWriterInput!
    var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings:[String : Any]!
    var frameTime:CMTime!
    //var fileURL:URL!

    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?


    public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
        if width % 16 != 0 {
            print("warning: video settings width must be divisible by 16")
        }

        let videoSettings:[String: Any] = [AVVideoCodecKey: AVVideoCodecJPEG, //AVVideoCodecH264,
                                           AVVideoWidthKey: width,
                                           AVVideoHeightKey: height]

        return videoSettings
    }

    public init(videoSettings: [String: Any]) {
        super.init()


        if FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath) {
            do {
                try FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)
            } catch {
                print("remove path failed")
                return
            }
        }


        self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)

        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")

        self.assetWriter.add(self.writeInput)
        let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(1, 5)
    }

    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
        self.createMovieFromSource(images: urls as [AnyObject], extractor: { (inputObject: AnyObject) -> UIImage? in
            // Avoid try!/as! here: return nil for unreadable URLs instead of crashing.
            guard let url = inputObject as? URL, let data = try? Data(contentsOf: url) else { return nil }
            return UIImage(data: data)
        }, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
        self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
            return inputObject as? UIImage}, withCompletion: withCompletion)
    }

    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
        self.completionBlock = withCompletion

        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: kCMTimeZero)

        let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count

        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
            while i < frameNumber {
                guard self.writeInput.isReadyForMoreMediaData else { continue }

                var sampleBuffer: CVPixelBuffer?
                autoreleasepool {
                    // Skip frames that cannot be extracted instead of force-unwrapping
                    // a nil image (the original crashed here).
                    if let img = extractor(images[i]), let cgImage = img.cgImage {
                        sampleBuffer = self.newPixelBufferFrom(cgImage: cgImage)
                    } else {
                        print("Warning: could not extract one of the frames")
                    }
                }
                if let sampleBuffer = sampleBuffer {
                    // Frame i is presented at i * frameTime; frame 0 lands at kCMTimeZero.
                    let presentTime = CMTimeMultiply(self.frameTime, Int32(i))
                    self.bufferAdapter.append(sampleBuffer, withPresentationTime: presentTime)
                }
                i += 1
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.async {
                    self.completionBlock!(ImagesToVideoUtils.fileURL)
                }
            }
        }
    }

    func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
        let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer:CVPixelBuffer?
        let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
        let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int

        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")

        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        assert(context != nil, "context is nil")

        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
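
A minimal usage sketch (hedged: the images array and the 320×480 frame size are assumptions; keep the width divisible by 16, per the warning above):

// Hedged sketch: `images` is assumed to be your [UIImage].
let settings = ImagesToVideoUtils.videoSettings(codec: AVVideoCodecJPEG, width: 320, height: 480)
let movieMaker = ImagesToVideoUtils(videoSettings: settings)
movieMaker.createMovieFrom(images: images) { fileURL in
    // Called on the main queue once finishWriting completes.
    print("Video exported to \(fileURL)")
}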

I use it together with screen capture, essentially to create a video of what's on screen; here's the full story/complete example.

Evite answered 21/5, 2017 at 19:54 Comment(1)
This is pretty poor Swift code. It's a useful example so I didn't downvote, but in the future, don't force unwrap (even when you know it can't fail, it's just a bad habit and makes code less readable), use guard statements, and "if let unwrappedValue = optionalValue" makes code much more reasonable and obviously correct. Also don't put parens around conditionals in Swift. And some spaces around your parameter/variable definitions would help with readability, but that's not a Swift issue.Grekin
P
0

For those still on this journey in 2020 and getting distortion in their movies because the width is not a multiple of 16 px: Core Video may pad each pixel-buffer row for alignment, so the real bytes-per-row can be larger than 4 * width, and hardcoding 4 * width shears every row.

change

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width, height,
                                             8, 4 * width,
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

to

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width, height,
                                             8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

Credit to @bluedays: Output from AVAssetWriter (UIImages written to video) distorted

Particia answered 20/11, 2020 at 3:24 Comment(0)
I
-9

Well, this is a bit hard to implement in pure Objective-C. If you are developing for jailbroken devices, a good idea is to use the command-line tool ffmpeg from inside your app. It's quite easy to create a movie from images with a command like:

ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4

Note that the images have to be named sequentially and placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/
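
A hedged sketch of producing those sequentially named frames (Swift here, to match the other answers in the thread; images and framesDir are placeholder names):

// Sketch only: writes each UIImage as 000.jpg, 001.jpg, ... so that the
// %03d.jpg pattern in the ffmpeg command above picks them up in order.
// `images` and `framesDir` are placeholders for your own values.
let framesDir = URL(fileURLWithPath: NSTemporaryDirectory())
for (index, image) in images.enumerated() {
    let url = framesDir.appendingPathComponent(String(format: "%03d.jpg", index))
    try? UIImageJPEGRepresentation(image, 0.9)?.write(to: url)
}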

Ilex answered 18/9, 2010 at 15:7 Comment(2)
ffmpeg would be super slow. Better to use the hardware-accelerated AVFoundation classes.Hypochondriac
It's not hard to do, it just requires reading documentation and writing code. A far more appropriate way to go for developing apps than requiring potential users of your app to jailbreak their phones and install ffmpeg.Mcminn
