The method described here is implemented in an Xcode project (a multi-platform SwiftUI app) available at this link:
ReverseAudio Xcode Project
It is not sufficient to write the audio sample buffers in reverse order; the sample data inside each buffer must also be reversed.
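A toy example (plain Swift, with made-up numbers standing in for samples) illustrates the difference:

```swift
// Two consecutive "buffers" of audio samples.
let bufferA = [1, 2, 3]
let bufferB = [4, 5, 6]

// Writing the buffers in reverse order alone does not reverse the audio:
let buffersOnly = bufferB + bufferA                        // [4, 5, 6, 1, 2, 3]

// True reversal also reverses the sample data within each buffer:
let fullyReversed = Array((bufferA + bufferB).reversed())  // [6, 5, 4, 3, 2, 1]
```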
In Swift, we create an extension to AVAsset.
The samples must be processed as uncompressed Linear PCM. To that end, create audio reader settings with kAudioFormatLinearPCM:
let kAudioReaderSettings = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM) as AnyObject,
    AVLinearPCMBitDepthKey: 16 as AnyObject,
    AVLinearPCMIsBigEndianKey: false as AnyObject,
    AVLinearPCMIsFloatKey: false as AnyObject,
    AVLinearPCMIsNonInterleaved: false as AnyObject]
Use our AVAsset extension method audioReader:
func audioReader(outputSettings: [String : Any]?) -> (audioTrack: AVAssetTrack?, audioReader: AVAssetReader?, audioReaderOutput: AVAssetReaderTrackOutput?) {
    if let audioTrack = self.tracks(withMediaType: .audio).first {
        if let audioReader = try? AVAssetReader(asset: self) {
            let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
            return (audioTrack, audioReader, audioReaderOutput)
        }
    }
    return (nil, nil, nil)
}
let (_, audioReader, audioReaderOutput) = self.audioReader(outputSettings: kAudioReaderSettings)
This creates an audioReader (AVAssetReader) and audioReaderOutput (AVAssetReaderTrackOutput) for reading the audio samples.
We need to keep track of the audio samples:
var audioSamples:[CMSampleBuffer] = []
Now add the output to the reader and start reading, saving each sample buffer as we go since we need it later when we create the reversed samples:

audioReader.add(audioReaderOutput)

if audioReader.startReading() {
    while audioReader.status == .reading {
        if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
            audioSamples.append(sampleBuffer)
        }
    }
}
We need an AVAssetWriter:
guard let assetWriter = try? AVAssetWriter(outputURL: destinationURL, fileType: AVFileType.wav) else {
    // error handling
    return
}
The file type is 'wav' because the reversed samples will be written as uncompressed Linear PCM audio, as follows.
For the assetWriter we specify audio compression settings and a 'source format hint', which we can acquire from an uncompressed sample buffer:
let sampleBuffer = audioSamples[0]
let sourceFormat = CMSampleBufferGetFormatDescription(sampleBuffer)
let audioCompressionSettings = [AVFormatIDKey: kAudioFormatLinearPCM] as [String : Any]
Now we can create the AVAssetWriterInput, add it to the writer and start writing:
let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings:audioCompressionSettings, sourceFormatHint: sourceFormat)
assetWriter.add(assetWriterInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)
Now iterate through the sample buffers in reverse order and, for each one, reverse the samples it contains.
We have an extension for CMSampleBuffer that does just that, called ‘reverse’.
Using requestMediaDataWhenReady we do this as follows:
let nbrSamples = audioSamples.count
var index = 0

let serialQueue = DispatchQueue(label: "com.limit-point.reverse-audio-queue")

assetWriterInput.requestMediaDataWhenReady(on: serialQueue) {
    while assetWriterInput.isReadyForMoreMediaData, index < nbrSamples {
        let sampleBuffer = audioSamples[nbrSamples - 1 - index]
        if let reversedBuffer = sampleBuffer.reverse(), assetWriterInput.append(reversedBuffer) {
            index += 1
        }
        else {
            index = nbrSamples
        }
        if index == nbrSamples {
            assetWriterInput.markAsFinished()
            finishWriting() // call assetWriter.finishWriting, check assetWriter status, etc.
        }
    }
}
The last thing to explain is how the audio samples are reversed in the 'reverse' method.
We create an extension on CMSampleBuffer with a method that returns the reversed sample buffer:
func reverse() -> CMSampleBuffer?
The data that has to be reversed is obtained using the method:
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
The CMSampleBuffer header file describes this method as follows:
“Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifetime of) the data in that AudioBufferList.”
Call it as follows, where ‘self’ refers to the CMSampleBuffer we are reversing since this is an extension:
var blockBuffer: CMBlockBuffer? = nil
let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    self,
    bufferListSizeNeededOut: nil,
    bufferListOut: audioBufferList.unsafeMutablePointer,
    bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    blockBufferOut: &blockBuffer
)
Now you can access the raw data, noting that mData is optional:

guard let data: UnsafeMutableRawPointer = audioBufferList.unsafePointer.pointee.mBuffers.mData else {
    return nil
}
To reverse the data we need to access it as an array of 'samples', called sampleArray, which is done as follows in Swift:
let samples = data.assumingMemoryBound(to: Int16.self)

let sizeofInt16 = MemoryLayout<Int16>.size
let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize
let dataCount = Int(dataSize) / sizeofInt16

var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount))
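This pointer-to-array pattern can be tried in isolation (a standalone sketch using a manually allocated buffer in place of the mData pointer from Core Media):

```swift
// Simulate a raw audio data pointer holding four Int16 samples.
let values: [Int16] = [10, -20, 30, -40]
let byteCount = values.count * MemoryLayout<Int16>.size
let raw = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: MemoryLayout<Int16>.alignment)
values.withUnsafeBytes { raw.copyMemory(from: $0.baseAddress!, byteCount: byteCount) }

// The same steps as above: bind the raw pointer, compute the count, copy into an array.
let samples = raw.assumingMemoryBound(to: Int16.self)
let dataCount = byteCount / MemoryLayout<Int16>.size
var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount))
sampleArray.reverse()   // [-40, 30, -20, 10]
raw.deallocate()
```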
Now reverse the array sampleArray:
sampleArray.reverse()
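One caveat worth noting: since the reader settings request interleaved samples, a stereo buffer stores its frames as L, R, L, R, so reversing the flat Int16 array also swaps the channel order within each frame. If preserving channel assignment mattered, the reversal could instead be done frame by frame; a hypothetical sketch (not part of the project):

```swift
// Hypothetical frame-wise reversal for 2-channel interleaved samples.
let channelCount = 2
let interleaved: [Int16] = [1, 2, 3, 4, 5, 6]    // frames: (1,2) (3,4) (5,6)

var frames = stride(from: 0, to: interleaved.count, by: channelCount).map {
    Array(interleaved[$0 ..< $0 + channelCount])
}
frames.reverse()

let framewiseReversed = frames.flatMap { $0 }    // [5, 6, 3, 4, 1, 2]: channels preserved
let flatReversed = Array(interleaved.reversed()) // [6, 5, 4, 3, 2, 1]: channels swapped
```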
Using the reversed samples we will create a new CMSampleBuffer that contains them. First, replace the data in the CMBlockBuffer we previously obtained with CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer, reassigning its bytes from the reversed array:
var status: OSStatus = noErr

sampleArray.withUnsafeBytes { sampleArrayPtr in
    if let baseAddress = sampleArrayPtr.baseAddress {
        let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
        let rawPtr = UnsafeRawPointer(bufferPointer)
        status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
    }
}

if status != noErr {
    return nil
}
Finally create the new sample buffer using CMSampleBufferCreate. This function needs two arguments we can get from the original sample buffer, namely the formatDescription and numberOfSamples:
let formatDescription = CMSampleBufferGetFormatDescription(self)
let numberOfSamples = CMSampleBufferGetNumSamples(self)
var newBuffer:CMSampleBuffer?
Now create the new sample buffer with the reversed blockBuffer:
guard CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                           dataBuffer: blockBuffer,
                           dataReady: true,
                           makeDataReadyCallback: nil,
                           refcon: nil,
                           formatDescription: formatDescription,
                           sampleCount: numberOfSamples,
                           sampleTimingEntryCount: 0,
                           sampleTimingArray: nil,
                           sampleSizeEntryCount: 0,
                           sampleSizeArray: nil,
                           sampleBufferOut: &newBuffer) == noErr else {
    return self
}
return newBuffer
And that’s all there is to it!
As a final note, the Core Audio, Core Media, and AVFoundation headers provide a lot of useful information; see, for example, CoreAudioTypes.h and CMSampleBuffer.h.