Extracting audio channel from Linear PCM

I would like to extract a single audio channel from a raw LPCM file, i.e. extract the left and right channels of a stereo LPCM file. The LPCM is 16-bit depth, interleaved, 2 channels, little endian. From what I gather, the order of bytes is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16-bit depth there will be 2 bytes per sample for each channel, right?

So my question is: if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ... n*2, while the right channel would be 1, 3, 5, ... (n*2 + 1)?

Also, after extracting a channel, should I set the format of the extracted data to 16-bit depth, 1 channel?

Thanks in advance

This is the code that I currently use to extract PCM audio with AVAssetReader. This code works fine for writing a music file without extracting a channel, so the problem might be caused by the format or something...

    NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, 
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                            //  [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                            error:&assetError]
                              retain];
if (assetError) {
    NSLog (@"error: %@", assetError);
    return;
}

AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput 
                                           assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                           audioSettings: outputSettings]
                                          retain];
if (! [assetReader canAddOutput: assetReaderOutput]) {
    NSLog (@"can't add reader output... die!");
    return;
}
[assetReader addOutput: assetReaderOutput];


NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];

//CODE TO SPLIT STEREO
[self setupAudioWithFormatMono:kAudioFormatLinearPCM];
NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
    [[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
}

AudioFileID mRecordFile;
NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];


OSStatus status = AudioFileCreateWithURL((CFURLRef)splitExportURL, kAudioFileCAFType, &_streamFormat, kAudioFileFlags_EraseFile,
                                          &mRecordFile);

NSLog(@"status is %d",status);

[assetReader startReading];

CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
UInt32 countsamp= CMSampleBufferGetNumSamples(sampBuffer);
NSLog(@"number of samples %d",countsamp);

SInt64 countByteBuf = 0;
SInt64 countPacketBuf = 0;
UInt32 numBytesIO = 0;
UInt32 numPacketsIO = 0;
NSMutableData * bufferMono = [NSMutableData new];
while (sampBuffer) {

    AudioBufferList  audioBufferList;
    CMBlockBufferRef blockBuffer;
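    // Get the sample buffer's PCM data as an AudioBufferList; blockBuffer keeps
    // the underlying memory alive while it is being read.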
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        //frames = audioBuffer.mData;
        NSLog(@"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
        NSLog(@"The buffer size is %d",audioBuffer.mDataByteSize);






        //Append mono left to buffer data
        for (int i=0; i<audioBuffer.mDataByteSize; i= i+4) {
            [bufferMono appendBytes:(audioBuffer.mData+i) length:2];
        }

        //the number of bytes in the mutable data containing mono audio file
        numBytesIO = [bufferMono length];
        numPacketsIO = numBytesIO/2;
        NSLog(@"numpacketsIO %d",numPacketsIO);
        status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf, &numPacketsIO, audioBuffer.mData);
        NSLog(@"status for writebyte %d, packets written %d",status,numPacketsIO);
        if(numPacketsIO != (numBytesIO/2)){
            NSLog(@"Something wrong");
            assert(0);
        }

        countPacketBuf = countPacketBuf + numPacketsIO;
        [bufferMono setLength:0];
    }

    sampBuffer = [assetReaderOutput copyNextSampleBuffer];
    if (sampBuffer) {
        countsamp = CMSampleBufferGetNumSamples(sampBuffer);
        NSLog(@"number of samples %d",countsamp);
    }
}
AudioFileClose(mRecordFile);
[assetReader cancelReading];
[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
                       withObject:0
                    waitUntilDone:NO];

The output format used with Audio File Services is as follows:

    _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _streamFormat.mBitsPerChannel = 16;
    _streamFormat.mChannelsPerFrame = 1;
    _streamFormat.mBytesPerPacket = 2;
    _streamFormat.mBytesPerFrame = 2;// (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame;
    _streamFormat.mFramesPerPacket = 1;
    _streamFormat.mSampleRate = 44100.0;

    _packetFormat.mStartOffset = 0;
    _packetFormat.mVariableFramesInPacket = 0;
    _packetFormat.mDataByteSize = 2;
Dispense answered 7/1, 2011 at 22:46 Comment(0)

Sounds almost right - you have a 16 bit depth, so that means each sample will take 2 bytes. That means the left channel data will be in bytes {0,1}, {4,5}, {8,9} and so on. Interleaved means the samples are interleaved, not the bytes. Other than that I would try it out and see if you have any problems with your code.

Also, after extracting a channel, should I set the format of the extracted data to 16-bit depth, 1 channel?

Only one of the two channels remains after your extraction, so yes, this is correct.
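
As a rough sketch (untested, with illustrative names), treating the buffer as SInt16 samples rather than raw bytes would look something like this:

    #import <Foundation/Foundation.h>

    // Splits an interleaved 16-bit stereo buffer into separate left/right sample buffers.
    // interleavedData/dataByteSize stand in for whatever your reader gives you
    // (e.g. audioBuffer.mData / audioBuffer.mDataByteSize).
    static void SplitInterleavedStereo(const void *interleavedData, UInt32 dataByteSize,
                                       NSMutableData *left, NSMutableData *right)
    {
        const SInt16 *samples = (const SInt16 *)interleavedData;
        NSUInteger frameCount = dataByteSize / 4;   // 4 bytes per stereo frame (2 x SInt16)

        for (NSUInteger frame = 0; frame < frameCount; frame++) {
            // Work in sample indices, not byte indices:
            // the left sample is samples[2*frame], the right sample is samples[2*frame + 1].
            [left  appendBytes:&samples[2 * frame]     length:sizeof(SInt16)];
            [right appendBytes:&samples[2 * frame + 1] length:sizeof(SInt16)];
        }
    }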

Meekins answered 7/1, 2011 at 23:22 Comment(4)
Alright, I actually attempted this and the result is not right... I used the extraction algorithm as you said, but the output sounds distorted... distorted as in, it sounds "slow"... – Dispense
The code that I used for the said algorithm has been put up in my post. I would welcome any input on what caused it to go wrong. – Dispense
Are you sure that the input sample rate is 44100? If you had that wrong, it would explain the playback sounding "slow". – Meekins
Yeah, I am pretty sure. Besides, I have tested this kind of code with the whole audio file as stereo. I will try to investigate the format flags, I guess. – Dispense

I had a similar error where the audio sounded 'slow'. The reason is that you specified an mChannelsPerFrame of 1, whereas you have dual-channel sound. Set it to 2 and it should speed up the playback. Also, do tell whether the output 'sounds' correct after you do this... :)
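
For reference, a two-channel version of the question's _streamFormat also needs the per-frame and per-packet byte counts bumped to match; a sketch only, with values assumed from the 16-bit interleaved LPCM described in the question (requires <AudioToolbox/AudioToolbox.h>):

    // Stereo variant of the question's _streamFormat (sketch, not the asker's code).
    AudioStreamBasicDescription stereoFormat = {0};
    stereoFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    stereoFormat.mSampleRate       = 44100.0;
    stereoFormat.mBitsPerChannel   = 16;
    stereoFormat.mChannelsPerFrame = 2;              // was 1 in the question
    stereoFormat.mFramesPerPacket  = 1;
    stereoFormat.mBytesPerFrame    = 4;              // (16 / 8) * 2 channels
    stereoFormat.mBytesPerPacket   = 4;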

Bakeman answered 18/2, 2012 at 21:44 Comment(0)
T
0

I'm trying to split my stereo audio into two mono files (split stereo audio to mono streams on iOS). I've been using your code but can't seem to get it to work. What are the contents of your setupAudioWithFormatMono method?
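
The question doesn't show that method, but based on the _streamFormat and _packetFormat values listed at the end of the question it presumably looks roughly like this (a reconstruction, not the asker's actual code):

    // Guess at setupAudioWithFormatMono:, reconstructed from the field values
    // shown in the question; the real method may differ.
    - (void)setupAudioWithFormatMono:(UInt32)formatID
    {
        _streamFormat.mFormatID         = formatID;   // kAudioFormatLinearPCM
        _streamFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel   = 16;
        _streamFormat.mChannelsPerFrame = 1;          // mono
        _streamFormat.mBytesPerPacket   = 2;
        _streamFormat.mBytesPerFrame    = 2;
        _streamFormat.mFramesPerPacket  = 1;
        _streamFormat.mSampleRate       = 44100.0;

        _packetFormat.mStartOffset            = 0;
        _packetFormat.mVariableFramesInPacket = 0;
        _packetFormat.mDataByteSize           = 2;
    }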

Tildatilde answered 12/1, 2015 at 23:12 Comment(0)
