How do I stream AVAsset audio wirelessly from one iOS device to another?
I'm building something that streams audio from the iPod library, sends the data over the network or Bluetooth, and plays it back using an audio queue.

Thanks for this question and code; they helped me a lot.

I have two questions about it.

  1. What should I send from one device to the other? A CMSampleBufferRef? An AudioBuffer? mData? An AudioQueueBuffer? A packet? I have no idea.

  2. When the app finishes playing, it crashes and I get error (-12733). I just want to know how to handle the error instead of letting the app crash. (Check the OSStatus? Stop when the error happens?)

    Error: could not read sample data (-12733)

Ribbing answered 4/2, 2013 at 13:38 Comment(0)

I'll answer your second question first: don't wait for the app to crash. You can stop pulling audio from the track by checking whether the CMSampleBufferRef you read back contains any samples; for example (this code is also included in the second half of my answer):

CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];

// check for NULL *before* asking for the sample count, otherwise
// CMSampleBufferGetNumSamples would be called on a NULL buffer
if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
  // handle end of audio track here
  return;
}
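The CheckError call used throughout this answer isn't defined here; it's the error-check convenience function from the Learning Core Audio book. A plain-C sketch of the idea (the OSStatus typedef stand-in and the exact formatting are assumptions for illustration; on iOS the real type comes from the system headers):

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int OSStatus;  // stand-in for Apple's MacTypes.h definition

// Format an OSStatus either as a four-character code (e.g. 'fmt?') or as a
// plain integer (e.g. -12733), in the style of the Learning Core Audio book.
static void StatusToString(OSStatus error, char out[20])
{
    unsigned char c[4] = {
        (unsigned char)((error >> 24) & 0xFF),
        (unsigned char)((error >> 16) & 0xFF),
        (unsigned char)((error >> 8)  & 0xFF),
        (unsigned char)( error        & 0xFF),
    };
    if (isprint(c[0]) && isprint(c[1]) && isprint(c[2]) && isprint(c[3]))
        snprintf(out, 20, "'%c%c%c%c'", c[0], c[1], c[2], c[3]);
    else
        snprintf(out, 20, "%d", (int)error);
}

// Log the failing operation and stop cleanly, instead of crashing later.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == 0) return;  // noErr
    char errorString[20];
    StatusToString(error, errorString);
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}
```

With that in place, "could not read sample data (-12733)" is exactly what this prints when CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer fails, which is why checking for the end of the track before reading avoids the crash.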

Regarding your first question, it depends on the type of audio you are grabbing - it could be either PCM (uncompressed) or VBR (compressed) data. I won't bother addressing the PCM case because it's simply not smart to send uncompressed audio from one phone to another over the network - it's unnecessarily expensive and will clog your bandwidth. So we're left with VBR data. For that you've got to send the contents of the AudioBuffer and the AudioStreamPacketDescriptions you pulled from the sample. But it's probably best to explain with code:

-(void)broadcastSample
{
    [broadcastLock lock];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];

    // a NULL buffer (or one with zero samples) means the end of the track;
    // check for NULL *before* asking for the sample count
    if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
        Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
        packet.sendReliably = NO;
        [self sendPacketToAllClients:packet];
        [sampleBroadcastTimer invalidate];
        return;
    }


        NSLog(@"SERVER: going through sample loop");
        Boolean isBufferDataReady = CMSampleBufferDataIsReady(sample);



        CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample );                                                         
        AudioBufferList audioBufferList;  

        CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                                                                           sample,
                                                                           NULL,
                                                                           &audioBufferList,
                                                                           sizeof(audioBufferList),
                                                                           NULL,
                                                                           NULL,
                                                                           kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                           &CMBuffer
                                                                           ),
                   "could not read sample data");

        const AudioStreamPacketDescription   * inPacketDescriptions;

        size_t                               packetDescriptionsSizeOut;
        size_t inNumberPackets;

        CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample, 
                                                                     &inPacketDescriptions,
                                                                     &packetDescriptionsSizeOut),
                   "could not read sample packet descriptions");

        inNumberPackets = packetDescriptionsSizeOut/sizeof(AudioStreamPacketDescription);

        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];



        for (int i = 0; i < inNumberPackets; ++i)
        {

            NSLog(@"going through packets loop");
            SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
            UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;            

            size_t packetSpaceRemaining = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
            size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;        

            if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) || 
                (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE))
            {
                if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                    break;
            }

            memcpy((char*)packet + packetBytesFilled, 
                   (const char*)(audioBuffer.mData + dataOffset), dataSize);

            char *encapsulatedDescription =
                [self encapsulatePacketDescription:inPacketDescriptions[i]
                                      mStartOffset:packetBytesFilled];
            memcpy((char*)packetDescriptions + packetDescriptionsBytesFilled,
                   encapsulatedDescription, AUDIO_STREAM_PACK_DESC_SIZE);
            free(encapsulatedDescription); // encapsulatePacketDescription mallocs this


            packetBytesFilled += dataSize;
            packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE; 

            // if this is the last packet, then ship it
            if (i == (inNumberPackets - 1)) {          
                NSLog(@"woooah! this is the last packet (%d).. so we will ship it!", i);
                if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                    break;

            }

        }

    // release what this pass copied/retained: the sample buffer from
    // copyNextSampleBuffer and the block buffer retained by
    // CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    CFRelease(CMBuffer);
    CFRelease(sample);

    [broadcastLock unlock];
}

Some of the methods used in the above code you don't need to worry about, such as the ones adding headers to each packet (I was creating my own protocol; you can create your own). For more info see this tutorial.

- (BOOL)encapsulateAndShipPacket:(void *)source
              packetDescriptions:(void *)packetDescriptions
                        packetID:(NSString *)packetID
{

    // package Packet
    char * headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);

    appendInt32(headerPacket, 'SNAP', 0);    
    appendInt32(headerPacket,packetNumber, 4);    
    appendInt16(headerPacket,PacketTypeAudioBuffer, 8);   
    // we use this so that we can add int32s later
    UInt16 filler = 0x00;
    appendInt16(headerPacket,filler, 10);    
    appendInt32(headerPacket, packetBytesFilled, 12);
    appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);    
    appendUTF8String(headerPacket, [packetID UTF8String], 20);


    int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;        
    memcpy((char *)(headerPacket + offset), (char *)source, packetBytesFilled);

    offset += packetBytesFilled;

    memcpy((char *)(headerPacket + offset), (char *)packetDescriptions, packetDescriptionsBytesFilled);

    NSData *completePacket = [NSData dataWithBytes:headerPacket length: AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];        



    NSLog(@"sending packet number %lu to all peers", packetNumber);
    NSError *error;    
    if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error])   {
        NSLog(@"Error sending data to clients: %@", error);
    }   

    Packet *packet = [Packet packetWithData:completePacket];

    // reset packet 
    packetBytesFilled = 0;
    packetDescriptionsBytesFilled = 0;

    packetNumber++;
    free(headerPacket);    
    //  free(packet); free(packetDescriptions);
    return YES;

}

- (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                          mStartOffset:(SInt64)mStartOffset
{
    // we only serialize 32 bits for mStartOffset (a 32-bit integer, not the full 64)
    char * packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE);

    appendInt32(packetDescription, (UInt32)mStartOffset, 0);
    appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
    appendInt32(packetDescription, inPacketDescription.mDataByteSize,8);    

    return packetDescription;
}
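The appendInt32 / appendInt16 / appendUTF8String helpers used above aren't shown either; they come from the Ray Wenderlich tutorial this answer is based on and write big-endian values into a buffer at a given offset. A plain-C sketch under that assumption (the tutorial's originals operate on NSMutableData, so the exact signatures here are mine):

```c
#include <stdint.h>
#include <string.h>

// Write a 16-bit value big-endian at `offset` into `buffer`.
static void appendInt16(char *buffer, uint16_t value, size_t offset)
{
    buffer[offset]     = (char)((value >> 8) & 0xFF);
    buffer[offset + 1] = (char)( value       & 0xFF);
}

// Write a 32-bit value big-endian at `offset` into `buffer`.
static void appendInt32(char *buffer, uint32_t value, size_t offset)
{
    buffer[offset]     = (char)((value >> 24) & 0xFF);
    buffer[offset + 1] = (char)((value >> 16) & 0xFF);
    buffer[offset + 2] = (char)((value >> 8)  & 0xFF);
    buffer[offset + 3] = (char)( value        & 0xFF);
}

// Copy a NUL-terminated UTF-8 string (including the terminator).
static void appendUTF8String(char *buffer, const char *string, size_t offset)
{
    memcpy(buffer + offset, string, strlen(string) + 1);
}

// Reading a 32-bit value back on the receiving side.
static uint32_t readInt32(const char *buffer, size_t offset)
{
    const unsigned char *b = (const unsigned char *)buffer + offset;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}
```

Using a fixed byte order like this is what lets the sender and receiver agree on the 12-byte packet-description records regardless of the devices' native endianness.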

Receiving data:

- (void)receiveData:(NSData *)data fromPeer:(NSString *)peerID inSession:(GKSession *)session context:(void *)context
{

    Packet *packet = [Packet packetWithData:data];
    if (packet == nil)
    {
         NSLog(@"Invalid packet: %@", data);
        return;
    }

    Player *player = [self playerWithPeerID:peerID];

    if (player != nil)
    {
        player.receivedResponse = YES;  // this is the new bit
    } else {
        // don't redeclare player here - shadowing the outer variable
        // would leave it nil when we pass it on below
        player = [[Player alloc] init];
        player.peerID = peerID;
        [_players setObject:player forKey:player.peerID];
    }

    if (self.isServer)
    {
        [Logger Log:@"SERVER: we just received packet"];   
        [self serverReceivedPacket:packet fromPlayer:player];

    }
    else
        [self clientReceivedPacket:packet];
}
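On the client side you have to undo the header layout written by encapsulateAndShipPacket: the magic at offset 0, packet number at 4, type at 8, two filler bytes at 10, the audio byte count at 12, the description byte count at 16, and the packetID string from offset 20. A hedged plain-C sketch of that parsing (the struct and function names are mine, and big-endian byte order is assumed per the tutorial's convention):

```c
#include <stdint.h>

// Hypothetical parsed view of the header built by encapsulateAndShipPacket;
// offsets mirror the append* calls on the sending side.
typedef struct {
    uint32_t magic;             // offset 0:  'SNAP'
    uint32_t packetNumber;      // offset 4
    uint16_t packetType;        // offset 8:  e.g. PacketTypeAudioBuffer
    uint32_t audioBytes;        // offset 12: packetBytesFilled
    uint32_t descriptionBytes;  // offset 16: packetDescriptionsBytesFilled
} AudioPacketHeader;

static uint32_t be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static uint16_t be16(const unsigned char *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

// Returns 0 on success, -1 if the magic number doesn't match.
static int parseAudioPacketHeader(const unsigned char *bytes, AudioPacketHeader *h)
{
    h->magic = be32(bytes);
    if (h->magic != 0x534E4150u) return -1;  // 'SNAP'
    h->packetNumber     = be32(bytes + 4);
    h->packetType       = be16(bytes + 8);  // 2 filler bytes follow at offset 10
    h->audioBytes       = be32(bytes + 12);
    h->descriptionBytes = be32(bytes + 16);
    // the packetID string starts at offset 20; the audio data and then the
    // packet descriptions follow the fixed-size header
    // (AUDIO_BUFFER_PACKET_HEADER_SIZE in the answer)
    return 0;
}
```

Once the client has audioBytes and descriptionBytes it can slice the payload back into the AudioBuffer data and the AudioStreamPacketDescription records to enqueue into its audio queue.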

notes:

  1. There are a lot of networking details I didn't cover here (e.g., in the receiving-data part I used a lot of custom-made objects without expanding on their definitions). I didn't because explaining all of that is beyond the scope of a single answer on SO. However, you can follow Ray Wenderlich's excellent tutorial. He takes his time explaining networking principles, and the architecture I use above is taken almost verbatim from him. HOWEVER, THERE IS A CATCH (see the next point).

  2. Depending on your project, GKSession may not be suitable (especially if your project is realtime, or if you need more than 2-3 devices to connect simultaneously); it has a lot of limitations. You will have to dig deeper and use Bonjour directly instead. iPhone Cool Projects has a nice quick chapter with a good example of using Bonjour services. It's not as scary as it sounds (and the Apple documentation is kind of overbearing on that subject).

  3. I noticed you use GCD for your multithreading. Again, if you are dealing with real-time audio, you don't want to use advanced frameworks that do the heavy lifting for you (GCD is one of them). For more on this subject read this excellent article. Also read the prolonged discussion between me and justin in the comments of this answer.

  4. You may want to check out MTAudioProcessingTap introduced in iOS 6. It can potentially save you some hassle while dealing with AVAssets. I didn't test this stuff though. It came out after I did all my work.

  5. Last but not least, you may want to check out the Learning Core Audio book. It's a widely acknowledged reference on the subject. I remember being as stuck as you are at the point you asked this question. Core Audio is heavy duty and it takes time to sink in. SO will only give you pointers; you'll have to take your time to absorb the material yourself, and then you'll figure out how things work. Good luck!

Michalmichalak answered 4/2, 2013 at 17:18 Comment(4)
That is very helpful; I'm still studying it. But I found I don't know much about how to handle the packet once the client receives it. I will let you know if I work it out. - Ribbing
Np. Just take your time and don't rush it. If you get stuck again, you know where to find me. - Michalmichalak
I figured I'd point this out to anyone interested: in my experience, one of the biggest problems for people working on Core Audio / real-time iOS projects happens in the planning phase. I've witnessed several examples where the project turned out to be a lot more complex and required a lot more resources than initially allocated. I'm not sure about the scope of your project, but if you find yourself getting stuck, or the progress of work moving too slowly against your project's deliverables, I strongly recommend you rethink the whole thing. - Michalmichalak
You are right. I'm studying the book Learning Core Audio... "Core Audio is some serious black arts shit..." I feel SURROUNDED by the low-level API, with so much basic audio knowledge I have to learn (not covered in the book). So I'm trying something higher-level to do my job this week (maybe AVFoundation); otherwise, I will restart like a beginner and try to understand how to process the audio. - Ribbing
