AVAudioRecorder in Swift 3: Get Byte stream instead of saving to file

I am new to iOS programming and want to port an Android app to iOS using Swift 3. The app's core functionality is to read the byte stream from the microphone and process that stream live, so it is not sufficient to store the audio in a file and process it after recording has stopped.

I already found the AVAudioRecorder class, which works, but I don't know how to process the data stream live (filtering it, sending it to a server, etc.). The init function of AVAudioRecorder looks like this:

try AVAudioRecorder(url: fileURL, settings: settings)

What I need is a class where I can register an event handler (or something similar) that is called every time x bytes have been read, so I can process them.

Is this possible with AVAudioRecorder? If not, is there another class in the iOS SDK that allows me to process audio streams live? On Android I use android.media.AudioRecord, so it would be great if there's an equivalent class for iOS.

Regards

Vondavonni answered 18/4, 2017 at 13:5 Comment(3)
Did you find any solution to this? I'm stuck with the same problem. Thanks in advance!Rutger
Did you find a solution? I am stuck with the same problem.Genitals
This might help #44184669Vondavonni

Use Audio Queue Services in the Core Audio framework: https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1

static const int kNumberBuffers = 3;                            // 1
struct AQRecorderState {
    AudioStreamBasicDescription  mDataFormat;                   // 2
    AudioQueueRef                mQueue;                        // 3
    AudioQueueBufferRef          mBuffers[kNumberBuffers];      // 4
    AudioFileID                  mAudioFile;                    // 5
    UInt32                       bufferByteSize;                // 6
    SInt64                       mCurrentPacket;                // 7
    bool                         mIsRunning;                    // 8
};

Here’s a description of the fields in this structure:

1 Sets the number of audio queue buffers to use.

2 An AudioStreamBasicDescription structure (from CoreAudioTypes.h) representing the audio data format to write to disk. This format gets used by the audio queue specified in the mQueue field. The mDataFormat field gets filled initially by code in your program, as described in Set Up an Audio Format for Recording. It is good practice to then update the value of this field by querying the audio queue's kAudioQueueProperty_StreamDescription property, as described in Getting the Full Audio Format from an Audio Queue. On Mac OS X v10.5, use the kAudioConverterCurrentInputStreamDescription property instead.

For details on the AudioStreamBasicDescription structure, see Core Audio Data Types Reference.

3 The recording audio queue created by your application.

4 An array holding pointers to the audio queue buffers managed by the audio queue.

5 An audio file object representing the file into which your program records audio data.

6 The size, in bytes, for each audio queue buffer. This value is calculated in these examples in the DeriveBufferSize function, after the audio queue is created and before it is started. See Write a Function to Derive Recording Audio Queue Buffer Size.

7 The packet index for the first packet to be written from the current audio queue buffer.

8 A Boolean value indicating whether or not the audio queue is running.
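
To give a rough idea of how this pattern looks in Swift, here is a minimal, untested sketch: a non-capturing callback receives each filled buffer, and the raw bytes can be processed before the buffer is handed back to the queue. The 16-bit mono PCM format, the 4096-byte buffer size, and the omission of all error handling and microphone-permission setup are simplifying assumptions.

import AudioToolbox

// 16-bit, mono, 44.1 kHz linear PCM (an assumed format; adjust as needed).
var format = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

// Invoked by the system every time a buffer has been filled with audio.
// It must not capture any context, so it can bridge to a C function pointer.
let callback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    let data = Data(bytes: buffer.pointee.mAudioData,
                    count: Int(buffer.pointee.mAudioDataByteSize))
    // Process `data` here: filter it, send it to a server, etc.
    // Then hand the buffer back to the queue for reuse.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, callback, nil, nil, nil, 0, &queue)

if let queue = queue {
    // Three buffers, as in the Apple example above.
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 4096, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}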

Protagoras answered 6/2, 2018 at 11:52 Comment(1)
It's hard to find simple working code examples for AudioQueue. Most are one-off, poorly documented, and brittle.Evelunn

Use AVAudioEngine's inputNode and its installTap(onBus:bufferSize:format:block:) method.

This post helps: https://stackoverflow.com/a/48107265
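
For reference, a minimal sketch of that approach (the 4096-frame buffer size is an arbitrary choice, and the audio session / microphone-permission setup is assumed to be handled elsewhere):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// The block is invoked repeatedly with chunks of live microphone audio.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // buffer.floatChannelData exposes the raw samples, which can be
    // filtered, analyzed, or streamed to a server from here.
    if let samples = buffer.floatChannelData?[0] {
        var peak: Float = 0
        for i in 0..<Int(buffer.frameLength) {
            peak = max(peak, abs(samples[i]))
        }
        print("peak level:", peak)
    }
}

engine.prepare()
do {
    try engine.start()
} catch {
    print("Could not start engine:", error)
}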

Danedanegeld answered 8/2 at 5:51 Comment(0)
