Level Metering with AVAudioEngine

I just watched the WWDC Video (Session 502 AVAudioEngine in Practice) on AVAudioEngine and am very excited to make an app built on this tech.

I haven't been able to figure out how I might do level monitoring of the microphone input, or a mixer's output.

Can anyone help? To be clear, I'm talking about monitoring the current input signal (and displaying this in the UI), not the input/output volume setting of a channel/track.

I know you can do this with AVAudioRecorder, but AVAudioRecorder is not an AVAudioNode, which AVAudioEngine requires.

Aetolia answered 4/6, 2015 at 10:30 Comment(0)

Try installing a tap on the main mixer node, set the frame length on each buffer, then read the samples and compute an average. Something like this:

Import the framework at the top of the file:

#import <Accelerate/Accelerate.h>

Add these properties:

@property float averagePowerForChannel0;
@property float averagePowerForChannel1;

Then, in the same class, define the smoothing constant and install the tap:

#define LEVEL_LOWPASS_TRIG 0.30

self.mainMixer = [self.engine mainMixerNode];
[self.mainMixer installTapOnBus:0 bufferSize:1024 format:[self.mainMixer outputFormatForBus:0] block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [buffer setFrameLength:1024];
    UInt32 inNumberFrames = buffer.frameLength;

    if(buffer.format.channelCount>0)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[0];
        Float32 avgValue = 0;

        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG*((avgValue==0)?-100:20.0*log10f(avgValue))) + ((1-LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel0) ;
        self.averagePowerForChannel1 = self.averagePowerForChannel0;
    }

    if(buffer.format.channelCount>1)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[1];
        Float32 avgValue = 0;

        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel1 = (LEVEL_LOWPASS_TRIG*((avgValue==0)?-100:20.0*log10f(avgValue))) + ((1-LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel1) ;
    }
}];

Then read the value wherever you need it:

NSLog(@"===test===%.2f", self.averagePowerForChannel1);

To get peak values instead of averages, use vDSP_maxmgv in place of vDSP_meamgv.

LEVEL_LOWPASS_TRIG is a simple low-pass filter coefficient between 0.0 and 1.0. At 0.0 you filter out all new values and get no data; at 1.0 you apply no smoothing and get too much noise. The higher the value, the more variation you see in the output. A value between 0.10 and 0.30 works well for most applications.

Kelila answered 16/10, 2015 at 8:52 Comment(9)
What is the value (or range) used for LEVEL_LOWPASS_TRIG? (Frink)
To use vDSP_meamgv, do "import Accelerate" to use Apple's high-performance math framework. (Coupe)
Can you post a complete working example on GitHub, perhaps? (Fascinator)
@Frink I did not know what to put either... LEVEL_LOWPASS_TRIG=0.01 worked for me. (Coupe)
This is the best option. I did the same thing for Swift, so this ObjC syntax was a lifesaver for me on another app. It can be adjusted for different visual representations of volume: waveform charts, simple volume bars, or volume-dependent transparency (a fading microphone icon, and so on). (Coupe)
Hi josh, is the Swift conversion working for you? If so, please share. (Arnica)
You saved my day :) (Ewart)
Awesome code. @FarhadMalekpour, would you be able to add more comments on what the code is doing and why? (Nic)
Hey, so what are these values supposed to mean? I'm getting around -66.0 in a very quiet room, and if I speak it moves to somewhere around -44.0. Are these decibels, based on the 'v = -100' parameter? Or why are they lower in quiet environments? (Goulash)

Equivalent Swift 3 code for Farhad Malekpour's answer:

Import the framework at the top of the file:

import Accelerate

Declare these properties:

private var audioEngine: AVAudioEngine?
private var averagePowerForChannel0: Float = 0
private var averagePowerForChannel1: Float = 0
let LEVEL_LOWPASS_TRIG: Float32 = 0.30

Install the tap where you need it:

let inputNode = audioEngine!.inputNode // for microphone level use `inputNode`; for a mix, use `mainMixerNode`
let recordingFormat: AVAudioFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    guard let strongSelf = self else {
        return
    }
    strongSelf.audioMetering(buffer: buffer)
}

The metering calculation:

private func audioMetering(buffer: AVAudioPCMBuffer) {
    buffer.frameLength = 1024
    let inNumberFrames: UInt = UInt(buffer.frameLength)
    if buffer.format.channelCount > 0 {
        let samples = buffer.floatChannelData![0]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        self.averagePowerForChannel1 = self.averagePowerForChannel0
    }

    if buffer.format.channelCount > 1 {
        let samples = buffer.floatChannelData![1]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
    }
}
Ewart answered 12/4, 2018 at 12:24 Comment(2)
Do you have a working sample of this code that shows the whole cycle, e.g. how you instantiate the AudioEngine? (Goulash)
Noob question - why are there 2 channels, if the node is set on channel 0? (Protuberate)

Swift 5+

I got help from this project.

  1. Download the above project and copy the 'Microphone.swift' class into your project.

  2. Copy and paste the following code into your project:

    import AVFoundation
    
    private var mic = MicrophoneMonitor(numberOfSamples: 1)
    private var timer:Timer!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(startMonitoring), userInfo: nil, repeats: true)
        timer.fire()
    }
    
    @objc func startMonitoring() {
      print("sound level:", normalizeSoundLevel(level: mic.soundSamples.first!))
    }
    
    private func normalizeSoundLevel(level: Float) -> CGFloat {
        let level = max(0.2, CGFloat(level) + 50) / 2 // between 0.1 and 25
        return CGFloat(level * (300 / 25)) // scaled to max at 300 (our height of our bar)
    }
    

  3. Open a beer & celebrate!

Molding answered 26/3, 2020 at 7:45 Comment(2)
Is this constantly recording audio into a file? Doesn't seem very efficient. (Goulash)
It's the only way I found! (Molding)

I discovered another solution which is a bit strange, but works perfectly fine, and much better than a tap. A mixer does not expose an AudioUnit, but if you cast it to an AVAudioIONode you can get at the AudioUnit and use iOS's built-in metering facility. Here is how:

To enable or disable metering:

- (void)setMeteringEnabled:(BOOL)enabled
{
    UInt32 on = enabled ? 1 : 0;
    AVAudioIONode *node = (AVAudioIONode *)self.engine.mainMixerNode;
    OSStatus err = AudioUnitSetProperty(node.audioUnit, kAudioUnitProperty_MeteringMode, kAudioUnitScope_Output, 0, &on, sizeof(on));
    if (noErr != err) NSLog(@"Failed to set metering mode: %d", (int)err);
}

To update meters:

- (void)updateMeters
{
    AVAudioIONode *node = (AVAudioIONode *)self.engine.mainMixerNode;

    AudioUnitParameterValue level;
    OSStatus err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower, kAudioUnitScope_Output, 0, &level);

    self.averagePowerForChannel1 = self.averagePowerForChannel0 = level;
    if (self.numberOfChannels > 1)
    {
        err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower + 1, kAudioUnitScope_Output, 0, &level);
        if (noErr == err) self.averagePowerForChannel1 = level;
    }
}
Kelila answered 24/11, 2015 at 21:50 Comment(0)
#define LEVEL_LOWPASS_TRIG 0.3

#import "AudioRecorder.h"

@implementation AudioRecord

- (id)init {
    self = [super init];
    if (self) {
        self.recordEngine = [[AVAudioEngine alloc] init];
    }
    return self;
}


 /**  ----------------------  Snippet Stackoverflow.com not including Audio Level Meter    ---------------------     **/


-(BOOL)recordToFile:(NSString*)filePath {

     NSURL *fileURL = [NSURL fileURLWithPath:filePath];

     const Float64 sampleRate = 44100;

     AudioStreamBasicDescription aacDesc = { 0 };
     aacDesc.mSampleRate = sampleRate;
     aacDesc.mFormatID = kAudioFormatMPEG4AAC; 
     aacDesc.mFramesPerPacket = 1024;
     aacDesc.mChannelsPerFrame = 2;

     ExtAudioFileRef eaf;

     OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAAC_ADTSType, &aacDesc, NULL, kAudioFileFlags_EraseFile, &eaf);
     assert(noErr == err);

     AVAudioInputNode *input = self.recordEngine.inputNode;
     const AVAudioNodeBus bus = 0;

     AVAudioFormat *micFormat = [input inputFormatForBus:bus];

     err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), micFormat.streamDescription);
     assert(noErr == err);

     [input installTapOnBus:bus bufferSize:1024 format:micFormat block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
       const AudioBufferList *abl = buffer.audioBufferList;
       OSStatus err = ExtAudioFileWrite(eaf, buffer.frameLength, abl);
       assert(noErr == err);


       /**  ----------------------  Snippet from stackoverflow.com in different context  ---------------------     **/


       UInt32 inNumberFrames = buffer.frameLength;
       if (buffer.format.channelCount > 0) {
         Float32 *samples = (Float32 *)buffer.floatChannelData[0];
         Float32 peakValue = 0;
         // vDSP_maxmgv gives the peak magnitude; vDSP_meamgv would give the average.
         vDSP_maxmgv(samples, 1, &peakValue, inNumberFrames);
         self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG * ((peakValue == 0) ? -100 : 20.0 * log10f(peakValue)))
                                      + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0);
         self.averagePowerForChannel1 = self.averagePowerForChannel0;
       }

       dispatch_async(dispatch_get_main_queue(), ^{

         self.levelIndicator.floatValue=self.averagePowerForChannel0;

       });     


       /**  ---------------------- End of Snippet from stackoverflow.com in different context  ---------------------     **/

     }];

     BOOL startSuccess;
     NSError *error;

     startSuccess = [self.recordEngine startAndReturnError:&error]; 
     return startSuccess;
}



@end
Opus answered 25/7, 2019 at 2:6 Comment(1)
For @omarojo. Here is working code using a combo of two other answers. The .h file to come. (Opus)
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/ExtendedAudioFile.h>
#import <CoreAudio/CoreAudio.h>
#import <Accelerate/Accelerate.h>
#import <AppKit/AppKit.h>

@interface AudioRecord : NSObject {

}

@property (nonatomic) AVAudioEngine *recordEngine;


@property float averagePowerForChannel0;
@property float averagePowerForChannel1;
@property float numberOfChannels;
@property NSLevelIndicator * levelIndicator;


-(BOOL)recordToFile:(NSString*)filePath;

@end
Opus answered 25/7, 2019 at 2:13 Comment(1)
To use, simply call: newAudioRecord = [AudioRecord new]; newAudioRecord.levelIndicator = self.audioLevelIndicator; --- Experimental (and not great): [newAudioRecord recordToFile:fullFilePath_Name]; [newAudioRecord.recordEngine stop]; [newAudioRecord.recordEngine reset]; [newAudioRecord.recordEngine pause]; To resume: [newAudioRecord.recordEngine startAndReturnError:NULL]; (Opus)
