Generating a static waveform with Web Audio

I'm trying to generate a static waveform like in audio editing apps, using Web Audio and canvas. Right now I'm loading an MP3, creating a buffer, and iterating over the data returned by getChannelData.

The problem is... I don't really understand what's being returned.

  1. What is being returned by getChannelData - is it appropriate for a waveform?
  2. How do I adjust the sample size to get one peak == one second?
  3. Why are ~50% of the values negative?

    ctx.decodeAudioData(req.response, function(buffer) {
        buf = buffer;

        src = ctx.createBufferSource();
        src.buffer = buf;

        // create an analyser (note: getByteFrequencyData fills in FFT bins,
        // not the time-domain samples drawn below)
        fft = ctx.createAnalyser();
        var data = new Uint8Array(samples);
        fft.getByteFrequencyData(data);

        // draw one vertical line per step of 1000 samples
        bufferL = buf.getChannelData(0);
        for (var i = 0; i < buf.length; i++) {
            n = bufferL[i * 1000];
            gfx.beginPath();
            gfx.moveTo(i + 0.5, 300);
            gfx.lineTo(i + 0.5, 300 + (-n * 100));
            gfx.stroke();
        }
    });
    

What I'm generating:

What I'd like to generate:

Thanks

Demand answered 14/9, 2014 at 18:13

I wrote a sample to do precisely this - https://github.com/cwilso/Audio-Buffer-Draw. It's a pretty simplistic demo - you'll have to do the zooming yourself, but the idea's there.

  1. Yes, getChannelData returns the audio buffer samples for that channel, and that's exactly what you want for a waveform.
  2. That's dependent on how frequent the peaks in your sample are, and that's not necessarily consistent. The draw sample I did does zoom out (that's the "step" bit of the method), but you'll likely want to optimize for your scenario (see the sketch just below).
  3. Half the values are negative because sound samples go between -1 and +1. Sound waves are a positive and negative pressure wave; that's why "silence" is a flat line in the middle, not at the bottom.
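
On point 2 specifically: an AudioBuffer exposes its sampleRate, and one second of audio is exactly sampleRate samples, so bucketing that many samples per drawn column gives you one peak per second. Here's a minimal sketch of that idea - the function name is mine, and it assumes you already have a decoded AudioBuffer (buffer) and a 2D canvas context (gfx):

    // One drawn peak per second: each 1px column covers sampleRate samples.
    function drawOnePeakPerSecond(gfx, buffer, height) {
        var data = buffer.getChannelData(0);
        var samplesPerPeak = buffer.sampleRate;   // e.g. 44100 samples == 1 second
        var seconds = Math.floor(data.length / samplesPerPeak);
        var amp = height / 2;
        for (var i = 0; i < seconds; i++) {
            // find the loudest absolute sample within this one-second bucket
            var peak = 0;
            for (var j = 0; j < samplesPerPeak; j++) {
                peak = Math.max(peak, Math.abs(data[i * samplesPerPeak + j]));
            }
            // draw a 1px bar mirrored around the vertical midline
            gfx.fillRect(i, amp - peak * amp, 1, Math.max(1, 2 * peak * amp));
        }
    }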

Code:

    var audioContext = new AudioContext();

    // Draw the buffer's left channel into a width x height canvas context.
    function drawBuffer( width, height, context, buffer ) {
        var data = buffer.getChannelData( 0 );
        // how many samples each 1px-wide column represents
        var step = Math.ceil( data.length / width );
        var amp = height / 2;
        for (var i = 0; i < width; i++) {
            // track the extremes within this column's bucket of samples
            var min = 1.0;
            var max = -1.0;
            for (var j = 0; j < step; j++) {
                var datum = data[(i * step) + j];
                if (datum < min)
                    min = datum;
                if (datum > max)
                    max = datum;
            }
            // draw a 1px column spanning the bucket's min..max range
            context.fillRect(i, (1 + min) * amp, 1, Math.max(1, (max - min) * amp));
        }
    }

    function initAudio() {
        var audioRequest = new XMLHttpRequest();
        audioRequest.open("GET", "sounds/fightclub.ogg", true);
        audioRequest.responseType = "arraybuffer";
        audioRequest.onload = function() {
            audioContext.decodeAudioData( audioRequest.response,
                function(buffer) {
                    var canvas = document.getElementById("view1");
                    drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffer );
                } );
        }
        audioRequest.send();
    }

    window.addEventListener('load', initAudio );
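
One design note on drawBuffer: it scans every sample in each step-sized bucket for its min and max instead of just sampling every Nth value (as the question's bufferL[i*1000] does). Plain decimation can step right over short transients, while the min/max scan guarantees every peak contributes to its column.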
Laaland answered 14/9, 2014 at 21:22

Comment: Please include a relevant code sample in your answer. Should that GitHub URL go stale, your answer becomes utterly useless. – Mud
