How to rapidly play multiple copies of a soundfile in javascript
I'm building a wheel of fortune in HTML+JS that spins rather quickly. Every time a new color flies past the mark, the wheel should play a click sound. At top speed this sounds almost like a machine gun: a new copy of the file starts playing before the previous one has finished. The file itself is always the same: click.wav

It works fine in Chrome, but only in Chrome. Firefox has a weird bug where it only plays the sound if there is some other active audio source, such as a YouTube video playing in a different tab. Edge and Safari kind of save up the clicks until the end and then play them all simultaneously. It's a mess... I use the method described here, which clones an <audio> tag

I guess this is where the problem is:

var sound = new Audio("sounds/click.wav");
sound.preload = 'auto';
sound.load();

function playsound(){
    var click=sound.cloneNode();
    click.volume=1;
    click.play();
}

Here is a simplified version of my spinning function that just calls the playsound() function several times per second:

function rotateWheel(){
  angle = angle + acceleration
  while (angle >= 360) {
    angle = angle - 360
  }
  var wheel = document.getElementById("wheel")
  wheel.style.transform = "rotate("+angle +"deg)"
  // play the click when a new segment rotates by
  if(Math.floor(angle/21) != previousSegment){
     playsound()
     previousSegment = Math.floor(angle/21)
  }
}

Burse answered 27/4, 2020 at 7:32 Comment(0)
You used an answer from here. That method will at some point crash the browser process, because you either create a memory problem or fill up the DOM with elements the browser has to handle, so you should rethink your approach; and, as you found out, it will not work for heavy use in most browsers such as Safari or Firefox.
Looking deeper into the <audio> tag specification, it becomes clear that there are many things that simply can't be done with it, which isn't surprising, since it was designed for media playback. One of the limitations is: no fine-grained timing of sound.

So you have to find another method for what you want. We use the Web Audio API, which was designed for online video games.
Web Audio API
An AudioContext is used for managing and playing all sounds. To produce a sound with the Web Audio API, you create one or more sound sources and connect them to the sound destination provided by the AudioContext instance (usually the speakers).
The AudioBuffer
With the Web Audio API, audio files can be played only after they’ve been loaded into a buffer. Loading sounds takes time, so assets that are used in the animation/game should be loaded on page load, at the start of the game or level, or incrementally while the player is playing.
The basic steps

  • We use an XMLHttpRequest to load the data from an audio file into a buffer.
  • Next, we set up an asynchronous callback and send the actual request to load the file.
  • Once a sound has been buffered and decoded, it can be triggered instantly.
  • Each time it is triggered, a different instance of the buffered sound is created.

A key feature of sound effects in games is that there can be many of them simultaneously. To take your "machine gun" example: imagine you're in the middle of a gunfight, firing a machine gun. It fires many times per second, causing tens of sound effects to play at the same time. This is where the Web Audio API really shines. A simple example for your application:

/* global AudioContext:true,
*/

var clickingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function loadClickSound(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    // Decode asynchronously
    request.onload = function() {
        context.decodeAudioData(request.response, function(buffer) {
            if (!buffer) {
                console.log('Error decoding file data: ' + url);
                return;
            }
            clickingBuffer = buffer;
        });
    };
    request.onerror = function() {
        console.log('BufferLoader: XHR error');
    };
    request.send();
}

function playSound(buffer, time, volume) {              
  var source = context.createBufferSource();   // creates a sound source
  source.buffer = buffer;                     // tell the source which sound to play
  source.connect(context.destination);          // connect the source to the context's destination (the speakers)
  var gainNode = context.createGain();          // Create a gain node
  source.connect(gainNode);                     // Connect the source to the gain node
  gainNode.connect(context.destination);        // Connect the gain node to the destination
  gainNode.gain.value = volume;                  // Set the volume
  source.start(time);                           // play the source at the desired time (0 = now)
}

// Call this once, e.g. in your document-ready handler:
   loadClickSound('sounds/click.wav');
// ...and this plays the sound (once the buffer has loaded):
   playSound(clickingBuffer, 0, 1);

Now you can play around with different timings and volume variations, for example by introducing a random factor. If you need a more complex solution with different clicking sounds (stored in a buffer array) and volume/distance variations, that would be a longer piece of code.
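
To illustrate the random-factor idea above, here is a minimal sketch, assuming the `context` and `clickingBuffer` variables from the answer's code are in scope (`randomInRange` and `playClickWithVariation` are hypothetical helper names, not part of the original answer):

```javascript
// Pick a random value in [min, max)
function randomInRange(min, max) {
    return min + Math.random() * (max - min);
}

// Play the buffered click with slight random variation in pitch and
// volume, so rapid repeats sound less mechanical
function playClickWithVariation() {
    var source = context.createBufferSource();
    source.buffer = clickingBuffer;
    source.playbackRate.value = randomInRange(0.95, 1.05); // +/-5% pitch
    var gainNode = context.createGain();
    gainNode.gain.value = randomInRange(0.8, 1.0);         // slight volume variation
    source.connect(gainNode).connect(context.destination);
    source.start(0);
}
```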

Aero answered 27/4, 2020 at 10:47 Comment(4)
amazing! Thank you soooooososo much for this detailed description and answer. It's exactly what I needed.Burse
any idea how to circumvent this? I mean there is actually a user interaction that calls all this, some functions above... I tried to add context.resume() in a couple of places, but still get this message: The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page. [link](https://developers.google.com/web/updates/2017/09/autoplay-policy-changes#webaudio)Burse
It's hard to debug code I have never seen -> By default, the Web Audio API is not currently affected by the autoplay policy. (from the link) Is the context defined globally? If the problem persists, post a new question with the relevant code and notify me here via commentAero
Hi @Codebreaker007, I created a jsfiddle and separate question as you suggested, to isolate the problem: #61609137. Would be great if you could check it out.Burse
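
Regarding the autoplay error discussed in the comments above: browsers start an AudioContext in the "suspended" state until the page receives a user gesture, so calling `resume()` from a gesture handler usually unblocks it. A minimal sketch, assuming the global `context` from the answer above (`unlockAudioContext` and `installUnlockHandlers` are hypothetical names):

```javascript
// Resume the context if it is still suspended; a no-op otherwise.
// resume() returns a promise in all current browsers.
function unlockAudioContext(ctx) {
    return ctx.state === 'suspended' ? ctx.resume() : Promise.resolve();
}

// Attach once-only listeners so the first user gesture anywhere on the
// page unlocks audio. `doc` is the document, `ctx` the AudioContext.
function installUnlockHandlers(doc, ctx) {
    ['click', 'touchstart', 'keydown'].forEach(function (eventName) {
        doc.addEventListener(eventName, function () {
            unlockAudioContext(ctx);
        }, { once: true });
    });
}

// Usage in the page: installUnlockHandlers(document, context);
```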
While @Codebreaker007's answer points in the right direction, it has a few issues:

  • First and foremost, the node graph is wrong: it connects source directly to context.destination and then also to gainNode, which is itself connected to context.destination, so you won't be able to control the gain (volume) as expected.

    The nodes should be connected like this:

    source.connect(gainNode).connect(context.destination);
    
  • Secondly, it doesn't keep track of the source nodes currently playing, so you cannot stop them.

    To address this, you need to keep track of each source node that has been created and hasn't finished playing yet, which can be done by listening to the source node's onended event.

Here's a class that encapsulates the logic to load a sound from a URL and play/stop it as many times as you want, keeping track of all currently playing sources and cleaning them up as needed (with properly working gain / volume):

window.AudioContext = window.AudioContext || window.webkitAudioContext;

const context = new AudioContext();

export class Sound {

    url = '';

    buffer = null;

    sources = [];

    constructor(url) {
        this.url = url;
    }

    load() {
        if (!this.url) return Promise.reject(new Error(`Missing or invalid URL: ${ this.url }`));

        if (this.buffer) return Promise.resolve(this.buffer);

        return new Promise((resolve, reject) => {
            const request = new XMLHttpRequest();

            request.open('GET', this.url, true);
            request.responseType = 'arraybuffer';

            // Decode asynchronously:

            request.onload = () => {
                context.decodeAudioData(request.response, (buffer) => {
                    if (!buffer) {
                        console.log(`Sound decoding error: ${ this.url }`);

                        reject(new Error(`Sound decoding error: ${ this.url }`));

                        return;
                    }

                    this.buffer = buffer;

                    resolve(buffer);
                });
            };

            request.onerror = (err) => {
                console.log('Sound XMLHttpRequest error:', err);

                reject(err);
            };

            request.send();
        });
    }

    play(volume = 1, time = 0) {
        if (!this.buffer) return;

        // Create a new sound source and assign it the loaded sound's buffer:

        const source = context.createBufferSource();

        source.buffer = this.buffer;

        // Keep track of all sources created, and stop tracking them once they finish playing.
        // Look the source up by identity when it ends, since indices shift as
        // earlier sources are removed:

        this.sources.push(source);

        source.onended = () => {
            const index = this.sources.indexOf(source);

            if (index !== -1) this.sources.splice(index, 1);
        };

        // Create a gain node with the desired volume:

        const gainNode = context.createGain();

        gainNode.gain.value = volume;

        // Connect nodes:

        source.connect(gainNode).connect(context.destination);

        // Start playing at the desired time:

        source.start(time);
    }

    stop() {
        // Stop any sources still playing:

        this.sources.forEach((source) => {
            source.stop(0);
        });

        this.sources = [];
    }

}

You can then do something like this:

const soundOne = new Sound('./sounds/sound-one.mp3')
const soundTwo = new Sound('./sounds/sound-two.mp3')

Promise.all([
  soundOne.load(),
  soundTwo.load(),
]).then(() => {
  buttonOne.onclick = () => soundOne.play();
  buttonTwo.onclick = () => soundTwo.play();
})
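
As a side note, the XHR-based `load()` in the class above can also be written with `fetch` and promises in current browsers, where `decodeAudioData` returns a promise as well. A sketch, assuming the same global `context`; `loadBuffer` is a hypothetical name, not part of the class:

```javascript
// Fetch an audio file and decode it into an AudioBuffer using the
// promise-based form of decodeAudioData.
async function loadBuffer(url) {
    const response = await fetch(url);

    if (!response.ok) throw new Error(`HTTP ${ response.status } while loading ${ url }`);

    const arrayBuffer = await response.arrayBuffer();

    return context.decodeAudioData(arrayBuffer);
}
```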
Pelasgian answered 16/8, 2023 at 20:26 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.