We need to replace our NPAPI browser plugin with a plugin-less solution. We have a 3rd-party input device that provides us live audio in the form of Opus 'frames'. We transmit those frames to the browser using binary WebSockets and then forward the data to our NPAPI plugin for decoding and audio playback. See picture.
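For concreteness, the browser-side receive path could be sketched as below. The 16-bit length-prefixed framing, and the names splitFrames and decodeAndPlay, are assumptions for illustration only; our actual wire format may batch frames differently:

```typescript
// Assumption: each WebSocket message carries one or more Opus frames,
// each prefixed with a 16-bit big-endian length (our own framing on top
// of the socket, not part of Opus itself).
function splitFrames(buf: ArrayBuffer): Uint8Array[] {
  const view = new DataView(buf);
  const frames: Uint8Array[] = [];
  let off = 0;
  while (off + 2 <= view.byteLength) {
    const len = view.getUint16(off); // big-endian length prefix
    off += 2;
    frames.push(new Uint8Array(buf, off, len));
    off += len;
  }
  return frames;
}

// In the browser (sketch; decodeAndPlay is whatever decoder replaces
// the NPAPI plugin):
//   ws.binaryType = "arraybuffer";
//   ws.onmessage = (e) => splitFrames(e.data).forEach(decodeAndPlay);
```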
What approach should we take to replace the NPAPI plugin with an HTML5-ish solution given these requirements?
- Minimize end-to-end latency to no more than 3-5 s (assuming a 200 ms round-trip network latency).
- Provide a means to apply audio filters on the client/browser side.
Using the HTML5 audio tag seems to introduce a huge amount of latency, as various browsers require a certain amount of buffering (15-30 s of audio) before beginning playback. We understand Opus may or may not be supported on all browsers. If needed (though we'd rather not, to keep bandwidth down), we could encapsulate the Opus frames in an Ogg container within the web service before sending the data to the browser. Looking at one of the demos from html5rocks, HTML5 Audio Playground, it appears as though requirement #2 (client-side audio filters) is possible with the Web Audio API.
If this is a poor place to ask such a design question, please suggest other forums/groups that might be more appropriate.
Thanks for any help or suggestions you might offer.