I would like to be able to play simple[1] audio in real time from GHCi, starting and stopping independent oscillators and samples ("voices").
Using Bash on Linux, it's easy[2] to pipe data into a command that streams audio through your speakers.
In Haskell, I imagine something along these lines:
x <- realtimeAudioSink
createVoices x [VoiceName "sinewave", VoiceName "drums"]
play x (VoiceName "sinewave") $ sinewave (Hz 440)
play x (VoiceName "sinewave") $ sinewave (Hz 220)
-- replaces the earlier sinewave with a lower-frequency one
play x (VoiceName "drums")
  $ asSoonAs (theTime == floor theTime)
  $ every (Seconds 10) $ soundfile "snare.wav"
-- starting on the next whole second, play the snare drum sample every 10 seconds
destroyVoice x (VoiceName "drums")
-- the drums stop; the sinewave keeps going
Haskell offers a lot of streaming libraries, and each looks hard to learn. Moreover, the audio-streaming problem is complicated by the need for real-time operation at a specific sample rate, and by the buffering problem[3].
Is this easy or hard?
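For concreteness, here is a minimal sketch of the pipe approach from [2], with everything (module name, sample rate, aplay flags) chosen by me rather than taken from any existing library: it writes one second of a 440 Hz sine wave as signed 16-bit little-endian PCM to stdout, to be run as
runghc Sine.hs | aplay -f S16_LE -r 44100 -c 1

import qualified Data.ByteString.Builder as B
import Data.Int (Int16)
import System.IO (stdout, hSetBinaryMode)

sampleRate :: Double
sampleRate = 44100

-- amplitude-scaled sine sample at index i
sineSample :: Double -> Int -> Int16
sineSample hz i = round (32767 * sin (2 * pi * hz * fromIntegral i / sampleRate))

main :: IO ()
main = do
  hSetBinaryMode stdout True
  B.hPutBuilder stdout $
    foldMap (B.int16LE . sineSample 440) [0 .. round sampleRate - 1]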
[1] Four sinewaves and four samples concurrently seems like enough bandwidth to explore for a lifetime.
[2] If you use ALSA, for instance, cat /dev/urandom | aplay will play white noise. (Warning: It plays at maximum volume.)
[3] The buffering problem arises, I believe, from the following pair of opposing constraints: (1) if each sample is given its own calculate-and-stream cycle, the per-sample overhead might overwhelm the processor (or maybe it wouldn't, if the generated audio is simple enough?); (2) if you calculate too many samples at a time before sending them out, you might not finish computing the block before the audio already sent has run out.
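To make the tradeoff in [3] concrete, here is a sketch of the middle ground, with every name invented by me: a loop computes a small fixed block of samples per iteration and writes it to stdout, while the current voice lives in an IORef so it can be swapped mid-stream, roughly like the play calls above. The pipe to aplay provides the pacing, since writes block once the OS pipe buffer fills. Run as
runghc Voice.hs | aplay -f S16_LE -r 44100 -c 1

import Control.Concurrent (forkIO, threadDelay)
import qualified Data.ByteString.Builder as B
import Data.IORef
import Data.Int (Int16)
import System.IO (stdout, hSetBinaryMode, hFlush)

-- a voice maps a sample index to an amplitude in [-1, 1]
type Voice = Int -> Double

sine :: Double -> Voice
sine hz i = sin (2 * pi * hz * fromIntegral i / 44100)

blockSize :: Int
blockSize = 1024  -- samples per write: large enough to amortize per-write
                  -- overhead, small enough to bound the latency it adds

main :: IO ()
main = do
  hSetBinaryMode stdout True
  voice <- newIORef (sine 440)
  _ <- forkIO $ do  -- stand-in for a 'play' call typed at the GHCi prompt:
    threadDelay 2000000            -- after two seconds...
    writeIORef voice (sine 220)    -- ...swap in the lower sinewave
  let toInt16 x = round (32767 * max (-1) (min 1 x)) :: Int16
      loop i = do
        f <- readIORef voice
        B.hPutBuilder stdout $
          foldMap (B.int16LE . toInt16 . f) [i .. i + blockSize - 1]
        hFlush stdout
        loop (i + blockSize)
  loop 0

Two caveats: swapping sinewaves mid-stream like this clicks, because the phase jumps, so a real implementation would track phase per voice and mix several voices per block; and the OS pipe buffer itself adds latency on top of the block size (roughly 64 KiB on Linux, about three quarters of a second at this rate), so a real realtimeAudioSink would probably talk to ALSA directly with a small period size instead of going through a pipe.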
haskell-app | aplay
– Herculaneum