Draw a waveform, then play the result.
Synthesis takes place in a wasm file compiled from C (without Emscripten!).
WebAudio is a little weird given that its primitive is the node. What I ultimately want is just a buffer of samples I can write to. This setup is the nearest thing to that, and it lets me do what I want in C and just write a bunch of floats to a buffer.
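To make that concrete, here's a minimal sketch (not this project's actual code) of driving a bare, non-Emscripten wasm module from JavaScript. The file name `synth.wasm` and the exports `get_buffer` and `render` are made up for illustration; a module built with plain clang targeting wasm32 can export functions like these alongside its linear `memory`.

```js
// Instantiate a raw wasm module with no Emscripten glue.
const { instance } = await WebAssembly.instantiateStreaming(fetch('synth.wasm'));

const FRAMES = 2048;
const ptr = instance.exports.get_buffer(); // assumed C: float *get_buffer(void)
// View a region of the module's linear memory as plain 32-bit floats.
const samples = new Float32Array(instance.exports.memory.buffer, ptr, FRAMES);

instance.exports.render(FRAMES); // assumed C: void render(int n), fills the buffer
console.log(samples[0]); // just floats; no node graph in sight
```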
This works by creating an `AudioWorkletProcessor` and just reading from a `SharedArrayBuffer`.
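A stripped-down version of that processor might look something like the following; the processor name and the `sab` key in `processorOptions` are placeholders rather than what this project necessarily uses.

```js
// processor.js: a sketch of the worklet side, assuming the
// SharedArrayBuffer arrives under a (hypothetical) `sab` key in
// processorOptions. process() copies the current shared samples into
// the output, looping over the buffer like a wavetable.
class SharedBufferProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    // A Float32Array view over memory the main thread also writes to.
    this.samples = new Float32Array(options.processorOptions.sab);
    this.phase = 0;
  }

  process(inputs, outputs) {
    const channel = outputs[0][0];
    for (let i = 0; i < channel.length; i++) {
      channel[i] = this.samples[this.phase];
      this.phase = (this.phase + 1) % this.samples.length;
    }
    return true; // keep the processor alive
  }
}

registerProcessor('shared-buffer-processor', SharedBufferProcessor);
```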
There are some pros and cons to this approach. Sharing memory this way is very simple and performant, since the main thread can write to the memory directly (such as when you edit the waveform) instead of having to pass messages back and forth.
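The corresponding main-thread setup, under the same assumptions, could be:

```js
// main.js: create the shared buffer and hand it to the worklet.
// SharedArrayBuffer only exists on cross-origin isolated pages
// (COOP/COEP headers set), per the security note below.
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('processor.js');

const sab = new SharedArrayBuffer(2048 * Float32Array.BYTES_PER_ELEMENT);
const samples = new Float32Array(sab);

const node = new AudioWorkletNode(ctx, 'shared-buffer-processor', {
  processorOptions: { sab }, // the buffer is shared with the worklet, not copied
});
node.connect(ctx.destination);

// Editing the waveform is then just a memory write; no postMessage needed.
function onWaveformEdit(index, value) {
  samples[index] = value;
}
```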
However, `SharedArrayBuffer`s present a security concern: browsers only expose them on cross-origin isolated pages (served with the appropriate COOP/COEP headers), which can limit browser support if not carefully considered.
I'm still experimenting with the best approach for this kind of audio programming. Maybe the `AudioWorkletProcessor` should initialise the WASM itself and accept `AudioParam`s when the configuration changes. This would add some overhead for passing messages between the main thread (where user interaction takes place) and the `AudioWorkletProcessor`, but it probably wouldn't affect audio processing performance enough to be a problem.
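As a rough sketch of that alternative design: the processor would own its state (the wasm setup is elided here), expose configuration as `AudioParam`s, and take bulk updates like a redrawn waveform over the message port. The param name `gain` and the message shape are hypothetical.

```js
// A param- and message-driven processor, with no shared memory.
class ParamDrivenProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1, minValue: 0, maxValue: 1 }];
  }

  constructor() {
    super();
    this.wavetable = new Float32Array(2048);
    this.phase = 0;
    // Waveform edits arrive as messages instead of shared-memory writes.
    this.port.onmessage = (e) => this.wavetable.set(e.data.wavetable);
  }

  process(inputs, outputs, parameters) {
    const channel = outputs[0][0];
    const gain = parameters.gain; // 1 or 128 values per the spec
    for (let i = 0; i < channel.length; i++) {
      channel[i] = (gain.length > 1 ? gain[i] : gain[0]) * this.wavetable[this.phase];
      this.phase = (this.phase + 1) % this.wavetable.length;
    }
    return true;
  }
}

registerProcessor('param-driven-processor', ParamDrivenProcessor);
```

The trade-off is that every edit pays a structured-clone and message-queue cost, but it avoids the cross-origin isolation requirement entirely.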