I have a continuous value for which I'd like to calculate an exponential moving average. Normally I'd just use the standard formula for this:
- S_n = α·Y + (1 - α)·S_{n-1}
where S_n is the new average, α is the smoothing factor, Y is the new sample, and S_{n-1} is the previous average.
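In code, one step of that update looks something like this (a minimal Python sketch; the function and parameter names are just mine for illustration):

```python
def ema_update(prev_avg: float, sample: float, alpha: float) -> float:
    """One step of the standard fixed-alpha EMA: S_n = alpha*Y + (1 - alpha)*S_{n-1}."""
    return alpha * sample + (1.0 - alpha) * prev_avg
```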
Unfortunately, due to various issues I don't have a consistent sample time. I may know I can sample at most, say, once per millisecond, but due to factors out of my control I may not be able to take a sample for several milliseconds at a time. A likely more common case, however, is that I simply sample a bit early or late: instead of sampling at 0, 1, and 2 ms, I sample at 0, 0.9, and 2.1 ms. I do anticipate that, regardless of delays, my sampling frequency will be far, far above the Nyquist rate, and thus I need not worry about aliasing.
I reckon that I can deal with this in a more-or-less reasonable way by varying the alpha appropriately, based on the length of time since the last sample.
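To make concrete what I mean by "varying the alpha", here is the sort of thing I have in mind: derive α from the elapsed time dt and some time constant tau. The specific form alpha = 1 - exp(-dt / tau) is only my guess, not something I've verified; whether it (or something else) is actually correct is the heart of my question.

```python
import math

def ema_update_irregular(prev_avg: float, sample: float,
                         dt: float, tau: float) -> float:
    """EMA step whose alpha depends on the time dt since the last sample.

    alpha = 1 - exp(-dt / tau) is only a guess at the right scaling:
    alpha -> 0 for tiny gaps, alpha -> 1 as the gap grows, and for
    dt == tau it behaves like a fixed alpha of about 0.63.
    """
    alpha = 1.0 - math.exp(-dt / tau)
    return alpha * sample + (1.0 - alpha) * prev_avg
```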
Part of my reasoning that this will work is that the EMA "interpolates linearly" between the previous data point and the current one. Consider calculating an EMA of the following list of samples taken at intervals of t: [0, 1, 2, 3, 4]. We should get the same result if we use an interval of 2t, where the inputs become [0, 2, 4], right? If the EMA had instead assumed that the value had been 2 over the whole span from t_0 to t_2, that would be the same as the interval-t calculation running on [0, 2, 2, 4, 4], which it's not doing. Or does that make sense at all?
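Here's a quick numeric check of that worry, using the alpha = 1 - exp(-dt / tau) guess from the sketch above (again, my own assumption, not an established method):

```python
import math

def ema(samples, dt, tau):
    """Run the time-aware EMA over equally spaced samples; samples[0] seeds S_0."""
    alpha = 1.0 - math.exp(-dt / tau)
    s = samples[0]
    for y in samples[1:]:
        s = alpha * y + (1.0 - alpha) * s
    return s

tau = 1.0
a = ema([0, 1, 2, 3, 4], 1.0, tau)  # interval t
b = ema([0, 2, 4], 2.0, tau)        # interval 2t
c = ema([0, 2, 2, 4, 4], 1.0, tau)  # interval t, each 2t sample held over its gap
print(a, b, c)  # b and c agree to float precision; a comes out different
```

If I've done this right, the 2t run reproduces the "held" series [0, 2, 2, 4, 4] rather than the interpolated [0, 1, 2, 3, 4], so my exp-based guess acts as if the value were held constant over each gap, which makes me doubt my interpolation intuition all the more.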
Can someone tell me how to vary the alpha appropriately? "Please show your work." I.e., show me the math that proves that your method really is doing the right thing.