Using scipy fft and ifft to solve an ordinary differential equation numerically

I have an ordinary differential equation in the time domain:

C*du/dt = -g*u + I

where I = A*(t/tau)*exp(1 - t/tau)

Taking the Fourier transform of both sides gives j*w*C*u(w) = -g*u(w) + I(w), so in the frequency domain:

u(w) = I(w)/(g*(1 + j*w*C/g))

where j is the imaginary unit sqrt(-1).

Hence I can get u(t) by going into the frequency domain with the fast Fourier transform (fft) and back with the inverse transform (ifft).

The code:

import numpy as np

t = np.linspace(0., 499.9, 5000)
I = q*t*np.exp(1-t/Tau_ca)/Tau_ca   # q, Tau_ca, G_l, C_m are constants defined elsewhere (values not shown)
u1 = np.fft.ifft(np.fft.fft(I)/(G_l*(1.+1.j*(C_m/G_l)*np.fft.fftfreq(t.shape[-1]))))

However, when I compare this u(t) with the result from other methods, such as numerical integration of the differential equation or the analytical solution, it is not correct. I have tried and been unsuccessful at figuring out where my mistakes are.

Please enlighten me.

Baleen answered 26/11, 2012 at 18:22 Comment(1)
Can you post the complete code, including the one you used for the numerical integration and the values of the numerical constants? Which method of numerical integration did you use? Mariettemarigold

The derivative of a sinusoid, or complex exponential, is directly proportional to its frequency, and phase shifted by π/2. For a complex exponential the phase shift is equivalent to multiplying by j. For example, d/dt exp(j*Ω*t) == j*Ω * exp(j*Ω*t) == Ω * exp(j*π/2) * exp(j*Ω*t) == Ω * exp(j*(Ω*t + π/2)). So if you have the Fourier transform pair u(t) <=> U(Ω), then du/dt <=> jΩ * U(Ω). Integration is pretty much the inverse operation, but it may add an impulse if there's a DC component: ∫udt <=> U(Ω) / jΩ + π*U(0)*δ(Ω)

To approximate the derivative using a sampled sequence, you have to scale the discrete-time angular frequency ω (which ranges from 0 to 2π, or -π to π) by the sample rate, fs. For example, say the discrete-time frequency is π/2 radians/sample, such as the sequence [1, 0, -1, 0, ...]. In the original signal this corresponds to fs/4. The derivative is d/dt cos(2*π*fs/4*t) == d/dt cos(π/2*fs*t) == -π/2*fs*sin(π/2*fs*t) == π/2*fs*cos(π/2*fs*t + π/2).
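
As a quick numerical check (a minimal sketch; the sample rate fs = 8 and the short test sequence are assumed just for illustration), the [1, 0, -1, 0, ...] sequence at π/2 radians/sample lands in the DFT bin corresponding to fs/4 once the normalized bin frequencies are scaled by the sample rate:

import numpy as np

fs = 8.0                                  # assumed sample rate
n = np.arange(8)
x = np.cos(np.pi/2 * n)                   # pi/2 radians/sample: [1, 0, -1, 0, ...]
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), d=1.0/fs)     # bin frequencies scaled back to the original signal, in Hz
print(f[np.argmax(np.abs(X))])            # 2.0, i.e. fs/4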

You have to sample at an fs that's more than twice the bandwidth of the signal. The sampled component at exactly fs/2 is unreliable. Specifically, with only 2 samples per cycle, the fs/2 component shows up as a sequence that simply alternates the sign of its first sample, i.e. [a, -a, a, -a, ...]. So for a real-valued signal the fs/2 DFT component is real valued, with a phase of either 0 or π radians, i.e. a*cos(π*fs*t + {0, π}). Given the latter, the derivative of the fs/2 component is -a*π*fs*cos(π*fs*t + {π/2, 3*π/2}), which is 0 at the sample times t == n/fs.

(Regarding the latter, the standard trigonometric interpolation of the DFT uses a cosine, and in that case the derivative will be zero at the sample points. This isn't necessarily true if you sample the signal and its derivative simultaneously. Sampling loses the phase of the fs/2 component relative to the signal, but not relative to its derivative. Depending on the time you start sampling, both the fs/2 component and its derivative may be non-zero at the sample points. If by luck one of them is 0 at the sample times, the other will be at its peak, since they're π/2 radians apart.)

Given that the fs/2 DFT component is always real valued for a real-valued signal, when you multiply it by j in the course of computing the derivative or integral, this introduces an imaginary-valued component into the result. There's a simple workaround. If N is even, just zero out the fs/2 component at index N/2. Another problem is division by zero when dividing by jΩ for integration. This can be solved by adding a small value to index 0 of the Ω vector (e.g. finfo(float64).tiny, the smallest positive double precision float).
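
Here's a minimal sketch of both workarounds, using rfft as in the examples below. The test signal is assumed just for illustration, and I also zero the DC bin explicitly before integrating so its floating-point residue isn't blown up by the tiny divisor:

import numpy as np

fs = 1024
N = 2 * fs                                   # even number of samples
t = np.arange(N) / fs
w = 2 * np.pi / N * np.arange(N // 2 + 1)    # 0 to pi radians/sample (rfft bins)
Omega = w * fs                               # scaled to radians/second

x = np.sin(2 * np.pi * t)                    # assumed test signal, no DC component
X = np.fft.rfft(x)

X[-1] = 0                                    # zero the fs/2 bin before multiplying by j*Omega
dx = np.fft.irfft(1j * Omega * X)            # derivative: ~ 2*pi*cos(2*pi*t)

X[0] = 0                                     # nothing at DC to integrate (it would be a ramp anyway)
Omega[0] = np.finfo(np.float64).tiny         # avoid 0/0 at the DC bin
ix = np.fft.irfft(X / (1j * Omega))          # integral: ~ -cos(2*pi*t)/(2*pi), up to a constant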

Given Ω = fs * ω, the system shown in the question has the following form in the frequency domain:

H(Ω) = 1 / (g + j*Ω*C)
U(Ω) = I(Ω) * H(Ω)

It's a single-pole low-pass filter with corner frequency Ω_c = g/C rad/s, i.e. f_c = g/(2*π*C) Hz. The solution you derived has two problems.

  1. You aren't scaling the frequency variable w by fs.
  2. You use fftfreq, which returns normalized frequencies in the range -0.5 to 0.5 cycles/sample. You need ω in radians/sample, i.e. -π to π, and it still has to be scaled by fs. Actually you only need 0 to π because i(t) is real-valued, in which case you can use rfft and irfft, which skip computing the negative frequencies. A corrected version of your one-liner is sketched below.
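
For reference, a corrected version of the one-liner from the question might look roughly like this (the constants q, Tau_ca, G_l and C_m are placeholder values I made up, since the question doesn't give them):

import numpy as np

# placeholder constants, chosen only so the sketch runs
q, Tau_ca, G_l, C_m = 1.0, 0.5, 1.0, 1.0/(2*np.pi)

t = np.linspace(0., 499.9, 5000)
fs = 1.0 / (t[1] - t[0])                    # sample rate, 10 samples/s here
I = q*t*np.exp(1 - t/Tau_ca)/Tau_ca

w = 2*np.pi*np.fft.fftfreq(t.shape[-1])     # -pi to pi radians/sample
Omega = w * fs                              # scaled to radians/second
# G_l + j*Omega*C_m is the same denominator as g*(1 + j*Omega*C/g)
u1 = np.fft.ifft(np.fft.fft(I) / (G_l + 1j*Omega*C_m)).real   # .real drops the tiny fs/2 and roundoff residue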

All that said, you may still be disappointed with the result because the DFT uses the periodic extension of your signal.

Examples

First, here's a simple example of a 1 Hz sinusoid (plotted in red) sampled at 1024 samples/s for 2 seconds, and its derivative computed via the DFT (plotted in blue):

from pylab import *

fs = 1024
t = arange(2*fs, dtype=float64) / fs
N = len(t) // 2 + 1    # non-negative frequencies
w = 2 * pi / len(t) * arange(N)
Omega = w * fs

x0 = cos(2*pi*t)    # w0 == 2*pi/fs
X0 = rfft(x0)
# Zero the fs/2 component. It's zero here anyway.
X0[-1] = 0
dx0 = irfft(X0 * 1j*Omega)
plot(t, x0, 'r', t, dx0, 'b')
show()

[plot: the 1 Hz sinusoid (red) and its DFT-computed derivative (blue)]

This is an easy case -- a periodic signal with finite bandwidth. Just make sure to sample an integer number of periods at a high enough rate to avoid aliasing.

The next example is a triangle wave, with slopes of 1 and -1, and discontinuities in the derivative at the center and edges. Ideally, the derivative should be a square wave, but computing that perfectly would require infinite bandwidth. Instead there's Gibbs ringing around the discontinuities:

t2 = t[:fs]
m = len(t) // (2*len(t2))
x1 = hstack([t2, 1.0 - t2] * m)
X1 = rfft(x1)
X1[-1] = 0
dx1 = irfft(X1 * 1j*Omega)
plot(t, x1, 'r', t, dx1, 'b')
show()

[plot: the triangle wave (red) and its DFT-computed derivative with Gibbs ringing (blue)]

The DFT's implicit periodic extension is problematic if you're solving a non-periodic system. Here's a solution to the system in question using both odeint and the DFT (tau is set to 0.5s; g and C are set for a 1 Hz corner frequency):

from scipy.integrate import odeint

a = 1.0; tau = 0.5
g = 1.0; C = 1.0 / (2 * pi)
i = lambda t: a * t / tau * exp(1 - t / tau)
f = lambda u, t: (-g*u + i(t)) / C

H = 1 / (g + 1j*Omega*C)  # system function
I = rfft(i(t))
I[-1] = 0
U_DFT = I * H
u_dft = irfft(U_DFT)
u_ode = odeint(f, u_dft[0], t)[:,0]

td = t[:-1] + 0.5/fs
subplot(221); title('odeint u(t)')
plot(t, u_ode)
subplot(222); title('DFT u(t)')
plot(t, u_dft)
subplot(223); title('odeint du/dt')
plot(td, diff(u_ode)*fs, 'r',
     t, (-g*u_ode + i(t)) / C, 'b')
subplot(224); title('DFT du/dt')
plot(td, diff(u_dft)*fs, 'r',
     t, (-g*u_dft + i(t)) / C, 'b')
show()

[plots: u(t) and du/dt from odeint vs. the DFT solution]

The du/dt graphs overlay the derivative as estimated by diff (red) versus the calculated value from the differential equation (blue). They're approximately equal in both cases. I set the initial value for odeint to u_dft[0] in order to show that it returns the DFT solution for the same initial value. The difference is that the odeint solution would continue to decay to zero, but the DFT solution is periodic with a 2s period. The DFT result will look better in this case if more of i(t) is sampled, since i(t) starts at zero and decays to zero.

Zero padding is used with the DFT to perform linear convolution. Specifically in this case, zero padding the input would help to separate the transient of the zero-state response from its steady state. However, more commonly the Laplace or z-transform system functions are used to analyze the zero-state and zero-input responses (ZSR/ZIR). scipy.signal has tools to analyze LTI systems, including mapping continuous-time to discrete-time in polynomial, zero-pole-gain, and state-space forms.
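
For example, here's a minimal sketch of the same system solved with scipy.signal.lsim, reusing the constants from the example above (only the variable names are mine):

from scipy import signal
from pylab import *

a = 1.0; tau = 0.5
g = 1.0; C = 1.0 / (2 * pi)

fs = 1024
t = arange(2*fs, dtype=float64) / fs
i_t = a * t / tau * exp(1 - t / tau)

sys = signal.lti([1.0], [C, g])                 # H(s) = 1 / (C*s + g)
tout, u_lsim, _ = signal.lsim(sys, U=i_t, T=t)  # zero-state response to i(t)
plot(t, u_lsim)
show()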

John P. Boyd discusses a Chebyshev approximation method for non-periodic functions in Chebyshev and Fourier Spectral Methods (free online at his University of Michigan page).

You'll probably get more help with a question such as this if you ask on the Signal Processing or Mathematics Stack Exchanges.

Abrade answered 29/11, 2012 at 22:23 Comment(1)
Wow... now that's an answer! :) Jaredjarek
