How do I perform a convolution in Python with a variable-width Gaussian?

I need to perform a convolution using a Gaussian, but the width of the Gaussian needs to change. I'm not doing traditional signal processing; instead, I need to take my perfect Probability Density Function (PDF) and "smear" it, based on the resolution of my equipment.

For instance, suppose my PDF starts out as a spike/delta-function. I'll model this as a very narrow Gaussian. After being run through my equipment, it will be smeared out according to some Gaussian resolution. I can calculate this using the scipy.signal convolution functions.

    import numpy as np
    import matplotlib.pyplot as plt

    import scipy.signal as signal
    import scipy.stats as stats

    # Create the initial function. I model a spike
    # as an arbitrarily narrow Gaussian
    mu = 1.0     # Centroid
    sig = 0.001  # Width
    original_pdf = stats.norm(mu, sig)

    x = np.linspace(0.0, 2.0, 1000)
    y = original_pdf.pdf(x)
    plt.plot(x, y, label='original')


    # Create the "smearing" function to convolve with the
    # original function.
    # I use a Gaussian, centered at 0.0 (no bias) and
    # a width of 0.5
    mu_conv = 0.0     # Centroid
    sigma_conv = 0.5  # Width
    convolving_term = stats.norm(mu_conv, sigma_conv)

    xconv = np.linspace(-5, 5, 1000)
    yconv = convolving_term.pdf(xconv)

    # Normalize y to unit sum, then convolve with the smearing kernel
    convolved_pdf = signal.convolve(y/y.sum(), yconv, mode='same')

    plt.plot(x, convolved_pdf, label='convolved')
    plt.ylim(0, 1.2*max(convolved_pdf))
    plt.legend()
    plt.show()

This all works no problem. But now suppose my original PDF is not a spike but some broader function, for example a Gaussian with sigma=1.0. And now suppose my resolution actually varies over x: at x=0.5, the smearing function is a Gaussian with sigma_conv=0.5, but at x=1.5, the smearing function is a Gaussian with sigma_conv=1.5. And suppose I know the functional form of the x-dependence of my smearing Gaussian. Naively, I thought I would change the line above to

    convolving_term = stats.norm(mu_conv,lambda x: 0.2*x + 0.1)

But that doesn't work, because the norm function expects a value for the width, not a function. In some sense, I need my convolving function to be a 2D array, where I have a different smearing Gaussian for each point in my original PDF, which remains a 1D array.
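
For concreteness, here is a brute-force sketch of what I have in mind (a hypothetical illustration, not my actual code; the width model 0.2*x + 0.1 is the same one as above):

    import numpy as np
    import scipy.stats as stats

    x = np.linspace(0.0, 2.0, 1000)
    dx = x[1] - x[0]
    y = stats.norm(1.0, 1.0).pdf(x)  # a broader original PDF

    # Smear each point of the original PDF with a Gaussian whose
    # width depends on that point's position, and sum the pieces.
    smeared = np.zeros_like(y)
    for j in range(len(x)):
        sigma_j = 0.2*x[j] + 0.1  # x-dependent resolution
        smeared += y[j] * stats.norm(x[j], sigma_j).pdf(x) * dx

This is O(N^2), which is part of why I'm hoping something faster already exists.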

So is there a way to do this with functions already defined in Python? I have some code to do this that I wrote myself, but I want to make sure I've not just re-invented the wheel.

Thanks in advance!

Matt

Swedish answered 4/9, 2013 at 21:21 Comment(2)
Your xconv "step" ((last - first) / (length - 1)) is different from the x "step", so the widths get rescaled (i.e., the sigmas aren't in the same "units"). Do you really want that? – Uzia
"I have some code to do this that I wrote myself" => can you show us this code? It might be helpful.Uzia

Question, in brief:
How do you convolve with a non-stationary kernel (for example, a Gaussian that changes width at different locations in the data), and does Python have an existing tool for this?

Answer, sort-of:
It's difficult to prove a negative, but I do not think that a function to perform a convolution with a non-stationary kernel exists in scipy or numpy. Anyway, as you describe it, it can't really be vectorized well, so you may as well do a loop or write some custom C code.

One trick that might work for you: instead of changing the kernel size with position, stretch the data by the inverse scale (i.e., at places where you'd want the Gaussian width to be 0.5× the base width, stretch the data to 2×). That way you can do a single warping operation on the data, one standard convolution with a fixed-width Gaussian, and then unwarp the data back to the original scale.

The advantages of this approach are that it's very easy to write, and it's completely vectorized, so it will probably be fairly fast to run.

Warping the data (using, say, an interpolation method) will cause some loss of accuracy, but if you choose things so that the data is always expanded and not reduced in your initial warping operation, the losses should be minimal.
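
A minimal sketch of the trick (assuming the linear width model 0.2*x + 0.1 from the question; everything else is illustrative, and note this smooths the function values, so for a true PDF you'd also want to apply the Jacobian of the warp to conserve probability):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.linspace(0.0, 2.0, 1000)
    y = np.exp(-0.5*((x - 1.0)/0.1)**2)  # some input signal
    sigma = 0.2*x + 0.1                  # desired local kernel width

    # Warped coordinate u(x) = integral of dx/sigma(x): a Gaussian of
    # width sigma(x) in x becomes (locally) a Gaussian of width 1 in u.
    u = np.concatenate(([0.0], np.cumsum(np.diff(x)/sigma[:-1])))

    # Resample onto a uniform grid in u, oversampling so the data is
    # only ever stretched, never compressed.
    u_uniform = np.linspace(u[0], u[-1], 4*len(x))
    y_uniform = np.interp(u_uniform, u, y)

    # One standard fixed-width convolution: width 1 in u units.
    du = u_uniform[1] - u_uniform[0]
    y_smoothed = gaussian_filter1d(y_uniform, sigma=1.0/du)

    # Unwarp back onto the original grid.
    result = np.interp(u, u_uniform, y_smoothed)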

Dioptrics answered 12/6, 2014 at 14:21 Comment(2)
This thread, and especially this answer, were cited in a research paper: doi.org/10.1093/mnras/stad2597 ... As I detail in my answer, the problem can be vectorized efficiently. – Aluminize
Also, if for some reason neither my approach nor the approach in the paper works for you, there is yet another code: github.com/sheliak/varconvolve/tree/master – Aluminize

Convolution can be expressed as a matrix multiplication, so you can create a matrix in which each row contains the kernel appropriate to its position in the vector being convolved. There are some padding issues to take care of at the edges. The resulting matrix is sparse and banded, which allows for very efficient computation.
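
For illustration, a generic sketch of this construction (not the LSDCat code itself; the width model 0.2*x + 0.1 is taken from the question, and renormalizing each truncated row is one simple way to handle the edge padding):

    import numpy as np
    import scipy.sparse as sparse
    import scipy.stats as stats

    x = np.linspace(0.0, 2.0, 1000)
    dx = x[1] - x[0]
    y = stats.norm(1.0, 0.3).pdf(x)  # the vector to be convolved
    sigma = 0.2*x + 0.1              # kernel width at each position

    # Build the sparse, banded convolution matrix: row i holds a
    # Gaussian centered on x[i] with width sigma[i], truncated at
    # +/- 4 sigma and renormalized to handle the edges.
    rows, cols, vals = [], [], []
    for i in range(len(x)):
        half = int(np.ceil(4*sigma[i]/dx))
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        kernel = stats.norm(x[i], sigma[i]).pdf(x[lo:hi])
        kernel /= kernel.sum()
        rows.extend([i]*(hi - lo))
        cols.extend(range(lo, hi))
        vals.extend(kernel)

    A = sparse.csr_matrix((vals, (rows, cols)), shape=(len(x), len(x)))
    smeared = A @ y  # the variable-width convolution, as one matvec

Because the matrix is banded, building it costs O(N * bandwidth) and applying it is a single cheap sparse matrix-vector product.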

I implemented such an algorithm, for a Gaussian that varies as a function of position, in software for matched filtering of integral-field spectroscopic datacubes: https://bitbucket.org/Knusper2000/lsdcat/src/master/

I documented this specific algorithm here: https://bitbucket.org/Knusper2000/lsdcat/src/master/doc/spec_filt_notes/spec_filt_notes.pdf

The file that implements this algorithm can be found here: https://bitbucket.org/Knusper2000/lsdcat/src/master/lib/wavelength_smooth_lib.py

Here the routine create_filter_matrix_vel creates this matrix, and the filtering of the vector (in my case a spectrum) is done with filter_spectrum. It makes use of scipy's sparse-matrix routines. It is quite fast, and when you have a lot of vectors it can be trivially parallelized.

Aluminize answered 27/6 at 12:2 Comment(0)
