How to convolve an image with different Gabor filters adjusted according to the local orientation and density using FFT?
I'm currently working on a library to generate synthetic fingerprints using the SFinGe method (by Maltoni, Maio and Cappelli): http://biolab.csr.unibo.it/research.asp?organize=Activities&select=&selObj=12&pathSubj=111%7C%7C12&

One of the steps requires me to apply different Gabor filters to an image. Each pixel in the image has an orientation and a frequency associated with it, so the convolution is not done with one kernel over the entire image; the filter must change during the process depending on those attributes of the pixels, so that each pixel of the image is altered in a different way.

If you apply the filters this way and convolve the image several times (you also have to binarize the image after each convolution), you obtain this:

[Image: generated master fingerprint]

A master fingerprint. This image took about 20 seconds to generate (which is way too slow, and the reason I want to do it with FFT), since I had to perform the convolution 5 times to complete it (you start from a few random black points).

My filters are 30x30, and the image is 275x400. There are 36000 filters in total, one for each degree and density (density goes from 0 to 100). I'm planning on reducing the number of filters from 36000 to 9000, since I can cover all the angles with those. All the filters are also pre-calculated and stored in a filter bank.

This is the C# source code of the Gabor convolution implementation:

These two methods execute the convolution:

    /// <summary>
    /// Convolves the image with different filters depending on the orientation and density of each pixel.
    /// </summary>
    /// <param name="image">The image to be filtered.</param>
    /// <param name="directionalMap">The directional map.</param>
    /// <param name="densityMap">The density map.</param>
    /// <returns>The filtered image, rescaled to the range [0, 255].</returns>
    public double[,] Filter(double[,] image, double[,] directionalMap, double[,] densityMap)
    {
        int       midX          = FILTER_SIZE / 2;
        int       midY          = FILTER_SIZE / 2;
        double[,] filteredImage = new double[image.GetLength(0), image.GetLength(1)];

        for (int i = 0; i < image.GetLength(0); i++)
            for (int j = 0; j < image.GetLength(1); j++)
            {
                // Look up the pre-computed filter matching this pixel's orientation
                // (converted to whole degrees) and density (rounded to two decimals),
                // then convolve at this position.
                double pixelValue = GetPixelConvolutionValue(image, this.filterBank[(int)Math.Floor(directionalMap[i, j] * 180 / Math.PI)][Math.Round(densityMap[i, j], 2)], i - midX, j - midY);

                filteredImage[i, j] = pixelValue;
            }

        return this.RescaleValues(filteredImage, 0.0, 255.0);
    }
    /// <summary>
    /// Gets the pixel convolution value.
    /// </summary>
    /// <param name="image">The image.</param>
    /// <param name="filter">The filter.</param>
    /// <param name="sourceX">The source X.</param>
    /// <param name="sourceY">The source Y.</param>
    /// <returns></returns>
    private double GetPixelConvolutionValue(double[,] image, double[,] filter, int sourceX, int sourceY)
    {
        double result      = 0.0;
        int    totalPixels = 0;

        for (int i = 0; i < filter.GetLength(0); i++)
        {
            // Skip filter rows that fall outside the image.
            if (i + sourceX < 0 || i + sourceX >= image.GetLength(0))
                continue;

            for (int j = 0; j < filter.GetLength(1); j++)
            {
                // Skip filter columns that fall outside the image.
                if (j + sourceY < 0 || j + sourceY >= image.GetLength(1))
                    continue;

                result += image[sourceX + i, sourceY + j] * filter[i, j];
                ++totalPixels;
            }
        }

        // Normalize by the number of pixels that actually contributed
        // (fewer than the full filter area near the image borders).
        return result / totalPixels;
    }

These two methods generate the different Gabor filters for the filter bank:

    /// <summary>
    /// Creates a Gabor filter.
    /// </summary>
    /// <param name="size">The size.</param>
    /// <param name="angle">The angle.</param>
    /// <param name="wavelength">The wavelength.</param>
    /// <param name="sigma">The sigma.</param>
    /// <returns>The size x size Gabor kernel.</returns>
    public double[,] CreateGaborFilter(int size, double angle, double wavelength, double sigma)
    {
        double[,] filter    = new double[size, size];
        double    frequency = 7 + (100 - (wavelength * 100)) * 0.03;

        int windowSize = size / 2; // was FILTER_SIZE / 2; using the size parameter keeps the kernel centered for any size

        for (int y = 0; y < size; ++y)
        {
            for (int x = 0; x < size; ++x)
            {
                // Offsets from the kernel center.
                int dy = -windowSize + y;
                int dx = -windowSize + x;

                filter[x, y] = GaborFilterValue(dy, dx, frequency, angle, 0, sigma, 0.80);
            }
        }

        return filter;
    }
    /// <summary>
    /// Gabor filter values generation.
    /// </summary>
    /// <param name="x">The x.</param>
    /// <param name="y">The y.</param>
    /// <param name="lambda">The wavelength.</param>
    /// <param name="theta">The orientation.</param>
    /// <param name="phi">The phaseoffset.</param>
    /// <param name="sigma">The gaussvar.</param>
    /// <param name="gamma">The aspectratio.</param>
    /// <returns></returns>
    double GaborFilterValue(int x, int y, double lambda, double theta, double phi, double sigma, double gamma)
    {
        // Rotate the coordinates by theta.
        double xx =  x * Math.Cos(theta) + y * Math.Sin(theta);
        double yy = -x * Math.Sin(theta) + y * Math.Cos(theta);

        // Gaussian envelope with aspect ratio gamma.
        double envelopeVal = Math.Exp(-((xx * xx + gamma * gamma * yy * yy) / (2.0 * sigma * sigma)));

        // Sinusoidal carrier with wavelength lambda and phase offset phi.
        double carrierVal = Math.Cos(2.0 * Math.PI * xx / lambda + phi);

        return envelopeVal * carrierVal;
    }
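
For reference, a minimal sketch of how such a bank could be precomputed with `CreateGaborFilter`; the nested-dictionary layout, the 0.01 density step, and the fixed sigma are assumptions on my part, inferred from how `Filter` indexes the bank:

    // Assumed precomputation of the filter bank: one kernel per degree (0..359)
    // and per density value (0.00..0.99 in steps of 0.01), i.e. 36000 filters.
    // Requires: using System; using System.Collections.Generic;
    private Dictionary<int, Dictionary<double, double[,]>> BuildFilterBank()
    {
        var bank = new Dictionary<int, Dictionary<double, double[,]>>();

        for (int degrees = 0; degrees < 360; degrees++)
        {
            var byDensity = new Dictionary<double, double[,]>();

            for (int step = 0; step < 100; step++)
            {
                // Same rounding that Filter uses when it looks a kernel up.
                double density = Math.Round(step * 0.01, 2);

                // Density is fed to the wavelength parameter; sigma = 4.0 is an assumed value.
                byDensity[density] = CreateGaborFilter(FILTER_SIZE, degrees * Math.PI / 180.0, density, 4.0);
            }

            bank[degrees] = byDensity;
        }

        return bank;
    }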

My goal is to reduce this time to under 1 second (there are several programs that do exactly the same thing in that time). Since the direct convolution approach is not working for me, I decided to implement FFT convolution, but the problem is that FFT applies the same kernel to the entire image at once, and I need to change the kernel per pixel, because each pixel must be altered depending on its attributes (density and orientation). In this post How to apply Gabor wavelets to an image? reve-etrange explains how to apply different Gabor filters to an image, but the way he does it is to apply the different filters to the whole image and then sum the responses, whereas what I need is the response of each individual pixel to its own filter.

This is what happens when I convolve one filter with the image (using FFT):

[Image: result of convolving the image with one filter using FFT]

This was the filter used:

[Image: the Gabor filter used]

And this was the image it was convolved with:

[Image: the input image]

This is the C# implementation of the FFT convolution:

    /// <summary>
    /// Convolves the image using FFT.
    /// </summary>
    /// <param name="image">The image to be filtered.</param>
    /// <param name="directionalMap">The directional map.</param>
    /// <param name="densityMap">The density map.</param>
    /// <param name="FFT">if set to <c>true</c> [FFT].</param>
    /// <returns>The filtered image, cropped back to the original size.</returns>
    public double[,] Filter(double[,] image, double[,] directionalMap, double[,] densityMap, bool FFT)
    {
        double[,] filter       = this.filterBank[70][0];
        double[,] paddedFilter = PadImage(filter, 512, 512, 0, 0); // Pad the filter so its dimensions are a power of two.
        double[,] paddedImage  = PadImage(image, 512, 512, 0, 0);  // Pad the image so its dimensions are a power of two.

        FFT fftOne = new FFT(paddedImage);
        FFT fftTwo = new FFT(paddedFilter);

        fftOne.ForwardFFT();
        fftTwo.ForwardFFT();

        // Convolution theorem: pointwise multiplication in the frequency domain.
        FFT result = fftOne * fftTwo;

        result.InverseFFT();

        double[,] filteredImage                 = result.GreyImage;
        double[,] filteredImageWithValuesScaled = this.RescaleValues(filteredImage, 0.0, 255.0);

        return CropImage(filteredImageWithValuesScaled, image.GetLength(0), image.GetLength(1));
    }
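
The `*` operator on the two FFT objects is where the actual convolution happens: by the convolution theorem it has to be a pointwise complex multiplication of the two spectra. A minimal sketch of that operation follows; the `FFTData` container and its `Real`/`Imag` fields are hypothetical, since the FFT class itself isn't shown here:

    // Hypothetical spectrum container; stands in for the FFT class used above.
    public class FFTData
    {
        public double[,] Real;
        public double[,] Imag;

        public FFTData(int width, int height)
        {
            Real = new double[width, height];
            Imag = new double[width, height];
        }
    }

    // Pointwise complex product of two equally sized spectra:
    // (a + bi)(c + di) = (ac - bd) + (ad + bc)i, applied per frequency bin.
    public static FFTData Multiply(FFTData left, FFTData right)
    {
        int width  = left.Real.GetLength(0);
        int height = left.Real.GetLength(1);
        var result = new FFTData(width, height);

        for (int u = 0; u < width; u++)
            for (int v = 0; v < height; v++)
            {
                result.Real[u, v] = left.Real[u, v] * right.Real[u, v] - left.Imag[u, v] * right.Imag[u, v];
                result.Imag[u, v] = left.Real[u, v] * right.Imag[u, v] + left.Imag[u, v] * right.Real[u, v];
            }

        return result;
    }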

So what I'm asking is: how do you get the response of different pixels to different kernels with FFT? If this is not possible, is there a way to improve my direct convolution to make it at least 20 times faster?

Also, would it be possible to combine all the filters into one kernel, so I could apply that to the whole image?

Naman asked 14/10, 2012 at 2:16

Comments (3):
I'm not sure how much time it would save, but since you are working with binary images, you could try to skip the filter lookup and multiplication wherever the image is 0. I guess you can also just insert the filter value when the image is 1 (or, not 0). This probably won't give a 20x boost, though, if any at all. – Pacemaker
Also, I am not familiar with C#, but what about using multiprocessing or threading? – Pacemaker
Hello aganders3, even with multithreading the process would be too slow. As for skipping pixels in the convolution process, it doesn't work =(, you need to do the convolution on all the pixels, otherwise the result is not what's expected. I managed to lower the time to 2 seconds (from 20) by using an image and filter half the size and resizing to the desired size afterwards. – Naman
I found a way to convolve the image with different Gabor filters and gather the responses of the pixels based on their local characteristics using FFT.

This is called contextual filtering. Usually when you filter an image you apply a single kernel to the entire thing, but in contextual filtering the characteristics of the filter change according to the local context, in this case the density and orientation of the pixel.

In direct convolution the process is pretty straightforward: you simply change the kernel at each step of the convolution. But in FFT convolution, since the kernel is applied to the image in the frequency domain, you can't change the filter properties during the process. So the way you do it is to convolve the image with each filter separately, which gives you N filtered images, where N is the number of filters in your filter bank. Then you construct your final image by taking information from the different filtered images based on the context of the pixel you are recreating.

So for each pixel, you look at its orientation and density properties and then grab the value at that pixel's position from the filtered image that was generated by convolving the original image with the filter that has those same properties (a sketch of this selection step follows the example). Here is an example of the process:

This is a graphic representation of the directional map.

[Image: graphic representation of the directional map]

I used the same density for all pixels to reduce the number of filters generated.

This is the source image:

[Image: source image]

This is an example of three of the filters used (0 degrees, 45 degrees, 90 degrees):

[Images: Gabor filters at 0°, 45°, and 90°]

These are three examples of the source image convolved with the filters at different angles:

[Images: the source image convolved with the 0°, 45°, and 90° filters]

And finally, this is the resulting image. It was created by taking the values of the pixels from the different filtered images based on the direction and density of each pixel.

[Image: final image assembled from the filtered images]
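
Putting it together, a minimal sketch of this selection step; `ConvolveFFT` stands for the single-filter FFT convolution from the question, and the bank layout is assumed to match the one used by the direct-convolution `Filter` method:

    // Sketch of contextual filtering via FFT:
    // 1) convolve the image once with every filter in the bank,
    // 2) build the output by picking, per pixel, the response of the
    //    filter that matches that pixel's orientation and density.
    public double[,] ContextualFilter(double[,] image, double[,] directionalMap, double[,] densityMap)
    {
        // One filtered image per (angle, density) pair in the bank.
        var responses = new Dictionary<int, Dictionary<double, double[,]>>();

        foreach (var anglePair in this.filterBank)
        {
            var perDensity = new Dictionary<double, double[,]>();

            foreach (var densityPair in anglePair.Value)
                perDensity[densityPair.Key] = ConvolveFFT(image, densityPair.Value); // assumed helper

            responses[anglePair.Key] = perDensity;
        }

        var result = new double[image.GetLength(0), image.GetLength(1)];

        for (int i = 0; i < image.GetLength(0); i++)
            for (int j = 0; j < image.GetLength(1); j++)
            {
                // Same indexing as the direct-convolution version.
                int    angle   = (int)Math.Floor(directionalMap[i, j] * 180 / Math.PI);
                double density = Math.Round(densityMap[i, j], 2);

                result[i, j] = responses[angle][density][i, j];
            }

        return result;
    }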

This process is A LOT slower than the direct convolution =(, since you have to convolve the original image with all the filters first. The final image took about a minute to generate. So far it seems I'm stuck with the direct convolution =/.

Thanks for reading.

Naman answered 14/10, 2012 at 21:42

Comment (1):
I have been working on fingerprint minutiae extraction and I have been stuck on the fingerprint image enhancement. Could you please tell me how you are creating the filtered images, or provide any helpful link for the same? Could you tell me the algorithm for the whole process? – Hargrave
Did you try CUDA? With the convolution and the FFT (in this case cuFFT) it will be faster! Try to see if there is a possibility of parallelization; I'm actually working on that and I can say that the improvement is huge. Currently I'm going to implement the Gabor filtering.

Remiss answered 4/8, 2015 at 13:2
