Accelerating FFTW pruning to avoid massive zero padding

Suppose that I have a sequence x(n) of length K * N in which only the first N elements are different from zero. I'm assuming that N << K, say, for example, N = 10 and K = 100000. I want to calculate the FFT of such a sequence with FFTW. This is equivalent to having a sequence of length N and zero padding it to K * N. Since N and K may be "large", the zero padding is significant, so I'm exploring whether I can save some computation time by avoiding the explicit zero padding.

The case K = 2

Let us begin by considering the case K = 2. In this case, the DFT of x(n) can be written as

X(k) = \sum_{n=0}^{2N-1} x(n) \, e^{-j 2\pi k n / (2N)} = \sum_{n=0}^{N-1} x(n) \, e^{-j 2\pi k n / (2N)}

If k is even, namely k = 2 * m, then

X(2m) = \sum_{n=0}^{N-1} x(n) \, e^{-j 2\pi m n / N}

which means that such values of the DFT can be calculated through an FFT of a sequence of length N, and not K * N.

If k is odd, namely k = 2 * m + 1, then

X(2m+1) = \sum_{n=0}^{N-1} \left[ x(n) \, e^{-j \pi n / N} \right] e^{-j 2\pi m n / N}

which means that such values of the DFT can be again calculated through an FFT of a sequence of length N, and not K * N.

So, in conclusion, I can exchange a single FFT of length 2 * N for 2 FFTs of length N.
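As a quick numerical sanity check of this splitting, here is a minimal standalone sketch (direct DFTs, not FFTW; N = 8 and the test data are arbitrary choices, independent of the code later in this post). It compares the even and odd bins of a direct 2N-point DFT of the zero-padded sequence against two direct N-point DFTs, one of x(n) and one of the twiddled sequence x(n) * exp(-j*pi*n/N):

#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const int N = 8;
    const double PI = 3.141592653589793;

    // Arbitrary non-zero test data of length N (the zero-padded part is implicit)
    std::vector<std::complex<double>> x(N);
    for (int n = 0; n < N; n++)
        x[n] = std::complex<double>(std::cos(0.3 * n), std::sin(0.7 * n));

    double maxerr = 0.;
    for (int m = 0; m < N; m++) {
        std::complex<double> Xeven(0.), Xodd(0.), Yeven(0.), Yodd(0.);
        for (int n = 0; n < N; n++) {
            // Bins 2m and 2m+1 of the 2N-point DFT of the zero-padded sequence, computed directly
            Xeven += x[n] * std::polar(1.0, -2. * PI * (2 * m) * n / (2 * N));
            Xodd  += x[n] * std::polar(1.0, -2. * PI * (2 * m + 1) * n / (2 * N));
            // The same bins from two N-point DFTs: one of x(n), one of x(n) * exp(-j*pi*n/N)
            Yeven += x[n] * std::polar(1.0, -2. * PI * m * n / N);
            Yodd  += x[n] * std::polar(1.0, -PI * n / N) * std::polar(1.0, -2. * PI * m * n / N);
        }
        maxerr = std::max({maxerr, std::abs(Xeven - Yeven), std::abs(Xodd - Yodd)});
    }
    printf("max |error| = %e\n", maxerr);   // expected to be at round-off level
    return 0;
}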

The case of arbitrary K

In this case, we have

X(k) = \sum_{n=0}^{KN-1} x(n) \, e^{-j 2\pi k n / (KN)} = \sum_{n=0}^{N-1} x(n) \, e^{-j 2\pi k n / (KN)}

On writing k = m * K + t, we have

X(mK + t) = \sum_{n=0}^{N-1} \left[ x(n) \, e^{-j 2\pi n t / (KN)} \right] e^{-j 2\pi m n / N}, \quad t = 0, \ldots, K-1, \quad m = 0, \ldots, N-1

So, in conclusion, I can exchange a single FFT of length K * N for K FFTs of length N. Since FFTW provides fftw_plan_many_dft, I can expect some gain over the case of a single FFT.

To verify this, I have set up the following code:

#include <stdio.h>
#include <stdlib.h>     /* srand, rand */
#include <time.h>       /* time */
#include <math.h>
#include <string.h>     /* memcpy */
#include <fstream>

#include <fftw3.h>

#include "TimingCPU.h"

#define PI_d            3.141592653589793

int main() {

    const int N = 10;
    const int K = 100000;

    fftw_plan plan_zp;

    fftw_complex *h_x = (fftw_complex *)malloc(N     * sizeof(fftw_complex));
    fftw_complex *h_xzp = (fftw_complex *)calloc(N * K, sizeof(fftw_complex));
    fftw_complex *h_xpruning = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhatpruning = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhatpruning_temp = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhat = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));

    // --- Random number generation of the data sequence
    srand(time(NULL));
    for (int k = 0; k < N; k++) {
        h_x[k][0] = (double)rand() / (double)RAND_MAX;
        h_x[k][1] = (double)rand() / (double)RAND_MAX;
    }

    memcpy(h_xzp, h_x, N * sizeof(fftw_complex));

    plan_zp = fftw_plan_dft_1d(N * K, h_xzp, h_xhat, FFTW_FORWARD, FFTW_ESTIMATE);
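    // --- Batched plan: rank-1 transforms of length N, howmany = K, with input and output
    //     contiguous (stride 1 within each transform, distance N between consecutive transforms)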
    fftw_plan plan_pruning = fftw_plan_many_dft(1, &N, K, h_xpruning, NULL, 1, N, h_xhatpruning_temp, NULL, 1, N, FFTW_FORWARD, FFTW_ESTIMATE);

    TimingCPU timerCPU;
    timerCPU.StartCounter();
    fftw_execute(plan_zp);
    printf("Stadard %f\n", timerCPU.GetCounter());

    timerCPU.StartCounter();
    double factor = -2. * PI_d / (K * N);
    for (int k = 0; k < K; k++) {
        double arg1 = factor * k;
        for (int n = 0; n < N; n++) {
            double arg = arg1 * n;
            double cosarg = cos(arg);
            double sinarg = sin(arg);
            h_xpruning[k * N + n][0] = h_x[n][0] * cosarg - h_x[n][1] * sinarg;
            h_xpruning[k * N + n][1] = h_x[n][0] * sinarg + h_x[n][1] * cosarg;
        }
    }
    printf("Optimized first step %f\n", timerCPU.GetCounter());

    timerCPU.StartCounter();
    fftw_execute(plan_pruning);
    printf("Optimized second step %f\n", timerCPU.GetCounter());
    timerCPU.StartCounter();
    for (int k = 0; k < K; k++) {
        for (int p = 0; p < N; p++) {
            h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
            h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
        }
    }
    printf("Optimized third step %f\n", timerCPU.GetCounter());

    double rmserror = 0., norm = 0.;
    for (int n = 0; n < N; n++) {
        rmserror = rmserror + (h_xhatpruning[n][0] - h_xhat[n][0]) * (h_xhatpruning[n][0] - h_xhat[n][0]) + (h_xhatpruning[n][1] - h_xhat[n][1]) * (h_xhatpruning[n][1] - h_xhat[n][1]);
        norm = norm + h_xhat[n][0] * h_xhat[n][0] + h_xhat[n][1] * h_xhat[n][1];
    }
    printf("rmserror %f\n", 100. * sqrt(rmserror / norm));

    fftw_destroy_plan(plan_zp);
    fftw_destroy_plan(plan_pruning);

    free(h_x); free(h_xzp); free(h_xpruning);
    free(h_xhatpruning); free(h_xhatpruning_temp); free(h_xhat);

    return 0;
}

The approach I have developed consists of three steps:

  1. Multiplying the input sequence by "twiddle" complex exponentials;
  2. Performing the fftw_many;
  3. Reorganizing the results.

The fftw_many call is faster than the single FFT on K * N input points. However, steps #1 and #3 completely destroy that gain. I would expect steps #1 and #3 to be computationally much lighter than step #2.

My questions are:

  1. How is it possible that steps #1 and #3 are so much more computationally demanding than step #2?
  2. How can I improve steps #1 and #3 to obtain a net gain over the "standard" approach?

Thank you very much for any hint.

EDIT

I'm working with Visual Studio 2013 and compiling in Release mode.

Digastric answered 16/11, 2016 at 15:46 Comment(3)
Did you compile your test code with optimisation enabled, e.g. gcc -O3 ...? – Midlothian
@PaulR I'm compiling using Visual Studio 2013 in Release mode. I have edited the post accordingly. – Digastric
@JackOLantern You can have the FFT calculations themselves run multithreaded. See fftw.org/doc/… And that's what those CPUs are for. Using them isn't "cheating". – Casarez

For the third step you might want to try switching the order of the loops:

for (int p = 0; p < N; p++) {
    for (int k = 0; k < K; k++) {
        h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
        h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
    }
}

since it's generally more beneficial to have the store addresses be contiguous than the load addresses.

Either way you have a cache-unfriendly access pattern though. You could try working with blocks to improve this, e.g. assuming N is a multiple of 4:

for (int p = 0; p < N; p += 4) {
    for (int k = 0; k < K; k++) {
        for (int p0 = 0; p0 < 4; p0++) {
            h_xhatpruning[(p + p0) * K + k][0] = h_xhatpruning_temp[(p + p0) + k * N][0];
            h_xhatpruning[(p + p0) * K + k][1] = h_xhatpruning_temp[(p + p0) + k * N][1];
        }
    }
}

This should help to reduce the churn of cache lines somewhat. If it does then maybe also experiment with block sizes other than 4 to see if there is a "sweet spot".
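For instance, one way to run that experiment is to make the block size a named constant (a sketch; BS must divide N, e.g. 2 or 5 when N = 10):

const int BS = 5;   /* block size to experiment with */
for (int p = 0; p < N; p += BS) {
    for (int k = 0; k < K; k++) {
        for (int p0 = 0; p0 < BS; p0++) {
            h_xhatpruning[(p + p0) * K + k][0] = h_xhatpruning_temp[(p + p0) + k * N][0];
            h_xhatpruning[(p + p0) * K + k][1] = h_xhatpruning_temp[(p + p0) + k * N][1];
        }
    }
}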

Midlothian answered 17/11, 2016 at 9:38 Comment(4)
Your first suggestion significantly improves the timing of the third step, see EDIT #3 to my post. Loop unrolling does not further improve the result. Many thanks for your answer. – Digastric
For the successful for-loop exchange you suggested, multi-threading just worsens the results. – Digastric
Finally, I have managed to improve my code, also following your suggestions. I have provided a full answer below. Thank you for your interest. – Digastric
@JackOLantern: great - thanks for following up on this and writing up your findings - could be very useful to other readers in the future! – Midlothian

Several options to run faster:

  1. Run multi-threaded if you're only running single-threaded and have multiple cores available.

  2. Create and save an FFTW wisdom file, especially if the FFT dimensions are known in advance. Plan with FFTW_EXHAUSTIVE, and reload the wisdom instead of recalculating it every time (a minimal sketch is shown after this list). This also matters if you want your results to be reproducible: FFTW may compute FFTs differently depending on the calculated wisdom, and since the wisdom is not guaranteed to be identical from run to run, different runs of your process may produce slightly different results for identical input data.

  3. If you're on x86, run 64-bit. The FFTW algorithm is extremely register-intensive, and an x86 CPU running in 64-bit mode has a lot more general-purpose registers available than it does when running in 32-bit mode.

  4. Since the FFTW algorithm is so register intensive, I've had good success improving FFTW performance by compiling FFTW with compiler options that prevent the use of prefetching and prevent the implicit inlining of functions.
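For point 2, a minimal sketch of saving and reloading wisdom might look like the following (the file name "fft_wisdom.dat" and the helper function are placeholders, not part of the question's code):

#include <fftw3.h>

/* Reuse previously accumulated wisdom if the file exists, plan (expensively, but
   only the first time) with FFTW_EXHAUSTIVE, then save the updated wisdom. */
fftw_plan make_exhaustive_plan(int n, fftw_complex *in, fftw_complex *out)
{
    fftw_import_wisdom_from_filename("fft_wisdom.dat");   /* returns 0 if no file exists yet */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_EXHAUSTIVE);
    fftw_export_wisdom_to_filename("fft_wisdom.dat");
    return p;
}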

Casarez answered 16/11, 2016 at 16:12 Comment(2)
Thank you very much for your answer. However, conceptually, I'm primarily interested in speeding up the two for loops rather than the FFTs. I have edited my post, adding an OpenMP solution for the two for loops, but I feel I'm making an unfair comparison: the two FFTs are sequential, while the for loops are multi-threaded. – Digastric
I was finally able to improve my code. I have posted a full answer. Thank you for your interest. – Digastric

Also following Paul R's comments, I have improved my code. Now, the alternative approach is faster than the standard (zero-padded) one. Below is the full C++ code. For steps #1 and #3, I have left, commented out, the other solutions I tried, which turned out to be slower than or as fast as the uncommented one. I have favored non-nested for loops, also with a view to a simpler future CUDA parallelization. I'm not yet using multi-threading for FFTW.

#include <stdio.h>
#include <stdlib.h>     /* srand, rand */
#include <time.h>       /* time */
#include <math.h>
#include <string.h>     /* memcpy */
#include <fstream>

#include <omp.h>

#include <fftw3.h>

#include "TimingCPU.h"

#define PI_d            3.141592653589793

/******************/
/* STEP #1 ON CPU */
/******************/
void step1CPU(fftw_complex * __restrict h_xpruning, const fftw_complex * __restrict h_x, const int N, const int K) {

//  double factor = -2. * PI_d / (K * N);
//  int n;
//  omp_set_nested(1);
//#pragma omp parallel for private(n) num_threads(4)
//  for (int k = 0; k < K; k++) {
//      double arg1 = factor * k;
//#pragma omp parallel for num_threads(4)
//      for (n = 0; n < N; n++) {
//          double arg = arg1 * n;
//          double cosarg = cos(arg);
//          double sinarg = sin(arg);
//          h_xpruning[k * N + n][0] = h_x[n][0] * cosarg - h_x[n][1] * sinarg;
//          h_xpruning[k * N + n][1] = h_x[n][0] * sinarg + h_x[n][1] * cosarg;
//      }
//  }

    //double factor = -2. * PI_d / (K * N);
    //int k;
    //omp_set_nested(1);
    //#pragma omp parallel for private(k) num_threads(4)
    //for (int n = 0; n < N; n++) {
    //  double arg1 = factor * n;
    //  #pragma omp parallel for num_threads(4)
    //  for (k = 0; k < K; k++) {
    //      double arg = arg1 * k;
    //      double cosarg = cos(arg);
    //      double sinarg = sin(arg);
    //      h_xpruning[k * N + n][0] = h_x[n][0] * cosarg - h_x[n][1] * sinarg;
    //      h_xpruning[k * N + n][1] = h_x[n][0] * sinarg + h_x[n][1] * cosarg;
    //  }
    //}

    //double factor = -2. * PI_d / (K * N);
    //for (int k = 0; k < K; k++) {
    //  double arg1 = factor * k;
    //  for (int n = 0; n < N; n++) {
    //      double arg = arg1 * n;
    //      double cosarg = cos(arg);
    //      double sinarg = sin(arg);
    //      h_xpruning[k * N + n][0] = h_x[n][0] * cosarg - h_x[n][1] * sinarg;
    //      h_xpruning[k * N + n][1] = h_x[n][0] * sinarg + h_x[n][1] * cosarg;
    //  }
    //}

    //double factor = -2. * PI_d / (K * N);
    //for (int n = 0; n < N; n++) {
    //  double arg1 = factor * n;
    //  for (int k = 0; k < K; k++) {
    //      double arg = arg1 * k;
    //      double cosarg = cos(arg);
    //      double sinarg = sin(arg);
    //      h_xpruning[k * N + n][0] = h_x[n][0] * cosarg - h_x[n][1] * sinarg;
    //      h_xpruning[k * N + n][1] = h_x[n][0] * sinarg + h_x[n][1] * cosarg;
    //  }
    //}

    double factor = -2. * PI_d / (K * N);
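    // The (k, n) double loop is flattened into a single loop of length K * N so that
    // OpenMP can split the iterations evenly; row is the twiddle index k and col is
    // the sample index n of the original length-N sequence.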
    #pragma omp parallel for num_threads(8)
    for (int n = 0; n < K * N; n++) {
        int row = n / N;
        int col = n % N;
        double arg = factor * row * col;
        double cosarg = cos(arg);
        double sinarg = sin(arg);
        h_xpruning[n][0] = h_x[col][0] * cosarg - h_x[col][1] * sinarg;
        h_xpruning[n][1] = h_x[col][0] * sinarg + h_x[col][1] * cosarg;
    }
}

/******************/
/* STEP #3 ON CPU */
/******************/
void step3CPU(fftw_complex * __restrict h_xhatpruning, const fftw_complex * __restrict h_xhatpruning_temp, const int N, const int K) {

    //int k;
    //omp_set_nested(1);
    //#pragma omp parallel for private(k) num_threads(4)
    //for (int p = 0; p < N; p++) {
    //  #pragma omp parallel for num_threads(4)
    //  for (k = 0; k < K; k++) {
    //      h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
    //      h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
    //  }
    //} 

    //int p;
    //omp_set_nested(1);
    //#pragma omp parallel for private(p) num_threads(4)
    //for (int k = 0; k < K; k++) {
    //  #pragma omp parallel for num_threads(4)
    //  for (p = 0; p < N; p++) {
    //      h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
    //      h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
    //  }
    //}

    //for (int p = 0; p < N; p++) {
    //  for (int k = 0; k < K; k++) {
    //      h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
    //      h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
    //  }
    //}

    //for (int k = 0; k < K; k++) {
    //  for (int p = 0; p < N; p++) {
    //      h_xhatpruning[p * K + k][0] = h_xhatpruning_temp[p + k * N][0];
    //      h_xhatpruning[p * K + k][1] = h_xhatpruning_temp[p + k * N][1];
    //  }
    //}

    #pragma omp parallel for num_threads(8)
    for (int p = 0; p < K * N; p++) {
        // p indexes h_xhatpruning_temp linearly: row = p / N is the twiddle index k (0..K-1),
        // col = p % N is the FFT bin index; each element is moved to its transposed position
        int col = p % N;
        int row = p / N;
        h_xhatpruning[col * K + row][0] = h_xhatpruning_temp[col + row * N][0];
        h_xhatpruning[col * K + row][1] = h_xhatpruning_temp[col + row * N][1];
    }

    //for (int p = 0; p < N; p += 2) {
    //  for (int k = 0; k < K; k++) {
    //      for (int p0 = 0; p0 < 2; p0++) {
    //          h_xhatpruning[(p + p0) * K + k][0] = h_xhatpruning_temp[(p + p0) + k * N][0];
    //          h_xhatpruning[(p + p0) * K + k][1] = h_xhatpruning_temp[(p + p0) + k * N][1];
    //      }
    //  }
    //}

}

/********/
/* MAIN */
/********/
int main() {

    int N = 10;
    int K = 100000;

    // --- CPU memory allocations
    fftw_complex *h_x = (fftw_complex *)malloc(N     * sizeof(fftw_complex));
    fftw_complex *h_xzp = (fftw_complex *)calloc(N * K, sizeof(fftw_complex));
    fftw_complex *h_xpruning = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhatpruning = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhatpruning_temp = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    fftw_complex *h_xhat = (fftw_complex *)malloc(N * K * sizeof(fftw_complex));
    //double2        *h_xhatGPU = (double2 *)malloc(N * K * sizeof(double2));


    // --- Random number generation of the data sequence on the CPU - moving the data from CPU to GPU
    srand(time(NULL));
    for (int k = 0; k < N; k++) {
        h_x[k][0] = (double)rand() / (double)RAND_MAX;
        h_x[k][1] = (double)rand() / (double)RAND_MAX;
    }
    //gpuErrchk(cudaMemcpy(d_x, h_x, N * sizeof(double2), cudaMemcpyHostToDevice));

    memcpy(h_xzp, h_x, N * sizeof(fftw_complex));

    // --- FFTW and cuFFT plans
    fftw_plan h_plan_zp      = fftw_plan_dft_1d(N * K, h_xzp, h_xhat, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan h_plan_pruning = fftw_plan_many_dft(1, &N, K, h_xpruning, NULL, 1, N, h_xhatpruning_temp, NULL, 1, N, FFTW_FORWARD, FFTW_ESTIMATE);

    double totalTimeCPU = 0., totalTimeGPU = 0.;
    double partialTimeCPU, partialTimeGPU;

    /****************************/
    /* STANDARD APPROACH ON CPU */
    /****************************/
    printf("Number of processors available = %i\n", omp_get_num_procs());
    printf("Number of threads              = %i\n", omp_get_max_threads());

    TimingCPU timerCPU;
    timerCPU.StartCounter();
    fftw_execute(h_plan_zp);
    printf("\nStadard on CPU: \t \t %f\n", timerCPU.GetCounter());

    /******************/
    /* STEP #1 ON CPU */
    /******************/
    timerCPU.StartCounter();
    step1CPU(h_xpruning, h_x, N, K);
    partialTimeCPU = timerCPU.GetCounter();
    totalTimeCPU = totalTimeCPU + partialTimeCPU;
    printf("\nOptimized first step CPU: \t %f\n", totalTimeCPU);

    /******************/
    /* STEP #2 ON CPU */
    /******************/
    timerCPU.StartCounter();
    fftw_execute(h_plan_pruning);
    partialTimeCPU = timerCPU.GetCounter();
    totalTimeCPU = totalTimeCPU + partialTimeCPU;
    printf("Optimized second step CPU: \t %f\n", timerCPU.GetCounter());

    /******************/
    /* STEP #3 ON CPU */
    /******************/
    timerCPU.StartCounter();
    step3CPU(h_xhatpruning, h_xhatpruning_temp, N, K);
    partialTimeCPU = timerCPU.GetCounter();
    totalTimeCPU = totalTimeCPU + partialTimeCPU;
    printf("Optimized third step CPU: \t %f\n", partialTimeCPU);

    printf("Total time CPU: \t \t %f\n", totalTimeCPU);

    double rmserror = 0., norm = 0.;
    for (int n = 0; n < N; n++) {
        rmserror = rmserror + (h_xhatpruning[n][0] - h_xhat[n][0]) * (h_xhatpruning[n][0] - h_xhat[n][0]) + (h_xhatpruning[n][1] - h_xhat[n][1]) * (h_xhatpruning[n][1] - h_xhat[n][1]);
        norm = norm + h_xhat[n][0] * h_xhat[n][0] + h_xhat[n][1] * h_xhat[n][1];
    }
    printf("\nrmserror %f\n", 100. * sqrt(rmserror / norm));

    fftw_destroy_plan(h_plan_zp);
    fftw_destroy_plan(h_plan_pruning);

    free(h_x); free(h_xzp); free(h_xpruning);
    free(h_xhatpruning); free(h_xhatpruning_temp); free(h_xhat);

    return 0;
}

For the case

N = 10
K = 100000

my timing is the following

Standard on CPU:                 23.895417

Optimized first step CPU:        4.472087
Optimized second step CPU:       4.926603
Optimized third step CPU:        2.394958
Total time CPU:                  11.793648
Digastric answered 18/11, 2016 at 15:39 Comment(0)
