How can I get the nvcc CUDA compiler to optimize more?

When using a C or C++ compiler, passing the -O3 switch makes execution faster. Is there something equivalent in CUDA?

I am compiling my code using the command nvcc filename.cu. After that I execute ./a.out.

Sunglass answered 30/4, 2017 at 13:10 Comment(1)

Warning: compiling with nvcc -O3 filename.cu passes the -O3 option to the host code only.

In order to optimize the CUDA kernel code, you must pass optimization flags to the PTX compiler (ptxas), for example:

nvcc -Xptxas -O3,-v filename.cu

will request optimization level 3 for the CUDA device code (this is already the default), while -v asks for a verbose compilation, which reports very useful per-kernel statistics (such as register and shared-memory usage) that we can consider for further optimization (more on this below).

Another speed optimization flag available for the nvcc compiler is -use_fast_math, which substitutes faster intrinsics for certain math functions at the expense of floating-point precision (see Options for Steering GPU Code Generation in the nvcc documentation).
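
For a sense of what fast math does, here is a minimal sketch that calls one such intrinsic by hand; the kernel and buffer names are invented for the example:

    // __expf() is the fast, reduced-precision intrinsic that
    // -use_fast_math would substitute for expf() automatically.
    __global__ void fast_exp(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = __expf(in[i]);
    }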

Anyway, in my experience, such automatic compiler optimizations generally do not achieve great boosts. The best performance is achieved through explicit coding optimizations, such as:

  1. Instruction Level Parallelism (ILP): let each CUDA thread execute its task on more than one element - this approach keeps the pipeline loaded and maximizes throughput. For example, suppose you want to process the elements of an NxN tile: you can launch an NxM block of threads (where M = N/2) and let each thread loop over 2 different element rows via threadIdx.y, obtaining two-way ILP (a minimal sketch follows this list).
  2. register spilling control: keep the number of registers used per kernel under control and experiment with the -maxrregcount=N option. The fewer registers a kernel requires, the more blocks are eligible to run concurrently (until register spilling takes over). A typical compile command is shown after this list.
  3. loop unrolling: try adding #pragma unroll N before any independent loop, if any, inside your CUDA kernel. N can be 2, 3, 4. The best results are met when you reach a good balance between register pressure and the achieved unrolling level. This approach falls into the ILP technique, after all (see the sketch below).
  4. data packing: sometimes you can join different correlated buffers, say float A[N], B[N], into one buffer of float2 AB[N] data. This translates into fewer operations for the load/store units and more efficient bus usage (see the last sketch below).
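
For item 1, a minimal sketch of two-way ILP on an NxN tile; the scaling kernel and the fixed tile size are invented for the example:

    #define N 32

    // Block is N x (N/2): each thread updates two rows of the tile,
    // keeping two independent memory operations in flight.
    __global__ void scale_tile(float *tile, float factor)
    {
        int x = threadIdx.x;
        for (int i = 0; i < 2; ++i) {
            int y = threadIdx.y + i * (N / 2);
            tile[y * N + x] *= factor;
        }
    }

    // launch: scale_tile<<<1, dim3(N, N / 2)>>>(d_tile, 2.0f);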
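
For item 2, a typical experiment caps the register count and checks the effect through the verbose ptxas output shown earlier (32 is just a starting value to tune):

    nvcc -maxrregcount=32 -Xptxas -O3,-v filename.cu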
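
For item 3, a sketch of an unrolled independent loop; the SAXPY-style kernel is only an illustration:

    __global__ void axpy(const float *x, float *y, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        // Ask the compiler to unroll 4 iterations of this grid-stride
        // loop; try 2, 3 or 4 and compare the register usage reported
        // by -Xptxas -v.
        #pragma unroll 4
        for (int j = i; j < n; j += blockDim.x * gridDim.x)
            y[j] = a * x[j] + y[j];
    }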
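
For item 4, a sketch of packing two correlated float buffers into one float2 buffer (the names are made up):

    // One 64-bit load per thread replaces two separate 32-bit loads
    // from the distinct buffers A[] and B[].
    __global__ void sum_packed(const float2 *AB, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float2 ab = AB[i];
            out[i] = ab.x + ab.y;
        }
    }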

Of course, always, always, always check that your code performs coalesced accesses to global memory and avoids bank conflicts in shared memory. Use the NVIDIA Visual Profiler to get deeper insight into such issues (a sketch of the coalescing point follows).
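
To illustrate the coalescing point, here is a minimal sketch contrasting an access pattern the hardware can merge into wide transactions with one it cannot (both copy kernels are invented for the example):

    // Coalesced: consecutive threads touch consecutive addresses, so a
    // warp's loads merge into a few wide memory transactions.
    __global__ void copy_coalesced(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }

    // Strided: thread i touches element i * stride, scattering the
    // warp's accesses over many transactions.
    __global__ void copy_strided(const float *in, float *out, int n, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i * stride < n)
            out[i * stride] = in[i * stride];
    }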

Emelia answered 8/5, 2017 at 11:2 Comment(2)
-Xptxas -O2 is only one half of setting the optimization level to 2; -Xcompiler -O2 is the other half. Both ptxas and cicc are optimizing compilers, each with its own (although partially overlapping) set of optimizations.Ranchero
What is the maximum optimisation level with ptxas? It gives no error with -O4, -O5, -O6 etc.Didactic

nvcc supports many options similar to those of CPU-targeting C/C++ compilers. These are covered in the nvcc documentation; you can also run nvcc --help to get a long description of them (perhaps nvcc --help | less to be able to scroll through them more easily).

The default optimization level for the device code is actually -O3 (unless you specified -G, for debugging, which disables most optimizations). You can instead specify -O0 or -O1 etc., but that will only disable optimizations.

If you only want to optimize the code which will run on the GPU, but not the code which will run on the CPU, you can pass a different optimization switch directly to ptxas, the device code compiler (via -Xptxas, as shown in the other answer).
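
For example, a sketch that keeps the host code at -O0 while asking ptxas for level 3 on the device code (filename.cu stands for your source file):

    nvcc -O0 -Xptxas -O3 filename.cu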

Also, if you write nvcc -o foo filename.cu, the resulting executable will be named foo rather than a.out, in case you want a meaningful name for it. This works the same as with C/C++ compilers.
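
Putting it together, a typical build-and-run sequence might look like this (foo is an arbitrary name):

    nvcc -o foo filename.cu
    ./foo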

Stricker answered 30/4, 2017 at 13:10 Comment(0)
