GPU shared memory size is very small - what can I do about it?

The size of shared memory ("local memory" in OpenCL terms) is only 16 KiB on most NVIDIA GPUs available today.
I have an application in which I need to create an array of 10,000 integers. The amount of memory needed to fit 10,000 integers is 10,000 × 4 bytes = 40,000 bytes, i.e. roughly 40 KB.

  • How can I work around this?
  • Is there any GPU that has more than 16 KiB of shared memory?
Pongee answered 13/2, 2011 at 11:04

Think of shared memory as explicitly managed cache. You will need to store your array in global memory and cache parts of it in shared memory as needed, either by making multiple passes or some other scheme which minimises the number of loads and stores to/from global memory.
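
For example, here is a minimal sketch of the multi-pass idea in CUDA: each block repeatedly stages a fixed-size tile of the large array from global memory into shared memory, works on it, and moves on. The kernel name, TILE size, and the trivial "multiply by two" operation are all illustrative placeholders, not something from your question:

#define TILE 256   // elements staged per pass; 256 * 4 bytes = 1 KiB of shared memory

__global__ void process(const int *in, int *out, int n)
{
    __shared__ int tile[TILE];   // explicitly managed "cache" for one chunk

    // Each block walks the array in TILE-sized strides; launch with blockDim.x == TILE.
    for (int base = blockIdx.x * TILE; base < n; base += gridDim.x * TILE)
    {
        int idx = base + threadIdx.x;

        if (idx < n)
            tile[threadIdx.x] = in[idx];        // stage: global -> shared
        __syncthreads();

        if (idx < n)
            out[idx] = 2 * tile[threadIdx.x];   // work on the cached chunk
        __syncthreads();                        // keep the tile intact until all threads are done
    }
}

// e.g. for the 10,000-element case: process<<<40, TILE>>>(d_in, d_out, 10000);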

How you implement this will depend on your algorithm - if you can give some details of what it is exactly that you are trying to implement you may get some more concrete suggestions.

One last point - be aware that shared memory is shared between all threads in a block, so you have far less than 16 KiB per thread unless you have a single data structure which is common to all threads in the block. With 256 threads per block, for example, an even split of 16 KiB leaves only 64 bytes per thread.

Flagg answered 13/2, 2011 at 11:30
Comment: +1 often it's easier to just use global memory and only consider performance optimizations with shared memory after your code works. – Yawata

All devices of compute capability 2.0 and greater (most released in the last year or two) have 48 KB of available shared memory per multiprocessor. That being said, Paul's answer is correct in that you likely will not want to load all 10K integers into a single multiprocessor.
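
If you want to check what your particular card offers, a minimal sketch using the CUDA runtime API (querying device 0) looks like this:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of device 0
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    return 0;
}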

Nailhead answered 26/4, 2012 at 23:21

You can try the cudaFuncSetCacheConfig(nameOfKernel, cudaFuncCachePrefer{Shared, L1}) function.

If you prefer L1 over shared, then 48 KB goes to L1 and 16 KB goes to shared memory. If you prefer shared over L1, then 48 KB goes to shared memory and 16 KB goes to L1. (This applies to compute capability 2.x devices, whose 64 KB of per-multiprocessor on-chip memory is split between L1 and shared memory.)

Usage:

cudaFuncSetCacheConfig(matrix_multiplication, cudaFuncCachePreferShared);
matrix_multiplication<<<bla, bla>>>(bla, bla, bla); 
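
A slightly more concrete, error-checked variant (the launch configuration and the d_A/d_B/d_C device pointers are placeholders, assuming a matrix_multiplication kernel as above):

cudaError_t err = cudaFuncSetCacheConfig(matrix_multiplication,
                                         cudaFuncCachePreferShared);
if (err != cudaSuccess)
    fprintf(stderr, "cudaFuncSetCacheConfig failed: %s\n", cudaGetErrorString(err));

dim3 grid(64, 64), block(16, 16);                      // placeholder launch configuration
matrix_multiplication<<<grid, block>>>(d_A, d_B, d_C); // placeholder arguments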
Janerich answered 12/4, 2014 at 21:56
Comment: And the link for further explanation: developer.download.nvidia.com/compute/cuda/4_1/rel/toolkit/docs/… – Admittedly
