The memory bus of your GPU isn't simply 48 bytes wide (which would be quite cumbersome, as it is not a power of 2). Instead, it is composed of 6 memory channels of 8 bytes (64 bits) each. Memory transactions are usually much wider than the channel width in order to take advantage of the memory's burst mode. Good transaction sizes start at 64 bytes, which produces a size-8 burst and matches nicely with the 16 32-bit words of a half-warp on compute capability 1.x devices.
128-byte-wide transactions are still a bit faster, and match the warp-wide 32-bit word accesses of compute capability 2.0 (and higher) devices. Cache lines are 128 bytes wide as well. Note that all of these accesses must be aligned on a multiple of the transaction width in order to map to a single memory transaction.
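As a minimal sketch of what such an aligned, coalesced access pattern looks like (the kernel name is made up for illustration; pointers returned by `cudaMalloc()` are aligned to at least 256 bytes, so the base-address requirement is satisfied):

```
// Consecutive threads of a warp access consecutive 32-bit words, so each
// warp's 32 x 4 bytes = 128 bytes map onto a single aligned 128-byte
// transaction (one cache line on compute capability 2.x).
__global__ void copyCoalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}
```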
Now regarding your actual problem: the best thing probably is to do nothing and let the cache sort it out. This works the same way as explicitly staging through shared memory, just that the cache hardware does it for you and no code is needed, which should make it slightly faster. The only thing to worry about is having enough cache available, so that each warp can use the necessary 32×32×4 bytes = 4 kB for word-wide (e.g. `float`) accesses, or 8 kB for `double` accesses. This means it can be beneficial to limit the number of warps that are active at the same time, to prevent them from thrashing each other's cache lines.
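As a hedged illustration of this cache-reliant approach, a naive transpose with no shared-memory staging could look like the sketch below (the kernel name and row-major layout are my assumptions, not something from the original discussion):

```
// Reads from `in` are coalesced; the strided writes to `out` touch 32
// different 128-byte cache lines per warp, i.e. the 4 kB per-warp footprint
// mentioned above, which the cache is left to absorb.
__global__ void transposeNaive(const float *in, float *out,
                               int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[x * height + y] = in[y * width + x];
}
```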
For special optimizations there is also the possibility of using vector types like `float2` or `float4`, as all CUDA-capable GPUs have load and store instructions that move 8 or 16 bytes into the same thread. However, on compute capability 2.0 and higher I don't really see any advantage in using them for the general matrix transpose case, as they increase the cache footprint of each warp even more.
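For illustration, a hypothetical copy kernel using `float4` (the name is mine): each thread moves 16 bytes per instruction, so a full warp touches 512 bytes, i.e. four 128-byte lines, per access, which is where the larger cache footprint comes from.

```
// `n4` counts float4 elements. float4 accesses must be 16-byte aligned;
// cudaMalloc() guarantees this for the base pointer.
__global__ void copyVec4(const float4 *in, float4 *out, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4)
        out[i] = in[i];
}
```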
As the default setting of 16 kB cache / 48 kB shared memory allows only four warps per SM to perform the transpose at any one time (provided you have no other memory accesses at the same time), it is probably beneficial to select the 48 kB cache / 16 kB shared memory split over the default 16 kB/48 kB split using `cudaDeviceSetCacheConfig()`. Newer devices have larger caches and offer additional splits, as well as the option to opt in to more than 48 kB of shared memory. The details can be found in the linked documentation.
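A minimal host-side sketch of requesting that split: `cudaFuncCachePreferL1` asks for the larger L1 configuration, and the runtime treats it as a hint, falling back to a supported configuration if needed.

```
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    // Prefer 48 kB L1 / 16 kB shared memory (on devices that support this split).
    cudaError_t err = cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaDeviceSetCacheConfig: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... launch the transpose kernel here ...
    return 0;
}
```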
For completeness, I'll also mention that the warp shuffle instructions introduced with compute capability 3.0 allow threads to exchange register data within a warp without going through the cache or through shared memory. See Appendix B.22 of the CUDA C Programming Guide for details.
(Note that a version of the Programming Guide exists without this appendix. So if in your copy Appendix B.22 is about something else, reload it through the link provided.)
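A minimal sketch of such a register exchange, written with the `*_sync` shuffle variants available since CUDA 9 (the original non-sync intrinsics introduced with compute capability 3.0 work analogously but are deprecated); the kernel name is made up:

```
// Each thread swaps its register value with the lane whose index differs in
// bit 0 (lanes 0<->1, 2<->3, ...), with no shared memory or cache traffic.
__global__ void shuffleSwap(float *data)
{
    float v = data[threadIdx.x];
    v = __shfl_xor_sync(0xffffffff, v, 1);
    data[threadIdx.x] = v;
}

// Launch with a single warp, e.g.: shuffleSwap<<<1, 32>>>(d_data);
```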