Scenario:
I have two machines, a client and a server, connected over InfiniBand. The server machine has an NVIDIA Fermi GPU; the client machine has no GPU. An application running on the server uses the GPU for some calculations. The result data on the GPU is never used by the server itself, but is instead sent directly to the client machine without any processing. Right now I'm doing a cudaMemcpy to get the data from the GPU into the server's system memory, then sending it off to the client over a socket. I'm using SDP to enable RDMA for this communication.
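Roughly, the current path looks like this (a minimal sketch; the buffer names, sizes, and error handling are placeholders, and the SDP socket setup is assumed to happen elsewhere):

/* Current approach: stage the result in host memory, then send it
 * over an ordinary stream socket (SDP is assumed to be applied
 * transparently, e.g. via libsdp preloading). */
#include <cuda_runtime.h>
#include <sys/socket.h>

void send_result(int sock, const void *d_result, size_t nbytes, void *h_staging)
{
    /* 1. Copy the result from device memory to a host staging buffer. */
    cudaMemcpy(h_staging, d_result, nbytes, cudaMemcpyDeviceToHost);

    /* 2. Ship the staging buffer to the client over the (SDP) socket. */
    send(sock, h_staging, nbytes, 0);
}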
Question:
Is it possible for me to take advantage of NVIDIA's GPUDirect technology to get rid of the cudaMemcpy call in this situation? I believe I have the GPUDirect drivers correctly installed, but I don't know how to initiate the data transfer without first copying the data to the host.
My guess is that it isn't possible to use SDP in conjunction with GPUDirect, but is there some other way to initiate an RDMA data transfer from the server machine's GPU to the client machine?
Bonus: If someone has a simple way to test whether I have the GPUDirect dependencies correctly installed, that would be helpful as well!
Comments:
You could use cudaMemcpyAsync to make the copy asynchronous w.r.t. the host. – Telegraphese
That still requires a cudaMemcpy call. What I'm looking for is a way to transfer directly from the GPU to memory on another host using RDMA and InfiniBand. – Paphos
You could pin the host memory (allocate it with cudaMallocHost), or use the cudaHostRegister function. I guess you just have to pin the memory, and GPUDirect would enable the RDMA transfer if the setup is okay (if your throughput after doing this is better than it is now, you can be certain of the improvement). As far as I know, GPUDirect only accelerates cudaMemcpy; the call itself cannot be removed. If you have many memcpy calls (H2D, D2H), you could just use cudaMemcpyDefault. (See the sketch below.) – Telegraphese
I was hoping I could use cudaHostRegister to set up the client as a remote host and then do a cudaMemcpy call to transfer directly from the GPU to the client. – Paphos
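For reference, here is a minimal sketch of what the pinned-buffer suggestion from the comments might look like: one host buffer registered with both CUDA (cudaHostRegister) and the verbs layer (ibv_reg_mr), so the device-to-host copy lands directly in the RDMA-registered buffer. This assumes you drive the transfer with the verbs API yourself rather than through SDP; the protection-domain setup, buffer size, and access flags are assumptions, and the cudaMemcpy itself is not eliminated.

/* Hypothetical sketch: share one pinned host buffer between CUDA and
 * the InfiniBand HCA (GPUDirect v1 style). Error handling and the
 * ibv_* context/PD setup are omitted. */
#include <cuda_runtime.h>
#include <infiniband/verbs.h>
#include <stdlib.h>

struct ibv_mr *setup_shared_buffer(struct ibv_pd *pd, void **buf, size_t nbytes)
{
    posix_memalign(buf, 4096, nbytes);                         /* page-aligned host buffer */
    cudaHostRegister(*buf, nbytes, cudaHostRegisterDefault);   /* pin it for CUDA          */
    return ibv_reg_mr(pd, *buf, nbytes,                        /* register it for RDMA     */
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
}

void copy_result(void *buf, const void *d_result, size_t nbytes)
{
    /* With GPUDirect v1 this D2H copy writes straight into the buffer the
     * HCA will read from; the copy itself still happens. */
    cudaMemcpy(buf, d_result, nbytes, cudaMemcpyDeviceToHost);
}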