It seems the Julia language is already able to do multiprecision arithmetic and use the GPU (see here for a simple example combining the two), but you would have to learn Julia and rewrite your code.
The CUMP library is meant to be a GMP substitute for CUDA. It tries to ease the porting of GMP code to CUDA by offering an interface similar to GMP's: for example, you replace mpf_... variables and functions with cumpf_... ones. There is a demo you can build if it fits your CUDA setup. There is no documentation or support, though, so you'll have to go through the code to see if it works.
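To give a feel for the porting effort, here is a plain GMP snippet using the mpf_ interface; per CUMP's stated design, the port would mainly swap the prefixes. The cumpf_ names in the comments just follow that description and are not verified against CUMP's headers.

```c
/* Host-side GMP code; per CUMP's GMP-like interface, a port would
 * replace the mpf_ prefix with cumpf_ (exact CUMP signatures are
 * unverified, since the library is undocumented).
 * Compile with: gcc demo.c -lgmp */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpf_t a, b, sum;                          /* -> cumpf_t?       */
    mpf_init2(a, 256);                        /* 256-bit precision */
    mpf_init2(b, 256);                        /* -> cumpf_init2?   */
    mpf_init2(sum, 256);

    mpf_set_str(a, "1.2345678901234567890123456789", 10);
    mpf_set_str(b, "9.8765432109876543210987654321", 10);

    mpf_add(sum, a, b);                       /* -> cumpf_add?     */
    gmp_printf("sum = %.29Ff\n", sum);

    mpf_clear(a); mpf_clear(b); mpf_clear(sum);
    return 0;
}
```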
The CAMPARY library from people at LAAS-CNRS might be worth a shot as well, but again there is no documentation. It has been applied in more research than CUMP, so the odds are somewhat better. An answer here gives some clarification on how to use it.
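Since the library itself is undocumented, what follows is not CAMPARY's API but a minimal CUDA sketch of the technique it builds on: floating-point expansions assembled from error-free transformations (Knuth's TwoSum), where one high-precision value is kept as an unevaluated sum of ordinary doubles. All names here (two_sum, dd_add) are mine, not the library's.

```cuda
// Sketch of the error-free-transformation idea behind CAMPARY-style
// floating-point expansions; names are illustrative, not CAMPARY's API.
#include <cstdio>
#include <cmath>

// Knuth's TwoSum: s = fl(a + b) and err such that s + err == a + b exactly.
__device__ void two_sum(double a, double b, double &s, double &err)
{
    s = a + b;
    double bv = s - a;
    err = (a - (s - bv)) + (b - bv);
}

// Add two double-double values (2-term expansions stored as hi/lo pairs).
__global__ void dd_add(const double2 *x, const double2 *y, double2 *r, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double s, e;
    two_sum(x[i].x, y[i].x, s, e);   // exact sum of the high parts
    e += x[i].y + y[i].y;            // fold in the low parts
    two_sum(s, e, r[i].x, r[i].y);   // renormalize back to hi/lo form
}

int main()
{
    // (1 + 2^-60) + (1 - 2^-60): a plain double add would lose the tails.
    double tiny = ldexp(1.0, -60);
    double2 hx = make_double2(1.0,  tiny);
    double2 hy = make_double2(1.0, -tiny);
    double2 hr, *dx, *dy, *dr;
    cudaMalloc(&dx, sizeof hx); cudaMalloc(&dy, sizeof hy);
    cudaMalloc(&dr, sizeof hr);
    cudaMemcpy(dx, &hx, sizeof hx, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, &hy, sizeof hy, cudaMemcpyHostToDevice);
    dd_add<<<1, 1>>>(dx, dy, dr, 1);
    cudaMemcpy(&hr, dr, sizeof hr, cudaMemcpyDeviceToHost);
    printf("hi = %.17g, lo = %.17g\n", hr.x, hr.y);   // expect 2 and 0
    cudaFree(dx); cudaFree(dy); cudaFree(dr);
    return 0;
}
```

CAMPARY generalizes this building block to n-term expansions with certified error bounds; the sketch only shows the 2-term (double-double) case.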
GRNS uses the residue number system on CUDA-compatible processors. It has no documentation, but an interesting paper is available; also see this one.
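For intuition, here is a tiny, self-contained illustration of residue arithmetic itself (generic code, not GRNS's API): each number is stored as its residues modulo pairwise-coprime moduli, addition is done independently per residue with no carry chains (which is what makes RNS attractive on GPUs), and the value is recovered via the Chinese Remainder Theorem.

```c
/* Generic RNS illustration (not GRNS's actual API): numbers live as
 * residues mod pairwise-coprime moduli; add/mul are per-digit with no
 * carries, so each residue channel can run in its own GPU thread. */
#include <stdio.h>

#define K 3
static const long M[K] = {7, 11, 13};      /* product 1001 -> range [0, 1000] */

void to_rns(long x, long r[K]) {
    for (int i = 0; i < K; i++) r[i] = x % M[i];
}

long from_rns(const long r[K]) {           /* Chinese Remainder Theorem */
    long prod = 1;
    for (int i = 0; i < K; i++) prod *= M[i];
    long x = 0;
    for (int i = 0; i < K; i++) {
        long p = prod / M[i];
        /* inverse of p mod M[i] by brute force (fine for tiny moduli) */
        long inv = 1;
        while ((p * inv) % M[i] != 1) inv++;
        x = (x + r[i] * p % prod * inv) % prod;
    }
    return x;
}

int main(void) {
    long a[K], b[K], c[K];
    to_rns(123, a);
    to_rns(456, b);
    for (int i = 0; i < K; i++)            /* carry-free, fully parallel add */
        c[i] = (a[i] + b[i]) % M[i];
    printf("123 + 456 = %ld (via RNS)\n", from_rns(c));  /* prints 579 */
    return 0;
}
```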
XMP comes directly from NVIDIA Labs, but it seems incomplete and also lacks documentation. There is some info here and a paper here.
XMP 2.0 seems newer but, for now, only supports sizes up to 32k bits.
GPUMP looks promising, but it does not seem to be available for download; perhaps it can be obtained by contacting the authors.
MPRES-BLAS is a multiprecision library for linear algebra on CUDA, and it does, of course, include code for basic arithmetic. There are also papers here and here.
I haven't tested any of these libraries, so please let us know if any of them work well.