CUDA: Wrapping device memory allocation in C++

I'm just starting to use CUDA and, I have to admit, I'm a bit disappointed with the C API. I understand the reasons for choosing C, but had the language been based on C++ instead, several aspects would have been a lot simpler, e.g. device memory allocation (via cudaMalloc).

My plan was to do this myself, using an overloaded placement operator new on the one hand and an RAII wrapper on the other (two alternatives). I'm wondering whether there are any caveats I haven't noticed so far. The code seems to work, but I'm still wondering about potential memory leaks.

The usage of the RAII code would be as follows:

CudaArray<float> device_data(SIZE);
// Use `device_data` as if it were a raw pointer.

Perhaps a class is overkill in this context (especially since you'd still have to use cudaMemcpy, the class only encapsulating RAII), so the other approach would be placement new:

float* device_data = new (cudaDevice) float[SIZE];
// Use `device_data` …
operator delete [](device_data, cudaDevice);

Here, cudaDevice merely acts as a tag to trigger the overload. However, since in normal placement new this would indicate the placement, I find the syntax oddly consistent and perhaps even preferable to using a class.

I'd appreciate criticism of every kind. Does somebody perhaps know whether something in this direction is planned for the next version of CUDA (which, as I've heard, will improve its C++ support, whatever they mean by that)?

So, my question is actually threefold:

  1. Is my placement new overload semantically correct? Does it leak memory?
  2. Does anybody have information about future CUDA developments that go in this general direction (let's face it: C interfaces in C++ s*ck)?
  3. How can I take this further in a consistent manner (there are other APIs to consider, e.g. there's not only device memory but also a constant memory store and texture memory)?

#include <cstddef>        // std::size_t
#include <new>            // std::bad_alloc
#include <cuda_runtime.h> // cudaMalloc, cudaFree

// Singleton tag for CUDA device memory placement.
struct CudaDevice {
    static CudaDevice const& get() { return instance; }
private:
    static CudaDevice const instance;
    CudaDevice() { }
    CudaDevice(CudaDevice const&);              // non-copyable
    CudaDevice& operator =(CudaDevice const&);  // non-assignable
} const& cudaDevice = CudaDevice::get();

CudaDevice const CudaDevice::instance;

inline void* operator new [](std::size_t nbytes, CudaDevice const&) {
    void* ret = 0;
    if (cudaMalloc(&ret, nbytes) != cudaSuccess)
        throw std::bad_alloc();
    return ret;
}

inline void operator delete [](void* p, CudaDevice const&) throw() {
    cudaFree(p);
}

// RAII wrapper around a device allocation. Note: T should be a POD
// type (e.g. float); for non-POD T the new[] expression would run
// constructors on the host against device memory, and the compiler
// may prepend an array cookie that offsets the returned pointer.
template <typename T>
class CudaArray {
public:
    explicit
    CudaArray(std::size_t size) : size(size), data(new (cudaDevice) T[size]) { }

    // Implicit conversion so the wrapper passes straight into the C API.
    operator T* () { return data; }

    ~CudaArray() {
        // A plain `delete[]` expression cannot select a placement
        // overload, so call it explicitly.
        operator delete [](data, cudaDevice);
    }

private:
    std::size_t const size;
    T* const data;

    // Non-copyable: the wrapper owns the device allocation.
    CudaArray(CudaArray const&);
    CudaArray& operator =(CudaArray const&);
};
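
For completeness, here's a usage sketch of the wrapper together with cudaMemcpy (SIZE and the kernel launch are just placeholders):

#include <vector>

void example() {
    std::size_t const SIZE = 1024;             // placeholder
    std::vector<float> host_data(SIZE, 1.0f);  // host-side buffer

    CudaArray<float> device_data(SIZE);

    // The implicit conversion to float* feeds the wrapper
    // straight into the C API.
    cudaMemcpy(device_data, &host_data[0], SIZE * sizeof(float),
               cudaMemcpyHostToDevice);

    // ... launch a kernel on device_data ...

    cudaMemcpy(&host_data[0], device_data, SIZE * sizeof(float),
               cudaMemcpyDeviceToHost);
}   // device memory released here by ~CudaArray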

About the singleton employed here: yes, I'm aware of its drawbacks. However, these aren't relevant in this context. All I needed was a small type tag that isn't copyable; everything else (i.e. multithreading considerations and time of initialization) doesn't apply.

Smelt answered 18/11, 2008 at 18:59 Comment(2)
Your implementation of singleton is dangerous at best. Please see the many other discussions about how to create a singleton in C++. – Concerto
Yea, you're right. However, see my new clarification below the code. – Smelt

I would go with the placement new approach. Then I would define a class that conforms to the std::allocator<> interface. In theory, you could pass this class as a template parameter into std::vector<> and std::map<> and so forth.

Beware, I have heard that doing such things is fraught with difficulty, but at least you will learn a lot more about the STL this way. And you do not need to re-invent your containers and algorithms.
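
To give the idea, here is a minimal sketch (using the slimmer C++11 allocator requirements; the pre-C++11 std::allocator interface additionally wants rebind, construct, destroy and friends). The name cuda_allocator is made up. Beware that a std::vector backed by it would still construct and dereference elements through host-side pointers, which is part of why this is fraught with difficulty:

#include <cstddef>
#include <new>
#include <cuda_runtime.h>

// Sketch of an allocator over cudaMalloc/cudaFree.
template <typename T>
struct cuda_allocator {
    typedef T value_type;

    T* allocate(std::size_t n) {
        void* p = 0;
        if (cudaMalloc(&p, n * sizeof(T)) != cudaSuccess)
            throw std::bad_alloc();
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t) {
        cudaFree(p);
    }
};

// All instances are interchangeable.
template <typename T, typename U>
bool operator ==(cuda_allocator<T> const&, cuda_allocator<U> const&) { return true; }
template <typename T, typename U>
bool operator !=(cuda_allocator<T> const&, cuda_allocator<U> const&) { return false; }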

Enshrine answered 19/11, 2008 at 1:26 Comment(1)
Hadn't thought about an allocator. I've actually done this before, so it shouldn't be too difficult. – Smelt

In the meantime there have been some further developments (not so much in terms of the CUDA API itself, but at least in terms of projects attempting an STL-like approach to CUDA data management).

Most notably, there is a project from NVIDIA Research: Thrust.
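
For example, with Thrust the manual cudaMalloc/cudaMemcpy dance from the question shrinks to something like this (a sketch; SIZE is a placeholder):

#include <cstddef>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

int main() {
    std::size_t const SIZE = 1024;              // placeholder
    thrust::host_vector<float> h(SIZE, 1.0f);   // host-side data
    thrust::device_vector<float> d = h;         // allocates device memory and copies
    // Raw device pointer, e.g. for a hand-written kernel:
    float* raw = thrust::raw_pointer_cast(d.data());
    (void) raw;
    h = d;                                      // copies back to the host
    return 0;
}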

Grady answered 22/7, 2010 at 6:51 Comment(0)

There are several projects that attempt something similar, for example CUDPP.

In the meantime, however, I've implemented my own allocator, and it works well and was straightforward (more than 95% of it is boilerplate code).

Smelt answered 19/11, 2008 at 17:55 Comment(2)
The stdcuda link is dead. – Elata
@Elata Thanks. Stands to reason that a 10-year-old answer would at some point be outdated. I've removed the link. – Smelt

Does anybody have information about future CUDA developments that go in this general direction (let's face it: C interfaces in C++ s*ck)?

Yes, I've done something like that:

https://github.com/eyalroz/cuda-api-wrappers/

NVIDIA's Runtime API for CUDA is intended for use in both C and C++ code. As such, it uses a C-style API, the lowest common denominator (with a few notable exceptions of templated function overloads).

This library of wrappers around the Runtime API is intended to allow us to embrace many of the features of C++ (including some C++11) when using the Runtime API, but without reducing expressivity or raising the level of abstraction (as in, e.g., the Thrust library). Using cuda-api-wrappers, you still have your devices, streams, events and so on, but they will be more convenient to work with in more C++-idiomatic ways.
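
To give a flavor, here is a sketch along the lines of the project's README; treat the exact header and identifiers as assumptions, since the API has evolved across versions:

#include <cuda/api_wrappers.hpp>  // header name as of the 2018-era releases (assumption)

void example() {
    // Names per the README of the time; they may differ in newer versions.
    auto device = cuda::device::current::get();
    auto buffer = cuda::memory::device::make_unique<float[]>(device, 1024);
    // `buffer` frees the device allocation on scope exit, much like
    // the CudaArray wrapper from the question.
}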

Elata answered 22/2, 2018 at 9:33 Comment(0)
