C++ custom allocator that utilizes an underlying memory pool
I'm using a memory pool class which reuses allocated memory addresses and a custom allocator which wraps that class. The following code snippet gives you a basic idea of the interface.

template<class alloc>
class memory_pool
    : boost::noncopyable,
      public allocator_traits<void>
{
public:
    memory_pool(typename alloc::size_type alloc_size);
    memory_pool(typename alloc::size_type alloc_size, alloc const&);
    template<typename U> memory_pool(typename alloc::size_type alloc_size,
        typename alloc::template rebind<U>::other const&);
    virtual ~memory_pool();

    pointer allocate  (); /*throw(std::bad_alloc)*/
    void    collect   ();
    void    deallocate(pointer) throw(); /*noexcept*/
};

pointer allocate()
{/*
    Checks if a suitable chunk of memory is available in an internal linked list.
    If true, then the chunk is returned and the next chunk moves up.
    Otherwise, new memory is allocated by the underlying allocator.
*/}

void deallocate(pointer)
{/*
    Interprets the passed pointer as a chunk of memory and stores it in a linked list.
    Please note that memory isn't actually deallocated.
*/}

void collect()
{/*
    Effectively deallocates the chunks in the linked list.
    This will be called at least once during destruction.
*/}
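
A minimal sketch of the free-list technique described above, assuming the chunk size is at least sizeof(void*) (the names pool_allocate, pool_deallocate and m_head are invented for illustration, not taken from the actual implementation):

#include <cstddef>
#include <new>

// Each free chunk stores the pointer to the next free chunk inside
// itself, so the free list needs no extra memory.
struct chunk { chunk* next; };

chunk* m_head = 0; // head of the free list; the pool starts out empty

void* pool_allocate(std::size_t chunk_size)
{
    if (m_head) // a suitable chunk is available in the list
    {
        chunk* c = m_head;
        m_head = c->next; // the next chunk moves up
        return c;
    }
    return ::operator new(chunk_size); // otherwise use the underlying allocator
}

void pool_deallocate(void* p)
{
    // Interpret the memory as a list node and push it onto the free
    // list; nothing is actually returned to the system here.
    chunk* c = static_cast<chunk*>(p);
    c->next = m_head;
    m_head = c;
}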

Sure, the need for something like this is limited. However, it's very useful in situations where you need to:

- Specify an allocator type for a class which uses that allocator in a very naive way (e.g. avoids allocating larger pieces even where it would be advisable).
- Allocate and deallocate the same size of memory over and over again.
- Use the allocator for a type of very small size (e.g. built-in types like char, short, int etc.).

Theoretically, an implementation could take advantage of a memory_pool which, each time it needs memory from the underlying memory manager, allocates a multiple of the actual allocation size. Objects that are close together are more suitable for any cache and/or prefetching algorithm. I've implemented such a memory pool, with some overhead to handle the correct allocation, splitting and deallocation (we cannot deallocate every address the user passes to deallocate; we must deallocate only those addresses which mark the beginning of a memory block we previously allocated).
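
To make the splitting and block-start bookkeeping concrete, here is a self-contained sketch of what such a blockwise pool could look like (class and member names are invented, not taken from the actual implementation):

#include <cstddef>
#include <new>
#include <vector>

// Grab one large block per call to the underlying memory manager,
// split it into chunks, and remember the block starts, because only
// those may be passed back to operator delete.
class blockwise_pool
{
public:
    blockwise_pool(std::size_t chunk_size, std::size_t chunks_per_block)
        : m_chunk_size(chunk_size < sizeof(chunk) ? sizeof(chunk) : chunk_size),
          m_chunks_per_block(chunks_per_block),
          m_head(0)
    {}

    ~blockwise_pool() { collect(); }

    void* allocate()
    {
        if (!m_head)
            grow(); // free list empty: fetch and split a new block
        chunk* c = m_head;
        m_head = c->next;
        return c;
    }

    void deallocate(void* p) // push back onto the free list, release nothing
    {
        chunk* c = static_cast<chunk*>(p);
        c->next = m_head;
        m_head = c;
    }

    // Release the recorded block starts, never user-supplied chunk
    // addresses. A real implementation has to ensure no chunk is still
    // in use; this sketch assumes everything has been deallocated, as
    // is the case during destruction.
    void collect()
    {
        for (std::size_t i = 0; i != m_blocks.size(); ++i)
            ::operator delete(m_blocks[i]);
        m_blocks.clear();
        m_head = 0;
    }

private:
    struct chunk { chunk* next; };

    void grow()
    {
        char* block = static_cast<char*>(
            ::operator new(m_chunk_size * m_chunks_per_block));
        m_blocks.push_back(block); // remember the block start
        for (std::size_t i = 0; i != m_chunks_per_block; ++i)
            deallocate(block + i * m_chunk_size); // thread chunks into the list
    }

    std::size_t m_chunk_size;
    std::size_t m_chunks_per_block;
    chunk* m_head;
    std::vector<void*> m_blocks;
};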

I've tested both cases with the following really simple code:

#include <ctime>
#include <iostream>
#include <list>

std::list<int, allocator<int>> list; // allocator<int> wraps the memory pool

std::clock_t t = std::clock();
for (int i = 0; i < 1 << 16; ++i)
{
    for (int j = 0; j < 1 << 16; ++j)
        list.push_back(j);
    list.unique(); // touches and compares every element
    for (int j = 0; j < 1 << 16; ++j)
        list.pop_back();
}
std::cout << (std::clock() - t) / CLOCKS_PER_SEC << std::endl;

std::list calls allocator::allocate(1, 0) each time push_back is called, and unique() makes sure that each element is touched and compared to the next one. However, the result was disappointing: the minimal overhead needed to manage the blockwise-allocating memory pool is greater than any possible advantage the system gains.

Can you think of a scenario where it will improve performance?

EDIT: Of course, it's much faster than std::allocator.

Photoelectric answered 6/7, 2011 at 12:45 Comment(1)
Please note that the wrapping allocator cannot allocate an array. – Photoelectric
C++0x has better support for scoped allocators such as memory pools.
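
For instance, a minimal C++11 stateful allocator can hold a pointer to a pool and still satisfy the container requirements through std::allocator_traits. This is a sketch with invented names; pool is a stand-in for a class like the memory_pool above, forwarding to the global heap so the example is self-contained:

#include <cstddef>
#include <list>
#include <new>

struct pool
{
    void* allocate(std::size_t n)          { return ::operator new(n); }
    void  deallocate(void* p, std::size_t) { ::operator delete(p); }
};

template <typename T>
struct pool_allocator
{
    typedef T value_type;

    explicit pool_allocator(pool* p) : m_pool(p) {}
    template <typename U>
    pool_allocator(pool_allocator<U> const& other) : m_pool(other.m_pool) {}

    T* allocate(std::size_t n)
    { return static_cast<T*>(m_pool->allocate(n * sizeof(T))); }

    void deallocate(T* p, std::size_t n)
    { m_pool->deallocate(p, n * sizeof(T)); }

    pool* m_pool; // per-instance state: containers copy and compare allocators
};

// Two allocators are interchangeable exactly when they share a pool.
template <typename T, typename U>
bool operator==(pool_allocator<T> const& a, pool_allocator<U> const& b)
{ return a.m_pool == b.m_pool; }

template <typename T, typename U>
bool operator!=(pool_allocator<T> const& a, pool_allocator<U> const& b)
{ return !(a == b); }

int main()
{
    pool p;
    std::list<int, pool_allocator<int> > l((pool_allocator<int>(&p)));
    l.push_back(42); // each node comes from p
}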

Profile your code; it's very hard to predict what advantages this will confer unless your algorithm performs very regular allocation/deallocation patterns, such as LIFO.

It's dead easy to write a very fast allocator when all the allocated objects are the same size. Once I wrote something along the lines of

#include <cstddef>

template <std::size_t ObjectSize> class fixed_allocator {
    // ... hands out chunks of exactly ObjectSize bytes
};

template <typename T> class allocator : public fixed_allocator<sizeof(T)> {
    // ... all types of the same size share one underlying pool
};

Before you can design an allocator you need to be sure what will be allocated and how. The answers for operator new are "anything" and "anyhow", which is why it's sometimes sub-optimal. If you can't answer these questions properly, your allocator probably won't be a big improvement.

Hartshorn answered 6/7, 2011 at 12:57 Comment(3)
Quoted from your link: "it is possible to use an allocator with state, say an allocator that holds a pointer to an arena from which to allocate". Why should it be impossible with C++03? – Photoelectric
For STL at least, allocators are defined to be per-type, not per-instance. You can write your own stateful allocators, but they will break the established library implementation if you use them in std::vector or one of its brethren. – Hartshorn
I've done it that way before (using the ObjectSize as a compile-time expression). It's nice to read but limits the use. You may have noticed I'm only working with void* at this stage. – Photoelectric
Can you think of a scenario where it will improve performance?

Something that does a lot of allocations and deallocations (10k+ per second) would benefit from not having to run through complex bookkeeping on every alloc/free. However, that holds only if the combined saving from batching the allocs/frees into groups is more than the cost of handling a group (basically, you need to amortize the group overhead against the per-unit saving). For example, if one block allocation serves 64 chunks, its cost is spread across 64 allocate calls.

The use of contiguous memory would help any node/pointer-based structures like trees (but only to a point). However, the real-world benefits can be drastically different (or non-existent!) from what they were planned to be, which is why, when walking down the road of creating custom systems like this, you should be profiling your code and already have a picture in mind of how it will be used (i.e. there is no point in making a custom pool allocator for something that does so few allocations that the speed gain doesn't matter at all).

Something like this can be handy for debugging, however, as it gives you a nice interface for tagging memory to monitor leaks and overwrites/invalid writes. So even if it has the same performance as the standard system, you can gain in other ways.

Astomatous answered 6/7, 2011 at 13:13 Comment(0)
