Any reason to overload global new and delete?

Unless you're programming parts of an OS or an embedded system, are there any reasons to do so? I can imagine that for particular classes that are created and destroyed frequently, overloading the memory management functions or introducing a pool of objects might lower the overhead, but doing these things globally?

Addition
I've just found a bug in an overloaded delete function: memory wasn't always freed. And that was in an application that wasn't especially memory-critical. Also, disabling these overloads decreases performance by only ~0.5%.

Gereron answered 20/7, 2009 at 9:11 Comment(0)

We overload the global new and delete operators where I work for many reasons:

  • pooling all small allocations -- decreases overhead, decreases fragmentation, can increase performance for small-alloc-heavy apps
  • framing allocations with a known lifetime -- ignore all the frees until the very end of this period, then free all of them together (admittedly we do this more with local operator overloads than global)
  • alignment adjustment -- to cacheline boundaries, etc
  • alloc fill -- helping to expose usage of uninitialized variables
  • free fill -- helping to expose usage of previously deleted memory
  • delayed free -- increasing the effectiveness of free fill, occasionally increasing performance
  • sentinels or fenceposts -- helping to expose buffer overruns, underruns, and the occasional wild pointer
  • redirecting allocations -- to account for NUMA, special memory areas, or even to keep separate systems separate in memory (for e.g. embedded scripting languages or DSLs)
  • garbage collection or cleanup -- again useful for those embedded scripting languages
  • heap verification -- you can walk through the heap data structure every N allocs/frees to make sure everything looks ok
  • accounting, including leak tracking and usage snapshots/statistics (stacks, allocation ages, etc)

The idea of new/delete accounting is really flexible and powerful: you can, for example, record the entire callstack for the active thread whenever an alloc occurs, and aggregate statistics about that. You could ship the stack info over the network if you don't have space to keep it locally for whatever reason. The types of info you can gather here are only limited by your imagination (and performance, of course).

We use global overloads because it's convenient to hang lots of common debugging functionality there, as well as make sweeping improvements across the entire app, based on the statistics we gather from those same overloads.

We still do use custom allocators for individual types too; in many cases the speedup or capabilities you can get by providing custom allocators for e.g. a single point-of-use of an STL data structure far exceeds the general speedup you can get from the global overloads.

Take a look at some of the allocators and debugging systems that are out there for C/C++ and you'll rapidly come up with these and other ideas.

(One old but seminal book is Writing Solid Code, which discusses many of the reasons you might want to provide custom allocators in C, most of which are still very relevant.)

Obviously if you can use any of these fine tools you will want to do so rather than rolling your own.

There are situations in which rolling your own is faster, easier, less of a business/legal hassle, or just more instructive, or in which nothing's available for your platform yet: dig in and write a global overload.

True answered 1/8, 2009 at 4:7 Comment(2)
Wow, you've practically built something similar to a garbage collector. -- Pease
@Andrei But five times faster in constant memory! -- Interlude

The most common reasons to overload new and delete are simply to check for memory leaks and to gather memory usage stats. Note that "memory leak" is usually generalized to memory errors: you can check for things such as double deletes and buffer overruns.

Beyond that, the usual uses are memory-allocation schemes, such as garbage collection and pooling.

All other cases are just specific things, mentioned in other answers (logging to disk, kernel use).

Campeche answered 20/7, 2009 at 9:22 Comment(0)

In addition to the other important uses mentioned here, like memory tagging, it's also the only way to force all allocations in your app to go through fixed-block allocation, which has enormous implications for performance and fragmentation.

For example, you may have a series of memory pools with fixed block sizes. Overriding global new lets you direct all 61-byte allocations to, say, the pool with 64-byte blocks, all 768-1024-byte allocs to the pool with 1024-byte blocks, all those above that to the 2048-byte-block pool, and anything larger than 8 KB to the general ragged heap.

Because fixed-block allocators are much faster and less prone to fragmentation than allocating willy-nilly from the heap, this lets you force even crappy third-party code to allocate from your pools and not poop all over the address space.

This is done often in systems which are time- and space-critical, such as games. 280Z28, Meeh, and Dan Olson have described why.

Interlude answered 31/7, 2009 at 1:31 Comment(2)
NB: Leander explores this in much greater depth below. -- Interlude
I used custom allocators for my text type. I use AllocHeap for sizes less than 1024 and only alloc multiples of 8 for it to fit in lookaside tables. I then use VirtualAlloc directly to alloc more than 1024 bytes, without any CRT overhead. -- Hypoploid

UnrealEngine3 overloads global new and delete as part of its core memory management system. There are multiple allocators that provide different features (profiling, performance, etc.), and all allocations need to go through that system.

Edit: For my own code, I would only ever do it as a last resort. And by that I mean I would almost certainly never use it. But my personal projects are obviously much smaller and have very different requirements.

Coronary answered 20/7, 2009 at 9:13 Comment(1)
Sure, game development is quite a special area. One would have to overload new/delete globally for, say, applications targeting a special multi-core architecture, etc. -- Gereron

Some realtime systems overload them to prevent them from being used after init.

Bunyip answered 20/7, 2009 at 9:16 Comment(0)

Overloading new & delete makes it possible to add a tag to your memory allocations. I tag allocations per system or control, or by middleware. I can view, at runtime, how much each uses. Maybe I want to see the usage of a parser separated from the UI, or how much a piece of middleware is really using!

You can also use it to put guard bands around the allocated memory. If/when your app crashes, you can take a look at the address. If you see the contents as "0xABCDABCD" (or whatever you choose as your guard), you are accessing memory you don't own.

Perhaps after calling delete you can fill this space with a similarly recognizable pattern. I believe Visual Studio does something similar in debug builds. Doesn't it fill uninitialized memory with 0xCDCDCDCD?

Finally, if you have fragmentation issues you could use it to redirect allocations to a block allocator, though I'm not sure how often this is really a problem.

Praetor answered 30/7, 2009 at 20:42 Comment(0)

You need to overload them when calls to new and delete don't work in your environment.

For example, in kernel programming, the default new and delete don't work because they rely on a user-mode library to allocate memory.

Hortenciahortensa answered 20/7, 2009 at 9:35 Comment(0)

From a practical standpoint, it may just be better to override malloc at the system-library level, since operator new will probably be calling it anyway.

On Linux, you can put your own version of malloc in place of the system one, as in this example:

http://developers.sun.com/solaris/articles/lib_interposers.html

In that article, they are trying to collect performance statistics, but you can also detect memory leaks if you override free as well.

Since you are doing this in a shared library with LD_PRELOAD, you don't even need to recompile your application.

Parthenon answered 28/7, 2009 at 16:24 Comment(1)
I asked the question here. And it looks like there is a way. https://mcmap.net/q/15526/-interposers-on-windows -- Parthenon

I've seen it done in a system that for 'security'* reasons was required to write over all memory it used on de-allocation. The approach was to allocate a few extra bytes at the start of each block to record the size of the overall block, which would then be overwritten with zeros on delete.

This had a number of problems as you can probably imagine but it did work (mostly) and saved the team from reviewing every single memory allocation in a reasonably large, existing application.

Certainly not saying that it is a good use but it is probably one of the more imaginative ones out there...

* sadly it wasn't so much about actual security as the appearance of security...

Korea answered 28/7, 2009 at 16:56 Comment(4)
That one is actually reasonable. In some (paranoid) systems you are required to overwrite the freed memory a few times :-) -- Gereron
Is that actually feasible when you have an MMU and non-trivial memory usage patterns, including the use of realloc? -- Cambist
Short answer - yes, as far as I know. Longer: how would an MMU affect this? You don't typically use realloc with new and delete - how would that work? To be fair, however, this was not intended to protect against physical-level attacks. For us it was sufficient that information couldn't be easily found in memory by software. In other words, without the overloads we could search memory and find our data there; with the overloads we couldn't. So... as I said - appearance of security more than actual security. -- Korea
To follow up a little more here. If you think about it this way - you are running an app as a non-admin user. That app has some very important data that should not be available to other apps (say, a credit card). The only scenarios I can think of in which another app can reliably gain access to memory allocated to another process mean that you are already compromised in some way. (If a process is sitting there scanning memory allocated to other processes for potential credit card numbers, then you've already lost.) -- Korea

Photoshop plugins written in C++ should override operator new so that they obtain memory via Photoshop.

Maryrosemarys answered 30/7, 2009 at 20:15 Comment(0)

I've done it with memory-mapped files so that data written to the memory is automatically also saved to disk.
It's also used to return memory at a specific physical address if you have memory-mapped IO devices, or sometimes if you need to allocate a certain block of contiguous memory.

But 99% of the time it's done as a debugging feature to log how often, where, and when memory is being allocated and released.

Limulus answered 30/7, 2009 at 20:16 Comment(1)
Thanks. Writing to a file might indeed be useful during debugging. Allocating memory at a specific physical address again applies only to embedded systems and such, not to general-purpose software. -- Gereron

It's actually pretty common for games to allocate one huge chunk of memory from the system and then provide custom allocators via overloaded new and delete. One big reason is that consoles have a fixed memory size, making both leaks and fragmentation large problems.

Usually (at least on a closed platform) the default heap operations come with a lack of control and a lack of introspection. For many applications this doesn't matter, but for games to run stably in fixed-memory situations the added control and introspection are both extremely important.

Gamble answered 31/7, 2009 at 1:5 Comment(0)

It can be a nice trick for your application to be able to respond to low-memory conditions with something other than a random crash. To do this, your new can be a simple proxy for the default new that catches its failures, frees up some stuff, and tries again.

The simplest technique is to reserve a blank block of memory at start-up time for that very purpose. You may also have some cache you can tap into - the idea is the same.

When the first allocation failure kicks in, you still have time to warn your user about the low memory conditions ("I'll be able to survive a little longer, but you may want to save your work and close some other applications"), save your state to disk, switch to survival mode, or whatever else makes sense in your context.

Blesbok answered 31/7, 2009 at 19:48 Comment(0)

The most common use case is probably leak checking.

Another use case is when you have specific requirements for memory allocation in your environment which are not satisfied by the standard library you are using - for instance, needing to guarantee that memory allocation is lock-free in a multithreaded environment.

Briquette answered 3/8, 2009 at 9:31 Comment(0)

As many have already stated, this is usually done in performance-critical applications, or to be able to control memory alignment or track your memory. Games frequently use custom memory managers, especially when targeting specific platforms/consoles.

Here is a pretty good blog post about one way of doing this and some reasoning.

Evidential answered 3/8, 2009 at 18:51 Comment(0)

An overloaded new operator also enables programmers to squeeze some extra performance out of their programs. For example, in a class, to speed up the allocation of new nodes, a list of deleted nodes can be maintained so that their memory is reused when new nodes are allocated. In this case, the overloaded delete operator adds nodes to the list of deleted nodes, and the overloaded new operator allocates memory from this list rather than from the heap, speeding up memory allocation. Memory from the heap is used only when the list of deleted nodes is empty.

Wnw answered 26/5, 2019 at 20:50 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.