How could pairing new[] with delete possibly lead to only a memory leak?

First of all, using delete for anything allocated with new[] is undefined behaviour according to the C++ standard.

In Visual C++ 7 such pairing can lead to one of two consequences.

If the type new[]'ed has a trivial constructor and destructor, VC++ simply uses new instead of new[], and using delete for that block works fine: new just calls "allocate memory", delete just calls "free memory".

If the type new[]'ed has a non-trivial constructor or destructor, the above trick can't be done - VC++7 has to invoke exactly the right number of destructors. So it prepends the array with a size_t storing the number of elements. Now the address returned by new[] points at the first element, not at the beginning of the block. So if delete is used, it calls the destructor only for the first element and then calls "free memory" with an address different from the one returned by "allocate memory", which triggers an error indication inside HeapFree() that I suspect refers to heap corruption.
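
For illustration, here is a rough sketch of what the compiler-generated code is morally equivalent to under that scheme. The names array_new and array_delete are made up for this sketch; this is not actual VC++ output:

    #include <cstddef>
    #include <new>

    struct T { ~T() {} };  // stand-in for any type with a non-trivial destructor

    T* array_new(std::size_t n) {
        // Allocate room for the element count plus the elements themselves.
        void* raw = ::operator new(sizeof(std::size_t) + n * sizeof(T));
        *static_cast<std::size_t*>(raw) = n;  // stash the count up front
        T* first = reinterpret_cast<T*>(static_cast<std::size_t*>(raw) + 1);
        for (std::size_t i = 0; i < n; ++i)
            new (first + i) T();  // construct each element in place
        return first;  // note: NOT the address the allocator returned
    }

    void array_delete(T* first) {
        std::size_t* raw = reinterpret_cast<std::size_t*>(first) - 1;
        for (std::size_t i = *raw; i > 0; --i)
            first[i - 1].~T();  // destroy all elements, last to first
        ::operator delete(raw);  // free the block at its true start
    }

A plain delete first would destroy only first[0] and pass first (not raw) back to the allocator - exactly the mismatch HeapFree() complains about.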

Yet here and there one can read false statements that using delete after new[] leads to a memory leak. I suspect that anything on the scale of heap corruption is much more important than the fact that the destructor is called for the first element only, and that the destructors that are not called might have failed to free heap-allocated sub-objects.

How could using delete after new[] possibly lead only to a memory leak on some C++ implementation?

Tgroup answered 16/12, 2009 at 9:15 Comment(3)
To all answerers: the question is how it can lead to only a memory leak, i.e., how it can possibly not cause heap corruption.Assyriology
Quite easily. It all depends on how the memory management is written. Since this is undefined by the standard, all answers are just speculation (but I am sure that I could write a version that would not crash the heap but did leak memory). The memory management sub-system is as fast and efficient as possible. The standard has given implementers a set of pre- and post-conditions under which the sub-system can be optimized. Break these conditions and you have undefined behavior (probably heap corruption). In debug builds, stability rather than speed is the goal of the memory sub-system; hence leaking is more likely.Ephrayim
stackoverflow.com/questions/1553382/…Lepsy

Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of that memory, in bytes. Something like this:

| size | data ... |
         ^
         pointer returned by new and new[]

Note that, in terms of memory allocation, there is no difference between new and new[]: both just allocate a block of memory of a certain size.

Now how will delete[] know the size of the array, in order to call the right number of destructors? Simply divide the size of the memory block by sizeof(T), where T is the type of elements of the array.

Now suppose I implement delete as one call to the destructor, followed by the freeing of those size bytes. Then the destructors of the subsequent elements will never be called, and the resources allocated by those elements leak. Yet, because I do free size bytes (not sizeof(T) bytes), no heap corruption occurs.
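
Here is a minimal sketch of such an allocator, assuming a malloc-style backing store; toy_allocate and toy_free are invented names, not any real compiler's implementation:

    #include <cstdlib>
    #include <new>

    void* toy_allocate(std::size_t bytes) {
        // Reserve the header plus the payload: | size | data ... |
        auto* block = static_cast<std::size_t*>(
            std::malloc(sizeof(std::size_t) + bytes));
        if (!block) throw std::bad_alloc{};
        *block = bytes;    // record the payload size in bytes
        return block + 1;  // hand out only the data part
    }

    void toy_free(void* p) {
        // Step back to the header and free the whole block. This works
        // identically for new and new[] blocks: the recorded size is all
        // the allocator ever needs.
        std::free(static_cast<std::size_t*>(p) - 1);
    }

Under this scheme delete[] would read the header, divide by sizeof(T) to find the element count, run that many destructors, and then call toy_free; a mismatched delete runs one destructor but still passes the correct address to toy_free, so the heap stays intact while the other elements' resources leak.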

Assyriology answered 16/12, 2009 at 9:24 Comment(7)
Thumbs up. As you just said, the OP is assuming new and new[] are handled differently, but this may not be the case. "new" may just be "new[]" with a size_t prepended w/ a value of 1.Flection
I actually meant size to indicate the number of bytes, not elements. Something that a function like malloc could do. I'll edit my post a bit to make this explicit.Assyriology
If that memory management technique was used. BUT then you have an overhead of x bytes to hold size, an increase of 100% for small objects. Yes we could pay that cost if we wanted to compensate for bad programmers. But I don't want to pay that price just to support 'sharptooth' so I would prefer that the memory management is very efficient (even for small types). As a result the standard does not require and most implementations do not prepend the size for new in the release version. Though some do in the debug version just to help in debugging/profiling.Ephrayim
@Thomas: Yes, but this is a very artificial, forced and never-used-in-practice approach to implementing memory management. It certainly cannot serve as an explanation of how the popular "memory leak" legend came to be.Nondisjunction
That's all well and good but if that was always the case then there would be no need for the "delete []" construct at all because the run-time library would be smart enough to figure out how many objects to free. The specification does not require the allocation to use any specific implementation so for all practical purposes the behavior is undefined and should be avoided.Polynomial
@MikeCollins Of course, but that wasn't the question.Assyriology
@MikeCollins, doesn't the heap management system in runtime keep track of all the heap blocks? Which means the size of the block, or the start and the end should be stored in whatever way, so that heap allocation should occur based on the already allocated space info in the first place!Flitting

The fairy tale about mixing new[] and delete allegedly causing a memory leak is just that: a fairy tale. It has absolutely no footing in reality. I don't know where it came from, but by now it has acquired a life of its own and survives like a virus, propagating by word of mouth from one beginner to another.

The most likely rationale behind this "memory leak" nonsense is that, from an innocently naive point of view, the difference between delete and delete[] is that delete is used to destroy just one object, while delete[] destroys an array of objects ("many" objects). The naive conclusion usually derived from this is that the first element of the array will be destroyed by delete, while the rest will persist, thus creating the alleged "memory leak". Of course, any programmer with at least a basic understanding of typical heap implementations would immediately see that the most likely consequence of that is heap corruption, not a "memory leak".

Another popular explanation for the naive "memory leak" theory is that, since the wrong number of destructors gets called, the secondary memory owned by the objects in the array does not get deallocated. This might be true, but it is obviously a very forced explanation, which bears little relevance in the face of the much more serious problem of heap corruption.

In short, mixing different allocation functions is one of those errors that lead to solid, unpredictable and very practical undefined behavior. Any attempt to impose concrete limits on the manifestations of this undefined behavior is just a waste of time and a sure sign of a lack of basic understanding.

Needless to add, new/delete and new[]/delete[] are in fact two independent memory management mechanisms, which are independently customizable. Once they get customized (by replacing raw memory management functions) there's absolutely no way to even begin to predict what might happen if they get mixed.

Nondisjunction answered 19/12, 2009 at 2:12 Comment(0)

It seems that your question is really "why doesn't heap corruption happen?". The answer to that one is "because the heap manager keeps track of allocated block sizes". Let's go back to C for a minute: if you want to allocate a single int in C you would write int* p = malloc(sizeof(int)); if you want to allocate an array of size n you can write either int* p = malloc(n*sizeof(int)) or int* p = calloc(n, sizeof(int)). But in any case you'll free it by free(p), no matter how you allocated it. You never pass a size to free(); free() just "knows" how much to free, because the size of a malloc()-ed block is saved somewhere "in front" of the block.

Back to C++: new/delete and new[]/delete[] are usually implemented in terms of malloc (although they don't have to be, so you shouldn't rely on that). This is why the new[]/delete combination doesn't corrupt the heap - delete will free the right amount of memory - but, as explained by everyone before me, you can get leaks by not calling the right number of destructors.
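
A minimal illustration of that point, written as C++ here to match the rest of the thread:

    #include <cstdlib>

    int main() {
        int* one  = static_cast<int*>(std::malloc(sizeof(int)));
        int* many = static_cast<int*>(std::malloc(100 * sizeof(int)));
        // free() is told nothing about the sizes: the allocator recorded
        // them itself when it handed out the blocks.
        std::free(one);   // releases sizeof(int) bytes
        std::free(many);  // releases 100 * sizeof(int) bytes - same call
        return 0;
    }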

That said, reasoning about undefined behavior in C++ is always a pointless exercise. Why does it matter whether the new[]/delete combination happens to work, "only" leaks, or causes heap corruption? You shouldn't code like that, period! And, in practice, I would avoid manual memory management whenever possible - the STL and Boost are there for a reason.

Forestforestage answered 16/12, 2009 at 10:21 Comment(0)

If the non-trivial destructors that are skipped for all but the first element in the array were supposed to free some memory, you get a memory leak, as those objects are not cleaned up properly.

Heliocentric answered 16/12, 2009 at 9:19 Comment(0)

Apart from resulting in undefined behavior, the most straightforward cause of leaks lies in the implementation calling the destructor only for the first object in the array. This will obviously result in leaks if the objects hold allocated resources.

This is the simplest possible class I could think of resulting in this behaviour:

struct A {
    char* ch;
    A() : ch(new char) {}
    ~A() { delete ch; }
};

A* as = new A[10]; // ten times the A::ch pointer is allocated

delete as; // only one of the A::ch pointers is freed.

PS: note that destructors fail to get called in lots of other programming mistakes, too: non-virtual base class destructors, misplaced reliance on smart pointers, ...

Dempsey answered 16/12, 2009 at 9:20 Comment(2)
@Suma: the problem I tried to show here is how only the destructor of the first object is called, resulting in 9 leaked blocks containing 1 char. You are right about the array of A elements, but that wasn't the question.Dempsey
@Suma: no harm in pointing out that the explanation was a little hidden. Thanks for being critical, we need that!Dempsey

It will lead to a leak in ALL implementations of C++ in any case where the destructor frees memory, because the destructors of all elements past the first never get called.

In some cases it can cause much worse errors.

Orvalorvan answered 16/12, 2009 at 9:21 Comment(1)
-1 for "in some cases" A professional answer should point to a reference. e.g. Item 5 of S.Meyers's Effective C++: "What would happen if you used the [] ... The result is undefined. ... it's undefined even for built-in types ... The rule, then, is simple: if you use [] when you call new, you must use [] when you call delete. If you don't use [] when you call new, don't use [] when you call delete."Manes

A memory leak might happen if operator new is overridden but operator new[] is not. The same goes for the operator delete / operator delete[] pair.
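
A sketch of that situation; the class X and its malloc-backed "pool" are invented for illustration:

    #include <cstdlib>
    #include <new>

    struct X {
        // Custom allocation for single objects only.
        void* operator new(std::size_t n) {
            void* p = std::malloc(n);  // stand-in for a class-specific pool
            if (!p) throw std::bad_alloc{};
            return p;
        }
        void operator delete(void* p) noexcept {
            std::free(p);  // returns memory to that pool
        }
        // No operator new[] / operator delete[] here: `new X[10]` falls
        // back to the global array forms. Pairing that allocation with the
        // single-object delete hands a global-heap block to the
        // class-specific pool (or vice versa), so blocks end up tracked by
        // the wrong allocator - memory is lost, or worse.
    };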

Crucial answered 16/12, 2009 at 9:25 Comment(0)

Late for an answer, but...

If your delete mechanism is simply to call the destructor and put the freed pointer, together with the size implied by sizeof, onto a free stack, then calling delete on a chunk of memory allocated with new[] will result in memory being lost - but not corrupted. More sophisticated malloc structures could be corrupted by, or could detect, this behaviour.
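
A sketch of such a free stack, assuming freed chunks are keyed by their claimed size; push_free and the map are invented for illustration:

    #include <cstddef>
    #include <map>
    #include <vector>

    // One stack of freed chunks per chunk size.
    std::map<std::size_t, std::vector<void*>> free_stacks;

    // delete p on a T* does:        p->~T(); push_free(p, sizeof(T));
    // delete[] p on new T[n] does:  n destructors; push_free(p, n * sizeof(T));
    void push_free(void* p, std::size_t claimed_size) {
        // The chunk is filed under whatever size the caller claims it has.
        free_stacks[claimed_size].push_back(p);
    }

A mismatched delete files an n * sizeof(T) chunk under sizeof(T): the surplus bytes are never handed out again (memory lost), but nothing outside the chunk is touched, so the heap is not corrupted.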

Ghetto answered 16/3, 2010 at 10:9 Comment(0)

Why can't the answer be that it causes both?

Obviously memory is leaked whether heap corruption occurs or not.

Or rather, since I can re-implement new and delete... can't it cause nothing at all? Technically I can make new and delete perform what new[] and delete[] do.

HENCE: Undefined Behavior.
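
For what it's worth, a sketch of making the global array forms literally forward to the single-object forms. This is still undefined behaviour per the standard, and the compiler may still add its own array bookkeeping for types with non-trivial destructors, so it removes only one source of mismatch:

    #include <cstdlib>
    #include <new>

    void* operator new(std::size_t n) {
        void* p = std::malloc(n);
        if (!p) throw std::bad_alloc{};
        return p;
    }
    void* operator new[](std::size_t n) { return ::operator new(n); }

    void operator delete(void* p) noexcept { std::free(p); }
    void operator delete[](void* p) noexcept { ::operator delete(p); }

With identical allocation functions and a trivially destructible element type, a new[]/delete mismatch cannot corrupt this particular heap.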

Bamboo answered 28/2, 2011 at 16:10 Comment(2)
Yes, it can cause both, but IMO once you have heap corruption you no longer care about a memory leak.Tgroup
Point is that it's undefined. It doesn't matter what the known answer is for the majority of compilers. If there by chance is one compiler that implements new and new[], delete and delete[] in exactly identical manners, there will never be a heap corruption. Therefore the answer to the question is, if the compiler implements in such a manner to avoid a heap corruption, then it may only cause a memory leak. Except if the compiler implements in such a manner to avoid both. Therefore it's pointless to answer such a question unless we refer to a specific compiler.Bamboo

I was answering a question which was marked as a duplicate, so I'll just copy my answer here in case it matters. The way memory allocation works has been explained well before me; I'll just explain the cause and effects.

Just a little thing right off google: http://en.cppreference.com/w/cpp/memory/new/operator_delete

Anyhow, delete is a function for a single object: it destroys and frees the one instance the pointer refers to, and leaves;

delete[] is a function used to deallocate arrays. That means it doesn't just free the pointer: it destroys every element and declares the whole memory block of that array as garbage.

That's all cool in practice, but you tell me your application works. You are probably wondering... why?

The catch is that C++ does not fix memory leaks for you. If you use delete without the brackets, it'll delete just the array as a single object - a process which might cause a memory leak.

Cool story, memory leak, why should I care?

A memory leak happens when allocated memory never gets deleted. That memory then stays reserved unnecessarily, which makes you lose useful memory for pretty much no reason. That's bad programming, and you should probably fix it in your systems.

Pill answered 4/2, 2015 at 21:17 Comment(0)
