Why do I need to delete[]?
Asked Answered
A

15

45

Let's say I have a program like this:

int main()
{
    char* str = new char[10];

    for(int i=0;i<5;i++)
    {
        //Do stuff with str
    }

    delete[] str;
    return 0;
}
  1. Why would I need to delete str if I am going to end the program anyways? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

  2. Is it just good practice?

  3. Does it have deeper consequences?

Autarchy answered 18/3, 2013 at 22:24 Comment(12)
Note that delete isn't a function, you don't need the ().Craze
I like this "land full of unicorns" thing...Barclay
You also should be calling delete[] since str was allocated as an array.Footcandle
Rename your function to do_something and call do_something over and over in a loop. Enjoy your rapid loss of address space.Anatollo
With a char string it's maybe not so important. But, if a class holds some lock on a resource, then it's not just about memory any more.Mettah
If you leave out the delete[], then tomorrow, some other programmer is going to grab your logic out of main() and stuff it into function a() that gets called ten times. The day after that, someone else will write function b() that calls a() 100 times. The day after that, someone else will write function c() that calls function b() a bunch. On Friday, the poor bastard working on function d() wonders why he's out of memory.Judicator
I strongly recommend replacing dynamically allocated char arrays with std::string.Greensand
Why would I ever go to the toilet? I will eventually die anyways.Coronation
Came here expecting a discussion about why C++ has delete[] in addition to delete. Seems to me the question should really be re-titled, "Why should I de-allocate all dynamically-allocated resources before exiting?" In fact, the question could be even broader - open files, open sockets, etc... I'm not comfortable editing the title as such, if anyone else agrees, be my guest. IMO it would help future searches and better indicate the nature of the discussion & answers.Bathometer
This is mentioned in Effective C++, take a look.Burchell
blogs.msdn.com/b/oldnewthing/archive/2012/01/05/10253268.aspxPheni
Btw, I am really grateful that Google Chrome does NOT swap-in every single memory page when closing 100 tabs.Pheni
S
77

If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you.

The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case.

So let's instead ask two questions about some non-trivial cases.

I have a long-running service that allocates and deallocates memory in complex ways, perhaps involving multiple allocators hitting multiple heaps. Shutting down my service in the normal mode is a complicated and time-consuming process that involves ensuring that external state -- files, databases, etc -- is consistently shut down. Should I ensure that every byte of memory that I allocated is deallocated before I shut down?

Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary.

The user is already expecting that shutting the service down might take billions of nanoseconds, so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big complicated software. And it's not like you're shutting down the service all the time, so again, who cares if it's a few milliseconds slower than it could be?

I have that same long-running service. If I detect that one of my internal data structures is corrupt I wish to "fail fast". The program is in an undefined state, it is likely running with elevated privileges, and I am going to assume that if I detect corrupted state, it is because my service is actively being attacked by hostile parties. The safest thing to do is to shut down the service immediately. I would rather allow the attackers to deny service to the clients than to risk the service staying up and compromising my users' data further. In this emergency shutdown scenario should I make sure that every byte of memory I allocated is freed?

Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building?

So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".

Swahili answered 19/3, 2013 at 0:58 Comment(8)
I may be taking this out of context, but what defense does the CLR have for the latter case? If the runtime detects corruption will it just crash?Lacrimatory
@Roddy: Do you suppose the original poster would have asked the question if the answer had been obvious to them, or if they were familiar already with memory leak detection tools?Swahili
@EricLippert: Apologies, I was being a bit picky when I wrote that comment. But you're right that "It depends" is the only valid answer. For instance, Objects that are constructed on first use are sometimes best left dangling, as fixing destruction ordering issues can be a nightmare to address and verify.Midriff
@EricLippert: Did you happen to write anything on the subject of memory allocation and exploits, such as why freeing memory would be a part of the exploit? I would love to read further on the subject.Vegetal
@GeReV: I have never written anything on the subject of heap corruption exploits, but there are lots of articles on the internet about it.Swahili
@EricLippert: Great answer with high voted! I'm interested in what your program does. Since the code of deallocation would be also in the program. Is that included in what your program does?Sundog
@KenKin: I was once time-travelling to 1220 AD, Amiens, France. I asked a guy with a chisel what he did. "I'm a stonecutter; I cut these blocks of stone, smooth them, and assemble them into columns". I asked a guy at a forge what he did. "I'm a glass blower. I make large flat pieces of glass and cut them into shapes for the windows". I asked a guy with a saw what he did. "I'm a carpenter; I make the doors". I asked an old woman with a broom, sweeping up the stone, glass and wood chips, what she did. "I build cathedrals."Swahili
This ignores the possibility that some of the memory has been swapped out. "most of the memory the program had allocated during its lifetime has been paged out, which means that the program pages all that memory back in from the hard drive, just so it could call free on it." blogs.msdn.com/b/oldnewthing/archive/2012/01/05/10253268.aspxFlocculant
A
39

Yes it is good practice. You should NEVER assume that your OS will take care of your memory deallocation, if you get into this habit, it will screw you later on.

To answer your question, however: upon exiting from main(), the OS frees all memory held by that process, including memory used by any threads you may have spawned and any variables you allocated. The OS will take care of freeing up that memory for others to use.

Adda answered 18/3, 2013 at 22:27 Comment(5)
@JakobS.: It depends on your definition. I'd say that smart pointers effectively manage your memory...Craze
If you don't specify the OS, that last sentence doesn't really hold. Maybe for "popular desktop OS" it does, but not generally.Edmea
@Edmea : Please name an unpopular (or non-desktop) OS which doesn't free process-owned memory after process termination.Midriff
@Midriff Anything running a full *nix kernel etc. will obviously do it. As you go more into the embedded side of things, where you don't even have an OS, only a minimal kernel with some task controls, you might not be as lucky. Those creepers usually don't have a name and get developed / stripped / patched in-house for some specific device.Edmea
@Edmea - yes, familiar with all too many of those. But in most cases they are just a minimal RTOS that's designed to run one program that never stops. You normally "start" the program by powering up, and terminate by power down.Midriff
M
24

Important note : delete's freeing of memory is almost just a side-effect. The important thing it does is to destruct the object. With RAII designs, this could mean anything from closing files, freeing OS handles, terminating threads, or deleting temporary files.

Some of these actions would be handled by the OS automatically when your process exits, but not all.

In your example, there's no reason NOT to call delete. But there's no reason to call new either, so you can sidestep the issue this way:

char str[10];

Or, you can sidestep the delete (and the exception safety issues involved) by using smart pointers...

So, generally you should always be making sure your object's lifetime is properly managed.

But it's not always easy: Workarounds for the static initialization order fiasco often mean that you have no choice but to rely on the OS cleaning up a handful of singleton-type objects for you.

Midriff answered 18/3, 2013 at 22:47 Comment(2)
+1 for mentioning that it's not all about memory. (Although "automagically" nearly put me off ;) )Craze
Let's call it a typo, and keep the +1 then... Fixed!Midriff
G
16

Contrary answer: No, it is a waste of time. A program with a vast amount of allocated data would have to touch nearly every page in order to return all of the allocations to the free list. This wastes CPU time, creates memory pressure for uninteresting data, and possibly even causes the process to swap pages back in from disk. Simply exiting releases all of the memory back to the OS without any further action.

(not that I disagree with the reasons in "Yes", I just think there are arguments both ways)

Glyoxaline answered 18/3, 2013 at 22:35 Comment(5)
This is what I see as the main reason not to free memory. If you have a gigantic chunk of allocated memory, you don't want your program hanging around at the end waiting for all the memory to be freed. The important thing is to be absolutely sure that the OS the program being run on will deal with it appropriately before you rely on that detail.Summersault
Freeing memory is usually much faster than allocating it, and I have never seen a heap implementation where the performance of free increases with block size. If you're talking about zillions of small blocks, that's another problem: but the problem is then one of broken design. Still no excuse to get into the lazy mindset of not bothering to delete objects.Midriff
@Roddy: A large program with many complex allocations will touch every page of its memory (repeatedly) in a random order. This can definitely have adverse effects on other applications on the system, especially if parts of the exiting application were swapped out.Glyoxaline
@BenJackson, so how do you implement this 'design' in a way that still allows you to nicely destruct those things that absolutely must be correctly destructed to avoid file/database corruption and (or even hardware damage)? And don't forget that complex programs are multithreaded...Midriff
@Roddy: Your application has to survive unexpected termination due to system crashes, impatient restart scripts, unhandled exceptions, etc. RAII is nice, but it's not what I rely on to ensure my 40 watt laser doesn't fire when the control program exits! And like I said, I'm just providing the counterargument.Glyoxaline
S
6

Your operating system should take care of the memory and clean it up when you exit your program, but it is in general good practice to free any memory you have reserved. Personally, I think it is best to get into the right mindset of doing so: while you are writing simple programs like this, you are most likely doing so to learn, and the habits you form now will carry over.

Either way, the only way to guarantee that the memory is freed is by doing so yourself.

Stitching answered 18/3, 2013 at 22:29 Comment(0)
A
4

new and delete are reserved keyword brothers. They should cooperate with each other through a code block or through the parent object's lifecycle. Whenever the younger brother commits a fault (new), the older brother will want to clean (delete) it up. Then the mother (your program) will be happy and proud of them.

Asperges answered 19/3, 2013 at 7:35 Comment(6)
I am trying not to be so geeky in explaining it. :)Asperges
Actually, no: With smart pointers you have the new but never the delete. boost::scoped_ptr<Foo> f(new Foo());Midriff
Henry, smart pointers are not an OS feature, or even a compiler feature. They are classes in various libraries such as boost::scoped_ptr, std::auto_ptr, etc. - or you can write your own. Your compiler needs to handle invoking constructors and destructors correctly, and have some (pretty limited) support for templates. If it can't do that, it's NOT a C++ compiler. Also, the question is tagged C++, not C. C is a different language entirely.Midriff
My point is: new & delete (allocation/deallocation) is normal, usual practice. You are talking about an advanced practice which is too complex for a starter. When you said "Actually, no", you meant the asker could follow an advanced practice instead. In some cases I agree with you, so I don't think we really have any difference.Asperges
"They should cooperate with each other through a code block": entirely wrong. For example, something very usual - Ob* o = CreateObject(); - the new and the delete are not in the same code block.Harbaugh
@HenryLeu Regarding smart pointers: The idea is that using smart pointers makes your life simpler, not more complicated. Not using smart pointers is what I would consider as more (and often unnecessarily) advanced.Berthold
A
3

I cannot agree more to Eric Lippert's excellent advice:

So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".

Other answers here have provided arguments for and against, but the real crux of the matter is what your program does. Consider a less trivial example in which the dynamically allocated instance is of a custom class whose destructor performs actions that produce side effects. In such a situation, whether memory leaks matter is the lesser issue; the more important problem is that failing to call delete on such an instance means any code that depends on those destructor side effects exhibits undefined behavior.

[basic.life] 3.8 Object lifetime
Para 4:

A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.

So the answer to your question is, as Eric says, "it depends on what your program does".

Agnostic answered 19/3, 2013 at 4:34 Comment(1)
Key words: "program that depends on the side effects".Flocculant
S
2

It's a fair question, and there are a few things to consider when answering:

  • some objects have more complex destructors which don't just release memory when they're deleted. They may have other side effects, which you don't want to skip.
  • It is not guaranteed by the C++ standard that your memory will be released when the process terminates. (Of course on a modern OS it will be freed, but if you were on some weird OS which didn't do that, you'd have to free your memory properly.)
  • on the other hand, running destructors at program exit can actually take up quite a lot of time, and if all they do is release memory (which would be released anyway), then yes, it makes a lot of sense to just short-circuit that and exit immediately instead.
Sherrod answered 20/3, 2013 at 7:28 Comment(0)
T
2

Most operating systems will reclaim memory upon process exit. Exceptions may include certain RTOS's, old mobile devices etc.

In an absolute sense your app won't leak memory; however, it's good practice to clean up memory you allocate even if you know it won't cause a real leak. The issue is that leaks are much, much harder to fix than to avoid in the first place. Let's say you decide to move the functionality in your main() to another function: you may end up with a real leak.

It's also bad aesthetics: many developers will look at the unfreed 'str' and feel slight nausea :(

Trictrac answered 20/3, 2013 at 8:17 Comment(0)
S
2

You've already received answers drawn from a lot of professional experience. Here is a more naive answer, but one that I consider factual.

  • Summary

    3. Does it have deeper consequences?

    A: Answered in some detail below.

    2. Is it just good practice?

    A: It is considered good practice. Release resources/memory you have acquired once you are sure they are no longer used.

    1. Why would I need to delete str if I am going to end the program anyways?
      I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

    A: You may or may not need to; in fact, you decide why. Some explanation follows.

    I think it depends. Here are some assumed questions; the term program may mean either an application or a function.

    Q: Does it depend on what the program does?

    A: If destroying the universe were acceptable, then no. However, the program might not work correctly as expected, and might even fail to complete what it was supposed to do. You might want to think seriously about why you would build a program like that.

    Q: Does it depend on how complicated the program is?

    A: No. See the Explanation.

    Q: Does it depend on what stability is expected of the program?

    A: Closely.

    And I consider that it depends on

    1. What is the universe of the program?
    2. What is the expectation that the program gets its work done?
    3. How much does the program care about others, and about the universe it lives in?

      About the term universe, see the Explanation.

    In summary, it depends on what you care about.


  • Explanation

    Important: if we define the term program as a function, then its universe is the application. Many details are omitted below; as an idea for understanding, it is long enough, though.

    We may all have seen the kind of diagram that illustrates the relationship between application software and system software.

    [diagram: applications layered on top of system software]

    But to be aware of the scope each layer covers, I'd suggest a reversed layout. Since we are talking about software only, the hardware layer is omitted in the following diagram.

    [diagram: reversed layering, with the OS as the outermost layer]

    With this diagram, we realize that the OS covers the biggest scope, which is the current universe; sometimes we call it the environment. You may imagine that the whole architecture consists of a number of disks like the diagram, forming either a cylinder or a torus (a ball is fine, but difficult to imagine). Here I should mention that the outermost OS layer is in fact a unibody; the runtime may be single or multiple, depending on the implementation.

    It's important that the runtime is responsible to both the OS and the applications, but the latter is more critical. The runtime is the universe of the applications; if it is destroyed, all applications running under it are gone.

    Unlike humans on the Earth: we live here, but we do not consist of the Earth, so we could still live in some other suitable environment if the Earth were being destroyed while we weren't there.

    However, we can no longer exist when the universe is destroyed, because we not only live in the universe but also consist of it.

    As mentioned above, the runtime is also responsible to the OS. The left circle in the following diagram is what that may look like.

    [diagram: two circles - applications hosted directly by the OS, and an application that is itself the universe]

    This is mostly like a C program in the OS. When the relationship between an application and the OS matches this, it is the same situation as the runtime in the OS above. In this diagram, the OS is the universe of the applications. The applications here should be responsible to the OS because the OS might not virtualize their code, or might allow itself to be crashed. If the OS always prevents them from doing so, then it is self-responsible no matter what the applications do. But think about drivers: that is one scenario in which the OS must be allowed to crash, since this kind of application is treated as part of the OS.

    Finally, let's look at the right circle in the diagram above. In this case, the application itself is the universe. Sometimes we call this kind of application an operating system. If an OS never allows custom code to be loaded and run, then it does everything itself. Even if it does allow that, once it terminates, the memory goes nowhere but the hardware. All the deallocation that may be necessary has to happen before it terminates.

    So, how much does your program care about others? How much does it care about its universe? And what is the expectation that the program gets its work done? It depends on what you care about.

Sundog answered 20/3, 2013 at 9:33 Comment(0)
M
2

Why would I need to delete str if I am going to end the program anyways?

Because you don't want to be lazy ...

I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

Nope, I don't care about the land of unicorns either. The Land of Arwen is a different matter; then we could cut their horns off and put them to good use (I've heard it's a good aphrodisiac).

Is it just good practice?

It is justly a good practice.

Does it have deeper consequences?

Someone else has to clean up after you. Maybe you like that, I moved out from under my parents' roof many years ago.

Place a while(1) loop around your code without the delete, as below. The complexity of the code does not matter; memory leaks are a function of how long the process runs.

From the perspective of debugging, not releasing system resources (file handles, etc.) can cause more significant and harder-to-find bugs. Memory leaks, while important, are typically much easier to diagnose than, say, "why can't I write to this file?". Bad style will become even more of a problem once you start working with threads.

int main()
{
    while(1)
    {
        char* str = new char[10];

        for(int i=0;i<5;i++)
        {
            //Do stuff with str
        }
        // no delete[] here: every iteration leaks another 10 bytes
    }
    // unreachable; str is out of scope here, so it could not be
    // deleted at this point anyway
    return 0;
}
Mammalian answered 25/3, 2013 at 16:22 Comment(0)
S
2

TECHNICALLY, a programmer shouldn't rely on the OS to do anything. The OS isn't required to reclaim lost memory in this fashion.

If you do write the code that deletes all your dynamically allocated memory, then you are future-proofing the code and letting others use it in a larger project.

Source: Allocation and GC Myths (PostScript alert!)

Allocation Myth 4: Non-garbage-collected programs should always
deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause
growing leaks. They are rarely acceptable. But programs that retain
most allocated memory until program exit often perform better without
any intervening deallocation. Malloc is much easier to implement if
there is no free.

In most cases, deallocating memory just before program exit is
pointless. The OS will reclaim it anyway. Free will touch and page in
the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations.
Some "leaks" are good!
  • I think it's a very poor practice to use malloc/new without calling free/delete.

  • If the memory's going to get reclaimed anyway, what harm can there be from explicitly deallocating when you need to?

  • Maybe if the OS "reclaims" memory faster than free does then you'll see increased performance; this technique won't help you with any program that must remain running for any long period of time.

Having said that, I'd still recommend you use free/delete.


If you get into this habit, who's to say that you won't one day accidentally apply this approach somewhere it matters?


One should always deallocate resources after one is done with them, be they file handles, memory, or mutexes. By having that habit, one will not make that sort of mistake when building servers. Some servers are expected to run 24x7; in those cases, any leak of any sort means that your server will eventually run out of that resource and hang or crash in some way. In a short utility program, yeah, a leak isn't that bad. In a server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.


Think about your class 'A' having to destruct. If you don't call
'delete' on 'a', that destructor won't get called. Usually, that won't
really matter if the process ends anyway. But what if the destructor
has to release e.g. objects in a database? Flush a cache to a logfile?
Write a memory cache back to disk? You see, it's not just 'good
practice' to delete objects - in some situations it is required.
Streaming answered 26/3, 2013 at 7:9 Comment(0)
K
2

Another reason that I haven't seen mentioned yet is to keep the output of static and dynamic analysis tools (e.g. valgrind or Coverity) cleaner and quieter. Clean output with zero memory leaks and zero reported issues means that when a new issue pops up, it is easier to detect and fix.

You never know how your simple example will be used or evolved. It's better to start as clean and crisp as possible.

Kordofanian answered 27/3, 2013 at 19:29 Comment(0)
J
2

Not to mention that if you are going to apply for a job as a C++ programmer, there is a very good chance you won't get past the interview because of the missing delete. First, programmers usually don't like any leaks (and the interviewer will surely be one of them), and second, most companies (all I have worked in, at least) have a "no-leak" policy. Generally, the software you write is supposed to run for quite a while, creating and destroying objects as it goes. In such an environment, leaks can lead to disasters...

Jerrybuilt answered 28/3, 2013 at 8:48 Comment(0)
F
1

Instead of talking about this specific example, I will talk about the general case. It is important to explicitly call delete to deallocate memory because (in C++) you may have code in the destructor that you want to execute, like writing some data to a log file or sending a shutdown signal to some other process. If you let the OS free your memory for you, the code in your destructor will not be executed.

On the other hand, most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself, because as in the destructor example above, the OS won't call your destructor, and that can create undesirable behavior in certain cases!

I personally consider it bad practice to rely on the OS to free your memory (even though it will do so): if you later have to integrate your code with a larger program, you will spend hours tracking down and fixing memory leaks!

So clean your room before leaving!

Frit answered 30/3, 2013 at 6:52 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.