Is it worth setting pointers to NULL in a destructor?

Imagine I have a class that allocates memory (forget about smart pointers for now):

class Foo
{
public:
  Foo() : bar(new Bar)
  {
  }

  ~Foo()
  {
    delete bar;
  }

  void doSomething()
  {
    bar->doSomething();
  }

private:
  Bar* bar;
};

As well as deleting the objects in the destructor is it also worth setting them to NULL?

I'm assuming that setting the pointer to NULL in the destructor of the example above is a waste of time.

Duo answered 17/6, 2010 at 8:23 Comment(3)
Presumably you have no getBar() method...Bondswoman
More important; provide a assigment operator and a copy-constructor when your class deals with allocated memory.Oversubscribe
It's a partial example of code I have to maintain. I would personally use smart pointers but am not in a position to at the moment.Duo

Since the destructor is the last thing that is called on an object before it "dies," I would say there is no need to set it to NULL afterwards.

In any other case, I always set a pointer to NULL after calling delete on it.
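A minimal sketch of why that habit is defensive (the helper name `releaseTwice` is hypothetical): once the pointer has been nulled, an accidental second delete is a harmless no-op, because deleting a null pointer does nothing by definition.

```cpp
#include <cassert>

// Sketch: after nulling, a second delete is a harmless no-op,
// because deleting a null pointer does nothing by definition.
void releaseTwice()
{
    int* p = new int(42);
    delete p;
    p = nullptr;   // subsequent deletes are now harmless
    delete p;      // deleting a null pointer is a no-op
    assert(p == nullptr);
}
```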

Allometry answered 17/6, 2010 at 8:26 Comment(0)

Several answers mention that it might be worthwhile to do this in a DEBUG build to aid debugging.

Do not do this.

Doing so can hide a problem in your debug builds while it still exists in the release builds you actually give to customers, which is the opposite of what a debug build should do.

If you're going to 'clear' the pointer in the dtor, a different idiom would be better - set the pointer to a known bad pointer value. That way if there is a dangling reference to the object somewhere that ultimately tries to use the pointer, you'll get a diagnosable crash instead of the buggy code avoiding use of the pointer because it notices that it's NULL.

Say that doSomething() looked like:

void doSomething()
{
    if (bar) bar->doSomething();
}

Then setting bar to NULL has just helped hide a bug if there was a dangling reference to a deleted Foo object that called Foo::doSomething().

If your pointer clean up looked like:

~Foo()
{
    delete bar;
    if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF);
}

You might have a better chance of catching the bug (though just leaving bar alone would probably have a similar effect).

Now if anything has a dangling reference to the Foo object that's been deleted, any use of bar will not avoid referencing it due to a NULL check - it'll happily try to use the pointer and you'll get a crash that you can fix instead of nothing bad happening in debug builds, but the dangling reference still being used (to ill effect) in your customer's release builds.

When you're compiling in debug mode, chances are pretty good that the debug heap manager will already be doing this for you anyway (at least MSVC's debug runtime heap manager will overwrite the freed memory with 0xDD to indicate the memory is dead/freed).

The key thing is that if you're using raw pointers as class members, don't set the pointers to NULL when the dtor runs.

This rule might also apply to other raw pointers, but that depends on exactly how the pointer is used.

Visa answered 12/1, 2011 at 8:11 Comment(3)
I agree with this answer 100%.Staley
Leaving bar alone can hide accesses after deletes. Setting it to NULL can hide double deletes. setting it to DEADBEEF is an excellent idea.Citrus
Michael: This answer is referred to in another question and OP wonders what (long_ptr) does and why it's needed: #78664785Harpole

Yes, it's a waste of time.

Oxytetracycline answered 17/6, 2010 at 8:25 Comment(0)

You shouldn't, for two reasons:

  • The only benefit would be in debugging, but in modern environments freed objects are usually overwritten with a recognizable bit pattern in debug builds anyway.

  • In a large application it can significantly degrade shutdown performance. In a worst-case scenario, closing your app means calling dozens of distinct destructors and writing to hundreds of heap pages that are currently swapped out to disk.

Cordi answered 17/6, 2010 at 14:23 Comment(1)
Deleted objects are usually overwritten in DEBUG builds. Destroyed objects are not however, so setting a known-bad address in the dtor can make a lot of sense. (Setting the pointer NULL is questionable though as Michael Burr explained - and I completely agree with him.)Rettke

First of all, this is a C practice, and a controversial one even there. Some argue (for C) that it hides bugs that would otherwise surface sooner, because it becomes impossible to distinguish the use of a freed memory block from the use of one that was never allocated...

Now in C++? It's useless, but not for the same reason as in C.

In C++ it's a bug to use delete in the first place. If you used smart pointers, you would not be worried about this, and you would not risk leaking (i.e.: are you sure your copy constructor and assignment operator are exception safe? thread safe?)

Finally, in a destructor it really is completely useless... accessing any field of an object once its destructor has run is undefined behavior. Once the memory is released, something else may well be written there, overwriting your carefully placed NULL. In fact, memory allocators often hand back the most recently deallocated blocks first, because this improves cache performance... and this is even truer (haha!) if we're talking about the stack, of course.
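A sketch of what the smart-pointer version of the question's class might look like (using `std::unique_ptr` and C++14's `std::make_unique`; `Bar` is a stand-in for the original class):

```cpp
#include <memory>

// Bar is a stand-in for the question's class.
struct Bar
{
    void doSomething() {}
};

class Foo
{
public:
    Foo() : bar(std::make_unique<Bar>()) {}  // C++14

    // No user-written destructor: unique_ptr frees Bar automatically,
    // and the implicitly deleted copy operations rule out double deletes.
    void doSomething() { bar->doSomething(); }

private:
    std::unique_ptr<Bar> bar;
};
```

There is nothing to set to NULL here: the member pointer dies with the object, and ownership is never duplicated.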

My opinion: SAFE_DELETE is a sign of impending doom.

Java answered 17/6, 2010 at 14:15 Comment(2)
Smart pointers are not a silver bullet, and in the case you're using them, you also need to worry about circular references.Siskin
@Nande: You always have to worry about circular references in your ownership graph, regardless of whether you use smart pointers or not. With that said, I agree, smart pointers are not the only solution: standard collections are also pretty good, for example. Still, delete is generally a bug, unless you're implementing your own collection/smart pointer.Java

Might actually be worth it for debugging reasons.

Oversubscribe answered 17/6, 2010 at 8:28 Comment(3)
Doesn't your compiler pattern-fill objects on destruction in debug mode?Boanerges
Still, you might suspect a double destruction, and wanna see if this specific destructor has executed etc.Oversubscribe
Not NULL'ing the pointer would get you a crash in the double destructed object from the double delete, NULL'ing them out could hide the problemButanone

Generally, no, there is no need to set pointers to NULL explicitly after deleting them in the destructor, though it can be a handy aid during debugging: inspecting the class then shows whether a resource has already been freed or not.

A common idiom is to declare a SAFE_DELETE macro, which deletes the pointer and sets it NULL for you:

#define SAFE_DELETE(x) delete (x); x = NULL

SAFE_DELETE(bar);

This is especially useful in cases where the pointer may later get reused.
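A type-safe alternative (sketched here; it mirrors the function template suggested in the comments below) replaces the macro with a template, so it behaves like a single statement and will not silently compile for non-pointer arguments:

```cpp
// Sketch of a type-safe alternative to the SAFE_DELETE macro:
// a function template behaves as one statement, so it is safe
// inside an unbraced if, unlike the two-statement macro.
template <typename T>
void SafeDelete(T*& p)
{
    delete p;      // deleting a null pointer is already a no-op
    p = nullptr;   // reset so a second call is harmless
}
```

Usage is the same, `SafeDelete(bar);`, but `if (cond) SafeDelete(bar);` now does what it looks like it does.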

Riehl answered 17/6, 2010 at 8:29 Comment(6)
We already have that macro and it's used in destructors everywhere which I think is probably overkill.Duo
Deleting then in the destructor, not the constructor :)Supper
Oops. I accidentally your object. Fixed.Riehl
-1 for an unsafe macro. try "if (x->refs == 0) SAFE_DELETE(x)". Rather use: template <typename T> void SafeDelete(T*& p) { delete p; p=0; }Cordi
Wow, my first downvote!! You're correct in that use of templates technically guarantees type safety, however, and would give you better compilation errors. Though, checking x->refs == 0 explicitly is not good form; it is better to perform the check in a helper function.Riehl
Blair the issue is not in the x->refs, but the --> if (anyCondition) SAFE_DELETE(x) <-- - x = NULL will always execute. If you insist on the macro (or can't use templates on your platform), a common workaround is --> #define SAFE_DELETE do { delete x; x=NULL; } while (0) <--- which avoids this type of macro problems.Cordi

I think it's always worth doing this (even though you don't technically need to). I set a pointer to NULL to indicate that the memory it points to no longer needs to be deallocated.

It is also useful for checking whether the pointer is valid before using it.

if (NULL == pSomething)
{
  // Not safe to operate on pSomething
}
else
{
  // Safe to operate on pSomething
}

Putting NULL first in the if condition guards against inadvertently assigning NULL to pSomething when you slip and type a single '='. You get a compile error rather than a bug that takes time to track down.

Dubois answered 17/6, 2010 at 9:21 Comment(1)
Where are you going to check if that pointer is NULL? After the object was destroyed, its members no longer exist. You can't do anything with them anymore, not even check if they're NULL. You should only set a pointer to NULL when the pointer itself continues to exist, but not the pointed-to object.Catherinacatherine

IMO it's worth it in DEBUG mode; I frequently find it helpful. In RELEASE mode the assignment is usually eliminated by the optimizer as a dead store, so you shouldn't rely on it in your production code.
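To illustrate the point (a sketch; actual behavior depends on the compiler and optimization level, and `Widget` is a hypothetical example class): since the member ceases to exist the moment the destructor returns, an optimizer is allowed to drop the final assignment entirely.

```cpp
// Sketch: the member only exists until the destructor returns, so a
// release-mode optimizer may treat the final assignment as a dead
// store and remove it.
class Widget
{
public:
    Widget() : data(new int(0)) {}

    ~Widget()
    {
        delete data;
        data = nullptr; // handy when stepping through a DEBUG build,
                        // but likely elided in optimized RELEASE builds
    }

    int value() const { return *data; }

private:
    int* data;
};
```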

Indiscriminate answered 25/6, 2010 at 12:18 Comment(0)

I'd avoid raw pointers at all costs. Here is an example with and without a smart pointer, showing where setting the pointer to nullptr explicitly is useful:

With:

class foo
{
public:
    foo() : m_something( new int ) { }

    void doStuff()
    {
        // delete + new again - for whatever reason this might need doing
        m_something.reset( new int );
    }

private:
    std::unique_ptr<int> m_something; // int as an example, no need for it to be on the heap in "real" code
};

Without:

class foo
{
public:
    foo() : m_something( new int ) { }

    ~foo()
    {
        delete m_something;
    }

    void doStuff()
    {
        delete m_something;

        // Without this, if the next line throws then the dtor will do a double delete
        m_something = nullptr;

        m_something = new int;
    }

private:
    int* m_something;
};
Butanone answered 15/11, 2013 at 21:23 Comment(0)

No, it ain't worth it.

But if you want to be consistent then you should probably do it.

What I would usually do is create a Free function that I can use whenever I need to free allocated data (even more Free functions if needed). In those functions you are advised to set the pointers to NULL (or the identifier to an invalid value).
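A sketch of such a Free helper (the name `FreeBuffer` and the `char` buffer are hypothetical examples, not from the original answer): the release and the invalidation live in one place instead of being scattered at every call site.

```cpp
// Hypothetical Free helper in the spirit of the answer: release the
// data and invalidate the pointer in one place.
void FreeBuffer(char*& buf)
{
    delete[] buf;
    buf = nullptr; // invalid value: a second FreeBuffer call is a no-op
}
```

Callers then invoke `FreeBuffer(data);` instead of writing their own delete/NULL pairs.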

Conoid answered 17/6, 2010 at 8:26 Comment(0)

It's good practice to always set pointers to NULL after deleting them. Also, some code-checking tools enforce this.

Angelia answered 17/6, 2010 at 8:53 Comment(2)
It is a bad practice, it hides double delete bugs.Chokefull
There is a significant difference between after deleting them and in the destructor.Cordi

© 2022 - 2025 — McMap. All rights reserved.