What does Visual Studio do with a deleted pointer and why?

A C++ book I have been reading states that when a pointer is deleted using the delete operator the memory at the location it is pointing to is "freed" and it can be overwritten. It also states that the pointer will continue to point to the same location until it is reassigned or set to NULL.

In Visual Studio 2012, however, this doesn't seem to be the case!

Example:

#include <iostream>

using namespace std;

int main()
{
    int* ptr = new int;
    cout << "ptr = " << ptr << endl;
    delete ptr;
    cout << "ptr = " << ptr << endl;

    system("pause");

    return 0;
}

When I compile and run this program I get the following output:

ptr = 0050BC10
ptr = 00008123
Press any key to continue....

Clearly the address that the pointer is pointing to changes when delete is called!

Why is this happening? Does this have something to do with Visual Studio specifically?

And if delete can change the address it is pointing to anyway, why wouldn't delete automatically set the pointer to NULL instead of some random address?

Comply answered 27/10, 2015 at 17:19 Comment(23)
Deleting a pointer doesn't mean it will be set to NULL; you have to take care of that yourself.Consternate
I know that, but the book I'm reading specifically says that it will still contain the same address it was pointing to before delete, but the contents of that address may be overwritten.Comply
@tjwrona1992, yes, because this is what usually happens. The book just describes the most likely outcome, not a hard rule.Tsushima
Instead of using cout, what is the value of the pointer when viewed in the debugger? The reason why this is important is that you are running the pointer through the gauntlet of operator <<. Who knows what will come out at the other end if the pointer is no longer valid.Erasure
@tjwrona1992 A C++ book I have been reading -- and the name of the book is ... ?Erasure
What you are doing is undefined behaviour; see my answer.Applejack
The book is "Sams Teach Yourself C++ in One Hour a Day." It appears that the book is from 2009, so it is possible that some of the information is outdated. Also, none of the programs in the book are directed towards any single compiler, so anything special that Visual Studio does would not be mentioned.Comply
It really isn't... the question is based on what I saw in Visual Studio, not what I read in the book... And if you read my answer, it shows that the book was actually right. If you disable the feature in Visual Studio that redirects the deleted pointer, it will leave the pointer pointing to its original location.Comply
@tjwrona1992: As I've stated in my answer what you have done is undefined behaviour, and I suppose you should not expect any consistent behaviourApplejack
@Giorgi The behavior is consistent. Trying to USE a deleted pointer would cause undefined results, but I'm not using it. I'm just checking where it is pointing to. My examples provided consistent results that were conclusive about what Visual Studio is actually doing with the pointer.Comply
@tjwrona1992: It may be surprising, but it's all usage of the invalid pointer value that is undefined behavior, not only dereferencing. "Checking where it is pointing to" IS using the value in a disallowed way.Lashio
@tjwrona1992: What you are doing is already using itApplejack
@Giorgi whether I'm using it or not, the results are consistent. Visual Studio will ALWAYS redirect the pointer to 0x8123 if that feature is enabled, and it will ALWAYS leave it pointing to its original location if the feature is disabled. The results are not unpredictable. I'm no rocket scientist, but I'm fairly certain that when something is ALWAYS the same it is, by definition, consistent.Comply
@BenVoigt: Do you think it is UB to read it after deletion, even if compiler automatically assigned 0x8123 to pointer after delete? (see my answer)Applejack
@BenVoigt: Thanks just my point was maybe since compiler assigned 0x8123 to the pointer, maybe now reading the pointer value is no longer undefined behaviour? (maybe now it has no more uninitialized state?)Applejack
@Giorgi: If what you're asking is whether the compiler has replaced the "invalid pointer value" left over, by writing 0x00008123, the answer is "It doesn't matter". Firstly, it's non-portable to expect a compiler to do so, and secondly, the value written is also an "invalid pointer value" and illegal to read. It's illegal to read not because it is an "indeterminate value" (which might not be true after the compiler writes to it), but by virtue of containing an "invalid pointer value".Lashio
@BenVoigt: That was my point, I thought maybe since compiler assigned 0x00008123 to the deleted pointer, maybe now, it was fine to read it? (if not I'll have to modify last part of my answer now). But if it is UB to read any invalid pointer value, not because it was "deleted", but because pointer has invalid value, then it is UB to read it even if compiler assigned 0x00008123Applejack
@Giorgi: What the compiler is doing doesn't count as an assignment under the language rules. It's just that "reads as 0x00008123" is a legal result of undefined-behavior. Although, strictly speaking, reading an "invalid pointer value" is implementation-defined behavior according to the Standard (but a footnote is pretty clear that an implementation can specify any behavior, including what we normally associate with undefined behavior)Lashio
What makes you think 0x00008123 isn't NULL? (I don't think it is, but I know that the only guaranteed relevant promise in source is "0" -> "NULL". There's no guarantee that the runtime representation of some NULL is 0x0 or any other specific bit pattern.)Featherweight
@EricTowers: The Standard is very clear that the deallocation function invalidates values contained in pointers which, prior to deallocation, pointed into an object in the deallocated space. Since they are invalid pointer values, you can't portably do anything with them, including talk about whether or not they are null.Lashio
@EricTowers, try setting a pointer variable to NULL: ptr = NULL, then print its value: cout << ptr << endl;. You will find that when a pointer is explicitly set to NULL it will point to the address 00000000...Comply
@tjwrona1992 : Visual Studio's implementation defined behaviour (in that version, with those patches, with those compilation flags) is not universal. stackoverflow.com/questions/27714377Featherweight
@BenVoigt: No. I can definitely evaluate a pointer and compare its value with other values. I cannot dereference it. Which is fine; I don't want to.Featherweight
Answer (score 178)

I noticed that the address stored in ptr was always being overwritten with 00008123...

This seemed odd, so I did a little digging and found this Microsoft blog post containing a section discussing "Automated pointer sanitization when deleting C++ objects".

...checks for NULL are a common code construct, meaning that an existing check for NULL combined with using NULL as a sanitization value could fortuitously hide a genuine memory safety issue whose root cause really does need addressing.

For this reason we have chosen 0x8123 as a sanitization value – from an operating system perspective this is in the same memory page as the zero address (NULL), but an access violation at 0x8123 will better stand out to the developer as needing more detailed attention.

Not only does it explain what Visual Studio does with the pointer after it is deleted, it also answers why they chose NOT to set it to NULL automatically!


This "feature" is enabled as part of the "SDL checks" setting. To enable/disable it go to: PROJECT -> Properties -> Configuration Properties -> C/C++ -> General -> SDL checks

To confirm this: disabling the setting and rerunning the same code produces the following output:

ptr = 007CBC10
ptr = 007CBC10

"feature" is in quotes because in a case where you have two pointers to the same location, calling delete will only sanitize ONE of them. The other one will be left pointing to the invalid location...


UPDATE:

After 5 more years of C++ programming experience I realize this entire issue is basically a moot point. If you are a C++ programmer and are still using new and delete to manage raw pointers instead of using smart pointers (which circumvent this entire issue) you may want to consider a change in career path to become a C programmer. ;)
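
For reference, a minimal sketch of the smart-pointer version (assuming C++14 for std::make_unique); there is no raw owning pointer left to sanitize:

    #include <iostream>
    #include <memory>

    int main()
    {
        auto ptr = std::make_unique<int>(42);
        std::cout << "ptr = " << ptr.get() << std::endl;
        // no delete needed: the int is freed automatically when ptr goes
        // out of scope, and ownership can never be left dangling
        return 0;
    }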

Comply answered 27/10, 2015 at 17:31 Comment(18)
That's a nice find. I wish MS would better document debugging behavior like this. For example, it would be nice to know which compiler version started implementing this and what options enable/disable the behavior.Pool
The link works for me. Here is the full address: blogs.microsoft.com/cybertrust/2012/04/24/…Comply
Well, you now know that this is enabled by the /sdl compile option, just add it to your answer.Discard
"from an operating system perspective this is in the same memory page as the zero address" - huh? Isn't the standard (ignoring large pages) page size on x86 still 4kb for both windows and linux? Although I dimly remember something about the first 64kb of address space on Raymond Chen's blog, so in practice I take it same result,Toddle
@Toddle Windows reserves the first (and last) 64kB of address space as dead space for trapping. 0x8123 falls in there nicelySymmetry
I don't have VS at hand, so just wondering... does it do this in all configurations or just in Debug?Brummett
Actually, it doesn't encourage bad habits, and it doesn't allow you to skip setting the pointer to NULL - that's the whole reason they're using 0x8123 instead of 0. The pointer is still invalid, but causes an exception when attempting to dereference it (good), and it doesn't pass NULL checks (also good, because it's an error not to do that). Where's the place for bad habits? It really is just something that helps you debug.Hypogeal
@Hypogeal The place for bad habits is when you have two pointers to the same location. When you delete one Visual studio will set that one to 0x8123, but it will leave the other one pointing to what is now an invalid address. What it SHOULD do is set both of them to 0x8123 that way it would be consistent. If Visual Studio didn't do this at all and made you set it to NULL yourself, there would be no question about what is happening because both pointers would be left pointing to an invalid address and things would be exactly as you would expect.Comply
Well, it can't set both (all) of them, so this is the second best option. If you don't like it, just turn off the SDL checks - I find them rather useful, especially when debugging someone else's code.Hypogeal
I agree, there is a benefit to this when debugging other people's code. You can be sure that you set all of your own deleted pointers to NULL, but there's no telling what someone else will do.Comply
Good for you for figuring out what's actually going on and why rather than accepting the standard "the C++ standard leaves this as undefined behavior, anything can happen" non-answer.Coral
Thanks for this. MS didn't do a good job of documenting it, and this breaks Application Verifier's double-delete test. You now get a protection exception, and without knowing what 0x8123 means, good luck finding it.Greenstone
Most game developers do not use shared pointers. It leads to bad design patterns where it is confusing who owns what and makes multi-threading code for game engine optimization very difficult. Should we change careers as well? :DLeakage
@PaulRenton you don't have to use shared pointers, use std::unique_ptr instead. That way ownership is always crystal clear. ;)Comply
@tjwrona1992 Many of the std lib shared ptrs do hidden heap allocations for ref counts and have extra overhead for their use. It is better to have a custom weakobject ptr that operates based on a guid or some id. I agree that unique_ptr is useful and conveys ownership. I also believe that at a company it is fair to say a raw ptr on a struct or class is unique unless explicitly marked otherwise. Shared ptrs are banned period because of how they fragment the heap, at least for our coding standards. To me, raw ptr's already say I am unique. I do like unique ptrs for scoped destruction though.Leakage
@tjwrona1992 To be clear, I am not implying that weakptrs shouldn't be used on structs and classes. That is the difference between raw and weakptrs. Raw is ownership, weak is no ownership.Leakage
@PaulRenton that's actually very interesting, the way I have always seen it done is using std::unique_ptr where you want to show ownership and use the .get() method to get the raw pointer and pass it to things that do not own the pointer. So in other words, std::unique_ptr would imply ownership and raw pointers would imply non-ownership. This works very similarly, however the std::unique_ptr does the cleanup for you.Comply
@tjwrona1992 The way I view it is that C++ has a collection of tools and it is important to know what is at your disposal. Those std smart ptrs simply don't work well for game development as I explained the overhead and heap fragmentation. Also, sharedptr's encourage ppl. not to use best practices for multi-threading setup, especially lockless threading. To me, raw ptr means ownership and weakptr the opposite. I will admit I use unique ptr for scoped destruction on the stack of heap allocations. There are actually good use cases for that.Leakage
Answer (score 30)

You are seeing the side effects of the /sdl compile option. Turned on by default for VS2015 projects, it enables additional security checks beyond those provided by /GS. Use the Project > Properties > C/C++ > General > SDL checks setting to alter it.

Quoting from the MSDN article:

  • Performs limited pointer sanitization. In expressions that do not involve dereferences and in types that have no user-defined destructor, pointer references are set to a non-valid address after a call to delete. This helps to prevent the reuse of stale pointer references.
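
Note the two qualifications in that quote. A sketch of the destructor one (hypothetical types; the expected behaviour follows the quoted wording, not verified here separately):

    struct Plain    { int x; };          // no user-defined destructor
    struct WithDtor { ~WithDtor() {} };  // user-defined destructor

    int main()
    {
        Plain*    a = new Plain;
        WithDtor* b = new WithDtor;
        delete a;   // /sdl: a should be set to the non-valid address
        delete b;   // per the quote, b is left alone (user-defined destructor)
        return 0;
    }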

Do keep in mind that setting deleted pointers to NULL is a bad practice when you use MSVC. It defeats the help you get from both the Debug Heap and this /sdl option, you can no longer detect invalid free/delete calls in your program.
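
A sketch of what nulling costs you (assuming a debug build with the debug heap, or /sdl enabled):

    #include <cstddef>

    int main()
    {
        int* p = new int(42);
        delete p;
        p = NULL;    // hides the bug below
        delete p;    // delete on a null pointer is a legal no-op, so this
                     // double delete goes unreported; left unsanitized (or
                     // sanitized to 0x8123), it could have been caught
        return 0;
    }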

Discard answered 27/10, 2015 at 18:4 Comment(6)
Confirmed. After disabling this feature, the pointer is no longer redirected. Thanks for providing the actual setting that modifies it!Comply
Hans, is it still considered bad practice to set deleted pointers to NULL in a case where you have two pointers pointing to the same location? When you delete one, Visual Studio will leave the second pointer pointing to its original location which is now invalid.Comply
Pretty unclear to me what kind of magic you expect to happen by setting the pointer to NULL. The other pointer isn't set, so it doesn't solve anything; you still need the debug allocator to find the bug.Discard
My point is, if you always rely on Visual Studio to clean up your pointers for you, that second pointer would not be sanitized properly; however, if you sanitize all of your pointers yourself, you are more likely to stop and think about it and sanitize both of them. It isn't documented that Visual Studio will fail to sanitize the second pointer, so most users would just assume it did and move on with a dangling reference left in their program.Comply
VS does not clean up pointers. It corrupts them, so your program will crash when you use them anyway. The debug allocator does much the same thing with heap memory. The big problem with NULL is that it is not corrupt enough. Otherwise this is a common strategy; google "0xdeadbeef".Discard
Setting the pointer to NULL is still much better than leaving it pointing to its previous address which is now invalid. Attempting to write to a NULL pointer will not corrupt any data and will probably crash the program. Attempting to reuse the pointer at that point may not even crash the program, it may just produce very unpredictable results!Comply
Answer (score 19)

It also states that the pointer will continue to point to the same location until it is reassigned or set to NULL.

That is definitely misleading information.

Clearly the address that the pointer is pointing to changes when delete is called!

Why is this happening? Does this have something to do with Visual Studio specifically?

This is clearly within the language specifications. ptr is not valid after the call to delete. Using ptr after it has been deleted is cause for undefined behavior. Don't do it. The run time environment is free to do whatever it wants to with ptr after the call to delete.

And if delete can change the address it is pointing to anyway, why wouldn't delete automatically set the pointer to NULL instead of some random address?

Changing the value of the pointer to any old value is within the language specification. As for changing it to NULL specifically, I would say that would be bad: the program would appear to behave more sanely if the pointer were set to NULL, but that would hide the problem. When the program is compiled with different optimization settings or ported to a different environment, the problem will likely show up at the most inopportune moment.

Experimentalism answered 27/10, 2015 at 17:25 Comment(6)
I do not believe it answers OP's question.Tsushima
Disagree even after edit. Setting it to NULL will not hide the problem - in fact, it would EXPOSE it in more cases than without that. There is a reason normal implementations do not do this, and the reason is different.Tsushima
@SergeyA, most implementations don't do it for the sake of efficiency. However, if an implementation decides to set it, it is better to set it to something that is not NULL. That would reveal the problems sooner than NULL would. If it is set to NULL, calling delete twice on the pointer would not cause a problem. That is definitely not good.Experimentalism
No, not the efficiency - at least, it is not the primary concern.Tsushima
@SergeyA, I am missing something that you are thinking of. Please add that to your answer, if you don't mind.Experimentalism
@Tsushima Setting a pointer to a value that's not NULL but also definitely outside the process' address space will expose more cases than the two alternatives. Leaving it dangling won't necessarily cause a segfault if it's used after being freed; setting it to NULL won't cause a segfault if it's deleted again.Campman
Answer (score 10)

delete ptr;
cout << "ptr = " << ptr << endl;

In general, even reading the value of an invalid pointer (which is what you do above; note that this is different from dereferencing it) is implementation-defined behaviour. A pointer becomes invalid, for example, when you delete it. This change was introduced by CWG #1438. See also here.

Please note that before that change, reading the value of an invalid pointer was undefined behaviour, so what you have above would have been undefined behaviour, which means anything could happen.
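
If all you want is to log where the allocation used to live, you can sidestep the question entirely by copying the address into an integer while the pointer is still valid (a sketch):

    #include <cstdint>
    #include <iostream>

    int main()
    {
        int* ptr = new int;
        std::uintptr_t where = reinterpret_cast<std::uintptr_t>(ptr); // copied while still valid
        delete ptr;
        // this reads an ordinary integer, not an invalid pointer value
        std::cout << "was at " << std::hex << where << std::endl;
        return 0;
    }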

Applejack answered 28/10, 2015 at 8:54 Comment(4)
Also relevant is the quote from [basic.stc.dynamic.deallocation]: "If the argument given to a deallocation function in the standard library is a pointer that is not the null pointer value, the deallocation function shall deallocate the storage referenced by the pointer, rendering invalid all pointers referring to any part of the deallocated storage" and the rule in [conv.lval] (section 4.1) that says reading (lvalue->rvalue conversion) any invalid pointer value is implementation-defined behavior.Lashio
Even UB can be implemented in a specific way by a specific vendor such that it's reliable, at least for that compiler. If Microsoft had decided to implement their pointer-sanitization feature prior to CWG #1438, that wouldn't have made that feature any more or less reliable, and in particular it's simply not true that "anything could happen" if that feature is turned on, regardless of what the standard says.Coral
@KyleStrand: I basically gave the definition of UB (blog.regehr.org/archives/213).Applejack
To most of the C++ community on SO, "anything could happen" is taken entirely too literally. I think that this is ridiculous. I understand the definition of UB, but I also understand that compilers are just pieces of software implemented by real people, and if those people implement the compiler so that it behaves certain way, that's how the compiler will behave, regardless of what the standard says.Coral
Answer (score 1)

I believe you are running some sort of debug mode, and VS is attempting to repoint your pointer to some known location so that any further attempt to dereference it can be traced and reported. Try compiling/running the same program in release mode.

Pointers are usually not changed inside delete for the sake of efficiency and to avoid giving a false sense of safety. Setting the deleted pointer to a pre-defined value does no good in most complex scenarios, since the pointer being deleted is likely to be only one of several pointing to the same location.

As a matter of fact, the more I think about it, the more I find that VS is at fault when doing so, as usual. What if the pointer is const? Is it still gonna change it?
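
That is easy to test with a small variation of the question's program (a sketch; the first comment below reports the observed result):

    #include <iostream>

    using namespace std;

    int main()
    {
        int* const ptr = new int;   // the pointer itself is const
        cout << "ptr = " << ptr << endl;
        delete ptr;                 // user code could not reassign ptr here...
        cout << "ptr = " << ptr << endl;  // ...yet with SDL checks it is still overwritten
        return 0;
    }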

Tsushima answered 27/10, 2015 at 17:24 Comment(9)
Yup, even constant pointers get redirected to this mysterious 8123!Comply
There goes another stone to VS :) Just this morning someone asked why they should be using g++ instead of VS. Here it goes.Tsushima
Then again, this feature isn't necessarily bad. Sure it will cause your program to blow up in your face if you try to use the invalid pointer, but it's better than having your whole computer crash from trying to reuse a pointer that gets left pointing to the old location!Comply
@tjwrona1992 that feature is bad. It provides you with an illusion of safety which it cannot guarantee. The reason is that in any complex program you will end up with more than one pointer pointing to the same memory. So what good does it do if you have 'sanitized' one of them? No good, right.Tsushima
You are right... if two pointers point to the same location and one is deleted, only one of them is "sanitized". The other pointer is left pointing to the now invalid memory... shame on you MicrosoftComply
@tjwrona1992, it was debated 20 or more years ago and the consensus was that we are not going to 'sanitize' pointers. And the reason is exactly this. Microsoft again reinvented the wheel, and made it square.Tsushima
@Tsushima but from the other side, dereferencing that deleted pointer will show you, by segfault, that you tried to deref a deleted pointer, and it won't be equal to NULL. In the other case it will only crash if the page also gets freed (which is very unlikely). Fail faster; solve sooner.Symmetry
@ratchetfreak "Fail fast, solve sooner" is a very valuable mantra, but "Fail fast by destroying key forensic evidence" does not start such a valuable mantra. In simple cases, it may be convenient, but in more complicated cases (the ones we tend to need the most help on), erasing valuable information decreases my tools available to solve the problem.Hardigg
@tjwrona1992: Microsoft is doing the right thing here in my opinion. Sanitizing one pointer is better than doing none at all. And if this causes you a problem in debugging, put a break point before the bad delete call. Odds are that without something like this you'd never spot the problem. And if you have a better solution to locate these bugs, then use it and why do you care what Microsoft does?Nagoya
Answer (score 1)

After deleting a pointer, the memory to which it points may still be accessible. To make this error manifest, the pointer value is set to an obvious value; this really helps the debugging process. If the value were set to NULL, it might never show up as a potential bug in the program flow, so it could hide a bug when you later test against NULL.

Another point is that some runtime optimizer may check that value and change its results.

In earlier times MS set the value to 0xcfffffff.

Saied answered 4/11, 2015 at 7:12 Comment(0)
