RAII cannot really guarantee to prevent resource leaks, can it?

Pardon me if this question is too silly. The most common example of the usefulness of RAII is:

void func() {
  // create some object owned by a smart pointer
  // (Widget and do_something are placeholders for any resource type / operation)
  auto p = std::make_unique<Widget>();
  // do some operation that may throw
  do_something(*p);

  return;
}
// Whether func() returns via the return statement or because an exception
// propagates out of it, the destructor of p runs and it is guaranteed that
// the memory will be released.

This article says (if I understood it correctly) that, if the runtime knows there is no exception handler that can catch a thrown exception, it may skip calling the destructors of automatic objects.

There is also a proposed solution to that problem, namely to use catch(...) in main.

Now my concern is that if the proposed solution is not used, there may be a resource leak even when using RAII. And there are situations where the solution cannot be applied (like writing a library that will be used by others). In that case a serious problem can occur, like corrupting a file that contains valuable information.

Should we really be concerned about this problem? Or am I just missing something?

Slinkman asked 8/5, 2014 at 15:32 Comment(8)
If your program is crashing, do you really consider a leak your biggest problem? – Literatim
I haven't read the article, but if there is no exception handler, won't the program be terminated, and so the resources will be freed anyway? (Assuming it's not an outside-of-the-process thing like a temporary file or a database lock.) – Disrupt
@ThomasPadron-McCarthy Unless std::terminate is replaced by the user for some reason. (Not saying that it's a good idea, though.) – Octavo
If I unplug your machine, the destructors won't be called. I'm serious: if your destructors do a query to a remote API, you can't assume that you'll get perfectly balanced lock/unlock calls to it! – Melton
@KerrekSB: yes, some very well-known programs (including Windows' Explorer) use the "Bird of Phoenix" strategy of crashing gracefully, saving critical state, and restarting themselves. – Kass
@KerrekSB, concern is context-sensitive, as is C++. Besides, the situation can be as simple as a method deep in a library class throwing an exception because a precondition was not met, which the user forgot to handle. – Slinkman
The relevant section of the standard is 15.3/9: "If no matching handler is found, the function std::terminate() is called; whether or not the stack is unwound before this call to std::terminate() is implementation-defined." – Orazio
I find that any operation involving a resource that requires non-trivial clean-up (deleting a temporary, logging out of a server, etc. – anything that cannot simply be "reclaimed / closed" by the OS) generally has to be written with strong exception-safety guarantees, and RAII alone is rarely sufficient (except in some clever examples). Strong exception safety often involves a catch-all (or catching all possible exceptions) with some clean-up and roll-back code. It's impossible to provide strong guarantees when you only control the resource access (the RAII class) and not the overall operation. – Loyola

Re

“Should we really be concerned about the problem?”

it depends on the context.

If you’re using destructor breakpoints or logging while debugging, then you need the relevant destructor(s) to be called. Likewise if a destructor is saving crucial state for a "Bird of Phoenix" process re-instantiation after a crash. And if possible it’s nice to have temporary files that aren’t needed for recovery removed, not lying around after a crash.

On the other hand, since the solution is so utterly simple – a try-catch around some calling code, e.g. up in main – it’s not really a practical problem. It’s more of a thing to be aware of, e.g. not to expect destructors to necessarily be executed in someone else’s code that’s crashing via an unhandled exception.
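For instance, a minimal sketch of that catch-all (run_application is just a placeholder for whatever the program really does; here it throws to show the effect):

#include <iostream>
#include <stdexcept>

// Placeholder for the real work of the program; it may throw anything.
void run_application() {
    throw std::runtime_error("something went wrong");
}

int main() {
    try {
        run_application();
    } catch (...) {
        // Because the exception is caught here instead of escaping main,
        // the stack is guaranteed to be unwound, so the destructors of
        // automatic objects have already run by this point.
        std::cerr << "unhandled exception, exiting\n";
        return 1;
    }
}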

Kass answered 8/5, 2014 at 15:58 Comment(0)

In order for your concern to be valid, you need some kind of resource which could be cleaned up by RAII, but which the OS won't clean up when std::terminate is called and your process dies.

So, let's examine the sort of resources you could reasonably use RAII to clean up:

  1. process-local memory: the OS will clean this up
  2. open files: the OS will close them, and flush anything already written, but any commit/finalize style operations you left to the RAII dtor won't happen
  3. open sockets: the OS will close them, but if your RAII dtor was supposed to send a friendly logout/goodbye, that won't happen
  4. shared memory: again, OS-level resources will be released but any explicit cleanup or consistency code won't be executed
  5. etc. (especially, Alf suggested some more externally-visible resources I hadn't thought of)

So the issue generally isn't the resources themselves, which the OS will release, but the semantics: your RAII dtor was supposed to guarantee some clean state of a shared resource (shared memory, or files, or a network stream).
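To make that distinction concrete, here is a minimal, hypothetical sketch of a guard whose destructor does "semantic" cleanup that the OS will not do for you:

#include <cstdio>
#include <string>

// Illustrative only: if the process dies, the OS closes the underlying handle,
// but it does not remove the marker file – that cleanup happens only if this
// destructor actually runs (i.e. only if the stack is unwound).
class ScopedMarkerFile {
public:
    explicit ScopedMarkerFile(std::string path) : path_(std::move(path)) {
        if (std::FILE* f = std::fopen(path_.c_str(), "w"))  // create the marker
            std::fclose(f);
    }
    ~ScopedMarkerFile() {
        std::remove(path_.c_str());  // skipped if std::terminate() is reached
    }                                // without stack unwinding
private:
    std::string path_;
};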

Should we really be concerned about the problem?

Well, we should be concerned about correct program semantics anyway. If your program has some external side-effects you need to guarantee, then a catch-all (at least around the relevant code) to guarantee RAII cleanup is the simplest of your concerns.

Adnopoz answered 8/5, 2014 at 15:58 Comment(4)
It seems this answer concentrates on resources that the OS will clean up. Other resources include temporary files (not their open state but their existence), injected DLLs, and states of devices. Other reasons to need destructors executed include logging and breakpoints. – Kass
I'm not sure I see why logging and breakpoints are essential if the process is crashing anyway, since I'd rather have a core dump without stack unwinding. However, it could easily be an idiom I haven't used, or platform-specific (like the temporary file and shared library handling). – Adnopoz
Most of what you mention is a platform-specific viewpoint (post-mortem debugging, automatic removal of temp files), but you're right that DLL injection is platform-specific. C++ is meant to cater for all platforms, just as C, and although C++11 fails spectacularly in that respect (e.g. the core-language wchar_t is incompatible with Windows), we should IMHO pretend that C++ is really not just *nix, and act on that basis. – Kass
Thought it might be, thanks. Anyway, I hoped to separate the process-resource concerns from the external side-effect concerns. Maybe the edit is a little clearer. – Adnopoz

I'm assuming here that you are not so much concerned about things such as memory leaks, but more about data corruption.

You'll need to analyze your design and application carefully, but I suppose it is theoretically possible that you might want a "last-chance" type of failsafe that kicks in in the case of a serious program bug causing an uncaught exception. In that case, you could install your own terminate handler (with std::set_terminate).

That said, if you are truly concerned about file corruption, what you describe would be inadequate. The way to prevent file corruption is by careful ordering of operations such as reads and writes, and by proper flushing of file buffers (using fsync or fdatasync on Linux, for example) before considering an operation to be complete (i.e., before committing it).
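A minimal sketch of that write-then-flush-then-sync ordering on POSIX (the function and its arguments are illustrative, not a complete recipe):

#include <cstdio>
#include <unistd.h>  // fsync (POSIX)

// Treat the data as committed only after it has reached the disk.
bool commit_record(const char* path, const char* data) {
    std::FILE* f = std::fopen(path, "a");
    if (!f) return false;
    bool ok = std::fputs(data, f) >= 0
           && std::fflush(f) == 0        // flush the stdio buffer to the kernel
           && fsync(fileno(f)) == 0;     // flush the kernel buffer to the disk
    ok = (std::fclose(f) == 0) && ok;
    return ok;                           // only now consider the operation complete
}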

Octavo answered 8/5, 2014 at 15:47 Comment(1)
If the issue is just your program terminating, you don't need fsync; just flushing the output stream is sufficient. (Also, there's no standard way of getting the system-level file descriptor, needed for fsync, from an std::filebuf.) – Koralie

If you don't catch the exception, the program will terminate. In this case, most of the resources you're worried about (memory, mutex locks, etc.) will be cleaned up by the OS, so you don't have to worry. The big exception is temporary files; output files might also be an issue, since they may be incomplete or inconsistent, and you don't want to leave such files lying around for someone to accidentally use. (I usually use an OutputFile class which wraps an std::ofstream and deletes the file in its destructor if commit hasn't been called on it, or if the close in commit fails.)
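A minimal sketch of such a wrapper, following the description above (the exact interface is illustrative):

#include <cstdio>
#include <fstream>
#include <stdexcept>
#include <string>

// The output file survives only if commit() is called and succeeds;
// on any other path the destructor removes the (possibly incomplete) file.
class OutputFile {
public:
    explicit OutputFile(std::string name)
        : name_(std::move(name)), out_(name_) {
        if (!out_) throw std::runtime_error("cannot open " + name_);
    }

    std::ofstream& stream() { return out_; }

    void commit() {
        out_.close();
        if (out_.fail()) {                  // the close in commit failed
            std::remove(name_.c_str());
            throw std::runtime_error("error writing " + name_);
        }
        committed_ = true;
    }

    ~OutputFile() {
        if (!committed_) {                  // commit was never (successfully) called
            out_.close();
            std::remove(name_.c_str());
        }
    }

private:
    std::string name_;
    std::ofstream out_;
    bool committed_ = false;
};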

Of course, if there's an exception you don't expect, that's a serious error in the program; I've often found it useful for temporary files not to be deleted when I'm debugging, or trying to figure out why the code isn't working. (Such exceptions will never occur at a user site, of course, since you'll have sufficiently tested the program before releasing it :-).)

If it is really an issue, you can use std::set_terminate to set a terminate handler, which can do any last-minute clean-up. (Be aware, however, that in this case it is unspecified whether the stack will have been unwound or not, so you may have problems determining what needs cleaning up.)
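For example, a minimal sketch of installing such a handler (what it actually cleans up is application-specific):

#include <cstdlib>
#include <exception>
#include <iostream>

// Last-chance handler: called when an exception escapes without being caught.
// Whether the stack has been unwound by this point is implementation-defined.
[[noreturn]] void last_chance_cleanup() {
    std::cerr << "terminating: performing emergency clean-up\n";
    // ... remove known temporary files, flush a log, etc. ...
    std::abort();  // a terminate handler must not return
}

int main() {
    std::set_terminate(last_chance_cleanup);
    // ... the rest of the program ...
}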

Koralie answered 8/5, 2014 at 16:34 Comment(0)