This is more of a philosophical question.
In C++ we have a nice, shiny idiom: RAII. But I often find it incomplete. It does not align well with the fact that my application can be killed with SIGSEGV.
I know, I know, you will say programs like that are malformed. But the sad fact is that on POSIX (specifically Linux) you can allocate beyond the physical memory limits and hit SIGSEGV in the middle of execution, while working with correctly allocated memory.
You may say: "The application dies anyway, so why should you care about those poor destructors not being called?" Unfortunately, some resources are not automatically freed when an application terminates, such as file-system entities.
I am pretty sick of designing hacks and breaking good application design just to cope with this. So what I am asking for is a nice, elegant solution to this kind of problem.
Edit:
It seems that I was wrong: on Linux such applications are killed by the kernel pager. In that case the question stays the same, but the cause of the application's death is different.
Code snippet:
#include <string>
#include <unistd.h>  // for ::unlink

struct UnlinkGuard
{
    explicit UnlinkGuard(std::string path_to_file)
        : _path_to_file(std::move(path_to_file))
    { }

    ~UnlinkGuard() {
        unlink();
    }

    bool unlink() {
        if (_path_to_file.empty())
            return true;
        if (::unlink(_path_to_file.c_str())) {
            /// Probably some logging.
            return false;
        }
        disengage();
        return true;
    }

    void disengage() {
        _path_to_file.clear();
    }

private:
    std::string _path_to_file;
};
void foo()
{
    /// Pick a path for the temp file.
    std::string path_to_temp_file = "...";

    /// Create the file.
    /// ...

    /// Set up the unlink guard.
    UnlinkGuard unlink_guard(path_to_temp_file);

    /// Call some potentially unsafe library function that can get the process killed:
    /// * by a SIGSEGV
    /// * by running out of memory
    /// ...

    /// Work is done, the file content is correct.
    /// Rename the temp file.
    /// ...

    /// Disengage the unlink guard.
    unlink_guard.disengage();
}
On success I use the file. On failure I want the file to be missing.
This could be achieved if POSIX had support for link()-ing a previously unlinked file by its file descriptor, but there is no such feature :(.
Comments:
- since AFAIK, all heap memory allocation functions should return `0` if not enough memory is available (someone correct me if I'm wrong). And that write to address `0` causes your `sigsegv`. – Mange
- … `new`, it will return a null pointer on failure. But the normal `new` just throws `std::bad_alloc`. – Minestrone
- … `virtual memory`. Every page of that memory is backed with real physical memory only when you access the page for the first time. – Corliss
- … `x64` it's practically impossible. It does not fail because `new` (`malloc()`) is based on `mmap()`. And `mmap()` just, let's say, extends the virtual address space. – Corliss
- … `SIGSEGV` rather than being killed by a pager. Maybe it's this way on FreeBSD. We have both kinds of servers with different versions of FreeBSD. So, if it is a `pager` that kills the application, is there an elegant way to handle it? – Corliss