Should you save a temporary pointer?
It can be a good thing in certain situations, as has been pointed out in previous answers, provided that you have a plan for how to continue execution after the failure.
The most important thing is not how you handle the error; the important thing is that you do something instead of just assuming there's no error. Exiting is a perfectly valid way of handling an error.
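As a minimal sketch of that pattern (the function grow_buffer and its interface are made up for illustration): the realloc result goes into a temporary, so the old buffer is still valid and usable if the call fails, and the caller can fall back to some recovery plan.
#include <stdlib.h>

/* Hypothetical helper: try to grow *buf, but keep the old buffer
   usable if realloc fails, so the caller can decide what to do. */
int grow_buffer(int **buf, size_t *capacity)
{
    size_t new_capacity = *capacity * 2;
    int *tmp = realloc(*buf, new_capacity * sizeof *tmp);
    if(!tmp)
        return -1;   /* *buf is untouched and still valid */
    *buf = tmp;
    *capacity = new_capacity;
    return 0;
}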
Don't do it, unless you plan a sensible recovery
However, do note that in most situations, a failure from realloc is pretty hard to recover from. Often, exiting is the only sensible option. If you cannot acquire enough memory for your task, what are you going to do? I have encountered a situation where recovering was sensible only once. I had an algorithm for a problem, and I realized that I could improve performance significantly if I allocated a few gigabytes of RAM. It worked fine with just a few kilobytes, but it got noticeably faster with the extra RAM. So that code was basically like this:
int *huge_buffer = malloc(1000*1000*1000 * sizeof *huge_buffer);
if(!huge_buffer)
    slow_version();
else
    fast_version();
In those cases, just do this:
p = realloc(p, 2 * sizeof *p);
if(!p) {
    fprintf(stderr, "Error allocating memory\n");
    exit(EXIT_FAILURE);
}
Do note both changes to the call: I removed the cast and changed the sizeof argument. Read more about that here: Do I cast the result of malloc?
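As a side-by-side sketch (the names example, n, a and b are just for illustration), the sizeof *p form avoids repeating the type, so it stays correct even if the pointer's type changes later:
#include <stdlib.h>

void example(size_t n)
{
    /* Repeats the type: silently becomes wrong if a's type changes later. */
    double *a = malloc(n * sizeof(double));

    /* No cast, and sizeof follows b's type automatically. */
    double *b = malloc(n * sizeof *b);

    free(a);
    free(b);
}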
Or even better, if you generally don't care about recovering, write a wrapper.
void *my_realloc(void *p, size_t size) {
    void *tmp = realloc(p, size);
    if(tmp) return tmp;
    fprintf(stderr, "Error reallocating\n");
    free(p);
    exit(EXIT_FAILURE);
    return NULL; // Will never be executed, but avoids warnings
}
Note that this might seem to contradict what I'm writing below, where I say that it's not always necessary to free before exiting. The reason is that once all the error handling is abstracted out into a single function, doing it properly is so easy that I might as well do it right. It only took one extra line in this case, for the whole program.
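For what it's worth, a call site then shrinks to a single line with no per-call error check. A minimal sketch, assuming the my_realloc wrapper above is in scope:
#include <stdlib.h>

void *my_realloc(void *p, size_t size);   /* defined as above */

void example(void)
{
    size_t n = 4;
    int *p = malloc(n * sizeof *p);
    if(!p)
        exit(EXIT_FAILURE);

    n *= 2;
    p = my_realloc(p, n * sizeof *p);   /* no error check needed here */

    free(p);
}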
Related: What REALLY happens when you don't free after malloc?
About backwards compatibility in general
Some would say that it's good practice to free before exiting, just because it actually does matter in certain circumstances. My opinion is that these circumstances are quite specialized, like when coding embedded systems with an OS that does not free memory automatically when a process terminates. If you're coding in such an environment, you should know that. And if you're coding for an OS that does this for you, then why not utilize it to keep down code complexity?
In general, I think some C coders focus too much on backwards compatibility with ancient computers from the 1970s that you only encounter in museums today. In most cases, it's pretty fair to assume ASCII, two's complement, 8-bit chars and such things.
A comparison would be to still write web pages so that they can be viewed in Netscape Navigator, Mosaic and Lynx. Only spend time on that if there really is a need.
Even if you skip backwards compatibility, use some guards
However, whenever you make such assumptions, it can be a good thing to include some meta code that makes compilation fail on the wrong target. For instance, if your program relies on 8-bit chars:
#include <limits.h>   /* for CHAR_BIT */
_Static_assert(CHAR_BIT == 8, "Char bits");
That will make the build fail instead of silently producing a broken program. If you're cross compiling, this might possibly be more complicated. I don't know how to do it properly then.
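The same idea works for the other assumptions mentioned above. The particular checks below are my own suggestion of how one could test them, not the only way to do it:
#include <limits.h>

/* 8-bit chars */
_Static_assert(CHAR_BIT == 8, "8-bit chars required");

/* ASCII: 'A' is 65 in ASCII but not in EBCDIC */
_Static_assert('A' == 65, "ASCII execution character set required");

/* Two's complement: -1 has all value bits set only in two's complement */
_Static_assert((-1 & 3) == 3, "Two's complement required");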
Comments:

reallocf() is a function that does the same thing as realloc() but frees the existing pointer if sufficient additional memory cannot be allocated. Failed calls to realloc() are apparently a common source of memory leaks, so you should definitely handle this situation if your program isn't going to quit immediately when memory allocation fails. – Editorialp

realloc() works with NULL. realloc(NULL, size) works like malloc(size). – Jericajericho

The sizeof should be int instead of int*. – Impregnate
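To illustrate the realloc(NULL, size) behaviour mentioned in the comments, a growing array can start out as a null pointer. A minimal sketch (the doubling strategy and the numbers are just an example):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *data = NULL;     /* no initial malloc needed */
    size_t capacity = 0;
    size_t count = 0;

    for(int i = 0; i < 100; i++) {
        if(count == capacity) {
            size_t new_capacity = capacity ? capacity * 2 : 8;
            /* The first call is realloc(NULL, ...), which acts like malloc. */
            int *tmp = realloc(data, new_capacity * sizeof *tmp);
            if(!tmp) {
                free(data);
                fprintf(stderr, "Error allocating memory\n");
                return EXIT_FAILURE;
            }
            data = tmp;
            capacity = new_capacity;
        }
        data[count++] = i;
    }

    printf("Stored %zu values\n", count);
    free(data);
    return 0;
}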