I am using realloc in every iteration of a for loop that iterates more than 10,000 times.
Is this good practice? Will realloc cause an error if it is called that many times?
It won't fail unless you've run out of memory (which would happen with any other allocator as well), but your code will usually run much faster if you manage to estimate the required storage up front.
Often it's better to do an extra pass solely to determine the storage requirements.
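As a sketch of that two-pass idea (the comma-splitting input here is just an illustration, not something from the question): scan once to count, then allocate exactly once instead of realloc'ing per element:

```c
#include <stdlib.h>
#include <string.h>

/* Two-pass allocation sketch: pass 1 counts the fields, pass 2 fills
 * them in.  Works for any input you can scan twice. */
static char **split_commas(const char *s, size_t *out_n)
{
    size_t n = 1;
    for (const char *p = s; *p; p++)          /* pass 1: count fields */
        if (*p == ',')
            n++;

    char **fields = malloc(n * sizeof *fields);  /* single allocation */
    if (!fields)
        return NULL;

    size_t i = 0;
    const char *start = s;
    for (const char *p = s; ; p++) {          /* pass 2: fill in */
        if (*p == ',' || *p == '\0') {
            size_t len = (size_t)(p - start);
            fields[i] = malloc(len + 1);
            if (fields[i]) {
                memcpy(fields[i], start, len);
                fields[i][len] = '\0';
            }
            i++;
            start = p + 1;
            if (*p == '\0')
                break;
        }
    }
    *out_n = n;
    return fields;
}
```

The array of field pointers is never resized, because its final size was known before the first malloc.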
I wouldn't say that realloc is a no-go, but it's not good practice either.
If you do use realloc (i.e., in the case described by David, leaving out the obvious C++ alternatives), make sure you use it with care. Re-allocating on every single loop iteration is a bad idea. But the search for the best growth factor for arrays is a different topic that has already been debated a lot on SO. – Dickdicken
I stumbled upon this question recently, and while it is quite old, I feel the information is not entirely accurate.
Regarding an extra loop to predetermine how many bytes of memory are needed:
using an extra loop is not always, or even often, better. What is involved in predetermining how much memory is needed? It might incur additional I/O that is expensive and unwanted.
Regarding using realloc in general:
the alloc family of functions (malloc, calloc, realloc, and free) is very efficient. The underlying allocator obtains a big chunk from the OS and then hands parts out to the user as requested. Consecutive calls to realloc will often just tack additional space onto the current memory location.
You do not want to maintain a heap pool yourself if the system does it for you more efficiently and correctly from the start.
You run the risk of fragmenting your memory if you do. This causes performance degradation, and on 32-bit systems it can lead to memory shortages due to a lack of large contiguous blocks of memory.
I'm guessing you are increasing the length of an array by 1 each time around. If so, then you are far better off keeping track of a capacity and a length, and only increasing the capacity when you need a length that exceeds the current capacity. When you do increase the capacity, increase it by a larger amount than just 1.
Of course, the standard containers will do this sort of thing for you so if you can use them, it's best to do so.
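A minimal sketch of that capacity/length pattern in C (the dynarray name and API are illustrative, not a standard library):

```c
#include <stdlib.h>

/* The array only reallocates when length would exceed capacity, and
 * capacity grows geometrically, so N appends cost O(N) amortized
 * instead of O(N^2). */
struct dynarray {
    int    *data;
    size_t  len;   /* elements in use */
    size_t  cap;   /* elements allocated */
};

static int da_push(struct dynarray *a, int value)
{
    if (a->len == a->cap) {
        size_t newcap = a->cap ? a->cap * 2 : 16;  /* grow geometrically */
        int *p = realloc(a->data, newcap * sizeof *p);
        if (!p)
            return -1;        /* old buffer is still valid on failure */
        a->data = p;
        a->cap = newcap;
    }
    a->data[a->len++] = value;
    return 0;
}
```

With this scheme, 10,000 appends trigger only about ten realloc calls instead of 10,000.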
In addition to what's been said before, there are a few more things to consider.
Performance of realloc(<X-sized-buf>, X + inc) depends on two things:
malloc(N + inc), which usually degrades towards O(N) with the size of the allocated block
memcpy(newbuf, oldbuf, N), which is also O(N) with the size of the block
That means that for small increments but large existing blocks, realloc() performance is O(N^2) with respect to the size of the existing data block. Think bubblesort vs. quicksort ...
It's comparatively cheap if you start with a small block, but it will significantly punish you if the to-be-reallocated block is large. To mitigate, you should make sure that inc
is not small relative to the existing size; realloc'ing by a constant amount is a recipe for performance problems.
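To make that cost concrete, here is a back-of-the-envelope sketch (my own illustration, not from the thread) that counts the bytes a worst-case realloc would copy when growing a buffer to 1 MiB by a constant step versus by 50% per step, assuming every realloc moves the block:

```c
/* Worst-case total bytes copied across all grows, if every realloc
 * has to move the block (each grow copies the old contents). */
static unsigned long long copied_constant(unsigned long long target,
                                          unsigned long long step)
{
    unsigned long long size, copied = 0;
    for (size = step; size < target; size += step)
        copied += size;           /* old contents moved on each grow */
    return copied;
}

static unsigned long long copied_geometric(unsigned long long target)
{
    unsigned long long size, copied = 0;
    for (size = 64; size < target; size += size / 2)  /* grow by 50% */
        copied += size;
    return copied;
}
```

Growing to 1 MiB in constant 64-byte steps copies on the order of gigabytes in total, while 1.5x growth copies only a few megabytes: the quadratic vs. linear difference described above.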
Additionally, even if you grow in large increments (say, scaling the new size to 150% of the old), there's a memory-usage spike from realloc'ing a large buffer: during the copy of the existing contents, you use twice the amount of memory. A sequence of:
addr = malloc(N);
addr = realloc(addr, N + inc);
therefore fails (much) sooner than:
addr[0] = malloc(N);
addr[1] = malloc(inc);
There are data structures out there which do not require realloc() to grow; linked lists, skip lists, and interval trees can all append data without having to copy existing data. C++ vector<> grows in this fashion: it starts with an array for the initial size and keeps on appending if you grow it beyond that, but it won't realloc() (i.e. copy). Consider implementing (or using a preexisting implementation of) something like that.
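One such structure is a linked list of fixed-size chunks. A rough sketch (the names are illustrative) that appends without ever moving existing elements:

```c
#include <stdlib.h>

#define CHUNK_CAP 256  /* elements per chunk; tune to taste */

/* Appending allocates a new fixed-size block and links it in; existing
 * elements are never moved, so there is no realloc and no O(N) copy.
 * The trade-off is losing contiguous storage (no pointer arithmetic
 * across the whole sequence). */
struct chunk {
    int           items[CHUNK_CAP];
    size_t        used;
    struct chunk *next;
};

struct chunk_list {
    struct chunk *head, *tail;
    size_t        len;
};

static int cl_append(struct chunk_list *cl, int value)
{
    if (!cl->tail || cl->tail->used == CHUNK_CAP) {
        struct chunk *c = calloc(1, sizeof *c);
        if (!c)
            return -1;
        if (cl->tail)
            cl->tail->next = c;
        else
            cl->head = c;
        cl->tail = c;
    }
    cl->tail->items[cl->tail->used++] = value;
    cl->len++;
    return 0;
}
```

Each append is O(1), at the cost of an extra pointer chase when iterating.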
One of the worst abuses of realloc I've seen is resizing a buffer whose contents you don't intend to use, rather than just freeing it and allocating a new one... – Gloucestershire
Or the realloc(buf, size++) magic ... there's an endless supply of bad ideas. – Mcentire
Why would it be O(N^2) for realloc? Two separate operations that are each O(N) are still considered just O(N). To get O(N^2), you would have to perform, for each item n in N, another O(N) operation on the item. – Juvenile
Even if you write the per-call cost as (i + k)*O(N), with i the share of malloc() and k that of memcpy(), you still end up with k >> i for large memory blocks - a cost you may not want to bear. My statement re C++ vector<> is also no longer correct; the behaviour was allowed pre-C++11, but C++11 requires contiguous memory for the vector contents and therefore cannot avoid the copy on resize anymore. – Mcentire
You should realloc to sizes that are powers of 2. This is the policy used by the STL, and it is good because of the way memory is managed. realloc doesn't fail except when you run out of memory (in which case it returns NULL), but it will copy your existing (old) data to the new location, and that can be a performance issue.
It's true that you should realloc with exponentially-increasing sizes to avoid O(n^2) loop performance, but the base can be any value greater than 1, not necessarily 2. Lots of people like 1.5 (growing the buffer 50% each time you run out of space). – Gloucestershire
In C:
Used properly, there's nothing wrong with realloc. That said, it's easy to use it incorrectly. See Writing Solid Code for an in-depth discussion of all the ways to mess up calling realloc and for the additional complications it can cause while debugging.
If you find yourself reallocating the same buffer again and again with only a small incremental size bump, be aware that it's usually much more efficient to allocate more space than you need, and then keep track of the actual space used. If you exceed the allocated space, allocate a new buffer at a larger size, copy the contents, and free the old buffer.
In C++:
You probably should avoid realloc (as well as malloc and free). Whenever possible, use a container class from the standard library (e.g., std::vector). They are well-tested and well-optimized and relieve you of the burden of a lot of the housekeeping details of managing the memory correctly (like dealing with exceptions).
C++ doesn't have the concept of reallocating an existing buffer. Instead, a new buffer is allocated at the new size, the contents are copied, and the old buffer is deleted. This is what realloc does when it cannot satisfy the new size at the existing location, which makes it seem like C++'s approach is less efficient. But it's rare that realloc can actually take advantage of an in-place reallocation. And the standard C++ containers are quite smart about allocating in a way that minimizes fragmentation and about amortizing the cost across many updates, so it's generally not worth the effort to pursue realloc if your goal is to increase performance.
I thought I would add some empirical data to this discussion.
A simple test program:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *buf = NULL, *new;
    size_t len;
    int n = 0, cpy = 0;

    for (len = 64; len < 0x100000; len += 64, n++) {
        new = realloc(buf, len);
        if (!new) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        if (new != buf) {
            cpy++;
            printf("new buffer at %#zx\n", len);
        }
        buf = new;
    }
    free(buf);
    printf("%d memcpys in %d iterations\n", cpy, n);
    return 0;
}
GLIBC on x86_64 yields this output:
new buffer at 0x40
new buffer at 0x80
new buffer at 0x20940
new buffer at 0x21000
new buffer at 0x22000
new buffer at 0x23000
new buffer at 0x24000
new buffer at 0x25000
new buffer at 0x26000
new buffer at 0x4d000
new buffer at 0x9b000
11 memcpys in 16383 iterations
musl on x86_64:
new buffer at 0x40
new buffer at 0xfc0
new buffer at 0x1000
new buffer at 0x2000
new buffer at 0x3000
new buffer at 0x4000
new buffer at 0xa000
new buffer at 0xb000
new buffer at 0xc000
new buffer at 0x21000
new buffer at 0x22000
new buffer at 0x23000
new buffer at 0x66000
new buffer at 0x67000
new buffer at 0xcf000
15 memcpys in 16383 iterations
So it looks like you can usually rely on libc to handle resizes that do not cross page boundaries without having to copy the buffer.
The way I see it, unless you can find a way to use a data structure that avoids the copies altogether, skip the track-capacity-and-do-power-of-2-resizes approach in your application and let your libc do the heavy lifting for you.
If you're realloc()-ing the same buffer in the loop, I see no problems as long as you have enough memory to honor the additional memory requests :)
Usually realloc() will extend/shrink the existing allocated space you're working with and give you back the same pointer; if it fails to do so in place, then a copy and a free are involved, so in that case the realloc() gets to be costly, and you also get a new pointer :)