Why shouldn't we use dynamically allocated memory of different sizes in embedded systems?

I have heard that in embedded systems we should use preallocated, fixed-size memory chunks (like a buddy memory system?). Could somebody give me a detailed explanation why? Thanks.

Debauched answered 26/1, 2014 at 22:58 Comment(0)

In embedded systems you have very limited memory. Therefore, if you occasionally leak even a single byte of memory (because you allocate it but never free it), this will eat up the system memory pretty quickly (1 GB of RAM with a leak rate of 1 byte/hour can hold out for a long time; with 4 kB of RAM, not nearly as long).
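
For illustration, here is a minimal sketch of such a slow leak (the handle_event() function and the one-byte buffer are hypothetical, just to make the arithmetic concrete):

    #include <stdlib.h>

    /* Hypothetical event handler that leaks one byte per call.
     * With 1 GB of RAM this takes ages to matter; with 4 kB of RAM,
     * roughly 4096 events are enough to exhaust the heap. */
    void handle_event(void)
    {
        char *buf = malloc(1);  /* allocated... */
        if (buf == NULL)
            return;             /* out of memory: happens quickly on a small target */
        *buf = 0;
        /* ...but never freed: one byte of heap lost per event */
    }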

Essentially, avoiding dynamic memory is about avoiding the effects of bugs in your program. Since static memory allocation is fully deterministic (while dynamic memory allocation is not), using only static allocation lets you contain such bugs. One important factor is that embedded systems are often used in safety-critical applications, where a few hours of downtime could cost millions or an accident could happen.

Furthermore, depending on the dynamic memory allocator, an allocation might also take an indeterminate amount of time, which can lead to more bugs, especially in systems relying on tight timing (thanks to Clifford for mentioning this). This type of bug is often hard to test and to reproduce because it relies on a very specific execution path.
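
You can make this jitter visible even on a PC with a crude sketch like the one below (clock() is coarse and a real-time target would read a hardware cycle counter instead, but the point stands: malloc()'s cost depends on the allocator's internal state, not just on the request size):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        enum { N = 10000 };
        static void *ptrs[N];
        for (int i = 0; i < N; i++) {
            clock_t t0 = clock();
            ptrs[i] = malloc((size_t)(rand() % 512 + 1));  /* random sizes */
            clock_t t1 = clock();
            if (i % 1000 == 0)
                printf("alloc %5d took %ld ticks\n", i, (long)(t1 - t0));
        }
        for (int i = 0; i < N; i++)
            free(ptrs[i]);
        return 0;
    }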

Additionally, embedded systems often don't have MMUs, so there is nothing like memory protection. If you run out of memory and your code to handle that condition doesn't work, you could end up executing arbitrary memory as instructions (bad things could happen! This case, however, is only indirectly related to dynamic memory allocation).

As Hao Shen mentioned, fragmentation is also a danger. Whether it occurs depends on your exact use case, but in embedded systems it is quite easy to lose 50% of your RAM to fragmentation. You can only avoid fragmentation if you allocate chunks that always have the exact same size.

Performance also plays a role (this, too, depends on the use case - thanks Hao Shen). Statically allocated memory is laid out by the compiler, whereas malloc() and friends have to run on the device and therefore consume CPU time (and power).

Many embedded OSs (e.g. ChibiOS) support some kind of dynamic memory allocator. But using it only increases the chance of unexpected issues occurring.

Note that these arguments are often sidestepped by using several smaller, statically allocated memory pools. This is not a complete solution, as one can still run out of memory in those pools, but then only a small part of the system is affected.
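
A minimal sketch of that idea (all names and sizes here are illustrative, not from any particular RTOS): each subsystem draws from its own static pool, so exhausting one pool cannot starve the others:

    #include <stddef.h>

    /* Two independent, statically allocated pools. If the logging pool
     * runs dry, the motor-control pool is unaffected. (A matching
     * release function is omitted for brevity.) */
    static unsigned char log_pool[16][64];    /* 16 chunks of 64 bytes  */
    static unsigned char motor_pool[8][128];  /*  8 chunks of 128 bytes */
    static unsigned char log_used[16];
    static unsigned char motor_used[8];

    static void *pool_take(unsigned char *used, unsigned char *mem,
                           size_t nchunks, size_t chunksz)
    {
        for (size_t i = 0; i < nchunks; i++) {
            if (!used[i]) {
                used[i] = 1;
                return mem + i * chunksz;
            }
        }
        return NULL;  /* this pool is exhausted; the others still work */
    }

    void *log_alloc(void)   { return pool_take(log_used, &log_pool[0][0], 16, 64); }
    void *motor_alloc(void) { return pool_take(motor_used, &motor_pool[0][0], 8, 128); }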

As pointed out by Stephano Sanfilippo, some systems don't even have enough resources to support dynamic memory allocation.

Note: Most coding standards, including the JPL coding standard and DO-178B (for critical avionics code - thanks Stephano Sanfilippo), forbid the use of malloc.

I also assume the MISRA C standard forbids malloc() because of this forum post -- however, I don't have access to the standard itself.

Divest answered 26/1, 2014 at 23:2 Comment(4)
Just to give a practical example, the DO-178B standard forbids the usage of malloc in safety-critical embedded avionics code.Usm
Hi Uli, thanks for your information. I believe fragmentation will also waste valuable memory in embedded systems. But do you think speed is also a concern? Maybe using smaller statically allocated memory is faster?Debauched
@HaoShen Yes, I agree! Whether fragmentation occurs depends on your use case, but the OP specifically asked about memory of different sizes. I will edit that into my answer!Mensch
Note that the lack of a (full) MMU, in addition to perhaps meaning no memory protection, can also make fragmentation a greater concern, as you can't map some random collection of free(d) physical pages into a logically contiguous set to satisfy a new large allocation.Biamonte

The main reasons not to use dynamic heap memory allocation here are basically:

a) determinism and, closely related, b) memory fragmentation.

Memory leaks are usually not a problem in those small embedded applications, because they will be detected very early in development/testing.

Memory fragmentation, however, can build up non-deterministically, causing (in the best case) out-of-memory errors at random times and points in the application in the field.

It may also be non-trivial to predict the actual maximum memory usage of the application during development with dynamic allocation, whereas the amount of statically allocated memory is known at compile time, so it is absolutely trivial to check whether that memory can be provided by the hardware.
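
To make that concrete, a small sketch (the buffer names and the RAM figure are made up): because everything is statically allocated, the compiler itself can check the budget:

    #include <assert.h>
    #include <stdint.h>

    #define RAM_BYTES 4096u  /* illustrative target RAM size */

    /* Worst-case usage of these buffers is known at compile time.
     * (The stack and other globals also consume RAM; a real budget
     * check would account for them too, e.g. in the linker script.) */
    static uint8_t rx_buffer[512];
    static uint8_t tx_buffer[512];
    static uint8_t work_area[2048];

    /* Compilation fails if the buffers outgrow the budget (C11). */
    static_assert(sizeof rx_buffer + sizeof tx_buffer + sizeof work_area
                  <= RAM_BYTES, "static buffers exceed RAM budget");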

Ritornello answered 26/1, 2014 at 23:24 Comment(4)
+1 for determinism, but the explanation is missing an important consideration: in a real-time system, non-deterministic behaviour concerns operations that take a variable and unbounded length of time - regardless of whether they fail or succeed. The typical "first-fit" memory allocator cannot find a block in a fixed length of time, so deadlines may be missed in real-time tasks. It is not so much that dynamic memory should not be used in embedded systems, but rather that it should not be used in real-time processing.Exaggeration
@Exaggeration Thanks for the suggestion regarding non-deterministic timing. I edited that into my answer. Even for non-RT systems I would tend not to use dynamic memory allocation (if possible), because of the missing determinism and the risk of leaking memory.Mensch
@Exaggeration It's not just a question of deterministic timing. When memory gets fragmented, and in the absence of an MMU, a specific malloc call may succeed or fail depending purely on the history of events the application encountered before, even though in sum there is enough memory available. This makes it hard to predict whether memory can be allocated when needed in the live system.Ritornello
@HannoBinder: That point was already made in your answer; I clearly did not say it was the only issue. However, it is the primary issue in real-time systems, since even with sufficient memory and correct allocation/deallocation, a system can fail simply by failing to meet timing constraints.Exaggeration

Allocating memory from a pool of fixed-size chunks has a couple of advantages over dynamic memory allocation: it prevents heap fragmentation, and it is more deterministic.

With dynamic memory allocation, variably sized memory chunks are allocated from a fixed-size heap. The allocations aren't necessarily freed in the same order they were allocated, so over time the free portions of the heap end up scattered between the allocated portions. As this fragmentation progresses, it becomes harder to satisfy requests for larger allocations: even if the heap has enough total free memory, a request for a large block will fail if no single contiguous free section is big enough. The possibility of malloc() failing due to heap fragmentation is undesirable in embedded systems.
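
Here is a small sketch of that failure mode (whether it actually fails on a desktop allocator depends on that allocator's internals; on a small flat heap without an MMU the effect is very real):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        enum { N = 100 };
        void *p[N];
        for (int i = 0; i < N; i++)
            p[i] = malloc(128);  /* fill the heap with 128-byte blocks */
        for (int i = 0; i < N; i += 2) {
            free(p[i]);          /* free every other block: 50 small holes */
            p[i] = NULL;
        }
        /* ~6400 bytes are now free, but split into 128-byte holes that are
         * separated by live blocks; a single 4096-byte request needs one
         * contiguous run and may therefore fail despite the free total. */
        void *big = malloc(4096);
        printf("large allocation %s\n", big ? "succeeded" : "failed");
        free(big);
        for (int i = 1; i < N; i += 2)
            free(p[i]);
        return 0;
    }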

One way to combat fragmentation is to rejoin the smaller memory allocations into larger contiguous sections as they are freed. This can be done in various ways, but they all take time and can make the system less deterministic. For example, if the memory manager scans the heap when an allocation is freed, then the time free() takes to complete varies depending on what kind of memory is adjacent to the block being freed. That is non-deterministic and undesirable in many embedded systems.

Allocating from a pool of fixed-size chunks does not cause fragmentation. As long as some chunks are free, an allocation won't fail, because every chunk is the right size. Allocating and freeing from a pool of fixed-size chunks is also simpler, so the allocate and free functions can be written to be deterministic, as in the sketch below.
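
For example, a minimal fixed-size chunk pool might look like this (illustrative, not any specific RTOS API; note it is not interrupt-safe as written - a real system would guard the free list with a lock or by disabling interrupts):

    #include <stddef.h>

    #define CHUNK_SIZE  64
    #define CHUNK_COUNT 32

    typedef union chunk {
        union chunk *next;              /* valid only while the chunk is free */
        unsigned char data[CHUNK_SIZE];
    } chunk_t;

    static chunk_t pool[CHUNK_COUNT];
    static chunk_t *free_list;

    void pool_init(void)
    {
        for (size_t i = 0; i < CHUNK_COUNT - 1; i++)
            pool[i].next = &pool[i + 1]; /* thread every chunk onto the list */
        pool[CHUNK_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    void *pool_alloc(void)  /* O(1): pop the head of the free list */
    {
        chunk_t *c = free_list;
        if (c != NULL)
            free_list = c->next;
        return c;
    }

    void pool_free(void *p) /* O(1): push the chunk back onto the list */
    {
        chunk_t *c = p;
        c->next = free_list;
        free_list = c;
    }

Both operations take a constant number of steps, which is what makes the timing deterministic, and since every chunk has the same size the pool cannot fragment.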

Frere answered 26/1, 2014 at 23:53 Comment(2)
Thanks for your reply. You say "Allocating from a pool of fixed sized chunks does not cause fragmentation". Though I know it's true, I didn't quite get it. If I understand correctly, as time goes on there will still be some scattered free fixed-size chunks, right? Big memory requests still cannot use them, right?Debauched
@HaoShen, when you use a pool of fixed-size chunks, you have to design your application to allocate chunks of only that particular size. Your application should never request a larger (or smaller) chunk. So if any chunks are available, they are always the right size. This prevents fragmentation when done properly.Frere
