In C, on a small embedded system, is there any reason not to do the following?
const char * filter_something(const char * original, const int max_length)
{
static char buffer[BUFFER_SIZE];
// Checking inputs for safety omitted
// Copy input to buffer here with appropriate filtering, etc.
return buffer;
}
This is essentially a utility function. The source is flash memory, which may be corrupted, so we do a kind of "safe copy" to make sure we end up with a null-terminated string. I chose to use a static buffer and make it available read-only to the caller.
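For concreteness, here is a minimal sketch of what the omitted body could look like, assuming "filtering" simply means copying at most max_length bytes and forcing termination; BUFFER_SIZE's value and the filtering rule itself are assumptions, not the project's actual code:

```c
#include <stddef.h>

#define BUFFER_SIZE 64  /* assumed size; the real value is project-specific */

const char * filter_something(const char * original, const int max_length)
{
    static char buffer[BUFFER_SIZE];
    size_t limit = (max_length < 0) ? 0 : (size_t)max_length;

    if (limit >= BUFFER_SIZE)
        limit = BUFFER_SIZE - 1;   /* never overrun the static buffer */

    size_t i;
    for (i = 0; i < limit && original[i] != '\0'; ++i)
        buffer[i] = original[i];   /* real code would filter each byte here */

    buffer[i] = '\0';              /* guarantee termination even if the
                                      flash source lost its terminator */
    return buffer;
}
```

The key property is that the function terminates the copy itself rather than trusting the source, so the returned pointer always refers to a valid C string no matter what the flash contains.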
A colleague is telling me that I am somehow not respecting the scope of the buffer by doing this; to me it makes perfect sense for the use case we have.
Is there a reason not to do this?
Many thanks to all who responded. You have generally confirmed my ideas on this, which I am grateful for. I was looking for major reasons not to do this, I don't think that there are any. To clarify a few points:
reentrancy/thread safety is not a concern. It is a small (bare-metal) embedded system with a single run loop. This code will not be called from ISRs, ever.
in this system we are not short on memory, but we do want very predictable behavior. For this reason I prefer declaring an object like this statically, even though it might be a little "wasteful". We have already had issues with large objects declared carelessly on the stack, which caused intermittent crashes (now fixed, but it took a while to diagnose). So in general I am preferring static allocation, simply for predictability, reliability, and fewer potential issues downstream.
So basically it's a case of taking a certain approach for a specific system design.
Calls to filter_something from different threads/processes will all access the one single buffer. It might be better to pass the buffer as an argument to the function, and let the caller handle the memory and buffering. – Splenetic
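The alternative the comment suggests could look like the sketch below; the name filter_into and its signature are illustrative assumptions, not an agreed API. The caller owns the buffer, so each call site decides whether it is static, stack, or pool memory:

```c
#include <stddef.h>

/* Caller-owned-buffer variant: writes a filtered, always-terminated copy of
 * original into dest, and returns the number of bytes written (excluding the
 * terminator). Returns 0 if there is no room to write anything. */
size_t filter_into(char *dest, size_t dest_size, const char *original)
{
    if (dest == NULL || dest_size == 0 || original == NULL)
        return 0;

    size_t i;
    for (i = 0; i + 1 < dest_size && original[i] != '\0'; ++i)
        dest[i] = original[i];     /* filtering would happen here */

    dest[i] = '\0';                /* caller always gets a valid C string */
    return i;
}
```

This trades the convenience of the one-liner call for reentrancy and explicit lifetime control, which matters if the function is ever shared between contexts; for a single run loop with no ISR callers, the static-buffer version in the question avoids the bookkeeping entirely.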