What would you consider "worst practices" to follow when developing an embedded system?
Some of my ideas of what not to do are:
I'm sure there are plenty of good ideas out there on what not to do, let's hear them!
I've got a ton more but that should get us started....
Somebody stop me before I hurt myself.
BTW, I realize not all of these are strictly specific to embedded development, but I believe each of them is at least as important in the embedded world as in the real world.
When making a schedule, go ahead & assume everything's going to work the first time.
Approach board bring-up without an oscilloscope and/or logic analyzer. Especially the scope; that's never useful.
Don't consider the power supply during design. Issues like heat, efficiency, the effect of ripple on ADC readings and system behavior, EMF radiation, start-up time, etc. aren't important.
Whatever you do, don't use a reset controller (the 5-cent IC type); just use an RC circuit (hopefully one with lots of high-frequency AC noise coupled into it).
EMBRACE THE BIG BANG!!! Don't develop little pieces incrementally & integrate often, silly fool!!! Just code away for months, alongside co-workers, and then slap it all together the night before the big tradeshow demo!
Don't instrument code with debugging / trace statements. Visibility is bad.
Do lots of stuff in your ISRs. Bubble sorts, database queries, etc... Hey, chances are no one's gonna interrupt you, you have the floor, enjoy it buddy!!!
Ignore board layout in a design. Let the autorouter go to town on those matched impedance traces and that high-current, high-frequency power supply. Hey, you have more important things to worry about, partner!!!
Use brand-new, beta, unreleased, early-adopter silicon, especially if it's safety-critical (aviation, medical) or high-volume (it's fun to recall 1 million units). Why go to Vegas when there's new silicon sampling on that 4-core, 300 MHz, 7-stage-pipeline chip?
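On the "do lots of stuff in your ISRs" point, the usual fix is to do the opposite: grab the data, set a flag, and get out, deferring the heavy work to the main loop. A minimal sketch (all names and the sample value are hypothetical, not tied to any particular MCU):

```c
/* Sketch: defer work out of the ISR instead of bubble-sorting inside it. */
#include <stdbool.h>
#include <stdint.h>

static volatile bool     g_sample_ready  = false;
static volatile uint16_t g_latest_sample = 0;

/* ISR body: capture the data, raise a flag, return immediately. */
void adc_isr(void)            /* would be registered as the ADC vector */
{
    g_latest_sample = 123;    /* on real hardware: read the ADC data register */
    g_sample_ready  = true;
}

/* Main loop: the slow work (filtering, logging) happens at task level. */
bool poll_and_process(uint16_t *out)
{
    if (!g_sample_ready)
        return false;
    g_sample_ready = false;
    *out = g_latest_sample;   /* heavy processing goes here, not in the ISR */
    return true;
}
```

On a real target the flag would need whatever atomicity the architecture requires; `volatile` alone is the bare minimum shown here.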
OK round 2.... just a few more:
Don't use a watchdog timer (esp. the built-in one!)
Use floating point types & routines when scaled integer math would suffice
Use an RTOS when it's not warranted
Don't use an RTOS when it would really make sense
Never look at the generated assembly code to understand what's going on under the hood
Write the firmware so that it can't be updated in the field
Don't document any assumptions you're making
If you see something strange while testing / debugging, just ignore it until it happens again; it probably wasn't anything important like a brownout, a missed interrupt, a sign of stack corruption, or some other fleeting & intermittent problem
When sizing stacks, the best philosophy is to "start small and keep increasing until the program stops crashing, then we're probably OK"
Don't take advantage of runtime profiling tools like Micrium's uC/Probe (I'm sure there are others)
Don't include Power-On Self Tests of the hardware before running the main app - hey, the boot code is running, what could possibly not be working?
Definitely don't include a RAM test in the POST (above) that you're not going to implement
If the target processor has an MMU, for all that is holy, don't use that scary MMU!!! Especially don't let it protect you from writes to code space, execution from data space, etc....
If you've been testing, debugging & integrating with a certain set of compiler options (e.g. no/low optimization), BE SURE TO TURN ON FULL OPTIMIZATION before your final release build!!! But only turn on optimization if you're not going to test. I mean, you've already tested for months - what could go wrong?!??!
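The floating-point point above is easy to make concrete: on an FPU-less micro, a conversion that "needs" floats usually doesn't. A sketch, assuming a hypothetical 12-bit ADC with a 3300 mV reference:

```c
/* Sketch: scaled-integer math instead of floating point.
 * ADC width (12-bit) and reference (3300 mV) are example values. */
#include <stdint.h>

/* Floating-point version: drags in soft-float routines on an FPU-less MCU. */
static inline float adc_to_volts_f(uint16_t counts)
{
    return (counts * 3.3f) / 4095.0f;
}

/* Integer version: result in millivolts, one multiply and one divide. */
static inline uint32_t adc_to_mv(uint16_t counts)
{
    return ((uint32_t)counts * 3300u) / 4095u;
}
```

The `uint32_t` cast before the multiply matters: `4095 * 3300` overflows 16-bit arithmetic, which is exactly the kind of bug that full optimization (see above) loves to expose at the worst time.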
Dynamic memory allocation after initialization. The memory pool should remain static after the system is up and running.
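One common way to keep the memory pool static after startup is a fixed-size block pool carved out at init, so there is no heap (and no fragmentation) at runtime. A minimal sketch; the block size and count are arbitrary example values:

```c
/* Sketch: fixed-size block pool; all memory reserved statically at init. */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  32
#define BLOCK_COUNT 8

static uint8_t pool[BLOCK_COUNT][BLOCK_SIZE];
static void   *free_list[BLOCK_COUNT];   /* LIFO stack of free blocks */
static size_t  free_top;

void pool_init(void)
{
    free_top = 0;
    for (size_t i = 0; i < BLOCK_COUNT; i++)
        free_list[free_top++] = pool[i];
}

void *pool_alloc(void)                   /* O(1), never fragments */
{
    return free_top ? free_list[--free_top] : NULL;
}

void pool_free(void *p)
{
    if (p && free_top < BLOCK_COUNT)
        free_list[free_top++] = p;
}
```

Allocation and free are constant-time and deterministic, which is the property malloc can't promise you mid-flight.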
Trying to develop without access to the actual hardware you're developing for.
Use multiple processors in your solution and make sure they have opposite endianness. Then make sure that the interface between them is one of them having direct access to the other's memory.
Yes, I've programmed that architecture before.
Assume endianness will be the same forever. (The same goes for register sizes and anything else in the hardware specification.)
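The standard defense against the mixed-endianness trap is to never share raw memory layouts between the two processors: serialize to an explicit byte order instead of `memcpy`'ing structs across the interface. A sketch (little-endian wire order chosen arbitrarily for the example):

```c
/* Sketch: explicit byte-order serialization, independent of CPU endianness. */
#include <stdint.h>

void put_u32_le(uint8_t *buf, uint32_t v)   /* little-endian on the wire */
{
    buf[0] = (uint8_t)(v);
    buf[1] = (uint8_t)(v >> 8);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 24);
}

uint32_t get_u32_le(const uint8_t *buf)
{
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}
```

Because the shifts define the byte order, this produces the same wire format on a big-endian and a little-endian CPU, so either side of the interface can change without breaking the other.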
Without defining 'embedded programming' a bit more, then it's impossible to say what's good or bad practice.
Many of the techniques you might use to program an 8-bit micro in a dodgy, non-standard dialect of C would be completely inappropriate on a CE or XPe platform, for example.
Abstraction is an (over-)expensive luxury in many cases, so 'avoiding it' might be good rather than bad.
Here are a few:
Don't design an easily explainable architecture that your developers, managers, and customers can all understand.
An embedded system is almost always a cost sensitive platform. Don't plan on the HW getting slower (cheaper) and don't plan for new features in the critical data path.
Most embedded systems are "headless" (no keyboard or mouse or any other HID). Don't plan in your schedule to write debugging tools. And don't resource at least one developer to maintain them.
Be sure to underestimate how long it will take to get the prompt. That is, how long it takes to get the core CPU to a point where it can talk to you and you to it.
Always assume HW subsystems work out-of-the-box, like memory, clocks and power.
Don't:
Leave unused interrupt vectors which point nowhere (after all, they're never going to be triggered, so where's the harm in that...), rather than having them jump to a default unused interrupt handler which does something useful.
Be unfamiliar with the specifics of the processor you're using, especially if you're writing any low-level drivers.
Pick the version of a family of processors with the smallest amount of flash, on the grounds that you can always "upgrade later", unless costs make this unavoidable.
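The unused-vector point deserves a sketch: every vector should land somewhere deliberate, even the ones that "can never fire". Real vector tables are chip-specific (and often live in a linker-placed section), so this models one as a plain array of function pointers with a hypothetical dispatch routine:

```c
/* Sketch: unused vectors all point at a default handler that records the
 * event, rather than at garbage. Table layout is illustrative only. */
#include <stdint.h>

static volatile uint32_t unexpected_irq_count = 0;

static void default_handler(void)
{
    unexpected_irq_count++;   /* real code might log the vector and reset */
}

typedef void (*isr_t)(void);

static isr_t vector_table[8] = {
    default_handler, default_handler, default_handler, default_handler,
    default_handler, default_handler, default_handler, default_handler,
};

void dispatch_irq(unsigned n)     /* stand-in for the hardware's dispatch */
{
    vector_table[n & 7]();
}
```

When a "can't happen" interrupt does happen (noise, a mis-set enable bit, an errata), you get a counter and a breakpoint location instead of a jump into the weeds.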
An important thing in embedded systems is to evaluate the technology, both software (compiler, libraries, OS) and hardware (chipsets), independently from your application. Skipping test beds for this is dangerous; either buy evaluation kits or build your own.
Write your FW module to be totally generic, accepting every possible parameter as a variable, even though the layer above you will always call it with the same parameters.
Use memcpy everywhere in the code even though you have a DMA engine in the system (why bother the HW?).
Design a complex layered FW architecture and then have a module access directly to global variables owned by higher level modules.
Choose an RTOS but don't bother to test its actual performance (can't we trust the numbers given by the vendor?)
Printf.
If your tracing facility requires a context switch and/or interrupts, you'll never be able to debug anything even vaguely timing-related.
Write to a memory buffer (bonus points for memcpy'ing enums instead of s(n)printf), and read it at another time.
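The memory-buffer approach above can be sketched as a small trace ring: record fixed-size entries in RAM from the hot path (cheap enough to call from an ISR) and decode them later offline. The record fields and depth here are arbitrary example choices:

```c
/* Sketch: in-RAM trace ring buffer as a printf replacement. */
#include <stdint.h>

#define TRACE_DEPTH 64                 /* power of two, so masking works */

typedef struct { uint16_t id; uint32_t arg; } trace_rec_t;

static trace_rec_t trace_buf[TRACE_DEPTH];
static volatile uint32_t trace_head;

void trace(uint16_t id, uint32_t arg)  /* no I/O, no context switch */
{
    uint32_t i = trace_head++ & (TRACE_DEPTH - 1);
    trace_buf[i].id  = id;
    trace_buf[i].arg = arg;
}
```

A debugger (or a post-crash dump) reads `trace_buf` and `trace_head` and reconstructs the last 64 events, without the formatting cost or timing distortion of printf.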
This is perhaps more of a hardware answer -- but for starting new projects from scratch, underestimating the resource requirement is a big problem, especially when working on small self-contained microcontrollers with no easy way to expand code/storage size.
That's not just for embedded systems, but spending all this time finding bugs (debugging) instead of avoiding bugs with things like code reviews is definitely one commonly applied worst practice.
Another one is letting one huge processor do all the work instead of breaking the problem into small problems e.g. with more little processors. Remember COCOMO?
It depends a lot on the type of controller you are programming for. Sometimes cost is the most important thing and you are trying to get by with as little as possible. That's the boat I'm usually in. Here are some worst practices I've used:
From a software perspective, not taking the time to learn the hardware.
A few extra don'ts:
Some of the worst practices from my experience of working in embedded systems for over 8 years and teaching embedded systems:
Wrong data types can also be disastrous.
Doing a lot of work in ISRs - ISRs should be as short as possible. Some people I have seen implement their entire logic in ISRs, which is very, very bad. So bad that it should be listed as a crime. Use flags instead.
Using integers as flags - this is an extension of point 1. You need only one bit; do not use 16 or 32 bits for it.
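Both points above can be sketched together: pack the event flags into the bits of a single byte and test-and-clear them at task level. The flag names are hypothetical examples:

```c
/* Sketch: one byte of bit flags instead of one integer per flag. */
#include <stdbool.h>
#include <stdint.h>

enum {
    FLAG_RX_READY   = 1u << 0,
    FLAG_TX_DONE    = 1u << 1,
    FLAG_TIMER_TICK = 1u << 2,
};

static volatile uint8_t g_flags;       /* set in ISRs, cleared at task level */

static inline void flag_set(uint8_t f) { g_flags |= f; }

static inline bool flag_take(uint8_t f)   /* test-and-clear one flag */
{
    if (!(g_flags & f))
        return false;
    g_flags &= (uint8_t)~f;
    return true;
}
```

On a real target the clear would want interrupts masked around the read-modify-write (or a bit-banding/atomic primitive); the sketch shows only the data layout.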
But the worst of all that I have seen is mulling over the algorithm again and again to find the most perfect approach. Stop!! Keep the best practices in mind and get the system working first.
There are lots more. You can read some of them here.