I've heard nothing but good things about RTOSs--they give the programmer greater control over the scheduler so as to e.g. avoid priority inversion, their timing is more consistent, better multitasking. But all standard desktop setups use OSs that aren't real-time. So there must be some tradeoffs to using an RTOS, what are they?
RTOSes typically trade throughput and features for predictability and tractability. The usual definition of "real-time" that folks apply is "deterministic", and you can't have determinism without paying for it.
In general-purpose OSes, we're motivated by "common-case" behavior -- we want really good average performance and a lot of flexibility. In an RTOS, we want a reliable ceiling on "worst-case" behavior, and we pay for it (often dearly) in throughput and common-case performance.
Yes, it's possible to create hybrids, like the real-time thread classes in Windows or even Linux. But somewhere you're typically paying a penalty, because ultimately there is only a finite set of resources available (CPUs, I/O bandwidth, whatever), and consumer OSes and RTOSes optimize around different criteria. Some of the real-time Linux approaches deal with this explicitly by partitioning the system: each partition gets its own assumptions and its own optimality criteria.
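To make the "hybrid" idea concrete, here's a minimal sketch of carving out a real-time-ish slice on a stock Linux box using the POSIX scheduling and memory-locking interfaces (sched_setscheduler, mlockall). The priority value and the thin error handling are just for illustration.

```c
/* Minimal sketch: put this process into Linux's fixed-priority
 * real-time class (SCHED_FIFO) and lock its memory, the usual first
 * steps when carving a "real-time partition" out of a general-purpose
 * kernel. Needs root or CAP_SYS_NICE. */
#define _POSIX_C_SOURCE 200112L
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 80;   /* 1..99; higher preempts lower (value is arbitrary) */

    /* Fixed-priority, run-until-blocked scheduling for this process. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* Keep current and future pages resident so a page fault
     * can't blow a deadline. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... time-critical work goes here ... */
    return 0;
}
```

Note that this only bounds what the kernel scheduler does; everything else the paragraph above mentions (other partitions, I/O, drivers) still competes for the same hardware.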
What features are traded? I can't offer a precise list -- it's more that general-purpose OSes tend to have a zillion drivers and can keep up with the churn of new devices, while RTOSes tend to focus on a much smaller set whose timing is either well understood or explicitly kept from interfering with other activities. You probably won't have the same selection of drivers on a typical RTOS because it's usually not practical to implement and qualify them all.
Throughput: remember that "real-time" != "real-fast". When a system is real-time, it means that an activity's time of completion is part of its correctness. In some cases this means processing many activities very quickly (high throughput); in others it may mean processing at a relatively slow but extremely predictable period. The structures in an RTOS may have high throughput, but typically can't match the throughput of an equivalent general-purpose OS, because the techniques used to achieve that throughput (caching, fancy interactivity-driven scheduling, "fair" queuing and lock arbitration) militate against the predictability of any single task's timeliness.
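As a sketch of the "relatively slow but extremely predictable period" case, here is a periodic loop that wakes at absolute 10 ms deadlines so that scheduling error never accumulates. It assumes a POSIX system with clock_nanosleep(); the period, loop count, and placeholder work are arbitrary.

```c
/* Periodic task sketch: sleep to absolute deadlines rather than for
 * relative intervals, so jitter in one iteration doesn't shift all
 * the following ones. */
#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms period (arbitrary) */

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 1000; i++) {
        /* Advance the absolute deadline by exactly one period. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* do_control_step();  -- placeholder for the periodic work */
    }
    return 0;
}
```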
I'm not sure if this is a big reason, but I believe the existence of non-real-time features like System Management Mode in the processor doesn't really allow for a real-time OS on commodity PCs, because SMM can take arbitrarily long to respond to an SMI, and the best the OS can do is time out and fail once it regains control -- if it regains control in a timely manner at all. So you'd need the BIOS/firmware to be real-time as well, which is not quite as easy as having just one company like Microsoft make its OS real-time (which isn't easy by itself anyway).
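You can at least observe this kind of hidden stall from user space. Here is a rough sketch, similar in spirit to Linux's hwlat tracer: spin reading the monotonic clock and record the worst gap between two consecutive reads; on an otherwise idle core, large outliers hint at interruptions (such as long SMI handlers) that the OS never sees. The iteration count is arbitrary and it assumes POSIX clock_gettime().

```c
/* Crude hidden-stall detector: the loop body takes tens of
 * nanoseconds, so a gap of hundreds of microseconds between two
 * consecutive clock reads means something invisible to the OS
 * (e.g. an SMI) stole the CPU. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

static long ns_diff(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000000L +
           (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
    struct timespec prev, now;
    long worst = 0;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (long i = 0; i < 50000000L; i++) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        long gap = ns_diff(&prev, &now);
        if (gap > worst)
            worst = gap;
        prev = now;
    }
    printf("worst gap between clock reads: %ld ns\n", worst);
    return 0;
}
```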
And there probably wouldn't be too much gain for the average user, anyway.
Features that help with desktop application development just aren't important in applications that require a real-time OS. So RTOS vendors tend to focus their engineering time on things that are important to their customers, like fast booting and error recovery.
Until there's a market for overlap between rapid application development and real-time, you're unlikely to see an OS vendor split its resources between both. Rapid development and safety-critical just don't go together.
With the BlackBerry PlayBook moving to QNX, we might see for the first time a friendly development environment (libraries as well as tools) for an RTOS.
It's basically the same reason as "why doesn't everyone write webapps in C?" It's a lot faster, but it's much more difficult. An RTOS on a large system becomes unwieldy because a lot of the control is handed to the application programmer.
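For a flavour of how much of that control lands on the application programmer, here's a rough FreeRTOS-style sketch; the task names, stack depths, priorities, and placeholder work are all invented for illustration. On a desktop OS the kernel makes most of these decisions for you; here, picking the numbers wrong means stack overflows, starvation, or missed deadlines.

```c
/* FreeRTOS-flavoured sketch: the application decides each task's
 * stack size and priority, and which task may preempt which. */
#include "FreeRTOS.h"
#include "task.h"

static void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* read_sensor();  -- placeholder for time-critical work */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

static void vLoggingTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* flush_log();  -- placeholder for background work */
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

int main(void)
{
    /* Stack depth (in words) and priority are the programmer's problem. */
    xTaskCreate(vSensorTask,  "sensor", 256, NULL, tskIDLE_PRIORITY + 3, NULL);
    xTaskCreate(vLoggingTask, "log",    512, NULL, tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler();   /* never returns if startup succeeds */
    for (;;) {}
}
```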