What makes a kernel/OS real-time?
I was reading this article, but my question is at a more generic level; I was thinking along the following lines:

  1. Can a kernel be called real-time just because it has a real-time scheduler? In other words, if I take a Linux kernel and change the default scheduler from O(1) or CFS to a real-time scheduler, will it become an RTOS?
  2. Does it require any support from the hardware? I have generally seen embedded devices running an RTOS (e.g. VxWorks, QNX); do these have any special provisions/hardware to support it? I know an RTOS's process running times are deterministic, but then one could use longjmp/setjmp to get output in a determined time.

I'd really appreciate some input/insight on this; if I am wrong about something, please correct me.
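To make question 1 concrete, this is roughly what I mean: even a stock Linux kernel lets a process request the POSIX fixed-priority "real-time" policy. A minimal sketch (untested; needs root or CAP_SYS_NICE, and I realize this alone may not make the kernel an RTOS):

    /* Ask for the fixed-priority SCHED_FIFO policy on plain Linux. */
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 50 };     /* 1..99 for SCHED_FIFO */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {   /* 0 = this process */
            perror("sched_setscheduler");
            return 1;
        }
        puts("now running under SCHED_FIFO");
        return 0;
    }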

Donte answered 7/3, 2014 at 4:8 Comment(3)
All "real-time" means is that interrupt latency (time during which interrupts are disabled) is guaranteed to be less than some specified number of microseconds. In other words, the kernel guarantees that it can respond to incoming external events up to some maximum frequency (1/maxlatency). It takes a lot of careful programming and testing of all interrupt-handling paths to make this guarantee. The actual details of how this is accomplished will depend on the kernel architecture.Doorsill
@Jim: So, does it require any support from the hardware?Donte
@JimGarrison: Can you please copy your comment into an answer?Donte
After doing some research, talking to people (Jamie Hanrahan, and Juha Aaltonen of the Device Driver Experts group on LinkedIn), and of course the input from @Jim Garrison, this is what I can conclude:

In Jamie Hanrahan's words:

What makes a kernel real time?
The sine qua non of a real-time OS:

  • The ability to guarantee a maximum latency between an external interrupt and the start of the interrupt handler.

    Note that the maximum latency need not be particularly short (e.g. microseconds), you could have a real time OS that guaranteed an absolute maximum latency of 137 milliseconds.

  • A real-time scheduler is one that offers completely predictable (to the developer) thread-scheduling behavior: "which thread runs next".

    This is generally separate from the question of a guaranteed maximum interrupt latency (since interrupt handlers are not necessarily scheduled like ordinary threads), but it is often necessary for implementing a real-time application. Schedulers in real-time OSes generally implement a large number of priority levels, and they almost always implement priority inheritance, to avoid priority-inversion situations (see the sketch after this list).
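To make the priority-inheritance point concrete, here is a minimal sketch assuming POSIX threads; PTHREAD_PRIO_INHERIT is the standard protocol for this (PTHREAD_PRIO_PROTECT, a priority ceiling, is the other common option):

    /* A mutex configured with priority inheritance: a low-priority thread
       holding it gets temporarily boosted while a high-priority thread
       blocks on it, bounding the priority-inversion window. */
    #include <pthread.h>

    pthread_mutex_t g_lock;

    int init_pi_mutex(void) {
        pthread_mutexattr_t attr;
        int rc = pthread_mutexattr_init(&attr);
        if (rc != 0)
            return rc;
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        if (rc == 0)
            rc = pthread_mutex_init(&g_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }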

So if a guaranteed interrupt latency and predictable thread scheduling are good things to have, why not make every OS real-time?

  • Because an OS suited for general purpose use (servers and/or desktops) needs to have characteristics that are generally at odds with real-time latency guarantees.

    For example, a real-time scheduler should have completely predictable behavior. That means, among other things, that whatever priorities have been assigned to the various tasks by the developer should be left alone by the OS. This might mean that some low-priority tasks end up being starved for long periods of time. But the RT OS has to shrug and say "that's what the dev wanted." Note that to get the correct behavior, the RT system developer has to worry a lot about things like task priorities and CPU affinities.

    A general-purpose OS is just the opposite. You want to be able to just throw apps and services on it, almost always things written by many different vendors (instead of being one tightly integrated system as in most R-T systems), and get good performance. Perhaps not the absolute best possible performance, but good.

    Note that "good performance" is not just measured in interrupt latency. In particular, you want CPU and other resource allocations that are often described as "fair", without the user or admin or even the app developers having to worry much if at all about things like thread priorities and CPU affinities and NUMA nodes. One job might be more important than another, but in a general-purpose OS, that doesn't mean that the second job should get no resources at all.

    So the general purpose OS will usually implement time-slicing among threads of equal priority, and it may adjust the priorities of threads according to their past behavior (e.g. a CPU hog might have its priority reduced; an I/O bound thread might have its priority increased, so it can keep the I/O devices working; a CPU-starved thread might have its priority boosted so it can get a little bit of CPU time now and then).
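As a contrast with the fixed priorities above, the general-purpose knobs are deliberately soft. A sketch, assuming Linux/POSIX (setpriority and nice values are the standard interface): a nice value only biases the fair scheduler's weighting; it never guarantees or forbids CPU time the way a fixed real-time priority does.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Ask the fair scheduler to treat this process as less important.
           Other tasks get proportionally more CPU; we are throttled, not
           starved, which is exactly the fairness a hard-RT design gives up. */
        if (setpriority(PRIO_PROCESS, 0, 10) == -1)
            perror("setpriority");
        printf("nice value is now %d\n", getpriority(PRIO_PROCESS, 0));
        return 0;
    }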

Can a kernel be called real-time just because it has a real-time scheduler?

  • No, an RT scheduler is a necessary component of an RT OS, but you also need predictable behavior in other parts of the OS.

Does it require any support from the hardware?

  • In general, the simpler the hardware the more predictable its behavior is. So PCI-E is less predictable than PCI, and PCI is less predictable than ISA, etc. There are specific I/O buses that were designed for (among other things) easy predictability of e.g. interrupt latency, but a lot of R-T requirements can be met these days with commodity hardware.
Donte answered 15/3, 2014 at 0:57 Comment(4)
Good resource (talk from kernel developer) - Kernel Recipes 2016 - Who needs a Real-Time Operating System (Not You!) - Steven RostedtDonte
Another one - Kernel Recipes 2016 - Understanding a Real-Time System (more than just a kernel) - Steven RostedtDonte
It should be noted that in the decade since this question and answer were posted, a lot of components developed in the PREEMPT_RT patch set (used to build real-time Linux kernels) have been accepted into the mainline kernel. There is more to go, but mainline Linux is getting closer to being an optionally real-time OS.Schmid
Slightly more recent version of the "Real-time: who needs it? (Not you)" talk; starts about a minute and a half in (part of a longer video on RT Linux): youtube.com/watch?v=IBUlUYJrVSQSchmid
The defining property of real-time is that processes have guaranteed maximum response times. That alone is often not sufficient for the application, and it is even less important than determinism. Both are especially hard to achieve with modern feature-rich OSes. Consider:

If I want to command some hardware or a machine at precise points in time, I need to be able to generate command signals at those specific moments, often with far sub-millisecond accuracy. Generally, if you compile, say, a C program that runs a loop that waits for "half a millisecond" and then does something, the wait is not exactly half a millisecond; it is a little longer, because common OSes handle this by setting the process aside at least until the requested time has passed, after which the scheduler might (at some point) pick it up again.

What is seriously problematic is not that the wait is not exactly half a millisecond, but that it cannot be known in advance how much longer it will be. This inaccuracy is neither constant nor deterministic.
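You can see this overshoot for yourself. A quick sketch, assuming a POSIX system (the numbers vary wildly with hardware, kernel, and load; that variability is exactly the point):

    /* Ask for 0.5 ms repeatedly and print how late each wakeup really is. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const struct timespec req = { 0, 500000 };  /* 0.5 ms */
        for (int i = 0; i < 10; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                    + (t1.tv_nsec - t0.tv_nsec);
            printf("asked 500000 ns, slept %ld ns (overshoot %ld ns)\n",
                   ns, ns - 500000L);
        }
        return 0;
    }

On a loaded desktop the overshoot jumps around by tens of microseconds or more; on a real-time OS it would be bounded by a documented figure.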

This has surprising consequences in physical automation. For example, it is impossible to command a stepper motor accurately with any typical OS without using dedicated hardware, driven through kernel interfaces, that is told what step timings you really want. Because of this, a single AVR microcontroller can command several motors accurately, but a Raspberry Pi (which absolutely stomps the AVR in terms of clock speed) cannot manage more than two with any typical OS.
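For completeness, the usual software-side mitigation (it bounds drift, not jitter, so it is still no hard guarantee) is to sleep until absolute deadlines instead of for relative durations. A sketch assuming POSIX clock_nanosleep; step_motor is a hypothetical GPIO helper, not a real API:

    /* Step against absolute deadlines so lateness does not accumulate
       from step to step the way relative sleeps drift. */
    #include <time.h>

    #define STEP_NS 500000L              /* 0.5 ms per motor step */

    void step_motor(void);               /* hypothetical: pulse the step pin */

    void run_steps(int n) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < n; i++) {
            next.tv_nsec += STEP_NS;     /* advance the absolute deadline */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            step_motor();                /* each pulse still jitters; an RTOS
                                            bounds that jitter */
        }
    }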

Thundering answered 10/1, 2018 at 13:22 Comment(0)
