How is preemption handled in single-core systems?

Given a single-core system that employs preemption for multitasking, how does the OS carry out thread interrupts when a user-mode application is executing? Since the processor is busy running the user code, when does it get a chance to ask the thread scheduler for a context switch? Is it handled similarly to page faults? And if so, how often does it happen, and doesn't it cause a performance hit?

Whittier answered 1/2, 2016 at 17:34 – Comments (4)
This is brand new to me, but I'd imagine it is handled on a timer interrupt interval. – Progestin
See also Wikipedia: "An interrupt is scheduled to allow the operating system kernel to switch between processes when their time slices expire." – V1
This question is similar (maybe even a duplicate?) to this SO question: #16601212 – Bernita
@MichaelPetch – your linked question is Linux-specific and so can go into greater detail. – Importunacy

Read up on hardware interrupts.

The hardware timer issues periodic interrupts. When one happens, the CPU switches into kernel mode and executes a routine, the interrupt handler. That's where context switching takes place. When the CPU returns from kernel to user mode, it may return control to a different thread than the one in which the interrupt occurred. The state of the preempted thread is saved, naturally.
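The timer-driven round-robin described above can be sketched as a toy simulation. All names here (`Thread`, `Scheduler`, `timer_interrupt`) are illustrative, not a real OS API; a real kernel saves registers and stack pointers, not a string.

```python
# Toy simulation of timer-driven preemption on a single core.

class Thread:
    def __init__(self, name):
        self.name = name
        self.saved_state = None   # stands in for registers, stack pointer, etc.

class Scheduler:
    def __init__(self, threads):
        self.ready = list(threads)        # simple round-robin ready queue
        self.current = self.ready.pop(0)  # thread currently "on the CPU"

    def timer_interrupt(self):
        """Models the periodic timer interrupt: save state, dispatch next thread."""
        prev = self.current
        prev.saved_state = f"context of {prev.name}"  # save preempted thread's state
        self.ready.append(prev)                       # back of the ready queue
        self.current = self.ready.pop(0)              # dispatch the next thread

sched = Scheduler([Thread("A"), Thread("B"), Thread("C")])
history = [sched.current.name]
for _ in range(5):                # five timer ticks
    sched.timer_interrupt()
    history.append(sched.current.name)
print(history)                    # round-robin: ['A', 'B', 'C', 'A', 'B', 'C']
```

The point of the sketch: user code never calls `timer_interrupt` itself; the hardware forces the kernel entry, which is what makes the scheme preemptive rather than cooperative.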

Eger answered 1/2, 2016 at 18:33 – Comments (1)
Note that this is how it works on multi-core systems as well. – Mess

Preemption happens immediately, not on the next time-slice boundary. If it is not due to an interrupt, then it's due to a lower-priority thread setting some kind of event that a higher-priority thread is pending on (waiting for). Normally, the OS API's switch from user mode to kernel mode is done via the x86 sysenter instruction.

Time slicing is used to switch between threads of equal priority, which isn't considered to be preemption.
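The distinction above (event-driven preemption vs. time slicing) can be modelled in a few lines. The names (`Kernel`, `set_event`, `wait_on`) are purely illustrative: the point is that the moment a higher-priority thread becomes ready, it takes the CPU, with no waiting for a time-slice boundary.

```python
# Toy model of immediate priority preemption on event signalling.

class Thread:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

class Kernel:
    def __init__(self, current):
        self.current = current
        self.waiting = {}           # event name -> thread blocked on it

    def wait_on(self, thread, event):
        self.waiting[event] = thread

    def set_event(self, event):
        """Called from the running thread (via a syscall) to signal an event."""
        woken = self.waiting.pop(event, None)
        if woken and woken.priority > self.current.priority:
            self.current = woken    # immediate context switch: preemption
        return self.current

low = Thread("low", priority=1)
high = Thread("high", priority=10)
k = Kernel(current=low)
k.wait_on(high, "io_done")
running = k.set_event("io_done")    # low signals; high preempts at once
print(running.name)                 # -> high
```

If `woken` had equal or lower priority it would merely join the ready set, and only time slicing (not preemption) would later give it the CPU.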

Eatmon answered 2/2, 2016 at 4:45 – Comments (2)
Better than the other answers/comments, which are all fixated on the timer interrupt and ignore all the other, vitally important reasons why one thread may be preempted/usurped/replaced by another on such systems, IO-completion interrupts being the most obvious and important by far. – Importunacy
Right, any system call can lead to a context switch. This depends on OS design, though. To minimize latency, Linux can preempt itself (a timer interrupt arrives while the kernel is already running kernel code to handle a system call). Previously it wasn't possible for Linux to do a context switch to a different user-space thread at that point, but it is with a preemptible kernel. (This requires SMP-style locking inside the kernel even on a single-core system, except that IIRC the x86 lock prefix can be patched into a NOP, leaving a "normal" read-modify-write insn like xadd.) – Brinkley

One-by-one:

Given a single core system that employs preemption for multitasking, how does the OS carry out thread interrupts when a user mode application is executing on the system?

Two ways: 1) the user code makes a syscall that causes the kernel to be entered, or 2) a hardware peripheral raises a 'real' processor interrupt that causes a driver to run, and that driver makes an appropriate kernel entry, e.g. asking for a scheduler run upon interrupt return because it has made a thread ready.

Since the processor is handling the user code when does it get a chance to ask the thread scheduler for context switch.

The running user code can make syscalls that may change thread state whenever it wants to. On top of that, the kernel may be entered upon hardware interrupts serviced by drivers, e.g. for the keyboard, mouse, disk, memory manager, network, or timer.
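Both paths converge on the same idea: every kernel entry, whether from a syscall or a hardware interrupt, is an opportunity to reschedule on the way back to user mode. The sketch below echoes Linux-ish naming (`need_resched`) but is purely illustrative.

```python
# Sketch: any kernel entry is a potential reschedule point.

class Cpu:
    def __init__(self):
        self.current = "user_thread_A"
        self.need_resched = False   # set by interrupt handlers / thread wakeups
        self.ready = ["user_thread_B"]

    def kernel_entry(self, reason):
        # ... kernel work for the syscall or interrupt would happen here ...
        if self.need_resched:       # checked on the path back to user mode
            self.ready.append(self.current)
            self.current = self.ready.pop(0)
            self.need_resched = False
        return self.current         # thread that resumes in user mode

cpu = Cpu()
cpu.kernel_entry("syscall: read")          # nothing pending: A resumes
cpu.need_resched = True                    # e.g. a disk interrupt made B ready
print(cpu.kernel_entry("syscall: write"))  # -> user_thread_B
```

This is why the processor never has to "interrupt itself" from user code: the flag is only acted on at a kernel-to-user transition, and those happen constantly.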

Is it handled similarly to page faults?

A page fault is a hardware exception raised by the memory-management hardware when a reference touches a page that isn't resident. The handler needs to load the appropriate page(s) and restart the instruction that faulted. It is one of the set of hardware events that may change thread state, so 'Is it handled similarly to page faults?' looks at the issue backwards.
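The load-then-restart behaviour described above can be modelled without any real MMU. Everything here (`resident`, `backing_store`, `access`) is a hypothetical stand-in; the point is that the faulting access succeeds transparently once the handler has run.

```python
# Toy model of page-fault handling: fault -> load page -> retry access.

resident = {0: "page0-data"}                   # pages currently in memory
backing_store = {0: "page0-data", 1: "page1-data"}
fault_count = 0

def access(page):
    """Read a page, handling a 'page fault' by loading from backing store."""
    global fault_count
    if page not in resident:                   # the 'fault': page not resident
        fault_count += 1
        resident[page] = backing_store[page]   # handler loads the page
    return resident[page]                      # access is restarted and succeeds

print(access(1), fault_count)   # -> page1-data 1  (first touch faults)
print(access(1), fault_count)   # -> page1-data 1  (now resident, no fault)
```

Note the asymmetry with the timer: the faulting thread may be put to sleep while the page loads, which is yet another way a kernel entry ends with a different thread running.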

And if it is, how often it happens

Well, that depends on how often threads make relevant system calls, and on how often they request resources that are not immediately available and so must wait, needing no CPU execution, until the resources become available.

and isn't it causing a performance hit?

Well, no :) Preemptive multitasking usually results in a large performance gain, since it does not apply the execution resource to threads that do not require it and can rapidly apply execution to threads that require it urgently. Without preemption, systems would be unable to respond promptly to IO completions, and IO performance would be abysmally bad. You could forget about apps like audio/video streaming/playback, high-speed net downloads and the like; system performance would be too poor for them to work at all. This performance gain is the overriding reason for the wide use of such systems in most environments. There are examples where a preemptive OS may result in a perceived and/or real performance 'hit', but you have to try quite hard to find one on your typical desktop or server box.

Importunacy answered 2/2, 2016 at 12:23 – Comments (5)
It's not so much that without preemption we couldn't have low-latency systems. It's that it would push the burden of keeping latency low onto every single process. One badly written program could make your videos stutter, and of course trivially hard-lock your whole system with an infinite loop. (Context switches can happen when you make any system call.) MacOS was like this, right up until the last version before OS X, if I understand correctly. – Brinkley
So was Windows 3.x and, to some extent, Windows 9x/Me. – Eger
Well, at least W3.x never pretended to be anything other than what it was. Windows 9x/Me (shudder), though running preemptive taskers, were just BSOD generators. Windows became useful (for most of us, i.e. ignoring NT) with W2k. – Importunacy
@PeterCordes well, it would mean only one low-latency thread per core. Let's face it, there is a reason why developers put up with all the locking etc. problems of threads on preemptive systems: the alternative is pretty hopeless on typical desktops/servers. – Importunacy
Totally agreed that preemptive multitasking is the only viable option in the real world. Even in an embedded system where no third-party software is needed, it's still by far the easiest choice. I'm not arguing against it, just trying to correctly state the implications of not having it. What do you mean by "only one low-latency thread per core"? That every low-latency thread needs its own core? That's only true if there are programs that don't give the OS a chance to schedule for a long time. Multiple low-latency processes can be cooperatively scheduled if they do cooperate, which is hard. – Brinkley

© 2022 - 2024 — McMap. All rights reserved.