Does a hyperthreading CPU implement parallelism or just concurrency (context switching)?
My guess is no parallelism, only concurrency by context switching.
A single physical CPU core with hyper-threading appears as two logical CPUs to the operating system. The CPU is still a single CPU, so it is "cheating" a bit: while the operating system sees two CPUs per core, the actual hardware has only one set of execution resources per core. The CPU pretends it has more cores than it does and uses its own logic to speed up program execution. Hyper-threading allows the two logical cores to share the physical execution resources. This can speed things up somewhat: if one logical CPU is stalled and waiting, the other logical CPU can borrow its execution resources, and otherwise-idle resources can be used to execute instructions from the other thread simultaneously.

Hyper-threading can help speed your system up, but it is nowhere near as good as having additional physical cores. Parallelism in the strict sense (independent execution, as in a GPGPU architecture or on multiple physical cores) is not attainable on a single-core processor unless you consider a superscalar architecture.
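On Linux you can see this "two logical CPUs per core" effect by comparing the number of `processor` entries in `/proc/cpuinfo` with the number of distinct `core id` values. A minimal sketch; the cpuinfo snippet below is a made-up excerpt for a hypothetical 4-core CPU with hyper-threading enabled (on a real machine you would read the actual file):

```python
# Hypothetical excerpt of /proc/cpuinfo for a 4-core, 8-thread CPU.
# On a real Linux box: SAMPLE_CPUINFO = open("/proc/cpuinfo").read()
SAMPLE_CPUINFO = """\
processor : 0
core id   : 0
processor : 1
core id   : 0
processor : 2
core id   : 1
processor : 3
core id   : 1
processor : 4
core id   : 2
processor : 5
core id   : 2
processor : 6
core id   : 3
processor : 7
core id   : 3
"""

def count_cpus(cpuinfo: str):
    """Return (logical CPU count, distinct physical core count)."""
    logical = 0
    core_ids = set()
    for line in cpuinfo.splitlines():
        if line.startswith("processor"):
            logical += 1                              # one logical CPU
        elif line.startswith("core id"):
            core_ids.add(line.split(":")[1].strip())  # one physical core
    return logical, len(core_ids)

logical, physical = count_cpus(SAMPLE_CPUINFO)
print(f"logical CPUs: {logical}, physical cores: {physical}")
# With hyper-threading on, the OS sees twice as many CPUs as cores:
# logical CPUs: 8, physical cores: 4
```

(Real `/proc/cpuinfo` output contains many more fields, and on multi-socket machines core ids repeat per package, so this parser is only a sketch.)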
From: https://en.wikipedia.org/wiki/Superscalar_processor
Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit (called "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion.
Hyper-Threading technology makes a single physical processor appear to be multiple logical processors. There is one copy of the architectural state for each logical processor, and these processors share a single set of physical execution resources. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on conventional physical processors in a multiprocessor system. From a microarchitecture perspective, it means that instructions from logical processors will persist and execute simultaneously on shared execution resources. This can greatly improve processor resource utilization.

The Hyper-Threading implementation on the NetBurst microarchitecture has two logical processors on each physical processor. Figure 1 shows a conceptual view of processors with Hyper-Threading capability. Each logical processor maintains a complete set of the architectural state. The architectural state consists of registers, including general-purpose registers, and those for control, the advanced programmable interrupt controller (APIC), and some for machine state. From a software perspective, this duplication of the architectural state makes each physical processor appear to be two processors. Each logical processor has its own interrupt controller, or APIC, which handles just the interrupts sent to its specific logical processor.
Note: For simultaneous multithreading using a superscalar core (i.e., one that can issue more than one operation per cycle), the execution process is significantly different.
Without hyper-threading hardware, we can have concurrency, provided there is more than one task that can safely be executed concurrently. How? Take processes P1 and P2 that can safely run concurrently, and a single core C. P1 runs on C for one time quantum, then P2 runs on C for the next quantum, then P1 runs again, and so on.
There was only one core C, there was no hyper-threading, and we still had concurrent execution of P1 and P2.
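The time-slicing above can be sketched with a toy round-robin scheduler: each "process" is a generator, and the single "core" runs one step (time quantum) of one process at a time. This is concurrency with zero parallelism, since only one step ever executes at once:

```python
def process(name, steps):
    """A toy process that needs `steps` time quanta to finish."""
    for i in range(steps):
        yield f"{name} step {i}"

def run_on_single_core(procs):
    """Round-robin: run each process one quantum, then preempt it."""
    trace = []
    while procs:
        p = procs.pop(0)           # pick the next runnable process
        try:
            trace.append(next(p))  # run it for one time quantum
            procs.append(p)        # preempt: back of the run queue
        except StopIteration:
            pass                   # process finished, drop it
    return trace

trace = run_on_single_core([process("P1", 2), process("P2", 2)])
print(trace)
# P1 and P2 alternate on the single core:
# ['P1 step 0', 'P2 step 0', 'P1 step 1', 'P2 step 1']
```

Real OS schedulers are preemptive and far more elaborate, but the interleaving pattern is the same idea.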
Without hyper-threading hardware, we can have parallelism if there is a task that can be executed in parallel and we have more than one core to actually run it in parallel. Take the map phase of MapReduce.
Let's say you have two text files to read, you start two mappers, and you have two non-hyper-threaded physical cores. In this case you can (and probably will) run the mappers in parallel without any hyper-threading: each mapper reads its own text file, runs on its own core, and generates its own mapped output.
There were two cores, there was no hyper-threading, and we had parallel execution of a task.
Conclusion: hyper-threading is a hardware improvement¹, and both concurrency and parallelism can exist entirely without it.
¹ By reducing the amount of state that needs to be copied in order to perform a context switch.
* A good SO answer about parallelism and concurrency puts it this way: concurrency is like one juggler juggling many balls; regardless of how it looks, the juggler is only catching or throwing one ball at a time. Parallelism is multiple jugglers juggling balls simultaneously.