What is the difference between concurrent programming and parallel programming?

What is the difference between concurrent programming and parallel programming? I asked Google but didn't find anything that helped me understand the difference. Could you give me an example of each?

So far I have found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs "parallel execution is a property of the machine" isn't enough for me - I still can't say which is which.

Albrecht answered 13/12, 2009 at 22:17 Comment(1)
possible duplicate of Concurrency vs Parallelism - What is the difference?Coniferous

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang, F# asynchronous workflows and Scala's Akka library are perhaps the most promising approaches to highly concurrent programming.
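
For illustration only (this is not part of the original answer), here is a minimal Go sketch of the web-crawler idea. The fetch function and its random delay are stand-ins so the example runs without a network; the point is that the requests are initiated concurrently and the responses arrive in a nondeterministic order.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // fetch simulates downloading a page; the random delay stands in for network latency.
    func fetch(url string, results chan<- string) {
        time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
        results <- url
    }

    func main() {
        urls := []string{"a.html", "b.html", "c.html", "d.html"}
        results := make(chan string)

        for _, u := range urls {
            go fetch(u, results) // requests are initiated concurrently
        }

        visited := make(map[string]bool) // accumulate pages already seen
        for range urls {
            page := <-results // responses arrive in nondeterministic order
            visited[page] = true
            fmt.Println("got", page)
        }
    }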

Multicore programming is a special case of parallel programming. Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug than concurrent programs. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising approach for high-performance parallel programming on multicores and it has been adopted in both Intel's Threading Building Blocks and Microsoft's Task Parallel Library (in .NET 4).
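
Again purely as an illustration, here is a minimal fork-join sketch in Go (a simple parallel sum rather than Strassen's algorithm, to keep it short): the parent spawns child tasks over disjoint halves of the data and only continues once every subtask has finished, so control flow stays deterministic.

    package main

    import (
        "fmt"
        "sync"
    )

    // sumRange writes the sum of xs[lo:hi] into out[idx].
    func sumRange(xs []int, lo, hi int, out []int, idx int, wg *sync.WaitGroup) {
        defer wg.Done()
        for _, v := range xs[lo:hi] {
            out[idx] += v
        }
    }

    func main() {
        xs := []int{1, 2, 3, 4, 5, 6, 7, 8}
        partial := make([]int, 2)

        // Fork: spawn child tasks that may run in parallel on disjoint halves.
        var wg sync.WaitGroup
        wg.Add(2)
        go sumRange(xs, 0, len(xs)/2, partial, 0, &wg)
        go sumRange(xs, len(xs)/2, len(xs), partial, 1, &wg)

        // Join: the parent continues only once every subtask has finished,
        // so control flow (and the result) is deterministic.
        wg.Wait()
        fmt.Println("sum =", partial[0]+partial[1])
    }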

Othilia answered 20/10, 2010 at 22:16 Comment(11)
"The hard part of parallel programming ... such as granularity and communication." If parallel tasks need to communicate, doesn't that make them concurrent?Gregg
"If parallel tasks need to communicate, doesn't that make them concurrent?". Wow, great question! Not necessarily, no. Supercomputers are often programmed with bulk parallel operations followed by global redistribution of data and more bulk parallelism. So there is parallelism and communication but no real concurrency to speak of. In this context, I was thinking more of multicore parallelism where communication means cache complexity, e.g. communication required for cache coherency. Although that is concurrent it is also not directly visible.Othilia
How would you describe a time slicing algorithm in terms of determinism/non-determinism? E.g. a single-core processor system that multi-tasks (time slices) to give the appearance of overlapping processing? When concurrency is defined as execution in overlapping time periods then this kind of multi-tasking is included in the definition for concurrency, but not parallelism.Intermix
@BoppityBop Just because I can say in a drawing what he said in a novel doesn't make my answer less correct. Just easier to read for those who actually don't know the answer. Which I guess is the point of coming here. You can write a book in the language used by this post, but that's going to be absolutely jibberish to most readers, since you probably didn't google this question if you already know half of what jon wrote.Party
@acarlon: "How would you describe a time slicing algorithm in terms of determinism/non-determinism? E.g. a single-core processor system that multi-tasks (time slices) to give the appearance of overlapping processing? When concurrency is defined as execution in overlapping time periods then this kind of multi-tasking is included in the definition for concurrency, but not parallelism.". Yes, that is concurrent and not parallel.Othilia
@TorValamo: I didn't see non-deterministic control flow vs throughput in your diagram.Othilia
The picture was very helpful for me, someone pretty new to the topic, and the description from @JonHarrop was useful to me, someone who appreciates correct, even if technical, language. Both answers contributed to my more complete understanding. We all win! (although I do appreciate the distinction made between parallel execution and parallel programming)Pyxis
please don't use the word concurrently when explaining concurrent, it's confusing.Curvaceous
"Erlang is perhaps the most promising upcoming language...". Interesting choice of words, since Erlang is ~30 years old and was open sourced in 1998.Laxity
This answer is essentially wrong. True overlap of processing happens in concurrent code as well as parallel code.Naze
The question asks specifically about concurrent vs parallel programming and yet in the second paragraph you immediately introduce a new term "Multicore programming" without explaining why that is important or even defining it. Really poorly explained from the perspective of someone wanting a clear answer, reads like a technical wikipedia article.Brutalize

If your program is using threads (concurrent programming), it's not necessarily going to be executed as such (parallel execution), since it depends on whether the machine can handle several threads.

Here's a visual example. Threads on a non-threaded machine:

        --  --  --
     /              \
>---- --  --  --  -- ---->>

Threads on a threaded machine:

     ------
    /      \
>-------------->>

The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.
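
To make the first paragraph concrete, here is a minimal sketch in Go (chosen only as an example language; the answer itself is language-agnostic): the program is written with two concurrently executing tasks, but whether they actually run in parallel depends on how many cores the machine and runtime provide.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func work(name string, wg *sync.WaitGroup) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, "step", i) // interleaving depends on the scheduler and the machine
        }
    }

    func main() {
        // The program is concurrent either way; parallelism depends on the machine.
        fmt.Println("logical CPUs available:", runtime.NumCPU())

        var wg sync.WaitGroup
        wg.Add(2)
        go work("A", &wg)
        go work("B", &wg)
        wg.Wait()
    }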

Party answered 13/12, 2009 at 22:26 Comment(11)
Parallel execution and parallel programming are not the same thing. The answer from Jon Harrop is correct. But it seems that the question itself confuses parallel execution and parallel programming.Judicature
The ability to execute threads in parallel depends upon more than just the machine. For example, OCaml (and Python?) executes threads concurrently but not in parallel due to a global lock for the garbage collector.Othilia
Parallel programming is not a subset of concurrent programming, according to this blog; your answer doesn't take that into account, what do you think about this statement?Grizzly
@Kevin: "Parallel programming is not a subset of concurrent programming, according to this blog" but the blog article you cite says "Concurrency...is a concept more general than parallelism"Othilia
@Jon Does "more general" mean that concurrency includes parallelism, or that it's a different question? (because it's not included, for instance SIMD codes are parallel but not concurrent)Grizzly
@Kevin: I think "more general" means superset. I agree that it is wrong.Othilia
@Grizzly The difference between parallel and concurrent is a semantic discussion. The main point is code vs machine, or programming vs execution. Parallel programming is the same as concurrent programming. Parallel execution is the same as concurrent execution.Party
This answer is good for visualizing the difference between concurrent & parallel execution, but not for the poster's original question about programming.Extravehicular
@Extravehicular Then you must have failed to read the first paragraph. And it is perfectly good for visualizing programming with threads.Party
The diagram is wrong. Concurrent code actually does run at the same time, and there are no gaps.Naze
In those two diagrams, both show tasks that are both concurrent and parallel at the zoomed-out level. Note that you are literally using parallel line segments. But if you zoom-in, the work is done sequentially, and you literally use sequential line segments which are not concurrent.Senhor

https://joearms.github.io/published/2013-04-05-concurrent-and-parallel-programming.html

Concurrent = Two queues and one coffee machine.

Parallel = Two queues and two coffee machines.

Alcaraz answered 6/6, 2016 at 12:13 Comment(5)
Incorrect and misleading. Concurrent = allowing one or more queues (nondeterministic composition). Parallel = having more than one queue to make any of them shorter than the original one if not empty (asymptotic efficiency).Bendick
Concurrent code requires two or more processors (or "coffee machines"). Thus this answer is essentially wrong.Naze
@GeoffreyAnderson No it doesn't. For example, threads and processes are executed concurrently on a single core machine.Othilia
@Bendick - Please have a look at https://mcmap.net/q/16138/-what-is-the-difference-between-concurrent-programming-and-parallel-programming and do look at the source link - on the Oracle site - so it can't be wrong, but our understanding can be. So it's time to rethink. I did change my view after reading that.Domino
@GeoffreyAnderson - Please have a look at https://mcmap.net/q/16138/-what-is-the-difference-between-concurrent-programming-and-parallel-programming . It contains a link from Oracle and clearly states what is what, so we need to align ourselves with it.Domino

I'm interpreting the original question as being about parallel/concurrent computation rather than programming.

In concurrent computation, two computations both advance independently of each other. The second computation doesn't have to wait until the first is finished in order to advance. This doesn't say, however, how that is achieved. In a single-core setup, suspending and alternating between threads is required (also called pre-emptive multithreading).

In parallel computation, two computations both advance simultaneously - that is, literally at the same time. This is not possible with a single CPU and requires a multi-core setup instead.
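
A minimal Go sketch of this distinction (assuming Go's runtime as the scheduler, only for illustration): restricting the runtime to one logical processor forces the two tasks to take turns on a single core (concurrent but not parallel); raising the limit on a multi-core machine allows truly simultaneous execution.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func spin(name string, wg *sync.WaitGroup) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, i)
        }
    }

    func main() {
        // With one logical processor the goroutines take turns on a single core
        // (concurrent, not parallel); raise this to runtime.NumCPU() to allow
        // truly simultaneous execution on a multi-core machine.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        wg.Add(2)
        go spin("task-1", &wg)
        go spin("task-2", &wg)
        wg.Wait()
    }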

Images from article: "Parallel vs Concurrent in Node.js"

[Images: suspending and taking turns versus parallel computing]

Chatter answered 7/9, 2015 at 18:22 Comment(2)
Image^ order: Concurrent is on the left; Parallel is on the right.Veliz
>this is not possible with single CPU and requires multi-core setup instead. That is not exactly true. Hyperthreading and Pipelining are a thing. With that each core can have code from 2 different threads being processed in the pipeline at the same time. Cores also have multiple ALUs that can run at the same time but work on different threads.Orleanist

From a processor's point of view, it can be described by this picture: [image]

Exalted answered 8/3, 2017 at 5:43 Comment(1)
The left is sequential, one task happens after the other, and they aren't concurrent or parallel.Senhor

I believe concurrent programming refers to multithreaded programming which is about letting your program run multiple threads, abstracted from hardware details.

Parallel programming refers specifically to designing your program's algorithms to take advantage of available parallel execution. For example, you can execute two branches of some algorithm in parallel, in the expectation that this will hit the result sooner (on average) than if you checked the first branch and then the second.
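
As a rough illustration of that last example, here is a small Go sketch; searchBranchA and searchBranchB are hypothetical stand-ins for two branches of an algorithm, with sleeps in place of real work. Both branches are explored in parallel and the program takes whichever result arrives first.

    package main

    import (
        "fmt"
        "time"
    )

    // Two hypothetical strategies for finding the same answer; the delays
    // stand in for different amounts of work.
    func searchBranchA(result chan<- string) {
        time.Sleep(30 * time.Millisecond)
        result <- "answer from branch A"
    }

    func searchBranchB(result chan<- string) {
        time.Sleep(10 * time.Millisecond)
        result <- "answer from branch B"
    }

    func main() {
        // Buffered so the slower branch can still finish and exit without blocking.
        result := make(chan string, 2)

        go searchBranchA(result) // explore both branches in parallel...
        go searchBranchB(result)

        fmt.Println(<-result) // ...and take whichever finds the result first
    }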

Lacquer answered 13/12, 2009 at 22:22 Comment(2)
To put it another way, executing two things in parallel can get them done twice as fast. Executing two things concurrently might still take the same amount of time as doing first one and then the other if there is just one CPU time-slicing back and forth between running a bit of the first and then a bit of the second, etc.Microtone
One of the best explanationCannelloni

I found this content in a blog and thought it useful and relevant.

Concurrency and parallelism are NOT the same thing. Two tasks T1 and T2 are concurrent if the order in which the two tasks are executed in time is not predetermined:

  • T1 may be executed and finished before T2,
  • T2 may be executed and finished before T1,
  • T1 and T2 may be executed simultaneously at the same instance of time (parallelism),
  • T1 and T2 may be executed alternatively,
  • ...

If two concurrent threads are scheduled by the OS to run on one single-core non-SMT non-CMP processor, you may get concurrency but not parallelism. Parallelism is possible on multi-core, multi-processor or distributed systems.

Concurrency is often referred to as a property of a program, and is a concept more general than parallelism.

Source: https://blogs.oracle.com/yuanlin/entry/concurrency_vs_parallelism_concurrent_programming

Pork answered 23/1, 2013 at 9:5 Comment(0)

They're two phrases that describe the same thing from (very slightly) different viewpoints. Parallel programming is describing the situation from the viewpoint of the hardware -- there are at least two processors (possibly within a single physical package) working on a problem in parallel. Concurrent programming is describing things more from the viewpoint of the software -- two or more actions may happen at exactly the same time (concurrently).

The problem here is that people are trying to use the two phrases to draw a clear distinction when none really exists. The reality is that the dividing line they're trying to draw has been fuzzy and indistinct for decades, and has grown ever more indistinct over time.

What they're trying to discuss is the fact that once upon a time, most computers had only a single CPU. When you executed multiple processes (or threads) on that single CPU, the CPU was only really executing one instruction from one of those threads at a time. The appearance of concurrency was an illusion--the CPU switching between executing instructions from different threads quickly enough that to human perception (to which anything less than 100 ms or so looks instantaneous) it looked like it was doing many things at once.

The obvious contrast to this is a computer with multiple CPUs, or a CPU with multiple cores, so the machine is executing instructions from multiple threads and/or processes at exactly the same time; code executing in one can't/doesn't have any effect on code executing in the other.

Now the problem: such a clean distinction has almost never existed. Computer designers are actually fairly intelligent, so they noticed a long time ago that (for example) when you needed to read some data from an I/O device such as a disk, it took a long time (in terms of CPU cycles) to finish. Instead of leaving the CPU idle while that happened, they figured out various ways of letting one process/thread make an I/O request, and let code from some other process/thread execute on the CPU while the I/O request completed.

So, long before multi-core CPUs became the norm, we had operations from multiple threads happening in parallel.

That's only the tip of the iceberg though. Decades ago, computers started providing another level of parallelism as well. Again, being fairly intelligent people, computer designers noticed that in a lot of cases, they had instructions that didn't affect each other, so it was possible to execute more than one instruction from the same stream at the same time. One early example that became pretty well known was the Control Data 6600. This was (by a fairly wide margin) the fastest computer on earth when it was introduced in 1964--and much of the same basic architecture remains in use today. It tracked the resources used by each instruction, and had a set of execution units that executed instructions as soon as the resources on which they depended became available, very similar to the design of most recent Intel/AMD processors.

But (as the commercials used to say) wait--that's not all. There's yet another design element to add still further confusion. It's been given quite a few different names (e.g., "Hyperthreading", "SMT", "CMP"), but they all refer to the same basic idea: a CPU that can execute multiple threads simultaneously, using a combination of some resources that are independent for each thread, and some resources that are shared between the threads. In a typical case this is combined with the instruction-level parallelism outlined above. To do that, we have two (or more) sets of architectural registers. Then we have a set of execution units that can execute instructions as soon as the necessary resources become available. These often combine well because the instructions from the separate streams virtually never depend on the same resources.

Then, of course, we get to modern systems with multiple cores. Here things are obvious, right? We have N (somewhere between 2 and 256 or so, at the moment) separate cores, that can all execute instructions at the same time, so we have a clear-cut case of real parallelism--executing instructions in one process/thread doesn't affect executing instructions in another.

Well, sort of. Even here we have some independent resources (registers, execution units, at least one level of cache) and some shared resources (typically at least the lowest level of cache, and definitely the memory controllers and bandwidth to memory).

To summarize: the simple scenarios people like to contrast between shared resources and independent resources virtually never happen in real life. With all resources shared, we end up with something like MS-DOS, where we can only run one program at a time, and we have to stop running one before we can run the other at all. With completely independent resources, we have N computers running MS-DOS (without even a network to connect them) with no ability to share anything between them at all (because if we can even share a file, well, that's a shared resource, a violation of the basic premise of nothing being shared).

Every interesting case involves some combination of independent resources and shared resources. Every reasonably modern computer (and a lot that aren't at all modern) has at least some ability to carry out at least a few independent operations simultaneously, and just about anything more sophisticated than MS-DOS has taken advantage of that to at least some degree.

The nice, clean division between "concurrent" and "parallel" that people like to draw just doesn't exist, and almost never has. What people like to classify as "concurrent" usually still involves at least one and often more different types of parallel execution. What they like to classify as "parallel" often involves sharing resources and (for example) one process blocking another's execution while using a resource that's shared between the two.

People trying to draw a clean distinction between "parallel" and "concurrent" are living in a fantasy of computers that never actually existed.

Guillotine answered 13/12, 2009 at 22:23 Comment(0)
  • Concurrent programming, in a general sense, refers to environments in which the tasks we define can occur in any order. One task can occur before or after another, and some or all tasks can be performed at the same time.

  • Parallel programming specifically refers to the simultaneous execution of concurrent tasks on different processors. Thus, all parallel programming is concurrent, but not all concurrent programming is parallel.

Source: PThreads Programming - A POSIX Standard for Better Multiprocessing, Buttlar, Farrell, Nichols

Pitchfork answered 29/7, 2017 at 11:6 Comment(1)
"Parallel" refers to parallel side-by-side line segments in an execution diagram, which has the same meaning as concurrent.Senhor

Classic scheduling of tasks can be serial, parallel or concurrent.

  • Serial: tasks must be executed one after the other in a known, strict order or it will not work. Easy enough.

  • Parallel: tasks must be executed at the same time or it will not work.

    • Any failure of any of the tasks - functionally or in time - will result in total system failure.
    • All tasks must have a common reliable sense of time.

    Try to avoid this or we will have tears by tea time.

  • Concurrent: we do not care. We are not careless, though: we have analysed it and it doesn't matter; we can therefore execute any task using any available facility at any time. Happy days.

Often, the available scheduling changes at known events which we call a state change.

People often think this is about software, but it is in fact a systems design concept that pre-dates computers; software systems were a little slow in the uptake, very few software languages even attempt to address the problem. You might try looking up the transputer language occam if you are interested.

Succinctly, systems design addresses the following:

  • the verb - what you are doing (operation or algorithm)
  • the noun - what you are doing it to (data or interface)
  • when - initiation, schedule, state changes
  • how - serial, parallel, concurrent
  • where - once you know when things happen, you can say where they can happen and not before.
  • why - is this the way to do it? Are there other ways, and more importantly, a better way? What happens if you don't do it?

Good luck.

Puett answered 7/12, 2014 at 20:39 Comment(2)
I see caps everywhereAndris
This answer is more complicated than the topics of concurrency and parallelism together.Commodious

Parallel programming happens when code is being executed at the same time and each execution is independent of the others. Therefore, there is usually no preoccupation with shared variables and the like, because that won't likely happen.

However, concurrent programming consists of code being executed by different processes/threads that share variables and such; therefore, in concurrent programming we must establish some sort of rule to decide which process/thread executes first. We want this so that we can be sure there will be consistency and we can know with certainty what will happen. If there is no control and all threads compute at the same time and store things in the same variables, how would we know what to expect in the end? Maybe a thread is faster than another, maybe one of the threads even stopped in the middle of its execution and another continued a different computation with a corrupted (not yet fully computed) variable; the possibilities are endless. It's in these situations that we usually use concurrent programming instead of parallel.
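
A minimal Go sketch of the point about shared variables (an illustration under the assumption that a mutex is the chosen "rule", not the only possible one): several tasks update one shared counter, and the mutex decides which task touches it at a time, keeping the final result predictable.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex
            counter int
            wg      sync.WaitGroup
        )

        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // the "rule" deciding who touches the shared variable
                    counter++
                    mu.Unlock()
                }
            }()
        }

        wg.Wait()
        fmt.Println("counter =", counter) // always 4000; without the mutex the result is unpredictable
    }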

Doggoned answered 3/1, 2016 at 4:9 Comment(0)

In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations.
- Andrew Gerrand -

And

Concurrency is the composition of independently executing computations. Concurrency is a way to structure software, particularly as a way to write clean code that interacts well with the real world. It is not parallelism.

Concurrency is not parallelism, although it enables parallelism. If you have only one processor, your program can still be concurrent but it cannot be parallel. On the other hand, a well-written concurrent program might run efficiently in parallel on a multiprocessor. That property could be important...
- Rob Pike -

To understand the difference, I strongly recommend watching this video by Rob Pike (one of the creators of Go): Concurrency Is Not Parallelism

Lett answered 6/5, 2015 at 9:14 Comment(0)

Concurrency provides a way to structure a solution to solve a problem that may (but not necessarily) be parallelizable. Concurrency is about structure; parallelism is about execution.

[image: concurrency and parallelism illustrated with puppies sharing food bowls]

Regality answered 16/9, 2021 at 9:27 Comment(4)
I just don't get the right side picture. What happens on that?Alleluia
@Alleluia It's just a small real-world comparison: the puppies (i.e. threads) are trying to eat from a limited number of food bowls (CPUs). While eating, some puppies need to drink from the water bowl (a shared resource). Assume only one water bowl is available and it can be accessed by only one puppy at a time. Then the puppies need to deal with a lot of things besides the actual eating (execution/doing), such as fighting over resources, starvation, bowl switching, spilling, etc...Regality
If puppies are threads here, and bowls are CPU cores, then concurrency would mean that puppies who share the same bowl eat in a way that only a single puppy eats from that bowl at the same time - the picture on the right side is not like that, it's more like a random mess. They don't even touch the shared resource, though. I think this picture is good for only one reason: to confuse people who are trying to understand the concepts of concurrency. I understand the concepts it tries to visualize well, but it does a terrible job in my opinion.Alleluia
"Concurrency is about structure, parallelism is about execution." ? Yet your picture has parallelism the structured picture, and concurrency is the messy execution. I think you have something backwards.Sightly

I understood the difference to be:

1) Concurrent - running in tandem using shared resources
2) Parallel - running side by side using different resources

So you can have two things happening at the same time independent of each other, even if they come together at points (2) or two things drawing on the same reserves throughout the operations being executed (1).

Janice answered 7/2, 2010 at 18:12 Comment(0)

Although there isn’t complete agreement on the distinction between the terms parallel and concurrent, many authors make the following distinctions:

  • In concurrent computing, a program is one in which multiple tasks can be in progress at any instant.
  • In parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.

So parallel programs are concurrent, but a program such as a multitasking operating system is also concurrent, even when it is run on a machine with only one core, since multiple tasks can be in progress at any instant.

Source: An introduction to parallel programming, Peter Pacheco

Pellerin answered 27/7, 2014 at 15:28 Comment(0)

Concurrency and Parallelism Source

In a multithreaded process on a single processor, the processor can switch execution resources between threads, resulting in concurrent execution.

In the same multithreaded process in a shared-memory multiprocessor environment, each thread in the process can run on a separate processor at the same time, resulting in parallel execution.

When the process has fewer or as many threads as there are processors, the threads support system, in conjunction with the operating environment, ensures that each thread runs on a different processor.

For example, in a matrix multiplication that has the same number of threads and processors, each thread (and each processor) computes a row of the result.
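
A minimal Go sketch of that row-per-thread idea (Go is used here only for illustration; the quoted example is language-neutral): each goroutine computes one row of the result matrix.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        a := [][]int{{1, 2}, {3, 4}}
        b := [][]int{{5, 6}, {7, 8}}
        c := make([][]int, len(a))

        var wg sync.WaitGroup
        for i := range a {
            wg.Add(1)
            go func(row int) { // each "thread" computes one row of the result
                defer wg.Done()
                c[row] = make([]int, len(b[0]))
                for j := range b[0] {
                    for k := range b {
                        c[row][j] += a[row][k] * b[k][j]
                    }
                }
            }(i)
        }
        wg.Wait()

        fmt.Println(c) // [[19 22] [43 50]]
    }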

Domino answered 26/7, 2019 at 15:27 Comment(4)
This source only shows a special case of the implementation - a specialized form of multithreading. Yeah, it does not even cover the whole story of multithreading, e.g. M:N userspace threading model and the role of thread scheduling. Threading is only a specialized way of the implementation in the sense of the system architecture (OS, VM, CPU with HT enabled, etc) and/or the programming interface. There do exist more, like instruction-level parallelism in the implementation of a modern CPU which exposes no programming interface and has nothing to do with threads.Bendick
@FrankHB: I would appreciate it if you could share any authoritative links about your content. I would really like to explore whether there's more to it. My current understanding is quite simplistic - the question is whether running a multi-threaded app on any given OS architecture, with a given thread scheduling mechanism, is parallel or concurrent. Even given the M:N userspace model - how do you tell whether the RUN is parallel or concurrent?Domino
I've written an answer to discuss the problems in different abstractions.Bendick
Running a multi-threaded app is actually quite complex compared to the basic abstraction, as "run" is a general action that fits many abstractions. There are many details that must be complemented by the threading model in the implementation (typically, both the language spec and the language runtime implementation used to program the app) onto the basic abstraction.Bendick

Just sharing an example that helps to highlight the distinction:

Parallel Programming: Say you want to implement the merge-sort algorithm. Each time that you divide the problem into two sub-problems, you can have two threads that solve them. However, in order to do the merge step you have to wait for these two threads to finish since merging requires both sub-solutions. This "mandatory wait" makes this a parallel program.
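
A minimal Go sketch of that "mandatory wait" (for illustration only, not the answer's actual code; the two halves are sorted with the standard library rather than recursively, to keep it short): the merge step cannot start until both sub-sorts have finished.

    package main

    import (
        "fmt"
        "sort"
        "sync"
    )

    // parallelSort sorts each half in its own goroutine, then merges.
    // The merge is the "mandatory wait": it cannot start until both
    // sub-solutions are ready.
    func parallelSort(xs []int) []int {
        if len(xs) < 2 {
            return xs
        }
        mid := len(xs) / 2
        left, right := xs[:mid], xs[mid:]

        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); sort.Ints(left) }()
        go func() { defer wg.Done(); sort.Ints(right) }()
        wg.Wait() // wait for both halves before merging

        return merge(left, right)
    }

    func merge(a, b []int) []int {
        out := make([]int, 0, len(a)+len(b))
        for len(a) > 0 && len(b) > 0 {
            if a[0] <= b[0] {
                out, a = append(out, a[0]), a[1:]
            } else {
                out, b = append(out, b[0]), b[1:]
            }
        }
        return append(append(out, a...), b...)
    }

    func main() {
        fmt.Println(parallelSort([]int{5, 2, 8, 1, 9, 3})) // [1 2 3 5 8 9]
    }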

Concurrent Program: Say you want to compress n text files and generate a compressed file for each of them. You can have from 2 (up to n) threads that each handle compressing a subset of the files. When each thread is done, it's just done, it doesn't have to wait or do anything else. So, since different tasks are performed in an interleaved manner in "any arbitrary order" the program is concurrent but not parallel.

As someone else mentioned, every parallel program is concurrent (has to be in fact), but not the other way around.

Lophobranch answered 8/10, 2019 at 20:12 Comment(0)

I will try to explain it in my own style; it might not be in computer terms, but it gives you the general idea.

Let's take an example, say household chores: cleaning dishes, taking out the trash, mowing the lawn, etc. Also, we have 3 people (threads) A, B, C to do them

Concurrent: The three individuals start different tasks independently i.e.,

A --> cleaning dishes
B --> taking out trash 
C --> mowing the lawn 

Here, the order of tasks is non-deterministic and the responses depend on the amount of work

Parallel: Here, if we want to improve the throughput, we can assign multiple people to a single task. For example, for cleaning dishes we assign two people, A soaping the dishes and B washing them, which might improve the throughput.

cleaning the dishes:

A --> soaping the dishes
B --> washing the dishes

so on

Hope this gives an idea! now move on to the technical terms which are explained in the other answers ;)

Anglesey answered 16/3, 2020 at 0:16 Comment(3)
It seems that you have explained parallelism in both. When you talk about "3 individuals" performing "3 tasks" independently, then it's parallelism. Concurrency (without parallelism) would be a single entity working on all 3 tasks. Not one by one, but in a time-sliced manner. Washing a few dishes, taking some trash out, washing some more dishes, mowing the lawn a bit, taking some more trash out... Repeat till the tasks are done. These 3 tasks may not be the best practical example, as no one would do these 3 tasks concurrently. Parallelism comes when you have 2 or 3 people for the same tasks.Volding
This is the best answer.Cannelloni
@Tushar, your understanding of concurrency is wrong. It is about perspective. parallelism is breaking down tasks. Concurrency is not about breaking down.Cannelloni

Different people talk about different kinds of concurrency and parallelism in many different specific cases, so some abstractions to cover their common nature are needed.

The basic abstraction is done in computer science, where both concurrency and parallelism are attributed to the properties of programs. Here, programs are formalized descriptions of computing. Such programs need not be in any particular language or encoding, which is implementation-specific. The existence of API/ABI/ISA/OS is irrelevant at such a level of abstraction. Surely one will need more detailed implementation-specific knowledge (like the threading model) to do concrete programming work, but the spirit behind the basic abstraction is not changed.

A second important fact is that, as general properties, concurrency and parallelism can coexist in many different abstractions.

For the general distinction, see the relevant answer for the basic view of concurrency v. parallelism. (There are also some links containing some additional sources.)

Concurrent programming and parallel programming are techniques to implement such general properties with some systems which expose programmability. The systems are usually programming languages and their implementations.

A programming language may expose the intended properties by built-in semantic rules. In most cases, such rules specify the evaluations of specific language structures (e.g. expressions), making the computation involved effectively concurrent or parallel. (More specifically, the computational effects implied by the evaluations can perfectly reflect these properties.) However, concurrent/parallel language semantics are essentially complex and not necessary for practical work (to implement efficient concurrent/parallel algorithms as the solutions of realistic problems). So, most traditional languages take a more conservative and simpler approach: assuming the semantics of evaluation is totally sequential and serial, then providing optional primitives to allow some of the computations to be concurrent and parallel. These primitives can be keywords or procedural constructs ("functions") supported by the language. They are implemented based on interaction with the hosted environments (OS, or "bare metal" hardware interface), usually opaque to the language (not able to be derived portably using the language). Thus, in this particular kind of high-level abstraction seen by the programmers, nothing is concurrent/parallel besides these "magic" primitives and programs relying on these primitives; the programmers can then enjoy a less error-prone programming experience when the concurrency/parallelism properties are not of interest.

Although primitives abstract the complexity away in the most high-level abstractions, the implementations still have extra complexity not exposed by the language features. So, some mid-level abstractions are needed. One typical example is threading. Threading allows one or more threads of execution (or simply threads; sometimes also called processes, which is not necessarily the concept of a task scheduled in an OS) supported by the language implementation (the runtime). Threads are usually preemptively scheduled by the runtime, so a thread needs to know nothing about other threads. Thus, threads are a natural way to implement parallelism as long as they share nothing (the critical resources): just decompose the computations into different threads, and once the underlying implementation allows the overlapping of computation resources during execution, it works. Threads are also subject to concurrent accesses of shared resources: just access the resources in any order that meets the minimal constraints required by the algorithm, and the implementation will eventually determine when to access them. In such cases, some synchronization operations may be necessary. Some languages treat threading and synchronization operations as parts of the high-level abstraction and expose them as primitives, while other languages encourage only relatively more high-level primitives (like futures/promises) instead.
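
As a rough illustration of such a higher-level primitive (this sketch and its names are assumptions for illustration, not any particular language's future API), a future can be modeled in Go as a channel that delivers the result of a computation started in the background:

    package main

    import (
        "fmt"
        "time"
    )

    // future starts the computation immediately and returns a handle
    // (a channel) whose value can be claimed once, later.
    func future(compute func() int) <-chan int {
        ch := make(chan int, 1)
        go func() { ch <- compute() }()
        return ch
    }

    func main() {
        f := future(func() int {
            time.Sleep(20 * time.Millisecond) // some expensive work
            return 42
        })

        fmt.Println("doing other work while the future runs...")
        fmt.Println("result:", <-f) // block only at the point the value is needed
    }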

Under the level of language-specific threads comes the multitasking of the underlying hosting environment (typically, an OS). OS-level preemptive multitasking is used to implement (preemptive) multithreading. In some environments like Windows NT, the basic scheduling units (the tasks) are also "threads". To differentiate them from the userspace implementation of threads mentioned above, they are called kernel threads, where "kernel" means the kernel of the OS (however, strictly speaking, this is not quite true for Windows NT; the "real" kernel is the NT executive). Kernel threads are not always 1:1 mapped to userspace threads, although 1:1 mapping often reduces most of the overhead of mapping. Since kernel threads are heavyweight (involving system calls) to create/destroy/communicate, there are non-1:1 green threads in userspace to overcome the overhead problems, at the cost of the mapping overhead. The choice of mapping depends on the programming paradigm expected in the high-level abstraction. For example, when a huge number of userspace threads is expected to be executed concurrently (as in Erlang), 1:1 mapping is never feasible.

Underlying the OS multitasking is ISA-level multitasking provided by the logical cores of the processor. This is usually the lowest-level public interface for programmers. Beneath this level, there may exist SMT. This is a lower-level form of multithreading implemented by the hardware, but arguably still somewhat programmable - though it is usually only accessible by the processor manufacturer. Note that the hardware design apparently reflects parallelism, but there is also a concurrent scheduling mechanism to make the internal hardware resources be used efficiently.

In each level of "threading" mentioned above, both concurrency and parallelism are involved. Although the programming interfaces vary dramatically, all of them are subject to the properties revealed by the basic abstraction at the very beginning.

Bendick answered 1/8, 2019 at 8:25 Comment(0)

Concurrent programming is the general concept where a program can perform multiple tasks in an undefined order of completion, and those tasks may or may not be executing simultaneously.

Parallel programming is just a type of concurrent programming where these tasks are running on threads that execute simultaneously.

I really don't understand many of the overly verbose answers here seemingly implying that concurrent and parallel programming are distinct programming approaches that don't overlap.

If you're writing a parallel program you are by definition writing a special case of concurrent program. The terms seem to have been needlessly confused and complicated over the years.

One of the best and most detailed treatments of concurrent programming is the book "Concurrent programming on Windows" by Joe Duffy. This book defines concurrency and then goes on to explain the various OS resources, libraries, etc. available to write "parallel" programs, such as the Task Parallel Library in .NET.

On page 5:

"Parallelism is the use of concurrency to decompose an operation into finer grained constituent parts so that independent parts can run on separate processors on the machine"

So, again, parallel programming is just a special type of concurrent programming where multiple threads/tasks will be running simultaneously.

PS I have always disliked how, in programming, the words concurrent and parallel have such overloaded meaning. e.g. In the big wide world outside programming "the basketball games will be run concurrently" and "the basketball games will be run in parallel" are identical.

Imagine the laughable confusion at a developers conference where on day one they advertise sessions will be run in "parallel" but on day two they will be run "concurrently". It would be hilarious!

Brutalize answered 24/10, 2022 at 3:11 Comment(1)
You say parallel programming is a type of concurrent programming. I'd agree. But I'd also claim the converse such that the two terms have logical equivalence. Can you cite a counter example of concurrency that is not parallel?Senhor

The two terms mean the same thing.

  • Concurrent: Merriam Webster dictionary has the primary definition: "operating or occurring at the same time". A secondary definition is "running parallel".

  • Parallel: As opposed to sequential. The word parallel refers to parallel lines (or line segments more precisely) when using a visual diagram with line segments representing threads of execution. To avoid confusion, "threads" is being used in the general sense, not necessarily any particular thread technology. Sequential line segments means tasks execute sequentially, one after the other, and parallel line segments mean the tasks execute in parallel or concurrently.

Several other answers and some authorities suggest that there is a difference regarding truly simultaneous execution on different CPU cores, but this distinction isn't generally accepted and recognized.

Senhor answered 27/8, 2023 at 23:6 Comment(0)

Concurrent: On a single-core machine, multiple tasks run in a CPU-time-slice-sharing style.
Parallel: On a multi-core machine, multiple tasks run on separate cores simultaneously.

Desalvo answered 8/7, 2022 at 1:37 Comment(0)
