Message passing vs locking

What exactly is the difference between message passing concurrency schemes and lock-based concurrency schemes, in terms of performance? A thread that is waiting on a lock blocks, so other threads can run. As a result, I don't see how message-passing can be faster than lock-based concurrency.

Edit: Specifically, I'm discussing a message-passing approach like in Erlang, compared to a shared-data approach using locks (or atomic operations).

Altissimo answered 21/8, 2011 at 19:27 Comment(4)
Could you expand your question a bit? What exactly are you asking about? Perhaps a concrete example case? Because the 'real' answer to your question is like writing a book :), i.e. really long. - Autobiography
Is this a fair comparison? Isn't this apples and oranges? - Marmot
Is this question about comparing traditional threading and SEDA-like approaches? If so, my understanding is that blocking a thread involves significant performance overhead (memory fences and such). You may want to look at this discussion: How to significantly improve java performance? - Novation
+1, this is a very important question. Surprising that it hasn't been asked already. - Fidget

As some others have suggested ("apples and oranges"), I see these two techniques as orthogonal. The underlying assumption here seems to be that one will choose one or the other: we'll either use locking and shared resources or we'll use message passing, and that one renders the other unnecessary, or perhaps the other is even unavailable.

Much like, say, a metacircular evaluator, it's not obvious which are the real primitives here. For instance, in order to implement message passing, you're probably going to need atomic CAS and particular memory visibility semantics, or maybe some locking and shared state. One can implement atomic operations in terms of locks, or one can implement locks in terms of atomic operations (as Java does in its java.util.concurrent.locks types).
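For instance, here is a minimal sketch in Java (not from any particular library, just an illustration) of a lock built out of a single CAS primitive, in the spirit of what java.util.concurrent.locks does far more carefully under the hood:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A toy spinlock built from one CAS primitive, to illustrate "locks in terms
// of atomic operations". It busy-waits and is not reentrant, so it is a sketch,
// not a substitute for java.util.concurrent.locks.ReentrantLock.
final class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    void lock() {
        // Whoever flips false -> true first owns the lock; everyone else spins.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait();
        }
    }

    void unlock() {
        held.set(false);
    }
}
```

The real JDK lock implementations add queuing and parking so waiters don't burn CPU, but the primitive underneath is the same CAS.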

Likewise, though admittedly a stretch, one could implement locking with message passing. Asking which one performs better doesn't make much sense in general, because that's really more a question about which are built in terms of which. Most likely, the one that's at the lower level can be driven better by a capable programmer than the one built on top—as has been the case with manual transmission cars until recently (quite a debate there too).

Usually the message-passing approach is lauded not for better performance, but rather for safety and convenience, and it's usually sold by denying the programmer control of locking and shared resources. As a result, it bets against programmer capability; if the programmer can't acquire a lock, he can't do it poorly and slow the program down. Much like a debate concerning manual memory management and garbage collection, some will claim to be "good drivers," making the most of manual control; others—especially those implementing and promoting use of a garbage collector—will claim that in the aggregate, the collector can do a better job than "not-so-good drivers" can with manual management.

There's no absolute answer. The difference here will lie with the skill level of the programmers, not with the tools they may wield.

Omeara answered 21/8, 2011 at 21:19 Comment(0)

IMHO, message passing is not really a concurrency scheme in itself. It is basically a form of inter-process communication (IPC) and an alternative to shared objects; Erlang simply favors message passing over shared objects.

Cons of shared objects (pros of message passing):

  • The state of mutable/shared objects is harder to reason about in a context where multiple threads run concurrently.
  • Synchronizing on a shared object leads to algorithms that are inherently not wait-free or lock-free.
  • In a multiprocessor system, a shared object can be duplicated across processor caches. Even with compare-and-swap-based algorithms that don't require locking, a lot of processor cycles can be spent sending cache-coherence messages between the processors.
  • A system built on message-passing semantics is inherently more scalable. Since message passing implies that messages are sent asynchronously, the sender is not required to block until the receiver acts on the message (see the sketch after this list).
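To make the asynchronous-send point concrete, here is a minimal sketch in Java (the MailboxDemo name is made up for illustration, and an unbounded mailbox is assumed): the sender just enqueues and moves on, never waiting for the receiver.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MailboxDemo {
    public static void main(String[] args) throws InterruptedException {
        // The "mailbox": an unbounded, thread-safe queue acting as the channel.
        Queue<String> mailbox = new ConcurrentLinkedQueue<>();

        Thread receiver = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                String msg = mailbox.poll();      // non-blocking receive
                if (msg != null) {
                    System.out.println("received: " + msg);
                } else {
                    Thread.onSpinWait();          // nothing in the mailbox yet
                }
            }
        });
        receiver.start();

        // The sender never blocks on the receiver; it enqueues and carries on.
        for (int i = 0; i < 5; i++) {
            mailbox.offer("msg-" + i);
        }

        Thread.sleep(100);                        // give the receiver time to drain
        receiver.interrupt();
    }
}
```

A real actor runtime such as Erlang's would park the receiver instead of spinning and would deal with back-pressure, but the essential shape (a private mailbox plus a non-blocking send) is the same.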

Pros of shared objects (cons of message passing):

  • Some algorithms tend to be much simpler when they can work on shared state directly (see the sketch after this list).
  • A message-passing system that requires resources to be locked will eventually degenerate into a shared-object system. This is sometimes apparent in Erlang when programmers start using ets tables etc. to store shared state.
  • If the algorithms are wait-free, you will see improved performance and a reduced memory footprint, since there is far less object allocation in the form of new messages.
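As an illustration of the "simpler" point, here is a minimal shared-object counterpart in Java (a hypothetical SharedCounter, not from any particular library): one mutable value guarded by a lock, with no message protocol at all.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the shared-object approach: a single mutable counter guarded
// by a lock. Much simpler than designing a message protocol for this task.
public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value = 0;

    public void increment() {
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }

    public long get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

The flip side, as the first list points out, is that every caller contends on the same lock and the same cache line.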
Inherence answered 22/8, 2011 at 5:8 Comment(0)

Using message passing when all you really need is locking is the wrong choice; in those cases, use locking. However, message passing gives you much more than locking alone: as its name suggests, it allows you to pass messages, i.e. data, between threads or processes.

Apostil answered 21/8, 2011 at 19:38 Comment(0)

Message passing (with immutable messages) is easier to get right. With locking and shared mutable state it's very hard to avoid concurrency bugs.

As for performance, it's best to measure it yourself. Every system is different: what are the workload characteristics, are operations dependent on the results of other operations or are they completely or mostly independent (which would allow massive parallelism), is latency or throughput more important, how many machines are there, and so on. Locking might be faster, or message passing might, or something completely different. If the same approach as in LMAX fits the problem at hand, then maybe that would be the fastest option. (I would categorize the LMAX architecture as message passing, though it's very different from actor-based message passing.)

Isiah answered 21/8, 2011 at 20:48 Comment(0)

Message passing doesn't use shared memory, which means it doesn't need locks, because each thread (or process) can only load from and store to its own memory; the way they communicate with each other is by sending and receiving messages.

Brady answered 15/2, 2015 at 21:14 Comment(0)
