blocking channels vs async message passing

I've noticed two approaches to "message passing". One I've seen Erlang use, and the other is from Stackless Python. From what I understand, here's the difference:

Erlang Style - Messages are sent and queued into the mailbox of the receiving process. From there they are removed on a FIFO basis. Once the first process sends the message, it is free to continue.

Python Style - Process A queues up to send to process B. B is currently performing some other action, so A is frozen until B is ready to receive. Once B opens a read channel, A sends the data, then they both continue.
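
To make the difference concrete, here is a rough Erlang sketch of both styles as I understand them (all the names here are made up for illustration, and the second style has to be emulated, since Erlang sends never block):

```erlang
demo() ->
    %% Erlang style: the send operator (!) never blocks; the message just
    %% lands in B's mailbox and is pulled out later, in FIFO order.
    B = spawn(fun Loop() ->
            receive Msg -> io:format("B got ~p~n", [Msg]), Loop() end
        end),
    B ! hello,    %% returns immediately, even if B is busy
    B ! world,    %% both messages queue up in B's mailbox

    %% Stackless-channel style, emulated: A only continues once C has
    %% actually taken the message off its "channel".
    C = spawn(fun() ->
            timer:sleep(1000),   %% C is busy doing something else first
            receive {From, Msg} -> From ! {self(), got, Msg} end
        end),
    C ! {self(), question},
    receive {C, got, question} -> ok end.   %% A is frozen here until C reads
```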

Now I see the pro of the Erlang method being that you don't have any blocked processes. If B is never able to receive, A can still continue. However, I have noticed in some programs I have written that it is possible for Erlang mailboxes to fill up with hundreds (or thousands) of messages, since the inflow of messages is greater than the outflow.

Now I haven't written a large program in either framework/language, so I'm wondering what your experiences are with this, and whether it's something I should even worry about.

Yes, I know this is abstract, but I'm also looking for rather abstract answers.

Materials answered 10/2, 2010 at 19:32

My experience in Erlang programming is that when you expect a high messaging rate (that is, a faster producer than consumer), you add your own flow control. A simple scenario:

  • The producer will: send a message, wait for the ack, then repeat.
  • The consumer will: wait for a message, send an ack once the message has been received and processed, then repeat.

One can also invert it: the producer waits for the consumer to come and grab the next N available messages.
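
As a minimal sketch of the basic scheme (the function and message names here are my own, nothing standard): the producer never gets more than one unprocessed message ahead of the consumer.

```erlang
%% Producer: send one item, then block until the consumer acknowledges it.
produce(Consumer, [Item | Rest]) ->
    Consumer ! {item, self(), Item},
    receive
        ack -> produce(Consumer, Rest)
    end;
produce(_Consumer, []) ->
    done.

%% Consumer: wait for an item, process it, ack it, repeat.
consume() ->
    receive
        {item, Producer, Item} ->
            io:format("processing ~p~n", [Item]),
            Producer ! ack,
            consume()
    end.
```

From inside the same module you would start it with something like Consumer = spawn(fun consume/0), produce(Consumer, Items).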

These approaches and other flow-control schemes can be hidden behind functions; the first one is mostly already available as gen_server:call/2,3 against a gen_server OTP behaviour process.
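
For instance, a minimal gen_server sketch along those lines (the module and function names are my own, only the callbacks are standard): gen_server:call/2 blocks the producer until handle_call/3 has processed the item and replied, which is exactly the send-then-wait-for-ack loop above.

```erlang
-module(sink).
-behaviour(gen_server).
-export([start_link/0, push/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

%% Producer side: blocks until the sink has processed Item and replied.
push(Pid, Item) ->
    gen_server:call(Pid, {push, Item}).

%% gen_server callbacks
init([]) ->
    {ok, []}.

handle_call({push, Item}, _From, State) ->
    %% ... do the actual work on Item here ...
    {reply, ok, [Item | State]}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

The producer then just calls sink:push(Pid, Item) in a loop and is automatically throttled to the consumer's pace (gen_server:call/2 also has a default 5 second timeout; gen_server:call/3 lets you set it).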

I see asynchronous messaging, as in Erlang, as the better approach, since when latencies are high (say, when messaging between computers) you very much want to avoid synchronizing on every send. One can then compose clever ways to implement flow control: say, requiring an ack from the consumer for every N messages the producer has sent it, or sending a special "ping me when you have received this one" message now and then to measure the ping time.
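
A sketch of the ack-every-N variant (again, the names, the message formats and the choice of N are mine): the producer may run at most N messages ahead of the consumer before it has to stop and wait for an ack.

```erlang
-define(N, 100).

%% Producer: stream items, but never more than N unacknowledged at a time.
produce(_Consumer, [], _Unacked) ->
    done;
produce(Consumer, Items, ?N) ->
    receive
        {ack, ?N} -> produce(Consumer, Items, 0)
    end;
produce(Consumer, [Item | Rest], Unacked) ->
    Consumer ! {item, self(), Item},
    produce(Consumer, Rest, Unacked + 1).

%% Consumer: process items and send one ack per N received.
consume(Seen) ->
    receive
        {item, Producer, Item} ->
            io:format("processing ~p~n", [Item]),
            case Seen + 1 of
                ?N -> Producer ! {ack, ?N}, consume(0);
                S -> consume(S)
            end
    end.
```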

Shelashelagh answered 10/2, 2010 at 21:29

Broadly speaking, this is unbounded queues vs bounded queues. A Stackless channel can be considered the special case of a queue of size 0.

Bounded queues have a tendency to deadlock: two threads/processes each trying to send a message to the other while both queues are full.
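
To make that concrete with a sketch (emulating a blocking send in Erlang, since plain Erlang sends don't block; the names are mine): each process sends and then refuses to do anything else until the other side acknowledges, so neither ever gets around to acknowledging.

```erlang
%% A blocking, rendezvous-style send: wait until the receiver acknowledges.
sync_send(To, Msg) ->
    To ! {msg, self(), Msg},
    receive {ack, To} -> ok end.

loop(Peer) ->
    ok = sync_send(Peer, ping),   %% both processes block here forever
    receive {msg, From, _} -> From ! {ack, self()} end,
    loop(Peer).

start() ->
    A = spawn(fun() -> receive {peer, P} -> loop(P) end end),
    B = spawn(fun() -> receive {peer, P} -> loop(P) end end),
    A ! {peer, B},
    B ! {peer, A}.
```

Both mailboxes now hold a message that neither process will ever read, because each is stuck waiting for an ack that only the other could send.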

Unbounded queues have a more subtle failure mode. A large mailbox won't meet latency requirements, as you mentioned. Go far enough and it will eventually overflow; there's no such thing as infinite memory, so it's really just a bounded queue with a huge limit that aborts the process when full.

Which is best? That's hard to say. There are no easy answers here.

Willful answered 10/2, 2010 at 20:34
