Difference between ZeroMQ and IPC

Q1: What exactly is the difference between using ZeroMQ to send messages to child processes, as compared to the default inter-process communication explained here?

Q2: For direct process-to-child communication, which would be more appropriate (faster)?

Q3: The docs say "Creates an IPC channel". What kind of IPC does it use? TCP? Sockets?
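
For context, here is a minimal sketch of the "default" IPC channel the question refers to: child_process.fork() opens the channel, and send() plus the 'message' event carry serialized messages in both directions. A single self-forking file is used purely for illustration, assuming a CommonJS build where __filename is available.

```ts
import { fork } from "child_process";

if (process.send) {
  // Child side: forked with an IPC channel, so process.send is defined.
  process.on("message", (msg) => {
    console.log("child got:", msg);
    process.send!({ pong: true });
  });
} else {
  // Parent side: fork() re-runs this same file and wires up the IPC channel.
  const child = fork(__filename);
  child.on("message", (msg) => {
    console.log("parent got:", msg);
    child.disconnect();               // close the IPC channel so both processes can exit
  });
  child.send({ ping: true });
}
```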

Smaragdite answered 20/9, 2015 at 16:45 Comment(1)
This is a discussion question and, therefore, not a good fit for Stack Overflow. Have you tried chat? – Greatgranduncle

A good point to state at the very outset: ZeroMQ is broker-less


A1: The difference between using ZeroMQ to send messages & IPC

Well, put this way: ZeroMQ concentrates on much different benefits than just the ability to send a message and scale up (both of which are helpful).

ZeroMQ introduces (well-scalable) Formal Communication Patterns

That said, the core application-side focus is on which ZeroMQ pattern primitives can either directly fulfil the behaviour model needed between the participating agents (one PUB + many SUBs, many PUBs + many cross-connected SUBs), or be composed into a somewhat more complex, application-specific signalling plane (using the available ZeroMQ building blocks: behaviourally primitive socket archetypes + devices + application logic, providing finite-state-machine or transactional engines for added signalling-plane functionality). A minimal PUB/SUB sketch follows below.
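
To make the pattern idea concrete, here is a minimal one-PUB / many-SUBs sketch, assuming the zeromq npm package (the v6 promise-based API); the endpoint and topic name are illustrative only.

```ts
// Sketch only: assumes the `zeromq` npm package (v6, promise-based API).
import * as zmq from "zeromq";

async function publisher() {
  const pub = new zmq.Publisher();
  await pub.bind("tcp://127.0.0.1:5555");        // one PUB endpoint
  setInterval(() => pub.send(["ticks", Date.now().toString()]), 1000);
}

async function subscriber(name: string) {
  const sub = new zmq.Subscriber();
  sub.connect("tcp://127.0.0.1:5555");           // many SUBs can connect to the same PUB
  sub.subscribe("ticks");                        // topic filtering is part of the pattern
  for await (const [topic, msg] of sub) {
    console.log(name, topic.toString(), msg.toString());
  }
}

publisher();
subscriber("sub-1");
subscriber("sub-2");
```

The point is that topic filtering, fan-out and (re)connection handling come from the socket archetypes themselves, not from application code.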

Standard IPC provides a dumb O/S-based service with no behaviour

which is fine, if understood in the pure O/S context (i.e. "batteries included" is not the case here).

Nevertheless, any higher-level messaging support and other grand features (like fair-queueing, round-robin scheduling, mux-ed transport-agnostic service composition over any/all { inproc:// | ipc:// | tcp:// | pgm:// | ... } transport classes, millisecond-tuned multi-channel pollers, zero-copy message handovers and many other smart features) have to be designed and implemented on your own, which is exactly why ZeroMQ was put into the game: so that you do not have to (many thanks, Martin SUSTRIK & Pieter HINTJENS' team).


The best next step?

A bigger picture on this subject is available >>> with more arguments, a simple signalling-plane picture, and a direct link to a must-read book from Pieter HINTJENS.


A2: Faster? I would worry if anybody offered an easy answer. It depends... A lot...

If interested in nanomsg, a younger sister of ZeroMQ, check out the even more lightweight framework from Martin SUSTRIK at nanomsg.org >>>.

Fast, Faster, Fastest ...

For inspiration on minimum overhead (read: a high potential for speed) and zero-copy (read: efficient overhead avoidance), read about the inproc:// transport class for inter-thread messaging:
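
A minimal inproc:// sketch along those lines, again assuming the zeromq npm package (v6); the endpoint name is illustrative. Both sockets must live in the same process (and share a context), which is exactly why no O/S networking stack is involved.

```ts
// Sketch only: assumes the `zeromq` npm package (v6). inproc:// never touches the
// O/S networking stack; both sockets live in the same process and share a context.
import * as zmq from "zeromq";

async function main() {
  const push = new zmq.Push();
  await push.bind("inproc://pipeline");     // in-memory endpoint, no per-hop syscalls
  const pull = new zmq.Pull();
  pull.connect("inproc://pipeline");

  await push.send("job-1");
  const [msg] = await pull.receive();
  console.log(msg.toString());              // "job-1"
}

main();
```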


A3: It uses IPC.

IPC is a transport class of its own. There is no need to re-wrap / align / assemble / CRC / encapsulate / dispatch / decode / CRC-recheck / demap ... the raw IPC data into higher-abstraction TCP packets if it is being transported right between localhost processes over an IPC channel, is there?
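
A hedged sketch of that point, assuming the zeromq npm package (v6) on a POSIX host where ipc:// maps to a Unix domain socket; the socket path is illustrative.

```ts
// Sketch only: assumes the `zeromq` npm package (v6) on a POSIX host,
// where ipc:// maps to a Unix domain socket. The path is illustrative.
import * as zmq from "zeromq";

async function replier() {
  const rep = new zmq.Reply();
  await rep.bind("ipc:///tmp/example.sock");   // no TCP framing for a localhost hop
  for await (const [req] of rep) {
    await rep.send("pong:" + req.toString());
  }
}

async function requester() {
  const req = new zmq.Request();
  req.connect("ipc:///tmp/example.sock");      // swap in tcp://host:port and nothing else changes
  await req.send("ping");
  const [reply] = await req.receive();
  console.log(reply.toString());
}

replier();
requester();
```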

Ban answered 21/9, 2015 at 11:59 Comment(2)
For Q3: I believe it uses a unix socket on Linux and TCP on Windows. But IPC is not a transport on its own; an IPC transport could be implemented using shared memory, for example (which would be faster but way harder to do). – Welloiled
Not necessarily: on Windows, ipc:// need not incur any TCP overhead at all and may, as in nanomsg, go over named pipes >>> nanomsg.org/v0.1/nn_ipc.7.html – Ban

Using a message queue like ZeroMQ gives you the ability to scale out to multiple machines, whereas child-process communication is local only and can only scale up to take advantage of the hardware on that machine.

Going through ZeroMQ over TCP is going to be slower, whereas child-process communication uses a pipe or standard I/O, which is faster because it avoids the overhead of TCP and the network stack. That said, I'd say the speed advantage here is negligible, especially if you plan on scaling out to multiple machines.

It's also worth noting that ZeroMQ can use Unix domain sockets and offers other forms of IPC that are pretty similar to what the child_process core module offers, though it would likely be more difficult to use.

Perhaps it wouldn't be a bad idea to use ZeroMQ with unix_sockets or piping until you need to scale out across multiple machines.
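
As a sketch of that approach (assuming the zeromq npm package, v6; the FEED_ENDPOINT variable and socket path are purely illustrative): the socket code stays the same and only the endpoint string changes when you later scale out.

```ts
// Sketch only: assumes the `zeromq` npm package (v6).
// Start with a local transport; switch the endpoint string when you need to scale out.
import * as zmq from "zeromq";

// ipc:// (Unix domain socket) while everything is on one box,
// tcp:// once the consumer moves to another machine. The socket code is unchanged.
const endpoint = process.env.FEED_ENDPOINT ?? "ipc:///tmp/feed.sock";  // hypothetical env var

async function main() {
  const push = new zmq.Push();
  await push.bind(endpoint);

  const pull = new zmq.Pull();
  pull.connect(endpoint);          // could just as well run on another host with a tcp:// endpoint

  await push.send("work item");
  const [msg] = await pull.receive();
  console.log(msg.toString());
}

main();
```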

Nobukonoby answered 20/9, 2015 at 18:28 Comment(0)

Well, in the case you linked to, we are talking about two competing mechanisms: one that is generic (ZMQ) and another that is bound to Node.js. Quoting from the documentation on IPC that you linked to: "Accessing the IPC channel fd in any way other than process.send() or using the IPC channel with a child process that is not a Node.js instance is not supported."

In general, IPC (inter-process communication) can refer to a number of different mechanisms, including ZMQ. ZMQ is an IPC mechanism. There are other, lower-level mechanisms such as PIPEs and UDP sockets. I have worked with both PIPEs and UDP sockets and found that a higher-level protocol such as ZMQ is almost always better, even in a simple two-party case, because of two problems with PIPEs and UDP:

Buffering and chunking.

Buffering: The operating system buffers parts of the messages that are sent between processes, and these buffers must be flushed. You end up having to write a bunch of fiddly code that is hard to get right, both to flush messages and to read and stitch together partially delivered messages.

Chunking: UDP has a variable maximum message size, somewhere between roughly 500 and 65,000 bytes depending on the path. You have to deal with these maximum sizes and chunk your messages yourself when using UDP directly. ZMQ deals with framing automatically and you don't have to fiddle with it; there is effectively no maximum message size in ZMQ (well, messages do have to fit in memory).
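
For illustration, here is roughly the kind of fiddly framing and re-assembly code the answer is talking about when you use a raw pipe or stream directly; ZMQ makes this unnecessary because send/receive deal in whole messages. This is only a sketch, using length-prefixed frames over any Node stream.

```ts
// Sketch only: length-prefixed framing over a raw Node stream. Reads may arrive in
// arbitrary chunks, so we have to buffer and stitch messages back together ourselves.
import { Duplex } from "stream";

function frame(payload: Buffer): Buffer {
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);       // 4-byte length prefix
  return Buffer.concat([header, payload]);
}

function onMessages(stream: Duplex, handler: (msg: Buffer) => void) {
  let pending = Buffer.alloc(0);
  stream.on("data", (chunk: Buffer) => {
    pending = Buffer.concat([pending, chunk]);   // stitch partial reads together
    while (pending.length >= 4) {
      const size = pending.readUInt32BE(0);
      if (pending.length < 4 + size) break;      // message not complete yet, wait for more data
      handler(pending.subarray(4, 4 + size));
      pending = pending.subarray(4 + size);      // keep the remainder for the next round
    }
  });
}
```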

Somatotype answered 15/7, 2021 at 9:35 Comment(0)
