How to use ZeroMQ's inproc and ipc transports?
I'm a newbie to ZeroMQ. ZeroMQ has TCP, inproc, and IPC transports. I'm looking for examples using Python 2.7 and inproc on Windows x64, which could also be used on Linux.

Also, I have been looking for UDP methods of transport and can't find examples.

The only example I found is:

import zmq
import zhelpers

context = zmq.Context()

sink = context.socket(zmq.ROUTER)
sink.bind("inproc://example")

# First allow 0MQ to set the identity
anonymous = context.socket(zmq.XREQ)
anonymous.connect("inproc://example")
anonymous.send("XREP uses a generated UUID")
zhelpers.dump(sink)

# Then set the identity ourself
identified = context.socket(zmq.XREQ)
identified.setsockopt(zmq.IDENTITY, "Hello")
identified.connect("inproc://example")
identified.send("XREP socket uses REQ's socket identity")
zhelpers.dump(sink)

The use case I'm thinking about is UDP-like distribution of info. Would PUSH/PULL over TCP be faster, or would inproc be faster?
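A rough way to measure this in one process (a sketch only: absolute numbers depend heavily on the machine, and the endpoint names are arbitrary):

```python
import threading
import time

import zmq

def timed_push_pull(bind_endpoint, n=5000):
    """Time n small PUSH/PULL messages over the given transport."""
    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.bind(bind_endpoint)
    # Resolve wildcard TCP ports ("tcp://127.0.0.1:*") to the real endpoint
    connect_endpoint = pull.getsockopt_string(zmq.LAST_ENDPOINT)

    def producer():
        push = ctx.socket(zmq.PUSH)
        push.connect(connect_endpoint)
        for _ in range(n):
            push.send(b"x")
        push.close()

    t = threading.Thread(target=producer)
    start = time.time()
    t.start()
    for _ in range(n):
        pull.recv()
    elapsed = time.time() - start
    t.join()
    pull.close()
    ctx.term()
    return elapsed

tcp_time = timed_push_pull("tcp://127.0.0.1:*")
inproc_time = timed_push_pull("inproc://bench")
print("tcp: %.3fs  inproc: %.3fs" % (tcp_time, inproc_time))
```

Note that both transports are timed inside one process, since inproc requires a shared Context anyway.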

Here's my test example:

Server:

import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("inproc://example2")

while True:
    #  Wait for next request from client
    message = socket.recv()
    print "Received request: ", message

    #  Do some 'work'
    time.sleep(1)

    #  Send reply back to client
    socket.send("World")

Client:

import zmq

context = zmq.Context()

#  Socket to talk to server
print "Connecting to hello world server..."
socket = context.socket(zmq.REQ)
socket.connect ("inproc://example2")

#  Do 10 requests, waiting each time for a response
for request in range(1, 11):
    print "Sending request ", request, "..."
    socket.send("Hello")

    #  Get the reply.
    message = socket.recv()
    print "Received reply ", request, "[", message, "]"

Error Msg:

 socket.connect ("inproc://example2")
File "socket.pyx", line 547, in zmq.core.socket.Socket.connect (zmq\core\socket.c:5347)
zmq.core.error.ZMQError: Connection refused
Epistaxis answered 13/12, 2011 at 16:13 Comment(1)
15

To the best of my knowledge, UDP is not supported by 0MQ. Also, IPC is only supported on OSes which have a POSIX-conforming implementation of named pipes; so, on Windows, you can really only use inproc, TCP, or PGM. However, above and beyond all this, one of 0MQ's major features is that your protocol is just part of the address. You can take any example, change the socket address, and everything should still work just fine (subject, of course, to the aforementioned restrictions). Also, the ZGuide has many examples (a good number of which are available in Python).
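For completeness, here is a minimal sketch of the question's REQ/REP pair reworked so that inproc actually works: both sockets share one Context in one process (the server runs on a background thread), and the bind is guaranteed to happen before the connect, which is what the "Connection refused" error was about.

```python
import threading

import zmq

context = zmq.Context()  # inproc requires BOTH ends to share one Context

def server(bound):
    sock = context.socket(zmq.REP)
    sock.bind("inproc://example2")
    bound.set()  # signal that bind has happened (inproc needs bind before connect)
    for _ in range(3):
        sock.recv()          # wait for "Hello"
        sock.send(b"World")  # send the reply
    sock.close()

bound = threading.Event()
t = threading.Thread(target=server, args=(bound,))
t.start()
bound.wait()  # don't connect until the server has bound

client = context.socket(zmq.REQ)
client.connect("inproc://example2")
replies = []
for _ in range(3):
    client.send(b"Hello")
    replies.append(client.recv())
print(replies)  # [b'World', b'World', b'World']

client.close()
t.join()
context.term()
```

The `threading.Event` is just one way to order bind before connect; recent libzmq versions also allow inproc connect-before-bind, but the explicit signal works everywhere.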

Lickerish answered 13/12, 2011 at 16:27 Comment(12)
@Merlin: are these in separate processes? Because 'inproc' is only suitable as a replacement for threading scenarios.Lickerish
here's the basic form of it: http://zguide.zeromq.org/py:mtserver. More details available here: http://zguide.zeromq.org/page:all#Multithreading-with-MQ.Lickerish
zguide.zeromq.org/py:mtserver doesn't make any sense to me. I looked at this and it's just more confusing. Where is the client side of the example? Or where is the worker side of the example?Epistaxis
Sorry, I should have been a bit clearer. The mtserver example is a reworked version of the server from the hello world example. It uses the same client: http://zguide.zeromq.org/py:hwclient. Effectively, the server process, creates several worker processes (each on a new thread). It also creates a socket for handling client requests and a socket for sending messages to the workers (i.e. threads). Finally, the zmq.device(zmq.QUEUE, clients, workers) line transfers messages between the two sockets.Lickerish
OK, that makes more sense. But REQ/REP does not use a worker, it uses a client... Furthermore, I thought inproc was faster than TCP; would adding a thread layer slow that down? Basically, I'm looking for examples which back up the speed claims of ZeroMQ. Do you know of any? Yes, Windows is not the best for testing the speed of anything, but I can test Linux distros in a VM.Epistaxis
let us continue this discussion in chatLickerish
Are you around to chat, and can the chat session last more than a day?Epistaxis
I'm available now... and the chat session should be available in perpetuity (assuming no one decides to close it).Lickerish
Will you be at the F# meetup on 2/9/2012 in NYC?Epistaxis
@raffian: I believe you misread the comment. inproc will work on all platforms. On Windows, you simply can't use IPC.Lickerish
@Lickerish: "so, you can really only use 'inproc' and TCP on Windows" comes off as inproc and TCP working only on Windows. I see your point, but you may want to clarify that part of your answer nonetheless.Dodger
@raffian: meh... kind of a subjective point. But I've re-worded the answer anyway.Lickerish
11

If (and only if) you use the ZMQ_PUB or ZMQ_SUB sockets - which you don't do in the examples you gave, where you use ROUTER, XREQ, etc. - you can use UDP, or more precisely UDP multicast, via

"epgm://host:port"

EPGM stands for Encapsulated PGM, i.e. PGM encapsulated in UDP, which is more compatible with existing network infrastructure than raw PGM.

See also http://api.zeromq.org/2-1:zmq-pgm

I don't know of any UDP support for unicast scenarios though.

Pancreatin answered 5/7, 2012 at 6:16 Comment(0)
6

ZeroMQ has thread-safe UDP support as of March 2016:

  • You have to use the Radio/Dish pattern (very similar to Pub/Sub)
  • Supported in libzmq and czmq
  • See tests/test_udp.cpp and tests/test_radio_dish.cpp in the libzmq source code
  • Full breakdown provided by Doron Somech on the zeromq-dev@ list thread: Thread safe Pub/Sub and Multicast
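A hedged sketch of what Radio/Dish looks like from pyzmq, assuming a build with the DRAFT API enabled (most binary pyzmq packages are not, so the sketch checks first; the port and group name are made up):

```python
import time

import zmq

# RADIO/DISH are DRAFT sockets: both libzmq and pyzmq must be compiled
# with draft API support for them to exist at all.
draft_available = bool(getattr(zmq, "DRAFT_API", False))

if not draft_available:
    print("pyzmq built without draft API; RADIO/DISH not available")
else:
    ctx = zmq.Context()
    dish = ctx.socket(zmq.DISH)          # UDP receiver
    dish.bind("udp://127.0.0.1:5556")
    dish.join("weather")                 # DISH sockets join named groups
    dish.setsockopt(zmq.RCVTIMEO, 2000)  # UDP is lossy; don't block forever

    radio = ctx.socket(zmq.RADIO)        # UDP sender
    radio.connect("udp://127.0.0.1:5556")

    time.sleep(0.2)                      # give the sockets time to set up
    radio.send(b"rainy", group="weather")
    try:
        frame = dish.recv(copy=False)
        print(frame.group, bytes(frame))
    except zmq.Again:
        print("datagram was lost (UDP gives no delivery guarantee)")

    radio.close()
    dish.close()
    ctx.term()
```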
Pskov answered 11/4, 2016 at 19:44 Comment(2)
The link to the mailing list is broken, go here insteadInterline
Is there no support in Python?Kindrakindred
1

I had the same problem when my pyzmq and zmq versions were older. I upgraded to 15.2.0, which resolved the problem. The address prefix that I used is "inproc://".

OS: Win7 x64; Python: 2.7.6

Citole answered 18/2, 2016 at 2:12 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.