can a python script know that another instance of the same script is running... and then talk to it?

I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?

Specifically, I'd like to enable the following behavior:

  1. "foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
  2. every few minutes the same script is launched again, but with different command-line parameters
  3. when launched, the script should see if any other instances are running.
  4. if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
  5. instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.

So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?

Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.

I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.

More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.

This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.

But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep the invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.

BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers.

Diorama answered 29/5, 2010 at 16:46 Comment(1)
Why does the worker script have to be the same as the scripts the monitoring tool invokes? The worker script could be a server process that receives commands, sent by command relay clients called by your monitoring framework, which have only one task: telling the server what it shall do.Felice

The Alex Martelli approach of setting up a communications channel is the appropriate one. I would use multiprocessing.connection.Listener to create a listener at an address of your choice. Documentation at: http://docs.python.org/library/multiprocessing.html#multiprocessing-listeners-clients

Rather than using AF_INET (sockets) you may elect to use AF_UNIX for Linux and AF_PIPE for Windows. Hopefully a small "if" wouldn't hurt.
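For instance, that platform check might look like the following sketch (the pipe and socket names are arbitrary examples, not anything mandated by the library):

import sys

# Pick a connection family and address per platform; the names below
# are invented example values.
if sys.platform == 'win32':
    address, family = r'\\.\pipe\my_script', 'AF_PIPE'
else:
    address, family = '/tmp/my_script.sock', 'AF_UNIX'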

Edit: I guess an example wouldn't hurt. It is a basic one, though.

#!/usr/bin/env python

from multiprocessing.connection import Listener, Client
import socket

def myloop(address):
    try:
        # Binding the address succeeds for exactly one process: the
        # winner becomes the server; every later instance lands in the
        # except branch and talks to it as a client instead.
        listener = Listener(*address)
        conn = listener.accept()
        serve(conn)
    except socket.error:
        conn = Client(*address)
        conn.send('this is a client')
        conn.send('close')
        conn.close()

def serve(conn):
    while True:
        msg = conn.recv()
        if msg.upper() == 'CLOSE':
            break
        print(msg)
    conn.close()

if __name__ == '__main__':
    address = ('/tmp/testipc', 'AF_UNIX')
    myloop(address)

This works on OS X, so it needs testing on Linux and (after substituting the right address) on Windows. A lot of caveats exist from a security point of view, the main one being that conn.recv unpickles its data, so you are almost always better off with recv_bytes.
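For instance, a sketch of the same server loop using the byte-oriented calls (same made-up messages as above):

def serve(conn):
    # recv_bytes hands back raw bytes and never unpickles, so a hostile
    # client cannot execute code via a crafted pickle payload.
    while True:
        msg = conn.recv_bytes()
        if msg.upper() == b'CLOSE':
            break
        print(msg.decode('utf-8', errors='replace'))
    conn.close()

with the client side sending conn.send_bytes(b'this is a client') and conn.send_bytes(b'close') to match.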

Indrawn answered 29/5, 2010 at 18:16 Comment(1)
Great answer! Being able to use a named pipe (Windows) or fifo (unix), since I can name the pipe/fifo after the script, which will be unique, seems much easier than having to keep a mapping in place between scripts and port numbers.Diorama

The general approach is to have the script, on startup, set up a communication channel in a way that's guaranteed to be exclusive (other attempts to set up the same channel fail in a predictable way) so that further instances of the script can detect the first one's running and talk to it.

Your requirements for cross-platform functionality strongly point towards using a socket as the communication channel in question: you can designate a "well known port" that's reserved for your script, say 12345, and open a socket on that port listening to localhost only (127.0.0.1). If the attempt to open that socket fails, because the port in question is "taken", then you can connect to that port number instead, and that will let you communicate with the existing script.
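A minimal sketch of that bind-or-connect logic (the port number and the plain-text hand-off format are arbitrary examples):

#!/usr/bin/env python

import socket
import sys

PORT = 12345  # the "well known port" reserved for this script (example value)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Only one process can bind the port, so success means we are
        # instance #1 and become the server.
        server.bind(('127.0.0.1', PORT))
    except OSError:
        # Port taken: an instance is already running. Hand it our
        # command-line parameters and exit.
        with socket.create_connection(('127.0.0.1', PORT)) as conn:
            conn.sendall(' '.join(sys.argv[1:]).encode())
        return
    server.listen(5)
    while True:
        conn, _ = server.accept()
        with conn:
            params = conn.recv(4096).decode()
        print('received work:', params)  # spin up a worker thread here

if __name__ == '__main__':
    main()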

If you're not familiar with socket programming, the Socket Programming HOWTO in the Python documentation is a good introduction. You can also look at the relevant chapter in Python in a Nutshell (I'm biased about that one, of course;-).

Heaney answered 29/5, 2010 at 16:54 Comment(3)
Hi Alex - thanks for the quick response! My main concerns with a well-known-port approach would be the possibility of conflicts (we don't own the servers so other programs may use those ports) and port-number-management (since we'll apply the single-instance trick to many scripts maintained by different script authors). Are there ways around the issues above, or would I be better off with a "named IPC" mechanism? I suspect named pipes on windows and domain sockets on *nix could do this, but I don't know how easy they'd be to use from Python.Diorama
@Justin, I'm not sure how you'd use mechanisms such as named pipes and unix-domain sockets in a cross-platform and "intrinsically mutually exclusive" way. To support the specific needs you identify, you could have each script record which "not so well known" port a script named X is supposed to use, by accessing and updating a .dbm (or sqlite, etc.) archive that keeps the name-to-port correspondence (if a script doesn't find its name there on startup, it gets a fresh port from the OS and records it), perhaps with some file-locking mechanism to avoid race conditions.Heaney
@Muhammad Alkarouri's answer below (use the multiprocessing package) seems like a workable cross-platform solution, while avoiding the complexity of mapping scripts to port numbers. Any downside to using multiprocessing?Diorama
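A rough sketch of the name-to-port registry Heaney describes in the comment above (the sqlite schema and registry path are invented for illustration):

import sqlite3
import socket

REGISTRY = '/tmp/script_ports.db'  # invented path, one registry per machine

def port_for(script_name):
    # sqlite serializes concurrent writers for us, standing in for the
    # file-locking mechanism mentioned above.
    db = sqlite3.connect(REGISTRY)
    db.execute('CREATE TABLE IF NOT EXISTS ports (name TEXT PRIMARY KEY, port INTEGER)')
    with db:
        row = db.execute('SELECT port FROM ports WHERE name = ?', (script_name,)).fetchone()
        if row:
            return row[0]
        # Not registered yet: let the OS hand us a fresh port, record it.
        s = socket.socket()
        s.bind(('127.0.0.1', 0))
        port = s.getsockname()[1]
        s.close()
        db.execute('INSERT INTO ports VALUES (?, ?)', (script_name, port))
        return port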

Perhaps try using sockets for communication?

Defensible answered 29/5, 2010 at 16:50 Comment(0)

Sounds like your best bet is sticking with a pid file, but have it contain not only the process id but also the port number the prior instance is listening on. On startup, check for the pid file: if it's present and a process with that id is running, send your data to that process and quit; otherwise overwrite the pid file with the current process's info.
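A POSIX-flavored sketch of that flow (the pid-file path is hypothetical, and the os.kill(pid, 0) liveness probe must be replaced on Windows, where os.kill can only terminate a process):

#!/usr/bin/env python

import os
import socket
import sys

PIDFILE = '/tmp/foo.pid'  # hypothetical path; one file per script name

def pid_running(pid):
    # Signal 0 checks existence/permissions without sending a signal.
    try:
        os.kill(pid, 0)
        return True
    except OSError:
        return False

def main():
    if os.path.exists(PIDFILE):
        with open(PIDFILE) as f:
            pid, port = (int(x) for x in f.read().split())
        if pid_running(pid):
            # Prior instance is alive: forward our parameters and quit.
            with socket.create_connection(('127.0.0.1', port)) as conn:
                conn.sendall(' '.join(sys.argv[1:]).encode())
            return
    # No live prior instance: become it. Bind an OS-assigned port and
    # record our pid and port for the next launch to find.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 0))
    port = server.getsockname()[1]
    with open(PIDFILE, 'w') as f:
        f.write('%d %d' % (os.getpid(), port))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        with conn:
            print('received work:', conn.recv(4096).decode())

if __name__ == '__main__':
    main()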

Pandemonium answered 29/5, 2010 at 18:12 Comment(0)
