Make sure only a single instance of a program is running

Is there a Pythonic way to have only one instance of a program running?

The only reasonable solution I've come up with is to run it as a server on some port; a second instance then fails when it tries to bind to the same port. But it's not really a great idea; maybe there's something more lightweight?

(Take into consideration that the program is expected to fail sometimes, e.g. to segfault, so things like a "lock file" won't work.)

Ruse answered 19/12, 2008 at 12:42 Comment(4)
Perhaps your life would be easier if you tracked down and fixed the segfault. Not that it's an easy thing to do.Larhondalari
It's not in my library, it's in python's libxml bindings and extremely shy - fires only once a couple days.Ruse
Python's standard library supports flock(), which is The Right Thing for modern UNIX programs. Opening a port uses a spot in a much more constrained namespace, whereas pidfiles are more complex as you need to check running processes to invalidate them safely; flock has neither problem.Timberland
This can alternatively be managed outside python using the command-line utility flock.Extradition
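A sketch of that command-line approach (the lock path is illustrative; flock(1) ships with util-linux, and -n makes a concurrent invocation fail immediately instead of waiting):

```shell
# Run a command under an exclusive lock; replace the echo with the real
# invocation, e.g. 'python3 myscript.py'. A second concurrent run exits
# non-zero right away because -n disables blocking.
flock -n /tmp/myscript.lock -c 'echo "doing the real work here"'
```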

The following code should do the job, it is cross-platform and runs on Python 2.4-3.2. I tested it on Windows, OS X and Linux.

from tendo import singleton
me = singleton.SingleInstance() # will sys.exit(-1) if other instance is running

The latest version of the code is available in singleton.py. Please file bugs on GitHub.

You can install tendo with pip: pip install tendo

Lucchesi answered 12/8, 2009 at 10:45 Comment(16)
Works under Linux, but only if the class is defined in the same file. That is, if you tried, "from singlething import SingleInstance" and then used "me = SingleInstance()", it would not lock the file.Kaseykasha
Should NOT we close the file and unlock in the destructor in case of non windows?Misogamy
I updated the answer and included a link to the latest version. If you find a bug please submit it to github and I will solve it as soon as possible.Lucchesi
I like this solution, but it failed(winXP32,python-3.2). You can see the Error here: pastebin.com/a40kj7ae It failed because of wrong path(lockfile: e:\somepath\temp\E:\testLock\test_fileLock.lock) I modified line 9: used relpath instead of abspath. I hope nothing will break by changing this. Sorry, i dont know how to use github.Northeastwards
@Northeastwards Thanks, I made a patch and released a newer version on pypi.python.org/pypi/tendoLucchesi
This syntax didn't work for me on windows under Python 2.6. What worked for me was: 1:from tendo import singleton 2:me = singleton.SingleInstance()Gumwood
Another dependancy for something as trivial as this? Doesn't sound very attractive.Cureton
Just a note: the link github.com/ssbarnea/tendo/blob/master/tendo/singleton.py is not available anymore (404'd)Semicircle
I'd much rather if this silently exited... it threw an error message of "Another instance is already running, quitting." which was quite noticeable to the user.Melt
@Melt it is open source, feel free to add this feature, as optional, and I will be happy to accept it. So far, in all my use cases, having two instances running at the same time was considered exceptional and I wanted to be notified about it; in some cases it may indicate a blocked instance...Lucchesi
The class doc still uses the syntax from before the correction of the sample code.Boart
Does the singleton handle processes that get a sigterm (for example if a process is running too long), or do I have to handle that?Wanhsien
This solution does not work. I tried it on Ubuntu 14.04, run the same script from 2 terminal windows simultaneously. They both run just fine.Sweeny
It works in Ubuntu 14.04 for me Dimon I think you must be doing it wrongDike
Doesn't work in Debian. ("error: Could not find suitable distribution for Requirement.parse('install')")Counterweight
It doesn't work with Python >=3.7 when executable is made.Eucalyptol

Simple, cross-platform solution, found in another question by zgoda:

import fcntl
import os
import sys

def instance_already_running(label="default"):
    """
    Detect if an instance with the label is already running, globally
    at the operating system level.

    Using `os.open` ensures that the file pointer won't be closed
    by Python's garbage collector after the function's scope is exited.

    The lock will be released when the program exits, or could be
    released if the file pointer were closed.
    """

    lock_file_pointer = os.open(f"/tmp/instance_{label}.lock", os.O_WRONLY | os.O_CREAT)

    try:
        fcntl.lockf(lock_file_pointer, fcntl.LOCK_EX | fcntl.LOCK_NB)
        already_running = False
    except IOError:
        already_running = True

    return already_running

A lot like S.Lott's suggestion, but with the code.

Ruse answered 21/12, 2008 at 14:2 Comment(10)
Out of curiosity: is this really cross-platform? Does it work on Windows?Naoma
There is no fcntl module on Windows (though the functionality could be emulated).Chemisorb
This is not "cross platform" (and I did not pretend it is). For Windows you have to use mutexes to achieve similar result - but I don't do Windows anymore and I have no code to share.Backchat
The problem here is that if you want to write any data to the pid_file (like the PID), you will lose it. See: coding.derkeiler.com/Archive/Python/comp.lang.python/2008-06/…Tasty
this is nice! Much better than do external calls for ps and grep stuff from there.Roustabout
TIP: if you want to wrap this in a function 'fp' must be global or the file will be closed after the function exits.Deicer
I have tried this for some days but when the application exits with control+z the file remains locked and I cannot start the app anymore until I delete the pid fileFusion
@Fusion Control+Z does not exit an application (on any OS I'm aware of), it suspends it. The application can be returned to the foreground with fg. So, it sounds like it is working correctly for you (i.e. app is still active, but suspended, so the lock remains in place).Phage
This code in my situation (Python 3.8.3 on Linux) needed modyfication: lock_file_pointer = os.open(lock_path, os.O_WRONLY | os.O_CREAT)Unused
If you don't want to keep lock files hanging around in /tmp, use module atexit to define a handler that closes the pointer(s) and deletes the file(s) when the script exits. This requires that you save the pointers someplace. (The lock files might remain if the script is killed, but not under normal operation.)Nosewheel

This code is Linux specific. It uses 'abstract' UNIX domain sockets, but it is simple and won't leave stale lock files around. I prefer it to the solution above because it doesn't require a specially reserved TCP port.

import socket
import sys

try:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    # Create an abstract socket by prefixing the name with a null byte.
    s.bind('\0postconnect_gateway_notify_lock')
except socket.error as e:
    error_code = e.args[0]
    error_string = e.args[1]
    print("Process already running (%d:%s). Exiting" % (error_code, error_string))
    sys.exit(0)

The unique string postconnect_gateway_notify_lock can be changed to allow multiple programs that need a single instance enforced.

Pansy answered 2/11, 2009 at 17:10 Comment(9)
Roberto, are you sure that after kernel panic or hard reset , file \0postconnect_gateway_notify_lock will not be present at boot up ? In my case AF_UNIX socket file still present after this and this destroys whole idea. Solution above with acquiring lock on specific filename is much reliable in this case.Roustabout
As noted above, this solution works on Linux but not on Mac OS X.Earphone
This solution does not work. I tried it on Ubuntu 14.04. Run the same script from 2 terminal windows simultaneously. They both run just fine.Sweeny
This worked for me in Ubuntu 16. And killing the process by any means allowed another one to start. Dimon I think you did something wrong in your test. (Perhaps you forgot to make your script sleep after the above code ran, so it immediately exited and released the socket.)Theriot
How long of a sleep is necessary since the original poster didn't specify this requirement? Sleeping for one second did not make this work under Ubuntu 18.0.4Montgolfier
It's not a question of sleep. The code works but only as inline code. I was putting it into a function. The socket was disappearing as soon as the function exited.Montgolfier
' postconnect_gateway_notify_lock' is just an arbitrary string, you can use 'baa_baa_black_sheep' and it still works fine. I thought it was some specific linux or socket related constant..Filing
what is the secret behind "by prefixing it with null" ? When I don't put that symbol, the solution makes stale file.Lemire
Just realized, that "abstract socket" doesn't create an actual file in Linux. That is the trick.Lemire

I don't know if it's pythonic enough, but in the Java world listening on a defined port is a pretty widely used solution, as it works on all major platforms and doesn't have any problems with crashing programs.

Another advantage of listening on a port is that you can send a command to the running instance. For example, when the user starts the program a second time, you could send the running instance a command telling it to open another window (that's what Firefox does, for example. I don't know if they use TCP ports or named pipes or something like that, though).
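A minimal Python sketch of that idea (the port number and message are illustrative, not from the answer): bind a localhost port and treat a bind failure as "another instance already owns it".

```python
import socket
import sys

PORT = 47765  # arbitrary unprivileged port, chosen for this example

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Keep this socket open for the whole life of the process.
    sock.bind(("127.0.0.1", PORT))
    sock.listen(1)
except OSError:
    # Another instance already owns the port; before exiting we could also
    # connect() to it and send a command such as "open a new window".
    sys.exit("Another instance is already running.")
```

Binding only to 127.0.0.1 keeps the port invisible to other machines; note that an unrelated program using the same port would produce a false positive.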

Naoma answered 19/12, 2008 at 12:46 Comment(2)
+1 to this, specially since it allows me to notify the running instance, so it creates another window, pops up, etc.Cureton
Use e.g. import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.bind(('localhost', DEFINED_PORT)). An OSError will be raised if another process is bound to the same port.Genitor

Never written python before, but this is what I've just implemented in mycheckpoint, to prevent it being started twice or more by crond:

import os
import fcntl

fh = None

def run_once():
    global fh
    # Lock the running script's own file; the lock is held until the process exits.
    fh = open(os.path.realpath(__file__), 'r')
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        os._exit(0)

run_once()

Found Slava-N's suggestion after posting this in another issue (http://stackoverflow.com/questions/2959474). This one is called as a function, locks the executing script's file (not a pid file) and maintains the lock until the script ends (normally or with an error).

Emmenagogue answered 31/8, 2011 at 14:2 Comment(2)
Very elegant. I changed it so it gets the path from the script's arguments. Also recommends embedding this in somewhere common - ExampleAppreciation
I found this helpful link In case you're using Windows fctnl's substitute on Windows is win32api. Hope this helps.Avisavitaminosis

Use a pid file. You have some known location, "/path/to/pidfile", and at startup you do something like this:

import os
import sys

pidfile_path = "/path/to/pidfile"

def pid_is_running(pid):
    # Signal 0 checks for the process without actually signalling it.
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    return True

if os.path.exists(pidfile_path):
    with open(pidfile_path, "r") as pidfile:
        pid_string = pidfile.read().strip()
    if int(pid_string) == os.getpid():
        # something is real weird
        sys.exit(1)
    elif pid_is_running(int(pid_string)):
        sys.exit(1)  # another instance is already running
    # otherwise the previous server must have crashed; log it and fall through
with open(pidfile_path, "w") as pidfile:
    pidfile.write(str(os.getpid()))

So, in other words, you're checking if a pidfile exists; if not, write your pid to that file. If the pidfile does exist, then check to see if the pid is the pid of a running process; if so, then you've got another live process running, so just shut down. If not, then the previous process crashed, so log it, and then write your own pid to the file in place of the old one. Then continue.

Pyxie answered 19/12, 2008 at 13:16 Comment(1)
This has a race condition. The test-then-write sequence may raise an exception if two programs start almost simultaneously, find no file and try to open it for writing concurrently. It should raise an exception in one, allowing the other to proceed.Polson

The best solution for this on windows is to use mutexes as suggested by @zgoda.

import win32event
import win32api
from winerror import ERROR_ALREADY_EXISTS

mutex = win32event.CreateMutex(None, False, 'name')
last_error = win32api.GetLastError()

if last_error == ERROR_ALREADY_EXISTS:
   print("App instance already running")

Some answers use fcntl (also used in @sorin's tendo package), which is not available on Windows; if you try to freeze your Python app with a package like PyInstaller, which does static imports, it throws an error.

Also, the lock-file method creates a read-only problem with database files (I experienced this with SQLite3).

Electroscope answered 18/2, 2019 at 11:36 Comment(1)
It doesn't seems to work for me (I'm running Python 3.6 on Windows 10)Janellejanene

Here is my eventual Windows-only solution. Put the following into a module, perhaps called 'onlyone.py', or whatever. Include that module directly in your __main__ Python script file.

import win32event, win32api, winerror, time, sys, os
main_path = os.path.abspath(sys.modules['__main__'].__file__).replace("\\", "/")

first = True
while True:
    mutex = win32event.CreateMutex(None, False, main_path + "_{<paste YOUR GUID HERE>}")
    if win32api.GetLastError() == 0:
        break
    win32api.CloseHandle(mutex)
    if first:
        print("Another instance of %s running, please wait for completion" % main_path)
        first = False
    time.sleep(1)

Explanation

The code attempts to create a mutex with a name derived from the full path to the script. We use forward slashes to avoid potential confusion with the real file system.

Advantages

  • No configuration or 'magic' identifiers needed, use it in as many different scripts as needed.
  • No stale files left around, the mutex dies with you.
  • Prints a helpful message while waiting.
Enhanced answered 23/9, 2016 at 16:4 Comment(0)

For anybody using wxPython for their application, you can use the function wx.SingleInstanceChecker documented here.

I personally use a subclass of wx.App which makes use of wx.SingleInstanceChecker and returns False from OnInit() if there is an existing instance of the app already executing like so:

import wx

class SingleApp(wx.App):
    """
    class that extends wx.App and only permits a single running instance.
    """

    def OnInit(self):
        """
        wx.App init function that returns False if the app is already running.
        """
        self.name = "SingleApp-{}".format(wx.GetUserId())
        self.instance = wx.SingleInstanceChecker(self.name)
        if self.instance.IsAnotherRunning():
            wx.MessageBox(
                "An instance of the application is already running", 
                "Error", 
                 wx.OK | wx.ICON_WARNING
            )
            return False
        return True

This is a simple drop-in replacement for wx.App that prohibits multiple instances. To use it simply replace wx.App with SingleApp in your code like so:

app = SingleApp(redirect=False)
frame = wx.Frame(None, wx.ID_ANY, "Hello World")
frame.Show(True)
app.MainLoop()
Cellulose answered 8/4, 2016 at 0:34 Comment(2)
After coding a socket-listing thread for a singleton I found this, which works great and I've already installed in a couple program, however, I would like the additional "wakeup" I can give the singleton so I can bring it to the front-and-center of a big pile of overlapping windows. Also: the "documented here" link points to pretty useless auto-generated documentation this is a better linkEous
@Eous You're right - that's a much better documentation link, have updated the answer.Cellulose

This may work.

  1. Attempt to create a PID file at a known location. If you fail, someone has the file locked and you're done.

  2. When you finish normally, close and remove the PID file so someone else can overwrite it.

You can wrap your program in a shell script that removes the PID file even if your program crashes.

You can also use the PID file to kill the program if it hangs.
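A sketch of those two steps, assuming os.open with O_CREAT | O_EXCL so that the existence check and the file creation happen as one atomic operation (the path and names are illustrative):

```python
import atexit
import os
import sys

PIDFILE = "/tmp/myprogram.pid"  # illustrative known location

try:
    # O_EXCL makes open fail atomically if the file already exists.
    fd = os.open(PIDFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    sys.exit("PID file present; another instance may be running.")

os.write(fd, str(os.getpid()).encode())
os.close(fd)
atexit.register(os.remove, PIDFILE)  # step 2: remove the file on normal exit
```

The atexit handler covers normal exits only; a crash still leaves the file behind, which is the weakness discussed in the comment below.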

Polson answered 19/12, 2008 at 12:56 Comment(1)
But if your program aborts, it leaves behind a PID file. The next run of the script then has to figure out whether it's a stale PID file, which has lots of potential race conditions. That's why you use the system locking functions.Tailwind

Building upon Roberto Rosario's answer, I came up with the following function:

SOCKET = None
def run_single_instance(uniq_name):
    try:
        import socket
        global SOCKET
        SOCKET = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        ## Create an abstract socket, by prefixing it with null.
        # this relies on a feature only in linux, when current process quits, the
        # socket will be deleted.
        SOCKET.bind('\0' + uniq_name)
        return True
    except socket.error as e:
        return False

We need to define a global SOCKET variable, since it will only be garbage collected when the whole process quits. If we declared a local variable in the function, it would go out of scope after the function exits, and thus the socket would be deleted.

All the credit should go to Roberto Rosario, since I am only clarifying and elaborating upon his code. This code will work only on Linux, as the following text, quoted from https://troydhanson.github.io/network/Unix_domain_sockets.html, explains:

Linux has a special feature: if the pathname for a UNIX domain socket begins with a null byte \0, its name is not mapped into the filesystem. Thus it won’t collide with other names in the filesystem. Also, when a server closes its UNIX domain listening socket in the abstract namespace, its file is deleted; with regular UNIX domain sockets, the file persists after the server closes it.

Macready answered 24/12, 2019 at 8:9 Comment(0)

Late answer, but for windows you can use:

from win32event import CreateMutex
from win32api import CloseHandle, GetLastError
from winerror import ERROR_ALREADY_EXISTS
import sys

class singleinstance:
    """ Limits application to single instance """

    def __init__(self):
        self.mutexname = "testmutex_{D0E858DF-985E-4907-B7FB-8D732C3FC3B9}"
        self.mutex = CreateMutex(None, False, self.mutexname)
        self.lasterror = GetLastError()
    
    def alreadyrunning(self):
        return (self.lasterror == ERROR_ALREADY_EXISTS)
        
    def __del__(self):
        if self.mutex:
            CloseHandle(self.mutex)

Usage

# do this at beginnig of your application
myapp = singleinstance()

# check is another instance of same program running
if myapp.alreadyrunning():
    print ("Another instance of this program is already running")
    sys.exit(1)
Thuja answered 30/12, 2020 at 3:23 Comment(1)
Perfect. Well documented and works as well !Prefab

Using a lock file is a quite common approach on Unix. If the program crashes, you have to clean up manually. You could store the PID in the file and, on startup, check whether there is a process with this PID, overriding the lock file if not. (However, you also need a lock around the read-file / check-PID / rewrite-file sequence.) You will find what you need for getting and checking the PID in the os package. The common way of checking whether a process with a given PID exists is to send it a non-fatal signal.

Other alternatives could be combining this with flock or posix semaphores.

Opening a network socket, as saua proposed, would probably be the easiest and most portable.
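The "non-fatal signal" check mentioned above is usually signal 0, which performs existence and permission checking without delivering anything; a small sketch:

```python
import os

def pid_alive(pid):
    """Return True if a process with this PID exists (signal 0 is a no-op)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user
    return True
```

For example, pid_alive(os.getpid()) is always True.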

Whipperin answered 19/12, 2008 at 13:1 Comment(0)

Here is a cross platform example that I've tested on Windows Server 2016 and Ubuntu 20.04 using Python 3.7.9:

import os

class SingleInstanceChecker:
    def __init__(self, id):
        if isWin():
            ensure_win32api()
            self.mutexname = id
            self.lock = win32event.CreateMutex(None, False, self.mutexname)
            self.running = (win32api.GetLastError() == winerror.ERROR_ALREADY_EXISTS)

        else:
            ensure_fcntl()
            self.lock = open(f"/tmp/instance_{id}.lock", 'wb')
            try:
                fcntl.lockf(self.lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
                self.running = False
            except IOError:
                self.running = True


    def already_running(self):
        return self.running
        
    def __del__(self):
        if self.lock:
            try:
                if isWin():
                    win32api.CloseHandle(self.lock)
                else:
                    self.lock.close()
            except Exception:
                pass

# ---------------------------------------
# Utility Functions
# Dynamically load win32api on demand
# Install with: pip install pywin32
win32api=winerror=win32event=None
def ensure_win32api():
    global win32api,winerror,win32event
    if win32api is None:
        import win32api
        import winerror
        import win32event


# Dynamically load fcntl on demand
# (fcntl is part of the Python standard library on Unix; no pip install needed)
fcntl=None
def ensure_fcntl():
    global fcntl
    if fcntl is None:
        import fcntl


def isWin():
    return (os.name == 'nt')
# ---------------------------------------

Here is it in use:

import time, sys

def main(argv):
    _timeout = 10
    print("main() called. sleeping for %s seconds" % _timeout)
    time.sleep(_timeout)
    print("DONE")


if __name__ == '__main__':
    SCR_NAME = "my_script"
    sic = SingleInstanceChecker(SCR_NAME)
    if sic.already_running():
        print("An instance of {} is already running.".format(SCR_NAME))
        sys.exit(1)
    else:
        main(sys.argv[1:])
Pacifism answered 2/2, 2021 at 0:38 Comment(2)
This is great... just a couple of tweaks: I've turned the whole thing into a self-contained class with the OS-specific imports simply happening in __init__. Also silently swallowing the exception on __del__ is a bit undesirable: adding logging.exception(ex) will spare future developers and maintainers much pain (NB if no logging is configured this still prints a stack trace... but obviously does not actually raise any exception).Piquet
PS in my version another import win32api line has to be added before win32api.CloseHandle(self.lock). Does no harm. No nasty globals.Piquet

I'm posting this as an answer because I'm a new user and Stack Overflow won't let me vote yet.

Sorin Sbarnea's solution works for me under OS X, Linux and Windows, and I am grateful for it.

However, tempfile.gettempdir() behaves one way under OS X and Windows and another way under some/many/all(?) other *nixes (ignoring the fact that OS X is also Unix!). The difference is important to this code.

OS X and Windows have user-specific temp directories, so a tempfile created by one user isn't visible to another user. By contrast, under many versions of *nix (I tested Ubuntu 9, RHEL 5, OpenSolaris 2008 and FreeBSD 8), the temp dir is /tmp for all users.

That means that when the lockfile is created on a multi-user machine, it's created in /tmp and only the user who creates the lockfile the first time will be able to run the application.

A possible solution is to embed the current username in the name of the lock file.
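That per-user naming could look like this sketch (the "myapp" prefix is illustrative; getpass.getuser() consults several environment variables before falling back to the password database):

```python
import getpass
import os
import tempfile

# One lock file per user, so users on a shared /tmp don't collide.
lock_path = os.path.join(
    tempfile.gettempdir(),
    "myapp_{}.lock".format(getpass.getuser()),
)
```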

It's worth noting that the OP's solution of grabbing a port will also misbehave on a multi-user machine.

Aila answered 2/12, 2010 at 16:29 Comment(1)
For some readers (e.g. me) the desired behaviour is that only one copy can run period, regardless of how many users are involved. So the per-user tmp directories are broken, while the shared /tmp or port lock exhibit desired behaviour.Nadabas

I use single_process on my gentoo;

pip install single_process

example:

from single_process import single_process

@single_process
def main():
    print(1)

if __name__ == "__main__":
    main()   

refer: https://pypi.python.org/pypi/single_process/

Soldier answered 24/4, 2014 at 7:31 Comment(3)
Fails in Py3. The package seems misconstructed.Euthenics
On Windows I get: ImportError: No module named fcntlMartinemartineau
pypi.python.org/pypi/single_process/1.0 is 404Thuja

I keep suspecting there ought to be a good POSIXy solution using process groups, without having to hit the file system, but I can't quite nail it down. Something like:

On startup, your process sends a 'kill -0' to all processes in a particular group. If any such processes exist, it exits. Then it joins the group. No other processes use that group.

However, this has a race condition - multiple processes could all do this at precisely the same time and all end up joining the group and running simultaneously. By the time you've added some sort of mutex to make it watertight, you no longer need the process groups.

This might be acceptable if your process only gets started by cron, once every minute or every hour, but it makes me a bit nervous that it would go wrong precisely on the day when you don't want it to.

I guess this isn't a very good solution after all, unless someone can improve on it?

Nadabas answered 24/7, 2013 at 15:44 Comment(0)

I ran into this exact problem last week, and although I did find some good solutions, I decided to make a very simple and clean python package and uploaded it to PyPI. It differs from tendo in that it can lock any string resource name. Although you could certainly lock __file__ to achieve the same effect.

Install with: pip install quicklock

Using it is extremely simple:

[nate@Nates-MacBook-Pro-3 ~/live] python
Python 2.7.6 (default, Sep  9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from quicklock import singleton
>>> # Let's create a lock so that only one instance of a script will run
...
>>> singleton('hello world')
>>>
>>> # Let's try to do that again, this should fail
...
>>> singleton('hello world')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/nate/live/gallery/env/lib/python2.7/site-packages/quicklock/quicklock.py", line 47, in singleton
    raise RuntimeError('Resource <{}> is currently locked by <Process {}: "{}">'.format(resource, other_process.pid, other_process.name()))
RuntimeError: Resource <hello world> is currently locked by <Process 24801: "python">
>>>
>>> # But if we quit this process, we release the lock automatically
...
>>> ^D
[nate@Nates-MacBook-Pro-3 ~/live] python
Python 2.7.6 (default, Sep  9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from quicklock import singleton
>>> singleton('hello world')
>>>
>>> # No exception was thrown, we own 'hello world'!

Take a look: https://pypi.python.org/pypi/quicklock

Docila answered 28/1, 2015 at 6:51 Comment(3)
I just installed it via "pip install quicklock", but when I try to use it via "from quicklock import singleton" I get an exception: "ImportError: cannot import name 'singleton'". This is on a Mac.Baffle
It turns out quicklock does not work with python 3. That is the reason it was failing for me.Baffle
Yes, sorry, it was not future-proofed at all. I will welcome a contribution to get it working!Docila

linux example

This method is based on the creation of a temporary file that is automatically deleted after you close the application. At program launch, we verify the existence of the file: if the file exists (there is a pending execution), the program exits; otherwise it creates the file and continues execution.

from tempfile import NamedTemporaryFile
import os
import sys

# Exit if a lock file from another instance is present; otherwise create
# one that disappears when this process closes it (delete=True).
if any(name.startswith('lock01_') for name in os.listdir('/tmp')):
    sys.exit()
f = NamedTemporaryFile(prefix='lock01_', dir='/tmp', delete=True)

YOUR CODE COMES HERE
Keynes answered 14/10, 2015 at 13:44 Comment(2)
Welcome to Stack Overflow! While this answer may be correct, please add some explanation. Imparting the underlying logic is more important than just giving the code, because it helps the OP and other readers fix this and similar issues themselves.Baker
Is this threadsafe? Seems like the check and the temp file creation are not atomic...Pape

On a Linux system one could also ask pgrep -a how many instances of the script appear in the process list (option -a reveals the full command-line string). E.g.

import os
import sys
import subprocess

procOut = subprocess.check_output("/bin/pgrep -u $UID -a python", shell=True,
                                  executable="/bin/bash", universal_newlines=True)

if procOut.count(os.path.basename(__file__)) > 1:
    sys.exit("found another instance of >{}<, quitting."
             .format(os.path.basename(__file__)))

Remove -u $UID if the restriction should apply to all users. Disclaimer: a) it is assumed that the script's (base)name is unique, b) there might be race conditions.

Pluralize answered 17/1, 2019 at 16:12 Comment(0)

Here's a good example for django with contextmanager and memcached: https://docs.celeryproject.org/en/latest/tutorials/task-cookbook.html

It can be used to protect against simultaneous operation on different hosts and to manage multiple tasks. It can also be adapted for simple Python scripts.

My modification of the above code is here:

import time
from contextlib import contextmanager
from django.core.cache import cache


@contextmanager
def memcache_lock(lock_key, lock_value, lock_expire):
    timeout_at = time.monotonic() + lock_expire - 3

    # cache.add fails if the key already exists
    status = cache.add(lock_key, lock_value, lock_expire)
    try:
        yield status
    finally:
        # memcache delete is very slow, but we have to use it to take
        # advantage of using add() for atomic locking
        if time.monotonic() < timeout_at and status:
            # don't release the lock if we exceeded the timeout
            # to lessen the chance of releasing an expired lock owned by someone else
            # also don't release the lock if we didn't acquire it
            cache.delete(lock_key)


LOCK_EXPIRE = 60 * 10  # Lock expires in 10 minutes


def main():
    lock_name, lock_value = "lock_1", "locked"
    with memcache_lock(lock_name, lock_value, LOCK_EXPIRE) as acquired:
        if acquired:
            # single instance code here:
            pass


if __name__ == "__main__":
    main()
Downhaul answered 28/2, 2021 at 16:52 Comment(0)

Here is a cross-platform implementation, creating a temporary lock file using a context manager.

Can be used to manage multiple tasks.

import os
from contextlib import contextmanager
from time import sleep


class ExceptionTaskInProgress(Exception):
    pass


# Context manager for suppressing exceptions
class SuppressException:
    def __init__(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return True


# Context manager for task
class TaskSingleInstance:
    def __init__(self, task_name, lock_path):
        self.task_name = task_name
        self.lock_path = lock_path
        self.lock_filename = os.path.join(self.lock_path, self.task_name + ".lock")

        if os.path.exists(self.lock_filename):
            raise ExceptionTaskInProgress("Resource already in use")

    def __enter__(self):
        self.fl = open(self.lock_filename, "w")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.fl.close()
        os.unlink(self.lock_filename)


# Here the task is silently interrupted
# if it is already running on another instance.
def main1():
    task_name = "task1"
    tmp_filename_path = "."
    with SuppressException():
        with TaskSingleInstance(task_name, tmp_filename_path):
            print("The task `{}` has started.".format(task_name))
            # The single task instance code is here.
            sleep(5)
            print("The task `{}` has completed.".format(task_name))


# Here the task is interrupted with a message
# if it is already running in another instance.
def main2():
    task_name = "task1"
    tmp_filename_path = "."
    try:
        with TaskSingleInstance(task_name, tmp_filename_path):
            print("The task `{}` has started.".format(task_name))
            # The single task instance code is here.
            sleep(5)
            print("The task `{}` has completed.".format(task_name))
    except ExceptionTaskInProgress:
        print("The task `{}` is already running.".format(task_name))


if __name__ == "__main__":
    main1()
    main2()
Downhaul answered 28/2, 2021 at 18:20 Comment(3)
I've tried this with a pyinstaller created exe on Windows. It works alright. However, if the process is killed the lock file is not deleted, so the users cannot start any instance. Adding atexit.register(my_exit_func) seems to solves this issue. However, there is still a risk in case of a power cut etc.Ottoottoman
To do this, you can add an additional timeout check.Downhaul
And you can add a task that clears the lock files after the system boots.Downhaul
R
0

Create a file with the name lockexclusive.py

import os
import sys
import atexit
import hashlib


@atexit.register # clean up at exit 
def cleanup():
    try:
        if config.lock_file is not None:
            os.close(config.lock_file)  # os.open() returns a raw fd, not a file object
        if config.fname:
            os.remove(config.fname)
    except Exception:
        pass


config = sys.modules[__name__] # this allows us to share variables with the main script
config.file = None
config.fname = None
config.lock_file = None
config.maxinstances = 1


def configure_lock(
    maxinstances: int = 1,
    message: str | None = None,
    file: str | None = None,
) -> None:
    """
    Configures a lock file for a given file path and maximum number of instances.

    Args:
        maxinstances (int, optional): The maximum number of instances allowed to access the file. Defaults to 1.
        message (str, optional): The message to print if the maximum number of instances is reached. Defaults to None.
        file (str, optional): The file path to configure the lock file for. Defaults to None.


    Returns:
        None

    Raises:
        None
    """
    if not file:  # if no file was passed, get the caller's filename from the stack frame
        f = sys._getframe(1)
        dct = f.f_globals
        file = dct.get("__file__", "")
    config.file = os.path.normpath(file)
    config.maxinstances = int(maxinstances)

    for inst in range(config.maxinstances):
        try:
            hash = hashlib.sha256((config.file + f"{inst}").encode("utf-8", "ignore"))  # unique name, so it doesn't clash with other .py files using this function
            config.fname = os.path.join(os.environ.get("TMP", "/tmp"), hash.digest().hex() + ".locfi")  # fall back to /tmp where TMP is unset
            tmpf = config.fname
            if os.path.exists(tmpf):
                os.remove(tmpf)
            config.lock_file = os.open(tmpf, os.O_CREAT | os.O_EXCL)
            break
        except Exception as fe:
            if inst + 1 == config.maxinstances:
                if message:
                    print(message)
                try:
                    sys.exit(1)
                finally:
                    os._exit(1) # just to make sure :) 
            else:
                continue

Import it in your script:

import sys
from time import sleep

from lockexclusive import configure_lock

# it can be used like this:
# configure_lock(maxinstances=1, message="More than one instance running",file=sys.argv[0])

# or without the file argument:
configure_lock(maxinstances=1, message="More than one instance running")

sleep(100)
Remscheid answered 23/4, 2023 at 1:41 Comment(0)
T
0

I wanted to do this with file locking, but to write the pid of the process that got the lock into the file. This turned out to be surprisingly subtle, because there's no way to make a single open() call that will:

  • Create the file if it doesn't exist
  • Open the file for read/write at the beginning of the file if it does exist

Just to review open() modes:

  • r = read,nocreate
  • r+ = read,write,nocreate
  • w = write, create if it didn't exist, truncate the file if it did exist
  • w+ = read, write, create if it didn't exist, truncate the file if it did exist
  • a = append, create if it didn't exist, don't truncate if it did exist; writes always go to the end of the file regardless of seek()

I ended up doing the following in a Linux environment:

import fcntl, os, sys, time
fh=open("myfile.dat",'a')  # Append creates the file if it didn't exist, but doesn't truncate the file if it already existed (which is what 'w' would do)
fh.close()
fh=open("myfile.dat","r+") # read,write,nocreate.  Position at beginning of the file
try:
  fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
  already_running_pid=fh.readline().strip('\n')
  print( f"Process with pid {already_running_pid} already has a lock on myfile.dat. Aborting.")
  sys.exit(1)
mypid=os.getpid()
print(f"writing pid {mypid} to myfile.dat")
fh.write(str(mypid)+"\n")
fh.truncate()  # drop any leftover bytes in case the previous pid was longer
fh.flush()
print(f"Semaphore lock acquired on myfile.dat")
time.sleep(10)
print("Exiting")
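A small variant (my sketch, same Linux-only assumptions as above): mode `'a+'` creates the file without truncating it and also allows reading, so the two open() calls can be collapsed into one. After the lock is acquired, `seek(0)` plus `truncate()` puts the file in a known state before the pid is written:

```python
import fcntl
import os
import sys

fh = open("myfile.dat", "a+")  # create if missing, never truncate, readable
try:
    fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    fh.seek(0)  # reads honor seek() even in append mode
    pid = fh.readline().strip()
    print(f"Process with pid {pid} already has a lock on myfile.dat. Aborting.")
    sys.exit(1)
fh.seek(0)
fh.truncate()  # drop any stale pid left by a previous run
fh.write(f"{os.getpid()}\n")  # in 'a+' writes go to EOF, which is now offset 0
fh.flush()
```

As with the version above, the kernel drops the flock automatically when the process dies, so a segfault can't leave the lock held.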
Tailwind answered 6/11, 2023 at 22:37 Comment(0)
S
-1
import sys, os

# start program
try:  # (1)
    os.unlink('lock')  # (2) remove a stale lock file left by a crashed run
    fd = os.open("lock", os.O_CREAT | os.O_EXCL)  # (3) atomically re-create it, claiming the lock
except OSError:
    try:
        fd = os.open("lock", os.O_CREAT | os.O_EXCL)  # (4) unlink failed because no lock existed; try to create one
    except OSError:
        print("Another Program running !..")  # (5) creation failed too: another instance holds the lock
        sys.exit()

# your program  ...
# ...

# exit program
try: os.close(fd)  # (6)
except OSError: pass
try: os.unlink('lock')
except OSError: pass
sys.exit()
Slurry answered 22/10, 2018 at 19:57 Comment(1)
Welcome to Stack Overflow! While this code block may answer the question, it would be best if you could provide a little explanation for why it does so. Please edit your answer to include such a description.Manteltree

© 2022 - 2025 — McMap. All rights reserved.