After researching Python daemons, this walkthrough seemed to be the most robust: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
Now I am trying to implement a pool of workers inside the daemon class. I believe it is working (I have not thoroughly tested the code), except that on stop I get a zombie process. I have read that I need to wait for the return code from the child, but I cannot see exactly how I need to do this yet.
Here are some code snippets:
def stop(self):
    ...
    try:
        while 1:
            self.pool.close()
            self.pool.join()
            os.kill(pid, SIGTERM)
            time.sleep(0.1)
    ...
Here I have tried os.killpg and a number of os.wait methods, but with no improvement. I have also played with closing/joining the pool both before and after the os.kill. This loop, as it stands, never ends, and as soon as it hits the os.kill I get a zombie process.

self.pool = Pool(processes=4) occurs in the __init__ section of the daemon. From run(self), which is executed after start(self), I will call self.pool.apply_async(self.runCmd, [cmd, 10], callback=self.logOutput). However, I wanted to address this zombie process before looking into that.
How can I properly implement the pool inside the daemon to avoid this zombie process?
The runCmd() function uses signal.signal(signal.SIGALRM, self.handler). Here the handler throws a custom exception saying that the command has gone past the allocated execution time. Why would I need this handler? I thought multiprocessing took care of that in pool.close and pool.join. Frankly, I don't know where the process is coming from, as I have not called apply_async and so I do not have workers or the callback threads. – Bernhard