how to automatically run a bash script when my qsub jobs are finished on a server?

I would like to run a script when all of the jobs that I have sent to a server are done.

For example, I send:

ssh server "for i in config*; do qsub ./run 1 $i; done"

And I get back a list of the jobs that were started. I would like to automatically start another script on the server to process the output from these jobs once all are completed.

I would appreciate any advice that would help me avoid the following inelegant solution:

If I save each of the 1000 job IDs from the above call in a separate file, I could check the contents of each file against the current list of running jobs, i.e. the output of a call to:

ssh server qstat

I would only need to check every half hour or so (a rough sketch of that polling loop is below), but I imagine there is a better way.
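For reference, the polling approach would look something like this (a rough sketch; jobids.txt holding the saved job IDs, the processing script name, and the interval are all placeholders):

# keep waiting while any saved job id still shows up in qstat
while ssh server qstat | grep -q -f jobids.txt; do
    sleep 1800      # check every half hour
done
ssh server ./process_output.sh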

Orator answered 7/10, 2010 at 21:57

It depends a bit on which job scheduler you are using and which version, but there is another approach you can take if your results processing can also run on the same queue as the jobs.

One very handy way of managing lots of related jobs in more recent versions of Torque (and with Grid Engine, and others) is to launch the individual runs as a job array (cf. http://docs.adaptivecomputing.com/torque/4-1-4/Content/topics/commands/qsub.htm#-t). This requires mapping the individual runs to numbers somehow, which may or may not be convenient; but if you can do it for your jobs, it greatly simplifies managing them: you can qsub them all in one line, and you can qdel or qhold them all at once (while still being able to deal with jobs individually). A sketch of such a submission follows below.
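For example, with Torque the submission might look something like this (a sketch, not a drop-in: run_array.sh is a hypothetical wrapper, the config files are assumed to have been renamed config.1 ... config.1000, and the name of the array-index variable, $PBS_ARRAYID here, depends on your Torque version):

# run_array.sh -- hypothetical wrapper script
# assumes config files renamed config.1 ... config.1000 and that this
# Torque version exposes the array index as $PBS_ARRAYID
./run 1 config.$PBS_ARRAYID

A single qsub -t 1-1000 run_array.sh then submits all 1000 runs as one array job under one job id.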

If you do this, you could then submit an analysis job with a dependency on the array, so that it runs only once all of the jobs in the array are complete (cf. http://docs.adaptivecomputing.com/torque/4-1-4/Content/topics/commands/qsub.htm#dependencyExamples). Submitting that job would look like:

qsub -W depend=afterokarray:427[] analyze.sh

where analyze.sh contains the analysis script, and 427 is the job id of the array you launched (the [] means: run only after the entire array has completed). The syntax differs for other schedulers (e.g. SGE/OGE), but the idea is the same.
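In practice you would capture the array's job id from qsub and feed it into the dependency, roughly like this (a sketch, assuming the hypothetical run_array.sh wrapper above; the exact id format, e.g. 427[].server versus 427.server, varies between Torque versions, so the bracket handling may need adjusting):

ARRAY_ID=$(qsub -t 1-1000 run_array.sh)            # e.g. "427[].server"
qsub -W depend=afterokarray:${ARRAY_ID%%.*} analyze.sh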

Getting this right can take some doing, and Tristan's approach certainly has the advantage of being simple and working with any scheduler; but if you'll be doing a lot of this, learning to use job arrays may be worth your time.

Cargo answered 12/10, 2010 at 2:46

Something you might consider is having each job script touch a file named something like $i.jobdone in a dedicated folder; in your master script, you can then simply use ls *.jobdone | wc -l to test for the right number of finished jobs. A minimal sketch is below.
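A minimal sketch of that idea, assuming a shared directory /path/to/done and 1000 jobs (both placeholders):

# at the end of each job script:
touch /path/to/done/$i.jobdone

# in the master script, poll until every job has checked in:
while [ "$(ls /path/to/done/*.jobdone 2>/dev/null | wc -l)" -lt 1000 ]; do
    sleep 300
done
./process_output.sh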

Teresetereshkova answered 7/10, 2010 at 22:47

You can use wait to stop execution until all your jobs are done. You can even collect all the exit statuses and other running statistics (time taken, count of jobs done so far, whatever) if you cycle around waiting for specific IDs.

I'd write a small C program to do the waiting and collecting (if you have permissions to upload and run executables), but you can easily use the bash wait built-in for roughly the same purpose, albeit with less flexibility.

Edit: small example.

#!/bin/bash

...
waitfor=''

for i in tasks; do            # 'tasks' is a placeholder for your list of jobs
    task &                    # placeholder: launch each job in the background
    waitfor="$waitfor $!"     # collect the PID of the background job
done

wait $waitfor                 # blocks until every collected PID has exited
...

If you run this script in the background, it won't get in your way, and whatever comes after the wait line will run once your jobs are done.

Fordone answered 8/10, 2010 at 8:36
