SLURM sbatch job array: run the same script in parallel with different input arguments

I have a problem where I need to launch the same script but with different input arguments.

Say I have a script myscript.py -p <par_val> -v <num_trial>, where I need to consider N different values of par_val (between x0 and x1) and M trials for each of those values.

Each of the M trials almost reaches the time limit of the cluster I am working on (and I don't have privileges to change this). So in practice I need to run NxM independent jobs.

Because each batch job has the same node/CPU configuration and invokes the same Python script, differing only in its input parameters, in principle I should have an sbatch script that, in pseudocode, does something like:

#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j.out
#SBATCH --error=cv_analysis_eis-%j.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4

for p1 in 0.05 0.075 0.1 0.25 0.5
do
    for i in {0..150..5}
    do
        python myscript.py -p "$p1" -v "$i"
    done
done

where every call of the script is itself a batch job. Looking at the sbatch documentation, the -a/--array option seems promising. But in my case I need to change the input parameters for every one of the NxM scripts. How can I do this? I would rather not write NxM batch scripts and then list them in a txt file, as suggested by this post. Nor does the solution proposed here seem ideal, as this looks to me like a textbook case for a job array. Moreover, I would like all NxM scripts to be launched at the same time, with the submitting script above terminating right after, so that it does not hit the time limit itself and get killed by the system, leaving the whole job incomplete (each of the NxM jobs fits within the limit, so if they run in parallel but independently this won't happen).

Perni asked 27/1, 2017 at 18:23

The best approach is to use job arrays.

One option is to pass the parameter p1 when submitting the job script, so you will only have one script, but will have to submit it multiple times, once for each p1 value.

The code will be like this (untested):

#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j-%a.out
#SBATCH --error=cv_analysis_eis-%j-%a.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH -a 0-150:5

# $1 is the p1 value passed on the sbatch command line
python myscript.py -p "$1" -v "$SLURM_ARRAY_TASK_ID"

and you will submit it with:

sbatch my_jobscript.sh 0.05
sbatch my_jobscript.sh 0.075
...
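If you'd rather not type each sbatch command by hand, a small driver loop does the same thing (a sketch, using the p1 values from the question; my_jobscript.sh is assumed to be the script above):

#!/bin/bash
# Submit one 31-task job array per p1 value (values taken from the question).
for p1 in 0.05 0.075 0.1 0.25 0.5; do
    sbatch my_jobscript.sh "$p1"
done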

Another approach is to define all the p1 parameters in a bash array and submit a single job array with NxM tasks (untested):

#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j-%a.out
#SBATCH --error=cv_analysis_eis-%j-%a.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
# Make the array cover all NxM = 5 x 31 = 155 combinations (IDs 0..154)
#SBATCH -a 0-154

PARRAY=(0.05 0.075 0.1 0.25 0.5)

# p1 is the element of PARRAY selected with ARRAY_TASK_ID mod (length of PARRAY)
p1=${PARRAY[`expr $SLURM_ARRAY_TASK_ID % ${#PARRAY[@]}`]}
# v is the integer division of ARRAY_TASK_ID by the length of PARRAY
v=`expr $SLURM_ARRAY_TASK_ID / ${#PARRAY[@]}`
python myscript.py -p "$p1" -v "$v"
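To see the mapping concretely (a worked example for the 0-154 range above): task ID 7 selects p1 = PARRAY[7 % 5] = PARRAY[2] = 0.1 and v = 7 / 5 = 1, while task ID 154 selects p1 = PARRAY[4] = 0.5 and v = 30, so the 155 task IDs cover every (p1, v) combination exactly once.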
Niello answered 27/1, 2017 at 21:26. Comments:
Thanks, this is exactly what I was looking for. Before I accept it, however: in the second example you provide, I am not convinced by the $1 argument in expr when you assign p1. Could you clarify that? It does not make sense to me, since afaik $<num> refers to input arguments... (Perni)
Ok, your answer is correct, with a minor typo in the assignment of p1, which should instead be p1=${PARRAY[`expr $SLURM_ARRAY_TASK_ID % ${#PARRAY[@]}`]}. (Perni)
Why don't you call your Python script with srun? (Macmacabre)

If you use SLURM job arrays, you can linearise the indices of your two for loops and then compare the linearised index with the array task ID:

#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j.out
#SBATCH --error=cv_analysis_eis-%j.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH -a 0-154

# NxM = 5 * 31 = 155 tasks, with IDs 0..154

p1_arr=(0.05 0.075 0.1 0.25 0.5)

# SLURM_ARRAY_TASK_ID=154 # comment in for testing

for ip1 in {0..4} # 5 steps
do
    for i in {0..150..5} # 31 steps
    do
        task_id=$(( i / 5 + 31 * ip1 ))

        # printf $task_id"\n" # comment in for testing

        if [ "$task_id" -eq "$SLURM_ARRAY_TASK_ID" ]
        then
          p1=${p1_arr[ip1]}
          # printf "python myscript.py -p $p1 -v $i\n" # comment in for testing
          python myscript.py -p "$p1" -v "$i"
        fi
    done
done
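For a quick local sanity check of the linearisation, you can set SLURM_ARRAY_TASK_ID by hand, comment in the printf lines and comment out the python call, then run the script directly (assuming it is saved as jobscript.sh; the name is just for illustration):

SLURM_ARRAY_TASK_ID=154 bash jobscript.sh
# with the second printf active, this prints:
# python myscript.py -p 0.5 -v 150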

This answer is pretty similar to Niello's; I would thus have preferred to write it as a comment, but I do not have enough reputation.

Emmett answered 30/1, 2017 at 11:00

According to this page, job arrays incur significant overhead:

If the running time of your program is small, say ten minutes or less, creating a job array will incur a lot of overhead and you should consider packing your jobs.

That page provides a few examples of running this kind of job using both arrays and "packed jobs."
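For illustration, here is a minimal sketch of the packed-job idea, assuming the myscript.py interface from the question: several single-task srun steps share one allocation, with & and wait keeping up to --ntasks of them running concurrently (srun --exclusive was the step-packing flag in Slurm versions of that era).

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
# Run many short tasks inside a single allocation instead of one job per task.
for p1 in 0.05 0.075 0.1 0.25 0.5; do
    for i in {0..150..5}; do
        # --exclusive makes each step wait for a free task slot in the allocation
        srun --ntasks=1 --exclusive python myscript.py -p "$p1" -v "$i" &
    done
done
wait    # keep the batch job alive until every background step finishes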

If you don't want or need to specify the resources for your job, here is another approach. I'm not sure whether this is a use case intended by Slurm, but it appears to work, and the submission script looks a little nicer since we don't have to linearise the indices to fit the job-array paradigm. It also works well with nested loops of arbitrary depth.

Run this directly as a shell script:

#!/bin/bash
FLAGS="--ntasks=1 --cpus-per-task=1"
for i in 1 2 3 4 5; do
    for j in 1 2 3 4 5; do
        for k in 1 2 3 4 5; do
            sbatch $FLAGS testscript.py $i $j $k
        done
    done
done

where you need to make sure testscript.py points to the correct interpreter in its first line using a shebang (#!), e.g.

#!/usr/bin/env python3
import time
import sys

time.sleep(5)
print("This is my script")
print(sys.argv[1], sys.argv[2], sys.argv[3])
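With the defaults above, each submission writes its output to slurm-<jobid>.out (Slurm's default output file name); for sbatch $FLAGS testscript.py 1 2 3 that file would contain:

This is my script
1 2 3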

Alternatively (untested), you can use the --wrap flag like this:

sbatch $FLAGS --wrap="python testscript.py $i $j $k"

and you won't need the #!/usr/bin/env python3 line in testscript.py.

Parity answered 9/5, 2017 at 23:01
