I am trying to submit a large number of jobs (several hundred) to a Slurm server and would like to avoid writing a new shell script for each job. Every job runs the same Python script; the script takes two input arguments from the shell script, and those arguments are the only thing that changes between jobs. An example of a short shell script that works for a single job is:
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 01:00:00
srun python retrieve.py --start=0 --end=10
What I want is to submit a large number of jobs that all run the same Python script, changing only the 'start' and 'end' arguments between jobs. I read something about increasing the number of cores requested ('-n') and putting an '&' after each srun command, but I've been unable to get it to work so far.
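Based on that advice, here is roughly what I tried (the '-n 10', the '--exclusive' flag, and the window size of 10 are just my guesses at how it's supposed to fit together):

```shell
#!/bin/bash
#SBATCH -n 10
#SBATCH -t 01:00:00
# One srun job step per (start, end) window, backgrounded with '&'.
for ((i=0; i<100; i+=10)); do
    srun -n 1 --exclusive python retrieve.py --start="$i" --end="$((i+10))" &
done
# Without this, the batch script exits (and the allocation ends)
# before the backgrounded steps have finished.
wait
```

My understanding is that '-n 10' reserves enough tasks for all the steps to run at once, and 'wait' keeps the allocation alive until they all complete, but I may be misreading how job steps work.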
If anyone knows a quick way to do this, I would appreciate the help a lot!
for ((i=0; i<=100; i+=10)); do srun python retrieve.py --start="$i" --end="$((i+10))" & done
– Polytypic