Specifying SLURM Resources When Executing Multiple Jobs in Parallel
According to the answers here What does the --ntasks or -n tasks does in SLURM?, one can run multiple jobs in parallel via the ntasks parameter for sbatch, followed by srun. As a follow-up question: how would one specify the amount of memory needed when running jobs in parallel like this?

If, say, 3 jobs are running in parallel, each needing 8GB of memory, would one specify 24GB of memory in sbatch (i.e. the sum of memory across all jobs), or give no memory parameters to sbatch and instead specify 8GB of memory for each srun?

Dosimeter answered 28/12/2018 at 0:48
You need to specify the memory requirement in the script submitted with sbatch; otherwise you will end up with the default memory allocation, which might not correspond to your needs. If you then request 8GB in each srun call, you might end up with no job steps able to start at all (if the default allocation is lower than 8GB), or with only one or two steps running in parallel (if the default allocation is between 16 and 24GB).

You can request --mem=24GB, but that offers less flexibility than specifying --mem-per-cpu=8G.
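A minimal sketch of the --mem-per-cpu approach, assuming one CPU per task; the executables ./job1 to ./job3 are placeholders, and step options such as --exclusive vary by SLURM version:

```shell
#!/bin/bash
#SBATCH --ntasks=3            # three tasks run in parallel
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=8G      # 8GB per CPU, hence 8GB per task here

# Each srun launches one job step; '&' lets the three steps run concurrently.
srun --ntasks=1 --exclusive ./job1 &
srun --ntasks=1 --exclusive ./job2 &
srun --ntasks=1 --exclusive ./job3 &
wait                          # block until all steps have finished
```

With --mem-per-cpu the total request scales automatically if you later change the number of tasks or CPUs per task, which is why it is more flexible than a fixed --mem value.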

Stand answered 30/12/2018 at 12:35. Comments:
Hey, thanks for answering! I have another follow-up question: since the default for ntasks is one task per node, would I specify --mem=24GB or --mem=8GB? Since --mem is per node. — Dosimeter
It depends whether nodes are shared among jobs or not. — Stand
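For the single-node case raised in the comments, a sketch using the per-node --mem form instead; again the step executables are placeholders, and this assumes all three tasks land on one node:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=3
#SBATCH --mem=24G    # per-node limit: 3 tasks x 8GB on this single node

srun --ntasks=1 ./job1 &
srun --ntasks=1 ./job2 &
srun --ntasks=1 ./job3 &
wait
```

Since --mem is a per-node limit, 24GB is correct here only because --nodes=1 forces all tasks onto the same node; if the tasks could be spread across nodes, the per-node sum would differ, which is the flexibility argument for --mem-per-cpu above.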
