sbatch Questions
All my slurm jobs fail with exit code 0:53 within two seconds of starting.
When I look at job details with scontrol show jobid <JOBID> it doesn't say anything suspicious.
When I look at the f...
I have a simple test.ksh that I am running with the command:
sbatch test.ksh
I keep getting "JobState=FAILED Reason=NonZeroExitCode" (using "scontrol show job")
I have already made sure of the f...
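A first diagnostic step (sketch only; sacct is Slurm's accounting query tool, and <JOBID> is a placeholder) is to ask which step produced the non-zero exit status:

```shell
# Show the exit code of the batch script and of each job step;
# a value like 1:0 means the script itself exited with status 1.
sacct -j <JOBID> --format=JobID,JobName,State,ExitCode
```
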
I was trying to run Slurm jobs with srun in the background. Unfortunately, since I currently have to run things through Docker, it's a bit annoying to use sbatch, so I am trying to find out ...
I would like the Slurm system to send myprogram's output via email when the computation is done, so I wrote the sbatch script as follows:
#!/bin/bash -l
#SBATCH -J MyModel
#SBATCH -n 1 # Number of cores...
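A minimal sketch of the usual approach, using Slurm's standard --mail-type/--mail-user directives (the program name and address below are placeholders):

```shell
#!/bin/bash -l
#SBATCH -J MyModel
#SBATCH -n 1                          # number of cores
#SBATCH --mail-type=END,FAIL          # notify when the job ends or fails
#SBATCH --mail-user=you@example.com   # placeholder address

srun ./myprogram                      # placeholder for the actual program
```

Note that Slurm mails a job-state notification, not the program's output; to email the output itself you would need to mail the slurm-$SLURM_JOB_ID.out file from the end of the script.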
I created some slurm scripts and then tried to execute them with sbatch. But the output file is updated infrequently (maybe once a minute).
Is there a way to change the output buffering latency...
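The delay usually comes from stdio buffering in the program rather than from Slurm itself. One common workaround (a sketch; ./my_program and my_script.py are placeholders) is to force line-buffered output:

```shell
#!/bin/bash
#SBATCH -o slurm-%j.out

# stdbuf (GNU coreutils) forces line buffering on stdout and stderr,
# so the output file is updated after every line:
stdbuf -oL -eL ./my_program

# For Python programs, unbuffered output can be requested directly:
# python -u my_script.py
```
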
I am trying to submit a large number of jobs (several hundred) to a Slurm server and was hoping to avoid having to submit a new shell script for each job I wanted to run. The code submitted is a Py...
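For this kind of workload, a job array avoids writing one script per job: a single script is submitted once, and each task receives its index in SLURM_ARRAY_TASK_ID. A sketch (my_script.py and the range are placeholders):

```shell
#!/bin/bash
#SBATCH --array=1-300        # one array task per job, 300 in total
#SBATCH -o array-%A_%a.out   # %A = array job ID, %a = task index

# The Python script receives the task index and maps it to its inputs.
python my_script.py "$SLURM_ARRAY_TASK_ID"
```
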
Solved
When I submit a SLURM job with the option --gres=gpu:1 to a node with two GPUs, how can I get the ID of the GPU which is allocated for the job? Is there an environment variable for this purpose? Th...
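On clusters where Slurm manages GPUs through its gres configuration, the allocated device indices are typically exported in environment variables (availability depends on the site's configuration, so this is a sketch to run inside the job):

```shell
#!/bin/bash
#SBATCH --gres=gpu:1

# With gres-managed GPUs, Slurm normally restricts and reports the
# allocated device via these variables:
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
echo "SLURM_JOB_GPUS=$SLURM_JOB_GPUS"
```
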
I was provided two sbatch scripts to submit and run. The input of the second one is based on the output of the first one. The assignment I need to do this for simply tells us to check on the first ...
Egoism asked 7/3, 2020 at 23:48
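Rather than polling the first job by hand, Slurm can chain the two submissions with a dependency. A sketch (first.sbatch and second.sbatch are placeholders for the two provided scripts):

```shell
# Submit the first job; --parsable prints only the job ID.
jid1=$(sbatch --parsable first.sbatch)

# Queue the second job to start only after the first completes
# successfully (afterok = zero exit code).
sbatch --dependency=afterok:"$jid1" second.sbatch
```
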
Solved
I am running a snakemake pipeline on an HPC that uses slurm. The pipeline is rather long, consisting of ~22 steps. Periodically, snakemake will encounter a problem when attempting to submit a job. T...
Solved
I have set up an array job as follows:
sbatch --array=1-100%5 ...
which will limit the number of simultaneously running tasks to 5. The job is now running, and I would like to change this number...
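The throttle of an already-submitted array can be changed with scontrol; ArrayTaskThrottle is the documented field for the %N limit. A sketch (<jobid> is a placeholder for the array's job ID):

```shell
# Raise the limit on simultaneously running array tasks from 5 to 10:
scontrol update JobId=<jobid> ArrayTaskThrottle=10
```
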
Solved
According to the answers here What does the --ntasks or -n tasks does in SLURM? one can run multiple jobs in parallel via the ntasks parameter for sbatch followed by srun. To ask a follow-up question -...
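The pattern the linked answers describe looks roughly like this (a sketch; ./worker is a placeholder, and --exact is the flag on recent Slurm versions, where older releases used --exclusive for job steps):

```shell
#!/bin/bash
#SBATCH --ntasks=4           # four task slots in one allocation

# Each srun job step consumes one task slot; '&' launches the steps
# concurrently, and 'wait' holds the batch script until all finish.
srun --ntasks=1 --exact ./worker 1 &
srun --ntasks=1 --exact ./worker 2 &
srun --ntasks=1 --exact ./worker 3 &
srun --ntasks=1 --exact ./worker 4 &
wait
```
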
Solved
I want to run a script on a cluster (an sbatch file).
How can I activate my virtual environment (path/to/env_name/bin/activate)?
Do I only need to add the following code to my my_script.sh file?
module l...
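A typical sketch of such a batch script (the module name is a site-specific placeholder; some clusters need no module at all before sourcing the environment):

```shell
#!/bin/bash
#SBATCH -J venv-job

# Load the site's Python module if one is required, then activate
# the virtual environment by its path:
module load python
source path/to/env_name/bin/activate

python my_script.py   # placeholder for the actual workload
```
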
Solved
With sbatch you can include the job ID in automatically generated output file names using the %j syntax:
#!/bin/bash
# omitting some other sbatch commands here ...
#SBATCH -o slurm-%j.out-...
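A complete sketch of the pattern; %j is expanded by sbatch at submission time, and the same job ID is available to the script as SLURM_JOB_ID:

```shell
#!/bin/bash
#SBATCH -J example
#SBATCH -o slurm-%j.out   # %j expands to the numeric job ID
#SBATCH -e slurm-%j.err   # the same pattern works for stderr

echo "this run's files are slurm-${SLURM_JOB_ID}.out/.err"
```
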
With the PBS scheduler it is possible to launch a batch command without a script, like this:
qsub -l select=1:ncpus=12:mem=112GB -l walltime=00:30:00 -- /usr/bin/bash -c "mpirun -np 12 sleep 10"
Is it poss...
Ponce asked 13/12, 2017 at 11:33
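Slurm's equivalent is sbatch --wrap, which generates a minimal batch script around the given command line. A sketch of a rough translation of the PBS request above (the resource flags are illustrative):

```shell
sbatch -N 1 -n 12 --mem=112G --time=00:30:00 \
      --wrap="mpirun -np 12 sleep 10"
```
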
Solved
I am running a pipeline on a SLURM cluster, and for some reason a lot of small files (between 500 and 2000 bytes in size) named along the lines of slurm-XXXXXX.out (where XXXXXX is a number) keep appearing. I'v...
Solved
I have a python submission script that I run with sbatch using slurm:
sbatch batch.py
when I do this, things do not work properly because, I assume, the batch.py process does not inherit the right...
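sbatch accepts scripts with a non-bash interpreter as long as the shebang line names it, and it still reads #SBATCH lines, since they are ordinary Python comments. A sketch of that pattern (job name and output file are placeholders):

```python
#!/usr/bin/env python
#SBATCH -J pyjob
#SBATCH -o pyjob-%j.out

# sbatch parses the #SBATCH directives above even though the
# interpreter is Python; below is ordinary Python code.
import os

print("running as job", os.environ.get("SLURM_JOB_ID"))
```

Using `#!/usr/bin/env python` rather than a hard-coded path lets the script pick up the Python that is first on the job's PATH, which is often the missing piece when environment modules or virtualenvs are involved.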
Solved
I found some very similar questions which helped me arrive at a script that seems to work; however, I'm still unsure whether I fully understand why, hence this question.
My problem (example): On 3 node...
Execrative asked 25/8, 2017 at 14:12
Solved
I have a problem where I need to launch the same script but with different input arguments.
Say I have a script myscript.py -p <par_Val> -i <num_trial>, where I need to consider N diff...
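A job array maps naturally onto this: each task index selects one parameter value. A sketch with N=10 (the parameter values are placeholders read from a lookup array):

```shell
#!/bin/bash
#SBATCH --array=0-9          # N=10 trials, indices 0..9

# Map the array index to the script's arguments:
PARAMS=(0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0)
python myscript.py -p "${PARAMS[$SLURM_ARRAY_TASK_ID]}" -i "$SLURM_ARRAY_TASK_ID"
```
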
Solved
I am trying to understand what the difference is between SLURM's srun and sbatch commands. I will be happy with a general explanation, rather than specific answers to the following questions, but h...
Polygamy asked 3/5, 2017 at 18:49
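The short version of the distinction can be seen from the shell (a sketch; hostname stands in for any command):

```shell
# srun runs the command in the allocation interactively and blocks
# until it finishes, streaming output to the terminal:
srun -n 1 hostname

# sbatch queues a batch script and returns immediately with a job ID;
# output goes to a slurm-<jobid>.out file:
sbatch -n 1 --wrap="hostname"
```
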
Solved
Can I submit "one-liners" to SLURM?
Using bsub from LSF and the standard Linux utility xargs, I can easily submit a separate job for uncompressing all of the files in a directory:
ls *.gz | sed '...
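The same idiom carries over to Slurm by combining xargs with sbatch --wrap (a sketch; each .gz file becomes its own job):

```shell
# Submit one decompression job per file; --wrap turns the quoted
# command into a minimal batch script, mirroring the bsub approach.
ls *.gz | xargs -I{} sbatch --wrap="gunzip {}"
```
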
I’m trying to align 168 sequence files on our HPC using slurm version 14.03.0. I’m only allowed to use a maximum of 9 compute nodes at once to keep some nodes open for other people.
I changed the ...
Solved
I have a couple of thousand jobs to run on a SLURM cluster with 16 nodes. These jobs should run only on a subset of the available nodes of size 7. Some of the tasks are parallelized, hence use all ...
Marj asked 6/10, 2014 at 12:57
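One user-side sketch, assuming the 7 eligible machines are known by name (node names are placeholders; a dedicated partition configured by the administrators would be the cleaner site-level solution):

```shell
#!/bin/bash
# Keep the job off the other 9 nodes by excluding them by hostlist:
#SBATCH --exclude=node[08-16]

srun ./task   # placeholder for the actual workload
```
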
© 2022 - 2024 — McMap. All rights reserved.