@@ -22,9 +22,32 @@ breadcrumbs:
 - Show job details: `scontrol show jobid -dd <jobid>`
 - Job handling:
   - Create a job (overview): Make a Slurm script, make it executable and submit it.
-  - Submit interactive/blocking job: `srun [--pty bash] <...>`
-  - Submit batch/non-blocking job: `sbatch <...>`
+  - Using GPUs: See example Slurm-file, using `--gres=gpu[:<type>]:<n>`.
+  - Submit batch/non-blocking job: `sbatch <slurm-file>`
+  - Start interactive/blocking job: `srun <job options> [--pty] <bash|app>`
   - Cancel specific job: `scancel <jobid>`
-  - Cancel multiple jobs: `scancel [-t <state>] [-u <user>]`
+  - Cancel set of jobs: `scancel [-t <state>] [-u <user>]`
+
+## Example Slurm-File
+
+```sh
+#!/bin/sh
+
+#SBATCH --partition=<partition>
+#SBATCH --time=03:00:00
+#SBATCH --nodes=2
+# #SBATCH --nodelist=compute-2-0-[17-18],compute-5-0-[20-21]
+#SBATCH --ntasks-per-node=2
+# #SBATCH --exclusive
+# #SBATCH --mem=64G
+#SBATCH --gres=gpu:V100:2
+#SBATCH --job-name="xxx"
+#SBATCH --output=log.txt
+## SBATCH --mail-user=user@example.net
+# #SBATCH --mail-type=ALL
+
+# Run some program on all processors using mpirun
+mpirun uname -a
+```
 
 {% include footer.md %}
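A minimal sketch of the workflow described by the cheat-sheet above, assuming the example Slurm-file is saved as `job.slurm` (the file name, `<jobid>`, `<user>`, and `<partition>` are placeholders):

```sh
# Make the Slurm script executable and submit it as a batch (non-blocking) job.
chmod +x job.slurm
sbatch job.slurm

# Show details for the job ID printed by sbatch.
scontrol show jobid -dd <jobid>

# Cancel that job, or cancel a set of your own pending jobs.
scancel <jobid>
scancel -t PENDING -u <user>

# Alternatively, start an interactive (blocking) shell on a compute node.
srun --partition=<partition> --pty bash
```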