Single node jobs¶
This page includes example job scripts for jobs that run on a single node,
referred to as "serial jobs" throughout this site. All partitions other than
parallel will accept serial jobs.
For simplicity, the terms "core" and "task" can be considered equivalent on Apocrita, and are used interchangeably for most use cases.
Using an Array job when submitting lots of similar jobs
Note that if you intend to submit multiple similar jobs, you should submit them as an array instead.
This reduces load on the scheduler and streamlines the job submission process. Please see the Arrays section for more details.
Compute jobs requiring large amounts of RAM
If you have compute jobs with very large RAM requirements, you may want to
make use of our public highmem nodes by submitting to the
highmem partition, rather than compute. See the
high memory jobs page for more
information.
Single task¶
The most basic job requests 1 task (core) and 1GB of RAM for 1 hour.
Applications which do not support multi-threading should use the single-node, single-task job script shown below:
#!/bin/bash
## Request 1 task
#SBATCH -n 1 # (or --ntasks=1)
## Request the "compute" partition
## (optional, as this is the default partition)
#SBATCH -p compute # (or --partition=compute)
## Request 1 hour runtime
#SBATCH -t 1:0:0 # (or --time=1:0:0)
## Request 1GB RAM per task
#SBATCH --mem-per-cpu=1G
# ---
# Module load
module load app
# Run application
app \
--input in.dat \
--output out.dat
This example loads a module called app, and launches
a program also named app with input and output arguments.
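To run the job, save the script to a file and pass it to sbatch from a login node. A minimal sketch, assuming the hypothetical filename serial.sh; the guard only keeps the snippet harmless on a machine without Slurm installed:

```shell
# Submit the job script; Slurm replies with "Submitted batch job <id>"
# and, by default, writes the job's output to slurm-<jobid>.out in the
# submission directory. "serial.sh" is a hypothetical filename.
if command -v sbatch >/dev/null 2>&1; then
    sbatch serial.sh
    squeue -u "$USER"   # check the job's state (R = running, PD = pending)
else
    echo "sbatch not found - run this on a cluster login node"
fi
```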
Multiple tasks¶
A multi-task serial job should be used for jobs which can use multiple CPU cores on a single machine concurrently, such as those using OpenMP. Requesting many tasks for a job which cannot use them is wasteful.
Slurm refers to CPUs as "tasks"; in most job scripts you should request the
number of CPUs you require using -n or --ntasks, as per the example below.
The -c option sets the number of CPUs required per task and should normally
only be used for advanced jobs, such as those combining Open MPI ranks with
OpenMP threads.
The example below demonstrates how to request 4 tasks on a single node:
#!/bin/bash
## Request 4 tasks
#SBATCH -n 4 # (or --ntasks=4)
## Request the "compute" partition
## (optional, as this is the default partition)
#SBATCH -p compute # (or --partition=compute)
## Request 1 hour runtime
#SBATCH -t 1:0:0 # (or --time=1:0:0)
## Request 1GB RAM per task
#SBATCH --mem-per-cpu=1G
# ---
# Module load
module load app
# Using $SLURM_NTASKS for threading
app \
--threads ${SLURM_NTASKS} \
--input in.dat \
--output out.dat
In this example, the app program supports multi-threading with the
--threads option. The $SLURM_NTASKS variable will be substituted with the
number of tasks requested (4 in this example).
Please check the application documentation for a threading option (common
options include but are not limited to: --threads, -t, --cores,
--multicore, --parallel and -p). We recommend using the value of
$SLURM_NTASKS to reference the number of tasks requested rather than a
hard-coded value, to ease the process when scaling up your job.
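When passing the task count to an application, a fallback default also makes the same command usable outside Slurm, for example when testing on a login node. A minimal sketch, reusing the hypothetical app from the examples above:

```shell
# Use the Slurm-provided task count; fall back to 1 if the variable
# is unset (e.g. when testing the command outside a Slurm job)
THREADS=${SLURM_NTASKS:-1}

# The hypothetical "app" from the examples above would then be run as:
#   app --threads ${THREADS} --input in.dat --output out.dat
echo "Requested threads: ${THREADS}"
```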
If you are running an application that supports OpenMP, you should check
whether the $OMP_NUM_THREADS variable has been set correctly to the value of
$SLURM_NTASKS, otherwise your application may run with poor performance.
Some application modules may automatically set this when loaded.
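If a loaded module does not set the variable for you, it can be exported in the job script before launching the application. A minimal sketch; the fallback to 1 is only a safety net for running the line outside a Slurm job:

```shell
# Give OpenMP one thread per requested Slurm task; fall back to 1
# if SLURM_NTASKS is unset (e.g. outside a Slurm job)
export OMP_NUM_THREADS=${SLURM_NTASKS:-1}
echo "OpenMP threads: ${OMP_NUM_THREADS}"
```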