High memory jobs¶
This page includes example job scripts for CPU-only jobs that run on a single node and require large amounts of RAM.
To access these nodes, submit jobs to the highmem partition; jobs must
request a minimum of 128GB of RAM to be eligible for this partition. The
highmem partition consists solely of CPU-only compute nodes: high memory
parallel and GPU jobs are not currently supported on the cluster.
For simplicity, cores and tasks should be considered equivalent on Apocrita, and may be used interchangeably for most use cases.
Use of the high memory nodes
Users are expected to be vigilant when submitting jobs to the highmem
partition, ensuring that their jobs genuinely require at least 128GB of
RAM. We reserve the right to enforce account restrictions in cases of
repeated or improper use of these nodes.
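One way to check whether a completed job genuinely needed its memory request is to compare the requested memory against the peak usage reported by Slurm's sacct accounting tool (the job ID 12345 below is a placeholder for your own job ID):

```shell
# Report requested memory (ReqMem) against peak resident memory
# actually used (MaxRSS) for a completed job.
# Replace 12345 with your own job ID.
sacct -j 12345 --format=JobID,ReqMem,MaxRSS,State
```

If MaxRSS is consistently far below 128GB, the job is likely better suited to the standard partitions.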
Single task¶
The most basic job requests 1 task (core) and 128GB of RAM for 1 hour.
Applications which do not support multi-threading must use a single-node, single-task job script, as seen below:
#!/bin/bash
## Request 1 task
#SBATCH -n 1 # (or --ntasks=1)
## Request the "highmem" partition
#SBATCH -p highmem # (or --partition=highmem)
## Request 1 hour runtime
#SBATCH -t 1:0:0 # (or --time=1:0:0)
## Request 128GB RAM per task
#SBATCH --mem-per-cpu=128G
# ---
# Module load
module load app
# Run application
app \
--input in.dat \
--output out.dat
This example loads a module called app, and launches
a program also named app with input and output arguments.
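Assuming the script above has been saved to a file (the filename highmem.sh below is just an example), it can be submitted to the scheduler and monitored as follows:

```shell
# Submit the job script; sbatch prints the new job ID on success.
sbatch highmem.sh
# List your queued and running jobs to check its state.
squeue -u "$USER"
```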
Multiple tasks¶
A multi-task serial job should be used for jobs which can use multiple CPU cores on a single machine concurrently, such as those using OpenMP. Requesting many tasks for a job which cannot use them is wasteful.
Slurm refers to CPUs as "tasks", and in most job scripts you should request
the number of CPUs you require using -n or --ntasks, as per the
example below. The -c option specifies the number of CPUs required per task,
and should normally only be used for advanced jobs, such as those combining
Open MPI ranks with OpenMP threads.
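To make the distinction between the two options concrete, a hybrid job would combine them as sketched below. This is for illustration of -n versus -c only; high memory parallel jobs are not currently supported on the cluster:

```shell
#!/bin/bash
## Illustration only: 2 tasks (e.g. MPI ranks), each with
## 4 CPUs (e.g. OpenMP threads per rank), 8 CPUs in total
#SBATCH -n 2 # (or --ntasks=2)
#SBATCH -c 4 # (or --cpus-per-task=4)
```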
The example below demonstrates how to request 4 tasks on a single node (32GB of RAM per task, 128GB in total):
#!/bin/bash
## Request 4 tasks
#SBATCH -n 4 # (or --ntasks=4)
## Request the "highmem" partition
#SBATCH -p highmem # (or --partition=highmem)
## Request 1 hour runtime
#SBATCH -t 1:0:0 # (or --time=1:0:0)
## Request 32GB RAM per task
#SBATCH --mem-per-cpu=32G
# ---
# Module load
module load app
# Using $SLURM_NTASKS for threading
app \
--threads ${SLURM_NTASKS} \
--input in.dat \
--output out.dat
In this example, the app program supports multi-threading with the
--threads option. The $SLURM_NTASKS variable will be substituted with the
number of tasks requested (4 in this example).
Please check the application documentation for a threading option (common
options include but are not limited to: --threads, -t, --cores,
--multicore, --parallel and -p). We recommend referencing the number of
requested tasks via $SLURM_NTASKS rather than hard-coding a value, which
makes it easier to scale your job up or down later.
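Note that $SLURM_NTASKS is only set inside a running Slurm job. A defensive pattern (our suggestion, not a scheduler requirement) is to fall back to a single thread, so the same script also behaves sensibly when tested outside the queue:

```shell
# Use the Slurm task count when available, otherwise default to 1
# so the script still works outside a Slurm job.
THREADS="${SLURM_NTASKS:-1}"
echo "Running with ${THREADS} thread(s)"
```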
If you are running an application that supports OpenMP, you should check
whether the $OMP_NUM_THREADS variable has been set correctly to the value of
$SLURM_NTASKS, otherwise your application may run with poor performance.
Some application modules may automatically set this when loaded.
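If the module does not set it for you, one way to keep OpenMP aligned with your task request is to export the variable yourself near the top of the job script; a minimal sketch:

```shell
# Make OpenMP use exactly the number of tasks requested from Slurm,
# falling back to 1 thread outside a Slurm job.
export OMP_NUM_THREADS="${SLURM_NTASKS:-1}"
echo "OpenMP will use ${OMP_NUM_THREADS} thread(s)"
```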