VASP

VASP (the Vienna Ab initio Simulation Package) performs atomic-scale materials modelling, including electronic structure calculations and quantum-mechanical molecular dynamics.

VASP is available as a module on Apocrita.

Versions

The stock version of VASP (with no extra functions or implementations) can be loaded with a vasp/VERSION module, replacing "VERSION" with the version you wish to load.

VASP binaries with GPU support have been compiled and are available by loading a vasp-gpu/VERSION module, again replacing "VERSION" with the version you wish to load.

Usage

To run the default installed version of VASP, simply load the vasp module:

module load vasp

For full usage documentation, see the "VASP documentation" linked in the references below.
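VASP reads its input from a set of standard files (INCAR, POSCAR, KPOINTS and POTCAR) in the job's working directory. A pre-flight check before submitting a job might look like the following sketch (the file names are VASP's standard inputs; the check itself is purely illustrative):

```shell
#!/bin/bash
# Illustrative pre-flight check: VASP expects these four input
# files in the working directory before vasp_std is launched.
missing=0
for f in INCAR POSCAR KPOINTS POTCAR; do
  if [ ! -f "$f" ]; then
    echo "Missing required input file: $f"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "All VASP input files present"
fi
```

Run from the directory containing your calculation before submitting the job script.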

Licensing

To use any version of VASP you must contact us and provide evidence that you have a licence to use the software.

Example jobs

Use mpirun instead of srun --mpi

ITSR recommends launching all MPI processes under Slurm with mpirun rather than srun, as srun can cause issues.

Serial job

Here is an example job running on 4 cores with 8GB of total memory (2GB per core):

#!/bin/bash
#SBATCH --ntasks=4         # Request 4 cores
#SBATCH --mem-per-cpu=2G   # Request 2GB RAM per core
#SBATCH --time 1:0:0       # Request 1 hour runtime

module load vasp

mpirun -np ${SLURM_NTASKS} vasp_std
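The 8GB total above is the product of the two requests in the script: Slurm multiplies --mem-per-cpu by the number of tasks. The relationship can be checked with shell arithmetic (a sketch; SLURM_NTASKS is normally set by Slurm inside a job, and both values are hard-coded here for illustration):

```shell
# Sketch: total memory = tasks x mem-per-cpu.
# SLURM_NTASKS is normally set by Slurm; hard-coded for illustration.
SLURM_NTASKS=4
MEM_PER_CPU_GB=2
echo "Total memory: $((SLURM_NTASKS * MEM_PER_CPU_GB))G"
# Prints: Total memory: 8G
```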

Parallel job

Here is an example job running on 96 cores across 2 ddy nodes with MPI:

#!/bin/bash
#SBATCH --nodes=2            # Request 2 ddy nodes
#SBATCH --ntasks=96          # Request 96 cores
#SBATCH --mem=0              # Request all available RAM on nodes
#SBATCH --time 240:0:0       # Request 240 hour runtime
#SBATCH --partition parallel # Request parallel partition
#SBATCH --exclusive          # Request exclusive use of nodes

module load vasp

mpirun -np ${SLURM_NTASKS} vasp_std

GPU job

MPI ranks sharing the GPU

Ensure the number of MPI ranks (-np X) is less than or equal to the number of GPUs being requested; otherwise, VASP will print an INIT_ACC warning. Each Slurm task requested represents one MPI rank, so request an appropriate number of CPUs per task as detailed below. The example below demonstrates how to call mpirun when requesting 1 GPU.
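The rank-count rule can also be guarded against inside the job script itself. The sketch below compares the Slurm task count with the number of GPUs allocated on the node (SLURM_GPUS_ON_NODE is set by Slurm when GPUs are granted; both values are hard-coded here for illustration):

```shell
# Sketch: refuse to launch if there are more MPI ranks than GPUs,
# since VASP would emit an INIT_ACC warning in that case.
# Both variables are normally set by Slurm; hard-coded for illustration.
SLURM_NTASKS=1
SLURM_GPUS_ON_NODE=1
if [ "${SLURM_NTASKS}" -gt "${SLURM_GPUS_ON_NODE}" ]; then
  echo "Error: ${SLURM_NTASKS} MPI rank(s) for ${SLURM_GPUS_ON_NODE} GPU(s)"
  exit 1
fi
echo "OK: ${SLURM_NTASKS} MPI rank(s) for ${SLURM_GPUS_ON_NODE} GPU(s)"
```

Placing such a check before the mpirun line causes a misconfigured job to fail fast rather than run with degraded GPU usage.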

Use --bind-to none

The version of hwloc shipped with the NVIDIA SDK used to compile the GPU versions of VASP on Apocrita isn't compatible with the newer version we use to compile Slurm (see this issue for details). Do not use srun for VASP GPU jobs: use mpirun -np X (where "X" is less than or equal to the number of GPUs being requested, as above) and add the --bind-to none argument, as shown below.

Here is an example job running on 1 GPU:

#!/bin/bash
#SBATCH --ntasks=1          # Request 1 task (rank)
#SBATCH --cpus-per-task=8   # Request 8 CPUs per task (rank)
#SBATCH --mem-per-cpu=11G   # Request 11GB RAM per core
#SBATCH --time=240:0:0      # Request 240 hour runtime
#SBATCH --partition=gpu     # Request GPU partition
#SBATCH --gres=gpu:1        # Request 1 GPU

# Load a GPU compatible version of VASP. Replace VERSION with
# the actual version of VASP you intend to run
module load vasp-gpu/VERSION

# Ensure the number of MPI ranks is less than or equal to the
# number of GPUs being requested. See the above notice for
# more information
mpirun -np 1 --bind-to none vasp_std

References