All molecular simulation software on JADE2 is compiled and maintained by the HECBioSim consortium support team. If you have problems with any of the consortium-compiled codes on JADE2, please contact HECBioSim support.

AMBER 18

An example of running AMBER 18 on a single node with a single GPU.

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH -p small
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH -J jobname

module load amber/18

srun pmemd.cuda -O -i benchmark.in -c benchmark.rst -p benchmark.top -o benchmark.out
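
If the script above is saved to a file, it can be submitted and monitored with the standard SLURM commands:

sbatch amber18.sh    # amber18.sh is an illustrative name for the script above
squeue -u $USER      # check the job's queue status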


GROMACS 2020.3

An example using a single GPU on a single node. Note that in the example below we oversubscribe the CPUs (five cores are requested but ten OpenMP threads are run), which often results in a performance increase.

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=5
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH -J jobname
#SBATCH -p small

module load gromacs/2020.3

gmx mdrun -deffnm benchmark -pin on -ntmpi 1 -ntomp 10 -nb gpu -bonded gpu -pme gpu
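
The jobs in these examples have a one-hour wall-clock limit, so longer simulations can be continued across jobs from GROMACS checkpoints. A minimal sketch using mdrun's standard checkpointing flags (the checkpoint file name follows from -deffnm):

# resume from benchmark.cpt and append to the existing output files
gmx mdrun -deffnm benchmark -cpi benchmark.cpt -pin on -ntmpi 1 -ntomp 10 -nb gpu -bonded gpu -pme gpu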


An example using a single GPU on a single node with the NVIDIA performance optimisations enabled. Setting GMX_FORCE_UPDATE_DEFAULT_GPU runs the coordinate update and constraints on the GPU by default, and the larger -nstlist value reduces pair-list regeneration overhead. A nice write-up can be found here

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=5
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH -J jobname
#SBATCH -p small

module load gromacs/2020.3

# Single-GPU NVIDIA optimisations on
export GMX_FORCE_UPDATE_DEFAULT_GPU=true

gmx mdrun -deffnm benchmark -pin on -ntmpi 1 -ntomp 10 -nb gpu -bonded gpu -pme gpu -nstlist 400


An example using four GPUs on a single node with the NVIDIA performance optimisations enabled. This uses GROMACS's built-in thread-MPI rather than an external MPI library. Note that multi-GPU runs are only really worth it for very large systems (2-3M+ atoms); if time to solution is not an issue, you are better off running four single-GPU simulations and gaining the extra statistics. A nice write-up can be found here

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=5
#SBATCH --gres=gpu:4
#SBATCH --time=01:00:00
#SBATCH -J jobname
#SBATCH -p big

module load gromacs/2020.3

# Multi-GPU NVIDIA optimisations on: GPU-resident update plus direct GPU-GPU communication
export GMX_FORCE_UPDATE_DEFAULT_GPU=true
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true

gmx mdrun -deffnm benchmark -pin on -ntmpi 4 -ntomp 10 -nb gpu -bonded gpu -pme gpu -nstlist 400
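
To confirm that all four GPUs were actually allocated, it can be useful to print the device list near the top of the job script (standard SLURM and NVIDIA tooling):

# CUDA_VISIBLE_DEVICES is set by SLURM's GPU allocation
echo "GPUs visible to this job: $CUDA_VISIBLE_DEVICES"
nvidia-smi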


NAMD 2.14

Here is an example script to submit a single-node, single-GPU NAMD2 simulation on JADE2.

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=5
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH -J jobname
#SBATCH -p small

module load namd/2.14

namd2 +p $SLURM_NTASKS_PER_NODE +setcpuaffinity +devices $CUDA_VISIBLE_DEVICES ./benchmark.in &> bench.out
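
NAMD periodically writes timing estimates to its log, so a quick way to check how a run is performing is to pull the benchmark lines out of the output (standard NAMD log format):

grep "Benchmark time" bench.out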


NAMD 3.0

The version of NAMD 3.0 currently installed on JADE2 is alpha build 7. Do note that this is an alpha release, but if you wish to test it with your work, here is an example script for a single node with a single GPU.

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH -J jobname
#SBATCH -p small

module load namd/3.0-alpha7

namd3 benchmark.in > benchmark.out
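
Note that the headline GPU speedups in NAMD 3.0 come from its GPU-resident integration, which in the alpha builds is enabled with the CUDASOAintegrate keyword in the NAMD configuration file rather than on the command line. A minimal sketch, assuming your simulation options are compatible with the alpha's restrictions:

# CUDASOAintegrate is a NAMD 3.0 alpha config keyword enabling GPU-resident integration
echo "CUDASOAintegrate on" >> benchmark.in
namd3 +p1 +devices $CUDA_VISIBLE_DEVICES benchmark.in > benchmark.out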