The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics program from Sandia National Laboratories. LAMMPS uses the Message Passing Interface (MPI) for parallel communication and is free and open-source software, distributed under the terms of the GNU General Public License.
This page documents how to use the LAMMPS application installed on SimCenter local desktop machines.
The installation contains both a serial version and an MPI/GPU-enabled version.
Quick Notes:
- Python 3.x is used for the MPI/GPU version; either Python 2.x or Python 3.x can be used for the serial version.
- LAMMPS can be used with or without MPI.
- LAMMPS can be used with or without GPU acceleration.
Loading Environment
MPI Version
To load the environment for LAMMPS with MPI support, run the following:
module load openmpi
module load cuda
module load anaconda
# This activates the python3 mpi enabled environment
source activate mpi
module load lammps/22Aug18-mpi
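As an optional sanity check, the lmp executable should now be on the PATH, and its help output lists the styles and packages it was built with:
which lmp
# -h prints usage information and then exits
lmp -h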
Serial Version
To load the environment for the serial version:
module load anaconda
# This activates the python2 environment.
source activate python2
module load lammps/22Aug18-serial
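If needed, module list confirms which LAMMPS module is currently active:
module list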
Running LAMMPS
Two supported methods exist to run LAMMPS on the local desktop machines: native input scripts and the Python library interface.
Native Method
This method uses LAMMPS natively, without any extensions: an input script is written and then given to LAMMPS to parse and execute.
The Lennard-Jones potential benchmark is presented below.
# 3d Lennard-Jones melt
variable x index 4
variable y index 4
variable z index 4
variable xx equal 20*$x
variable yy equal 20*$y
variable zz equal 20*$z
# initialization
units lj
atom_style atomic
# atom definition
lattice fcc 0.8442
region box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box 1 box
create_atoms 1 box
# settings
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
# run simulation
run 100
To run this script:
lmp -in input_script.txt
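The index variables at the top of the script can be overridden from the command line, which is how this benchmark is usually scaled. For example, to double the box length in each dimension (x, y, and z default to 4):
lmp -in input_script.txt -var x 8 -var y 8 -var z 8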
Python Method
LAMMPS can also be driven from Python through its library interface:
from lammps import lammps

# create a LAMMPS instance
lmp = lammps()
# run an entire input script
lmp.file('...')
# execute a single LAMMPS command
lmp.command('...')
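As a minimal runnable sketch, assuming the Lennard-Jones script above was saved as in.lj.txt in the working directory:
from lammps import lammps

lmp = lammps()
# run the benchmark script, then continue for another 100 steps
lmp.file('in.lj.txt')
lmp.command('run 100')
# query the instance; get_natoms() returns the current atom count
print('atoms in system:', lmp.get_natoms())
lmp.close()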
Further details are available in the LAMMPS Python library interface documentation.
Accelerating LAMMPS
MPI Acceleration
When launched under mpirun, every MPI rank executes the same Python script and LAMMPS partitions the simulation across the ranks:
from mpi4py import MPI
from lammps import lammps

# each rank creates a LAMMPS instance on MPI_COMM_WORLD
lmp = lammps()
lmp.file('in.lj.txt')
MPI.Finalize()
To run the native executable in parallel on four processes:
mpirun -np 4 lmp -nocite -log none -echo screen -in in.lj.txt
To run the Python script in parallel:
mpirun -np 4 python3 SCRIPT.py
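Because every rank executes the script, output is typically restricted to rank 0. A sketch using the same in.lj.txt:
from mpi4py import MPI
from lammps import lammps

comm = MPI.COMM_WORLD
# the LAMMPS instance spans MPI_COMM_WORLD by default
lmp = lammps()
lmp.file('in.lj.txt')
if comm.Get_rank() == 0:
    print('atoms in system:', lmp.get_natoms())
lmp.close()
MPI.Finalize()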
GPU Acceleration
GPU (Graphics Processing Unit) acceleration is based on the GPU package, which is installed only with the MPI version of LAMMPS; the assumption is that if you want to accelerate your simulation, you will want both MPI and GPU support. In this machine model, each MPI process accesses its own instance of a GPU, but the physical GPU can be shared: multiple MPI processes on a single desktop machine can share the one GPU present. This is the default method of usage.
Two methods exist to enable GPU usage:
- Command line flags
- Input script commands
Command Line Arguments
The suffix (-sf) command line argument tells LAMMPS to use GPU-optimized styles for force calculations.
lmp -sf gpu
At least one GPU is always used; N GPUs per node can be requested with the package gpu (-pk gpu) command line argument.
lmp -pk gpu N
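Combining both flags, a parallel GPU run of the earlier benchmark might look like the following (assuming the script is saved as in.lj.txt); here four MPI processes share the single GPU, matching the default machine model described above:
mpirun -np 4 lmp -sf gpu -pk gpu 1 -in in.lj.txt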
Script Commands
The input script can also contain commands that tell LAMMPS to use the GPU and configure exactly how it is used, which allows finer control than the command line arguments. The package command enables GPU usage:
package gpu 1 neigh no
suffix gpu
We have found that the neigh no portion of the command should be used to disable neighbor list building on the GPU; in our testing it is extremely slow there, for reasons we do not understand. The suffix gpu command is required to convert pair styles to their GPU-optimized versions automatically. This can also be done manually.
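For example, a sketch of the manual conversion for the Lennard-Jones script, replacing the suffix command with an explicit GPU pair style (lj/cut/gpu is the GPU package variant of lj/cut):
package gpu 1 neigh no
# explicit GPU-optimized pair style instead of suffix gpu
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5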
Further information is available in the GPU Package documentation.