A (short) practical introduction to kinetic theory and thermodynamic properties of gases through molecular dynamics


Miguel A. Caro
mcaroba@gmail.com
March 28, 2018

Contents

1 Preface
2 Review of thermodynamics of ideal gases
  2.1 Molecular partition functions
  2.2 Equation of state
3 Practical introduction to molecular dynamics simulations
  3.1 Force fields
  3.2 Integrating the equations of motion
    3.2.1 The Lazy Man's approach
    3.2.2 Verlet
    3.2.3 Leapfrog
    3.2.4 Error estimate for different algorithms
  3.3 Thermostats and barostats
  3.4 Hands-on: Simulating an orbital system in 2D
    3.4.1 Force field interactions
    3.4.2 Integrators
    3.4.3 Simulation workflow
    3.4.4 Visualizing the trajectories
4 Non-ideal gases
  4.1 General considerations on non-ideal gases: virial expansion
  4.2 Hard-sphere gases
  4.3 Lennard-Jones gases
  4.4 Hands-on: Estimating the B3 coefficient for hard-sphere gases
  4.5 Hands-on: Compressibility factor of hard-sphere gases from MD
  4.6 Hands-on: Properties of Lennard-Jones gases from MD
5 Kinetic theory of gases
  5.1 Molecular collisions
  5.2 Continuum mechanics
  5.3 The Boltzmann equation
  5.4 Chapman-Enskog theory
  5.5 Brownian motion
  5.6 Hands-on: ?
6 Special topic: The 2PT model for liquid systems
  6.1 The harmonic approximation: density of states
  6.2 Fluidicity and conceptual partition of the degrees of freedom
  6.3 Hands-on: Validation of 2PT for liquids and liquid mixtures
References

3 Practical introduction to molecular dynamics simulations

The need for molecular dynamics arises partly from the difficulty of evaluating the phase-space integral over Boltzmann factors of any system beyond ideal and toy models, which is required to compute the partition function:

\[ Q(N, V, T) \propto \int dp\, dq\, \exp\left( -\frac{H(p,q)}{k_B T} \right). \tag{31} \]

The equation above converges extremely slowly with respect to the size of the sampled phase-space region. The so-called ergodic hypothesis tells us that the probability distribution for finding a system at a particular position in phase space, drawn from different times along its trajectory, is the same as the probability distribution drawn from the different microstates in the ensemble. In other words, the time average of an observable O of the system is the same as the ensemble average:

\[ \bar{O} = \frac{\int dp\, dq\, O\, \exp\left( -H(p,q)/k_B T \right)}{\int dp\, dq\, \exp\left( -H(p,q)/k_B T \right)} \stackrel{?}{=} \frac{1}{\tau} \int_0^\tau dt\, O(t). \tag{32} \]

There is no general proof that the ergodic hypothesis holds true;¹⁰ on the contrary, for some particular systems there is proof that it does not hold, so it certainly does not hold universally. For typical problems studied with MD simulations, however, this hypothesis seems to work reasonably well, provided that sufficiently long simulation times are used. Since calculating time averages is a lot less expensive than calculating ensemble averages, MD simulations are routinely used to compute average properties of molecular systems.

In addition to computing thermodynamic quantities (i.e., equilibrium properties), for which approximating the partition function becomes important, one may be interested in studying the evolution of a system which is not in thermodynamic equilibrium. For instance, one may want to look at the denaturation process of a protein when the temperature is too high. In those instances, MD becomes quite useful.

We shall see in the following how molecular trajectories are obtained in practice using computational approaches. For atomic or molecular systems with more than a few particles, except for the simplest cases, it is impossible to find analytical solutions describing their trajectories. The main task underlying MD simulations is the numerical integration of the equations of motion of an ensemble of interacting particles:¹¹

\[ \ddot{r}_i = \frac{F_i}{m_i}, \tag{33} \]

where r_i, m_i and F_i are, respectively, the position, mass and force acting on particle i. For convenience of notation, we will refer throughout this document to time derivatives by adding dots: ∂x/∂t ≡ ẋ, ∂²x/∂t² ≡ ẍ, etc. Bold symbols indicate vectors (usually in 3-dimensional space).

¹⁰ That is why it is called a hypothesis.
¹¹ We are assuming that our particles move like classical objects, and thus can use Newtonian mechanics instead of quantum mechanics. This is a very bad approximation for electrons, but it works quite well for anything heavier than a proton (deuterium and up from there) and often can be used to look at the movement of regular hydrogen atoms (protium). In MD, the assumption that typical electronic and nuclear motion time scales are decoupled and the movement of nuclei can be described with classical mechanics is known as the Born-Oppenheimer approximation. All the quantum effects within this approximation, which emanate from the electrons, are implicitly contained within the forces acting on the nuclei.
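As a trivial but concrete illustration of the right-hand side of Eq. (32) (our own sketch, not part of the original notes), the following self-contained program time-averages the kinetic and potential energies of a 1D harmonic oscillator along its exact trajectory; both averages converge to E/2 = 1/4, as expected from equipartition (see also Sec. 3.3):

  program time_average

! Our own illustration of the right-hand side of Eq. (32): a time average
! accumulated along a trajectory. For a 1D harmonic oscillator with
! m = k = 1 and x(t) = cos(t), v(t) = -sin(t), the total energy is E = 1/2
! and the time-averaged kinetic and potential energies both tend to E/2.
    implicit none
    real*8 :: t, dt, Ekin_avg, Epot_avg
    integer :: step, n

    dt = 1.d-3
    n = 100000
    Ekin_avg = 0.d0
    Epot_avg = 0.d0
    do step = 1, n
      t = dfloat(step)*dt
      Ekin_avg = Ekin_avg + dsin(t)**2/2.d0
      Epot_avg = Epot_avg + dcos(t)**2/2.d0
    end do
    write(*,*) '<Ekin> =', Ekin_avg/dfloat(n), '  <Epot> =', Epot_avg/dfloat(n)

  end program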

The force acting on particle i is connected in the usual way to the system's Hamiltonian via its gradient with respect to the position of i:

\[ F_i = -\nabla_{r_i} H\left( \{p_j\}, \{r_j\} \right), \tag{34} \]

where the Hamiltonian contains the kinetic energy and potential energy parts:

\[ H\left( \{p_j\}, \{r_j\} \right) = \sum_j \frac{|p_j|^2}{2 m_j} + V\left( \{r_j\} \right). \tag{35} \]

By stating the objective of MD above we have raised two issues, one explicit, namely how to solve Eq. (33), and another one implicit. The implicit issue is how to evaluate F_i, which for a microscopic system is equivalent to the issue of how to approximate the nature and strength of atomic interactions. This boils down to approximating V({r_j}). A very accurate determination of the forces is highly non-trivial because of the quantum nature of microscopic systems, and the accuracy with which the F_i are determined will strongly impact the computational cost of running a MD simulation. For instance, the CPU cost of running ab initio MD based on density-functional theory (DFT) [3], which treats the electronic interactions explicitly, is 5-6 orders of magnitude higher than that of a cheap MD based on simple empirical harmonic potentials. Therefore, a very active field of research is the development of accurate yet inexpensive interatomic potentials, also known as force fields.

3.1 Force fields

A force field or interatomic potential is a mathematical model representing the energetic interactions between atomic or supra-atomic (e.g., molecular) systems. The main task of a force field is to approximate, as accurately as possible within a given complexity of the model,¹² the real interactions. For instance, if we were trying to model the interactions between the 3 atoms in a water molecule, we could choose a force-field representation of the O-H bonds and the H-O-H bond angle via spring constants (Fig. 1). This will effectively reproduce the harmonic vibrations, since the potential will be harmonic (about the equilibrium values r_0 and θ_0) by construction. The system's Hamiltonian would read like this:

\[ H = \frac{p_{H_1}^2}{2 m_H} + \frac{p_{H_2}^2}{2 m_H} + \frac{p_O^2}{2 m_O} + \frac{k_r}{2} (r_1 - r_0)^2 + \frac{k_r}{2} (r_2 - r_0)^2 + \frac{k_\theta}{2} (\theta - \theta_0)^2. \tag{36} \]

This Hamiltonian, with the force constants k_r and k_θ fitted to reproduce experimental or ab initio data (e.g., vibrational frequencies), will give a satisfactory description of an isolated water molecule at relatively low temperatures. However, since it only contains intramolecular interactions, it will fail miserably to reproduce the dynamics of interacting water molecules in the solid, liquid and even gas phases. To improve on that, a force field will often include electrostatic interactions modeled by partial charges, e.g., to take into account the fact that the O-H bonds are ionic and valence electrons in water sit preferentially around O atoms.¹³ Other non-bonded interactions which can be inexpensively included in empirical force fields are van der Waals-type terms. A popular representation of these interactions is the Lennard-Jones potential.

¹² By complexity of the model we are referring to the intrinsic constraints that limit its accuracy; for example, a model with a fixed functional form including only harmonic terms will not be able to describe non-harmonic effects, regardless of how well it is parametrized.
¹³ In water, the equilibrium bond length is r_0 ≈ 0.96 Å, the equilibrium bond angle is θ_0 ≈ 104.5°, and the partial charges of H and O are typically on the order of +0.5 and −1 elementary units, respectively.
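As a concrete illustration of Eq. (36) (our own sketch, not part of the original notes), the following self-contained program evaluates the potential-energy part of the Hamiltonian for a distorted water geometry. The equilibrium values r_0 and θ_0 are those quoted in footnote 13; the force constants kr and ktheta are arbitrary placeholders, not parameters fitted to data:

  program water_harmonic

! Our own illustration of Eq. (36): evaluate its potential-energy part for
! a (slightly distorted) water geometry. r0 and theta0 are the equilibrium
! values quoted in footnote 13; kr and ktheta are placeholders, NOT fitted
! force constants.
    implicit none
    real*8 :: pi, r0, theta0, kr, ktheta
    real*8 :: posO(1:3), posH1(1:3), posH2(1:3), b1(1:3), b2(1:3)
    real*8 :: r1, r2, theta, Epot

    pi = dacos(-1.d0)
    r0 = 0.96d0                 ! equilibrium O-H bond length (Angstrom)
    theta0 = 104.5d0*pi/180.d0  ! equilibrium H-O-H angle (rad)
    kr = 1.d0                   ! placeholder bond force constant
    ktheta = 1.d0               ! placeholder angle force constant

!   A slightly distorted geometry (Angstrom)
    posO  = (/ 0.d0,    0.d0,   0.d0 /)
    posH1 = (/ 0.99d0,  0.d0,   0.d0 /)
    posH2 = (/ -0.25d0, 0.93d0, 0.d0 /)

    b1 = posH1 - posO
    b2 = posH2 - posO
    r1 = dsqrt( sum(b1**2) )
    r2 = dsqrt( sum(b2**2) )
    theta = dacos( sum(b1*b2)/r1/r2 )
    Epot = kr/2.d0*(r1-r0)**2 + kr/2.d0*(r2-r0)**2 + ktheta/2.d0*(theta-theta0)**2
    write(*,*) 'r1, r2, theta(deg), Epot:', r1, r2, theta*180.d0/pi, Epot

  end program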

[Figure 1: Chemical bonds in a water molecule (atoms O, H1, H2; bond lengths r1, r2; bond angle θ) represented as springs.]

Even after including partial charges and Lennard-Jones interactions in Eq. (36), our simple model will still completely fail to handle bond breaking and bond formation, since the functional form of our Hamiltonian ensures that the bonded interactions (the harmonic terms) are always between the same set of atoms. Handling bond breaking and formation accurately within empirical force fields is extremely challenging. The quick and dirty solution is to define a cutoff distance at which bonded interactions are switched on (r < r_cutoff) and off (r > r_cutoff). A more sophisticated way to do this is to combine the cutoff approach with environmentally-dependent potentials, where the force constants or even the functional form of the potential depend on the number of nearest neighbors of each atom (e.g., EDIP [4]). A solution which has become increasingly popular in recent years is to use machine learning (also known as "artificial intelligence") to generate highly flexible interatomic potentials which do not rely on a fixed functional form (e.g., GAP [5]). More generally, to treat bond breaking and formation accurately, the safest (and most expensive) choice is to run ab initio MD. Fortunately, many systems of interest can be studied without having to worry about bonds breaking or forming during the time scale of the MD simulation.

Force field development is an active field of molecular physics research, and a wealth of information on different models, both simple and complex, exists in the literature. In this introductory document, we will limit ourselves to simple force fields and use them to illustrate central concepts in MD and the thermodynamic theory of gases.

3.2 Integrating the equations of motion

To know where the different particles in our system are at any given time t, we need to integrate Eq. (33), assuming that we know how to compute the forces, as discussed in the previous section. Note that to integrate a second-order differential equation we need two sets of initial conditions at time t_0. In our case, we need initial positions {r_j(t_0)} and velocities {ṙ_j(t_0)}. We will deal with initialization later on.

The main question that we will answer in this section is: provided that we know the state of our system (positions and velocities) at time t_0 and we can calculate the forces at t_0 from them,¹⁴ what will be the positions and velocities at time t_0 + Δt? In this section we will deal with 3 different approaches and will compare them to each other: i) the Lazy Man's approach, ii) Verlet and iii) leapfrog.¹⁵

¹⁴ In the previous section, we have discussed in some detail the form of the Hamiltonian, but not the forces. If we know the analytical form of the Hamiltonian, then we can easily compute the analytical form of the forces from Eq. (34). Sometimes we can resort to alternative approaches, like the Hellmann-Feynman theorem [3] for ab initio methods. However, in the worst-case scenario, namely when we can only evaluate the Hamiltonian, we would need to approximate each partial derivative numerically by a finite difference. This would mean that evaluating the forces requires on the order of 2N evaluations of the Hamiltonian, where N is the number of particles.
¹⁵ Do not let these names fool you: I only made up the Lazy Man's approach; leapfrog integration is an actual thing.

3.2.1 The Lazy Man's approach

At time t_0 + Δt, the exact solution to Eq. (33), for an arbitrary value of Δt, is

\[ r_i(t_0+\Delta t) = r_i(t_0) + \int_{t_0}^{t_0+\Delta t} dt\, \underbrace{\left[ \dot{r}_i(t_0) + \int_{t_0}^{t} dt'\, \ddot{r}_i(t') \right]}_{\dot{r}_i(t)}. \tag{37} \]

Now, Eq. (37) above presents the complication that we only know positions, velocities and forces at precisely t_0. Therefore, the innermost integral cannot be evaluated, since it requires knowledge of the value of r̈_i at times other than t_0. However, if we make Δt very small, then we can claim that the forces or, equivalently, the accelerations, are approximately constant between t_0 and t_0 + Δt. Under such an approximation, Eq. (37) reduces to:

\[ r_i(t_0+\Delta t) = r_i(t_0) + \dot{r}_i(t_0)\, \Delta t + \frac{F_i(t_0)}{2 m_i}\, \Delta t^2, \tag{38} \]

which requires only knowledge of the different variables at t_0. The velocities are easily computed too:

\[ \dot{r}_i(t_0+\Delta t) = \dot{r}_i(t_0) + \frac{F_i(t_0)}{m_i}\, \Delta t. \tag{39} \]

After updating the positions, one can evaluate the forces at t = t_0 + Δt and use Eqs. (38) and (39) to predict positions and velocities at t = t_0 + 2Δt. Recursive use of these equations allows us to propagate the equations of motion of our system from time t_0 up to an arbitrary later time. The longer the propagation time, the more expensive the simulation: to obtain the state of the system at t = t_0 + nΔt, that is, n time steps after initialization, we need to perform n force evaluations.

The main message to keep in mind here is that the longer the time step Δt, the worse our approximation that the forces are constant between consecutive time steps. Therefore, as a general rule, the shorter the time step, the more accurate our simulation. Too long a time step will lead to unrealistic dynamics and energy drift; too short a time step will lead to a waste of CPU time. Therefore, an important consideration for MD in terms of optimizing resources is to choose the longest time step which does not compromise the accuracy of the dynamics. We will see that the Lazy Man's approach is less accurate than schemes commonly used in popular MD codes, and requires shorter time steps in comparison. One should therefore avoid being Lazy whenever possible; however, for testing a new implementation and to illustrate the idea behind the propagation of equations of motion, the simplicity of this approach becomes useful.

The Lazy Man overlooked the fact that physical quantities rarely change abruptly with time, but rather have a smooth time evolution.¹⁶ Strictly, we can only sample the exact values of the forces discretely (in time), at the same times for which a list of positions is available. This means that we only know the approximate positions and the exact forces (that is, those compatible with our estimated positions) at times t = t_0, t_0 + Δt, t_0 + 2Δt, .... However, from the knowledge that the positions behave smoothly between consecutive time steps, we can interpolate them in between time steps, so that the integration in Eq. (37) can be carried out more accurately. That is what the following integration algorithms do.

¹⁶ "Smooth" in this context has the usual mathematical meaning that the function is continuous and continuously differentiable.
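To see the energy drift in practice before the orbital hands-on, here is a minimal, self-contained sketch (our own illustration, not part of the original notes) that propagates a 1D harmonic oscillator with Eqs. (38)-(39) and prints the total energy; the exact dynamics conserves E = 1/2, whereas with the Lazy Man's scheme E grows steadily, and faster for larger Δt:

  program lazy_oscillator

! Our own illustration (not part of the original notes): integrate a 1D
! harmonic oscillator with m = k = 1 using the Lazy Man's scheme,
! Eqs. (38)-(39). The exact dynamics conserves E = 1/2; with this scheme
! the total energy instead grows steadily (energy drift).
    implicit none
    real*8 :: x, v, a, E, dt
    integer :: step, n

    x = 1.d0
    v = 0.d0
    dt = 1.d-2
    n = 10000
    do step = 1, n
      a = -x                        ! F/m at the current time, for V(x) = x**2/2
      x = x + v*dt + a/2.d0*dt**2   ! Eq. (38)
      v = v + a*dt                  ! Eq. (39)
      if( modulo(step, 1000) == 0 )then
        E = v**2/2.d0 + x**2/2.d0
        write(*,*) dfloat(step)*dt, E
      end if
    end do

  end program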

3.2.2 Verlet

The idea that physical properties evolve smoothly with time forms the basis for the Verlet algorithm. Given three data points, namely three position vectors at consecutive time steps (t = t_0 − Δt, t_0, t_0 + Δt), we can unambiguously define a 2nd-order polynomial which goes through all three points (and should therefore be a good approximant within the fitting domain):

\[ r_i(t) = r_i(t_0-\Delta t)\, \frac{(t-t_0)(t-t_0-\Delta t)}{2 \Delta t^2} - r_i(t_0)\, \frac{(t-t_0+\Delta t)(t-t_0-\Delta t)}{\Delta t^2} + r_i(t_0+\Delta t)\, \frac{(t-t_0+\Delta t)(t-t_0)}{2 \Delta t^2}. \tag{40} \]

The equation above can be differentiated twice with respect to t to give the acceleration, which is constant since the polynomial is of order two. The estimated acceleration will be most accurate when compared with the actual (e.g., explicitly calculated) acceleration evaluated at the central point, t = t_0:

\[ \ddot{r}_i(t_0) = \frac{r_i(t_0-\Delta t) - 2 r_i(t_0) + r_i(t_0+\Delta t)}{\Delta t^2}. \tag{41} \]

We can rewrite the expression above to give:

\[ r_i(t_0+\Delta t) = 2 r_i(t_0) - r_i(t_0-\Delta t) + \frac{F_i(t_0)}{m_i}\, \Delta t^2, \tag{42} \]

which is the regular Verlet integration expression to estimate r_i(t_0 + Δt) from the knowledge of the forces at t = t_0 and the previous positions at t = t_0 and t = t_0 − Δt. This algorithm does not involve the velocities;¹⁷ however, we may be interested in knowing the value of the velocities for a number of reasons. A symmetric difference can give them directly:

\[ \dot{r}_i(t_0) = \frac{r_i(t_0+\Delta t) - r_i(t_0-\Delta t)}{2 \Delta t}. \tag{43} \]

However, the expression above is not particularly accurate; other schemes allow a better estimation of velocities. The velocity Verlet algorithm uses Eq. (38) to compute the positions at t = t_0 + Δt, but also takes advantage of the knowledge of the newly predicted positions to estimate velocities more accurately:

\[ r_i(t_0+\Delta t) = r_i(t_0) + \dot{r}_i(t_0)\, \Delta t + \frac{F_i(t_0)}{2 m_i}\, \Delta t^2, \tag{44} \]

\[ \dot{r}_i(t_0+\Delta t) = \dot{r}_i(t_0) + \frac{F_i(t_0) + F_i(t_0+\Delta t)}{2 m_i}\, \Delta t, \tag{45} \]

where F_i(t_0 + Δt) can be readily computed as soon as the r_i(t_0 + Δt) are available.

3.2.3 Leapfrog

It should be apparent at this point that integration methods are closely related to discrete approximations to differentiation. In particular, the regular Verlet integrator is based on a central difference approximation to the second derivative of r_i(t), whereas velocity Verlet is based on a constant acceleration approximation combined with an improved velocity estimation.

¹⁷ Additionally, the algorithm is not well defined for the first step, since it needs at least two previous positions to be able to make a prediction. For the first step, one can use Eq. (38).

While a central difference approximation for the 2nd derivative is most accurate at the central one of the three points used for the interpolation, for the first-order derivative a central difference obtained from two data points gives the highest accuracy in between the data points. That is:

\[ \dot{r}_i\!\left( t_0 + \frac{\Delta t}{2} \right) = \frac{r_i(t_0+\Delta t) - r_i(t_0)}{\Delta t} \;\Rightarrow\; r_i(t_0+\Delta t) = r_i(t_0) + \dot{r}_i\!\left( t_0 + \frac{\Delta t}{2} \right) \Delta t, \tag{46} \]

\[ \ddot{r}_i(t_0) = \frac{\dot{r}_i\!\left( t_0 + \frac{\Delta t}{2} \right) - \dot{r}_i\!\left( t_0 - \frac{\Delta t}{2} \right)}{\Delta t} \;\Rightarrow\; \dot{r}_i\!\left( t_0 + \frac{\Delta t}{2} \right) = \dot{r}_i\!\left( t_0 - \frac{\Delta t}{2} \right) + \frac{F_i(t_0)}{m_i}\, \Delta t, \tag{47} \]

where positions (and accelerations/forces) are naturally given on the time grid t = t_0, t_0 + Δt, t_0 + 2Δt, ... and velocities are given on the grid t = t_0 + Δt/2, t_0 + 3Δt/2, .... Therefore, positions and velocities are always offset by half a time step. Graphically, it is as if they were leaping like a frog, hence the name of the algorithm. Although for many practical purposes this offset is not a problem, it can become an issue when one needs access to synchronous positions and velocities, e.g., to calculate instantaneous angular momenta.

3.2.4 Error estimate for different algorithms

Since the integration methods presented are approximations to the exact solution of Eq. (37), they have errors associated with them. It can be shown¹⁸ that the truncation errors in Verlet, velocity Verlet and leapfrog are of different orders for position and velocity, while the global (propagation) errors are the same (Table 1). For instance, Taylor-expanding the positions around t_0 and summing the expansions for +Δt and −Δt cancels all odd powers,

\[ r_i(t_0+\Delta t) = 2 r_i(t_0) - r_i(t_0-\Delta t) + \ddot{r}_i(t_0)\, \Delta t^2 + \mathcal{O}(\Delta t^4), \]

which is Eq. (42) with a truncation error of order Δt⁴ in the positions. The global errors being equal means that the accumulated error, and thus the precision of each method, is basically the same for all three integrators. The error for the Lazy Man's approach is much worse. We see again that being Lazy does not pay off. However, do not take my word for it: in the hands-on exercise we will check how using the Lazy Man's approach leads to nonsense results for a simple orbital system.

Table 1: Errors for different algorithms

                   Truncation error        Propagation error
Algorithm          Position   Velocity     Position   Velocity
Verlet             O(Δt⁴)     O(Δt²)       O(Δt²)     O(Δt²)
Velocity Verlet    O(Δt³)     O(Δt³)       O(Δt²)     O(Δt²)
Leapfrog           O(Δt³)     O(Δt³)       O(Δt²)     O(Δt²)
Lazy Man's         O(Δt³)     O(Δt²)       O(Δt)      O(Δt)

¹⁸ Do it as an exercise. Hint: use Taylor expansions.
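Of the three integrators above, leapfrog is the only one not coded in the hands-on of Sec. 3.4 (a leapfrog Nosé-Hoover integrator is deferred to later in the course). For completeness, here is a minimal sketch in the same style as the subroutines of Sec. 3.4.2, directly transcribing Eqs. (46)-(47); the subroutine name and argument order are our own choice, not taken from the original code:

! Leapfrog, Eqs. (46)-(47): velocities live on the half-step grid.
! v_in is v(t0 - dt/2), v_out is v(t0 + dt/2), and the position is
! advanced from t0 to t0 + dt using the updated half-step velocity.
  subroutine leapfrog(x_in, v_in, F, m, dt, x_out, v_out)
    implicit none
    real*8, intent(in) :: x_in(1:3), v_in(1:3)
    real*8, intent(out) :: x_out(1:3), v_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

    v_out(1:3) = v_in(1:3) + F(1:3)/m*dt    ! Eq. (47)
    x_out(1:3) = x_in(1:3) + v_out(1:3)*dt  ! Eq. (46)
  end subroutine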

3.3 Thermostats and barostats

So far we have not discussed how the concepts of temperature and pressure play out in an MD simulation. A straightforward implementation of an (accurate) integration scheme for an isolated system interacting via a conservative potential leads to conservation of energy and particle number. Additionally, if we set the dimensions of the simulation box to be fixed, the volume is also conserved. This corresponds to a thermodynamic microcanonical ensemble or, in MD terminology, an NVE ensemble.¹⁹ More generally, we may be interested in studying the dynamics of a system in contact with a temperature and/or pressure bath, which we will refer to as NVT and NPT ensembles. In such simulations, one needs to ensure that the temperature and/or pressure of the system are regulated somehow. In this course, we will not deal with variable particle number simulations (i.e., constant chemical potential µ).

Before discussing how to regulate the temperature (and pressure), it is worth providing a consistent definition. Temperature is defined via the equipartition theorem:

\[ T = \frac{2 E_{kin}}{N_{DoF}\, k_B}, \tag{48} \]

where E_kin is the kinetic energy, N_DoF is the number of degrees of freedom and k_B is Boltzmann's constant. Note that under periodic boundary conditions one may choose to remove the degrees of freedom of the system's center of mass; this should be taken into account for a consistent definition of temperature. The pressure P is defined via the volume derivative of the system's internal (or potential) energy U:

\[ P = -\frac{\partial U}{\partial V}. \tag{49} \]

Now that we have defined temperature and pressure in the context of MD, let us discuss the mathematical tools used to keep these quantities constant²⁰ in an MD simulation. These are known as thermostats and barostats, for temperature and pressure regulation, respectively. The simplest thermostat is one that rescales the atomic velocities to match the target temperature T_0. Suppose that the instantaneous temperature T(t_0) is (from Eq. (48) and the relation between kinetic energy and velocities):

\[ T(t_0) = \sum_{i=1}^{N} \frac{m_i\, |\dot{r}_i(t_0)|^2}{3 N k_B}, \tag{50} \]

where we have assumed three degrees of freedom per particle, N_DoF = 3N. Rescaling all the atomic velocities by the appropriate factor,

\[ \dot{r}_i(t_0) \leftarrow \sqrt{\frac{T_0}{T}}\, \dot{r}_i(t_0), \tag{51} \]

leads to the desired temperature. However, if this rescaling happens too fast (e.g., at each time step) it may significantly perturb the dynamics of the system.²¹ Commonly used velocity-rescaling thermostats tend to dampen the rescaling using a characteristic time constant, in such a way that the rescaling drives the system back towards the target temperature with a strength which depends both on how far the instantaneous temperature is from the target temperature and on the time scale over which the temperature is expected to equilibrate:

\[ \frac{dT}{dt} = \frac{T_0 - T}{\tau}, \tag{52} \]

where τ is the characteristic time constant. The correction is

\[ \dot{r}_i(t_0) \leftarrow \left[ 1 + \frac{\Delta t}{\tau} \left( \frac{T_0}{T(t_0)} - 1 \right) \right]^{1/2} \dot{r}_i(t_0), \tag{53} \]

where the scaling is computed for the time corresponding to one time step, Δt. This damped velocity-rescaling thermostat is known as the Berendsen thermostat [6].

¹⁹ NVE refers to the conserved quantities: N for particle number, V for volume and E for energy.
²⁰ Actually, we are mostly interested in keeping temperature and pressure constant around a target value; typical thermostats and barostats will keep the instantaneous quantities oscillating in time around the target values.
²¹ This is especially true for small systems.
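A minimal sketch of Eq. (53) in the style of the integrator subroutines of Sec. 3.4.2 (the subroutine name and argument list are our own, not from the original code); the instantaneous temperature T would itself be obtained from Eq. (50):

! Berendsen-style velocity rescaling, Eq. (53): drive the instantaneous
! temperature T towards the target T0 with characteristic time constant tau.
  subroutine berendsen_rescale(n, vel, T, T0, tau, dt)
    implicit none
    integer, intent(in) :: n
    real*8, intent(inout) :: vel(1:n,1:3)
    real*8, intent(in) :: T, T0, tau, dt
    real*8 :: lambda

    lambda = dsqrt( 1.d0 + dt/tau*(T0/T - 1.d0) )
    vel(1:n,1:3) = lambda*vel(1:n,1:3)
  end subroutine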

Even though the Berendsen thermostat is quite good for equilibrating a system,²² it does not sample a canonical ensemble, in the sense that the velocity distribution is not compatible with the expected distribution of velocities in a canonical ensemble at the target temperature. To solve this issue, other thermostats have been introduced, most notably the Nosé-Hoover thermostat [7, 8].²³ In the Nosé-Hoover approach, the real Hamiltonian is replaced by an auxiliary Hamiltonian, and a fictitious degree of freedom is introduced. This degree of freedom can exchange energy with the real degrees of freedom. The equations of motion within the Nosé-Hoover formalism, in Hamilton's momentum-position form, are the following:

\[ \dot{r}_i = \frac{p_i}{m_i}, \qquad \dot{p}_i = F_i - \frac{p_s}{Q}\, p_i, \tag{54} \]

\[ \left( \dot{s} = \frac{p_s}{Q} \right), \qquad \dot{p}_s = \sum_i \frac{|p_i|^2}{m_i} - (N_{DoF} + 1)\, k_B T_0, \tag{55} \]

where s is the fictitious degree of freedom and p_s is its associated momentum. Q is a fictitious mass for the extra degree of freedom, which has an influence on how fast and how stable the thermostating will be. This mass needs to be optimized for the system at hand, and is often supplied as a time constant τ through the relation Q = (N_DoF + 1) k_B T_0 τ². In Eq. (55), we have put the equation of motion for s in between brackets because solving that particular equation is not required to obtain the dynamics of the real degrees of freedom (the other equations do not depend on s).

Stabilization of the system's pressure is sometimes required when a constant-pressure NPT simulation is carried out. Although more sophisticated barostats exist, usually one relies on Berendsen's box-rescaling approach to optimize P, and then carries out a constant-volume NVT simulation with the pre-optimized simulation box. This is so for mainly two reasons: 1) barostating can easily become unstable, it is computationally expensive, and systems sometimes tend to blow up; 2) for most practical applications, capturing the effect of instantaneous temperature fluctuations is far more important than that of pressure fluctuations. Therefore, to check the effect of pressure changes, one can run NVT simulations for different volumes, each corresponding to a particular pressure value. In any case, just like the Berendsen thermostat, the Berendsen barostat relies on a characteristic time constant:

\[ \frac{dP}{dt} = \frac{P_0 - P}{\tau_P}. \tag{56} \]

Now, instead of velocity rescaling we need to rescale the simulation box, because of the relation between volume and pressure:

\[ dV = -V \beta\, dP, \tag{57} \]

where β is the system's compressibility.

²² "Equilibration" is MD jargon for the period of the dynamics that it takes an (often unrealistic) initial configuration of a molecular system to reach a steady state around the desired temperature.
²³ Thermostats which are also popular nowadays are Nosé-Hoover chains or Langevin dynamics, to name a couple. The regular Nosé-Hoover remains widely used, and for simplicity will be the only NVT thermostat discussed here.

The rescaling factors for volume, positions and box vectors²⁴ are, respectively:

\[ V(t_0) \leftarrow \left( 1 - \frac{\Delta t}{\tau}\, \beta\, (P_0 - P) \right) V(t_0), \tag{58} \]

\[ r_i(t_0) \leftarrow \left( 1 - \frac{\Delta t}{\tau}\, \beta\, (P_0 - P) \right)^{1/3} r_i(t_0), \tag{59} \]

\[ L_j(t_0) \leftarrow \left( 1 - \frac{\Delta t}{\tau}\, \beta\, (P_0 - P) \right)^{1/3} L_j(t_0). \tag{60} \]

These scaling factors are valid for liquids and gases. For isotropic solids, one can use the same expressions, but the compressibility should be replaced by the inverse of the bulk modulus B. For non-isotropic solids, the stiffness tensor should be used to compute a set of scaling factors for the lattice vectors (up to six, which is the number of independent components of the strain tensor). In this course, we will keep things nice and isotropic.

3.4 Hands-on: Simulating an orbital system in 2D

It is now time to put to the test some of the ideas discussed in this section. In particular, we will write a code to simulate the dynamics of a 3-body orbital system of the Sun-Earth-Moon type, where for simplicity we will constrain the motion of the bodies to a 2D plane. This practical exercise will involve coding the force field (gravitational interaction between the massive bodies) to obtain the instantaneous forces, and coding the integrator(s) for the equations of motion to propagate the positions in time. We will code the Lazy Man's, Verlet and velocity Verlet schemes, and will be able to compare each algorithm to the others and to test the effect of the time step choice on the results. The example codes will be given in Fortran which, despite being antiquated and despised by some people,²⁵ remains quite popular for science applications (in particular among physicists). You can use your preferred language, but note that writing an MD code natively in Python²⁶ will result in orders of magnitude slower execution than a Fortran or C code.

3.4.1 Force field interactions

Our orbital system will interact via a gravitational force field:

\[ V_{ij} = -G\, \frac{m_i m_j}{r_{ij}}, \tag{61} \]

\[ V = \sum_i \sum_{j>i} V_{ij}, \tag{62} \]

\[ F_{ij} = G\, \frac{m_i m_j}{r_{ij}^3}\, (r_j - r_i), \qquad F_{ji} = -F_{ij}, \tag{63} \]

\[ F_i = \sum_j F_{ij}, \tag{64} \]

where G is Newton's gravitation constant. Note that an important consideration, as discussed in Sec. 3.1, is that we provide analytical expressions for the potential energy and forces.

²⁴ In the simplest case, the box vectors are perpendicular to each other and given by L_1 = (L_x, 0, 0), L_2 = (0, L_y, 0) and L_3 = (0, 0, L_z).
²⁵ John Krueger famously said "A computer without COBOL and Fortran is like a piece of chocolate cake without ketchup and mustard."
²⁶ And I do like Python.

This is so because an efficient evaluation of the forces cannot be carried out if we need to obtain them from numerical differentiation of the potential. Note also our convention that F_ij stands for the force acting on particle i due to particle j and that, according to Newton's third law, it is opposite to the force acting on j due to i. The total force acting on i is the sum of all the pair-wise forces acting on it (summation over j).

In practice, since the gravitational interaction is essentially identical to an electrostatic interaction, barring the value of the constants and the fact that charges are signed (and masses are not), we will write a general-purpose force field module which can handle both interactions and easily switch between them by providing different constants. This looks like the following:

potentials.f90

module potentials

  implicit none

  contains

! This subroutine returns the distance between ri and rj under
! certain boundary conditions
  subroutine get_distance(posi, posj, L, PBC, dist, d)
    implicit none
    real*8, intent(in) :: posi(1:3), posj(1:3), L(1:3)
    logical, intent(in) :: PBC(1:3)
    real*8, intent(out) :: d
    real*8, intent(out) :: dist(1:3)
    real*8 :: d2
    integer :: i

    d2 = 0.d0
    do i = 1, 3
      if( PBC(i) )then
        dist(i) = modulo( posj(i) - posi(i), L(i) )
        if( dist(i) > L(i)/2.d0 )then
          dist(i) = dist(i) - L(i)
        end if
      else
        dist(i) = posj(i) - posi(i)
      end if
      d2 = d2 + dist(i)**2
    end do
    d = dsqrt(d2)
  end subroutine

! This returns potential energy and force for a 1/r type potential. G is a
! constant prefactor; it could be the gravity constant or e^2/(4 pi eps0), etc.
  subroutine pairwise_electrostatic_potential(posi, posj, Zi, Zj, G, L, PBC, &
                                              Epot, fi)
    implicit none
    real*8, intent(in) :: posi(1:3), posj(1:3), Zi, Zj, G, L(1:3)
    logical, intent(in) :: PBC(1:3)
    real*8, intent(out) :: Epot, fi(1:3)
    real*8 :: d, dist(1:3)

    call get_distance(posi, posj, L, PBC, dist, d)
    Epot = G*Zi*Zj/d
!   The force on i is calculated assuming the convention that dist(1) = xj - xi
    fi(1:3) = -G*Zi*Zj*dist(1:3)/d**3
  end subroutine

end module

3.4.2 Integrators

We are making the choice to keep our force fields, integrators and main code separated by design. Compartmentalizing code is generally a good idea whenever possible to keep things tidy, even if sometimes it comes at the cost of efficiency. We put our force fields in the potentials module (in file potentials.f90). Now, we are going to put our integrators in the integrators module (file integrators.f90).²⁷ For the time being, we are coding microcanonical (non-thermostated) versions of Verlet, velocity Verlet and the Lazy Man; adding velocity-rescaling thermostating is straightforward. However, because Nosé-Hoover involves rewriting the equations of motion, making an integrator compatible with it could require additional complexity, depending on how one does the coding in practice.²⁸ Our integrators look like this:

²⁷ Did I also mention the virtues of keeping an explicit naming convention for your code?
²⁸ Later in this course we will code a leapfrog Nosé-Hoover integrator.

integrators.f90

module integrators

  implicit none

  contains

! Lazy Man's approach
  subroutine lazy_man(x_in, v_in, F, m, dt, x_out, v_out)
    implicit none
    real*8, intent(in) :: x_in(1:3), v_in(1:3)
    real*8, intent(out) :: x_out(1:3), v_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

    x_out(1:3) = x_in(1:3) + v_in(1:3)*dt + F(1:3)/m/2.d0*dt**2
    v_out(1:3) = v_in(1:3) + F(1:3)/m*dt
  end subroutine

! Regular Verlet
  subroutine verlet(x_in, F, m, dt, x_out)
    implicit none
    real*8, intent(in) :: x_in(1:2,1:3)
    real*8, intent(out) :: x_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

    x_out(1:3) = 2.d0*x_in(2,1:3) - x_in(1,1:3) + F(1:3)/m*dt**2
  end subroutine

! Velocity Verlet is two subroutines
  subroutine velocity_verlet_vel(v_in, F, m, dt, v_out)
    implicit none
    real*8, intent(in) :: v_in(1:3)
    real*8, intent(out) :: v_out(1:3)
    real*8, intent(in) :: F(1:2,1:3), m, dt

    v_out(1:3) = v_in(1:3) + ( F(2,1:3) + F(1,1:3) )/2.d0/m*dt
  end subroutine

  subroutine velocity_verlet_pos(x_in, v_in, F, m, dt, x_out)
    implicit none
    real*8, intent(in) :: x_in(1:3), v_in(1:3)
    real*8, intent(out) :: x_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

    x_out(1:3) = x_in(1:3) + v_in(1:3)*dt + F(1:3)/m/2.d0*dt**2
  end subroutine

end module

3.4.3 Simulation workflow

In this example we will only have 3 particles and want to get going fast, so we will make the lazy choice of keeping the initial positions and velocities, time step definition, etc. in the code (i.e., hard-coded). Later in the course we will create an interface so that the code can read input files and does not need to be recompiled every time we want to change some simulation details. Our full workflow (for the Lazy Man's approach) looks like this:

orbital.f90

program orbital

  use potentials
  use integrators

  implicit none

  real*8 :: pos(1:3,1:3), vel(1:3,1:3), m(1:3), dt, L(1:3), E_sum, Eij
  real*8 :: fi_sum(1:3,1:3), fi(1:3), fi_prev(1:3,1:3), fi_array(1:2,1:3)
  real*8 :: xi_prev(1:3,1:3), xi_array(1:2,1:3), new_pos(1:3), new_vel(1:3)
  integer :: step, n, i, j
  logical :: PBC(1:3)

! Sun initial conditions
  pos(1,1:3) = (/ 0.d0, 0.d0, 0.d0 /)
  vel(1,1:3) = (/ 0.d0, -0.335d0, 0.d0 /)
  m(1) = 10.d0

! Earth initial conditions
  pos(2,1:3) = (/ -1.d0, 0.d0, 0.d0 /)
  vel(2,1:3) = (/ 0.d0, 4.d0, 0.d0 /)
  m(2) = 1.d0

! Moon initial conditions
  pos(3,1:3) = (/ -0.95d0, 0.d0, 0.d0 /)
  vel(3,1:3) = (/ 0.d0, -1.d0, 0.d0 /)
  m(3) = 0.1d0

! Time step
  dt = 1.d-3
  n = 10000

! BC
  PBC = .false.
  L = (/ 1.d0, 1.d0, 1.d0 /)

  open(unit=10, file='orbit_lazy', status='unknown')
! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0
    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), -1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      call lazy_man(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), dt, &
                    new_pos, new_vel)
      pos(i,1:3) = new_pos(1:3)
      vel(i,1:3) = new_vel(1:3)
    end do
  end do
  close(10)

end program

Note that we are not worrying too much at the moment about the units of our positions, velocities and masses. We even choose our gravitation constant to be unity. For this toy example, this is fine. When we look at calculating the properties of realistic gases, we will make an effort to ensure all the units and magnitudes make sense. We are writing our results to file orbit_lazy in text format, which will allow us to do some plotting later on. Verlet integration can be done by substituting the last part of the code above by this:

Verlet workflow

  open(unit=10, file='orbit_verlet', status='unknown')
! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0

    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), -1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      if( step == 0 )then
!       Regular Verlet needs two previous positions; bootstrap the first
!       step with Eq. (38)
        xi_prev(i,1:3) = pos(i,1:3)
        call lazy_man(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), dt, &
                      new_pos, new_vel)
      else
        xi_array(1,1:3) = xi_prev(i,1:3)
        xi_array(2,1:3) = pos(i,1:3)
        call verlet(xi_array, fi_sum(i,1:3), m(i), dt, new_pos)
      end if
      xi_prev(i,1:3) = pos(i,1:3)
      pos(i,1:3) = new_pos(1:3)
!     Regular Verlet does not track the velocities
!     vel(i,1:3) = new_vel(1:3)
    end do
  end do
  close(10)

And velocity Verlet can be used like this:

Velocity Verlet workflow

  open(unit=10, file='orbit_velocity_verlet', status='unknown')
! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0
    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), -1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      if( step == 0 )then
!       At the first step the velocity is just the initial condition
        call velocity_verlet_pos(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), &
                                 dt, new_pos)
        fi_prev(i,1:3) = fi_sum(i,1:3)
        pos(i,1:3) = new_pos(1:3)
      else
        fi_array(1,1:3) = fi_prev(i,1:3)
        fi_array(2,1:3) = fi_sum(i,1:3)
!       Note that this velocity trails the position by one time step
        call velocity_verlet_vel(vel(i,1:3), fi_array, m(i), dt, new_vel)
        call velocity_verlet_pos(pos(i,1:3), new_vel, fi_sum(i,1:3), m(i), &
                                 dt, new_pos)
        fi_prev(i,1:3) = fi_sum(i,1:3)
        pos(i,1:3) = new_pos(1:3)

        vel(i,1:3) = new_vel(1:3)
      end if
    end do
  end do
  close(10)

To compile and run the code, we can execute the following commands in a Linux terminal:

gfortran -c potentials.f90 integrators.f90
gfortran -o orbital.ex orbital.f90 *.o
./orbital.ex

This will generate the trajectory files, which we can use for visualization.

3.4.4 Visualizing the trajectories

Use your favorite plotting program to visualize the trajectory. In Fig. 2 we show a comparison between approximate and exact trajectories (where "exact" results are obtained with a time step 100 times smaller). Full trajectories can be visualized on YouTube:

Verlet: https://youtu.be/1h-g59426ou
Velocity Verlet: https://youtu.be/kqap90swtiq
Lazy Man: https://youtu.be/nhazgkkn1-g

It is now manifestly clear that the Lazy Man approach is terrible, leading not only to quantitatively wrong results but also to completely nonsensical behavior, such as our Moon-like object shooting away into an unstable orbit. Verlet and velocity Verlet predict that our Moon will lag behind the exact solution, with the error accumulating over time. In Fig. 2 we can also see the time evolution of the accumulated error in the predicted position of the Moon, and how this error depends on the chosen time step. We know that the accumulated error for Verlet should behave as t Δt², meaning that the logarithm of the error should behave as log t + 2 log Δt. We see that this is indeed reflected in Fig. 2. If we were to increase the time step even further, at some point we could make the orbital motion unstable for spurious numerical reasons (as with the Lazy Man's approach). The main lesson learned here is that one needs to carefully select a sensible integration scheme together with an integration time step which preserves the system dynamics.

A final interesting point to note from Fig. 2 is that the positions obtained with regular Verlet are instantaneously more accurate than those computed with velocity Verlet, even though the accumulated error in the position is the same. This resonates again with the summary of errors given in Table 1.
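Any plotting program will do; for instance, with gnuplot, and recalling the column layout our workflows write out (time, potential energy, and then x-y pairs for the Sun, Earth and Moon), the Moon's trajectory and the potential energy can be inspected with:

gnuplot> plot 'orbit_lazy' using 7:8 with lines title 'Moon'
gnuplot> plot 'orbit_lazy' using 1:2 with lines title 'potential energy'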

[Figure 2: Comparison between different approximate solutions and the exact one, after many time steps, and accumulated errors for different integrators and time steps. The panels show the Verlet, velocity Verlet and Lazy Man trajectories against the exact solution at time = 10000, the accumulated error in the Moon's position versus time for the three integrators, and, for regular Verlet, the accumulated error for time steps Δt = 1, 2, 5 and 10 × 10⁻⁴.]

References

[1] D. A. McQuarrie. Statistical Mechanics. Harper & Row, New York, 1976.
[2] C. J. Cramer. Essentials of Computational Chemistry: Theories and Models. 2nd ed. John Wiley & Sons, 2004.
[3] R. M. Martin. Electronic Structure. Cambridge University Press, 2004.
[4] M. Z. Bazant, E. Kaxiras, and J. F. Justo. "Environment-dependent interatomic potential for bulk silicon". In: Phys. Rev. B 56 (1997), p. 8542.
[5] A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi. "Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons". In: Phys. Rev. Lett. 104 (2010), p. 136403.
[6] H. J. C. Berendsen, J. P. M. Postma, W. F. van Gunsteren, A. DiNola, and J. R. Haak. "Molecular dynamics with coupling to an external bath". In: J. Chem. Phys. 81 (1984), p. 3684.
[7] S. Nosé. "A unified formulation of the constant temperature molecular dynamics methods". In: J. Chem. Phys. 81 (1984), p. 511.
[8] W. G. Hoover. "Canonical dynamics: Equilibrium phase-space distributions". In: Phys. Rev. A 31 (1985), p. 1695.
[9] N. F. Carnahan and K. E. Starling. "Equation of state for nonattracting rigid spheres". In: J. Chem. Phys. 51 (1969), p. 635.
[10] B. E. F. Fender and G. D. Halsey Jr. "Second virial coefficients of argon, krypton, and argon-krypton mixtures at low temperatures". In: J. Chem. Phys. 36 (1962), p. 1881.
[11] A. P. Thompson, S. J. Plimpton, and W. Mattson. "General formulation of pressure and stress tensor for arbitrary many-body interaction potentials under periodic boundary conditions". In: J. Chem. Phys. 131 (2009), p. 154107.