A (short) practical introduction to kinetic theory and thermodynamic properties of gases through molecular dynamics


Miguel A. Caro

March 28, 2018

Contents

1  Preface
2  Review of thermodynamics of ideal gases
     Molecular partition functions
     Equation of state
3  Practical introduction to molecular dynamics simulations
     Force fields
     Integrating the equations of motion
       The Lazy Man's approach
       Verlet
       Leapfrog
       Error estimate for different algorithms
     Thermostats and barostats
     Hands-on: Simulating an orbital system in 2D
       Force field interactions
       Integrators
       Simulation workflow
       Visualizing the trajectories
4  Non-ideal gases
     General considerations on non-ideal gases: virial expansion
     Hard-sphere gases
     Lennard-Jones gases
     Hands-on: Estimating the B_3 coefficient for hard-sphere gases
     Hands-on: Compressibility factor of hard-sphere gases from MD
     Hands-on: Properties of Lennard-Jones gases from MD
5  Kinetic theory of gases
     Molecular collisions
     Continuum mechanics
     The Boltzmann equation
     Chapman-Enskog theory
     Brownian motion
     Hands-on: ?
6  Special topic: The 2PT model for liquid systems
     The harmonic approximation: density of states
     Fluidicity and conceptual partition of the degrees of freedom
     Hands-on: Validation of 2PT for liquids and liquid mixtures
   References

3 Practical introduction to molecular dynamics simulations

The need for molecular dynamics arises partly from the difficulty of evaluating the phase-space integral over Boltzmann factors of any system beyond ideal and toy models, which is required to compute the partition function:

    Q(N, V, T) ∝ ∫ dp dq exp(−H(p, q)/(k_B T)).    (31)

The equation above converges extremely slowly with respect to the size of the sampled phase-space region. The so-called ergodic hypothesis tells us that the probability distribution for finding a system at a particular position in phase space, drawn from different times along its trajectory, is the same as the probability distribution drawn from the different microstates in the ensemble. In other words, the time average of an observable O of the system is the same as the ensemble average:

    Ō = [∫ dp dq O exp(−H(p, q)/(k_B T))] / [∫ dp dq exp(−H(p, q)/(k_B T))] ≟ (1/τ) ∫₀^τ dt O(t).    (32)

There is no general proof that the ergodic hypothesis holds true;10 there is, however, proof that for some particular systems it does not hold, so it certainly does not hold universally. For typical problems studied with MD simulations, this hypothesis seems to work reasonably well, provided that sufficiently long simulation times are used. Since calculating time averages is a lot less expensive than calculating ensemble averages, MD simulations are routinely used to compute average properties of molecular systems. In addition to computing thermodynamic quantities (i.e., equilibrium properties), for which approximating the partition function becomes important, one may be interested in studying the evolution of a system which is not in thermodynamic equilibrium. For instance, one may want to look at the denaturation process of a protein when the temperature is too high. In those instances, MD becomes quite useful. We shall see in the following how molecular trajectories are obtained in practice using computational approaches.
For atomic or molecular systems with more than a few particles, except for the simplest cases, it is impossible to find analytical solutions describing their trajectories. The main task underlying MD simulations is the numerical integration of the equations of motion of an ensemble of interacting particles:11

    r̈_i = F_i/m_i,    (33)

where r_i, m_i and F_i are, respectively, the position, mass and force acting on particle i. For convenience of notation, we will refer throughout this document to time derivatives by adding dots: ∂x/∂t ≡ ẋ, ∂²x/∂t² ≡ ẍ, etc. Bold symbols indicate vectors (usually in 3-dimensional space).

10 That is why it is called a hypothesis.
11 We are assuming that our particles move like classical objects, and thus can use Newtonian mechanics instead of quantum mechanics. This is a very bad approximation for electrons, but it works quite well for anything heavier than a proton (deuterium and up from there) and often can be used to look at the movement of regular hydrogen atoms (protium). In MD, the assumption that typical electronic and nuclear motion time scales are decoupled and that the movement of nuclei can be described with classical mechanics is known as the Born-Oppenheimer approximation. All the quantum effects within this approximation, which emanate from the electrons, are implicitly contained within the forces acting on the nuclei.

The force acting on particle i is connected in the usual way to the system's Hamiltonian via its gradient with respect to the position of i:

    F_i = −∇_{r_i} H({p_j}, {r_j}),    (34)

where the Hamiltonian contains the kinetic-energy and potential-energy parts:

    H({p_j}, {r_j}) = Σ_j |p_j|²/(2m_j) + V({r_j}).    (35)

By stating the objective of MD above we have raised two issues, one explicit, namely how to solve Eq. (33), and another one implicit. The implicit issue is how to evaluate F_i, which for a microscopic system is equivalent to the issue of how to approximate the nature and strength of atomic interactions. This boils down to approximating V({r_j}). A very accurate determination of the forces is highly non-trivial because of the quantum nature of microscopic systems, and the accuracy with which the F_i are determined will strongly impact the computational cost of running a MD simulation. For instance, the CPU cost of running ab initio MD based on density-functional theory (DFT) [3], which treats the electronic interactions explicitly, is 5–6 orders of magnitude higher than that of a cheap MD based on simple empirical harmonic potentials. Therefore, a very active field of research is the development of accurate yet inexpensive interatomic potentials, also known as force fields.

3.1 Force fields

A force field or interatomic potential is a mathematical model representing the energetic interactions between atomic or supra-atomic (e.g., molecular) systems. The main task of a force field is to approximate, as accurately as possible within a given complexity of the model,12 the real interactions. For instance, if we were trying to model the interactions between the 3 atoms in a water molecule, we could choose a force-field representation of the O–H bonds and the H–O–H bond angle via spring constants (Fig. 1). These will effectively reproduce the harmonic vibrations, since the potential is harmonic (about the equilibrium values r_0 and θ_0) by construction.
The system's Hamiltonian would read like this:

    H = p²_{H1}/(2m_H) + p²_{H2}/(2m_H) + p²_O/(2m_O) + (k_r/2)(r_1 − r_0)² + (k_r/2)(r_2 − r_0)² + (k_θ/2)(θ − θ_0)².    (36)

This Hamiltonian, with the force constants k_r and k_θ fitted to reproduce experimental or ab initio data (e.g., vibrational frequencies), will give a satisfactory description of an isolated water molecule at relatively low temperatures. However, since it only contains intramolecular interactions, it will fail miserably to reproduce the dynamics of interacting water molecules in the solid, liquid and even gas phases. To improve on that, a force field will often include electrostatic interactions modeled by partial charges, e.g., to take into account the fact that the O–H bonds are ionic and valence electrons in water sit preferentially around O atoms.13 Other non-bonded interactions which can be inexpensively included in empirical force fields are van der Waals-type terms. A popular representation of these interactions is the Lennard-Jones potential. Even after adding partial charges and Lennard-Jones interactions to Eq. (36), our simple model will still completely fail to handle bond breaking and bond formation, since the functional

12 By complexity of the model we are referring to the intrinsic constraints that limit its accuracy; for example, a model with a fixed functional form including only harmonic terms will not be able to describe non-harmonic effects, regardless of how well it is parametrized.
13 In water, the equilibrium bond length is r_0 ≈ 0.96 Å, the equilibrium bond angle is θ_0 ≈ 104.5°, and the partial charges of H and O are typically on the order of +0.5 and −1 elementary units, respectively.

Figure 1: Chemical bonds in a water molecule represented as springs.

form of our Hamiltonian ensures that the bonded interactions (the harmonic terms) are always between the same set of atoms. Handling bond breaking and formation accurately within empirical force fields is extremely challenging. The quick and dirty solution is to define a cutoff distance at which bonded interactions are switched on (r < r_cutoff) and off (r > r_cutoff). A more sophisticated way to do this is to combine the cutoff approach with environmentally-dependent potentials, where the force constants or even the functional form of the potential depend on the number of nearest neighbors of each atom (e.g., EDIP [4]). A solution which has become increasingly popular in recent years is to use machine learning (also known as "artificial intelligence") to generate highly flexible interatomic potentials which do not rely on a fixed functional form (e.g., GAP [5]). More generally, to treat bond breaking and formation accurately, the safest (and most expensive) choice is to run ab initio MD. Fortunately, many systems of interest can be studied without having to worry about bonds breaking or forming during the time scale of the MD simulation. Force field development is an active field of molecular physics research, and a wealth of information on different models, both simple and complex, exists in the literature. In this introductory document, we will limit ourselves to simple force fields and use them to illustrate central concepts in MD and the thermodynamic theory of gases.

3.2 Integrating the equations of motion

To know where the different particles in our system are at any given time t, we need to integrate Eq. (33), assuming that we know how to compute the forces, as discussed in the previous section. Note that to integrate a second-order differential equation we need two sets of initial conditions at time t_0.
In our case, we need initial positions {r_j(t_0)} and velocities {ṙ_j(t_0)}. We will deal with initialization later on. The main question that we will answer in this section is: provided that we know the state of our system (positions and velocities) at time t_0 and we can calculate the forces at t_0 from them,14 what will be the positions and velocities at time t_0 + Δt? In this section we will deal with 3 different approaches and will compare them to each other: i) the Lazy Man's approach, ii) Verlet and iii) leapfrog.15

In the previous section, we discussed in some detail the form of the Hamiltonian, but not the forces. If we know the analytical form of the Hamiltonian, then we can easily compute the analytical form of the forces from Eq. (34). Sometimes we can resort to alternative approaches, like the Hellmann-Feynman theorem [3] for ab initio methods. However, in the worst-case scenario, namely when we can only evaluate the Hamiltonian, we would need to approximate each partial derivative numerically by a finite difference. This would mean that evaluating the forces requires on the order of 2N evaluations of the Hamiltonian, where N is the number of particles.

15 Do not let these names fool you: I only made up the Lazy Man's approach; leapfrog integration is an actual thing.

3.2.1 The Lazy Man's approach

At time t_0 + Δt, the exact solution to Eq. (33), for an arbitrary value of Δt, is

    r_i(t_0 + Δt) = r_i(t_0) + ∫_{t_0}^{t_0+Δt} dt' [ ṙ_i(t_0) + ∫_{t_0}^{t'} dt'' r̈_i(t'') ],    (37)

where the term in square brackets is ṙ_i(t'). Now, Eq. (37) above presents the complication that we only know positions, velocities and forces at precisely t_0. Therefore, the innermost integral cannot be evaluated, since it requires knowledge of the value of r̈_i at times other than t_0. However, if we make Δt very small, then we can claim that the forces or, equivalently, the accelerations are approximately constant between t_0 and t_0 + Δt. Under such an approximation, Eq. (37) reduces to:

    r_i(t_0 + Δt) = r_i(t_0) + ṙ_i(t_0) Δt + [F_i(t_0)/(2m_i)] Δt²,    (38)

which requires only knowledge of the different variables at t_0. The velocities are easily computed too:

    ṙ_i(t_0 + Δt) = ṙ_i(t_0) + [F_i(t_0)/m_i] Δt.    (39)

After updating the positions, one can evaluate the forces at t = t_0 + Δt and use Eqs. (38) and (39) to predict positions and velocities at t = t_0 + 2Δt. Recursive use of Eq. (38) allows us to propagate the equations of motion of our system from time t_0 up to an arbitrary later time. The longer the propagation time, the more expensive the simulation. To obtain the state of the system at t = t_0 + nΔt, that is, n time steps after initialization, we need to perform n force evaluations. The main message to keep in mind here is that the longer the time step Δt, the worse our approximation that the forces are constant between consecutive time steps. Therefore, as a general rule, the shorter the time step, the more accurate our simulation. Too long a time step will lead to unrealistic dynamics and energy drift; too short a time step will lead to wasted CPU time. Therefore, an important consideration for MD in terms of optimizing resources is to choose the longest time step which does not compromise the accuracy of the dynamics.
We will see that the Lazy Man's approach is less accurate than the schemes commonly used in popular MD codes, and requires shorter time steps in comparison. One should therefore avoid being Lazy whenever possible; however, for testing a new implementation and for illustrating the idea behind the propagation of equations of motion, the simplicity of this approach is useful. The Lazy Man overlooked the fact that physical quantities rarely change abruptly with time, but rather have a smooth time evolution.16 Strictly, we can only sample the exact values of the forces discretely (in time), at the same times for which a list of positions is available. This means that we only know the approximate positions and the exact forces (that is, those compatible with our estimated positions) at times t = t_0, t_0 + Δt, t_0 + 2Δt, .... However, from the knowledge that the positions behave smoothly between consecutive time steps, we can interpolate them in between time steps, so that the integration in Eq. (37) can be carried out more accurately. That is what the following integration algorithms do.

16 Smooth in this context has the usual mathematical meaning that the function is continuous and continuously differentiable.
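To see the time-step sensitivity concretely, here is a minimal Python sketch (illustrative only; the function names are my own, not part of the course's Fortran materials) that propagates a unit harmonic oscillator, F = −x with m = 1, using the Lazy Man's update of Eqs. (38)–(39). With a large time step the total energy drifts steadily upward; shrinking the time step suppresses the drift.

```python
import numpy as np

def lazy_man_step(x, v, force, m, dt):
    """One step of Eqs. (38)-(39): positions and velocities are updated
    using only the force evaluated at the beginning of the step."""
    F = force(x)
    x_new = x + v * dt + F / (2.0 * m) * dt**2
    v_new = v + F / m * dt
    return x_new, v_new

def run(dt, n_steps, x0=1.0, v0=0.0, m=1.0):
    """Propagate a unit harmonic oscillator (F = -x, k = 1) and
    return the total-energy history."""
    force = lambda x: -x
    x, v = x0, v0
    energies = [0.5 * m * v**2 + 0.5 * x**2]
    for _ in range(n_steps):
        x, v = lazy_man_step(x, v, force, m, dt)
        energies.append(0.5 * m * v**2 + 0.5 * x**2)
    return np.array(energies)

E_large = run(dt=0.01, n_steps=10_000)   # ~16 oscillation periods
E_small = run(dt=0.001, n_steps=10_000)
print(E_large[-1] / E_large[0])  # clearly > 1: steady energy drift
print(E_small[-1] / E_small[0])  # much closer to 1
```

Running this shows tens of percent of energy gain after ~16 periods at Δt = 0.01, while the smaller time step keeps the drift below one percent over the same number of steps: exactly the "too long a time step leads to energy drift" behavior described above.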

3.2.2 Verlet

The idea that physical properties evolve smoothly with time forms the basis for the Verlet algorithm. Given three data points, namely three position vectors at consecutive time steps (t = t_0 − Δt, t_0, t_0 + Δt), we can unambiguously define a 2nd-order polynomial which goes through all three points (and should therefore be a good approximant within the fitting domain):

    r_i(t) = r_i(t_0 − Δt) (t − t_0)(t − t_0 − Δt)/(2Δt²) − r_i(t_0) (t − t_0 + Δt)(t − t_0 − Δt)/Δt² + r_i(t_0 + Δt) (t − t_0 + Δt)(t − t_0)/(2Δt²).    (40)

The equation above can be differentiated twice with respect to t to give the acceleration, which is constant since the polynomial is of order two. The estimated acceleration will be most accurate when compared with the actual (e.g., explicitly calculated) acceleration evaluated at the central point, t = t_0:

    r̈_i(t_0) ≈ r_i(t_0 − Δt)/Δt² − 2 r_i(t_0)/Δt² + r_i(t_0 + Δt)/Δt².    (41)

We can rewrite the expression above to give:

    r_i(t_0 + Δt) = 2 r_i(t_0) − r_i(t_0 − Δt) + [F_i(t_0)/m_i] Δt²,    (42)

which is the regular Verlet integration expression, estimating r_i(t_0 + Δt) from the knowledge of the forces at t = t_0 and the previous positions at t = t_0 and t = t_0 − Δt. This algorithm does not involve the velocities;17 however, we may be interested in knowing the value of the velocities for a number of reasons. A symmetric difference gives them directly:

    ṙ_i(t_0) = [r_i(t_0 + Δt) − r_i(t_0 − Δt)]/(2Δt).    (43)

However, the expression above is not particularly accurate; other schemes allow a better estimation of the velocities. The velocity Verlet algorithm uses Eq.
(38) to compute the positions at t = t_0 + Δt, but also takes advantage of the newly predicted positions to estimate the velocities more accurately:

    r_i(t_0 + Δt) = r_i(t_0) + ṙ_i(t_0) Δt + [F_i(t_0)/(2m_i)] Δt²,    (44)
    ṙ_i(t_0 + Δt) = ṙ_i(t_0) + [(F_i(t_0) + F_i(t_0 + Δt))/(2m_i)] Δt,    (45)

where F_i(t_0 + Δt) can be readily computed as soon as the r_i(t_0 + Δt) are available.

3.2.3 Leapfrog

It should be apparent at this point that integration methods are closely related to discrete approximations to differentiation. In particular, the regular Verlet integrator is based on a central-difference approximation to the second derivative of r_i(t), whereas velocity Verlet is based on a constant-acceleration approximation combined with an improved velocity estimation.

17 Additionally, the algorithm is not well defined for the first step, since it needs at least two previous positions to be able to make a prediction. For the first step, one can use Eq. (38).
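A velocity-Verlet step, Eqs. (44)–(45), needs just one new force evaluation per step. The following Python sketch (illustrative names of my own, not the course's Fortran code) implements it and checks the hallmark property on a unit harmonic oscillator: the energy stays bounded near its initial value instead of drifting.

```python
def velocity_verlet_step(x, v, F_old, force, m, dt):
    """One velocity-Verlet step: Eq. (44) for the position, then Eq. (45)
    for the velocity using the average of old and new forces."""
    x_new = x + v * dt + F_old / (2.0 * m) * dt**2      # Eq. (44)
    F_new = force(x_new)                                # one new force evaluation
    v_new = v + (F_old + F_new) / (2.0 * m) * dt        # Eq. (45)
    return x_new, v_new, F_new

# Unit harmonic oscillator, m = k = 1, starting at x = 1, v = 0 (energy 0.5).
m, dt = 1.0, 0.01
force = lambda x: -x
x, v = 1.0, 0.0
F = force(x)
for _ in range(10_000):
    x, v, F = velocity_verlet_step(x, v, F, force, m, dt)
energy = 0.5 * m * v**2 + 0.5 * x**2
print(energy)  # remains very close to the initial energy of 0.5
```

Note how the returned F_new is fed back in as F_old on the next call, so the force is still evaluated only once per step.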

Table 1: Errors for different algorithms

                        Truncation error            Propagation error
    Algorithm           Position     Velocity       Position     Velocity
    Verlet              O(Δt⁴)       O(Δt²)         O(Δt²)       O(Δt²)
    Velocity Verlet     O(Δt³)       O(Δt³)         O(Δt²)       O(Δt²)
    Leapfrog            O(Δt³)       O(Δt³)         O(Δt²)       O(Δt²)
    Lazy Man's          O(Δt³)       O(Δt²)         O(Δt)        O(Δt)

While a central-difference approximation for the 2nd derivative is most accurate at the central point of the three points used for interpolation, for the first-order derivative a central difference obtained from two data points is most accurate in between the data points. That is:

    ṙ_i(t_0 + Δt/2) = [r_i(t_0 + Δt) − r_i(t_0)]/Δt,
    r̈_i(t_0) = [ṙ_i(t_0 + Δt/2) − ṙ_i(t_0 − Δt/2)]/Δt,

which lead to the leapfrog update equations:

    r_i(t_0 + Δt) = r_i(t_0) + ṙ_i(t_0 + Δt/2) Δt,    (46)
    ṙ_i(t_0 + Δt/2) = ṙ_i(t_0 − Δt/2) + [F_i(t_0)/m_i] Δt,    (47)

where positions (and accelerations/forces) are naturally given on the time grid t = t_0, t_0 + Δt, t_0 + 2Δt, ... and velocities are given on the grid t = t_0 + Δt/2, t_0 + 3Δt/2, .... Therefore, positions and velocities are always offset by half a time step. Graphically, it is as if they were leaping over each other like frogs, hence the name of the algorithm. Although for many practical purposes this offset is not a problem, it can become an issue when one needs access to synchronous positions and velocities, e.g., to calculate instantaneous angular momenta.

3.2.4 Error estimate for different algorithms

Since the integration methods presented are approximations to the exact solution of Eq. (37), they have errors associated with them. It can be shown18 that the truncation errors in Verlet, velocity Verlet and leapfrog are of different orders for position and velocity, while the global (propagation) errors are the same (Table 1). This means that the accumulated error, and thus the precision of each method, is basically the same for all three integrators. The error for the Lazy Man's approach is much worse. We see again that being Lazy does not pay off.
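The footnote suggests deriving these orders with Taylor expansions, but they can also be verified numerically. The quick Python sketch below (illustrative only) integrates a unit harmonic oscillator, x(0) = 1, ẋ(0) = 0, whose exact solution is x(t) = cos t, up to t = 10, and measures the final position error for two time steps.

```python
import math

def propagate_lazy(dt, T):
    """Lazy Man's approach (Eqs. (38)-(39)) for F = -x, m = 1."""
    x, v = 1.0, 0.0
    for _ in range(round(T / dt)):
        F = -x
        x, v = x + v * dt + 0.5 * F * dt**2, v + F * dt
    return x

def propagate_vverlet(dt, T):
    """Velocity Verlet (Eqs. (44)-(45)) for the same oscillator."""
    x, v = 1.0, 0.0
    F = -x
    for _ in range(round(T / dt)):
        x = x + v * dt + 0.5 * F * dt**2
        F_new = -x
        v = v + 0.5 * (F + F_new) * dt
        F = F_new
    return x

T = 10.0  # exact solution at T: cos(T)
for dt in (0.01, 0.005):
    err_lazy = abs(propagate_lazy(dt, T) - math.cos(T))
    err_vv = abs(propagate_vverlet(dt, T) - math.cos(T))
    print(dt, err_lazy, err_vv)
```

Halving Δt roughly halves the Lazy Man error but reduces the velocity-Verlet error by a factor of about four, consistent with the O(Δt) and O(Δt²) propagation errors listed in Table 1.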
However, do not take my word for it: in the hands-on exercise we will check how using the Lazy Man's approach leads to nonsensical results for a simple orbital system.

3.3 Thermostats and barostats

So far we have not discussed how the concepts of temperature and pressure play out in a MD simulation. A straightforward implementation of an (accurate) integration scheme for an isolated system interacting via a conservative potential leads to conservation of energy and particle number. Additionally, if we set the dimensions of the simulation box to be fixed, the volume is also conserved. This corresponds to a thermodynamic microcanonical ensemble or, in MD terminology, an NVE ensemble.19 More generally, we may be interested in studying the dynamics of a system in contact with a temperature and/or pressure bath, which we will refer to as NVT and NPT ensembles. In such simulations, one needs to ensure that the temperature

18 Do it as an exercise. Hint: use Taylor expansions.
19 NVE refers to the conserved quantities: N for particle number, V for volume and E for energy.

and/or pressure of the system are regulated somehow. In this course, we will not deal with variable-particle-number simulations (i.e., constant chemical potential µ). Before discussing how to regulate the temperature (and pressure), it is worth providing a consistent definition. Temperature is defined via the equipartition theorem:

    T = 2 E_kin / (N_DoF k_B),    (48)

where E_kin is the kinetic energy, N_DoF is the number of degrees of freedom and k_B is Boltzmann's constant. Note that under periodic boundary conditions one may choose to remove the degrees of freedom of the system's center of mass; this should be taken into account for a consistent definition of temperature. The pressure P is defined via the volume derivative of the system's internal (or potential) energy U:

    P = −∂U/∂V.    (49)

Now that we have defined temperature and pressure in the context of MD, let us discuss the mathematical tools used to keep these quantities constant20 in a MD simulation. These are known as thermostats and barostats, for temperature and pressure regulation, respectively. The simplest thermostat is one that rescales the atomic velocities to match the target temperature T_0. Suppose that the instantaneous temperature T(t_0) is (from Eq. (48) and the relation between kinetic energy and velocities):

    T(t_0) = [1/(3N k_B)] Σ_{i=1}^{N} m_i |ṙ_i(t_0)|²,    (50)

where we have assumed three degrees of freedom per particle, N_DoF = 3N. Rescaling all the atomic velocities by the appropriate factor,

    ṙ_i(t_0) → √(T_0/T) ṙ_i(t_0),    (51)

leads to the desired temperature. However, if this rescaling happens too often (e.g., at every time step) it may significantly perturb the dynamics of the system.21
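In code, brute-force rescaling is essentially a one-liner. The Python sketch below (illustrative, not from the course materials) works in reduced units with k_B = 1 and N_DoF = 3N, computing the instantaneous temperature of Eq. (50) and applying the rescaling of Eq. (51):

```python
import numpy as np

kB = 1.0  # reduced units: Boltzmann's constant set to one

def instantaneous_temperature(vel, masses):
    """Eq. (50): T = sum_i m_i |v_i|^2 / (3 N kB), i.e. 2 E_kin / (N_DoF kB)."""
    ekin = 0.5 * np.sum(masses[:, None] * vel**2)
    return 2.0 * ekin / (3 * len(masses) * kB)

def rescale_velocities(vel, masses, T_target):
    """Brute-force rescaling of Eq. (51): multiply every velocity by sqrt(T0/T)."""
    T = instantaneous_temperature(vel, masses)
    return vel * np.sqrt(T_target / T)

rng = np.random.default_rng(0)
masses = np.ones(100)
vel = rng.normal(size=(100, 3))          # arbitrary starting velocities
vel = rescale_velocities(vel, masses, T_target=2.5)
print(instantaneous_temperature(vel, masses))  # exactly the target: 2.5
```

After the rescaling, the instantaneous temperature matches the target exactly; the thermostats discussed next soften this hard reset with a characteristic relaxation time.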
Commonly used velocity-rescaling thermostats tend to dampen the rescaling using a characteristic time constant, in such a way that the rescaling drives the system back to the target temperature with a strength which depends both on how far the instantaneous temperature is from the target temperature and on the time scale over which the temperature is expected to equilibrate:

    dT/dt = (T_0 − T)/τ,    (52)

where τ is the characteristic time constant. The correction is

    ṙ_i(t_0) → √(1 + (Δt/τ)[T_0/T(t_0) − 1]) ṙ_i(t_0),    (53)

20 Actually, we are mostly interested in keeping temperature and pressure constant around a target value; typical thermostats and barostats will keep the instantaneous quantities oscillating in time around the target values.
21 This is especially true for small systems.

where the scaling is computed for the time corresponding to one time step, Δt. This damped velocity-rescaling thermostat is known as the Berendsen thermostat [6]. Even though the Berendsen thermostat is quite good for equilibrating a system,22 it does not sample a canonical ensemble, in the sense that the velocity distribution it produces is not compatible with the expected distribution of velocities in a canonical ensemble at the target temperature. To solve this issue, other thermostats have been introduced, most notably the Nosé-Hoover thermostat [7, 8].23 In the Nosé-Hoover approach, the real Hamiltonian is replaced by an auxiliary Hamiltonian, and a fictitious degree of freedom is introduced which can exchange energy with the real degrees of freedom. The equations of motion within the Nosé-Hoover formalism, in Hamilton's momentum-position form, are the following:

    ṙ_i = p_i/m_i,    ṗ_i = F_i − (p_s/Q) p_i,    (54)
    [ ṡ = p_s/Q ],    ṗ_s = Σ_i |p_i|²/m_i − (N_DoF + 1) k_B T_0,    (55)

where s is the fictitious degree of freedom and p_s is its associated momentum. Q is a fictitious mass for the extra degree of freedom, which influences how fast and how stably the thermostatting acts. This mass needs to be optimized for the system at hand, and is often supplied as a time constant τ via the relation Q = (N_DoF + 1) k_B T_0 τ². In Eq. (55), we have put the equation of motion for s in between brackets because solving that particular equation is not required to obtain the dynamics of the real degrees of freedom (the other equations do not depend on s). Stabilization of the system's pressure is sometimes required when a constant-pressure NPT simulation is carried out. Although more sophisticated barostats exist, usually one relies on Berendsen's box-rescaling approach to optimize P, and then carries out a constant-volume NVT simulation with the pre-optimized simulation box.
This is so for mainly two reasons: 1) barostatting can easily become unstable and computationally expensive, and systems sometimes tend to blow up; 2) for most practical applications, capturing the effect of instantaneous temperature fluctuations is far more important than that of pressure fluctuations. Therefore, to check the effect of pressure changes, one can run NVT simulations at different volumes, each corresponding to a particular pressure value. In any case, identically to the Berendsen thermostat, the Berendsen barostat relies on a characteristic time constant:

    dP/dt = (P_0 − P)/τ_P.    (56)

Now, instead of rescaling velocities we need to rescale the simulation box, because of the relation between volume and pressure:

    dV = −V β dP,    (57)

where β is the system's compressibility. The rescaling factors for volume, positions and box

22 "Equilibration" is MD jargon for the period of the dynamics that it takes an (often unrealistic) initial configuration of a molecular system to reach a steady state around the desired temperature.
23 Thermostats which are also popular nowadays are Nosé-Hoover chains and Langevin dynamics, to name a couple. The regular Nosé-Hoover remains widely used and, for simplicity, will be the only NVT thermostat discussed here.

vectors24 are, respectively:

    V(t_0) → [1 − (Δt/τ) β (P_0 − P)] V(t_0),    (58)
    r_i(t_0) → [1 − (Δt/τ) β (P_0 − P)]^{1/3} r_i(t_0),    (59)
    L_j(t_0) → [1 − (Δt/τ) β (P_0 − P)]^{1/3} L_j(t_0).    (60)

These scaling factors are valid for liquids and gases. For isotropic solids, one can use the same expressions, but the compressibility should be replaced by the inverse of the bulk modulus B. For non-isotropic solids, the stiffness tensor should be used to compute a set of scaling factors for the lattice vectors (up to six, which is the number of independent components of the strain tensor). In this course, we will keep things nice and isotropic.

3.4 Hands-on: Simulating an orbital system in 2D

It is now time to put some of the ideas discussed in this section to the test. In particular, we will write a code to simulate the dynamics of a 3-body orbital system of the Sun-Earth-Moon type, where for simplicity we will constrain the motion of the bodies to a 2D plane. This practical exercise will involve coding the force field (gravitational interaction between the massive bodies) to obtain the instantaneous forces, and coding the integrator(s) for the equations of motion to propagate the positions in time. We will code the Lazy Man's, Verlet and velocity Verlet schemes, and will be able to compare the algorithms to each other and to test the effect of the choice of time step on the results. The example codes will be given in Fortran which, despite being antiquated and despised by some people,25 remains quite popular for science applications (in particular among physicists). You can use your preferred language, but note that writing a MD code natively in Python26 will result in orders of magnitude slower execution than a Fortran or C code.

3.4.1 Force field interactions

Our orbital system will interact via a gravitational force field:

    V_ij = −G m_i m_j / r_ij,    (61)
    V = Σ_i Σ_{j>i} V_ij,    (62)
    F_ij = G m_i m_j (r_j − r_i) / r_ij³,    F_ji = −F_ij,    (63)
    F_i = Σ_j F_ij,    (64)

where G is Newton's gravitation constant.
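As a language-agnostic warm-up (a Python sketch with names of my own, writing the attractive gravitational energy explicitly as V_ij = −G m_i m_j / r_ij), the pair energy and the force on particle i can be evaluated like this:

```python
import numpy as np

G = 1.0  # gravitational constant set to unity, as in the toy example

def pair_force_and_energy(ri, rj, mi, mj):
    """Pair energy and the force on particle i due to particle j
    for a gravitational 1/r interaction."""
    d = rj - ri                       # vector from i to j
    r = np.linalg.norm(d)
    V = -G * mi * mj / r              # attractive pair energy
    F_i = G * mi * mj * d / r**3      # force on i, pointing towards j
    return V, F_i

ri = np.array([0.0, 0.0, 0.0])
rj = np.array([2.0, 0.0, 0.0])
V, F_i = pair_force_and_energy(ri, rj, 1.0, 1.0)
V2, F_j = pair_force_and_energy(rj, ri, 1.0, 1.0)
print(F_i, F_j)  # equal and opposite: Newton's third law
print(V)         # -G*m*m/r = -0.5 for unit masses at distance 2
```

Swapping the argument order returns exactly the opposite force, so in a double loop over pairs one can accumulate ±F for the two partners instead of recomputing, which is what the Fortran workflow below does.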
Note that an important consideration, as discussed in Sec. 3.1, is that we provide analytical expressions for the potential energy and forces. This is so

24 In the simplest case, the box vectors are perpendicular to each other and given by L_1 = (L_x, 0, 0), L_2 = (0, L_y, 0) and L_3 = (0, 0, L_z).
25 John Krueger famously said "A computer without COBOL and Fortran is like a piece of chocolate cake without ketchup and mustard."
26 And I do like Python.

because an efficient evaluation of the forces cannot be carried out if we need to obtain them from numerical differentiation of the potential. Note also our convention that F_ij stands for the force acting on particle i due to particle j and that, according to Newton's third law, it is opposite to the force acting on j due to i. The total force acting on i is the sum of all the pair-wise forces acting on it (summation over j). In practice, since the gravitational interaction is essentially identical to an electrostatic interaction, barring the value of the constants and the fact that charges are signed (and masses are not), we will write a general-purpose force field module which can handle both interactions and easily switch between them by providing different constants. This looks like the following:

potentials.f90:

module potentials

  implicit none

  contains

! This subroutine returns the distance between ri and rj under
! certain boundary conditions
  subroutine get_distance(posi, posj, L, PBC, dist, d)

    implicit none

    real*8, intent(in) :: posi(1:3), posj(1:3), L(1:3)
    logical, intent(in) :: PBC(1:3)
    real*8, intent(out) :: d
    real*8, intent(out) :: dist(1:3)
    real*8 :: d2
    integer :: i

    d2 = 0.d0
    do i = 1, 3
      if( PBC(i) )then
        dist(i) = modulo(posj(i) - posi(i), L(i))
        if( dist(i) > L(i)/2.d0 )then
          dist(i) = dist(i) - L(i)
        end if
      else
        dist(i) = posj(i) - posi(i)
      end if
      d2 = d2 + dist(i)**2
    end do
    d = dsqrt(d2)

  end subroutine get_distance

! This returns potential energy and force for a 1/r type potential. G is a
! constant prefactor; it could be the gravity constant or e^2/(4 pi eps0), etc.
  subroutine pairwise_electrostatic_potential(posi, posj, Zi, Zj, G, L, PBC, &
                                              Epot, fi)

    implicit none

    real*8, intent(in) :: posi(1:3), posj(1:3), Zi, Zj, G, L(1:3)
    logical, intent(in) :: PBC(1:3)
    real*8, intent(out) :: Epot, fi(1:3)
    real*8 :: d, dist(1:3)

    call get_distance(posi, posj, L, PBC, dist, d)

    Epot = G * Zi * Zj / d
!   The force on i is calculated assuming the convention that dist(1) = xj - xi
    fi(1:3) = -G * Zi * Zj * dist(1:3) / d**3.d0

  end subroutine

end module

3.4.2 Integrators

We are making the choice to keep our force fields, integrators and main code separated by design. Compartmentalizing code is generally a good idea whenever possible to keep things tidy, even if it sometimes comes at the cost of efficiency. We put our force fields in the potentials module (in file potentials.f90). Now, we are going to put our integrators in the integrators module (file integrators.f90).27 For the time being, we are coding microcanonical (non-thermostatted) versions of Verlet, velocity Verlet and the Lazy Man; adding velocity-rescaling thermostatting is straightforward, but because Nosé-Hoover involves rewriting the equations of motion, making an integrator compatible with it could require additional complexity, depending on how one does the coding in practice.28 Our integrators look like this:

integrators.f90:

module integrators

  implicit none

  contains

! Lazy Man's approach
  subroutine lazy_man(x_in, v_in, F, m, dt, x_out, v_out)

    implicit none

    real*8, intent(in) :: x_in(1:3), v_in(1:3)
    real*8, intent(out) :: x_out(1:3), v_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

    x_out(1:3) = x_in(1:3) + v_in(1:3)*dt + F(1:3)/m/2.d0*dt**2
    v_out(1:3) = v_in(1:3) + F(1:3)/m*dt

  end subroutine

! Regular Verlet
  subroutine verlet(x_in, F, m, dt, x_out)

    implicit none

    real*8, intent(in) :: x_in(1:2, 1:3)
    real*8, intent(out) :: x_out(1:3)
    real*8, intent(in) :: F(1:3), m, dt

27 Did I also mention the virtues of keeping an explicit naming convention for your code?
28 Later in this course we will code a leapfrog Nosé-Hoover integrator.

14 x out ( 1 : 3 ) = 2. d0 x i n ( 2, 1 : 3 ) x i n ( 1, 1 : 3 ) + F ( 1 : 3 ) /m dt 2 end subroutine! V e l o c i t y V e r l e t i s two s u b r o u t i n e s subroutine v e l o c i t y v e r l e t v e l ( v in, F, m, dt, v out ) implicit none real 8, intent ( in ) : : v i n ( 1 : 3 ) real 8, intent ( out ) : : v out ( 1 : 3 ) real 8, intent ( in ) : : F ( 1 : 2, 1 : 3 ), m, dt v out ( 1 : 3 ) = v i n ( 1 : 3 ) + (F ( 2, 1 : 3 ) + F ( 1, 1 : 3 ) ) / 2. d0/m dt end subroutine subroutine v e l o c i t y v e r l e t p o s ( x in, v in, F, m, dt, x out ) implicit none real 8, intent ( in ) : : x i n ( 1 : 3 ), v i n ( 1 : 3 ) real 8, intent ( out ) : : x out ( 1 : 3 ) real 8, intent ( in ) : : F ( 1 : 3 ), m, dt x out ( 1 : 3 ) = x i n ( 1 : 3 ) + v i n ( 1 : 3 ) dt + F ( 1 : 3 ) /m/ 2. d0 dt 2 end subroutine end module Simulation workflow In this example we will only have 3 particles and want to get going fast, so we will make the lazy choice of keeping initial positions and velocities, time step definition, etc. in the code (i.e., hard coded). Later in the course we will create an interface so that the code can read input files and does not need to be recompiled every time we want to change some simulation details. Our full workflow (for the Lazy Man s approach) looks like this: program o r b i t a l orbital.f90 use use p o t e n t i a l s i n t e g r a t o r s implicit none real 8 : : pos ( 1 : 3, 1 : 3 ), v e l ( 1 : 3, 1 : 3 ), m( 1 : 3 ), dt, L ( 1 : 3 ), E sum, E i j real 8 : : f i s u m ( 1 : 3, 1 : 3 ), f i ( 1 : 3 ), f i p r e v ( 1 : 3, 1 : 3 ), f i a r r a y ( 1 : 2, 1 : 3 ) real 8 : : x i p r e v ( 1 : 3, 1 : 3 ), x i a r r a y ( 1 : 2, 1 : 3 ), new pos ( 1 : 3 ), new vel ( 1 : 3 ) integer : : step, n, i, j l o g i c a l : : PBC( 1 : 3 )! Sun i n i t i a l c o n d i t i o n s pos ( 1, 1 : 3 ) = (/ 0. d0, 0. d0, 0. d0 /) v e l ( 1, 1 : 3 ) = (/ 0. d0, 0.335d0, 0. d0 /) m( 1 ) = 1 0. d0 21

! Earth initial conditions
  pos(2,1:3) = (/ -1.d0, 0.d0, 0.d0 /)
  vel(2,1:3) = (/ 0.d0, 4.d0, 0.d0 /)
  m(2) = 1.d0

! Moon initial conditions
  pos(3,1:3) = (/ -0.95d0, 0.d0, 0.d0 /)
  vel(3,1:3) = (/ 0.d0, -1.d0, 0.d0 /)
  m(3) = 0.1d0

! Time step
  dt = 1.d-3
  n = 10000

! BC
  PBC = .false.
  L = (/ 1.d0, 1.d0, 1.d0 /)

  open(unit=10, file='orbit_lazy', status='unknown')

! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0
    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), 1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      call lazy_man(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), dt, &
                    new_pos, new_vel)
      pos(i,1:3) = new_pos(1:3)
      vel(i,1:3) = new_vel(1:3)
    end do
  end do

  close(10)

end program

Note that we are not worrying too much at the moment about the units of our positions, velocities and masses. We even choose our gravitation constant to be unity. For this toy example, this is fine. When we look at calculating the properties of realistic gases we will make an effort to ensure all the units and magnitudes make sense. We are writing our results to file orbit_lazy in text format, which will allow us to do some plotting later on. Verlet integration can be done by substituting the last part of the code above by this:

Verlet workflow
  open(unit=10, file='orbit_verlet', status='unknown')

! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0
    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), 1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      if( step == 0 )then
! Bootstrap the first step with the Lazy Man, since Verlet needs two
! previous positions
        xi_prev(i,1:3) = pos(i,1:3)
        call lazy_man(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), dt, &
                      new_pos, new_vel)
      else
        xi_array(1,1:3) = xi_prev(i,1:3)
        xi_array(2,1:3) = pos(i,1:3)
        call verlet(xi_array, fi_sum(i,1:3), m(i), dt, new_pos)
      end if
      xi_prev(i,1:3) = pos(i,1:3)
      pos(i,1:3) = new_pos(1:3)
!     vel(i,1:3) = new_vel(1:3)
    end do
  end do

  close(10)

And velocity Verlet can be used like this:

Velocity Verlet workflow
  open(unit=10, file='orbit_velocity_verlet', status='unknown')

! Run for n time steps
  do step = 0, n
    E_sum = 0.d0
    fi_sum = 0.d0
    do i = 1, 3
      do j = i+1, 3
        call pairwise_electrostatic_potential(pos(i,1:3), pos(j,1:3), m(i), &
                                              m(j), 1.d0, L, PBC, Eij, fi(1:3))
        E_sum = E_sum + Eij
        fi_sum(i,1:3) = fi_sum(i,1:3) + fi(1:3)
        fi_sum(j,1:3) = fi_sum(j,1:3) - fi(1:3)
      end do
    end do
    write(10,*) dfloat(step)*dt, E_sum, pos(1,1:2), pos(2,1:2), pos(3,1:2)
    do i = 1, 3
      if( step == 0 )then
! At the first step we only have the initial velocity and force
        call velocity_verlet_pos(pos(i,1:3), vel(i,1:3), fi_sum(i,1:3), m(i), dt, &
                                 new_pos)
        fi_prev(i,1:3) = fi_sum(i,1:3)
        pos(i,1:3) = new_pos(1:3)
      else
        fi_array(1,1:3) = fi_prev(i,1:3)
        fi_array(2,1:3) = fi_sum(i,1:3)
! Note that this velocity trails the position by one time step
        call velocity_verlet_vel(vel(i,1:3), fi_array, m(i), dt, new_vel)
        call velocity_verlet_pos(pos(i,1:3), new_vel, fi_sum(i,1:3), m(i), dt, &
                                 new_pos)
        fi_prev(i,1:3) = fi_sum(i,1:3)
        pos(i,1:3) = new_pos(1:3)

        vel(i,1:3) = new_vel(1:3)
      end if
    end do
  end do

  close(10)

To compile and run the code we can execute the following commands in a Linux terminal:

gfortran -c potentials.f90 integrators.f90
gfortran -o orbital.ex orbital.f90 *.o
./orbital.ex

This will generate the trajectory files, which we can use for visualization.

Visualizing the trajectories

Use your favorite plotting program to visualize the trajectory. In Fig. 2 we show a comparison between approximate and exact trajectories (where "exact" results are obtained with a time step 100 times smaller). Full trajectories can be visualized on YouTube:

Verlet:
Velocity Verlet:
Lazy Man:

It is now manifestly clear that the Lazy Man's approach is terrible, leading not only to quantitatively wrong results but also to completely nonsensical behavior, such as our Moon-like object shooting away into an unstable orbit. Verlet and velocity Verlet predict that our Moon will lag behind the exact solution, with the error accumulating over time. In Fig. 2 we can also see the time evolution of the accumulated error in the predicted position of the Moon, and how this error depends on the chosen time step. We know that the accumulated error for Verlet should behave as t Δt², meaning that the logarithm of the error should behave as log t + 2 log Δt. We see that this is indeed reflected in Fig. 2. If we were to increase the time step even further, at some point we could make the orbital motion unstable for spurious numerical reasons (as with the Lazy Man's approach). The main lesson learned here is that one needs to carefully select a sensible integration scheme together with an integration time step which preserves the system dynamics. A final interesting point to note from Fig. 2 is that the positions obtained with regular Verlet are instantaneously more accurate than those computed with velocity Verlet, even though the accumulated error in the position is the same. This resonates again with the summary of errors given in Table 1.
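The Δt² scaling of the accumulated Verlet error is easy to check numerically. The following Python sketch (independent of our Fortran code; it uses a unit harmonic oscillator, x'' = -x, instead of the orbital system, purely because the exact solution cos t is known in closed form) integrates with the position Verlet scheme at two time steps differing by a factor of 10 and compares the error at a fixed final time:

```python
import math

def verlet_position(x0, v0, dt, t_final):
    """Integrate x'' = -x with the position Verlet scheme and return
    the position at t_final (for x0 = 1, v0 = 0 the exact solution
    is cos t)."""
    n = int(round(t_final / dt))
    x_prev = x0
    # Seed the second point with a Taylor step (accurate to O(dt^4) here)
    x = x0 + v0 * dt - 0.5 * x0 * dt**2
    for _ in range(n - 1):
        # x_{k+1} = 2 x_k - x_{k-1} + (F/m) dt^2, with F/m = -x_k
        x_prev, x = x, 2.0 * x - x_prev - x * dt**2
    return x

t_final = 10.0
exact = math.cos(t_final)
err_coarse = abs(verlet_position(1.0, 0.0, 1.0e-2, t_final) - exact)
err_fine = abs(verlet_position(1.0, 0.0, 1.0e-3, t_final) - exact)
print(err_coarse / err_fine)  # close to 100, i.e., error ~ dt**2 at fixed t
```

Since the accumulated error at fixed t scales as Δt², reducing the time step tenfold reduces the error roughly a hundredfold; on a log-log plot of error versus Δt this is a straight line of slope 2, which is the behavior seen in Fig. 2.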

[Figure 2: five panels. Three trajectory panels compare the exact solution against Verlet, velocity Verlet and the Lazy Man at a fixed time; two panels show the accumulated error in the Moon's position (distance to the exact solution) versus time, for the three integrators and for Verlet at several time steps Δt.]

Figure 2: Comparison between different approximate solutions and the exact one, after many time steps, and accumulated errors for different integrators and time steps.
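The qualitative difference between the Lazy Man's approach and velocity Verlet also shows up directly in the energy. Below is a self-contained Python sketch (again separate from the course's Fortran code; the masses and the G = 1 gravity-like force mimic our Sun-Earth pair, and the initial speed sqrt(10) is our own choice for a near-circular orbit) that integrates a two-body orbit with both schemes and measures the drift in total energy:

```python
import math

def forces(pos, m, G=1.0):
    """Gravity-like pair force; returns the force on each of the two bodies."""
    d = [pos[1][k] - pos[0][k] for k in range(2)]
    r3 = math.hypot(d[0], d[1])**3
    f0 = [G * m[0] * m[1] * c / r3 for c in d]  # attractive, acting on body 0
    return [f0, [-c for c in f0]]

def energy(pos, vel, m, G=1.0):
    kin = sum(0.5 * m[i] * (vel[i][0]**2 + vel[i][1]**2) for i in range(2))
    r = math.hypot(pos[1][0] - pos[0][0], pos[1][1] - pos[0][1])
    return kin - G * m[0] * m[1] / r

def energy_drift(integrator, n=10000, dt=1.0e-3):
    m = [10.0, 1.0]                             # "Sun" and "Earth"
    pos = [[0.0, 0.0], [1.0, 0.0]]
    vel = [[0.0, 0.0], [0.0, math.sqrt(10.0)]]  # near-circular orbit
    e0 = energy(pos, vel, m)
    f = forces(pos, m)
    for _ in range(n):
        if integrator == "lazy":
            # Lazy Man: advance positions AND velocities with the old force
            for i in range(2):
                for k in range(2):
                    pos[i][k] += vel[i][k] * dt + f[i][k] / m[i] * dt**2 / 2.0
                    vel[i][k] += f[i][k] / m[i] * dt
            f = forces(pos, m)
        else:
            # velocity Verlet: new positions first, then velocities from
            # the average of the old and new forces
            for i in range(2):
                for k in range(2):
                    pos[i][k] += vel[i][k] * dt + f[i][k] / m[i] * dt**2 / 2.0
            f_new = forces(pos, m)
            for i in range(2):
                for k in range(2):
                    vel[i][k] += (f[i][k] + f_new[i][k]) / (2.0 * m[i]) * dt
            f = f_new
    return abs(energy(pos, vel, m) - e0)

drift_lazy = energy_drift("lazy")
drift_vv = energy_drift("vv")
print(drift_lazy, drift_vv)  # the Lazy Man drift is far larger
```

Over 10^4 steps the Lazy Man's scheme steadily gains energy, while the velocity Verlet energy error stays small and bounded; this is the same pathology that sends our Moon-like object off into an unstable orbit in Fig. 2.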


MD Thermodynamics. Lecture 12 3/26/18. Harvard SEAS AP 275 Atomistic Modeling of Materials Boris Kozinsky

MD Thermodynamics. Lecture 12 3/26/18. Harvard SEAS AP 275 Atomistic Modeling of Materials Boris Kozinsky MD Thermodynamics Lecture 1 3/6/18 1 Molecular dynamics The force depends on positions only (not velocities) Total energy is conserved (micro canonical evolution) Newton s equations of motion (second order

More information

What is Classical Molecular Dynamics?

What is Classical Molecular Dynamics? What is Classical Molecular Dynamics? Simulation of explicit particles (atoms, ions,... ) Particles interact via relatively simple analytical potential functions Newton s equations of motion are integrated

More information

A Nobel Prize for Molecular Dynamics and QM/MM What is Classical Molecular Dynamics? Simulation of explicit particles (atoms, ions,... ) Particles interact via relatively simple analytical potential

More information

Gear methods I + 1/18

Gear methods I + 1/18 Gear methods I + 1/18 Predictor-corrector type: knowledge of history is used to predict an approximate solution, which is made more accurate in the following step we do not want (otherwise good) methods

More information

Javier Junquera. Statistical mechanics

Javier Junquera. Statistical mechanics Javier Junquera Statistical mechanics From the microscopic to the macroscopic level: the realm of statistical mechanics Computer simulations Thermodynamic state Generates information at the microscopic

More information

Ab initio molecular dynamics. Simone Piccinin CNR-IOM DEMOCRITOS Trieste, Italy. Bangalore, 04 September 2014

Ab initio molecular dynamics. Simone Piccinin CNR-IOM DEMOCRITOS Trieste, Italy. Bangalore, 04 September 2014 Ab initio molecular dynamics Simone Piccinin CNR-IOM DEMOCRITOS Trieste, Italy Bangalore, 04 September 2014 What is MD? 1) Liquid 4) Dye/TiO2/electrolyte 2) Liquids 3) Solvated protein 5) Solid to liquid

More information

Temperature and Pressure Controls

Temperature and Pressure Controls Ensembles Temperature and Pressure Controls 1. (E, V, N) microcanonical (constant energy) 2. (T, V, N) canonical, constant volume 3. (T, P N) constant pressure 4. (T, V, µ) grand canonical #2, 3 or 4 are

More information

Introduction to model potential Molecular Dynamics A3hourcourseatICTP

Introduction to model potential Molecular Dynamics A3hourcourseatICTP Introduction to model potential Molecular Dynamics A3hourcourseatICTP Alessandro Mattoni 1 1 Istituto Officina dei Materiali CNR-IOM Unità di Cagliari SLACS ICTP School on numerical methods for energy,

More information

Introduction to molecular dynamics

Introduction to molecular dynamics 1 Introduction to molecular dynamics Yves Lansac Université François Rabelais, Tours, France Visiting MSE, GIST for the summer Molecular Simulation 2 Molecular simulation is a computational experiment.

More information

Statistical Mechanics in a Nutshell

Statistical Mechanics in a Nutshell Chapter 2 Statistical Mechanics in a Nutshell Adapted from: Understanding Molecular Simulation Daan Frenkel and Berend Smit Academic Press (2001) pp. 9-22 11 2.1 Introduction In this course, we will treat

More information

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics c Hans C. Andersen October 1, 2009 While we know that in principle

More information

Molecular Dynamics. What to choose in an integrator The Verlet algorithm Boundary Conditions in Space and time Reading Assignment: F&S Chapter 4

Molecular Dynamics. What to choose in an integrator The Verlet algorithm Boundary Conditions in Space and time Reading Assignment: F&S Chapter 4 Molecular Dynamics What to choose in an integrator The Verlet algorithm Boundary Conditions in Space and time Reading Assignment: F&S Chapter 4 MSE485/PHY466/CSE485 1 The Molecular Dynamics (MD) method

More information

Temperature and Pressure Controls

Temperature and Pressure Controls Ensembles Temperature and Pressure Controls 1. (E, V, N) microcanonical (constant energy) 2. (T, V, N) canonical, constant volume 3. (T, P N) constant pressure 4. (T, V, µ) grand canonical #2, 3 or 4 are

More information

Biomolecular modeling I

Biomolecular modeling I 2015, December 15 Biomolecular simulation Elementary body atom Each atom x, y, z coordinates A protein is a set of coordinates. (Gromacs, A. P. Heiner) Usually one molecule/complex of interest (e.g. protein,

More information

Ab initio molecular dynamics and nuclear quantum effects

Ab initio molecular dynamics and nuclear quantum effects Ab initio molecular dynamics and nuclear quantum effects Luca M. Ghiringhelli Fritz Haber Institute Hands on workshop density functional theory and beyond: First principles simulations of molecules and

More information

Scientific Computing II

Scientific Computing II Scientific Computing II Molecular Dynamics Simulation Michael Bader SCCS Summer Term 2015 Molecular Dynamics Simulation, Summer Term 2015 1 Continuum Mechanics for Fluid Mechanics? Molecular Dynamics the

More information

Curves in the configuration space Q or in the velocity phase space Ω satisfying the Euler-Lagrange (EL) equations,

Curves in the configuration space Q or in the velocity phase space Ω satisfying the Euler-Lagrange (EL) equations, Physics 6010, Fall 2010 Hamiltonian Formalism: Hamilton s equations. Conservation laws. Reduction. Poisson Brackets. Relevant Sections in Text: 8.1 8.3, 9.5 The Hamiltonian Formalism We now return to formal

More information

Handout 10. Applications to Solids

Handout 10. Applications to Solids ME346A Introduction to Statistical Mechanics Wei Cai Stanford University Win 2011 Handout 10. Applications to Solids February 23, 2011 Contents 1 Average kinetic and potential energy 2 2 Virial theorem

More information

Dynamic force matching: Construction of dynamic coarse-grained models with realistic short time dynamics and accurate long time dynamics

Dynamic force matching: Construction of dynamic coarse-grained models with realistic short time dynamics and accurate long time dynamics for resubmission Dynamic force matching: Construction of dynamic coarse-grained models with realistic short time dynamics and accurate long time dynamics Aram Davtyan, 1 Gregory A. Voth, 1 2, a) and Hans

More information

Molecular Dynamics Simulations. Dr. Noelia Faginas Lago Dipartimento di Chimica,Biologia e Biotecnologie Università di Perugia

Molecular Dynamics Simulations. Dr. Noelia Faginas Lago Dipartimento di Chimica,Biologia e Biotecnologie Università di Perugia Molecular Dynamics Simulations Dr. Noelia Faginas Lago Dipartimento di Chimica,Biologia e Biotecnologie Università di Perugia 1 An Introduction to Molecular Dynamics Simulations Macroscopic properties

More information

Ab Ini'o Molecular Dynamics (MD) Simula?ons

Ab Ini'o Molecular Dynamics (MD) Simula?ons Ab Ini'o Molecular Dynamics (MD) Simula?ons Rick Remsing ICMS, CCDM, Temple University, Philadelphia, PA What are Molecular Dynamics (MD) Simulations? Technique to compute statistical and transport properties

More information

Metropolis, 2D Ising model

Metropolis, 2D Ising model Metropolis, 2D Ising model You can get visual understanding from the java applets available, like: http://physics.ucsc.edu/~peter/ising/ising.html Average value of spin is magnetization. Abs of this as

More information

Example questions for Molecular modelling (Level 4) Dr. Adrian Mulholland

Example questions for Molecular modelling (Level 4) Dr. Adrian Mulholland Example questions for Molecular modelling (Level 4) Dr. Adrian Mulholland 1) Question. Two methods which are widely used for the optimization of molecular geometies are the Steepest descents and Newton-Raphson

More information

CHEM-UA 652: Thermodynamics and Kinetics

CHEM-UA 652: Thermodynamics and Kinetics 1 CHEM-UA 652: Thermodynamics and Kinetics Notes for Lecture 4 I. THE ISOTHERMAL-ISOBARIC ENSEMBLE The isothermal-isobaric ensemble is the closest mimic to the conditions under which most experiments are

More information

CE 530 Molecular Simulation

CE 530 Molecular Simulation 1 CE 530 Molecular Simulation Lecture 14 Molecular Models David A. Kofke Department of Chemical Engineering SUNY Buffalo kofke@eng.buffalo.edu 2 Review Monte Carlo ensemble averaging, no dynamics easy

More information

Ab initio molecular dynamics

Ab initio molecular dynamics Ab initio molecular dynamics Molecular dynamics Why? allows realistic simulation of equilibrium and transport properties in Nature ensemble averages can be used for statistical mechanics time evolution

More information

A Study of the Thermal Properties of a One. Dimensional Lennard-Jones System

A Study of the Thermal Properties of a One. Dimensional Lennard-Jones System A Study of the Thermal Properties of a One Dimensional Lennard-Jones System Abstract In this study, the behavior of a one dimensional (1D) Lennard-Jones (LJ) system is simulated. As part of this research,

More information

4. The Green Kubo Relations

4. The Green Kubo Relations 4. The Green Kubo Relations 4.1 The Langevin Equation In 1828 the botanist Robert Brown observed the motion of pollen grains suspended in a fluid. Although the system was allowed to come to equilibrium,

More information

Computer simulation methods (2) Dr. Vania Calandrini

Computer simulation methods (2) Dr. Vania Calandrini Computer simulation methods (2) Dr. Vania Calandrini in the previous lecture: time average versus ensemble average MC versus MD simulations equipartition theorem (=> computing T) virial theorem (=> computing

More information

Molecular Dynamics. The Molecular Dynamics (MD) method for classical systems (not H or He)

Molecular Dynamics. The Molecular Dynamics (MD) method for classical systems (not H or He) Molecular Dynamics What to choose in an integrator The Verlet algorithm Boundary Conditions in Space and time Reading Assignment: F&S Chapter 4 1 The Molecular Dynamics (MD) method for classical systems

More information

Molecular Dynamics 9/6/16

Molecular Dynamics 9/6/16 Molecular Dynamics What to choose in an integrator The Verlet algorithm Boundary Conditions in Space and time Reading Assignment: Lesar Chpt 6, F&S Chpt 4 1 The Molecular Dynamics (MD) method for classical

More information

Molecular Dynamics Simulations

Molecular Dynamics Simulations Molecular Dynamics Simulations Dr. Kasra Momeni www.knanosys.com Outline Long-range Interactions Ewald Sum Fast Multipole Method Spherically Truncated Coulombic Potential Speeding up Calculations SPaSM

More information

510 Subject Index. Hamiltonian 33, 86, 88, 89 Hamilton operator 34, 164, 166

510 Subject Index. Hamiltonian 33, 86, 88, 89 Hamilton operator 34, 164, 166 Subject Index Ab-initio calculation 24, 122, 161. 165 Acentric factor 279, 338 Activity absolute 258, 295 coefficient 7 definition 7 Atom 23 Atomic units 93 Avogadro number 5, 92 Axilrod-Teller-forces

More information

Langevin Dynamics in Constant Pressure Extended Systems

Langevin Dynamics in Constant Pressure Extended Systems Langevin Dynamics in Constant Pressure Extended Systems D. Quigley and M.I.J. Probert CMMP 2004 1 Talk Outline Molecular dynamics and ensembles. Existing methods for sampling at NPT. Langevin dynamics

More information

CHEM3023: Spins, Atoms and Molecules

CHEM3023: Spins, Atoms and Molecules CHEM3023: Spins, Atoms and Molecules Lecture 3 The Born-Oppenheimer approximation C.-K. Skylaris Learning outcomes Separate molecular Hamiltonians to electronic and nuclear parts according to the Born-Oppenheimer

More information

Advanced Molecular Molecular Dynamics

Advanced Molecular Molecular Dynamics Advanced Molecular Molecular Dynamics Technical details May 12, 2014 Integration of harmonic oscillator r m period = 2 k k and the temperature T determine the sampling of x (here T is related with v 0

More information

Molecular dynamics simulation. CS/CME/BioE/Biophys/BMI 279 Oct. 5 and 10, 2017 Ron Dror

Molecular dynamics simulation. CS/CME/BioE/Biophys/BMI 279 Oct. 5 and 10, 2017 Ron Dror Molecular dynamics simulation CS/CME/BioE/Biophys/BMI 279 Oct. 5 and 10, 2017 Ron Dror 1 Outline Molecular dynamics (MD): The basic idea Equations of motion Key properties of MD simulations Sample applications

More information

G : Statistical Mechanics

G : Statistical Mechanics G25.2651: Statistical Mechanics Notes for Lecture 15 Consider Hamilton s equations in the form I. CLASSICAL LINEAR RESPONSE THEORY q i = H p i ṗ i = H q i We noted early in the course that an ensemble

More information

Advanced Molecular Dynamics

Advanced Molecular Dynamics Advanced Molecular Dynamics Introduction May 2, 2017 Who am I? I am an associate professor at Theoretical Physics Topics I work on: Algorithms for (parallel) molecular simulations including GPU acceleration

More information

Introduction to Simulation - Lectures 17, 18. Molecular Dynamics. Nicolas Hadjiconstantinou

Introduction to Simulation - Lectures 17, 18. Molecular Dynamics. Nicolas Hadjiconstantinou Introduction to Simulation - Lectures 17, 18 Molecular Dynamics Nicolas Hadjiconstantinou Molecular Dynamics Molecular dynamics is a technique for computing the equilibrium and non-equilibrium properties

More information

Lecture 08 Born Oppenheimer Approximation

Lecture 08 Born Oppenheimer Approximation Chemistry II: Introduction to Molecular Spectroscopy Prof. Mangala Sunder Department of Chemistry and Biochemistry Indian Institute of Technology, Madras Lecture 08 Born Oppenheimer Approximation Welcome

More information

Time-Dependent Statistical Mechanics 1. Introduction

Time-Dependent Statistical Mechanics 1. Introduction Time-Dependent Statistical Mechanics 1. Introduction c Hans C. Andersen Announcements September 24, 2009 Lecture 1 9/22/09 1 Topics of concern in the course We shall be concerned with the time dependent

More information

Liouville Equation. q s = H p s

Liouville Equation. q s = H p s Liouville Equation In this section we will build a bridge from Classical Mechanics to Statistical Physics. The bridge is Liouville equation. We start with the Hamiltonian formalism of the Classical Mechanics,

More information

Aspects of nonautonomous molecular dynamics

Aspects of nonautonomous molecular dynamics Aspects of nonautonomous molecular dynamics IMA, University of Minnesota, Minneapolis January 28, 2007 Michel Cuendet Swiss Institute of Bioinformatics, Lausanne, Switzerland Introduction to the Jarzynski

More information

Introduction Statistical Thermodynamics. Monday, January 6, 14

Introduction Statistical Thermodynamics. Monday, January 6, 14 Introduction Statistical Thermodynamics 1 Molecular Simulations Molecular dynamics: solve equations of motion Monte Carlo: importance sampling r 1 r 2 r n MD MC r 1 r 2 2 r n 2 3 3 4 4 Questions How can

More information

MIT Weakly Nonlinear Things: Oscillators.

MIT Weakly Nonlinear Things: Oscillators. 18.385 MIT Weakly Nonlinear Things: Oscillators. Department of Mathematics Massachusetts Institute of Technology Cambridge, Massachusetts MA 02139 Abstract When nonlinearities are small there are various

More information

Ab initio Molecular Dynamics Born Oppenheimer and beyond

Ab initio Molecular Dynamics Born Oppenheimer and beyond Ab initio Molecular Dynamics Born Oppenheimer and beyond Reminder, reliability of MD MD trajectories are chaotic (exponential divergence with respect to initial conditions), BUT... With a good integrator

More information

Molecular Mechanics. I. Quantum mechanical treatment of molecular systems

Molecular Mechanics. I. Quantum mechanical treatment of molecular systems Molecular Mechanics I. Quantum mechanical treatment of molecular systems The first principle approach for describing the properties of molecules, including proteins, involves quantum mechanics. For example,

More information

Energy and Forces in DFT

Energy and Forces in DFT Energy and Forces in DFT Total Energy as a function of nuclear positions {R} E tot ({R}) = E DF T ({R}) + E II ({R}) (1) where E DF T ({R}) = DFT energy calculated for the ground-state density charge-density

More information

Density Functional Theory

Density Functional Theory Density Functional Theory Iain Bethune EPCC ibethune@epcc.ed.ac.uk Overview Background Classical Atomistic Simulation Essential Quantum Mechanics DFT: Approximations and Theory DFT: Implementation using

More information

The first order formalism and the transition to the

The first order formalism and the transition to the Problem 1. Hamiltonian The first order formalism and the transition to the This problem uses the notion of Lagrange multipliers and Legendre transforms to understand the action in the Hamiltonian formalism.

More information

Density Functional Theory

Density Functional Theory Density Functional Theory March 26, 2009 ? DENSITY FUNCTIONAL THEORY is a method to successfully describe the behavior of atomic and molecular systems and is used for instance for: structural prediction

More information

Rate of Heating and Cooling

Rate of Heating and Cooling Rate of Heating and Cooling 35 T [ o C] Example: Heating and cooling of Water E 30 Cooling S 25 Heating exponential decay 20 0 100 200 300 400 t [sec] Newton s Law of Cooling T S > T E : System S cools

More information

Physics 6010, Fall Relevant Sections in Text: Introduction

Physics 6010, Fall Relevant Sections in Text: Introduction Physics 6010, Fall 2016 Introduction. Configuration space. Equations of Motion. Velocity Phase Space. Relevant Sections in Text: 1.1 1.4 Introduction This course principally deals with the variational

More information

arxiv: v1 [cond-mat.stat-mech] 7 Mar 2019

arxiv: v1 [cond-mat.stat-mech] 7 Mar 2019 Langevin thermostat for robust configurational and kinetic sampling Oded Farago, Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB EW, United Kingdom Department of Biomedical

More information

Lecture 11: Long-wavelength expansion in the Neel state Energetic terms

Lecture 11: Long-wavelength expansion in the Neel state Energetic terms Lecture 11: Long-wavelength expansion in the Neel state Energetic terms In the last class we derived the low energy effective Hamiltonian for a Mott insulator. This derivation is an example of the kind

More information

Modifications of the Robert- Bonamy Formalism and Further Refinement Challenges

Modifications of the Robert- Bonamy Formalism and Further Refinement Challenges Modifications of the Robert- Bonamy Formalism and Further Refinement Challenges Q. Ma, NASA/GISS R. H. Tipping, Univ. of Alabama C. Boulet, Univ. Paris-Sud, France Statements The presentation is not a

More information

Molecular dynamics simulation of Aquaporin-1. 4 nm

Molecular dynamics simulation of Aquaporin-1. 4 nm Molecular dynamics simulation of Aquaporin-1 4 nm Molecular Dynamics Simulations Schrödinger equation i~@ t (r, R) =H (r, R) Born-Oppenheimer approximation H e e(r; R) =E e (R) e(r; R) Nucleic motion described

More information

UNDERSTANDING BOLTZMANN S ANALYSIS VIA. Contents SOLVABLE MODELS

UNDERSTANDING BOLTZMANN S ANALYSIS VIA. Contents SOLVABLE MODELS UNDERSTANDING BOLTZMANN S ANALYSIS VIA Contents SOLVABLE MODELS 1 Kac ring model 2 1.1 Microstates............................ 3 1.2 Macrostates............................ 6 1.3 Boltzmann s entropy.......................

More information

Modeling Materials. Continuum, Atomistic and Multiscale Techniques. gg CAMBRIDGE ^0 TADMOR ELLAD B. HHHHM. University of Minnesota, USA

Modeling Materials. Continuum, Atomistic and Multiscale Techniques. gg CAMBRIDGE ^0 TADMOR ELLAD B. HHHHM. University of Minnesota, USA HHHHM Modeling Materials Continuum, Atomistic and Multiscale Techniques ELLAD B. TADMOR University of Minnesota, USA RONALD E. MILLER Carleton University, Canada gg CAMBRIDGE ^0 UNIVERSITY PRESS Preface

More information

7 To solve numerically the equation of motion, we use the velocity Verlet or leap frog algorithm. _ V i n = F i n m i (F.5) For time step, we approxim

7 To solve numerically the equation of motion, we use the velocity Verlet or leap frog algorithm. _ V i n = F i n m i (F.5) For time step, we approxim 69 Appendix F Molecular Dynamics F. Introduction In this chapter, we deal with the theories and techniques used in molecular dynamics simulation. The fundamental dynamics equations of any system is the

More information

Understanding Molecular Simulation 2009 Monte Carlo and Molecular Dynamics in different ensembles. Srikanth Sastry

Understanding Molecular Simulation 2009 Monte Carlo and Molecular Dynamics in different ensembles. Srikanth Sastry JNCASR August 20, 21 2009 Understanding Molecular Simulation 2009 Monte Carlo and Molecular Dynamics in different ensembles Srikanth Sastry Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore

More information

Why Proteins Fold? (Parts of this presentation are based on work of Ashok Kolaskar) CS490B: Introduction to Bioinformatics Mar.

Why Proteins Fold? (Parts of this presentation are based on work of Ashok Kolaskar) CS490B: Introduction to Bioinformatics Mar. Why Proteins Fold? (Parts of this presentation are based on work of Ashok Kolaskar) CS490B: Introduction to Bioinformatics Mar. 25, 2002 Molecular Dynamics: Introduction At physiological conditions, the

More information

Molecular Dynamics. A very brief introduction

Molecular Dynamics. A very brief introduction Molecular Dynamics A very brief introduction Sander Pronk Dept. of Theoretical Physics KTH Royal Institute of Technology & Science For Life Laboratory Stockholm, Sweden Why computer simulations? Two primary

More information

Intro to ab initio methods

Intro to ab initio methods Lecture 2 Part A Intro to ab initio methods Recommended reading: Leach, Chapters 2 & 3 for QM methods For more QM methods: Essentials of Computational Chemistry by C.J. Cramer, Wiley (2002) 1 ab initio

More information

Brief Review of Statistical Mechanics

Brief Review of Statistical Mechanics Brief Review of Statistical Mechanics Introduction Statistical mechanics: a branch of physics which studies macroscopic systems from a microscopic or molecular point of view (McQuarrie,1976) Also see (Hill,1986;

More information

JASS Modeling and visualization of molecular dynamic processes

JASS Modeling and visualization of molecular dynamic processes JASS 2009 Konstantin Shefov Modeling and visualization of molecular dynamic processes St Petersburg State University, Physics faculty, Department of Computational Physics Supervisor PhD Stepanova Margarita

More information

Ideal Gas Behavior. NC State University

Ideal Gas Behavior. NC State University Chemistry 331 Lecture 6 Ideal Gas Behavior NC State University Macroscopic variables P, T Pressure is a force per unit area (P= F/A) The force arises from the change in momentum as particles hit an object

More information

Project 5: Molecular Dynamics

Project 5: Molecular Dynamics Physics 2300 Spring 2018 Name Lab partner Project 5: Molecular Dynamics If a computer can model three mutually interacting objects, why not model more than three? As you ll soon see, there is little additional

More information

An introduction to Molecular Dynamics. EMBO, June 2016

An introduction to Molecular Dynamics. EMBO, June 2016 An introduction to Molecular Dynamics EMBO, June 2016 What is MD? everything that living things do can be understood in terms of the jiggling and wiggling of atoms. The Feynman Lectures in Physics vol.

More information

1 The Lagrange Equations of Motion

1 The Lagrange Equations of Motion 1 The Lagrange Equations of Motion 1.1 Introduction A knowledge of the rudiments of dynamics is essential to understanding structural dynamics. Thus this chapter reviews the basic theorems of dynamics

More information

Interatomic Potentials. The electronic-structure problem

Interatomic Potentials. The electronic-structure problem Interatomic Potentials Before we can start a simulation, we need the model! Interaction between atoms and molecules is determined by quantum mechanics: Schrödinger Equation + Born-Oppenheimer approximation

More information

Basics of Statistical Mechanics

Basics of Statistical Mechanics Basics of Statistical Mechanics Review of ensembles Microcanonical, canonical, Maxwell-Boltzmann Constant pressure, temperature, volume, Thermodynamic limit Ergodicity (see online notes also) Reading assignment:

More information

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Spring Semester 2006 Christopher J. Cramer. Lecture 9, February 8, 2006

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Spring Semester 2006 Christopher J. Cramer. Lecture 9, February 8, 2006 Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Spring Semester 2006 Christopher J. Cramer Lecture 9, February 8, 2006 The Harmonic Oscillator Consider a diatomic molecule. Such a molecule

More information

Finite Ring Geometries and Role of Coupling in Molecular Dynamics and Chemistry

Finite Ring Geometries and Role of Coupling in Molecular Dynamics and Chemistry Finite Ring Geometries and Role of Coupling in Molecular Dynamics and Chemistry Petr Pracna J. Heyrovský Institute of Physical Chemistry Academy of Sciences of the Czech Republic, Prague ZiF Cooperation

More information

UNIVERSITY OF OSLO FACULTY OF MATHEMATICS AND NATURAL SCIENCES

UNIVERSITY OF OSLO FACULTY OF MATHEMATICS AND NATURAL SCIENCES UNIVERSITY OF OSLO FCULTY OF MTHEMTICS ND NTURL SCIENCES Exam in: FYS430, Statistical Mechanics Day of exam: Jun.6. 203 Problem :. The relative fluctuations in an extensive quantity, like the energy, depends

More information

Lecture V: The game-engine loop & Time Integration

Lecture V: The game-engine loop & Time Integration Lecture V: The game-engine loop & Time Integration The Basic Game-Engine Loop Previous state: " #, %(#) ( #, )(#) Forces -(#) Integrate velocities and positions Resolve Interpenetrations Per-body change

More information

Advanced sampling. fluids of strongly orientation-dependent interactions (e.g., dipoles, hydrogen bonds)

Advanced sampling. fluids of strongly orientation-dependent interactions (e.g., dipoles, hydrogen bonds) Advanced sampling ChE210D Today's lecture: methods for facilitating equilibration and sampling in complex, frustrated, or slow-evolving systems Difficult-to-simulate systems Practically speaking, one is

More information

Exploring the energy landscape

Exploring the energy landscape Exploring the energy landscape ChE210D Today's lecture: what are general features of the potential energy surface and how can we locate and characterize minima on it Derivatives of the potential energy

More information

Multiscale Coarse-Graining of Ionic Liquids

Multiscale Coarse-Graining of Ionic Liquids 3564 J. Phys. Chem. B 2006, 110, 3564-3575 Multiscale Coarse-Graining of Ionic Liquids Yanting Wang, Sergei Izvekov, Tianying Yan, and Gregory A. Voth* Center for Biophysical Modeling and Simulation and

More information

(a) What are the probabilities associated with finding the different allowed values of the z-component of the spin after time T?

(a) What are the probabilities associated with finding the different allowed values of the z-component of the spin after time T? 1. Quantum Mechanics (Fall 2002) A Stern-Gerlach apparatus is adjusted so that the z-component of the spin of an electron (spin-1/2) transmitted through it is /2. A uniform magnetic field in the x-direction

More information

Caltech Ph106 Fall 2001

Caltech Ph106 Fall 2001 Caltech h106 Fall 2001 ath for physicists: differential forms Disclaimer: this is a first draft, so a few signs might be off. 1 Basic properties Differential forms come up in various parts of theoretical

More information

PHYSICS 715 COURSE NOTES WEEK 1

PHYSICS 715 COURSE NOTES WEEK 1 PHYSICS 715 COURSE NOTES WEEK 1 1 Thermodynamics 1.1 Introduction When we start to study physics, we learn about particle motion. First one particle, then two. It is dismaying to learn that the motion

More information

Department of Chemical Engineering University of California, Santa Barbara Spring Exercise 3. Due: Thursday, 5/3/12

Department of Chemical Engineering University of California, Santa Barbara Spring Exercise 3. Due: Thursday, 5/3/12 Department of Chemical Engineering ChE 210D University of California, Santa Barbara Spring 2012 Exercise 3 Due: Thursday, 5/3/12 Objective: To learn how to write & compile Fortran libraries for Python,

More information

Mechanics and Statistical Mechanics Qualifying Exam Spring 2006

Mechanics and Statistical Mechanics Qualifying Exam Spring 2006 Mechanics and Statistical Mechanics Qualifying Exam Spring 2006 1 Problem 1: (10 Points) Identical objects of equal mass, m, are hung on identical springs of constant k. When these objects are displaced

More information

Lecture 11: Potential Energy Functions

Lecture 11: Potential Energy Functions Lecture 11: Potential Energy Functions Dr. Ronald M. Levy ronlevy@temple.edu Originally contributed by Lauren Wickstrom (2011) Microscopic/Macroscopic Connection The connection between microscopic interactions

More information

ICCP Project 2 - Advanced Monte Carlo Methods Choose one of the three options below

ICCP Project 2 - Advanced Monte Carlo Methods Choose one of the three options below ICCP Project 2 - Advanced Monte Carlo Methods Choose one of the three options below Introduction In statistical physics Monte Carlo methods are considered to have started in the Manhattan project (1940

More information

1. Introductory Examples

1. Introductory Examples 1. Introductory Examples We introduce the concept of the deterministic and stochastic simulation methods. Two problems are provided to explain the methods: the percolation problem, providing an example

More information

Statistical Mechanics

Statistical Mechanics 42 My God, He Plays Dice! Statistical Mechanics Statistical Mechanics 43 Statistical Mechanics Statistical mechanics and thermodynamics are nineteenthcentury classical physics, but they contain the seeds

More information

Physics 115/242 The leapfrog method and other symplectic algorithms for integrating Newton s laws of motion

Physics 115/242 The leapfrog method and other symplectic algorithms for integrating Newton s laws of motion Physics 115/242 The leapfrog method and other symplectic algorithms for integrating Newton s laws of motion Peter Young (Dated: April 14, 2009) I. INTRODUCTION One frequently obtains detailed dynamical

More information

Quantum mechanics (QM) deals with systems on atomic scale level, whose behaviours cannot be described by classical mechanics.

Quantum mechanics (QM) deals with systems on atomic scale level, whose behaviours cannot be described by classical mechanics. A 10-MINUTE RATHER QUICK INTRODUCTION TO QUANTUM MECHANICS 1. What is quantum mechanics (as opposed to classical mechanics)? Quantum mechanics (QM) deals with systems on atomic scale level, whose behaviours

More information

Analysis of MD Results Using Statistical Mechanics Methods. Molecular Modeling

Analysis of MD Results Using Statistical Mechanics Methods. Molecular Modeling Analysis of MD Results Using Statistical Mechanics Methods Ioan Kosztin eckman Institute University of Illinois at Urbana-Champaign Molecular Modeling. Model building. Molecular Dynamics Simulation 3.

More information

Once Upon A Time, There Was A Certain Ludwig

Once Upon A Time, There Was A Certain Ludwig Once Upon A Time, There Was A Certain Ludwig Statistical Mechanics: Ensembles, Distributions, Entropy and Thermostatting Srinivas Mushnoori Chemical & Biochemical Engineering Rutgers, The State University

More information

in order to insure that the Liouville equation for f(?; t) is still valid. These equations of motion will give rise to a distribution function f(?; t)

in order to insure that the Liouville equation for f(?; t) is still valid. These equations of motion will give rise to a distribution function f(?; t) G25.2651: Statistical Mechanics Notes for Lecture 21 Consider Hamilton's equations in the form I. CLASSICAL LINEAR RESPONSE THEORY _q i = @H @p i _p i =? @H @q i We noted early in the course that an ensemble

More information

AST1100 Lecture Notes

AST1100 Lecture Notes AST1100 Lecture Notes 5 The virial theorem 1 The virial theorem We have seen that we can solve the equation of motion for the two-body problem analytically and thus obtain expressions describing the future

More information

Energy Diagrams --- Attraction

Energy Diagrams --- Attraction potential ENERGY diagrams Visual Quantum Mechanics Teac eaching Guide ACTIVITY 1B Energy Diagrams --- Attraction Goal Changes in energy are a good way to describe an object s motion. Here you will construct

More information

Bioengineering 215. An Introduction to Molecular Dynamics for Biomolecules

Bioengineering 215. An Introduction to Molecular Dynamics for Biomolecules Bioengineering 215 An Introduction to Molecular Dynamics for Biomolecules David Parker May 18, 2007 ntroduction A principal tool to study biological molecules is molecular dynamics simulations (MD). MD

More information

5.74 Introductory Quantum Mechanics II

5.74 Introductory Quantum Mechanics II MIT OpenCourseWare http://ocw.mit.edu 5.74 Introductory Quantum Mechanics II Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. Andrei Tokmakoff,

More information

MACROSCOPIC VARIABLES, THERMAL EQUILIBRIUM. Contents AND BOLTZMANN ENTROPY. 1 Macroscopic Variables 3. 2 Local quantities and Hydrodynamics fields 4

MACROSCOPIC VARIABLES, THERMAL EQUILIBRIUM. Contents AND BOLTZMANN ENTROPY. 1 Macroscopic Variables 3. 2 Local quantities and Hydrodynamics fields 4 MACROSCOPIC VARIABLES, THERMAL EQUILIBRIUM AND BOLTZMANN ENTROPY Contents 1 Macroscopic Variables 3 2 Local quantities and Hydrodynamics fields 4 3 Coarse-graining 6 4 Thermal equilibrium 9 5 Two systems

More information

Aim: Understand equilibrium of galaxies

Aim: Understand equilibrium of galaxies 8. Galactic Dynamics Aim: Understand equilibrium of galaxies 1. What are the dominant forces? 2. Can we define some kind of equilibrium? 3. What are the relevant timescales? 4. Do galaxies evolve along

More information