Statistical Thermodynamics and Monte-Carlo Evgenii B. Rudnyi and Jan G. Korvink IMTEK Albert Ludwig University Freiburg, Germany
Preliminaries
Learning Goals: From Micro to Macro; Statistical Mechanics (Statistical Thermodynamics); Monte-Carlo Method; Polymers; Lattice Model and Random Walk.
References
P. W. Atkins, Physical Chemistry.
A. R. Leach, Molecular Modelling: Principles and Applications.
I. Beichl, F. Sullivan, The Metropolis algorithm, Comp. Sci. Eng. 2000, v. 2, N 1, p. 65-69 (the "Top 10 Algorithms" series).
On-line Resources
D. Kofke, Molecular Simulation, www.cheme.buffalo.edu/courses/ce530/text/text.html
A survey of statistical mechanics as it pertains to molecular simulation: Markov processes, Monte Carlo simulation, simple biasing methods.
From Micro to Macro
Macro Properties: pressure, diffusion coefficient, average energy, temperature. Atomic effects are smeared out.
Micro Properties
Classical mechanics: positions and velocities/momenta of the atoms.
Quantum mechanics / quantum chemistry: energy is quantized. The system is described by the energies of its states and their degeneracies {E_i, Ω_i}. The energies and the number of eigenstates depend on the volume.
For a system with a large number of particles, Ω_i is huge and the energy levels are very closely spaced.
From Micro to Macro
Two Approaches
Molecular Dynamics - time average: classical atomic forces and classical particle (Newtonian) mechanics, or integrating the time-dependent Schrödinger equation.
Statistical Mechanics - ensemble average: no time; the net effect of many particles. Introduces temperature and entropy. Gives rise to the Monte-Carlo method.
Classical and quantum statistics: classical statistics leads to paradoxes, and quantum statistics is needed to resolve them. It is easier to derive statistical mechanics from quantum statistics. In practice, a quasi-classical approach is used.
Statistical Mechanics Overview Ensemble Microcanonical Ensemble Basic Postulate Ergodic Hypothesis Non-ergodic system Canonical Ensemble Definition Temperature Boltzmann Distribution Partition Function Classical Statistics Other Ensembles Classical vs. Quantum Closed-Form Solutions
Statistical Mechanics
Ensemble
Macro properties are the same; micro properties are different. Each micro state has some probability, and the distribution function describes everything.
Ensemble average for property D (P is the probability):
quantum: ⟨D⟩ = Σ_i D_i P_i
classical: ⟨D⟩ = ∫ P(p^N, r^N) D(p^N, r^N) dp^N dr^N
Microcanonical Ensemble
Basic Postulate
Energy, volume and the number of particles are constant: (N, V, E).
Quantum statistics: Ω(N, V, E) is the number of eigenstates of energy E for a system with N particles in volume V.
Basic Postulate: a system with fixed (N, V, E) is equally likely to be in any of its states. The probability is
P_i(N, V, E) = 1/Ω(N, V, E)
The collection of all such states is the microcanonical ensemble.
Microcanonical Ensemble
Ergodic Hypothesis
Time average at fixed N, V, E:
⟨D⟩_t = lim_{t→∞} (1/t) ∫_0^t D(t′) dt′
Initial conditions: D(t) = D[r^N(t), p^N(t); r_0^N, p_0^N]
Time averages should not depend on the initial conditions, and the ensemble average is equal to the time average: ⟨D⟩ = ⟨D⟩_t
Time average: Molecular Dynamics. Ensemble average: Monte-Carlo.
(Figure: time averaging of a fluctuating signal.)
Microcanonical Ensemble
Non-ergodic System
If a time average does not give a complete representation of the full ensemble, the system is non-ergodic.
Truly non-ergodic: there is no way to get there from here.
Practically non-ergodic: it is very hard to find a route from here to there.
(Figure: potential U(r) with separated basins at E = const; from Kofke.)
Canonical Ensemble
Definition
Volume and the number of particles are constant. Systems within the ensemble can exchange energy, but the total energy is constant: E_1 + E_2 = const for subsystems (N_1, V_1) and (N_2, V_2).
After Boltzmann: take L similar systems. After Gibbs: focus attention on a particular system; all the others form its surroundings.
We seek the most probable distribution function. Temperature is introduced during the treatment. At the end we can say that temperature, volume and the number of particles are constant: T, N, V = const.
Canonical Ensemble
Temperature
Energy E is an extensive property: E = E_1 + E_2 over subsystems. The interaction between subsystems is neglected, so the energy is additive.
The number of states is multiplicative: Ω(E) = Ω_1(E_1) Ω_2(E − E_1)
It is better to use ln Ω(E):
ln Ω(E_1, E − E_1) = ln Ω_1(E_1) + ln Ω_2(E − E_1)
Most likely distribution of energy:
(∂ ln Ω(E_1, E − E_1)/∂E_1)_{N,V,E} = 0
Temperature:
(∂ ln Ω_1/∂E_1)_{N,V} = (∂ ln Ω_2/∂E_2)_{N,V} = β = 1/(k_B T)
Entropy: S(N, V, E) = k_B ln Ω(N, V, E)
Canonical Ensemble
Boltzmann Distribution
A small system A in contact with a large heat bath B: E = E_A + E_B. A is in state i with energy E_i (no degeneracy). The probability is defined by
P_i = Ω_B(E − E_i) / Σ_j Ω_B(E − E_j)
Taylor expansion about E_i = 0:
ln Ω_B(E − E_i) ≈ ln Ω_B(E) − E_i ∂ ln Ω_B(E)/∂E
Substitute T and insert into the probability:
P_i = exp(−E_i/k_B T) / Σ_j exp(−E_j/k_B T)
Canonical Ensemble
Partition Function
Q = Σ_j exp(−E_j/k_B T)
Average energy
⟨E⟩ = Σ_i E_i P_i = Σ_i E_i exp(−E_i/k_B T) / Σ_j exp(−E_j/k_B T) = −∂ ln Q/∂(1/k_B T)
Helmholtz Energy
⟨E⟩ = ∂(F/T)/∂(1/T), hence F = −k_B T ln Q
F(T, V) is a potential function, and from it one can determine all other equilibrium properties.
Caution
Energy is defined only up to an arbitrary constant: it is impossible to determine a numerical value for the absolute energy, and hence for the Helmholtz energy. It is better to write
Q = Σ_j exp[−(E_j − E_0)/k_B T],  F = E_0 − k_B T ln Q
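The relations above can be checked numerically. The sketch below (a hypothetical example, not from the slides) uses harmonic-oscillator levels E_j − E_0 = j·ħω with ħω = 1 and k_B T units where β = 1/k_B T, and verifies that the direct sum Σ E_i P_i agrees with −∂ ln Q/∂β:

```python
import math

# Hypothetical example: harmonic-oscillator levels, measured relative to
# the ground state E_0 as the slide recommends: (E_j - E_0) = j * hw.
def partition_function(beta, hw=1.0, nmax=200):
    return sum(math.exp(-beta * j * hw) for j in range(nmax))

def average_energy(beta, hw=1.0, nmax=200):
    # <E - E_0> = sum_j (E_j - E_0) P_j  with Boltzmann probabilities P_j
    Q = partition_function(beta, hw, nmax)
    return sum(j * hw * math.exp(-beta * j * hw) for j in range(nmax)) / Q

beta = 1.0
# Central finite difference for d ln Q / d beta; should equal -<E - E_0>
h = 1e-6
dlnQ = (math.log(partition_function(beta + h))
        - math.log(partition_function(beta - h))) / (2 * h)
print(average_energy(beta), -dlnQ)
```

For these levels the geometric sums give ⟨E − E_0⟩ = 1/(e^β − 1), so both printed numbers should be close to 0.582 at β = 1.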
Canonical Ensemble
Classical Statistics
Energy: E(p, r) = Σ_i p_i²/2m + U(r^N)
The partition function becomes an integral:
Q = (1/(N! h^{3N})) ∫ exp(−βE) dp^N dr^N
h is Planck's constant (the quantum of action); the factor h^{3N} makes Q dimensionless. Particles are indistinguishable, hence the N!.
The integral over the kinetic energy can be taken analytically:
Q = (1/N!) (2πmkT/h²)^{3N/2} ∫ exp(−βU) dr^N
The remaining integral is the configuration integral. For a property D(r^N) the average becomes (the starting point for Monte-Carlo):
⟨D⟩ = ∫ exp(−βE) D(p^N, r^N) dp^N dr^N / ∫ exp(−βE) dp^N dr^N = ∫ exp(−βU) D(r^N) dr^N / ∫ exp(−βU) dr^N
Statistical Mechanics
Other Ensembles, I (from Kofke)
Statistical Mechanics
Other Ensembles, II (from Kofke)
Statistical Mechanics
Classical vs. Quantum
Electrons - always quantum particles.
Translational motion of molecules - always classical: an enormous number of states (at 300 K, about 10^30).
Rotational motion of molecules - typically classical; quantum statistics might be required at low temperatures.
Vibrations - depends on the temperature and the wave number. At 300 K the classical approach holds for wave numbers below about 100 cm⁻¹.
Statistical Mechanics
Closed-Form Solutions
Atomic ideal gas: the configuration integral is equal to the volume.
Molecular ideal gas: E_tot = E_transl + E_rot + E_vib; rigid rotator and harmonic oscillator.
Dense gases: estimates for the virial coefficients.
Ideal crystal: Einstein and Debye approximations. For a real crystal it is necessary to know the phonon spectra.
A variety of theories exist for liquids; none of them is really successful.
Monte-Carlo Method Overview Basic Metropolis Algorithm Random Sampling Importance Sampling Markov Chain Metropolis Method Limits of Metropolis method
Monte-Carlo Method
Basic Metropolis Algorithm
The goal is to evaluate
⟨D⟩ = ∫ exp(−βU(r^N)) D(r^N) dr^N / ∫ exp(−βU(r^N)) dr^N
For each Monte Carlo cycle:
Select particle i at random.
Compute the particle energy U_i(r).
Give particle i a random displacement drawn from the uniform distribution: r′_i = r_i + Δr.
Compute the new energy U_i(r′).
Accept the transition from r to r′ with probability
acc(r → r′) = min(1, exp[−β(U_i(r′) − U_i(r))]),  β = 1/(k_B T)
The estimate is
⟨D⟩ ≈ (1/L) Σ_{i=1}^{L} D(r_i^N)
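The cycle above can be sketched in a few lines. This is a minimal illustration, not the slides' own code: the potential is assumed to be N independent one-dimensional harmonic wells, U = Σ x_i²/2, for which equipartition predicts ⟨U⟩ = N/(2β):

```python
import random, math

# Minimal sketch of one-particle-at-a-time Metropolis sampling.
# Assumed potential: independent 1-D harmonic wells, U = sum(x_i^2 / 2).
def metropolis(n_part=10, beta=1.0, dr=1.0, n_cycles=100_000, seed=1):
    rng = random.Random(seed)
    x = [0.0] * n_part
    u_sum = 0.0
    for _ in range(n_cycles):
        i = rng.randrange(n_part)                # select particle i at random
        x_new = x[i] + rng.uniform(-dr, dr)      # uniform trial displacement
        du = 0.5 * x_new**2 - 0.5 * x[i]**2      # energy change of particle i
        if du <= 0 or rng.random() < math.exp(-beta * du):
            x[i] = x_new                         # accept with min(1, e^{-beta*dU})
        u_sum += 0.5 * sum(xi * xi for xi in x)  # accumulate U for <U>
    return u_sum / n_cycles

# Equipartition predicts <U> = n_part / (2*beta) = 5.0 here
print(metropolis())
```

Note that rejected moves still contribute the old configuration to the average; dropping them would bias the estimate.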
Monte-Carlo Method
Random Sampling
Evaluating I = ∫_a^b f(x) dx.
Methodical approach: evaluate f on a regular grid. Stochastic approach: evaluate f at random points. (Figure from Kofke.)
Error scaling with n samples in d dimensions:
methodical: δI ∝ n^{−2/d}
Monte Carlo: δI ∝ n^{−1/2}
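The two approaches are easy to compare on a toy integral. The sketch below (a hypothetical example) estimates I = ∫₀¹ x² dx = 1/3 with a midpoint rule and with uniform random sampling:

```python
import random

# Methodical (midpoint rule) vs stochastic estimates of
# I = integral of x^2 over [0, 1] (exact value 1/3).
def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def monte_carlo(f, a, b, n, seed=0):
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

f = lambda x: x * x
print(midpoint(f, 0.0, 1.0, 100))        # error ~ n^{-2} in one dimension
print(monte_carlo(f, 0.0, 1.0, 10_000))  # error ~ n^{-1/2} in any dimension
```

In one dimension the grid method wins easily; the point of the n^{−2/d} vs n^{−1/2} scaling is that Monte Carlo takes over once d is large, as it is for a many-particle configuration integral.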
Monte-Carlo Method
Importance Sampling
Most random points fall in a region where f(x) is almost zero. (Figure: a peaked f(x) with uniformly scattered random points.)
I = ∫_a^b f(x) dx;  uniform sampling: I ≈ (b − a) ⟨f(x)⟩
Rewrite with a weight function w(x):
I = ∫_a^b [f(x)/w(x)] w(x) dx = ∫_0^1 f(x(u))/w(x(u)) du
I ≈ (1/L) Σ_{i=1}^{L} f(x(u_i))/w(x(u_i))
σ_I² = (1/L) [⟨(f/w)²⟩ − ⟨f/w⟩²]
If f/w is constant, the variance vanishes.
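A small numerical experiment (hypothetical, not from the slides) makes the variance reduction concrete: integrate the sharply peaked f(x) = exp(−x²/2) over [−20, 20] (true value √(2π) ≈ 2.507), once with uniform points and once with points drawn from a weight density w that roughly matches the shape of f:

```python
import random, math

# Uniform vs importance sampling of I = ∫ exp(-x^2/2) dx over [-20, 20].
def uniform_estimate(n, seed=0):
    rng = random.Random(seed)
    return 40.0 * sum(math.exp(-rng.uniform(-20, 20)**2 / 2)
                      for _ in range(n)) / n

def importance_estimate(n, seed=0):
    # Draw x from a weight density w(x): a normal with sigma = 2, chosen
    # so that f/w is nearly constant and the variance is small.
    rng = random.Random(seed)
    sigma = 2.0
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        w = math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        total += math.exp(-x * x / 2) / w    # accumulate f/w
    return total / n

print(uniform_estimate(10_000), importance_estimate(10_000))
```

With the same number of points, the importance-sampled estimate fluctuates far less, because most uniform points land where f is essentially zero.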
Monte-Carlo Method
Markov Chain
Stochastic process: movement through a series of well-defined states in a way that involves some element of randomness. For our purposes, the states are microstates in the governing ensemble.
Markov process: a stochastic process that has no memory. The selection of the next state depends only on the current state, not on prior states. The process is fully defined by a set of transition probabilities π_ij: the probability of selecting state j next, given that the system is presently in state i. The transition-probability matrix Π collects all π_ij.
The limiting probability does not depend on the initial distribution: P = PΠ.
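The fixed-point relation P = PΠ and its independence from the starting distribution can be demonstrated with a tiny chain. The three-state transition matrix below is a made-up example:

```python
# A 3-state Markov chain reaches the same limiting distribution P = P*Pi
# from any starting distribution (hypothetical transition matrix; each
# row sums to 1).
Pi = [[0.5, 0.4, 0.1],
      [0.2, 0.6, 0.2],
      [0.1, 0.4, 0.5]]

def step(p, Pi):
    # One application of p' = p * Pi (row vector times matrix).
    return [sum(p[i] * Pi[i][j] for i in range(len(p)))
            for j in range(len(Pi[0]))]

p = [1.0, 0.0, 0.0]   # start certainly in state 0
q = [0.0, 0.0, 1.0]   # start certainly in state 2
for _ in range(100):
    p = step(p, Pi)
    q = step(q, Pi)
print(p)   # limiting distribution
print(q)   # same limit despite the different start
```

After enough steps both vectors converge to the same stationary P, which is exactly what the Metropolis construction exploits: it builds a Π whose limiting distribution is the Boltzmann distribution.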
Monte-Carlo Method
Metropolis Method
At the limiting distribution, detailed balance (the principle of microscopic reversibility) holds:
N(o) π(o → n) = N(n) π(n → o)
Use any convenient underlying transition matrix, but do not accept every step; rather, accept a step with such a probability that detailed balance is satisfied. Metropolis suggested
π(o → n) = α(o → n) acc(o → n)
where α(o → n) is the underlying probability to move (uniform distribution) and acc(o → n) is the acceptance probability:
if N(n) < N(o), then acc(o → n) = N(n)/N(o) = exp(−β[U(n) − U(o)])
if N(n) ≥ N(o), then acc(o → n) = 1
It is possible to prove that this leads to detailed balance.
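The proof can be checked numerically for a single pair of states. The sketch below uses two hypothetical energies U(o) and U(n) with Boltzmann weights N ∝ exp(−βU) and a symmetric underlying probability α:

```python
import math

# Numerical check of detailed balance, N(o)*pi(o->n) = N(n)*pi(n->o),
# for the Metropolis acceptance rule (hypothetical two-state example).
beta, U_o, U_n = 1.0, 0.3, 1.1
N_o = math.exp(-beta * U_o)   # Boltzmann weight of the old state
N_n = math.exp(-beta * U_n)   # Boltzmann weight of the new state

alpha = 0.5                   # symmetric underlying move probability
acc_on = min(1.0, math.exp(-beta * (U_n - U_o)))  # o -> n: uphill, < 1
acc_no = min(1.0, math.exp(-beta * (U_o - U_n)))  # n -> o: downhill, = 1

lhs = N_o * alpha * acc_on    # flow o -> n
rhs = N_n * alpha * acc_no    # flow n -> o
print(lhs, rhs)               # equal: detailed balance holds
```

The uphill move is penalized by exactly the weight ratio N(n)/N(o), so the two probability flows match regardless of the chosen energies.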
Monte-Carlo Method
Limits of the Metropolis Method
The Metropolis algorithm cannot estimate the configuration integral
Q_conf = ∫ exp(−βU) dr^N
(or the partition function); it can estimate only ratios of the form
⟨D⟩ = ∫ exp(−βU(r^N)) D(r^N) dr^N / ∫ exp(−βU(r^N)) dr^N
Trick: 1 = exp(+βU) exp(−βU). Then we can write
V^N/Q_conf = ∫ exp(+βU) exp(−βU) dr^N / ∫ exp(−βU) dr^N
i.e. ⟨D⟩ with D = exp(+βU).
This does not help: it is necessary to sample the high-energy regions, which the Boltzmann-weighted walk almost never visits. Special tricks are required to estimate the free energy and entropy.
Polymers
Overview
Monomer, repeat unit and polymer: http://www.psrc.usm.edu/macrog/index.htm
End-to-end distance statistics: for a chain of n links of length l, ⟨R²⟩ = n l².
Radius of gyration of the backbone chain: S² = (1/N) Σ_i ⟨(r_i − r_0)²⟩, where r_0 is the center of mass.
Two extreme cases for simulation: a dilute solution of a polymer, and a polymer melt.
A full atomistic model is a challenge to simulate.
Polymers
Lattice Model and Random Walk
Lattices (two- and three-dimensional); bent and crankshaft moves (from 99freire).
Random walks: on and off lattice; self-crossing and self-avoiding.
Demo from Atkins.
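The end-to-end statistics of the slide before last can be reproduced with the simplest lattice model. The sketch below (an illustrative example, not the Atkins demo) generates self-crossing random walks on a 2-D square lattice with unit steps, for which ⟨R²⟩ = n l² reduces to ⟨R²⟩ = n:

```python
import random

# Simple (self-crossing) random walk on a 2-D square lattice.
# For n unit steps the mean-square end-to-end distance is <R^2> = n.
def walk_r2(n_steps, rng):
    x = y = 0
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return x * x + y * y    # squared end-to-end distance

rng = random.Random(42)
n = 100
n_walks = 20_000
r2 = sum(walk_r2(n, rng) for _ in range(n_walks)) / n_walks
print(r2)   # close to n = 100
```

A self-avoiding walk would instead reject any step onto an already-visited site; its ⟨R²⟩ grows faster than n, which is why the two variants model different solvent conditions.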
Summary From Micro to Macro Statistical Mechanics (Statistical Thermodynamics) Ensemble Microcanonical Ensemble Canonical Ensemble Other Ensembles Classical vs. Quantum Closed-Form Solutions Monte-Carlo Method Basic Metropolis Algorithm Random Sampling Importance Sampling Markov Chain Metropolis Method Limits of Metropolis method Polymers Lattice Model and Random Walk