Wang-Landau Monte Carlo simulation Aleš Vítek IT4I, VP3
PART 1 Classical Monte Carlo Methods
Outline 1. Statistical thermodynamics, ensembles 2. Numerical evaluation of integrals, crude Monte Carlo (MC) 3. MC methods for statistical thermodynamics, Markov chains 4. Variants of MC methods
Statistical thermodynamics: microscopic description (positions, momenta, interactions) vs. macroscopic description (pressure, density, temperature). Statistical thermodynamics provides the link between the two.
Statistical thermodynamics: time averages (Molecular Dynamics methods)
$$\bar{B} = \lim_{t \to \infty} \frac{1}{t} \int_0^t b\big(r_K(t'), p_K(t')\big)\, dt'$$
Statistical thermodynamics: ensemble averages (Monte Carlo methods)
$$\langle B \rangle_{A_1,\dots,A_n} = \frac{\int b(r_K, p_K)\, \rho(r_K, p_K; A_1,\dots,A_n)\, d\Gamma}{\int \rho(r_K, p_K; A_1,\dots,A_n)\, d\Gamma}$$
Statistical (Gibbs) ensembles: a microstate (μ-state) is given by positions and momenta; a macrostate (M-state) by macroscopic variables (A_1, …, A_n). One M-state corresponds to many μ-states. A Gibbs ensemble is a probability distribution over the system's phase space, ρ(r_K, p_K; A_1, …, A_n).
Examples of statistical (Gibbs) ensembles: micro-canonical (N, V, E); canonical (N, V, T); grand-canonical (μ, V, T); isobaric-isothermal (N, P, T); etc.
Canonical Gibbs ensemble
$$\rho(r_K, p_K; N, V, T) \propto \exp\!\left(-\frac{H_N(r_K, p_K; V)}{k_B T}\right), \qquad H_N(r_K, p_K; V) = \sum_{K=1}^{N} \frac{p_K^2}{2 M_K} + W_N(r_K; V)$$
Statistical thermodynamics as a mathematical problem
$$\langle B \rangle_{A_1,\dots,A_n} = \frac{\int b(r_K, p_K)\, \rho(r_K, p_K; A_1,\dots,A_n)\, d\Gamma}{\int \rho(r_K, p_K; A_1,\dots,A_n)\, d\Gamma}, \qquad d\Gamma = \frac{1}{N!\, h^{3N}}\, d^3r_1\, d^3p_1 \cdots d^3r_N\, d^3p_N$$
Statistical thermodynamics as a mathematical problem: the momenta can be integrated out analytically, leaving a configuration integral,
$$\langle B \rangle_{A_1,\dots,A_n} = B_{\rm kin} + \frac{\int b_{\rm con}(r_K)\, \rho_{\rm con}(r_K; A_1,\dots,A_n)\, dr_1 \cdots dr_N}{\int \rho_{\rm con}(r_K; A_1,\dots,A_n)\, dr_1 \cdots dr_N}$$
with, for the canonical ensemble,
$$\rho_{\rm con}(r_K; N, V, T) \propto \exp\!\left(-\frac{W_N(r_K; V)}{k_B T}\right)$$
Numerical evaluation of integrals in 1D (rectangular, trapezoid, Simpson, …):
$$\int_a^b f(x)\, dx \approx \frac{b-a}{n} \sum_{j=1}^{n} f\!\left(a + j\,\frac{b-a}{n}\right) = \frac{b-a}{n} \sum_{j=1}^{n} f(x_j)$$
1D crude Monte Carlo: the same formula, but with the points x_j randomly (uniformly) distributed over [a, b]:
$$\int_a^b f(x)\, dx \approx \frac{b-a}{n} \sum_{j=1}^{n} f(x_j)$$
Numerical evaluation of integrals, example: $\int_0^{\pi} \sin x \, dx$
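The crude MC estimator above can be sketched in a few lines. This assumes the example integral is $\int_0^\pi \sin x\, dx = 2$ (the upper limit appears lost in extraction); the sample size and seed are arbitrary illustrative choices.

```python
import math
import random

def crude_mc(f, a, b, n, seed=0):
    """Crude Monte Carlo estimate of the 1D integral of f over [a, b]."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

estimate = crude_mc(math.sin, 0.0, math.pi, 100_000)
print(estimate)  # close to the exact value 2
```

The statistical error decreases as $1/\sqrt{n}$, independently of dimension, which is why MC wins over quadrature for the 3N-dimensional configuration integrals above.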
Numerical evaluation of integrals, localized functions: $\int_a^b f(x)\, \rho(x)\, dx$ with a sharply localized weight ρ(x). Solution: sampling from ρ(x). How? Markov chains.
Markov chains, simplified example (finite number of microstates). Microstates: s_1, …, s_n; probability distribution: π_1 = π(s_1), …, π_n = π(s_n); equilibrium PD: π_1^{(e)}, …, π_n^{(e)}.
Markov chain: a sequence s^{(1)}, s^{(2)}, …, s^{(N)}, …, where each s^{(K)} is from {s_1, …, s_n}. The transition probabilities π_{ik} form a stochastic matrix: π_{ik} ≥ 0, $\sum_{k=1}^{n} \pi_{ik} = 1$.
PD function evolution: $\pi_k^{(N+1)} = \sum_{i=1}^{n} \pi_i^{(N)} \pi_{ik}$; equilibrium PD function: $\pi_k^{(e)} = \sum_{i=1}^{n} \pi_i^{(e)} \pi_{ik}$.
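The evolution equation can be iterated numerically to watch an arbitrary initial distribution relax to the equilibrium PD; the 3x3 matrix below is an invented illustrative example, not from the slides.

```python
import numpy as np

# A 3-state stochastic matrix (each row sums to 1); values are illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

pi = np.array([1.0, 0.0, 0.0])  # arbitrary initial distribution
for _ in range(200):
    pi = pi @ P  # pi_k^(N+1) = sum_i pi_i^(N) P_ik

# pi is now (numerically) the equilibrium distribution: pi^(e) = pi^(e) P
print(pi, np.allclose(pi, pi @ P))
```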
Theorem: Let s^{(1)}, s^{(2)}, …, s^{(N)}, … be an infinite Markov chain of microstates generated using a stochastic matrix π_{ik}. Then the generated microstates are distributed over the state space of the system according to the equilibrium PD function of that stochastic matrix. Remarks: The Markov chain does not represent any physical time evolution. From the statistical-thermodynamics point of view, only the equilibrium PD function is physically relevant (not, e.g., the stochastic matrix itself).
Markov chains, construction of an appropriate stochastic matrix: $\pi_k^{(e)} = \sum_{j=1}^{n} \pi_j^{(e)} \pi_{jk}$ and $\sum_{k=1}^{n} \pi_{jk} = 1$. Mathematically, these are 2n linear algebraic equations for n² variables, so there are many (infinitely many) solutions. A convenient sufficient condition is detailed balance: $\pi_k^{(e)} \pi_{kj} = \pi_j^{(e)} \pi_{jk}$.
Markov chains, decomposition of the stochastic matrix: π_{ik} = p_{ik} a_{ik}, where p_{ik} is the probability of proposing a particular transition and a_{ik} the probability of accepting the proposed transition. Metropolis solution [1]:
$$a_{ik} = \min\!\left(1, \frac{\pi_k^{(e)}\, p_{ki}}{\pi_i^{(e)}\, p_{ik}}\right)$$
Notice that the PD function need not be normalized! [1] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, E. Teller, J. Chem. Phys. 21 (1953) 1087.
Metropolis algorithm for evaluating configuration integrals of statistical thermodynamics (canonical ensemble):
$$\langle B \rangle = B_{\rm kin} + \frac{\int b_{\rm con}(r_K)\, e^{-W(r_K)/k_B T}\, dr_1 \cdots dr_N}{\int e^{-W(r_K)/k_B T}\, dr_1 \cdots dr_N}$$
and, consequently,
$$\frac{\pi_k^{(e)}}{\pi_i^{(e)}} = \frac{e^{-W^{(k)}/k_B T}}{e^{-W^{(i)}/k_B T}} = e^{-(W^{(k)} - W^{(i)})/k_B T}$$
Metropolis algorithm: from the configuration generated in the preceding step, r^{(i)}, generate a new one, r^{(new)}, using an algorithm for which the probability of proposing old → new equals that of proposing new → old (usually an isotropic translation and/or rotation). If W^{(new)} < W^{(i)}, then r^{(i+1)} = r^{(new)}. If W^{(new)} > W^{(i)}, then r^{(i+1)} = r^{(new)} with probability $e^{-(W^{(new)} - W^{(i)})/k_B T}$, and r^{(i+1)} = r^{(i)} otherwise. Repeat until infinity (convergence).
Metropolis algorithm: then
$$\frac{\int b_{\rm con}(r_K)\, e^{-W(r_K)/k_B T}\, dr_1 \cdots dr_N}{\int e^{-W(r_K)/k_B T}\, dr_1 \cdots dr_N} = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} b_{\rm con}\big(r_K^{(i)}\big)$$
This is canonical (or NVT, or isochoric-isothermal) Monte Carlo.
A general sketch of an MC simulation: propose an initial configuration; employ the Metropolis algorithm to generate as many further configurations as possible; collect the data needed for the ensemble averages along the MC path; calculate the averages as simple arithmetic means of the collected data.
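The sketch above can be written out for a 1D toy system; the harmonic potential, step size, and sample count below are illustrative assumptions, not part of the lecture. For $W(x) = x^2/2$ the exact canonical average is $\langle x^2 \rangle = k_B T$, which makes the sampler easy to check.

```python
import math
import random

def metropolis_nvt(W, x0, n_steps, kT=1.0, step=0.5, seed=1):
    """Canonical Metropolis MC for a 1D configuration; returns the sampled path."""
    rng = random.Random(seed)
    x, w = x0, W(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)  # symmetric (isotropic) proposal
        w_new = W(x_new)
        # accept with probability min(1, exp(-(W_new - W_old)/kT))
        if w_new <= w or rng.random() < math.exp(-(w_new - w) / kT):
            x, w = x_new, w_new
        samples.append(x)  # on rejection the old state is counted again
    return samples

W = lambda x: 0.5 * x * x            # harmonic potential (illustrative)
xs = metropolis_nvt(W, 0.0, 200_000)
mean_x2 = sum(x * x for x in xs) / len(xs)
print(mean_x2)  # should approach kT = 1 for this potential
```

Note that rejected moves still contribute the old configuration to the average; dropping them would bias the estimator.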
Trouble spots: periodic boundary conditions (example).
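A standard ingredient of periodic boundary conditions is the minimum-image convention for inter-particle distances; this short sketch (a common textbook construction, not taken from the slides) shows why naive distances across a periodic box are wrong.

```python
def minimum_image(dx, box):
    """Minimum-image convention: wrap a coordinate difference into [-box/2, box/2]."""
    return dx - box * round(dx / box)

# Example: two particles near opposite walls of a box of length 10
# are actually close neighbours across the periodic boundary.
print(minimum_image(9.0 - 1.0, 10.0))  # -2.0, not 8.0
```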
Literature: M. Lewerenz, Monte Carlo Methods: Overview and Basics, in NIC Series Volume 10: Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms, NIC Jülich 2002 (http://www.fz-juelich.de/nic-series/volume10/volume10.html). M. P. Allen, D. J. Tildesley, Computer Simulation of Liquids, Oxford University Press, Oxford 1987. D. P. Landau, K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press, Cambridge 2000 and 2005.
Appendix: simulated annealing [1] and basin hopping [2]. Start with an MC simulation at a high temperature; then either cool slowly towards T → 0 K (simulated annealing) or interleave the MC moves with gradient optimization (basin hopping); the result is an equilibrium structure for T = 0 K. [1] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Science 220 (1983) 671; [2] D. J. Wales, H. A. Scheraga, Science 285 (1999) 1368.
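Simulated annealing is just the Metropolis algorithm with a slowly decreasing temperature. The sketch below uses an invented 1D double-well potential and an invented geometric cooling schedule; all parameters are illustrative assumptions.

```python
import math
import random

def simulated_annealing(W, x0, T0=2.0, T_min=1e-3, cooling=0.95,
                        steps_per_T=200, step=0.5, seed=2):
    """Simulated annealing: Metropolis MC while the temperature is lowered."""
    rng = random.Random(seed)
    x, w = x0, W(x0)
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            x_new = x + rng.uniform(-step, step)
            w_new = W(x_new)
            if w_new <= w or rng.random() < math.exp(-(w_new - w) / T):
                x, w = x_new, w_new
        T *= cooling  # geometric cooling schedule (one common choice)
    return x, w

# Illustrative double-well potential with minima near x = +1 and x = -1
W = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_best, w_best = simulated_annealing(W, x0=1.0)
print(x_best, w_best)  # the walker ends near the bottom of one of the wells
```

At high T the walker crosses the barrier freely; as T decreases it freezes into a deep minimum, which is the optimization aspect mentioned above.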
Example 2 Simulated annealing of small water clusters.
PART 2 Wang-Landau algorithm
Problem: low temperatures. Simulation at low temperatures is a combination of integration and optimization. Molecular clusters and biomolecules have complicated energy landscapes: many minima separated by large energy barriers.
Mean value of variable B:
$$\langle B \rangle_{A_1,\dots,A_n} = B_{\rm kin} + \frac{\int b_{\rm con}(r_K)\, \rho_{\rm con}(r_K; A_1,\dots,A_n)\, dr_1 \cdots dr_N}{\int \rho_{\rm con}(r_K; A_1,\dots,A_n)\, dr_1 \cdots dr_N}$$
If $b_{\rm con}(r_K) = b\big(W(r_K)\big)$, then the mean value of B can be expressed as a 1D integral,
$$\langle B \rangle_{A_1,\dots,A_n} = B_{\rm kin} + \frac{\int_{W_{\min}}^{\infty} b(W)\, \rho_{\rm con}(W; A_1,\dots,A_n)\, \Omega(W)\, dW}{\int_{W_{\min}}^{\infty} \rho_{\rm con}(W; A_1,\dots,A_n)\, \Omega(W)\, dW}$$
where Ω(W) is the classical density of states,
$$\Omega(W) = \lim_{\Delta W \to 0} \frac{1}{\Delta W} \int_{\{r_1,\dots,r_N:\; W \le W(r_1,\dots,r_N) < W + \Delta W\}} dr_1 \cdots dr_N$$
How to calculate Ω(W)? The Wang-Landau algorithm.
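Once Ω(W) is tabulated, the 1D ratio above gives canonical averages at any temperature from a single simulation. The sketch below evaluates it on a discrete energy grid, using an invented toy density of states (n independent two-level units, Ω(W) = C(n, W)) whose exact mean energy is known; working with ln Ω avoids overflow.

```python
import math
import numpy as np

def canonical_average(W, ln_omega, b_of_W, kT):
    """Canonical mean of b(W) from a tabulated density of states:
    <B> = sum_W b(W) Omega(W) exp(-W/kT) / sum_W Omega(W) exp(-W/kT)."""
    ln_weight = ln_omega - W / kT
    weights = np.exp(ln_weight - ln_weight.max())  # shift for numerical stability
    return float(np.sum(b_of_W(W) * weights) / np.sum(weights))

# Toy density of states: n independent two-level units with level spacing 1,
# Omega(W) = C(n, W); the exact mean energy is <W> = n / (1 + exp(1/kT)).
n = 20
W = np.arange(n + 1, dtype=float)
ln_omega = np.array([math.log(math.comb(n, k)) for k in range(n + 1)])
print(canonical_average(W, ln_omega, lambda w: w, kT=1.0))
```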
Calculation of Ω: the Wang-Landau algorithm, basic idea.
- A completely random walk in configuration space gives an energy histogram h(W) ~ Ω(W), but low-energy configurations will practically never be visited.
- The Metropolis algorithm samples configurations with weight ρ(W(r)), so the energy histogram is h(W) ~ Ω(W) × ρ(W).
- If we instead use ρ(W(r)) = 1/Ω(W), the energy histogram becomes a constant (flat) function.
Wang-Landau algorithm: calculation of the density of states.
- At the start of the simulation Ω(W) is unknown; we begin with the constant guess Ω(W) = 1 and a random initial configuration r_1.
- The next configuration r_2, created by a small random change of the preceding configuration r_1, is accepted with probability min(1, Ω(W_old)/Ω(W_new)).
- On each visit of an energy W_i, the density of states is updated: Ω(W_i) → Ω(W_i) × k_0, where the modification factor k_0 > 1 (e.g. k_0 = 2). A big k gives big statistical errors; a small k gives a long simulation.
- After a flat energy histogram h(W) is reached, the modification factor is reduced, e.g. by the rule k_{j+1} = √(k_j); the energy histogram is reset (the density of states is not reset!) and a new iteration starts.
- The computation is finished when k reaches a predetermined value close to 1 (e.g. 1.000001).
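The steps above can be sketched for a toy model whose exact density of states is known: W = number of "up" spins among n independent spins, so Ω(W) = C(n, W) exactly. The initial factor k_0 = e, the flatness threshold 0.8, and the batch size are illustrative assumptions (the slides mention e.g. k_0 = 2); the code works with ln Ω to avoid overflow.

```python
import math
import random

def wang_landau(n_spins=10, flatness=0.8, ln_k_final=1e-4, seed=3):
    """Wang-Landau sketch for a toy model: W = number of 'up' spins among
    n_spins independent spins, so the exact density of states is C(n, W)."""
    rng = random.Random(seed)
    n_levels = n_spins + 1
    ln_g = [0.0] * n_levels           # ln(Omega); start from Omega(W) = 1
    hist = [0] * n_levels
    spins = [rng.randint(0, 1) for _ in range(n_spins)]
    w = sum(spins)
    ln_k = 1.0                        # ln of the initial modification factor k0 = e
    while ln_k > ln_k_final:
        for _ in range(10_000):
            i = rng.randrange(n_spins)            # propose a single spin flip
            w_new = w + (1 - 2 * spins[i])
            # accept with probability min(1, Omega(W_old)/Omega(W_new))
            if ln_g[w_new] <= ln_g[w] or rng.random() < math.exp(ln_g[w] - ln_g[w_new]):
                spins[i] = 1 - spins[i]
                w = w_new
            ln_g[w] += ln_k                       # update DOS at the current energy
            hist[w] += 1
        if min(hist) > flatness * (sum(hist) / n_levels):  # histogram flat enough?
            ln_k *= 0.5               # k_{j+1} = sqrt(k_j)
            hist = [0] * n_levels     # reset the histogram, keep ln_g
    return ln_g

ln_g = wang_landau()
# Compare with the exact ln C(10, W), up to an additive constant
ref = [math.log(math.comb(10, k)) for k in range(11)]
shift = sum(g - r for g, r in zip(ln_g, ref)) / 11
print(max(abs(g - shift - r) for g, r in zip(ln_g, ref)))  # deviation should be small
```

Because the random walk only ever determines Ω up to a multiplicative constant, the comparison subtracts the mean offset before measuring the error.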
Wang-Landau algorithm: the accuracy of the results depends on:
- The final value of the modification factor k_final
- The fineness of the energy binning
- The criterion of flatness of the histogram
- The complexity of the simulated system
- Details of the implemented algorithm
Wang-Landau algorithm: parallelization.
- MPI: each core has its own walker; all cores share the current density of states and the current energy histogram.
Wang-Landau algorithm, example: J. Chem. Phys. 134, 024109 (2012).
Thank you for your attention