Monte Carlo Methods in Statistical Mechanics


Monte Carlo Methods in Statistical Mechanics
Mario G. Del Pópolo
Atomistic Simulation Centre, School of Mathematics and Physics
Queen's University Belfast

Mario G. Del Pópolo Statistical Mechanics 1 / 35

Outline
1 Multidimensional integrals: Quadrature vs. random sampling
2
3
4

Integrals as averages

Standard numerical quadrature:

    I = \int_a^b f(x)\,dx \approx \delta x \sum_{i=1}^{N} f(x_i)

Random sampling with distribution function \rho(x):

    I = \int_a^b f(x)\,dx = \int_a^b \left(\frac{f(x)}{\rho(x)}\right)\rho(x)\,dx
      \approx \frac{1}{M}\sum_{i=1}^{M}\frac{f(x_i)}{\rho(x_i)}
      = \left\langle \frac{f(x_i)}{\rho(x_i)} \right\rangle_{\rho}
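The two estimators above are easy to compare numerically. The following is a minimal sketch (function names are illustrative, not from the lecture): a midpoint-rule quadrature versus a random-sampling estimate of \int_0^1 x^2\,dx = 1/3, using the trivial uniform weight \rho(x) = 1.

```python
import random

def quadrature(f, a, b, n):
    """Midpoint rule: I ~ delta_x * sum_i f(x_i) on a regular grid."""
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

def mc_estimate(f, rho, draw, m, seed=0):
    """Random sampling: I ~ (1/M) * sum_i f(x_i)/rho(x_i),
    with the x_i drawn from the distribution rho."""
    rng = random.Random(seed)
    return sum(f(x) / rho(x) for x in (draw(rng) for _ in range(m))) / m

f = lambda x: x * x                     # exact integral on [0, 1] is 1/3
i_quad = quadrature(f, 0.0, 1.0, 1000)
i_mc = mc_estimate(f, lambda x: 1.0,    # uniform rho(x) = 1 on [0, 1]
                   lambda rng: rng.random(), 100_000)
```

For a one-dimensional integrand the quadrature estimate is far more accurate per function evaluation; the point of random sampling, developed below, is that its error does not degrade with dimensionality.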

Uniform sampling

Use a random variable X uniformly distributed on [a, b], so that \rho(x) = 1/(b-a). Then

    I = \int_a^b f(x)\,dx = (b-a) \int_a^b f(x)\rho(x)\,dx = (b-a)\,\langle f(x) \rangle_\rho

with variance \sigma_I^2 = \langle I^2 \rangle - \langle I \rangle^2 given by:

    \sigma_I^2 = (b-a)^2 \left( \langle f(x)^2 \rangle_\rho - \langle f(x) \rangle_\rho^2 \right)

Using N random numbers, x_1, ..., x_N, the integral and its variance are estimated by:

    i_N = \frac{b-a}{N} \sum_{j=1}^{N} f(x_j)
    s_I^2 = \frac{1}{N} \left( \frac{(b-a)^2}{N} \sum_{j=1}^{N} f^2(x_j) - i_N^2 \right)

Statistical error

The standard error in i_N is given by:

    \sigma_{i_N} = \frac{\sigma_I}{\sqrt{N}}

Decreasing the error by one order of magnitude therefore requires increasing the sample size N by two orders of magnitude.
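The 1/\sqrt{N} behaviour can be verified empirically. This sketch (illustrative, not lecture code) measures the spread of many independent uniform-sampling estimates at two sample sizes differing by a factor of 100, so the errors should differ by roughly a factor of 10:

```python
import random
import statistics

def uniform_mc(f, a, b, n, rng):
    """Uniform-sampling estimator i_N = (b - a)/N * sum_j f(x_j)."""
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

def empirical_error(f, a, b, n, trials, rng):
    """Standard deviation of the estimator over independent repetitions."""
    return statistics.pstdev(uniform_mc(f, a, b, n, rng) for _ in range(trials))

rng = random.Random(1)
f = lambda x: x * x
err_100 = empirical_error(f, 0.0, 1.0, 100, 100, rng)
err_10000 = empirical_error(f, 0.0, 1.0, 10_000, 100, rng)
ratio = err_100 / err_10000   # 100x the samples -> expect ~ sqrt(100) = 10
```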

Quadrature vs. random sampling

In the most general case:

    I = \int_a^b f(x)\,dx = \int_a^b \left(\frac{f(x)}{\rho(x)}\right)\rho(x)\,dx
      = \left\langle \frac{f(x_i)}{\rho(x_i)} \right\rangle_\rho

the variance of I is:

    \sigma_I^2 = \int_a^b \frac{f^2(x)}{\rho(x)}\,dx - I^2

and the corresponding estimators are:

    i_N = \frac{1}{N} \sum_{j=1}^{N} \frac{f(x_j)}{\rho(x_j)}
    s_I^2 = \frac{1}{N} \left( \frac{1}{N} \sum_{j=1}^{N} \frac{f^2(x_j)}{\rho^2(x_j)} - i_N^2 \right)

Quadrature vs. random sampling

In the importance-sampling method, \rho(x) is chosen to be large where f(x) is large and small where f(x) is small. The random sample is then concentrated in the region where f(x) is large and contributes most to the integral, instead of being spread uniformly over the whole interval [a, b].

Under such conditions the following inequality is fulfilled:

    \int_a^b \frac{f^2(x)}{\rho(x)}\,dx < (b-a) \int_a^b f^2(x)\,dx

The use of \rho(x) thus reduces the variance \sigma_I^2 with respect to uniform sampling and leads to a lower standard error \sigma_{i_N} = \sigma_I / \sqrt{N}.

Importance vs. uniform sampling

Example:

    f(x) = \sqrt{\frac{\sigma}{\pi}}\, \exp(-\sigma x^2)
    \rho(x) = \frac{1}{\pi}\, \frac{1}{1 + x^2}
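As an executable illustration of this example (a sketch, not lecture code): Lorentzian-distributed points can be generated as x = tan(\pi(u - 1/2)) with u uniform on [0, 1), and importance sampling then recovers the unit-normalised Gaussian integral:

```python
import math
import random

def gaussian_integral_via_cauchy(sigma, m, seed=0):
    """Estimate I = int sqrt(sigma/pi) * exp(-sigma x^2) dx (exactly 1)
    by importance sampling from rho(x) = 1 / (pi (1 + x^2))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        x = math.tan(math.pi * (rng.random() - 0.5))   # Cauchy-distributed
        f = math.sqrt(sigma / math.pi) * math.exp(-sigma * x * x)
        rho = 1.0 / (math.pi * (1.0 + x * x))
        total += f / rho
    return total / m

estimate = gaussian_integral_via_cauchy(sigma=1.0, m=200_000)
```

The heavy tails of \rho(x) guarantee the Gaussian peak is well covered, so the weights f/\rho stay bounded and the variance is small.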

Calculating ensemble averages

In classical statistical mechanics the ensemble average of B(r^N, p^N) is calculated as:

    \langle B \rangle_e = \int B(r^N, p^N)\, f_0^{[N]}(r^N, p^N)\, dr^N dp^N

with, for example,

    f_0^{[N]}(r^N, p^N) = \frac{\exp(-\beta H)}{h^{3N} N!\, Q_{N,V,T}}   (canonical ensemble)

or

    f_0(r^N, p^N; N) = \frac{\exp(-\beta (H - N\mu))}{\Xi_{\mu,V,T}}   (grand canonical ensemble)

How can \langle B \rangle_e be evaluated numerically? It is a 6N-dimensional integral, so quadrature and uniform sampling are unfeasible.

Calculating ensemble averages

Solution: generate random configurations according to a distribution W(r^N), so that:

    \langle B \rangle_e^c = \frac{\int B(r^N)\, f_0^c(r^N)\, dr^N}{\int f_0^c(r^N)\, dr^N}
                          = \frac{\int \frac{B(r^N)}{\rho(r^N)}\, \rho(r^N) f_0^c(r^N)\, dr^N}
                                 {\int \frac{1}{\rho(r^N)}\, \rho(r^N) f_0^c(r^N)\, dr^N}

where we have focused on the configurational contribution to \langle B \rangle_e. Clearly:

    \langle B \rangle_e^c = \frac{\langle B/\rho \rangle_\rho}{\langle 1/\rho \rangle_\rho}

where \langle \cdot \rangle_\rho signifies averages over the distribution W(r^N) = \rho(r^N)\, f_0^c(r^N).

Challenge: what is the most convenient form of \rho(r^N)?

\rho(r^N) must be similar to B^c(r^N) f_0^c(r^N); the ideal choice is \rho(r^N) = f_0^c(r^N).

The problem has thus been rephrased: how do we generate a series of random configurations so that each state occurs with probability \rho(r^N) = f_0^c(r^N)? Answer: generate a Markov chain of states, \Gamma_n(r_n^N), with limiting distribution f_0^c(r^N).

Markov chains

A Markov chain is a sequence of random configurations (states or trials) satisfying the following two conditions:

1 The outcome of each trial belongs to a finite set of outcomes \Gamma_1, \Gamma_2, ..., \Gamma_m, called the state space
2 The outcome of each trial depends only on the outcome of the immediately preceding one

For the conditional probability of a sequence of steps, condition 2 implies:

    Pr(j, t+1 | i, t; k_{t-1}, t-1; ...; k_0, 0) = Pr(j, t+1 | i, t)   (Markov process)

The quantities Pr(j, t+1 | i, t) = \Pi_{ij} are the elements of a transition matrix \Pi linking states \Gamma_i and \Gamma_j.
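A chain with these properties is straightforward to simulate: each new state is drawn from the row of \Pi belonging to the current state. A sketch (the 3-state matrix is an illustrative example, not from the lecture):

```python
import random

def run_chain(P, start, steps, seed=0):
    """Simulate a Markov chain: the next state depends only on the
    current state i, through the transition probabilities Pi_ij = P[i][j]."""
    rng = random.Random(seed)
    state, visits = start, [0] * len(P)
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):   # sample from row `state`
            acc += p
            if u < acc:
                state = j
                break
        visits[state] += 1
    return [v / steps for v in visits]

P = [[0.50, 0.50, 0.00],     # each row of the stochastic matrix sums to 1
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
freqs = run_chain(P, start=0, steps=100_000)
# long-run visit frequencies approach (1/4, 1/2, 1/4) for this matrix
```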

Let \rho_i(t) be the probability of being in state i at time t. Then:

    \rho_j(t) = \sum_i \rho_i(t-1)\, \Pi_{ij}

or, using the row vector \rho(t) = (\rho_1(t), \rho_2(t), ...) and the transition matrix:

    \rho(t) = \rho(t-1)\, \Pi

The solution can be written in terms of the left eigenvalues \lambda_i and left eigenvectors \Phi_i of \Pi:

    \rho(t) = \sum_i a_i\, \Phi_i\, \lambda_i^t   with   \Phi_i \Pi = \lambda_i \Phi_i

where the coefficients a_i follow from expanding the initial distribution \rho(0). \Pi is a stochastic matrix, so \sum_j \Pi_{ij} = 1 for all i. It can be proved that:

1 |\lambda_i| \le 1 for all i
2 There is at least one eigenvalue equal to unity, say \lambda_1 = 1
3 If the Markov chain is irreducible, there is only one eigenvalue equal to unity

According to the previous considerations, all eigenvalues other than \lambda_1 = 1 have modulus smaller than one, so their contributions decay and:

    \lim_{t \to \infty} \rho(t) = \Phi_1

\Phi_1 is the unique limiting distribution of the Markov chain. The stationary distribution satisfies:

    \Phi_1 \Pi = \Phi_1,   i.e.   \rho_0 \Pi = \rho_0

where \rho_0 is the vector with elements \rho_0(\Gamma_n) (\Gamma_n = position in phase space).

In statistical mechanics we need to determine the elements of \Pi satisfying:

    \Pi_{ij} \ge 0 \ \forall i, j;   \sum_j \Pi_{ij} = 1 \ \forall i;   \sum_i \rho_i \Pi_{ij} = \rho_j

and \Pi_{ij} must not depend on the normalisation constant (partition function) of \rho_0.
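The convergence \rho(t) \to \Phi_1 can be seen by iterating \rho(t) = \rho(t-1)\Pi directly. A sketch with an illustrative 3-state matrix:

```python
def evolve(rho, P, steps):
    """Iterate rho(t) = rho(t-1) . Pi  (row-vector convention)."""
    n = len(rho)
    for _ in range(steps):
        rho = [sum(rho[i] * P[i][j] for i in range(n)) for j in range(n)]
    return rho

P = [[0.50, 0.50, 0.00],    # irreducible stochastic matrix, rows sum to 1
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

rho = evolve([1.0, 0.0, 0.0], P, 100)   # start from a pure state
# rho is now (numerically) the unique stationary distribution, here
# (1/4, 1/2, 1/4), which satisfies rho . Pi = rho
```

Because the subdominant eigenvalue of this matrix has modulus 1/2, the memory of the initial pure state decays as (1/2)^t and is negligible after 100 steps.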

Condition of detailed balance

A useful trick in searching for a solution of the previous equations is to replace the stationarity condition \sum_i \rho_i \Pi_{ij} = \rho_j with the stronger condition of detailed balance:

    \rho_i\, \Pi_{ij} = \rho_j\, \Pi_{ji}

Summing over all states i recovers stationarity:

    \sum_i \rho_i \Pi_{ij} = \sum_i \rho_j \Pi_{ji} = \rho_j \sum_i \Pi_{ji} = \rho_j

In practice we need to generate a sequence of configurations (states \Gamma_n) according to the specified equilibrium distribution \rho_0(\Gamma), and we use \rho_i \Pi_{ij} = \rho_j \Pi_{ji} together with \sum_j \Pi_{ij} = 1 to build the transition matrix elements in terms of \rho_0.

Metropolis algorithm

This is an asymmetrical solution to the previous problem:

    \Pi_{ij} = \alpha_{ij}                        for \rho_j \ge \rho_i and i \ne j
    \Pi_{ij} = \alpha_{ij}\, (\rho_j / \rho_i)    for \rho_j < \rho_i and i \ne j
    \Pi_{ii} = 1 - \sum_{j \ne i} \Pi_{ij}

\alpha is a symmetrical matrix, often called the underlying matrix of the Markov chain. Since only the ratio \rho_j / \rho_i appears, we circumvent the problem of calculating the normalisation factor (partition function).
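For a small discrete state space the Metropolis prescription can be checked directly. This sketch (the target distribution and proposal matrix are illustrative values) builds \Pi from a symmetric \alpha and verifies both row normalisation and detailed balance:

```python
def metropolis_matrix(rho, alpha):
    """Build Pi_ij = alpha_ij * min(1, rho_j / rho_i) for i != j,
    and Pi_ii = 1 - sum_{j != i} Pi_ij."""
    n = len(rho)
    Pi = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                Pi[i][j] = alpha[i][j] * min(1.0, rho[j] / rho[i])
        Pi[i][i] = 1.0 - sum(Pi[i][j] for j in range(n) if j != i)
    return Pi

rho = [0.1, 0.2, 0.3, 0.4]                     # target distribution
alpha = [[0.0 if i == j else 1.0 / 3.0 for j in range(4)] for i in range(4)]
Pi = metropolis_matrix(rho, alpha)

rows_ok = all(abs(sum(row) - 1.0) < 1e-12 for row in Pi)
db_ok = all(abs(rho[i] * Pi[i][j] - rho[j] * Pi[j][i]) < 1e-12
            for i in range(4) for j in range(4))
```

Note that only ratios \rho_j/\rho_i enter, so multiplying rho by any constant leaves \Pi unchanged, which is exactly why no partition function is needed.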

Barker algorithm

Symmetrical solution (Barker method):

    \Pi_{ij} = \alpha_{ij}\, \frac{\rho_j}{\rho_i + \rho_j}   for i \ne j
    \Pi_{ii} = 1 - \sum_{j \ne i} \Pi_{ij}

In both the Metropolis and the Barker method the Markov chain will be irreducible provided \rho_i > 0 for all i and the underlying symmetric Markov chain is irreducible.

The calculation of the ensemble average of B(r^N, p^N),

    \langle B \rangle_e = \int B(r^N, p^N)\, f_0^{[N]}(r^N, p^N)\, dr^N dp^N
                        = \langle B \rangle_e^{id} + \int B(r^N)\, \rho_0(r^N)\, dr^N

is achieved by averaging over M successive states of the Markov chain. The average converges to the desired value as M \to \infty:

    \bar{B}_M = \frac{1}{M} \sum_{t=1}^{M} B(r_t^N)
              = \sum_{r^N \in \Gamma} B(r^N)\, \rho_0(r^N) + O(M^{-1/2})
              = \langle B \rangle_e + O(M^{-1/2})

Non-ergodicity can be a serious problem.

Sampling the canonical ensemble

Aim: generate a series of particle configurations distributed according to:

    \rho_0(r^N) = \frac{\exp(-\beta V_N)}{Z}   with   Z = \int dr^N \exp(-\beta V_N)

In order to implement the Metropolis algorithm we need to specify \alpha, which must satisfy \alpha_{ij} = \alpha_{ji}. For a trial move of particle n at position r_n, \alpha_{ij} is defined as:

    \alpha_{ij} = 1/N_R   if the trial position of particle n lies in the region R
    \alpha_{ij} = 0       otherwise

where R is a small region (e.g. a cube) centred on r_n and N_R is the number of trial positions in R.

Sampling the canonical ensemble

Metropolis algorithm:

    \Pi_{ij} = \alpha_{ij}                        for \rho_j \ge \rho_i and i \ne j
    \Pi_{ij} = \alpha_{ij}\, (\rho_j / \rho_i)    for \rho_j < \rho_i and i \ne j
    \Pi_{ii} = 1 - \sum_{j \ne i} \Pi_{ij}

with

    \frac{\rho_j}{\rho_i} = \frac{\exp(-\beta V_N(r_j^N))}{\exp(-\beta V_N(r_i^N))} = \exp(-\beta\, \delta V_{ji}^N)

A randomly chosen particle is moved according to \alpha (trial move):
    If \delta V_{ji}^N \le 0 then \rho_j \ge \rho_i and the new configuration is accepted
    If \delta V_{ji}^N > 0 the new configuration is accepted with probability \exp(-\beta\, \delta V_{ji}^N)
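A one-dimensional toy version of this scheme (a sketch, not from the lecture): sample x with \rho_0(x) \propto \exp(-\beta V) for the harmonic potential V(x) = x^2/2, where \beta = 1 gives \langle x^2 \rangle = 1 exactly.

```python
import math
import random

def accept(delta_v, beta, rng):
    """Metropolis rule: always accept downhill moves; accept uphill
    moves with probability exp(-beta * delta_v)."""
    return delta_v <= 0.0 or rng.random() < math.exp(-beta * delta_v)

rng = random.Random(42)
beta, x = 1.0, 0.0
total, n = 0.0, 200_000
for _ in range(n):
    x_new = x + rng.uniform(-1.0, 1.0)       # symmetric trial move (alpha)
    dv = 0.5 * x_new * x_new - 0.5 * x * x   # delta V_ji
    if accept(dv, beta, rng):
        x = x_new
    total += x * x                           # accumulate even on rejection
mean_x2 = total / n                          # should approach 1.0
```

Note that the current state is counted again when a trial move is rejected; dropping rejected states would bias the averages.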

Organisation of a simulation

Read input -> Initialise -> Markov chain -> Accumulate averages -> End of run? If no, continue the Markov chain; if yes, write output values.

Example: the Ising model,

    H = -J \sum_{\langle ij \rangle} s_i s_j - H \sum_{i=1}^{N} s_i
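The loop above, specialised to the Ising model with single-spin-flip Metropolis moves, fits in a few lines. A sketch (parameter values chosen for illustration, J = 1, periodic 16x16 lattice below the critical temperature, where the magnetisation should stay near saturation):

```python
import math
import random

def ising_metropolis(L, beta, h, sweeps, seed=0):
    """Metropolis MC for H = -J sum_<ij> s_i s_j - h sum_i s_i with
    J = 1 on an L x L periodic lattice; returns the magnetisation
    per spin after each sweep."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]          # ordered initial configuration
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):               # one sweep = L^2 trial flips
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * s[i][j] * (nn + h)    # energy change of flipping s_ij
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]
        mags.append(sum(map(sum, s)) / (L * L))
    return mags

# beta = 0.6 is below T_c (beta_c ~ 0.44), so |m| stays close to 1
m = ising_metropolis(L=16, beta=0.6, h=0.0, sweeps=200)
m_avg = sum(m[50:]) / len(m[50:])            # discard the initial relaxation
```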

Boundary conditions

Periodic boundary conditions:
    Avoid surface effects
    Periodicity introduces correlations

System size N:
    Finite-size effects depend on the correlation length and the range of the interactions
    Dependence on cell symmetry and shape

Configurational energy:
    The Ewald method (truly periodic boundary conditions)
    Truncation of the intermolecular forces: minimum image plus cutoff
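The minimum-image convention for a cubic box takes one line of code (a sketch):

```python
def minimum_image(dx, box):
    """Map a coordinate difference onto its nearest periodic image,
    giving a value in roughly [-box/2, box/2]."""
    return dx - box * round(dx / box)

# particles at x = 0.5 and x = 9.5 in a box of side 10 are distance 1 apart
d = minimum_image(9.5 - 0.5, 10.0)
```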

Assessment of the results

The main advantage of Monte Carlo methods is the great flexibility in the choice of the stochastic matrix \Pi. When designing a new MC algorithm or running an MC simulation one must:
    Ensure the algorithm samples the desired ensemble distribution (detailed balance condition)
    Ensure every state can eventually be reached from any other (the Markov chain must be irreducible, i.e. ergodic)
    Test the accuracy of the random number generator

Standard checks on simulation results:
    The steady-state distribution must be reached (discard the initial relaxation)
    The same distribution must be reached starting from different initial conditions
    Estimate statistical uncertainties and correlation times
    Check finite-size effects

Quality of the random number generator

[Figure: a set of X-Y coordinates produced with a bad random number generator, compared with coordinates produced with a good random number generator. Figure taken from Binder & Landau.]

Time scales

[Figure: evolution of the internal energy, U, and magnetisation, M, in the Ising model in the absence of a magnetic field. Figure taken from Binder & Landau.]

Note:
    Initial relaxation: the two quantities evolve with different characteristic time scales
    Intermediate times: the series are stationary and show equilibrium fluctuations
    Longer time scales: global spin inversion

Simulations in other ensembles

The isobaric-isothermal ensemble (N, P, T) allows fluctuations in the volume; the grand canonical ensemble (\mu, V, T) allows fluctuations in the number of particles; and so on.

Example: in a Metropolis grand canonical simulation:
    Particle displacements are accepted with probability: min[1, exp(-\beta\, \delta V_{ij})]
    Particles are destroyed with probability: min[1, exp(-\beta\, \delta V_{ij} + \ln(N / zV))], where z = \exp(\beta\mu)/\Lambda^3 and \Lambda is the thermal de Broglie wavelength
    Particles are created with probability: min[1, exp(-\beta\, \delta V_{ij} + \ln(zV / (N+1)))]
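These three acceptance probabilities translate directly into code. A sketch (not lecture code), with the activity z = exp(\beta\mu)/\Lambda^3 passed in as a plain number:

```python
import math

def p_displace(beta, dv):
    """Acceptance probability for a particle displacement."""
    return min(1.0, math.exp(-beta * dv))

def p_destroy(beta, dv, n, z, v):
    """Acceptance for removing one of n (> 0) particles from volume v."""
    return min(1.0, math.exp(-beta * dv + math.log(n / (z * v))))

def p_create(beta, dv, n, z, v):
    """Acceptance for inserting a particle into an n-particle system."""
    return min(1.0, math.exp(-beta * dv + math.log(z * v / (n + 1))))

# with dv = 0, removing the only particle when zV = 2 is accepted half the time
p = p_destroy(beta=1.0, dv=0.0, n=1, z=1.0, v=2.0)
```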

Comment on biasing and detailed balance

Condition of detailed balance:

    \rho_0(o)\, \alpha(o \to n)\, P_{acc}(o \to n) = \rho_0(n)\, \alpha(n \to o)\, P_{acc}(n \to o)

where P_{acc} is the probability that the trial move (o \to n) will be accepted. For a canonical simulation it follows that:

    \frac{P_{acc}(o \to n)}{P_{acc}(n \to o)} = \frac{\alpha(n \to o)}{\alpha(o \to n)}\, \exp(-\beta\, \Delta V)

Using the Metropolis solution, the acceptance rule for a trial MC move is:

    P_{acc}(o \to n) = \min\left[ 1,\ \frac{\alpha(n \to o)}{\alpha(o \to n)}\, \exp(-\beta\, \Delta V) \right]

By biasing the probability \alpha of generating a trial conformation, one can make the term on the right-hand side very close to one, in which case almost every trial move will be accepted.
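The generalised acceptance rule can be checked against the detailed-balance condition numerically. A sketch (the proposal probabilities below are made-up illustrative values, not from the lecture):

```python
import math

def p_acc(beta, dv, a_forward, a_reverse):
    """Metropolis acceptance with a biased (asymmetric) proposal alpha."""
    return min(1.0, (a_reverse / a_forward) * math.exp(-beta * dv))

beta, v_o, v_n = 1.0, 0.0, 1.0     # energies of the old and new states
a_on, a_no = 0.7, 0.3              # hypothetical biased proposal probabilities

# rho0(o) alpha(o->n) Pacc(o->n)  ==  rho0(n) alpha(n->o) Pacc(n->o)
lhs = math.exp(-beta * v_o) * a_on * p_acc(beta, v_n - v_o, a_on, a_no)
rhs = math.exp(-beta * v_n) * a_no * p_acc(beta, v_o - v_n, a_no, a_on)
```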

Bibliography

"Computer Simulation of Liquids", M. P. Allen and D. J. Tildesley, Oxford University Press, 1987.
"A Guide to Monte Carlo Simulations in Statistical Physics", D. P. Landau and K. Binder, Cambridge University Press, 2005.
"Modern Theoretical Chemistry", Volume 5, Part A, edited by B. Berne, Plenum Press, 1977.
"The Monte Carlo Method in the Physical Sciences", edited by J. E. Gubernatis, AIP Conference Proceedings, vol. 690, 2003.
"Monte Carlo Methods in Chemical Physics", edited by D. Ferguson, J. I. Siepmann and D. G. Truhlar, Advances in Chemical Physics, vol. 105, Wiley, 1999.