Monte Carlo and cold gases. Lode Pollet.


Lode Pollet, lpollet@physics.harvard.edu

Outline
- Classical Monte Carlo: the Monte Carlo trick, Markov chains, the Metropolis algorithm, the Ising model, critical slowing down
- Quantum Monte Carlo: diffusion Monte Carlo, the worm algorithm, the sign problem
- Applications

What I will not cover: high-performance computing, parallelization, programming languages (C++), other numerical methods. Required knowledge: statistical mechanics, quantum mechanics, some familiarity with many-body physics.

Integration
Discretize [a, b] into N intervals of width h = (b - a)/N. The discrete sum gives
I = \int_a^b f(x)\,dx = h \sum_{i=1}^{N} f(a + ih) + O(1/N).
Higher-order integrators:
trapezoidal: I = h \left[ \tfrac{1}{2} f(a) + \sum_{i=1}^{N-1} f(a + ih) + \tfrac{1}{2} f(b) \right] + O(1/N^2)
Simpson: I = \tfrac{h}{3} \left[ f(a) + \sum_{i=1}^{N-1} (3 - (-1)^i) f(a + ih) + f(b) \right] + O(1/N^4) (N even)
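The two quadrature rules above can be checked directly; a minimal sketch (the test integrand exp(-x) on [0, 1] is my choice, not from the slides):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n intervals; error O(1/n^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson rule with n intervals (n even); error O(1/n^4).
    The interior weight (3 - (-1)**i) is 4 for odd i and 2 for even i."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b) + sum((3 - (-1) ** i) * f(a + i * h) for i in range(1, n))
    return h * s / 3

f = lambda x: math.exp(-x)
exact = 1 - math.exp(-1)          # exact value of the test integral
print(abs(trapezoid(f, 0, 1, 100) - exact))   # ~1e-6
print(abs(simpson(f, 0, 1, 100) - exact))     # ~1e-10
```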

Integration
With the Simpson rule and M points per dimension, the error in one dimension is O(M^{-4}). In d dimensions we need N = M^d points, so the error is O(M^{-4}) = O(N^{-4/d}). Integration becomes inefficient in higher dimensions. In statistical mechanics the integrals are 6N-dimensional (3N positions, 3N momenta).

The Monte Carlo trick
I = \int_D g(x)\,dx. Draw m identical and independent random samples x^{(1)}, \ldots, x^{(m)} uniformly from D and form
\hat{I}_m = \tfrac{1}{m} \left( g(x^{(1)}) + \cdots + g(x^{(m)}) \right) \approx \langle g(x) \rangle_D.
Law of large numbers: P\left[ \lim_{m \to \infty} \hat{I}_m = I \right] = 1.

The Monte Carlo trick
Central limit theorem: \lim_{m \to \infty} \mathrm{Cdf}_{\sqrt{m}(\hat{I}_m - I)} = \mathrm{Cdf}_{N(0, \sigma^2)}, so the convergence is O(m^{-1/2}) regardless of the dimensionality of D.
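The O(m^{-1/2}) convergence is easy to see numerically; a small sketch (integrand x^2 on [0, 1] is my example):

```python
import random

def mc_mean(g, m, rng):
    """Plain Monte Carlo estimate of I = integral of g over [0, 1]
    from m independent uniform samples."""
    return sum(g(rng.random()) for _ in range(m)) / m

g = lambda x: x * x               # exact integral over [0, 1] is 1/3
rng = random.Random(42)
# the statistical error shrinks as O(m^{-1/2}), independent of dimension
errors = [abs(mc_mean(g, m, rng) - 1 / 3) for m in (100, 10_000, 1_000_000)]
print(errors)
```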

Random numbers
Pseudo-random numbers: not random at all, but generated by an algorithm; probably good enough; reproducible.
Linear congruential generators: x_{n+1} = f(x_n). GGL: x_{n+1} = (a x_n + c) \bmod m with a = 16807, c = 0, m = 2^{31} - 1. The problem is the periodicity: at 500 million random numbers per second the full period is exhausted in about 4 s. Such generators should not be used any more.
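The GGL generator fits in a few lines; a minimal sketch of the recurrence just given:

```python
class GGL:
    """Lehmer / 'minimal standard' linear congruential generator:
    x_{n+1} = (a * x_n) mod m with a = 16807, c = 0, m = 2**31 - 1."""
    A, M = 16807, 2**31 - 1

    def __init__(self, seed=1):
        self.x = seed

    def next_int(self):
        self.x = (self.A * self.x) % self.M
        return self.x

    def next_float(self):        # uniform in (0, 1)
        return self.next_int() / self.M

rng = GGL(seed=1)
# full period is m - 1 ~ 2.1e9: at 5e8 numbers per second it is
# exhausted in ~4 s, which is why such generators are obsolete
print([rng.next_int() for _ in range(3)])
```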

Lagged Fibonacci generator
x_n = x_{n-p} \circ x_{n-q} \bmod m, with 0 < p < q and \circ \in \{+, -, \times, \mathrm{XOR}\}. Initialization using a linear congruential generator. Very long period (large block of seeds), very fast; the underlying mathematics is more involved. See: http://beta.boost.org/doc/libs/1_42_0/libs/random/random-generators.html

Independent sampling
Known geometry: the fraction of uniformly sampled points in the unit square that fall inside the quarter circle converges to \pi/4.
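A minimal sketch of this classic estimate:

```python
import random

def pi_estimate(n, rng):
    """Fraction of uniform points in the unit square that fall inside
    the quarter circle x^2 + y^2 <= 1 converges to pi/4."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(pi_estimate(1_000_000, random.Random(1)))   # ~3.14
```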

Demonstration
[figure: log-log plot of the error |I - \langle MC \rangle| versus the number of steps N_{steps}, from 1 to 10^8, decaying as 1/\sqrt{N}]

Importance sampling
I = \int_0^\infty g(x) e^{-x}\,dx. Draw random numbers that are exponentially distributed, then
\hat{I}_m = \tfrac{1}{m} \left( g(x^{(1)}) + \cdots + g(x^{(m)}) \right).
How? Take u \in (0, 1] from a standard random number generator and set p = -\ln u, so that p \in [0, \infty) is exponentially distributed; the inverse transformation is needed.
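The inverse-transform trick above can be sketched as follows (the test integrand g(x) = x, with exact value 1, is my example):

```python
import random, math

def exp_importance(g, m, rng):
    """Estimate I = integral of g(x) exp(-x) over [0, inf) by sampling
    x = -ln(u), u uniform in (0, 1], which is exponentially distributed."""
    total = 0.0
    for _ in range(m):
        u = 1.0 - rng.random()        # in (0, 1], avoids log(0)
        total += g(-math.log(u))
    return total / m

# example: integral of x * exp(-x) over [0, inf) equals 1
print(exp_importance(lambda x: x, 200_000, random.Random(7)))   # ~1.0
```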

Importance sampling
\bar{f} = \tfrac{1}{V} \int f(x)\,dx = \tfrac{1}{V} \int \frac{f(x)}{p(x)} p(x)\,dx \approx \tfrac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}, with error \propto \sqrt{\mathrm{Var}(f/p)/N}.
Imagine the function f is sharply peaked; then the variance can be reduced by finding a p(x) that is close to f(x) and such that it is easy to generate random numbers according to p(x).

Change of variables
General: y = F(x) = \int_0^x f(u)\,du, so x = F^{-1}(y).
Example f(x) = x: y = F(x) = \int_0^x x'\,dx' = x^2/2, so x = \sqrt{2y}.
Exponential: p = -\ln(u).
Box-Muller (Gaussian): p_1 = R \cos\theta = \sqrt{-2 \ln u_1} \cos(2\pi u_2), p_2 = R \sin\theta = \sqrt{-2 \ln u_1} \sin(2\pi u_2).
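The Box-Muller formulas can be sketched and checked against the first two moments of the standard normal:

```python
import math, random

def box_muller(rng):
    """One Box-Muller draw: two independent standard normal variates
    from two independent uniforms."""
    u1 = 1.0 - rng.random()           # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

rng = random.Random(3)
samples = [z for _ in range(50_000) for z in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples) - mean ** 2
print(mean, var)    # close to 0 and 1
```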

Statistical mechanics
\langle Q \rangle = \tfrac{1}{Z} \mathrm{Tr}\left[ Q e^{-\beta H} \right] = \frac{\mathrm{Tr}[Q e^{-\beta H}]}{\mathrm{Tr}[e^{-\beta H}]}, with unnormalized weights W(x) = e^{-\beta E(x)}.
How do we get random variables that are distributed according to W(x)?

Markov chains and rejections
Small steps: a random walk through configuration space, measuring at each time step. What transition function? Rejection: stay at the same configuration, update the clock, and measure again.

Markov chains
A Markov chain is a sequence of random variables X_0, X_1, X_2, \ldots with the property that the future state depends on the past only via the present state:
P[X_{n+1} = x \mid X_n = x_n, \ldots, X_1 = x_1, X_0 = x_0] = P[X_{n+1} = x \mid X_n = x_n].
Transition function T(x, y) with \sum_y T(x, y) = 1 (conservation of probability).
Irreducible: it must be possible to reach any configuration x from any other configuration y in a finite number of steps.

Markov chains
Irreducible and aperiodic together imply convergence to the stationary distribution W. The transition kernel has one eigenvalue 1, while all other eigenvalues satisfy |\lambda_j| < 1, j = 2, \ldots, N. The second-largest eigenvalue determines the correlations in the Markov process.

Detailed balance
A transition rule T(x, y) leaves the target distribution W(x) invariant if \sum_x W(x) T(x, y) = W(y). This will certainly be the case if detailed balance is fulfilled:
W(x) T(x, y) = W(y) T(y, x).

Metropolis algorithm
In general we cannot directly construct a transition kernel T that fulfills detailed balance. Instead, use a proposal function P(x, y) and the acceptance factor
q = \min\left[ 1, \frac{W(y) P(y, x)}{W(x) P(x, y)} \right]:
go to the proposed configuration y with probability q, otherwise stay in x.
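A minimal sketch of the Metropolis rule for a continuous target (the Gaussian weight here is my example, not from the slides); note that on rejection the current configuration is counted again:

```python
import random, math

def metropolis(logw, x0, steps, delta, rng):
    """Metropolis with the symmetric proposal x -> x + uniform(-delta, delta);
    accept with probability min(1, W(y)/W(x)). Returns the visited samples."""
    x, lw = x0, logw(x0)
    chain = []
    for _ in range(steps):
        y = x + rng.uniform(-delta, delta)
        lwy = logw(y)
        # accept with probability min(1, exp(lwy - lw))
        if math.log(1.0 - rng.random()) < lwy - lw:
            x, lw = y, lwy
        chain.append(x)        # on rejection the old x is measured again
    return chain

# sample W(x) ~ exp(-x^2/2): sample mean ~ 0, sample variance ~ 1
rng = random.Random(11)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 200_000, 1.0, rng)
burn = chain[len(chain) // 5:]     # discard the first 20% (thermalization)
print(sum(burn) / len(burn))
```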

Thermalization
[figure: energy landscape with the initial configuration far from equilibrium]
Starting from an initial configuration, the chain must first relax toward the stationary distribution; the relaxation time is set by the second-largest eigenvalue \lambda_2 (the integrated autocorrelation time \tau_{int}). Discard the first 20% of the Markov steps! The Markov chain should be sufficiently long.

Recapitulation
N particles interacting via Lennard-Jones interactions. Weights W(X) = e^{-\beta E(X)} and W(Y) = e^{-\beta E(Y)}; symmetric proposal P(X \to Y) = P(Y \to X) = \tfrac{1}{N} (pick one of the N particles and displace it within a box of side \delta); acceptance
q_{X \to Y} = \min\left[ 1, e^{-\beta (E(Y) - E(X))} \right].

Autocorrelations
Markov chains trivially correlate measurements; the autocorrelations decay exponentially, with a characteristic integrated autocorrelation time. The number of independent measurements is reduced, but the central limit theorem still holds.

Binning analysis
How to get correct error bars? The Markov chain correlates measurements, but if the chain is long enough the configuration becomes independent of the initial one. Average the data in bins of length l; for sufficiently large l the bin averages are identically and independently distributed, and
\tau_{int}(l) = \frac{l\,\sigma^2(l)}{2 \sigma^2}
converges to the integrated autocorrelation time (\sigma^2 is the unbinned variance, \sigma^2(l) the variance of the bin averages). See the appendix of http://arxiv.org/abs/cond-mat/9707221
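A small sketch of the binning procedure on synthetic correlated data (the AR(1) toy chain is my choice); the naive error estimate grows with the bin level until it saturates at the true error:

```python
import random, math

def binning(data, max_level=10):
    """Successively average pairs of bins; the error estimate
    sqrt(var / M) at each level grows until it saturates."""
    errors, x = [], list(data)
    for _ in range(max_level):
        if len(x) < 2:
            break
        m = len(x)
        mean = sum(x) / m
        var = sum((v - mean) ** 2 for v in x) / (m - 1)
        errors.append(math.sqrt(var / m))   # error estimate at this level
        x = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(m // 2)]
    return errors

# correlated toy data: AR(1) chain x_{n+1} = a x_n + gaussian noise
rng = random.Random(5)
a, x, data = 0.9, 0.0, []
for _ in range(2 ** 16):
    x = a * x + rng.gauss(0.0, 1.0)
    data.append(x)
errs = binning(data)
print(errs[0], errs[-1])   # the saturated value exceeds the naive estimate
```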

Jackknife analysis
Compute the estimator R^{(0)} from the full data set and R^{(j)}, j = 1, \ldots, k, from the data with block j removed; let R_{av} = \tfrac{1}{k} \sum_j R^{(j)}. Then
\mathrm{Bias} = (k - 1)(R_{av} - R^{(0)}),
\delta R = \sqrt{ (k - 1) \left[ \tfrac{1}{k} \sum_j (R^{(j)})^2 - (R_{av})^2 \right] },
R = R^{(0)} - \mathrm{Bias}.
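The jackknife formulas above, sketched for a nonlinear estimator (the variance of Gaussian toy data, my example):

```python
import random, math

def jackknife(data, estimator, k=20):
    """Jackknife over k blocks: bias-corrected estimate and error bar."""
    n = len(data) // k
    r0 = estimator(data)                                 # full-data estimate
    rj = [estimator(data[:j * n] + data[(j + 1) * n:])   # block j removed
          for j in range(k)]
    rav = sum(rj) / k
    bias = (k - 1) * (rav - r0)
    err = math.sqrt((k - 1) * (sum(r * r for r in rj) / k - rav ** 2))
    return r0 - bias, err

# nonlinear estimator (the variance) on gaussian data: true value 1
rng = random.Random(9)
data = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
est = lambda d: sum(v * v for v in d) / len(d) - (sum(d) / len(d)) ** 2
value, err = jackknife(data, est)
print(value, err)
```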

2d Ising model
H = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j.
Select a random spin. Detailed balance W(x) T(x, y) = W(y) T(y, x) with
q = \min\left[ 1, \frac{W(y) P(y, x)}{W(x) P(x, y)} \right], \quad P(x, y) = P(y, x) = \frac{1}{L^2},
\frac{W(y)}{W(x)} = \exp\left[ -2 \beta J \sigma_i \sum_{\langle i,j \rangle} \sigma_j \right], \quad q = \min\left[ 1, e^{-2 \beta J \sigma_i \sum_{\langle i,j \rangle} \sigma_j} \right].
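The single-spin-flip Metropolis update above in a minimal sketch (lattice size, temperature, and the ordered start are my choices):

```python
import random, math

def ising_metropolis(L, beta, sweeps, rng):
    """Single-spin-flip Metropolis for the 2d Ising model
    H = -J sum_<ij> s_i s_j with J = 1 and periodic boundaries."""
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn        # energy change of flipping (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]
        # on rejection the configuration stays and is measured again
    return spins

rng = random.Random(13)
spins = ising_metropolis(8, 1.0, 200, rng)    # T = 1, well below T_c ~ 2.27
m = abs(sum(sum(row) for row in spins)) / 64
print(m)   # close to 1 deep in the ordered phase
```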

Critical slowing down
[figure: error on the magnetization versus 1/T; the error peaks near the critical point]
T_c = \frac{2}{\ln(1 + \sqrt{2})}. A single spin flip changes the magnetization only by \pm 2; near the transition the correlation length diverges and critical fluctuations dominate.

Improved estimator
\langle m^2 \rangle = \frac{1}{L^4} \sum_{i,j} \langle \sigma_i \sigma_j \rangle = \frac{\langle S(\mathrm{cluster}) \rangle}{L^2}; the domain size is proportional to the susceptibility. There exist more formal ways, and also other ways, to solve the problem of critical slowing down.

Cluster algorithm
Select like spins with probability p = 1 - \exp[-2 \beta J]; then flip the whole cluster. The update is always accepted (exercise: show that this is true). (Why is this not the same as flipping many spins at the same time?)

With c_1 (c_2) boundary bonds connecting the cluster to parallel spins before (after) the flip,
q_{X \to Y} = \min\left[ 1, \frac{e^{-2\beta J c_1} (1-p)^{c_2}}{e^{-2\beta J c_2} (1-p)^{c_1}} \right] = 1, \quad \text{since } 1 - p = e^{-2\beta J}.
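A minimal sketch of a single Wolff cluster update (lattice size and temperature are my choices); since the acceptance ratio is 1, the grown cluster is always flipped:

```python
import random, math

def wolff_step(spins, L, beta, rng):
    """One Wolff update for the 2d Ising model (J = 1): grow a cluster of
    like spins, adding each parallel neighbor with p = 1 - exp(-2*beta),
    then flip the whole cluster. Returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    i, j = rng.randrange(L), rng.randrange(L)
    seed_spin = spins[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in [((x + 1) % L, y), ((x - 1) % L, y),
                       (x, (y + 1) % L), (x, (y - 1) % L)]:
            if (nx, ny) not in cluster and spins[nx][ny] == seed_spin \
               and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:          # the flip is always accepted
        spins[x][y] = -spins[x][y]
    return len(cluster)

rng = random.Random(17)
L = 16
spins = [[1] * L for _ in range(L)]
sizes = [wolff_step(spins, L, 1.0, rng) for _ in range(100)]
# improved estimator: <m^2> = <S(cluster)> / L^2
print(sum(sizes) / len(sizes) / L ** 2)
```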

Phase transitions and critical phenomena
[figure: error on the magnetization versus 1/T for the local and the Wolff algorithm; the Wolff algorithm shows no error peak near the critical point]
[figure: average cluster size in the Wolff algorithm versus 1/T]

Phase transitions and critical phenomena
Critical exponents have to be calculated numerically. P.S. you need to know about finite-size scaling.

Wang-Landau sampling
Cluster algorithms do not help near first-order transitions.

Wang-Landau sampling
Z = \sum_E \rho(E) e^{-E/kT}; the density of states \rho(E) is, however, unknown. The probability minimum between coexisting phases disappears if we sample with p(E) \propto 1/\rho(E). Algorithm:
- start with \rho(E) = 1 and f = e
- repeat:
  - reset the histogram, H(E) = 0
  - perform a simulation by picking a random site and doing Metropolis updates with p(E) \propto 1/\rho(E); after each step, H(E) \to H(E) + 1 and \rho(E) \to \rho(E) \cdot f
  - when the histogram is flat, reduce f \to \sqrt{f}
- stop when f < 1 + 10^{-8}
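A minimal sketch of this loop on a toy model where \rho(E) is known exactly: N non-interacting spins with E defined as the number of up spins, so \rho(E) is the binomial coefficient C(N, E). The flatness criterion, sweep length, and the looser stopping threshold are my choices for speed:

```python
import random, math

def wang_landau(n_spins, flatness=0.8, rng=None):
    """Wang-Landau estimate of ln rho(E) for N non-interacting spins,
    with E = number of up spins (exact rho(E) = binomial(N, E)).
    Sampling with p(E) ~ 1/rho(E) flattens the energy histogram."""
    rng = rng or random.Random(0)
    ln_rho = [0.0] * (n_spins + 1)      # work with ln rho to avoid overflow
    spins = [0] * n_spins
    e = 0
    ln_f = 1.0                          # start with f = e
    while ln_f > 1e-6:                  # looser than 1e-8, for speed
        hist = [0] * (n_spins + 1)
        for _ in range(200_000):
            i = rng.randrange(n_spins)  # pick a random site
            e_new = e + (1 if spins[i] == 0 else -1)
            # accept with probability min(1, rho(E) / rho(E_new))
            if math.log(1.0 - rng.random()) < ln_rho[e] - ln_rho[e_new]:
                spins[i] = 1 - spins[i]
                e = e_new
            ln_rho[e] += ln_f           # rho(E) -> rho(E) * f
            hist[e] += 1
            if min(hist) > flatness * (sum(hist) / len(hist)):
                break                   # histogram is flat
        ln_f /= 2.0                     # f -> sqrt(f)
    # normalize so that ln rho(0) = ln C(N, 0) = 0
    return [lr - ln_rho[0] for lr in ln_rho]

ln_rho = wang_landau(10)
print([round(v, 1) for v in ln_rho])   # close to ln of binomial coefficients
```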

References
J. S. Liu, Monte Carlo Strategies in Scientific Computing, Springer.
W. Krauth, Statistical Mechanics: Algorithms and Computations, Oxford University Press.
ETH Zurich lecture scripts: http://www.ifb.ethz.ch/education/introductioncomphys (and references therein).

Homework
Calculate (mean value only, no error analysis)
I = \int_0^6 \exp(-x/2)\,dx
by:
1. direct integration (analytically, and by Monte Carlo with exponentially distributed random numbers);
2. Markov chain Monte Carlo: choose a step size d (wisely) and update the current position according to the Metropolis algorithm by choosing a random step. Don't forget that in every step you can move to larger or smaller x values. Show that you satisfy detailed balance.
3. modification (advanced): suppose you start by moving to the right. If you accept the move, then always choose moving to the right in the next step. If you reject the move, then start moving to the left. So you keep moving in one direction until you have a rejection (bounce), then you change direction and keep moving in the new direction until there is another bounce, after which you change direction again. Does this Monte Carlo Markov chain produce the right answer?