TP 10: Importance Sampling - The Metropolis Algorithm - The Ising Model - The Jackknife Method


June, 200

1 The Canonical Ensemble

We consider physical systems which are in thermal contact with an environment. The environment is usually much larger than the physical system of interest, and as a consequence energy exchange between the two of them will not change the temperature of the environment. The environment is called a heat bath or heat reservoir. When the system reaches equilibrium with the heat bath, its temperature is given by the temperature of the heat bath. A system in equilibrium with a heat bath is described statistically by the canonical ensemble, in which the temperature is fixed. In contrast, an isolated system is described statistically by the microcanonical ensemble, in which the energy is fixed. Most systems in nature are not isolated but are in thermal contact with the environment.

It is a fundamental result of statistical mechanics that the probability of finding a system in equilibrium with a heat bath at temperature T in a microstate s with energy E_s is given by the Boltzmann distribution

    P_s = (1/Z) e^{-β E_s},   β = 1/(k_B T).   (1)

The normalization constant Z is the partition function. It is defined by

    Z = Σ_s e^{-β E_s}.   (2)

The sum is over all the microstates of the system with fixed N and V. The Helmholtz free energy F of a system is given by

    F = -k_B T ln Z.   (3)

In equilibrium the free energy is a minimum. All other thermodynamical quantities can be obtained from various derivatives of F. For example the internal energy U of the system, which is the expectation value of the energy, can be expressed in terms of F as follows:

    U = <E> = Σ_s E_s P_s = (1/Z) Σ_s E_s e^{-β E_s} = -∂(ln Z)/∂β = ∂(βF)/∂β.   (4)

The specific heat is given by

    C_v = ∂U/∂T.   (5)

2 Importance Sampling

In any Monte Carlo integration the numerical error is proportional to the standard deviation of the integrand and is inversely proportional to the square root of the number of samples. Thus in order to reduce the error we should either reduce

the variance or increase the number of samples. The first option is preferable since it does not require any extra computer time. Importance sampling allows us to reduce the standard deviation of the integrand, and hence the error, by sampling more often the important regions of the integral where the integrand is largest. Importance sampling also uses, in a crucial way, a nonuniform probability distribution.

Let us again consider the one-dimensional integral

    F = ∫_a^b dx f(x).   (6)

We introduce a probability distribution p(x) such that

    1 = ∫_a^b dx p(x).   (7)

We write the integral as

    F = ∫_a^b dx p(x) f(x)/p(x).   (8)

We evaluate this integral by sampling according to the probability distribution p(x). In other words we find a set of N random numbers x_i which are distributed according to p(x) and then approximate the integral by the sum

    F_N = (1/N) Σ_{i=1}^N f(x_i)/p(x_i).   (9)

The probability distribution p(x) is chosen such that the function f(x)/p(x) is slowly varying, which reduces the corresponding standard deviation.

3 The Metropolis Algorithm

The internal energy U = <E> can be put into the form

    <E> = Σ_s E_s e^{-β E_s} / Σ_s e^{-β E_s}.   (10)

Generally, given any physical quantity A, its expectation value <A> can be computed using a similar expression, viz.

    <A> = Σ_s A_s e^{-β E_s} / Σ_s e^{-β E_s}.   (11)

The number A_s is the value of A in the microstate s. In general the number of microstates N is very large. In any Monte Carlo simulation we can only generate a very small number n of the total number N of microstates. In other words <E> and <A> will be approximated by

    <E> ≈ <E>_n = Σ_{s=1}^n E_s e^{-β E_s} / Σ_{s=1}^n e^{-β E_s},   (12)

    <A> ≈ <A>_n = Σ_{s=1}^n A_s e^{-β E_s} / Σ_{s=1}^n e^{-β E_s}.   (13)

The calculation of <E>_n and <A>_n proceeds therefore by 1) choosing at random a microstate s, 2) computing E_s, A_s and e^{-β E_s}, then 3) evaluating the contribution of this microstate to the expectation values

<E>_n and <A>_n. This general Monte Carlo procedure is however highly inefficient, since the microstate s is very improbable and therefore its contribution to the expectation values is negligible. We need to use importance sampling. To this end we introduce a probability distribution p_s and rewrite the expectation value <A> as

    <A> = Σ_s (A_s/p_s) e^{-β E_s} p_s / Σ_s (1/p_s) e^{-β E_s} p_s.   (14)

Now we generate the microstates s with probabilities p_s and approximate <A> with <A>_n given by

    <A>_n = Σ_{s=1}^n (A_s/p_s) e^{-β E_s} / Σ_{s=1}^n (1/p_s) e^{-β E_s}.   (15)

This is importance sampling. The Metropolis algorithm is importance sampling with p_s given by the Boltzmann distribution, i.e.

    p_s = e^{-β E_s} / Σ_{s=1}^n e^{-β E_s}.   (16)

We then get the arithmetic average

    <A>_n = (1/n) Σ_{s=1}^n A_s.   (17)

The Metropolis algorithm in the case of spin systems such as the Ising model can be summarized as follows:

1) Choose an initial microstate.
2) Choose a spin at random and flip it.
3) Compute ΔE = E_trial - E_old. This is the change in the energy of the system due to the trial flip.
4) Check if ΔE <= 0. In this case the trial microstate is accepted.
5) Check if ΔE > 0. In this case compute the ratio of probabilities w = e^{-β ΔE}.
6) Choose a uniform random number r in the interval [0, 1].
7) Verify if r <= w. In this case the trial microstate is accepted; otherwise it is rejected.
8) Repeat steps 2) through 7) until all spins of the system are tested. This sweep counts as one unit of Monte Carlo time.
9) Repeat steps 2) through 8) a sufficient number of times until thermalization (i.e. equilibrium) is reached.
10) Compute the physical quantities of interest in n thermalized microstates. This can be done periodically in order to reduce correlation between the data points.
11) Compute averages.

We skip here the proof that this algorithm leads indeed to a sequence of states which are distributed according to the Boltzmann distribution. It is clear that steps 2) through 7) correspond to a transition probability between the microstates {s_i} and {s_j} given by

    W(i → j) = min(1, e^{-β ΔE}),   ΔE = E_j - E_i.   (18)
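The sweep of steps 2) through 8) can be sketched in Python for the two-dimensional Ising model of section 5 with periodic boundary conditions. This is a minimal illustration, not the notes' own code: `metropolis_sweep` is a hypothetical helper name, and the values ε = 1, H = 0, n = 8, β = 0.6 and the seed are arbitrary choices.

```python
import math
import random

def metropolis_sweep(s, beta, eps=1.0, H=0.0):
    """One unit of Monte Carlo time: n*n single-spin-flip trials (steps 2-8)."""
    n = len(s)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Sum of the 4 nearest neighbors on the torus (periodic boundaries).
        nn = (s[(i + 1) % n][j] + s[(i - 1) % n][j]
              + s[i][(j + 1) % n] + s[i][(j - 1) % n])
        # Step 3: energy change if spin (i, j) is flipped,
        # dE = 2 s_ij (eps * nn + H), from E = -eps sum s_i s_j - H sum s_i.
        dE = 2.0 * s[i][j] * (eps * nn + H)
        # Steps 4-7: accept if dE <= 0, else with probability w = e^{-beta dE}.
        if dE <= 0.0 or random.random() <= math.exp(-beta * dE):
            s[i][j] = -s[i][j]

# Steps 1 and 9: start from a random microstate and thermalize.
random.seed(0)
n, beta = 8, 0.6
s = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
for _ in range(200):
    metropolis_sweep(s, beta)
m = abs(sum(sum(row) for row in s)) / n**2   # magnetization per spin
```

In a real run, measurements of quantities such as m would then be taken every few sweeps and averaged, as in steps 10) and 11).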

Since only the ratio of probabilities w = e^{-β ΔE} is needed, it is not necessary to normalize the Boltzmann probability distribution. It is clear that this transition probability satisfies the detailed balance condition

    W(i → j) e^{-β E_i} = W(j → i) e^{-β E_j}.   (19)

Any other transition probability W which satisfies this condition will generate a sequence of states which are distributed according to the Boltzmann distribution. This can be shown by summing over the index j in the above equation and using Σ_j W(i → j) = 1. We get

    e^{-β E_i} = Σ_j W(j → i) e^{-β E_j}.   (20)

The Boltzmann distribution is an eigenvector of W. In other words, W leaves the equilibrium ensemble in equilibrium. As it turns out, this equation is also a sufficient condition for any ensemble to approach equilibrium.

4 The Heat-Bath Algorithm

The heat-bath algorithm is generally a less efficient algorithm than the Metropolis algorithm. The acceptance probability is given by

    W(i → j) = min(1, 1/(1 + e^{β ΔE})),   ΔE = E_j - E_i.   (21)

This acceptance probability also satisfies detailed balance for the Boltzmann probability distribution. In other words, the detailed balance condition, which is sufficient but not necessary for an ensemble to reach equilibrium, does not have a unique solution.

5 The Ising Model

We consider a d-dimensional periodic lattice with n points in every direction, so that there are N = n^d points in total in this lattice. At every point (lattice site) we put a spin variable s_i (i = 1, ..., N) which can take either the value +1 or -1. A configuration of this system of N spins is therefore specified by a set of numbers {s_i}. In the Ising model the energy of this system of N spins in the configuration {s_i} is given by

    E_I{s_i} = -Σ_{<ij>} ε_ij s_i s_j - H Σ_i s_i.   (22)

The parameter H is the external magnetic field. The symbol <ij> stands for nearest-neighbor spins. The sum over <ij> extends over γN/2 terms, where γ is the number of nearest neighbors. In 2, 3, 4 dimensions γ = 4, 6, 8. The parameter ε_ij is the interaction energy between the spins i and j. For isotropic interactions ε_ij = ε. For ε > 0 we obtain ferromagnetism, while for ε < 0 we obtain antiferromagnetism. We consider only ε > 0. The energy becomes

    E_I{s_i} = -ε Σ_{<ij>} s_i s_j - H Σ_i s_i.   (23)

The partition function is given by

    Z = Σ_{s_1} ... Σ_{s_N} e^{-β E_I{s_i}}.   (24)

There are 2^N terms in this sum, and β = 1/(k_B T).
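For a very small lattice the 2^N-term sum in (24) can be evaluated exactly by brute force, which is a useful check on Monte Carlo code. Below is a minimal Python sketch; `ising_energy` and `partition_function` are hypothetical helper names, and ε = 1, H = 0 are illustrative defaults. Each site is coupled to its right and down neighbors on the torus, which reproduces the γN/2 terms of (22).

```python
import itertools
import math

def ising_energy(s, n, eps=1.0, H=0.0):
    """Energy (23) on an n x n periodic lattice: every site is coupled to
    its right and down neighbors, giving gamma*N/2 = 2*n*n bond terms."""
    E = 0.0
    for i in range(n):
        for j in range(n):
            E -= eps * s[i][j] * (s[(i + 1) % n][j] + s[i][(j + 1) % n])
            E -= H * s[i][j]
    return E

def partition_function(n, beta, eps=1.0, H=0.0):
    """Brute-force partition function (24): sum over all 2^(n*n) configurations."""
    Z = 0.0
    for conf in itertools.product((-1, 1), repeat=n * n):
        s = [list(conf[i * n:(i + 1) * n]) for i in range(n)]
        Z += math.exp(-beta * ising_energy(s, n, eps, H))
    return Z

Z = partition_function(2, beta=0.5)
```

With this counting convention each nearest-neighbor pair on the 2 × 2 torus appears twice among the γN/2 = 8 terms, so the 16 configurations have energies -8 (twice), 0 (twelve times) and +8 (twice), giving Z = 2e^{8β} + 2e^{-8β} + 12.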

In d = 2 we have N = n^2 spins on the square lattice. The configuration {s_i} can be viewed as an n × n matrix. We impose periodic boundary conditions as follows. We consider an (n + 1) × (n + 1) matrix where the (n + 1)th row is identified with the first row and the (n + 1)th column is identified with the first column. The square lattice is therefore a torus.

6 The Jackknife Method

Any set of data points in a typical simulation will generally tend to contain correlations between the different points. In other words the data points will not be statistically independent, and as a consequence one cannot use the usual formula to compute the standard deviation of the mean (i.e. the probable error). The aim of the jackknife method is to estimate the error in a set of data points which contains correlations. This method works as follows.

1) We start with a sample of N measurements (data points) {X_1, ..., X_N}. We compute the mean

    <X> = (1/N) Σ_{i=1}^N X_i.   (25)

2) We throw out the data point X_j. We get a sample of N - 1 measurements {X_1, ..., X_{j-1}, X_{j+1}, ..., X_N}. This sample is called a bin. Since j = 1, ..., N, we have N bins. We compute the mean

    <X>_j = (1/(N - 1)) (Σ_{i=1}^N X_i - X_j).   (26)

3) The standard deviation of the mean will be estimated using the formula

    σ^2 = ((N - 1)/N) Σ_{j=1}^N (<X>_j - <X>)^2.   (27)

The jackknife error is σ. It is not difficult to show that

    <X>_j - <X> = (<X> - X_j)/(N - 1).   (28)

Thus

    σ^2 = (1/(N(N - 1))) Σ_{j=1}^N (X_j - <X>)^2 = σ_mean^2.   (29)

However, in general this identity will not hold, and the jackknife estimate of the error is more robust.

4) This can be generalized by throwing out z data points at a time from the set {X_1, ..., X_N}. We end up with n = N/z bins. We compute the mean <X>_j over bin j in the obvious way. The corresponding standard deviation will be given by

    σ_z^2 = ((n - 1)/n) Σ_{j=1}^n (<X>_j - <X>)^2.   (30)

5) The parameter z takes the values z = 1, ..., N. The error is the maximum of σ_z as a function of z.
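Steps 1) through 5) can be sketched in Python as follows; `jackknife_error` is a hypothetical helper name, the sample values are illustrative, and z is assumed to divide N evenly.

```python
import math

def jackknife_error(data, z=1):
    """Jackknife error of the mean, eqs. (25)-(30): throw out z consecutive
    data points at a time, giving n = N/z bins."""
    N = len(data)
    n = N // z                       # number of bins (z is assumed to divide N)
    total = sum(data)
    mean = total / N                 # eq. (25)
    sigma2 = 0.0
    for j in range(n):
        # Bin mean with points j*z, ..., (j+1)*z - 1 thrown out, cf. eq. (26).
        bin_mean = (total - sum(data[j * z:(j + 1) * z])) / (N - z)
        sigma2 += (bin_mean - mean) ** 2
    sigma2 *= (n - 1) / n            # eq. (27) for z = 1, eq. (30) in general
    return math.sqrt(sigma2)

data = [1.0, 2.0, 4.0, 7.0]
err = jackknife_error(data, z=1)
```

For z = 1 this reproduces the naive error of the mean exactly, as in eq. (29); for correlated data one scans z and, as in step 5), takes the maximum of σ_z as the quoted error.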