Computational Physics (6810): Session 13


Dick Furnstahl, Nuclear Theory Group, OSU Physics Department
April 14, 2017

6810 Endgame
Various recaps and followups
Random stuff (like RNGs :)
Session 13 stuff

6810 Computational Physics Endgame

April 21 is our last class period!
Projects are due at the end of Monday, May 1
- Create a Project subfolder of your BuckeyeBox folder
- Include codes, makefiles, plots, etc. (like for problem sets)
- Include an explanation of how it all fits together, in a separate file or in the comments of the codes
- It is recommended that you turn in something earlier than the due date, to get feedback while there's time for iteration
Session guides and homework will be accepted until 5pm on Thursday, April 27
- Remember that you can always upgrade a check
- Get in some version (even if incomplete) soon!

Which one is not really random?

Comments on probability distribution functions (PDFs)

Two common ones are uniform and Gaussian [think histograms!]
- Always normalized: ∫ p(x) dx = 1 (total probability is one)
- Uniform: p(x) = θ(x − a) θ(b − x) / (b − a)
- Gaussian is fully characterized by mean µ and variance σ²:
    p(x) = (1/√(2πσ²)) e^{−(x−µ)²/2σ²}
- Standard deviation σ is the square root of the variance
- About 2/3 of the numbers fall within 1 sigma, i.e., in [µ − σ, µ + σ]
- The PDG requires a 5σ signal to declare that a new particle is found

2D random walk and √N: how far away after N steps (labeled by i)?
    ⟨R²⟩_avg ≡ ⟨(Δx_1 + Δx_2 + ⋯ + Δx_N)² + (Δy_1 + Δy_2 + ⋯ + Δy_N)²⟩
- This is not one walk, but the average of many (note the ⟨ ⟩'s)
- Δx_i and Δy_i are chosen by p(x) with means ⟨Δx_i⟩ = ⟨Δy_i⟩ = 0
- Uncorrelated steps means ⟨Δx_i Δx_j⟩ = ⟨Δy_i Δy_j⟩ = 0 if i ≠ j
    ⇒ ⟨R²⟩_avg ≈ N ⟨Δx_i² + Δy_i²⟩ ≡ N ⟨r²⟩ ⇒ R_rms ≡ √⟨R²⟩_avg ≈ √N r_rms
- Applies generally to processes with combined random errors
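As a quick numerical check of the √N scaling, here is a minimal sketch (my own illustration, not the session code) that averages ⟨R²⟩ over many 2D walks; the uniform step pdf on [−1, 1] and the walk counts are arbitrary choices:

```cpp
// Check R_rms ~ sqrt(N) r_rms for a 2D walk with uncorrelated steps.
#include <cmath>
#include <iostream>
#include <random>

int main() {
  std::mt19937 rng(42);                        // seeded Mersenne twister
  std::uniform_real_distribution<double> step(-1.0, 1.0);
  const int num_walks = 10000;                 // average over many walks
  for (int N : {100, 400, 1600}) {             // quadruple N each time
    double R2_sum = 0.0;
    for (int walk = 0; walk < num_walks; ++walk) {
      double x = 0.0, y = 0.0;
      for (int i = 0; i < N; ++i) { x += step(rng); y += step(rng); }
      R2_sum += x * x + y * y;
    }
    double R_rms = std::sqrt(R2_sum / num_walks);
    // expect R_rms ~ sqrt(N * <r^2>) with <r^2> = 2 * (1/3) for these steps
    std::cout << "N = " << N << "  R_rms = " << R_rms
              << "  sqrt(N)*r_rms = " << std::sqrt(N * 2.0 / 3.0) << "\n";
  }
}
```

Quadrupling N should double R_rms, which is exactly the fluctuation scaling asked about on the next slide.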

Scaling of fluctuations with N

[Figure: histograms ("# in bin" vs. x on [0, 1]) of four uniform random number distributions, uniform 1 to uniform 4, with four times as many points in the upper histograms.]

Four times as many points on top. Scaling of the fluctuations?
    answer: proportional to √N, so twice as large on top
What is the relative scaling of the fluctuations with N?
    answer: √N / N = 1/√N
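One can test this scaling directly by histogramming N and 4N uniform deviates and measuring the rms deviation of the bin counts; in this sketch the number of bins (16) and the sample sizes are my illustrative choices:

```cpp
// Measure the rms fluctuation of histogram bin counts for N and 4N samples.
#include <array>
#include <cmath>
#include <iostream>
#include <random>

double rms_fluctuation(int N, std::mt19937& rng) {
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  std::array<int, 16> bins{};                        // zero-initialized
  for (int i = 0; i < N; ++i) bins[int(16 * uniform(rng))]++;
  double expected = N / 16.0, sum2 = 0.0;
  for (int count : bins) sum2 += (count - expected) * (count - expected);
  return std::sqrt(sum2 / 16.0);
}

int main() {
  std::mt19937 rng(13);
  int N = 100000;
  std::cout << "N:  " << rms_fluctuation(N, rng) << "\n"      // roughly sqrt(N/16)
            << "4N: " << rms_fluctuation(4 * N, rng) << "\n"; // about twice as big
}
```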

Things to watch for in Session 12 and beyond

For Monte Carlo methods, we need a set of random numbers
- uncorrelated means we can't predict x_{i+1} given x_1, x_2, ..., x_i
- it doesn't have to be uniform: the histogram of random numbers will approach the shape of the corresponding probability distribution function (PDF)
We'll use pseudo-random numbers from GSL random number generators (rng)
- trade-offs between period and speed
- see the Session 12 notes for tests (e.g., using your eye) and the Session 13 notes for pitfalls!
- you need to seed the rng, or you'll get the same numbers every run! (a minimal setup is sketched below)
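A minimal setup along these lines might look as follows; the generator choice and the time-based seed are examples, not course-prescribed settings:

```cpp
// Allocate, seed, and use a GSL random number generator.
#include <cstdio>
#include <ctime>
#include <gsl/gsl_rng.h>

int main() {
  gsl_rng_env_setup();                             // honor GSL_RNG_TYPE / GSL_RNG_SEED
  gsl_rng* rng = gsl_rng_alloc(gsl_rng_default);   // mt19937 unless overridden
  gsl_rng_set(rng, static_cast<unsigned long>(time(0)));  // seed it, else every run repeats!
  for (int i = 0; i < 5; ++i)
    printf("%f\n", gsl_rng_uniform(rng));          // uniform deviate in [0,1)
  gsl_rng_free(rng);
}
```

Link with -lgsl -lgslcblas. Passing a different type to gsl_rng_alloc (e.g., gsl_rng_taus) is how you trade period for speed.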

Things to watch for in Session 12 and beyond (cont.)

Random walks in two dimensions: N steps
- be able to derive the standard deviation of the distance, R ≈ √N r_rms
Remember how to find the average of a function in an interval:
    ⟨f⟩ = (1/(b − a)) ∫_a^b f(x) dx, e.g., ⟨r²⟩ = (1/(b − a)²) ∫_a^b dx ∫_a^b dy (x² + y²)
(Crude) approximation of an integral (sketched below):
    ⟨f⟩ = (1/(b − a)) ∫_a^b f(x) dx ≈ (1/N) Σ_{i=1}^N f(x_i) ⇒ I = ∫_a^b f(x) dx ≈ ((b − a)/N) Σ_{i=1}^N f(x_i)
Better approximation: sample according to an appropriate pdf w(x):
    I = ∫_a^b dx f(x) = ∫_a^b dx w(x) [f(x)/w(x)] ≡ ⟨f/w⟩_w ≈ (1/N) Σ_{i=1}^N f(x_i)/w(x_i)
Sample the integrand (i.e., choose points) where it is big or steep
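Here is a sketch of the crude approximation above for a simple test integrand (f(x) = x² on [0, 1], my choice; the exact answer is 1/3):

```cpp
// Crude Monte Carlo: I ~ (b-a)/N * sum f(x_i) with x_i uniform on [a,b].
#include <iostream>
#include <random>

int main() {
  const double a = 0.0, b = 1.0;
  const int N = 1000000;
  std::mt19937 rng(1);
  std::uniform_real_distribution<double> x_dist(a, b);
  double sum = 0.0;
  for (int i = 0; i < N; ++i) {
    double x = x_dist(rng);
    sum += x * x;                      // f(x) = x^2
  }
  std::cout << "MC estimate: " << (b - a) * sum / N
            << "   exact: " << 1.0 / 3.0 << "\n";  // int_0^1 x^2 dx = 1/3
}
```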

Averaging functions over probability distributions

⟨f(x)⟩ ≡ ∫ f(x) P(x) dx where ∫ P(x) dx = 1 (usually from −∞ to +∞)
generalizes to
⟨g(x, y)⟩ ≡ ∫∫ g(x, y) P_2(x, y) dx dy where ∫∫ P_2(x, y) dx dy = 1

⟨r²⟩ = ⟨x² + y²⟩ ?= ⟨x²⟩ + ⟨y²⟩ = 2⟨x²⟩
Here we have special cases of P_2 and g: P_2(x, y) = P(x)P(y) and g(x, y) = f(x) + f(y)
    ⟨f(x) + f(y)⟩ = ∫∫ [f(x) + f(y)] P(x) P(y) dx dy
                  = [∫ f(x)P(x) dx][∫ P(y) dy] + [∫ P(x) dx][∫ f(y)P(y) dy]
                  = ∫ f(x)P(x) dx + ∫ f(y)P(y) dy = ⟨f(x)⟩ + ⟨f(y)⟩
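A quick numerical sanity check of ⟨x² + y²⟩ = 2⟨x²⟩ for independent x and y; using a unit Gaussian for P(x) is an arbitrary illustrative choice:

```cpp
// Verify <x^2 + y^2> = 2<x^2> when P2(x,y) = P(x)P(y).
#include <iostream>
#include <random>

int main() {
  std::mt19937 rng(7);
  std::normal_distribution<double> P(0.0, 1.0);  // mean 0, sigma 1
  const int N = 1000000;
  double sum_x2 = 0.0, sum_r2 = 0.0;
  for (int i = 0; i < N; ++i) {
    double x = P(rng), y = P(rng);               // independent samples
    sum_x2 += x * x;
    sum_r2 += x * x + y * y;
  }
  std::cout << "<r^2> = " << sum_r2 / N          // expect ~2 for sigma = 1
            << "   2<x^2> = " << 2.0 * sum_x2 / N << "\n";
}
```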

Ways to do Importance Sampling

In general, identify a pdf P(x) in your integrand:
    ∫_a^b f(x) dx = ∫_a^b g(x) P(x) dx ≈ (1/N) Σ_{i=1}^N g(x^(i))
where the points {x^(i)}, i = 1, ..., N, are sampled from P(x)
- Note that P(x) does not appear explicitly in the sum
- This is called Importance Sampling
- Pick a distribution P(x) that matches the integrand as well as possible (e.g., make g(x) as close to constant as possible)
Ways to determine the x's (a sampling sketch for option 1 follows this list):
1. Choose an explicit P(x) if you know how to sample it (e.g., a Gaussian distribution)
2. Determine P(x) adaptively, e.g., vegas in GSL
3. Build up a set of configurations {x^(i)} via a Markov chain (e.g., via the Metropolis algorithm or some variation)
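A sketch of option 1, as promised above: sample from an explicit pdf you know how to generate, here w(x) = e^{−x} via std::exponential_distribution, and average f/w. The test integral ∫_0^∞ e^{−x} cos x dx = 1/2 is my illustrative choice:

```cpp
// Importance sampling: x_i drawn from w(x) = e^{-x}, average f/w = cos(x).
#include <cmath>
#include <iostream>
#include <random>

int main() {
  std::mt19937 rng(3);
  std::exponential_distribution<double> w(1.0);  // pdf w(x) = e^{-x}, x >= 0
  const int N = 1000000;
  double sum = 0.0;
  for (int i = 0; i < N; ++i) sum += std::cos(w(rng));  // f(x)/w(x) = cos(x)
  std::cout << "estimate: " << sum / N << "   exact: 0.5\n";
}
```

Note that w(x) never appears explicitly in the sum, just as the slide says; it only determines where the points land.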

First pass at statistical mechanics basics

The state of our (approximated) system is specified by a configuration
- For example, for an N-particle quantum system, it could be where each particle is located (plus other quantum #'s)
- For the Ising model, it is whether each spin on the lattice (which could be in any number of dimensions) is up or down
- Our approximation to the system is such that specifying a configuration takes a finite amount of data, but the number of possible configurations can still be (and usually is!) enormous
The partition function tells us what we need for thermodynamics
- E.g., the expectation value of the energy or of an observable like the pressure or magnetization
We can express it in two equivalent ways (canonical ensemble here):
    Z = Σ_i e^{−E^(i)/k_B T} = Σ_{E_j} (# of E_j's) e^{−E_j/k_B T} ≡ Σ_{E_j} Ω(E_j) e^{−E_j/k_B T}
- The first sum is over all configurations, labeled by i
- The second sum is over all energies, labeled by j

Example of one-dimensional Ising model with N spins

Specify a configuration by whether each of N spins (labeled 0, 1, 2, 3, 4, 5, ..., N−1) is up or down.
Suppose the Hamiltonian specifies the energy as H = −J Σ_{⟨ij⟩} S_i S_j
- Here J is a constant with dimensions of energy
- ⟨ij⟩ means all adjacent spins ("nearest neighbors")
- S_i = +1 for ↑ and S_i = −1 for ↓ (e.g., so S_i S_j = +1 if both are down)
Consider N = 3 with periodic boundary conditions: states and energies

config. i   state   energy (with J = 1)
0           ↑↑↑     E^(0) = −(+1 + 1 + 1) = −3
1           ↑↑↓     E^(1) = −(+1 − 1 − 1) = +1
2           ↑↓↑     E^(2) = −(−1 − 1 + 1) = +1
3           ↑↓↓     E^(3) = −(−1 + 1 − 1) = +1
4           ↓↑↑     E^(4) = −(−1 + 1 − 1) = +1
5           ↓↑↓     E^(5) = −(−1 − 1 + 1) = +1
6           ↓↓↑     E^(6) = −(+1 − 1 − 1) = +1
7           ↓↓↓     E^(7) = −(+1 + 1 + 1) = −3

There are 2³ = 8 different configurations, but E_0 = −3 and E_1 = +1 are the only two choices for E_j, with Ω(−3) = 2 and Ω(+1) = 6. Find Z by summing over configurations i or energies E_j: either way,
    Z = 2 e^{3/k_B T} + 6 e^{−1/k_B T}
(a brute-force check is sketched below)
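The table can be checked by brute force: this sketch (mine, not the session code) loops over all 2^N bit patterns, computes each energy, and accumulates Z; its internal configuration numbering need not match the table's:

```cpp
// Enumerate all configurations of the periodic 1D Ising chain (J = 1, k_B = 1).
#include <cmath>
#include <iostream>

int main() {
  const int N = 3;
  const double T = 1.0;                        // temperature in units of J/k_B
  double Z = 0.0;
  for (int config = 0; config < (1 << N); ++config) {
    int E = 0;
    for (int i = 0; i < N; ++i) {
      int s_i = (config >> i & 1) ? 1 : -1;             // spin i
      int s_j = (config >> ((i + 1) % N) & 1) ? 1 : -1; // periodic neighbor
      E -= s_i * s_j;                                   // H = -J sum s_i s_j
    }
    std::cout << "config " << config << "  E = " << E << "\n";
    Z += std::exp(-E / T);
  }
  std::cout << "Z = " << Z << "   (expect 2 e^{3/T} + 6 e^{-1/T})\n";
}
```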

More statistical physics in a nutshell (I): Minimizing the free energy (not the energy!)

Return to the partition function and the probability P(E) of finding energy E:
    Z = Σ_{all i} e^{−E^(i)/k_B T} = Σ_{E_j} (# of E_j's) e^{−E_j/k_B T} ≡ Σ_{E_j} Ω(E_j) e^{−E_j/k_B T}
Here P(E) ∝ Ω(E) e^{−E/k_B T}. The system will maximize P(E). How?
Move Ω(E) to the exponent using Ω = e^{ln Ω}:
    P(E) ∝ e^{−(E − TS)/k_B T} with S ≡ k_B ln Ω(E)
Since e^{−α} is maximized when α is minimized, what quantity is minimized at fixed T? The free energy E − TS.
Note the tradeoff between E and TS!

More statistical physics in a nutshell (II): Temperature and entropy

Consider two subsystems with energies E and E′ and fixed E_tot = E + E′
- They exchange energy back and forth and reach equilibrium. Meaning what? Static?
Define Ω(E) and Ω′(E′) to be the number of configurations ("microstates") with E and E′. How many total ways?
    Ω(E) · Ω′(E′) = Ω(E) Ω′(E_tot − E)
Basic principle: any accessible state (one satisfying E + E′ = E_tot) is equally likely
    ⇒ the probability of a particular E is the total number of states with that E divided by the grand total
So maximize Ω(E)Ω′(E′) to maximize the probability ⇒ why is this equilibrium?
Because Ω(E) = e^{S(E)/k_B}, maximize S(E) + S′(E′). The 2nd law of thermodynamics?
Set the derivative to zero and use the chain rule with dE′/dE = −1 (why?):
    d/dE [S(E) + S′(E′)] = 0 = dS(E)/dE + (dS′(E′)/dE′)(dE′/dE)
So equilibrium means dS(E)/dE = dS′(E′)/dE′. We use this to define the temperature: 1/T ≡ dS(E)/dE!

More statistical physics in a nutshell (III): Probability and Boltzmann factors

Find the probability P(E) of energy E for one of two subsystems that is very much smaller than the other, so E ≪ E_tot
The ratio of probabilities is the ratio of total numbers of possible states:
    P(E_1)/P(E_2) = [Ω(E_1) Ω′(E_tot − E_1)] / [Ω(E_2) Ω′(E_tot − E_2)] = [Ω(E_1) e^{S′(E_tot − E_1)/k_B}] / [Ω(E_2) e^{S′(E_tot − E_2)/k_B}]
Now Taylor expand (of course!):
    S′(E_tot − E_1) ≈ S′(E_tot) − E_1 (dS′/dE)|_{E_tot}
But dS′/dE = 1/T, and the e^{S′(E_tot)/k_B} factors cancel, leaving
    P(E_1)/P(E_2) ≈ [Ω(E_1) e^{−E_1/k_B T}] / [Ω(E_2) e^{−E_2/k_B T}]
So P(E) ∝ Ω(E) e^{−E/k_B T}. How to normalize P(E)? Require Σ_i P(E_i) = 1:
    ⇒ P(E) = (1/Z) Ω(E) e^{−E/k_B T}
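The exact curves in the figure below follow from P(E) = (1/Z) Ω(E) e^{−E/k_B T}. This sketch assumes the standard domain-wall counting for a periodic N-spin chain, which the slides do not spell out: k unsatisfied bonds (k even) gives E = −N + 2k and Ω(E) = 2 C(N, k), with J = 1 and k_B = 1:

```cpp
// Exact energy distribution P(E) for the periodic 1D Ising chain.
#include <cmath>
#include <iostream>

double binomial(int n, int k) {          // C(n, k) evaluated as a double
  double result = 1.0;
  for (int i = 1; i <= k; ++i) result *= double(n - k + i) / i;
  return result;
}

int main() {
  const int N = 20;
  const double T = 1.0;                  // temperature in units of J/k_B
  double Z = 0.0;
  for (int k = 0; k <= N; k += 2)        // k = number of domain walls (even)
    Z += 2.0 * binomial(N, k) * std::exp((N - 2 * k) / T);
  for (int k = 0; k <= N; k += 2) {
    int E = -N + 2 * k;
    double P = 2.0 * binomial(N, k) * std::exp(-E / T) / Z;
    std::cout << "E = " << E << "   P(E) = " << P << "\n";
  }
}
```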

Example of one-dimensional Ising model with 20 spins

[Figure: energy distributions P(E) vs. energy E (from −20 to +20) for the 20-spin chain. Curves: random sampling, exact T=10, Metropolis T=10, exact T=1.0, Metropolis T=1.0, and Metropolis small T=1.0.]
Two-D Ising model: Equilibration and cooling

[Figure: 2D ferromagnetic Ising model, energy vs. time using Monte Carlo; energy E (roughly −750 to −200) vs. time t (0 to 1000) for three trials at kT = 1.0.]
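An energy-vs-time plot like this can be produced with a minimal single-spin-flip Metropolis loop. The sketch below is a generic implementation with illustrative parameters (L = 32, kT = 1.0, a random "hot" start), not the session's actual code:

```cpp
// Metropolis for the L x L ferromagnetic Ising model (J = 1, periodic).
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
  const int L = 32;
  const double kT = 1.0;
  std::mt19937 rng(17);
  std::uniform_int_distribution<int> site(0, L - 1);
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  std::vector<std::vector<int>> spin(L, std::vector<int>(L));
  for (auto& row : spin)                           // random (hot) start
    for (auto& s : row) s = (uniform(rng) < 0.5) ? 1 : -1;

  for (int t = 0; t < 1000; ++t) {                 // one sweep = L*L trial flips
    for (int trial = 0; trial < L * L; ++trial) {
      int i = site(rng), j = site(rng);
      int neighbors = spin[(i + 1) % L][j] + spin[(i + L - 1) % L][j]
                    + spin[i][(j + 1) % L] + spin[i][(j + L - 1) % L];
      double dE = 2.0 * spin[i][j] * neighbors;    // energy change if flipped
      if (dE <= 0.0 || uniform(rng) < std::exp(-dE / kT))
        spin[i][j] = -spin[i][j];                  // Metropolis accept
    }
    if (t % 100 == 0) {                            // total energy, each bond once
      double E = 0.0;
      for (int i = 0; i < L; ++i)
        for (int j = 0; j < L; ++j)
          E -= spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L]);
      std::cout << "t = " << t << "   E = " << E << "\n";
    }
  }
}
```

Running several times with different seeds gives independent "trials" like those in the figure, all relaxing toward the same equilibrium energy.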