
Markov Processes

Stochastic process: movement through a series of well-defined states in a way that involves some element of randomness. For our purposes, the states are microstates in the governing ensemble.

Markov process: a stochastic process that has no memory. The selection of the next state depends only on the current state, and not on prior states. The process is fully defined by a set of transition probabilities \pi_{ij}, the probability of selecting state j next, given that the system is presently in state i. The transition-probability matrix \Pi collects all of the \pi_{ij}.

Transition-Probability Matrix

Example: a system with three states,

\Pi = \begin{pmatrix} \pi_{11} & \pi_{12} & \pi_{13} \\ \pi_{21} & \pi_{22} & \pi_{23} \\ \pi_{31} & \pi_{32} & \pi_{33} \end{pmatrix} = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.9 & 0.1 & 0.0 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}

Reading the first row: if in state 1, the system stays in state 1 with probability 0.1, and moves to state 3 with probability 0.4. The system never goes to state 3 from state 2 (\pi_{23} = 0).

Requirements of a transition-probability matrix:
- all probabilities are non-negative, and no greater than unity
- the sum of each row is unity
- the probability of staying in the present state may be non-zero
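These requirements are easy to verify numerically; a minimal Python sketch (my own, not part of the lecture), using the example matrix above:

```python
import numpy as np

# The three-state transition-probability matrix from the example above.
P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])

assert np.all((P >= 0) & (P <= 1))      # probabilities lie in [0, 1]
assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to unity
```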

Distribution of State Occupancies

Consider the process of repeatedly moving from one state to the next, choosing each subsequent state according to \Pi, e.g.

1 \to 2 \to 2 \to 1 \to 3 \to \ldots

and histogram the occupancy number for each state; over a short chain one might find, say, n_1 = 3, n_2 = 5, n_3 = 4. After very many steps, a limiting distribution emerges. (The original slide links to an applet that demonstrates a Markov process and its approach to a limiting distribution.)
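The approach to the limiting distribution can be reproduced with a few lines of Python (a sketch standing in for the applet, using the example \Pi above):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])

state, counts = 0, np.zeros(3)
for _ in range(100_000):
    state = rng.choice(3, p=P[state])   # next state drawn from the row of the current state
    counts[state] += 1

print(counts / counts.sum())            # -> approx. [0.409, 0.318, 0.273]
```

The occupancy fractions approach (9/22, 7/22, 6/22), the stationary distribution found analytically below.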

The Limiting Distribution (1)

Consider the product of \Pi with itself. The elements of \Pi^2 are

\pi^{(2)}_{ij} = \sum_k \pi_{ik}\,\pi_{kj}

i.e., each element collects all ways of going from state i to state j in two steps. For example,

\pi^{(2)}_{31} = \pi_{31}\pi_{11} + \pi_{32}\pi_{21} + \pi_{33}\pi_{31}

is the probability of going from state 3 to state 1 in two steps. In general, \Pi^n is the n-step transition-probability matrix, whose element \pi^{(n)}_{ij} is the probability of going from state i to state j in exactly n steps:

\Pi^n = \begin{pmatrix} \pi^{(n)}_{11} & \pi^{(n)}_{12} & \pi^{(n)}_{13} \\ \pi^{(n)}_{21} & \pi^{(n)}_{22} & \pi^{(n)}_{23} \\ \pi^{(n)}_{31} & \pi^{(n)}_{32} & \pi^{(n)}_{33} \end{pmatrix}
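For the example matrix, the two-step probabilities work out directly; a brief numerical check (again my own sketch):

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])

P2 = P @ P       # two-step transition probabilities, pi^(2)_ij = sum_k pi_ik * pi_kj
print(P2[2, 0])  # state 3 -> state 1 in two steps: 0.3*0.1 + 0.3*0.9 + 0.4*0.3 = 0.42
print(np.linalg.matrix_power(P, 10))   # the n-step matrix for n = 10
```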

The Limiting Distribution (2)

Define \pi^{(0)}_i as a unit state vector:

\pi^{(0)}_1 = (1\ 0\ 0), \qquad \pi^{(0)}_2 = (0\ 1\ 0), \qquad \pi^{(0)}_3 = (0\ 0\ 1)

Then \pi^{(n)}_i = \pi^{(0)}_i \Pi^n = (\pi^{(n)}_{i1}\ \pi^{(n)}_{i2}\ \pi^{(n)}_{i3}) is a vector of probabilities for ending at each state after n steps, if beginning at state i. The limiting distribution corresponds to n \to \infty, and is independent of the initial state:

\pi = \lim_{n \to \infty} \pi^{(0)}_i \Pi^n = (\pi_1\ \pi_2\ \pi_3)
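A quick numerical illustration of the independence from the starting state (sketch, same example matrix):

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])

for i in range(3):
    e_i = np.eye(3)[i]                 # unit state vector pi_i^(0)
    print(e_i @ matrix_power(P, 50))   # the same limiting vector for every starting state
```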

The Limiting Distribution (3)

Stationary property of \pi:

\pi = \lim_{n\to\infty} \pi^{(0)}_i \Pi^n = \left( \lim_{n\to\infty} \pi^{(0)}_i \Pi^n \right) \Pi = \pi\,\Pi

so \pi is a left eigenvector of \Pi with unit eigenvalue; such an eigenvector is guaranteed to exist for matrices with rows that each sum to unity. The equation for the elements of the limiting distribution is

\pi_i = \sum_j \pi_j\,\pi_{ji}

e.g., for the example matrix

\Pi = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.9 & 0.1 & 0.0 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}

\pi_1 = 0.1\,\pi_1 + 0.9\,\pi_2 + 0.3\,\pi_3
\pi_2 = 0.5\,\pi_1 + 0.1\,\pi_2 + 0.3\,\pi_3
\pi_3 = 0.4\,\pi_1 + 0.0\,\pi_2 + 0.4\,\pi_3
\pi_1 + \pi_2 + \pi_3 = 1

The three eigenvector equations are not independent, so the normalization condition is needed to close the system.
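Solving this small system, or asking numpy for the left eigenvector with unit eigenvalue, gives \pi = (9/22, 7/22, 6/22), matching the simulated occupancies above; a sketch:

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])

evals, evecs = np.linalg.eig(P.T)    # left eigenvectors of P = right eigenvectors of P^T
k = np.argmin(np.abs(evals - 1.0))   # locate the unit eigenvalue
pi = np.real(evecs[:, k])
pi /= pi.sum()                       # impose pi_1 + pi_2 + pi_3 = 1
print(pi)                            # -> [0.409, 0.318, 0.273] = (9/22, 7/22, 6/22)
```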

Detailed Balance

The eigenvector equation \pi_i = \sum_j \pi_j \pi_{ji} defines the limiting distribution. A sufficient (but not necessary) condition for a solution is

\pi_i\,\pi_{ij} = \pi_j\,\pi_{ji}

known as detailed balance, or microscopic reversibility. It is sufficient because

\sum_j \pi_j\,\pi_{ji} = \sum_j \pi_i\,\pi_{ij} = \pi_i \sum_j \pi_{ij} = \pi_i

For a given \Pi, it is not always possible to satisfy detailed balance; e.g., for the example

\Pi = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.9 & 0.1 & 0.0 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}

the product \pi_2 \pi_{23} is zero (since \pi_{23} = 0) while \pi_3 \pi_{32} is not, so detailed balance fails even though a limiting distribution exists.
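A small helper for checking detailed balance numerically (the function name is my own, not from the lecture):

```python
import numpy as np

def satisfies_detailed_balance(pi, P, tol=1e-9):
    """True if pi_i * pi_ij == pi_j * pi_ji for every pair of states (i, j)."""
    F = pi[:, None] * P                # probability flows F_ij = pi_i * pi_ij
    return np.allclose(F, F.T, atol=tol)

pi = np.array([9/22, 7/22, 6/22])      # stationary distribution of the example Pi
P = np.array([[0.1, 0.5, 0.4],
              [0.9, 0.1, 0.0],
              [0.3, 0.3, 0.4]])
print(satisfies_detailed_balance(pi, P))  # False: pi_2*pi_23 = 0 but pi_3*pi_32 > 0
```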

Deriving Transition Probabilities

Turn the problem around... given a desired \pi, what transition probabilities will yield it as a limiting distribution? Construct transition probabilities to satisfy detailed balance. Many choices are possible; e.g., for \pi = (0.25\ 0.50\ 0.25) (try them out):

Least efficient:
\Pi = \begin{pmatrix} 0.97 & 0.02 & 0.01 \\ 0.01 & 0.98 & 0.01 \\ 0.01 & 0.02 & 0.97 \end{pmatrix}

Most efficient:
\Pi = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{pmatrix}

Barker (entries rounded to two decimals):
\Pi = \begin{pmatrix} 0.42 & 0.33 & 0.25 \\ 0.17 & 0.66 & 0.17 \\ 0.25 & 0.33 & 0.42 \end{pmatrix}

Metropolis:
\Pi = \begin{pmatrix} 0.0 & 0.5 & 0.5 \\ 0.25 & 0.5 & 0.25 \\ 0.5 & 0.5 & 0.0 \end{pmatrix}
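All four constructions can be checked against \pi = (0.25, 0.50, 0.25); a sketch, writing the Barker entries as exact fractions since the decimals above are rounded:

```python
import numpy as np

pi = np.array([0.25, 0.50, 0.25])
candidates = {
    "least efficient": [[0.97, 0.02, 0.01], [0.01, 0.98, 0.01], [0.01, 0.02, 0.97]],
    "most efficient":  [[0.0, 1.0, 0.0],    [0.5, 0.0, 0.5],    [0.0, 1.0, 0.0]],
    "Barker":          [[5/12, 1/3, 1/4],   [1/6, 2/3, 1/6],    [1/4, 1/3, 5/12]],
    "Metropolis":      [[0.0, 0.5, 0.5],    [0.25, 0.5, 0.25],  [0.5, 0.5, 0.0]],
}
for name, P in candidates.items():
    P = np.array(P)
    F = pi[:, None] * P
    # stationarity (pi @ P == pi) and detailed balance both hold for every choice
    print(name, np.allclose(pi @ P, pi), np.allclose(F, F.T))
```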

Metropolis Algorithm (1)

Prescribes transition probabilities that satisfy detailed balance, given a desired limiting distribution. The recipe, from a state i:
- with probability \tau_{ij}, choose a trial state j for the move (note: \tau_{ij} = \tau_{ji})
- if \pi_j > \pi_i, accept j as the new state
- otherwise, accept state j with probability \pi_j/\pi_i: generate a random number R on (0,1), and accept if R < \pi_j/\pi_i
- if not accepting j as the new state, take the present state as the next one in the Markov chain (so \pi_{ii} \neq 0)

A sketch of one such update appears below. Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, J. Chem. Phys. 21, 1087 (1953).
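The recipe translates directly into code; a minimal sketch for a discrete state space, where the `propose` function stands in for the symmetric underlying matrix \tau (names are my own):

```python
import random

def metropolis_step(i, pi, propose):
    """One Metropolis update: trial j from a symmetric move, accepted with min(1, pi_j/pi_i)."""
    j = propose(i)
    if pi[j] >= pi[i] or random.random() < pi[j] / pi[i]:
        return j            # accept the trial state
    return i                # reject: the present state is the next one in the chain

random.seed(0)
pi = [0.25, 0.50, 0.25]
propose = lambda i: random.choice([j for j in range(3) if j != i])  # tau_ij = tau_ji = 1/2
state, counts = 0, [0, 0, 0]
for _ in range(100_000):
    state = metropolis_step(state, pi, propose)
    counts[state] += 1
print([c / 100_000 for c in counts])   # -> approx. [0.25, 0.50, 0.25]
```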

Metropolis Algorithm (2)

What are the transition probabilities for this algorithm? Without loss of generality, define i as the state of greater probability, \pi_i > \pi_j. Then

\pi_{ij} = \tau_{ij}\,(\pi_j/\pi_i)
\pi_{ji} = \tau_{ji}
\pi_{ii} = 1 - \sum_{j \neq i} \pi_{ij}

or, in general: \pi_{ij} = \tau_{ij}\,\min(1, \pi_j/\pi_i).

Do they obey detailed balance, \pi_i \pi_{ij} = \pi_j \pi_{ji}?

\pi_i\,\tau_{ij}\,(\pi_j/\pi_i) \stackrel{?}{=} \pi_j\,\tau_{ji} \quad\Longrightarrow\quad \tau_{ij}\,\pi_j = \tau_{ji}\,\pi_j

Yes, as long as the underlying matrix \mathrm{T} of the Markov chain is symmetric. This condition can be violated, but then the acceptance probabilities must be modified.
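One can also build the full transition matrix from a symmetric \tau and confirm the algebra above; a sketch with \tau choosing either of the other two states with probability 1/2:

```python
import numpy as np

pi = np.array([0.25, 0.50, 0.25])
tau = np.array([[0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])                     # symmetric underlying matrix

P = tau * np.minimum(1.0, pi[None, :] / pi[:, None])  # pi_ij = tau_ij * min(1, pi_j/pi_i)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))              # pi_ii = 1 - sum_{j != i} pi_ij
print(P)                  # reproduces the "Metropolis" matrix of the previous slide

F = pi[:, None] * P
print(np.allclose(F, F.T))                            # detailed balance holds: True
```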

Markov Chains and Importance Sampling (1)

Importance sampling specifies the desired limiting distribution, and we can use a Markov chain to generate quadrature points according to this distribution. Example: the mean-square distance from the origin over a region R inside the unit square, with s(x,y) = 1 inside R and s = 0 outside:

\langle r^2 \rangle = \frac{\int_{-1/2}^{+1/2} dx \int_{-1/2}^{+1/2} dy\,(x^2+y^2)\,s(x,y)}{\int_{-1/2}^{+1/2} dx \int_{-1/2}^{+1/2} dy\,s(x,y)} = \frac{\langle r^2 s \rangle_V}{\langle s \rangle_V}

Method 1: let \pi(x,y) = s(x,y)/q, with q a normalization constant. Then

\langle r^2 \rangle = \frac{\langle r^2 s/\pi \rangle_\pi}{\langle s/\pi \rangle_\pi} = \frac{\langle r^2 q \rangle_\pi}{\langle q \rangle_\pi} = \langle r^2 \rangle_\pi

i.e., simply average r^2 over points generated by Metropolis sampling of \pi.
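The slide defines R only through a figure; as a stand-in, the sketch below assumes R is the disk of radius 0.5 centered at the origin, for which \langle r^2 \rangle = 0.125 exactly. Method 1 then amounts to accepting every trial move that stays inside R:

```python
import random

def s(x, y):                         # indicator of the assumed region R (disk, radius 0.5)
    return x * x + y * y <= 0.25

random.seed(1)
x = y = 0.0                          # start inside R
dx, n, total = 0.1, 200_000, 0.0     # fixed trial-move size, independent of position
for _ in range(n):
    xt = x + random.uniform(-1.0, 1.0) * dx
    yt = y + random.uniform(-1.0, 1.0) * dx
    if s(xt, yt):                    # Method 1: accept all moves that stay in R
        x, y = xt, yt
    total += x * x + y * y           # a rejected move re-counts the present point
print(total / n)                     # -> approx. 0.125 for the assumed disk
```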

Markov Chains and Importance Sampling (2)

Example (cont'd). Method 2: let \pi(x,y) = r^2 s(x,y)/q. Then

\langle r^2 \rangle = \frac{\langle r^2 s/\pi \rangle_\pi}{\langle s/\pi \rangle_\pi} = \frac{\langle q \rangle_\pi}{\langle q/r^2 \rangle_\pi} = \frac{1}{\langle r^{-2} \rangle_\pi}

Algorithm and transition probabilities: given a point in the region R, generate a new point in the vicinity of the given point,

x_{new} = x + r(-1,+1)\,\delta x, \qquad y_{new} = y + r(-1,+1)\,\delta y

where r(-1,+1) is a random number uniform on (-1,+1), and accept with probability \min(1, \pi_{new}/\pi_{old}). Note that \pi_{new}/\pi_{old} = s_{new}/s_{old} for Method 1, since \pi = s/q. Hence:
- Method 1: accept all moves that stay in R
- Method 2: for moves staying in R, accept with probability \min\left(1, (r^2)_{new}/(r^2)_{old}\right)

The normalization constants cancel!
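Method 2, under the same assumed disk-shaped R, changes only the acceptance rule and the accumulated quantity:

```python
import random

def s(x, y):
    return x * x + y * y <= 0.25     # same assumed region R as before

random.seed(2)
x, y = 0.3, 0.0                      # start inside R, away from r = 0
dx, n, total = 0.1, 200_000, 0.0
for _ in range(n):
    xt = x + random.uniform(-1.0, 1.0) * dx
    yt = y + random.uniform(-1.0, 1.0) * dx
    if s(xt, yt) and random.random() < min(1.0, (xt*xt + yt*yt) / (x*x + y*y)):
        x, y = xt, yt                # accept with min(1, (r^2)_new / (r^2)_old)
    total += 1.0 / (x * x + y * y)   # accumulate r^-2 under pi ~ r^2 s
print(n / total)                     # <r^2> = 1/<r^-2>_pi -> approx. 0.125
```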

Markov Chains and Importance Sampling (3)

A subtle but important point: the underlying matrix \mathrm{T} is set by the trial-move algorithm (select the new point uniformly in the vicinity of the present point). It is important that new points are selected in a volume that is independent of the present position. If we simply reject configurations outside R, without taking the original point as the new one, then the trial sampling regions effectively differ in size from point to point, and the underlying matrix becomes asymmetric. [The original slide illustrates this with a figure of different-sized trial sampling regions.]

Evaluating Areas with Metropolis Sampling

What if we want the absolute area of the region R, not an average over it? That is,

A = \int_{-1/2}^{+1/2} dx \int_{-1/2}^{+1/2} dy\, s(x,y)

With \pi(x,y) = s(x,y)/q, the normalization constant is q = A: we would need to know q, but this is exactly the integral we are trying to solve! Absolute integrals are difficult to obtain by MC; the problem relates to free-energy evaluation.

Summary

- A Markov process is a stochastic process with no memory.
- Full specification of the process is given by a matrix of transition probabilities \Pi.
- A distribution of states is generated by repeatedly stepping from one state to another according to \Pi.
- A desired limiting distribution can be used to construct transition probabilities through detailed balance.
- Many different \Pi matrices can be constructed to satisfy detailed balance.
- The Metropolis algorithm is one such choice, widely used in MC simulation.
- Markov-chain Monte Carlo is good for evaluating averages, but not absolute integrals.