Doing Physics with Random Numbers


Doing Physics with Random Numbers
Andrew J. Schultz
Department of Chemical and Biological Engineering
University at Buffalo, The State University of New York

Concepts
- Random numbers can be used to measure things that aren't so random
- Uncertainty in averages can be estimated from the measurements themselves
- A Markov process can be used to sample from a probability distribution
- Physical properties can be computed using a Markov process

Monte Carlo Simulation: Buffon's Needle
- Consider a grid of equally spaced lines, separated by a distance d
- Take a needle of length l, and repeatedly toss it at random on the grid
- Record the number of hits (times that the needle touches a line) and misses (times that it doesn't)
- OK, do it! (Buffon's toothpick)

Monte Carlo Simulation: Buffon's Needle
- Consider a grid of equally spaced lines, separated by a distance d
- Take a needle of length l, and repeatedly toss it at random on the grid
- Record the number of hits (times that the needle touches a line) and misses (times that it doesn't)
- Buffon showed that the probability of a hit is P = 2l / (πd)
- This experiment provides a means to evaluate π: π = 2l / (Pd)
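The needle-tossing experiment above can be sketched in a few lines of code. This is a minimal illustration, not from the original slides: it samples the needle's center-to-line distance and its angle directly, counts hits, and inverts P = 2l / (πd) to recover π. (Note that sampling the angle uniformly on [0, π/2] technically presupposes π; this is conventional in demonstrations of the method.)

```python
import math
import random

def buffon_pi(n_tosses, l=1.0, d=2.0):
    """Estimate pi with Buffon's needle: length l, line spacing d (l <= d)."""
    hits = 0
    for _ in range(n_tosses):
        # Distance from the needle's center to the nearest line, and its angle
        x = random.uniform(0.0, d / 2.0)
        theta = random.uniform(0.0, math.pi / 2.0)
        # The needle crosses a line when its half-projection spans that distance
        if x <= (l / 2.0) * math.cos(theta):
            hits += 1
    p_hit = hits / n_tosses
    return 2.0 * l / (p_hit * d)  # invert P = 2l / (pi d)

random.seed(42)
print(buffon_pi(500_000))  # approaches 3.14159... as n_tosses grows
```

With a few hundred thousand tosses the estimate is typically within a few parts per thousand of π, which also previews the next slide: the scatter in repeated runs is itself measurable.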

Quantifying Uncertainty
- Averages <m> obtained by a stochastic process are known to follow a Gaussian distribution
- Any given <m> will represent a sample from this distribution
- The width of the distribution is related to the standard deviation of the same set of numbers used to compute <m>: σ²_<m> = (1/n) σ²_m
- [Figure: Gaussian distribution p(<m>) vs. <m>, with width σ_<m> and a single sample marked]
- Use this to quantify uncertainty in <m>
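The relation σ²_<m> = σ²_m / n can be applied directly to a list of measurements. A small sketch (not part of the original slides): compute the mean, the sample standard deviation of the individual measurements, and from it the standard error of the mean.

```python
import math
import random

def mean_and_uncertainty(samples):
    """Return <m> and its estimated uncertainty, sigma_<m> = sigma_m / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance of the individual measurements (Bessel-corrected)
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)  # sigma^2_<m> = sigma^2_m / n

random.seed(0)
# Fake "measurements": Gaussian with true mean 5.0 and spread 2.0
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]
m, err = mean_and_uncertainty(data)
print(f"<m> = {m:.3f} +/- {err:.3f}")
```

For 10,000 samples with spread 2.0, the uncertainty in the mean comes out near 2.0 / √10000 = 0.02, so the estimated error is far smaller than the scatter of any individual measurement.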

The Weather
- Model the weather as having three states: Sunny, Cloudy, Rainy
- The weather tomorrow is related to the weather today
- For example, Cloudy today:
  10% chance it will be Cloudy again tomorrow
  50% chance it will be Sunny tomorrow
  40% chance it will be Rainy tomorrow
- Transition probabilities define the likelihood of the state of the weather tomorrow given the state of the weather today
- We might ask, knowing all (9) transition probabilities: what fraction of days are Sunny, Cloudy, and Rainy?

Markov Processes
- Random walk: movement through a series of well-defined states in a way that involves some element of randomness ("stochastic")
- Markov process: a random walk that has no memory
  selection of the next state depends only on the current state, not on prior states
  the process is fully defined by a set of transition probabilities p_ij
  p_ij = probability of selecting state j next, given that the system is presently in state i

Transition-Probability Matrix
Example: a system with three states (1 = Cloudy, 2 = Sunny, 3 = Rainy)

      | p_11 p_12 p_13 |   | 0.1  0.5  0.4 |
  Π = | p_21 p_22 p_23 | = | 0.9  0.1  0.0 |
      | p_31 p_32 p_33 |   | 0.3  0.3  0.4 |

- If Cloudy, will stay Cloudy tomorrow with probability 0.1
- If Cloudy, will be Rainy tomorrow with probability 0.4
- Never Rainy tomorrow if Sunny today
- Requirements of a transition-probability matrix:
  all probabilities non-negative, and no greater than unity
  the sum of each row is unity
  the probability of staying in the present state may be non-zero

Distribution of State Occupancies
- Consider the process of repeatedly moving from one state to the next, choosing each subsequent state according to Π:
  1 → 2 → 2 → 1 → 3 → 2 → 2 → 3 → 3 → 1 → 2 → 3 → etc.
- Histogram the occupancy number for each state:
  n_1 = 3, p_1 = 0.25 (Cloudy)
  n_2 = 5, p_2 ≈ 0.42 (Sunny)
  n_3 = 4, p_3 ≈ 0.33 (Rainy)
- After very many steps, a limiting distribution emerges
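The approach to a limiting distribution is easy to demonstrate numerically. The sketch below (an illustration, not from the slides) walks the three-state weather chain defined by Π for many steps and histograms the state occupancies; solving πΠ = π for this matrix gives the exact limiting fractions 9/22, 7/22, 6/22 ≈ 0.41, 0.32, 0.27 for Cloudy, Sunny, Rainy.

```python
import random

# Transition-probability matrix from the slides:
# rows = current state, columns = next state; 0 = Cloudy, 1 = Sunny, 2 = Rainy
PI = [[0.1, 0.5, 0.4],
      [0.9, 0.1, 0.0],
      [0.3, 0.3, 0.4]]

def step(state):
    """Choose the next state according to row `state` of PI."""
    r = random.random()
    cumulative = 0.0
    for j, p in enumerate(PI[state]):
        cumulative += p
        if r < cumulative:
            return j
    return len(PI) - 1  # guard against floating-point round-off

random.seed(1)
counts = [0, 0, 0]
state = 0
n_steps = 200_000
for _ in range(n_steps):
    state = step(state)
    counts[state] += 1

print([c / n_steps for c in counts])  # near [0.41, 0.32, 0.27]
```

The 12-step histogram on the slide is still far from these values, which is the point: a short walk fluctuates, and only after very many steps does the limiting distribution emerge.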

Some Uses of Markov Processes
- Physics (molecular simulation)
- Chemistry (modeling kinetics)
- Designing tests
- Speech recognition
- Information science
- Queuing theory
- Google PageRank
- Economics and finance
- Social sciences
- Mathematical biology
- Genetics
- Games
- Algorithmic music composition
- Text generators
http://en.wikipedia.org/wiki/markov_chain

Designer Transition Probabilities
- Say we want to sample states with a desired probability distribution (the p_i are given), using a Markov process
- How do we design the transition probabilities p_ij?
- Many choices are possible for a given distribution
- The Metropolis algorithm provides a good choice:
  p_ij = min(p_j / p_i, 1)   for j ≠ i
  p_ii = 1 − Σ_{j≠i} p_ij
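A quick way to see that the Metropolis rule works is to build the transition matrix explicitly and check detailed balance, p_i p_ij = p_j p_ji, which guarantees the chain's limiting distribution is the desired one. This sketch is an illustration, not from the slides; it assumes trial states are proposed uniformly among the other states (a factor 1/(n−1)), as in the in-class exercise that follows, so that each row sums to one.

```python
def metropolis_matrix(p):
    """Transition probabilities whose limiting distribution is p, using the
    Metropolis rule with uniform trial moves among the other states."""
    n = len(p)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                # trial probability 1/(n-1) times acceptance min(p_j / p_i, 1)
                P[i][j] = (1.0 / (n - 1)) * min(p[j] / p[i], 1.0)
        P[i][i] = 1.0 - sum(P[i])  # p_ii = 1 - sum over j != i of p_ij
    return P

P = metropolis_matrix([0.3, 0.1, 0.6])
for row in P:
    print([round(x, 4) for x in row])
```

For the distribution (0.3, 0.1, 0.6) every row sums to unity and detailed balance holds term by term, so the chain samples exactly those probabilities in the long run.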

In-class Markov Process
Three groups:
- Group 1: p_1 = 0.3
- Group 2: p_2 = 0.1
- Group 3: p_3 = 0.6
For a state at group n, perform a trial:
- Select one of the other states (m) with equal probability
- Compare state probabilities:
  If p_m > p_n, let the new state be m; done with trial
  Otherwise, select a random number r in (0, 1)
    If p_m / p_n > r, let the new state be m; done with trial
    Otherwise, let the new state be n (again); done with trial
- Repeat
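The in-class procedure translates almost line for line into code. A sketch (an illustration, not from the slides) that runs the trial many times and checks that the fraction of trials spent in each group approaches the target distribution (0.3, 0.1, 0.6):

```python
import random

def metropolis_trial(n, p):
    """One trial of the in-class procedure: from state n, propose one of the
    other states uniformly, then accept with probability min(p_m / p_n, 1)."""
    others = [m for m in range(len(p)) if m != n]
    m = random.choice(others)
    if p[m] > p[n] or random.random() < p[m] / p[n]:
        return m      # accept the move
    return n          # reject: stay in state n (again)

random.seed(7)
p_target = [0.3, 0.1, 0.6]
counts = [0, 0, 0]
state = 0
n_trials = 300_000
for _ in range(n_trials):
    state = metropolis_trial(state, p_target)
    counts[state] += 1

print([c / n_trials for c in counts])  # approaches 0.3, 0.1, 0.6
```

Note that a rejected trial still counts as a visit to the current state; forgetting to count rejections is the classic bug in Metropolis sampling and skews the distribution.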

Calculating Physical Properties
Molecular simulation:
- Generate a box of atoms/molecules
- Postulate a model for how they interact
- Generate configurations appropriate to the postulated model
- Record averages over the generated configurations

Generating Configurations
Monte Carlo
- Sample configurations using a Markov process
- Atoms move around randomly, but in a controlled way
- The movements aren't physically meaningful, but the sampled distribution of configurations is
Molecular dynamics
- Sample configurations according to Newton's laws: F = ma
- Move/accelerate all atoms at once
- Direction and amount depend on current velocities and forces
- The movements look realistic, like a movie
Let's see some demos

An Application of Molecular Simulation

Concepts
- Random numbers can be used to measure things that aren't so random
- Uncertainty in averages can be estimated from the measurements themselves
- A Markov process can be used to sample from a probability distribution
- Physical properties can be computed using a Markov process