Math 456: Mathematical Modeling. Tuesday, March 6th, 2018


1 Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

2 Markov Chains: Exit distributions and the Strong Markov Property. Tuesday, March 6th, 2018

3 Last time

1. Weighted graphs.
2. Existence of stationary distributions (irreducible chains).
3. Informal definition of transient and recurrent states.
4. Definition of stopping times.
5. Statement of the Strong Markov Property.

4 Today

Today we will see:

1. Stopping times and exit distributions.
2. The proof of the Strong Markov Property.
3. Probability of return: properties and computations.

5 Notation reminder

Recall the notation from last time: given a Markov chain $X_0, X_1, X_2, \dots$ we will write

$P_x[A] := P[A \mid X_0 = x]$ and $E_x[Y] := E[Y \mid X_0 = x]$

(probability and expectation with respect to the initial state). Also recall: given a state $y$, we have the first hitting time

$T_y := \min\{n \ge 1 \mid X_n = y\},$

which is a positive, integer-valued random variable.

6 Warm up

Consider the Gambler's ruin with a fair coin. What is $P_x[\text{Winning}]$?

Using exit times, and writing $A = \{0, M\}$: what is $P_x[X_{T_A} = M]$?

7 Warm up

From the total probability formula, it follows that (for $x \ne 0, M$)

$P_x[X_{T_A} = M] = \tfrac{1}{2} P_{x+1}[X_{T_A} = M] + \tfrac{1}{2} P_{x-1}[X_{T_A} = M].$

From here, with a little effort, one can show that

$P_x[X_{T_A} = M] = \frac{x}{M}, \quad x = 0, 1, \dots, M.$
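To make the "little effort" explicit (a short verification, not spelled out on the slide): the candidate $h(x) = x/M$ has the right boundary values, $h(0) = 0$ and $h(M) = 1$, and satisfies the averaging identity above, since

$\tfrac{1}{2} h(x+1) + \tfrac{1}{2} h(x-1) = \frac{(x+1) + (x-1)}{2M} = \frac{x}{M} = h(x).$

Since these linear equations together with the boundary values determine the solution uniquely, $P_x[X_{T_A} = M] = x/M$.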

8 Warm up

From the total probability formula, it follows that (for $x \ne 0, M$)

$P_x[X_{T_A} = M] = \tfrac{1}{2} P_{x+1}[X_{T_A} = M] + \tfrac{1}{2} P_{x-1}[X_{T_A} = M].$

From here, with a little effort, one can show that

$P_x[X_{T_A} = M] = \frac{x}{M}, \quad x = 0, 1, \dots, M.$

What if the coin is not fair?

9 Exit probabilities

We are given: an irreducible Markov chain with state space $S$, and a set $A \subset S$.

The state of the chain at the time it first reaches $A$ is given by $X_{T_A}$.

We may think of $A$ as representing the exits of the state space, so $X_{T_A}$ represents the location through which the chain exits.

10 Exit probabilities

To study the distribution of $X_{T_A}$, fix $y \in A$, and define

$G_y(x) := P[X_{T_A} = y \mid X_0 = x].$

This defines a function $G_y : S \to \mathbb{R}$, known as the Green function of the chain (with Dirichlet conditions on $A$).

11 Exit probabilities

Observe: if $x \in A$, then $T_A = 0$, so

$P[X_{T_A} = y \mid X_0 = x] = P[X_0 = y \mid X_0 = x],$

and this is 1 or 0 depending on whether $x = y$ or not, so

$G_y(x) = \begin{cases} 1 & \text{if } x \in A,\ x = y, \\ 0 & \text{if } x \in A,\ x \ne y. \end{cases}$

What can we say for $x \notin A$?

12 Exit probabilities

For $x \notin A$, we divide and conquer. In this case, the chain will move to a new state $X_1$, and we can condition on this new state. The total probability formula plus the Markov property then say that

$P[X_{T_A} = y \mid X_0 = x] = \sum_{x' \in S} P[X_{T_A} = y \mid X_1 = x'] \, P[X_1 = x' \mid X_0 = x].$

13 Exit probabilities

Now, the identity

$P[X_{T_A} = y \mid X_0 = x] = \sum_{x' \in S} P[X_{T_A} = y \mid X_1 = x'] \, P[X_1 = x' \mid X_0 = x]$

can be rewritten as

$G_y(x) = \sum_{x' \in S} G_y(x') \, p(x, x'),$

or, using that $\sum_{x' \in S} p(x, x') = 1$,

$\sum_{x' \in S} (G_y(x') - G_y(x)) \, p(x, x') = 0, \quad x \in S \setminus A.$

14 Exit probabilities: Example

Let us go back to the Gambler's ruin. Assume we have a biased coin ($p \ne q$) and take $y = M$. Then for $x \in \{1, \dots, M-1\}$

$q(G_M(x+1) - G_M(x)) + p(G_M(x-1) - G_M(x)) = 0.$

With some cleverness (moving the second term to the right-hand side and dividing by $q$), we can write this as a recurrence relation:

$G_M(x+1) - G_M(x) = \frac{p}{q}\,(G_M(x) - G_M(x-1)).$

15 Exit probabilities: Example

Thus, we have for $x \in \{1, \dots, M-1\}$

$G_M(x+1) - G_M(x) = \left(\frac{p}{q}\right)^x (G_M(1) - G_M(0)) = \left(\frac{p}{q}\right)^x G_M(1),$

since $G_M(0) = 0$. Now, we can add up all these identities, from $x = 0$ up to some arbitrary element of $\{1, \dots, M\}$, obtaining

$G_M(x) - G_M(0) = \sum_{i=0}^{x-1} \left(\frac{p}{q}\right)^i G_M(1).$

16 Exit probabilities: Example

Therefore (again, since $G_M(0) = 0$)

$G_M(x) = \sum_{i=0}^{x-1} \left(\frac{p}{q}\right)^i G_M(1).$

What about $G_M(1)$? We use the elementary identity

$\sum_{i=0}^{x-1} \left(\frac{p}{q}\right)^i = \frac{(p/q)^x - 1}{(p/q) - 1}.$

Substituting...

17 Exit probabilities: Example

...we obtain, for every $x \in \{0, \dots, M\}$, the formula

$G_M(x) = \frac{(p/q)^x - 1}{(p/q) - 1}\, G_M(1).$

Since $G_M(M) = 1$, it follows that

$1 = \frac{(p/q)^M - 1}{(p/q) - 1}\, G_M(1).$

Therefore,

$G_M(1) = \frac{(p/q) - 1}{(p/q)^M - 1}.$

18 Exit probabilities: Example

Thus, we have for $x \in \{1, \dots, M-1\}$

$G_M(x+1) - G_M(x) = \left(\frac{p}{q}\right)^x (G_M(1) - G_M(0)) = \left(\frac{p}{q}\right)^x G_M(1),$

since $G_M(0) = 0$. At the same time, we have $G_M(M) = 1$, so

$1 - G_M(M-1) = \left(\frac{p}{q}\right)^{M-1} G_M(1).$

19 Exit probabilities: Example

In conclusion, for every $x$,

$G_M(x) = \frac{(p/q)^x - 1}{(p/q)^M - 1}.$

(Note the exponent $M$ in the denominator: this is what makes $G_M(M) = 1$, consistent with the value of $G_M(1)$ found above.)
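As a quick numerical sanity check (a sketch of ours, not from the slides; the function name exit_prob is made up), the closed form can be evaluated for several biases, and as $p/q \to 1$ it approaches the fair-coin answer $x/M$:

```python
# Closed-form exit probability for the biased Gambler's ruin,
# G_M(x) = (r^x - 1) / (r^M - 1) with r = p/q != 1 (slides' convention).
def exit_prob(x, M, r):
    """Probability of exiting at M, starting from x, for r = p/q != 1."""
    return (r**x - 1.0) / (r**M - 1.0)

M = 10
for r in (0.5, 0.9, 0.99, 1.01, 2.0):
    print(r, [round(exit_prob(x, M, r), 3) for x in range(M + 1)])
# As r -> 1 the rows approach x/M, recovering the fair-coin formula.
```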

20 The Laplacian Operator

To every function $f : S \to \mathbb{R}$ we associate a new one,

$\Delta f(x) := \sum_{x' \in S} (f(x') - f(x)) \, p(x, x'),$

known as the Laplacian of $f$.

21 The Laplacian Operator

Then, for a given set $A \subset S$ and $y \in A$, we have

$\begin{cases} G_y(x) = \delta_{xy} & \text{if } x \in A, \\ \Delta G_y(x) = 0 & \text{if } x \in S \setminus A. \end{cases}$

Solving the above amounts to a linear algebra problem involving the transition matrix.
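In concrete terms (a minimal numpy sketch of ours, using the Gambler's ruin as the test case; all variable names are our own): build the transition matrix, impose $G_y = \delta_{xy}$ on $A$, require $(I - p)G_y = 0$ at the interior states, and solve.

```python
import numpy as np

# Gambler's ruin on {0, ..., M}: up with prob q, down with prob p,
# following the slides' convention; A = {0, M} is absorbing.
M, q = 10, 0.45
p = 1.0 - q

P = np.zeros((M + 1, M + 1))
P[0, 0] = P[M, M] = 1.0
for x in range(1, M):
    P[x, x + 1] = q
    P[x, x - 1] = p

# Linear system: (I - P) G = 0 at interior states, with the boundary
# rows replaced by G(0) = 0 and G(M) = 1 (Dirichlet conditions).
A = np.eye(M + 1) - P
A[0, :], A[M, :] = 0.0, 0.0
A[0, 0] = A[M, M] = 1.0
b = np.zeros(M + 1)
b[M] = 1.0

G = np.linalg.solve(A, b)
r = p / q
closed_form = (r ** np.arange(M + 1) - 1) / (r ** M - 1)
print(np.max(np.abs(G - closed_form)))   # agrees to machine precision
```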

22 The Laplacian Operator

This means we can compute an exact formula for the exit distribution for a set $A$ without even running a single simulation.

The flip side: if one wants to approximate the solution to the above problem instead, one can run lots of simulations and tally the exit points.
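The simulation route, for comparison (again a sketch of ours; simulate_exit is a made-up name): run the chain until it hits $A$ and tally how often it exits at $M$.

```python
import random

def simulate_exit(x, M, q, trials=100_000):
    """Monte Carlo estimate of P_x[X_{T_A} = M], with A = {0, M}."""
    hits = 0
    for _ in range(trials):
        state = x
        while 0 < state < M:                     # run until the chain exits
            state += 1 if random.random() < q else -1
        hits += (state == M)
    return hits / trials

M, q, x = 10, 0.45, 5
p = 1.0 - q
r = p / q
print(simulate_exit(x, M, q))                    # tallied exit frequency
print((r**x - 1.0) / (r**M - 1.0))               # exact G_M(x)
```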

23 Math 456: Mathematical Modeling. Thursday, March 8th, 2018

24 Markov Chains: The Strong Markov Property (...and a bit more about the Laplacian). Thursday, March 8th, 2018

25 Last time

1. A bit of calculus on weighted graphs.
2. Definitions: gradient, divergence, and Laplacian.
3. The Laplacian as a measure of a function's smoothness.
4. Exit distributions. Example: Gambler's ruin.
5. Exit distributions and the associated graph Laplacian.

26 Today

1. The Laplacian and stationary distributions.
2. The Strong Markov Property.
3. Probability of return: properties and computations.

27 $\Delta$ vs. Stationary distributions

Last time we defined the Laplacian on a weighted graph. For the graph associated to a Markov chain (vertex set $G = S$, weights given by transition probabilities), the Laplacian of a function $f$ on the state space is given by

$\Delta f(x) = \sum_{y \in G} (f(y) - f(x)) \, p(x, y).$

28 $\Delta$ vs. Stationary distributions

Let $\pi$ be a stationary distribution for the chain, and let $f$ be a function solving $\Delta f = 0$. How do the two equations compare? We have

$\pi(x) = \sum_{y \in G} \pi(y) \, p(y, x),$

$\sum_{y \in G} (f(y) - f(x)) \, p(x, y) = 0.$

29 $\Delta$ vs. Stationary distributions

Let $\pi$ be a stationary distribution for the chain, and let $f$ be a function solving $\Delta f = 0$. How do the two equations compare? ...Rewriting the second one,

$\pi(x) = \sum_{y \in G} \pi(y) \, p(y, x), \qquad f(x) = \sum_{y \in G} f(y) \, p(x, y),$

where we have used that $\sum_y p(x, y) = 1$ for every $x$. What is the difference? We have $p(x, y)$ in one and $p(y, x)$ in the other (and we are summing over $y$ in both cases).

30 $\Delta$ vs. Stationary distributions

What does this look like in vector notation? Solving $\Delta f = 0$ in all of $G$ means

$pf = f,$

and, as we know, $\pi$ being stationary means

$p^t \pi = \pi.$

The two conditions are very similar, but they are not exactly the same in general! If $p^t = p$, they of course coincide! More generally, if $p$ is doubly stochastic they coincide: the uniform distribution is then stationary, and constant functions always satisfy $\Delta f = 0$.
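A small numerical illustration of the difference (our own example, not from the lecture): for a row-stochastic but not doubly stochastic $p$, constants solve $pf = f$, while the stationary $\pi$ solving $p^t \pi = \pi$ is not uniform.

```python
import numpy as np

# A 3-state chain: rows sum to 1, but columns do not,
# so p is stochastic without being doubly stochastic.
p = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# pf = f: any constant vector works, since each row sums to 1.
f = np.ones(3)
print(p @ f)                      # [1. 1. 1.], i.e. f itself

# p^t pi = pi: eigenvector of p^t for eigenvalue 1, normalized.
w, V = np.linalg.eig(p.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(pi)                         # not uniform: the two problems differ
```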

31 Classification of states: The Strong Markov Property

Recall the Strong Markov Property from last time.

Theorem (Strong Markov Property). Let $X_n$ denote a Markov chain. If $T$ is a stopping time and $Y_n := X_{T+n}$, then $Y_n$ is also a Markov chain. Moreover, the transition probability for $Y_n$ is the same as the one for $X_n$.

32 Classification of states: The Strong Markov Property

Theorem (Strong Markov Property). Let $X_n$ denote a Markov chain. If $T$ is a stopping time and $Y_n := X_{T+n}$, then $Y_n$ is also a Markov chain. Moreover, the transition probability for $Y_n$ is the same as the one for $X_n$.

What does this mean? It means that given states $\alpha_1, \dots, \alpha_{n+1}$, we have

$P(Y_{n+1} = \alpha_{n+1} \mid Y_n = \alpha_n, \dots, Y_1 = \alpha_1) = p(\alpha_n, \alpha_{n+1}),$

where $p(x, y)$ is the transition matrix of the original chain.

33 Classification of states: The Strong Markov Property

Theorem (Strong Markov Property). Let $X_n$ denote a Markov chain. If $T$ is a stopping time and $Y_n := X_{T+n}$, then $Y_n$ is also a Markov chain. Moreover, the transition probability for $Y_n$ is the same as the one for $X_n$.

What does this mean? Equivalently, it means that

$P(Y_{n+1} = \alpha_{n+1}, Y_n = \alpha_n, \dots, Y_1 = \alpha_1)$

is equal to

$p(\alpha_n, \alpha_{n+1}) \, p(\alpha_{n-1}, \alpha_n) \cdots p(\alpha_1, \alpha_2) \, P(Y_1 = \alpha_1).$

(We shall skip the proof of this for now and revisit it in a couple of lectures.)

34 Probability of return

Now, armed with the Strong Markov Property, let us put it to use to analyze various questions about the long-time behavior of a chain.

Remember: we were trying to study the event that, starting from $x$, we eventually reach some other state $y$ at some (random) time $T_y$, as well as the probability that, starting from $x$, we eventually return to $x$ itself at some later (random) time $T_x$.

35 Probability of return

Recall the first hitting time

$T_y = \min\{n \ge 1 \mid X_n = y\},$

with the convention that $T_y = \infty$ in the event that $\{X_n \ne y \ \text{for all } n\}$.

36 Probability of return

Also recall (with the same convention in case the respective event is empty)

$T_x^k := \min\{n > T_x^{k-1} \mid X_n = x\},$

$T_A^k := \min\{n > T_A^{k-1} \mid X_n \in A\}.$

These are known as the time of the $k$-th return to $x$ and the $k$-th hitting time for $A$, respectively.

37 Probability of return

Also recall (with the same convention in case the respective event is empty)

$T_x^k := \min\{n > T_x^{k-1} \mid X_n = x\},$

$T_A^k := \min\{n > T_A^{k-1} \mid X_n \in A\}.$

These are known as the time of the $k$-th return to $x$ and the $k$-th hitting time for $A$, respectively.

Evidently, each $T_x^k$ is a stopping time.

38 Probability of return

The probability of returning to $x$:

$\rho_{xx} := P_x[T_x < \infty].$

The probability of visiting $y$ starting from $x$:

$\rho_{xy} := P_x[T_y < \infty].$

Note: $\rho_{xy} > 0$ if and only if $x \to y$ (i.e., $y$ is accessible from $x$).

39 Probability of return

Using the Strong Markov Property, we can show the following:

Lemma (see equation (1.4) in Durrett). $P_x[T_x^k < \infty] = \rho_{xx}^k$.

40 Probability of return

Using the Strong Markov Property, we can show the following:

Lemma (see equation (1.4) in Durrett). $P_x[T_x^k < \infty] = \rho_{xx}^k$.

Note: in particular, if $\rho_{xx} < 1$, the probability of returning to the state $k$ times goes to zero exponentially fast as $k \to \infty$.

41 Probability of return

Proof. Observe that

$P_x[T_x^k < \infty] = P_x[T_x^k < \infty, \ T_x^{k-1} < \infty].$

Therefore

$P_x[T_x^k < \infty] = P_x[X_{T_x^{k-1}+n} = x \text{ for some } n, \ T_x^{k-1} < \infty]$

$= P_x[X_{T_x^{k-1}+n} = x \text{ for some } n \mid T_x^{k-1} < \infty] \, P_x[T_x^{k-1} < \infty].$

42 Probability of return

Proof (continued). Let $Y_n := X_{T_x^{k-1}+n}$. By the Strong Markov Property,

$P_x[X_{T_x^{k-1}+n} = x \text{ for some } n \mid T_x^{k-1} < \infty] = P[Y_n = x \text{ for some } n \mid T_x^{k-1} < \infty]$

$= P[X_n = x \text{ for some } n \mid X_0 = x] = P[T_x < \infty \mid X_0 = x] = P_x[T_x < \infty] = \rho_{xx}.$

Repeating this argument, we obtain the desired formula.
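The lemma is easy to test empirically (a hedged sketch of ours: the value $\rho_{00} = 2\min(a, 1-a)$ for the asymmetric walk on $\mathbb{Z}$ is a standard fact, and we truncate trajectories at a finite horizon as a stand-in for the event $\{T < \infty\}$):

```python
import random

# Estimate P_0[T_0^k < infinity] for the walk on Z that steps up
# with prob a < 1/2, and compare with rho^k where rho = 2a.
a, horizon, trials, K = 0.3, 2000, 20_000, 3
rho = 2 * a

def returns_to_zero(max_returns):
    """Count returns to 0 within `horizon` steps, up to max_returns."""
    state, count = 0, 0
    for _ in range(horizon):
        state += 1 if random.random() < a else -1
        if state == 0:
            count += 1
            if count == max_returns:
                break
    return count

at_least = [0] * (K + 1)
for _ in range(trials):
    r = returns_to_zero(K)
    for k in range(1, r + 1):
        at_least[k] += 1          # this trial achieved at least k returns

for k in range(1, K + 1):
    print(k, at_least[k] / trials, rho**k)   # empirical vs rho^k
```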

43 Probability of return

Now, reviewing the definition of transient and recurrent from last time, we agree to say that:

1. a state $x$ will be called recurrent if $\rho_{xx} = 1$;
2. a state $x$ will be called transient if $\rho_{xx} < 1$.

44 Probability of return

The previous lemma and these definitions lead us to the following realizations (one trivial, two not so trivial):

1. If $x$ is transient, then the probability of returning to $x$ $k$ times decreases geometrically with $k$.
2. If $x$ is recurrent and $k \in \mathbb{N}$, then with probability equal to 1, the chain will be in the state $x$ at least $k$ times.
3. Actually, if $x$ is recurrent, then with probability 1, starting from $x$, the chain will revisit $x$ infinitely many times.

This last point is particularly delicate, and the proof uses an important tool in probability known as the Borel-Cantelli lemma.
