18.175: Lecture 30 Markov chains


Scott Sheffield, MIT

Outline

Review what you know about finite state Markov chains. Finite state ergodicity and stationarity. More general setup.

Markov chains

Consider a sequence of random variables $X_0, X_1, X_2, \ldots$ each taking values in the same state space, which for now we take to be a finite set that we label by $\{0, 1, \ldots, M\}$. Interpret $X_n$ as the state of the system at time $n$.

The sequence is called a Markov chain if we have a fixed collection of numbers $P_{ij}$ (one for each pair $i, j \in \{0, 1, \ldots, M\}$) such that whenever the system is in state $i$, there is probability $P_{ij}$ that the system will next be in state $j$. Precisely,

$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P_{ij}.$$

This is a kind of almost-memoryless property: the probability distribution for the next state depends only on the current state (and not on the rest of the state history).

Simple example

For example, imagine a simple weather model with two states: rainy and sunny. If it's rainy one day, there's a .5 chance it will be rainy the next day and a .5 chance it will be sunny. If it's sunny one day, there's an .8 chance it will be sunny the next day and a .2 chance it will be rainy. In this climate, sun tends to last longer than rain.

Given that it is rainy today, how many days do I expect to have to wait to see a sunny day? Given that it is sunny today, how many days do I expect to have to wait to see a rainy day? Over the long haul, what fraction of days are sunny?
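Since a run of rain ends each day with probability .5, the wait from rainy to sunny is geometric with mean $1/.5 = 2$ days, and symmetrically the wait from sunny to rainy has mean $1/.2 = 5$ days. A minimal Monte Carlo sketch (my own illustration, not part of the lecture) that estimates these answers and the long-run sunny fraction:

```python
import random

P = {"rainy": {"rainy": 0.5, "sunny": 0.5},
     "sunny": {"rainy": 0.2, "sunny": 0.8}}

def step(state):
    # Sample the next state from row `state` of the transition matrix.
    return "rainy" if random.random() < P[state]["rainy"] else "sunny"

def mean_wait(start, target, trials=100_000):
    # Average number of days until the chain first hits `target`.
    total = 0
    for _ in range(trials):
        state, days = start, 0
        while state != target:
            state, days = step(state), days + 1
        total += days
    return total / trials

print(mean_wait("rainy", "sunny"))  # ~2.0
print(mean_wait("sunny", "rainy"))  # ~5.0

# Long-run fraction of sunny days along one long trajectory.
state, n, sunny = "rainy", 1_000_000, 0
for _ in range(n):
    state = step(state)
    sunny += state == "sunny"
print(sunny / n)  # ~0.714, i.e. about 5/7
```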

Matrix representation

To describe a Markov chain, we need to define $P_{ij}$ for any $i, j \in \{0, 1, \ldots, M\}$. It is convenient to represent the collection of transition probabilities $P_{ij}$ as a matrix:

$$A = \begin{pmatrix} P_{00} & P_{01} & \cdots & P_{0M} \\ P_{10} & P_{11} & \cdots & P_{1M} \\ \vdots & \vdots & & \vdots \\ P_{M0} & P_{M1} & \cdots & P_{MM} \end{pmatrix}$$

For this to make sense, we require $P_{ij} \ge 0$ for all $i, j$ and $\sum_{j=0}^{M} P_{ij} = 1$ for each $i$. That is, the rows sum to one.

Transitions via matrices

Suppose that $p_i$ is the probability that the system is in state $i$ at time zero. What does the following product represent?

$$\begin{pmatrix} p_0 & p_1 & \cdots & p_M \end{pmatrix} \begin{pmatrix} P_{00} & P_{01} & \cdots & P_{0M} \\ P_{10} & P_{11} & \cdots & P_{1M} \\ \vdots & \vdots & & \vdots \\ P_{M0} & P_{M1} & \cdots & P_{MM} \end{pmatrix}$$

Answer: the probability distribution at time one. How about the product $\begin{pmatrix} p_0 & p_1 & \cdots & p_M \end{pmatrix} A^n$? Answer: the probability distribution at time $n$.
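In code, this bookkeeping is one line of matrix algebra. A hedged NumPy sketch (the matrix below is the weather example from above; the variable names are my own):

```python
import numpy as np

A = np.array([[0.5, 0.5],   # row i lists the probabilities P_ij
              [0.2, 0.8]])
p = np.array([1.0, 0.0])    # start rainy (state 0) with probability 1

print(p @ A)                              # distribution at time one: [0.5 0.5]
print(p @ np.linalg.matrix_power(A, 10))  # distribution at time ten: ~[0.2857 0.7143]
```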

Powers of transition matrix

We write $P^{(n)}_{ij}$ for the probability to go from state $i$ to state $j$ over $n$ steps. From the matrix point of view,

$$\begin{pmatrix} P^{(n)}_{00} & P^{(n)}_{01} & \cdots & P^{(n)}_{0M} \\ P^{(n)}_{10} & P^{(n)}_{11} & \cdots & P^{(n)}_{1M} \\ \vdots & \vdots & & \vdots \\ P^{(n)}_{M0} & P^{(n)}_{M1} & \cdots & P^{(n)}_{MM} \end{pmatrix} = \begin{pmatrix} P_{00} & P_{01} & \cdots & P_{0M} \\ P_{10} & P_{11} & \cdots & P_{1M} \\ \vdots & \vdots & & \vdots \\ P_{M0} & P_{M1} & \cdots & P_{MM} \end{pmatrix}^{n}$$

If $A$ is the one-step transition matrix, then $A^n$ is the $n$-step transition matrix.

Questions

What does it mean if all of the rows are identical? Answer: the state sequence $X_i$ consists of i.i.d. random variables.

What if the matrix is the identity? Answer: states never change.

What if each $P_{ij}$ is either one or zero? Answer: state evolution is deterministic.

Simple example

Consider the simple weather example: if it's rainy one day, there's a .5 chance it will be rainy the next day and a .5 chance it will be sunny; if it's sunny one day, there's an .8 chance it will be sunny the next day and a .2 chance it will be rainy.

Let rainy be state zero and sunny state one, and write the transition matrix as

$$A = \begin{pmatrix} .5 & .5 \\ .2 & .8 \end{pmatrix}$$

Note that

$$A^2 = \begin{pmatrix} .35 & .65 \\ .26 & .74 \end{pmatrix}$$

and one can compute

$$A^{10} \approx \begin{pmatrix} .2857 & .7143 \\ .2857 & .7143 \end{pmatrix}$$

Does relationship status have the Markov property?

(The slide shows a diagram with five states — single, in a relationship, it's complicated, engaged, married — and arrows between them.) Can we assign a probability to each arrow?

A Markov model implies the time spent in any state (e.g., a marriage) before leaving is a geometric random variable (see the numerical check below). Not true... Can we make a better model with more states?
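The geometric-holding-time claim is easy to check numerically. A minimal sketch (my own, with an arbitrary stay probability of .9 standing in for any one state):

```python
import random

def holding_time(stay_prob=0.9):
    # Steps spent in a state that is left with probability 1 - stay_prob
    # on each step; this count is a geometric random variable.
    t = 1
    while random.random() < stay_prob:
        t += 1
    return t

samples = [holding_time() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~1 / (1 - 0.9) = 10
```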

Outline

Review what you know about finite state Markov chains. Finite state ergodicity and stationarity. More general setup.

Ergodic Markov chains

Say a Markov chain is ergodic if some power of the transition matrix has all non-zero entries. It turns out that if a chain has this property, then $\pi_j := \lim_{n \to \infty} P^{(n)}_{ij}$ exists and the $\pi_j$ are the unique non-negative solutions of $\pi_j = \sum_{k=0}^{M} \pi_k P_{kj}$ that sum to one.

This means that the row vector $\pi = \begin{pmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \end{pmatrix}$ is a left eigenvector of $A$ with eigenvalue 1, i.e., $\pi A = \pi$. We call $\pi$ the stationary distribution of the Markov chain.

One can solve the system of linear equations $\pi_j = \sum_{k=0}^{M} \pi_k P_{kj}$ to compute the values $\pi_j$. This is equivalent to considering $A$ fixed and solving $\pi A = \pi$, or solving $\pi(A - I) = 0$. This determines $\pi$ up to a multiplicative constant, and the fact that $\sum_j \pi_j = 1$ determines the constant.
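A short sketch of this computation (my own illustration): since $\pi A = \pi$ means $\pi^\top$ is an eigenvector of $A^\top$ with eigenvalue 1, we can ask NumPy for that eigenvector and normalize it.

```python
import numpy as np

def stationary(A):
    # pi A = pi  <=>  A^T pi^T = pi^T, so find the eigenvector of A^T
    # whose eigenvalue is (numerically) 1, then scale it to sum to one.
    vals, vecs = np.linalg.eig(A.T)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

A = np.array([[0.5, 0.5],
              [0.2, 0.8]])
print(stationary(A))  # ~[0.2857 0.7143], i.e. (2/7, 5/7)
```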

Simple example

If $A = \begin{pmatrix} .5 & .5 \\ .2 & .8 \end{pmatrix}$, then we know

$$\pi A = \begin{pmatrix} \pi_0 & \pi_1 \end{pmatrix} \begin{pmatrix} .5 & .5 \\ .2 & .8 \end{pmatrix} = \begin{pmatrix} \pi_0 & \pi_1 \end{pmatrix} = \pi.$$

This means that $.5\pi_0 + .2\pi_1 = \pi_0$ and $.5\pi_0 + .8\pi_1 = \pi_1$, and we also know that $\pi_0 + \pi_1 = 1$. Solving these equations gives $\pi_0 = 2/7$ and $\pi_1 = 5/7$, so $\pi = \begin{pmatrix} 2/7 & 5/7 \end{pmatrix}$.

Indeed,

$$\pi A = \begin{pmatrix} 2/7 & 5/7 \end{pmatrix} \begin{pmatrix} .5 & .5 \\ .2 & .8 \end{pmatrix} = \begin{pmatrix} 2/7 & 5/7 \end{pmatrix} = \pi.$$

Recall that

$$A^{10} \approx \begin{pmatrix} 2/7 & 5/7 \\ 2/7 & 5/7 \end{pmatrix} = \begin{pmatrix} \pi \\ \pi \end{pmatrix}.$$

Outline

Review what you know about finite state Markov chains. Finite state ergodicity and stationarity. More general setup.

Markov chains: general definition

Consider a measurable space $(S, \mathcal{S})$. A function $p : S \times \mathcal{S} \to \mathbb{R}$ is a transition probability if

For each $x \in S$, $A \mapsto p(x, A)$ is a probability measure on $(S, \mathcal{S})$.

For each $A \in \mathcal{S}$, the map $x \mapsto p(x, A)$ is a measurable function.

Say that $X_n$ is a Markov chain w.r.t. $\mathcal{F}_n$ with transition probability $p$ if $P(X_{n+1} \in B \mid \mathcal{F}_n) = p(X_n, B)$.

How do we construct an infinite Markov chain? Choose $p$ and an initial distribution $\mu$ on $(S, \mathcal{S})$. For each $n < \infty$ write

$$P(X_j \in B_j,\ 0 \le j \le n) = \int_{B_0} \mu(dx_0) \int_{B_1} p(x_0, dx_1) \cdots \int_{B_n} p(x_{n-1}, dx_n).$$

Extend to $n = \infty$ by Kolmogorov's extension theorem.
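In simulation terms, the construction just says: draw $X_0$ from $\mu$, then repeatedly draw $X_{j+1}$ from $p(X_j, \cdot)$. A sketch under my own conventions, where a kernel is any function that samples from $p(x, \cdot)$:

```python
import random

def weather_kernel(x):
    # Samples from p(x, .) for the two-state weather chain; any
    # function state -> random next state can play this role.
    stay = 0.5 if x == "rainy" else 0.8
    return x if random.random() < stay else ("sunny" if x == "rainy" else "rainy")

def sample_chain(kernel, mu, n):
    # Draw (X_0, ..., X_n): X_0 ~ mu, then X_{j+1} ~ p(X_j, .).
    x = mu()
    path = [x]
    for _ in range(n):
        x = kernel(x)
        path.append(x)
    return path

print(sample_chain(weather_kernel, lambda: "rainy", 10))
```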

Markov chains

Definition, again: say $X_n$ is a Markov chain w.r.t. $\mathcal{F}_n$ with transition probability $p$ if $P(X_{n+1} \in B \mid \mathcal{F}_n) = p(X_n, B)$.

Construction, again: fix an initial distribution $\mu$ on $(S, \mathcal{S})$. For each $n < \infty$ write

$$P(X_j \in B_j,\ 0 \le j \le n) = \int_{B_0} \mu(dx_0) \int_{B_1} p(x_0, dx_1) \cdots \int_{B_n} p(x_{n-1}, dx_n).$$

Extend to $n = \infty$ by Kolmogorov's extension theorem.

Notation: the extension produces a probability measure $P_\mu$ on the sequence space $(S^{\{0,1,\ldots\}}, \mathcal{S}^{\{0,1,\ldots\}})$.

Theorem: $(X_0, X_1, \ldots)$ chosen from $P_\mu$ is a Markov chain.

Theorem: if $X_n$ is any Markov chain with initial distribution $\mu$ and transition $p$, then the finite-dimensional probabilities are as above.

Examples

Random walks on $\mathbb{R}^d$.

Branching processes: $p(i, j) = P\left(\sum_{m=1}^{i} \xi_m = j\right)$, where the $\xi_m$ are i.i.d. non-negative integer-valued random variables.

Renewal chain.

Card shuffling.

Ehrenfest chain.
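To make the branching-process kernel concrete (my own sketch; the Poisson(1) offspring law is an illustrative assumption, since the slide leaves the law of the $\xi_m$ general):

```python
import numpy as np

rng = np.random.default_rng(0)

def branching_step(i):
    # One generation: each of the i current individuals leaves behind
    # an i.i.d. offspring count xi_m, here xi ~ Poisson(1).
    return int(rng.poisson(1.0, size=i).sum()) if i > 0 else 0

pop = 10
for _ in range(50):
    pop = branching_step(pop)
print(pop)  # Poisson(1) is the critical case: extinction is certain, but slow
```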
