1 Markov Chains

Alejandro Ribeiro
Dept. of Electrical and Systems Engineering, University of Pennsylvania
October 5, 2011

2 Markov chains: definition and examples

Markov chains: definition and examples
Chapman-Kolmogorov equations
Gambler's ruin problem
Queues in communication networks: transition probabilities
Classes of states
Limiting distributions
Ergodicity
Queues in communication networks: limit probabilities

3 Markov chains

Consider a time index n = 0, 1, 2, ... and a time-dependent random state X_n. The state X_n takes values in a countable set of states, in general denoted i = 0, 1, 2, ... (the labeling might change with the problem). Denote the history of the process as X^n = [X_n, X_{n-1}, ..., X_0]^T, and denote the stochastic process as X^ℕ. The stochastic process X^ℕ is a Markov chain (MC) if

P[X_{n+1} = j | X_n = i, X^{n-1}] = P[X_{n+1} = j | X_n = i] = P_{ij}

The future depends only on the current state X_n.

4 Observations

The process's history X^{n-1} is irrelevant for the future evolution of the process. The probabilities P_{ij} are constant for all times (time invariant). From the definition we have, for arbitrary m,

P[X_{n+m} | X_n, X^{n-1}] = P[X_{n+m} | X_n]

because X_{n+m} depends only on X_{n+m-1}, which depends only on X_{n+m-2}, ..., which depends only on X_n. Since the P_{ij} are probabilities, they are nonnegative and sum up to 1:

P_{ij} ≥ 0,   ∑_{j=1}^∞ P_{ij} = 1

5 Matrix representation

Group the transition probabilities P_{ij} in a matrix P:

P := [ P_{00}  P_{01}  P_{02}  ...
       P_{10}  P_{11}  P_{12}  ...
        ...     ...     ...
       P_{i0}  P_{i1}  P_{i2}  ...
        ...     ...     ...       ]

It is not really a matrix if the number of states is infinite.

6 Graph representation

A graph representation is also used: each state ..., i-1, i, i+1, ... is a node, and each possible transition is a directed edge weighted by its probability (self-loops P_{i-1,i-1}, P_{ii}, P_{i+1,i+1}; forward edges P_{i-1,i}, P_{i,i+1}, P_{i+1,i+2}; backward edges P_{i-1,i-2}, P_{i,i-1}, P_{i+1,i}, P_{i+2,i+1}). This is useful when the number of states is infinite.

7 Example: Happy - Sad

I can be happy (X_n = 0) or sad (X_n = 1). Happiness tomorrow is affected by happiness today only. Model as a Markov chain with transition probabilities

P := [ 0.8  0.2
       0.3  0.7 ]

with rows and columns ordered H, S. Inertia: happy or sad today, likely to stay happy or sad tomorrow (P_{00} = 0.8, P_{11} = 0.7). But when sad, a little less likely so (P_{00} > P_{11}).
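A minimal simulation sketch of this chain in Python (not part of the original slides; NumPy assumed, and the function name simulate is illustrative):

```python
import numpy as np

# Happy-Sad chain from the example: state 0 = H, state 1 = S
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def simulate(P, x0, n, rng):
    """Sample a path X_0, ..., X_n of a chain with transition matrix P."""
    path = [x0]
    for _ in range(n):
        # The next state is drawn from the row of P indexed by the current state
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate(P, x0=0, n=20, rng=np.random.default_rng(0)))
```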

8 Example: Happy - Sad, version 2

Happiness tomorrow is affected by today and yesterday. Define double states HH (happy-happy), HS (happy-sad), SH, SS. Only some transitions are possible: HH and SH can only become HH or HS, while HS and SS can only become SH or SS. Model as a Markov chain with a 4 × 4 transition matrix P on the states HH, HS, SH, SS, where more time spent happy (or sad) increases the likelihood of staying happy (or sad). This is state augmentation: it captures longer time memory.

9 Random (drunkard's) walk

Step to the right with probability p, to the left with probability (1 - p). States are 0, ±1, ±2, ...; the number of states is infinite. Transition probabilities are

P_{i,i+1} = p,   P_{i,i-1} = 1 - p

and P_{ij} = 0 for all other transitions.

10 Random (drunkard's) walk - continued

Random walks behave differently if p < 1/2, p = 1/2 or p > 1/2. (Figure: position in steps vs. time for sample paths with p = 0.45, p = 0.50, and p > 1/2.) With p > 1/2 the walk diverges to the right (grows unbounded almost surely); with p < 1/2 it diverges to the left. With p = 1/2 it always comes back to visit the origin (almost surely). Because the number of states is infinite, we can have all states transient: they are not revisited after some time (more later).
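A short sketch to reproduce this qualitative behavior (the values p = 0.45, 0.50, 0.55 are illustrative choices, not from the slide):

```python
import numpy as np

def random_walk(p, n, rng):
    """One sample path of length n: step +1 w.p. p, step -1 w.p. 1 - p."""
    steps = rng.choice([1, -1], size=n, p=[p, 1 - p])
    return np.cumsum(steps)

rng = np.random.default_rng(1)
for p in (0.45, 0.50, 0.55):
    path = random_walk(p, n=10_000, rng=rng)
    print(f"p = {p}: final position {path[-1]}")
```

For p = 0.45 the final position is typically large and negative, for p = 0.55 large and positive, while for p = 0.50 the path keeps returning near the origin.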

11 Two dimensional random walk

Take a step in a random direction: East, West, South or North, chosen with equal probability. States are pairs of coordinates (x, y), with x = 0, ±1, ±2, ... and y = 0, ±1, ±2, ... Transition probabilities are nonzero only for points adjacent in the grid:

P[x(t+1) = i+1, y(t+1) = j | x(t) = i, y(t) = j] = 1/4
P[x(t+1) = i-1, y(t+1) = j | x(t) = i, y(t) = j] = 1/4
P[x(t+1) = i, y(t+1) = j+1 | x(t) = i, y(t) = j] = 1/4
P[x(t+1) = i, y(t+1) = j-1 | x(t) = i, y(t) = j] = 1/4

(Figure: a sample trajectory plotted in longitude (East-West) and latitude (North-South).)

12 More about random walks

Some random facts of life for equiprobable random walks. In one and two dimensions the probability of returning to the origin is 1: the walker will almost surely return home. In more than two dimensions the probability of returning to the origin is less than 1; in three dimensions it is approximately 0.34, and it keeps decreasing with the dimension: 0.19, 0.14, 0.10, 0.08, ...

13 Random walk with borders (gambling)

As a random walk, but stop moving when i = 0 or i = J. This models a gambler that stops playing when ruined (X_n = 0) or upon reaching target gains (X_n = J). States are 0, 1, ..., J: a finite number of states (J + 1 of them). Transition probabilities are

P_{i,i+1} = p,   P_{i,i-1} = 1 - p,   P_{00} = 1,   P_{JJ} = 1

and P_{ij} = 0 for all other transitions. States 0 and J are called absorbing: once there, the chain stays there forever. The rest are transient states: visits to them stop almost surely.

14 Chapman-Kolmogorov equations

15 Multiple step transition probabilities

What can be said about multiple transitions? Transition probabilities between two time slots:

P^2_{ij} := P[X_{m+2} = j | X_m = i]

Probabilities of X_{m+n} given X_m, the n-step transition probabilities:

P^n_{ij} := P[X_{m+n} = j | X_m = i]

What is the relation between the n-step, m-step and (m + n)-step transition probabilities? I.e., write P^{m+n}_{ij} in terms of P^m_{ij} and P^n_{ij}. All these questions are answered by the Chapman-Kolmogorov equations.

16 2-step transition probabilities

Start by considering transition probabilities between two time slots:

P^2_{ij} = P[X_{n+2} = j | X_n = i]

Using the theorem of total probability,

P^2_{ij} = ∑_{k=1}^∞ P[X_{n+2} = j | X_{n+1} = k, X_n = i] P[X_{n+1} = k | X_n = i]

In the first probability, conditioning on X_n = i is unnecessary (Markov property). Thus

P^2_{ij} = ∑_{k=1}^∞ P[X_{n+2} = j | X_{n+1} = k] P[X_{n+1} = k | X_n = i]

which by definition yields

P^2_{ij} = ∑_{k=1}^∞ P_{ik} P_{kj}

17 n-step, m-step and (m + n)-step

An identical argument can be made (condition on X_0 to simplify notation, which is possible because of time invariance):

P^{m+n}_{ij} = P[X_{m+n} = j | X_0 = i]

Use the theorem of total probability, remove the unnecessary conditioning, and use the definitions of the n-step and m-step transition probabilities:

P^{m+n}_{ij} = ∑_{k=1}^∞ P[X_{m+n} = j | X_m = k, X_0 = i] P[X_m = k | X_0 = i]
             = ∑_{k=1}^∞ P[X_{m+n} = j | X_m = k] P[X_m = k | X_0 = i]
             = ∑_{k=1}^∞ P^n_{kj} P^m_{ik}

18 Interpretation

Chapman-Kolmogorov is intuitive. Recall

P^{m+n}_{ij} = ∑_{k=1}^∞ P^n_{kj} P^m_{ik}

Between times 0 and m + n, time m occurred, and at time m the chain is in some state X_m = k. P^m_{ik} is the probability of going from X_0 = i to X_m = k, and P^n_{kj} is the probability of going from X_m = k to X_{m+n} = j. The product P^m_{ik} P^n_{kj} is then the probability of going from X_0 = i to X_{m+n} = j passing through X_m = k at time m. Since any k might have occurred, sum over all k.

19 Matrix form

Define matrices P^{(m)} with elements P^m_{ij}, P^{(n)} with elements P^n_{ij}, and P^{(m+n)} with elements P^{m+n}_{ij}. Then ∑_{k=1}^∞ P^n_{kj} P^m_{ik} is the (i, j)-th element of the matrix product P^{(m)} P^{(n)}. Chapman-Kolmogorov in matrix form:

P^{(m+n)} = P^{(m)} P^{(n)}

The matrix of (m + n)-step transitions is the product of the m-step and n-step transition matrices.

20 n-step transition probabilities

For m = n = 1 (2-step transition probabilities) the matrix form is

P^{(2)} = P P = P^2

Proceeding recursively backwards from n, we have proved the following:

P^{(n)} = P^{(n-1)} P = P^{(n-2)} P P = ... = P^n

Theorem: The matrix of n-step transition probabilities P^{(n)} is given by the n-th power of the transition probability matrix P, i.e., P^{(n)} = P^n. Henceforth we write P^n.
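The theorem reduces multi-step probabilities to a matrix power; a sketch using the Happy-Sad matrix as an illustration (np.linalg.matrix_power is the standard NumPy routine):

```python
import numpy as np

P = np.array([[0.8, 0.2],   # Happy-Sad transition matrix
              [0.3, 0.7]])

# n-step transition probabilities: P^(n) = P^n
print(np.linalg.matrix_power(P, 2))   # 2-step transition probabilities
print(np.linalg.matrix_power(P, 30))  # rows approach [0.6, 0.4]
```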

21 Example: Happy-Sad

Happiness transitions in one day (not the same as the earlier example):

P := [ 0.8  0.2
       0.3  0.7 ]

What are the transition probabilities between today and the day after tomorrow?

P^2 = [ 0.70  0.30
        0.45  0.55 ]

22 Example: Happy-Sad (continued)

... and after a week and after a month:

P^7 ≈ [ 0.6031  0.3969
        0.5953  0.4047 ]

P^30 ≈ [ 0.6000  0.4000
         0.6000  0.4000 ]

Matrices P^7 and P^30 are almost identical: lim_{n→∞} P^n exists (note that this is a regular limit). After a month the transition from H to H occurs with probability 0.6, and from S to H also with probability 0.6: the state becomes independent of the initial condition. Rationale: with 1-step memory, the initial condition is eventually forgotten.

23 Unconditional probabilities

All probabilities so far are conditional, i.e., P[X_n = j | X_0 = i]. We may want the unconditional probabilities

p_j(n) := P[X_n = j]

This requires specifying the initial conditions p_i(0) := P[X_0 = i]. Using the theorem of total probability and the definitions of P^n_{ij} and p_j(n),

p_j(n) := P[X_n = j] = ∑_{i=1}^∞ P[X_n = j | X_0 = i] P[X_0 = i] = ∑_{i=1}^∞ P^n_{ij} p_i(0)

Or in matrix form (define the vector p(n) := [p_1(n), p_2(n), ...]^T):

p(n) = (P^n)^T p(0)
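A sketch of the propagation p(n) = (P^n)^T p(0), applied one step at a time (the Happy-Sad matrix again serves as the example):

```python
import numpy as np

P = np.array([[0.8, 0.2],   # Happy-Sad chain
              [0.3, 0.7]])
p = np.array([1.0, 0.0])    # p(0): start happy with probability 1

for n in range(30):
    p = P.T @ p             # p(n+1) = P^T p(n)
print(p)                    # approaches [0.6, 0.4] regardless of p(0)
```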

24 Example: Happy-Sad

For the transition probability matrix P above, propagate the probabilities from the two initial conditions p(0) = [1, 0]^T and p(0) = [0, 1]^T. (Figure: P(Happy) and P(Sad) as functions of time in days, for both initializations.) For large n, the probabilities p(n) are independent of the initial state p(0).

25 Gambler's ruin problem

26 Gambler's ruin problem

You place $b bets: (a) with probability p you gain $b, and (b) with probability q = (1 - p) you lose your $b bet. You start with an initial wealth of $ib. Define the bias factor α := q/p. If α > 1 you are more likely to lose than win (the game is biased against the gambler); α < 1 favors the gambler (more likely to win than lose); α = 1 means the game is fair. You keep playing until (a) you go broke (lose all your money), or (b) you reach a wealth of $Nb. What is the probability S_i of reaching $Nb before going broke, given an initial wealth of $ib? (S stands for success.)

27 Gambler's Markov chain

Model as a Markov chain; in fact, we have met it before (normalize wealth by $b). Transition probabilities are

P_{i,i+1} = p,   P_{i,i-1} = q,   P_{00} = P_{NN} = 1

States 0 and N are said to be absorbing: the chain eventually ends up in one of them. The remaining states are said to be transient. The initial state is the initial wealth, say $ib.

28 Recursive relations

The probability S_i of a successful betting run depends on the current state i only. We can relate the probabilities of success from adjacent states:

S_i = S_{i+1} P_{i,i+1} + S_{i-1} P_{i,i-1} = S_{i+1} p + S_{i-1} q

Recall p + q = 1 and reorder terms:

p(S_{i+1} - S_i) = q(S_i - S_{i-1})

Recall the definition of the bias α = q/p:

S_{i+1} - S_i = α(S_i - S_{i-1})

29 Recursive relations (continued)

If the current state is 0 then S_0 = 0. We can write

S_2 - S_1 = α(S_1 - S_0) = αS_1

Substitute this in the expression for S_3 - S_2:

S_3 - S_2 = α(S_2 - S_1) = α²S_1

Apply recursively backwards from S_i - S_{i-1}:

S_i - S_{i-1} = α(S_{i-1} - S_{i-2}) = ... = α^{i-1}S_1

Sum up all of the former to obtain

S_i - S_1 = S_1(α + α² + ... + α^{i-1})

The latter can be written as a geometric series:

S_i = S_1(1 + α + α² + ... + α^{i-1})

30 Geometric series

The geometric series can be summed. Assuming α ≠ 1, for all i ≥ 1

S_i = (1 - α^i)/(1 - α) · S_1

When in state N the gambler has succeeded, so S_N = 1:

1 = S_N = (1 - α^N)/(1 - α) · S_1

Compute S_1 from the latter and substitute into the expression for S_i:

S_i = (1 - α^i)/(1 - α^N)

For α = 1: S_i = iS_1, 1 = S_N = NS_1, and thus S_i = i/N.
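The closed form is easy to evaluate numerically (a sketch; the function name success_prob and the example inputs are illustrative):

```python
def success_prob(i, N, p):
    """P[reach wealth N before ruin | start at wealth i], with bias a = q/p."""
    a = (1 - p) / p
    if a == 1:                        # fair game: S_i = i/N
        return i / N
    return (1 - a**i) / (1 - a**N)

print(success_prob(10, 20, 0.50))     # 0.5 for the fair game
print(success_prob(10, 20, 0.55))     # ~0.88 when the odds favor the gambler
```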

31 For large N

Consider the exit bound N arbitrarily large. For α > 1,

S_i ≈ (α^i - 1)/α^N → 0

(and for α = 1, S_i = i/N → 0 as well): if the win probability does not exceed the lose probability, you will almost surely lose all your money. For α < 1,

S_i → 1 - α^i

If the win probability exceeds the lose probability, you might win; if the initial wealth i is sufficiently high, you will most likely win.

32 Queues in communication networks: transition probabilities

33 Queues in communication systems

The goal of a communication system is to move packets from generating sources to intended destinations. Between arrival and departure we hold packets in a memory buffer. We want to design the buffers appropriately.

34 Non concurrent queue

Time is slotted in intervals of duration Δt, with the n-th slot between times nΔt and (n+1)Δt. During each time slot a packet arrives with probability λ. The packet arrival rate is the expected number of arrivals per unit of time: the expected number of arrivals per slot is λ, so the arrival rate is λ/Δt. A packet is transmitted (departs) with probability μ; the departure rate (expected departures per unit time) is μ/Δt. Assume no simultaneous arrival and departure (no concurrence). This is reasonable for small Δt (μ and λ are then likely small).

35 Queue evolution equations

q_n denotes the number of packets in the queue in the n-th time slot; A_n is the number of packet arrivals and D_n the number of departures during the n-th slot. If there are no packets in the queue (q_n = 0) then there are no departures, and the queue length at time n + 1 can be written as

q_{n+1} = q_n + A_n,   if q_n = 0

If q_n > 0, departures and arrivals may both happen:

q_{n+1} = [q_n + A_n - D_n]⁺,   if q_n > 0

Here A_n ∈ {0, 1}, D_n ∈ {0, 1}, and either A_n = 1 or D_n = 1 but not both. Arrival and departure probabilities are

P[A_n = 1] = λ,   P[D_n = 1] = μ

36 Queue evolution probabilities

Future queue lengths depend on the current length only. The probability of the queue length increasing is

P[q_{n+1} = i+1 | q_n = i] = P[A_n = 1] = λ,   for all i

The queue length might decrease only if q_n > 0, with probability

P[q_{n+1} = i-1 | q_n = i] = P[D_n = 1] = μ,   for all i > 0

The queue length stays the same if it neither increases nor decreases:

P[q_{n+1} = i | q_n = i] = 1 - λ - μ,   for all i > 0
P[q_{n+1} = 0 | q_n = 0] = 1 - λ

The absence of departures when q_n = 0 explains the second equation.

37 Queue as a Markov chain

This is a MC with states 0, 1, 2, ...; identify the states with queue lengths. Transition probabilities for i ≠ 0 are

P_{i,i+1} = λ,   P_{i,i} = 1 - λ - μ,   P_{i,i-1} = μ

and for i = 0, P_{00} = 1 - λ and P_{01} = λ.

38 Numerical example: Probability propagation

Build the matrix P truncating at a maximum queue length L = 100, with arrival rate λ = 0.3 and departure rate μ = 0.33. The initial probability distribution is p(0) = [1, 0, 0, ...]^T (queue empty). Propagate probabilities with the product p(n) = (P^T)^n p(0); the probabilities obtained are

P[q_n = i | q_0 = 0] = p_i(n)

(Figure: p_i(n) vs. time for queue lengths i = 0, 10, 20.) The probability of an empty queue is ≈ 0.1, and occupancy probabilities decrease with the index i.
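A sketch of this propagation; the handling of the truncation boundary at L (the queue simply cannot grow past L) is an assumption of the sketch, not specified on the slide:

```python
import numpy as np

lam, mu, L = 0.30, 0.33, 100

# Tridiagonal transition matrix of the truncated queue
P = np.zeros((L + 1, L + 1))
P[0, 0], P[0, 1] = 1 - lam, lam
for i in range(1, L):
    P[i, i - 1], P[i, i], P[i, i + 1] = mu, 1 - lam - mu, lam
P[L, L - 1], P[L, L] = mu, 1 - mu   # assumed boundary behavior

p = np.zeros(L + 1)
p[0] = 1.0                      # queue starts empty
for n in range(5000):
    p = P.T @ p                 # propagate p(n+1) = P^T p(n)
print(p[0], p[10], p[20])       # p[0] settles near 0.1, as in the figure
```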

39 Classes of states

40 Transient and recurrent states

States of a MC can be recurrent or transient. Transient states might be visited at the beginning, but eventually the visits stop: almost surely, X_n ≠ i for n sufficiently large (qualifications needed). Visits to recurrent states keep happening forever: fix an arbitrary m; then almost surely X_n = i for some n ≥ m (qualifications needed). (Figure: a five-state chain with transient states T_1, T_2 and recurrent states R_1, R_2, R_3.)

41 Definitions

Let f_i be the probability that, starting at i, the MC ever reenters state i:

f_i := P[ ∪_{n=1}^∞ {X_n = i} | X_0 = i ] = P[ ∪_{n=m+1}^∞ {X_n = i} | X_m = i ]

State i is recurrent if f_i = 1: the process reenters i again and again (almost surely), infinitely often. State i is transient if f_i < 1: there is a positive probability (1 - f_i) of never coming back to i.

42 Recurrent states example

State R_3 is recurrent because P[X_1 = R_3 | X_0 = R_3] = 1. State R_1 is recurrent because

P[X_1 = R_1 | X_0 = R_1] = 0.3
P[X_2 = R_1, X_1 ≠ R_1 | X_0 = R_1] = (0.7)(0.6)
P[X_3 = R_1, X_2 ≠ R_1, X_1 ≠ R_1 | X_0 = R_1] = (0.7)(0.4)(0.6)
...
P[X_n = R_1, X_{n-1} ≠ R_1, ..., X_1 ≠ R_1 | X_0 = R_1] = (0.7)(0.4)^{n-2}(0.6)

Summing up:

f_{R_1} = ∑_{n=1}^∞ P[X_n = R_1, X_{n-1} ≠ R_1, ..., X_1 ≠ R_1 | X_0 = R_1]
        = 0.3 + (0.7)(0.6) ∑_{n=2}^∞ (0.4)^{n-2} = 0.3 + (0.7)(0.6)/(1 - 0.4) = 0.3 + 0.7 = 1

43 Transient state example

States T_1 and T_2 are transient. The probability of returning to T_1 is f_{T_1} = (0.6)² = 0.36: the chain might come back to T_1 only if it goes to T_2 (with probability 0.6), and it will come back only if it then moves back from T_2 to T_1 (with probability 0.6). Likewise, f_{T_2} = (0.6)² = 0.36.

44 Accessibility

State j is accessible from state i if P^n_{ij} > 0 for some n ≥ 0: it is possible to reach j if the MC is initialized at X_0 = i. Since P^0_{ii} = P[X_0 = i | X_0 = i] = 1, every state is accessible from itself. In the example: all states are accessible from T_1 and T_2; only R_1 and R_2 are accessible from R_1 or R_2; and no state other than itself is accessible from R_3.

45 Communication

States i and j are said to communicate (i ↔ j) if i is accessible from j (P^n_{ij} > 0 for some n) and j is accessible from i (P^m_{ji} > 0 for some m). Communication is an equivalence relation:
Reflexivity: i ↔ i, true because P^0_{ii} = 1.
Symmetry: if i ↔ j then P^n_{ij} > 0 and P^m_{ji} > 0, from where j ↔ i.
Transitivity: if i ↔ j and j ↔ k, then i ↔ k; just notice that P^{n+m}_{ik} ≥ P^n_{ij} P^m_{jk} > 0.
Communication therefore partitions the set of states into disjoint classes (as all equivalence relations do). What are these classes? (We start with a brief detour.)

46 Expected number of visits to states

Define N_i as the number of visits to state i given that X_0 = i:

N_i := ∑_{n=1}^∞ I{X_n = i}

If X_n = i, this is the last visit to i with probability 1 - f_i. The probability of revisiting state i exactly n times is (n visits, then no more visits)

P[N_i = n] = f_i^n (1 - f_i)

The number of visits N_i thus has a geometric distribution with parameter f_i, and the expected number of visits is

E[N_i] = ∑_{n=1}^∞ n f_i^n (1 - f_i) = f_i/(1 - f_i)

For recurrent states N_i = ∞ almost surely and E[N_i] = ∞ (f_i = 1).

47 Alternative transience/recurrence characterization

Another way of writing E[N_i]:

E[N_i] = ∑_{n=1}^∞ E[I{X_n = i}] = ∑_{n=1}^∞ P^n_{ii}

Recall that for transient states E[N_i] = f_i/(1 - f_i) < ∞, while for recurrent states E[N_i] = ∞. This proves the following:

Theorem: State i is transient if and only if ∑_{n=1}^∞ P^n_{ii} < ∞. State i is recurrent if and only if ∑_{n=1}^∞ P^n_{ii} = ∞.

The number of future visits to transient states is finite. If the number of states is finite, some states have to be recurrent.

48 Recurrence and communication

Theorem: If state i is recurrent and i ↔ j, then j is recurrent.

Proof: If i ↔ j then there are l, m such that P^l_{ji} > 0 and P^m_{ij} > 0. Then, for any n we have

P^{l+n+m}_{jj} ≥ P^l_{ji} P^n_{ii} P^m_{ij}

Sum over all n, and note that since i is recurrent ∑_{n=1}^∞ P^n_{ii} = ∞:

∑_{n=1}^∞ P^{l+n+m}_{jj} ≥ ∑_{n=1}^∞ P^l_{ji} P^n_{ii} P^m_{ij} = P^l_{ji} ( ∑_{n=1}^∞ P^n_{ii} ) P^m_{ij} = ∞

which implies that j is recurrent. ∎

49 Recurrence and transience are class properties

Corollary: If state i is transient and i ↔ j, then j is transient.

Proof: If j were recurrent, then i would be recurrent by the previous theorem. ∎

Since communication defines classes and recurrence is shared by all elements of a class, we say that recurrence is a class property. Likewise, transience is also a class property. The states of a MC are thus separated into classes of transient states and classes of recurrent states.

50 Irreducible Markov chains

A MC is called irreducible if it has only one class: all states communicate with each other. If the MC also has a finite number of states, the single class is recurrent; if the MC is infinite, the class might be transient. When a MC has multiple classes (is not irreducible), there are classes of transient states T_1, T_2, ... and classes of recurrent states R_1, R_2, ... If the MC is initialized in a recurrent class R_k, it stays within that class; if it starts in a transient class T_k, it might stay in T_k or end up in a recurrent class R_l. For a large time index n the MC is restricted to one class, so it can be separated into irreducible components.

51 Example

(Figure: the five-state chain with states T_1, T_2, R_1, R_2, R_3.) There are three classes: T := {T_1, T_2}, a class of transient states; ℛ_1 := {R_1, R_2}, a class of recurrent states; and ℛ_2 := {R_3}, a class of recurrent states. Asymptotically in n it suffices to study the behavior of the irreducible components ℛ_1 and ℛ_2.

52 Example: Right-biased random walk

Step right with probability p > 1/2, left with probability (1 - p). Write the current position of the random walker as X_n = ∑_{m=1}^n S_m, where the S_m are the steps taken. Note that E[S_m] = 2p - 1 and E[S_m²] = 1. From the Central Limit Theorem (approximating the variance of S_m by 1),

P[ (∑_{m=1}^n S_m - n(2p - 1)) / √n ≤ a ] → Φ(a)

53 Example: Right-biased random walk (continued)

Choose a = (1 - 2p)√n:

P[X_n ≤ 0] = P[ ∑_{m=1}^n S_m ≤ 0 ] ≈ Φ((1 - 2p)√n) ≤ e^{-(1-2p)² n/2}

Summing up,

∑_{n=1}^∞ P^n_{00} ≤ ∑_{n=1}^∞ P[X_n ≤ 0] ≤ ∑_{n=1}^∞ e^{-(1-2p)² n/2} < ∞

so state 0 is transient. Since all states communicate, all states are transient.

54 Messages

States of a MC can be transient or recurrent. A MC can be partitioned into classes of states reachable from each other, and the elements of a class are either all recurrent or all transient. A MC with only one class is irreducible; if it is not irreducible, it can be separated into irreducible components.

55 Limiting distributions

56 Limiting distributions

MCs have one-step memory: eventually they forget the initial state. What can we say about the probabilities for large n?

π_j := lim_{n→∞} P[X_n = j | X_0 = i] = lim_{n→∞} P^n_{ij}

We have implicitly assumed that the limit is independent of the initial state X_0 = i. We have seen that this problem is related to the matrix power P^n. For the Happy-Sad example,

P = [ 0.8  0.2        P^2 = [ 0.70  0.30
      0.3  0.7 ]              0.45  0.55 ]

P^7 ≈ [ 0.6031  0.3969        P^30 ≈ [ 0.6000  0.4000
        0.5953  0.4047 ]               0.6000  0.4000 ]

The matrix powers converge: the probabilities become independent of time for large n. All rows are equal: the probabilities are independent of the initial condition.

57 Periodicity

The period of state i is defined as (ḋ denotes the set of multiples of d)

d = max{ d : P^n_{ii} = 0 for all n ∉ ḋ }

State i is periodic with period d if and only if P^n_{ii} ≠ 0 only when n is a multiple of d (n ∈ ḋ), and d is the largest number with this property. There is positive probability of returning to i only every d time steps. If the period is d = 1 the state is aperiodic (most often the case). Periodicity is a class property. (Figure: a three-state example traversed with probabilities p and 1 - p, where state 1 has period 2; so do states 0 and 2, by the class property.) A one-dimensional random walk also has period 2.

58 Positive recurrence and ergodicity

Recall: state i is recurrent if the chain returns to i with probability 1; we proved this is equivalent to ∑_{n=1}^∞ P^n_{ii} = ∞. State i is positive recurrent when the expected value of the return time T_i is finite,

E[return time] = ∑_{n=1}^∞ n P[T_i = n] < ∞

and null recurrent if it is recurrent but E[return time] = ∞. Positive and null recurrence are class properties. Recurrent states of a finite-state MC are positive recurrent. Ergodic states are those that are positive recurrent and aperiodic. An irreducible MC with ergodic states is said to be an ergodic MC.

59 Example of a null recurrent MC

(Chain: from state 0 the chain always moves right; from state i ≥ 1 it advances with probability i/(i+1) and returns to 0 with probability 1/(i+1); the figure's edge labels are 1/2, 1/3, 1/4, ... for returns and 1/2, 2/3, 3/4, ... for advances.) The return times to the initial state satisfy

P[return time = 2] = 1/2
P[return time = 3] = (1/2)(1/3) = 1/6
P[return time = 4] = (1/2)(2/3)(1/4) = 1/12
...
P[return time = n] = 1/((n-1)n)

The state is recurrent because the probability of returning is 1 (use induction):

∑_{m=2}^n P[return time = m] = ∑_{m=2}^n 1/((m-1)m) = 1 - 1/n → 1

It is null recurrent because the expected return time is infinite:

∑_{n=2}^∞ n P[return time = n] = ∑_{n=2}^∞ n/((n-1)n) = ∑_{n=2}^∞ 1/(n-1) = ∞
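A quick numerical check of the two claims (plain Python; the cutoff N is an arbitrary choice for the sketch):

```python
# Return-time distribution from the slide: P[T = n] = 1/((n-1) n) for n >= 2
N = 10_000
prob = sum(1.0 / ((n - 1) * n) for n in range(2, N + 1))
mean = sum(n * (1.0 / ((n - 1) * n)) for n in range(2, N + 1))
print(prob)  # equals 1 - 1/N: tends to 1, so the state is recurrent
print(mean)  # equals sum of 1/(n-1): grows like log N, so E[return time] = infinity
```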

60 Limit distribution of ergodic Markov chains

Theorem: For an irreducible ergodic MC, lim_{n→∞} P^n_{ij} exists and is independent of the initial state i; that is,

π_j = lim_{n→∞} P^n_{ij} exists

Furthermore, the steady-state probabilities π_j ≥ 0 are the unique nonnegative solution of the system of linear equations

π_j = ∑_{i=0}^∞ π_i P_{ij},   ∑_{j=0}^∞ π_j = 1

As observed, limit probabilities independent of the initial condition exist, and simple algebraic equations can be solved to find the π_j. The assumptions rule out periodic states, transient states, multiple classes and null recurrent states.

61 Algebraic relation to determine limit probabilities

The difficult part of the theorem is to prove that π_j = lim_{n→∞} P^n_{ij} exists. To see that the algebraic relation is true, use the theorem of total probability (we omit the conditioning on X_0 to simplify notation):

P[X_{n+1} = j] = ∑_{i=1}^∞ P[X_{n+1} = j | X_n = i] P[X_n = i] = ∑_{i=1}^∞ P_{ij} P[X_n = i]

If the limit exists, then P[X_{n+1} = j] ≈ P[X_n = j] ≈ π_j for sufficiently large n, and therefore

π_j = ∑_{i=0}^∞ π_i P_{ij}

The other equation is true because the π_j are probabilities.

62 Vector/matrix notation: Matrix limit

The result is more compact and illuminating in vector/matrix notation. Consider a finite MC with J states. The first part of the theorem says that lim_{n→∞} P^n exists and

lim_{n→∞} P^n = [ π_1  π_2  ...  π_J
                  π_1  π_2  ...  π_J
                  ...
                  π_1  π_2  ...  π_J ]

All rows are the same: the probabilities are independent of the initial state. The probability distribution for large n is

lim_{n→∞} p(n) = lim_{n→∞} (P^n)^T p(0) = [π_1, π_2, ..., π_J]^T

independent of the initial condition.

63 Vector/matrix notation: Eigenvector

Define the stationary distribution vector π := [π_1, π_2, ..., π_J]^T. The limit distribution is the unique solution of (with 1 := [1, 1, ...]^T)

π = P^T π,   π^T 1 = 1

That is, π is an eigenvector associated with the eigenvalue 1 of P^T. Eigenvectors are defined up to a constant, so normalize π to sum 1. All other eigenvalues of P^T have modulus smaller than 1: if not, P^n would diverge, but we know P^n contains n-step transition probabilities. Hence π is the eigenvector associated with the largest eigenvalue of P^T. Computing π as an eigenvector is computationally efficient and robust in some problems.
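A sketch of the eigenvector computation (np.linalg.eig is the standard NumPy routine; picking the eigenvalue closest to 1 and normalizing is exactly the procedure described above; the Happy-Sad matrix stands in as the example):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution as the eigenvector of P^T for eigenvalue 1."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))  # eigenvalue closest to 1
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()                  # normalize so entries sum to 1

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(stationary_distribution(P))         # [0.6, 0.4]
```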

64 Vector/matrix notation: Rank

We can also write (I is the identity matrix, 0 := [0, 0, ...]^T)

(I - P^T) π = 0,   π^T 1 = 1

π has J elements, but there are J + 1 equations: the system looks overdetermined. However, since 1 is an eigenvalue of P^T, 0 is an eigenvalue of I - P^T, so I - P^T is rank deficient; in fact rank(I - P^T) = J - 1. There are then in fact only J independent equations. π is the eigenvector associated with the eigenvalue 0 of I - P^T; equivalently, π spans the null space of I - P^T (not much significance).

65 Example: Aperiodic, irreducible Markov chain

Consider a MC with a 3 × 3 transition probability matrix P. Does P correspond to an ergodic MC? All states communicate with state 2 (its row and column are full: P_{2j} ≠ 0 and P_{j2} ≠ 0 for all j). There are no transient states (the chain is irreducible, with one recurrent class, and finite). It is aperiodic (the period of state 2 is 1). Then there exist π_1, π_2 and π_3 such that π_j = lim_{n→∞} P^n_{ij}, with the limit independent of i.

66 Example: Aperiodic, irreducible MC (continued)

How do we determine the limit probabilities π_j? Solve the system of linear equations π_j = ∑_i π_i P_{ij} and ∑_j π_j = 1, stacked as

[ π_1, π_2, π_3, 1 ]^T = [ P^T ; 1^T ] [ π_1, π_2, π_3 ]^T

The upper part of the matrix above is P^T and the bottom row is 1^T = [1, 1, 1]. There are three variables and four equations, so some equations must be linearly dependent. Indeed, summing the first three equations gives π_1 + π_2 + π_3 = π_1 + π_2 + π_3, which is always true because the rows of P sum up to 1; this is the rank deficiency of I - P^T. Solving the system yields the numerical values of π_1, π_2 and π_3.

67 Stationary distribution

Limit distributions are sometimes called stationary distributions. Select the initial distribution so that P[X_0 = i] = π_i for all i. The probabilities at time n = 1 follow from the theorem of total probability:

P[X_1 = j] = ∑_{i=1}^∞ P[X_1 = j | X_0 = i] P[X_0 = i]

Using the definition of P_{ij}, the initialization P[X_0 = i] = π_i, and the algebraic property of the π_j,

P[X_1 = j] = ∑_{i=1}^∞ P_{ij} π_i = π_j

The probability distribution stayed unchanged. Proceeding recursively, if the system is initialized with P[X_0 = i] = π_i, then the probability distribution is constant: P[X_n = i] = π_i for all n. The MC is stationary in a probabilistic sense (states change, probabilities do not).

68 Ergodicity

69 Ergodicity

Define T_i^{(n)} as the fraction of time spent in the i-th state up to time n:

T_i^{(n)} := (1/n) ∑_{m=1}^n I{X_m = i}

Compute the expected value of T_i^{(n)}:

E[T_i^{(n)}] = (1/n) ∑_{m=1}^n E[I{X_m = i}] = (1/n) ∑_{m=1}^n P[X_m = i]

As time n → ∞, the probabilities P[X_m = i] approach π_i. Then

lim_{n→∞} E[T_i^{(n)}] = lim_{n→∞} (1/n) ∑_{m=1}^n P[X_m = i] = π_i

For ergodic MCs the same is true without the expected value; this is ergodicity:

lim_{n→∞} T_i^{(n)} = lim_{n→∞} (1/n) ∑_{m=1}^n I{X_m = i} = π_i,   almost surely

70 Example: Ergodic Markov chain

Recall the transition probability matrix P of the three-state example. (Figure: number of visits to each state, nT_i^{(n)}, and ergodic averages, T_i^{(n)}, as functions of time for states 1, 2, 3.) The ergodic averages slowly converge to the limit probabilities.
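A sketch of ergodic averages along a single sample path (the 2-state Happy-Sad chain stands in for the example's 3-state matrix, whose entries are not reproduced in these notes):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
rng = np.random.default_rng(2)

n, x = 100_000, 0
counts = np.zeros(len(P))
for _ in range(n):
    x = rng.choice(len(P), p=P[x])  # one step of a single realization
    counts[x] += 1
print(counts / n)                   # T_i^(n): approaches pi = [0.6, 0.4]
```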

71 Function's ergodic average

The use of ergodic averages is more general than T_i^{(n)}.

Theorem: Consider an irreducible Markov chain with states X_n = 0, 1, 2, ... and stationary probabilities π_j. Let f(X_n) be a bounded function of the state X_n. Then, with probability 1,

lim_{n→∞} (1/n) ∑_{m=1}^n f(X_m) = ∑_{i=1}^∞ f(i) π_i

T_i^{(n)} is the particular case with f(X_m) = I{X_m = i}. Think of f(X_m) as a reward associated with state X_m; then (1/n) ∑_{m=1}^n f(X_m) is the time average of the rewards.

72 Function's ergodic average (proof)

Proof: Because I{X_m = i} = 1 if and only if X_m = i, we can write

(1/n) ∑_{m=1}^n f(X_m) = (1/n) ∑_{m=1}^n ∑_{i=1}^∞ f(i) I{X_m = i}

Change the order of the summations and use the definition of T_i^{(n)}:

(1/n) ∑_{m=1}^n f(X_m) = ∑_{i=1}^∞ f(i) ( (1/n) ∑_{m=1}^n I{X_m = i} ) = ∑_{i=1}^∞ f(i) T_i^{(n)}

Let n → ∞ on both sides and use the ergodic average result lim_{n→∞} T_i^{(n)} = π_i [cf. page 69]. ∎

73 Ensemble and ergodic averages

There is more depth to ergodic results than meets the eye.
Ensemble average, across different realizations of the MC:

E[f(X_n)] = ∑_{i=1}^∞ f(i) P[X_n = i] → ∑_{i=1}^∞ f(i) π_i

Ergodic average, across time for a single realization of the MC:

f̄^{(n)} = (1/n) ∑_{m=1}^n f(X_m)

These quantities are fundamentally different, but their values coincide asymptotically in n. Observing one realization of the MC provides as much information as observing all realizations. Practical consequence: observe/simulate only one path of the MC.

74 Ergodicity in periodic MCs

In some sense, ergodicity is still true if the MC is periodic. For an irreducible positive recurrent MC (periodic or aperiodic) define

π_j = ∑_{i=0}^∞ π_i P_{ij},   ∑_{j=0}^∞ π_j = 1

A unique solution exists (we say the π_j are well defined). The fraction of time spent in state i converges to π_i:

lim_{n→∞} T_i^{(n)} = lim_{n→∞} (1/n) ∑_{m=1}^n I{X_m = i} = π_i

If the MC is periodic the probabilities oscillate, but the fraction of time spent in state i still converges to π_i.

75 Example: Periodic irreducible Markov chain

Consider the matrix P and state diagram of a periodic MC. The MC has period 2: if initialized with X_0 = 1, then

P^{2n+1}_{11} = P[X_{2n+1} = 1 | X_0 = 1] = 0,   P^{2n}_{11} = P[X_{2n} = 1 | X_0 = 1] = 1 ≠ 0

Define π := [π_1, π_2, π_3]^T as the solution of

π = P^T π,   π^T 1 = 1

i.e., the normalized eigenvector of P^T for the eigenvalue 1.

76 Example: Periodic irreducible MC (continued)

The solution yields π_1 = 0.15, π_2 = 0.50 and π_3 = 0.35. (Figure: visits to states, nT_i^{(n)}, and ergodic averages, T_i^{(n)}, vs. time for the three states.) The ergodic averages T_i^{(n)} converge to the ergodic limits π_i.

77 Example: Periodic irreducible MC (continued)

Powers of the transition probability matrix do not converge: in this example,

P^{2n} = P^2 and P^{2n+1} = P^3 = P for all n

At least one eigenvalue of the transition probability matrix, besides 1, has modulus 1: |eig_2(P^T)| = 1. In this example, the eigenvalues of P^T are 1, -1 and 0.

78 Reducible Markov chains

If a MC is not irreducible, it can be decomposed into transient (T_k), ergodic (E_k), periodic (P_k) and null recurrent (N_k) components; all of these are class properties. Limit probabilities for transient states are null:

P[X_n = i] → 0,   for all i ∈ T_k

For an arbitrary ergodic component E_k, define the conditional limits

π_i = lim_{n→∞} P[X_n = i | X_0 ∈ E_k],   i ∈ E_k

The result on page 60 is true with this (re)defined π_i.

79 Reducible Markov chains (continued)

Likewise, for an arbitrary periodic component P_k, (re)define the π_j as

π_j = ∑_{i ∈ P_k} π_i P_{ij},  j ∈ P_k,   ∑_{j ∈ P_k} π_j = 1

A conditional version of the result on page 74 is true:

lim_{n→∞} T_i^{(n)} := lim_{n→∞} (1/n) ∑_{m=1}^n I{X_m = i} → π_i,   given X_0 ∈ P_k

For null recurrent components the limit probabilities are null:

P[X_n = i] → 0,   for all i ∈ N_k

80 Example: Reducible Markov chain

Consider the transition matrix and state diagram of a reducible MC with five states. States 1 and 2 are transient: T = {1, 2}. States 3 and 4 form an ergodic class E_1 = {3, 4}. State 5 is a separate ergodic class E_2 = {5}.

81 Example: Reducible MC - matrix powers

Consider the multi-step transition probabilities P^5 and P^10. Transitions into the transient states vanish (columns 1 and 2). Transitions from 3 and 4 go into 3 and 4 only: if initialized in the ergodic class E_1 = {3, 4}, the chain stays in E_1. Transitions from 5 go only into 5. From the transient states T = {1, 2} the chain can go into either ergodic component, E_1 = {3, 4} or E_2 = {5}.

82 Example: Reducible MC - matrix decomposition

The matrix P can be separated in blocks:

P = [ P_T   P_{T E_1}   P_{T E_2}
      0     P_{E_1}     0
      0     0           P_{E_2} ]

The block P_T describes transitions between transient states, the blocks P_{E_1} and P_{E_2} describe transitions within the ergodic components, and the blocks P_{T E_1} and P_{T E_2} describe transitions from T into E_1 and E_2. Powers of P can be written as

P^n = [ P_T^n   Q_{T E_1}   Q_{T E_2}
        0       P_{E_1}^n   0
        0       0           P_{E_2}^n ]

The transient transition block converges to 0: lim_{n→∞} P_T^n = 0.

83 Example: Reducible MC - limiting behavior

As n grows, the MC hits an ergodic state with probability 1 and henceforth stays within that ergodic component:

P[X_{n+m} ∈ E_i | X_n ∈ E_i] = 1,   for all m

For large n it suffices to study the ergodic components: the MC behaves like a MC with transition probabilities P_{E_1}, or like one with transition probabilities P_{E_2}. We can therefore think of all MCs as ergodic. The ergodic behavior cannot be inferred a priori (before observing); it becomes known a posteriori (after observing a sufficiently large time). Culture micro: something is known a priori if its knowledge is independent of experience (MCs exhibit ergodic behavior); a posteriori knowledge depends on experience (the values of the ergodic limits). They are inherently different forms of knowledge (search for Immanuel Kant).

84 Queues in communication networks: limit probabilities

85 Non concurrent communication queue

A communication system moves packets from source to destination; between arrival and transmission, packets are held in a memory buffer. Example problem, buffer design: packets arrive at a rate of 0.45 packets per unit of time and depart at a rate of 0.55. How many packets does the buffer need to hold to have a drop rate smaller than 10⁻⁶ (one packet dropped for every million packets handled)? Time is slotted in intervals of duration Δt. During each time slot n, a packet arrives with probability λ (arrival rate λ/Δt) and a packet is transmitted with probability μ (departure rate μ/Δt). No concurrence: no simultaneous arrival and departure (small Δt).

86 Queue evolution probabilities (reminder)

Future queue lengths depend on the current length only. Probability of the queue length increasing:

P[q_{n+1} = i+1 | q_n = i] = λ,   for all i

The queue length might decrease only if q_n > 0, with probability

P[q_{n+1} = i-1 | q_n = i] = μ,   for all i > 0

The queue length stays the same if it neither increases nor decreases:

P[q_{n+1} = i | q_n = i] = 1 - λ - μ,   for all i > 0
P[q_{n+1} = 0 | q_n = 0] = 1 - λ

The absence of departures when q_n = 0 explains the second equation.

87 Queue as a Markov chain (reminder)

MC with states 0, 1, 2, ...; identify the states with queue lengths. Transition probabilities for i ≠ 0 are

P_{i,i+1} = λ,   P_{i,i} = 1 - λ - μ,   P_{i,i-1} = μ

and for i = 0, P_{00} = 1 - λ and P_{01} = λ.

88 Numerical example: Limit probabilities

Build the matrix P truncating at a maximum queue length L = 100, with arrival rate λ = 0.3 and departure rate μ = 0.33. Find the eigenvector of P^T associated with the largest eigenvalue (i.e., 1); this yields the limit probabilities π = lim_{n→∞} p(n). (Figure: limiting probabilities vs. state, in linear and logarithmic scale.) The limit probabilities appear linear in logarithmic scale, seemingly implying an exponential expression π_i ∝ α^i.

89 Limit distribution equations

The limit distribution equation for state 0 (empty queue) is

π_0 = (1 - λ)π_0 + μπ_1

and for the remaining states

π_i = λπ_{i-1} + (1 - λ - μ)π_i + μπ_{i+1}

Propose the candidate solution π_i = cα^i.

90 Verification of candidate solution

Substitute the candidate solution π_i = cα^i in the equation for π_0:

cα⁰ = (1 - λ)cα⁰ + μcα¹   ⟹   1 = (1 - λ) + μα

The above equation is true if we make α = λ/μ. Does α = λ/μ verify the remaining equations? From the equation for a generic π_i (divide by cα^{i-1}):

cα^i = λcα^{i-1} + (1 - λ - μ)cα^i + μcα^{i+1}   ⟹   μα² - (λ + μ)α + λ = 0

The quadratic equation is satisfied by α = λ/μ, and by α = 1, which is irrelevant.

91 Compute normalization constant

Determine c so that the probabilities sum to 1 (∑_{i=0}^∞ π_i = 1). Using the geometric sum,

∑_{i=0}^∞ π_i = ∑_{i=0}^∞ c(λ/μ)^i = c/(1 - λ/μ) = 1

Solving for c and substituting into π_i = cα^i yields

π_i = (1 - λ/μ)(λ/μ)^i

The ratio μ/λ is the queue's stability margin: the larger μ/λ, the larger the probability of having few queued packets.
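The closed form can be checked against the eigenvector computation of page 88 (a sketch; the truncation at i = 100 is an illustrative choice):

```python
import numpy as np

lam, mu = 0.30, 0.33
a = lam / mu

i = np.arange(101)
pi = (1 - a) * a**i          # pi_i = (1 - lam/mu) (lam/mu)^i
print(pi[0])                 # ~0.0909: probability of an empty queue
print(pi.sum())              # ~1: the tail beyond i = 100 is negligible here
```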

92 Queue balance equations

Rearrange terms in the equations for the limit probabilities [cf. page 89]:

λπ_0 = μπ_1,   (λ + μ)π_i = λπ_{i-1} + μπ_{i+1}

λπ_0 is the average rate at which the queue leaves state 0, and likewise (λ + μ)π_i is the rate at which the queue leaves state i. μπ_1 is the average rate at which the queue enters state 0, and λπ_{i-1} + μπ_{i+1} is the rate at which it enters state i. The limit equations thus prove the validity of the queue balance equations:

rate at which the chain leaves a state = rate at which it enters that state

93 Concurrent arrivals and departures

Packets may arrive and depart in the same time slot (concurrence). The queue evolution equations remain the same [cf. page 35], but the queue probabilities change [cf. page 86]. Probability of the queue length increasing (for all i):

P[q_{n+1} = i+1 | q_n = i] = P[A_n = 1] P[D_n = 0] = λ(1 - μ)

The queue length might decrease only if q_n > 0 (for all i > 0):

P[q_{n+1} = i-1 | q_n = i] = P[D_n = 1] P[A_n = 0] = μ(1 - λ)

The queue length stays the same if it neither increases nor decreases:

P[q_{n+1} = i | q_n = i] = λμ + (1 - λ)(1 - μ) =: ν,   for all i > 0
P[q_{n+1} = 0 | q_n = 0] = (1 - λ) + λμ

94 Limit distribution / queue balance equations

Write the limit distribution equations as queue balance equations (rate at which the chain leaves = rate at which it enters):

λ(1 - μ)π_0 = μ(1 - λ)π_1
( λ(1 - μ) + μ(1 - λ) ) π_i = λ(1 - μ)π_{i-1} + μ(1 - λ)π_{i+1}

Propose the exponential solution π_i = cα^i.

95 Solving for limit distribution

Substitute the candidate solution in the equation for π_0:

λ(1 - μ)c = μ(1 - λ)cα   ⟹   α = λ(1 - μ) / (μ(1 - λ))

The same substitution in the equation for a generic π_i gives

μ(1 - λ)cα² - ( λ(1 - μ) + μ(1 - λ) )cα + λ(1 - μ)c = 0

which as before is solved by α = λ(1 - μ)/(μ(1 - λ)). Find the constant c to ensure ∑_{i=0}^∞ cα^i = 1 (geometric series). This yields

π_i = (1 - α)α^i = ( 1 - λ(1 - μ)/(μ(1 - λ)) ) ( λ(1 - μ)/(μ(1 - λ)) )^i

96 Limited queue size

Packets are dropped if there are too many packets in the queue: with too many queued packets, delays become too large and packets are useless when they arrive; limiting the queue also preserves memory. The equation for state J requires modification (rate leaves = rate enters):

μ(1 - λ)π_J = λ(1 - μ)π_{J-1}

π_i = cα^i with α = λ(1 - μ)/(μ(1 - λ)) also solves this equation (yes!).

97 Compute limit distribution

The limit probabilities are not the same as for the infinite queue because the constant c is different. To compute c, sum a finite geometric series:

1 = ∑_{i=0}^J cα^i = c (1 - α^{J+1})/(1 - α)   ⟹   c = (1 - α)/(1 - α^{J+1})

The limit distribution for the finite queue is then

π_i = (1 - α)/(1 - α^{J+1}) · α^i ≈ (1 - α)α^i

with α = λ(1 - μ)/(μ(1 - λ)), and the approximation valid for large J. The approximation for large J yields the same result as the infinite-length queue, as it should.

98 Simulations

Arrival rate λ = 0.3 and departure rate μ = 0.33, resulting in α ≈ 0.87. Maximum queue length J = 100; initial state q_0 = 0 (queue empty), which is not the same as specifying an initial probability distribution. (Figure: queue length as a function of time.)

99 Simulations: Average occupancy and limit distribution

The average time spent at each queue state is predicted by the limit distribution. For i = 60 the occupancy probability π_60 is on the order of 10⁻⁵, which explains the inaccurate prediction for large i: such states are rarely visited in a simulation of finite length. (Figure: limit probabilities and ergodic averages vs. state, shown for 60 states and for 20 states.)

100 Buffer overflow

If λ = 0.45 and μ = 0.55, how many packets does the buffer need to hold to have a drop rate smaller than 10⁻⁶ (one packet dropped for every million packets handled)? What is the probability of buffer overflow? A packet is discarded if the queue is in state J and a new packet arrives:

P[overflow] = λπ_J = λ (1 - α)/(1 - α^{J+1}) α^J ≈ λ(1 - α)α^J

With λ = 0.45 and μ = 0.55, α ≈ 0.82 and J ≈ 57 suffices. A final caveat: we are still assuming that only 1 packet arrives per time slot; lifting this assumption requires the introduction of continuous-time Markov processes.
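A sketch of the buffer-sizing computation; taking the slide's α ≈ 0.82 to be the non-concurrent bias α = λ/μ is an assumption of the sketch:

```python
lam, mu, target = 0.45, 0.55, 1e-6
a = lam / mu                               # ~0.82, matching the slide's alpha

J = 0
while lam * (1 - a) * a**J > target:       # smallest J meeting the drop rate
    J += 1
print(J)                                   # ~57, matching the slide
```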


More information

Question Points Score Total: 70

Question Points Score Total: 70 The University of British Columbia Final Examination - April 204 Mathematics 303 Dr. D. Brydges Time: 2.5 hours Last Name First Signature Student Number Special Instructions: Closed book exam, no calculators.

More information

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1 Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

1.3 Convergence of Regular Markov Chains

1.3 Convergence of Regular Markov Chains Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information

Lectures on Probability and Statistical Models

Lectures on Probability and Statistical Models Lectures on Probability and Statistical Models Phil Pollett Professor of Mathematics The University of Queensland c These materials can be used for any educational purpose provided they are are not altered

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Autor: Anna Areny Satorra Director: Dr. David Márquez Carreras Realitzat a: Departament de probabilitat,

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

Discrete Time Markov Chain (DTMC)

Discrete Time Markov Chain (DTMC) Discrete Time Markov Chain (DTMC) John Boccio February 3, 204 Sources Taylor & Karlin, An Introduction to Stochastic Modeling, 3rd edition. Chapters 3-4. Ross, Introduction to Probability Models, 8th edition,

More information

Markov chains (week 6) Solutions

Markov chains (week 6) Solutions Markov chains (week 6) Solutions 1 Ranking of nodes in graphs. A Markov chain model. The stochastic process of agent visits A N is a Markov chain (MC). Explain. The stochastic process of agent visits A

More information

STAT 380 Continuous Time Markov Chains

STAT 380 Continuous Time Markov Chains STAT 380 Continuous Time Markov Chains Richard Lockhart Simon Fraser University Spring 2018 Richard Lockhart (Simon Fraser University)STAT 380 Continuous Time Markov Chains Spring 2018 1 / 35 Continuous

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling Norwegian University of Science and Technology Department of Mathematical Sciences Page of 7 English Contact during examination: Øyvind Bakke Telephone: 73 9 8 26, 99 4 673 TMA426 Stochastic processes

More information

M3/4/5 S4 Applied Probability

M3/4/5 S4 Applied Probability M3/4/5 S4 Applied Probability Autumn 215 Badr Missaoui Room 545 Huxley Building Imperial College London E-Mail: badr.missaoui8@imperial.ac.uk Department of Mathematics Imperial College London 18 Queens

More information

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India Markov Chains INDER K RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400076, India email: ikrana@iitbacin Abstract These notes were originally prepared for a College

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

RANDOM WALKS IN ONE DIMENSION

RANDOM WALKS IN ONE DIMENSION RANDOM WALKS IN ONE DIMENSION STEVEN P. LALLEY 1. THE GAMBLER S RUIN PROBLEM 1.1. Statement of the problem. I have A dollars; my colleague Xinyi has B dollars. A cup of coffee at the Sacred Grounds in

More information

Lecture 4 - Random walk, ruin problems and random processes

Lecture 4 - Random walk, ruin problems and random processes Lecture 4 - Random walk, ruin problems and random processes Jan Bouda FI MU April 19, 2009 Jan Bouda (FI MU) Lecture 4 - Random walk, ruin problems and random processesapril 19, 2009 1 / 30 Part I Random

More information

Markov Processes Cont d. Kolmogorov Differential Equations

Markov Processes Cont d. Kolmogorov Differential Equations Markov Processes Cont d Kolmogorov Differential Equations The Kolmogorov Differential Equations characterize the transition functions {P ij (t)} of a Markov process. The time-dependent behavior of the

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Introduction to Queuing Networks Solutions to Problem Sheet 3

Introduction to Queuing Networks Solutions to Problem Sheet 3 Introduction to Queuing Networks Solutions to Problem Sheet 3 1. (a) The state space is the whole numbers {, 1, 2,...}. The transition rates are q i,i+1 λ for all i and q i, for all i 1 since, when a bus

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

Chapter 11 Advanced Topic Stochastic Processes

Chapter 11 Advanced Topic Stochastic Processes Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

Lesson Plan. AM 121: Introduction to Optimization Models and Methods. Lecture 17: Markov Chains. Yiling Chen SEAS. Stochastic process Markov Chains

Lesson Plan. AM 121: Introduction to Optimization Models and Methods. Lecture 17: Markov Chains. Yiling Chen SEAS. Stochastic process Markov Chains AM : Introduction to Optimization Models and Methods Lecture 7: Markov Chains Yiling Chen SEAS Lesson Plan Stochastic process Markov Chains n-step probabilities Communicating states, irreducibility Recurrent

More information

An Introduction to Entropy and Subshifts of. Finite Type

An Introduction to Entropy and Subshifts of. Finite Type An Introduction to Entropy and Subshifts of Finite Type Abby Pekoske Department of Mathematics Oregon State University pekoskea@math.oregonstate.edu August 4, 2015 Abstract This work gives an overview

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

Notes on Measure Theory and Markov Processes

Notes on Measure Theory and Markov Processes Notes on Measure Theory and Markov Processes Diego Daruich March 28, 2014 1 Preliminaries 1.1 Motivation The objective of these notes will be to develop tools from measure theory and probability to allow

More information

Markov Chains. Sarah Filippi Department of Statistics TA: Luke Kelly

Markov Chains. Sarah Filippi Department of Statistics  TA: Luke Kelly Markov Chains Sarah Filippi Department of Statistics http://www.stats.ox.ac.uk/~filippi TA: Luke Kelly With grateful acknowledgements to Prof. Yee Whye Teh's slides from 2013 14. Schedule 09:30-10:30 Lecture:

More information

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature. Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.

More information

2 DISCRETE-TIME MARKOV CHAINS

2 DISCRETE-TIME MARKOV CHAINS 1 2 DISCRETE-TIME MARKOV CHAINS 21 FUNDAMENTAL DEFINITIONS AND PROPERTIES From now on we will consider processes with a countable or finite state space S {0, 1, 2, } Definition 1 A discrete-time discrete-state

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

ECE-517: Reinforcement Learning in Artificial Intelligence. Lecture 4: Discrete-Time Markov Chains

ECE-517: Reinforcement Learning in Artificial Intelligence. Lecture 4: Discrete-Time Markov Chains ECE-517: Reinforcement Learning in Artificial Intelligence Lecture 4: Discrete-Time Markov Chains September 1, 215 Dr. Itamar Arel College of Engineering Department of Electrical Engineering & Computer

More information