ISE/OR 760 Applied Stochastic Modeling


1 ISE/OR 760 Applied Stochastic Modeling. Topic 2: Discrete Time Markov Chain. Yunan Liu, Department of Industrial and Systems Engineering, NC State University. Yunan Liu (NC State University) ISE/OR / 60

2 Outline: Examples; Definitions (Markov property, transition probabilities); Transient behavior (Chapman-Kolmogorov equation, Gambler's ruin problem); State classification (communication, periodicity, recurrence and transience); Steady state of a good DTMC (balance equation πP = π); DTMC having absorbing states (canonical form); Time reversibility; Random walks on weighted graphs.

3 Examples: 1 Economics (asset pricing, market crashes); 2 Chemistry (enzyme activity, growth of certain compounds); 3 Internet applications (Google PageRank); 4 Physics; 5 Information sciences; 6 Sports, etc. Figure: A DTMC modeling a remote medical assistance system.

4 An Example: PageRank Algorithm. Application: best-known search engine algorithm. Idea: count the number of links to a webpage to estimate its importance. Developer: Larry Page.

5 Definition and Transient Performance

6 Markov Processes. Figure: Andrei Markov (1856-1922). Markov Property: given the present state, a forecast of the future is independent of the past: P(Future | Present, Past) = P(Future | Present).

7 Markov Processes. A process that satisfies the Markov property is called a Markov process. The state space may be discrete or continuous, and time may be discrete or continuous. With a discrete state space: discrete time gives a DTMC (e.g., random walks); continuous time gives a CTMC. With a continuous state space one gets processes such as Brownian motion. This class focuses on discrete-space Markov processes (DTMC and CTMC). Examples of discrete state spaces: number of customers in queue; inventory level; location (with discrete elements). Examples of continuous state spaces: temperature level; geographic location; amount of time left in process.

8 Definition. Definition (DTMC): A discrete-time, discrete-space stochastic process {X_n, n = 0, 1, ...} is called a DTMC if P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i). Markov property: given the present (today), the future (tomorrow) is independent of the past (yesterday). Example: Tomorrow will be sunny with an 80% chance if today is sunny, or with a 40% chance if today is rainy. Set up a DTMC for weather forecasting.
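The weather example above can be written down directly as a 2-by-2 TPM. A minimal sketch: the probabilities come from the slide, while the state labels and the `one_step` helper (the vector update mu_{n+1} = mu_n P) are our additions.

```python
# Weather DTMC from the example: state 0 = sunny, state 1 = rainy.
# P[i][j] = P(X_{n+1} = j | X_n = i).
P = [
    [0.8, 0.2],  # today sunny -> tomorrow sunny w.p. 0.8, rainy w.p. 0.2
    [0.4, 0.6],  # today rainy -> tomorrow sunny w.p. 0.4, rainy w.p. 0.6
]

def one_step(mu, P):
    """Propagate a distribution one step: mu_{n+1} = mu_n P."""
    m = len(P)
    return [sum(mu[i] * P[i][j] for i in range(m)) for j in range(m)]

mu0 = [1.0, 0.0]         # start on a sunny day
mu1 = one_step(mu0, P)   # distribution of tomorrow's weather
```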

9 Transition Probability Matrix (TPM). Let P be a transition probability matrix (TPM) with elements P_{i,j} = P(X_{n+1} = j | X_n = i). Properties of a TPM P: Nonnegativity: P_{i,j} >= 0; Row sum: sum_{j in S} P_{i,j} = 1 for every i. Remark: a DTMC may in general be time-nonhomogeneous, i.e., P_{i,j}(n) = P(X_{n+1} = j | X_n = i) may depend on n. In this class, we focus on time-homogeneous DTMCs.

10 Dynamics of DTMC. {X_n, n >= 0} evolves in a Markovian manner: the chain stays in a state for 1 time unit; when in state i, X_n moves to state j w.p. P_{i,j}, i, j in S. Figure: Sample path of a DTMC with state space S = {1, 2, 3, 4, 5, 6}.

11 Questions of Interest. 1 Transient performance (short-term behavior): transition probabilities shortly after the process begins, e.g., P(X_1 = j | X_0 = i) = ? P(X_1 = i, X_4 = j, X_7 = k) = ? 2 Steady-state performance (long-run behavior): asymptotic performance after many transitions, e.g., P(X_100 = j | X_0 = i); long-run proportion of time (steps): for N large, (1/N) sum_{n=1}^N 1{X_n = j}. 3 Termination: probability of which state the process will terminate in; expected number of transitions until termination.

12 Transient Behavior

13 Chapman-Kolmogorov Equation. Define the n-step transition probability from i to j: P^{(n)}_{i,j} = P(X_n = j | X_0 = i). The n-step TPM P^{(n)} has elements P^{(n)}_{i,j}; note P^{(1)}_{i,j} = P_{i,j}. Review (conditioning and the law of total probability): P(A) = sum_k P(A | B_k) P(B_k); P(A ∩ B) = P(A | B) P(B).

14 Derivation of C-K Equation. (P^{(m+n)})_{i,j} = P^{(m+n)}_{i,j} = P(X_{m+n} = j | X_0 = i) = sum_k P(X_{m+n} = j | X_m = k, X_0 = i) P(X_m = k | X_0 = i) = sum_k P(X_{m+n} = j | X_m = k) P(X_m = k | X_0 = i) = sum_k P^{(m)}_{i,k} P^{(n)}_{k,j} = (P^{(m)} P^{(n)})_{i,j}. Chapman-Kolmogorov (C-K) equation: P^{(m+n)} = P^{(m)} P^{(n)} = P^{(n)} P^{(m)} (in matrix notation). The C-K equation relates the (m+n)-step transition matrix to the product of the two smaller-step transition matrices of step sizes m and n.

15 Derivation of C-K Equation. A quick implication of the C-K equation: P^{(n)} = P^{(n-1)} P^{(1)} = P^{(n-1)} P = P^{(n-2)} P P = P^{(n-3)} P P P = ... = P P ... P (n factors). Summary: the n-step TPM is the n-th power of the 1-step TPM: P^{(n)} = P^n.
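The identity P^{(m+n)} = P^{(m)} P^{(n)} (hence P^{(n)} = P^n) can be checked numerically. A sketch with pure-Python matrix arithmetic; the 2-state matrix is an arbitrary illustration, not from a specific slide example.

```python
# Verify the C-K consequence P^{(m+n)} = P^{(m)} P^{(n)} on a 2-state TPM.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def mat_pow(P, n):
    """n-th power of P by repeated multiplication (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

P = [[0.8, 0.2], [0.4, 0.6]]                  # illustrative TPM
lhs = mat_pow(P, 5)                           # P^{(5)}
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))   # P^{(2)} P^{(3)}
```

The two 5-step matrices agree entrywise, and each power is again a stochastic matrix (rows sum to 1).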

16 Finite-Dimensional Distribution (F.D.D.). For 0 <= n_1 <= n_2 <= ... <= n_m and i_1, i_2, ..., i_m in S, consider P(X_{n_1} = i_1, X_{n_2} = i_2, ..., X_{n_m} = i_m). Example: P(X_10 = 1, X_7 = 5, X_3 = 10) = P(A ∩ B ∩ C) = P(A | B ∩ C) P(B | C) P(C) = P(X_10 = 1 | X_7 = 5, X_3 = 10) P(X_7 = 5 | X_3 = 10) P(X_3 = 10) = P(X_10 = 1 | X_7 = 5) P(X_7 = 5 | X_3 = 10) P(X_3 = 10) = P^{(3)}_{5,1} P^{(4)}_{10,5} sum_{k in S} P(X_3 = 10 | X_0 = k) P(X_0 = k) = (P^3)_{5,1} (P^4)_{10,5} sum_{k in S} (P^3)_{k,10} P(X_0 = k). We only need the distribution of the initial state, P(X_0 = k), k in S, and the 1-step TPM P.

17 An Example: Pooh Bear. A bear of little brain named Pooh is fond of honey. Bees producing honey are located in three trees: tree A, tree B, and tree C. Tending to be somewhat forgetful, Pooh goes back and forth among these three honey trees randomly: from A, Pooh goes next to B or C with probability 1/2 each; from B, Pooh goes next to A w.p. 3/4 and to C w.p. 1/4; from C, Pooh always goes next to A. Construct a DTMC and specify its TPM.
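The Pooh-bear TPM can be read directly off the problem statement. A sketch (the state ordering A, B, C is our choice), including a hand-computable 2-step probability as a sanity check:

```python
# Pooh's TPM with states ordered (A, B, C), read off the problem statement.
P = [
    [0.0,  0.5,  0.5 ],  # from A: to B or C w.p. 1/2 each
    [0.75, 0.0,  0.25],  # from B: to A w.p. 3/4, to C w.p. 1/4
    [1.0,  0.0,  0.0 ],  # from C: always to A
]

# Two-step return probability to A: (P^2)_{A,A} = sum_k P_{A,k} P_{k,A}
#   = 0.5 * 0.75 (via B) + 0.5 * 1.0 (via C) = 0.875.
p2_AA = sum(P[0][k] * P[k][0] for k in range(3))
```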

18 State Classification

19 Communication. Consider two states i, j in S. Accessibility, denoted i → j: state j is accessible from state i if there exists n >= 0 s.t. P^{(n)}_{i,j} > 0. Communication, denoted i ↔ j: states i and j communicate if i → j and j → i. Properties: (i) i ↔ i (note that for n = 0, P^{(0)} = I); (ii) if i ↔ j, then j ↔ i; (iii) if i ↔ j and j ↔ k, then i ↔ k. Proof of (iii): i → j means there exists m s.t. P^{(m)}_{i,j} > 0; j → k means there exists n s.t. P^{(n)}_{j,k} > 0. Then P^{(m+n)}_{i,k} = sum_{l in S} P^{(m)}_{i,l} P^{(n)}_{l,k} >= P^{(m)}_{i,j} P^{(n)}_{j,k} > 0.

20 Irreducibility and Periodicity. 1 Class: we say i, j belong to the same class if i ↔ j. 2 Irreducible: we say a DTMC is irreducible if there is only one class. Periodicity: 1 The period of a state i, denoted d(i), is the greatest common divisor of all n >= 1 s.t. P^{(n)}_{i,i} > 0. 2 If d(i) = 1, we say i is aperiodic. Property of periodicity: periodicity (aperiodicity) is a class property; that is, if i ↔ j, then d(i) = d(j).

21 Periodicity. Proof: assume P^{(s)}_{i,i} > 0 for some s > 0.

22 Ever Hitting Probabilities. Define the probability of first visiting state j from state i in exactly n steps: f^{(n)}_{i,j} = P(X_n = j, X_k ≠ j for 1 <= k <= n-1 | X_0 = i) = P(the 1st transition to j occurs at step n, starting in i). Next define the probability that state j is ever visited after leaving state i: f_{i,j} = sum_{n=1}^∞ f^{(n)}_{i,j} = P(j is ever visited after leaving i). Note: f_{i,j} > 0 if i → j; f_{i,j} = 0 if i ↛ j.

23 State Classification. A state j of a DTMC is recurrent if f_{j,j} = 1 (starting in j, the chain always returns to j w.p. 1), and transient if f_{j,j} < 1 (starting in j, the chain ever returns to j w.p. f_{j,j} and never returns w.p. 1 - f_{j,j} > 0). Let N_{j,j} = total number of visits to j after leaving j. Then N_{j,j} = ∞ if j is recurrent, and N_{j,j} ~ Geo(1 - f_{j,j}) - 1 if j is transient. A necessary and sufficient condition: j is recurrent iff sum_{n=1}^∞ P^{(n)}_{j,j} = ∞; j is transient iff sum_{n=1}^∞ P^{(n)}_{j,j} < ∞. Proof: E[N_{j,j}] = E[sum_{n=1}^∞ 1(X_n = j) | X_0 = j] = sum_{n=1}^∞ P(X_n = j | X_0 = j) = sum_{n=1}^∞ P^{(n)}_{j,j}.

24 State Classification. Both transience and recurrence are class properties: if i ↔ j (i, j belong to the same class), then i is recurrent iff j is recurrent, and i is transient iff j is transient. Proof: i ↔ j implies P^{(n)}_{i,j} > 0 and P^{(m)}_{j,i} > 0 for some m, n >= 0. Because P^{(m+l+n)}_{j,j} >= P^{(m)}_{j,i} P^{(l)}_{i,i} P^{(n)}_{i,j}, we have sum_{k=1}^∞ P^{(k)}_{j,j} >= sum_{k>m+n} P^{(k)}_{j,j} = sum_{l=1}^∞ P^{(m+l+n)}_{j,j} >= sum_{l=1}^∞ P^{(m)}_{j,i} P^{(l)}_{i,i} P^{(n)}_{i,j} = P^{(m)}_{j,i} P^{(n)}_{i,j} sum_{l=1}^∞ P^{(l)}_{i,i}.

25 Gambler's Ruin Problem. A gambler at each round has probability p of winning one unit and probability q = 1 - p of losing one unit. The game ends when either the gambler's total fortune reaches level N > 0, or his fortune reaches 0 (the gambler is broke). Let X_0 be the initial fortune and define X_i, i >= 1, as the outcome of the i-th bet: X_i = +1 w.p. p, and X_i = -1 w.p. q = 1 - p. The gambler's fortune after the n-th bet: S_n = sum_{i=0}^n X_i. Question: Is {S_n, n >= 0} a DTMC? If yes, how many classes are there? Which are transient and which are recurrent? What are the periods?

26 Probability of Winning. Let P_i be the probability that the gambler reaches the target fortune N before going bankrupt, given a starting fortune of i: P_i = P(hits N before 0 | X_0 = i) = P(hits N before 0 | X_1 = +1, X_0 = i) p + P(hits N before 0 | X_1 = -1, X_0 = i) q = P_{i+1} p + P_{i-1} q. Since P_i = (p + q) P_i, rearranging gives P_{i+1} - P_i = γ (P_i - P_{i-1}), where γ = q/p. This is a recursion with boundary conditions P_0 = 0 and P_N = 1: P_2 - P_1 = γ(P_1 - P_0) = γ P_1; P_3 - P_2 = γ(P_2 - P_1) = γ^2 P_1; ...; P_i - P_{i-1} = γ(P_{i-1} - P_{i-2}) = γ^{i-1} P_1; ...; P_N - P_{N-1} = γ(P_{N-1} - P_{N-2}) = γ^{N-1} P_1.

27 Probability of Winning. Summing the first i - 1 difference equations yields P_i = [1 + (q/p) + ... + (q/p)^{i-1}] P_1, which equals (1 - (q/p)^i)/(1 - q/p) P_1 if p ≠ q, and i P_1 if p = q. Summing all N equations yields 1 = P_N = [1 + (q/p) + ... + (q/p)^{N-1}] P_1, which equals (1 - (q/p)^N)/(1 - q/p) P_1 if p ≠ q, and N P_1 if p = q. Finally, P_i = (1 - (q/p)^i)/(1 - (q/p)^N) if p ≠ q (unfair game), and P_i = i/N if p = q = 1/2 (fair game).
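The closed form above is easy to sanity-check in code: the computed values must satisfy the first-step recursion P_i = p P_{i+1} + q P_{i-1} and the boundary conditions. A sketch (the function name `win_prob` is ours):

```python
# Closed-form gambler's-ruin win probability from the derivation above.
def win_prob(i, N, p):
    """P(reach fortune N before 0 | start at i), win prob. p per round."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:      # fair game: P_i = i/N
        return i / N
    g = q / p                    # gamma = q/p
    return (1.0 - g**i) / (1.0 - g**N)

# Values for an unfair game; they should satisfy P_i = p P_{i+1} + q P_{i-1}.
N, p = 10, 0.4
P_vals = [win_prob(i, N, p) for i in range(N + 1)]
```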

28 State Classification (Continued). Denote T_{j,j} = total number of steps until the first return to j after leaving j. Definition: a recurrent state j is positive recurrent if E[T_{j,j}] < ∞, and null recurrent if E[T_{j,j}] = ∞. If a state is transient, then w.p. 1 - f_{j,j} > 0 the DTMC never returns. In summary, for a state j: E[T_{j,j}] = ∞ if j is transient; E[T_{j,j}] < ∞ if j is positive recurrent; E[T_{j,j}] = ∞ if j is null recurrent. Remark: a null recurrent state will always (w.p. 1) return to itself, but it takes nearly forever to do so...

29 An Example: One-Dimensional Simple Random Walks. Consider i.i.d. random variables X_1, X_2, ... with P(X_1 = 1) = p, P(X_1 = -1) = 1 - p, 0 < p < 1. Let S_0 = 0 and S_n = X_1 + ... + X_n, n >= 1. The process {S_n, n >= 0} is a simple symmetric (asymmetric) random walk if p = 1/2 (p ≠ 1/2). Questions: Is {S_n, n >= 0} a DTMC? If so, what is the state space? Is it irreducible? Classify the states (transient, null recurrent, positive recurrent).

30 Summary of State Classification. A state j is transient if f_{j,j} < 1, equivalently sum_{k=1}^∞ P^{(k)}_{j,j} < ∞; then N_{j,j} ~ Geo(1 - f_{j,j}) - 1. A state j is recurrent if f_{j,j} = 1, equivalently sum_{k=1}^∞ P^{(k)}_{j,j} = ∞; then N_{j,j} = ∞. A recurrent state is positive recurrent if E[T_{j,j}] < ∞, and null recurrent if E[T_{j,j}] = ∞. All of the following are class properties: 1 communication; 2 transience; 3 recurrence (null recurrence, positive recurrence); 4 periodicity and aperiodicity.

31 Long-Term Performance

32 Stationary Distribution. Definition (stationary distribution): a vector π = (π_1, π_2, π_3, ...) with π_i >= 0 and sum_{i in S} π_i = 1 is called the stationary distribution of a DTMC with TPM P if πP = π, i.e., sum_{i in S} π_i P_{i,j} = π_j for all j in S. Implication: if P(X_0 = j) = π_j for all j in S, then P(X_1 = j) = sum_{i in S} P(X_1 = j | X_0 = i) P(X_0 = i) = sum_{i in S} P_{i,j} π_i = π_j, P(X_2 = j) = sum_{i in S} P(X_2 = j | X_1 = i) P(X_1 = i) = sum_{i in S} P_{i,j} π_i = π_j, and by induction P(X_n = j) = π_j for all n >= 1. If a DTMC is initially in stationarity, it stays in stationarity for all time n.
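One simple way to find π numerically is to iterate mu ← mu P until the distribution stops changing (this converges for irreducible, aperiodic, positive recurrent chains). A sketch reusing the Pooh-bear TPM as an illustration; for that chain the balance equations can be solved by hand, giving π = (8/17, 4/17, 5/17).

```python
# Approximate the stationary distribution by iterating mu <- mu P.
# Chain: the Pooh-bear TPM with states ordered (A, B, C).
P = [
    [0.0,  0.5,  0.5 ],
    [0.75, 0.0,  0.25],
    [1.0,  0.0,  0.0 ],
]

def stationary(P, iters=2000):
    """Power iteration on the row-vector update mu_{n+1} = mu_n P."""
    m = len(P)
    mu = [1.0 / m] * m     # any initial distribution works for this chain
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(m)) for j in range(m)]
    return mu

pi = stationary(P)
```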

33 Theorem 1: Convergence to Steady State. Theorem (steady-state distribution): consider an (i) irreducible and (ii) aperiodic DTMC {X_n : n >= 0}. (i) If all states are transient or null recurrent, then P^{(n)}_{i,j} = P(X_n = j | X_0 = i) → 0 as n → ∞. (ii) If all states are positive recurrent, then P^{(n)}_{i,j} = P(X_n = j | X_0 = i) → π_j > 0 as n → ∞, where π = (π_1, π_2, ...) is the unique solution to the balance equation πP = π, sum_{i in S} π_i = 1. Equivalently, (X_n | X_0 = i) converges in distribution to X, where P(X = j) = π_j. Proof: omitted here; see Blue Ross.

34 Theorem 1: Convergence to Steady State. Remarks on Theorem 1: Steady state is defined through a limit as n → ∞. In case (i), there is no steady state; in case (ii), convergence to a limit occurs. Define: ergodicity = aperiodicity + irreducibility + positive recurrence. Given convergence in (ii), the limit must be π: π_j = lim_n P^{(n+1)}_{i,j} = lim_n sum_k P^{(n)}_{i,k} P_{k,j} = sum_k (lim_n P^{(n)}_{i,k}) P_{k,j} = sum_k π_k P_{k,j}. Claim: all states of a finite-state-space irreducible DTMC are positive recurrent. If a DTMC is good (irreducible and positive recurrent), then the balance equation has a unique solution (aperiodicity is not necessary).

35 Theorem 1: Implication. Corollary (existence of a good state in a finite DTMC): consider a DTMC with a finite state space. There must exist at least one positive recurrent state. Proof: to reach a contradiction, assume no state is positive recurrent, i.e., all states are either transient or null recurrent. By Theorem 1, lim_n P^{(n)}_{i,j} = 0 for all j in S = {1, 2, ..., N}. The row sum satisfies sum_{j=1}^N P^{(n)}_{i,j} = 1; taking n → ∞: 1 = lim_n sum_{j=1}^N P^{(n)}_{i,j} = sum_{j=1}^N lim_n P^{(n)}_{i,j} = sum_{j=1}^N 0 = 0. Contradiction! There must exist at least one positive recurrent state. Remark: for an irreducible, finite-state DTMC, all states are positive recurrent.

36 Theorem 2: Long-Run Proportion of Time. Theorem (long-run proportion of time): (i) For an irreducible, positive recurrent DTMC, the balance equation has a unique solution, and the long-run proportion of time the DTMC spends in a state k is π_k = lim_{N→∞} (1/N) sum_{n=1}^N 1(X_n = k). (ii) Let r(·) be a bounded (cost) function on S; then the long-run average cost is lim_{N→∞} (1/N) sum_{n=1}^N r(X_n) = sum_{j in S} r(j) π_j = sum_{j in S} r(j) P(X = j) = E[r(X)]. Remarks: (i) is a special case of (ii) with r(X) = 1(X = k). If r(X) = X, then (ii) becomes the long-run average of {X_n}: lim_{N→∞} (1/N) sum_{n=1}^N X_n = sum_{j in S} j π_j = E[X].
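Theorem 2(i) can be illustrated by simulation: empirical visit frequencies along one long path should approach π. A sketch on the 2-state weather chain (an illustration; solving its balance equation by hand gives π = (2/3, 1/3)); the seed and path length are arbitrary choices.

```python
import random

# Empirical long-run proportions of a simulated DTMC path.
P = [[0.8, 0.2], [0.4, 0.6]]   # weather chain: 0 = sunny, 1 = rainy
random.seed(42)

def simulate(P, steps):
    """Run one path of the chain and return visit frequencies."""
    m = len(P)
    counts = [0] * m
    x = 0                                              # start sunny
    for _ in range(steps):
        x = random.choices(range(m), weights=P[x])[0]  # one transition
        counts[x] += 1
    return [c / steps for c in counts]

freq = simulate(P, 100_000)    # should be close to pi = (2/3, 1/3)
```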

37 Proof of Theorem 2. Define: N_n(i) = sum_{k=1}^n 1(X_k = i) = time (days) in state i by time n; N_n(i,j) = sum_{k=1}^n 1(X_{k-1} = i, X_k = j) = number of direct transitions from i to j by time n; equivalently, N_n(i,j) = sum_{l=1}^{N_n(i)} 1(next visit is to j after the l-th visit to i) = sum_{l=1}^{N_n(i)} 1_{A^{(l)}_{i,j}}. Partial proof of (i): let π*_j = lim_n N_n(j)/n. Then (1/n) N_n(j) = (1/n) sum_{k=1}^n 1(X_k = j) = (1/n) sum_{i in S} sum_{k=1}^n 1(X_{k-1} = i, X_k = j) = sum_{i in S} (1/n) N_n(i,j) = sum_{i in S} (N_n(i)/n) · (1/N_n(i)) sum_{l=1}^{N_n(i)} 1_{A^{(l)}_{i,j}} → sum_{i in S} π*_i E[1_{A_{i,j}}] = sum_{i in S} π*_i P_{i,j}.

38 Proof of Theorem 2, Continued. The total cost by time n: sum_{k=1}^n r(X_k) = sum_{k=1}^n sum_{j in S} 1(X_k = j) r(X_k) = sum_{k=1}^n sum_{j in S} 1(X_k = j) r(j) = sum_{j in S} r(j) sum_{k=1}^n 1(X_k = j). Dividing both sides by n and taking n → ∞: lim_n (1/n) sum_{k=1}^n r(X_k) = sum_{j in S} r(j) lim_n (1/n) sum_{k=1}^n 1(X_k = j) = sum_{j in S} r(j) π_j.

39 Theorem 3: Mean First Passage Time. Theorem (mean first passage time): consider an irreducible and positive recurrent DTMC. (i) If aperiodic: P^{(n)}_{i,j} → π_j = 1/E[T_{j,j}]. (ii) If periodic with period d > 1: the sequence P^{(n)}_{j,j} does not converge, but the subsequence P^{(nd)}_{j,j} → d π_j = d/E[T_{j,j}]. Intuition: a cycle is the time from j back to j; in each cycle the chain spends exactly 1 time unit (day) in j, so the long-run proportion of time in j is 1/E[cycle length] = 1/E[T_{j,j}]. Proof: will be given in Renewal Theory (last topic).

40 Example: DTMC in Hospitals. When a patient is hospitalized at the NC State hospital, his/her health condition can be described by a DTMC with 3 states: low risk (L), medium risk (M), and high risk (H). Patients are admitted into the hospital in low-, medium- and high-risk conditions with probability p_1 = 0.6, p_2 = 0.3 and p_3 = 0.1, respectively. Their health status is updated once every day according to a TPM P with states ordered H, M, L (entries given on the slide). We are interested in: 1 The long-run proportion of days a patient is in low (medium, high) risk. 2 Daily medication costs are independent exponential r.v.'s with rates μ_1 = 1/2, μ_2 = 1/4 and μ_3 = 1/10 for a low-, medium- and high-risk patient, respectively. What is the long-run average daily cost?
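The computation asked for here is Theorem 2: find π, then average the per-state mean cost (an Exp(μ) cost has mean 1/μ, so means 2, 4, 10 for L, M, H). The slide's numeric TPM did not survive transcription, so the matrix entries below are invented placeholders; only the recipe is meant to carry over.

```python
# States ordered (H, M, L). WARNING: these TPM entries are placeholders,
# not the slide's actual numbers (which were lost in transcription).
P = [
    [0.5, 0.3, 0.2],   # H -> (H, M, L)
    [0.2, 0.5, 0.3],   # M -> (H, M, L)
    [0.1, 0.2, 0.7],   # L -> (H, M, L)
]
mean_cost = [1 / 0.1, 1 / 0.25, 1 / 0.5]   # E[Exp(mu)] = 1/mu: H=10, M=4, L=2

def stationary(P, iters=2000):
    """Power iteration mu <- mu P to approximate pi."""
    m = len(P)
    mu = [1.0 / m] * m
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(m)) for j in range(m)]
    return mu

pi = stationary(P)                                           # proportions of days
avg_daily_cost = sum(p * c for p, c in zip(pi, mean_cost))   # Theorem 2(ii)
```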

41 Example: Google PageRank. Develop a DTMC: 1 Each state represents a web page. 2 Create a direct transition (an arc) from state i to state j if page i has a link to page j. 3 If page i has k > 0 outgoing links, then set the probability on each outgoing arc to 1/k. 4 Solve the DTMC to determine the stationary distribution. Pages are then ranked by their π_i.
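The four steps above can be sketched directly: build the TPM from out-links and take its stationary distribution. The 3-page link graph is a made-up illustration, and this is the plain chain described on the slide, without the damping factor used in the full PageRank algorithm. For this particular graph the balance equations give (0.4, 0.2, 0.4) by hand.

```python
# PageRank as a DTMC stationary distribution (no damping; toy link graph).
links = {0: [1, 2], 1: [2], 2: [0]}    # page -> pages it links to
n = len(links)
P = [[0.0] * n for _ in range(n)]
for i, outs in links.items():
    for j in outs:
        P[i][j] = 1.0 / len(outs)      # probability 1/k on each out-link

def stationary(P, iters=5000):
    """Power iteration mu <- mu P to approximate pi."""
    m = len(P)
    mu = [1.0 / m] * m
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(m)) for j in range(m)]
    return mu

rank = stationary(P)   # pages ranked by their pi_i
```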

42 Reducible DTMCs

43 Canonical Form of a Reducible DTMC. Consider a DTMC with both transient states and absorbing states. Define A = set of absorbing states = {i in S s.t. P_{i,i} = 1}, and T = set of transient states. Suppose |A| = n and |T| = m. Relabel the states so that the TPM P takes its canonical form: P = [ I_{n×n}, 0_{n×m} ; R_{m×n}, Q_{m×m} ], with the first block row/column corresponding to A and the second to T. Here Q_{m×m} is the transient-to-transient TPM, R_{m×n} is the transient-to-absorbing TPM, 0_{n×m} is the zero matrix, and I_{n×n} is the identity matrix.

44 An Example: Markov Mouse in an Open Maze. Question: Starting in room 1, what is the mean number of steps until escaping the maze? What is the probability of exiting from door 3? In general, for a reducible DTMC we want E[number of steps until absorption | X_0 = i] for i in T, and P(being absorbed by state j | X_0 = i) for j in A, i in T.

45 Mean Number of Steps until Absorption. For transient states i, j in T, let N_{i,j} = total number of steps spent in j starting in i = sum_{n=0}^∞ 1(X_n = j) given X_0 = i. Define the fundamental matrix S in R^{m×m}: S_{i,j} = E[N_{i,j}] = E[sum_{n=0}^∞ 1(X_n = j) | X_0 = i] = sum_{n=0}^∞ P(X_n = j | X_0 = i) = δ_{i,j} + sum_{n=1}^∞ P^{(n)}_{i,j} = δ_{i,j} + sum_{n=1}^∞ (Q^n)_{i,j}, where δ_{i,j} = 1{i = j}. In matrix notation: S = I + Q + Q^2 + Q^3 + ... (a geometric series).

46 Mean Number of Steps until Absorption. Solving for the fundamental matrix S = I + Q + Q^2 + Q^3 + ...: since QS = SQ = Q + Q^2 + Q^3 + Q^4 + ..., we get S(I - Q) = (I - Q)S = I, so S = (I - Q)^{-1}. Define M_i = mean number of steps until absorption, starting in i in T. Then M_i = sum_{j in T} S_{i,j}, or M = S e, where e = [1, ..., 1]^T of length m.

47 Probability of Absorption. For i in T and l in A, define B_{i,l} = P(eventually being absorbed by state l | X_0 = i) = P(X_1 = l | X_0 = i) + P(X_2 = l, X_1 in T | X_0 = i) + P(X_3 = l, X_2 in T | X_0 = i) + ... = R_{i,l} + sum_{j in T} P(X_2 = l | X_1 = j) P(X_1 = j | X_0 = i) + sum_{j in T} P(X_3 = l | X_2 = j) P(X_2 = j, X_1 in T | X_0 = i) + ... = R_{i,l} + sum_{j in T} R_{j,l} Q_{i,j} + sum_{j in T} R_{j,l} (Q^2)_{i,j} + ... = (R)_{i,l} + (QR)_{i,l} + (Q^2 R)_{i,l} + ... In matrix notation: B = R + QR + Q^2 R + ... = (I + Q + Q^2 + ...) R = SR = (I - Q)^{-1} R.
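The pair of formulas M = S e and B = S R can be sketched end to end on a tiny example: the fair gambler's ruin with N = 3, where the answers are known in closed form (M_i = i(N - i), so M = (2, 2), and B gives win probabilities i/N). We solve (I - Q) x = b with a small Gaussian-elimination routine rather than a library inverse, so the block is self-contained.

```python
# Absorption quantities for fair gambler's ruin with N = 3 (illustration):
# transient states T = {1, 2}, absorbing states A = {0, 3}.
Q = [[0.0, 0.5],
     [0.5, 0.0]]       # transient-to-transient block
R = [[0.5, 0.0],
     [0.0, 0.5]]       # transient-to-absorbing block, columns (0, 3)

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][m] / M[r][r] for r in range(m)]

m = len(Q)
ImQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(m)] for i in range(m)]
# Fundamental matrix S = (I - Q)^{-1}, built column by column.
cols = [solve(ImQ, [1.0 if i == k else 0.0 for i in range(m)]) for k in range(m)]
S = [[cols[k][i] for k in range(m)] for i in range(m)]
M_abs = [sum(S[i]) for i in range(m)]                       # M = S e
B = [[sum(S[i][j] * R[j][l] for j in range(m)) for l in range(2)]
     for i in range(m)]                                     # B = S R
```

Here S = [[4/3, 2/3], [2/3, 4/3]], the mean absorption times are (2, 2), and starting from fortune 1 the chain is absorbed at 0 w.p. 2/3 and at 3 w.p. 1/3, matching the gambler's-ruin formula.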

48 Ever Hitting Probabilities Revisited. Using the fundamental matrix S, we can compute the ever-hitting probability f_{i,j} = P(j ever visited after leaving i) < 1 for transient states. For i, j in T with i ≠ j: S_{i,j} = E[# of visits to j | X_0 = i] = E[# of visits to j | j is ever visited, X_0 = i] f_{i,j} = E[# of visits to j | X_0 = j] f_{i,j} = S_{j,j} f_{i,j}. For i = j in T: S_{j,j} = E[# of visits to j | X_0 = j] = E[# in j | j visited again, X_0 = j] f_{j,j} + E[# in j | j never again, X_0 = j] (1 - f_{j,j}) = [1 + S_{j,j}] f_{j,j} + 1 · (1 - f_{j,j}) = 1 + S_{j,j} f_{j,j}. Therefore f_{i,j} = S_{i,j}/S_{j,j} if i ≠ j, and f_{j,j} = (S_{j,j} - 1)/S_{j,j}.


50 Time Reversibility

51 Reverse-Time DTMC. Definition (reverse-time DTMC): consider a DTMC {X_n} in stationarity with TPM P and stationary distribution π. Define its reverse-time process {X̃_n} = {X_n, X_{n-1}, X_{n-2}, ...}. Properties: (i) The reverse-time process {X̃_n} is a DTMC; that is, P(X_m = j | X_{m+1} = i, X_{m+2} = i_{m+2}, X_{m+3} = i_{m+3}, ...) = P(X_m = j | X_{m+1} = i). (ii) The transition probability of {X̃_n} is P̃_{i,j} = P(X_m = j | X_{m+1} = i) = π_j P_{j,i} / π_i.

52 Reverse-Time DTMC. Proof of property (i): given the present, the future is independent of the past (and the past is independent of the future). More rigorously: LHS = P(X_m = j, X_{m+1} = i, X_{m+2} = i_{m+2}, X_{m+3} = i_{m+3}, ...) / P(X_{m+1} = i, X_{m+2} = i_{m+2}, X_{m+3} = i_{m+3}, ...) = [P(X_{m+2} = i_{m+2}, X_{m+3} = i_{m+3}, ... | X_{m+1} = i, X_m = j) P(X_{m+1} = i, X_m = j)] / [P(X_{m+2} = i_{m+2}, X_{m+3} = i_{m+3}, ... | X_{m+1} = i) P(X_{m+1} = i)] = P(X_{m+1} = i, X_m = j) / P(X_{m+1} = i) = P(X_m = j | X_{m+1} = i). Proof of property (ii): P̃_{i,j} = P(X_m = j | X_{m+1} = i) = P(X_m = j, X_{m+1} = i) / P(X_{m+1} = i) = P(X_{m+1} = i | X_m = j) P(X_m = j) / P(X_{m+1} = i) = π_j P_{j,i} / π_i.

53 Time Reversibility. Definition (time reversibility): we say a DTMC {X_n} is time reversible (T-R) if {X̃_n} and {X_n} have the same probability law (P̃ = P, i.e., P_{i,j} = P̃_{i,j} = π_j P_{j,i} / π_i), or equivalently, π_i P_{i,j} = π_j P_{j,i} for all i, j in S. Interpretation: a DTMC is T-R if, for all i, j in S, the long-run rate (proportion of transitions) from i to j equals the long-run rate from j to i.

54 Example 1. In general, a DTMC may not be T-R; that is, {X_n} and {X̃_n} may have different dynamics (P̃_{i,j} ≠ P_{i,j}). Example: a DTMC with S = {1, 2, 3}, the TPM P given on the slide, and π = (1/3, 1/3, 1/3). Checking the T-R criterion shows π_1 P_{1,2} ≠ π_2 P_{2,1}; hence this DTMC is NOT time reversible.
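The T-R criterion π_i P_{i,j} = π_j P_{j,i} is mechanical to check in code. A sketch: since the slide's own matrix entries are not reproduced above, both test chains below are made-up illustrations. The cyclic chain only ever moves 1 → 2 → 3 → 1, so run backwards it circulates the other way (not T-R); a symmetric TPM with uniform π trivially satisfies detailed balance.

```python
# Detailed-balance check: pi_i P_{i,j} == pi_j P_{j,i} for all i, j.
def is_reversible(P, pi, tol=1e-12):
    m = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(m) for j in range(m))

uniform = [1/3, 1/3, 1/3]
cyclic = [[0, 1, 0],
          [0, 0, 1],
          [1, 0, 0]]              # deterministic cycle: not time reversible
symmetric = [[0.50, 0.25, 0.25],
             [0.25, 0.50, 0.25],
             [0.25, 0.25, 0.50]]  # symmetric P, uniform pi: time reversible
```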

55 Example 2: Random Walk on a Bounded Set. Consider a random walk on the bounded set S = {0, 1, ..., N} with transition probabilities P_{i,i+1} = p_i = 1 - P_{i,i-1} for 1 <= i <= N-1, and P_{N,N-1} = P_{0,1} = 1. Let S_n denote the position at time n; then {S_n, n >= 0} is a DTMC. Question: Is this DTMC time reversible?

56 Time Reversibility. Proposition (a sufficient condition): if there exists a vector π* = (π*_1, π*_2, ...) with π*_k >= 0 and sum_k π*_k = 1, satisfying π*_i P_{i,j} = π*_j P_{j,i} for all i, j in S (1), then 1 π* = π (that is, π* is the stationary distribution), and 2 the DTMC is T-R. Proof: summing both sides of (1) over i yields sum_i π*_i P_{i,j} = π*_j sum_i P_{j,i} = π*_j, which is the balance equation! Hence π* = π. In addition, (1) then reads π_i P_{i,j} = π_j P_{j,i}, so the DTMC is T-R.

57 Random Walks on Weighted Graphs. Definition (RWoWG): a graph has n nodes and m arcs. Each (undirected) arc (i, j) has a weight w_{i,j} = w_{j,i}. A particle moves from node i to node j with a probability proportional to the weight w_{i,j}, that is, P_{i,j} = w_{i,j} / sum_k w_{i,k} for all i, j. Let X_n be the location of the particle at time n. If the RWoWG is connected, then {X_n, n >= 0} is a DTMC which is irreducible and positive recurrent.

58 Random Walks on Weighted Graphs. Theorem (steady state of RWoWG): the DTMC of a connected RWoWG 1 is time reversible, and 2 has steady state π_i = sum_{k in S} w_{i,k} / sum_{k in S} sum_{l in S} w_{k,l}, i in S. (2) Intuition: π_i = (sum of weights on all arcs incident to i) / (sum over all nodes k of the weights on arcs incident to k). Proof: we verify that π_i in (2) indeed solves equation (1): π_i P_{i,j} = [sum_k w_{i,k} / sum_l sum_k w_{l,k}] · [w_{i,j} / sum_k w_{i,k}] = w_{i,j} / sum_l sum_k w_{l,k} = w_{j,i} / sum_l sum_k w_{l,k} = [sum_k w_{j,k} / sum_l sum_k w_{l,k}] · [w_{j,i} / sum_k w_{j,k}] = π_j P_{j,i}, where the middle equality holds because w_{i,j} = w_{j,i}.
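Formula (2) gives π without solving any linear system: just normalize the incident-weight sums. A sketch on a made-up weighted triangle (weights 1, 2, 3); for it the incident sums are 4, 3, 5 with total 12, so π = (1/3, 1/4, 5/12), and detailed balance holds since π_i P_{i,j} = w_{i,j} / (total weight) is symmetric in i and j.

```python
# Steady state of a random walk on a weighted graph via formula (2).
w = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 3.0}   # undirected arc weights

def weight(i, j):
    """Symmetric weight lookup: w_{i,j} = w_{j,i}, 0 if no arc."""
    return w.get((i, j), w.get((j, i), 0.0))

nodes = [0, 1, 2]
deg = {i: sum(weight(i, k) for k in nodes) for i in nodes}  # incident weight sums
total = sum(deg.values())                                   # = 2 * (sum of arc weights)
pi = [deg[i] / total for i in nodes]                        # formula (2)

def P(i, j):
    """Transition probability P_{i,j} = w_{i,j} / sum_k w_{i,k}."""
    return weight(i, j) / deg[i]
```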

59 Examples of RWoWG: Markov Mouse in a Closed Maze Revisited. Questions: Is the DTMC time reversible? Are there easy ways to compute π?

60 Examples of RWoWG: Random Walk on a Bounded Set Revisited. Consider a random walk on the bounded set S = {0, 1, ..., N} with transition probabilities P_{i,i+1} = p_i = 1 - P_{i,i-1} for 1 <= i <= N-1, and P_{N,N-1} = P_{0,1} = 1. Let S_n denote the position at time n; then {S_n, n >= 0} is a DTMC. Questions: Can we transform this problem into a RWoWG? If yes, give an expression for π_i, i = 0, 1, ..., N.


More information

2 Discrete-Time Markov Chains

2 Discrete-Time Markov Chains 2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept.

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. 6, 2016 Outline 1. Introduction 2. Chapman-Kolmogrov Equations

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Markov Chains Introduction

Markov Chains Introduction Markov Chains 4 4.1. Introduction In this chapter, we consider a stochastic process {X n,n= 0, 1, 2,...} that takes on a finite or countable number of possible values. Unless otherwise mentioned, this

More information

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous

More information

1 Random Walks and Electrical Networks

1 Random Walks and Electrical Networks CME 305: Discrete Mathematics and Algorithms Random Walks and Electrical Networks Random walks are widely used tools in algorithm design and probabilistic analysis and they have numerous applications.

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Readings: Finish Section 5.2

Readings: Finish Section 5.2 LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Discrete Markov Chain. Theory and use

Discrete Markov Chain. Theory and use Discrete Markov Chain. Theory and use Andres Vallone PhD Student andres.vallone@predoc.uam.es 2016 Contents 1 Introduction 2 Concept and definition Examples Transitions Matrix Chains Classification 3 Empirical

More information

M3/4/5 S4 Applied Probability

M3/4/5 S4 Applied Probability M3/4/5 S4 Applied Probability Autumn 215 Badr Missaoui Room 545 Huxley Building Imperial College London E-Mail: badr.missaoui8@imperial.ac.uk Department of Mathematics Imperial College London 18 Queens

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

Uncertainty Runs Rampant in the Universe C. Ebeling circa Markov Chains. A Stochastic Process. Into each life a little uncertainty must fall.

Uncertainty Runs Rampant in the Universe C. Ebeling circa Markov Chains. A Stochastic Process. Into each life a little uncertainty must fall. Uncertainty Runs Rampant in the Universe C. Ebeling circa 2000 Markov Chains A Stochastic Process Into each life a little uncertainty must fall. Our Hero - Andrei Andreyevich Markov Born: 14 June 1856

More information

Chapter 11 Advanced Topic Stochastic Processes

Chapter 11 Advanced Topic Stochastic Processes Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section

More information

Mathematical Methods for Computer Science

Mathematical Methods for Computer Science Mathematical Methods for Computer Science Computer Science Tripos, Part IB Michaelmas Term 2016/17 R.J. Gibbens Problem sheets for Probability methods William Gates Building 15 JJ Thomson Avenue Cambridge

More information

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis TCOM 50: Networking Theory & Fundamentals Lecture 6 February 9, 003 Prof. Yannis A. Korilis 6- Topics Time-Reversal of Markov Chains Reversibility Truncating a Reversible Markov Chain Burke s Theorem Queues

More information

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather 1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

18.440: Lecture 33 Markov Chains

18.440: Lecture 33 Markov Chains 18.440: Lecture 33 Markov Chains Scott Sheffield MIT 1 Outline Markov chains Examples Ergodicity and stationarity 2 Outline Markov chains Examples Ergodicity and stationarity 3 Markov chains Consider a

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

Chapter 2: Markov Chains and Queues in Discrete Time

Chapter 2: Markov Chains and Queues in Discrete Time Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

Markov Chains. Sarah Filippi Department of Statistics TA: Luke Kelly

Markov Chains. Sarah Filippi Department of Statistics  TA: Luke Kelly Markov Chains Sarah Filippi Department of Statistics http://www.stats.ox.ac.uk/~filippi TA: Luke Kelly With grateful acknowledgements to Prof. Yee Whye Teh's slides from 2013 14. Schedule 09:30-10:30 Lecture:

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

1 Random walks: an introduction

1 Random walks: an introduction Random Walks: WEEK Random walks: an introduction. Simple random walks on Z.. Definitions Let (ξ n, n ) be i.i.d. (independent and identically distributed) random variables such that P(ξ n = +) = p and

More information

Markov Chains. Contents

Markov Chains. Contents 6 Markov Chains Contents 6.1. Discrete-Time Markov Chains............... p. 2 6.2. Classification of States................... p. 9 6.3. Steady-State Behavior.................. p. 13 6.4. Absorption Probabilities

More information

18.600: Lecture 32 Markov Chains

18.600: Lecture 32 Markov Chains 18.600: Lecture 32 Markov Chains Scott Sheffield MIT Outline Markov chains Examples Ergodicity and stationarity Outline Markov chains Examples Ergodicity and stationarity Markov chains Consider a sequence

More information

APPM 4/5560 Markov Processes Fall 2019, Some Review Problems for Exam One

APPM 4/5560 Markov Processes Fall 2019, Some Review Problems for Exam One APPM /556 Markov Processes Fall 9, Some Review Problems for Exam One (Note: You will not have to invert any matrices on the exam and you will not be expected to solve systems of or more unknowns. There

More information

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India Markov Chains INDER K RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400076, India email: ikrana@iitbacin Abstract These notes were originally prepared for a College

More information

Link Analysis. Leonid E. Zhukov

Link Analysis. Leonid E. Zhukov Link Analysis Leonid E. Zhukov School of Data Analysis and Artificial Intelligence Department of Computer Science National Research University Higher School of Economics Structural Analysis and Visualization

More information

Classification of Countable State Markov Chains

Classification of Countable State Markov Chains Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Lecture 4 - Random walk, ruin problems and random processes

Lecture 4 - Random walk, ruin problems and random processes Lecture 4 - Random walk, ruin problems and random processes Jan Bouda FI MU April 19, 2009 Jan Bouda (FI MU) Lecture 4 - Random walk, ruin problems and random processesapril 19, 2009 1 / 30 Part I Random

More information

Some Definition and Example of Markov Chain

Some Definition and Example of Markov Chain Some Definition and Example of Markov Chain Bowen Dai The Ohio State University April 5 th 2016 Introduction Definition and Notation Simple example of Markov Chain Aim Have some taste of Markov Chain and

More information

Hittingtime and PageRank

Hittingtime and PageRank San Jose State University SJSU ScholarWorks Master's Theses Master's Theses and Graduate Research Spring 2014 Hittingtime and PageRank Shanthi Kannan San Jose State University Follow this and additional

More information

12 Markov chains The Markov property

12 Markov chains The Markov property 12 Markov chains Summary. The chapter begins with an introduction to discrete-time Markov chains, and to the use of matrix products and linear algebra in their study. The concepts of recurrence and transience

More information

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015 Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes? IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only

More information

18.175: Lecture 30 Markov chains

18.175: Lecture 30 Markov chains 18.175: Lecture 30 Markov chains Scott Sheffield MIT Outline Review what you know about finite state Markov chains Finite state ergodicity and stationarity More general setup Outline Review what you know

More information

INTRODUCTION TO MCMC AND PAGERANK. Eric Vigoda Georgia Tech. Lecture for CS 6505

INTRODUCTION TO MCMC AND PAGERANK. Eric Vigoda Georgia Tech. Lecture for CS 6505 INTRODUCTION TO MCMC AND PAGERANK Eric Vigoda Georgia Tech Lecture for CS 6505 1 MARKOV CHAIN BASICS 2 ERGODICITY 3 WHAT IS THE STATIONARY DISTRIBUTION? 4 PAGERANK 5 MIXING TIME 6 PREVIEW OF FURTHER TOPICS

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

Markov Chains Absorption (cont d) Hamid R. Rabiee

Markov Chains Absorption (cont d) Hamid R. Rabiee Markov Chains Absorption (cont d) Hamid R. Rabiee 1 Absorbing Markov Chain An absorbing state is one in which the probability that the process remains in that state once it enters the state is 1 (i.e.,

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes.

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes. 1. LECTURE 1: APRIL 3, 2012 1.1. Motivating Remarks: Differential Equations. In the deterministic world, a standard tool used for modeling the evolution of a system is a differential equation. Such an

More information

MATH HOMEWORK PROBLEMS D. MCCLENDON

MATH HOMEWORK PROBLEMS D. MCCLENDON MATH 46- HOMEWORK PROBLEMS D. MCCLENDON. Consider a Markov chain with state space S = {0, }, where p = P (0, ) and q = P (, 0); compute the following in terms of p and q: (a) P (X 2 = 0 X = ) (b) P (X

More information

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and TMA 4265 Stochastic Processes Semester project, fall 2014 Student number 730631 and 732038 Exercise 1 We shall study a discrete Markov chain (MC) {X n } n=0 with state space S = {0, 1, 2, 3, 4, 5, 6}.

More information

Markov Chains. October 5, Stoch. Systems Analysis Markov chains 1

Markov Chains. October 5, Stoch. Systems Analysis Markov chains 1 Markov Chains Alejandro Ribeiro Dept. of Electrical and Systems Engineering University of Pennsylvania aribeiro@seas.upenn.edu http://www.seas.upenn.edu/users/~aribeiro/ October 5, 2011 Stoch. Systems

More information

INTRODUCTION TO MCMC AND PAGERANK. Eric Vigoda Georgia Tech. Lecture for CS 6505

INTRODUCTION TO MCMC AND PAGERANK. Eric Vigoda Georgia Tech. Lecture for CS 6505 INTRODUCTION TO MCMC AND PAGERANK Eric Vigoda Georgia Tech Lecture for CS 6505 1 MARKOV CHAIN BASICS 2 ERGODICITY 3 WHAT IS THE STATIONARY DISTRIBUTION? 4 PAGERANK 5 MIXING TIME 6 PREVIEW OF FURTHER TOPICS

More information

ISyE 6650 Probabilistic Models Fall 2007

ISyE 6650 Probabilistic Models Fall 2007 ISyE 6650 Probabilistic Models Fall 2007 Homework 4 Solution 1. (Ross 4.3) In this case, the state of the system is determined by the weather conditions in the last three days. Letting D indicate a dry

More information

Interlude: Practice Final

Interlude: Practice Final 8 POISSON PROCESS 08 Interlude: Practice Final This practice exam covers the material from the chapters 9 through 8. Give yourself 0 minutes to solve the six problems, which you may assume have equal point

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

1.3 Convergence of Regular Markov Chains

1.3 Convergence of Regular Markov Chains Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain

More information

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Examples of Countable State Markov Chains Thursday, October 16, :12 PM stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

Lecture 20: Reversible Processes and Queues

Lecture 20: Reversible Processes and Queues Lecture 20: Reversible Processes and Queues 1 Examples of reversible processes 11 Birth-death processes We define two non-negative sequences birth and death rates denoted by {λ n : n N 0 } and {µ n : n

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

IEOR 6711: Stochastic Models I. Solutions to Homework Assignment 9

IEOR 6711: Stochastic Models I. Solutions to Homework Assignment 9 IEOR 67: Stochastic Models I Solutions to Homework Assignment 9 Problem 4. Let D n be the random demand of time period n. Clearly D n is i.i.d. and independent of all X k for k < n. Then we can represent

More information

LECTURE 3. Last time:

LECTURE 3. Last time: LECTURE 3 Last time: Mutual Information. Convexity and concavity Jensen s inequality Information Inequality Data processing theorem Fano s Inequality Lecture outline Stochastic processes, Entropy rate

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

1 IEOR 4701: Continuous-Time Markov Chains

1 IEOR 4701: Continuous-Time Markov Chains Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change

More information

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6) Markov chains and the number of occurrences of a word in a sequence (4.5 4.9,.,2,4,6) Prof. Tesler Math 283 Fall 208 Prof. Tesler Markov Chains Math 283 / Fall 208 / 44 Locating overlapping occurrences

More information

Today. Next lecture. (Ch 14) Markov chains and hidden Markov models

Today. Next lecture. (Ch 14) Markov chains and hidden Markov models Today (Ch 14) Markov chains and hidden Markov models Graphical representation Transition probability matrix Propagating state distributions The stationary distribution Next lecture (Ch 14) Markov chains

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

6 Stationary Distributions

6 Stationary Distributions 6 Stationary Distributions 6. Definition and Examles Definition 6.. Let {X n } be a Markov chain on S with transition robability matrix P. A distribution π on S is called stationary (or invariant) if π

More information

Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC)

Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC) Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC) 2.1 Classification of DTMC states Prof. Dr. Anne Remke Design and Analysis of Communication Systems University

More information