ECE 6960: Adv. Random Processes & Applications Lecture Notes, Fall 2010

Lecture 16

Today: (1) Markov Processes, (2) Markov Chains, (3) State Classification

Intro

Please turn in HW 6 today. Read Chapter 11, Sections 1-3, for Thursday. Read the Bianchi paper for next Tuesday.

1 Markov Processes

We're going to talk about random processes which have limited memory.

Def'n: Markov Process
A discrete-time random process X_n is Markov if it has the property that

  P[X_{n+1} | X_n, X_{n-1}, X_{n-2}, ...] = P[X_{n+1} | X_n]

A continuous-time random process X(t) is Markov if it has the property that

  P[X(t_{n+1}) | X(t_n), X(t_{n-1}), X(t_{n-2}), ...] = P[X(t_{n+1}) | X(t_n)]

If at time n you write a distribution for X_{n+1} given all past values of X, the distribution is no different from the one using just the present value X_n. Given the present, the past does not matter. Note that how you define X_n is up to you.

Examples. For each one, write P[X(t_{n+1}) | X(t_n), X(t_{n-1}), ...] and P[X(t_{n+1}) | X(t_n)]:

- Brownian motion: The value of X_{n+1} is equal to X_n plus the random motion that occurs between time n and n+1. This motion is i.i.d. in a Brownian motion process.
- Any independent increments process (e.g., a Poisson process).
- Gambling or investments.
- Digital systems. The state is described by what is in the computer's memory; the transitions may be non-random (described by a deterministic algorithm) or random. Randomness may arrive from input signals.
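As a quick numerical illustration of the "limited memory" idea (this sketch is mine, not from the notes), the code below simulates a random walk with i.i.d. +/-1 steps, an independent-increments process. Because the steps are i.i.d., the probability that the next step is up is the same whether we condition only on the present or also on the previous step; the step probability p = 0.6 is an assumed value for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.6                       # probability of a +1 step (assumed value)
N = 200_000
steps = np.where(rng.random(N) < p, 1, -1)
X = np.cumsum(steps)          # the random walk: an independent-increments process

# Compare P[next step is +1] with P[next step is +1 | previous step was +1].
up = steps == 1
p_uncond = up[1:].mean()
p_given_up = up[1:][up[:-1]].mean()   # condition on the previous step being up

print(p_uncond, p_given_up)   # both should be close to p = 0.6
```

Conditioning on even more history gives the same answer, which is exactly the Markov property: given the present, the past does not matter.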
Notes:

- The value X_n is also called the state. The change from X_n to X_{n+1} is called the state transition.
- i.i.d. r.v.s are also Markov.
- The r.v. X_n can be either discrete-valued or continuous-valued and still have the Markov property. However, it must be discrete-valued in order to be represented in a Markov chain, which we will talk about next.

2 Markov Chains

When X_n is a Markov process and:

1. the r.v.s X_n are discrete-valued, and
2. the transition probabilities P[X_{n+1} | X_n] are not a function of n,

we can represent it as a Markov chain. Because the event space Ω is countable, we typically represent our range S_X as a set of integers. (If it wasn't, we could consider Y_i = g(X_i) to be a function which assigns a unique integer to each element of S_X.)

Def'n: Transition Probability
The probability of transition from state i to state j is denoted p_{i,j},

  p_{i,j} = P[X_{n+1} = j | X_n = i]

2.1 Visualization

We make diagrams to show the possible progression of a Markov process. Each state is a circle, while each transition is an arrow, labeled with the probability of that transition.

Example: Discrete Telegraph Wave r.p.
Let X_n be a Bernoulli r.v. with parameter p, and let

  Y_n = (-1)^{sum_{i=1}^n X_i} = Y_{n-1} (-1)^{X_n}

Each time a trial is a success, the r.p. Y_n switches from 1 to -1 or vice versa. See the state transition diagram drawn in Fig. 1.

Example: (Miller & Childers) Collect Them All
This is the fast food chain promotion with a series of toys for kids who are told to "Collect them all!". Let there be four toys, and
[Figure 1: A state transition diagram for the Discrete Telegraph Wave: states -1 and +1, each with a self-loop of probability 1-p and a transition to the other state with probability p.]

let X_n be the number out of four that you've collected after your nth visit to the chain. How many states are there? What are the transition probabilities?

[Figure 2: A state transition diagram for the Collect Them All! random process: states 0, 1, 2, 3, 4, with transitions to the next state of probability 1, 0.75, 0.5, 0.25 and self-loops of probability 0.25, 0.5, 0.75, 1.]

2.2 Single Step Transition Probability Matrices

This is Ross Section 4.1. The transition probabilities satisfy:

1. p_{i,j} >= 0
2. sum_j p_{i,j} = 1

Note: sum_i p_{i,j} is not necessarily 1! Don't make this mistake. It is the probability of leaving state i for some state j that is equal to 1.

Def'n: State Transition Probability Matrix
The state transition probability matrix P of an N-state Markov chain is given by:

  P = [ p_{1,1} p_{1,2} ... p_{1,N} ]
      [ p_{2,1} p_{2,2} ... p_{2,N} ]
      [ ...     ...         ...     ]
      [ p_{N,1} p_{N,2} ... p_{N,N} ]

Note: the rows sum to one; the columns may not.

There may be N states, but they may not have values 1, 2, 3, ..., N. Thus if we don't have such values, we may create an
intermediate r.v. W_n which is equal to the rank of the value of X_n, or W_n = rank(X_n), for some arbitrary ranking system.

Example: Discrete telegraph wave
What is the TPM of the Discrete Telegraph Wave r.p.? Use W_n = 1 when the wave Y_n = -1, and W_n = 2 when Y_n = +1:

  P = [ p_{-1,-1}  p_{-1,1} ] = [ 1-p   p  ]
      [ p_{1,-1}   p_{1,1}  ]   [  p   1-p ]

Example: Collect Them All
What is the TPM of the Collect Them All example? Use W_n = X_n + 1:

  P = [ p_{1,1} p_{1,2} p_{1,3} p_{1,4} p_{1,5} ]   [ 0  1     0     0     0    ]
      [ p_{2,1} p_{2,2} p_{2,3} p_{2,4} p_{2,5} ]   [ 0  0.25  0.75  0     0    ]
      [ p_{3,1} p_{3,2} p_{3,3} p_{3,4} p_{3,5} ] = [ 0  0     0.5   0.5   0    ]
      [ p_{4,1} p_{4,2} p_{4,3} p_{4,4} p_{4,5} ]   [ 0  0     0     0.75  0.25 ]
      [ p_{5,1} p_{5,2} p_{5,3} p_{5,4} p_{5,5} ]   [ 0  0     0     0     1    ]

Example: Gambling $50
You start at a casino with 5 $10 chips. Each time n you bet one chip. You win with probability 0.45, and lose with probability 0.55. If you run out, you will stop betting. Also, you decide beforehand to stop if you double your money. What is the TPM for this random process?

  P = [ 1    0    0    0    0    0    0    0    0    0    0    ]
      [ 0.55 0    0.45 0    0    0    0    0    0    0    0    ]
      [ 0    0.55 0    0.45 0    0    0    0    0    0    0    ]
      [ 0    0    0.55 0    0.45 0    0    0    0    0    0    ]
      [ 0    0    0    0.55 0    0.45 0    0    0    0    0    ]
      [ 0    0    0    0    0.55 0    0.45 0    0    0    0    ]
      [ 0    0    0    0    0    0.55 0    0.45 0    0    0    ]
      [ 0    0    0    0    0    0    0.55 0    0.45 0    0    ]
      [ 0    0    0    0    0    0    0    0.55 0    0.45 0    ]
      [ 0    0    0    0    0    0    0    0    0.55 0    0.45 ]
      [ 0    0    0    0    0    0    0    0    0    0    1    ]

Example: Waiting in a finite queue
A mail server (bank) can deliver one email (customer) at each
minute. But X_n more emails (customers) arrive in minute n, where X_n is (i.i.d.) Poisson with parameter λ = 1 per minute. Emails (people) who can't be handled immediately are queued. But if the number in the queue, Y_n, is equal to 2, the queue is full, and emails will be dropped (customers won't stay and wait). Thus the number of emails in the queue (people in line) is given by

  Y_{n+1} = min(2, max(0, Y_n - 1) + X_n)

What is P[X_n = k]?

  P[X_n = k] = (λt)^k e^{-λt} / k! = 1/(e k!)

So P[X_n = 0] = 1/e ≈ 0.37, P[X_n = 1] = 1/e ≈ 0.37, and P[X_n = 2] = 1/(2e) ≈ 0.18. Note that the entry for transitioning into a full queue uses P[X_n >= 2] = 1 - 2/e ≈ 0.26, since all arrivals beyond the queue limit are dropped together.

  P = [ p_{0,0} p_{0,1} p_{0,2} ]   [ 0.37  0.37  0.26 ]
      [ p_{1,0} p_{1,1} p_{1,2} ] = [ 0.37  0.37  0.26 ]
      [ p_{2,0} p_{2,1} p_{2,2} ]   [ 0     0.37  0.63 ]

Example: Chute and Ladder
See Figure 3. You roll a (fair) die and move forward that number of squares. Then, if you land on the top of a chute, you have to fall down to a lower square; if you land on the bottom of a ladder, you climb up to the higher square. The object is to land on Winner. You don't need to get there with an exact roll. This is a Markov chain: your future square only depends on your present square and your roll. What are the states? They are

  S_X = {1, 2, 4, 5, 7}

Since you'll never stay on 3 or 6, we don't need to include them as states. (We could, but there would just be zero probability of landing on them, so why bother?) This is the transition probability matrix:

[Figure 3: Playing board for the game Chute and Ladder, with squares 1 (Start) through 7 (Winner).]
  P = [ 0  1/6  2/6  2/6  1/6 ]
      [ 0  0    2/6  2/6  2/6 ]
      [ 0  0    1/6  1/6  4/6 ]
      [ 0  0    1/6  0    5/6 ]
      [ 0  0    0    0    1   ]

Example: Countably Infinite Markov Chain
We can also have a countably infinite number of states. It is a discrete-valued r.p., after all; we might still have an infinite number of states. For example, if we didn't ever stop gambling at a fixed upper number. Or, if we allowed ourselves to get into arbitrary debt. Such an example, where we gamble $1 at each time, is shown in Figure 4.

[Figure 4: Example of a Markov chain with a countably infinite state space: states ..., -1, 0, 1, 2, ...]

Example: Random Backoff
In medium access control (MAC) protocols for packet radio channels, a sender may transmit but have its packet collide with a packet from another sender who sent at the same time. If a collision occurs (which happens with probability p), each will wait a random back-off time prior to transmitting. This random back-off time is chosen to be uniform in {1, ..., W} for some maximum wait time W (ignoring the possible increase in W after multiple collisions). Figure 5 shows a transition diagram.

[Figure 5: Markov chain of the waiting time in a random back-off MAC protocol, with states 0, 1, 2, ..., W-2, W-1.]

A TPM for this random process is,

  P = [ 1-p+p/W  p/W  p/W  ...  p/W  p/W ]
      [ 1        0    0    ...  0    0   ]
      [ 0        1    0    ...  0    0   ]
      [ ...                              ]
      [ 0        0    0    ...  1    0   ]

where state k is the remaining wait time: from state 0 we stay at 0 on a successful transmission (probability 1-p) or on a collision followed by a back-off of 1 (probability p/W), and each other back-off choice sends us to the corresponding wait state; from state k > 0 we move to state k-1 with probability 1.
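The finite-queue TPM from the mail-server example can be reproduced numerically. This sketch (my own; variable names are not from the notes) builds the matrix directly from the update rule Y_{n+1} = min(2, max(0, Y_n - 1) + X_n) with Poisson(λ = 1) arrivals, and matches the rounded entries given above.

```python
import numpy as np
from math import exp, factorial

lam = 1.0
pmf = lambda k: lam**k * exp(-lam) / factorial(k)   # Poisson pmf

P = np.zeros((3, 3))
for i in range(3):            # current queue length Y_n
    for k in range(20):       # arrivals X_n; 20 terms is plenty for lambda = 1
        j = min(2, max(0, i - 1) + k)
        P[i, j] += pmf(k)     # all arrivals beyond the limit land in state 2

print(np.round(P, 2))
# rows 0 and 1: [0.37, 0.37, 0.26]; row 2: [0, 0.37, 0.63]
```

Summing the pmf into the clipped destination state is what folds P[X_n >= 2] into the "queue full" column automatically.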
2.3 Multi-step Markov Chain Dynamics

2.3.1 Initialization

We might not know in exactly which state the Markov chain will start. For example, for the bank queue example, we might have people lined up when the bank opens. Let's say we've measured over many days and found that at time zero, the number of people is uniformly distributed, i.e.,

  P[X_0 = k] = { 1/3, k = 0, 1, 2
               { 0,   o.w.

We represent this kind of information in a vector:

  p(0) = [P[X_0 = 0], P[X_0 = 1], P[X_0 = 2]]

In general,

  p(n) = [P[X_n = 0], P[X_n = 1], P[X_n = 2]]

The only requirement is that the entries of p(n) sum to 1 for any n.

2.3.2 Multiple-Step Transition Matrix

This is in Ross Section 4.2.

Def'n: n-step Transition Matrix
The n-step transition probability matrix P(n) of Markov chain X_n has (i,j)th element

  p_{i,j}(n) = P[X_{n+m} = j | X_m = i]
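To make the definition concrete, this sketch (my own illustration, using an assumed switching probability p = 0.3 for the telegraph-wave TPM) computes the two-step probability p_{i,j}(2) by summing over the intermediate state, and checks that the result is the (i,j) entry of P·P, which is exactly the Chapman-Kolmogorov identity proved next.

```python
import numpy as np

p = 0.3                                   # assumed switching probability
P = np.array([[1 - p, p],
              [p, 1 - p]])                # telegraph-wave TPM

# p_{i,j}(2) by conditioning on the intermediate state k:
# P[X_2 = j | X_0 = i] = sum_k P[X_1 = k | X_0 = i] P[X_2 = j | X_1 = k]
P2_by_hand = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        P2_by_hand[i, j] = sum(P[i, k] * P[k, j] for k in range(2))

print(P2_by_hand)     # equals P @ P; diagonal entries are (1-p)^2 + p^2
```

The inner sum over k is precisely the "row i of P times column j of P" pattern discussed below.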
Theorem: Chapman-Kolmogorov equations
For a Markov chain, the n-step transition matrix satisfies

  P(n+m) = P(n) P(m)

Proof: Consider the (i,j)th element of P(n+m), p_{i,j}(n+m):

  p_{i,j}(n+m) = P[X_{n+m} = j | X_0 = i]
               = sum_k P[X_{n+m} = j, X_n = k | X_0 = i]

Why is this step true? (It is the law of total probability: the events {X_n = k} partition the sample space.) Continuing,

  p_{i,j}(n+m) = sum_k P[X_{n+m} = j | X_n = k, X_0 = i] P[X_n = k | X_0 = i]
               = sum_k P[X_{n+m} = j | X_n = k] P[X_n = k | X_0 = i]
               = sum_k p_{k,j}(m) p_{i,k}(n)
               = sum_k p_{i,k}(n) p_{k,j}(m)

This latter form shows the matrix multiplication. When you have a sum of matrix elements, you should be able to recognize when that expression can be written as a matrix multiplication. Here, the dummy index is on the inside of the subscripts. This is how we can see that p_{i,j}(n+m) is equal to the sum of the products of row i of P(n) and column j of P(m). Thus

  P(n+m) = P(n) P(m)

This means, to find the two-step transition matrix, you multiply (matrix multiply) P and P together. In general, the n-step transition matrix is

  P(n) = [P(1)]^n

The state probabilities at time n can be found as

  p(n) = p(0) [P(1)]^n

where p(n) is the row vector of state probabilities defined above.

3 Markov Chain State Classification

This is Leon-Garcia 11.3. There are quite a few definitions and terms which accompany Markov chains.

Def'n: Accessible
A state j is accessible from state i if p_{i,j}(n) > 0 for some n >= 0.
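Applying p(n) = p(0)[P(1)]^n to the $50 gambling example: starting with 5 chips, raising the TPM to a large power pushes all probability into the absorbing states 0 and 10. This sketch (my construction of the 11-state TPM from the example above) compares the result with the classical gambler's-ruin formula.

```python
import numpy as np

w, l = 0.45, 0.55                  # win / lose probabilities per bet
P = np.zeros((11, 11))
P[0, 0] = P[10, 10] = 1.0          # broke or doubled: absorbing states
for i in range(1, 10):
    P[i, i - 1] = l
    P[i, i + 1] = w

pi0 = np.zeros(11)
pi0[5] = 1.0                       # start with 5 chips for certain
pin = pi0 @ np.linalg.matrix_power(P, 2000)   # p(n) = p(0) [P(1)]^n, large n

# classical gambler's-ruin answer for comparison
r = l / w
p_win = (1 - r**5) / (1 - r**10)

print(pin[10], p_win)              # both about 0.268
```

After 2000 steps essentially no probability remains in the transient states 1 through 9, so p(n) reads off the absorption probabilities directly: you double your money only about 27% of the time.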
Notes:

- Note that a state always communicates with itself.
- Equivalently, j is accessible from i if there is a positive probability that, starting at state i, state j will ever be entered.

Def'n: Communicate
States i and j communicate if:

- state j is accessible from state i, and
- state i is accessible from state j.

Notes:

- We write i <-> j if states i and j communicate. Of course it is a symmetric relation.
- "Communicates with" is also transitive, i.e., if i <-> j and j <-> k then i <-> k.

Def'n: Class
States which communicate with each other are in the same class.

Def'n: Irreducible
If all states in a Markov chain are in one class, then the chain is irreducible.

Example: Ross, Example 4.12
Consider the 4-state Markov chain with states {0, 1, 2, 3} and TPM

  P = [ 0.5   0.5   0     0    ]
      [ 0.5   0.5   0     0    ]
      [ 0.25  0.25  0.25  0.25 ]
      [ 0     0     0     1    ]

Which states communicate? What class(es) exist? Is this MC irreducible?

Def'n: Absorbing
A state is absorbing if no other state is accessible from it.
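For Ross's Example 4.12, the classes can be found mechanically: compute which states are accessible from which (a transitive closure over the positive entries of P), then group mutually accessible states. This sketch is my own; the notes pose the question without code.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0, 0],
              [0.5, 0.5, 0, 0],
              [0.25, 0.25, 0.25, 0.25],
              [0, 0, 0, 1.0]])

n = len(P)
reach = np.eye(n, dtype=bool) | (P > 0)   # accessible in 0 or 1 steps
for _ in range(n):                        # transitive closure by squaring
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

comm = reach & reach.T                    # i <-> j: accessible both ways
classes = {frozenset(np.flatnonzero(comm[i])) for i in range(n)}
print(sorted(map(sorted, classes)))       # [[0, 1], [2], [3]]
```

So states 0 and 1 communicate and form one class; state 2 reaches everything but nothing returns to it, so {2} is its own class; and {3} is an absorbing class. Three classes means the chain is not irreducible.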