Markov chains and the number of occurrences of a word in a sequence (11.1, 11.2, 11.4, 11.6)


1 Markov chains and the number of occurrences of a word in a sequence (11.1, 11.2, 11.4, 11.6)
Prof. Tesler, Math 283, Fall 2018

2 Locating overlapping occurrences of a word
Consider a (long) single-stranded nucleotide sequence τ = τ_1 ... τ_N and a (short) word w = w_1 ... w_k, e.g., w = GAGA.
for i = 1 to N-3 {
    if (τ_i τ_{i+1} τ_{i+2} τ_{i+3} == GAGA) { ... }
}
The above scan takes up to 4N comparisons to locate all occurrences of GAGA (kN comparisons for a word w of length k). A finite state automaton (FSA) is a machine that can locate all occurrences while only examining each letter of τ once.
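The windowed scan described above can be sketched in a few lines of Python (an illustrative helper, not from the slides):

```python
# A minimal sketch of the windowed scan above: compare the length-k window
# starting at each position of tau against w. This makes up to k comparisons
# at each of ~N positions, i.e. about kN letter comparisons in total.
def count_word_naive(tau, w):
    k = len(w)
    return sum(1 for i in range(len(tau) - k + 1) if tau[i:i+k] == w)

print(count_word_naive("GAGAGA", "GAGA"))  # 2 (overlapping occurrences)
```

Note that overlapping occurrences are counted: "GAGAGA" contains GAGA starting at positions 1 and 3.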

3 Overlapping occurrences of GAGA
(Figure: the automaton M1, with nodes ∅, G, GA, GAG, GAGA and transition edges labeled by A, C, G, T.)
The states are the nodes ∅, G, GA, GAG, GAGA (the prefixes of w). For w = w_1 w_2 ... w_k, there are k + 1 states (one for each prefix).
Start in the state ∅ (shown on the figure as 0). Scan τ = τ_1 τ_2 ... τ_N one character at a time, left to right.
Transition edges: when examining τ_j, move from the current state to the next state according to which edge τ_j is on. For each node u = w_1 ... w_r and each letter x = A, C, G, T, determine the longest suffix s (possibly ∅) of w_1 ... w_r x that is among the states, and draw an edge u → s labeled x.
The number of times we are in the state GAGA is the desired count of the number of occurrences.
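The construction above can be sketched directly in Python (illustrative code, not from the slides): the states are the prefixes of w, and each edge follows the longest-suffix rule.

```python
# Build the automaton for w by the longest-suffix rule described above, then
# scan tau once, counting visits to the accepting state w.
def build_fsa(w, alphabet="ACGT"):
    states = [w[:i] for i in range(len(w) + 1)]   # '', 'G', 'GA', 'GAG', 'GAGA'
    delta = {}
    for u in states:
        for x in alphabet:
            v = u + x
            while v not in states:                # longest suffix of u+x
                v = v[1:]                         # that is itself a state
            delta[(u, x)] = v
    return delta

def count_overlapping(tau, w):
    delta = build_fsa(w)
    state, count = "", 0
    for x in tau:                                 # each letter examined once
        state = delta[(state, x)]
        count += (state == w)
    return count

print(count_overlapping("GAGAGA", "GAGA"))  # 2
```

The scan itself does one dictionary lookup per letter of τ, matching the single-pass claim above.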

4 Overlapping occurrences of GAGA in τ = CAGAGGTCGAGAGT...
(Figure: the automaton M1 for GAGA, as on the previous slide.)

Time t   State at t   τ_t
1        ∅            C
2        ∅            A
3        ∅            G
4        G            A
5        GA           G
6        GAG          G
7        G            T
8        ∅            C
9        ∅            G
10       G            A
11       GA           G
12       GAG          A
13       GAGA         G
14       GAG          T
15       ∅

5 Non-overlapping occurrences of GAGA
(Figure: M1 (overlaps allowed) and M2 (no overlaps); in M2, the outgoing edges of GAGA are copies of the outgoing edges of ∅.)
For non-overlapping occurrences of w: replace the outgoing edges from w by copies of the outgoing edges from ∅. On the previous slide, the time 13 → 14 transition GAGA → GAG changes to GAGA → G.
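The modification above is a one-line change to the automaton sketch (illustrative Python, not from the slides): copy the start state's outgoing edges onto the accepting state, so a completed match cannot be reused.

```python
# Build the overlap automaton as before, then replace the outgoing edges of
# the accepting state w with copies of the outgoing edges of the start state.
def build_fsa(w, alphabet="ACGT"):
    states = [w[:i] for i in range(len(w) + 1)]
    delta = {}
    for u in states:
        for x in alphabet:
            v = u + x
            while v not in states:   # longest-suffix rule
                v = v[1:]
            delta[(u, x)] = v
    return delta

def count_nonoverlapping(tau, w, alphabet="ACGT"):
    delta = build_fsa(w)
    for x in alphabet:
        delta[(w, x)] = delta[("", x)]   # restart after each complete match
    state, count = "", 0
    for x in tau:
        state = delta[(state, x)]
        count += (state == w)
    return count

print(count_nonoverlapping("GAGAGA", "GAGA"))  # 1 (overlap count would be 2)
```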

6 Motif {GAGA, GTGA}, overlaps permitted
(Figure: the automaton with states ∅, G, GA, GAG, GAGA, GT, GTG, GTGA.)
States: all prefixes of all words in the motif. If a prefix occurs multiple times, only create one node for it.
Transition edges: they may jump from one word of the motif to another, e.g., GTGA → GAG on reading G.
Count the number of times we reach the states for any of the words in the motif (GAGA or GTGA).

7 Markov chains
A Markov chain is similar to a finite state machine, but incorporates probabilities.
Let S be a set of states. We will take S to be a discrete finite set, such as S = {1, 2, ..., s}. Let t = 1, 2, ... denote the time. Let X_1, X_2, ... denote a sequence of random variables with values in S.
The X_t's form a (first order) Markov chain if they obey these rules:
1. The probability of being in a certain state at time t + 1 only depends on the state at time t, not on any earlier states:
   P(X_{t+1} = x_{t+1} | X_1 = x_1, ..., X_t = x_t) = P(X_{t+1} = x_{t+1} | X_t = x_t)
2. The probability of transitioning from state i at time t to state j at time t + 1 only depends on i and j, not on the time t:
   P(X_{t+1} = j | X_t = i) = p_ij at all times t,
for some values p_ij, which form an s × s transition matrix.

8 Transition matrix
The transition matrix, P, of the Markov chain M1 is

   From state     To state:  ∅            G    GA   GAG  GAGA
   1: ∅                      p_A+p_C+p_T  p_G  0    0    0
   2: G                      p_C+p_T      p_G  p_A  0    0
   3: GA                     p_A+p_C+p_T  0    0    p_G  0
   4: GAG                    p_C+p_T      p_G  0    0    p_A
   5: GAGA                   p_A+p_C+p_T  0    0    p_G  0

with entries denoted P_11, ..., P_55.
Notice that the entries in each row sum up to p_A + p_C + p_G + p_T = 1. A matrix with all entries ≥ 0 and all row sums equal to 1 is called a stochastic matrix. The transition matrix of a Markov chain is always stochastic.
All row sums = 1 can be written P 1 = 1, where 1 = (1, ..., 1)^T, so 1 is a right eigenvector of P with eigenvalue 1.
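A sketch (in Python, not from the slides) that builds this matrix row by row from the automaton edges and checks that it is stochastic; entry (u, s) sums p_x over the letters x whose edge goes u → s. The equal probabilities 1/4 below are one choice, matching the next slide.

```python
# Build P for w = GAGA from the longest-suffix automaton and check row sums.
from fractions import Fraction

p = {"A": Fraction(1, 4), "C": Fraction(1, 4), "G": Fraction(1, 4), "T": Fraction(1, 4)}
w = "GAGA"
states = [w[:i] for i in range(len(w) + 1)]
P = [[Fraction(0)] * len(states) for _ in states]
for i, u in enumerate(states):
    for x, px in p.items():
        v = u + x
        while v not in states:          # longest-suffix rule from slide 3
            v = v[1:]
        P[i][states.index(v)] += px

assert all(sum(row) == 1 for row in P)  # stochastic: rows sum to p_A+p_C+p_G+p_T = 1
print(P[1])                             # row for G: (p_C+p_T, p_G, p_A, 0, 0)
```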

9 Transition matrices for GAGA
(Figures: M1 (overlaps allowed) and M2 (no overlaps), with edge labels replaced by probabilities, e.g., p_C + p_T.)

        3/4  1/4  0    0    0              3/4  1/4  0    0    0
        1/2  1/4  1/4  0    0              1/2  1/4  1/4  0    0
   P1 = 3/4  0    0    1/4  0         P2 = 3/4  0    0    1/4  0
        1/2  1/4  0    0    1/4            1/2  1/4  0    0    1/4
        3/4  0    0    1/4  0              3/4  1/4  0    0    0

The matrices are shown for the case that all nucleotides have equal probabilities 1/4. P2 (no overlaps) is obtained from P1 (overlaps allowed) by replacing the last row with a copy of the first row.

10 Other applications of automata
Automata / state machines are also used in other applications in math and computer science. The transition weights may be defined differently, and the matrices usually aren't stochastic.
Combinatorics: count walks through the automaton (instead of computing their probabilities) by setting each transition weight u → s to 1.
Computer science (formal languages, classifiers, ...): does the string τ contain GAGA? Output 1 if it does, 0 otherwise. Modify M1: remove the outgoing edges on GAGA. On reaching state GAGA, terminate with output 1. If the end of τ is reached, terminate with output 0. This is called a deterministic finite acceptor (DFA).
Markov chains: instead of considering a specific string τ, we'll compute probabilities, expected values, ... over the sample space of all strings of length n.

11 Other Markov chain examples
A Markov chain is kth order if the probability of X_t = i depends on the values of X_{t-1}, ..., X_{t-k}. It can be converted to a first order Markov chain by making new states that record more history.
Positional independence: instead of a null hypothesis that a DNA sequence is generated by repeated rolls of a biased four-sided die, we could use a Markov chain. The simplest is a one-step transition matrix

        p_AA  p_AC  p_AG  p_AT
   P =  p_CA  p_CC  p_CG  p_CT
        p_GA  p_GC  p_GG  p_GT
        p_TA  p_TC  p_TG  p_TT

P could be the same at all positions. In a coding region, it could be different for the first, second, and third positions of codons.
Nucleotide evolution: there are models of random point mutations over the course of evolution based on Markov chains of the form P (same as above), in which X_t is the state A, C, G, T of the nucleotide at a given position in a sequence at time (generation) t.

12 Questions about Markov chains
1. What is the probability of being in a particular state after n steps?
2. What is the probability of being in a particular state as n → ∞?
3. What is the reverse Markov chain?
4. If you are in state i, what is the expected number of time steps until the next time you are in state j? What is the variance of this? What is the complete probability distribution?
5. Starting in state i, what is the expected number of visits to state j before reaching state k?

13 Transition probabilities after two steps
(Figure: paths from state i at time t, through each possible state r = 1, ..., 5 at time t + 1, to state j at time t + 2, with edge probabilities P_ir and P_rj.)
To compute the probability of going from state i at time t to state j at time t + 2, consider all the states it could go through at time t + 1:
   P(X_{t+2} = j | X_t = i)
      = Σ_r P(X_{t+1} = r | X_t = i) P(X_{t+2} = j | X_{t+1} = r, X_t = i)
      = Σ_r P(X_{t+1} = r | X_t = i) P(X_{t+2} = j | X_{t+1} = r)
      = Σ_r P_ir P_rj = (P²)_ij
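A numerical check of the two-step formula above for the GAGA chain with all nucleotide probabilities 1/4 (illustrative Python, not from the slides):

```python
# (P^2)_ij = sum_r P_ir P_rj, computed exactly with rational arithmetic.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]
P2 = [[sum(P[i][r] * P[r][j] for r in range(5)) for j in range(5)] for i in range(5)]
print(P2[0])  # first row of P^2: (11/16, 1/4, 1/16, 0, 0)
```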

14 Transition probabilities after n steps
For n ≥ 0, the transition matrix from time t to time t + n is P^n:
   P(X_{t+n} = j | X_t = i)
      = Σ_{r_1, ..., r_{n-1}} P(X_{t+1} = r_1 | X_t = i) P(X_{t+2} = r_2 | X_{t+1} = r_1) ···
      = Σ_{r_1, ..., r_{n-1}} P_{i r_1} P_{r_1 r_2} ··· P_{r_{n-1} j}
      = (P^n)_ij
(sum over possible states r_1, ..., r_{n-1} at times t + 1, ..., t + (n − 1))

15 State probability vector
α_i(t) = P(X_t = i) is the probability of being in state i at time t.
Column vector α(t) = (α_1(t), ..., α_s(t))^T, or transpose it to get a row vector α(t)^T = (α_1(t), ..., α_s(t)).
The probabilities at time t + n are
   α_j(t + n) = P(X_{t+n} = j) = Σ_i P(X_{t+n} = j | X_t = i) P(X_t = i) = Σ_i α_i(t) (P^n)_ij = (α(t)^T P^n)_j
so α(t + n)^T = α(t)^T P^n (row vector times matrix), or equivalently α(t + n) = (P^T)^n α(t) (matrix times column vector).

16 State vector after n steps for GAGA; P = P1
At t = 0, suppose there is a 1/3 chance of being in the 1st state, a 2/3 chance of being in the 2nd state, and no chance of being in the other states: α(0)^T = (1/3, 2/3, 0, 0, 0).
Time t = 2 is n = 2 − 0 = 2 steps later:
   α(2)^T = (1/3, 2/3, 0, 0, 0) P² = (11/16, 5/24, 1/16, 1/24, 0)
Alternately, α(2) = (P^T)² α(0) gives the same result as a column vector.
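Reproducing the computation above exactly (illustrative Python, not from the slides):

```python
# alpha(2) = alpha(0) P^2 for the GAGA chain with equal nucleotide probabilities.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]
alpha = [F(1, 3), F(2, 3), F(0), F(0), F(0)]
for _ in range(2):   # two steps: alpha <- alpha P (row vector times matrix)
    alpha = [sum(alpha[i] * P[i][j] for i in range(5)) for j in range(5)]
print(alpha)  # [11/16, 5/24, 1/16, 1/24, 0]
```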

17 Transition probabilities after n steps for GAGA; P = P1
P⁰ = I, and P¹, P², P³, P⁴ can be computed directly; for this chain, P^n = P⁴ for all n ≥ 4.
Regardless of the starting state, the probabilities of being in states 1, ..., 5 at time t (when t is large enough) are 11/16, 15/64, 15/256, 1/64, 1/256.
Usually P^n just approaches a limit asymptotically as n increases, rather than reaching it. We'll see other examples later (like P2).
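An exact check of the claim above with rational arithmetic (illustrative Python, not from the slides): every row of P⁴ already equals (11/16, 15/64, 15/256, 1/64, 1/256), and P⁵ = P⁴.

```python
# Verify P^5 = P^4 and that every row of P^4 is the limiting distribution.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(5)) for j in range(5)]
            for i in range(5)]

P2 = matmul(P, P)
P4 = matmul(P2, P2)
phi = [F(11, 16), F(15, 64), F(15, 256), F(1, 64), F(1, 256)]
assert all(row == phi for row in P4)   # rows of P^4 = limiting distribution
assert matmul(P4, P) == P4             # P^5 = P^4: the limit is reached exactly
```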

18 Matrix powers in Matlab and R
Matlab
>> P = [ [ 3/4, 1/4, 0, 0, 0 ]; % ∅
         [ 2/4, 1/4, 1/4, 0, 0 ]; % G
         [ 3/4, 0, 0, 1/4, 0 ]; % GA
         [ 2/4, 1/4, 0, 0, 1/4 ]; % GAG
         [ 3/4, 0, 0, 1/4, 0 ] % GAGA
       ]
>> P * P % or P^2

R
> P = rbind(
+   c(3/4, 1/4, 0, 0, 0), # ∅
+   c(2/4, 1/4, 1/4, 0, 0), # G
+   c(3/4, 0, 0, 1/4, 0), # GA
+   c(2/4, 1/4, 0, 0, 1/4), # GAG
+   c(3/4, 0, 0, 1/4, 0) # GAGA
+ )
> P
> P %*% P

Note: R doesn't have a built-in matrix power function. The > and + symbols above are prompts, not something you enter.

19 Stationary distribution, a.k.a. steady state distribution
If P is irreducible and aperiodic (these will be defined soon), then P^n approaches a limit of this form as n → ∞:

                  ϕ_1  ϕ_2  ···  ϕ_s
   lim  P^n  =    ϕ_1  ϕ_2  ···  ϕ_s
   n→∞            ···
                  ϕ_1  ϕ_2  ···  ϕ_s

In other words, no matter what the starting state, the probability of being in state j after n steps approaches ϕ_j.
The row vector ϕ = (ϕ_1, ..., ϕ_s) is called the stationary distribution of the Markov chain. It is stationary because these probabilities stay the same from one time to the next; in matrix notation, ϕ P = ϕ, or P^T ϕ^T = ϕ^T. So ϕ is a left eigenvector of P with eigenvalue 1.
Since it represents probabilities of being in each state, the components of ϕ add up to 1.

20 Stationary distribution: computing it for example M1
Solve ϕ P = ϕ, i.e., (ϕ_1, ..., ϕ_5) P1 = (ϕ_1, ..., ϕ_5):
   ϕ_1 = 3/4 ϕ_1 + 1/2 ϕ_2 + 3/4 ϕ_3 + 1/2 ϕ_4 + 3/4 ϕ_5
   ϕ_2 = 1/4 ϕ_1 + 1/4 ϕ_2 + 0 ϕ_3 + 1/4 ϕ_4 + 0 ϕ_5
   ϕ_3 = 0 ϕ_1 + 1/4 ϕ_2 + 0 ϕ_3 + 0 ϕ_4 + 0 ϕ_5
   ϕ_4 = 0 ϕ_1 + 0 ϕ_2 + 1/4 ϕ_3 + 0 ϕ_4 + 1/4 ϕ_5
   ϕ_5 = 0 ϕ_1 + 0 ϕ_2 + 0 ϕ_3 + 1/4 ϕ_4 + 0 ϕ_5
and the total probability equation ϕ_1 + ϕ_2 + ϕ_3 + ϕ_4 + ϕ_5 = 1.
This is 6 equations in 5 unknowns, so it is overdetermined. Actually, the first 5 equations are underdetermined; they add up to ϕ_1 + ··· + ϕ_5 = ϕ_1 + ··· + ϕ_5. Knock out the equation for ϕ_5 and solve the rest of them to get
   ϕ = (11/16, 15/64, 15/256, 1/64, 1/256) ≈ (0.6875, 0.2344, 0.0586, 0.0156, 0.0039).
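A sketch of the computation above (illustrative Python, not from the slides): drop the redundant ϕ_5 balance equation, keep the other four together with Σ ϕ_i = 1, and solve the resulting 5 × 5 system exactly by Gaussian elimination over fractions.

```python
# Solve phi P = phi with the normalization sum(phi) = 1, exactly.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]

# Row j of A encodes phi_j = sum_i phi_i P_ij, i.e. sum_i phi_i (P_ij - delta_ij) = 0.
A = [[P[i][j] - (F(1) if i == j else F(0)) for i in range(5)] for j in range(4)]
A.append([F(1)] * 5)                  # normalization: phi_1 + ... + phi_5 = 1
b = [F(0)] * 4 + [F(1)]

def gauss_solve(A, b):
    # Plain Gauss-Jordan elimination with partial pivoting on nonzero entries.
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [a - M[r][c] * d for a, d in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

phi = gauss_solve(A, b)
print(phi)  # [11/16, 15/64, 15/256, 1/64, 1/256]
```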

21 Solving the equations in Matlab or R
(This method doesn't use the functions for eigenvectors.)
Matlab
>> eye(5) % identity
>> P' - eye(5) % transpose minus identity
>> [P' - eye(5); ones(1,5)]
>> sstate = [P' - eye(5); ones(1,5)] \ [0;0;0;0;0;1]

R
> diag(1,5) # identity
> t(P) - diag(1,5) # transpose minus identity
> rbind(t(P) - diag(1,5), c(1,1,1,1,1))
> sstate = qr.solve(rbind(t(P) - diag(1,5),
+                         c(1,1,1,1,1)), c(0,0,0,0,0,1))
> sstate

22 Eigenvalues of P
A transition matrix is stochastic: all entries are ≥ 0 and its row sums are all 1, so P 1 = 1, where 1 = (1, ..., 1)^T. Thus, λ = 1 is an eigenvalue of P and 1 is a right eigenvector.
There is also a left eigenvector of P with eigenvalue 1: w P = w, where w is a row vector. Normalize it so its entries add up to 1, to get the stationary distribution ϕ.
All eigenvalues λ of a stochastic matrix have |λ| ≤ 1. An irreducible aperiodic Markov chain has just one eigenvalue equal to 1.
The 2nd largest |λ| determines how fast P^n converges. For example, if P is diagonalizable, the spectral decomposition is
   P^n = 1^n M_1 + λ_2^n M_2 + λ_3^n M_3 + ···
but there may be complications (periodic Markov chains, complex eigenvalues, ...).

23 Technicalities: reducibility
(Figure: a 10-state chain containing an absorbing state 3 and the components {4, 5, 6, 7} and {8, 9, 10}.)
A Markov chain is irreducible if every state can be reached from every other state after enough steps. The above example is reducible since there are states that cannot be reached from each other: after sufficient time, you are either stuck in state 3, the component {4, 5, 6, 7}, or the component {8, 9, 10}.

24 Technicalities: period
State i has period d if the Markov chain can only go from state i to itself in multiples of d steps, where d is the maximum number that satisfies that. If d > 1, then state i is periodic.
A Markov chain is periodic if at least one state is periodic, and is aperiodic if no states are periodic. All states in a component have the same period.
Component {4, 5, 6, 7} has period 2 and component {8, 9, 10} has period 3, so the Markov chain is periodic.

25 Technicalities: absorbing states
An absorbing state has all of its outgoing edges going to itself; e.g., state 3 above. An irreducible Markov chain with two or more states cannot have any absorbing states.

26 Technicalities: summary
There are generalizations to infinite numbers of discrete or continuous states and to continuous time. We will work with Markov chains that are finite, discrete, irreducible, and aperiodic, unless otherwise stated.
For a finite discrete Markov chain on two or more states:
   irreducible and aperiodic, with no absorbing states
is equivalent to
   P or a power of P has all entries greater than 0,
and in this case, lim_{n→∞} P^n exists and all of its rows are the stationary distribution.

27 Reverse Markov chain
(Figure: the chain M1 with states ∅, G, GA, GAG, GAGA, and its reverse, with the same nodes and the edge directions reversed.)
A Markov chain modeling forwards progression of time can be reversed to make predictions about the past. For example, this is done in models of nucleotide evolution.
The graph of the reverse Markov chain has the same nodes as the forwards chain, and the same edges but with reversed directions and new probabilities.

28 Reverse Markov chain
The transition matrix P of the forwards Markov chain was defined so that P(X_{t+1} = j | X_t = i) = p_ij at all times t.
Assume the forwards machine has run long enough to reach the stationary distribution, P(X_t = i) = ϕ_i. The reverse Markov chain has transition matrix Q, where
   q_ij = P(X_t = j | X_{t+1} = i) = P(X_{t+1} = i | X_t = j) P(X_t = j) / P(X_{t+1} = i) = p_ji ϕ_j / ϕ_i
(Recall Bayes' Theorem: P(B | A) = P(A | B) P(B) / P(A).)

29 Reverse Markov chain of M1
(Figure: M1 and its reverse, each with nodes ∅, G, GA, GAG, GAGA.)
The stationary distribution of P1 is ϕ = (11/16, 15/64, 15/256, 1/64, 1/256).
Example of one entry: the edge ∅ → GA in the reverse chain has
   q_13 = p_31 ϕ_3 / ϕ_1 = (3/4)(15/256)/(11/16) = 45/704 ≈ 0.0639.
This means that in the steady state of the forwards chain, when ∅ is entered, there is a probability ≈ 0.0639 that the previous state was GA.
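Reproducing the reverse-chain entry above (illustrative Python, not from the slides): q_ij = p_ji ϕ_j / ϕ_i, using the stationary distribution of the GAGA chain.

```python
# Build Q entrywise from P and phi; the reverse chain is also stochastic.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]
phi = [F(11, 16), F(15, 64), F(15, 256), F(1, 64), F(1, 256)]
Q = [[P[j][i] * phi[j] / phi[i] for j in range(5)] for i in range(5)]

assert all(sum(row) == 1 for row in Q)  # rows sum to 1 because phi P = phi
print(Q[0][2], "=", float(Q[0][2]))     # q_13 = 45/704 ≈ 0.0639
```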

30 Matlab and R
Matlab
>> d_sstate = diag(sstate)
>> Q = inv(d_sstate) * P' * d_sstate

R
> d_sstate = diag(sstate)
> Q = solve(d_sstate) %*% t(P) %*% d_sstate

31 Expected time from state i till the next time in state j
(Figure: the chain M1.)
If M1 is in state ∅, what is the expected number of steps until the next time it is in state GAGA? More generally, what's the expected number of steps from state i to state j?
Fix the end state j once and for all. Simultaneously solve for the expected number of steps from all start states i.
For i = 1, ..., s, let N_i be a random variable for the number of steps from state i to the next time the chain is in state j. "Next time" means that if i = j, we count until the next time at state j, with N_j ≥ 1; we don't count it as already there in 0 steps.
We'll develop systems of equations for E(N_i), Var(N_i), and P_{N_i}(x).

32 Expected time from state i till the next time in state j
(Figure: paths from state 1 at time t: one step directly to j = 5 with probability P_15, or one step to r = 1, 2, 3, 4 with probability P_1r followed by N_r further steps; the dotted paths have no occurrences of state j in the middle.)
Fix j = 5. The random variable N_r = # steps from state r to the next time in state j.
Expected # steps from state i = 1 to j = 5 (repeat this for all i), conditioning on the state at time t + 1:
   E(N_1) = P_11 E(N_1 + 1) + P_12 E(N_2 + 1) + P_13 E(N_3 + 1) + P_14 E(N_4 + 1) + P_15 E(1)
Both N_1's have the same distribution, and we can expand the E(·)'s:
   E(N_1) = Σ_r P_1r + Σ_{r : r ≠ j} P_1r E(N_r) = 1 + Σ_{r : r ≠ j} P_1r E(N_r)

33 Expected time from state i till the next time in state j
Recall we fixed j, and defined N_i relative to it. Start in state i. There is a probability P_ir of going one step to state r.
If r = j, we are done in one step: E(N_i | 1st step is i → j) = 1.
If r ≠ j, the expected number of steps after the first step is E(N_r): E(N_i | 1st step is i → r) = E(N_r + 1) = E(N_r) + 1.
Combine with the probability of each value of r:
   E(N_i) = P_ij · 1 + Σ_{r=1, r≠j}^s P_ir E(N_r + 1)
          = P_ij + Σ_{r=1, r≠j}^s P_ir (E(N_r) + 1)
          = Σ_{r=1}^s P_ir + Σ_{r=1, r≠j}^s P_ir E(N_r)
          = 1 + Σ_{r=1, r≠j}^s P_ir E(N_r)
Doing this for all s states, i = 1, ..., s, gives s equations in the s unknowns E(N_1), ..., E(N_s).

34 Expected times between states in M1: times to state 5
   E(N_1) = 3/4 (E(N_1) + 1) + 1/4 (E(N_2) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_2)
   E(N_2) = 1/2 (E(N_1) + 1) + 1/4 (E(N_2) + 1) + 1/4 (E(N_3) + 1)   = 1 + 1/2 E(N_1) + 1/4 E(N_2) + 1/4 E(N_3)
   E(N_3) = 3/4 (E(N_1) + 1) + 1/4 (E(N_4) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_4)
   E(N_4) = 1/2 (E(N_1) + 1) + 1/4 (E(N_2) + 1) + 1/4 (1)             = 1 + 1/2 E(N_1) + 1/4 E(N_2)
   E(N_5) = 3/4 (E(N_1) + 1) + 1/4 (E(N_4) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_4)
This is 5 equations in 5 unknowns E(N_1), ..., E(N_5). Matrix format:

    1/4  -1/4   0     0    0     E(N_1)     1
   -1/2   3/4  -1/4   0    0     E(N_2)     1
   -3/4   0     1    -1/4  0  ×  E(N_3)  =  1
   -1/2  -1/4   0     1    0     E(N_4)     1
   -3/4   0     0    -1/4  1     E(N_5)     1

(Zero out the jth column of P, negate it, and add 1 to each diagonal entry; the right side is all 1's.)
Solution: E(N_1) = 272, E(N_2) = 268, E(N_3) = 256, E(N_4) = 204, E(N_5) = 256.
Matlab and R: enter the matrix C and vector r. Solve C x = r with
   Matlab: x = C \ r or x = inv(C) * r
   R: x = solve(C, r)
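Solving the 5 × 5 system above exactly (illustrative Python, not from the slides): C x = r, where C is the identity minus P with its 5th column zeroed out, and r is all ones.

```python
# Expected hitting times to state 5 (GAGA) for the GAGA chain.
from fractions import Fraction

F = Fraction
P = [[F(3, 4), F(1, 4), F(0), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 4), F(0), F(0)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)],
     [F(1, 2), F(1, 4), F(0), F(0), F(1, 4)],
     [F(3, 4), F(0), F(0), F(1, 4), F(0)]]
j = 4                                   # target state 5, 0-indexed
C = [[(F(1) if i == k else F(0)) - (P[i][k] if k != j else F(0))
      for k in range(5)] for i in range(5)]
r = [F(1)] * 5

def gauss_solve(A, b):
    # Plain Gauss-Jordan elimination over fractions.
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        piv = next(q for q in range(c, n) if M[q][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for q in range(n):
            if q != c and M[q][c] != 0:
                M[q] = [a - M[q][c] * d for a, d in zip(M[q], M[c])]
    return [M[q][n] for q in range(n)]

E = gauss_solve(C, r)
print(E)  # [272, 268, 256, 204, 256]
```

As a sanity check, E(N_5) = 256 = 1/ϕ_5, the mean recurrence time of state 5.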

35 Variance and PGF of the number of steps between states
We may compute E(g(N_i)) for any function g by setting up recurrences in the same way. For each i = 1, ..., s:
   E(g(N_i)) = P_ij g(1) + Σ_{r≠j} P_ir E(g(N_r + 1)) = expansion depending on g
Variance of the N_i's: Var(N_i) = E(N_i²) − (E(N_i))²
   E(N_i²) = P_ij · 1² + Σ_{r=1, r≠j}^s P_ir E((N_r + 1)²) = 1 + 2 Σ_{r=1, r≠j}^s P_ir E(N_r) + Σ_{r=1, r≠j}^s P_ir E(N_r²)
Plug in the previous solution of E(N_1), ..., E(N_s). Then solve the s equations for the s unknowns E(N_1²), ..., E(N_s²).
PGF: P_{N_i}(x) = E(x^{N_i}) = Σ_{n=0}^∞ P(N_i = n) x^n
   E(x^{N_i}) = P_ij x + Σ_{r=1, r≠j}^s P_ir E(x^{N_r + 1}) = P_ij x + Σ_{r=1, r≠j}^s P_ir x E(x^{N_r})
Solve the s equations for the s unknowns E(x^{N_1}), ..., E(x^{N_s}). See the old handout for a worked out example.

36 Powers of matrices (see separate slides)
Sample matrix:
   P = [ 8  -1 ]
       [ 6   3 ]
Diagonalization: P = V D V⁻¹ with
   V = [ 1  2 ]    D = [ 5  0 ]    V⁻¹ = [ -2     1   ]
       [ 3  4 ]        [ 0  6 ]         [ 3/2  -1/2 ]
   P^n = (V D V⁻¹)(V D V⁻¹) ··· (V D V⁻¹) = V D^n V⁻¹ = [ 1  2 ] [ 5^n  0  ] [ -2     1   ]
                                                        [ 3  4 ] [ 0   6^n ] [ 3/2  -1/2 ]
When a square (s × s) matrix P has distinct eigenvalues, it can be diagonalized as P = V D V⁻¹, where
   D is a diagonal matrix of the eigenvalues of P (in any order);
   the columns of V are right eigenvectors of P (in the same order as D);
   the rows of V⁻¹ are left eigenvectors of P (in the same order as D).
If any eigenvalues are equal, it may or may not be diagonalizable, but there is a generalization called the Jordan Canonical Form, P = V J V⁻¹, giving P^n = V J^n V⁻¹. J has the eigenvalues on the diagonal and 1's and 0's just above it, and is also easy to raise to powers.

37 Matrix powers: spectral decomposition (distinct eigenvalues)
Powers of P: P^n = (V D V⁻¹)(V D V⁻¹) ··· = V D^n V⁻¹:
   P^n = V [ 5^n  0  ] V⁻¹ = 5^n V [ 1  0 ] V⁻¹ + 6^n V [ 0  0 ] V⁻¹
           [ 0   6^n ]             [ 0  0 ]             [ 0  1 ]
   V [ 1  0 ] V⁻¹ = [ 1 ] [ -2  1 ] = [ -2  1 ] = r_1 l_1
     [ 0  0 ]       [ 3 ]             [ -6  3 ]
   V [ 0  0 ] V⁻¹ = [ 2 ] [ 3/2  -1/2 ] = [ 3  -1 ] = r_2 l_2
     [ 0  1 ]       [ 4 ]                 [ 6  -2 ]
Spectral decomposition of P^n:
   P^n = V D^n V⁻¹ = λ_1^n r_1 l_1 + λ_2^n r_2 l_2 = 5^n [ -2  1 ] + 6^n [ 3  -1 ]
                                                         [ -6  3 ]       [ 6  -2 ]
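A quick check of the decomposition above (illustrative Python, not from the slides): P = 5 M_1 + 6 M_2 recovers the sample matrix, and P² matches 5² M_1 + 6² M_2.

```python
# Spectral decomposition check for P = [[8, -1], [6, 3]], eigenvalues 5 and 6.
M1 = [[-2, 1], [-6, 3]]   # r1 l1, the rank-one projector for eigenvalue 5
M2 = [[3, -1], [6, -2]]   # r2 l2, the rank-one projector for eigenvalue 6

def combo(n):
    return [[5**n * M1[i][j] + 6**n * M2[i][j] for j in range(2)] for i in range(2)]

P = combo(1)
assert P == [[8, -1], [6, 3]]
P2 = [[sum(P[i][r] * P[r][j] for r in range(2)) for j in range(2)] for i in range(2)]
assert P2 == combo(2)
print(P2)  # [[58, -11], [66, 3]]
```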

38 Jordan Canonical Form
Matrices with two or more equal eigenvalues cannot necessarily be diagonalized. Matlab and R do not give an error or warning.
The Jordan Canonical Form is a generalization that turns into diagonalization when possible, and still works otherwise:
   P = V J V⁻¹,  J = diag(B_1, B_2, B_3, ...),  P^n = V J^n V⁻¹,  J^n = diag(B_1^n, B_2^n, B_3^n, ...)
where each Jordan block B_i has one eigenvalue λ_i repeated on the diagonal, 1's just above the diagonal, and 0's elsewhere; e.g., for a 3 × 3 block,
   B_i = [ λ_i  1    0   ]        B_i^n = [ λ_i^n  C(n,1) λ_i^{n-1}  C(n,2) λ_i^{n-2} ]
         [ 0    λ_i  1   ]                [ 0      λ_i^n             C(n,1) λ_i^{n-1} ]
         [ 0    0    λ_i ]                [ 0      0                 λ_i^n            ]
In applications where repeated eigenvalues are a possibility, it's best to use the Jordan Canonical Form.
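A small check of the Jordan-block power formula above (illustrative Python, not from the slides): for an m × m block B with eigenvalue λ, (B^n)_{i,j} = C(n, j−i) λ^{n−(j−i)} for j ≥ i (and 0 below the diagonal), provided n ≥ m − 1.

```python
# Compare repeated multiplication of a Jordan block against the closed form.
from math import comb

def jordan_block(lam, m):
    return [[lam if j == i else (1 if j == i + 1 else 0) for j in range(m)]
            for i in range(m)]

def matpow(A, n):
    m = len(A)
    R = [[int(i == j) for j in range(m)] for i in range(m)]   # identity
    for _ in range(n):
        R = [[sum(R[i][r] * A[r][j] for r in range(m)) for j in range(m)]
             for i in range(m)]
    return R

def block_power_formula(lam, m, n):
    return [[comb(n, j - i) * lam ** (n - (j - i)) if j >= i else 0
             for j in range(m)] for i in range(m)]

B = jordan_block(2, 3)
assert matpow(B, 5) == block_power_formula(2, 3, 5)
print(matpow(B, 5))  # [[32, 80, 80], [0, 32, 80], [0, 0, 32]]
```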

39 Jordan Canonical Form for P1 in Matlab
(R doesn't currently have the JCF available, either built-in or as an add-on.)
>> P = [ [ 3/4, 1/4, 0, 0, 0 ]; % ∅
         [ 2/4, 1/4, 1/4, 0, 0 ]; % G
         [ 3/4, 0, 0, 1/4, 0 ]; % GA
         [ 2/4, 1/4, 0, 0, 1/4 ]; % GAG
         [ 3/4, 0, 0, 1/4, 0 ] % GAGA
       ]
>> [V,J] = jordan(P)
>> Vi = inv(V)
>> V * J * Vi

40 Powers of P using the JCF
P = V J V⁻¹ gives P^n = V J^n V⁻¹, and J^n is easy to compute:
>> J
>> J^2
>> J^3
>> J^4
>> J^5
For this matrix, J^n = J^4 when n ≥ 4, so P^n = V J^n V⁻¹ = V J^4 V⁻¹ = P^4 for n ≥ 4.

41 Non-overlapping occurrences of GAGA
(Figure: the automaton M2, with transition matrix:)

        3/4  1/4  0    0    0
        1/2  1/4  1/4  0    0
   P2 = 3/4  0    0    1/4  0
        1/2  1/4  0    0    1/4
        3/4  1/4  0    0    0

>> [V2,J2] = jordan(P2)
>> V2i = inv(V2)
The diagonal of J2 holds the eigenvalues: 0 (twice, in one 2 × 2 Jordan block), 1, i/4, and −i/4. V2 and V2i = (V2)⁻¹ have complex entries.

42 Non-overlapping occurrences of GAGA: JCF
   (J2)^n = diag( [ 0  1 ]^n , 1^n , (i/4)^n , (−i/4)^n ) = diag( [ 0  0 ] , 1 , (i/4)^n , (−i/4)^n ) for n ≥ 2
                 [ 0  0 ]                                        [ 0  0 ]
since the 2 × 2 Jordan block at eigenvalue 0 squares to zero.
One eigenvalue = 1. It's the third one listed, so the stationary distribution is the third row of (V2)⁻¹, normalized:
   >> V2i(3,:) / sum(V2i(3,:))
Two eigenvalues = 0. The interpretation of one of them is that the first and last rows of P2 are equal, so (1, 0, 0, 0, −1)^T is a right eigenvector of P2 with eigenvalue 0.
Two complex eigenvalues, 0 ± i/4. Since P2 is real, all complex eigenvalues must come in conjugate pairs. The eigenvectors also come in conjugate pairs (last 2 columns of V2; last 2 rows of (V2)⁻¹).

43 Spectral decomposition with JCF and complex eigenvalues
   (P2)^n = (V2)(J2)^n(V2)⁻¹ = [ r_1  r_2 ] [ 0  1 ]^n [ l_1 ] + r_3 (1)^n l_3 + r_4 (i/4)^n l_4 + r_5 (−i/4)^n l_5
                                            [ 0  0 ]   [ l_2 ]
The first term vanishes when n ≥ 2, so when n ≥ 2 the format is
   (P2)^n = 1^n S3 + (i/4)^n S4 + (−i/4)^n S5 = S3 + (i/4)^n S4 + (−i/4)^n S5

44 Spectral decomposition with JCF and complex eigenvalues
For n ≥ 2, (P2)^n = S3 + (i/4)^n S4 + (−i/4)^n S5, where
>> S3 = V2(:,3) * V2i(3,:)
>> S4 = V2(:,4) * V2i(4,:)
>> S5 = V2(:,5) * V2i(5,:)
S3 corresponds to the stationary distribution. S4 and S5 are complex conjugates, so (i/4)^n S4 + (−i/4)^n S5 is a sum of two complex conjugates; thus, it is real-valued, even though complex numbers are involved in the computation.


More information

0.1 Naive formulation of PageRank

0.1 Naive formulation of PageRank PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more

More information

ISE/OR 760 Applied Stochastic Modeling

ISE/OR 760 Applied Stochastic Modeling ISE/OR 760 Applied Stochastic Modeling Topic 2: Discrete Time Markov Chain Yunan Liu Department of Industrial and Systems Engineering NC State University Yunan Liu (NC State University) ISE/OR 760 1 /

More information

HW 2 Solutions. The variance of the random walk is explosive (lim n Var (X n ) = ).

HW 2 Solutions. The variance of the random walk is explosive (lim n Var (X n ) = ). Stochastic Processews Prof Olivier Scaillet TA Adrien Treccani HW 2 Solutions Exercise. The process {X n, n } is a random walk starting at a (cf. definition in the course. [ n ] E [X n ] = a + E Z i =

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Sequence modelling. Marco Saerens (UCL) Slides references

Sequence modelling. Marco Saerens (UCL) Slides references Sequence modelling Marco Saerens (UCL) Slides references Many slides and figures have been adapted from the slides associated to the following books: Alpaydin (2004), Introduction to machine learning.

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

Introduction to Computational Biology Lecture # 14: MCMC - Markov Chain Monte Carlo

Introduction to Computational Biology Lecture # 14: MCMC - Markov Chain Monte Carlo Introduction to Computational Biology Lecture # 14: MCMC - Markov Chain Monte Carlo Assaf Weiner Tuesday, March 13, 2007 1 Introduction Today we will return to the motif finding problem, in lecture 10

More information

Unit 16: Hidden Markov Models

Unit 16: Hidden Markov Models Computational Statistics with Application to Bioinformatics Prof. William H. Press Spring Term, 2008 The University of Texas at Austin Unit 16: Hidden Markov Models The University of Texas at Austin, CS

More information

Chapter 5: Integer Compositions and Partitions and Set Partitions

Chapter 5: Integer Compositions and Partitions and Set Partitions Chapter 5: Integer Compositions and Partitions and Set Partitions Prof. Tesler Math 184A Winter 2017 Prof. Tesler Ch. 5: Compositions and Partitions Math 184A / Winter 2017 1 / 32 5.1. Compositions A strict

More information

Mathematical Methods for Computer Science

Mathematical Methods for Computer Science Mathematical Methods for Computer Science Computer Science Tripos, Part IB Michaelmas Term 2016/17 R.J. Gibbens Problem sheets for Probability methods William Gates Building 15 JJ Thomson Avenue Cambridge

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Language Acquisition and Parameters: Part II

Language Acquisition and Parameters: Part II Language Acquisition and Parameters: Part II Matilde Marcolli CS0: Mathematical and Computational Linguistics Winter 205 Transition Matrices in the Markov Chain Model absorbing states correspond to local

More information

MAA704, Perron-Frobenius theory and Markov chains.

MAA704, Perron-Frobenius theory and Markov chains. November 19, 2013 Lecture overview Today we will look at: Permutation and graphs. Perron frobenius for non-negative. Stochastic, and their relation to theory. Hitting and hitting probabilities of chain.

More information

Lecture 21. David Aldous. 16 October David Aldous Lecture 21

Lecture 21. David Aldous. 16 October David Aldous Lecture 21 Lecture 21 David Aldous 16 October 2015 In continuous time 0 t < we specify transition rates or informally P(X (t+δ)=j X (t)=i, past ) q ij = lim δ 0 δ P(X (t + dt) = j X (t) = i) = q ij dt but note these

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Anthony Trubiano April 11th, 2018 1 Introduction Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Chapter 5: Integer Compositions and Partitions and Set Partitions

Chapter 5: Integer Compositions and Partitions and Set Partitions Chapter 5: Integer Compositions and Partitions and Set Partitions Prof. Tesler Math 184A Fall 2017 Prof. Tesler Ch. 5: Compositions and Partitions Math 184A / Fall 2017 1 / 46 5.1. Compositions A strict

More information

Lecture #5. Dependencies along the genome

Lecture #5. Dependencies along the genome Markov Chains Lecture #5 Background Readings: Durbin et. al. Section 3., Polanski&Kimmel Section 2.8. Prepared by Shlomo Moran, based on Danny Geiger s and Nir Friedman s. Dependencies along the genome

More information

Solutions to Problem Set 5

Solutions to Problem Set 5 UC Berkeley, CS 74: Combinatorics and Discrete Probability (Fall 00 Solutions to Problem Set (MU 60 A family of subsets F of {,,, n} is called an antichain if there is no pair of sets A and B in F satisfying

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

1 A first example: detailed calculations

1 A first example: detailed calculations MA4 Handout # Linear recurrence relations Robert Harron Wednesday, Nov 8, 009 This handout covers the material on solving linear recurrence relations It starts off with an example that goes through detailed

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

MAS275 Probability Modelling Exercises

MAS275 Probability Modelling Exercises MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.

More information

2.11. The Maximum of n Random Variables 3.4. Hypothesis Testing 5.4. Long Repeats of the Same Nucleotide

2.11. The Maximum of n Random Variables 3.4. Hypothesis Testing 5.4. Long Repeats of the Same Nucleotide 2.11. The Maximum of n Random Variables 3.4. Hypothesis Testing 5.4. Long Repeats of the Same Nucleotide Prof. Tesler Math 283 Fall 2018 Prof. Tesler Max of n Variables & Long Repeats Math 283 / Fall 2018

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

18.600: Lecture 32 Markov Chains

18.600: Lecture 32 Markov Chains 18.600: Lecture 32 Markov Chains Scott Sheffield MIT Outline Markov chains Examples Ergodicity and stationarity Outline Markov chains Examples Ergodicity and stationarity Markov chains Consider a sequence

More information

Lecture 5: Random Walks and Markov Chain

Lecture 5: Random Walks and Markov Chain Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables

More information

1.3 Convergence of Regular Markov Chains

1.3 Convergence of Regular Markov Chains Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Math 304 Handout: Linear algebra, graphs, and networks.

Math 304 Handout: Linear algebra, graphs, and networks. Math 30 Handout: Linear algebra, graphs, and networks. December, 006. GRAPHS AND ADJACENCY MATRICES. Definition. A graph is a collection of vertices connected by edges. A directed graph is a graph all

More information

MATH 118 FINAL EXAM STUDY GUIDE

MATH 118 FINAL EXAM STUDY GUIDE MATH 118 FINAL EXAM STUDY GUIDE Recommendations: 1. Take the Final Practice Exam and take note of questions 2. Use this study guide as you take the tests and cross off what you know well 3. Take the Practice

More information

Social network analysis: social learning

Social network analysis: social learning Social network analysis: social learning Donglei Du (ddu@unb.edu) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 October 20, 2016 Donglei Du (UNB) AlgoTrading

More information

18.440: Lecture 33 Markov Chains

18.440: Lecture 33 Markov Chains 18.440: Lecture 33 Markov Chains Scott Sheffield MIT 1 Outline Markov chains Examples Ergodicity and stationarity 2 Outline Markov chains Examples Ergodicity and stationarity 3 Markov chains Consider a

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Markov chains. Randomness and Computation. Markov chains. Markov processes

Markov chains. Randomness and Computation. Markov chains. Markov processes Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space

More information

Lecture Note 12: The Eigenvalue Problem

Lecture Note 12: The Eigenvalue Problem MATH 5330: Computational Methods of Linear Algebra Lecture Note 12: The Eigenvalue Problem 1 Theoretical Background Xianyi Zeng Department of Mathematical Sciences, UTEP The eigenvalue problem is a classical

More information

Linear Algebra Basics

Linear Algebra Basics Linear Algebra Basics For the next chapter, understanding matrices and how to do computations with them will be crucial. So, a good first place to start is perhaps What is a matrix? A matrix A is an array

More information

STA 294: Stochastic Processes & Bayesian Nonparametrics

STA 294: Stochastic Processes & Bayesian Nonparametrics MARKOV CHAINS AND CONVERGENCE CONCEPTS Markov chains are among the simplest stochastic processes, just one step beyond iid sequences of random variables. Traditionally they ve been used in modelling a

More information

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University)

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) The authors explain how the NCut algorithm for graph bisection

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Lecture 4: Evolutionary Models and Substitution Matrices (PAM and BLOSUM)

Lecture 4: Evolutionary Models and Substitution Matrices (PAM and BLOSUM) Bioinformatics II Probability and Statistics Universität Zürich and ETH Zürich Spring Semester 2009 Lecture 4: Evolutionary Models and Substitution Matrices (PAM and BLOSUM) Dr Fraser Daly adapted from

More information

Markov chains (week 6) Solutions

Markov chains (week 6) Solutions Markov chains (week 6) Solutions 1 Ranking of nodes in graphs. A Markov chain model. The stochastic process of agent visits A N is a Markov chain (MC). Explain. The stochastic process of agent visits A

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

Finite Automata. Finite Automata

Finite Automata. Finite Automata Finite Automata Finite Automata Formal Specification of Languages Generators Grammars Context-free Regular Regular Expressions Recognizers Parsers, Push-down Automata Context Free Grammar Finite State

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

18.175: Lecture 30 Markov chains

18.175: Lecture 30 Markov chains 18.175: Lecture 30 Markov chains Scott Sheffield MIT Outline Review what you know about finite state Markov chains Finite state ergodicity and stationarity More general setup Outline Review what you know

More information

ELEC633: Graphical Models

ELEC633: Graphical Models ELEC633: Graphical Models Tahira isa Saleem Scribe from 7 October 2008 References: Casella and George Exploring the Gibbs sampler (1992) Chib and Greenberg Understanding the Metropolis-Hastings algorithm

More information

Stat 516, Homework 1

Stat 516, Homework 1 Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball

More information

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. #07 Jordan Canonical Form Cayley Hamilton Theorem (Refer Slide Time:

More information

Markov Chains Absorption (cont d) Hamid R. Rabiee

Markov Chains Absorption (cont d) Hamid R. Rabiee Markov Chains Absorption (cont d) Hamid R. Rabiee 1 Absorbing Markov Chain An absorbing state is one in which the probability that the process remains in that state once it enters the state is 1 (i.e.,

More information

Markov Chains and Pandemics

Markov Chains and Pandemics Markov Chains and Pandemics Caleb Dedmore and Brad Smith December 8, 2016 Page 1 of 16 Abstract Markov Chain Theory is a powerful tool used in statistical analysis to make predictions about future events

More information

Languages. A language is a set of strings. String: A sequence of letters. Examples: cat, dog, house, Defined over an alphabet:

Languages. A language is a set of strings. String: A sequence of letters. Examples: cat, dog, house, Defined over an alphabet: Languages 1 Languages A language is a set of strings String: A sequence of letters Examples: cat, dog, house, Defined over an alphaet: a,, c,, z 2 Alphaets and Strings We will use small alphaets: Strings

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

MATH3200, Lecture 31: Applications of Eigenvectors. Markov Chains and Chemical Reaction Systems

MATH3200, Lecture 31: Applications of Eigenvectors. Markov Chains and Chemical Reaction Systems Lecture 31: Some Applications of Eigenvectors: Markov Chains and Chemical Reaction Systems Winfried Just Department of Mathematics, Ohio University April 9 11, 2018 Review: Eigenvectors and left eigenvectors

More information

Markov Chains and MCMC

Markov Chains and MCMC Markov Chains and MCMC Markov chains Let S = {1, 2,..., N} be a finite set consisting of N states. A Markov chain Y 0, Y 1, Y 2,... is a sequence of random variables, with Y t S for all points in time

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

Recitation 8: Graphs and Adjacency Matrices

Recitation 8: Graphs and Adjacency Matrices Math 1b TA: Padraic Bartlett Recitation 8: Graphs and Adjacency Matrices Week 8 Caltech 2011 1 Random Question Suppose you take a large triangle XY Z, and divide it up with straight line segments into

More information

Biology 644: Bioinformatics

Biology 644: Bioinformatics A stochastic (probabilistic) model that assumes the Markov property Markov property is satisfied when the conditional probability distribution of future states of the process (conditional on both past

More information

Markov Processes Hamid R. Rabiee

Markov Processes Hamid R. Rabiee Markov Processes Hamid R. Rabiee Overview Markov Property Markov Chains Definition Stationary Property Paths in Markov Chains Classification of States Steady States in MCs. 2 Markov Property A discrete

More information

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and TMA 4265 Stochastic Processes Semester project, fall 2014 Student number 730631 and 732038 Exercise 1 We shall study a discrete Markov chain (MC) {X n } n=0 with state space S = {0, 1, 2, 3, 4, 5, 6}.

More information