TMA 4265 Stochastic Processes Semester project, fall 2014

Exercise 1

We shall study a discrete Markov chain (MC) $\{X_n\}_{n=0}^{\infty}$ with state space $S = \{0, 1, 2, 3, 4, 5, 6\}$. The transition probability matrix of the MC is

$$P = \begin{pmatrix}
1/4 & 1/4 & 0 & 1/6 & 1/6 & 0 & 1/6 \\
1/3 & 0 & 1/3 & 0 & 0 & 1/6 & 1/6 \\
0 & 0 & 1/4 & 3/4 & 0 & 0 & 0 \\
0 & 0 & 1/3 & 2/3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}$$

1a) Equivalence classes

The chain has three equivalence classes: $\{0, 1\}$, $\{2, 3\}$ and $\{4, 5, 6\}$. Whether they are recurrent or transient can be determined from figure 1, in which the circles denote states in the state space and there are arrows between all states with transition probability greater than 0.

Figure 1: State space and possible transitions between states in the MC.

$\{0, 1\}$ is transient because it is possible to leave the class, and once it has been left, it is impossible to return. Hence states 0 and 1 will only be visited a finite number of times. The period of a state $i$ is the greatest common divisor of all integers $n \geq 1$ for which $P^n_{ii} > 0$, where $P^n_{ii}$ is the probability of returning to state $i$ in $n$ steps. Since $P^n_{00} > 0$ for all $n \geq 1$, the equivalence class $\{0, 1\}$ is aperiodic.

$\{2, 3\}$ is recurrent because once the class is entered, it cannot be left. Hence states 2 and 3 will be visited infinitely often. Since $P^n_{33} > 0$ for all $n \geq 1$, the equivalence class $\{2, 3\}$ is aperiodic.

$\{4, 5, 6\}$ is recurrent for the same reason, so states 4, 5 and 6 will be visited infinitely often. Here $P^n_{44} > 0$ only for $n = 3, 6, 9, \ldots$: it takes three steps to leave state 4 and return to it, and the same holds for states 5 and 6. Therefore the equivalence class $\{4, 5, 6\}$ has period 3.
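These classifications can also be checked numerically. The following is a minimal R sketch, using the expm package for matrix powers as in the R-code section below, that inspects the diagonal of $P^n$ for small $n$:

library(expm)  # provides the matrix power operator %^%

P <- matrix(c(1/4, 1/4, 0,   1/6, 1/6, 0,   1/6,
              1/3, 0,   1/3, 0,   0,   1/6, 1/6,
              0,   0,   1/4, 3/4, 0,   0,   0,
              0,   0,   1/3, 2/3, 0,   0,   0,
              0,   0,   0,   0,   0,   1,   0,
              0,   0,   0,   0,   0,   0,   1,
              0,   0,   0,   0,   1,   0,   0), nrow = 7, byrow = TRUE)

# Return probabilities P^n_ii for n = 1, ..., 6: the entry for state 4
# is positive only when n is a multiple of 3 (period 3), while the
# entries for states 0 and 3 are positive for every n (aperiodic).
for (n in 1:6) {
  cat("n =", n, ": diag(P^n) =", round(diag(P %^% n), 3), "\n")
}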

1b) Stationary probabilities

A stationary probability $\pi_j$ is the long-run probability of being in state $j$. A Markov chain can have several stationary distributions, but if the MC is irreducible and ergodic there is exactly one, and it is also the limiting distribution

$$\pi_j = \lim_{n\to\infty} P^n_{ij}, \qquad j \geq 0.$$

This means that a limiting distribution is always a stationary distribution, but the converse is not necessarily true.

The elements of $\lim_{n\to\infty} P^{(n)}$ which do not exist are marked by an X in the matrix $Q$ below; those that exist are marked by a 0 if the limit is 0 and by a $*$ if the limit exists but is not zero:

$$Q = \begin{pmatrix}
0 & 0 & * & * & X & X & X \\
0 & 0 & * & * & X & X & X \\
0 & 0 & * & * & 0 & 0 & 0 \\
0 & 0 & * & * & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & X & X & X \\
0 & 0 & 0 & 0 & X & X & X \\
0 & 0 & 0 & 0 & X & X & X
\end{pmatrix}$$

The elements $\lim_{n\to\infty} P^n_{ij}$, $i = 0, 1, 4, 5, 6$, $j = 4, 5, 6$, do not have a limit because the equivalence class $\{4, 5, 6\}$ is not aperiodic.

The elements $\lim_{n\to\infty} P^n_{ij}$, $i = 0, 1, \ldots, 6$, $j = 0, 1$, have the limit 0. This is because the equivalence class $\{0, 1\}$ is transient and cannot be re-entered once it has been left: the probability of remaining in $\{0, 1\}$ at each step is less than 1, so $P^n_{ij}$, $j = 0, 1$, approaches 0 as $n \to \infty$.

The elements $\lim_{n\to\infty} P^n_{ij}$, $i = 2, 3$, $j = 4, 5, 6$, have the limit 0, because the equivalence class $\{2, 3\}$ is recurrent and closed: given initial state $X_0 = 2$ or $X_0 = 3$, the chain can never enter $\{4, 5, 6\}$. The elements $\lim_{n\to\infty} P^n_{ij}$, $i = 4, 5, 6$, $j = 2, 3$, have the same limit because the class $\{4, 5, 6\}$ is likewise closed.

The elements $\lim_{n\to\infty} P^n_{ij}$, $i = 0, 1, 2, 3$, $j = 2, 3$, have a limit not equal to zero. This is because the equivalence class $\{2, 3\}$ is irreducible and ergodic, so there exists exactly one stationary distribution on it, which is also the limiting distribution.
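The nonzero limits marked $*$ in $Q$ come from the stationary distribution of the class $\{2, 3\}$. A minimal R sketch that solves $\pi = \pi P$ on this subchain together with the normalisation $\pi_2 + \pi_3 = 1$:

# Restriction of P to the recurrent, aperiodic class {2, 3}
P23 <- matrix(c(1/4, 3/4,
                1/3, 2/3), nrow = 2, byrow = TRUE)

# Stack the balance equations (t(P23) - I) pi = 0 with the
# normalisation row (1, 1) and solve the resulting system.
A <- rbind(t(P23) - diag(2), c(1, 1))
b <- c(0, 0, 1)
pi23 <- qr.solve(A, b)  # qr.solve handles the overdetermined system
pi23                    # (pi_2, pi_3) = (4/13, 9/13), approx. (0.308, 0.692)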

1c) Expected proportion of time in different states

We shall determine the expected proportion of time that the chain spends in the different states, depending on the initial state.

Figure 2: The expected proportion of time that the chain spends in the different states, depending on the initial state.

The element $(i, j)$ of the matrix in figure 2 denotes the proportion of time spent in state $j$ given that the MC starts in state $i$. As an example, $u_0$ is the proportion of time spent in state 2 given that the initial state is 0.
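The entries of figure 2 can be cross-checked numerically. Only $u_0$ is defined explicitly above, so reading $v_i$ and $w_i$ as the analogous proportions for state 3 and for each of states 4, 5, 6 is an assumption; under that reading, a first-step analysis for the absorption probabilities $a_i = P(\text{end up in } \{2, 3\} \mid X_0 = i)$ gives a sketch like:

# First-step equations, reading the one-step probabilities off rows 0
# and 1 of P:
#   a_0 = (1/4) a_0 + (1/4) a_1 + 1/6   (the jump 0 -> 3 lands in {2,3})
#   a_1 = (1/3) a_0 + 1/3               (the jump 1 -> 2 lands in {2,3})
A <- matrix(c( 3/4, -1/4,
              -1/3,  1  ), nrow = 2, byrow = TRUE)
b <- c(1/6, 1/3)
a <- solve(A, b)  # a = (a_0, a_1) = (3/8, 11/24)

# Long-run proportions: u_0 = a_0 * pi_2 (time in state 2 from initial
# state 0), and each of the period-3 states 4, 5, 6 gets (1 - a_0)/3
# because the cycle splits its time equally.
u0 <- a[1] * (4/13)   # 3/26, approx. 0.115
v0 <- a[1] * (9/13)   # 27/104, approx. 0.260  (assumed meaning of v_0)
w0 <- (1 - a[1]) / 3  # 5/24, approx. 0.208    (assumed meaning of w_0)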

Figure 3: Calculated values of $\pi_i$, $i = 2, 3, 4, 5, 6$, in figure 2.

Figure 4: Calculated values of $u_0$ and $u_1$ in figure 2.

Figure 5: Calculated values of $v_0$ and $v_1$ in figure 2.

Figure 6: Calculated values of $w_0$ and $w_1$ in figure 2.

The calculations in figures 3, 4, 5 and 6 give

$$\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} P^{(k)} =
\begin{pmatrix}
0 & 0 & 3/26 & 27/104 & 5/24 & 5/24 & 5/24 \\
0 & 0 & 11/78 & 33/104 & 13/72 & 13/72 & 13/72 \\
0 & 0 & 4/13 & 9/13 & 0 & 0 & 0 \\
0 & 0 & 4/13 & 9/13 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1/3 & 1/3 & 1/3 \\
0 & 0 & 0 & 0 & 1/3 & 1/3 & 1/3 \\
0 & 0 & 0 & 0 & 1/3 & 1/3 & 1/3
\end{pmatrix}$$

1d) Simulations of MC

By calculating the matrix $\frac{1}{n}\sum_{k=1}^{n} P^{(k)}$ numerically for large $n$, we get approximately the same matrix as calculated above. The probability that the chain is in state 0 or 1, given initial state $X_0 = 0$ or $X_0 = 1$, is not exactly 0 for finite $n$, but as $n$ increases these entries approach 0, as expected. The other entries of $\frac{1}{n}\sum_{k=1}^{n} P^{(k)}$ which are zero are identically zero for all $n$, because the corresponding $k$-step transitions have probability 0. A script written in R which performs this calculation is included in the section R-code below.

In the same section there is also a script in which a Markov chain is simulated many times, given initial state $X_0 = 0$. The resulting state frequencies give the approximate proportion of time the chain spends in each state, given initial state $X_0 = 0$. These proportions approach the top row of $\lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} P^{(k)}$, which verifies the results in exercise 1c.

Exercise 2

We are interested in finding:

i) the distribution of the life time of the mouse;
ii) the probability that the mouse and the cat are never in neighbouring rooms.

2a) Discrete time Markov chain

Let $X_n$ denote the smallest number of doors that separate the mouse and the cat at time point $n$, $n = 0, 1, 2, \ldots$. The state space for $X_n$ is $S = \{0, 1, 2\}$. If $X_n = 0$, the mouse is in the same room as the cat and gets eaten. The number of doors $X_{n+1}$ depends only on $X_n$ because both the cat and the mouse move once at every time step $n$. Therefore $X_n$ can be modelled as a Markov chain $\{X_n\}_{n=0}^{\infty}$. The transition matrix of the MC is

$$P = \begin{pmatrix}
1 & 0 & 0 \\
0.09 & 0.82 & 0.09 \\
0.405 & 0.18 & 0.415
\end{pmatrix}$$

The probabilities for how the cat and the mouse act are given: at every time step, each of them moves one room in a given direction with probability 0.45, so $P(\text{cat moves}) = P(\text{mouse moves}) = 0.45$ for each direction, and $P(\text{cat stays}) = P(\text{mouse stays}) = 0.1$, independently of each other.

$P_{00} = 1$ because the cat and the mouse must be in the same room once the cat has eaten the mouse; state 0 is an absorbing state. Consequently, the other elements in the first row are 0.

When they start in neighbouring rooms, there are two ways in which they can end up in the same room: either the mouse moves towards the cat and the cat stays, or the cat moves towards the mouse and the mouse stays. Because they move independently of each other, the resulting probability is $P_{10} = P(\text{cat moves}) \cdot P(\text{mouse stays}) + P(\text{cat stays}) \cdot P(\text{mouse moves}) = 0.45 \cdot 0.1 + 0.1 \cdot 0.45 = 0.09$. The same logic and numbers apply for $P_{12}$.

There are four ways for them to end up in neighbouring rooms again after both have moved: both move clockwise, both move anti-clockwise, one moves in each direction, or they interchange rooms. A final possibility is that neither of them moves. Hence $P_{11} = 4 \cdot P(\text{cat moves}) \cdot P(\text{mouse moves}) + P(\text{cat stays}) \cdot P(\text{mouse stays}) = 4 \cdot 0.45^2 + 0.1^2 = 0.82$.

When they start two doors apart, there are two ways in which they can end up in the same room; in either case, one of them moves clockwise and the other anti-clockwise, so $P_{20} = 2 \cdot P(\text{mouse moves}) \cdot P(\text{cat moves}) = 2 \cdot 0.45^2 = 0.405$. The probability that they end up one door apart is the sum over the cases where one of them stays and the other moves clockwise or anti-clockwise: $P_{21} = 2 \cdot P(\text{mouse moves}) \cdot P(\text{cat stays}) + 2 \cdot P(\text{mouse stays}) \cdot P(\text{cat moves}) = 0.18$.

There are three ways for them to remain two doors apart: they both move in the same direction (both clockwise or both anti-clockwise), or they both stay. Hence $P_{22} = 2 \cdot P(\text{cat moves}) \cdot P(\text{mouse moves}) + P(\text{cat stays}) \cdot P(\text{mouse stays}) = 0.415$.

In order to find the equivalence classes of the state space, we examine how the states communicate. Figure 7 provides an overview of accessibility between states; the circles denote states in the state space and there are arrows between all states with transition probability greater than 0. Starting in state 1, it is possible to access both states 2 and 0. Starting in state 2, it is possible to access both states 1 and 0. It is, however, not possible to reach state 1 or 2 when starting in state 0. If the MC starts in state 1, the expected number of times state 1 is visited is finite, because the MC will at some point reach the absorbing state 0. Therefore $\{1, 2\}$ is a transient equivalence class. When the MC reaches state 0, it can never leave, so $\{0\}$ is a positive recurrent equivalence class.

Figure 7: State space and possible ways to move between states.

2b) Expected life time of a mouse

The expected life time of the mouse is finite: at some point the cat will be in the same room as the mouse, which is an absorbing state. To find the mean time the MC spends in transient states before it reaches state 0, we calculate the expected number of visits to each transient state. Let $s_{ij}$ be the expected number of visits to state $j$ when the MC starts in state $i$. Then, in matrix notation, $S = (I - P_T)^{-1}$, where $I$ is the identity matrix, $P_T$ is the transient part of $P$ (i.e. the transitions from states 1 or 2 to states 1 or 2), and $S$ is the matrix with elements $s_{ij}$:

$$S = \begin{pmatrix} 6.57 & 1.01 \\ 2.02 & 2.02 \end{pmatrix}$$

In order to calculate the mean time spent in transient states when the MC starts in state 2, we sum the row entries $s_{2j}$, $j = 1, 2$.
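As a cross-check, this computation is easy to reproduce in R; a minimal sketch using the transition matrix derived above:

# Transition matrix of the cat-and-mouse chain, states ordered (0, 1, 2)
P <- matrix(c(1,     0,    0,
              0.09,  0.82, 0.09,
              0.405, 0.18, 0.415), nrow = 3, byrow = TRUE)

# Transient part: transitions among states 1 and 2 (rows/columns 2 and 3)
PT <- P[2:3, 2:3]

# Expected numbers of visits to the transient states
S <- solve(diag(2) - PT)
S            # approx. [6.57 1.01; 2.02 2.02]

# Expected life time from state 2 = sum of the second row of S
sum(S[2, ])  # approx. 4.04 time steps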

When the cat and the mouse start two doors apart, the mouse is expected to live $s_{21} + s_{22} \approx 4.04$ time steps, so the mouse is expected to survive about 4 hours.

Simulations of the MC yield similar results: the mean life time of a mouse over the simulations is about 4.0 time steps. Figure 8 illustrates how the life times of the mice were distributed within the simulations. The black bars show the relative frequencies of the different life times, and the blue curve shows the geometric distribution with mean 4.04. The mouse life-time distribution resembles the geometric distribution, but is distinctly larger at time steps one and two. This is due to the high probability that the MC reaches state 0 directly from state 2; the interpretation is that most mice are eaten within the first two hours. Some mice, however, survive for a longer period of time: when the MC is in state 1, the probability of staying in state 1 is much higher than that of going to state 0 or state 2, which is why some mice survive more than 30 hours.

Figure 8: The distribution of life time for a mouse.

2c) Cat and mouse never in neighbouring rooms

Simulations of the MC yield that the number of chains that reach state 1, relative to the number of chains that reach state 0 or state 1, is about 0.31. This means that the probability that the cat and the mouse are never in neighbouring rooms is about 0.69, which is in agreement with the theoretical calculation of this probability in figure 9.

Figure 9: Calculation of the probability that the MC will never reach state 1, given that it starts in state 2.
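In outline, the calculation behind figure 9 is a first-step (geometric series) argument: from state 2 the chain either stays in 2, is absorbed in 0, or moves to 1, so the probability of absorption in 0 without ever visiting 1 is

$$\Pr(\text{never reach } 1 \mid X_0 = 2) = \sum_{k=0}^{\infty} P_{22}^{\,k}\, P_{20} = \frac{P_{20}}{1 - P_{22}} = \frac{0.405}{0.585} = \frac{9}{13} \approx 0.692,$$

which matches the simulated value above.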

R-code

Exercise 1

library(expm)  # provides the matrix power operator %^%

# Transition probability matrix
P <- matrix(c(1/4, 1/4, 0,   1/6, 1/6, 0,   1/6,
              1/3, 0,   1/3, 0,   0,   1/6, 1/6,
              0,   0,   1/4, 3/4, 0,   0,   0,
              0,   0,   1/3, 2/3, 0,   0,   0,
              0,   0,   0,   0,   0,   1,   0,
              0,   0,   0,   0,   0,   0,   1,
              0,   0,   0,   0,   1,   0,   0), nrow = 7, ncol = 7, byrow = TRUE)

# Length of the Markov chain (the value used in the report was lost;
# any large n works)
n <- 1000

# Adding transition probability matrices P to the power of k,
# k = 1, 2, ..., n
mat_sum <- 0
for (k in 1:n) {
  mat_sum <- mat_sum + P %^% k
}

# The entries of L are the expected proportion of time that the chain
# spends in the different states, depending on the initial state
L <- mat_sum / n

# Function to simulate a Markov chain
simMC <- function(q, P, n, states) {
  # Generate vector of the desired length
  myMC <- rep(NA, n)

  # Sample the initial state
  myMC[1] <- sample(states, 1, prob = q)

  # When we know where we are at time i-1, we can sample the next state at time i
  for (i in 2:n) {
    # match returns the position of the first argument in the second argument;
    # in this case, match returns the position of myMC[i-1] in the states vector
    myMC[i] <- sample(states, 1, prob = P[match(myMC[i - 1], states), ])
  }
  # Return the Markov chain
  return(myMC)
}

# Transition probability matrix
P <- matrix(c(1/4, 1/4, 0,   1/6, 1/6, 0,   1/6,
              1/3, 0,   1/3, 0,   0,   1/6, 1/6,
              0,   0,   1/4, 3/4, 0,   0,   0,
              0,   0,   1/3, 2/3, 0,   0,   0,
              0,   0,   0,   0,   0,   1,   0,
              0,   0,   0,   0,   0,   0,   1,
              0,   0,   0,   0,   1,   0,   0), nrow = 7, ncol = 7, byrow = TRUE)

# Script which simulates the Markov chain

# Length of Markov chain (original value lost; any large n works)
n <- 1000
# State space
states <- c(0, 1, 2, 3, 4, 5, 6)
# Vector with initial probabilities: we start in state 0
q <- c(1, 0, 0, 0, 0, 0, 0)

MC <- simMC(q = q, P = P, n = n, states = states)
plot(MC, type = "o", ylab = "States", xlab = "time")

# Generate vector to count the number of times the Markov chain
# visits a certain state
count <- rep(0, 7)
# Run the Markov chain m times (original value lost; any large m works)
m <- 1000
for (i in 1:m) {
  MC <- simMC(q = q, P = P, n = n, states = states)
  for (j in 1:n) {
    # Update the count vector
    count[MC[j] + 1] <- count[MC[j] + 1] + 1
  }
}
# We are interested in the proportion of time the Markov chain
# spends in each state, hence we have to divide the elements in
# count by n*m, since the length of the Markov chain is n, and we
# run the chain m times
propOfTime <- count / (n * m)

Exercise 2

## Function to create a MC:
simulateMC <- function(q, P, n, states) {
  # Generate vector of length n
  myMC <- rep(0, n)
  # Sample the initial state from the states vector using the initial
  # probabilities: 1 element from states (w/ prob. dist. q)
  myMC[1] <- sample(states, 1, prob = q)

  for (i in 2:n) {
    # match(a, b) finds the position of an element a in vector b;
    # by doing this, we can input a state space which doesn't
    # necessarily match the integers 1 to the height of P
    myMC[i] <- sample(states, 1, prob = P[match(myMC[i - 1], states), ])
    if (myMC[i] == 0) {
      break  # No reason to sample new 0 elements, 0 is absorbing
    }
  }
  # Return the Markov chain
  return(myMC)
}

## Script which simulates MC and finds distribution of life time
## of a mouse and probability that MC never reaches state 1

# Transition matrix:
p <- c(1, 0, 0, 0.09, 0.82, 0.09, 0.405, 0.18, 0.415)
P <- matrix(data = p, nrow = 3, ncol = 3, byrow = TRUE)

q <- c(0, 0, 1)            # Initial probability, start in state 2
states <- c(0, 1, 2)       # State space
n <- 60                    # Number of elements in the MC
lifeTime <- rep(0, n - 1)  # Mouse life times
valMCzero <- 0             # Counts MCs that reach state 0
cumLife <- 0               # Cumulative life of all mice
valMCone <- 10000          # Counts MCs that reach state 1 and/or 0: starts at the
                           # number of runs; chains that never leave state 2 are
                           # subtracted below (initial value reconstructed)
cumOnes <- 0               # Cumulative num. of MCs that reach 1
counted <- 0               # Bool which knows if MC has been counted

# Loop to create MC:
for (i in 1:10000) {
  MC <- simulateMC(q, P, n, states)  # Gets a new MC
  counted <- 0                       # Resets counted

  for (j in 2:n) {
    if (MC[j] == 1 && !counted) {
      cumOnes <- cumOnes + 1
      counted <- 1
    }
    if (MC[j] == 0) {
      lifeTime[j - 1] <- lifeTime[j - 1] + 1
      valMCzero <- valMCzero + 1
      cumLife <- cumLife + j - 1
      break
    }
  }
  # Chains that never leave state 2 reach neither state 0 nor state 1
  if (min(MC) == 2) {
    valMCone <- valMCone - 1
  }
}

# Mean life time
expected <- cumLife / valMCzero

# Prob. of never reaching state 1
prob <- 1 - cumOnes / valMCone

# Plot the findings
lt <- lifeTime / valMCzero
x <- seq(1, n - 1)
plot(x, lt, type = "h", xlab = "life time", ylab = "relative occurrence")
# Geometric pmf with mean ~4.04 (the rate in the source was lost;
# 1/4.04 reconstructs it from the expected life time in section 2b)
lines(x, dgeom(x, 1/4.04, log = FALSE), col = "blue")
