Discrete Time Markov Chains: Limiting Distribution and Classification of States
Discrete Time Markov Chains, Limiting Distribution and Classification
DTU Informatics, Stochastic Processes 3, September

Today: discrete time Markov chains - invariant probability distribution, classification of states, classification of chains. Next week: the Poisson process. Two weeks from now: birth and death processes.

Regular Transition Probability Matrices

Interpretation of the $\pi_j$'s. A transition probability matrix $P$ is regular if $P^k > 0$ elementwise for some $k$. In that case

$$\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j > 0, \qquad 0 \le i, j \le N.$$

Theorem 4.1 (page 168): let $P$ be a regular transition probability matrix on the states $0, 1, \ldots, N$. Then the limiting distribution $\pi = (\pi_0, \pi_1, \ldots, \pi_N)$ is the unique nonnegative solution of the equations

$$\pi_j = \sum_{k=0}^{N} \pi_k P_{kj}, \quad j = 0, \ldots, N, \qquad \sum_{k=0}^{N} \pi_k = 1.$$

The $\pi_j$'s admit three interpretations:

- Limiting probabilities: $\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j$
- Long term averages: $\lim_{m \to \infty} \frac{1}{m} \sum_{n=1}^{m} P_{ij}^{(n)} = \pi_j$
- Stationary distribution: $\pi = \pi P$
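The theorem translates directly into two small computations. Here is a minimal numerical sketch (not from the slides; it assumes NumPy and a made-up 3-state regular matrix) showing both the convergence of the rows of $P^n$ and the linear-system solution of the stationary equations:

```python
import numpy as np

# A small regular transition matrix: every entry of P is already > 0,
# so P^k > 0 holds for k = 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Limiting distribution via matrix powers: every row of P^n approaches pi.
print(np.linalg.matrix_power(P, 50))   # rows are (numerically) identical

# The same pi from Theorem 4.1: solve pi = pi P with sum(pi) = 1 by
# replacing one (redundant) balance equation with the normalisation.
N = P.shape[0]
A = np.vstack([(P.T - np.eye(N))[:-1], np.ones(N)])
b = np.zeros(N)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                              # matches the common row of P^n
```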
A Social Mobility Example

The transition matrix gives the probability of a son's social class given his father's class:

                          Son's class
                     Lower   Middle   Upper
  Father's  Lower     0.40     0.50    0.10
  class     Middle    0.05     0.70    0.25
            Upper     0.05     0.50    0.45

By the eighth generation the rows of $P^8$ have essentially converged to a common row, the limiting distribution. The stationary equations are

$$\pi_0 = 0.40\pi_0 + 0.05\pi_1 + 0.05\pi_2$$
$$\pi_1 = 0.50\pi_0 + 0.70\pi_1 + 0.50\pi_2$$
$$\pi_2 = 0.10\pi_0 + 0.25\pi_1 + 0.45\pi_2$$
$$1 = \pi_0 + \pi_1 + \pi_2$$

(solved numerically in the sketch after this page).

Classification of Markov chain states

- States which cannot be left, once entered: absorbing states.
- States where a return some time in the future is certain: recurrent or persistent states. The mean time to return can be finite (positive recurrent / non-null recurrent) or infinite (null recurrent).
- States where a return some time in the future is uncertain: transient states.
- States which can only be visited at certain time epochs: periodic states.

Classification of States

$j$ is accessible from $i$ if $P_{ij}^{(n)} > 0$ for some $n$. If $j$ is accessible from $i$ and $i$ is accessible from $j$, we say that the two states communicate. Communication is an equivalence relation (if $i$ communicates with $j$ and $j$ communicates with $k$, then $i$ communicates with $k$), so the communicating states constitute equivalence classes.

First passage and first return times

We can formalise the discussion of state classification by use of a certain class of probability distributions: first passage time distributions. Define the first passage probability

$$f_{ij}^{(n)} = P\{X_1 \ne j, X_2 \ne j, \ldots, X_{n-1} \ne j, X_n = j \mid X_0 = i\}.$$

This is the probability of reaching $j$ for the first time at time $n$ having started in $i$. The probability of ever reaching $j$ is

$$f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}.$$

The probabilities $f_{ij}^{(n)}$ constitute a (possibly defective) probability distribution. On the contrary, we cannot say anything in general about $\sum_{n=1}^{\infty} p_{ij}^{(n)}$ (the $n$-step transition probabilities).
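A short numerical sketch of the social mobility example (assuming NumPy; the matrix entries are as reconstructed above):

```python
import numpy as np

# Social mobility matrix: rows = father's class, columns = son's class,
# ordered (Lower, Middle, Upper).
P = np.array([[0.40, 0.50, 0.10],
              [0.05, 0.70, 0.25],
              [0.05, 0.50, 0.45]])

print(np.linalg.matrix_power(P, 8).round(4))   # P^8: rows nearly equal

# Solve the four stationary equations (one balance equation is redundant).
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi.round(4))   # approximately (0.0769, 0.6250, 0.2981)
```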
State classification by $f_{ii}^{(n)}$

- A state is recurrent (persistent) if $f_{ii} = \sum_{n=1}^{\infty} f_{ii}^{(n)} = 1$.
- A recurrent state is positive (non-null) recurrent if $E(T_i) < \infty$, where $E(T_i) = \sum_{n=1}^{\infty} n f_{ii}^{(n)} = \mu_i$ is the mean recurrence time.
- A state is null recurrent if $E(T_i) = \mu_i = \infty$.
- A state is transient if $f_{ii} < 1$. In this case we define $\mu_i = \infty$ for later convenience.
- A state is periodic with period $k > 1$ if $p_{ii}^{(n)} > 0$ only when $n$ is a multiple of $k$.
- A state is ergodic if it is positive recurrent and aperiodic.

Classification of Markov chains

We can identify subclasses of states with the same properties: all states which can mutually reach each other will be of the same type. Once again the formal analysis is a little bit heavy, but try to stick to the fundamentals, definitions (concepts) and results.

Properties of sets of intercommunicating states. If $i$ and $j$ communicate, then
(a) $i$ and $j$ have the same period,
(b) $i$ is transient if and only if $j$ is transient,
(c) $i$ is null persistent (null recurrent) if and only if $j$ is null persistent.

A set $C$ of states is called
(a) closed if $p_{ij} = 0$ for all $i \in C$, $j \notin C$,
(b) irreducible if $i \leftrightarrow j$ for all $i, j \in C$.

Theorem (Decomposition Theorem): the state space $S$ can be partitioned uniquely as

$$S = T \cup C_1 \cup C_2 \cup \ldots$$

where $T$ is the set of transient states, and the $C_i$ are irreducible closed sets of persistent states.

Lemma: if $S$ is finite, then at least one state is persistent (recurrent) and all persistent states are non-null (positive recurrent). A classification routine for finite chains is sketched below.
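A minimal sketch (not from the slides; assuming NumPy) that implements the decomposition theorem for a finite chain: it finds the communicating classes via transitive closure of the one-step accessibility relation and flags each class as recurrent (closed) or transient:

```python
import numpy as np

def classify_states(P):
    """Decompose a finite chain per the decomposition theorem:
    communicating classes, each flagged recurrent (closed) or transient.
    For a finite state space, closed classes are positive recurrent."""
    P = np.asarray(P)
    n = len(P)
    # reach[i, j]: state j is accessible from i in zero or more steps.
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):                    # Warshall transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        # i and j communicate iff each is accessible from the other.
        cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
        seen |= cls
        # Closed: no one-step transition out of the class.
        closed = all(P[a, b] == 0 for a in cls for b in set(range(n)) - cls)
        classes.append((sorted(cls), "recurrent" if closed else "transient"))
    return classes

# States 0-1 form a closed class, 2 is absorbing, 3 is transient.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.9, 0.1, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.2, 0.0, 0.3, 0.5]]
print(classify_states(P))
# [([0, 1], 'recurrent'), ([2], 'recurrent'), ([3], 'transient')]
```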
Basic Limit Theorem

Theorem 4.3 (the basic limit theorem of Markov chains):

(a) Consider a recurrent irreducible aperiodic Markov chain. Let $P_{ii}^{(n)}$ be the probability of entering state $i$ at the $n$th transition, $n = 1, 2, \ldots$, given that $X_0 = i$ (by our earlier convention $P_{ii}^{(0)} = 1$). Let $f_{ii}^{(n)}$ be the probability of first returning to state $i$ at the $n$th transition, $n = 1, 2, \ldots$, where $f_{ii}^{(0)} = 0$. Then

$$\lim_{n \to \infty} P_{ii}^{(n)} = \frac{1}{\sum_{n=0}^{\infty} n f_{ii}^{(n)}} = \frac{1}{m_i}.$$

(b) Under the same conditions as in (a), $\lim_{n \to \infty} P_{ji}^{(n)} = \lim_{n \to \infty} P_{ii}^{(n)}$ for all $j$.

An example chain (random walk with reflecting barriers)

Consider the random walk on the states $1, \ldots, 8$ that moves down with probability 0.6 and up with probability 0.4, reflected at the two barriers:

$$P = \begin{pmatrix}
0.6 & 0.4 & & & \\
0.6 & 0 & 0.4 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 0.6 & 0 & 0.4 \\
 & & & 0.6 & 0.4
\end{pmatrix}$$

with initial probability distribution $p^{(0)} = (1, 0, 0, 0, 0, 0, 0, 0)$, i.e. $X_0 = 1$.

Properties of that chain. The sample paths $X_n$ vary from run to run, but we have a finite number of states. From state 1 we can reach state $j$ with a probability $f_{1j} \ge 0.4^{j-1} > 0$, $j > 1$, and from state $j$ we can reach state 1 with a probability $f_{j1} \ge 0.6^{j-1} > 0$, $j > 1$. Thus all states communicate and the chain is irreducible. (Generally we won't bother with bounds for the $f$'s.) Since the chain is finite, all states are positive recurrent. A look at the behaviour of the chain is given by the simulation sketch below.
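A hedged simulation sketch (NumPy; the matrix entries are as reconstructed above) checking the basic limit theorem on this chain: the left side $\lim_n P_{11}^{(n)}$ via a large matrix power, the right side $1/m_1$ via the empirical mean return time:

```python
import numpy as np

rng = np.random.default_rng(0)

# The reflecting random walk; states 0..7 stand for the slides' 1..8.
n = 8
P = np.zeros((n, n))
P[0, 0], P[0, 1] = 0.6, 0.4
for i in range(1, n - 1):
    P[i, i - 1], P[i, i + 1] = 0.6, 0.4
P[-1, -2], P[-1, -1] = 0.6, 0.4

# lim P_11^(n) via a large matrix power (self-loops make it aperiodic).
lim_p11 = np.linalg.matrix_power(P, 500)[0, 0]

# 1/m_1, with m_1 estimated as the empirical mean return time to state 0.
state, steps, returns = 0, 0, []
for _ in range(200_000):
    state = rng.choice(n, p=P[state])
    steps += 1
    if state == 0:
        returns.append(steps)
        steps = 0
print(lim_p11, 1.0 / np.mean(returns))   # both approximately 0.347
```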
The state probabilities: limiting distribution

For an irreducible aperiodic chain we have that

$$p_{ij}^{(n)} \to \frac{1}{\mu_j} \quad \text{as } n \to \infty, \text{ for all } i \text{ and } j.$$

Three important remarks:

- If the chain is transient or null persistent (null recurrent), $p_{ij}^{(n)} \to 0$.
- If the chain is positive recurrent, $p_{ij}^{(n)} \to 1/\mu_j > 0$.
- The limiting probability of $X_n = j$ does not depend on the starting state $X_0 = i$: $p_{ij}^{(n)} \to p_j$.

The stationary distribution

A stationary distribution is a distribution that does not change with $n$: the elements of $p^{(n)}$ are all constant in $n$. The implication of this is

$$p^{(n)} = p^{(n-1)} P,$$

and since $p^{(n)} = p^{(n-1)}$ by our assumption of $p^{(n)}$ being constant, this can be expressed as $\pi = \pi P$.

Definition: the vector $\pi$ is called a stationary distribution of the chain if $\pi$ has entries $(\pi_j : j \in S)$ such that
(a) $\pi_j \ge 0$ for all $j$, and $\sum_j \pi_j = 1$;
(b) $\pi = \pi P$, which is to say that $\pi_j = \sum_i \pi_i p_{ij}$ for all $j$.

VERY IMPORTANT: an irreducible chain has a stationary distribution $\pi$ if and only if all the states are non-null persistent (positive recurrent); in this case, $\pi$ is the unique stationary distribution and is given by $\pi_i = 1/\mu_i$ for each $i \in S$, where $\mu_i$ is the mean recurrence time of $i$.
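As a minimal check of the definition and of the formula $\mu_i = 1/\pi_i$ (assuming NumPy; the 3-state birth-death chain is made up for illustration):

```python
import numpy as np

# A small irreducible aperiodic chain with a known stationary distribution.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

# (a) nonnegative entries summing to one; (b) pi = pi P.
assert np.all(pi >= 0) and np.isclose(pi.sum(), 1.0)
assert np.allclose(pi @ P, pi)   # starting from pi, p^(n) = pi for all n

# Mean recurrence times from the VERY IMPORTANT theorem: mu_i = 1/pi_i.
print(1.0 / pi)                  # [4. 2. 4.]
```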
The example chain (random walk with reflecting barriers)

For the reflecting random walk above, the matrix equation $\pi = \pi P$ reads elementwise $\pi_i = \sum_j \pi_j p_{ji}$:

$$\pi_1 = 0.6\pi_1 + 0.6\pi_2$$
$$\pi_2 = 0.4\pi_1 + 0.6\pi_3$$
$$\pi_j = 0.4\pi_{j-1} + 0.6\pi_{j+1}$$
$$\pi_8 = 0.4\pi_7 + 0.4\pi_8$$

Or, rearranged,

$$\pi_2 = \frac{0.4}{0.6}\,\pi_1, \qquad \pi_{j+1} = \frac{\pi_j - 0.4\,\pi_{j-1}}{0.6},$$

which can be solved recursively to find

$$\pi_j = \left(\frac{0.4}{0.6}\right)^{j-1} \pi_1.$$

The normalising condition. We note that we don't have to use the last equation; instead we need a solution which is a probability distribution,

$$\sum_{j=1}^{8} \pi_j = 1.$$

Using the geometric sum

$$\sum_{i=0}^{N} a^i = \frac{1 - a^{N+1}}{1 - a} \ (a \ne 1), \qquad \sum_{i=0}^{N} a^i = N + 1 \ (a = 1),$$

we get

$$\sum_{j=1}^{8} \left(\frac{0.4}{0.6}\right)^{j-1} \pi_1 = \pi_1 \sum_{k=0}^{7} \left(\frac{0.4}{0.6}\right)^{k} = 1 \quad\Longrightarrow\quad \pi_1 = \frac{1 - 0.4/0.6}{1 - (0.4/0.6)^8},$$

checked numerically in the sketch after this page.

The solution of $\pi = \pi P$

More or less straightforward, but one problem: if $x$ is a solution such that $x = xP$, then obviously $(kx) = (kx)P$ is also a solution. Recall the definition of eigenvalues and eigenvectors: if $Ay = \lambda y$ we say that $\lambda$ is an eigenvalue of $A$ with an associated eigenvector $y$. Here $y$ is a right eigenvector; there is also a left eigenvector.
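A short sketch (NumPy, same reconstructed matrix as before) confirming the recursive solution and the geometric-series normalisation:

```python
import numpy as np

# Closed-form solution pi_j = a^(j-1) pi_1 with a = 0.4/0.6, normalised
# with the geometric sum over the 8 states.
a = 0.4 / 0.6
pi1 = (1.0 - a) / (1.0 - a**8)
pi = pi1 * a ** np.arange(8)
print(pi.round(4), pi.sum())    # probabilities summing to 1

# Cross-check: rows of a large power of P converge to the same vector.
n = 8
P = np.zeros((n, n))
P[0, 0], P[0, 1] = 0.6, 0.4
for i in range(1, n - 1):
    P[i, i - 1], P[i, i + 1] = 0.6, 0.4
P[-1, -2], P[-1, -1] = 0.6, 0.4
print(np.linalg.matrix_power(P, 500)[0].round(4))   # matches pi
```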
7 The solution of π = πp continued The vector π is a left eigenvector of P. The main theorem says that there is a unique eigenvector associated with the eigenvalue of P In practice this means that the we can only solve but a normalising condition But we have the normalising condition by j π j = this can expressed as π =. Where =. Roles of the solution to π = πp For an irreducible Markov chain, (the condition we need to verify) The stationary solution. If p (0) = π then p (n) = π for all n. The limiting distribution, i.e. p (n) π for n (the Markov chain has to be aperiodic too). Also p (n) π j. The mean recurrence time for state i is µ i = π i. The mean number of visits in state j between two successive visits to state i is π j π i. The long run average probability of finding the Markov chain in state i is π i. π i = lim n n n k= p(k) i also true for periodic chains. Example (null-recurrent) chain P = p p 2 p 3 p 4 p For p j > 0 the chain is obviously irreducible. The main theorem tells us that we can investigate directly for π = πp. π = π p + π 2 π 2 = π p 2 + π 3 π j = π p j + π j+ π = π p + π 2 π 2 = π p 2 + π 3 π j = π p j + π j+ we get π 2 = ( p )π π 3 = ( p p 2 )π π j = ( p p j )π j π j = ( p p j )π π j = π p i π j = π p i i= i=j Normalisation i π j = π p i = π p i = π ip i j= j= i=j i= j= i=
Reversible Markov chains

For the two-state chain

$$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}$$

(and more generally), reversibility lets us solve a sequence of simple linear equations instead of the whole system: local balance in the probability flow, as opposed to global balance. It is also a nice theoretical construction.

Local balance equations

The global balance equations are $\pi_i = \sum_j \pi_j p_{ji}$. Since $\sum_j p_{ij} = 1$, term for term we get

$$\pi_i \sum_j p_{ij} = \sum_j \pi_j p_{ji}, \qquad \text{i.e.} \qquad \sum_j \pi_i p_{ij} = \sum_j \pi_j p_{ji},$$

which holds in particular if

$$\pi_i p_{ij} = \pi_j p_{ji} \quad \text{for all } i, j$$

(the local balance equations, checked numerically in the sketch below). If they are fulfilled for each $i$ and $j$, the global balance equations can be obtained by summation.
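A minimal sketch (NumPy; the example chain is made up) of the local balance check: the probability-flow matrix with entries $\pi_i p_{ij}$ must be symmetric:

```python
import numpy as np

def satisfies_local_balance(P, pi, tol=1e-12):
    """Check pi_i p_ij = pi_j p_ji for all i, j: the flow matrix
    flow[i, j] = pi_i p_ij must equal its transpose."""
    flow = pi[:, None] * P
    return np.allclose(flow, flow.T, atol=tol)

# Birth-death chains (such as the reflecting walk) satisfy local balance.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
print(satisfies_local_balance(P, pi))   # True
```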
Why reversible?

Consider the reversed sequence. By Bayes' rule,

$$P\{X_{n-1} = j \mid X_n = i\} = \frac{P\{X_{n-1} = j\}\, P\{X_n = i \mid X_{n-1} = j\}}{P\{X_n = i\}} = \frac{P\{X_{n-1} = j\}\, p_{ji}}{P\{X_n = i\}}.$$

For a stationary chain this is

$$P\{X_{n-1} = j \mid X_n = i\} = \frac{\pi_j p_{ji}}{\pi_i}.$$

The chain is reversible if the reversed sequence has the same transition probabilities as the original, $P\{X_{n-1} = j \mid X_n = i\} = p_{ij}$, leading to

$$p_{ij} = \frac{\pi_j p_{ji}}{\pi_i},$$

which is exactly the local balance equations $\pi_i p_{ij} = \pi_j p_{ji}$.

Exercise 10 (6/2/9 ex.)

In connection with an examination of the reliability of some software intended for use in control of modern ferries, one is interested in examining a stochastic model of the use of a control program. The control program works as a "state machine", i.e. it can be in a number of different levels; 4 are considered here. The levels depend on the physical state of the ferry. With every shift of time unit while the program is run, the program will change from level $j$ to level $k$ with probability $p_{jk}$. Two possibilities are considered:

- The program has no errors and will run continuously, shifting between the four levels.
- The program has a critical error. In this case it is possible that the error is found; this happens with probability $q_i$, $i = 1, 2, 3, 4$, depending on the level. The error will be corrected immediately and the program will from then on be without faults. Alternatively the program can stop with a critical error (the ferry will continue to sail, but without control); this happens with probability $r_i$, $i = 1, 2, 3, 4$.

In general $q_i + r_i < 1$: a program with errors can thus work, and the error is not necessarily discovered. It is assumed that detection of an error, as well as the appearance of a fault, happens coincidently with a shift between levels. The program starts running in level 1, and it is known that the program contains one critical error.
Solution: Question 1

Formulate a stochastic process (Markov chain) in discrete time describing this system.

The model is a discrete time Markov chain. A possible definition of states could be:

0: the program has stopped.
1-4: the program is operating safely in level $i$.
5-8: the program is operating in level $i - 4$, the critical error is not detected.

The transition matrix $A$ has the block structure

$$A = \begin{pmatrix}
1 & 0 & 0 \\
0 & P & 0 \\
r & \mathrm{Diag}(q_i)\,P & \mathrm{Diag}(S_i)\,P
\end{pmatrix}$$

where $P = \{p_{ij}\}$ is the 4x4 level-shift matrix,

$$r = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{pmatrix}, \qquad
\mathrm{Diag}(q_i) = \begin{pmatrix} q_1 & & & \\ & q_2 & & \\ & & q_3 & \\ & & & q_4 \end{pmatrix}, \qquad
\mathrm{Diag}(S_i) = \begin{pmatrix} S_1 & & & \\ & S_2 & & \\ & & S_3 & \\ & & & S_4 \end{pmatrix},$$

and $S_i = 1 - r_i - q_i$. Or, without matrix notation (a construction sketch follows below):

$$A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & p_{11} & p_{12} & p_{13} & p_{14} & 0 & 0 & 0 & 0 \\
0 & p_{21} & p_{22} & p_{23} & p_{24} & 0 & 0 & 0 & 0 \\
0 & p_{31} & p_{32} & p_{33} & p_{34} & 0 & 0 & 0 & 0 \\
0 & p_{41} & p_{42} & p_{43} & p_{44} & 0 & 0 & 0 & 0 \\
r_1 & q_1 p_{11} & q_1 p_{12} & q_1 p_{13} & q_1 p_{14} & S_1 p_{11} & S_1 p_{12} & S_1 p_{13} & S_1 p_{14} \\
r_2 & q_2 p_{21} & q_2 p_{22} & q_2 p_{23} & q_2 p_{24} & S_2 p_{21} & S_2 p_{22} & S_2 p_{23} & S_2 p_{24} \\
r_3 & q_3 p_{31} & q_3 p_{32} & q_3 p_{33} & q_3 p_{34} & S_3 p_{31} & S_3 p_{32} & S_3 p_{33} & S_3 p_{34} \\
r_4 & q_4 p_{41} & q_4 p_{42} & q_4 p_{43} & q_4 p_{44} & S_4 p_{41} & S_4 p_{42} & S_4 p_{43} & S_4 p_{44}
\end{pmatrix}$$

Solution: Question 2

Characterise the states in the Markov chain. With reasonable assumptions on $P$ (i.e. irreducible) we get:

State 0: absorbing
States 1-4: positive recurrent
States 5-8: transient
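A small sketch (NumPy) assembling the block matrix above; the parameter values here are made up purely to exercise the construction:

```python
import numpy as np

def build_A(P, q, r):
    """Assemble the 9x9 matrix A from the 4x4 level-shift matrix P and
    the per-level detection (q) and stop (r) probabilities."""
    S = 1.0 - q - r                                        # S_i = 1 - r_i - q_i
    top = np.hstack([np.ones((1, 1)), np.zeros((1, 8))])   # state 0 absorbing
    mid = np.hstack([np.zeros((4, 1)), P, np.zeros((4, 4))])
    bot = np.hstack([r[:, None], np.diag(q) @ P, np.diag(S) @ P])
    return np.vstack([top, mid, bot])

# Made-up but valid parameters (assumptions, not from the exercise).
P = np.full((4, 4), 0.25)
q = np.array([0.01, 0.02, 0.03, 0.04])
r = np.array([0.001, 0.002, 0.003, 0.004])
A = build_A(P, q, r)
print(A.sum(axis=1))   # every row sums to 1, as a transition matrix must
```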
Solution: Question 3

We now consider the case where the stability of the system has been assured, i.e. the error has been found and corrected, and the program has been running for a long time without errors. The parameters are as follows:

$$P_{i,i+1} = 0.6 \ (i = 1, 2, 3), \qquad P_{i,i-1} = 0.2 \ (i = 2, 3, 4), \qquad P_{i,j} = 0 \ (|i - j| > 1),$$

with the diagonal entries fixed by the rows summing to one, and

$$q_i = 10^{-3i}, \qquad r_i = 10^{-3i-5}.$$

Characterise the stochastic process that describes the stable system. The system becomes stable by reaching one of the states 1-4; the process is ergodic from then on. The process is a reversible ergodic Markov chain in discrete time.

Solution: Question 4

For what fraction of time will the system be in level 1? Local balance gives $0.6\,\pi_i = 0.2\,\pi_{i+1}$, so we obtain the steady state solution

$$\pi_i = 3^{\,i-1}\pi_1, \qquad \sum_{i=1}^{4} 3^{\,i-1}\pi_1 = 40\,\pi_1 = 1 \quad\Longrightarrow\quad \pi_1 = \frac{1}{40}.$$

The sum $\sum_{i=1}^{4} 3^{\,i-1}$ can be obtained by using

$$\sum_{i=1}^{4} 3^{\,i-1} = \frac{3^4 - 1}{3 - 1} = 40.$$

The system thus spends the fraction $1/40 = 0.025$ of its time in level 1.
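A closing sketch (NumPy; the diagonal entries of $P$ are the inferred row-sum remainders noted above) verifying that the local-balance solution is indeed stationary:

```python
import numpy as np

# Level-shift matrix of the stable system; diagonal entries are what is
# left over once each row must sum to one.
P = np.array([[0.4, 0.6, 0.0, 0.0],
              [0.2, 0.2, 0.6, 0.0],
              [0.0, 0.2, 0.2, 0.6],
              [0.0, 0.0, 0.2, 0.8]])

# Local balance 0.6 pi_i = 0.2 pi_(i+1) gives pi_i = 3^(i-1) pi_1.
pi = 3.0 ** np.arange(4)
pi /= pi.sum()                    # 1 + 3 + 9 + 27 = 40
print(pi)                         # [0.025 0.075 0.225 0.675]
print(np.allclose(pi @ P, pi))    # True: stationary, so pi_1 = 1/40
```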
More information