4. Markov Chains

A discrete time process {X_n, n = 0, 1, 2, ...} with discrete state space X_n ∈ {0, 1, 2, ...} is a Markov chain if it has the Markov property:

P[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0] = P[X_{n+1} = j | X_n = i]

In words, the past is conditionally independent of the future given the present state of the process: given the present state, the past contains no additional information on the future evolution of the system. The Markov property is common in probability models because, by assumption, one supposes that the important variables for the system being modeled are all included in the state space. We consider homogeneous Markov chains, for which P[X_{n+1} = j | X_n = i] = P[X_1 = j | X_0 = i].
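To make the definition concrete, here is a minimal simulation sketch (not part of the original notes); the transition matrix P below is a hypothetical example, and the only information the update uses is the current state, which is exactly the Markov property.

```python
import random

# Hypothetical 3-state transition matrix (rows sum to 1).
P = [[0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4],
     [0.0, 0.5, 0.5]]

def simulate(P, x0, n):
    """Generate X_0, ..., X_n; the next state depends only on the current one."""
    path = [x0]
    for _ in range(n):
        i = path[-1]
        # Draw X_{n+1} from row i of P: the Markov property in action.
        path.append(random.choices(range(len(P)), weights=P[i])[0])
    return path

print(simulate(P, x0=0, n=20))
```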

Example: physical systems. If the state space contains the masses, velocities and accelerations of particles subject to Newton's laws of mechanics, the system is Markovian (but not random!).

Example: speech recognition. Context can be important for identifying words. Context can be modeled as a probability distribution for the next word given the most recent k words. This can be written as a Markov chain whose state is a vector of k consecutive words.

Example: epidemics. Suppose each infected individual has some chance of contacting each susceptible individual in each time interval, before becoming removed (recovered or hospitalized). Then, the number of infected and susceptible individuals may be modeled as a Markov chain.

Define P_{ij} = P[X_{n+1} = j | X_n = i]. Let P = [P_{ij}] denote the (possibly infinite) transition matrix of the one-step transition probabilities. Write P^2_{ij} = ∑_{k=0}^∞ P_{ik} P_{kj}, corresponding to standard matrix multiplication. Then

P^2_{ij} = ∑_k P[X_{n+1} = k | X_n = i] P[X_{n+2} = j | X_{n+1} = k]
         = ∑_k P[X_{n+2} = j, X_{n+1} = k | X_n = i]   (via the Markov property. Why?)
         = P[∪_k {X_{n+2} = j, X_{n+1} = k} | X_n = i]
         = P[X_{n+2} = j | X_n = i]

Generalizing this calculation: the matrix power P^n_{ij} gives the n-step transition probabilities.

The matrix multiplication identity P^{n+m} = P^n P^m corresponds to the Chapman-Kolmogorov equation P^{n+m}_{ij} = ∑_{k=0}^∞ P^n_{ik} P^m_{kj}.

Let ν^{(n)} be the (possibly infinite) row vector of probabilities at time n, so ν^{(n)}_i = P[X_n = i]. Then ν^{(n)} = ν^{(0)} P^n, using standard multiplication of a vector and a matrix. Why?
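A short numerical sketch of these identities, again with the hypothetical P from above: numpy's matrix power gives the n-step probabilities, and a row-vector product gives ν^{(n)}.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])        # hypothetical transition matrix

n = 10
Pn = np.linalg.matrix_power(P, n)      # P^n holds the n-step probabilities
nu0 = np.array([1.0, 0.0, 0.0])        # chain started in state 0
nun = nu0 @ Pn                         # nu^(n) = nu^(0) P^n

print(Pn[0])   # row 0 of P^n: P[X_n = j | X_0 = 0]
print(nun)     # the same numbers, via the distribution recursion
```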

Example. Set X_0 = 0, and let X_n evolve as a Markov chain with transition matrix

P = [matrix garbled in transcription; the legible entries include 1/2, 1/2, 0]

Find ν^{(n)}_0 = P[X_n = 0] by (i) using a probabilistic argument,

(ii) using linear algebra.

Classification of States

State j is accessible from i if P^k_{ij} > 0 for some k ≥ 0.

The transition matrix can be represented as a directed graph with arrows corresponding to positive one-step transition probabilities: j is accessible from i if there is a path from i to j. [Example graph omitted from the transcription.] Here, 4 is accessible from 0, but not vice versa.

i and j communicate if they are accessible from each other. This is written i ↔ j, and is an equivalence relation, meaning that
(i) i ↔ i [reflexivity]
(ii) if i ↔ j then j ↔ i [symmetry]
(iii) if i ↔ j and j ↔ k then i ↔ k [transitivity]
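As an illustration (not from the notes), communication classes can be computed mechanically: build the accessibility relation from the graph of positive one-step probabilities, then intersect it with its transpose. The matrix P below is a hypothetical example.

```python
import numpy as np

def accessibility(P):
    """Boolean matrix R with R[i, j] True iff j is accessible from i."""
    A = (np.asarray(P, dtype=float) > 0).astype(int)
    R = A | np.eye(len(A), dtype=int)   # k = 0 steps: i is accessible from i
    for _ in range(len(A)):             # squaring doubles the path length covered
        R = ((R @ R) > 0).astype(int)
    return R.astype(bool)

def communication_classes(P):
    R = accessibility(P)
    comm = R & R.T                      # i <-> j: accessible in both directions
    classes, seen = [], set()
    for i in range(len(comm)):
        if i not in seen:
            cls = {j for j in range(len(comm)) if comm[i, j]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4]]                   # hypothetical: state 2 can leave, never return
print(communication_classes(P))        # [[0, 1], [2]]
```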

An equivalence relation divides a set (here, the state space) into disjoint classes of equivalent states (here, called communication classes).

A Markov chain is irreducible if all the states communicate with each other, i.e., if there is only one communication class.

The communication class containing i is absorbing if P_{jk} = 0 whenever i ↔ j but i ↮ k (i.e., when i communicates with j but not with k). An absorbing class can never be left. A partial converse is...

Example: Show that a communication class, once left, can never be re-entered.

State i has period d if P^n_{ii} = 0 when n is not a multiple of d, and if d is the greatest integer with this property. If d = 1 then i is aperiodic.

Example: Show that all states in the same communication class have the same period.

Note: This is obvious by considering a generic directed graph for a periodic state: a cycle i → i+1 → i+2 → ... → i+d = i.

State i is recurrent if P[re-enter i | X_0 = i] = 1, where {re-enter i} is the event ∪_{n=1}^∞ {X_n = i, X_k ≠ i for k = 1, ..., n-1}. If i is not recurrent, it is transient.

Let S_0 = 0 and S_1, S_2, ... be times of successive returns to i, with N_i(t) = max{n : S_n ≤ t} being the corresponding counting process. If i is recurrent, then N_i(t) is a renewal process, since the Markov property gives independence of the interarrival times X^A_n = S_n - S_{n-1}. Letting µ_{ii} = E[X^A_1], the expected return time for i, we then have the following from renewal theory:

P[lim_{t→∞} N_i(t)/t = 1/µ_{ii} | X_0 = i] = 1

If i ↔ j, lim_{n→∞} (1/n) ∑_{k=1}^n P^k_{ij} = 1/µ_{jj}

If i is aperiodic, lim_{n→∞} P^n_{ij} = 1/µ_{jj} for j ↔ i

If i has period d, lim_{n→∞} P^{nd}_{ii} = d/µ_{ii}

If i is transient, then N_i(t) is a defective renewal process. This is a generalization of renewal processes where X^A_1, X^A_2, ... are still iid, but we allow P[X^A_1 = ∞] > 0.

Proposition: i is recurrent if and only if ∑_{n=1}^∞ P^n_{ii} = ∞.

Example: Show that if i is recurrent and i ↔ j, then j is recurrent.

Example: If i is recurrent and i ↔ j, show that P[never enter state j | X_0 = i] = 0.

If i is transient, then µ_{ii} = ∞. If i is recurrent and µ_{ii} < ∞, then i is said to be positive recurrent. Otherwise, if µ_{ii} = ∞, i is null recurrent.

Proposition: If i ↔ j and i is recurrent, then either i and j are both positive recurrent, or both null recurrent (i.e., positive/null recurrence is a property of communication classes).

Random Walks

The simple random walk is a Markov chain on the integers, Z = {..., -1, 0, 1, 2, ...}, with X_0 = 0 and P[X_{n+1} = X_n + 1] = p, P[X_{n+1} = X_n - 1] = 1 - p.

Example: If X_n counts the number of successes minus the number of failures for a new medical procedure, X_n could be modeled as a random walk, with p the success rate of the procedure. When should the trial be stopped?

If p = 1/2, the random walk is symmetric. The symmetric random walk in d dimensions is a vector-valued Markov chain, with state space Z^d and X^{(d)}_0 = (0, ..., 0). Two possible definitions are:
(i) Let X^{(d)}_{n+1} - X^{(d)}_n take each of the 2^d possibilities (±1, ±1, ..., ±1) with equal probability.
(ii) Let X^{(d)}_{n+1} - X^{(d)}_n take one of the 2d values (±1, 0, ..., 0), (0, ±1, 0, ..., 0), ... with equal probability. This is harder to analyze.

Example: A biological signaling molecule becomes separated from its receptor. It starts diffusing, due to thermal noise. Suppose the diffusion is well modeled by a random walk. Will the molecule return to the receptor? If the molecule is constrained to a one-dimensional line? A two-dimensional surface? Three-dimensional space?

Proposition: The symmetric random walk is null recurrent when d = 1 and d = 2, but transient for d ≥ 3.

Proof: The method is to employ Stirling's formula,

n! ∼ n^{n+1/2} e^{-n} √(2π)

where a_n ∼ b_n means lim_{n→∞}(a_n/b_n) = 1, to approximate P[X^{(d)}_n = X^{(d)}_0].

Note: This is in HW 5. You are expected to solve the simpler case (i), though you can solve (ii) if you want a bigger challenge.
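A simulation sketch of the proposition (not a proof, and not from the notes): estimate the probability that the walk of definition (ii) returns to the origin within a fixed horizon, for d = 1, 2, 3. The horizon and replication counts below are arbitrary choices.

```python
import random

def returned(d, n_steps):
    """One walk of definition (ii); True if it revisits the origin."""
    x = [0] * d
    for _ in range(n_steps):
        i = random.randrange(d)          # pick a coordinate...
        x[i] += random.choice((-1, 1))   # ...and move it by +/- 1
        if all(c == 0 for c in x):
            return True
    return False

reps, n_steps = 1000, 2000
for d in (1, 2, 3):
    est = sum(returned(d, n_steps) for _ in range(reps)) / reps
    # Estimates drift toward 1 as n_steps grows for d = 1, 2 (slowly for
    # d = 2), but stay bounded below 1 for d = 3.
    print(d, est)
```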

Stationary Distributions

π = {π_i, i = 0, 1, ...} is a stationary distribution for P = [P_{ij}] if π_j = ∑_{i=0}^∞ π_i P_{ij}, with π_i ≥ 0 and ∑_{i=0}^∞ π_i = 1. In matrix notation, π_j = ∑_{i=0}^∞ π_i P_{ij} is π = πP, where π is a row vector.

Theorem: An irreducible, aperiodic, positive recurrent Markov chain has a unique stationary distribution, which is also the limiting distribution π_j = lim_{n→∞} P^n_{ij}. Such Markov chains are called ergodic.

Proof

Proof continued

Irreducible chains which are transient or null recurrent have no stationary distribution. Why?

Chains which are periodic, or which have multiple communication classes, may have lim_{n→∞} P^n_{ij} not existing, or depending on i.

A chain started in a stationary distribution will remain in that distribution, i.e., will result in a stationary process.

If we can find any probability distribution solving the stationarity equations π = πP, and we check that the chain is irreducible and aperiodic, then we know that
(i) the chain is positive recurrent;
(ii) π is the unique stationary distribution;
(iii) π is the limiting distribution.
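A sketch of the linear-algebra route to π for a finite chain (the matrix P is the same hypothetical example as above): append the normalization ∑_i π_i = 1 to the singular system π(P - I) = 0 and solve by least squares.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])              # hypothetical ergodic chain

n = P.shape[0]
# Stack the n equations (P^T - I) pi^T = 0 with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares handles the redundancy
print(pi)        # the stationary distribution
print(pi @ P)    # equals pi, confirming pi = pi P
```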

Example: Markov Chain Monte Carlo

Suppose we wish to evaluate E[h(X)] where X has distribution π (i.e., P[X = i] = π_i). The Monte Carlo approach is to generate X_1, X_2, ..., X_n ∼ π and estimate E[h(X)] ≈ (1/n) ∑_{i=1}^n h(X_i).

If it is hard to generate an iid sample from π, we may look to generate a sequence from a Markov chain with limiting distribution π. This idea, called Markov chain Monte Carlo (MCMC), goes back to Metropolis et al. (1953) and was generalized by Hastings (1970). It has become a fundamental computational method for the physical and biological sciences. It is also commonly used for Bayesian statistical inference.

Metropolis-Hastings Algorithm

(i) Choose a transition matrix Q = [q_{ij}].
(ii) Set X_0 = 0.
(iii) For n = 1, 2, ..., generate Y_n with P[Y_n = j | X_{n-1} = i] = q_{ij}. If X_{n-1} = i and Y_n = j, set

X_n = j with probability min(1, π_j q_{ji} / π_i q_{ij}), and X_n = i otherwise.

Here, Y_n is called the proposal, and we say the proposal is accepted with probability min(1, π_j q_{ji} / π_i q_{ij}). If the proposal is not accepted, the chain stays in its previous state.
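A minimal sketch of the algorithm on a finite state space, assuming a hypothetical target π and a uniform proposal matrix q; the long-run state frequencies should approximate π.

```python
import random

pi = [0.1, 0.2, 0.3, 0.4]              # hypothetical target distribution
q = [[0.25] * 4 for _ in range(4)]     # hypothetical proposal: q_ij = 1/4

def metropolis_hastings(pi, q, n):
    x, path = 0, []                    # step (ii): X_0 = 0
    for _ in range(n):
        # Step (iii): propose Y_n = j with probability q_ij.
        y = random.choices(range(len(pi)), weights=q[x])[0]
        # Accept with probability min(1, pi_j q_ji / pi_i q_ij).
        if random.random() < min(1.0, pi[y] * q[y][x] / (pi[x] * q[x][y])):
            x = y                      # accepted: move to the proposal
        path.append(x)                 # rejected: stay in the previous state
    return path

path = metropolis_hastings(pi, q, 100_000)
print([path.count(i) / len(path) for i in range(4)])   # approximately pi
```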

Proposition: Set

P_{ij} = q_{ij} min(1, π_j q_{ji} / π_i q_{ij})                  for j ≠ i
P_{ii} = q_{ii} + ∑_{k≠i} q_{ik} {1 - min(1, π_k q_{ki} / π_i q_{ik})}

Then π is a stationary distribution of the Metropolis-Hastings chain {X_n}. If P_{ij} is irreducible and aperiodic, then π is also the limiting distribution.

Proof.

Example: (spatial models on a lattice) Let {Y_{ij}, 1 ≤ i ≤ m, 1 ≤ j ≤ n} be a spatial stochastic process. Let N(i,j) define a neighborhood, e.g.,

N(i,j) = {(p,q) : |p-i| + |q-j| = 1}

We may model the conditional distribution of Y_{ij} given the neighbors. E.g., if your neighbors vote Republican (R), you are more likely to do so. Say,

P[Y_{ij} = R | {Y_{pq}, (p,q) ≠ (i,j)}] = 1/2 + α[(∑_{(p,q)∈N(i,j)} I{Y_{pq} = R}) - 2]

The full distribution π of Y is, in principle, specified by the conditional distributions. In practice, one simulates from π using MCMC, where the proposal distribution is to pick a random (i,j) and swap it from R to D or vice versa.

Example: Sampling conditional distributions. If X and Θ have joint density f_{XΘ}(x,θ) and we observe X = x, then one wants to sample from

f_{Θ|X}(θ|x) = f_{XΘ}(x,θ) / f_X(x) ∝ f_Θ(θ) f_{X|Θ}(x|θ)

For Bayesian inference, f_Θ(θ) is the prior distribution, and f_{Θ|X}(θ|x) is the posterior. The model determines f_Θ(θ) and f_{X|Θ}(x|θ). The normalizing constant, f_X(x) = ∫ f_{XΘ}(x,θ) dθ, is often unknown.

Using MCMC to sample from the posterior allows numerical evaluation of the posterior mean E[Θ | X = x], posterior variance Var(Θ | X = x), or a 1-α credible region, defined to be a set A such that P[Θ ∈ A | X = x] = 1-α.

Galton-Watson Branching Process

Let X_n be the size of a population at time n. Suppose each individual has an iid offspring distribution, so

X_{n+1} = ∑_{k=1}^{X_n} Z^{(k)}_{n+1}, where Z^{(k)}_{n+1} ∼ Z.

We can think of Z^{(k)}_{n+1} = 0 as being the death of the parent, or think of X_n as the size of the n-th generation. Suppose X_0 = 1. Notice that X_n = 0 is an absorbing state, termed extinction. Supposing P[Z = 0] > 0 and P[Z > 1] > 0, we can show that 1, 2, 3, ... are transient states. Why? Thus, the branching process either becomes extinct or tends to infinity.
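A simulation sketch (not from the notes), using the hypothetical offspring distribution P[Z=0] = 1/4, P[Z=1] = 1/4, P[Z=2] = 1/2 that reappears in the example a few slides below: each replicate either hits the absorbing state 0 or grows past a cap.

```python
import random

def extinct_by(n_gen, cap=10_000):
    """Simulate one Galton-Watson lineage; True if it dies out within n_gen."""
    x = 1                                        # X_0 = 1
    for _ in range(n_gen):
        # Offspring law: P[Z=0] = 1/4, P[Z=1] = 1/4, P[Z=2] = 1/2 (hypothetical).
        x = sum(random.choices((0, 1, 2), weights=(1, 1, 2))[0] for _ in range(x))
        if x == 0:
            return True                          # extinction is absorbing
        if x > cap:
            return False                         # effectively exploded
    return False

reps = 5000
# Rises toward the ultimate extinction probability (1/2 here) as n_gen grows.
print(sum(extinct_by(20) for _ in range(reps)) / reps)
```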

Set φ(s) = E[s^Z], the probability generating function for Z. Set φ_n(s) = E[s^{X_n}]. Show that φ_n(s) = φ^{(n)}(s) = φ(φ(...φ(s)...)), meaning φ applied n times.

If we can find a solution to φ(s) = s, we will have φ_n(s) = s for all n. This suggests plotting φ(s) vs s, noting that
(i) φ(1) = 1. Why?
(ii) φ(0) = P[Z = 0]. Why?
(iii) φ(s) is increasing, i.e., dφ/ds > 0. Why?
(iv) φ(s) is convex, i.e., d²φ/ds² > 0. Why?

Now, notice that, for 0 < s < 1,

P[extinction] = lim_{n→∞} P[X_n = 0] = lim_{n→∞} φ_n(s).

Argue that, in CASE 1, lim_{n→∞} φ_n(s) is the unique fixed point of φ(s) = s with 0 < s < 1. In CASE 2, lim_{n→∞} φ_n(s) = 1. [CASE 1 and CASE 2 refer to plots of φ, omitted from the transcription.]

Conclude that in CASE 1 (with dφ/ds|_{s=1} > 1, i.e., E[Z] > 1) there is some possibility of infinite growth. In CASE 2 (with dφ/ds|_{s=1} ≤ 1, i.e., E[Z] ≤ 1) extinction is assured.

Example: take

Z = 0 w.p. 1/4, 1 w.p. 1/4, 2 w.p. 1/2.

Find the chance of extinction. Now suppose the founding population has size k (i.e., X_0 = k). Find the chance of extinction.
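A numerical check of this example (a sketch, assuming the distribution above, for which E[Z] = 1.25 > 1 puts us in CASE 1): iterating φ from s = 0 gives φ_n(0) = P[X_n = 0], which increases to the extinction probability, the smaller root of φ(s) = s.

```python
def phi(s):
    """Generating function for Z: P[Z=0]=1/4, P[Z=1]=1/4, P[Z=2]=1/2."""
    return 0.25 + 0.25 * s + 0.5 * s * s

s = 0.0                     # phi_n(0) = P[X_n = 0] increases to P[extinction]
for _ in range(100):
    s = phi(s)
print(s)                    # 0.5, the smaller root of phi(s) = s
# With X_0 = k the k lineages die out independently, so P[extinction] = 0.5 ** k.
```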

Time Reversal

Thinking backwards in time can be a powerful strategy. For any Markov chain {X_k, k = 0, 1, ..., n}, we can set Y_k = X_{n-k}. Then Y_k is a Markov chain. Suppose X_k has transition matrix P_{ij}; then Y_k has inhomogeneous transition matrix P^{[k]}_{ij}, where

P^{[k]}_{ij} = P[Y_{k+1} = j | Y_k = i] = P[X_{n-k-1} = j | X_{n-k} = i]
             = P[X_{n-k} = i | X_{n-k-1} = j] P[X_{n-k-1} = j] / P[X_{n-k} = i]
             = P_{ji} P[X_{n-k-1} = j] / P[X_{n-k} = i].

If {X_k} is stationary, with stationary distribution π, then {Y_k} is homogeneous, with

P*_{ij} = P_{ji} π_j / π_i

Surprisingly many important chains have the time-reversibility property P*_{ij} = P_{ij}, for which the chain looks the same forwards as backwards.

Theorem (detailed balance equations): If π is a probability distribution with π_i P_{ij} = π_j P_{ji}, then π is a stationary distribution for P and the chain started at π is time-reversible.

Proof

The detailed balance equations are simpler than the general equations for finding π. Try to solve them when looking for π, in case you get lucky!

Intuition: π_i P_{ij} is the rate of going directly from i to j. A chain is reversible if this equals the rate of going directly from j to i. Periodic chains (π does not give limiting probabilities) and reducible chains (π is not unique) are both OK here.

An equivalent condition for reversibility of ergodic Markov chains is that every loop of states starting and ending in i has the same probability as the reverse loop, i.e., for all i_1, ..., i_k,

P_{i i_1} P_{i_1 i_2} ... P_{i_{k-1} i_k} P_{i_k i} = P_{i i_k} P_{i_k i_{k-1}} ... P_{i_2 i_1} P_{i_1 i}

Proof

Example: Birth-Death Process. Let P[X_{n+1} = X_n + 1 | X_n = i] = p_i and P[X_{n+1} = X_n - 1 | X_n = i] = q_i = 1 - p_i. If {X_n} is positive recurrent, it must be reversible. Heuristic reason: [chain diagram omitted from the transcription: states 0, 1, 2, ... with p_0 = 1, up-arrows labeled p_i and down-arrows labeled q_i.]

Find the stationary distribution, giving conditions for its existence.

Note that this chain is periodic (d = 2). There is still a unique stationary distribution, which solves the detailed balance equations.
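A sketch of solving the detailed balance equations π_{i-1} p_{i-1} = π_i q_i for a birth-death chain with hypothetical rates p_i; the truncation at N is an artifact of the sketch, not of the model.

```python
import numpy as np

N = 50                                   # truncate the chain at state N
p = np.full(N + 1, 0.4)                  # hypothetical birth probabilities p_i
p[0] = 1.0                               # reflection at 0, as in the diagram
q = 1.0 - p                              # death probabilities q_i

pi = np.ones(N + 1)
for i in range(1, N + 1):
    pi[i] = pi[i - 1] * p[i - 1] / q[i]  # detailed balance: pi_{i-1} p_{i-1} = pi_i q_i
pi /= pi.sum()                           # normalizable here since p_i < q_i for i >= 1
print(pi[:5])
```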

Example: Metropolis Algorithm. The Metropolis-Hastings algorithm with symmetric proposals (q_{ij} = q_{ji}) is the Metropolis algorithm. Here,

P_{ij} = q_{ij} min(1, π_j/π_i)                        for j ≠ i
P_{ii} = q_{ii} + ∑_{k≠i} q_{ik} {1 - min(1, π_k/π_i)}

Show that P_{ij} is reversible, with stationary distribution π.

Solution

Random Walk on a Graph

Consider an undirected, connected graph with a positive weight w_{ij} on each edge ij. Set w_{ij} = 0 if i and j are not connected by an edge. Set

P_{ij} = w_{ij} / ∑_k w_{ik}

The Markov chain {X_n} with transition matrix P is called a random walk on a graph. Think of a driver who is lost: each vertex is an intersection, the weight is the width (in lanes) of a road leaving an intersection, and the driver picks a random exit upon arriving at an intersection, but has a preference for bigger streets.

Show that the random walk on a graph is reversible, and has stationary distribution

π_i = ∑_j w_{ij} / ∑_{j,k} w_{jk}

Note: the double sum counts each weight twice. Hence, find the limiting probability of being at vertex 3 on the graph shown above. [Graph omitted from the transcription.]
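A numerical sketch with a hypothetical weight matrix (the slide's graph is not reproduced in the transcription), verifying πP = π for the degree-proportional distribution:

```python
import numpy as np

# Hypothetical symmetric weight matrix, w_ij = w_ji.
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 3],
              [0, 1, 3, 0]], dtype=float)

P = W / W.sum(axis=1, keepdims=True)    # P_ij = w_ij / sum_k w_ik
pi = W.sum(axis=1) / W.sum()            # pi_i = sum_j w_ij / sum_jk w_jk
print(pi)
print(pi @ P)                           # equals pi: the walk is reversible
```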

Ergodicity

A stochastic process {X_n} is ergodic if limiting time averages equal limiting probabilities, i.e.,

lim_{n→∞} (time in state i up to time n) / n = lim_{n→∞} P[X_n = i].

Show that an irreducible, aperiodic, positive recurrent Markov chain is ergodic (i.e., this general idea of ergodicity matches our definition for Markov chains).

Semi-Markov Processes

A semi-Markov process is a discrete state, continuous time process {Z(t), t ≥ 0} for which the sequence of states visited is a Markov chain {X_n, n = 0, 1, ...} and, conditional on {X_n}, the times between transitions are independent. Specifically, define transition times S_0, S_1, ... by S_0 = 0 and S_n - S_{n-1} ∼ F_{ij} conditional on X_{n-1} = i and X_n = j. Then Z(t) = X_{N(t)}, where N(t) = sup{n : S_n ≤ t}. {X_n} is the embedded Markov chain.

The special case where F_{ij} ∼ Exponential(ν_i) is called a continuous time Markov chain.

The special case where

F_{ij}(t) = 0 for t < 1, and F_{ij}(t) = 1 for t ≥ 1,

results in Z(n) = X_n, so we retrieve the discrete time Markov chain.

Example: A taxi serves the airport, the hotel district and the theater district. Let Z(t) = 0, 1, 2 if the taxi's current destination is each of these locations respectively. The time to travel from i to j has distribution F_{ij}. Upon arrival at i, the taxi immediately picks up a new customer whose destination has distribution given by P_{ij}. Then, Z(t) is a semi-Markov process.

To study the limiting behavior of Z(t) we make some definitions...

Say {Z(t)} is irreducible if {X_n} is irreducible. Say {Z(t)} is non-lattice if {N(t)} is non-lattice. Set H_i to be the c.d.f. of the time in state i before a transition, so H_i(t) = ∑_j P_{ij} F_{ij}(t), and the expected time of a visit to i is

µ_i = ∫_0^∞ x dH_i(x) = ∑_j P_{ij} ∫_0^∞ x dF_{ij}(x).

Let T_{ii} be the time between successive visits to i and define µ_{ii} = E[T_{ii}].

Proposition: If {Z(t)} is an irreducible, non-lattice semi-Markov process with µ_{ii} < ∞, then

lim_{t→∞} P[Z(t) = i | Z(0) = j] = µ_i / µ_{ii} = lim_{t→∞} (time in i before t) / t   w.p. 1

Proof: identify a relevant alternating renewal process.

By counting up the time spent in i differently, we get another identity:

lim_{t→∞} (time in i during [0,t]) / t = π_i µ_i / ∑_j π_j µ_j

supposing that {X_n} is ergodic (irreducible, aperiodic, positive recurrent) with stationary distribution π.

Proof
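A simulation sketch of this identity (not from the notes), with a hypothetical two-state embedded chain and exponential holding times whose means depend only on the current state:

```python
import random

P = [[0.0, 1.0],
     [0.5, 0.5]]            # hypothetical embedded chain; pi = (1/3, 2/3)
mu = [2.0, 0.5]             # mean exponential holding times (assumed)

x, t, time_in_0 = 0, 0.0, 0.0
while t < 1e5:
    hold = random.expovariate(1.0 / mu[x])    # time spent in x before the jump
    if x == 0:
        time_in_0 += hold
    t += hold
    x = random.choices((0, 1), weights=P[x])[0]   # next state of embedded chain

# Long-run fraction of time in 0: pi_0 mu_0 / (pi_0 mu_0 + pi_1 mu_1) = 2/3.
print(time_in_0 / t)
```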

Calculations with semi-Markov models typically involve identifying a suitable renewal, alternating renewal, regenerative or reward process.

Example: Let Y(t) = S_{N(t)+1} - t be the residual life process for an ergodic, non-lattice semi-Markov model. Find an expression for lim_{t→∞} P[Y(t) > x].

Hint: Consider an alternating renewal process which switches on when Z(t) enters i, and switches off immediately if the next transition is not to j, or switches off once the time until the transition to j is less than x.

Proof (continued)
