1 Uncertainty Runs Rampant in the Universe (C. Ebeling, circa 2000). Markov Chains: A Stochastic Process. Into each life a little uncertainty must fall.

2 Our Hero - Andrei Andreyevich Markov Born: 14 June 1856 in Ryazan, Russia Died: 20 July 1922 in Petrograd (now St Petersburg), Russia

3 Brand Switching. The wine-of-the-month club offers two brands of wine, Wine A and Wine B. Currently Wine A has 20% of the market. Customers in the club order monthly, choosing either A or B. Anyone buying Wine A one month has a 75% chance of buying it the next month, and therefore a 25% chance of switching to Wine B. Anyone buying Wine B one month has a 55% chance of buying it the next month, and therefore a 45% chance of switching to Wine A. What is the expected market share of Wine A at the end of each of the next several months?

4 Brand Switching and Markov: a look ahead. With the states ordered (Wine A, Wine B), the initial state vector and the one-step transition matrix (rows = this month, columns = next month) are

q^(0) = (.2, .8)

P =
  .75  .25
  .45  .55

5 Looking further ahead.

Month 1:  q^(1) = q^(0) P = (.2, .8) P = (.51, .49)
Month 2:  q^(2) = q^(1) P = (.51, .49) P = (.603, .397)

Equivalently, q^(2) = q^(1) P = (q^(0) P) P = q^(0) P^2. Wow, using matrix-vector products is so easy.

6 Still looking further ahead.

Month 3:   q^(3) = q^(2) P = (.603, .397) P = (.6309, .3691), or directly q^(3) = q^(0) P^3
Month 10:  q^(10) = q^(0) P^10 = (.2, .8) P^10 = (.6429, .3571)

Look, by month 10 both rows of P^10 are the same!!!
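
The month-by-month update is easy to verify numerically. Below is a minimal sketch in Python (assuming NumPy is available; the variable names are illustrative, not from the slides):

    import numpy as np

    # One-step transition matrix: rows = this month, columns = next month.
    P = np.array([[0.75, 0.25],   # buying A now: 75% stay with A, 25% switch to B
                  [0.45, 0.55]])  # buying B now: 45% switch to A, 55% stay with B

    q = np.array([0.20, 0.80])    # initial market share, q^(0)

    for month in range(1, 11):
        q = q @ P                 # one-step update: q^(n) = q^(n-1) P
        print(f"month {month:2d}: A = {q[0]:.4f}, B = {q[1]:.4f}")

    # The shares settle near (0.6429, 0.3571) by month 10, matching the slide.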

7 Stochastic Process. An indexed collection of random variables, where the index is often time: {X_t}, t = 0, 1, 2, 3, ... Examples: X_t = number of demands in month t; X_t = number of customers served on day t; X_t = number of errors on page t of a manuscript; X_t = A or B, the brand selected for month t; X_t = 0, 1, 2, 3, ..., the state of the system at time t.

8 What do we do with these stochastic processes? I think we answer questions with them. What we really need to model a stochastic process is the joint probability distribution of the X_t.

9 The Markovian Property.

P{X_t = a_t | X_1 = a_1, X_2 = a_2, ..., X_{t-1} = a_{t-1}} = P{X_t = a_t | X_{t-1} = a_{t-1}}

I call this the Markovian, or memoryless, property: given the present state, the future does not depend on the rest of the past.

10 A Stationary Process. Assume for all t:

P{X_t = j | X_{t-1} = i} = P{X_1 = j | X_0 = i} = p_ij

I call these the one-step transition probabilities.

11 Markov Chains. Look, these Markov processes are really great for predicting stock market movements! Two types: steady-state processes and absorbing-state processes. Characteristics: a finite number of states; the future state depends only on the current state (the Markovian or memoryless property); a stationary process (transition probabilities constant over time); discrete transitions (time periods).

12 Transition Probabilities. The conditional probability that a Markov chain will be in state j at time t+1, given that it is in state i at time t, is denoted p_ij and is called the one-step transition probability, i.e., p_ij = P[X_{t+1} = j | X_t = i]. The matrix of one-step transition probabilities is denoted P:

P = {p_ij} =
  p_11  p_12  p_13  ...  p_1m
  p_21  p_22  p_23  ...  p_2m
  ...
  p_m1  p_m2  p_m3  ...  p_mm

where sum_{j=1}^m p_ij = 1 for all i, and p_ij >= 0.

13 Stochastic Matrix. A matrix with nonnegative entries whose rows each sum to one, such as P above, is called a stochastic matrix. If the columns also sum to one, then it is called a doubly stochastic matrix.
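
The row-sum condition is easy to check mechanically. Here is a small sketch (the function names are my own, not from the slides):

    import numpy as np

    def is_stochastic(P, tol=1e-9):
        """True if all entries are nonnegative and every row sums to one."""
        P = np.asarray(P, dtype=float)
        return bool(np.all(P >= -tol) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

    def is_doubly_stochastic(P, tol=1e-9):
        """True if both the rows and the columns sum to one."""
        return is_stochastic(P, tol) and is_stochastic(np.asarray(P, dtype=float).T, tol)

    print(is_stochastic([[0.75, 0.25], [0.45, 0.55]]))         # True
    print(is_doubly_stochastic([[0.75, 0.25], [0.45, 0.55]]))  # False: columns sum to 1.2 and 0.8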

14 n-step transition probabilities.

P{X_{t+n} = j | X_t = i} = P{X_n = j | X_0 = i} = p_ij^(n)

where sum_{j=1}^m p_ij^(n) = 1 for all i, and p_ij^(n) >= 0, for n = 1, 2, 3, ...

15 The n-step transition matrix.

P^(n) = {p_ij^(n)} =
  p_11^(n)  p_12^(n)  p_13^(n)  ...  p_1m^(n)
  p_21^(n)  p_22^(n)  p_23^(n)  ...  p_2m^(n)
  ...
  p_m1^(n)  p_m2^(n)  p_m3^(n)  ...  p_mm^(n)

16 Chapman-Kolmogorov Equations.

p_ij^(n) = sum_{k=1}^m p_ik^(v) p_kj^(n-v)   for all i, j, n and 0 < v < n

In particular,

p_ij^(n) = sum_{k=1}^m p_ik^(n-1) p_kj = sum_{k=1}^m p_ik p_kj^(n-1)

This provides a method for computing the n-step transition probabilities.
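
A quick numerical check of the equations on the wine example (a sketch, assuming NumPy):

    import numpy as np

    P = np.array([[0.75, 0.25],
                  [0.45, 0.55]])

    # Chapman-Kolmogorov with n = 5, v = 2: P^(5) = P^(2) P^(3).
    lhs = np.linalg.matrix_power(P, 5)
    rhs = np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)
    print(np.allclose(lhs, rhs))  # True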

17 More on the Chapman-Kolmogorov Equations.

p_ij^(n) = sum_{k=1}^m p_ik^(n-1) p_kj;   in particular, p_ij^(2) = sum_{k=1}^m p_ik^(1) p_kj^(1)

This is the definition of the (i,j)th element of the matrix P squared. That is, {p_ij^(2)} = P^(2) = P x P.

18 Generalizing.

P^(2) = P x P = P^2
P^(3) = P x P^(2) = P^3

and in general P^(n) = P x P^(n-1) = P^(n-1) x P = P^n. I see now how we can compute the n-step transition matrix.
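
In code, the n-step matrix is just a matrix power. A sketch using NumPy's matrix_power on the wine example:

    import numpy as np

    P = np.array([[0.75, 0.25],
                  [0.45, 0.55]])

    P2 = np.linalg.matrix_power(P, 2)    # two-step transition probabilities, P^(2)
    P10 = np.linalg.matrix_power(P, 10)  # ten-step transition probabilities, P^(10)

    print(P2)   # [[0.675 0.325], [0.585 0.415]]
    print(P10)  # both rows are nearly (0.6429, 0.3571): the chain forgets where it started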

19 Whoa, Chuck! We need another example of these so-called Markov chains before you go any further.

20 Our very first complete example. Joe College has decided, while on spring break, to travel among his three favorite cities: Dayton, Columbus, and Cleveland. Each day, he will randomly determine where he will spend the following day. If he is currently in Dayton, he will either stay in Dayton or travel to Columbus, each with probability .5. If he is in Columbus, he will either stay in Columbus or travel to Cleveland, each with probability .5. If he is in Cleveland, he will travel to Dayton with a .75 probability or travel to Columbus with a .25 probability.

21 Our very first complete example, continued. Define state 1: Dayton, state 2: Columbus, state 3: Cleveland. Then

P =
  .50  .50  .00
  .00  .50  .50
  .75  .25  .00

Joe College going on his spring break.

22 Our very first complete example, continued some more.

P^(2) = P x P =
  .250  .500  .250
  .375  .375  .250
  .375  .500  .125

Day 2 of the spring break.

23 Our very first complete example, continued even more.

P^(3) = P x P^(2) =
  .31250  .43750  .25000
  .37500  .43750  .18750
  .28125  .46875  .25000

Day 3 of the spring break.

24 Our example, yes, continued.

P^(4) = P x P^(3) ≈
  .3438  .4375  .2188
  .3281  .4531  .2188
  .3281  .4375  .2344

Day 4 of the spring break.

25 Our example, you guessed it.

P^(5) = P^(2) P^(3) ≈
  .3359  .4453  .2188
  .3281  .4453  .2266
  .3398  .4414  .2188

Day 5 of the spring break.

26 Our example keeps on truckin'. Joe College decides to begin his spring break in Dayton with probability .2, Columbus with probability .3, and Cleveland with probability .5. q^(0) = (.2, .3, .5) is referred to as the initial state vector, and q^(n) is the state vector after n transitions.

q^(5) = q^(0) P^(5) = (.2, .3, .5) P^(5) ≈ (.3355, .4434, .2211)
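
A quick numerical check of this slide (a sketch, assuming NumPy):

    import numpy as np

    P = np.array([[0.50, 0.50, 0.00],   # from Dayton
                  [0.00, 0.50, 0.50],   # from Columbus
                  [0.75, 0.25, 0.00]])  # from Cleveland

    q0 = np.array([0.2, 0.3, 0.5])      # initial state vector
    q5 = q0 @ np.linalg.matrix_power(P, 5)
    print(q5)  # approximately (0.3355, 0.4434, 0.2211)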

27 Initial State Probabilities. In some cases we may not know in which state a Markov chain starts, so we describe the initial state probability q_i^(0) = P[X_0 = i]. Given the initial state vector q^(0) = (q_1^(0), q_2^(0), ..., q_m^(0)), the probability that the Markov chain is in state j at time n is called the unconditional state probability and can be found via:

q^(n) = q^(0) P^(n)

This is a remarkable result.

28 Steady-State Probabilities. The steady-state probabilities of a Markov chain, when they exist, are given by

π_j = lim_{n→∞} p_ij^(n),   j = 1, ..., m

(the same limit for every starting state i).

29 Joe College's spring break revisited. Joe decides to continue his spring break indefinitely and not return to classes. In the long run, what is the average percent of time that he will spend in each city?

P =
  .50  .50  .00
  .00  .50  .50
  .75  .25  .00

After 10 days:

P^(10) ≈
  .333  .444  .222
  .333  .444  .222
  .333  .444  .222

30 Steady-State Equations.

π_j = sum_{i=1}^m π_i p_ij   for j = 1, ..., m
sum_{j=1}^m π_j = 1

There are m+1 equations and only m unknowns, so one of the balance equations must be redundant.

31 Back to the example.

[π1, π2, π3] = [π1, π2, π3] P, with P as above, gives:

π1 = .5 π1 + .75 π3
π2 = .5 π1 + .5 π2 + .25 π3    (one of these three balance equations can be eliminated)
π3 = .5 π2
π1 + π2 + π3 = 1               (needed in order to avoid the all-zero solution)

32 Back to the example some more.

1. π1 = .5 π1 + .75 π3
2. π2 = .5 π1 + .5 π2 + .25 π3
3. π3 = .5 π2
4. π1 + π2 + π3 = 1

From 1: π3 = .50 π1 / .75 = (2/3) π1
From 3: π2 = 2 π3 = (4/3) π1
Using 4: π1 + (4/3) π1 + (2/3) π1 = 1, i.e. 3 π1 = 1

π1 = 1/3, π2 = 4/9 = .4444, π3 = 2/9 = .2222
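
The same elimination can be done by a linear solver: drop one redundant balance equation and keep the normalization. A sketch (assuming NumPy):

    import numpy as np

    P = np.array([[0.50, 0.50, 0.00],
                  [0.00, 0.50, 0.50],
                  [0.75, 0.25, 0.00]])
    m = P.shape[0]

    # Balance equations pi = pi P, rewritten as (P^T - I) pi = 0; the last
    # row is replaced by the normalization sum(pi) = 1 to remove the redundancy.
    A = P.T - np.eye(m)
    A[-1, :] = 1.0
    b = np.zeros(m)
    b[-1] = 1.0

    pi = np.linalg.solve(A, b)
    print(pi)      # (0.3333, 0.4444, 0.2222) = (1/3, 4/9, 2/9)
    print(1 / pi)  # (3.0, 2.25, 4.5): the recurrence times of the next slide

The 1/pi line anticipates the expected recurrence times defined on the next slide.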

33 Expected Recurrence Times. Let μ_jj = 1/π_j. The μ_jj are the expected recurrence times: the average number of transitions before returning to state j. From the example:

μ_11 = 1/π1 = 3/1 = 3 days
μ_22 = 1/π2 = 9/4 = 2.25 days
μ_33 = 1/π3 = 9/2 = 4.5 days

34 This is great stuff, Chuck. Can you do another example? Please!!!!

35 It's a rat race. A rat moves randomly in the maze below. What are the long-term probabilities of the rat being in each region? [Maze figure not reproduced in this transcription.]

36 Transition Matrix. [The entries of the maze's transition matrix were lost in transcription.]

37 Steady-State Equations. As before, solve πP = π together with π1 + π2 + π3 + π4 = 1 for the four regions. [The individual balance equations and the numerical solution (π1, π2, π3, π4) were lost in transcription.]

38 Another example. In a particular society, the movement of a family from one social class to another is governed by the following transition probabilities, where a transition takes place from one generation to the next.

P = (rows: this generation; columns: next generation)
            upper  middle  lower
  upper      .4     .5      .1
  middle     .1     .7      .2
  lower      .0     .4      .6

39 State Transition Diagram. [Diagram of the upper, middle, and lower class states with arrows labeled by the transition probabilities above, e.g., .7 for middle → middle and .2 for middle → lower.]

40 After 2 and 3 generations (rows and columns ordered upper, middle, lower):

P^2 =
  .21  .59  .20
  .11  .62  .27
  .04  .52  .44

P^3 =
  .143  .598  .259
  .106  .597  .297
  .068  .560  .368

41 The Steady-State Solution.

(π1, π2, π3) = (π1, π2, π3) P

Equations:
π1 = .4 π1 + .1 π2
π3 = .1 π1 + .2 π2 + .6 π3
π1 + π2 + π3 = 1

Solution: (π1, π2, π3) = (4/41, 24/41, 13/41) = (.0976, .5854, .3171)
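
The same steady-state solve applied to the social-class chain reproduces this solution (a sketch, assuming NumPy; the matrix entries are those inferred from the balance equations above):

    import numpy as np

    P = np.array([[0.4, 0.5, 0.1],   # upper
                  [0.1, 0.7, 0.2],   # middle
                  [0.0, 0.4, 0.6]])  # lower

    A = P.T - np.eye(3)
    A[-1, :] = 1.0                   # replace one balance equation with sum(pi) = 1
    pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

    print(pi)      # (0.0976, 0.5854, 0.3171) = (4/41, 24/41, 13/41)
    print(1 / pi)  # recurrence times for the next slide: about (10.25, 1.71, 3.15)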

42 Expected Recurrence Times.

(1/π1, 1/π2, 1/π3) = (41/4, 41/24, 41/13) = (10.25, 1.71, 3.15)

I see that if my family were to leave the lower class, we would return to it in 3.15 generations on average. How many generations are expected before my family reaches the upper class?

43 Quick, go to the first passage times! This is really going to be good. Come on!
