Chapter 8: Markov Processes and Absorbing States

Markov Processes

Markov process models are useful in studying the evolution of systems over repeated trials or sequential time periods or stages. They have been used to describe the probability that:
- a machine that is functioning in one period will continue to function or will break down in the next period, and
- a consumer purchasing brand A in one period will purchase brand B in the next period.

Transition probabilities govern the manner in which the state of the system changes from one stage to the next. These are often represented in a transition matrix. A system has a finite Markov chain with stationary transition probabilities if:
- there are a finite number of states,
- the transition probabilities remain constant from stage to stage, and
- the probability of the process being in a particular state at stage n+1 is completely determined by the state of the process at stage n (and not by the states at earlier stages). This is referred to as the memoryless property.

The state probabilities at any stage of the process can be calculated recursively: the vector of state probabilities at stage n+1 is the vector of state probabilities at stage n multiplied by the transition matrix. The probability of the system being in a particular state after a large number of stages is called a steady-state probability. Steady-state probabilities can be found by solving the system of equations

    πP = π

together with the condition that the probabilities sum to one, Σi πi = 1, where P is the transition probability matrix and π is the row vector of steady-state probabilities.
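To make the steady-state calculation concrete, here is a minimal Python/numpy sketch (an added illustration, not part of the original slides) that solves πP = π together with the normalization Σi πi = 1; the three-state transition matrix is a made-up placeholder.

    import numpy as np

    # Hypothetical 3-state transition matrix (each row sums to 1); values are assumed.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])

    n = P.shape[0]
    # Stack the equations (P^T - I) pi = 0 with the normalization sum(pi) = 1
    # and solve the resulting overdetermined (but consistent) system by least squares.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi, pi.sum())   # steady-state probabilities; they sum to 1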

Absorbing States

An absorbing state is one in which the probability that the process remains in that state, once it enters the state, is 1. If there is more than one absorbing state, then a steady-state condition independent of the initial state does not exist.

If a Markov chain has both absorbing and nonabsorbing states, the states may be rearranged so that the transition matrix can be written as the following composition of four submatrices, I, 0, R, and Q:

    [ I  0 ]
    [ R  Q ]

where
- I is an identity matrix, indicating that one always remains in an absorbing state once it is reached,
- 0 is a zero matrix, representing zero probability of transitioning from the absorbing states to the nonabsorbing states,
- R contains the transition probabilities from the nonabsorbing states to the absorbing states, and
- Q contains the transition probabilities between the nonabsorbing states.

The fundamental matrix, N, is the inverse of the difference between the identity matrix and the Q matrix:

    N = (I - Q)^-1

NR Matrix

The NR matrix is the product of the fundamental matrix N and the R matrix. It gives the probabilities of eventually moving from each nonabsorbing state to each absorbing state. Multiplying any vector of initial nonabsorbing state probabilities by NR gives the vector of probabilities for the process eventually reaching each of the absorbing states. Such computations enable economic analyses of systems and policies.
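The I/0/R/Q partition and the fundamental matrix can be computed mechanically. The sketch below (added here for illustration; the 4-state matrix is invented) assumes the states have already been reordered so the absorbing states come first, slices out R and Q, and forms N = (I - Q)^-1 and NR.

    import numpy as np

    def absorption_probabilities(P, n_absorbing):
        """P lists the absorbing states first; returns the fundamental matrix N and NR."""
        k = n_absorbing
        R = P[k:, :k]    # nonabsorbing -> absorbing transition probabilities
        Q = P[k:, k:]    # nonabsorbing -> nonabsorbing transition probabilities
        N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix
        return N, N @ R

    # Assumed example: states 0 and 1 are absorbing, states 2 and 3 are not.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.1, 0.2, 0.5, 0.2],
                  [0.3, 0.1, 0.2, 0.4]])
    N, NR = absorption_probabilities(P, n_absorbing=2)
    print(NR)   # each row sums to 1: absorption is certain from every nonabsorbing state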

Example: North's Hardware Store

Henry, a persistent salesman, calls North's Hardware Store once a week hoping to speak with the store's buying agent, Shirley. If Shirley does not accept Henry's call this week, the probability she will do the same next week is .35. On the other hand, if she accepts Henry's call this week, the probability she will not do so next week is .20.

Transition Matrix

                          Next Week's Call
                          Refuses   Accepts
    This Week's Call
      Refuses               .35       .65
      Accepts               .20       .80

How many times per year can Henry expect to talk to Shirley? To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks.

Let
    π1 = long-run proportion of refused calls
    π2 = long-run proportion of accepted calls

Then

    [π1  π2] [ .35  .65 ] = [π1  π2]
             [ .20  .80 ]

which gives the system

    .35π1 + .20π2 = π1        (1)
    .65π1 + .80π2 = π2        (2)
       π1 + π2    = 1         (3)

Solve for π1 and π2 using equations (2) and (3); equation (1) is redundant. Substituting π1 = 1 - π2 into (2) gives

    .65(1 - π2) + .80π2 = π2

This gives π2 = .76471, and substituting back into equation (3) gives π1 = .23529. Thus the expected number of accepted calls per year is (.76471)(52) = 39.76, or about 40.
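As a quick numerical check of the algebra above (added commentary, not in the original slides), repeated multiplication by the transition matrix converges to the same steady-state probabilities:

    import numpy as np

    # State order: (refuses, accepts), matching the transition matrix above.
    P = np.array([[0.35, 0.65],
                  [0.20, 0.80]])

    pi = np.array([1.0, 0.0])   # start in the "refuses" state
    for _ in range(50):         # repeated multiplication converges to the steady state
        pi = pi @ P
    print(pi)                   # approximately [0.23529, 0.76471]
    print(52 * pi[1])           # expected accepted calls per year, about 39.76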

What is the probability Shirley will accept Henry's next two calls if she does not accept his call this week?

Starting from a refused call this week, the two-week tree of outcomes is:

    P(refuse next week, refuse the following week)  = .35(.35) = .1225
    P(refuse next week, accept the following week)  = .35(.65) = .2275
    P(accept next week, refuse the following week)  = .65(.20) = .1300
    P(accept next week, accept the following week)  = .65(.80) = .5200

So the probability that Shirley accepts both of the next two calls, given that she refuses this week's call, is .5200.

What is the probability of Shirley accepting exactly one of Henry's next two calls if she accepts his call this week?

The probability of exactly one of the next two calls being accepted, given that this week's call is accepted, is found by adding the probabilities of (accept next week and refuse the following week) and (refuse next week and accept the following week):

    .80(.20) + .20(.65) = .16 + .13 = .29
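The two-week calculations can be checked by reading the one-step probabilities straight off the transition matrix (an added sketch; the state labels are just for readability):

    import numpy as np

    P = np.array([[0.35, 0.65],     # row 0: Shirley refuses this week
                  [0.20, 0.80]])    # row 1: Shirley accepts this week
    REFUSE, ACCEPT = 0, 1

    # P(accepts both of the next two calls | refuses this week)
    p_both = P[REFUSE, ACCEPT] * P[ACCEPT, ACCEPT]
    print(p_both)   # 0.65 * 0.80 = 0.52

    # P(exactly one of the next two calls accepted | accepts this week)
    p_one = P[ACCEPT, ACCEPT] * P[ACCEPT, REFUSE] + P[ACCEPT, REFUSE] * P[REFUSE, ACCEPT]
    print(p_one)    # 0.80*0.20 + 0.20*0.65 = 0.29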

The determinant, d = a a - a a = (.45)(.80) - (-.70)(-.10).10) =.29 Thus, NR Matrix The probabilities of eventually moving to the absorbing states from the nonabsorbing states are given by:.80/.29.10/.29 2.76.34 N = =.70/.29.45/.29 2.41 1.55 NR = 2.76.34.05.20.10 x 2.41 1.55 0.10 0 25 26 NR Matrix (continued) NR = Retire Quit Fired Same.14.59.28 Promotion.12.64.24 Absorbing States What is the probability of someone who was just promoted eventually retiring?... quitting?... being fired? 27 28 Absorbing States (continued) The answers are given by the bottom row of the NR matrix. The answers are therefore: Eventually Retiring =.12 Eventually Quitting =.64 Eventually Being Fired =.24 29 5