Chapter 10 Markov Chains and Transition Matrices

Finite Mathematics (Mat 119), Lecture Week 3
Dr. Firozzaman, Department of Mathematics and Statistics, Arizona State University

A Markov chain is a sequence of experiments, each of which results in one of a finite number of states, labeled 1, 2, 3, ..., m. If p_ij is the probability of moving from state i to state j, then the transition matrix P = [p_ij] of the Markov chain is the m x m matrix

    P = [ p_11       p_12       ...  p_1m       ]
        [ ...        ...        ...  ...        ]
        [ p_(m-1)1   p_(m-1)2   ...  p_(m-1)m   ]
        [ p_m1       p_m2       ...  p_mm       ]

Note that the entries of each row sum to 1, since row i lists the probabilities of moving from state i to every possible state.

An initial probability distribution is a row vector v^(0) = [p_1 p_2 ... p_m]. The probability distribution after k stages satisfies

    v^(k) = v^(k-1) P = v^(0) P^k

Example 1. Consider the transition matrix given below and answer the following questions:

                State 1   State 2
    State 1  [   1/3       2/3   ]
    State 2  [   1/4       3/4   ]

a) What does the entry 3/4 in this matrix represent?
b) Assuming that the system is initially in State 1, find the probability distribution one observation later. What is it two observations later?
c) Assuming that the system is initially in State 2, find the probability distribution one observation later. What is it two observations later?

Solution:
a) The transition probability from state 2 to state 2 is 3/4. In other words, it is the probability that an object in state 2 will remain in state 2.
b) v^(0) = [1 0]; v^(1) = v^(0) P = [1/3 2/3] one observation later; v^(2) = v^(1) P = [5/18 13/18] two observations later.
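The relation v^(k) = v^(k-1) P is easy to check by machine. Below is a minimal Python sketch (the helper name `step` is my own) that reproduces the distributions of Example 1 using exact fractions:

```python
from fractions import Fraction as F

def step(v, P):
    """One stage of a Markov chain: the row vector v times the matrix P."""
    return [sum(v[i] * P[i][j] for i in range(len(v)))
            for j in range(len(P[0]))]

# Transition matrix of Example 1.
P = [[F(1, 3), F(2, 3)],
     [F(1, 4), F(3, 4)]]

v0 = [F(1), F(0)]   # initially in State 1
v1 = step(v0, P)    # one observation later: [1/3, 2/3]
v2 = step(v1, P)    # two observations later: [5/18, 13/18]
print(v1, v2)
```

Applying `step` repeatedly is the same as multiplying v^(0) by successive powers of P.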

c) v^(0) = [0 1]; v^(1) = v^(0) P = [1/4 3/4] one observation later; v^(2) = v^(1) P = [13/48 35/48] two observations later.

Example 2. Given

    P = [ .7  .2  .1 ]
        [ .4  .1  .5 ]
        [ .6  .2  .2 ]

and v^(0) = [.3 .3 .4], determine v^(1) and v^(2).

Homework #3 (Example 3) is due on Wednesday, Jan 9, 2008.

Example 3. The city of Oak Lawn is experiencing a movement of its population to the suburbs. At present, 70% of the total population lives in the city and 30% lives in the suburbs, but each year 7% of the city residents move to the suburbs, while only 1% of the suburban residents move back to the city. Assuming that the total population (city and suburbs) remains constant, find the ratio of city residents to suburban residents after 5 years. (Answer: approximately 1 to 1.)

Example 4. Consider a maze consisting of four rooms as shown in the figure. The experiment consists of releasing a mouse in a particular room and observing its behavior. Consider the transition matrix

            1     2     3     4
    1  [   1/4   1/2   1/4    0  ]
    2  [   1/6   2/3    0    1/6 ]
    3  [   1/3    0    1/3   1/3 ]
    4  [    0    1/4   1/2   1/4 ]

If the initial probability distribution is v^(0) = [1/6 2/3 0 1/6], find the probability distribution after 5 observations.

Example 5. Consider a maze with 9 rooms. The experiment consists of releasing a mouse in any room. We assume that the following learning pattern exists: if the mouse is in room 1, 2, 3, 4, or 5, it remains in that room or moves to any room the maze permits, each with the same probability; if it is in room 8, it moves directly to room 9; if it is in room 6, 7, or 9, it remains in that room. The maze permits moves from 1 to 4 only; from 4 to 1 or to 7; from 2 to 5 only; from 5 to 2 or to 8; from 3 to 6 only; from 6 to 3 or to 9; from 8 to 9 only; and from 9 to 6 or to 8. Draw the maze and answer the following questions:

a) Explain why the above experiment is a Markov chain.
b) Construct the transition matrix P.
c) Find the probability distribution after the third stage, given the initial probability distribution v^(0) = [0 0 0 0 0 0 0 0 1].
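Example 3 fits the same framework: the states are "city" and "suburbs", and the yearly transition matrix is [[.93, .07], [.01, .99]]. A short sketch of the 5-year projection (variable names are my own):

```python
# Yearly transition matrix for Example 3: rows and columns are (city, suburbs).
P = [[0.93, 0.07],
     [0.01, 0.99]]

v = [0.70, 0.30]    # initial distribution: 70% city, 30% suburbs
for _ in range(5):  # advance five years: v <- v P each year
    v = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]

city, suburbs = v
print(city, suburbs, city / suburbs)   # the ratio comes out close to 1 to 1
```

After five multiplications the distribution is roughly [0.504, 0.496], confirming the stated answer of approximately 1 to 1.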

Regular Markov Chains: A Markov chain is said to be regular if, for some power of its transition matrix, all of the entries are positive. Remember that zero is not considered positive here: zero is neither positive nor negative.

Example 6.

    P = [ 1/2  1/2 ]
        [  1    0  ]

is regular, since

    P^2 = [ 3/4  1/4 ]
          [ 1/2  1/2 ]

Example 7.

    P = [  1    0  ]
        [ 3/4  1/4 ]

is not regular. Verify.

Example 8. Let

    P = [ 3/4    1/4   ]
        [ 7/20   13/20 ]

Find P^12. Consider t = [0.5833 0.4167] and show that t = tP. In this case we say that the row vector t is fixed by P.

Example 9. Find the fixed probability vector t of the Markov chain whose transition matrix is

    P = [ 0.93  0.07 ]
        [ 0.01  0.99 ]

Example 10. Is the transition matrix

    P = [ 1/3  1/3  1/3 ]
        [  0   1/2  1/2 ]
        [ 1/4  1/4  1/2 ]

regular?

Absorbing Markov Chains: In a Markov chain, p_ij denotes the probability of going from state E_i to state E_j. A state E_i is called absorbing if p_ii = 1. A Markov chain is called an absorbing Markov chain iff it contains at least one absorbing state and it is possible to go from every non-absorbing state to an absorbing state in one or more stages.

Example 11.

    P = [  1    0  ]
        [ 3/4  1/4 ]

is absorbing, since the first row corresponds to an absorbing state.

Subdividing the transition matrix of an absorbing Markov chain: the transition matrix of an absorbing Markov chain can be subdivided into the form

    P = [ I  O ]
        [ S  Q ]

where I is an identity matrix and O is a zero matrix.

Example 12. Subdivide the matrix as an absorbing Markov chain:

    P = [  1    0    0  ]
        [ 1/3  1/3  1/3 ]
        [  0    0    1  ]

Two-person games: Any conflict or competition between two people is called a two-person game.

Example 13. A coin is tossed: player I picks heads or tails, and player II attempts to guess the choice. Player I will pay player II $3 if both choose heads, and $2 if both choose tails. If player II guesses incorrectly, he will pay player I $5. Write the payoff matrix.

Strictly determined games: A game defined by a matrix is said to be strictly determined iff there is an entry of the matrix that is the smallest element in its row and also the largest in its column. This entry, called a saddle point, is the value of the game.

Example 14. Determine whether each game defined by the matrices below is strictly determined. If it is, find the value of the game.

    a)  P = [ 3  0  2  1 ]     b)  P = [ 2  0  1 ]     c)  P = [ 1  0  3 ]
            [ 2  3  0  1 ]             [ 3  6  0 ]             [ 1  2  1 ]
            [ 4  2  1  0 ]             [ 1  3  7 ]             [ 2  2  3 ]
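The saddle-point test in the definition above is mechanical: scan for an entry that is the minimum of its row and the maximum of its column. A sketch (the function name is mine, and the sample matrix is hypothetical, not one of the matrices of Example 14):

```python
def saddle_points(A):
    """Return (row, column, value) for every entry that is the smallest
    in its row and the largest in its column; the game is strictly
    determined iff this list is non-empty."""
    hits = []
    for i, row in enumerate(A):
        for j, a in enumerate(row):
            column = [A[k][j] for k in range(len(A))]
            if a == min(row) and a == max(column):
                hits.append((i, j, a))
    return hits

# Hypothetical 2 x 3 payoff matrix: the entry 2 at row 0, column 1 is
# the minimum of its row and the maximum of its column, so this game
# is strictly determined with value 2.
print(saddle_points([[4, 2, 3],
                     [1, 0, 2]]))   # [(0, 1, 2)]
```

An empty result means no saddle point exists and the players must mix their strategies, as in the next section.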

Example 16. Given that expected payoff E corresponding to the optimal strategies is a11a22 a12a a 21 11 a12 given by E = PAQ = ; A = a a 11 + a22 a12 a 21 21 a 22 1 1 Question: For the game matrix A = 2 3 find the expected payoff. Transition diagram: The graphical representation of a transition matrix is called the transition diagram. Example 17. An insurance company classifies drivers as low-risk if they are accidentfree for 1 year. Past records indicate that 98% of the drivers in the low-risk category (L) one year will remain in that category the next year, and 78% of the drivers who are not in the low-risk category (L ) one year will be in the low-risk category the next year. a) Draw a transition diagram b) Find the transition matrix P c) If 90% of the drivers in the community are in the low-risk category this year, what is the probability that a driver chosen at random from the community will be in the low-risk category next year? Year after next? b) L L L.98.02 P = L '.78.22 a).02.98.22 Transition Matrix L L c) v = [.90.10] then [ ].78 Transition diagram (1) (2) (1) v P v v P =.90.10 = [.96.04], = = [.972.028] Therefore 96% of the drivers will be in the low-risk category next year and 97.2% of the drivers will be in the low-risk category year after next year. Example 18. Part-time students admitted to an MBA program in a university are considered to be first-year students until they complete 15 credits successfully. Then they are classified as second-year students and may begin to take more advanced courses and to work on the thesis required for graduation. Past record indicate that at the end of each year 10% of the first-year students (F) drop out of the program (D) and 30% become second-year students (S). Also, 10% of the second-year students drop out of the program and graduate (G) 40% each year. Students that graduate or drop out never return to the program. a) Draw a transition diagram. b) Find the transition matrix P 5

c) What is the probability that a first-year student graduates within 4 years? Drops out within 4 years?

Example 19. Find the transition matrix from the given transition diagram (complete the transition diagram before finding the transition matrix):

[Transition diagram: three states A, B, C; the labeled arcs carry the probabilities .4, .2, .5, .7, .5, and .1, and the arcs marked "?" must be filled in so that the probabilities leaving each state sum to 1.]
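Completing a diagram like the one in Example 19 is an application of the row-sum rule: the probabilities leaving each state must add to 1, so each "?" equals 1 minus the labeled probabilities on that state's other arcs. A sketch with a hypothetical partial matrix (illustrative values, not the actual arcs of Example 19):

```python
def complete_rows(P):
    """Fill the single unknown entry (None) in each row of a partial
    transition matrix so that the row sums to 1."""
    completed = []
    for row in P:
        if row.count(None) > 1:
            raise ValueError("a row may have at most one unknown entry")
        known = sum(x for x in row if x is not None)
        completed.append([round(1 - known, 10) if x is None else x
                          for x in row])
    return completed

# Hypothetical partial matrix: each None plays the role of a "?" arc.
print(complete_rows([[0.4, 0.2, None],
                     [None, 0.7, 0.1],
                     [0.5, None, 0.5]]))
```

A row with more than one unknown arc cannot be completed from the row-sum rule alone, which is why the routine rejects that case.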