MATH 446/546 Test 2 Fall 2014

Note the problems are separated into two sections: a set for all students and an additional set for those taking the course at the 546 level. Please read and follow all of these directions. Answer all the questions appropriate to the level at which you are taking the course. You may use any calculator you have at your disposal (please, not your phone). I have allowed you access to the IUP Sage server if you would like to use it as a calculator. Show all of your work, including algebraic expressions, to get full credit. Note this specifically means that you should write down symbolic matrix formulations of the computations you are doing with Sage and Matlab. Write a sentence answer for each of the word problem parts stated. Answer all questions neatly. Work alone! NAME:

All Students

1. Given the probability transition matrix for three states:

p =
  0.10  0.50  0.40
  0.70  0.30  0.00
  0.20  0.05  0.75

(5 pts) Calculate the probability that, starting from state 1, the Markov chain ends in state 3 after 4 steps. Here we look at

p^4 =
  0.3249      0.24545     0.42965
  0.3038      0.2979      0.3983
  0.26461250  0.21018125  0.52520625

and use the entry in row 1, column 3. Thus the probability that, starting in state 1, we are in state 3 after 4 steps of the chain is 0.42965.

(5 pts) What is the steady state distribution for the given Markov chain? Here we are looking for a vector π such that π = πp. Thus, we solve (p^T - I)π^T = 0 with one of the rows replaced by the constraint that Σ_i π_i = 1. This gives:

  -0.90   0.70   0.20           0
   0.50  -0.70   0.05   π^T  =  0
   1.00   1.00   1.00           1

Solving for π yields the steady state distribution:

π = (0.291666666667, 0.241666666667, 0.466666666667)

Note this is very similar to the rows of p raised to a high power:

p^100 =
  0.291666666667  0.241666666667  0.466666666667
  0.291666666667  0.241666666667  0.466666666667
  0.291666666667  0.241666666667  0.466666666667
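Since Sage or MATLAB may be used as a calculator here, both computations above are easy to check with a short script. The following is a minimal sketch in Python with NumPy (an equivalent stand-in for the permitted tools, not the required method):

import numpy as np

# Transition matrix for Problem 1 (states 1, 2, 3).
p = np.array([[0.10, 0.50, 0.40],
              [0.70, 0.30, 0.00],
              [0.20, 0.05, 0.75]])

# Four-step transition probabilities; entry (0, 2) is the
# probability of reaching state 3 from state 1 in 4 steps.
p4 = np.linalg.matrix_power(p, 4)
print(p4[0, 2])                      # 0.42965

# Steady state: solve (p^T - I) x = 0 with the last equation
# replaced by the normalization constraint sum(x) = 1.
A = p.T - np.eye(3)
A[2, :] = 1.0
pi = np.linalg.solve(A, [0.0, 0.0, 1.0])
print(pi)                            # [0.29166667 0.24166667 0.46666667]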

2. (5 pts) A police car is on patrol in a neighborhood known for its gang activities. During a patrol, there is a 60% chance of responding in time to a location where help is needed; else, regular patrol will continue. Upon receiving a call, there is a 10% chance for cancellation (in which case normal patrol is resumed) and a 30% chance that the car is already responding to a previous call. When the police car arrives at the scene, there is a 10% chance that the instigators will have fled (in which case the car returns to patrol) and a 40% chance that apprehension is made immediately. Else, the officers will search the area. If apprehension occurs, there is a 60% chance of transporting the suspects to the police station; else, they are released and the car returns to patrol. Express the probabilistic activities of the police patrol in the form of a diagram, and then a probability transition matrix.

The trick to this problem is to consider all the possible states. The car can be in any of the following five places described in the problem:

Patrol-1, Call-2, Scene-3, Apprehension-4, Station-5

[Diagram: the five states above with arrows Patrol→Patrol (0.4), Patrol→Call (0.6), Call→Patrol (0.1, cancellation), Call→Call (0.3, already responding), Call→Scene (0.6), Scene→Patrol (0.1, fled), Scene→Scene (0.5, search), Scene→Apprehension (0.4), Apprehension→Patrol (0.4, released), Apprehension→Station (0.6), Station→Patrol (1.0).]

Using the numbering scheme allows for the following probability transition matrix to be created:

P =
  0.4  0.6  0.0  0.0  0.0
  0.1  0.3  0.6  0.0  0.0
  0.1  0.0  0.5  0.4  0.0
  0.4  0.0  0.0  0.0  0.6
  1.0  0.0  0.0  0.0  0.0
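As a quick sanity check on the matrix, every row must sum to 1, since each state's outgoing probabilities form a distribution. A minimal sketch in Python with NumPy:

import numpy as np

# Patrol chain: states Patrol-1, Call-2, Scene-3, Apprehension-4, Station-5.
P = np.array([[0.4, 0.6, 0.0, 0.0, 0.0],
              [0.1, 0.3, 0.6, 0.0, 0.0],
              [0.1, 0.0, 0.5, 0.4, 0.0],
              [0.4, 0.0, 0.0, 0.0, 0.6],
              [1.0, 0.0, 0.0, 0.0, 0.0]])

# Each row should sum to 1 for P to be a valid transition matrix.
print(np.allclose(P.sum(axis=1), 1.0))   # True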

3. Patients suffering from kidney failure can either get a transplant or undergo periodic dialysis. During any one year, 30% undergo cadaveric transplants and 10% receive living-donor kidneys. In the year following a transplant, 30% of the cadaveric transplants and 15% of living-donor recipients go back to dialysis. Death percentages among the two groups are 20% and 10%, respectively. Of those in the dialysis pool, 10% die, and of those who survive more than one year after a transplant, 5% die and 5% go back to dialysis.

[Diagram: states Dialysis: 1, Cadaveric Transplant: 2, Living-Donor Transplant: 3, Good Post-Transplant Year: 4, Death: 5, with arrows labeled by the transition probabilities given in the matrix below.]

(a) (6 pts) What is a valid transition matrix for the described Markov chain?

The transition matrix for the given situation, using the state numbers as given in the diagram, is as follows:

P =
  0.50  0.30  0.10  0.00  0.10
  0.30  0.00  0.00  0.50  0.20
  0.15  0.00  0.00  0.75  0.10
  0.05  0.00  0.00  0.90  0.05
  0.00  0.00  0.00  0.00  1.00

(b) (5 pts) What is the expected number of years a patient stays on dialysis?

The expected number of visits to state j starting in state i is given by element (i, j) of (I - N)^-1, where, given the probability transition matrix P, we obtain N by removing the absorbing state. This gives

N =
  0.50  0.30  0.10  0.00
  0.30  0.00  0.00  0.50
  0.15  0.00  0.00  0.75
  0.05  0.00  0.00  0.90

and

(I - N)^-1 =
  3.5398  1.0619  0.3540   7.9646
  1.9469  1.5841  0.1947   9.3805
  1.8584  0.5575  1.1858  11.6814
  1.7699  0.5310  0.1770  13.9823

Here it can be seen that a patient who starts on dialysis will stay on dialysis approximately 3.54 years.

(c) (5 pts) What is the longevity of a patient who starts on dialysis?

The expected time to absorption may be found by summing the values of (I - N)^-1 across a row; the sum of row i is the expected number of transitions until absorption when starting in state i. The row sums are

(12.9204, 13.1062, 15.2832, 16.4602)

so the expected number of years until someone who starts on dialysis dies is 12.92 years.

(d) (5 pts) What is the life expectancy of a patient who survives 1 year or longer after a transplant?

Here we can use the same work we had from the previous part. The row sum for row 4, corresponding to having a good post-transplant year, is approximately 16.46. Thus, we expect a patient who survives 1 year or longer after a transplant to live 16.46 more years.

(e) (5 pts) The expected number of years before an at-least-1-year transplant survivor goes back to dialysis or dies.

Here we can consider the matrix (I - N)^-1, which gives the expected time in state j starting in state i. The row 4, column 4 entry is the expected total time spent in a post-transplant good year prior to death, counting any returns to that state after going back on dialysis; specifically, we see it is expected that a person will spend 13.98 years in this particular state. To answer the question as worded, we need to consider going back on dialysis as an absorbing state and place it last in the transition matrix. Reordering the states as 2, 3, 4, 5, 1, the new probability transition matrix is:

P =
  0.00  0.00  0.50  0.20  0.30
  0.00  0.00  0.75  0.10  0.15
  0.00  0.00  0.90  0.05  0.05
  0.00  0.00  0.00  1.00  0.00
  0.00  0.00  0.00  0.00  1.00

which can be separated into the transient block

N =
  0.00  0.00  0.50
  0.00  0.00  0.75
  0.00  0.00  0.90

and the absorbing block

A =
  0.20  0.30
  0.10  0.15
  0.05  0.05

The expected times to absorption are given by the row sums of

(I - N)^-1 =
  1.0  0.0   5.0
  0.0  1.0   7.5
  0.0  0.0  10.0

namely (6.0, 8.5, 10.0), so starting in the state corresponding to a good post-transplant year we expect 10 years before death or dialysis.
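All of the quantities in parts (b) through (e) are fundamental-matrix computations of the form (I - N)^-1. A minimal sketch in Python with NumPy, using the same state ordering as above:

import numpy as np

# Kidney chain: Dialysis-1, Cadaveric-2, Living-Donor-3, Good Year-4, Death-5.
P = np.array([[0.50, 0.30, 0.10, 0.00, 0.10],
              [0.30, 0.00, 0.00, 0.50, 0.20],
              [0.15, 0.00, 0.00, 0.75, 0.10],
              [0.05, 0.00, 0.00, 0.90, 0.05],
              [0.00, 0.00, 0.00, 0.00, 1.00]])

N = P[:4, :4]                      # transient states (Death removed)
F = np.linalg.inv(np.eye(4) - N)   # fundamental matrix (I - N)^-1

print(F[0, 0])         # (b) ~3.54 expected years on dialysis
print(F.sum(axis=1))   # (c), (d) expected years until death by start state
print(F[3, 3])         # ~13.98 total years in the good post-transplant state

# (e) as worded: make Dialysis absorbing too (state order 2, 3, 4, 5, 1).
N2 = np.array([[0.0, 0.0, 0.50],
               [0.0, 0.0, 0.75],
               [0.0, 0.0, 0.90]])
F2 = np.linalg.inv(np.eye(3) - N2)
print(F2.sum(axis=1))  # [6.0, 8.5, 10.0] years until death or dialysis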

546 Additional Problems

1. (10 pts) A doubly stochastic Markov chain with m states in the state space S is one where

Σ_{i in S} p_ij = 1 for all j in S.

Show that the vector (1/m, 1/m, ..., 1/m) is a stationary distribution.

Here we aim to show that, for a matrix P with the conditions that Σ_{i in S} p_ij = 1 for all j in S (columns sum to 1) and Σ_{j in S} p_ij = 1 for all i in S (rows sum to 1), the vector π = (1/m, 1/m, ..., 1/m) satisfies π = πP.

Proof: Let π_i = 1/m for every i in S. Then consider the jth element of the row vector πP, given by

Σ_{i in S} π_i p_ij = Σ_{i in S} (1/m) p_ij = (1/m) Σ_{i in S} p_ij = (1/m)(1) = 1/m = π_j.

Thus πP = π, so π = (1/m, 1/m, ..., 1/m) is a stationary distribution, as was to be shown.
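The result can be illustrated numerically with a short Python/NumPy check, using a hypothetical 4-state doubly stochastic matrix (my example, not from the exam; any matrix whose rows and columns all sum to 1 works):

import numpy as np

# A hypothetical doubly stochastic matrix: every row AND column sums to 1.
P = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.2, 0.3, 0.4, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.4, 0.1, 0.2, 0.3]])

pi = np.full(4, 1 / 4)            # the uniform vector (1/m, ..., 1/m)
print(np.allclose(pi @ P, pi))    # True: pi P = pi, so pi is stationary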