MARKOV MODEL WITH COSTS
In Markov models we are often interested in cost calculations:

- inventory model: storage costs
- manpower planning model: salary costs
- machine reliability model: repair costs

We will look at three types of cost calculations:

1. Expected total costs over a finite horizon
2. Long-run expected costs per period
3. Expected total costs over an infinite horizon (possible if costs occur only in the transient states)
Expected total costs over a finite horizon

Assumption: every visit to state i incurs expected costs c(i). What are the expected total costs over the time span {0, 1, ..., n}?

Define g(i, n): the expected total costs over the time span {0, 1, ..., n} when starting in state i. Then we have

    g(i, n) = \sum_{j=1}^{N} m_{i,j}(n) c(j),

where m_{i,j}(n) are the occupancy times.
Using the vector notation

    g(n) = \begin{bmatrix} g(1, n) \\ g(2, n) \\ \vdots \\ g(N, n) \end{bmatrix}, \qquad c = \begin{bmatrix} c(1) \\ c(2) \\ \vdots \\ c(N) \end{bmatrix},

we have

    g(n) = M(n) c.

Conclusion: if we know the matrix of occupancy times M(n) and the cost vector c, we can calculate the expected total costs over a finite horizon.
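The two slides above can be sketched numerically: build M(n) as the partial sum of matrix powers of P, then multiply by the cost vector. The two-state chain and costs below are made-up illustration values, not the inventory example from these slides.

```python
# Finite-horizon expected total costs: g(n) = M(n) c, where
# M(n) = sum_{k=0}^{n} P^k is the matrix of occupancy times.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def occupancy_times(P, n):
    """M(n) = I + P + P^2 + ... + P^n."""
    N = len(P)
    M = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]  # I = P^0
    Pk = [row[:] for row in M]
    for _ in range(n):
        Pk = mat_mul(Pk, P)                                   # next power of P
        M = [[M[i][j] + Pk[i][j] for j in range(N)] for i in range(N)]
    return M

def finite_horizon_costs(P, c, n):
    """g(i, n) = sum_j m_{i,j}(n) c(j) for every starting state i."""
    M = occupancy_times(P, n)
    return [sum(M[i][j] * c[j] for j in range(len(c))) for i in range(len(c))]

# Hypothetical two-state chain with per-visit costs 10 and 20.
P = [[0.9, 0.1],
     [0.5, 0.5]]
c = [10.0, 20.0]
g = finite_horizon_costs(P, c, 10)
```

Since each row of M(n) sums to n + 1, every component of g lies between (n + 1) min c and (n + 1) max c, which gives a quick sanity check on the output.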
Example: inventory system (see Example 5.6)

State space: S = {2, 3, 4, 5}
Transition matrix: P = [matrix omitted]

Assume storage costs in state i of 50 i. Then the expected total storage costs over the time span {0, 1, 2, ..., 10} are

    g(10) = [2229…, …]
Long-run expected costs per period

If n → ∞, then often the total expected costs over the time span {0, 1, ..., n} also tend to infinity. Hence, in long-run cost calculations we usually look at the expected costs per period:

    g(i) = \lim_{n \to \infty} \frac{g(i, n)}{n + 1}.

Theorem: for an irreducible Markov chain with occupancy distribution \hat{\pi} we have

    g(i) = g = \sum_{j=1}^{N} \hat{\pi}_j c(j).

Remark: for an irreducible Markov chain the long-run expected costs per period do not depend on the initial state.
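The theorem can be checked numerically: approximate the occupancy distribution by iterating pi <- pi P and then form the weighted cost sum. The two-state chain below is a made-up aperiodic example (for a periodic chain one would average the iterates instead of taking the last one).

```python
# Long-run expected cost per period for an irreducible chain:
# g = sum_j pihat_j c(j), with pihat the occupancy distribution.

def stationary_distribution(P, iters=2000):
    """Approximate pihat by repeatedly applying pi <- pi P.
    Converges for irreducible aperiodic chains."""
    N = len(P)
    pi = [1.0 / N] * N  # arbitrary start; the limit does not depend on it
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(N)) for j in range(N)]
    return pi

def long_run_cost(P, c):
    """g = sum_j pihat_j c(j)."""
    pi = stationary_distribution(P)
    return sum(p * cj for p, cj in zip(pi, c))

# Hypothetical two-state chain: its occupancy distribution is [5/6, 1/6],
# so the long-run cost per period is 5/6 * 10 + 1/6 * 20 = 35/3.
P = [[0.9, 0.1],
     [0.5, 0.5]]
g = long_run_cost(P, [10.0, 20.0])
```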
Example: manpower planning model (see Example 5.8)

State space: S = {1, 2, 3, 4}
Transition matrix: P = [matrix omitted]

Assume salary costs in states 1, 2, 3, 4 of 400, 600, 800, 1000. The occupancy distribution is given by \hat{\pi} = [0.273, 0.455, 0.182, 0.091], and the long-run expected salary costs per week per employee are

    \sum_{j=1}^{N} \hat{\pi}_j c(j) = 618.8.
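The long-run cost here is a one-line computation from the occupancy distribution and the salary vector given above:

```python
# Long-run expected salary cost per week per employee, using the
# occupancy distribution and salary costs of the manpower example.
pihat = [0.273, 0.455, 0.182, 0.091]
salary = [400, 600, 800, 1000]
g = sum(p * c for p, c in zip(pihat, salary))  # 0.273*400 + ... = 618.8
```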
Expected total costs over an infinite horizon

If there are costs only in transient states, then the expected total costs do not tend to infinity as n → ∞. How do we calculate g(i) = \lim_{n \to \infty} g(i, n) in this case?

Let A be the set of states in the end classes, so S \setminus A is the set of transient states. By a first-step analysis we can show that

    g(i) = c(i) + \sum_{j \in S \setminus A} p_{i,j} g(j),  for i \in S \setminus A.

This system of equations has a unique solution.
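A sketch of the computation: restrict attention to the transient states, form I − Q (with Q the transient-to-transient block of P), and solve the linear system (I − Q) g = c. The one-state example at the end is hypothetical, not taken from these slides.

```python
# Expected total cost over an infinite horizon with costs only in the
# transient states: solve g(i) = c(i) + sum_{j transient} p_ij g(j),
# i.e. (I - Q) g = c with Q the transient-to-transient block.

def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting; A is a list of rows."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def infinite_horizon_costs(Q, c):
    """Solve (I - Q) g = c for the transient states."""
    n = len(Q)
    I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)]
                 for i in range(n)]
    return solve_linear(I_minus_Q, c)

# Hypothetical chain: one transient state that stays put w.p. 0.5, cost 10.
# Then g = 10 + 0.5 g, so g = 20.
g = infinite_horizon_costs([[0.5]], [10.0])
```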
Example: driver's license model

State space: {T_1, T_2, P_1, P_2, P_3, P_4, S, F}
Transition matrix: P = [matrix omitted]
Transition diagram: see separate slide.
What is the meaning of the different states?

T_1, T_2: theoretical exam (first and second attempt)
P_1, P_2, P_3, P_4: practical exam (1st, 2nd, 3rd, 4th attempt)
S: leaving with a driver's license (Success)
F: leaving without a driver's license (Failure)

Costs: 45 euro per theoretical exam, 90 euro per practical exam.

Question: what are the expected total costs over an infinite horizon?
Let g(i) be the expected total costs over an infinite horizon starting in state i. A first-step analysis gives one equation per transient state (with p_{i,j} the transition probabilities of the matrix above):

    g(T_1) = 45 + p_{T_1,T_2} g(T_2) + p_{T_1,P_1} g(P_1),
    g(T_2) = 45 + p_{T_2,P_1} g(P_1),
    g(P_1) = 90 + p_{P_1,P_2} g(P_2),
    g(P_2) = 90 + p_{P_2,P_3} g(P_3),
    g(P_3) = 90 + p_{P_3,P_4} g(P_4),
    g(P_4) = 90 + p_{P_4,P_4} g(P_4).

Solving this system backwards gives

    g(P_4) = 120, g(P_3) = 156, g(P_2) = 207, g(P_1) = 276.3,

and then g(T_2) and g(T_1) follow by substitution.
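The back substitution is easy to carry out in code. The transition probabilities themselves are not stated on these slides, so the failure probabilities below are assumptions, chosen so that the arithmetic reproduces the cost values quoted above; only g(P_4) is taken directly from the slide.

```python
# First-step analysis for the driver's licence example.
# The failure probabilities below are ASSUMED values (the transition
# matrix is not given here); they are chosen so the back substitution
# reproduces g(P3)=156, g(P2)=207, g(P1)=276.3 from the slide.
fail = {"P1": 0.9, "P2": 0.75, "P3": 0.55}  # assumed failure probabilities

g = {}
g["P4"] = 120.0                          # value reported on the slide
g["P3"] = 90 + fail["P3"] * g["P4"]      # g(P3) = 90 + p_{P3,P4} g(P4)
g["P2"] = 90 + fail["P2"] * g["P3"]      # g(P2) = 90 + p_{P2,P3} g(P3)
g["P1"] = 90 + fail["P1"] * g["P2"]      # g(P1) = 90 + p_{P1,P2} g(P2)
```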
First passage times

(Time until the Markov chain first enters a certain set of states.)

In the preceding slides we have twice seen that a quantity can be calculated by a so-called "first-step analysis": derive a system of equations by considering what can happen to the Markov chain in the first period.

- Calculation of the probability that a reducible Markov chain will end in a certain end class.
- Calculation of the expected total costs over an infinite horizon for a reducible Markov chain with costs only in the transient states.

The same technique can be used to calculate expected first passage times of a Markov chain.
Let A be a subset of the state space S and define m_i(A) as the expected time until the Markov chain first enters the subset A when it starts in state i. Then of course we have m_i(A) = 0 if i \in A, and furthermore

    m_i(A) = 1 + \sum_{j \in S \setminus A} p_{i,j} m_j(A),  for i \notin A.

This again gives a system of equations from which we can obtain the quantities m_i(A) for i \notin A.
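The same linear-system approach as for infinite-horizon costs applies, now with right-hand side 1. A minimal sketch with a hypothetical 3-state chain (states 0 and 1 transient, state 2 the target set):

```python
# Expected first passage times into a set A: for i not in A,
# m_i(A) = 1 + sum_{j not in A} p_ij m_j(A); solve (I - Q) m = 1.

def first_passage_times(P, A):
    """P: full transition matrix; A: set of target state indices.
    Returns {i: m_i(A)} for the states outside A (m_i(A) = 0 inside A)."""
    outside = [i for i in range(len(P)) if i not in A]
    idx = {s: k for k, s in enumerate(outside)}
    n = len(outside)
    # Augmented system (I - Q | 1), with Q restricted to states outside A.
    aug = [[(1.0 if i == j else 0.0) - P[outside[i]][outside[j]]
            for j in range(n)] + [1.0] for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return {s: aug[idx[s]][n] / aug[idx[s]][idx[s]] for s in outside}

# Hypothetical 3-state chain; how long until state 2 is first entered?
# m_1 = 1 + 0.5 m_1  =>  m_1 = 2;  m_0 = 1 + 0.5 m_0 + 0.5 m_1  =>  m_0 = 4.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
m = first_passage_times(P, {2})
```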
Example: manpower planning model

Calculate the expected time an employee works for the company.

Markov model for one specific employee:
State space: S = {1, 2, 3, 4, left company}
Transition matrix: P = [matrix omitted]
Let A = {left company}, so S \setminus A = {1, 2, 3, 4}. The quantities m_i(A) satisfy the equations

    m_i(A) = 1 + \sum_{j=1}^{4} p_{i,j} m_j(A),  i = 1, 2, 3, 4.

The solution of this system of equations is given by m_4(A) = 100, m_3(A) = 60, m_2(A) = 88.89, and the corresponding value of m_1(A). Hence, the expected time an employee works for the company is approximately 1.4 years (about 73 weeks).
COHORT MODELS

Discrete-time Markov chains are often used to study the behaviour of a group of persons or objects. Such systems are often called cohort models. An example of a cohort model is the manpower planning model.

In the manpower planning model we have assumed so far that the total number of employees is constant: each time an employee leaves the company, he or she is instantaneously replaced by a new employee. Using the theory of Markov chains we were able to determine the short-term and long-term behaviour of the number of employees in the different categories.
Example: manpower planning model

Transition matrix: P = [matrix omitted]

Assume we have 100 employees, and at the beginning of week 1, 50 employees belong to category 1, 25 to category 2, 15 to category 3 and 10 to category 4.

- How many employees do you expect in the different categories at the beginning of weeks 5, 11 and 100?
- How many employees do you expect in the different categories in the long run?
We have

    a^{(0)} = [0.50, 0.25, 0.15, 0.10],

and hence

    a^{(4)} = a^{(0)} P^4 = [0.466, 0.289, 0.146, 0.099],
    a^{(10)} = a^{(0)} P^{10} = [0.424, 0.336, 0.143, 0.098],
    a^{(99)} = a^{(0)} P^{99} = [0.274, 0.461, 0.177, 0.088].

In this way we can calculate the expected number of employees in the different categories at the beginning of weeks 5, 11 and 100.

The unique normalized solution of the system of equations \pi = \pi P is given by

    \pi = [0.273, 0.454, 0.182, 0.091].

Hence, in the long run we expect approximately 27 employees in category 1, 45 in category 2, 18 in category 3 and 9 in category 4.
However, in many applications it is not realistic to assume that the number of persons in the group is constant over time. The departures of persons from the group on the one hand and the arrivals of new persons into the group on the other hand can be independent processes.

Example: the number of persons holding a car insurance policy at a certain insurance company. (The different categories here represent the different levels in the no-claims bonus system.)

How do we calculate in such cases quantities like

- the expected number of persons in the group at a certain time instant and the division of the persons within the group over the different levels (short-term behaviour)?
- the expected number of persons in the group in the long run and the division of the persons within the group over the different levels (long-term behaviour)?
Assume we have a group of persons, where the behaviour of each person can be described by a Markov chain with state space S = {0, 1, 2, ..., N} and transition matrix P. State 0 represents the situation that the person has left the system.

Let Q be the part of the transition matrix corresponding to transitions from states {1, 2, ..., N} to states {1, 2, ..., N}. Here Q is a sub-stochastic matrix, i.e., a matrix with q_{i,j} \ge 0 for all i and j, and \sum_{j=1}^{N} q_{i,j} \le 1 for all i.
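The sub-stochasticity condition is easy to verify mechanically; the row deficit 1 − \sum_j q_{i,j} is exactly the probability of leaving the group (moving to state 0) from state i. The matrix below is a made-up example, not the manpower Q.

```python
# A sub-stochastic matrix has nonnegative entries and row sums at most 1;
# the deficit 1 - sum_j q_ij of row i is the probability of leaving
# the group (moving to state 0) from state i.

def is_substochastic(Q, tol=1e-12):
    """Check q_ij >= 0 and row sums <= 1, up to a small tolerance."""
    return (all(q >= -tol for row in Q for q in row)
            and all(sum(row) <= 1.0 + tol for row in Q))

# Hypothetical Q: from each state there is a 0.1 chance of leaving.
Q = [[0.6, 0.3],
     [0.2, 0.7]]
```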
Example: manpower planning model (model for one specific employee)

State space: S = {0, 1, 2, 3, 4}
Transition matrix: P = [matrix omitted], with Q the submatrix for transitions within {1, 2, 3, 4}.
Notation: short-term behaviour

r_i^{(n)}: expected number of new persons, called recruits, entering the group from outside at time n in state i.
s_i^{(n)}: expected total number of persons in the group at time n in state i.

If we denote the corresponding row vectors by

    r^{(n)} = [r_1^{(n)}, r_2^{(n)}, ..., r_N^{(n)}],  s^{(n)} = [s_1^{(n)}, s_2^{(n)}, ..., s_N^{(n)}],

then we have

    s^{(n)} = r^{(n)} + s^{(n-1)} Q.

Conclusion: if we know the initial vector s^{(0)} and the vectors containing the expected numbers of recruits in the different states, r^{(1)}, r^{(2)}, r^{(3)}, ..., we can calculate the vectors s^{(1)}, s^{(2)}, s^{(3)}, ...
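The recursion is straightforward to implement as a vector-matrix product per period. The Q, s^{(0)} and r below are made-up illustration values, not those of the example on the next slide.

```python
# Cohort recursion: s(n) = r(n) + s(n-1) Q, with s and r row vectors.

def step(s_prev, r, Q):
    """One period of the cohort recursion."""
    N = len(s_prev)
    sQ = [sum(s_prev[i] * Q[i][j] for i in range(N)) for j in range(N)]
    return [r_j + x for r_j, x in zip(r, sQ)]

# Hypothetical Q and constant recruitment of 5 persons into state 1.
Q = [[0.6, 0.3],
     [0.2, 0.7]]
s = [10.0, 10.0]   # s(0)
r = [5.0, 0.0]
s = step(s, r, Q)  # s(1)
```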
Example: Markov chain with state space S = {0, 1, 2, 3, 4} and transition matrix P = [matrix omitted]. Furthermore assume that s^{(0)} = [10, 10, 10, 10] and r^{(n)} = [10, 0, 0, 0] for all n. Then

    s^{(1)} = [10, 0, 0, 0] + [10, 10, 10, 10] Q = [16, 9, 9.5, 11].
Similarly,

    s^{(2)} = [10, 0, 0, 0] + [16, 9, 9.5, 11] Q = [19.6, 9.5, 8.9, 11.8],
    s^{(3)} = [10, 0, 0, 0] + [19.6, 9.5, 8.9, 11.8] Q = [21.76, 10.57, 8.61, 12.40],

and so on.
Long-term behaviour

In the case that the expected number of recruits is constant over time, i.e., r^{(n)} = r for all n, we can also determine the long-run expected number of persons in the group in the different states. In this case s = \lim_{n \to \infty} s^{(n)} satisfies

    s = r + s Q,

and hence

    s = r (I - Q)^{-1},

where I is the identity matrix.

Remark: the existence of the inverse of the matrix I - Q follows from the fact that Q is sub-stochastic with every person eventually leaving the group, so that Q^n → 0 and (I - Q)^{-1} = I + Q + Q^2 + ⋯ converges.
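A sketch of the long-run computation: rather than inverting I − Q explicitly, solve s (I − Q) = r, i.e. the transposed system (I − Q)^T s^T = r^T. The Q and r are the same made-up values as in the earlier sketch, so the result is the fixed point the recursion s <- r + s Q would converge to.

```python
# Long-run cohort sizes with constant recruitment: s = r (I - Q)^{-1},
# obtained here by solving the transposed system (I - Q)^T s^T = r^T.

def long_run_sizes(Q, r):
    """Solve s (I - Q) = r for the row vector s."""
    n = len(Q)
    # Rows of (I - Q) transposed, augmented with r.
    aug = [[(1.0 if i == j else 0.0) - Q[j][i] for j in range(n)] + [r[i]]
           for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(aug[row][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for row in range(n):
            if row != col and aug[row][col] != 0.0:
                f = aug[row][col] / aug[col][col]
                aug[row] = [x - f * y for x, y in zip(aug[row], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

# Hypothetical Q and recruitment (same made-up values as before).
Q = [[0.6, 0.3],
     [0.2, 0.7]]
r = [5.0, 0.0]
s = long_run_sizes(Q, r)
```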
Example (continued): in the example we have r^{(n)} = r = [10, 0, 0, 0] for all n, and with I - Q as above,

    s = \lim_{n \to \infty} s^{(n)} = [10, 0, 0, 0] (I - Q)^{-1} = [25, 16.67, 13.89, 27.78] \approx [25, 17, 14, 28].
More informationSenior Math Circles November 19, 2008 Probability II
University of Waterloo Faculty of Mathematics Centre for Education in Mathematics and Computing Senior Math Circles November 9, 2008 Probability II Probability Counting There are many situations where
More informationExample: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected
4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X
More informationProbabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford
Probabilistic Model Checking Michaelmas Term 20 Dr. Dave Parker Department of Computer Science University of Oxford Next few lectures Today: Discrete-time Markov chains (continued) Mon 2pm: Probabilistic
More informationContinuous Time Markov Chain Examples
Continuous Markov Chain Examples Example Consider a continuous time Markov chain on S {,, } The Markov chain is a model that describes the current status of a match between two particular contestants:
More informationChi-Squared Tests. Semester 1. Chi-Squared Tests
Semester 1 Goodness of Fit Up to now, we have tested hypotheses concerning the values of population parameters such as the population mean or proportion. We have not considered testing hypotheses about
More informationMt. Douglas Secondary
Foundations of Math 11 Calculator Usage 207 HOW TO USE TI-83, TI-83 PLUS, TI-84 PLUS CALCULATORS FOR STATISTICS CALCULATIONS shows it is an actual calculator key to press 1. Using LISTS to Calculate Mean,
More informationGoal Programming. Note: See problem for the problem statement. We assume that part-time (fractional) workers are allowed.
Goal Programming Note: See problem 13.13 for the problem statement. We assume that part-time (fractional) workers are allowed. Example 1: Preemptive Goal Programming The problem is currently stated as
More informationMarkov Repairable Systems with History-Dependent Up and Down States
Markov Repairable Systems with History-Dependent Up and Down States Lirong Cui School of Management & Economics Beijing Institute of Technology Beijing 0008, P.R. China lirongcui@bit.edu.cn Haijun Li Department
More informationJRF (Quality, Reliability and Operations Research): 2013 INDIAN STATISTICAL INSTITUTE INSTRUCTIONS
JRF (Quality, Reliability and Operations Research): 2013 INDIAN STATISTICAL INSTITUTE INSTRUCTIONS The test is divided into two sessions (i) Forenoon session and (ii) Afternoon session. Each session is
More informationMATH 446/546 Test 2 Fall 2014
MATH 446/546 Test 2 Fall 204 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 546 level. Please read and follow all of these
More informationOn asymptotic behavior of a finite Markov chain
1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong
More information