Answers to selected exercises

A First Course in Stochastic Models, Henk C. Tijms

1.2 (a) Let W be the waiting time given the number of passengers that have already arrived. (b) Distinguish the relevant cases. (c) Compute the corresponding long-run fraction. (d) Let W be the unconditional waiting time and proceed as in (a).

1.3 (a) Let W be the waiting time given the number of passengers that have already arrived, and regard the arrivals as a Poisson process in which a passenger arrives at every second event. (b) Same as 1.2(b). (c) Same as 1.2(c).

1.4 The stated identity holds because the two expectations involved coincide, as a direct computation shows.

1.5 Let N be the number of cars that pass before you can cross. Since the gaps between successive cars are independent and identically distributed, N has a geometric distribution whose success probability equals the probability that a single gap is long enough to cross.

1.6 Consider the time elapsed since the last arrival before or at time t. (a), (b) By the lack of memory of the Poisson process, this time is distributed as the minimum of t and an exponentially distributed interarrival time.

1.7 Let T be the travel time of a given fast car and compute the required expectations.

1.8 Condition on the two quantities indicated in the exercise and compute the resulting expression.

1.9 (a) The merged process is again a Poisson process, with rate equal to the sum of the rates of the two component processes. (b) Let S be the service time of the next request.

1.10 Let N be the number of claims and condition on N.
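The geometric form of the answer to Exercise 1.5 is easy to check numerically. The sketch below is not part of the original solutions: it assumes, purely for illustration, that cars pass according to a Poisson process with rate lam cars per second and that a gap of at least c seconds is needed to cross, and it compares the simulated mean number of cars that pass before crossing with the geometric mean (1 - p)/p, where p = exp(-lam*c).

    import random
    import math

    def cars_before_crossing(lam, c, rng):
        """Simulate one crossing attempt: count cars until the first gap >= c."""
        n = 0
        while rng.expovariate(lam) < c:  # exponential gap between successive cars
            n += 1                       # another car passes before we can cross
        return n

    def estimate(lam=0.2, c=8.0, runs=100_000, seed=1):
        rng = random.Random(seed)
        sim_mean = sum(cars_before_crossing(lam, c, rng) for _ in range(runs)) / runs
        p = math.exp(-lam * c)           # probability that a single gap is long enough
        print(f"simulated E[N] = {sim_mean:.3f}, geometric (1-p)/p = {(1 - p) / p:.3f}")

    if __name__ == "__main__":
        estimate()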

2.1 If N(t) denotes the number of lamps replaced up to time t, then the expected number of street lamps used in [0, t] is expressed in terms of E[N(t)], the renewal function of the replacement process.

2.2 (a) Let N be the number of weeks until the accumulated waste first exceeds the critical level; the expected number of weeks needed is E[N]. (b) Follows from (a).

2.3 The mean and standard deviation are 7.78 and 4.78 minutes, respectively.

2.4 (a) Use the stated inequalities to bound the quantity of interest. (b) Follows immediately from (a).

2.5 (a) Express the quantity in terms of a Poisson process. (b) Consider the excess (residual life) variable and use the memoryless property: the excess is Erlang distributed precisely when the corresponding number of Poisson arrivals has occurred, which gives the formula.

2.6 (a) The long-run fraction of time the process spends in the given state equals the limiting probability of being in that state.
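The first-passage quantity E[N] in Exercise 2.2 can be estimated by straightforward simulation. The sketch below does not use the book's data: the weekly waste distribution (uniform between 10 and 30 units) and the limit of 100 units are placeholder assumptions, chosen only to illustrate how the expected number of weeks until the cumulative amount exceeds the limit is obtained.

    import random

    def weeks_to_exceed(limit, rng):
        """Count the weeks until the cumulative waste first exceeds `limit`."""
        total, weeks = 0.0, 0
        while total <= limit:
            total += rng.uniform(10.0, 30.0)  # assumed weekly waste amount
            weeks += 1
        return weeks

    def estimate(limit=100.0, runs=50_000, seed=2):
        rng = random.Random(seed)
        mean_weeks = sum(weeks_to_exceed(limit, rng) for _ in range(runs)) / runs
        print(f"estimated E[N] for limit {limit}: {mean_weeks:.2f} weeks")

    if __name__ == "__main__":
        estimate()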

3.1 Let the state at the beginning of the n-th day be one of: both parts have failed; one part is working; both parts are working. Write down the corresponding 3 x 3 transition matrix.

3.2 Let the state at day n be one of: both machines work; one machine works and the other is in its first day of repair; one machine works and the other is in its second day of repair; one machine is in its first day of repair and the other in its second day of repair. Write down the corresponding transition matrix.

3.3 (a) Let the state be the number of containers present at day n; this is a Markov chain. Write down the transition probabilities; they are zero for all other transitions. (b) Call a container type-i if its residency time is i days and let the state record the number of containers of each type at day n. This again gives a Markov chain, with transition probabilities zero outside the indicated transitions.

3.4 Define a Markov chain with states: no game played yet; one team won the current game but not the previous one; one team won the last two games but not the one before that; one team won the last three games. Let state 3 be absorbing. The required probability follows from the transition matrix of this chain.

3.5 (a) Let p be the probability that team A wins a game. Use the states: no game played yet; A has won the last i games but not the one before; B has won the last i games but not the one before; together with two absorbing states for an overall win by A or by B. All other transition probabilities are 0. The probability that A wins the match is obtained by solving the resulting system of linear equations. (b) Let q be the probability of a draw. The draw probability is then added to the appropriate transition probabilities, the win probability is replaced accordingly everywhere, and the corresponding term is added to the right-hand side of every equation in the linear system.

3.6 Define the states: 0 if we are at the start or the last toss was tails; i if the last i tosses were heads but not the toss before that. Let state 3 be absorbing. Let E_i be the expected number of tosses needed to reach state 3 from state i; these satisfy a system of linear equations. Solving it gives the expected number of tosses, and since the gain is 12 the game is not fair.

3.7 Define states 0, 1, ..., 6, where state i means that i of the six outcomes have appeared so far. State 6 is absorbing. The transition probabilities are p(i, i) = i/6 and p(i, i+1) = (6 - i)/6; all other transition probabilities are 0.

3.8 Let the state be Joe's capital after n bets, with state space {0, ..., 200}, where state 200 means 200 or more. Let 0 and 200 be absorbing states. Let E_i be the expected number of bets until absorption in state 0 or 200 when starting from state i; the E_i satisfy a system of linear equations, and solving it gives the expected number of bets when starting with 100.
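Exercise 3.8 reduces to the expected absorption time of a random walk with absorbing barriers, found from a linear system. The sketch below is not taken from the book: the bet size (10), the barriers 0 and 200, the starting capital 100, and the win probability p are placeholder assumptions, used only to show how the system E_i = 1 + p*E_{i+1} + (1-p)*E_{i-1} with E_0 = E_n = 0 is set up and solved.

    def expected_bets(p=0.5, bet=10, goal=200, start=100):
        """Expected number of bets until the capital hits 0 or `goal`.

        Interior equations: E_i = 1 + p*E_{i+1} + (1-p)*E_{i-1},
        absorbing boundaries E_0 = E_n = 0 (capital in units of `bet`).
        The tridiagonal system is solved with the Thomas algorithm.
        """
        q = 1.0 - p
        n = goal // bet                       # number of capital levels above 0
        size = n - 1                          # unknowns E_1 .. E_{n-1}
        sub, diag, sup = [-q] * size, [1.0] * size, [-p] * size
        rhs = [1.0] * size
        # forward sweep
        for i in range(1, size):
            w = sub[i] / diag[i - 1]
            diag[i] -= w * sup[i - 1]
            rhs[i] -= w * rhs[i - 1]
        # back substitution
        E = [0.0] * size
        E[-1] = rhs[-1] / diag[-1]
        for i in range(size - 2, -1, -1):
            E[i] = (rhs[i] - sup[i] * E[i + 1]) / diag[i]
        return E[start // bet - 1]

    print(expected_bets(p=0.5))    # fair game: 100 expected bets from capital 100
    print(expected_bets(p=18/38))  # a roulette-style bias (assumed, for comparison)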

3.10 A uniform distribution means that every state has the same equilibrium probability. Substituting the uniform distribution into the equilibrium equations shows that it is the equilibrium distribution exactly when every column of the transition matrix sums to 1, which is the case here.

3.11 Model the game as a Markov chain on a small state space and write down its transition matrix. Computing the equilibrium distribution and the long-run net amount won per play shows that the long-run net gain is zero. The game is fair!

3.12 The long-run average cost equals 17.77.

3.13 (a) The long-run fraction of games won is 0.4957. (c) The long-run fraction of games won is 0.5073.
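Exercises 3.10 through 3.12 all come down to computing an equilibrium distribution from pi = pi*P and using it for a long-run average. The sketch below is generic and not tied to the book's data: the 3-state transition matrix and the cost vector are made-up numbers, used only to show how the stationary distribution is found and how a long-run average cost and the doubly-stochastic criterion of 3.10 follow from it.

    def equilibrium(P, iterations=10_000):
        """Equilibrium distribution of an irreducible, aperiodic chain by the power method."""
        n = len(P)
        pi = [1.0 / n] * n
        for _ in range(iterations):
            pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        return pi

    # Made-up 3-state transition matrix and per-state cost (illustrative only).
    P = [[0.2, 0.5, 0.3],
         [0.4, 0.4, 0.2],
         [0.1, 0.3, 0.6]]
    cost = [5.0, 12.0, 30.0]

    pi = equilibrium(P)
    long_run_cost = sum(pi[j] * cost[j] for j in range(len(P)))
    print("equilibrium distribution:", [round(x, 4) for x in pi])
    print("long-run average cost per day:", round(long_run_cost, 4))

    # Criterion of Exercise 3.10: the uniform distribution is stationary
    # exactly when P is doubly stochastic (every column sums to 1).
    doubly = all(abs(sum(P[i][j] for i in range(len(P))) - 1.0) < 1e-12 for j in range(len(P)))
    print("columns sum to 1 (uniform distribution stationary):", doubly)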

3.15 (a) Consider the Markov chain whose state records, for each of the two components, its age in days at the beginning of day n, with a special value denoting a failed component. The transition probabilities are written in terms of the probability that a component of age a fails during the next day; the expressions for the two components are analogous, with the roles of the components interchanged. (b) Let c1 and c2 denote the costs of replacing one and two components, respectively. The long-run average cost per day then follows from the equilibrium distribution of this chain.

3.17 Let the state record the number of messages present at time n together with the auxiliary variable defined in the exercise; the resulting pair process is a Markov chain. (b) The long-run fraction of lost messages follows from its equilibrium distribution.

3.19 Recall that an Erlang-distributed amount of work can be seen as the sum of independent exponential subtasks. Let the state be the number of subtasks present just before the embedded time points; this gives a Markov chain whose transition probabilities follow from this decomposition.

4.1 Let the state be the number of waiting passengers at the indicated time points and write down the state space.

4.2 Define the auxiliary variable as indicated; the resulting two-component process is a Markov chain.

4.3 Let {X(t)} be a continuous-time Markov chain with four states: both stations free; station 1 free and station 2 busy; both stations busy; station 1 blocked and station 2 busy.

4.4 (a) Let the state be the number of cars present at time t. (b) The equilibrium probabilities follow from the equilibrium equations of this chain.
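Several of the Chapter 4 answers (4.4(b), 4.5(b) and onwards) require solving the equilibrium equations of a continuous-time Markov chain, that is, pi*Q = 0 together with the normalization sum(pi) = 1. The sketch below is generic: the four-state generator matrix is made up (loosely inspired by a two-machine setting such as 4.5, with assumed rates), and it shows how one balance equation is replaced by the normalization condition before solving.

    def ctmc_equilibrium(Q):
        """Solve pi*Q = 0 with sum(pi) = 1 by Gaussian elimination.

        One balance equation is redundant, so the last equation is replaced
        by the normalization condition.
        """
        n = len(Q)
        A = [[Q[j][i] for j in range(n)] for i in range(n)]   # transposed balance equations
        b = [0.0] * n
        A[n - 1] = [1.0] * n                                   # normalization row
        b[n - 1] = 1.0
        # Gaussian elimination with partial pivoting.
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, n):
                f = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        pi = [0.0] * n
        for r in range(n - 1, -1, -1):
            pi[r] = (b[r] - sum(A[r][c] * pi[c] for c in range(r + 1, n))) / A[r][r]
        return pi

    # Made-up generator for states 0..3 (e.g. idle / fast busy / slow busy / both busy).
    Q = [[-3.0, 2.0, 1.0, 0.0],
         [4.0, -7.0, 0.0, 3.0],
         [2.0, 0.0, -4.0, 2.0],
         [0.0, 2.0, 4.0, -6.0]]
    pi = ctmc_equilibrium(Q)
    print("equilibrium probabilities:", [round(p, 4) for p in pi])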

4.5 Let the state of the production hall at time t be one of: both machines idle; the fast machine busy and the slow machine idle; the fast machine idle and the slow machine busy; both machines busy. (b) Write down the equilibrium equations for this chain. The long-run fraction of time that the fast machine is used, the corresponding fraction for the slow machine, and the long-run fraction of incoming orders that are lost all follow from the equilibrium probabilities.

4.6 Let the state describe the system at time t; there are 13 states: a taxi is waiting but no customers are present, or i customers are waiting at the station, no taxi is there, and the taxi took j customers on its last trip. The equilibrium equations have the usual rate-out equals rate-in form. The long-run fraction of taxis that have to wait and the long-run fraction of customers that potentially go elsewhere follow from the equilibrium probabilities.

4.7 Let the state be the number of trailers present at time t and write down the equilibrium equations of the resulting chain.

4.8 Let the state be the number of messages in the system at time t, supplemented as indicated. From the equilibrium probabilities one obtains the long-run fraction of time the channel is idle, the long-run fraction of messages that are blocked, and the long-run average number of messages waiting to be transmitted.

4.9 Let the state be the number of customers present at time t. Equating the rate out of the set of states {0, ..., i} to the rate into that set gives a recurrence relation for the equilibrium probabilities, from which they follow explicitly. The long-run fraction of persons that join the queue and the long-run average number of persons served per time unit follow from these probabilities.

4.10 (a) Write down the equilibrium equations; the long-run fraction of potential customers lost follows from the equilibrium probabilities. (b) Now take as states: no sheroot is in and no passenger is waiting; and i passengers are waiting while the sheroot is in. Writing down the equilibrium equations for this description gives the new long-run fraction of potential customers lost.
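The cut argument of Exercise 4.9 (equating the rate out of {0, ..., i} to the rate into it) yields the standard birth-death recursion p_{i+1} = (lambda_i / mu_{i+1}) * p_i. The sketch below implements that recursion for a hypothetical single-server queue with balking; the arrival rate, service rate, capacity and balking pattern are assumptions, not the book's data.

    def birth_death_equilibrium(birth, death, capacity):
        """Equilibrium probabilities of a finite birth-death chain via the cut equations.

        birth(i) : transition rate from state i to i+1
        death(i) : transition rate from state i to i-1
        """
        p = [1.0]                               # unnormalized, p[0] = 1
        for i in range(capacity):
            p.append(p[i] * birth(i) / death(i + 1))
        total = sum(p)
        return [x / total for x in p]

    # Hypothetical data: arrivals at rate 4, but only half of the arrivals join
    # when 3 or more customers are present; one server working at rate 5; capacity 8.
    lam, mu, N = 4.0, 5.0, 8
    birth = lambda i: lam if i < 3 else 0.5 * lam
    death = lambda i: mu

    p = birth_death_equilibrium(birth, death, N)
    # By PASTA, the fraction of arriving customers that join weights each state
    # by its joining probability; arrivals finding N customers are lost.
    frac_joining = sum(p[i] * (birth(i) / lam) for i in range(N))
    throughput = mu * (1.0 - p[0])              # service completions per time unit
    print("fraction of arriving customers that join:", round(frac_joining, 4))
    print("long-run service completions per time unit:", round(throughput, 4))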

4.12 Specify the state space and write down the equilibrium equations. The long-run average stock follows from the equilibrium probabilities, and the long-run average number of orders per time unit equals the long-run average number of transitions out of state 1.

4.16 Construct a modified continuous-time Markov chain in which the target state is made absorbing: the chain is the same except that the leaving rates of that state are set to zero. The required transient probability is then found with the uniformization method.

5.1 (a) For the given state space, the equilibrium probabilities satisfy a recursive relation, which gives the distribution in closed form. (b) There is an error in the book here; the stated answer needs correction.
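The uniformization method mentioned in 4.16 computes transient probabilities of a continuous-time Markov chain by randomizing a discrete-time chain at Poisson event times: P(t) = sum over k of exp(-nu*t) (nu*t)^k / k! times Pbar^k, with Pbar = I + Q/nu and nu at least the largest leaving rate. The sketch below is a generic illustration with a made-up three-state generator, not the particular chain of the exercise.

    import math

    def uniformization(Q, t, eps=1e-10):
        """Transient distribution matrix P(t) of a CTMC via uniformization."""
        n = len(Q)
        nu = max(-Q[i][i] for i in range(n)) or 1.0      # uniformization rate
        # Discrete-time matrix Pbar = I + Q / nu.
        Pbar = [[(1.0 if i == j else 0.0) + Q[i][j] / nu for j in range(n)] for i in range(n)]
        # Accumulate Poisson(nu*t; k) * Pbar^k until the Poisson tail is negligible.
        term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]   # Pbar^0
        weight = math.exp(-nu * t)                       # Poisson probability for k = 0
        result = [[weight * term[i][j] for j in range(n)] for i in range(n)]
        accumulated, k = weight, 0
        while accumulated < 1.0 - eps:
            k += 1
            term = [[sum(term[i][m] * Pbar[m][j] for m in range(n)) for j in range(n)]
                    for i in range(n)]
            weight *= nu * t / k
            for i in range(n):
                for j in range(n):
                    result[i][j] += weight * term[i][j]
            accumulated += weight
        return result

    # Made-up three-state generator (rates are assumptions for illustration only).
    Q = [[-2.0, 1.5, 0.5],
         [1.0, -3.0, 2.0],
         [0.0, 4.0, -4.0]]
    P_t = uniformization(Q, t=0.75)
    print("row 0 of P(0.75):", [round(x, 4) for x in P_t[0]])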

5.2 (a) The same recursive approach as in 5.1 gives the equilibrium distribution in closed form.

5.8 Check that the proposed probabilities satisfy the equilibrium equations: substitute the given expression into these equations and verify that they hold.

5.9 Write down the equilibrium equations for the given state space; solving the resulting recursion gives the equilibrium probabilities, and the required quantities follow from them.

5.10 The two-dimensional process obtained by recording both quantities is a continuous-time Markov chain; write down its state space and equilibrium equations.

5.14 The equilibrium equations give the truncated Poisson distribution, as usual. The long-run fraction of time the system is down then follows from these probabilities. (b) No, because of the insensitivity property: the result depends on the service-time distribution only through its mean.
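The truncated Poisson (Erlang loss) distribution appearing in 5.14 is easy to evaluate numerically. The sketch below uses assumed values for the offered load and the number of servers (they are not the book's data) and computes the equilibrium probabilities and the blocking probability; by the insensitivity property the result depends on the service-time distribution only through its mean.

    from math import factorial

    def erlang_loss(offered_load, servers):
        """Truncated Poisson equilibrium probabilities and blocking probability.

        offered_load = arrival rate * mean service time; `servers` is the number
        of servers in a loss system without waiting room.
        """
        weights = [offered_load ** j / factorial(j) for j in range(servers + 1)]
        total = sum(weights)
        p = [w / total for w in weights]
        return p, p[servers]            # blocking probability = P(all servers busy)

    # Assumed numbers for illustration: offered load 3 Erlang, 5 servers.
    p, blocking = erlang_loss(3.0, 5)
    print("equilibrium probabilities:", [round(x, 4) for x in p])
    print("blocking probability (Erlang B):", round(blocking, 4))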