TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

1 TCOM 501: Networking Theory & Fundamentals Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

2 6-2 Topics
Time-Reversal of Markov Chains
Reversibility
Truncating a Reversible Markov Chain
Burke's Theorem
Queues in Tandem

3 6-3 Time-Reversed Markov Chains
$\{X_n : n = 0, 1, \dots\}$ irreducible aperiodic Markov chain with transition probabilities $P_{ij}$, $\sum_{j=0}^{\infty} P_{ij} = 1$, $i = 0, 1, \dots$
Unique stationary distribution ($\pi_j > 0$) if and only if:
$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j = 0, 1, \dots$
Process in steady state:
$\Pr\{X_n = j\} = \pi_j = \lim_{n \to \infty} \Pr\{X_n = j \mid X_0 = i\}$
Starts at $n = -\infty$, that is $\{X_n : n = \dots, -1, 0, 1, \dots\}$
Choose the initial state according to the stationary distribution
How does $\{X_n\}$ look reversed in time?

4 6-4 Time-Reversed Markov Chains
Define $Y_n = X_{\tau - n}$, for arbitrary $\tau > 0$. $\{Y_n\}$ is the reversed process.
Proposition 1: $\{Y_n\}$ is a Markov chain with transition probabilities:
$P^*_{ij} = \frac{\pi_j P_{ji}}{\pi_i}, \quad i, j = 0, 1, \dots$
$\{Y_n\}$ has the same stationary distribution $\pi_j$ as the forward chain $\{X_n\}$

5 6-5 Time-Reversed Markov Chains
Proof of Proposition 1:
$P^*_{ij} = P\{Y_m = j \mid Y_{m-1} = i, Y_{m-2} = i_2, \dots, Y_{m-k} = i_k\}$
$= P\{X_{\tau-m} = j \mid X_{\tau-m+1} = i, X_{\tau-m+2} = i_2, \dots, X_{\tau-m+k} = i_k\}$
$= P\{X_n = j \mid X_{n+1} = i, X_{n+2} = i_2, \dots, X_{n+k} = i_k\}$ (setting $n = \tau - m$)
$= \frac{P\{X_n = j, X_{n+1} = i, X_{n+2} = i_2, \dots, X_{n+k} = i_k\}}{P\{X_{n+1} = i, X_{n+2} = i_2, \dots, X_{n+k} = i_k\}}$
$= \frac{P\{X_{n+2} = i_2, \dots, X_{n+k} = i_k \mid X_n = j, X_{n+1} = i\}\, P\{X_n = j, X_{n+1} = i\}}{P\{X_{n+2} = i_2, \dots, X_{n+k} = i_k \mid X_{n+1} = i\}\, P\{X_{n+1} = i\}}$
$= \frac{P\{X_n = j, X_{n+1} = i\}}{P\{X_{n+1} = i\}} = \frac{P\{X_{n+1} = i \mid X_n = j\}\, P\{X_n = j\}}{P\{X_{n+1} = i\}} = \frac{P_{ji}\, \pi_j}{\pi_i}$
(the two conditional probabilities in the fraction are equal by the Markov property of $\{X_n\}$, so they cancel; the result depends only on $i$ and $j$, hence $\{Y_n\}$ is a Markov chain)
Stationarity of $\{\pi_j\}$ for the reversed chain:
$\sum_{i=0}^{\infty} \pi_i P^*_{ij} = \sum_{i=0}^{\infty} \pi_j P_{ji} = \pi_j \sum_{i=0}^{\infty} P_{ji} = \pi_j$
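A quick numerical sanity check of Proposition 1 (this sketch is not part of the original slides): build a small irreducible chain with hypothetical transition probabilities, compute its stationary distribution, form $P^*_{ij} = \pi_j P_{ji}/\pi_i$, and verify that $P^*$ is a stochastic matrix with the same stationary distribution.

```python
import numpy as np

# Transition matrix of a small irreducible, aperiodic chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Reversed-chain transition probabilities P*_ij = pi_j P_ji / pi_i.
P_star = (pi[None, :] * P.T) / pi[:, None]

print("rows of P* sum to 1:", np.allclose(P_star.sum(axis=1), 1.0))
print("pi is stationary for P*:", np.allclose(pi @ P_star, pi))
```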

6 6-6 Reversibility
Stochastic process $\{X(t)\}$ is called reversible if $(X(t_1), X(t_2), \dots, X(t_n))$ and $(X(\tau-t_1), X(\tau-t_2), \dots, X(\tau-t_n))$ have the same probability distribution, for all $\tau, t_1, \dots, t_n$
Markov chain $\{X_n\}$ is reversible if and only if the transition probabilities of forward and reversed chains are equal:
$P_{ij} = P^*_{ij}$, or equivalently, if and only if $\pi_i P_{ij} = \pi_j P_{ji}, \quad i, j = 0, 1, \dots$
Detailed Balance Equations $\Leftrightarrow$ Reversibility

7 6-7 Reversibility: Discrete-Time Chains
Theorem 1: If there exists a set of positive numbers $\{\pi_j\}$ that sum up to 1 and satisfy:
$\pi_i P_{ij} = \pi_j P_{ji}, \quad i, j = 0, 1, \dots$
Then:
1. $\{\pi_j\}$ is the unique stationary distribution
2. The Markov chain is reversible
Example: Discrete-time birth-death processes are reversible, since they satisfy the DBE

8 6-8 Example: Birth-Death Process
[State transition diagram: states $0, 1, \dots, n, n+1, \dots$ with transition probabilities $P_{01}$, $P_{n-1,n}$, $P_{n,n+1}$, $P_{n+1,n}$, self-loops $P_{00}$, $P_{nn}$, and a cut between $S = \{0,1,\dots,n\}$ and $S^c$]
One-dimensional Markov chain with transitions only between neighboring states: $P_{ij} = 0$ if $|i-j| > 1$
Detailed Balance Equations (DBE):
$\pi_n P_{n,n+1} = \pi_{n+1} P_{n+1,n}, \quad n = 0, 1, \dots$
Proof: GBE (global balance equations) with $S = \{0, 1, \dots, n\}$ give:
$\sum_{j=0}^{n} \sum_{i=n+1}^{\infty} \pi_j P_{ji} = \sum_{j=0}^{n} \sum_{i=n+1}^{\infty} \pi_i P_{ij} \;\Rightarrow\; \pi_n P_{n,n+1} = \pi_{n+1} P_{n+1,n}$
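To illustrate the DBE computationally (this sketch is not from the slides), the code below builds a finite discrete-time birth-death chain with hypothetical up/down probabilities p and q, obtains the stationary distribution recursively from the detailed balance equations, and cross-checks it against the global balance equations.

```python
import numpy as np

# A minimal sketch: finite discrete-time birth-death chain on states 0..N
# with hypothetical up/down probabilities p and q (p + q <= 1).
N, p, q = 10, 0.3, 0.4
P = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        P[n, n + 1] = p
    if n > 0:
        P[n, n - 1] = q
    P[n, n] = 1.0 - P[n].sum()          # remaining mass stays put

# Stationary distribution from the detailed balance equations:
# pi_{n+1} = pi_n * P[n, n+1] / P[n+1, n]
pi = np.ones(N + 1)
for n in range(N):
    pi[n + 1] = pi[n] * P[n, n + 1] / P[n + 1, n]
pi /= pi.sum()

# Cross-check against the global balance equations pi = pi P, and the DBE.
print("pi P == pi:", np.allclose(pi @ P, pi))
print("DBE hold:", np.allclose(pi[:, None] * P, (pi[:, None] * P).T))
```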

9 6-9 Time-Reversed Markov Chains (Revisited)
Theorem 2: Irreducible Markov chain with transition probabilities $P_{ij}$. If there exist:
A set of transition probabilities $Q_{ij}$, with $\sum_j Q_{ij} = 1$, $i \ge 0$, and
A set of positive numbers $\{\pi_j\}$, that sum up to 1, such that
$\pi_i P_{ij} = \pi_j Q_{ji}, \quad i, j = 0, 1, \dots \qquad (1)$
Then:
$Q_{ij}$ are the transition probabilities of the reversed chain, and
$\{\pi_j\}$ is the stationary distribution of the forward and the reversed chains
Remark: Use (1) to find the stationary distribution, by guessing the transition probabilities of the reversed chain, even if the process is not reversible

10 6-10 Continuous-Time Markov Chains
$\{X(t): -\infty < t < \infty\}$ irreducible aperiodic Markov chain with transition rates $q_{ij}$, $i \ne j$
Unique stationary distribution ($p_i > 0$) if and only if:
$p_j \sum_{i \ne j} q_{ji} = \sum_{i \ne j} p_i q_{ij}, \quad j = 0, 1, \dots$
Process in steady state, e.g., started at $t = -\infty$:
$\Pr\{X(t) = j\} = p_j = \lim_{t \to \infty} \Pr\{X(t) = j \mid X(0) = i\}$
If $\{\pi_j\}$ is the stationary distribution of the embedded discrete-time chain:
$p_j = \frac{\pi_j / \nu_j}{\sum_i \pi_i / \nu_i}, \qquad \nu_j = \sum_{i \ne j} q_{ji}, \quad j = 0, 1, \dots$
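A small numerical illustration (not in the original slides) of the relation between the CTMC stationary distribution $\{p_j\}$ and the stationary distribution $\{\pi_j\}$ of the embedded jump chain, using a hypothetical 3-state rate matrix.

```python
import numpy as np

# Off-diagonal q[i, j] is the transition rate i -> j; diagonal entries unused here.
q = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
nu = q.sum(axis=1)                     # total rate out of each state, nu_j

# Stationary distribution p of the CTMC: solve p Q = 0 with Q = q - diag(nu).
Q = q - np.diag(nu)
A = np.vstack([Q.T, np.ones(3)])       # append normalization sum(p) = 1
p = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Embedded (jump) chain and its stationary distribution pi.
P_embedded = q / nu[:, None]
w, v = np.linalg.eig(P_embedded.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Check that p_j is proportional to pi_j / nu_j.
ratio = pi / nu
print("p matches pi_j/nu_j (normalized):", np.allclose(p, ratio / ratio.sum()))
```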

11 6-11 Reversed Continuous-Time Markov Chains
Reversed chain $\{Y(t)\}$, with $Y(t) = X(\tau - t)$, for arbitrary $\tau > 0$
Proposition 2:
1. $\{Y(t)\}$ is a continuous-time Markov chain with transition rates:
$q^*_{ij} = \frac{p_j q_{ji}}{p_i}, \quad i, j = 0, 1, \dots,\ i \ne j$
2. $\{Y(t)\}$ has the same stationary distribution $\{p_j\}$ as the forward chain
Remark: The transition rate out of state $i$ in the reversed chain is equal to the transition rate out of state $i$ in the forward chain:
$\sum_{j \ne i} q^*_{ij} = \sum_{j \ne i} \frac{p_j q_{ji}}{p_i} = \frac{p_i \sum_{j \ne i} q_{ij}}{p_i} = \sum_{j \ne i} q_{ij} = \nu_i, \quad i = 0, 1, \dots$

12 6-12 Reversibility: Continuous-Time Chains
Markov chain $\{X(t)\}$ is reversible if and only if the transition rates of forward and reversed chains are equal:
$q^*_{ij} = q_{ij}$, or equivalently $p_i q_{ij} = p_j q_{ji}, \quad i, j = 0, 1, \dots,\ i \ne j$
Detailed Balance Equations $\Leftrightarrow$ Reversibility
Theorem 3: If there exists a set of positive numbers $\{p_j\}$, that sum up to 1 and satisfy:
$p_i q_{ij} = p_j q_{ji}, \quad i, j = 0, 1, \dots,\ i \ne j$
Then:
1. $\{p_j\}$ is the unique stationary distribution
2. The Markov chain is reversible

13 6-13 Example: Birth-Death Process
[State transition diagram: states $0, 1, \dots, n, n+1, \dots$ with birth rates $\lambda_0, \dots, \lambda_n$, death rates $\mu_1, \dots, \mu_{n+1}$, and a cut between $S = \{0,1,\dots,n\}$ and $S^c$]
Transitions only between neighboring states:
$q_{i,i+1} = \lambda_i, \quad q_{i,i-1} = \mu_i, \quad q_{ij} = 0 \text{ for } |i-j| > 1$
Detailed Balance Equations:
$\lambda_n p_n = \mu_{n+1} p_{n+1}, \quad n = 0, 1, \dots$
Proof: GBE with $S = \{0, 1, \dots, n\}$ give:
$\sum_{j=0}^{n} \sum_{i=n+1}^{\infty} p_j q_{ji} = \sum_{j=0}^{n} \sum_{i=n+1}^{\infty} p_i q_{ij} \;\Rightarrow\; \lambda_n p_n = \mu_{n+1} p_{n+1}$
Examples: M/M/1, M/M/c, M/M/$\infty$
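As an illustration (not in the original slides), for an M/M/1 queue the DBE $\lambda p_n = \mu p_{n+1}$ yield the geometric distribution $p_n = (1-\rho)\rho^n$; the sketch below checks this numerically on a truncated state space, with hypothetical values of $\lambda$ and $\mu$.

```python
import numpy as np

lam, mu, N = 1.0, 2.0, 200             # hypothetical rates and truncation level
rho = lam / mu                          # utilization, must be < 1

# Build p recursively from the detailed balance equations lambda*p_n = mu*p_{n+1}.
p = np.empty(N + 1)
p[0] = 1.0
for n in range(N):
    p[n + 1] = p[n] * lam / mu
p /= p.sum()

# Compare with the closed form (1 - rho) * rho**n of the M/M/1 queue.
closed_form = (1 - rho) * rho ** np.arange(N + 1)
print("max abs difference:", np.max(np.abs(p - closed_form)))
```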

14 6-14 Reversed Continuous-Time Markov Chains (Revisited)
Theorem 4: Irreducible continuous-time Markov chain with transition rates $q_{ij}$. If there exist:
A set of transition rates $\varphi_{ij}$, with $\sum_{j \ne i} \varphi_{ij} = \sum_{j \ne i} q_{ij}$, $i \ge 0$, and
A set of positive numbers $\{p_j\}$, that sum up to 1, such that
$p_i \varphi_{ij} = p_j q_{ji}, \quad i, j = 0, 1, \dots,\ i \ne j$
Then:
$\varphi_{ij}$ are the transition rates of the reversed chain, and
$\{p_j\}$ is the stationary distribution of the forward and the reversed chains
Remark: Use these equations to find the stationary distribution, by guessing the transition rates of the reversed chain, even if the process is not reversible

15 6-15 Reversibility: Trees
Theorem 5: For a Markov chain, form a graph where the states are the nodes and for each $q_{ij} > 0$ there is a directed arc $i \to j$. Consider an irreducible Markov chain with transition rates that satisfy $q_{ij} > 0 \Leftrightarrow q_{ji} > 0$. If the graph is a tree (contains no loops), then the Markov chain is reversible.
Remarks:
Sufficient condition for reversibility
Generalization of the one-dimensional birth-death process
[Example: tree-shaped transition graph with paired arcs $i \to j$ and $j \to i$ labeled by rates $q_{ij}$ and $q_{ji}$]

16 6-16 Kolmogorov's Criterion (Discrete Chain)
Detailed balance equations determine whether a Markov chain is reversible or not, based on the stationary distribution and the transition probabilities. Should be able to derive a reversibility criterion based only on the transition probabilities!
Theorem 6: A discrete-time Markov chain is reversible if and only if:
$P_{i_1 i_2} P_{i_2 i_3} \cdots P_{i_{n-1} i_n} P_{i_n i_1} = P_{i_1 i_n} P_{i_n i_{n-1}} \cdots P_{i_3 i_2} P_{i_2 i_1}$
for any finite sequence of states $i_1, i_2, \dots, i_n$, and any $n$
Intuition: The probability of traversing any loop $i_1 \to i_2 \to \cdots \to i_n \to i_1$ is equal to the probability of traversing the same loop in the reverse direction $i_1 \to i_n \to \cdots \to i_2 \to i_1$

17 6-17 Kolmogorov's Criterion (Continuous Chain)
Detailed balance equations determine whether a Markov chain is reversible or not, based on the stationary distribution and the transition rates. Should be able to derive a reversibility criterion based only on the transition rates!
Theorem 7: A continuous-time Markov chain is reversible if and only if:
$q_{i_1 i_2} q_{i_2 i_3} \cdots q_{i_{n-1} i_n} q_{i_n i_1} = q_{i_1 i_n} q_{i_n i_{n-1}} \cdots q_{i_3 i_2} q_{i_2 i_1}$
for any finite sequence of states $i_1, i_2, \dots, i_n$, and any $n$
Intuition: The product of transition rates along any loop $i_1 \to i_2 \to \cdots \to i_n \to i_1$ is equal to the product of transition rates along the same loop traversed in the reverse direction $i_1 \to i_n \to \cdots \to i_2 \to i_1$
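To make the criterion concrete (this sketch is not from the slides), one can brute-force the loop condition over all state sequences up to a small length for a finite rate matrix. The hypothetical helper `kolmogorov_ok` below does exactly that and is applied to a reversible birth-death example and to a non-reversible biased cycle.

```python
import itertools
import numpy as np

def kolmogorov_ok(q, max_len=4, tol=1e-9):
    """Check Kolmogorov's criterion: for every loop i1 -> ... -> in -> i1,
    the product of rates equals the product along the reversed loop."""
    states = range(q.shape[0])
    for n in range(2, max_len + 1):
        for loop in itertools.product(states, repeat=n):
            cycle = list(loop) + [loop[0]]
            fwd = np.prod([q[cycle[k], cycle[k + 1]] for k in range(n)])
            bwd = np.prod([q[cycle[k + 1], cycle[k]] for k in range(n)])
            if abs(fwd - bwd) > tol:
                return False
    return True

# Reversible example: birth-death rates on 3 states.
q_bd = np.array([[0.0, 1.0, 0.0],
                 [2.0, 0.0, 1.0],
                 [0.0, 3.0, 0.0]])

# Non-reversible example: a cycle 0 -> 1 -> 2 -> 0 biased in one direction.
q_cycle = np.array([[0.0, 2.0, 1.0],
                    [1.0, 0.0, 2.0],
                    [2.0, 1.0, 0.0]])

print(kolmogorov_ok(q_bd))     # expected: True
print(kolmogorov_ok(q_cycle))  # expected: False
```

For a chain with many states this brute-force check is only practical for short loop lengths, but it mirrors the statement of Theorem 7 directly.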

18 6-18 Kolmogorov's Criterion (Proof)
Proof of Theorem 6:
Necessary: If the chain is reversible, the DBE hold:
$\pi_{i_1} P_{i_1 i_2} = \pi_{i_2} P_{i_2 i_1}, \quad \pi_{i_2} P_{i_2 i_3} = \pi_{i_3} P_{i_3 i_2}, \quad \dots, \quad \pi_{i_n} P_{i_n i_1} = \pi_{i_1} P_{i_1 i_n}$
Multiplying these equations and canceling the $\pi$'s gives
$P_{i_1 i_2} P_{i_2 i_3} \cdots P_{i_n i_1} = P_{i_1 i_n} \cdots P_{i_3 i_2} P_{i_2 i_1}$
Sufficient: Fixing two states $i_1 = i$ and $i_n = j$ and summing over all states $i_2, \dots, i_{n-1}$ we have
$P^{n-1}_{ij} P_{ji} = P_{ij} P^{n-1}_{ji}$
Taking the limit $n \to \infty$:
$\lim_{n \to \infty} P^{n-1}_{ij}\, P_{ji} = P_{ij} \lim_{n \to \infty} P^{n-1}_{ji} \;\Rightarrow\; \pi_j P_{ji} = P_{ij}\, \pi_i$

19 6-19 Example: M/M/2 Queue with Heterogeneous Servers
[State transition diagram: states $0, 1A, 1B, 2, 3, \dots$ with rates $\alpha\lambda$, $(1-\alpha)\lambda$, $\mu_A$, $\mu_B$, $\lambda$, and $\mu_A + \mu_B$]
M/M/2 queue. Servers A and B with service rates $\mu_A$ and $\mu_B$ respectively. When the system is empty, arrivals go to A with probability $\alpha$ and to B with probability $1-\alpha$. Otherwise, the head of the queue takes the first free server.
Need to keep track of which server is busy when there is 1 customer in the system. Denote the two possible states by 1A and 1B.
Reversibility: we only need to check the loop $0 \to 1A \to 2 \to 1B \to 0$:
$q_{0,1A}\, q_{1A,2}\, q_{2,1B}\, q_{1B,0} = \alpha\lambda \cdot \lambda \cdot \mu_A \cdot \mu_B$
$q_{0,1B}\, q_{1B,2}\, q_{2,1A}\, q_{1A,0} = (1-\alpha)\lambda \cdot \lambda \cdot \mu_B \cdot \mu_A$
Reversible if and only if $\alpha = 1/2$. What happens when $\mu_A = \mu_B$ and $\alpha \ne 1/2$?

20 6-20 Example: M/M/2 Queue with Heterogeneous Servers
[State transition diagram as in the previous slide, with cuts $S_1$, $S_2$, $S_3$ used for the balance equations]
Cut and balance equations:
$p_n = \Big(\frac{\lambda}{\mu_A+\mu_B}\Big)^{n-2} p_2, \quad n = 2, 3, \dots$
$\lambda p_0 = \mu_A p_{1A} + \mu_B p_{1B}$
$(\mu_A + \mu_B)\, p_2 = \lambda (p_{1A} + p_{1B})$
$(\lambda + \mu_A)\, p_{1A} = \alpha\lambda\, p_0 + \mu_B\, p_2$ (and symmetrically for state 1B)
Solving:
$p_{1A} = \frac{\lambda\,[\lambda + \alpha(\mu_A+\mu_B)]}{\mu_A(2\lambda + \mu_A + \mu_B)}\, p_0, \qquad p_{1B} = \frac{\lambda\,[\lambda + (1-\alpha)(\mu_A+\mu_B)]}{\mu_B(2\lambda + \mu_A + \mu_B)}\, p_0$
with $p_0$ determined by the normalization
$p_0 + p_{1A} + p_{1B} + \sum_{n=2}^{\infty} p_n = 1, \qquad \sum_{n=2}^{\infty} p_n = \frac{p_2}{1-\rho},\ \ \rho = \frac{\lambda}{\mu_A+\mu_B}$
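As a sanity check on the formulas above (this sketch is not part of the original slides), the code below builds the generator of this chain truncated at a hypothetical maximum population N, solves for the stationary distribution numerically, and compares $p_{1A}/p_0$ and $p_{1B}/p_0$ with the closed-form ratios; for a load this light the truncation error is negligible.

```python
import numpy as np

lam, muA, muB, alpha, N = 1.0, 2.0, 3.0, 0.3, 60   # hypothetical parameters

# State indices: 0 -> empty, 1 -> "1A", 2 -> "1B", k >= 3 -> k-1 customers.
S = N + 2
Q = np.zeros((S, S))
Q[0, 1], Q[0, 2] = alpha * lam, (1 - alpha) * lam   # empty system
Q[1, 0], Q[1, 3] = muA, lam                         # one customer at server A
Q[2, 0], Q[2, 3] = muB, lam                         # one customer at server B
Q[3, 1], Q[3, 2] = muB, muA                         # two customers: B or A finishes
for k in range(3, S):
    if k + 1 < S:
        Q[k, k + 1] = lam
    if k > 3:
        Q[k, k - 1] = muA + muB
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve p Q = 0 with sum(p) = 1.
A = np.vstack([Q.T, np.ones(S)])
b = np.zeros(S + 1); b[-1] = 1.0
p = np.linalg.lstsq(A, b, rcond=None)[0]

ratio_A = lam * (lam + alpha * (muA + muB)) / (muA * (2 * lam + muA + muB))
ratio_B = lam * (lam + (1 - alpha) * (muA + muB)) / (muB * (2 * lam + muA + muB))
print("p_1A/p_0:", p[1] / p[0], "closed form:", ratio_A)
print("p_1B/p_0:", p[2] / p[0], "closed form:", ratio_B)
```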

21 6-21 Multidimensional Markov Chains
Theorem 8:
$\{X_1(t)\}$, $\{X_2(t)\}$: independent Markov chains
$\{X_i(t)\}$: reversible
$\{X(t)\}$, with $X(t) = (X_1(t), X_2(t))$: vector-valued stochastic process
Then $\{X(t)\}$ is a Markov chain, and $\{X(t)\}$ is reversible
Multidimensional Chains:
Queueing system with two classes of customers, each having its own stochastic properties: track the number of customers from each class
Study the joint evolution of two queueing systems: track the number of customers in each system

22 6-22 Example: Two Independent M/M/1 Queues
Two independent M/M/1 queues. The arrival and service rates at queue $i$ are $\lambda_i$ and $\mu_i$ respectively. Assume $\rho_i = \lambda_i/\mu_i < 1$.
$\{(N_1(t), N_2(t))\}$ is a Markov chain. Probability of $n_1$ customers at queue 1, and $n_2$ at queue 2, at steady state:
$p(n_1, n_2) = (1-\rho_1)\rho_1^{n_1}\,(1-\rho_2)\rho_2^{n_2} = p_1(n_1)\, p_2(n_2)$
Product-form distribution
Generalizes for any number $K$ of independent queues, M/M/1, M/M/c, or M/M/$\infty$. If $p_i(n_i)$ is the stationary distribution of queue $i$:
$p(n_1, n_2, \dots, n_K) = p_1(n_1)\, p_2(n_2) \cdots p_K(n_K)$

23 6-23 Example: Two Independent M/M/1 Queues
Stationary distribution:
$p(n_1, n_2) = (1-\rho_1)\rho_1^{n_1}\,(1-\rho_2)\rho_2^{n_2}, \qquad \rho_i = \lambda_i/\mu_i$
Detailed Balance Equations:
$\mu_1\, p(n_1+1, n_2) = \lambda_1\, p(n_1, n_2)$
$\mu_2\, p(n_1, n_2+1) = \lambda_2\, p(n_1, n_2)$
Verify that the Markov chain is reversible: Kolmogorov criterion
[Two-dimensional state transition diagram on states $(n_1, n_2)$, $n_1, n_2 = 0, 1, 2, 3, \dots$, with horizontal arcs labeled $\lambda_1$, $\mu_1$ and vertical arcs labeled $\lambda_2$, $\mu_2$]
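A small numerical check (not from the slides): with the product-form guess for $p(n_1, n_2)$, the detailed balance equations hold across every arc of the two-dimensional lattice, which by Theorem 3 confirms both stationarity and reversibility. The sketch below verifies this on a finite window of states with hypothetical rates.

```python
import itertools
import numpy as np

lam = np.array([1.0, 0.5])      # hypothetical arrival rates lambda_1, lambda_2
mu = np.array([2.0, 1.5])       # hypothetical service rates mu_1, mu_2
rho = lam / mu

def p(n1, n2):
    """Product-form guess for the joint stationary distribution."""
    return (1 - rho[0]) * rho[0] ** n1 * (1 - rho[1]) * rho[1] ** n2

ok = True
for n1, n2 in itertools.product(range(20), repeat=2):
    # DBE across the horizontal arc (n1, n2) <-> (n1+1, n2): rate lambda_1 up, mu_1 down.
    ok &= np.isclose(lam[0] * p(n1, n2), mu[0] * p(n1 + 1, n2))
    # DBE across the vertical arc (n1, n2) <-> (n1, n2+1): rate lambda_2 up, mu_2 down.
    ok &= np.isclose(lam[1] * p(n1, n2), mu[1] * p(n1, n2 + 1))
print("detailed balance holds on the window:", ok)
```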

24 6-24 Truncation of a Reversible Markov Chain
Theorem 9: $\{X(t)\}$ reversible Markov process with state space $S$ and stationary distribution $\{p_j : j \in S\}$. Truncated to a set $E \subset S$, such that the resulting chain $\{Y(t)\}$ is irreducible. Then, $\{Y(t)\}$ is reversible and has stationary distribution:
$\tilde{p}_j = \frac{p_j}{\sum_{k \in E} p_k}, \quad j \in E$
Remark: This is the conditional probability that, in steady state, the original process is at state $j$, given that it is somewhere in $E$
Proof: Verify that the DBE hold for the truncated chain:
$p_j q_{ji} = p_i q_{ij},\ i, j \in S,\ i \ne j \;\Rightarrow\; \frac{p_j}{\sum_{k \in E} p_k}\, q_{ji} = \frac{p_i}{\sum_{k \in E} p_k}\, q_{ij}, \quad i, j \in E$
and that the $\tilde{p}_j$ sum to 1:
$\sum_{j \in E} \tilde{p}_j = \frac{\sum_{j \in E} p_j}{\sum_{k \in E} p_k} = 1$

25 6-25 Example: Two Queues with Joint Buffer
The two independent M/M/1 queues of the previous example share a common buffer of size $B$: an arrival that finds $B$ customers waiting is blocked
State space restricted to $E = \{(n_1, n_2) : (n_1 - 1)^+ + (n_2 - 1)^+ \le B\}$
Distribution of truncated chain:
$p(n_1, n_2) = p(0,0)\, \rho_1^{n_1} \rho_2^{n_2}, \quad (n_1, n_2) \in E$
Normalizing:
$p(0,0) = \Big[ \sum_{(n_1, n_2) \in E} \rho_1^{n_1} \rho_2^{n_2} \Big]^{-1}$
Theorem 9 specifies the joint distribution up to the normalization constant
Calculation of the normalization constant is often tedious
[Two-dimensional state diagram for $B = 2$, showing the truncated set of states]
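Since the theorem only gives the distribution up to a constant, the following sketch (not from the slides) computes the normalization constant by brute-force enumeration of the truncated state space $E$ for hypothetical loads and a small shared buffer.

```python
import itertools

rho1, rho2, B = 0.6, 0.4, 2          # hypothetical loads and shared buffer size

def in_E(n1, n2):
    """Truncated state space: waiting customers share a buffer of size B."""
    return max(n1 - 1, 0) + max(n2 - 1, 0) <= B

# Enumerate E (n_i can be at most B + 1 when the other queue is empty).
states = [(n1, n2) for n1, n2 in itertools.product(range(B + 2), repeat=2)
          if in_E(n1, n2)]

# Normalization constant: p(0,0) = 1 / sum over E of rho1^n1 * rho2^n2.
Z = sum(rho1 ** n1 * rho2 ** n2 for n1, n2 in states)
p = {(n1, n2): rho1 ** n1 * rho2 ** n2 / Z for n1, n2 in states}

print("number of states in E:", len(states))
print("p(0,0):", p[(0, 0)])
print("probabilities sum to 1:", abs(sum(p.values()) - 1.0) < 1e-12)
```

For larger buffers or more queues the enumeration grows quickly, which is the sense in which the normalization constant becomes tedious to compute.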

26 6-26 Burke's Theorem
$\{X(t)\}$ birth-death process with stationary distribution $\{p_j\}$
Arrival epochs: points of increase for $\{X(t)\}$
Departure epochs: points of decrease for $\{X(t)\}$
$\{X(t)\}$ completely determines the corresponding arrival and departure processes
[Sample-path figure: a trajectory of $X(t)$ with arrival epochs (points of increase) and departure epochs (points of decrease) marked]

27 6-27 Burke's Theorem
Poisson arrival process: $\lambda_j = \lambda$, for all $j$
Birth-death process called a $(\lambda, \mu_j)$-process
Examples: M/M/1, M/M/c, M/M/$\infty$ queues
Poisson arrivals imply LAA: for any time $t$, future arrivals are independent of $\{X(s): s \le t\}$
$(\lambda, \mu_j)$-process at steady state is reversible: forward and reversed chains are stochastically identical
Arrival processes of the forward and reversed chains are stochastically identical
Arrival process of the reversed chain is Poisson with rate $\lambda$
The arrival epochs of the reversed chain are the departure epochs of the forward chain
Therefore the departure process of the forward chain is Poisson with rate $\lambda$

28 6-28 Burke's Theorem
[Sample-path figure: forward and reversed chains around a fixed time $t$]
Reversed chain: arrivals after time $t$ are independent of the chain history up to time $t$ (LAA)
Forward chain: departures prior to time $t$ and the future of the chain $\{X(s): s \ge t\}$ are independent

29 6-29 Burke's Theorem
Theorem 10: Consider an M/M/1, M/M/c, or M/M/$\infty$ system with arrival rate $\lambda$. Suppose that the system starts at steady state. Then:
1. The departure process is Poisson with rate $\lambda$
2. At each time $t$, the number of customers in the system is independent of the departure times prior to $t$
Fundamental result for the study of networks of M/M/* queues, where the output process from one queue is the input process of another
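As an informal illustration of Theorem 10 (not from the slides), the simulation sketch below runs an M/M/1 queue started from its stationary distribution, with hypothetical rates, and checks that the inter-departure times have mean and standard deviation close to those of an Exp($\lambda$) random variable, i.e. $1/\lambda$ for both.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, T = 1.0, 2.0, 50_000.0       # hypothetical rates and simulation horizon
rho = lam / mu

# Start from the stationary (geometric) distribution of the M/M/1 queue.
n = rng.geometric(1 - rho) - 1        # P{N = k} = (1 - rho) rho^k, k = 0, 1, ...
t, departures = 0.0, []
while t < T:
    rate = lam + (mu if n > 0 else 0.0)
    t += rng.exponential(1.0 / rate)
    if n > 0 and rng.random() < mu / rate:
        n -= 1
        departures.append(t)          # a service completion: departure epoch
    else:
        n += 1                        # an arrival

gaps = np.diff(departures)
print("mean inter-departure time:", gaps.mean(), "expected ~", 1 / lam)
print("std  inter-departure time:", gaps.std(), "expected ~", 1 / lam)
```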

30 6-30 Single-Server Queues in Tandem
[Figure: Poisson arrivals with rate $\lambda$ feed Station 1, whose output feeds Station 2]
Customers arrive at queue 1 according to a Poisson process with rate $\lambda$. Service times are exponential with mean $1/\mu_i$; assume the service times of a customer in the two queues are independent. Assume $\rho_i = \lambda/\mu_i < 1$.
What is the joint stationary distribution of $N_1$ and $N_2$, the number of customers in each queue?
Result: in steady state the queues are independent and
$p(n_1, n_2) = (1-\rho_1)\rho_1^{n_1}\,(1-\rho_2)\rho_2^{n_2} = p_1(n_1)\, p_2(n_2)$

31 6-31 Single-Server Queues in Tandem
[Figure: Poisson arrivals with rate $\lambda$ feed Station 1, whose output feeds Station 2]
Queue 1 is an M/M/1 queue. At steady state its departure process is Poisson with rate $\lambda$. Thus queue 2 is also M/M/1.
Marginal stationary distributions:
$p_1(n_1) = (1-\rho_1)\rho_1^{n_1}, \quad n_1 = 0, 1, \dots$
$p_2(n_2) = (1-\rho_2)\rho_2^{n_2}, \quad n_2 = 0, 1, \dots$
To complete the proof: establish independence at steady state.
Queue 1 at steady state: at time $t$, $N_1(t)$ is independent of departures prior to $t$, which are the arrivals at queue 2 up to $t$. Thus $N_1(t)$ and $N_2(t)$ are independent:
$P\{N_1(t) = n_1, N_2(t) = n_2\} = P\{N_1(t) = n_1\}\, P\{N_2(t) = n_2\} = p_1(n_1)\, P\{N_2(t) = n_2\}$
Letting $t \to \infty$, the joint stationary distribution:
$p(n_1, n_2) = p_1(n_1)\, p_2(n_2) = (1-\rho_1)\rho_1^{n_1}\,(1-\rho_2)\rho_2^{n_2}$

32 7-32 Queues in Tandem
Theorem 11: Network consisting of $K$ single-server queues in tandem. Service times at queue $i$ are exponential with rate $\mu_i$, independent of the service times at any queue $j \ne i$. Arrivals at the first queue are Poisson with rate $\lambda$. The stationary distribution of the network is:
$p(n_1, \dots, n_K) = \prod_{i=1}^{K} (1-\rho_i)\, \rho_i^{n_i}, \quad n_i = 0, 1, \dots;\ i = 1, \dots, K$
At steady state the queues are independent; the distribution of queue $i$ is that of an isolated M/M/1 queue with arrival and service rates $\lambda$ and $\mu_i$:
$p_i(n_i) = (1-\rho_i)\rho_i^{n_i}, \quad n_i = 0, 1, \dots$
Are the queues independent if not in steady state? Are the stochastic processes $\{N_1(t)\}$ and $\{N_2(t)\}$ independent?
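A rough simulation check of Theorem 11 for $K = 2$ (not from the slides, hypothetical parameters): simulate the tandem network, estimate the long-run fraction of time both queues are empty, and compare it with the product form $(1-\rho_1)(1-\rho_2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu1, mu2, T = 1.0, 2.0, 3.0, 50_000.0   # hypothetical rates and horizon

n1 = n2 = 0
t, time_both_empty = 0.0, 0.0
while t < T:
    # Active exponential clocks: external arrival, service at queue 1, service at queue 2.
    rates = [lam, mu1 if n1 > 0 else 0.0, mu2 if n2 > 0 else 0.0]
    total = sum(rates)
    dt = rng.exponential(1.0 / total)
    if n1 == 0 and n2 == 0:
        time_both_empty += dt
    t += dt
    event = rng.choice(3, p=np.array(rates) / total)
    if event == 0:
        n1 += 1                      # external arrival at queue 1
    elif event == 1:
        n1 -= 1; n2 += 1             # departure from queue 1 feeds queue 2
    else:
        n2 -= 1                      # departure from queue 2 leaves the network

rho1, rho2 = lam / mu1, lam / mu2
print("simulated P{N1 = 0, N2 = 0}:", time_both_empty / t)
print("product form (1-rho1)(1-rho2):", (1 - rho1) * (1 - rho2))
```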

33 7-33 Queues in Tandem: State-Dependent Service Rates
Theorem 12: Network consisting of $K$ queues in tandem. Service times at queue $i$ are exponential with rate $\mu_i(n_i)$ when there are $n_i$ customers in the queue, independent of the service times at any queue $j \ne i$. Arrivals at the first queue are Poisson with rate $\lambda$. The stationary distribution of the network is:
$p(n_1, \dots, n_K) = \prod_{i=1}^{K} p_i(n_i), \quad n_i = 0, 1, \dots;\ i = 1, \dots, K$
where $\{p_i(n_i)\}$ is the stationary distribution of queue $i$ in isolation with Poisson arrivals with rate $\lambda$
Examples: $\cdot$/M/c and $\cdot$/M/$\infty$ queues
If queue $i$ is $\cdot$/M/$\infty$, then:
$p_i(n_i) = \frac{(\lambda/\mu_i)^{n_i}}{n_i!}\, e^{-\lambda/\mu_i}, \quad n_i = 0, 1, \dots$
