Analysis of an Infinite-Server Queue with Markovian Arrival Streams


Guidance: Professor Masao FUKUSHIMA, Associate Professor Tetsuya TAKINE, Assistant Professor Nobuo YAMASHITA

Hiroyuki MASUYAMA

1999 Graduate Course, Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University

February 2001

Contents

1 Introduction
2 Model
3 Time-dependent joint distribution of the number of customers
  3.1 Joint distribution of the number of customers
4 Numerically Feasible Formulas for Phase-Type Service Times
  4.1 Time-dependent joint binomial moments
  4.2 Time-dependent formula for phase-type services
  4.3 Limiting formula for phase-type services
5 Numerical Examples
  5.1 Impact of service time distribution on Var[N]
  5.2 Impact of arrival process
  5.3 Impact of correlation in service time sequence

Abstract  This paper considers an infinite-server queue with multiple batch Markovian arrival streams. The service time distribution of customers may be different for different arrival streams, and simultaneous batch arrivals from more than one stream are allowed. For this queue, we first derive a system of ordinary differential equations which the time-dependent matrix joint generating function of the number of customers in the system satisfies. Next, assuming phase-type service times, we derive explicit and numerically feasible formulas for the time-dependent and limiting joint binomial moments. Further, some numerical examples are provided to discuss the impact of system parameters on the performance.

1 Introduction

Over the past few decades, several studies have been done on infinite-server queues with batch arrivals, e.g., [4, 5, 6, 7, 9]. In particular, Liu and Templeton [7] studied an infinite-server queue where the arrival process is assumed to be a Markov renewal process, the batch size distribution and the service times of individual customers in the batch may depend on the state of the Markov renewal process, and service times of individual customers in a batch are independent and identically distributed (i.i.d.). To the best of our knowledge, this queue is the most general one among those studied in the past. They derived the time-dependent generating function and binomial moments of the number of customers in the system. However, most of those results are given in terms of integrals with respect to a Markov renewal function, and therefore numerical computations with those results are not easy to conduct, as stated in [7].

The main purpose of this paper is to develop numerically tractable formulas for infinite-server queues. For this purpose, this paper considers an infinite-server queue with multiple batch Markovian arrival streams. Roughly speaking, customer arrivals occur in the following way. There exists an irreducible finite-state Markov chain that governs the arrival process, and customers from the respective arrival streams arrive with a predefined probability when a transition of the Markov chain happens. Note that this arrival process includes a superposition of independent phase-type renewal batch arrivals and Markov-modulated batch Poisson arrivals as special cases. We assume that the service time distribution of customers may be different for different arrival streams and that simultaneous batch arrivals from more than one stream are allowed.

For this queue, we first derive a system of ordinary differential equations for the time-dependent matrix joint generating function of the number of customers in the system. This result is considered as a generalization of the known result for the PH/GI/∞ queue with single arrivals [8]. As shown in [8], applying a general-purpose numerical algorithm, we can compute the time-dependent joint distribution and the joint factorial moments of the number of customers in the system. Next we derive the time-dependent joint binomial moment matrices of the number of customers in the system and, assuming phase-type service times, we obtain explicit and numerically feasible expressions for the time-dependent and the limiting joint binomial moment matrices. Furthermore, through numerical examples, we reveal the impact of system parameters on the time-dependent and the limiting performance.

The remainder of this paper is organized as follows. In Section 2, we describe the mathematical model. In Section 3, we derive a system of ordinary differential equations for the time-dependent matrix joint generating function of the number of customers in the system. In Section 4, we derive the time-dependent joint binomial moment matrices and then, assuming phase-type service times, we obtain numerically feasible expressions for the time-dependent and the limiting joint binomial moment matrices. Finally, in Section 5, we show some numerical examples and discuss the impact of system parameters on the performance.

2 Model

We consider an infinite-server queue fed by K arrival streams. Hereafter we call customers arriving from the νth (ν = 1, ..., K) stream class ν customers. Let K denote the finite set of class indices, i.e., K = {1, ..., K}. Customer arrivals are governed by a time-homogeneous, stationary Markov chain, which is called the underlying Markov chain hereafter. The underlying Markov chain has a finite state space M = {1, ..., M} and is assumed to be irreducible. The underlying Markov chain stays in state i ∈ M for an exponential interval of time with mean μ_i^{-1}, and then changes its state to j ∈ M with probability σ_{i,j}.

We define n = (n_1, ..., n_K) as a nonnegative 1 × K integer vector and let

Z = { n = (n_1, ..., n_K) ; n_ν = 0, 1, ... for all ν ∈ K },  Z_+ = Z \ {0},

where 0 denotes the 1 × K zero vector. When a transition from state i to state j happens, n_ν customers of class ν (ν ∈ K) simultaneously arrive at the queue with probability σ_{i,j}(n)/σ_{i,j}, where σ_{i,i}(0) is assumed to be zero for all i ∈ M and σ_{i,j}(n) satisfies σ_{i,j} = Σ_{n∈Z} σ_{i,j}(n). This batch arrival process is considered as an extension of the Markovian arrival streams (MASs) in [1].

We now introduce some notation describing the above MAS. Let C denote an M × M matrix whose (i, j)th element C_{i,j} (i, j ∈ M) is given by

C_{i,j} = −μ_i  (i = j),  C_{i,j} = σ_{i,j}(0) μ_i  (i ≠ j).

Further, for n ∈ Z_+, we define D(n) as an M × M matrix whose (i, j)th element D_{i,j}(n) (i, j ∈ M) is given by

D_{i,j}(n) = σ_{i,j}(n) μ_i,  n ∈ Z_+.

Thus our marked batch MAS is characterized by (C, D(n)). The infinitesimal generator of the underlying Markov chain is given by C + D, where D is defined as D = Σ_{n∈Z_+} D(n). We denote the (i, j)th element of D by D_{i,j}. We define π as the stationary probability vector of the underlying Markov chain. Note that π satisfies π(C + D) = 0 and πe = 1, where e denotes an M × 1 column vector whose elements are all equal to one. We define λ_ν (ν ∈ K) as the arrival rate of class ν customers, i.e.,

λ_ν = π Σ_{n∈Z_+} n_ν D(n) e,

where e_ν denotes the νth 1 × K unit vector:

e_ν = (0, ..., 0, 1, 0, ..., 0)  (the 1 in the νth position).    (2.1)

Service times of class ν (ν ∈ K) customers are assumed to be i.i.d. according to a distribution function H_ν(t) with finite mean b_ν. For simplicity, we assume throughout the paper that no customers are present in the system at time 0.

Remark 2.1 Customer arrivals defined above can be viewed as follows. Let S_t denote the state of the underlying Markov chain at time t. Then, given S_t = i, the conditional joint probability that n_ν customers of class ν (ν ∈ K) simultaneously arrive at the queue during the time interval (t, t + δt] and S_{t+δt} = j is given by D_{i,j}(n)δt + o(δt). Besides, given S_t = i, the event S_{t+δt} = j (j ≠ i) with no arrivals happens with probability C_{i,j}δt + o(δt). The assumption σ_{i,i}(0) = 0 for all i ∈ M implies that at least one customer arrives whenever a transition from any state i to itself happens.
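To make the definitions above concrete, the following is a minimal numerical sketch (not part of the original text) of how a marked batch MAS (C, D(n)) might be encoded and how π and the arrival rate are obtained; the two-state, single-class parameters are purely illustrative assumptions.

import numpy as np

# Illustrative single-class (K = 1), two-state batch MAS (C, D(n)).
C = np.array([[-3.0, 1.0],
              [ 0.5, -2.0]])            # C[i,j] = sigma_{ij}(0) mu_i for i != j, -mu_i on the diagonal
D = {1: np.array([[1.0, 0.5],
                  [0.5, 0.5]]),         # D(n)[i,j] = sigma_{ij}(n) mu_i
     3: np.array([[0.5, 0.0],
                  [0.0, 0.5]])}         # batches of size 1 or 3

Q = C + sum(D.values())                 # infinitesimal generator of the underlying chain
assert np.allclose(Q.sum(axis=1), 0.0)  # row sums of a generator are zero

# Stationary vector pi: pi Q = 0, pi e = 1.
M = Q.shape[0]
A = np.vstack([Q.T, np.ones(M)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(M), 1.0], rcond=None)[0]

# Arrival rate of class 1 customers: lambda_1 = pi (sum_n n D(n)) e.
lam = pi @ sum(n * Dn for n, Dn in D.items()) @ np.ones(M)
print("pi =", pi, " lambda_1 =", lam)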

3 Time-dependent joint distribution of the number of customers

In this section, we consider the time-dependent joint generating function of the number of customers of each class in the system. Let T_m denote the mth (m = 1, 2, ...) arrival epoch after time 0, where 0 < T_1 < T_2 < ⋯. Let X_{m,ν} (ν ∈ K) denote the number of class ν customers arriving at time T_m. We define N_ν(t) as the number of class ν customers in the system at time t. When no arrivals happen in the time interval (0, t], we have N_ν(t) = 0 for all ν ∈ K. Thus

E[ Π_{ν∈K} z_ν^{N_ν(t)} 1{S_t = j, T_1 > t} | S_0 = i ] = [e^{Ct}]_{i,j},    (3.1)

where 1{ξ} denotes the indicator function of the event ξ.

Next we consider the case T_k ≤ t < T_{k+1} for some k ≥ 1. Let ξ_k (k = 1, 2, ...) denote the event {T_k ≤ t < T_{k+1}}. Given that the event ξ_k happens, we define x_m as (X_{m,1}, ..., X_{m,K}). Because customers are served independently, we have, for |z_ν| ≤ 1 (ν ∈ K),

E[ Π_{ν∈K} z_ν^{N_ν(t)} | T_m = t_m (m = 1, ..., k), ξ_k, x_m = n_m (m = 1, ..., k) ]
  = Π_{m=1}^{k} Π_{ν_m∈K} [ H_{ν_m}(t − t_m) + z_{ν_m} H̄_{ν_m}(t − t_m) ]^{n_{m,ν_m}}
  = Π_{m=1}^{k} Π_{ν_m∈K} [ 1 + (z_{ν_m} − 1) H̄_{ν_m}(t − t_m) ]^{n_{m,ν_m}},    (3.2)

where n_m = (n_{m,1}, ..., n_{m,K}) ∈ Z_+ for all m = 1, 2, ..., k and H̄_ν(t) = 1 − H_ν(t). Because the events ξ_k (k = 1, 2, ...) are exclusive, it follows from (3.2) that

E[ Π_{ν∈K} z_ν^{N_ν(t)} 1{S_t = j, T_1 ≤ t} | S_0 = i ]
  = Σ_{k=1}^{∞} Σ_{n_1∈Z_+} Σ_{n_2∈Z_+} ⋯ Σ_{n_k∈Z_+} ∫_0^t dt_1 ∫_{t_1}^t dt_2 ⋯ ∫_{t_{k−1}}^t dt_k
    Π_{m=1}^{k} Π_{ν_m∈K} [ 1 + (z_{ν_m} − 1) H̄_{ν_m}(t − t_m) ]^{n_{m,ν_m}}
    [ e^{C t_1} D(n_1) e^{C(t_2−t_1)} D(n_2) ⋯ e^{C(t_k−t_{k−1})} D(n_k) e^{C(t−t_k)} ]_{i,j}.    (3.3)

We now define the time-dependent matrix joint generating function G*(t, z) of the number of customers of each class in the system at time t, where z = (z_1, ..., z_K) with |z_ν| ≤ 1 for all ν ∈ K. Namely, G*(t, z) denotes an M × M matrix whose (i, j)th element represents

E[ Π_{ν∈K} z_ν^{N_ν(t)} 1{S_t = j} | S_0 = i ],  i, j ∈ M.

It then follows from (3.1) and (3.3) that

G*(t, z) = e^{Ct} + Σ_{k=1}^{∞} Σ_{n_1∈Z_+} ⋯ Σ_{n_k∈Z_+} ∫_0^t dt_1 ∫_{t_1}^t dt_2 ⋯ ∫_{t_{k−1}}^t dt_k
    Π_{m=1}^{k} Π_{ν_m∈K} [ 1 + (z_{ν_m} − 1) H̄_{ν_m}(t − t_m) ]^{n_{m,ν_m}}
    e^{C t_1} D(n_1) e^{C(t_2−t_1)} D(n_2) ⋯ e^{C(t_k−t_{k−1})} D(n_k) e^{C(t−t_k)}.    (3.4)

Further, letting u_j = t − t_{k+1−j} (j = 1, 2, ..., k) and rearranging terms, we have

G*(t, z) = e^{Ct} + Σ_{k=1}^{∞} Σ_{n_1∈Z_+} ⋯ Σ_{n_k∈Z_+} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1
    Π_{m=1}^{k} Π_{ν_m∈K} [ 1 + (z_{ν_m} − 1) H̄_{ν_m}(u_m) ]^{n_{m,ν_m}}
    e^{C(t−u_k)} D(n_k) e^{C(u_k−u_{k−1})} D(n_{k−1}) ⋯ e^{C(u_2−u_1)} D(n_1) e^{C u_1}
  = e^{Ct} + Σ_{k=1}^{∞} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 e^{C(t−u_k)} D*(u_k, z)
    e^{C(u_k−u_{k−1})} D*(u_{k−1}, z) ⋯ e^{C(u_2−u_1)} D*(u_1, z) e^{C u_1},    (3.5)

where

D*(t, z) = Σ_{n∈Z_+} Π_{ν∈K} [ 1 + (z_ν − 1) H̄_ν(t) ]^{n_ν} D(n).    (3.6)

Theorem 3.1 On any closed interval of t where all of the H_ν(t) are continuous, G*(t, z) in (3.5) satisfies the following differential equation:

∂/∂t G*(t, z) = [ C + D*(t, z) ] G*(t, z).    (3.7)

Further, (3.7) has a unique continuous solution in [0, ∞) with G*(0, z) = I.

Proof. Pre-multiplying both sides of (3.5) by exp(−Ct), we have

e^{−Ct} G*(t, z) = I + Σ_{k=1}^{∞} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 e^{−C u_k} D*(u_k, z)
    e^{C(u_k−u_{k−1})} D*(u_{k−1}, z) ⋯ e^{C(u_2−u_1)} D*(u_1, z) e^{C u_1}.    (3.8)

Differentiating both sides of (3.8) with respect to t yields

e^{−Ct} ∂/∂t G*(t, z) − e^{−Ct} C G*(t, z)
  = e^{−Ct} D*(t, z) e^{Ct} + e^{−Ct} D*(t, z) Σ_{k=2}^{∞} ∫_0^t du_{k−1} ∫_0^{u_{k−1}} du_{k−2} ⋯ ∫_0^{u_2} du_1
    e^{C(t−u_{k−1})} D*(u_{k−1}, z) e^{C(u_{k−1}−u_{k−2})} D*(u_{k−2}, z) ⋯ e^{C(u_2−u_1)} D*(u_1, z) e^{C u_1}
  = e^{−Ct} D*(t, z) G*(t, z).    (3.9)

Rearranging terms in (3.9) and pre-multiplying both sides by exp(Ct), we obtain (3.7). The uniqueness of the solution of (3.7) with G*(0, z) = I follows from well-known results on ordinary differential equations (see [2, p. 167, Theorem 1]).

Remark 3.1 Ramaswami and Neuts [8] obtained a system of ordinary differential equations which the generating function of the number of customers in the PH/GI/∞ queue satisfies. Theorem 3.1 is considered as a generalization of the result in [8].

3.1 Joint distribution of the number of customers

We now consider the time-dependent joint distribution of the number of customers of each class in the system. Let D(t, n) (n ∈ Z) denote an M × M matrix which satisfies

D*(t, z) = Σ_{n_1=0}^{∞} ⋯ Σ_{n_K=0}^{∞} z_1^{n_1} ⋯ z_K^{n_K} D(t, n),

where D*(t, z) is given by (3.6). Also, let L(t, n) (n ∈ Z) denote an M × M matrix whose (i, j)th element represents Pr[N_1(t) = n_1, ..., N_K(t) = n_K, S_t = j | S_0 = i]. For a given n = (n_1, ..., n_K) ∈ Z, comparing the coefficient matrices of z_1^{n_1} ⋯ z_K^{n_K} on both sides of (3.7), we obtain

d/dt L(t, n) = ( C + D(t, 0) ) L(t, n) + Σ_{m∈Z_+, m ≤ n} D(t, m) L(t, n − m).

Therefore, for any fixed m ∈ Z_+, the time-dependent joint distributions L(t, n) (n ≤ m) are given by the solution of a system of ordinary differential equations, which can be solved numerically by a general-purpose numerical algorithm (e.g., see [8]).

Also, from (3.7), we can derive a system of ordinary differential equations whose solution provides the joint factorial moments of the number of customers of each class in the system at time t. We define Ω(t, m) and Ψ(t, m) as

Ω(t, m) = lim_{z_1→1} ⋯ lim_{z_K→1} ∂^{m_1}/∂z_1^{m_1} ⋯ ∂^{m_K}/∂z_K^{m_K} G*(t, z),
Ψ(t, m) = lim_{z_1→1} ⋯ lim_{z_K→1} ∂^{m_1}/∂z_1^{m_1} ⋯ ∂^{m_K}/∂z_K^{m_K} D*(t, z).

By differentiating both sides of (3.7) m_ν (ν ∈ K) times with respect to z_ν and setting z_ν = 1 for all ν ∈ K, we obtain

∂/∂t Ω(t, m) = ( C + D ) Ω(t, m) + Σ_{n∈Z_+, n ≤ m} Π_{ν∈K} binom(m_ν, n_ν) Ψ(t, n) Ω(t, m − n).

Thus the time-dependent joint factorial moment matrices Ω(t, n) (n ≤ m) are given by the solution of a system of ordinary differential equations, too.
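To illustrate how the system of differential equations above can be handed to a general-purpose solver, the following sketch (not part of the original text) integrates d/dt L(t, n) for a single-class example with exponential service, truncating the customer count at Nmax; the parameter values, the truncation level and the choice of scipy.integrate.solve_ivp are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import comb

# Single-class example (K = 1): batches of size 1 or 3, exponential service.
mu = 1.0                                        # service rate, so H_bar(t) = exp(-mu t)
C = np.array([[-3.0, 1.0], [0.5, -2.0]])
Dn = {1: np.array([[1.0, 0.5], [0.5, 0.5]]),
      3: np.array([[0.5, 0.0], [0.0, 0.5]])}
M, Nmax, max_batch = 2, 30, max(Dn)

def D_t(t, m):
    """Coefficient matrix D(t, m): a batch of size n leaves m of its customers
    in service at time t with a binomial probability."""
    Hbar = np.exp(-mu * t)
    return sum(comb(n, m) * Hbar**m * (1.0 - Hbar)**(n - m) * Dmat
               for n, Dmat in Dn.items() if n >= m)

def rhs(t, y):
    L = y.reshape(Nmax + 1, M, M)
    Dm = [D_t(t, m) for m in range(max_batch + 1)]
    dL = np.empty_like(L)
    for n in range(Nmax + 1):
        dL[n] = (C + Dm[0]) @ L[n] + sum(Dm[m] @ L[n - m]
                                         for m in range(1, min(n, max_batch) + 1))
    return dL.ravel()

L0 = np.zeros((Nmax + 1, M, M)); L0[0] = np.eye(M)   # empty system at time 0
sol = solve_ivp(rhs, (0.0, 5.0), L0.ravel(), rtol=1e-8, atol=1e-10)
L_T = sol.y[:, -1].reshape(Nmax + 1, M, M)

# Marginal distribution Pr[N(5) = n], assuming the phase starts in its stationary law.
Q = C + sum(Dn.values())
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(M)]), np.r_[np.zeros(M), 1.0], rcond=None)[0]
dist = np.array([pi @ L_T[n] @ np.ones(M) for n in range(Nmax + 1)])
print(dist[:6], dist.sum())                          # total mass should be close to 1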

4 Numerically Feasible Formulas for Phase-Type Service Times

In this section, we consider the joint binomial moments of the number of customers of each class in the system, and we develop numerically feasible formulas for the joint binomial moments, assuming phase-type service times. We define B(t, m) (m ∈ Z_+) as an M × M matrix whose (i, j)th element represents

[B(t, m)]_{i,j} = E[ Π_{ν∈K} binom(N_ν(t), m_ν) 1{S_t = j} | S_0 = i ].    (4.1)

B(t, m) is called the mth joint binomial moment matrix of the number of customers of each class in the system at time t.

4.1 Time-dependent joint binomial moments

We define G_B*(t, ω) as the matrix joint binomial moment generating function of the number of customers of each class in the system at time t, i.e.,

G_B*(t, ω) = e^{(C+D)t} + Σ_{m∈Z_+} Π_{ν∈K} ω_ν^{m_ν} B(t, m),    (4.2)

where ω = (ω_1, ω_2, ..., ω_K) and |ω_ν + 1| ≤ 1 for all ν ∈ K. Note here that G_B*(t, ω) is given in terms of G*(t, z):

G_B*(t, ω) = G*(t, ω_1 + 1, ..., ω_K + 1).    (4.3)

Theorem 4.1 The time-dependent matrix joint binomial moment generating function G_B*(t, ω) is given by

G_B*(t, ω) = e^{(C+D)t} + Σ_{k=1}^{∞} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 e^{(C+D)(t−u_k)} D_B*(u_k, ω)
    e^{(C+D)(u_k−u_{k−1})} D_B*(u_{k−1}, ω) ⋯ e^{(C+D)(u_2−u_1)} D_B*(u_1, ω) e^{(C+D)u_1},    (4.4)

where

D_B*(t, ω) = D*(t, ω_1 + 1, ..., ω_K + 1) − D.    (4.5)

Proof. It is easy to see from (3.7) that G_B*(t, ω) in (4.3) satisfies the following differential equation:

∂/∂t G_B*(t, ω) = [ C + D + D_B*(t, ω) ] G_B*(t, ω),    (4.6)
G_B*(0, ω) = I,    (4.7)

where D_B*(t, ω) is given by (4.5). In what follows, we shall show that (4.4) is a solution of (4.6). Pre-multiplying both sides of (4.4) by exp[−(C + D)t], we have

e^{−(C+D)t} G_B*(t, ω) = I + Σ_{k=1}^{∞} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 e^{−(C+D)u_k} D_B*(u_k, ω)
    e^{(C+D)(u_k−u_{k−1})} D_B*(u_{k−1}, ω) ⋯ e^{(C+D)(u_2−u_1)} D_B*(u_1, ω) e^{(C+D)u_1}.    (4.8)

Differentiating both sides of (4.8) with respect to t yields

e^{−(C+D)t} ∂/∂t G_B*(t, ω) − e^{−(C+D)t} (C + D) G_B*(t, ω)
  = e^{−(C+D)t} D_B*(t, ω) e^{(C+D)t} + e^{−(C+D)t} D_B*(t, ω) Σ_{k=2}^{∞} ∫_0^t du_{k−1} ∫_0^{u_{k−1}} du_{k−2} ⋯ ∫_0^{u_2} du_1
    e^{(C+D)(t−u_{k−1})} D_B*(u_{k−1}, ω) e^{(C+D)(u_{k−1}−u_{k−2})} D_B*(u_{k−2}, ω) ⋯ e^{(C+D)(u_2−u_1)} D_B*(u_1, ω) e^{(C+D)u_1}
  = e^{−(C+D)t} D_B*(t, ω) G_B*(t, ω).    (4.9)

Thus, pre-multiplying both sides of (4.9) by exp[(C + D)t], we see that G_B*(t, ω) in (4.4) satisfies the differential equation (4.6) and the initial condition (4.7).

We now rewrite G_B*(t, ω) in (4.4) in a more appealing form. Note first that

D_B*(t, ω) = Σ_{n∈Z_+} Π_{ν∈K} [ 1 + ω_ν H̄_ν(t) ]^{n_ν} D(n) − D
  = Σ_{n∈Z_+} Π_{ν∈K} [ Σ_{m_ν=0}^{n_ν} binom(n_ν, m_ν) ω_ν^{m_ν} H̄_ν^{m_ν}(t) ] D(n) − D
  = Σ_{n∈Z_+} D(n) + Σ_{m∈Z_+} Π_{ν∈K} ω_ν^{m_ν} H̄_ν^{m_ν}(t) Σ_{n∈Z_+, n ≥ m} Π_{ν∈K} binom(n_ν, m_ν) D(n) − D
  = Σ_{m∈Z_+} ω^{(m)} H̄^{(m)}(t) D̂(m),    (4.10)

where

ω^{(m)} = Π_{ν∈K} ω_ν^{m_ν},  m ∈ Z_+,
H̄^{(m)}(t) = Π_{ν∈K} H̄_ν^{m_ν}(t),  m ∈ Z_+,    (4.11)
D̂(m) = Σ_{n∈Z_+, n ≥ m} Π_{ν∈K} binom(n_ν, m_ν) D(n),  m ∈ Z_+.    (4.12)

Thus, with (4.10), G_B*(t, ω) in (4.4) is rewritten to be

G_B*(t, ω) = e^{(C+D)t} + Σ_{k=1}^{∞} Σ_{l_1∈Z_+} ⋯ Σ_{l_k∈Z_+} ω^{(l_1 + ⋯ + l_k)} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1
    H̄^{(l_k)}(u_k) H̄^{(l_{k−1})}(u_{k−1}) ⋯ H̄^{(l_1)}(u_1)
    e^{(C+D)(t−u_k)} D̂(l_k) e^{(C+D)(u_k−u_{k−1})} D̂(l_{k−1}) ⋯ e^{(C+D)(u_2−u_1)} D̂(l_1) e^{(C+D)u_1}
  = e^{(C+D)t} + Σ_{m∈Z_+} ω^{(m)} Σ_{k=1}^{∞} Σ_{l_k∈L_k(m)} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1
    H̄^{(l_k)}(u_k) H̄^{(l_{k−1})}(u_{k−1}) ⋯ H̄^{(l_1)}(u_1)
    e^{(C+D)(t−u_k)} D̂(l_k) e^{(C+D)(u_k−u_{k−1})} D̂(l_{k−1}) ⋯ e^{(C+D)(u_2−u_1)} D̂(l_1) e^{(C+D)u_1},    (4.13)

where l_j = (l_{j,1}, ..., l_{j,K}) ∈ Z_+ (j = 1, 2, ..., k), l_k = (l_1, ..., l_k), and

L_k(m) = { l_k = (l_1, ..., l_k) : l_1 + ⋯ + l_k = m, D̂(l_j) ≠ O, l_j ∈ Z_+, j = 1, 2, ..., k }.

Therefore, comparing (4.2) and (4.13), we obtain the following theorem.

Theorem 4.2 The mth joint binomial moment matrix B(t, m) is given by

B(t, m) = Σ_{k=1}^{∞} Σ_{l_k∈L_k(m)} ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 H̄^{(l_k)}(u_k) H̄^{(l_{k−1})}(u_{k−1}) ⋯ H̄^{(l_1)}(u_1)
    e^{(C+D)(t−u_k)} D̂(l_k) e^{(C+D)(u_k−u_{k−1})} D̂(l_{k−1}) ⋯ e^{(C+D)(u_2−u_1)} D̂(l_1) e^{(C+D)u_1}.    (4.14)

Remark 4.1 Note that H̄^{(m)}(t) has a probabilistic meaning. To see this, suppose that m_ν (ν ∈ K) customers of class ν arrive simultaneously. Let H_{ν,i} (ν ∈ K, i = 1, ..., m_ν) denote a random variable representing the service time of the ith customer of class ν. Then H̄^{(m)}(t) represents the complementary distribution function of min_{ν∈K, i=1,...,m_ν} H_{ν,i}, because

H̄^{(m)}(t) = Π_{ν∈K} Π_{i=1}^{m_ν} Pr(H_{ν,i} > t) = Pr( min_{ν∈K, i=1,...,m_ν} H_{ν,i} > t ).

Remark 4.2 We see from (4.14) that B(t, m) depends on the matrices D̂(l) only through those l ∈ Z_+ with l ≤ m.

4.2 Time-dependent formula for phase-type services

We now assume that the service time distribution of class ν customers is a phase-type distribution with representation (β_ν, T_ν):

H̄_ν(x) = β_ν e^{T_ν x} e,  ν ∈ K.    (4.15)

Then, using the following properties of the Kronecker product ⊗ and the Kronecker sum ⊕,

(U_1 ⊗ U_2 ⊗ ⋯ ⊗ U_n)(V_1 ⊗ V_2 ⊗ ⋯ ⊗ V_n) = (U_1V_1) ⊗ (U_2V_2) ⊗ ⋯ ⊗ (U_nV_n),  n ≥ 1,
exp(U) ⊗ exp(V) = exp(U ⊕ V),

we rewrite H̄^{(l)}(x) to be

H̄^{(l)}(x) = Π_{ν∈K} ( β_ν e^{T_ν x} e )^{l_ν} = β^{<l>} exp( T^{<l>} x ) e,

where β^{<l>} and T^{<l>} (l ∈ Z_+) are given by

β^{<l>} = β_1 ⊗ ⋯ ⊗ β_1 (l_1 factors) ⊗ β_2 ⊗ ⋯ ⊗ β_2 (l_2 factors) ⊗ ⋯ ⊗ β_K ⊗ ⋯ ⊗ β_K (l_K factors),
T^{<l>} = T_1 ⊕ ⋯ ⊕ T_1 (l_1 terms) ⊕ T_2 ⊕ ⋯ ⊕ T_2 (l_2 terms) ⊕ ⋯ ⊕ T_K ⊕ ⋯ ⊕ T_K (l_K terms).

Thus B(t, m) in (4.14) is rewritten to be

B(t, m) = Σ_{k=1}^{∞} Σ_{l_k∈L_k(m)} F_k(t, l_k),    (4.16)

where

F_k(t, l_k) = ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 [ Π_{j=1}^{k} β^{<l_j>} exp( T^{<l_j>} u_j ) e ]
    e^{(C+D)(t−u_k)} D̂(l_k) e^{(C+D)(u_k−u_{k−1})} D̂(l_{k−1}) ⋯ e^{(C+D)(u_2−u_1)} D̂(l_1) e^{(C+D)u_1}.    (4.17)

To obtain a numerically feasible formula for F_k(t, l_k), we shall rewrite (4.17) by considering the time-reversed arrival process. Note that the time-reversed arrival process is a marked batch MAS with representation (C̃, D̃(n)), where

C̃ = diag(π)^{-1} C^T diag(π),  D̃(n) = diag(π)^{-1} D(n)^T diag(π),

with diag(π) denoting the M × M diagonal matrix whose (i, i)th element is equal to the ith element of π. Therefore (4.17) is rewritten to be

F_k(t, l_k) = Π_{η=1}^{k} b(l_η) d(l_η) · diag(π)^{-1} [ ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1
    Π_{j=1}^{k} κ^{(l_j)} exp( T^{<l_j>} u_j ) (−T^{<l_j>}) e
    · exp(Q̃ u_1) D̂°(l_1) exp[Q̃(u_2−u_1)] D̂°(l_2) ⋯ exp[Q̃(u_k−u_{k−1})] D̂°(l_k) exp[Q̃(t−u_k)] ]^T diag(π),    (4.18)

where

b(l) = ∫_0^∞ dx β^{<l>} exp( T^{<l>} x ) e = β^{<l>} (−T^{<l>})^{-1} e,    (4.19)
d(l) = max_{i∈M} [ diag(π)^{-1} D̂(l)^T diag(π) e ]_i,    (4.20)
Q̃ = diag(π)^{-1} (C + D)^T diag(π),
D̂°(l) = d(l)^{-1} diag(π)^{-1} D̂(l)^T diag(π),  l ∈ Z_+,
κ^{(l)} = β^{<l>} (−T^{<l>})^{-1} / [ β^{<l>} (−T^{<l>})^{-1} e ].

Note that Q̃ denotes the infinitesimal generator of the time-reversed process of the underlying Markov chain that governs the arrival process, and it satisfies πQ̃ = 0. Note also that D̂°(l) is a non-negative matrix whose row sums are all equal to or less than one, i.e., D̂°(l) is a sub-stochastic matrix. Further, κ^{(l_j)} exp( T^{<l_j>} u_j ) (−T^{<l_j>}) e (j = 1, 2, ...) is considered as the density function of a phase-type distribution with representation (κ^{(l_j)}, T^{<l_j>}).

We now define F°_k(t, l_k) (k ≥ 1) as

F°_k(t, l_k) = ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 Π_{j=1}^{k} κ^{(l_j)} exp( T^{<l_j>} u_j ) (−T^{<l_j>}) e
    · exp(Q̃ u_1) D̂°(l_1) exp[Q̃(u_2−u_1)] D̂°(l_2) ⋯ exp[Q̃(u_k−u_{k−1})] D̂°(l_k) exp[Q̃(t−u_k)].    (4.21)

Then F_k(t, l_k) is rewritten in terms of F°_k(t, l_k):

F_k(t, l_k) = Π_{η=1}^{k} b(l_η) d(l_η) · diag(π)^{-1} [ F°_k(t, l_k) ]^T diag(π).    (4.22)

In what follows, we derive a numerically feasible formula for F°_k(t, l_k). We first consider F°_1(t, l_1):

F°_1(t, l_1) = ∫_0^t du_1 κ^{(l_1)} exp( T^{<l_1>} u_1 ) (−T^{<l_1>}) e · exp(Q̃ u_1) D̂°(l_1) exp[Q̃(t−u_1)].    (4.23)

Using the properties of the Kronecker product and the Kronecker sum, we rewrite (4.23) to be

F°_1(t, l_1) = ∫_0^t du_1 [ κ^{(l_1)} exp( T^{<l_1>} u_1 ) (−T^{<l_1>}) e ] ⊗ [ I_Q exp(Q̃ u_1) D̂°(l_1) ] exp[Q̃(t−u_1)]
  = ( κ^{(l_1)} ⊗ I_Q ) ∫_0^t du_1 exp[ ( T^{<l_1>} ⊕ Q̃ ) u_1 ] [ (−T^{<l_1>}) e ⊗ D̂°(l_1) ] exp[Q̃(t−u_1)],

where I_Q denotes the unit matrix whose size is the same as that of Q̃.
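The Kronecker-sum representation used above can be checked numerically; the sketch below (an illustration, not from the paper) verifies, for a single class with l = (2) and an assumed Erlang-2 service time, that H̄^{(l)}(x) = β^{<l>} exp(T^{<l>} x) e is the survival function of the minimum of the two service times, and evaluates b(l) = β^{<l>} (−T^{<l>})^{-1} e.

import numpy as np
from scipy.linalg import expm

# Illustrative PH(beta, T): Erlang-2 with mean 1.
beta = np.array([1.0, 0.0])
T = np.array([[-2.0, 2.0],
              [ 0.0, -2.0]])
e2 = np.ones(2)

def kron_sum(A, B):
    """Kronecker sum A (+) B = A x I + I x B."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# beta^{<l>} and T^{<l>} for l = (2): two Kronecker factors / summands.
beta_l = np.kron(beta, beta)
T_l = kron_sum(T, T)

x = 0.7
lhs = (beta @ expm(T * x) @ e2) ** 2          # product of the two marginal survival functions
rhs = beta_l @ expm(T_l * x) @ np.ones(4)     # beta^{<l>} exp(T^{<l>} x) e
print(lhs, rhs)                               # the two values should agree

# b(l) = beta^{<l>} (-T^{<l>})^{-1} e = E[min of the two service times]
b_l = beta_l @ np.linalg.solve(-T_l, np.ones(4))
print("b(l) =", b_l)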

Similarly, F°_2(t, l_2) is rewritten to be

F°_2(t, l_2) = ∫_0^t du_2 ∫_0^{u_2} du_1 [ κ^{(l_1)} exp( T^{<l_1>} u_1 ) (−T^{<l_1>}) e ]
    [ κ^{(l_2)} exp( T^{<l_2>} u_1 ) I_1(l_2) exp[ T^{<l_2>} (u_2−u_1) ] (−T^{<l_2>}) e ]
    [ I_Q exp(Q̃ u_1) D̂°(l_1) exp[Q̃(u_2−u_1)] D̂°(l_2) exp[Q̃(t−u_2)] ]
  = ( κ^{(l_1)} ⊗ κ^{(l_2)} ⊗ I_Q ) ∫_0^t du_2 ∫_0^{u_2} du_1 exp[ ( T^{<l_1>} ⊕ T^{<l_2>} ⊕ Q̃ ) u_1 ]
    [ (−T^{<l_1>}) e ⊗ I_1(l_2) ⊗ D̂°(l_1) ] exp[ ( T^{<l_2>} ⊕ Q̃ )(u_2−u_1) ] [ (−T^{<l_2>}) e ⊗ D̂°(l_2) ] exp[Q̃(t−u_2)],

where I_1(l_2) denotes the unit matrix whose size is the same as that of T^{<l_2>}. Following the same manipulations as in the cases k = 1 and 2, we obtain, for k = 1, 2, ...,

F°_k(t, l_k) = J_k(l_k) ∫_0^t du_k ∫_0^{u_k} du_{k−1} ⋯ ∫_0^{u_2} du_1 exp[ U_{k,1}(l_k) u_1 ] V_{k,1}(l_k)
    exp[ U_{k,2}(l_k) (u_2−u_1) ] V_{k,2}(l_k) ⋯ exp[ U_{k,k}(l_k) (u_k−u_{k−1}) ] V_{k,k}(l_k) exp[ Q̃ (t−u_k) ],    (4.24)

where, with I_{k−j}(l_{j+1}, l_{j+2}, ..., l_k) (k > j ≥ 1) being the unit matrix whose size is the same as that of T^{<l_{j+1}>} ⊕ ⋯ ⊕ T^{<l_k>},

J_k(l_k) = κ^{(l_1)} ⊗ ⋯ ⊗ κ^{(l_k)} ⊗ I_Q,    (4.25)
U_{k,j}(l_k) = T^{<l_j>} ⊕ T^{<l_{j+1}>} ⊕ ⋯ ⊕ T^{<l_k>} ⊕ Q̃,  j = 1, 2, ..., k,    (4.26)
V_{k,j}(l_k) = (−T^{<l_j>}) e ⊗ I_{k−j}(l_{j+1}, ..., l_k) ⊗ D̂°(l_j)  (j = 1, 2, ..., k−1),
V_{k,k}(l_k) = (−T^{<l_k>}) e ⊗ D̂°(l_k).    (4.27)

Lemma 4.1 For j = 1, 2, ..., k+1, let U_j denote an m_j × m_j matrix, and let V_0, V_1, ..., V_{k+1} denote matrices such that V_j is of size m_j × m_{j+1} for j = 1, ..., k (the row dimension of V_0 and the column dimension of V_{k+1} are arbitrary). If all the matrices U_j and V_j are bounded, then the following equation holds for all k = 1, 2, ...:

∫_0^t dx_k ∫_0^{x_k} dx_{k−1} ⋯ ∫_0^{x_2} dx_1 V_0 e^{U_1 x_1} V_1 e^{U_2(x_2−x_1)} V_2 ⋯ V_k e^{U_{k+1}(t−x_k)} V_{k+1}
  = [ V_0  O  ⋯  O ] exp( A t ) [ O; ⋯; O; V_{k+1} ],    (4.28)

where A denotes the (k+1) × (k+1) block matrix whose jth diagonal block is U_j (j = 1, ..., k+1), whose (j, j+1)th block is V_j (j = 1, ..., k) and whose remaining blocks are zero matrices, and where [ O; ⋯; O; V_{k+1} ] denotes the block column vector with V_{k+1} in its bottom block and zero matrices elsewhere.

The proof of Lemma 4.1 is given in the Appendix.
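Lemma 4.1 is a block-triangular matrix-exponential identity; its k = 1 case, which also drives the induction in the Appendix, can be checked numerically as in the sketch below (the matrices are arbitrary illustrative choices, and scipy.integrate.quad_vec is used only to evaluate the left-hand side).

import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Check of the k = 1 case of Lemma 4.1:
#   V0 int_0^t e^{U1 x} V1 e^{U2 (t-x)} dx V2 = [V0 O] exp([[U1, V1], [O, U2]] t) [O; V2].
rng = np.random.default_rng(0)
m1, m2 = 3, 2
U1 = rng.standard_normal((m1, m1)) - 3.0 * np.eye(m1)
U2 = rng.standard_normal((m2, m2)) - 3.0 * np.eye(m2)
V0 = rng.standard_normal((2, m1))
V1 = rng.standard_normal((m1, m2))
V2 = rng.standard_normal((m2, 4))
t = 1.3

lhs, _ = quad_vec(lambda x: V0 @ expm(U1 * x) @ V1 @ expm(U2 * (t - x)) @ V2, 0.0, t)

A = np.block([[U1, V1],
              [np.zeros((m2, m1)), U2]])
rhs = V0 @ expm(A * t)[:m1, m1:] @ V2        # V0 times the (1, 2) block times V2

print(np.max(np.abs(lhs - rhs)))             # should be very small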

Applying Lemma 4.1 to (4.24) and taking account of (4.16) and (4.22), we have the following theorem.

Theorem 4.3 B(t, m) is given by

B(t, m) = Σ_{k=1}^{∞} Σ_{l_k∈L_k(m)} Π_{η=1}^{k} b(l_η) d(l_η) · diag(π)^{-1} [ F°_k(t, l_k) ]^T diag(π),

where b(l_η) and d(l_η) are given by (4.19) and (4.20), respectively. Further, F°_k(t, l_k) (k ≥ 1) is given by

F°_k(t, l_k) = [ J_k(l_k)  O  ⋯  O ] exp[ A_k(l_k) t ] [ O; ⋯; O; I_Q ],    (4.29)

with A_k(l_k) the (k+1) × (k+1) block matrix

A_k(l_k) = [ U_{k,1}(l_k)  V_{k,1}(l_k)   O           ⋯   O ;
             O             U_{k,2}(l_k)   V_{k,2}(l_k) ⋯   O ;
             ⋮                             ⋱            ⋱   ⋮ ;
             O             ⋯               U_{k,k}(l_k)     V_{k,k}(l_k) ;
             O             ⋯               O                Q̃ ],    (4.30)

where J_k(l_k), U_{k,j}(l_k) (j = 1, 2, ..., k) and V_{k,j}(l_k) (j = 1, 2, ..., k) are defined in (4.25), (4.26) and (4.27), respectively.

Remark 4.3 A_k(l_k) in (4.30) is considered as a defective infinitesimal generator of an absorbing Markov chain. Namely, A_k(l_k) has negative diagonal elements and non-negative off-diagonal elements, all of its row sums are non-positive, and at least one row sum is strictly negative. Therefore, applying the uniformization technique [10], we can readily compute exp[A_k(l_k)t]. Further, since exp[A_k(l_k)t], J_k(l_k), I_Q, diag(π), diag(π)^{-1}, b(l_η) and d(l_η) are all non-negative, the computation of B(t, m) is numerically stable.
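Remark 4.3 proposes computing exp[A_k(l_k)t] by uniformization. The sketch below shows the basic technique for a generic matrix A with non-negative off-diagonal elements and non-positive row sums; the example matrix, the truncation rule and the comparison against scipy.linalg.expm are illustrative assumptions, not part of the original text.

import numpy as np
from math import exp
from scipy.linalg import expm

def expm_uniformization(A, t, tol=1e-12):
    """exp(A t) via uniformization, for A with non-negative off-diagonal entries
    and non-positive row sums (e.g., a defective generator). Bare-bones sketch."""
    theta = max(-np.diag(A))                 # uniformization rate
    P = np.eye(A.shape[0]) + A / theta       # non-negative, row sums <= 1
    term = np.eye(A.shape[0]) * exp(-theta * t)
    result = term.copy()
    n = 0
    while term.max() > tol or n < theta * t: # crude truncation rule for the sketch
        n += 1
        term = term @ P * (theta * t / n)    # Poisson(theta t) weight recursion
        result += term
        if n > 10000:
            break
    return result

# Quick check on an illustrative defective generator.
A = np.array([[-2.0, 1.0, 0.5],
              [ 0.3, -1.0, 0.2],
              [ 0.0, 0.4, -0.9]])
print(np.max(np.abs(expm_uniformization(A, 2.0) - expm(A * 2.0))))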

4.3 Limiting formula for phase-type services

In this subsection, assuming phase-type service times, we derive an explicit and numerically feasible formula for the limit B(m) of B(t, m):

B(m) = lim_{t→∞} B(t, m).

We define F°_k(l_k) as

F°_k(l_k) = lim_{t→∞} F°_k(t, l_k).

Theorem 4.4 B(m) is given by

B(m) = Σ_{k=1}^{∞} Σ_{l_k∈L_k(m)} Π_{η=1}^{k} b(l_η) d(l_η) · diag(π)^{-1} [ F°_k(l_k) ]^T diag(π),    (4.31)

where

F°_k(l_k) = J_k(l_k) (−U_{k,1}(l_k))^{-1} V_{k,1}(l_k) ⋯ (−U_{k,k}(l_k))^{-1} V_{k,k}(l_k) e π,

with J_k(l_k), U_{k,j}(l_k) (j = 1, 2, ..., k) and V_{k,j}(l_k) (j = 1, 2, ..., k) given in (4.25), (4.26) and (4.27), respectively.

Proof. (4.31) follows from (4.16) and (4.22). Note that A_k(l_k) in (4.30) is rewritten to be

A_k(l_k) = [ A'_k(l_k)  V'_{k,k}(l_k) ;  O  Q̃ ],    (4.32)

where V'_{k,k}(l_k) denotes the block column vector whose bottom block is V_{k,k}(l_k) and whose other blocks are zero matrices, and A'_k(l_k) denotes the k × k block matrix

A'_k(l_k) = [ U_{k,1}(l_k)  V_{k,1}(l_k)   ⋯   O ;
              O             U_{k,2}(l_k)   ⋱   ⋮ ;
              ⋮                             ⋱   V_{k,k−1}(l_k) ;
              O             ⋯               O   U_{k,k}(l_k) ].    (4.33)

Because (see (5.10) in the Appendix)

exp( [ K_{11}  K_{12} ;  O  K_{22} ] t ) = [ e^{K_{11}t}  ∫_0^t du e^{K_{11}(t−u)} K_{12} e^{K_{22}u} ;  O  e^{K_{22}t} ],

it follows from (4.32) that

exp[ A_k(l_k) t ] = [ exp[ A'_k(l_k) t ]  Θ_k(t, l_k) ;  O  exp(Q̃ t) ],    (4.34)

where

Θ_k(t, l_k) = ∫_0^t dx exp[ A'_k(l_k) x ] V'_{k,k}(l_k) exp[ Q̃ (t−x) ].    (4.35)

Note that A'_k(l_k) given by (4.33) is regarded as the defective infinitesimal generator of an absorbing Markov chain restricted to its transient states, and therefore all row sums of this matrix are strictly negative. Thus we have

∫_0^∞ exp[ A'_k(l_k) t ] dt = [ −A'_k(l_k) ]^{-1}.    (4.36)

On the other hand, since Q̃ is the infinitesimal generator of an irreducible Markov chain with the stationary probability vector π, we have

lim_{t→∞} exp(Q̃ t) = e π.    (4.37)

Thus, from (4.35), (4.36) and (4.37), we obtain

Θ_k(l_k) := lim_{t→∞} Θ_k(t, l_k)
  = ∫_0^∞ dx exp[ A'_k(l_k) x ] V'_{k,k}(l_k) lim_{t→∞} exp[ Q̃ (t−x) ]
  = ∫_0^∞ dx exp[ A'_k(l_k) x ] V'_{k,k}(l_k) e π
  = [ −A'_k(l_k) ]^{-1} V'_{k,k}(l_k) e π,    (4.38)

where we use exp(Q̃ t) e = e (t ≥ 0). Therefore

lim_{t→∞} exp[ A_k(l_k) t ] = [ O  Θ_k(l_k) ;  O  e π ].    (4.39)

Finally, from (4.29), (4.38) and (4.39), we obtain

F°_k(l_k) = lim_{t→∞} F°_k(t, l_k)
  = [ J_k(l_k)  O  ⋯  O ] ( lim_{t→∞} exp[ A_k(l_k) t ] ) [ O; ⋯; O; I_Q ]
  = J_k(l_k) [ I  O  ⋯  O ] [ −A'_k(l_k) ]^{-1} V'_{k,k}(l_k) e π,

and, since the (1, k)th block of [ −A'_k(l_k) ]^{-1} equals (−U_{k,1}(l_k))^{-1} V_{k,1}(l_k) ⋯ (−U_{k,k−1}(l_k))^{-1} V_{k,k−1}(l_k) (−U_{k,k}(l_k))^{-1}, Theorem 4.4 follows.

5 Numerical Examples

In this section, we show some numerical examples based on Theorems 4.3 and 4.4, and discuss the impact of system parameters on the mean and variance of the number of customers in the system, under the assumption that the arrival process is stationary. Note that when the arrival process is stationary, the mean number E[N_ν(t)] (ν ∈ K) of class ν customers at time t is given by

E[N_ν(t)] = π B(t, e_ν) e = λ_ν ∫_0^t H̄_ν(x) dx,

where e_ν is given by (2.1). Also, the limit E[N_ν] of E[N_ν(t)] is given by

E[N_ν] = π B(e_ν) e = λ_ν b_ν,

where b_ν denotes the mean service time of class ν customers:

b_ν = ∫_0^∞ H̄_ν(u) du.
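As a quick sanity check of these formulas (not in the original text), consider the simplest special case: a single Poisson stream of single arrivals with rate λ and exponential service with rate μ. Then the time-dependent mean reduces to the familiar M/M/∞ expression:

\[
  \mathrm{E}[N(t)] = \lambda \int_0^t \bar{H}(x)\,dx
                   = \lambda \int_0^t e^{-\mu x}\,dx
                   = \frac{\lambda}{\mu}\left(1 - e^{-\mu t}\right)
                   \longrightarrow \frac{\lambda}{\mu} = \lambda b
  \quad (t \to \infty),
\]

which is consistent with E[N_ν] = λ_ν b_ν above.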

Therefore the mean E[N(t)] of the total number of customers at time t and its limit E[N] are given by

E[N(t)] = Σ_{ν∈K} λ_ν ∫_0^t H̄_ν(x) dx,  E[N] = Σ_{ν∈K} λ_ν b_ν.

On the other hand, the variance Var[N_ν(t)] (ν ∈ K) of the number of class ν customers at time t and its limit Var[N_ν] are given by

Var[N_ν(t)] = 2 π B(t, 2e_ν) e + E[N_ν(t)] − ( E[N_ν(t)] )^2,
Var[N_ν] = 2 π B(2e_ν) e + E[N_ν] − ( E[N_ν] )^2.

Also, the covariance Cov[N_ν(t), N_{ν'}(t)] (ν, ν' ∈ K) of the numbers of class ν and class ν' customers at time t and its limit Cov[N_ν, N_{ν'}] are given by

Cov[N_ν(t), N_{ν'}(t)] = π B(t, e_ν + e_{ν'}) e − E[N_ν(t)] E[N_{ν'}(t)],
Cov[N_ν, N_{ν'}] = π B(e_ν + e_{ν'}) e − E[N_ν] E[N_{ν'}].

Therefore the variance Var[N(t)] of the total number of customers at time t and its limit Var[N] are given by

Var[N(t)] = Σ_{ν∈K} Var[N_ν(t)] + 2 Σ_{ν,ν'∈K: ν<ν'} Cov[N_ν(t), N_{ν'}(t)],
Var[N] = Σ_{ν∈K} Var[N_ν] + 2 Σ_{ν,ν'∈K: ν<ν'} Cov[N_ν, N_{ν'}].

5.1 Impact of service time distribution on Var[N]

We first show the impact of the service time distribution on the limiting variance of the number of customers. To do so, we consider the following MMPP input. There is only one class, i.e., K = {1}, and

C = [ −0.9  0.1 ; 0.1  −0.3 ],  D = [ 0.8  0 ; 0  0.2 ],
D(n) = θ(n) D,  θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise.

As for the service time distribution, we fix the mean service time to one (i.e., E[N] = 1) and consider the following. For 0 < C_v < 1, the k-stage Erlang distributions

H̄_1(t) = (1, 0, ..., 0) exp( [ −k  k  ⋯  0 ; 0  −k  k  ⋯ ; ⋱  ⋱ ; 0  ⋯  0  −k ] t ) e,  k = 2, 3, ...;    (5.1)

for C_v = 1, the exponential distribution

H̄_1(t) = e^{−t};    (5.2)

and for C_v > 1, a two-state balanced hyper-exponential distribution

H̄_1(t) = (p, 1−p) exp( [ −2p  0 ; 0  −2(1−p) ] t ) e,  0 < p < 0.5.    (5.3)

Figure 1 plots the limiting variance Var[N] as a function of the coefficient of variation C_v of the service time distribution. We observe that as C_v increases, the variance of the number of customers in the system decreases.

Figure 1: Limiting variance of the number of customers (Var[N] plotted against the coefficient of variation C_v of the service time distribution).

Remark 5.1 For the M(t)/G/∞ queue with a sinusoidal arrival rate, Eick et al. [3] show that as C_v decreases, the deviation of the arrival process has a greater impact on the mean number of customers in the system. Thus our observation coincides with that in [3].

5.2 Impact of arrival process

We consider the impact of the correlation in the arrival process on the variance of the number of customers. For this purpose, we assume that batches arrive according to the following two-state Markov-modulated Poisson process:

C = [ −0.8−c  c ; c  −0.2−c ],  D = [ 0.8  0 ; 0  0.2 ],  c > 0,
D(n) = θ(n) D,  θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise.

Note that the time-correlation of the arrival process increases with the mean sojourn time 1/c in each state of the underlying Markov chain. As for the service time distribution, we consider three cases: (1) the 2-stage Erlang distribution in (5.1) with k = 2, (2) the exponential distribution in (5.2), and (3) the 2-stage hyper-exponential distribution in (5.3) with p = 0.25. Figure 2 plots the limiting variance Var[N] as a function of 1/c. We observe that the limiting variance increases with the increase of the correlation in arrivals.

Figure 2: Limiting variance of the number of customers (Var[N] plotted against 1/c for the Erlang, exponential and hyper-exponential service time distributions).
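For reference, the three service-time families in (5.1)-(5.3) are easy to encode as phase-type representations (β, T); the sketch below (an illustration, not taken from the paper) builds them with unit mean and reports the squared coefficient of variation.

import numpy as np

def erlang_ph(k):
    """k-stage Erlang with mean 1: C_v = 1/sqrt(k) <= 1."""
    beta = np.zeros(k); beta[0] = 1.0
    T = -k * np.eye(k) + k * np.eye(k, k=1)
    return beta, T

def balanced_h2_ph(p):
    """Balanced two-phase hyper-exponential with mean 1 (0 < p < 0.5): C_v > 1."""
    beta = np.array([p, 1.0 - p])
    T = np.diag([-2.0 * p, -2.0 * (1.0 - p)])
    return beta, T

def ph_moments(beta, T):
    e = np.ones(T.shape[0])
    m1 = beta @ np.linalg.solve(-T, e)                              # first moment
    m2 = 2.0 * beta @ np.linalg.solve(-T, np.linalg.solve(-T, e))   # second moment
    return m1, m2 / m1**2 - 1.0                                     # mean and squared C_v

for name, (beta, T) in [("Erlang-2", erlang_ph(2)),
                        ("Exponential", erlang_ph(1)),
                        ("H2, p=0.25", balanced_h2_ph(0.25))]:
    mean, cv2 = ph_moments(beta, T)
    print(name, "mean =", round(mean, 6), "Cv^2 =", round(cv2, 6))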

Next we consider the time-dependent variance Var[N(t)] in systems with two stationary arrival streams. We fix the marginal characteristics of each arrival stream, which are represented by

C = [ −0.6  0.1 ; 0.1  −0.1 ],  D = [ 0.5  0 ; 0  0 ],
D(n) = θ(n) D,  θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise,
H̄(t) = (0.5, 0.5) exp( [ −0.75  0 ; 0  −1.5 ] t ) e,

where H̄(t) denotes the complementary distribution function of the service times. Under this restriction, we consider the following three cases.

Case 1: Negatively correlated arrival streams. The two arrival streams are negatively correlated. Namely, the two arrival streams are governed by a single two-state underlying Markov chain and, when it is in state ν (ν = 1, 2), only class ν customers can arrive. More precisely, the overall arrival process is characterized by

C = [ −0.6  0.1 ; 0.1  −0.6 ],  D_1 = [ 0.5  0 ; 0  0 ],  D_2 = [ 0  0 ; 0  0.5 ],
θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise,
D(n_1, n_2) = θ(n_1) D_1 if n_1 ≥ 1 and n_2 = 0,  θ(n_2) D_2 if n_1 = 0 and n_2 ≥ 1,  O otherwise,
H̄_ν(t) = (0.5, 0.5) exp( [ −0.75  0 ; 0  −1.5 ] t ) e,  ν = 1, 2.

Case 2: Superposition of two independent arrival streams.

The two arrival streams are independent of each other. Thus the overall arrival process is characterized by

C = [ −1.2  0.1  0.1  0 ; 0.1  −0.7  0  0.1 ; 0.1  0  −0.7  0.1 ; 0  0.1  0.1  −0.2 ],
D_1 = [ 0.5  0  0  0 ; 0  0.5  0  0 ; 0  0  0  0 ; 0  0  0  0 ],
D_2 = [ 0.5  0  0  0 ; 0  0  0  0 ; 0  0  0.5  0 ; 0  0  0  0 ],
θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise,
D(n_1, n_2) = θ(n_1) D_1 if n_1 ≥ 1 and n_2 = 0,  θ(n_2) D_2 if n_1 = 0 and n_2 ≥ 1,  O otherwise,
H̄_ν(t) = (0.5, 0.5) exp( [ −0.75  0 ; 0  −1.5 ] t ) e,  ν = 1, 2.

Case 3: Positively correlated arrival streams. The two arrival streams are positively correlated. Namely, arrivals from both streams occur simultaneously with probability one. More precisely, the overall arrival process is characterized by

C = [ −0.6  0.1 ; 0.1  −0.1 ],  D = [ 0.5  0 ; 0  0 ],
θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise,  D(n_1, n_2) = θ(n_1) θ(n_2) D,
H̄_ν(t) = (0.5, 0.5) exp( [ −0.75  0 ; 0  −1.5 ] t ) e,  ν = 1, 2.

Because the marginal characteristics of each stream in these three cases are the same, the time-dependent and the limiting marginal distributions of the numbers of customers of the respective classes are identical among the three cases. However, the covariances of the numbers of customers of the respective classes are different. Figure 3 shows the time-dependent covariance Cov[N_1(t), N_2(t)] of the numbers of customers of the respective classes as a function of t. We observe that positive (resp. negative) correlation between the two arrival streams leads to positive (resp. negative) covariance at any time t. Thus the positive (resp. negative) correlation in arrivals leads to a larger (resp. smaller) variance, compared with the superposition of independent streams, i.e., Case 2, as shown in Figure 4.

Figure 3: Time-dependent covariance of the numbers of customers (Cov[N_1(t), N_2(t)] versus t for Cases 1, 2 and 3).

Figure 4: Time-dependent variance of the number of customers (Var[N(t)] versus t for Cases 1, 2 and 3).

5.3 Impact of correlation in service time sequence

Finally, we consider the M^X/SM/∞ queue to investigate the correlation in the service time sequence. We assume that batches arrive according to a Poisson process with rate five, and that batch sizes are i.i.d. according to the following two-point distribution:

Batch size = 1 with probability 1/2,  3 with probability 1/2.    (5.4)

Let H(n; x) denote the service time distribution of customers in the nth batch. The sequence {H(n; x); n = 1, 2, ...} of service time distributions constitutes a semi-Markov process whose kernel H(x) is given by

H(x) = [ p H_1(x)  (1−p) H_2(x) ;  (1−p) H_1(x)  p H_2(x) ],  0 ≤ p < 1,    (5.5)

where H_1(x) and H_2(x) (x ≥ 0) are distribution functions. Note that for p < 1/2 the sequence has negative correlation, for p = 1/2 it becomes i.i.d., and for 1/2 < p < 1 it has positive correlation. We regard customers whose service time distribution is given by H_ν(x) (ν ∈ {1, 2}) as class ν customers. Formally, this M^X/SM/∞ queue is characterized by

C = [ −5  0 ; 0  −5 ],  D_1 = [ 5p  0 ; 5(1−p)  0 ],  D_2 = [ 0  5(1−p) ; 0  5p ],
θ(n) = 1/2 if n = 1,  1/2 if n = 3,  0 otherwise,
D(n_1, n_2) = θ(n_1) D_1 if n_1 ≥ 1 and n_2 = 0,  θ(n_2) D_2 if n_1 = 0 and n_2 ≥ 1,  O otherwise,

where 0 ≤ p < 1. Note that λ_1 = λ_2 = 5 in this formulation. As for the service time distributions H_ν(x) (ν = 1, 2), we fix λ_1 b_1 + λ_2 b_2 to 1 (i.e., E[N] = 1) and consider the following three cases:

Case 1: H_1(x) = 1 − 0.25 e^{−5x} − 0.75 e^{−15x},  H_2(x) = 1 − e^{−10x}.
Case 2: H_1(x) = 1 − e^{−20x},  H_2(x) = 1 − e^{−(20/3)x}.
Case 3: H_1(x) = 1 − e^{−50x},  H_2(x) = 1 − e^{−(50/9)x}.

In Case 1, the mean service times of class 1 and class 2 are both equal to 0.1. Thus the autocovariance of the service time sequence is equal to zero for all p. On the other hand, in Cases 2 and 3 the mean service times are different, so that the autocovariance of the service time sequence is positive for p > 1/2 and negative for p < 1/2. Figures 5, 6 and 7 plot the marginal variances Var[N_1] and Var[N_2], the covariance Cov[N_1, N_2] and the variance Var[N] of the total number of customers as functions of the parameter p. We observe that when the autocovariance of the service time sequence is equal to zero (Case 1), Var[N] is almost constant in p. However, when the correlation in the service time sequence becomes strong, Var[N] becomes large, as shown in Figures 6 and 7.
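The sign of the correlation claimed above can be verified directly (a short calculation, not in the original text). The class sequence of successive batches is a two-state Markov chain with transition matrix [ p  1−p ; 1−p  p ] and stationary distribution (1/2, 1/2), and, given the classes, the service times are conditionally independent with means b_1 and b_2. Hence the lag-1 autocovariance of the service times of customers in successive batches is

\[
  \tfrac{1}{2}\bigl[p\,b_1^2 + (1-p)\,b_1 b_2\bigr]
  + \tfrac{1}{2}\bigl[(1-p)\,b_1 b_2 + p\,b_2^2\bigr]
  - \Bigl(\frac{b_1+b_2}{2}\Bigr)^{2}
  = (2p-1)\Bigl(\frac{b_1-b_2}{2}\Bigr)^{2},
\]

which is zero when b_1 = b_2 (Case 1) or p = 1/2, negative for p < 1/2 and positive for p > 1/2, in agreement with the observations above.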

Figure 5: Variance and covariance in Case 1 (Var[N_1], Var[N_2], Cov[N_1, N_2] and Var[N] as functions of p).

Figure 6: Variance and covariance in Case 2.

Figure 7: Variance and covariance in Case 3.

Appendix: Proof of Lemma 4.1

Proof. We prove Lemma 4.1 by mathematical induction. We first consider k = 1. We define G_1(t, U_1, U_2, V_1) as

G_1(t, U_1, U_2, V_1) = ∫_0^t dx e^{U_1(t−x)} V_1 e^{U_2 x}.    (5.6)

By differentiating (5.6) with respect to t, we have

d/dt G_1(t, U_1, U_2, V_1) − U_1 G_1(t, U_1, U_2, V_1) = V_1 e^{U_2 t}.    (5.7)

On the other hand, by straightforward calculation based on the definition of the matrix exponential, we find

exp( [ U_1  V_1 ; O  U_2 ] t ) = [ e^{U_1 t}  Ĝ_1(t, U_1, U_2, V_1) ; O  e^{U_2 t} ],    (5.8)

where

Ĝ_1(t, U_1, U_2, V_1) = Σ_{n=1}^{∞} (t^n / n!) Σ_{j=0}^{n−1} U_1^j V_1 U_2^{n−1−j}.

It is easy to verify that Ĝ_1(t, U_1, U_2, V_1) satisfies

d/dt Ĝ_1(t, U_1, U_2, V_1) − U_1 Ĝ_1(t, U_1, U_2, V_1) = V_1 e^{U_2 t}.    (5.9)

Taking the difference between (5.7) and (5.9), we have

d/dt [ G_1(t, U_1, U_2, V_1) − Ĝ_1(t, U_1, U_2, V_1) ] = U_1 [ G_1(t, U_1, U_2, V_1) − Ĝ_1(t, U_1, U_2, V_1) ],

from which it follows that

G_1(t, U_1, U_2, V_1) − Ĝ_1(t, U_1, U_2, V_1) = e^{U_1 t} K,

for some constant matrix K. Note here that

G_1(0, U_1, U_2, V_1) = Ĝ_1(0, U_1, U_2, V_1) = O,

so that K = O. Thus we have

G_1(t, U_1, U_2, V_1) = Ĝ_1(t, U_1, U_2, V_1),  t ≥ 0,

and therefore (5.8) is rewritten to be

exp( [ U_1  V_1 ; O  U_2 ] t ) = [ e^{U_1 t}  ∫_0^t dx e^{U_1 x} V_1 e^{U_2(t−x)} ; O  e^{U_2 t} ].    (5.10)

As a result, we have

V_0 ∫_0^t dx e^{U_1 x} V_1 e^{U_2(t−x)} V_2 = [ V_0  O ] exp( [ U_1  V_1 ; O  U_2 ] t ) [ O; V_2 ],    (5.11)

which shows that (4.28) holds for k = 1.

Suppose that (4.28) holds for some l ≥ 1, i.e.,

∫_0^t dx_l ∫_0^{x_l} dx_{l−1} ⋯ ∫_0^{x_2} dx_1 V_0 e^{U_1 x_1} V_1 e^{U_2(x_2−x_1)} V_2 ⋯ V_l e^{U_{l+1}(t−x_l)} V_{l+1}
  = [ V_0  O  ⋯  O ] exp( A_l t ) [ O; ⋯; O; V_{l+1} ],

where A_l denotes the block matrix with diagonal blocks U_1, ..., U_{l+1}, superdiagonal blocks V_1, ..., V_l and zero matrices elsewhere. We then have

∫_0^t dx_{l+1} ∫_0^{x_{l+1}} dx_l ⋯ ∫_0^{x_2} dx_1 V_0 e^{U_1 x_1} V_1 e^{U_2(x_2−x_1)} V_2 ⋯ V_{l+1} e^{U_{l+2}(t−x_{l+1})} V_{l+2}
  = ∫_0^t dx_{l+1} [ V_0  O  ⋯  O ] exp( A_l x_{l+1} ) [ O; ⋯; O; V_{l+1} ] e^{U_{l+2}(t−x_{l+1})} V_{l+2}.    (5.12)

Thus, applying (5.11) to (5.12), we obtain

∫_0^t dx_{l+1} ∫_0^{x_{l+1}} dx_l ⋯ ∫_0^{x_2} dx_1 V_0 e^{U_1 x_1} V_1 e^{U_2(x_2−x_1)} V_2 ⋯ V_{l+1} e^{U_{l+2}(t−x_{l+1})} V_{l+2}
  = [ V_0  O  ⋯  O ] exp( A_{l+1} t ) [ O; ⋯; O; V_{l+2} ],    (5.13)

which shows that (4.28) holds for k = l + 1, too.

References

[1] S. Asmussen and G. Koole, Marked point processes as limits of Markovian arrival streams, Journal of Applied Probability 30 (1993) 365–372.
[2] R. Bellman, Introduction to Matrix Analysis, 2nd ed. (SIAM, Philadelphia, 1997).
[3] S. G. Eick, W. A. Massey and W. Whitt, M_t/G/∞ queues with sinusoidal arrival rates, Management Science 39, no. 2 (1993) 241–252.
[4] D. F. Holman, M. L. Chaudhry and B. R. K. Kashyap, On the number in the system GI^X/M/∞, Sankhyā Ser. A 44, Pt. 1 (1982) 294–297.
[5] D. F. Holman, M. L. Chaudhry and B. R. K. Kashyap, On the service system M^X/G/∞, European Journal of Operational Research 13 (1983) 142–145.
[6] L. Liu, B. R. K. Kashyap and J. G. C. Templeton, On the GI^X/G/∞ system, Journal of Applied Probability 27 (1990) 671–683.
[7] L. Liu and J. G. C. Templeton, The GR^{X_n}/G_n/∞ system: system size, Queueing Systems 8 (1991) 323–356.
[8] V. Ramaswami and M. F. Neuts, Some explicit formulas and computational methods for infinite-server queues with phase-type arrivals, Journal of Applied Probability 17 (1980) 498–514.
[9] D. N. Shanbhag, On infinite server queues with batch arrivals, Journal of Applied Probability 3 (1966) 274–279.
[10] H. C. Tijms, Stochastic Models: An Algorithmic Approach (John Wiley & Sons, Chichester, 1994).