4 M/G/1 queue

In the M/G/1 queue customers arrive according to a Poisson process with rate $\lambda$ and they are treated in order of arrival. The service times are independent and identically distributed with distribution function $F_B(\cdot)$ and density $f_B(\cdot)$. For stability we have to require that the occupation rate

$$\rho = \lambda E(B) \tag{1}$$

is less than one. In this chapter we derive the limiting (or equilibrium) distribution of the number of customers in the system and the distributions of the sojourn time and the waiting time. It is further shown how the means of these quantities can be obtained by using the mean value approach.

4.1 Which limiting distribution?

The state of the M/G/1 queue can be described by the pair $(n, x)$, where $n$ denotes the number of customers in the system and $x$ the service time already received by the customer in service. We thus need a two-dimensional state description. The first dimension is still discrete, but the other one is continuous, and this essentially complicates the analysis. However, if we look at the system just after departures, then the state description can be simplified to $n$ only, because $x = 0$ for the new customer (if any) in service.

Denote by $L^d_k$ the number of customers left behind by the $k$th departing customer. In the next section we determine the limiting distribution

$$d_n = \lim_{k \to \infty} P(L^d_k = n).$$

The probability $d_n$ can be interpreted as the fraction of customers that leave behind $n$ customers. But in fact we are more interested in the limiting distribution $p_n$ defined as

$$p_n = \lim_{t \to \infty} P(L(t) = n),$$

where $L(t)$ is the number of customers in the system at time $t$. The probability $p_n$ can be interpreted as the fraction of time there are $n$ customers in the system. From this distribution we can compute, e.g., the mean number of customers in the system. Another, perhaps even more important, distribution is the limiting distribution of the number of customers in the system seen by an arriving customer, i.e.,

$$a_n = \lim_{k \to \infty} P(L^a_k = n),$$

where $L^a_k$ is the number of customers in
the system just before the $k$th arriving customer. From this distribution we can compute, e.g., the distribution of the sojourn time.

What is the relation between these three distributions? It appears that they are all the same. Of course, from the PASTA property we already know that $a_n = p_n$ for all $n$. We will now explain why also $a_n = d_n$ for all $n$. Taking the state of the system as the number of
customers therein, the changes in state are of a nearest-neighbour type: if the system is in state $n$, then an arrival leads to a transition from $n$ to $n+1$ and a departure from $n$ to $n-1$. Hence, in equilibrium, the number of transitions per unit time from state $n$ to $n+1$ will be the same as the number of transitions per unit time from $n+1$ to $n$. The former transitions correspond to arrivals finding $n$ customers in the system, the frequency of which is equal to the total number of arrivals per unit time, $\lambda$, multiplied by the fraction of customers finding $n$ customers in the system, $a_n$. The latter transitions correspond to departures leaving behind $n$ customers. The frequency of these transitions is equal to the total number of departures per unit time, $\lambda$, multiplied by the fraction of customers leaving behind $n$ customers, $d_n$. Equating both frequencies yields $a_n = d_n$. Note that this equality is valid for any system where customers arrive and leave one by one. Thus it also holds for, e.g., the G/G/c queue.

Summarizing, for the M/G/1 queue, arrivals, departures and outside observers all see the same distribution of the number of customers in the system, i.e., for all $n$,

$$a_n = d_n = p_n.$$

4.2 Departure distribution

In this section we will determine the distribution of the number of customers left behind by a departing customer when the system is in equilibrium. Denote by $L^d_k$ the number of customers left behind by the $k$th departing customer. We first derive an equation relating the random variable $L^d_{k+1}$ to $L^d_k$. The number of customers left behind by the $(k+1)$th customer is clearly equal to the number of customers present when the $k$th customer departed, minus one (since the $(k+1)$th customer departs himself), plus the number of customers that arrive during his service time. This last number is denoted by the random variable $A_{k+1}$. Thus we have

$$L^d_{k+1} = L^d_k - 1 + A_{k+1},$$

which is valid if $L^d_k > 0$. In the special case $L^d_k = 0$, it is readily seen that

$$L^d_{k+1} = A_{k+1}.$$

From the two equations above
it is immediately clear that the sequence $\{L^d_k\}_{k=0}^{\infty}$ forms a Markov chain. This Markov chain is usually called the imbedded Markov chain, since we look at imbedded points on the time axis, i.e., at departure instants. We now specify the transition probabilities

$$p_{i,j} = P(L^d_{k+1} = j \mid L^d_k = i).$$

Clearly $p_{i,j} = 0$ for all $j < i - 1$, and for $j \ge i - 1$ the probability $p_{i,j}$ is the probability that exactly $j - i + 1$ customers arrive during the service time of the $(k+1)$th customer. This holds for $i > 0$. In state $0$ the $k$th customer leaves behind an empty system, and then $p_{0,j}$ gives
the probability that during the service time of the $(k+1)$th customer exactly $j$ customers arrive. Hence the matrix $P$ of transition probabilities takes the form

$$P = \begin{pmatrix}
\alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots \\
\alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots \\
0 & \alpha_0 & \alpha_1 & \alpha_2 & \cdots \\
0 & 0 & \alpha_0 & \alpha_1 & \cdots \\
0 & 0 & 0 & \alpha_0 & \cdots \\
\vdots & & & & \ddots
\end{pmatrix},$$

where $\alpha_n$ denotes the probability that during a service time exactly $n$ customers arrive. To calculate $\alpha_n$ we note that, given the duration of the service time, $t$ say, the number of customers that arrive during this service time is Poisson distributed with parameter $\lambda t$. Hence, we have

$$\alpha_n = \int_{t=0}^{\infty} \frac{(\lambda t)^n}{n!} e^{-\lambda t} f_B(t)\, dt. \tag{2}$$

The transition probability diagram is shown in figure 1.

Figure 1: Transition probability diagram for the M/G/1 imbedded Markov chain

This completes the specification of the imbedded Markov chain. We now wish to determine its limiting distribution. Denote the limiting distribution of $L^d_k$ by $\{d_n\}_{n=0}^{\infty}$ and the limiting random variable by $L^d$. So

$$d_n = P(L^d = n) = \lim_{k \to \infty} P(L^d_k = n).$$

The limiting probabilities $d_n$, which we know are equal to $p_n$, satisfy the equilibrium equations

$$d_n = d_{n+1}\alpha_0 + d_n\alpha_1 + \cdots + d_1\alpha_n + d_0\alpha_n
     = \sum_{k=0}^{n} d_{n+1-k}\alpha_k + d_0\alpha_n, \qquad n = 0, 1, \ldots \tag{3}$$
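To make formula (2) concrete, here is a small sketch for the special case of exponentially distributed service times with rate $\mu$, where the integral can be evaluated in closed form: the number of Poisson arrivals during an exponential service time is geometrically distributed. The parameter values $\lambda = 1$, $\mu = 2$ are illustrative choices, not from the text.

```python
# Formula (2) for exponential service times with rate mu: the integral
# evaluates in closed form to a geometric distribution,
#   alpha_n = (mu / (lam + mu)) * (lam / (lam + mu))**n.
def alpha(n, lam, mu):
    p = lam / (lam + mu)   # probability the next arrival beats the service
    return (1 - p) * p ** n

lam, mu = 1.0, 2.0         # illustrative rates, so rho = 1/2
probs = [alpha(n, lam, mu) for n in range(200)]
total = sum(probs)                              # ~ 1 (it is a distribution)
mean = sum(n * a for n, a in enumerate(probs))  # ~ lam * E(B) = rho
```

As a sanity check, the probabilities sum to one and the expected number of arrivals per service equals $\lambda E(B) = \rho$.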
To solve the equilibrium equations we will use the generating function approach. Let us introduce the probability generating functions

$$P_{L^d}(z) = \sum_{n=0}^{\infty} d_n z^n, \qquad P_A(z) = \sum_{n=0}^{\infty} \alpha_n z^n,$$

which are defined for all $|z| \le 1$. Multiplying (3) by $z^n$ and summing over all $n$ leads to

$$\begin{aligned}
P_{L^d}(z) &= \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} d_{n+1-k}\alpha_k + d_0\alpha_n \right) z^n \\
&= z^{-1} \sum_{n=0}^{\infty} \sum_{k=0}^{n} d_{n+1-k} z^{n+1-k}\, \alpha_k z^k + d_0 \sum_{n=0}^{\infty} \alpha_n z^n \\
&= z^{-1} \sum_{k=0}^{\infty} \alpha_k z^k \sum_{n=k}^{\infty} d_{n+1-k} z^{n+1-k} + d_0 P_A(z) \\
&= z^{-1} P_A(z)\bigl(P_{L^d}(z) - d_0\bigr) + d_0 P_A(z).
\end{aligned}$$

Hence we find

$$P_{L^d}(z) = \frac{d_0 P_A(z)(1 - z^{-1})}{1 - z^{-1} P_A(z)}.$$

To determine the probability $d_0$ we note that $d_0$ is equal to $p_0$, which is the fraction of time the system is empty. Hence

$$d_0 = p_0 = 1 - \rho$$

(alternatively, $d_0$ follows from the requirement $P_{L^d}(1) = 1$). So, by multiplying numerator and denominator by $z$, we obtain

$$P_{L^d}(z) = \frac{(1-\rho) P_A(z)(1-z)}{P_A(z) - z}. \tag{4}$$

By using (2), the generating function $P_A(z)$ can be rewritten as

$$\begin{aligned}
P_A(z) &= \sum_{n=0}^{\infty} \int_{t=0}^{\infty} \frac{(\lambda t)^n}{n!} e^{-\lambda t} f_B(t)\, dt\, z^n \\
&= \int_{t=0}^{\infty} \sum_{n=0}^{\infty} \frac{(\lambda t z)^n}{n!} e^{-\lambda t} f_B(t)\, dt \\
&= \int_{t=0}^{\infty} e^{-(\lambda - \lambda z)t} f_B(t)\, dt = \widetilde{B}(\lambda - \lambda z), \tag{5}
\end{aligned}$$

where $\widetilde{B}(\cdot)$ denotes the Laplace-Stieltjes transform of the service time. Substitution of (5) into (4) finally yields

$$P_{L^d}(z) = \frac{(1-\rho)\,\widetilde{B}(\lambda - \lambda z)(1-z)}{\widetilde{B}(\lambda - \lambda z) - z}. \tag{6}$$
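The equilibrium equations (3), together with $d_0 = 1 - \rho$, can also be solved numerically by forward recursion, which gives an independent sanity check on the generating-function result. A minimal sketch, assuming exponential service with rate $\mu$ (so that $\alpha_k$ is geometric and the resulting $d_n$ should come out as the well-known geometric M/M/1 distribution $(1-\rho)\rho^n$); the instance $\lambda = 1$, $\mu = 2$ is illustrative:

```python
# Solve (3) forward for d_{n+1}:
#   d_{n+1} = (d_n - d_0*alpha_n - sum_{k=1}^{n} d_{n+1-k}*alpha_k) / alpha_0,
# starting from d_0 = 1 - rho. Illustrative M/M/1 instance: lam = 1, mu = 2.
lam, mu = 1.0, 2.0
rho = lam / mu

def alpha(k):
    # arrivals during an Exp(mu) service time are geometrically distributed
    p = lam / (lam + mu)
    return (1 - p) * p ** k

d = [1 - rho]
for n in range(30):
    tail = sum(d[n + 1 - k] * alpha(k) for k in range(1, n + 1))
    d.append((d[n] - d[0] * alpha(n) - tail) / alpha(0))

# For M/M/1 the solution should be geometric: d_n = (1 - rho) * rho**n.
err = max(abs(d[n] - (1 - rho) * rho ** n) for n in range(31))
```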
This formula is one form of the Pollaczek-Khinchin formula. In the following sections we will derive similar formulas for the sojourn time and the waiting time. By differentiating formula (6) we can determine the moments of the queue length. To find its distribution, however, we have to invert formula (6), which usually is very difficult. In the special case that $\widetilde{B}(s)$ is a quotient of polynomials in $s$, i.e., a rational function, the right-hand side of (6) can in principle be decomposed into partial fractions, the inverse transform of which can be easily determined. The service time has a rational transform for, e.g., mixtures of Erlang distributions or hyperexponential distributions. The inversion of (6) is demonstrated below for exponential and Erlang-2 service times.

Example 4.1 (M/M/1) Suppose the service time is exponentially distributed with mean $1/\mu$. Then

$$\widetilde{B}(s) = \frac{\mu}{\mu + s}.$$

Thus

$$P_{L^d}(z) = \frac{(1-\rho)\frac{\mu}{\mu + \lambda - \lambda z}(1-z)}{\frac{\mu}{\mu + \lambda - \lambda z} - z}
= \frac{(1-\rho)\mu(1-z)}{\mu - z(\mu + \lambda - \lambda z)}
= \frac{(1-\rho)\mu(1-z)}{(\mu - \lambda z)(1-z)}
= \frac{1-\rho}{1 - \rho z}.$$

Hence

$$d_n = p_n = (1-\rho)\rho^n, \qquad n = 0, 1, 2, \ldots$$

Example 4.2 (M/E$_2$/1) Suppose the service time is Erlang-2 distributed with mean $2/\mu$. Then

$$\widetilde{B}(s) = \left(\frac{\mu}{\mu + s}\right)^2,$$

so

$$P_{L^d}(z) = \frac{(1-\rho)\left(\frac{\mu}{\mu + \lambda - \lambda z}\right)^2(1-z)}{\left(\frac{\mu}{\mu + \lambda - \lambda z}\right)^2 - z}
= \frac{(1-\rho)\mu^2(1-z)}{\mu^2 - z(\mu + \lambda - \lambda z)^2}
= \frac{(1-\rho)(1-z)}{1 - z\bigl(1 + \rho(1-z)/2\bigr)^2}
= \frac{1-\rho}{1 - \rho z - \rho^2 z(1-z)/4}.$$

For $\rho = 1/3$ we then find

$$P_{L^d}(z) = \frac{2/3}{1 - z/3 - z(1-z)/36}
= \frac{24}{36 - 13z + z^2}
= \frac{24}{(4-z)(9-z)}
= \frac{24/5}{4-z} - \frac{24/5}{9-z}
= \frac{6/5}{1 - z/4} - \frac{8/15}{1 - z/9}.$$
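The partial-fraction result of Example 4.2 can be double-checked with exact rational arithmetic: for Erlang-2 service the number of arrivals per service time is negative-binomially distributed (a standard fact for Poisson arrivals during a two-phase Erlang time, not derived in the text), and the claimed $d_n$ must satisfy the equilibrium equations (3) term by term. A sketch with $\lambda = 1$ and phase rate $\mu = 6$, so that $E(B) = 1/3$ and $\rho = 1/3$:

```python
from fractions import Fraction as F

# Example 4.2 instance: lam = 1, Erlang-2 service with phase rate mu = 6.
# Arrivals during a service time are the sum of two geometrics:
#   alpha_n = (n + 1) * (mu/(lam+mu))**2 * (lam/(lam+mu))**n.
def alpha(n):
    return (n + 1) * F(6, 7) ** 2 * F(1, 7) ** n

# The inverted distribution claimed in Example 4.2.
def d(n):
    return F(6, 5) * F(1, 4) ** n - F(8, 15) * F(1, 9) ** n

# Equilibrium equations (3): d_n = sum_{k=0}^n d_{n+1-k} alpha_k + d_0 alpha_n,
# checked as exact equalities of fractions.
ok = all(
    d(n) == sum(d(n + 1 - k) * alpha(k) for k in range(n + 1)) + d(0) * alpha(n)
    for n in range(25)
)
```

Because `Fraction` arithmetic is exact, any algebra slip in the partial fractions would break the equality immediately.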
Hence,

$$d_n = p_n = \frac{6}{5}\left(\frac{1}{4}\right)^n - \frac{8}{15}\left(\frac{1}{9}\right)^n, \qquad n = 0, 1, 2, \ldots$$

Example 4.3 (M/H$_2$/1) Suppose that $\lambda = 1$ and that the service time is hyperexponentially distributed with parameters $p_1 = 1 - p_2 = 1/4$ and $\mu_1 = 1$, $\mu_2 = 2$. So the mean service time is equal to $1/4 \cdot 1 + 3/4 \cdot 1/2 = 5/8$. The Laplace-Stieltjes transform of the service time is given by

$$\widetilde{B}(s) = \frac{1}{4}\cdot\frac{1}{1+s} + \frac{3}{4}\cdot\frac{2}{2+s} = \frac{1}{4}\cdot\frac{8 + 7s}{(1+s)(2+s)}.$$

Thus we have

$$\widetilde{B}(\lambda - \lambda z) = \widetilde{B}(1 - z) = \frac{1}{4}\cdot\frac{15 - 7z}{(2-z)(3-z)}.$$

So

$$P_{L^d}(z) = \frac{\frac{3}{8}\cdot\frac{1}{4}\cdot\frac{15 - 7z}{(2-z)(3-z)}(1-z)}{\frac{1}{4}\cdot\frac{15 - 7z}{(2-z)(3-z)} - z}
= \frac{3}{8}\cdot\frac{(15 - 7z)(1-z)}{(15 - 7z) - 4z(2-z)(3-z)}
= \frac{3}{8}\cdot\frac{15 - 7z}{(3 - 2z)(5 - 2z)}
= \frac{3}{8}\cdot\frac{9/4}{3 - 2z} + \frac{3}{8}\cdot\frac{5/4}{5 - 2z}
= \frac{9/32}{1 - 2z/3} + \frac{3/32}{1 - 2z/5}.$$

Hence

$$d_n = p_n = \frac{9}{32}\left(\frac{2}{3}\right)^n + \frac{3}{32}\left(\frac{2}{5}\right)^n, \qquad n = 0, 1, 2, \ldots$$

4.3 Distribution of the sojourn time

We now turn to the calculation of how long a customer spends in the system. We will show that there is a nice relationship between the transforms of the time spent in the system and the departure distribution. Let us consider a customer arriving at the system in equilibrium. Denote the total time spent in the system by this customer by the random variable $S$, with distribution function $F_S(\cdot)$ and density $f_S(\cdot)$. The distribution of the number of customers left behind upon departure of our customer is equal to $\{d_n\}_{n=0}^{\infty}$ (since the system is in equilibrium). In a first-come first-served system it is clear that the customers left behind by our customer are precisely those who arrived during his stay in the system. Thus we have (cf. (2))

$$d_n = \int_{t=0}^{\infty} \frac{(\lambda t)^n}{n!} e^{-\lambda t} f_S(t)\, dt.$$

Hence, we find similarly to (5) that

$$P_{L^d}(z) = \widetilde{S}(\lambda - \lambda z).$$
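As a numerical cross-check of Example 4.3, one can compare the Pollaczek-Khinchin formula (6) against the power series of the inverted distribution; a small sketch (all numbers are taken from the example):

```python
# Example 4.3: lam = 1, hyperexponential service (p1 = 1/4, mu1 = 1;
# p2 = 3/4, mu2 = 2), so E(B) = 5/8 and rho = 5/8.
lam, rho = 1.0, 5.0 / 8.0

def B_lst(s):
    # Laplace-Stieltjes transform of the service time
    return 0.25 * 1.0 / (1.0 + s) + 0.75 * 2.0 / (2.0 + s)

def pk_pgf(z):
    # Pollaczek-Khinchin formula (6)
    b = B_lst(lam - lam * z)
    return (1 - rho) * b * (1 - z) / (b - z)

def d(n):
    # inverted distribution from Example 4.3
    return (9 / 32) * (2 / 3) ** n + (3 / 32) * (2 / 5) ** n

# compare the PGF with the (truncated) power series at several points
err = max(
    abs(pk_pgf(z) - sum(d(n) * z ** n for n in range(300)))
    for z in (0.0, 0.25, 0.5, 0.75, 0.9)
)
```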
Substitution of this relation into (6) yields

$$\widetilde{S}(\lambda - \lambda z) = \frac{(1-\rho)\,\widetilde{B}(\lambda - \lambda z)(1-z)}{\widetilde{B}(\lambda - \lambda z) - z}.$$

Making the change of variable $s = \lambda - \lambda z$, we finally arrive at

$$\widetilde{S}(s) = \frac{(1-\rho)\,\widetilde{B}(s)\,s}{\lambda \widetilde{B}(s) + s - \lambda}. \tag{7}$$

This formula is also known as the Pollaczek-Khinchin formula.

Example 4.4 (M/M/1) For exponential service times with mean $1/\mu$ we have

$$\widetilde{B}(s) = \frac{\mu}{\mu + s}.$$

Thus

$$\widetilde{S}(s) = \frac{(1-\rho)\frac{\mu}{\mu+s}\,s}{\lambda\frac{\mu}{\mu+s} + s - \lambda}
= \frac{(1-\rho)\mu s}{\lambda\mu + (s - \lambda)(\mu + s)}
= \frac{(1-\rho)\mu s}{(\mu - \lambda)s + s^2}
= \frac{\mu(1-\rho)}{\mu(1-\rho) + s}.$$

Hence, $S$ is exponentially distributed with parameter $\mu(1-\rho)$, i.e.,

$$F_S(t) = P(S \le t) = 1 - e^{-\mu(1-\rho)t}, \qquad t \ge 0.$$

Example 4.5 (M/E$_2$/1) Suppose that $\lambda = 1$ and that the service time is Erlang-2 distributed with mean $1/3$, so

$$\widetilde{B}(s) = \left(\frac{6}{6+s}\right)^2.$$

Then it follows that (verify)

$$F_S(t) = \frac{8}{5}\bigl(1 - e^{-3t}\bigr) - \frac{3}{5}\bigl(1 - e^{-8t}\bigr), \qquad t \ge 0.$$

Example 4.6 (M/H$_2$/1) Consider example 4.3 again. From (7) we obtain (verify)

$$F_S(t) = \frac{27}{32}\bigl(1 - e^{-t/2}\bigr) + \frac{5}{32}\bigl(1 - e^{-3t/2}\bigr), \qquad t \ge 0.$$
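The "verify" in Example 4.5 can be carried out numerically: the transform of the claimed mixture $F_S$ must coincide with formula (7). A sketch:

```python
# Example 4.5: lam = 1, Erlang-2 service with mean 1/3, so
# B_lst(s) = (6/(6+s))**2 and rho = 1/3.
lam, rho = 1.0, 1.0 / 3.0

def B_lst(s):
    return (6.0 / (6.0 + s)) ** 2

def S_pk(s):
    # sojourn-time transform from formula (7)
    return (1 - rho) * B_lst(s) * s / (lam * B_lst(s) + s - lam)

def S_claimed(s):
    # transform of F_S(t) = (8/5)(1 - e^{-3t}) - (3/5)(1 - e^{-8t}),
    # whose density is f_S(t) = (24/5)(e^{-3t} - e^{-8t})
    return (8 / 5) * 3 / (3 + s) - (3 / 5) * 8 / (8 + s)

err = max(abs(S_pk(s) - S_claimed(s)) for s in (0.5, 1.0, 2.0, 5.0))
```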
4.4 Distribution of the waiting time

We have that $S$, the time spent in the system by a customer, is the sum of $W$ (his waiting time) and $B$ (his service time), where $W$ and $B$ are independent. Since the transform of the sum of two independent random variables is the product of the transforms of these two random variables, it holds that

$$\widetilde{S}(s) = \widetilde{W}(s)\,\widetilde{B}(s). \tag{8}$$

Together with (7) it follows that

$$\widetilde{W}(s) = \frac{(1-\rho)s}{\lambda \widetilde{B}(s) + s - \lambda}, \tag{9}$$

which is the third form of the Pollaczek-Khinchin formula.

Example 4.7 (M/M/1) For exponential service times with mean $1/\mu$ we have

$$\widetilde{B}(s) = \frac{\mu}{\mu + s}.$$

Then from (9) it follows that (verify)

$$\widetilde{W}(s) = (1-\rho) + \rho\,\frac{\mu(1-\rho)}{\mu(1-\rho) + s}.$$

The inverse transform yields

$$F_W(t) = P(W \le t) = (1-\rho) + \rho\bigl(1 - e^{-\mu(1-\rho)t}\bigr), \qquad t \ge 0.$$

Hence, with probability $1-\rho$ the waiting time is zero (i.e., the system is empty on arrival) and, given that the waiting time is positive (i.e., the system is not empty on arrival), the waiting time is exponentially distributed with parameter $\mu(1-\rho)$.

4.5 Mean value approach

The mean waiting time can of course be calculated from the Laplace-Stieltjes transform (9) by differentiating and substituting $s = 0$. In this section we show that the mean waiting time can also be determined directly (i.e., without transforms) with the mean value approach. A new arriving customer first has to wait for the residual service time of the customer in service (if there is one) and then continues to wait for the servicing of all customers who were already waiting in the queue on arrival. By PASTA we know that with probability $\rho$ the server is busy on arrival. Let the random variable $R$ denote the residual service time and let $L^q$ denote the number of customers waiting in the queue. Hence,

$$E(W) = E(L^q)E(B) + \rho E(R),$$
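The structure of $F_W$ in Example 4.7 (an atom of mass $1-\rho$ at zero plus an exponential tail) can be observed directly in simulation via Lindley's recursion $W_{k+1} = \max(W_k + B_k - A_{k+1},\, 0)$, where $A_{k+1}$ is the interarrival time between customers $k$ and $k+1$. A rough Monte Carlo sketch for M/M/1 with $\lambda = 1$, $\mu = 2$ (illustrative values; the tolerances are deliberately loose because this is simulation):

```python
import random

# Lindley's recursion for the waiting times in a single-server FCFS queue:
#   W_{k+1} = max(W_k + B_k - A_{k+1}, 0).
# M/M/1 with lam = 1, mu = 2, so rho = 1/2: theory predicts
# P(W = 0) = 1 - rho = 0.5 and E(W) = rho / (mu - lam) = 0.5.
random.seed(42)
lam, mu, n = 1.0, 2.0, 200_000

w, samples = 0.0, []
for _ in range(n):
    samples.append(w)
    w = max(w + random.expovariate(mu) - random.expovariate(lam), 0.0)

samples = samples[1000:]            # drop a short warm-up period
frac_zero = sum(s == 0.0 for s in samples) / len(samples)
mean_wait = sum(samples) / len(samples)
```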
and by Little's law (applied to the queue),

$$E(L^q) = \lambda E(W).$$

So we find

$$E(W) = \frac{\rho E(R)}{1 - \rho}. \tag{10}$$

Formula (10) is commonly referred to as the Pollaczek-Khinchin mean value formula. It remains to calculate the mean residual service time. In the following section we will show that

$$E(R) = \frac{E(B^2)}{2E(B)}, \tag{11}$$

which may also be written in the form

$$E(R) = \frac{E(B^2)}{2E(B)} = \frac{\sigma_B^2 + E(B)^2}{2E(B)} = \frac{1}{2}\bigl(c_B^2 + 1\bigr)E(B). \tag{12}$$

An important observation is that the mean waiting time clearly depends only upon the first two moments of the service time (and not upon its whole distribution). So in practice it is sufficient to know the mean and standard deviation of the service time in order to estimate the mean waiting time.

4.6 Residual service time

Suppose that our customer arrives when the server is busy, and denote the total service time of the customer in service by $X$. Further let $f_X(\cdot)$ denote the density of $X$. The basic observation for finding $f_X(\cdot)$ is that it is more likely that our customer arrives in a long service time than in a short one. So the probability that $X$ is of length $x$ should be proportional to the length $x$ as well as to the frequency of such service times, which is $f_B(x)dx$. Thus we may write

$$P(x \le X \le x + dx) = f_X(x)dx = Cx f_B(x)dx,$$

where $C$ is a constant to normalize this density. So

$$C^{-1} = \int_{x=0}^{\infty} x f_B(x)\, dx = E(B).$$

Hence

$$f_X(x) = \frac{x f_B(x)}{E(B)}.$$

Given that our customer arrives in a service time of length $x$, the arrival instant will be a random point within this service time, i.e., it will be uniformly distributed within the service time interval $(0, x)$. So

$$P(t \le R \le t + dt \mid X = x) = \frac{dt}{x}, \qquad t \le x.$$
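Formulas (10) and (12) turn the mean waiting time into a two-line computation once the first two moments of the service time are known. A sketch comparing three service distributions with the same mean (using the standard squared coefficients of variation $c_B^2 = 1$ for exponential, $1/2$ for Erlang-2, and $0$ for deterministic service; the rates are illustrative):

```python
# Pollaczek-Khinchin mean value formula (10) with E(R) from (12):
#   E(W) = rho * E(R) / (1 - rho),   E(R) = (c_B^2 + 1) * E(B) / 2.
def mean_wait(lam, EB, cB2):
    rho = lam * EB
    ER = 0.5 * (cB2 + 1.0) * EB
    return rho * ER / (1.0 - rho)

lam, EB = 1.0, 0.5                 # rho = 1/2 in all three cases
w_exp = mean_wait(lam, EB, 1.0)    # M/M/1: expect 0.5
w_erl = mean_wait(lam, EB, 0.5)    # M/E2/1: expect 0.375
w_det = mean_wait(lam, EB, 0.0)    # M/D/1: expect 0.25
```

Note how less variable service (smaller $c_B^2$) halves the mean wait between the exponential and deterministic cases, exactly as (12) predicts.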
Of course, this conditional probability is zero when $t > x$. Thus we have

$$P(t \le R \le t + dt) = f_R(t)dt = \int_{x=t}^{\infty} \frac{dt}{x}\, f_X(x)\, dx = \int_{x=t}^{\infty} \frac{f_B(x)}{E(B)}\, dx\, dt = \frac{1 - F_B(t)}{E(B)}\, dt.$$

This gives the final result

$$f_R(t) = \frac{1 - F_B(t)}{E(B)},$$

from which we immediately obtain, by partial integration,

$$E(R) = \int_{t=0}^{\infty} t f_R(t)\, dt = \frac{1}{E(B)} \int_{t=0}^{\infty} t\bigl(1 - F_B(t)\bigr)\, dt = \frac{1}{E(B)} \int_{t=0}^{\infty} \frac{1}{2}t^2 f_B(t)\, dt = \frac{E(B^2)}{2E(B)}.$$

This computation can be repeated to obtain all moments of $R$, yielding

$$E(R^n) = \frac{E(B^{n+1})}{(n+1)E(B)}, \qquad n = 1, 2, \ldots$$

Example 4.8 (Erlang service times) For an Erlang-$r$ service time with mean $r/\mu$ we have

$$E(B) = \frac{r}{\mu}, \qquad \sigma^2(B) = \frac{r}{\mu^2},$$

so

$$E(B^2) = \sigma^2(B) + \bigl(E(B)\bigr)^2 = \frac{r(1+r)}{\mu^2}.$$

Hence

$$E(R) = \frac{1+r}{2\mu}.$$
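Example 4.8 can be verified numerically by integrating $t\,f_R(t) = t\bigl(1 - F_B(t)\bigr)/E(B)$ directly. A sketch using a simple trapezoidal rule for an Erlang-3 service time with $\mu = 2$ (an illustrative instance, for which the closed form gives $E(R) = (1+r)/(2\mu) = 1$):

```python
import math

# Residual service time density f_R(t) = (1 - F_B(t)) / E(B) for an
# Erlang-r service time with phase rate mu; compare the numerically
# integrated E(R) against the closed form (1 + r)/(2 mu) of Example 4.8.
r, mu = 3, 2.0
EB = r / mu

def F_B(t):
    # Erlang-r distribution function
    return 1.0 - math.exp(-mu * t) * sum((mu * t) ** k / math.factorial(k)
                                         for k in range(r))

# trapezoidal rule for E(R) = (1/E(B)) * integral_0^inf t*(1 - F_B(t)) dt
h, T = 0.001, 40.0
grid = [i * h for i in range(int(T / h) + 1)]
vals = [t * (1.0 - F_B(t)) / EB for t in grid]
ER_num = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
ER_exact = (1 + r) / (2 * mu)
```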