M/G/1 and M/G/1/K systems

Dmitri A. Moltchanov
dmitri.moltchanov@tut.fi
http://www.cs.tut.fi/kurssit/elt-53606/

OUTLINE:
- Description of the M/G/1 system;
- Methods of analysis;
- Residual life approach;
- Imbedded Markov chain approach.

1. Description of M/G/1 queuing system

M/G/1 queuing system stands for:
- single server;
- infinite number of waiting positions;
- Poisson arrival process: interarrival times are exponentially distributed;
- generally distributed service times: practically, any distribution.

What we also assume:
- first come, first served (FCFS);
- what it may affect: the PDF or pdf of the waiting time!

1.1. Arrival and service processes

Arrival process: Poisson with parameter (rate) λ, so interarrival times are exponential with mean 1/λ. The CDF, pdf and Laplace transform (LT) are:

A(t) = 1 - e^{-\lambda t}, \quad a(t) = \lambda e^{-\lambda t}, \quad A(s) = \int_0^\infty a(t) e^{-st}\,dt = \int_0^\infty \lambda e^{-\lambda t} e^{-st}\,dt = \frac{\lambda}{s + \lambda}.   (1)

Service process: service times are arbitrary with mean 1/µ; the CDF, pdf and LT are denoted by

B(t), \quad b(t), \quad B(s).   (2)
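
As a quick check of (1), the following sympy snippet (not part of the original slides) integrates the exponential density against e^{-st} and recovers λ/(s + λ):

```python
# Minimal sympy check that the LT of a(t) = lam*exp(-lam*t) is lam/(s+lam).
import sympy as sp

t, s, lam = sp.symbols('t s lam', positive=True)
a = lam * sp.exp(-lam * t)                               # interarrival pdf
A_lt = sp.integrate(a * sp.exp(-s * t), (t, 0, sp.oo))   # Laplace transform
print(sp.simplify(A_lt - lam / (s + lam)))               # -> 0
```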

2. Methods of analysis

There are a number of methods:
- residual life approach: simple to understand; only mean values can be obtained.
- transform approach based on the imbedded Markov chain: harder to deal with; distributions of the desired performance parameters can be obtained; the idea: find points at which the Markov property holds and use transforms.
- direct approach based on the imbedded Markov chain: harder to deal with; distributions of the desired performance parameters can be obtained; the idea: find points at which the Markov property holds and use direct integration.
- method of supplementary variables: the most complicated one; distributions of the desired performance parameters can be obtained; the idea: look at arbitrary points and make them Markovian.

3. Residual life approach: mean waiting time

Note the following:
- we want to find the mean queuing time (mean time spent waiting for service);
- there is no need to assume the FCFS service discipline: the service order does not influence the mean waiting time;
- we only have to require that the server does not stay idle when there are waiting customers;
- the mean values are the same as for LCFS, RANDOM, PS, ....

Figure 1: M/G/1 queuing system.

Note: we are looking for E[W_Q], the mean waiting time in the buffer.

Consider an arriving customer; there are two possibilities:
- the customer arrives to a non-empty system, or
- it arrives to an empty system.

When the customer arrives to a non-empty system it finds:
- n_q customers waiting in the buffer;
- a customer which is currently in service: it has residual service time R left to complete its ongoing service;
- the residual service time is the time required to complete the ongoing service.

When the customer arrives to an empty system:
- no customers are waiting, n_q = 0;
- the residual service time R is zero;
- the customer can enter service immediately.

Let us denote:
- R: the RV of the residual service time (in case of exponential service times R has an exponential distribution);
- E[R]: its mean.

Note the following:
- the PASTA property holds for the M/G/1 system: the distribution that an arrival sees is the same as the time-averaged one (at an arbitrary time);
- using Little's result we have for the mean waiting time in the buffer, E[W_Q]:

E[W_Q] = E[N_Q]\,E[X] + E[R],   (3)

where E[N_Q] is the mean number of packets waiting in the buffer, E[X] is the mean service time and E[R] is the mean residual service time. Using Little's result E[N_Q] = λE[W_Q] we get:

E[W_Q] = \lambda E[W_Q]\,E[X] + E[R].   (4)

Denoting ρ = λe[x] and solving for E[W Q ]: E[W Q ] = ρe[w Q ] + E[R], E[W Q ] = E[R] (1 ρ). (5) to find E[W ] we, therefore, have to find mean residual service time E[R]. in what follows we take graphical approach. r(ô) ô Figure 2: Residual service times. R jumps by the amount of service time X when a new customer enters the service; decreases until the customer completes service, duration is X; R = 0 when customer completes its service. Lecture: M/G/1 and M/G/1/K systems: part I 9

The time average of r(τ) in [0, t) is:

E[r_{[0,t)}] = \frac{1}{t}\int_0^t r(\tau)\,d\tau.   (6)

Approximate this integral by the following sum:

E[r_{[0,t)}] = \frac{1}{t}\int_0^t r(\tau)\,d\tau \approx \frac{1}{t}\sum_{i=1}^{M(t)} \frac{1}{2}X_i^2,   (7)

where X_i^2/2 is the area of a service triangle and M(t) is the number of arrivals in (0, t). Note the following:
- we make an error in the last residual service time in [0, t);
- the approximation becomes better as t → ∞ (the number of components increases!).

Multiplying and dividing by M(t) we get:

E[r_{[0,t)}] = \frac{M(t)}{t}\cdot\frac{1}{M(t)}\sum_{i=1}^{M(t)}\frac{1}{2}X_i^2.   (8)

Taking the limit of the last equation as t → ∞ we get:

\lim_{t\to\infty}\frac{M(t)}{t}\cdot\frac{1}{2}\cdot\frac{1}{M(t)}\sum_{i=1}^{M(t)} X_i^2 = \lambda\cdot\frac{1}{2}E[X^2],   (9)

since M(t)/t → λ and (1/M(t))Σ X_i^2 → E[X^2]. Therefore, we have the following expression for the mean residual service time:

E[R] = E[r] = \frac{1}{2}\lambda E[X^2].   (10)

Substituting this result in E[W_Q] = E[R]/(1 - ρ) we get:

E[W_Q] = \frac{\lambda E[X^2]}{2(1-\rho)}.   (11)

This result is known as the Pollaczek-Khinchine formula: Pollaczek and Khinchine obtained it independently of each other.
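
A minimal simulation check of (11), not part of the slides: it drives an FCFS M/G/1 queue through Lindley's recursion W_{i+1} = max(0, W_i + X_i - T_{i+1}) and compares the empirical mean waiting time with the Pollaczek-Khinchine value. The parameters (λ = 1, deterministic service time 0.8, i.e. an M/D/1 example) are purely illustrative.

```python
# Monte Carlo check of the P-K mean waiting time via Lindley's recursion.
import numpy as np

rng = np.random.default_rng(1)
lam, EX = 1.0, 0.8                  # arrival rate, mean service time (illustrative)
n = 200_000
T = rng.exponential(1 / lam, n)     # interarrival times (Poisson arrivals)
X = np.full(n, EX)                  # deterministic service: M/D/1 case
W = np.zeros(n)
for i in range(1, n):
    W[i] = max(0.0, W[i - 1] + X[i - 1] - T[i])   # waiting time of customer i

rho, EX2 = lam * EX, np.mean(X**2)
print("simulated E[W_Q]:", W[n // 2:].mean())             # discard warm-up
print("P-K formula     :", lam * EX2 / (2 * (1 - rho)))   # = 1.6 for this M/D/1
```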

3.1. Performance parameters

Mean time spent in the buffer:

E[W_Q] = \frac{\lambda E[X^2]}{2(1-\rho)}.   (12)

Mean number of customers in the buffer:

E[N_Q] = \lambda E[W_Q] = \frac{\lambda^2 E[X^2]}{2(1-\rho)}.   (13)

Mean time spent in the system:

E[W] = E[W_Q] + E[X] = \frac{\lambda E[X^2]}{2(1-\rho)} + E[X].   (14)

Mean number of customers in the system:

E[N] = \lambda E[W] = \frac{\lambda^2 E[X^2]}{2(1-\rho)} + \rho.   (15)
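
For convenience, the four mean values can be wrapped into a small helper. The function name mg1_means and the M/M/1 example values below are illustrative, not part of the slides.

```python
# Evaluate the four M/G/1 mean performance parameters from lam, E[X], E[X^2].
def mg1_means(lam, EX, EX2):
    rho = lam * EX
    assert rho < 1, "queue is unstable for rho >= 1"
    EWq = lam * EX2 / (2 * (1 - rho))    # mean waiting time in the buffer
    return {"E[W_Q]": EWq,
            "E[N_Q]": lam * EWq,         # Little's result for the buffer
            "E[W]":   EWq + EX,          # add one mean service time
            "E[N]":   lam * (EWq + EX)}  # Little's result for the system

# M/M/1 with lam = 0.5, mu = 1: E[X^2] = 2/mu^2 = 2, so E[W_Q] should be 1.
print(mg1_means(0.5, 1.0, 2.0))
```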

4. Imbedded Markov chain

Recall the M/M/-/-/- systems: let N(t) be the number of customers at time t. We know how the system evolves after t in a small interval Δt:
- an arrival may occur with probability λΔt: it is independent of the previous arrival history thanks to the memoryless property (A_1(t) = A_2(t))!
- a departure may occur with probability µΔt: it is independent of the previous departure history thanks to the memoryless property (D_1(t) = D_2(t))!
- the resulting process is actually a birth-death Markov process!

Figure 3: State of the M/M/1 queuing system: A_1(t) = A_2(t) and D_1(t) = D_2(t).

4.1. Problem with the M/G/1 queuing system

Let S_Q(t) be the number of customers at time t. Do we know how the system evolves after t in a small interval Δt?
- an arrival may occur with probability λΔt: it is independent of the previous arrival history thanks to the memoryless property (A_1(t) = A_2(t))!
- a departure may occur with some probability µ′Δt where, in general, µ′ ≠ µ: the remaining service time D_2(t) is not distributed as D_1(t)!
- this process is no longer Markovian!

Figure 4: State of the M/G/1 queuing system: A_1(t) = A_2(t), but D_1(t) ≠ D_2(t).

4.2. How to choose the state of the M/G/1 system

State of the system: for the M/G/1 queuing system take the number of customers in the system AND the time since the current service started. In this case we know how the system evolves in time:
- we know the distribution of the time till the next arrival: it is the same as the initial one;
- we know the distribution of the time till the next departure: we track the elapsed service time x and use

D(t + x \mid x) = \Pr\{T \le t + x \mid T > x\} = \frac{D(t+x) - D(x)}{1 - D(x)}.   (16)

Figure 5: State of the M/G/1 queuing system given by (S_Q(t), D(t + x | x)).

Are there any problems with such a description?
- it is too complicated to deal with;
- we would rather avoid it if a simpler description is available!

State of the system: for the M/G/1 queuing system take the number of customers in the system at special time instants d_1^+, d_2^+, ...:
- choose these instants such that we do not need to track the time since the previous service started;
- which instants? just after service completions, since D(t + x | x) = D(t) when x = 0!

Figure 6: State of the M/G/1 queuing system observed just after departures d_1^+, d_2^+, ...: at these instants x = 0, so D(t + x | x) = (D(t + 0) - D(0))/(1 - D(0)) = D(t).

4.3. Steady-state distribution as seen by departure

Do the following:
- consider the Markov chain imbedded at the instants t_i, i = 1, 2, ..., when the i-th customer departs from the system;
- system state: the number of customers at t_i, i = 1, 2, ..., denoted by n_i, i = 1, 2, ...;
- the departing customer is not included in the system state!

Figure 7: Time instants t_i^- and t_i^+ in the M/G/1 queuing system: just before the departure at t_i the state is n_i^- = k, just after it is n_i^+ = k - 1.

Denote by a_{i+1} the number of arrivals in the (i + 1)-th service time.

Figure 8: Arrivals in the (i + 1)-th service time, between the i-th departure at t_i and the (i + 1)-th departure at t_{i+1}.

Consider the time diagram of the system and distinguish between two cases:
- the i-th departure leaves a non-empty system: n_i > 0;
- the i-th departure leaves an empty system: n_i = 0.

Consider the case when n_i > 0.

Figure 9: Time diagram of the M/G/1 queuing system when n_i > 0: the i-th departure leaves a non-empty system, a_{i+1} customers arrive during the (i + 1)-th service time, and the (i + 1)-th departure may or may not leave an empty system.

In this case we have the following relation between n_i and n_{i+1}:

n_{i+1} = n_i - 1 + a_{i+1}, \qquad n_i = 1, 2, \ldots.   (17)

Consider the case when n_i = 0.

Figure 10: Time diagram of the M/G/1 queuing system when n_i = 0: the i-th departure leaves an empty system, the (i + 1)-th service starts with the first arrival after the empty period, and a_{i+1} customers arrive during that service time.

In this case we have the following relation between n_i and n_{i+1}:

n_{i+1} = a_{i+1}, \qquad n_i = 0.   (18)

Summarizing both cases (n_i > 0 and n_i = 0) we have:

n_{i+1} = n_i - 1 + a_{i+1}, \quad n_i = 1, 2, \ldots, \qquad n_{i+1} = a_{i+1}, \quad n_i = 0.   (19)

Introducing the unit step function U(n_i) we rewrite this as:

n_{i+1} = n_i - U(n_i) + a_{i+1}, \qquad n_i = 0, 1, \ldots,   (20)

where the unit step function is given by

U(n_i) = 1, \; n_i = 1, 2, \ldots, \qquad U(n_i) = 0, \; n_i = 0.   (21)

- this expression tells us how the system evolves between t_i and t_{i+1};
- recall that t_i, i = 0, 1, ..., are the imbedded points of the Markov chain.

GENERAL RELATION:

n_{i+1} = n_i - U(n_i) + a_{i+1}, \qquad n_i = 0, 1, \ldots.   (22)
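
The recursion (22) can be iterated directly. The sketch below (not part of the lecture) does so for exponential service, where a_{i+1} is simply the number of Poisson(λ) arrivals during an exponential(µ) service time; the parameter values are illustrative.

```python
# Iterate n_{i+1} = n_i - U(n_i) + a_{i+1} at departure instants (M/M/1 example).
import numpy as np

rng = np.random.default_rng(2)
lam, mu, steps = 0.7, 1.0, 100_000       # illustrative rates
n = 0
states = np.empty(steps, dtype=int)
for i in range(steps):
    service = rng.exponential(1 / mu)    # (i+1)-th service time
    a = rng.poisson(lam * service)       # arrivals during this service time
    n = n - (1 if n > 0 else 0) + a      # n_{i+1} = n_i - U(n_i) + a_{i+1}
    states[i] = n

# For M/M/1 the departure-point distribution equals the time-averaged one,
# so the empirical mean should be close to rho/(1 - rho) = 7/3.
print(states.mean(), lam / mu / (1 - lam / mu))
```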

Consider the imbedded Markov chain at steady-state (t → ∞):
- let q_{ij}, i, j = 0, 1, ..., be the transition probabilities of going from state i to state j;
- let α_k be the probability of k arrivals during a service time; then

q_{ik} = \alpha_k, \quad i = 0, \; k = 0, 1, \ldots, \qquad q_{ik} = \alpha_{k-i+1}, \quad i = 1, 2, \ldots, \; k = i-1, i, i+1, \ldots   (23)

Figure 11: Transition probabilities of the imbedded Markov chain at steady-state: from state 0 transitions go to states 0, 1, 2, ... with probabilities α_0, α_1, α_2, ..., and from each state i ≥ 1 to states i - 1, i, i + 1, ... with probabilities α_0, α_1, α_2, ....

Observe that α_k, k = 0, 1, ..., fully determine the Markov chain.

Figure 12: k arrivals in a service time (between two successive departures).

Since arrivals come from a Poisson process, we can get α_k, k = 0, 1, ..., by conditioning on the service time:

\alpha_k = \int_0^\infty \frac{(\lambda x)^k}{k!} e^{-\lambda x}\, b(x)\,dx, \qquad k = 0, 1, \ldots.   (24)
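
A small sketch of evaluating (24) numerically; the choice of Erlang-2 service, the helper name alpha and the parameter values are illustrative assumptions, not part of the slides.

```python
# Numerical evaluation of alpha_k = int_0^inf ((lam*x)^k/k!) e^{-lam*x} b(x) dx.
import math
from scipy import integrate, stats

lam, mu = 0.5, 1.0
service = stats.gamma(a=2, scale=1 / (2 * mu))     # Erlang-2 with mean 1/mu

def alpha(k):
    integrand = lambda x: (lam * x)**k / math.factorial(k) \
                          * math.exp(-lam * x) * service.pdf(x)
    val, _ = integrate.quad(integrand, 0, math.inf)
    return val

alphas = [alpha(k) for k in range(20)]
print(alphas[:4], sum(alphas))                     # probabilities, sum close to 1
print(sum(k * a for k, a in enumerate(alphas)))    # mean close to lam*E[X] = rho
```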

4.4. Transform approach vs. direct integration

The difference is twofold:
- in the way we compute α_k, k = 0, 1, ...:

\alpha_k = \int_0^\infty \frac{(\lambda x)^k}{k!} e^{-\lambda x}\, b(x)\,dx, \qquad k = 0, 1, \ldots;   (25)

the first factor is the Poisson probability of k arrivals in time x, b(x) is the pdf of the service time evaluated at x, and we integrate over all possible service times x, from 0 to ∞.
- in the way we solve the linear equations of the Markov chain: asking Matlab to do it for us, or trying to get a closed-form analytical solution.

Let us define the following:
- q_i, i = 0, 1, ...: the steady-state distribution of the number of customers in the system just after a departure.

4.5. Brute-force direct approach

What we do:
- estimate α_k directly via integration;
- use the classic way to get the steady-state solution for q_i, i = 0, 1, ...;
- this works when the capacity is limited (M/G/1/K).

In vector/matrix form:

\bar{q} Q = \bar{q}: \quad (q_0, q_1, \ldots) \begin{pmatrix} q_{00} & q_{01} & q_{02} & q_{03} & \cdots \\ q_{10} & q_{11} & q_{12} & q_{13} & \cdots \\ 0 & q_{21} & q_{22} & q_{23} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} = (q_0, q_1, \ldots), \qquad \bar{q}\,\bar{1} = 1: \quad \sum_i q_i = 1.   (26)
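
A hedged sketch of this brute-force route for a finite-capacity queue. For brevity it specializes to exponential service (M/M/1/K), where α_k has the closed form (µ/(λ+µ))(λ/(λ+µ))^k, and it lumps all blocked-arrival probability mass into the last state, as in the usual M/G/1/K imbedded-chain construction. The capacity K, the rates and the variable names are illustrative assumptions.

```python
# Build the departure-imbedded transition matrix and solve q = qQ, sum(q) = 1.
import numpy as np

lam, mu, K = 0.5, 1.0, 5                       # illustrative parameters
p = lam / (lam + mu)
alpha = lambda k: (1 - p) * p**k               # P{k arrivals during one exp(mu) service}

al = np.array([alpha(k) for k in range(K)])    # states after a departure: 0..K-1
Q = np.zeros((K, K))
for j in range(K - 1):
    Q[0, j] = al[j]                            # empty system: next state = min(a, K-1)
    for i in range(1, j + 2):
        Q[i, j] = al[j - i + 1]                # i >= 1: next state = i-1 + min(a, K-i)
Q[:, K - 1] = 1 - Q[:, :K - 1].sum(axis=1)     # blocked arrivals lumped into state K-1

# Steady state: solve q(Q - I) = 0 together with the normalization sum(q) = 1.
A = np.vstack([Q.T - np.eye(K), np.ones(K)])
b = np.zeros(K + 1); b[-1] = 1.0
q, *_ = np.linalg.lstsq(A, b, rcond=None)
print(q, q.sum())                              # departure-point distribution
```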

4.6. Other steady-state distributions

What we get: the PF (probability function) and moments of the number of customers as seen by a departure... Note: we are usually asked to get the state distribution at an arbitrary time!

Let us define the following:
- p_k, k = 0, 1, ...: time-averaged (arbitrary time) steady-state distribution of states;
- a_k, k = 0, 1, ...: steady-state distribution of states as seen by an arrival;
- d_k, k = 0, 1, ...: steady-state distribution of states as seen by a departure.

Question: how do we find p_k, k = 0, 1, ...?

The next two results give us a way to use this PF:
- Kleinrock's result (remember it as well as PASTA!): for systems in which the state can change by at most ±1 as Δt → 0, the following holds:

d_k = a_k, \qquad k = 0, 1, \ldots.   (27)

- PASTA property: the following holds when the arrival process is homogeneous Poisson:

a_k = p_k, \qquad k = 0, 1, \ldots.   (28)

- combining both, we have for the M/G/1 queuing system:

a_k = p_k = d_k, \qquad k = 0, 1, \ldots.   (29)

What this gives us:
- we obtained the steady-state distribution at any instant of time;
- q_k = a_k facilitates the derivation of the waiting time CDF.

4.7. Performance parameters

Mean time spent in the buffer:

E[W_Q] = \frac{\lambda E[X^2]}{2(1-\rho)}.   (30)

Mean number of customers in the buffer:

E[N_Q] = \lambda E[W_Q] = \frac{\lambda^2 E[X^2]}{2(1-\rho)}.   (31)

Mean time spent in the system:

E[W] = E[W_Q] + E[X] = \frac{\lambda E[X^2]}{2(1-\rho)} + E[X].   (32)

Mean number of customers in the system:

E[N] = \lambda E[W] = \frac{\lambda^2 E[X^2]}{2(1-\rho)} + \rho.   (33)

5. Transform approach

Consider n_{i+1} = n_i - U(n_i) + a_{i+1} and take means of both sides to get:

E[n_{i+1}] = E[n_i] - E[U(n_i)] + E[a_{i+1}].   (34)

Since we consider the system at steady-state, E[n_{i+1}] = E[n_i] = E[n], and therefore:

E[U(n_i)] = E[a_{i+1}].   (35)

Note the following:
- assuming that the system is empty with probability d_0, we have:

E[U(n_i)] = 1 - d_0.   (36)

- the mean number of arrivals in the (i + 1)-th service time is given by:

E[a_{i+1}] = \int_0^\infty \lambda t\, b(t)\,dt = \lambda E[X] = \rho.   (37)

Therefore, at steady-state we have:

d_0 = 1 - \rho.   (38)

Introduce the PGFs of n_i and n_{i+1}:

P_i(z) = E[z^{n_i}] = \sum_{k=0}^\infty z^k \Pr\{n_i = k\}, \qquad P_{i+1}(z) = E[z^{n_{i+1}}] = \sum_{k=0}^\infty z^k \Pr\{n_{i+1} = k\}.   (39)

Considering n_{i+1} = n_i - U(n_i) + a_{i+1} as the sum of the independent RVs n_i - U(n_i) and a_{i+1}:

P_{i+1}(z) = E[z^{n_i - U(n_i)}]\, E[z^{a_{i+1}}].   (40)

To simplify it, consider the following:
- since we consider the system at steady-state, we can drop the subscripts i and write:

P(z) = E[z^{n - U(n)}]\, E[z^{a}].   (41)

- defining the PGF of the number of customers arriving in a service time by A(z) = E[z^a], we have:

A(z) = \int_0^\infty E[z^a \mid X = t]\, b(t)\,dt = \int_0^\infty \sum_{n=0}^\infty z^n \frac{(\lambda t)^n}{n!} e^{-\lambda t}\, b(t)\,dt = \int_0^\infty e^{-\lambda t(1-z)}\, b(t)\,dt.   (42)

Rewrite the last result as:

A(z) = \int_0^\infty e^{-\lambda t(1-z)}\, b(t)\,dt = \int_0^\infty b(t)\, e^{-(\lambda - \lambda z)t}\,dt.   (43)

Recall the definition of the Laplace transform:

F(s) = \int_0^\infty f(t)\, e^{-st}\,dt.   (44)

Thus, we have for the PGF of the number of customers arriving in a service time:

A(z) = B(\lambda - \lambda z),   (45)

where B(·) is the LT of b(t). Now we can get the moments of the number of arrivals in a service time:

A'(1) = -\lambda B'(0) = \lambda E[X] = \rho, \qquad A''(1) = \lambda^2 B''(0) = \lambda^2 E[X^2].   (46)

Here we used the moment generating properties of the PGF and of the Laplace transform: B'(0) = -E[X] and B''(0) = E[X^2].
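
A small sympy check of (45)-(46), assuming exponential service (so that B(s) = µ/(s + µ), an illustrative choice not made in the slides): A(z) = B(λ - λz) should give A'(1) = ρ and A''(1) = λ²E[X²] = 2λ²/µ².

```python
# Verify the moment relations of A(z) = B(lam - lam*z) for exp(mu) service.
import sympy as sp

z, s, lam, mu = sp.symbols('z s lam mu', positive=True)
B = mu / (s + mu)                        # LT of b(t) for exp(mu) service
A = B.subs(s, lam - lam * z)             # PGF of arrivals per service time

print(sp.simplify(sp.diff(A, z).subs(z, 1)))      # -> lam/mu = rho
print(sp.simplify(sp.diff(A, z, 2).subs(z, 1)))   # -> 2*lam**2/mu**2 = lam^2 E[X^2]
```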

Let us get back to P(z) = E[z^{n - U(n)}] E[z^a]:

P(z) = A(z)\,E[z^{n-U(n)}] = A(z)\sum_{k=0}^\infty z^{k-U(k)}\Pr\{n = k\} = A(z)\Big(z^0 d_0 + \sum_{k=1}^\infty z^{k-1} d_k\Big) = A(z)\Big(d_0 + \frac{1}{z}\sum_{k=0}^\infty z^k d_k - \frac{1}{z}d_0\Big) = A(z)\Big(\frac{1}{z}P(z) - \frac{1}{z}d_0(1-z)\Big).   (47)

Using d_0 = 1 - ρ and solving for P(z), we get the PGF of the number of customers as seen by a departure:

P(z) = \frac{(1-\rho)(1-z)A(z)}{A(z) - z},   (48)

where A(z) = B(λ - λz). Substituting A(z) = B(λ - λz), we get the Pollaczek-Khinchine transform equation:

P(z) = \frac{(1-\rho)(1-z)B(\lambda - \lambda z)}{B(\lambda - \lambda z) - z}.   (49)
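
As a sanity check (again assuming exponential service, an illustrative special case), the P-K transform (49) should collapse to the familiar M/M/1 PGF (1 - ρ)/(1 - ρz), i.e. the geometric distribution p_k = (1 - ρ)ρ^k:

```python
# Check that the P-K transform reduces to the M/M/1 PGF for exp(mu) service.
import sympy as sp

z, s, lam, mu = sp.symbols('z s lam mu', positive=True)
rho = lam / mu
Bz = (mu / (s + mu)).subs(s, lam - lam * z)          # B(lam - lam*z), exp(mu) service
P = (1 - rho) * (1 - z) * Bz / (Bz - z)              # P-K transform, eq. (49)

print(sp.simplify(P - (1 - rho) / (1 - rho * z)))    # -> 0, the M/M/1 PGF
# Coefficients of z^k are the probabilities (1 - rho)*rho^k:
print(sp.series(P.subs({lam: sp.Rational(1, 2), mu: 1}), z, 0, 4))
```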

5.1. Mean number of customers as seen by departure

Using the moment generating properties of the PGF and the LT, we can get the moments of the number of customers as seen by a departure. To get the mean, rewrite (48) as follows:

P(z)(A(z) - z) = (1-\rho)(1-z)A(z).   (50)

Taking two derivatives we get:

P'(z)(A(z) - z) + P(z)(A'(z) - 1) = (1-\rho)(1-z)A'(z) - (1-\rho)A(z),
P''(z)(A(z) - z) + 2P'(z)(A'(z) - 1) + P(z)A''(z) = (1-\rho)(1-z)A''(z) - 2(1-\rho)A'(z).   (51)

Let z = 1 and use A(1) = 1, P(1) = 1, A'(1) = ρ and A''(1) = λ²E[X²] to get:

-2P'(1)(1-\rho) + \lambda^2 E[X^2] = -2\rho(1-\rho).   (52)

Solving for P'(1), the mean number of customers as seen by a departing customer is:

E[N] = P'(1) = \frac{\lambda^2 E[X^2]}{2(1-\rho)} + \rho.   (53)
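
The derivative argument can also be checked symbolically. The sketch below again assumes exponential service (so E[X²] = 2/µ²) and cancels the common (1 - z) factor before differentiating, since the raw expression in (49) is 0/0 at z = 1; all names and parameters are illustrative.

```python
# Check that P'(1) equals lam^2*E[X^2]/(2(1-rho)) + rho for exp(mu) service.
import sympy as sp

z, s, lam, mu = sp.symbols('z s lam mu', positive=True)
rho = lam / mu
Bz = (mu / (s + mu)).subs(s, lam - lam * z)           # exp(mu) service assumed
P = (1 - rho) * (1 - z) * Bz / (Bz - z)               # P-K transform, eq. (49)

Pc = sp.cancel(P)                                     # cancel the (1 - z) factor
EN = sp.diff(Pc, z).subs(z, 1)                        # P'(1) = E[N] at departures
EX2 = 2 / mu**2                                       # second moment of exp(mu)
print(sp.simplify(EN - (lam**2 * EX2 / (2 * (1 - rho)) + rho)))   # -> 0
```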