Stability of the two queue system

Iain M. MacPhee and Lisa J. Müller
University of Durham, Department of Mathematical Science, Durham, DH1 3LE, UK
(e-mail: i.m.macphee@durham.ac.uk, l.j.muller@durham.ac.uk)

Abstract. We describe ergodicity and transience conditions for a general two queue system with multiple service regimes, a dedicated traffic stream for each queue, a further stream which can be routed to either queue, and where completed jobs can be fed back into the queues. There is only one class of jobs but the service times and feedback probabilities depend upon the configuration of the servers. Several different levels of control of the service regimes are considered. We use the semi-martingale methods described in [Fayolle et al., 1995] and our results generalise those of [Kurkova, 2001].

Keywords: controlled queue systems.

1 Introduction

In this paper we consider a system which has two queues with servers that can be configured in several ways. Our main aim is to identify conditions under which we can give a queue length dependent policy for choosing the service configurations that guarantees the stability of the system.

The queues have independent Poisson arrival streams with rates λ_i, i = 1, 2, and there is an independent Poisson arrival stream with rate λ of jobs that can be sent to either queue (we will call this the routable stream). We assume all jobs are of the same class and are served in the order they join their queues, but their service times depend upon their queue and the service scheme in force while they are being served. Under server configuration k, at most one job is in service at each non-empty queue and all jobs in queue i have independent, exponentially distributed service times with mean µ_{ki}^{-1}, i = 1, 2 (so the server configuration k and the destination of a routable job determine its service distribution). We label the server configurations by k = 1, ..., K and the queue to which the routable stream is directed by j = 1, 2, so that the finite set R of overall management regimes has members η = (k, j). In addition the system has Jackson-type feedback with probabilities that depend upon the current management regime. Any job that completes service at queue i under regime η independently enters queue i′ with probability p^η_{ii′}, i′ = 1, 2, or leaves the system with probability p^η_{i0} ≡ 1 − p^η_{i1} − p^η_{i2} ≥ 0. We will assume throughout this paper that we can instantaneously switch between different management regimes at the instants just after changes to queue lengths.
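
To fix the notation, the following minimal sketch (plain Python; the names TwoQueueModel, regimes and p_leave are illustrative and not from the paper) packages the primitives λ, λ_1, λ_2, the service rates µ_{ki} and the feedback probabilities p^η_{ii′}, and enumerates the management regimes η = (k, j). The rates in the usage example echo the two-server Example below.

    # A minimal sketch of the model primitives (hypothetical names, not from the paper).
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Regime = Tuple[int, int]          # eta = (k, j): server configuration k, routing choice j

    @dataclass
    class TwoQueueModel:
        lam: float                    # rate of the routable stream
        lam1: float                   # dedicated arrival rate for queue 1
        lam2: float                   # dedicated arrival rate for queue 2
        mu: Dict[int, Tuple[float, float]]              # mu[k] = (mu_{k1}, mu_{k2})
        p: Dict[Regime, Dict[Tuple[int, int], float]]   # p[eta][(i, i2)] = p^eta_{i i2}

        def regimes(self) -> List[Regime]:
            """All management regimes eta = (k, j), j = 1, 2."""
            return [(k, j) for k in self.mu for j in (1, 2)]

        def p_leave(self, eta: Regime, i: int) -> float:
            """p^eta_{i0} = 1 - p^eta_{i1} - p^eta_{i2} (must be >= 0)."""
            return 1.0 - self.p[eta][(i, 1)] - self.p[eta][(i, 2)]

    # Usage: the four server configurations mirror the two-server Example below,
    # with mu_1 = 1.0, mu_2 = 0.8 and additive rates; no feedback in this instance.
    zero_fb = {(1, 1): 0.0, (1, 2): 0.0, (2, 1): 0.0, (2, 2): 0.0}
    model = TwoQueueModel(lam=0.4, lam1=0.3, lam2=0.3,
                          mu={1: (1.0, 0.8), 2: (0.8, 1.0), 3: (1.8, 0.0), 4: (0.0, 1.8)},
                          p={(k, j): dict(zero_fb) for k in (1, 2, 3, 4) for j in (1, 2)})
    print(model.regimes())
    print(model.p_leave((1, 1), 1))   # 1.0: with no feedback every completed job leaves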

Example. The model described above includes as a special case the model with two servers, where server i can be used to process jobs at either queue, which it does at rate µ_i. This gives four service regimes s_1s_2 = 12, 21, 11 and 22 (i.e. server 1 at queue 1 and server 2 at queue 2; server 2 at queue 1 and server 1 at queue 2; both servers at queue 1; both servers at queue 2). Given that the service rates are additive we get the rate pairs (µ_1, µ_2), (µ_2, µ_1), (µ_1 + µ_2, 0) and (0, µ_1 + µ_2) respectively.

The question we consider is whether, for such a system with a given set of parameters, the management regime can be changed from time to time to ensure that the queue lengths remain stable, or whether the queue lengths must grow indefinitely regardless of how the system is managed. Similar systems but with fixed servers have been studied in the past using transform methods, often under strong symmetry assumptions on parameters; see [Feng et al., 2002] and [Foley and McDonald, 2001], who give stability conditions for an n-dimensional join-the-shortest-queue (JSQ) model and carry out the large deviations analysis of system occupancy for the two dimensional system.

We define the model we consider in Section 2 and state our results in Section 3. We omit the proofs here so that we can describe the model in full detail. The proofs use the semi-martingale methods described in [Fayolle et al., 1995] and can be found in [MacPhee and Müller]. Our results generalise those of [Kurkova, 2001] as we consider multiple service regimes and do not require any symmetry.

2 Definitions

We now define the queueing system, its control, and the classes of control policies that we wish to investigate.

2.1 Events, blocks and control policies

As the Lyapunov function results we use are described in terms of discrete processes, it is convenient to study a discrete time process which we now describe. To simplify comparison of the process dynamics under different management regimes we uniformise the continuous time jump process, following Serfozo [Serfozo, 1979], by choosing a constant ρ ≥ max_k {λ + λ_1 + λ_2 + µ_{k1} + µ_{k2}} and introducing a fictitious bell event which has exponential inter-event times with rate ρ − (λ + λ_1 + λ_2 + µ_{k1} + µ_{k2}) at any given queue lengths when regime (k, j) is used (so the total event rate has the same value ρ in all states under all regimes). We now consider the uniformised discrete time process Ξ on state space Z²_{≥0} ≡ {(x, y) ∈ Z² : x ≥ 0, y ≥ 0}, obtained by observing the queue lengths at bell events, at arrival times of new jobs and at service completions and consequent re-entries to the queues. We will use α = (x, y) ∈ Z²_{≥0} to denote a typical state vector for Ξ.
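
The uniformisation step can be made concrete. The sketch below (hypothetical name uniformised_probabilities, an assumption of this note) returns, for a given regime (k, j) and uniformisation constant ρ, the probabilities with which one step of the uniformised chain Ξ corresponds to an arrival at queue 1 or 2, a service event at queue 1 or 2, or the fictitious bell event. The example call uses the two-server regime with both servers at queue 1.

    # A minimal sketch of the uniformisation described above (hypothetical helper name).
    def uniformised_probabilities(lam, lam1, lam2, mu_k, j, rho):
        """Return the event probabilities for one step of the uniformised chain.

        mu_k = (mu_k1, mu_k2) are the service rates under configuration k,
        j in {1, 2} is the queue receiving the routable stream, and
        rho >= lam + lam1 + lam2 + mu_k1 + mu_k2 is the uniformisation constant.
        """
        mu_k1, mu_k2 = mu_k
        total = lam + lam1 + lam2 + mu_k1 + mu_k2
        assert rho >= total, "rho must dominate the total event rate"
        return {
            "arrival_q1": (lam1 + (lam if j == 1 else 0.0)) / rho,
            "arrival_q2": (lam2 + (lam if j == 2 else 0.0)) / rho,
            "service_q1": mu_k1 / rho,    # outcome (leave / feed back) then follows p^eta_{1.}
            "service_q2": mu_k2 / rho,
            "bell": (rho - total) / rho,  # fictitious event, queue lengths unchanged
        }

    # Example: the two-server special case with both servers at queue 1 (s1s2 = 11)
    # has service rate pair (mu_1 + mu_2, 0).
    print(uniformised_probabilities(lam=1.0, lam1=0.5, lam2=0.5, mu_k=(2.0 + 1.5, 0.0), j=1, rho=6.0))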

It is also necessary to define the policies by which the management regimes at each state are selected. Our main interest will be in policies which choose the same regimes over large sets of states, specifically cone shaped blocks, for which we need some notation. Let e_i denote the unit vector in the axis i direction and for non-zero z ∈ R² let |z| denote the length of z and arg_u(z) the argument relative to a non-zero vector u ∈ R² (the angle anticlockwise from u to z). For any non-zero u, v ∈ R² let l(u) = {z ∈ R² : z = tu, t > 0} denote the half-line in the direction u and

    C(u, v) ≡ {z ∈ R² : |z| > 0, 0 < arg_u(z) < arg_u(v)}    (1)

the cone swept anticlockwise from direction u to direction v. The closure of such a cone will be denoted C̄(u, v). We give specific labels to the positive parts of the axes, A_i ≡ l(e_i), as we will consider them as blocks subsequently. It will also be convenient to define two special versions of the argument, one relative to each axis A_i. Let R : R² → R² be reflection in the line z_1 = z_2, i.e. R(z_1, z_2) = (z_2, z_1), and define

    arg_1(z) = arg_{e_1}(z),    arg_2(z) = arg_1(R(z))    (2)

so arg_2(z) is the angle measured clockwise from e_2 to z.

A policy for controlling this discrete event system is a sequence Π = {π_n : n ≥ 0} of transition probabilities π_n from H_n, the process history at time n, to R, the set of regimes, i.e. for any history α_0, η_0, ..., α_{n−1}, η_{n−1}, α_n the next action is selected according to the distribution π_n(α_0, η_0, ..., α_n, ·). This definition includes non-stationary, non-Markov randomised policies, though they offer no performance benefits when applied to stationary Markov processes, see e.g. Blackwell [Blackwell, 1965]. Let ξ_i(n) denote the length of queue i at time n and ξ(n) = (ξ_1(n), ξ_2(n)). A policy Π along with an initial distribution for the queues determines a stochastic process (Ξ, Π) = {(ξ(n), η_n) : n ≥ 0} which will only be Markov when π_n(α_0, η_0, ..., α_n, ·) is a distribution dependent only on α_n.

A policy Π which selects an action a(α) with probability 1 whenever the system state is α, where a is a map from Z²_{≥0} to R, is a deterministic stationary policy. Our main interest is in a class of these that we call block pure policies, denoted Π_b, where the state space Z²_{≥0} is partitioned into a small number of disjoint blocks, always lines or cones, such that a is constant on each block C(u, v). We also investigate a generalisation of these, block randomised policies, denoted Π_r, where for each block the distribution π^r_n(α, ·) is the same at every state α in the block (so the Π_b are degenerate cases of the Π_r). With such policies the process (Ξ, Π_r) is Markov due to our assumptions about Poisson arrivals and exponential service times.
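
The block geometry above reduces to a few angle computations. The sketch below (hypothetical helper names; the four-block partition into A_1, A_2, C_1, C_2 anticipates the one used in Section 3.3, with d given as a direction vector) computes arg_u and decides which block a state belongs to, which is all a block pure or block randomised policy needs in order to look up its regime or regime distribution.

    # A minimal sketch of the block geometry (hypothetical helper names).
    import math

    def arg_u(u, z):
        """Anticlockwise angle from the non-zero vector u to the non-zero vector z, in [0, 2*pi)."""
        angle = math.atan2(z[1], z[0]) - math.atan2(u[1], u[0])
        return angle % (2.0 * math.pi)

    def in_cone(z, u, v):
        """True if z lies in the open cone C(u, v) swept anticlockwise from u to v."""
        if z == (0, 0):
            return False
        return 0.0 < arg_u(u, z) < arg_u(u, v)

    def block_of(state, d):
        """Classify a state into A1, A2, C1 = C(e1, d) + l(d) or C2 = C(d, e2).

        d is the direction vector of the switching line l(d) used in Section 3.3.
        """
        x, y = state
        if x == 0 and y == 0:
            return "origin"
        if y == 0 and x > 0:
            return "A1"
        if x == 0 and y > 0:
            return "A2"
        return "C2" if in_cone((x, y), d, (0.0, 1.0)) else "C1"

    # Example with the diagonal switching line l(d), d = (1, 1):
    print(block_of((3, 1), d=(1.0, 1.0)), block_of((1, 3), d=(1.0, 1.0)))  # C1 C2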

2.2 The queues and their mean drifts

The process (Ξ, Π) has bounded jumps, specifically ±e_i and ±(e_2 − e_1), and so all moments of its jump distributions exist under any policy, but in this two dimensional case our results can be stated in terms of their first moments. For each regime η let

    M^η = E(ξ(n + 1) − ξ(n) | H_n, π_n = η)    (3)

denote the mean drift vector for any period when the policy selects regime η. We have, for k = 1, ..., K and at states α ∈ Z²_+ ≡ {(x, y) ∈ Z² : x > 0, y > 0},

    M^η = (M^η_1, M^η_2)
        = ρ^{-1}(λ + λ_1 + µ_{k2} p^η_{21} − µ_{k1} p^η_{10},  λ_2 + µ_{k1} p^η_{12} − µ_{k2} p^η_{20}),      η = (k, 1),
        = ρ^{-1}(λ_1 + µ_{k2} p^η_{21} − µ_{k1} p^η_{10},  λ + λ_2 + µ_{k1} p^η_{12} − µ_{k2} p^η_{20}),      η = (k, 2).    (4)

It is convenient to assume that when queue i is empty the policy selects a regime η chosen from among those with µ_{ki} = 0 (this is equivalent to having non-idling servers). This ensures that equation (4) is also correct for histories ending in states α ∈ A_1 ≡ {(x, 0) : x > 0} and α ∈ A_2 ≡ {(0, y) : y > 0} for such service regimes. We will sometimes use the notation M′ and M″ to denote mean drifts for the system under appropriate regimes for A_1 and A_2 respectively.

Now consider any policy Π allowing randomisation. The mean drift of our process Ξ under Π when the current state is α ∈ Z²_+ is a 2-dimensional vector M^Π lying in the convex set

    M = { Σ_η p_η M^η : p_η ∈ [0, 1] and Σ_η p_η = 1 },    (5)

the convex hull of the regime mean drifts. The extreme points of M are a subset of the regime mean drifts M^η. When three or more of the M^η are distinct it may happen that the two-dimensional interior Int_2(M) ≡ {z ∈ M : B(z, ε) ⊂ M for some ε > 0}, where for z ∈ R², B(z, ε) = {z′ ∈ R² : |z′ − z| < ε}, is non-empty.
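
Equations (4) and (5) translate directly into code. The sketch below (hypothetical names drift and mixture) transcribes the drift formula (4) for a regime η = (k, j) and forms convex combinations of regime drifts, i.e. points of the set M in (5).

    # A minimal sketch of equations (4) and (5) (hypothetical helper names).
    def drift(eta, lam, lam1, lam2, mu_k, p, rho):
        """Mean drift vector M^eta of equation (4) for regime eta = (k, j).

        mu_k = (mu_k1, mu_k2); p[(i, i2)] = p^eta_{i i2}; p^eta_{i0} is implicit
        as 1 - p^eta_{i1} - p^eta_{i2}.
        """
        k, j = eta   # configuration k is carried implicitly by mu_k
        mu_k1, mu_k2 = mu_k
        p10 = 1.0 - p[(1, 1)] - p[(1, 2)]
        p20 = 1.0 - p[(2, 1)] - p[(2, 2)]
        m1 = lam1 + (lam if j == 1 else 0.0) + mu_k2 * p[(2, 1)] - mu_k1 * p10
        m2 = lam2 + (lam if j == 2 else 0.0) + mu_k1 * p[(1, 2)] - mu_k2 * p20
        return (m1 / rho, m2 / rho)

    def mixture(drifts, weights):
        """A point of the convex set M of equation (5): sum_eta p_eta M^eta."""
        assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
        return (sum(w * d[0] for w, d in zip(weights, drifts)),
                sum(w * d[1] for w, d in zip(weights, drifts)))

    # Example: no feedback, two-server regimes "12" and "21" with the routable stream at queue 1.
    p0 = {(1, 1): 0.0, (1, 2): 0.0, (2, 1): 0.0, (2, 2): 0.0}
    m_a = drift((1, 1), lam=0.4, lam1=0.3, lam2=0.3, mu_k=(1.0, 0.8), p=p0, rho=3.0)
    m_b = drift((2, 1), lam=0.4, lam1=0.3, lam2=0.3, mu_k=(0.8, 1.0), p=p0, rho=3.0)
    print(m_a, m_b, mixture([m_a, m_b], [0.5, 0.5]))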

3 Classification of the system

The behaviour of the system depends on whether the convex set α + M can be separated from the origin by a line through α. Any set of parameters for the process (Ξ, Π) falls into one of the following four exclusive cases:

C1: (0, 0) = 0 ∉ M and there exists a state α ∈ Z²_+ and a line

    L_v(α) ≡ {β ∈ R² : v^T(β − α) = 0}    (6)

with normal vector v through α separating α + M from the origin 0. If there exists one such α ∈ Z²_+ then there is an infinite cone of such α.

C2: 0 ∉ M and there exists no α ∈ Z²_+ and line L_v(α) which separates α + M from 0.

C3: Int_2(M) is non-empty, 0 ∈ M and there exists no α ∈ Z²_+, v ∈ R² such that the line L_v(α) separates α + Int_2(M) from the origin.

C4: 0 is a boundary point of M and either Int_2(M) = ∅ or the tangent line to α + M through α separates the origin from α + Int_2(M) for each α in a cone within Z²_+.

See Figure 1 for examples of C1-C4.

Fig. 1. From top left: C1, C2, and below C3, C4.

Note: the cases in C4 are critical but we will say very little about them in this paper.

We start stating our results by giving sufficient conditions for instability or stability of the system under fully randomised controls in cases C1 and C2 respectively. Next we show that in case C3 there is always a block pure policy that makes (Ξ, Π_b) ergodic and we also show that randomisation allows the use of fewer blocks. Finally in this section we consider some situations with even lower levels of control.
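
Part of the C1-C4 classification can be checked numerically. The sketch below (hypothetical name classify_origin, using a direction-grid approximation rather than an exact convex-hull routine) reports whether the origin lies outside M, in its two-dimensional interior, or on its boundary. It deliberately does not test the separating-line condition involving α that distinguishes C1 from C2, nor the tangent-line condition in C4.

    # An approximate check of where 0 sits relative to the hull M (hypothetical name).
    # It scans directions v on a fine grid: 0 lies outside M iff some v has v . M^eta > 0
    # for every regime drift; 0 lies in the interior iff every v has some drift with v . M^eta < 0.
    import math

    def classify_origin(drifts, n_dirs=3600, tol=1e-9):
        """Return 'outside M' (C1 or C2), 'interior of M' (C3) or 'boundary of M' (C4-type)."""
        separated = False     # some direction has all drifts strictly on the positive side
        interior = True       # every direction has a drift strictly on the negative side
        for i in range(n_dirs):
            theta = 2.0 * math.pi * i / n_dirs
            v = (math.cos(theta), math.sin(theta))
            dots = [v[0] * m[0] + v[1] * m[1] for m in drifts]
            if min(dots) > tol:
                separated = True
            if min(dots) > -tol:
                interior = False
        if separated:
            return "outside M (case C1 or C2)"
        return "interior of M (case C3)" if interior else "boundary of M (case C4-type)"

    # Example: three regime drifts whose hull contains the origin strictly.
    print(classify_origin([(-0.3, 0.2), (0.25, -0.3), (0.1, 0.15)]))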

3.1 Fully randomised controls

The following two results apply when even the most general policy Π is used to control the queueing system. They imply that in cases C1 and C2 the control policy used does not affect the stability or otherwise of the process.

Theorem 1. If 0 ∉ M and there exists an α ∈ Z²_+ and v ∈ R² such that the line L_v(α), see (6), separates α + M from the origin 0, then the process (Ξ, Π) is unstable, in the sense that the total number of queued jobs almost surely goes to ∞ linearly in time, for any policy Π.

The conditions of the theorem can be pictured in an alternative way. Specifically, there exists a state α ∈ Z²_+ such that the line segment from 0 to α does not intersect α + M (it follows that if there is any such pair α, v then there is an infinite cone of points α′ such that L_v(α′) separates 0 and α′ + M).

Theorem 2. If 0 ∉ M and there is no α ∈ Z²_+, v ∈ R² such that L_v(α) separates α + M from 0, then (Ξ, Π) is stable, in the sense that the total number of queued jobs remains bounded in mean, under every policy Π.

The alternative description of the conditions here is that for every α ∈ Z²_+ the line segment joining 0 to α intersects α + M. From this it follows that there is some v ∈ R²_+ such that 0 and α + M are in the same half-space created by L_v(α).

3.2 Block controls

In case C3 it does make a difference which policy is used for running the system. In fact we can show that block pure policies Π_b with at most a handful of blocks are adequate to ensure stability of the process. Under policies of this type the process (Ξ, Π_b) is Markov, so we can now talk about ergodicity and transience.

Theorem 3. If 0 ∈ Int_2(M) then there is a block pure policy Π_b with at most five blocks such that the Markov chain (Ξ, Π_b) is ergodic.

Theorems 2 and 3 imply the following result.

Corollary 1. If 0 is a boundary point of M, Int_2(M) is non-empty and there exists no α ∈ Z²_+, v ∈ R² such that L_v(α) separates α + Int_2(M) from 0, then there is a policy Π_b with at most three blocks such that (Ξ, Π_b) is ergodic.

In Theorem 3 the number of blocks required to achieve ergodicity can be reduced if block randomised policies Π_r are used.

Corollary 2. If 0 ∈ Int_2(M) and a block randomised policy Π_r is used then at most four blocks are needed to ensure that (Ξ, Π_r) is ergodic.
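
Theorem 3 guarantees that five blocks suffice, but the construction used in its proof is not reproduced here. Purely as an illustration of what a block pure policy looks like, the sketch below (hypothetical names, and a heuristic regime choice rather than the proof's construction) splits the interior of the quadrant into a few cones and picks, in each cone, the regime whose drift points most strongly back along the cone's centre direction; together with the two axis blocks this echoes the "at most five blocks" of the theorem.

    # An illustrative (not the paper's) construction of a block pure policy on the
    # interior cones, assuming hypothetical names.
    import math

    def block_pure_policy(regime_drifts, n_cones=3):
        """regime_drifts: dict eta -> (M^eta_1, M^eta_2).  Returns a list of
        (angle_low, angle_high, eta) triples covering the angles (0, pi/2) of the quadrant."""
        blocks = []
        for b in range(n_cones):
            lo = (math.pi / 2) * b / n_cones
            hi = (math.pi / 2) * (b + 1) / n_cones
            centre = (lo + hi) / 2.0
            u = (math.cos(centre), math.sin(centre))
            # pick the regime whose drift has the most negative component along u
            eta = min(regime_drifts,
                      key=lambda e: u[0] * regime_drifts[e][0] + u[1] * regime_drifts[e][1])
            blocks.append((lo, hi, eta))
        return blocks

    drifts = {(1, 1): (-0.3, 0.2), (1, 2): (0.25, -0.3), (2, 1): (-0.1, -0.05)}
    for lo, hi, eta in block_pure_policy(drifts):
        print(f"angles ({lo:.2f}, {hi:.2f}) rad -> regime {eta}")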

Example. [Foley and McDonald, 2001] consider a model which has fixed servers (their service rate drops to 0 when their queues are empty), no feedback and is strictly JSQ. Their stability criterion for N = 2 queues is that ρ_max < 1 where ρ_max = max{λ_1/µ_1, λ_2/µ_2, (λ + λ_1 + λ_2)/(µ_1 + µ_2)}. For the policy which sends the routable stream to the queue with minimum weighted work, our model has two regimes depending upon where the routable traffic is sent, and these have drift vectors M^1 = ρ^{-1}(λ + λ_1 − µ_1, λ_2 − µ_2) and M^2 = ρ^{-1}(λ_1 − µ_1, λ + λ_2 − µ_2). As ρ(M^2 − M^1) = (−λ, λ) ⊥ (1, 1), the line segment joining these two drift vectors lies on the line z_1 + z_2 = (λ + λ_1 − µ_1 + λ_2 − µ_2)/ρ, which can only intersect the negative quadrant R²_− when ρ_max < 1. The case ρ_max = 1 is critical and we see that our conditions are equivalent to those of Foley and McDonald in this case.

The simplicity of the classification based on the convex hull M confirms that this geometrical approach, combined with the Lyapunov function method, is a natural technique for studying stability of multi-queue systems, though of course large deviations results like those of [Foley and McDonald, 2001] are not achievable this way.

3.3 Low levels of control

The results of [Fayolle et al., 1995] can also be used to classify the process for any control policy that is block homogeneous for any small number of blocks. It soon becomes evident to anybody who attempts this that there are many ways for the process to remain stable and many more for it to be transient. To illustrate this we now spell out the possible behaviours of the queueing system with four blocks, specifically the axes A_1, A_2 and two cones, C_1 = C(e_1, d) ∪ l(d) and C_2 = C(d, e_2) (see (1) for this notation), that partition Z²_+. The two cones are not assumed to be symmetric, i.e. the vector d ∈ R²_+ need not be parallel to (1, 1). We assume that in each of the A_i and C_i, i = 1, 2, a single management regime is used (different blocks may have a common regime) with mean drift vectors M^1, M^2 in blocks C_1, C_2 respectively and M′, M″ in blocks A_1, A_2 respectively. This assumption about, and notation for, the regimes on the A_i we will use in all further sections, but the C_i are specific to this section.

We first label the M^i according to the angles ϕ_i they make relative to the axes A_i, i = 1, 2. For each M^i angle ϕ_i = 0 is in the direction of A_i and ϕ_1 increases clockwise while ϕ_2 increases anticlockwise, i.e. ϕ_i = 2π − arg_i(M^i). We label the directions of the M^i as A when 0 < ϕ_i < π, B when π ≤ ϕ_i ≤ 3π/2 and D when 3π/2 < ϕ_i ≤ 2π. The various cases of this model are labelled with label of M^1 / label of M^2, so a label B/A means M^1 has a positive y and a negative x component and M^2 has x component negative with y of either sign. Figure 2 illustrates this labelling scheme for the directions of the M^i from origins α_i.
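
The ϕ_i angles and the A/B/D labels can be computed directly from the drift components. The sketch below (hypothetical helper names arg1, arg2 and label) implements the definitions just given; the refined A^+/A^−/A^0 labels of (7) below also need the axis drifts M′, M″ and are not computed here.

    # A minimal sketch of the phi_i angles and A/B/D labels (hypothetical helper names).
    import math

    def arg1(z):
        """Anticlockwise angle from e_1 to z, in [0, 2*pi)."""
        return math.atan2(z[1], z[0]) % (2.0 * math.pi)

    def arg2(z):
        """Clockwise angle from e_2 to z: arg_2(z) = arg_1(R(z)) with R(z1, z2) = (z2, z1)."""
        return arg1((z[1], z[0]))

    def label(M, i):
        """A/B/D label of the cone drift M = M^i, using phi_i = 2*pi - arg_i(M)."""
        phi = (2.0 * math.pi - (arg1(M) if i == 1 else arg2(M))) % (2.0 * math.pi)
        if 0.0 < phi < math.pi:
            return "A"
        if math.pi <= phi <= 1.5 * math.pi:
            return "B"
        return "D"

    # Example: M^1 pointing up-left and M^2 pointing down-right are both labelled B.
    print(label((-1.0, 2.0), 1), label((2.0, -1.0), 2))   # B B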

Fig. 2. Graphical explanation of the labels.

From the results in [Fayolle et al., 1995] on the random walk in the positive quadrant, we have (in their terminology): (i) if a drift M^i has an A label then axis A_i is an ergodic face; (ii) that face will be outgoing, ingoing or neutral according to the sign of the second vector field (which is scalar in this case); (iii) if M^i has a B or D label then face A_i is transient and there is no second vector field. In this two dimensional case the sign of the second vector field depends only upon the angles of M′ and M^1 for A_1, and of M″ and M^2 for A_2. As with the angles ϕ_i it is convenient to name the angles ψ_1 = arg_1(M′), ψ_2 = arg_2(M″) that M′, M″ make relative to axes A_1, A_2 respectively, so ψ_i = 0 is in the A_i direction and ψ_1 increases anticlockwise while ψ_2 increases clockwise. Now, following the sign of the second vector field, we modify the labels for M^i, i = 1, 2, to

    A^+ : ϕ_i + ψ_i < π,    A^− : ϕ_i + ψ_i > π,    A^0 : ϕ_i + ψ_i = π.    (7)

Using this labelling system we can identify 25 different cases to deal with. It turns out that in many of the cases we get the same result for all choices of the two cones, i.e. all slopes d ≡ d_2/d_1 ∈ (0, ∞) of the line l(d) separating them. Theorem 4 classifies these invariant cases. In its statement a compound entry such as A^−B in one position means that either of the listed labels may occur there.

Theorem 4. The system is
(1) ergodic in cases A^−/A^−B, B/A^− and B/B with |M^1_x|/M^1_y > M^2_x/|M^2_y|;
(2) transient in cases A^+/A^−BD, A^−BD/A^+, B/B with |M^1_x|/M^1_y < M^2_x/|M^2_y|, D/B, B/D and D/D;
(3) null recurrent in cases A^0/A^0A^+B, A^+B/A^0 and B/B with |M^1_x|/M^1_y = M^2_x/|M^2_y|.
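
Theorem 4 is essentially a finite lookup plus a single test in the B/B case. The sketch below (hypothetical names; the label strings and the cross-multiplied form of the B/B test are conventions of this sketch) returns the classification for a given pair of refined labels and cone drifts, and defers the d-dependent cases to Theorem 5.

    # A minimal sketch of the Theorem 4 case lookup (hypothetical names). Labels are the
    # refined ones: "A+", "A-", "A0", "B", "D"; M1 and M2 are the cone drift vectors.
    def theorem4(label1, label2, M1, M2):
        ergodic = {("A-", "A-"), ("A-", "B"), ("B", "A-")}
        null_rec = {("A0", "A0"), ("A0", "A+"), ("A0", "B"), ("A+", "A0"), ("B", "A0")}
        transient = ({("A+", l) for l in ("A-", "B", "D")} |
                     {(l, "A+") for l in ("A-", "B", "D")} |
                     {("D", "B"), ("B", "D"), ("D", "D")})
        pair = (label1, label2)
        if pair == ("B", "B"):
            # cross-multiplied form of the B/B test: |M1_x|/M1_y  vs  M2_x/|M2_y|
            lhs, rhs = abs(M1[0]) * abs(M2[1]), M1[1] * M2[0]
            return "ergodic" if lhs > rhs else ("transient" if lhs < rhs else "null recurrent")
        if pair in ergodic:
            return "ergodic"
        if pair in null_rec:
            return "null recurrent"
        if pair in transient:
            return "transient"
        return "not covered by Theorem 4 (depends on the slope d; see Theorem 5)"

    print(theorem4("B", "B", (-1.0, 1.0), (1.0, -2.0)))   # ergodic
    print(theorem4("D", "A-", (0.5, 0.2), (-0.4, 0.1)))   # not covered by Theorem 4 ...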

Fig. 3. Example of case D/A where l(d) is important.

For systems with no control over the service regimes there may still be some control over the routable traffic stream. The next theorem shows that there are sets of parameters such that a change to the slope of the switching line l(d) can change Ξ from a transient to an ergodic process. We describe in detail only the case D/A^0A^−, depicted in Fig. 3, as case A^0A^−/D is very similar. The relative slopes of M^1, l(d) and M^2 are crucial so we label two key conditions:

E1: M^1_2 < d M^1_1 (so l(d) is steeper than M^1);
E1′: M^1_2 > d M^1_1;
E2: −M^2_2 ≤ d(−M^2_1) (this includes all cases with M^2_2 ≥ 0 and implies that M^2 is not steeper than l(d)).

Theorem 5. In case D/A^0A^− the ergodicity or non-ergodicity of the Markov chain Ξ also depends on the slope d > 0 of the line l(d) separating C_1 and C_2, as follows:
(a) if E1 holds then Ξ is transient;
(b) if E1′ holds then Ξ's excursions into C_1 have finite mean time and Ξ is
  (i) ergodic if E2 holds and M^2 is A^−, or if E2 does not hold and (−M^2_2)M^1_1 < M^1_2(−M^2_1) (so M^1 is steeper than M^2);
  (ii) null recurrent if E2 holds and M^2 is A^0, or if E2 does not hold and (−M^2_2)M^1_1 = M^1_2(−M^2_1);
  (iii) transient if E2 does not hold and (−M^2_2)M^1_1 > M^1_2(−M^2_1).

The case A^0A^−/D is simply the reflection of the above in the line l(1, 1).

Note: this theorem says nothing about the cases where M^1 is parallel to l(d), but in practice this will not be a major problem if the slope of the line l(d) is under user control.
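
The conditions E1, E1′ and E2 are simple sign checks on the components of M^1, M^2 and the slope d. The sketch below (hypothetical name theorem5) evaluates them and reports the classification of Theorem 5 for case D/A^0A^−; the A^0/A^− label of M^2 is supplied by the caller.

    # A minimal sketch of the Theorem 5 conditions for case D/A0A- (hypothetical name).
    def theorem5(M1, M2, d, m2_label):
        """M1, M2: cone drifts (D label and A0/A- label respectively); d > 0: slope of l(d);
        m2_label: "A0" or "A-" (refined label of M^2)."""
        E1 = M1[1] < d * M1[0]           # l(d) steeper than M^1
        E1p = M1[1] > d * M1[0]          # M^1 steeper than l(d)
        E2 = -M2[1] <= d * (-M2[0])      # M^2 not steeper than l(d) (includes M^2_2 >= 0)
        if E1:
            return "transient"
        if not E1p:
            return "M^1 parallel to l(d): not covered by Theorem 5"
        if E2:
            return "ergodic" if m2_label == "A-" else "null recurrent"
        lhs, rhs = (-M2[1]) * M1[0], M1[1] * (-M2[0])
        if lhs < rhs:
            return "ergodic"             # M^1 steeper than M^2
        return "null recurrent" if lhs == rhs else "transient"

    # Example: M^1 pointing up-right and steeper than the diagonal l(d); M^2 labelled A-.
    print(theorem5(M1=(0.5, 0.8), M2=(-0.4, 0.1), d=1.0, m2_label="A-"))   # ergodic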

References

[Blackwell, 1965] D. Blackwell. Discounted dynamic programming. Annals of Mathematical Statistics, pages 226-235, 1965.
[Fayolle et al., 1995] G. Fayolle, V. A. Malyshev, and M. V. Menshikov. Topics in the Constructive Theory of Countable Markov Chains. Cambridge University Press, Cambridge, 1995.
[Feng et al., 2002] W. Feng, K. Adachi, and A. Kowada. A two-queue and two-server model with a threshold-based control service policy. European Journal of Operational Research, pages 593-611, 2002.
[Foley and McDonald, 2001] R. D. Foley and D. R. McDonald. Join the shortest queue: stability and exact asymptotics. Annals of Applied Probability, pages 569-607, 2001.
[Kurkova, 2001] I. A. Kurkova. A load-balanced network with two servers. Queueing Systems, pages 379-389, 2001.
[MacPhee and Müller] I. M. MacPhee and L. J. Müller. Stability classification of a controlled multi-queue system with many service and routing regimes. Submitted to Queueing Systems.
[Serfozo, 1979] R. F. Serfozo. An equivalence between continuous and discrete time Markov decision processes. Operations Research, 27:616-620, 1979.