Carry-Over Round Robin: A Simple Cell Scheduling Mechanism for ATM Networks


Carry-Over Round Robin: A Simple Cell Scheduling Mechanism for ATM Networks

Debanjan Saha, Sarit Mukherjee, Satish K. Tripathi

Abstract

We propose a simple cell scheduling mechanism for ATM networks. The proposed mechanism, named Carry-Over Round Robin (CORR), is an extension of weighted round robin scheduling. We show that despite its simplicity, CORR achieves tight bounds on end-to-end delay and near-perfect fairness. Using a variety of video traffic traces, we show that CORR often outperforms some of the more complex scheduling disciplines, such as Packet-by-Packet Generalized Processor Sharing (PGPS).

This work is supported in part by NSF under Grant No. CCR and Army Research Laboratory under Cooperative Agreement No. DAAL. A short version of the paper appeared in the proceedings of IEEE Infocom'96. Debanjan Saha: IBM T.J. Watson Research Center, Yorktown Heights, NY, debanjan@watson.ibm.com. Sarit Mukherjee: Dept. of Computer Science & Engg., University of Nebraska, Lincoln, NE, sarit@cse.unl.edu. Satish K. Tripathi: Dept. of Computer Science, University of Maryland, College Park, MD, tripathi@cs.umd.edu.

1 Introduction

This paper presents a simple yet effective cell multiplexing mechanism for ATM networks. The proposed mechanism, named Carry-Over Round Robin (CORR), is a simple extension of weighted round robin scheduling. It provides each connection a minimum guaranteed rate of service negotiated at the time of connection setup. The excess capacity is fairly shared among active connections. CORR overcomes a common shortcoming of most round robin and frame-based schedulers, namely the coupling between delay performance and bandwidth allocation granularity. We show that despite its simplicity, CORR often outperforms more sophisticated schemes, such as Packet-by-Packet Generalized Processor Sharing (PGPS), in terms of delay performance and fairness.

Rate-based service disciplines for packet-switched networks are a well-studied area of research [1, 2, 4, 7, 11, 8]. Based on their bandwidth sharing strategies, most of the proposed schemes can be classified into one of two categories: (1) fair queueing mechanisms, and (2) frame-based or weighted round robin policies. Virtual Clock [11], Packet-by-Packet Generalized Processor Sharing (PGPS) [3, 7], and Self-Clocked Fair Queueing (SFQ) [6] are the most popular examples of schemes that use fair queueing strategies to guarantee a certain share of bandwidth to a specific connection. The most popular frame-based schemes are Stop-and-Go (SG) [4, 5] and Hierarchical Round Robin (HRR) [2]. While fair queueing policies are extremely flexible in terms of allocating bandwidth at very fine granularity and distributing bandwidth fairly among active connections, they are expensive to implement. Frame-based mechanisms, on the other hand, are inexpensive to implement. However, they suffer from many shortcomings, such as inefficient utilization of bandwidth, coupling between delay performance and bandwidth allocation granularity, and unfair allocation of bandwidth.

CORR strives to integrate the flexibility and fairness of the fair queueing strategies with the simplicity of frame-based/round robin mechanisms. The starting point of our algorithm is a simple variation of round robin scheduling. Like round robin, CORR divides the time line into allocation cycles, and each connection is allocated a fraction of the available bandwidth in each cycle. However, unlike slotted implementations of round robin schemes, where bandwidth is allocated as a multiple of a fixed quantum, in our scheme the bandwidth allocation granularity can be arbitrarily small. This helps CORR break the coupling between framing delay and the granularity of bandwidth allocation. Another important difference between CORR and frame-based schemes such as SG and HRR is that CORR is a work-conserving service discipline. It does not waste the spare capacity of the system; rather, it shares it fairly among active connections. A recent paper [10] proposed a similar idea for efficient implementation of fair queueing. However, the algorithm proposed in [10] has not been analyzed for delay and other related performance metrics. We present a detailed analysis of CORR and derive tight bounds on end-to-end delay. Our derivation of delay bounds does not assume a specific traffic arrival pattern. Hence, unlike PGPS (for which a delay bound is available only for leaky-bucket-controlled sources), we can derive end-to-end delay bounds for CORR for a variety of traffic sources. Using traffic traces from real-life video sources, we show that CORR often performs better than PGPS in terms of the size of the admissible region.

We have also analyzed the fairness properties of CORR under very general scenarios and have shown that it achieves nearly perfect fairness.

The rest of this paper is organized as follows. In Section 2 we present the intuition behind CORR and its algorithmic description. We discuss the properties of the algorithm in Section 3. Section 4 is devoted to the analysis of the algorithm and its evaluation in terms of delay performance and fairness. In Section 5 we compare the end-to-end performance of CORR with PGPS and SG using a variety of traffic traces. We conclude the paper in Section 6.

2 Scheduling Algorithm

Like round robin scheduling, CORR divides the time line into allocation cycles. The maximum length of an allocation cycle is T. Let us assume that the cell transmission time is the basic unit of time. Hence, the maximum number of cells (or slots) transmitted during one cycle is T. At the time of admission, each connection C_i is allocated a rate R_i expressed in cells per cycle. Unlike simple round robin schemes, where the R_i's have to be integers, CORR allows the R_i's to be real. Since the R_i's can take real values, the granularity of bandwidth allocation can be arbitrarily small, irrespective of the length of the allocation cycle. The goal of the scheduling algorithm is to allocate each connection C_i close to R_i slots in each cycle, and exactly R_i slots per cycle over a longer time frame. It also distributes the excess bandwidth among the active connections in proportion to their respective R_i's.

The CORR scheduler (see Figure 1) consists of three asynchronous events: Initialize, Enqueue, and Dispatch. The event Initialize is invoked when a new connection is admitted. If the connection is admissible [1], it simply adds the connection to the connection list {C}. The connection list is ordered in decreasing order of R_i − ⌊R_i⌋, that is, the fractional part of R_i. The event Enqueue is activated at the arrival of a packet. It puts the packet in the appropriate connection queue and updates the cell count of the connection. The most important event in the scheduler is Dispatch, which is invoked at the beginning of a busy period. Before explaining the task performed by Dispatch, let us introduce the variables and constants used in the algorithm and the basic intuition behind it.

The scheduler maintains a separate queue for each connection. For each connection C_i, n_i keeps the count of the waiting cells, and r_i holds the number of slots currently credited to it. Note that the r_i's can be real as well as negative fractions. A negative value of r_i signifies that the connection has been allocated more slots than it deserves; a positive value of r_i reflects the current legitimate requirement of the connection. In order to meet the requirement of each connection as closely as possible, CORR divides each allocation cycle into two sub-cycles: a major cycle and a minor cycle. In the major cycle, the integral requirement of each connection is satisfied first. Slots left over from the major cycle are allocated in the minor cycle to connections with still unfulfilled fractional requirements. Obviously, a fraction of a slot cannot be allocated. Hence, eligible connections are allocated a full slot each in the minor cycle whenever slots are available. However, not all connections with fractional requirements may be allocated a slot in the minor cycle. The connections that get a slot in the minor cycle over-satisfy their requirements and carry a debit to the next cycle. The eligible connections that do not get a slot in the minor cycle carry a credit to the next cycle. The allocations for the next cycle are adjusted to reflect this debit and credit carried over from the last cycle.

[1] We discuss admission control later.

Constants
    T:   Cycle length.
    R_i: Slots allocated to C_i.

Variables
    {C}: Set of all connections.
    t:   Slots left in the current cycle.
    n_i: Number of cells queued for C_i.
    r_i: Current slot allocation (credit) of C_i.

Events

Initialize(C_i)                 /* Invoked at connection setup time. */
    add C_i to {C};             /* {C} is ordered in decreasing order of R_i − ⌊R_i⌋. */
    n_i ← 0;  r_i ← 0;

Enqueue()                       /* Invoked at cell arrival time. */
    n_i ← n_i + 1;
    add cell to connection queue;

Dispatch()                      /* Invoked at the beginning of a busy period. */
    for all C_i: r_i ← 0;
    while not end-of-busy-period do
        t ← T;
        1. Major Cycle:
        for all C_i ∈ {C} do    /* From head to tail. */
            r_i ← min(n_i, r_i + R_i);
            x_i ← min(t, ⌊r_i⌋);
            t ← t − x_i;  r_i ← r_i − x_i;  n_i ← n_i − x_i;
            dispatch x_i cells from connection queue C_i;
        end for
        2. Minor Cycle:
        for all C_i ∈ {C} do    /* From head to tail. */
            x_i ← min(t, ⌈r_i⌉);
            t ← t − x_i;  r_i ← r_i − x_i;  n_i ← n_i − x_i;
            dispatch x_i cells from connection queue C_i;
        end for
    end while

Figure 1: Carry-Over Round Robin Scheduling.
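Figure 1 is compact enough to be transliterated directly. The following is a minimal, illustrative Python sketch of the three events, written by us for exposition rather than taken from the paper: per-connection queues are plain lists, cells are opaque objects, and one call to dispatch_cycle performs the bookkeeping of one allocation cycle (major plus minor sub-cycle).

    import math

    class CorrScheduler:
        """Illustrative sketch of the CORR events of Figure 1 (not the authors' code)."""

        def __init__(self, cycle_length):
            self.T = cycle_length      # maximum number of slots per cycle (an integer)
            self.R = {}                # connection id -> allocated rate (cells/cycle)
            self.r = {}                # connection id -> current slot credit
            self.queue = {}            # connection id -> FIFO of queued cells
            self.order = []            # connection list, by decreasing fractional part of R

        def initialize(self, cid, rate):
            """Admit a connection; the admission test of Section 3 is sum(R_i) <= T."""
            assert sum(self.R.values()) + rate <= self.T, "admission test failed"
            self.R[cid], self.r[cid], self.queue[cid] = rate, 0.0, []
            self.order = sorted(self.R,
                                key=lambda c: self.R[c] - math.floor(self.R[c]),
                                reverse=True)

        def enqueue(self, cid, cell):
            self.queue[cid].append(cell)

        def dispatch_cycle(self):
            """Run one allocation cycle (major + minor); return the dispatched cells."""
            sent, t = [], self.T
            # Major cycle: serve the integral part of each connection's credit.
            for cid in self.order:
                self.r[cid] = min(len(self.queue[cid]), self.r[cid] + self.R[cid])
                x = max(0, min(t, math.floor(self.r[cid])))
                for _ in range(x):
                    sent.append((cid, self.queue[cid].pop(0)))
                t -= x
                self.r[cid] -= x
            # Minor cycle: hand leftover slots to connections with positive credit.
            for cid in self.order:
                x = max(0, min(t, math.ceil(self.r[cid])))
                for _ in range(x):
                    sent.append((cid, self.queue[cid].pop(0)))
                t -= x
                self.r[cid] -= x
            return sent

        def busy(self):
            return any(self.queue.values())

A cycle here is a batch of at most T dispatches; a hardware implementation would emit the same sequence one cell time at a time and would start a new cycle immediately when no connection is eligible, which is what makes the discipline work conserving.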

The following is a detailed description of the steps taken in the Dispatch event. At the beginning of a busy period, all r_i's are set to 0 and a new cycle is initiated. The cycles continue until the end of the busy period. At the beginning of each cycle, the current number of unallocated slots t is initialized to T, and the major cycle is initiated. In the major cycle, the dispatcher cycles through the connection list and, for each connection C_i, updates r_i to r_i + R_i. If the number of cells queued in the connection queue, n_i, is less than the updated value of r_i, then r_i is set to n_i; this makes sure that a connection cannot accumulate credit. The minimum of t and ⌊r_i⌋ cells are then dispatched from the connection queue of C_i, and the variables are adjusted accordingly. A minor cycle starts with the slots left over from the preceding major cycle. Again, the dispatcher walks through the connection list. As long as there are slots left, a connection is deemed eligible for dispatching iff (1) it has queued packets, and (2) its r_i is greater than zero. If there is no eligible connection, or if t reaches zero, the cycle ends. Note that the lengths of the major and minor cycles may differ from one allocation cycle to the next.

Example: Let us consider a CORR scheduler with cycle length T = 4 serving three connections C_1, C_2, and C_3 with R_1 = 2, R_2 = 1.5, and R_3 = 0.5, respectively. In an ideal system where fractional slots can be allocated, slots can be allocated to the connections in the fashion shown in Figure 2, resulting in full utilization of the system. CORR also achieves full utilization, but with a different allocation of slots. For ease of exposition, let us assume that all three connections are backlogged from the beginning of the busy period. In the major cycle of the first cycle, CORR allocates C_1, C_2, and C_3, respectively, ⌊r_1⌋ = 2, ⌊r_2⌋ = 1, and ⌊r_3⌋ = 0 slots. Hence, at the beginning of the first minor cycle, t = 1, r_1 = 0.0, r_2 = 0.5, and r_3 = 0.5. The only slot left over for the minor cycle goes to C_2. Consequently, at the end of the first cycle, r_1 = 0.0, r_2 = −0.5, and r_3 = 0.5, and the adjusted requirements for the second cycle are

    r_1 = r_1 + R_1 = 0.0 + 2.0 = 2.0
    r_2 = r_2 + R_2 = −0.5 + 1.5 = 1.0
    r_3 = r_3 + R_3 = 0.5 + 0.5 = 1.0

Since all the r_i's are now integral, they are all satisfied in the major cycle.

[Figure 2: An Example Allocation — the slots of cycles 1 and 2 assigned to connections 1, 2, and 3 under the ideal fractional allocation and under CORR (major and minor sub-cycles), with the evolution of r_1, r_2, and r_3.]

The main attraction of CORR is its simplicity. In terms of complexity, CORR is comparable to round robin and frame-based mechanisms. However, CORR does not suffer from the shortcomings of round robin and frame-based schedulers. By allowing the number of slots allocated to a connection in an allocation cycle to be a real number instead of an integer, we break the coupling between service delay and bandwidth allocation granularity. Also, unlike frame-based mechanisms such as SG and HRR, CORR is a work-conserving discipline capable of exploiting the multiplexing gains of packet switching. In the following section we discuss some of its basic properties.
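The example can be replayed mechanically. The short script below performs only the credit bookkeeping of Figure 1 for three permanently backlogged connections (so the clamp r_i ← min(n_i, r_i + R_i) never binds) and prints, for each cycle, how many slots each connection receives and the carried-over credits; the variable names are ours.

    import math

    T = 4
    R = {"C1": 2.0, "C2": 1.5, "C3": 0.5}
    r = {c: 0.0 for c in R}
    # Connection list ordered by decreasing fractional part of R_i.
    order = sorted(R, key=lambda c: R[c] - math.floor(R[c]), reverse=True)

    for cycle in (1, 2):
        t = T
        got = {c: 0 for c in R}
        for c in order:                      # major cycle
            r[c] += R[c]                     # connections are always backlogged
            x = max(0, min(t, math.floor(r[c])))
            got[c] += x; r[c] -= x; t -= x
        for c in order:                      # minor cycle
            x = max(0, min(t, math.ceil(r[c])))
            got[c] += x; r[c] -= x; t -= x
        print(cycle, got, {c: round(r[c], 2) for c in R})
    # cycle 1: C1 gets 2, C2 gets 2, C3 gets 0; credits 0.0, -0.5, 0.5
    # cycle 2: C1 gets 2, C2 gets 1, C3 gets 1; credits 0.0, 0.0, 0.0

All four slots are used in every cycle, and over the two cycles each connection receives exactly 2 R_i cells, matching the allocation described above.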

3 Basic Properties

In this section, we discuss some basic properties of the scheduling algorithm. Lemma 3.1 bounds the aggregate requirement of all streams inherited from the last cycle. This result is used in Lemma 3.2 to determine upper and lower bounds on the individual requirement carried over from the last cycle by each connection.

Lemma 3.1 If Σ_{C_i ∈ {C}} R_i ≤ T, then at the beginning of each cycle Σ_{C_i ∈ {C}} r_i ≤ 0.

Proof: We prove this by induction. We first show that the claim holds at the beginning of a busy period, and then show that if it holds in the k-th cycle, it also holds in the (k+1)-th cycle.

Base Case: From the allocation algorithm we observe that at the beginning of a busy period r_i = 0 for every connection C_i. Hence

    Σ_{C_i ∈ {C}} r_i = 0,

and the assertion holds in the base case.

Inductive Hypothesis: Assume that the claim holds in the k-th cycle. We need to prove that it also holds in the (k+1)-th cycle. Using superscripts to denote cycles, and noting that a full cycle dispatches T cells,

    Σ_{C_i ∈ {C}} r_i^{k+1} ≤ Σ_{C_i ∈ {C}} r_i^k + Σ_{C_i ∈ {C}} R_i − T ≤ 0 + T − T ≤ 0.

(If a cycle ends early because no connection is eligible, then every connection has either an empty queue or a non-positive credit at that instant, so every r_i ≤ 0 and the bound holds directly.)

This completes the proof.

Henceforth we assume that the admission control mechanism ensures that Σ_{C_i ∈ {C}} R_i ≤ T at every node. This simple admission control test is one of the attractions of CORR scheduling.

Lemma 3.2 If Σ_{C_i ∈ {C}} R_i ≤ T, then at the beginning of each cycle

    −1 < −δ_i ≤ r_i ≤ δ_i < 1,   where δ_i = max_k { kR_i − ⌊kR_i⌋ },  k = 1, 2, ….

Proof: To derive the lower bound on r_i, observe that in each cycle no more than ⌈r_i⌉ slots are allocated to connection C_i. Also note that r_i is incremented in steps of R_i. Hence, the lowest value r_i can reach is

    min_k { kR_i − ⌈kR_i⌉ } = −max_k { kR_i − ⌊kR_i⌋ } = −δ_i,   k = 1, 2, ….

The derivation of the upper bound is a little more complex. Assume that there are n connections C_i, i = 1, 2, …, n. With no loss of generality we can renumber them such that R_i ≥ R_j when i < j. For the sake of simplicity, let us also assume that all the R_i's are fractional; we show later that this assumption is not restrictive. To prove the upper bound, we first prove that r_i never exceeds 1 for any connection C_i. Since R_n is the lowest of all the R_i's, C_n is the last connection in the connection list. Consequently, C_n is the last connection considered for a possible cell dispatch in both the major and the minor cycles. Hence, if we can prove that r_n never exceeds 1, the same is true for all other r_i's.

We prove this by contradiction. Let us assume that C_n enters a busy period in allocation cycle 1. Observe that C_n experiences the worst-case allocation when all other connections also enter their busy periods in the same cycle. Suppose that r_n exceeds 1; this would happen in allocation cycle ⌈1/R_n⌉. Since r_n exceeds 1, C_n is considered for a possible dispatch in the major cycle. Now, C_n is not scheduled during the major cycle of allocation cycle ⌈1/R_n⌉ if and only if the following holds at the beginning of that allocation cycle:

    Σ_{i=1}^{n−1} ⌊r_i + R_i⌋ ≥ T.

From Lemma 3.1 we know that Σ_{i=1}^{n} r_i ≤ 0 at the beginning of each cycle. Since r_n > 0, we have Σ_{i=1}^{n−1} r_i < 0 at the beginning of allocation cycle ⌈1/R_n⌉. But then

    Σ_{i=1}^{n−1} ⌊r_i + R_i⌋ ≤ Σ_{i=1}^{n−1} (r_i + R_i) < 0 + Σ_{i=1}^{n−1} R_i < T.

This contradicts our premise. Hence, r_n cannot exceed 1. Noting that r_n is incremented in steps of R_n, the bound r_n ≤ δ_n follows.

We have proved the bounds under the assumption that all the R_i's are fractional. If we relax that assumption, the result still holds. This is because the integral part of R_i is guaranteed to be allocated in each allocation cycle. Hence, even when the R_i's are not all fractional, we can reduce the problem to an equivalent one with fractional R_i's using the transformation

    R̂_i = R_i − ⌊R_i⌋   and   T̂ = T − Σ_{i=1}^{n} ⌊R_i⌋.

This completes the proof.
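The quantities appearing in Lemmas 3.1 and 3.2 are simple to evaluate. The snippet below checks the admission test Σ R_i ≤ T and computes δ_i = max_k { kR_i − ⌊kR_i⌋ } for rates given as exact fractions; for a rational rate the fractional parts of kR_i repeat with the period of the denominator, which is what the helper exploits. This is our own illustration, not code from the paper.

    from fractions import Fraction
    from math import floor

    def admissible(rates, T):
        """Admission test of Section 3: the aggregate rate must not exceed T."""
        return sum(rates) <= T

    def delta(R):
        """delta_i = max_k (k*R - floor(k*R)) for a rational rate R (cells/cycle)."""
        R = Fraction(R)
        q = R.denominator                  # fractional parts repeat with period q
        return max(k * R - floor(k * R) for k in range(1, q + 1))

    rates = [Fraction(2, 1), Fraction(3, 2), Fraction(1, 2)]   # the example of Section 2
    print(admissible(rates, 4))                 # True: 2 + 1.5 + 0.5 = 4 <= T
    print([float(delta(R)) for R in rates])     # [0.0, 0.5, 0.5]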

4 Quality of Service Envelope

In this section we analyze the worst-case end-to-end delay performance of CORR. Other measures of performance, such as delay jitter and the buffer size required at each node, can also be obtained from the results derived in this section. In order to find the end-to-end delay, we first derive delay bounds for a single-node system. We then show that the end-to-end delay of a multi-node system can be reduced to the delay encountered in an equivalent single-node system, so the single-node bounds can be substituted to obtain the end-to-end delay. We also present a comprehensive analysis of CORR's fairness. We define a fairness index that quantifies how fairly bandwidth is allocated among active connections, and show that the fairness index of CORR is within a constant factor of that of any scheduling discipline.

4.1 Delay Analysis

In this section we derive the worst-case delay bounds of a connection spanning one or more nodes, each employing CORR scheduling to multiplex traffic from different connections. We assume that each connection has an associated traffic envelope that describes the characteristics of the traffic it carries, and a minimum guaranteed rate of service at each multiplexing node. Our goal is to determine the maximum delay suffered by any cell belonging to the connection. We start with a simple system consisting of a single multiplexing node, and find the worst-case delay for different traffic envelopes.

Single-node Case

Let us consider a single server employing CORR scheduling to service traffic from different connections. Since we are interested in the worst-case delay behavior, and each connection is guaranteed a minimum rate of service, we can consider each connection in isolation. Our problem then is to find the maximum difference between the arrival and the departure times of any cell, assuming that the cells are serviced using CORR scheduling with a minimum guaranteed rate of service.

The arrival time of a cell can be obtained from the traffic envelope associated with the connection. The traffic envelope of a connection depends on the shaping mechanism (see Appendix) used at the network entry point; in this paper we consider leaky bucket and moving window shapers. Below, we derive the worst-case departure time of a cell in terms of the service rate allocated to the connection and the length of the allocation cycle. Knowing both the arrival and the departure functions, we can compute the worst-case delay bound. Before presenting the results, let us first formally define a connection busy period and a system busy period.

[Figure 3: Computing delay and backlog from the arrival and departure functions — cell index versus time; the horizontal distance between the arrival and departure functions at cell i is the delay encountered by cell i, and the vertical distance at time t is the backlog at time t.]

Definition 4.1 A connection is said to be in a busy period if its connection queue is non-empty. The system is said to be in a busy period if at least one of the active connections is in its busy period.

Note that a particular connection can switch between busy and idle periods even while the system remains in the same busy period. The following theorem determines the departure time of a specific cell belonging to a particular connection.

Theorem 4.1 Assume that a connection enters a busy period at time 0. Let d(i) be the latest time by which the i-th cell, counting from the beginning of the current busy period, departs the system. Then d(i) can be expressed as

    d(i) = ⌈(i + 1 + δ)/R⌉ T,   i = 0, 1, …,

where R is the rate allocated to the connection, T is the maximum length of the allocation cycle, and δ is the bound of Lemma 3.2 for this connection.

Proof: Since a cell may leave the system at any time during an allocation cycle, to capture the worst case we assume that all the cells served during an allocation cycle leave at the end of that cycle.

Now, when a connection enters a busy period, in the worst case its credit is r = −δ. If cell i departs at the end of allocation cycle L, the number of slots made available to the connection by the scheduler is LR − δ, and the number of slots consumed is i + 1 (since cell numbering starts from 0). In the worst case,

    1 > LR − δ − (i + 1) ≥ 0,

which implies that

    (i + 1 + δ)/R + 1/R > L ≥ (i + 1 + δ)/R.

From the above inequality, noting that L is an integer and that d(i) = L·T, we get

    d(i) = ⌈(i + 1 + δ)/R⌉ T.

Theorem 4.1 precisely characterizes the departure function d(·) associated with a connection. As mentioned earlier, the arrival function a(·) associated with a connection is determined by the traffic envelope and is characterized for different composite shapers in the Appendix. Knowing both a(i) and d(i), the arrival and departure times of the i-th cell in a busy period, the delay encountered by the i-th cell can be computed as d(i) − a(i) (see Figure 3). Note that this is the horizontal distance between the arrival and departure functions at i; hence, the maximum delay encountered by any cell is the maximum horizontal distance between the arrival and the departure functions. Similarly, the vertical distance between these functions represents the backlog in the system. Unfortunately, finding the maximum delay, that is, the maximum horizontal difference between the arrival and the departure functions, is a difficult task. Hence, instead of finding the maximum delay directly, we first determine the point at which the maximum backlog occurs and the index i of the cell at the end of the queue at that point. The worst-case delay is then computed by evaluating d(i) − a(i). In the following we carry out this procedure for a(i) defined by composite leaky bucket and moving window shapers.

Lemma 4.1 Consider a connection shaped using an n-component moving window shaper and passing through a single multiplexing node employing CORR scheduling with an allocation cycle of length T [2]. Let the connection be allocated a service rate R with m_j/w_j < R/T < m_{j+1}/w_{j+1} for some j = 1, 2, …, n−1. Then the worst-case delay encountered by any cell belonging to the connection is upper bounded by d(m_j − 1) − a(m_j − 1) if the backlog that builds up during one window of length w_j is cleared within that window, and by d(2m_j − 1) − a(2m_j − 1) otherwise, where d(·) is the departure function of Theorem 4.1 and a(·) is the arrival function of the moving window shaper.

[2] We assume that the cycle length is smaller than the smallest window period.

Proof: First we show that under the stated conditions the system is stable, that is, that the length of the busy period is finite. To prove this, it is sufficient to show that there exists a positive integer k such that the number of cells serviced in time kw_j is at least km_j; in other words, that there exists a k with

    kw_j ≥ d(km_j − 1) = ⌈(km_j + δ)/R⌉ T.

The right-hand side grows by roughly m_j T/R per window while the left-hand side grows by w_j, so such a positive integer k exists precisely when

    R/T − m_j/w_j > 0,   that is,   R/T > m_j/w_j,

which holds by assumption. Hence the system is stable. Let k denote the number of windows of length w_j needed to clear the backlog built up in the first window. We now determine the point at which the maximum backlog occurs; depending on the value of k, it can occur at one of two places.

Case 1: k = 1. If k = 1, that is, when the traffic arriving during a time window of length w_j departs the system within that same window, the maximum backlog occurs at the arrival instant of the (m_j − 1)-th cell [3]. Clearly, the index of the cell at the end of the queue at that instant is m_j − 1. Hence, the maximum delay encountered by any cell under this scenario equals the delay suffered by the (m_j − 1)-th cell and can be enumerated by computing d(m_j − 1) − a(m_j − 1). Evaluating the moving window arrival envelope of the Appendix at this index, the first j components contribute nothing, so a(m_j − 1) reduces to a sum of the window contributions of components j + 1 through n, and the worst-case delay is bounded by

    D_CORR/MW ≤ d(m_j − 1) − a(m_j − 1) = ⌈(m_j + δ)/R⌉ T − a(m_j − 1).

[3] Note that cells are numbered from 0. Since R can be non-integral, we can choose T arbitrarily small without affecting the granularity of bandwidth allocation.

Case 2: k > 1. When k is greater than 1, the connection busy period continues beyond the first window of length w_j. Since R/T > m_j/w_j, the rate of traffic arrival is lower than the rate of departure. Still, in this case, not all cells that arrive during the first window of length w_j are served within that window, and the leftover cells are carried into the next window. This is because, unlike the arrival curve, the departure function does not start at time 0 but at d(0) = ⌈(1 + δ)/R⌉ T. The same is true when k = 1; there, however, the rate of service is high enough to serve all the cells before the end of the first window. When k > 1, the backlog carried over from the first window is cleared in portions over the next k − 1 windows. The backlogs carried into subsequent windows diminish in size and are cleared completely by the end of the k-th window. Hence, the second window is the one that inherits the largest backlog from the preceding window, and the absolute backlog in the system reaches its maximum at the arrival of the (2m_j − 1)-th cell. The maximum delay encountered by any cell under this scenario therefore equals the delay suffered by the (2m_j − 1)-th cell and can be enumerated by computing d(2m_j − 1) − a(2m_j − 1). Evaluating the moving window arrival envelope at this index gives a(2m_j − 1) = w_j plus the window contributions of components j + 1 through n, so the worst-case delay is bounded by

    D_CORR/MW ≤ d(2m_j − 1) − a(2m_j − 1) = ⌈(2m_j + δ)/R⌉ T − a(2m_j − 1).

This completes the proof.

Lemma 4.2 Consider a connection shaped by an n-component leaky bucket shaper and passing through a single multiplexing node employing CORR scheduling with an allocation cycle of maximum length T. Let the connection be allocated a service rate R with 1/t_j < R/T < 1/t_{j+1} for some j = 1, 2, …, n. Then the worst-case delay suffered by any cell belonging to the connection is upper bounded by

    D_CORR/LB ≤ ⌈(B_j + 1 + δ)/R⌉ T − (B_j − b_j + 1) t_j.

Proof: In order to identify the point where the maximum backlog occurs, observe that the rate of arrivals exceeds the rate of service until the slope of the traffic envelope changes from 1/t_{j+1} to 1/t_j. In the worst case, this change in slope occurs at the arrival of the B_j-th cell. Hence, the maximum delay encountered by any cell is at most as large as the delay suffered by the B_j-th cell. Evaluating the leaky bucket arrival envelope of the Appendix at this index, only the j-th component constrains the arrival of cell B_j, and

    a(B_j) = (B_j − b_j + 1) t_j.

Now d(B_j) − a(B_j), with d(B_j) = ⌈(B_j + 1 + δ)/R⌉ T given by Theorem 4.1, yields the result.

The results derived in this section define tight upper bounds on the delay encountered in a CORR scheduler under different traffic arrival patterns. The compact closed-form expressions make the task of computing numerical bounds for a specific set of parameters very simple. We would also like to mention that, compared to other published works, we consider a much larger and more general set of traffic envelopes in our analysis. Although simple closed-form bounds under very general arrival patterns are an important contribution of this work, bounds for a single-node system are not very useful in a real-life scenario. In most real systems a connection spans multiple nodes, and the end-to-end delay bound is what is of interest. In the following section we derive bounds on end-to-end delay using the results presented in this section.
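To see the single-node machinery in action, the sketch below evaluates the departure bound of Theorem 4.1 against a user-supplied arrival function and searches a finite horizon of cell indices for the largest value of d(i) − a(i), the quantity illustrated in Figure 3. The arrival function used in the demo is a plain single leaky bucket (a burst of b cells at time 0, then one cell every t slots), which is our own simplification of the composite envelopes of the Appendix; all parameter values are invented.

    import math

    def departure_bound(i, R, T, delta):
        """Latest departure time of cell i in a busy period (Theorem 4.1)."""
        return math.ceil((i + 1 + delta) / R) * T

    def worst_case_delay(arrival, R, T, delta, horizon):
        """Largest d(i) - a(i) over cells 0..horizon-1 (cf. Figure 3)."""
        return max(departure_bound(i, R, T, delta) - arrival(i) for i in range(horizon))

    # A greedy single leaky bucket source: b cells at time 0, then one cell every
    # t cell times. (Illustrative stand-in for the composite Appendix envelopes.)
    def leaky_bucket_arrival(b, t):
        return lambda i: 0 if i < b else (i - b + 1) * t

    R, T, delta = 2.5, 50, 0.5          # 2.5 cells per 50-slot cycle
    a = leaky_bucket_arrival(b=100, t=25)
    print(worst_case_delay(a, R, T, delta, horizon=1000))   # 2050 cell times here

With these numbers the service rate (2.5/50 = 0.05 cells per slot) exceeds the post-burst arrival rate (1/25 = 0.04), so the backlog peaks at the last cell of the initial burst and the delay bound is attained there.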

Multiple-node Case

In the last section we derived worst-case bounds on delay for different traffic envelopes in a single-node system. In this section we derive similar bounds for a multi-node system. We assume that there are n multiplexing nodes between the source and the destination, and that at each node a minimum rate of service is guaranteed. We denote by a_k(i) the arrival time of cell i at node k. The service time of cell i at node k is denoted by s_k(i). We assume that the propagation delay between nodes is zero [4]. Hence, the departure time of cell i from node k is a_{k+1}(i). Note that a_1(i) is the arrival time of the i-th cell at the system and a_{n+1}(i) is the departure time of the i-th cell from the system. Let us denote S_k(p, q) = Σ_{i=p}^{q} s_k(i). This is simply the aggregate service time of cells p through q at node k; in other words, S_k(p, q) is the service time of the burst of cells p through q at node k. The following theorem expresses the arrival time of a particular cell at a specific node in terms of the arrival times of the preceding cells at the system and their service times at the different nodes. This is a very general result, independent of the particular scheduling discipline used at the multiplexing nodes and of the traffic envelope associated with the connection. We will use this result later to derive the worst-case bound on end-to-end delay.

Theorem 4.2 For any node k and for any cell i, the following holds:

    a_k(i) = max_{1 ≤ j ≤ i} { a_1(j) + max_{j = l_1 ≤ l_2 ≤ … ≤ l_k = i} Σ_{h=1}^{k−1} S_h(l_h, l_{h+1}) }.

Proof: We prove this theorem by induction on k and i.

Induction on k:

Base Case: When k = 1, the inner sum is empty and

    a_1(i) = max_{1 ≤ j ≤ i} { a_1(j) + 0 } = a_1(i),

since arrival times are non-decreasing in the cell index. Clearly, the assertion holds.

Inductive Hypothesis: Assume that the premise holds for all m ≤ k. In order to prove that the hypothesis is correct, we need to show that it holds for m = k + 1. The arrival of cell i at node k + 1 is governed by the recursion

    a_{k+1}(i) = max { a_{k+1}(i − 1) + s_k(i),  a_k(i) + s_k(i) },

since cell i leaves node k only after it has arrived there and after cell i − 1 has left.

[4] This assumption does not affect the generality of the results, since the propagation delay at each stage is constant and can be included in s_k(i).

Unrolling this recursion in i gives

    a_{k+1}(i) = max_{1 ≤ l ≤ i} { a_k(l) + S_k(l, i) }.

Substituting the expression given by the inductive hypothesis for a_k(l) and regrouping the nested maxima, every term takes the form of a_1(j) plus the service times accumulated along a non-decreasing index sequence j = l_1 ≤ l_2 ≤ … ≤ l_{k+1} = i, so that

    a_{k+1}(i) = max_{1 ≤ j ≤ i} { a_1(j) + max_{j = l_1 ≤ l_2 ≤ … ≤ l_{k+1} = i} Σ_{h=1}^{k} S_h(l_h, l_{h+1}) },

which is the claimed expression for node k + 1.

Induction on i:

Base Case: When i = 1, the only admissible index sequence is l_1 = l_2 = … = l_k = 1, and

    a_k(1) = a_1(1) + Σ_{h=1}^{k−1} S_h(1, 1) = a_1(1) + Σ_{h=1}^{k−1} s_h(1).

Hence the assertion holds in the base case.

Inductive Hypothesis: Assume that the premise holds for all n ≤ i. In order to prove that the hypothesis is correct, we need to show that it holds for n = i + 1.

Substituting the expressions given by the inductive hypotheses for a_k(i) and a_{k−1}(i + 1) into the recursion

    a_k(i + 1) = max { a_k(i) + s_{k−1}(i + 1),  a_{k−1}(i + 1) + s_{k−1}(i + 1) },

and regrouping the nested maxima as before, every term again takes the form of a_1(j) plus the service times accumulated along a non-decreasing index sequence ending at i + 1, which is exactly the claimed expression for cell i + 1. This completes the proof.

The result stated in the above theorem determines the departure time of any cell from any node in the system in terms of the arrival times of the preceding cells and the service times of the cells at the different nodes. This is the most general result known to us on the enumeration of end-to-end delay in terms of the service times of cells at intermediate nodes. We believe this result will prove to be a powerful tool for enumerating end-to-end delay for any rate-based scheduling discipline, and an effective alternative to the ad hoc techniques commonly used for end-to-end analysis.
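Theorem 4.2 is easy to check numerically when the service times are known. The sketch below computes the arrival times a_k(i) with the tandem recursion used in the proof (cells served in order at every node) and compares them against the max-over-index-sequences expression of the theorem, evaluated by brute force; the instance, the random service times, and all function names are our own, invented purely for this check.

    import itertools
    import random

    def arrivals_by_recursion(a1, s):
        """a[k][i] via a_{k+1}(i) = max(a_{k+1}(i-1), a_k(i)) + s_k(i).
        a1: arrival times at node 1 for cells 1..N (index 0 unused).
        s:  s[k][i] = service time of cell i at node k, k = 1..n."""
        n, N = len(s) - 1, len(a1) - 1
        a = [None] + [list(a1)] + [[0.0] * (N + 1) for _ in range(n)]
        for k in range(1, n + 1):
            for i in range(1, N + 1):
                prev = a[k + 1][i - 1] if i > 1 else float("-inf")
                a[k + 1][i] = max(prev, a[k][i]) + s[k][i]
        return a

    def arrivals_by_theorem(a1, s, k, i):
        """Theorem 4.2, evaluated by brute force over non-decreasing index sequences."""
        if k == 1:
            return a1[i]
        S = lambda h, p, q: sum(s[h][p:q + 1])
        best = float("-inf")
        for j in range(1, i + 1):
            for mid in itertools.combinations_with_replacement(range(j, i + 1), k - 2):
                seq = (j,) + mid + (i,)
                path = sum(S(h + 1, seq[h], seq[h + 1]) for h in range(k - 1))
                best = max(best, a1[j] + path)
        return best

    random.seed(0)
    n, N = 3, 5                                   # 3 nodes, 5 cells
    a1 = [0.0] + sorted(random.uniform(0, 10) for _ in range(N))
    s = [None] + [[0.0] + [random.uniform(0.5, 2.0) for _ in range(N)]
                  for _ in range(n)]
    a = arrivals_by_recursion(a1, s)
    for k in range(2, n + 2):
        assert all(abs(a[k][i] - arrivals_by_theorem(a1, s, k, i)) < 1e-9
                   for i in range(1, N + 1))
    print("recursion matches Theorem 4.2 on this instance")

Of course, the check presumes that every s_k(i) is known in advance which, as the next paragraph points out, is rarely the case in practice; that is precisely why the worst-case substitution of Corollary 4.1 below is useful.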

Although the result stated in Theorem 4.2 is very general, it is difficult to make use of it in its most general form. In order to find the exact departure time of any cell from any node, we need to know both the arrival times of the cells and their service times at the different nodes. Arrival times of the cells can be obtained from the arrival function, but computing the service times of the cells at each node is a daunting task. Hence, computing the precise departure time of a cell from any node in the system is often quite difficult. However, the accurate departure time of a specific cell is rarely of critical interest. More often we are interested in other metrics, such as the worst-case delay encountered by a cell. Fortunately, computing the worst-case bound on the departure time, and from it the worst-case delay, is not that difficult. The following corollary expresses the worst-case delay suffered by a cell in terms of the worst-case service times at each node.

Corollary 4.1 Consider a connection passing through n multiplexing nodes. Assume that there exists an S_w such that S_w(p, q) ≥ S_h(p, q) for all q ≥ p and h = 1, 2, …, n. Then the worst-case delay D(i) suffered by cell i belonging to the connection can be upper bounded by

    D(i) ≤ max_{1 ≤ j ≤ i} { a_1(j) + max_{j = l_1 ≤ l_2 ≤ … ≤ l_{n+1} = i} Σ_{h=1}^{n} S_w(l_h, l_{h+1}) } − a_1(i).

Proof: Follows trivially from Theorem 4.2 by substituting S_w for the S_h's, h = 1, 2, …, n.

Corollary 4.1 bounds the worst-case delay encountered by any cell under the assumption that for any p and q there exists a function S_w such that S_w(p, q) ≥ S_h(p, q) for h = 1, 2, …, n. The closer S_w is to the S_h's, the tighter the bound. The choice of S_w depends on the particular scheduling discipline used at the multiplexing nodes. In the case of CORR it is simply the service time at the minimum guaranteed rate of service. The following corollary instantiates the delay bound for the CORR service discipline.

Corollary 4.2 Consider a connection traversing n nodes, each of which employs the CORR scheduling discipline. Let R_w be the minimum rate of service offered to the connection at the bottleneck node, and let T be the maximum length of the allocation cycle. Then the worst-case delay suffered by the i-th cell belonging to the connection is bounded by

    D_CORR(i) ≤ Δ_n + max_{1 ≤ j ≤ i} { a_1(j) + S_w(j, i) } − a_1(i),

where Δ_n is a constant, independent of the cell index, that depends only on n, δ_w, R_w, and T.

Proof: This follows from Corollary 4.1. By Theorem 4.1, the worst-case service time of the burst of cells l_h through l_{h+1} at the minimum guaranteed rate is S_w(l_h, l_{h+1}) = ⌈(l_{h+1} − l_h + 1 + δ_w)/R_w⌉ T. Summing over the n hops of any index sequence j = l_1 ≤ … ≤ l_{n+1} = i, the intermediate indices telescope, leaving S_w(j, i) plus per-hop rounding and carry-over terms that are bounded by a quantity independent of i. Collecting those per-hop terms into the constant Δ_n and substituting into Corollary 4.1, the final result follows immediately.

The expression for D_CORR derived above consists of two main terms. The first term is a constant independent of the cell index. If we examine the second term carefully, we see that it is none other than the delay encountered by the i-th cell at a single CORR server with cycle time T and a minimum rate of service R_w. Hence, the end-to-end delay reduces to the sum of the delay encountered in a single-node system and a constant. By substituting the delay bounds for the single-node system derived in the last section, we can enumerate the end-to-end delay of a multi-node system for different traffic envelopes.

4.2 Fairness Analysis

In the last section we analyzed the worst-case behavior of the system. In the worst-case analysis it is assumed that the system is fully loaded and each connection is served at its minimum guaranteed rate. However, that is often not the case. In a work-conserving server, when the system is not fully loaded, the spare capacity can be used by the busy sessions to achieve better performance. One of the important performance metrics of a work-conserving scheduler is the fairness of the system, that is, how fairly the scheduler distributes the excess capacity among the active connections. Let us define D_p(t) as the number of packets of connection p transmitted during [0, t). We define the normalized work received by a connection p as w_p(t) = D_p(t)/R_p. Accordingly, w_p(t_1, t_2) = w_p(t_2) − w_p(t_1), where t_1 ≤ t_2, is the normalized service received by connection p during [t_1, t_2). In an ideally fair system, the normalized services received by the different connections in their busy state increase at the same rate; for sessions that are not busy at t, the normalized service stays constant. If two connections p and q are both in their busy periods during [t_1, t_2), we can easily show that w_p(t_1, t_2) = w_q(t_1, t_2). Unfortunately, the notion of ideal fairness is only applicable to a hypothetical fluid-flow model. In a real packet network, a complete packet from one connection has to be transmitted before service is shifted to another connection. Therefore, it is not possible to satisfy equality of normalized rates of service for all busy sessions at all times. However, it is possible to keep the normalized services received by different connections close to each other. Packet-by-Packet Generalized Processor Sharing (PGPS) and Self-Clocked Fair Queueing (SFQ) are close approximations to ideal fair queueing in the sense that they try to keep the normalized services received by busy sessions close to those of an ideal system. Unfortunately, the realizations of PGPS and SFQ are quite complex. In the following we show that CORR scheduling is almost as fair as PGPS and SFQ, despite its simplicity of implementation.
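The normalized-work bookkeeping above is straightforward to compute from a dispatch trace. The helper below tallies per-connection dispatch counts over a window of cycles and reports the largest disparity |w_p − w_q| over all pairs; the trace format, a list of (cycle, connection) records, is our own convention for illustration.

    from collections import Counter
    from itertools import combinations

    def normalized_work(trace, rates, c1, c2):
        """w_p(c1, c2) = D_p(c1, c2) / R_p for every connection p, computed from a
        dispatch trace of (cycle, connection_id) records, cycles in [c1, c2)."""
        counts = Counter(cid for cycle, cid in trace if c1 <= cycle < c2)
        return {p: counts[p] / rates[p] for p in rates}

    def max_disparity(trace, rates, c1, c2):
        """Largest |w_p - w_q| over all pairs of connections during [c1, c2)."""
        w = normalized_work(trace, rates, c1, c2)
        return max(abs(w[p] - w[q]) for p, q in combinations(rates, 2))

    # Hypothetical trace: the Section 2 example with all connections backlogged
    # (cycle 1 dispatches C1,C1,C2,C2 and cycle 2 dispatches C1,C1,C2,C3).
    rates = {"C1": 2.0, "C2": 1.5, "C3": 0.5}
    trace = [(1, "C1"), (1, "C1"), (1, "C2"), (1, "C2"),
             (2, "C1"), (2, "C1"), (2, "C2"), (2, "C3")]
    print(max_disparity(trace, rates, 1, 3))   # 0.0: over two cycles each gets 2*R_p cells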

For simplicity we assume that our sampling points coincide with the beginnings of allocation cycles only. If frame sizes are small, this approximation is quite reasonable.

Lemma 4.3 If a connection p is in a busy period during cycles c_1 through c_2, where c_2 ≥ c_1, the amount of service received by the connection during [c_1, c_2] is bounded by

    max{ 0, ⌊(c_2 − c_1)R_p − δ_p⌋ } ≤ D_p(c_1, c_2) ≤ ⌈(c_2 − c_1)R_p + δ_p⌉.

Proof: Follows directly from Lemma 3.2.

Corollary 4.3 If a connection p is in a busy period during cycles c_1 through c_2, where c_2 ≥ c_1, the normalized service received by the connection during [c_1, c_2] is bounded by

    max{ 0, ⌊(c_2 − c_1)R_p − δ_p⌋ } / R_p ≤ w_p(c_1, c_2) ≤ ⌈(c_2 − c_1)R_p + δ_p⌉ / R_p.

Proof: Follows directly from Lemma 4.3 and the definition of normalized service.

Theorem 4.3 If two connections p and q are in their busy periods during cycles c_1 through c_2, where c_2 ≥ c_1, then

    Δ(p, q) = |w_p(c_1, c_2) − w_q(c_1, c_2)| ≤ (1 + δ_p)/R_p + (1 + δ_q)/R_q.

Proof: From the last corollary we get

    Δ(p, q) ≤ max { ⌈(c_2 − c_1)R_p + δ_p⌉/R_p − ⌊(c_2 − c_1)R_q − δ_q⌋/R_q ,
                    ⌈(c_2 − c_1)R_q + δ_q⌉/R_q − ⌊(c_2 − c_1)R_p − δ_p⌋/R_p }
            ≤ max { [(c_2 − c_1)R_p + δ_p + 1]/R_p − [(c_2 − c_1)R_q − δ_q − 1]/R_q ,
                    [(c_2 − c_1)R_q + δ_q + 1]/R_q − [(c_2 − c_1)R_p − δ_p − 1]/R_p }
            = (1 + δ_p)/R_p + (1 + δ_q)/R_q,

since the (c_2 − c_1) terms cancel in both arguments of the maximum.

This completes the proof.

To compare the fairness of CORR with that of other schemes, such as PGPS and SFQ, we can use Δ(p, q) as the performance metric. As discussed earlier, Δ(p, q) is the absolute difference in normalized work received by two sessions over a time period in which both of them are busy. We proved above that, if the sample points are at the beginnings of allocation cycles,

    Δ_CORR(p, q) ≤ (1 + δ_p)/R_p + (1 + δ_q)/R_q.

Under the same scenario described in the last section, it can be proved that in the SFQ scheme the following holds at all times:

    Δ_SFQ(p, q) ≤ 1/R_p + 1/R_q.

Due to the difference in the definition of busy periods in PGPS, a similar result is difficult to derive. However, Golestani [6] has shown that the maximum permissible service disparity between a pair of busy connections in the SFQ scheme is never more than two times the corresponding figure for any real queueing scheme. This proves that

    Δ_PGPS(p, q) ≥ (1/2) Δ_SFQ(p, q).

Note that 0 ≤ δ_i ≤ 1 for every connection i. Hence, the fairness index of CORR is within two times that of SFQ and at most four times that of any other queueing discipline, including PGPS.

5 Numerical Results

In this section we compare the performance of CORR with PGPS and SG using a number of MPEG-coded video traces with widely varying traffic characteristics. We used four video clips (see Table 1), each approximately 10 minutes long, in our study. The first video is an excerpt from a basketball game with very fast scene changes. The second clip is a music video (MTV) of the rock group REM; it is composed of rapidly changing scenes in tune with the song. The third sequence is a clip from CNN Headline News in which the scene alternates between the anchor reading news and different news clips. The last one is a lecture video with scenes alternating between the speaker talking and the viewgraphs; the only moving objects here are the speaker's head and hands. Figure 4 plots frame size against frame number (equivalently, time) for all four sequences, to give an appreciation of the burstiness of the different sequences. In all traces, frames are sequenced as IBBPBB and the frame rate is 30 frames/sec. Observe that, in terms of the size of a GoP [5] and of an average frame, the BasketBall and Lecture videos are at the two extremes (the largest and the smallest, respectively), with the other two videos in between.

[5] The repeating sequence (IBBPBB in this case) is called a GoP, or Group of Pictures.
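Before such traces can drive a cell scheduler, each video frame has to be segmented into ATM cells. The helper below does this assuming AAL5-style encapsulation (48-byte cell payloads plus an 8-byte trailer, padded to a cell boundary); the encapsulation choice and the sample numbers are our assumptions for illustration and are not taken from the paper.

    import math

    CELL_PAYLOAD = 48        # bytes of payload per ATM cell
    AAL5_TRAILER = 8         # bytes of AAL5 trailer appended to each frame

    def cells_per_frame(frame_bytes):
        """Number of ATM cells needed to carry one video frame over AAL5."""
        return math.ceil((frame_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

    def cells_per_cycle(frame_bytes, fps, cycle_cell_times, link_cells_per_sec):
        """Average cell arrivals per allocation cycle if every frame had this size."""
        cycle_sec = cycle_cell_times / link_cells_per_sec
        return cells_per_frame(frame_bytes) * fps * cycle_sec

    # Hypothetical 30 KB I-frame at 30 frames/s on a 45 Mb/s (T3) link, which
    # carries roughly 45e6 / (53 * 8) ~ 106,000 cells per second.
    print(cells_per_frame(30_000))                         # 626 cells
    print(cells_per_cycle(30_000, 30, 1000, 106_000))      # ~177 cells per 1000-slot cycle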

[Table 1: Characteristics of the MPEG traces. Size is in bytes and the frame sequence is IBBPBB. For each trace (Basketball, MTV Video, News Clip, Lecture) the table lists, per frame type (I, P, B), per average frame, and per GoP, the maximum, minimum, and average frame sizes and the standard deviation.]

The results presented in the rest of the section demonstrate that (1) CORR achieves high utilization irrespective of the shaping mechanism used, and (2) when used in conjunction with composite shapers, CORR can exploit the precision in traffic characterization and achieve even higher utilization.

CORR and PGPS

PGPS is a packet-by-packet implementation of the fair queueing mechanism. In PGPS, incoming cells from different connections are buffered in a sorted priority queue and are served in the order in which they would leave the server in an ideal fair queueing system. The departure times of the cells in the ideal system are enumerated by simulating a reference fluid-flow model. Simulation of the reference system and maintenance of the priority queue are both quite expensive operations. Hence, the implementation of the PGPS scheme in a high-speed switch is difficult, to say the least. Nevertheless, we compare CORR with PGPS to show how it fares against an ideal scheme.

In the results presented below we consider a system configuration where all connections from the source to the sink pass through five switching nodes connected via T3 (45 Mb/s) links (see Figure 5). We also assume that each switch is equipped with 2000 cell buffers on each output port. As shown in Figure 5, traffic from a source passes through shaper(s) before entering the network.
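A small reference model of the entry shaper is handy when reproducing this kind of experiment. The sketch below is a generic token-bucket (leaky bucket) shaper written by us, not the shaper implementation used in the paper; a dual-bucket configuration is obtained by chaining two instances, as in the experimental setup used for CORR.

    class LeakyBucketShaper:
        """Token-bucket shaper: tokens accrue at `rate` tokens per unit time up to
        `depth`; a cell may depart only when a full token is available."""

        def __init__(self, rate, depth):
            self.rate = rate            # tokens (cells) per unit time
            self.depth = depth          # maximum token accumulation (burst size)
            self.tokens = depth         # start with a full bucket
            self.last = 0.0             # time of the last token update

        def _refill(self, now):
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now

        def release_time(self, arrival_time):
            """Earliest departure time of a cell arriving at `arrival_time`."""
            self._refill(arrival_time)
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return arrival_time
            wait = (1.0 - self.tokens) / self.rate
            self.tokens = 0.0
            self.last = arrival_time + wait
            return arrival_time + wait

    # A dual shaper is two buckets in series: a tight long-term-rate bucket followed
    # by a looser peak-rate bucket (all parameter values are illustrative).
    long_term = LeakyBucketShaper(rate=0.05, depth=200)
    peak = LeakyBucketShaper(rate=0.5, depth=1)
    t = 0.0
    for _ in range(5):
        t = peak.release_time(long_term.release_time(t))
        print(round(t, 2))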

[Figure 4: MPEG compressed video traces — frame size (Kbytes) versus frame number for the BasketBall, MusicVideo, NewsClip, and Lecture sequences, with I, P, and B frames marked. The frame sequence is IBBPBB.]

For the results reported in this section we assume that the shapers used at the network entry point employ a leaky bucket shaping mechanism. In Figures 6, 7, 8, and 9 we compare the number of connections admitted by CORR and PGPS for different traffic sources, end-to-end delay requirements, and shaper configurations. In order to make the comparison fair, we have chosen the leaky bucket parameters for the different sources so as to maximize the number of connections admitted by PGPS given the end-to-end delay and the buffer sizes in the switch and the shaper.

[Figure 5: Experimentation model of the network — source, shaper(s), five switches in tandem, and sink. All links are 45 Mbps. Both shapers are used for CORR; only one shaper is used for the other scheduling disciplines.]

[Figure 6: Relative performance of CORR and PGPS on the BasketBall video with different shaper buffers (100 ms and 200 ms) — ratio of the number of admitted connections versus end-to-end delay (ms), for single and dual shapers and cycle lengths T = 1, 10, and 20.]

This is a tricky and time-consuming process (we use linear programming techniques) and requires a full scan of the entire traffic trace for each source. For a description of this procedure, please refer to [9]. We have plotted the ratio of the number of connections admitted by CORR, used in conjunction with single and dual leaky bucket shapers, to that of PGPS under this best-case scenario.

In Figure 6 we compare the number of connections admitted by CORR and PGPS for the BasketBall video. The two sets of graphs correspond to two different sizes of shaper buffers, 100 ms and 200 ms in this case. Quite expectedly, PGPS outperforms CORR for delay bounds of less than 150 ms when CORR is used in conjunction with a single leaky bucket. Note, however, that the ratio of the number of connections is very close to 1 and approaches 1 for higher delay bounds. This is due to the fixed frame synchronization overhead in CORR, which is more conspicuous in low-delay regions; the effect of this fixed delay fades for higher end-to-end delays. We also observe that the smaller the frame size (T), the more competitive CORR is with PGPS in terms of the number of connections admitted.

[Figure 7: Relative performance of CORR and PGPS on the MusicVideo trace with different shaper buffers (100 ms and 200 ms) — ratio of the number of admitted connections versus end-to-end delay (ms), for single and dual shapers and T = 1, 10, 20.]

[Figure 8: Relative performance of CORR and PGPS on the NewsClip video with different shaper buffers (100 ms and 200 ms) — ratio of the number of admitted connections versus end-to-end delay (ms), for single and dual shapers and T = 1, 10, 20.]

For traditional frame-based (or round robin) scheduling, a small frame size (short cycle time) leads to coarse bandwidth allocation granularity and hence is not useful in practice. CORR, however, does not suffer from this shortcoming and can use a very small cycle time.

[Figure 9: Relative performance of CORR and PGPS on the Lecture video with different shaper buffers (100 ms and 200 ms) — ratio of the number of admitted connections versus end-to-end delay (ms), for single and dual shapers and T = 1, 10, 20.]

When used in conjunction with dual leaky bucket shapers, CORR outperforms PGPS irrespective of delay bounds, shaper buffer sizes, and cycle times. PGPS cannot take advantage of multi-rate shaping, so no matter what the delay bound is, the number of connections admitted by PGPS depends only on the leaky bucket parameters. CORR, on the other hand, can choose the lowest rate of service sufficient to guarantee the required end-to-end delay bound. The benefit of this flexibility is reflected in Figure 6, where the connections admitted by CORR outnumber those admitted by PGPS by more than a 4:1 margin. For a shaper buffer size of 100 ms, the ratio of the number of connections admitted by CORR to that admitted by PGPS is around 8 for an end-to-end delay bound of 20 ms. The ratio falls sharply and flattens out at around 4 for end-to-end delays of 100 ms or more. The higher gain seen by CORR for lower end-to-end delay budgets is explained by its effective use of shaper and switch buffers. Unlike PGPS, CORR uses a much lower service rate, just enough to guarantee the required end-to-end delay, and effectively uses the buffers in the switches and the shaper to smooth out the burstiness of the traffic. As the delay budget increases, PGPS uses a lower rate of service; consequently, the gain seen by CORR decreases and eventually stabilizes around 4.

A trend similar to the one seen in Figure 6 is observed in Figures 7, 8, and 9. In all cases, PGPS outperforms CORR (used in conjunction with a single leaky bucket) for low end-to-end delays; for higher delays the ratio of the number of connections admitted by CORR and PGPS is practically 1. When used in conjunction with two leaky buckets, CORR outperforms PGPS by a margin higher than 4:1. We also observe that the gain seen by CORR when used with two leaky buckets is higher for lower delays. Careful observation reveals that the gain also depends on the size of the shaper buffer and on the traffic pattern of the source: the smaller the shaper buffer, the higher the gain.


More information

Fast Evaluation of Ensemble Transients of Large IP Networks. University of Maryland, College Park CS-TR May 11, 1998.

Fast Evaluation of Ensemble Transients of Large IP Networks. University of Maryland, College Park CS-TR May 11, 1998. Fast Evaluation of Ensemble Transients of Large IP Networks Catalin T. Popescu cpopescu@cs.umd.edu A. Udaya Shankar shankar@cs.umd.edu Department of Computer Science University of Maryland, College Park

More information

Scheduling Adaptively Parallel Jobs. Bin Song. Submitted to the Department of Electrical Engineering and Computer Science. Master of Science.

Scheduling Adaptively Parallel Jobs. Bin Song. Submitted to the Department of Electrical Engineering and Computer Science. Master of Science. Scheduling Adaptively Parallel Jobs by Bin Song A. B. (Computer Science and Mathematics), Dartmouth College (996) Submitted to the Department of Electrical Engineering and Computer Science in partial fulllment

More information

Submitted to IEEE Transactions on Computers, June Evaluating Dynamic Failure Probability for Streams with. (m; k)-firm Deadlines

Submitted to IEEE Transactions on Computers, June Evaluating Dynamic Failure Probability for Streams with. (m; k)-firm Deadlines Submitted to IEEE Transactions on Computers, June 1994 Evaluating Dynamic Failure Probability for Streams with (m; k)-firm Deadlines Moncef Hamdaoui and Parameswaran Ramanathan Department of Electrical

More information

Semantic Importance Dual-Priority Server: Properties

Semantic Importance Dual-Priority Server: Properties Semantic Importance Dual-Priority Server: Properties David R. Donari Universidad Nacional del Sur - CONICET, Dpto. de Ing. Eléctrica y Computadoras, Bahía Blanca, Argentina, 8000 ddonari@uns.edu.ar Martin

More information

These are special traffic patterns that create more stress on a switch

These are special traffic patterns that create more stress on a switch Myths about Microbursts What are Microbursts? Microbursts are traffic patterns where traffic arrives in small bursts. While almost all network traffic is bursty to some extent, storage traffic usually

More information

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 cschang@watson.ibm.com

More information

In Proceedings of the 13th U.K. Workshop on Performance Engineering of Computer. and Telecommunication Systems (UKPEW'97), July 1997, Ilkley, U.K.

In Proceedings of the 13th U.K. Workshop on Performance Engineering of Computer. and Telecommunication Systems (UKPEW'97), July 1997, Ilkley, U.K. In Proceedings of the 13th U.K. Workshop on Performance Engineering of Computer and Telecommunication Systems (UKPEW'97), July 1997, Ilkley, U.K. Investigation of Cell Scale and Burst Scale Eects on the

More information

Optimal Rejuvenation for. Tolerating Soft Failures. Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi.

Optimal Rejuvenation for. Tolerating Soft Failures. Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi. Optimal Rejuvenation for Tolerating Soft Failures Andras Pfening, Sachin Garg, Antonio Puliato, Miklos Telek, Kishor S. Trivedi Abstract In the paper we address the problem of determining the optimal time

More information

Multiplicative Multifractal Modeling of. Long-Range-Dependent (LRD) Trac in. Computer Communications Networks. Jianbo Gao and Izhak Rubin

Multiplicative Multifractal Modeling of. Long-Range-Dependent (LRD) Trac in. Computer Communications Networks. Jianbo Gao and Izhak Rubin Multiplicative Multifractal Modeling of Long-Range-Dependent (LRD) Trac in Computer Communications Networks Jianbo Gao and Izhak Rubin Electrical Engineering Department, University of California, Los Angeles

More information

Energy Harvesting Multiple Access Channel with Peak Temperature Constraints

Energy Harvesting Multiple Access Channel with Peak Temperature Constraints Energy Harvesting Multiple Access Channel with Peak Temperature Constraints Abdulrahman Baknina, Omur Ozel 2, and Sennur Ulukus Department of Electrical and Computer Engineering, University of Maryland,

More information

A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS

A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS J. Anselmi 1, G. Casale 2, P. Cremonesi 1 1 Politecnico di Milano, Via Ponzio 34/5, I-20133 Milan, Italy 2 Neptuny

More information

Statistical Analysis of Delay Bound Violations at an Earliest Deadline First (EDF) Scheduler Vijay Sivaraman Department of Computer Science, 3820 Boel

Statistical Analysis of Delay Bound Violations at an Earliest Deadline First (EDF) Scheduler Vijay Sivaraman Department of Computer Science, 3820 Boel Statistical Analysis of Delay Bound Violations at an Earliest Deadline First (EDF) Scheduler Vijay Sivaraman Department of Computer Science, 3820 Boelter Hall, UCLA, Los Angeles, CA 90095, U.S.A. (Email:

More information

Proportional Share Resource Allocation Outline. Proportional Share Resource Allocation Concept

Proportional Share Resource Allocation Outline. Proportional Share Resource Allocation Concept Proportional Share Resource Allocation Outline Fluid-flow resource allocation models» Packet scheduling in a network Proportional share resource allocation models» CPU scheduling in an operating system

More information

TUNABLE LEAST SERVED FIRST A New Scheduling Algorithm with Tunable Fairness

TUNABLE LEAST SERVED FIRST A New Scheduling Algorithm with Tunable Fairness TUNABLE LEAST SERVED FIRST A New Scheduling Algorithm with Tunable Fairness Pablo Serrano, David Larrabeiti, and Ángel León Universidad Carlos III de Madrid Departamento de Ingeniería Telemática Av. Universidad

More information

1 Introduction During the execution of a distributed computation, processes exchange information via messages. The message exchange establishes causal

1 Introduction During the execution of a distributed computation, processes exchange information via messages. The message exchange establishes causal Quasi-Synchronous heckpointing: Models, haracterization, and lassication D. Manivannan Mukesh Singhal Department of omputer and Information Science The Ohio State University olumbus, OH 43210 (email: fmanivann,singhalg@cis.ohio-state.edu)

More information

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II TDDB68 Concurrent programming and operating systems Lecture: CPU Scheduling II Mikael Asplund, Senior Lecturer Real-time Systems Laboratory Department of Computer and Information Science Copyright Notice:

More information

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 3, MARCH

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 3, MARCH IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 3, MARCH 1998 315 Asymptotic Buffer Overflow Probabilities in Multiclass Multiplexers: An Optimal Control Approach Dimitris Bertsimas, Ioannis Ch. Paschalidis,

More information

1 Introduction During the execution of a distributed computation, processes exchange information via messages. The message exchange establishes causal

1 Introduction During the execution of a distributed computation, processes exchange information via messages. The message exchange establishes causal TR No. OSU-ISR-5/96-TR33, Dept. of omputer and Information Science, The Ohio State University. Quasi-Synchronous heckpointing: Models, haracterization, and lassication D. Manivannan Mukesh Singhal Department

More information

Resource Allocation for Video Streaming in Wireless Environment

Resource Allocation for Video Streaming in Wireless Environment Resource Allocation for Video Streaming in Wireless Environment Shahrokh Valaee and Jean-Charles Gregoire Abstract This paper focuses on the development of a new resource allocation scheme for video streaming

More information

Latency and Backlog Bounds in Time- Sensitive Networking with Credit Based Shapers and Asynchronous Traffic Shaping

Latency and Backlog Bounds in Time- Sensitive Networking with Credit Based Shapers and Asynchronous Traffic Shaping Latency and Backlog Bounds in Time- Sensitive Networking with Credit Based Shapers and Asynchronous Traffic Shaping Ehsan Mohammadpour, Eleni Stai, Maaz Mohuiddin, Jean-Yves Le Boudec September 7 th 2018,

More information

An Improved Bound for Minimizing the Total Weighted Completion Time of Coflows in Datacenters

An Improved Bound for Minimizing the Total Weighted Completion Time of Coflows in Datacenters IEEE/ACM TRANSACTIONS ON NETWORKING An Improved Bound for Minimizing the Total Weighted Completion Time of Coflows in Datacenters Mehrnoosh Shafiee, Student Member, IEEE, and Javad Ghaderi, Member, IEEE

More information

SUM x. 2x y x. x y x/2. (i)

SUM x. 2x y x. x y x/2. (i) Approximate Majorization and Fair Online Load Balancing Ashish Goel Adam Meyerson y Serge Plotkin z July 7, 2000 Abstract This paper relates the notion of fairness in online routing and load balancing

More information

Channel Allocation Using Pricing in Satellite Networks

Channel Allocation Using Pricing in Satellite Networks Channel Allocation Using Pricing in Satellite Networks Jun Sun and Eytan Modiano Laboratory for Information and Decision Systems Massachusetts Institute of Technology {junsun, modiano}@mitedu Abstract

More information

Contents 1 Introduction 3 2 Continuous time rate based control Network and Queue Models The Rate Control M

Contents 1 Introduction 3 2 Continuous time rate based control Network and Queue Models The Rate Control M SP-EPRCA: an ATM Rate Based Congestion Control Scheme based on a Smith Predictor D. Cavendish, S. Mascolo, M. Gerla dirceu@cs.ucla.edu, mascolo@poliba.it, gerla@cs.ucla.edu Abstract This report presents

More information

Group Ratio Round-Robin: O(1) Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems

Group Ratio Round-Robin: O(1) Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems Group Ratio Round-Robin: O() Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems Bogdan Caprita, Wong Chun Chan, Jason Nieh, Clifford Stein, and Haoqiang Zheng Department of Computer

More information

Author... Department of Mathematics August 6, 2004 Certified by... Michel X. Goemans Professor of Applied Mathematics Thesis Co-Supervisor

Author... Department of Mathematics August 6, 2004 Certified by... Michel X. Goemans Professor of Applied Mathematics Thesis Co-Supervisor Approximating Fluid Schedules in Packet-Switched Networks by Michael Aaron Rosenblum B.S., Symbolic Systems; M.S., Mathematics Stanford University, 998 Submitted to the Department of Mathematics in partial

More information

2 optimal prices the link is either underloaded or critically loaded; it is never overloaded. For the social welfare maximization problem we show that

2 optimal prices the link is either underloaded or critically loaded; it is never overloaded. For the social welfare maximization problem we show that 1 Pricing in a Large Single Link Loss System Costas A. Courcoubetis a and Martin I. Reiman b a ICS-FORTH and University of Crete Heraklion, Crete, Greece courcou@csi.forth.gr b Bell Labs, Lucent Technologies

More information

Competitive Management of Non-Preemptive Queues with Multiple Values

Competitive Management of Non-Preemptive Queues with Multiple Values Competitive Management of Non-Preemptive Queues with Multiple Values Nir Andelman and Yishay Mansour School of Computer Science, Tel-Aviv University, Tel-Aviv, Israel Abstract. We consider the online problem

More information

Integrating External and Internal Clock Synchronization. Christof Fetzer and Flaviu Cristian. Department of Computer Science & Engineering

Integrating External and Internal Clock Synchronization. Christof Fetzer and Flaviu Cristian. Department of Computer Science & Engineering Integrating External and Internal Clock Synchronization Christof Fetzer and Flaviu Cristian Department of Computer Science & Engineering University of California, San Diego La Jolla, CA 9093?0114 e-mail:

More information

TRANSMISSION STRATEGIES FOR SINGLE-DESTINATION WIRELESS NETWORKS

TRANSMISSION STRATEGIES FOR SINGLE-DESTINATION WIRELESS NETWORKS The 20 Military Communications Conference - Track - Waveforms and Signal Processing TRANSMISSION STRATEGIES FOR SINGLE-DESTINATION WIRELESS NETWORKS Gam D. Nguyen, Jeffrey E. Wieselthier 2, Sastry Kompella,

More information

Sensitivity Analysis for Discrete-Time Randomized Service Priority Queues

Sensitivity Analysis for Discrete-Time Randomized Service Priority Queues Sensitivity Analysis for Discrete-Time Randomized Service Priority Queues George Kesidis 1, Takis Konstantopoulos 2, Michael Zazanis 3 1. Elec. & Comp. Eng. Dept, University of Waterloo, Waterloo, ON,

More information

Module 5: CPU Scheduling

Module 5: CPU Scheduling Module 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation 5.1 Basic Concepts Maximum CPU utilization obtained

More information

CPU Scheduling Exercises

CPU Scheduling Exercises CPU Scheduling Exercises NOTE: All time in these exercises are in msec. Processes P 1, P 2, P 3 arrive at the same time, but enter the job queue in the order presented in the table. Time quantum = 3 msec

More information

Quiz 1 EE 549 Wednesday, Feb. 27, 2008

Quiz 1 EE 549 Wednesday, Feb. 27, 2008 UNIVERSITY OF SOUTHERN CALIFORNIA, SPRING 2008 1 Quiz 1 EE 549 Wednesday, Feb. 27, 2008 INSTRUCTIONS This quiz lasts for 85 minutes. This quiz is closed book and closed notes. No Calculators or laptops

More information

The Entropy of Cell Streams as a. Trac Descriptor in ATM Networks

The Entropy of Cell Streams as a. Trac Descriptor in ATM Networks 1 The Entropy of Cell Streams as a Trac Descriptor in ATM Networks N. T. Plotkin SRI International 333 Ravenswood Avenue Menlo Park, CA 94025, USA ninatp@erg.sri.com and C. Roche Laboratoire MASI Universite

More information

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors Technical Report No. 2009-7 Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors RISAT MAHMUD PATHAN JAN JONSSON Department of Computer Science and Engineering CHALMERS UNIVERSITY

More information

The Weakest Failure Detector to Solve Mutual Exclusion

The Weakest Failure Detector to Solve Mutual Exclusion The Weakest Failure Detector to Solve Mutual Exclusion Vibhor Bhatt Nicholas Christman Prasad Jayanti Dartmouth College, Hanover, NH Dartmouth Computer Science Technical Report TR2008-618 April 17, 2008

More information

Chapter 6: CPU Scheduling

Chapter 6: CPU Scheduling Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation 6.1 Basic Concepts Maximum CPU utilization obtained

More information

Fair Operation of Multi-Server and Multi-Queue Systems

Fair Operation of Multi-Server and Multi-Queue Systems Fair Operation of Multi-Server and Multi-Queue Systems David Raz School of Computer Science Tel-Aviv University, Tel-Aviv, Israel davidraz@post.tau.ac.il Benjamin Avi-Itzhak RUTCOR, Rutgers University,

More information

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems  M/M/1  M/M/m  M/M/1/K Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals

More information

ADMISSION AND FLOW CONTROL IN GENERALIZED PROCESSOR SHARING SCHEDULERS

ADMISSION AND FLOW CONTROL IN GENERALIZED PROCESSOR SHARING SCHEDULERS ADMISSION AND FLOW CONTROL IN GENERALIZED PROCESSOR SHARING SCHEDULERS Ph.D. Theses By Róbert Szabó Research Supervisors: Dr. Tibor Trón Dr. József Bíró Department of Telecommunications and Telematics

More information

Scheduling I. Today. Next Time. ! Introduction to scheduling! Classical algorithms. ! Advanced topics on scheduling

Scheduling I. Today. Next Time. ! Introduction to scheduling! Classical algorithms. ! Advanced topics on scheduling Scheduling I Today! Introduction to scheduling! Classical algorithms Next Time! Advanced topics on scheduling Scheduling out there! You are the manager of a supermarket (ok, things don t always turn out

More information

Design of IP networks with Quality of Service

Design of IP networks with Quality of Service Course of Multimedia Internet (Sub-course Reti Internet Multimediali ), AA 2010-2011 Prof. Pag. 1 Design of IP networks with Quality of Service 1 Course of Multimedia Internet (Sub-course Reti Internet

More information

Branching Rules for Minimum Congestion Multi- Commodity Flow Problems

Branching Rules for Minimum Congestion Multi- Commodity Flow Problems Clemson University TigerPrints All Theses Theses 8-2012 Branching Rules for Minimum Congestion Multi- Commodity Flow Problems Cameron Megaw Clemson University, cmegaw@clemson.edu Follow this and additional

More information

Perfect Simulation of M/G/c Queues

Perfect Simulation of M/G/c Queues Perfect Simulation of M/G/c Queues Stephen B. Connor and Wilfrid S. Kendall 28th February 2014 Abstract In this paper we describe a perfect simulation algorithm for the stable M/G/c queue. Sigman (2011:

More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

Online Packet Routing on Linear Arrays and Rings

Online Packet Routing on Linear Arrays and Rings Proc. 28th ICALP, LNCS 2076, pp. 773-784, 2001 Online Packet Routing on Linear Arrays and Rings Jessen T. Havill Department of Mathematics and Computer Science Denison University Granville, OH 43023 USA

More information

Burst Scheduling Based on Time-slotting and Fragmentation in WDM Optical Burst Switched Networks

Burst Scheduling Based on Time-slotting and Fragmentation in WDM Optical Burst Switched Networks Burst Scheduling Based on Time-slotting and Fragmentation in WDM Optical Burst Switched Networks G. Mohan, M. Ashish, and K. Akash Department of Electrical and Computer Engineering National University

More information

Fair Scheduling in Input-Queued Switches under Inadmissible Traffic

Fair Scheduling in Input-Queued Switches under Inadmissible Traffic Fair Scheduling in Input-Queued Switches under Inadmissible Traffic Neha Kumar, Rong Pan, Devavrat Shah Departments of EE & CS Stanford University {nehak, rong, devavrat@stanford.edu Abstract In recent

More information

IOs/sec sec Q3 Figure 1: TPC-D Query 3 I/O trace. tables occur in phases which overlap little. Problem specication I/Os to

IOs/sec sec Q3 Figure 1: TPC-D Query 3 I/O trace. tables occur in phases which overlap little. Problem specication I/Os to Capacity planning with phased workloads E. Borowsky, R. Golding, P. Jacobson, A. Merchant, L. Schreier, M. Spasojevic, and J. Wilkes Hewlett-Packard Laboratories Abstract At the heart of any conguration

More information

Optimal Media Streaming in a Rate-Distortion Sense For Guaranteed Service Networks

Optimal Media Streaming in a Rate-Distortion Sense For Guaranteed Service Networks Optimal Media Streaming in a Rate-Distortion Sense For Guaranteed Service Networks Olivier Verscheure and Pascal Frossard Jean-Yves Le Boudec IBM Watson Research Center Swiss Federal Institute of Tech.

More information

Acknowledgements I wish to thank in a special way Prof. Salvatore Nicosia and Dr. Paolo Valigi whose help and advices have been crucial for this work.

Acknowledgements I wish to thank in a special way Prof. Salvatore Nicosia and Dr. Paolo Valigi whose help and advices have been crucial for this work. Universita degli Studi di Roma \Tor Vergata" Modeling and Control of Discrete Event Dynamic Systems (Modellazione e Controllo di Sistemi Dinamici a Eventi Discreti) Francesco Martinelli Tesi sottomessa

More information

1 Introduction A priority queue is a data structure that maintains a set of elements and supports operations insert, decrease-key, and extract-min. Pr

1 Introduction A priority queue is a data structure that maintains a set of elements and supports operations insert, decrease-key, and extract-min. Pr Buckets, Heaps, Lists, and Monotone Priority Queues Boris V. Cherkassky Central Econ. and Math. Inst. Krasikova St. 32 117418, Moscow, Russia cher@cemi.msk.su Craig Silverstein y Computer Science Department

More information

Distributed Optimization. Song Chong EE, KAIST

Distributed Optimization. Song Chong EE, KAIST Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links

More information

Information in Aloha Networks

Information in Aloha Networks Achieving Proportional Fairness using Local Information in Aloha Networks Koushik Kar, Saswati Sarkar, Leandros Tassiulas Abstract We address the problem of attaining proportionally fair rates using Aloha

More information

AS computer hardware technology advances, both

AS computer hardware technology advances, both 1 Best-Harmonically-Fit Periodic Task Assignment Algorithm on Multiple Periodic Resources Chunhui Guo, Student Member, IEEE, Xiayu Hua, Student Member, IEEE, Hao Wu, Student Member, IEEE, Douglas Lautner,

More information

Response Time in Data Broadcast Systems: Mean, Variance and Trade-O. Shu Jiang Nitin H. Vaidya. Department of Computer Science

Response Time in Data Broadcast Systems: Mean, Variance and Trade-O. Shu Jiang Nitin H. Vaidya. Department of Computer Science Response Time in Data Broadcast Systems: Mean, Variance and Trade-O Shu Jiang Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, TX 7784-11, USA Email: fjiangs,vaidyag@cs.tamu.edu

More information

Simulation of Process Scheduling Algorithms

Simulation of Process Scheduling Algorithms Simulation of Process Scheduling Algorithms Project Report Instructor: Dr. Raimund Ege Submitted by: Sonal Sood Pramod Barthwal Index 1. Introduction 2. Proposal 3. Background 3.1 What is a Process 4.

More information

Geometric Capacity Provisioning for Wavelength-Switched WDM Networks

Geometric Capacity Provisioning for Wavelength-Switched WDM Networks Geometric Capacity Provisioning for Wavelength-Switched WDM Networks Li-Wei Chen and Eytan Modiano Abstract In this chapter, we use an asymptotic analysis similar to the spherepacking argument in the proof

More information

Impact of Cross Traffic Burstiness on the Packet-scale Paradigm An Extended Analysis

Impact of Cross Traffic Burstiness on the Packet-scale Paradigm An Extended Analysis Impact of ross Traffic Burstiness on the Packet-scale Paradigm An Extended Analysis Rebecca Lovewell and Jasleen Kaur Technical Report # TR11-007 Department of omputer Science University of North arolina

More information

Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers

Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers Mohammad H. Yarmand and Douglas G. Down Department of Computing and Software, McMaster University, Hamilton, ON, L8S

More information

Novel determination of dierential-equation solutions: universal approximation method

Novel determination of dierential-equation solutions: universal approximation method Journal of Computational and Applied Mathematics 146 (2002) 443 457 www.elsevier.com/locate/cam Novel determination of dierential-equation solutions: universal approximation method Thananchai Leephakpreeda

More information

Advanced Computer Networks Lecture 3. Models of Queuing

Advanced Computer Networks Lecture 3. Models of Queuing Advanced Computer Networks Lecture 3. Models of Queuing Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/13 Terminology of

More information

Queueing Theory and Simulation. Introduction

Queueing Theory and Simulation. Introduction Queueing Theory and Simulation Based on the slides of Dr. Dharma P. Agrawal, University of Cincinnati and Dr. Hiroyuki Ohsaki Graduate School of Information Science & Technology, Osaka University, Japan

More information

CPU scheduling. CPU Scheduling

CPU scheduling. CPU Scheduling EECS 3221 Operating System Fundamentals No.4 CPU scheduling Prof. Hui Jiang Dept of Electrical Engineering and Computer Science, York University CPU Scheduling CPU scheduling is the basis of multiprogramming

More information

A STAFFING ALGORITHM FOR CALL CENTERS WITH SKILL-BASED ROUTING: SUPPLEMENTARY MATERIAL

A STAFFING ALGORITHM FOR CALL CENTERS WITH SKILL-BASED ROUTING: SUPPLEMENTARY MATERIAL A STAFFING ALGORITHM FOR CALL CENTERS WITH SKILL-BASED ROUTING: SUPPLEMENTARY MATERIAL by Rodney B. Wallace IBM and The George Washington University rodney.wallace@us.ibm.com Ward Whitt Columbia University

More information

UNIVERSITY OF CALIFORNIA, SAN DIEGO. Quality of Service Guarantees for FIFO Queues with Constrained Inputs

UNIVERSITY OF CALIFORNIA, SAN DIEGO. Quality of Service Guarantees for FIFO Queues with Constrained Inputs UNIVERSITY OF ALIFORNIA, SAN DIEGO Quality of Service Guarantees for FIFO Queues with onstrained Inputs A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

More information

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads Operations Research Letters 37 (2009) 312 316 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Instability of FIFO in a simple queueing

More information

IS 709/809: Computational Methods in IS Research Fall Exam Review

IS 709/809: Computational Methods in IS Research Fall Exam Review IS 709/809: Computational Methods in IS Research Fall 2017 Exam Review Nirmalya Roy Department of Information Systems University of Maryland Baltimore County www.umbc.edu Exam When: Tuesday (11/28) 7:10pm

More information

Network management and QoS provisioning - Network Calculus

Network management and QoS provisioning - Network Calculus Network Calculus Network calculus is a metodology to study in a deterministic approach theory of queues. First a linear modelization is needed: it means that, for example, a system like: ρ can be modelized

More information

Scheduling Slack Time in Fixed Priority Pre-emptive Systems

Scheduling Slack Time in Fixed Priority Pre-emptive Systems Scheduling Slack Time in Fixed Priority Pre-emptive Systems R.I.Davis Real-Time Systems Research Group, Department of Computer Science, University of York, England. ABSTRACT This report addresses the problem

More information

Minimizing Average Completion Time in the. Presence of Release Dates. September 4, Abstract

Minimizing Average Completion Time in the. Presence of Release Dates. September 4, Abstract Minimizing Average Completion Time in the Presence of Release Dates Cynthia Phillips Cliord Stein y Joel Wein z September 4, 1996 Abstract A natural and basic problem in scheduling theory is to provide

More information

to provide continuous buered playback ofavariable-rate output schedule. The

to provide continuous buered playback ofavariable-rate output schedule. The The Minimum Reservation Rate Problem in Digital Audio/Video Systems (Extended Abstract) David P. Anderson Nimrod Megiddo y Moni Naor z April 1993 Abstract. The \Minimum Reservation Rate Problem" arises

More information

Exact emulation of a priority queue with a switch and delay lines

Exact emulation of a priority queue with a switch and delay lines ueueing Syst (006) 53:115 15 DOI 10.1007/s11134-006-6669-x Exact emulation of a priority queue with a switch and delay lines A. D. Sarwate V. Anantharam Received: 30 October 004 / Revised: 13 December

More information

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 5, MAY

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 5, MAY IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 5, MAY 1998 631 Centralized and Decentralized Asynchronous Optimization of Stochastic Discrete-Event Systems Felisa J. Vázquez-Abad, Christos G. Cassandras,

More information