Online Packet Buffering
A dissertation submitted for the doctoral degree of the Faculty of Applied Sciences (Fakultät für Angewandte Wissenschaften) of the Albert-Ludwigs-Universität Freiburg im Breisgau.

Markus Schmidt
Freiburg im Breisgau, February 2006
Albert-Ludwigs-Universität Freiburg i. Br., Faculty of Applied Sciences
Dean: Prof. Dr. Jan G. Korvink
First referee: Prof. Dr. Susanne Albers
Second referee: Prof. Dr. Peter Sanders
Date of the doctoral examination:
Contents

1 Introduction
  1.1 Incomplete information, online algorithms, and competitive analysis
  1.2 Packet buffering problems
    1.2.1 Multiqueue packet buffering
    1.2.2 Bounded delay buffering
  1.3 Outline of the thesis
2 Deterministic multiqueue algorithms
  2.1 Introduction
  2.2 Lower bounds
    2.2.1 Greedy algorithms
    2.2.2 Arbitrary deterministic algorithms
  2.3 Upper bounds
    2.3.1 A semi-greedy algorithm
    2.3.2 Resource augmentation
  2.4 Bicodal buffers
    2.4.1 Lower bound
    2.4.2 Competitiveness of the greedy algorithm
  2.5 An optimal offline algorithm
  2.6 Conclusions
3 Randomization in multiqueue buffering
  3.1 Introduction
  3.2 Lower bound
  3.3 Generalizing algorithms without loss of competitiveness
  3.4 Gambler: a coin-tossing algorithm
  3.5 A random permutation algorithm
  3.6 Conclusions
4 Bounded delay buffering
  4.1 Introduction
  4.2 Lower bound
  4.3 The greedy algorithm
  4.4 The MaxCatch algorithm
  4.5 A ϕ-competitive algorithm for 2-value sequences
  4.6 Conclusions
5 Conclusion and outlook
6 Zusammenfassung
Chapter 1

Introduction

This thesis treats several buffering problems that occur in routers and switches of computer networks. We develop and investigate algorithms for the temporary buffering of data packets, where information about the packets is not completely known in advance but arrives incrementally over time.

In the classical approach to algorithm design, all data are assumed to be known in advance. In practical applications, however, this assumption often does not hold: decisions about a process may have to be made while information about that process is still incomplete. For such scenarios, online algorithms, which are able to make decisions without complete knowledge of the input, are used.

A well-known example of an online problem is makespan minimization in job scheduling. We are given a set of machines and a set of jobs. Each job has a processing time and must be assigned to some machine, where each machine can execute at most one job at a time and no job may be split across several machines. The goal is to distribute the jobs among the machines in such a way that the time at which the last job finishes is as early as possible. We have an online setting if (and that is usually the case) we do not know the number of jobs and their processing times in advance, but learn about the jobs one after another and have to assign each new job to a machine immediately when it is revealed, without knowing anything about future jobs.

This chapter is organized as follows: In Section 1.1, we discuss how to measure the performance of online algorithms by means of competitive analysis. We present an overview of the buffering problems considered in this thesis in Section 1.2. Finally, in Section 1.3, we give a brief outline of the thesis.
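The online rule commonly used for this scheduling example, assigning each arriving job to a currently least-loaded machine, can be sketched as follows. This is our own illustration of the online setting (classical greedy list scheduling), not an algorithm from this thesis, and the function name is ours.

```python
import heapq

def online_makespan(num_machines, jobs):
    """Greedy list scheduling: each job is assigned on arrival to a least-loaded
    machine, without any knowledge of future jobs."""
    loads = [0.0] * num_machines          # current finishing time of each machine
    heap = [(0.0, i) for i in range(num_machines)]
    heapq.heapify(heap)
    for p in jobs:                        # jobs are revealed one after another
        load, i = heapq.heappop(heap)     # a currently least-loaded machine
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads)                     # the makespan

# 3 machines, processing times revealed online:
print(online_makespan(3, [2, 3, 4, 5, 6]))  # → 9.0
```

Each decision is made immediately and irrevocably at the moment a job is revealed, which is exactly the online restriction described above.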
1.1 Incomplete information, online algorithms, and competitive analysis

Since, throughout this thesis, we consider maximization problems, we define a performance measure only for this type of optimization problem here; for minimization problems, it can be defined in a similar way. Let σ = (σ_1, σ_2, ...) be an input sequence and let ALG(σ) denote the profit that algorithm ALG achieves when processing σ. The decisions an online algorithm makes at time step t may depend only on σ_1, ..., σ_t; any information σ_τ revealed at a time τ > t is unknown at this instant. In order to measure
how well it copes with the difficulty of incomplete knowledge about the input, we compare an online algorithm, following the concept of competitive analysis introduced by Sleator and Tarjan [22], to an optimal offline algorithm that knows the whole input sequence σ in advance. In a competitive analysis, we determine the competitive ratio of the online algorithm, which is defined as the asymptotic worst-case ratio between the profit of the optimal offline algorithm and the profit of the online algorithm; if the online algorithm is randomized, i.e. if it makes random decisions, its expected profit is considered.

When comparing an online algorithm to an optimal offline algorithm, we can interpret the latter as the adversary of the former. Since the offline algorithm has knowledge of the online algorithm, it can construct an input sequence on which the online algorithm performs poorly while the offline algorithm serves the sequence optimally, thus forcing a large competitive ratio. In the case of deterministic online algorithms, the adversary knows the configuration of the online algorithm at each time step and can construct disadvantageous sequences by using this knowledge. If, however, the online algorithm uses randomization, it can reach distinct states when serving the same sequence. In our analyses, we assume that the adversary cannot predict which configuration the randomized online algorithm reaches in a concrete service of a given sequence; it only knows the probabilities with which the possible configurations are reached. Thus, randomization enables an online algorithm to defy the omniscience of the adversary. Adversaries as described above are called oblivious, whereas those that can adapt the input sequence after each random decision of the online algorithm are called adaptive. We emphasize that, throughout this thesis, we always consider oblivious adversaries.
Now we formally define the competitive ratios. Let DET and RAND be deterministic and randomized online algorithms, respectively, and let OPT denote an optimal offline algorithm. The competitive ratios R(DET) and R(RAND) are given by

    R(DET) = lim sup_{n→∞} sup { OPT(σ)/DET(σ) : OPT(σ) ≥ n }    (1.1)

and

    R(RAND) = lim sup_{n→∞} sup { OPT(σ)/E[RAND(σ)] : OPT(σ) ≥ n },    (1.2)

where E[·] is the expectation operator with respect to the probability distribution used by RAND. Since the profit of OPT is at least the profit of the online algorithm, a competitive ratio is at least 1, and the smaller its competitive ratio, the better an online algorithm performs. By considering the asymptotic profit ratio (OPT(σ) → ∞), we get rid of small additive terms that might have too great an impact on the profit ratio when the optimal profit is small. In this context, we moreover call DET and RAND c-competitive, c ≥ 1, if there are constants a_DET and a_RAND such that, for all sequences σ, we have

    c · DET(σ) + a_DET ≥ OPT(σ)    (1.3)

and

    c · E[RAND(σ)] + a_RAND ≥ OPT(σ),    (1.4)

respectively. Thus, an online algorithm ONL is c-competitive if and only if c ≥ R(ONL).
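For a minimization problem like the scheduling example in the introduction, the competitive ratio is defined symmetrically as the worst-case ratio ALG(σ)/OPT(σ). The small experiment below is our own illustration of such a ratio, not part of the thesis: it compares online greedy list scheduling against a brute-force offline optimum (feasible only for tiny instances) and checks Graham's classical bound that the greedy rule is (2 − 1/m)-competitive.

```python
import random
from itertools import product

def greedy_makespan(m, jobs):
    """Online greedy: put each arriving job on a currently least-loaded machine."""
    loads = [0.0] * m
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return max(loads)

def opt_makespan(m, jobs):
    """Offline optimum by trying every assignment (exponential; tiny inputs only)."""
    best = float("inf")
    for assign in product(range(m), repeat=len(jobs)):
        loads = [0.0] * m
        for p, i in zip(jobs, assign):
            loads[i] += p
        best = min(best, max(loads))
    return best

random.seed(0)
worst = 0.0
for _ in range(200):
    jobs = [random.randint(1, 9) for _ in range(6)]
    worst = max(worst, greedy_makespan(2, jobs) / opt_makespan(2, jobs))
print(worst <= 2 - 1 / 2)  # Graham's bound: greedy is (2 - 1/m)-competitive → True
```

No finite experiment can establish a competitive ratio, of course; the sampled worst case only illustrates the quantity that the definitions above bound for all sequences.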
To prove that no deterministic online algorithm ONL is better than c-competitive, it suffices to show that there is an arrival sequence σ_ONL and another algorithm ADV (the adversary of ONL) such that ADV(σ_ONL) ≥ c · ONL(σ_ONL). The claim then follows from the fact that OPT(σ_ONL) ≥ ADV(σ_ONL) for any optimal offline algorithm OPT. Throughout this thesis, we shall denote by ADV an algorithm whose behaviour is clear from the local context and which need not be optimal. Whenever we use the notation OPT in a lower bound construction, we explicitly prove that the output created by OPT is optimal. On the other hand, when proving that a deterministic online algorithm ONL is c-competitive, we have to show that there is a constant a_ONL such that c · ONL(σ) + a_ONL ≥ OPT(σ) for all input sequences σ. Since it is often difficult to state how a general σ is optimally processed, we show that, for each algorithm ADV, i.e. for each adversary, we have ADV(σ) ≤ b_σ for some constant b_σ depending only on σ. It then suffices to show that c · ONL(σ) + a_ONL ≥ b_σ because b_σ ≥ OPT(σ), as an optimal offline algorithm OPT is one special adversary. Therefore, in upper bound proofs, we use the notation OPT to represent the entirety of possible adversaries. The statements above refer to deterministic online algorithms; in the randomized case, the expected profits must be considered.

1.2 Packet buffering problems

In computer networks, data is nowadays interchanged between computers by means of data packets, which are forwarded by routers on their way from their origin computer to their destination computer. In this thesis, we study basic buffer management problems that arise in network routers and switches; routers forward data packets from one local area network to another, whereas switches forward them within one local area network.
Routers and switches are equipped with several input and output ports for incoming and outgoing data streams from and to other devices of the local or neighboring networks. Devices that route data packets arriving at the input ports to the appropriate output ports, so that the packets can reach their correct destinations, are critical elements for the performance of high-speed networks. Since data traffic may be bursty and packet loss should be kept small, ports are equipped with buffers where packets can be stored temporarily. The limited buffer capacities make effective buffer management strategies essential for maximizing the throughput at switches and routers. As a result, there has recently been considerable research interest in the design and analysis of various buffer management policies [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. In this thesis, we consider two different buffer scenarios, multiqueue buffering and bounded delay buffering, which we describe in the next sections.

1.2.1 Multiqueue packet buffering

This problem occurs in crossbar switches as illustrated in Figure 1.1. The switch has m input ports, each of which is linked to a set of output ports. Each input port i has, for each linked output port j,
a buffer Q_ij organized as a queue. In each queue Q_ij, a limited number of packets can be stored simultaneously. At any time step, new packets may arrive at the input ports and can be appended to the respective buffers if space permits; in the event of buffer overflow, some packets must be dropped. For each output port j, the switch can select one non-empty queue Q_ij per time step and transmit the packet at its head through the output port. Since the queues Q_ij and Q_ij′ are independent of each other for different output ports j and j′, we manage the distinct output ports separately and only consider a set of queues {Q_1j, ..., Q_mj}. Our goal is to maximize the throughput, i.e. the total number of transmitted packets.

We emphasize that we consider all packets to be equally important, i.e. all of them have the same (unit) value, and to be of the same (unit) size. Thus, we do not have to pay attention to which particular packet is to be transmitted; it suffices to determine which queue is to be served next. Most current networks, in particular IP networks, treat packets from different data streams equally in their intermediate routers. Furthermore, we assume that each queue has the same buffer size B. Thus, each queue can store up to B packets simultaneously due to the unit packet size. The buffer size B is large, typically several hundreds or thousands.

For a fixed output port j, packet acceptance and transmission work as follows: Suppose that queue Q_ij currently stores b_i packets and that a_i new packets arrive there. If b_i + a_i ≤ B, then all new packets can be accepted; otherwise, a_i + b_i − B packets must be dropped. In any time step, the switch can select one non-empty buffer and transmit the packet at its head through the output port. We assume w.l.o.g. that the packet arrival step precedes the transmission step.
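One time step of this model (arrivals, then one transmission) can be sketched as follows. This is our own minimal simulation of the rules just described, with a greedy service choice picked purely for concreteness.

```python
from collections import deque

def arrival_step(queues, arrivals, B):
    """Append arriving packets queue by queue; packets beyond capacity B are dropped."""
    dropped = 0
    for i, a in enumerate(arrivals):
        accepted = min(a, B - len(queues[i]))
        queues[i].extend(["pkt"] * accepted)
        dropped += a - accepted          # a_i + b_i - B packets lost on overflow
    return dropped

def transmit_greedy(queues):
    """Serve one longest non-empty queue and return 1 if a packet was transmitted."""
    i = max(range(len(queues)), key=lambda j: len(queues[j]))
    if queues[i]:
        queues[i].popleft()
        return 1
    return 0

# One time step with m = 3 queues and B = 2 (arrivals precede transmission):
qs = [deque(), deque(["pkt"]), deque(["pkt", "pkt"])]
lost = arrival_step(qs, [3, 0, 1], 2)  # queue 0 drops 1 packet, queue 2 drops 1
sent = transmit_greedy(qs)
print(lost, sent)  # → 2 1
```

Because packets have unit value and unit size, the simulation only needs queue lengths; which individual packet is sent never matters, exactly as noted above.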
Figure 1.1: Crossbar switch with m input ports I_1, ..., I_m and m output ports O_1, ..., O_m.

1.2.2 Bounded delay buffering

Since different data streams may be of different importance, network users want to obtain a service that is appropriate to their purposes at a suitable cost-performance ratio. This is achieved by Quality of Service (QoS) networks, where the traffic is partitioned into several service classes according to the users' requirements, thus allowing prioritization. Since, moreover, the users want their data to arrive quickly at their respective destinations, each data packet has a deadline up to which the routing service must be accomplished for the network operator to be credited for this transmission; we call models with such deadline constraints bounded delay models. Within this framework, we study the problem of throughput
maximization with respect to the given QoS classes, assuming a single buffer where up to B packets of unit size may be stored simultaneously. We model the QoS classes by attributing to each packet, belonging to a data stream of a certain service level, a corresponding packet value, where packets with a higher priority are given a greater value. The deadline constraint is taken into account by dropping those packets whose deadlines have passed, for we shall no longer be paid for their transmission. Thus, the throughput is given by the total sum of values of those packets that are transmitted within their deadlines. We investigate deadline-restricted QoS packet routing under bounded buffering, i.e. buffers with limited capacity, in a general deadline framework without assuming special deadline constraints.

The injection, transmission and rejection of packets are processed in the following way. Each time step t consists of three substeps: Firstly, the packets p with deadline d(p) < t are removed from the buffer. Secondly, an arbitrary number of packets may arrive. If there is a buffer overflow, we must decide which packets are rejected, where packets may be discarded irrespective of whether they have already been stored in the buffer or have just arrived, i.e. preemption is allowed. The accepted new packets are inserted into the buffer. Thirdly, one of the packets in the buffer is chosen for transmission, irrespective of the packets' arrival order.

For both settings, information on future packet arrivals is usually very limited or not available at all. We do not make any probabilistic assumptions about the input, but investigate an online setting where, at any time, future packet arrivals are unknown.
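The three substeps of a time step can be sketched as follows. The expiry and insertion rules follow the description above; the particular drop and transmission choices (keep and send the most valuable packets) are just one possible online policy of our own, not the algorithms analyzed in Chapter 4.

```python
def bounded_delay_step(buffer, arrivals, t, B):
    """One time step of the bounded-delay model. Packets are (value, deadline)
    pairs; the buffer holds at most B packets of unit size."""
    # Substep 1: remove packets whose deadline has passed.
    buffer[:] = [p for p in buffer if p[1] >= t]
    # Substep 2: admit arrivals. Preemption is allowed, so stored and arriving
    # packets compete equally; here we simply keep the B most valuable ones.
    buffer.extend(arrivals)
    buffer.sort(key=lambda p: p[0], reverse=True)
    del buffer[B:]
    # Substep 3: transmit one buffered packet; here, a most valuable one.
    gain = buffer.pop(0)[0] if buffer else 0
    return gain

buf = [(5, 3), (1, 1)]
g = bounded_delay_step(buf, [(7, 4), (2, 2)], t=2, B=3)
print(g, buf)  # transmits value 7; the (1, 1) packet expired → 7 [(5, 3), (2, 2)]
```

The throughput is the sum of the returned gains over all steps, i.e. the total value of packets sent within their deadlines.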
We are interested in online buffer management strategies that have a provably good performance, and we measure the performance of online algorithms by their competitive ratio, using the concept of competitive analysis introduced in Section 1.1.

1.3 Outline of the thesis

In Chapter 2, we investigate deterministic online algorithms for the multiqueue problem. We derive lower bounds for greedy and other deterministic algorithms in Section 2.2. Moreover, we show in Section 2.3 that a modified (semi-)greedy algorithm has a better competitive ratio than the greedy algorithm itself, and we analyze the performance of online algorithms that are granted more resources than the optimal offline algorithm they are compared to; we consider resource augmentation with respect to memory and speed. In Section 2.4, we discuss the special case of a router having only two ports and show both lower and upper bounds for this setting. Finally, we present an optimal offline algorithm with a linear running time in Section 2.5.

The analysis of randomized online algorithms for the multiqueue buffering problem is given in Chapter 3. First, we discuss a randomized lower bound for arbitrary buffer sizes in Section 3.2. We then show in Section 3.3 how to generalize algorithms for unit buffers, which can store only one packet per queue, to arbitrary buffers without increasing their competitive ratio. In Section 3.4, we investigate an online algorithm that tosses a multisided coin in every time step, whereas, in Section 3.5, we consider a randomized online algorithm that makes all random decisions in advance and then acts like a deterministic
algorithm. In Chapter 4, we discuss the bounded delay buffering problem for weighted packets in a single queue, introduced in Section 1.2.2. We first deduce a randomized lower bound in Section 4.2. In Section 4.3 and Section 4.4, we investigate two different greedy algorithms and show that their competitive ratios differ. Finally, we consider the special case that there are only two packet values and present both lower and upper bounds for this setting in Section 4.5.

Parts of this thesis were published in the following papers: Chapter 2 except Section 2.4 and Section 3.2 are based on [2]; Chapter 3 except Section 3.2 and Section 2.4 are based on [2].
Chapter 2

Deterministic multiqueue algorithms

In this chapter, we discuss deterministic online algorithms for the multiqueue packet buffering problem with respect to throughput maximization, as introduced in Section 1.2.1.

2.1 Introduction

First, we summarize which results were known before work on this thesis began and give an overview of our contribution to the subject.

Previous work: Azar and Richter [5] observed that any work-conserving algorithm ALG, i.e. any algorithm that always serves some non-empty queue if one exists, is 2-competitive: Partition the arrival sequence σ into subsequences σ_l such that ALG's buffers are empty at the end of each σ_l. W.l.o.g. we postpone the beginning of σ_{l+1} until OPT has emptied its buffers, too. If OPT buffers b_i packets in queue i at the end of subsequence σ_l, then at least b_i packets must have arrived there during σ_l. ALG has transmitted at least Σ_{i=1}^{m} b_i packets because it has accepted at least Σ_{i=1}^{m} b_i packets and all its buffers are empty again. Since both ALG and OPT transmit exactly one packet during each time step and OPT still buffers Σ_{i=1}^{m} b_i packets when ALG's buffers become empty, OPT delivers Σ_{i=1}^{m} b_i packets more than ALG does; hence OPT's profit is at most twice ALG's. Prior to our work, no deterministic online algorithm with a competitive ratio smaller than 2 was known. Azar and Richter [5] showed that if B = 1, no deterministic strategy can be better than (2 − 1/m)-competitive. For arbitrary B, they gave a lower bound of 1.366. Bar-Noy et al. [10] and Fleischer and Koga [3] studied buffer management policies when buffers have unlimited capacity and one wishes to minimize the maximum queue length, which is a kind of dual problem to ours. The minimization of the maximum queue length corresponds to the minimization of the buffer size B subject to the constraint that no data packet be lost. They showed a lower bound of Ω(log m) and presented Θ(log m)-competitive online algorithms.
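Azar and Richter's observation can be checked experimentally on tiny instances. The sketch below is our own code, not from the thesis: it compares a greedy work-conserving policy with a brute-force offline optimum; trailing idle steps let both policies empty their buffers, so that on every tested instance the optimum gains at most a factor of 2.

```python
import random
from functools import lru_cache

def greedy_throughput(arrivals, m, B):
    """Work-conserving greedy policy: always serve a longest non-empty queue."""
    qs = [0] * m
    sent = 0
    for step in arrivals:
        qs = [min(q + a, B) for q, a in zip(qs, step)]  # accept up to capacity B
        i = max(range(m), key=lambda j: qs[j])
        if qs[i]:
            qs[i] -= 1
            sent += 1
    return sent

def opt_throughput(arrivals, m, B):
    """Offline optimum via exhaustive search over service decisions (tiny inputs only)."""
    @lru_cache(maxsize=None)
    def best(t, qs):
        if t == len(arrivals):
            return 0
        after = tuple(min(q + a, B) for q, a in zip(qs, arrivals[t]))
        res = best(t + 1, after)          # serve nothing this step
        for i in range(m):
            if after[i]:
                nxt = after[:i] + (after[i] - 1,) + after[i + 1:]
                res = max(res, 1 + best(t + 1, nxt))
        return res
    return best(0, (0,) * m)

random.seed(1)
m, B, T = 2, 2, 4
drain = ((0,) * m,) * (m * B)  # idle steps so both policies can empty their buffers
for _ in range(50):
    arr = tuple(tuple(random.randint(0, 3) for _ in range(m)) for _ in range(T)) + drain
    g, o = greedy_throughput(arr, m, B), opt_throughput(arr, m, B)
    assert g <= o and (o <= 2 * g or g == 0)  # the 2-competitiveness bound
print("ok")
```

The exhaustive search is exponential in the sequence length, which is why the offline optimum is only computed here for very small m, B and T.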
Our contribution: In the first part of the chapter, we settle the competitive performance of the entire family of greedy algorithms. In practice, greedy algorithms are most important. At any time, a greedy algorithm serves a queue that currently buffers the largest number of packets. Serving the longest queue is
a very reasonable strategy to avoid packet loss if future arrival patterns are unknown. Moreover, greedy strategies are interesting because they are fast and use little extra memory. A switch can afford neither complex computations to decide which queue to serve nor sufficient memory to maintain detailed information on past or current configurations. In this chapter, we present a thorough study of greedy algorithms and their variants. We prove that a greedy algorithm cannot be better than 2-competitive, no matter how ties are broken. Since any work-conserving algorithm is 2-competitive, the competitiveness of any greedy policy is indeed exactly 2. Our lower bound construction is involved and relies on a new recursive construction for building dynamic adversarial buffer configurations. We use a variant of our technique to develop a lower bound for any deterministic online algorithm and show that, for any buffer size B, no deterministic online strategy ALG can achieve a competitiveness smaller than e/(e − 1) ≈ 1.58. Interestingly, we establish this bound by comparing the throughput of ALG to that of any greedy algorithm. Although, in terms of competitiveness, greedy algorithms are not better than arbitrary work-conserving algorithms, greedy strategies are important from a practical point of view. Therefore, it is interesting to consider variants of greedy policies and to analyze greedy approaches in extended problem settings.

In the second part of the chapter, we develop a slightly modified deterministic greedy strategy, called Semi-Greedy (SGR), and prove that it achieves a competitive ratio of 17/9 ≈ 1.89. We conjecture that SGR is actually an optimal deterministic algorithm because, for B = 2, we give a proof that it achieves an optimal competitiveness of 13/7 ≈ 1.86. These results show, in particular, that deterministic algorithms can beat the factor of 2 and perform better than arbitrary work-conserving strategies.
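The Semi-Greedy serving rule, described in detail in the next paragraph, can be rendered directly in code. The sketch below is our own reading of it: serve a longest queue whenever some queue holds more than B/2 packets; otherwise prefer a longest queue that has never been full. Tie-breaking by lowest index is an arbitrary assumption of ours, and `ever_full[i]` is the one bit per queue that the rule requires.

```python
def sgr_choose(queues, ever_full, B):
    """Return the index of the queue Semi-Greedy serves, or None if all are empty.
    queues[i] is the current length of queue i; ever_full[i] records whether
    queue i has ever buffered B packets."""
    nonempty = [i for i, q in enumerate(queues) if q > 0]
    if not nonempty:
        return None
    longest = max(queues[i] for i in nonempty)
    if longest > B / 2:
        # Some queue holds more than B/2 packets: serve a longest queue.
        return min(i for i in nonempty if queues[i] == longest)
    # All queues hold at most B/2 packets: prefer a longest queue that has
    # never been full, to establish some fairness among the queues ...
    fresh = [i for i in nonempty if queues[i] == longest and not ever_full[i]]
    if fresh:
        return fresh[0]
    # ... otherwise fall back to serving any longest queue.
    return min(i for i in nonempty if queues[i] == longest)

# B = 4, lengths (2, 2, 1), queue 0 was once full: the fairness rule picks queue 1.
print(sgr_choose([2, 2, 1], [True, False, False], 4))  # → 1
```

The caller would set `ever_full[i] = True` whenever queue i reaches length B after an arrival step, which is the only bookkeeping SGR needs beyond greedy.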
The new SGR algorithm is simple. If there is a queue buffering more than B/2 packets, SGR serves a longest queue. If all queues store at most B/2 packets, SGR serves a longest queue among those that have never buffered B packets, provided there is one; otherwise, SGR serves a longest queue. The idea of this rule is to establish some fairness among the queues. SGR is essentially as fast as greedy: it can be implemented such that at most one extra comparison is needed in each time step. The extra memory requirements are also low; for each queue, we have to maintain only one bit indicating whether or not the queue has ever buffered B packets. SGR deviates from the standard greedy strategy only if each queue buffers at most B/2 packets and, hence, the risk of packet loss is low. Thus, we consider SGR to be a very practical algorithm.

We analyze SGR by defining a new potential function that measures the number of packets that SGR has already lost or could lose if an adversary replenishes corresponding queues. In contrast to standard amortized analysis, we do not bound the potential change in each time step. Rather, we show that if the potential increased at T_1 time steps and T_1 > C_1 for some constant C_1, then the potential must have decreased at T_2 steps with T_2 > C_2 for some other constant C_2 depending on C_1.

In the second part of the chapter, we also study the case that an online algorithm is granted more resources than an optimal offline algorithm, and we show that we can then beat the competitiveness of 2. We consider resource augmentation with respect to memory and speed, i.e. we study settings in which an online algorithm has (a) larger buffers for each queue or (b) a higher transmission rate. For scenario (a), we prove that any greedy algorithm achieves a competitive ratio of (2B + A)/(B + A) if it has an additional
buffer of A in each queue. Hence, by doubling the buffer capacities, we obtain a performance ratio of 1.5. We show that this bound does not hold for all work-conserving algorithms. For scenario (b), we show an upper bound of 1 + 1/k for all work-conserving algorithms if, in each time step, an online algorithm can transmit k times as many packets as an adversary. Again, by doubling the transmission rate, we obtain a competitiveness of 1.5.

In the constructions of the lower bounds above, the number m of ports is assumed to be large compared to the buffer queue capacity B. Since, in practice, the port number is rather small and, thus, the packet capacity satisfies B ≥ m, we deem this orthogonal problem to be of independent interest. Moreover, greedy algorithms are, as aforementioned, very important in practice because they are fast, use little extra memory and reduce packet loss by always serving a longest queue. In the last part of the chapter, we consider buffers with m = 2 queues, called bicodal buffers. For this setting, we show a lower bound of 16/13 ≈ 1.23 for any online algorithm and prove that the competitive ratio of greedy algorithms is 9/7 ≈ 1.29, improving the best previously known upper bound of 2 − 1/m = 3/2 shown in [5]. Finally, we give a linear-time offline algorithm for computing an optimal service schedule maximizing the throughput.

This chapter is organized as follows: In Section 2.2, we develop our lower bounds. We present the new SGR algorithm and investigate scenarios with resource augmentation in Section 2.3. In Section 2.4, the greedy algorithm for bicodal buffers is analyzed. The optimal offline algorithm is given in Section 2.5.

2.2 Lower bounds

In this section, we derive lower bound results for deterministic online algorithms. First, we analyze the family of greedy algorithms, which are of major importance for practical issues.
Then, we develop lower bounds for arbitrary deterministic online strategies.

2.2.1 Greedy algorithms

Formally, we call an online algorithm GR greedy if GR always serves a longest queue. Greedy algorithms may differ in the way ties are broken when several queues currently store a maximum number of packets. The tie-breaking rule may also be randomized. For greedy algorithms, we have the following lower bound:

Theorem 2.1. For any B, the competitive ratio of any randomized greedy algorithm GR is not smaller than 2 − 1/B if m ≥ B.

Proof. Fix a buffer size B > 0. We show that there exist infinitely many m and associated packet arrival sequences for m queues such that the throughput achieved by an adversary ADV is at least
2 − 1/B − Θ(m^(−1/2^(B−2))) times that achieved by GR. This proves the theorem. We use arrival sequences with the property that, whenever there are several queues of maximum length, all of these queues are served once before the next packets arrive. Thus, the tie-breaking criteria need not be considered, and it does not matter whether they use randomization.

Let µ ≥ 2 be an integer and b = 2^(B−2). Set m = µ^b. We construct a recursive partitioning of the m queues (cf. Figures 2.1 to 2.3). For any i with 1 ≤ i ≤ B − 2, let m_i = m^(1/2^i). The m queues are divided into m_1 blocks, each of them consisting of m_1 subsequent queues. These blocks are labeled 1, ..., m_1 in ascending order. Block n_1 with 1 ≤ n_1 ≤ m_1 is subdivided into m_2 blocks, each of them consisting of m_2 subsequent queues, labeled (n_1, 1), ..., (n_1, m_2). This partitioning is repeated up to level B − 2. In general, any block (n_1, ..., n_i) at level i, consisting of m_i queues, is subdivided into m_{i+1} blocks each containing m_{i+1} queues; these blocks are labeled (n_1, ..., n_i, 1), ..., (n_1, ..., n_i, m_{i+1}). Note that a block (n_1, ..., n_{B−2}) at level B − 2 consists of exactly µ queues. Figure 2.1 shows all buffers and their partition into m_1 = m^(1/2) blocks of level 1, labeled 1, ..., m_1, each consisting of m_1 queues. In Figure 2.2, the partition of one such block n_1 ≤ m_1 into m_2 = m^(1/4) subblocks of level 2, labeled (n_1, 1), ..., (n_1, m_2), is illustrated. Finally, Figure 2.3 demonstrates the partitioning of the entire buffer into m/µ blocks (n_1, ..., n_{B−2}), 1 ≤ n_i ≤ m_i for 1 ≤ i ≤ B − 2, of level B − 2, where each block comprises µ queues.

Figure 2.1: Partitioning into m_1 = m^(1/2) blocks.
Figure 2.2: Partitioning of block (n_1) into m_2 = m^(1/4) subblocks.
Figure 2.3: Partitioning into blocks each consisting of µ queues at the final level B − 2.
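The recursive partition sizes can be reproduced numerically. The helper below is our own illustration of the definitions m = µ^(2^(B−2)) and m_i = m^(1/2^i); it is not part of the proof.

```python
def partition_levels(mu, B):
    """Return (m, [m_1, ..., m_{B-2}]) for the recursive block partition:
    m = mu^(2^(B-2)) queues, and each level-i block of m_i queues splits into
    m_{i+1} subblocks of m_{i+1} queues, with m_i = m^(1/2^i)."""
    b = 2 ** (B - 2)
    m = mu ** b
    sizes = [round(m ** (1 / 2 ** i)) for i in range(1, B - 1)]
    return m, sizes

# mu = 2, B = 4: m = 16 queues, m_1 = 4, and m_2 = 2 = mu at the final level B - 2.
m, sizes = partition_levels(2, 4)
print(m, sizes)  # → 16 [4, 2]
```

Note that each level-i block indeed splits exactly, since m_i = m_{i+1}^2, and the final size always equals µ, matching the claim that a level-(B−2) block has exactly µ queues.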
Figure 2.4: Structure of the staircase centered at (n_1, ..., n_{B−2}), for B = 6.

We define a lexicographic ordering on the (B−2)-tuples (n_1, ..., n_{B−2}) in the standard way. Given (n_1, ..., n_{B−2}) and (n′_1, ..., n′_{B−2}), we have (n_1, ..., n_{B−2}) < (n′_1, ..., n′_{B−2}) if and only if n_i < n′_i for some i and n_j = n′_j for all j < i. Furthermore, (n_1, ..., n_{B−2}) ≤ (n′_1, ..., n′_{B−2}) if (n_1, ..., n_{B−2}) < (n′_1, ..., n′_{B−2}) or n_i = n′_i for all 1 ≤ i ≤ B − 2. Throughout this section, tuples (n_1, ..., n_{B−2}), (n′_1, ..., n′_{B−2}) and (n″_1, ..., n″_{B−2}) are denoted by N, N′ and N″, respectively.

The basic idea of the lower bound construction is to maintain a staircase of packets in GR's queues, where we call a buffer configuration a staircase centered at block N = (n_1, ..., n_{B−2}) if GR's queues in any block N′ = (n′_1, ..., n′_{B−2}) buffer i packets whenever n′_j = n_j for j ≤ i but n′_{i+1} ≠ n_{i+1}. The structure of such a staircase is illustrated in Figure 2.4: each block N′ buffers as many packets as given by the length of the longest common label prefix with block N. The size of the different stairs, including substairs of higher level, is given by the arrow diagram to the right of the buffer image. During our construction, the staircase center moves through the blocks in increasing lexicographic order. Note that at each center move from (n_1, ..., n_{B−2}) to its successor (n′_1, ..., n′_{B−2}), where (n_1, ..., n_i) = (n′_1, ..., n′_i) is the longest common label prefix of the two blocks, only the queues in blocks whose labels also have this prefix are affected. When the center is located at (n_1, ..., n_{B−2}), we force a packet loss of B at each of GR's queues in that block. ADV will be able to accept all packets and essentially has full queues in all blocks (n′_1, ..., n′_{B−2}) that are lexicographically smaller than (n_1, ..., n_{B−2}). When the construction ends, almost all of ADV's queues are fully populated while GR's queues are empty. Since a total of nearly mB packets are transmitted by ADV and GR during the construction, this gives the desired lower bound.

Formally, we process blocks (n_1, ..., n_{B−2}) with n_i ≥ 2 for all i in increasing lexicographic order. Blocks (n_1, ..., n_{B−2}) with n_i = 1 for some i are special in that fewer than B packets will arrive there. When we start processing a block N = (n_1, ..., n_{B−2}) with n_i ≥ 2 for all i, certain invariants given below hold. We show how to process it such that the invariants are also true when we start processing the next block.

(G1) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ < N and n′_i ≥ 2 for all i. GR buffers exactly j + 1 packets in each of its queues if j is the largest index with n′_1 = n_1, ..., n′_j = n_j.

(G2) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ < N such that n′_i = 1 and n′_j ≥ 2 for all j < i. GR has i packets in each of its queues if n′_j = n_j for all j < i.

(G3) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ ≥ N. GR buffers exactly j packets in each of its queues if j is the largest index with n′_1 = n_1, ..., n′_j = n_j.

(A1) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ < N and n′_i ≥ 2 for all i. ADV has B packets in each of the first m_{B−2} − 1 = µ − 1 queues and B − 1 packets in the last queue of this block.

(A2) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ < N such that n′_i = 1 and n′_j ≥ 2 for all j < i. ADV has two packets in each of its queues if n′_j = n_j for all j < i, and one packet in those queues otherwise.

(A3) Let N′ = (n′_1, ..., n′_{B−2}) be a block with N′ ≥ N. ADV has 0 packets in each of its queues.

Initialization: We show how to establish the six invariants for the block N = 2_{B−2} := (2, ..., 2), consisting of B − 2 twos; Figure 2.5 illustrates the initial configurations described next.
At the beginning, 2m_1 packets arrive in the queues of block (1) at level 1, two packets in each queue, and m_1 packets arrive in the queues of block (2) at level 1, one packet in each queue. GR starts transmission from block 1 while ADV does so from block 2. After both GR and ADV have transmitted m_1 packets, we jump to level 2, where the arrival pattern is repeated at smaller scale: 2m_2 packets arrive in block (2, 1), two in each queue, and m_2 packets arrive in block (2, 2), one in each queue. We continue to copy and scale down this pattern until the blocks (2, ..., 2, 1) and (2, ..., 2, 2) = 2_{B−2} at level B − 2 are reached. At level i, GR always clears m_i packets in block (2, ..., 2, 1), with i − 1 leading twos, while ADV does so in (2, ..., 2), with i twos. In the example of Figure 2.5, we have B = 5; thus, GR transmits m_1 packets from block (1), m_2 packets from block (2, 1), and m_3 packets from block (2, 2, 1), whereas ADV transmits m_1 packets from block (2), m_2 packets from block (2, 2), and m_3 packets from block (2, 2, 2).
2.2. LOWER BOUNDS

[Figure 2.5: Queue configurations of GR and ADV when we start processing block $(2,2,2)$ ($B = 5$). Figure 2.6: Queue configurations of GR and ADV when we start processing block $(n_1,n_2,n_3)$ ($B = 5$).]

Invariants (A1) and (G1) trivially hold because there is no block $N' < 2^{B-2}$ with $n'_i \ge 2$ for all $i \le B-2$. If $N' < 2^{B-2}$ and $i$ is the smallest index with $n'_i = 1$, then $n'_1 = \dots = n'_{i-1} = 2$, and, thus, $n'_j = n_j$ for all $j < i$. Since each queue in $N'$ received one packet at each of the levels $1,\dots,i-1$ and two packets at level $i$, the queues in $N'$ received $i+1$ packets, $i-1$ of which have been transmitted by ADV (at levels $1,\dots,i-1$), while only one of them has been transmitted by GR (at level $i$). Thus, ADV and GR buffer $(i+1)-(i-1) = 2$ and $(i+1)-1 = i$ packets, respectively, in each queue of $N'$, and, hence, invariants (A2) and (G2) hold. If $N' \ge 2^{B-2}$ and $j$ is the largest index with $n'_1 = n_1,\dots,n'_j = n_j$, then the queues in $N'$ received $j$ packets, all of which have been transmitted by ADV, while none of them have been transmitted by GR, giving that invariants (A3) and (G3) hold.

Processing of a block: We next describe how to process block $N = (n_1,\dots,n_{B-2})$. Figure 2.6 shows the buffer configurations when the processing starts: In GR's configuration, the staircase is centered at block $N$, whereas, in ADV's configuration, blocks $N' < N$ are completely populated (only the last queue of each block buffers one packet less) unless the label of $N'$ contains a 1. In this case, the queues of $N'$ buffer two packets if $N'$ and $N$ have a common prefix; one packet, otherwise. Let $q_1,\dots,q_{m_{B-2}}$ be the $m_{B-2}$ queues of block $N$. By (G3), GR has $B-2$ packets in each of these queues. By (A3), ADV stores no packets there.
We subsequently apply the following arrival pattern to each but the last of the $q_j$'s: $B$ packets arrive at queue $q_j$ and one packet at queue $q_{j+1}$. First, GR accepts two packets in $q_1$ and one packet in $q_2$. Then, $q_1$ is completely populated while $q_2$ still has one free buffer cell left. Afterwards, GR transmits a packet from $q_1$. At the arrival of $B$ packets at $q_2$, GR must reject all but one of them and can accept the additional packet at $q_3$ as well. This behavior is repeated until the last queue in $N$ is reached. In contrast, ADV always processes the single packet in $q_{j+1}$ in order to be able to accept $B$ packets in the next step. When $B$ packets arrive at $q_{m_{B-2}}$, we leave the additional packet out because we would
cross the boundary to the next block of level $B-2$. We assume that both ADV and GR then transmit one packet from $q_{m_{B-2}}$. Hence, GR stores $B-1$ packets in each queue of $N$, whereas ADV buffers $B$ packets in all but the last queue and $B-1$ packets in the last one. Note that the invariants (A1)-(A3) and (G1)-(G3) still hold for all queues except those in $N$ because the processing so far only affected $N$ itself. We next show how to establish the six invariants for the next block $N' = (n'_1,\dots,n'_{B-2})$ in the lexicographic order satisfying $n'_i \ge 2$ for all $i$. We distinguish cases depending on whether or not $n'_{B-2} = n_{B-2} + 1$.

Case 1 ($n'_{B-2} = n_{B-2} + 1$): The processing steps to be described are exemplified in Figure 2.7. Since $n'_{B-2} = n_{B-2} + 1$, blocks $N$ and $N'$ belong to the same block of level $B-3$. By (G3), GR buffers $B-3$ packets in each queue of $N'$. Now, $m_{B-2}$ packets arrive at $N'$, one at each of the queues (cf. 2.7(a)). Thus, each queue in $N$ buffers $B-1$ packets, and each queue in $N'$ buffers $B-2$ ones (cf. 2.7(b)). In the following $m_{B-2}$ time steps GR transmits one packet from each queue in $N$ while ADV serves the queues in $N'$, which are then empty again in ADV's configuration (cf. 2.7(c)). Since only $N$ and $N'$ have been affected, the invariants (G1), (G2) and (G3) hold for $N'$ as well. The same is true for (A1) because the statement holds for block $N$, as argued at the end of the last paragraph. (A2) was not affected during the processing of $N$ because $n_i \ge 2$ for all $i \le B-2$. Since no new packets arrived at blocks that are lexicographically larger than $N'$, (A3) is also satisfied.

Case 2 ($n'_{B-2} \ne n_{B-2} + 1$): How the blocks under concern are to be processed in this case is illustrated in Figure 2.8 and Figure 2.9. The lower case letters (a),...,(h) refer to the drawings in these two figures. Since $n'_{B-2} \ne n_{B-2} + 1$, blocks $N$ and $N'$ do not belong to the same block at level $B-3$. Let $i$ be the largest index such that $n_i < m_i$, i.e., there is another block at level $i$.
Hence, $N = (n_1,\dots,n_i,m_{i+1},\dots,m_{B-2})$ and $N' = (n_1,\dots,n_{i-1},n_i+1,2,\dots,2)$. In the following $\sum_{j=i+1}^{B-2} m_j$ time steps, no new packets arrive, and since (G1) and (G2) hold, GR transmits $m_j$ packets from the queues of block $(n_1,\dots,n_i,m_{i+1},\dots,m_j)$, for $j = B-2,\dots,i+1$ (cf. (a)). During each iteration, one packet is transmitted from each of these queues. In these time steps, for $j = B-2,\dots,i+1$, ADV transmits one packet from each of the queues in $(n_1,\dots,n_i,m_{i+1},\dots,m_{j-1},1)$. By invariant (A2), these queues hold exactly two packets in ADV's configuration and store precisely one packet after the transmission.

In the next time step, $m_i$ packets arrive at the queues of $(n_1,\dots,n_{i-1},n_i+1)$, one packet at each of these queues (cf. (b)). In GR's configuration at that time, the queues in $(n_1,\dots,n_i)$ buffer exactly $i+1$ packets while all other queues buffer less. GR then transmits one packet from each of the queues in $(n_1,\dots,n_i)$ while ADV serves the queues in $(n_1,\dots,n_{i-1},n_i+1)$ so that they are empty again (cf. (c)).

In the following $\sum_{j=i+1}^{B-2} m_j$ time steps, we restore in GR's configuration the full staircase on top of the $i$ packets in the queues of $(n_1,\dots,n_{i-1},n_i+1)$. More precisely, $2m_{i+1}$ packets arrive at the queues of $(n_1,\dots,n_{i-1},n_i+1,1)$, two at each of these queues, and $m_{i+1}$ packets at the queues of $(n_1,\dots,n_{i-1},n_i+1,2)$, one packet at each of these queues (cf. (d)). GR transmits one packet from each queue in $(n_1,\dots,n_{i-1},n_i+1,1)$ while ADV clears block $(n_1,\dots,n_{i-1},n_i+1,2)$ (cf. (e)). Then, $2m_{i+2}$ packets arrive in $(n_1,\dots,n_{i-1},n_i+1,2,1)$ and $m_{i+2}$ packets in $(n_1,\dots,n_{i-1},n_i+1,2,2)$ (cf. (f)). Again, GR serves
[Figure 2.7: Preparation of the next block $N'$ if blocks $N$ and $N'$ are adjacent ($B = 8$); configurations (a)-(c) for GR and for ADV. Legend: empty buffer cell, populated buffer cell, populated buffer cell to be cleared, arriving packet.]

the queues in the first of these blocks while ADV clears the second (cf. (g)). This process continues up to the blocks $(n_1,\dots,n_{i-1},n_i+1,2,\dots,2,1)$ and $(n_1,\dots,n_{i-1},n_i+1,2,\dots,2)$ at level $B-2$ (cf. (h)).

Lemma 2.1 Invariants (G1)-(G3) and (A1)-(A3) hold when we start processing block $N' = (n_1,\dots,n_{i-1},n_i+1,2,\dots,2)$.

Proof. Consider block $N' = (n_1,\dots,n_{i-1},n_i+1,2,\dots,2) =: (n'_1,\dots,n'_{B-2})$ and let $N'' = (n''_1,\dots,n''_{B-2})$ be an arbitrary block. We first study (G1). Let $N'' < N'$, $n''_j \ge 2$ for all $j$, and let $k$ be the largest index such that $n''_1 = n_1,\dots,n''_k = n_k$. If $k < i$, then there is nothing to show because the queues in $N''$ have not been touched by GR since the processing of $N$ started. If $k \ge i$, we have $N'' = (n_1,\dots,n_i,m_{i+1},\dots,m_k,n''_{k+1},\dots,n''_{B-2})$. So $N''$ is affected at the iteration steps for
[Figure 2.8: Preparation of the next block $N'$ if blocks $N$ and $N'$ are not adjacent ($B = 6$, $i = 2$), steps (a)-(d); configurations of GR and ADV with blocks $N = (n_1,n_2,m_3,m_4)$, $N' = (n_1,n_2+1,2,2)$, $R = (n_1,n_2,1)$, $S = (n_1,n_2,2)$, $U = (n_1,n_2,m_3,1)$, $V = (n_1,n_2,m_3,2)$, $W = (n_1,n_2+1,1)$, $X = (n_1,n_2+1,2,1)$, $Y = (n_1,n_2+1,2,m_4)$, $Z = (n_1,n_2+1,m_3)$; symbols used as defined in Figure 2.7.]
[Figure 2.9: Preparation of the next block $N'$ if blocks $N$ and $N'$ are not adjacent ($B = 6$, $i = 2$), steps (e)-(h); configurations of GR and ADV with the blocks $N$, $N'$, $R$, $S$, $U$, $V$, $W$, $X$, $Y$, $Z$ labelled as in Figure 2.8; symbols used as defined in Figure 2.7.]
$j = k,\dots,i$, hence $k-i+1$ times, where iteration $i$ corresponds to the subsequent transmission of one packet from each queue in $(n_1,\dots,n_i)$ (cf. (c)). Since $N''$ buffered $k+1$ packets before the processing of $N$ started, there are $k+1-(k-i+1) = i$ packets after the iteration steps. On the other hand, $i-1 = \max\{j : n''_1 = n'_1,\dots,n''_j = n'_j\}$. For $N'' = N$ the statement of (G1) also holds because exactly $i$ packets are buffered at these queues.

If $N'' < N$, statement (G2) holds because of the same arguments, starting with $k$ packets before the processing and eventually getting $i$ packets. If $N < N'' < N'$, then let $j$ be the largest index with $n''_1 = n'_1,\dots,n''_j = n'_j$. We have $i \le j < B-2$ and $n''_{j+1} = 1$. Since $n''_{i+1} = \dots = n''_j = 2$, GR buffers exactly $(j+2)-1 = j+1$ packets in the queues of $N''$. Hence, (G2) holds here as well. Moreover, since GR has $B-2$ packets in the queues of $N'$, (G3) holds for $N'' = N'$.

If $N'' > N'$, then we distinguish two cases. If $n''_l > n'_l$ for some $l \le i$, there is nothing to show because GR's configuration in these queues has not changed and the largest index $j$ with $n''_1 = n'_1,\dots,n''_j = n'_j$ is equal to the largest index $j$ with $n''_1 = n_1,\dots,n''_j = n_j$. If $n''_1 = n'_1,\dots,n''_i = n'_i = n_i+1$, then let $j$ be the largest index with $n''_1 = n'_1,\dots,n''_j = n'_j$. We have $n''_{i+1} = 2,\dots,n''_j = 2$ and $n''_{j+1} > 2$. Hence, the queues in $N''$ store exactly $j$ packets and (G3) holds.

Invariant (A1) obviously holds because it held when we started processing $N$ and the desired property was established for block $N$. There exist no blocks $N''$ with $N < N'' < N'$ and $n''_i \ge 2$ for all $i$. Now, let $N''$ be a block with $n''_l = 1$ and $n''_j \ge 2$ for all $j < l$. Invariant (A2) is satisfied for blocks $N''$ with $N'' < N$ because during the processing of $N$, ADV served the queues in $(n_1,\dots,n_i,m_{i+1},\dots,m_{j-1},1)$, for $j = B-2,\dots,i+1$, exactly once, thus reducing the number of packets stored there from 2 to 1.
If $N'' > N$, then the queues store exactly two packets, as desired, because $N''$ has received its two packets during the construction of the staircase centered at $N'$, and, in the construction step in which these two packets arrive, $N''$ is not served by ADV. Finally, (A3) holds because ADV has transmitted all packets that have arrived at blocks $N'' \ge N'$.

After processing the last block $(m_1,\dots,m_{B-2})$, no further packets arrive, but the iteration steps are executed as if there were another block (cf. (a)). Since there is no subsequent block any more, the packet arrival (cf. (b)) cannot take place. While GR transmits one packet from each queue in block $(m_1)$ (cf. (c)), we assume that ADV serves each queue in block $(1)$ precisely once. Since there are still two packets in each queue of block $(1)$ in ADV's configuration, we reduce the load there to one packet per queue as well. Then, we have the following configurations: From invariants (G1), (G2) and (G3), we derive that GR buffers one packet in each queue. Due to (A1), (A2) and (A3), ADV buffers $B$ or $B-1$ packets in the queues in blocks $(n_1,\dots,n_{B-2})$ with $n_i \ge 2$ for all $i$, while the others buffer exactly one packet, like GR does.

Let $T_G$ be the throughput achieved by GR and $T_A$ be the throughput achieved by ADV. For any block $(n_1,\dots,n_{B-2})$ with $n_i \ge 2$ for all $i$, GR transmits $B$ packets from each of the associated queues. For any block $(n_1,\dots,n_{i-1},1)$ with $n_j \ge 2$, for $j = 1,\dots,i-1$, GR transmits $i+1$ packets from each queue. There are $\prod_{j=1}^{i-1}(m_j-1)$ such blocks, each containing $m_i$ queues. Thus,
$$T_G = (m - \bar m)B + \delta_1$$
where
$$\bar m = \sum_{i=1}^{B-2}\prod_{j=1}^{i-1}(m_j-1)\,m_i \qquad\text{and}\qquad \delta_1 = \sum_{i=1}^{B-2}\prod_{j=1}^{i-1}(m_j-1)\,m_i\,(i+1).$$
The throughput of ADV is equal to that of GR plus the number of packets that are in ADV's final configuration when the processing of blocks ends minus the number of packets that are in GR's final configuration when the processing of blocks ends. Since each queue in any block $(n_1,\dots,n_{i-1},1)$ with $n_j \ge 2$, for $j = 1,\dots,i-1$, buffers precisely one packet in both ADV's and GR's final configuration, we can restrict ourselves to considering blocks $(n_1,\dots,n_{B-2})$ with $n_i \ge 2$ for all $i$. In ADV's final configuration, the queues in these blocks store $B$ packets, except for the last queue of each block, which buffers only $B-1$ packets. Using the facts that the set of blocks under concern consists of $m - \bar m$ queues and that each such block comprises $\mu$ queues, we derive that
$$T_A = T_G + (m-\bar m)B - \delta_2 - (m-\bar m),$$
where $\delta_2 = (m-\bar m)/\mu$ and the last subtrahend results from the load of one packet in each of the $m-\bar m$ queues in GR's final configuration. Hence
$$T_A \ge T_G + (m-\bar m)(B-1) - m/\mu.$$
Moreover,
$$m/\mu \in \Theta(m_{B-2}^2), \qquad \bar m \in \Theta\Big(\sum_{j=1}^{B-2} m_j^2\Big) = \Theta(m_{B-2}^2), \qquad \delta_1 \in \Theta\Big(B\sum_{j=1}^{B-2} m_j^2\Big) = \Theta(B\,m_{B-2}^2).$$
This implies
$$\frac{T_A}{T_G} \ge \frac{(m-\bar m)B + \delta_1 + (m-\bar m)(B-1) - m/\mu}{(m-\bar m)B + \delta_1} = 2 - \frac{(m-\bar m) + \delta_1 + m/\mu}{(m-\bar m)B + \delta_1} \ge 2 - \frac{1}{B} - \frac{\Theta(B\,m_{B-2}^2)}{mB} = 2 - \frac{1}{B} - \Theta\!\left(\frac{m_{B-2}^2}{m}\right).$$

2.2.2 Arbitrary deterministic algorithms

Having investigated greedy algorithms in the preceding subsection, we now discuss arbitrary deterministic online strategies, for which we derive the following lower bound result:

Theorem 2.2 The competitive ratio of any deterministic online algorithm ALG is at least $e/(e-1)$ if $m \gg B$.

Proof. Let $B$ be a positive integer representing the buffer size. For any positive integer $N$, let $m = (B+1)^N$ be the number of queues. Let the $B$ buffer columns be indexed $1,\dots,B$, where column $B$
is the one at the head of the queues and column 1 is the one at the tails. At the beginning, $B$ packets arrive at each of the $m$ queues such that all buffers are fully populated. We present a scheme $S$ for constructing request sequences $\sigma$. Our scheme has the property that the throughput achieved by an adversary ADV is at least $e/(e-1)$ times the throughput obtained by any greedy algorithm and that the throughput of any greedy strategy is at least as large as the throughput of any deterministic online algorithm ALG. This establishes the theorem. It will be sufficient to consider the standard greedy algorithm, denoted by GR, which serves the smallest indexed queue in case there is more than one queue of maximum length.

Our request sequence $\sigma$ consists of superphases $P_1,\dots,P_B$. Superphase $P_i$ consists of phases $(i,N),\dots,(i,1)$. In a phase $(i,s)$, essentially $(B+1)^{s-1}\left(\frac{B+1}{B}\right)^{i-2}$ packets arrive at the $q_s = (B+1)^{s-1}$ most populated queues in the online configuration. We first present an informal description of the superphases and phases, explaining how GR would serve them. Then, we give a formal definition with respect to any online strategy. When superphase $P_i$ is finished, the last $i$ columns in GR's buffers are empty while the other columns are fully populated. In particular, when $\sigma$ ends, all buffers are empty. Superphase $P_i$ is meant to empty column $i$. After phase $(i,s)$, the first $m - q_s$ queues contain $B-i$ packets while the remaining queues buffer $B-i+1$ packets. This load difference of one packet in the last $q_s$ queues compared to the others is sufficient for GR to serve each of these $q_s$ queues exactly once during the next $q_s$ time steps.
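The arithmetic behind the phase sizes can be sanity-checked directly: with $q_s = (B+1)^{s-1}$, the $q_{s+1}$ relevant queues of a phase split into $q_s B$ queues that are served once and $q_s$ queues that then receive a burst, since $q_{s+1} = q_s B + q_s$. A minimal check, with illustrative values for $B$ and $N$:

```python
B, N = 3, 3
m = (B + 1) ** N                                   # number of queues
q = [(B + 1) ** (s - 1) for s in range(1, N + 2)]  # q_1, ..., q_{N+1}

assert q[-1] == m          # q_{N+1} = m: the whole switch
for s in range(N):
    # q_{s+1} = q_s * B (queues served once) + q_s (queues hit by the burst)
    assert q[s + 1] == q[s] * B + q[s]
print(q)   # prints "[1, 4, 16, 64]"
```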
Since GR is a deterministic algorithm, the adversary knows, for each $k \le q_s$, which of the $q_s$ queues are served by GR during the first $k$ time steps of the next phase if no new packets arrive, and it can use this knowledge to serve other queues and make new packets arrive there in order to cause as large a packet loss for GR as possible.

We first describe superphase $P_1$, which we exemplify in Figure 2.10 for $B = 3$, $N = 3$, and start with phase $(1,N)$. Initially, all $q_{N+1} = m$ queues are fully populated. During the first $q_N B$ time steps, no packets arrive and GR transmits a packet from each of the first $q_N B$ queues. Then, $B$ packets arrive at each of the remaining
$$q_{N+1} - q_N B = (B+1)^N - (B+1)^{N-1}B = (B+1)^{N-1}(B+1-B) = (B+1)^{N-1} = q_N$$
queues, all of which must be dropped by GR because the last $q_N$ queues already buffer $B$ packets each. An adversary, on the other hand, could transmit all packets from the last $q_N$ queues during the first $q_N B$ time steps so that no packet loss occurs. At the end of phase $(1,N)$, the last $q_N$ queues are fully populated in GR's buffer configuration.

The arrival pattern now repeats for the other phases in $P_1$: At the beginning of $(1,s)$, the last $q_{s+1}$ queues in GR's configuration store $B$ packets each. During the first $q_s B$ time steps, no packets arrive and GR transmits one packet from each of the first $q_s B$ of these $q_{s+1}$ queues. Then, $B$ packets are sent to each of the last $q_s$ queues, all of which are lost by GR. Again, an adversary can accept all packets by transmitting from the last $q_s$ queues in the previous time steps. At the end of $(1,1)$, the last queue in GR's configuration contains $B$ packets while all other queues buffer $B-1$ packets. Now, there is one time step without packet arrivals such that GR can transmit one packet from the last queue and has exactly $B-1$ packets in each of its buffers.

In the example of Figure 2.10, there are $m = (B+1)^N = 4^3 = 64$ queues, all of which are initially completely populated. Illustration $(1,3)$
shows that, during the first $q_3 B = 16 \cdot 3 = 48$ time steps, i.e. in phase $(1,3)$, GR once serves each of the first 48 queues whereas ADV completely empties the remaining
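The course of superphase $P_1$ described above can be replayed in a few lines. The simulation below is a sketch that follows the stated phase mechanics (GR serves a longest queue, breaking ties in favour of the smallest index), with $B = 3$ and $N = 3$ as in the example of Figure 2.10:

```python
B, N = 3, 3
m = (B + 1) ** N      # number of queues, all initially full
gr = [B] * m          # GR's buffer occupancies
lost = 0

# superphase P_1: phases (1, N), ..., (1, 1) with q_s = (B + 1)^(s - 1)
for s in range(N, 0, -1):
    q = (B + 1) ** (s - 1)
    # q * B steps without arrivals: GR serves a longest queue,
    # preferring the smallest index among the longest ones
    for _ in range(q * B):
        gr[gr.index(max(gr))] -= 1
    # then B packets arrive at each of the last q queues; GR still
    # buffers B packets there, so it drops them all, whereas an
    # adversary that emptied exactly these q queues during the
    # preceding q * B steps would accept every packet
    for i in range(m - q, m):
        accepted = min(B, B - gr[i])
        gr[i] += accepted
        lost += B - accepted
gr[-1] -= 1           # the final step without arrivals clears the surplus

print(lost)           # prints "63": GR loses m - 1 packets, ADV loses none
```

For these sample values, GR drops $48 + 12 + 3 = 63 = m - 1$ packets over the three phases and ends the superphase with $B - 1$ packets in every queue, exactly as described in the text.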
Faculty of Computer Science, Electrical Engineering and Mathematics Algorithms and Complexity research group Jun.-Prof. Dr. Alexander Skopalik Online Algorithms Notes of the lecture SS3 by Vanessa Petrausch
More informationTCP is Competitive Against a Limited AdversaryThis research was supported in part by grants from NSERC and CITO.
TCP is Competitive Against a Limited AdversaryThis research was supported in part by grants from NSERC and CITO. Jeff Edmonds Computer Science Department York University, Toronto, Canada jeff@cs.yorku.ca
More informationData Gathering and Personalized Broadcasting in Radio Grids with Interferences
Data Gathering and Personalized Broadcasting in Radio Grids with Interferences Jean-Claude Bermond a,, Bi Li a,b, Nicolas Nisse a, Hervé Rivano c, Min-Li Yu d a Coati Project, INRIA I3S(CNRS/UNSA), Sophia
More informationA Framework for Scheduling with Online Availability
A Framework for Scheduling with Online Availability Florian Diedrich, and Ulrich M. Schwarz Institut für Informatik, Christian-Albrechts-Universität zu Kiel, Olshausenstr. 40, 24098 Kiel, Germany {fdi,ums}@informatik.uni-kiel.de
More informationA note on semi-online machine covering
A note on semi-online machine covering Tomáš Ebenlendr 1, John Noga 2, Jiří Sgall 1, and Gerhard Woeginger 3 1 Mathematical Institute, AS CR, Žitná 25, CZ-11567 Praha 1, The Czech Republic. Email: ebik,sgall@math.cas.cz.
More informationA Starvation-free Algorithm For Achieving 100% Throughput in an Input- Queued Switch
A Starvation-free Algorithm For Achieving 00% Throughput in an Input- Queued Switch Abstract Adisak ekkittikul ick ckeown Department of Electrical Engineering Stanford University Stanford CA 9405-400 Tel
More informationEnhancing Active Automata Learning by a User Log Based Metric
Master Thesis Computing Science Radboud University Enhancing Active Automata Learning by a User Log Based Metric Author Petra van den Bos First Supervisor prof. dr. Frits W. Vaandrager Second Supervisor
More informationPreemptive Online Scheduling: Optimal Algorithms for All Speeds
Preemptive Online Scheduling: Optimal Algorithms for All Speeds Tomáš Ebenlendr Wojciech Jawor Jiří Sgall Abstract Our main result is an optimal online algorithm for preemptive scheduling on uniformly
More informationSemi-Online Scheduling on Two Uniform Machines with Known Optimum Part I: Tight Lower Bounds
Angewandte Mathematik und Optimierung Schriftenreihe Applied Mathematics and Optimization Series AMOS # 27(2015) György Dósa, Armin Fügenschuh, Zhiyi Tan, Zsolt Tuza, and Krzysztof Węsek Semi-Online Scheduling
More informationOperations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads
Operations Research Letters 37 (2009) 312 316 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Instability of FIFO in a simple queueing
More informationA lower bound on deterministic online algorithms for scheduling on related machines without preemption
Theory of Computing Systems manuscript No. (will be inserted by the editor) A lower bound on deterministic online algorithms for scheduling on related machines without preemption Tomáš Ebenlendr Jiří Sgall
More informationS. ABERS Vohra [3] then gave an algorithm that is.986-competitive, for all m 70. Karger, Phillips and Torng [] generalized the algorithm and proved a
BETTER BOUNDS FOR ONINE SCHEDUING SUSANNE ABERS y Abstract. We study a classical problem in online scheduling. A sequence of jobs must be scheduled on m identical parallel machines. As each job arrives,
More informationAlmost Tight Bounds for Reordering Buffer Management *
Almost Tight Bounds for Reordering Buffer Management * Anna Adamaszek Artur Czumaj Matthias Englert Harald Räcke ABSTRACT We give almost tight bounds for the online reordering buffer management problem
More informationCMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017
CMSC CMSC : Lecture Greedy Algorithms for Scheduling Tuesday, Sep 9, 0 Reading: Sects.. and. of KT. (Not covered in DPV.) Interval Scheduling: We continue our discussion of greedy algorithms with a number
More informationNon-Preemptive and Limited Preemptive Scheduling. LS 12, TU Dortmund
Non-Preemptive and Limited Preemptive Scheduling LS 12, TU Dortmund 09 May 2017 (LS 12, TU Dortmund) 1 / 31 Outline Non-Preemptive Scheduling A General View Exact Schedulability Test Pessimistic Schedulability
More information1 Approximate Quantiles and Summaries
CS 598CSC: Algorithms for Big Data Lecture date: Sept 25, 2014 Instructor: Chandra Chekuri Scribe: Chandra Chekuri Suppose we have a stream a 1, a 2,..., a n of objects from an ordered universe. For simplicity
More informationNetworked Embedded Systems WS 2016/17
Networked Embedded Systems WS 2016/17 Lecture 2: Real-time Scheduling Marco Zimmerling Goal of Today s Lecture Introduction to scheduling of compute tasks on a single processor Tasks need to finish before
More informationCPSC 531: System Modeling and Simulation. Carey Williamson Department of Computer Science University of Calgary Fall 2017
CPSC 531: System Modeling and Simulation Carey Williamson Department of Computer Science University of Calgary Fall 2017 Motivating Quote for Queueing Models Good things come to those who wait - poet/writer
More informationMinimizing the Maximum Flow Time in the Online-TSP on the Real Line
Minimizing the Maximum Flow Time in the Online-TSP on the Real Line Sven O. Krumke a,1 Luigi Laura b Maarten Lipmann c,3 Alberto Marchetti-Spaccamela b Willem E. de Paepe d,3 Diana Poensgen a,3 Leen Stougie
More informationLIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974
LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the
More informationChapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.
Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should
More information250 (headphones list price) (speaker set s list price) 14 5 apply ( = 14 5-off-60 store coupons) 60 (shopping cart coupon) = 720.
The Alibaba Global Mathematics Competition (Hangzhou 08) consists of 3 problems. Each consists of 3 questions: a, b, and c. This document includes answers for your reference. It is important to note that
More informationChapter 4. Greedy Algorithms. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.
Chapter 4 Greedy Algorithms Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. 1 4.1 Interval Scheduling Interval Scheduling Interval scheduling. Job j starts at s j and
More informationDynamic Power Allocation and Routing for Time Varying Wireless Networks
Dynamic Power Allocation and Routing for Time Varying Wireless Networks X 14 (t) X 12 (t) 1 3 4 k a P ak () t P a tot X 21 (t) 2 N X 2N (t) X N4 (t) µ ab () rate µ ab µ ab (p, S 3 ) µ ab µ ac () µ ab (p,
More informationDecentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1
Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1 Stavros Tripakis 2 VERIMAG Technical Report TR-2004-26 November 2004 Abstract We introduce problems of decentralized
More informationRate-monotonic scheduling on uniform multiprocessors
Rate-monotonic scheduling on uniform multiprocessors Sanjoy K. Baruah The University of North Carolina at Chapel Hill Email: baruah@cs.unc.edu Joël Goossens Université Libre de Bruxelles Email: joel.goossens@ulb.ac.be
More informationClock-driven scheduling
Clock-driven scheduling Also known as static or off-line scheduling Michal Sojka Czech Technical University in Prague, Faculty of Electrical Engineering, Department of Control Engineering November 8, 2017
More informationInequality Comparisons and Traffic Smoothing in Multi-Stage ATM Multiplexers
IEEE Proceedings of the International Conference on Communications, 2000 Inequality Comparisons and raffic Smoothing in Multi-Stage AM Multiplexers Michael J. Neely MI -- LIDS mjneely@mit.edu Abstract
More informationSemi-Online Preemptive Scheduling: One Algorithm for All Variants
Semi-Online Preemptive Scheduling: One Algorithm for All Variants Tomáš Ebenlendr Jiří Sgall Abstract The main result is a unified optimal semi-online algorithm for preemptive scheduling on uniformly related
More informationComplexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler
Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard
More informationProbabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford
Probabilistic Model Checking Michaelmas Term 20 Dr. Dave Parker Department of Computer Science University of Oxford Overview PCTL for MDPs syntax, semantics, examples PCTL model checking next, bounded
More informationSemi-Online Multiprocessor Scheduling with Given Total Processing Time
Semi-Online Multiprocessor Scheduling with Given Total Processing Time T.C. Edwin Cheng Hans Kellerer Vladimir Kotov Abstract We are given a set of identical machines and a sequence of jobs, the sum of
More informationOutline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.
Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität
More informationarxiv: v2 [cs.ds] 5 Aug 2015
Online Algorithms with Advice for Bin Packing and Scheduling Problems Marc P. Renault a,,2, Adi Rosén a,2, Rob van Stee b arxiv:3.7589v2 [cs.ds] 5 Aug 205 a CNRS and Université Paris Diderot, France b
More informationDispersing Points on Intervals
Dispersing Points on Intervals Shimin Li 1 and Haitao Wang 1 Department of Computer Science, Utah State University, Logan, UT 843, USA shiminli@aggiemail.usu.edu Department of Computer Science, Utah State
More informationEnvironment (E) IBP IBP IBP 2 N 2 N. server. System (S) Adapter (A) ACV
The Adaptive Cross Validation Method - applied to polling schemes Anders Svensson and Johan M Karlsson Department of Communication Systems Lund Institute of Technology P. O. Box 118, 22100 Lund, Sweden
More informationOn Equilibria of Distributed Message-Passing Games
On Equilibria of Distributed Message-Passing Games Concetta Pilotto and K. Mani Chandy California Institute of Technology, Computer Science Department 1200 E. California Blvd. MC 256-80 Pasadena, US {pilotto,mani}@cs.caltech.edu
More informationAlgorithms for pattern involvement in permutations
Algorithms for pattern involvement in permutations M. H. Albert Department of Computer Science R. E. L. Aldred Department of Mathematics and Statistics M. D. Atkinson Department of Computer Science D.
More informationTheory of Computation Prof. Raghunath Tewari Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Theory of Computation Prof. Raghunath Tewari Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 10 GNFA to RE Conversion Welcome to the 10th lecture of this course.
More informationCS 453 Operating Systems. Lecture 7 : Deadlock
CS 453 Operating Systems Lecture 7 : Deadlock 1 What is Deadlock? Every New Yorker knows what a gridlock alert is - it s one of those days when there is so much traffic that nobody can move. Everything
More informationOnline Competitive Algorithms for Maximizing Weighted Throughput of Unit Jobs
Online Competitive Algorithms for Maximizing Weighted Throughput of Unit Jobs Yair Bartal 1, Francis Y. L. Chin 2, Marek Chrobak 3, Stanley P. Y. Fung 2, Wojciech Jawor 3, Ron Lavi 1, Jiří Sgall 4, and
More informationNon-clairvoyant Scheduling for Minimizing Mean Slowdown
Non-clairvoyant Scheduling for Minimizing Mean Slowdown N. Bansal K. Dhamdhere J. Könemann A. Sinha April 2, 2003 Abstract We consider the problem of scheduling dynamically arriving jobs in a non-clairvoyant
More information