
Algorithms for the Management of Networks                         Fall 2003-2004
Lecture 4: November 17, 2003
Lecturer: Adi Rosén                                          Scribe: Guy Grebla

1 Part 1: Single Buffer Management

In the previous lecture we talked about the Combined Input Output Queued (CIOQ) switch. We divided the switch policy into two main parts:

1. Scheduling - how we pass packets from the switch inputs to its outputs.
2. Buffer management (when the size of the buffers is finite) - accepting/dropping packets at the buffers.

In this part of the lecture we will focus on the management of a single buffer, where the buffer has a fixed size. Our goal is to maximize the throughput of the switch, that is, the number of packets transmitted. In the more general case, each packet has a weight/value, which can stand for its priority (DiffServ), and our goal is to maximize the sum of the weights/values of the transmitted packets.

2 The Model

We define the model of single buffer management in an OQ switch as follows:

1. A buffer of size B (packets).
2. All packets have fixed size.
3. Every packet p has a weight/value v(p).
4. The buffer is FIFO (later we will see this is not a limitation, as any other buffer algorithm can be simulated using FIFO).
5. In each time unit one packet can be sent from the buffer.
6. Every packet entering the buffer can leave the buffer only when it is transmitted (non-preemptive).

Our algorithm will decide, for each packet p, whether p should be inserted into the buffer or dropped.
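To make the model concrete, here is a minimal sketch (in Python) of a non-preemptive FIFO buffer of size B with one transmission per time unit; the admission policy is a pluggable parameter, and the class and function names are illustrative assumptions rather than anything from the lecture.

```python
from collections import deque

class FifoBuffer:
    """Non-preemptive FIFO buffer of size B; one packet is sent per time unit."""

    def __init__(self, B, admit):
        self.B = B            # buffer capacity (in packets)
        self.admit = admit    # admission policy: admit(value, queue) -> bool
        self.queue = deque()  # values of the packets currently stored
        self.sent_value = 0   # total value of transmitted packets

    def arrive(self, value):
        # A packet is inserted only if there is room and the policy accepts it;
        # otherwise it is dropped (it can never be preempted later).
        if len(self.queue) < self.B and self.admit(value, self.queue):
            self.queue.append(value)

    def tick(self):
        # End of a time unit: transmit the packet at the head of the FIFO, if any.
        if self.queue:
            self.sent_value += self.queue.popleft()

# Example: the greedy policy that accepts whenever there is room.
greedy = FifoBuffer(B=4, admit=lambda value, queue: True)
for v in [3, 1, 5, 2, 8]:
    greedy.arrive(v)
greedy.tick()
```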

The goal of the algorithm is to maximize the total value of the transmitted packets, i.e., to maximize
$$\sum_{p\ \text{transmitted}} v(p).$$

3 The Algorithm

We assume that for every packet $p$, $v(p) \in [1, \alpha]$, and we divide the range $[1, \alpha]$ into classes: class $i$ contains packet values in the range $[e^{i-1}, e^i)$.

Definition 3.1 Algorithm A is defined as follows:
1. Choose a random $i \in \{1, \ldots, \lceil \ln \alpha \rceil\}$; $i$ is chosen with uniform distribution.
2. From now on, a packet $p$ is accepted to the buffer only if the buffer has space left and $v(p)$ is in class $i$.

Definition 3.2
$A(\sigma)$ - the sum of the values of the packets that algorithm A transmitted after the sequence of packets $\sigma$ is received.
$OPT_i(\sigma)$ - the number of packets from class $i$ transmitted by the optimal policy.
$OPT_i^w(\sigma)$ - the sum of the weights of the packets from class $i$ transmitted by the optimal policy (so $OPT(\sigma) = \sum_i OPT_i^w(\sigma)$).

Corollary 3.3 Given that class $i$ was chosen, the algorithm accepts an optimal number of packets from class $i$.

Claim 3.4 The competitive ratio of the algorithm is $e \lceil \ln \alpha \rceil$.

Proof: A is a probabilistic algorithm; let $E(A(\sigma))$ be the expectation of $A(\sigma)$. Since the weight of every packet in class $i$ is at least $e^{i-1}$, and since A devotes all its time to sending packets from the chosen class $i$:
$$E(A(\sigma)) \;\ge\; \frac{1}{\lceil \ln \alpha \rceil} \sum_i e^{i-1}\, OPT_i(\sigma). \quad (1)$$

Corollary 3.5 The weight of every packet in class $i$ is at most $e^i$, so
$$OPT_i^w(\sigma) \;\le\; e^i\, OPT_i(\sigma), \qquad \text{i.e.,} \qquad e^{i-1}\, OPT_i(\sigma) \;\ge\; \frac{OPT_i^w(\sigma)}{e}.$$

And from (1) and Corollary 3.5:
$$E(A(\sigma)) \;\ge\; \frac{1}{\lceil \ln \alpha \rceil} \sum_i e^{i-1}\, OPT_i(\sigma) \;\ge\; \frac{1}{e \lceil \ln \alpha \rceil} \sum_i OPT_i^w(\sigma) \;=\; \frac{OPT(\sigma)}{e \lceil \ln \alpha \rceil}.$$
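A minimal sketch of Algorithm A (Definition 3.1), assuming $\alpha$ is known in advance and reusing the FifoBuffer sketch above; the function names are illustrative assumptions.

```python
import math, random

def make_algorithm_A(alpha):
    """Pick a class i uniformly at random; accept only packets whose value
    falls in class i, i.e., in [e^(i-1), e^i)."""
    k = max(1, math.ceil(math.log(alpha)))    # number of classes
    i = random.randint(1, k)                  # chosen class, uniform over {1, ..., k}
    lo, hi = math.e ** (i - 1), math.e ** i   # value range of class i

    def admit(value, queue):
        # The buffer itself already checks for free space; here we only
        # check class membership.
        return lo <= value < hi

    return admit

# Usage with the FifoBuffer sketch above:
# buf = FifoBuffer(B=10, admit=make_algorithm_A(alpha=100.0))
```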

4 Simulating an Arbitrary Policy using a FIFO Queue

Claim 4.1 Given a non-preemptive algorithm P, working on a buffer of size B and not necessarily FIFO, there exists a non-preemptive FIFO algorithm P', working on a buffer of size B, such that for every sequence of packets $\sigma$, $P(\sigma) = P'(\sigma)$.

Proof: We will show that such an algorithm P' exists by defining it. We simulate a run of P on $\sigma$ in the background.

Algorithm P':
- If P accepts a packet, P' accepts it too.
- When P transmits a packet, P' sends the packet at the head of its FIFO buffer.

Corollary 4.2 Since the buffer is non-preemptive, every packet entering the buffer is eventually sent; therefore, if P and P' receive the same packets, $P(\sigma) = P'(\sigma)$ (even though the order of transmission is not necessarily the same).

Corollary 4.3 P' has enough room in its buffer to accept every packet which, by its definition, it needs to accept. (It can easily be proved by induction on time that the number of packets at P' and at P is always the same.) However, note that there might be an additive difference of at most $B\alpha$ in the sum of the weights of the packets sent by the two policies at any given time.
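A minimal sketch of the construction of P' from Claim 4.1, assuming the background run of P reports its accept and transmit events; the names are illustrative assumptions.

```python
from collections import deque

class FifoSimulation:
    """P': mirrors the accept decisions of an arbitrary non-preemptive policy P,
    but always transmits from the head of its own FIFO buffer."""

    def __init__(self):
        self.queue = deque()
        self.sent_value = 0

    def on_accept(self, value):
        # Called whenever the background run of P accepts a packet.
        self.queue.append(value)

    def on_transmit(self):
        # Called whenever P transmits some packet (not necessarily this one);
        # P' responds by sending the packet at the head of its FIFO.
        if self.queue:
            self.sent_value += self.queue.popleft()
```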

The algorithm A defined earlier (Definition 3.1) has a significant drawback, even though we saw that its expectation is good. In practical use, a packet $p$ is dropped whenever its class is not the chosen class $i$ (which happens with probability $1 - \frac{1}{\lceil \ln \alpha \rceil}$), so many packets are dropped; in particular, if we get a burst of packets not from class $i$, the buffer might stay empty for a long time and we won't send any packet. We will show a deterministic algorithm A' in which this drawback is fixed. Note that by Claim 4.1 we may now use a discipline other than FIFO.

Definition 4.4 Algorithm A' is defined as follows. We divide the buffer into $\lceil \ln \alpha \rceil$ classes, each class of size $\frac{B}{\lceil \ln \alpha \rceil}$; as before, class $i$ contains values in the range $[e^{i-1}, e^i)$. A' delivers in round-robin order (i.e., at time $t$ a packet is sent from class $(t \bmod \lceil \ln \alpha \rceil) + 1$).

Corollary 4.5 A' sends, for every class $i$, at least $\frac{OPT_i(\sigma)}{\lceil \ln \alpha \rceil}$ packets.

Claim 4.6 The competitive ratio of algorithm A' is $e \lceil \ln \alpha \rceil$.

Proof: Since the weight of every packet in class $i$ is at least $e^{i-1}$, and from Corollary 4.5:
$$A'(\sigma) \;\ge\; \frac{1}{\lceil \ln \alpha \rceil} \sum_i e^{i-1}\, OPT_i(\sigma). \quad (2)$$

Corollary 4.7 The weight of every packet in class $i$ is at most $e^i$, so $OPT_i^w(\sigma) \le e^i\, OPT_i(\sigma)$, i.e., $e^{i-1}\, OPT_i(\sigma) \ge \frac{OPT_i^w(\sigma)}{e}$.

And from (2) and Corollary 4.7:
$$A'(\sigma) \;\ge\; \frac{1}{\lceil \ln \alpha \rceil} \sum_i e^{i-1}\, OPT_i(\sigma) \;\ge\; \frac{1}{e \lceil \ln \alpha \rceil} \sum_i OPT_i^w(\sigma) \;=\; \frac{OPT(\sigma)}{e \lceil \ln \alpha \rceil}.$$
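A minimal sketch of Algorithm A' (Definition 4.4), assuming B is split evenly among the $\lceil \ln \alpha \rceil$ class buffers; the names are illustrative assumptions.

```python
import math
from collections import deque

class AlgorithmAPrime:
    """Deterministic variant A': one sub-buffer per value class and
    round-robin delivery over the classes."""

    def __init__(self, B, alpha):
        self.k = max(1, math.ceil(math.log(alpha)))        # number of classes
        self.cap = max(1, B // self.k)                     # per-class buffer size
        self.queues = [deque() for _ in range(self.k)]     # one FIFO per class
        self.t = 0
        self.sent_value = 0

    def _class_of(self, value):
        # Class i (1-based) contains values in [e^(i-1), e^i).
        return min(self.k, math.floor(math.log(value)) + 1)

    def arrive(self, value):
        q = self.queues[self._class_of(value) - 1]
        if len(q) < self.cap:          # accept only if the class buffer has room
            q.append(value)

    def tick(self):
        # Round-robin: at time t, serve class (t mod k) + 1.
        q = self.queues[self.t % self.k]
        if q:
            self.sent_value += q.popleft()
        self.t += 1
```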

5 Part 2: Shared Memory

When modeling the switch, we can also talk about the buffer allocation within the switch. A simple scheme of our switch looks like this: packets arrive at the input, are stored in a shared memory that can hold at most M packets, and leave on N output lines. In our model, in every time step one packet can leave on every output line. Note that in this discussion the packets don't have weights/values. We manage N queues (one per output) and want to maximize the number of packets transmitted. There can be several strategies, for example:

1. Fixed partition - assign $T_i$ cells to each output queue $i$ ($L_i$ denotes the length of queue $i$).
2. Uniform fixed partition - the special case of fixed partition where $\forall i:\ T_i = \frac{M}{N}$.
3. Total shared memory - every packet is inserted into the switch's memory (as long as there is space left) regardless of its destination.
4. Lower and upper bounds - $R_i$ is a lower bound on the space we reserve for queue $i$, and $T_i$ is an upper bound on queue $i$'s size. A new packet is inserted into queue $i$ if $L_i < T_i$ and $\sum_j \max(L_j, R_j) < M$. Note that if $\sum_i T_i > M$ we need to prevent the queues from overlapping.

In the rest of this lecture we will define and analyze a strategy called Harmonic partition.

6 Harmonic Partition

We will start with some definitions.

Definition 6.1 For $1 \le i \le N$ we define the constants $B_i$ by
$$B_i = \frac{M}{i\,(\lceil \ln N \rceil + 1)}.$$
$s(i)$ - the $i$-th largest queue at a specific time (i.e., $i$ numbers the queues by their size, from $i = 1$ for the biggest queue to $i = N$ for the smallest queue). $L_i$ - the length of queue $s(i)$.

Definition 6.2 The Harmonic partition policy is: accept a packet to its queue (the queue matching its output port) only if after accepting the packet the following condition holds:
$$\forall i,\ 1 \le i \le N:\quad L_i \le B_i.$$
This is an attempt to match the bounds on the queues to the traffic at that time. Note that the bounds do not refer to a specific queue, but to a queue's position when the queues are sorted by size.
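A minimal sketch of the Harmonic admission rule of Definition 6.2, assuming the current per-port queue lengths are available; the names are illustrative assumptions.

```python
import math

def harmonic_bounds(M, N):
    """B_i = M / (i * (ceil(ln N) + 1)) for the i-th largest queue (1-based)."""
    c = math.ceil(math.log(N)) + 1
    return [M / (i * c) for i in range(1, N + 1)]

def harmonic_accept(queue_lengths, dest, M):
    """Accept a packet for output `dest` iff, after accepting it, the i-th
    largest queue has length at most B_i for every i."""
    N = len(queue_lengths)
    B = harmonic_bounds(M, N)
    lengths = list(queue_lengths)
    lengths[dest] += 1                 # tentatively accept the packet
    lengths.sort(reverse=True)         # L_1 >= L_2 >= ... >= L_N
    return all(lengths[i] <= B[i] for i in range(N))

# Example: 4 output queues sharing a memory of 12 packets.
print(harmonic_accept([3, 1, 0, 0], dest=1, M=12))   # -> True
```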

Definition 6.3 We define H as a shared memory switch operating under the Harmonic buffer management policy. $H(\sigma)$ is the number of packets the switch transmits for a sequence $\sigma$ of packets. As before, OPT is the optimal (offline) policy, which transmits the maximum possible number of packets for any input sequence.

Theorem 6.4 The competitive ratio of the Harmonic algorithm is $O(\ln N)$.

It is possible to show a lower bound of $\Omega(\ln N / \ln \ln N)$, but we won't see it here.

Proof: First, some definitions.

Definition 6.5 A packet sent by OPT is called extra if, at the time step it is sent from queue $i$, H does not send a packet from queue $i$.

The above definition of extra is of interest to us, since the number of extra packets equals $OPT(\sigma) - H(\sigma)$, and we actually need to prove
$$OPT(\sigma) - H(\sigma) \;\le\; O(\ln N)\, H(\sigma).$$
So if we prove that the number of extra packets in the algorithm's run on a sequence $\sigma$ is at most $O(\ln N)\, H(\sigma)$, we have proved the theorem.

Definition 6.6 A packet in an OPT queue is called potentially extra at some time step if its distance from the head of the OPT queue is bigger than the length of the same queue in H.

Observation 6.7 If at a certain time step a packet is extra, then at some time in the past this packet was potentially extra (i.e., the number of extra packets is at most the number of potentially extra packets). So, in order to prove Theorem 6.4 it is sufficient to prove that the number of potentially extra packets in the algorithm's run on a sequence $\sigma$ is at most $O(\ln N)\, H(\sigma)$.
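To make Definition 6.6 and Observation 6.7 concrete, here is a minimal sketch that counts the potentially extra packets at a single time step, assuming we are given the per-port queue lengths of OPT and of H; the representation and names are illustrative assumptions.

```python
def count_potentially_extra(opt_queue_lengths, h_queue_lengths):
    """Definition 6.6: a packet in OPT's queue for port j is potentially extra
    if its distance from the head of that queue exceeds the length of the
    same queue in H.  For port j this is exactly
    max(opt_queue_lengths[j] - h_queue_lengths[j], 0) packets."""
    return sum(max(opt_len - h_len, 0)
               for opt_len, h_len in zip(opt_queue_lengths, h_queue_lengths))

# Example: three output ports; OPT holds 5, 2, 0 packets, H holds 3, 2, 1.
# Only the last 5 - 3 = 2 packets of OPT's first queue are potentially extra.
print(count_potentially_extra([5, 2, 0], [3, 2, 1]))   # -> 2
```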

Theorem 6.8 The number of potentially extra packets in a run of the algorithm H is at most $\lceil \ln N \rceil + 1$ times the number of packets that H accepts.

Proof: We will show that at every time step of the algorithm we can maintain a mapping between potentially extra packets and packets that H holds in its buffer at the same time, or packets already sent by H. The mapping will satisfy the following:

- Every packet of H is mapped to at most $\lceil \ln N \rceil + 1$ potentially extra packets.
- Every potentially extra packet is mapped as long as it is potentially extra.
- If a potentially extra packet $p$ is mapped to a packet $q$ of H, then the distance of $p$ from the head of its queue (in OPT) is equal to or bigger than the distance of $q$ from the head of its queue (in H). (When a packet has already been sent from the switch, we consider its distance from its queue's head to be negative.)

We will now define the mapping and show that it satisfies the above properties.

The mapping: in every time step both H and OPT receive packets; afterwards, changes and additions are made to the mapping. Upon arrival of a new packet:

1. If OPT didn't accept the packet, or if the packet is not potentially extra, do nothing.
2. If the packet is potentially extra:
   (a) If H took the packet, then there exists a packet at OPT in the same queue which was potentially extra before (so it was mapped before) but now is not. The new potentially extra packet takes over the mapping of that packet.
   (b) If H didn't take the packet, we map the new potentially extra packet to some packet of H which is closer to the head of its queue and has less than $\lceil \ln N \rceil + 1$ mappings.

Claim 6.9 Rule 2(b) is applicable, i.e., we will show that a packet of H closer to the head of its queue, with less than $\lceil \ln N \rceil + 1$ mappings, exists. Note that if rule 2(b) is applicable, the defined mapping satisfies our requirements.

Proof: Since H didn't take the packet, we know that taking the packet would have violated the acceptance condition, i.e., had we taken that packet we would have
$$\exists i,\ 1 \le i \le N:\quad L_i > B_i.$$
We define $k$ to be the minimal such $i$, i.e., $k = \min\{i : L_i > B_i\}$. Therefore
$$\forall i < k:\ L_i \le B_i, \qquad L_k > B_k,$$
and so we obtain that, had we taken the packet,
$$\sum_{i=1}^{k} L_i \;\ge\; k\, L_k \;>\; k\, B_k \;=\; k \cdot \frac{M}{k\,(\lceil \ln N \rceil + 1)} \;=\; \frac{M}{\lceil \ln N \rceil + 1}.$$

Before taking the packet we therefore have
$$\sum_{i=1}^{k} L_i \;\ge\; \frac{M}{\lceil \ln N \rceil + 1} - 1$$
(taking the packet adds at most one to its queue's length).

Since we know that the condition $L_k \le B_k$ would be broken should we take the packet, we know the packet belongs to a queue whose position is in the range $[1, k]$ (otherwise the condition would not have been violated).

Let us count the number of packets in queues $1$ to $k$ whose distance from their queue's head is at most $L_k$.

[Figure 1: queues 1 to k, with the queue of the arrived packet marked.]

As we can see from Figure 1, the number of such packets is at least
$$k \cdot L_k \;\ge\; \frac{M}{\lceil \ln N \rceil + 1} - 1.$$
Since the arrived packet is potentially extra, its distance from the head of its queue is at least $L_k + 1$, and therefore each one of the above packets is suitable for the mapping.

So far we have seen that at least $\frac{M}{\lceil \ln N \rceil + 1} - 1$ packets which are closer to the head of their queue exist. It is left to show that at least one of these packets has less than $\lceil \ln N \rceil + 1$ mappings. This is simple to see, since every potentially extra packet which is mapped to a packet in the buffer of H is still within the buffer of OPT.

Before the new potentially extra packet arrived at OPT, there were at most $M - 1$ packets at OPT (because OPT received this packet, so it needed space for it). So the number of mappings to the relevant packets of H is at most $M - 1$. We have about $\frac{M}{\lceil \ln N \rceil + 1}$ candidate packets, and the total number of mappings to the packets of H is at most $M - 1 < \frac{M}{\lceil \ln N \rceil + 1} \cdot (\lceil \ln N \rceil + 1)$; therefore there exists a candidate packet which is mapped to less than $\lceil \ln N \rceil + 1$ potentially extra packets. This packet is mapped to our new potentially extra packet (in rule 2(b)).

Conclusion

We showed, at each time step, a mapping between potentially extra packets and packets in the memory of H, where each packet of H is mapped to at most $\lceil \ln N \rceil + 1$ potentially extra packets. From the existence of the mapping we conclude that:

- As a packet of H leaves the switch, there are at most $\lceil \ln N \rceil + 1$ packets matched to it, and after the packet leaves, no new packets are mapped to it.
- As a packet of H leaves the switch, all packets matched to it in OPT are still left in OPT's buffer.
- For every packet which OPT sends from queue $i$, either there is a packet that H sends from $i$ at the same time, or it is matched to a packet H already sent.

Therefore, the number of packets that OPT has sent by time $t$ is at most $O(\ln N)$ times the number of packets H has sent by that same time.

References

[1] W. Aiello, Y. Mansour, S. Rajagopolan, A. Rosén. Competitive Queue Policies for Differentiated Services. Proc. of INFOCOM 2000, pp. 431-440.

[2] E. L. Hahne, A. Kesselman and Y. Mansour. Competitive Buffer Management for Shared-Memory Switches. Proc. of SPAA 2001.

[3] A. Kesselman and Y. Mansour. Harmonic Buffer Management Policy for Shared Memory Switches. Proc. of INFOCOM 2002.