Primal Dual Gives Almost Optimal Energy Efficient Online Algorithms


Primal Dual Gives Almost Optimal Energy Efficient Online Algorithms

Nikhil R. Devanur*    Zhiyi Huang†

Abstract

We consider the problem of online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. We give an algorithm with an almost optimal competitive ratio for arbitrary power functions. (No earlier results handled arbitrary power functions for minimizing flow time plus energy with unrelated machines.) For power functions of the form f(s) = s^α for some constant α > 1, we get a competitive ratio of O(α/log α), improving upon a previous competitive ratio of O(α²) by Anand et al. [3], along with a matching lower bound of Ω(α/log α). Further, in the resource augmentation model, with a (1 + ɛ) speed up, we give a 2(1/ɛ + 1) competitive algorithm, with essentially the same techniques, improving the bound of 1 + O(1/ɛ²) by Gupta et al. [5] and matching the bound of Anand et al. [3] for the special case of fixed speed unrelated machines. Unlike the previous results, most of which used an amortized local competitiveness argument or dual fitting methods, we use a primal-dual method, which is useful not only to analyze the algorithms but also to design the algorithm itself.

1 Introduction

The design of online algorithms for scheduling problems has been an active area of research. Typically in such problems jobs arrive online, over time, and in order to complete a job it must be assigned a certain amount of processing, its processing volume. The algorithm has to schedule the jobs on one or more machines so as to complete them as soon as possible. A standard objective is a weighted sum of flow times; the flow time of a job is the duration of time between its release and completion. In the unrelated machines version of the problem, each job can have a different volume and a different weight for each machine. Preemption/resumption is allowed, but migration of a job from one machine to another is not.

* Microsoft Research, Redmond. Email: nikdev@microsoft.com.
† Stanford University. Email: hzhiyi@stanford.edu. This work was done while the author was a graduate student at the University of Pennsylvania and an intern at Microsoft Research, Redmond. Supported in part by an ONR MURI Grant N4797 and a Simons Graduate Fellowship for Theoretical Computer Science.

Of late, an important consideration in such problems has been the energy consumption. A popular approach to model the energy consumption is via the dynamic speed scaling model: a machine can run at many different speeds; higher speeds process jobs faster but consume more energy. The rate of consumption of energy w.r.t. time is the power consumption, which is given as a function of speed. Typical power functions are of the form s^α, where s is the speed and α is some constant, commonly equal to 2 or 3. The objective now is to minimize the sum of energy consumed and weighted flow time.

In this paper we show that a natural and principled primal-dual approach gives almost optimal competitive ratios for the most general version of the problem, with unrelated machines and arbitrary monotone non-decreasing and convex power functions.¹ (See Section 2 and Section 3 for formal problem definitions.) We summarize our contributions below.

For power functions f(s) = s^α, we give an algorithm with competitive ratio 8α/log₂ α. We also give a corresponding lower bound of α/(4 log₂ α). This improves upon a previous O(α²) competitive ratio of Anand et al. [3]. We also show bounds for specific values of α: for α = 2 and 3 we show competitive ratios of 4 and 5.58 respectively.
It is worth noting that these bounds improve upon the previous best bounds known for these values of α even for a single machine (which were 5.24 and 8 respectively for α = 2 and 3, due to Bansal et al. [7]).²

For an arbitrary power function f, we define a quantity Γ_f such that for the power function f(s) = s^α, Γ_f = α. Therefore Γ_f can be thought of as a generalization of the quantity α. We show analogous bounds for arbitrary power functions: an upper bound of 8Γ_f/log₂ Γ_f and a lower bound of Γ_f/(4 log₂ Γ_f). No previous bounds were known for arbitrary power functions.³ These results are summarized in Table 1.

¹ This is w.l.o.g., as one can reduce problems with arbitrary power functions to those with non-decreasing and convex power functions (e.g., Section 3.1 of [6]).
² For the unweighted case, however, a competitive ratio of 2 is known, due to Andrew et al. [4].
³ The power functions can differ by machine, and the competitive ratio is determined by the worst one.
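As a quick sanity check of this generalization, the following sketch (ours, not part of the paper) numerically approximates Γ_f = max_s f*(f'(s))/f(s) + 1, the definition given later in Section 2.3, on finite grids of speeds and of points for the conjugate; it recovers Γ_f = α for f(s) = s^α. The grid sizes and the second example function are arbitrary choices.

```python
import numpy as np

def f_star(f, mu, xs):
    """Fenchel conjugate f*(mu) = sup_x {mu*x - f(x)}, approximated on a grid."""
    return np.max(mu * xs - f(xs))

def gamma_f(f, fprime, s_grid, x_grid):
    """Gamma_f = max_s f*(f'(s))/f(s) + 1, approximated over a grid of speeds."""
    return max(f_star(f, fprime(s), x_grid) / f(s) for s in s_grid) + 1.0

s_grid = np.linspace(0.1, 5.0, 200)
x_grid = np.linspace(0.0, 50.0, 20001)

alpha = 3.0
print(gamma_f(lambda s: s ** alpha, lambda s: alpha * s ** (alpha - 1), s_grid, x_grid))
# prints ~3.0, i.e. Gamma_f = alpha for f(s) = s^alpha

# an arbitrary non-polynomial example (convex, increasing, 0 at 0):
print(gamma_f(lambda s: s ** 2 + s ** 3, lambda s: 2 * s + 3 * s ** 2, s_grid, x_grid))
# ~2.8 on this truncated grid (the supremum is approached as s grows)
```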

An alternate objective function, often considered because it is sometimes easier to deal with, is the fractional flow time. For the fractional flow time, imagine that a job is broken into infinitesimally small pieces and each piece has an independent flow time, which is equal to the time between its own completion time and its release time. The fractional flow time is the average flow time of all the pieces put together. Our guarantees for the fractional flow time are essentially the same as those for the integral flow time and are summarized in Table 2.

We also consider the resource augmentation model, where for the same given power, machines used by the algorithm run (1 + ɛ) times faster than the machines used by the offline optimum. Our techniques (with minor modifications) extend to this model as well. We show a competitive ratio of 2(1/ɛ + 1), which improves upon the previous best known bound of 1 + O(1/ɛ²) by Gupta et al. [5]. As a special case, when the power function is a step function, this bound matches the best known competitive ratio for minimizing the weighted flow time on fixed speed unrelated machines (obtained in Anand et al. [3] and Chadha et al. []). These results are summarized in Table 3.

Techniques and proof overview  The most common and successful technique for analyzing online algorithms for such scheduling problems has been the amortized local competitiveness argument. The technique calls for a potential function that stores the excess cost incurred by the optimum solution and uses it as needed to pay for the algorithm's solution. More recently, Anand et al. [3] used the dual fitting method to give several improved competitive ratios. We use a primal-dual approach, which is a principled approach that is used to guide the design of the algorithm itself, in addition to being a tool for the analysis. Our algorithms are based on a convex programming relaxation. Most of the work done is in understanding the structure of the convex program and the properties of the optimal primal and dual solutions; the algorithm and the analysis follow naturally after that. In other words, we derive the algorithm from the structure of the convex program.

Almost all the special cases of our problem, such as related machines, a single machine, unweighted flow time, etc., use the following speed scaling rule introduced by Albers and Fujiwara [2]: set the speed so that the power consumed is equal to the remaining weight (PERW). The power is the rate of energy consumed, and the remaining weight can be thought of as the rate of accrual of the weighted flow time. What the PERW rule therefore ensures is that the energy consumed and the weighted flow time are both equal, which is convenient for the analysis. At first glance, it may also appear that PERW is the greedy choice, i.e., that it is the optimal choice if no more jobs are released. A more careful analysis shows this to be false: the optimal speed to set, if no more jobs are released, is s such that f*(f'(s)) equals the remaining weight.⁴

We first analyze the natural greedy algorithm that follows from this speed scaling rule coupled with a greedy job assignment policy: a job is assigned to the machine for which the increase in the energy plus flow time is the smallest. This gives an O(α) competitive ratio, which already beats the previous bound of O(α²), which uses the greedy job assignment policy along with the PERW speed scaling rule.

Setting the speed so that no other job arrives in the future is really a conservative approach. The algorithm should anticipate the arrival of some jobs in the future and run faster. Such an aggressive algorithm faces a trade-off: the improvement obtained when there are more jobs in the future versus the degradation when there aren't. The algorithm should hedge against both cases and balance the competitive ratio.
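To make the distinction concrete, here is a minimal sketch (ours, assuming the power function f(s) = s^α) of the two speed scaling rules: PERW sets the power equal to the remaining weight, while the rule above sets f*(f'(s)) equal to it, which for s^α works out to (α − 1)s^α.

```python
def perw_speed(remaining_weight, alpha):
    """PERW: power equals remaining weight, i.e. s^alpha = w_hat."""
    return remaining_weight ** (1.0 / alpha)

def conservative_speed(remaining_weight, alpha):
    """Optimal if no more jobs arrive: f*(f'(s)) = (alpha - 1) s^alpha = w_hat."""
    return (remaining_weight / (alpha - 1.0)) ** (1.0 / alpha)

alpha, w_hat = 3.0, 8.0
print(perw_speed(w_hat, alpha))           # 2.0
print(conservative_speed(w_hat, alpha))   # (8/2)^(1/3) ~ 1.587
# PERW is itself somewhat aggressive: it is the conservative speed scaled by (alpha-1)^(1/alpha).
print(perw_speed(w_hat, alpha) / conservative_speed(w_hat, alpha),
      (alpha - 1.0) ** (1.0 / alpha))     # both ~1.26
```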
A systematic way to do this is to set the speed s so that f*(f'(s/c)) equals the remaining weight, for some constant c, obtain a competitive ratio as a function of c, and then set c to minimize the competitive ratio. For instance, for the power function f(s) = s^α, the best choice of c we have turns out to be 1 + ln α/α, giving a competitive ratio of 8α/log₂ α. Moreover, this framework allows us to also analyze the PERW rule for power functions f(s) = s^α, which corresponds to a choice of c = (α − 1)^{1/α}. We show that the competitive ratio with the PERW speed scaling rule is still O(α/log α), thus giving some justification for this rule as well.

We start our analysis by considering the objective of weighted fractional flow time plus energy (Section 2). We consider a convex programming relaxation and its dual using Fenchel conjugates. We analyze the simple case of scheduling a single job on a single machine and derive the speed scaling rule f*(f'(s)) = remaining (fractional) weight as optimal. We then consider many jobs on a single machine, all released at time 0, and show that the same speed scaling rule is optimal, along with the job selection rule of highest density first (HDF, density = weight/volume). We characterize the optimal dual solution for the same and show several structural properties of the optimal primal and dual solutions. Finally we consider the unrelated machines case, which calls for a job assignment rule (which assigns jobs to machines).

⁴ f* is the Fenchel conjugate of f, defined as f*(µ) := sup_x {µx − f(x)}, and f*(f'(s)) has the following geometric interpretation: if you draw a tangent to f at s, then this is the length of the y-intercept of the tangent.
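The following short sketch (ours) numerically illustrates footnote 4 for f(s) = s^α: it approximates f*(f'(s)) on a grid, checks it against the closed form (α − 1)s^α and against the y-intercept of the tangent to f at s, and prints the two choices of c mentioned above. The grid and the chosen values of α and s are arbitrary.

```python
import numpy as np

alpha = 3.0
f = lambda s: s ** alpha
fprime = lambda s: alpha * s ** (alpha - 1)

def f_star(mu, xs=np.linspace(0.0, 100.0, 200001)):
    """Fenchel conjugate f*(mu) = sup_x {mu*x - f(x)}, approximated on a grid."""
    return np.max(mu * xs - f(xs))

s = 2.0
# f*(f'(s)) equals (alpha - 1) s^alpha for f(s) = s^alpha ...
print(f_star(fprime(s)), (alpha - 1) * s ** alpha)     # both ~16
# ... and equals the length of the y-intercept of the tangent to f at s:
# the tangent is y = f(s) + f'(s)(x - s), so y(0) = f(s) - f'(s)*s = -f*(f'(s)).
print(f(s) - fprime(s) * s)                            # ~ -16

# The c-aggressive rule sets f*(f'(s/c)) equal to the remaining weight; the best
# choice of c found here is 1 + ln(alpha)/alpha, while PERW corresponds to
# c = (alpha - 1)^(1/alpha).
print(1 + np.log(alpha) / alpha, (alpha - 1) ** (1 / alpha))   # ~1.366, ~1.26
```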

Table 1: Competitive ratios for minimizing integral weighted flow time plus energy (Section 3)

  Setting                            Best known                                 This paper
  Single machine, f(s) = s^α         5.24 (α = 2), 8 (α = 3) [7];               4 (α = 2), 5.58 (α = 3) (new);
                                     O(α/log₂ α) (general α) [6, 7]^c           O(α/log₂ α)^a (general α) (Thm. 3.2)
  Unrelated machines, f(s) = s^α     O(α²) [3]                                  Θ(α/log₂ α)^a (Thm. 3.2, 5.1)
  Unrelated machines, any f(s)       none                                       Θ(Γ_f/log₂ Γ_f)^{ab} (Thm. 3.2, 5.1)

Table 2: Competitive ratios for minimizing fractional weighted flow time plus energy (Section 2)

  Setting                            Best known          This paper
  Single machine, arbitrary f(s)     2 [6]               α (1 < α ≤ 2); O(α/log₂ α) (general α)
  Unrelated machines, f(s) = s^α     O(α) [3]            2α (1 < α ≤ 2); Θ(α/log₂ α)^a (general α) (Thm. 2.3, 5.1)
  Unrelated machines, any f(s)       none                Θ(Γ_f/log₂ Γ_f)^a (Thm. 2.3, 5.1)

Table 3: Competitive ratios for minimizing fractional/integral weighted flow time plus energy (any power function f(s)) with resource augmentation ((1 + ɛ)-speed) (Section 4)

  Setting                                                     Best known           This paper
  Integral flow time plus energy, arbitrary power function    1 + O(1/ɛ²) [5]      2(1/ɛ + 1) (Thm. 4.1)
  Fractional flow time plus energy, arbitrary power function  1 + 5/ɛ [5]          2(1/ɛ + 1) (Thm. 4.1)
  Integral flow time, fixed speed                             2(1/ɛ + 1) [3]       2(1/ɛ + 1) (Thm. 4.2)
  Fractional flow time, fixed speed                           2(1/ɛ + 1) []        2(1/ɛ + 1) (Thm. 4.2)

^a Specifically, we show an upper bound of 8Γ_f/log₂ Γ_f (8α/log₂ α for f(s) = s^α) and a lower bound of Γ_f/(4 log₂ Γ_f) (α/(4 log₂ α) for f(s) = s^α) on the competitive ratios.
^b Γ_f is defined to be max_s f*(f'(s))/f(s) + 1.
^c To achieve this competitive ratio, one needs to combine the techniques in [6, 7]; e.g., see the discussions by Anand et al. [3] for more details.

At any time, given the job assignments, the algorithm uses the optimal schedule for each machine assuming no future jobs arrive, and the corresponding dual. The job assignment rule follows from the complementary slackness conditions using these duals.⁵ The competitive ratio of O(α) follows from a simple local charging of the primal to the dual cost, along with some of the structural properties established earlier.

We then consider a systematically aggressive speed scaling rule, f*(f'(s/c)) = remaining weight for some constant c (Section 2.5). With the rest of the algorithm/proof more or less identical, we derive a competitive ratio as a function of c. On optimizing, we get a competitive ratio of O(α/log α).

Typically, algorithms designed for the fractional flow time also work for the integral flow time with some loss in the competitive ratio. However, our analysis for the fractional flow time also goes through for the integral flow time (with small modifications) without any loss in the competitive ratio! Almost the same proof structure works for the integral case, and we outline what minor modifications are required (Section 3). Once again, essentially the same proof goes through for the resource augmentation model as well (Section 4). Finally, we show a simple example with 2 machines that gives almost tight lower bounds (Section 5).

Related work  We summarize here a selection of the most relevant related work. The fact that energy costs are a substantial part of the overall cost in data centers (see Barroso [9]) motivates energy considerations in scheduling problems. Early work on energy efficient algorithms was for the case of a single machine. Albers and Fujiwara [2] introduced the objective of energy plus flow time in the dynamic speed scaling model, and the PERW speed scaling rule. Generalizations to weighted flow time followed [7, 5, 7], with the current best competitive ratios given by Bansal et al. [6] and Andrew et al. [4]. For other variants in the dynamic speed scaling model, other models such as the power down model, and the importance of energy efficient algorithms, see the survey by Albers [].

Motivated by the design of architectures with heterogenous cores/processors, Gupta et al. [5] considered the case of related machines with different power functions. (See the references therein for more reasons to consider heterogenous machines.) All these algorithms use the PERW speed scaling rule and use potential functions with amortized local competitiveness arguments.

Anand et al. [3] used a dual fitting based argument to generalize this to unrelated machines, and to the fixed speed case in the resource augmentation model (improving upon a previous algorithm by Chadha et al. []). The major difference between dual fitting and primal dual is that dual fitting is only used as an analysis tool for a given algorithm, while primal dual guides the design of the algorithm itself. Also, in recent work, Gupta et al. [4] showed that the natural extensions of several well known algorithms that work for homogeneous machines fail for heterogeneous machines, thus justifying the use of the non-standard algorithms of Gupta et al. and Anand et al. [5, 3]. As summarized in Tables 1-3, our work unifies and improves upon several of these results, most prominently [7, 6, 3, 5, ].

Buchbinder and Naor [] established the primal-dual approach for packing and covering problems, unifying several previous potential function based analyses.

⁵ This assignment rule is almost the same as the greedy assignment rule, but is slightly different. We state the job assignment rule derived from the complementary slackness conditions in the algorithm since that seems more principled. In any case, the analysis remains the same for both job assignment rules.
Gupta et al. [6] gave a primal-dual algorithm for a general class of scheduling problems with cost functions of the form f(s) = s^α. A dual (and equivalent) problem of online concave matching was considered by Devanur and Jain [3], who also used the primal-dual approach to give optimal competitive ratios for arbitrary cost functions. Using either of these results for the problems we consider only gives a competitive ratio of α^α, which is significantly far from the ratios we obtain.

Recent related work  In concurrent and independent work, Nguyen [8] shows an O(α/log α)-competitive algorithm for minimizing flow time plus energy on unrelated machines when the power function is f(s) = s^α. The analysis uses dual fitting based on a non-convex program and Lagrangian duality.

2 Weighted fractional flow-time plus energy

In this section we consider the objective of fractional flow time plus energy and obtain competitive algorithms for it. Suppose that we are given a power function, f : R₊ → R₊, which is monotonically non-decreasing, convex, and is 0 at 0. Further, we assume that f is also differentiable. For ease of presentation we assume that all the machines have the same power function, although all of our results go through easily with different power functions for different machines. First, we define the offline version of the problem.

Input: A set of jobs J and a set of machines M. For each job j ∈ J, its release time r_j ∈ R₊. For each job j ∈ J and each machine i ∈ M, the volume v_ij and the weight w_ij of job j if scheduled on machine i. Let the density of job j on machine i be ρ_ij = w_ij/v_ij.

Output: An assignment of each job to a single machine (no migration). For each machine i and time t ∈ R₊, the job scheduled at time t on machine i, denoted by j_i(t), and the speed at which the machine is run, denoted by s_it.

Constraints: A job must be scheduled only after its release time. It must receive a total amount of v_ij units of computation if it is assigned to machine i, i.e., ∫_{t ∈ [r_j, +∞): j_i(t) = j} s_it dt = v_ij.

Objectives: The objective has two components, energy and fractional flow-time. Recall that f is the power function, which gives the power consumption as a function of the speed. The energy consumed by machine i is therefore E_i = ∫_0^∞ f(s_it) dt. The fractional flow-time is an aggregated measure of the waiting time of a job. Suppose job j is scheduled on machine i. Let v̂_j(t) be the remaining volume of job j at time t, i.e., v̂_j(t) = v_ij − ∫_{t' ∈ [r_j, t]: j_i(t') = j} s_it' dt'. The fractional flow-time of job j is then defined to be F_j := (1/v_ij) ∫_{r_j}^∞ v̂_j(t) dt. The objective is to minimize the total energy consumed by all the machines plus the weighted sum of the flow-times of all the jobs:

    Σ_i E_i + Σ_j w_j F_j.

In the online version of the problem, the details of job j are given only at time r_j. The algorithm has to make decisions at time t without knowing anything about the jobs released in the future.

2.1 Convex programming relaxation and the dual  The algorithms we design are based on a convex programming relaxation of the problem and its dual, which are shown in Figure 1. The dual convex program is obtained using Fenchel duality. The Fenchel conjugate of f is the function f*(µ) := sup_x {µx − f(x)}. (See Appendix A or [2] for a detailed explanation.) The variables s_ijt denote the speed at which job j is scheduled on machine i at time t; s_it = Σ_j s_ijt is the total speed of machine i at time t. For each job j, constraint (2.1) enforces that the scheduling must complete job j. (We will not state trivial constraints such as s_ijt ≥ 0 throughout the paper.) In the objective function, the first summation corresponds to the fractional flow-time: s_ijt dt units of job j are processed between t and t + dt, all of which waited for a duration of t − r_j, resulting in (t − r_j)(s_ijt/v_ij) dt amount of fractional flow-time. The second summation is the total energy consumed. The third summation is required because the convex program allows a job to be split among many machines and even to have different parts run in parallel. Without the third term, the convex program fails to provide a good lower bound on the cost of the optimal solution.

(P_frac)  minimize   Σ_{i,j} ρ_ij ∫_{r_j}^∞ (t − r_j) s_ijt dt + Σ_i ∫_0^∞ f(s_it) dt + Σ_{i,j} ∫_{r_j}^∞ s_ijt ((1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw) dt
          subject to (2.1)  ∀j: Σ_i ∫_{r_j}^∞ (s_ijt/v_ij) dt ≥ 1,   where s_it = Σ_{j: r_j ≤ t} s_ijt.

(D_frac)  maximize   Σ_j α_j − Σ_i ∫_0^∞ f*(β_it) dt
          subject to ∀ i, j, t ≥ r_j:  α_j/v_ij ≤ ρ_ij (t − r_j) + β_it + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw.

Figure 1: Convex programming relaxation of minimizing fractional flow time plus energy.

We show that the optimum of the convex program, with an additional factor of 2, is a lower bound on opt, the optimum offline solution to the problem. We note this in the following theorem.

Theorem 2.1. The optimum value of the convex program (P_frac) is at most 2 opt.

Proof. Consider an instance with only one job, released at time 0, and a large number of machines. The optimal solution to the convex program schedules the job simultaneously on all the machines, and the total cost w.r.t. the first two terms tends to zero as the number of machines tends to infinity. The optimal algorithm has to schedule the job on a single machine and hence pays a fixed non-zero cost. The third term fixes this problem: consider a modified instance where we have multiple copies of each machine (as many as the number of jobs); the cost of the optimal solution to this instance is only lower. In this modified instance, w.l.o.g., no two jobs are ever scheduled on the same machine.
It can be shown (Lemma 2.2) that if job j is scheduled on a copy of machine i all by itself, then the optimal cost (energy + flow-time) due to job j is (v_ij/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw. Now, still allowing a job to be split among different machines, an ∫_{r_j}^∞ (s_ijt/v_ij) dt fraction of job j is scheduled on machine i.

Thus Σ_i ∫_{r_j}^∞ s_ijt ((1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw) dt is a lower bound on the cost of scheduling job j.

The algorithm heavily uses the structure of the optimal solutions to the primal and the dual programs. We explain this structure in stages. There is a natural decomposition of the problem itself: at the highest level is the decision to allocate a job to one of the machines. Given these choices, the rest of the problem decomposes into a separate one for each machine. For each machine, given the set of jobs that have to be scheduled on it and the volumes and densities, there is the problem of picking the job to schedule and the speed to set at any time, in order to minimize the total energy and flow-time on that machine. Further, given the choice of the job to schedule on a machine, there is an even simpler problem of setting the speed. We start with the simplest problem of all: given just a single job and a single machine, what is the optimal speed schedule that minimizes the total energy and fractional flow-time?⁶

2.2 Optimal scheduling for a single job  We obtain a simpler convex program and its dual for the problem of scheduling a single job on a single machine. We drop the third term in the objective since that deals with non-integral assignment of jobs to machines. Since there is only one job in this subsection, we assume w.l.o.g. that r = 0.

(2.2)  minimize   ρ ∫_0^∞ t s_t dt + ∫_0^∞ f(s_t) dt     subject to  ∫_0^∞ (s_t/v) dt ≥ 1;
       maximize   α − ∫_0^∞ f*(β_t) dt                    subject to  ∀t:  α/v ≤ ρt + β_t.

Recall that the conjugate function f* is defined as f*(β) := sup_s {βs − f(s)}. The function f* is also convex and monotonically non-decreasing. If f is strictly convex, then so is f*. The most important property we use about f* is the notion of a complementary pair. β and s are said to be a complementary pair if any one of the following conditions holds. (It can be shown that if one of them holds, then so do the others.)

1. f'(s) = β;   2. (f*)'(β) = s;   3. f(s) + f*(β) = sβ.

The optimal solutions to these programs are characterized by the (generalized) complementary slackness or KKT conditions. These are:

1. ∀t, s_t > 0 ⟹ α = v(ρt + β_t);   2. α > 0 ⟹ ∫_0^∞ (s_t/v) dt = 1;   3. β_t and s_t are a complementary pair for all t.

The first condition implies that for the entire duration that the machine is running (with non-zero speed), the quantity ρt + β_t remains the same, since it must always equal α/v. In other words, β_t must linearly decrease with time, at the rate of ρ, i.e., dβ_t/dt = −ρ.

The main result in this section is that the optimum solution has a closed form expression where s_t and β_t are set as a function of the remaining weight of the job at time t, which we denote by ŵ_t. (More generally, ŵ_it will denote the total remaining weight of all the jobs on machine i.) Also, the remaining volume at time t is denoted v̂_t.

Lemma 2.1. The optimum solution to the convex program in Eqn. (2.2) is such that f*(f'(s_t)) = f*(β_t) = ŵ_t, and α = v (f*)^{-1}(w).

Proof. Since β_t and s_t form a complementary pair, we have that df*(β_t)/dβ_t = s_t. We start by multiplying the LHS by dβ_t/dt and the RHS by −ρ, which is valid since we showed these are equal. This gives

    df*(β_t)/dt = −s_t ρ.

Therefore

    df*(β_t)/dt = ρ dv̂_t/dt = dŵ_t/dt.

When ŵ_t = 0, then s_t = 0, f(s_t) = 0, and therefore f*(β_t) = 0 (by the third property of complementary pairs). Therefore at any time f*(β_t) = ŵ_t. The rest of the assertions in the lemma follow immediately.

Remark 2.1. By f*(β_t) = ŵ_t and the fact that β_t and s_t are a complementary pair, we get a closed form of s_t = (α − 1)^{-1/α} ŵ_t^{1/α} when f(s) = s^α. Contrast this with the PERW rule, for which s_t = ŵ_t^{1/α}.

We also give a closed form for the value of the optimum. This form justifies the inclusion of the third term in the objective of the convex program (P_frac). We will also use this lemma later to analyze how the cost of the optimum solution changes as we add new jobs.

⁶ A characterization of the optimal scheduling can be found, e.g., in [8], but we believe it is illustrative to rederive the optimal scheduling using our primal dual approach.
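As an illustration of Lemma 2.1 and Remark 2.1 (and of the closed form for the optimum given in Lemma 2.2 below), the following sketch (ours, for f(s) = s^α) simulates the single-job schedule with speed s_t = (ŵ_t/(α − 1))^{1/α} and checks that its total cost matches (1/ρ) ∫_0^w (f*)^{-1}(x) dx; the step size and the instance are arbitrary.

```python
import numpy as np

alpha, rho, w = 2.0, 1.0, 1.0        # f(s) = s^alpha; one job of weight w and density rho
f = lambda s: s ** alpha
speed = lambda w_hat: (w_hat / (alpha - 1.0)) ** (1.0 / alpha)                   # Remark 2.1
f_star_inv = lambda x: alpha * (x / (alpha - 1.0)) ** ((alpha - 1.0) / alpha)    # (f*)^{-1}

# Simulate the single-job schedule: the remaining weight obeys d(w_hat)/dt = -rho * s_t.
dt, w_hat, flow, energy = 1e-5, w, 0.0, 0.0
while w_hat > 1e-9:
    s = speed(w_hat)
    flow += w_hat * dt           # fractional flow time accrues at rate w_hat
    energy += f(s) * dt          # energy accrues at rate f(s)
    w_hat -= rho * s * dt

# Compare with the closed form (1/rho) * int_0^w (f*)^{-1}(x) dx (Lemma 2.2 below).
xs = np.linspace(0.0, w, 10001)
print(flow + energy, np.trapz(f_star_inv(xs), xs) / rho)   # both ~4/3 here
```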
Ths form ustfes the ncluson of the thrd term n the obectve for the convex program P frac ). We wll also use ths lemma later to analyze how the cost of the optmum soluton changes as we add new obs..

Lemma 2.2. The cost of the optimal solution is (1/ρ) ∫_0^w (f*)^{-1}(x) dx.

Proof. Recall that the total weighted flow-time is equal to ∫_0^∞ ŵ_t dt and the total energy is equal to ∫_0^∞ f(s_t) dt. From Lemma 2.1, f*(β_t) = ŵ_t. Using this and the properties of complementary pairs, we get the following sequence of equalities:

    ∫_0^∞ (ŵ_t + f(s_t)) dt = ∫_0^∞ (f*(β_t) + f(s_t)) dt = ∫_0^∞ β_t s_t dt.

Further, by the definition of s_t and β_t, the above equals

    −∫ β_t dv̂_t = −(1/ρ) ∫ β_t dŵ_t = −(1/ρ) ∫ (f*)^{-1}(ŵ_t) dŵ_t.

The lemma follows from observing that as t goes from 0 to ∞, ŵ_t goes from w to 0.

2.3 Optimal scheduling for a single machine  We now consider the next stage, where there are multiple jobs to be scheduled on a single machine, and the corresponding convex programs. We continue to assume that r_j = 0 for all jobs.

(2.3)  minimize   Σ_j ρ_j ∫_0^∞ t s_jt dt + ∫_0^∞ f(s_t) dt     subject to  ∀j:  ∫_0^∞ (s_jt/v_j) dt ≥ 1,   where s_t = Σ_j s_jt;
       maximize   Σ_j α_j − ∫_0^∞ f*(β_t) dt                     subject to  ∀ t, j:  α_j/v_j ≤ ρ_j t + β_t.

The complementary slackness conditions for this pair of programs are more or less as before. To begin with, s_jt > 0 ⟹ α_j/v_j = ρ_j t + β_t. As before, this implies that β_t decreases at rate ρ_j whenever job j is scheduled, but the main new issue is the choice of jobs to schedule. The above complementary slackness condition implies that job j must be scheduled when the term ρ_j t + β_t attains its minimum. The first part, ρ_j t, always increases at rate ρ_j, while the second part, β_t, decreases at rate ρ_{j(t)}, where j(t) is the job scheduled at time t. So if ρ_j < ρ_{j(t)} then ρ_j t + β_t is decreasing, and vice-versa, if ρ_j > ρ_{j(t)} then ρ_j t + β_t is increasing. This implies that the highest density first (HDF) rule is optimal, i.e., schedule the jobs in decreasing order of density. For any j, ρ_j t + β_t first decreases while higher density jobs are scheduled, then remains constant while job j is scheduled, and then increases as lower density jobs are scheduled.

Given the choice of jobs scheduled, the choice of speed is very similar to the single-job case. We state the following generalization of Lemma 2.1 without proof, since it either follows from the discussion above or is very similar to the proof of Lemma 2.1.

Lemma 2.3. The optimum solution to the convex program (2.3) is such that
1. Jobs are scheduled in decreasing order of density.
2. f*(f'(s_t)) = f*(β_t) = ŵ_t.
3. α_j = w_j t_j + v_j (f*)^{-1}(ŵ_{t_j}), where t_j is the first time job j is scheduled.

Unlike the single-job case, we no longer have a closed form expression for the optimal cost. Instead, we consider the marginal increase in the optimal cost due to a single job, and show the following generalization of Lemma 2.2.

Lemma 2.4. The increase in the cost of the optimal solution on introducing a new job j is at most

    w_j t_j + (1/ρ_j) ∫_0^{w_j} (f*)^{-1}(ŵ_{t_j} + w) dw,

where t_j is the first time job j is scheduled and ŵ_{t_j} is the remaining weight at time t_j in the original instance, without job j.

Proof. It suffices to show that even if we use a sub-optimal speed scaling after introducing the new job, the increase in the cost is only the amount stated in the lemma. In particular, consider the sub-optimal scheduling in which we keep the schedule till time t_j unchanged, and then start to schedule job j and use the optimal speed scaling. The entire job waits till time t_j, contributing w_j t_j to the total flow-time. For the rest of the increase, consider introducing an infinitesimal weight dw at time t; this causes the following infinitesimal increase in the flow-time and energy: dF = ŵ_t dt and dE = f(s_t) dt. The rest of the proof is along the same lines as that of Lemma 2.2 and is omitted here.

Finally, we note a couple of simple observations that follow almost immediately from Lemma 2.3. Let

    Γ_f = max_s f*(f'(s))/f(s) + 1.

Lemma 2.5. ∫_0^∞ f*(β_t) dt equals the total flow-time.

Lemma 2.6. The total flow-time is at most (Γ_f − 1) times the total energy consumed.

Proof. The total energy used is ∫_0^∞ f(s_t) dt. The total flow-time is ∫_0^∞ ŵ_t dt.
Recall that by Lemma 2.3, s_t is chosen such that ŵ_t = f*(f'(s_t)). So the ratio between the fractional flow time and the energy is ∫ f*(f'(s_t)) dt divided by ∫ f(s_t) dt, which is at most max_s f*(f'(s))/f(s) = Γ_f − 1.
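The next sketch (ours) simulates the optimal single-machine schedule of Lemma 2.3 for f(s) = s^α, i.e., HDF job selection with the speed set from the total remaining weight, and checks the flow-time-to-energy ratio of Lemma 2.6; for s^α the ratio is exactly Γ_f − 1 = α − 1. The job set and step size are arbitrary.

```python
alpha = 3.0                                    # f(s) = s^alpha, so Gamma_f = alpha
jobs = [(4.0, 1.0), (2.0, 2.0), (1.0, 4.0)]    # (weight, volume), all released at time 0

def speed(total_remaining_weight):
    # Lemma 2.3: f*(f'(s)) = (alpha - 1) s^alpha equals the total remaining weight.
    return (total_remaining_weight / (alpha - 1.0)) ** (1.0 / alpha)

remaining = [v for (w, v) in jobs]             # remaining volumes
dt, flow, energy = 1e-4, 0.0, 0.0
while any(r > 1e-9 for r in remaining):
    unfinished = [i for i in range(len(jobs)) if remaining[i] > 1e-9]
    j = max(unfinished, key=lambda i: jobs[i][0] / jobs[i][1])          # HDF
    w_total = sum(jobs[i][0] * remaining[i] / jobs[i][1] for i in unfinished)
    s = speed(w_total)
    flow += w_total * dt                       # fractional flow time accrues at rate W_hat
    energy += s ** alpha * dt                  # energy accrues at rate f(s)
    remaining[j] -= s * dt

print(flow / energy, alpha - 1.0)              # the ratio of Lemma 2.6: here exactly alpha - 1
```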

2.4 Conservative greedy algorithm  In this section, we analyze a primal-dual algorithm which we call conservative greedy. The basic idea is that, given the choice of job assignments to machines, the algorithm schedules the jobs as if no other jobs will be released in the future. That is, it schedules the jobs as per the optimal schedule for the current set of jobs, as detailed in the previous section. The choice of job assignments to machines is done via a natural primal-dual method, the one dictated by the complementary slackness conditions.

Concretely, at any point, given the jobs already released and their assignment to machines, the algorithm picks the optimal scheduling on each machine, assuming no future jobs are released. This also gives dual solutions, in particular the variables β_it for all i and t in the future. When a new job j is released, its assignment to a machine is naturally driven by the following dual constraints and the corresponding complementary slackness conditions. For all i, t:

    α_j/v_ij ≤ ρ_ij (t − r_j) + β_it + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw.

For a given machine i, we saw earlier that the right hand side (RHS) is minimized (over all t) at t_ij, where t_ij would be the first time job j is scheduled on i given the HDF rule. That still holds true, since the third term above is independent of t. Now we need to minimize over all i as well, and the algorithm does exactly this. It assigns job j to the machine i that minimizes the RHS of the inequality above with t = t_ij, multiplied by v_ij. It sets the dual α_j so that the corresponding constraint is tight. It then updates the schedule and the β_it's for machine i. Note that as we add more jobs, the β_it's can only increase, thus preserving dual feasibility. The entire algorithm is summarized in Figure 2.

We will show that, surprisingly, such a conservative approach already achieves a meaningful competitive ratio for arbitrary power functions and a near optimal competitive ratio for polynomial power functions. Formally, we will show the following theorem in this section.

Theorem 2.2. The fractional conservative greedy algorithm is 2Γ_f-competitive for minimizing weighted fractional flow time plus energy.

The above competitive ratio might be unbounded if the function is highly skewed. In particular, Theorem 2.2 does not contradict known lower bounds for fixed-speed online scheduling, which is a special case when the power function is a step function. For nice power functions, such as polynomial power functions, the above theorem gives meaningful competitive ratios. In particular, the fractional conservative greedy algorithm is 2α-competitive for minimizing weighted fractional flow time plus energy with f(s) = s^α.

In the remainder of this section, we present the primal-dual analysis of Theorem 2.2. It is easy to see that the algorithm constructs feasible primal and dual solutions, and the ratio is obtained by relating the cost of the primal to that of the dual. In fact, for every job released, we relate the increase in the cost of the primal to the increase in the cost of the dual.

Lemma 2.7. When job j is released, the increase in the total cost of the algorithm is at most α_j.

Proof. Suppose job j is assigned to machine i and will be scheduled at t_j (dropping the subscript i) according to the HDF rule on the current set of jobs assigned to i. Then

    α_j = w_ij (t_j − r_j) + v_ij β_{i t_j} + (1/ρ_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw.

The increase in the total cost of the algorithm is only the increase in that for machine i. From Lemma 2.4, this is at most

    w_ij (t_j − r_j) + (1/ρ_ij) ∫_0^{w_ij} (f*)^{-1}(w + ŵ_{i t_j}) dw.

Comparing the two, it suffices to show that

(2.4)    β_{i t_j} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw ≥ (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w + ŵ_{i t_j}) dw.

By Lemma 2.3, we have β_{i t_j} = (f*)^{-1}(ŵ_{i t_j}). Further, (f*)^{-1} is concave because f* is convex. So (f*)^{-1}(ŵ_{i t_j}) + (f*)^{-1}(w) ≥ (f*)^{-1}(w + ŵ_{i t_j}) for all 0 ≤ w ≤ w_ij. Integrating this inequality over w from 0 to w_ij, we get Eqn. (2.4) as desired.
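To illustrate the job assignment rule just described (and summarized in Figure 2), here is a sketch (ours, for f(s) = s^α) that evaluates, for each machine, the quantity ρ_ij(t_ij − r_j) + β_{i t_ij} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw by simulating the planned HDF schedule up to the new job's start time, and then assigns the job to the minimizing machine. The representation of a machine's pending work by (density, remaining volume) pairs and the numeric parameters are our own simplifications.

```python
import numpy as np

alpha = 3.0                                                   # f(s) = s^alpha
f_star_inv = lambda x: alpha * (x / (alpha - 1.0)) ** ((alpha - 1.0) / alpha)
speed = lambda W: (W / (alpha - 1.0)) ** (1.0 / alpha)        # conservative speed scaling

def assignment_value(pending, w_new, v_new, dt=1e-3):
    """rho*(t_ij - r_j) + beta_{i,t_ij} + (1/w)*int_0^w (f*)^{-1}(x) dx for one machine,
    where pending = [(density, remaining volume), ...] is the machine's planned queue and
    t_ij is when the new job would start under HDF with the planned (conservative) speeds."""
    rho_new = w_new / v_new
    queue = sorted(([d, v] for d, v in pending), key=lambda p: p[0], reverse=True)
    elapsed = 0.0
    while queue and queue[0][0] > rho_new:                    # clear higher-density jobs first
        W = sum(d * v for d, v in queue)                      # remaining fractional weight
        dv = min(speed(W) * dt, queue[0][1])
        elapsed += dv / speed(W)
        queue[0][1] -= dv
        if queue[0][1] <= 1e-12:
            queue.pop(0)
    beta = f_star_inv(sum(d * v for d, v in queue))           # beta_{i,t_ij} = (f*)^{-1}(W_hat)
    xs = np.linspace(0.0, w_new, 1001)
    return rho_new * elapsed + beta + np.trapz(f_star_inv(xs), xs) / w_new

# Two machines with different backlogs; the new job has weight 2 and volume 1 on both.
machines = [[(3.0, 1.0), (0.5, 2.0)], [(1.0, 0.5)]]
values = [assignment_value(p, w_new=2.0, v_new=1.0) for p in machines]
print(values, "-> assign to machine", int(np.argmin(values)))
```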
Proof. [Theorem 2.2] Consider the release of a new job j and suppose it is assigned to machine i. The change in the dual cost, ΔD, equals α_j plus the change in the contribution of the β_it's, i.e.,

    ΔD = α_j − Δ(Σ_i ∫_0^∞ f*(β_it) dt).

Let the change in the total energy and the total flow-time of the algorithm be ΔE and ΔF respectively. From Lemma 2.7, α_j ≥ ΔE + ΔF. Earlier, in Lemma 2.5, we showed that the total flow-time is always equal to Σ_i ∫_0^∞ f*(β_it) dt. Thus the same holds for the difference:

    Δ(Σ_i ∫_0^∞ f*(β_it) dt) = ΔF.

From the three (in)equalities above, the change in the dual cost is at least the change in the energy cost of the algorithm.

Since this holds for every change, and both the dual cost and the energy cost of the algorithm are zero to begin with, it follows that the final dual cost is at least the final energy cost of the algorithm, i.e., D ≥ E. Further, by Lemma 2.6, the total flow-time and the energy are within a factor of Γ_f − 1. Even though that was shown for a single machine with no future jobs, it is easy to see that the same holds true for the conservative greedy algorithm as well. Therefore F ≤ (Γ_f − 1)E. The total cost of the algorithm, alg, can now be bounded in terms of the energy cost alone, which is bounded by the dual as above, i.e., alg = E + F ≤ Γ_f E ≤ Γ_f D. Also, by Theorem 2.1, the dual is a lower bound on 2 opt, i.e., D ≤ 2 opt. Putting it all together, we get alg ≤ 2Γ_f · opt.

  Fractional conservative greedy algorithm
  Speed scaling: Choose speed s_it s.t. f*(f'(s_it)) equals the fractional remaining weight on machine i. Set duals β_it = f'(s_it), also for future times, based on the currently planned schedule.
  Job selection: Schedule the job with the highest density (HDF).
  Job assignment: Assign job j to the machine i that minimizes ρ_ij (t_ij − r_j) + β_{i t_ij} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw, where t_ij would be the first time job j is scheduled on i given the HDF rule. Set α_j so that the corresponding constraint is tight. Update the β_it's for machine i.

Figure 2: The conservative greedy online scheduling algorithm for minimizing fractional flow time plus energy with arbitrary power functions.

An alternate algorithm with essentially the same analysis is the following: assign job j to the machine for which the increase in the total cost is the minimum. The dual α_j must however be set as we do currently, so there might be a disconnect between which machine the job is assigned to and which machine dictates the dual solution. The analysis is still pretty much the same, however.

2.5 Aggressive greedy algorithm  In the conservative greedy algorithm, the speed is scaled to the conservative extreme, as the speed is optimal assuming no future jobs arrive. However, in an online instance there might be jobs in the future, some of which will be effectively delayed by the current job. Therefore, a good online algorithm should take this into account when choosing the speed. In this section, we consider a family of algorithms with the aggressiveness in terms of speed scaling parameterized by a constant C ≥ 1, as given in Figure 3.

  Fractional C-aggressive greedy algorithm
  Speed scaling: Choose speed s_it s.t. f*(f'(s_it/C)) equals the total remaining weight on machine i. Set duals β_it = (1/C)(f*)^{-1}(ŵ_it), s.t. f*(C β_it) equals the total remaining weight on machine i, also for future times, based on the currently planned schedule.
  Job selection: Schedule the job with the highest density (HDF).
  Job assignment: Assign job j to the machine i that minimizes ρ_ij (t_ij − r_j) + β_{i t_ij} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw, where t_ij would be the first time job j is scheduled on i given the HDF rule. Set α_j so that the corresponding constraint is tight. Update the β_it's for machine i.

Figure 3: The aggressive greedy online scheduling algorithm for minimizing weighted fractional flow time plus energy with arbitrary power functions.

The following property of the β_it's implies that our choice of β_it's can be derived from the primal dual analysis and ensures dual feasibility.

Lemma 2.8. At any time t at which no new jobs are released, β_it linearly decreases with time, at the rate of ρ_ij, where j is the job being processed on i at time t, i.e., dβ_it/dt = −ρ_ij.

Proof. [Sketch] Note that the claimed equation is satisfied by the conservative greedy algorithm. Further, the C-aggressive greedy algorithm is running at exactly C times the speed of the conservative greedy algorithm given the same remaining weight of jobs, and the β_it's are set to be a 1/C fraction of those in the conservative greedy algorithm given the same remaining weight of jobs. Putting together these observations and our definition of β_it, we can easily verify the lemma.

We will show that by choosing the optimal aggressiveness we can improve the competitive ratio by a logarithmic factor. Concretely, we show the following.

Theorem 2.3. The fractional C-aggressive greedy algorithm is 2Γ_{f,C}-competitive for minimizing weighted fractional flow time plus energy with power function f, where

    Γ_{f,C} = C (C^{Γ_f} + Γ_f − 1) / (Γ_f C − Γ_f + 1).

(Recall that Γ_f = max_s f*(f'(s))/f(s) + 1.)

Proof of Theorem 2.3  In the rest of this section, we present the proof of Theorem 2.3. Given Lemma 2.8, it is easy to check that the way we are setting the primal and dual variables guarantees feasibility.
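To see the trade-off of Theorem 2.3 numerically, the following sketch (ours) evaluates Γ_{f,C} and minimizes the resulting ratio 2Γ_{f,C} over C on a grid; for Γ_f ≤ 2 the minimum is at C = 1 (recovering Theorem 2.2), for Γ_f = 3 it is about 5.58 near C ≈ 1.17, and the choice C = 1 + ln Γ_f/Γ_f of Corollary 2.1 below stays within the 8Γ_f/log₂ Γ_f bound.

```python
import numpy as np

def gamma_fc(gamma_f, c):
    """The parameter of Theorem 2.3; the competitive ratio is 2 * Gamma_{f,C}."""
    return c * (c ** gamma_f + gamma_f - 1.0) / (gamma_f * c - gamma_f + 1.0)

for gamma_f in (2.0, 3.0, 10.0):
    cs = np.linspace(1.0, 3.0, 20001)
    ratios = 2.0 * gamma_fc(gamma_f, cs)
    c_corollary = 1.0 + np.log(gamma_f) / gamma_f             # choice used for general Gamma_f
    print(gamma_f, ratios.min(), cs[np.argmin(ratios)],
          2.0 * gamma_fc(gamma_f, c_corollary),
          8.0 * gamma_f / np.log2(gamma_f))
# Gamma_f = 2: minimum 4 at C = 1 (the conservative algorithm);
# Gamma_f = 3: minimum ~5.58 near C ~ 1.17;
# in all cases both choices of C stay below 8*Gamma_f/log2(Gamma_f).
```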

The competitive ratio comes from analyzing the ratio between the incremental costs of the algorithm and of the dual due to the arrival of new jobs. We first develop a few lemmas that are needed in the analysis.

We start by showing that the parameter Γ_f for an arbitrary function f plays a role similar to that of the degree of polynomial functions, as the following lemma holds. (Recall that Γ_f = α for f(s) = s^α.)

Lemma 2.9. For any power function f, any s > 0 and any C > 1, we have f(Cs) ≤ C^{Γ_f} f(s).

Proof. Note that for any s' > 0, we have that

    f'(s') s' / f(s') = (f*(f'(s')) + f(s')) / f(s') = f*(f'(s')) / f(s') + 1 ≤ Γ_f,

where the first equality is because s' and f'(s') are a complementary pair and the inequality is by the definition of Γ_f. So we have that

    ln(f(Cs)) − ln(f(s)) = ∫_1^C (d ln f(xs)/dx) dx = ∫_1^C (f'(xs) s / f(xs)) dx = ∫_1^C (f'(xs) · xs / f(xs)) (1/x) dx ≤ ∫_1^C (Γ_f / x) dx = Γ_f ln C.

The lemma follows.

Alternative dual objective: For the convenience of analysis, we will consider an alternative dual objective,

    maximize  Σ_j α_j − Σ_i (1/C) ∫_0^∞ f*(C β_it) dt,

subject to the same constraints. We let D' denote the value of the above dual objective. By the convexity of f*, we have the following lemma.

Lemma 2.10. For any values of the dual variables, we have D' ≤ D.

Alternative power function: We will consider the total cost of our algorithm w.r.t. the power function f̂(s) = C^{Γ_f} f(s/C). We let Ê denote the energy cost of the algorithm w.r.t. this power function, and let âlg = F + Ê denote the total cost of the algorithm w.r.t. this power function, noting that the flow time is unaffected. By Lemma 2.9, we have the following.

Lemma 2.11. For any instance, we have Ê ≥ E and âlg ≥ alg.

By Lemma 2.10 and Lemma 2.11, it suffices to bound the ratio between the increase in the total cost of the algorithm w.r.t. power function f̂ when a new job arrives and the increase in the alternative dual objective, denoted Δâlg and ΔD' respectively.

Next, suppose a new job j arrives at time t_j and is assigned to machine i. We will account for the incremental costs of the algorithm and the dual by relating them to the incremental cost in an imaginary instance in which we are using the conservative greedy algorithm. We will call the current instance I and the imaginary instance I'. More precisely, let there be a single machine in I' that is identical to machine i in I. For each incomplete job on i in I at time t_j (before the arrival of job j), we put an identical job with the same remaining volume in I' at time 0. Then, we also consider releasing job j in I' at time 0. By our construction of I' and the fact that the C-aggressive greedy algorithm is running exactly C times faster than the conservative one, there is a one-to-one mapping between the timeline after t_j in I and the timeline in I', as specified in the next lemma, whose proof is straightforward and omitted.

Lemma 2.12. For any time t ≥ t_j, the remaining weight of each job in I at time t is the same as that in I' at time C(t − t_j), both before and after the release of job j.

Let ΔF' and ΔE' denote the increase in weighted fractional flow time and energy, respectively, in I', conditioned on running the conservative greedy algorithm both before and after releasing job j. Recall that in the conservative greedy algorithm we have ΔF' = (Γ_f − 1) ΔE'. We let ΔF and ΔÊ denote the increase in weighted fractional flow time and energy (w.r.t. f̂) in I due to the arrival of job j. The next lemma accounts for the increase in flow time due to the arrival of job j as a fraction of ΔE'. The proof is rather straightforward from Lemma 2.12, so we omit it.

Lemma 2.13. ΔF = (1/C) ΔF' = (1/C)(Γ_f − 1) ΔE'.

The next lemma bounds the increase in energy due to the arrival of job j by a fraction of ΔE'.

Lemma 2.14. ΔÊ = C^{Γ_f − 1} ΔE'.

Proof. For any t ≥ t_j, suppose the C-aggressive greedy algorithm runs at speed s at time t. Then, the energy cost (w.r.t. f̂) in I from time t to t + dt is f̂(s) dt = C^{Γ_f} f(s/C) dt. Further, by Lemma 2.12 and the fact that the C-aggressive greedy algorithm is running C times faster than the conservative one, the conservative greedy algorithm runs at speed s/C at time C(t − t_j) in I'. So the energy cost from time C(t − t_j) to C(t − t_j) + C dt in I' is f(s/C) C dt, a C^{−Γ_f + 1} fraction of the corresponding energy cost in I. Integrating over all t ≥ t_j, we get that the energy cost (w.r.t. f̂) in I after time t_j is exactly C^{Γ_f − 1} times the total energy cost in I'. As this relation holds both before and after the release of job j, the lemma follows.

The next lemma establishes the relation between the incremental cost of the alternative dual due to the β_it's and ΔE'.

Lemma 2.15. Δ((1/C) ∫_0^∞ f*(C β_it) dt) = (1/C²)(Γ_f − 1) ΔE'.

Proof. [Sketch] Recall that β_it = (1/C)(f*)^{-1}(ŵ_it). So we have (1/C) f*(C β_it) = (1/C) ŵ_it, and hence (1/C) ∫_0^∞ f*(C β_it) dt equals 1/C times the weighted fractional flow time of instance I. Hence, we have Δ((1/C) ∫_0^∞ f*(C β_it) dt) = (1/C) ΔF = (1/C²) ΔF'. The lemma then follows from ΔF' = (Γ_f − 1) ΔE'.

The next lemma follows from the fact that the C-aggressive greedy algorithm is running at exactly C times the speed of the conservative greedy algorithm and from our choice of β_it.

Lemma 2.16. α_j ≥ (1/C)(ΔF' + ΔE') = (1/C) Γ_f ΔE'.

Proof. [Sketch] Consider the time t_ij at which the new job j would be inserted according to HDF w.r.t. the original instance (without job j). Let t'_j denote the time at which j would be inserted in the imaginary instance. By our choice of speed scaling, we have t_ij − t_j = (1/C) t'_j. Moreover, β_{i t_ij} = (1/C)(f*)^{-1}(ŵ_{i t_ij}). So by our choice of α_j and an analysis similar to Lemma 2.7, we get that α_j is at least 1/C times the total increase in weighted fractional flow time plus energy in I', i.e., α_j ≥ (1/C)(ΔE' + ΔF').

Finally, we are ready to derive the competitive ratio of the C-aggressive greedy algorithm.

Proof. [Theorem 2.3] Putting together Lemma 2.13 to Lemma 2.16, we have that the incremental cost of the algorithm (w.r.t. f̂) is

    Δâlg = ΔF + ΔÊ = ((1/C)(Γ_f − 1) + C^{Γ_f − 1}) ΔE',

and the incremental cost of the alternative dual is

    ΔD' = Δα_j − Δ((1/C) ∫_0^∞ f*(C β_it) dt) ≥ ((1/C) Γ_f − (1/C²)(Γ_f − 1)) ΔE'.

Simplifying the ratio between Δâlg and ΔD' from the above equations proves Theorem 2.3.

Optimal choice of aggressiveness  By optimizing our choice of C for 1 < Γ_f ≤ 2 and Γ_f = 3, and asymptotically optimizing it for general Γ_f, we get the following corollary. We also show its (asymptotic) optimality in Section 5 with an almost matching lower bound.

Corollary 2.1. The fractional C-aggressive greedy algorithm is:
  2Γ_f-competitive for 1 < Γ_f ≤ 2, with C = 1;
  5.58-competitive for Γ_f = 3, with C ≈ 1.168;
  8Γ_f/log₂ Γ_f-competitive for general Γ_f, with C = 1 + ln Γ_f/Γ_f.

Proof. The competitive ratios for 1 < Γ_f ≤ 2 and Γ_f = 3 are easy to verify, so we omit the tedious calculation here. Next, consider the asymptotic bound for general Γ_f. With our choice of C = 1 + ln Γ_f/Γ_f, the denominator of Γ_{f,C} is

    Γ_f C − Γ_f + 1 = ln Γ_f + 1 > ln Γ_f = log₂ Γ_f / log₂ e.

On the other hand, note that C = 1 + ln Γ_f/Γ_f ≤ 1 + 1/e, and

    C^{Γ_f} = (1 + ln Γ_f/Γ_f)^{Γ_f} < e^{ln Γ_f} = Γ_f.

So, putting together, the numerator of Γ_{f,C} is

    C (C^{Γ_f} + Γ_f − 1) < (1 + 1/e)(Γ_f + Γ_f − 1) < (2/e + 2) Γ_f.

Finally, by the above discussion, the fact that (2/e + 2) log₂ e < 4, and Theorem 2.3, we get the claimed asymptotic competitive ratio for arbitrary Γ_f.

In particular, for polynomial power functions f(s) = s^α, we have the following.

Corollary 2.2. The fractional C-aggressive greedy algorithm is:
  2α-competitive for 1 < α ≤ 2, with C = 1;
  5.58-competitive for α = 3, with C ≈ 1.168;
  8α/log₂ α-competitive for general α, with C = 1 + ln α/α.

We remark that with f(s) = s^α the speed scaling of PERW also falls into the framework of C-aggressive greedy algorithms. Further, its choice of C_α = (α − 1)^{1/α} is also asymptotically optimal when α ≥ 2. (We omit this calculation in this paper.) So our result can be viewed as a formal justification of the optimality of PERW. A similar remark applies to the integral case.

We also note that for minimizing fractional flow time plus energy, we can drop the third term in the primal objective and drop a factor of 2 in the competitive ratio (e.g., in Lemma 2.6). As a result, we can get an α-competitive algorithm for 1 < α ≤ 2, and a 2.79-competitive algorithm for α = 3, hence the claimed ratios in Table 2. We omit the details in this paper.

3 Weighted integral flow-time plus energy

In this section, we discuss the problem of online scheduling for minimizing weighted (integral) flow-time plus energy. The problem for weighted integral flow time has the same input, output, and constraints as the fractional flow time version. The only difference is the objective. Next, let us formally define the weighted integral flow time of an instance given a schedule. Suppose job j is completed at time c_j; then the weighted integral flow time is

    Σ_j w_j (c_j − r_j).

An equivalent formula for the same is as follows. Let A_t denote the set of jobs that have been released before or at time t but have not been completed according to the schedule till time t, i.e.,

    r_j ≤ t   and   ∫_{t' ∈ [r_j, t]: j(t') = j} s_t' dt' < v_j.

The weighted integral flow time is equal to ∫_0^∞ Σ_{j ∈ A_t} w_j dt. So the main difference is that when a job is partially completed, the entire weight of the job contributes to the weighted integral flow time, while only the incomplete fraction contributes to the weighted fractional flow time.

Convex programming relaxation and the dual  Similar to the fractional case, we use a primal-dual analysis via the following convex program for the problem of minimizing integral flow time plus energy, and consider its dual program:

(P_int)  minimize   Σ_{i,j} ρ_ij ∫_{r_j}^∞ (t − r_j) s_ijt dt + Σ_i ∫_0^∞ f(s_it) dt + Σ_{i,j} ∫_{r_j}^∞ (f*)^{-1}(w_ij) s_ijt dt    (3.5)
         subject to (3.6)  s_it = Σ_{j: r_j ≤ t} s_ijt,
                    (3.7)  ∀j:  Σ_i ∫_{r_j}^∞ (s_ijt/v_ij) dt ≥ 1.

(D_int)  maximize   Σ_j α_j − Σ_i ∫_0^∞ f*(β_it) dt
         subject to ∀ i, j, t ≥ r_j:  α_j/v_ij ≤ ρ_ij (t − r_j) + β_it + (f*)^{-1}(w_ij).
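The following small helper (ours) makes the difference between the two objectives concrete: it computes both the weighted fractional and the weighted integral flow time of a given schedule by direct simulation. The schedule interface and the toy instance are our own choices.

```python
def flow_times(jobs, schedule, dt=1e-3, horizon=10.0):
    """jobs: {j: (release, volume, weight)}; schedule(t) -> (job id or None, speed).
    Returns (weighted fractional, weighted integral) flow time of the schedule."""
    remaining = {j: v for j, (r, v, w) in jobs.items()}
    frac = integ = 0.0
    t = 0.0
    while t < horizon:
        j, s = schedule(t)
        if j is not None and jobs[j][0] <= t and remaining[j] > 0.0:
            remaining[j] = max(0.0, remaining[j] - s * dt)
        for k, (r, v, w) in jobs.items():
            if r <= t and remaining[k] > 0.0:
                frac += w * (remaining[k] / v) * dt    # only the unfinished fraction counts
                integ += w * dt                        # the full weight counts until completion
        t += dt
    return frac, integ

# Two unit jobs released at time 0, run back to back at speed 1 on one machine.
jobs = {0: (0.0, 1.0, 1.0), 1: (0.0, 1.0, 1.0)}
schedule = lambda t: (0, 1.0) if t < 1.0 else ((1, 1.0) if t < 2.0 else (None, 0.0))
print(flow_times(jobs, schedule))   # ~(2.0, 3.0): fractional 0.5 + 1.5, integral 1 + 2
```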

Here we use the same notation as in the fractional case, so we omit the detailed explanations of the convex programs. The only change is the third term in the primal program (and the corresponding part in the dual). This is because, conditioned on being allocated to machine i, the optimal cost for job j in a single-job instance w.r.t. integral flow time plus energy is v_ij (f*)^{-1}(w_ij). Hence, the share of this optimal single-job cost for the (s_ijt/v_ij) dt fraction of job j processed on machine i from t to t + dt is (f*)^{-1}(w_ij) s_ijt dt.

Algorithms  Similar to the fractional case, we consider the conservative greedy algorithm, which uses the optimal speed scaling assuming there are no future jobs, and a more general family of C-aggressive greedy algorithms. The main difference compared to the fractional case is that the job selection rule on a single machine is no longer HDF. Instead, the algorithms combine the job assignment rule and the job selection rule by maintaining a processing queue for each machine. The machines process the jobs in their queues in order. When a new job arrives, the algorithm inserts the new job into a position in one of the processing queues according to the dual variables. The formal descriptions of the algorithms are presented in Figure 4 and Figure 5.

  Integral conservative greedy algorithm
  Speed scaling: Choose speed s_it s.t. f*(f'(s_it)) equals the integral remaining weight on machine i. Set duals β_it = f'(s_it), s.t. f*(β_it) equals the integral remaining weight on machine i, also for future times, based on the currently planned schedule.
  Job selection and job assignment: Upon arrival of a new job j, assign it to a machine i and insert it into the processing queue of i such that we minimize ρ_ij (t_ij − r_j) + β_{i t_ij} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw, where t_ij would be the completion time of the predecessor of job j in the queue. Set α_j so that the corresponding constraint is tight. Update the β_it's for machine i.

Figure 4: The conservative greedy online scheduling algorithm for minimizing weighted integral flow time plus energy with arbitrary power functions.

  Integral C-aggressive greedy algorithm
  Speed scaling: Choose speed s_it s.t. f*(f'(s_it/C)) equals the integral remaining weight on machine i. Set duals β_it = (1/C) f'(s_it/C), s.t. f*(C β_it) equals the integral remaining weight on machine i, also for future times, based on the currently planned schedule.
  Job selection and job assignment: Upon arrival of a new job j, assign it to a machine i and insert it into the processing queue of i such that we minimize ρ_ij (t_ij − r_j) + β_{i t_ij} + (1/w_ij) ∫_0^{w_ij} (f*)^{-1}(w) dw, where t_ij would be the completion time of the predecessor of job j in the queue. Set α_j so that the corresponding constraint is tight. Update the β_it's for machine i.

Figure 5: The C-aggressive greedy online scheduling algorithm for minimizing weighted integral flow time plus energy with arbitrary power functions.

The integral conservative/aggressive greedy algorithms obtain the following competitive ratios for general power functions and polynomial power functions respectively.

Theorem 3.1. The integral conservative greedy algorithm is 2Γ_f-competitive for minimizing weighted integral flow time plus energy, where, recall, Γ_f = max_s f*(f'(s))/f(s) + 1.

Corollary 3.1. The integral conservative greedy algorithm is 2α-competitive for the power function f(s) = s^α for minimizing weighted integral flow time plus energy.

Theorem 3.2. The integral C-aggressive greedy algorithm is 2Γ_{f,C}-competitive for minimizing weighted integral flow time plus energy with power function f, where

    Γ_{f,C} = C (C^{Γ_f} + Γ_f − 1) / (Γ_f C − Γ_f + 1).

By asymptotically optimizing our choice of C, we get the following corollaries, whose optimality will be presented in Section 5. The proofs are identical to the fractional case and hence omitted.

Corollary 3.2. The integral C-aggressive greedy algorithm is 2Γ_f-competitive for 1 < Γ_f ≤ 2, with C = 1; 5.58-competitive for Γ_f = 3, with C ≈ 1.168; and 8Γ_f/log₂ Γ_f-competitive for general Γ_f, with C = 1 + ln Γ_f/Γ_f.

Corollary 3.3. For minimizing weighted integral flow time plus energy w.r.t. the power function f(s) = s^α, the integral C-aggressive greedy algorithm is 2α-competitive for 1 < α ≤ 2, with C = 1; 5.58-competitive for α = 3, with C ≈ 1.168; and 8α/log₂ α-competitive for general α, with C = 1 + ln α/α.

Competitive ratio  The primal dual analysis of the integral case is almost identical to the fractional case. So here we only sketch the analysis of the conservative algorithm and explain the main differences between the integral case and the fractional case.

Extending the analysis from conservative to aggressive algorithms is fairly straightforward using the techniques we introduced in Section 2.5, so the details are omitted.

Next, we show that α_j is an upper bound on the increase in flow time plus energy due to job j. This is the main technical component of the analysis of the competitive ratio. The claim follows easily from the next lemma, whose proof is similar to that of Lemma 2.7 and is sketched below.

Lemma 3.1. Suppose j' is a job in the queue of machine i whose scheduled completion time is t' before the arrival of job j. Then, the increase in total cost if we insert job j into the queue of machine i right after j' is at most

    v_ij (ρ_ij (t' − r_j) + β_{it'} + (f*)^{-1}(w_ij)).

Proof. [Sketch] Note that the algorithm uses the optimal speed scaling, assuming no future jobs. To bound the increase in flow time plus energy when we insert job j into the queue of machine i right after j', it suffices to bound the increase when we use a sub-optimal speed scaling. In particular, we let the jobs scheduled before j use the same speeds as before the arrival of j, and use the optimal speed scaling after that. In this case, the increase in flow time plus energy is the flow time due to job j waiting until time t', i.e., w_ij (t' − r_j), plus the increase in flow time (of job j and of the jobs scheduled after j) plus energy due to processing job j, i.e., v_ij (f*)^{-1}(W_i(t') + w_ij), where W_i(t') is the integral remaining weight of jobs on machine i at time t'. By the concavity of (f*)^{-1}, this increase is at most v_ij ((f*)^{-1}(W_i(t')) + (f*)^{-1}(w_ij)). The lemma then follows from f*(β_{it'}) = W_i(t').

Note that β_it remains the same while machine i is processing the same job. So the minimal value of v_ij (ρ_ij (t − r_j) + β_it + (f*)^{-1}(w_ij)) must be achieved at the completion time t' of one of the jobs on i. As a simple corollary of Lemma 3.1, we have:

Corollary 3.4. The value of α_j is at least the increase in flow time plus energy due to job j.

Now we are ready to analyze the competitive ratio of the integral conservative greedy algorithm.

Proof. [Theorem 3.1] First, note that it follows from our definition of the primal program (P_int) that its optimal objective is at most twice the optimal flow time plus energy of any (offline) algorithm. Further, by our choice of the β_it's, the contribution of Σ_i ∫ f*(β_it) dt is the weighted integral flow time of the algorithm. So, combining with Corollary 3.4, we get that upon the arrival of new jobs, the increase in the dual objective is at least the increase in the energy of the algorithm. Finally, via the same argument as in the fractional case, we have that the weighted integral flow time of the algorithm is at most Γ_f − 1 times the energy. So the theorem follows.

4 Resource augmentation

Further, as a simple application of our primal dual approach, we manage to give simpler proofs of either matching or improved competitive ratios for several online scheduling problems with resource augmentation.
In the resource augmentation setting, we compare to a weaker offline benchmark, in the sense that given the same energy, the offline algorithm can only run at a 1/(1 + ɛ) fraction of the speed of the online algorithm.

Theorem 4.1. The fractional/integral conservative greedy algorithm is (1 + ɛ)-speed and 2(1/ɛ + 1)-competitive.


Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Solution Thermodynamics

Solution Thermodynamics Soluton hermodynamcs usng Wagner Notaton by Stanley. Howard Department of aterals and etallurgcal Engneerng South Dakota School of nes and echnology Rapd Cty, SD 57701 January 7, 001 Soluton hermodynamcs

More information

Spectral Graph Theory and its Applications September 16, Lecture 5

Spectral Graph Theory and its Applications September 16, Lecture 5 Spectral Graph Theory and ts Applcatons September 16, 2004 Lecturer: Danel A. Spelman Lecture 5 5.1 Introducton In ths lecture, we wll prove the followng theorem: Theorem 5.1.1. Let G be a planar graph

More information

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C Some basc nequaltes Defnton. Let V be a vector space over the complex numbers. An nner product s gven by a functon, V V C (x, y) x, y satsfyng the followng propertes (for all x V, y V and c C) (1) x +

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

10. Canonical Transformations Michael Fowler

10. Canonical Transformations Michael Fowler 10. Canoncal Transformatons Mchael Fowler Pont Transformatons It s clear that Lagrange s equatons are correct for any reasonable choce of parameters labelng the system confguraton. Let s call our frst

More information

CSC 411 / CSC D11 / CSC C11

CSC 411 / CSC D11 / CSC C11 18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

Lecture 14: Bandits with Budget Constraints

Lecture 14: Bandits with Budget Constraints IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed

More information

CHAPTER 17 Amortized Analysis

CHAPTER 17 Amortized Analysis CHAPTER 7 Amortzed Analyss In an amortzed analyss, the tme requred to perform a sequence of data structure operatons s averaged over all the operatons performed. It can be used to show that the average

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Analysis of Discrete Time Queues (Section 4.6)

Analysis of Discrete Time Queues (Section 4.6) Analyss of Dscrete Tme Queues (Secton 4.6) Copyrght 2002, Sanjay K. Bose Tme axs dvded nto slots slot slot boundares Arrvals can only occur at slot boundares Servce to a job can only start at a slot boundary

More information

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials MA 323 Geometrc Modellng Course Notes: Day 13 Bezer Curves & Bernsten Polynomals Davd L. Fnn Over the past few days, we have looked at de Casteljau s algorthm for generatng a polynomal curve, and we have

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

(1 ) (1 ) 0 (1 ) (1 ) 0

(1 ) (1 ) 0 (1 ) (1 ) 0 Appendx A Appendx A contans proofs for resubmsson "Contractng Informaton Securty n the Presence of Double oral Hazard" Proof of Lemma 1: Assume that, to the contrary, BS efforts are achevable under a blateral

More information

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution. Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium?

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium? APPLIED WELFARE ECONOMICS AND POLICY ANALYSIS Welfare Propertes of General Equlbrum What can be sad about optmalty propertes of resource allocaton mpled by general equlbrum? Any crteron used to compare

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

An Integrated OR/CP Method for Planning and Scheduling

An Integrated OR/CP Method for Planning and Scheduling An Integrated OR/CP Method for Plannng and Schedulng John Hooer Carnege Mellon Unversty IT Unversty of Copenhagen June 2005 The Problem Allocate tass to facltes. Schedule tass assgned to each faclty. Subect

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

1 The Mistake Bound Model

1 The Mistake Bound Model 5-850: Advanced Algorthms CMU, Sprng 07 Lecture #: Onlne Learnng and Multplcatve Weghts February 7, 07 Lecturer: Anupam Gupta Scrbe: Bryan Lee,Albert Gu, Eugene Cho he Mstake Bound Model Suppose there

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations

9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations Physcs 171/271 - Chapter 9R -Davd Klenfeld - Fall 2005 9 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys a set

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Finding Dense Subgraphs in G(n, 1/2)

Finding Dense Subgraphs in G(n, 1/2) Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng

More information

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment O-lne Temporary Tasks Assgnment Yoss Azar and Oded Regev Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978, Israel. azar@math.tau.ac.l??? Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978,

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

COS 511: Theoretical Machine Learning. Lecturer: Rob Schapire Lecture # 15 Scribe: Jieming Mao April 1, 2013

COS 511: Theoretical Machine Learning. Lecturer: Rob Schapire Lecture # 15 Scribe: Jieming Mao April 1, 2013 COS 511: heoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 15 Scrbe: Jemng Mao Aprl 1, 013 1 Bref revew 1.1 Learnng wth expert advce Last tme, we started to talk about learnng wth expert advce.

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n +

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n + Topcs n Theoretcal Computer Scence May 4, 2015 Lecturer: Ola Svensson Lecture 11 Scrbes: Vncent Eggerlng, Smon Rodrguez 1 Introducton In the last lecture we covered the ellpsod method and ts applcaton

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Lagrange Multipliers Kernel Trick

Lagrange Multipliers Kernel Trick Lagrange Multplers Kernel Trck Ncholas Ruozz Unversty of Texas at Dallas Based roughly on the sldes of Davd Sontag General Optmzaton A mathematcal detour, we ll come back to SVMs soon! subject to: f x

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

THE SUMMATION NOTATION Ʃ

THE SUMMATION NOTATION Ʃ Sngle Subscrpt otaton THE SUMMATIO OTATIO Ʃ Most of the calculatons we perform n statstcs are repettve operatons on lsts of numbers. For example, we compute the sum of a set of numbers, or the sum of the

More information

Graph Reconstruction by Permutations

Graph Reconstruction by Permutations Graph Reconstructon by Permutatons Perre Ille and Wllam Kocay* Insttut de Mathémathques de Lumny CNRS UMR 6206 163 avenue de Lumny, Case 907 13288 Marselle Cedex 9, France e-mal: lle@ml.unv-mrs.fr Computer

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.

More information

Which Separator? Spring 1

Which Separator? Spring 1 Whch Separator? 6.034 - Sprng 1 Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng 3 Margn of a pont " # y (w $ + b) proportonal

More information

a b a In case b 0, a being divisible by b is the same as to say that

a b a In case b 0, a being divisible by b is the same as to say that Secton 6.2 Dvsblty among the ntegers An nteger a ε s dvsble by b ε f there s an nteger c ε such that a = bc. Note that s dvsble by any nteger b, snce = b. On the other hand, a s dvsble by only f a = :

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look

More information

9 Characteristic classes

9 Characteristic classes THEODORE VORONOV DIFFERENTIAL GEOMETRY. Sprng 2009 [under constructon] 9 Characterstc classes 9.1 The frst Chern class of a lne bundle Consder a complex vector bundle E B of rank p. We shall construct

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

The Second Eigenvalue of Planar Graphs

The Second Eigenvalue of Planar Graphs Spectral Graph Theory Lecture 20 The Second Egenvalue of Planar Graphs Danel A. Spelman November 11, 2015 Dsclamer These notes are not necessarly an accurate representaton of what happened n class. The

More information

How Strong Are Weak Patents? Joseph Farrell and Carl Shapiro. Supplementary Material Licensing Probabilistic Patents to Cournot Oligopolists *

How Strong Are Weak Patents? Joseph Farrell and Carl Shapiro. Supplementary Material Licensing Probabilistic Patents to Cournot Oligopolists * How Strong Are Weak Patents? Joseph Farrell and Carl Shapro Supplementary Materal Lcensng Probablstc Patents to Cournot Olgopolsts * September 007 We study here the specal case n whch downstream competton

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

Exercise Solutions to Real Analysis

Exercise Solutions to Real Analysis xercse Solutons to Real Analyss Note: References refer to H. L. Royden, Real Analyss xersze 1. Gven any set A any ɛ > 0, there s an open set O such that A O m O m A + ɛ. Soluton 1. If m A =, then there

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

We present the algorithm first, then derive it later. Assume access to a dataset {(x i, y i )} n i=1, where x i R d and y i { 1, 1}.

We present the algorithm first, then derive it later. Assume access to a dataset {(x i, y i )} n i=1, where x i R d and y i { 1, 1}. CS 189 Introducton to Machne Learnng Sprng 2018 Note 26 1 Boostng We have seen that n the case of random forests, combnng many mperfect models can produce a snglodel that works very well. Ths s the dea

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information