Online Non-clairvoyant Scheduling to Simultaneously Minimize All Convex Functions
Kyle Fox 1, Sungjin Im 2, Janardhan Kulkarni 2, and Benjamin Moseley 3

1 Department of Computer Science, University of Illinois, Urbana, IL 61801. kylefox2@illinois.edu
2 Department of Computer Science, Duke University, Durham, NC. [sungjin,kulkarni]@cs.duke.edu
3 Toyota Technological Institute at Chicago, Chicago, IL. moseley@ttic.edu

Abstract. We consider scheduling jobs online to minimize the objective ∑_{i∈[n]} w_i g(C_i − r_i), where w_i is the weight of job i, r_i is its release time, C_i is its completion time, and g is any non-decreasing convex function. Previously, it was known that the clairvoyant algorithm Highest-Density-First (HDF) is (2+ε)-speed O(1)-competitive for this objective on a single machine for any fixed 0 < ε < 1 [2]. We show the first non-trivial results for this problem when g is not concave and the algorithm must be non-clairvoyant. More specifically, our results include:

- A (2+ε)-speed O(1)-competitive non-clairvoyant algorithm on a single machine for all non-decreasing convex g, matching the performance of HDF for any fixed 0 < ε < 1.
- A (3+ε)-speed O(1)-competitive non-clairvoyant algorithm on multiple identical machines for all non-decreasing convex g for any fixed 0 < ε < 1.

Our positive result on multiple machines is the first non-trivial one even when the algorithm is clairvoyant. Interestingly, all performance guarantees above hold for all non-decreasing convex functions g simultaneously. We supplement our positive results by showing that any algorithm that is oblivious to g is not O(1)-competitive with speed less than 2 on a single machine. Further, any non-clairvoyant algorithm that knows the function g cannot be O(1)-competitive with speed less than 2 on a single machine, or with speed less than 2 − 1/m on m identical machines.

1 Introduction

Scheduling a set of jobs that arrive over time on a single machine is perhaps the most basic setting considered in scheduling theory. A considerable amount of work has focused on this fundamental problem. For examples, see [26].
Research by this author is supported in part by the Department of Energy Office of Science Graduate Fellowship Program (DOE SCGF), made possible in part by the American Recovery and Reinvestment Act of 2009, administered by ORISE-ORAU under contract no. DE-AC05-06OR23100. Supported by NSF awards CCF and IIS.
In this setting, there are n jobs that arrive over time, and each job i requires some processing time p_i to be completed on the machine. In the online setting, the scheduler first becomes aware of job i at time r_i when job i is released. Note that in the online setting, it is standard to assume jobs can be preempted. Generally, a client that submits a job would like to minimize the flow time of the job, defined as F_i := C_i − r_i, where C_i denotes the completion time of job i. The flow time of a job measures the amount of time the job waits to be satisfied in the system.

When there are multiple jobs competing for service, the scheduler needs to make scheduling decisions to optimize a certain global objective. One of the most popular objectives is to minimize the total (or, equivalently, average) flow time of all the jobs, i.e., ∑_{i∈[n]} F_i. It is well known that the algorithm Shortest-Remaining-Processing-Time (SRPT) is optimal for that objective in the single machine setting. The algorithm SRPT always schedules the job that has the shortest remaining processing time at each point in time. Another well known result is that the algorithm First-In-First-Out (FIFO) is optimal for minimizing the maximum flow time, i.e., max_{i∈[n]} F_i, on a single machine. The algorithm FIFO schedules the jobs in the order they arrive.

These classic results have been extended to the case where jobs have priorities. In this extension, each job i is associated with a weight w_i denoting its priority; larger weight implies higher priority. The generalization of the total flow time problem is to minimize the total weighted flow time, ∑_{i∈[n]} w_i F_i. For this problem it is known that no online algorithm can be O(1)-competitive [5]. A generalization of the maximum flow time problem is to minimize the maximum weighted flow time, max_{i∈[n]} w_i F_i. It is also known for this problem that no online algorithm can be O(1)-competitive [1,5]. Due to these strong lower bounds, previous work for these objectives has appealed to the relaxed analysis model called resource augmentation [22].
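As a concrete illustration of the flow-time objective and the SRPT rule just described, here is a minimal discrete-time sketch (the instance, function name, and unit-step discretization are ours, not the paper's; it assumes integer release times and sizes and unit machine speed):

```python
# Minimal discrete-time sketch of preemptive single-machine SRPT
# (illustrative only; unit-length time steps, unit speed).

def srpt_total_flow(jobs):
    """jobs: list of (release_time, processing_time). Returns the total
    flow time sum_i (C_i - r_i) under Shortest-Remaining-Processing-Time."""
    remaining = {}                     # job index -> remaining work
    completion = {}
    t = 0
    while len(completion) < len(jobs):
        for i, (r, p) in enumerate(jobs):
            if r == t:
                remaining[i] = p       # job i is released now
        if remaining:
            i = min(remaining, key=remaining.get)  # shortest remaining first
            remaining[i] -= 1
            if remaining[i] == 0:
                completion[i] = t + 1
                del remaining[i]
        t += 1
    return sum(completion[i] - jobs[i][0] for i in range(len(jobs)))

jobs = [(0, 4), (1, 1), (2, 1)]
print(srpt_total_flow(jobs))   # -> 8; FIFO on the same instance gives 12
```

On this toy instance SRPT preempts the long job for the two short ones, illustrating why it minimizes total flow time while FIFO does not.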
In the resource augmentation relaxation, an algorithm A is said to be s-speed c-competitive if A has a competitive ratio of c when processing jobs s times faster than the adversary. The primary goal of a resource augmentation analysis is to find the minimum speed an algorithm requires to be O(1)-competitive. For the total weighted flow time objective, it is known that the algorithm Highest-Density-First (HDF) is (1+ε)-speed O(1/ε)-competitive for any fixed ε > 0 [25,10]. The algorithm HDF always schedules the job of highest density, w_i/p_i. For the maximum weighted flow objective, the algorithm Biggest-Weight-First (BWF) is known to be (1+ε)-speed O(1/ε)-competitive [5]. BWF always schedules the job with the largest weight.

Another widely considered objective is minimizing the l_k-norms of flow time, (∑_{i∈[n]} F_i^k)^{1/k} [8,7,20,1,4,23]. The l_k-norm objective is most useful for k ∈ {1, 2, 3, ∞}. Observe that total flow time is the l_1-norm of flow time, and the maximum flow time is the l_∞-norm. The l_2 and l_3 norms are natural balances between the l_1 and l_∞ norms. These objectives can be used to decrease the variance of flow time, thereby yielding a schedule that is fair to requests. It is known that no algorithm can be n^{Ω(1)}-competitive for minimizing the l_2-norm
[8]. On the positive side, for ε > 0, HDF was shown to be (1+ε)-speed O(1/ε²)-competitive for any l_k-norm objective, k ≥ 1 [8].

These objectives have also been considered in the identical machine scheduling setting [24,4,3,2,9,6,3,9]. In this setting, there are m machines that the jobs can be scheduled on. Each job can be scheduled on any machine, and job i requires p_i processing time no matter which machine it is assigned to. In the identical machine setting it is known that any randomized online algorithm has competitive ratio Ω(min{√(n/m), log P}), where P denotes the ratio between the maximum and minimum processing time of a job [24]. HDF as well as several other algorithms are known to be scalable for weighted flow time [10,4,9,3]. For the l_k-norms objective the multiple machine version of HDF is known to be scalable [3], as are other algorithms [4,9]. For maximum unweighted flow time it is known that FIFO is (3 − 2/m)-competitive, and for weighted maximum flow time a scalable algorithm is known [1,5].

The algorithms HDF and SRPT use the processing time of a job to make scheduling decisions. An algorithm which learns the processing time of a job upon its arrival is called clairvoyant. An algorithm that does not know the processing time of a job before completing the job is said to be non-clairvoyant. Among the aforementioned algorithms, FIFO and BWF are non-clairvoyant. Non-clairvoyant schedulers are highly desirable in many real world settings. For example, an operating system typically does not know a job's processing time. Thus, there has been extensive work done on designing non-clairvoyant schedulers for the problems discussed above. Scalable non-clairvoyant algorithms are known for the maximum weighted flow time, average weighted flow time, and l_k-norms of flow time objectives, even on identical machines [5,4]. It is common in scheduling theory that algorithms are tailored for specific scheduling settings and objective functions.
For instance, FIFO is considered the best algorithm for non-clairvoyantly minimizing the maximum flow time, while HDF is considered one of the best algorithms for minimizing total weighted flow time. One natural question that arises is what to do if a system designer wants to minimize several objective functions simultaneously. For instance, a system designer may want to optimize average quality of service while minimizing the maximum waiting time of a job. Different algorithms have been considered for minimizing average flow time and maximum flow time, but the system designer would like to have a single algorithm that performs well for both objectives.

Motivated by this question, the general cost function objective was considered in [2]. In the general cost function problem, there is a given function g : R+ → R+, and the goal of the scheduler is to minimize ∑_{i∈[n]} w_i g(F_i). One can think of g(F_i) as the penalty of making job i wait F_i time steps, scaled by job i's priority (its weight w_i). This objective captures most scheduling metrics. For example, this objective function captures total weighted flow time by setting g(x) = x. By setting g(x) = x^k, the objective also captures minimizing ∑_{i∈[n]} w_i F_i^k, which is essentially the same as the l_k-norm objective except the outer kth root is not taken. Finally, by making g grow very quickly, the objective can be designed to capture minimizing the maximum weighted flow time. As stated, one of the
reasons this objective was introduced was to find an algorithm that can optimize several objectives simultaneously. If one were to design an algorithm that optimizes the general cost function g while being oblivious to g, then this algorithm would optimize all objective functions in this framework simultaneously. In [2], the general cost function objective was considered only assuming that g is non-decreasing. This is a natural assumption, since there should be no incentive for a job to wait longer. It was shown that in this case, no algorithm that is oblivious to the cost function g can be O(1)-competitive with speed 2 − ε for any fixed ε > 0. Surprisingly, it was also shown that HDF, an algorithm that is oblivious to g, is (2+ε)-speed O(1/ε)-competitive. This result shows that it is indeed possible to design an algorithm that optimizes most of the reasonable scheduling objectives simultaneously on a single machine. Recall that HDF is clairvoyant. Ideally, we would like to have a non-clairvoyant algorithm for general cost functions. Further, there is currently no known similar result in the multiple identical machines setting.

Results: In this paper, we consider non-clairvoyant online scheduling to minimize the general cost function on a single machine as well as on multiple identical machines. In both settings, we give the first nontrivial positive results when the online scheduler is required to be non-clairvoyant. We concentrate on cost functions g which are differentiable, non-decreasing, and convex. We assume without loss of generality that g(0) = 0. Note that all of the objectives discussed previously have these properties. We show the following somewhat surprising result (Section 4).

Theorem 1. There exists a non-clairvoyant algorithm that is (2+ε)-speed O(1/ε)-competitive for minimizing ∑_{i∈[n]} w_i g(C_i − r_i) on a single machine for any ε > 0, when the given cost function g : R+ → R+ is differentiable, non-decreasing, and convex (g′ is non-decreasing). Further, this algorithm is oblivious to g.

We then consider the general cost function objective on multiple machines for the first time, and give a positive result.
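To make the objective ∑_{i∈[n]} w_i g(C_i − r_i) that these results address concrete, here is a small illustrative evaluation for a fixed schedule under two choices of g (the function name and the numbers are ours, not the paper's):

```python
# Evaluating the general cost objective sum_i w_i * g(C_i - r_i) for one
# fixed schedule under two different cost functions g (illustrative data).

def general_cost(jobs, g):
    """jobs: list of (weight, release, completion)."""
    return sum(w * g(C - r) for (w, r, C) in jobs)

jobs = [(1.0, 0.0, 3.0), (2.0, 1.0, 2.0)]
print(general_cost(jobs, lambda x: x))       # g(x) = x: weighted flow time -> 5.0
print(general_cost(jobs, lambda x: x ** 2))  # g(x) = x^2: long waits cost more -> 11.0
```

The same schedule is scored differently as g changes; an algorithm oblivious to g must do well for every such choice at once.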
The multiple-machine algorithm is also non-clairvoyant.

Theorem 2. There exists a non-clairvoyant algorithm that is (3+ε)-speed O(1/ε)-competitive for minimizing ∑_{i∈[n]} w_i g(C_i − r_i) on multiple identical machines for any ε > 0, when the given cost function g : R+ → R+ is differentiable, non-decreasing, and convex (g′ is non-decreasing). Further, this algorithm is oblivious to g.

Note that we do not know if there exists a constant competitive non-clairvoyant algorithm, even for a single machine with any constant speed, when the cost function is neither convex nor concave. We leave this gap as an open problem. We complement these positive results by extending the lower bound presented in [2]. They showed that for any ε > 0, no oblivious algorithm can be (2 − ε)-speed O(1)-competitive on a single machine when the cost function g is non-decreasing, but perhaps discontinuous. We show the same lower bound even if g is differentiable, non-decreasing, and convex. Thus, on a single machine, our
positive result is essentially tight up to constant factors in the competitive ratio, and our algorithm achieves the same performance guarantee while being non-clairvoyant.

Theorem 3. No randomized clairvoyant algorithm that is oblivious to g can be (2 − ε)-speed O(1)-competitive for minimizing ∑_{i∈[n]} w_i g(C_i − r_i) on a single machine, even if all jobs have unit weights and g is differentiable, non-decreasing, and convex.

We go on to show that even if a non-clairvoyant algorithm knows the cost function g, the algorithm cannot have a bounded competitive ratio when given speed less than 2.

Theorem 4. Any deterministic non-clairvoyant (possibly aware of g) algorithm for minimizing ∑_{i∈[n]} w_i g(C_i − r_i) on a single machine has an unbounded competitive ratio when given speed 2 − ε for any fixed ε > 0, where g is differentiable, non-decreasing, and convex.

Finally, we show that at least 2 − 1/m speed is needed for any non-clairvoyant algorithm to be constant competitive on m identical machines. This is the first lower bound for the general cost function specifically designed for the multiple machine case.

Theorem 5. Any randomized non-clairvoyant (possibly aware of g) algorithm on m identical machines has an unbounded competitive ratio when given speed less than 2 − 1/m − ε for any fixed ε > 0, when g is differentiable, non-decreasing, and convex.

Techniques: To show Theorem 1, we consider the well-known algorithm Weighted-Shortest-Elapsed-Time-First (WSETF) on a single machine, and first show that it is 2-speed O(1)-competitive for minimizing the fractional version of the general cost function objective. Then, with a small extra amount of speed augmentation, we convert WSETF's schedule into one that is (2+ε)-speed O(1)-competitive for the integral general cost function. This conversion is now a fairly standard technique, and will be further discussed in Section 2. This conversion was also used in [2] when analyzing HDF. One can think of the fractional objective as converting each job i to a set of p_i unit sized jobs of weight w_i/p_i. That is, the weight of the job is distributed among all unit pieces of the job.
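The conversion just described can be sketched in a few lines (the function name and instance are ours, not the paper's; integer job sizes are assumed for the illustration):

```python
# Hypothetical sketch of the fractional-instance conversion described above:
# a job of (integer) size p and weight w becomes p unit-sized jobs, each of
# weight w / p, so the job's weight is spread evenly over its unit pieces.

def to_fractional_instance(jobs):
    """jobs: list of (release, size, weight) with integer sizes."""
    unit_jobs = []
    for (r, p, w) in jobs:
        # each unit piece inherits an equal share of the job's weight
        unit_jobs.extend((r, 1, w / p) for _ in range(p))
    return unit_jobs

jobs = [(0, 4, 2.0), (1, 2, 3.0)]
pieces = to_fractional_instance(jobs)
print(len(pieces))                       # -> 6 unit pieces
print(sum(w for (_, _, w) in pieces))    # -> 5.0, total weight preserved
```

Note how both the number of pieces and their weights depend on the original processing times, which is exactly the information a non-clairvoyant algorithm lacks.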
Notice that the resulting weight of the unit sized jobs, as well as the number of them, depends on the job's original processing time. Thus, to analyze a non-clairvoyant algorithm for the fractional instance, one must consider the algorithm's decisions on the original instance and argue about the algorithm's cost on the fractional instance. This differs from the analysis of [2], where the clairvoyant algorithm HDF can assume full knowledge of the conversion. Due to this, in [2] they can argue directly about HDF's decisions for the fractional instance of the problem. Since a non-clairvoyant algorithm does not know the fractional instance, it
seems difficult to adapt the techniques of [2] when analyzing a non-clairvoyant algorithm.

If the instance consists of a set of unweighted jobs, WSETF always processes the job which has been processed the least. Let q_i^A(t) be the amount WSETF has processed job i by time t. When jobs have weights, WSETF processes the job i such that w_i/q_i^A(t) is maximized, where w_i is the weight in the integral instance. One can see that WSETF will not necessarily process the jobs with the highest weight at each time, which is what the algorithm HDF will do if all jobs are unit sized. Further, WSETF may round robin among multiple jobs of the same priority. For these reasons, our analysis of WSETF is substantially different from the analysis in [2], and relies crucially on a new lower bound we develop on the optimal solution. This lower bound holds for any objective that is differentiable, non-decreasing, and convex. Our lower bound gives a way to relate the final objective of the optimal solution to the volume of unsatisfied work the optimal solution has at each moment in time. We then bound the volume of unsatisfied jobs in the optimal schedule at each moment in time and relate this to WSETF's instantaneous increase in its objective function. We believe that our new lower bound will be useful in further analysis of scheduling problems, since it is versatile enough to be used for many scheduling objectives.

Other Related Work: For minimizing average flow time on a single machine, the non-clairvoyant algorithms Shortest Elapsed Time First (SETF) and Latest Arrival Processor Sharing (LAPS) are known to be scalable [22,8]. Their weighted versions, Weighted Shortest Elapsed Time First (WSETF) and Weighted Latest Arrival Processor Sharing (WLAPS), are scalable for average weighted flow time [8,6], and also for (weighted) l_k norms of flow time [8,7]. In [2], Im et al. showed that WLAPS is scalable for concave functions g.
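Returning to WSETF: a coarse discrete-time sketch of the rule described above (run the job maximizing w_i over its elapsed processing) can look as follows. This is our own illustrative discretization, not the paper's algorithm statement; a tiny epsilon stands in for the instantaneous processor sharing of the continuous-time rule, and the instance is made up.

```python
# Coarse discrete-time sketch of non-clairvoyant WSETF: in each unit step,
# run the job maximizing w_i / q_i(t), where q_i(t) is the work done on
# job i so far (illustrative only; eps avoids division by zero).

def wsetf_completions(jobs, eps=1e-9):
    """jobs: list of (release, processing, weight). Returns completion times."""
    done, elapsed, completion, t = 0, {}, {}, 0
    while done < len(jobs):
        for i, (r, p, w) in enumerate(jobs):
            if r == t:
                elapsed[i] = 0.0
        if elapsed:
            # the highest weight-to-elapsed ratio gets the machine this step
            i = max(elapsed, key=lambda j: jobs[j][2] / (elapsed[j] + eps))
            elapsed[i] += 1
            if elapsed[i] >= jobs[i][1]:
                completion[i] = t + 1
                del elapsed[i]
                done += 1
        t += 1
    return completion

print(wsetf_completions([(0, 3, 1.0), (0, 1, 1.0)]))
```

With equal weights the rule degenerates to round-robin over the least-processed jobs; note that, unlike HDF or SRPT, it never consults the unknown processing times.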
Im et al. also showed that no online randomized algorithm, even with any constant speed-up, can have a constant competitive ratio when each job i has its own cost function g_i and the goal is to minimize ∑_{i∈[n]} g_i(F_i). This more general problem was studied in the offline setting by Bansal and Pruhs [7]. They gave an O(log log nP)-approximation (without speed augmentation), where P is the ratio of the maximum to minimum processing time of a job. This is the best known approximation for minimizing average weighted flow time offline, and a central open question in scheduling theory is whether or not an O(1)-approximation exists for weighted flow time offline.

2 Preliminaries

The Fractional Objective: In this section we define the fractional general cost objective and introduce some notation. We will refer to the non-fractional general cost objective as integral. For a schedule, let p_i(t) denote the remaining processing time of job i at time t. Let β_i(p) be the latest time t such that p_i(t) = p, for any p where 0 ≤ p ≤ p_i.
The fractional objective penalizes jobs over time by charging in proportion to how much of the job remains to be processed. Formally, the fractional objective is defined as:

∑_{i∈[n]} ∫_{t=r_i}^{C_i} (w_i p_i(t) / p_i) g′(t − r_i) dt    (1)

Generally, when the fractional objective is considered, it is stated in the form (1). For our analysis it will be useful to note that this objective is equivalent to:

∑_{i∈[n]} ∫_{p=0}^{p_i} (w_i / p_i) g(β_i(p) − r_i) dp    (2)

As noted earlier, considering the fractional objective has proven to be quite useful for the analysis of algorithms in scheduling theory, because directly arguing about the fractional objective is usually easier from an analysis viewpoint. A schedule which optimizes the fractional objective can then be used to get a good schedule for the integral objective, as seen in the following theorems. In the first theorem (Theorem 6), the algorithm's fractional cost is compared against the optimal solution for the fractional objective. In the second theorem (Theorem 7), the algorithm's fractional cost is compared against the optimal solution for the integral instance.

Theorem 6 ([2]). If a (non-clairvoyant) algorithm A is s-speed c-competitive for minimizing the fractional general cost function, then there exists a (1+ε)s-speed ((1+ε)c/ε)-competitive (non-clairvoyant) algorithm for the integral general cost function objective for any 0 < ε ≤ 1.

Theorem 7 ([2]). If a (non-clairvoyant) algorithm A with s-speed has fractional cost at most a factor c larger than the optimal solution for the integral objective, then there exists a (1+ε)s-speed ((1+ε)c/ε)-competitive (non-clairvoyant) algorithm for the integral general cost function objective for any 0 < ε ≤ 1.

These two theorems follow easily from the analysis given in [2]. We note that the resulting algorithm that performs well for the integral objective is not necessarily the algorithm A. Interestingly, [2] shows that if A is HDF then the resulting algorithm is still HDF. However, if A is WSETF, the resulting integral algorithm need not be WSETF.

Notation: We now introduce some more notation that will be used throughout the paper. For a schedule B, let C_i^B be the completion time of job i.
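As a numerical sanity check (ours, not the paper's) that the two forms (1) and (2) of the fractional objective agree, consider a toy schedule with one job run continuously at unit speed from its release, and evaluate both integrals by a simple midpoint rule:

```python
# Numerical check that forms (1) and (2) of the fractional objective agree
# on a toy schedule: one job (release 0, size P, weight W) run continuously
# at unit speed, with cost function g(x) = x^2 (all numbers illustrative).
W, P = 2.0, 3.0
g  = lambda x: x ** 2
dg = lambda x: 2 * x               # g'
N  = 200000
dt = P / N

# Form (1): integral of (W/P) * p(t) * g'(t - r) over the job's lifetime,
# where the remaining processing time is p(t) = P - t.
form1 = sum((W / P) * (P - (k + 0.5) * dt) * dg((k + 0.5) * dt) * dt
            for k in range(N))

# Form (2): integral over remaining-work levels p of (W/P) * g(beta(p) - r),
# where beta(p) = P - p is the last time the remaining work equals p.
form2 = sum((W / P) * g(P - (k + 0.5) * dt) * dt for k in range(N))

print(round(form1, 4), round(form2, 4))   # both approach W * P^2 / 3 = 6.0
```

Both quadratures converge to the same value, matching the claimed equivalence on this simple schedule.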
Let p_i^B(t) denote the remaining processing time of job i at time t. Let q_i^B(t) = p_i − p_i^B(t) be the amount job i has been processed by time t. Let p_{i,j}^B(t) = (min{(w_j/w_i) p_i, p_j} − q_j^B(t))^+, where (·)^+ denotes max{·, 0}. Let p_{i,j} = min{(w_j/w_i) p_i, p_j} = p_{i,j}^B(r_j). If the schedule B is that produced by WSETF and t ∈ [r_i, C_i^B], then p_{i,j}^B(t) is exactly the amount of processing time WSETF will devote to job j during the interval [t, C_i^B]; in other words, the remaining time job i waits due to WSETF processing job j. Let Q^B(t) be the set of jobs released but unsatisfied by B at time t. Let
Z_i^B(t) = ∑_{j∈Q^B(t)} p_{i,j}^B(t). When the algorithm is the optimal solution (OPT) we set B to be O, and if the algorithm is WSETF we set B to be A. For example, Q^A(t) is the set of released and unsatisfied jobs for WSETF at time t. Finally, for a set of possibly overlapping time intervals I, let ||I|| denote the total length of their union.

3 Analysis tools

In this section we introduce some useful tools that we use for our analysis. First we present our novel lower bound on the optimal solution. This lower bound is the key to our analysis and the main technical contribution of the paper. The left-hand side of the inequality in the lemma involves an arbitrary function x(t) : R+ → R+ \ {0}, while the right-hand side is simply the fractional cost of the schedule in consideration. This lower bound is inspired by one presented in [20]. However, the lower bound given in [20] involves substantially different terms, and is only for the l_k-norms of flow time. Our proof is considerably different from [20], and perhaps simpler. Since this lower bound applies to any objective that fits into the general cost function framework, we believe that it will prove useful for a variety of scheduling problems. The assumption in the lemma that g is convex is crucial; the lemma is not true otherwise. The usefulness of this lemma will become apparent in the following two sections. We prove the lemma in Section 6, after we show its power.

Lemma 1. Let σ be a set of jobs on a single machine with speed s_B. Let B be any feasible schedule, and let B(σ) be the total weighted fractional cost of B with objective function g that is differentiable and convex (g′ is non-decreasing), with g(0) = 0. Let x(t) : R+ → R+ \ {0} be any function of t. Let p_{x,i}^B(t) = (min(x(t) w_i, p_i) − q_i^B(t))^+. Finally, let Z_x^B(t) = ∑_{i∈Q^B(t)} p_{x,i}^B(t). Then,

∫_t (s_B/x(t)) g(Z_x^B(t)/s_B) dt ≤ B(σ).

Next we show a property of WSETF that will be useful in relating the volume of work of unsatisfied jobs in WSETF's schedule to that of the optimal solution's schedule.
By using this lemma we can bound the volume of jobs in the optimal solution's schedule and then appeal to the lower bound shown in the previous lemma. This lemma is somewhat similar to one shown for the algorithm Shortest-Remaining-Processing-Time (SRPT) [26,9]. However, we are able to get a stronger version of this lemma for WSETF.

Lemma 2. Consider running WSETF using s-speed for some s ≥ 2 on m identical machines, and the optimal schedule at unit speed on m identical machines. For any job i ∈ Q^A(t) and time t, it is the case that Z_i^A(t) − Z_i^O(t) ≤ 0.

Proof. For the sake of contradiction, let t be the earliest time such that Z_i^A(t) − Z_i^O(t) > 0. Let j be a job where p_{i,j}^A(t) > p_{i,j}^O(t). Consider the interval I = [r_j, t].
Let I_j be the set of intervals where WSETF works on job j during I, and let Ī_j be the rest of the interval I. Knowing that p_{i,j}^A(t) > p_{i,j}^O(t), we have that ||I_j|| < (1/s)||I||. If this fact were not true, then q_j^A(t) = s||I_j|| ≥ ||I||; but since OPT has speed 1, q_j^O(t) ≤ ||I||, and therefore q_j^A(t) ≥ q_j^O(t), a contradiction of the definition of job j. Hence, we know that ||Ī_j|| ≥ (1 − 1/s)||I||. At each time during I, either WSETF is scheduling job j, or all m machines in WSETF's schedule are busy scheduling jobs which contribute to Z_i^A(t). Thus the total amount of work done by WSETF during I on jobs that contribute to Z_i^A(t) is at least q_j^A(t) + ms||Ī_j|| ≥ ms(1 − 1/s)||I|| = m(s−1)||I||. The total amount of work OPT can do on jobs that contribute to Z_i^O(t) is m||I||. Let S denote the set of jobs that arrive during I. The facts above imply that

Z_i^A(t) − Z_i^O(t) ≤ (Z_i^A(r_j) + ∑_{k∈S} p_{i,k} − m(s−1)||I||) − (Z_i^O(r_j) + ∑_{k∈S} p_{i,k} − m||I||)
= (Z_i^A(r_j) − m(s−1)||I||) − (Z_i^O(r_j) − m||I||)
≤ Z_i^A(r_j) − Z_i^O(r_j)    [s ≥ 2]
≤ 0    [t is the first time Z_i^A(t) − Z_i^O(t) > 0, and r_j < t].

4 Single machine

We now show WSETF is 2-speed O(1)-competitive on a single processor for the fractional objective. We then derive Theorem 1. In Section 5, we extend our analysis to bound the performance of WSETF on identical machines as well, when migration is allowed. Assume that WSETF is given a speed s ≥ 2. Notice that Z_i^A(t) always decreases at a rate of s for all jobs i ∈ Q^A(t) when t ∈ [r_i, C_i^A]. This is because Z_i^A(t) is exactly the amount of remaining processing WSETF will do before job i is completed, amongst jobs that have arrived by time t. Further, knowing that OPT has speed 1, we see that Z_i^O(t) decreases at a rate of at most 1 at any time t. By Lemma 2, we know that Z_i^A(r_i) − Z_i^O(r_i) ≤ 0. Using these facts, we derive for any time t ∈ [r_i, C_i^A],

Z_i^O(t) − Z_i^A(t) ≥ (s − 1)(t − r_i).

Therefore, Z_i^O(t) ≥ (s − 1)(t − r_i) for any t ∈ [r_i, C_i^A]. Let a(t) denote the job that WSETF works on at time t.
By the second definition (2), WSETF's fractional cost is

∫_t s (w_{a(t)}/p_{a(t)}) g(t − r_{a(t)}) dt ≤ ∫_t s (w_{a(t)}/p_{a(t)}) g(Z_{a(t)}^O(t)/(s−1)) dt ≤ (s/(s−1)) ∫_t (w_{a(t)}/p_{a(t)}) g(Z_{a(t)}^O(t)) dt.

The last inequality follows since g(·) is convex, g(0) = 0, and s − 1 ≥ 1. By applying Lemma 1 with x(t) = p_{a(t)}/w_{a(t)}, s_B = 1, and B being OPT's schedule, the last integral is at most OPT's fractional cost, and we have the following theorem.
Theorem 8. WSETF is s-speed (1 + 1/(s−1))-competitive for the fractional general cost function when s ≥ 2.

This theorem combined with Theorem 6 proves Theorem 1.

5 Multiple identical machines

Here we present the proof of Theorem 2. In the analysis of WSETF on a single machine, we compared the cost of WSETF's schedule for the fractional objective to the cost of the optimal solution for the fractional objective. In the multiple machines case, we will not compare WSETF to the optimal solution for the fractional objective, but rather to the cost of the optimal solution for the integral objective. We then invoke Theorem 7 to derive Theorem 2.

We first consider an obvious lower bound on the optimal solution for the integral objective. For each job i, the best the optimal solution can do is to process job i immediately upon its arrival using one of its m unit speed machines. We know that the total integral cost of the optimal solution is at least

∑_{i∈[n]} w_i g(p_i).    (3)

Similar to the single machine analysis, when a job is processed we charge the cost to the optimal solution. However, if a job i is processed at time t where t − r_i ≤ p_i, we charge to the integral lower bound on the optimal solution above. If t − r_i > p_i, then we will invoke the lower bound on the optimal solution shown in Lemma 1 and use the fact that an algorithm's fractional objective is always smaller than its integral objective.

Assume that WSETF is given speed s ≥ 3. If job i ∈ Q^A(t) is not processed by WSETF at time t, then there must exist at least m jobs in Q^A(t) processed instead by WSETF at this time. Hence, for all jobs i ∈ Q^A(t), the quantity p_i^A(t) + Z_i^A(t)/m decreases at a rate of at least s during [r_i, C_i^A]. In contrast, the quantity Z_i^O(t)/m decreases at a rate of at most 1, since OPT has m unit speed machines. Further, by Lemma 2, we know that Z_i^A(r_i) − Z_i^O(r_i) ≤ 0, and hence p_i^A(r_i) + (Z_i^A(r_i) − Z_i^O(r_i))/m ≤ p_i. Using these facts, we know for any job i and t ∈ [r_i, C_i^A] that p_i^A(t) + (Z_i^A(t) − Z_i^O(t))/m ≤ p_i − (s−1)(t − r_i). Notice that if t − r_i ≥ p_i, we have that p_i^A(t) + (Z_i^A(t) − Z_i^O(t))/m ≤ −(s−2)(t − r_i). Therefore, t − r_i ≤ Z_i^O(t)/(m(s−2)) when t − r_i ≥ p_i.
Let W(t) be the set of jobs that WSETF processes at time t. By definition, the value of WSETF's fractional objective is ∫_t ∑_{i∈W(t)} s (w_i/p_i) g(t − r_i) dt. We divide the set of jobs in W(t) into two sets. The first is the set of young jobs W_y(t), which are the jobs i ∈ W(t) where t − r_i ≤ p_i. The other set is
W_o(t) = W(t) \ W_y(t), which is the set of old jobs. Let OPT denote the optimal solution's integral cost. We see that WSETF's cost is at most the following.

∫_t ∑_{i∈W(t)} s (w_i/p_i) g(t − r_i) dt
  = ∫_t ∑_{i∈W_y(t)} s (w_i/p_i) g(t − r_i) dt + ∫_t ∑_{i∈W_o(t)} s (w_i/p_i) g(t − r_i) dt
  ≤ ∫_t ∑_{i∈W_y(t)} s (w_i/p_i) g(p_i) dt + ∫_t ∑_{i∈W_o(t)} s (w_i/p_i) g(Z_i^O(t)/(m(s−2))) dt
  ≤ ∑_{i∈[n]} w_i g(p_i) + ∫_t ∑_{i∈W_o(t)} s (w_i/p_i) g(Z_i^O(t)/(m(s−2))) dt
  ≤ OPT + (s/(s−2)) ∫_t ∑_{i∈W_o(t)} (w_i/p_i) g(Z_i^O(t)/m) dt    [by the lower bound (3) on OPT]

The third inequality holds since a job i can be in W_y(t) only if i is processed by WSETF at time t, and job i can be processed by at most p_i before it is completed. More precisely, if i is in W_y(t), then i is processed by s dt during time [t, t+dt). Hence, ∫_t [i ∈ W_y(t)] s dt ≤ p_i, where [i ∈ W_y(t)] denotes the 0-1 indicator variable such that [i ∈ W_y(t)] = 1 if and only if i ∈ W_y(t). The last inequality follows since g(·) is convex, g(0) = 0, and s − 2 ≥ 1.

We know that a single m-speed machine is always as powerful as m unit speed machines, because an m-speed machine can simulate m unit speed machines. Thus, we can assume OPT has a single m-speed machine. We apply Lemma 1 with x(t) = p_i/w_i for each i ∈ W_o(t), s_B = m, and B being OPT's schedule. Knowing that |W_o(t)| ≤ m, we conclude that ∫_t ∑_{a∈W_o(t)} (w_a/p_a) g(Z_a^O(t)/m) dt is at most the optimal solution's fractional cost. Knowing that any algorithm's fractional cost is at most its integral cost, we conclude that WSETF's fractional cost with s-speed is at most (2 + 2/(s−2)) times the integral cost of the optimal solution when s ≥ 3. Using Theorem 7, we derive Theorem 2.

6 Proof of the Main Lemma

In this section we prove Lemma 1.

Proof of Lemma 1: The intuition behind the lemma is that each instance of Z_x^B(t) is composed of several infinitesimal job slices. By integrating over how long these slices have left to live, we get an upper bound on Z_x^B(t). We then argue that the integration over each slice's time alive is actually the fractional cost of that slice according to the second definition of the fractional objective. Recall that β_i(p) denotes the latest
time t at which p_i(t) = p. For any time t, let

Λ_i(t) = (w_i/p_i) ∫_{p=0}^{p_i(t)} g′(β_i(p) − t) dp,

and let Λ(t) = ∑_{i∈Q^B(t)} Λ_i(t). The proof of the lemma proceeds as follows. We first show a lower bound on Λ(t) in terms of (s_B/x(t)) g(Z_x^B(t)/s_B). Then we show an upper bound on ∫_t Λ(t) dt in terms of the fractional cost of B's schedule. This strategy allows us to relate ∫_t (s_B/x(t)) g(Z_x^B(t)/s_B) dt and B's cost.

For the first part of the strategy, we prove that (s_B/x(t)) g(Z_x^B(t)/s_B) ≤ Λ(t) at all times t. Consider any job i ∈ Q^B(t) with p_{x,i}^B(t) > 0. Suppose p_i ≤ x(t) w_i. Then,

Λ_i(t) = (w_i/p_i) ∫_{p=0}^{p_i(t)} g′(β_i(p) − t) dp ≥ (1/x(t)) ∫_{p=p_i(t)−p_{x,i}^B(t)}^{p_i(t)} g′(β_i(p) − t) dp.    (4)

If p_i > x(t) w_i, then by the definition of p_{x,i}^B(t),

p_i(t)/p_{x,i}^B(t) ≥ (p_i(t) + q_i^B(t)) / (p_{x,i}^B(t) + q_i^B(t))    [since p_i(t) ≥ p_{x,i}^B(t)]
= p_i / ((min(x(t) w_i, p_i) − q_i^B(t))^+ + q_i^B(t))
= p_i / ((x(t) w_i − q_i^B(t)) + q_i^B(t))    [since p_{x,i}^B(t) > 0]
= p_i / (x(t) w_i).

In this case,

Λ_i(t) = (w_i/p_i) ∫_{p=0}^{p_i(t)} g′(β_i(p) − t) dp
≥ (w_i/p_i) (p_i(t)/p_{x,i}^B(t)) ∫_{p=p_i(t)−p_{x,i}^B(t)}^{p_i(t)} g′(β_i(p) − t) dp    [since g′ is non-decreasing and g is convex]
≥ (1/x(t)) ∫_{p=p_i(t)−p_{x,i}^B(t)}^{p_i(t)} g′(β_i(p) − t) dp    [since p_i(t)/p_{x,i}^B(t) ≥ p_i/(x(t) w_i)].

In either case, Λ_i(t) is lower bounded by the right-hand side of (4). By convexity of g, these lower bounds on Λ_i(t) are minimized if B completes the p_{x,i}^B(t) units of each job i as quickly as possible. Schedule B runs at speed s_B, so we have

Λ(t) ≥ (1/x(t)) ∫_{p=0}^{Z_x^B(t)} g′(p/s_B) dp = (s_B/x(t)) ∫_{p=0}^{Z_x^B(t)/s_B} g′(p) dp = (s_B/x(t)) g(Z_x^B(t)/s_B).
This proves the lower bound on Λ(t). Now we show an upper bound on ∫_t Λ(t) dt in terms of B's fractional cost: we show ∫_t Λ(t) dt ≤ B(σ). Fix a job i. We have

∫_t Λ_i(t) dt = ∫_t (w_i/p_i) ∫_{p=0}^{p_i(t)} g′(β_i(p) − t) dp dt = (w_i/p_i) ∫_{p=0}^{p_i} ∫_{t=r_i}^{β_i(p)} g′(β_i(p) − t) dt dp = (w_i/p_i) ∫_{p=0}^{p_i} g(β_i(p) − r_i) dp.

By summing over all jobs and using the definition (2) of the fractional objective, we have ∫_t Λ(t) dt ≤ B(σ). Together, the lower and upper bounds on ∫_t Λ(t) dt show us that

∫_t (s_B/x(t)) g(Z_x^B(t)/s_B) dt ≤ ∫_t Λ(t) dt ≤ B(σ),

which proves the lemma.

7 Lower bounds

We now present the proof of Theorem 3. This lower bound extends a lower bound given in [2]. In [2], it was shown that no oblivious algorithm can be O(1)-competitive with speed less than 2 − ε for the general cost function. However, they assumed that the cost function was possibly discontinuous and not convex. We show that their lower bound can be extended to the case where g is convex and continuous. This shows that WSETF is essentially the best oblivious algorithm one can hope for. In all the proofs that follow, we will consider a general cost function g that is continuous, non-decreasing, and convex. The function is also differentiable except at a single point. The function can be easily adapted so that it is differentiable over all points in R+.

Proof of Theorem 3: We appeal to Yao's Min-max Principle [2]. Let A be any deterministic online algorithm. Consider the cost function g and a large constant c such that g(F) = 2c(F − D) for F > D, and g(F) = 0 for 0 ≤ F ≤ D. It is easy to see that g is continuous, non-decreasing, and convex. The constant D is hidden from A, and is set to 1 with probability 1/(2c(n+1)) and to n+1 with probability 1 − 1/(2c(n+1)). Let E denote the event that D = 1. At time 0, one big job J_b of size n+1 is released. At each integer time 1 ≤ t ≤ n, one unit sized job J_t is released. Here n is assumed to be sufficiently large; that is, n > 2c/ε². Note that the event E has no effect on A's scheduling decisions, since A is ignorant of the cost function. Suppose the online algorithm A finishes the big job J_b by time n+2. Further, say the event E occurs; that is, D = 1.
Since a total volume of 2n+1 of jobs is released, and A can process at most (2−ε)(n+2) work during [0, n+2], A has at least 2n+1 − (2−ε)(n+2) ≥ ε(n+2) − 3 volume of unit sized jobs unfinished at time n+2. A then has total cost at least 2c(ε(n+2) − 3)²/2 > c(εn)²/2. Knowing that Pr[E] = 1/(2c(n+1)), A has an expected cost greater than Ω(n). Now suppose A did not finish J_b by time n+2. Conditioned on Ē, A has cost at least 2c. Hence A's expected cost is at least 2c(1 − 1/(2c(n+1))) > c. The inequality follows since n > 2c/ε².
We now consider the adversary's schedule. Conditioned on E (D = 1), the adversary completes each unit sized job within one unit of time and hence has a non-zero cost only for J_b. The total cost is 2c(n+1). Conditioned on Ē (D = n+1), the adversary schedules jobs in a first-in-first-out fashion, thereby having cost 0. Hence the adversary's expected cost is (1/(2c(n+1))) · 2c(n+1) = 1. Knowing that n is sufficiently larger than c, the claim follows, since A has cost greater than c in expectation.

Next we show a lower bound for any non-clairvoyant algorithm that knows g. In [2] it was shown that no algorithm can be O(1)-competitive for a general cost function with speed less than 7/6. However, the cost function g used in that lower bound was neither continuous nor convex. We show that no algorithm can have a bounded competitive ratio if it is given speed less than 2 > 7/6, even if the function is continuous and convex, when the algorithm is required to be non-clairvoyant.
The optimal solution processes J_1 partially before J_2 arrives and processes it until completion after job J_2 is completed. The largest flow time the optimal solution can have for J_1 is 20, so the optimal cost is upper bounded by 10w_1. The competitive ratio of A, ɛ′w_2 / (10w_1), can be made arbitrarily large by setting w_2 to be much larger than w_1. Now consider the case where A works on J_2 for 10(√2 − 1) units by time t = 10. In this case, the adversary sets the processing time of job J_2 to 10(√2 − 1). Therefore, A completes J_2 by time t = 10. However, A cannot complete J_1 with a flow time of at most 10 units if given a speed of at most √2 − ɛ. Hence A incurs a cost of at least ɛw_1 towards the flow time of J_1. It is easy to verify that for this input, the optimal solution first schedules J_1 until its completion and then processes job J_2 to completion. Hence, the optimal solution completes both jobs with a flow time of at most 10 units, incurring a cost of 0. Again, the competitive ratio is unbounded.
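As a sanity check, the arithmetic behind the two cases above can be verified numerically. The short script below is purely illustrative and not part of the original analysis; it assumes the constants of the construction above, namely the release time and work threshold 10(√2 − 1) and speed s = √2 − ɛ.

```python
import math

SQRT2 = math.sqrt(2)
T = 10 * (SQRT2 - 1)  # release time of J_2; also the work threshold in the proof

def case1_flow_lower_bound(s: float) -> float:
    """Lower bound on J_2's flow time when A did < T work on J_2 by time 10.

    The adversary sets p_2 = 10, so more than 10 - T units remain at time 10
    and J_2 completes no earlier than 10 + (10 - T)/s; subtracting the release
    time T gives the bound returned below.
    """
    return (10 - T) + (10 - T) / s

def check(eps: float) -> None:
    s = SQRT2 - eps  # algorithm's speed
    # Case 1: J_2's flow time exceeds the threshold 10 of g by at least
    # eps' = 10*(sqrt(2) - 1)*eps / (sqrt(2) - eps), so A pays at least eps'*w_2.
    eps_prime = T * eps / (SQRT2 - eps)
    assert case1_flow_lower_bound(s) >= 10 + eps_prime - 1e-9
    # Case 2: A spends T units of work on J_2 by time 10, so J_1 receives at
    # most 10*s - T = 10*(1 - eps) < 10 units and cannot finish by time 10.
    assert math.isclose(10 * s - T, 10 * (1 - eps))
    assert 10 * s - T < 10
    # Adversary in case 2: J_1 runs in [0, 10] (flow 10), then J_2 completes
    # at time 10 + T, giving flow (10 + T) - T = 10; both flows stay at most
    # 10, so the adversary pays 0.
    assert (10 + T) - T <= 10 + 1e-9

for eps in (0.01, 0.1, 0.4):
    check(eps)
```

At s = √2 exactly, the case-1 bound collapses to flow time exactly 10, i.e., zero cost, which is why the argument only rules out speeds strictly below √2.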
Finally, we show a lower bound for any non-clairvoyant algorithm that knows g on m identical machines. We show that no algorithm can have a bounded competitive ratio when given speed less than 2 − 1/m. Previously, the only lower bounds for the general cost function on identical machines were those carried over from the single machine setting.

Proof of [Theorem 5]: We use Yao's min-max principle. Let A be any non-clairvoyant deterministic online algorithm on m parallel machines with speed s = 2 − 1/m − ɛ, for any 0 < ɛ ≤ 1 − 1/m. Let L > 1 be a parameter, and take m > 1/ɛ. Let the cost function g(F) be defined as follows: g(F) = F − L for F > L and g(F) = 0 otherwise. It is easy to verify that g is continuous, non-decreasing, and convex. At time t = 0, (m − 1)L + 1 jobs are released into the system, out of which (m − 1)L jobs have unit processing time and one job has processing time L. The adversary chooses the job with processing time L uniformly at random amongst all the jobs. Consider the time t = (L(m − 1) + 1)/(sm). At time t, there exists a job j that has been processed to the extent of at most 1 unit by A, since the most work A can do is smt = L(m − 1) + 1, which is the total number of jobs. With probability 1/(L(m − 1) + 1), j has a processing time of L units. In the event that j has processing time L, the earliest A can complete j is t + (L − 1)/s = (L(m − 1) + 1)/(sm) + (L − 1)/s > L when L is sufficiently large and s ≤ 2 − 1/m − ɛ (note that m > 1/ɛ). In this case, j has a flow time greater than L time units. Therefore, in expectation A incurs a positive cost. Let us now look at the adversary's schedule. Since the adversary knows the processing times of the jobs, it processes the job of length L on a dedicated machine. The remaining unit length jobs are processed on the other machines. The adversary completes all the jobs by time L and hence pays a cost of 0. Therefore, the expected competitive ratio of the online algorithm A is unbounded.

References

1. Anand, S., Garg, N., Kumar, A.: Resource augmentation for weighted flow-time explained by dual fitting. In: SODA (2012)
2.
Avrahami, N., Azar, Y.: Minimizing total flow time and total completion time with immediate dispatching. In: SPAA '03: Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures (2003)
3. Awerbuch, B., Azar, Y., Leonardi, S., Regev, O.: Minimizing the flow time without migration. SIAM J. Comput. 31(5) (2002)
4. Azar, Y., Epstein, L., Richter, Y., Woeginger, G.J.: All-norm approximation algorithms. J. Algorithms 52(2) (2004)
5. Bansal, N., Chan, H.L.: Weighted flow time does not admit O(1)-competitive algorithms. In: SODA (2009)
6. Bansal, N., Krishnaswamy, R., Nagarajan, V.: Better scalable algorithms for broadcast scheduling. In: ICALP (1) (2010)
7. Bansal, N., Pruhs, K.: The geometry of scheduling. In: IEEE Symposium on the Foundations of Computer Science (2010)
8. Bansal, N., Pruhs, K.: Server scheduling to balance priorities, fairness, and average quality of service. SIAM J. Comput. 39(7) (2010)
9. Becchetti, L., Leonardi, S.: Nonclairvoyant scheduling to minimize the total flow time on single and parallel machines. J. ACM 51(4) (2004)
10. Becchetti, L., Leonardi, S., Marchetti-Spaccamela, A., Pruhs, K.: Online weighted flow time and deadline scheduling. Journal of Discrete Algorithms 4(3) (2006)
11. Bender, M.A., Chakrabarti, S., Muthukrishnan, S.: Flow and stretch metrics for scheduling continuous job streams. In: SODA (1998)
12. Borodin, A., El-Yaniv, R.: On randomization in online computation. In: IEEE Conference on Computational Complexity (1997)
13. Bussema, C., Torng, E.: Greedy multiprocessor server scheduling. Oper. Res. Lett. 34(4) (2006)
14. Chekuri, C., Goel, A., Khanna, S., Kumar, A.: Multi-processor scheduling to minimize flow time with epsilon resource augmentation. In: STOC (2004)
15. Chekuri, C., Im, S., Moseley, B.: Online scheduling to minimize maximum response time and maximum delay factor. Theory of Computing 8(1) (2012)
16. Chekuri, C., Khanna, S., Zhu, A.: Algorithms for minimizing weighted flow time. In: STOC (2001)
17. Edmonds, J., Im, S., Moseley, B.: Online scalable scheduling for the l_k-norms of flow time without conservation of work. In: ACM-SIAM Symposium on Discrete Algorithms (2011)
18. Edmonds, J., Pruhs, K.: Scalably scheduling processes with arbitrary speedup curves. In: ACM-SIAM Symposium on Discrete Algorithms (2009)
19. Fox, K., Moseley, B.: Online scheduling on identical machines using SRPT. In: SODA (2011)
20. Im, S., Moseley, B.: An online scalable algorithm for minimizing l_k-norms of weighted flow time on unrelated machines. In: ACM-SIAM Symposium on Discrete Algorithms (2011)
21. Im, S., Moseley, B., Pruhs, K.: Online scheduling with general cost functions. In: SODA (2012)
22. Kalyanasundaram, B., Pruhs, K.: Speed is as powerful as clairvoyance. Journal of the ACM 47(4) (2000)
23.
Kumar, V.S.A., Marathe, M.V., Parthasarathy, S., Srinivasan, A.: A unified approach to scheduling on unrelated parallel machines. J. ACM 56(5) (2009)
24. Leonardi, S., Raz, D.: Approximating total flow time on parallel machines. J. Comput. Syst. Sci. 73(6) (2007)
25. Phillips, C.A., Stein, C., Torng, E., Wein, J.: Optimal time-critical scheduling via resource augmentation. Algorithmica 32(2) (2002)
26. Pruhs, K., Sgall, J., Torng, E.: Handbook of Scheduling: Algorithms, Models, and Performance Analysis, chap. Online Scheduling (2004)
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More information