Adaptivity and Approximation for Stochastic Packing Problems

Brian C. Dean    Michel X. Goemans    Jan Vondrák

Abstract

We study stochastic variants of Packing Integer Programs (PIP), the problems of finding a maximum-value 0/1 vector x satisfying Ax ≤ b, with A and b nonnegative. Many combinatorial problems belong to this broad class, including the knapsack problem, maximum clique, stable set, matching, hypergraph matching (a.k.a. set packing), b-matching, and others. PIP can also be seen as a multidimensional knapsack problem where we wish to pack a maximum-value collection of items with vector-valued sizes. In our stochastic setting, the vector-valued size of each item is known to us a priori only as a probability distribution, and the size of an item is instantiated once we commit to including the item in our solution. Following the framework of [3], we consider both adaptive and non-adaptive policies for solving such problems, adaptive policies having the flexibility of being able to make decisions based on the instantiated sizes of items already included in the solution. We investigate the adaptivity gap for these problems: the maximum ratio between the expected values achieved by optimal adaptive and non-adaptive policies. We show tight bounds on the adaptivity gap for set packing and b-matching, and we also show how to find efficiently non-adaptive policies approximating the adaptive optimum. For instance, we can approximate the adaptive optimum for stochastic set packing to within O(d^{1/2}), which is not only optimal with respect to the adaptivity gap, but is also the best known approximation factor in the deterministic case. It is known that there is no polynomial-time d^{1/2−ε}-approximation for set packing, unless NP = ZPP. Similarly, for b-matching, we obtain algorithmically a tight bound on the adaptivity gap of O(λ), where λ satisfies Σ_j 1/λ^{b_j+1} = 1. For general Stochastic Packing, we prove that a simple greedy algorithm provides an O(d)-approximation to the adaptive optimum. For A ∈ [0,1]^{d×n}, we provide an O(λ)-approximation where Σ_j 1/λ^{b_j} = 1. (For b = (B, B, ..., B), we get λ = d^{1/B}.)
We also improve the hardness results for deterministic PIP: in the general case, we prove that a polynomial-time d^{1−ε}-approximation algorithm would imply NP = ZPP. In the special case when A ∈ [0,1]^{d×n} and b = (B, B, ..., B), we show that a d^{1/B−ε}-approximation would imply NP = ZPP. Finally, we prove that it is PSPACE-hard to find the optimal adaptive policy for Stochastic Packing in any fixed dimension d ≥ 2.

(Research supported in part by NSF contracts ITR and CCR. Brian C. Dean: M.I.T., CSAIL, Cambridge, MA, bdean@mit.edu. Michel X. Goemans: M.I.T., Dept. of Math. and CSAIL, Cambridge, MA, goemans@math.mit.edu. Jan Vondrák: M.I.T., Dept. of Math., Cambridge, MA, vondrak@math.mit.edu.)

1 Stochastic Packing

We consider a multidimensional generalization of the Stochastic Knapsack problem [3] where, instead of a scalar size, each item has a vector size in R^d, and a feasible solution is a set of items such that the total size is bounded by a given capacity in each component. This problem can also be seen as the stochastic version of a Packing Integer Program (PIP), defined in [10]. A Packing Integer Program is a combinatorial problem in a very general form, involving the computation of a solution x ∈ {0,1}^n satisfying packing constraints of the form Ax ≤ b, where A is nonnegative. This encapsulates many combinatorial problems such as hypergraph matching (a.k.a. set packing), b-matching, disjoint paths in graphs, maximum clique and stable set. In general, Packing Integer Programs are NP-hard to solve or even to approximate well. We mention the known hardness results in Section 1.1. Our stochastic generalization follows the philosophy of [3], where items have independent random sizes which are determined and revealed to our algorithm only after an item is chosen to be included in the solution. Before the algorithm decides to insert an item, it only has some information about the probability distribution of its size. In the PIP problem, the size of an item is a column of the matrix A, which we now consider to be a random vector independent of the other columns of A. Once an item is chosen, its size vector is fixed, and the item cannot be removed from the solution anymore.
An algorithm whose decisions depend on the observed size vectors is called adaptive; an algorithm which chooses an entire sequence of items in advance is non-adaptive.

Definition 1.1. (PIP) Given a matrix A ∈ R_+^{d×n} and vectors b ∈ R_+^d, v ∈ R_+^n, a Packing Integer Program (PIP) is the problem of maximizing v·x subject to Ax ≤ b and x ∈ {0,1}^n.

Definition 1.2. (Stochastic Packing) Stochastic Packing (SP) is a stochastic variant of a PIP where A is a random matrix whose columns are independent

random vectors, denoted S(1), ..., S(n). A feasible solution is a set of items F such that Σ_{i∈F} S(i) ≤ b. The value of S(i) is instantiated and fixed once we include item i in F. Once this decision is made, the item cannot be removed. Whenever the condition Σ_{i∈F} S(i) ≤ b is violated, no further items can be inserted, and no value is received for the overflowing item. We consider 4 classes of Stochastic Packing problems:

(1) General Stochastic Packing, where no restrictions are placed on item sizes or capacity; by scaling, we can assume that b = (1, 1, ..., 1).
(2) Restricted Stochastic Packing, where the S(i) have values in [0,1]^d and b ∈ R^d, b_j ≥ 1.
(3) Stochastic Set Packing, where the S(i) have values in {0,1}^d and b = (1, 1, ..., 1).
(4) Stochastic b-matching, where the S(i) have values in {0,1}^d and b ∈ Z^d, b_j ≥ 1.

Definition 1.3. An adaptive policy for a Stochastic Packing problem is a function P : 2^[n] × R_+^d → [n]. The interpretation of P is that, given a configuration (A, b) where A represents the items still available and b the remaining capacity, P(A, b) determines which item should be chosen next among the items in A. A non-adaptive policy is a fixed ordering of items to be inserted. For an instance of Stochastic Packing, let ADAPT denote the maximum possible expected value achieved by an adaptive policy, and NONADAPT the maximum possible expected value achieved by a non-adaptive policy.

Given a Stochastic Packing problem, we can ask several questions. The first one is: what is the optimum adaptive policy? Can we find or characterize the optimum adaptive policy? Next, we can ask what the benefit of adaptivity is, and evaluate the adaptivity gap: the ratio between the expected values of optimal adaptive and non-adaptive policies. Note that such a question is independent of any complexity-theoretic assumptions; it refers merely to the existence of policies, which may not be computable efficiently. We can also try to find algorithmically a good non-adaptive policy, which approximates the optimal adaptive policy within some factor.
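The notion of a non-adaptive policy can be made concrete in code. The sketch below is our own illustration (the function names and instance encoding are not from the paper): it estimates by Monte Carlo simulation the expected value collected by a fixed ordering of items, stopping at the first overflowing item, which receives no value.

```python
import random

def simulate_nonadaptive(order, sample_size, values, b, trials=1000):
    """Monte Carlo estimate of the expected value collected by the
    non-adaptive policy that inserts items in the fixed order `order`.
    `sample_size(i)` draws one instantiation of the size vector S(i)."""
    d = len(b)
    total = 0.0
    for _ in range(trials):
        remaining = list(b)
        for i in order:
            s = sample_size(i)  # one instantiation of S(i)
            if any(s[j] > remaining[j] for j in range(d)):
                break  # overflow: no value for this item, process stops
            for j in range(d):
                remaining[j] -= s[j]
            total += values[i]
    return total / trials

# Toy example: two items with deterministic unit sizes in separate
# components; both always fit, so the expected value is exactly 2.
sizes = {0: [1, 0], 1: [0, 1]}
est = simulate_nonadaptive([0, 1], lambda i: sizes[i], [1.0, 1.0], [1, 1])
```

An adaptive policy would instead choose each next item as a function of the remaining capacity vector, which is exactly the function P of Definition 1.3.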
In the single-dimensional case, we proved that the adaptivity gap is bounded by a constant factor, and the non-adaptive solution can be achieved by a simple greedy algorithm [3].

1.1 Known results. Stochastic Packing in this form has not been considered before. We build on the previous work on Packing Integer Programs, which was initiated by Raghavan and Thompson [10], [9]. They proposed an LP approach combined with randomized rounding, which yields an O(d)-approximation for the general case [10]. For the case of Set Packing, their methods yield an O(√d)-approximation. For general b parametrized by B = min_j b_j, there is an O(d^{1/B})-approximation for A ∈ [0,1]^{d×n} and an O(d^{1/(B+1)})-approximation for A ∈ {0,1}^{d×n}. The greedy algorithm gives a √d-approximation for Set Packing [5]. This is complemented by the hardness results of Chekuri and Khanna [1], who show that a d^{1/(B+1)−ε}-approximation for the b-matching problem with b = (B, B, ..., B) would imply NP = ZPP (using Håstad's result on the inapproximability of Max Clique [6]). For A ∈ [0,1]^{d×n} and real B, they get hardness of d^{1/(⌊B⌋+1)−ε}-approximation; this still leaves a gap between O(d^{1/B}) and d^{1/(B+1)−ε} for B integer, in particular a gap between d^{1/2−ε} and O(d) in the general case. The analysis of the randomized rounding technique has been refined by Srinivasan [11], who presents stronger bounds; however, the approximation factors are not improved in general (and this is essentially impossible due to [1]).

1.2 Our results. We prove bounds on the adaptivity gap and we also present algorithms to find a good non-adaptive solution. Our results are summarized in the table below. In the table, GP = General Packing, SP = Set Packing, RP = Restricted Packing, BM = b-matching, and previously known results are within square brackets.

        Determ.            Inapprox-          Adaptivity        Stochastic
        approx.            imability          gap               approx.
  GP    [O(d)]             d^{1−ε}            Ω(√d)             O(d)
  RP    [O(d^{1/B})]       d^{1/B−ε}          Ω(d^{1/(B+1)})    O(d^{1/B})
  SP    [O(√d)]            [d^{1/2−ε}]        Θ(√d)             O(√d)
  BM    [O(d^{1/(B+1)})]   [d^{1/(B+1)−ε}]    Θ(d^{1/(B+1)})    O(d^{1/(B+1)})

It turns out that the adaptivity gap for Stochastic Set Packing can be Ω(√d), and in the case of b-matching it can be Ω(λ), where λ is the solution of Σ_j 1/λ^{b_j+1} = 1. For b = (B, B, ..., B), we get λ = d^{1/(B+1)}. (It is quite conspicuous how these bounds match the inapproximability of the deterministic problems, which is seemingly an unrelated notion!) These instances are described in Section 2. On the positive side, we prove in Section 4 that a natural extension of the greedy algorithm of [3] finds a non-adaptive solution of expected value ADAPT/O(d) for the general case. For Stochastic Set Packing we can achieve non-adaptively expected value ADAPT/O(√d)

by an LP approach with randomized rounding. The LP, which is described in Section 3, provides a non-trivial upper bound on the expected value achieved by any adaptive strategy, and our randomized rounding strategy is described in Section 5. More generally, for stochastic b-matching we can achieve non-adaptively ADAPT/O(λ), where Σ_j 1/λ^{b_j+1} = 1. For Restricted Stochastic Packing we get ADAPT/O(λ), where Σ_j 1/λ^{b_j} = 1 (Section 7). Note that for b = (B, B, ..., B), we get λ = d^{1/(B+1)} for Stochastic b-matching and λ = d^{1/B} for Restricted Stochastic Packing, i.e. the best approximation factors known in the deterministic case. In Section 8, we improve the hardness results for deterministic PIP: we prove that a polynomial-time d^{1−ε}-approximation algorithm would imply NP = ZPP, so our greedy algorithm is essentially optimal even in the deterministic case. We also improve the hardness result to d^{1/B−ε} in the case when A ∈ [0,1]^{d×n}, b = (B, B, ..., B), B ≥ 2 integer. All our approximation factors match the best results known in the deterministic case, and the hardness results imply that these are essentially optimal. For Set Packing and b-matching, we also match our lower bounds on the adaptivity gap. It is quite surprising that we can approximate the optimal adaptive policy to within O(λ) efficiently, while this can be the actual gap between the adaptive and non-adaptive policies; and unless NP = ZPP, we cannot approximate even the best non-adaptive solution better than this! Our results only assume that we know the expected size of each item (or more precisely the truncated mean size, see Definition 3.1, and the probability that an item alone fits) rather than the entire probability distribution. With such limited information, our results are also tight in the following sense: for Stochastic Packing and Stochastic Set Packing, there exist instances with the same mean item sizes for which the optimum adaptive values differ by a factor of Θ(d) or Θ(√d), respectively. These instances are described in Section 6.
2 The benefit of adaptivity

We present examples which demonstrate that, for Stochastic Packing, adaptivity can bring a substantial advantage (as opposed to Stochastic Knapsack, where the benefit of adaptivity is only a constant factor [3]). Our examples are simple instances of Set Packing and b-matching (A ∈ {0,1}^{d×n}).

Lemma 2.1. There are instances of Stochastic Set Packing such that ADAPT ≥ (√d/2)·NONADAPT.

Proof. Define items of type i = 1, 2, ..., d, where items of type i have size vector S(i) = Be(p)·e_i, i.e. a random Bernoulli variable Be(p) in the i-th component and 0 in the remaining components (p > 0 to be chosen later). We have an unlimited supply of items of each type. All items have unit value and we assume unit capacity b = (1, 1, ..., 1). An adaptive policy can insert items of each type until a size of 1 is attained in the respective component; the expected number of items of each type inserted is 1/p. Therefore ADAPT ≥ d/p. On the other hand, consider a set of items F. We estimate the probability that F is a feasible solution. For every component i, let k_i denote the number of items of type i in F. We have

  Pr[S_i(F) ≤ 1] = (1−p)^{k_i} + k_i p (1−p)^{k_i−1} = (1 + p(k_i−1)) (1−p)^{k_i−1}

and, using 1 + p(k_i−1) ≤ (1+p)^{k_i−1},

  Pr[S(F) ≤ 1] ≤ Π_{i=1}^d (1+p)^{k_i−1} (1−p)^{k_i−1} = Π_{i=1}^d (1−p²)^{k_i−1} ≤ e^{−p² Σ_{i=1}^d (k_i−1)} = e^{−p²(|F|−d)}.

Thus the probability that a set of items fits decreases exponentially with its size. For any non-adaptive policy, the probability that the first k items in the sequence are inserted successfully is at most e^{−p²(k−d)}, and we can estimate the expected value achieved:

  NONADAPT = Σ_{k≥1} Pr[k items fit] ≤ d + 1 + Σ_{k=d+1}^∞ e^{−p²(k−d)} ≤ d + 1 + 1/p².

We choose p = d^{−1/2}, for which we get ADAPT ≥ d^{3/2} and NONADAPT ≤ 2d + 1.

We generalize this example to an arbitrary integer capacity vector b.

Lemma 2.2. There are instances of Stochastic b-matching such that ADAPT ≥ (λ/4)·NONADAPT, where λ is the solution of Σ_{i=1}^d 1/λ^{b_i+1} = 1.

Proof. Let p satisfy Σ_{i=1}^d p^{b_i+1} = 1. Consider the same set of items that we used in the previous proof,

only the values are modified as v_i = p^{b_i+1}/(b_i+1). The same adaptive strategy will now insert items of each type i until it accumulates size b_i in the i-th component. The expected number of items of type i inserted will be b_i/p, and therefore

  ADAPT ≥ Σ_{i=1}^d v_i b_i/p = (1/p) Σ_{i=1}^d (b_i/(b_i+1)) p^{b_i+1} ≥ 1/(2p).

Consider a set of items F. We divide the items of each type into blocks of size b_i+1 (for type i). Suppose that the number of blocks of type i is k_i. We estimate the probability that F is a feasible solution; we use the fact that each block alone has a probability of overflow p^{b_i+1}, and these events are independent:

  Pr[S_i(F) ≤ b_i] ≤ (1 − p^{b_i+1})^{k_i} ≤ e^{−k_i p^{b_i+1}},
  Pr[S(F) ≤ b] ≤ e^{−Σ_i k_i p^{b_i+1}}.

Now we express this probability as a function of the value of F. We defined the blocks in such a way that a block of type i gets value p^{b_i+1}, and Σ_i k_i p^{b_i+1} is the value of all the blocks. There might be items of value less than p^{b_i+1} of type i which are not assigned to any block. All these together can have value at most 1 (by the definition of p). So the probability that the non-adaptive policy fits a set of value at least w is Ψ(w) ≤ min{e^{1−w}, 1}. Now we can estimate the expected value achieved by any non-adaptive policy:

  NONADAPT ≤ ∫_0^∞ Ψ(w) dw ≤ 1 + ∫_1^∞ e^{1−w} dw = 2.

It follows that the adaptivity gap is at least 1/(4p) = λ/4, where λ satisfies Σ_{i=1}^d 1/λ^{b_i+1} = 1.

As a special case, for b = (B, B, ..., B), the lemma holds with λ = d^{1/(B+1)}, which is the adaptivity gap for stochastic b-matching that we claimed. In Section 5, we prove that for Set Packing and b-matching these bounds are not only tight, but they can actually be achieved by a polynomial-time non-adaptive policy. On the other hand, the best lower bound we have on the adaptivity gap in the general case is Ω(√d) (from Set Packing), and we do not know whether this is the largest possible gap. Our best upper bound is O(d), as implied by the greedy approximation algorithm (Section 4).

3 Bounding an adaptive policy

In this section, we introduce a linear program which allows us to upper bound the expected value of any adaptive policy. This is based on the same tools that we used in [3] for the 1-dimensional case.
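Before turning to the LP, the gap instance of Lemma 2.1 above is easy to explore numerically. The Monte Carlo sketch below is our own illustration (the instance parameters follow the proof; the code is not from the paper): it simulates the adaptive policy that keeps inserting items of type i until component i reaches size 1, collecting roughly d/p = d^{3/2} value, whereas a non-adaptive policy gets O(d).

```python
import random

def adaptive_value(d, p, trials=2000):
    """Simulate the adaptive policy of Lemma 2.1: for each component i,
    insert unit-value items of type i until one instantiates to size 1
    there.  Returns the average total value over `trials` runs."""
    total = 0
    for _ in range(trials):
        for _ in range(d):
            # geometric number of Bernoulli(p) trials up to and including
            # the first success; every item inserted here fits
            while True:
                total += 1
                if random.random() < p:
                    break
    return total / trials

random.seed(0)
d = 100
p = d ** -0.5  # p = 1/sqrt(d) as in the proof
est_adaptive = adaptive_value(d, p)
# E[adaptive value] = d/p = d^{3/2} = 1000 for d = 100, while a fixed
# sequence of items achieves O(d); the observed ratio grows like sqrt(d).
```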
This LP, together with randomized rounding, will be used in Section 5 to design good non-adaptive policies.

Definition 3.1. For an item i with random size vector S(i), we define the truncated mean size μ(i) by components as μ_j(i) = E[min{S_j(i), 1}]. For a set of items A, we write μ(A) = Σ_{i∈A} μ(i).

The following lemma can be proved using the same martingale argument that we used in [3]. Here, we show an alternative elementary proof.

Lemma 3.1. For any adaptive policy, let A denote the (random) set of items that the policy attempts to insert. Then for each component j, E[μ_j(A)] ≤ b_j + 1, where b is the capacity vector.

Proof. Consider component j. Denote by M(c) the maximum expected μ_j(A) for a set A that an adaptive policy can possibly try to insert within capacity c in the j-th component. (For now, all other components can be ignored.) We prove, by induction on the number of available items, that M(c) ≤ c + 1. Suppose that an optimal adaptive policy, given remaining capacity c, inserts item i. Denote by fit(i, c) the characteristic function of the event that item i fits (S_j(i) ≤ c), and by s(i) its truncated size (s(i) = min{S_j(i), 1}). We have

  M(c) ≤ μ_j(i) + E[fit(i, c) M(c − s(i))] = E[s(i) + fit(i, c) M(c − s(i))]

and, using the induction hypothesis,

  M(c) ≤ E[s(i) + fit(i, c)(c − s(i) + 1)] = E[fit(i, c)(c + 1) + (1 − fit(i, c)) s(i)] ≤ c + 1,

completing the proof (in the last step, s(i) ≤ 1 ≤ c + 1).

We can bound the value achieved by any adaptive policy using a linear program. Even though an adaptive policy can make decisions based on the observed sizes of items, the total probability that an item is inserted by the policy is determined beforehand: if we think of a policy in terms of a decision tree, this total probability is obtained by averaging over all the branches of the decision tree where item i is inserted, weighted by the probabilities of executing those branches (which are determined by the policy and the distributions of item sizes). We do not actually need to write out this probability explicitly in terms of the policy. Just denote by x_i the total probability that the policy tries to insert item i.

Definition 3.2. We define the effective value of each item as w_i = v_i · Pr[item i alone fits].

Conditioned on item i being inserted, the expected value received for it is at most w_i. Therefore, the expected value achieved by the policy is at most Σ_i x_i w_i. The expected size inserted is E[μ(A)] = Σ_i x_i μ(i). We know that for any adaptive policy this is bounded by b_j + 1 in the j-th component, so we can write the following LP:

  V = max { Σ_i x_i w_i : Σ_i x_i μ_j(i) ≤ b_j + 1 for all j, 0 ≤ x_i ≤ 1 }.

In this extended abstract, we will be using this LP only for our special cases, in which an item always fits when placed alone, i.e. w_i = v_i. Note the similarity between this LP and the usual linear relaxation of the deterministic packing problem. The only difference is that we have b_j + 1 instead of b_j on the right-hand side; and yet this LP bounds the performance of any adaptive policy, which, as we have seen in Section 2, is a much more powerful paradigm in general. We will put this linear program to use in Section 5. We summarize:

Lemma 3.2. ADAPT ≤ V.

4 The greedy algorithm

A straightforward generalization of the greedy algorithm from [3] gives an O(d)-approximation algorithm for General Stochastic Packing. Let's go briefly over the main points of the analysis. Remember that, in the general case, we can assume by scaling that b = (1, 1, ..., 1). Then a natural measure of multidimensional size is the ℓ₁ norm of the mean size vector:

  ‖μ(A)‖₁ = Σ_{j=1}^d μ_j(A).

The reason to use the ℓ₁ norm here is that it bounds the probability that a set of items overflows. Also, the ℓ₁ norm is easy to work with, because it is linear and additive for collections of items.

Lemma 4.1. Pr[S(A) ≰ 1] ≤ ‖μ(A)‖₁.

Proof. For each component, Markov's inequality gives us

  Pr[S_j(A) ≥ 1] = Pr[min{S_j(A), 1} ≥ 1] ≤ E[min{S_j(A), 1}] ≤ μ_j(A),

and by the union bound, Pr[S(A) ≰ 1] ≤ Σ_{j=1}^d μ_j(A) = ‖μ(A)‖₁.

We set a threshold σ ∈ (0, 1), and we define heavy items to be those with ‖μ(i)‖₁ > σ and light items those with ‖μ(i)‖₁ ≤ σ.

The greedy algorithm. Take the more profitable of the following:

• A single item, achieving m₁ = max_i v_i · Pr[S(i) ≤ 1].
• A sequence of light items, in the order of decreasing v_i/‖μ(i)‖₁. This achieves expected value at least m_G = Σ_{k=1}^{n'} v_k (1 − M_k), where M_k = Σ_{i=1}^k ‖μ(i)‖₁ and n' = max{k : M_k < 1}.
We employ the following two lemmas from [3], in which we only replace μ by ‖μ‖₁ (which works thanks to Lemma 4.1). As in [3], we set σ = 1/3.

Lemma 4.2. For σ = 1/3, the expected value an adaptive policy gets for heavy items is E[v(H)] ≤ E[|H|]·m₁ ≤ 3E[‖μ(H)‖₁]·m₁, where H is the set of heavy items the policy attempts to insert.

Lemma 4.3. For σ = 1/3, the expected value an adaptive policy gets for light items is E[v(L)] ≤ (1 + 3E[‖μ(L)‖₁])·m_G.

We observe that for the random set A that an adaptive policy tries to insert, Lemma 3.1 implies

  E[‖μ(A)‖₁] = Σ_{j=1}^d E[μ_j(A)] ≤ 2d.

Therefore E[‖μ(H)‖₁] + E[‖μ(L)‖₁] ≤ 2d, and we get the following.

Theorem 4.1. The greedy algorithm for Stochastic Packing achieves expected value at least GREEDY = max{m₁, m_G}, and ADAPT ≤ (1 + 6d)·GREEDY.

This also proves that the adaptivity gap for Stochastic Packing is at most O(d). It remains an open question whether the gap can actually be Θ(d). Our best lower bound is Ω(√d); see Section 2.

5 Stochastic Set Packing and b-matching

As a special case, consider the Stochastic Set Packing problem. We have seen that in this case the adaptivity gap can be as large as Θ(√d). We prove that this is indeed tight. Moreover, we present an algorithmic

approach to find an O(√d)-approximate non-adaptive policy. Our solution will be a fixed collection of items; that is, we insert all these items, and we collect a nonzero profit only if all the respective sets turn out to be disjoint. The first step is to replace the ℓ₁ norm by a stronger measure of size, which allows one to estimate better the probability that a collection of items is a feasible solution.

Definition 5.1. For a set of items A,

  μ̂(A) = Σ_{{i,j}⊆A, i≠j} μ(i)·μ(j).

Lemma 5.1. For a set of items A with size vectors in {0,1}^d, Pr[S(A) ≰ 1] ≤ μ̂(A).

Proof. A set of items can overflow in coordinate l only if at least two items attain size 1 in that coordinate. For a pair of items {i, j}, the probability of this happening is μ_l(i)μ_l(j). By the union bound:

  Pr[S_l(A) > 1] ≤ Σ_{{i,j}⊆A} μ_l(i)μ_l(j),
  Pr[S(A) ≰ 1] ≤ Σ_{{i,j}⊆A} μ(i)·μ(j) = μ̂(A).

Now we use the LP formulation introduced in Section 3. Since we can solve the LP in polynomial time, we can assume that we have a solution x such that Σ_i x_i μ(i) ≤ 2·(1, 1, ..., 1), and V = Σ_i x_i w_i = Σ_i x_i v_i bounds the expected value of any adaptive policy (Lemma 3.2). We can also assume that the value of any item is at most (β/√d)·V for some fixed β > 0; otherwise the most valuable item alone is a (√d/β)-approximation of the optimum. We sample a random set of items F, taking item i independently with probability q_i = (α/√d)·x_i. The constants α, β will be chosen later. We estimate the expected value that we get for the set obtained in this way. Note that there are two levels of expectation here: one related to our sampling, and another to the resulting set being used as a solution of a stochastic problem. The expectation denoted by E[·] in the following computation is the one related to our sampling. Using Lemma 5.1, we can lower bound the expected value obtained by inserting set F by v(F)(1 − μ̂(F)).
The expectation of this value with respect to our random sampling is:

  E[v(F)(1 − μ̂(F))]
    = E[ Σ_{i∈F} v_i − Σ_{i∈F} v_i Σ_{{j,k}⊆F} μ(j)·μ(k) ]
    ≥ E[ Σ_{i∈F} v_i ] − E[ Σ_{{j,k}⊆F} (v_j + v_k) μ(j)·μ(k) ] − E[ Σ_{{j,k}⊆F} Σ_{i∈F∖{j,k}} v_i μ(j)·μ(k) ]
    ≥ Σ_i q_i v_i − Σ_{j,k} q_j q_k v_j μ(j)·μ(k) − (1/2) Σ_i Σ_{j,k} q_i q_j q_k v_i μ(j)·μ(k)
    ≥ (α/√d)V − (α²β/d^{3/2}) V Σ_{j,k} x_j x_k μ(j)·μ(k) − (α³/(2d^{3/2})) V Σ_{j,k} x_j x_k μ(j)·μ(k)
    ≥ (α/√d)(1 − 4αβ − 2α²) V,

where we used v_j ≤ (β/√d)V and Σ_{j,k} x_j x_k μ(j)·μ(k) = Σ_l (Σ_j x_j μ_l(j))² ≤ 4d, which follows from Σ_j x_j μ_l(j) ≤ 2 for each coordinate l. We choose α and β to satisfy α(1 − 4αβ − 2α²) = β and then maximize this value, which yields α² = (√33 − 5)/8 and β² = (11√33 − 59)/128. Then √d/β < 5.6√d is our approximation factor. Using the method of conditional expectations (on E[v(F)(1 − μ̂(F))], which can be computed exactly), we can also find the set F in a deterministic fashion. We summarize:

Theorem 5.1. For Stochastic Set Packing, there is a polynomial-time algorithm which finds a set of items yielding expected value at least ADAPT/(5.6√d). Therefore ADAPT ≤ 5.6√d · NONADAPT.

This closes the adaptivity gap for the Stochastic Set Packing problem up to a constant factor, since we know from Section 2 that it could be as large as √d/2. Next, we sketch how this algorithm generalizes to b-matching, with an arbitrary integer vector b. A natural generalization of μ̂(A) and Lemma 5.1 is the following.

Definition 5.2. For a set of items A,

  μ̂_b(A) = Σ_{l=1}^d Σ_{B⊆A, |B|=b_l+1} Π_{i∈B} μ_l(i).

Lemma 5.2. For a set of items A with size vectors in {0,1}^d, Pr[S(A) ≰ b] ≤ μ̂_b(A).

Proof. Similarly to Lemma 5.1, a set of items can overflow in coordinate l only if b_l + 1 items attain size 1

in their l-th component. This happens with probability Π_{i∈B} μ_l(i), and we apply the union bound.

Using this measure of size, we apply a procedure similar to our solution of Stochastic Set Packing. We solve

  V = max { Σ_i x_i v_i : Σ_i x_i μ_l(i) ≤ b_l + 1 for all l, 0 ≤ x_i ≤ 1 },

which is an upper bound on the optimum by Lemma 3.2. We assume that the value of each item is at most (β/λ)V, and we sample F, taking each item independently with probability q_i = (α/λ)x_i, where Σ_j 1/λ^{b_j+1} = 1 and α, β > 0 are to be chosen later. We estimate the expected value of F:

  E[v(F)(1 − μ̂_b(F))]
    ≥ Σ_i q_i v_i − Σ_{l=1}^d Σ_{|B|=b_l+1} (Σ_{i∉B} q_i v_i) Π_{j∈B} q_j μ_l(j) − Σ_{l=1}^d Σ_{|B|=b_l+1} (Σ_{i∈B} v_i) Π_{j∈B} q_j μ_l(j)
    ≥ (α/λ)V − (α/λ)V Σ_{l=1}^d (1/(b_l+1)!) (Σ_j q_j μ_l(j))^{b_l+1} − (β/λ)V Σ_{l=1}^d (1/b_l!) (Σ_j q_j μ_l(j))^{b_l+1},

where we used Σ_{|B|=b_l+1} Π_{j∈B} q_j μ_l(j) ≤ (1/(b_l+1)!)(Σ_j q_j μ_l(j))^{b_l+1} and v_i ≤ (β/λ)V. Since Σ_j x_j μ_l(j) ≤ b_l + 1, we have Σ_j q_j μ_l(j) ≤ (α/λ)(b_l+1). We use Stirling's formula, (b_l+1)! > ((b_l+1)/e)^{b_l+1}, and b_l! = (b_l+1)!/(b_l+1) > ((b_l+1)/(2e))^{b_l+1} (using 2^{b_l+1} ≥ b_l+1). Also, we assume 2eα < 1. Then

  E[v(F)(1 − μ̂_b(F))]
    ≥ (α/λ)V − (α/λ)V Σ_{l=1}^d (eα)^{b_l+1}/λ^{b_l+1} − (β/λ)V Σ_{l=1}^d (2eα)^{b_l+1}/λ^{b_l+1}
    ≥ (α/λ)V − (α/λ)V · eα Σ_{l=1}^d 1/λ^{b_l+1} − (β/λ)V · 2eα Σ_{l=1}^d 1/λ^{b_l+1}
    = (α/λ) V (1 − 2eβ − eα).

We choose optimally 2eα = √3 − 1 and 2eβ = 2 − √3, which gives an approximation factor of 2eλ/(2 − √3) < 21λ. We can derandomize the algorithm using conditional expectations, provided that all the b_l's are constant. In that case, we can evaluate E[v(F)(1 − μ̂_b(F))] by summing a polynomial number of terms.

Theorem 5.2. For Stochastic b-matching with constant capacity b, there is a polynomial-time algorithm which finds a set of items yielding expected value at least ADAPT/(21λ), where λ is the solution of Σ_j 1/λ^{b_j+1} = 1. Therefore ADAPT ≤ 21λ · NONADAPT.

This closes the adaptivity gap for b-matching up to a constant factor, since we know that it could be as large as λ/4. In case b = (B, B, ..., B), we get λ = d^{1/(B+1)}. This means that our approximation factors for Stochastic Set Packing and b-matching are near-optimal even in the deterministic case, where we have hardness of d^{1/(B+1)−ε}-approximation for any fixed ε > 0.
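The rounding step behind Theorem 5.1 is simple to implement once an LP solution x is available. The sketch below is our own illustration for the set-packing case (the LP solver itself is omitted, and the toy inputs at the bottom are ours): it samples the fixed set F with probabilities q_i = (α/√d)x_i and evaluates the pessimistic estimator v(F)(1 − μ̂(F)) used in the analysis and in the derandomization.

```python
import math
import random

def round_lp_solution(x, mu, values, alpha=0.305, seed=None):
    """Sample a fixed set F, taking item i independently with probability
    q_i = (alpha/sqrt(d)) * x_i, as in Section 5.  The default alpha is
    approximately sqrt((sqrt(33)-5)/8), the constant from the analysis.
    Returns F and the pessimistic value estimate v(F)*(1 - mu_hat(F))."""
    rng = random.Random(seed)
    d = len(mu[0])
    q = [alpha / math.sqrt(d) * xi for xi in x]
    F = [i for i, qi in enumerate(q) if rng.random() < qi]
    # mu_hat(F) = sum over pairs {i,j} of F of the dot product mu(i).mu(j)
    mu_hat = sum(
        sum(mu[F[a]][l] * mu[F[b]][l] for l in range(d))
        for a in range(len(F)) for b in range(a + 1, len(F))
    )
    value = sum(values[i] for i in F)
    return F, value * (1 - mu_hat)

# Toy instance with d = 2 and two nearly independent items.
F, est = round_lp_solution([1.0, 1.0],
                           [[0.1, 0.0], [0.0, 0.1]],
                           [1.0, 1.0], seed=1)
```

Derandomizing via conditional expectations, as the text describes, amounts to deciding item by item whether including it increases this same estimator.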
6 Limitations when knowing only the mean sizes

Our algorithms only use knowledge of the truncated mean sizes of items. Here we show that this knowledge does not allow one to determine the expected value of an optimal adaptive strategy to within a factor better than Θ(d) in the general case and Θ(√d) in the Set Packing case.

Consider two instances of General Stochastic Packing with d items. Item i in the first instance has (deterministic) size 1/(2d) in all components j ≠ i, and size 1 or 0, each with probability 1/2, in component i. In the second instance, item i has deterministic size equal to the expected size of item i in the first instance. In the second instance all d items fit, while in the first the expected number of items that we can fit is O(1).

In the Set Packing case, consider two different instances with an infinite supply (or large supply) of the same item. In the first instance, an item has size (1, 1, ..., 1) with probability 1/d and size (0, 0, ..., 0) otherwise. In the second instance, an item has size e_i for i = 1, ..., d, each with probability 1/d. The expected size of an item is (1/d, 1/d, ..., 1/d) in both instances. In the first instance, any policy will fit about 2d items in expectation, while in the second instance it will get Θ(√d) items by the birthday paradox, for a ratio of Θ(√d).
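The second set-packing instance above can be checked by simulation. The short sketch below is our own illustration (the function name is ours): each item occupies a uniformly random coordinate, and insertion stops when some coordinate is hit a second time, so the number of items that fit follows the birthday-paradox scaling √(πd/2).

```python
import random

def items_until_collision(d, trials=2000, seed=0):
    """Second instance of Section 6: each item takes size e_i with i
    uniform in {0, ..., d-1}; a second item in the same coordinate
    overflows.  Returns the average number of items that fit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        while True:
            i = rng.randrange(d)
            if i in seen:
                break  # second item in coordinate i overflows
            seen.add(i)
            total += 1
    return total / trials

avg = items_until_collision(100)
# For d = 100 the expectation is near sqrt(pi*d/2), about 12.5, even
# though the mean size vector (1/d, ..., 1/d) is identical to the first
# instance, where roughly 2d items fit in expectation.
```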

7 Restricted Stochastic Packing

As the last variant of Stochastic Packing, we consider instances where the item sizes are vectors restricted to S(i) ∈ [0,1]^d, and the capacity is a given vector b ∈ R_+^d with b_j ≥ 1 for all j. Similarly to b-matching, we prove an approximation factor as a function of the capacity b, and we find that our approach is particularly successful in the case of capacity very large compared to the item sizes.

Theorem 7.1. For Restricted Stochastic Packing with item sizes S(i) ∈ [0,1]^d and capacity b, there is a polynomial-time algorithm which finds a set of items yielding expected value at least ADAPT/(120λ), where λ is the solution of Σ_{j=1}^d 1/λ^{b_j} = 1. I.e., ADAPT ≤ 120λ · NONADAPT.

Proof. Consider the LP bounding the performance of any adaptive policy:

  V = max { Σ_i x_i v_i : Σ_i x_i μ_j(i) ≤ b_j + 1 for all j, 0 ≤ x_i ≤ 1 }.

Assume that v_i ≤ (β/λ)V for each item, for some constant β > 0 to be chosen later; otherwise one item alone is a good approximate solution. We find an optimal solution x and define q_i = (α/λ)x_i, with α > 0 again to be chosen later. Our randomized non-adaptive policy inserts item i with probability q_i. Let's estimate the probability that this random set of items F fits, with respect to both (independent) levels of randomization: our randomized policy and the random item sizes. For each j,

  E[S_j(F)] = Σ_i q_i μ_j(i) ≤ (α/λ)(b_j + 1).

Since this is a sum of independent [0,1] random variables, we apply the Chernoff bound (for random variables with support in [0,1] as in [7], but we use the form given in Theorem 4.1 of [8] for the binomial case) to estimate the probability of overflow (with μ ≤ α(b_j+1)/λ and 1 + δ = b_j/μ):

  Pr[S_j(F) > b_j] < (e^δ/(1+δ)^{1+δ})^μ < (e/(1+δ))^{(1+δ)μ} = (eμ/b_j)^{b_j} ≤ (2eα/λ)^{b_j},

and using the union bound,

  Pr[∃j : S_j(F) > b_j] < Σ_{j=1}^d (2eα)^{b_j}/λ^{b_j} ≤ 2eα.

Now we estimate the probability that the value of F is too low. We assume that v_i ≤ (β/λ)V, and by scaling we obtain values v'_i = (λ/(βV)) v_i ∈ [0,1]. We sample each of them with probability q_i, which yields a random sum W with expectation E[W] = Σ_i q_i v'_i = α/β. Again by the Chernoff bound (extension of Theorem 4.2 of [8]),

  Pr[W < E[W]/2] < e^{−E[W]/8} = e^{−α/8β}.
We choose α = 1/10 and β = 1/100, which yields Pr[∃j : S_j(F) > b_j] < 2eα < 0.544 and Pr[v(F) < V/(20λ)] < e^{−α/8β} < 0.287, which means that with probability at least 0.169, we get a feasible solution of value at least V/(20λ). The expected value achieved by our randomized policy is therefore at least V/(120λ). Finally, note that any randomized non-adaptive policy can be seen as a convex linear combination of deterministic non-adaptive policies. Therefore, there is also a deterministic non-adaptive policy achieving NONADAPT ≥ ADAPT/(120λ). A fixed set F achieving expected value at least ADAPT/(120λ) can be found using the method of pessimistic estimators applied to the Chernoff bounds; see [9].

8 Inapproximability of PIP

Here we improve the known results on the hardness of approximation for PIP. In the general case, it was only known [1] that a d^{1/2−ε}-approximation, for any fixed ε > 0, would imply NP = ZPP (using a reduction from Max Clique). We improve this result to d^{1−ε}.

Theorem 8.1. There is no polynomial-time d^{1−ε}-approximation algorithm for PIP for any ε > 0, unless NP = ZPP.

Proof. We use Håstad's result on the inapproximability of Max Clique [6], or more conveniently maximum stable set. For a graph G, we define a PIP instance: let A ∈ R_+^{d×n} be a matrix where d = n = |V(G)|, A_ii = 1 for every i, A_ij = 1/n for {i,j} ∈ E(G), and A_ij = 0 otherwise. Let b = v = (1, 1, ..., 1). It is easy to see that Ax ≤ b for x ∈ {0,1}^n if and only if x is the characteristic vector of a stable set. Therefore, approximating the optimum of this PIP to within d^{1−ε} for any ε > 0 would imply an n^{1−ε}-approximation algorithm for maximum stable set, which would imply NP = ZPP.

This proves that our greedy algorithm (Section 4) is essentially optimal even in the deterministic case. Next we turn to the case of small items, in particular A ∈ [0,1]^{d×n} and b = (B, B, ..., B), B ≥ 2 integer. In this case, we have an O(d^{1/B})-approximation algorithm

(Section 7), and the known hardness result [1] was that a d^{1/(B+1)−ε}-approximation would imply NP = ZPP. We strengthen this result to d^{1/B−ε}.

Theorem 8.2. There is no polynomial-time d^{1/B−ε}-approximation algorithm for PIP with A ∈ [0,1]^{d×n} and b = (B, B, ..., B), B ∈ Z_+, B ≥ 2, unless NP = ZPP.

Proof. For a given graph G = (V, E), denote by d the number of B-cliques (d < n^B). Define a d × n matrix A (i.e., indexed by the B-cliques and vertices of G), where A(Q, v) = 1 if vertex v belongs to clique Q, A(Q, v) = 1/n if vertex v is connected by an edge to clique Q, and A(Q, v) = 0 otherwise. Denote the optimum of this PIP with all values v_i = 1 by V. Let ε > 0 be arbitrarily small, and assume that we can approximate V to within a factor of d^{1/B−ε}. Suppose that x is the characteristic vector of a stable set S, |S| = α(G). Then Ax ≤ b, because in any clique there is at most one member of S and the remaining vertices contribute at most 1/n each. Thus the optimum of the PIP is V ≥ Σ_v x_v = α(G). If Ax ≤ b for some x ∈ {0,1}^n, then the subgraph induced by S = {v : x_v = 1} cannot have a clique larger than B: suppose R ⊆ S is a clique of size B + 1, and Q ⊆ R is a sub-clique of size B. Then (Ax)(Q) (the component of Ax indexed by Q) must exceed B, since it collects 1 from each vertex in Q plus at least 1/n from the remaining vertex in R ∖ Q. Finally, we invoke a lemma from [1], which states that a subgraph on |S| = Σ_v x_v vertices without cliques larger than B must have a stable set of size at least |S|^{1/B}; hence α(G) ≥ V^{1/B}, i.e. V ≤ (α(G))^B. We assume that we can find W such that V/d^{1/B−ε} ≤ W ≤ V. Taking a = W^{1/B}, we show that a must therefore be an n^{1−ε}-approximation to α(G). We know that a ≤ V^{1/B} ≤ α(G). On the other hand,

  a ≥ (V/d^{1/B−ε})^{1/B} ≥ (α(G))^{1/B}/n^{1/B−ε} ≥ α(G)/n^{1−ε},

where we have used V ≥ α(G), d < n^B, and finally α(G) ≤ n. This proves that a is an n^{1−ε}-approximation to α(G), which would imply NP = ZPP.

9 PSPACE-hardness of Stochastic Packing

Consider the problem of finding the optimal adaptive policy. In [3], we showed how adaptive policies for Stochastic Knapsack are related to Arthur-Merlin games.
This yields PSPACE-hardness results for certain questions: namely, whether it is possible to fill the knapsack exactly to its capacity with a certain probability, or what the adaptive optimum is for a Stochastic Knapsack instance with randomness in both the size and value of each item. However, we were not able to prove that it is PSPACE-hard to find the adaptive optimum with deterministic item values. In contrast to the inapproximability results of Section 8, here we do not regard the dimension d as part of the input. Let us remark that for deterministic PIP it is NP-hard to find the optimum, but there is a PTAS for any fixed d [4]. For Stochastic Packing, this is impossible due to the adaptivity gap and the limitation of knowing only the mean item sizes. However, consider a scenario where the probability distributions are discrete and completely known. Then we can consider finding the optimum adaptive policy exactly. Here we prove that this is PSPACE-hard even for Stochastic Packing with only two random size components and deterministic values. For the PSPACE-hardness reduction, we refer to the following PSPACE-hard problem (see [2], Fact 4.1).

Problem: MAX-PROB SSAT
Input: Boolean 3-cnf formula Φ : {0,1}^{2k} → {0,1} with variables x₁, y₁, ..., x_k, y_k. Let

  P(Φ) = Mx₁ Ay₁ Mx₂ Ay₂ ... Mx_k Ay_k Φ(x₁, y₁, ..., x_k, y_k),

where Mx f(x) = max{f(0), f(1)} and Ay g(y) = (g(0) + g(1))/2.
Output: YES, if P(Φ) = 1; NO, if P(Φ) ≤ 1/2.

Theorem 9.1. For Stochastic Packing in a fixed dimension d ≥ 2, let p̂(V) be the maximum probability that an adaptive policy successfully inserts a set of items of total value at least V. Then for any fixed ε > 0, it is PSPACE-hard to distinguish whether p̂(V) = 1 or p̂(V) ≤ 3/4.

Proof. We can assume that d = 2. We define a Stochastic Packing instance corresponding to a 3-cnf formula Φ(x₁, y₁, ..., x_k, y_k) of m clauses. The 2-dimensional sizes will have the same format in each component, [VARS | CLAUSES], where VARS has k digits and CLAUSES has m digits. All digits are in base 10 to avoid any overflow.
It will be convenient to consider the individual digits as 2-dimensional, with a pair of components indexed by 0 and 1. In addition, we define deterministic item values with the same format [VARS | CLAUSES]. For each i ∈ {1, ..., k}, x_i ∈ {0,1} and f_i ∈ {0,1}, we define a variable item I_i(x_i, f_i) which has 4 possible random sizes indexed by two random bits y_i, r_i ∈ {0,1}:

  s(I_i(x_i, f_i), y_i, r_i) = [VARS(i, f_i, r_i) | CLAUSES(i, x_i, y_i)].

VARS(i, f_i, r_i) has two (three for i = 1 and one for i = k) nonzero digits: the i-th most significant digit has a 1 in the f_i-component (in both components for i = 1, independently of f_1), and the (i+1)-th most significant digit has a 1 in the r_i-component, except for i = k. Note that the policy can choose in which component to place the f_i contribution (for i > 1), while the placement of the r_i contribution is random. In CLAUSES(i, x_i, y_i), we get a nonzero value in the digits corresponding to clauses in which x_i or y_i appears. Variable x_i contributes 1 to the digit in the 1-component if the respective clause is satisfied by the value of x_i, or in the 0-component if the clause is not satisfied. Similarly, y_i contributes to the clauses where it appears. If both x_i and y_i appear in the same clause, the contributions add up. The values of items I_i(x_i, f_i) are defined as val(I_i(x_i, f_i)) = [VAR(i) | 0], where VAR(i) contains a 1 in the i-th digit and zeros otherwise. Then we define fill-in items F_{ji} (i = 0, 1) whose size only contains a 1 in the i-component for the j-th clause digit. For each j, we have 3 items of type F_{j0} and 2 items of type F_{j1}. Their values are val(F_{ji}) = [0 | CLAUSE(j)], which means a 1 marking the j-th clause. The capacity of the knapsack is C = [11...1 | 33...3] in each dimension, and the target value is also V = [11...1 | 33...3].

Assume that P(Φ) = 1. We can then define an adaptive policy which inserts one item I_i for each i = 1, 2, ..., k (in this order), choosing f_{i+1} = 1 − r_i for each i < k. Based on the satisfying strategy for formula Φ, the policy satisfies each clause and then adds fill-in items to achieve value 1 in each digit of VARS and value 3 in each digit of CLAUSES. On the other hand, assume P(Φ) ≤ 1/2. Any adaptive policy inserting exactly 1 item I_i for each i and abiding by the standard ordering of items can achieve the target value only if all clauses are properly satisfied (because otherwise it would need 3 items of type F_{j1} for some clause), and that happens with probability at most 1/2. However, we have to be careful about cheating policies. Here, cheating means either inserting I_i after I_{i+1} or not inserting exactly 1 copy of each I_i.
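A remark on the encoding: since every digit stays below 10, a [VARS | CLAUSES] vector can be read as a single base-10 integer, and adding item sizes never produces carries between digits. A minimal sketch of this idea (the helper and its signature are our own, not from the paper):

```python
def digit_vector(k, m, var_digits, clause_digits):
    """Pack k VARS digits followed by m CLAUSES digits (each 0..9)
    into one base-10 integer; digit-wise addition of such vectors is
    then ordinary integer addition, with no carries as long as no
    digit ever reaches 10."""
    assert len(var_digits) == k and len(clause_digits) == m
    assert all(0 <= d <= 9 for d in var_digits + clause_digits)
    n = 0
    for d in var_digits + clause_digits:
        n = 10 * n + d
    return n
```

For instance, with k = 2 and m = 3, the sizes [1 0 | 0 3 0] and [0 1 | 3 0 3] add digit by digit to the vector [1 1 | 3 3 3], i.e. 10030 + 1303 = 11333.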
Consider a cheating policy and the first i for which this happens. In case I_i is not inserted at all, the policy cannot achieve the target value for VARS. In case more than 1 copy of I_i is inserted, or I_i is inserted after I_{i+1}, there is a 1/2 probability of overflow in the VARS block of the capacity. This is because the contribution of I_i to the (i+1)-th digit of VARS hits a random component, while one of the two components would have been filled by I_{i+1} or another copy of I_i already. Either way, this leads to a failure with probability at least 1/2, conditioned on the event of cheating. In the worst case, the probability of success of a cheating policy can be 3/4.

Theorem 9.2. For a 2-dimensional stochastic knapsack instance, it is PSPACE-hard to maximize the expected value achieved by an adaptive policy.

Proof. We use the reduction from the previous proof. The maximum value that any policy can achieve is V = [11...1 | 33...3]. In case of a YES instance, an optimal policy achieves V with probability 1, whereas in case of a NO instance, it can succeed with probability at most 3/4. Therefore the expected value obtained in this case is at most V − 1/4.

References

[1] C. Chekuri and S. Khanna: On multidimensional packing problems. SIAM J. Computing 33:837–851, 2004.
[2] A. Condon, J. Feigenbaum, C. Lund and P. Shor: Random debaters and the hardness of approximating stochastic functions. SIAM J. Computing 26:369–400, 1997.
[3] B. Dean, M. X. Goemans and J. Vondrák: Approximating the stochastic knapsack: the benefit of adaptivity. To appear in FOCS, 2004.
[4] A. M. Frieze and M. R. B. Clarke: Approximation algorithms for the m-dimensional 0-1 knapsack problem: worst-case and probabilistic analyses. European J. Oper. Res. 15:100–109, 1984.
[5] M. Halldórsson: Approximations of weighted independent set and hereditary subset problems. J. Graph Algorithms and Applications 4(1):1–16, 2000.
[6] J. Håstad: Clique is hard to approximate within n^{1−ε}. In FOCS: 627–636, 1996.
[7] W. Hoeffding: Probability inequalities for sums of bounded random variables. Amer. Stat. Assoc. J. 58:13–30, 1963.
[8] R. Motwani and P. Raghavan: Randomized Algorithms. Cambridge University Press, 1995.
[9] P. Raghavan: Probabilistic construction of deterministic algorithms: approximating packing integer programs. J. Comp. and System Sci. 37:130–143, 1988.
[10] P. Raghavan and C. D. Thompson: Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica 7:365–374, 1987.
[11] A. Srinivasan: Improved approximations of packing and covering problems. In STOC: 268–276, 1995.


More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Math 426: Probability MWF 1pm, Gasson 310 Homework 4 Selected Solutions

Math 426: Probability MWF 1pm, Gasson 310 Homework 4 Selected Solutions Exercses from Ross, 3, : Math 26: Probablty MWF pm, Gasson 30 Homework Selected Solutons 3, p. 05 Problems 76, 86 3, p. 06 Theoretcal exercses 3, 6, p. 63 Problems 5, 0, 20, p. 69 Theoretcal exercses 2,

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

CS 331 DESIGN AND ANALYSIS OF ALGORITHMS DYNAMIC PROGRAMMING. Dr. Daisy Tang

CS 331 DESIGN AND ANALYSIS OF ALGORITHMS DYNAMIC PROGRAMMING. Dr. Daisy Tang CS DESIGN ND NLYSIS OF LGORITHMS DYNMIC PROGRMMING Dr. Dasy Tang Dynamc Programmng Idea: Problems can be dvded nto stages Soluton s a sequence o decsons and the decson at the current stage s based on the

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Société de Calcul Mathématique SA

Société de Calcul Mathématique SA Socété de Calcul Mathématque SA Outls d'ade à la décson Tools for decson help Probablstc Studes: Normalzng the Hstograms Bernard Beauzamy December, 202 I. General constructon of the hstogram Any probablstc

More information

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds.

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds. U.C. Berkeley CS273: Parallel and Dstrbuted Theory Lecture 1 Professor Satsh Rao August 26, 2010 Lecturer: Satsh Rao Last revsed September 2, 2010 Lecture 1 1 Course Outlne We wll cover a samplng of the

More information

Exercise Solutions to Real Analysis

Exercise Solutions to Real Analysis xercse Solutons to Real Analyss Note: References refer to H. L. Royden, Real Analyss xersze 1. Gven any set A any ɛ > 0, there s an open set O such that A O m O m A + ɛ. Soluton 1. If m A =, then there

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information