On Random Sampling Auctions for Digital Goods

Size: px
Start display at page:

Download "On Random Sampling Auctions for Digital Goods"

Transcription

1 On Random Samplng Auctons for Dgtal Goods Saeed Alae Department of Computer Scence Unversty of Maryland College Park, MD 07 Azarakhsh Malekan Department of Computer Scence Unversty of Maryland College Park, MD 07 Aravnd Srnvasan Department of Computer Scence and Insttute for Advanced Computer Studes Unversty of Maryland College Park, MD 07 ABSTRACT In the context of auctons for dgtal goods, an nterestng Random Samplng Optmal Prce aucton (RSOP) has been proposed by Goldberg, Hartlne and Wrght; ths leads to a truthful mechansm. Snce random samplng s a popular approach for auctons that ams to maxmze the seller s revenue, ths method has been analyzed further by Fege, Flaxman, Hartlne and Klenberg, who have shown that t s 5-compettve n the worst case whch s substantally better than the prevously proved bounds but stll far from the conjectured compettve rato of. In ths paper, we prove that RSOP s ndeed -compettve for a large class of nstances n whch the number of bdders recevng the tem at the optmal unform prce, s at least 6. We also show that t s.68 compettve for the small class of remanng nstances thus leavng a neglgble gap between the lower and upper bound. Furthermore, we develop a robust verson of RSOP one n whch the seller s revenue s, wth hgh probablty, not much below ts mean when the above parameter grows large. We employ a mx of probablstc technques and dynamc programmng to compute these bounds. Categores and Subject Descrptors F..0 [Theory of Computaton]: AALYSIS OF ALGO- RITHMS AD PROBLEM COMPLEXITYGeneral ; G.3 [Mathematcs of Computng]: PROBABILITY AD STATISTICS Probablstc algorthms General Terms Algorthms, Desgn, Economcs, Theory Supported n part by SF Award CS Supported by SF Award CCF Supported n part by SF ITR Award CS and SF Award CS Permsson to make dgtal or hard copes of all or part of ths work for personal or classroom use s granted wthout fee provded that copes are not made or dstrbuted for proft or commercal advantage and that copes bear ths notce and the full ctaton on the frst page. To copy otherwse, to republsh, to post on servers or to redstrbute to lsts, requres pror specfc permsson and/or a fee. Copyrght 00X ACM X-XXXXX-XX-X/XX/XX...$5.00. Keywords Random Samplng, Aucton, Mechansm Desgn. ITRODUCTIO In recent years, there has been a consderable amount of work n algorthmc mechansm desgn. One of the prmary constrants that much of ths work tres to enforce s ncentve compatblty, whch means that beng truthful s the best for each agent. In ths work, we study a popular random-samplng-based ncentve-compatble mechansm ( RSOP ) for auctons of dgtal goods where we am to maxmze the auctoneer s expected revenue; we prove by a mx of analytcal methods and computng-based approaches (the latter based on rgorous mathematcal arguments) that ths mechansm has a much better compettve rato than was known before, and place lmts on how good ths mechansm can be n the worst case. Further, RSOP as defned, can delver a very low revenue to the auctoneer wth nonelgble probablty: we develop a more robust verson whch nherts the good propertes of RSOP, and wll addtonally return a good-qualty soluton wth hgh probablty (and not just n expectaton) as the number of wnnng bdders n an optmal soluton grows. Our basc problem s as follows. A seller (also referred to as auctoneer) has a good that she/he can make an unlmted number of copes of such as a dgtal good. 
We also have bdders wth unknown valuatons v, v,..., v for the good; ths means that bdder wll buy the good ff t s offered at a prce of at most v to hm/her. We am to desgn a (randomzed) ncentve-compatble mechansm that wll maxmze the seller s expected total revenue. (We assume that the seller can make up to copes f necessary at neglgble cost, so that the seller s revenue equals her/hs proft.) A classcal work of Myerson has studed ths problem under the Bayesan settng, where we assume a dstrbuton on the bds v ; knowledge of the pror nformaton about the bd dstrbuton s essental to hs work []. Here, we wll work throughout wth the classcal computer scence approach to ths problem, whch s to assume the worst case: ths s the pror-free varant of our problem where we allow an arbtrary (unknown and worst-case) dstrbuton of the bds. In the sprt of the compettve analyss of onlne algorthms, ths naturally leads to the followng noton of compettve rato. ote that f the bds v v v are known n an nstance I, then proft-maxmzaton s trval: lettng = argmax v, we sell the good at prce v,

2 to get an optmal revenue OP T (I) = v. The compettve rato of an ncentve-compatble mechansm s defned to be the largest possble value, taken over all possble nstances I, of OP T (I) dvded by the expected proft obtaned by our mechansm on I. ote that ths rato s at least. The pror-free varant of our problem has been frst nvestgated n [6, 5]. Random samplng s one of the most natural methods that s used n pror-free settngs when the objectve s to maxmze the auctoneer s revenue. The work of [6] develops a natural random-samplng-based approach for our problem, Random Samplng Optmal Prce (RSOP). In RSOP, the bdders are parttoned nto two groups unformly at random and the optmal prce of each set s offered to the other set. It has been shown that RSOP returns a proft very close to optmal for many classes of nterestng nputs ([], []). There has also been a far amount of work analyzng the compettve rato of RSOP. In [5], Goldberg et al. showed that the compettve rato of RSOP s 7600, and conjectured that the compettve rato should be ; note that ths value of cannot be lowered further snce RSOP attans a value of when we have = and v = v. Later, Fege et al. mproved the analyss and showed that ths rato s at most 5 [3]. There are at least two reasons for tryng to prove that RSOP s compettve rato s. Frst, RSOP s very natural and gvng a tght analyss appears to be of nherent nterest. Second, RSOP s very easly mplementable and hence easly adaptble to dfferent settngs (e.g., double auctons [], onlne lmted-supply auctons [8], combnatoral auctons [], [5], and for the money burnng problem [9]). Summary of our results:. To descrbe our results, we wll need the noton of wnners (w.r.t. the optmal sngle-prce aucton). In our defnton of OP T (I) where we set = argmax v, let be the largest ndex that satsfes ths defnton. Recall that n the offlne case where we know all the v and compute as ths maxmzng ndex, we sell at the sngle prce v, whch s then bought by bdders,,..., to gve an optmal revenue OP T (I) = v to the auctoneer. Snce the number of bdders who get the good n ths case s, we refer to as the number of wnners (w.r.t. the optmal sngle-prce aucton). ote that s determned unquely by the values v v v. Many of our results are motvated by the followng queston: the nstance seen above where n = = and the compettve rato of RSOP s, seems qute unque. In partcular, when sellng a dgtal good, one expects the typcal number of buyers to be large. Does RSOP do much better than known before, when s large? Our man results are four-fold as follows, and are obtaned by an mproved probablstc analyss aded by a dynamc programmng computaton and correlaton nequaltes: I. Improved upper bounds: We prove that the compettve rato of RSOP s: less than.68, mprovng upon the upper-bound 5 of Fege et al. [3]; less than f the number of wnners s at least 6; There s a subtlety here that requres n the defnton of OP T (I), an ssue that we wll dscuss later. upper-bounded by a quantty that approaches 3.3 as. These results ndcate that RSOP does much better than known n the practcally-nterestng case where s large, and that perhaps the only case where the compettve rato of s attaned s the case where = and v = v. II. Lower bounds: We prove that even f gets arbtrarly large, one can construct nstances I wth such, for whch the compettve rato s at least.65. III. Combnatoral approach: We also present a combnatoral approach for the case where the bd values are ether or h and show that the compettve rato of RSOP s at most n ths case. IV. 
Robustness: The compettve rato s the expected value for a maxmzaton problem, whch n general s not a suffcently-good ndcator of usablty: a nonnegatve random varable wth a large mean can stll be very small wth hgh probablty. (Ths s n contrast wth upper-bounds on the expectaton for mnmzaton problems wth non-negatve objectves, where Markov s nequalty bounds the probablty of the objectve becomng prohbtvely hgh.) Indeed, RSOP nevtably has a non-neglgble probablty of returnng zero proft, n cases where s small. Snce the case of large s a very natural one, we could ask: s RSOP robust the proft does not devate much below the mean wth hgh probablty when s large? It can be shown that ths s not always the case. Therefore, we develop a new ncentve-compatble mechansm RSOP robust (ɛ, δ) parameterzed by ɛ, δ (0, ), whch has the followng two propertes: () for any nput nstance, the expected proft s at least one-tenth the optmal proft for ɛ small enough (say, ɛ 0.); () there s a value 0(ɛ, δ) such that for any nput nstance wth 0, the proft s at least (/ ɛ) tmes the optmal proft, wth probablty at least δ. ote that ths protocol does not requre any nformaton about the nput nstance (such as the value of ), and delvers a good soluton wth hgh probablty for the practcally-nterestng case of large. Due to space-constrants, the proof of the last ( robustness ) tem above s deferred to the full verson. Several addtonal detals and proofs are also omtted due to lack of space.. PROBLEM DEFIITIO We consder auctonng dgtal goods to bdders wth bd values v, v,..., v. Wthout loss of generalty, we assume v v v. The Random Samplng Optmal Prce aucton parttons the bds nto two sets A and B such that each bd v ndependently goes to ether of A or B wth probablty /. We then compute the optmal prce of each set (among the two sets A and B) and offer t to the other set: note that the optmal prce of a sequence G = u u u k of bds n nondecreasng order, s u G where G = argmax u. (Thus, we wll use ths The proofs are avalable n the onlne verson whch can be found at saeed/archve/rsop.pdf

3 defnton once wth G = A when we compute the optmal prce for A and offer that prce to B, and wll use ths defnton agan wth G = B when we compute the optmal prce for B and offer that prce to A.) For our nput nstance I = v, v,..., v of bds, we defne the optmal proft of I as OP T (I) = v where = argmax v. ote that we force here: wthout ths, t can be shown that no ncentve compatble mechansm can acheve a constant fracton of the optmal proft n the case where v v [7]. (ote that G above s allowed to be one; t s only the that we use n the defnton of OP T (I) that s requred to be at least two, n order to dsallow negatve results [7].) 3. ASSUMPTIOS To smplfy the proofs we make the followng assumptons throughout the rest of ths paper. WLOG, we assume we have an nfnte number of bds v, v, n whch all the bds after v are zero so our analyss wll be ndependent of. WLOG, to smplfy the analyss, we assume that OP T (I) = snce we can always scale all the bds by a constant factor wthout affectng the mechansm. For the sake of notaton we use E[RSOP] to denote the expected proft of RSOP on an nput nstance where the expectaton s taken over random parttons of the bds. ote that by our prevous assumpton that OP T = we have E[RSOP] and the compettve rato of RSOP can be defned as max I E[RSOP]. WLOG, we assume that v s always n set B snce the mechansm s symmetrc for both A and B and so we can relabel the sets. WLOG, we only consder the proft obtaned from B by offerng the optmal prce of A and we assume the obtaned proft from A when offered the optmal prce of B s 0. The justfcaton for ths assumpton s that we are computng the E[RSOP ] for the worst case nput. ote that for any gven nput nstance we can replace v wth a very large bd such that the optmal prce of set B s v n whch case by offerng prce of v to set A we don t obtan any proft.. THE BASIC LOWER BOUD In ths secton, we gve a basc lower bound that shows RSOP s ndeed -compettve for a large class of nput nstances. In the next secton, we mprove ths result usng a more sophstcated lower bound, but based on the same dea. We start by statng the man theorem of ths secton: Theorem.. For any nput nstance I = {v, v, } where there are more than 0 bds above the optmal unform prce (.e. > 0), the expected proft of RSOP s at least (.e., E[RSOP] ). The actual computed lower bound values can be found n Table. We prove the theorem throughout the rest of ths secton. The outlne of the proof s as follows. Frst, we defne a lower boundng functon (LBF) whch, for each partton of bds to two sets (A, B), returns a value whch s less than or equal to the proft of RSOP. Most mportantly, our LBF only depends on and on how the bds are parttoned but s ndependent of the actual value of the bds v, v,. The expected value of the LBF s clearly a lower bound for E[RSOP]. After defnng the LBF functon, n Subsecton., we explan how we can compute the expected value of the LBF for any gven. We then compute the LBF for all values of from 0 up to = 5000 and show that the expected value of LBF s ndeed greater than and so s E[RSOP] for 0. The computaton of the lower bound nvolves a combnaton of probablstc technques and dynamc programmng. Later, n Subsecton., we compute a lower bound on the expected value of the LBF assumng that > = 5000 and show that t s ndeed greater than and that completes the proof of Theorem.. Before we start wth the proof, let us make the followng observatons whch gves an ntuton to our proof: Observaton.. 
For a gven, roughly, we expect about half of v,, v to fall n set A and the other half to fall n set B. In other words, let s = #{j j, v j A}, we expect s. Observaton.3. The optmal proft of set A s at least as much as the proft that we get f we offer v to A. Let A be the ndex of optmal prce n A. The optmal proft of set A s at least s v. Snce we assumed v = OP T =, essentally v = and therefore we can use s as a lower bound on the optmal proft of set A. Formally, assumng Prof(A, v A ) denotes the proft that we get from a set A by offerng the prce v A to t: Prof(A, v A ) s (.) ote that based on Observaton. we expect ths quantty to be about. Observaton.. Defne z = s s whch s the rato of the number of bds from v,, v that fall n B to the number of those that fall n A. It s easy to see that the rato of proft of set B when offered v A to proft of set A when offered the same v A s the same as z A. Formally: Prof(B, v A ) Prof(A, v A ) = z A (.) otce that A depends on the actual value of the bds and thus (.) s hard to work wth. To work around that, we use z = mn z as a lower bound for z A. Therefore: Prof(B, v A ) Prof(A, v A ) z (.3) The outlne of the proof of our basc lower bound for E[RSOP] s as follows. We combne Observaton.3 and Observaton. to get the followng: E[RSOP] E[Prof(B, v A )] (.) E[Prof(A, v A ) Prof(B, v A ) Prof(A, v A ) ] (.5) E[ s z] (.6)

4 ote that (.6) allows us to compute the lower bound regardless of the actual values of v because the rght hand sde of (.6) s totally ndependent of the v values except for. Also note that for any gven nput nstance I, depends only on I and not on how we partton the bds so n computng E[RSOP], s a constant (for a fxed I) and not a random varable. Ideally, we would lke to separate E[ s z] to E[ s ]E[z], but snce s and z are correlated we cannot do that. evertheless, the correlaton decrease as ncreases whch suggests that for suffcently large we can separate the two terms. In Subsecton., we present a dynamc programmng method for computng E[ s z] for any fxed. We then use the dynamc program to compute the lower bound on E[ s z] for values of = In Subsecton., we gve a lower bound on E[ s z] for all values of > = 5000 by separatng the E[ s z] to E[ s ]E[z] and subtractng the maxmum possble dfference caused by that.. When there are a few bds above the optmal unform prce In ths subsecton we show the followng: We show how we can compute a lower bound on E[ s z] and therefore for E[RSOP] for any fxed. We compute the above lower bound for all values of up to = 5000 and verfy that for 0 t s ndeed better than. The computed lower bounds for varous values of can be found n Table. We can compute a lower bound for E[ s z] and therefore for E[RSOP] by defnng a set of events and then breakng E[ s z] over those events usng the law of total expectaton. As we showed before, E[RSOP] E[ s z] so we only need to compute a lower bound on E[ s z]. Snce s and z are correlated random varables we cannot separate them n E[ s z]. The dea s that when we condton E[ s z] on any of these events we can derve lower bounds for both s and z. We then use the above method to compute a lower bound on E[RSOP] for all the values of = 5000 to show that for 0 the lower s better than. In the next subsecton, we prove a lower bound of better than for all values of >. Frst we defne the followng notaton: E T R : If T s a subset of ndces and R s an nterval whch s n [0, ) and sup(r) s the supremum of R then ER T s the event n whch for all ndces T, we have s sup(r) and at least for one n set T we have s R. Formally, ER T = { T : sup(r) T : R}. s s < For example, we mght use E [,0] [0.,0.5] to denote the event n whch for 0 the s s at most 0.5 and there s some j 0 such that s j [0., 0.5]. As a j shorthand we mght sometmes use a sngle number nstead of an nterval to denote the nterval from 0 up to and ncludng that number. We may also omt the subset of ndces altogether n whch case we assume [0, ). So we can derve the followng alternate notatons: Eα, k E α. We may also use one specal notaton E k,j α = { k : s α s k = j}. P r[e] : The probablty of event E happenng. Ê[X E] : The normalzed condtonal expected value of a random varable X whch s: We frst show the followng: Ê[X E] = E[X E]P r[e] (.7) Lemma.5. For any sequence of α 0,, α m such that 0 = α 0 < α < < α m =, the followng s a lower bound on E[ s z]: E[ s m z] (Ê[ s Eα ] Ê[ s Eα α ]) (.8) α = n whch by defnton E α s the event n whch for any ndex j, the fracton of the v,, v j that fall n set A s less than α. We actually prove the followng more general statement. The proof s omtted due to lack of space. Lemma.6. 
For any gven postve random varable x and any sequence of α 0,, α m such that 0 = α 0 < α < < α m =, the followng nequalty always holds n whch the random varable z s defned as z = mn(z, ): E[xz] E[xz ] m (Ê[x Eα ] Ê[x Eα α ]) (.9) α = In whch by defnton E α s the event n whch for any ndex j, the fracton of the v,, v j that fall n set A s less than α. The ntuton behnd Lemma.6 s the followng: We want to fnd lower bounds on z so we break the expected value over a set of small events. Under each event E α we have z α α based on the defnton of E α. Roughly, Ê[x Eα ] Ê[x E α ] s the porton of the expected value for whch the best lower bound for z that we can guarantee s α α. The choce of m and α 0,, α m n Lemma.6 greatly affects the value of the lower bound. Generally, ncreasng m mproves the lower bound but at the cost of more computaton. We wll provde the values of α and m that we used to get our desred lower bound later. We clam that the coeffcent of each term Ê[x Eα ] on the rght hand sde of (.6) s postve and therefore we can use a lower bound for each Ê[x Eα ] nstead of ts exact value and the nequalty stll holds. We prove our clam as follows. If we expand the sum on the rght hand sde of (.9), each Ê[ s E α ] appears exactly twce except for = 0 and = m. Snce α 0 = 0 and α m =, the value of Ê[ s E α0 ] s 0 and also the coeffcent of Ê[ s E αm ] s 0. Except for those two, every other Ê[ s E α ] has a coeffcent of α α α + α + whch s postve and proves our clam. Therefore, we can relax the nequalty by substtutng each Ê[ s E α ] wth ts lower bound. Sofar, the problem has been reduced to computng a lower bound on Ê[ s E α ] whch we explan next. The proof of the followng lemma s omtted due to lack of space.

5 Lemma.7. For any random varable x such that x [0, ] and any α [0, ] and any n the followng always holds: Ê[x E α] Ê[x E n α] P r[e n α]( P r[e (n, ) α ]) (.0) Intutvely, Lemma.7 s sayng that f nstead of computng Ê[x Eα] we can approxmate t by Ê[x E α], n the maxmum that we may over-approxmate s at most P r[eα]( n P r[e α (n, ) ]) whch s the probablty of the event n whch for any j < n, s j < αj and then there s some j > n such that s j αj. ote that snce x, ts normalzed expected value condtoned on any event s less than the probablty of that event. By choosng a large enough n we can make sure that the over approxmaton upper bound gets close enough to 0. Agan, n Lemma.7, ncreasng n mproves the lower bound, but the computaton cost of Ê[ s Eα] n and P r[eα] n wll ncrease. To use Lemma.7 for x = s, effectvely we need to be able to compute Ê[ s Eα], n P r[eα] n and P r[e α (n, ) ]. ext we show how to compute the frst two exactly by usng dynamc programmng. Later n Lemma.9 we show how to get a lower bound on the thrd one. The proof of the followng lemma s omtted due to lack of space. Lemma.8. The exact value of Ê[ s Eα] n and P r[eα] n can be computed usng the followng dynamc program. Recall that Eα k,j s the event n whch for all r k, the fracton of v,, v r that fall n A s less than α and exactly j of v,, v k fall n A: P r[e k,j α ] = Ê[ s k,j Eα ] = P r[e k α] = Ê[ s E k α] = P r[e k,j α ] j = 0 k > 0 k,j P r[e α ] + k,j P r[e α ] 0 < j αk 0 j > αk j = k = 0 (.) 0 j = 0 Ê[ s Eα k,j ] + Ê[ s E k,j 0 < j αk α ] k > j k j=0 k j=0 P r[e k,j α ] 0 j αk k = (.) P r[e k,j α ] (.3) Ê[ s k,j Eα ] (.) Intutvely, (.) means the event Eα k,j happens f ether Eα k,j happens and v j falls n set A (whch happens wth probablty k,j ) or E α happens and v j falls n set B (agan, wth probablty ). The ntuton behnd (.) s very smlar to (.) when k >. When k =, under the event Eα k,j we know that exactly j of v,, v are n set A and so s = j. k Computng Ê[ s Eα] n and P r[eα] n usng the above recurrence relaton and dynamc programmng takes O(n ) tme and O(n) memory. Fnally, n order to complete our lower boundng method we need to compute P r[e α (n, ) ]. ext we show how we can fnd a lower bound for P r[e α (n, ) ]. The proof of the followng lemma s omtted due to lack of space. Lemma.9. For any α [0.5, ] and any n, n such that n < n, the followng two always hold: P r[e (n, ) α n whch : + ] ( Cαn ) C α C α = ( α )α ( α) n k=n+ ( C α k ) (.5) (.6) (.5) s based on a varant of Chernoff bound and gves a very good lower bound when n and n are suffcently large. To get the desred lower bound for RSOP we set the parameters as the followng. In usng Lemma.6 we set m = 00, α = 0.5, α m =.0 and dstrbuted the α,, α m evenly on [0.5,.0] (that s α α = 0.5 ). We then used m Lemma.7 to compute Ê[ s E α ] for each together wth Lemma.8 by settng n = 5000 and also used Lemma.9 to compute P r[e α (n, ) ] by settng n = The results of our computaton for varous choces of s lsted n Table. otce that for > 0 we get a lower bound better than 0.5 and thus a compettve rato better than.. When there are many bds above the optmal unform prce In ths subsecton we show the followng: We show how to compute a lower bound on E[ s z] that holds for all values of >. We compute the above lower bound for = 5000 to get a lower bound of, thus showng that for all 3.5 >, E[RSOP] E[ s z] > 3.5. In the prevous subsecton, we showed how to compute a lower bound for E[ s z] for any fxed value of and we used that to compute the E[ s z] for all values of up to. 
The dea s that when s large (.e., > ), the two random varables s and z are almost ndependent and so the expected value of ther product s very close to the product of ther expected values. Also for a large the value of s s very close to so E[ s z] would be roughly E[z]. The proof of the followng lemma s omtted due to lack of space. Lemma.0. For any α [0, ] the followng always holds: E[ s z] α(e[z ] P r[e [,] α ]) (.7) Intutvely, when s large, n (.7) the P r[e α [,] ] s very close to 0 even when α = ɛ t roughly gves a lower bound of about E[z ]. ext we show how to compute an upper bound on P r[e α [,] ] to support our clam. The proof of the followng lemma s omtted due to lack of space.

6 Lemma.. For any α [0, 0.5], the followng always holds: n whch : P r[e [,] α ] C α (.8) C α = ( ) α α ( α ), α = α (.9) The only task that remans s to compute a good lower bound on E[z ]. Theorem.. E[z] E[z ] 0.6. Intutvely, z s a measure of the least rato of the number of bds n B to the number of bds n A among any prefx of the bds. A larger z ndcates a more balanced partton. Ths s an mportant statstc for any random samplng method n general (note that z only depends on how we partton the bds and not the value of the bds). Proof. We can apply the Lemma.6 by pluggng x = to compute E[z ] = E[xz ] to get the followng: m E[z ] Ê[ Eα α ]) α = (.0) m E[z ] (P r[e α ] P r[e α ]) α α (.) = To get (.) from (.0) we have used the defnton of of Ê[] from (.7). Also, we have that P r[e α] P r[eα]p n r[e α (n, ) ] by the FKG nequalty []. We can apply the FKG nequalty because the two events Eα n and E α (n, ) are postvely correlated on the dstrbutve lattce formed by partally orderng the nstances of the parttonng by a subset relaton on set A therefore ther probablty of ther ntersecton s greater than or equal to the product of ther probabltes. Agan, f we substtute each P r[e α ] wth ts lower bound the nequalty stll holds because of the followng. The coeffcent of each P r[e α ] term after rearrangng the sum on the rght hand sde of (.) s postve except for P r[e α0 ] whch s tself 0 because α 0 = 0. By tunng the parameters as we wll explan at the end of ths secton we get a lower bound of E[z] E[z ] 0.6. It s worth mentonng that by usng a smlar method, we computed an upper bound of E[z] 0.63 whch ndcates that our analyss of E[z] s very tght. That completes our method for computng a lower bound on E[ s z] whch s ndependent of for suffcently large. To compute E[z ] we used (.) whch we derved from Lemma.6 by settng x =, m = 00, α = 0.5, α m =.0 and dstrbutng the α,, α m evenly on [0.5,.0] (that s α α = 0.5 ). Together wth that we also used m Lemma.9 by settng x = s, n = and n = and Lemma.8 by settng n = to compute P r[e α ] for each. To get our desred lower bound on E[ s z] when = 5000, we used Lemma.0 to separate the z and s as n (.7). Usng E[z] 0.6 together wth Lemma. and settng α = 0.5 we get that for any > 5000, P r[e α [,] ] and so E[RSOP] 0.8 whch s equvalent to a compettve rato of 3.5 whch s better than. 5. THE EXHAUSTIVE SEARCH LOWER- BOUD In the prevous secton, we showed that for > 0, E[RSOP]. In ths secton, we show the followng: We show how to compute an mproved lower bound on E[RSOP] for any fxed 0. We compute the above lower bound on E[RSOP] for all 0 to get a lower bound of when 6 0 and a lower bound of when 6. The.68 computed values of our lower bound for all values of 0 can be found n Table. In the rest of ths secton we explan an Exhaustve-Search approach for mprovng the lower-bound of RSOP for the cases where s small (.e., 0). The basc lower bound of E[ s z] n Secton does not work well enough n these cases manly because s and z are negatvely correlated and ther correlaton s much stronger when s small. Also because v s always n B and so s s always 0, the expected value of s decreases as decreases such that for = we have s = whch s far from. The dea s to try all possble values for the frst few v but nstead of usng an exact value for each v we use an nterval for each v and we try all the possble combnaton of these ntervals to cover all the possble nput nstances. We then report the lowest E[RSOP] of all the dfferent combnatons as the lower bound. Theorem 5.. 
For any nput nstance I = {v, v, } where there are between 6 to 0 bds above the optmal unform prce (.e. 6 0), the expected proft of RSOP s at least (.e., E[RSOP] ). Also, f there are between to 5 bds above the optmal prce, the expected proft of RSOP s at least. The actual computed lower bound.68 values can be found n Table. Due to the complexty of the proofs and lack of space we only gve an outlne of our method 3. Frst we defne as the ndex of the wnnng prce after n the optmal sngle prce aucton (.e., we are choosng the wnnng prce from the bds whose ndex are greater than ). Agan we don t take as a random varable. Instead we provde a lower bound for RSOP for any fxed and and another lower bound for suffcently large. ote that depends on the set of bds as a whole and does not depend on how the bds are parttoned by RSOP. Formally = max argmax > v. Algorthm 5.. Exhaustve-Search(m,,, r, r ) For some gven m we consder the frst m hghest bds, that s v,, v m and also v. We then restrct each bd v where S = {,, m, } to some nterval [l, h ] as we explan later and fnd a lower-bound for the utlty of RSOP assumng those restrctons. We try all the possble combnaton of these ntervals for the frst m bds and for v so as to cover all possble cases (remember that v = snce we assumed that OP T = ). Then we take the lowest lower bound among all those combnaton and report t as the lower bound of E[RSOP] for that specfc choce of and. We wll also provde a way of computng a lower bound whch s 3 The complete proof s about -3 tmes the length of the proofs of the basc lower bound of Secton.

7 ndependent of the actual when s greater than a certan value. We then take the mnmum of that for all choces of and use t as a lower bound for E[RSOP] for the specfc choce of (remember that we are only nterested n 0 snce for > 0 the basc lower bound of Secton s already better than 0.5). In order to try all the combnaton of ntervals we do the followng. Snce OP T =, each bd v s always n the nterval [0, ]. For some gven parameter r, we dvde ths nterval to r smaller ntervals [ 0, r ],, [, r ]. For r r r r each S, we set [l, u ] to one of the mentoned r ntervals. We wll do the same thng for v except that we dvde t to r dfferent ntervals for some gven r. As a result we can have ether r (m ) r or r (m ) r possble combnatons dependng on whether m or > m. ote that v s always restrcted to be exactly because OP T =. Also note that some of these combnatons mght be partally or even entrely mpossble because they should satsfy the constrant of v v and v > v for all >. So we dscard or refne some combnatons (for example by settng u mn(u, u )). ext we show how we compute the lower bound based on the range restrctons of Algorthm 5.. Algorthm 5.3. Restrcted-RSOP-Lowerbound(m,,, r, r, {(l, u )}) Here we use E[u Az ] as a lower bound for E[RSOP] n whch agan u A s a random varable ndcatng the lower bound on the utlty of set A and z a random varable ndcatng the restrcted least prefx raton of B to A whch s slghtly dfferent from z. In z we are consderng the range restrctons that we explan next. To compute the lower-bound, we enumerate all m possble ways of parttonng v,, v m and refer to them wth events D,, D m. Then based on the law of total expectaton we can compute a lower-bound by E[RSOP] E[u Az ] = m = Ê[u Az D ]. Bascally, under each event D, we fx the parttonng of the frst m bds and then apply all the prevous technques that we dscussed n Secton to the tal of the bds that s v m+, v m+, wth some modfcaton whch we explan next. Frst, nstead of usng s as a lower bound for the utlty of set A we use u A = max S s l as a lower bound on the proft of set A. We also modfy the (.), (.), (.3), (.) to condton them on event D. Also we replace the term j k,j P r[e α ] n (.) wth u AP r[eα k,j ]. The most mportant change n the computatons from Secton s that whenever the value of z s condtoned on an event Eα T (as defned n Subsecton.) f α u < max {,,m} s l we can argue that because by defnton of, v v for all >, then the wnnng prce n set A should be among v,, v m (because for all j > m we have αjv j < max {,,m} s l and αjv j s the maxmum utlty one can possbly get n set A by choosng v j as the wnnng prce under event Eα T ). By choosng m =, r = 3, r = 00 and the rest of the parameters as n Secton we get a lower bound of for = over all values of whch s equvalent to a compettve rato of.68 whch s also the upper bound of compettve rato of RSOP over all. Table shows the exhaustve search lower-bounds for 0. In our computatons, we notced that = + was the worst case among all choces of. 6. A UPPER BOUD FOR THE PERFOR- MACE OF RSOP FOR AY In prevous works, t has been shown that E[RSOP] s for some nstances (e.g. [3], [5]). However n all those nstances, =. In ths secton, we show that the lower bound for E[RSOP] cannot be mproved further than 3/8 for any value of. Theorem 6.. For any there exsts an nput nstance I for whch E[RSOP] 3 8. Before provng the theorem we defne the followng. Defnton 6. (Equal Revenue Instance). We refer to the nput nstance wth bdders n whch v = as Equal Revenue wth bdders. 
otce that choosng any of the v as the wnnng prce yelds a proft of. Observaton 6.3. For an equal revenue nput nstance, RSOP always offers the worst prce to the other set. In other words, the optmal prce of set A s the worst prce that we could offer to set B and vce versa. The prevous observaton suggests that an equal revenue nstance mght actually be the worst case nput nstance for RSOP however that s not qute true at least for small values of. Furthermore, analyzng the performance of RSOP on equal revenue nstances for general s not easy. Therefore, we defne a modfed verson of RSOP, call t RSOP whch s very smlar to RSOP and yelds about the same proft. We then analyze the performance of RSOP on equal revenue nstances and use that to upper bound the performance of RSOP. In RSOP, as n RSOP, we partton the bdders nto two sets at random and then offer the best sngle prce of each set to the other set. The only dfference s n the case that one of the sets s empty. In ths case, n RSOP, the offered prce from the empty sde to the other set wll be nstead of 0. Lemma 6.. E[RSOP ] on an equal revenue nstance wth bdders s decreasng functon of. Proof. The proof s by nducton. Assume, j : < j, E[RSOP ] for an equal revenue nstance wth elements s larger than E[RSOP ] for an equal revenue nstance wth j elements. ow, we need to show, j : < j ths property holds as well. It s enough to show that E[RSOP ] for an equal revenue nstance wth bdders s less than E[RSOP ] for an equal revenue nstance wth bdders. Consder the random parttons of the nstance wth bdders. As before, WLOG assume that v B. ow, categorze parttons to two groups:. Parttons n whch v B. These parttons can be bult by consderng all the parttons for bdders and addng v to B n each partton. Call the orgnal parttons for bdders, A and B.. Parttons n whch v A. Agan we can buld all these parttons by consderng the parttons for bdders and addng v to A. Call the orgnal parttons wthout v, A and B. Each of the above cases can happen wth probablty. We compare the expected proft of each case wth E[RSOP ] for equal revenue nstance wth bdders. In fact, we

8 wll show that the expected proft of parttons belongng to case, s exactly the same as E[RSOP ] for equal revenue nstance wth bdders. Also, we show that the expected revenue of cases of parttons belongng to case, s at most equal to E[RSOP ] of the equal revenue nstance wth bdders. There s a one-to-one correspondence between the parttons belongng to case and parttons of the equal revenue nstance wth bdders. We can see that the proft of each partton s exactly the same as the proft of ts correspondng partton wth bdders. Consder the partton A and B and ts correspondng partton A and B. If A (and correspondngly A ), the offered prce to B s the same as the offered prce to B by A and t s always larger than. It means that the proft obtaned from the elements n B that belongs to B s also the same and we don t obtan any proft from v snce t s smaller than the the offered prce. If A = A =, the offered prce to the other set, for the equal revenue case wth bdders, s and the obtaned proft from B s ( ). =. For the case wth bdders, the offered prce to the other set s however we have also bdders n B so the total proft obtaned from B s. whch gves the same proft. We have also a one-to-one correspondence between parttons n case and the parttons of the equal revenue nstance wth bdders. If A, then the obtaned proft from B s at most equal to the obtaned proft from B. There are two possble cases here. Ether the offered prce to B and B are the same, n whch case the obtaned proft from both sets are the same as well. In the other case, addng to A ( to obtan A) has changed the best prce for A. In the latter case, the offered prce by A to B should be. Also note that, n the partton of an equal revenue nstance, the best prce for set A s the worst offered prce for set B, whch means that we are only reducng the proft obtaned from B when we change the selected prce n A to from the selected prce for A. Also f A =, the obtaned proft n the equal revenue nstance wth bdders s. However n the correspondng nstance, contanng v = n A, the offered prce to B s and we have only elements n B n ths case. So the total obtaned proft s n <. So the expected proft of all the parttons belongng to the second category s less than E[RSOP ] for equal n revenue nstances wth bdders. Puttng both cases together, we can conclude that the total expected proft s only decreased when the number of bdders s ncreased. It can be shown that for equal-revenue nstances, E[RSOP] = E[RSOP ]. The proft obtaned by both methods are always the same except for the case that A =. Ths event happens wth probablty and the obtaned proft s. (The obtaned proft n RSOP s and the proft of RSOP s 0 n ths case.) It can be shown that for 6, for the equal revenue nstances, E[RSOP ]. Usng Lemma 6., we can conclude that E[RSOP] /.65 for the equal revenue nstance.65 for any. Fnally, for any gven wnner ndex j, we show how to fnd an nstance for whch we have = j and also E[RSOP] for that nstance s equal to E[RSOP] for the equal revenue nstance wth j bdders. For a gven j, we defne ts correspondng nstance as follows (and refer to t as perturbed equal revenue): Consder the equal revenue nstance wth j bdders. Construct the perturbed equal revenue nstance by changng only v j to + ɛ nstead of. (The value of the j j rest of the bds are smlar to the equal revenue nstance.) It s easy to see that the beneft obtaned by RSOP from the equal revenue nstance wth j bdders s convergng to the beneft obtaned from perturbed equal revenue nstance when ɛ 0 whch completes the proof of the theorem. 7. 
THE ITERESTIG CASE OF H AD In ths secton, we descrbe a combnatoral approach whch shows that E[RSOP] s at least of the optmal proft for all the nstances where bdders have only one of the two possble valuatons, and h. We call an nstance, an equal proft nstance, f selectng ether or h as the unform prce returns the same proft. In the rest of ths secton, for a gven nstance of nput, we denote the number of h bds by h and the number of bds by. Also the proft obtaned from a set S by offerng prce p, s represented by Prof(S, p). We frst show that: Lemma 7.. For an equal proft nstance, E[RSOP] OP T + h. Proof. The proof s based on nducton on h. We frst show that for the base case of h =, we have E[RSOP] h = h + h. Because ths s an equal proft nstance, when h =, t should be that = h. ow consder the parttonng of the bdders nto two groups A and B. WLOG, assume that v B whch means the optmal prce of set B whch s offered to set A s h and Prof(A, h) = 0. On the other hand, snce the valuatons of all bdders n set A are the optmal prce of set A whch s offered to set B s always. To compute Prof(B, ) t s enough to compute E[ B ]. Snce bdders are parttoned unformly at random, we can conclude that E[ B ] = h + h/ whch completes the proof for h =. To prove the nducton step for h, we assume that for all values of h k, E[RSOP] OP T/ + h/. ow consder an equal proft nstance I wth h = k +. We can wrte all the possble ways of parttonng the bds n ths new nstance as the cartesan product of all the possble ways to partton the bds nto two equal proft nstances, one wth h = and the other wth h = k. In other words, call the nstance wth h =, I and the nstance wth h = k, I. Construct all the possble parttons of bdders nto two groups (A and B) for the equal revenue nstance wth h = k +. We can see that any possble partton n I can be constructed by combnng exactly one partton of I and one partton of I (one-to-one mappng). For a gven partton A and B of an nstance I, call the correspondng parttons from I, A and B and the correspondng partton from I, A and B, so A = A A and B = B B. In the rest of ths secton, we use the smple observaton that n any equal proft nstance I, f the optmal prce for set A s, then the optmal prce for B has to be h and vce versa. In the rest of the proof, we use the noton of prce par to present the optmal prces of each sde of a partton. (e.g. prce par (, h) means that the optmal prce for set A s and the optmal prce for set B s h.) We have possble prce par s for a combnaton of two parttons taken from I and I. However, of these cases can be reduced to the other by renamng A and B, so we only consder the frst cases:

9 The prce par of both (A, B ) and (A, B ) are (, h). Call the combnaton of these parttons (A, B). We can see that the prce par for (A, B) would be (, h) as well. So the extracted proft from each sde, s exactly equal to the sum of the profts obtaned from (A, B ) and (A, B ). The prce par of (A, B ) s (, h) but the prce par for (A, B ) s (h, ). Snce, we are consderng an equal proft nstance, we know that prce par for (A, B) should be ether (, h) or (h, ) as well. WLOG assume the prce par of (A, B) s (, h). We can see that, the proft extracted from bdders n I n (A, B) partton s exactly the same as the extracted proft n (A, B ) nstance snce the offered prces to each sde are the same. ow, for the bdders belongng to I, the extracted proft n (A, B) s at least as hgh as the extracted proft n (A, B ) partton. The reason s that, n I the offered prce to B s h however the best prce for B s.( Snce the prce par for (A, B ) was (h, )) So by offerng prce to B, the extracted proft from bdders on the B sde s only ncreased. Also by usng the same argument, offerng prce h to elements n A s only ncreasng the extracted proft from them. So we can conclude that, n ths case, the extracted proft n (A, B), s at least as hgh as the sum of the extracted proft from (A, B ) and (A, B ). We can rewrte E[RSOP] as the sum of the expected proft obtaned from bdders n I and the expected proft obtaned from bdders n I. Snce every partton of bdders n I appears n the same number of parttons of I and by usng the above argument, we can conclude that the expected proft obtaned from bdders n I, s at least as much E[RSOP] for the equal proft nstance I. Usng smlar argument for I, we can see that E[RSOP] for the equal proft nstance I, s at least as much as the sum of the E[RSOP] for equal proft nstances I and I. ow, by usng nducton, we have E I[RSOP] E I [RSOP]+E I [RSOP] h + h + h + h > OP T/ + h/. ext, we show how to use lemma 7. to prove that: Lemma 7.. The compettve rato of RSOP for any nstance wth only two knd of valuatons s at most. In lemma 7., we proved that the compettve rato of RSOP s at most for equal proft nstances. Here, we show that n fact, we can generalze the result to any nstance consstng of and h bds. We face two scenaros here:. Ether n n h (h ) whch means that our nstance s a combnaton of an equal proft nstance and a extra set of bdders wth value.. Or n < n h (h ). That means, we have an nstance whch s a combnaton of an equal proft nstance and some extra (at least ) bdder(s) wth valuaton h and less than h extra bdder(s) wth valuaton. We gve the proof for each scenaro separately. Agan, we denote the orgnal nstance by I, the equal proft part of I, by I and the rest by I. Also, for a partton (A, B) of I, we denote the part of A belongng to I by A and the part belongng to I by A. (Smlarly for B wth B and B.) In scenaro, ether the prce par of (A, B ) s (, h) or t s (h, ). In the frst case, we can conclude that (A, B) s ether (, ) or (, h) whch means that the offered prce from A to B s always. So the obtaned proft from set B s equal to the sum of the profts of B and B n I and I nstances. If the offered prce from B to A s, wth the smlar argument gven n Lemma 7., we can see that the proft obtaned from B s at least as much as the total proft of B and B n I and I nstances. However f the offered prce s h, we get the same proft from the elements that were comng from A and we loose all the proft that was obtaned from A. However the amount of loss can be upper bounded by the number of s n I whch s at most h. 
The concluson s that the obtaned proft from (A, B) for nstance I, s at least as much as the the proft that we could obtan from (A, B ) for nstance I. By usng lemma 7. we know that the obtaned proft by RSOP from (A, B ) s at least h / h + h/. Also the optmal proft that can be obtaned from (A, B) s at most h h+h. That means that we already obtaned / of the optmal proft by RSOP. In scenaro, the best prce for I s h. We call the number of h bds n I by h and the number of h bds belongng to I by h. The optmal proft can be defned by h h. Here we are n one of the followng cases: Ether the prce par of (A, B ) s (, h) and for (A, B ) s (, h) ( whch means that the number of h bds n A s 0). In ths case, the prce par of (A, B) s (, h). Ths means that the beneft that we obtan from bdders n I n (A, B) s the same as the proft we obtaned n (A, B ). However, we are loosng the proft from h bds n B. Or (A, B ) = (, h) and (A, B ) = (h, h). There are two possbltes here: Ether prce par of (A, B) s (h, h) or t s (, h). If the prce par s (h, h), the proft obtaned from A n (A, B) s the same as the obtaned proft n I wth partton (A, B ). However the beneft obtaned from B can only ncrease snce we offer prce h. Also, n ths case, we extract all the proft from h bds n I. On the other hand, f (A, B) = (, h) we agan extract the same proft from the nstance I and also we obtan all the proft from the h bds n A. So n both cases, the proft extracted n I from the bdders belongng to I, s at least as much as the amount extracted n RSOP from those bdders n I nstance. Also we always extract all the proft from bdders wth h value that are belongng to A. Assumng that we are parttonng the bdders always unformly at random, we can conclude that the expected number of h bds belongng to A s h/. So the total proft obtaned by RSOP from I s at least the proft obtaned by RSOP from I plus h h/. In other words the proft that wll be obtaned n ths scenaro s at least h h/+h/+ h/ > h h /. Thus, E[RSOP] OP T/ for all nstances wth only two dfferent bd values. 8. COCLUSIO We have further mproved upon the bounds on the compettveness of RSOP through a mx of probablstc technques and computer-aded analyss. More specfcally, we have proved that the compettve rato of RSOP s: () less than.68, () less than f the number of wnners s

10 at least 6; and () upper-bounded by a quantty that approaches 3.3 as, and (v) has a robust verson as gets large. These ndcate that RSOP does much better than known n the practcally-nterestng case where s large, and that perhaps the only case where the compettve rato of s attaned s the case where n = and v = v. It s an nterestng open problem to pn down the compettve rato as a functon of. We have also shown that even f gets arbtrarly large, one can construct nstances I wth such, for whch the compettve rato s at least.65. Fnally, our work presents a combnatoral approach for the case where the bd values are chosen from {, h}, and shows that the compettve rato of RSOP s at most n ths case. 9. ACKOWLEDGMET We would lke to thank Jason Hartlne for several valuable dscussons. We also thank the anonymous referees for ther helpful comments. 0. REFERECES [] M.-F. Balcan, A. Blum, J. D. Hartlne, and Y. Mansour. Mechansm desgn va machne learnng. In FOCS, pages 605 6, 005. [] S. Balga and R. Vohra. Market research and market desgn. Advances n Theoretcal Economcs, 3(): , 003. [3] U. Fege, A. Flaxman, J. D. Hartlne, and R. D. Klenberg. On the compettve rato of the random samplng aucton. In WIE, pages , 005. [] C. M. Fortun, P. W. Kasteleyn, and J. Gnbre. Correlaton nequaltes on some partally ordered sets. Communcatons n Mathematcal Physcs, :89 03, June 97. [5] A. V. Goldberg and J. D. Hartlne. Compettve auctons for multple dgtal goods. In ESA 0: Proceedngs of the 9th Annual European Symposum on Algorthms, pages 6 7, London, UK, 00. Sprnger-Verlag. [6] A. V. Goldberg, J. D. Hartlne, and A. Wrght. Compettve auctons and dgtal goods. In SODA, pages 735 7, 00. [7] A. V. Goldberg, J. D. Hartlne, and A. Wrght. Compettve auctons and dgtal goods. In SODA 0: Proceedngs of the twelfth annual ACM-SIAM symposum on Dscrete algorthms, pages 735 7, Phladelpha, PA, USA, 00. Socety for Industral and Appled Mathematcs. [8] M. T. Hajaghay, R. D. Klenberg, and D. C. Parkes. Adaptve lmted-supply onlne auctons. In ACM Conference on Electronc Commerce, pages 7 80, 00. [9] J. D. Hartlne and T. Roughgarden. Optmal mechansm desgn and money burnng. CoRR, abs/ , 008. [0] W. Hoeffdng. Probablty nequaltes for sums of bounded random varables. Amercan Statstcal Assocaton Journal, 58:3 30, 963. [] R. Myerson. Optmal aucton desgn. Mathematcs of Operatons Research, 6():58 73, 98. [] I. Segal. Optmal prcng mechansms wth unknown demand. Amercan Economc Revew, 93(3):509 59, June 003. APPEDIX A. RESULTS E[RSOP ] Compettve-Rato Table : The result of usng the basc lower-bound by choosng n = 5000 E[RSOP ] Compettve-Rato Table : The result of usng the exhaustve-search lower-bound by choosng m =, r = 3, r = 00

A On Random Sampling Auctions for Digital Goods 1

A On Random Sampling Auctions for Digital Goods 1 A On Random Samplng Auctons for Dgtal Goods 1 SAEED ALAEI, Unversty of Maryland AZARAKHSH MALEKIAN, Unversty of Maryland ARAVIND SRINIVASAN 2, Unversty of Maryland In the context of auctons for dgtal goods,

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

On Random Sampling Auctions for Digital Goods

On Random Sampling Auctions for Digital Goods On Random Sampling Auctions for Digital Goods Saeed Alaei Azarakhsh Malekian Aravind Srinivasan February 8, 2009 Abstract In the context of auctions for digital goods, an interesting Random Sampling Optimal

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

Lecture 14: Bandits with Budget Constraints

Lecture 14: Bandits with Budget Constraints IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed

More information

Vickrey Auction VCG Combinatorial Auctions. Mechanism Design. Algorithms and Data Structures. Winter 2016
