Fitting a Graph to One-Dimensional Data. September 11, 2018
Fitting a Graph to One-Dimensional Data

Siu-Wing Cheng [1], Otfried Cheong [2], Taegyoung Lee [2]

September 11, 2018 (arXiv v1 [cs.CG], 9 Sep 2018)

Abstract

Given $n$ data points in $\mathbb{R}^d$, an appropriate edge-weighted graph connecting the data points finds application in solving clustering, classification, and regression problems. The graph proposed by Daitch, Kelner and Spelman (ICML 2009) can be computed by quadratic programming and hence in polynomial time. While in practice a more efficient algorithm would be preferable, replacing quadratic programming is challenging even for the special case of points in one dimension. We develop a dynamic programming algorithm for this case that runs in $O(n^2)$ time. Its practical efficiency is also confirmed in our experimental results.

1 Introduction

Many interesting data sets can be interpreted as point sets in $\mathbb{R}^d$, where the dimension $d$ is the number of features of interest of each data point, and the coordinates are the values of each feature. To model the similarity between discrete samples, one can introduce appropriate undirected weighted edges connecting proximal points. Such a graph is useful in applications such as classification, regression, and clustering (see, for instance, [5, 8]). For example, let $w_{ij}$ denote the weight determined for the edge that connects two points $p_i$ and $p_j$; regression can then be performed to predict function values $f_i$ at the points $p_i$ by minimizing $\sum_{i,j} w_{ij}(f_i - f_j)^2$, subject to fixing the subset of known $f_i$'s [1]. As another example, for any given integer $k$, one can obtain a partition of the weighted graph into $k$ clusters based on spectral analysis of the eigenvectors of the Laplacian of the weighted graph [1, 5]. Note that the weighted graph may actually be connected.

To allow efficient data analysis, it is important that the weighted graph is sparse. Different proximity graphs have been suggested for this purpose. The kNN-graph connects each point to its $k$ nearest neighbors. The $\varepsilon$-ball graph connects each point to all other points that are within a distance $\varepsilon$.
In both cases, an edge of length $\ell$ is assigned a weight of $\exp(-\ell^2/2\sigma^2)$, where the parameters $k$, $\varepsilon$ and $\sigma$ need to be specified by the user. It is unclear how to set these parameters in an automatic, efficient way. Several studies have found the kNN-graph and the $\varepsilon$-ball graph to be inferior to other graphs proposed [1, 2, 7].

We consider the graph proposed by Daitch, Kelner, and Spelman [1]. It is provably sparse, and experiments have shown that it offers good performance in classification, clustering and regression. This graph is defined via quadratic optimization as follows: Let $P = \{p_1, p_2, \ldots, p_n\}$ be a set of

[1] Supported by Research Grants Council, Hong Kong, China. Department of Computer Science and Engineering, HKUST, Clear Water Bay, Hong Kong. Email: scheng@cse.ust.hk
[2] Supported by ICT R&D program of MSIP/IITP. School of Computing, KAIST, Daejeon, South Korea. Email: otfried@kaist.airpost.net, taegyoung@kaist.ac.kr
$n$ points in $\mathbb{R}^d$. We assign weights $w_{ij} \ge 0$ to each pair of points $(p_i, p_j)$, such that $w_{ij} = w_{ji}$ and $w_{ii} = 0$. These weights determine for each point $p_i$ a vector $v_i$, as follows:

    $v_i = \sum_{j=1}^{n} w_{ij}(p_j - p_i)$.

Let $v_i$ also denote $\|v_i\|$. The weights are chosen so as to minimize the sum

    $Q = \sum_{i=1}^{n} v_i^2$,

under the constraint that the weights for each point add up to at least one (to prevent the trivial solution of $w_{ij} = 0$ for all $i$ and $j$):

    $\sum_{j=1}^{n} w_{ij} \ge 1$ for $1 \le i \le n$.

The resulting graph contains an edge connecting $p_i$ and $p_j$ if and only if $w_{ij} > 0$. Daitch et al. [1] showed that there is an optimal solution where at most $(d+1)n$ weights are non-zero. Moreover, in two dimensions, optimal weights can be chosen such that the graph is planar.

Clearly, the optimal weights can be computed by quadratic programming. A quadratic programming problem with $m$ variables, $c$ constraints, and $L$ input bits can be solved in $O(m^4 L^2)$ time using the method of Ye and Tse [6]. There is another algorithm by Kapoor and Vaidya [4] that has an asymptotic running time of $O((m+c)^{3.67} L \log L \log(m+c))$. In our case, there are $n(n-1)/2$ variables and $\Theta(n)$ constraints, so the running time is $O(n^{7.34} L \log L \log n)$, which is impractical even for moderately large $n$. Daitch et al. reported that a data set of 4177 points requires a processing time of approximately 13.8 hours. Graphs based on optimizing other convex quality measures have also been considered [3, 7].

Our goal is to design an algorithm to compute the optimal weights in Daitch et al.'s formulation that is significantly faster than quadratic programming. Perhaps surprisingly, this problem is challenging even for points in one dimension, that is, when all points lie on a line. In this case, it is not difficult to show (Lemma 2.1) that there is an optimal solution such that $w_{ij} > 0$ only if $p_i$ and $p_j$ are consecutive. This reduces the number of variables to $n-1$. Even in one dimension, the weights in an optimal solution do not seem to follow any simple pattern, as we illustrate in the following two examples.

Some weights in an optimal solution can be arbitrarily high.
Consider four points $p_1, p_2, p_3, p_4$ in left-to-right order such that $|p_1 p_2| = |p_3 p_4| = 1$ and $|p_2 p_3| = \varepsilon$. By symmetry, $w_{12} = w_{34}$, and so $v_1 = v_4 = w_{12}$. Since $w_{12} + w_{23} \ge 1$ and $w_{23} + w_{34} \ge 1$ are trivially satisfied by the requirement that $w_{12} = w_{34} \ge 1$, we can make $v_2$ zero by setting $w_{23} = w_{12}/\varepsilon$. In the optimal solution, $w_{12} = w_{34} = 1$ and $w_{23} = 1/\varepsilon$. So $w_{23}$ can be arbitrarily large.

Given points $p_1, \ldots, p_n$ in left-to-right order, it seems ideal to make every $v_i$ a zero vector. One can do this for $i \in [2, n-1]$ by setting $w_{i-1,i}/w_{i,i+1} = |p_i p_{i+1}|/|p_{i-1} p_i|$; however, some of the constraints $w_i + w_{i+1} \ge 1$ may then be violated. Even if we are lucky and can set $w_{i-1,i}/w_{i,i+1} = |p_i p_{i+1}|/|p_{i-1} p_i|$ for all $i \in [2, n-1]$ without violating $w_i + w_{i+1} \ge 1$, the solution may not be optimal, as we show below. Requiring $v_i = 0$ for $i \in [2, n-1]$ gives $v_1 = v_n = w_{12}|p_1 p_2|$. In
general, we have $|p_1 p_2| \ne |p_{n-1} p_n|$, so we can assume that $|p_1 p_2| > |p_{n-1} p_n|$. Then, $w_{n-1,n} = w_{12}\,|p_1 p_2|/|p_{n-1} p_n| > 1$, as $w_{12} \ge 1$. Since $w_{n-1,n} > 1$, one can decrease $w_{n-1,n}$ by a small quantity $\delta$ while keeping its value greater than 1. Both constraints $w_{n-1,n} \ge 1$ and $w_{n-2,n-1} + w_{n-1,n} \ge 1$ are still satisfied. Observe that $v_n$ drops to $w_{12}|p_1 p_2| - \delta|p_{n-1} p_n|$ and $v_{n-1}$ increases to $\delta|p_{n-1} p_n|$. Hence, $v_{n-1}^2 + v_n^2$ decreases by $2\delta w_{12}|p_1 p_2||p_{n-1} p_n| - 2\delta^2|p_{n-1} p_n|^2$, and so does $Q$. The original setting of the weights is thus not optimal. If $w_{n-3,n-2} + w_{n-2,n-1} > 1$, it brings a further benefit to decrease $w_{n-2,n-1}$ slightly, so that $v_{n-1}$ decreases slightly from $\delta|p_{n-1} p_n|$ and $v_{n-2}$ increases slightly from zero. Intuitively, instead of concentrating $w_{12}|p_1 p_2|$ at $v_n$, it is better to distribute it over multiple points in order to decrease the sum of squares. But it does not seem easy to determine the best weights.

Although there are only $n-1$ variables in one dimension, quadratic programming still yields a high running time of $O(n^{3.67} L \log L \log n)$. We present a dynamic programming algorithm that computes the optimal weights in $O(n^2)$ time in the one-dimensional case. The intermediate solution has an interesting structure: the derivative of its quality measure depends on the derivative of a subproblem's quality measure as well as on the inverse of this derivative function. This makes it unclear how to bound the size of an explicit representation of the intermediate solution. Instead, we develop an implicit representation that facilitates the dynamic programming algorithm. We implemented our algorithm with both the explicit and the implicit representation of intermediate solutions. Both versions run substantially faster than the quadratic solver in cvxopt. For instance, for 3200 points, cvxopt needs over 20 minutes to solve the quadratic program, while our algorithm takes less than half a second to compute the optimal weights.

2 A single-parameter quality measure function

We will assume that the points are given in sorted order, so that $p_1 < p_2 < p_3 < \cdots < p_n$.
We first argue that the only weights that need to be non-zero are the weights between consecutive points, that is, weights of the form $w_{i,i+1}$.

Lemma 2.1. For $d = 1$, there is an optimal solution where only weights between consecutive points are non-zero.

Proof. Assume an optimal solution where $w_{ik} > 0$ and $i < j < k$. We construct a new optimal solution as follows: Let $a = p_j - p_i$, $b = p_k - p_j$, and $w = w_{ik}$. In the new solution, we set $w_{ik} = 0$, increase $w_{ij}$ by $\frac{a+b}{a} w$, and increase $w_{jk}$ by $\frac{a+b}{b} w$. Note that since $a+b > a$ and $a+b > b$, the sum of weights at each vertex increases, and so the weight vector remains feasible. The value $v_j$ changes by $-a \cdot \frac{a+b}{a} w + b \cdot \frac{a+b}{b} w = 0$, the value $v_i$ changes by $-(a+b)\,w + a \cdot \frac{a+b}{a} w = 0$, and the value $v_k$ changes by $+(a+b)\,w - b \cdot \frac{a+b}{b} w = 0$. It follows that the new solution has the same quality as the original one, and is therefore also optimal.

To simplify the notation, we set $d_i = p_{i+1} - p_i$ for $1 \le i < n$, rename the weights as $w_i := w_{i,i+1}$, again for $1 \le i < n$, and observe that

    $v_1 = w_1 d_1$,
    $v_i = w_i d_i - w_{i-1} d_{i-1}$ for $2 \le i \le n-1$,
    $v_n = -w_{n-1} d_{n-1}$.
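With this notation, the quality measure is easy to evaluate directly. The following is a minimal sketch (our illustration, not the authors' code), computing the $v_i$ and $Q$ from sorted points and the consecutive-pair weights:

```python
def quality_1d(points, w):
    """Q = sum of v_i^2 for sorted 1-D points, where w[i] is the weight
    of the consecutive pair (p_i, p_{i+1}) (0-based indices)."""
    n = len(points)
    d = [points[i + 1] - points[i] for i in range(n - 1)]  # gaps d_i
    v = [w[0] * d[0]]                                      # v_1 = w_1 d_1
    for i in range(1, n - 1):
        v.append(w[i] * d[i] - w[i - 1] * d[i - 1])        # middle v_i
    v.append(-w[n - 2] * d[n - 2])                         # v_n = -w_{n-1} d_{n-1}
    return sum(x * x for x in v)
```

For the four-point example from the introduction with $\varepsilon = 1/2$ and weights $w_{12} = w_{34} = 1$, $w_{23} = 1/\varepsilon = 2$, this returns $Q = 2$.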
For $i \in [2, n-1]$, we introduce the quantity

    $Q_i = d_i^2 w_i^2 + \sum_{j=1}^{i} v_j^2 = d_i^2 w_i^2 + d_1^2 w_1^2 + \sum_{j=2}^{i} (d_j w_j - d_{j-1} w_{j-1})^2$,

and note that $Q_{n-1} = \sum_{i=1}^{n} v_i^2 = Q$. Thus, our goal is to choose the $n-1$ non-negative weights $w_1, \ldots, w_{n-1}$ such that $Q_{n-1}$ is minimized, under the constraints

    $w_1 \ge 1$,
    $w_j + w_{j+1} \ge 1$ for $2 \le j \le n-2$,
    $w_{n-1} \ge 1$.

The quantity $Q_i$ depends on the weights $w_1, w_2, \ldots, w_i$. We concentrate on the last one of these weights, and consider the function

    $w_i \mapsto Q_i(w_i) = \min_{w_1, \ldots, w_{i-1}} Q_i$,

where the minimum is taken over all choices of $w_1, \ldots, w_{i-1}$ that respect the constraints $w_1 \ge 1$ and $w_j + w_{j+1} \ge 1$ for $2 \le j \le i-1$. The function $Q_i(w_i)$ is defined on $[0, \infty)$. We denote the derivative of the function $w_i \mapsto Q_i(w_i)$ by $R_i$. We will see shortly that $R_i$ is a continuous, piecewise linear function. Since $R_i$ is not differentiable everywhere, we define $S_i(x)$ to be the right derivative of $R_i$, that is, $S_i(x) = \lim_{y \to x^+} R_i'(y)$. The following theorem discusses $R_i$ and $S_i$. The shorthand $\xi_i := 2 d_i d_{i+1}$, for $1 \le i < n-1$, will be convenient in its proof and in the rest of the paper.

Theorem 2.1. The function $R_i$ is strictly increasing, continuous, and piecewise linear on the range $[0, \infty)$. We have $R_i(0) < 0$, $S_i(x) \ge (2 + 2/i)\, d_i^2$ for all $x \ge 0$, and $R_i(x) = (2 + 2/i)\, d_i^2\, x$ for sufficiently large $x > 0$.

Proof. We prove all claims by induction over $i$. The base case is $i = 2$. Observe that

    $Q_2 = v_1^2 + v_2^2 + d_2^2 w_2^2 = 2 d_1^2 w_1^2 - 2 d_1 d_2 w_1 w_2 + 2 d_2^2 w_2^2$.

For fixed $w_2$, the derivative with respect to $w_1$ is

    $\partial Q_2 / \partial w_1 = 4 d_1^2 w_1 - 2 d_1 d_2 w_2$,   (1)

which implies that $Q_2$ is minimized for $w_1 = d_2 w_2 / (2 d_1)$. This choice is feasible (with respect to the constraint $w_1 \ge 1$) when $w_2 \ge 2 d_1 / d_2$. If $w_2 < 2 d_1 / d_2$, then $\partial Q_2 / \partial w_1$ is positive for all values of $w_1 \ge 1$, so the minimum occurs at $w_1 = 1$. It follows that

    $Q_2(w_2) = \tfrac{3}{2} d_2^2 w_2^2$ for $w_2 \ge 2 d_1 / d_2$, and $Q_2(w_2) = 2 d_2^2 w_2^2 - \xi_1 w_2 + 2 d_1^2$ otherwise,
and so we have

    $R_2(w_2) = 3 d_2^2 w_2$ for $w_2 \ge 2 d_1 / d_2$, and $R_2(w_2) = 4 d_2^2 w_2 - \xi_1$ otherwise.   (2)

In other words, $R_2$ is piecewise linear and has a single breakpoint at $2 d_1 / d_2$. The function $R_2$ is continuous because $3 d_2^2 w_2 = 4 d_2^2 w_2 - \xi_1$ when $w_2 = 2 d_1 / d_2$. We have $R_2(0) = -\xi_1 < 0$, $S_2(x) \ge 3 d_2^2$ for all $x \ge 0$, and $R_2(x) = 3 d_2^2 x$ for $x \ge 2 d_1 / d_2$. The fact that $S_2(x) \ge 3 d_2^2 > 0$ makes $R_2$ strictly increasing.

Consider now $i \ge 2$, assume that $R_i$ and $S_i$ satisfy the induction hypothesis, and consider $Q_{i+1}$. By definition, we have

    $Q_{i+1} = Q_i - \xi_i w_i w_{i+1} + 2 d_{i+1}^2 w_{i+1}^2$.   (3)

For a given value of $w_{i+1} \ge 0$, we need to find the value of $w_i$ that will minimize $Q_{i+1}$. The derivative is

    $\partial Q_{i+1} / \partial w_i = R_i(w_i) - \xi_i w_{i+1}$.

The minimum thus occurs when $R_i(w_i) = \xi_i w_{i+1}$. Since $R_i$ is a strictly increasing continuous function with $R_i(0) < 0$ and $\lim_{x \to \infty} R_i(x) = \infty$, for any given $w_{i+1} \ge 0$ there exists a unique value $w_i = R_i^{-1}(\xi_i w_{i+1})$. However, we also need to satisfy the constraint $w_i + w_{i+1} \ge 1$. We first show that $R_{i+1}$ is continuous and piecewise linear, and that $R_{i+1}(0) < 0$. We distinguish two cases, based on the value of $w_i^* := R_i^{-1}(0)$.

Case 1: $w_i^* \ge 1$. This means that $R_i^{-1}(\xi_i w_{i+1}) \ge 1$ for any $w_{i+1} \ge 0$, and so the constraint $w_i + w_{i+1} \ge 1$ is satisfied for the optimal choice of $w_i = R_i^{-1}(\xi_i w_{i+1})$. It follows that

    $Q_{i+1}(w_{i+1}) = Q_i\big(R_i^{-1}(\xi_i w_{i+1})\big) - \xi_i w_{i+1} R_i^{-1}(\xi_i w_{i+1}) + 2 d_{i+1}^2 w_{i+1}^2$.

The derivative $R_{i+1}$ is therefore

    $R_{i+1}(w_{i+1}) = R_i\big(R_i^{-1}(\xi_i w_{i+1})\big)\,\frac{\xi_i}{R_i'(R_i^{-1}(\xi_i w_{i+1}))} - \xi_i R_i^{-1}(\xi_i w_{i+1}) - \xi_i w_{i+1}\,\frac{\xi_i}{R_i'(R_i^{-1}(\xi_i w_{i+1}))} + 4 d_{i+1}^2 w_{i+1} = 4 d_{i+1}^2 w_{i+1} - \xi_i R_i^{-1}(\xi_i w_{i+1})$,   (4)

where the first and third terms cancel because $R_i(R_i^{-1}(\xi_i w_{i+1})) = \xi_i w_{i+1}$. Since $R_i$ is continuous and piecewise linear, so is $R_i^{-1}$, and therefore $R_{i+1}$ is continuous and piecewise linear. We have $R_{i+1}(0) = -\xi_i w_i^* < 0$.

Case 2: $w_i^* < 1$. Consider the function $x \mapsto f(x) = x + R_i(x)/\xi_i$. Since $R_i$ is continuous and strictly increasing by the inductive assumption, so is the function $f$. Observe that $f(w_i^*) = w_i^* < 1$. As $w_i^* < 1$, we have $R_i(1) > R_i(w_i^*) = 0$, which implies that $f(1) > 1$. Thus, there exists a unique value $w_i' \in (w_i^*, 1)$ such that $f(w_i') = w_i' + R_i(w_i')/\xi_i = 1$.
For $w_{i+1} \ge 1 - w_i' = R_i(w_i')/\xi_i$, we have $R_i^{-1}(\xi_i w_{i+1}) \ge w_i'$, and so $R_i^{-1}(\xi_i w_{i+1}) + w_{i+1} \ge 1$. This implies that the constraint $w_i + w_{i+1} \ge 1$ is satisfied when $Q_{i+1}(w_{i+1})$ is minimized for the optimal choice of $w_i = R_i^{-1}(\xi_i w_{i+1})$. So $R_{i+1}$ is as in (4) in Case 1.
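To make the threshold $w_i'$ concrete, here is a small numeric illustration (ours, not from the paper): for gaps $d_1 = 1$, $d_2 = 2$ we have $\xi_1 = 4$ and $w_2^* = R_2^{-1}(0) = 1/4 < 1$, and with a next gap $d_3 = 1$ (so $\xi_2 = 4$) the value $w_2'$ solving $x + R_2(x)/\xi_2 = 1$ can be found by bisection, since the left-hand side is strictly increasing:

```python
def R2(x):
    # R_2 for the example gaps d1 = 1, d2 = 2: xi_1 = 4, breakpoint at 2*d1/d2 = 1
    return 12.0 * x if x >= 1.0 else 16.0 * x - 4.0

def solve_threshold(xi, lo=0.0, hi=10.0, iters=100):
    # bisection for the x with x + R2(x)/xi = 1; valid because f(x) = x + R2(x)/xi
    # is strictly increasing (R2 is strictly increasing)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mid + R2(mid) / xi < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Here `solve_threshold(4.0)` returns $w_2' = 0.4$, which indeed lies in $(w_2^*, 1) = (0.25, 1)$.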
When $w_{i+1} < 1 - w_i'$, the constraint $w_i + w_{i+1} \ge 1$ implies that $w_i \ge 1 - w_{i+1} > w_i'$. For any $w_i > w_i'$ we have

    $\partial Q_{i+1} / \partial w_i = R_i(w_i) - \xi_i w_{i+1} > R_i(w_i') - \xi_i (1 - w_i') = 0$.

So $Q_{i+1}$ is increasing in $w_i$, and the minimal value is obtained for the smallest feasible choice of $w_i$, that is, for $w_i = 1 - w_{i+1}$. It follows that

    $Q_{i+1}(w_{i+1}) = Q_i(1 - w_{i+1}) - \xi_i w_{i+1}(1 - w_{i+1}) + 2 d_{i+1}^2 w_{i+1}^2 = Q_i(1 - w_{i+1}) - \xi_i w_{i+1} + (\xi_i + 2 d_{i+1}^2)\, w_{i+1}^2$,

and so the derivative $R_{i+1}$ is

    $R_{i+1}(w_{i+1}) = -R_i(1 - w_{i+1}) + (2\xi_i + 4 d_{i+1}^2)\, w_{i+1} - \xi_i$.   (5)

Combining (4) and (5), we have

    $R_{i+1}(w_{i+1}) = -R_i(1 - w_{i+1}) + (2\xi_i + 4 d_{i+1}^2)\, w_{i+1} - \xi_i$ for $w_{i+1} < 1 - w_i'$, and
    $R_{i+1}(w_{i+1}) = 4 d_{i+1}^2 w_{i+1} - \xi_i R_i^{-1}(\xi_i w_{i+1})$ for $w_{i+1} \ge 1 - w_i'$.   (6)

For $w_{i+1} = 1 - w_i'$, we have $R_i(1 - w_{i+1}) = R_i(w_i') = \xi_i(1 - w_i')$ and $R_i^{-1}(\xi_i w_{i+1}) = R_i^{-1}(\xi_i(1 - w_i')) = w_i'$, and so both expressions have the same value:

    $-R_i(1 - w_{i+1}) + (2\xi_i + 4 d_{i+1}^2)\, w_{i+1} - \xi_i = \xi_i w_i' - \xi_i + 2\xi_i - 2\xi_i w_i' + 4 d_{i+1}^2 (1 - w_i') - \xi_i = 4 d_{i+1}^2 (1 - w_i') - \xi_i w_i' = 4 d_{i+1}^2 (1 - w_i') - \xi_i R_i^{-1}(\xi_i w_{i+1})$.

Since $R_i$ is continuous and piecewise linear, this implies that $R_{i+1}$ is continuous and piecewise linear. We have $R_{i+1}(0) = -R_i(1) - \xi_i$. Since $w_i^* < 1$, we have $R_i(1) > R_i(w_i^*) = 0$, and so $R_{i+1}(0) < 0$.

Next, we show that $S_{i+1}(x) \ge (2 + 2/(i+1))\, d_{i+1}^2$ for all $x \ge 0$, which implies that $R_{i+1}$ is strictly increasing. If $w_i^* < 1$ and $x < 1 - w_i'$, then by (6),

    $S_{i+1}(x) = S_i(1 - x) + 2\xi_i + 4 d_{i+1}^2 > 4 d_{i+1}^2 \ge (2 + 2/(i+1))\, d_{i+1}^2$.

If $w_i^* \ge 1$ or $x > 1 - w_i'$, we have by (4) and (6) that $R_{i+1}(x) = 4 d_{i+1}^2 x - \xi_i R_i^{-1}(\xi_i x)$. By the inductive assumption that $S_i(x) \ge (2 + 2/i)\, d_i^2$ for all $x \ge 0$, we get $\frac{d}{dx} R_i^{-1}(x) \le 1 / ((2 + 2/i)\, d_i^2)$. It follows that

    $S_{i+1}(x) \ge 4 d_{i+1}^2 - \frac{(2 d_i d_{i+1})^2}{(2 + 2/i)\, d_i^2} = \Big(4 - \frac{4}{2 + 2/i}\Big)\, d_{i+1}^2 = \Big(4 - \frac{2i}{i+1}\Big)\, d_{i+1}^2 = \Big(2 + \frac{2}{i+1}\Big)\, d_{i+1}^2$.

This establishes the lower bound on $S_{i+1}(x)$. Finally, by the inductive assumption, when $x$ is large enough, we have $R_i^{-1}(x) = x / ((2 + 2/i)\, d_i^2)$, and so

    $R_{i+1}(x) = 4 d_{i+1}^2 x - \frac{(2 d_i d_{i+1})^2}{(2 + 2/i)\, d_i^2}\, x = \Big(2 + \frac{2}{i+1}\Big)\, d_{i+1}^2\, x$,

completing the inductive step and therefore the proof.
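Both the base case and one inductive step are easy to spot-check numerically. The sketch below is our illustration (not the paper's code), with concrete gaps of our choosing: `Q2_min` evaluates $Q_2(w_2)$ by minimizing over feasible $w_1$ directly, so its numeric derivative should match formula (2); `R3` applies one Case 1 step (4) with $d_1 = 4$, $d_2 = 1$, $d_3 = 2$ (then $w_2^* = d_1/(2 d_2) = 2 \ge 1$, so Case 1 applies), and for large $x$ its slope should be $(2 + 2/3)\, d_3^2$ as Theorem 2.1 states.

```python
def Q2_min(w2, d1, d2):
    # Q_2(w_2) = min over w1 >= 1 of 2 d1^2 w1^2 - 2 d1 d2 w1 w2 + 2 d2^2 w2^2
    w1 = max(d2 * w2 / (2 * d1), 1.0)        # unconstrained minimizer, clamped
    return 2*d1*d1*w1*w1 - 2*d1*d2*w1*w2 + 2*d2*d2*w2*w2

def R2(w2, d1, d2):
    # formula (2): breakpoint at 2 d1/d2, with xi_1 = 2 d1 d2
    return 3*d2*d2*w2 if w2 >= 2*d1/d2 else 4*d2*d2*w2 - 2*d1*d2

def R2_inv(y, d1, d2):
    # inverse of R2; the value of R2 at its breakpoint is 6 d1 d2
    return y / (3*d2*d2) if y >= 6*d1*d2 else (y + 2*d1*d2) / (4*d2*d2)

def R3(x, d1, d2, d3):
    # one Case 1 step of (4): R_3(x) = 4 d3^2 x - xi_2 R_2^{-1}(xi_2 x)
    xi2 = 2*d2*d3
    return 4*d3*d3*x - xi2 * R2_inv(xi2 * x, d1, d2)
```

With $d_1 = 4$, $d_2 = 1$, $d_3 = 2$, the slope of `R3` at large $x$ works out to $(2 + 2/3) \cdot 4 = 32/3$, matching the theorem.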
3 The algorithm

Our algorithm progressively constructs a representation of the functions $R_2, R_3, \ldots, R_{n-1}$. The function representation supports the following three operations:

Op 1: given $x$, return $R_i(x)$;
Op 2: given $y$, return $R_i^{-1}(y)$;
Op 3: given $\xi$, return $x'$ such that $x' + R_i(x')/\xi = 1$.

The proof of Theorem 2.1 gives the relation between $R_{i+1}$ and $R_i$. This will allow us to construct the functions one by one; we discuss the detailed implementation in Sections 3.1 and 3.2 below.

Once all functions $R_2, \ldots, R_{n-1}$ are constructed, the optimal weights $w_1, w_2, \ldots, w_{n-1}$ are computed from the $R_i$'s as follows. Recall that $Q = Q_{n-1}$, so $w_{n-1}$ is the value minimizing $Q_{n-1}(w_{n-1})$ under the constraint $w_{n-1} \ge 1$. If $R_{n-1}^{-1}(0) \ge 1$, then $R_{n-1}^{-1}(0)$ is the optimal value for $w_{n-1}$; otherwise, we set $w_{n-1}$ to 1.

To obtain $w_{n-2}$, recall from (3) that $Q = Q_{n-1} = Q_{n-2}(w_{n-2}) - \xi_{n-2} w_{n-2} w_{n-1} + 2 d_{n-1}^2 w_{n-1}^2$. Since we have already determined the correct value of $w_{n-1}$, it remains to choose $w_{n-2}$ so that $Q_{n-1}$ is minimized. Since $\partial Q_{n-1} / \partial w_{n-2} = R_{n-2}(w_{n-2}) - \xi_{n-2} w_{n-1}$, $Q_{n-1}$ is minimized when $R_{n-2}(w_{n-2}) = \xi_{n-2} w_{n-1}$, and so $w_{n-2} = R_{n-2}^{-1}(\xi_{n-2} w_{n-1})$.

In general, for $i \in [2, n-2]$, we can obtain $w_i$ from $w_{i+1}$ by observing that

    $Q_{n-1} = Q_i(w_i) - \xi_i w_i w_{i+1} + g(w_{i+1}, \ldots, w_{n-1})$,

where $g$ is a function that depends only on $w_{i+1}, \ldots, w_{n-1}$. Taking the derivative again, we have $\partial Q_{n-1} / \partial w_i = R_i(w_i) - \xi_i w_{i+1}$, so choosing $w_i = R_i^{-1}(\xi_i w_{i+1})$ minimizes $Q_{n-1}$. To also satisfy the constraint $w_i + w_{i+1} \ge 1$, we need to choose $w_i = \max\{R_i^{-1}(\xi_i w_{i+1}),\, 1 - w_{i+1}\}$ for $i \in [2, n-2]$. Finally, from the discussion that immediately follows (1), we set $w_1 = \max\{d_2 w_2 / (2 d_1),\, 1\}$. To summarize, we have

    $w_{n-1} = \max\{R_{n-1}^{-1}(0),\, 1\}$,
    $w_i = \max\{R_i^{-1}(\xi_i w_{i+1}),\, 1 - w_{i+1}\}$ for $i \in [2, n-2]$,
    $w_1 = \max\{d_2 w_2 / (2 d_1),\, 1\}$.

It follows that we can obtain the optimal weights using a single Op 2 on each $R_i$.

3.1 Explicit representation of piecewise linear functions

Since $R_i$ is a piecewise linear function, a natural representation is a sequence of linear functions, together with the sequence of breakpoints. Since $R_i$ is strictly increasing, all three operations can
then be implemented to run in time $O(\log k)$ using binary search, where $k$ is the number of function pieces.

The function $R_2$ consists of exactly two pieces. We construct it directly from $d_1$, $d_2$, and $\xi_1$ using (2). To construct $R_{i+1}$ from $R_i$, we first compute $w_i^* = R_i^{-1}(0)$ using Op 2 on $R_i$. If $w_i^* \ge 1$, then by (4) each piece of $R_i$, starting at the $x$-coordinate $w_i^*$, gives rise to a linear piece of $R_{i+1}$, so the number of pieces of $R_{i+1}$ is at most that of $R_i$. If $w_i^* < 1$, then we compute $w_i'$ using Op 3 on $R_i$. The new function $R_{i+1}$ has a breakpoint at $1 - w_i'$ by (6). Its pieces for $x \ge 1 - w_i'$ are computed from the pieces of $R_i$ starting at the $x$-coordinate $w_i'$. Its pieces for $0 \le x < 1 - w_i'$ are computed from the pieces of $R_i$ between the $x$-coordinates 1 and $w_i'$. (Increasing $w_{i+1}$ now corresponds to a decreasing $w_i$.) This implies that every piece of $R_i$ that covers $x$-coordinates in the range $[w_i', 1]$ will give rise to two pieces of $R_{i+1}$, so the number of pieces of $R_{i+1}$ may be twice the number of pieces of $R_i$. Therefore, although this method works, it is unclear whether the number of linear pieces of $R_i$ is bounded by a polynomial in $i$.

3.2 A quadratic time implementation

Since we have no polynomial bound on the number of linear pieces of the function $R_{n-1}$, we turn to an implicit representation of $R_i$. The representation is based on the fact that there is a linear relationship between points on the graphs of the functions $R_i$ and $R_{i+1}$. Concretely, let $y_i = R_i(x_i)$ and $y_{i+1} = R_{i+1}(x_{i+1})$. Recall the following relation from (4) for the case of $w_i^* \ge 1$:

    $R_{i+1}(w_{i+1}) = 4 d_{i+1}^2 w_{i+1} - \xi_i R_i^{-1}(\xi_i w_{i+1})$.

We can express this relation as a system of two equations:

    $y_{i+1} = 4 d_{i+1}^2 x_{i+1} - \xi_i x_i$,
    $y_i = \xi_i x_{i+1}$.

This can be rewritten as

    $x_{i+1} = y_i / \xi_i$,
    $y_{i+1} = 4 d_{i+1}^2\, y_i / \xi_i - \xi_i x_i$,

or in matrix notation

    $(x_{i+1},\, y_{i+1},\, 1)^T = M_{i+1}\, (x_i,\, y_i,\, 1)^T$, where $M_{i+1} = \begin{pmatrix} 0 & 1/\xi_i & 0 \\ -\xi_i & 4 d_{i+1}^2/\xi_i & 0 \\ 0 & 0 & 1 \end{pmatrix}$.   (7)

On the other hand, if $w_i^* < 1$, then $R_{i+1}$ has a breakpoint at $1 - w_i'$. The value $w_i'$ can be obtained by applying Op 3 to $R_i$. We compute the coordinates of this breakpoint: $(1 - w_i',\, R_{i+1}(1 - w_i'))$. Note that $R_{i+1}(1 - w_i') = 4 d_{i+1}^2 (1 - w_i') - \xi_i R_i^{-1}(\xi_i (1 - w_i'))$, which can be computed by applying Op 2 to $R_i$.
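As a concrete sketch of the explicit representation of Section 3.1 (ours, not the authors' implementation), each $R_i$ can be stored as piece start coordinates $x_k$ with slopes $a_k$ and intercepts $b_k$, meaning $R(x) = a_k x + b_k$ for $x$ between $x_k$ and the next breakpoint; Ops 1 to 3 then reduce to searches over the pieces:

```python
import bisect

class PiecewiseLinear:
    """Strictly increasing piecewise linear function; starts[0] = 0.0 and
    inverse() assumes y >= value at starts[0]."""
    def __init__(self, starts, slopes, intercepts):
        self.starts, self.slopes, self.intercepts = starts, slopes, intercepts

    def __call__(self, x):            # Op 1 in O(log k)
        k = bisect.bisect_right(self.starts, x) - 1
        return self.slopes[k] * x + self.intercepts[k]

    def inverse(self, y):             # Op 2: well-defined by strict monotonicity
        vals = [a * s + b for s, a, b in
                zip(self.starts, self.slopes, self.intercepts)]
        k = bisect.bisect_right(vals, y) - 1
        return (y - self.intercepts[k]) / self.slopes[k]

    def op3(self, xi):                # Op 3: solve x + R(x)/xi = 1 piece by piece
        for k in range(len(self.starts)):
            a, b = self.slopes[k], self.intercepts[k]
            x = (xi - b) / (xi + a)   # solution of x + (a x + b)/xi = 1
            nxt = self.starts[k + 1] if k + 1 < len(self.starts) else float('inf')
            if self.starts[k] <= x < nxt:
                return x
        raise ValueError("no solution in domain")
```

For example, $R_2$ with gaps $d_1 = 2$, $d_2 = 1$ is `PiecewiseLinear([0.0, 4.0], [4.0, 3.0], [-4.0, 0.0])`, i.e. $4x - 4$ before the breakpoint $2 d_1/d_2 = 4$ and $3x$ after it.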
For x +1 > 1 w, the relatonshp between (x,y ) and (x +1,y +1 ) s gven by (7). For 0 x +1 < 1 w, recall from (5) that R +1 (w +1 ) = R (1 w +1 )+(2ξ )w +1 ξ. 8
We again rewrite this as

    $y_{i+1} = -y_i + (2\xi_i + 4 d_{i+1}^2)\, x_{i+1} - \xi_i$,
    $x_i = 1 - x_{i+1}$,

which gives

    $x_{i+1} = 1 - x_i$,
    $y_{i+1} = -y_i + (2\xi_i + 4 d_{i+1}^2)(1 - x_i) - \xi_i = -(2\xi_i + 4 d_{i+1}^2)\, x_i - y_i + (\xi_i + 4 d_{i+1}^2)$,

or in matrix notation:

    $(x_{i+1},\, y_{i+1},\, 1)^T = L_{i+1}\, (x_i,\, y_i,\, 1)^T$, where $L_{i+1} = \begin{pmatrix} -1 & 0 & 1 \\ -(2\xi_i + 4 d_{i+1}^2) & -1 & \xi_i + 4 d_{i+1}^2 \\ 0 & 0 & 1 \end{pmatrix}$.

The function $R_{i+1}$ is stored by storing the breakpoint $(x_{i+1}, y_{i+1}) = (1 - w_i',\, R_{i+1}(1 - w_i'))$ as well as the two matrices $L_{i+1}$ and $M_{i+1}$. Note that the first function $R_2$ is stored explicitly. A new function $R_{i+1}$ can be constructed in constant time plus a constant number of queries on $R_i$, and it requires constant space only.

We now explain how the three operations Op 1, Op 2, and Op 3 are implemented on this representation of the function $R_i$. For an operation on $R_i$, we progressively build transformation matrices $T_i, T_{i-1}, \ldots, T_3, T_2$ such that $(x_i, y_i, 1)^T = T_j\, (x_j, y_j, 1)^T$ for every $2 \le j \le i$, in a neighborhood of the query. Once we obtain $T_2$, we use our explicit representation of $R_2$ to express $y_i$ as a linear function of $x_i$ in a neighborhood of the query, which then allows us to answer the query.

The first matrix $T_i$ is the identity matrix. We obtain $T_j$ from $T_{j+1}$, for $j \in [2, i-1]$, as follows: If $R_{j+1}$ has no breakpoint, then $T_j = T_{j+1} M_{j+1}$. If $R_{j+1}$ has a breakpoint $(x_{j+1}, y_{j+1})$, then either $T_j = T_{j+1} M_{j+1}$ or $T_j = T_{j+1} L_{j+1}$, depending on which side of the breakpoint applies to the answer of the query. We can decide this by comparing $(x_i, y_i, 1)^T = T_{j+1}\, (x_{j+1}, y_{j+1}, 1)^T$ with the query. More precisely, for Op 1 we compare the input $x$ with $x_i$, for Op 2 we compare the input $y$ with $y_i$, and for Op 3 we compute $x_i + y_i/\xi$ and compare it with 1.

It follows that our implicit representation of $R_i$ supports all three operations on $R_i$ in time $O(i)$, and so the total time to construct $R_{n-1}$ is $O(n^2)$.

Theorem 3.1. Given $n$ points on a line, we can compute an optimal set of weights for minimizing the quality measure $Q$ in $O(n^2)$ time.

4 Experiments

We have implemented both the explicit and the implicit representations in Python. For comparison, we used the quadratic solver cvxopt, through the modeling library picos (our code is available online).
Table 1: Running times of the three methods (in seconds); columns $n$, QP, Explicit, Implicit.

Table 2: Average and maximum number of pieces for three different distributions (small uniform, large uniform, Gaussian); for each distribution, the average and the maximum over all runs, per $n$.

Running times. To compare the running time of the different methods, we first generated problem instances randomly, by setting each interpoint distance $d_i$ to an independent random value, taken uniformly from the integers $\{1, 2, \ldots, 50\}$. Table 1 shows the results. Perhaps surprisingly, the simple method that represents each $R_i$ as a sequence of linear functions outperforms the other two methods. Apparently, at least for random interpoint distances, the number of linear pieces of these functions does not grow fast.

Number of pieces. To investigate this further, we generated problem instances with various distributions used for the random generation of interpoint distances. The results can be seen in Table 2. In the small uniform distribution, interpoint distances are taken uniformly from the set $\{1, 2, \ldots, 50\}$; for the large uniform distribution, from the set $\{1, 2, \ldots, 10{,}000\}$. In the third column, interpoint distances are sampled from a Gaussian distribution with mean 100 and standard deviation 30. For each distribution and $n$, we compute the functions $R_2, R_3, \ldots, R_{n-1}$, and take the maximum of the number of pieces over these $n-2$ functions. We repeat each experiment 1,000 times, and show both the average and the maximum of the number of pieces found. The table explains why the simple method performs so well in practice: as long as the number of pieces remains small, its running time is essentially linear. In fact, we are not even using binary search to implement the three operations on the piecewise linear functions.

Precision. The cvxopt solver uses an iterative procedure in floating point arithmetic, and so its precision is limited. With the tolerance set to the maximum feasible value of $10^{-6}$, some weights still differ noticeably from our algorithm's solution. Our algorithm can easily be implemented using exact or high-precision arithmetic.
In fact, in our implementation it suffices to provide the initial distance vector using Python Fraction objects for exact rational arithmetic, or as
high-precision floating point numbers from the mpmath Python library. Using rational arithmetic, computing the exact optimal solution for 3200 points with integer interpoint distances from the set $\{1, 2, \ldots, 50\}$ takes between 1.4 and 4 seconds.

5 Conclusion

While in practice the explicit representation of the functions $R_i$ works well, we do not have a polynomial bound on the running time of this method. Future work should determine whether this method can indeed be slow on some instances, or whether the number of pieces can be bounded.

It would also be nice to obtain an algorithm for higher dimensions that is not based on a quadratic programming solver. In two dimensions, we have conducted some experiments that indicate that the Delaunay triangulation of the point set contains a well-fitting graph. If we choose the graph edges only from the Delaunay edges and compute the optimal edge weights, the resulting quality measure is very close to the best quality measure in the unrestricted case. It is conceivable that one can obtain a provably good approximation from the Delaunay triangulation.

References

[1] S.I. Daitch, J.A. Kelner, and D.A. Spielman. Fitting a graph to vector data. ICML, 2009.
[2] S. Han, H. Huang, H. Qin, and D. Yu. Locality-preserving L1-graph and its application in clustering. Proceedings of the 30th Annual ACM Symposium on Applied Computing, 2015.
[3] T. Jebara, J. Wang, and S.-F. Chang. Graph construction and b-matching for semi-supervised learning. ICML, 2009.
[4] S. Kapoor and P.M. Vaidya. Fast algorithms for convex quadratic programming and multicommodity flows. Proceedings of the 18th Annual ACM Symposium on Theory of Computing, 1986.
[5] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. NIPS, 2001.
[6] Y. Ye and E. Tse. An extension of Karmarkar's projective algorithm for convex quadratic programming. Mathematical Programming, 44 (1989).
[7] Y.-M. Zhang, K. Huang, and C.-L. Liu. Learning locality preserving graph from data. IEEE Transactions on Cybernetics, 44 (2014).
[8] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. NIPS, 2003.
Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,
More informationGeneral viscosity iterative method for a sequence of quasi-nonexpansive mappings
Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,
More informationLecture 21: Numerical methods for pricing American type derivatives
Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)
More informationCOS 521: Advanced Algorithms Game Theory and Linear Programming
COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING
1 ADVANCED ACHINE LEARNING ADVANCED ACHINE LEARNING Non-lnear regresson technques 2 ADVANCED ACHINE LEARNING Regresson: Prncple N ap N-dm. nput x to a contnuous output y. Learn a functon of the type: N
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More informationApproximate Smallest Enclosing Balls
Chapter 5 Approxmate Smallest Enclosng Balls 5. Boundng Volumes A boundng volume for a set S R d s a superset of S wth a smple shape, for example a box, a ball, or an ellpsod. Fgure 5.: Boundng boxes Q(P
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationLinear Classification, SVMs and Nearest Neighbors
1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush
More informationTHE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens
THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationNP-Completeness : Proofs
NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationEstimation: Part 2. Chapter GREG estimation
Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the
More informationCHAPTER III Neural Networks as Associative Memory
CHAPTER III Neural Networs as Assocatve Memory Introducton One of the prmary functons of the bran s assocatve memory. We assocate the faces wth names, letters wth sounds, or we can recognze the people
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More informationEdge Isoperimetric Inequalities
November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationLecture 14: Bandits with Budget Constraints
IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed
More informationCanonical transformations
Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,
More informationCHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE
CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationLecture 12: Classification
Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationn ). This is tight for all admissible values of t, k and n. k t + + n t
MAXIMIZING THE NUMBER OF NONNEGATIVE SUBSETS NOGA ALON, HAROUT AYDINIAN, AND HAO HUANG Abstract. Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what
More informationLecture 5 Decoding Binary BCH Codes
Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture
More informationLecture 4. Instructor: Haipeng Luo
Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would
More informationFinding Dense Subgraphs in G(n, 1/2)
Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng
More informationMaximal Margin Classifier
CS81B/Stat41B: Advanced Topcs n Learnng & Decson Makng Mamal Margn Classfer Lecturer: Mchael Jordan Scrbes: Jana van Greunen Corrected verson - /1/004 1 References/Recommended Readng 1.1 Webstes www.kernel-machnes.org
More informationA new construction of 3-separable matrices via an improved decoding of Macula s construction
Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula
More informationBezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0
Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationLecture 17 : Stochastic Processes II
: Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss
More informationMATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS
MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples
More informationSpectral Graph Theory and its Applications September 16, Lecture 5
Spectral Graph Theory and ts Applcatons September 16, 2004 Lecturer: Danel A. Spelman Lecture 5 5.1 Introducton In ths lecture, we wll prove the followng theorem: Theorem 5.1.1. Let G be a planar graph
More informationLecture 10 Support Vector Machines. Oct
Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron
More informationAffine transformations and convexity
Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/
More informationSpeeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem
H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationHow Strong Are Weak Patents? Joseph Farrell and Carl Shapiro. Supplementary Material Licensing Probabilistic Patents to Cournot Oligopolists *
How Strong Are Weak Patents? Joseph Farrell and Carl Shapro Supplementary Materal Lcensng Probablstc Patents to Cournot Olgopolsts * September 007 We study here the specal case n whch downstream competton
More informationMLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012
MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:
More informationIntroduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:
CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and
More informationLecture 4: Constant Time SVD Approximation
Spectral Algorthms and Representatons eb. 17, Mar. 3 and 8, 005 Lecture 4: Constant Tme SVD Approxmaton Lecturer: Santosh Vempala Scrbe: Jangzhuo Chen Ths topc conssts of three lectures 0/17, 03/03, 03/08),
More informationLecture 3. Ax x i a i. i i
18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest
More informationMin Cut, Fast Cut, Polynomial Identities
Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.
More informationDynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)
/24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationSingle-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition
Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu
More informationLecture 3: Dual problems and Kernels
Lecture 3: Dual problems and Kernels C4B Machne Learnng Hlary 211 A. Zsserman Prmal and dual forms Lnear separablty revsted Feature mappng Kernels for SVMs Kernel trck requrements radal bass functons SVM
More informationMatrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD
Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo
More informationLecture 17: Lee-Sidford Barrier
CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the
More informationECE559VV Project Report
ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More information