Simple Analyses of the Sparse Johnson-Lindenstrauss Transform


Simple Analyses of the Sparse Johnson-Lindenstrauss Transform

Michael B. Cohen (1), T. S. Jayram (2), and Jelani Nelson (3)

1 MIT, 32 Vassar Street, Cambridge, MA 02139, USA. micohen@mit.edu
2 IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120, USA. jayram@us.ibm.com
3 Harvard John A. Paulson School of Engineering and Applied Sciences, 29 Oxford Street, Cambridge, MA 02138, USA. minilek@seas.harvard.edu

Abstract

For every n-point subset X of Euclidean space and target distortion 1+ε for 0 < ε < 1, the Sparse Johnson-Lindenstrauss Transform (SJLT) of [19] provides a linear dimensionality-reducing map f : X → ℓ_2^m where f(x) = Πx for Π a matrix with m rows where (1) m = O(ε^{-2} log n), and (2) each column of Π is sparse, having only O(εm) non-zero entries. Though the constructions given for such Π in [19] are simple, the analyses are not, employing intricate combinatorial arguments. We here give two simple alternative proofs of the main result of [19], involving no delicate combinatorics. One of these proofs has already been tested pedagogically, requiring slightly under forty minutes by the third author at a casual pace to cover all details in a blackboard course lecture.

1998 ACM Subject Classification F.2.0 General
Keywords and phrases dimensionality reduction, Johnson-Lindenstrauss, Sparse Johnson-Lindenstrauss Transform
Digital Object Identifier 10.4230/OASIcs.SOSA.2018.15

1 Introduction

A widely applied technique to gain speedup and reduce memory footprint when processing high-dimensional data is to first apply a dimensionality-reducing map which approximately preserves the geometry of the input in a pre-processing step. One cornerstone result along these lines is the following Johnson-Lindenstrauss (JL) lemma [16].

Lemma 1 (JL lemma). For all 0 < ε < 1, integers n, d > 1, and X ⊂ R^d with |X| = n, there exists f : X → R^m with m = O(ε^{-2} log n) such that for all y, z ∈ X,

  (1 − ε)‖y − z‖_2^2 ≤ ‖f(y) − f(z)‖_2^2 ≤ (1 + ε)‖y − z‖_2^2.  (1)

The last two authors dedicate this work to the first author, who in this work and other interactions was a constant source of energy and intellectual enthusiasm. M. B.
Cohen is supported by NSF grants CCF and CCF, and by a National Defense Science and Engineering Graduate Fellowship. J. Nelson is supported by NSF grant IIS and CAREER award CCF-13567, ONR Young Investigator award N and DORECG award N, an Alfred P. Sloan Research Fellowship, and a Google Faculty Research Award.
© Michael B. Cohen, T. S. Jayram, and Jelani Nelson; licensed under Creative Commons License CC-BY.
1st Symposium on Simplicity in Algorithms (SOSA 2018). Editor: Raimund Seidel; Article No. 15; pp. 15:1-15:9.
Open Access Series in Informatics, Schloss Dagstuhl Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.
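As a warm-up, the guarantee of Lemma 1 is easy to probe numerically with a dense Gaussian Π (not yet the sparse construction this paper studies). All sizes and the point set below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 30, 10_000, 1_000          # n points in R^d, target dimension m

X = rng.standard_normal((n, d))                  # an arbitrary point set
Pi = rng.standard_normal((m, d)) / np.sqrt(m)    # scaled so E ||Pi x||^2 = ||x||^2

# Check the distortion of all pairwise squared distances, as in Eq. (1).
worst = 0.0
for i in range(n):
    for j in range(i + 1, n):
        orig = np.sum((X[i] - X[j]) ** 2)
        emb = np.sum((Pi @ X[i] - Pi @ X[j]) ** 2)
        worst = max(worst, abs(emb / orig - 1))
print(worst)   # concentrates around sqrt(log n / m); well below 1/2 for these sizes
```

With m on the order of ε^{-2} log n, the worst-case pairwise distortion observed above stays around ε.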

The target dimension m given by the JL lemma is optimal for nearly the full range of n, d, ε: for any n, d, ε there exists a point set X ⊂ R^d with |X| = n such that any (1 + ε)-distortion embedding of X into R^m under the Euclidean norm must have m = Ω(min{n, d, ε^{-2} log(ε^2 n)}) [21, 5]. Note that an isometric embedding (i.e. ε = 0) is always achievable into dimension m = min{n − 1, d}, and thus the lower bound is optimal except potentially for ε close to 1/√n.

All known proofs of the JL lemma instantiate f as a linear map. The original proof in [16] picked f(x) = Πx where Π ∈ R^{m×d} was an appropriately scaled orthogonal projection onto a uniformly random m-dimensional subspace. It was then shown that as long as m = Ω(ε^{-2} log(1/δ)), for all x ∈ R^d such that ‖x‖_2 = 1,

  P_Π(|‖Πx‖_2^2 − 1| > ε) < δ.  (2)

The JL lemma then followed by setting δ < 1/(n choose 2), considering x = (y − z)/‖y − z‖_2 for each pair y, z ∈ X, and adjusting ε by a constant factor. It is known that this bound on m for attaining (2) is tight; that is, m must be Ω(min{d, ε^{-2} log(1/δ)}) [15, 17].

One should typically think of applying dimensionality reduction techniques for applications as a two-step process: first (1) one applies the dimension-reducing map f to the data, then (2) one runs some algorithm on the lower-dimensional data f(X). While reducing m typically speeds up the second step, in order to speed up the first step it is necessary to give an f which can be both found and applied to data quickly. To this end, Achlioptas showed Π can be chosen with i.i.d. entries where Π_{i,j} = 0 with probability 2/3, and otherwise Π_{i,j} is uniform in ±1/√(m/3) [1]. This was accomplished without increasing m by even a constant factor over previous best analyses of the JL lemma. Thus essentially a 3x speedup in step (1) is obtained without any loss in the quality of dimensionality reduction.
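The Achlioptas distribution just described is equally easy to sample. A minimal sketch, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 300, 5_000

# Entry is 0 with probability 2/3, else uniform in +-sqrt(3/m),
# so E[Pi_ij^2] = (1/3)(3/m) = 1/m and E ||Pi x||^2 = ||x||^2.
Pi = rng.choice([0.0, np.sqrt(3.0 / m), -np.sqrt(3.0 / m)],
                size=(m, d), p=[2 / 3, 1 / 6, 1 / 6])

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
print(abs((Pi @ x) @ (Pi @ x) - 1))   # ||Pi x||^2 concentrates around 1
```

Two thirds of the entries vanish, which is the source of the roughly 3x speedup when applying Π.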
Later, Ailon and Chazelle developed the FJLT [2], which uses the Fast Fourier Transform to implement a JL map Π with m = O(ε^{-2} log n) supporting matrix-vector multiplication in time O(d log d + m^3). Later work of [3] gave a different construction which, for the same m, improved the multiplication time to O(d log d + m^{2+γ}) for arbitrarily small γ > 0. More recently, a sequence of works gives embedding time O(d log d) but with a suboptimal embedding dimension m = O(ε^{-2} log n · poly(log log n)) [4, 20, 22, 6, 12]. Note that the line of work beginning with the FJLT requires Ω(d log d) embedding time per point, which is worse than the O(m‖x‖_0) time to embed x using a dense Π if x is sufficiently sparse. Here ‖x‖_0 denotes the number of non-zero entries in x.

Motivated by speeding up dimensionality reduction further for sparse inputs, Kane and Nelson in [19], following [10, 18, 7], introduced the SJLT with m = O(ε^{-2} log n) and with s = O(εm) nonzero entries per column. This reduced the embedding time to compute Πx from O(m‖x‖_0) to O(s‖x‖_0) = O(εm‖x‖_0). The original analysis of the SJLT in [19] showed Equation (2) for m = O(ε^{-2} log(1/δ)), s = O(ε^{-1} log(1/δ)) via the moment method. Specifically, the analysis there for ‖x‖_2 = 1 defined

  Z = ‖Πx‖_2^2 − 1  (3)

then used Markov's inequality to yield P(|Z| > ε) < ε^{-q} E Z^q for some large even integer q (specifically q = Θ(log(1/δ))). The bulk of the work was in bounding E Z^q, which was accomplished by expanding Z^q as a polynomial with exponentially many terms, grouping terms with similar combinatorial structure, then employing intricate combinatorics to achieve a sufficiently good bound.
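The O(s‖x‖_0) embedding time is immediate from a column-wise representation of Π: for each nonzero coordinate of x, only the s nonzero rows of that column are touched. A minimal sketch (using the "s random rows per column, without replacement" distribution; all names and sizes below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def sjlt_columns(m, d, s, rng):
    """Column-wise storage of Pi: s distinct row indices and s signs per column."""
    rows = np.array([rng.choice(m, size=s, replace=False) for _ in range(d)])
    signs = rng.choice([-1.0, 1.0], size=(d, s))
    return rows, signs

def apply_sjlt(rows, signs, s, m, x_idx, x_val):
    """Compute Pi @ x for sparse x given as (indices, values): O(s * nnz(x)) work."""
    y = np.zeros(m)
    for i, v in zip(x_idx, x_val):
        y[rows[i]] += signs[i] * v / np.sqrt(s)
    return y

rng = np.random.default_rng(5)
m, d, s = 128, 10_000, 8
rows, signs = sjlt_columns(m, d, s, rng)

# A unit-norm x with only 3 nonzeros: embedding touches 3*s = 24 entries of Pi.
x_idx = np.array([7, 1_000, 9_999])
x_val = np.array([0.6, 0.48, 0.64])     # squared entries sum to 1
y = apply_sjlt(rows, signs, s, m, x_idx, x_val)
print(abs(np.sum(y ** 2) - 1))          # small: ||Pi x||^2 concentrates around ||x||^2 = 1
```

For a standard basis vector x = e_i the embedding preserves the norm exactly, since the s nonzeros in column i are distinct and each has magnitude 1/√s.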

Our Main Contribution. We give two new analyses of the SJLT of [19], both of which avoid expanding Z^q into many terms and employing intricate combinatorics. As mentioned in the abstract, one of these proofs has already been tested pedagogically, requiring slightly under forty minutes by the third author at a casual pace to cover all details in a blackboard lecture.

2 Preliminaries

We say f(x) ≲ g(x) if f(x) = O(g(x)), and f(x) ≍ g(x) denotes f(x) = Θ(g(x)). For a random variable X and q ∈ R, ‖X‖_q denotes (E|X|^q)^{1/q}. Minkowski's inequality, which we repeatedly use, states that ‖·‖_q is a norm for q ≥ 1. If X depends on many random sources, e.g. X = X(a, b), we use ‖X‖_{L_q(a)}, say, to denote the q-norm over the randomness in a (and thus the result will be a random variable depending only on b). A Bernoulli-Rademacher random variable X = ησ with parameter p is such that η is a Bernoulli random variable (on {0, 1}) with E η = p and σ is a Rademacher random variable, i.e. uniform in {−1, 1}. Overloading notation, a random vector X whose coordinates are i.i.d. Bernoulli-Rademacher with parameter p will also be called by the same name. For a square real matrix A, let Å be obtained by zeroing out the diagonal of A. Throughout this paper we use ‖·‖_F to denote Frobenius norm and ‖·‖ to denote the ℓ_2-to-ℓ_2 operator norm.

Both our SJLT analyses in this work show Eq. (2) by analyzing tail bounds for the random variable Z defined in Eq. (3). We continue to use the same notation, where x ∈ R^d of unit norm is as in Eq. (3). Our first SJLT analysis uses the following moment bounds for the binomial distribution and for quadratic forms with Rademacher random variables.

Lemma 2 ([14]). For Y distributed as Binomial(N, α) for integer N ≥ 1 and α ∈ (0, 1), let 1 ≤ p ≤ N and define B := p/(αN). Then

  ‖Y‖_p ≲ p/log B if B ≥ e, and ‖Y‖_p ≲ p/B if B < e.

A more modern, general proof of the below Hanson-Wright inequality can be found in [23].

Theorem 3 (Hanson-Wright inequality [11]). For σ_1, ..., σ_n independent Rademachers and A ∈ R^{n×n}, for all q ≥ 1,

  ‖σ^T Aσ − E σ^T Aσ‖_q ≲ √q · ‖A‖_F + q · ‖A‖.
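For small n, Theorem 3 can be sanity-checked by exhaustive enumeration, since the q-th moment over all 2^n sign vectors can be computed exactly. The test matrix, q = 4, and the slack constant 10 (standing in for the unspecified constant hidden in ≲) are arbitrary choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, q = 10, 4
A = rng.standard_normal((n, n))

# Exact q-th central moment of sigma^T A sigma over all 2^n sign vectors;
# note E sigma^T A sigma = trace(A).
tr = np.trace(A)
vals = []
for signs in itertools.product([-1.0, 1.0], repeat=n):
    s = np.array(signs)
    vals.append((s @ A @ s - tr) ** q)
moment_norm = np.mean(vals) ** (1.0 / q)

frob = np.linalg.norm(A, 'fro')
op = np.linalg.norm(A, 2)
bound = np.sqrt(q) * frob + q * op
print(moment_norm <= 10 * bound)
```

The ratio moment_norm / bound stays bounded as the theorem predicts; hypercontractivity for degree-2 Rademacher chaos already guarantees the assertion below with a comfortable margin.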
Our second analysis uses a standard decoupling inequality; a proof is in [25, Remark 6.1.3].

Theorem 4 (Decoupling). Let A ∈ R^{n×n} be arbitrary, and let X_1, ..., X_n be independent and mean zero. Then for every convex function F : R → R,

  E F(∑_{i≠j} A_{i,j} X_i X_j) ≤ E F(4 ∑_{i,j} A_{i,j} X_i X'_j),

where the X'_i are independent copies of the X_i.

Before describing the SJLT, we describe the related CountSketch of [8], which was shown to satisfy Eq. (2) in [24]. In this construction for Π, one picks a hash function h : [d] → [m] from a pairwise independent family, and a function σ : [d] → {−1, 1} from a 4-wise independent family. Then for each i ∈ [d], Π_{h(i),i} = σ(i), and the rest of the ith column is 0. It was shown

[Figure 1. Both distributions have s non-zeroes per column, with each non-zero being independent in ±1/√s. In (i), they are in random locations, without replacement. (ii) is the CountSketch (with s > 1), whose rows are grouped into s blocks of size m/s each, with one non-zero per block per column in a uniformly random location, independent of other blocks; in this example, m = 8, s = 4.]

in [24] that this distribution satisfies Eq. (2) for m = Ω(1/(ε^2 δ)). Note that the column sparsity s equals 1. The analysis is simply via Chebyshev's inequality, i.e. bounding the second moment of Z. The reason for the poor dependence of m on the failure probability δ is that we use Chebyshev's inequality. This is avoided by bounding a higher moment (as in [19], or our first analysis in this work), or by analyzing the moment generating function (MGF) (as in our second analysis in this work). To improve the dependence of m on 1/δ, we allow ourselves to increase s.

Now we describe the SJLT. This is a JL distribution over Π having exactly s non-zero entries per column, where each entry is a scaled Bernoulli-Rademacher. Specifically, in the SJLT the random Π ∈ R^{m×d} satisfies Π_{r,i} = η_{r,i} σ_{r,i}/√s for some integer 1 ≤ s ≤ m. The σ_{r,i} are independent Rademachers and jointly independent of the Bernoulli random variables η_{r,i}, which satisfy:

(a) For any i ∈ [d], ∑_{r=1}^m η_{r,i} = s. That is, each column of Π has exactly s non-zero entries.
(b) For all r ∈ [m], i ∈ [d], E η_{r,i} = s/m.
(c) The η_{r,i} are negatively correlated: for all S ⊂ [m] × [d], E ∏_{(r,i)∈S} η_{r,i} ≤ ∏_{(r,i)∈S} E η_{r,i} = (s/m)^{|S|}.

See Figure 1 for at least two natural distributions satisfying the above requirements. Thus

  ‖Πx‖_2^2 = (1/s) ∑_{r=1}^m ∑_{i,j=1}^d η_{r,i} η_{r,j} σ_{r,i} σ_{r,j} x_i x_j.

Using (a) above we have (1/s) ∑_i ∑_r η_{r,i} x_i^2 = ‖x‖_2^2 = 1, so that

  Z = ‖Πx‖_2^2 − 1 = (1/s) ∑_{r=1}^m ∑_{i≠j} η_{r,i} η_{r,j} σ_{r,i} σ_{r,j} x_i x_j.  (4)

Remark 1. In both our analyses, item (a) above is only used to remove the diagonal i = j terms from Eq. (4).
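Distribution (ii) from Figure 1 can be sampled in a few lines, and properties (a) and (b) are easy to verify directly; taking s = 1 recovers a CountSketch-like matrix (here with fully random h and σ rather than the limited-independence families of [8]). Sizes below are illustrative.

```python
import numpy as np

def sjlt_block(m, d, s, rng):
    """Distribution (ii): s blocks of m//s rows, one nonzero per block per column."""
    assert m % s == 0
    block = m // s
    Pi = np.zeros((m, d))
    for i in range(d):
        # One uniformly random row inside each block, independently per block.
        rows = np.arange(s) * block + rng.integers(0, block, size=s)
        Pi[rows, i] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Pi

rng = np.random.default_rng(3)
m, d, s = 64, 2_000, 8
Pi = sjlt_block(m, d, s, rng)

# Property (a): exactly s nonzeros per column.
print(np.all((Pi != 0).sum(axis=0) == s))

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
print(abs(np.sum((Pi @ x) ** 2) - 1))   # Z = ||Pi x||^2 - 1; small for these parameters
```

Because the blocks are disjoint, each column has exactly s nonzeros of magnitude 1/√s, so property (a) holds by construction and (b) holds since each row within a block is chosen uniformly.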
Remark 2. Henceforth, it turns out in both analyses of the SJLT that (b) and (c) imply we can assume the η_{r,i} are fully independent, i.e., that the entries of Π are fully independent. This is not the same as saying we can replace the sketch matrix Π with one having fully independent entries, because then part (a) would be violated, and it is important for only the cross terms in the quadratic form representing Z to be present. In the analyses we justify this assumption by considering the integer moments of Z, which we show here cannot decrease under replacement with fully independent entries. For each integer q, each monomial in the expansion of Z^q has expectation equal to s^{-q} x_{α_1}^{d_1} ··· x_{α_t}^{d_t} · (E ∏_{(r,i)∈S} η_{r,i}) whenever all the d_j are even,

and S contains all the distinct (r, i) such that η_{r,i} appears in the monomial; otherwise the expectation equals 0. Now, s^{-q} x_{α_1}^{d_1} ··· x_{α_t}^{d_t} is nonnegative, and E ∏_{(r,i)∈S} η_{r,i} ≤ (s/m)^{|S|}. Thus monomials' expectations are term-by-term dominated by the case that all η_{r,i} are i.i.d. Bernoulli with expectation s/m.

3 Proof Overview

Hanson-Wright analysis. Note Z can be written as the quadratic form σ^T A_{x,η} σ, where A_{x,η} is block diagonal with m blocks, and the rth block is (1/s) x^{(r)} (x^{(r)})^T but with the diagonal zeroed out. Here x^{(r)} is the vector with (x^{(r)})_i = η_{r,i} x_i. To apply Hanson-Wright, we must then bound ‖ ‖A_{x,η}‖_F ‖_p and ‖ ‖A_{x,η}‖ ‖_p over the randomness of η. This was done in [19], but suboptimally, leading to a simple proof there but of a weaker result (namely, the bound on s proven there was suboptimal by a log(1/ε) factor). As already observed in [19], a simple calculation shows ‖A_{x,η}‖ ≤ 1/s with probability 1. In this work we improve the analysis of ‖ ‖A_{x,η}‖_F ‖_p by a simple combination of the triangle and Bernstein inequalities to yield a tight analysis.

MGF analysis. We apply the Chernoff-Rubin bound P(|Z| > ε) ≤ 2e^{−tε} E cosh(tZ), so that we must bound E cosh(tZ) (for t in some bounded range) and then optimize the choice of t. We accomplish our analysis by writing Z = X^T Å X for an appropriate matrix A, where X is a Bernoulli-Rademacher vector, via Taylor expansion of cosh and considerations similar to Remark 2. We then bound E cosh(t X^T Å X) using decoupling followed by arguments similar to [13, 23]. We note one can also recover an MGF-based analysis by specializing the analysis of [9] for sparse oblivious subspace embeddings to the case of 1-dimensional subspaces, though the resulting proof would be quite different from the one presented here. We believe the MGF-based analysis we give in this work appeals to more standard arguments, although the analysis in [9] does provide the advantage that it yields tradeoff bounds for s, m.

4 Our SJLT analyses

4.1 A first analysis: via the Hanson-Wright inequality

Theorem 5.
For Π coming from an SJLT distribution, as long as m ≳ ε^{-2} log(1/δ) and s ≳ εm,

  ∀x : ‖x‖_2 = 1, P_Π(|‖Πx‖_2^2 − 1| > ε) < δ.

Proof. As noted, we can write Z as a quadratic form

  Z = ‖Πx‖_2^2 − 1 = (1/s) ∑_{r=1}^m ∑_{i≠j} η_{r,i} η_{r,j} σ_{r,i} σ_{r,j} x_i x_j =: σ^T A_{x,η} σ,

where A_{x,η} is as defined in Section 3. Set q = Θ(log(1/δ)) = Θ(s^2/m). By Hanson-Wright and the triangle inequality,

  ‖Z‖_q ≲ ‖ √q ‖A_{x,η}‖_F + q ‖A_{x,η}‖ ‖_q ≤ √q ‖ ‖A_{x,η}‖_F ‖_q + q ‖ ‖A_{x,η}‖ ‖_q.

Since A_{x,η} is block-diagonal, its operator norm is the largest operator norm of any block. The eigenvalue of the rth block is at most (1/s) ·

max{‖x^{(r)}‖_2^2, ‖x^{(r)}‖_∞^2} ≤ 1/s, and thus ‖A_{x,η}‖ ≤ 1/s with probability 1. Next, define Q_{i,j} = ∑_{r=1}^m η_{r,i} η_{r,j}, so that

  ‖A_{x,η}‖_F^2 = (1/s^2) ∑_{i≠j} x_i^2 x_j^2 Q_{i,j}.

Suppose η_{r_1,i} = ··· = η_{r_s,i} = 1 for distinct r_t, and write Q_{i,j} = ∑_{t=1}^s Y_t, where Y_t is an indicator random variable for the event η_{r_t,j} = 1. By Remark 2 we may assume the Y_t are independent, in which case Q_{i,j} is distributed as Binomial(s, s/m). Then by Lemma 2, ‖Q_{i,j}‖_q ≲ q. Thus,

  ‖ ‖A_{x,η}‖_F ‖_q = ‖ ‖A_{x,η}‖_F^2 ‖_{q/2}^{1/2} ≤ [ (1/s^2) ∑_{i≠j} x_i^2 x_j^2 ‖Q_{i,j}‖_q ]^{1/2}  (triangle inequality)
   ≲ √q / s ≍ 1/√m.

Then by Markov's inequality and the settings of q, s, m,

  P(|‖Πx‖_2^2 − 1| > ε) = P(|σ^T A_{x,η} σ| > ε) < ε^{-q} · C^q · ((q/m)^{q/2} + (q/s)^q) < δ.

Remark. Less general bounds than Lemma 2 would have still sufficed for our purposes. For example, Bernstein's inequality and the triangle inequality together imply ‖Y‖_p ≲ αN + p for any p ≥ 1, which suffices for our application since we were interested in the case p ≍ αN.

4.2 A second analysis: bounding the MGF

In this analysis we show the following bound on the symmetrized MGF of the error:

  E cosh(tZ) ≤ exp(K^2 t^2 / m) for |t| ≤ s/K, where K = 4√2.  (5)

Using the above, we obtain tail estimates in a standard manner. By the generic Chernoff-Rubin bound:

  P(|Z| > ε) ≤ 2e^{−tε} E cosh(tZ) ≤ 2 exp(K^2 t^2 / m − tε) for all 0 ≤ t ≤ s/K.

Optimizing over the choice of t, we obtain the tail bound:

  P(|Z| > ε) ≤ 2 max{ exp(−C^2 ε^2 m), exp(−C ε s) }, where C > 0 is an absolute constant.

Remark. The cross-over point for the two bounds is when s/m = Θ(ε). To obtain a failure probability of δ, this yields the desired s = O(ε^{-1} log(1/δ)) and m = O(ε^{-2} log(1/δ)).

Our goal now is to prove Eq. (5) for t satisfying |t| ≤ s/K. By Taylor expansion, we have E cosh(tZ) = ∑_{q even} (t^q / q!) E Z^q. Therefore, by Section 2, we may assume that the η_{r,i} are fully independent in order to bound E cosh(tZ) from above. Now

  E cosh(tZ) = (1/2)(E exp(tZ) + E exp(−tZ)) ≤ max{ E exp(tZ), E exp(−tZ) } for all t ∈ R.

Let B := (1/s) xx^T. Let Π = (1/√s) H and let Y_1, Y_2, ..., Y_m denote the rows of H. Then Z = ∑_{r=1}^m Y_r^T B̊ Y_r.
By the independence

assumption, the Y_r are i.i.d. Bernoulli-Rademacher vectors. Letting Y denote an identical copy of a single row of H,

  E exp(±tZ) = ∏_r E exp(±t Y_r^T B̊ Y_r) = (E exp(±t Y^T B̊ Y))^m for all t ∈ R.  (6)

Let Y' be an independent copy of Y. By decoupling (Theorem 4),

  E exp(±t Y^T B̊ Y) ≤ E exp(±4t Y^T B Y') = E exp(Y^T B̃ Y'), where B̃ := ±4tB.  (7)

We show below that

  E exp(Y^T B̃ Y') ≤ 1 + K^2 t^2 / m^2, provided |t| ≤ s/K, where K = 4√2.  (8)

Substituting this bound in Eq. (7) and combining with Eq. (6), we obtain:

  E exp(±tZ) ≤ (1 + K^2 t^2 / m^2)^m ≤ exp(K^2 t^2 / m), provided |t| ≤ s/K, where K = 4√2,

which completes the proof of (5) as desired. It remains to prove Eq. (8).

Bilinear forms of Bernoulli-Rademacher random variables. The MGF of a Bernoulli-Rademacher random variable X = ησ with parameter p equals

  E exp(tX) = 1 − p + p E exp(tσ) ≤ 1 − p + p exp(t^2/2) for all t ∈ R.

Let λ(z) := exp(z) − 1. Rewriting the above, we have E λ(tX) ≤ p λ(t^2/2) = p E λ(tg), where g ~ N(0, 1). We show an analogous replacement inequality for Bernoulli-Rademacher vectors.

Lemma 6. Let Y be a Bernoulli-Rademacher vector with parameter p. Then

  E λ(b^T Y) ≤ p λ(‖b‖_2^2/2) = p E λ(b^T g) for all vectors b, where g ~ N(0, I_n).

Proof. By stability of Gaussians, E exp(b^T g) = exp(‖b‖_2^2/2), demonstrating the last equality above. Let g(t) := ∑_{∅≠S} t^{|S|−1} ∏_{i∈S} λ(b_i^2/2) for t ≥ 0, so that ∏_i (1 + t λ(b_i^2/2)) = 1 + t g(t). Now:

  E exp(b^T Y) = ∏_i E exp(b_i Y_i) = ∏_i (1 + E λ(b_i Y_i)) ≤ ∏_i (1 + p λ(b_i^2/2)) = 1 + p g(p).

Thus E λ(b^T Y) ≤ p g(p) ≤ p g(1), since g(t) is nondecreasing. To conclude, we claim that g(1) = λ(‖b‖_2^2/2). Indeed:

  1 + g(1) = ∏_i (1 + λ(b_i^2/2)) = ∏_i exp(b_i^2/2) = exp(∑_i b_i^2/2) = 1 + λ(‖b‖_2^2/2).

Let p := s/m. On the left side of Eq. (8), we have E exp(Y^T B̃ Y') = 1 + E λ(Y^T B̃ Y'). By the law of total expectation:

  E_{Y,Y'} λ(Y^T B̃ Y') = E_Y E_{Y'}[λ((Y^T B̃) Y') | Y] ≤ p E_Y E_{g'}[λ((Y^T B̃) g') | Y]  (by Lemma 6, applied to Y').

Exchange the order of expectations of Y and g' via the Fubini-Tonelli theorem, and apply Lemma 6 again, this time to Y.
Finishing with the law of total expectation yields an upper bound of p^2 E λ(g^T B̃ g'). Thus:

  E exp(Y^T B̃ Y') ≤ 1 + p^2 E λ(g^T B̃ g').  (9)
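The right side of Eq. (9) is now a decoupled Gaussian chaos, whose MGF has a closed form via the SVD; this is exactly the computation carried out in the lemma proved below. A quick numerical check of that computation, with an arbitrary test matrix scaled to satisfy the operator-norm condition:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
Q = rng.standard_normal((n, n))
Q *= 1 / (np.sqrt(2) * np.linalg.norm(Q, 2))    # enforce ||Q|| = 1/sqrt(2)

# E exp(g^T Q g') = prod_i (1 - s_i^2)^(-1/2) over singular values s_i of Q.
sv = np.linalg.svd(Q, compute_uv=False)
mgf = np.prod((1.0 - sv ** 2) ** -0.5)

bound = np.exp(np.linalg.norm(Q, 'fro') ** 2)   # the bound exp(||Q||_F^2)
print(mgf <= bound)
```

The inequality holds factor by factor, since (1 − x)^{−1/2} ≤ e^x for 0 ≤ x ≤ 1/2 applied with x = s_i^2.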

In order to be self-contained we include a standard proof of the following lemma, though note that the lemma itself is equivalent to the Hanson-Wright inequality for Gaussian random variables, since it gives a bound on the MGF of decoupled quadratic forms in Gaussian random variables.

Lemma 7. E exp(g^T Q g') ≤ exp(‖Q‖_F^2) for independent g, g' ~ N(0, I_n), provided ‖Q‖ ≤ 1/√2.

Proof. Let Q = U Σ V^T, where Σ = diag(s_1, ..., s_n). So E exp(g^T Q g') = E exp(g^T U Σ V^T g'). Since U is orthonormal, by rotational invariance U^T g ~ N(0, I_n) and is independent of V^T g' ~ N(0, I_n). Therefore E exp(g^T Q g') = E exp(g^T Σ g'). Now g^T Σ g' = ∑_i s_i g_i g'_i, therefore:

  E exp(g^T Σ g') = ∏_i E E[exp(s_i g_i g'_i) | g_i] = ∏_i E exp(s_i^2 g_i^2 / 2) = ∏_i (1 − s_i^2)^{−1/2}.

Now s_i^2 ≤ ‖Q‖^2 ≤ 1/2 for each i. Using the bound 1/(1 − x) ≤ e^{2x} for 0 ≤ x ≤ 1/2, we get:

  E exp(g^T Q g') ≤ ∏_i exp(s_i^2) = exp(∑_i s_i^2) = exp(‖Q‖_F^2).

Note that ‖B̃‖_F = 4|t| ‖B‖_F and ‖B̃‖ = 4|t| ‖B‖. Now B = (1/s) xx^T, so that ‖B‖_F = ‖B‖ = 1/s. Using the above lemma on the right side of Eq. (9) with Q = B̃, we obtain:

  E exp(Y^T B̃ Y') ≤ 1 + p^2 λ(K^2 t^2 / (2 s^2)), provided |t| ≤ s/K, where K = 4√2.

On the right side above, use the bound λ(x) ≤ 2x, which holds for 0 ≤ x ≤ 1/2, and substitute p = s/m, so that

  E exp(Y^T B̃ Y') ≤ 1 + K^2 t^2 / m^2, provided |t| ≤ s/K, where K = 4√2.

This yields the desired bound stated in Eq. (8).

References

1 Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J. Comput. Syst. Sci., 66(4):671-687, 2003.
2 Nir Ailon and Bernard Chazelle. The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput., 39(1):302-322, 2009.
3 Nir Ailon and Edo Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. Discrete & Computational Geometry, 42(4):615-630, 2009.
4 Nir Ailon and Edo Liberty. An almost optimal unrestricted fast Johnson-Lindenstrauss transform. ACM Trans. Algorithms, 9(3):21:1-21:12, 2013.
5 Noga Alon and Bo'az Klartag. Optimal compression of approximate inner products and dimension reduction.
In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2017.
6 Jean Bourgain. An improved estimate in the restricted isometry problem. Geometric Aspects of Functional Analysis, Lecture Notes in Mathematics 2116:65-70, 2014.
7 Vladimir Braverman, Rafail Ostrovsky, and Yuval Rabani. Rademacher chaos, random Eulerian graphs and the sparse Johnson-Lindenstrauss transform. CoRR, 2010.
8 Moses Charikar, Kevin C. Chen, and Martin Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3-15, 2004.

9 Michael B. Cohen. Nearly tight oblivious subspace embeddings by trace inequalities. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2016.
10 Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. A sparse Johnson-Lindenstrauss transform. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
11 David Lee Hanson and Farroll Tim Wright. A bound on tail probabilities for quadratic forms in independent random variables. Ann. Math. Statist., 42:1079-1083, 1971.
12 Ishay Haviv and Oded Regev. The restricted isometry property of subsampled Fourier matrices. In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2016.
13 Piotr Indyk and Assaf Naor. Nearest-neighbor-preserving embeddings. ACM Trans. Algorithms, 3(3):31, 2007.
14 Meena Jagadeesan. Simple analysis of sparse, sign-consistent JL. CoRR, 2017.
15 T. S. Jayram and David P. Woodruff. Optimal bounds for Johnson-Lindenstrauss transforms and streaming problems with subconstant error. ACM Trans. Algorithms, 9(3):26:1-26:17, 2013.
16 William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189-206, 1984.
17 Daniel M. Kane, Raghu Meka, and Jelani Nelson. Almost optimal explicit Johnson-Lindenstrauss families. In Proceedings of the 15th International Workshop on Randomization and Computation (RANDOM), August 2011.
18 Daniel M. Kane and Jelani Nelson. A derandomized sparse Johnson-Lindenstrauss transform. CoRR, 2010.
19 Daniel M. Kane and Jelani Nelson. Sparser Johnson-Lindenstrauss transforms. J. ACM, 61(1):4, January 2014. Preliminary version in SODA 2012.
20 Felix Krahmer and Rachel Ward. New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property. SIAM J. Math. Anal., 43(3):1269-1281, 2011.
21 Kasper Green Larsen and Jelani Nelson. Optimality of the Johnson-Lindenstrauss lemma. In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2017.
22 Jelani Nelson, Eric Price, and Mary Wootters. New constructions of RIP matrices with fast multiplication and fewer rows. In Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
23 Mark Rudelson and Roman Vershynin. Hanson-Wright inequality and sub-gaussian concentration. Electronic Communications in Probability, 18:1-9, 2013.
24 Mikkel Thorup and Yin Zhang. Tabulation-based 5-independent hashing with applications to linear probing and second moment estimation. SIAM J. Comput., 41(2):293-331, 2012.
25 Roman Vershynin. High-Dimensional Probability. May 2017. Last accessed at http://www-personal.umich.edu/~romanv/papers/hdp-book/hdp-book.pdf on August 22, 2017.


More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities Supplementary materal: Margn based PU Learnng We gve the complete proofs of Theorem and n Secton We frst ntroduce the well-known concentraton nequalty, so the covarance estmator can be bounded Then we

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Lecture 4: September 12

Lecture 4: September 12 36-755: Advanced Statstcal Theory Fall 016 Lecture 4: September 1 Lecturer: Alessandro Rnaldo Scrbe: Xao Hu Ta Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer: These notes have not been

More information

Lecture 5 September 17, 2015

Lecture 5 September 17, 2015 CS 229r: Algorthms for Bg Data Fall 205 Prof. Jelan Nelson Lecture 5 September 7, 205 Scrbe: Yakr Reshef Recap and overvew Last tme we dscussed the problem of norm estmaton for p-norms wth p > 2. We had

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP C O L L O Q U I U M M A T H E M A T I C U M VOL. 80 1999 NO. 1 FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP BY FLORIAN K A I N R A T H (GRAZ) Abstract. Let H be a Krull monod wth nfnte class

More information

Spectral Graph Theory and its Applications September 16, Lecture 5

Spectral Graph Theory and its Applications September 16, Lecture 5 Spectral Graph Theory and ts Applcatons September 16, 2004 Lecturer: Danel A. Spelman Lecture 5 5.1 Introducton In ths lecture, we wll prove the followng theorem: Theorem 5.1.1. Let G be a planar graph

More information

Vapnik-Chervonenkis theory

Vapnik-Chervonenkis theory Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown

More information

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013 ISSN: 2277-375 Constructon of Trend Free Run Orders for Orthogonal rrays Usng Codes bstract: Sometmes when the expermental runs are carred out n a tme order sequence, the response can depend on the run

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Math 217 Fall 2013 Homework 2 Solutions

Math 217 Fall 2013 Homework 2 Solutions Math 17 Fall 013 Homework Solutons Due Thursday Sept. 6, 013 5pm Ths homework conssts of 6 problems of 5 ponts each. The total s 30. You need to fully justfy your answer prove that your functon ndeed has

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

P exp(tx) = 1 + t 2k M 2k. k N

P exp(tx) = 1 + t 2k M 2k. k N 1. Subgaussan tals Defnton. Say that a random varable X has a subgaussan dstrbuton wth scale factor σ< f P exp(tx) exp(σ 2 t 2 /2) for all real t. For example, f X s dstrbuted N(,σ 2 ) then t s subgaussan.

More information

Introduction to Algorithms

Introduction to Algorithms Introducton to Algorthms 6.046J/8.40J Lecture 7 Prof. Potr Indyk Data Structures Role of data structures: Encapsulate data Support certan operatons (e.g., INSERT, DELETE, SEARCH) Our focus: effcency of

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

Finding Dense Subgraphs in G(n, 1/2)

Finding Dense Subgraphs in G(n, 1/2) Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C Some basc nequaltes Defnton. Let V be a vector space over the complex numbers. An nner product s gven by a functon, V V C (x, y) x, y satsfyng the followng propertes (for all x V, y V and c C) (1) x +

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Stanford University Graph Partitioning and Expanders Handout 3 Luca Trevisan May 8, 2013

Stanford University Graph Partitioning and Expanders Handout 3 Luca Trevisan May 8, 2013 Stanford Unversty Graph Parttonng and Expanders Handout 3 Luca Trevsan May 8, 03 Lecture 3 In whch we analyze the power method to approxmate egenvalues and egenvectors, and we descrbe some more algorthmc

More information

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space. Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +

More information

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7 Stanford Unversty CS54: Computatonal Complexty Notes 7 Luca Trevsan January 9, 014 Notes for Lecture 7 1 Approxmate Countng wt an N oracle We complete te proof of te followng result: Teorem 1 For every

More information

Communication Complexity 16:198: February Lecture 4. x ij y ij

Communication Complexity 16:198: February Lecture 4. x ij y ij Communcaton Complexty 16:198:671 09 February 2010 Lecture 4 Lecturer: Troy Lee Scrbe: Rajat Mttal 1 Homework problem : Trbes We wll solve the thrd queston n the homework. The goal s to show that the nondetermnstc

More information

P A = (P P + P )A = P (I P T (P P ))A = P (A P T (P P )A) Hence if we let E = P T (P P A), We have that

P A = (P P + P )A = P (I P T (P P ))A = P (A P T (P P )A) Hence if we let E = P T (P P A), We have that Backward Error Analyss for House holder Reectors We want to show that multplcaton by householder reectors s backward stable. In partcular we wsh to show fl(p A) = P (A) = P (A + E where P = I 2vv T s the

More information

Randić Energy and Randić Estrada Index of a Graph

Randić Energy and Randić Estrada Index of a Graph EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL

More information

arxiv: v1 [quant-ph] 6 Sep 2007

arxiv: v1 [quant-ph] 6 Sep 2007 An Explct Constructon of Quantum Expanders Avraham Ben-Aroya Oded Schwartz Amnon Ta-Shma arxv:0709.0911v1 [quant-ph] 6 Sep 2007 Abstract Quantum expanders are a natural generalzaton of classcal expanders.

More information

Mining Data Streams-Estimating Frequency Moment

Mining Data Streams-Estimating Frequency Moment Mnng Data Streams-Estmatng Frequency Moment Barna Saha October 26, 2017 Frequency Moment Computng moments nvolves dstrbuton of frequences of dfferent elements n the stream. Frequency Moment Computng moments

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

MATH 5707 HOMEWORK 4 SOLUTIONS 2. 2 i 2p i E(X i ) + E(Xi 2 ) ä i=1. i=1

MATH 5707 HOMEWORK 4 SOLUTIONS 2. 2 i 2p i E(X i ) + E(Xi 2 ) ä i=1. i=1 MATH 5707 HOMEWORK 4 SOLUTIONS CİHAN BAHRAN 1. Let v 1,..., v n R m, all lengths v are not larger than 1. Let p 1,..., p n [0, 1] be arbtrary and set w = p 1 v 1 + + p n v n. Then there exst ε 1,..., ε

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

On some variants of Jensen s inequality

On some variants of Jensen s inequality On some varants of Jensen s nequalty S S DRAGOMIR School of Communcatons & Informatcs, Vctora Unversty, Vc 800, Australa EMMA HUNT Department of Mathematcs, Unversty of Adelade, SA 5005, Adelade, Australa

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Lecture 3 January 31, 2017

Lecture 3 January 31, 2017 CS 224: Advanced Algorthms Sprng 207 Prof. Jelan Nelson Lecture 3 January 3, 207 Scrbe: Saketh Rama Overvew In the last lecture we covered Y-fast tres and Fuson Trees. In ths lecture we start our dscusson

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N)

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N) SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) S.BOUCKSOM Abstract. The goal of ths note s to present a remarably smple proof, due to Hen, of a result prevously obtaned by Gllet-Soulé,

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data

More information

Another converse of Jensen s inequality

Another converse of Jensen s inequality Another converse of Jensen s nequalty Slavko Smc Abstract. We gve the best possble global bounds for a form of dscrete Jensen s nequalty. By some examples ts frutfulness s shown. 1. Introducton Throughout

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information