Lecture 20 November 7, 2013


CS 229r: Algorithms for Big Data, Fall 2013
Prof. Jelani Nelson
Lecture 20, November 7, 2013
Scribe: Yun William Yu

1 Introduction

Today we're going to go through the analysis of matrix completion. First, though, let's go through the history of prior work on this problem. Recall the setup and model.

Matrix completion setup: we want to recover $M \in \mathbb{R}^{n_1 \times n_2}$, under the assumption that $\mathrm{rank}(M) = r$, where $r$ is small. Only some small subset of the entries $(M_{ij})_{(i,j)\in\Omega}$ is revealed, where $\Omega \subseteq [n_1]\times[n_2]$ and $|\Omega| = m \ll n_1 n_2$.

Model: $m$ times we sample $(i,j)$ uniformly at random and insert it into $\Omega$ (so $\Omega$ is a multiset). Note that the same results hold if we choose entries without replacement, but it's easier to analyze this way. In fact, you can show that if recovery works with replacement, then recovery works without replacement, which makes sense because you'd only be seeing more information about $M$.

We recover $M$ by Nuclear Norm Minimization (NNM): solve the program

$$\min_X \|X\|_* \quad \text{s.t.} \quad \forall (i,j)\in\Omega,\ X_{ij} = M_{ij}.$$

[Recht, Fazel, Parrilo '10] [RFP10] was the first to give rigorous guarantees for NNM. As you'll see on the pset, you can actually solve this in polynomial time since it's an instance of what's known as a semidefinite program.

[Candès, Recht '09] [CR09] was the first paper to show provable guarantees for NNM applied to matrix completion.

There were some quantitative improvements (in the parameters) in two papers: [Candès, Tao '09] [CT10] and [Keshavan, Montanari, Oh '09] [KMO10].

Today we're going to cover an even newer analysis given in [Recht, 2011] [Rec11], which has a couple of advantages. First, it has the laxest of all the conditions. Second, it's also the simplest of all the analyses in these papers. Thus, it's really better in every way. The approach of [Rec11] was inspired by work in quantum tomography [GLF+10]. A more general theorem than the one proven in class today was later proven by Gross [Gross].
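To make the program concrete, here is a minimal sketch of NNM in Python; the choice of cvxpy and the helper name nnm_complete are assumptions on my part (the lecture only says the program is an SDP solvable in polynomial time).

```python
# Minimal sketch of the NNM program above (assumes cvxpy is available; the lecture
# itself only states that the program is a semidefinite program).
import numpy as np
import cvxpy as cp

def nnm_complete(M_obs, mask):
    """M_obs: n1 x n2 array (values outside Omega are ignored); mask: 0/1 array, 1 on revealed entries."""
    X = cp.Variable(M_obs.shape)
    objective = cp.Minimize(cp.normNuc(X))                             # ||X||_* (nuclear / trace norm)
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M_obs)]   # X_ij = M_ij for (i,j) in Omega
    cp.Problem(objective, constraints).solve()
    return X.value
```

On a low-rank, incoherent $M$ with enough revealed entries, the returned matrix should match $M$ up to solver tolerance, which is exactly what the theorem below guarantees.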

2 Theorem Statement

We're almost ready to formally state the main theorem, but we need a couple of definitions first.

Definition 1. Let $M = U\Sigma V^*$ be the singular value decomposition. (Note that $U \in \mathbb{R}^{n_1\times r}$ and $V \in \mathbb{R}^{n_2\times r}$.)

Definition 2. Define the incoherence of the subspace $U$ as $\mu(U) = \frac{n_1}{r}\max_i \|P_U e_i\|^2$, where $P_U$ is the projection onto $U$. Similarly, the incoherence of $V$ is $\mu(V) = \frac{n_2}{r}\max_i \|P_V e_i\|^2$, where $P_V$ is the projection onto $V$.

Definition 3. $\mu_0 \stackrel{\text{def}}{=} \max\{\mu(U), \mu(V)\}$.

Definition 4. $\mu_1 \stackrel{\text{def}}{=} \|UV^*\|_\infty \sqrt{n_1 n_2 / r}$, where $\|UV^*\|_\infty$ is the largest magnitude of an entry of $UV^*$.

Theorem 1. If $m \gtrsim \max\{\mu_1^2, \mu_0\}\, n_2\, r \log^2(n_2)$, then with high probability $M$ is the unique solution to the semidefinite program $\min \|X\|_*$ s.t. $\forall (i,j)\in\Omega,\ X_{ij} = M_{ij}$.

Note that $1 \le \mu_0 \le n_2/r$. The way $\mu_0$ can be $n_2/r$ is if a standard basis vector appears in a column of $V$, and the way $\mu_0$ can get all the way down to $1$ is the best-case scenario where all the entries of $V$ are like $1/\sqrt{n_2}$ and all the entries of $U$ are like $1/\sqrt{n_1}$, for example if you took a Fourier matrix and cut off some of its columns. Thus, the condition on $m$ is a good bound if the matrix has low incoherence.

One might wonder about the necessity of all the funny terms in the condition on $m$. Unfortunately, [Candès, Tao '09] [CT10] showed that $m \gtrsim \mu_0 n_2 r \log(n_2)$ is needed: if you want any decent chance of recovering $M$ over the random choice of $\Omega$ using this SDP, then you need to sample at least that many entries. The condition isn't completely tight because of the square in the log factor and the dependence on $\mu_1^2$. However, one can show that $\mu_1^2 \le \mu_0^2 r$.

Just like in compressed sensing, there are also some iterative algorithms to recover $M$, but we're not going to analyze them in class. For example, there is the SpaRSA algorithm given in [Wright, Nowak, Figueiredo '09] [WNF09] (thanks to Ben Recht for pointing this out to me). That algorithm roughly looks as follows when one wants to minimize $\|AX - M\|_F^2 + \mu\|X\|_*$: pick $X_0$ and a step size $t$, and iterate (a)-(d) some number of times (a runnable sketch appears just below):

(a) $Z = X_k - t\, A^T(A X_k - M)$
(b) $[U, \mathrm{diag}(s), V] = \mathrm{svd}(Z)$
(c) $r = \max(s - \mu t, 0)$
(d) $X_{k+1} = U\, \mathrm{diag}(r)\, V^T$

As an aside, trace-norm minimization is actually tolerant to noise, but I'm not going to cover that.
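Two small, purely illustrative NumPy sketches may help make the above concrete. First, the incoherence quantities of Definitions 2-4, computed directly from the rank-$r$ SVD (function names are my own):

```python
# Compute mu(U), mu(V), mu_0, mu_1 from Definitions 2-4 (illustrative sketch).
import numpy as np

def incoherence(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T                       # U is n1 x r, V is n2 x r
    n1, n2 = M.shape
    mu_U = (n1 / r) * np.max(np.sum(U ** 2, axis=1))   # ||P_U e_i||^2 = squared norm of i-th row of U
    mu_V = (n2 / r) * np.max(np.sum(V ** 2, axis=1))
    mu0 = max(mu_U, mu_V)
    mu1 = np.max(np.abs(U @ V.T)) * np.sqrt(n1 * n2 / r)
    return mu0, mu1
```

Second, the iteration (a)-(d), specialized to matrix completion where $A = R_\Omega$, so that $A^T(AX_k - M)$ is just the entrywise masked residual; the step size, iteration count, and names are assumptions, not values from [WNF09]:

```python
# Singular value thresholding iteration (a)-(d), specialized to A = R_Omega (sketch).
import numpy as np

def sparsa_iterate(M_obs, mask, mu, t=1.0, iters=200):
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        Z = X - t * mask * (X - M_obs)                       # (a) gradient step on ||AX - M||_F^2
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)     # (b) SVD of Z
        r = np.maximum(s - mu * t, 0.0)                      # (c) soft-threshold the singular values
        X = (U * r) @ Vt                                     # (d) X_{k+1} = U diag(r) V^T
    return X
```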

3 Analysis

The way the analysis is going to go is that we're going to condition on lots of good events all happening, and if those good events happen, then the minimization works. The way I'm going to structure the proof is: I'll first state what all those events are, then I'll show why those events make the minimization work, and finally I'll bound the probability of those events not happening.

3.1 Background and more notation

Before I do that, I want to say one thing about the trace norm. How many people are familiar with dual norms? How many people have heard of the Hahn-Banach theorem? OK, good.

Definition 5. $\langle A, B\rangle \stackrel{\text{def}}{=} \mathrm{Tr}(A^* B) = \sum_{i,j} A_{ij} B_{ij}$

Claim 1. The dual of the trace norm is the operator norm:

$$\|A\|_* = \sup_{B:\ \|B\| \le 1} \langle A, B\rangle.$$

This makes sense because the dual of $\ell_1$ for vectors is $\ell_\infty$, and this sort of looks like that: the trace norm and operator norm are respectively the $\ell_1$ and $\ell_\infty$ norms of the singular value vector. More rigorously, we can prove it by proving the inequality in both directions. One direction is not so hard, but the other requires the following lemma.

Lemma 1.

$$\underbrace{\|A\|_*}_{(1)} \;=\; \underbrace{\min_{X,Y:\ A = XY} \|X\|_F \|Y\|_F}_{(2)} \;=\; \underbrace{\min_{X,Y:\ A = XY} \tfrac{1}{2}\big(\|X\|_F^2 + \|Y\|_F^2\big)}_{(3)}$$

Proof of lemma.

(2) $\le$ (3): AM-GM inequality: $xy \le \frac{1}{2}(x^2 + y^2)$.

(3) $\le$ (1): We basically just need to exhibit an $X$ and $Y$ that give something that is at most the trace norm. Set $X = U\Sigma^{1/2}$ and $Y = \Sigma^{1/2}V^*$. (In general, given $f : \mathbb{R}^+ \to \mathbb{R}^+$, one defines $f(A) = U f(\Sigma) V^*$, i.e. write the SVD of $A$ and apply $f$ to each diagonal entry of $\Sigma$; here we are splitting $A$ into two square-root factors.) You can easily check that $XY = A$ and that $\|X\|_F^2 = \|Y\|_F^2$ is exactly the trace norm of $A$.

(1) $\le$ (2): Let $X, Y$ be any matrices such that $A = XY$. Then

$$\|A\|_* = \|XY\|_* = \sup_{\substack{\{a_i\}\ \text{orthonormal basis}\\ \{b_i\}\ \text{orthonormal basis}}} \sum_i \langle XY a_i, b_i\rangle,$$

which can be seen to be true by letting $a_i = v_i$ and $b_i = u_i$ (from the SVD), when we get equality. Continuing,

$$\sup \sum_i \langle XY a_i, b_i\rangle = \sup \sum_i \langle Y a_i, X^* b_i\rangle \le \sup \sum_i \|Y a_i\|\,\|X^* b_i\| \quad \text{(Cauchy-Schwarz)}$$

$$\le \sup \Big(\sum_i \|Y a_i\|^2\Big)^{1/2}\Big(\sum_i \|X^* b_i\|^2\Big)^{1/2} = \|X\|_F \|Y\|_F,$$

because $\{a_i\},\{b_i\}$ are orthonormal bases and the Frobenius norm is rotationally invariant.
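Claim 1 can also be sanity-checked numerically; the following snippet is illustrative only and plays no role in the proof.

```python
# Numerical check of Claim 1: <A, B> is maximized over ||B|| <= 1 at B = sum_i u_i v_i^T,
# where it equals the trace norm of A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
trace_norm = s.sum()
assert np.isclose(np.trace(A.T @ (U @ Vt)), trace_norm)   # B = U V^T attains the sup

for _ in range(1000):                                      # no feasible B does better
    G = rng.standard_normal((5, 4))
    B = G / np.linalg.norm(G, 2)                           # rescale so the operator norm is 1
    assert np.trace(A.T @ B) <= trace_norm + 1e-9
```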

Proof of claim.

Part 1: $\|A\|_* \le \sup_{\|B\| = 1} \langle A, B\rangle$. We show this by writing $A = U\Sigma V^*$ and taking $B = \sum_i u_i v_i^*$. That gives something on the right that is at least the trace norm.

As an aside, in general this is how dual norms are defined: given a norm $\|\cdot\|_X$, the dual norm is defined by $\|Z\|_{X^*} = \sup_{\|Y\|_X \le 1}\langle Z, Y\rangle$. In this case, we're proving that the dual of the operator norm is the trace norm. Or, for example, the dual norm of the Schatten-$p$ norm is the Schatten-$q$ norm, where $\frac{1}{p} + \frac{1}{q} = 1$. As a further aside, if $X$ is a normed space with norm $\|\cdot\|$, then $X^*$ is the set of all linear functionals $\lambda_x : X \to \mathbb{R}$ for $x \in X$, with dual norm $\|\lambda_x\| = \sup_{y \in X,\ \|y\|\le 1}\langle x, y\rangle$. One can then map $x \in X$ to $(X^*)^*$ by the evaluation map $f : X \to (X^*)^*$: for $\lambda \in X^*$, $f(x)(\lambda) = \lambda(x)$. Then $f$ is injective and the norms of $x$ and $f(x)$ are equal by the Hahn-Banach theorem, though $f$ need not be surjective (in the case where it is, $X$ is called a reflexive Banach space). You can learn more on Wikipedia if you want, or take a functional analysis class.

Part 2: $\langle A, B\rangle \le \|A\|_*$ for all $B$ with $\|B\| = 1$. We show this using the lemma. Write $A = XY$ with $\|A\|_* = \|X\|_F\|Y\|_F$ (the lemma guarantees that such $X$ and $Y$ exist). Write $B = \sum_i \sigma_i a_i b_i^*$ with $\sigma_i \le 1$ for all $i$. Then, using a similar argument to the one above,

$$\langle A, B\rangle = \Big\langle XY, \sum_i \sigma_i a_i b_i^*\Big\rangle = \sum_i \sigma_i \langle Y a_i, X^* b_i\rangle \le \sum_i |\langle Y a_i, X^* b_i\rangle| \le \|X\|_F\|Y\|_F = \|A\|_*,$$

which concludes the proof of the claim.

Recall that the set of $n_1\times n_2$ matrices is itself a vector space. I'm going to decompose that vector space into $T$ and its orthogonal complement $T^\perp$ by defining the following projection operators:

$$P_{T^\perp}(Z) \stackrel{\text{def}}{=} (I - P_U)\, Z\, (I - P_V), \qquad P_T(Z) \stackrel{\text{def}}{=} Z - P_{T^\perp}(Z).$$

So basically, the matrices that are in the vector space $T^\perp$ are the matrices that can be written as a sum of rank-one matrices $a_i b_i^*$ where the $a_i$'s are orthogonal to all the $u$'s and the $b_i$'s are orthogonal to all the $v$'s.
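Here is a small NumPy sketch of these two projections (illustrative; $U$ and $V$ are the factors from Definition 1, and the function names are my own):

```python
# Projections onto T-perp and T as defined above, with P_U = U U^T and P_V = V V^T.
import numpy as np

def P_T_perp(Z, U, V):
    PU, PV = U @ U.T, V @ V.T
    I1, I2 = np.eye(U.shape[0]), np.eye(V.shape[0])
    return (I1 - PU) @ Z @ (I2 - PV)

def P_T(Z, U, V):
    return Z - P_T_perp(Z, U, V)
```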

Also define $R_\Omega(Z)$ as the matrix that keeps only the entries of $Z$ in $\Omega$, each multiplied by its multiplicity in $\Omega$. If you think of the operator $R_\Omega : \mathbb{R}^{n_1 n_2} \to \mathbb{R}^{n_1 n_2}$ as a matrix, it is a diagonal matrix with the multiplicities of the entries in $\Omega$ on the diagonal.

3.2 Good events

With high probability (probability $1 - \frac{1}{\mathrm{poly}(n_2)}$, and you can make the $\frac{1}{\mathrm{poly}(n_2)}$ factor decay as much as you want by increasing the constant in front of $m$), all of the following events happen:

1. $\left\|\frac{n_1 n_2}{m} P_T R_\Omega P_T - P_T\right\| \lesssim \sqrt{\frac{\mu_0 r (n_1+n_2)\log(n_2)}{m}} \le \frac{1}{2}$ (this is a deviation inequality from the expectation, over the randomness coming from $\Omega$)

2. $\left\|\left(\frac{n_1 n_2}{m} R_\Omega - I\right) Z\right\| \lesssim n_1 n_2 \sqrt{\frac{\log(n_1+n_2)}{m}}\, \|Z\|_\infty$ (this is another deviation inequality from the expectation)

3. If $Z \in T$ then $\left\|\frac{n_1 n_2}{m} P_T R_\Omega(Z) - Z\right\|_\infty \lesssim \sqrt{\frac{\mu_0 r n_2 \log(n_2)}{m}}\, \|Z\|_\infty$

4. $\|R_\Omega\| \lesssim \log(n_2)$

   This one is actually really easy (also the shortest): it's just balls and bins. We've already said $R_\Omega$ is a diagonal matrix, so the operator norm is just the largest diagonal entry. Imagine we have $m$ balls and we're throwing them independently at random into $n_1 n_2$ bins, namely the diagonal entries; the operator norm is just how loaded the maximum bin is. In particular, $m < n_1 n_2$, or else we wouldn't be doing matrix completion since we'd have the whole matrix. In general, when you throw $t$ balls into $t$ bins, the maximum load is at most $O(\log t)$ by the Chernoff bound. In fact, it's at most $O(\log t / \log\log t)$, but who cares, since that would only save us an extra $\log\log$ somewhere. Actually, I'm not even sure it would save us that, since there are other logs that come into play. (A quick simulation of this event appears right after the list.)

5. $\exists Y$ in $\mathrm{range}(R_\Omega)$ s.t.
   (5a) $\|P_T(Y) - UV^*\|_F \le \sqrt{\frac{r}{2 n_2}}$
   (5b) $\|P_{T^\perp}(Y)\| < \frac{1}{2}$
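The balls-and-bins claim in event (4) is easy to see empirically; here is a tiny simulation (the parameters are arbitrary, chosen only for illustration):

```python
# ||R_Omega|| is the largest multiplicity of any sampled entry; compare it to log(n2).
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m = 200, 300, 20_000
draws = rng.integers(0, n1 * n2, size=m)                 # m i.i.d. uniform samples (a multiset Omega)
multiplicity = np.bincount(draws, minlength=n1 * n2)     # diagonal of R_Omega
print(multiplicity.max(), np.log(n2))                    # max load vs. log(n2)
```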

3.3 Recovery conditioned on good events

Now that we've stated all these events, let's show that they imply that trace norm minimization actually works. We want to make sure that

$$\arg\min_{X:\ R_\Omega(X) = R_\Omega(M)} \|X\|_*$$

is unique and equal to $M$.

Let $Z \in \ker(R_\Omega)$, $Z \ne 0$; we want to show $\|M + Z\|_* > \|M\|_*$. First we want to argue that $\|P_T(Z)\|_F$ cannot be too big.

Lemma 2. $\|P_T(Z)\|_F \le \sqrt{\frac{n_2}{2r}}\,\|P_{T^\perp}(Z)\|_F$

Proof. Since $Z \in \ker(R_\Omega)$,

$$0 = \|R_\Omega(Z)\|_F \ge \|R_\Omega(P_T(Z))\|_F - \|R_\Omega(P_{T^\perp}(Z))\|_F,$$

so $\|R_\Omega(P_T(Z))\|_F \le \|R_\Omega(P_{T^\perp}(Z))\|_F$. Also, since $R_\Omega$ is diagonal with nonnegative integer entries, $R_\Omega^2 \succeq R_\Omega$, so

$$\|R_\Omega(P_T(Z))\|_F^2 = \langle R_\Omega P_T Z, R_\Omega P_T Z\rangle \ge \langle P_T Z, R_\Omega P_T Z\rangle = \langle P_T Z, P_T R_\Omega P_T\, P_T Z\rangle$$

$$= \frac{m}{n_1 n_2}\|P_T Z\|_F^2 + \Big\langle P_T Z, \Big(P_T R_\Omega P_T - \frac{m}{n_1 n_2} P_T\Big) P_T Z\Big\rangle \ge \frac{m}{2 n_1 n_2}\|P_T Z\|_F^2,$$

where the last step uses good event (1). Also, by good event (4),

$$\|R_\Omega(P_{T^\perp}(Z))\|_F^2 \le \|R_\Omega\|^2\, \|P_{T^\perp}(Z)\|_F^2 \lesssim \log^2(n_2)\, \|P_{T^\perp}(Z)\|_F^2.$$

To summarize: combining all the inequalities together, and then making use of our choice of $m$,

$$\frac{m}{2 n_1 n_2}\|P_T(Z)\|_F^2 \lesssim \log^2(n_2)\,\|P_{T^\perp}(Z)\|_F^2 \quad\Longrightarrow\quad \|P_T(Z)\|_F \lesssim \sqrt{\frac{n_1 n_2 \log^2(n_2)}{m}}\,\|P_{T^\perp}(Z)\|_F \le \sqrt{\frac{n_2}{2r}}\,\|P_{T^\perp}(Z)\|_F.$$

Now pick $U_\perp, V_\perp$ such that $\langle U_\perp V_\perp^*, P_{T^\perp}(Z)\rangle = \|P_{T^\perp}(Z)\|_*$ and such that $[U, U_\perp]$ and $[V, V_\perp]$ are orthogonal matrices. We know from Claim 1 that the trace norm is exactly the sup of the inner product over all matrices $B$ with $\|B\| \le 1$. The $B$ that achieves the sup has all singular values equal to $1$, and since $P_{T^\perp}(Z)$ lies in $T^\perp$, $B$ should also lie in $T^\perp$, so we can take $B = U_\perp V_\perp^*$. Now we have a long chain of inequalities to show that the trace norm of any $M + Z$ is greater than the trace norm of $M$:

$$\|M + Z\|_* \ge \langle UV^* + U_\perp V_\perp^*,\ M + Z\rangle \qquad \text{by Claim 1}$$
$$= \|M\|_* + \langle UV^* + U_\perp V_\perp^*,\ Z\rangle \qquad \text{since } M \perp U_\perp V_\perp^*$$
$$= \|M\|_* + \langle UV^* + U_\perp V_\perp^* - Y,\ Z\rangle \qquad \text{since } Z \in \ker(R_\Omega) \text{ and } Y \in \mathrm{range}(R_\Omega)$$
$$= \|M\|_* + \langle UV^* - P_T(Y),\ P_T(Z)\rangle + \langle U_\perp V_\perp^* - P_{T^\perp}(Y),\ P_{T^\perp}(Z)\rangle \qquad \text{decomposition into } T \text{ and } T^\perp$$
$$\ge \|M\|_* - \|UV^* - P_T(Y)\|_F\,\|P_T(Z)\|_F + \|P_{T^\perp}(Z)\|_* - \|P_{T^\perp}(Y)\|\,\|P_{T^\perp}(Z)\|_*,$$

where the last step uses $\langle x, y\rangle \le \|x\|_2\|y\|_2$ on the first term, our choice of $U_\perp V_\perp^*$ on the second, and the norm inequality $\langle P_{T^\perp}(Y), P_{T^\perp}(Z)\rangle \le \|P_{T^\perp}(Y)\|\,\|P_{T^\perp}(Z)\|_*$ on the third.

Note that the trace norm is always at least the Frobenius norm, so $\|P_{T^\perp}(Z)\|_* \ge \|P_{T^\perp}(Z)\|_F$. We want to ensure that this positive term is strictly bigger than the two negative terms. By condition (5b), we ensure that $\|P_{T^\perp}(Y)\|\,\|P_{T^\perp}(Z)\|_* < \frac{1}{2}\|P_{T^\perp}(Z)\|_*$. By condition (5a) and Lemma 2, we can also ensure that $\|UV^* - P_T(Y)\|_F\,\|P_T(Z)\|_F \le \sqrt{\frac{r}{2n_2}}\,\|P_T(Z)\|_F \le \frac{1}{2}\|P_{T^\perp}(Z)\|_F$. Thus, back to the main chain:

$$\|M + Z\|_* > \|M\|_* + \frac{1}{2}\|P_{T^\perp}(Z)\|_F - \sqrt{\frac{r}{2n_2}}\,\|P_T(Z)\|_F \ge \|M\|_*,$$

where the strict inequality uses $P_{T^\perp}(Z) \ne 0$ (if it were $0$, Lemma 2 would force $P_T(Z) = 0$ and hence $Z = 0$). Hence, when all of the good events hold, minimizing the trace norm recovers $M$.

3.4 Probability of good events holding

Unfortunately, we do not have enough time to go through the full analysis. We might overflow some of this into next lecture, but for now let's introduce the noncommutative Bernstein inequality that we use to get conditions (1) and (2). As an aside, I tend to call all of these inequalities Chernoff inequalities, since they're all quite similar, but this one really should have a different name, because the proof of this matrix Bernstein inequality is very different from the proof of the ordinary Chernoff bound.

Theorem 2 (Noncommutative Bernstein inequality). Suppose $X_1, \ldots, X_N$ are random matrices of the same dimensions with $\mathbb{E} X_i = 0$ such that

1. $\|X_i\| \le M$ for all $i$ with probability $1$
2. $\sigma_i^2 \stackrel{\text{def}}{=} \max\{\|\mathbb{E} X_i^* X_i\|, \|\mathbb{E} X_i X_i^*\|\}$

Then

$$\mathbb{P}\left(\Big\|\sum_{i=1}^N X_i\Big\| > \lambda\right) \le (n_1 + n_2)\,\max\left\{\exp\left(-\frac{C\lambda^2}{\sum_i \sigma_i^2}\right),\ \exp\left(-\frac{C\lambda}{M}\right)\right\}.$$

As mentioned, conditions (2) and (3) were deviation inequalities from an expectation, so we can get them using Bernstein on random matrices over the distribution of $\Omega$ (subtracting out the expectation to make the summands mean zero where appropriate). As an additional aside, conditions (4), (5), and (1) were the ones used in the proofs above; however, we only need conditions (2) and (3) to show (5). Next time, if we have time, we might say something about proving (5).
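As a rough empirical illustration of the scale in Theorem 2 (not a proof, and the choice of random sign matrices is my own), one can compare the operator norm of a sum of i.i.d. mean-zero matrices to the $\sqrt{\sum_i \sigma_i^2 \log n}$ scale:

```python
# Sum N i.i.d. random sign matrices (mean zero, ||E X_i X_i^T|| = n) and compare
# ||sum_i X_i|| to the Bernstein scale sqrt(sum_i sigma_i^2 * log n). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, N = 100, 500
S = sum(rng.choice([-1.0, 1.0], size=(n, n)) for _ in range(N))
sigma2_sum = N * n                                        # each sigma_i^2 = ||E X_i X_i^T|| = n
print(np.linalg.norm(S, 2), np.sqrt(sigma2_sum * np.log(2 * n)))
```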

4 Concluding remarks

Why would you think of trace norm minimization as solving matrix completion? Analogously, why would you use $\ell_1$ minimization for compressed sensing? In some sense these two questions are very similar: rank is the support size of the singular value vector, and the trace norm is the $\ell_1$ norm of the singular value vector, so the two settings are very analogous. $\ell_1$ minimization seems like a natural choice since, among all the $\ell_p$ norms, it is the closest convex function to support size (and being convex allows us to solve the program in polynomial time).

References

[CR09] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics 9 (2009), no. 6.

[CT10] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Transactions on Information Theory 56 (2010), no. 5.

[Gross] David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory 57 (2011).

[GLF+10] David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert. Quantum state tomography via compressed sensing. Physical Review Letters 105 (2010), no. 15.

[KMO10] Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. The Journal of Machine Learning Research 99 (2010).

[Rec11] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research 12 (2011).

[RFP10] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52 (2010), no. 3.

[WNF09] Stephen J. Wright, Robert D. Nowak, and Mário A. T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing 57 (2009), no. 7.
