Matrix Concentration. Nick Harvey, University of British Columbia
1 Matrix Concentration. Nick Harvey, University of British Columbia
2 The Problem. Given any random n×n symmetric matrices Y_1, …, Y_k, show that Σ_i Y_i is probably close to E[Σ_i Y_i]. Why? It is a matrix generalization of the Chernoff bound. Much research concerns the eigenvalues of a random matrix with independent entries; this setting is more general.
3 Chernoff/Hoeffding Bound. Theorem: Let Y_1, …, Y_k be independent random scalars in [0, R]. Let Y = Σ_i Y_i. Suppose that μ_L ≤ E[Y] ≤ μ_U. Then:
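The displayed inequality on this slide was an image and did not survive transcription; the following is the standard scalar Chernoff bound matching these hypotheses (a reconstruction of the usual statement, not necessarily the slide's own typesetting):

```latex
\Pr\bigl[Y \ge (1+\varepsilon)\,\mu_U\bigr] \;\le\; \exp\!\Bigl(-\tfrac{\varepsilon^2 \mu_U}{3R}\Bigr),
\qquad
\Pr\bigl[Y \le (1-\varepsilon)\,\mu_L\bigr] \;\le\; \exp\!\Bigl(-\tfrac{\varepsilon^2 \mu_L}{2R}\Bigr),
\qquad 0 < \varepsilon \le 1.
```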
4 Rudelson's Sampling Lemma. Theorem: [Rudelson '99] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices of size n×n s.t. E[Y_i] = I and ‖Y_i‖ ≤ R. Let Y = Σ_i Y_i, so E[Y] = kI. Then … Example: Balls and bins. Throw k balls uniformly into n bins; Y_i is the rank-1 indicator of the bin receiving ball i, scaled so that E[Y_i] = I. If k = O(n log n / ε²), all bins receive the same load up to a factor 1 ± ε.
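A minimal numerical sketch of the balls-and-bins instance, assuming the natural instantiation Y_i = n·e_b e_b^T where b is the bin of ball i (so E[Y_i] = I and ‖Y_i‖ = n = R); the sizes and the constant 8 are illustrative choices, not taken from the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                 # number of bins
eps = 0.25
k = int(8 * n * np.log(n) / eps**2)    # k = O(n log n / eps^2) balls

bins = rng.integers(0, n, size=k)      # bin receiving each ball
counts = np.bincount(bins, minlength=n)

# Under Y_i = n * e_b e_b^T, the sum S = sum_i Y_i is diagonal with entries
# n * counts[b], so its eigenvalues are exactly those entries and E[S] = k * I.
eigs = n * counts
print("eigenvalues of the sum lie in [%d, %d]; target k = %d" % (eigs.min(), eigs.max(), k))
print("within (1 +/- eps) k ?", (1 - eps) * k <= eigs.min() and eigs.max() <= (1 + eps) * k)
```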
5 Rudelson's Sampling Lemma. Theorem: [Rudelson '99] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices of size n×n s.t. E[Y_i] = I and ‖Y_i‖ ≤ R. Let Y = Σ_i Y_i, so E[Y] = kI. Then … Pros: we've generalized to PSD matrices. Mild issue: we assume E[Y_i] = I. Cons: the Y_i's must be identically distributed; rank-1 matrices only.
6 Rudelson's Sampling Lemma. Theorem: [Rudelson-Vershynin '07] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices s.t. E[Y_i] = I and ‖Y_i‖ ≤ R. Let Y = Σ_i Y_i, so E[Y] = kI. Then … Pros: we've generalized to PSD matrices. Mild issue: we assume E[Y_i] = I. Cons: the Y_i's must be identically distributed; rank-1 matrices only.
7 Rudelson's Sampling Lemma. Theorem: [Rudelson-Vershynin '07] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices s.t. E[Y_i] = I. Let Y = Σ_i Y_i, so E[Y] = kI. Assume Y_i ≼ R·I. Then … Notation: A ≼ B means B − A is PSD; αI ≼ A ≼ βI means all eigenvalues of A lie in [α, β]. Mild issue: we assume E[Y_i] = I.
8 Rudelson's Sampling Lemma. Theorem: [Rudelson-Vershynin '07] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices. Let Z = E[Y_i] and Y = Σ_i Y_i, so E[Y] = kZ. Assume Y_i ≼ R·Z. Then … Proof: apply the previous theorem to { Z^{-1/2} Y_i Z^{-1/2} : i = 1, …, k }. Use the fact that A ≼ B ⟺ Z^{-1/2} A Z^{-1/2} ≼ Z^{-1/2} B Z^{-1/2}. So (1-ε) kZ ≼ Σ_i Y_i ≼ (1+ε) kZ ⟺ (1-ε) kI ≼ Σ_i Z^{-1/2} Y_i Z^{-1/2} ≼ (1+ε) kI.
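A quick numerical spot check (not part of the slides) that the congruence A ↦ Z^{-1/2} A Z^{-1/2} preserves the Loewner order, which is the fact this reduction relies on; the matrices and dimension below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_psd(scale=1.0):
    M = rng.standard_normal((n, n))
    return scale * (M @ M.T)

A = random_psd()
B = A + random_psd(0.5)           # B - A is PSD by construction, so A ⪯ B
Z = random_psd() + np.eye(n)      # positive definite

# Z^{-1/2} via the eigendecomposition of Z
w, V = np.linalg.eigh(Z)
Z_inv_half = V @ np.diag(w ** -0.5) @ V.T

# If A ⪯ B, then Z^{-1/2}(B - A)Z^{-1/2} should be PSD as well.
gap = Z_inv_half @ (B - A) @ Z_inv_half
print("min eigenvalue of whitened gap:", np.linalg.eigvalsh(gap).min())  # >= 0 up to roundoff
```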
9 Ahlswede-Winter Inequality. Theorem: [Ahlswede-Winter '02] Let Y_1, …, Y_k be i.i.d. PSD matrices of size n×n. Let Z = E[Y_i] and Y = Σ_i Y_i, so E[Y] = kZ. Assume Y_i ≼ R·Z. Then … Pros: we've removed the rank-1 assumption, and the proof is much easier than Rudelson's. Cons: still need the Y_i's to be identically distributed. (More precisely, poor results unless E[Y_a] = E[Y_b].)
10 Tropp's User-Friendly Tail Bound. Theorem: [Tropp '12] Let Y_1, …, Y_k be independent PSD matrices of size n×n s.t. ‖Y_i‖ ≤ R. Let Y = Σ_i Y_i. Suppose μ_L·I ≼ E[Y] ≼ μ_U·I. Then … Pros: the Y_i's do not need to be identically distributed; Poisson-like bound for the right tail; proof not difficult (but uses Lieb's inequality). Mild issue: poor results unless λ_min(E[Y]) ≈ λ_max(E[Y]).
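The bound displayed on this slide was lost in transcription; under exactly these hypotheses, the matrix Chernoff bound of [Tropp '12] is usually stated as follows (a reconstruction of the standard form, not necessarily the slide's own):

```latex
\Pr\bigl[\lambda_{\min}(Y) \le (1-\varepsilon)\,\mu_L\bigr]
  \;\le\; n\left[\frac{e^{-\varepsilon}}{(1-\varepsilon)^{1-\varepsilon}}\right]^{\mu_L/R},
\qquad
\Pr\bigl[\lambda_{\max}(Y) \ge (1+\varepsilon)\,\mu_U\bigr]
  \;\le\; n\left[\frac{e^{\varepsilon}}{(1+\varepsilon)^{1+\varepsilon}}\right]^{\mu_U/R}.
```

For 0 < ε ≤ 1 these simplify to n·exp(−ε²μ_L/2R) and n·exp(−ε²μ_U/3R); for large ε the right tail decays like a Poisson tail, which is the "Poisson-like bound" mentioned above.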
11 Tropp's User-Friendly Tail Bound. Theorem: [Tropp '12] Let Y_1, …, Y_k be independent PSD matrices of size n×n. Let Y = Σ_i Y_i and Z = E[Y]. Suppose Y_i ≼ R·Z. Then …
12 Tropp's User-Friendly Tail Bound. Theorem: [Tropp '12] Let Y_1, …, Y_k be independent PSD matrices of size n×n s.t. ‖Y_i‖ ≤ R. Let Y = Σ_i Y_i. Suppose μ_L·I ≼ E[Y] ≼ μ_U·I. Then … Example: Balls and bins. For b = 1, …, n and t = 1, …, 8 log(n)/ε²: with prob ½, throw a ball into bin b. Let Y_{b,t} be the rank-1 indicator of bin b with prob ½, and 0 otherwise.
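How the theorem applies to this example, assuming the indicator is Y_{b,t} = e_b e_b^T (a natural reading of the slide, not something it states explicitly), with T = 8 log(n)/ε² trials per bin and the simplified exponential forms of the bound (valid for ε ≤ 1):

```latex
\mathbb{E}\Bigl[\textstyle\sum_{b,t} Y_{b,t}\Bigr] = \tfrac{T}{2}\,I = \tfrac{4\log n}{\varepsilon^2}\,I
  \quad\Longrightarrow\quad \mu_L = \mu_U = \tfrac{4\log n}{\varepsilon^2}, \quad R = 1,

\Pr\bigl[\lambda_{\min}(Y) \le (1-\varepsilon)\mu_L\bigr] \le n\, e^{-\varepsilon^2 \mu_L / 2} = n \cdot n^{-2} = \tfrac{1}{n},
\qquad
\Pr\bigl[\lambda_{\max}(Y) \ge (1+\varepsilon)\mu_U\bigr] \le n\, e^{-\varepsilon^2 \mu_U / 3} = n^{-1/3}.
```

So every bin's load is within a 1 ± ε factor of its mean with high probability, even though the Y_{b,t} are independent but not identically distributed.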
13 Additive Error. Previous theorems give multiplicative error: (1-ε) E[Σ_i Y_i] ≼ Σ_i Y_i ≼ (1+ε) E[Σ_i Y_i]. Additive error is also useful: ‖Σ_i Y_i − E[Σ_i Y_i]‖ ≤ ε. Theorem: [Rudelson & Vershynin '07] Let Y_1, …, Y_k be i.i.d. rank-1 PSD matrices. Let Z = E[Y_i]. Suppose ‖Z‖ ≤ 1 and ‖Y_i‖ ≤ R. Then … Theorem: [Magen & Zouzias '11] If instead rank Y_i …, with k := Θ(R log(r/ε²)/ε²), then …
14 Proof of Ahlswede-Winter. Key idea: bound the matrix moment generating function. Let S_k = Σ_{i=1}^k Y_i. Golden-Thompson inequality: tr e^{A+B} ≤ tr(e^A e^B). Weakness: this step is brutal. By induction, …
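The chain of inequalities displayed on this slide did not survive transcription; the standard Ahlswede-Winter argument it illustrates runs roughly as follows (a sketch under the usual conventions, with θ > 0 a free parameter and n the dimension):

```latex
\Pr\bigl[\lambda_{\max}(S_k) \ge t\bigr]
  \;\le\; e^{-\theta t}\, \mathbb{E}\,\mathrm{tr}\, e^{\theta S_k}
  \;\le\; e^{-\theta t}\, \mathbb{E}\,\mathrm{tr}\bigl(e^{\theta S_{k-1}} e^{\theta Y_k}\bigr)
  \;\le\; e^{-\theta t}\, \bigl\|\mathbb{E}\, e^{\theta Y_k}\bigr\|\, \mathbb{E}\,\mathrm{tr}\, e^{\theta S_{k-1}}
  \;\le\; \cdots
  \;\le\; e^{-\theta t}\, n \prod_{i=1}^{k} \bigl\|\mathbb{E}\, e^{\theta Y_i}\bigr\|.
```

The second inequality is Golden-Thompson (plus independence to split the expectation); the third uses tr(AB) ≤ ‖B‖·tr(A) for PSD A, B. That step is the "brutal" one: it replaces a single spectral quantity of the sum by a product of individual norms, which is what makes the bound weak when the Y_i have very different distributions.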
15 How to improve Ahlswede-Winter? Golden-Thompson inequality: tr e^{A+B} ≤ tr(e^A e^B) for all symmetric matrices A, B. Does not extend to three matrices! tr e^{A+B+C} ≤ tr(e^A e^B e^C) is FALSE. Lieb's inequality: for any symmetric matrix L, the map f : PSD cone → R defined by f(A) = tr exp(L + log(A)) is concave.
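A quick numerical spot check of both facts (not from the slides; it uses SciPy's matrix functions, and the random instances and dimension are arbitrary):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
n = 5

def sym(M):
    return (M + M.T) / 2

A = sym(rng.standard_normal((n, n)))
B = sym(rng.standard_normal((n, n)))

# Golden-Thompson: tr e^{A+B} <= tr(e^A e^B) for symmetric A, B.
print("Golden-Thompson holds:",
      np.trace(expm(A + B)) <= np.trace(expm(A) @ expm(B)) + 1e-9)

# Lieb: f(X) = tr exp(L + log X) is concave on the PSD cone.
# Midpoint-concavity check along one random segment (a spot check, not a proof).
L = sym(rng.standard_normal((n, n)))
def f(X):
    return np.trace(expm(L + logm(X))).real

M1 = rng.standard_normal((n, n)); X1 = M1 @ M1.T + np.eye(n)   # positive definite
M2 = rng.standard_normal((n, n)); X2 = M2 @ M2.T + np.eye(n)
print("Lieb midpoint concavity holds:",
      f((X1 + X2) / 2) >= (f(X1) + f(X2)) / 2 - 1e-9)
```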
16 Beyond the basics. Hoeffding (non-uniform bounds on the Y_i's) [Tropp 12]. Bernstein (uses a bound on Var[Y_i]) [Tropp 12]. Freedman (martingale version of Bernstein) [Tropp 12]. Stein's method (slightly sharper results) [Mackey et al. 12]. Pessimistic estimators for the Ahlswede-Winter inequality [Wigderson-Xiao 08].
17 Summary. We now have a beautiful, powerful, flexible extension of the Chernoff bound to matrices. Ahlswede-Winter has a simple proof; Tropp's inequality is very easy to use. Several important uses to date; hopefully more uses in the future.