Invertibility of random matrices


1 Roman Vershynin, University of Michigan. February 2011, Princeton University

2 Origins of Random Matrix Theory
Statistics (Wishart matrices): PCA of a multivariate Gaussian distribution. [Figure: Gaël Varoquaux's blog, gael-varoquaux.info]
Physics (Wigner matrices): slow neutron resonance on thorium-232 and uranium-238 nuclei. [Figure: Rahn et al., Phys. Rev. C 6 (1972), 1854]

3 Statistics: covariance estimation
Basic problem in statistics: estimate the covariance matrix $\Sigma = \mathbb{E}\, XX^T$ of a high-dimensional distribution, where X is a random vector in $\mathbb{R}^n$. What for? Principal Component Analysis (PCA): detect the principal axes along which most dependence occurs. [Figure: PCA of a multivariate Gaussian distribution; Gaël Varoquaux's blog, gael-varoquaux.info]

4 Statistics: covariance estimation
An unbiased estimator of Σ is the sample covariance matrix
$$\Sigma_N = \frac{1}{N} \sum_{k=1}^N X_k X_k^T,$$
obtained from N independent samples $X_k$. $\Sigma_N$ is a random matrix, called a Wishart matrix after John Wishart (1928). The origin of random matrix theory.
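In code the estimator is one line; a minimal numpy sketch (my illustration, not from the talk), assuming an isotropic Gaussian distribution so that the target is $\Sigma = I$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 5000                    # dimension, sample size

X = rng.standard_normal((N, n))    # N independent samples of an isotropic Gaussian
Sigma_N = X.T @ X / N              # sample covariance matrix (a Wishart matrix)

# operator-norm error against the true covariance Sigma = I
err = np.linalg.norm(Sigma_N - np.eye(n), 2)
print(f"||Sigma_N - Sigma|| = {err:.3f}")   # roughly of order sqrt(n/N) here
```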

5 Statistics: covariance estimation
Covariance Estimation Problem. Determine the minimal sample size N = N(n) that guarantees with high probability (say, 0.99) that the sample covariance matrix $\Sigma_N$ estimates the actual covariance matrix Σ of an n-dimensional distribution with fixed accuracy (say, ε = 0.01) in the operator norm:
$$\|\Sigma_N - \Sigma\| \le \varepsilon \|\Sigma\|.$$
[Figure: PCA of a multivariate Gaussian distribution; Gaël Varoquaux's blog, gael-varoquaux.info]

6 Statistics: covariance estimation
Theorem (follows from Rudelson 99). For general distributions supported in a ball of radius $O(\sqrt{n})$ in $\mathbb{R}^n$, the optimal sample size for covariance estimation is $N \sim n \log n$.
The theorem can be proved by noting that $\Sigma_N = \frac{1}{N}\sum_{k=1}^N X_k X_k^T$ is a sum of independent random matrices $X_k X_k^T$, and applying non-commutative Khinchine or Bernstein inequalities.
Problem. Describe the distributions for which the $\log n$ factor is needed. It is not needed for log-concave distributions (Adamczak-Litvak-Pajor-Tomczak 10). Conjecture: not needed under even mild moment conditions, e.g. a (2 + ε)-th moment. True for a (4 + ε)-th moment with $\log \log n$ oversampling (V 10).

7 Statistics: structured covariance estimation
In modern applications, smaller sample sizes are desirable, $N \ll n$ (cf. compressed sensing). This is possible in the presence of structure.
Low Rank Theorem (still follows from Rudelson 99). Suppose a distribution is supported in a ball of radius $O(\sqrt{n})$ in $\mathbb{R}^n$ and Σ is approximately of low rank k. Then the optimal sample size for covariance estimation is $N \sim k \log n$.
Sparse Theorem (Levina-V. 10). Consider a Gaussian distribution in $\mathbb{R}^n$ whose covariance matrix Σ is sparse, having k nonzero entries per row, whose locations are known. Then the optimal sample size for covariance estimation is $N \sim k \log^6 n$.
The optimal result should be $N \sim k \log(n/k)$. General distributions?

8 Statistics: structured covariance estimation
Sparse Theorem (Levina-V. 10). Consider a Gaussian distribution in $\mathbb{R}^n$ whose covariance matrix Σ is sparse, having k nonzero entries per row, whose locations are known. Then the optimal sample size for covariance estimation is $N \sim k \log^6 n$.
Planted Clique Problem. What if the sparsity pattern is not known? For example, an adversary puts entries 1/k in some k × k minor of Σ (a "clique"); all other entries are zero. Note that $\|\Sigma\| = 1$. What is the sample size needed to determine the location of the clique? Conjectured: N = O(k); the existing technique gives N = O(k²).

9 Connection with random matrix theory
The sample covariance matrix of an n-dimensional distribution,
$$\Sigma_N = \frac{1}{N} \sum_{k=1}^N X_k X_k^T,$$
is a random matrix, an n × n Wishart matrix. Suppose for simplicity that the actual (population) covariance matrix $\Sigma = \mathbb{E}\, XX^T$ equals the identity (an "isotropic" distribution). Then the desired estimate $\|\Sigma_N - I\| \le \varepsilon$ is equivalent to saying that all eigenvalues of $\Sigma_N$ are concentrated around 1. [Figure: Marchenko-Pastur density]
General problem: describe the distribution of the extreme eigenvalues of random matrices (hard edge and soft edge).
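Numerically (my sketch, assuming an isotropic Gaussian distribution), the spectrum of $\Sigma_N$ indeed fills the Marchenko-Pastur bulk $[(1 - \sqrt{n/N})^2, (1 + \sqrt{n/N})^2]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 2000

X = rng.standard_normal((N, n))
Sigma_N = X.T @ X / N

eig = np.linalg.eigvalsh(Sigma_N)   # eigenvalues of the Wishart matrix
lo = (1 - np.sqrt(n / N)) ** 2      # Marchenko-Pastur edges
hi = (1 + np.sqrt(n / N)) ** 2
print(f"spectrum in [{eig.min():.3f}, {eig.max():.3f}]")
print(f"MP bulk     [{lo:.3f}, {hi:.3f}]")
```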

10 Extreme eigenvalues of random matrices
Model: H is an N × n matrix with iid entries: zero mean, unit variance, subgaussian moments. The singular values $\lambda_k(H)$ are the eigenvalues of $\sqrt{H^T H}$.
Bai-Yin law: as $N, n \to \infty$ with $N/n \to \mathrm{const}$,
$$\lambda_{\min}(H) \approx \sqrt{N} - \sqrt{n}, \qquad \lambda_{\max}(H) \approx \sqrt{N} + \sqrt{n}.$$
Moreover, the limiting distribution of $\lambda_{\min}(H), \lambda_{\max}(H)$, properly normalized, is the Tracy-Widom distribution (Tracy-Widom 94, Soshnikov 02, Feldheim-Sodin 10).
Non-asymptotic versions (Rudelson-V 09):
$$\mathbb{P}\big\{\lambda_{\max}(H) \ge t(\sqrt{N} + \sqrt{n})\big\} \le 2e^{-ct^2 N} \quad \text{(standard)};$$
$$\mathbb{P}\big\{\lambda_{\min}(H) \le \varepsilon(\sqrt{N} - \sqrt{n})\big\} \le (C\varepsilon)^{N-n+1} + c^N.$$
Also (Feldheim-Sodin 10).
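The Bai-Yin asymptotics are easy to observe numerically; a minimal sketch (mine, not from the slides), assuming Gaussian entries:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 4000, 1000

H = rng.standard_normal((N, n))           # iid N(0,1) entries
s = np.linalg.svd(H, compute_uv=False)    # singular values of H

print(f"lambda_min = {s.min():.1f}  vs  sqrt(N)-sqrt(n) = {np.sqrt(N) - np.sqrt(n):.1f}")
print(f"lambda_max = {s.max():.1f}  vs  sqrt(N)+sqrt(n) = {np.sqrt(N) + np.sqrt(n):.1f}")
```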

11 Invertibility problem
Determining the hard edge is most difficult for square random matrices H. They correspond to the phase transition from underdetermined to overdetermined linear systems. The edges determine the spectral norms of H and of its inverse:
$$\lambda_{\min}(H) = 1/\|H^{-1}\|, \qquad \lambda_{\max}(H) = \|H\|.$$
Invertibility Problem for square random matrices H: (a) What is the singularity probability of H? (b) What is the typical value of $\|H^{-1}\|$?
Applications (von Neumann-Goldstine 47): test numerical linear solvers on random inputs H (average-case analysis). One needs to know: (a) how often H is singular; (b) the typical condition number $\kappa(H) = \lambda_{\max}(H)/\lambda_{\min}(H)$.

12 Invertibility problem: iid entries
Invertibility Problem for n × n random matrices H: (a) What is the singularity probability $p_n$ of H? (b) What is the typical value of $\|H^{-1}\|$?
Matrices H with iid entries. Examples: the Gaussian matrix with N(0,1) entries (Ginibre ensemble), the Bernoulli matrix with ±1 entries.
(a) for Bernoulli: $p_n \to 0$ as $n \to \infty$ (Komlos 68). Exponentially small: $p_n \le c^n$ for some constant $c < 1$ (Kahn-Komlos-Szemeredi 95). Conjecture: $p_n = (1/2 + o(1))^n$. Best known: $p_n = (1/\sqrt{2} + o(1))^n$ (Bourgain-Vu-Wood 10).
(b): $\|H^{-1}\| \sim \sqrt{n}$ with high probability (Edelman 88 and Szarek 90 for Gaussian, Rudelson-V. 08 for general):
$$\mathbb{P}\big\{\lambda_{\min}(H) \le \varepsilon/\sqrt{n}\big\} \le C\varepsilon + c^n.$$
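A Monte-Carlo sketch of claim (b) (my illustration, assuming Gaussian entries): the smallest singular value of a square random matrix concentrates at scale $n^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 300, 200

smin = np.empty(trials)
for t in range(trials):
    H = rng.standard_normal((n, n))                      # square iid matrix
    smin[t] = np.linalg.svd(H, compute_uv=False).min()   # lambda_min(H)

# lambda_min is typically of order n^{-1/2}, i.e. ||H^{-1}|| ~ sqrt(n)
print(f"median of sqrt(n) * lambda_min: {np.median(np.sqrt(n) * smin):.3f}")
```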

13 Invertibility problem: symmetric matrices
Invertibility Problem for n × n random matrices H: (a) What is the singularity probability $p_n$ of H? (b) What is the typical value of $\|H^{-1}\|$?
Symmetric matrices H with iid above-diagonal entries; the diagonal is arbitrary fixed (n × n Wigner matrices).
(a) for symmetric Bernoulli: $p_n \lesssim n^{-1/8}$ (Costello-Tao-Vu 06). Conjecture: same as for iid entries, $p_n = (1/2 + o(1))^n$.
(b): $\|H^{-1}\| \sim \sqrt{n}$ with high probability, for continuous distributions (Erdős-Schlein-Yau 10). Universality: if the first four moments of $H_{ij}$ are the same as for the Gaussian (Tao-Vu 10). Not readily applied to Bernoulli.
New results: $p_n \le \exp(-n^c)$ and $\|H^{-1}\| \sim \sqrt{n}$ for general symmetric matrices (V 11):

14 Invertibility problem: symmetric matrices
Theorem (Invertibility of symmetric random matrices, V 11). Let H be a symmetric random matrix whose above-diagonal entries are iid with mean zero, unit variance, and finite subgaussian moments. Then for every $z \in \mathbb{R}$,
$$\mathbb{P}\Big\{ \min_k |\lambda_k(H) - z| \le \varepsilon/\sqrt{n} \Big\} \le C\varepsilon^{1/9} + \exp(-n^c).$$
This implies delocalization of the spectrum: eigenvalues miss intervals of length $\sim 1/\sqrt{n}$ (the average spacing), i.e. they do not stick to any point, with high probability $1 - \exp(-n^c)$.
It also controls the Green's function, since $\|(H - zI)^{-1}\| = 1/\min_k |\lambda_k(H) - z|$: thus $\|(H - zI)^{-1}\| \lesssim \sqrt{n}$ with high probability.
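As a sanity check (my illustration, not from the talk), one can observe both conclusions, the $\sim 1/\sqrt{n}$ gap and the $\sim \sqrt{n}$ resolvent norm, on a symmetric Bernoulli matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n, z = 2000, 0.0

# symmetric Bernoulli matrix: iid +-1 entries above the diagonal
U = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
H = U + U.T + np.diag(rng.choice([-1.0, 1.0], size=n))

gap = np.abs(np.linalg.eigvalsh(H) - z).min()
print(f"sqrt(n) * min_k |lambda_k - z| = {np.sqrt(n) * gap:.3f}")   # order 1
print(f"resolvent norm / sqrt(n)       = {1 / (np.sqrt(n) * gap):.3f}")
```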

15 Invertibility problem: symmetric matrices
Theorem (Invertibility of symmetric random matrices, V 11). Let H be a symmetric random matrix whose above-diagonal entries are iid with mean zero, unit variance, and finite subgaussian moments. Then for every $z \in \mathbb{R}$,
$$\mathbb{P}\Big\{ \min_k |\lambda_k(H) - z| \le \varepsilon/\sqrt{n} \Big\} \le C\varepsilon^{1/9} + \exp(-n^c).$$
For continuous distributions, the singularity probability is 0, and
$$\mathbb{P}\Big\{ \min_k |\lambda_k(H) - z| \le \varepsilon/\sqrt{n} \Big\} \le C\varepsilon$$
(Erdős-Schlein-Yau 10). Independent simultaneous result (Nguyen 11): for every B > 0 there exists A > 0 such that
$$\mathbb{P}\Big\{ \min_k |\lambda_k(H) - z| \le n^{-A} \Big\} \le n^{-B}.$$

16 Proof of Invertibility Theorem
Theorem (Invertibility of symmetric random matrices, V 11). Let H be a symmetric random matrix whose above-diagonal entries are iid with mean zero, unit variance, and finite subgaussian moments. Then for every $z \in \mathbb{R}$,
$$\mathbb{P}\Big\{ \min_k |\lambda_k(H) - z| \le \varepsilon/\sqrt{n} \Big\} \le C\varepsilon^{1/9} + \exp(-n^c).$$
For simplicity, assume z = 0. Variational characterization:
$$\min_k |\lambda_k(H)| = \min_{x \in S^{n-1}} \|Hx\|_2.$$
So we need, with high probability, a uniform lower bound $\|Hx\|_2 \gtrsim n^{-1/2}$ for all vectors x on the sphere $S^{n-1}$. This is a geometric problem.
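The variational characterization is easy to probe numerically, and the experiment also shows why the problem is genuinely geometric: naive random sampling of the sphere badly overestimates the minimum, so a careful decomposition of the sphere (next step) is needed. A small sketch (mine, assuming a Gaussian symmetric H):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
U = np.triu(rng.standard_normal((n, n)), 1)
H = U + U.T + np.diag(rng.standard_normal(n))

smin = np.linalg.svd(H, compute_uv=False).min()   # = min_k |lambda_k(H)|
x = rng.standard_normal((n, 10_000))
x /= np.linalg.norm(x, axis=0)                    # random points on S^{n-1}

print(f"min_k |lambda_k|:             {smin:.4f}")   # ~ n^{-1/2}
print(f"min ||Hx|| over 10^4 samples: {np.linalg.norm(H @ x, axis=0).min():.2f}")  # ~ sqrt(n)
```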

17 Proof. Step 1: Decomposition of the sphere
$\|Hx\|_2 \gtrsim n^{-1/2}$ for all vectors x on the sphere $S^{n-1}$?
General architecture of the proof (Rudelson-V 08): decompose $S^{n-1}$ into two classes of compressible and incompressible vectors,
$$S^{n-1} = \text{Comp} \cup \text{Incomp}.$$
Compressible vectors are those within distance 0.1 from sparse vectors (of support ≤ 0.1n); incompressible vectors are the rest. Prove the lower bound (invertibility) for each class separately.
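The distance to sparse vectors is just the norm of the coordinate tail, which makes the dichotomy easy to see numerically; a small sketch (mine, not from the slides) showing that a random direction is incompressible:

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 1000, 100                     # dimension, sparsity level m = 0.1 * n

def dist_to_sparse(x, m):
    """Distance from x to the set of m-sparse vectors: norm of all but
    the m largest-magnitude coordinates."""
    tail = np.sort(np.abs(x))[:-m]
    return np.linalg.norm(tail)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)               # random point on S^{n-1}
print(f"dist to 0.1n-sparse vectors: {dist_to_sparse(x, m):.3f}")   # well above 0.1
```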

18 Proof. Step 2: Compressible vectors
$\|Hx\|_2 \gtrsim n^{-1/2}$ for all vectors x on the sphere $S^{n-1}$?
Compressible vectors are simple to control: there are not too many of them, i.e. the metric entropy of Comp is small. A covering argument reduces the problem to a lower bound for a single vector x. Replace H by its above-diagonal minor G by conditioning; then G has independent entries and independent rows $G_k$, so
$$\|Gx\|_2^2 = \sum_k \langle G_k, x \rangle^2$$
is a sum of independent random variables. Finish by a standard concentration technique:
$$\|Hx\|_2 \gtrsim \|Gx\|_2 \gtrsim n^{1/2} \quad \text{with high probability } 1 - e^{-cn}.$$
This is even better than we need.

19 Proof. Step 3: Incompressible vectors
Incompressible vectors are difficult: there are many of them. The problem reduces to:
Distance problem. Estimate the distance between a random vector X and a random hyperplane E in $\mathbb{R}^n$. Show that $\mathrm{dist}(X, E) \gtrsim 1$ with high probability, where X = a column of H and E = the span of the other columns.
This is a quantitative form of saying that H is non-singular. For matrices with iid entries, a solution was given in (Rudelson-V 08). The difficulty here: X and E are not independent.

20 Proof. Step 4: Distance problem
Distance Theorem. Let X = the first column of a symmetric random matrix H, and E = the span of the other columns. Then $\mathrm{dist}(X, E) \gtrsim 1$ w.h.p.
To prove this result, decompose $H = \begin{pmatrix} a_{11} & Z^T \\ Z & B \end{pmatrix}$. Linear algebra allows us to express
$$\mathrm{dist}(X, E) = \frac{|\langle B^{-1}Z, Z\rangle - a_{11}|}{\sqrt{1 + \|B^{-1}Z\|_2^2}}.$$
Here B is a symmetric random matrix (similar to H), and Z is an independent random vector with iid coordinates. The problem reduces to concentration of quadratic forms:
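The identity is elementary: E is spanned by the columns $(Z_j, Be_j)$, so $E^{\perp}$ is spanned by $(1, -B^{-1}Z)$, and projecting X onto it gives the formula. A numerical verification (my sketch, assuming Gaussian entries):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
U = np.triu(rng.standard_normal((n, n)), 1)
H = U + U.T + np.diag(rng.standard_normal(n))

X = H[:, 0]                                   # first column
M = H[:, 1:]                                  # remaining columns; E = range(M)
a11, Z, B = H[0, 0], H[1:, 0], H[1:, 1:]

# dist(X, E) via least squares: norm of the projection residual
resid = X - M @ np.linalg.lstsq(M, X, rcond=None)[0]
dist_lstsq = np.linalg.norm(resid)

# dist(X, E) via the identity with the quadratic form <B^{-1}Z, Z>
w = np.linalg.solve(B, Z)
dist_formula = abs(w @ Z - a11) / np.sqrt(1 + w @ w)

print(f"{dist_lstsq:.6f}  vs  {dist_formula:.6f}")   # the two should agree
```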

21 Proof. Step 5: Concentration for quadratic forms
Theorem (concentration of quadratic forms, V 11). Let H be a symmetric random matrix and X an independent random vector with iid coordinates. Then the distribution of the quadratic form $\langle H^{-1}X, X\rangle$ is spread on $\mathbb{R}$. Specifically, for every $u \in \mathbb{R}$,
$$|\langle H^{-1}X, X\rangle - u| \gtrsim (\mathbb{E}\|H^{-1}X\|_2^2)^{1/2} = \|H^{-1}\|_{HS} \quad \text{w.h.p.}$$
The proof uses decoupling: replace the quadratic form by a bilinear form $\langle H^{-1}Y, X\rangle$ for an independent Y. The problem reduces to showing that $|\langle a, X\rangle - u| \gtrsim 1$ w.h.p., where
$$a = \frac{H^{-1}Y}{\|H^{-1}Y\|_2}.$$

22 Proof. Step 5: Concentration for quadratic forms
H = symmetric random matrix; X, Y = independent random vectors. Want $|\langle a, X\rangle - u| \gtrsim 1$ w.h.p., where $a = H^{-1}Y / \|H^{-1}Y\|_2$.
The vectors a and X are independent, so condition on a and express
$$S := \langle a, X\rangle = \sum_{k=1}^n a_k X_k,$$
a sum of independent random variables. We need to show that the distribution of S is spread. This is a Littlewood-Offord problem. The spread of S depends on the additive structure of the coefficient vector a (crucial): the less structure in a, the more S is spread.

23 Proof. Step 6: Littlewood-Offord Problem
Littlewood-Offord Problem. Consider a sum of independent random variables $S := \langle a, X\rangle = \sum_{k=1}^n a_k X_k$. If a has little additive structure, then the distribution of S is spread.
How to quantify the spread? With the Lévy concentration function
$$L(S, \varepsilon) = \sup_{u \in \mathbb{R}} \mathbb{P}\big\{ |S - u| \le \varepsilon \big\}, \qquad \varepsilon \ge 0.$$
How to quantify the additive structure? With Diophantine approximation: the least common denominator (LCD)
$$D(a) = \inf\big\{ \theta > 0 : \mathrm{dist}(\theta a, \mathbb{Z}^n) < 10\sqrt{\log_+ \theta} \big\}.$$

24 Proof. Step 6: Littlewood-Offord Problem
Littlewood-Offord Theorem (Rudelson-V 08). A sum of independent random variables $S := \langle a, X\rangle = \sum_k a_k X_k$ satisfies
$$L(S, \varepsilon) \lesssim \varepsilon + \frac{1}{D(a)}, \qquad \varepsilon \ge 0.$$
The less structure (the larger D(a)), the more S is spread (the smaller L(S, ε)). To use this theorem successfully, we need to know that the coefficient vector a has little structure. In our problem,
$$a = \frac{H^{-1}Y}{\|H^{-1}Y\|_2},$$
where H = symmetric random matrix and Y = independent random vector. In other words, we need to show that the random matrix $H^{-1}$ destroys additive structure.
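To see the phenomenon numerically (my illustration, not from the slides), compare a maximally structured coefficient vector with a generic one: a Monte-Carlo estimate of the Lévy concentration function is much larger in the structured case, consistent with $L(S, \varepsilon) \lesssim \varepsilon + 1/D(a)$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials, eps = 64, 100_000, 0.01

def levy_concentration(a):
    """Monte-Carlo estimate of L(S, eps) for S = <a, X> with iid +-1 signs X."""
    S = np.sort(rng.choice([-1.0, 1.0], size=(trials, n)) @ a)
    # largest number of samples falling in a window [u - eps, u + eps]
    hits = np.searchsorted(S, S + 2 * eps) - np.arange(trials)
    return hits.max() / trials

a_structured = np.ones(n) / np.sqrt(n)     # LCD ~ sqrt(n): S sits on a lattice
a_generic = rng.standard_normal(n)
a_generic /= np.linalg.norm(a_generic)     # generic, unstructured coefficients

print(f"L(S, eps) for structured a:   {levy_concentration(a_structured):.4f}")  # ~ n^{-1/2}
print(f"L(S, eps) for unstructured a: {levy_concentration(a_generic):.4f}")     # ~ eps
```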

25 Proof. Step 7: Action of H⁻¹ is unstructured
Theorem (Structure of inverse, V 11). Let H be an n × n symmetric random matrix. Consider $a = H^{-1}y / \|H^{-1}y\|_2$, where y is an arbitrary fixed vector. Then, with high probability $1 - e^{-cn}$: (i) a is an incompressible vector; (ii) a is unstructured.
Conjecture: $D(a) \ge e^{cn}$. Proved: $D(\hat a) \ge n^{c/\lambda}$, where $\hat a$ is the restriction of a onto some carefully chosen set of λn coordinates, and λ ∈ (0, 1) is arbitrary. This is the most difficult step, and it is currently not optimal.
From this theorem everything follows: apply it together with Littlewood-Offord theory ⇒ concentration of bilinear and quadratic forms ⇒ distance theorem ⇒ invertibility for incompressible vectors ⇒ invertibility theorem.

26 Action of H⁻¹ is unstructured
Theorem (Structure of inverse, V 11). Let H be an n × n symmetric random matrix. Consider $a = H^{-1}y / \|H^{-1}y\|_2$, where y is an arbitrary fixed vector. Then, w.h.p., the vector a is incompressible and unstructured.
Corollary. The columns of H⁻¹ are incompressible and unstructured. Note that the action of H itself does not destroy structure: the columns of a Bernoulli matrix H are structured (±1 entries).
Corollary. The same theorem holds for the Green's function $(H - z)^{-1}$.
Corollary (Delocalization of eigenvectors). All eigenvectors of H are incompressible and unstructured.
Proof. If $Hv = \lambda v$, then $v \in (H - \lambda)^{-1}(0)$. Apply the Structure Theorem for $H - \lambda$ and y = 0 (plus an approximation argument).
(Erdős-Schlein-Yau 09): a different version of delocalization.
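A delocalized unit eigenvector has all coordinates of size roughly $n^{-1/2}$, up to logarithmic factors; a quick check on a symmetric Bernoulli matrix (my sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
U = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
H = U + U.T

_, V = np.linalg.eigh(H)        # columns of V are unit eigenvectors
print(f"max coordinate over all eigenvectors: {np.abs(V).max():.4f}")
print(f"fully delocalized scale 1/sqrt(n):    {1 / np.sqrt(n):.4f}")
```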

27 References
Survey: M. Rudelson, R. Vershynin, Non-asymptotic theory of random matrices: extreme singular values.
Tutorial: R. Vershynin, Introduction to the non-asymptotic analysis of random matrices.
General covariance estimation: R. Vershynin, How close is the sample covariance matrix to the actual covariance matrix?
Sparse covariance estimation: L. Levina, R. Vershynin, Partial estimation of covariance matrices.
Invertibility of symmetric matrices: R. Vershynin, Invertibility of symmetric random matrices, 2011.
