Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices

Feng Wei
University of Michigan
July 29, 2016

This presentation is based on a project under the supervision of M. Rudelson. Partially supported by M. Rudelson's NSF Grant DMS and USAF Grant FA.

Motivation and Background: Asymptotic Distribution of Singular Values

Let A be an n x n random matrix with i.i.d. entries of mean 0 and variance 1, and let $s_1(A) \ge s_2(A) \ge \dots \ge s_n(A)$ denote its singular values. Consider

  $\mu(J) = \frac{1}{n}\,\#\Big\{ i : s_i\big(\tfrac{A}{\sqrt{n}}\big) \in J \Big\}, \qquad J \subset \mathbb{R}.$

By the Quarter Circular Law,

  $d\mu(x) \to \frac{1}{\pi}\sqrt{4-x^2}\,\mathbf{1}_{[0,2]}(x)\,dx \qquad \text{as } n \to \infty.$

A simple computation using the limiting distribution shows that the l-th smallest singular value $s_{n+1-l}(A)$ is of order $l/\sqrt{n}$ for $l = 1, 2, \dots, n$ (the computation is made explicit below).

Question: What is the distribution of the singular values for a fixed large n?
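
The "simple computation" can be made explicit. The following is a heuristic sketch (my own, using only the limiting quarter-circle density; the constant $\pi/2$ comes from the heuristic and is not claimed in the talk):

\[
  \mu\big([0,t]\big) = \frac{1}{\pi}\int_0^{t}\sqrt{4-x^{2}}\,dx \approx \frac{2t}{\pi} \quad (t \to 0),
\]
so about $n \cdot \frac{2t}{\pi}$ singular values of $A/\sqrt{n}$ fall in $[0,t]$. Setting this count equal to $l$ gives
\[
  t \approx \frac{\pi l}{2n}
  \quad\Longrightarrow\quad
  s_{n+1-l}(A) = \sqrt{n}\; s_{n+1-l}\!\Big(\tfrac{A}{\sqrt{n}}\Big) \approx \frac{\pi}{2}\cdot\frac{l}{\sqrt{n}}.
\]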

Motivation and Background: Sub-gaussian Random Variables

Definition. Let $\theta > 0$ and let Z be a random variable. The $\psi_\theta$-norm of Z is defined as

  $\|Z\|_{\psi_\theta} := \inf\Big\{ \lambda > 0 : \mathbb{E}\exp\big( (|Z|/\lambda)^{\theta} \big) \le 2 \Big\}.$

If $\|Z\|_{\psi_\theta} < \infty$, then Z is called a $\psi_\theta$ random variable.

This condition is satisfied by broad classes of random variables. In particular, a bounded random variable is $\psi_\theta$ for every $\theta > 0$, a normal random variable is $\psi_2$, and a Poisson random variable is $\psi_1$. A $\psi_2$ random variable is also called sub-gaussian. Moreover,

  $\|X\|_{\psi_2} = K \;\Longrightarrow\; \mathbb{P}(|X| > t) \le \exp\big(1 - c\,t^2/K^2\big).$
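
To make the definition concrete (my own illustration, not part of the talk), the $\psi_2$-norm can be estimated numerically straight from the defining inequality by bisection over $\lambda$; the sample size and bisection bracket below are arbitrary choices.

import numpy as np

def psi2_norm(samples, lo=1e-3, hi=100.0, iters=60):
    """Estimate ||Z||_{psi_2}: the smallest lambda with E exp((|Z|/lambda)^2) <= 2."""
    def small_enough(lam):
        # overflow at tiny lambda just makes the mean infinite, i.e. the test fails
        return np.mean(np.exp((np.abs(samples) / lam) ** 2)) <= 2.0
    for _ in range(iters):                  # bisection on lambda
        mid = 0.5 * (lo + hi)
        if small_enough(mid):
            hi = mid                        # condition holds, lambda can shrink
        else:
            lo = mid
    return hi

rng = np.random.default_rng(0)
rademacher = rng.choice([-1.0, 1.0], size=10_000)   # bounded, hence psi_2
print(psi2_norm(rademacher))                        # exact value: 1/sqrt(log 2) ~ 1.2011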

Motivation and Background: Non-asymptotic Distribution of Singular Values

The extreme singular values are better studied:

- It is easy to show that the operator norm of an N x n i.i.d. sub-gaussian matrix is of order $\sqrt{N}$ with high probability (a quick numerical check follows this list).
- In 2008, M. Rudelson proved that the smallest singular value of a square i.i.d. sub-gaussian matrix is lower bounded by $n^{-3/2}$ with high probability.
- This result was later extended and improved by A. Basak and M. Rudelson, M. Rudelson and R. Vershynin, and T. Tao and V. Vu in the square-matrix case.
- In 2009, M. Rudelson and R. Vershynin proved a sharp bound for the smallest singular value of all rectangular sub-gaussian matrices.
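
A quick numerical check of the first claim (my own illustration; the dimensions are arbitrary): for an N x n matrix with i.i.d. standard normal entries, the operator norm concentrates around $\sqrt{N} + \sqrt{n}$, which is of order $\sqrt{N}$.

import numpy as np

rng = np.random.default_rng(0)
for N, n in [(2000, 50), (2000, 500), (1000, 1000)]:
    A = rng.standard_normal((N, n))          # i.i.d. sub-gaussian (Gaussian) entries
    op = np.linalg.norm(A, 2)                # operator norm = largest singular value s_1(A)
    print(N, n, round(op / np.sqrt(N), 3))   # roughly 1 + sqrt(n/N), so O(1)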

Motivation and Background: Lower Bound for Singular Values

Theorem (M. Rudelson and R. Vershynin, 2009). Let G be an N x n random matrix, N >= n, whose entries are independent copies of a centered sub-gaussian random variable with unit variance. Then for every $\varepsilon > 0$,

  $\mathbb{P}\Big( s_n(G) \le \varepsilon\big(\sqrt{N} - \sqrt{n-1}\big) \Big) \le (C\varepsilon)^{N-n+1} + e^{-C'N},$

where $C, C' > 0$ depend (polynomially) only on the sub-gaussian moment K.

Consider an n x n i.i.d. sub-gaussian matrix A and let B consist of the first $n+1-l$ columns of A. Then with high probability,

  $s_{n+1-l}(A) \ge s_{n+1-l}(B) \ge c\big(\sqrt{n} - \sqrt{n-l}\big) \ge c'\,\frac{l}{\sqrt{n}}.$
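
The last inequality in the chain is elementary; for completeness (this step is implicit on the slide):

\[
  \sqrt{n}-\sqrt{n-l}
  = \frac{(\sqrt{n}-\sqrt{n-l})(\sqrt{n}+\sqrt{n-l})}{\sqrt{n}+\sqrt{n-l}}
  = \frac{l}{\sqrt{n}+\sqrt{n-l}}
  \ge \frac{l}{2\sqrt{n}},
\]
so $c(\sqrt{n}-\sqrt{n-l}) \ge \frac{c}{2}\cdot\frac{l}{\sqrt{n}}$, and one may take $c' = c/2$.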

Motivation and Background: Upper Bound for the Smallest Singular Value

Theorem (M. Rudelson and R. Vershynin, 2008). Let A be an n x n i.i.d. sub-gaussian matrix whose entries have mean 0, variance 1 and sub-gaussian moment K. Then for any t >= 2,

  $\mathbb{P}\big( s_n(A) \ge t\,n^{-1/2} \big) \le \frac{C\log t}{t} + c^{\,n},$

where $C > 0$ and $c \in (0,1)$ depend only on K.

Theorem (H. Nguyen and V. Vu, 2016). Let A be an n x n i.i.d. sub-gaussian matrix whose entries have mean 0, variance 1 and sub-gaussian moment K. Then for any t > 0,

  $\mathbb{P}\big( s_n(A) \ge t\,n^{-1/2} \big) \le C_1 \exp(-C_2 t),$

where $C_1, C_2 > 0$ depend only on K.
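
As a quick sanity check of the $n^{-1/2}$ scaling in these theorems (my own illustration, not from the talk; sizes are arbitrary), one can simulate $\sqrt{n}\,s_n(A)$ over independent draws and look at its upper quantiles, which stay O(1):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 300, 200
vals = []
for _ in range(trials):
    A = rng.standard_normal((n, n))                  # i.i.d. sub-gaussian entries
    s_min = np.linalg.svd(A, compute_uv=False)[-1]   # smallest singular value s_n(A)
    vals.append(np.sqrt(n) * s_min)
print(np.quantile(np.array(vals), [0.5, 0.9, 0.99]))  # all O(1); large values are rare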

Motivation and Background: Two-sided Bounds for Singular Values

Theorem (S. Szarek, 1990). Let G be an n x n i.i.d. standard Gaussian matrix. Then there exist universal constants $c_1, c_2, c, C$ such that

  $\mathbb{P}\Big( \frac{c_1 l}{\sqrt{n}} \le s_{n+1-l}(G) \le \frac{c_2 l}{\sqrt{n}} \Big) \ge 1 - C\exp(-c\,l^2).$

From a 2010 paper of T. Tao and V. Vu, together with the result of Szarek, one can deduce some non-asymptotic bounds for random i.i.d. square matrices for $l \le n^{c}$, where c is a small constant. However, the tail bound is not of exponential type.

Main Results: Lévy Concentration Function

Definition. Let Z be a random vector taking values in $\mathbb{R}^n$. The concentration function of Z is defined as

  $\mathcal{L}(Z, t) = \sup_{u \in \mathbb{R}^n} \mathbb{P}\big( \|Z - u\|_2 \le t \big), \qquad t \ge 0.$

Assumption 1. Let p > 0. Let A be an n x m random matrix whose entries are i.i.d. random variables with mean 0, variance 1 and $\psi_2$-norm K. Assume also that there exists $0 < s \le s_0(p, K)$ such that $\mathcal{L}(A_{i,j}, s) \le p\,s$. Here $s_0(p, K)$ is a given function depending only on p and K.
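
To make the definition and Assumption 1 concrete (my own sketch, not from the talk; the grid and sample size are arbitrary), here is an empirical estimate of $\mathcal{L}(Z, t)$ for a scalar random variable, obtained by scanning centers u over a grid. For a standard normal entry, $\mathcal{L}(Z, s) = \mathbb{P}(|Z| \le s) \approx \frac{2}{\sqrt{2\pi}}\,s$ for small s, so the condition $\mathcal{L}(A_{i,j}, s) \le p\,s$ holds with $p \approx 0.8$.

import numpy as np

def levy_concentration(samples, t, grid=2000):
    """Empirical L(Z, t) = sup_u P(|Z - u| <= t) for a scalar random variable Z."""
    centers = np.linspace(samples.min(), samples.max(), grid)
    return max((np.abs(samples - u) <= t).mean() for u in centers)

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
for t in (0.05, 0.1, 0.5):
    print(t, round(levy_concentration(z, t), 4))   # close to 2*Phi(t) - 1, about 0.8*t for small t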

Main Results: Main Theorem

Theorem. Let A be an n x n random matrix that satisfies Assumption 1. Then there exist constants $C_1, C_2 > 0$, depending only on K and p, such that for all t > 1 and $l = 1, 2, \dots, n$,

  $\mathbb{P}\Big( s_{n+1-l}(A) \le C_1\,\frac{t\,l}{\sqrt{n}} \Big) \ge 1 - \exp(-C_2\,t\,l).$
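
A quick simulation of this scaling (my own illustration; Gaussian entries are used because they clearly satisfy Assumption 1, and the dimensions are arbitrary): the ratio $\sqrt{n}\,s_{n+1-l}(A)/l$ should stay bounded by constants across the whole range of l, as the theorem and the corollaries below predict.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = rng.standard_normal((n, n))              # Gaussian entries satisfy Assumption 1
s = np.linalg.svd(A, compute_uv=False)       # s[0] >= s[1] >= ... >= s[n-1]
for l in (1, 2, 5, 10, 50, 200, 1000):
    # s_{n+1-l}(A) is the l-th smallest singular value, i.e. s[n - l]
    print(l, round(np.sqrt(n) * s[n - l] / l, 3))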

Main Results: Corollaries

Corollary. Let A be an n x (n-k) random matrix that satisfies Assumption 1. Then there exist constants $C_1, C_2 > 0$, depending only on K and p, such that for all t > 1 and $l = 1, \dots, n$,

  $\mathbb{P}\Big( s_{n+1-l}(A) \le C_1\,\frac{t\,l}{\sqrt{n}} \Big) \ge 1 - \exp(-C_2\,t\,l).$

Corollary. Let A be an n x n random matrix that satisfies Assumption 1. Then there exist $0 < C_1 < C_2$ and $C_3 > 0$, depending only on K and p, such that for all $l = 1, 2, \dots, n$,

  $\mathbb{P}\Big( \frac{C_1 l}{\sqrt{n}} \le s_{n+1-l}(A) \le \frac{C_2 l}{\sqrt{n}} \Big) \ge 1 - \exp(-C_3\,l).$

Outline of Proof: Basic Idea of the Proof

We want to prove that $s_{n+1-l}(A) \le c\,l/\sqrt{n}$ with high probability. Instead of proving an upper bound for $s_{n+1-l}(A)$, we prove a lower bound for $s_l(A^{-1})$. So we only need to find an l-dimensional random subspace E such that, with high probability, for all $y \in E$,

  $\|A^{-1} y\|_2 \ge C\,\frac{\sqrt{n}}{l}\,\|y\|_2.$

Let $X_i$, $i = 1, \dots, n$, denote the columns of A and let $H_l = \mathrm{span}\{X_{l+1}, \dots, X_n\}$. Our intended subspace is $H_l^{\perp}$, and we want to prove the bound using a union bound argument.
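
The reduction rests on two standard facts, recalled here for completeness (they are used implicitly on the slide): the singular values of $A^{-1}$ are the reciprocals of those of A in reverse order, and the Courant-Fischer characterization lets a single well-chosen subspace certify a lower bound:

\[
  s_l(A^{-1}) = \frac{1}{s_{n+1-l}(A)},
  \qquad
  s_l(A^{-1}) = \max_{\dim E = l}\;\min_{y \in E\setminus\{0\}} \frac{\|A^{-1}y\|_2}{\|y\|_2}.
\]
Hence exhibiting one l-dimensional subspace E with $\|A^{-1}y\|_2 \ge C\frac{\sqrt{n}}{l}\|y\|_2$ for all $y \in E$ gives $s_l(A^{-1}) \ge C\frac{\sqrt{n}}{l}$, i.e. $s_{n+1-l}(A) \le \frac{1}{C}\cdot\frac{l}{\sqrt{n}}$.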

Outline of Proof: Union Bound Argument on a Random Subspace

Two obstacles arise if we argue over the unit sphere of $H_l^{\perp}$:

- $H_l^{\perp}$ is random and it depends on A.
- Estimating $\|A^{-1} y\|_2$ may be as hard as the original problem.

Instead of applying the union bound on the unit sphere, we apply it on $P_l^{\perp} A_l S^{l-1}$, where $A_l$ is the matrix of the first l columns of A and $P_l^{\perp}$ is the orthogonal projection onto $H_l^{\perp}$.

Construct a net $\mathcal{N}$ on $S^{l-1}$ (a standard bound on its cardinality is recalled after this list). If we have control of $\|P_l^{\perp} A_l\|$, then:

- For any $y \in \mathcal{N}$, $\|P_l^{\perp} A_l y\|_2$ is bounded from above.
- $P_l^{\perp} A_l \mathcal{N}$ is still a net on $P_l^{\perp} A_l S^{l-1}$, but at a different scale.
- $\|A^{-1} P_l^{\perp} A_l y\|_2^2 \ge \|B A_l y\|_2^2$, where B is an $(n-l) \times n$ random matrix independent of $A_l$.
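
The union bound over the net is affordable because of the standard volumetric estimate (a well-known fact recalled here; it is not stated on the slide): for every $\varepsilon \in (0,1)$ there is an $\varepsilon$-net $\mathcal{N}$ of $S^{l-1}$ with

\[
  |\mathcal{N}| \le \Big(1+\frac{2}{\varepsilon}\Big)^{l} \le \Big(\frac{3}{\varepsilon}\Big)^{l},
\]
so a per-point failure probability decaying like $e^{-c\,t\,l}$, with c sufficiently large compared to $\log(3/\varepsilon)$, still leaves an exponential-in-$tl$ bound after the union bound over $\mathcal{N}$.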

Outline of Proof: Quantities to Be Estimated

Several quantities need to be estimated:

- Large deviation of $\|P_l^{\perp} A_l\|$.
- Large deviation of $A^{-1}$ restricted to $H_l^{\perp}$.
- Properties of the matrix B.
- Small ball probability of $\|BX\|_2$, where B is a fixed matrix and X is a sub-gaussian random vector.

Remarks: Measure Concentration Tools

Major measure concentration results we applied:

- Corollaries of the Hanson-Wright inequality.
- Small ball probability for linear images of high-dimensional distributions.
- Small ball probability for the distance from a sub-gaussian vector to a random subspace.
- Lower bound for the smallest singular value of a rectangular i.i.d. sub-gaussian matrix.

Remarks: The Concentration Function Condition

Theorem (M. Rudelson and R. Vershynin, 2014). Consider a random vector $Z = (Z_1, \dots, Z_n)$ whose coordinates $Z_i$ are real-valued independent random variables. Let $t, p \ge 0$ be such that $\mathcal{L}(Z_i, t) \le p$ for all $i = 1, \dots, n$. Let D be an m x n matrix and $\varepsilon \in (0, 1)$. Then

  $\mathcal{L}\big(DZ,\; t\,\|D\|_{\mathrm{HS}}\big) \le (c_\varepsilon\, p)^{(1-\varepsilon)\,r(D)},$

where $r(D) = \|D\|_{\mathrm{HS}}^2 / \|D\|^2$ is the stable rank of D and $c_\varepsilon$ depends only on $\varepsilon$.

Thanks.
