Geometry of log-concave ensembles of random matrices
1 Geometry of log-concave ensembles of random matrices

Nicole Tomczak-Jaegermann

Joint work with Radosław Adamczak, Rafał Latała, Alexander Litvak, Alain Pajor

Cortona, June 2011
2 Basic definitions and notation

Let $X = (X(1), \dots, X(N))$ be a random vector in $\mathbb{R}^N$ with full-dimensional support. We say that the distribution of $X$ is

- log-concave, if $X$ has a density of the form $e^{-h(x)}$ with $h \colon \mathbb{R}^N \to (-\infty, \infty]$ convex;
- isotropic, if $\mathbb{E}X(i) = 0$ and $\mathbb{E}X(i)X(j) = \delta_{i,j}$.

For $x, y \in \mathbb{R}^N$ we put $|x| = \|x\|_2 = \big(\sum_{i=1}^N x(i)^2\big)^{1/2}$, and $\langle x, y \rangle$ is the inner product.

For $I \subset \{1, \dots, N\}$, by $P_I$ we denote the coordinate projection onto $\{y \in \mathbb{R}^N : \operatorname{supp}(y) \subset I\}$.
3 Examples: log-concave random vectors

1. Let $K \subset \mathbb{R}^N$ be a convex body (= compact convex, with non-empty interior; symmetric means $K = -K$), and let $X$ be a random vector uniformly distributed in $K$. Then the corresponding probability measure on $\mathbb{R}^N$,
$$\mu_K(A) = \frac{|K \cap A|}{|K|},$$
is log-concave (by Brunn-Minkowski). Moreover, for every convex body $K$ there exists an affine map $T$ such that $\mu_{TK}$ is isotropic.

2. The Gaussian vector $G = (g_1, \dots, g_N)$, where the $g_i$ have the $N(0,1)$ distribution, is isotropic and log-concave.

3. Similarly, the vector $X = (\xi_1, \dots, \xi_N)$, where the $\xi_i$ are independent with the symmetric exponential distribution (i.e., with density $f(t) = \frac{1}{\sqrt{2}} \exp(-\sqrt{2}\,|t|)$ for $t \in \mathbb{R}$), is isotropic and log-concave.
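As a quick numerical sanity check of example 3 (this sketch is illustrative, not part of the talk), one can sample the symmetric exponential vector and verify isotropy empirically; a Laplace variable with scale 1 has variance 2, so dividing by $\sqrt{2}$ gives the density $\frac{1}{\sqrt 2}e^{-\sqrt 2 |t|}$ with variance 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 5, 200_000

# Symmetric exponential with density (1/sqrt(2)) exp(-sqrt(2)|t|):
# a standard Laplace variable has variance 2, so rescale by 1/sqrt(2).
X = rng.laplace(loc=0.0, scale=1.0, size=(samples, N)) / np.sqrt(2)

mean = X.mean(axis=0)                 # should be ~ 0   (E X(i) = 0)
cov = (X.T @ X) / samples             # should be ~ I   (E X(i)X(j) = delta_ij)

print(np.abs(mean).max())             # small
print(np.abs(cov - np.eye(N)).max())  # small
```

The same check applied to the Gaussian vector of example 2 passes for the same reason: both distributions are centered with identity covariance.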
4 Matrices

Let $n, N$ be integers, $X \in \mathbb{R}^N$ a random vector, and $X_1, \dots, X_n$ independent, distributed as $X$. Let $A$ be the $n \times N$ matrix which has the $X_i$ as rows:
$$A = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}.$$

For $J \subset \{1, \dots, n\}$ and $I \subset \{1, \dots, N\}$, let $A(J, I)$ be the sub-matrix with rows from $J$ and columns from $I$. For $k \le n$, $m \le N$, let
$$A_{k,m} = \sup_{|J| \le k,\ |I| \le m} \|A(J, I)\|,$$
the supremum over all $J$ with $|J| \le k$ and $I$ with $|I| \le m$. This is the maximum of the operator norms (from $\ell_2^I$ to $\ell_2^J$) of sub-matrices with at most $k$ rows and $m$ columns.

Question: upper bounds for $A_{k,m}$ with high probability.
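For small matrices, $A_{k,m}$ can be computed by brute force; the sketch below (my own illustration, including the toy matrix and the function name `A_km`) takes the maximum largest singular value over all row and column selections. Since the operator norm is monotone under adding rows or columns, it suffices to range over exactly $k$ rows and $m$ columns.

```python
import numpy as np
from itertools import combinations

def A_km(A, k, m):
    """Brute-force A_{k,m}: max operator norm over submatrices with k rows
    and m columns (by monotonicity this equals the sup over |J|<=k, |I|<=m)."""
    n, N = A.shape
    best = 0.0
    for J in combinations(range(n), k):
        for I in combinations(range(N), m):
            sub = A[np.ix_(J, I)]
            best = max(best, np.linalg.norm(sub, 2))  # largest singular value
    return best

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
print(A_km(A, 1, 1))  # 3.0: the largest entry in absolute value
print(A_km(A, 2, 2))  # 3.0: best 2x2 submatrix here is diag(3, 2)
```

The exponential cost in $k$ and $m$ is exactly the "complexity of the family of subsets" that makes the probabilistic bounds in the talk nontrivial.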
5 Example: norms of sub-matrices, uniform version of Paouris' theorem

Earlier case: $m = N$, i.e., $A_{k,N}$: we select $k$ rows and take all columns. Solved by ALPT (JAMS 2010); connected with the length of good approximations of the covariance matrices.

Case $k = 1$: for any $J$ with $|J| = 1$ and $I$ with $|I| \le m$, a sub-matrix of one row has the form $A(J, I) = P_I X$, so $A_{1,m} = \sup_{|I| \le m} |P_I X|$. A bound for $A_{1,m}$ will imply a uniform bound
$$\sup_{|I| = m} |P_I X| \le \text{bound for } A_{1,m} \qquad (*)$$
with high probability.

Compare with Paouris' theorem for a fixed $I$ with $|I| \le m$: for $s \ge 1$,
$$\mathbb{P}\big(|P_I X| \ge Cs\sqrt{m}\big) \le \exp\big(-s\sqrt{m}\big).$$
However, the complexity of the family of subsets is $\binom{N}{m}$, which can be as large as $\exp(cN)$. To prove $(*)$ we need a more advanced technique.
6 Order statistics

For an $N$-dimensional random vector $X$, by $X_1^* \ge X_2^* \ge \dots \ge X_N^*$ we denote the nonincreasing rearrangement of $|X(1)|, \dots, |X(N)|$ (in particular, $X_1^* = \max\{|X(1)|, \dots, |X(N)|\}$ and $X_N^* = \min\{|X(1)|, \dots, |X(N)|\}$). The random variables $X_k^*$, $1 \le k \le N$, are called the order statistics of $X$.

Problem: find an upper bound for $\mathbb{P}(X_k^* \ge t)$.

If the coordinates of $X$ are independent symmetric exponential r.v.'s with variance 1, then $\mathrm{Med}(X_k^*) \sim \log(eN/k)$ for $k \le N/2$.
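The benchmark $\mathrm{Med}(X_k^*) \sim \log(eN/k)$ is easy to probe numerically; the following sketch (my own, with an arbitrary seed and sample sizes) compares empirical medians of the order statistics of a symmetric exponential vector to $\log(eN/k)$, which should agree up to a constant factor.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 1024, 2000

# Coordinates: independent symmetric exponential with variance 1.
X = rng.laplace(size=(reps, N)) / np.sqrt(2)
Xstar = np.sort(np.abs(X), axis=1)[:, ::-1]    # nonincreasing rearrangement

for k in (1, 4, 64, 512):
    med = np.median(Xstar[:, k - 1])           # empirical Med(X_k^*)
    print(k, med, np.log(np.e * N / k))        # comparable up to a constant
```

Here the tail $\mathbb{P}(|\xi| \ge t) = e^{-\sqrt 2 t}$ makes the $k$-th order statistic concentrate near $\frac{1}{\sqrt 2}\log(N/k)$, consistent with the stated order.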
7 Order statistics for isotropic log-concave vectors

Theorem. Let $X$ be an $N$-dimensional log-concave isotropic vector. Then for all $t \ge C \log\frac{eN}{k}$,
$$\mathbb{P}\big(X_k^* \ge t\big) \le \exp\Big(-\frac{1}{C}\,kt\Big).$$

Actually we need a stronger theorem, which uses the following important weak parameter.
8 Weak parameter

For a vector $X$ in $\mathbb{R}^N$ we define
$$\sigma_X(p) := \sup_{t \in S^{N-1}} \big(\mathbb{E}|\langle t, X \rangle|^p\big)^{1/p}, \qquad p \ge 2.$$
We also let $\sigma_X^{-1}$ be the left-inverse function,
$$\sigma_X^{-1}(s) := \sup\{t \colon \sigma_X(t) \le s\}.$$

Examples:
- For isotropic log-concave vectors $X$, $\sigma_X(p) \le p/\sqrt{2}$.
- For subgaussian vectors $X$, $\sigma_X(p) \le C\sqrt{p}$.

We say that an isotropic vector $X$ is $\psi_\alpha$ if $\sigma_X(p) \le C p^{1/\alpha}$ (uniform distributions on suitably normalized $B_p^N$ balls are $\psi_\alpha$ with $\alpha = \min(p, 2)$).
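For the standard Gaussian vector, $\sigma_X(p)$ can be computed in closed form, which makes the subgaussian example concrete: by rotation invariance every marginal $\langle t, X \rangle$ with $|t| = 1$ is a standard Gaussian $g$, so $\sigma_X(p) = (\mathbb{E}|g|^p)^{1/p}$. The sketch below (mine, not from the talk) evaluates this and checks growth of order $\sqrt{p}$.

```python
import math

def sigma_gauss(p):
    """sigma_X(p) for the standard Gaussian vector: by rotation invariance it
    equals (E|g|^p)^{1/p}, g ~ N(0,1), with the closed form
    E|g|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    moment = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
    return moment ** (1 / p)

for p in (2, 4, 8, 16, 32):
    print(p, sigma_gauss(p), sigma_gauss(p) / math.sqrt(p))  # ratio ~ constant
```

At $p = 2$ the formula returns exactly 1 (the variance normalization of isotropy), and the ratio $\sigma_X(p)/\sqrt p$ stays bounded, as the subgaussian bound $\sigma_X(p) \le C\sqrt p$ predicts.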
9 Order statistics with weak parameter

Theorem. For any $N$-dimensional log-concave isotropic vector $X$, $l \ge 1$ and $t \ge C \log\frac{eN}{l}$,
$$\mathbb{P}\big(X_l^* \ge t\big) \le \exp\Big(-\sigma_X^{-1}\Big(\frac{1}{C}\,t\sqrt{l}\Big)\Big).$$

This theorem implies uniform estimates for norms of projections.
10 Uniform bound for norms of projections

Theorem. Let $X$ be an isotropic log-concave vector in $\mathbb{R}^N$, and $m \le N$. Then for any $t \ge 1$, the probability
$$\mathbb{P}\Big(\sup_{|I| = m} |P_I X| \ge Ct\sqrt{m}\,\log\frac{eN}{m}\Big)$$
is less than or equal to
$$\exp\Big(-\sigma_X^{-1}\Big(\frac{t\sqrt{m}\,\log\frac{eN}{m}}{\log(em)}\Big)\Big) \le \exp\Big(-\frac{t\sqrt{m}\,\log\frac{eN}{m}}{\log(em)}\Big).$$

The bound is sharp, except for the $\log(em)$ factor in the probability estimate. We conjecture this factor is not needed.
11 Uniform bound for norms of projections: idea of the proof

We want to estimate
$$\mathbb{P}\Big(\sup_{|I| = m} |P_I X| \ge Ct\sqrt{m}\,\log\frac{eN}{m}\Big).$$
We have
$$\sup_{|I| = m} |P_I X| = \Big(\sum_{k=1}^m (X_k^*)^2\Big)^{1/2} \le \Big(\sum_{i=0}^{s} 2^i\,(X_{2^i}^*)^2\Big)^{1/2},$$
where $s = \lceil \log_2 m \rceil$. We then use the estimates for order statistics.
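Both steps of this reduction, the exact identity and the dyadic chaining bound, can be verified numerically on a fixed vector; the sketch below (illustrative, with arbitrary dimensions and seed) compares the brute-force supremum over coordinate subsets with the sum of the $m$ largest squared coordinates.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, m = 20, 5
x = rng.standard_normal(N)

# Left-hand side: brute force over all coordinate subsets of size m.
lhs = max(np.linalg.norm(x[list(I)]) for I in combinations(range(N), m))

# Middle term: the m largest coordinates in absolute value.
xstar = np.sort(np.abs(x))[::-1]               # nonincreasing rearrangement
mid = np.sqrt((xstar[:m] ** 2).sum())

# Dyadic upper bound: sum over blocks [2^i, 2^{i+1}) dominated by x*_{2^i}.
s = int(np.ceil(np.log2(m)))
dyadic = np.sqrt(sum(2 ** i * xstar[2 ** i - 1] ** 2 for i in range(s + 1)))

print(lhs, mid, dyadic)   # lhs == mid <= dyadic
```

The equality holds because the best subset $I$ is exactly the set of indices of the $m$ largest $|x(i)|$, and the dyadic bound follows by replacing each block of coordinates with its largest member.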
12 Estimates for order statistics

Our approach to estimates for order statistics is based on a suitable estimate of the moments of the process $N_X(t)$, where for $t \ge 0$,
$$N_X(t) := \sum_{i=1}^N \mathbf{1}_{\{X(i) \ge t\}}.$$
That is, $N_X(t)$ is the number of coordinates of $X$ larger than or equal to $t$.
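The link between $N_X$ and the order statistics is elementary: $X_k^* \ge t$ means at least $k$ coordinates satisfy $|X(i)| \ge t$, which splits by sign into $X(i) \ge t$ or $-X(i) \ge t$. A small sketch (my own illustration) of this pigeonhole step:

```python
import numpy as np

def N_X(x, t):
    """The counting process of the slide: number of coordinates with X(i) >= t."""
    return int(np.sum(x >= t))

rng = np.random.default_rng(3)
x = rng.laplace(size=1000) / np.sqrt(2)   # symmetric exponential coordinates
xstar = np.sort(np.abs(x))[::-1]          # order statistics X_1^* >= X_2^* >= ...

k = 10
t = xstar[k - 1]                          # t = X_k^*, so X_k^* >= t holds
# |X(i)| >= t iff X(i) >= t or -X(i) >= t, so N_X(t) + N_{-X}(t) >= k,
# and by pigeonhole one of N_X(t), N_{-X}(t) is at least k/2.
print(N_X(x, t), N_X(-x, t))
```

This is exactly the observation used later to pass from moment bounds on $N_X$ to tail bounds on $X_k^*$.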
13 Estimate for $N_X$

Theorem. For any isotropic log-concave vector $X$ and $p \ge 2$ we have
$$\mathbb{E}\big(t^2 N_X(t)\big)^p \le (Cp)^{2p} \qquad \text{for } t \ge C \log\frac{Nt^2}{p^2},$$
$$\mathbb{E}\big(t^2 N_X(t)\big)^p \le (C\sigma_X(p))^{2p} \qquad \text{for } t \ge C \log\frac{Nt^2}{\sigma_X^2(p)}.$$
14 Estimate for $N_X$, cont.

To get the estimate for order statistics we observe that $X_k^* \ge t$ implies $N_X(t) \ge k/2$ or $N_{-X}(t) \ge k/2$, and that the vector $-X$ is also isotropic and log-concave, with $\sigma_{-X} = \sigma_X$. The estimates for $N_X$ and Chebyshev's inequality give
$$\mathbb{P}\big(X_k^* \ge t\big) \le \Big(\frac{2}{k}\Big)^p \big(\mathbb{E}N_X(t)^p + \mathbb{E}N_{-X}(t)^p\big) \le \Big(\frac{C\sigma_X(p)}{t\sqrt{k}}\Big)^{2p},$$
provided that $t \ge C \log(Nt^2/\sigma_X^2(p))$. Set $p = \sigma_X^{-1}\big(\frac{t\sqrt{k}}{eC}\big)$ and notice that the restriction on $t$ follows from the assumption that $t \ge C \log(eN/k)$.
15 Convolutions of measures

Let $X_1, \dots, X_n$ be independent isotropic log-concave vectors in $\mathbb{R}^N$, and let $x = (x_i) \in \mathbb{R}^n$. Consider the vector
$$Y = \sum_{i=1}^n x_i X_i \in \mathbb{R}^N.$$
Probability bounds for the process
$$\sup_{|I| = m} |P_I Y|$$
depend on the vector $x$. Specifically, it is convenient to assume the normalization $|x| \le 1$ and $\|x\|_\infty \le b^{-1}$.
16 Uniform bound for projections of convolutions

Theorem. Let $Y = \sum_{i=1}^n x_i X_i$, where $X_1, \dots, X_n$ are independent isotropic $N$-dimensional log-concave vectors. Assume that $|x| \le 1$ and $\|x\|_\infty \le b^{-1}$.

i) If $b \ge \sqrt{m}$, then for any $t \ge 1$,
$$\mathbb{P}\Big(\sup_{|I| = m} |P_I Y| \ge Ct\sqrt{m}\,\log\frac{eN}{m}\Big) \le \exp\Big(-\frac{t\,b\,\sqrt{m}\,\log\frac{eN}{m}}{\log(e^2 b^2/m)}\Big).$$

ii) If $b \le \sqrt{m}$, then for any $t \ge 1$,
$$\mathbb{P}\Big(\sup_{|I| = m} |P_I Y| \ge Ct\sqrt{m}\,\log\frac{eN}{m}\Big) \le \exp\Big(-\min\Big\{t^2 m \log^2\frac{eN}{m},\ t\,b\,\sqrt{m}\,\log\frac{eN}{m}\Big\}\Big).$$
17 Uniform bound for norms of submatrices

Let $A$ be an $n \times N$ random matrix with independent log-concave isotropic rows $X_1, \dots, X_n \in \mathbb{R}^N$. For $k \le n$, $m \le N$ we let $A_{k,m}$ be the maximal operator norm of a $k \times m$ submatrix of $A$. For simplicity assume $n \le N$.

Theorem. For any integers $n \le N$, $k \le n$, $m \le N$ and any $t \ge 1$, we have
$$\mathbb{P}\big(A_{k,m} \ge Ct\lambda_{mk}\big) \le \exp\Big(-\frac{t\,\lambda_{mk}}{\log(3m)}\Big),$$
where
$$\lambda_{mk} = \log\log(3m)\,\sqrt{m}\,\log\frac{eN}{m} + \sqrt{k}\,\log\frac{en}{k}.$$
18 Uniform bound for norms of submatrices, cont.

The threshold value of $\lambda_{mk}$ is optimal up to the factor $\log\log(3m)$. Under the additional assumption of unconditionality of the rows we can remove this factor and get a sharp estimate. For example, for expectations:

Theorem. Let $A$ be an $n \times N$ matrix whose rows are independent isotropic log-concave unconditional random vectors. Then
$$\mathbb{E}A_{k,m} \le C\Big(\sqrt{m}\,\log\frac{3N}{m} + \sqrt{k}\,\log\frac{3n}{k}\Big).$$
19 Reconstruction

Let $n, N$ be integers, $X \in \mathbb{R}^N$ a random vector, and $X_1, \dots, X_n$ independent, distributed as $X$; as before, $A$ is the $n \times N$ matrix which has the $X_i$ as rows. We can treat $A$ as the operator $A \colon \mathbb{R}^N \to \mathbb{R}^n$ given by $x \mapsto (\langle X_i, x \rangle)_{i \le n} \in \mathbb{R}^n$.

Let $m \le N$. A vector $x \in \mathbb{R}^N$ is $m$-sparse if $|\operatorname{supp} x| \le m$.

Problem from Compressed Sensing theory: reconstruct any $m$-sparse vector $x \in \mathbb{R}^N$ from the data $Ax \in \mathbb{R}^n$, with a fast algorithm. Given $Ax$, find $x$, knowing that it is sparse. Note that, of course, $A$ is not invertible.
20 RIP and geometry of polytopes

Define $\delta_m = \delta_m(A)$ as the infimum of $\delta > 0$ such that
$$\big|\,|Ax|^2 - \mathbb{E}|Ax|^2\,\big| \le \delta\,n\,|x|^2$$
holds for all $m$-sparse vectors $x \in \mathbb{R}^N$; $\delta_m$ is the Restricted Isometry Property (RIP) parameter of order $m$.

Candès and Tao (2006): if $\delta_{2m}$ is sufficiently small, then $(**)$: whenever $y = Ax$ has an $m$-sparse solution $x$, then $x$ is the unique solution of the $\ell_1$-minimization program
$$\min \|t\|_{\ell_1} \quad \text{over all } t \text{ such that } At = y.$$

Geometry of polytopes: by Donoho (2005), $(**)$ is equivalent to the condition that the centrally symmetric polytope $A(B_1^N)$ is $m$-centrally neighborly (i.e., any set of fewer than $m$ vertices containing no opposite pairs is the vertex set of a face).
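The $\ell_1$-minimization program is a linear program after the standard split $t = u - v$ with $u, v \ge 0$. A minimal sketch (my own, using SciPy's `linprog`; the dimensions, seed, and Gaussian rows are arbitrary choices, not from the talk) that recovers a sparse vector exactly:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, N, m = 40, 80, 4                        # n measurements, dimension N, sparsity m

A = rng.standard_normal((n, N))            # rows: isotropic (Gaussian) vectors
x = np.zeros(N)
x[rng.choice(N, size=m, replace=False)] = rng.standard_normal(m)
y = A @ x                                  # the observed data Ax

# min ||t||_1 s.t. A t = y, as an LP in (u, v) >= 0 with t = u - v:
c = np.ones(2 * N)                         # objective: sum(u) + sum(v) = ||t||_1
A_eq = np.hstack([A, -A])                  # A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.max(np.abs(x_hat - x)))           # ~ 0: exact recovery
```

With these proportions ($m/n = 0.1$, $n/N = 0.5$) the parameters sit well inside the recovery region, so the unique $\ell_1$ minimizer coincides with the sparse solution, exactly the phenomenon $(**)$ describes.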
21 Estimate for $\delta_m$

Lemma. Let $X_1, \dots, X_n$ be independent isotropic random vectors in $\mathbb{R}^N$. Let $0 < \theta < 1$ and $B \ge 1$. Then with probability at least
$$1 - \binom{N}{m}\exp\big(-3\theta^2 n / 8B^2\big)$$
one has
$$\delta_m \le \theta + \frac{1}{n}\big(A_{k,m}^2 + \mathbb{E}A_{k,m}^2\big),$$
where $k \le n$ is the largest integer satisfying $k \le (A_{k,m}/B)^2$.
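The quantity being bounded can be computed by brute force in small dimensions, since for isotropic rows $\mathbb{E}|Ax|^2 = n|x|^2$ and the constraint over $m$-sparse $x$ reduces to eigenvalue bounds on the Gram matrices of $m$-column submatrices. The sketch below (my own illustration, with Gaussian rows and arbitrary sizes) computes $\delta_m(A)$ this way.

```python
import numpy as np
from itertools import combinations

def delta_m(A, m):
    """Brute-force RIP parameter: smallest delta with
    | |Ax|^2 - n|x|^2 | <= delta * n * |x|^2 for all m-sparse x
    (for isotropic rows, E|Ax|^2 = n|x|^2)."""
    n, N = A.shape
    worst = 0.0
    for S in combinations(range(N), m):
        G = A[:, S].T @ A[:, S] / n        # Gram matrix of the m chosen columns
        eig = np.linalg.eigvalsh(G)        # delta_m = worst eigenvalue deviation
        worst = max(worst, abs(eig[-1] - 1), abs(eig[0] - 1))
    return worst

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 12))
print(delta_m(A, 1), delta_m(A, 2), delta_m(A, 3))  # small, nondecreasing in m
```

The exponential number of supports is again why one wants the lemma above, which controls $\delta_m$ through the submatrix norms $A_{k,m}$ instead of direct enumeration.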
22 RIP theorem for matrices with independent rows

Theorem. Let $n \le N$ and $0 < \theta < 1$. Let $A$ be an $n \times N$ matrix whose rows are independent isotropic log-concave random vectors $X_i$, $i \le n$. There exists an absolute constant $c > 0$ such that if $m \le N$ satisfies
$$m\,\log\log(3m)\,\Big(\log\frac{3N}{m}\Big)^2 \le c\,\Big(\frac{\theta}{\log(3/\theta)}\Big)^2 n,$$
then $\delta_m \le \theta$ with overwhelming probability.

The result is optimal up to the $\log\log$ factor; for unconditional distributions it can be removed.
More informationPhenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012
Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 202 BOUNDS AND ASYMPTOTICS FOR FISHER INFORMATION IN THE CENTRAL LIMIT THEOREM
More informationTwo hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45
Two hours Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER PROBABILITY 2 14 January 2015 09:45 11:45 Answer ALL four questions in Section A (40 marks in total) and TWO of the THREE questions
More informationMathematical Foundation for Compressed Sensing
Mathematical Foundation for Compressed Sensing Jan-Olov Strömberg Royal Institute of Technology, Stockholm, Sweden Lecture 7, March 19, 2012 An outline for today Last time we stated: An outline for today
More informationOn a class of stochastic differential equations in a financial network model
1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University
More informationNotes 6 : First and second moment methods
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationSparsity Regularization
Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation
More informationLecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016
Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,
More informationCOUNTING INTEGER POINTS IN POLYHEDRA. Alexander Barvinok
COUNTING INTEGER POINTS IN POLYHEDRA Alexander Barvinok Papers are available at http://www.math.lsa.umich.edu/ barvinok/papers.html Let P R d be a polytope. We want to compute (exactly or approximately)
More information19.1 Problem setup: Sparse linear regression
ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 19: Minimax rates for sparse linear regression Lecturer: Yihong Wu Scribe: Subhadeep Paul, April 13/14, 2016 In
More informationA Sharpened Hausdorff-Young Inequality
A Sharpened Hausdorff-Young Inequality Michael Christ University of California, Berkeley IPAM Workshop Kakeya Problem, Restriction Problem, Sum-Product Theory and perhaps more May 5, 2014 Hausdorff-Young
More informationarxiv: v1 [math.pr] 22 May 2008
THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered
More informationThe rank of random regular digraphs of constant degree
The rank of random regular digraphs of constant degree Alexander E. Litvak Anna Lytova Konstantin Tikhomirov Nicole Tomczak-Jaegermann Pierre Youssef Abstract Let d be a (large) integer. Given n 2d, let
More informationLarge deviations and averaging for systems of slow fast stochastic reaction diffusion equations.
Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics
More informationGREEDY SIGNAL RECOVERY REVIEW
GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin
More information