Reconstruction from Anisotropic Random Measurements


1 Reconstruction from Anisotropic Random Measurements
Mark Rudelson and Shuheng Zhou, The University of Michigan, Ann Arbor
Coding, Complexity, and Sparsity Workshop 2013, Ann Arbor, Michigan, August 7, 2013

2 Want to estimate a parameter β ∈ R^p
Example: how is a response y ∈ R, related to Parkinson's disease, affected by a set of genes in the Chinese population?
Construct a linear model: y = β^T x + ɛ, where E(y | x) = β^T x
Parameter: the non-zero entries of β (the sparsity pattern of β) identify a subset of genes and indicate how much they influence y
Take a random sample of (X, Y) and use it to estimate β; that is, we have Y = Xβ + ɛ
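A minimal numerical sketch of this setup in Python (the dimensions, sparsity level, and noise scale below are illustrative placeholders, not values from the talk):

import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5            # sample size, ambient dimension, sparsity
beta = np.zeros(p)
beta[:s] = rng.normal(size=s)    # s non-zero coefficients (the "active" genes)
X = rng.normal(size=(n, p))      # design matrix with iid N(0, 1) entries
eps = 0.1 * rng.normal(size=n)   # noise
Y = X @ beta + eps               # observed responses: Y = X beta + eps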

3 Model selection and parameter estimation
When can we approximately recover β from n noisy observations Y?
Questions:
How many measurements n do we need in order to recover the non-zero positions in β?
How does n scale with p or s, where s is the number of non-zero entries of β?
What assumptions about the data matrix X are reasonable?

4 Sparse recovery
When β is known to be s-sparse for some 1 ≤ s ≤ n, which means that at most s of the coefficients of β can be non-zero:
Assume every 2s columns of X are linearly independent: identifiability condition (reasonable once n ≥ 2s)
Λ_min(2s) = min_{υ ≠ 0, 2s-sparse} ‖Xυ‖₂ / (√n ‖υ‖₂) > 0
Proposition (Candès-Tao 05): Suppose that any 2s columns of the n×p matrix X are linearly independent. Then any s-sparse signal β ∈ R^p can be reconstructed uniquely from Xβ.

5 ℓ₀-minimization
How to reconstruct an s-sparse signal β ∈ R^p from the measurements Y = Xβ, given Λ_min(2s) > 0?
Let β̂ be the unique sparsest solution to Xβ = Y:
β̂ = arg min_{β : Xβ = Y} ‖β‖₀, where ‖β‖₀ := #{1 ≤ i ≤ p : β_i ≠ 0} is the sparsity of β
Unfortunately, ℓ₀-minimization is computationally intractable (the associated decision problem is NP-hard).
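To make the intractability concrete, a brute-force ℓ₀ decoder can be written in a few lines (a sketch for illustration only; the function name and tolerance are arbitrary). It solves one least-squares problem per candidate support, so its cost grows like the binomial coefficient C(p, s):

from itertools import combinations
import numpy as np

def l0_decode(X, Y, s, tol=1e-8):
    """Brute-force l0 recovery: try every support of size s and test for an exact fit."""
    n, p = X.shape
    for support in combinations(range(p), s):
        cols = X[:, list(support)]
        coef, *_ = np.linalg.lstsq(cols, Y, rcond=None)
        if np.linalg.norm(cols @ coef - Y) < tol:
            beta = np.zeros(p)
            beta[list(support)] = coef
            return beta
    return None  # no exact s-sparse solution found

# Already for p = 500 and s = 5 there are about 2.6e11 supports to test.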

6 Basis pursuit
Consider the following convex optimization problem:
β̂ := arg min_{β : Xβ = Y} ‖β‖₁
Basis pursuit works whenever the n×p measurement matrix X is sufficiently incoherent: RIP (Candès-Tao 05) requires that for all T ⊂ {1, …, p} with |T| ≤ s and for all coefficient sequences (c_j)_{j∈T},
(1 − δ_s) ‖c‖₂² ≤ ‖X_T c‖₂² / n ≤ (1 + δ_s) ‖c‖₂²
holds for some 0 < δ_s < 1 (the s-restricted isometry constant)
Good matrices for compressed sensing should satisfy these inequalities for the largest possible s
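Basis pursuit is a linear program: writing β = β⁺ − β⁻ with β⁺, β⁻ ≥ 0 turns min ‖β‖₁ subject to Xβ = Y into a standard LP. A minimal sketch using scipy (an illustration; the solver choice and the toy problem are arbitrary):

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, Y):
    """Solve min ||beta||_1 subject to X beta = Y as a linear program."""
    n, p = X.shape
    c = np.ones(2 * p)                 # objective: sum(beta_plus) + sum(beta_minus)
    A_eq = np.hstack([X, -X])          # X (beta_plus - beta_minus) = Y
    res = linprog(c, A_eq=A_eq, b_eq=Y, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:]

# toy check: recover a 2-sparse vector from 15 Gaussian measurements
rng = np.random.default_rng(7)
X = rng.normal(size=(15, 30))
beta = np.zeros(30); beta[[3, 17]] = [1.0, -2.0]
print(np.linalg.norm(basis_pursuit(X, X @ beta) - beta))   # typically close to 0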

7 Restricted Isometry Property (RIP): examples
For a Gaussian random matrix, or any subgaussian ensemble, RIP holds with s ≍ n / log(p/n)
For the random Fourier ensemble, or randomly sampled rows of orthonormal matrices, RIP holds for s = O(n / log⁴ p)
For a random matrix composed of columns that are independent isotropic vectors with log-concave densities, RIP holds for s = O(n / log²(p/n))
References: Candès-Tao 05, 06, Rudelson-Vershynin 05, Donoho 06, Baraniuk et al 08, Mendelson et al 08, Adamczak et al 09

8 Basis pursuit for high dimensional data
These algorithms are also robust with respect to noise, and RIP will be replaced by more relaxed conditions
In particular, the isotropicity condition which has been assumed in all of the literature cited above needs to be dropped
Let X_i ∈ R^p, i = 1, …, n, be iid random row vectors of the design matrix X
Covariance matrix: Σ(X_i) = E X_i ⊗ X_i = E X_i X_i^T
Σ̂_n = (1/n) Σ_{i=1}^n X_i ⊗ X_i = (1/n) Σ_{i=1}^n X_i X_i^T
X_i is isotropic if Σ(X_i) = I, in which case E ‖X_i‖₂² = p
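A small numpy sketch of the empirical covariance Σ̂_n and of isotropy for a Gaussian sample (the choices of n, p, and distribution are illustrative):

import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.normal(size=(n, p))        # rows X_i are isotropic: E X_i X_i^T = I
Sigma_hat = X.T @ X / n            # empirical covariance (1/n) sum_i X_i X_i^T
print(np.linalg.norm(Sigma_hat - np.eye(p), 2))   # small for n >> p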

9 Sparse recovery for Y = Xβ + ɛ
Lasso (Tibshirani 96), aka Basis Pursuit (Chen, Donoho and Saunders 98, and others):
β̂ = arg min_β ‖Y − Xβ‖₂²/(2n) + λ_n ‖β‖₁,
where the scaling factor 1/(2n) is chosen for convenience
Dantzig selector (Candès-Tao 07):
(DS) arg min_{β̃ ∈ R^p} ‖β̃‖₁ subject to ‖X^T (Y − Xβ̃)/n‖_∞ ≤ λ_n
References: Greenshtein-Ritov 04, Meinshausen-Bühlmann 06, Zhao-Yu 06, Bunea et al 07, Candès-Tao 07, van de Geer 08, Zhang-Huang 08, Wainwright 09, Koltchinskii 09, Meinshausen-Yu 09, Bickel et al 09, and others
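For reference, scikit-learn's Lasso minimizes ‖Y − Xβ‖₂²/(2n) + α‖β‖₁, the same scaling as above, so a quick experiment looks as follows (the penalty level below is only of the usual order λ_n ≍ σ√(log p / n), not a tuned choice):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, s, sigma = 200, 1000, 10, 0.1
beta = np.zeros(p); beta[:s] = 1.0
X = rng.normal(size=(n, p))
Y = X @ beta + sigma * rng.normal(size=n)

lam = sigma * np.sqrt(2 * np.log(p) / n)       # order of the standard choice
beta_hat = Lasso(alpha=lam).fit(X, Y).coef_
print(np.nonzero(beta_hat)[0])                 # estimated support, typically containing 0..9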

10 The Cone Constraint
For an appropriately chosen λ_n, the solution of the Lasso or the Dantzig selector satisfies (under iid Gaussian noise), with high probability,
υ := β̂ − β ∈ C(s, k₀),
with k₀ = 1 for the Dantzig selector and k₀ = 3 for the Lasso
Object of interest: for 1 ≤ s₀ ≤ p and a positive number k₀,
C(s₀, k₀) = { x ∈ R^p : ∃ J ⊂ {1, …, p}, |J| = s₀, s.t. ‖x_{J^c}‖₁ ≤ k₀ ‖x_J‖₁ }
This object has appeared in earlier work in the noiseless setting
References: Donoho-Huo 01, Elad-Bruckstein 02, Feuer-Nemirovski 03, Candès-Tao 07, Bickel-Ritov-Tsybakov 09, Cohen-Dahmen-DeVore 09
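Since the best choice of J in the definition is the set of the s₀ largest coordinates of x in absolute value, membership in C(s₀, k₀) reduces to a single check; a small helper (illustrative, not from the talk):

import numpy as np

def in_cone(x, s0, k0):
    """Check x in C(s0, k0): ||x_{J^c}||_1 <= k0 ||x_J||_1 with J = top-s0 coordinates of |x|."""
    order = np.argsort(-np.abs(x))
    J, Jc = order[:s0], order[s0:]
    return np.abs(x[Jc]).sum() <= k0 * np.abs(x[J]).sum()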

11 The Lasso solution

12 Restricted Eigenvalue (RE) condition
Object of interest:
C(s₀, k₀) = { x ∈ R^p : ∃ J ⊂ {1, …, p}, |J| = s₀, s.t. ‖x_{J^c}‖₁ ≤ k₀ ‖x_J‖₁ }
Definition. A q×p matrix A satisfies the RE(s₀, k₀, A) condition with parameter K(s₀, k₀, A) if
1/K(s₀, k₀, A) := min_{J ⊂ {1,…,p}, |J| ≤ s₀}  min_{υ ≠ 0, ‖υ_{J^c}‖₁ ≤ k₀ ‖υ_J‖₁}  ‖Aυ‖₂ / ‖υ_J‖₂ > 0
References: van de Geer 07, Bickel-Ritov-Tsybakov 09, van de Geer-Bühlmann 09

13 An elementary estimate
Lemma. For each vector υ ∈ C(s₀, k₀), let T₀ denote the locations of the s₀ largest coefficients of υ in absolute value. Then
‖υ_{T₀^c}‖₂ ≤ ‖υ‖₁ / √|T₀|,  and  ‖υ_{T₀}‖₂ ≥ ‖υ‖₂ / √(1 + k₀)
Implication: Let A be a q×p matrix such that the RE(s₀, 3k₀, A) condition holds with 0 < K(s₀, 3k₀, A) < ∞. Then for every υ ∈ C(s₀, k₀) ∩ S^{p−1},
‖Aυ‖₂ ≥ ‖υ_{T₀}‖₂ / K(s₀, k₀, A) ≥ 1 / (K(s₀, k₀, A) √(1 + k₀)) > 0
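Both bounds are easy to check numerically; a quick sanity check on randomly generated cone vectors (an illustration with arbitrary p, s₀, k₀):

import numpy as np

rng = np.random.default_rng(3)
p, s0, k0 = 50, 5, 3

def random_cone_vector():
    """Draw x in C(s0, k0): pick a support J, then shrink the tail until ||x_{J^c}||_1 <= k0 ||x_J||_1."""
    x = rng.normal(size=p)
    J = rng.choice(p, size=s0, replace=False)
    mask = np.zeros(p, dtype=bool); mask[J] = True
    x[~mask] *= 0.9 * k0 * np.abs(x[mask]).sum() / np.abs(x[~mask]).sum()
    return x

for _ in range(1000):
    v = random_cone_vector()
    T0 = np.argsort(-np.abs(v))[:s0]
    tail = np.delete(v, T0)
    assert np.linalg.norm(tail) <= np.linalg.norm(v, 1) / np.sqrt(s0) + 1e-12
    assert np.linalg.norm(v[T0]) >= np.linalg.norm(v) / np.sqrt(1 + k0) - 1e-12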

14 Sparse eigenvalues
Definition. For m ≤ p, we define the largest and smallest m-sparse eigenvalues of a q×p matrix A to be
ρ_max(m, A) := max_{t ∈ R^p, t ≠ 0; m-sparse} ‖At‖₂² / ‖t‖₂²,
ρ_min(m, A) := min_{t ∈ R^p, t ≠ 0; m-sparse} ‖At‖₂² / ‖t‖₂²
If RE(s₀, k₀, A) is satisfied with k₀ ≥ 1, then the square submatrices of size 2s₀ of A^T A are necessarily positive definite, that is, ρ_min(2s₀, A) > 0
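For small m the sparse eigenvalues can be computed by brute force over all supports; a sketch (feasible only for small p and m, since it enumerates C(p, m) submatrices):

from itertools import combinations
import numpy as np

def sparse_eigs(A, m):
    """Return (rho_max(m, A), rho_min(m, A)) by enumerating all size-m column supports."""
    q, p = A.shape
    lo, hi = np.inf, 0.0
    for S in combinations(range(p), m):
        G = A[:, list(S)].T @ A[:, list(S)]   # m x m Gram matrix of the selected columns
        w = np.linalg.eigvalsh(G)             # eigenvalues in ascending order
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return hi, lo

A = np.random.default_rng(8).normal(size=(15, 12)) / np.sqrt(15)
print(sparse_eigs(A, 3))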

15 Examples of A which satisfy the Restricted Eigenvalue condition, but not RIP (Raskutti, Wainwright, and Yu 10)
Spiked identity matrix: for a ∈ [0, 1),
Σ_{p×p} = (1 − a) I_{p×p} + a 1 1^T, where 1 ∈ R^p is the vector of all ones; ρ_min(Σ) > 0
Then for every s₀ × s₀ submatrix Σ_{SS}, we have
ρ_max(Σ_{SS}) / ρ_min(Σ_{SS}) = (1 + a(s₀ − 1)) / (1 − a)
The largest sparse eigenvalue grows without bound as s₀ → ∞, but ‖Σ^{1/2} e_j‖₂ = 1 is bounded
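A quick numerical check of this ratio for a spiked-identity submatrix (the values of a and s₀ are arbitrary):

import numpy as np

a, s0 = 0.5, 20
Sigma_SS = (1 - a) * np.eye(s0) + a * np.ones((s0, s0))   # any s0 x s0 principal submatrix
w = np.linalg.eigvalsh(Sigma_SS)
print(w[-1], 1 + a * (s0 - 1))    # largest eigenvalue: grows linearly in s0
print(w[0], 1 - a)                # smallest eigenvalue: bounded away from 0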

16 Motivation: to construct classes of design matrices such that the Restricted Eigenvalue condition will be satisfied
Design matrix X with independent rows, rather than independent entries: e.g., consider X = ΨA for some q×p matrix A, where the rows of the n×q matrix Ψ are independent isotropic vectors with subgaussian marginals, and RE(s₀, (1 + ε)k₀, A) holds for some ε > 0, p > s₀ > 0, and k₀ > 0
Design matrix X consisting of independent identically distributed rows with bounded entries, whose covariance matrix Σ(X_i) = E X_i X_i^T satisfies RE(s₀, (1 + ε)k₀, Σ^{1/2})
The rows of X will be sampled from some distribution in R^p; the distribution may be highly non-Gaussian and perhaps discrete

17 Outline
Introduction
The main results
The reduction principle
Applications of the reduction principle
Ingredients of the proof
Conclusion

18 Notation
Let e₁, …, e_p be the canonical basis of R^p
For a set J ⊂ {1, …, p}, denote E_J = span{e_j : j ∈ J}
For a matrix A, we use ‖A‖ to denote its operator norm
For a set V ⊂ R^p, we let conv V denote the convex hull of V
For a finite set Y, the cardinality is denoted by |Y|
Let B₂^p and S^{p−1} be the unit Euclidean ball and the unit sphere, respectively

19 The reduction principle: Theorem
Let E = ⋃_{|J| = d} E_J for d(3k₀) < p, where
d(3k₀) = s₀ + s₀ max_j ‖Ae_j‖₂² · 16 K²(s₀, 3k₀, A) (3k₀)² (3k₀ + 1) / δ²,
and E denotes R^p otherwise. Let Ψ be a matrix such that
(1 − δ) ‖x‖₂ ≤ ‖Ψx‖₂ ≤ (1 + δ) ‖x‖₂ for all x ∈ A E
Then RE(s₀, k₀, ΨA) holds with
0 < K(s₀, k₀, ΨA) ≤ K(s₀, k₀, A) / (1 − 5δ)
In words: if the matrix Ψ acts as an almost isometry on the images of the d-sparse vectors under A, then the product ΨA satisfies the RE condition with the smaller parameter k₀

20 Reformulation of the reduction principle: Theorem (Restricted Isometry)
Let Ψ be a matrix such that
(1 − δ) ‖x‖₂ ≤ ‖Ψx‖₂ ≤ (1 + δ) ‖x‖₂ for all x ∈ A E   (*)
Then for any x ∈ A(C(s₀, k₀)) ∩ S^{q−1},
(1 − 5δ) ≤ ‖Ψx‖₂ ≤ (1 + 3δ)
If the matrix Ψ acts as an almost isometry on the images of the d-sparse vectors under A, then it acts the same way on the images of C(s₀, k₀)
The problem is thus reduced to checking that the almost isometry property holds for all vectors from some low-dimensional subspaces, which is easier than checking the RE property directly

21 Definition: subgaussian random vectors
Let Y be a random vector in R^p
1. Y is called isotropic if for every y ∈ R^p, E ⟨Y, y⟩² = ‖y‖₂²
2. Y is ψ₂ with a constant α if for every y ∈ R^p,
‖⟨Y, y⟩‖_{ψ₂} := inf{ t : E exp(⟨Y, y⟩²/t²) ≤ 2 } ≤ α ‖y‖₂
The ψ₂ condition on a scalar random variable V is equivalent to the subgaussian tail decay of V, which means that for some constant c,
P(|V| > t) ≤ exp(−t²/c²) for all t > 0
A random vector Y in R^p is subgaussian if the one-dimensional marginals ⟨Y, y⟩ are subgaussian random variables for all y ∈ R^p

22 The first application of the reduction principle
Let A be a q×p matrix satisfying the RE(s₀, 3k₀, A) condition. Let m = min(d, p), where
d = s₀ + s₀ max_j ‖Ae_j‖₂² · 16 K²(s₀, 3k₀, A) (3k₀)² (3k₀ + 1) / δ²
Theorem. Let Ψ be an n×q matrix whose rows are independent isotropic ψ₂ random vectors in R^q with constant α. Suppose
n ≥ (2000 m α⁴ / δ²) log(60ep / (mδ))
Then with probability at least 1 − 2 exp(−δ²n / (2000α⁴)), the RE(s₀, k₀, (1/√n)ΨA) condition holds with
0 < K(s₀, k₀, (1/√n)ΨA) ≤ K(s₀, k₀, A) / (1 − δ)
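A sketch of a design of this anisotropic form X = (1/√n)ΨA, with Rademacher rows for Ψ (isotropic, with subgaussian marginals) and a correlated factor A chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(4)
n, q, p = 200, 300, 500
Psi = rng.choice([-1.0, 1.0], size=(n, q))     # independent isotropic rows with subgaussian marginals
A = 0.7 ** np.abs(np.arange(q)[:, None] - np.arange(p)[None, :])   # some q x p matrix (illustrative)
X = Psi @ A / np.sqrt(n)                       # design matrix (1/sqrt(n)) Psi A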

23 Examples of subgaussian vectors
The random vector Y with iid N(0, 1) random coordinates
The discrete Gaussian vector: a random vector taking values on the integer lattice Z^p with distribution P(X = m) = C exp(−‖m‖₂²/2) for m ∈ Z^p
A vector with independent centered bounded random coordinates; in particular, vectors with random symmetric Bernoulli coordinates, in other words, random vertices of the discrete cube
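The first and last examples are one-liners to generate (the discrete Gaussian is omitted here, since exact sampling from it takes more care):

import numpy as np

rng = np.random.default_rng(5)
p = 100
gaussian_vec = rng.normal(size=p)                  # iid N(0, 1) coordinates
rademacher_vec = rng.choice([-1.0, 1.0], size=p)   # random vertex of the discrete cube {-1, 1}^p
# both are isotropic: E <Y, y>^2 = ||y||_2^2 for every y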

24 Previous results on (sub)gaussian random vectors
Raskutti, Wainwright, and Yu 10: RE(s₀, k₀, X) holds for a random Gaussian measurement / design matrix X which consists of n = O(s₀ log p) independent copies of a Gaussian random vector Y ~ N_p(0, Σ), assuming that the RE condition holds for Σ^{1/2}
Their proof relies on a deep result from the theory of Gaussian random processes: Gordon's Minimax Lemma
To establish the RE condition for more general classes of random matrices we had to introduce a new approach based on geometric functional analysis, namely, the reduction principle
The bound n = O(s₀ log p) can be improved to the optimal one n = O(s₀ log(p/s₀)) when RE(s₀, k₀, Σ^{1/2}) is replaced with RE(s₀, (1 + ε)k₀, Σ^{1/2}) for any ε > 0

25 In Zhou 09, subgaussian random matrices of the form X = ΨΣ^{1/2} were considered, where Σ is a p×p positive semidefinite matrix: X satisfies the RE(s₀, k₀) condition with overwhelming probability if, for K := K(s₀, k₀, Σ^{1/2}),
n > (9c'α⁴/δ²) (2 + k₀)² K⁴ max{ ρ_max(s₀, Σ^{1/2}) s₀ log(5ep/s₀), s₀ log(p/s₀) }
The analysis there used a result in Mendelson et al 07, 08
The current result involves neither ρ_max(s₀, A) nor the global parameters of the matrices A and Ψ, such as the norm or the smallest singular value
Recall the spiked identity matrix: for a ∈ [0, 1), Σ_{p×p} = (1 − a) I_{p×p} + a 1 1^T, which satisfies the RE condition, yet ρ_max(s₀, A) grows linearly with s₀ while max_j ‖Σ^{1/2} e_j‖₂ = 1

26 Design matrices with uniformly bounded entries
Let Y ∈ R^p be a random vector such that ‖Y‖_∞ ≤ M a.s., and denote Σ = E Y Y^T
Let X be an n×p matrix whose rows X₁, …, X_n are independent copies of Y
Set d = s₀ + s₀ max_j ‖Σ^{1/2} e_j‖₂² · 16 K²(s₀, 3k₀, Σ^{1/2}) (3k₀)² (3k₀ + 1) / δ²
Theorem. Assume that d ≤ p and ρ = ρ_min(d, Σ^{1/2}) > 0. Let Σ satisfy the RE(s₀, 3k₀, Σ^{1/2}) condition. Suppose
n ≥ (C M² d log p / (ρδ²)) log³(C M² d log p / (ρδ²))
Then with probability at least 1 − exp(−δ²ρn / (6M²d)), RE(s₀, k₀, X) holds for the matrix X/√n with
0 < K(s₀, k₀, X/√n) ≤ K(s₀, k₀, Σ^{1/2}) / (1 − δ)

27 Remarks on applying the reduction principle
To analyze different classes of random design matrices:
Unlike the case of a random matrix with subgaussian marginals, the estimate of the second example contains the minimal sparse singular value ρ_min(d, Σ^{1/2})
The reconstruction of sparse signals by subgaussian design matrices or by the random Fourier ensemble was analyzed in the literature before, however only under RIP assumptions
The reduction principle can be applied to other types of random variables: e.g., random vectors with heavy-tailed marginals, or random vectors with log-concave densities
References: Rudelson-Vershynin 05, Baraniuk et al 08, Mendelson et al 08, Vershynin 11a, b, Adamczak et al 09

28 Maurey's empirical approximation argument (Pisier 81)
Let u₁, …, u_M ∈ R^q. Let y ∈ conv(u₁, …, u_M):
y = Σ_{j∈{1,…,M}} α_j u_j, where α_j ≥ 0 and Σ_j α_j = 1
There exists a set L ⊂ {1, …, M} such that
|L| ≤ m = 4 max_{j∈{1,…,M}} ‖u_j‖₂² / ε²
and a vector y′ ∈ conv(u_j, j ∈ L) such that ‖y − y′‖₂ ≤ ε
Proof: an application of the probabilistic method: if we only want to approximate y, rather than represent it exactly as a convex combination of u₁, …, u_M, this is possible with far fewer points, namely u_j, j ∈ L
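The sampling scheme behind the proof is easy to simulate (an illustration; the points, weights, and ε below are arbitrary): draw m indices with probabilities (α_l), average the corresponding points, and compare with y.

import numpy as np

rng = np.random.default_rng(6)
q, M, eps = 20, 500, 0.3
U = rng.normal(size=(M, q))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # points u_l on the unit sphere, so max ||u_l||_2 = 1
alpha = rng.dirichlet(np.ones(M))               # convex weights alpha_l
y = alpha @ U                                   # y in conv(u_1, ..., u_M)

m = int(np.ceil(4 * 1.0 / eps**2))              # m = 4 max_l ||u_l||_2^2 / eps^2
idx = rng.choice(M, size=m, p=alpha)            # Y_1, ..., Y_m with P(Y = u_l) = alpha_l
y_prime = U[idx].mean(axis=0)                   # lies in conv(u_l : l among the sampled indices)
print(len(np.unique(idx)), np.linalg.norm(y - y_prime))   # few distinct points, error of order eps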

29 Let y = Σ_{j∈{1,…,M}} α_j u_j, where α_j ≥ 0 and Σ_j α_j = 1
Goal: to find a vector y′ ∈ conv(u_j, j ∈ L) such that ‖y − y′‖₂ ≤ ε

30 Let Y be a random vector in R^q such that P(Y = u_l) = α_l, l ∈ {1, …, M}
Then E(Y) = Σ_{l∈{1,…,M}} α_l u_l = y

31 Let Y₁, …, Y_m be independent copies of Y, and let ε₁, …, ε_m be iid mean-zero ±1 Bernoulli random variables, chosen independently of Y₁, …, Y_m
By the standard symmetrization argument we have
E ‖y − (1/m) Σ_{j=1}^m Y_j‖₂² ≤ 4 E ‖(1/m) Σ_{j=1}^m ε_j Y_j‖₂² = (4/m²) Σ_{j=1}^m E ‖Y_j‖₂² ≤ (4/m) max_{l∈{1,…,M}} ‖u_l‖₂² ≤ ε²   (1)
where ‖Y_j‖₂ ≤ max_{l∈{1,…,M}} ‖u_l‖₂, and the last inequality in (1) follows from the definition of m

32 Fix a realization Y_j = u_{k_j}, j = 1, …, m, for which ‖y − (1/m) Σ_{j=1}^m Y_j‖₂ ≤ ε
(Figure: the sampled points u_{k_1}, …, u_{k_m} and the target vector y.)
The vector (1/m) Σ_{j=1}^m Y_j belongs to the convex hull of {u_l : l ∈ L}, where L is the set of distinct elements of the sequence k₁, …, k_m
Obviously |L| ≤ m, and the lemma is proved. QED

33 The Inclusion Lemma
To prove the restricted isometry of Ψ over the set of vectors in A(C(s₀, k₀)) ∩ S^{q−1}, we show that this set is contained in the convex hull of the images of the sparse vectors with norms not exceeding (1 − δ)^{−1}
Lemma. Let 1 > δ > 0. Suppose the RE(s₀, k₀, A) condition holds for the q×p matrix A. For a set J ⊂ {1, …, p}, E_J = span{e_j : j ∈ J}. Set
d = d(k₀, A) = s₀ + s₀ max_j ‖Ae_j‖₂² · 16 K²(s₀, k₀, A) k₀² (k₀ + 1) / δ²
Then
A(C(s₀, k₀)) ∩ S^{q−1} ⊂ (1 − δ)^{−1} conv( ⋃_{|J| ≤ d} A E_J ∩ S^{q−1} )
where for d ≥ p, E_J is understood to be R^p

34 Conclusion
We prove a general reduction principle showing that if the matrix Ψ acts as an almost isometry on the images of the sparse vectors under A, then the product ΨA satisfies the RE condition with a smaller parameter k₀
We apply the reduction principle to analyze different classes of random design matrices
This analysis is reduced to checking that the almost isometry property holds for all vectors from some low-dimensional subspaces, which is easier than checking the RE property directly

35 Thank you!
