Weighted l1 Minimization for Sparse Recovery with Prior Information

M. Amin Khajehnejad (amin@caltech.edu), Weiyu Xu (weiyu@caltech.edu), A. Salman Avestimehr, Caltech CMI (avestim@caltech.edu), Babak Hassibi (hassibi@caltech.edu)

Abstract — In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted l1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that an i.i.d. random Gaussian measurement matrix along with weighted l1 minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional l1 optimization.

I. INTRODUCTION

Compressed sensing is an emerging technique of joint sampling and compression that has recently been proposed as an alternative to Nyquist sampling (followed by compression) for scenarios where measurements can be costly [4]. The premise is that sparse signals (signals with many zero or negligible elements in a known basis) can be recovered with far fewer measurements than the ambient dimension of the signal itself. In fact, the major breakthrough in this area has been the demonstration that l1 minimization can efficiently recover a sufficiently sparse vector from a system of underdetermined linear equations [1]. The conventional approach to compressed sensing assumes no prior information on the unknown signal other than the fact that it is sufficiently sparse in a particular basis. In many applications, however, additional prior information is available. In fact, in many cases the signal recovery problem (which compressed sensing attempts to address) is a detection or estimation problem in some statistical setting. Some recent work along these lines can be found in [5] (which considers compressed detection and estimation) and [6] (on Bayesian compressed sensing). In other cases, compressed sensing may be the inner loop of a larger estimation problem that feeds prior information on the sparse signal (e.g., its sparsity pattern) to the compressed sensing algorithm. In this paper we consider a particular model for the sparse signal that assigns a probability of being zero or nonzero to each entry of the unknown vector. The standard compressed sensing model is therefore a special case where these probabilities are all equal (for example, for a k-sparse vector the probabilities will all be k/n, where n is the number of entries of the unknown vector). As mentioned above, there are many situations where such prior information may be available, such as in natural images, in medical imaging, or in DNA microarrays, where the signal is often block sparse, i.e., the signal is more likely to be nonzero in certain blocks than in others [7].
While it is possible (albeit cumbersome) to study this model in full generality, in this paper we focus on the case where the entries of the unknown signal fall into two categories: in the first set (with cardinality n1) the probability of being nonzero is P1, and in the second set (with cardinality n2 = n − n1) this probability is P2. (Clearly, in this case the sparsity will with high probability be around n1P1 + n2P2.) This model is rich enough to capture many of the salient features regarding prior information, while being simple enough to allow a very thorough analysis. While it is in principle possible to extend our techniques to models with more than two categories of entries, the analysis becomes increasingly tedious and so is beyond the scope of this short paper. The contributions of the paper are the following. We propose a weighted l1 minimization approach for sparse recovery in which the entries of each set are given a different weight wi (i = 1, 2) in the l1 objective. Clearly, one would want to give a larger weight to those entries whose probability of being nonzero is smaller (thus further forcing them to be zero). The second contribution is to compute explicitly the relationship between the Pi, the wi, the ni/n (i = 1, 2), and the number of measurements so that the unknown signal can be recovered with overwhelming probability as n grows (the so-called weak threshold) for measurement matrices drawn from an i.i.d. Gaussian ensemble. The analysis uses the high-dimensional geometry techniques first introduced by Donoho and Tanner [1], [3] (e.g., Grassmann angles) to obtain sharp thresholds for compressed sensing. However, rather than use the neighborliness condition used in [1], [3], we find it more convenient to use the null space characterization of Xu and Hassibi [4], [13]. (A somewhat related method that uses weighted l1 optimization is that of Candes et al. [8]. The main difference is that there is no prior information, and at each step the l1 optimization is re-weighted using the estimates of the signal obtained in the previous minimization step.) The resulting Grassmannian manifold approach is a general framework for incorporating additional factors into compressed sensing: in [4] it was used to incorporate measurement noise; here it is used to incorporate prior information and weighted l1 optimization. Our analytic results allow us to compute the optimal weights for any P1, P2, n1, n2. We also provide simulation results to show the advantages of the weighted method over standard l1 minimization.
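As a quick sanity check on the claim above that the support size concentrates around n1P1 + n2P2, the following minimal sketch (ours, with arbitrarily chosen parameters; not taken from the paper) simulates the two-set model:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, P1, P2 = 5000, 5000, 0.3, 0.05   # illustrative values only

# Each entry is nonzero independently: with probability P1 in the first set,
# and with probability P2 in the second set.
support = np.concatenate([rng.random(n1) < P1, rng.random(n2) < P2])

print("observed support size  :", int(support.sum()))
print("predicted n1*P1 + n2*P2:", n1 * P1 + n2 * P2)
```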

II. MODEL

The signal is represented by an n × 1 vector x = (x_1, x_2, ..., x_n)^T of real-valued numbers, and is non-uniformly sparse with sparsity factor P1 over the (index) set K1 ⊂ {1, 2, ..., n} and sparsity factor P2 over the set K2 = {1, 2, ..., n} \ K1. By this we mean that if i ∈ K1, then x_i is a nonzero element with probability P1 and zero with probability 1 − P1; if i ∈ K2, the probability of x_i being nonzero is P2. We assume that |K1| = n1 and |K2| = n2 = n − n1. The measurement matrix A is an m × n (m/n = δ < 1) matrix with i.i.d. N(0, 1) entries. The observation vector is denoted by y and obeys

y = Ax.    (1)

As mentioned in Section I, l1 minimization can recover a vector x with k = µn nonzeros, provided µ is less than a known function of δ. l1 minimization has the following form:

min_{Ax=y} ||x||_1.    (2)

(2) is a linear program and can be solved in polynomial time (O(n^3)). However, it fails to exploit any additional prior information about the nature of the signal, should such information be available. One might simply think of modifying (2) to a weighted l1 minimization as follows:

min_{Ax=y} ||x||_{w,1} = min_{Ax=y} Σ_{i=1}^{n} w_i |x_i|.    (3)

The subscript w indicates the n × 1 positive weight vector. Now the question is what the optimal set of weights is, and whether one can improve the recovery threshold by using the weighted l1 minimization of (3) with those weights rather than (2). We have to be clearer at this point about the objective and about what we mean by extending the recovery threshold. First of all, note that the vectors generated by the model described above can have any number of nonzeros. However, their support size is typically (with probability arbitrarily close to one) around n1P1 + n2P2. Therefore, there is no notion of a strong threshold as in [1]. We ask instead: for what P1 and P2 can signals generated by this model be recovered with overwhelming probability as n grows? Moreover, we ask whether, by adjusting the w_i according to P1 and P2, one can extend the typical sparsity-to-dimension ratio (n1P1 + n2P2)/n for which reconstruction succeeds with high probability. This is the topic of the next section.
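To make program (3) concrete, here is a minimal sketch (ours, not the authors' code) that draws a signal from the Section II model and solves the weighted l1 problem as the equivalent linear program min w^T t subject to Ax = y and −t ≤ x ≤ t, using scipy.optimize.linprog. The helper names and all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def generate_signal(n1, n2, P1, P2, rng):
    """Non-uniformly sparse signal: nonzero with prob. P1 on the first block, P2 on the second."""
    mask = np.concatenate([rng.random(n1) < P1, rng.random(n2) < P2])
    x = np.zeros(n1 + n2)
    x[mask] = rng.standard_normal(mask.sum())
    return x

def weighted_l1(A, y, w):
    """Solve min_x sum_i w_i|x_i| s.t. Ax = y via the LP in variables (x, t):
    minimize w^T t  s.t.  Ax = y,  x - t <= 0,  -x - t <= 0."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])              # objective acts only on t
    A_eq = np.hstack([A, np.zeros((m, n))])           # Ax = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),             #  x - t <= 0
                      np.hstack([-I, -I])])           # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n     # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
n1 = n2 = 100
P1, P2 = 0.3, 0.05
m = 75
x = generate_signal(n1, n2, P1, P2, rng)
A = rng.standard_normal((m, n1 + n2))
w = np.concatenate([np.ones(n1), 3.0 * np.ones(n2)])  # heavier weight on the sparser block
x_hat = weighted_l1(A, A @ x, w)
print("exact recovery:", np.allclose(x_hat, x, atol=1e-6))
```

Setting w to all ones recovers the standard program (2); the O(n^3) complexity quoted in the text corresponds to solving this LP with an interior-point method.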
III. COMPUTATION OF THE WEAK THRESHOLD

Because of the partial symmetry of the sparsity of the signal, we know that the optimal weights should take only two positive values, W1 and W2. In other words, for i ∈ {1, 2, ..., n},

w_i = W1 if i ∈ K1,  and  w_i = W2 if i ∈ K2.

Let x be a random sparse signal generated by the non-uniformly sparse model of Section II, supported on the set K. K is called ε-typical if ||K ∩ K1| − n1P1| ≤ εn and ||K ∩ K2| − n2P2| ≤ εn. Let E be the event that x is recovered by (3). Then

P[E^c] = P[E^c | K is ε-typical] P[K is ε-typical] + P[E^c | K not ε-typical] P[K not ε-typical].

For any fixed ε > 0, P[K not ε-typical] approaches zero exponentially as n grows, by the law of large numbers. So, to bound the probability of failed recovery, we may assume that K is ε-typical for any small enough ε. Therefore we just consider the case |K| = k = n1P1 + n2P2. Similar to the null-space condition of [13], we present a necessary and sufficient condition for x to be the solution to (3). It is as follows: for every Z ∈ N(A),

Σ_{i∈K} w_i |Z_i| ≤ Σ_{i∈K̄} w_i |Z_i|,

where N(A) denotes the right null space of A and K̄ is the complement of K. We can upper bound P(E^c) by P_{K,−}, which is the probability that a vector x with a specific sign pattern (say non-positive) and supported on the specific set K is not recovered correctly by (3). (A difference between this upper bound and the one in [4] is that here there is no (n choose k) 2^k factor, because we have fixed the support set K and the sign pattern of x.) Exactly as in [4], by restricting x to the cross-polytope {x ∈ R^n : ||x||_{w,1} = 1}³, and noting that x is on a (k−1)-dimensional face F of the skewed cross-polytope SP = {y ∈ R^n : ||y||_{w,1} ≤ 1}, P_{K,−} is essentially the probability that a uniformly chosen (n−m)-dimensional subspace Ψ, shifted by the point x, namely (Ψ + x), intersects SP nontrivially at some point other than x. P_{K,−} is then interpreted as the complementary Grassmann angle [9] for the face F with respect to the polytope SP under the Grassmann manifold Gr_{n−m}(n). Building on the works of L. A. Santaló [11], P. McMullen [12] and others in high-dimensional integral geometry and convex polytopes, the complementary Grassmann angle for the (k−1)-dimensional face F can be explicitly expressed as the sum of products of internal angles and external angles [10]:

2 Σ_{s≥0} Σ_{G ∈ I_{m+1+2s}(SP)} β(F, G) γ(G, SP),    (4)

where s is any nonnegative integer, G is any (m+1+2s)-dimensional face of the skewed cross-polytope (I_{m+1+2s}(SP) is the set of all such faces), β(·, ·) stands for the internal angle, and γ(·, ·) stands for the external angle.

² Also, we may assume WLOG that W1 = 1.
³ This is because the restricted polytope totally surrounds the origin in R^n.
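The quantity P_{K,−} defined above can also be estimated crudely by Monte Carlo: fix a support K and a non-positive sign pattern, draw fresh i.i.d. Gaussian matrices, and count how often weighted l1 fails. The sketch below is ours (cvxpy is used for brevity, and every parameter value is an arbitrary illustration); it is of course no substitute for the exact Grassmann-angle expression (4).

```python
import numpy as np
import cvxpy as cp

def weighted_l1(A, y, w):
    """Solve min_x sum_i w_i|x_i| subject to Ax = y."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x)))), [A @ x == y]).solve()
    return x.value

def estimate_P_K_minus(n, m, K, w, trials=200, seed=0):
    """Fraction of random Gaussian A for which a fixed-support, non-positive
    signal supported on K is NOT recovered by weighted l1 minimization."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(trials):
        x = np.zeros(n)
        x[list(K)] = -np.abs(rng.standard_normal(len(K)))  # non-positive entries on K
        A = rng.standard_normal((m, n))
        x_hat = weighted_l1(A, A @ x, w)
        failures += np.linalg.norm(x_hat - x) > 1e-4
    return failures / trials

n, m = 80, 40
K = range(20)            # fixed support set
w = np.ones(n)           # unweighted case; change w to explore weighted recovery
print("estimated P_{K,-}:", estimate_P_K_minus(n, m, K, w))
```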

The internal and external angles are defined as follows [10], [11]. An internal angle β(F1, F2) is the fraction of the hypersphere S covered by the cone obtained by observing the face F2 from the face F1; β(F1, F2) is defined to be zero when F1 ⊄ F2 and to be one if F1 = F2. An external angle γ(F3, F4) is the fraction of the hypersphere S covered by the cone of outward normals to the hyperplanes supporting the face F4 at the face F3; γ(F3, F4) is defined to be zero when F3 ⊄ F4 and to be one if F3 = F4. Note that F here is a typical face of SP corresponding to a typical set K. β(F, G) depends not only on the dimension of the face G, but also on the number of its vertices supported on K1 and K2. In other words, if G is supported on a set L, then β(F, G) is only a function of |L ∩ K1| and |L ∩ K2|. So we write β(F, G) = β(t1, t2) and similarly γ(G, SP) = γ(t1, t2), where t1 = |L ∩ K1| − n1P1 and t2 = |L ∩ K2| − n2P2. Combining the notations and counting the number of faces G, (4) leads to

P(E^c) ≤ Σ 2^{t1+t2} ((1−P1)n1 choose t1) ((1−P2)n2 choose t2) β(t1, t2) γ(t1, t2) + O(e^{−cn})    (5)

for some c > 0, where the sum is over all t1 ≤ (1−P1)n1 and t2 ≤ (1−P2)n2 with t1 + t2 > m − k + 1. As n grows, each term in (5) behaves like exp{n(ψ_com(t1, t2) − ψ_int(t1, t2) − ψ_ext(t1, t2))}, where ψ_com, ψ_int and ψ_ext are the combinatorial exponent, the internal angle exponent and the external angle exponent of each term, respectively. It can be shown that the necessary and sufficient condition for (5) to tend to zero is that ψ(t1, t2) = ψ_com(t1, t2) − ψ_int(t1, t2) − ψ_ext(t1, t2) be uniformly negative for all t1 and t2 appearing in (5). In the following subsections we evaluate the internal and external angles for a typical face F and a face G containing F, and give closed-form upper bounds for them. We combine the terms and compute the exponents using the Laplace method in Section IV, and derive thresholds for negativity of the cumulative exponent.

A. Derivation of the Internal Angles

Suppose that F is a typical (k−1)-dimensional face of the skewed cross-polytope

SP = {y ∈ R^n : ||y||_{w,1} = Σ_{i=1}^{n} w_i |y_i| ≤ 1}

supported on the subset K with |K| = k = n1P1 + n2P2. Let G be an (l−1)-dimensional face of SP supported on the set L with F ⊆ G, and let t1 and t2 be defined through |L ∩ K1| and |L ∩ K2| as above. First we can prove the following lemma.

Lemma 1: Let Con_{F,G} be the positive cone of all vectors x ∈ R^n that take the form

x = Σ_{i=1}^{k} b_i e_i + Σ_{i=k+1}^{l} b_i e_i,    (6)

where the b_i, 1 ≤ i ≤ l, are nonnegative real numbers with b_1/w_1 = b_2/w_2 = ⋯ = b_k/w_k and Σ_{i=k+1}^{l} w_i b_i = Σ_{i=1}^{k} w_i b_i. Then

∫_{Con_{F,G}} e^{−||x||²} dx = β(F, G) V_{l−k−1}(S^{l−k−1}) ∫_{0}^{∞} e^{−r²} r^{l−k−1} dr = β(F, G) π^{(l−k)/2},    (7)

where V_{l−k−1}(S^{l−k−1}) is the spherical volume of the (l−k−1)-dimensional sphere S^{l−k−1}.

Proof: Omitted for brevity.

From (7) we can find an expression for the internal angle. Define U ⊂ R^{l−k+1} as the set of all nonnegative vectors (x_1, x_2, ..., x_{l−k+1}) satisfying x_p ≥ 0 for 1 ≤ p ≤ l−k+1 and (Σ_{p=1}^{k} w_p²) x_1 = Σ_{p=k+1}^{l} w_p² x_{p−k+1}, and define f(x_1, ..., x_{l−k+1}) : U → Con_{F,G} to be the linear and bijective map

f(x_1, ..., x_{l−k+1}) = Σ_{p=1}^{k} x_1 w_p e_p + Σ_{p=k+1}^{l} x_{p−k+1} w_p e_p.

Then

∫_{Con_{F,G}} e^{−||x||²} dx = ∫_{U} e^{−||f(x)||²} df(x) = J(A) ∫_{Γ} e^{−(Σ_{p=1}^{k} w_p²) x_1² − Σ_{p=k+1}^{l} w_p² x_{p−k+1}²} dx_1 ⋯ dx_{l−k+1},    (8)

where Γ is the region described by

(Σ_{p=1}^{k} w_p²) x_1 = Σ_{p=k+1}^{l} w_p² x_{p−k+1},  x_p ≥ 0, 1 ≤ p ≤ l−k+1,    (9)

and J(A) is due to the change of integration variables; it is essentially the determinant of the Jacobian of the variable transform, given by the l × (l−k) matrix A with entries

A_{i,j} = w_i w_{k+j}²/Ω for 1 ≤ i ≤ k, 1 ≤ j ≤ l−k;  A_{i,j} = w_i for k+1 ≤ i ≤ l, j = i−k;  A_{i,j} = 0 otherwise,    (10)

Fig. 2: Recoverable threshold for P1 as a function of W (P2 fixed, m = 0.75n).

Fig. 3: Successful recovery percentage for different weights (P2 fixed, m = 0.75n).

where Ω = Σ_{p=1}^{k} w_p². Now J(A) = √(det(A^T A)). By finding the eigenvalues of A^T A we obtain

J(A) = W1^{t1} W2^{t2} √((Ω + t1 W1² + t2 W2²)/Ω).    (11)

Now we define a random variable Z = (Σ_{p=1}^{k} w_p²) X_1 − Σ_{p=k+1}^{l} w_p² X_{p−k+1}, where X_1, X_2, ..., X_{l−k+1} are independent random variables, with X_p ~ HN(0, 1/(2 w_{p+k−1}²)), 2 ≤ p ≤ l−k+1, half-normal distributed random variables, and X_1 ~ N(0, 1/(2 Σ_{p=1}^{k} w_p²)) a normal distributed random variable. By inspection, (8) is equal to C p_Z(0), where p_Z(·) is the probability density function of the random variable Z, p_Z(0) is that density evaluated at the point Z = 0, and

C = √(Σ_{p=1}^{k} w_p²) π^{(l−k+1)/2} J(A) / (2^{l−k} Π_{q=k+1}^{l} w_q) = π^{(l−k+1)/2} √((n1P1 + t1) W1² + (n2P2 + t2) W2²) / 2^{l−k}.    (12)

Combining (7) and (8):

β(t1, t2) = π^{(k−l)/2} C p_Z(0).    (13)

B. Derivation of the External Angle

Without loss of generality, assume K = {n−k+1, ..., n}. Consider the (l−1)-dimensional face

G = conv{e_{n−l+1}/w_{n−l+1}, ..., e_{n−k}/w_{n−k}, e_{n−k+1}/w_{n−k+1}, ..., e_n/w_n}

of the skewed cross-polytope SP. The 2^{n−l} outward normal vectors of the supporting hyperplanes of the facets containing G are given by

{Σ_{i=1}^{n−l} j_i w_i e_i + Σ_{i=n−l+1}^{n} w_i e_i : j_i ∈ {−1, 1}},

and the outward normal cone c(G, SP) at the face G is the positive hull of these normal vectors. Thus

∫_{c(G,SP)} e^{−||x||²} dx = γ(G, SP) V_{n−l}(S^{n−l}) ∫_{0}^{∞} e^{−r²} r^{n−l} dr = γ(G, SP) π^{(n−l+1)/2},    (14)

where V_{n−l}(S^{n−l}) is the spherical volume of the (n−l)-dimensional sphere S^{n−l}. Now define U to be the set {x ∈ R^{n−l+1} : x_{n−l+1} ≥ 0, |x_i|/w_i ≤ x_{n−l+1}, 1 ≤ i ≤ n−l} and define f(x_1, ..., x_{n−l+1}) : U → c(G, SP) to be the linear and bijective map

f(x_1, ..., x_{n−l+1}) = Σ_{i=1}^{n−l} x_i e_i + x_{n−l+1} Σ_{i=n−l+1}^{n} w_i e_i.

Then

∫_{c(G,SP)} e^{−||x||²} dx = ∫_{U} e^{−||f(x)||²} df(x) = √ξ ∫_{0}^{∞} (∏_{i=1}^{n−l} ∫_{−w_i x_{n−l+1}}^{w_i x_{n−l+1}} e^{−x_i²} dx_i) e^{−(Σ_{i=n−l+1}^{n} w_i²) x_{n−l+1}²} dx_{n−l+1} = 2^{n−l} √ξ ∫_{0}^{∞} e^{−ξ x²} (∫_{0}^{W1 x} e^{−y²} dy)^{r1} (∫_{0}^{W2 x} e^{−y²} dy)^{r2} dx,    (15)

where ξ = ξ(t1, t2) = Σ_{i=n−l+1}^{n} w_i², r1 = (1−P1)n1 − t1, r2 = (1−P2)n2 − t2, and J(A) = √(det(A^T A)) = √ξ results from the change of variables in the integral.

IV. EXPONENT CALCULATION

Using the Laplace method we compute the angle exponents. They are given in the following theorems, whose proofs are omitted for brevity. We assume n1 = γ1 n, n2 = γ2 n and, WLOG, W1 = 1, W2 = W.

Fig. 1: (a), (b): Successful recovery percentage for weighted l1 minimization with different weights and with suboptimal weights in a non-uniformly sparse setting (P2 fixed, m = 0.5n).

Theorem 1: Let t1' = t1/n and t2' = t2/n, and let g(x) = (2/√π) e^{−x²} and G(x) = (2/√π) ∫_{0}^{x} e^{−y²} dy. Also define C = (t1' + γ1 P1) + W²(t2' + γ2 P2), D1 = γ1(1 − P1) − t1' and D2 = γ2(1 − P2) − t2'. Let x0 be the unique solution in x of

2C − (D1 g(x))/(x G(x)) − (W D2 g(Wx))/(x G(Wx)) = 0.

Then

ψ_ext(t1, t2) = C x0² − D1 log G(x0) − D2 log G(W x0).    (16)

Theorem 2: Let b = (t1 + W² t2)/(t1 + t2), and let φ(·) and Φ(·) be the standard Gaussian pdf and cdf functions, respectively. Also let

Q(s) = (t1 φ(s))/((t1 + t2) Φ(s)) + (W t2 φ(Ws))/((t1 + t2) Φ(Ws)).

Define the function M̂(s) = −s/Q(s) and solve for s in M̂(s) = (t1 + t2)/((t1 + t2) b + Ω). Let the unique solution be s0 and set y0 = s0 (b − 1/M̂(s0)). Compute the rate function

Λ*(y) = s y − (t1/(t1 + t2)) Λ(s) − (t2/(t1 + t2)) Λ(Ws)

at the point s = s0, y = y0, where Λ(s) = s²/2 + log Φ(s). The internal angle exponent is then given by

ψ_int(t1, t2) = (Λ*(y0) + ((t1 + t2)/(2Ω)) y0² + log 2)(t1 + t2).    (17)

As an illustration of these results, for a fixed P2 and δ = m/n = 0.75, using Theorems 1 and 2 and combining the exponents with the combinatorial exponent, we have calculated the threshold on P1, for different values of W in the range [1, 3], below which the signal can be recovered. The curve is depicted in Figure 2. As expected, the curve suggests that in this setting weighted l1 minimization boosts the weak threshold in comparison with l1 minimization. This is verified by the examples in the next section.

V. SIMULATION

We demonstrate by some examples that appropriate weights can boost the recovery percentage. We fix P2 and n = 2m, and try l1 and weighted l1 minimization for various values of P1. We choose n1 = n2 = n/2. Figure 1a shows one such comparison for a fixed P2 and different values of W. Note that the optimal value of W varies as P1 changes. Figure 1b illustrates how the optimal weighted l1 minimization surpasses ordinary l1 minimization; the optimal curve is basically achieved by selecting the best weight of Figure 1a for each single value of P1. Figure 3 shows the result of simulations in another setting where m = 0.75n (similar to the setting of the previous section). It is clear from the figure that the recovery success threshold for P1 is shifted higher when weighted l1 minimization is used rather than standard l1 minimization. Note that this result matches the theoretical result of Figure 2 very well. (A minimal simulation sketch in this spirit is included after the references.)

REFERENCES

[1] David Donoho, "High-Dimensional Centrally Symmetric Polytopes with Neighborliness Proportional to Dimension," Discrete and Computational Geometry, 2006, Springer.
[2] Emmanuel Candes and Terence Tao, "Decoding by linear programming," IEEE Trans. on Information Theory, 51(12), pp. 4203-4215, December 2005.
[3] David Donoho and Jared Tanner, "Neighborliness of randomly-projected simplices in high dimensions," Proc. National Academy of Sciences, 102(27), pp. 9452-9457, 2005.
[4] W. Xu and B. Hassibi, "Compressed Sensing over the Grassmann Manifold: A Unified Analytical Framework," Forty-Sixth Annual Allerton Conference, September 2008.
[5] Mark Davenport, Michael Wakin, and Richard Baraniuk, "Detection and estimation with compressive measurements," Rice ECE Department Technical Report TREE 0610, November 2006.
[6] Shihao Ji, Ya Xue, and Lawrence Carin, "Bayesian compressive sensing," IEEE Trans. on Signal Processing, 56(6), June 2008.
[7] M. Stojnic, F. Parvaresh and B. Hassibi, "On the reconstruction of block-sparse signals with an optimal number of measurements," preprint, 2008.
[8] E. J. Candès, M. Wakin and S. Boyd, "Enhancing sparsity by reweighted l1 minimization," J. Fourier Anal.
Appl., vol. 14, pp. 877-905, 2008.
[9] Branko Grünbaum, "Grassmann angles of convex polytopes," Acta Math., 121, pp. 293-302, 1968.
[10] Branko Grünbaum, Convex Polytopes, Graduate Texts in Mathematics, Springer-Verlag, New York, second edition, 2003. Prepared and with a preface by Volker Kaibel, Victor Klee and Günter M. Ziegler.
[11] L. A. Santaló, "Geometría integral en espacios de curvatura constante," Rep. Argentina Publ. Com. Nac. Energía Atómica, Ser. Mat., No. 1, 1952.
[12] Peter McMullen, "Non-linear angle-sum relations for polyhedral cones and polytopes," Math. Proc. Cambridge Philos. Soc., 78, pp. 247-261, 1975.
[13] M. Stojnic, W. Xu, and B. Hassibi, "Compressed sensing - probabilistic analysis of a null-space characterization," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 31 - April 4, 2008.
[14]
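For readers who want to reproduce the flavor of Section V, the sketch below (ours, not the authors' code; the parameter values are illustrative and cvxpy is assumed to be available) estimates recovery percentages for plain and weighted l1 minimization over random instances of the two-set model:

```python
import numpy as np
import cvxpy as cp

def weighted_l1(A, y, w):
    """Solve min_x sum_i w_i|x_i| subject to Ax = y."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x)))), [A @ x == y]).solve()
    return x.value

def recovery_rate(P1, P2, W, n1=100, n2=100, m=100, trials=30, seed=0):
    """Empirical probability of exact recovery when the second block gets weight W."""
    rng = np.random.default_rng(seed)
    w = np.concatenate([np.ones(n1), W * np.ones(n2)])
    hits = 0
    for _ in range(trials):
        mask = np.concatenate([rng.random(n1) < P1, rng.random(n2) < P2])
        x = np.zeros(n1 + n2)
        x[mask] = rng.standard_normal(mask.sum())
        A = rng.standard_normal((m, n1 + n2))
        x_hat = weighted_l1(A, A @ x, w)
        hits += np.linalg.norm(x_hat - x) <= 1e-4 * max(1.0, np.linalg.norm(x))
    return hits / trials

# Sweep the denser block's occupancy P1 and compare plain l1 (W = 1) with a weighted variant.
for P1 in (0.2, 0.3, 0.4):
    print(P1, recovery_rate(P1, 0.05, W=1.0), recovery_rate(P1, 0.05, W=3.0))
```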
