Noisy Signal Recovery via Iterative Reweighted L1-Minimization


Deanna Needell, UC Davis / Stanford University
Asilomar SSC, November 2009

Problem Background: Setup
1. Suppose $x$ is an unknown signal in $\mathbb{R}^d$.
2. Design a measurement matrix $\Phi : \mathbb{R}^d \to \mathbb{R}^m$.
3. Collect noisy measurements $u = \Phi x + e$.
4. Problem: reconstruct the signal $x$ from the measurements $u$.
5. Wait, isn't this impossible? Assume $x$ is $s$-sparse: $\|x\|_0 \stackrel{\mathrm{def}}{=} |\mathrm{supp}(x)| \le s \ll d$.
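To make the setup concrete, here is a minimal sketch (my own illustration, not code from the talk) that builds an $s$-sparse signal, a Gaussian measurement matrix, and noisy measurements, using the dimensions $d = 256$, $m = 128$, $s = 30$ that appear in the experiments later:

```python
# Minimal sketch of the measurement setup (illustrative, not from the talk).
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 256, 128, 30                  # ambient dimension, measurements, sparsity

# s-sparse signal: s nonzero coordinates placed at random
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian measurement matrix, scaled so columns have unit norm in expectation
Phi = rng.standard_normal((m, d)) / np.sqrt(m)

# Noisy measurements u = Phi x + e with ||e||_2 = eps
eps = 0.5
e = rng.standard_normal(m)
e *= eps / np.linalg.norm(e)
u = Phi @ x + e
```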

Problem Background: Applications
- Compressive imaging
- Computational biology
- Medical imaging
- Astronomy
- Geophysical data analysis
- Compressive radar
- Many more

Problem Background: How can we reconstruct?
The obvious way: suppose the matrix $\Phi$ is one-to-one on the set of sparse vectors and $e = 0$. Set
$$\hat{x} = \operatorname{argmin} \|z\|_0 \quad \text{such that} \quad \Phi z = u.$$
Then $\hat{x} = x$!
Bad news: this would require a search through $\binom{d}{s}$ subspaces, which is not numerically feasible.
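The blow-up is easy to see in code. A naive $\ell_0$ decoder (a toy sketch for the noiseless case $e = 0$; not from the talk) must test a least-squares fit on every one of the $\binom{d}{s}$ candidate supports:

```python
# Naive l0 decoding by exhaustive support search: only feasible at toy sizes.
from itertools import combinations
from math import comb
import numpy as np

def l0_decode(Phi, u, s, tol=1e-8):
    m, d = Phi.shape
    print(f"searching {comb(d, s)} candidate supports...")
    for T in combinations(range(d), s):
        cols = list(T)
        zT, *_ = np.linalg.lstsq(Phi[:, cols], u, rcond=None)
        if np.linalg.norm(Phi[:, cols] @ zT - u) <= tol:  # exact fit found
            z = np.zeros(d)
            z[cols] = zT
            return z
    return None

# Already at d = 256, s = 30 there are more than 10**38 supports to check.
```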

Problem Background: How else can we reconstruct?
Geometric idea: minimizing over the $\ell_0$ ball is too hard, so let's try a different ball. Our favorites:
- Least squares
- L1-minimization (using linear programming)
Which one?

Problem Background: Which one?
[Figure: Minimizing the $\ell_2$ versus the $\ell_1$ balls. The least-squares minimizer $x^*$ generally misses the sparse signal $x$; the $\ell_1$ minimizer satisfies $x = x^*$.]
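The picture can be checked numerically: the minimum-$\ell_2$-norm solution of $\Phi z = u$ is generically fully dense, so least squares cannot land on the sparse vertex that the $\ell_1$ ball selects. A one-line check (my own, via the pseudoinverse):

```python
# Least squares returns the min-l2-norm point on the affine space Phi z = u,
# which is generically dense in all d coordinates.
import numpy as np

def least_squares_point(Phi, u):
    return np.linalg.pinv(Phi) @ u

# With Phi, u from the setup sketch above:
#   z_ls = least_squares_point(Phi, u)
#   (np.abs(z_ls) > 1e-6).sum()   # typically d, not s
```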

Problem Background: What do we assume about $\Phi$?
Restricted Isometry Property (RIP): the $s$th restricted isometry constant of $\Phi$ is the smallest $\delta_s$ such that
$$(1-\delta_s)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_s)\|x\|_2^2 \quad \text{whenever} \quad \|x\|_0 \le s.$$
For Gaussian or Bernoulli measurement matrices, with high probability $\delta_s \le c < 1$ when $m \gtrsim s \log d$. Random Fourier matrices and others with fast multiplies have a similar property.
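Computing $\delta_s$ exactly is itself a combinatorial problem, but sampling random supports and taking extreme singular values of the submatrices gives a Monte Carlo lower estimate (my own sketch, using the squared-norm form of the RIP above):

```python
# Monte Carlo lower estimate of delta_s: for each sampled support T,
# the extreme values of ||Phi x||^2 / ||x||^2 over supp(x) = T are the
# extreme squared singular values of the submatrix Phi_T.
import numpy as np

def rip_lower_estimate(Phi, s, trials=200, seed=1):
    rng = np.random.default_rng(seed)
    d = Phi.shape[1]
    delta = 0.0
    for _ in range(trials):
        T = rng.choice(d, size=s, replace=False)
        sig = np.linalg.svd(Phi[:, T], compute_uv=False)
        delta = max(delta, sig[0]**2 - 1, 1 - sig[-1]**2)
    return delta   # the true delta_s is at least this large
```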

Results: Proven Results
L1-Minimization [Candès-Tao]: Assume the measurement matrix $\Phi$ satisfies the RIP with $\delta_{2s} < \sqrt{2} - 1$. Then every $s$-sparse vector $x$ can be exactly recovered from its measurements $u = \Phi x$ as the unique solution to the linear optimization problem
$$\hat{x} = \operatorname{argmin} \|z\|_1 \quad \text{such that} \quad \Phi z = u.$$
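Unlike the $\ell_0$ search, this program is convex. A minimal sketch of the Basis Pursuit decoder, assuming the cvxpy package (the talk does not prescribe any particular solver):

```python
# Basis Pursuit: min ||z||_1 subject to Phi z = u (noiseless measurements).
import cvxpy as cp

def basis_pursuit(Phi, u):
    z = cp.Variable(Phi.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)), [Phi @ z == u])
    prob.solve()
    return z.value
```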

Results: Numerical Results
[Figure: The percentage of sparse flat signals exactly recovered by Basis Pursuit as a function of the number of measurements $m$, in dimension $d = 256$, for various levels of sparsity $s$ ($s = 4, 12, 20, 28, \dots$).]

Results: What about noise?
Noisy formulation: for a non-sparse vector $x$ with noisy measurements $u = \Phi x + e$,
$$\hat{x} = \operatorname{argmin} \|z\|_1 \quad \text{such that} \quad \|\Phi z - u\|_2 \le \varepsilon. \qquad (1)$$
L1-Minimization [Candès-Romberg-Tao]: Let $\Phi$ be a measurement matrix satisfying the RIP with $\delta_{2s} < \sqrt{2} - 1$. Then for an arbitrary signal $x$ and corrupted measurements $u = \Phi x + e$ with $\|e\|_2 \le \varepsilon$, the solution $\hat{x}$ to (1) satisfies
$$\|\hat{x} - x\|_2 \le C_s \varepsilon + C_s' \frac{\|x - x_s\|_1}{\sqrt{s}},$$
where $x_s$ denotes the best $s$-sparse approximation of $x$.
Note: as $\delta_{2s} \to \sqrt{2} - 1$, $C_s, C_s' \to \infty$!
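Program (1) relaxes the equality constraint to a noise tube of radius $\varepsilon$, which makes it a second-order cone program. A sketch, again assuming cvxpy:

```python
# Program (1): min ||z||_1 subject to ||Phi z - u||_2 <= eps.
import cvxpy as cp

def bpdn(Phi, u, eps):
    z = cp.Variable(Phi.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                      [cp.norm(Phi @ z - u, 2) <= eps])
    prob.solve()
    return z.value
```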

Results: Numerical Results
[Figure: The recovery error of L1-minimization under perturbed measurements ($\|e\|_2 = 0.5$) as a function of the number of measurements $m$, in dimension $d = 256$, for various levels of sparsity $s$ ($s = 4, 12, 20, 28, \dots$).]

Results: What if we are close?
- Suppose we recover $\hat{x} \approx x$.
- Most likely, this means $\hat{x}_i \approx x_i$.
- In particular, $\hat{x}_i$ is small/large when $x_i$ is small/large.
Weighted L1:
$$\hat{x}^{(2)} = \operatorname{argmin}_z \sum_{i=1}^d \frac{|z_i|}{|\hat{x}_i| + a} \quad \text{such that} \quad \|\Phi z - u\|_2 \le \varepsilon.$$

Results: Weighted Geometry
[Figure: The geometry of the weighted $\ell_1$ ball.]
Noise-free case: in cases where $\hat{x} \approx x$, we should have that $\hat{x}^{(2)}$ is closer to $x$, or even equal to it.
Noisy case: this suggests $\hat{x}^{(2)}$ should be closer to $x$ than $\hat{x}$ was. Can we repeat this again?

Reweighted L1-Minimization: The Algorithm
Reweighted $\ell_1$-minimization (RWL1)
Input: measurement vector $u \in \mathbb{R}^m$, stability parameter $a$
Output: reconstructed vector $\hat{x}$
Initialize: Set the weights $w_i = 1$ for $i = 1, \dots, d$.
Approximate: Solve the reweighted $\ell_1$-minimization problem
$$\hat{x} = \operatorname{argmin}_{z \in \mathbb{R}^d} \sum_{i=1}^d w_i |z_i| \quad \text{subject to} \quad \|\Phi z - u\|_2 \le \varepsilon.$$
Update: Reset the weights $w_i = \frac{1}{|\hat{x}_i| + a}$, and repeat from the Approximate step.
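A minimal sketch of the loop, assuming cvxpy for the weighted subproblem (the talk does not specify an implementation; the default of nine iterations mirrors the experiments shown next):

```python
# Reweighted l1-minimization (RWL1): alternate a weighted BPDN solve
# with the weight update w_i = 1 / (|x_hat_i| + a).
import cvxpy as cp
import numpy as np

def rwl1(Phi, u, eps, a=0.1, iters=9):
    d = Phi.shape[1]
    w = np.ones(d)                       # Initialize: w_i = 1
    for _ in range(iters):
        z = cp.Variable(d)               # Approximate: weighted l1 subproblem
        prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, z), 1)),
                          [cp.norm(Phi @ z - u, 2) <= eps])
        prob.solve()
        x_hat = z.value
        w = 1.0 / (np.abs(x_hat) + a)    # Update: reset the weights
    return x_hat
```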

Reweighted L1-Minimization: Numerical Results
[Figure: $\ell_\infty$-norm error for reweighted L1 in the noise-free case. Panels: sup-norm errors after one iteration of (reweighted) BP, and after nine iterations of reweighted BP.]

Reweighted L1-Minimization: Numerical Results
[Figure: Probability of reconstruction [Candès-Wakin-Boyd].]

Reweighted L1-Minimization: Numerical Results with Noise
[Figure: Improvements in the $\ell_2$ reconstruction error using reweighted $\ell_1$-minimization versus standard $\ell_1$-minimization for sparse Gaussian signals ($d = 256$, $m = 128$, $s = 30$, with decreasing $\|e\|_2$). Panels: reconstruction error per trial after one versus nine iterations, and a histogram of the improvement factor after nine iterations.]

Reweighted L1-Minimization: Numerical Results with Noise
[Figure: Improvements in the $\ell_2$ reconstruction error using reweighted $\ell_1$-minimization versus standard $\ell_1$-minimization for sparse Bernoulli signals ($d = 256$, $m = 128$, $s = 30$, with decreasing $\|e\|_2$). Panels: reconstruction error per trial after one versus nine iterations, and a histogram of the improvement factor after nine iterations.]

Reweighted L1-Minimization: Observations
- The noiseless case suggests that an $\ell_\infty$-norm bound may be required for RWL1 to succeed.
- In the noisy case, it is clear that we cannot recover signal coordinates that fall below some threshold.
- If each iteration of RWL1 improves the error, perhaps we should take $a \to 0$. (Recall $w_i = \frac{1}{|\hat{x}_i| + a}$.)

Main Results
RWL1, sparse case [N]: Assume $\Phi$ satisfies the RIP with $\delta_{2s} \le \delta$, where $\delta < \sqrt{2} - 1$. Let $x$ be an $s$-sparse vector with noisy measurements $u = \Phi x + e$, where $\|e\|_2 \le \varepsilon$. Assume the smallest nonzero coordinate $\mu$ of $x$ satisfies
$$\mu \ge \frac{4\alpha\varepsilon}{1-\rho}.$$
Then the limiting approximation from reweighted $\ell_1$-minimization satisfies
$$\|x - \hat{x}\|_2 \le C \varepsilon, \quad \text{where} \quad C = \frac{2\alpha}{1+\rho}, \quad \rho = \frac{\sqrt{2}\,\delta}{1-\delta}, \quad \alpha = \frac{2\sqrt{1+\delta}}{1-\delta}.$$
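The constants are concrete enough to tabulate. A small numeric check (my own arithmetic, plain Python) of $\rho$, $\alpha$, the limiting constant $C = 2\alpha/(1+\rho)$, and the required signal floor $\mu \ge 4\alpha\varepsilon/(1-\rho)$:

```python
# Tabulate the RWL1 sparse-case constants as delta approaches sqrt(2) - 1.
import math

def rwl1_constants(delta):
    assert delta < math.sqrt(2) - 1, "theorem requires delta < sqrt(2) - 1"
    rho = math.sqrt(2) * delta / (1 - delta)
    alpha = 2 * math.sqrt(1 + delta) / (1 - delta)
    C = 2 * alpha / (1 + rho)            # limiting bound ||x - x_hat||_2 <= C eps
    mu_over_eps = 4 * alpha / (1 - rho)  # need mu >= (4 alpha / (1 - rho)) eps
    return rho, alpha, C, mu_over_eps

for delta in (0.1, 0.2, 0.3, 0.4):
    rho, alpha, C, mu = rwl1_constants(delta)
    print(f"delta={delta:.1f}: rho={rho:.2f} alpha={alpha:.2f} C={C:.2f} mu/eps>={mu:.1f}")
```

Note how $C$ stays bounded as $\delta \to \sqrt{2} - 1$ (since $\rho \to 1$ only appears in the denominator $1+\rho$ there), while the floor requirement $4\alpha/(1-\rho)$ blows up.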

Main Results: Remarks
- Without noise, this result coincides with previous results on L1.
- The key improvement: as $\delta \to \sqrt{2} - 1$, $C$ remains bounded.
- The error bound is the limiting bound, but a recursive relation in the proof gives exact improvements per iteration. We show that in practice it is attained quite quickly.
- For signals whose smallest nonzero coefficient $\mu$ does not satisfy the condition of the theorem, we may apply the theorem to those coefficients that do satisfy the requirement and treat the others as noise.

Main Results: Extension
RWL1, non-sparse extension [N]: Assume $\Phi$ satisfies the RIP with $\delta_{2s} \le \delta < \sqrt{2} - 1$. Let $x$ be an arbitrary vector with noisy measurements $u = \Phi x + e$, where $\|e\|_2 \le \varepsilon$. Assume the smallest nonzero coordinate $\mu$ of $x_s$ satisfies $\mu \ge \frac{4\alpha\varepsilon_0}{1-\rho}$, where
$$\varepsilon_0 = 1.2\left(\|x - x_s\|_2 + \frac{\|x - x_s\|_1}{\sqrt{s}}\right) + \varepsilon.$$
Then the limiting approximation from reweighted $\ell_1$-minimization satisfies
$$\|x - \hat{x}\|_2 \le \frac{4.1\,\alpha}{1+\rho}\left(\frac{\|x - x_{s/2}\|_1}{\sqrt{s}} + \varepsilon\right)$$
and
$$\|x - \hat{x}\|_2 \le \frac{2.4\,\alpha}{1+\rho}\left(\|x - x_s\|_2 + \frac{\|x - x_s\|_1}{\sqrt{s}} + \varepsilon\right),$$
where $\rho$ and $\alpha$ are as before.
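For a given signal, the quantities in the extension are directly computable from the tail of its best $s$-term approximation. A sketch (my own, based on the reconstructed formulas above):

```python
# Evaluate eps_0 and the two non-sparse RWL1 error bounds for a signal x.
import numpy as np

def nonsparse_bounds(x, s, delta, eps):
    rho = np.sqrt(2) * delta / (1 - delta)
    alpha = 2 * np.sqrt(1 + delta) / (1 - delta)
    def tail(k):                        # magnitudes of x - x_k (all but k largest)
        return np.sort(np.abs(x))[::-1][k:]
    t_s, t_half = tail(s), tail(s // 2)
    eps0 = 1.2 * (np.linalg.norm(t_s) + t_s.sum() / np.sqrt(s)) + eps
    bound1 = 4.1 * alpha / (1 + rho) * (t_half.sum() / np.sqrt(s) + eps)
    bound2 = 2.4 * alpha / (1 + rho) * (np.linalg.norm(t_s)
                                        + t_s.sum() / np.sqrt(s) + eps)
    return eps0, min(bound1, bound2)
```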

Main Results: Theoretical Results
[Figure: Number of iterations required for the theoretical error bounds to reach the limiting theoretical error, as a function of $\delta$, when (a) $\mu = 10$, $\varepsilon = 0.01$; (b) $\mu = 10$, $\varepsilon = 0.1$; (c) $\mu = 10$, $\varepsilon = 0.5$; (d) $\mu = 10$, $\varepsilon = 1.0$.]

Main Results: Recent Work
- Wipf and Nagarajan elaborate on convergence and show connections to reweighted $\ell_2$-minimization.
- Wipf and Nagarajan also show that a non-separable variant has desirable properties.
- Xu, Khajehnejad, Avestimehr, and Hassibi provide a theoretical foundation for the analysis of RWL1 and show that, for a nontrivial class of signals, a variant of RWL1 can indeed improve upon L1 in the noiseless case.

Thank You
References:
- Candès, Wakin, and Boyd. Enhancing sparsity by reweighted $\ell_1$ minimization. J. Fourier Anal. Appl.
- Needell. Noisy signal recovery via iterative reweighted L1-minimization.
- Wipf and Nagarajan. Solving sparse linear inverse problems: Analysis of reweighted $\ell_1$ and $\ell_2$ methods. IEEE J. Selected Topics in Signal Processing, Special Issue on Compressive Sensing.
- Xu, Khajehnejad, Avestimehr, and Hassibi. Breaking through the thresholds: An analysis for iterative reweighted $\ell_1$ minimization via the Grassmann angle framework. Proc. Allerton Conference on Communication, Control, and Computing, Sept. 2009.
