
A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit

Qun Mo

arXiv:1501.01708v1 [cs.IT] 8 Jan 2015

Abstract—We shall show that if the restricted isometry constant (RIC) $\delta_{s+1}(A)$ of the measurement matrix $A$ satisfies $\delta_{s+1}(A) < \frac{1}{\sqrt{s+1}}$, then the greedy algorithm Orthogonal Matching Pursuit (OMP) will succeed. That is, OMP can recover every $s$-sparse signal $x$ in $s$ iterations from $b = Ax$. Moreover, we shall show that this upper bound on the RIC is sharp in the following sense: for any given $s \in \mathbb{N}$, we shall construct a matrix $A$ with RIC $\delta_{s+1}(A) = \frac{1}{\sqrt{s+1}}$ such that OMP may fail to recover some $s$-sparse signal $x$ in $s$ iterations.

Index Terms—Compressed sensing, restricted isometry property, orthogonal matching pursuit, sparse signal reconstruction.

I. INTRODUCTION

We shall give a sharp RIC bound for OMP in this paper. First let us review some basic concepts. Suppose we have a signal $x \in \mathbb{R}^N$ where $N$ is very large. For instance, if $x$ represents a digital photo of $1024 \times 1024$ pixels, then $N = 1024^2 \approx 10^6$. Denote by $\|x\|_0$ the number of nonzero entries of $x$. We say $x$ is $s$-sparse if $\|x\|_0 \le s$, and $x$ is sparse if $s \ll N$. In the real world, many signals are sparse or can be well approximated by sparse signals, either under the canonical basis or under other special bases and frames.

(Research supported in part by the NSF of China under grants 09789 and 2700, and by the fundamental research funds for the Central Universities. Q. Mo is with the Department of Mathematics, Zhejiang University, Hangzhou, 310027, China; e-mail: moqun@zju.edu.cn.)

For simplicity, we assume that $x$ is sparse under the canonical basis in $\mathbb{R}^N$.

Suppose we have some linear measurements of the signal $x$; that is, we have $b = Ax$, where $A \in \mathbb{R}^{m \times N}$ and $b \in \mathbb{R}^m$ are given. The key idea of compressed sensing [3], [8] is that it is highly possible to retrieve the sparse signal $x$ from those linear measurements, even when the number of measurements is far less than the dimension of the signal, i.e., $m \ll N$.

To retrieve such a sparse signal $x$, a natural method is to solve the following $\ell_0$ problem:
$$\min_x \|x\|_0 \quad \text{subject to} \quad Ax = b, \tag{1}$$
where $A$ and $b$ are known. To ensure that the $s$-sparse solution is unique, we would like to use the restricted isometry property (RIP), which was introduced by Candès and Tao in [4]. A matrix $A$ satisfies the RIP of order $s$ with the restricted isometry constant (RIC) $\delta_s = \delta_s(A)$ if $\delta_s$ is the smallest constant such that
$$(1-\delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_s)\|x\|_2^2 \tag{2}$$
holds for all $s$-sparse signals $x$.

If $\delta_{2s}(A) < 1$, the $\ell_0$ problem has a unique $s$-sparse solution [4]. The $\ell_0$ problem is equivalent to the $\ell_1$ minimization problem when $\delta_{2s}(A) < \sqrt{2}/2$; please see [2], [13], [1] and the references therein.

OMP is an efficient greedy algorithm for solving the $\ell_0$ problem [7], [16], [17]. To introduce this algorithm, we denote by $A_\Omega$ the submatrix of $A$ whose columns are indexed by $\Omega$, and we denote by $x_\Omega$ the similar restriction of a vector $x$. The following iterative algorithm shows the framework of OMP.

Input: $A$, $b$
Set: $\Omega_0 = \emptyset$, $r_0 = b$, $k = 1$
while not converged
    $\Omega_k = \Omega_{k-1} \cup \{\arg\max_i |\langle r_{k-1}, Ae_i\rangle|\}$
    $x_k = \arg\min_z \|A_{\Omega_k} z - b\|_2$
    $r_k = b - A_{\Omega_k} x_k$
    $k = k + 1$
end while
$\hat{x}_{\Omega_k} = x_k$, $\hat{x}_{\Omega_k^C} = 0$
Return $\hat{x}$
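For readers who want to experiment, the framework above translates directly into a few lines of NumPy. The sketch below is our own illustration, not code from the paper: the function name `omp` and the choice to run a fixed budget of $s$ iterations (rather than a generic convergence test) are ours.

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal Matching Pursuit: recover an s-sparse x with b = A x.

    Runs s iterations of the framework above: greedily pick the column
    most correlated with the residual, then re-fit by least squares on
    the selected support.
    """
    m, N = A.shape
    support = []                              # Omega_k
    r = b.copy()                              # r_0 = b
    for _ in range(s):
        i = int(np.argmax(np.abs(A.T @ r)))   # argmax_i |<r_{k-1}, A e_i>|
        if i not in support:
            support.append(i)
        # x_k = argmin_z || A_{Omega_k} z - b ||_2
        z, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ z             # r_k = b - A_{Omega_k} x_k
    x_hat = np.zeros(N)
    x_hat[support] = z                        # entries off the support stay zero
    return x_hat
```

Under the sufficient condition proved in Section III, `omp(A, A @ x, s)` returns any $s$-sparse `x` exactly, up to floating-point error.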

Now let us consider conditions, in terms of the RIC, for OMP to exactly recover any $s$-sparse signal in $s$ iterations. Davenport and Wakin [6] have proven that $\delta_{s+1}(A) < 1/(3\sqrt{s})$ is sufficient. Later, Liu and Temlyakov [12] improved the condition to $\delta_{s+1}(A) < 1/((\sqrt{2}+1)\sqrt{s})$. Furthermore, Mo and Shen [14] pushed the sufficient condition to $\delta_{s+1}(A) < 1/(\sqrt{s}+1)$ and showed by detailed examples that a necessary condition is $\delta_{s+1}(A) < 1/\sqrt{s}$. Later, Yang and de Hoog [10] improved the sufficient condition to $\delta_s(A) + \sqrt{s}\,\delta_{s+1}(A) < 1$. A small gap remains open between the best known sufficient condition and the necessary condition, and we close this gap in this paper.

This paper consists of two parts. We shall prove that the condition $\delta_{s+1}(A) < \frac{1}{\sqrt{s+1}}$ is sufficient for OMP to exactly recover any $s$-sparse $x$ in $s$ iterations. Then, for any positive integer $s$, we shall construct a matrix with $\delta_{s+1} = 1/\sqrt{s+1}$ such that OMP may fail for at least one $s$-sparse signal in $s$ iterations.

II. PRELIMINARIES

Before going further, let us introduce some notation. For $k = 1, 2, \ldots, N$, define $e_k$ to be the $k$-th element of the canonical basis of $\mathbb{R}^N$; that is, $e_k$ is the $1$-sparse vector in $\mathbb{R}^N$ whose only nonzero entry is the $k$-th one, which equals $1$. For a given matrix $A$ and a given signal $x$, define
$$S_k := \langle Ax, Ae_k\rangle, \quad k = 1, \ldots, N,$$
and denote $S_0 := \max_{k \in \{1,\ldots,s\}} |S_k|$. The following two lemmas are useful in our analysis.

Lemma II.1. For any $s > 0$, define $t := \pm(\sqrt{s+1}-1)/\sqrt{s}$. Then we have $t^2 < 1$ and
$$\|A(x+te_k)\|_2^2 - \|A(t^2x - te_k)\|_2^2 = (1-t^4)\bigl(\langle Ax, Ax\rangle \pm \sqrt{s}\,\langle Ax, Ae_k\rangle\bigr), \quad k = 1, \ldots, N. \tag{3}$$

Proof: By the definition of $t$,
$$t^2 = (\sqrt{s+1}-1)^2/s = (\sqrt{s+1}-1)/(\sqrt{s+1}+1) < 1.$$
Moreover, by direct expansion and simplification, we have
$$\|A(x+te_k)\|_2^2 - \|A(t^2x - te_k)\|_2^2 = (1-t^4)\left(\langle Ax, Ax\rangle + \frac{2t}{1-t^2}\,\langle Ax, Ae_k\rangle\right).$$
Now we only need to verify that $2t/(1-t^2) = \pm\sqrt{s}$. This is indeed true since
$$\frac{2t}{1-t^2} = \pm 2\,\frac{\sqrt{s+1}-1}{\sqrt{s}} \cdot \frac{\sqrt{s+1}+1}{2} = \pm\sqrt{s}.$$
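Identity (3) is purely algebraic, so it holds for an arbitrary matrix $A$ and is easy to sanity-check numerically. The following snippet is our own illustration (assuming NumPy); it verifies both sign choices of $t$ on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 8, 12, 5
A = rng.standard_normal((m, N))
x = rng.standard_normal(N)

for sign in (+1.0, -1.0):              # the two choices of t in Lemma II.1
    t = sign * (np.sqrt(s + 1) - 1) / np.sqrt(s)
    assert t**2 < 1
    for k in range(N):
        e_k = np.zeros(N)
        e_k[k] = 1.0
        lhs = np.linalg.norm(A @ (x + t * e_k))**2 \
            - np.linalg.norm(A @ (t**2 * x - t * e_k))**2
        rhs = (1 - t**4) * (np.dot(A @ x, A @ x)
                            + sign * np.sqrt(s) * np.dot(A @ x, A @ e_k))
        assert np.isclose(lhs, rhs)    # identity (3)
```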

Lemma II.2. If the restricted isometry constant $\delta_{s+1}(A)$ satisfies
$$\delta_{s+1}(A) < \frac{1}{\sqrt{s+1}} \tag{4}$$
and the support of the signal $x$ is a non-empty subset of $\{1, 2, \ldots, s\}$, then $S_0 > |S_k|$ for all $k > s$.

Proof: Notice that the lemma is unchanged if we replace $x$ by $cx$ for some non-zero scalar $c$. Therefore, we can assume that $\|x\|_2 = 1$. For the given $s$-sparse $x$, we obtain
$$\langle Ax, Ax\rangle = \Bigl\langle A\sum_{k=1}^{s} x_k e_k,\, Ax\Bigr\rangle = \sum_{k=1}^{s} x_k \langle Ae_k, Ax\rangle \le \sum_{k=1}^{s} |x_k|\,|S_k| \le S_0\|x\|_1 \le S_0\sqrt{s}\,\|x\|_2 = \sqrt{s}\,S_0.$$
Define $t := -(\sqrt{s+1}-1)/\sqrt{s}$. Then, by the above inequality and Lemma II.1, we have, for any $k > s$,
$$(1-t^4)\sqrt{s}\,(S_0 - S_k) \ge (1-t^4)\bigl(\langle Ax, Ax\rangle - \sqrt{s}\,\langle Ax, Ae_k\rangle\bigr) = \|A(x+te_k)\|_2^2 - \|A(t^2x - te_k)\|_2^2. \tag{5}$$
Moreover, for any $k > s$, both $x + te_k$ and $t^2x - te_k$ are $(s+1)$-sparse, so by the definition of $\delta_{s+1}(A)$ we obtain
$$\|A(x+te_k)\|_2^2 - \|A(t^2x - te_k)\|_2^2 \ge (1-\delta_{s+1}(A))(\|x\|_2^2 + t^2) - t^2(1+\delta_{s+1}(A))(t^2\|x\|_2^2 + 1) = (1+t^2)\bigl(1 - t^2 - (1+t^2)\,\delta_{s+1}(A)\bigr) = (1+t^2)^2\Bigl(\frac{1-t^2}{1+t^2} - \delta_{s+1}(A)\Bigr). \tag{6}$$
By the definition of $t$, we have
$$\frac{1-t^2}{1+t^2} = \Bigl(1 - \frac{\sqrt{s+1}-1}{\sqrt{s+1}+1}\Bigr)\Big/\Bigl(1 + \frac{\sqrt{s+1}-1}{\sqrt{s+1}+1}\Bigr) = \frac{1}{\sqrt{s+1}}. \tag{7}$$
Now, combining (5), (6), (7) and condition (4), we obtain
$$(1-t^4)\sqrt{s}\,(S_0 - S_k) \ge \|A(x+te_k)\|_2^2 - \|A(t^2x - te_k)\|_2^2 \ge (1+t^2)^2\Bigl(\frac{1}{\sqrt{s+1}} - \delta_{s+1}(A)\Bigr) > 0.$$
Therefore, we have $S_0 > S_k$ for all $k > s$. Similarly, taking $t := (\sqrt{s+1}-1)/\sqrt{s}$, we can prove that $S_0 > -S_k$ for all $k > s$. Hence, $S_0 > |S_k|$ for all $k > s$.
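For small matrices, hypothesis (4) can be checked exactly: $\delta_s(A)$ is the largest deviation from $1$ of an eigenvalue of $A_S^T A_S$ over all supports $S$ with $|S| = s$. The brute-force sketch below is our own illustration (exponential in the number of columns, so toy sizes only):

```python
import numpy as np
from itertools import combinations

def ric(A, s):
    """Restricted isometry constant delta_s of A, by exhaustive search.

    For each support S with |S| = s, the extreme eigenvalues of
    A_S^T A_S give the tightest constants in (2); delta_s is the
    largest deviation from 1 over all such supports.
    """
    N = A.shape[1]
    delta = 0.0
    for S in combinations(range(N), s):
        G = A[:, list(S)].T @ A[:, list(S)]
        w = np.linalg.eigvalsh(G)        # eigenvalues, ascending
        delta = max(delta, w[-1] - 1.0, 1.0 - w[0])
    return delta
```

With this helper, `ric(A, s + 1) < 1 / np.sqrt(s + 1)` tests condition (4) directly.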

III. MAIN RESULTS

Now we are ready to show the main results of this paper.

Theorem III.1. Suppose that $A$ satisfies
$$\delta_{s+1}(A) < \frac{1}{\sqrt{s+1}}. \tag{8}$$
Then for any $s$-sparse signal $x$, OMP will recover $x$ from $b = Ax$ in $s$ iterations.

Proof: Without loss of generality, we assume that the support of $x$ is a subset of $\{1, 2, \ldots, s\}$. A sufficient condition for OMP to choose an index from $\{1, \ldots, s\}$ in the first iteration is $S_0 > |S_k|$ for all $k > s$. By Lemma II.2, $\delta_{s+1} < \frac{1}{\sqrt{s+1}}$ guarantees the success of the first iteration of OMP. OMP makes an orthogonal projection in each iteration, so the next residual again has the form $Ax'$ for some nonzero vector $x'$ supported in $\{1, \ldots, s\}$ (unless the residual is already zero), and no chosen index is ever selected twice. Hence, it can be proved by induction that OMP always selects an index from the support of $x$, and it recovers $x$ in $s$ iterations.

Theorem III.2. For any given positive integer $s$, there exist an $s$-sparse signal $x$ and a matrix $A$ with the restricted isometry constant
$$\delta_{s+1}(A) = \frac{1}{\sqrt{s+1}}$$
such that OMP may fail in $s$ iterations.

Proof: For any given positive integer $s$, let
$$A = \begin{pmatrix} \sqrt{\frac{s}{s+1}}\, I_s & \frac{1}{\sqrt{s(s+1)}}\,\mathbf{1}_s \\ 0 \;\cdots\; 0 & 1 \end{pmatrix} \in \mathbb{R}^{(s+1)\times(s+1)},$$
where $I_s$ is the $s \times s$ identity matrix and $\mathbf{1}_s$ is the all-ones vector in $\mathbb{R}^s$. By simple calculation, we get
$$A^T A = \begin{pmatrix} \frac{s}{s+1}\, I_s & \frac{1}{s+1}\,\mathbf{1}_s \\ \frac{1}{s+1}\,\mathbf{1}_s^T & 1 + \frac{1}{s+1} \end{pmatrix},$$
where $A^T$ denotes the transpose of $A$. By direct calculation, we can verify that $A^TAx = \frac{s}{s+1}\,x$ for all $x \in \mathbb{R}^{s+1}$ satisfying $x_{s+1} = 0$ and $\sum_{k=1}^{s} x_k = 0$. Therefore, $\frac{s}{s+1}$ is an eigenvalue of $A^TA$ with multiplicity $s-1$. Moreover, by direct calculation, we can see that $A^TAx = \bigl(1 \pm \frac{1}{\sqrt{s+1}}\bigr)x$ for $x = (1, 1, \ldots, 1,\, 1 \pm \sqrt{s+1})^T$. Therefore, $A^TA$ has the two further eigenvalues $1 \pm \frac{1}{\sqrt{s+1}}$. Since $1 - \frac{s}{s+1} = \frac{1}{s+1} \le \frac{1}{\sqrt{s+1}}$, the restricted isometry constant $\delta_{s+1}(A)$ is equal to $\frac{1}{\sqrt{s+1}}$.

Now let
$$x = (1, 1, \ldots, 1, 0)^T \in \mathbb{R}^{s+1}.$$
We have
$$S_k = \langle Ae_k, Ax\rangle = \langle A^TAe_k, x\rangle = \frac{s}{s+1} \quad \text{for all } k \in \{1, \ldots, s+1\}.$$
This implies that OMP may fail in the first iteration: all $s+1$ correlations are equal, so OMP may break the tie by selecting the index $s+1$, which lies outside the support of $x$. Since OMP chooses one index in each iteration, we conclude that OMP may fail in $s$ iterations for the given matrix $A$ and the $s$-sparse signal $x$.
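The construction is easy to verify numerically. The sketch below is our own illustration (assuming NumPy): it builds $A$ for a sample value of $s$, confirms $\delta_{s+1}(A) = 1/\sqrt{s+1}$ from the eigenvalues of $A^TA$, and checks that all $s+1$ correlations $S_k$ coincide:

```python
import numpy as np

s = 4
# The (s+1) x (s+1) matrix A of Theorem III.2.
A = np.zeros((s + 1, s + 1))
A[:s, :s] = np.sqrt(s / (s + 1)) * np.eye(s)
A[:s, s] = 1.0 / np.sqrt(s * (s + 1))
A[s, s] = 1.0

# A has exactly s+1 columns, so delta_{s+1}(A) is determined by the
# extreme eigenvalues of the full Gram matrix A^T A.
w = np.linalg.eigvalsh(A.T @ A)
delta = max(w[-1] - 1.0, 1.0 - w[0])
print(np.isclose(delta, 1.0 / np.sqrt(s + 1)))   # True

# For x = (1,...,1,0)^T, every column of A is equally correlated with
# b = A x, so OMP may break the tie with the wrong index s+1 in its
# very first iteration.
x = np.append(np.ones(s), 0.0)
S = A.T @ (A @ x)                                # S_k = <A e_k, A x>
print(np.allclose(S, s / (s + 1)))               # True
```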

Remark III.3. For the case of measurements with noise, please see [9], [11], and the references therein.

REFERENCES

[1] T.T. Cai and A. Zhang, "Sparse representation of a polytope and recovery of sparse signals and low-rank matrices," IEEE Trans. Inform. Theory, 60(1), 122-132, 2014.
[2] E.J. Candès, "The restricted isometry property and its implications for compressed sensing," C. R. Math. Acad. Sci. Paris, Ser. I, 346, 589-592, 2008.
[3] E.J. Candès, J. Romberg and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, 52(2), 489-509, 2006.
[4] E.J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inform. Theory, 51(12), 4203-4215, 2005.
[5] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inform. Theory, 55(5), 2230-2249, 2009.
[6] M.A. Davenport and M.B. Wakin, "Analysis of orthogonal matching pursuit using the restricted isometry property," IEEE Trans. Inform. Theory, 56(9), 4395-4401, 2010.
[7] G. Davis, S. Mallat and M. Avellaneda, "Adaptive greedy approximation," J. Constr. Approx., 13, 57-98, 1997.
[8] D. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, 52(4), 1289-1306, 2006.
[9] Y. Shen and S. Li, "Sparse signals recovery from noisy measurements by orthogonal matching pursuit," Inverse Problems and Imaging, to appear.
[10] M. Yang and F. de Hoog, "New coherence and RIP analysis for greedy algorithms in compressive sensing," arXiv preprint arXiv:1405.3354, 2014.
[11] R. Wu, W. Huang and D.-R. Chen, "The exact support recovery of sparse signals with noise via orthogonal matching pursuit," IEEE Signal Process. Lett., 20(4), 403-406, 2013.
[12] E. Liu and V.N. Temlyakov, "Orthogonal super greedy algorithm and applications in compressed sensing," IEEE Trans. Inform. Theory, 58(4), 2040-2047, 2012.
[13] Q. Mo and L. Song, "New bounds on the restricted isometry constant $\delta_{2k}$," Appl. Comput. Harmon. Anal., 31(3), 460-468, 2011.
[14] Q. Mo and Y. Shen, "A remark on the restricted isometry property in orthogonal matching pursuit algorithm," IEEE Trans. Inform. Theory, 58(6), 3654-3656, 2012.
[15] D. Needell and J.A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., 26(3), 301-321, 2009.
[16] Y.C. Pati, R. Rezaiifar and P.S. Krishnaprasad, "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition," in Proc. 27th Ann. Asilomar Conf. on Signals, Systems and Computers, Nov. 1993.
[17] J.A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Trans. Inform. Theory, 50(10), 2231-2242, 2004.

Qun Mo was born in 1971 in China. He obtained a B.Sc. degree in 1994 from Tsinghua University, an M.Sc. degree in 1997 from the Chinese Academy of Sciences, and a Ph.D. degree in 2003 from the University of Alberta, Canada. He is currently an associate professor of mathematics at Zhejiang University. His research interests include compressed sensing, wavelets and their applications.