Multipath Matching Pursuit


Multipath Matching Pursuit
Submitted to IEEE Trans. on Information Theory
Authors: S. Kwon, J. Wang, and B. Shim
Presenter: Hwanchol Jang

Multiple paths are investigated rather than a single path in a greedy type of search, and in the final moment the most promising path is chosen. The authors propose breadth-first and depth-first search versions of the greedy algorithm, and they provide a performance analysis of MMP based on the RIP.

I. Introduction

CS: A sparse signal x ∈ R^n can be reconstructed from the compressed measurements y = Φx ∈ R^m even when the system representation is underdetermined (m < n), as long as the signal to be recovered is sparse (i.e., the number of nonzero elements in the vector is small).

Reconstruction:
1. ℓ0 minimization: A K-sparse signal x can be accurately reconstructed using m = 2K measurements in a noiseless scenario [1].
2. ℓ1 minimization: Since the ℓ0-minimization problem is NP-hard and hence not so practical, early works focused on the reconstruction of sparse signals using the ℓ1-norm minimization technique (e.g., basis pursuit [2]).
3. Greedy search: The greedy search approach is designed to further reduce the computational complexity of basis pursuit.
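The measurement model above is easy to reproduce numerically. Below is a minimal NumPy sketch (an illustration added here, not taken from the paper) that builds a K-sparse x, a random Gaussian Φ with m < n, and the compressed measurements y = Φx; the dimensions and variable names are arbitrary choices for the example.

```python
import numpy as np

# Illustrative compressed-sensing setup: a K-sparse x in R^n observed through
# an underdetermined (m < n) random Gaussian sensing matrix Phi.
rng = np.random.default_rng(0)
n, m, K = 256, 64, 8                              # ambient dim, measurements, sparsity

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # sensing matrix (columns roughly unit norm)
x = np.zeros(n)
support = rng.choice(n, size=K, replace=False)    # true support T with |T| = K
x[support] = rng.standard_normal(K)               # nonzero amplitudes

y = Phi @ x                                       # compressed measurements
print(y.shape, np.count_nonzero(x))               # (64,) 8
```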

In a nutshell, greedy algorithms identify the support (the index set of the nonzero elements) of the sparse vector x in an iterative fashion, generating a series of locally optimal updates.

OMP: In the orthogonal matching pursuit (OMP) algorithm, the index of the column that maximizes the magnitude of the correlation between the columns of Φ and the modified measurements (often called the residual) is chosen as a new support element in each iteration. If at least one incorrect index is chosen in the middle of the search, the output of OMP will simply be incorrect.

ℓ0 minimization:

    min_x ‖x‖_0   subject to   Φx = y.   (1)

II. MMP Algorithm

OMP: OMP is simple to implement and also computationally efficient. Due to the choice of a single candidate, however, it is very sensitive to the index selection: the output of OMP will simply be wrong if an incorrect index is chosen in the middle of the search (a minimal sketch is given below).

Multiple indices: The StOMP algorithm, which identifies more than one index in each iteration, was proposed in [9]; in this approach, indices whose correlation magnitude exceeds a deliberately designed threshold are chosen. The CoSaMP and SP algorithms, which maintain K supports in each iteration, were introduced in [10] and [11]. In [12], generalized OMP (gOMP) was proposed: by choosing the N (> 1) indices with the largest correlation magnitudes in each iteration, gOMP reduces the misdetection probability at the expense of an increase in the false-alarm probability.
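The OMP iteration described above (correlate, pick one index, re-fit by least squares, update the residual) can be written in a few lines. This is a generic sketch of OMP for reference, not the authors' code; the function name omp and its interface are choices made for this example.

```python
import numpy as np

def omp(Phi, y, K):
    """Plain orthogonal matching pursuit: in each iteration pick the column most
    correlated with the residual, then re-estimate the signal on the chosen support."""
    m, n = Phi.shape
    support = []
    residual = y.copy()
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column index
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef            # residual for the next iteration
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, sorted(support)
```

On the toy problem above, omp(Phi, y, K) typically returns the true support when m is large enough relative to K; a single wrong pick, however, cannot be undone later, which is exactly the weakness MMP targets.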

MMP: The MMP algorithm searches multiple promising candidates and then, in the final moment, chooses the one minimizing the residual. Because it investigates multiple full-blown candidates instead of partial ones, MMP improves the chance of selecting the true support. The effect of the random noise vector cannot be accurately judged by looking only at a partial candidate and, more importantly, an incorrect decision affects subsequent decisions in many greedy algorithms; this is why MMP is effective in the noisy scenario. (A breadth-first sketch appears after the outline below.)

III. Perfect Recovery Condition for MMP

A recovery condition under which MMP can accurately recover K-sparse signals in the noiseless scenario. It has two parts: a condition ensuring successful recovery in the initial iteration (k = 1), and a condition guaranteeing success in the non-initial iterations (k > 1).
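Sketch referenced above: a naive breadth-first version of the search described in Section II, in the same NumPy style (my own illustration, not the authors' implementation; the name mmp_bfs, the parameter L for the number of child paths, and the unbounded candidate list are simplifying choices, whereas the paper also discusses depth-first search and tree pruning).

```python
import numpy as np

def mmp_bfs(Phi, y, K, L=2):
    """Naive breadth-first MMP sketch: each surviving candidate (an index set) is
    expanded with the L columns most correlated with its residual; identical
    candidates merge; after K iterations the candidate with the smallest residual wins."""
    n = Phi.shape[1]

    def ls_fit(support):
        A = Phi[:, list(support)]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef, y - A @ coef                    # LS coefficients and residual

    candidates = [()]                                # start from the empty support
    for _ in range(K):
        children = set()
        for cand in candidates:
            residual = ls_fit(cand)[1] if cand else y
            corr = np.abs(Phi.T @ residual)
            if cand:
                corr[list(cand)] = -np.inf           # do not reselect chosen indices
            for idx in np.argsort(corr)[-L:]:        # L most promising extensions
                children.add(tuple(sorted(cand + (int(idx),))))
        candidates = list(children)                  # duplicate candidates are merged here

    best = min(candidates, key=lambda c: np.linalg.norm(ls_fit(c)[1]))  # final moment
    x_hat = np.zeros(n)
    x_hat[list(best)] = ls_fit(best)[0]
    return x_hat, sorted(best)
```

Without pruning the candidate list can grow quickly; the point of the sketch is only the structure of the search: multiple paths are carried along and the decision is deferred to the final residual comparison.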

By success we mean that an index of the true support T is chosen in the iteration.

RIP: A sensing matrix Φ is said to satisfy the RIP of order K if there exists a constant δ ∈ (0, 1) such that

    (1 − δ) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ) ‖x‖_2^2   (2)

for any K-sparse vector x. The minimum of all constants δ satisfying (2) is called the restricted isometry constant δ_K.

Lemma 3.1 (Monotonicity of the restricted isometry constant): If the sensing matrix Φ satisfies the RIP of both orders K_1 and K_2, then δ_{K_1} ≤ δ_{K_2} for any K_1 ≤ K_2.

Lemma 3.2 (Consequences of RIP): For a set of indices I, if δ_{|I|} < 1, then for any x ∈ R^{|I|},

    (1 − δ_{|I|}) ‖x‖_2 ≤ ‖Φ'_I Φ_I x‖_2 ≤ (1 + δ_{|I|}) ‖x‖_2,   (3)
    ‖x‖_2 / (1 + δ_{|I|}) ≤ ‖(Φ'_I Φ_I)^{-1} x‖_2 ≤ ‖x‖_2 / (1 − δ_{|I|}).   (4)

Lemma 3.3 (Lemma 2.1 in [19]): Let I_1, I_2 be two disjoint sets (I_1 ∩ I_2 = ∅). If δ_{|I_1|+|I_2|} < 1, then

    ‖Φ'_{I_1} Φ_{I_2} x‖_2 ≤ δ_{|I_1|+|I_2|} ‖x‖_2   (5)

holds for any x.

Lemma 3.4: For an m × n matrix Φ, the spectral norm satisfies

    ‖Φ‖_2 = √(λ_max(Φ'Φ)) = √(λ_max(ΦΦ')),   (6)

so that, by the RIP, ‖Φ_I‖_2 ≤ √(1 + δ_{|I|}) for any index set I.
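Since δ_K in (2) is defined through a maximization over all K-element supports, it can be computed exactly only for very small problems. The following brute-force check is an added illustration (the name rip_constant is chosen here); it enumerates every support and reports the worst eigenvalue deviation of the Gram matrix from 1.

```python
import numpy as np
from itertools import combinations

def rip_constant(Phi, K):
    """Exact (brute-force) restricted isometry constant of order K: the largest
    deviation from 1 of the eigenvalues of Phi_S' Phi_S over all supports |S| = K.
    Exponential in n, so only feasible for toy sizes."""
    n = Phi.shape[1]
    delta = 0.0
    for S in combinations(range(n), K):
        eig = np.linalg.eigvalsh(Phi[:, S].T @ Phi[:, S])
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 40)) / np.sqrt(20)   # small Gaussian matrix
print(rip_constant(Phi, 2))                         # delta_2 for this realization
```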

A. Success Condition in Initial Iteration

In the first iteration, MMP computes the correlation between the measurements y and each column φ_i of Φ and then selects the L indices whose columns have the largest correlation in magnitude. Let Λ be the set of indices chosen in the first iteration, so that

    ‖Φ'_Λ y‖_2 = max_{|I| = L} ‖Φ'_I y‖_2.   (7)

The following theorem provides a condition under which at least one correct index belonging to T is chosen in the first iteration.

Theorem 3.5: Suppose x ∈ R^n is a K-sparse signal. Then among the L candidates at least one contains a correct index in the first iteration of the MMP algorithm if the sensing matrix Φ satisfies the RIP with

    δ_{K+L} < √L / (√K + √L).   (8)

Proof: From (7), we have

    ‖Φ'_Λ y‖_2 = max_{|I| = L} ‖Φ'_I y‖_2   (9)
               = max_{|I| = L} ( Σ_{i∈I} |⟨φ_i, y⟩|^2 )^{1/2}   (10)
               ≥ ( (L/K) Σ_{i∈T} |⟨φ_i, y⟩|^2 )^{1/2}   (11)
               = √(L/K) ‖Φ'_T y‖_2,   (12)

where |T| = K; (11) holds because the L largest values of |⟨φ_i, y⟩|^2 have an average at least as large as the average over the K indices of T when L ≤ K. Since y = Φ_T x_T, we further have

    ‖Φ'_Λ y‖_2 ≥ √(L/K) ‖Φ'_T Φ_T x_T‖_2   (13)
               ≥ √(L/K) (1 − δ_K) ‖x‖_2,   (14)

where (14) is due to Lemma 3.2. On the other hand, when no correct index is chosen in the first iteration (i.e., Λ ∩ T = ∅),

    ‖Φ'_Λ y‖_2 = ‖Φ'_Λ Φ_T x_T‖_2 ≤ δ_{K+L} ‖x‖_2,   (15)

where the inequality follows from Lemma 3.3. This inequality contradicts (14) if

    δ_{K+L} ‖x‖_2 < √(L/K) (1 − δ_K) ‖x‖_2.   (16)

In other words, under (16) at least one correct index should be chosen in the first iteration (T ∩ Λ ≠ ∅). Further, since δ_K ≤ δ_{K+L} by Lemma 3.1, (16) holds true if

    δ_{K+L} ‖x‖_2 < √(L/K) (1 − δ_{K+L}) ‖x‖_2.   (17)

Equivalently (dividing by ‖x‖_2 and solving for δ_{K+L}),

    δ_{K+L} < √L / (√K + √L).   (18)

In summary, if δ_{K+L} < √L/(√K + √L), then among the L indices at least one belongs to T in the first iteration of MMP.

B. Success Condition in Non-initial Iterations

Now we turn to the analysis of the success condition for the non-initial iterations. In the k-th iteration (k > 1), we focus on a candidate s^{k-1} whose elements are exclusively from the true support T (see Fig. 3). In short, our key finding is that at least one of the L indices chosen from s^{k-1} is from T under δ_{L+K} < √L/(√K + 3√L). The formal description of our finding is as follows.

Theorem 3.6: Suppose a candidate s^{k-1} includes indices only in T. Then among its L children at least one candidate chooses an index in T under

    δ_{L+K} < √L / (√K + 3√L).   (19)

Before we proceed, we provide definitions and lemmas useful in our analysis. Let f_j be the index of the j-th largest correlation in magnitude between r^{k-1} and the incorrect columns {φ_j}_{j∈T^C}, that is, f_j = arg max_{j ∈ T^C ∖ {f_1,...,f_{j-1}}} |⟨φ_j, r^{k-1}⟩|, and let F be the set of the first L such indices, F = {f_1, f_2, ..., f_L}. Also, let α_j^k be the j-th largest correlation in magnitude between the residual r^{k-1} associated with s^{k-1} and the columns indexed by incorrect indices. That is,

    α_j^k = |⟨φ_{f_j}, r^{k-1}⟩|.   (20)

Note that the α_j^k are ordered in magnitude (α_1^k ≥ α_2^k ≥ ...). Finally, let β_j^k be the j-th largest correlation in magnitude between r^{k-1} and the columns whose indices belong to T ∖ s^{k-1} (the set of remaining true indices). That is,

    β_j^k = |⟨φ_{ϕ(j)}, r^{k-1}⟩|,   (21)

where ϕ(j) = arg max_{j ∈ (T ∖ s^{k-1}) ∖ {ϕ(1),...,ϕ(j-1)}} |⟨φ_j, r^{k-1}⟩|. Similar to the α_j^k, the β_j^k are ordered in magnitude (β_1^k ≥ β_2^k ≥ ...). In the following lemmas, we provide an upper bound of α_L^k and a lower bound of β_1^k.

Fig. 3. Relationship between the candidates in the (k−1)-th iteration and those in the k-th iteration. Candidates inside the gray box contain elements of the true support T only.

Lemma 3.7: α_L^k satisfies

    α_L^k ≤ (1/√L) (δ_{L+K-k+1} + δ_{L+k-1} δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2.   (22)

Proof: See Appendix A.

Lemma 3.8: β_1^k satisfies

    β_1^k ≥ (1 − δ_{K-k+1} − √(1 + δ_{K-k+1}) √(1 + δ_{k-1}) δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1).   (23)

Proof: See Appendix B.

Proof of Theorem 3.6: From the definitions of α_L^k and β_1^k, it is clear that a (sufficient) condition under which at least one out of the L indices chosen in the k-th iteration of MMP is true is

    α_L^k < β_1^k.   (24)

Fig. 4. Comparison between α_L^k and β_1^k. If β_1^k > α_L^k, then among the L indices chosen in the k-th iteration, at least one is from the true support T.

First, from Lemmas 3.1 and 3.7, we have

    α_L^k ≤ (1/√L) (δ_{L+K-k+1} + δ_{L+k-1} δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2   (25)
          ≤ (1/√L) (δ_{L+K} + δ_{L+K}^2 / (1 − δ_{L+K})) ‖x_{T∖s^{k-1}}‖_2   (26)
          = δ_{L+K} / ((1 − δ_{L+K}) √L) ‖x_{T∖s^{k-1}}‖_2.   (27)

Also, from Lemmas 3.1 and 3.8, we have

    β_1^k ≥ (1 − δ_{K-k+1} − √(1 + δ_{K-k+1}) √(1 + δ_{k-1}) δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1)   (28)
          ≥ (1 − δ_{L+K} − (1 + δ_{L+K}) δ_{L+K} / (1 − δ_{L+K})) ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1)   (29)
          = (1 − 3δ_{L+K}) / (1 − δ_{L+K}) · ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1),   (30)

where (30) uses 1 − δ − δ(1 + δ)/(1 − δ) = (1 − 3δ)/(1 − δ) with δ = δ_{L+K}. Using (24), (27), and (30), we can obtain a sufficient condition for (24) as

    (1 − 3δ_{L+K}) / (1 − δ_{L+K}) · ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1) > δ_{L+K} / ((1 − δ_{L+K}) √L) ‖x_{T∖s^{k-1}}‖_2.   (31)

From (31), we further have

    δ_{L+K} < √L / (√(K − k + 1) + 3√L).   (32)

Since K − k + 1 < K for k > 1, (32) holds under δ_{L+K} < √L/(√K + 3√L), which completes the proof.
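To get a feel for the recovery conditions, one can tabulate the first-iteration bound (8) and the non-initial-iteration bound of Theorem 3.6 for a few values of K and L; with L = 1 the bound in (8) reduces to the familiar 1/(√K + 1) OMP-type condition, and both bounds relax as L grows, while the Theorem 3.6 bound is the stricter of the two. A small illustrative computation (added here, not from the paper):

```python
import numpy as np

# Tabulate condition (8) (first iteration) and the condition of Theorem 3.6 for several K, L.
for K in (16, 64):
    for L in (1, 2, 4, 8):
        first = np.sqrt(L) / (np.sqrt(K) + np.sqrt(L))         # condition (8)
        later = np.sqrt(L) / (np.sqrt(K) + 3 * np.sqrt(L))     # condition (19), Theorem 3.6
        print(f"K={K:2d}  L={L:2d}  first-iteration bound {first:.3f}  later-iteration bound {later:.3f}")
```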

APPENDIX A
PROOF OF LEMMA 3.7

Proof: Since s^{k-1} ⊂ T by the hypothesis, r^{k-1} = P^⊥_{s^{k-1}} y = P^⊥_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}, and the ℓ2-norm of the correlation Φ'_F r^{k-1} is expressed as

    ‖Φ'_F r^{k-1}‖_2 = ‖Φ'_F P^⊥_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (102)
                     = ‖Φ'_F (I − P_{s^{k-1}}) Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (103)
                     ≤ ‖Φ'_F Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 + ‖Φ'_F P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2.   (104)

Since F and T ∖ s^{k-1} are disjoint (F ∩ (T ∖ s^{k-1}) = ∅), and also noting that the number of correct indices in s^{k-1} is k − 1 by the hypothesis,

    |F| + |T ∖ s^{k-1}| = L + (K − k + 1).   (105)

Using this together with Lemma 3.3,

    ‖Φ'_F Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 ≤ δ_{L+K-k+1} ‖x_{T∖s^{k-1}}‖_2.   (106)

Similarly, noting that F ∩ s^{k-1} = ∅ and |F| + |s^{k-1}| = L + k − 1, we have

    ‖Φ'_F P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2
        = ‖Φ'_F Φ_{s^{k-1}} (Φ'_{s^{k-1}} Φ_{s^{k-1}})^{-1} Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (107)
        ≤ δ_{L+k-1} ‖(Φ'_{s^{k-1}} Φ_{s^{k-1}})^{-1} Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (108)
        ≤ (δ_{L+k-1} / (1 − δ_{k-1})) ‖Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (109)
        ≤ (δ_{L+k-1} δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2,   (110)

where (108) and (110) follow from Lemma 3.3 and (109) follows from Lemma 3.2; in (110) we use the fact that s^{k-1} and T ∖ s^{k-1} are disjoint, so that

    |s^{k-1}| + |T ∖ s^{k-1}| = (k − 1) + (K − k + 1) = K.   (111)

Using (104), (106), and (110), we have

    ‖Φ'_F r^{k-1}‖_2 ≤ (δ_{L+K-k+1} + δ_{L+k-1} δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2.   (112)

Since F = {f_1, ..., f_L}, we have ‖Φ'_F r^{k-1}‖_2^2 = Σ_{i=1}^{L} (α_i^k)^2 ≥ L (α_L^k)^2 (the α_i^k are ordered, α_1^k ≥ ... ≥ α_L^k), and therefore

    α_L^k ≤ ‖Φ'_F r^{k-1}‖_2 / √L.   (113)

Combining (112) and (113), we have

    α_L^k ≤ (1/√L) (δ_{L+K-k+1} + δ_{L+k-1} δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2.   (114)

APPENDIX B
PROOF OF LEMMA 3.8

Proof: Since β_1^k is the largest correlation in magnitude between r^{k-1} and {φ_j}_{j∈T∖s^{k-1}}, it is clear that

    β_1^k ≥ |⟨φ_j, r^{k-1}⟩|   (115)

for all j ∈ T ∖ s^{k-1}, and hence

    β_1^k ≥ ‖Φ'_{T∖s^{k-1}} r^{k-1}‖_2 / √(K − k + 1)   (116)
          = ‖Φ'_{T∖s^{k-1}} P^⊥_{s^{k-1}} Φ_T x_T‖_2 / √(K − k + 1),   (117)

where (117) follows from r^{k-1} = y − Φ_{s^{k-1}} x̂_{s^{k-1}} = y − P_{s^{k-1}} y = P^⊥_{s^{k-1}} y and y = Φ_T x_T. Since s^{k-1} ⊂ T, P^⊥_{s^{k-1}} Φ_{s^{k-1}} = 0 and hence P^⊥_{s^{k-1}} Φ_T x_T = P^⊥_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}. Using the triangle inequality,

    β_1^k ≥ ( ‖Φ'_{T∖s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 − ‖Φ'_{T∖s^{k-1}} P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 ) / √(K − k + 1)   (118)
          ≥ ( (1 − δ_{K-k+1}) ‖x_{T∖s^{k-1}}‖_2 − ‖Φ'_{T∖s^{k-1}} P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 ) / √(K − k + 1),   (119)

since |T ∖ s^{k-1}| = K − k + 1 and Lemma 3.2 applies. Also,

    ‖Φ'_{T∖s^{k-1}} P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2
        = ‖(P_{s^{k-1}} Φ_{T∖s^{k-1}})' (P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}})‖_2   (120)
        ≤ √(1 + δ_{K-k+1}) ‖P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2,   (121)

where (120) uses P_{s^{k-1}} = P'_{s^{k-1}} P_{s^{k-1}} and (121) follows from Lemma 3.4 together with the RIP (‖P_{s^{k-1}} Φ_{T∖s^{k-1}}‖_2 ≤ ‖Φ_{T∖s^{k-1}}‖_2 ≤ √(1 + δ_{K-k+1})). Further, we have

    ‖P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2
        = ‖Φ_{s^{k-1}} (Φ'_{s^{k-1}} Φ_{s^{k-1}})^{-1} Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (122)
        ≤ √(1 + δ_{k-1}) ‖(Φ'_{s^{k-1}} Φ_{s^{k-1}})^{-1} Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (123)
        ≤ (√(1 + δ_{k-1}) / (1 − δ_{k-1})) ‖Φ'_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2   (124)
        ≤ (√(1 + δ_{k-1}) δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2,   (125)

where (123) and (124) follow from the definition of the RIP and Lemma 3.2, respectively, and (125) follows from Lemma 3.3 and |s^{k-1}| + |T ∖ s^{k-1}| = (k − 1) + (K − k + 1) = K since s^{k-1} and T ∖ s^{k-1} are disjoint sets. Using (121) and (125), we obtain

    ‖Φ'_{T∖s^{k-1}} P_{s^{k-1}} Φ_{T∖s^{k-1}} x_{T∖s^{k-1}}‖_2 ≤ (√(1 + δ_{K-k+1}) √(1 + δ_{k-1}) δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2.   (126)

Finally, by combining (119) and (126), we have

    β_1^k ≥ (1 − δ_{K-k+1} − √(1 + δ_{K-k+1}) √(1 + δ_{k-1}) δ_K / (1 − δ_{k-1})) ‖x_{T∖s^{k-1}}‖_2 / √(K − k + 1).   (127)

REFERENCES

[1] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
[2] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[3] E. Liu and V. N. Temlyakov, "The orthogonal super greedy algorithm and applications in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2040–2047, Apr. 2012.
[4] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[5] M. A. Davenport and M. B. Wakin, "Analysis of orthogonal matching pursuit using the restricted isometry property," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4395–4401, Sept. 2010.
[6] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Trans. Inf. Theory, vol. 57, no. 7, pp. 4680–4688, July 2011.
[7] T. Zhang, "Sparse recovery with orthogonal matching pursuit under RIP," IEEE Trans. Inf. Theory, vol. 57, no. 9, pp. 6215–6221, Sept. 2011.
[8] R. G. Baraniuk, M. A. Davenport, R. DeVore, and M. B. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, pp. 253–263, Dec. 2008.
[9] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, "Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1094–1121, Feb. 2012.
[10] D. Needell and J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Commun. ACM, vol. 53, no. 12, pp. 93–100, Dec. 2010.
[11] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.
[12] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202–6216, Dec. 2012.
[13] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inf. Theory, vol. 13, no. 2, pp. 260–269, Apr. 1967.
[14] E. Viterbo and J. Boutros, "A universal lattice code decoder for fading channels," IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1639–1642, July 1999.
[15] B. Shim and I. Kang, "Sphere decoding with a probabilistic tree pruning," IEEE Trans. Signal Process., vol. 56, no. 10, pp. 4867–4878, Oct. 2008.
[16] B. M. Hochwald and S. ten Brink, "Achieving near-capacity on a multiple-antenna channel," IEEE Trans. Commun., vol. 51, no. 3, pp. 389–399, Mar. 2003.
[17] W. Chen, M. R. D. Rodrigues, and I. J. Wassell, "Projection design for statistical compressive sensing: A tight frame based approach," IEEE Trans. Signal Process., vol. 61, no. 8, pp. 2016–2029, 2013.
[18] S. Verdu, Multiuser Detection, Cambridge University Press, 1998.
[19] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9–10, pp. 589–592, May 2008.
[20] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, vol. 9, pp. 317–334, June 2009.