Pre-weighted Matching Pursuit Algorithms for Sparse Recovery

Journal of Information & Computational Science 11:9 (2014) 2933–2939, June 1, 2014. Available at http://www.joics.com

Jingfei He, Guiling Sun, Jie Zuo, Mengqiu Fan
College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300071, China

Project supported by the National Natural Science Foundation of China (No. 6117114) and the Doctoral Program Foundation of Institutions of Higher Education of China (No. 213311132). Corresponding author: hejingfei@mail.nankai.edu.cn (Jingfei He). ISSN 1548-7741 / Copyright 2014 Binary Information Press. DOI: 10.12733/jics213758

Abstract

This paper presents a novel pre-weighting method for matching pursuit algorithms, termed the Pre-weighted Matching Pursuit (PwMP) algorithm. A weighted measurement vector is obtained by pre-weighting the measurement matrix. PwMP recovers the weighted sparse signal from the weighted measurement vector and the unweighted sensing matrix; the sparse signal is then obtained by removing the weighting. It is empirically demonstrated that the PwMP algorithm outperforms many existing reconstruction algorithms for both zero-one and Gaussian type signals.

Keywords: Compressed Sensing; Reconstruction Algorithm; Pre-weighted; Matching Pursuit

1 Introduction

In Compressed Sensing (CS) theory [1], we assume that f is a discrete-time signal of dimension N which can be represented as a sparse linear combination f = Ψx. Here x is a K-sparse signal of dimension N, i.e., x has at most K nonzero entries, and Ψ is an N × N sparsifying basis matrix. The measurement vector y is obtained via y = Φf, where Φ is an M × N measurement matrix with M < N. CS combines signal sampling and data compression by acquiring the data directly in compressed form. The signal f, or equivalently the vector x, can be recovered from the linear system of equations

    y = ΦΨx = Φ̃x.  (1)

Here Φ̃ = ΦΨ is the sensing matrix. However, (1) is underdetermined and has infinitely many solutions. The most direct way to find the sparsest solution is to solve the ℓ0 minimization problem [2]

    min ‖x‖_0  s.t.  Φ̃x = y.  (2)
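As a concrete illustration of the measurement model in (1), the following minimal Python/NumPy sketch builds a K-sparse zero-one signal and takes random Gaussian measurements. The sizes, the identity basis, and all variable names are our own illustrative choices, not the paper's.

```python
import numpy as np

# Illustrative sizes (on the scale of the paper's Experiment 1).
N, M, K = 1024, 128, 20
rng = np.random.default_rng(0)

# K-sparse zero-one signal: K nonzero entries, all equal to 1.
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = 1.0

Psi = np.eye(N)                                  # sparsifying basis (identity here)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # M x N Gaussian measurement matrix

f = Psi @ x          # signal of interest, f = Psi x
y = Phi @ f          # measurement vector, y = Phi f = (Phi Psi) x
```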

Unfortunately, the above ℓ0 minimization problem is NP-hard: solving it requires combinatorial optimization and is impractical. In a suitable sense [3], it is equivalent to use the minimal ℓ1-norm representation instead of the ℓ0 minimization, as in Basis Pursuit (BP) [4]:

    min ‖x‖_1  s.t.  Φ̃x = y.  (3)

This is a convex optimization problem that can be solved by Linear Programming (LP) techniques. However, algorithms of this kind carry a fairly heavy computational burden, which limits their use in practical applications. Recently, another popular class of sparse recovery algorithms, the matching pursuit algorithms, has received increasing attention owing to their simple and fast computation; examples include Orthogonal Matching Pursuit (OMP) [5], Compressive Sampling Matching Pursuit (CoSaMP) [6], Subspace Pursuit (SP) [7], and Sparsity Adaptive Matching Pursuit (SAMP) [8]. These algorithms perform well for Gaussian signals, but for zero-one signals they do not match the LP method.

This paper presents a novel pre-weighting method for matching pursuit algorithms, termed the Pre-weighted Matching Pursuit (PwMP) algorithm. By pre-weighting the measurement matrix, PwMP outperforms many existing reconstruction algorithms for both zero-one and Gaussian type signals.

2 The Proposed Pre-weighting Method

Since x in (1) is K-sparse, at most K columns of Φ̃ contribute to the measurement vector y. The main idea behind the OMP algorithm is to pick those columns in a greedy fashion: at each iteration, OMP chooses the column of Φ̃ most strongly correlated with the remaining part of y (the residual), removes its contribution to y, and computes the new residual; a minimal sketch is given at the end of this section. After a finite number of iterations, OMP identifies the correct set of columns, and it does so at low computational cost. Later algorithms such as CoSaMP and SP adopt the idea of backtracking: an index can be added to or removed from the estimated support set at any stage of the recovery process. SAMP, in turn, adjusts the step length at each stage so as to reach the sparsity K gradually, since the sparsity is usually unknown in practical applications.

Unfortunately, zero-one type signals are a particularly challenging case for matching pursuit algorithms, and their performance on this type of signal is far from ideal. Matching pursuit algorithms judge the correlation between a column of Φ̃ and the remaining part of y by their inner product; since all nonzero entries of a zero-one signal have the same magnitude, it is difficult to identify the correct set of columns. This suggests that if the nonzero entries of the sparse signal x could be weighted so that they take different magnitudes, the reconstruction accuracy of matching pursuit algorithms for zero-one signals would improve. However, the vector x, equivalently the signal f, cannot be weighted directly. The novel pre-weighting method introduced next solves this problem.
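For reference, here is a minimal sketch of the greedy OMP step described above (pick the column most correlated with the residual, then re-fit by least squares on the accumulated support). It is our own illustration of the standard formulation [5], not the paper's code.

```python
import numpy as np

def omp(A, y, K):
    """Minimal OMP sketch: A is the sensing matrix, y the measurements,
    K the assumed sparsity. Returns an estimate x_hat."""
    r = y.copy()
    support = []
    for _ in range(K):
        # Column of A most strongly correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
```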

3 The PwMP Algorithm

3.1 Algorithm Description

[Fig. 1: Schematic representation of the PwMP algorithm: sparse representation f = Ψx; measurement weight_y = weight_Φ f with weight_Φ = Φ(ΨQΨ^-1); reconstruction of weight_x from weight_y and the sensing matrix Φ̃ = ΦΨ; weight removal x̂ = Q^-1 weight_x.]

The proposed pre-weighting method for matching pursuit algorithms, illustrated in Fig. 1, weights the sparse signal in a disguised form. First, the measurement matrix Φ is pre-weighted via

    weight_Φ = Φ(ΨQΨ^-1).  (4)

Here Q is an N × N diagonal matrix, called the weighting matrix, whose diagonal entries are nonzero and have different magnitudes; the sparsifying basis Ψ must be nonsingular. The weighted measurement vector weight_y is then given by

    weight_y = weight_Φ f.  (5)

Afterwards, the weighted sparse signal weight_x is recovered from the weighted measurement vector weight_y and the unweighted sensing matrix Φ̃. Finally, the recovered sparse signal x̂ is obtained by removing the weighting.
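A small NumPy sketch of the pre-weighting step (4)-(5) follows, checking numerically that the weighted measurements coincide with the unweighted sensing matrix applied to the weighted signal Qx (the identity derived in Section 3.3). Sizes and names are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 4
Phi = rng.standard_normal((M, N))
Psi = rng.standard_normal((N, N))      # any nonsingular sparsifying basis
q = np.arange(1.0, N + 1)              # diagonal of Q, e.g. Q = diag(1, ..., N)

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0
f = Psi @ x

weight_Phi = Phi @ Psi @ np.diag(q) @ np.linalg.inv(Psi)   # Eq. (4)
weight_y = weight_Phi @ f                                   # Eq. (5)

# Weighted measurements equal the unweighted sensing matrix times Qx,
# i.e. the linear system of Eqs. (6)-(7).
assert np.allclose(weight_y, (Phi @ Psi) @ (q * x))
```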

3.2 Definition of the Algorithm

Algorithm 1. The pre-weighted matching pursuit algorithm

Sparse representation: f = Ψx.
Encoding measurement: weighted measurement matrix weight_Φ = Φ(ΨQΨ^-1); weighted measurement vector weight_y = weight_Φ f.
Reconstruction input: weight_y, sensing matrix Φ̃.
Initialization: weight_x = 0, residual r_0 = weight_y, support set F_0 = ∅, stopping threshold T, step size s = 1, support size L_0 = s, k = 1.
Step 1: P_k = {L_{k-1} indices corresponding to the largest entries of the vector Φ̃^T r_{k-1}}.
Step 2: candidate set C_k = F_{k-1} ∪ P_k; F = {L_{k-1} indices corresponding to the largest entries of the vector Φ̃†_{C_k} weight_y}, where Φ̃†_{C_k} denotes the pseudo-inverse of Φ̃_{C_k}.
Step 3: r_k = weight_y − Φ̃_F Φ̃†_F weight_y.
Step 4: If ‖r_k‖ ≤ T (the halting criterion [6, 9]), end the iterations and execute Step 6; else if ‖r_k‖_2 ≥ ‖r_{k-1}‖_2, set L_k = L_{k-1} + 1.
Step 5: F_k = F, k = k + 1; go to Step 1.
Step 6: weight_x_F = Φ̃†_F weight_y; weight_x_{{1,...,N}\F} = 0; x̂ = Q^-1 weight_x.
Output: the reconstructed signal f̂ = Ψx̂.

3.3 Theoretical Analysis

In the PwMP algorithm, the weighted measurement vector weight_y is obtained via (5), where f = Ψx is the signal of interest. Using the identity in (4), (5) is equivalent to

    weight_y = Φ(ΨQΨ^-1)Ψx = ΦΨQx.  (6)

Since Φ̃ = ΦΨ is the sensing matrix and Qx = weight_x is the weighted sparse signal, (6) can be cast as the linear system of equations

    weight_y = Φ̃ weight_x.  (7)

To obtain weight_x, one could likewise solve the ℓ1 minimization problem

    min ‖weight_x‖_1  s.t.  Φ̃ weight_x = weight_y.  (8)

Following the main idea of the matching pursuit algorithms, and inheriting the sparsity-adaptive [8] and backtracking [6, 7] ideas, we instead reconstruct weight_x from the weighted measurement vector weight_y and the unweighted sensing matrix Φ̃. We then remove the weighting via x̂ = Q^-1 weight_x = Q^-1 Qx, recovering the sparse signal x, and finally obtain the reconstructed signal via f̂ = Ψx̂.

The objective of the present study is to weight the nonzero entries of the sparse signal by pre-weighting the measurement matrix. Hence, when the sparse signal is a zero-one signal, or one whose nonzero entries have similar magnitudes, the pre-weighting makes the sparse signal seen by the reconstruction algorithm have nonzero entries of different magnitudes, which helps matching pursuit algorithms identify the correct set of columns.
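The following hedged NumPy sketch puts Algorithm 1 together end to end: pre-weighting (4)-(5), the SAMP-style adaptive greedy loop with backtracking, and weight removal. The stage-switch test and stopping rule reflect our reading of Steps 1-6; parameter names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def pwmp(Phi, Psi, f, q, T=1e-3, s=1, max_iter=200):
    """Sketch of PwMP (Algorithm 1). q is the diagonal of the weighting
    matrix Q; returns the reconstructed signal f_hat and coefficients x_hat."""
    N = Psi.shape[0]
    weight_Phi = Phi @ Psi @ np.diag(q) @ np.linalg.inv(Psi)   # Eq. (4)
    weight_y = weight_Phi @ f                                   # Eq. (5)
    A = Phi @ Psi                       # unweighted sensing matrix
    F = np.array([], dtype=int)         # support estimate, F_0 = empty set
    r = weight_y.copy()                 # residual, r_0 = weight_y
    L = s                               # current support size, L_0 = s
    for _ in range(max_iter):
        # Step 1: L indices most correlated with the residual.
        P = np.argsort(np.abs(A.T @ r))[-L:]
        # Step 2: merge with the previous support, then backtrack to L indices.
        C = np.union1d(F, P)
        coef, *_ = np.linalg.lstsq(A[:, C], weight_y, rcond=None)
        F_new = C[np.argsort(np.abs(coef))[-L:]]
        # Step 3: residual of the least-squares projection onto the new support.
        proj, *_ = np.linalg.lstsq(A[:, F_new], weight_y, rcond=None)
        r_new = weight_y - A[:, F_new] @ proj
        # Step 4: halt on a small residual; otherwise possibly grow the support.
        if np.linalg.norm(r_new) <= T:
            F = F_new
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += 1                      # SAMP-style stage switch
        F, r = F_new, r_new             # Step 5
    # Step 6: estimate on the final support, then remove the weighting.
    weight_x = np.zeros(N)
    coef, *_ = np.linalg.lstsq(A[:, F], weight_y, rcond=None)
    weight_x[F] = coef
    x_hat = weight_x / q                # x_hat = Q^{-1} weight_x
    return Psi @ x_hat, x_hat
```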

4 Empirical Results and Analysis

In this section, we compare simulation results of the proposed PwMP with OMP, SP, CoSaMP, SAMP, and the LP method for both zero-one and Gaussian type signals. We also examine the performance of PwMP at different compression ratios (N/M) for these two signal types.

4.1 Experiment 1

In this experiment, the signals of interest are zero-one or Gaussian sparse signals of length N = 1024, and Φ is a normally distributed random matrix with M = 128 measurements. The signal sparsity is K ∈ {10, 15, 20, 25, 30, 35, 40, 45, 50}, and 500 simulations are conducted for each sparsity level. The remaining parameters are kept at T = 10^-3 and Q = diag(1, 2, ..., N). The reconstruction error is ε = ‖x̂ − x‖ / ‖x‖, and a reconstruction is declared successful when ε ≤ 10^-3.

The success rates of each algorithm for zero-one type signals are shown in Fig. 2, where the x-axis denotes the sparsity level K and the y-axis the reconstruction success rate. With the compression ratio constant, i.e., M and N fixed, the success rate of every algorithm falls as the sparsity grows. As can be seen, OMP, SP, CoSaMP, and SAMP perform worse than the LP method for zero-one signals, whereas the PwMP algorithm significantly outperforms the LP method. The simulation results reveal that the pre-weighting method improves matching pursuit reconstruction of zero-one signals.

[Fig. 2: Success rates of each algorithm (OMP, SP, CoSaMP, SAMP, LP, PwMP) for zero-one type signals; x-axis: signal sparsity K from 10 to 50, y-axis: success rate.]

By pre-weighting the measurement matrix, the PwMP algorithm weights the sparse signal in a disguised form. Consequently, the nonzero entries take different magnitudes, which improves the recovery quality of matching pursuit algorithms. PwMP is superior not only for zero-one type signals but also for Gaussian type signals. As shown in Fig. 3, where the signals of interest are Gaussian sparse signals, the matching pursuit algorithms perform far better than they do for zero-one signals, and again PwMP attains a significantly higher success rate than OMP, SP, CoSaMP, SAMP, and the LP method.

4.2 Experiment 2

The length of the sparse signals is fixed at N = 1024 and the number of measurements is M ∈ {512, 256, 128, 64, 32}; the corresponding compression ratios are 2:1, 4:1, 8:1, 16:1, and 32:1. For each value of M, the sparsity is set to K ∈ {[0.1M], [0.2M], [0.3M], [0.4M], [0.5M]}. The other experimental settings are unchanged.
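As a sketch of how such success-rate curves could be produced under the stated criterion ε ≤ 10^-3, the following harness reuses the pwmp() sketch above for the Experiment 1 setup; the trial count and seeding here are our own choices.

```python
import numpy as np

def success_rate(K, n_trials=100, N=1024, M=128, tol=1e-3):
    """Fraction of trials with relative error eps <= tol (Experiment 1 setup:
    zero-one signals, Gaussian Phi, Psi = I, Q = diag(1, ..., N))."""
    rng = np.random.default_rng(K)
    Psi = np.eye(N)
    q = np.arange(1.0, N + 1)
    hits = 0
    for _ in range(n_trials):
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = 1.0     # zero-one signal
        _, x_hat = pwmp(Phi, Psi, Psi @ x, q)
        eps = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
        hits += eps <= tol                           # success if eps <= 1e-3
    return hits / n_trials
```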

[Fig. 3: Success rates of each algorithm (OMP, SP, CoSaMP, SAMP, LP, PwMP) for Gaussian type signals; x-axis: signal sparsity K from 10 to 50, y-axis: success rate.]

[Fig. 4: Success rates of PwMP for zero-one type signals under compression ratios N:M = 2:1, 4:1, 8:1, 16:1, and 32:1; x-axis: K/M, y-axis: success rate.]

[Fig. 5: Success rates of PwMP for Gaussian type signals under compression ratios N:M = 2:1, 4:1, 8:1, 16:1, and 32:1; x-axis: K/M, y-axis: success rate.]

Fig. 4 and Fig. 5 show the results of the PwMP algorithm for zero-one sparse signals and Gaussian sparse signals under different compression ratios, respectively. The simulations show that with the sparsity K held constant, the success rate falls as the compression ratio N/M increases; likewise, when the compression ratio is fixed, higher sparsity values yield lower success rates. Across signal types, PwMP reconstructs Gaussian signals better than zero-one signals.

5 Conclusion

To overcome the drawbacks of matching pursuit algorithms in reconstructing zero-one type sparse signals, we introduced a new algorithm, the pre-weighted matching pursuit algorithm,

which weights the sparse signal in a disguised form. By pre-weighting the measurement matrix, the PwMP algorithm in effect weights the sparse signal, so the nonzero entries take different magnitudes and the signal becomes easier to recover by matching pursuit. The simulation results demonstrate that the PwMP algorithm has significant advantages in reconstructing both zero-one and Gaussian type sparse signals.

References

[1] D. L. Donoho, Compressed sensing, IEEE Trans. on Information Theory, 52(4), 2006, 1289-1306
[2] E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. on Information Theory, 52(2), 2006, 489-509
[3] D. L. Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution, Communications on Pure and Applied Mathematics, 59(6), 2006, 797-829
[4] S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Journal on Scientific Computing, 20(1), 1998, 33-61
[5] J. A. Tropp, A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. on Information Theory, 53(12), 2007, 4655-4666
[6] D. Needell, J. A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Applied and Computational Harmonic Analysis, 26(3), 2009, 301-321
[7] W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. on Information Theory, 55(5), 2009, 2230-2249
[8] T. T. Do, L. Gan, N. Nguyen, T. D. Tran, Sparsity adaptive matching pursuit algorithm for practical compressed sensing, in: Conference Record - Asilomar Conference on Signals, Systems and Computers, 2008, 581-587
[9] G. L. Sun, Y. Zhang, L. Q. Lv, W. X. Li, Research on iterative thresholding matching pursuit based on adaptive sparsity, Journal of Computational Information Systems, 7(1), 2011, 34-41