Recovery of Sparse Signals from Noisy Measurements Using an l_p-Regularized Least-Squares Algorithm

Recovery of Sparse Signals from Noisy Measurements Using an l_p-Regularized Least-Squares Algorithm
J. K. Pant, W.-S. Lu, and A. Antoniou
University of Victoria
August 25, 2011

Outline
- Compressive Sensing
- Use of l_1 Minimization
- Use of l_p Minimization
- Use of l_p-Regularized Least Squares
- Performance Evaluation
- Conclusion

Compressive Sensing
A signal $x(n)$ of length $N$ is $K$-sparse if it contains $K$ nonzero components, with $K \ll N$.
[Figure: a sparse signal; an image with sparse gradient (the Shepp-Logan phantom).]

Compressive Sensing, cont'd
A signal acquisition procedure in the compressive-sensing framework is modelled as
$$\underbrace{y}_{M \times 1} = \underbrace{\Phi}_{M \times N}\,\underbrace{x}_{N \times 1}$$
where $\Phi$ is a measurement matrix, typically with $K < M \ll N$, and $y$ is the measurement vector.
The inverse problem of recovering signal vector $x$ from $y$ is ill posed.
A classical approach to recover $x$ from $y$ is the method of least squares:
$$\hat{x} = \Phi^T \left(\Phi\Phi^T\right)^{-1} y$$
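As a quick numerical illustration (not part of the talk), here is a minimal NumPy sketch of the minimum-norm least-squares recovery above; the problem sizes match the later example and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 120, 26

# Build a K-sparse ground truth of length N
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix with zero-mean, variance-1/N entries
Phi = rng.standard_normal((M, N)) / np.sqrt(N)
y = Phi @ x

# Minimum-norm least-squares solution: x_hat = Phi^T (Phi Phi^T)^{-1} y
x_hat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

# The LS estimate is dense, not sparse: almost every entry is nonzero
print(np.count_nonzero(np.abs(x_hat) > 1e-6))
```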

Compressive Sensing, cont'd
Unfortunately, the method of least squares fails to recover a sparse signal.
The sparsity of a signal can be measured by using its $\ell_0$ pseudonorm
$$\|x\|_0 = \sum_{i=1}^{N} |x_i|^0$$
If signal $x$ is known to be sparse, it can be estimated by solving the optimization problem
$$\begin{aligned}\underset{x}{\text{minimize}}\quad & \|x\|_0\\ \text{subject to}\quad & \Phi x = y\end{aligned}$$
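To make the combinatorial nature of this problem concrete (motivating the next slide), here is a hypothetical brute-force solver, `l0_min_bruteforce`, that searches supports of growing size; the cost grows as $\binom{N}{k}$, which is why it is only a sketch:

```python
import itertools
import numpy as np

def l0_min_bruteforce(Phi, y, K_max):
    """Exhaustive l0 'minimization': try every support of size 1..K_max.
    Illustrative only -- the number of candidate supports is C(N, k)."""
    M, N = Phi.shape
    for k in range(1, K_max + 1):
        for support in itertools.combinations(range(N), k):
            S = list(support)
            # Least-squares fit restricted to the candidate support
            z, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
            if np.linalg.norm(Phi[:, S] @ z - y) < 1e-10:
                x = np.zeros(N)
                x[S] = z
                return x
    return None
```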

Use of l_1 Minimization
Unfortunately, the $\ell_0$-pseudonorm minimization problem is nonconvex and has combinatorial complexity.
Computationally tractable alternatives include the basis-pursuit algorithm, which solves the problem
$$\begin{aligned}\underset{x}{\text{minimize}}\quad & \|x\|_1\\ \text{subject to}\quad & \Phi x = y\end{aligned}$$
where $\|x\|_1 = \sum_{i=1}^{N} |x_i|$.
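Since basis pursuit is a linear program (see the theorem slide below), it can be solved with a stock LP solver. A minimal sketch using the standard split-variable formulation ($x = u - v$ with $u, v \ge 0$); the function name is ours:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi x = y as a linear program.
    Split x = u - v with u, v >= 0 and minimize sum(u) + sum(v)."""
    M, N = Phi.shape
    c = np.ones(2 * N)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])       # equality constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]
```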

Use of l_1 Minimization, cont'd
Theorem: If $\Phi = \{\phi_{ij}\}$, where the $\phi_{ij}$ are independent and identically distributed random variables with zero mean and variance $1/N$, and $M \geq cK\log(N/K)$, then the solution of the $\ell_1$-minimization problem recovers a $K$-sparse signal exactly with high probability.
For real-valued data $\{\Phi, y\}$, the $\ell_1$-minimization problem is a linear-programming problem.
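For a rough feel for the measurement bound, one can evaluate $cK\log(N/K)$ directly; note the constant $c$ is left unspecified by the theorem, so the value below is purely illustrative:

```python
import math

N, K = 512, 26
c = 2.0  # the theorem's constant c is unspecified; 2.0 is purely illustrative
M_bound = c * K * math.log(N / K)
print(f"M >= c K log(N/K) suggests roughly {M_bound:.0f} measurements")
```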

Use of l_1 Minimization, cont'd
Example: N = 512, M = 120, K = 26.
[Figure: four panels showing a sparse signal with K = 26; the signal reconstructed by l_1 minimization with M = 120; the reconstruction error (on the order of 10^-5); and the signal reconstructed by l_2 minimization with M = 120.]

Use of l_p Minimization
Chartrand's $\ell_p$-minimization-based iteratively reweighted algorithm, which solves the problem
$$\begin{aligned}\underset{x}{\text{minimize}}\quad & \|x\|_p^p\\ \text{subject to}\quad & \Phi x = y\end{aligned}$$
where $\|x\|_p^p = \sum_{i=1}^{N} |x_i|^p$ with $p < 1$, yields improved performance.
Mohimani et al.'s smoothed $\ell_0$-norm minimization algorithm solves the problem
$$\begin{aligned}\underset{x}{\text{minimize}}\quad & \sum_{i=1}^{N}\left[1 - \exp\left(-x_i^2/2\sigma^2\right)\right]\\ \text{subject to}\quad & \Phi x = y\end{aligned}$$
with $\sigma > 0$ using a sequential steepest-descent algorithm.
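A minimal sketch in the spirit of the iteratively reweighted approach (simplified, not Chartrand and Yin's exact algorithm; the weight update and epsilon schedule are illustrative choices):

```python
import numpy as np

def irls_lp(Phi, y, p=0.5, iters=50, eps=1.0):
    """IRLS sketch for min ||x||_p^p s.t. Phi x = y.  Each pass solves a
    weighted minimum-norm problem; eps is shrunk toward the |x|^p objective."""
    M, N = Phi.shape
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)       # least-squares start
    for _ in range(iters):
        w = (x**2 + eps**2) ** (p / 2 - 1)            # reweighting terms
        W_inv = np.diag(1.0 / w)
        # weighted minimum-norm solution consistent with Phi x = y
        x = W_inv @ Phi.T @ np.linalg.solve(Phi @ W_inv @ Phi.T, y)
        eps = max(eps / 2, 1e-8)
    return x
```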

Use of l_p Minimization, cont'd
The unconstrained regularized $\ell_p$-norm minimization algorithm estimates signal $x$ as
$$x = x_s + V_n \xi$$
where $x_s$ is the least-squares solution of $\Phi x = y$, the columns of $V_n$ constitute an orthonormal basis of the null space of $\Phi$, and $\xi$ is obtained as
$$\xi = \arg\min_{\xi} \sum_{i=1}^{N} \left[\left(x_{s,i} + v_i^T \xi\right)^2 + \epsilon^2\right]^{p/2}$$
This algorithm finds a vector $\xi$ that yields the sparsest estimate $x$.
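A short sketch of this null-space parameterization and the smoothed objective (function names and default parameter values are ours):

```python
import numpy as np

def urlp_parameterization(Phi, y):
    """Set up x = x_s + V_n xi: x_s is the minimum-norm LS solution and the
    columns of V_n span the null space of Phi (taken from the SVD)."""
    M, N = Phi.shape
    x_s = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
    _, _, Vt = np.linalg.svd(Phi)
    V_n = Vt[M:].T                       # last N - M right singular vectors
    assert np.allclose(Phi @ V_n, 0, atol=1e-10)
    return x_s, V_n

def urlp_objective(xi, x_s, V_n, p=0.5, eps=0.1):
    """Smoothed l_p objective in the null-space variable xi."""
    x = x_s + V_n @ xi
    return np.sum((x**2 + eps**2) ** (p / 2))
```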

Proposed l_p-Regularized Least-Squares Algorithm
The proposed $\ell_p$-regularized least-squares algorithm extends the unconstrained regularized $\ell_p$ algorithm to noisy measurements.
For noisy measurements, the signal acquisition procedure is modelled as
$$y = \Phi x + w$$
where $w$ is the measurement noise.
In this case, the equality constraint $\Phi x = y$ should be relaxed to
$$\|\Phi x - y\|_2^2 \leq \delta$$
where $\delta$ is a small positive scalar.
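A minimal sketch of this noisy measurement model; the noise level and the choice of $\delta$ on the order of $E[\|w\|_2^2] = M\sigma_w^2$ are illustrative assumptions, not values from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 200, 1024, 40
Phi = rng.standard_normal((M, N)) / np.sqrt(N)

x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

sigma_w = 1e-2                          # illustrative noise level
w = sigma_w * rng.standard_normal(M)
y = Phi @ x + w                         # noisy acquisition: y = Phi x + w

# Relaxed constraint ||Phi x - y||_2^2 <= delta, with delta ~ M sigma_w^2
delta = M * sigma_w**2
print(np.linalg.norm(Phi @ x - y)**2 <= 2 * delta)
```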

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
Consider the optimization problem
$$\begin{aligned}\underset{x}{\text{minimize}}\quad & \|x\|_{p,\epsilon}^p\\ \text{subject to}\quad & \|\Phi x - y\|_2^2 \leq \delta\end{aligned}$$
where
$$\|x\|_{p,\epsilon}^p = \sum_{i=1}^{N} \left(x_i^2 + \epsilon^2\right)^{p/2}$$
An unconstrained formulation of the above problem is given by
$$\underset{x}{\text{minimize}}\quad \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda \|x\|_{p,\epsilon}^p$$
where $\lambda$ is a regularization parameter.
We solve the above optimization problem in the null space of $\Phi$ and its complement space.
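The unconstrained objective and its gradient transcribe directly into code; a minimal sketch (the $\epsilon$ term is what keeps the regularizer differentiable):

```python
import numpy as np

def lp_rls_objective(x, Phi, y, lam, p=0.5, eps=0.1):
    """F(x) = 0.5 ||Phi x - y||_2^2 + lambda * sum_i (x_i^2 + eps^2)^(p/2)."""
    r = Phi @ x - y
    return 0.5 * (r @ r) + lam * np.sum((x**2 + eps**2) ** (p / 2))

def lp_rls_gradient(x, Phi, y, lam, p=0.5, eps=0.1):
    """Gradient of the smooth objective above."""
    return Phi.T @ (Phi @ x - y) + lam * p * x * (x**2 + eps**2) ** (p / 2 - 1)
```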

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
Let $\Phi = U[\Sigma\ \ 0]V^T$ be the singular-value decomposition of $\Phi$, where $\Sigma$ is a diagonal matrix whose diagonal elements $\sigma_1, \sigma_2, \ldots, \sigma_M$ are the singular values of $\Phi$, and the columns of $U$ and $V$ are, respectively, the left and right singular vectors of $\Phi$.
Write $V = [V_r\ \ V_n]$, where $V_r$ consists of the first $M$ columns and $V_n$ of the remaining $N - M$ columns of $V$. The columns of $V_n$ and $V_r$ form orthonormal bases for the null space of $\Phi$ and its complement space, respectively.
Vector $x$ is expressed as
$$x = V_r \varphi + V_n \xi$$
where $\varphi$ and $\xi$ are vectors of lengths $M$ and $N - M$, respectively.
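A quick sketch verifying this SVD split numerically (small sizes chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 10
Phi = rng.standard_normal((M, N))

U, s, Vt = np.linalg.svd(Phi)            # full SVD: Vt is N x N
V_r, V_n = Vt[:M].T, Vt[M:].T            # first M / last N - M right singular vectors

print(np.allclose(Phi @ V_n, 0))         # columns of V_n span the null space
print(np.allclose(V_r.T @ V_n, 0))       # the two subspaces are orthogonal
print(np.allclose(Phi @ V_r, U * s))     # Phi V_r = U Sigma
```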

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
Using the SVD, we recast the optimization problem for the l_p-RLS algorithm as
$$\underset{\varphi,\,\xi}{\text{minimize}}\quad F_{p,\epsilon}(\varphi, \xi)$$
where
$$F_{p,\epsilon}(\varphi, \xi) = \frac{1}{2}\sum_{i=1}^{M}\left(\sigma_i \varphi_i - \tilde{y}_i\right)^2 + \lambda \|x\|_{p,\epsilon}^p$$
$\tilde{y}_i$ is the $i$th component of vector $\tilde{y} = U^T y$, and $x = V_r \varphi + V_n \xi$.
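The recast data term works because $\|\Phi x - y\|_2^2 = \|\Sigma\varphi - U^T y\|_2^2$ when $x = V_r\varphi + V_n\xi$. A hedged self-contained check (sizes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, lam, p, eps = 4, 10, 0.1, 0.5, 0.1
Phi = rng.standard_normal((M, N))
x = rng.standard_normal(N)
y = Phi @ x + 0.1 * rng.standard_normal(M)

U, s, Vt = np.linalg.svd(Phi)
V_r, V_n = Vt[:M].T, Vt[M:].T
y_tilde = U.T @ y
phi, xi = V_r.T @ x, V_n.T @ x           # coordinates of x in the two subspaces

F_svd = 0.5 * np.sum((s * phi - y_tilde)**2) + lam * np.sum((x**2 + eps**2)**(p / 2))
F_x = 0.5 * np.linalg.norm(Phi @ x - y)**2 + lam * np.sum((x**2 + eps**2)**(p / 2))
print(np.isclose(F_svd, F_x))            # True: the recast objective matches
```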

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
In the $k$th iteration of the l_p-RLS algorithm, signal $x^{(k)}$ is updated as
$$x^{(k+1)} = x^{(k)} + \alpha d_v^{(k)}$$
where
$$d_v^{(k)} = V_r d_r^{(k)} + V_n d_n^{(k)}$$
and $\alpha > 0$.
Vectors $d_r^{(k)}$ and $d_n^{(k)}$ are of lengths $M$ and $N - M$, respectively.
Each component of vectors $d_r^{(k)}$ and $d_n^{(k)}$ is efficiently computed using the first step of a fixed-point iteration.

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
According to Banach's fixed-point theorem, the step size $\alpha$ can be computed using a finite number of iterations of
$$\alpha = -\frac{d_r^{(k)T}\Sigma\left(\Sigma\varphi - \tilde{y}\right) + \lambda p\, x^{(k)T}\zeta_v}{\left\|\Sigma d_r^{(k)}\right\|_2^2 + \lambda p\, d_v^{(k)T}\zeta_v}$$
where $\zeta_v = [\zeta_{v1}\ \zeta_{v2}\ \cdots\ \zeta_{vN}]^T$ with
$$\zeta_{vj} = \left[\left(x_j^{(k)} + \alpha d_{vj}^{(k)}\right)^2 + \epsilon^2\right]^{p/2-1} d_{vj}^{(k)}$$
for $j = 1, 2, \ldots, N$, where $d_{vj}^{(k)}$ is the $j$th component of the descent direction $d_v^{(k)}$.
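A minimal sketch of this fixed-point step-size update, assuming the descent directions are already available (the talk computes them via a separate fixed-point step not reproduced here; a few iterations typically suffice):

```python
import numpy as np

def step_size(x, d_v, d_r, s, phi, y_tilde, lam, p, eps, iters=5):
    """Fixed-point iteration for alpha: zeta depends on alpha, so iterate
    the closed-form update a few times starting from alpha = 0."""
    num_lin = d_r @ (s * (s * phi - y_tilde))   # d_r^T Sigma (Sigma phi - y~)
    den_lin = np.sum((s * d_r)**2)              # ||Sigma d_r||_2^2
    alpha = 0.0
    for _ in range(iters):
        zeta = ((x + alpha * d_v)**2 + eps**2) ** (p / 2 - 1) * d_v
        alpha = -(num_lin + lam * p * (x @ zeta)) / (den_lin + lam * p * (d_v @ zeta))
    return alpha
```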

Proposed l_p-Regularized Least-Squares Algorithm, cont'd
Optimization overview (sketched in code below):
- First, set $\epsilon$ to a large value, say $\epsilon_1$, typically $0.5 \leq \epsilon_1 \leq 1$, and initialize $\varphi$ and $\xi$ to zero vectors.
- Solve the optimization problem by (i) computing descent directions $d_v$ and $d_r$, (ii) computing the step size $\alpha$, and (iii) updating solution $x$ and coefficient vector $\varphi$.
- Reduce $\epsilon$ to a smaller value and solve the optimization problem again.
- Repeat this procedure until a sufficiently small target value, say $\epsilon_J$, is reached.
- Output $x$ as the solution.
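The sketch below captures only the $\epsilon$-continuation strategy, substituting plain gradient descent on the unconstrained objective for the paper's SVD-based directions and fixed-point step size; all parameter values and the step-size heuristic are illustrative assumptions:

```python
import numpy as np

def lp_rls_sketch(Phi, y, lam=1e-3, p=0.5,
                  eps_schedule=(1.0, 0.1, 0.01, 0.001), inner_iters=200):
    """Eps-continuation sketch: solve the smoothed problem for a decreasing
    sequence eps_1 > ... > eps_J, warm-starting each stage from the last.
    NOT the actual l_p-RLS iteration -- a simplified stand-in."""
    M, N = Phi.shape
    x = np.zeros(N)
    L = np.linalg.norm(Phi, 2)**2            # Lipschitz constant of the data term
    for eps in eps_schedule:
        step = 1.0 / (L + lam * p * eps**(p - 2))   # crude smoothness bound
        for _ in range(inner_iters):
            grad = Phi.T @ (Phi @ x - y) + lam * p * x * (x**2 + eps**2)**(p / 2 - 1)
            x = x - step * grad
    return x
```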

Performance Evaluation
[Figure: number of recovered instances versus sparsity K for various algorithms, with N = 1024 and M = 200, over 100 runs.]
- l_p-RLS: proposed
- URLP: unconstrained regularized l_p (Pant, Lu, and Antoniou, 2011)
- SL0: smoothed l_0-norm minimization (Mohimani et al., 2009)
- IR: iterative reweighting (Chartrand and Yin, 2008)

Performance Evaluation, cont'd
[Figure: average CPU time versus signal length for various algorithms, with M = N/2 and K = M/2.5.]
- l_p-RLS: proposed
- URLP: unconstrained regularized l_p (Pant, Lu, and Antoniou, 2011)
- SL0: smoothed l_0-norm minimization (Mohimani et al., 2009)
- IR: iterative reweighting (Chartrand and Yin, 2008)

Conclusions
- Compressive sensing is an effective technique for sampling sparse signals.
- $\ell_1$ minimization works in general for the reconstruction of sparse signals.
- $\ell_p$ minimization with $p < 1$ can improve the recovery performance for signals that are less sparse.
- The proposed $\ell_p$-regularized least-squares algorithm offers improved signal reconstruction from noisy measurements.

Thank you for your attention.