Reconstruction of Block-Sparse Signals by Using an l2/p-Regularized Least-Squares Algorithm


Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou
University of Victoria
May 21, 2012

Outline
- Compressive Sensing
- Signal Recovery by Using l1 or lp Minimization
- Recovery of Block-Sparse Signals
- Block-Sparse Signal Recovery by Using l2/p Minimization
- Performance Evaluation
- Conclusions

Compressive Sensing
- A signal x(n) of length N is K-sparse if it contains K nonzero components with $K \ll N$.
- A signal is near K-sparse if it contains K significant components.
- Example: an image with near-sparse wavelet coefficients.
  [Figure: an image of Lena, an equivalent 1-D signal, and its wavelet coefficients]

Compressive Sensing, cont'd
- Compressive sensing (CS) is a data-acquisition process whereby a sparse signal x(n), represented by a vector x of length N, is determined using a small number of projections represented by a matrix $\Phi$ of dimension $M \times N$.
- In such a process, the measurement vector y and the signal vector x are interrelated by the equation $y = \Phi x$.
  [Figure: 16 measurements = a 16 x 30 projection matrix times a 4-sparse signal of length 30]
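As an illustration of the measurement model, here is a minimal sketch that generates a K-sparse signal and takes M random projections. It uses the dimensions from the slide's example; the Gaussian projection matrix and variable names are our choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 30, 16, 4                       # dimensions from the slide's example

# K-sparse signal: K nonzero components at random positions
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random M x N projection matrix and the M measurements y = Phi x
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
```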

Signal Recovery by Using l1 or lp Minimization
- The inverse problem of recovering the signal x from the measurement y such that $\Phi x = y$, where $\Phi$ is $M \times N$, x is $N \times 1$, and y is $M \times 1$, is an ill-posed problem.
- l2 minimization often fails to yield a sparse x, i.e., the signal obtained as
  $x^* = \arg\min_x \|x\|_2$ subject to $\Phi x = y$
  is often not sparse.
- A sparse x can be recovered using l1 minimization as
  $x^* = \arg\min_x \|x\|_1$ subject to $\Phi x = y$
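The l1 problem is a linear program, so any LP solver can be used. A minimal sketch with the standard auxiliary-variable reformulation (this reformulation and the use of scipy.optimize.linprog are ours, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(Phi, y):
    """min ||x||_1 s.t. Phi x = y, as the LP: minimize sum(t) over z = [x; t]
    subject to -t <= x <= t and Phi x = y."""
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of t
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])       # equality constraint Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]                                # recovered signal estimate
```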

Signal Recovery by Using l1 or lp Minimization, cont'd
- Recently, lp-minimization-based algorithms have been shown to recover sparse signals using fewer measurements. In these algorithms, the signal is recovered by solving the optimization problem
  minimize $\|x\|_p^p = \sum_{i=1}^{N} |x_i|^p$ subject to $\Phi x = y$
  where p < 1.
- Note that the objective function $\|x\|_p^p$ in the above problem is nonconvex and nondifferentiable.
- Despite this, it has been shown in the literature that if the above problem is solved with sufficient care, improved reconstruction performance can be achieved.
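A toy computation (ours, not from the slides) illustrates why p < 1 promotes sparsity: among vectors of equal l2 norm, the sparser one has a much smaller value of $\|x\|_p^p$:

```python
import numpy as np

def lp_norm_p(x, p):
    """||x||_p^p = sum_i |x_i|^p; nonconvex and nondifferentiable at 0 for p < 1."""
    return np.sum(np.abs(x) ** p)

x_sparse = np.array([3.0, 0.0, 0.0, 0.0])   # 1-sparse, l2 norm = 3
x_spread = np.array([1.5, 1.5, 1.5, 1.5])   # dense,    l2 norm = 3
print(lp_norm_p(x_sparse, 0.5), lp_norm_p(x_spread, 0.5))   # ~1.73 vs ~4.90
```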

Signal Recovery by Using l1 or lp Minimization, cont'd
Example: N = 256, K = 35, M = 100.

Recovery of Block-Sparse Signals
- Let N, d, and N/d be positive integers such that d < N and N/d < N.
- A signal x of length N can be divided into N/d blocks as
  $x = [x_1^T \; x_2^T \; \cdots \; x_{N/d}^T]^T$
  where $x_i = [x_{(i-1)d+1} \; x_{(i-1)d+2} \; \cdots \; x_{(i-1)d+d}]^T$ for $i = 1, 2, \ldots, N/d$.
- Signal x is said to be K-block sparse if it has K nonzero blocks with $K \ll N/d$.
- Note that K-sparsity in conventional CS is the special case of K-block sparsity with d = 1.
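When d divides N, the block partition is just a reshape. A minimal sketch (our notation):

```python
import numpy as np

def blocks(x, d):
    """Partition x of length N (with d dividing N) into N/d blocks of length d;
    row i-1 of the result holds block x_i in the slides' 1-based notation."""
    return x.reshape(-1, d)

x = np.arange(1.0, 13.0)   # N = 12
print(blocks(x, 4))        # 3 blocks: [1..4], [5..8], [9..12]
```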

Recovery of Block-Sparse Signals, cont'd
- Block sparsity arises naturally in various signals such as speech signals, multiband signals, and some images.
  [Figures: a speech signal; a multiband spectrum]

Recovery of Block-Sparse Signals, cont'd
[Figure: an image of Jupiter, an equivalent 1-D signal, and the 1-D signal from n = 170000 to 176000]

Recovery of Block-Sparse Signals, cont'd
- The block sparsity of a signal can be measured using the l2/0 pseudonorm, which is given by
  $\|x\|_{2/0} = \sum_{i=1}^{N/d} (\|x_i\|_2)^0$
  with $0^0$ taken as 0.
- The value of $\|x\|_{2/0}$ is equal to the number of nonzero blocks of x, i.e., blocks with at least one nonzero component.
- A block-sparse signal can therefore be recovered by solving the optimization problem
  minimize $\|x\|_{2/0}$ subject to $\Phi x = y$
- Unfortunately, this problem is nonconvex and has combinatorial complexity.
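A direct computation of the pseudonorm (a sketch; with the $0^0 = 0$ convention the sum simply counts blocks with nonzero l2 norm):

```python
import numpy as np

def norm_2_0(x, d):
    """l_{2/0} pseudonorm: the number of nonzero blocks of x."""
    return int(np.count_nonzero(np.linalg.norm(x.reshape(-1, d), axis=1)))
```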

Recovery of Block-Sparse Signals, cont'd
- A practical method for recovering a block-sparse signal is to solve the problem
  minimize $\|x\|_{2/1}$ subject to $\Phi x = y$
  where
  $\|x\|_{2/1} = \sum_{i=1}^{N/d} \|x_i\|_2$
- Note that $\|x\|_{2/1}$ is the l1 norm of the vector $[\|x_1\|_2 \; \|x_2\|_2 \; \cdots \; \|x_{N/d}\|_2]^T$, which essentially gives a measure of the inter-block sparsity of x.
- The above problem is a convex programming problem which can be solved using a semidefinite-programming or a second-order cone-programming (SOCP) solver.
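The convex surrogate itself takes a couple of lines (a sketch mirroring norm_2_0 above):

```python
import numpy as np

def norm_2_1(x, d):
    """l_{2/1} norm: the l1 norm of the vector of per-block l2 norms."""
    return float(np.linalg.norm(x.reshape(-1, d), axis=1).sum())
```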

Recovery of Block-Sparse Signals, cont'd
Example: N = 512, d = 8, K = 5, M = 100.

Block-Sparse Signal Recovery by Using l2/p Minimization
- We propose reconstructing a block-sparse signal x from the measurement y by solving the l2/p-regularized least-squares problem
  minimize $F_\epsilon(x) = \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda \|x\|_{2/p,\epsilon}^p$    (P)
  with p < 1 and a small ɛ, where
  $\|x\|_{2/p,\epsilon}^p = \sum_{i=1}^{N/d} \left(\|x_i\|_2^2 + \epsilon^2\right)^{p/2}$
- Note that
  $\lim_{\epsilon \to 0} \|x\|_{2/p,\epsilon}^p = \|x\|_{2/p}^p$ and $\lim_{p \to 0} \|x\|_{2/p}^p = \|x\|_{2/0}$
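The objective and its gradient translate directly into NumPy; the per-block gradient follows by differentiating $(\|x_i\|_2^2 + \epsilon^2)^{p/2}$ with respect to $x_i$. A sketch (variable names and the reshape-based blocking are ours):

```python
import numpy as np

def F_eps(x, Phi, y, lam, p, eps, d):
    """F_eps(x) = 0.5 ||Phi x - y||_2^2 + lam * sum_i (||x_i||_2^2 + eps^2)^(p/2)."""
    r = Phi @ x - y
    w = np.sum(x.reshape(-1, d) ** 2, axis=1) + eps ** 2
    return 0.5 * (r @ r) + lam * np.sum(w ** (p / 2))

def grad_F_eps(x, Phi, y, lam, p, eps, d):
    """Gradient of F_eps: block i contributes lam * p * gamma_i * x_i,
    where gamma_i = (||x_i||_2^2 + eps^2)^(p/2 - 1)."""
    Xb = x.reshape(-1, d)
    gamma = (np.sum(Xb ** 2, axis=1) + eps ** 2) ** (p / 2 - 1)
    return Phi.T @ (Phi @ x - y) + lam * p * (gamma[:, None] * Xb).ravel()
```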

Block-Sparse Signal Recovery by Using l2/p Minimization, cont'd
- Good signal-reconstruction performance is expected when problem (P) is solved with a sufficiently small ɛ.
- However, for small ɛ the objective function $F_\epsilon(x)$ becomes highly nonconvex and nearly nondifferentiable.
- The larger the ɛ, the easier the optimization of $F_\epsilon(x)$.
- Therefore, we propose to solve problem (P) by using the following sequential optimization procedure:
  1. Choose a sufficiently large value of ɛ and solve problem (P) using the Fletcher-Reeves conjugate-gradient (CG) algorithm. Set the solution to x*.
  2. Reduce the value of ɛ, use x* as an initializer, and solve problem (P) again.
  3. Repeat this procedure until problem (P) is solved for a sufficiently small value of ɛ. Output the final solution and stop.
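A minimal sketch of this continuation loop. The initializer, the reduction factor for ɛ, and all function names are our choices; fr_cg stands for the Fletcher-Reeves CG inner solver sketched after the next slides:

```python
import numpy as np

def recover_block_sparse(Phi, y, d, p=0.5, lam=1e-3, eps0=1.0, eps_min=1e-5):
    """Sequential optimization for (P): solve for a decreasing sequence of eps
    values, warm-starting each solve with the previous solution."""
    x = Phi.T @ y                # simple initializer (our choice)
    eps = eps0
    while eps >= eps_min:
        x = fr_cg(x, Phi, y, lam, p, eps, d)   # inner Fletcher-Reeves CG solve
        eps /= 10.0                             # reduce eps (factor is our choice)
    return x
```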

Block-Sparse Signal Recovery by Using l2/p Minimization, cont'd
- In the kth iteration of the Fletcher-Reeves CG algorithm, the iterate $x_k$ is updated as
  $x_{k+1} = x_k + \alpha_k d_k$
  where
  $d_k = -g_k + \beta_{k-1} d_{k-1}$, $\beta_{k-1} = \dfrac{\|g_k\|_2^2}{\|g_{k-1}\|_2^2}$
  and $g_k$ denotes the gradient $\nabla F_\epsilon(x_k)$.
- Given $x_k$ and $d_k$, the step size $\alpha_k$ is obtained by solving the one-dimensional optimization problem
  minimize over α: $f(\alpha) = F_\epsilon(x_k + \alpha d_k)$

Block-Sparse Signal Recovery by Using l2/p Minimization, cont'd
- By setting the first derivative of f(α) to zero, we obtain the fixed-point equation α = G(α), where
  $G(\alpha) = -\dfrac{d_k^T \Phi^T (\Phi x_k - y) + \lambda p \sum_{i=1}^{N/d} \gamma_i \left(x_{ki}^T d_{ki}\right)}{\|\Phi d_k\|_2^2 + \lambda p \sum_{i=1}^{N/d} \gamma_i \left(d_{ki}^T d_{ki}\right)}$
  and
  $\gamma_i = \left(\|x_{ki} + \alpha d_{ki}\|_2^2 + \epsilon^2\right)^{p/2 - 1}$
- In the above equations, $x_{ki}$ and $d_{ki}$ are the ith blocks of vectors $x_k$ and $d_k$, respectively.
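The map G(α) translates directly into NumPy. A sketch (the reshape-based blocking and names are ours; d is assumed to divide N):

```python
import numpy as np

def G_alpha(alpha, xk, dk, Phi, y, lam, p, eps, d):
    """The fixed-point map G(alpha) obtained by setting f'(alpha) = 0."""
    Xb, Db = xk.reshape(-1, d), dk.reshape(-1, d)
    gamma = (np.sum((Xb + alpha * Db) ** 2, axis=1) + eps ** 2) ** (p / 2 - 1)
    num = dk @ (Phi.T @ (Phi @ xk - y)) \
        + lam * p * np.sum(gamma * np.sum(Xb * Db, axis=1))   # gamma_i * x_ki^T d_ki
    den = np.sum((Phi @ dk) ** 2) \
        + lam * p * np.sum(gamma * np.sum(Db ** 2, axis=1))   # gamma_i * d_ki^T d_ki
    return -num / den
```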

Block-Sparse Signal Recovery by Using l2/p Minimization, cont'd
- The step size $\alpha_k$ is determined by using the recursive relation
  $\alpha_{l+1} = G(\alpha_l)$ for l = 1, 2, ...
- According to Banach's fixed-point theorem, if $|dG(\alpha)/d\alpha| < 1$ then G(α) is a contraction mapping, i.e.,
  $|G(\alpha_1) - G(\alpha_2)| \le \eta\, |\alpha_1 - \alpha_2|$ with η < 1
  and, as a consequence, the above recursion converges to a solution.
- Extensive experimental results have shown that the map G(α) associated with f(α) is, in practice, a contraction mapping.
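Putting the pieces together, a minimal sketch of the inner solver: the Fletcher-Reeves update from the earlier slide, with the step size computed by the fixed-point recursion above. It assumes grad_F_eps and G_alpha from the earlier sketches; the iteration caps, tolerance, and starting point alpha0 are our choices, not from the slides:

```python
import numpy as np

def step_size(xk, dk, Phi, y, lam, p, eps, d, iters=10, alpha0=0.0):
    """Fixed-point recursion alpha_{l+1} = G(alpha_l); since G is in practice
    a contraction, a small fixed number of iterations suffices."""
    alpha = alpha0
    for _ in range(iters):
        alpha = G_alpha(alpha, xk, dk, Phi, y, lam, p, eps, d)
    return alpha

def fr_cg(x, Phi, y, lam, p, eps, d, max_iter=200, tol=1e-8):
    """Fletcher-Reeves CG for problem (P) at a fixed eps."""
    g = grad_F_eps(x, Phi, y, lam, p, eps, d)
    dk = -g
    for _ in range(max_iter):
        alpha = step_size(x, dk, Phi, y, lam, p, eps, d)
        x = x + alpha * dk
        g_new = grad_F_eps(x, Phi, y, lam, p, eps, d)
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta
        dk = -g_new + beta * dk
        g = g_new
    return x
```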

Performance Evaluation
Number of perfectly recovered instances with N = 512, M = 100, and d = 8 over 100 runs.
- l2/p-RLS: Proposed l2/p-Regularized Least-Squares
- l2/1-SOCP: l2/1 Second-Order Cone Programming (Eldar and Mishali, 2009)
- BOMP: Block Orthogonal Matching Pursuit (Eldar et al., 2010)

Performance Evaluation, cont'd
Average CPU time with M = N/2, K = M/(2.5d), and d = 8 over 100 runs.
- l2/p-RLS: Proposed l2/p-Regularized Least-Squares
- l2/1-SOCP: l2/1 Second-Order Cone Programming (Eldar and Mishali, 2009)
- BOMP: Block Orthogonal Matching Pursuit (Eldar et al., 2010)

Performance Evaluation, cont'd
Example: N = 512, d = 8, K = 9, M = 100.

Conclusions
- Compressive sensing is an effective sampling technique for sparse signals.
- l1 minimization and lp minimization with p < 1 work well for the reconstruction of sparse signals.
- l2/1 minimization offers improved reconstruction performance for block-sparse signals.
- The proposed l2/p-regularized least-squares algorithm offers improved reconstruction performance for block-sparse signals relative to the l2/1-SOCP and BOMP algorithms.

Thank you for your attention.
This presentation can be downloaded from:
http://www.ece.uvic.ca/~andreas/rlectures/iscas2012-jeevan-pres.pdf