Recovery of Sparse Signals from Noisy Measurements Using an lp-Regularized Least-Squares Algorithm
J. K. Pant, W.-S. Lu, and A. Antoniou
University of Victoria
August 25, 2011
Outline
- Compressive Sensing
- Use of l1 Minimization
- Use of lp Minimization
- Use of lp-Regularized Least Squares
- Performance Evaluation
- Conclusion
Compressive Sensing
A signal x(n) of length N is K-sparse if it contains K nonzero components, with K ≪ N.
[Figures: a sparse signal; an image with a sparse gradient (the Shepp-Logan phantom).]
Compressive Sensing, cont'd
A signal acquisition procedure in the compressive sensing framework is modelled as
  y = Φx
where y is the M×1 measurement vector, Φ is an M×N measurement matrix, and x is the N×1 signal vector, typically with K < M ≪ N.
The inverse problem of recovering the signal vector x from y is ill-posed.
A classical approach to recovering x from y is the method of least squares:
  x = Φ^T (ΦΦ^T)^{-1} y
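The minimum-norm least-squares recovery above can be sketched in NumPy (a toy example; the sizes and random seed are our own). It fits the measurements exactly but returns a dense vector, which previews why least squares fails for sparse recovery:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 20, 64, 3

# K-sparse ground truth
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Random Gaussian measurement matrix and noiseless measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(N)
y = Phi @ x

# Minimum-norm least-squares solution: x_ls = Phi^T (Phi Phi^T)^{-1} y
x_ls = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

print(np.allclose(Phi @ x_ls, y))    # consistent with the measurements
print(np.sum(np.abs(x_ls) > 1e-6))   # but dense: far more than K nonzeros
```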
Compressive Sensing, cont'd
Unfortunately, the method of least squares fails to recover a sparse signal.
The sparsity of a signal can be measured by its l0 pseudonorm
  ||x||_0 = Σ_{i=1}^{N} |x_i|^0
If signal x is known to be sparse, it can be estimated by solving the optimization problem
  minimize ||x||_0  subject to  Φx = y
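In principle the l0 problem can be solved by exhaustive search over supports, which makes its combinatorial cost concrete. A minimal sketch on a toy instance (the function name and sizes are our own; this is only feasible for tiny N):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 8  # tiny sizes only; the search is combinatorial

x_true = np.zeros(N)
x_true[3] = 2.0
Phi = rng.standard_normal((M, N))
y = Phi @ x_true

# Brute-force l0 recovery: try every support of size 1, 2, ... and keep
# the smallest support whose least-squares fit reproduces y exactly.
def l0_recover(Phi, y, max_k=3, tol=1e-8):
    N = Phi.shape[1]
    for k in range(1, max_k + 1):
        for support in itertools.combinations(range(N), k):
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            if np.linalg.norm(sub @ coef - y) < tol:
                x = np.zeros(N)
                x[list(support)] = coef
                return x
    return None

x_hat = l0_recover(Phi, y)
print(np.allclose(x_hat, x_true))  # True on this tiny instance
```

The number of candidate supports grows as C(N, K), which is why this approach is hopeless at realistic signal lengths.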
Use of l1 Minimization
Unfortunately, the l0-pseudonorm minimization problem is nonconvex and has combinatorial complexity.
Computationally tractable algorithms include the basis pursuit algorithm, which solves the problem
  minimize ||x||_1  subject to  Φx = y
where ||x||_1 = Σ_{i=1}^{N} |x_i|.
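For real-valued data the basis pursuit problem can be posed as a linear program over the stacked variable z = [x; t] with -t ≤ x ≤ t. A sketch using scipy.optimize.linprog (assuming SciPy is available; sizes and seed are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, N = 10, 24

x_true = np.zeros(N)
x_true[[4, 17]] = [1.0, -0.5]
Phi = rng.standard_normal((M, N)) / np.sqrt(N)
y = Phi @ x_true

# Basis pursuit as an LP over z = [x; t]:
#   minimize sum(t)  subject to  Phi x = y,  -t <= x <= t
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([Phi, np.zeros((M, N))])
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]),     #  x - t <= 0
                  np.hstack([-I, -I])])   # -x - t <= 0
b_ub = np.zeros(2 * N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * N))
x_hat = res.x[:N]

print(np.linalg.norm(Phi @ x_hat - y) < 1e-6)  # feasible solution found
```

With enough measurements relative to the sparsity, the LP solution typically coincides with the sparse ground truth.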
Use of l1 Minimization, cont'd
Theorem: If Φ = {φ_ij}, where the φ_ij are independent and identically distributed random variables with zero mean and variance 1/N, and M ≥ cK log(N/K), then the solution of the l1-minimization problem recovers a K-sparse signal exactly with high probability.
For real-valued data {Φ, y}, the l1-minimization problem is a linear programming problem.
Use of l1 Minimization, cont'd
Example: N = 512, M = 120, K = 26.
[Figures: the original sparse signal with K = 26; the signal reconstructed by l1 minimization with M = 120 (reconstruction error on the order of 10^-5); and the signal reconstructed by l2 minimization, which fails to recover the sparse structure.]
Use of lp Minimization
Chartrand's lp-minimization-based iteratively reweighted algorithm, which solves the problem
  minimize ||x||_p^p  subject to  Φx = y
where ||x||_p^p = Σ_{i=1}^{N} |x_i|^p with p < 1, yields improved performance.
Mohimani et al.'s smoothed l0-norm minimization algorithm solves the problem
  minimize Σ_{i=1}^{N} [1 - exp(-x_i^2 / (2σ^2))]  subject to  Φx = y
with σ > 0 using a sequential steepest-descent algorithm.
Use of lp Minimization, cont'd
The unconstrained regularized lp-norm minimization algorithm estimates signal x as
  x = x_s + V_n ξ
where x_s is the least-squares solution of Φx = y, the columns of V_n constitute an orthonormal basis of the null space of Φ, and ξ is obtained as
  ξ = arg min_ξ Σ_{i=1}^{N} [(x_{s,i} + v_i^T ξ)^2 + ε^2]^{p/2}
This algorithm finds the vector ξ that gives the sparsest estimate x.
Proposed lp-Regularized Least-Squares Algorithm
The proposed lp-regularized least-squares (lp-RLS) algorithm is an extension of the unconstrained regularized lp algorithm to noisy measurements.
For noisy measurements, the signal acquisition procedure is modelled as
  y = Φx + w
where w is the measurement noise.
In this case, the equality constraint Φx = y should be relaxed to
  ||Φx - y||_2^2 ≤ δ
where δ is a small positive scalar.
Proposed lp-Regularized Least-Squares Algorithm, cont'd
Consider the optimization problem
  minimize ||x||_{p,ε}^p  subject to  ||Φx - y||_2^2 ≤ δ
where
  ||x||_{p,ε}^p = Σ_{i=1}^{N} (x_i^2 + ε^2)^{p/2}
An unconstrained formulation of the above problem is given by
  minimize (1/2)||Φx - y||_2^2 + λ ||x||_{p,ε}^p
where λ is a regularization parameter.
We solve the above optimization problem in the null space of Φ and its complement space.
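The unconstrained objective and its gradient are straightforward to write down; a minimal sketch with hypothetical helper names, verified by a finite-difference check:

```python
import numpy as np

def lp_rls_objective(x, Phi, y, lam, p=0.5, eps=1e-2):
    """F(x) = 0.5*||Phi x - y||_2^2 + lam * sum((x_i^2 + eps^2)^(p/2))."""
    r = Phi @ x - y
    return 0.5 * r @ r + lam * np.sum((x**2 + eps**2) ** (p / 2))

def lp_rls_gradient(x, Phi, y, lam, p=0.5, eps=1e-2):
    """grad F = Phi^T(Phi x - y) + lam*p*x*(x^2 + eps^2)^(p/2 - 1)."""
    return Phi.T @ (Phi @ x - y) + lam * p * x * (x**2 + eps**2) ** (p / 2 - 1)

# Finite-difference sanity check of the gradient (arbitrary small instance)
rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 8))
y = rng.standard_normal(5)
x = rng.standard_normal(8)
g = lp_rls_gradient(x, Phi, y, lam=0.1)
h = 1e-6
num = np.array([
    (lp_rls_objective(x + h * e, Phi, y, 0.1) -
     lp_rls_objective(x - h * e, Phi, y, 0.1)) / (2 * h)
    for e in np.eye(8)
])
print(np.max(np.abs(g - num)) < 1e-4)  # analytic and numeric gradients agree
```

Note that the ε^2 smoothing makes the penalty differentiable everywhere, which is what allows a descent-based method in the first place.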
Proposed lp-Regularized Least-Squares Algorithm, cont'd
Let Φ = U [Σ 0] V^T be the singular-value decomposition of Φ, where Σ is a diagonal matrix whose diagonal elements σ_1, σ_2, ..., σ_M are the singular values of Φ, and the columns of U and V are, respectively, the left and right singular vectors of Φ.
Partition V = [V_r V_n], where V_r consists of the first M columns of V and V_n consists of the remaining N - M columns. The columns of V_n and V_r form orthonormal bases for the null space of Φ and its complement space, respectively.
Vector x is expressed as
  x = V_r φ + V_n ξ
where φ and ξ are vectors of lengths M and N - M, respectively.
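The SVD-based split can be checked numerically in a few lines (a sketch on an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 6, 10
Phi = rng.standard_normal((M, N))

# Full SVD: Phi = U @ [diag(sigma) 0] @ V.T
U, sigma, Vt = np.linalg.svd(Phi, full_matrices=True)
V = Vt.T
V_r, V_n = V[:, :M], V[:, M:]   # complement-space / null-space bases

print(np.allclose(Phi @ V_n, 0))             # V_n spans the null space of Phi

# Any x splits as x = V_r phi + V_n xi with phi = V_r.T x, xi = V_n.T x
x = rng.standard_normal(N)
phi, xi = V_r.T @ x, V_n.T @ x
print(np.allclose(V_r @ phi + V_n @ xi, x))  # exact reconstruction
```

The split is what lets the algorithm treat the data-fitting term (which depends only on φ) and the sparsity-promoting term (which depends on both φ and ξ) separately.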
Proposed lp-Regularized Least-Squares Algorithm, cont'd
Using the SVD, we recast the optimization problem for the lp-RLS algorithm as
  minimize_{φ,ξ} F_{p,ε}(φ, ξ)
where
  F_{p,ε}(φ, ξ) = (1/2) Σ_{i=1}^{M} (σ_i φ_i - ỹ_i)^2 + λ ||x||_{p,ε}^p
ỹ_i is the ith component of vector ỹ = U^T y, and x = V_r φ + V_n ξ.
Proposed lp-Regularized Least-Squares Algorithm, cont'd
In the kth iteration of the lp-RLS algorithm, signal x^(k) is updated as
  x^(k+1) = x^(k) + α d_v^(k)
where
  d_v^(k) = V_r d_r^(k) + V_n d_n^(k)
and α > 0. Vectors d_r^(k) and d_n^(k) are of lengths M and N - M, respectively, and each of their components is computed efficiently using the first step of a fixed-point iteration.
Proposed lp-Regularized Least-Squares Algorithm, cont'd
According to Banach's fixed-point theorem, the step size α can be computed using a finite number of iterations of
  α = - [d_r^(k)T Σ (Σφ - ỹ) + λp x^(k)T ζ_v] / [||Σ d_r^(k)||_2^2 + λp d_v^(k)T ζ_v]
where ζ_v = [ζ_v1 ζ_v2 ... ζ_vN]^T with
  ζ_vj = [(x_j^(k) + α d_vj^(k))^2 + ε^2]^{p/2 - 1} d_vj^(k)
for j = 1, 2, ..., N, and d_vj^(k) is the jth component of the descent direction d_v^(k).
Proposed lp-Regularized Least-Squares Algorithm, cont'd
Optimization overview:
- First, set ε to a large value ε_1, typically 0.5 ≤ ε_1 ≤ 1, and initialize φ and ξ to zero vectors.
- Solve the optimization problem by (i) computing the descent directions d_v and d_r, (ii) computing the step size α, and (iii) updating the solution x and the coefficient vector φ.
- Reduce ε to a smaller value and solve the optimization problem again.
- Repeat this procedure until a sufficiently small target value ε_J is reached.
- Output x as the solution.
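The continuation strategy above can be sketched end to end. This is only a stand-in implementation: it minimizes the unconstrained objective with plain gradient descent and a backtracking line search, not the paper's SVD-based fixed-point step-size computation, and all names, sizes, and schedule values are our own:

```python
import numpy as np

def lp_rls_sketch(Phi, y, lam=1e-3, p=0.5,
                  eps_schedule=(1.0, 0.1, 0.01, 0.001), inner_iters=200):
    """Sketch of the lp-RLS continuation strategy: minimize
        F(x) = 0.5*||Phi x - y||^2 + lam * sum((x_i^2 + eps^2)^(p/2))
    while shrinking eps. Uses gradient descent with backtracking in place
    of the paper's fixed-point step-size computation."""
    N = Phi.shape[1]
    x = np.zeros(N)

    def F(x, eps):
        r = Phi @ x - y
        return 0.5 * r @ r + lam * np.sum((x**2 + eps**2) ** (p / 2))

    def grad(x, eps):
        return Phi.T @ (Phi @ x - y) + lam * p * x * (x**2 + eps**2) ** (p / 2 - 1)

    for eps in eps_schedule:              # continuation: solve, then shrink eps
        for _ in range(inner_iters):
            g = grad(x, eps)
            step, f0 = 1.0, F(x, eps)
            # Backtracking (Armijo) line search
            while step > 1e-12 and F(x - step * g, eps) > f0 - 1e-4 * step * (g @ g):
                step *= 0.5
            if step <= 1e-12:
                break                     # no acceptable step; stage converged
            x = x - step * g
    return x

# Toy usage with noisy measurements (sizes and seed are arbitrary)
rng = np.random.default_rng(5)
M, N = 15, 40
x_true = np.zeros(N)
x_true[[3, 27]] = [1.5, -1.0]
Phi = rng.standard_normal((M, N)) / np.sqrt(N)
y = Phi @ x_true + 1e-3 * rng.standard_normal(M)
x_hat = lp_rls_sketch(Phi, y)
print(np.linalg.norm(Phi @ x_hat - y) < np.linalg.norm(y))  # residual reduced
```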
Performance Evaluation
[Figure: number of recovered instances versus sparsity K for various algorithms with N = 1024 and M = 200 over 100 runs.]
Legend: lp-RLS: proposed; URLP: unconstrained regularized lp (Pant, Lu, and Antoniou, 2011); SL0: smoothed l0-norm minimization (Mohimani et al., 2009); IR: iterative reweighting (Chartrand and Yin, 2008).
Performance Evaluation, cont'd
[Figure: average CPU time versus signal length N for various algorithms with M = N/2 and K = M/2.5.]
Legend: lp-RLS: proposed; URLP: unconstrained regularized lp (Pant, Lu, and Antoniou, 2011); SL0: smoothed l0-norm minimization (Mohimani et al., 2009); IR: iterative reweighting (Chartrand and Yin, 2008).
Conclusions
- Compressive sensing is an effective technique for sampling sparse signals.
- l1 minimization works in general for the reconstruction of sparse signals.
- lp minimization with p < 1 can improve the recovery performance for signals that are less sparse.
- The proposed lp-regularized least-squares algorithm offers improved signal reconstruction from noisy measurements.
Thank you for your attention.