Improved FOCUSS Method With Conjugate Gradient Iterations


IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 1, JANUARY 2009

Improved FOCUSS Method With Conjugate Gradient Iterations

Zhaoshui He, Andrzej Cichocki, Rafal Zdunek, and Shengli Xie

Abstract--The FOCal Underdetermined System Solver (FOCUSS) is a powerful tool for sparse representation and underdetermined inverse problems. In this correspondence, we strengthen the FOCUSS method with the following main contributions: 1) we give a more rigorous derivation of the FOCUSS for the sparsity parameter $0 < p < 1$ by a nonlinear transform, and 2) we develop the CG-FOCUSS by incorporating the conjugate gradient (CG) method into the FOCUSS, which significantly reduces the computational cost with respect to the standard FOCUSS and extends its applicability to large-scale problems. We justify the CG-FOCUSS based on probability theory. Furthermore, the high performance of the CG-FOCUSS is demonstrated with experiments.

Index Terms--Basis pursuit (BP), conjugate gradient (CG), FOCUSS, matching pursuit (MP), nonlinear transform, orthogonal matching pursuit (OMP), preconditioned conjugate gradient (PCG), preconditioner.

I. INTRODUCTION

The problem of finding sparse solutions to underdetermined linear problems from limited data arises in many applications, including compressed sensing/compressive sampling [1], the biomagnetic imaging problem [2], spectral estimation, direction-of-arrival (DOA) estimation, signal reconstruction [3]-[6], borehole tomography [7], etc. This problem can be modeled as follows:

$$x = As \qquad (1)$$

where $x = (x_1, \ldots, x_m)^T \in \mathbb{R}^m$ is an observable vector, $A = [a_1, \ldots, a_n] \in \mathbb{R}^{m \times n}$ is a known basis matrix, $s = (s_1, \ldots, s_n)^T \in \mathbb{R}^n$ is an unknown vector which represents $n$ sparse sources or hidden sparse components, and $m$ is the number of observations. Here the linear model (1) is underdetermined, i.e., $A$ is overcomplete ($m < n$). The main objective is to estimate the sources $s$ such that $s$ is as sparse as possible or has a specified sparsity profile [2]-[4], [6], [8]-[16].

Manuscript received October 22, 2007; revised September 09, 2008. First published October 31, 2008; current version published January 06, 2009. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Ilya Pollak. The work was supported in part by the National Natural Science Foundation of China and by the Natural Science Fund of Guangdong Province, China.

Z. He is with the Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan, and with the School of Electronics and Information Engineering, South China University of Technology, Guangzhou, China (e-mail: he_shui@tom.com).

A. Cichocki is with the Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan, the Systems Research Institute, Polish Academy of Sciences (PAN), Warsaw, Poland, and Warsaw University of Technology, Warsaw, Poland (e-mail: cia@brain.riken.jp).

R. Zdunek is with the Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan, and also with the Institute of Telecommunications, Teleinformatics, and Acoustics, Wroclaw University of Technology, Wroclaw, Poland (e-mail: rafal.zdunek@pwr.wroc.pl).

S. Xie is with the School of Electronics and Information Engineering, South China University of Technology, Guangzhou, China (e-mail: adshlxie@scut.edu.cn).

Recently, much attention has been paid to this problem due to its importance.
Methods based on minimizing the $\ell_1$-norm were introduced first. Chen, Donoho, and Saunders discussed the sparse representation of signals using large-scale linear programming (LP) [11]. Donoho and Elad discussed maximal sparsity representation via $\ell_1$-norm minimization [13]. Li, Cichocki, and Amari analyzed the equivalence between $\ell_1$ minimization and $\ell_0$ minimization within a probabilistic framework [16]. They found that if the obtained $\ell_1$-norm solution is sufficiently sparse, it equals the $\ell_0$-norm solution with high probability. Takigawa, Kudo, and Toyama analyzed the performance of minimum $\ell_1$-norm solutions for underdetermined blind source separation problems [17]. Their results showed that the minimum $\ell_1$-norm solutions were easier to obtain when the number of nonzero sources was less than the number of sensors at each time instant, or when the source signals had a highly peaked distribution close to the Laplacian distribution. Li et al. also investigated the recoverability of source signals and gave a necessary and sufficient condition for recoverability of the sources [18].

At the same time, many algorithms have been employed for this problem, for example, LP [11], [13], [16]-[18], greedy algorithms (e.g., shortest path decomposition [17], [19], BP [13], MP and OMP [20], [21], etc.), least squares methods with $\ell_1$ regularization (e.g., PDCO-LSQR [22], Homotopy [23], TNIPM [1], etc.), and the FOCUSS algorithms [4], [14], [24]-[26]. Among them, the LP is very time-consuming; the performance of the MP and OMP is usually a little worse than that of the others; the BP is NP-hard and needs a lot of memory. So the LP and BP are not suitable for large-scale problems. The least squares methods with $\ell_1$ regularization can be used to solve large-scale problems; however, the regularization parameters for imposing the sparseness constraint must be set in advance, before these methods are run, and in general it is not easy to set the optimal sparseness regularization parameters. The FOCUSS algorithms have no regularization parameters to set. Also, they are advantageous in terms of computational complexity, and they are suitable even for large-scale problems.

In this correspondence, we strengthen FOCUSS in theoretical and algorithmic aspects. First, a rigorous mathematical derivation of the FOCUSS, for the special case $0 < p < 1$, is given. In addition, to further speed up the convergence of FOCUSS and extend its availability for large-scale problems, we develop the CG-FOCUSS.

The outline of this correspondence is as follows. The mathematical derivation of the $\ell_p$ FOCUSS for $0 < p < 1$ is rigorously discussed in Section II. In Section III, we introduce our motivation for the usage of the PCG method. The CG-FOCUSS is addressed in Sections IV and V. The experiments and conclusions are given in Sections VI and VIII, respectively.

II. THE DERIVATION OF FOCUSS WHEN $0 < p < 1$

By imposing sparsity constraints via the $\ell_p$ diversity measure ($0 < p \le 1$), a sparse representation for model (1) can be converted to the following optimization problem [2]-[4], [6], [9], [12], [15]:

$$\min_s J(s) = \sum_{i=1}^{n} |s_i|^p \quad \text{subject to: } x = As. \qquad (2)$$

To solve problem (2), Rao et al. employed the Lagrange multiplier method [4]. The Lagrange function is

$$L(s, \lambda) = J(s) + \lambda^T (As - x) \qquad (3)$$

where $\lambda$ is an $m \times 1$ vector of Lagrange multipliers. A necessary condition for the solution $s^*$ to exist is that $(s^*, \lambda^*)$ be a stationary point of the Lagrange function, i.e.,

$$\frac{\partial L(s,\lambda)}{\partial s} = \nabla J(s) + A^T \lambda = 0, \qquad \frac{\partial L(s,\lambda)}{\partial \lambda} = As - x = 0 \qquad (4)$$

where $\nabla J(s) = |p|\,\Pi(s)\,s$ and $\Pi(s) = \mathrm{diag}(|s_1|^{p-2}, \ldots, |s_n|^{p-2})$ [4], [15]. From (4), we can derive the FOCUSS equations by some mathematical manipulations as follows (see [4]):

$$s = \Pi^{-1}(s)\, A^T\, [A \Pi^{-1}(s) A^T]^{-1}\, x. \qquad (5)$$

Then we have the following iterative formula of FOCUSS:

$$s^{(k+1)} = \Pi^{-1}(s^{(k)})\, A^T\, [A \Pi^{-1}(s^{(k)}) A^T]^{-1}\, x. \qquad (6)$$

It is worth noting that, theoretically, (4) does not hold when $0 < p < 1$ and some components of $s$ are zero. To be precise, the matrix $\Pi(s)$ does not exist in this case, though the matrix $\Pi^{-1}(s)$ does, because $0^{p-2} \to \infty$. However, the iterative formula (6) of FOCUSS is still valid. For this problem, we prove (5) in another way. To do this, we can choose an appropriate nonlinear function and make a nonlinear transform of $s_i$, $i = 1, \ldots, n$. For simplicity, here the nonlinear function is chosen as $s(z) = |z|^{2/p}\,\mathrm{sgn}(z)$, which is a continuous, differentiable, and monotonically increasing function because $2/p > 2$. Obviously, $\mathrm{sgn}(s) = \mathrm{sgn}(z)$, so $s_i = |z_i|^{2/p}\,\mathrm{sgn}(z_i)$, $i = 1, \ldots, n$. Then the optimization problem (2) can be formulated as follows:

$$\min_z J(z) = \sum_{i=1}^{n} |z_i|^2 \quad \text{subject to: } x = A\tilde{z} \qquad (7)$$

where

$$\tilde{z} = [\,|z_1|^{2/p}\,\mathrm{sgn}(z_1), \ldots, |z_n|^{2/p}\,\mathrm{sgn}(z_n)\,]^T = s. \qquad (8)$$

Similarly, we construct the following Lagrange function $L(z, \lambda)$ for problem (7):

$$L(z, \lambda) = J(z) + \lambda^T (A\tilde{z} - x). \qquad (9)$$

From (9), we obtain the following equations:

$$\frac{\partial L}{\partial z} = 2z + \frac{2}{p}\, D(z)\, A^T \lambda = 0, \qquad \frac{\partial L}{\partial \lambda} = x - A\tilde{z} = 0 \qquad (10)$$

where $D(z) = \mathrm{diag}(|z_1|^{2/p-1}, \ldots, |z_n|^{2/p-1})$. Pre-multiplying both sides of the first equation of (10) by the diagonal matrix $D(z)$, we can derive

$$2\tilde{z} = 2 D(z) z = -\frac{2}{p}\, D^2(z)\, A^T \lambda = -\frac{2}{p}\, \Pi^{-1}(\tilde{z})\, A^T \lambda \qquad (11)$$

where $\Pi^{-1}(\tilde{z}) = \mathrm{diag}(|z_1|^{(2/p)(2-p)}, \ldots, |z_n|^{(2/p)(2-p)})$. Thus, using $x = A\tilde{z}$, we can derive the following expression by pre-multiplying both sides of (11) by $A$:

$$\lambda = -p\, [A \Pi^{-1}(\tilde{z}) A^T]^{-1}\, x. \qquad (12)$$

Combining (11) and (12), we get

$$\tilde{z} = \Pi^{-1}(\tilde{z})\, A^T\, [A \Pi^{-1}(\tilde{z}) A^T]^{-1}\, x. \qquad (13)$$

Substituting $\tilde{z}$ with $s$ by (8), we can derive the iterative FOCUSS formula (6) for the case $0 < p < 1$:

$$s = \Pi^{-1}(s)\, A^T\, [A \Pi^{-1}(s) A^T]^{-1}\, x.$$

III. MOTIVATION FOR USAGE OF THE CONJUGATE GRADIENT METHOD

For the FOCUSS in (5) or (6), it is time-consuming to compute the inverse of the symmetric positive definite matrix $A \Pi^{-1}(s) A^T$, because we must compute it separately for each time instant in each iteration, and computation of the matrix inverse is usually quite expensive. For this reason, we discuss how to implement this step in a more efficient way. The linear conjugate gradient (CG) method, proposed by Hestenes and Stiefel as an iterative method [27], [28], is one of the most useful and computationally inexpensive techniques for solving large linear systems; it converges to a minimal-norm least squares solution in a finite number of iterations. Let us consider the following linear system:

$$H\lambda = b \qquad (14)$$

where $H$ is a known $m \times m$ symmetric positive-definite matrix, $\lambda$ is an unknown vector, and $b$ is a known vector. Instead of directly solving (14) by computing an inverse matrix, i.e., $\lambda = H^{-1} b$, we can apply the CG method to (14), which avoids computing the inverse $H^{-1}$. Here, we do not discuss detailed implementations of the CG method. For more details, the readers can refer to [1], [27], and [28].
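To make the role of (14) concrete, the following is a minimal NumPy sketch of the standard Hestenes-Stiefel CG recursion (our illustration, not code from the correspondence; the function name, stopping rule, and defaults are assumptions):

```python
import numpy as np

def cg(H, b, x0=None, tol=1e-3, max_iter=None):
    """Minimal conjugate gradient solver for H x = b, with H symmetric positive definite."""
    m = b.shape[0]
    x = np.zeros(m) if x0 is None else x0.astype(float).copy()
    r = b - H @ x                  # residual
    d = r.copy()                   # first search direction
    rs = r @ r
    if max_iter is None:
        max_iter = m               # in exact arithmetic CG terminates in at most m steps
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            break
        Hd = H @ d
        alpha = rs / (d @ Hd)      # step length along d
        x = x + alpha * d
        r = r - alpha * Hd
        rs_new = r @ r
        d = r + (rs_new / rs) * d  # new H-conjugate direction
        rs = rs_new
    return x
```

This helper is reused in the sketches below wherever the cg(.) operator of the text appears.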
For simplicity, we denote the solution to (14) obtained with the CG method by $\lambda = \mathrm{cg}(H, b, \lambda_0, \varepsilon)$, where $\varepsilon$ is a tolerance and $\lambda_0$ is the initialization. It is worth noting that the performance of the linear CG method depends on the distribution of the eigenvalues of the coefficient matrix $H$ [28]. In detail, if $H$ has $r$ distinct real-valued eigenvalues ($r \le m$), then the CG iterations will terminate at the solution in at most $r$ iterations. In other words, if the matrix $H$ has very few distinct eigenvalues, the CG method will be extremely fast. For example, if $r = 1$, the CG can find the right solution in only one iteration, even for large-scale problems. To take full advantage of this property to speed up the convergence, the preconditioned conjugate gradient (PCG) method was developed later [28]. Similarly, we denote the solution to (14) obtained with the PCG by $\lambda = \mathrm{pcg}(H, b, P, \lambda_0, \varepsilon)$, where $P$ is a preconditioner.

Equivalently, we can accelerate the CG method by transforming the linear system (14) to improve the eigenvalue distribution of $H$ [28]. The key point of this process, which is known as preconditioning, is a linear transform from (14) to (15) via a nonsingular matrix $C$, that is,

$$(C^{-T} H C^{-1})\, \tilde{\lambda} = C^{-T} b \qquad (15)$$

where $\tilde{\lambda} = C\lambda$ and the preconditioner is $P = C^T C$. Then we can find the solution $\lambda = C^{-1}\, \mathrm{cg}(C^{-T} H C^{-1}, C^{-T} b, \tilde{\lambda}_0, \varepsilon)$ to (14) by the standard CG method, where the convergence rate depends on the eigenvalues of the matrix $C^{-T} H C^{-1}$. If a good preconditioner $P$ or transform matrix $C$ can be found, we can make this distribution more favorable and improve the convergence of the CG method significantly. However, no single preconditioning strategy is the best for all conceivable types of matrices: the tradeoff between various objectives, such as the effectiveness of $P$ and the inexpensive computation and storage of $P$, varies from one problem to another [28].
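As a small illustration of the preconditioning step in (15), here is a sketch of ours that reuses the cg helper above; the Jacobi-style choice of P at the end is just one convenient example and is not the preconditioner proposed in this correspondence:

```python
import numpy as np

def preconditioned_solve(H, b, C):
    """Solve H lam = b by running plain CG on the transformed system
    (C^{-T} H C^{-1}) lam_tilde = C^{-T} b and mapping back lam = C^{-1} lam_tilde.
    C must be nonsingular; the implied preconditioner is P = C^T C."""
    C_inv = np.linalg.inv(C)
    H_t = C_inv.T @ H @ C_inv     # transformed matrix with (ideally) clustered eigenvalues
    b_t = C_inv.T @ b
    lam_tilde = cg(H_t, b_t)      # plain CG on the preconditioned system
    return C_inv @ lam_tilde

# Example choice: a Jacobi (diagonal) preconditioner P = diag(H), i.e., C = diag(H)**0.5
# C = np.diag(np.sqrt(np.diag(H)))
# lam = preconditioned_solve(H, b, C)
```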

IV. PCG-FOCUSS AND CG-FOCUSS

Set $H \triangleq A \Pi^{-1}(s) A^T$; then $[A \Pi^{-1}(s) A^T]^{-1} x = \mathrm{cg}(H, x, \lambda_0, \varepsilon)$. Thus, the iterative formula (5) of FOCUSS can be written as

$$s = \Pi^{-1}(s)\, A^T\, \mathrm{cg}(A \Pi^{-1}(s) A^T, x, \lambda_0, \varepsilon). \qquad (16)$$

As mentioned in Section III, preconditioning plays a crucial role in designing practical CG strategies. Next, we mainly discuss how to construct the preconditioner or transform matrix $C$ for the CG-FOCUSS given by (16). Let us perform the singular value decomposition (SVD) of $A$ as $A = U\Sigma V^T$, where

$$\Sigma = [\Lambda, 0] \in \mathbb{R}^{m \times n}, \qquad \Lambda = \mathrm{diag}(\sigma_1, \ldots, \sigma_m). \qquad (17)$$

To develop the PCG-FOCUSS or the CG-FOCUSS, we need to calculate only the matrices $U$ and $\Lambda$; the matrix $V$ does not play an important role. Usually, the eigenvalue decomposition (EVD) of the matrix $AA^T = U\Lambda^2 U^T$ is more efficient than the SVD for achieving this goal because the basis matrix $A$ is overcomplete. Fortunately, there are many efficient methods and tools for EVD and SVD, even for large-scale eigenvalue problems [29], [30].

A. PCG-FOCUSS

Here, we choose the preconditioner $P$ as $P = (\Lambda U^T)^T(\Lambda U^T) = U\Lambda^2 U^T = AA^T$. Then the PCG-FOCUSS can be outlined as follows:

Algorithm 1: PCG-FOCUSS
1. Set the parameter $p$ and the tolerance $\varepsilon$.
2. Initialize $s$ as $s^0$, initialize $\lambda$ as $\lambda^0$, and set $k = 0$.
3. Compute $T_k = \Pi^{-1}(s^k)\, A^T$.
4. Update the sparse components as $\lambda^k = \mathrm{pcg}(A T_k, x, P, \lambda^k, \varepsilon)$ and $s^{k+1} = T_k\, \lambda^k$, where $P = AA^T$.
5. If the iterative procedure has converged, output $s^* = s^k$; otherwise, let $k = k + 1$ and go to step 3.

B. CG-FOCUSS

The PCG-FOCUSS involves the preconditioner in each iteration. We can further develop a more efficient CG-FOCUSS. Pre-multiplying both sides of the model (1) by the transform matrix $C = \Lambda^{-1} U^{-1}$, we have

$$\tilde{x} = \tilde{A} s \qquad (18)$$

where $\tilde{x} = \Lambda^{-1} U^{-1} x$ and $\tilde{A} = \Lambda^{-1} U^{-1} A$. Note that the purpose of the SVD is twofold: first, it helps to find the transform matrix $(U\Lambda)^{-1}$ for (18); second, it helps to determine the rank of $A$ and, consequently, to truncate small singular values when the basis matrix $A$ is rank-deficient or very ill-conditioned. Instead of directly considering the system (1), we can equivalently solve it by operating on (18). The CG-FOCUSS applied to (18) is as follows.

Algorithm 2: CG-FOCUSS
1. Perform the EVD of the matrix $AA^T$ or the SVD of $A$ to get $U$ and $\Lambda$. Compute $\tilde{A} = \Lambda^{-1} U^{-1} A$ and $\tilde{x} = \Lambda^{-1} U^{-1} x$. Set the parameter $p$ and $\varepsilon$.
2. Initialize $s$ as $s^0$, initialize $\tilde{\lambda}$ as $\tilde{\lambda}^0$, and set $k = 0$.
3. Compute $\tilde{T}_k = \Pi^{-1}(s^k)\, \tilde{A}^T$.
4. Update $s$ as $\tilde{\lambda}^k = \mathrm{cg}(\tilde{A}\tilde{T}_k, \tilde{x}, \tilde{\lambda}^k, \varepsilon)$ and $s^{k+1} = \tilde{T}_k\, \tilde{\lambda}^k$.
5. Let $k = k + 1$ and go to step 3 until convergence is reached.

The advantages of the CG-FOCUSS are as follows (see the sketch after this list). 1) It is significantly faster than the standard FOCUSS. For the standard FOCUSS, a conventional method (e.g., Gaussian elimination) is used to calculate the matrix inverse $[A \Pi^{-1} A^T]^{-1}$, whose computational complexity is $O(m^3)$; here, the computational cost of the CG method for (14) is only $O(m^2)$. 2) It is easier to implement, even in hardware, because it avoids the calculation of the matrix inverse, and it is more suitable for larger-scale problems because the conjugate directions in the CG can be generated in a very economical way [28].

Remark 1: In most experiments, the tolerance can be set as $\varepsilon = 0.001$.
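The following NumPy sketch mirrors the steps of Algorithm 2 (our illustration under stated simplifications: it reuses the cg helper given after (14), replaces the convergence test of step 5 with a fixed number of iterations, and uses a dense all-ones initialization as in Section VI):

```python
import numpy as np

def cg_focuss(A, x, p=1.0, n_iter=30, eps=1e-3):
    """Sketch of CG-FOCUSS (Algorithm 2): whiten the system with (U Lambda)^{-1},
    then run FOCUSS iterations whose inner linear solves use plain CG."""
    m, n = A.shape
    # Step 1: EVD of A A^T = U Lambda^2 U^T; build the transform (U Lambda)^{-1}
    eigval, U = np.linalg.eigh(A @ A.T)
    Lam = np.sqrt(np.maximum(eigval, 0.0))
    T = np.diag(1.0 / Lam) @ U.T         # (U Lambda)^{-1}, since U^{-1} = U^T
    A_t, x_t = T @ A, T @ x              # transformed system x~ = A~ s
    # Step 2: dense initialization and zero Lagrange multipliers
    s = np.ones(n)
    lam = np.zeros(m)
    for _ in range(n_iter):
        # Step 3: T_k = Pi^{-1}(s^k) A~^T, with Pi^{-1}(s) = diag(|s_i|^{2-p})
        Tk = (np.abs(s) ** (2.0 - p))[:, None] * A_t.T
        # Step 4: lam^k = cg(A~ T_k, x~, lam^k, eps) and s^{k+1} = T_k lam^k
        lam = cg(A_t @ Tk, x_t, x0=lam, tol=eps)
        s = Tk @ lam
    return s
```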
Since the M-FOCUSS has an iterative formula similar to that of the standard FOCUSS, we can also develop the CG-M-FOCUSS by applying the CG method to the M-FOCUSS analogously [15]. In [1], a very efficient method called the TNIPM (truncated Newton interior-point method) was proposed for the special case $p = 1$ of problem (2). Moreover, it works well even for large-scale problems [1]. For the TNIPM, one of the most important tricks is that the PCG method is used to compute the search direction. Benefiting from avoiding the direct calculation of the matrix inverse, the CG-FOCUSS has a similar advantage to the TNIPM, but it is more flexible because it works for the general case $p \neq 1$, while the TNIPM is only suitable for the special case $p = 1$.

V. MATHEMATICAL INTERPRETATION OF CG-FOCUSS BASED ON PROBABILITY THEORY

Let $\tilde{H} = \tilde{A}\, \Pi^{-1}(s)\, \tilde{A}^T$, and suppose that the $m$ eigenvalues of $E(\tilde{H})$ are $\tilde{\lambda}_1(E\tilde{H}) \ge \cdots \ge \tilde{\lambda}_m(E\tilde{H})$, where $E(\cdot)$ denotes the expectation operator. Then we have the following theorem.

Theorem 1: If $s_1, \ldots, s_n$ follow an identical distribution, then $\tilde{\lambda}_1(E\tilde{H}) = \cdots = \tilde{\lambda}_m(E\tilde{H})$.

Proof: Since $s_1, \ldots, s_n$ follow an identical distribution, we have $E|s_1|^{2-p} = \cdots = E|s_n|^{2-p}$. Without loss of generality, let us suppose that

$$E|s_1|^{2-p} = \cdots = E|s_n|^{2-p} = \mu_{2-p} > 0. \qquad (19)$$

Thus

$$E\tilde{H} = \tilde{A}\, E[\Pi^{-1}(s)]\, \tilde{A}^T = \tilde{A}\, \mathrm{diag}(E|s_1|^{2-p}, \ldots, E|s_n|^{2-p})\, \tilde{A}^T = \mu_{2-p}\, \tilde{A}\tilde{A}^T$$

i.e.,

$$E\tilde{H} = \mu_{2-p}\, \tilde{A}\tilde{A}^T. \qquad (20)$$

From (17), we get

$$\tilde{A} = \Lambda^{-1} U^{-1} A = \Lambda^{-1} U^{-1}\, U\Sigma V^T = \Lambda^{-1}\Sigma V^T = \Lambda^{-1}[\Lambda, 0]\, V^T = [I, 0]\, V^T \qquad (21)$$

where $I$ is an $m \times m$ identity matrix. Combining (20) with (21), we immediately obtain

$$E\tilde{H} = \mu_{2-p}\, [I, 0]\, V^T V\, [I, 0]^T = \mu_{2-p}\, I_{m \times m}. \qquad (22)$$

From (22), we know that the eigenvalues $\tilde{\lambda}_1(E\tilde{H}), \ldots, \tilde{\lambda}_m(E\tilde{H})$ of the matrix $E\tilde{H}$ satisfy $\tilde{\lambda}_1(E\tilde{H}) = \cdots = \tilde{\lambda}_m(E\tilde{H}) = \mu_{2-p}$.

Remark 2: From the discussion in Section III, we know that the performance of the CG-FOCUSS depends on the distribution of the eigenvalues of the matrix $\tilde{H}$: if the matrix $\tilde{H}$ has few distinct eigenvalues, or all of its $m$ eigenvalues are almost mutually equal, the CG-FOCUSS will be very fast. In addition, Theorem 1 shows that the matrix $E\tilde{H}$ will have only one distinct eigenvalue, i.e., $\tilde{\lambda}_1(E\tilde{H}) = \cdots = \tilde{\lambda}_m(E\tilde{H})$, if the original sources $s_1, \ldots, s_n$ follow an identical distribution, which means that the CG-FOCUSS can statistically improve the eigenvalue distribution of the matrix $\tilde{H}$ to a high degree by incorporating the transform matrix $(U\Lambda)^{-1}$ obtained via the EVD.

Not only that. Consider $T$ samples taken from the model (1):

$$x(t) = A s(t), \qquad t = 1, \ldots, T. \qquad (23)$$

It is worth mentioning two issues: first, in (23), we compute the preconditioner $P = \Lambda^2$ or the transform matrix $(U\Lambda)^{-1}$ only once for all of the $T$ sampling points; second, Theorem 1 means that, on the whole, the preconditioner $P = \Lambda^2$ for the PCG-FOCUSS or the transform matrix $(U\Lambda)^{-1}$ for the CG-FOCUSS is statistically optimal for all $T$ sampling points $t = 1, \ldots, T$, though it might not be the best for some particular sampling points $t$.

VI. SIMULATIONS

In this section, we give some numerical experiments to illustrate the performance of the CG-FOCUSS. As mentioned in Section IV, the TNIPM is a truncated Newton interior-point method which is very efficient for large-scale compressed sensing problems (the Matlab codes of the TNIPM were downloaded from the website stanford.edu/~boyd/l1_ls/). Kim et al. compared the TNIPM with many efficient existing methods in detail [1]. Their results showed that the TNIPM significantly outperformed the other methods. So here we only compare our method with the standard FOCUSS and the TNIPM. All the methods are implemented in Matlab 7.2 and were run on a Dell PC with an Intel Xeon 3-GHz CPU under Windows XP Professional.

To check how well the sparse sources are reconstructed, we compute the signal-to-interference ratio (SIR) between the true coefficient matrix $S$ and its estimate $\hat{S}$, which is defined as follows:

$$\mathrm{SIR}(S, \hat{S}) = -20 \log_{10} \frac{\|S - \hat{S}\|_F}{\|S\|_F} \ \mathrm{[dB]} \qquad (24)$$

where $\|\cdot\|_F$ denotes the Frobenius norm. In our experience, the FOCUSS can usually start from a very dense initialization (e.g., all entries of the initialization are nonzero). For convenience, in the following examples, the initializations of all algorithms are chosen as the vector 1, in which all entries are 1, and all algorithms start from the same initializations for a fair comparison.
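For reference, the SIR criterion (24) is a one-line computation; the sketch below (ours, not the authors' code) assumes S and S_hat are NumPy arrays of the same shape:

```python
import numpy as np

def sir_db(S, S_hat):
    """Signal-to-interference ratio in dB between the true sources S and the estimate S_hat, cf. (24)."""
    return -20.0 * np.log10(np.linalg.norm(S - S_hat) / np.linalg.norm(S))
```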
The other algorithm parameters are taken as follows: the sparsity parameter is $p = 1$; the CG-FOCUSS and the standard FOCUSS run 30 iterations, and they also converge within 30 iterations; for the CG-FOCUSS, the tolerance is $\varepsilon = 0.001$ and the Lagrange multiplier vector is set as the zero vector $\tilde{\lambda}_0 = 0_{m \times 1}$; for the TNIPM, the same parameters are taken as in [1]: the regularization parameter is $\lambda = 0.01$ and the relative tolerance is set as given there.

1) Example 1: Consider a sparse signal recovery example similar to that in [1], where the sparse sources $s$ consist of 160 spikes with amplitude $\pm 1$. The basis matrix $A$ is randomly generated and then its rows are orthogonalized. The detailed results of the three methods are shown in Table I, where we can see that all the methods faithfully reconstructed the signals, but the CG-FOCUSS is faster than the standard FOCUSS and the TNIPM.

[Table I: The performance of the three methods when the rows of A are orthogonalized.]

2) Example 2: The sources $s$ are the same as in Example 1. Here the basis matrix $A$ is slightly different from that in Example 1: it is also randomly generated but not orthogonalized. In this case, the results are shown in Table II. Comparing Tables I and II, we can see that the TNIPM is more time-consuming when $A$ is not orthogonalized.

[Table II: The performance of the three methods when the rows of A are not orthogonalized.]

3) Example 3: In this example, we investigate how the performance of the three methods varies with the number of nonzero sources. The sources $s$ and the basis matrix $A$ with orthogonalized rows are generated in the same way as in Example 1. Here, all the methods faithfully reconstructed the sources, so we only compare their runtimes. Fig. 1 shows that the runtime of the TNIPM is very sensitive to the number of nonzero sources. In detail, the computational complexity of the TNIPM sharply increases as the number of nonzero sources increases, while the computation time of the CG-FOCUSS and the standard FOCUSS is independent of the sparsity of the sources.

[Fig. 1: Variation of runtimes with the number of nonzero sources.]
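For concreteness, test data of the kind used in Examples 1-3 can be generated as follows (our sketch; the dimensions m and n are placeholders, since the exact sizes from [1] are not reproduced in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1024, 4096, 160           # placeholder sizes; k spikes as in Example 1

# Sparse source vector: k spikes of amplitude +/-1 at random positions
s_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s_true[support] = rng.choice([-1.0, 1.0], size=k)

# Random basis matrix with orthogonalized rows (Example 1 setting)
G = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(G.T)            # orthonormal columns of G^T
A = Q[:, :m].T                      # rows of A are now orthonormal
x = A @ s_true

# s_hat = cg_focuss(A, x, p=1.0)
# print(sir_db(s_true, s_hat))
```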

4) Example 4: Consider the model (23) with 100 sampling points, where the basis matrix $A$ is randomly generated and its rows are then normalized, and the sources $s(t)$ ($t = 1, \ldots, 100$) are also randomly generated, with 280 sources being nonzero at each sampling point. The results are given in Table III.

[Table III: The three methods for the example of multiple samples.]

5) Example 5: Here, we mainly test the sensitivity of the CG-FOCUSS to the value of $\varepsilon$ in (16) and compare the CG-FOCUSS with the simplest PCG-FOCUSS, in which the preconditioner is an identity matrix. The source matrix $S$ and the basis matrix $A$ are the same as in Example 4. From Table IV, we can see that neither the simplest PCG-FOCUSS nor the CG-FOCUSS found good results (SIR < 18 dB) when $\varepsilon = 0.1$, while both methods succeeded when $\varepsilon \le 0.001$. The SIRs changed very little when $\varepsilon \le 0.001$. Also, by extensive tests, we found that $\varepsilon = 0.001$ works well, so we suggest taking $\varepsilon = 0.001$ for the CG-FOCUSS. Table V shows that the CG-FOCUSS is uniformly faster than the simplest PCG-FOCUSS when $\varepsilon \le 0.01$. Furthermore, the smaller $\varepsilon$ is, the larger the percentage of runtime saved by the CG-FOCUSS.

[Table IV: Variation of SIR with the parameter ε (dB).]
[Table V: Variation of runtime with the parameter ε (seconds).]

6) Example 6: Finally, we consider an MRI image reconstruction problem by sparse representation. We extracted 472 of the 512 possible parallel lines in the spatial frequency domain of an image $I$; the other 40 of the 512 lines were removed (see Fig. 2).

[Fig. 2: MRI image reconstruction results. Upper left: removed DFT coefficients (in white). Upper right: original MRI image. Lower left: linear reconstruction; some artifacts are indicated by an arrow. Lower right: sparse reconstruction by the CG-FOCUSS.]

Thus, a DFT coefficient matrix $I_f$, the kept DFT coefficient matrix of the original MRI image $I$ after the removal, was obtained, whose compressed sensing matrix $\Phi$ is a $472 \times 512$ matrix obtained by randomly removing the corresponding 40 rows of the DFT transform matrix. Considering that images usually have a sparse representation in the wavelet domain, we reconstruct the MRI image in the wavelet domain, and the Daubechies-4 transform $W$ is used. Then, we can derive the following complex-valued overcomplete sparse representation problem:

$$I_f = \Phi \cdot I = \Phi \cdot W^{-1} \cdot W \cdot I = A \cdot I_W \qquad (25)$$

where $A = \Phi \cdot W^{-1}$ and $I_W = W \cdot I$. Equation (25) can be further represented as a real-valued problem parallel to model (23) with 512 samples ($t = 1, \ldots, 512$; $m = 472$, $n = 512$):

$$I_f^R + \mathrm{j}\, I_f^I = (A^R + \mathrm{j}\, A^I) \cdot I_W \qquad (26)$$

where $I_f^R$, $I_f^I$ are, respectively, the real and imaginary parts of $I_f$, and $A^R$, $A^I$ are the real and imaginary parts of $A$, respectively. Then, we can reconstruct the original MRI image $I$ by $\hat{I} = W^{-1} \hat{I}_W$, where $\hat{I}_W$ is the solution of (26). The standard FOCUSS, the TNIPM, and the CG-FOCUSS were, respectively, employed to solve (26).
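Since $I_W$ is real, one standard way to obtain a real-valued problem from the complex system (26) is to stack the real and imaginary parts row-wise; the sketch below (ours, and not necessarily the exact reformulation used in this correspondence) illustrates the idea with generic variable names:

```python
import numpy as np

def complex_to_real_system(A, X):
    """Rewrite the complex system X = A @ S (with S real) as an equivalent
    real-valued system by stacking real and imaginary parts of A and X."""
    A_stacked = np.vstack([A.real, A.imag])   # shape (2m, n), real
    X_stacked = np.vstack([X.real, X.imag])   # shape (2m, T), real
    return A_stacked, X_stacked

# Each column of X_stacked can then be passed to a real-valued sparse solver
# such as the cg_focuss sketch above.
```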
Similar to the standard FOCUSS, it takes much time for the CG-FOCUSS to compute the matrix-matrix product $A \Pi^{-1}(s) A^T$. Fortunately, for this kind of compressed sensing example, fast algorithms are usually available [1]. Note that $A = \Phi \cdot W^{-1}$, where $W$ is an orthogonal wavelet matrix ($W^{-1} = W^T$) and $\Phi$ is part of a Fourier transform matrix. So the product $A \Pi^{-1}(s) A^T$ can be computed efficiently by performing a fast inverse wavelet transform and a fast DFT on the matrix $\Pi^{-1}$ [1]. For any vector $v \in \mathbb{R}^n$, the computational complexity of the fast DFT is only $O(n \log n)$. In addition, the EVD step on the matrix $AA^T$ for finding a good preconditioner is omitted in this example because $AA^T = I$ is an identity matrix.

[Table VI: MRI reconstruction results.]

From Table VI, we can see that the two methods achieved almost similar results (i.e., the PSNRs are about 28.9 dB), and the main difference is the computational time. So we show only the MRI image reconstructed by the CG-FOCUSS in Fig. 2.
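To illustrate the fast-transform idea discussed above, the partial-DFT factor $\Phi$ of $A$ can be applied without ever forming it as a matrix. The following sketch (ours, ignoring the wavelet factor W for brevity, with `keep` denoting the assumed indices of the retained DFT rows) applies Phi, its conjugate transpose, and the product Phi diag(d) Phi^H in O(n log n) per vector via the FFT:

```python
import numpy as np

def make_partial_dft_ops(n, keep):
    """Matrix-free application of a row-subsampled unitary DFT Phi (rows in `keep` kept)."""
    def Phi(v):                        # Phi @ v
        return np.fft.fft(v, norm="ortho")[keep]
    def Phi_H(u):                      # Phi^H @ u
        full = np.zeros(n, dtype=complex)
        full[keep] = u
        return np.fft.ifft(full, norm="ortho")
    return Phi, Phi_H

def apply_H(w, Phi, Phi_H, d):
    """Apply H = Phi diag(d) Phi^H to w without forming H, where the vector d
    plays the role of the diagonal of Pi^{-1}(s), i.e., d_i = |s_i|**(2 - p)."""
    return Phi(d * Phi_H(w))
```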

In addition, Table VI shows that the sparse MRI method gained a little more than 3.8 dB over the linear reconstruction method, which sets the unobserved DFT coefficients to zero and then directly performs the inverse DFT. We can also compare their results in Fig. 2, where the linear reconstruction suffers from arc-like streaking artifacts (indicated by the arrow) due to undersampling, whereas the artifacts are much less noticeable in the sparse reconstruction.

VII. DISCUSSIONS

As mentioned in Example 6, for many FOCUSS algorithms, including the CG-FOCUSS, one of the most expensive computations is the matrix-matrix product $A \Pi^{-1}(s) A^T$, whose computational complexity is $O(m^3)$. So for the general overcomplete sparse representation problem (1), the computational complexity of the CG-FOCUSS is also $O(m^3)$. In our simulation experience, a variety of FOCUSS algorithms are computable on a PC when $m < 4000$, which is sufficient for many real applications. The parameter $n$ can be somewhat larger when $m$ is not very large (e.g., $m = 80$ with a much larger $n$). However, for some compressed sensing problems, the EVD step for finding the preconditioner can be omitted, and the computational complexity of $A \Pi^{-1}(s) A^T$ can be reduced by fast orthogonal transforms (e.g., the cost of computing $A \Pi^{-1}(s) A^T$ is reduced to $O(n^2 \log n)$ in Example 6). For a large basis matrix $A$ (e.g., $m \ge 3000$), it is time-consuming to perform the SVD or EVD of $A$, and it is even difficult to store it in RAM on a PC when $m > 5000$ unless $A$ is sparse. In such cases, we suggest directly applying the CG-FOCUSS in (16) without constructing the preconditioner or transform matrix by EVD/SVD. For very large scale compressed sensing problems (i.e., $m > 5000$), the TNIPM may be a better choice [1].

VIII. CONCLUSION

In this correspondence, two issues were mainly addressed. First, we presented a rigorous derivation of the FOCUSS in the case $0 < p < 1$ by using a nonlinear transform. The second contribution is the proposed CG-FOCUSS, which is computationally efficient and more suitable than the standard FOCUSS for larger-scale problems. On the whole, the implementation of (5) can be separated into two parts: the computation of $[A \Pi^{-1}(s) A^T]^{-1} x$ and the remaining operations. Roughly estimating, the former costs about half of the total computation time, and the latter costs the other half. By incorporating the transform matrix $(U\Lambda)^{-1}$ into (18), the CG method can be efficiently implemented to compute $[A \Pi^{-1}(s) A^T]^{-1} x$. Then, in contrast to the total computation time of (5), the computation time for $[A \Pi^{-1}(s) A^T]^{-1} x$ can nearly be neglected. Therefore, the CG-FOCUSS is nearly twice as fast as the standard FOCUSS in many situations.

ACKNOWLEDGMENT

The authors would like to thank all the reviewers for their very insightful comments and suggestions.

REFERENCES

[1] S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "An interior-point method for large-scale $\ell_1$-regularized least squares," IEEE J. Sel. Topics Signal Process., vol. 1, no. 4, Dec. 2007.
[2] I. F. Gorodnitsky, J. George, and B. D. Rao, "Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm," Electroencephalogr. Clin. Neurophysiol., vol. 95, no. 4, Oct. 1995.
[3] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Trans. Signal Process., vol. 45, no. 3, Mar. 1997.
[4] B. D. Rao and K. Kreutz-Delgado, "An affine scaling methodology for best basis selection," IEEE Trans. Signal Process., vol. 47, no. 1, Jan. 1999.
[5] P. Xu, Y. Tian, H. F. Chen, and D. Z. Yao, "Lp norm iterative sparse solution for EEG source localization," IEEE Trans. Biomed. Eng., vol. 54, no. 3, Mar. 2007.
[6] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. New York: Wiley, 2002.
[7] A. Pralat and R. Zdunek, "Electromagnetic geotomography: Selection of measuring frequency," IEEE Sensors J., vol. 5, no. 2, Apr. 2005.
[8] B. D. Rao and K. Kreutz-Delgado, "Deriving algorithms for computing sparse solutions to linear inverse problems," in Conf. Rec. 31st Asilomar Conf. Signals, Systems, Computers, 1997, vol. 1.
[9] B. D. Rao, "Signal processing with the sparseness constraint," in Proc. Int. Conf. Acoustics, Speech, Signal Processing (ICASSP), Seattle, WA, 1998, vol. III.
[10] B. D. Rao and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," in Proc. 8th IEEE Digital Signal Processing Workshop, Bryce Canyon National Park, Aug. 1998.
[11] S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, no. 1, 1998.
[12] B. D. Rao, K. Engan, S. F. Cotter, J. Palmer, and K. Kreutz-Delgado, "Subset selection in noise based on diversity measure minimization," IEEE Trans. Signal Process., vol. 51, no. 3, Mar. 2003.
[13] D. L. Donoho and M. Elad, "Maximal sparsity representation via $\ell_1$ minimization," Proc. Nat. Acad. Sci., vol. 100, 2003.
[14] K. Kreutz-Delgado et al., "Dictionary learning algorithms for sparse representation," Neural Comput., vol. 15, 2003.
[15] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Signal Process., vol. 53, no. 7, Jul. 2005.
[16] Y. Q. Li, A. Cichocki, and S. Amari, "Analysis of sparse representation and blind source separation," Neural Comput., vol. 16, 2004.
[17] I. Takigawa, M. Kudo, and J. Toyama, "Performance analysis of minimum $\ell_1$-norm solutions for underdetermined source separation," IEEE Trans. Signal Process., vol. 52, no. 3, Mar. 2004.
[18] Y. Q. Li, S. Amari, A. Cichocki, D. W. C. Ho, and S. L. Xie, "Underdetermined blind source separation based on sparse representation," IEEE Trans. Signal Process., vol. 54, no. 2, Feb. 2006.
[19] P. Bofill and M. Zibulevsky, "Underdetermined blind source separation using sparse representations," Signal Process., vol. 81, 2001.
[20] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, Dec. 1993.
[21] J. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Trans. Inf. Theory, vol. 50, no. 10, Oct. 2004.
[22] M. Saunders, PDCO: Primal-Dual Interior Method for Convex Objectives, 2002 [Online]. Available: software/pdco.html
[23] D. L. Donoho and Y. Tsaig, "Fast solution of $\ell_1$-norm minimization problems when the solution may be sparse," Department of Statistics, Stanford University, Stanford, CA, Tech. Rep.
[24] I. F. Gorodnitsky, "An extension of an interior-point method for entropy minimization," in Proc. Int. Conf. Acoustics, Speech, Signal Processing (ICASSP), Washington, DC, 1999.
[25] B. Wohlberg, "Noise sensitivity of sparse signal representations: Reconstruction error bounds for the inverse problem," IEEE Trans. Signal Process., vol. 51, no. 12, Dec. 2003.
[26] J. Chen and X. M. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Trans. Signal Process., vol. 54, no. 12, Dec. 2006.
[27] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," J. Res. Natl. Bur. Stand., vol. 49, no. 6, Dec. 1952.
[28] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed., ser. Springer Series in Operations Research and Financial Engineering, P. Glynn and S. M. Robinson, Eds. New York: Springer-Verlag, 2006.
[29] D. C. Sorensen, "Numerical methods for large eigenvalue problems," Acta Numerica, vol. 11, 2002.
[30] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. Philadelphia, PA: SIAM, 1998.


More information

Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm

Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm Stanford Exploration Project, Report 82, May 11, 2001, pages 1 176 Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm James G. Berryman 1 ABSTRACT Resolution matrices

More information

The Iteration-Tuned Dictionary for Sparse Representations

The Iteration-Tuned Dictionary for Sparse Representations The Iteration-Tuned Dictionary for Sparse Representations Joaquin Zepeda #1, Christine Guillemot #2, Ewa Kijak 3 # INRIA Centre Rennes - Bretagne Atlantique Campus de Beaulieu, 35042 Rennes Cedex, FRANCE

More information

Optimization Algorithms for Compressed Sensing

Optimization Algorithms for Compressed Sensing Optimization Algorithms for Compressed Sensing Stephen Wright University of Wisconsin-Madison SIAM Gator Student Conference, Gainesville, March 2009 Stephen Wright (UW-Madison) Optimization and Compressed

More information

Blind Source Separation with a Time-Varying Mixing Matrix

Blind Source Separation with a Time-Varying Mixing Matrix Blind Source Separation with a Time-Varying Mixing Matrix Marcus R DeYoung and Brian L Evans Department of Electrical and Computer Engineering The University of Texas at Austin 1 University Station, Austin,

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE METHODS AND APPLICATIONS OF ANALYSIS. c 2011 International Press Vol. 18, No. 1, pp. 105 110, March 2011 007 EXACT SUPPORT RECOVERY FOR LINEAR INVERSE PROBLEMS WITH SPARSITY CONSTRAINTS DENNIS TREDE Abstract.

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,

More information

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property : An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Sparse Signal Recovery: Theory, Applications and Algorithms

Sparse Signal Recovery: Theory, Applications and Algorithms Sparse Signal Recovery: Theory, Applications and Algorithms Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego Collaborators: I. Gorodonitsky, S. Cotter,

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

J. Liang School of Automation & Information Engineering Xi an University of Technology, China

J. Liang School of Automation & Information Engineering Xi an University of Technology, China Progress In Electromagnetics Research C, Vol. 18, 245 255, 211 A NOVEL DIAGONAL LOADING METHOD FOR ROBUST ADAPTIVE BEAMFORMING W. Wang and R. Wu Tianjin Key Lab for Advanced Signal Processing Civil Aviation

More information

SIGNAL SEPARATION USING RE-WEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS

SIGNAL SEPARATION USING RE-WEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS TR-IIS-4-002 SIGNAL SEPARATION USING RE-WEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS GUAN-JU PENG AND WEN-LIANG HWANG Feb. 24, 204 Technical Report No. TR-IIS-4-002 http://www.iis.sinica.edu.tw/page/library/techreport/tr204/tr4.html

More information

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.

More information

A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction

A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction Yao Xie Project Report for EE 391 Stanford University, Summer 2006-07 September 1, 2007 Abstract In this report we solved

More information

Fast Dictionary Learning for Sparse Representations of Speech Signals

Fast Dictionary Learning for Sparse Representations of Speech Signals Fast Dictionary Learning for Sparse Representations of Speech Signals Jafari, MG; Plumbley, MD For additional information about this publication click this link. http://qmro.qmul.ac.uk/jspui/handle/123456789/2623

More information

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation Alfredo Nava-Tudela John J. Benedetto, advisor 5/10/11 AMSC 663/664 1 Problem Let A be an n

More information

Sparse linear models and denoising

Sparse linear models and denoising Lecture notes 4 February 22, 2016 Sparse linear models and denoising 1 Introduction 1.1 Definition and motivation Finding representations of signals that allow to process them more effectively is a central

More information

Regularized Alternating Least Squares Algorithms for Non-negative Matrix/Tensor Factorization

Regularized Alternating Least Squares Algorithms for Non-negative Matrix/Tensor Factorization Regularized Alternating Least Squares Algorithms for Non-negative Matrix/Tensor Factorization Andrzej CICHOCKI and Rafal ZDUNEK Laboratory for Advanced Brain Signal Processing, RIKEN BSI, Wako-shi, Saitama

More information

SPARSE signal representations allow the salient information. Fast dictionary learning for sparse representations of speech signals

SPARSE signal representations allow the salient information. Fast dictionary learning for sparse representations of speech signals Author manuscript, published in "IEEE journal of selected topics in Signal Processing, special issue on Adaptive Sparse Representation of Data and Applications in Signal and Image Processing. (2011)" 1

More information

Analysis of Denoising by Sparse Approximation with Random Frame Asymptotics

Analysis of Denoising by Sparse Approximation with Random Frame Asymptotics Analysis of Denoising by Sparse Approximation with Random Frame Asymptotics Alyson K Fletcher Univ of California, Berkeley alyson@eecsberkeleyedu Sundeep Rangan Flarion Technologies srangan@flarioncom

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Step lengths in BFGS method for monotone gradients

Step lengths in BFGS method for monotone gradients Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly

More information

Compressed Sensing: Extending CLEAN and NNLS

Compressed Sensing: Extending CLEAN and NNLS Compressed Sensing: Extending CLEAN and NNLS Ludwig Schwardt SKA South Africa (KAT Project) Calibration & Imaging Workshop Socorro, NM, USA 31 March 2009 Outline 1 Compressed Sensing (CS) Introduction

More information