Recovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal Least Squares
Jian Wang and Ping Li
Department of Statistics and Biostatistics, Department of Computer Science
Rutgers, The State University of New Jersey, Piscataway, New Jersey 08854, USA
{jwang,pingli}@stat.rutgers.edu
arXiv preprint [stat.ME], Dec 2015

Abstract—We study the problem of recovering sparse signals from compressed linear measurements. This problem, often referred to as sparse recovery or sparse reconstruction, has generated a great deal of interest in recent years. To recover the sparse signals, we propose a new method called multiple orthogonal least squares (MOLS), which extends the well-known orthogonal least squares (OLS) algorithm by allowing L indices to be chosen per iteration. Owing to the inclusion of multiple support indices in each selection, the MOLS algorithm converges in much fewer iterations and improves the computational efficiency over the conventional OLS algorithm. Theoretical analysis shows that MOLS (L > 1) performs exact recovery of all K-sparse signals within K iterations if the measurement matrix satisfies the restricted isometry property (RIP) with isometry constant δ_{LK} < √L/(√K + 2√L). The recovery performance of MOLS in the noisy scenario is also studied. It is shown that stable recovery of sparse signals can be achieved with the MOLS algorithm when the signal-to-noise ratio (SNR) scales linearly with the sparsity level of input signals.

Index Terms—Compressed sensing (CS), sparse recovery, orthogonal matching pursuit (OMP), orthogonal least squares (OLS), multiple orthogonal least squares (MOLS), restricted isometry property (RIP), signal-to-noise ratio (SNR).

I. INTRODUCTION

In recent years, sparse recovery has attracted much attention in applied mathematics, electrical engineering, and statistics [1]–[4]. The main task of sparse recovery is to recover a high-dimensional K-sparse vector x ∈ R^n (‖x‖_0 ≤ K ≪ n) from a small number of linear measurements

y = Φx,  (1)

where Φ ∈ R^{m×n} (m < n) is often called the measurement matrix.
Although the system is underdetermined, owing to the signal sparsity, x can be accurately recovered from the measurements y by solving an ℓ0-minimization problem:

min_x ‖x‖_0 subject to y = Φx.  (2)

This method, however, is known to be intractable due to the combinatorial search involved and is therefore impractical for realistic applications. Thus, much attention has focused on developing efficient algorithms for recovering the sparse signal. In general, the algorithms can be classified into two major categories: those using convex optimization techniques and those based on greedy search principles. Other algorithms relying on nonconvex methods have also been proposed. The optimization-based approaches replace the nonconvex ℓ0-norm with its convex surrogate ℓ1-norm, translating the combinatorially hard search into a computationally tractable problem:

min_x ‖x‖_1 subject to y = Φx.  (3)

This algorithm is known as basis pursuit (BP) [5]. It has been revealed that, under appropriate constraints on the measurement matrix, BP yields exact recovery of the sparse signal. The second category of approaches for sparse recovery are greedy algorithms, in which the signal support is iteratively identified according to various greedy principles. Due to their computational simplicity and competitive performance, greedy algorithms have gained considerable popularity in practical applications. Representative algorithms include matching pursuit (MP) [7], orthogonal matching pursuit (OMP) [6], and orthogonal least squares (OLS) [8]. Both OMP and OLS identify the support of the underlying sparse signal by adding one index at a time and estimate the sparse coefficients over the enlarged support. The main difference between OMP and OLS lies in the greedy rule for updating the support at each iteration. While OMP finds a column that is most strongly correlated with the signal residual, OLS seeks to maximally reduce the power of the current residual with an enlarged support set.
It has been shown that OLS has better convergence properties but is computationally more expensive than the OMP algorithm [3]. In this paper, with the aim of improving the recovery accuracy and also reducing the computational cost of OLS, we propose a new method called multiple orthogonal least squares (MOLS), which can be viewed as an extension of the OLS algorithm in that multiple indices are allowed to be chosen at a time. Our method is inspired by the observation that the suboptimal candidates in each OLS identification step are likely to be reliable and could be utilized to better reduce the power of the signal residual at each iteration, thereby accelerating the convergence of the algorithm. The main steps of the MOLS algorithm are specified in Table I. Owing to the selection of multiple good candidates at a time, MOLS converges in much fewer iterations and improves the computational efficiency over the conventional OLS algorithm. Greedy methods with a similar flavor to MOLS in adding multiple indices per iteration include stagewise OMP (StOMP) [9], regularized OMP (ROMP), and generalized
TABLE I
THE MOLS ALGORITHM

Input: measurement matrix Φ ∈ R^{m×n}, measurement vector y ∈ R^m, sparsity level K, and selection parameter L ≤ min{K, m/K}.
Initialize: iteration count k = 0, estimated support T^0 = ∅, and residual vector r^0 = y.
While (‖r^k‖_2 ≥ ǫ and k < K) or Lk < K, do
  k = k + 1.
  Identify S^k = arg min_{S ⊆ Ω\T^{k−1}, |S| = L} Σ_{i∈S} ‖P^⊥_{T^{k−1}∪{i}} y‖_2^2.
  Enlarge T^k = T^{k−1} ∪ S^k.
  Estimate x^k = arg min_{u: supp(u) = T^k} ‖y − Φu‖_2.
  Update r^k = y − Φx^k.
End
Output: the estimated support ˆT = arg min_{S: |S| = K} ‖x^k − x^k_S‖_2 and the estimated signal ˆx satisfying ˆx_{Ω\ˆT} = 0 and ˆx_ˆT = Φ†_ˆT y.

OMP (gOMP) (also known as the orthogonal super greedy algorithm (OSGA) [3]), etc. These algorithms identify candidates at each iteration according to correlations between columns of the measurement matrix and the residual vector. Specifically, StOMP picks indices whose correlation magnitudes exceed a deliberately designed threshold. ROMP first chooses a set of K indices with the strongest correlations and then narrows the candidates down to a subset based on a predefined regularization rule. The gOMP algorithm finds a fixed number of indices with the strongest correlations in each selection. Other greedy methods adopting a different strategy of adding as well as pruning indices from the list include compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP) [4], and hard thresholding pursuit (HTP) [3], etc.

The contributions of this paper are summarized as follows.

i) We propose a new algorithm, referred to as MOLS, for solving sparse recovery problems. We analyze the MOLS algorithm using the restricted isometry property (RIP) introduced in the compressed sensing (CS) theory [3] (see Definition 1 below). Our analysis shows that MOLS (L > 1) exactly recovers any K-sparse signal within K iterations if the measurement matrix Φ obeys the RIP with isometry constant

δ_{LK} < √L / (√K + 2√L).  (4)

For the special case when L = 1, MOLS reduces to the conventional OLS algorithm. We establish the condition for exact sparse recovery with OLS as

δ_{K+1} < 1 / (√K + 2).
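The steps in Table I can be sketched in code. The following is a minimal numpy implementation under stated assumptions (unit-norm columns of Φ, the simplified identification rule of Proposition 1 below, and a small tolerance `eps`); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def mols(Phi, y, K, L, eps=1e-6):
    """Sketch of the MOLS iteration in Table I. Phi is m x n with
    unit-norm columns, K is the sparsity level, and L indices are
    added per iteration (L <= min(K, m // K) is assumed)."""
    m, n = Phi.shape
    T = []                       # estimated support T^k
    r = y.copy()                 # residual r^0 = y
    k = 0
    while (np.linalg.norm(r) >= eps and k < K) or L * k < K:
        k += 1
        # Identification via the simplified rule of Proposition 1:
        # score_i = <phi_i, r>^2 / ||P_perp_T phi_i||^2
        if T:
            Q, _ = np.linalg.qr(Phi[:, T])   # orthobasis of span(Phi_T)
            proj = Phi - Q @ (Q.T @ Phi)     # P_perp_T applied to all columns
        else:
            proj = Phi
        norms = np.linalg.norm(proj, axis=0)
        scores = (Phi.T @ r) ** 2 / np.maximum(norms, 1e-12) ** 2
        scores[T] = -np.inf                  # exclude chosen indices
        S = np.argsort(scores)[-L:]          # L best candidates
        T = sorted(set(T) | set(S.tolist()))
        # Least-squares estimate on the enlarged support
        x_T, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ x_T
    # Prune to the K largest coefficients and re-estimate (output step)
    x_full = np.zeros(n); x_full[T] = x_T
    T_hat = np.argsort(np.abs(x_full))[-K:]
    x_hat = np.zeros(n)
    x_hat[T_hat], *_ = np.linalg.lstsq(Phi[:, T_hat], y, rcond=None)
    return x_hat

# Usage sketch: recover a K-sparse vector from exact measurements
rng = np.random.default_rng(0)
m, n, K, L = 40, 80, 3, 2
Phi = rng.standard_normal((m, n)); Phi /= np.linalg.norm(Phi, axis=0)
x = np.zeros(n); x[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = mols(Phi, y := Phi @ x, K, L)
```

In well-conditioned Gaussian instances such as the one above, the pruned least-squares output reproduces x up to numerical precision.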
(5)

This condition is nearly sharp in the sense that, even with a slight relaxation (e.g., relaxing to δ_{K+1} = 1/√K), exact recovery with OLS may not be guaranteed.

ii) We analyze the recovery performance of MOLS in the presence of noise. Our result demonstrates that stable recovery of sparse signals can be achieved with MOLS when the signal-to-noise ratio (SNR) scales linearly with the sparsity level of input signals. In particular, for the case of OLS (i.e., when L = 1), we show that this scaling law of the SNR is necessary for exact support recovery of sparse signals.

The rest of this paper is organized as follows: In Section II, we introduce notations, definitions, and lemmas that will be used in this paper. In Section III, we give a useful observation regarding the identification step of MOLS. In Sections IV and V, we analyze the theoretical performance of MOLS in recovering sparse signals. In Section VI, we study the empirical performance of the MOLS algorithm. Concluding remarks are given in Section VII.

II. PRELIMINARIES

A. Notations

We first briefly summarize the notations used in this paper. Let Ω = {1, 2, ..., n} and let T = supp(x) = {i | i ∈ Ω, x_i ≠ 0} denote the support of vector x. For S ⊆ Ω, |S| is the cardinality of S. T\S is the set of all elements contained in T but not in S. x_S ∈ R^{|S|} is the restriction of the vector x to the elements with indices in S. Φ_S ∈ R^{m×|S|} is a submatrix of Φ that only contains columns indexed by S. If Φ_S has full column rank, then Φ†_S = (Φ′_S Φ_S)^{−1} Φ′_S is the pseudoinverse of Φ_S. span(Φ_S) is the span of the columns in Φ_S. P_S = Φ_S Φ†_S is the projection onto span(Φ_S). P^⊥_S = I − P_S is the projection onto the orthogonal complement of span(Φ_S), where I is the identity matrix. For mathematical convenience, we assume that Φ has unit ℓ2-norm columns throughout the paper.

B. Definitions and Lemmas

Definition 1 (RIP [3]): A measurement matrix Φ is said to satisfy the RIP of order K if there exists a constant δ ∈ (0, 1) such that

(1 − δ) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ) ‖x‖_2^2  (6)

for all K-sparse vectors x.
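The projection notation above can be made concrete with a short numerical sketch. The dimensions, the support S, and the sanity checks below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustration of the notation: for S ⊆ Ω, P_S = Phi_S Phi_S^+ projects
# onto span(Phi_S), and P_perp_S = I − P_S onto its orthogonal complement.
rng = np.random.default_rng(0)
m, n = 10, 20
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)          # unit l2-norm columns, as assumed

S = [2, 5, 7]
Phi_S = Phi[:, S]
pinv = np.linalg.inv(Phi_S.T @ Phi_S) @ Phi_S.T   # Phi_S^+ (full column rank)
P_S = Phi_S @ pinv
P_perp = np.eye(m) - P_S

# Sanity checks: projections are idempotent, and P_perp_S annihilates
# everything in span(Phi_S)
assert np.allclose(P_S @ P_S, P_S)
assert np.allclose(P_perp @ Phi_S, 0)
```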
In particular, the minimum of all constants δ satisfying (6) is called the isometry constant δ_K. The following lemmas are useful for our analysis.

Lemma 1 (Lemma 3 in [3]): If a measurement matrix satisfies the RIP of both orders K1 and K2, where K1 ≤ K2, then δ_{K1} ≤ δ_{K2}. This property is often referred to as the monotonicity of the isometry constant.

Lemma 2 (Consequences of RIP [34]): Let S ⊆ Ω. If δ_{|S|} < 1, then for any vector u ∈ R^{|S|},

(1 − δ_{|S|}) ‖u‖_2 ≤ ‖Φ′_S Φ_S u‖_2 ≤ (1 + δ_{|S|}) ‖u‖_2,
‖u‖_2 / (1 + δ_{|S|}) ≤ ‖(Φ′_S Φ_S)^{−1} u‖_2 ≤ ‖u‖_2 / (1 − δ_{|S|}).

Lemma 3 (Lemma 2.1 in [35]): Let S1, S2 ⊆ Ω with S1 ∩ S2 = ∅. If δ_{|S1|+|S2|} < 1, then

‖Φ′_{S1} Φ v‖_2 ≤ δ_{|S1|+|S2|} ‖v‖_2

holds for any vector v ∈ R^n supported on S2.

In [33], it has been shown that the behavior of OLS is unchanged whether the columns of Φ have unit ℓ2-norm or not. As MOLS is a direct extension of the OLS algorithm, it can be verified that the behavior of MOLS is also unchanged whether Φ has unit ℓ2-norm columns or not.
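Lemma 3 can be checked numerically on a tiny instance: for a matrix small enough, the isometry constant of a given order can be computed by brute force over all supports (the dimensions, supports, and helper name below are illustrative assumptions).

```python
import numpy as np
from itertools import combinations

def isometry_constant(Phi, k):
    """Brute-force isometry constant of order k for a tiny matrix:
    the largest deviation of the eigenvalues of Phi_S' Phi_S from 1
    over all supports S with |S| = k."""
    n = Phi.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        G = Phi[:, S].T @ Phi[:, S]
        w = np.linalg.eigvalsh(G)
        delta = max(delta, abs(w[0] - 1), abs(w[-1] - 1))
    return delta

rng = np.random.default_rng(1)
m, n = 8, 12
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)

# Check the Lemma 3 bound on disjoint supports S1, S2 with a random v
# supported on S2
S1, S2 = [0, 1], [4, 5, 6]
delta = isometry_constant(Phi, len(S1) + len(S2))
v = np.zeros(n); v[S2] = rng.standard_normal(len(S2))
lhs = np.linalg.norm(Phi[:, S1].T @ (Phi @ v))
assert lhs <= delta * np.linalg.norm(v) + 1e-12
```

The bound holds because the off-diagonal block of the Gram matrix of Φ_{S1∪S2} has spectral norm at most ‖Φ′_{S1∪S2}Φ_{S1∪S2} − I‖.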
Lemma 4 (Proposition 3.1 in []): Let S ⊆ Ω. If δ_{|S|} < 1, then for any vector u ∈ R^m,

‖Φ′_S u‖_2 ≤ √(1 + δ_{|S|}) ‖u‖_2.

Lemma 5 (Lemma 5 in [36]): Let S1, S2 ⊆ Ω. Then the minimum and maximum eigenvalues of Φ′_{S1} P^⊥_{S2} Φ_{S1} satisfy

λ_min(Φ′_{S1} P^⊥_{S2} Φ_{S1}) ≥ λ_min(Φ′_{S1∪S2} Φ_{S1∪S2}),
λ_max(Φ′_{S1} P^⊥_{S2} Φ_{S1}) ≤ λ_max(Φ′_{S1∪S2} Φ_{S1∪S2}).

Lemma 6: Let S ⊆ Ω. If δ_{|S|} < 1, then for any vector u ∈ R^{|S|},

‖u‖_2 / √(1 + δ_{|S|}) ≤ ‖(Φ†_S)′ u‖_2 ≤ ‖u‖_2 / √(1 − δ_{|S|}).  (7)

The upper bound in (7) has appeared in [, Proposition 3.], and we give a proof of (7) in Appendix A.

III. OBSERVATION

Let us begin with an interesting and important observation regarding the identification step of MOLS shown in Table I. At the (k+1)-th iteration (k ≥ 0), MOLS adds to T^k a set of L indices,

S^{k+1} = arg min_{S ⊆ Ω\T^k, |S| = L} Σ_{i∈S} ‖P^⊥_{T^k∪{i}} y‖_2^2.  (8)

Intuitively, a straightforward implementation of (8) requires sorting all elements in {‖P^⊥_{T^k∪{i}} y‖_2^2}_{i∈Ω\T^k} and then finding the L smallest ones (and their corresponding indices). This implementation, however, is computationally expensive, as it requires constructing n − Lk different orthogonal projections (i.e., P^⊥_{T^k∪{i}}, ∀i ∈ Ω\T^k). Therefore, it is highly desirable to find a cost-effective alternative to (8) for the identification step of MOLS. Interestingly, the following proposition illustrates that (8) can be substantially simplified. It is inspired by the technical report of Blumensath and Davies [33], in which a geometric interpretation of OLS is given in terms of orthogonal projections.

Proposition 1: At the (k+1)-th iteration, the MOLS algorithm identifies a set of L indices:

S^{k+1} = arg max_{S ⊆ Ω\T^k, |S| = L} Σ_{i∈S} ⟨φ_i, r^k⟩^2 / ‖P^⊥_{T^k} φ_i‖_2^2  (9)
        = arg max_{S ⊆ Ω\T^k, |S| = L} Σ_{i∈S} ⟨P^⊥_{T^k} φ_i / ‖P^⊥_{T^k} φ_i‖_2, r^k⟩^2.  (10)

The proof of this proposition is given in Appendix B. It is essentially identical to some analysis in [8] (which is specific to OLS), but with an extension to the case of selecting multiple indices per iteration (i.e., the MOLS case). This extension is important in that it not only enables a low-complexity implementation of MOLS, but also plays a key role in the performance analysis of MOLS in Sections IV and V.
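The equivalence between the costly rule (8) and the simplified rule (9) can be verified numerically on a small instance by exhaustive search over all candidate sets (the dimensions, the current support T, and the helper names below are illustrative assumptions):

```python
import numpy as np
from itertools import combinations

# Numerical check of Proposition 1 with L = 2: the set minimising
# sum_i ||P_perp_{T u {i}} y||^2 equals the set maximising
# sum_i <phi_i, r>^2 / ||P_perp_T phi_i||^2, where r = P_perp_T y.
rng = np.random.default_rng(2)
m, n, L = 8, 12, 2
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)
y = rng.standard_normal(m)

def perp(cols):
    """Projector onto the orthogonal complement of span(Phi[:, cols])."""
    if not cols:
        return np.eye(m)
    Q, _ = np.linalg.qr(Phi[:, list(cols)])
    return np.eye(m) - Q @ Q.T

T = [0, 3]                                   # current support T^k
P = perp(T)
r = P @ y                                    # residual of the LS fit on T

# Rule (8): minimise the sum of projected residual powers
def cost(S):
    return sum(np.linalg.norm(perp(T + [i]) @ y) ** 2 for i in S)
cands = [c for c in combinations(range(n), L) if not set(c) & set(T)]
S_min = min(cands, key=cost)

# Rule (9): maximise the sum of normalised squared correlations
scores = (Phi.T @ r) ** 2 / np.linalg.norm(P @ Phi, axis=0) ** 2
scores[T] = -np.inf
S_max = tuple(sorted(int(i) for i in np.argsort(scores)[-L:]))

assert S_min == S_max
```

The identity behind the check is ‖P^⊥_{T∪{i}} y‖_2^2 = ‖P^⊥_T y‖_2^2 − ⟨φ_i, r⟩^2/‖P^⊥_T φ_i‖_2^2, so minimizing the sum in (8) is the same as maximizing the sum in (9).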
We thus include the proof for completeness. One can interpret from (9) that to identify S^{k+1}, it suffices to find the L largest values in {⟨φ_i, r^k⟩^2 / ‖P^⊥_{T^k} φ_i‖_2^2}_{i∈Ω\T^k}, which is much simpler than (8) as it involves only one projection operator (i.e., P^⊥_{T^k}). Indeed, by numerical experiments, we have confirmed that the simplification offers a massive reduction in the computational cost. Following the arguments in [3], [33], we give a geometric interpretation of the selection rule in MOLS: the columns of the measurement matrix are projected onto the subspace that is orthogonal to the span of the active columns, and the L normalized projected columns that are best correlated with the residual vector are selected.

IV. EXACT SPARSE RECOVERY WITH MOLS

A. Main Results

In this section, we study the condition of MOLS for exact recovery of sparse signals. For convenience of stating the results, we say that MOLS makes a success at an iteration if it selects at least one correct index at that iteration. Clearly, if MOLS makes a success in each iteration, it will select all support indices within K iterations. When all support indices are selected, MOLS can recover the sparse signal exactly.

Theorem 1: Let x ∈ R^n be any K-sparse signal and let Φ ∈ R^{m×n} be the measurement matrix. Also, let L be the number of indices selected at each iteration of MOLS. Then if Φ satisfies the RIP with

δ_{LK} < √L / (√K + 2√L),  L > 1,
δ_{K+1} < 1 / (√K + 2),  L = 1,  (11)

MOLS exactly recovers x from the measurements y = Φx within K iterations.

Note that when L = 1, MOLS reduces to the conventional OLS algorithm. Theorem 1 suggests that under δ_{K+1} < 1/(√K + 2), OLS can recover any K-sparse signal in exactly K iterations. Similar results have also been established for the OMP algorithm. In [4], [5], it has been shown that δ_{K+1} < 1/(√K + 1) is sufficient for OMP to exactly recover K-sparse signals. The condition has recently been improved to δ_{K+1} < (√(4K+1) − 1)/(2K) [37], by utilizing techniques developed in [38].
It is worth mentioning that there exist examples of measurement matrices satisfying δ_{K+1} = 1/√K and K-sparse signals for which OMP makes a wrong selection at the first iteration and thus fails to recover the signals in K iterations [4], [5]. Since OLS coincides with OMP in the first iteration, those examples naturally apply to OLS, which therefore implies that δ_{K+1} < 1/√K is a necessary condition for the OLS algorithm. We would like to mention that the claim of δ_{K+1} < 1/√K being necessary for exact recovery with OLS has also been proved in [39, Lemma 1]. Considering the fact that 1/(√K + 2) converges to 1/√K as K grows large, the proposed condition δ_{K+1} < 1/(√K + 2) is nearly sharp.

The proof of Theorem 1 follows along a similar line as the proof in [, Section III], in which conditions for exact recovery with gOMP were proved using mathematical induction, but with two key distinctions. The first distinction lies in the way we lower bound the correlations between correct columns and the current residual. As will be seen in Proposition 2, by employing an improved analysis, we obtain a tighter bound for MOLS than the corresponding result for gOMP [, Lemma 3.7], which consequently leads to better recovery conditions for the MOLS algorithm. The second distinction is in how we
obtain recovery conditions for the case of L = 1. While in [, Theorem 3.] the condition for the first iteration of OMP directly applies to the general iteration and hence becomes an overall condition for OMP (i.e., the condition for success of the first iteration also guarantees the success of succeeding iterations; see [, Lemma 3.]), such is not the case for the OLS algorithm due to the difference in the identification rule. This means that to obtain the overall condition for OLS, we need to consider the first iteration and the general iteration individually, which makes the underlying analysis for OLS more complex and also leads to a more restrictive condition than for the OMP algorithm.

B. Proof of Theorem 1

The proof works by mathematical induction. We first establish a condition that guarantees the success of MOLS at the first iteration. Then we assume that MOLS has been successful in the previous k iterations (1 ≤ k < K). Under this assumption, we derive a condition under which MOLS also makes a success at the (k+1)-th iteration. Finally, we combine these two conditions to obtain an overall condition.

1) Success of the first iteration: From (9), MOLS selects at the first iteration the index set

T^1 = arg max_{S:|S|=L} Σ_{i∈S} ⟨φ_i, r^0⟩^2 / ‖P^⊥_{T^0} φ_i‖_2^2
    =(a) arg max_{S:|S|=L} Σ_{i∈S} ⟨φ_i, y⟩^2
    = arg max_{S:|S|=L} ‖Φ′_S y‖_2^2
    = arg max_{S:|S|=L} ‖Φ′_S y‖_2,  (12)

where (a) is because k = 0 so that ‖P^⊥_{T^0} φ_i‖_2 = ‖φ_i‖_2 = 1. By noting that L ≤ K,

(1/√L) ‖Φ′_{T^1} y‖_2 = max_{S:|S|=L} (1/√L) ‖Φ′_S y‖_2 ≥ (1/√K) ‖Φ′_T y‖_2 = (1/√K) ‖Φ′_T Φ_T x_T‖_2 ≥ (1/√K)(1 − δ_K) ‖x‖_2.  (13)

On the other hand, if no correct index is chosen at the first iteration (i.e., T^1 ∩ T = ∅), then

(1/√L) ‖Φ′_{T^1} y‖_2 = (1/√L) ‖Φ′_{T^1} Φ_T x_T‖_2 ≤(Lemma 3) (1/√L) δ_{L+K} ‖x‖_2.  (14)

This, however, contradicts (13) if (1/√L) δ_{L+K} < (1/√K)(1 − δ_K), which in turn is guaranteed by

δ_{L+K} < √L / (√K + √L).  (15)

Therefore, under (15), at least one correct index is chosen at the first iteration of MOLS. Note that the average of the L largest elements in {⟨φ_i, y⟩^2}_{i∈Ω} must be no less than the average over any other subset of {⟨φ_i, y⟩^2}_{i∈Ω} whose cardinality is no less than L. Hence,

(1/L) Σ_{i∈T^1} ⟨φ_i, y⟩^2 ≥ (1/K) Σ_{i∈T} ⟨φ_i, y⟩^2,

which justifies the second step in (13).
2) Success of the general iteration: Assume that MOLS has selected at least one correct index at each of the previous k (1 ≤ k < K) iterations, and denote by l the number of correct indices in T^k. Then l = |T ∩ T^k| ≥ k. Also, assume that T^k does not contain all correct indices, that is, l < K. Under these assumptions, we will establish a condition that ensures MOLS to select at least one correct index at the (k+1)-th iteration.

For analytical convenience, we introduce the following two quantities: i) u1 denotes the largest value of |⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2 over i ∈ T\T^k, and ii) v_L denotes the L-th largest value of |⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2 over i ∈ Ω\(T ∪ T^k). It is clear that if

u1 > v_L,  (16)

then u1 belongs to the set of L largest elements among all elements in {|⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2}_{i∈Ω\T^k}. It then follows from (9) that at least one correct index (i.e., the one corresponding to u1) will be selected at the (k+1)-th iteration of MOLS.

Proposition 2 (proved in Appendix C) gives a lower bound (17) for u1 and an upper bound (18) for v_L, both in terms of ‖x_{T\T^k}‖_2 and the isometry constants of orders K − l, Lk, Lk + L, and K + Lk.

By noting that k ≤ l < K and L ≤ K, we can use the monotonicity of the isometry constant (Lemma 1) to replace each of these constants by δ_{LK}; for example,

1 − δ_{K−l} ≥ 1 − δ_{LK},  1 − δ_{Lk} ≥ 1 − δ_{LK},  δ_{Lk+L} ≤ δ_{LK},  δ_{K+Lk} ≤ δ_{LK}.  (19)

Using (17) and (19), we have

u1 ≥ (1 − δ_{LK}) ‖x_{T\T^k}‖_2 / √(K − l).  (20)

Also, using (18) and (19), we obtain an upper bound on v_L that is proportional to ‖x_{T\T^k}‖_2 with a coefficient depending only on L and δ_{LK} (21). From (20) and (21), u1 > v_L holds true whenever the coefficient in (20) exceeds that in (21) (22). Equivalently (see Appendix D),

δ_{LK} < √L / (√K + 2√L).  (23)
Therefore, under (23), MOLS selects at least one correct index at the (k+1)-th iteration.

3) Overall condition: So far, we have obtained condition (15) for the success of MOLS at the first iteration and condition (23) for the success of the general iteration. We now combine them to get an overall condition that ensures selection of all support indices within K iterations of MOLS. Clearly, the overall condition is governed by the more restrictive one between (15) and (23). We consider the following two cases:

L > 1: Since δ_{L+K} ≤ δ_{LK} and √L/(√K + 2√L) ≤ √L/(√K + √L), (23) is more restrictive than (15) and hence becomes the overall condition of MOLS for this case.

L = 1: In this case, MOLS reduces to the conventional OLS algorithm, and conditions (15) and (23) become

δ_{K+1} < 1/(√K + 1) and δ_K < 1/(√K + 2),  (24)

respectively. One can easily check that both conditions in (24) hold true if

δ_{K+1} < 1/(√K + 2).  (25)

Therefore, under (25), OLS exactly recovers the support of K-sparse signals in K iterations.

We have obtained the condition ensuring selection of all support indices within K iterations of MOLS. When all support indices are selected, we have T ⊆ T^l, where l (≤ K) denotes the number of actually performed iterations. Since L ≤ min{K, m/K} by Table I, the total number of indices selected by MOLS (i.e., Ll) does not exceed m, and hence the sparse signal can be recovered with a least squares (LS) projection:

x^l_{T^l} = arg min_u ‖y − Φ_{T^l} u‖_2 = Φ†_{T^l} y = Φ†_{T^l} Φ_{T^l} x_{T^l} = x_{T^l}.  (26)

As a result, the residual vector becomes zero (r^l = y − Φx^l = 0), and hence the algorithm terminates and returns exact recovery of the sparse signal (ˆx = x).

C. Convergence Rate

We can gain good insights by studying the rate of convergence of MOLS. In the following theorem, we show that the residual power of MOLS decays exponentially with the number of iterations.

Theorem 2: For any k < K, the residual of MOLS satisfies

‖r^{k+1}‖_2^2 ≤ (1 − α(k, L))^{k+1} ‖y‖_2^2,  (27)

where α(k, L) ∈ (0, 1) is an explicit function of L, K, and the isometry constants of orders Lk, Lk + L, and K + Lk; its expression is given in Appendix E.

The proof is given in Appendix E. Using Theorem 2, one can roughly compare the rates of convergence of OLS and MOLS.
When Φ has small isometry constants, the upper bound on the convergence rate of MOLS is better than that of OLS by a factor of α(k, L)/α(k, 1). We will see later in the experimental section that MOLS converges faster than the OLS algorithm.

V. SPARSE RECOVERY WITH MOLS UNDER NOISE

A. Main Results

In this section, we consider the general scenario where the measurements are contaminated with noise as

y = Φx + v.  (28)

Note that in this scenario exact recovery of x is not possible. Thus we employ the ℓ2-norm distortion (i.e., ‖x − ˆx‖_2) as a performance measure and will derive conditions ensuring an upper bound on the recovery distortion. Recall from Table I that MOLS runs until neither "‖r^k‖_2 ≥ ǫ and k < K" nor "Lk < K" is true.³ Since L ≤ K, one can see that the algorithm terminates when ‖r^k‖_2 < ǫ or k = K. In the following, we will analyze the recovery distortion of MOLS based on these two termination cases.

We first consider the case where MOLS is finished by the rule ‖r^k‖_2 < ǫ. The following theorem provides an upper bound on ‖x − ˆx‖_2 for this case.

Theorem 3: Consider the measurement model in (28). If MOLS satisfies ‖r^l‖_2 ≤ ǫ after l (< K) iterations and Φ satisfies the RIP of orders Ll + K and K, then the output ˆx satisfies

‖x − ˆx‖_2 ≤ C1 ǫ + C2 ‖v‖_2,  (29)

where the constants C1, C2 > 0 depend only on the isometry constants δ_K and δ_{Ll+K}.

Proof: See Appendix F.

Next, we consider the second case, where MOLS terminates after K iterations. In this case, we parameterize the dependence on the noise v and the signal x with two quantities: i) the signal-to-noise ratio (SNR) and ii) the minimum-to-average ratio (MAR) [4], which are defined as

snr := ‖Φx‖_2^2 / ‖v‖_2^2 and κ := min_{j∈T} |x_j|^2 / (‖x‖_2^2 / K),  (30)

respectively. The following theorem provides an upper bound on the ℓ2-norm of the recovery distortion of MOLS.

Theorem 4: Consider the measurement model in (28). If the measurement matrix Φ satisfies the RIP condition (11) and the SNR satisfies

snr ≥ (K/κ) C1(δ_{K+1}),  L = 1, and snr ≥ (K/κ) C_L(δ_{LK}),  L > 1,  (31)

for explicit constants C1 and C_L depending only on the indicated isometry constants (given in (47) and (46), respectively), then MOLS chooses all support indices in K iterations (i.e., T ⊆ T^K) and generates an estimate ˆx of x satisfying

‖ˆx − x‖_2 ≤ C ‖v‖_2,  (32)

where C depends only on the isometry constants (with a smaller constant in the case L = 1).
³The constraint Lk ≥ K ensures that MOLS selects at least K candidates before stopping. These candidates are then narrowed down to exactly K as the final output of the algorithm.
One can interpret from Theorem 4 that MOLS can catch all support indices of x in K iterations when the SNR scales linearly with the sparsity K. In particular, for the special case of L = 1, the algorithm exactly recovers the support of x (i.e., ˆT = T). It is worth noting that the SNR being proportional to K is necessary for exact support recovery with OLS. In fact, there exist a measurement matrix Φ satisfying (11) and a K-sparse signal for which the OLS algorithm fails to recover the support of the signal under

snr ≤ K.  (33)

Example 1: Consider the identity matrix Φ = I ∈ R^{m×m}, a K-sparse signal x ∈ R^m whose nonzero elements all equal one,

x = (1, ..., 1, 0, ..., 0)′ (K leading ones),

and a 1-sparse noise vector v ∈ R^m with a single one in position K+1 and zeros elsewhere. Then the measurements are given by

y = (1, ..., 1, 0, ..., 0)′ (K+1 leading ones, m−K−1 zeros).

In this case, we have δ_{K+1} = 0 (so that condition (11) is fulfilled) and snr = K; however, OLS may fail to recover the support of x. Specifically, since the K+1 largest entries of y are all equal, OLS is not guaranteed to make a correct selection at the first iteration.

B. Proof of Theorem 4

Our proof of Theorem 4 extends the proof technique in [, Theorems 3.4 and 3.5] (which studied the recovery condition for the gOMP algorithm in the noiseless situation) by considering the measurement noise. We mention that [, Theorem 4.] also provided a noisy-case analysis based on the ℓ2-norm distortion of signal recovery, but the corresponding result is far inferior to the result established in Theorem 4. Indeed, while that result suggested a recovery distortion that grows with the sparsity K, our result shows that the recovery distortion with MOLS is at most proportional to the noise power. The result in Theorem 4 is also closely related to the results in [37], [4], in which the researchers considered the OMP algorithm with data-driven stopping rules (i.e., residual-based stopping rules) and established conditions for exact support recovery that depend on the minimum magnitude of the nonzero elements of input signals.
It can be shown that the results of [37], [4] essentially require the same scaling law of the SNR as the result in Theorem 4. The key idea in the proof is to derive a condition ensuring MOLS to select at least one good index at each iteration. As long as at least one good index is chosen in each iteration, all support indices will be included within K iterations of MOLS (i.e., T ⊆ T^K), and consequently the algorithm produces a stable recovery of x.

Proposition 3: Consider the measurement model in (28). If the measurement matrix Φ satisfies the RIP of order LK, then MOLS satisfies (32) provided that T ⊆ T^K.

Proof: See Appendix G.

Now we proceed to derive the condition ensuring the success of MOLS at each iteration. Again, the notion of success means that MOLS selects at least one good index at the iteration. We first derive a condition for the success of MOLS at the first iteration. Then we assume that MOLS has been successful in the previous k iterations and derive a condition guaranteeing MOLS to make a success at the (k+1)-th iteration as well. Finally, we combine these two conditions to obtain an overall condition for MOLS.

1) Success at the first iteration: From (12), we know that at the first iteration, MOLS selects the set T^1 of L indices such that ‖Φ′_{T^1} y‖_2 = max_{S:|S|=L} ‖Φ′_S y‖_2. Since L ≤ K,

(1/√L) ‖Φ′_{T^1} y‖_2 ≥ (1/√K) ‖Φ′_T y‖_2
= (1/√K) ‖Φ′_T Φx + Φ′_T v‖_2
≥(a) (1/√K) (‖Φ′_T Φ_T x_T‖_2 − ‖Φ′_T v‖_2)
≥(b) (1/√K) ((1 − δ_K) ‖x‖_2 − √(1 + δ_K) ‖v‖_2),  (34)

where (a) is from the triangle inequality and (b) is from the RIP and Lemma 4. On the other hand, if no correct index is chosen at the first iteration (i.e., T^1 ∩ T = ∅), then

(1/√L) ‖Φ′_{T^1} y‖_2 = (1/√L) ‖Φ′_{T^1} Φ_T x_T + Φ′_{T^1} v‖_2
≤ (1/√L) (‖Φ′_{T^1} Φ_T x_T‖_2 + ‖Φ′_{T^1} v‖_2)
≤(Lemmas 3, 4) (1/√L) (δ_{L+K} ‖x‖_2 + √(1 + δ_L) ‖v‖_2).  (35)

This, however, contradicts (34) if

(1/√L)(δ_{L+K} ‖x‖_2 + √(1 + δ_L) ‖v‖_2) < (1/√K)((1 − δ_K) ‖x‖_2 − √(1 + δ_K) ‖v‖_2).

Equivalently,

((1 − δ_K)/√K − δ_{L+K}/√L) ‖x‖_2 > (√(1 + δ_K)/√K + √(1 + δ_L)/√L) ‖v‖_2.  (36)

Furthermore, since

‖Φx‖_2 ≤ √(1 + δ_K) ‖x‖_2  (37)

by the RIP, using (30) we can show that (36) holds true if

snr > (1 + δ_K) (√(1 + δ_K)/√K + √(1 + δ_L)/√L)^2 / ((1 − δ_K)/√K − δ_{L+K}/√L)^2.  (38)

Therefore, under (38), at least one correct index is chosen at the first iteration of MOLS.
2) Success at the (k+1)-th iteration: Similar to the analysis of MOLS in the noiseless case in Section IV, we assume that MOLS selects at least one correct index at each of the previous k (1 ≤ k < K) iterations and denote by l the number of correct indices in T^k. Then l = |T ∩ T^k| ≥ k. Also, we assume that T^k does not contain all correct indices (l < K). Under these assumptions, we derive a condition that ensures
MOLS to select at least one correct index at the (k+1)-th iteration. We introduce two quantities that are useful for stating the results: let u1 denote the largest value of |⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2 over i ∈ T\T^k, and let v_L denote the L-th largest value of |⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2 over i ∈ Ω\(T ∪ T^k). It is clear that if

u1 > v_L,  (39)

then u1 belongs to the set of L largest elements among all elements in {|⟨φ_i, r^k⟩| / ‖P^⊥_{T^k} φ_i‖_2}_{i∈Ω\T^k}. It then follows from (9) that at least one correct index (i.e., the one corresponding to u1) will be selected at the (k+1)-th iteration.

Proposition 4 (proved in Appendix H) lower bounds u1 by a term of the form c1 ‖x_{T\T^k}‖_2 − c2 ‖v‖_2 (40) and upper bounds v_L by a term of the form c3 ‖x_{T\T^k}‖_2 + c4 ‖v‖_2 (41), where the constants c1, ..., c4 depend only on L and the isometry constants.

By noting that k ≤ l < K and L ≤ K, and also using Lemma 1, the isometry constants appearing in (40) and (41) can again be replaced by δ_{LK} (42), which yields an upper bound on v_L (43) and a lower bound on u1 (44) in terms of δ_{LK}, ‖x_{T\T^k}‖_2, and ‖v‖_2 alone. From (43) and (44), u1 > v_L can be guaranteed by requiring the lower bound in (44) to exceed the upper bound in (43) (45), which is true under (see Appendix I)

snr ≥ (K/κ) C_L(δ_{LK}),  (46)

where C_L(·) is an explicit function of δ_{LK} and L. Therefore, under (46), MOLS selects at least one correct index at the (k+1)-th iteration.

3) Overall condition: Thus far, we have obtained condition (38) for the success of MOLS at the first iteration and condition (46) for the success of the general iteration. We now combine them to get an overall condition of MOLS ensuring selection of all support indices in K iterations. Clearly, the overall condition is the more restrictive one between (38) and (46). We consider the following two cases.

L > 1: Since δ_{L+K} ≤ δ_{LK}, (46) is more restrictive than (38) and becomes the overall condition.

L = 1: In this case, the MOLS algorithm reduces to the conventional OLS algorithm. Since δ_K ≤ δ_{K+1}, one can verify that both (38) and (46) hold true under

snr ≥ (K/κ) C1(δ_{K+1}),  (47)

where C1(·) is an explicit function of δ_{K+1}. Therefore, (47) ensures selection of all support indices in K iterations of OLS.
The proof is now complete.

VI. EMPIRICAL RESULTS

In this section, we empirically study the performance of MOLS in recovering sparse signals. We consider both noiseless and noisy scenarios. In the noiseless case, we adopt the testing strategy in [4], which measures the performance of recovery algorithms by testing their empirical frequency of exact reconstruction of sparse signals, while in the noisy case, we employ the mean square error (MSE) as a metric to evaluate the recovery performance. For comparative purposes, we consider the following recovery approaches in our simulation:

1) OLS and MOLS;
2) OMP;
3) StOMP;
4) ROMP;
5) CoSaMP;
6) BP (or BPDN for the noisy case);
7) Iteratively reweighted LS (IRLS);
8) Linear minimum MSE (LMMSE) estimator.

In each trial, we construct an m × n matrix (where m = 128 and n = 256) with entries drawn independently from a Gaussian distribution with zero mean and variance 1/m. For each value of K in {5, ..., 64}, we generate a K-sparse signal of size n whose support is chosen uniformly at random and whose nonzero elements are 1) drawn independently from a standard Gaussian distribution, or 2) chosen randomly from the set {±1}. We refer to the two types of signals as the sparse Gaussian signal and the sparse 2-ary pulse amplitude modulation (2-PAM) signal, respectively. We mention that reconstructing sparse 2-PAM signals is a particularly challenging case for OMP and OLS.

In the noiseless case, we perform independent trials for each recovery approach and plot the empirical frequency of exact reconstruction as a function of the sparsity level K. By comparing the maximal sparsity level, i.e., the so-called critical sparsity [4], of sparse signals at which exact reconstruction is
Fig. 1. Frequency of exact recovery of sparse signals as a function of K: (a) sparse Gaussian signals, (b) sparse 2-PAM signals.

Fig. 2. Running time and number of iterations for exact reconstruction of K-sparse Gaussian and 2-PAM signals: (a)–(b) running time, (c)–(d) running time without BP and IRLS, (e)–(f) number of iterations.

always ensured, the recovery accuracy of different algorithms can be compared empirically.⁴ As shown in Fig. 1, for both sparse Gaussian and sparse 2-PAM signals, the MOLS algorithm outperforms the other greedy approaches with respect to the critical sparsity. Even when compared to the BP and IRLS methods, the MOLS algorithm still exhibits very competitive reconstruction performance. For the Gaussian case, the critical sparsity of MOLS is 43, which is higher than that of BP and IRLS, while for the 2-PAM case, MOLS, BP, and IRLS have

⁴Note that for MOLS, the selection parameter should obey L ≤ min{K, m/K}. We thus choose L = 3, 5 in our simulations. Interested readers may try other options; we suggest choosing L to be a small integer and have empirically confirmed that choices of L = 2, 3, 4, 5 generally lead to similar recovery performance. For StOMP, there are two thresholding strategies: false alarm control (FAC) and false discovery control (FDC) [9]. We exclusively use FAC, since FAC outperforms FDC.
For OMP and OLS, we run the algorithms for exactly K iterations before stopping. For CoSaMP, we set the maximal iteration number to 5 to avoid repeated iterations. We implement IRLS (with p = 1) as featured in [6].

almost identical critical sparsity (around 37). In Fig. 2, we plot the running time and the number of iterations for exact reconstruction of K-sparse Gaussian and 2-PAM signals as functions of K. The running time is measured in MATLAB under an 8-core 64-bit processor, 256 GB RAM, and the Windows Server R environment. Overall, we observe that for both the sparse Gaussian and the 2-PAM cases, the running time of BP and IRLS is longer than that of OMP, StOMP, ROMP, and MOLS. In particular, the running time of BP is more than one order of magnitude higher than what the rest of the algorithms require. This is because the complexity of BP is a quadratic function of the number of measurements (O(m^2 n^{3/2})) [43], while that of the greedy algorithms is O(Kmn). Moreover, the running time of MOLS is roughly two to three times as much as that of OMP. We also observe that the number of iterations MOLS needs for exact reconstruction is much smaller than that of the OLS algorithm, since MOLS can include more than one support index at a time. The associated running time of MOLS is also
much less than that of OLS.

[Fig. 3. Frequency of exact recovery of sparse signals as a function of the number of measurements m. Curves are shown for OLS, MOLS (L = 3), MOLS (L = 5), OMP, StOMP, ROMP, CoSaMP, IRLS, and BP.]

In Fig. 3, by varying the number of measurements m, we plot the empirical frequency of exact reconstruction of K-sparse Gaussian signals as a function of m. We consider the sparsity level K = 45, for which none of the reconstruction methods in Fig. 1 is guaranteed to perform exact recovery. Overall, we observe that the performance comparison among all reconstruction methods is similar to that in Fig. 1, in that MOLS performs the best and OLS, OMP, and ROMP perform worse than the other methods. Moreover, for all reconstruction methods under test, the frequency of exact reconstruction improves as the number of measurements increases. In particular, MOLS roughly requires m >= 135 to ensure exact recovery of sparse signals, while BP, OMP, and StOMP seem to always succeed when m >= 250.

In the noisy case, we empirically compare the MSE performance of each recovery method. The MSE is defined as

MSE = (1/n) \sum_{i=1}^{n} (x_i - \hat{x}_i)^2, (48)

where \hat{x}_i is the estimate of x_i. In obtaining the performance result for each simulation point of the algorithm, we perform independent trials. In Fig. 4, we plot the MSE performance of each recovery method as a function of SNR (in dB), i.e., SNR := 10 log_{10} snr. In this case, the system model is expressed as y = \Phi x + v, where v is the noise vector whose elements are generated from the Gaussian distribution N(0, K/(m \cdot snr)).5 The benchmark performance of the oracle least squares estimator (Oracle-LS), the best possible estimator having prior knowledge of the support of the input signals, is plotted as well. In general, we observe that for all reconstruction methods, the MSE performance improves with the SNR. For the whole SNR region under test, the MSE
[Fig. 4. MSE performance of the recovery methods for recovering sparse 2-PAM signals as a function of SNR; panels (a) and (b) correspond to two sparsity levels.]

performance of MOLS is very competitive compared to that of the other methods. In particular, in the high SNR region the MSE of MOLS matches that of the Oracle-LS estimator. This is essentially because the MOLS algorithm detects all support indices of the sparse signals successfully in that SNR region.6

5 Since the components of \Phi have power 1/m and the signal x is K-sparse with nonzero elements drawn independently from a standard Gaussian distribution, E(\Phi x)_i^2 = K/m. From the definition of SNR, we have E v_i^2 = E(\Phi x)_i^2 / snr = K/(m \cdot snr).

6 In this case, we can verify that the condition of Theorem 4 is fulfilled. To be specific, since sparse 2-PAM signals have \kappa = 1 and snr = 10^{SNR/10}, when the measurement matrix \Phi has small isometry constants the condition roughly becomes a lower bound on snr, which is true in this SNR region.

An interesting point we would like to mention is that the actual recovery error of MOLS may be much smaller than indicated in Theorem 4. Consider MOLS (L = 5) for example. When v_j ~ N(0, K/(m \cdot snr)), we have (E \|v\|_2^2)^{1/2} = (K/snr)^{1/2}. Thus, by assuming small isometry constants, we obtain from Theorem 4 an upper bound on \|x - \hat{x}\|_2, whereas the l_2-norm of the actual recovery error of MOLS is \|x - \hat{x}\|_2 = (n \cdot MSE)^{1/2} (Fig. 4(b)),
which is much smaller. The gap between the theoretical and empirical results is perhaps due to 1) the fact that our analysis is based on the RIP framework and hence is essentially a worst-case analysis, and 2) the fact that some inequalities (relaxations) used in our analysis may not be tight.

VII. CONCLUSION

In this paper, we have studied a sparse recovery algorithm called MOLS, which extends the conventional OLS algorithm by allowing multiple candidates to enter the list in each selection. Our method is inspired by the fact that sub-optimal candidates in each OLS identification step are likely to be reliable and can be selected to accelerate the convergence of the algorithm. We have demonstrated by RIP analysis that MOLS (L > 1) performs exact recovery of any K-sparse signal within K iterations if \delta_{LK} < \sqrt{L} / (\sqrt{K} + 2\sqrt{L}). In particular, for the special case of MOLS with L = 1 (i.e., the OLS case), we have shown that any K-sparse signal can be exactly recovered in K iterations under \delta_{K+1} < 1/(\sqrt{K} + 2), which is a nearly optimal condition for the OLS algorithm. We have also extended our analysis to the noisy scenario. Our result showed that stable recovery of sparse signals can be achieved with MOLS when the SNR scales linearly with the sparsity level of the signals to be recovered. In particular, for the case of OLS, we demonstrated via a counterexample that the linear scaling law for the SNR is essentially necessary. In addition, we have shown from empirical experiments that the MOLS algorithm has lower computational cost than the conventional OLS algorithm, while exhibiting improved recovery accuracy. The empirical recovery performance of MOLS is also competitive when compared with state-of-the-art recovery methods.

ACKNOWLEDGEMENT

This work is supported in part by ONR-N, NSF-III-3697, AFOSR-FA, and NSF-Bigdata-49.

APPENDIX A
PROOF OF LEMMA 6

Proof: We focus on the proof of the upper bound. Since \delta_{|S|} < 1, \Phi_S has full column rank. Suppose that \Phi_S has the singular value decomposition \Phi_S = U \Sigma V^T.
Then, from the definition of the RIP, the minimum diagonal entry of \Sigma satisfies \sigma_{min} >= \sqrt{1 - \delta_{|S|}}. Note that

\Phi_S^\dagger = (\Phi_S^T \Phi_S)^{-1} \Phi_S^T = ((U \Sigma V^T)^T (U \Sigma V^T))^{-1} (U \Sigma V^T)^T = V \Sigma^{-1} U^T, (A.1)

where \Sigma^{-1} is the diagonal matrix formed by replacing every (non-zero) diagonal entry of \Sigma by its reciprocal. Hence, all singular values of \Phi_S^\dagger are upper bounded by 1/\sigma_{min} = 1/\sqrt{1 - \delta_{|S|}}, which completes the proof.

APPENDIX B
PROOF OF PROPOSITION 1

Proof: Since P_{T^k \cup \{i\}} y and P^\perp_{T^k \cup \{i\}} y are orthogonal, \|P^\perp_{T^k \cup \{i\}} y\|_2^2 = \|y\|_2^2 - \|P_{T^k \cup \{i\}} y\|_2^2, and hence (8) is equivalent to

S^{k+1} = \arg\max_{S : |S| = L} \sum_{i \in S} \|P_{T^k \cup \{i\}} y\|_2^2. (B.1)

By noting that P_{T^k} + P^\perp_{T^k} = I, we have

P_{T^k \cup \{i\}} = P_{T^k} P_{T^k \cup \{i\}} + P^\perp_{T^k} P_{T^k \cup \{i\}}
= P_{T^k} + P^\perp_{T^k} [\Phi_{T^k}, \phi_i] ([\Phi_{T^k}, \phi_i]^T [\Phi_{T^k}, \phi_i])^{-1} [\Phi_{T^k}, \phi_i]^T
= P_{T^k} + P^\perp_{T^k} \phi_i (\phi_i^T P^\perp_{T^k} \phi_i)^{-1} \phi_i^T P^\perp_{T^k}, (B.2)

where the last equality follows from the partitioned inverse formula with blocks

M_1 = (\Phi_{T^k}^T P^\perp_i \Phi_{T^k})^{-1},
M_2 = -(\Phi_{T^k}^T P^\perp_i \Phi_{T^k})^{-1} \Phi_{T^k}^T \phi_i (\phi_i^T \phi_i)^{-1},
M_3 = -(\phi_i^T P^\perp_{T^k} \phi_i)^{-1} \phi_i^T \Phi_{T^k} (\Phi_{T^k}^T \Phi_{T^k})^{-1},
M_4 = (\phi_i^T P^\perp_{T^k} \phi_i)^{-1}.

This implies that

\|P_{T^k \cup \{i\}} y\|_2^2 (a)= \|P_{T^k} y\|_2^2 + \|P^\perp_{T^k} \phi_i (\phi_i^T P^\perp_{T^k} \phi_i)^{-1} \phi_i^T P^\perp_{T^k} y\|_2^2
(b)= \|P_{T^k} y\|_2^2 + |\phi_i^T P^\perp_{T^k} y|^2 / \|P^\perp_{T^k} \phi_i\|_2^2
(c)= \|P_{T^k} y\|_2^2 + |\phi_i^T r^k|^2 / \|P^\perp_{T^k} \phi_i\|_2^2, (B.4)

where (a) is because P_{T^k} y and P^\perp_{T^k} \phi_i (\phi_i^T P^\perp_{T^k} \phi_i)^{-1} \phi_i^T P^\perp_{T^k} y are orthogonal, (b) follows from the fact that \phi_i^T P^\perp_{T^k} y and \phi_i^T P^\perp_{T^k} \phi_i = \|P^\perp_{T^k} \phi_i\|_2^2 are scalars together with

P^\perp_{T^k} = (P^\perp_{T^k})^T = (P^\perp_{T^k})^2, (B.5)

and (c) is due to r^k = P^\perp_{T^k} y. By relating (B.1) and (B.4), we have

S^{k+1} = \arg\max_{S : |S| = L} \sum_{i \in S} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2.

Furthermore, if we write \langle \phi_i, r^k \rangle = \langle P^\perp_{T^k} \phi_i, r^k \rangle, which holds by (B.5) and r^k = P^\perp_{T^k} y, then (9) becomes

S^{k+1} = \arg\max_{S : |S| = L} \sum_{i \in S} \langle P^\perp_{T^k} \phi_i / \|P^\perp_{T^k} \phi_i\|_2, r^k \rangle^2.

This completes the proof.
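The orthogonality identity behind (B.4) — augmenting the support T^k by an index i raises the captured energy \|P_{T^k \cup \{i\}} y\|_2^2 by exactly \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2 — can be checked numerically. The following NumPy sketch verifies the equivalent form \|P^\perp_{T^k \cup \{i\}} y\|_2^2 = \|P^\perp_{T^k} y\|_2^2 - \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2 on random data; the matrix sizes, the support T, and the candidate index i are arbitrary illustrative choices, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 40
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = rng.standard_normal(m)

T = [3, 7, 11]                                   # current support T^k (arbitrary)
Q, _ = np.linalg.qr(Phi[:, T])
P = Q @ Q.T                                      # projection onto span(Phi_T)
r = y - P @ y                                    # residual r^k = P_perp y

i = 25                                           # candidate index not in T (arbitrary)
Qi, _ = np.linalg.qr(Phi[:, T + [i]])
lhs = np.linalg.norm(y - Qi @ Qi.T @ y) ** 2     # ||P_perp_{T u {i}} y||^2

phi_perp = Phi[:, i] - P @ Phi[:, i]             # P_perp phi_i
rhs = np.linalg.norm(r) ** 2 - (Phi[:, i] @ r) ** 2 / (phi_perp @ phi_perp)

assert np.isclose(lhs, rhs)
```

Because of this identity, ranking candidates by the post-augmentation residual (the OLS rule) is the same as ranking them by |\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2, which is what MOLS exploits to pick L indices at once.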
APPENDIX C
PROOF OF PROPOSITION 2

Proof: We first prove (17) and then prove (18).

1) Proof of (17): Since u_1 is the largest value of \{|\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2\}_{i \in T \setminus T^k}, we have

u_1 >= (1/\sqrt{|T \setminus T^k|}) (\sum_{i \in T \setminus T^k} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2}
(a)>= (1/\sqrt{|T \setminus T^k|}) (\sum_{i \in T \setminus T^k} \langle \phi_i, r^k \rangle^2)^{1/2}
= (1/\sqrt{|T \setminus T^k|}) \|\Phi_{T \setminus T^k}^T r^k\|_2
= (1/\sqrt{|T \setminus T^k|}) \|\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi x\|_2, (C.1)

where (a) is because \|P^\perp_{T^k} \phi_i\|_2 <= \|\phi_i\|_2 = 1. Observe that

\|\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi x\|_2 (a)= \|\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2
(b)>= \langle x_{T \setminus T^k}, \Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k} \rangle / \|x_{T \setminus T^k}\|_2
(c)= \|P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2^2 / \|x_{T \setminus T^k}\|_2
(d)>= \lambda_{min}(\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi_{T \setminus T^k}) \|x_{T \setminus T^k}\|_2
(e)>= \lambda_{min}(\Phi_{T \cup T^k}^T \Phi_{T \cup T^k}) \|x_{T \setminus T^k}\|_2
(f)>= (1 - \delta_{K + Lk - l}) \|x_{T \setminus T^k}\|_2, (C.2)

where (a) is because P^\perp_{T^k} \Phi_{T^k} = 0, (b) is from the norm inequality, (c) and (d) are from (B.5), (e) is from Lemma 5, and (f) is from the RIP. (Note that |T \cup T^k| = |T| + |T^k| - |T \cap T^k| = K + Lk - l.) Using (C.1) and (C.2), we obtain (17).

2) Proof of (18): Let F be the index set corresponding to the L largest elements in \{|\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2\}_{i \in \Omega \setminus (T \cup T^k)}. Then,

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} <= (\sum_{i \in F} \langle \phi_i, r^k \rangle^2)^{1/2} / \min_{i \in F} \|P^\perp_{T^k} \phi_i\|_2
<= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} \|\Phi_F^T r^k\|_2, (C.3)

where the last step holds because for any i \notin T^k,

\|P_{T^k} \phi_i\|_2 = \|\Phi_{T^k} \Phi_{T^k}^\dagger \phi_i\|_2 (Lemma 6)<= \|\Phi_{T^k}^T \phi_i\|_2 / \sqrt{1 - \delta_{Lk}} (Lemma 3)<= \delta_{Lk+1} / \sqrt{1 - \delta_{Lk}}, (C.4)

so that \|P^\perp_{T^k} \phi_i\|_2^2 = 1 - \|P_{T^k} \phi_i\|_2^2 >= 1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}). By noting that r^k = y - \Phi x^k = y - \Phi_{T^k} \Phi_{T^k}^\dagger y = P^\perp_{T^k} y = P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}, we have

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2}
<= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} \|\Phi_F^T P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2
<= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} (\|\Phi_F^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 + \|\Phi_F^T P_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2). (C.5)

Since F and T \setminus T^k are disjoint (i.e., F \cap (T \setminus T^k) = \emptyset), and also noting that |T \cap T^k| = l by hypothesis, we have |F| + |T \setminus T^k| = L + K - l. Using this together with Lemma 3, we have

\|\Phi_F^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 <= \delta_{L + K - l} \|x_{T \setminus T^k}\|_2. (C.6)

Moreover, since F \cap T^k = \emptyset and |F| + |T^k| = L + Lk,

\|\Phi_F^T P_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 <= \delta_{L + Lk} \|\Phi_{T^k}^\dagger \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2
= \delta_{L + Lk} \|(\Phi_{T^k}^T \Phi_{T^k})^{-1} \Phi_{T^k}^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2
(Lemma)<= \delta_{L + Lk} \|\Phi_{T^k}^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 / (1 - \delta_{Lk})
(Lemma 3)<= (\delta_{L + Lk} \delta_{Lk + K - l} / (1 - \delta_{Lk})) \|x_{T \setminus T^k}\|_2, (C.7)

where in the last inequality we have used the fact that |T^k \cup (T \setminus T^k)| = Lk + K - l. (Note that T^k and T \setminus T^k are disjoint and |T \setminus T^k| = K - l.) Substituting (C.6) and (C.7) into (C.5), we have

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} <= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} (\delta_{L + K - l} + \delta_{L + Lk} \delta_{Lk + K - l} / (1 - \delta_{Lk})) \|x_{T \setminus T^k}\|_2.
(C.8)

On the other hand, since v_L is the L-th largest value in \{|\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2\}_{i \in F}, we have

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} >= \sqrt{L} v_L, (C.9)

which, together with (C.8), implies (18).

APPENDIX D
PROOF OF (13)

Proof: Observe that the desired condition is equivalent to

\sqrt{K/L} < f(\delta_{LK}), (D.1)

where f collects the terms of the original condition that depend only on \delta_{LK}. Let g(\delta) := (1 - 2\delta)/\delta. Then one can check that

f(\delta) > g(\delta) for all \delta \in (0, 1/2). (D.2)
Hence, (D.1) is ensured by \sqrt{K/L} < (1 - 2\delta_{LK})/\delta_{LK}, or equivalently,

\delta_{LK} < \sqrt{L} / (\sqrt{K} + 2\sqrt{L}). (D.3)

Since L <= K, (D.3) is guaranteed by (13).

APPENDIX E
PROOF OF THEOREM 2

Proof: We prove Theorem 2 in two steps. First, we show that the residual power difference of MOLS satisfies

\|r^k\|_2^2 - \|r^{k+1}\|_2^2 >= ((1 - \delta_{Lk} - \delta_{Lk+1}^2) / ((1 + \delta_L)(1 - \delta_{Lk}))) \max_{S : |S| = L} \|\Phi_S^T r^k\|_2^2. (E.1)

In the second step, we show that

\max_{S : |S| = L} \|\Phi_S^T r^k\|_2^2 >= (L/K) ((1 - \delta_{K + Lk})^2 / (1 + \delta_{K + Lk})) \|r^k\|_2^2. (E.2)

The theorem is established by combining (E.1) and (E.2).

1) Proof of (E.1): First, from the definition of MOLS (see Table I), we have that for any integer k < K,

\|r^k\|_2^2 - \|r^{k+1}\|_2^2 (a)= \|P^\perp_{T^k} y\|_2^2 - \|P^\perp_{T^{k+1}} y\|_2^2 (b)= \|P_{T^{k+1}} y - P_{T^k} y\|_2^2 = \|P_{T^{k+1}} r^k\|_2^2, (E.3)

where (a) is because x^k = \arg\min_{u : supp(u) = T^k} \|y - \Phi u\|_2, and hence \Phi x^k = \Phi_{T^k} \Phi_{T^k}^\dagger y = P_{T^k} y, and (b) is because span(\Phi_{T^k}) \subseteq span(\Phi_{T^{k+1}}) so that P_{T^k} y = P_{T^k}(P_{T^{k+1}} y). Since T^{k+1} \supseteq S^{k+1}, we have span(\Phi_{T^{k+1}}) \supseteq span(\Phi_{S^{k+1}}) and \|P_{T^{k+1}} r^k\|_2^2 >= \|P_{S^{k+1}} r^k\|_2^2. Hence,

\|r^k\|_2^2 - \|r^{k+1}\|_2^2 >= \|P_{S^{k+1}} r^k\|_2^2 = \|(\Phi_{S^{k+1}}^\dagger)^T \Phi_{S^{k+1}}^T r^k\|_2^2 (Lemma 6)>= \|\Phi_{S^{k+1}}^T r^k\|_2^2 / (1 + \delta_L), (E.4)

where we used P_{S^{k+1}} = (\Phi_{S^{k+1}}^\dagger)^T \Phi_{S^{k+1}}^T. Next, we build a lower bound for \|\Phi_{S^{k+1}}^T r^k\|_2. Denote S' = \arg\max_{S : |S| = L} \|\Phi_S^T r^k\|_2. Then,

\sum_{i \in S^{k+1}} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2 (9)= \max_{S : |S| = L} \sum_{i \in S} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2
(a)>= \sum_{i \in S'} \langle \phi_i, r^k \rangle^2 = \max_{S : |S| = L} \|\Phi_S^T r^k\|_2^2, (E.5)

where (a) holds because \phi_i has unit l_2-norm and hence \|P^\perp_{T^k} \phi_i\|_2 <= 1. On the other hand,

\sum_{i \in S^{k+1}} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2 <= \sum_{i \in S^{k+1}} \langle \phi_i, r^k \rangle^2 / \min_{i \in S^{k+1}} \|P^\perp_{T^k} \phi_i\|_2^2 (C.4)<= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1} \|\Phi_{S^{k+1}}^T r^k\|_2^2. (E.6)

Combining (E.5) and (E.6) yields

\|\Phi_{S^{k+1}}^T r^k\|_2^2 >= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk})) \max_{S : |S| = L} \|\Phi_S^T r^k\|_2^2. (E.7)

Finally, using (E.4) and (E.7), we obtain (E.1).

2) Proof of (E.2): Since L <= K,

\max_{S : |S| = L} \|\Phi_S^T r^k\|_2^2 >= (L/K) \|\Phi_T^T r^k\|_2^2 (a)= (L/K) \|\Phi_{T \cup T^k}^T r^k\|_2^2
= (L/K) \|\Phi_{T \cup T^k}^T \Phi_{T \cup T^k} (x - x^k)_{T \cup T^k}\|_2^2
>= (L/K) (1 - \delta_{K + Lk})^2 \|(x - x^k)_{T \cup T^k}\|_2^2
>= (L/K) ((1 - \delta_{K + Lk})^2 / (1 + \delta_{K + Lk})) \|\Phi_{T \cup T^k} (x - x^k)_{T \cup T^k}\|_2^2
= (L/K) ((1 - \delta_{K + Lk})^2 / (1 + \delta_{K + Lk})) \|r^k\|_2^2, (E.8)

where (a) is because \Phi_{T^k}^T r^k = \Phi_{T^k}^T P^\perp_{T^k} y = (P^\perp_{T^k} \Phi_{T^k})^T y = 0. From (E.1) and (E.2), \|r^{k+1}\|_2^2 <= \alpha(k, L) \|r^k\|_2^2, where

\alpha(k, L) := 1 - (L (1 - \delta_{Lk} - \delta_{Lk+1}^2)(1 - \delta_{K + Lk})^2) / (K (1 + \delta_L)(1 - \delta_{Lk})(1 + \delta_{K + Lk})). (E.9)

Repeating this, we obtain

\|r^{k+1}\|_2^2 <= \prod_{i=0}^{k} \alpha(i, L) \|r^0\|_2^2 <= (\alpha(k, L))^{k+1} \|y\|_2^2, (E.10)

which completes the proof.
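A direct consequence of (E.3) is that the MOLS residual energy can only decrease from one iteration to the next, since the estimated support only grows. The following NumPy sketch of the MOLS iteration illustrates this monotone decay; the problem sizes, the random seed, and the helper name mols_residuals are illustrative assumptions, not the exact simulation setup of Section VI:

```python
import numpy as np

def mols_residuals(Phi, y, K, L):
    """Run K MOLS iterations: add the L indices with the largest OLS metric
    |<phi_i, r>| / ||P_perp phi_i||_2, refit by least squares, record ||r||_2."""
    m, n = Phi.shape
    T, history = [], []
    r = y.copy()
    for _ in range(K):
        P_perp = np.eye(m)
        if T:
            Q, _ = np.linalg.qr(Phi[:, T])
            P_perp -= Q @ Q.T                     # complement of span(Phi_T)
        score = np.full(n, -np.inf)
        for i in range(n):
            if i in T:
                continue
            pc = P_perp @ Phi[:, i]
            nc = np.linalg.norm(pc)
            if nc > 1e-12:
                score[i] = abs(Phi[:, i] @ r) / nc   # OLS selection metric
        T += [int(i) for i in np.argsort(score)[-L:]]  # L best candidates
        x_T, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ x_T                    # refit and update residual
        history.append(np.linalg.norm(r))
    return history

rng = np.random.default_rng(0)
m, n, K, L = 30, 60, 4, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
res = mols_residuals(Phi, Phi @ x, K, L)          # noiseless measurements
```

After K iterations the support estimate holds at most LK indices, matching the iteration count analyzed in Theorem 2, and the recorded residual norms are nonincreasing regardless of whether recovery succeeds.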
APPENDIX F
PROOF OF THEOREM 3

Proof: We consider the best K-term approximation (x^l)_{\hat{T}} of x^l and observe that

\|(x^l)_{\hat{T}} - x\|_2 = \|(x^l)_{\hat{T}} - x^l + x^l - x\|_2
(a)<= \|(x^l)_{\hat{T}} - x^l\|_2 + \|x^l - x\|_2
(b)<= 2 \|x^l - x\|_2
<= 2 \|\Phi(x^l - x)\|_2 / \sqrt{1 - \delta_{Ll + K}}
= 2 \|r^l - v\|_2 / \sqrt{1 - \delta_{Ll + K}}
<= 2 (\|r^l\|_2 + \|v\|_2) / \sqrt{1 - \delta_{Ll + K}}
<= 2 (\epsilon + \|v\|_2) / \sqrt{1 - \delta_{Ll + K}}, (F.1)
where (a) is from the triangle inequality and (b) is because (x^l)_{\hat{T}} is the best K-term approximation to x^l and hence is a better approximation than x. On the other hand,

\|(x^l)_{\hat{T}} - x\|_2 >= \|\Phi((x^l)_{\hat{T}} - x)\|_2 / \sqrt{1 + \delta_{2K}}
= \|\Phi(x^l)_{\hat{T}} - y + v\|_2 / \sqrt{1 + \delta_{2K}}
(a)>= (\|\Phi(x^l)_{\hat{T}} - y\|_2 - \|v\|_2) / \sqrt{1 + \delta_{2K}}
(b)>= (\|\Phi \hat{x} - y\|_2 - \|v\|_2) / \sqrt{1 + \delta_{2K}}
= (\|\Phi(\hat{x} - x) - v\|_2 - \|v\|_2) / \sqrt{1 + \delta_{2K}}
(c)>= (\|\Phi(\hat{x} - x)\|_2 - 2\|v\|_2) / \sqrt{1 + \delta_{2K}}
>= (\sqrt{1 - \delta_{2K}} \|\hat{x} - x\|_2 - 2\|v\|_2) / \sqrt{1 + \delta_{2K}}, (F.2)

where (a) and (c) are from the triangle inequality and (b) is because \hat{x} is supported on \hat{T} and \hat{x}_{\hat{T}} = \Phi_{\hat{T}}^\dagger y = \arg\min_u \|y - \Phi_{\hat{T}} u\|_2 (see Table I). By combining (F.1) and (F.2), we obtain the bound in Theorem 3.

APPENDIX G
PROOF OF PROPOSITION 3

Proof: We first consider the case of L = K. In this case, \hat{T} = T^1 = T and \hat{x} = x^1 = \Phi_T^\dagger y, and hence

\|\hat{x} - x\|_2 = \|x - \Phi_T^\dagger y\|_2 = \|\Phi_T^\dagger v\|_2 <= \|v\|_2 / \sqrt{1 - \delta_K}. (G.1)

Next, we prove the case of L > K. Consider the best K-term approximation (x^1)_{\hat{T}} of x^1 and observe that

\|(x^1)_{\hat{T}} - x\|_2 = \|(x^1)_{\hat{T}} - x^1 + x^1 - x\|_2
(a)<= \|(x^1)_{\hat{T}} - x^1\|_2 + \|x^1 - x\|_2
(b)<= 2 \|x^1 - x\|_2 = 2 \|x - \Phi_{T^1}^\dagger y\|_2
(c)= 2 \|\Phi_{T^1}^\dagger v\|_2 <= 2 \|v\|_2 / \sqrt{1 - \delta_L}, (G.2)

where (a) is from the triangle inequality, (b) is because (x^1)_{\hat{T}} is the best K-term approximation to x^1 and hence is a better approximation than x (note that both (x^1)_{\hat{T}} and x are K-sparse), and (c) is because T \subseteq T^1 and y = \Phi x + v. On the other hand, following the same argument as in (F.2), one can show that

\|\hat{x} - x\|_2 <= (\sqrt{1 + \delta_{2K}} \|(x^1)_{\hat{T}} - x\|_2 + 2\|v\|_2) / \sqrt{1 - \delta_{2K}}. (G.3)

Combining (G.2) and (G.3) yields the desired result.

APPENDIX H
PROOF OF PROPOSITION 4

Proof: In the following, we provide the proofs of (41) and (42), respectively.

1) Proof of (41): Since u_1 is the largest value of \{|\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2\}_{i \in T \setminus T^k}, we have, as in (C.1),

u_1 >= (1/\sqrt{|T \setminus T^k|}) \|\Phi_{T \setminus T^k}^T r^k\|_2
= (1/\sqrt{|T \setminus T^k|}) \|\Phi_{T \setminus T^k}^T P^\perp_{T^k} (\Phi x + v)\|_2
>= (1/\sqrt{|T \setminus T^k|}) (\|\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi x\|_2 - \|\Phi_{T \setminus T^k}^T P^\perp_{T^k} v\|_2), (H.1)

where the last step is from the triangle inequality. Observe that

\|\Phi_{T \setminus T^k}^T P^\perp_{T^k} v\|_2 = \|(P^\perp_{T^k} \Phi_{T \setminus T^k})^T v\|_2
(B.5)<= \sqrt{\lambda_{max}(\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi_{T \setminus T^k})} \|v\|_2
(Lemma 5)<= \sqrt{\lambda_{max}(\Phi_{T \cup T^k}^T \Phi_{T \cup T^k})} \|v\|_2
<= \sqrt{1 + \delta_{K + Lk - l}} \|v\|_2. (H.2)

Also, from (C.2), we have

\|\Phi_{T \setminus T^k}^T P^\perp_{T^k} \Phi x\|_2 >= (1 - \delta_{K + Lk - l}) \|x_{T \setminus T^k}\|_2. (H.3)

Using (H.1), (H.2), and (H.3), we obtain (41).

2) Proof of (42): Let F be the index set corresponding to the L largest elements in \{|\langle \phi_i, r^k \rangle|
/ \|P^\perp_{T^k} \phi_i\|_2\}_{i \in \Omega \setminus (T \cup T^k)}. Following (C.3), we can show that

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} <= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} \|\Phi_F^T r^k\|_2. (H.4)

Observe that

\|\Phi_F^T r^k\|_2 = \|\Phi_F^T P^\perp_{T^k} (\Phi x + v)\|_2
<= \|\Phi_F^T P^\perp_{T^k} \Phi x\|_2 + \|\Phi_F^T P^\perp_{T^k} v\|_2
= \|\Phi_F^T P^\perp_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 + \|\Phi_F^T P^\perp_{T^k} v\|_2
<= \|\Phi_F^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 + \|\Phi_F^T P_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 + \|\Phi_F^T P^\perp_{T^k} v\|_2. (H.5)

Following (C.6) and (C.7), we have

\|\Phi_F^T \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 <= \delta_{L + K - l} \|x_{T \setminus T^k}\|_2, (H.6)
\|\Phi_F^T P_{T^k} \Phi_{T \setminus T^k} x_{T \setminus T^k}\|_2 <= (\delta_{L + Lk} \delta_{Lk + K - l} / (1 - \delta_{Lk})) \|x_{T \setminus T^k}\|_2. (H.7)
Also,

\|\Phi_F^T P^\perp_{T^k} v\|_2 = \|(P^\perp_{T^k} \Phi_F)^T v\|_2
(B.5)<= \sqrt{\lambda_{max}(\Phi_F^T P^\perp_{T^k} \Phi_F)} \|v\|_2
(Lemma 5)<= \sqrt{\lambda_{max}(\Phi_{F \cup T^k}^T \Phi_{F \cup T^k})} \|v\|_2
<= \sqrt{1 + \delta_{|F \cup T^k|}} \|v\|_2 = \sqrt{1 + \delta_{L + Lk}} \|v\|_2. (H.8)

Using (H.4), (H.5), (H.6), (H.7), and (H.8), we have

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} <= (1 - \delta_{Lk+1}^2 / (1 - \delta_{Lk}))^{-1/2} ((\delta_{L + K - l} + \delta_{L + Lk} \delta_{Lk + K - l} / (1 - \delta_{Lk})) \|x_{T \setminus T^k}\|_2 + \sqrt{1 + \delta_{L + Lk}} \|v\|_2). (H.9)

On the other hand, by noting that v_L is the L-th largest value in \{|\langle \phi_i, r^k \rangle| / \|P^\perp_{T^k} \phi_i\|_2\}_{i \in F}, we have

(\sum_{i \in F} \langle \phi_i, r^k \rangle^2 / \|P^\perp_{T^k} \phi_i\|_2^2)^{1/2} >= \sqrt{L} v_L. (H.10)

Using (H.9) and (H.10), we obtain (42).

APPENDIX I
PROOF OF (46)

Proof: Rearranging the terms in (45) yields an equivalent condition, call it (I.1), that compares the signal term \|x_{T \setminus T^k}\|_2 against the noise term \|v\|_2. In the following, we show that (I.1) is guaranteed by (46). First, since

\|x_{T \setminus T^k}\|_2 >= \sqrt{|T \setminus T^k|} \min_j |x_j| = \kappa \sqrt{(K - l)/K} \|x\|_2 >= \kappa \sqrt{(K - l)/K} \cdot \|\Phi x\|_2 / \sqrt{1 + \delta_K} = \kappa \sqrt{(K - l) \cdot snr / (K (1 + \delta_K))} \|v\|_2,

by denoting \delta := \delta_{LK} and \tau := \kappa^2 (K - l) snr / (K (1 + \delta_K)), we can rewrite (I.1) as an inequality involving only \delta and \tau. One can verify that this inequality holds whenever \delta lies below a threshold that increases with \tau; translating the threshold back into a condition on the snr shows that (I.1) is satisfied whenever the snr exceeds a constant multiple of the sparsity K, with the constant depending on \kappa, l, and the isometry constants. Finally, since L <= K, this requirement is guaranteed by (46), which completes the proof.

REFERENCES

[1] D. L. Donoho and P. B. Stark, "Uncertainty principles and signal recovery," SIAM Journal on Applied Mathematics, vol. 49, no. 3, pp. 906-931, 1989.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[3] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inform. Theory, vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[4] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[5] S. S. Chen, D. L.
Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.
[6] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Annu. Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 1993, vol. 1, pp. 40-44.
[7] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, Dec. 1993.
[8] S. Chen, S. A. Billings, and W. Luo, "Orthogonal least squares methods and their application to non-linear system identification," International Journal of Control, vol. 50, no. 5, pp. 1873-1896, 1989.
[9] D. L. Donoho, I. Drori, Y. Tsaig, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," IEEE Trans. Inform. Theory, vol. 58, no. 2, pp. 1094-1121, Feb. 2012.
[10] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 310-316, Apr. 2010.
More informationExact Low-rank Matrix Recovery via Nonconvex M p -Minimization
Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Lingchen Kong and Naihua Xiu Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, People s Republic of China E-mail:
More informationElaine T. Hale, Wotao Yin, Yin Zhang
, Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2
More informationZ Algorithmic Superpower Randomization October 15th, Lecture 12
15.859-Z Algorithmic Superpower Randomization October 15th, 014 Lecture 1 Lecturer: Bernhard Haeupler Scribe: Goran Žužić Today s lecture is about finding sparse solutions to linear systems. The problem
More informationORTHOGONAL MATCHING PURSUIT (OMP) is a
1076 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 64, NO 4, FEBRUARY 15, 2016 Recovery of Sparse Signals via Generalized Orthogonal Matching Pursuit: A New Analysis Jian Wang, Student Member, IEEE, Suhyuk
More informationNOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS. 1. Introduction. We consider first-order methods for smooth, unconstrained
NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS 1. Introduction. We consider first-order methods for smooth, unconstrained optimization: (1.1) minimize f(x), x R n where f : R n R. We assume
More informationMATCHING PURSUIT WITH STOCHASTIC SELECTION
2th European Signal Processing Conference (EUSIPCO 22) Bucharest, Romania, August 27-3, 22 MATCHING PURSUIT WITH STOCHASTIC SELECTION Thomas Peel, Valentin Emiya, Liva Ralaivola Aix-Marseille Université
More informationExact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations
Exact Topology Identification of arge-scale Interconnected Dynamical Systems from Compressive Observations Borhan M Sanandaji, Tyrone Vincent, and Michael B Wakin Abstract In this paper, we consider the
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationACCORDING to Shannon s sampling theorem, an analog
554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,
More informationLecture Notes 9: Constrained Optimization
Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationCS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5
CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given
More informationThresholds for the Recovery of Sparse Solutions via L1 Minimization
Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu
More informationBias-free Sparse Regression with Guaranteed Consistency
Bias-free Sparse Regression with Guaranteed Consistency Wotao Yin (UCLA Math) joint with: Stanley Osher, Ming Yan (UCLA) Feng Ruan, Jiechao Xiong, Yuan Yao (Peking U) UC Riverside, STATS Department March
More informationInverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France
Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationSparse Vector Distributions and Recovery from Compressed Sensing
Sparse Vector Distributions and Recovery from Compressed Sensing Bob L. Sturm Department of Architecture, Design and Media Technology, Aalborg University Copenhagen, Lautrupvang 5, 275 Ballerup, Denmark
More informationCOMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION
COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION By Mazin Abdulrasool Hameed A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for
More informationThe uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008
The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008 Emmanuel Candés (Caltech), Terence Tao (UCLA) 1 Uncertainty principles A basic principle
More informationGreedy Sparsity-Constrained Optimization
Greedy Sparsity-Constrained Optimization Sohail Bahmani, Petros Boufounos, and Bhiksha Raj 3 sbahmani@andrew.cmu.edu petrosb@merl.com 3 bhiksha@cs.cmu.edu Department of Electrical and Computer Engineering,
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57
More informationA Continuation Approach to Estimate a Solution Path of Mixed L2-L0 Minimization Problems
A Continuation Approach to Estimate a Solution Path of Mixed L2-L Minimization Problems Junbo Duan, Charles Soussen, David Brie, Jérôme Idier Centre de Recherche en Automatique de Nancy Nancy-University,
More informationThe Iteration-Tuned Dictionary for Sparse Representations
The Iteration-Tuned Dictionary for Sparse Representations Joaquin Zepeda #1, Christine Guillemot #2, Ewa Kijak 3 # INRIA Centre Rennes - Bretagne Atlantique Campus de Beaulieu, 35042 Rennes Cedex, FRANCE
More informationPHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN
PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the
More informationQuantized Iterative Hard Thresholding:
Quantized Iterative Hard Thresholding: ridging 1-bit and High Resolution Quantized Compressed Sensing Laurent Jacques, Kévin Degraux, Christophe De Vleeschouwer Louvain University (UCL), Louvain-la-Neuve,
More informationRSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems
1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical
More informationRecovering overcomplete sparse representations from structured sensing
Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix
More informationThe Pros and Cons of Compressive Sensing
The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal
More informationarxiv: v1 [math.na] 26 Nov 2009
Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department
More informationCoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles
CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal
More informationOptimization methods
Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,
More informationIEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 6, JUNE
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 6, JUNE 2010 2935 Variance-Component Based Sparse Signal Reconstruction and Model Selection Kun Qiu, Student Member, IEEE, and Aleksandar Dogandzic,
More informationA Generalized Restricted Isometry Property
1 A Generalized Restricted Isometry Property Jarvis Haupt and Robert Nowak Department of Electrical and Computer Engineering, University of Wisconsin Madison University of Wisconsin Technical Report ECE-07-1
More informationRobust Principal Component Analysis
ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationIterative Hard Thresholding for Compressed Sensing
Iterative Hard Thresholding for Compressed Sensing Thomas lumensath and Mike E. Davies 1 Abstract arxiv:0805.0510v1 [cs.it] 5 May 2008 Compressed sensing is a technique to sample compressible signals below
More informationObservability of a Linear System Under Sparsity Constraints
2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear
More informationOn the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals
On the l 1 -Norm Invariant Convex -Sparse Decomposition of Signals arxiv:1305.6021v2 [cs.it] 11 Nov 2013 Guangwu Xu and Zhiqiang Xu Abstract Inspired by an interesting idea of Cai and Zhang, we formulate
More informationRobust Sparse Recovery via Non-Convex Optimization
Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn
More informationSparse linear models and denoising
Lecture notes 4 February 22, 2016 Sparse linear models and denoising 1 Introduction 1.1 Definition and motivation Finding representations of signals that allow to process them more effectively is a central
More informationRui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China
Acta Mathematica Sinica, English Series May, 015, Vol. 31, No. 5, pp. 755 766 Published online: April 15, 015 DOI: 10.1007/s10114-015-434-4 Http://www.ActaMath.com Acta Mathematica Sinica, English Series
More informationExact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University
More information