Generalized Orthogonal Matching Pursuit
Generalized Orthogonal Matching Pursuit

Jian Wang, Seokbeop Kwon and Byonghyo Shim
Information System Laboratory, School of Information and Communication, Korea University, Seoul, Korea
arXiv: v1 [cs.IT] 29 Nov 2011

Abstract: As a greedy algorithm to recover sparse signals from compressed measurements, the orthogonal matching pursuit (OMP) algorithm has received much attention in recent years. In this paper, we introduce an extension of the OMP for pursuing efficiency in reconstructing sparse signals. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of the OMP in the sense that multiple indices are identified per iteration. Owing to the selection of multiple correct indices, the gOMP algorithm finishes with a much smaller number of iterations than the OMP. We show that the gOMP can perfectly reconstruct any K-sparse signal (K > 1), provided that the sensing matrix satisfies the RIP with δ_{NK} < √N/(√K + 2√N). We also demonstrate by empirical simulations that the gOMP has excellent recovery performance comparable to the l1-minimization technique, with fast processing speed and competitive computational complexity.

Index Terms: Compressed sensing (CS), orthogonal matching pursuit (OMP), generalized orthogonal matching pursuit (gOMP), restricted isometric property (RIP).

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) and the research grant from the second BK21 project.
I. INTRODUCTION

As a paradigm to acquire sparse signals at a rate significantly below the Nyquist rate, compressive sensing has received much attention in recent years [1]-[17]. The goal of compressive sensing is to recover the sparse vector using a small number of linearly transformed measurements. The process of acquiring compressed measurements is referred to as sensing, while that of recovering the original sparse signals from compressed measurements is called reconstruction. In the sensing operation, a K-sparse signal vector x, i.e., an n-dimensional vector having at most K non-zero elements, is transformed into m-dimensional measurements y via a matrix multiplication with Φ. This process is expressed as

y = Φx. (1)

Since n > m in most compressive sensing scenarios, the system in (1) is an underdetermined system having more unknowns than observations, and hence it is in general impossible to obtain an accurate reconstruction of the original input x using a conventional inverse transform of Φ. However, with prior information on the signal sparsity and a condition imposed on Φ, x can be reconstructed by solving the l1-minimization problem [16]:

min_x ‖x‖₁ subject to Φx = y. (2)

A widely used condition on Φ ensuring the exact recovery of x is called the restricted isometric property (RIP) [12]. A sensing matrix Φ is said to satisfy the RIP of order K if there exists a constant δ ∈ (0, 1) such that

(1 − δ)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ)‖x‖₂² (3)

for any K-sparse vector x (‖x‖₀ ≤ K). In particular, the minimum of all constants δ satisfying (3) is referred to as the isometry constant δ_K. It is now well known that if δ_{2K} < √2 − 1 [16], then x can be perfectly recovered using l1-minimization.
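The sensing model in (1) is easy to reproduce numerically. The following sketch (an illustration, not from the paper; the dimensions follow the simulation setup of Section IV) builds a Gaussian sensing matrix and a K-sparse vector, then forms the compressed measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 256, 128, 10

# Gaussian sensing matrix with i.i.d. N(0, 1/m) entries
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# K-sparse signal: random support, Gaussian nonzero entries
x = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
x[support] = rng.normal(size=K)

# compressed measurements: m-dimensional, with m < n
y = Phi @ x
```

Recovering x from y and Φ alone is the underdetermined problem the rest of the paper addresses.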
While the l1-norm is convex and hence the problem can be solved via a linear programming (LP) technique, the complexity associated with the LP is cubic in the size of the original vector to be recovered (i.e., O(n³)), so the complexity is burdensome for many applications.
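For concreteness, (2) can be recast as a standard LP by splitting the variable into (x, t) with |x_i| ≤ t_i, a textbook reformulation not specific to this paper. A minimal sketch using `scipy.optimize.linprog` (illustrative only; the function name and problem sizes are ours):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    # min ||x||_1  s.t.  Phi x = y, as an LP over z = [x; t]:
    #   minimize sum(t)  subject to  x - t <= 0, -x - t <= 0, Phi x = y
    m, n = Phi.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([Phi, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
m, n, K = 25, 40, 3
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x_hat = basis_pursuit(Phi, Phi @ x)
```

Note that the LP has 2n variables and 2n inequality constraints, which is where the unfavorable scaling in n originates.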
Recently, greedy algorithms sequentially investigating the support of x have received considerable attention as cost-effective alternatives to the LP approach. Algorithms in this category include orthogonal matching pursuit (OMP) [11], regularized OMP (ROMP) [2], stagewise OMP (StOMP) [13], subspace pursuit (SP) [18], and compressive sampling matching pursuit (CoSaMP) [19]. As a representative method in the greedy algorithm family, the OMP has been widely used due to its simplicity and competitive performance. Tropp and Gilbert [11] showed that, for a K-sparse vector x and an m × n Gaussian sensing matrix Φ, the OMP recovers x from y = Φx with overwhelming probability, provided the number of measurements is on the order of K log n. Wakin and Davenport showed that the OMP can reconstruct a K-sparse vector if δ_{K+1} < 1/(3√K) [20], and Wang and Shim recently provided an improved condition δ_{K+1} < 1/(√K + 1) [21].

The main principle behind the OMP is simple and intuitive: in each iteration, the column of Φ maximally correlated with the residual is chosen (identification), the index of this column is added to the list (augmentation), and then the contribution of the columns in the list is eliminated from the measurements, generating a new residual used for the next iteration (residual update). Among these, the computational complexity of the OMP is dominated by the identification and residual update steps. In the k-th iteration, the identification requires a matrix-vector multiplication between Φᵀ ∈ R^{n×m} and r_{k−1} ∈ R^m, so the number of floating point operations (flops) is (2m − 1)n. The main operation of the residual update is to compute the estimate of x, which is done by obtaining the LS solution; the required flops are approximately 4km when the modified Gram-Schmidt (MGS) algorithm is applied [22]. In addition, 2km flops are needed to perform the residual update itself. Considering that the algorithm requires K iterations, the total flop count of the OMP is about 2Kmn + 3K²m.
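The identification/augmentation/residual-update loop described above can be sketched in a few lines. This is an illustrative implementation (not the authors' code), assuming the sparsity K is known; `np.linalg.lstsq` stands in for the MGS-based LS solver discussed in the text:

```python
import numpy as np

def omp(Phi, y, K):
    """Plain OMP: one index per iteration."""
    m, n = Phi.shape
    r, support = y.copy(), []
    x_ls = np.zeros(0)
    for _ in range(K):
        # identification: column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ r)))
        support.append(j)                        # augmentation
        # least-squares estimate over the chosen columns
        x_ls, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_ls           # residual update
    x_hat = np.zeros(n)
    x_hat[support] = x_ls
    return x_hat

rng = np.random.default_rng(0)
m, n, K = 128, 256, 5
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x_hat = omp(Phi, Phi @ x, K)
```

Because the updated residual is orthogonal to every selected column, an index already in the list has (numerically) zero correlation and is never re-selected.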
Clearly, the sparsity K plays an important role in the complexity of the OMP. In order to observe the effect of K on the complexity, we measure the running time of the OMP as a function of n for three distinct sparsity levels (K/n = 0.02, 0.1, and 0.2). As shown in Fig. 1, the running time for K/n = 0.2 is significantly larger than that for K/n = 0.02, since the cost of the orthogonalization process increases quadratically with the number of iterations. When the signal being recovered is not very sparse, therefore, the OMP may not be an excellent choice. There have been some studies on modifications, mainly of the identification step of the OMP, to improve the computational efficiency and recovery performance. In [13], a method identifying more than one index in each iteration was proposed. In this approach, referred to
Fig. 1. Running time of the OMP as a function of n for K/n = 0.02, 0.1, and 0.2 (m is set to the closest integer to 0.7n). The running time is the sum of 1000 independent experiments measured using a MATLAB program under a quad-core 64-bit processor and Windows 7 environment.

as the StOMP, indices whose correlation magnitudes exceed a deliberately designed threshold are chosen. It is shown that, while achieving performance comparable to l1-minimization, StOMP runs much faster than both the OMP and the l1-minimization technique [13]. In [2], another variation of the OMP, called ROMP, was proposed. After choosing a set of K indices with largest correlation in magnitude, ROMP narrows down the candidates by selecting a subset satisfying a predefined regularization rule. It is shown that the ROMP algorithm exactly recovers K-sparse signals under δ_{2K} < 0.03/√(log K) [23]. Our approach lies on the same ground as the StOMP and ROMP in the sense that we pursue a reduction of complexity and a speed-up of the algorithm by choosing multiple indices in each iteration. While previous efforts employ special treatment on the
identification step, such as thresholding [13] or regularization [2], the proposed method pursues a direct extension of the OMP by choosing the indices corresponding to the N (N ≥ 1) largest correlations in magnitude. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of the OMP and hence embraces the OMP as a special case (N = 1). Owing to the selection of multiple indices, multiple correct indices (i.e., indices in the support set) are added to the list, and hence the algorithm finishes with a much smaller number of iterations than the OMP. Indeed, in both empirical simulations and complexity analysis, we observe that the gOMP achieves a substantial reduction in complexity with competitive reconstruction performance.

The primary contributions of this paper are twofold:

- We present an extension of the OMP, termed gOMP, for improving both complexity and recovery performance. By nature, the proposed gOMP lends itself to parallel processing. We show from empirical simulation that the recovery performance of the gOMP is much better than the OMP and also comparable to the LP technique as well as modified OMP algorithms (e.g., CoSaMP and StOMP).
- We develop a perfect recovery condition of the gOMP. To be specific, we show that the RIP of order NK with δ_{NK} < √N/(√K + 2√N) (K > 1) is sufficient for the gOMP to exactly recover any K-sparse vector within K iterations (Theorem 3.8). Also, as a special case of the gOMP, we show that a sufficient condition for the OMP is given by δ_{K+1} < 1/(√K + 1).

The rest of this paper is organized as follows. In Section II, we introduce the proposed gOMP algorithm. In Section III, we provide the RIP based analysis of the gOMP guaranteeing the perfect reconstruction of K-sparse signals. We also revisit the OMP algorithm as a special case of the gOMP and develop a sufficient condition ensuring the recovery of K-sparse signals.
In Section IV, we provide empirical experiments on the reconstruction performance and complexity of the gOMP. Concluding remarks are given in Section V.

We briefly summarize the notations used in this paper. Let Ω = {1, 2, ..., n}; then T = {i | i ∈ Ω, x_i ≠ 0} denotes the support of the vector x. For D ⊆ Ω, |D| is the cardinality of D. T − D = T\(T ∩ D) is the set of all elements contained in T but not in D. x_D ∈ R^{|D|} is the restriction of the vector x to the elements with indices in D. Φ_D ∈ R^{m×|D|} is the submatrix of Φ that only contains the columns indexed by D. If Φ_D has full column rank, then Φ†_D = (Φᵀ_D Φ_D)^{−1} Φᵀ_D is the pseudoinverse of Φ_D. span(Φ_D) is the span of the columns of Φ_D. P_D = Φ_D Φ†_D is the projection onto span(Φ_D). P⊥_D = I − P_D is the projection onto the orthogonal complement of span(Φ_D).
II. GOMP ALGORITHM

In each iteration of the gOMP algorithm, the correlations between the columns of Φ and the modified measurements (residual) are compared, and the indices of the N columns with maximal correlation are chosen as elements of the estimated support set Λ_k. Hence, when N = 1, gOMP reduces to the OMP. Denoting the chosen indices as φ_k(1), ..., φ_k(N), the extended support set at the k-th iteration becomes Λ_k = Λ_{k−1} ∪ {φ_k(1), ..., φ_k(N)}. After obtaining the LS solution x̂_{Λk} = arg min_x ‖y − Φ_{Λk} x‖₂ = Φ†_{Λk} y, the residual r_k is updated by subtracting Φ_{Λk} x̂_{Λk} from the measurements y. In other words, the projection of y onto the orthogonal complement of span(Φ_{Λk}) becomes the new residual (i.e., r_k = P⊥_{Λk} y). These operations are repeated until either the iteration number reaches the maximum k_max = min(K, m/N) or the l2-norm of the residual falls below a prespecified threshold (‖r_k‖₂ ≤ ε) (see Fig. 2).

It is worth mentioning that the residual r_k of the gOMP is orthogonal to the columns of Φ_{Λk}, since

⟨Φ_{Λk}, r_k⟩ = ⟨Φ_{Λk}, P⊥_{Λk} y⟩ (4)
= Φᵀ_{Λk} P⊥_{Λk} y (5)
= Φᵀ_{Λk} (P⊥_{Λk})ᵀ y (6)
= (P⊥_{Λk} Φ_{Λk})ᵀ y = 0, (7)

where (6) follows from the symmetry of P⊥_{Λk} (P⊥_{Λk} = (P⊥_{Λk})ᵀ) and (7) is due to P⊥_{Λk} Φ_{Λk} = (I − P_{Λk})Φ_{Λk} = Φ_{Λk} − Φ_{Λk} Φ†_{Λk} Φ_{Λk} = 0.¹ It is clear from this observation that indices in Λ_k cannot be re-selected in the succeeding iterations and the cardinality of Λ_k is simply Nk. When the iteration loop of the gOMP is finished, therefore, it is possible that the final support set Λ contains indices not in T. Note that, even in this situation, the final result is unaffected and

¹ This property is satisfied only when Φ_{Λk} has full column rank, which is true under Nk ≤ m in the gOMP operation.
Fig. 2. Schematic diagram of the gOMP algorithm (support elements selection, augmentation, estimation, and residual update, repeated until k = min{K, m/N} or ‖r_k‖₂ ≤ ε).

the original signal is recovered, because

x̂_Λ = arg min_x ‖y − Φ_Λ x‖₂ (8)
= Φ†_Λ y (9)
= (Φᵀ_Λ Φ_Λ)^{−1} Φᵀ_Λ Φ_T x_T (10)
= (Φᵀ_Λ Φ_Λ)^{−1} Φᵀ_Λ Φ_Λ x_Λ (11)
= x_Λ, (12)

where (11) follows from the fact that x_{Λ−T} = 0, so that Φ_T x_T = Φ_Λ x_Λ. From this observation, we deduce that as long as at least one correct index is found in each iteration of the gOMP, we can ensure that the
original signal is perfectly recovered within K iterations. In practice, however, the number of correct indices selected per iteration is usually more than one, so the required number of iterations is much smaller than K. The whole procedure of the gOMP algorithm is summarized in Table I.

TABLE I
GOMP ALGORITHM

Input: measurements y, sensing matrix Φ, sparsity K, number of indices N for each selection.
Initialize: iteration count k = 0, residual vector r_0 = y, estimated support set Λ_0 = ∅.
While ‖r_k‖₂ > ε and k < min{K, m/N}:
  k = k + 1.
  (Support elements selection) For i = 1, 2, ..., N:
    φ(i) = arg max_{j ∈ Ω\{Λ_{k−1}, φ(i−1), ..., φ(1)}} |⟨r_{k−1}, ϕ_j⟩|.
  (Augmentation) Λ_k = Λ_{k−1} ∪ {φ(1), ..., φ(N)}.
  (Estimation of x̂_{Λk}) x̂_{Λk} = arg min_x ‖y − Φ_{Λk} x‖₂.
  (Residual update) r_k = y − Φ_{Λk} x̂_{Λk}.
End
Output: x̂ = arg min_{x: supp(x)=Λk} ‖y − Φx‖₂.

III. RIP BASED RECOVERY CONDITION ANALYSIS

In this section, we analyze the RIP based condition under which the gOMP can perfectly recover K-sparse signals. First, we analyze the condition ensuring a success at the first iteration (k = 1). Success means that at least one correct index is chosen in the iteration. Next, we study the condition ensuring success in the non-initial iterations (k > 1). Combining the two conditions, we obtain the sufficient condition for the gOMP algorithm guaranteeing the perfect recovery of K-sparse signals. The following lemmas are useful in our analysis.
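The steps in Table I can be transcribed directly into a short sketch. This is an illustration under our own naming (not the authors' code): `np.argpartition` selects the N largest correlations, and `np.linalg.lstsq` replaces the recycled-QR solver analyzed later in Section IV-C:

```python
import numpy as np

def gomp(Phi, y, K, N=5, eps=1e-6):
    """gOMP sketch following Table I: N indices per iteration."""
    m, n = Phi.shape
    r = y.copy()
    Lam = np.array([], dtype=int)
    k, kmax = 0, min(K, m // N)
    while np.linalg.norm(r) > eps and k < kmax:
        k += 1
        c = np.abs(Phi.T @ r)
        c[Lam] = -1.0                        # already-chosen indices stay excluded
        picks = np.argpartition(c, -N)[-N:]  # support elements selection (N largest)
        Lam = np.union1d(Lam, picks)         # augmentation
        x_ls, *_ = np.linalg.lstsq(Phi[:, Lam], y, rcond=None)   # estimation
        r = y - Phi[:, Lam] @ x_ls           # residual update
    x_hat = np.zeros(n)
    x_hat[Lam] = x_ls
    return x_hat

rng = np.random.default_rng(0)
m, n, K, N = 128, 256, 10, 5
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x_hat = gomp(Phi, Phi @ x, K, N)
```

Masking the chosen indices before the partial sort mirrors the fact, shown in (4)-(7), that previously selected columns have zero correlation with the residual and are never re-selected.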
Lemma 3.1 (Lemma 3 in [12], [18]): If the sensing matrix satisfies the RIP of both orders K₁ and K₂, then δ_{K1} ≤ δ_{K2} for any K₁ ≤ K₂. This property is referred to as the monotonicity of the isometric constant.

Lemma 3.2 (Consequences of RIP [3], [12], [19]): For I ⊆ Ω, if δ_{|I|} < 1, then for any u ∈ R^{|I|},

(1 − δ_{|I|})‖u‖₂ ≤ ‖Φᵀ_I Φ_I u‖₂ ≤ (1 + δ_{|I|})‖u‖₂,
(1/(1 + δ_{|I|}))‖u‖₂ ≤ ‖(Φᵀ_I Φ_I)^{−1} u‖₂ ≤ (1/(1 − δ_{|I|}))‖u‖₂.

Lemma 3.3 (Lemma 2.1 in [16] and Lemma 1 in [18]): Let I₁, I₂ ⊆ Ω be two disjoint sets (I₁ ∩ I₂ = ∅). If δ_{|I1|+|I2|} < 1, then

‖Φᵀ_{I1} Φu‖₂ = ‖Φᵀ_{I1} Φ_{I2} u_{I2}‖₂ ≤ δ_{|I1|+|I2|} ‖u‖₂

holds for any u supported on I₂.

A. Condition for Success at the Initial Iteration

As mentioned, if at least one index is correct among the N indices selected, we say that the gOMP makes a success in the iteration. The following theorem provides a sufficient condition guaranteeing the success of the gOMP in the first iteration.

Theorem 3.4: Suppose x ∈ R^n is a K-sparse signal; then the gOMP algorithm makes a success in the first iteration if

δ_{K+N} < √N/(√K + √N). (13)

Proof: Let Λ₁ denote the set of N indices chosen in the first iteration. Then the elements of Φᵀ_{Λ1} y are the N most significant elements of Φᵀ y, and thus

(1/√N)‖Φᵀ_{Λ1} y‖₂ = max_{|I|=N} √((1/N) Σ_{i∈I} ⟨ϕ_i, y⟩²), (14)
where ϕ_i denotes the i-th column of Φ. Further, since the average of the N largest squared correlations is no smaller than the average over any K ≥ N of them, we have

(1/√N)‖Φᵀ_{Λ1} y‖₂ = max_{|I|=N} √((1/N) Σ_{i∈I} ⟨ϕ_i, y⟩²) (15)
≥ max_{|I|=K} √((1/K) Σ_{i∈I} ⟨ϕ_i, y⟩²) (16)
≥ √((1/K) Σ_{i∈T} ⟨ϕ_i, y⟩²) (17)
= (1/√K)‖Φᵀ_T y‖₂. (18)

This, together with y = Φ_T x_T, gives

(1/√K)‖Φᵀ_T y‖₂ = (1/√K)‖Φᵀ_T Φ_T x_T‖₂ (19)
≥ (1/√K)(1 − δ_K)‖x‖₂, (20)

where (20) is from Lemma 3.2.

On the other hand, when no correct index is chosen in the first iteration (i.e., Λ₁ ∩ T = ∅),

(1/√N)‖Φᵀ_{Λ1} y‖₂ = (1/√N)‖Φᵀ_{Λ1} Φ_T x_T‖₂ ≤ (1/√N) δ_{K+N} ‖x‖₂, (21)

where the inequality follows from Lemma 3.3. The last inequality contradicts (20) if

(1/√N) δ_{K+N} ‖x‖₂ < (1/√K)(1 − δ_K)‖x‖₂. (22)

Note that, under (22), at least one correct index is chosen in the first iteration. Since δ_K ≤ δ_{K+N} by Lemma 3.1, (22) holds true when

(1/√N) δ_{K+N} ‖x‖₂ < (1/√K)(1 − δ_{K+N})‖x‖₂. (23)

Equivalently,

δ_{K+N} < √N/(√K + √N). (24)

In summary, if δ_{K+N} < √N/(√K + √N), then Λ₁ contains at least one element of T in the first iteration of the gOMP.
Fig. 3. Set diagram of Ω, T (true support set), and Λ_k (estimated support set).

B. Condition for Success in Non-Initial Iterations

In this subsection, we investigate the condition guaranteeing the success of the gOMP in non-initial iterations.

Theorem 3.5: Suppose the gOMP has performed k iterations (1 ≤ k < K) successfully. Then, under the condition

δ_{NK} < √N/(√K + 2√N), (25)

the gOMP will make a success at the (k+1)-th iteration.

As mentioned, newly selected indices do not overlap with previously selected ones, and hence |Λ_k| = Nk. Also, under the hypothesis that the gOMP has performed k iterations successfully, Λ_k contains at least k correct indices. Therefore, the number of correct indices l in Λ_k satisfies l = |T ∩ Λ_k| ≥ k. Note that we only consider the case where Λ_k does not include all correct indices (l < K), since otherwise the reconstruction task is already finished.² Hence, as depicted in Fig. 3, the set containing the rest of the correct indices is nonempty (T − Λ_k ≠ ∅).

The key ingredients in our proof are 1) the upper bound α_N of the N-th largest correlation in magnitude between r_k and the columns indexed by F = Ω\(Λ_k ∪ T) (i.e., the set of remaining

² When all the correct indices are chosen (T ⊆ Λ_k), the residual r_k = 0 and hence the gOMP algorithm is already finished.
incorrect indices) and 2) the lower bound β₁ of the largest correlation in magnitude between r_k and the columns whose indices belong to T − Λ_k. If β₁ is larger than α_N, then β₁ is contained in the top N among all values of |⟨ϕ_j, r_k⟩|, and hence at least one correct index is chosen in the (k+1)-th iteration. The following two lemmas provide the upper bound of α_N and the lower bound of β₁, respectively.

Lemma 3.6: Let α_i = |⟨ϕ_{φ(i)}, r_k⟩| where φ(i) = arg max_{j ∈ F\{φ(1), ..., φ(i−1)}} |⟨ϕ_j, r_k⟩|, so that the α_i are ordered in magnitude (α₁ ≥ α₂ ≥ ...). Then, in the (k+1)-th iteration of the gOMP algorithm, α_N, the N-th largest correlation in magnitude between r_k and {ϕ_i}_{i∈F}, satisfies

α_N ≤ (1/√N)(δ_{N+K−l} + δ_{N+Nk} δ_{Nk+K−l}/(1 − δ_{Nk})) ‖x_{T−Λk}‖₂. (26)

Proof: See Appendix A.

Lemma 3.7: Let β_i = |⟨ϕ_{φ(i)}, r_k⟩| where φ(i) = arg max_{j ∈ (T−Λk)\{φ(1), ..., φ(i−1)}} |⟨ϕ_j, r_k⟩|, so that the β_i are ordered in magnitude (β₁ ≥ β₂ ≥ ...). Then, in the (k+1)-th iteration of the gOMP algorithm, β₁, the largest correlation in magnitude between r_k and {ϕ_i}_{i∈T−Λk}, satisfies

β₁ ≥ (1 − δ_{K−l} − (1 + δ_{Nk}) δ²_{Nk+K−l}/(1 − δ_{Nk})²) ‖x_{T−Λk}‖₂/√(K − l). (27)

Proof: See Appendix B.

Proof of Theorem 3.5:

Proof: A sufficient condition under which at least one correct index is selected at the (k+1)-th step can be described as

α_N < β₁. (28)

Noting that 1 ≤ K − l < K and 1 ≤ k < K, and using the monotonicity of the restricted isometric constant in Lemma 3.1, we have

K − l < NK ⇒ δ_{K−l} ≤ δ_{NK},
N + K − l ≤ NK ⇒ δ_{N+K−l} ≤ δ_{NK},
Nk + K − l ≤ NK ⇒ δ_{Nk+K−l} ≤ δ_{NK},
Nk < NK ⇒ δ_{Nk} ≤ δ_{NK},
N + Nk ≤ NK ⇒ δ_{N+Nk} ≤ δ_{NK}. (29)
From Lemma 3.6 and (29), we have

α_N ≤ (1/√N)(δ_{N+K−l} + δ_{N+Nk} δ_{Nk+K−l}/(1 − δ_{Nk})) ‖x_{T−Λk}‖₂
≤ (1/√N)(δ_{NK} + δ²_{NK}/(1 − δ_{NK})) ‖x_{T−Λk}‖₂. (30)

Also, from Lemma 3.7 and (29), we have

β₁ ≥ (1 − δ_{K−l} − (1 + δ_{Nk}) δ²_{Nk+K−l}/(1 − δ_{Nk})²) ‖x_{T−Λk}‖₂/√(K − l)
≥ (1 − δ_{NK} − (1 + δ_{NK}) δ²_{NK}/(1 − δ_{NK})²) ‖x_{T−Λk}‖₂/√(K − l). (31)

Using (30) and (31), we obtain a sufficient condition for (28) as

(1 − δ_{NK} − (1 + δ_{NK}) δ²_{NK}/(1 − δ_{NK})²) ‖x_{T−Λk}‖₂/√(K − l) > (1/√N)(δ_{NK} + δ²_{NK}/(1 − δ_{NK})) ‖x_{T−Λk}‖₂. (32)

After some manipulations, we have

δ_{NK} < √N/(√(K − l) + 2√N). (33)

Since K − l < K, (33) holds if

δ_{NK} < √N/(√K + 2√N), (34)

which completes the proof.

C. Overall Sufficient Condition

Thus far, we investigated conditions guaranteeing the success of the gOMP algorithm in the initial iteration (k = 1) and in non-initial iterations (k > 1). We now combine these results to describe the sufficient condition of the gOMP algorithm ensuring the perfect recovery of K-sparse signals. Recall from Theorem 3.4 that the gOMP makes a success in the first iteration if

δ_{K+N} < √N/(√K + √N). (35)

Also, recall from Theorem 3.5 that if the previous k iterations were successful, then the gOMP will be successful in the (k+1)-th iteration if

δ_{NK} < √N/(√K + 2√N). (36)

The overall sufficient condition is determined by the stricter of (35) and (36).
Theorem 3.8 (Sufficient condition of gOMP): For any K-sparse vector x, the gOMP algorithm perfectly recovers x from y = Φx in at most K iterations if the sensing matrix Φ satisfies the RIP with isometric constant

δ_{NK} < √N/(√K + 2√N) for K > 1, (37)
δ_2 < 1/2 for K = 1. (38)

Proof: In order to prove the theorem, the following three cases need to be considered.

Case 1 [N > 1, K > 1]: In this case, NK ≥ K + N and hence δ_{K+N} ≤ δ_{NK}, and also √N/(√K + √N) > √N/(√K + 2√N). Thus, (36) is stricter than (35) and the general condition becomes

δ_{NK} < √N/(√K + 2√N). (39)

Case 2 [N = 1, K > 1]: In this case, the general condition should be the stricter of δ_K < 1/(√K + 2) and δ_{K+1} < 1/(√K + 1). Unfortunately, since δ_K ≤ δ_{K+1} and 1/(√K + 2) ≤ 1/(√K + 1), one cannot compare the two conditions directly. As an indirect way, we borrow a sufficient condition guaranteeing the perfect recovery of the gOMP for N = 1:

δ_K < √(K − 1)/(√(K − 1) + √K). (40)

Readers are referred to [24] for the proof of (40). Since 1/(√K + 2) < √(K − 1)/(√(K − 1) + √K) for K > 1, the sufficient condition for Case 2 becomes

δ_K < 1/(√K + 2). (41)

A nice feature of (41) is that it can be combined with the result of Case 1, since applying N = 1 in (39) results in (41).

Case 3 [K = 1]: Since x has a single nonzero element (K = 1), x should be recovered in the first iteration. Let u be the index of the nonzero element; then the exact recovery of x is ensured regardless of N if

|⟨ϕ_u, y⟩| = max_i |⟨ϕ_i, y⟩|. (42)
The condition ensuring (42) is obtained by applying N = K = 1 to Theorem 3.4 and is given by δ_2 < 1/2.

Remark 1 (Related to the measurement size of the sensing matrix): It is well known that an m × n random sensing matrix with i.i.d. Gaussian entries N(0, 1/m) obeys the RIP with δ_K < ε with overwhelming probability if the dimension of the measurements satisfies [17]

m = O(K log(n/K)/ε²). (43)

Plugging (37) into (43), we have

m = O(K² log(n/K)). (44)

Note that the same result can be obtained for the OMP by plugging δ_{K+1} < 1/(√K + 1) into (43).

D. Sufficient Condition of the OMP

In this subsection, we put our focus on the OMP algorithm, which is the special case of the gOMP algorithm for N = 1. Of course, one can immediately obtain the condition of the OMP, δ_K < 1/(√K + 2), by applying N = 1 to Theorem 3.8. Our result, a slightly improved version of this, is based on the fact that the non-initial steps of the OMP process are the same as the initial step, since the residual can be considered as a new measurement preserving the sparsity K of the input vector x [24]. In this perspective, a condition guaranteeing the selection of a correct index in the first iteration extends to a general condition without any loss.

Corollary 3.9 (A direct consequence of Theorem 3.4): Suppose x ∈ R^n is K-sparse; then the OMP algorithm recovers an index in T from y = Φx ∈ R^m in the first iteration if

δ_{K+1} < 1/(√K + 1). (45)

We now state that the first-iteration condition extends to any iteration of the OMP algorithm.

Lemma 3.10 (Wang and Shim [21]): Suppose that the first k iterations (1 ≤ k ≤ K − 1) of the OMP algorithm are successful (i.e., Λ_k ⊆ T); then the (k+1)-th iteration is also successful (i.e., t_{k+1} ∈ T) under δ_{K+1} < 1/(√K + 1).
Proof: The residual at the k-th iteration of the OMP is expressed as

r_k = y − Φ_{Λk} x̂_{Λk}. (46)

Since y = Φ_T x_T and Φ_{Λk} is a submatrix of Φ_T under the hypothesis, r_k ∈ span(Φ_T). Hence, r_k can be expressed as a linear combination of the |T| (= K) columns of Φ_T; that is, r_k = Φx′ where the support of x′ is contained in the support of x. In other words, r_k is a measurement of the K-sparse signal x′ through the sensing matrix Φ. From this observation together with Corollary 3.9, we conclude that if Λ_k ⊆ T, then the index chosen in the (k+1)-th iteration is an element of T under (45).

Combining Corollary 3.9 and Lemma 3.10, and also noting that indices in Λ_k are not selected again in the succeeding iterations (i.e., the index chosen in the (k+1)-th step belongs to T − Λ_k), we have Λ_K = T, and the OMP algorithm recovers the original signal x in exactly K iterations under δ_{K+1} < 1/(√K + 1). The following theorem formally describes the sufficient condition of the OMP algorithm.

Theorem 3.11 (Wang and Shim [21]): Suppose x is a K-sparse vector; then the OMP algorithm recovers x from y = Φx under

δ_{K+1} < 1/(√K + 1). (47)

Proof: Immediate from Corollary 3.9 and Lemma 3.10.

IV. SIMULATIONS AND DISCUSSIONS

A. Simulation Setup

In this section, we empirically demonstrate the effectiveness of the gOMP in recovering sparse signals. Perfect recovery conditions in the literature (including the condition of the gOMP in this paper) usually offer overly strict sufficient conditions, so empirical performance has served as a supplementary measure in many works. In particular, the empirical frequency of exact reconstruction has been a popular tool to measure the effectiveness of recovery algorithms [18], [25]. By comparing the maximal sparsity level at which perfect recovery is ensured
(this point is often called the critical sparsity), the superiority of a reconstruction algorithm can be evaluated. In our simulations, the following algorithms are considered:

1) LP technique for solving l1-minimization.
2) OMP algorithm.
3) gOMP algorithm (N = 5).
4) StOMP with false alarm control (FAC) based thresholding.³
5) ROMP algorithm.
6) CoSaMP algorithm.

In each trial, we construct an m × n (m = 128 and n = 256) sensing matrix Φ with entries drawn independently from the Gaussian distribution N(0, 1/m). In addition, we generate a K-sparse signal vector x whose support is chosen at random. We consider two types of sparse signals: Gaussian signals and pulse amplitude modulation (PAM) signals. Each nonzero element of the Gaussian signals is drawn from the standard Gaussian, and each nonzero element of the PAM signals is randomly chosen from the set {±1, ±3}. For each recovery algorithm, we perform 1000 independent trials and plot the empirical frequency of exact reconstruction.

³ Since the FAC scheme outperforms the false discovery control (FDC) scheme, we exclusively use the FAC scheme in our simulations.
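The two test-signal ensembles described above can be generated as follows (an illustrative helper; the function name is ours, not from the paper):

```python
import numpy as np

def sparse_signal(n, K, kind, rng):
    """K-sparse test vector: Gaussian or PAM (+/-1, +/-3) nonzero entries."""
    x = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    if kind == "gaussian":
        x[support] = rng.normal(size=K)        # standard Gaussian nonzeros
    else:
        x[support] = rng.choice([-3.0, -1.0, 1.0, 3.0], size=K)  # PAM nonzeros
    return x

rng = np.random.default_rng(1)
x = sparse_signal(256, 8, "pam", rng)
```

A trial then counts as an exact reconstruction when the recovered vector matches x up to numerical tolerance.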
Fig. 4. Reconstruction performance (frequency of exact reconstruction) for K-sparse Gaussian signal vectors as a function of sparsity K (gOMP, OMP, StOMP, ROMP, CoSaMP, LP).

Support elements selection: The gOMP performs a matrix-vector multiplication Φᵀ r_{k−1}, which needs (2m − 1)n flops. Also, Φᵀ r_{k−1} needs to be partially sorted to find the N best indices, which requires roughly Nn − N(N + 1)/2 flops.

Estimation of x̂_{Λk}: In this step, the LS solution is obtained using the MGS algorithm. Using the QR factorization Φ_{Λk} = QR, we have x̂_{Λk} = (Φᵀ_{Λk} Φ_{Λk})^{−1} Φᵀ_{Λk} y = (RᵀR)^{−1} Rᵀ Qᵀ y. By recycling the part of the QR factorization of Φ_{Λk−1} computed in the previous iteration, the LS solution can be obtained efficiently (see Appendix C for details), at a cost of approximately 4N²km flops plus lower-order terms.
Fig. 5. Reconstruction performance (frequency of exact reconstruction) for K-sparse PAM signal vectors as a function of sparsity K (gOMP, OMP, StOMP, ROMP, CoSaMP, LP).

Residual update: For updating the residual, the gOMP performs the matrix-vector multiplication Φ_{Λk} x̂_{Λk} ((2Nk − 1)m flops) followed by the subtraction (m flops).

Table II summarizes the complexity of the gOMP in each iteration. If the gOMP finishes in S iterations, then the complexity of the gOMP, denoted as C_gOMP(N, S, m, n), becomes

C_gOMP(N, S, m, n) ≈ 2Smn + (2N² + N)S²m. (48)

Noting that S ≤ min{K, m/N} and N is a small constant, the complexity of the gOMP is upper bounded by O(Kmn). In practice, however, the number of iterations of the gOMP is much smaller than K due to the parallel processing of multiple correct indices, which reduces the complexity of the gOMP substantially. Indeed, as shown in Fig. 6, the number of iterations is only about 1/3 that of the OMP, so the gOMP has an advantage over the OMP in both complexity and running
TABLE II
COMPLEXITY OF THE GOMP ALGORITHM

Step | Flops in the k-th iteration
Support elements selection | (2m − 1 + N)n − N(N + 1)/2 = O(mn)
Estimation of x̂_{Λk} | ≈ 4N²km = O(N²km)
Residual update | ≈ 2Nkm
Total cost of the k-th iteration | ≈ 2mn + (4N² + 2N)km = O(mn)

Fig. 6. Number of iterations of the OMP and gOMP (N = 5) as a function of sparsity K (the gOMP curve lies close to K/3).

time.
Fig. 7. Running time as a function of sparsity K (gOMP, OMP, StOMP, ROMP, CoSaMP). The running time of the l1-minimization is not shown since it is more than an order of magnitude higher than that of the other algorithms.

D. Running Time

In Fig. 7, the running time (averaged over Gaussian and PAM signals) of the recovery algorithms is provided. The running time is measured using a MATLAB program under a quad-core 64-bit processor and Windows 7 environment. Note that we do not include the result of the LP technique simply because its running time is more than an order of magnitude higher than that of all other algorithms. Overall, we observe that the running times of StOMP, CoSaMP, gOMP, and OMP are more or less similar when K is small (i.e., when the signal vector is sparse). However, when the signal vector is less sparse (i.e., when K is large), the running time of the OMP and CoSaMP increases much faster than that of the gOMP and StOMP. In particular, while the running time of the OMP, StOMP, and gOMP increases linearly over K, that of the CoSaMP seems to increase quadratically over
K. The reason is that the CoSaMP must compute a completely new LS solution over a distinct subset of Φ in each iteration, so the previous QR factorization cannot be recycled [19]. Also, it is interesting to observe that the running times of the gOMP and StOMP are fairly comparable. Considering that thresholding (FAC or FDC) is required in each iteration of the StOMP, the gOMP might be a bit more favorable in implementation.

V. CONCLUSION

As a cost-effective solution for recovering sparse signals from compressed measurements, the OMP algorithm has received much attention in recent years. In this paper, we presented a generalized version of the OMP algorithm for pursuing efficiency in reconstructing sparse signals. Since multiple indices can be identified with no additional postprocessing operation, the proposed gOMP algorithm lends itself to parallel processing, which expedites the algorithm and thereby reduces the running time. In fact, we demonstrated by empirical simulation that the gOMP has excellent recovery performance comparable to the l1-minimization technique, with fast processing speed and competitive computational complexity. Also, we showed from the RIP analysis that if the isometry constant of the sensing matrix satisfies δ_{NK} < √N/(√K + 2√N), then the gOMP algorithm can perfectly recover K-sparse signals (K > 1) from compressed measurements. One important point we would like to mention is that the gOMP algorithm is potentially more effective than this analysis suggests. Indeed, the bound in (37) is derived from the worst-case scenario where the algorithm selects only one correct index per iteration (and hence requires a maximum of K iterations). In reality, as observed in the empirical simulations, it is highly likely that multiple correct indices are identified in each iteration, and hence the number of iterations is much smaller than that of the OMP.
Therefore, we believe that a less strict or probabilistic analysis will uncover the full picture of the recovery performance of the gOMP. Our future work will be directed toward this avenue.
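To make the per-iteration behavior described above concrete — identify the N columns most correlated with the residual, re-estimate by least squares over the enlarged support, and update the residual — here is a minimal NumPy sketch. It is illustrative only, under our own naming (`gomp`, argument `N`), and is not the authors' MATLAB implementation:

```python
import numpy as np

def gomp(y, Phi, K, N=2):
    """Illustrative gOMP sketch: each iteration selects the N columns of Phi
    most correlated with the residual, then re-estimates by least squares."""
    m, n = Phi.shape
    support = []
    r = y.copy()
    x_hat = np.zeros(0)
    for _ in range(min(K, m // N)):        # at most K iterations (worst case)
        corr = np.abs(Phi.T @ r)
        corr[support] = 0                  # do not re-select chosen columns
        picks = np.argsort(corr)[-N:]      # N indices with largest correlation
        support.extend(picks.tolist())
        x_hat, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_hat    # residual update
        if np.linalg.norm(r) < 1e-12 * np.linalg.norm(y):
            break
    x = np.zeros(n)
    x[support] = x_hat
    return x
```

With exact measurements y = Φx, the recovered vector matches x whenever the true support is captured among the selected indices, since the LS step then assigns (numerically) zero weight to the extra columns.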
APPENDIX A
PROOF OF LEMMA 3.6

Proof: Let w_i be the index of the i-th largest correlation in magnitude between r^k and {φ_j}_{j∈F} (i.e., the columns corresponding to the remaining incorrect indices). Also, define the set of indices W = {w_1, w_2, ..., w_N}. The l2-norm of the correlation Φ_W^T r^k can be expressed as

||Φ_W^T r^k||_2 = ||Φ_W^T P⊥_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2
               = ||Φ_W^T Φ_{T∖Λ^k} x_{T∖Λ^k} − Φ_W^T P_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2
               ≤ ||Φ_W^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2 + ||Φ_W^T P_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2,   (49)

where P⊥_{Λ^k} = I − P_{Λ^k}. Since W and T∖Λ^k are disjoint (i.e., W ∩ (T∖Λ^k) = ∅) and |W| + |T∖Λ^k| = N + K − l (note that the number of correct indices in Λ^k is l by hypothesis), Lemma 3.3 gives

||Φ_W^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2 ≤ δ_{N+K−l} ||x_{T∖Λ^k}||_2.   (50)

Similarly, noting that W ∩ Λ^k = ∅ and |W| + |Λ^k| = N + Nk, we have

||Φ_W^T P_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2 ≤ δ_{N+Nk} ||Φ_{Λ^k}^† Φ_{T∖Λ^k} x_{T∖Λ^k}||_2,   (51)

where

||Φ_{Λ^k}^† Φ_{T∖Λ^k} x_{T∖Λ^k}||_2 = ||(Φ_{Λ^k}^T Φ_{Λ^k})^{−1} Φ_{Λ^k}^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2   (52)
    ≤ (1/(1 − δ_{Nk})) ||Φ_{Λ^k}^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2   (53)
    ≤ (δ_{Nk+K−l}/(1 − δ_{Nk})) ||x_{T∖Λ^k}||_2,   (54)

where (53) and (54) follow from Lemma 3.2 and Lemma 3.3, respectively. (Since Λ^k and T∖Λ^k are disjoint, if the number of correct indices in Λ^k is l, then |Λ^k ∪ (T∖Λ^k)| = Nk + K − l.) Using (50), (51), and (54), we have

||Φ_W^T r^k||_2 ≤ (δ_{N+K−l} + δ_{N+Nk} δ_{Nk+K−l}/(1 − δ_{Nk})) ||x_{T∖Λ^k}||_2.   (55)

Since α_i = |⟨φ_{w_i}, r^k⟩|, we have ||Φ_W^T r^k||_1 = Σ_{i=1}^N α_i. Now, using the norm inequality⁴, we have

||Φ_W^T r^k||_2 ≥ (1/√N) Σ_{i=1}^N α_i.   (56)

⁴ ||u||_1 ≤ √(||u||_0) ||u||_2.
Since α_1 ≥ α_2 ≥ ··· ≥ α_N, it is clear that

||Φ_W^T r^k||_2 ≥ (1/√N) · N α_N = √N α_N.   (57)

Combining (55) and (57), we have

√N α_N ≤ (δ_{N+K−l} + δ_{N+Nk} δ_{Nk+K−l}/(1 − δ_{Nk})) ||x_{T∖Λ^k}||_2,   (58)

and hence

α_N ≤ (1/√N) (δ_{N+K−l} + δ_{N+Nk} δ_{Nk+K−l}/(1 − δ_{Nk})) ||x_{T∖Λ^k}||_2.   (59)

APPENDIX B
PROOF OF LEMMA 3.7

Proof: Since r^k = y − Φ_{Λ^k} Φ_{Λ^k}^† y = P⊥_{Λ^k} y, we have

||r^k||_2^2 = (P⊥_{Λ^k} y)^T P⊥_{Λ^k} y = (P⊥_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k})^T P⊥_{Λ^k} y.

Employing the idempotency and symmetry of the operator P⊥_{Λ^k} (i.e., P⊥_{Λ^k} = (P⊥_{Λ^k})^2 and P⊥_{Λ^k} = (P⊥_{Λ^k})^T), we further have

||r^k||_2^2 = (Φ_{T∖Λ^k} x_{T∖Λ^k})^T P⊥_{Λ^k} y = ⟨Φ_{T∖Λ^k} x_{T∖Λ^k}, r^k⟩.

Noting that Φ_{T∖Λ^k} x_{T∖Λ^k} = Σ_{j∈T∖Λ^k} x_j φ_j, we further have

||r^k||_2^2 = ⟨Σ_{j∈T∖Λ^k} x_j φ_j, r^k⟩ ≤ Σ_{j∈T∖Λ^k} |x_j| |⟨φ_j, r^k⟩|.   (60)

Since β_1 is the largest correlation in magnitude between r^k and {φ_j}_{j∈T∖Λ^k}, it is clear that

|⟨φ_j, r^k⟩| ≤ β_1   (61)

for all j ∈ T∖Λ^k. Applying this to (60), we obtain

||r^k||_2^2 ≤ Σ_{j∈T∖Λ^k} |x_j| β_1 = ||x_{T∖Λ^k}||_1 β_1.   (62)

Noting that the dimension of x_{T∖Λ^k} is K − l, using the norm inequality ||x_{T∖Λ^k}||_1 ≤ √(K − l) ||x_{T∖Λ^k}||_2,
we have

||r^k||_2^2 ≤ √(K − l) ||x_{T∖Λ^k}||_2 β_1.   (63)

In addition, noting that r^k = P⊥_{Λ^k}(Φ_{Λ^k} x_{Λ^k} + Φ_{T∖Λ^k} x_{T∖Λ^k}) and P⊥_{Λ^k} Φ_{Λ^k} x_{Λ^k} = 0, ||r^k||_2^2 can be rewritten as

||r^k||_2^2 = ||P⊥_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2 = ||Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2 − ||P_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2.

Using the definition of the RIP, we get

||Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2 ≥ (1 − δ_{K−l}) ||x_{T∖Λ^k}||_2^2.   (64)

On the other hand,

||P_{Λ^k} Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2 = ||Φ_{Λ^k}(Φ_{Λ^k}^T Φ_{Λ^k})^{−1} Φ_{Λ^k}^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2   (65)
    ≤ (1 + δ_{Nk}) ||(Φ_{Λ^k}^T Φ_{Λ^k})^{−1} Φ_{Λ^k}^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2   (66)
    ≤ ((1 + δ_{Nk})/(1 − δ_{Nk})^2) ||Φ_{Λ^k}^T Φ_{T∖Λ^k} x_{T∖Λ^k}||_2^2   (67)
    ≤ (δ_{Nk+K−l}^2 (1 + δ_{Nk})/(1 − δ_{Nk})^2) ||x_{T∖Λ^k}||_2^2,   (68)

where (66) is from the definition of the RIP and (67) and (68) follow from Lemma 3.2 and Lemma 3.3, respectively (Λ^k and T∖Λ^k are disjoint sets and |Λ^k ∪ (T∖Λ^k)| = Nk + K − l). Combining (64) and (68), we have

||r^k||_2^2 ≥ (1 − δ_{K−l} − δ_{Nk+K−l}^2 (1 + δ_{Nk})/(1 − δ_{Nk})^2) ||x_{T∖Λ^k}||_2^2.   (69)

From (63) and (69),

(1 − δ_{K−l} − ((1 + δ_{Nk})/(1 − δ_{Nk})^2) δ_{Nk+K−l}^2) ||x_{T∖Λ^k}||_2^2 ≤ √(K − l) ||x_{T∖Λ^k}||_2 β_1,   (70)

and hence

β_1 ≥ (1 − δ_{K−l} − ((1 + δ_{Nk})/(1 − δ_{Nk})^2) δ_{Nk+K−l}^2) ||x_{T∖Λ^k}||_2 / √(K − l).   (71)
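The projector properties invoked in this proof — idempotency, symmetry, and the resulting Pythagorean split ||P⊥v||² = ||v||² − ||Pv||² — are easy to sanity-check numerically. A small NumPy check with an arbitrary random stand-in for Φ_Λ (the dimensions here are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi_L = rng.standard_normal((30, 5))        # stand-in for Phi restricted to a support
P = Phi_L @ np.linalg.pinv(Phi_L)           # projection onto span(Phi_L)
P_perp = np.eye(30) - P                     # orthogonal-complement projector
v = rng.standard_normal(30)

assert np.allclose(P @ P, P)                # idempotency: P = P^2
assert np.allclose(P, P.T)                  # symmetry: P = P^T
# Pythagorean split used in the proof:
assert np.isclose(np.linalg.norm(P_perp @ v) ** 2,
                  np.linalg.norm(v) ** 2 - np.linalg.norm(P @ v) ** 2)
```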
APPENDIX C
COMPUTATIONAL COST FOR THE ESTIMATION STEP OF GOMP

In the k-th iteration, the gOMP estimates the nonzero elements of x by solving an LS problem,

x̂_{Λ^k} = arg min_x ||y − Φ_{Λ^k} x||_2 = Φ_{Λ^k}^† y = (Φ_{Λ^k}^T Φ_{Λ^k})^{−1} Φ_{Λ^k}^T y.   (72)

To solve (72), we employ the MGS algorithm, in which the QR decomposition of the previous iteration is maintained and, therefore, the computational cost can be reduced. Without loss of generality, we assume Φ_{Λ^k} = (φ_1 φ_2 ··· φ_{Nk}). The QR decomposition of Φ_{Λ^k} is given by

Φ_{Λ^k} = QR,   (73)

where Q = (q_1 q_2 ··· q_{Nk}) ∈ R^{m×Nk} consists of Nk orthonormal columns obtained via the MGS algorithm, and R ∈ R^{Nk×Nk} is an upper triangular matrix,

R = [ ⟨q_1, φ_1⟩   ⟨q_1, φ_2⟩   ···   ⟨q_1, φ_{Nk}⟩
      0            ⟨q_2, φ_2⟩   ···   ⟨q_2, φ_{Nk}⟩
      ⋮                          ⋱    ⋮
      0            0            ···   ⟨q_{Nk}, φ_{Nk}⟩ ].

For notational simplicity we denote R_{i,j} = ⟨q_i, φ_j⟩ and p = N(k − 1). In addition, we denote the QR decomposition of the (k − 1)-th iteration as Φ_{Λ^{k−1}} = Q_1 R_1. Then it is clear that

Q = (Q_1 Q_0)   and   R = [ R_1   R_a
                             0     R_b ],   (74)

where Q_0 = (q_{p+1} ··· q_{Nk}) ∈ R^{m×N}, and R_a and R_b are given by

R_a = [ R_{1,p+1}   ···   R_{1,Nk}               R_b = [ R_{p+1,p+1}   ···   R_{p+1,Nk}
        ⋮            ⋱    ⋮            and               ⋮              ⋱    ⋮
        R_{p,p+1}   ···   R_{p,Nk} ]                     0              ···   R_{Nk,Nk} ].   (75)

Applying (73) to (72), we have

x̂_{Λ^k} = (R^T R)^{−1} R^T Q^T y.   (76)

We count the cost of solving (76) in the following steps. Here we assess the cost in the classical sense of counting floating-point operations (flops), i.e., each +, −, ×, / counts as one flop.
Cost of the QR decomposition: To obtain Q and R, one only needs to compute Q_0, R_a, and R_b, since the previous data Q_1 and R_1 are stored. For j = 1 to N, we sequentially calculate

{R_{i,p+j}}_{i=1,2,···,p+j−1} = {⟨q_i, φ_{p+j}⟩}_{i=1,2,···,p+j−1},   (77)

q̂_{p+j} = φ_{p+j} − Σ_{i=1}^{p+j−1} R_{i,p+j} q_i,   (78)

q_{p+j} = q̂_{p+j} / ||q̂_{p+j}||_2,   (79)

R_{p+j,p+j} = ⟨q_{p+j}, φ_{p+j}⟩.   (80)

Take j = 1 for example. One first computes {R_{i,p+1}}_{i=1,2,···,p} using Q_1 (requires p(2m − 1) flops) and then computes q̂_{p+1} = φ_{p+1} − Σ_{i=1}^p R_{i,p+1} q_i (requires 2mp flops). Then, normalizing q̂_{p+1} needs 3m flops. Finally, computing R_{p+1,p+1} requires 2m − 1 flops. The cost of this example amounts to 4mp + 5m − p − 1. Similarly, one can calculate the other data in Q_0, R_a, and R_b. In summary, the cost of this QR factorization becomes

C_QR = 4N²mk − 2N²m + 3Nm − N²k + N²/2 − N/2.   (81)

Cost of calculating Q^T y: Since Q = (Q_1 Q_0), we have

Q^T y = [ Q_1^T y
          Q_0^T y ].   (82)

By reusing the data Q_1^T y, (82) is solved with

C_1 = N(2m − 1).

Cost of calculating R^T Q^T y: Applying R^T to the vector Q^T y, we have

R^T Q^T y = [ R_1^T Q_1^T y
              R_a^T Q_1^T y + R_b^T Q_0^T y ].   (83)

Since the data R_1^T Q_1^T y can be reused, (83) is solved with

C_2 = 2N²k − N².
Cost of calculating (R^T R)^{−1}: Since R is an upper triangular matrix,

(R^T R)^{−1} = R^{−1} (R^{−1})^T.   (84)

Applying the block matrix inversion,

R^{−1} = [ R_1   R_a ]^{−1}  =  [ (R_1)^{−1}   −(R_1)^{−1} R_a (R_b)^{−1}
           0     R_b ]            0             (R_b)^{−1}                ].   (85)

Then we calculate (R^T R)^{−1} = R^{−1}(R^{−1})^T; with B = (R_1)^{−1} R_a (R_b)^{−1},

(R^T R)^{−1} = [ (R_1)^{−1}(R_1)^{−T} + B B^T    −B (R_b)^{−T}
                 −(R_b)^{−1} B^T                  (R_b)^{−1}(R_b)^{−T} ].   (86)

We can reuse the data (R_1)^{−1}(R_1)^{−T}, so that only (R_b)^{−1}, (R_b)^{−1}(R_b)^{−T}, and B = (R_1)^{−1} R_a (R_b)^{−1} need to be computed, at costs of N(N + 1)(2N + 1)/3 (Gaussian elimination), N(N + 1)(2N + 1)/6, and 2N³k² − 3N³k + N³ − N²k + N² flops, respectively. The cost of computing (R^T R)^{−1} is therefore

C_3 = 2N³k² − 3N³k + 2N³ − N²k + (5N² + N)/2.

Cost of calculating x̂_{Λ^k} = (R^T R)^{−1} R^T Q^T y: Applying (R^T R)^{−1} to the vector R^T Q^T y and simplifying, we obtain

x̂_{Λ^k} = [ (R_1)^{−1}(R_1)^{−T} R_1^T Q_1^T y + ξ_1
            ξ_2 + ξ_3 ],   (87)

where

ξ_1 = −(R_1)^{−1} R_a (R_b)^{−1} Q_0^T y,
ξ_2 = −(R_b)^{−1}(R_b)^{−T} R_a^T Q_1^T y,
ξ_3 = (R_b)^{−1}(R_b)^{−T} (R_a^T Q_1^T y + R_b^T Q_0^T y).

We can reuse (R_1)^{−1}(R_1)^{−T} R_1^T Q_1^T y (computed in the previous iteration), so that calculating ξ_1, ξ_2, and ξ_3 needs N(k − 1)(2N − 1), N(2N(k − 1) − 1), and N(2N − 1) flops, respectively. The cost of this step becomes

C_4 = 4N²k − 2N² − Nk − N.
In summary, the whole cost of solving the LS problem in the k-th iteration of the gOMP is the sum of the above and is given by

C_LS = C_QR + C_1 + C_2 + C_3 + C_4
     = 4N²km + (5N − 2N²)m + 2N³k² + (4N² − 3N³ − N)k + 2N³ − 2N.
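The recycling idea in this appendix — keep Q_1 and R_1 from the previous iteration and orthogonalize only the N newly selected columns — can be sketched as follows. This is an illustrative NumPy sketch (the helper name `mgs_append` is ours), not the paper's MATLAB code:

```python
import numpy as np

def mgs_append(Q, R, new_cols):
    """Grow an existing thin QR factorization by modified Gram-Schmidt:
    only the appended columns are orthogonalized against the stored Q,
    so Q (= Q_1) and R (= R_1) from the previous iteration are reused."""
    for a in new_cols.T:
        v = a.astype(float).copy()
        r = np.zeros(Q.shape[1] + 1)
        for i in range(Q.shape[1]):          # R_{i,j} = <q_i, v>, then deflate v
            r[i] = Q[:, i] @ v
            v -= r[i] * Q[:, i]
        r[-1] = np.linalg.norm(v)            # diagonal entry R_{j,j}
        Q = np.column_stack([Q, v / r[-1]])
        # grow R: one zero row at the bottom, then the new column r
        R = np.column_stack([np.vstack([R, np.zeros((1, R.shape[1]))]), r])
    return Q, R
```

After each append, the LS estimate in (76) reduces to a triangular solve, x̂ = R^{−1}Q^T y, e.g. `np.linalg.solve(R, Q.T @ y)`.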
REFERENCES

[1] D. Malioutov, M. Cetin, and A. S. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays, IEEE Trans. Signal Process., vol. 53, no. 8, Aug.
[2] D. Needell and R. Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit, IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, Apr.
[3] R. Giryes and M. Elad, RIP-based near-oracle performance guarantees for subspace pursuit, CoSaMP, and iterative hard-thresholding, arXiv preprint.
[4] S. Qian and D. Chen, Signal representation using adaptive normalized Gaussian functions, Signal Processing, vol. 36, no. 1, pp. 1–11.
[5] S. G. Mallat and Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process., vol. 41, no. 12, Dec.
[6] W. Xu, M. A. Khajehnejad, A. S. Avestimehr, and B. Hassibi, Breaking through the thresholds: an analysis for iterative reweighted l1-minimization via the Grassmann angle framework, in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Dallas, TX, Mar. 2010.
[7] D. Model and M. Zibulevsky, Signal reconstruction in sensor arrays using sparse representations, Signal Processing, vol. 86, no. 3, Mar.
[8] S. Sarvotham, D. Baron, and R. G. Baraniuk, Compressed sensing reconstruction via belief propagation, preprint.
[9] J. H. Friedman and W. Stuetzle, Projection pursuit regression, Journal of the American Statistical Association, vol. 76, no. 376, Dec.
[10] V. Cevher, M. Duarte, and R. G. Baraniuk, Distributed target localization via spatial sparsity, in European Signal Processing Conference (EUSIPCO).
[11] J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inform. Theory, vol. 53, no. 12, Dec.
[12] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, vol. 51, no. 12, Dec.
[13] D. L. Donoho, I. Drori, Y.
Tsaig, and J. L. Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, Citeseer.
[14] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, vol. 52, no. 2, Feb.
[15] E. J. Candès and J. Romberg, Sparsity and incoherence in compressive sampling, Inverse Problems, vol. 23, pp. 969, Apr.
[16] E. J. Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathematique, vol. 346, no. 9-10.
[17] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation, vol. 28, no. 3.
[18] W. Dai and O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. Inform. Theory, vol. 55, no. 5, May.
[19] D. Needell and J. A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Applied and Computational Harmonic Analysis, vol. 26, no. 3, Mar.
[20] M. A. Davenport and M. B. Wakin, Analysis of orthogonal matching pursuit using the restricted isometry property, IEEE Trans. Inform. Theory, vol. 56, no. 9, Sep.
[21] J. Wang and B. Shim, On recovery limit of orthogonal matching pursuit using restricted isometric property, submitted to IEEE Trans. Signal Process.
[22] Å. Björck, Numerical Methods for Least Squares Problems, Number 51, Society for Industrial Mathematics.
[23] D. Needell and R. Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, Foundations of Computational Mathematics, vol. 9, no. 3.
[24] J. Wang, S. Kwon, and B. Shim, Near optimal bound of orthogonal matching pursuit using restricted isometric constant, submitted to EURASIP J. Adv. Signal Process.
[25] E. Candes, M. Rudelson, T. Tao, and R. Vershynin, Error correction via linear programming, in IEEE Symposium on Foundations of Computer Science (FOCS), 2005.
Iterative Hard Thresholding for Compressed Sensing Thomas lumensath and Mike E. Davies 1 Abstract arxiv:0805.0510v1 [cs.it] 5 May 2008 Compressed sensing is a technique to sample compressible signals below
More informationCompressed Sensing and Related Learning Problems
Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed
More informationAN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE
AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE Ana B. Ramirez, Rafael E. Carrillo, Gonzalo Arce, Kenneth E. Barner and Brian Sadler Universidad Industrial de Santander,
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationCompressive Sensing and Beyond
Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More informationMATCHING PURSUIT WITH STOCHASTIC SELECTION
2th European Signal Processing Conference (EUSIPCO 22) Bucharest, Romania, August 27-3, 22 MATCHING PURSUIT WITH STOCHASTIC SELECTION Thomas Peel, Valentin Emiya, Liva Ralaivola Aix-Marseille Université
More informationRecovery Guarantees for Rank Aware Pursuits
BLANCHARD AND DAVIES: RECOVERY GUARANTEES FOR RANK AWARE PURSUITS 1 Recovery Guarantees for Rank Aware Pursuits Jeffrey D. Blanchard and Mike E. Davies Abstract This paper considers sufficient conditions
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationCompressed Sensing and Linear Codes over Real Numbers
Compressed Sensing and Linear Codes over Real Numbers Henry D. Pfister (joint with Fan Zhang) Texas A&M University College Station Information Theory and Applications Workshop UC San Diego January 31st,
More informationDetecting Sparse Structures in Data in Sub-Linear Time: A group testing approach
Detecting Sparse Structures in Data in Sub-Linear Time: A group testing approach Boaz Nadler The Weizmann Institute of Science Israel Joint works with Inbal Horev, Ronen Basri, Meirav Galun and Ery Arias-Castro
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationSparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images
Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are
More informationINDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina
INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed
More informationCombining geometry and combinatorics
Combining geometry and combinatorics A unified approach to sparse signal recovery Anna C. Gilbert University of Michigan joint work with R. Berinde (MIT), P. Indyk (MIT), H. Karloff (AT&T), M. Strauss
More informationThe Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology
The Sparsity Gap Joel A. Tropp Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu Research supported in part by ONR 1 Introduction The Sparsity Gap (Casazza Birthday
More informationLecture: Introduction to Compressed Sensing Sparse Recovery Guarantees
Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin
More informationStochastic geometry and random matrix theory in CS
Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder
More informationIntroduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011
Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear
More informationComplementary Matching Pursuit Algorithms for Sparse Approximation
Complementary Matching Pursuit Algorithms for Sparse Approximation Gagan Rath and Christine Guillemot IRISA-INRIA, Campus de Beaulieu 35042 Rennes, France phone: +33.2.99.84.75.26 fax: +33.2.99.84.71.71
More informationAFRL-RI-RS-TR
AFRL-RI-RS-TR-200-28 THEORY AND PRACTICE OF COMPRESSED SENSING IN COMMUNICATIONS AND AIRBORNE NETWORKING STATE UNIVERSITY OF NEW YORK AT BUFFALO DECEMBER 200 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC
More information