Generalized Orthogonal Matching Pursuit
|
|
- Dorcas Hampton
- 6 years ago
- Views:
Transcription
1 Generalized Orthogonal Matching Pursuit Jian Wang, Student Member, IEEE, Seokbeop won, Student Member, IEEE, and Byonghyo Shim, Senior Member, IEEE arxiv:6664v [csit] 30 Mar 04 Abstract As a greedy algorithm to recover sparse signals from compressed measurements, orthogonal matching pursuit OMP algorithm has received much attention in recent years In this paper, we introduce an extension of the OMP for pursuing efficiency in reconstructing sparse signals Our approach, henceforth referred to as generalized OMP gomp, is literally a generalization of the OMP in the sense that multiple indices are identified per iteration Owing to the selection of multiple correct indices, the gomp algorithm is finished with much smaller number of iterations when compared to the OMP We show that the gomp can perfectly reconstruct any -sparse signals >, provided that the sensing matrix satisfies the RIP with δ < +3 We also demonstrate by empirical simulations that the gomp has excellent recovery performance comparable to l -minimization technique with fast processing speed and competitive computational complexity Index Terms Compressive sensing CS, orthogonal matching pursuit, sparse recovery, restricted isometry property RIP I ITRODUCTIO As a paradigm to acquire sparse signals at a rate significantly below yquist rate, compressive sensing has attracted much attention in recent years [] [3] The goal of compressive sensing is to recover the sparse vector using a small number of linearly transformed measurements The process of acquiring compressed measurements is referred to as sensing while that of recovering the original sparse signals from compressed measurements is called reconstruction In the sensing operation,-sparse signal vector x, ie, n-dimensional vector having at most non-zero elements, is transformed into m- dimensional measurements y via a matrix multiplication with Φ The measurement is expressed as y Φx Since n > m for most of the compressive sensing scenarios, the system in can be classified as an underdetermined system having more unknowns than observations Clearly, it is in general impossible to obtain an accurate reconstruction of the original input x using conventional inverse transform of Φ Whereas, it is now well known that with a prior information J Wang, S won, and B Shim are with School of Information and Communication, orea University, Seoul, orea {jwang,sbkwon,bshim}@islkoreaackr Copyright c 0 IEEE Personal use of this material is permitted However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieeeorg This work was supported by the CC orea Communications Commission, orea, under the R&D program supervised by the CA orea Communication Agency CA and the RF grant funded by the orea government MEST o A part of this paper was presented in Asilomar Conference on Signals, Systems & Computers, ov, 0 on the signal sparsity and a condition imposed on Φ, x can be reconstructed by solving the l -minimization problem [6]: min x x subject to Φx y A widely used condition of Φ ensuring the exact recovery of x is called restricted isometry property RIP [3] A sensing matrix Φ is said to satisfy the RIP of order if there exists a constant δ 0, such that δ x Φx +δ x 3 for any -sparse vector x x 0 In particular, the minimum of all constants δ satisfying 3 is referred to as an isometry constant δ It has been shown that x can be perfectly recovered by solving l -minimization problem if δ < [6] While l -norm is convex and hence the problem can be solved via linear programming LP technique, the complexity associated with the LP is cubic ie, On 3 in the size of the original vector to be recovered [4] so that the complexity is burdensome for many real applications Recently, greedy algorithms sequentially investigating the support of x have received considerable attention as cost effective alternatives of the LP approach Algorithms in this category include orthogonal matching pursuit OMP [8], regularized OMP ROMP [5], stagewise OMP StOMP [], subspace pursuit SP [6], and compressive sampling matching pursuit CoSaMP [7] As a representative method in the greedy algorithm family, the OMP has been widely used due to its simplicity and competitive performance Tropp and Gilbert [8] showed that, for a -sparse vector x and an m n Gaussian sensing matrix Φ, the OMP recovers x from y Φx with overwhelming probability if the number of measurements follows m log n Wakin and Davenport showed that the OMP can exactly reconstruct all -sparse vectors if δ + < 3 [7] and Wang and Shim recently improved the condition to δ + < + [8] The main principle behind the OMP is simple and intuitive: in each iteration, a column of Φ maximally correlated with the residual is chosen identification, the index of this column is added to the list augmentation, and then the vestige of columns in the list is eliminated from the measurements, generating a new residual used for the next iteration residual update Among these, computational complexity of the OMP is dominated by the identification and the residual update steps In the k-th iteration, the identification requires a matrixvector multiplication so that the number of floating point operations flops becomes m n Main operation of the residual update is to compute the estimate of x, which is completed by obtaining the least squares LS solution and the required flops is approximately 4km Additionally, km flops
2 are required to perform the residual update Considering that the algorithm requires iterations, the total number of flops of the OMP is about mn+3 m Clearly, the sparsity plays an important role in the complexity of the OMP When the signal being recovered is not very sparse, therefore, the OMP may not be an excellent choice There have been some studies on the modification of the OMP, mainly on the identification step, to improve the computational efficiency and recovery performance In [], a method identifying more than one indices in each iteration was proposed In this approach, referred to as the StOMP, indices whose magnitude of correlation exceeds a deliberately designed threshold are chosen It is shown that while achieving performance comparable to l -minimization technique, the StOMP runs much faster than the OMP as well as l -minimization technique [] In [5], another variation of the OMP, so called ROMP, was proposed After choosing a set of indices with largest correlation in magnitude, the ROMP narrows down the candidates by selecting a subset satisfying a predefined regularization rule It is shown that the ROMP algorithm exactly recovers -sparse signals under δ < 003/ log [9] While the main focus of the StOMP and ROMP algorithm is on the modification of the identification step, the SP and CoSaMP algorithm require additional operation, called pruning step, to refine the signal estimate recursively Our approach lies on the similar ground of these approaches in the sense that we pursue reduction in complexity through the modification on the identification step of the OMP Specifically, towards the reduction of complexity and speedingup the execution time of the algorithm, we choose multiple indices in each iteration While previous efforts employ special treatment on the identification step such as thresholding [] for StOMP or regularization [5] for ROMP, the proposed method pursues direct extension of the OMP by choosing indices corresponding to largest correlation in magnitude Therefore, our approach, henceforth referred to as generalized OMP gomp, is literally a generalization of the OMP and embraces the OMP as a special case Owing to the selection of multiple indices, multiple correct indices ie, indices in the support set are added to the list and the algorithm is finished with much smaller number of iterations when compared to the OMP Indeed, in both empirical simulations and complexity analysis, we observe that the gomp achieves substantial reduction in the number of calculations with competitive reconstruction performance The primary contributions of this paper are twofold: We present an extension of the OMP, termed gomp, for pursuing efficiency in reconstructing sparse signals Our empirical simulation shows that the recovery performance of the gomp is comparable to the LP technique as well as modified OMP algorithms eg, CoSaMP and StOMP We develop a perfect recovery condition of the gomp To be specific, we show that the RIP of order with δ < +3 > is sufficient for the gomp to exactly recover any -sparse vector within iterations Theorem 38 As a special case of the gomp, we show that a sufficient condition of the OMP is given byδ + < + Also, we extend our work to the reconstruction of sparse signals in the presence of noise and obtain the bound of the estimation error It has been brought to our attention that in parallel to our effort, orthogonal multi matching pursuit OMMP or orthogonal super greedy algorithm OSGA [0] suggested a similar treatment to the one posted in this paper Similar approach has also been introduced in [] evertheless, our work is sufficiently distinct from these works in the sufficient recovery condition analysis Further, we also provide an analysis of the noisy scenario upper bound of the recovery distortion in l - norm for which there is no counterpart in the OMMP and OSGA study The rest of this paper is organized as follows In Section II, we introduce the proposed gomp algorithm and provide empirical experiments on the reconstruction performance In Section III, we provide the RIP based analysis of the gomp guaranteeing the perfect reconstruction of -sparse signals We also revisit the OMP algorithm as a special case of the gomp and obtain a sufficient condition ensuring the recovery of-sparse signals In Section IV, we study the reconstruction performance of the gomp under noisy measurement scenario In Section V, we discuss complexity of the gomp algorithm and conclude the paper in Section VI We briefly summarize notations used in this paper Let Ω {,,,n} then T { i i Ω,x i 0} denotes the support of vector x For D Ω, D is the cardinality of D T D T\T D is the set of all elements contained in T but not in D x D R D is a restriction of the vector x to the elements with indices indφ D R m D is a submatrix ofφ that only contains columns indexed byd IfΦ D is full column rank, then Φ D Φ D Φ D Φ D is the pseudoinverse of Φ D spanφ D is the span of columns in Φ D P D Φ D Φ D is the projection onto spanφ D P D I P D is the projection onto the orthogonal complement of spanφ D II GOMP ALGORITHM In each iteration of the gomp algorithm, correlations between columns of Φ and the modified measurements residual are compared and indices of the columns corresponding to maximal correlation are chosen as the new elements of the estimated support set Λ k As a trivial case, when, gomp returns to the OMP Denoting the indices as φ,,φ where φi arg max j:j Ω\{φi,,φ} rk,ϕ j, the extended support set at the k-th iteration becomes Λ k Λ k {φ,,φ} After obtaining the LS solution ˆx Λ k arg min u y Φ Λ ku Φ Λ k y, the residual r k is updated by subtractingφ Λ kˆx Λ k from the measurementsy In other words, the projection of y onto the orthogonal complement space of spanφ Λ k becomes the new residual ie, r k P Λ k y These operations are repeated until either the iteration number reaches maximum k max min, m or the l -norm of the residual falls below a prespecified threshold r k ǫ
3 3 It is worth mentioning that the residual r k of the gomp is orthogonal to the columns of Φ Λ k since ΦΛ k,r k Φ Λ k,p Λ ky 4 Φ Λ kp Λky 5 Φ Λ P y k Λ 6 k P Λ kφ Λ k y 0 7 where 6 follows from the symmetry ofp Λ k P Λ k P Λ k and 7 is due to P Λ kφ Λ k I P Λ kφ Λ k Φ Λ k Φ Λ kφ Λ k Φ Λ k 0 Here we note that this property is satisfied when Φ Λ k has full column rank, which is true if k m/ in the gomp operation It is clear from this observation that indices in Λ k cannot be re-selected in the succeeding iterations and the cardinality ofλ k becomes simplyk When the iteration loop of the gomp is finished, therefore, it is possible that the final support set Λ s contains indices not in T ote that, even in this situation, the final result is unaffected and the original signal is recovered because ˆx Λ s Φ Λsy 8 Φ Λ sφ Λ s Φ Λ sφ Tx T 9 Φ Λ sφ Λ s Φ Λ s Φ Λ sx Λ s Φ Λ sφ Λ s Φ Λ sφ Λ s Tx Λs T 0 x Λ s, where 0 follows from the fact that x Λs T 0 From this observation, we deduce that as long as at least one correct index is found in each iteration of the gomp, we can ensure that the original signal is perfectly recovered within iterations In practice, however, the number of correct indices being selected is usually more than one so that the required number of iterations is much smaller than In order to observe the empirical performance of the gomp algorithm, we performed computer simulations In our experiment, we use the testing strategy in [6], [] which measures the effectiveness of recovery algorithms by checking the empirical frequency of exact reconstruction in the noiseless environment By comparing the maximal sparsity level of the underlying sparse signals at which the perfect recovery is ensured this point is often called critical sparsity [6], accuracy of the reconstruction algorithms can be compared empirically In our simulation, the following algorithms are considered LP technique for solving l -minimization problem OMP algorithm 3 gomp algorithm 4 StOMP with false alarm control FAC based thresholding 5 ROMP algorithm Since FAC scheme outperforms false discovery control FDC scheme, we exclusively use FAC scheme in our simulation Frequency of Exact Reconstruction gomp 3 gomp 6 gomp 9 OMP StOMP ROMP CoSaMP LP Sparsity Fig Reconstruction performance for -sparse Gaussian signal vector as a function of sparsity Frequency of Exact Reconstruction gomp 3 gomp 6 gomp 9 OMP StOMP ROMP CoSaMP LP Sparsity Fig Reconstruction performance for -sparse PAM signal vector as a function of sparsity 6 CoSaMP algorithm In each trial, we construct m n m 8 and n 56 sensing matrix Φ with entries drawn independently from Gaussian distribution 0, m In addition, we generate a -sparse vector x whose support is chosen at random We consider two types of sparse signals; Gaussian signals and pulse amplitude modulation PAM signals Each nonzero element of Gaussian signals is drawn from standard Gaussian
4 4 TABLE I THE GOMP ALGORITHM Input: measurements y R m, sensing matrix Φ R m n, sparsity, number of indices for each selection and m/ Initialize: iteration count k 0, residual vector r 0 y, estimated support set Λ 0 While r k > ǫ and k < min{,m/} do k k + Identification Select indices {φi} i,,, corresponding to largest entries in magnitude in Φ r k Augmentation Λ k Λ k {φ,,φ} Estimation ˆx Λ k argmin y Φ u Λ ku Residual Update r k y Φ Λ kˆx Λ k End Output: the estimated signal ˆx arg min u:suppuλ k y Φu and that in PAM signals is randomly chosen from the set {±, ±3} It should be noted that the gomp algorithm should satisfy and m/ and thus the value should not exceed m m In view of this, we choose 3,6,9 in our simulations For each recovery algorithm, we perform at least 5, 000 independent trials and plot the empirical frequency of exact reconstruction In Fig, we provide the recovery performance as a function of the sparsity level Clearly, higher critical sparsity implies better empirical reconstruction performance The simulation results reveal that the critical sparsity of the gomp algorithm is larger than that of the ROMP, OMP, and StOMP algorithms Even compared to the LP technique and CoSaMP, the gomp exhibits slightly better recovery performance Fig provides results for the PAM input signals We observe that the overall behavior is similar to the case of Gaussian signals except that the l -minimization is better than the gomp Overall, we observe that the gomp algorithm is competitive for both Gaussian and PAM input scenarios In Fig 3, the running time average of Gaussian and PAM signals for recovery algorithms is provided The running time is measured using the MATLAB program under quad-core 64- bit processor and Window 7 environments ote that we do not add the result of LP technique simply because the running time is more than order of magnitude higher than that of all other algorithms Overall, we observe that the running time of StOMP, CoSaMP, gomp, and OMP is more or less similar when the signal vector is sparse ie, when is small However, when the signal vector becomes less sparse ie, when is large, the running time of the CoSaMP and OMP increases much faster than that of the gomp and StOMP In particular, while the running time of the OMP, StOMP, and gomp increases linearly over, that for the CoSaMP seems to increase quadratically over Among algorithms under test, the running time of the StOMP and gomp 6,9 is smallest ote that we do not use any option for enabling the multithread operations Running time sec gomp 3 gomp 6 gomp 9 OMP StOMP ROMP CoSaMP Sparsity Fig 3 Running time as a function of sparsity ote that the running time of the l -minimization is not in the figure since the time is more than order of magnitude higher than the time of other algorithms III RIP BASED RECOVERY CODITIO AALYSIS In this section, we analyze the RIP based condition under which the gomp can perfectly recover -sparse signals First, we analyze the condition ensuring a success at the first iteration k Success means that at least one correct index is chosen in the iteration ext, we study the condition ensuring the success in the non-initial iteration k > By combining two conditions, we obtain the sufficient condition of the gomp algorithm guaranteeing the perfect recovery of -sparse signals The following lemmas are useful in our analysis Lemma 3 Lemma 3 in [3], [6]: If the sensing matrix satisfies the RIP of both orders and, then δ δ for any This property is referred to as the
5 5 monotonicity of the isometry constant Lemma 3 Consequences of RIP [3], [7]: For I Ω, if δ I < then for any u R I, δ I u Φ I Φ Iu +δ I u, T true support set +δ I u Φ I Φ I u δ I u Lemma 33 Lemma in [6] and Lemma in [6]: Let I,I Ω be two disjoint sets I I If δ I + I <, then Φ I Φu Φ I Φ I u I δ I + I u holds for any u supported on I A Condition for Success at the Initial Iteration As mentioned, if at least one index is correct among indices selected, we say that the gomp makes a success in the iteration The following theorem provides a sufficient condition guaranteeing the success of the gomp in the first iteration Theorem 34: Suppose x R n is a -sparse signal, then the gomp algorithm makes a success in the first iteration if δ + < + Proof: Let Λ denote the set of indices chosen in the first iteration Then, elements of Φ Λ y are significant elements in Φ y and thus Φ Λ y max I ϕ i,y 3 where ϕ i denotes the i-th column in Φ Further, we have Φ Λ y max ϕ i,y 4 I max I T i I I i I ϕ i,y 5 i I ϕ i,y 6 i T Φ T y 7 where 6 is from the fact that the average of -best correlation power is larger than or equal to the average of true correlation power Using this together with y Φ T x T, we have Φ Λ y Φ TΦ T x T δ x 8 where the second inequality is from Lemma 3 On the other hand, when no correct index is chosen in the first iteration ie, Λ T, Φ Λ y Φ Λ Φ Tx T δ + x, 9 k! estimated support set Fig 4 Set diagram of Ω, T, and Λ k where the inequality follows from Lemma 33 This inequality contradicts 8 if δ + x < δ x 0 ote that, under 0, at least one correct index is chosen in the first iteration Since δ δ + by Lemma 3, 0 holds true when δ + x < δ + x Equivalently, δ + < + In summary, if δ + < +, then Λ contains at least one element of T in the first iteration of the gomp B Condition for Success in on-initial Iterations In this subsection, we investigate the condition guaranteeing the success of the gomp in non-initial iterations Theorem 35: Suppose min{, m } and the gomp has performed k iterations k < successfully Then under the condition δ <, 3 +3 the gomp will make a success at the k +-th condition As mentioned, newly selected indices are not overlapping with previously selected ones and hence Λ k k Also, under the hypothesis that the gomp has performed k iterations successfully, Λ k contains at least k correct indices In other words, the number of correct indices l in Λ k becomes l T Λ k k ote that we only consider the case whereλ k does not include all correct indices l < since otherwise the reconstruction task is already finished 3 Hence, we can safely assume that the set containing the rest of the correct indices is nonempty T Λ k ey ingredients in our proof are the upper bound α of the -th largest correlation in magnitude between r k and columns indexed byf Ω\Λ k T ie, the set of remaining 3 When all the correct indices are chosen T Λ k then the residual r k 0 and hence the gomp algorithm is finished already
6 6 incorrect indices and the lower bound β of the largest correlation in magnitude between r k and columns whose indices belong to T Λ k ie, the set of remaining correct indices If β is larger than α, then β is contained in the top among all values of ϕ j,r k and hence at least one correct index is chosen in the k +-th iteration The following two lemmas provide the upper bound of α and the lower bound of β, respectively Lemma 36: Let α i ϕ φi,r k where φi arg max ϕ j,r k so that αi are ordered in j:j F\{φ,,φi } magnitude α α Then, in the k +-th iteration in the gomp algorithm, α, the -th largest correlation in magnitude between r k and {ϕ i } i F, satisfies α δ + + δ +kδ k+ δ k Proof: See Appendix A xt Λ k 4 j:j T Λ k \{φ,,φi } Lemma 37: Let β i ϕ φi,r k where φi arg max ϕ j,r k so that βi are ordered in magnitude β β Then in the k +-th iteration in the gomp algorithm, β, the largest correlation in magnitude between r k and {ϕ i } i T Λ k, satisfies +δ +δk δ k+ β δ δ k x T Λ k 5 Proof: See Appendix B We now have all ingredients to prove Theorem 35 Proof of Theorem 35 Proof: A sufficient condition under which at least one correct index is selected at the k +-th step can be described as α < β 6 oting that k l < and < and also using the monotonicity of the restricted isometry constant in Lemma 3, we have < δ < δ, k + < δ k+ < δ, k < δ k < δ, +k δ +k δ 7 From Lemma 36 and 7, we have α δ + + δ +kδ k+ δ k x T Λ k 8 δ + δ xt Λ k 9 δ δ δ x T Λk 30 Also, from Lemma 37 and 7, we have +δ +δk δ k+ β δ δ k x T Λ k 3 +δ +δ δ δ δ x T Λ k 3 3δ δ x T Λk 33 Using 30 and 33, we obtain the sufficient condition of 6 as 3δ δ x T Λk > δ δ x T Λk 34 After some manipulations, we have δ < Since <, 35 holds if δ <, which completes the proof C Overall Sufficient Condition Thus far, we investigated conditions guaranteeing the success of the gomp algorithm in the initial iteration k and non-initial iterations k > We now combine these results to describe the sufficient condition of the gomp algorithm ensuring the perfect recovery of -sparse signals Recall from Theorem 34 that the gomp makes a success in the first iteration if δ + < 37 + Also, recall from Theorem 35 that if the previous k iterations were successful, then the gomp will be successful for the k +-th iteration if δ < In essence, the overall sufficient condition is determined by the stricter condition between 37 and 38 Theorem 38 Sufficient condition of gomp: Let min{, m }, then the gomp algorithm perfectly recovers any -sparse vector x from y Φx via at most iterations if the sensing matrix Φ satisfies the RIP with isometry constant δ < +3 for >, 39 δ < for 40 Proof: In order to prove the theorem, the following three cases need to be considered
7 7 Case [ >, > ]: In this case, + and hence δ δ + and also + > +3 Thus, 38 is stricter than 37 and the general condition becomes δ < 4 +3 Case [, > ]: In this case, the general condition should be the stricter condition between δ < +3 and δ + < + Unfortunately, since δ δ + and +3 +, one cannot compare two conditions directly As an indirect way, we borrow a sufficient condition guaranteeing the perfect recovery of the gomp for as δ < 4 + Readers are referred to [3] for the proof of 4 Since +3 < + for >, the sufficient condition for Case becomes δ < It is interesting to note that 43 can be nicely combined with the result of Case since applying in 4 will result in 43 Case 3 [ ]: Since x has a single nonzero element, x should be recovered in the first iteration Let u be the index of nonzero element, then the exact recovery of x is ensured regardless of if ϕ u,y max ϕ i,y 44 i The condition ensuring 44 is obtained by applying for Theorem 34 and is given by δ < Remark δ based recovery condition: We can express our condition with a small order of isometry constant By virtue of [7, Corollary 34] δ c cδ for positive integer c, the proposed bound holds whenever δ < +3 Remark Comparison with previous work: It is worth mentioning that there have been previous efforts to investigate the sufficient condition for this algorithm In particular, the + condition δ < was established in [0, Theorem ] The proposed bound δ < +3 holds the advantage over this bound if < Since is typically much smaller than, the proposed bound offers better recovery condition in many practical scenarios 4 Remark 3 Measurement size of sensing matrix: It is well known that an m n random sensing matrix whose entries are iid with Gaussian distribution 0, m obeys the RIP 4 In fact, should not be large since the inequality m must be guaranteed δ < ε with overwhelming probability if the dimension of the measurements satisfies [9] log n m O ε 45 In [7], it is shown that the OMP requires m O logn/ random measurements for reconstructing -sparse signal Plugging 39 into 45, we also get the same result D Sufficient Condition of OMP In this subsection, we put our focus on the OMP algorithm which is the special case of the gomp algorithm for For sure, one can immediately obtain the condition of the OMP δ < +3 by applying to Theorem 38 Our result is an improved version of this and based on the fact that the non-initial step of the OMP process is the same as the initial step since the residual is considered as a new measurement preserving the sparsity of an input vector x [8], [3] In this regard, a condition guaranteeing to select a correct index in the first iteration can be readily extended to the general condition without incurring any loss Corollary 39 Direct consequence of Theorem 34: Suppose x R n is -sparse, then the OMP algorithm recovers an index in T from y Φx R m in the first iteration if δ + < + We now state that the first iteration condition is extended to any iteration of the OMP algorithm Lemma 30 Wang and Shim [8]: Suppose that the first k iterations k of the OMP algorithm are successful ie, Λ k T, then the k +-th iteration is also successful ie, t k+ T under δ + < + Combining Corollary 39 and Lemma 30, and also noting that indices in Λ k are not selected again in the succeeding iterations since the index chosen in the k+-th step belongs to T Λ k, one can conclude that Λ T and the OMP algorithm recovers original signal x in exactly iterations under δ + < + The following theorem formally describes the sufficient condition of the OMP algorithm Theorem 3 Wang and Shim [8]: Suppose x is - sparse vector, then the OMP algorithm recovers x from y Φx under δ + < + 46 Proof: Immediate from Corollary 39 and Lemma 30 Remark 4 Comments on bounds of 4 and 46: The bounds of the gomp in 4 and the OMP in 46 cannot be directly compared since δ δ + and +3 + evertheless, the difference in the order might offer possible advantages to the OMP It should be noted that the RIP condition, analyzed based on the worse case scenario, is for the perfect recovery and hence offers too conservative bound This explains why the most of sparse recovery algorithms perform better than the bound
8 8 predicts in practice It should also be noted that by allowing more iterations larger than iterations for the OMP, one can improve performance [0], [4], [5] and achieve performance comparable to the gomp However, this may incur large delay and higher computational cost IV RECOSTRUCTIO OF SPARSE SIGALS FROM OISY MEASUREMETS In this section, we consider the reconstruction performance of the gomp algorithm in the presence of noise Since the measurement is expressed as y Φx + v in this scenario, perfect reconstruction of x cannot be guaranteed and hence we need to use the upper bound of l -norm distortion x ˆx as a performance measure Recall that the termination condition of the gomp algorithm is either r s < ǫ or k min {, m } ote that, since min {, m } under the condition that Φ satisfies the RIP of order, 5 the stopping rule of the gomp can be simplified to r k < ǫ or k In these two scenarios, we investigate the upper bound of x ˆx The next theorem provides the upper bound of l -norm distortion when the gomp algorithm is finished by r k < ǫ Theorem 4: Let Φ be the sensing matrix satisfying RIP of order If r s < ǫ is satisfied after s s < iterations, then x ˆx ǫ δ + v δ 47 Proof: See Appendix C The next theorem provides the upper bound of x ˆx for the second scenario ie, when the gomp is terminated after iterations Theorem 4: Let Φ be the sensing matrix satisfying the RIP of order + and δ < 3 + Suppose the gomp algorithm is terminated after iterations, then where C x ˆx v δ, if T Λ, 48 x ˆx C v, if T Λ, 49 δ + +δ + 3δ 3δ 3δ δ +δ +δ δ+ δ δ δ+ Since C > δ, one can get the simple upper bound as x ˆx C v 5 If Φ satisfies the RIP of order, ie, δ 0,, then 0 < δ λ min Φ D Φ D for all index set D with D, which indicates that all eigenvalues of Φ D are positive Thus, Φ D should be full column rank ie, m/ Remark 5 Comments about C : ote that, by the hypothesis of the theorem, 0 < 3δ δ <, so C k for large is approximately C δ +δ +δ δ+ 3δ δ 50 This says that the l -norm distortion is essentially upper bounded by the product of noise power and c c is a constant It is clear from 50 that C decreases as increases Hence, by increasing ie, allowing more indices to be selected per step, we may obtain a better smaller distortion bound However, since m needs to be satisfied, this bound is guaranteed only for very sparse signal vectors Before providing the proof of Theorem 4, we analyze a sufficient condition of the gomp to make success at the k+ -th iteration when the former k iterations are successful In our analysis, we reuse the notation α i and β i of Lemma 36 and 37 The following lemma provides an upper bound of α and a lower bound of β and Lemma 43: α and β satisfy α β δ + + δ +kδ k+ xt Λ k δ k +δ v + 5 δ +δ +δk δ k+ δ k x T Λ k +δ v 5 Proof: See Appendix D As mentioned, the gomp will select at least one correct index fromt at thek+-th iteration provided thatα < β Lemma 44: Suppose the gomp has performed k iterations k < successfully Then under the condition +δ + + δ δ x T Λ k > v, 3δ δ 53 the gomp makes a success at the k +-th condition Proof: See Appendix E We are now ready to prove Theorem 4 Proof: We first consider the scenario where Λ contains
9 9 all correct indices ie, T Λ In this case, x ˆx δ Λ T Φx ˆx 54 δ Λ Φx ˆx 55 Φx ΦΛ Φ y Λ 56 δ Φx Φ Λ Φ Λ Φx+v δ 57 Φx Φ Λ Φ Λ Φx Φ Λ Φ Λ v δ 58 P Λ v 59 δ v, 60 δ where 59 follows from Φx Φ Λ Φ Λ Φx Φx Φ Λ Φ Λ Φ Λ x Λ 0 for T Λ ow we turn to the next scenario where Λ does not contain all the correct indices ie, T Λ Since the algorithm has performed iterations yet failed to find all correct indices, it is clear that the gomp algorithm does not make a success for some iteration say this occurs at p+-th iteration Then, by the contraposition of Lemma 44, +δ + x T Λ p + δ δ v 3δ δ 6 Since x ˆx is at most +-sparse, Also, r r p, 6 and thus x ˆx r p + v 67 δ+ P Λ py + v 68 δ+ P Λ pφ T x T +P Λ pv + v 69 δ+ P Λ pφ T Λ px T Λ p +P Λ pv δ+ v + P Λ pφ T Λ px T Λ p δ+ 70 δ+ P Λ pv + + v 7 δ+ Φ T Λ px T Λ p + v, 7 δ+ where 70 is because P Λp cancels all the components in spanφ Λ p Using the definition of the RIP, we have Φ T Λ px T Λ p +δ T Λ p x T Λ p and hence +δ x T Λ p, 73 x ˆx +δ x T Λ p + v δ+ 74 Combining 74 and 6, we finally have where C x ˆx C v, 75 δ + +δ + 3δ 3δ 3δ δ +δ +δ δ+ δ δ δ+ x ˆx 6 δ Λ T Φx ˆx Φx Φ Λ Φ Λ y δ+ 63 y v Φ Λ Φ y Λ δ+ 64 r v δ+ 65 r + v 66 δ+ V DISCUSSIOS O COMPLEXITY In this section, we discuss the complexity of the gomp algorithm Our analysis shows that the computational complexity is approximatelysmn+ +s m wheres is the number of iterations We show by empirical simulations that the number of iterations s is small so that the proposed gomp algorithm is effective in running time and computational complexity The complexity for each step of the gomp algorithm is summarized as follows 6 Due to the orthogonal projection of the gomp, the magnitude of the residual decreases as iterations go on r i r j for i j
10 0 umber of iteration gomp 3 gomp 6 gomp 9 OMP / Sparsity Fig 5 umber of iterations of the OMP and gomp 5 as a function of sparsity The red dashed line is the reference curve indicating /3 iterations Identification: The gomp performs a matrix-vector multiplication Φ r k, which requires m n flops m multiplication and m additions Also, Φ r k needs to be sorted to find best indices, which requires n +/ flops Estimation of ˆx Λ k: The LS solution ˆx Λ k is obtained in this step Using the QR factorization of Φ Λ k Φ Λ k QR, we have ˆx Λ k Φ Λ kφ Λ k Φ Λ ky R R R Q y 76 and this leads to a cost of Ok m [6] Actually, since the elements of Λ k and Λ k are largely overlapped, it is possible to recycle the part of the previous QR factorization ofφ Λ k and then apply the modified Gram-Schmidt MGS algorithm In doing so, the LS solution can be solved efficiently see Appendix F, 7 and the associated cost is 4 km+ +5m+ 3 k k +3 3 flops Residual update: For the residual update, the gomp performs the matrix-vector multiplication Φ Λ kˆx Λ k k m flops followed by the subtraction m flops Table II summarizes the complexity of each operation in the k-th iteration of the gomp The complexity of the k-th iterations is approximately mn km If the gomp is finished in s iterations, then the complexity of the 7 We note that the fast approach of the MGS for solving LS problem can also be applied to the OMP, ROMP, and StOMP, but with the exception of the CoSaMP This is because the CoSaMP algorithm computes a completely new LS solution over the distinct subset of Φ in each iteration evertheless, other fast approaches such as Richardson s iteration or conjugate gradient might be applied in the CoSaMP TABLE II COMPLEXITY OF THE GOMP ALGORITHM k-th STEP Operation Identification Estimation of ˆx Λ k Residual update Total Complexity m +n +/ Omn 4 km Okm km mn+4 +km Omn gomp, denoted as C gomp,s,m,n, becomes s C gomp,s,m,n mn+4 +km k smn+ +s m oting that s min{,m/} and is a small constant, the complexity of the gomp can be expressed as Omn In practice, however, the iteration number of the gomp is much smaller than due to the inclusion of multiple correct indices for each iteration, which saves the complexity of the gomp substantially Indeed, as shown in Fig 5, the number of iterations is about 3 of the OMP so that the gomp has a computational advantage over the OMP VI COCLUSIO As a cost-effective solution for recovering sparse signals from compressed measurements, the OMP algorithm has received much attention in recent years In this paper, we presented the generalized version of the OMP algorithm for pursuing efficiency in reconstructing sparse signals Since multiple indices can be identified with no additional postprocessing operation, the proposed gomp algorithm lends itself to parallel-wise processing, which expedites the processing of the algorithm and thereby reduces the running time In fact, we demonstrated in the empirical simulations that the gomp has excellent recovery performance comparable to l -minimization technique with fast processing speed and competitive computational complexity We showed from the RIP analysis that if the isometry constant of the sensing matrix satisfies δ < +3 then the gomp algorithm can perfectly recover -sparse signals > from compressed measurements One important point we would like to mention is that the gomp algorithm is potentially more effective than what this analysis tells Indeed, the bound in 39 is derived based on the worst case scenario where the algorithm selects only one correct index per iteration hence requires maximum iterations In reality, as observed in the empirical simulations, it is highly likely that the multiple correct indices are identified for each iteration and hence the number of iterations is usually much smaller than that of the OMP Therefore, we believe that less strict or probabilistic analysis will uncover the whole story of the CS recovery performance Our future work will be directed towards this avenue APPEDIX A PROOF OF LEMMA 36 Proof: Let w i be the index of the i-th largest correlation in magnitude between r k and {ϕ j } j F ie, columns corresponding to remaining incorrect indices Also, define the
11 set of indices W {w,w,,w } The l -norm of the correlation Φ W rk is expressed as Φ W rk Φ W P Λ kφ T Λ kx T Λ k Φ W Φ T Λ kx T Λ k Φ WP Λ kφ T Λ kx T Λ k Φ W Φ T Λ kx T Λ k + Φ WP Λ kφ T Λ kx T Λ k 77 where P Λ k I P Λ k Since W and T Λ k are disjoint ie, W T Λ k and W + T Λ k + note that the number of correct indices in Λ k is l by hypothesis Using this together with Lemma 33, we have Φ W Φ T Λ kx T Λ k δ + x T Λ k 78 Similarly, noting that W Λ k and W + Λ k +k, we have where Φ WP Λ kφ T Λ kx T Λ k Φ δ +k Φ Λ k T Λ kx T Λ k 79 Φ Φ Λ k T Λ kx T Λ k Φ Λ kφ Λ k Φ Λ kφ T Λ kx T Λ k 80 δ k Φ Λ kφ T Λ kx T Λ k 8 δ k+ δ k x T Λ k, 8 where 8 and 8 follow from Lemma 3 and Lemma 33, respectively SinceΛ k andt Λ k are disjoint, if the number of correct indices inλ k isl, then Λ k T Λ k k+ Using 77, 78, 79, and 8, we have Φ W r k δ + + δ +kδ k+ δ k x T Λ k 83 Since α i ϕ wi,r k, we have Φ W rk α i ow, using the norm inequality 8, we have i i Φ W r k α i 84 Since α α α, it is clear that Φ W r k α α Hence, we have δ + + δ +kδ k+ x δ T Λ k k α, 85 and α δ + + δ +kδ k+ xt Λ k 86 δ k APPEDIX B PROOF OF LEMMA 37 Proof: Since β is the largest correlation in magnitude between r k and {ϕ j } j T Λ k, it is clear that for all j T Λ k, and hence β β ϕ j,r k 87 Φ T Λ kr k 88 Φ T Λ kp Λ kφx 89 where 89 follows from r k y Φ Λ kφ y P Λ k Λ Φx k Using the triangle inequality, Φ T Λ kp Λ Φ β k T Λ kx T Λ k 90 Since T Λ k, and where 94 is from Φ T Λ kφ T Λ kx T Λ k Φ T Λ kp Λ kφ T Λ kx T Λ k 9 Φ T Λ kφ T Λ kx T Λ k δ x T Λ k, 9 Φ T Λ kp Λ kφ T Λ kx T Λ k Φ T Λ k P Λ kφ T Λ kx T Λ k 93 +δ P Λ kφ T Λ kx T Λ k 94 Φ T Λ k λ max Φ T Λ kφ T Λ k +δ Furthermore, we observe that P Λ kφ T Λ kx T Λ k Φ Λ kφ Λ kφ Λ k Φ Λ kφ T Λ kx T Λ k 95 +δ k Φ Λ kφ Λ k Φ Λ kφ T Λ kx T Λ 96 k +δk Φ Λ δ kφ T Λ kx T Λ k 97 k δ k+ +δk x δ T Λ k, 98 k where 96 is from the definition of RIP and 97 and 98 follow from Lemma 3 and 33, respectively Λ k and T Λ k are disjoint sets and Λ k T Λ k k + l By combining 94 and 98, we obtain Φ T Λ kp Λ kφ T Λ kx T Λ k 8 u u 0 u +δ +δk δ k+ δ k x T Λ k 99
12 Finally, by combining 9 9, and 99, we obtain β x T Λ k +δ +δk δ k+ δ 00 δ k Proof: We observe that APPEDIX C PROOF OF THEOREM 4 x ˆx Φx ˆx δ Λ s T 0 Φx ˆx 0 δs+ Φx Φ Λ sφ Λ sy δs+ 03 y v Φ Λ sφ Λ sy δs+ 04 where 0 is due to the definition of the RIP, 0 follows from the fact thatx ˆx is at mosts+-sparse δ Λ s T δ s+, and 04 is from ˆx Λ s Φ Λ sy Since y Φ Λ sφ Λ sy y P Λ sy P Λ sy rs, we further have x ˆx r s v 05 δs+ δs+ r s + v 06 ǫ δ + v δ 07 where the last inequality is due to s+ and hence δ s+ δ and r s < ǫ APPEDIX D PROOF OF LEMMA 43 Proof: Using a triangle inequality, Φ W r k Φ W P Λ k Φx+v 08 Φ W P Λ kφ T Λ kx T Λ k + Φ W P Λ kv 09 Φ WΦ T Λ kx T Λ k + Φ WP Λ kφ T Λ kx T Λ k + Φ WP Λ kv 0 Using 78, 79, and 8, Φ W r k is upper bounded by Φ W r k δ + + δ +kδ k+ x δ T Λ k k + Φ W P Λ kv Further, we have Φ W P Λ kv Φ W P Λ kv Plugging 5 into, we have Φ W r k λ max Φ WΦ W P Λ kv 3 +δ P Λ kv 4 +δ v 5 δ + + δ +kδ k+ δ k x T Λ k + +δ v 6 Using Φ W r k α and 6, we have α δ + + δ +kδ k+ xt Λ k δ k +δ v + 7 ext, we derive a lower bound for β First, recalling that β max j:j T Λ k ϕj,r k, we have β Φ T Λ kr k 8 Φ T Λ kp Λ k Φx+v 9 Using a triangle inequality, Φ T Λ β kp Λ Φ k T Λ kx T Λ k Φ T Λ kp Λ v k 0 Φ T Λ kφ T Λ kx T Λ k Φ T Λ kp Λ kφ T Λ kx T Λ k Φ T Λ kp Λ v k From 9 and 99, we have Φ T Λ kφ T Λ kx T Λ k δ x T Λ k, and Also, Φ T Λ kp Λ kφ T Λ kx T Λ k +δ +δk δ k+ x δ T Λ k k Φ T Λ kp Λ kv Φ T Λ k P Λ kv 3 +δ P Λ kv 4 +δ v 5
13 3 Finally, by combining, and 5, we obtain +δ +δk δ k+ β δ δ k x T Λ k +δ v 6 APPEDIX E PROOF OF LEMMA 44 Proof: Using the relaxations of the isometry constants in 7, we have α δ + + δ +kδ k+ δ k and β x T Λ k +δ v + δ + δ δ xt Λ k δ +δ v + δ δ x T Λk + +δ v 7 δ +δ +δk δ k+ δ k x T Λ k +δ v +δ +δ δ δ δ x T Λ k +δ v 3δ x T Λk +δ v 8 δ The bounds on α and β imply that a sufficient condition of α < β is 3δ δ x T Λk +δ v > δ δ x T Λk + +δ v 9 After some manipulations, we have x T Λ k > +δ+ + δ δ v 30 3δ δ Since, this condition is guaranteed if x T Λ k > +δ + +δ δ v 3 3δ δ APPEDIX F COMPUTATIOAL COST FOR THE ESTIMATE STEP OF GOMP In the k-th iteration, the gomp estimates the nonzero elements of x by solving an LS problem, ˆx Λ k argmin y Φ Λ kx x Φ Λ kφ Λ k Φ Λky 3 To solve 3, we employ the MGS algorithm in which the QR decomposition of previous iteration is maintained and, therefore, the computational cost can be reduced Without loss of generality, we assume Φ Λ k ϕ ϕ ϕ k The QR decomposition of ΦΛ k is given by Φ Λ k QR where Q q q q k R m k consists of k orthonormal columns and R R k k is an upper triangular matrix, R q,ϕ q,ϕ q,ϕ k 0 q,ϕ q,ϕ k 0 0 q k,ϕ k For notation simplicity we denote R i,j q i,ϕ j and p k In addition, we denote the QR decomposition of the k -th iteration as Φ Λ k Q R Then it is clear that Q R R Q Q 0 and R a 33 0 R b where Q 0 q p+ q k R m and R a and R b are given by R,p+ R,k R a, R p,p+ R p,k R p+,p+ R p+,k R b 34 0 R k,k Applying Φ Λ k QR to 3, we have ˆx Λ k R R R Q y 35 We count the cost of solving 35 in the following steps Here we assess the computational cost by counting floatingpoint operations flops That is, each +,,,/, counts as one flop Cost of QR decomposition: To obtain Q and R, one only needs to compute Q 0, R a and R b since the previous data, Q and R are stored For j to, we sequentially calculate {R i,p+j } i,,,p+j { q i,ϕ j } i,,,p+j,
14 4 ˆq p+j q p+j p+j ϕ p+j R i,p+j q i, ˆq p+j ˆq p+j, i R p+j,p+j q p+j,ϕ p+j Taking j for example One first computes {R i,p+ } i,,,p using Q requires pm flops and then computes ˆq p+ ϕ p+ p i R i,p+q i requires mp flops Then, normalization of ˆq p+ requires 3m flops Finally, computing R p+,p+ requires m flops The cost of this example amounts to 4mp+5m p Similarly, one can calculate the other data in Q 0 and R a R b In summary, the cost for this QR factorization becomes C QR 4 mk m +3m k + Cost of calculating Q y Since Q Q Q 0, we have Q Q y y Q 0 y By reusing the data Q y, Q y is solved with C m Cost of calculating R Q y: Applying R to the vector Q y, we have R Q R y Q y R a Q y+r b Q 0 y Since the datar Q y can be reused,r Q y is solved with C k Cost of calculating R R Since R is an upper triangular matrix, R R R R Using the block matrix inversion formula, we have R R R a 0 R b R R R a R b 0 R b Then we calculate R R R R, ie, R R M M M 3 M 4 where M R R, M R R R a R b, M 3 R b R a R R, M 4 R b R b We can reuse the data R R so that the cost of calculating R b, R b R b, and R R R a R b becomes + + /3 using Gaussian elimination method, + + /6, and 3 k 4 3 k + 3, respectively The cost for computing R R is C 3 3 k 4 3 k Cost of calculating ˆx Λ k R R R Q y Applying R R to the vector R Q y, we obtain R ˆx Λ k R R Q y+ξ ξ +ξ 3 where ξ ξ R R R ar b R aq y +R bq 0y, R b R ar R R Q y, ξ 3 R b R b R aq y+r bq 0y We can reuse R R R Q y so that the computation of ξ, ξ and ξ 3 requires k, k and flops, respectively The cost of this step becomes C 4 4 k In summary, whole cost of solving LS problem in the k-th iteration of the gomp is the sum of the above and is given by C LS C QR +C +C +C 3 +C 4 4 km+ +5m+ 3 k k +3 3 ACOWLEDGMET The authors would like to thank the anonymous reviewers and for their valuable suggestions that improved the presentation of the paper REFERECES [] D L Donoho and X Huo, Uncertainty principles and ideal atomic decomposition, IEEE Trans Inform Theory, vol 47, no 7, pp , ov 00 [] D L Donoho, I Drori, Y Tsaig, and J L Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, Citeseer, 006 [3] E J Candès and T Tao, Decoding by linear programming, IEEE Trans Inform Theory, vol 5, no, pp , Dec 005 [4] E J Candès, J Romberg, and T Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans Inform Theory, vol 5, no, pp , Feb 006 [5] E J Candès and J Romberg, Sparsity and incoherence in compressive sampling, Inverse problems, vol 3, pp 969, Apr 007 [6] E J Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathematique, vol 346, no 9-0, pp , 008 [7] M A Davenport and M B Wakin, Analysis of Orthogonal Matching Pursuit using the restricted isometry property, IEEE Trans Inform Theory, vol 56, no 9, pp , Sept 00 [8] J A Tropp and A C Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans Inform Theory, vol 53, no, pp , Dec 007 [9] R Baraniuk, M Davenport, R DeVore, and M Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation, vol 8, no 3, pp 53 63, 008
15 5 [0] M Raginsky, R M Willett, Z T Harmany, and R F Marcia, Compressed sensing performance bounds under poisson noise, IEEE Trans Signal Process, vol 58, no 8, pp , Aug 00 [] J Wang and B Shim, Exact reconstruction of sparse signals via generalized orthogonal matching pursuit, in Proc Asilomar Conf on Signals, Systems and Computers, Monterey, CA, ov 0, pp 39 4 [] Gao, S Batalama, D A Pados, and B W Suter, Compressive sampling with generalized polygons, IEEE Trans Signal Process, vol 59, no 0, pp , Oct 0 [3] M A hajehnejad, A G Dimakis, W Xu, and B Hassibi, Sparse recovery of nonnegative signals with minimal expansion, IEEE Trans Signal Process, vol 59, no, pp 96 08, Jan 0 [4] S Sarvotham, D Baron, and R G Baraniuk, Compressed sensing reconstruction via belief propagation, preprint, 006 [5] D eedell and R Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit, IEEE J Sel Topics Signal Process, vol 4, no, pp 30 36, Apr 00 [6] W Dai and O Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans Inform Theory, vol 55, no 5, pp 30 49, May 009 [7] D eedell and J A Tropp, Cosamp: Iterative signal recovery from incomplete and inaccurate samples, Applied and Computational Harmonic Analysis, vol 6, no 3, pp 30 3, March 009 [8] J Wang and B Shim, On the reoovery limit of sparse signals using orthogonal matching pursuit, IEEE Trans Signal Process, vol 60, no 9, pp , Sep 0 [9] D eedell and R Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, Foundations of Computational Mathematics, vol 9, no 3, pp , 009 [0] E Liu and V Temlyakov, The orthogonal super greedy algorithm and applications in compressed sensing, IEEE Trans Inform Theory, vol 58, no 4, pp , Apr 0 [] R Maleh, Improved rip analysis of orthogonal matching pursuit, arxiv:043, 0 [] E Candes, M Rudelson, T Tao, and R Vershynin, Error correction via linear programming, in IEEE Symposium on Foundations of Computer Science FOCS, 005, pp [3] J Wang, S won, and B Shim, ear optimal bound of orthogonal matching pursuit using restricted isometric constant, EURASIP Journal on Advances in Signal Processing, vol 0, no, pp 8, 0 [4] T Zhang, Sparse recovery with orthogonal matching pursuit under rip, IEEE Trans Inform Theory, vol 57, no 9, pp 65 6, Sept 0 [5] S Foucart, Stability and robustness of weak orthogonal matching pursuits, Submitted for publication, 0 [6] Å Björck, umerical methods for least squares problems, umber 5 Society for Industrial Mathematics, 996 PLACE PHOTO HERE Byonghyo Shim Senior Member, IEEE received the BS and MS degrees in control and instrumentation engineering currently electrical engineering from Seoul ational University, orea, in 995 and 997, respectively and the MS degree in mathematics and the PhD degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign, in 004 and 005, respectively From 997 and 000, he was with the department of electronics engineering at the orean Air Force Academy as an Officer First Lieutenant and an Academic Full-time Instructor From 005 to 007, he was with the Qualcomm Inc, San Diego, CA, as a staff member Since September 007, he has been with the school of information and communication, orea University, where he is currently an associate professor His research interests include wireless communications, compressive sensing, applied linear algebra, and information theory Dr Shim was the recipient of 005 M E Van Valkenburg research award from ECE department of University of Illinois and 00 Haedong young engineer award from IEE PLACE PHOTO HERE Jian Wang Student Member, IEEE received the BS degree in Material Engineering and MS degree in Information and Communication Engineering from Harbin Institute of Technology, China, in 006 and 009, respectively He is currently working toward the PhD degree in Electrical and Computer Engineering in orea University His research interests include compressive sensing, wireless communications, and statistical learning PLACE PHOTO HERE Seokbeop won Student Member, IEEE received the BS and MS degrees in the School of Information and Communication, orea University, in 008 and 00, where he is currently working toward the PhD degree His research interests include compressive sensing and signal processing
Generalized Orthogonal Matching Pursuit
Generalized Orthogonal Matching Pursuit Jian Wang, Seokbeop Kwon and Byonghyo Shim Information System Laboratory School of Information and Communication arxiv:1111.6664v1 [cs.it] 29 ov 2011 Korea University,
More informationMultipath Matching Pursuit
Multipath Matching Pursuit Submitted to IEEE trans. on Information theory Authors: S. Kwon, J. Wang, and B. Shim Presenter: Hwanchol Jang Multipath is investigated rather than a single path for a greedy
More informationRecovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang, Ping i Department of Statistics and Biostatistics arxiv:40.505v [stat.me] 9 Oct 04 Department of Computer Science Rutgers University
More informationGeneralized Orthogonal Matching Pursuit- A Review and Some
Generalized Orthogonal Matching Pursuit- A Review and Some New Results Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur, INDIA Table of Contents
More informationRecovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang and Ping i Department of Statistics and Biostatistics, Department of Computer Science Rutgers, The State University of New Jersey
More informationof Orthogonal Matching Pursuit
A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement
More informationNew Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit
New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence
More informationA new method on deterministic construction of the measurement matrix in compressed sensing
A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central
More informationGREEDY SIGNAL RECOVERY REVIEW
GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin
More informationSubspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity Wei Dai and Olgica Milenkovic Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign
More informationUniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit
Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit arxiv:0707.4203v2 [math.na] 14 Aug 2007 Deanna Needell Department of Mathematics University of California,
More informationPre-weighted Matching Pursuit Algorithms for Sparse Recovery
Journal of Information & Computational Science 11:9 (214) 2933 2939 June 1, 214 Available at http://www.joics.com Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Jingfei He, Guiling Sun, Jie
More informationGreedy Signal Recovery and Uniform Uncertainty Principles
Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles
More informationORTHOGONAL matching pursuit (OMP) is the canonical
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael
More informationCoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp
CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell
More informationRandomness-in-Structured Ensembles for Compressed Sensing of Images
Randomness-in-Structured Ensembles for Compressed Sensing of Images Abdolreza Abdolhosseini Moghadam Dep. of Electrical and Computer Engineering Michigan State University Email: abdolhos@msu.edu Hayder
More informationSignal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit
Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably
More informationModel-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk
Model-Based Compressive Sensing for Signal Ensembles Marco F. Duarte Volkan Cevher Richard G. Baraniuk Concise Signal Structure Sparse signal: only K out of N coordinates nonzero model: union of K-dimensional
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationAn Introduction to Sparse Approximation
An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,
More informationGradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property
: An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,
More informationStability and Robustness of Weak Orthogonal Matching Pursuits
Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery
More informationCoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles
CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal
More informationA New Estimate of Restricted Isometry Constants for Sparse Solutions
A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist
More informationCompressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery
Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior
More informationOrthogonal Matching Pursuit for Sparse Signal Recovery With Noise
Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published
More informationORTHOGONAL MATCHING PURSUIT (OMP) is a
1076 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 64, NO 4, FEBRUARY 15, 2016 Recovery of Sparse Signals via Generalized Orthogonal Matching Pursuit: A New Analysis Jian Wang, Student Member, IEEE, Suhyuk
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationA Greedy Search Algorithm with Tree Pruning for Sparse Signal Recovery
A Greedy Search Algorithm with Tree Pruning for Sparse Signal Recovery Jaeseok Lee, Suhyuk Kwon, and Byonghyo Shim Information System Laboratory School of Infonnation and Communications, Korea University
More informationCompressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes
Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Item Type text; Proceedings Authors Jagiello, Kristin M. Publisher International Foundation for Telemetering Journal International Telemetering
More informationCompressed Sensing and Robust Recovery of Low Rank Matrices
Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More informationUniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit
Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit
More informationLecture Notes 9: Constrained Optimization
Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form
More informationThe Pros and Cons of Compressive Sensing
The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal
More informationSolution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions
Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,
More informationStability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction
Stability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction Namrata Vaswani ECE Dept., Iowa State University, Ames, IA 50011, USA, Email: namrata@iastate.edu Abstract In this
More informationCompressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles
Or: the equation Ax = b, revisited University of California, Los Angeles Mahler Lecture Series Acquiring signals Many types of real-world signals (e.g. sound, images, video) can be viewed as an n-dimensional
More informationStopping Condition for Greedy Block Sparse Signal Recovery
Stopping Condition for Greedy Block Sparse Signal Recovery Yu Luo, Ronggui Xie, Huarui Yin, and Weidong Wang Department of Electronics Engineering and Information Science, University of Science and Technology
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationMultiplicative and Additive Perturbation Effects on the Recovery of Sparse Signals on the Sphere using Compressed Sensing
Multiplicative and Additive Perturbation Effects on the Recovery of Sparse Signals on the Sphere using Compressed Sensing ibeltal F. Alem, Daniel H. Chae, and Rodney A. Kennedy The Australian National
More informationIEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER
IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,
More informationExact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP)
1 Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP) Wei Lu and Namrata Vaswani Department of Electrical and Computer Engineering, Iowa State University,
More informationAnalysis of Greedy Algorithms
Analysis of Greedy Algorithms Jiahui Shen Florida State University Oct.26th Outline Introduction Regularity condition Analysis on orthogonal matching pursuit Analysis on forward-backward greedy algorithm
More informationNoisy Signal Recovery via Iterative Reweighted L1-Minimization
Noisy Signal Recovery via Iterative Reweighted L1-Minimization Deanna Needell UC Davis / Stanford University Asilomar SSC, November 2009 Problem Background Setup 1 Suppose x is an unknown signal in R d.
More informationThe Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1
The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,
More informationExact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University
More informationReconstruction from Anisotropic Random Measurements
Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013
More informationSignal Recovery from Permuted Observations
EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,
More informationIterative Hard Thresholding for Compressed Sensing
Iterative Hard Thresholding for Compressed Sensing Thomas lumensath and Mike E. Davies 1 Abstract arxiv:0805.0510v1 [cs.it] 5 May 2008 Compressed sensing is a technique to sample compressible signals below
More informationLecture: Introduction to Compressed Sensing Sparse Recovery Guarantees
Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin
More informationExponential decay of reconstruction error from binary measurements of sparse signals
Exponential decay of reconstruction error from binary measurements of sparse signals Deanna Needell Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters Outline Introduction Mathematical Formulation
More informationElaine T. Hale, Wotao Yin, Yin Zhang
, Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2
More informationExact Low-rank Matrix Recovery via Nonconvex M p -Minimization
Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Lingchen Kong and Naihua Xiu Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, People s Republic of China E-mail:
More informationAFRL-RI-RS-TR
AFRL-RI-RS-TR-200-28 THEORY AND PRACTICE OF COMPRESSED SENSING IN COMMUNICATIONS AND AIRBORNE NETWORKING STATE UNIVERSITY OF NEW YORK AT BUFFALO DECEMBER 200 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC
More informationInverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France
Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse
More informationObservability of a Linear System Under Sparsity Constraints
2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear
More informationCompressive Sensing and Beyond
Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered
More informationarxiv: v1 [math.na] 26 Nov 2009
Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationQuantized Iterative Hard Thresholding:
Quantized Iterative Hard Thresholding: ridging 1-bit and High Resolution Quantized Compressed Sensing Laurent Jacques, Kévin Degraux, Christophe De Vleeschouwer Louvain University (UCL), Louvain-la-Neuve,
More informationStrengthened Sobolev inequalities for a random subspace of functions
Strengthened Sobolev inequalities for a random subspace of functions Rachel Ward University of Texas at Austin April 2013 2 Discrete Sobolev inequalities Proposition (Sobolev inequality for discrete images)
More informationNear Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing
Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar
More informationSparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images
Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are
More informationRecovery of Compressible Signals in Unions of Subspaces
1 Recovery of Compressible Signals in Unions of Subspaces Marco F. Duarte, Chinmay Hegde, Volkan Cevher, and Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University Abstract
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More informationSparse Vector Distributions and Recovery from Compressed Sensing
Sparse Vector Distributions and Recovery from Compressed Sensing Bob L. Sturm Department of Architecture, Design and Media Technology, Aalborg University Copenhagen, Lautrupvang 5, 275 Ballerup, Denmark
More informationThe Pros and Cons of Compressive Sensing
The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal
More informationRandom projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016
Lecture notes 5 February 9, 016 1 Introduction Random projections Random projections are a useful tool in the analysis and processing of high-dimensional data. We will analyze two applications that use
More informationAN INTRODUCTION TO COMPRESSIVE SENSING
AN INTRODUCTION TO COMPRESSIVE SENSING Rodrigo B. Platte School of Mathematical and Statistical Sciences APM/EEE598 Reverse Engineering of Complex Dynamical Networks OUTLINE 1 INTRODUCTION 2 INCOHERENCE
More informationCS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5
CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationSparsity in Underdetermined Systems
Sparsity in Underdetermined Systems Department of Statistics Stanford University August 19, 2005 Classical Linear Regression Problem X n y p n 1 > Given predictors and response, y Xβ ε = + ε N( 0, σ 2
More informationc 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE
METHODS AND APPLICATIONS OF ANALYSIS. c 2011 International Press Vol. 18, No. 1, pp. 105 110, March 2011 007 EXACT SUPPORT RECOVERY FOR LINEAR INVERSE PROBLEMS WITH SPARSITY CONSTRAINTS DENNIS TREDE Abstract.
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationCompressed Sensing and Related Learning Problems
Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed
More informationRecent Developments in Compressed Sensing
Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline
More informationStability of Modified-CS over Time for recursive causal sparse reconstruction
Stability of Modified-CS over Time for recursive causal sparse reconstruction Namrata Vaswani Department of Electrical and Computer Engineering Iowa State University http://www.ece.iastate.edu/ namrata
More informationINDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina
INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed
More informationSparse Optimization Lecture: Sparse Recovery Guarantees
Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationRecovering overcomplete sparse representations from structured sensing
Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix
More informationConditions for Robust Principal Component Analysis
Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and
More informationIntroduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011
Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear
More informationExact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations
Exact Topology Identification of arge-scale Interconnected Dynamical Systems from Compressive Observations Borhan M Sanandaji, Tyrone Vincent, and Michael B Wakin Abstract In this paper, we consider the
More informationInterpolation via weighted l 1 minimization
Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C
More informationThe Analysis Cosparse Model for Signals and Images
The Analysis Cosparse Model for Signals and Images Raja Giryes Computer Science Department, Technion. The research leading to these results has received funding from the European Research Council under
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department
More informationOn Sparsity, Redundancy and Quality of Frame Representations
On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division
More informationSimultaneous Sparsity
Simultaneous Sparsity Joel A. Tropp Anna C. Gilbert Martin J. Strauss {jtropp annacg martinjs}@umich.edu Department of Mathematics The University of Michigan 1 Simple Sparse Approximation Work in the d-dimensional,
More informationBlock-sparse Solutions using Kernel Block RIP and its Application to Group Lasso
Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso Rahul Garg IBM T.J. Watson research center grahul@us.ibm.com Rohit Khandekar IBM T.J. Watson research center rohitk@us.ibm.com
More informationTractable Upper Bounds on the Restricted Isometry Constant
Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.
More informationACCORDING to Shannon s sampling theorem, an analog
554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,
More informationRui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China
Acta Mathematica Sinica, English Series May, 015, Vol. 31, No. 5, pp. 755 766 Published online: April 15, 015 DOI: 10.1007/s10114-015-434-4 Http://www.ActaMath.com Acta Mathematica Sinica, English Series
More informationRecovery Guarantees for Rank Aware Pursuits
BLANCHARD AND DAVIES: RECOVERY GUARANTEES FOR RANK AWARE PURSUITS 1 Recovery Guarantees for Rank Aware Pursuits Jeffrey D. Blanchard and Mike E. Davies Abstract This paper considers sufficient conditions
More informationGreedy Sparsity-Constrained Optimization
Greedy Sparsity-Constrained Optimization Sohail Bahmani, Petros Boufounos, and Bhiksha Raj 3 sbahmani@andrew.cmu.edu petrosb@merl.com 3 bhiksha@cs.cmu.edu Department of Electrical and Computer Engineering,
More informationL-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise
L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise Srdjan Stanković, Irena Orović and Moeness Amin 1 Abstract- A modification of standard
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More informationCOMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION
COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION By Mazin Abdulrasool Hameed A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for
More information