Robustly Stable Signal Recovery in Compressed Sensing with Structured Matrix Perturbation


Robustly Stable Signal Recovery in Compressed Sensing with Structured Matrix Perturbation

Zai Yang, Cishen Zhang, and Lihua Xie, Fellow, IEEE

Abstract—The sparse signal recovery in standard compressed sensing (CS) requires that the sensing matrix be known a priori. The CS problem subject to perturbation in the sensing matrix is often encountered in practice, and results in the literature have shown that the signal recovery error grows linearly with the perturbation level. This paper assumes a structure for the perturbation. Under mild conditions on the perturbed sensing matrix, it is shown that a sparse signal can be recovered by ℓ1 minimization with the recovery error being at most proportional to the measurement noise level, similar to that in the standard CS. The recovery is exact in the special noise free case provided that the signal is sufficiently sparse with respect to the perturbation level. A similar result holds for compressible signals under an additional assumption of small perturbation. Algorithms are proposed for implementing the ℓ1 minimization problem, and numerical simulations are carried out that verify our analysis.

Index Terms—Compressed sensing, structured matrix perturbation, robustly stable signal recovery, alternating algorithm.

Z. Yang (author for correspondence) and L. Xie are with EXQUISITUS, Centre for E-City, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (e-mail: yang48@e.ntu.edu.sg; elhxie@ntu.edu.sg). C. Zhang is with the Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Hawthorn, VIC, Australia (e-mail: cishenzhang@swin.edu.au).

I. INTRODUCTION

Compressed sensing (CS) has been a very active research area since the pioneering works of Candès et al. [1], [2] and Donoho [3]. In CS, a signal x° ∈ ℝⁿ of length n is called k-sparse if it has at most k nonzero entries, and it is called compressible if its entries obey a power law

  |x°₍ᵢ₎| ≤ C_q · i^(−q),  (1)

where x°₍ᵢ₎ is the ith largest entry (in absolute value) of x° (|x°₍₁₎| ≥ |x°₍₂₎| ≥ … ≥ |x°₍ₙ₎|), q > 1/2, and C_q is a constant that depends only on q. Let x_k be the truncated vector corresponding to the k largest entries (in absolute value) of x°. If x° is compressible, then it can be well approximated by the sparse signal x_k in the sense that

  ‖x° − x_k‖₂ ≤ C′_q · k^(−q+1/2),  (2)

where C′_q is a constant that depends only on C_q and q. To obtain the knowledge of x°, CS acquires linear measurements of x° as

  y = Φx° + e,  (3)

where Φ ∈ ℝ^(m×n) is the sensing matrix (or linear operator) with typically k < m ≪ n, y ∈ ℝ^m is the vector of measurements, and e ∈ ℝ^m denotes the vector of measurement noises with bounded energy, i.e., ‖e‖₂ ≤ ε for ε ≥ 0. Given Φ and ε, the task of CS is to recover x° from a significantly reduced number of measurements y.

Candès et al. [1], [4] show that if x° is sparse, then it can be stably recovered under mild conditions on Φ, with the recovery error being at most proportional to the measurement noise level, by solving an ℓ1 minimization problem. Similarly, the k largest entries (in absolute value) of a compressible signal can be stably recovered. More details are presented in Subsection II-B. In addition to the ℓ1 minimization, other approaches that provide similar guarantees have also been reported, such as IHT [5] and greedy pursuit methods including OMP [6], StOMP [7] and CoSaMP [8].

The sensing matrix Φ is assumed known a priori in the standard CS, which is, however, not always the case in practical situations. For example, a matrix perturbation can be caused by quantization during implementation.
In source separation [9], [10], the sensing matrix (or mixing system) is usually unknown and needs to be estimated, and thus estimation errors exist. In source localization, such as direction of arrival (DOA) estimation [11], [12] and radar imaging [13], [14], the sensing matrix (an overcomplete dictionary) is constructed by discretizing one or more continuous parameters, and errors typically exist in the sensing matrix since the true source locations may not lie exactly on the discretized sampling grid. As a result, there have been recent active studies on CS where the sensing matrix is subject to an unknown perturbation. Herman and Strohmer [15] analyze the effect of a general matrix perturbation and show that the signal recovery is robust to the perturbation in the sense that the recovery error grows linearly with the perturbation level. Similar robust recovery results are also reported in [16], [17]. It is demonstrated in [17], [18] that the signal recovery may suffer from a large error under a large perturbation. In addition, the existence of a recovery error caused by the perturbed sensing matrix is independent of the sparsity of the original signal. Algorithms have also been proposed to deal with sensing matrix perturbations. Zhu et al. [19] propose a sparse total least-squares approach to alleviating the effect of perturbation, in which they exploit the structure of the perturbation to improve recovery performance. Yang et al. [12] formulate the off-grid DOA estimation problem from a sparse Bayesian inference perspective and iteratively recover the source signal and the matrix perturbation. It should be noted that existing algorithmic results provide no guarantees on signal recovery performance when there exist perturbations in the sensing matrix.

This paper is on the perturbed CS problem. A structured matrix perturbation is studied, with each column of the perturbation matrix being an (unknown) constant times a (known)

vector which defines the direction of perturbation. For such structured matrix perturbation, we provide conditions for guaranteed signal recovery performance. Our analysis shows that robust stability (see the definition in Subsection II-A) can be achieved for a sparse signal under mild conditions similar to those for the standard CS problem, by solving an ℓ1 minimization problem that incorporates the perturbation structure. In the special noise free case, the recovery is exact for a sufficiently sparse signal with respect to the perturbation level. A similar result holds for a compressible signal under an additional assumption of small perturbation (depending on the number of largest entries to be recovered). To verify our analysis, two algorithms, for positive-valued and general signals respectively, are proposed to solve the resulting nonconvex ℓ1 minimization problem. Numerical simulations confirm our robustly stable signal recovery results.

A common approach in CS to signal recovery is solving an optimization problem, e.g., ℓ1 minimization. Another contribution of this paper is to characterize a set of solutions to the optimization problem that can be good estimates of the signal to be recovered, which indicates that it is not necessary to obtain the optimal solution to the optimization problem. This is helpful to assess the effectiveness of an algorithm (see the definition in Subsection III-E), for example, the ℓp (p < 1) minimization [20], [21] in the standard CS, in solving the optimization problem, since in nonconvex optimization the output of an algorithm cannot be guaranteed to be the optimal solution.

Notations used in this paper are as follows. Bold-case letters are reserved for vectors and matrices. ‖x‖₀ denotes the ℓ0 pseudo-norm that counts the number of nonzero entries of a vector x. ‖x‖₁ and ‖x‖₂ denote the ℓ1 and ℓ2 norms of a vector x, respectively. ‖A‖₂ and ‖A‖_F are the spectral and Frobenius norms of a matrix A, respectively. xᵀ is the transpose of a vector x, and Aᵀ is that of a matrix A. x_i is the ith entry of a vector x. Tᶜ is the complementary set of a set T. Unless otherwise stated, x_T retains the entries of a vector x on an index set T and has zero entries on Tᶜ. diag(x) is a diagonal matrix with its diagonal entries being the entries of a vector x. ∘ is the Hadamard (elementwise) product.

The rest of the paper is organized as follows. Section II first defines formally some terminologies used in this paper and then introduces existing results on the standard CS and the perturbed CS. Section III presents the main results of the paper with some discussions. Section IV introduces algorithms for the ℓ1 minimization problem in our considered perturbed CS and their analysis. Section V presents a series of numerical simulations that verify our main results. Conclusions are drawn in Section VI. Finally, proofs of some results in Section III and Section IV are provided in the Appendices.

II. PRELIMINARY RESULTS

A. Definitions

For clarity of exposition, we define formally some terminologies for signal recovery used in this paper, including stability in the standard CS, and robustness and robust stability in the perturbed CS.

Definition 1 ([1]): In the standard CS where Φ is known a priori, consider a recovered signal x̂ of x° from measurements y = Φx° + e with ‖e‖₂ ≤ ε.
We say that x̂ achieves stable signal recovery if

  ‖x̂ − x°‖₂ ≤ C₁^stb · k^(−q+1/2) + C₂^stb · ε

holds for a compressible signal x° obeying (1) and an integer k, or if

  ‖x̂ − x°‖₂ ≤ C₂^stb · ε

holds for a k-sparse signal x°, with nonnegative constants C₁^stb, C₂^stb.

Definition 2: In the perturbed CS where Φ = A + E, with A known a priori and E unknown with ‖E‖_F ≤ η, consider a recovered signal x̂ of x° from measurements y = Φx° + e with ‖e‖₂ ≤ ε. We say that x̂ achieves robust signal recovery if

  ‖x̂ − x°‖₂ ≤ C₁^rbt · k^(−q+1/2) + C₂^rbt · ε + C₃^rbt · η

holds for a compressible signal x° obeying (1) and an integer k, or if

  ‖x̂ − x°‖₂ ≤ C₂^rbt · ε + C₃^rbt · η

holds for a k-sparse signal x°, with nonnegative constants C₁^rbt, C₂^rbt and C₃^rbt.

Definition 3: In the perturbed CS where Φ = A + E, with A known a priori and E unknown with ‖E‖_F ≤ η, consider a recovered signal x̂ of x° from measurements y = Φx° + e with ‖e‖₂ ≤ ε. We say that x̂ achieves robustly stable signal recovery if

  ‖x̂ − x°‖₂ ≤ C₁^rs(η) · k^(−q+1/2) + C₂^rs(η) · ε

holds for a compressible signal x° obeying (1) and an integer k, or if

  ‖x̂ − x°‖₂ ≤ C₂^rs(η) · ε

holds for a k-sparse signal x°, with nonnegative constants C₁^rs(η), C₂^rs(η) depending on η.

Remark 1: (1) In the case where x° is compressible, the defined stable, robust, or robustly stable signal recovery is in fact for its k largest entries (in absolute value). The first term O(k^(−q+1/2)) in the error bounds above represents, by (2), the best approximation error (up to a scale) that can be achieved when we know everything about x° and select its k largest entries. (2) The Frobenius norm ‖E‖_F can be replaced by any other norm in Definitions 2 and 3 since the norms are equivalent. It should be noted that the stable recovery in the standard CS and the robustly stable recovery in the perturbed CS are exact in the noise free, sparse signal case, while there is no such guarantee for the robust recovery in the perturbed CS.

B. Stable Signal Recovery in Standard CS

The task of the standard CS is to recover the original signal x° via an efficient approach, given the sensing matrix Φ, the acquired sample y and an upper bound ε for the measurement noise. This paper focuses on the ℓ1 norm minimization approach. The restricted isometry property (RIP) [4] has become a dominant tool in such analysis, which is defined as follows.

Definition 4: Define the k-restricted isometry constant (RIC) of a matrix Φ, denoted by δ_k(Φ), as the smallest number such that

  (1 − δ_k(Φ))‖v‖₂² ≤ ‖Φv‖₂² ≤ (1 + δ_k(Φ))‖v‖₂²

holds for all k-sparse vectors v. Φ is said to satisfy the k-RIP with constant δ_k(Φ) if δ_k(Φ) < 1.

Based on the RIP, the following theorem holds.

Theorem 1 ([4]): Assume that δ₂ₖ(Φ) < √2 − 1 and ‖e‖₂ ≤ ε. Then an optimal solution x* to the basis pursuit denoising (BPDN) problem

  min_x ‖x‖₁, subject to ‖y − Φx‖₂ ≤ ε  (4)

satisfies

  ‖x* − x°‖₂ ≤ C₀^std · k^(−1/2)‖x° − x_k‖₁ + C₁^std · ε,  (5)

where C₀^std = 2[1 + (√2 − 1)δ₂ₖ(Φ)] / [1 − (√2 + 1)δ₂ₖ(Φ)] and C₁^std = 4√(1 + δ₂ₖ(Φ)) / [1 − (√2 + 1)δ₂ₖ(Φ)].

Theorem 1 states that a k-sparse signal x° (for which x_k = x°) can be stably recovered by solving a computationally efficient convex optimization problem, provided δ₂ₖ(Φ) < √2 − 1. The same conclusion holds in the case of a compressible signal x°, since k^(−1/2)‖x° − x_k‖₁ ≤ C″_q · k^(−q+1/2) according to (1) and (2), with C″_q a constant of the same form as C′_q in (2). In the special noise free, k-sparse signal case, such a recovery is exact. The RIP condition in Theorem 1 can be satisfied with large probability provided m ≥ O(k log(n/k)) if the sensing matrix Φ is i.i.d. subgaussian distributed [23]. Note that the RIP condition for stable signal recovery in the standard CS has been relaxed in [24], [25], but this is beyond the scope of this paper.

C. Robust Signal Recovery in Perturbed CS

In the standard CS, the sensing matrix Φ is assumed to be exactly known. Such an ideal assumption does not always hold in practice. Consider that the true sensing matrix is Φ = A + E, where A ∈ ℝ^(m×n) is the known nominal sensing matrix and E ∈ ℝ^(m×n) represents the unknown matrix perturbation. Unlike the additive noise term e in the observation model in (3), a multiplicative noise Ex° is introduced in the perturbed CS, and it is more difficult to analyze since it is correlated with the signal of interest. Denote by ‖E‖₂^(k) the largest spectral norm taken over all k-column submatrices of E, and similarly define ‖Φ‖₂^(k). The following theorem is stated in [15].

Theorem 2 ([15]): Assume that there exist constants ε^(k)_{E,Φ} and ε_{E,x°} such that ‖E‖₂^(k)/‖Φ‖₂^(k) ≤ ε^(k)_{E,Φ} and ‖Ex°‖₂ ≤ ε_{E,x°}. Assume in addition that δ₂ₖ(Φ) < √2/(1 + ε^(k)_{E,Φ})² − 1, ‖e‖₂ ≤ ε and ‖x°‖₀ ≤ k. Then an optimal solution x* to the BPDN problem with the nominal sensing matrix A, denoted by N-BPDN,

  min_x ‖x‖₁, subject to ‖y − Ax‖₂ ≤ ε + ε_{E,x°},  (6)

achieves robust signal recovery with

  ‖x* − x°‖₂ ≤ C^ptb · (ε + ε_{E,x°}),  (7)

where the constant C^ptb depends only on δ₂ₖ(Φ) and ε^(k)_{E,Φ} (its explicit expression is given in [15]).

Remark 2: (1) The relaxation of the inequality constraint in (6) from ε to ε + ε_{E,x°} is to ensure that the original signal x° is a feasible solution to N-BPDN. Theorem 2 is slightly different from that in [15], where the multiplicative noise Ex° is bounded using ε^(k)_{E,Φ}, δ_k(Φ) and ‖Φx°‖₂ rather than a constant ε_{E,x°}. (2) Theorem 2 is applicable only to the small perturbation case where ε^(k)_{E,Φ} < 2^(1/4) − 1, since δ₂ₖ(Φ) ≥ 0. (3) Theorem 2 generalizes Theorem 1 for the k-sparse signal case. As the perturbation E → 0, Theorem 2 coincides with Theorem 1 for the k-sparse signal case.

Theorem 2 states that, for a small matrix perturbation E, the signal recovery of N-BPDN, which is based on the nominal sensing matrix A, is robust to the perturbation, with the recovery error growing at most linearly with the perturbation level.
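Both (4) and (6) are second-order cone programs. The following is a minimal sketch of the underlying BPDN solve in Python with the cvxpy modeling package; this is not the authors' implementation (the paper uses Matlab/CVX later in Section V), and the function name `bpdn` is ours. The same routine covers N-BPDN by passing A and the relaxed noise bound.

```python
# A minimal sketch of the BPDN solve behind (4) and (6), using the cvxpy
# package; this Python analogue and the name `bpdn` are ours.
import cvxpy as cp

def bpdn(Phi, y, eps):
    """min ||x||_1  s.t.  ||y - Phi x||_2 <= eps."""
    x = cp.Variable(Phi.shape[1])
    cp.Problem(cp.Minimize(cp.norm(x, 1)),
               [cp.norm(y - Phi @ x, 2) <= eps]).solve()
    return x.value
```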
Note that, in general, the signal recovery in Theorem 2 is unstable according to the definition of stability in this paper, since the recovery error cannot be bounded within a constant (independent of the noise) times the noise level once some perturbation occurs. A result on general signals in [15], which shows the robust recovery of a compressible signal, is omitted here. The same problem is studied and similar results are reported in [16] based on the greedy algorithm CoSaMP [8].

III. SP-CS: CS SUBJECT TO STRUCTURED PERTURBATION

A. Problem Description

In this paper we consider a structured perturbation of the form E = BΔ°, where B ∈ ℝ^(m×n) is known a priori and Δ° = diag(β°) is a bounded uncertain term with β° ∈ [−r, r]ⁿ and r > 0, i.e., each column of the perturbation lies along a known direction. In addition, we assume that each column of B has unit norm to avoid the scaling ambiguity between B and Δ°. As a result, the observation model in (3) becomes

  y = Φx° + e,  Φ = A + BΔ°,  (8)

with Δ° = diag(β°), β° ∈ [−r, r]ⁿ and ‖e‖₂ ≤ ε. Given y, A, B, r and ε, the task of SP-CS is to recover x° and, possibly, β° as well.

Remark 3: If x°ᵢ = 0 for some i ∈ {1, …, n}, then β°ᵢ has no contribution to the observation y and hence it is impossible to recover β°ᵢ. As a result, the recovery of β° in this paper refers only to the recovery on the support of x°. In fact, the D-RIP condition on the matrix [A, B] in Subsection III-B implies that the columns of both A and B have approximately unit norms.
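For concreteness, the sketch below generates data from the SP-CS model (8) and verifies the lifted representation y = Ψz° + e, with Ψ = [A, B] and z° = [x°ᵀ, (β° ∘ x°)ᵀ]ᵀ, used in Subsection III-B. The sizes (m, n, k) are illustrative placeholders of our own choosing, not the paper's experimental settings.

```python
import numpy as np

# Data from the SP-CS observation model (8); sizes are illustrative only.
rng = np.random.default_rng(0)
m, n, k, r, eps = 80, 200, 10, 0.1, 0.1

A = rng.standard_normal((m, n)); A /= np.linalg.norm(A, axis=0)
B = rng.standard_normal((m, n)); B /= np.linalg.norm(B, axis=0)  # unit-norm columns

x0 = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x0[supp] = rng.choice([-1.0, 1.0], size=k)     # k unit spikes with random signs
beta0 = rng.uniform(-r, r, size=n)             # structured perturbation parameter

e = rng.standard_normal(m); e *= eps / np.linalg.norm(e)   # ||e||_2 = eps
y = (A + B * beta0) @ x0 + e                   # B * beta0 scales column i by beta0_i

# Lifted form of Subsection III-B: y = Psi z0 + e with z0 k-D-sparse.
Psi = np.hstack([A, B])
z0 = np.concatenate([x0, beta0 * x0])
assert np.allclose(y, Psi @ z0 + e)
```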

B. Main Results of This Paper

In this paper, a vector v is called k-duplicately (D-) sparse if v = [v₁ᵀ, v₂ᵀ]ᵀ with v₁ and v₂ being of the same dimension and jointly k-sparse (each being k-sparse and sharing the same support). The concept of the duplicate (D-) RIP is defined as follows.

Definition 5: Define the k-duplicate (D-) RIC of a matrix Ψ, denoted by δ̃_k(Ψ), as the smallest number such that

  (1 − δ̃_k(Ψ))‖v‖₂² ≤ ‖Ψv‖₂² ≤ (1 + δ̃_k(Ψ))‖v‖₂²

holds for all k-D-sparse vectors v. Ψ is said to satisfy the k-D-RIP with constant δ̃_k(Ψ) if δ̃_k(Ψ) < 1.

With respect to the perturbed observation model in (8), let Ψ = [A, B]. The main results of this paper are stated in the following theorems. The proof of Theorem 3 is provided in Appendix A, and the proofs of Theorems 4 and 5 are in Appendix B.

Theorem 3: In the noise free case where e = 0, assume that ‖x°‖₀ ≤ k and δ̃₄ₖ(Ψ) < 1. Then an optimal solution (x*, β*) to the perturbed combinatorial optimization problem

  min_{x ∈ ℝⁿ, β ∈ [−r,r]ⁿ} ‖x‖₀, subject to y = (A + BΔ)x,  (9)

with Δ = diag(β), recovers x° and β° (on the support of x°).

Theorem 4: Assume that δ̃₄ₖ(Ψ) < 1/(√(2(1+r²)) + 1), ‖x°‖₀ ≤ k and ‖e‖₂ ≤ ε. Then an optimal solution (x*, β*) to the perturbed (P-) BPDN problem

  min_{x ∈ ℝⁿ, β ∈ [−r,r]ⁿ} ‖x‖₁, subject to ‖y − (A + BΔ)x‖₂ ≤ ε,  (10)

achieves robustly stable signal recovery with

  ‖x* − x°‖₂ ≤ C₁ε,  (11)
  ‖(β* − β°) ∘ x°‖₂ ≤ C₂ε,  (12)

where

  C₁ = 4√(1 + δ̃₄ₖ(Ψ)) / [1 − (√(2(1+r²)) + 1)δ̃₄ₖ(Ψ)],
  C₂ = [2 + √(1+r²)‖Ψ‖₂C₁] / √(1 − δ̃₄ₖ(Ψ)).

Theorem 5: Assume that δ̃₄ₖ(Ψ) < 1/(√(2(1+r²)) + 1) and ‖e‖₂ ≤ ε. Then an optimal solution (x*, β*) to the P-BPDN problem in (10) satisfies

  ‖x* − x°‖₂ ≤ C̄₁k^(−1/2)‖x° − x_k‖₁ + C̄₂‖x° − x_k‖₁ + C₁ε,  (13)
  ‖(β* − β°) ∘ x_k‖₂ ≤ C̄₃k^(−1/2)‖x° − x_k‖₁ + C̄₄‖x° − x_k‖₁ + C₂ε,  (14)

where, with a = 1 − (√(2(1+r²)) + 1)δ̃₄ₖ(Ψ) and b = √(1 − δ̃₄ₖ(Ψ)),

  C̄₁ = 2[1 + (√(2(1+r²)) − 1)δ̃₄ₖ(Ψ)]/a,  C̄₂ = 4√2 · r · δ̃₄ₖ(Ψ)/a,
  C̄₃ = √(1+r²)‖Ψ‖₂C̄₁/b,  C̄₄ = [2r + √(1+r²)C̄₂]‖Ψ‖₂/b,

and C₁, C₂ are as defined in Theorem 4.

Remark 4: In general, robustly stable signal recovery cannot be concluded for compressible signals, since the error bound in (13) may be very large in the case of large perturbation because C̄₂ = O(r). If the perturbation is small with r = O(k^(−1/2)), then robust stability can be achieved for compressible signals provided that the D-RIP condition in Theorem 5 is satisfied.

C. Interpretation of the Main Results

Theorem 3 states that a k-sparse signal x° can be recovered by solving a combinatorial optimization problem, provided δ̃₄ₖ(Ψ) < 1, when the measurements are exact. Meanwhile, β° can be recovered. Since the combinatorial optimization problem is NP-hard and its solution is sensitive to measurement noise [26], a more reliable approach, ℓ1 minimization, is explored in Theorems 4 and 5.

Theorem 4 states the robustly stable recovery of a k-sparse signal x° in SP-CS, with the recovery error being at most proportional to the noise level. Such robust stability is obtained by solving an ℓ1 minimization problem that incorporates the perturbation structure, provided that the D-RIC is sufficiently small with respect to the perturbation level in terms of r. Meanwhile, the perturbation parameter β° can be stably recovered on the support of x°. When the D-RIP condition in Theorem 4 is satisfied, the signal recovery error of the perturbed CS is constrained by the noise level, and the influence of the perturbation is limited to the coefficient before ε. For example, if δ̃₄ₖ(Ψ) = 0.2, then C₁ ≈ 8.48, 8.50 and 10.95, corresponding to r = 0, 0.1 and 1, respectively. In the special noise free case, the recovery is exact. This is similar to the standard CS, but in contrast to the existing robust signal recovery result in Subsection II-C, where a recovery error remains once a matrix perturbation appears. Another interpretation of the D-RIP condition in Theorem 4 is that robustly stable signal recovery requires

  r < √[(1 − δ̃₄ₖ(Ψ))² / (2δ̃₄ₖ(Ψ)²) − 1]

for a fixed matrix Ψ.
Using the aforementioned example where δ̃₄ₖ(Ψ) = 0.2, the perturbation is required to satisfy r < √7. As a result, our robustly stable signal recovery result for SP-CS applies to the case of large perturbation if the D-RIC of Ψ is sufficiently small, while the existing result does not, as demonstrated in Remark 2. (The sketch below evaluates C₁ and this admissible range of r numerically.)

Theorem 5 considers general signals and is a generalized form of Theorem 4. In comparison with Theorem 1 in the standard CS, one more term, C̄₂‖x° − x_k‖₁, appears in the upper bound of the recovery error. Robust stability does not hold in general for compressible signals, as illustrated in Remark 4, while it does hold under the additional assumption r = O(k^(−1/2)).

The results in this paper generalize those in the standard CS. Without accounting for the symbolic difference between δ₂ₖ(Φ) and δ̃₄ₖ(Ψ), the conditions in Theorems 1 and 5 coincide, as do the upper bounds in (5) and (13) for the recovery errors, as the perturbation vanishes or, equivalently, r → 0. As mentioned before, the RIP condition for guaranteed stable recovery in the standard CS has been relaxed.
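The following snippet evaluates the Theorem 4 constant C₁ and the admissible perturbation range for a fixed D-RIC, using the expressions as reconstructed above; the printed values reproduce the δ̃₄ₖ(Ψ) = 0.2 example up to rounding, and C₁ reduces to the standard-CS constant C₁^std of Theorem 1 as r → 0.

```python
import numpy as np

# Theorem 4 constant C1 and the admissible perturbation range for a fixed
# D-RIC delta (expressions as reconstructed in Section III).
def C1(delta, r=0.0):
    return 4 * np.sqrt(1 + delta) / (1 - (np.sqrt(2 * (1 + r**2)) + 1) * delta)

def r_max(delta):
    # Largest r with delta < 1/(sqrt(2(1+r^2)) + 1), i.e. the bound above.
    return np.sqrt((1 - delta)**2 / (2 * delta**2) - 1)

print([round(C1(0.2, r), 2) for r in (0.0, 0.1, 1.0)])  # ~[8.47, 8.5, 10.95]
print(r_max(0.2)**2)            # 7.0, i.e. r < sqrt(7) when delta = 0.2
print(C1(0.2, 0.0))             # equals C1^std of Theorem 1 at delta_2k = 0.2
```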

Similar techniques may be adopted to possibly relax the D-RIP condition in SP-CS. While this paper focuses on the ℓ1 minimization approach, it is also possible to modify other algorithms in the standard CS and apply them to SP-CS to provide similar recovery guarantees.

D. When is the D-RIP Satisfied?

Existing works studying the RIP mainly focus on random matrices. In the standard CS, Φ has the k-RIP with constant δ with large probability provided that m ≥ C_δ · k log(n/k) and Φ has properly scaled i.i.d. subgaussian distributed entries, with the constant C_δ depending on δ and the distribution [23]. The D-RIP can be considered as a model-based RIP introduced in [27]. Suppose that A, B are mutually independent and both are i.i.d. subgaussian distributed (the true sensing matrix Φ = A + BΔ° is then also i.i.d. subgaussian distributed if β° is independent of A and B). The model-based RIP is determined by the number of subspaces of the structured sparse signals, which are the D-sparse ones in the present paper. For Ψ = [A, B], the number of subspaces containing the k-D-sparse signals is the binomial coefficient C(n, k). Consequently, Ψ has the k-D-RIP with constant δ with large probability, again provided that m ≥ C_δ · k log(n/k), by [27, Theorem 1] or [28, Theorem 3.3]. So, in the case of a high dimensional system, the D-RIP condition on Ψ (as r → 0) in Theorem 4 or 5 for guaranteed robustly stable signal recovery of SP-CS can be satisfied once the RIP condition on Φ (after proper scaling of columns) in the standard CS is met. It means that the perturbation in SP-CS gradually strengthens the D-RIP condition needed for robustly stable signal recovery, but there exists no gap between SP-CS and the standard CS in the case of high dimensional systems.

It is noted that there is another way to stably recover the original signal x° in SP-CS. Take the sparse signal case as an example, where x° is k-sparse. Let z° = [x°ᵀ, (β° ∘ x°)ᵀ]ᵀ, which is k-D-sparse and hence 2k-sparse. The observation model can be written as y = Ψz° + e. Then z° and, hence, x° can be stably recovered from the problem

  min_z ‖z‖₁, subject to ‖y − Ψz‖₂ ≤ ε,  (15)

provided that δ₄ₖ(Ψ) < √2 − 1, by Theorem 1. It looks as if we transformed the perturbation into a signal of interest. Denote by TPS-BPDN the problem in (15); a sketch of it follows this subsection. In a high dimensional system, the condition δ₄ₖ(Ψ) < √2 − 1 requires about twice as many measurements as those making the D-RIP condition δ̃₄ₖ(Ψ) < 1/(√2 + 1) hold by [27, Theorem 1], the latter corresponding to the D-RIP condition in Theorem 4 or 5 as r → 0. As a result, for a considerable range of perturbation levels, the D-RIP condition in Theorem 4 or 5 for P-BPDN is weaker than that for TPS-BPDN, since it varies slowly for a moderate perturbation (as an example, the D-RIP condition reads δ̃₄ₖ(Ψ) < 0.414, 0.413 and 0.409 for r = 0, 0.1 and 0.2, respectively). Numerical simulations in Section V verify this conclusion. Note also that it is hard to incorporate the knowledge β° ∈ [−r, r]ⁿ into the problem in (15).

Fig. 1. Illustration of Corollary 2. The shaded band area refers to the feasible domain of BPDN. The triangular area, the intersection of the feasible domain and the ℓ1 ball {x : ‖x‖₁ ≤ ‖x°‖₁}, is the set of all good recoveries.
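A sketch of TPS-BPDN (15) in the same cvxpy style (the function name is ours): the lifted vector z is treated as an ordinary sparse unknown, and the box information β ∈ [−r, r]ⁿ is necessarily ignored.

```python
import cvxpy as cp

# TPS-BPDN (15): a standard BPDN over Psi = [A, B] in the lifted variable
# z = [x; beta o x]; the constraint beta in [-r, r]^n cannot be encoded.
def tps_bpdn(Psi, y, eps):
    z = cp.Variable(Psi.shape[1])
    cp.Problem(cp.Minimize(cp.norm(z, 1)),
               [cp.norm(y - Psi @ z, 2) <= eps]).solve()
    n = Psi.shape[1] // 2
    return z.value[:n]          # signal part; z.value[n:] estimates beta o x
```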
E. Relaxation of the Optimal Solution

In Theorem 5 (of which Theorem 4 is a special case), (x*, β*) is required to be an optimal solution to P-BPDN. Naturally, we would like to know whether the optimality requirement is necessary for a good recovery, in the sense that a good recovery validates the error bounds in (13) and (14) under the conditions of Theorem 5. Generally speaking, the answer is negative since, regarding the optimality of (x*, β*), only ‖x*‖₁ ≤ ‖x°‖₁ and the feasibility of (x*, β*) are used in the proof of Theorem 5 in Appendix B. Denote by D the feasible domain of P-BPDN, i.e.,

  D = {(x, β) : β ∈ [−r, r]ⁿ, ‖y − (A + BΔ)x‖₂ ≤ ε with Δ = diag(β)}.  (16)

We have the following corollary.

Corollary 1: Under the assumptions of Theorem 5, any (x, β) ∈ D that satisfies ‖x‖₁ ≤ ‖x°‖₁ obeys

  ‖x − x°‖₂ ≤ C̄₁k^(−1/2)‖x° − x_k‖₁ + C̄₂‖x° − x_k‖₁ + C₁ε,
  ‖(β − β°) ∘ x_k‖₂ ≤ C̄₃k^(−1/2)‖x° − x_k‖₁ + C̄₄‖x° − x_k‖₁ + C₂ε,

with C̄ᵢ and Cᵢ as defined in Theorem 5.

Corollary 1 generalizes Theorem 5, and its proof follows directly that of Theorem 5. It shows that a good recovery in SP-CS is not necessarily an optimal solution to P-BPDN. A similar result holds in the standard CS that generalizes Theorem 1, and the proof of Theorem 1 in [4] applies directly to this case.

Corollary 2: Under the assumptions of Theorem 1, any x that satisfies ‖y − Φx‖₂ ≤ ε and ‖x‖₁ ≤ ‖x°‖₁ obeys

  ‖x − x°‖₂ ≤ C₀^std · k^(−1/2)‖x° − x_k‖₁ + C₁^std · ε,  (17)

with C₀^std, C₁^std as defined in Theorem 1.

An illustration of Corollary 2 is presented in Fig. 1, where the shaded band area refers to the feasible domain of BPDN in (4), and all points in the triangular area, the intersection of the feasible domain of BPDN and the ℓ1 ball {x : ‖x‖₁ ≤ ‖x°‖₁}, are good candidates for the recovery of x°.
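Corollaries 1 and 2 suggest a simple a posteriori certificate, used later in Subsection IV-C: check feasibility and compare ℓ1 norms. A sketch follows; the function name and the tolerance parameter `tol` (absorbing solver round-off) are ours.

```python
import numpy as np

# Certificate from Corollary 1: (x, beta) is a "good" recovery if it is
# feasible for P-BPDN and ||x||_1 <= ||x0||_1.
def is_good_recovery(x, beta, x0, A, B, y, eps, r, tol=1e-9):
    feasible = (np.max(np.abs(beta)) <= r + tol and
                np.linalg.norm(y - (A + B * beta) @ x) <= eps + tol)
    return feasible and np.linalg.norm(x, 1) <= np.linalg.norm(x0, 1) + tol
```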

The reason why one seeks the optimal solution is to guarantee that the inequality ‖x‖₁ ≤ ‖x°‖₁ holds, since ‖x°‖₁ is generally unavailable a priori. Corollary 2 can explain why a satisfactory recovery can be obtained in practice using an algorithm that may not produce an optimal solution to BPDN [29]. Corollaries 1 and 2 are useful for checking the effectiveness of an algorithm in the case when the output cannot be guaranteed to be optimal.³ Namely, an algorithm is called effective in solving some ℓ1 minimization problem if it can produce a feasible solution whose ℓ1 norm is no larger than that of the original signal. Similar ideas have been adopted in [30], [31].

³It is common that the problem to be solved is nonconvex, such as P-BPDN as discussed in Section IV and the ℓp (0 < p < 1) minimization approaches [20], [21], [32] in the standard CS. In addition, Corollaries 1 and 2 can be readily extended to the ℓp (0 < p < 1) minimization approaches.

IV. ALGORITHMS FOR P-BPDN

A. Special Case: Positive Signals

In general, the P-BPDN problem in (10) for SP-CS is nonconvex, and an optimal solution cannot be easily obtained. This subsection studies a special case where the original signal x° is positive-valued (except for zero entries). Such a case has been studied in the standard CS [34], [35]. By incorporating the positiveness of x°, P-BPDN is modified into the positive P-BPDN (PP-BPDN) problem

  min_{x,β} ‖x‖₁, subject to ‖y − (A + BΔ)x‖₂ ≤ ε, x ≥ 0, −r·1 ≤ β ≤ r·1,

where ≥ is an elementwise operation and 0, 1 are column vectors composed of zeros and ones, respectively, with proper dimensions. It is noted that the robustly stable signal recovery results in the present paper apply directly to the solution of PP-BPDN in this case. This subsection shows that the nonconvex PP-BPDN problem can be transformed into a convex one, and hence its optimal solution can be efficiently obtained. Denote p = Δx = β ∘ x. A new, convex problem (P1) is introduced as follows:

  (P1) min_{x,p} ‖x‖₁, subject to ‖y − Ψ[xᵀ, pᵀ]ᵀ‖₂ ≤ ε, x ≥ 0, −rx ≤ p ≤ rx.

Theorem 6: Problems PP-BPDN and (P1) are equivalent in the sense that, if (x*, β*) is an optimal solution to PP-BPDN, then (x*, p*) with p* = Δ*x* is an optimal solution to (P1), and, conversely, if (x*, p*) is an optimal solution to (P1), then (x*, β*) with

  β*ᵢ = p*ᵢ / x*ᵢ, if x*ᵢ > 0; β*ᵢ = 0, otherwise,

is an optimal solution to PP-BPDN.

Proof: We only prove the first part of Theorem 6, using contradiction. The second part follows similarly. Suppose that (x*, p*) with p* = Δ*x* is not an optimal solution to (P1). Then there exists (x′, p′) in the feasible domain of (P1) such that ‖x′‖₁ < ‖x*‖₁. Define β′ by β′ᵢ = p′ᵢ / x′ᵢ if x′ᵢ > 0, and β′ᵢ = 0 otherwise. It is easy to show that (x′, β′) is a feasible solution to PP-BPDN. By ‖x′‖₁ < ‖x*‖₁ we conclude that (x*, β*) is not an optimal solution to PP-BPDN, which leads to contradiction. ∎

Theorem 6 states that an optimal solution to PP-BPDN can be efficiently obtained by solving the convex problem (P1).
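A cvxpy sketch of (P1) follows; the function name and the small threshold used when mapping p back to β are ours. The point of the substitution p = β ∘ x is that the bilinear term BΔx becomes the linear term Bp.

```python
import numpy as np
import cvxpy as cp

# The convex program (P1) for positive signals: with p = beta o x, the
# bilinear term B diag(beta) x becomes the linear term B p.
def pp_bpdn(A, B, y, eps, r):
    n = A.shape[1]
    x, p = cp.Variable(n), cp.Variable(n)
    cp.Problem(cp.Minimize(cp.sum(x)),          # sum(x) = ||x||_1 since x >= 0
               [cp.norm(y - A @ x - B @ p, 2) <= eps,
                x >= 0, p <= r * x, p >= -r * x]).solve()
    xv, pv = x.value, p.value
    beta = np.where(xv > 1e-8, pv / np.maximum(xv, 1e-8), 0.0)  # Theorem 6 map
    return xv, beta
```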
If Φ (j) Φ, as j +, then for any v D there eists a sequence v (j) j with v(j) D (j), j,,, such that v (j) v, as j +. Lemma studies the variation of feasible domains D j, j,,, of a series of BPDN problems whose sensing matrices Φ (j), j,,, converge to Φ. It states that the sequence of the feasible domains also converges to D in the sense that for any point in D, there eists a sequence of points, each of which belongs to one D j, that converges to the point. To prove Lemma, we first show that it holds for any interior point of D by constructing such a sequence. Then we show that it also holds for a boundary point of D by that for any boundary point there eists a sequence of interior points of D that converges to it. The detailed proof is given in Appendi C. Lemma : An optimal solution to the BPDN problem in (4) satisfies that, if y, or y Φ, otherwise. Proof: It is trivial for the case where y. Consider the other case where y >. Note first that. We use contradiction to show that the equality y Φ holds. Suppose that y Φ <. Introduce f (θ) y θφ. Then f() >, and f() <. There eists θ, <θ <, such that f (θ ) since f (θ) is continuous on the interval [, ]. Hence, θ is a feasible solution to BPDN in (4). We conclude that is not optimal by θ <, which leads to contradiction. Lemma studies the location of an optimal solution to the BPDN problem. It states that the optimal solution locates at the origin if the origin is a feasible solution, or at the boundary of the feasible domain otherwise. This can be easily observed from Fig.. Based on Lemmas and, we have the following results for AA-P-BPDN.

Based on Lemmas 1 and 2, we have the following results for AA-P-BPDN.

Theorem 7: Any accumulation point (x*, β*) of the sequence {(x^(j), β^(j))}_{j≥1} is a stationary point of AA-P-BPDN in the sense that

  x* ∈ arg min_x ‖x‖₁, subject to ‖y − (A + BΔ*)x‖₂ ≤ ε,  (20)
  β* ∈ arg min_{β ∈ [−r,r]ⁿ} ‖y − (A + BΔ)x*‖₂,  (21)

with Δ* = diag(β*).

Theorem 8: An optimal solution (x*, β*) to P-BPDN in (10) is a stationary point of AA-P-BPDN.

Theorem 7 studies the property of the solution (x^(j), β^(j)) produced by AA-P-BPDN. It shows that (x^(j), β^(j)) is arbitrarily close to a stationary point of AA-P-BPDN as the iteration index j is large enough.⁴ Hence, the output of AA-P-BPDN can be considered a stationary point provided that an appropriate termination criterion is set. Theorem 8 tells us that an optimal solution to P-BPDN is a stationary point of AA-P-BPDN. So, it is possible for AA-P-BPDN to produce an optimal solution to P-BPDN. The proofs of Theorems 7 and 8 are provided in Appendix D and Appendix E, respectively.

⁴It is shown in the proof of Theorem 7 in Appendix D that the sequence {(x^(j), β^(j))}_{j≥1} is bounded. And it can be shown, for example using contradiction, that for a bounded sequence {a_j}_{j≥1} there exists an accumulation point of {a_j}_{j≥1} such that a_j is arbitrarily close to it as j is large enough.

C. Effectiveness of AA-P-BPDN

As reported in the last subsection, it is possible for AA-P-BPDN to produce an optimal solution to P-BPDN. But it is not easy to check the optimality of the output of AA-P-BPDN because of the nonconvexity of P-BPDN. Instead, we study the effectiveness of AA-P-BPDN in solving P-BPDN in this subsection, with the concept of effectiveness as defined in Subsection III-E. By Corollary 1, a good signal recovery x̂ of x° is not necessarily an optimal solution. It is required only that (x̂, β̂), where β̂ denotes the recovery of β°, be a feasible solution to P-BPDN and that ‖x̂‖₁ ≤ ‖x°‖₁ hold. As shown in the proof of Theorem 7 in Appendix D, (x^(j), β^(j)) is a feasible solution to P-BPDN for any j, and the sequence {‖x^(j)‖₁}_{j≥1} is monotone decreasing and converges. So, the effectiveness of AA-P-BPDN in solving P-BPDN can be assessed via numerical simulations by checking whether ‖x^AA‖₁ ≤ ‖x°‖₁ holds, with x^AA denoting the output of AA-P-BPDN. The effectiveness of AA-P-BPDN is verified in Section V via numerical simulations, where we observe that the inequality ‖x^AA‖₁ ≤ ‖x°‖₁ holds in all experiments.

V. NUMERICAL SIMULATIONS

This section demonstrates the robustly stable signal recovery results of SP-CS presented in this paper, as well as the effectiveness of AA-P-BPDN in solving P-BPDN in (10), via numerical simulations. AA-P-BPDN is implemented in Matlab with the problems in (18) and (19) solved using CVX [33]. AA-P-BPDN is terminated when ‖x^(j) − x^(j−1)‖₂ / ‖x^(j−1)‖₂ ≤ 10⁻⁶
The second one corresponds to the robust signal recovery of the perturbed CS as described in Subsection II-C and solves N- BPDN in (6) where E, o B o o is used though it is not available in practice. The last one refers to the other approach to SP-CS that seeks for the signal recovery by solving TPS- BPDN in (5) as discussed in Subsection III-E. The first eperiment studies the signal recovery error with respect to the noise level. We set the signal length n, sample size m 8, sparsity level k and perturbation parameter r.. The noise level varies from.5 to with interval.5. For each combination of (n, m, k, r, ), the signal recovery error, as well as β o recovery error (on the support of o ), is averaged over R 5 trials. In each trial, matrices A and B are generated from Gaussian distribution and each column of them has zero mean and unit norm after proper scaling. The sparse signal o is composed of unit spikes with random signs and locations. Entries of β o are uniformly distributed in [ r, r]. The noise e is zero mean Gaussian distributed and then scaled such that e. Using the same data, the four approaches, including O-BPDN, N-BPDN, TPS- BPDN and AA-P-BPDN for P-BPDN, are used to recover o respectively in each trial. The simulation results are shown in Fig.. It can be seen that both signal and β o recovery errors of AA-P-BPDN for P-BPDN in SP-CS are proportional to the noise, which is consistent with our robustly stable signal recovery result in the present paper. The error of N-BPDN grows linearly with the noise but a large error still ehibits in the noise free case. Ecept the ideal case of O-BPDN, our proposed P-BPDN has the smallest error. The second eperiment studies the effect of the structured perturbation. Eperiment settings are the same as those in the first eperiment ecept that we set (n, m, k, ) (, 8,,.5) and vary r {.5,.,, }. Fig. 3 presents our simulation results. A nearly constant error is obtained using O-BPDN in the standard CS since the perturbation is assumed to be known in O-BPDN. The error of AA-P-BPDN for P-BPDN in SP-CS slowly increases with the perturbation level and is quite close to that of O-BPDN for a moderate perturbation. Such a behavior is consistent with our analysis. Besides, it can be observed that the error of N- BPDN grows linearly with the perturbation level. Again, our proposed P-BPDN has the smallest error ecept O-BPDN.

Fig. 2. Signal and perturbation recovery errors with respect to the noise level ε, with r = 0.1. Both the signal and β° recovery errors of AA-P-BPDN for P-BPDN in SP-CS are proportional to ε.

Fig. 3. Signal and perturbation recovery errors with respect to the perturbation level in terms of r. The error of AA-P-BPDN for P-BPDN in SP-CS slowly increases with the perturbation level and is quite close to that of the ideal case of O-BPDN for a moderate perturbation.

The third experiment studies the variation of the recovery error with the number of measurements. We fix (n, k, r, ε) with r = 0.1 and ε = 0.1, and vary m from 30 to 100 in steps of 5. The simulation results are presented in Fig. 4. The signal recovery errors of all four approaches decrease as the number of measurements increases. Again, it is observed that O-BPDN of the ideal case achieves the best result, followed by our proposed P-BPDN. For example, to obtain a signal recovery error of 0.05, about 55 measurements are needed for O-BPDN, while the numbers are, respectively, 65 for AA-P-BPDN and 95 for TPS-BPDN. It is impossible for N-BPDN to achieve such a small error in our observation because of the existence of the perturbation.

Fig. 4. Signal and perturbation recovery errors with respect to the number of measurements, with r = 0.1 and ε = 0.1. AA-P-BPDN for P-BPDN in SP-CS has the best performance except for the ideal case of O-BPDN.

Another experiment result is shown in Fig. 5 for a compressible signal that is generated by taking a fixed sequence 0.8843·i^(−1.5), i = 1, …, n, randomly permuting it, and multiplying by a random sign sequence (the coefficient 0.8843 is chosen such that the compressible signal has the same ℓ2 norm as the sparse signals in the previous experiments). It is sought to be recovered from m = 70 noisy measurements with ε = 0.1 and r = 0.1. The signal recovery error of AA-P-BPDN for P-BPDN in SP-CS is very close to those of O-BPDN, N-BPDN and TPS-BPDN in this example (the recovery of TPS-BPDN is omitted in Fig. 5).

Fig. 5. Recovery of a compressible signal from noisy measurements with r = 0.1 and ε = 0.1. The performance of AA-P-BPDN for P-BPDN in SP-CS is very close to that of the ideal O-BPDN. Black circles: original signal; red stars: recoveries.

For the special positive signal case, an optimal solution to PP-BPDN can be efficiently obtained. An experiment result is shown in Fig. 6, where a sparse signal of length n, composed of k positive unit spikes, is exactly recovered (up to numerical precision) from m = 50 noise free measurements with r = 0.1 by solving (P1).

Fig. 6. Exact recovery of a positive sparse signal from noise free measurements. PP-BPDN is solved by solving (P1). β° and its recovery are shown only on the support of x°. Black circles: original signal and β°; red stars: recoveries.

VI. CONCLUSION

This paper studied the CS problem in the presence of measurement noise and a structured matrix perturbation. A concept named robust stability for signal recovery was introduced.

It was shown that robust stability can be achieved for a sparse signal by solving an ℓ1 minimization problem, P-BPDN, under mild conditions. In the presence of measurement noise, the recovery error is at most proportional to the noise level, and the recovery is exact in the special noise free case. A general result for compressible signals was also reported. An alternating algorithm named AA-P-BPDN was proposed to solve the nonconvex P-BPDN problem, and numerical simulations were carried out, verifying our theoretical analysis. A special positive signal case was also studied, and a computationally efficient approach was proposed to find an optimal solution to the nonconvex P-BPDN problem. In this paper it was shown that the D-RIP condition for guaranteed robust stability in SP-CS can be satisfied for mutually independent random perturbation and nominal sensing matrices. One future work is to study practical situations where the perturbation and the nominal sensing matrix may be correlated.

APPENDIX A
PROOF OF THEOREM 3

Denote z = [xᵀ, (β ∘ x)ᵀ]ᵀ and similarly define z° and z*. Then the problem in (9) can be rewritten as

  min_{x ∈ ℝⁿ, β ∈ [−r,r]ⁿ} ‖x‖₀, subject to y = Ψz.  (22)

Let δ̃_k = δ̃_k(Ψ) hereafter for brevity. First note that x° is k-sparse and z° is k-D-sparse. Since (x*, β*) is a solution to the problem in (22), we have ‖x*‖₀ ≤ ‖x°‖₀ ≤ k and, hence, z* is k-D-sparse. By y = Ψz° = Ψz* we obtain Ψ(z° − z*) = 0 and thus z° = z*, since z° − z* is 2k-D-sparse and δ̃₂ₖ ≤ δ̃₄ₖ < 1. We complete the proof by observing that z° = z* implies x° = x* and β° ∘ x° = β* ∘ x*. ∎

APPENDIX B
PROOFS OF THEOREMS 4 AND 5

We only present the proof of Theorem 5, since Theorem 4 is a special case of Theorem 5. We first show the following lemma.

Lemma 3: We have |⟨Ψv, Ψv′⟩| ≤ δ̃_{k+k′}‖v‖₂‖v′‖₂ for all k-D-sparse v and k′-D-sparse v′ supported on disjoint subsets.

Proof: Without loss of generality, assume that v and v′ are unit vectors with disjoint supports as above. Then, by the definition of the D-RIP and ‖v ± v′‖₂² = ‖v‖₂² + ‖v′‖₂² = 2, we have

  2(1 − δ̃_{k+k′}) ≤ ‖Ψv ± Ψv′‖₂² ≤ 2(1 + δ̃_{k+k′}),

and thus

  |⟨Ψv, Ψv′⟩| = |‖Ψv + Ψv′‖₂² − ‖Ψv − Ψv′‖₂²| / 4 ≤ δ̃_{k+k′},

which completes the proof. ∎

Using the notations z, z°, z* and δ̃_k of Appendix A, P-BPDN in (10) can be rewritten as

  min_{x ∈ ℝⁿ, β ∈ [−r,r]ⁿ} ‖x‖₁, subject to ‖y − Ψz‖₂ ≤ ε.  (23)

Let h = x* − x° and decompose h into a sum of k-sparse vectors h_{T₀}, h_{T₁}, h_{T₂}, …, where T₀ denotes the set of indices of the k largest entries (in absolute value) of x°, T₁ that of the k largest entries of h_{T₀ᶜ} (with T₀ᶜ the complementary set of T₀), T₂ that of the next k largest entries of h_{T₀ᶜ}, and so on. With a slight misuse of notation, write z*_{T_j} = [(x*_{T_j})ᵀ, ((β* ∘ x*)_{T_j})ᵀ]ᵀ, j = 0, 1, 2, …, and similarly define z°_{T_j}. Let f = z* − z° and f_{T_j} = z*_{T_j} − z°_{T_j} for j = 0, 1, 2, …. For brevity we write T = T₀ ∪ T₁. To bound ‖h‖₂, in the first step we show that ‖h_{Tᶜ}‖₂ is essentially bounded by ‖h_T‖₂, and in the second step we show that ‖h_T‖₂ is sufficiently small.

The first step follows the proof of Theorem 1.3 in [4]. Note that

  ‖h_{T_{j+1}}‖₂ ≤ k^(1/2)‖h_{T_{j+1}}‖_∞ ≤ k^(−1/2)‖h_{T_j}‖₁, j ≥ 1,  (24)

and thus

  Σ_{j≥2} ‖h_{T_j}‖₂ ≤ k^(−1/2) Σ_{j≥1} ‖h_{T_j}‖₁ = k^(−1/2)‖h_{T₀ᶜ}‖₁.  (25)

Since x* = x° + h is an optimal solution, we have

  ‖x°‖₁ ≥ ‖x° + h‖₁ = Σ_{i∈T₀} |x°ᵢ + hᵢ| + Σ_{i∈T₀ᶜ} |x°ᵢ + hᵢ| ≥ ‖x°_{T₀}‖₁ − ‖h_{T₀}‖₁ + ‖h_{T₀ᶜ}‖₁ − ‖x°_{T₀ᶜ}‖₁,  (26)

and thus

  ‖h_{T₀ᶜ}‖₁ ≤ ‖h_{T₀}‖₁ + 2‖x°_{T₀ᶜ}‖₁.  (27)

By (25), (27) and the inequality ‖h_{T₀}‖₁ ≤ k^(1/2)‖h_{T₀}‖₂, we have

  Σ_{j≥2} ‖h_{T_j}‖₂ ≤ ‖h_{T₀}‖₂ + 2k^(−1/2)e₀  (28)

with e₀ = ‖x°_{T₀ᶜ}‖₁ = ‖x° − x_k‖₁.

In the second step, we bound ‖h_T‖₂ by utilizing its relationship with ‖f_T‖₂. Note that f_{T_j} for each j = 0, 1, 2, … is k-D-sparse. By Ψf_T = Ψf − Σ_{j≥2} Ψf_{T_j}, we have

  ‖Ψf_T‖₂² = ⟨Ψf_T, Ψf⟩ − Σ_{j≥2} [⟨Ψf_{T₀}, Ψf_{T_j}⟩ + ⟨Ψf_{T₁}, Ψf_{T_j}⟩]
  ≤ ‖Ψf_T‖₂‖Ψf‖₂ + δ̃₄ₖ (‖f_{T₀}‖₂ + ‖f_{T₁}‖₂) Σ_{j≥2} ‖f_{T_j}‖₂  (29)
  ≤ ‖Ψf_T‖₂‖Ψf‖₂ + √2 · δ̃₄ₖ ‖f_T‖₂ Σ_{j≥2} ‖f_{T_j}‖₂.  (30)

We used Lemma 3 in (29). In (30), we used the inequality ‖f_{T₀}‖₂ + ‖f_{T₁}‖₂ ≤ √2‖f_T‖₂; we also note that

  ‖Ψf‖₂ = ‖Ψ(z* − z°)‖₂ ≤ ‖y − Ψz*‖₂ + ‖y − Ψz°‖₂ ≤ 2ε.  (31)

By noting that β°, β* ∈ [−r, r]ⁿ and

  f = [hᵀ, (β* ∘ h + (β* − β°) ∘ x°)ᵀ]ᵀ,  (32)

we have

  ‖f_{T_j}‖₂ ≤ √(1+r²)‖h_{T_j}‖₂ + 2r‖x°_{T_j}‖₂, j = 0, 1, 2, ….  (33)

Meanwhile,

  Σ_{j≥2} ‖x°_{T_j}‖₂ ≤ Σ_{j≥2} ‖x°_{T_j}‖₁ ≤ ‖x°_{T₀ᶜ}‖₁ = e₀.  (34)

Applying the D-RIP and (30), (31), (33), and then (28) and (34), together with ‖h_{T₀}‖₂ ≤ ‖h_T‖₂ ≤ ‖f_T‖₂, it follows that

  (1 − δ̃₄ₖ)‖f_T‖₂ ≤ 2√(1 + δ̃₄ₖ)ε + √(2(1+r²)) · δ̃₄ₖ ‖f_T‖₂ + 2δ̃₄ₖ (√(2(1+r²)) · k^(−1/2) + √2 · r) e₀.

Hence, with a = 1 − (√(2(1+r²)) + 1)δ̃₄ₖ > 0, we get the bound

  ‖h_T‖₂ ≤ ‖f_T‖₂ ≤ [2√(1 + δ̃₄ₖ)ε + 2δ̃₄ₖ (√(2(1+r²)) · k^(−1/2) + √2 · r) e₀] / a,  (35)

which together with (28) gives

  ‖h‖₂ ≤ ‖h_T‖₂ + Σ_{j≥2}‖h_{T_j}‖₂ ≤ 2‖h_T‖₂ + 2k^(−1/2)e₀ ≤ C̄₁k^(−1/2)e₀ + C̄₂e₀ + C₁ε

and concludes (13).

Next, write u = (β* − β°) ∘ x° and note that (β* − β°) ∘ x_k = u_{T₀}. By (32), [0ᵀ, uᵀ]ᵀ = f − [hᵀ, (β* ∘ h)ᵀ]ᵀ, and u_T is 2k-sparse, so that, with δ₂ₖ(B) ≤ δ̃₄ₖ(Ψ) and b = √(1 − δ̃₄ₖ(Ψ)),

  b‖u_{T₀}‖₂ ≤ √(1 − δ₂ₖ(B)) ‖u_T‖₂ ≤ ‖Bu_T‖₂ = ‖Ψ[0ᵀ, uᵀ]ᵀ − Bu_{Tᶜ}‖₂
  ≤ ‖Ψf‖₂ + ‖Ψ‖₂‖[hᵀ, (β* ∘ h)ᵀ]ᵀ‖₂ + ‖B‖₂‖u_{Tᶜ}‖₂
  ≤ 2ε + √(1+r²)‖Ψ‖₂‖h‖₂ + 2r‖Ψ‖₂e₀,

where we used ‖u_{Tᶜ}‖₂ ≤ 2r‖x°_{Tᶜ}‖₂ ≤ 2r·e₀ and ‖B‖₂ ≤ ‖Ψ‖₂. Substituting the bound on ‖h‖₂ just obtained yields

  ‖(β* − β°) ∘ x_k‖₂ ≤ [2 + √(1+r²)‖Ψ‖₂C₁]ε/b + √(1+r²)‖Ψ‖₂C̄₁k^(−1/2)e₀/b + [2r + √(1+r²)C̄₂]‖Ψ‖₂e₀/b
  = C̄₃k^(−1/2)‖x° − x_k‖₁ + C̄₄‖x° − x_k‖₁ + C₂ε,

which concludes (14). We complete the proof by noting that the above results make sense provided a > 0, i.e., δ̃₄ₖ(Ψ) < 1/(√(2(1+r²)) + 1). ∎

APPENDIX C
PROOF OF LEMMA 1

We first consider the case where v is an interior point of D*, i.e., ‖y − Φ*v‖₂ < ε. Let η = ε − ‖y − Φ*v‖₂ > 0. Construct a sequence {v^(j)}_{j≥1} such that ‖v^(j) − v‖₂ ≤ 1/j. It is obvious that v^(j) → v. We next show that v^(j) ∈ D_j as j is large enough. By Φ^(j) → Φ*, v^(j) → v and the boundedness of the sequence {v^(j)}_{j≥1}, there exists a positive integer j₀ such that, for j ≥ j₀,

  ‖(Φ* − Φ^(j))v^(j)‖₂ ≤ η/2, ‖Φ*(v − v^(j))‖₂ ≤ η/2.

Hence, for j ≥ j₀,

  ‖y − Φ^(j)v^(j)‖₂ = ‖(y − Φ*v) + (Φ* − Φ^(j))v^(j) + Φ*(v − v^(j))‖₂
  ≤ ‖y − Φ*v‖₂ + ‖(Φ* − Φ^(j))v^(j)‖₂ + ‖Φ*(v − v^(j))‖₂
  ≤ (ε − η) + η/2 + η/2 = ε,

from which we have v^(j) ∈ D_j for j ≥ j₀. By re-selecting arbitrary v^(j) ∈ D_j for j < j₀, we obtain the conclusion.

For the other case where v is a boundary point of D*, there exists a sequence {v₍ₗ₎}_{l≥1} ⊂ D*, with all v₍ₗ₎ being interior points of D*, such that v₍ₗ₎ → v as l → +∞. According to the first part of the proof, for each l = 1, 2, …, there exists a sequence {v^(j)₍ₗ₎}_{j≥1} with v^(j)₍ₗ₎ ∈ D_j, j = 1, 2, …, such that v^(j)₍ₗ₎ → v₍ₗ₎ as j → +∞. The diagonal sequence {v^(j)₍ⱼ₎}_{j≥1} is what we expected, since

  ‖v^(j)₍ⱼ₎ − v‖₂ ≤ ‖v^(j)₍ⱼ₎ − v₍ⱼ₎‖₂ + ‖v₍ⱼ₎ − v‖₂ → 0, as j → +∞. ∎

APPENDIX D
PROOF OF THEOREM 7

We first show the existence of an accumulation point. It follows from the inequality

  ‖y − (A + BΔ^(j))x^(j)‖₂ ≤ ‖y − (A + BΔ^(j−1))x^(j)‖₂ ≤ ε

that x^(j) is a feasible solution to the problem in (18), and thus ‖x^(j+1)‖₁ ≤ ‖x^(j)‖₁ for j = 1, 2, …. Then we have ‖x^(j)‖₁ ≤ ‖x^(1)‖₁ ≤ ‖A†y‖₁ for j = 1, 2, …, since A†y is a feasible solution to the problem in (18) at the first iteration, with the superscript † denoting the pseudo-inverse operator. This, together with β^(j) ∈ [−r, r]ⁿ, j = 1, 2, …, implies that the sequence {(x^(j), β^(j))}_{j≥1} is bounded. Thus, there exists an accumulation point (x*, β*) of {(x^(j), β^(j))}_{j≥1}.

For the accumulation point (x*, β*) there exists a subsequence {(x^(j_l), β^(j_l))}_{l≥1} of {(x^(j), β^(j))}_{j≥1} such that (x^(j_l), β^(j_l)) → (x*, β*) as l → +∞. By (19), we have, for all β ∈ [−r, r]ⁿ,

  ‖y − (A + BΔ^(j_l))x^(j_l)‖₂ ≤ ‖y − (A + BΔ)x^(j_l)‖₂,

at both sides of which, by taking l → +∞, we have, for all β ∈ [−r, r]ⁿ,

  ‖y − (A + BΔ*)x*‖₂ ≤ ‖y − (A + BΔ)x*‖₂,

which concludes (21).

For (20), we first point out that ‖x^(j)‖₁ → ‖x*‖₁ as j → +∞, since {‖x^(j)‖₁}_{j≥1} is decreasing and ‖x*‖₁ is one of its accumulation points. As in Lemma 1, let D_j = {x : ‖y − (A + BΔ^(j))x‖₂ ≤ ε} and D* = {x : ‖y − (A + BΔ*)x‖₂ ≤ ε}. By A + BΔ^(j_l) → A + BΔ* as l → +∞, and Lemma 1, for any x ∈ D* there exists a sequence {x₍ₗ₎}_{l≥1} with x₍ₗ₎ ∈ D_{j_l}, l = 1, 2, …, such that x₍ₗ₎ → x as l → +∞. By (18), we have, for l = 1, 2, …,

  ‖x^(j_l + 1)‖₁ ≤ ‖x₍ₗ₎‖₁,

at both sides of which, by taking l → +∞, we have

  ‖x*‖₁ ≤ ‖x‖₁  (36)

since ‖x^(j)‖₁ → ‖x*‖₁ as j → +∞ and x₍ₗ₎ → x as l → +∞. Finally, (20) is concluded since (36) holds for arbitrary x ∈ D*. ∎

APPENDIX E
PROOF OF THEOREM 8

We need to show that an optimal solution (x*, β*) satisfies (20) and (21). It is obvious for (20). For (21), we discuss two cases based on Lemma 2. If ‖y‖₂ ≤ ε, then x* = 0 and, hence, (21) holds for any β* ∈ [−r, r]ⁿ. If ‖y‖₂ > ε, then ‖y − (A + BΔ*)x*‖₂ = ε holds by (20) and Lemma 2. Next we use contradiction to show that (21) holds in this case. Suppose that (21) does not hold when ‖y‖₂ > ε. That is, there exists β′ ∈ [−r, r]ⁿ such that

  ‖y − (A + BΔ′)x*‖₂ < ‖y − (A + BΔ*)x*‖₂ = ε

holds with Δ′ = diag(β′). Then by Lemma 2 we see that x* is a feasible but not optimal solution to the problem

  min_x ‖x‖₁, subject to ‖y − (A + BΔ′)x‖₂ ≤ ε.  (37)

Hence, ‖x′‖₁ < ‖x*‖₁ holds for an optimal solution x′ to the problem in (37). Meanwhile, (x′, β′) is a feasible solution to the P-BPDN problem in (10). Thus (x*, β*) is not an optimal solution to the P-BPDN problem in (10) by ‖x′‖₁ < ‖x*‖₁, which leads to contradiction. ∎

REFERENCES

[1] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[2] ——, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[3] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] E. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
[5] T. Blumensath and M. Davies, "Iterative hard thresholding for compressed sensing," Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265–274, 2009.
[6] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[7] D. Donoho, Y. Tsaig, I. Drori, and J. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," available online at idrori/stomp.pdf, 2006.
[8] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[9] T. Blumensath and M. Davies, "Compressed sensing and source separation," in International Conference on Independent Component Analysis and Signal Separation. Springer, 2007.
[10] T. Xu and W. Wang, "A compressed sensing approach for underdetermined blind audio source separation with sparse representation," in Statistical Signal Processing, 15th Workshop on. IEEE, 2009.
[11] J. Zheng and M. Kaveh, "Directions-of-arrival estimation using a sparse spatial spectrum model with uncertainty," in Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEE, 2011.
[12] Z. Yang, L. Xie, and C. Zhang, "Off-grid direction of arrival estimation using sparse Bayesian inference," arXiv preprint, 2011.
[13] M. Herman and T. Strohmer, "High-resolution radar via compressed sensing," IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2275–2284, 2009.
[14] L. Zhang, M. Xing, C. Qiu, J. Li, and Z. Bao, "Achieving higher resolution ISAR imaging with limited pulses via compressed sampling," IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 3, pp. 567–571, 2009.
[15] M. Herman and T. Strohmer, "General deviants: An analysis of perturbations in compressed sensing," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 342–349, 2010.
[16] M. Herman and D. Needell, "Mixed operators in compressed sensing," in Information Sciences and Systems (CISS), 44th Annual Conference on. IEEE, 2010, pp. 1–6.
[17] Y. Chi, L. Scharf, A. Pezeshki, and A. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2182–2195, 2011.
[18] D. Chae, P. Sadeghi, and R. Kennedy, "Effects of basis-mismatch in compressive sampling of continuous sinusoidal signals," in Future Computer and Communication (ICFCC), 2nd International Conference on, vol. 2. IEEE, 2010.
[19] H. Zhu, G. Leus, and G. Giannakis, "Sparsity-cognizant total least-squares for perturbed compressive sampling," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2002–2016, 2011.
[20] R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, p. 035020, 2008.
[21] R. Saab, R. Chartrand, and O. Yilmaz, "Stable sparse approximations via nonconvex optimization," in Acoustics, Speech and Signal Processing, IEEE International Conference on. IEEE, 2008.
[22] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, vol. 3, 2006, pp. 1433–1452.
[23] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.
[24] S. Foucart and M. Lai, "Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.


More information

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang, Ping i Department of Statistics and Biostatistics arxiv:40.505v [stat.me] 9 Oct 04 Department of Computer Science Rutgers University

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

More information

A new method on deterministic construction of the measurement matrix in compressed sensing

A new method on deterministic construction of the measurement matrix in compressed sensing A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central

More information

Greedy Sparsity-Constrained Optimization

Greedy Sparsity-Constrained Optimization Greedy Sparsity-Constrained Optimization Sohail Bahmani, Petros Boufounos, and Bhiksha Raj 3 sbahmani@andrew.cmu.edu petrosb@merl.com 3 bhiksha@cs.cmu.edu Department of Electrical and Computer Engineering,

More information

ORTHOGONAL matching pursuit (OMP) is the canonical

ORTHOGONAL matching pursuit (OMP) is the canonical IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

A New Estimate of Restricted Isometry Constants for Sparse Solutions

A New Estimate of Restricted Isometry Constants for Sparse Solutions A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist

More information

Noisy Signal Recovery via Iterative Reweighted L1-Minimization

Noisy Signal Recovery via Iterative Reweighted L1-Minimization Noisy Signal Recovery via Iterative Reweighted L1-Minimization Deanna Needell UC Davis / Stanford University Asilomar SSC, November 2009 Problem Background Setup 1 Suppose x is an unknown signal in R d.

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

The Analysis Cosparse Model for Signals and Images

The Analysis Cosparse Model for Signals and Images The Analysis Cosparse Model for Signals and Images Raja Giryes Computer Science Department, Technion. The research leading to these results has received funding from the European Research Council under

More information

Stochastic geometry and random matrix theory in CS

Stochastic geometry and random matrix theory in CS Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder

More information

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,

More information

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3740 Ming-Jun Lai Department of Mathematics

More information

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE 5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years

More information

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property : An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

EUSIPCO

EUSIPCO EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

Sensing systems limited by constraints: physical size, time, cost, energy

Sensing systems limited by constraints: physical size, time, cost, energy Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original

More information

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals On the l 1 -Norm Invariant Convex -Sparse Decomposition of Signals arxiv:1305.6021v2 [cs.it] 11 Nov 2013 Guangwu Xu and Zhiqiang Xu Abstract Inspired by an interesting idea of Cai and Zhang, we formulate

More information

AN INTRODUCTION TO COMPRESSIVE SENSING

AN INTRODUCTION TO COMPRESSIVE SENSING AN INTRODUCTION TO COMPRESSIVE SENSING Rodrigo B. Platte School of Mathematical and Statistical Sciences APM/EEE598 Reverse Engineering of Complex Dynamical Networks OUTLINE 1 INTRODUCTION 2 INCOHERENCE

More information

Randomness-in-Structured Ensembles for Compressed Sensing of Images

Randomness-in-Structured Ensembles for Compressed Sensing of Images Randomness-in-Structured Ensembles for Compressed Sensing of Images Abdolreza Abdolhosseini Moghadam Dep. of Electrical and Computer Engineering Michigan State University Email: abdolhos@msu.edu Hayder

More information

Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP)

Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP) 1 Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP) Wei Lu and Namrata Vaswani Department of Electrical and Computer Engineering, Iowa State University,

More information

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,

More information

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France

Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse

More information

Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach

Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach Asilomar 2011 Jason T. Parker (AFRL/RYAP) Philip Schniter (OSU) Volkan Cevher (EPFL) Problem Statement Traditional

More information

Fast 2-D Direction of Arrival Estimation using Two-Stage Gridless Compressive Sensing

Fast 2-D Direction of Arrival Estimation using Two-Stage Gridless Compressive Sensing Fast 2-D Direction of Arrival Estimation using Two-Stage Gridless Compressive Sensing Mert Kalfa ASELSAN Research Center ASELSAN Inc. Ankara, TR-06370, Turkey Email: mkalfa@aselsan.com.tr H. Emre Güven

More information

AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE

AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE Ana B. Ramirez, Rafael E. Carrillo, Gonzalo Arce, Kenneth E. Barner and Brian Sadler Universidad Industrial de Santander,

More information

Sparse Algorithms are not Stable: A No-free-lunch Theorem

Sparse Algorithms are not Stable: A No-free-lunch Theorem Sparse Algorithms are not Stable: A No-free-lunch Theorem Huan Xu Shie Mannor Constantine Caramanis Abstract We consider two widely used notions in machine learning, namely: sparsity algorithmic stability.

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

Approximate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery

Approximate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery Approimate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery arxiv:1606.00901v1 [cs.it] Jun 016 Shuai Huang, Trac D. Tran Department of Electrical and Computer Engineering Johns

More information

Elaine T. Hale, Wotao Yin, Yin Zhang

Elaine T. Hale, Wotao Yin, Yin Zhang , Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2

More information

Recovery of Compressible Signals in Unions of Subspaces

Recovery of Compressible Signals in Unions of Subspaces 1 Recovery of Compressible Signals in Unions of Subspaces Marco F. Duarte, Chinmay Hegde, Volkan Cevher, and Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University Abstract

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise Srdjan Stanković, Irena Orović and Moeness Amin 1 Abstract- A modification of standard

More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 13, JULY 1, 2013 3279 Compressed Sensing Affine Rank Minimization Under Restricted Isometry T. Tony Cai Anru Zhang Abstract This paper establishes new

More information

Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming

Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming Henrik Ohlsson, Allen Y. Yang Roy Dong S. Shankar Sastry Department of Electrical Engineering and Computer Sciences,

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

SIGNALS with sparse representations can be recovered

SIGNALS with sparse representations can be recovered IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1497 Cramér Rao Bound for Sparse Signals Fitting the Low-Rank Model with Small Number of Parameters Mahdi Shaghaghi, Student Member, IEEE,

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Uniqueness Conditions for A Class of l 0 -Minimization Problems

Uniqueness Conditions for A Class of l 0 -Minimization Problems Uniqueness Conditions for A Class of l 0 -Minimization Problems Chunlei Xu and Yun-Bin Zhao October, 03, Revised January 04 Abstract. We consider a class of l 0 -minimization problems, which is to search

More information

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems 1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Research Article Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality Property

Research Article Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality Property Hindawi Mathematical Problems in Engineering Volume 17, Article ID 493791, 7 pages https://doiorg/11155/17/493791 Research Article Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality

More information

COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION

COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION By Mazin Abdulrasool Hameed A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for

More information

Observability of a Linear System Under Sparsity Constraints

Observability of a Linear System Under Sparsity Constraints 2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear

More information

Research Collection. Compressive sensing a summary of reconstruction algorithms. Master Thesis. ETH Library. Author(s): Pope, Graeme

Research Collection. Compressive sensing a summary of reconstruction algorithms. Master Thesis. ETH Library. Author(s): Pope, Graeme Research Collection Master Thesis Compressive sensing a summary of reconstruction algorithms Authors): Pope, Graeme Publication Date: 009 Permanent Link: https://doi.org/10.399/ethz-a-0057637 Rights /

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

Analysis of Greedy Algorithms

Analysis of Greedy Algorithms Analysis of Greedy Algorithms Jiahui Shen Florida State University Oct.26th Outline Introduction Regularity condition Analysis on orthogonal matching pursuit Analysis on forward-backward greedy algorithm

More information

Exponential decay of reconstruction error from binary measurements of sparse signals

Exponential decay of reconstruction error from binary measurements of sparse signals Exponential decay of reconstruction error from binary measurements of sparse signals Deanna Needell Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters Outline Introduction Mathematical Formulation

More information

On the Observability of Linear Systems from Random, Compressive Measurements

On the Observability of Linear Systems from Random, Compressive Measurements On the Observability of Linear Systems from Random, Compressive Measurements Michael B Wakin, Borhan M Sanandaji, and Tyrone L Vincent Abstract Recovering or estimating the initial state of a highdimensional

More information

Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity

Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity Wei Dai and Olgica Milenkovic Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign

More information

Iterative Hard Thresholding for Compressed Sensing

Iterative Hard Thresholding for Compressed Sensing Iterative Hard Thresholding for Compressed Sensing Thomas lumensath and Mike E. Davies 1 Abstract arxiv:0805.0510v1 [cs.it] 5 May 2008 Compressed sensing is a technique to sample compressible signals below

More information

arxiv: v2 [cs.lg] 6 May 2017

arxiv: v2 [cs.lg] 6 May 2017 arxiv:170107895v [cslg] 6 May 017 Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity Abstract Adarsh Barik Krannert School of Management Purdue University West Lafayette,

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the

More information

Robust Sparse Recovery via Non-Convex Optimization

Robust Sparse Recovery via Non-Convex Optimization Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn

More information

Interpolation via weighted l 1 minimization

Interpolation via weighted l 1 minimization Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C

More information

Z Algorithmic Superpower Randomization October 15th, Lecture 12

Z Algorithmic Superpower Randomization October 15th, Lecture 12 15.859-Z Algorithmic Superpower Randomization October 15th, 014 Lecture 1 Lecturer: Bernhard Haeupler Scribe: Goran Žužić Today s lecture is about finding sparse solutions to linear systems. The problem

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information

Sparse signals recovered by non-convex penalty in quasi-linear systems

Sparse signals recovered by non-convex penalty in quasi-linear systems Cui et al. Journal of Inequalities and Applications 018) 018:59 https://doi.org/10.1186/s13660-018-165-8 R E S E A R C H Open Access Sparse signals recovered by non-conve penalty in quasi-linear systems

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

Restricted Strong Convexity Implies Weak Submodularity

Restricted Strong Convexity Implies Weak Submodularity Restricted Strong Convexity Implies Weak Submodularity Ethan R. Elenberg Rajiv Khanna Alexandros G. Dimakis Department of Electrical and Computer Engineering The University of Texas at Austin {elenberg,rajivak}@utexas.edu

More information

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar

More information