Shifting Inequality and Recovery of Sparse Signals


T. Tony Cai, Lie Wang and Guangwu Xu

(Affiliations: Department of Statistics, The Wharton School, University of Pennsylvania, PA, USA; tcai@wharton.upenn.edu; research supported in part by NSF Grant DMS. Department of Mathematics, Massachusetts Institute of Technology; liewang@math.mit.edu. Department of EE & CS, University of Wisconsin-Milwaukee, WI, USA; gxu4uwm@uwm.edu.)

Abstract

In this paper we present a concise and coherent analysis of the constrained $\ell_1$ minimization method for stable recovery of high-dimensional sparse signals, in both the noiseless and the noisy case. The analysis is surprisingly simple and elementary, yet leads to strong results. In particular, it is shown that the sparse recovery problem can be solved via $\ell_1$ minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the $\ell_2$ norm of a subsequence in terms of the $\ell_1$ norm of another subsequence by shifting the elements to the upper end.

Keywords: $\ell_1$ minimization, restricted isometry property, Shifting Inequality, sparse recovery.

1 Introduction

Reconstructing a high-dimensional sparse signal based on a small number of measurements, possibly corrupted by noise, is a fundamental problem in signal processing. This and other related problems in compressed sensing have attracted much recent interest in a number of fields including applied mathematics, electrical engineering, and statistics. Specifically, one considers the following model:
$$ y = \Phi\beta + z \qquad (1) $$
where the matrix $\Phi \in \mathbb{R}^{n\times p}$ (with $n \ll p$) and $z \in \mathbb{R}^n$ is a vector of measurement errors. The goal is to reconstruct the unknown vector $\beta \in \mathbb{R}^p$. In this paper, our main interest is the case where the signal $\beta$ is sparse and the noise $z$ is Gaussian, $z \sim N(0, \sigma^2 I_n)$.

We shall approach the problem by considering first the noiseless case and then the bounded noise case, both of significant interest in their own right. The results for the Gaussian case will then follow easily.

It is now well understood that the method of $\ell_1$ minimization provides an effective way for reconstructing a sparse signal in many settings. The $\ell_1$ minimization method in this context is
$$ (P_B) \qquad \min_{\gamma\in\mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad y - \Phi\gamma \in B \qquad (2) $$
where $B$ is a bounded set determined by the noise structure. For example, $B = \{0\}$ in the noiseless case, and $B$ is the feasible set of the noise in the case of bounded error.

The sparse recovery problem has now been well studied in the framework of the Restricted Isometry Property (RIP) introduced by Candes and Tao [7]. A vector $v = (v_i) \in \mathbb{R}^p$ is $k$-sparse if $|\mathrm{supp}(v)| \le k$, where $\mathrm{supp}(v) = \{i : v_i \neq 0\}$ is the support of $v$. For an $n\times p$ matrix $\Phi$ and an integer $k$, $1 \le k \le p$, the $k$-restricted isometry constant $\delta_k(\Phi)$ is the smallest constant such that
$$ (1-\delta_k(\Phi))\,\|c\|_2^2 \le \|\Phi c\|_2^2 \le (1+\delta_k(\Phi))\,\|c\|_2^2 \qquad (3) $$
for every $k$-sparse vector $c$. If $k + k' \le p$, the $(k,k')$-restricted orthogonality constant $\theta_{k,k'}(\Phi)$ is the smallest number that satisfies
$$ |\langle \Phi c, \Phi c'\rangle| \le \theta_{k,k'}(\Phi)\,\|c\|_2\,\|c'\|_2 \qquad (4) $$
for all $c$ and $c'$ such that $c$ and $c'$ are $k$-sparse and $k'$-sparse respectively, and have disjoint supports. For notational simplicity we shall write $\delta_k$ for $\delta_k(\Phi)$ and $\theta_{k,k'}$ for $\theta_{k,k'}(\Phi)$ hereafter.

It has been shown that $\ell_1$ minimization can recover a sparse signal with a small or zero error under various conditions on $\delta_k$ and $\theta_{k,k'}$. See, for example, Candes and Tao [7, 8] and Candes, Romberg and Tao [6]. These conditions essentially require that every set of columns of $\Phi$ with certain cardinality approximately behaves like an orthonormal system. For example, the condition $\delta_k + \theta_{k,k} + \theta_{k,2k} < 1$ was used in Candes and Tao [7], $\delta_{3k} + 3\delta_{4k} < 2$ in Candes, Romberg and Tao [6], and $\delta_{2k} + \theta_{k,2k} < 1$ in Candes and Tao [8]. Simple conditions involving only $\delta$ have also been used in the literature on sparse recovery; for example, $\delta_{2k} < 1$ was used in Cohen, Dahmen and DeVore [9], and $\delta_{2k} < \sqrt{2}-1$ was used in Candes [5]. In a recent paper, Cai, Xu and Zhang [4] sharpened the previous results by showing that stable recovery can be achieved under the condition $\delta_{1.5k} + \theta_{k,1.5k} < 1$, or a stronger but simpler condition $\delta_{1.75k} < \sqrt{2}-1$.
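For concreteness, the constants in definitions (3) and (4) can be computed by brute force for toy-sized matrices (the enumeration over supports is exponential, which is exactly the verification difficulty discussed later in the paper). A minimal sketch, assuming numpy; the helper names are ours, not the paper's:

```python
import itertools
import numpy as np

def delta_k(Phi, k):
    """Brute-force k-restricted isometry constant (definition (3)): the largest
    deviation from 1 of the eigenvalues of Phi_T' Phi_T over all supports |T| = k."""
    worst = 0.0
    for T in itertools.combinations(range(Phi.shape[1]), k):
        sub = Phi[:, list(T)]
        eig = np.linalg.eigvalsh(sub.T @ sub)
        worst = max(worst, abs(eig[0] - 1), abs(eig[-1] - 1))
    return worst

def theta(Phi, k1, k2):
    """Brute-force (k1,k2)-restricted orthogonality constant (definition (4)):
    the largest spectral norm of Phi_T1' Phi_T2 over disjoint supports."""
    worst = 0.0
    for T1 in itertools.combinations(range(Phi.shape[1]), k1):
        rest = [j for j in range(Phi.shape[1]) if j not in T1]
        for T2 in itertools.combinations(rest, k2):
            M = Phi[:, list(T1)].T @ Phi[:, list(T2)]
            worst = max(worst, np.linalg.norm(M, 2))
    return worst

rng = np.random.default_rng(0)
Phi = rng.standard_normal((12, 18)) / np.sqrt(12)   # toy Gaussian design, columns roughly unit norm
print("delta_4   =", delta_k(Phi, 4))
print("theta_2,2 =", theta(Phi, 2, 2))
```

Such an exhaustive computation is only feasible for very small p and k; for realistic sizes one relies on the random-matrix arguments discussed in the Introduction and in Section 5.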

In the present paper we provide a concise and coherent analysis of the constrained $\ell_1$ minimization method for stable recovery of sparse signals. The analysis, which yields strong results, is surprisingly simple and elementary. At the heart of our simplified analysis of the $\ell_1$ minimization method for stable recovery is an elementary, yet highly useful, inequality. This inequality, called the Shifting Inequality, shows that, given a finite decreasing sequence of nonnegative numbers, the $\ell_2$ norm of a subsequence can be bounded in terms of the $\ell_1$ norm of another subsequence by shifting the terms involved in the $\ell_2$ norm to the upper end.

The main contribution of the present paper is two-fold: firstly, it is shown that the sparse recovery problem can be solved under weaker conditions; and secondly, the analysis of the $\ell_1$ minimization method can be made very elementary and much simplified. In particular, we show that stable recovery of $k$-sparse signals can be achieved if
$$ \delta_{1.25k} + \theta_{k,1.25k} < 1. $$
This condition is weaker than the ones known in the literature. In particular, the results in Candes and Tao [7, 8], Cai, Xu and Zhang [4] and Candes [5] are extended. In fact, our general treatment of this problem produces a family of sparse recovery conditions. Interesting conditions include $\delta_{1.625k} < \sqrt{2}-1$ and $\delta_{3k} < 2(2-\sqrt{3})$. In the case of Gaussian noise, one of the main results is the following.

Theorem 1  Consider the model (1) with $z \sim N(0, \sigma^2 I_n)$. If $\beta$ is $k$-sparse and $\delta_{1.25k} + \theta_{k,1.25k} < 1$, then the $\ell_1$ minimizer
$$ \hat\beta_{DS} = \arg\min\{\|\gamma\|_1 : \|\Phi^T(y - \Phi\gamma)\|_\infty \le \sigma\sqrt{2\log p}\} $$
satisfies, with high probability,
$$ \|\hat\beta_{DS} - \beta\|_2 \le \frac{\sqrt{10}}{1 - \delta_{1.25k} - \theta_{k,1.25k}}\; \sigma\sqrt{2k\log p}, \qquad (5) $$
and the $\ell_1$ minimizer
$$ \hat\beta_{\ell_2} = \arg\min\{\|\gamma\|_1 : \|y - \Phi\gamma\|_2 \le \sigma\sqrt{n + 2\sqrt{n\log n}}\} $$
satisfies, with high probability,
$$ \|\hat\beta_{\ell_2} - \beta\|_2 \le \frac{2\sqrt{2}\,\sqrt{1+\delta_{1.25k}}}{1 - \delta_{1.25k} - \theta_{k,1.25k}}\; \sigma\sqrt{n + 2\sqrt{n\log n}}. \qquad (6) $$
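Both estimators in Theorem 1 are solutions of convex programs and can be computed with any generic convex solver; the paper itself does not prescribe an algorithm. A minimal sketch, assuming the cvxpy package as the (arbitrary) solver choice and using the noise levels of Theorem 1:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, p, k, sigma = 64, 256, 5, 0.05

Phi = rng.standard_normal((n, p)) / np.sqrt(n)        # columns approximately unit norm
beta = np.zeros(p)
beta[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = Phi @ beta + sigma * rng.standard_normal(n)

gamma = cp.Variable(p)

# Dantzig-selector-type constraint: ||Phi^T (y - Phi gamma)||_inf <= sigma sqrt(2 log p)
lam = sigma * np.sqrt(2 * np.log(p))
ds = cp.Problem(cp.Minimize(cp.norm(gamma, 1)),
                [cp.norm(Phi.T @ (y - Phi @ gamma), "inf") <= lam])
ds.solve()
beta_ds = gamma.value

# l2-constrained version: ||y - Phi gamma||_2 <= sigma sqrt(n + 2 sqrt(n log n))
eps = sigma * np.sqrt(n + 2 * np.sqrt(n * np.log(n)))
l2 = cp.Problem(cp.Minimize(cp.norm(gamma, 1)),
                [cp.norm(y - Phi @ gamma, 2) <= eps])
l2.solve()
beta_l2 = gamma.value

print("DS error:", np.linalg.norm(beta_ds - beta))
print("l2 error:", np.linalg.norm(beta_l2 - beta))
```

The two constraint sets correspond exactly to the bounded sets $B_3$ and $B_4$ introduced in Section 4.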

In comparison to Theorem 1.1 in Candes and Tao (2007), the result given in (5) for $\hat\beta_{DS}$ weakens the condition from $\delta_{2k} + \theta_{k,2k} < 1$ to $\delta_{1.25k} + \theta_{k,1.25k} < 1$ and improves the constant in the bound from $4/(1-\delta_{2k}-\theta_{k,2k})$ to $\sqrt{10}/(1-\delta_{1.25k}-\theta_{k,1.25k})$. Although our primary interest in this paper is to recover sparse signals, all the main results in the subsequent sections are given for general signals that are not necessarily $k$-sparse.

Weakening the RIP condition also has direct implications for the construction of compressed sensing (CS) matrices. It is important to note that it is computationally difficult to verify the RIP for a given design matrix $\Phi$ when $p$ is large and the sparsity $k$ is not too small: one is required to bound the condition numbers of $\binom{p}{k}$ submatrices. The spectral norm of a matrix is often difficult to compute and the combinatorial complexity makes it infeasible to check the RIP for reasonable values of $p$ and $k$. A general technique for avoiding checking the RIP directly is to generate the entries of the matrix $\Phi$ randomly and to show that the resulting random matrix satisfies the RIP with high probability using the well-known Johnson-Lindenstrauss Lemma; see, for example, Baraniuk, et al. [2]. Weakening the RIP condition makes it easier to prove that the resulting random matrix satisfies the CS properties.

The paper is organized as follows. After Section 2, in which basic notation and definitions are reviewed, we introduce in Section 3.1 the elementary Shifting Inequality, which enables us to make a finer analysis of the sparse recovery problem. We then consider the problem of exact recovery in the noiseless case in Section 3.2 and stable recovery of sparse signals in Section 3.3. The Gaussian noise case is treated in Section 4. Section 5 discusses various conditions on the RIP and the effects of the improvement of the RIP condition on the construction of CS matrices. The proofs of some technical results are relegated to the Appendix.

2 Preliminaries

We begin by introducing basic notation and definitions related to the RIP. We also collect a few elementary results needed for the later sections.

For a vector $v = (v_i) \in \mathbb{R}^p$, we shall denote by $v_{\max(k)}$ the vector $v$ with all but the $k$ largest entries (in absolute value) set to zero, and define $v_{-\max(k)} = v - v_{\max(k)}$, the vector $v$ with the $k$ largest entries (in absolute value) set to zero. We use the standard notation $\|v\|_q = (\sum_i |v_i|^q)^{1/q}$ to denote the $\ell_q$-norm of the vector $v$. We shall also treat a vector $v = (v_i)$ as a function $v : \{1, 2, \dots, p\} \to \mathbb{R}$ by assigning $v(i) = v_i$. For a subset $T$ of $\{1, \dots, p\}$, we use $\Phi_T$ to denote the submatrix obtained by extracting the columns of $\Phi$ according to the indices in $T$.
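A short helper illustrating the notation $v_{\max(k)}$ and the tail term $\|v - v_{\max(k)}\|_1$ that appears in the error bounds below (a sketch; the function name is ours):

```python
import numpy as np

def max_k(v, k):
    """Return v_max(k): v with all but its k largest-magnitude entries set to zero."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]        # indices of the k largest |v_i|
    out[keep] = v[keep]
    return out

v = np.array([0.1, -3.0, 0.0, 2.5, -0.2, 0.05])
k = 2
v_top = max_k(v, k)            # best k-term approximation of v
v_tail = v - v_top             # v_{-max(k)}
print("||v - v_max(k)||_1 =", np.abs(v_tail).sum())   # the tail term in the error bounds
```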

Let $SSV_T = \{\lambda : \lambda \text{ is an eigenvalue of } \Phi_T'\Phi_T\}$, and
$$ \Lambda_{\min}(k) = \min\Big\{\bigcup_{|T|\le k} SSV_T\Big\}, \qquad \Lambda_{\max}(k) = \max\Big\{\bigcup_{|T|\le k} SSV_T\Big\}. $$
It can be seen that $1 - \delta_k \le \Lambda_{\min}(k) \le \Lambda_{\max}(k) \le 1 + \delta_k$. Hence the condition (3) can be viewed as a condition on $\Lambda_{\min}(k)$ and $\Lambda_{\max}(k)$.

The following relations can be easily checked:
$$ \delta_k \le \delta_{k_1}, \quad \text{if } k \le k_1 \le p, \qquad (7) $$
$$ \theta_{k,k'} \le \theta_{k_1,k_1'}, \quad \text{if } k \le k_1,\ k' \le k_1',\ \text{and } k_1 + k_1' \le p. \qquad (8) $$
Candes and Tao [7] showed that the constants $\delta$ and $\theta$ are related by the following inequalities,
$$ \theta_{k,k'} \le \delta_{k+k'} \le \theta_{k,k'} + \max(\delta_k, \delta_{k'}). \qquad (9) $$
Cai, Xu and Zhang obtained the following properties for $\delta$ and $\theta$ in [4], which are especially useful in producing simplified recovery conditions:
$$ \theta_{k,\,k_1+\cdots+k_l} \;\le\; \Big(\sum_{i=1}^{l}\theta_{k,k_i}^2\Big)^{1/2} \;\le\; \Big(\sum_{i=1}^{l}\delta_{k+k_i}^2\Big)^{1/2}. \qquad (10) $$

Consider the $\ell_1$ minimization problem $(P_B)$. Let $\beta$ be a feasible solution to $(P_B)$, i.e., $y - \Phi\beta \in B$. Without loss of generality we assume that $\mathrm{supp}(\beta_{\max(k)}) = \{1, 2, \dots, k\}$. Let $\hat\beta$ be a solution to the minimization problem $(P_B)$. Then it is clear that $\|\hat\beta\|_1 \le \|\beta\|_1$. Let $h = \hat\beta - \beta$ and $h_0 = h\,I_{\{1,2,\dots,k\}}$. Here $I_A$ denotes the indicator function of a set $A \subseteq \{1, 2, \dots, p\}$, i.e., $I_A(j) = 1$ if $j \in A$ and $0$ if $j \notin A$. The following is a widely used fact; see, for example, [4, 6, 8, 13].

Lemma 1
$$ \|h - h_0\|_1 \le \|h_0\|_1 + 2\,\|\beta_{-\max(k)}\|_1. $$
This follows from the fact that
$$ \|\beta\|_1 \ge \|\hat\beta\|_1 = \|\beta_{\max(k)} + h_0\|_1 + \|\beta_{-\max(k)} + (h - h_0)\|_1 \ge \|\beta_{\max(k)}\|_1 - \|h_0\|_1 + \|h - h_0\|_1 - \|\beta_{-\max(k)}\|_1. $$
Note also that the Cauchy-Schwarz Inequality yields that for $u \in \mathbb{R}^k$,
$$ \|u\|_1 \le \sqrt{k}\,\|u\|_2. \qquad (11) $$
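Lemma 1 is the familiar cone constraint satisfied by the error of any $\ell_1$ minimizer. A small numerical check, posing the noiseless program as a linear program via scipy.optimize.linprog (an assumed solver choice, not part of the paper); the signal has a small non-sparse tail so the tail term in Lemma 1 is non-trivial:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, p, k = 30, 80, 4
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = rng.standard_normal(k)              # k large entries on {1,...,k}
beta += 0.01 * rng.standard_normal(p)          # small non-sparse tail
y = Phi @ beta

# Basis pursuit  min ||gamma||_1  s.t.  Phi gamma = y,  via the standard LP lift:
# variables (gamma, t); minimize sum(t) with -t <= gamma <= t and Phi gamma = y.
c = np.concatenate([np.zeros(p), np.ones(p)])
A_ub = np.block([[np.eye(p), -np.eye(p)], [-np.eye(p), -np.eye(p)]])
b_ub = np.zeros(2 * p)
A_eq = np.hstack([Phi, np.zeros((n, p))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * p + [(0, None)] * p, method="highs")
beta_hat = res.x[:p]

h = beta_hat - beta
h0 = np.zeros(p)
h0[:k] = h[:k]
tail = np.abs(beta).copy()
tail[:k] = 0                                    # ||beta_{-max(k)}||_1
# Lemma 1:  ||h - h0||_1 <= ||h0||_1 + 2 ||beta_{-max(k)}||_1
print(np.abs(h - h0).sum(), "<=", np.abs(h0).sum() + 2 * tail.sum())
```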

3 Shifting Inequality, Exact and Stable Recovery

In this section, we consider exact recovery of high-dimensional sparse signals in the noiseless case and stable recovery in the bounded noise case. Recovery of sparse signals with Gaussian noise will be discussed in Section 4. We begin by introducing an elementary inequality which we call the Shifting Inequality. This useful inequality plays a key role in our analysis of the properties of the solution to the $\ell_1$ minimization problem.

3.1 The Shifting Inequality

The following elementary inequality enables us to perform finer estimation involving $\ell_1$ and $\ell_2$ norms, as can be seen from the proofs of Theorem 2 in Section 3.2 and other main results.

Lemma 2 (Shifting Inequality)  Let $q, r$ be positive integers satisfying $r \le q \le 3r$. Then any descending chain of real numbers
$$ a_1 \ge a_2 \ge \cdots \ge a_r \ge b_1 \ge b_2 \ge \cdots \ge b_q \ge c_1 \ge c_2 \ge \cdots \ge c_r \ge 0 $$
satisfies
$$ \sqrt{\sum_{i=1}^{q} b_i^2 + \sum_{i=1}^{r} c_i^2} \;\le\; \frac{\sum_{i=1}^{r} a_i + \sum_{i=1}^{q} b_i}{\sqrt{q+r}}. \qquad (12) $$
In particular, any descending chain of real numbers $b_1 \ge b_2 \ge \cdots \ge b_q \ge 0$ satisfies
$$ \sqrt{\sum_{i=1}^{q} b_i^2 + r\,b_q^2} \;\le\; \frac{r\,b_1 + \sum_{i=1}^{q} b_i}{\sqrt{q+r}}. $$
The proof of this lemma is presented in the Appendix.

Remark 1  A particularly useful case is $q = 3r$. Let
$$ d_1 \ge \cdots \ge d_r \ge d_{r+1} \ge \cdots \ge d_{4r} \ge \cdots \ge d_{5r} \ge 0. $$
Then Lemma 2 yields
$$ \sum_{j=r+1}^{5r} d_j^2 \;\le\; \frac{1}{4r}\Big(\sum_{j=1}^{4r} d_j\Big)^2. $$
We will see that the Shifting Inequality, albeit very elementary, not only simplifies the analysis of the $\ell_1$ minimization method but also weakens the required condition on the RIP.
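Inequality (12) is easy to test numerically. The following sketch draws random descending chains and confirms that the right-hand side dominates; the specific $(r, q)$ pairs and chain lengths are arbitrary choices satisfying $r \le q \le 3r$:

```python
import numpy as np

rng = np.random.default_rng(3)

def shifting_gap(r, q):
    """Right-hand side minus left-hand side of (12) for a random descending chain
    a_1 >= ... >= a_r >= b_1 >= ... >= b_q >= c_1 >= ... >= c_r >= 0."""
    chain = np.sort(rng.random(r + q + r))[::-1]
    a, b, c = chain[:r], chain[r:r + q], chain[r + q:]
    lhs = np.sqrt((b ** 2).sum() + (c ** 2).sum())
    rhs = (a.sum() + b.sum()) / np.sqrt(q + r)
    return rhs - lhs

for r, q in [(1, 1), (2, 5), (3, 9), (4, 7)]:        # all satisfy r <= q <= 3r
    gaps = [shifting_gap(r, q) for _ in range(10000)]
    print(r, q, "min gap:", min(gaps))               # stays >= 0
```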

3.2 Exact Recovery of Sparse Signals

We shall start with the simple setting where no noise is present. In this case the goal is to recover the signal $\beta$ exactly when it is sparse. This case is of significant interest in its own right as it is also closely connected to the problem of decoding of linear codes; see, for example, Candes and Tao [7]. The ideas used in treating this special case can be easily extended to treat the general case where noise is present.

Suppose $y = \Phi\beta$. Based on $(\Phi, y)$, we wish to reconstruct the vector $\beta$ exactly when it is sparse. Equivalently, we wish to find the sparsest representation of the signal $y$ in the dictionary consisting of the columns of the matrix $\Phi$. Let $\hat\beta$ be the minimizer of the problem
$$ (\text{Exact}) \qquad \min_{\gamma\in\mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad \Phi\gamma = y. \qquad (13) $$
Note that this is a special case of the $\ell_1$ minimization problem $(P_B)$ with $B = \{0\}$. We have the following result.

Theorem 2  Suppose that $\beta$ is $k$-sparse and that
$$ \delta_{k+a} + \sqrt{k/k'}\;\theta_{k+a,k'} < 1 \qquad (14) $$
holds for some positive integers $a$ and $k'$ satisfying $2a \le k' \le 4a$. Then the solution $\hat\beta$ to the $\ell_1$ minimization problem (Exact) recovers $\beta$ exactly. In general, if (14) holds, then $\hat\beta$ satisfies
$$ \|\hat\beta - \beta\|_2 \;\le\; \frac{2\,(1 - \delta_{k+a} + \theta_{k+a,k'})}{1 - \delta_{k+a} - \sqrt{k/k'}\,\theta_{k+a,k'}}\cdot\frac{\|\beta - \beta_{\max(k)}\|_1}{\sqrt{k'}}. $$

Remark 2  We should note that in this and the following main theorems, we use the general condition $\delta_{k+a} + \sqrt{k/k'}\,\theta_{k+a,k'} < 1$, which involves two positive integers $a$ and $k'$ in addition to the sparsity parameter $k$. The flexibility in the choice of $a$ and $k'$ in the condition allows one to derive interesting conditions for compressed sensing matrices. More discussion on special cases and comparisons with the existing conditions used in the current literature is given in Section 5.

Remark 3  A particularly interesting choice is $k' = k$ and $a = k/4$. Theorem 2 shows that if $\beta$ is $k$-sparse and
$$ \delta_{1.25k} + \theta_{k,1.25k} < 1, \qquad (15) $$
then the $\ell_1$ minimization method recovers $\beta$ exactly. This condition is weaker than other conditions on the RIP currently available in the literature; compare, for example, Candes and Tao [7, 8], Candes, Romberg and Tao [6], Candes [5], and Cai, Xu and Zhang [4]. See more discussion in Section 5.

Proof. The proof of Theorem 2 is elementary. The key to the proof is the Shifting Inequality. Again, set $h = \hat\beta - \beta$. We shall cut the error vector $h$ into pieces and then apply the Shifting Inequality to subvectors. Without loss of generality, we assume the first $k$ coordinates of $\beta$ are the largest in magnitude. Making a rearrangement if necessary, we may also assume that $|h(k+1)| \ge |h(k+2)| \ge \cdots$. Set $T_0 = \{1, 2, \dots, k\}$, $T' = \{k+1, k+2, \dots, k+a\}$ and $T_i = \{k+a+(i-1)k'+1, \dots, k+a+ik'\}$, $i = 1, 2, \dots$, with the last subset of size less than or equal to $k'$. Let $h_0 = h\,I_{T_0}$, $h' = h\,I_{T'}$ and $h_i = h\,I_{T_i}$ for $i \ge 1$. Schematically, $h = \hat\beta - \beta$ is thus partitioned into consecutive blocks $h_0, h', h_1, h_2, h_3, \dots$ of support sizes $k, a, k', k', k', \dots$.

To apply the Shifting Inequality, we shall first divide each vector $h_i$ into two pieces. Set $T_{i1} = \{k+a+(i-1)k'+1, \dots, k+ik'\}$ and $T_{i2} = T_i \setminus T_{i1} = \{k+ik'+1, \dots, k+a+ik'\}$. We note that $|T_{i1}| = k' - a$ and $|T_{i2}| = a$ for all $i \ge 1$. Let $h_{i1} = h_i I_{T_{i1}}$ and $h_{i2} = h_i I_{T_{i2}}$, so that each block $h_i$ splits into a leading piece of length $k'-a$ and a trailing piece of length $a$. Note that $a \le k' - a \le 3a$. Applying the Shifting Inequality (12) to the vectors $\{h', h_{11}, h_{12}\}$ and $\{h_{(i-1)2}, h_{i1}, h_{i2}\}$ for $i = 2, 3, \dots$ yields
$$ \|h_1\|_2 \le \frac{\|h'\|_1 + \|h_{11}\|_1}{\sqrt{k'}}, \quad \|h_2\|_2 \le \frac{\|h_{12}\|_1 + \|h_{21}\|_1}{\sqrt{k'}}, \quad \dots, \quad \|h_i\|_2 \le \frac{\|h_{(i-1)2}\|_1 + \|h_{i1}\|_1}{\sqrt{k'}}, \quad \dots $$
It then follows from Lemma 1 and the inequality (11) that
$$ \sum_{i\ge1}\|h_i\|_2 \le \frac{\|h'\|_1 + \sum_{i\ge1}\|h_i\|_1}{\sqrt{k'}} = \frac{\|h\|_1 - \|h_0\|_1}{\sqrt{k'}} \le \frac{\|h_0\|_1 + 2\|\beta - \beta_{\max(k)}\|_1}{\sqrt{k'}} \le \sqrt{\frac{k}{k'}}\,\|h_0+h'\|_2 + \frac{2\|\beta - \beta_{\max(k)}\|_1}{\sqrt{k'}}. $$

Now the fact that $\Phi h = 0$ yields
$$ 0 = \langle \Phi h, \Phi(h_0+h')\rangle = \langle \Phi(h_0+h'), \Phi(h_0+h')\rangle + \sum_{i\ge1}\langle \Phi h_i, \Phi(h_0+h')\rangle \ge (1-\delta_{k+a})\,\|h_0+h'\|_2^2 - \theta_{k+a,k'}\,\|h_0+h'\|_2\sum_{i\ge1}\|h_i\|_2 $$
$$ \ge \|h_0+h'\|_2\Big[\Big(1-\delta_{k+a}-\sqrt{\tfrac{k}{k'}}\,\theta_{k+a,k'}\Big)\|h_0+h'\|_2 - \frac{2\theta_{k+a,k'}}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1\Big]. $$
This implies
$$ \|h_0+h'\|_2 \;\le\; \frac{2\theta_{k+a,k'}}{\sqrt{k'}\,\big(1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}\big)}\,\|\beta-\beta_{\max(k)}\|_1. $$
Therefore,
$$ \|h\|_2 \le \|h_0+h'\|_2 + \sum_{i\ge1}\|h_i\|_2 \le \Big(1+\sqrt{\tfrac{k}{k'}}\Big)\|h_0+h'\|_2 + \frac{2}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1 \le \frac{2\,(1-\delta_{k+a}+\theta_{k+a,k'})}{1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}}\cdot\frac{\|\beta-\beta_{\max(k)}\|_1}{\sqrt{k'}}. $$
If $\beta$ is $k$-sparse, then $\beta - \beta_{\max(k)} = 0$, which implies $\beta = \hat\beta$.

The key argument used in the proof of Theorem 2 is the Shifting Inequality. This simple analysis requires a condition on the RIP that is weaker than other conditions on the RIP used in the literature.
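The chained application of the Shifting Inequality in the proof above, in which each block's $\ell_2$ norm is controlled by the $\ell_1$ norm of the previous block's trailing piece plus its own leading piece, can be checked numerically. A small sketch on a random decreasing tail; the block parameters are arbitrary choices with $2a \le k' \le 4a$:

```python
import numpy as np

rng = np.random.default_rng(4)
a, kp, m = 2, 4, 200          # block sizes a and k' (kp), tail length m

# decreasing tail |h(k+1)| >= |h(k+2)| >= ... as assumed in the proof
tail = np.sort(rng.random(m))[::-1]

h_prime = tail[:a]                                            # the block h' of size a
blocks = [tail[a + i * kp: a + (i + 1) * kp]
          for i in range((m - a) // kp)]                      # complete blocks h_1, h_2, ...

lhs = sum(np.linalg.norm(b) for b in blocks)                  # sum_i ||h_i||_2
rhs = (h_prime.sum() + sum(b.sum() for b in blocks)) / np.sqrt(kp)
print(lhs, "<=", rhs)                                         # the chained bound from the proof
```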

In addition to Theorem 2, we also have the following result under a simpler condition.

Theorem 3  Let $k$ be a positive integer and suppose
$$ \delta_k + \sqrt{k}\,\theta_{k,1} < 1. \qquad (16) $$
Then $\hat\beta$ satisfies
$$ \|\hat\beta - \beta\|_2 \le \frac{2\,(1-\delta_k+\theta_{k,1})}{1-\delta_k-\sqrt{k}\,\theta_{k,1}}\,\|\beta - \beta_{\max(k)}\|_1. $$
In particular, if $\beta$ is $k$-sparse, the $\ell_1$ minimization recovers $\beta$ exactly.

Proof. The proof is similar to that of Theorem 2. For each $i \ge 1$, let $T_i = \{k+i\}$ and $h_i = h\,I_{T_i}$. We note
$$ \langle \Phi h, \Phi h_0\rangle = \langle \Phi h_0, \Phi h_0\rangle + \sum_{i\ge1}\langle \Phi h_i, \Phi h_0\rangle \ge (1-\delta_k)\|h_0\|_2^2 - \theta_{k,1}\,\|h_0\|_2\sum_{i\ge1}\|h_i\|_2 $$
$$ = (1-\delta_k)\|h_0\|_2^2 - \theta_{k,1}\,\|h_0\|_2\sum_{i\ge1}\|h_i\|_1 \ge (1-\delta_k)\|h_0\|_2^2 - \theta_{k,1}\,\|h_0\|_2\big(\|h_0\|_1 + 2\|\beta-\beta_{\max(k)}\|_1\big) $$
$$ \ge (1-\delta_k)\|h_0\|_2^2 - \theta_{k,1}\,\|h_0\|_2\big(\sqrt{k}\,\|h_0\|_2 + 2\|\beta-\beta_{\max(k)}\|_1\big) = \|h_0\|_2\Big[\big(1-\delta_k-\sqrt{k}\,\theta_{k,1}\big)\|h_0\|_2 - 2\theta_{k,1}\|\beta-\beta_{\max(k)}\|_1\Big]. $$
The remaining steps are the same as those in the proof of Theorem 2.

3.3 Recovery in the Presence of Errors

We now consider reconstruction of high-dimensional sparse signals in the presence of bounded noise. Let $B \subset \mathbb{R}^n$ be a bounded set. Suppose we observe $(\Phi, y)$ where $y = \Phi\beta + z$ with the error vector $z \in B$, and we wish to reconstruct $\beta$ by solving the $\ell_1$ minimization problem $(P_B)$. Specifically, we consider two types of bounded errors:
$$ B_1(\eta) = \{z : \|\Phi^T z\|_\infty \le \eta\} \quad \text{and} \quad B_2(\eta) = \{z : \|z\|_2 \le \eta\}. $$
We shall use $\hat\beta_{DS}$ to denote the solution of the $\ell_1$ minimization problem $(P_B)$ with $B = B_1(\eta)$, and use $\hat\beta_{\ell_2}$ to denote the solution of $(P_B)$ with $B = B_2(\eta)$. The Shifting Inequality again plays a key role in our analysis in this case. In addition, the analysis of the Gaussian noise case follows easily from that of the bounded noise case.

Theorem 4  Suppose
$$ \delta_{k+a} + \sqrt{k/k'}\;\theta_{k+a,k'} < 1 \qquad (17) $$
holds for positive integers $k$, $a$ and $k'$ where $2a \le k' \le 4a$. Then the minimizers $\hat\beta_{DS}$ and $\hat\beta_{\ell_2}$ satisfy
$$ \|\hat\beta_{DS} - \beta\|_2 \le A\,\eta + B\,\|\beta - \beta_{\max(k)}\|_1 \quad \text{and} \quad \|\hat\beta_{\ell_2} - \beta\|_2 \le C\,\eta + B\,\|\beta - \beta_{\max(k)}\|_1, $$

where
$$ A = \frac{2\sqrt{1+k/k'}\,\sqrt{k+a}}{1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}}, \qquad (18) $$
$$ B = \frac{2\sqrt{1+k/k'}\;\theta_{k+a,k'}}{\sqrt{k'}\,\big(1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}\big)} + \frac{2}{\sqrt{k'}}, \qquad (19) $$
$$ C = \frac{2\sqrt{1+k/k'}\,\sqrt{1+\delta_{k+a}}}{1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}}. \qquad (20) $$
A proof of Theorem 4, based on the ideas of that for Theorem 2, is given in the Appendix.

Remark 4  As in the noiseless setting, an especially interesting case is $k' = k$ and $a = k/4$. In this case, Theorem 4 yields that if $\beta$ is $k$-sparse and
$$ \delta_{1.25k} + \theta_{k,1.25k} < 1 \qquad (21) $$
holds, then the $\ell_1$ minimizers $\hat\beta_{DS}$ and $\hat\beta_{\ell_2}$ satisfy
$$ \|\hat\beta_{DS} - \beta\|_2 \le \frac{\sqrt{10k}}{1-\delta_{1.25k}-\theta_{k,1.25k}}\,\eta, \qquad (22) $$
and
$$ \|\hat\beta_{\ell_2} - \beta\|_2 \le \frac{2\sqrt{2}\,\sqrt{1+\delta_{1.25k}}}{1-\delta_{1.25k}-\theta_{k,1.25k}}\,\eta. \qquad (23) $$
Again, the condition (21) for stable recovery in the noisy case is weaker than the existing RIP conditions in the literature; see, for example, Candes and Tao [7, 8], Candes, Romberg and Tao [6], Candes [5], and Cai, Xu and Zhang [4].

Remark 5  A generalization of Theorem 3 can also be obtained for the bounded noise case. Suppose $\beta$ is $k$-sparse and $\delta_k + \sqrt{k}\,\theta_{k,1} < 1$ holds. Then the $\ell_1$ minimizers $\hat\beta_{DS}$ and $\hat\beta_{\ell_2}$ satisfy
$$ \|\hat\beta_{DS} - \beta\|_2 \le \frac{2\sqrt{k}\,(1+\sqrt{k})}{1-\delta_k-\sqrt{k}\,\theta_{k,1}}\,\eta \quad \text{and} \quad \|\hat\beta_{\ell_2} - \beta\|_2 \le \frac{2\,(1+\sqrt{k})\,\sqrt{1+\delta_k}}{1-\delta_k-\sqrt{k}\,\theta_{k,1}}\,\eta. $$
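A small sketch evaluating the constants (18)-(20) as reconstructed above, and checking that the special case $k' = k$, $a = k/4$ reduces to the constants displayed in Remark 4; the function and the sample values of $\delta$ and $\theta$ are ours, chosen only for illustration:

```python
import numpy as np

def theorem4_constants(k, a, kp, delta, theta):
    """A, B, C of (18)-(20) as reconstructed above; delta stands for delta_{k+a}
    and theta for theta_{k+a,k'} (assumed given, not computed here)."""
    denom = 1 - delta - np.sqrt(k / kp) * theta
    A = 2 * np.sqrt(1 + k / kp) * np.sqrt(k + a) / denom
    B = 2 * np.sqrt(1 + k / kp) * theta / (np.sqrt(kp) * denom) + 2 / np.sqrt(kp)
    C = 2 * np.sqrt(1 + k / kp) * np.sqrt(1 + delta) / denom
    return A, B, C

# Remark 4 special case k' = k, a = k/4: A reduces to sqrt(10 k)/(1 - delta - theta)
k, delta, theta = 100, 0.2, 0.3
A, B, C = theorem4_constants(k, k // 4, k, delta, theta)
print(A, np.sqrt(10 * k) / (1 - delta - theta))                      # these two agree
print(C, 2 * np.sqrt(2) * np.sqrt(1 + delta) / (1 - delta - theta))  # and so do these
```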

4 Gaussian Noise

The Gaussian noise case is of particular interest in statistics and several methods have been developed; see, for example, Tibshirani [19], Efron, et al. [14], and Candes and Tao [8]. The results presented in Section 3.3 on the bounded noise case are directly applicable to the case where the noise is Gaussian. This is due to the fact that Gaussian noise is essentially bounded.

Suppose we observe
$$ y = \Phi\beta + z, \qquad z \sim N(0, \sigma^2 I_n), \qquad (24) $$
and wish to recover the signal $\beta$ based on $(\Phi, y)$. We assume that $\sigma$ is known and that the columns of $\Phi$ are standardized to have unit $\ell_2$ norm. Define two bounded sets
$$ B_3 = \{z : \|\Phi^T z\|_\infty \le \sigma\sqrt{2\log p}\} \quad \text{and} \quad B_4 = \{z : \|z\|_2 \le \sigma\sqrt{n + 2\sqrt{n\log n}}\}. \qquad (25) $$
The following result, which follows from standard probability calculations, shows that the Gaussian noise $z$ is essentially bounded. The readers are referred to Cai, Xu and Zhang [4] for a proof.

Lemma 3  The Gaussian error $z \sim N(0, \sigma^2 I_n)$ satisfies
$$ P(z \in B_3) \ge 1 - \frac{1}{\sqrt{\pi\log p}} \quad \text{and} \quad P(z \in B_4) \ge 1 - \frac{1}{n}. \qquad (26) $$
Lemma 3 indicates that the Gaussian variable $z$ lies in the bounded sets $B_3$ and $B_4$ with high probability. The results obtained in the previous sections for bounded errors can thus be applied directly to treat Gaussian noise. In this case, we shall consider two particular constrained $\ell_1$ minimization problems. Let $\hat\beta_{DS}$ be the minimizer of
$$ \min_{\gamma\in\mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad y - \Phi\gamma \in B_3, \qquad (27) $$
and let $\hat\beta_{\ell_2}$ be the minimizer of
$$ \min_{\gamma\in\mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad y - \Phi\gamma \in B_4. \qquad (28) $$
The following theorem is a direct consequence of Lemma 3 and Theorem 4.
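The bounds defining $B_3$ and $B_4$ are easy to check by simulation. A Monte Carlo sketch of Lemma 3 (the matrix dimensions and number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, sigma, trials = 128, 512, 1.0, 2000
Phi = rng.standard_normal((n, p))
Phi /= np.linalg.norm(Phi, axis=0)                      # columns standardized to unit l2 norm

lam = sigma * np.sqrt(2 * np.log(p))                    # threshold defining B_3
eps = sigma * np.sqrt(n + 2 * np.sqrt(n * np.log(n)))   # radius defining B_4

in_B3 = in_B4 = 0
for _ in range(trials):
    z = sigma * rng.standard_normal(n)
    in_B3 += np.max(np.abs(Phi.T @ z)) <= lam
    in_B4 += np.linalg.norm(z) <= eps

print("P(z in B3) approx", in_B3 / trials, "(Lemma 3: at least 1 - 1/sqrt(pi log p))")
print("P(z in B4) approx", in_B4 / trials, "(Lemma 3: at least 1 - 1/n)")
```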

Theorem 5  Suppose
$$ \delta_{k+a} + \sqrt{k/k'}\;\theta_{k+a,k'} < 1 \qquad (29) $$
holds for some positive integers $k$, $a$ and $k'$ with $2a \le k' \le 4a$. Then with probability at least $1 - 1/\sqrt{\pi\log p}$, the minimizer $\hat\beta_{DS}$ satisfies
$$ \|\hat\beta_{DS} - \beta\|_2 \le A\,\sigma\sqrt{2\log p} + B\,\|\beta - \beta_{\max(k)}\|_1, $$
and with probability at least $1 - \frac{1}{n}$, the minimizer $\hat\beta_{\ell_2}$ satisfies
$$ \|\hat\beta_{\ell_2} - \beta\|_2 \le C\,\sigma\sqrt{n + 2\sqrt{n\log n}} + B\,\|\beta - \beta_{\max(k)}\|_1, $$
where the constants $A$, $B$ and $C$ are given as in Theorem 4.

Remark: Again, a special case is $k' = k$ and $a = k/4$. In this case, if $\beta$ is $k$-sparse and $\delta_{1.25k} + \theta_{k,1.25k} < 1$, then, with high probability, the $\ell_1$ minimizers $\hat\beta_{DS}$ and $\hat\beta_{\ell_2}$ satisfy
$$ \|\hat\beta_{DS} - \beta\|_2 \le \frac{\sqrt{10}}{1-\delta_{1.25k}-\theta_{k,1.25k}}\,\sigma\sqrt{2k\log p}, \qquad (30) $$
$$ \|\hat\beta_{\ell_2} - \beta\|_2 \le \frac{2\sqrt{2}\,\sqrt{1+\delta_{1.25k}}}{1-\delta_{1.25k}-\theta_{k,1.25k}}\,\sigma\sqrt{n + 2\sqrt{n\log n}}. \qquad (31) $$
The result given in (30) for $\hat\beta_{DS}$ improves Theorem 1.1 of Candes and Tao [8] by weakening the condition from $\delta_{2k} + \theta_{k,2k} < 1$ to $\delta_{1.25k} + \theta_{k,1.25k} < 1$ and reducing the constant in the bound from $4/(1-\delta_{2k}-\theta_{k,2k})$ to $\sqrt{10}/(1-\delta_{1.25k}-\theta_{k,1.25k})$. The improvement on the error bound is minor. The improvement on the condition is more significant, as it shows that signals with larger support can be recovered accurately for fixed $n$ and $p$. Candes and Tao [8] also derived an oracle inequality for $\hat\beta_{DS}$ in the Gaussian noise setting under the condition $\delta_{2k} + \theta_{k,2k} < 1$. Our method can also be used to improve Theorems 1.2 and 1.3 in Candes and Tao [8] by weakening the condition to $\delta_{1.25k} + \theta_{k,1.25k} < 1$.

5 Discussions

The flexibility in the choice of $a$ and $k'$ in the condition $\delta_{k+a} + \sqrt{k/k'}\,\theta_{k+a,k'} < 1$ used in Theorems 2, 4 and 5 enables us to deduce interesting conditions for compressed sensing matrices. We shall highlight several of them here and compare with the existing conditions used in the current literature. As mentioned in the introduction, it is sometimes more convenient to use conditions involving only the restricted isometry constant $\delta$, and for this reason we shall mainly focus on $\delta$. By choosing different values of $a$ and $k'$ and using equation (10), it is easy to show that each of the following conditions is sufficient for the exact recovery of $k$-sparse signals in the noiseless case and stable recovery in the noisy case:

14 1. δ θ,1.25 < 1 2. δ < δ 2 < δ 3 < 22 3) δ 4 < For instance, Condition 2 follows from Condition 1 and equation 10). In fact, if δ < 2 1, then δ θ,1.25 δ δ δ < 1. These conditions for stale recovery improve the conditions used in the literature, e.g., the conditions δ 3 + 3δ 4 < 2 in [6], δ 2 + θ,2 < 1 in [8], δ θ 1.5, < 1 in [4], δ 2 < 2 1 in [5], and δ 1.75 < 2 1 in [4]. It is also interesting to note that Condition 4 allows δ 3 to e large than 0.5. The flexiility in the condition δ +a + θ +a, < 1 also enales us to discuss the asymptotic properties of the RIP conditions. Letting a = t, = 4t and using the equations 7), 8), and 10), it is easy to see that each of the following conditions is sufficient for stale recovery of -sparse signals: 1. δ 3t+1) < 2t 1+ 2t, t δ 9t+1 2 < 2t 1+ 2t, t 1 7, 1 3 ) 3. δ 1+t) < 2t 1+ 2t, t 0, 1 7 ] These conditions reveal two asymptotic properties of the restricted isometry constant δ. The first is that δ can e close to 1 as t gets large), provided that checing the RIP for Φ must e done for sets of columns whose cardinality is much igger than, the sparsity for recovery. The second is that if δ is allowed to e small, then checing the RIP for Φ can e done for sets of columns whose cardinality is close to as t gets small). It is clear that with weaer RIP conditions, more matrices can e verified to e compressed sensing matrices. As mentioned in the introduction, for a given n p matrix, it is computationally difficult to chec its restrictive isometry property. However, it has een very successful in constructing random compressed sensing matrices using δ r and θ r,r, see [1, 2, 6, 7, 8, 18]. 14

For example, Baraniuk et al. [2] showed that if $\Phi$ is an $n\times p$ matrix whose entries are drawn independently according to a Gaussian or Bernoulli distribution, then $\Phi$ fails to satisfy $\delta_r < \alpha$ with probability less than
$$ \tau_\alpha = 2\,e^{-\frac{\alpha^2(3-\alpha)}{48}\,n}\Big(\frac{cp}{r\alpha}\Big)^{r}, $$
where $c$ is a constant. It is not hard to see that the probability of failing drops at a considerable rate as the bound $\alpha$ increases and/or the index $r$ decreases. In fact, with a weaker condition $\delta_r < \alpha + \epsilon < 1$, this rate is
$$ \frac{\tau_\alpha}{\tau_{\alpha+\epsilon}} = \frac{2\,e^{-\frac{\alpha^2(3-\alpha)}{48}n}\big(\frac{cp}{r\alpha}\big)^{r}}{2\,e^{-\frac{(\alpha+\epsilon)^2(3-\alpha-\epsilon)}{48}n}\big(\frac{cp}{r(\alpha+\epsilon)}\big)^{r}} \;\ge\; e^{\frac{\alpha\epsilon}{12}n}\Big(1+\frac{\epsilon}{\alpha}\Big)^{r}. $$
This rate is very large if $n$ is large. On the other hand, the improvement of RIP conditions can be interpreted as enlarging the sparsity of the signals to be recovered. For example, one of the previous results showed that the condition $\delta_{2k} < \sqrt{2}-1$ ensures the recovery of a $k$-sparse signal. Replacing that condition by $\delta_{1.625k} < \sqrt{2}-1$, we see that the sparsity of the signals to be recovered is relaxed by a factor of $2/1.625 \approx 1.23$.

References

[1] W. Bajwa, J. Haupt, G. Raz, S. Wright and R. Nowak, Toeplitz-structured compressed sensing matrices, 14th Workshop on Statistical Signal Processing, 2007.

[2] R. Baraniuk, M. Davenport, R. DeVore and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constr. Approx. 28 (2008).

[3] T. Cai, On block thresholding in wavelet regression: Adaptivity, block size and threshold level, Statist. Sinica.

[4] T. Cai, G. Xu and J. Zhang, On recovery of sparse signals via l1 minimization, IEEE Trans. Inf. Theory (2008), to appear.

[5] E. J. Candes, The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Academie des Sciences, Paris, Serie I.

[6] E. J. Candes, J. Romberg and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math.

[7] E. J. Candes and T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory.

[8] E. J. Candes and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n (with discussion), Ann. Statist.

[9] A. Cohen, W. Dahmen and R. DeVore, Compressed sensing and best k-term approximation, preprint.

[10] D. L. Donoho, For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution, Comm. Pure Appl. Math.

[11] D. L. Donoho, For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution, Comm. Pure Appl. Math.

[12] D. L. Donoho, M. Elad and V. N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inf. Theory.

[13] D. L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition, IEEE Trans. Inf. Theory.

[14] B. Efron, T. Hastie, I. Johnstone and R. Tibshirani, Least angle regression (with discussion), Ann. Statist.

[15] J.-J. Fuchs, On sparse representations in arbitrary redundant bases, IEEE Trans. Inf. Theory.

[16] J.-J. Fuchs, Recovery of exact sparse representations in the presence of bounded noise, IEEE Trans. Inf. Theory.

[17] R. Gribonval and M. Nielsen, Sparse representations in unions of bases, IEEE Trans. Inf. Theory.

[18] M. Rudelson and R. Vershynin, Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements, CISS (Annual Conference on Information Sciences and Systems).

[19] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B.

[20] J. Tropp, Greed is good: algorithmic results for sparse approximation, IEEE Trans. Inf. Theory.

[21] J. Tropp, Just relax: convex programming methods for identifying sparse signals in noise, IEEE Trans. Inf. Theory.

APPENDIX

A-1  Proof of the Shifting Inequality (Lemma 2)

Since $\sum_{i=1}^r a_i \ge r\,b_1$ and $\sum_{i=1}^r c_i^2 \le r\,b_q^2$, it suffices to show that
$$ (q+r)\Big(\sum_{i=1}^q b_i^2 + r\,b_q^2\Big) \le \Big(r\,b_1 + \sum_{i=1}^q b_i\Big)^2. $$
Let $b_i = b_{i+1} + d_i$ for $i = 1, 2, \dots, q-1$, so that $d_i \ge 0$ and $b_i = b_q + \sum_{j=i}^{q-1} d_j$. Then
$$ (q+r)\Big(\sum_{i=1}^q b_i^2 + r\,b_q^2\Big) = (q+r)^2 b_q^2 + 2(q+r)\,b_q\sum_{i=1}^{q-1} i\,d_i + (q+r)\sum_{i=1}^{q-1}\Big(\sum_{j=i}^{q-1} d_j\Big)^2, $$
and
$$ \Big(r\,b_1 + \sum_{i=1}^q b_i\Big)^2 = \Big((q+r)\,b_q + \sum_{i=1}^{q-1}(r+i)\,d_i\Big)^2 = (q+r)^2 b_q^2 + 2(q+r)\,b_q\sum_{i=1}^{q-1}(r+i)\,d_i + \Big(\sum_{i=1}^{q-1}(r+i)\,d_i\Big)^2. $$
Note that $d_i$ is nonnegative for all $i$, so
$$ 2(q+r)\,b_q\sum_{i=1}^{q-1}(r+i)\,d_i \;\ge\; 2(q+r)\,b_q\sum_{i=1}^{q-1} i\,d_i. $$
Also, it can be seen that for any $1 \le i \le j \le q-1$, the coefficient of $d_i d_j$ in $\big(\sum_{i=1}^{q-1}(r+i)\,d_i\big)^2$ is $(1 + I(i\neq j))(r+i)(r+j)$, and the coefficient of $d_i d_j$ in $(q+r)\sum_{i=1}^{q-1}\big(\sum_{j=i}^{q-1} d_j\big)^2$ is $(1 + I(i\neq j))(q+r)\,i$; here $I(i\neq j)$ is 1 unless $i = j$, in which case it is 0. Since $q \le 3r$, we know that
$$ (r+i)(r+j) \ge (r+i)^2 = (r-i)^2 + 4ri \ge (q+r)\,i. $$
This means
$$ (q+r)\sum_{i=1}^{q-1}\Big(\sum_{j=i}^{q-1} d_j\Big)^2 \;\le\; \Big(\sum_{i=1}^{q-1}(r+i)\,d_i\Big)^2. $$
Hence the inequality is proved.

A-2  Proof of Theorem 4

Similar to the proof of Theorem 2, we have
$$ \langle \Phi h, \Phi(h_0+h')\rangle \ge \|h_0+h'\|_2\Big[\Big(1-\delta_{k+a}-\sqrt{\tfrac{k}{k'}}\,\theta_{k+a,k'}\Big)\|h_0+h'\|_2 - \frac{2\theta_{k+a,k'}}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1\Big]. $$

Case I: $B = \{z : \|z\|_2 \le \eta\}$. It is easy to see that
$$ \langle \Phi h, \Phi(h_0+h')\rangle \le \|\Phi h\|_2\,\|\Phi(h_0+h')\|_2 \le 2\eta\,\sqrt{1+\delta_{k+a}}\,\|h_0+h'\|_2. $$
Therefore,
$$ \|h_0+h'\|_2 \le \frac{2\eta\sqrt{1+\delta_{k+a}} + \frac{2\theta_{k+a,k'}}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1}{1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}}. $$
Now
$$ \|h\|_2^2 = \|h_0+h'\|_2^2 + \sum_{i\ge1}\|h_i\|_2^2 \le \|h_0+h'\|_2^2 + \Big(\sum_{i\ge1}\|h_i\|_2\Big)^2 \le \|h_0+h'\|_2^2 + \Big(\sqrt{\tfrac{k}{k'}}\,\|h_0+h'\|_2 + \frac{2}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1\Big)^2 $$
$$ \le \Big(1+\tfrac{k}{k'}\Big)\Big(\|h_0+h'\|_2 + \frac{2}{\sqrt{k+k'}}\,\|\beta-\beta_{\max(k)}\|_1\Big)^2 \le \big(C\eta + B\,\|\beta-\beta_{\max(k)}\|_1\big)^2. $$

Case II: $B = \{z : \|\Phi^T z\|_\infty \le \eta\}$. By assumption, there is a $z \in B$ such that $\Phi\beta = y - z$. So
$$ \langle \Phi h, \Phi(h_0+h')\rangle = \langle \Phi\hat\beta - y + z, \Phi_{T_0\cup T'}(h_0+h')\rangle = \langle \Phi_{T_0\cup T'}^T(\Phi\hat\beta - y + z), h_0+h'\rangle $$
$$ \le \|\Phi_{T_0\cup T'}^T(\Phi\hat\beta - y + z)\|_2\,\|h_0+h'\|_2 \le 2\sqrt{k+a}\;\eta\,\|h_0+h'\|_2. $$
This implies
$$ \|h_0+h'\|_2 \le \frac{2\eta\sqrt{k+a} + \frac{2\theta_{k+a,k'}}{\sqrt{k'}}\,\|\beta-\beta_{\max(k)}\|_1}{1-\delta_{k+a}-\sqrt{k/k'}\,\theta_{k+a,k'}}. $$
Similar to Case I, we have
$$ \|h\|_2^2 \le \Big(1+\tfrac{k}{k'}\Big)\Big(\|h_0+h'\|_2 + \frac{2}{\sqrt{k+k'}}\,\|\beta-\beta_{\max(k)}\|_1\Big)^2 \le \big(A\eta + B\,\|\beta-\beta_{\max(k)}\|_1\big)^2. $$

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals On the l 1 -Norm Invariant Convex -Sparse Decomposition of Signals arxiv:1305.6021v2 [cs.it] 11 Nov 2013 Guangwu Xu and Zhiqiang Xu Abstract Inspired by an interesting idea of Cai and Zhang, we formulate

More information

New Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit

New Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence

More information

Rui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China

Rui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China Acta Mathematica Sinica, English Series May, 015, Vol. 31, No. 5, pp. 755 766 Published online: April 15, 015 DOI: 10.1007/s10114-015-434-4 Http://www.ActaMath.com Acta Mathematica Sinica, English Series

More information

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar

More information

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 13, JULY 1, 2013 3279 Compressed Sensing Affine Rank Minimization Under Restricted Isometry T. Tony Cai Anru Zhang Abstract This paper establishes new

More information

Reconstruction from Anisotropic Random Measurements

Reconstruction from Anisotropic Random Measurements Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information

Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees

Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

A new method on deterministic construction of the measurement matrix in compressed sensing

A new method on deterministic construction of the measurement matrix in compressed sensing A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

A New Estimate of Restricted Isometry Constants for Sparse Solutions

A New Estimate of Restricted Isometry Constants for Sparse Solutions A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,

More information

Multipath Matching Pursuit

Multipath Matching Pursuit Multipath Matching Pursuit Submitted to IEEE trans. on Information theory Authors: S. Kwon, J. Wang, and B. Shim Presenter: Hwanchol Jang Multipath is investigated rather than a single path for a greedy

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

GREEDY SIGNAL RECOVERY REVIEW

GREEDY SIGNAL RECOVERY REVIEW GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin

More information

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 7-1-013 Compressed Sensing and Affine Ran Minimization Under Restricted Isometry T. Tony Cai University of Pennsylvania

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

Generalized Orthogonal Matching Pursuit- A Review and Some

Generalized Orthogonal Matching Pursuit- A Review and Some Generalized Orthogonal Matching Pursuit- A Review and Some New Results Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur, INDIA Table of Contents

More information

Lecture Notes 9: Constrained Optimization

Lecture Notes 9: Constrained Optimization Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form

More information

Optimisation Combinatoire et Convexe.

Optimisation Combinatoire et Convexe. Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix

More information

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing Afonso S. Bandeira April 9, 2015 1 The Johnson-Lindenstrauss Lemma Suppose one has n points, X = {x 1,..., x n }, in R d with d very

More information

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,

More information

19.1 Problem setup: Sparse linear regression

19.1 Problem setup: Sparse linear regression ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 19: Minimax rates for sparse linear regression Lecturer: Yihong Wu Scribe: Subhadeep Paul, April 13/14, 2016 In

More information

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems 1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical

More information

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3740 Ming-Jun Lai Department of Mathematics

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization

Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Lingchen Kong and Naihua Xiu Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, People s Republic of China E-mail:

More information

Necessary and Sufficient Conditions of Solution Uniqueness in 1-Norm Minimization

Necessary and Sufficient Conditions of Solution Uniqueness in 1-Norm Minimization Noname manuscript No. (will be inserted by the editor) Necessary and Sufficient Conditions of Solution Uniqueness in 1-Norm Minimization Hui Zhang Wotao Yin Lizhi Cheng Received: / Accepted: Abstract This

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information

Sparse Recovery with Pre-Gaussian Random Matrices

Sparse Recovery with Pre-Gaussian Random Matrices Sparse Recovery with Pre-Gaussian Random Matrices Simon Foucart Laboratoire Jacques-Louis Lions Université Pierre et Marie Curie Paris, 75013, France Ming-Jun Lai Department of Mathematics University of

More information

Sparsity in Underdetermined Systems

Sparsity in Underdetermined Systems Sparsity in Underdetermined Systems Department of Statistics Stanford University August 19, 2005 Classical Linear Regression Problem X n y p n 1 > Given predictors and response, y Xβ ε = + ε N( 0, σ 2

More information

ORTHOGONAL matching pursuit (OMP) is the canonical

ORTHOGONAL matching pursuit (OMP) is the canonical IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael

More information

Abstract This paper is about the efficient solution of large-scale compressed sensing problems.

Abstract This paper is about the efficient solution of large-scale compressed sensing problems. Noname manuscript No. (will be inserted by the editor) Optimization for Compressed Sensing: New Insights and Alternatives Robert Vanderbei and Han Liu and Lie Wang Received: date / Accepted: date Abstract

More information

Compressed Sensing and Redundant Dictionaries

Compressed Sensing and Redundant Dictionaries Compressed Sensing and Redundant Dictionaries Holger Rauhut, Karin Schnass and Pierre Vandergheynst December 2, 2006 Abstract This article extends the concept of compressed sensing to signals that are

More information

Compressed Sensing and Redundant Dictionaries

Compressed Sensing and Redundant Dictionaries Compressed Sensing and Redundant Dictionaries Holger Rauhut, Karin Schnass and Pierre Vandergheynst Abstract This article extends the concept of compressed sensing to signals that are not sparse in an

More information

Recovering overcomplete sparse representations from structured sensing

Recovering overcomplete sparse representations from structured sensing Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries

Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries Simon Foucart, Drexel University Abstract We investigate the recovery of almost s-sparse vectors x C N from

More information

Sensing systems limited by constraints: physical size, time, cost, energy

Sensing systems limited by constraints: physical size, time, cost, energy Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original

More information

On Sparsity, Redundancy and Quality of Frame Representations

On Sparsity, Redundancy and Quality of Frame Representations On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

Necessary and sufficient conditions of solution uniqueness in l 1 minimization

Necessary and sufficient conditions of solution uniqueness in l 1 minimization 1 Necessary and sufficient conditions of solution uniqueness in l 1 minimization Hui Zhang, Wotao Yin, and Lizhi Cheng arxiv:1209.0652v2 [cs.it] 18 Sep 2012 Abstract This paper shows that the solutions

More information

Greedy Signal Recovery and Uniform Uncertainty Principles

Greedy Signal Recovery and Uniform Uncertainty Principles Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles

More information

regression Lie Wang Abstract In this paper, the high-dimensional sparse linear regression model is considered,

regression Lie Wang Abstract In this paper, the high-dimensional sparse linear regression model is considered, L penalized LAD estimator for high dimensional linear regression Lie Wang Abstract In this paper, the high-dimensional sparse linear regression model is considered, where the overall number of variables

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

Analysis of Greedy Algorithms

Analysis of Greedy Algorithms Analysis of Greedy Algorithms Jiahui Shen Florida State University Oct.26th Outline Introduction Regularity condition Analysis on orthogonal matching pursuit Analysis on forward-backward greedy algorithm

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Stable Signal Recovery from Incomplete and Inaccurate Measurements

Stable Signal Recovery from Incomplete and Inaccurate Measurements Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University

More information

Stability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction

Stability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction Stability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction Namrata Vaswani ECE Dept., Iowa State University, Ames, IA 50011, USA, Email: namrata@iastate.edu Abstract In this

More information

Stability and Robustness of Weak Orthogonal Matching Pursuits

Stability and Robustness of Weak Orthogonal Matching Pursuits Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery

More information

Does Compressed Sensing have applications in Robust Statistics?

Does Compressed Sensing have applications in Robust Statistics? Does Compressed Sensing have applications in Robust Statistics? Salvador Flores December 1, 2014 Abstract The connections between robust linear regression and sparse reconstruction are brought to light.

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit arxiv:0707.4203v2 [math.na] 14 Aug 2007 Deanna Needell Department of Mathematics University of California,

More information

Combining geometry and combinatorics

Combining geometry and combinatorics Combining geometry and combinatorics A unified approach to sparse signal recovery Anna C. Gilbert University of Michigan joint work with R. Berinde (MIT), P. Indyk (MIT), H. Karloff (AT&T), M. Strauss

More information

A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables

A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables Niharika Gauraha and Swapan Parui Indian Statistical Institute Abstract. We consider the problem of

More information

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE METHODS AND APPLICATIONS OF ANALYSIS. c 2011 International Press Vol. 18, No. 1, pp. 105 110, March 2011 007 EXACT SUPPORT RECOVERY FOR LINEAR INVERSE PROBLEMS WITH SPARSITY CONSTRAINTS DENNIS TREDE Abstract.

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016 Lecture notes 5 February 9, 016 1 Introduction Random projections Random projections are a useful tool in the analysis and processing of high-dimensional data. We will analyze two applications that use

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Uniform uncertainty principle for Bernoulli and subgaussian ensembles

Uniform uncertainty principle for Bernoulli and subgaussian ensembles arxiv:math.st/0608665 v1 27 Aug 2006 Uniform uncertainty principle for Bernoulli and subgaussian ensembles Shahar MENDELSON 1 Alain PAJOR 2 Nicole TOMCZAK-JAEGERMANN 3 1 Introduction August 21, 2006 In

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing

Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing Bobak Nazer and Robert D. Nowak University of Wisconsin, Madison Allerton 10/01/10 Motivation: Virus-Host Interaction

More information

Z Algorithmic Superpower Randomization October 15th, Lecture 12

Z Algorithmic Superpower Randomization October 15th, Lecture 12 15.859-Z Algorithmic Superpower Randomization October 15th, 014 Lecture 1 Lecturer: Bernhard Haeupler Scribe: Goran Žužić Today s lecture is about finding sparse solutions to linear systems. The problem

More information

MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design

MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation

More information

Noisy and Missing Data Regression: Distribution-Oblivious Support Recovery

Noisy and Missing Data Regression: Distribution-Oblivious Support Recovery : Distribution-Oblivious Support Recovery Yudong Chen Department of Electrical and Computer Engineering The University of Texas at Austin Austin, TX 7872 Constantine Caramanis Department of Electrical

More information

Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery

Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Anna C. Gilbert Department of Mathematics University of Michigan Sparse signal recovery measurements:

More information

COMPRESSED Sensing (CS) is a method to recover a

COMPRESSED Sensing (CS) is a method to recover a 1 Sample Complexity of Total Variation Minimization Sajad Daei, Farzan Haddadi, Arash Amini Abstract This work considers the use of Total Variation (TV) minimization in the recovery of a given gradient

More information

Deterministic Construction of Compressed Sensing Matrices using BCH Codes

Deterministic Construction of Compressed Sensing Matrices using BCH Codes Deterministic Construction of Compressed Sensing Matrices using BCH Codes Arash Amini, and Farokh Marvasti, Senior Memer, IEEE arxiv:0908069v [csit] 7 Aug 009 Astract In this paper we introduce deterministic

More information

Enhanced Compressive Sensing and More

Enhanced Compressive Sensing and More Enhanced Compressive Sensing and More Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Nonlinear Approximation Techniques Using L1 Texas A & M University

More information

Error Correction via Linear Programming

Error Correction via Linear Programming Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,

More information

HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO

HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO The Annals of Statistics 2006, Vol. 34, No. 3, 1436 1462 DOI: 10.1214/009053606000000281 Institute of Mathematical Statistics, 2006 HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO BY NICOLAI

More information

Observability of a Linear System Under Sparsity Constraints

Observability of a Linear System Under Sparsity Constraints 2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear

More information

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011 Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear

More information

Sparse Legendre expansions via l 1 minimization

Sparse Legendre expansions via l 1 minimization Sparse Legendre expansions via l 1 minimization Rachel Ward, Courant Institute, NYU Joint work with Holger Rauhut, Hausdorff Center for Mathematics, Bonn, Germany. June 8, 2010 Outline Sparse recovery

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Instance Optimal Decoding by Thresholding in Compressed Sensing

Instance Optimal Decoding by Thresholding in Compressed Sensing Instance Optimal Decoding by Thresholding in Compressed Sensing Albert Cohen, Wolfgang Dahmen, and Ronald DeVore November 1, 2008 Abstract Compressed Sensing sees to capture a discrete signal x IR N with

More information

Uniqueness Conditions for A Class of l 0 -Minimization Problems

Uniqueness Conditions for A Class of l 0 -Minimization Problems Uniqueness Conditions for A Class of l 0 -Minimization Problems Chunlei Xu and Yun-Bin Zhao October, 03, Revised January 04 Abstract. We consider a class of l 0 -minimization problems, which is to search

More information

arxiv: v1 [cs.it] 21 Feb 2013

arxiv: v1 [cs.it] 21 Feb 2013 q-ary Compressive Sensing arxiv:30.568v [cs.it] Feb 03 Youssef Mroueh,, Lorenzo Rosasco, CBCL, CSAIL, Massachusetts Institute of Technology LCSL, Istituto Italiano di Tecnologia and IIT@MIT lab, Istituto

More information

LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah

LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah 00 AIM Workshop on Ranking LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION By Srikanth Jagabathula Devavrat Shah Interest is in recovering distribution over the space of permutations over n elements

More information

Sparse signals recovered by non-convex penalty in quasi-linear systems

Sparse signals recovered by non-convex penalty in quasi-linear systems Cui et al. Journal of Inequalities and Applications 018) 018:59 https://doi.org/10.1186/s13660-018-165-8 R E S E A R C H Open Access Sparse signals recovered by non-conve penalty in quasi-linear systems

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

Tractable performance bounds for compressed sensing.

Tractable performance bounds for compressed sensing. Tractable performance bounds for compressed sensing. Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure/INRIA, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan Connection between... Sparse Approximation and Compressed

More information

EVALUATIONS OF EXPECTED GENERALIZED ORDER STATISTICS IN VARIOUS SCALE UNITS

EVALUATIONS OF EXPECTED GENERALIZED ORDER STATISTICS IN VARIOUS SCALE UNITS APPLICATIONES MATHEMATICAE 9,3 (), pp. 85 95 Erhard Cramer (Oldenurg) Udo Kamps (Oldenurg) Tomasz Rychlik (Toruń) EVALUATIONS OF EXPECTED GENERALIZED ORDER STATISTICS IN VARIOUS SCALE UNITS Astract. We

More information

Recursive Dynamic CS: Recursive Recovery of Sparse Signal Sequences from Compressive Measurements: A Review

Recursive Dynamic CS: Recursive Recovery of Sparse Signal Sequences from Compressive Measurements: A Review 1 Recursive Dynamic CS: Recursive Recovery of Sparse Signal Sequences from Compressive Measurements: A Review Namrata Vaswani and Jinchun Zhan Astract In this article, we review the literature on recursive

More information

The deterministic Lasso

The deterministic Lasso The deterministic Lasso Sara van de Geer Seminar für Statistik, ETH Zürich Abstract We study high-dimensional generalized linear models and empirical risk minimization using the Lasso An oracle inequality

More information

A Note on the Complexity of L p Minimization

A Note on the Complexity of L p Minimization Mathematical Programming manuscript No. (will be inserted by the editor) A Note on the Complexity of L p Minimization Dongdong Ge Xiaoye Jiang Yinyu Ye Abstract We discuss the L p (0 p < 1) minimization

More information

A Power Efficient Sensing/Communication Scheme: Joint Source-Channel-Network Coding by Using Compressive Sensing

A Power Efficient Sensing/Communication Scheme: Joint Source-Channel-Network Coding by Using Compressive Sensing Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 28-30, 20 A Power Efficient Sensing/Communication Scheme: Joint Source-Channel-Network Coding by Using Compressive Sensing

More information

An Unconstrained l q Minimization with 0 < q 1 for Sparse Solution of Under-determined Linear Systems

An Unconstrained l q Minimization with 0 < q 1 for Sparse Solution of Under-determined Linear Systems An Unconstrained l q Minimization with 0 < q 1 for Sparse Solution of Under-determined Linear Systems Ming-Jun Lai and Jingyue Wang Department of Mathematics The University of Georgia Athens, GA 30602.

More information

The Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology

The Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology The Sparsity Gap Joel A. Tropp Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu Research supported in part by ONR 1 Introduction The Sparsity Gap (Casazza Birthday

More information