Sparse Expander-like Real-valued Projection (SERP) matrices for compressed sensing


Abdolreza Abdolhosseini Moghadam and Hayder Radha
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, U.S. Emails: {abdolhos, radha}@msu.edu

Abstract — Sparse binary projection matrices, corresponding to high-quality expander graphs, are arguably the most commonly used sensing matrices in combinatorial approaches to compressed sensing. In this paper, we are interested in properties of Sparse Expander-like Real-valued Projection (SERP) matrices that are constructed by replacing the non-zero entries of expander-graph sparse binary projection matrices by Gaussian random variables. We prove that these sparse real-valued matrices have a weak form of the Restricted Isometry Property (RIP). We also show that such weak RIP enables this class of matrices to be utilized in optimal greedy and geometrical frameworks for compressed sensing. Furthermore, we introduce a combinatorial solver for the proposed class of sparse real-valued projection matrices. While maintaining low complexity, we prove that this combinatorial solver is (a) robust in the presence of noise and (b) optimal in terms of compressive-sample requirements. Our simulation results verify that these matrices can be robustly utilized in geometrical, greedy and combinatorial compressed sensing frameworks.

Keywords: Compressed sensing, combinatorial approaches.

I. INTRODUCTION

Consider the problem of recovering an unknown signal x ∈ R^n from a set of linear samples y = Px when the system is under-determined (i.e. P ∈ R^{m×n} with m < n) and the signal x is known to be sufficiently sparse such that it is the unique sparsest solution. Given the samples y and the sensing operator P, the straightforward approach for finding the unique sparse signal x is to perform an exhaustive search. That is, by the assumptions of the problem,

x = arg min ‖x‖_0  s.t.  y = Px      (1)

The problem of Compressed Sensing (CS) [1][2] deals with designing a solver Δ(P, y) and the sensing mechanism P such that problem (1) can be solved efficiently [12].

Clearly, the most desirable attributes for a CS framework are (a) demanding the minimum number of samples m while providing a good level of reconstruction (i.e. keeping ‖Δ(P, y) − x‖ small), and at the same time (b) maintaining the complexity of the decoder Δ as low as possible.

Roughly speaking, there are three general approaches to finding the solution x in (1) [5]: convex relaxation (geometrical) methods [2][3][4], greedy algorithms [5] and combinatorial approaches [6][7][8][11]. To motivate the contribution and focus of this paper, we briefly outline the most salient attributes of these CS approaches. The decoder of a convex relaxation method finds the unique sparsest solution of y = Px by solving the ℓ1 convex relaxation of (1), i.e.

arg min ‖x‖_1  s.t.  y = Px      (2)

It has been shown [1][2] that such relaxation methods, which map the ℓ0 pseudo-norm in the objective function of (1) to the convex ℓ1 norm, lead to exact recovery of the signal x if P approximately preserves the ℓ2 norm of sparse vectors under projection (a property called the Restricted Isometry Property, or RIP), i.e. ‖Px‖_2 ≈ ‖x‖_2. Most notably, dense Gaussian random matrices possess this property with overwhelming probability if m (the number of samples/rows of P) is sufficiently large. Although tractable and demanding the fewest samples/equations (m) to guarantee perfect recovery, solving such a linear program can in general be very complex (e.g. O(n^3) for Basis Pursuit [3]). On the other extreme, combinatorial decoders [6][7][8][9][11] are much less complex than convex relaxation but require a higher number of compressive samples y to guarantee recovery. Greedy approaches, which also require incoherent sensing matrices with RIP, generally fall between combinatorial and convex-relaxation-based methods in terms of their complexity and sampling requirements [5][14].

Meanwhile, due to the group-testing nature of combinatorial solvers, the projection matrices applicable in such approaches have to be sparse, and thus the popular dense sensing matrices of greedy/convex relaxation frameworks cannot be utilized in the sensing stage of a combinatorial CS framework. In fact, from the encoder (sensing) perspective, a good projection or sensing matrix P in a combinatorial approach should correspond to the sparse binary adjacency matrix of an expander graph [8]. The authors of [8] showed that such sparse binary matrices approximately preserve the ℓ1 norm of vectors under projection (‖Px‖_1 ≈ ‖x‖_1) and hence named this property RIP-1 [8]. However, it has been shown in [10] that such binary matrices with RIP-1 do not possess the original RIP property. Although the authors of [8] proved that convex relaxation CS solvers can work with sparse binary projection matrices in the sensing stage, the ℓ1-norm preservation of sparse binary projections leads to the weaker ℓ1 < C·ℓ1 guarantee of recovery (to be explained in the next section) when compared to the stronger ℓ2-based guarantees of greedy/convex relaxation approaches [16]. In this paper, we are interested in finding a sensing operator P which can be utilized in all three types of CS approaches (combinatorial, convex relaxation, and greedy) and which, at the same time, can provide recovery guarantees in the ℓ2-norm sense.

We show that by some modification of the adjacency matrices of expander graphs, one can construct a class of sensing matrices which exhibits a weak form of RIP. Let us clarify what we mean by weak here. As stated before, the formal definition of RIP demands that ‖Px‖_2 be close to ‖x‖_2 (with high probability) for all possible k-sparse signals. Although RIP is known to be a sufficient condition to guarantee the success of geometrical approaches to CS, it has been reported in numerous papers (e.g. [17][18][19]) that the empirical results of linear programming solvers suggest that there might be less constraining properties than RIP governing the success of linear programming in the CS context. This observation led to the investigation of alternative sufficient conditions, which in turn led to the introduction of different types of RIP, for instance statistical RIP [18] and weak RIP [20], to name a few. This paper is along the lines of those efforts. More specifically, we focus on the probability of recovering any arbitrary but fixed k-sparse signal, and not on the probability of failure of the decoder over all possible k-sparse signals; it should be clear that the first condition is weaker than the latter. Such a notion of fixing the signal support has been introduced and used before, for instance in [20]. Using basic arguments, we show that under this measure (the probability of failure for a fixed arbitrary signal), a simpler and weaker version of RIP, which we refer to as w-RIP (and which is very similar to the weak RIP of [20]), is sufficient to recover any arbitrary fixed signal under convex relaxation methods, and under at least one optimal greedy algorithm, namely CoSaMP [5], with high probability. The proposed w-RIP potentially broadens the admissibility of many sensing matrices that do not possess the original RIP to be considered good sensing mechanisms, at least in the sense of any arbitrary but fixed k-sparse signal. We show that if a matrix P ∈ R^{m×n} is constructed by replacing the non-zero entries of the sparse binary adjacency matrix of an expander graph with Gaussian random vectors, then P benefits from w-RIP and hence can be utilized in geometrical and greedy frameworks. Also, we show that there exists at least one combinatorial algorithm (based on Sequential Sparse Matching Pursuit [16]) for the proposed P with an ℓ2 < C·ℓ2 guarantee of reconstruction. To the best of our knowledge, no other class of sensing projection matrices has been introduced in prior works that can provide an ℓ2 < C·ℓ2 guarantee of reconstruction under all three approaches to CS.

This paper is organized as follows. In Section II, the proposed SERP and our definition of w-RIP are introduced. We prove that if one targets recovering an arbitrary but fixed signal, then w-RIP is a sufficient condition of success for at least one greedy and all geometrical CS frameworks. In the same section, we prove that SERP matrices adhere to such w-RIP and thus can be safely utilized in the aforementioned CS frameworks. In Section III, we show that for the proposed class of SERP matrices, there exists a fast combinatorial algorithm that can robustly recover a signal from its noisy samples. Simulation results are presented in Section IV, and Section V concludes the paper.

A. Notations and definitions

For a natural number q ∈ N, we define [q] ≜ {1, 2, …, q}. For a matrix P ∈ R^{m×n} and a row index i ∈ [m], we define ω_i = {j : P_{i,j} ≠ 0}. Hence, ω_i is the set of column indices where the i-th row of P is non-zero. We generalize this notation to any set of rows I ⊆ [m] as ω_I = ∪_{i∈I} ω_i. The set of row indices where the j-th column of P is non-zero is Ω_j, i.e. Ω_j = {i : P_{i,j} ≠ 0}. Similarly, one can generalize this notation to any set of columns J ⊆ [n] by Ω_J = ∪_{j∈J} Ω_j. A zero vector of length l is denoted by 0_l. The support of a vector x is denoted by Support{x}, that is, Support{x} = {i : x_i ≠ 0}. A Gaussian distribution with mean μ and variance σ² is represented by N(μ, σ²). By x_k = H_k[x], we mean that x_k is a vector of the same length as x, formed by keeping the k largest (in absolute value) coefficients of x and setting the rest of the coefficients to zero. Let x be any vector in R^n. We say that a pair of a projection matrix P and a decoder Δ(y, P) for problem (1) guarantees ℓ2 < C·ℓ2 recovery if ‖Δ(P, y) − x‖_2 < C‖H_k[x] − x‖_2 for some constant C = O(1). For a fixed decoder Δ and projection matrix P, the relative error of reconstruction from the samples y = Px is ‖Δ(y, P) − x‖_2 / ‖x‖_2.

In combinatorial approaches, it is beneficial to model the projection matrix using a bipartite graph G = (U, V, E), where U and V are the sets of left and right nodes/vertices and E is the set of edges connecting vertices in U to V. The j-th left node in G corresponds to the j-th signal coefficient (x_j) while the i-th right node corresponds to the i-th sample (y_i). An edge (i, j) exists in the bipartite graph G if and only if P_{i,j} ≠ 0 (i.e. y_i spans x_j). For a fixed set of vertices S, Γ(S) denotes the set of neighbors of S. A graph is left/right d-regular if and only if the degree of each left/right node is exactly d. We say that G is a (K, d, q) expander if G is left d-regular and for any subset S ⊆ U with |S| ≤ K we have |Γ(S)| ≥ (1 − q)d|S|. Note that q ∈ [0,1].

As assumed by some prior related papers (e.g. [5]), we also follow the assumption that the signal of interest is exactly sparse (i.e. most of the signal entries are zero). In the case of compressible signals, where most of the coefficients have very small magnitudes but are not exactly zero, one can resort to the technique used in [5]. More specifically, assume x is a compressible signal which is possibly non-zero in all indices and let x_k = H_k[x] be a vector that has the k largest coefficients of x. Then the system of equations y = Px + μ is equivalent to the system y = Px_k + μ̃, where μ̃ = P(x − x_k) + μ can be thought of as a new noise that encodes both the sampling noise μ and the tail of the signal (x − H_k[x]). The idea is that, if x decays rapidly, then the energy of the tail of the signal is much smaller than the energy of the signal itself: ‖x − H_k[x]‖ ≪ ‖x‖. Hence, one can target finding the k-sparse signal x_k (which is the best k-sparse approximation to the original signal x) instead of the dense original (compressible) signal x.
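To make the notation above concrete, the short Python sketch below implements the hard-thresholding operator H_k[·] and the relative-error measure used throughout the simulations. The function names are ours (they do not appear in the paper); this is only an illustration of the definitions, not part of the proposed framework.

import numpy as np

def hard_threshold(x, k):
    # H_k[x]: keep the k largest-magnitude coefficients of x, zero out the rest
    x_k = np.zeros_like(x)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
        x_k[idx] = x[idx]
    return x_k

def relative_error(x_hat, x):
    # relative error of reconstruction, ||x_hat - x||_2 / ||x||_2
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)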

II. THE PROPOSED PROJECTION MATRIX

Consider the matrix P = G(m, n, r, d, D), where the function G is defined in Algorithm 1, D is a probability distribution and m, n, r, d are natural numbers with the condition that m is divisible by r. In words, Algorithm 1 generates the matrix P by the following operations: (i) for each column i ∈ [n], the algorithm selects uniformly at random d indices among [m/r] = {1, 2, …, m/r}; let us denote those indices by h. (ii) Then, for each of the d entries of h, for instance the l-th entry h_l, the algorithm generates another set of r indices {(h_l − 1)r + 1, (h_l − 1)r + 2, …, h_l·r}. The collection of these rd indices forms the row indices where the i-th column will be non-zero (i.e. Ω_i). (iii) Finally, each non-zero entry of the i-th column of P is a random variable with distribution D. The output of Algorithm 1 can also be viewed as a matrix (tensor) in which each column is non-zero in exactly d random indices and each non-zero entry is a random vector (t_1, t_2, …, t_r) ∈ R^r with t_i ~ D for i ∈ [r]. Note that for all i and j with ⌈i/r⌉ = ⌈j/r⌉ we have ω_i = ω_j, where, as explained above, ω_i is the set of column indices where the i-th row of P is non-zero. It is straightforward to verify that, for sufficiently large n, if (with high probability) the output of G(m/r, n, 1, d, D) corresponds to a high-quality expander graph, then G(m, n, r, d, D) also (with high probability) corresponds to a high-quality expander. It should also be clear that permuting columns and rows of the generated projection matrix has no effect on our arguments nor on the quality of the matrix.

A wide range of popular projection matrices (in the CS context) can be generated by Algorithm 1. For instance, if m = O(k log(n/k)) and D = N(0, 1/m), then G(m, n, 1, m, D) outputs a random dense Gaussian projection matrix which has RIP with high probability [12][13]. On the other hand, for d = Θ(log n), r = 1 and a distribution D with a Dirac delta at one (i.e. t ~ D is the constant one), the output of Algorithm 1 corresponds (with high probability [7][16]) to a bipartite expander graph and hence can be utilized in a combinatorial solver for CS.

Algorithm 1. The function G(m, n, r, d, D)
  Inputs: m, n, r, d and D
  Output: P ∈ R^{m×n}
  For i = 1 to n do
    P_{:,i} = 0_m
    Let h be a random subset of [m/r] of size d (|h| = d)
    Ω_i = ∪_{l∈[d]} {(h_l − 1)r + 1, (h_l − 1)r + 2, …, h_l·r}
    ∀ j ∈ Ω_i: P_{j,i} ~ D
  End

In this paper, we are interested in the output of Algorithm 1 when r = O(1) is a small natural number, d = Θ(log n), m = O(k log(n/k)) and D = N(0, 1/(rd)). We call an output instance of G(m, n, r, d, D) with those parameter values a Sparse Expander-like Real-valued Projection, or SERP. With some realistic approximations, we show that this class of matrices adheres to a property which we refer to as w-RIP (to be defined below) for an arbitrary signal x with a fixed support set T = Support{x}.
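For readers who want to experiment with the construction, the following Python sketch mirrors the steps of Algorithm 1 for the Gaussian case D = N(0, 1/(rd)), under the stated assumption that m is divisible by r. It uses zero-based indexing and the function name serp_matrix is ours, not from the paper; a practical implementation would use a sparse matrix format rather than a dense array.

import numpy as np

def serp_matrix(m, n, r, d, rng=None):
    # Sketch of Algorithm 1, G(m, n, r, d, D) with D = N(0, 1/(r*d)).
    # Each column is non-zero in r*d rows: d blocks of r consecutive rows,
    # with the d blocks drawn uniformly at random from the m/r available blocks.
    assert m % r == 0, "m must be divisible by r"
    rng = np.random.default_rng(rng)
    P = np.zeros((m, n))
    for j in range(n):
        blocks = rng.choice(m // r, size=d, replace=False)    # the set h (0-based)
        rows = (blocks[:, None] * r + np.arange(r)).ravel()   # Omega_j, the r*d row indices
        P[rows, j] = rng.normal(0.0, np.sqrt(1.0 / (r * d)), size=r * d)
    return P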

Then, we show that w-RIP is in fact sufficient to guarantee the recovery of that fixed signal under convex relaxation and CoSaMP (an optimal greedy CS decoder), and there is no need for the stronger RIP condition. Also, using the same property, we prove that modifying only one line of the SSMP decoder [16] leads to a combinatorial algorithm with an ℓ2 < C·ℓ2 guarantee of recovery for that particular signal under SERP sensing matrices. To the best of our knowledge, this is the first class of matrices which can provide ℓ2 < C·ℓ2 recovery under all three approaches to the problem of compressed sensing, at least in the setting of any arbitrary but fixed signal. Throughout this paper, we assume that the parameters m, d and r are chosen properly such that, if the non-zero entries of P = G(m, n, r, d, D) are replaced by one, then P corresponds to a ((c_0 + 1)k, d, q) expander for small constants c_0 ≥ 1 and 0 < q < 1.

A. w-RIP

Since the Restricted Isometry Property (RIP) is an essential tool in the analysis of different algorithms in CS, we briefly review that concept here. A matrix P ∈ R^{m×n} has RIP of order k ∈ N [2] if for every k-sparse signal x ∈ R^n (‖x‖_0 = k) there exists a Restricted Isometry Constant (RIC) δ_k such that:

(1 − δ_k)‖x‖_2^2 ≤ ‖Px‖_2^2 ≤ (1 + δ_k)‖x‖_2^2      (3)

No deterministic construction is available so far for a matrix achieving RIP of order k with the minimal number of rows (samples) m = O(k log(n/k)). However, it has been proven that, with high probability, some random matrices satisfy RIP with an optimal number of rows. For instance, it can be shown [12] that Gaussian random matrices have that property with a probability of at least 1 − c·n^{−γ} for some positive constants c and γ. To establish a clear connection between RIP and our proposed w-RIP, let us define an auxiliary notion which we refer to as probabilistic RIP and abbreviate by p-RIP in this paper.

Definition 1. A matrix P ∈ R^{m×n} has p-RIP of order (k, p) ∈ N × [0,1] if P has RIP of order k with a probability of at least p.

Hence Gaussian random matrices with m = O(k log(n/k)) rows have p-RIP of order (k, p) with p = 1 − c·n^{−γ} for some constants γ and c. Now we present our definition of w-RIP.

Definition 2. A matrix P ∈ R^{m×n} has w-RIP of order (k, p) ∈ N × [0,1] if for any arbitrary but fixed k-sparse signal x ∈ R^n (‖x‖_0 = k) there exists a w-RIP constant 0 < δ_{k,p} < 1 such that

(1 − δ_{k,p})‖x‖_2^2 ≤ ‖Px‖_2^2 ≤ (1 + δ_{k,p})‖x‖_2^2

with a probability of at least p.

In some parts of this paper, when p and k are known from the context, we drop the subscripts and simply denote δ_{k,p} by δ. Note that this definition is slightly different from the definition of weak RIP in [20] and the one in [21].

The connection between RIP and p-RIP is obvious from their definitions. The connection between w-RIP and p-RIP can be justified as follows. Assume the set T = {T_1, T_2, …} contains all subsets of [n] = {1, 2, …, n} with cardinality k, i.e. for i ∈ {1, 2, …, Card{T}}: T_i ⊆ [n], Card{T_i} = k. Furthermore, suppose that P has w-RIP of order (k, p). This means that for each T_i, all singular values of P_{:,T_i} are in the range [√(1 − δ), √(1 + δ)] with a probability of p. If the columns of P are independently distributed (which is usually the case), then P has RIP of order k with a probability of p^{Card{T}}, i.e. p-RIP of order (k, p^{Card{T}}).

The early papers on CS (e.g. [1][2]) focused on the failure probability of the decoder over all possible k-sparse signals. For instance, to target a probability of success q in the case of a convex relaxation decoder, one needs p-RIP of order (2k, q). Consequently, to show that a matrix has this property, that matrix must have w-RIP of order (2k, p) where p^{Card{T}} ≥ q and T contains all subsets of [n] with cardinality 2k. Note that the cardinality of the superset T is Card{T} = C(n, 2k) (n choose 2k). Even for small values of n, Card{T} is enormously large, demanding that p be extremely close to one in order to satisfy p^{Card{T}} ≥ q. This theoretically deprives many sensing matrices of RIP or p-RIP at the optimal number of samples m = O(k log(n/k)), even though they exhibit good empirical results in practice. This has been one of the main motives of recent efforts (e.g. [18][20][21]), which focus on the probability of failure of a CS framework for any arbitrary but fixed signal instead of the probability of failure over all possible signals.

One expects that this relaxation should have some implications for RIP as well; and fortunately, that is the case. By looking more closely at the proofs in [22] and of CoSaMP [5], one can easily notice that under both these frameworks, to recover an arbitrary signal x with a fixed support from its compressive samples, there is no need for the full RIP or p-RIP. Instead, to prove that an arbitrary signal with a fixed support set can be recovered from its compressive samples with a target probability of q under those decoders, it suffices to have w-RIP of order (2k, p) with p^z ≥ q, where the constant z is solver-dependent and far smaller than C(n, 2k). For instance, as we show in the proof of Lemma 1, the constant z in the case of a linear-programming-based decoder is only O(n/k). Also, later in this paper, we show that the proposed class of SERP matrices can easily achieve w-RIP of the proper order under some realistic approximations.
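Since w-RIP is a statement about a single fixed signal under the randomness of P, it can also be probed empirically: draw many independent SERP matrices for one fixed-support signal and check how tightly ‖Px‖_2^2 concentrates around ‖x‖_2^2. The sketch below is our own sanity check, not part of the formal argument, and it reuses the hypothetical serp_matrix helper from the earlier sketch.

import numpy as np
# assumes serp_matrix() from the earlier sketch

def empirical_wrip(m, n, r, d, k, trials=200, seed=0):
    # Monte Carlo estimate of the concentration of ||Px||_2^2 around ||x||_2^2
    # for ONE fixed k-sparse unit-norm signal, over independent draws of P.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)   # arbitrary but fixed support
    x[support] = rng.uniform(-0.5, 0.5, size=k)
    x /= np.linalg.norm(x)                           # unit-norm signal
    ratios = np.empty(trials)
    for t in range(trials):
        P = serp_matrix(m, n, r, d, rng=rng)
        ratios[t] = np.linalg.norm(P @ x) ** 2       # equals ||Px||_2^2 / ||x||_2^2
    return ratios.mean(), ratios.std()               # should be close to 1, with small spread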

B. Implications of w-RIP

The following two lemmas prove that, if a matrix has w-RIP of the proper orders, then it is a good sensing mechanism under CoSaMP (a greedy algorithm) and also under any convex relaxation decoder, when one targets the recovery of an arbitrary signal with a fixed support.

Lemma 1) Consider any arbitrary signal x ∈ R^n with a fixed support set T of cardinality k. Assume that P ∈ R^{m×n} has w-RIP of order (2k, p) with constant δ_{2k,p} < √2 − 1 for p = 1 − n^{−β}/(2n), where β ≥ 1. If Δ(P, y) belongs to the class of convex relaxation methods, then Δ provides an ℓ2 < C·ℓ2 guarantee of recovery of x from y = Px with a probability of at least p^{2n} ≥ 1 − n^{−β}.

Proof of Lemma 1) Let x* = Δ(P, y), where Δ is any convex-relaxation-based decoder; in other words, x* is the solution of the optimization problem (2). As in [22], define h = x* − x and partition [n] into sets of cardinality k, denoted T_0, T_1, T_2, …, such that T_0 corresponds to the k largest entries of x, T_1 corresponds to the k largest entries of h outside T_0, T_2 to the next k largest entries of h outside T_0, and so on. Fixing the support of x, the arguments of [22] require the RIP inequalities to hold on the set T_0 ∪ T_1 and on all combinations T_0 ∪ T_j and T_1 ∪ T_j, where j ranges from 2 to the number of partitions. Since k can be as low as one, there are at most 2n such combinations. Now assume that the matrix P has w-RIP of order (2k, p). Then P satisfies the properties required in [22] with a probability of at least p^{2n}. Recalling that (1 − ε)^N ≥ 1 − Nε, it can be concluded that if one targets the recovery of an arbitrary but fixed k-sparse signal with a probability of q = 1 − n^{−β} for some constant β ≥ 1, then it suffices to have p ≥ 1 − n^{−β}/(2n).

Lemma 2) Consider any arbitrary signal x ∈ R^n with a fixed support set T of cardinality k. Assume that P ∈ R^{m×n} has w-RIP of order (4k, p) with constant δ_{4k,p} < 0.1 for p = 1 − n^{−β}/(14k), where β ≥ 1. If Δ(P, y) is the CoSaMP algorithm, then Δ provides an ℓ2 < C·ℓ2 guarantee of recovery of x from y = Px with a probability of at least p^{14k} ≥ 1 − n^{−β}.

Proof of Lemma 2) For a k-sparse signal x, CoSaMP requires at most 6(k + 1) ≤ 7k iterations (for k ≥ 6). Assume that at the start of the i-th iteration the algorithm estimates x by x̂^{(i)}. Let R^{(i)} = x − x̂^{(i)} be the residual of the estimation at that iteration and denote the discrepancy of the estimation by d^{(i)} = y − Px̂^{(i)}. Finally, let Ω^{(i)} be the set of indices corresponding to the 2k largest values of |Pᵀ d^{(i)}|.

Then, at the i-th iteration of CoSaMP, one needs the RIP inequalities only on the column index sets T_1^{(i)} = Ω^{(i)} \ Support{R^{(i)}} and T_2^{(i)} = Ω^{(i)} ∪ Support{x} ∪ Support{x̂^{(i)}}. Note that for all i, Card{T_1^{(i)}} ≤ 2k and Card{T_2^{(i)}} ≤ 4k. Based on the proof in [5], CoSaMP recovers a fixed sparse signal x when the restricted isometry constants on the sets T_1^{(1)}, T_1^{(2)}, …, T_1^{(7k)} and T_2^{(1)}, T_2^{(2)}, …, T_2^{(7k)} are all smaller than 0.1. Therefore, if one wants to recover an arbitrary but fixed signal x with a target probability of q, the matrix P has to have w-RIP of order (4k, p) with constant δ_{4k,p} ≤ 0.1 for p^{14k} ≥ q. For instance, if p ≥ 1 − n^{−β}/(14k), then p^{14k} ≥ 1 − n^{−β}.

C. w-RIP for SERP matrices

The following two lemmas prove that SERP matrices have w-RIP and hence can be utilized in CoSaMP and convex relaxation methods in the "arbitrary but fixed signal" scenario.

Lemma 3) Let P ∈ R^{m×n} be an output instance of G(m, n, r, d, D). If d = Θ(log n), m = O(k log(n/k)), D = N(0, 1/(rd)) and r = O(1) is a constant, then for any arbitrary vector x ∈ R^n with a fixed support, there exist constants δ and α = O(r) such that

Pr(‖Px‖_2^2 ≥ (1 + δ)‖x‖_2^2) ≤ 1/n^α      (4)

Proof of Lemma 3) Without loss of generality, assume that x has unit norm. To bound ‖y‖_2^2 = ‖Px‖_2^2 (similarly to [12]), one can resort to the moment generating function of the random variable ‖y‖_2^2:

Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) = Pr(exp(t‖y‖_2^2) ≥ exp(t(1 + δ)‖x‖_2^2)) ≤ E[exp(t‖y‖_2^2)] / exp(t(1 + δ))      (5)

for a positive value of t, where the last inequality is due to Markov's inequality. Clearly E[exp(t‖y‖_2^2)] is the moment generating function of the random variable ‖y‖_2^2. Note that y_i ~ ‖x_{ω_i}‖_2 · N(0,1)/√(rd) (see [13]), where N(0,1) is the standard normal distribution. Consequently y_i^2 ~ ‖x_{ω_i}‖_2^2 · χ²(1)/(rd), where χ²(1) is a chi-squared distribution with one degree of freedom. Define z_i = ‖x_{ω_i}‖_2^2/(rd) for i ∈ [m]; then the random variable ‖y‖_2^2 ~ Σ_i z_i χ²(1) is a weighted sum of chi-squares. It is known, and easy to verify, that manipulating the moment generating function of a weighted sum of chi-squares does not lead to neat or usable formulas. Hence, a number of estimations have been suggested to approximate this random variable with another one that can be manipulated more easily (e.g. [23][24]). Here, we use the approximation in [23].

A commonly practiced approximation of a weighted chi-squared random variable is a Gamma distribution with the same mean and variance as the original random variable. In [23], the authors proved that this approximation leads to acceptable results and, more importantly, that the Gamma approximation tends to over-estimate the true distribution of ‖y‖_2^2 when the variance of {z_i} is not small¹. Since the values of {z_i} in SERP are linearly proportional to the norms of random subsets of the signal, and the magnitudes of many signals of interest (e.g. DCT coefficients of images) approximately follow a power law, it is very unlikely, at least for these classes of signals, that {z_i} has a small variance. Consequently, this approximation seems reasonable for those practical signals of interest. Define

a = Σ_i z_i / (2 Σ_i z_i^2),   λ = (Σ_i z_i)^2 / (2 Σ_i z_i^2)

Then, based on the approximation of [23], ‖y‖_2^2 can be approximated by a Gamma distribution with shape parameter λ and scale parameter 1/a. Note that Σ_i z_i = Σ_j c_j x_j^2/(rd), where c_j is the number of times that j ∈ [n] occurs in the sets {ω_i}. Recall that all columns of P are non-zero in exactly rd row indices; hence, for all j ∈ [n], c_j = rd. Since x is assumed to be unit-norm, we get Σ_i z_i = 1 and thus a = λ.

Now let us find upper and lower bounds for Σ_i z_i^2. It can be proved by contradiction that, given the properties of {z_i}, the vector z = [z_1 z_2 … z_m] has to be maximally sparse when the objective function Σ_i z_i^2 attains its maximum. More specifically, assume that n is not extremely small, so that rd > 1. Furthermore, assume that a given sequence {z_i} leads to the maximum value of Σ_i z_i^2, yet the vector z = [z_1 z_2 … z_m] is not maximally sparse. Then one can find two indices a', b' ∈ Support{z} such that z_{a'} ≠ 0 and z_{b'} ≠ 0 (since rd > 1 and ‖z‖_0 ≥ rd) and form a new sequence w = [w_1 w_2 … w_m] where

w_c = z_c for c ≠ a', b';   w_{a'} = 0;   w_{b'} = z_{a'} + z_{b'}.

It is easy to verify that Σ_c z_c = Σ_c w_c and ‖w‖_0 = ‖z‖_0 − 1, but Σ_c z_c^2 < Σ_c w_c^2, which contradicts the assumption that the sequence {z_i} attains the maximum of Σ_i z_i^2. Performing this re-distribution of signal energies successively, one concludes that the vector z = [z_1 z_2 … z_m] has to be maximally sparse when Σ_i z_i^2 attains its upper bound. However, the sparsity of the vector z can never be less than rd, due to two facts: (a) the matrix P corresponds to an expander and (b) each column of P is non-zero at exactly rd indices. This maximally sparse case happens when the signal is non-zero in exactly one location. Since we have assumed that the signal is unit-norm, this implies that the signal is one at one index and zero at the rest of the indices.

¹ Here we are deriving the upper bound in equation (4); hence the probability derived from this approximation is higher than the actual one.

It is straightforward to verify that in this case Σ_i z_i^2 = 1/(rd), which concludes the upper bound for Σ_i z_i^2. It is also known, and easy to verify, that when the constraint Σ_i z_i = 1 is active, the minimum value of Σ_i z_i^2 occurs when all z_i are equal to 1/m; in that case Σ_i z_i^2 equals 1/m as well. Putting the upper and lower bounds together, we have rd/2 ≤ a = λ ≤ m/2.

Based on the above arguments, the following holds:

Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) ≤ E[exp(t‖y‖_2^2)] / exp(t(1 + δ)) ≤ (1 − t/a)^{−λ} e^{−t(1+δ)} = f(t)      (6)

for 0 < t < a. By setting df(t)/dt = 0, we get t = aδ/(1 + δ) < a, which is in the valid range of the corresponding moment generating function. Substituting this value back into (6) leads to

Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) ≤ ((1 + δ) e^{−δ})^a      (7)

Let τ = δ^2 / (2(1 + δ + δ^2/3)). Then, by writing the Taylor series expansion of (1 + δ)/exp(δ) around δ = 0 up to δ^3 and recalling that 1 − x ≤ exp(−x), it can be concluded that

Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) ≤ exp(−aτ)      (8)

Now recall that a ≥ rd/2 and d = c log n for a constant c. Thus

Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) ≤ 1/n^{rcτ/2}      (9)

In summary, there exists a constant α = rcτ/2 such that Pr(‖y‖_2^2 ≥ (1 + δ)‖x‖_2^2) < 1/n^α. Clearly r can be tuned such that α ≥ 2. Now we proceed to find the lower bound on the deviation of the norm of the compressive samples from the norm of the original k-sparse signal.

Lemma 4) Let P ∈ R^{m×n} be an output instance of G(m, n, r, d, D). If d = Θ(log n), m = O(k log(n/k)), D = N(0, 1/(rd)) and r = O(1) is a constant, then for any arbitrary vector x ∈ R^n with a fixed support, there exist constants δ and α = O(r) such that

Pr(‖Px‖_2^2 ≤ (1 − δ)‖x‖_2^2) ≤ 1/n^α      (10)

Proof of Lemma 4) Similar to the proof of Lemma 3, one has:

Pr(‖y‖_2^2 ≤ (1 − δ)‖x‖_2^2) = Pr(exp(−t‖y‖_2^2) ≥ exp(−t(1 − δ)‖x‖_2^2)) ≤ E[exp(−t‖y‖_2^2)] / exp(−t(1 − δ))      (11)

for a positive value of t. We again use the approximation of [23] to estimate E[exp(−t‖y‖_2^2)]. More specifically, using the arguments provided in the proof of Lemma 3, it can be concluded that

E[exp(−t‖y‖_2^2)] ≤ (1 + t/a)^{−λ}      (12)

where rd/2 ≤ a = λ ≤ m/2. To find the tightest bound in (11), one should set the derivative of the right-hand side with respect to t to zero and solve for t; that value is t = aδ/(1 − δ). It is straightforward to verify that if 0 < δ < 1, then this computed value of t is in the valid range of the corresponding moment generating function. Plugging this value back into (11) and (12) yields

Pr(‖y‖_2^2 ≤ (1 − δ)‖x‖_2^2) ≤ ((1 − δ) e^{δ})^a      (13)

Note that the Taylor series expansion of (1 − δ)e^{δ} around δ = 0 up to degree 4 is 1 − δ^2/2 − δ^3/3 + O(δ^4). Hence

(1 − δ) e^{δ} ≤ e^{−δ^2/2}      (14)

Considering that rd/2 ≤ a ≤ m/2 and d = c log n for a constant c, the following holds for α = rcδ^2/4 (again, r can be tuned such that α ≥ 2):

Pr(‖y‖_2^2 ≤ (1 − δ)‖x‖_2^2) ≤ 1/n^α      (15)

Having upper and lower bounds for the deviation of the norm of the compressive samples from the norm of the original signal, the following theorem is immediate.

Theorem 1. Let P ∈ R^{m×n} be an output instance of G(m, n, r, d, D). If d = Θ(log n), m = O(k log(n/k)), D = N(0, 1/(rd)) and r = O(1), then P has w-RIP of order (k, p) with constant δ for p ≥ 1 − 2/n^α, where α = O(r).

Proof of Theorem 1) Lemmas 3 and 4 and Definition 2 directly imply that P has w-RIP of order (k, p) for p = 1 − 2/n^α, where α = min(rcδ^2/4, rcδ^2/(4(1 + δ + δ^2/3))) = rcδ^2/(4(1 + δ + δ^2/3)) = O(r).

Theorem 1 proves that the proposed SERP can be utilized in at least one greedy solver and in all convex relaxation methods, and provides an "arbitrary but fixed signal" guarantee of recovery in the ℓ2 < C·ℓ2 sense, even in the case of noisy samples and compressible signals. In the next section, we show that the proposed projection matrix can be utilized in combinatorial solvers as well.
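As an illustration of the consequence of Theorem 1 (a SERP matrix used inside a geometrical decoder), the following sketch solves the ℓ1 problem (2) as a linear program with a SERP sensing matrix. This is a small-scale toy only: the dense LP formulation does not scale to the n = 10,000 experiments of Section IV, it relies on SciPy's generic linprog rather than a dedicated Basis Pursuit solver, and serp_matrix is the hypothetical helper from the earlier sketch.

import numpy as np
from scipy.optimize import linprog
# assumes serp_matrix() from the earlier sketch

def basis_pursuit(P, y):
    # min ||x||_1  s.t.  Px = y, posed as an LP over the stacked variable [x; t]
    m, n = P.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_ub = np.block([[ np.eye(n), -np.eye(n)],      #  x_i - t_i <= 0
                     [-np.eye(n), -np.eye(n)]])     # -x_i - t_i <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([P, np.zeros((m, n))])         # equality constraint Px = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

A minimal use would be, for a small k-sparse x: P = serp_matrix(m, n, r, d); x_hat = basis_pursuit(P, P @ x).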

III. A COMBINATORIAL SOLVER FOR SERP

As before, assume P ∈ R^{m×n} is generated using Algorithm 1. In this section, we show how an already available combinatorial solver for binary projection matrices can be easily modified to support the proposed sparse real-valued projections. To that end, we consider the Sequential Sparse Matching Pursuit (SSMP) decoder, introduced in [16], due to its low computational cost and its optimal sample requirements (m = O(k log(n/k))). We prove that changing only one line of the SSMP algorithm leads to a Modified version of Sequential Sparse Matching Pursuit (abbreviated here as MSSMP) that has an ℓ2 < C·ℓ2 guarantee of recovery. It is important to note that the ℓ2 < C·ℓ2 guarantee under the proposed MSSMP decoder is stronger than the ℓ1 < C·ℓ1 guarantee of the original SSMP decoder.

Algorithm 2 outlines the proposed MSSMP decoder for the class of sparse real-valued projection (SERP) matrices. The only difference between MSSMP and SSMP is in line 5 of Algorithm 2, where in [16], for a fixed index i, ‖P(x̂^{(j)} + z e_i) − y‖_1 is minimized. Note that for a fixed index i, the minimizer z of ‖P(x̂^{(j)} + z e_i) − y‖_2 is

z = P_{:,i}ᵀ (y − Px̂^{(j)}) / ‖P_{:,i}‖_2^2 = Σ_{l∈Ω_i} P_{l,i}(y_l − (Px̂^{(j)})_l) / Σ_{l∈Ω_i} P_{l,i}^2

while the minimizer of ‖P(x̂^{(j)} + z e_i) − y‖_1 in [16] is the median of the entries of y − Px̂^{(j)} restricted to Ω_i.

Algorithm 2. The Modified Sequential Sparse Matching Pursuit (MSSMP), a combinatorial algorithm for the class of sparse real-valued projections.
  Inputs: P, y, k
  Output: x̂ ∈ R^n
  1) x̂^{(0)} ← 0_n
  2) For j = 1 to T (T = O(log ‖x‖_2)) do
  3)   x' ← x̂^{(j−1)}
  4)   For step = 1 to S (S = (c_0 − 1)k) do
  5)     Find a coordinate i and an increment z that minimize ‖P(x' + z e_i) − y‖_2
  6)     x' ← x' + z e_i
  7)   End
  8)   x̂^{(j)} ← H_k[x']
  9) End
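A direct, dense-matrix transcription of Algorithm 2 in Python is given below. It is our own illustrative sketch: it does not exploit the sparsity of P the way an efficient MSSMP/SSMP implementation would, and hard_threshold is the hypothetical helper from the notation sketch. At every inner step it uses the closed-form coordinate update discussed above, z = P_{:,i}ᵀ(y − Px')/‖P_{:,i}‖_2^2, and picks the single coordinate giving the largest decrease of the ℓ2 discrepancy.

import numpy as np
# assumes hard_threshold() from the earlier sketch

def mssmp(P, y, k, outer_iters=10, c0=2):
    # Sketch of the MSSMP decoder (Algorithm 2), dense-matrix version.
    m, n = P.shape
    col_norm2 = (P ** 2).sum(axis=0)          # ||P[:, i]||_2^2 for every column
    x_hat = np.zeros(n)
    for _ in range(outer_iters):              # outer loop: T = O(log ||x||_2) in the paper
        x_cur = x_hat.copy()
        resid = y - P @ x_cur
        for _ in range((c0 - 1) * k):         # inner loop: S = (c0 - 1) k steps
            corr = P.T @ resid                # P[:, i]^T resid for every column i
            gains = corr ** 2 / col_norm2     # decrease of ||resid||_2^2 for each coordinate
            i = int(np.argmax(gains))         # best single-coordinate update
            z = corr[i] / col_norm2[i]
            x_cur[i] += z
            resid -= z * P[:, i]              # keep the residual y - P x_cur up to date
        x_hat = hard_threshold(x_cur, k)      # line 8: project back onto k-sparse vectors
    return x_hat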

To prove the ℓ2 < C·ℓ2 guarantee of recovery for the proposed pair of the SERP sensing operator and the MSSMP combinatorial solver, we essentially follow the proof in [16]. However, here the w-RIP property of the proposed projection matrix is utilized in the proof, which has different characteristics than its RIP-1 counterpart for the binary projections in [7][16]. To allow an easy comparison between the presented proof and the one in SSMP, we adapt the notation of [16] as much as possible. The algorithms SSMP, and hence MSSMP, are composed of two nested loops. The objective of the inner loop is to reduce the discrepancy of the estimation in the j-th iteration, ‖Px̂^{(j)} − y‖_l, where l = 1 in the case of SSMP and l = 2 for MSSMP. The outer loop simply enforces that the signal estimate remains sparse. The following theorem (which itself is built upon two lemmas) proves that the modified SSMP can handle the proposed real-valued projection matrix and guarantees ℓ2 < C·ℓ2 recovery for a fixed but arbitrary signal.

Theorem 2: Given the projection matrix P, noisy samples y = Px + μ and the sparsity level k = ‖x‖_0, the combinatorial Algorithm 2 returns an estimate x̂ such that ‖x̂ − x‖_2 = O(‖μ‖_2) with a probability of at least 1 − n^{−α} (where α is a constant), provided the w-RIP constant δ_{2k,p} of P is sufficiently small for p ≥ 1 − n^{−α}/τ, where τ = O(k log ‖x‖_2) is the total number of iterations (see the end of the proof).

Proof of Theorem 2: Let x̂^{(j)} denote the estimate of the true signal x at the beginning of the j-th iteration of the outer loop. Note that at the start of the outer loop, x̂^{(j)} is k-sparse. By Lemma 5 (stated below and proved in the appendix), if, for a fixed regularization parameter c_1, the discrepancy of the estimation is still high (i.e. ‖Px̂^{(j)} − y‖_2 ≥ c_1‖μ‖_2), then there exists an index i ∈ [n] and a step size z such that

‖P(x̂^{(j)} + z e_i) − y‖_2^2 ≤ (1 − c_3/k)‖Px̂^{(j)} − y‖_2^2

where c_3 is a function of the restricted isometry constant of P and of the expander quality of the corresponding bipartite graph. Thus, after the O(k) iterations of the inner loop, the intermediate estimate x' satisfies ‖Px' − y‖_2 ≤ β‖Px̂^{(j)} − y‖_2, where β = e^{−c_3(c_0−1)/2}. Noting that Px' − y = P(x' − x) − μ, the triangle inequality can be applied to get

‖P(x' − x)‖_2 ≤ β‖P(x̂^{(j)} − x)‖_2 + (1 + β)‖μ‖_2      (16)

To relate the improvement in the discrepancy to the improvement of the signal estimate, one can apply w-RIP on the set of columns corresponding to Support{x' − x} to conclude

‖x' − x‖_2 ≤ β √((1 + δ)/(1 − δ)) ‖x̂^{(j)} − x‖_2 + ((1 + β)/√(1 − δ)) ‖μ‖_2      (17)

Define the constants α_0 = 2β√((1 + δ)/(1 − δ)) and c_4 = 2(1 + β)/√(1 − δ). Then, by Lemma 6 (stated and proved below), we have:

‖x̂^{(j+1)} − x‖_2 ≤ α_0 ‖x̂^{(j)} − x‖_2 + c_4 ‖μ‖_2      (18)

Now, if the expander quality of the corresponding bipartite graph and the number of samples are sufficiently high (leading to smaller values of β and δ respectively), then α_0 is less than one. In that case, running the outer loop for z iterations gives ‖x̂^{(z)} − x‖_2 ≤ α_0^z ‖x‖_2 + (c_4/(1 − α_0))‖μ‖_2. Thus, running the outer loop for only O(log ‖x‖_2) iterations leads to ‖x̂ − x‖_2 = O(‖μ‖_2). As a final remark, note that Algorithm 2 requires τ = O(k log ‖x‖_2) iterations, and in each iteration it relies on w-RIP inequalities of P for signals that are non-zero in at most 2k indices (see the proof of Lemma 5). Hence, to recover an arbitrary but fixed signal x from its compressive samples with a target probability of q, P must have w-RIP of order (2k, p) with p^τ ≥ q.

In the following, we state the two lemmas used in the proof of Theorem 2. The proof of Lemma 5 is presented in the appendix.

Lemma 5 (equivalent of Lemma 3 in [16]) Let y = Px + μ. If x is k-sparse, ‖Px̂ − y‖_2 ≥ c_1‖μ‖_2 and P has w-RIP of a proper order, then there exist an index i and a positive constant c_3 such that

‖P(x̂ − e_i x̂_i + e_i x_i) − y‖_2^2 ≤ (1 − c_3/k)‖Px̂ − y‖_2^2      (19)

Lemma 6 (equivalent of Lemma 6 in [16]) For any k-sparse vector x and any vector x' we have:

‖H_k[x'] − x‖ ≤ 2‖x' − x‖      (20)

Proof of Lemma 6: The proof of Lemma 6 in [16] is for the ℓ1 norm; however, it holds for any other valid norm as well. Nevertheless, for the sake of completeness, we present the proof here. Note that H_k[x'] is the closest k-sparse vector to x'; hence ‖H_k[x'] − x'‖ ≤ ‖x − x'‖. Consequently, by the triangle inequality we have:

‖H_k[x'] − x‖ ≤ ‖H_k[x'] − x'‖ + ‖x' − x‖ ≤ ‖x − x'‖ + ‖x' − x‖ = 2‖x' − x‖      (21)

We conclude this section with a brief discussion of the complexity of the MSSMP decoder. Computing Px̂ in the case of a sparse binary projection is achieved by adding a few entries of x̂ (since all non-zero entries of P are one).

However, for a sparse real-valued projection, computing Px̂ must be performed through a series of inner products. Clearly, this update/computation is heavier for a sparse real-valued projection matrix than for a sparse binary projection matrix; however, it is only a constant factor worse (the cost of a multiplication relative to the cost of an addition). On the other hand, computing Px̂ for a sparse real-valued projection matrix is much lighter than when P is a random dense matrix. Also, as stated before, the MSSMP and SSMP algorithms are exactly the same except for line 5 of Algorithm 2. In that line, SSMP computes the median of d numbers, while the same line in MSSMP requires the orthogonal projection of two vectors in R^{rd}. Although this operation is heavier than computing the median of d numbers, order-wise the two operations have exactly the same complexity (since r = O(1)). Consequently, the complexity orders of MSSMP and SSMP are the same.

IV. SIMULATION RESULTS

In this section, we present numerical results on the performance of the proposed sparse real-valued projection matrices (SERP) under combinatorial, greedy, and convex relaxation approaches. We consider two scenarios: (i) the compressive samples are noiseless, and (ii) the samples are contaminated by noise.

For the noiseless case, the performance of the pair of a traditional sparse binary projection matrix and the SSMP decoder is compared with that of the proposed pair of a sparse real-valued projection and the MSSMP decoder. More specifically, we fixed the signal length to n = 10,000 and considered signal sparsities k = ‖x‖_0 = 100, 200, 300, …, 800. For each sparsity level, we considered six levels of sampling, m = 3k, 4k, …, 8k. For each configuration of k and m, we generated a signal x ∈ R^n by selecting k of its indices uniformly at random and letting the signal values at those indices be uniform random variables in the range [−0.5, 0.5]. We then recorded the average relative reconstruction error (‖Δ(y, P) − x‖_2/‖x‖_2) over 50 independent trials. The results for the pair of a sparse binary projection matrix with d = 18 (recall that d is the number of non-zeros in each column of the projection matrix) and the SSMP decoder are presented in Figure 1-(a). For the same value of d, the results for MSSMP and sparse real-valued projection matrices with r = 1 and r = 2 are shown in Figure 1-(b) and Figure 1-(c), respectively. As shown in those plots, MSSMP and the sparse real-valued projections lead to (sometimes much) lower levels of distortion in the recovery when the number of samples is not sufficiently large (e.g. m = 4k and m = 5k for all values of k). However, both approaches (MSSMP and SSMP) introduce virtually no error in the recovery process when m > 6k. Note that under the MSSMP decoder, the results from sensing matrices with r = 1 are often less distorted than those from matrices with r = 2.
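Putting the earlier sketches together, one scaled-down noiseless trial in the spirit of this experiment (much smaller n and k than the n = 10,000 setup above, so the dense sketches run quickly) could look as follows. All helper names are from our earlier illustrative sketches, not from the paper.

import numpy as np
# assumes serp_matrix(), mssmp() and relative_error() from the earlier sketches

def run_trial(n=2000, k=50, over=5, d=18, r=1, seed=0):
    # One scaled-down noiseless trial: a k-sparse signal with uniform values
    # in [-0.5, 0.5] and m = over * k compressive samples.
    rng = np.random.default_rng(seed)
    m = over * k
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.uniform(-0.5, 0.5, size=k)
    P = serp_matrix(m, n, r, d, rng=rng)
    x_hat = mssmp(P, y=P @ x, k=k)
    return relative_error(x_hat, x)

print(run_trial())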

Figure 1. Relative error of reconstruction (‖Δ(y, P) − x‖_2/‖x‖_2) for 50 random signals x of length n = 10,000, as a function of the sparsity level k = ‖x‖_0 and the sampling ratio m/k: (a) when the projection matrix is sparse binary and the solver is SSMP, (b) when the projection matrix is a sparse real-valued sensing matrix (SERP) with r = 1 and the solver is MSSMP, and (c) when the projection matrix is a sparse real-valued sensing matrix (SERP) with r = 2 and the solver is MSSMP.

Now let us focus on the scenario where the samples are noisy, i.e. y = Px + μ with ‖μ‖_2 > 0. Fixing the signal length, sparsity and number of samples to n = 5000, k = 250 and m = 5k = 1250 respectively, we varied the relative error in sampling (‖μ‖_2/‖Px‖_2) from 0.1 to one in steps of size 0.1. For each relative error in sampling, we generated 100 random signals and projection matrices as in the noiseless scenario (i.e. uniformly at random selecting the indices of the signal to be non-zero and populating those indices uniformly in the range [−0.5, 0.5]). The averages of the relative errors in reconstruction as a function of the relative error in sampling were then recorded for all three types of CS decoders; they are presented in Figures 2-4. Let us briefly highlight the key results of these plots. As shown in Figure 2, the pair of sparse real-valued projections and the MSSMP decoder leads to higher-quality recovery than the pair of sparse binary projections and SSMP. Within the class of sparse real-valued projections, when the relative noise energy is small, projection matrices with r = 1 seem to be the better sensing operators. However, as the noise energy increases, less error is introduced in the recovery process if the non-zero entries of the sparse projection matrix are Gaussian random vectors in R^2 (i.e. r = 2). When BP [22] is employed as the decoder, the class of sparse real-valued projection matrices is virtually as good as dense Gaussian projection matrices (see Figure 3). Within the class of sparse real-valued projections and for a fixed value of d, BP consistently introduces less error in the approximation for matrices with r = 2. To evaluate the performance of the sparse real-valued projections in a greedy CS framework, we selected the CoSaMP algorithm [5] for the decoder side. Figure 4 shows that CoSaMP is more sensitive than BP to the deployed projection matrix when the projection matrix is from the class of sparse real-valued projections. Figures 3 and 4 suggest that, for a fixed value of d and under the non-combinatorial class of solvers, deploying sparse real-valued projection matrices with the parameter r slightly greater than one leads to less error in the recovery process.

Figure 2. Comparing the performance of sparse real-valued projection matrices under the MSSMP decoder with sparse binary projections under the SSMP decoder when the samples are noisy (average relative error in reconstruction versus relative error in sampling, for SERP with r = 1, SERP with r = 2, and binary sparse projections).

Figure 3. Comparing the performance of sparse real-valued (SERP, r = 1 and r = 2) and dense Gaussian projection matrices when BP is the decoder and the samples are noisy (average relative error in reconstruction versus relative error in sampling).

Figure 4. Comparing the performance of sparse real-valued (r = 1 and r = 2) and dense Gaussian projection matrices when CoSaMP is the decoder and the samples are noisy (average relative error in reconstruction versus relative error in sampling).
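For completeness, a scaled-down version of the noisy experiment behind Figures 2-4 can be sketched the same way: scale a Gaussian noise vector to a prescribed relative sampling error and feed the noisy samples to the MSSMP sketch. Again, the helpers are the hypothetical ones from the earlier sketches and the dimensions are much smaller than in the reported experiments.

import numpy as np
# assumes serp_matrix(), mssmp() and relative_error() from the earlier sketches

def noisy_trial(n=1000, k=50, d=18, r=2, rel_noise=0.3, seed=0):
    # One scaled-down noisy trial: y = Px + mu with ||mu||_2/||Px||_2 = rel_noise.
    rng = np.random.default_rng(seed)
    m = 5 * k
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.uniform(-0.5, 0.5, size=k)
    P = serp_matrix(m, n, r, d, rng=rng)
    clean = P @ x
    mu = rng.normal(size=m)
    mu *= rel_noise * np.linalg.norm(clean) / np.linalg.norm(mu)   # set the noise level
    x_hat = mssmp(P, clean + mu, k)
    return relative_error(x_hat, x)

print(noisy_trial())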

V. CONCLUSION

In this paper, we introduced SERP, a class of sparse real-valued projections that can be constructed by replacing the non-zero entries of the adjacency matrix of an expander graph by Gaussian random vectors in R^r for small values of r. We proved that, with high probability, an instance of this class, say P ∈ R^{m×n}, approximately preserves the ℓ2 norm of any arbitrary but fixed k-sparse vector x ∈ R^n under projection (i.e. ‖Px‖_2 is very close to ‖x‖_2). The immediate consequence of this claim is that these matrices can be utilized under any optimal greedy or convex relaxation solver for compressed sensing. Furthermore, we proved that there exists at least one combinatorial algorithm for the proposed class of projections with an ℓ2 < C·ℓ2 guarantee of reconstruction. This makes the class of sparse real-valued projections unique, since these matrices provide strong guarantees of recovery under all three (combinatorial, greedy, and convex relaxation) approaches to compressed sensing.

APPENDIX

A. Proof of Lemma 5

Let Δ = x̂ − x. Since y = Px + μ, inequality (19) can be re-expressed as:

‖P(Δ − Δ_i e_i) − μ‖_2^2 ≤ (1 − c_3/k)‖PΔ − μ‖_2^2      (22)

Note that, from the assumption of the lemma,

(c_1 − 1)‖μ‖_2 ≤ ‖Px̂ − y‖_2 − ‖μ‖_2 ≤ ‖Px̂ − y + μ‖_2 = ‖PΔ‖_2      (23)

Since ‖Δ‖_0 ≤ 2k, w-RIP implies that ‖PΔ‖_2 ≤ √(1 + δ)‖Δ‖_2. Consequently:

‖Δ‖_2 ≥ (c_1 − 1)‖μ‖_2 / √(1 + δ)      (24)

Let T = Support{Δ} = {i : Δ_i ≠ 0}. Note that 0 ≤ |T| = ‖x̂ − x‖_0 ≤ 2k. Assume that, after re-ordering the entries of Δ, we have |Δ_i| ≥ |Δ_{i+1}| for all i ∈ [‖Δ‖_0 − 1]. Consider G, the bipartite graph corresponding to the projection matrix P. Enumerate the edges of G in lexicographic order and form the sets Ω'_i and Ω''_i by the following rule: if (j, i) is the first edge going into the right vertex j, then j is included in Ω'_i; otherwise it goes into the set Ω''_i [16]. More formally,

Ω''_i = {j ∈ Ω_i : ∃ z ∈ T, z < i, such that P_{j,z} ≠ 0}      (25)

and Ω'_i = Ω_i \ Ω''_i. Note that the sets Ω'_i are pairwise disjoint. As before, let D = rd. Since P corresponds to a ((c_0 + 1)k, D, q) expander graph (where c_0 ≥ 1), one has Σ over the first s elements of T of |Ω''_i| ≤ qDs for s ≤ (c_0 + 1)k. For a fixed constant c_2 > 1, define:

T_1 = {i : i ∈ T and |Ω''_i| < qDc_2}      (26)

Similarly, define T_2 = T \ T_1. As in [16], our goal is to show that (a) most of the energy of Δ is in the coordinates T_1, and (b) there is an index in T_1 for which (22) holds.

Suppose T_2 = {u_1, u_2, …} where u_1 < u_2 < ⋯. Since u_i ≤ ‖Δ‖_0 ≤ 2k ≤ (c_0 + 1)k, the expander property applies to the first u_i elements of T. By definition, for every element of T_2 we have |Ω''| ≥ qDc_2, and therefore i·qDc_2 ≤ Σ_{l ≤ u_i} |Ω''_l|. On the other hand, since the graph G is D left-regular, Σ_{l ≤ u_i} |Ω''_l| ≤ qDu_i. Consequently, u_i ≥ i·c_2. The immediate consequence is that, for all i,

|{1, 2, …, u_i} ∩ T_1| ≥ i(c_2 − 1)      (27)

As in [16], for any i, let S_i be the set containing the smallest (c_2 − 1) elements of ({1, 2, …, u_i} ∩ T_1) \ (∪_{l<i} S_l). Noting that ∀ j ∈ S_i: |Δ_j| ≥ |Δ_{u_i}| and |S_i| ≥ (c_2 − 1), one has

Σ_{j∈T_1} Δ_j^2 ≥ Σ_i Σ_{j∈S_i} Δ_j^2 ≥ (c_2 − 1) Σ_i Δ_{u_i}^2 = (c_2 − 1)‖Δ_{T_2}‖_2^2      (28)

Recalling that ‖Δ‖_2^2 = ‖Δ_{T_1}‖_2^2 + ‖Δ_{T_2}‖_2^2, it can be concluded that

‖Δ_{T_1}‖_2^2 ≥ (1 − 1/c_2)‖Δ‖_2^2      (29)

For i ∈ T_1, define:

gain_i = ‖PΔ − μ‖_2^2 − ‖P(Δ − Δ_i e_i) − μ‖_2^2 = ‖PΔ‖_2^2 − ‖P(Δ − Δ_i e_i)‖_2^2 − 2Δ_i μᵀ P e_i      (30)

Since the i-th column of P is non-zero only in the row indices Ω_i, we have ∀ j ∈ [m]\Ω_i: (PΔ)_j = (P(Δ − Δ_i e_i))_j. It is straightforward to verify that:

gain_i = 2(PΔ)ᵀ(PΔ_i e_i) − ‖PΔ_i e_i‖_2^2 − 2Δ_i μᵀ P e_i      (31)

Using the fact that Δ = Δ_{T_1} + Δ_{T_2} and summing gain_i over all i ∈ T_1 yields:

Σ_{i∈T_1} gain_i = 2(PΔ)ᵀ(PΔ_{T_1}) − Σ_{i∈T_1} ‖PΔ_i e_i‖_2^2 − 2μᵀ PΔ_{T_1}
                 = 2‖PΔ_{T_1}‖_2^2 + 2(PΔ_{T_2})ᵀ(PΔ_{T_1}) − Σ_{i∈T_1} ‖PΔ_i e_i‖_2^2 − 2μᵀ PΔ_{T_1}      (32)

By w-RIP on the support of T_1, one has:

Σ_{i∈T_1} ‖PΔ_i e_i‖_2^2 ≤ (1 + δ) Σ_{i∈T_1} Δ_i^2 = (1 + δ)‖Δ_{T_1}‖_2^2

Using the lower bound of w-RIP on T_1, one has ‖PΔ_{T_1}‖_2^2 ≥ (1 − δ)‖Δ_{T_1}‖_2^2, and hence (32) can be simplified to

Σ_{i∈T_1} gain_i ≥ (1 − 3δ)‖Δ_{T_1}‖_2^2 + 2(PΔ_{T_2})ᵀ(PΔ_{T_1}) − 2μᵀ PΔ_{T_1}      (33)

Note that, by the Cauchy-Schwarz inequality,

|μᵀ PΔ_{T_1}| ≤ ‖μ‖_2 ‖PΔ_{T_1}‖_2 ≤ ((1 + δ)/(c_1 − 1)) ‖Δ‖_2 ‖Δ_{T_1}‖_2 ≤ ((1 + δ)/(c_1 − 1)) ‖Δ‖_2^2      (34)

where we used (24) and the w-RIP upper bound on the support of T_1. Combining the last two equations, we have:

Σ_{i∈T_1} gain_i ≥ (1 − 3δ)‖Δ_{T_1}‖_2^2 − (2(1 + δ)/(c_1 − 1))‖Δ‖_2^2 + 2(PΔ_{T_2})ᵀ(PΔ_{T_1})      (35)

Using the w-RIP of P on the two partitions T_1 and T_2 of T, we have ‖P_{:,T_1}ᵀ P_{:,T_2}‖ ≤ δ (see [5]) and hence

|(PΔ_{T_2})ᵀ(PΔ_{T_1})| ≤ δ ‖Δ_{T_1}‖_2 ‖Δ_{T_2}‖_2      (36)

Substituting (36) into (35),

Σ_{i∈T_1} gain_i ≥ (1 − 3δ)‖Δ_{T_1}‖_2^2 − (2(1 + δ)/(c_1 − 1))‖Δ‖_2^2 − 2δ ‖Δ_{T_1}‖_2 ‖Δ_{T_2}‖_2      (37)

By (29) and (37), we have

Σ_{i∈T_1} gain_i ≥ [(1 − 1/c_2)(1 − 3δ) − 2(1 + δ)/(c_1 − 1) − 2δ√((1 − 1/c_2)/c_2)] ‖Δ‖_2^2 = C‖Δ‖_2^2      (38)

for

C = (1 − 1/c_2)(1 − 3δ) − 2(1 + δ)/(c_1 − 1) − 2δ√((1 − 1/c_2)/c_2)

Recall that |T_1| ≤ |T| ≤ 2k. Thus there exists an index j ∈ [n] such that:

gain_j ≥ (Σ_{i∈T_1} gain_i)/|T_1| ≥ C‖Δ‖_2^2 / (2k)      (39)

To relate the gain to the discrepancy ‖Px̂ − y‖_2, it can be shown that

√(1 + δ) ‖Δ‖_2 ≥ ‖PΔ‖_2 = ‖Px̂ − y + μ‖_2 ≥ ‖Px̂ − y‖_2 − ‖μ‖_2      (40)

By formula (24),

‖Px̂ − y‖_2 ≤ √(1 + δ)(1 + 1/(c_1 − 1)) ‖Δ‖_2      (41)

Now one can simplify (39) to: there exists j ∈ [n] such that

gain_j ≥ (c_3/k) ‖Px̂ − y‖_2^2      (42)

where c_3 = C / (2(1 + δ)(1 + 1/(c_1 − 1))^2). Note that, to improve the discrepancy of the estimation, and hence to reduce the error of the estimation, we require c_3 to be strictly positive. Since the denominator of c_3 is strictly positive, it suffices to have C > 0. This can be achieved if δ (the w-RIP constant) is sufficiently small and the constants c_1 and c_2 are chosen appropriately.

REFERENCES

[1] D. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, 52(4), April 2006.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. on Information Theory, 52(2), February 2006.
[3] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by Basis Pursuit," SIAM J. of Scientific Computing, 1998.
[4] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE J. of Selected Topics in Signal Processing: Special Issue on Convex Optimization Methods for Signal Processing, 1(4), 2007.
[5] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, 2009.
[6] R. Berinde and P. Indyk, "Sparse recovery using sparse random matrices," preprint, 2008.
[7] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient compressed sensing using high-quality expander graphs," IEEE Trans. on Information Theory, 2009.
[8] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss, "Combining geometry and combinatorics: A unified approach to sparse signal recovery," preprint, 2008.
[9] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proc. Natl. Acad. Sci., 2009.
[10] V. Chandar, "A negative result concerning explicit matrices with the restricted isometry property," preprint, 2008.
[11] S. Sarvotham, D. Baron, and R. Baraniuk, "Sudocodes - Fast measurement and reconstruction of sparse signals," IEEE ISIT, 2006.
[12] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 28(3), December 2008.
[13] D. Achlioptas, "Database-friendly random projections," 20th Annual Symposium on Principles of Database Systems, 2001.
[14] M. Elad, "Optimized projections for compressed sensing," IEEE Trans. on Signal Processing, 55(12), December 2007.
[15] R. Berinde, P. Indyk, and M. Ruzic, "Practical near-optimal sparse recovery in the l1 norm," Allerton, 2008.
[16] R. Berinde and P. Indyk, "Sequential Sparse Matching Pursuit," Allerton, 2009.

[17] L. Gan, C. Ling, T. T. Do, and T. D. Tran, "Analysis of the statistical restricted isometry property for deterministic sensing matrices using Stein's method," preprint, 2009.
[18] R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property," 2009.
[19] D. L. Donoho, A. Maleki, and A. Montanari, "The noise-sensitivity phase transition in compressed sensing," IEEE Trans. on Information Theory, vol. 57, no. 10, October 2011.
[20] E. Candès and Y. Plan, "A probabilistic and RIPless theory of compressed sensing," IEEE Trans. on Information Theory, vol. 57, no. 11, November 2011.
[21] J. Haupt and R. Nowak, "A generalized restricted isometry property," preprint.
[22] E. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus de l'Académie des Sciences, Paris, Série I, 2008.
[23] H. Feiveson and F. C. Delaney, "The distribution and properties of a weighted sum of chi squares," NASA Technical Note TN D-4575, May 1968.
[24] S. Gabler and C. Wolff, "A quick and easy approximation to the distribution of a sum of weighted chi-square variables," Statistical Papers, vol. 28.


More information

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,

More information

Conditions for a Unique Non-negative Solution to an Underdetermined System

Conditions for a Unique Non-negative Solution to an Underdetermined System Conditions for a Unique Non-negative Solution to an Underdetermined System Meng Wang and Ao Tang School of Electrical and Computer Engineering Cornell University Ithaca, NY 14853 Abstract This paper investigates

More information

Tutorial: Sparse Recovery Using Sparse Matrices. Piotr Indyk MIT

Tutorial: Sparse Recovery Using Sparse Matrices. Piotr Indyk MIT Tutorial: Sparse Recovery Using Sparse Matrices Piotr Indyk MIT Problem Formulation (approximation theory, learning Fourier coeffs, linear sketching, finite rate of innovation, compressed sensing...) Setup:

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

A New Estimate of Restricted Isometry Constants for Sparse Solutions

A New Estimate of Restricted Isometry Constants for Sparse Solutions A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan Connection between... Sparse Approximation and Compressed

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

Greedy Signal Recovery and Uniform Uncertainty Principles

Greedy Signal Recovery and Uniform Uncertainty Principles Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are

More information

Strengthened Sobolev inequalities for a random subspace of functions

Strengthened Sobolev inequalities for a random subspace of functions Strengthened Sobolev inequalities for a random subspace of functions Rachel Ward University of Texas at Austin April 2013 2 Discrete Sobolev inequalities Proposition (Sobolev inequality for discrete images)

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP)

Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP) 1 Exact Reconstruction Conditions and Error Bounds for Regularized Modified Basis Pursuit (Reg-Modified-BP) Wei Lu and Namrata Vaswani Department of Electrical and Computer Engineering, Iowa State University,

More information

Compressed Sensing and Related Learning Problems

Compressed Sensing and Related Learning Problems Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Lecture Notes 9: Constrained Optimization

Lecture Notes 9: Constrained Optimization Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form

More information

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry

Compressed Sensing and Affine Rank Minimization Under Restricted Isometry IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 13, JULY 1, 2013 3279 Compressed Sensing Affine Rank Minimization Under Restricted Isometry T. Tony Cai Anru Zhang Abstract This paper establishes new

More information

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization

More information

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property : An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,

More information

Optimal Deterministic Compressed Sensing Matrices

Optimal Deterministic Compressed Sensing Matrices Optimal Deterministic Compressed Sensing Matrices Arash Saber Tehrani email: saberteh@usc.edu Alexandros G. Dimakis email: dimakis@usc.edu Giuseppe Caire email: caire@usc.edu Abstract We present the first

More information

Sparse recovery using sparse random matrices

Sparse recovery using sparse random matrices Sparse recovery using sparse random matrices Radu Berinde MIT texel@mit.edu Piotr Indyk MIT indyk@mit.edu April 26, 2008 Abstract We consider the approximate sparse recovery problem, where the goal is

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Compressed Sensing and Linear Codes over Real Numbers

Compressed Sensing and Linear Codes over Real Numbers Compressed Sensing and Linear Codes over Real Numbers Henry D. Pfister (joint with Fan Zhang) Texas A&M University College Station Information Theory and Applications Workshop UC San Diego January 31st,

More information

ORTHOGONAL matching pursuit (OMP) is the canonical

ORTHOGONAL matching pursuit (OMP) is the canonical IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael

More information

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information

Noisy Signal Recovery via Iterative Reweighted L1-Minimization

Noisy Signal Recovery via Iterative Reweighted L1-Minimization Noisy Signal Recovery via Iterative Reweighted L1-Minimization Deanna Needell UC Davis / Stanford University Asilomar SSC, November 2009 Problem Background Setup 1 Suppose x is an unknown signal in R d.

More information

On Sparsity, Redundancy and Quality of Frame Representations

On Sparsity, Redundancy and Quality of Frame Representations On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division

More information

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems 1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical

More information

Recovery of Compressible Signals in Unions of Subspaces

Recovery of Compressible Signals in Unions of Subspaces 1 Recovery of Compressible Signals in Unions of Subspaces Marco F. Duarte, Chinmay Hegde, Volkan Cevher, and Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University Abstract

More information

A Generalized Restricted Isometry Property

A Generalized Restricted Isometry Property 1 A Generalized Restricted Isometry Property Jarvis Haupt and Robert Nowak Department of Electrical and Computer Engineering, University of Wisconsin Madison University of Wisconsin Technical Report ECE-07-1

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008

The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008 The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008 Emmanuel Candés (Caltech), Terence Tao (UCLA) 1 Uncertainty principles A basic principle

More information

Approximate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery

Approximate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery Approimate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery arxiv:1606.00901v1 [cs.it] Jun 016 Shuai Huang, Trac D. Tran Department of Electrical and Computer Engineering Johns

More information

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise Srdjan Stanković, Irena Orović and Moeness Amin 1 Abstract- A modification of standard

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

AN INTRODUCTION TO COMPRESSIVE SENSING

AN INTRODUCTION TO COMPRESSIVE SENSING AN INTRODUCTION TO COMPRESSIVE SENSING Rodrigo B. Platte School of Mathematical and Statistical Sciences APM/EEE598 Reverse Engineering of Complex Dynamical Networks OUTLINE 1 INTRODUCTION 2 INCOHERENCE

More information

Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice

Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University

More information

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011 Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear

More information

Near-Optimal Sparse Recovery in the L 1 norm

Near-Optimal Sparse Recovery in the L 1 norm Near-Optimal Sparse Recovery in the L 1 norm Piotr Indyk MIT indyk@mit.edu Milan Ružić ITU Copenhagen milan@itu.dk Abstract We consider the approximate sparse recovery problem, where the goal is to (approximately)

More information

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016 Lecture notes 5 February 9, 016 1 Introduction Random projections Random projections are a useful tool in the analysis and processing of high-dimensional data. We will analyze two applications that use

More information

Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes

Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Item Type text; Proceedings Authors Jagiello, Kristin M. Publisher International Foundation for Telemetering Journal International Telemetering

More information

AFRL-RI-RS-TR

AFRL-RI-RS-TR AFRL-RI-RS-TR-200-28 THEORY AND PRACTICE OF COMPRESSED SENSING IN COMMUNICATIONS AND AIRBORNE NETWORKING STATE UNIVERSITY OF NEW YORK AT BUFFALO DECEMBER 200 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC

More information

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar

More information

8.1 Concentration inequality for Gaussian random matrix (cont d)

8.1 Concentration inequality for Gaussian random matrix (cont d) MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration

More information

Simultaneous Sparsity

Simultaneous Sparsity Simultaneous Sparsity Joel A. Tropp Anna C. Gilbert Martin J. Strauss {jtropp annacg martinjs}@umich.edu Department of Mathematics The University of Michigan 1 Simple Sparse Approximation Work in the d-dimensional,

More information

COMPRESSED SENSING IN PYTHON

COMPRESSED SENSING IN PYTHON COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed

More information

Abstract This paper is about the efficient solution of large-scale compressed sensing problems.

Abstract This paper is about the efficient solution of large-scale compressed sensing problems. Noname manuscript No. (will be inserted by the editor) Optimization for Compressed Sensing: New Insights and Alternatives Robert Vanderbei and Han Liu and Lie Wang Received: date / Accepted: date Abstract

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 3: Sparse signal recovery: A RIPless analysis of l 1 minimization Yuejie Chi The Ohio State University Page 1 Outline

More information

Sparse Recovery with Pre-Gaussian Random Matrices

Sparse Recovery with Pre-Gaussian Random Matrices Sparse Recovery with Pre-Gaussian Random Matrices Simon Foucart Laboratoire Jacques-Louis Lions Université Pierre et Marie Curie Paris, 75013, France Ming-Jun Lai Department of Mathematics University of

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

Error Correction via Linear Programming

Error Correction via Linear Programming Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,

More information

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,

More information

Robust Support Recovery Using Sparse Compressive Sensing Matrices

Robust Support Recovery Using Sparse Compressive Sensing Matrices Robust Support Recovery Using Sparse Compressive Sensing Matrices Jarvis Haupt and Richard Baraniuk University of Minnesota, Minneapolis MN Rice University, Houston TX Abstract This paper considers the

More information

Compressed Sensing with Very Sparse Gaussian Random Projections

Compressed Sensing with Very Sparse Gaussian Random Projections Compressed Sensing with Very Sparse Gaussian Random Projections arxiv:408.504v stat.me] Aug 04 Ping Li Department of Statistics and Biostatistics Department of Computer Science Rutgers University Piscataway,

More information

Interpolation via weighted l 1 minimization

Interpolation via weighted l 1 minimization Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C

More information

Stability and Robustness of Weak Orthogonal Matching Pursuits

Stability and Robustness of Weak Orthogonal Matching Pursuits Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery

More information

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

Does Compressed Sensing have applications in Robust Statistics?

Does Compressed Sensing have applications in Robust Statistics? Does Compressed Sensing have applications in Robust Statistics? Salvador Flores December 1, 2014 Abstract The connections between robust linear regression and sparse reconstruction are brought to light.

More information

Stable Signal Recovery from Incomplete and Inaccurate Measurements

Stable Signal Recovery from Incomplete and Inaccurate Measurements Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University

More information

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3740 Ming-Jun Lai Department of Mathematics

More information

Greedy Sparsity-Constrained Optimization

Greedy Sparsity-Constrained Optimization Greedy Sparsity-Constrained Optimization Sohail Bahmani, Petros Boufounos, and Bhiksha Raj 3 sbahmani@andrew.cmu.edu petrosb@merl.com 3 bhiksha@cs.cmu.edu Department of Electrical and Computer Engineering,

More information

Observability of a Linear System Under Sparsity Constraints

Observability of a Linear System Under Sparsity Constraints 2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear

More information