
Noname manuscript No. (will be inserted by the editor)

Optimization for Compressed Sensing: New Insights and Alternatives

Robert Vanderbei, Han Liu, and Lie Wang

Received: date / Accepted: date

Robert J. Vanderbei: Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; rvdb@princeton.edu
Han Liu: Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; hanliu@princeton.edu. Research supported by NSF Grant III.
Lie Wang: Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; liewang@math.mit.edu. Research supported by NSF Grant DMS.

Abstract This paper is about the efficient solution of large-scale compressed sensing problems. We have two main results. First, we propose a reformulation of the vector sensing problem in which the vector signal is stacked into a matrix. We call this the Kronecker reformulation. We show that the Kronecker reformulation requires stronger assumptions for perfect recovery than the original vector problem. However, modeled correctly, the Kronecker reformulation is a much sparser linear optimization problem, so an interior-point method applied to the linear programming problem solves the Kronecker formulation much faster than it solves the equivalent vector problem. In our numerical studies, we see a ten-fold improvement in computation time. Second, we note that the vector (or matrix) of zeros is a basic (infeasible) solution for the linear programming problem and therefore, if the sensed signal is very sparse, some variants of the simplex method can be expected to take only a small number of pivots to arrive at a solution. We implemented one such variant and demonstrate a further ten-fold improvement in computation time.

Keywords Linear programming · Compressive sensing · Parametric simplex method · Sparse signals · Interior-point methods

Mathematics Subject Classification (2010) 65K05 · 62P99

Compressed sensing aims to correctly recover a sparse signal from very few measurements. The theoretical foundation of compressed sensing was first laid out by Donoho (2006) and Candès et al. (2006) and can be traced further back to the sparse recovery work of Donoho and Stark (1989); Donoho and Huo (2001); Donoho and Elad (2003). More recent progress in the area of compressed sensing is summarized in Kutyniok (2012) and Elad (2010).

In a typical compressed sensing problem, we let $x^0 := (x^0_1, \ldots, x^0_n)^T \in \mathbb{R}^n$ be a signal, where $n$ is assumed to be large. We either assume that $x^0$ itself is sparse, i.e., it has very few nonzero components, which we can express mathematically by saying that $\|x^0\|_0 := \#\{i : x^0_i \neq 0\}$ is small, or that there exists a basis or a frame (a frame is a set of vectors that spans $\mathbb{R}^n$ but does not have to be linearly independent) $\Phi$ such that $x^0 = \Phi c$ with $c$ being sparse (Kutyniok and Lim, 2011; Aharon et al., 2006; Rauhut et al., 2008; Candès et al., 2010). Let $A$ be an $m \times n$ sensing matrix with $m < n$. The compressed sensing problem is to recover $x^0$ from the knowledge of $y = Ax^0$, or to recover $c$ from the knowledge of $y = A\Phi c$. In both cases, we face an underdetermined linear system of equations with sparsity as prior information about the target signal. In the current paper, we mainly consider the case where $x^0$ itself is sparse.

Since $x^0$ is a sparse vector, one can hope that it is the sparsest solution to the underdetermined linear system and therefore can be recovered from $y$ by solving

$$(P_0) \qquad \min_x \|x\|_0 \quad \text{subject to} \quad Ax = y.$$

This problem is NP-hard due to the nonconvexity of the $\ell_0$ pseudo-norm. To avoid the NP-hardness, Chen et al. (1998) propose the basis pursuit approach, in which $\|x\|_1$ replaces $\|x\|_0$:

$$(P_1) \qquad \min_x \|x\|_1 \quad \text{subject to} \quad Ax = y. \qquad (1)$$

Donoho and Elad (2003) and Cohen et al. (2009) characterize conditions under which the solutions to $P_0$ and $P_1$ are unique. The key question in compressed sensing research is to provide conditions under which they are the same. Various sufficient conditions have been discovered; see, e.g., Donoho and Huo (2001); Elad and Bruckstein (2002); Tropp (2004); Candès et al. (2006); Borup et al. (2008); Cohen et al. (2009); Donoho and Kutyniok (2010); Foucart (2010). More specifically, letting $A_S$ denote the submatrix of a matrix $A$ with columns indexed by a subset $S \subset \{1, \ldots, n\}$, we say that $A$ has the $k$-restricted isometry property (RIP) with RIP constant $\delta_k$ if for any $S$ with cardinality $k$,

$$(1 - \delta_k)\,\|v\|_2^2 \;\leq\; \|A_S v\|_2^2 \;\leq\; (1 + \delta_k)\,\|v\|_2^2 \quad \text{for any } v \in \mathbb{R}^k. \qquad (2)$$

We denote by $\delta_k(A)$ the smallest value of $\delta_k$ for which the matrix $A$ has the RIP property. Under the assumption that $k := \|x^0\|_0 \ll n$ and that the matrix $A$ satisfies the RIP condition, Cai and Zhang (2012) prove that whenever $\delta_k(A) < 1/3$, the solutions to $P_0$ and $P_1$ are the same. In addition to these sufficient conditions, Donoho and Tanner (2005a,b, 2009) use the theory of convex polytopes to study necessary conditions for $P_0$ and $P_1$ to produce the same answer.
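Computing $\delta_k(A)$ exactly is itself a combinatorial problem, but a Monte Carlo scan over random supports gives a quick numerical lower bound on it. The sketch below is our own illustration, not part of the paper; the function name `rip_lower_bound` and the sampling budget are arbitrary choices, and NumPy is the only dependency.

```python
import numpy as np

def rip_lower_bound(A, k, n_trials=2000, seed=0):
    """Monte Carlo lower bound on the RIP constant delta_k(A).

    For each random support S of size k, the extreme squared singular
    values of A_S measure how far A_S is from an isometry; the largest
    deviation from 1 seen over the sampled supports is a lower bound on
    delta_k(A), since the true constant maximizes over *all* supports.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    worst = 0.0
    for _ in range(n_trials):
        S = rng.choice(n, size=k, replace=False)
        s = np.linalg.svd(A[:, S], compute_uv=False)
        worst = max(worst, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return worst

m, n, k = 100, 400, 5
A = np.random.default_rng(1).standard_normal((m, n)) / np.sqrt(m)  # entry variance 1/m
print(rip_lower_bound(A, k))  # a bound below 1/3 is consistent with exact recovery
```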

Existing algorithms for solving the convex program $P_1$ include interior-point methods (Candès et al., 2006), projected gradient methods (Figueiredo et al., 2008), and Bregman iterations (Yin et al., 2008). Besides solving the convex program $P_1$, several greedy algorithms have been proposed, including matching pursuit (Mallat and Zhang, 1993) and its many variants (Tropp, 2004; Donoho et al., 2006; Needell and Vershynin, 2009; Needell and Tropp, 2010; Donoho et al., 2009). To achieve more scalability, combinatorial algorithms such as HHS pursuit (Gilbert et al., 2007) and a sub-linear Fourier transform (Iwen, 2010) have also been developed.

In this paper, we propose to solve a linear programming formulation of $P_1$ using a variant of the simplex method. In particular, we use the parametric simplex method (see, e.g., Vanderbei (2007)) applied to a parametrically relaxed problem. Our approach is motivated by the fact that the desired solution is sparse and therefore should require only a relatively small number of simplex pivots from an appropriately chosen starting point: the vector of all zeros. If the equality constraints in the model are relaxed and a parametrically weighted penalty term is added to the objective function to gauge the deviation from equality, then the parametric simplex method can solve the problem in a small fraction of the number of pivots one would expect a naive implementation to require.

We also consider two other methods. These methods involve stacking the signal vector $x$ into a matrix $X$ and then multiplying the matrix signal on both the left and the right to get a compressed matrix signal. Using ideas analogous to one step of a fast Fourier transform, we dramatically sparsify the constraint matrix of the linear programming problem, which allows us to solve the problem much faster than we could with the equivalent dense formulation. For our third method, we break up the left- and right-hand multiplications into separate, smaller linear programming problems. This last method is very fast but requires much stricter sparsity assumptions for the method to correctly recover the signal. We refer to these two new methods as Kronecker compressed sensing (KCS). Theoretically, KCS involves a tradeoff between computational complexity and informational complexity: it gains computational advantages at the price of requiring more measurements. More specifically, in later sections we show that, using sub-Gaussian random sensing matrices, whenever

$$m \geq 225\, k^2 \big(\log(n/k^2)\big)^2, \qquad (3)$$

we recover the true signal with probability at least $1 - 4\exp(-0.1\sqrt{m})$. It is easy to see that this scaling of $(m, n, k)$ is tight by considering the special case in which all the nonzero entries of $x$ form a contiguous block.

The rest of this paper is organized as follows. In the next section, we describe the main idea of Kronecker compressed sensing (KCS) and introduce two variants: coupled KCS and decoupled KCS. In Section 2, we describe the parametric simplex method and explain how to apply it to the original compressed sensing problem and to the Kronecker compressed sensing problems. Numerical comparisons and discussion are provided in Section 3.

1 Kronecker Compressed Sensing

In this section, we introduce Kronecker compressed sensing (Duarte and Baraniuk, 2012). Unlike classical compressed sensing, which mainly focuses on one-dimensional vector signals, Kronecker compressed sensing can be used for sensing multidimensional signals (e.g., matrices or tensors). For example, given a sparse matrix signal $X^0 \in \mathbb{R}^{n_1 \times n_2}$, we can use two sensing matrices $A \in \mathbb{R}^{m_1 \times n_1}$ and $B \in \mathbb{R}^{m_2 \times n_2}$ and try to recover $X^0$ from the knowledge of $Y = AX^0B^T$. It is clear that when the signal is multidimensional, Kronecker compressed sensing is more natural than classical vector compressed sensing.

Here, we would like to point out that even when facing vector signals, it can still be beneficial to use Kronecker compressed sensing because of its added computational efficiency. More specifically, even though the target signal is a vector $x^0 \in \mathbb{R}^n$, we may first stack it into a matrix $X^0 \in \mathbb{R}^{n_1 \times n_2}$ by putting each length-$n_1$ sub-vector of $x^0$ into a column of $X^0$; here we assume $n = n_1 n_2$. We then multiply the matrix signal $X^0$ on both the left and the right by sensing matrices $A$ and $B$ to get a compressed matrix signal $Y^0$. In the next section, we will show that, using ideas analogous to one step of a fast Fourier transform, we are able to solve this Kronecker compressed sensing problem much more efficiently than the classical vector compressed sensing problem.

We introduce two variants of Kronecker compressed sensing: a coupled version and a decoupled version. The decoupled KCS is computationally easier than the coupled KCS but requires more stringent conditions for successful recovery. In the sequel, we adopt the following notation: $X_j$ denotes the $j$th column of the matrix $X$ and $X_i$ denotes the $i$th row. Also, we define $\|X\|_0 := \sum_{j,k} \mathbb{I}(X_{jk} \neq 0)$ and $\|X\|_1 := \sum_{j,k} |X_{jk}|$.

1.1 Coupled KCS

Given a matrix $Y \in \mathbb{R}^{m_1 \times m_2}$ and the sensing matrices $A$ and $B$, our goal is to recover the original sparse signal $X^0$. We first consider a coupled Kronecker sensing method, which solves the following optimization problem:

$$(P_2) \qquad \hat{X} = \operatorname*{argmin}_X \|X\|_1 \quad \text{subject to} \quad AXB^T = Y. \qquad (4)$$

Here, $A$ and $B$ are sensing matrices of size $m_1 \times n_1$ and $m_2 \times n_2$, respectively. Let $x = \mathrm{vec}(X)$ and $y = \mathrm{vec}(Y)$, where the $\mathrm{vec}(\cdot)$ operator takes a matrix and concatenates its elements column by column to build one large column vector containing all the elements of the matrix. In terms of $x$ and $y$, problem $P_2$ can be rewritten as

$$\mathrm{vec}(\hat{X}) = \operatorname*{argmin}_x \|x\|_1 \quad \text{subject to} \quad Ux = y, \qquad (5)$$

where $U$ is the $(m_1 m_2) \times (n_1 n_2)$ matrix given by the Kronecker product of $A$ and $B$:

$$U := A \otimes B = \begin{pmatrix} A B_{11} & \cdots & A B_{1 n_2} \\ \vdots & \ddots & \vdots \\ A B_{m_2 1} & \cdots & A B_{m_2 n_2} \end{pmatrix}.$$

In this way, (4) becomes an ordinary compressed sensing problem.
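As a quick sanity check on the structure of $U$, the following NumPy snippet (our illustration, not from the paper) verifies that $U\,\mathrm{vec}(X) = \mathrm{vec}(AXB^T)$ when $\mathrm{vec}$ stacks columns. Note that in NumPy's ordering convention the block matrix displayed above is produced by `np.kron(B, A)`.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, n1, m2, n2 = 3, 5, 4, 6
A = rng.standard_normal((m1, n1))
B = rng.standard_normal((m2, n2))
X = rng.standard_normal((n1, n2))

def vec(M):
    return M.flatten(order="F")  # column-by-column stacking

# Block (i, j) of U is A * B[i, j]; in NumPy's convention that is kron(B, A).
U = np.kron(B, A)  # shape (m1*m2, n1*n2)
assert U.shape == (m1 * m2, n1 * n2)
assert np.allclose(U @ vec(X), vec(A @ X @ B.T))
print("vec(A X B^T) = U vec(X) verified")
```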

To analyze the properties of this coupled Kronecker sensing approach, we recall the definition of the restricted isometry constant of a matrix. For any $m \times n$ matrix $U$, the $k$-restricted isometry constant $\delta_k(U)$ is defined as the smallest nonnegative number such that for any $k$-sparse vector $h \in \mathbb{R}^n$,

$$(1 - \delta_k(U))\,\|h\|_2^2 \;\leq\; \|Uh\|_2^2 \;\leq\; (1 + \delta_k(U))\,\|h\|_2^2. \qquad (6)$$

Based on the results in Cai and Zhang (2012), we have

Lemma 1 (Cai and Zhang (2012)) Suppose $k = \|X^0\|_0$ is the sparsity of the matrix $X^0$. Then, if $\delta_k(U) < 1/3$, we have $\mathrm{vec}(\hat{X}) = x^0$, or equivalently $\hat{X} = X^0$.

For the value of $\delta_k(U)$, Lemma 2 of Duarte and Baraniuk (2012) gives

$$1 + \delta_k(U) \;\leq\; (1 + \delta_k(A))(1 + \delta_k(B)). \qquad (7)$$

In addition, we define the strictly sub-Gaussian distribution as follows:

Definition 1 (Strictly Sub-Gaussian Distribution) We say a mean-zero random variable $X$ follows a strictly sub-Gaussian distribution with variance $1/m$ if it satisfies

$$\mathbb{E}X^2 = \frac{1}{m} \quad \text{and} \quad \mathbb{E}\exp(tX) \leq \exp\!\left(\frac{t^2}{2m}\right) \ \text{ for all } t \in \mathbb{R}.$$

It is obvious that the Gaussian distribution with mean 0 and variance $1/m$ satisfies the above definition. The next theorem provides sufficient conditions that guarantee perfect recovery by coupled KCS with the desired probability.

Theorem 1 Suppose the matrices $A$ and $B$ are both generated by independent strictly sub-Gaussian entries with variances $1/m_1$ and $1/m_2$, respectively. Let $C > 28.1$ be a constant. Whenever

$$m_1 \geq C\, k \log(n_1/k) \quad \text{and} \quad m_2 \geq C\, k \log(n_2/k), \qquad (8)$$

the convex program $P_2$ attains perfect recovery with probability

$$\mathbb{P}\big(\hat{X} = X^0\big) \;\geq\; \underbrace{1 - 2\exp(-c_C\, m_1) - 2\exp(-c_C\, m_2)}_{\rho(m_1, m_2)}, \qquad (9)$$

where $c_C > 0$ is a constant depending only on $C$.

Proof From Equation (7) and Lemma 1, it suffices to show that

$$\mathbb{P}\left(\delta_k(A) < \frac{2}{\sqrt{3}} - 1 \ \text{ and } \ \delta_k(B) < \frac{2}{\sqrt{3}} - 1\right) \;\geq\; \rho(m_1, m_2). \qquad (10)$$

Indeed, on this event, (7) gives $1 + \delta_k(U) \leq (1 + \delta_k(A))(1 + \delta_k(B)) < (2/\sqrt{3})^2 = 4/3$, so $\delta_k(U) < 1/3$ and Lemma 1 applies. The bound (10) follows directly from Theorem 3.6 of Baraniuk et al. (2010) with a careful calculation of constants. ∎

From the above theorem, we see that for $m_1 = m_2 = \sqrt{m}$ and $n_1 = n_2 = \sqrt{n}$, whenever the number of measurements satisfies

$$m \geq 225\, k^2 \big(\log(n/k^2)\big)^2, \qquad (11)$$

we have $\hat{X} = X^0$ with probability at least $1 - 4\exp(-0.1\sqrt{m})$.

Here we compare the above result to that of vector compressed sensing: instead of stacking the original signal $x^0 \in \mathbb{R}^n$ into a matrix, we directly use a strictly sub-Gaussian sensing matrix $A \in \mathbb{R}^{m \times n}$, multiply it by $x^0$ to get $y = Ax^0$, and plug $y$ and $A$ into the convex program $P_1$ in Equation (1) to recover $x^0$. Following the same argument as in Theorem 1, whenever

$$m \geq 30\, k \log(n/k), \qquad (12)$$

we have $\hat{x} = x^0$ with probability at least $1 - 2\exp(-0.1\, m)$. Comparing (12) with (11), we see that coupled KCS requires more stringent conditions for perfect recovery.
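For intuition about the hypotheses of Theorem 1, strictly sub-Gaussian sensing matrices are easy to generate: both Gaussian entries and scaled Rademacher ($\pm 1$) entries satisfy Definition 1. The snippet below is a minimal sketch of ours, assuming NumPy only; it checks the empirical column normalization $\mathbb{E}\|A_j\|_2^2 = 1$ implied by entry variance $1/m_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, n1 = 50, 200

# Two strictly sub-Gaussian ensembles with entry variance 1/m1:
A_gauss = rng.standard_normal((m1, n1)) / np.sqrt(m1)         # N(0, 1/m1) entries
A_rad = rng.choice([-1.0, 1.0], size=(m1, n1)) / np.sqrt(m1)  # +/- 1/sqrt(m1) entries

# Entry variance 1/m1 makes columns approximately unit-norm, which is
# the normalization behind the RIP bounds invoked above.
print(np.mean(np.sum(A_gauss ** 2, axis=0)))  # approximately 1
print(np.mean(np.sum(A_rad ** 2, axis=0)))    # exactly 1 for Rademacher
```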

1.2 Decoupled KCS

In this section, we introduce a decoupled KCS method. More specifically, we consider the following two-stage procedure:

$$(P_3, \text{Stage 1}) \qquad \hat{W} := \operatorname*{argmin}_{W \in \mathbb{R}^{n_1 \times m_2}} \|W\|_1 \quad \text{subject to} \quad AW = Y, \qquad (13)$$

$$(P_3, \text{Stage 2}) \qquad \hat{X} := \operatorname*{argmin}_{X \in \mathbb{R}^{n_1 \times n_2}} \|X\|_1 \quad \text{subject to} \quad XB^T = \hat{W}. \qquad (14)$$

It is easy to see that the Stage 1 problem in (13) decomposes into $m_2$ subproblems, each of which is a linear program:

$$\hat{W}_j = \operatorname*{argmin}_{w \in \mathbb{R}^{n_1}} \|w\|_1 \quad \text{subject to} \quad Aw = Y_j, \qquad j = 1, \ldots, m_2. \qquad (15)$$

Similarly, the Stage 2 problem in (14) decomposes into $n_1$ subproblems, each of which is also a linear program:

$$\hat{X}_i = \operatorname*{argmin}_{x \in \mathbb{R}^{n_2}} \|x\|_1 \quad \text{subject to} \quad Bx = \hat{W}_i^T, \qquad i = 1, \ldots, n_1. \qquad (16)$$

To solve each individual linear programming problem, we again use the parametric simplex method (which will be introduced in the next section). Since the problem decouples into many smaller linear programming subproblems, the method is named decoupled KCS; a code sketch of the two stages follows.
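The sketch below shows the two-stage decomposition in code. It is illustrative only: each $\ell_1$ subproblem is handed to SciPy's general-purpose LP solver through the standard split $x = x^+ - x^-$ (this LP reformulation is spelled out in Section 2.1), rather than to the authors' parametric simplex implementation, and all dimensions are toy values of ours.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(M, b):
    """Solve min ||x||_1 subject to Mx = b via the split x = xp - xm, xp, xm >= 0."""
    n = M.shape[1]
    c = np.ones(2 * n)           # objective 1'(xp + xm)
    A_eq = np.hstack([M, -M])    # M(xp - xm) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n1 = n2 = 12
m1 = m2 = 8
A = rng.standard_normal((m1, n1)) / np.sqrt(m1)
B = rng.standard_normal((m2, n2)) / np.sqrt(m2)

X0 = np.zeros((n1, n2))
X0[2, 3], X0[7, 9] = 1.0, -2.0   # a 2-sparse matrix signal
Y = A @ X0 @ B.T

# Stage 1: recover W0 = X0 B^T column by column from A W = Y.
W_hat = np.column_stack([l1_min(A, Y[:, j]) for j in range(m2)])

# Stage 2: recover X row by row from X B^T = W_hat, i.e. B X_i^T = W_hat_i^T.
X_hat = np.vstack([l1_min(B, W_hat[i, :]) for i in range(n1)])

print(np.max(np.abs(X_hat - X0)))  # small only if Stage 1 was solved exactly
```

The final comment anticipates the caveat discussed below: Stage 2 inherits whatever error Stage 1 leaves behind.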

The next theorem provides a theoretical analysis of the decoupled KCS algorithm.

Theorem 2 Suppose the matrices $A$ and $B$ are both generated by independent strictly sub-Gaussian entries with variances $1/m_1$ and $1/m_2$, respectively. Let $C > 22.5$ be a constant. Whenever

$$m_1 \geq C\, k \log(n_1/k) \quad \text{and} \quad m_2 \geq C\, k \log(n_2/k), \qquad (17)$$

the convex program $P_3$ attains perfect recovery with probability

$$\mathbb{P}\big(\hat{X} = X^0\big) \;\geq\; \underbrace{1 - 2\exp(-c_C\, m_1) - 2\exp(-c_C\, m_2)}_{\xi(m_1, m_2)}, \qquad (18)$$

where, as before, $c_C > 0$ is a constant depending only on $C$.

Proof Let $W^0 = X^0 B^T$. It is straightforward to show that $\|W^0_j\|_0 \leq k$ for every column $j$. We define the event

$$E_1 := \{\hat{W} = W^0\}. \qquad (19)$$

From Lemma 1 and Theorem 3.6 of Baraniuk et al. (2010), we have, whenever $m_1 \geq C\, k \log(n_1/k)$,

$$\mathbb{P}(E_1) \;\geq\; \mathbb{P}\left(\delta_k(A) < \frac{1}{3}\right) \;\geq\; 1 - 2\exp(-c_C\, m_1). \qquad (20)$$

Conditioning on $E_1$ and following the same reasoning, we have, whenever $m_2 \geq C\, k \log(n_2/k)$,

$$\mathbb{P}\big(\hat{X} = X^0 \mid E_1\big) \;\geq\; 1 - 2\exp(-c_C\, m_2). \qquad (21)$$

We then get the desired result. ∎

From Theorem 2, if we set $m_1 = m_2 = \sqrt{m}$ and $n_1 = n_2 = \sqrt{n}$, then whenever the number of measurements satisfies

$$m \geq 126\, k^2 \big(\log(n/k^2)\big)^2, \qquad (22)$$

we have $\hat{X} = X^0$ with probability at least $1 - 4\exp(-0.1\sqrt{m})$. This scaling in (22) is the same as that for coupled KCS in (11).

At first sight, it seems that decoupled KCS is a better approach than coupled KCS. This is not necessarily true! The main caveat is that the proof of Theorem 2 hinges crucially on the ability to solve the Stage 1 optimization problem (13) perfectly. This is practically impossible, since numerical solvers only deliver solutions that are close to the theoretical minimizer up to a controlled precision. In the numerical study section, we will show that such a small numerical inaccuracy may severely degrade the recovery performance in the second stage. To the best of our knowledge, this is the first time such a phenomenon has been pointed out.

2 Computational Algorithm

In this section, we show how to formulate the convex programs $P_1$, $P_2$, and $P_3$ as linear programming problems, and how to ensure that the LP formulations of the two Kronecker problems involve sparse constraint matrices.

2.1 Vector-Sensor Method

The classical vector-sensor method $P_1$ can be formulated as a linear programming problem:

$$(P_{1:\mathrm{LP}}) \qquad \min_{x^+, x^-} \ \mathbf{1}^T (x^+ + x^-) \qquad (23)$$
$$\text{subject to} \quad A(x^+ - x^-) = y, \qquad x^+, x^- \geq 0. \qquad (24)$$

The components of the vectors $x^+$ and $x^-$ represent the positive and negative parts of $x$. Of course, for them to be the positive and negative parts, one or the other of each component pair must vanish. It is easy to see that this condition is satisfied automatically at optimality.
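As an illustration of $P_{1:\mathrm{LP}}$, the following sketch (ours, not the authors' code) solves a random instance with SciPy's HiGHS-based LP solver; the dimensions are small stand-ins for the $m = 1{,}000$, $n = 20{,}000$ instances used in the numerical section. It also checks the complementarity claim: at optimality, $x^+_i x^-_i = 0$ for every component.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 120, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# P1:LP -- minimize 1'(xp + xm) subject to A(xp - xm) = y, xp, xm >= 0.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
xp, xm = res.x[:n], res.x[n:]

print(np.max(xp * xm))                  # ~0: the split is complementary at optimality
print(np.max(np.abs((xp - xm) - x0)))   # ~0: exact recovery of the sparse signal
```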

2.2 Kronecker-Sensor Method

The coupled Kronecker sensing problem $P_2$ can also be converted to a linear programming problem:

$$(P_{2:\mathrm{LP}}) \qquad \min_{X^+, X^-} \ \mathbf{1}^T (X^+ + X^-)\, \mathbf{1}$$
$$\text{subject to} \quad A(X^+ - X^-)B^T = Y, \qquad X^+, X^- \geq 0,$$

where, as with vectors, the matrix inequality denotes component-wise inequality. As before, the components of the matrices $X^+$ and $X^-$ become the component-wise positive and negative parts of $X$ at optimality.

As formulated, this linear programming problem is no easier to solve than the vector-sensor formulation $P_{1:\mathrm{LP}}$. However, it is easy to give an equivalent formulation that is computationally more tractable. The trick is to break the constraint $A(X^+ - X^-)B^T = Y$ into a pair of constraints:

$$(P_{2:\mathrm{LPsparse}}) \qquad \min_{X^+, X^-} \ \mathbf{1}^T (X^+ + X^-)\, \mathbf{1}$$
$$\text{subject to} \quad A(X^+ - X^-) - Z = 0, \quad ZB^T = Y, \quad X^+, X^- \geq 0.$$

Despite the introduction of the new variables $Z$, this formulation can be solved more efficiently because the constraint relations have been made sparse. Indeed, the matrix $Y$ has $m = m_1 m_2$ entries, so there are $m$ constraints in the equation system $A(X^+ - X^-)B^T = Y$, and each of these constraints involves every one of the $n = n_1 n_2$ variables in $X^+$ and in $X^-$. On the other hand, there are $m_1 n_2$ constraints in the equation $A(X^+ - X^-) - Z = 0$, but each column of this matrix equation relates just that column of $X^+$ (and $X^-$) to the corresponding column of $Z$. Hence, these equations are highly sparse. The same is true for $ZB^T = Y$. It turns out that this sparsification trick is closely related to the fundamental idea underlying fast Fourier optimization; see Vanderbei (2012) for a more detailed discussion. Sparsity is also well known to be a key factor in efficiently solving linear programming problems; see Vanderbei (1991).

In a similar manner, the decoupled Kronecker sensing method can also be expressed as a linear programming problem.
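To make the sparsification concrete, the sketch below (our illustration, using SciPy sparse matrices) builds both constraint systems over column-stacked variables and compares their nonzero counts. It uses the identities $\mathrm{vec}(AX) = (I_{n_2} \otimes A)\,\mathrm{vec}(X)$ and $\mathrm{vec}(ZB^T) = (B \otimes I_{m_1})\,\mathrm{vec}(Z)$ (standard Kronecker-product conventions), so the split system has block rows $[\,I_{n_2} \otimes A \ \ -(I_{n_2} \otimes A) \ \ -I\,]$ and $[\,0 \ \ 0 \ \ B \otimes I_{m_1}\,]$, while the dense coupled system is the full Kronecker matrix $U$.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
m1, n1, m2, n2 = 20, 60, 20, 60
A = rng.standard_normal((m1, n1))
B = rng.standard_normal((m2, n2))

# Dense coupled system: U vec(X) = vec(Y).
U = sp.kron(sp.csr_matrix(B), sp.csr_matrix(A), format="csr")

# Sparse split system over the variables (vec(X+), vec(X-), vec(Z)):
#   A(X+ - X-) - Z = 0   and   Z B^T = Y.
IA = sp.kron(sp.eye(n2), sp.csr_matrix(A), format="csr")  # vec(AX) = (I (x) A) vec(X)
BI = sp.kron(sp.csr_matrix(B), sp.eye(m1), format="csr")  # vec(ZB^T) = (B (x) I) vec(Z)
top = sp.hstack([IA, -IA, -sp.eye(m1 * n2)], format="csr")
bottom = sp.hstack([sp.csr_matrix((m1 * m2, 2 * n1 * n2)), BI], format="csr")
constraints = sp.vstack([top, bottom], format="csr")

print("coupled nonzeros:", U.nnz)            # m1*m2*n1*n2: fully dense
print("split nonzeros:  ", constraints.nnz)  # roughly an order of magnitude fewer
```

The gap between the two counts grows with the problem dimensions, which is what makes the split formulation so much cheaper for an LP solver.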

2.3 Vector-Sensor Method Solved by the Parametric Simplex Method

Finally, we consider a completely different way to improve the computational efficiency of solving the linear programming problems presented above. Consider the following parametric modification to $P_1$:

$$\hat{x} := \operatorname*{argmin}_{x,\, \epsilon} \ \mu \|x\|_1 + \|\epsilon\|_1 \quad \text{subject to} \quad Ax + \epsilon = y. \qquad (25)$$

We have relaxed the fundamental equality constraint and added a penalty term to the objective function that penalizes in proportion to the degree to which the original equations are not met. For large values of $\mu$, the optimal solution has $x = 0$ and $\epsilon = y$. For values of $\mu$ close to zero, the situation reverses: $\epsilon = 0$. Our aim is to reformulate this problem as a parametric linear programming problem and solve it using the parametric simplex method (see, e.g., Vanderbei (2007)). In particular, we start the parameter at $\mu = \infty$ and successively reduce the value of $\mu$ for which the current basic solution is optimal, until arriving at a value of $\mu$ for which the optimal solution has $\epsilon = 0$, at which point we will have solved the original problem. If the number of pivots is small, then the final vector $\hat{x}$ will be mostly zero.

It turns out that the best way to reformulate (25) as a linear programming problem is, as before, to split each variable into a difference of two nonnegative variables, $x = x^+ - x^-$ and $\epsilon = \epsilon^+ - \epsilon^-$, and then to replace $\|x\|_1$ with $\mathbf{1}^T(x^+ + x^-)$ and to make a similar substitution for $\|\epsilon\|_1$. In general, such a sum does not represent the absolute value, but it is easy to see that it does at optimality. Here is the linear programming problem:

$$\min_{x^+, x^-, \epsilon^+, \epsilon^-} \ \mu\, \mathbf{1}^T(x^+ + x^-) + \mathbf{1}^T(\epsilon^+ + \epsilon^-)$$
$$\text{subject to} \quad A(x^+ - x^-) + (\epsilon^+ - \epsilon^-) = y, \qquad x^+, x^-, \epsilon^+, \epsilon^- \geq 0.$$

For $\mu$ large, the optimal solution has $x^+ = x^- = 0$ and $\epsilon^+ - \epsilon^- = y$. And, given that these latter variables are required to be nonnegative, it follows that $y_i > 0 \implies \epsilon^+_i > 0$ and $\epsilon^-_i = 0$, whereas $y_i < 0 \implies \epsilon^-_i > 0$ and $\epsilon^+_i = 0$ (the equality case can be decided either way). With these choices of variable values, the solution is feasible for all $\mu$ and is optimal for large $\mu$. Furthermore, declaring the nonzero variables basic and the zero variables nonbasic, we see that this optimal solution is also a basic solution and can therefore serve as a starting point for the parametric simplex method.
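The sketch below imitates this homotopy in $\mu$. It is not the parametric simplex method: rather than pivoting between breakpoints, it re-solves the LP from scratch with SciPy's solver for each value on a decreasing $\mu$ grid of our choosing. Still, it shows the qualitative behavior the method exploits: as $\mu$ decreases, the support of $x$ grows and $\|\epsilon\|_1$ shrinks to zero.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 30, 90, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Variables: (x+, x-, eps+, eps-), all nonnegative.
A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])
for mu in [10.0, 3.0, 1.0, 0.3, 0.1]:
    c = np.concatenate([mu * np.ones(2 * n), np.ones(2 * m)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x = res.x[:n] - res.x[n:2 * n]
    eps = res.x[2 * n:2 * n + m] - res.x[2 * n + m:]
    print(f"mu={mu:5.2f}  ||x||_0={np.sum(np.abs(x) > 1e-8):2d}  "
          f"||eps||_1={np.sum(np.abs(eps)):.4f}")
```

The parametric simplex method gets the same trajectory far more cheaply, since each reduction in $\mu$ typically triggers only a handful of pivots from the current optimal basis.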

The parametric simplex algorithm described in Vanderbei (2007) can be downloaded from the author's website, and it is easy to modify that code to generate and solve the problems discussed here. In fact, we have done so: problems having the size characteristics described above solve in about 5 seconds.

3 Numerical Results

For the vector sensor, we generated random problems using $m = 1{,}000$ and $n = 20{,}000$. We varied the number of nonzeros $k$ in the signal $x^0$ from zero to 100. We solved the straightforward linear programming formulations of these instances using an interior-point solver called loqo (Vanderbei (1999)). We also solved a large number of instances of the parametrically formulated problem using the parametric simplex method as outlined above.

We followed a similar plan for the Kronecker (matrix) sensor. For these problems, we used $m_1 = 33$, $m_2 = 34$, $n_1 = 141$, $n_2 = 142$, and, again, various values of $k$. Again, the straightforward linear programming problems were solved by loqo, and the parametrically formulated versions were solved by a custom-developed parametric simplex method. We also implemented the decoupled matrix sensor, along with a parametric simplex method for it. The results are shown in Figure 1.

In terms of solving the linear programming problems $P_{1:\mathrm{LP}}$ and $P_{2:\mathrm{LP}}$ directly using an interior-point solver, the latter solves more than an order of magnitude faster. Moreover, its solution time is virtually independent of the signal sparsity parameter $k$. When the signal is very sparse, say $k \leq 70$, both of the parametric simplex codes beat the interior-point solver, and the improvement increases with sparsity. With $k = 10$, the improvement is yet another order of magnitude. So, for $k = 10$, the improvement over the original LP formulation of the vector sensor is almost a factor of 200. This improvement is dramatic and warrants further investigation.

[Fig. 1 Solution times for a large number of problem instances having $m \approx 1{,}000$, $n \approx 20{,}000$, and various degrees of sparsity in the underlying signal. The horizontal axis shows the number of nonzeros in the signal; the vertical axis gives solution times (in seconds) on a logarithmic scale. The six curves correspond to LOQO and the parametric simplex method, each applied to the vector, matrix (Kronecker), and decoupled formulations.]

Finally, the decoupled problems solve extremely fast with both the regular LP method and the parametric method. However, beyond $k = 8$ they do not correctly recover the original signal. The main reason is that in the first stage of $P_3$ we need to solve $m_2$ linear programming subproblems to recover the columns of $W$. Although each subproblem admits perfect recovery with high probability, among all $m_2$ subproblems several will not be recovered well, and these first-stage errors are significantly magnified in the second stage. This error-cascading phenomenon leads to the inferior performance of decoupled KCS compared to the coupled version.

4 Conclusions

We revisit compressed sensing from an optimization perspective. We advocate the use of the parametric simplex algorithm for solving large-scale compressed sensing problems. The parametric simplex method is a homotopy algorithm and enjoys many good computational properties. We also propose two alternative formulations of compressed sensing that illustrate a tradeoff between computation and statistics. In future work, we plan to extend the proposed methods to the setting of 1-bit compressed sensing.

References

Aharon, M., Elad, M. and Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing.

Baraniuk, R., Davenport, M. A., Duarte, M. F. and Hegde, C. (2010). An Introduction to Compressive Sensing. Connexions, Rice University, Houston, Texas.

Borup, L., Gribonval, R. and Nielsen, M. (2008). Beyond coherence: Recovering structured time-frequency representations. Applied and Computational Harmonic Analysis.

Cai, T. T. and Zhang, A. (2012). Sharp RIP bound for sparse signal and low-rank matrix recovery. Applied and Computational Harmonic Analysis.

Candès, E., Romberg, J. and Tao, T. (2006). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory.

Candès, E. J., Eldar, Y. C., Needell, D. and Randall, P. (2010). Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis.

Chen, S. S., Donoho, D. L. and Saunders, M. A. (1998). Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing.

Cohen, A., Dahmen, W. and DeVore, R. (2009). Compressed sensing and best k-term approximation. Journal of the American Mathematical Society.

Donoho, D. L. (2006). Compressed sensing. IEEE Transactions on Information Theory.

Donoho, D. L. and Elad, M. (2003). Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization. Proceedings of the National Academy of Sciences USA, 100.

Donoho, D. L., Elad, M. and Temlyakov, V. N. (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory.

Donoho, D. L. and Huo, X. (2001). Uncertainty principles and ideal atomic decomposition. IEEE Transactions on Information Theory.

Donoho, D. L. and Kutyniok, G. (2010). Microlocal analysis of the geometric separation problem. CoRR.

Donoho, D. L., Maleki, A. and Montanari, A. (2009). Message passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences USA.

Donoho, D. L. and Stark, P. B. (1989). Uncertainty principles and signal recovery. SIAM Journal on Applied Mathematics.

Donoho, D. L. and Tanner, J. (2005a). Neighborliness of randomly projected simplices in high dimensions. Proceedings of the National Academy of Sciences USA.

Donoho, D. L. and Tanner, J. (2005b). Sparse nonnegative solutions of underdetermined linear equations by linear programming. Proceedings of the National Academy of Sciences USA.

Donoho, D. L. and Tanner, J. (2009). Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philosophical Transactions of the Royal Society A.

Duarte, M. F. and Baraniuk, R. G. (2012). Kronecker compressive sensing. IEEE Transactions on Image Processing.

Elad, M. (2010). Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer.

Elad, M. and Bruckstein, A. M. (2002). A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Transactions on Information Theory.

Figueiredo, M., Nowak, R. and Wright, S. (2008). Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing.

Foucart, S. (2010). A note on guaranteed sparse recovery via $\ell_1$-minimization. Applied and Computational Harmonic Analysis.

Gilbert, A. C., Strauss, M. J., Tropp, J. A. and Vershynin, R. (2007). One sketch for all: Fast algorithms for compressed sensing. In STOC (D. S. Johnson and U. Feige, eds.). ACM.

Iwen, M. A. (2010). Combinatorial sublinear-time Fourier algorithms. Foundations of Computational Mathematics.

Kutyniok, G. (2012). Compressed sensing: Theory and applications. CoRR.

Kutyniok, G. and Lim, W.-Q. (2011). Compactly supported shearlets are optimally sparse. Journal of Approximation Theory.

Mallat, S. and Zhang, Z. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing.

Needell, D. and Tropp, J. A. (2010). CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Communications of the ACM.

Needell, D. and Vershynin, R. (2009). Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Foundations of Computational Mathematics.

Rauhut, H., Schnass, K. and Vandergheynst, P. (2008). Compressed sensing and redundant dictionaries. IEEE Transactions on Information Theory.

Tropp, J. A. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory.

Vanderbei, R. (1991). Splitting dense columns in sparse linear systems. Linear Algebra and its Applications.

Vanderbei, R. (1999). LOQO: An interior point code for quadratic programming. Optimization Methods and Software.

Vanderbei, R. (2007). Linear Programming: Foundations and Extensions. 3rd ed. Springer.

Vanderbei, R. J. (2012). Fast Fourier optimization. Mathematical Programming Computation.

Yin, W., Osher, S., Goldfarb, D. and Darbon, J. (2008). Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences.


More information

Tractable performance bounds for compressed sensing.

Tractable performance bounds for compressed sensing. Tractable performance bounds for compressed sensing. Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure/INRIA, U.C. Berkeley. Support from NSF, DHS and Google.

More information

On Optimal Frame Conditioners

On Optimal Frame Conditioners On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,

More information

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property

Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property : An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,

More information

MATCHING PURSUIT WITH STOCHASTIC SELECTION

MATCHING PURSUIT WITH STOCHASTIC SELECTION 2th European Signal Processing Conference (EUSIPCO 22) Bucharest, Romania, August 27-3, 22 MATCHING PURSUIT WITH STOCHASTIC SELECTION Thomas Peel, Valentin Emiya, Liva Ralaivola Aix-Marseille Université

More information

Fast Hard Thresholding with Nesterov s Gradient Method

Fast Hard Thresholding with Nesterov s Gradient Method Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science

More information

EUSIPCO

EUSIPCO EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,

More information

On Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery

On Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery On Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery Jeffrey D. Blanchard a,1,, Caleb Leedy a,2, Yimin Wu a,2 a Department of Mathematics and Statistics, Grinnell College, Grinnell, IA

More information

The Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology

The Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology The Sparsity Gap Joel A. Tropp Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu Research supported in part by ONR 1 Introduction The Sparsity Gap (Casazza Birthday

More information

COMPRESSED SENSING IN PYTHON

COMPRESSED SENSING IN PYTHON COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed

More information

Compressed Sensing and Redundant Dictionaries

Compressed Sensing and Redundant Dictionaries Compressed Sensing and Redundant Dictionaries Holger Rauhut, Karin Schnass and Pierre Vandergheynst December 2, 2006 Abstract This article extends the concept of compressed sensing to signals that are

More information

Noisy Signal Recovery via Iterative Reweighted L1-Minimization

Noisy Signal Recovery via Iterative Reweighted L1-Minimization Noisy Signal Recovery via Iterative Reweighted L1-Minimization Deanna Needell UC Davis / Stanford University Asilomar SSC, November 2009 Problem Background Setup 1 Suppose x is an unknown signal in R d.

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Stochastic geometry and random matrix theory in CS

Stochastic geometry and random matrix theory in CS Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder

More information

Stopping Condition for Greedy Block Sparse Signal Recovery

Stopping Condition for Greedy Block Sparse Signal Recovery Stopping Condition for Greedy Block Sparse Signal Recovery Yu Luo, Ronggui Xie, Huarui Yin, and Weidong Wang Department of Electronics Engineering and Information Science, University of Science and Technology

More information

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization

More information