A Note on the Complexity of L_p Minimization


Mathematical Programming manuscript No. (will be inserted by the editor)

A Note on the Complexity of L_p Minimization

Dongdong Ge (Antai School of Economics and Management, Shanghai Jiao Tong University, Shanghai, China; ddge@sjtu.edu.cn)
Xiaoye Jiang (Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA; xiaoye@stanford.edu)
Yinyu Ye (Department of Management Science and Engineering, Stanford University, Stanford, CA; yinyu-ye@stanford.edu)

Abstract. We discuss the L_p (0 ≤ p < 1) minimization problem arising from sparse solution construction and compressed sensing. For any fixed 0 < p < 1, we prove that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time. We also develop an interior-point potential reduction algorithm with a provable complexity bound and present preliminary computational results demonstrating the effectiveness of the algorithm.

Keywords: nonconvex programming; global optimization; interior-point method; sparse solution reconstruction

MSC2010 Classification Codes: 90C26, 90C51

1 Introduction

In this note, we consider the following optimization problems:

(P)   Minimize   p(x) := ∑_{j=1}^n x_j^p
      Subject to x ∈ F := {x : Ax = b, x ≥ 0},                        (1)

and

      Minimize   ∑_{j=1}^n |x_j|^p
      Subject to x ∈ F̄ := {x : Ax = b},                               (2)

where the problem inputs consist of A ∈ R^{m×n}, b ∈ R^m, and 0 < p < 1.

Sparse signal or solution reconstruction by solving optimization problem (1) or (2), especially for the cases 0 < p ≤ 1, has recently received considerable attention; see, for example, [6,22] and the references therein. In signal reconstruction, one typically has linear measurements b = Ax where x is a sparse signal, i.e., the sparsest or smallest-support-cardinality solution of the linear system. This sparse signal is recovered by solving the inverse problem (1) or (2) with the objective function ||x||_0, the L_0 norm of x, defined as the number of nonzero components of x [7]. In this note we essentially consider the L_p norm functional L_p(x) = (∑_{j=1}^n |x_j|^p)^{1/p}, where 0 < p ≤ 1, as our optimization objective. It is easy to see that L_p^p(x) converges to the L_0 norm functional as p → 0 on a bounded set Ω ⊂ R^n. Thus, as a potential approach to the sparse signal problem, minimizing the L_p norm function of x, or simply p(x), naturally arises.
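To make this limiting behaviour concrete, the following small numeric sketch (our own illustration, not part of the original paper) evaluates p(x) = ∑_j |x_j|^p for a fixed sparse vector and decreasing values of p; the values approach ||x||_0.

```python
# Illustration only: p(x) = sum_j |x_j|^p approaches the L_0 norm (support size)
# of a fixed vector x as p -> 0.
import numpy as np

x = np.array([4.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # a sparse vector with ||x||_0 = 1
for p in (1.0, 0.5, 0.1, 0.01):
    value = np.sum(np.abs(x) ** p)              # the objective p(x) of problem (1)/(2)
    print(f"p = {p:5.2f}   sum_j |x_j|^p = {value:.4f}")
# The printed values decrease toward 1, the number of nonzero components of x.
```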

In terms of computational complexity, the problem with the L_0 norm is known to be NP-hard [19]. On the other hand, when p = 1, problem (1) or (2), which is a relaxation of the L_0 norm problem, is a linear program and hence solvable in polynomial time. For more general convex programs involving the L_p norm (p > 1) in the objective (or constraints) and in conic representation, readers are referred to [20] and [25]. In [7,12], it was shown that if a certain restricted isometry property (RIP) holds for A, then the solutions of the L_p norm minimization for p = 0 and p = 1 are identical. Hence, the L_0 minimization problem can be relaxed to problem (2) with p = 1. However, this property may be too strong for practical basis design matrices A. Instead, one may consider the sparse recovery problem by solving relaxation problem (1) or (2) for a fixed p with 0 < p < 1. Recently, this approach has attracted many research efforts in variable selection and sparse reconstruction, e.g., [9,10,13,17]. It exhibits desirable threshold bounds on any non-zero entry of a computed solution [11], and computational experiments show that by replacing p = 1 with p < 1, reconstruction can be done equally fast with many fewer measurements while maintaining robustness to noise and signal non-sparsity [10].

In this paper we present several interesting properties of the L_p (0 < p < 1) minimization problem. Minimization problems (1) and (2) are both strongly NP-hard. However, any basic feasible solution of (1) or (2) is a local minimizer. Moreover, a feasible point satisfying the first order and second order necessary optimality conditions is always a local minimizer. This motivates us to design interior-point algorithms, which iterate within the interior of the feasible region, to generate a local minimizer (hopefully a sparse one) satisfying the Karush-Kuhn-Tucker (KKT) optimality conditions.

1.1 Notation and Preliminaries

For simplicity of analysis, throughout this paper we assume that the feasible set F is bounded and A has full rank. A feasible point x* is called a local minimum point or local minimizer of problem (1) if there exists ε > 0 such that p(x*) ≤ p(x) for any x ∈ B(x*, ε) ∩ F, where B(x*, ε) = {x : ||x − x*||_2 ≤ ε}. For any feasible solution x of (1), let S(x) be the support or active set of x, that is, S(x) = {j : x_j > 0}; thus |S(x)| ≤ n.

Let x* be a local minimum point of problem (1). Although p(x) is not differentiable when x* is on the boundary of the feasible region, it is not difficult to verify that the non-zero components of x* must form a local minimizer of the restricted problem in the active variables, that is, of

    min  p_{S(x*)}(z) := ∑_{j∈S(x*)} z_j^p,   s.t.  A_{S(x*)} z = b,  z ≥ 0,

where A_S ∈ R^{m×|S|} is the submatrix of A consisting exactly of the columns of A indexed by S (see [11], for example). Thus, x* ≥ 0 satisfies the following necessary conditions ([2]).

The first order necessary condition or KKT condition: there exists a Lagrange multiplier vector y* ∈ R^m such that

    p (x_j*)^{p−1} − (A^T y*)_j ≥ 0,   ∀ j ∈ S(x*),                    (3)

and the complementarity condition holds:

    p (x_j*)^p − x_j* (A^T y*)_j = 0,   ∀ j.                            (4)

The second order necessary condition:

    λ^T ∇²_{zz} p_{S(x*)}(z*) λ ≥ 0                                      (5)

for all λ ∈ R^{|S(x*)|} such that A_{S(x*)} λ = 0, where z* is the vector of all non-zero components of x*.

The fact that F is bounded and nonempty implies that p(x) attains a minimum value (denoted z̲) and a maximum value (denoted z̄) over F. An ε-minimal solution or ε-minimizer is defined as a feasible solution x such that

    (p(x) − z̲) / (z̄ − z̲) ≤ ε.                                          (6)
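As a concrete reading of conditions (3) and (4), the following sketch (our own, not from the paper; the candidate pair and tolerances are illustrative assumptions) numerically verifies the first-order and complementarity conditions at a given point x* with multiplier y*.

```python
# Sketch: numerical check of the first-order condition (3) and the
# complementarity condition (4) at a candidate pair (x_star, y_star).
# The tolerance and the candidate data are illustrative assumptions.
import numpy as np

def check_necessary_conditions(A, x_star, y_star, p, tol=1e-8):
    Aty = A.T @ y_star
    support = x_star > tol                         # S(x*) = {j : x*_j > 0}
    cond3 = np.all(p * x_star[support] ** (p - 1.0) - Aty[support] >= -tol)
    cond4 = np.allclose(p * x_star ** p - x_star * Aty, 0.0, atol=tol)
    return cond3, cond4
```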

Vavasis [23] demonstrated the importance of the term z̄ − z̲ in such a criterion for continuous optimization. Similarly, Ye [26] defined an ε-KKT (or ε-stationary) point as a pair (x, y) that satisfies (3) on the active set S(x) of x and whose complementarity gap satisfies

    ∑_{j=1}^n ( p x_j^p − x_j (A^T y)_j ) / (z̄ − z̲) ≤ ε.               (7)

This note is organized as follows. In Section 2 we show that the L_p (0 < p < 1) minimization problem (1) or (2) is strongly NP-hard. In Section 3 we prove that the set of all basic feasible solutions of (1) and (2) coincides with the set of all local minimizers. In Section 4 we present our FPTAS (fully polynomial time approximation scheme) interior-point potential reduction algorithm for finding an ε-KKT point of problem (1). Numerical experiments testing its efficiency are reported in Section 5.

2 Hardness

To help the reader understand the basic idea of the hardness proof, we start by showing that the L_p (0 < p < 1) minimization problems (1) and (2) are both NP-hard before we prove their strong NP-hardness. To prove NP-hardness, we employ a polynomial time reduction from the well-known NP-complete partition problem [16]. The partition problem can be described as follows: given a set S of integers or rational numbers {a_1, a_2, ..., a_n}, is there a way to partition S into two disjoint subsets S_1 and S_2 such that the sum of the numbers in S_1 equals the sum of the numbers in S_2?

An instance of the partition problem can be reduced to an instance of the L_p (0 < p < 1) minimization problem (1) that has optimal value n if and only if the former has an equitable bipartition. Consider the following minimization problem:

      Minimize   P(x, y) = ∑_{j=1}^n (x_j^p + y_j^p)
      Subject to a^T (x − y) = 0,
                 x_j + y_j = 1,  ∀ j,
                 x, y ≥ 0.                                              (8)

From the strict concavity of the objective function, x_j^p + y_j^p ≥ x_j + y_j (= 1) for every j, and equality holds if and only if (x_j = 1, y_j = 0) or (x_j = 0, y_j = 1). Thus, P(x, y) ≥ n for any (continuous) feasible solution of problem (8). If there is a feasible (optimal) solution pair (x, y) with P(x, y) = n, it must be true that x_j^p + y_j^p = 1 = x_j + y_j for all j, so that (x, y) must be a binary solution ((x_j = 1, y_j = 0) or (x_j = 0, y_j = 1)), which generates an equitable bipartition of the entries of a. On the other hand, if the entries of a have an equitable bipartition, then the problem must have a binary solution pair (x, y) with P(x, y) = n. This proves the NP-hardness of problem (1). Note that the objective value of (8) is the constant n when p = 1, so that any feasible solution is a (global) minimizer. However, the sparsest solution of (8) has cardinality n if the problem admits an equitable bipartition.

The same instance of the partition problem can also be reduced to the following minimization problem in form (2):

      Minimize   ∑_{j=1}^n (|x_j|^p + |y_j|^p)
      Subject to a^T (x − y) = 0,
                 x_j + y_j = 1,  ∀ j.                                   (9)

Note that this problem has no non-negativity constraints on the variables (x, y). However, for any feasible solution (x, y) of the problem, we still have |x_j|^p + |y_j|^p ≥ x_j + y_j (= 1) for every j. This is because the minimal value of |x_j|^p + |y_j|^p subject to x_j + y_j = 1 is 1, and equality holds if and only if (x_j = 1, y_j = 0) or (x_j = 0, y_j = 1). Therefore, the instance of the partition problem has an equitable bipartition if and only if the optimal value of (9) is n. This leads to the NP-hardness of (2).
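The reduction behind (8) is easy to materialize. The sketch below (our own construction; the function name is ours) builds the data (A, b) of a problem in form (1) from a partition instance a, using the stacked variable z = (x, y) ∈ R^{2n}.

```python
# Sketch of the reduction used for (8): a partition instance a = (a_1,...,a_n)
# becomes data (A, b) of problem (1) in the stacked variable z = (x, y).
import numpy as np

def partition_to_lp_data(a):
    a = np.asarray(a, dtype=float)
    n = a.size
    I = np.eye(n)
    A = np.vstack([np.concatenate([a, -a]),   # a^T x - a^T y = 0
                   np.hstack([I, I])])        # x_j + y_j = 1 for every j
    b = np.concatenate([[0.0], np.ones(n)])
    return A, b                               # feasible set {z : Az = b, z >= 0}

# The instance has an equitable bipartition iff the optimal value of
# sum_j (x_j^p + y_j^p) over this feasible set equals n.
```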

Now we prove a stronger result.

Theorem 1 The L_p (0 < p < 1) minimization problems (1) and (2) are both strongly NP-hard.

Proof We present a polynomial time reduction from the well-known strongly NP-hard 3-partition problem [15,16]. The 3-partition problem can be described as follows: given a multiset S of n = 3m integers {a_1, a_2, ..., a_n} whose sum equals mB and each of which lies strictly between B/4 and B/2, can S be partitioned into m subsets such that the numbers in each subset have the same sum, namely B (which implies that each subset has exactly three elements)?

We describe a reduction from an instance of the 3-partition problem to an instance of the L_p (0 < p < 1) minimization problem (1) that has optimal value n if and only if the former has an equitable 3-partition. Given an instance of the 3-partition problem, let the vector a = (a_1, a_2, ..., a_n) ∈ R^n with n = 3m, let the sum of the entries of a be mB, and let each a_i ∈ (B/4, B/2). Consider the following minimization problem in form (1):

      Minimize   P(x) = ∑_{i=1}^n ∑_{j=1}^m x_{ij}^p
      Subject to ∑_{j=1}^m x_{ij} = 1,   i = 1, 2, ..., n,
                 ∑_{i=1}^n a_i x_{ij} = B,   j = 1, 2, ..., m,
                 x_{ij} ≥ 0,   i = 1, 2, ..., n;  j = 1, 2, ..., m.     (10)

From the strict concavity of the objective function and x_{ij} ∈ [0, 1],

    ∑_{j=1}^m x_{ij}^p ≥ ∑_{j=1}^m x_{ij} (= 1),   i = 1, 2, ..., n.

Equality holds if and only if x_{i j_0} = 1 for some j_0 and x_{ij} = 0 for all j ≠ j_0. Thus, P(x) ≥ n for any feasible solution of (10). If there is a feasible (optimal) solution x with P(x) = n, it must be true that ∑_{j=1}^m x_{ij}^p = 1 = ∑_{j=1}^m x_{ij} for all i, so that for every i, x_{i j_0} = 1 for some j_0 and x_{ij} = 0 for j ≠ j_0. This generates an equitable 3-partition of the entries of a. On the other hand, if the entries of a have an equitable 3-partition, then (10) must have a binary solution x with P(x) = n. This proves the strong NP-hardness of problem (1) according to [15].

For the same instance of the 3-partition problem, we consider the following minimization problem in form (2):

      Minimize   ∑_{i=1}^n ∑_{j=1}^m |x_{ij}|^p
      Subject to ∑_{j=1}^m x_{ij} = 1,   i = 1, 2, ..., n,
                 ∑_{i=1}^n a_i x_{ij} = B,   j = 1, 2, ..., m.          (11)

Note that this problem has no non-negativity constraints on the variables x. However, for any feasible solution x of the problem, we still have

    ∑_{j=1}^m |x_{ij}|^p ≥ ∑_{j=1}^m x_{ij} (= 1),   i = 1, 2, ..., n.

This is because the minimal value of ∑_{j=1}^m |x_{ij}|^p subject to ∑_{j=1}^m x_{ij} = 1 is 1, and equality holds if and only if x_{i j_0} = 1 for some j_0 and x_{ij} = 0 for all j ≠ j_0. Therefore, we can similarly prove that the instance of the 3-partition problem has an equitable 3-partition if and only if the optimal value of (11) is n, which leads to the strong NP-hardness of problem (2).

2.1 The Hardness of Smoothed L_p Minimization

To avoid the non-differentiability of p(x), smooth versions of L_p minimization were developed:

      Minimize   ∑_{j=1}^n (x_j + ε)^p
      Subject to x ∈ G := {x : Ax = b, x ≥ 0},                          (12)

and

      Minimize   ∑_{j=1}^n (|x_j| + ε)^p
      Subject to x ∈ Ḡ := {x : Ax = b},                                 (13)

for a fixed positive constant ε; see [5,9-11].

A similar reduction can be used to derive the NP-hardness of the smoothed versions.

Theorem 2 The smoothed L_p (0 < p < 1) minimization problems (12) and (13) are both strongly NP-hard.

Proof For problem (12), we again form a minimization problem for the 3-partition problem, similar in form to (10), with the objective function defined as follows:

      Minimize   P_ε(x) = ∑_{i=1}^n ∑_{j=1}^m (x_{ij} + ε)^p
      Subject to ∑_{j=1}^m x_{ij} = 1,   i = 1, 2, ..., n,
                 ∑_{i=1}^n a_i x_{ij} = B,   j = 1, 2, ..., m,
                 x_{ij} ≥ 0,   i = 1, 2, ..., n;  j = 1, 2, ..., m.     (14)

From the concavity of the objective function, for every i we have

    ∑_{j=1}^m (x_{ij} + ε)^p = ∑_{j=1}^m ( x_{ij} + ε ∑_{k=1}^m x_{ik} )^p
                             = ∑_{j=1}^m ( x_{ij}(1 + ε) + ε(1 − x_{ij}) )^p
                             ≥ ∑_{j=1}^m ( x_{ij}(1 + ε)^p + ε^p (1 − x_{ij}) )
                             = (1 + ε)^p + (m − 1) ε^p.

Equality holds if and only if x_{i j_0} = 1 for some j_0 and x_{ij} = 0 for all j ≠ j_0. Thus, P_ε(x) ≥ n((1 + ε)^p + (m − 1)ε^p) for any feasible solution of (14). If there is a feasible (optimal) solution x with P_ε(x) = n((1 + ε)^p + (m − 1)ε^p), it must be true that ∑_{j=1}^m (x_{ij} + ε)^p = (1 + ε)^p + (m − 1)ε^p for all i, so that for every i, x_{i j_0} = 1 for some j_0 and x_{ij} = 0 for j ≠ j_0. This generates an equitable 3-partition of the entries of a. On the other hand, if the entries of a have an equitable 3-partition, then (14) must have a binary solution x with P_ε(x) = n((1 + ε)^p + (m − 1)ε^p). This proves the strong NP-hardness of problem (14) and thereby of (12). Similar arguments can be developed to prove the strong NP-hardness of problem (13).

3 The Easiness

The above discussion reveals that finding a global minimizer of the L_p norm minimization problem is strongly NP-hard whenever p < 1. From a theoretical perspective, a strongly NP-hard optimization problem with a polynomially bounded objective function does not admit an FPTAS unless P = NP [24]. Thus, relaxing p = 0 to some p < 1 gains no advantage in terms of (worst-case) computational complexity. We now turn our attention to local minimizers. Note that, for many optimization problems, finding a local minimizer, or even checking whether a given solution is a local minimizer, remains NP-hard. Here we show that local minimizers of problems (1) and (2) are easy to certify and compute.

Theorem 3 The set of all basic feasible solutions of (1) or (2) is exactly the set of its local minimizers. Thus, computing a local minimizer of (1) or (2) can be done in polynomial time.

Proof Let x̄ be a basic feasible solution (or extreme point) of the feasible polytope of (1), where, without loss of generality, the basic variables are x̄_B = (x̄_1, ..., x̄_m) > 0 and x̄_j = 0 for j = m+1, ..., n. The feasible directions form a polyhedral cone pointed at x̄. Consider a directional edge of the feasible direction cone, d = d^{m+1} := (d_1, d_2, ..., d_{m+1}, 0, ..., 0), from x̄ to an adjacent extreme point, normalized so that ||d||_2 = 1. Since d is a feasible direction, there exists an appropriate fixed ε_{m+1} > 0 such that x̄ + εd is feasible for any 0 < ε < ε_{m+1}, which implies d_{m+1} > 0. Thus

    p(x̄ + εd) − p(x̄) = ∑_{i=1}^m ( (x̄_i + εd_i)^p − x̄_i^p ) + (ε d_{m+1})^p.

Define the index set I = {i : d_i < 0}. If I is empty, then p(x̄ + εd) > p(x̄) for any ε > 0, so that d is a strictly increasing feasible direction. If I is nonempty, one can choose a sufficiently small but fixed ε'_{m+1} such that x̄_i + εd_i ≥ x̄_i/2 for every i ∈ I and 0 ≤ ε ≤ ε'_{m+1}. Then, for such 0 < ε ≤ ε'_{m+1},

    p(x̄ + εd) − p(x̄) ≥ ∑_{i∈I} ( (x̄_i + εd_i)^p − x̄_i^p ) + (ε d_{m+1})^p
                      ≥ ∑_{i∈I} (ε d_i) p (x̄_i/2)^{p−1} + (ε d_{m+1})^p,

where the last inequality comes from the strict concavity of p(x). Note that

    ∑_{i∈I} (ε d_i) p (x̄_i/2)^{p−1} + (ε d_{m+1})^p > 0   whenever   ε < ε''_{m+1} := ( d_{m+1}^p / ( p ∑_{i∈I} (−d_i)(x̄_i/2)^{p−1} ) )^{1/(1−p)}.     (15)

This again shows that the edge direction d = d^{m+1} is a strictly increasing direction within a fixed step size min{ε_{m+1}, ε'_{m+1}, ε''_{m+1}} > 0.

There are at most n − m edge directions of the feasible direction cone, say d^j, j = m+1, ..., n. Thus, there exists a fixed ε_{x̄} > 0 such that x̄ + εd^j is feasible and p(x̄ + εd^j) − p(x̄) > 0 for all j = m+1, ..., n and all 0 < ε ≤ ε_{x̄}. Let Conv(x̄, ε_{x̄}) denote the convex hull spanned by the points x̄ and x̄ + ε_{x̄} d^j, j = m+1, ..., n. For any x ∈ Conv(x̄, ε_{x̄}) with x ≠ x̄, we have p(x) > p(x̄) by the strict concavity of p(x) and the fact that x can be represented as a convex combination of the corner points of the convex hull. Furthermore, one can always choose a sufficiently small but fixed ε̄ > 0 such that B(x̄, ε̄) ∩ F ⊂ Conv(x̄, ε_{x̄}). Therefore, by definition, x̄ is a (strict) local minimizer.

On the other hand, if x̄ is a local minimizer but not a basic feasible solution (extreme point) of the feasible polytope, then for any ε > 0 consider the neighborhood B(x̄, ε) ∩ F. There must exist a feasible direction d̄ with ||d̄||_2 = 1 such that both x̄ + ε'd̄ and x̄ − ε'd̄ are feasible for sufficiently small 0 < ε' < ε, and both belong to B(x̄, ε) ∩ F. The strict concavity of p(x) implies that one of the two values p(x̄ + ε'd̄) and p(x̄ − ε'd̄) must be smaller than p(x̄). Thus, x̄ cannot be a local minimizer.

Similarly, we can prove that the set of all basic solutions of (2) is exactly the set of all of its local minimizers. It is well known that computing a basic feasible solution of (1) or (2) can be done in polynomial time by solving a linear programming feasibility problem (e.g., [27]), which completes the proof.

Furthermore, one can observe the following property of a local minimizer.

Theorem 4 If the first order necessary condition (3) and the second order necessary condition (5) hold at x* ∈ F, then x* is a local minimizer of problem (1).

Proof If x* is in the interior of the feasible region F and satisfies the first order condition (3), the strict concavity of p(x) implies that x* must be the unique maximum point. If x* lies on the boundary, define its support set S and positive vector z* as in the second order necessary condition (5). Since ∇²_{zz} p_S(z*) is negative definite, the quantity λ^T ∇²_{zz} p_S(z*) λ cannot be nonnegative on the null space of A_S unless {λ : A_S λ = 0} = {0}. Thus, the second order necessary condition (5) holds only when x* is a basic feasible solution. By Theorem 3, x* is a local minimizer.

Example 1 This example shows that not all basic feasible solutions have a good sparse structure and that L_1 minimization does not always work. Let A = (...) and b = (...). It is not difficult to verify that the only optimal basic feasible solution of the L_1 minimization is x = (0, 1.2, 0, 0.8, 0, 0.4), with optimal value 2.4. However, it is not the sparsest solution; a sparsest solution is x = (4, 0, 0, 0, 0, 0). One can also observe that any basic feasible solution whose basis contains column 1 is a sparsest solution, while all other basic feasible solutions are dense with cardinality 3.
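Theorem 3 suggests a simple computational recipe: any vertex of the feasible polytope is a local minimizer, and a vertex can be obtained with one linear program. The sketch below (our own, assuming SciPy's linprog with a simplex-type method is available) finds a basic feasible solution of {x : Ax = b, x ≥ 0} by minimizing the zero objective.

```python
# Sketch: compute a basic feasible solution (hence, by Theorem 3, a local
# minimizer of problem (1)) with a simplex-type LP solver and a zero objective.
import numpy as np
from scipy.optimize import linprog

def basic_feasible_solution(A, b):
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * n, method="highs-ds")  # dual simplex returns a vertex
    if not res.success:
        raise ValueError("the system Ax = b, x >= 0 appears infeasible")
    return res.x
```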

4 Interior-Point Algorithm

From a complexity point of view, Theorem 3 implies that a local minimizer of the L_p minimization problem can be computed in polynomial time by solving a linear program to find a basic feasible solution. Of course, we are really interested in finding a sparse basic feasible solution; if we start from a non-sparse basic feasible solution, there is little hope of finding a sparser one by solving problem (1), since the starting point is already a strict local minimizer. Naturally, one would instead start from an interior feasible solution, such as an (approximate) analytic center x^0 of the feasible polytope (if it is bounded and has an interior feasible point), and consider an interior-point algorithm for approximately solving (1). Moreover, p(x) is differentiable in the interior of the feasible region. Hopefully, the interior-point algorithm would generate a sequence of interior points that bypasses non-sparse basic feasible solutions and converges to a sparse one. This is exactly the idea of the potential reduction algorithm developed in [26] for nonconvex quadratic programming. The algorithm starts from the initial point, follows an interior feasible path, and finally converges to either a global minimizer or a KKT point or local minimizer. At each step, it chooses a new interior point that produces the maximal reduction of a potential function under an affine-scaling operation. See [18] or [27] for an extensive introduction to interior-point algorithms.

We now extend the potential reduction algorithm to L_p minimization. The algorithm starts from an interior feasible solution such as the analytic center x^0 of the feasible polytope. As in linear programming, one can consider the potential function

    φ(x) = ρ log( ∑_{j=1}^n x_j^p − z̲ ) − ∑_{j=1}^n log x_j = ρ log(p(x) − z̲) − ∑_{j=1}^n log x_j,     (16)

where z̲ is a lower bound on the global minimal objective value of (1) and the parameter ρ satisfies ρ > n. For simplicity, we set z̲ = 0 in this paper, since the L_p minimization objective function is always nonnegative. Note that (1/n) ∑_{j=1}^n x_j^p ≥ ( ∏_{j=1}^n x_j^p )^{1/n}, and therefore

    (n/p) log(p(x)) − ∑_{j=1}^n log x_j ≥ (n/p) log n.

Thus, if φ(x) ≤ (ρ − n/p) log(ε) + (n/p) log n, we must have p(x) ≤ ε, which implies that x is an ε-global minimizer.

Given an interior point x, the algorithm looks for the best possible potential reduction from x. In a manner similar to the algorithm discussed in [26], one can consider the potential reduction φ(x^+) − φ(x) achieved by a one-iteration update from x to x^+. Note that

    φ(x^+) − φ(x) = ρ ( log(p(x^+)) − log(p(x)) ) + ∑_{j=1}^n ( −log(x_j^+) + log(x_j) ).

Let d_x with A d_x = 0 be a vector such that x^+ = x + d_x > 0. Then, from the concavity of log(p(x)), we have

    log(p(x^+)) − log(p(x)) ≤ (1/p(x)) ∇p(x)^T d_x.

On the other hand, by restricting ||X^{−1} d_x|| ≤ β < 1, where X = Diag(x), we have (see Section 9.3 in [3] for a detailed discussion)

    −∑_{j=1}^n log(x_j^+) + ∑_{j=1}^n log(x_j) ≤ −e^T X^{−1} d_x + β² / (2(1 − β)).

From the analysis above, if ||X^{−1} d_x|| ≤ β < 1, then x^+ = x + d_x > 0 and

    φ(x^+) − φ(x) ≤ ( (ρ/p(x)) ∇p(x)^T X − e^T ) X^{−1} d_x + β² / (2(1 − β)).
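For concreteness, the following small sketch (our own illustration; the parameter choice ρ = 2n/ε follows the analysis later in this section) evaluates the potential function (16) with z̲ = 0 and tests the ε-global certificate φ(x) ≤ (ρ − n/p) log ε + (n/p) log n derived above.

```python
# Sketch: evaluate the potential function (16) (with z_lower = 0) and test the
# eps-global certificate.  x must be a strictly positive interior point;
# rho and eps are the algorithm parameters.
import numpy as np

def potential(x, p, rho):
    px = np.sum(x ** p)                        # p(x)
    return rho * np.log(px) - np.sum(np.log(x))

def certifies_eps_global(x, p, rho, eps):
    n = x.size
    threshold = (rho - n / p) * np.log(eps) + (n / p) * np.log(n)
    return potential(x, p, rho) <= threshold   # if True, then p(x) <= eps
```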

Let d' = X^{−1} d_x. Then, to achieve a potential reduction, one can minimize the affine-scaled linear function subject to a ball constraint, as is done for linear programming (see Chapters 1 and 4 in [27] for more details):

    Z(d') := Minimize   ( (ρ/p(x)) ∇p(x)^T X − e^T ) d'
             Subject to A X d' = 0,
                        ||d'||² ≤ β².                                    (17)

This is simply a linear projection problem. The minimal value is Z(d'*) = −β ||g(x)|| and the optimal direction is d'* = −β g(x)/||g(x)||, where

    g(x) = ( I − XA^T (AX²A^T)^{−1} AX ) ( (ρ/p(x)) X ∇p(x) − e )
         = (ρ/p(x)) X ( ∇p(x) − A^T y ) − e,

with y = (AX²A^T)^{−1} AX ( X ∇p(x) − (p(x)/ρ) e ).

If ||g(x)|| ≥ 1, then the minimal objective value of the subproblem is at most −β, so that

    φ(x^+) − φ(x) ≤ −β + β² / (2(1 − β)).

Thus, the potential value is reduced by a constant if we set β = 1/2. If this case held for O((ρ − n/p) log(1/ε)) iterations, we would have produced an ε-global minimizer of (1).

On the other hand, if ||g(x)|| ≤ 1, then in particular |g_j(x)| ≤ 1 for every j, and from g(x) = (ρ/p(x)) X(∇p(x) − A^T y) − e we must have

    (ρ/p(x)) X ( ∇p(x) − A^T y ) ≥ 0   and   (ρ/p(x)) X ( ∇p(x) − A^T y ) ≤ 2e.

In other words,

    ( ∇p(x) − A^T y )_j ≥ 0,   ∀ j,
    x_j ( ∇p(x) − A^T y )_j ≤ 2 p(x)/ρ,   ∀ j.

The first condition indicates that the Lagrange multiplier y is feasible. For the second inequality, by choosing ρ ≥ 2n/ε we have

    x^T ( ∇p(x) − A^T y ) / p(x) ≤ ε.

Therefore,

    x^T ( ∇p(x) − A^T y ) / (z̄ − z̲) ≤ x^T ( ∇p(x) − A^T y ) / p(x) ≤ ε,

which implies that x is an ε-KKT point of (1); see (7). Summarizing the analysis above, we have the following lemma.

Lemma 1 The interior-point algorithm returns an ε-KKT or ε-global solution of (1) in no more than O((n/ε) log(1/ε)) iterations.

A more careful computation will make the computed ε-KKT point also satisfy the second order necessary condition; see [26]. Combining these observations with Theorem 4, we immediately conclude the convergence of our interior-point algorithm.

Corollary 1 The potential reduction algorithm, which starts from the (approximate) analytic center of the polytope and generates a sequence of interior points converging to a local minimizer, is an FPTAS for computing an approximate local minimizer of the L_p minimization problem.

To conclude, some interior-point algorithms, including the simple affine-scaling algorithm that always moves along a descent direction, can be effective (in fully polynomial time) in tackling the L_p minimization problem as well.
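The analysis above translates directly into one iteration of the algorithm. The following sketch is our own transcription, not the authors' code; it assumes a dense A with full row rank and a strictly feasible x > 0. It computes g(x), takes the affine-scaled step with β = 1/2 when ||g(x)|| ≥ 1, and otherwise reports that the current point already meets the ε-KKT-type conditions.

```python
# Sketch of one potential-reduction iteration for problem (1), following (17).
# Assumptions: A has full row rank, x > 0 is strictly feasible, 0 < p < 1,
# rho is chosen as in the text (e.g. rho = 2n/eps), beta = 1/2.
import numpy as np

def potential_reduction_step(A, x, p, rho, beta=0.5):
    px   = np.sum(x ** p)                      # p(x)
    grad = p * x ** (p - 1.0)                  # gradient of p(.) at the interior point x
    X    = np.diag(x)
    AX   = A @ X
    c    = (rho / px) * (X @ grad) - np.ones_like(x)   # scaled objective vector of (17)
    y    = np.linalg.solve(AX @ AX.T, AX @ c)          # least-squares multiplier
    g    = c - AX.T @ y                        # projection of c onto the null space of AX
    if np.linalg.norm(g) >= 1.0:
        d_scaled = -beta * g / np.linalg.norm(g)       # minimizer of subproblem (17)
        return x + X @ d_scaled, g             # new interior point; potential drops by a constant
    return x, g                                # |g_j| <= 1 for all j: x is (near) eps-KKT
```

Iterating this step from an (approximate) analytic center, and stopping once the potential value falls below the ε-global threshold or ||g(x)|| ≤ 1, reproduces the scheme analyzed in Lemma 1.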

[Fig. 1 Successful sparse recovery rates of L_p and L_1 solutions (frequency of success versus sparsity), with the matrix A constructed from (a) a binary random matrix; (b) a Gaussian random matrix; and (c) a Bernoulli random matrix.]

5 Computational Experiments with the Potential Reduction Algorithm

In this section, we compute a solution of (1) using the interior-point algorithm above and compare it with the solution of the L_1 problem, i.e., the solution of (1) with p = 1, which is also computed by an interior-point algorithm for linear programming. Our preliminary results reinforce our reasoning that interior-point algorithms tend to avoid some local minimizers on the boundary and recover the sparse solution. A more extensive computational study is in progress.

We construct 1000 random pairs (A, x) with matrices A of size 30 by 120 and vectors x ∈ R^120 of sparsity ||x||_0 = s, for s = 1, 2, ..., 20. With the basis matrix given, the vector b = Ax is known. We use several basis design matrices A to test our algorithms. In particular, let A = [M, −M], where M is one of the following: (1) a sparse binary random matrix, with only a small number of ones in each column; (2) a Gaussian random matrix, whose entries are i.i.d. Gaussian random variables; (3) a Bernoulli random matrix, whose entries are i.i.d. Bernoulli random variables. We note that Gaussian and Bernoulli random matrices satisfy the restricted isometry property [7,8], i.e., the columns of the basis matrix are nearly orthogonal, while sparse binary random matrices satisfy only a weaker form of that property [1]. Because two copies of the same random matrix are concatenated in A, column orthogonality is not maintained, which suggests that sparse recovery by the L_1 problem will fail.

We solve (1) with p = 1/2 by the interior-point algorithm developed above and compare its rate of successful sparse recovery with that of the L_1 problem. A solution is considered to have successfully recovered x if the L_2 distance between the solution and x is below a prescribed small tolerance. The phase transitions of the successful sparse recovery rates of the L_p and L_1 problems for the three classes of basis matrices are plotted in Figure 1. We observe that the L_{0.5} interior-point algorithm identifies the sparse solution x more reliably than the L_1 algorithm does. Moreover, we note that when the basis matrices are binary, we obtain comparatively lower rates of successful sparse recovery (for sparsity s = 4, for example, the successful recovery rate of the L_p solutions is about 95% for binary random matrices, compared with almost 100% for Gaussian or Bernoulli random matrices; for the L_1 solutions it is about 80% for binary random matrices compared with about 90% for Gaussian or Bernoulli random matrices). This may be explained by the fact that sparse binary random matrices have even worse column orthogonality than the Gaussian or Bernoulli cases.

Our simulation results also show that the interior-point algorithm for the L_p problem with 0 < p < 1 runs as fast as the interior-point method for the L_1 problem, which makes the interior-point method competitive for large-scale sparse recovery problems. As a final remark, we aim to compare the solution sparsity of the L_p and L_1 minimization models, not the algorithms.
Thus, the L_1 minimizer used in our experiments is independent of the specific choice of computational technique. We note that many classes of computational techniques have arisen in recent years for solving sparse approximation problems; see [22] for a state-of-the-art survey. Some of these methods, such as orthogonal matching pursuit [14,21] and iterative thresholding [4], can be shown to approximately find the L_1 minimizer under good RIP conditions on the basis matrix A. In our case, when the basis matrix A does not have a good RIP property, these computational techniques frequently failed. For example, the orthogonal matching pursuit algorithm had difficulty identifying which basis columns maximize the absolute value of the correlation with the residual, because there are two copies of the same basis that differ only in sign; the iterative thresholding approach could not converge to a sparse solution with our basis matrix. However, when A has a good RIP property, we see no significant difference in recovery among all these methods.
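The experimental setup can be reproduced along the following lines. This sketch is our own reconstruction: the dimensions (30 by 120), the concatenation A = [M, −M], and the matrix ensembles follow the description above, while the column density of the sparse binary case and the success tolerance are illustrative assumptions, since the exact values are not given here.

```python
# Sketch of the Section 5 setup: draw M, form A = [M, -M], plant an s-sparse
# nonnegative x, and record b = A x.  Success: ||x_hat - x||_2 below a tolerance.
import numpy as np

def make_instance(s, kind="gaussian", m=30, n_half=60, rng=None):
    rng = rng or np.random.default_rng()
    if kind == "gaussian":
        M = rng.standard_normal((m, n_half))
    elif kind == "bernoulli":
        M = rng.choice([-1.0, 1.0], size=(m, n_half))
    elif kind == "binary":
        M = (rng.random((m, n_half)) < 0.1).astype(float)   # column density is an assumption
    else:
        raise ValueError(kind)
    A = np.hstack([M, -M])
    x = np.zeros(2 * n_half)
    x[rng.choice(2 * n_half, size=s, replace=False)] = 1.0 + rng.random(s)
    return A, x, A @ x

def recovered(x_hat, x_true, tol=1e-3):                     # tolerance is an assumption
    return np.linalg.norm(x_hat - x_true) < tol
```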

Acknowledgements The authors would like to thank Zizhuo Wang, Jiawei Zhang and John Gunnar Carlsson for their helpful discussions, as well as the editor and anonymous referees for their insightful comments and suggestions. Research by the first author is supported by the National Natural Science Foundation of China.

References

1. R. Berinde, A. C. Gilbert, P. Indyk, H. J. Karloff and M. J. Strauss, Combining geometry and combinatorics: a unified approach to sparse signal recovery, preprint, 2008.
2. D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.
3. D. Bertsimas and J. Tsitsiklis, Introduction to Linear Optimization, Athena Scientific, 1997.
4. T. Blumensath and M. Davies, Iterative hard thresholding for compressed sensing, Applied and Computational Harmonic Analysis, 27 (2009).
5. J. M. Borwein and D. R. Luke, Entropic regularization of the l_0 function, in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications, 2010.
6. A. M. Bruckstein, D. L. Donoho and M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Review, 51(1) (2009), 34-81.
7. E. J. Candès and T. Tao, Decoding by linear programming, IEEE Transactions on Information Theory, 51 (2005).
8. E. J. Candès and T. Tao, Near optimal signal recovery from random projections: universal encoding strategies, IEEE Transactions on Information Theory, 52 (2006).
9. R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Processing Letters, 14(10) (2007).
10. R. Chartrand, Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data, in IEEE International Symposium on Biomedical Imaging (ISBI), 2009.
11. X. Chen, F. Xu and Y. Ye, Lower bound theory of nonzero entries in solutions of l_2-l_p minimization, Technical Report, The Hong Kong Polytechnic University, 2009.
12. D. Donoho, For most large underdetermined systems of linear equations the minimal l_1-norm solution is also the sparsest solution, Technical Report, Stanford University, 2004.
13. J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association, 96 (2001).
14. A. C. Gilbert, M. Muthukrishnan and M. J. Strauss, Approximation of functions over redundant dictionaries using coherence, in Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2003.
15. M. R. Garey and D. S. Johnson, Strong NP-completeness results: motivation, examples, and implications, Journal of the Association for Computing Machinery, 25 (1978).
16. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman & Company, 1979.
17. N. Mourad and P. Reilly, Minimizing nonconvex functions for sparse vector reconstruction, IEEE Transactions on Signal Processing, 58 (2010).
18. Y. Nesterov and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming, SIAM, Philadelphia, PA, 1994.
19. B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM Journal on Computing, 24 (1995).
20. T. Terlaky, On l_p programming, European Journal of Operational Research, 22 (1985).
21. J. A. Tropp, Greed is good: algorithmic results for sparse approximation, IEEE Transactions on Information Theory, 50 (2004).
22. J. Tropp and S. J. Wright, Computational methods for sparse solutions of linear inverse problems, to appear in Proceedings of the IEEE, 2010.
23. S. A. Vavasis, Polynomial time weak approximation algorithms for quadratic programming, in C. A. Floudas and P. M. Pardalos (eds.), Complexity in Numerical Optimization, World Scientific, New Jersey, 1993.
24. V. Vazirani, Approximation Algorithms, Springer, Berlin, 2003.
25. G. Xue and Y. Ye, An efficient algorithm for minimizing a sum of p-norms, SIAM Journal on Optimization, 10 (2000).
26. Y. Ye, On the complexity of approximating a KKT point of quadratic programming, Mathematical Programming, 80 (1998).
27. Y. Ye, Interior Point Algorithms: Theory and Analysis, John Wiley & Sons, New York, 1997.


More information

Compressed Sensing: a Subgradient Descent Method for Missing Data Problems

Compressed Sensing: a Subgradient Descent Method for Missing Data Problems Compressed Sensing: a Subgradient Descent Method for Missing Data Problems ANZIAM, Jan 30 Feb 3, 2011 Jonathan M. Borwein Jointly with D. Russell Luke, University of Goettingen FRSC FAAAS FBAS FAA Director,

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Low-rank matrix recovery via convex relaxations Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

Strengthened Sobolev inequalities for a random subspace of functions

Strengthened Sobolev inequalities for a random subspace of functions Strengthened Sobolev inequalities for a random subspace of functions Rachel Ward University of Texas at Austin April 2013 2 Discrete Sobolev inequalities Proposition (Sobolev inequality for discrete images)

More information

CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms

CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms Tim Roughgarden March 9, 2017 1 Preamble Our first lecture on smoothed analysis sought a better theoretical

More information

Least squares regularized or constrained by L0: relationship between their global minimizers. Mila Nikolova

Least squares regularized or constrained by L0: relationship between their global minimizers. Mila Nikolova Least squares regularized or constrained by L0: relationship between their global minimizers Mila Nikolova CMLA, CNRS, ENS Cachan, Université Paris-Saclay, France nikolova@cmla.ens-cachan.fr SIAM Minisymposium

More information

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images: Final Presentation Alfredo Nava-Tudela John J. Benedetto, advisor 5/10/11 AMSC 663/664 1 Problem Let A be an n

More information

ORTHOGONAL matching pursuit (OMP) is the canonical

ORTHOGONAL matching pursuit (OMP) is the canonical IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael

More information

Sparse Recovery with Pre-Gaussian Random Matrices

Sparse Recovery with Pre-Gaussian Random Matrices Sparse Recovery with Pre-Gaussian Random Matrices Simon Foucart Laboratoire Jacques-Louis Lions Université Pierre et Marie Curie Paris, 75013, France Ming-Jun Lai Department of Mathematics University of

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Sparsity in Underdetermined Systems

Sparsity in Underdetermined Systems Sparsity in Underdetermined Systems Department of Statistics Stanford University August 19, 2005 Classical Linear Regression Problem X n y p n 1 > Given predictors and response, y Xβ ε = + ε N( 0, σ 2

More information

Sensing systems limited by constraints: physical size, time, cost, energy

Sensing systems limited by constraints: physical size, time, cost, energy Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Convex Optimization and l 1 -minimization

Convex Optimization and l 1 -minimization Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l

More information

Optimisation Combinatoire et Convexe.

Optimisation Combinatoire et Convexe. Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix

More information

On Sparsity, Redundancy and Quality of Frame Representations

On Sparsity, Redundancy and Quality of Frame Representations On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division

More information

Algorithms for sparse analysis Lecture I: Background on sparse approximation

Algorithms for sparse analysis Lecture I: Background on sparse approximation Algorithms for sparse analysis Lecture I: Background on sparse approximation Anna C. Gilbert Department of Mathematics University of Michigan Tutorial on sparse approximations and algorithms Compress data

More information

Stochastic geometry and random matrix theory in CS

Stochastic geometry and random matrix theory in CS Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder

More information

Interpolation-Based Trust-Region Methods for DFO

Interpolation-Based Trust-Region Methods for DFO Interpolation-Based Trust-Region Methods for DFO Luis Nunes Vicente University of Coimbra (joint work with A. Bandeira, A. R. Conn, S. Gratton, and K. Scheinberg) July 27, 2010 ICCOPT, Santiago http//www.mat.uc.pt/~lnv

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

Sparsity Regularization

Sparsity Regularization Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation

More information