One condition for all: solution uniqueness and robustness of $\ell_1$-synthesis and $\ell_1$-analysis minimizations
Hui Zhang, Ming Yan, Wotao Yin

June 9, 2013

Hui Zhang: Department of Mathematics and Systems Science, College of Science, National University of Defense Technology, Changsha, Hunan, China. E-mail: hui.zhang@gmail.com
Ming Yan and Wotao Yin: Department of Computational and Applied Mathematics, Rice University, Houston, Texas 77005, USA. E-mails: yanm@ucla.edu and wotao.yin@rice.edu

Abstract

The $\ell_1$-synthesis and $\ell_1$-analysis models recover structured signals from their undersampled measurements. The solution of the former model is often a sparse sum of dictionary atoms, and that of the latter model often makes sparse correlations with dictionary atoms. This paper addresses the question: when can we trust these models to recover specific signals? We answer the question with a necessary and sufficient condition that guarantees the recovery to be unique and exact and that also guarantees the recovery is robust in the presence of measurement noise. The condition is one-for-all in the sense that it applies to both the $\ell_1$-synthesis and $\ell_1$-analysis models, and to both their constrained and unconstrained formulations. Furthermore, a convex infinity-norm program is introduced for numerically verifying the condition. A comparison with related existing conditions is included.

Keywords: exact recovery, robust recovery, $\ell_1$-analysis, $\ell_1$-synthesis, sparse optimization, compressive sensing

1 Introduction

Let $x \in \mathbb{R}^n$ be a signal of interest. This paper studies when $\ell_1$ minimization can uniquely and robustly recover $x$ from its linear measurements

$$b = \Phi x + w, \qquad (1)$$

where $\Phi \in \mathbb{R}^{m \times n}$ is a certain matrix and $w \in \mathbb{R}^m$ is noise. We focus on the compressive setting $m < n$. The results of this paper cover the following $\ell_1$ minimization formulations:

$$\min_x\ \|\Psi^T x\|_1, \quad \text{subject to } \Phi x = b, \qquad (2a)$$
$$\min_x\ \tfrac{1}{2}\|\Phi x - b\|_2^2 + \lambda\|\Psi^T x\|_1, \qquad (2b)$$
$$\min_x\ \|\Psi^T x\|_1, \quad \text{subject to } \|\Phi x - b\|_2 \le \delta, \qquad (2c)$$

where $\delta$ and $\lambda$ are positive parameters. When $\Psi = \mathrm{Id}$, the identity matrix, the models in (2) are referred to as the $\ell_1$ (or more generally, $\ell_1$-synthesis) models. When $\Psi \neq \mathrm{Id}$, they are referred to as the $\ell_1$-analysis models (see [6] for a recent overview).
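For concreteness, the three formulations are easy to set up in an off-the-shelf convex solver. The following is a minimal sketch, assuming CVXPY and NumPy are installed; the matrices and parameters are random placeholders, not data from the paper.

```python
# A sketch of formulations (2a)-(2c), assuming CVXPY; data are illustrative.
import numpy as np
import cvxpy as cp

m, n = 10, 30
rng = np.random.default_rng(0)
Phi = rng.standard_normal((m, n))
Psi = np.eye(n)                 # Psi = Id gives the l1-synthesis model
b = rng.standard_normal(m)
lam, delta = 0.1, 0.01

x = cp.Variable(n)
analysis_term = cp.norm1(Psi.T @ x)

# (2a): equality-constrained model
p2a = cp.Problem(cp.Minimize(analysis_term), [Phi @ x == b])
# (2b): unconstrained (Lagrangian) model
p2b = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Phi @ x - b)
                             + lam * analysis_term))
# (2c): noise-constrained model
p2c = cp.Problem(cp.Minimize(analysis_term),
                 [cp.norm(Phi @ x - b, 2) <= delta])

opt_2a = p2a.solve()            # p2b and p2c solve the same way
```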
In synthesis models, the signal of interest is synthesized as $x = Dc$, where $D$ is a certain dictionary and $c$ has the sparse coefficients. The analysis model has recently attracted a lot of attention. It is assumed in [3, 14, 13] that the underlying signal makes sparse correlations with the columns (atoms) of a dictionary $\bar{D}$, i.e., $\bar{D}^T x$ is sparse. The signal recovery model is (2b) with $\Psi$ set to $\bar{D}$. For both synthesis and analysis models, one is interested in when the recovery is successful, that is, when the solution is unique and the solution error is proportional to the amount of noise. There are several non-universal (applying to specific sparse signals) and universal (applying to all sparse signals) conditions addressing questions in various forms for $\ell_1$-synthesis minimization; examples include the non-universal dual certificate condition [7] and the RIPless property [4], and universal conditions such as the restricted isometry property [2], the null space condition [5], the spherical section property [20], and others. Since $\ell_1$-analysis minimization takes a more general form than $\ell_1$-synthesis minimization, some of the above conditions have been extended to the analysis case; recent works [9, 18, 11, 16, 15, 12] have made significant contributions.

Regarding a specific signal $x$, this paper establishes a necessary and sufficient condition that guarantees the unique solution of any model in (2) and also that the solutions of models (2b) and (2c) are robust to any noise added to $b$. A method based on $\ell_\infty$ minimization that verifies the condition is presented. In addition, the proposed condition is compared to other conditions in the literature, most of which are stronger than ours and are thus sufficient but not necessary. Certain parts of our proofs are inspired by [1, 9, 10, 19].

Notation. We equip $\mathbb{R}^n$ with the canonical scalar product $\langle\cdot,\cdot\rangle$ and Euclidean norm $\|\cdot\|_2$. We let $|\cdot|$ return the cardinality if the input is a set or the absolute value if the input is a number. For any $x \in \mathbb{R}^n$, $\operatorname{supp}(x) = \{k : 1 \le k \le n,\ x_k \neq 0\}$ is the index set of the non-zero entries of $x$; $\operatorname{sign}(x)$ is the vector whose $i$th entry is the sign of $x_i$, taking a value among $+1$, $-1$, and $0$. For any $p \ge 1$, the $\ell_p$-norm of $x \in \mathbb{R}^n$ is $\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}$, its $\ell_0$-norm is $\|x\|_0 = |\operatorname{supp}(x)|$, and its $\ell_\infty$-norm is $\|x\|_\infty = \max\{|x_i| : i = 1,\dots,n\}$. For $x \in \mathbb{R}^n$ and $I \subseteq \{1, 2, \dots, n\}$, $x_I$ denotes the vector formed by the entries $x_i$ of $x$ for $i \in I$, and $I^c$ is the complement of $I$. Similarly, $A_I$ is the submatrix formed by the columns of $A$ indexed by $I$. $A^T$ is the transpose of $A$; we use $A_I^T$ for the transpose of the submatrix $A_I$, not for a submatrix of $A^T$. For a square matrix $A$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote its largest and smallest eigenvalues, respectively, $\operatorname{Cond}(A)$ denotes its condition number, and $\|A\|$ denotes its spectral norm. The null and column spaces of $A$ are denoted by $\operatorname{Ker}(A)$ and $\operatorname{Im}(A)$, respectively.

Outline. The rest of the paper is organized as follows. Section 2 states the main results of this paper. Section 3 reviews several related results. Section 4 discusses condition verification. Proofs of the main results are given in Sections 5 and 6.

2 Main condition and results

2.1 Main condition

Condition 1. Given $\bar{x} \in \mathbb{R}^n$, the index sets $I = \operatorname{supp}(\Psi^T\bar{x}) \subseteq \{1,\dots,l\}$ and $J = I^c$ satisfy:
(1) $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$;
(2) there exists $y \in \mathbb{R}^l$ such that $\Psi y \in \operatorname{Im}(\Phi^T)$, $y_I = \operatorname{sign}(\Psi_I^T\bar{x})$, and $\|y_J\|_\infty < 1$.

Part (1) of the condition says that there does not exist any nonzero $\Delta x$ satisfying both $\Psi_J^T\bar{x} = \Psi_J^T(\bar{x}+\Delta x)$ and $\Phi\bar{x} = \Phi(\bar{x}+\Delta x)$.
Otherwise, there exists a nonempty segment $[\bar{x} - \alpha\Delta x,\ \bar{x} + \alpha\Delta x]$ for some sufficiently small $\alpha > 0$ such that $\Phi x = \Phi\bar{x}$ and $\|\Psi^T x\|_1$ is linear for $x$ in this segment; hence $\bar{x}$ cannot be the unique minimizer.
Part (2) states the existence of a strictly-complementary dual certificate $y$. To see this, let us check a part of the optimality conditions of (2a): $0 \in \Psi\,\partial\|\cdot\|_1(\Psi^T x) - \Phi^T\beta$, where the vector $\beta$ collects the Lagrange multipliers; we can rewrite the condition as $0 = \Psi y - \Phi^T\beta$ with $y \in \partial\|\cdot\|_1(\Psi^T x)$, which translates to $y_I = \operatorname{sign}(\Psi_I^T x)$ and $\|y_J\|_\infty \le 1$. This $y$ certifies the optimality of $\bar{x}$. For solution uniqueness and/or robustness, we shall later show that the strict inequality $\|y_J\|_\infty < 1$ is needed.

A variant of Condition 1 is given as follows; it is equivalent to Condition 5 below from [9].

Condition 2. Given $\bar{x} \in \mathbb{R}^n$, let $I = \operatorname{supp}(\Psi^T\bar{x})$. There exists a nonempty index set $J \subseteq I^c$ such that the index sets $I$, $J$, and $K = (I \cup J)^c$ satisfy:
(1) $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$;
(2) there exists $y \in \mathbb{R}^l$ such that $\Psi y \in \operatorname{Im}(\Phi^T)$, $y_I = \operatorname{sign}(\Psi_I^T\bar{x})$, $\|y_J\|_\infty < 1$, and $\|y_K\|_\infty \le 1$.

In Condition 2, a smaller $J$ relaxes part (2) but gives a larger $\operatorname{Ker}(\Psi_J^T)$ and thus tightens part (1). Although Condition 2 allows a more flexible $J$ than Condition 1, we shall show that they are equivalent.

2.2 Main results

Depending on the specific models in (2), we need the following assumptions:

Assumption 1. Matrix $\Phi$ has full row rank.

Assumption 2. $\lambda_{\max}(\Psi\Psi^T) = 1$.

Assumption 3. Matrix $\Psi$ has full row rank.

Assumptions 1 and 3 are standard. Assumption 2 is non-essential: we can scale a general $\Psi$ by multiplying it with $1/\sqrt{\lambda_{\max}(\Psi\Psi^T)}$. Below we state our main results, whose proofs are given in Sections 5 and 6.

Theorem 1 (Uniqueness). Under Assumption 1, let $\hat{x}$ be a solution to problem (2a) or (2b), or, under Assumptions 1 and 3, let $\hat{x}$ be a solution to problem (2c). The following are equivalent:
1) the solution $\hat{x}$ is unique;
2) Condition 1 holds for $\bar{x} = \hat{x}$;
3) Condition 2 holds for $\bar{x} = \hat{x}$.

This theorem states that Conditions 1 and 2 are equivalent, and that they are necessary and sufficient for a solution $\hat{x}$ to problem (2a), (2b), or (2c) to be unique.

To state our next result on robustness, we let
$$r(J) := \sup_{u \in \operatorname{Ker}(\Psi_J^T)\setminus\{0\}} \frac{\|u\|_2}{\|\Phi u\|_2}.$$
Part (1) of Condition 1 ensures that $0 < r(J) < +\infty$. If $\Psi = \mathrm{Id}$, then $u \in \operatorname{Ker}(\Psi_J^T)\setminus\{0\}$ is a sparse nonzero vector with maximal support $J^c$, so $r(J)$ is the inverse of the minimal singular value of the submatrix $\Phi_{J^c}$.
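Because the supremum defining $r(J)$ is taken over a linear subspace, it can be computed from an orthonormal basis of $\operatorname{Ker}(\Psi_J^T)$: writing $u = Uz$ with $U^T U = \mathrm{Id}$ gives $r(J) = 1/\sigma_{\min}(\Phi U)$. Below is a small sketch, assuming NumPy; the function name and tolerance are ours, for illustration.

```python
# r(J) = sup { ||u||_2 / ||Phi u||_2 : u in Ker(Psi_J^T), u != 0 }
#      = 1 / sigma_min(Phi U), where the columns of U form an
#        orthonormal basis of Ker(Psi_J^T).
import numpy as np

def r_of_J(Phi, Psi, J, tol=1e-12):
    PsiJT = Psi[:, J].T
    _, S, Vt = np.linalg.svd(PsiJT)            # full SVD of Psi_J^T
    rank = int(np.sum(S > tol)) if S.size else 0
    U = Vt[rank:].T                            # columns span Ker(Psi_J^T)
    if U.shape[1] == 0:
        return 0.0                             # trivial kernel: empty supremum
    sigma_min = np.linalg.svd(Phi @ U, compute_uv=False).min()
    return np.inf if sigma_min == 0 else 1.0 / sigma_min
```

In this form, part (1) of Condition 1 holds exactly when the returned value is finite.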
Below we claim that Condition 1 ensures the robustness of problems (2b) and (2c) to arbitrary noise in $b$.

Theorem 2 (Robustness). Under Assumptions 1-3, given an original signal $x^\star \in \mathbb{R}^n$, let $I = \operatorname{supp}(\Psi^T x^\star)$ and $J = I^c$. For arbitrary noise $w$, let $b = \Phi x^\star + w$. If Condition 1 is met for $\bar{x} = x^\star$ and $\|w\|_2 \le \delta$, then:
1) for any $C_0 > 0$, there exists a constant $C_1 > 0$ such that every minimizer $x_{\delta,\lambda}$ of problem (2b) using the parameter $\lambda = C_0\delta$ satisfies $\|\Psi^T(x_{\delta,\lambda} - x^\star)\|_1 \le C_1\delta$;
2) every minimizer $x_\delta$ of problem (2c) satisfies $\|\Psi^T(x_\delta - x^\star)\|_1 \le C_2\delta$.
Moreover, letting $\beta = (\Phi\Phi^T)^{-1}\Psi y$, we can take
$$C_3 = r(J)\sqrt{|I|}, \qquad C_4 = \frac{1 + \operatorname{Cond}(\Psi)\,\|\Phi\|\,C_3}{1 - \|y_J\|_\infty},$$
$$C_1 = 2C_3 + C_3 C_0\|\beta\|_2 + \frac{C_4}{C_0}\left(1 + C_0\|\beta\|_2/2\right)^2, \qquad C_2 = 2C_3 + 2C_4\|\beta\|_2.$$

Remark 1. From the results of Theorem 2, it is straightforward to derive $\ell_1$ or $\ell_2$ bounds for $(x_{\delta,\lambda} - x^\star)$ and $(x_\delta - x^\star)$ under Assumption 2.

Remark 2. Since $C_0$ is free to choose, one can choose the optimal $C_0 = \sqrt{\frac{4C_4}{4C_3\|\beta\|_2 + C_4\|\beta\|_2^2}}$ and simplify $C_1$ to
$$C_1 = 2C_3 + C_4\|\beta\|_2 + \sqrt{C_4^2\|\beta\|_2^2 + 4C_3C_4\|\beta\|_2} \le 2C_3 + 2C_4\|\beta\|_2 + 2\sqrt{C_3C_4\|\beta\|_2},$$
which becomes very similar to $C_2$. This reflects the equivalence between problems (2b) and (2c) in the sense that given $\lambda$, one can find $\delta$ so that they have the same solution, and vice versa.

Remark 3. Both $C_1$ and $C_2$ are the sum of $2C_3$ and other terms. $2C_3$ alone bounds the error when $\Psi^T x_{\delta,\lambda}$ (or $\Psi^T x_\delta$) and $\Psi^T x^\star$ have matching signs. Since $C_3$ does not depend on $y_J$, part (2) of Condition 1 does not play any role here, whereas part (1) plays the major role. When the signs of $\Psi^T x_{\delta,\lambda}$ (or $\Psi^T x_\delta$) and $\Psi^T x^\star$ do not match, the remaining terms in $C_1$ and $C_2$ are involved, and they are affected by part (2) of Condition 1; in particular, $\|y_J\|_\infty < 1$ plays a big role, as $C_4$ is inversely proportional to $1 - \|y_J\|_\infty$. Also, since there is no knowledge about the support of $\Psi^T x_{\delta,\lambda}$, which may or may not equal that of $\Psi^T x^\star$, $C_4$ inevitably depends on global properties of $\Psi$ and $\Phi$. In contrast, $C_3$ only depends on the restricted property of $\Phi$.

3 Related works

In the case of $\Psi = \mathrm{Id}$, Condition 1 is well known in the literature for $\ell_1$ (or $\ell_1$-synthesis) minimization. It was initially proposed in [7] as a sufficient condition for $\ell_1$ solution uniqueness. For problems (2b) and (2c), [8, 17] present sufficient but non-necessary conditions for solution uniqueness. Later, its necessity was established in [10] for model (2b) and then in [19] for all models in (2), assuming $\Psi = \mathrm{Id}$ or equal to an orthogonal basis. The solution robustness of model (2b) is given under the same condition in [10]. Below we restrict our literature review to results for the $\ell_1$-analysis model.

3.1 Previous uniqueness conditions

Papers [9, 14, 18, 11] cover the uniqueness of the $\ell_1$-analysis model and use stronger conditions than ours. The following condition in [14] guarantees the solution uniqueness for problem (2a):

Condition 3. Given $\bar{x}$, let $Q$ be a basis matrix of $\operatorname{Ker}(\Phi)$ and $I = \operatorname{supp}(\Psi^T\bar{x})$. The following are met:
(1) $\Psi_{I^c}^T Q$ has full column rank;
(2) $\left\|(Q^T\Psi_{I^c})^+\, Q^T\Psi_I \operatorname{sign}(\Psi_I^T\bar{x})\right\|_\infty < 1$.
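Condition 3 is likewise mechanical to check numerically once a kernel basis $Q$ of $\operatorname{Ker}(\Phi)$ is available. The following is a small sketch under the same assumptions as before (NumPy; illustrative function name and tolerance):

```python
# Numerically evaluate both parts of Condition 3 for given Phi, Psi, xbar.
import numpy as np

def check_condition3(Phi, Psi, xbar, tol=1e-8):
    r = np.linalg.matrix_rank(Phi)
    _, _, Vt = np.linalg.svd(Phi)
    Q = Vt[r:].T                                  # basis matrix of Ker(Phi)
    s = Psi.T @ xbar
    I = np.abs(s) > tol                           # support of Psi^T xbar
    Ic = ~I
    # Part (1): Psi_{I^c}^T Q has full column rank.
    part1 = np.linalg.matrix_rank(Psi[:, Ic].T @ Q) == Q.shape[1]
    # Part (2): || (Q^T Psi_{I^c})^+ Q^T Psi_I sign(Psi_I^T xbar) ||_inf < 1.
    v = np.linalg.pinv(Q.T @ Psi[:, Ic]) @ (Q.T @ Psi[:, I] @ np.sign(s[I]))
    part2 = bool(np.max(np.abs(v)) < 1) if v.size else True
    return part1 and part2
```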
Paper [18] proposes the following condition for the solution uniqueness and robustness of problems (2a) and (2b) (the robustness requires the non-zero entries of $\Psi_I^T x^\star$ to be sufficiently large compared to the noise).

Condition 4. For a given $\bar{x}$, the index sets $I = \operatorname{supp}(\Psi^T\bar{x})$ and $J = I^c$ satisfy:
(1) $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$;
(2) letting $A_{[J]} = U(U^T\Phi^T\Phi U)^{-1}U^T$ and $\Omega_{[J]} = \Psi_J^+(\Phi^T\Phi A_{[J]} - \mathrm{Id})\Psi_I$, where $U$ is a basis matrix of $\operatorname{Ker}(\Psi_J^T)$,
$$\mathrm{IC}(\operatorname{sign}(\Psi_I^T\bar{x})) := \min_{u \in \operatorname{Ker}(\Psi_J)} \left\|\Omega_{[J]}\operatorname{sign}(\Psi_I^T\bar{x}) - u\right\|_\infty < 1.$$

According to [18], Conditions 3 and 4 do not contain each other. The following example shows that Conditions 3 and 4 are both stronger than Conditions 1 and 2. Let $\Psi \in \mathbb{R}^{3\times 3}$, $\Phi \in \mathbb{R}^{2\times 3}$, $\hat{x} \in \mathbb{R}^3$, and $b = \Phi\hat{x}$ be chosen so that $\Psi^T\hat{x} = (10.5, 0, 0)^T$, $I = \{1\}$, and $Q = (1, 0, 0)^T$ is a basis matrix of $\operatorname{Ker}(\Phi)$. It is straightforward to verify that Conditions 1 and 2 hold. However, Conditions 3 and 4 fail to hold. Indeed, we have
$$\left\|(Q^T\Psi_{I^c})^+\, Q^T\Psi_I \operatorname{sign}(\Psi_I^T\hat{x})\right\|_\infty = \frac{105}{101} > 1. \qquad (3)$$
Hence, Condition 3 does not hold. Furthermore, $U = (1, 1, 10)^T$ is a basis matrix of $\operatorname{Ker}(\Psi_J^T)$, and computing $\Omega_{[J]}$ from its definition gives $\mathrm{IC}(\operatorname{sign}(\Psi_I^T\hat{x})) = \frac{105}{101} > 1$, so Condition 4 does not hold either. Paper [18] also presents sufficient conditions for solution uniqueness, which are reviewed in [19] and shown to be not necessary.

3.2 Previous robustness conditions

Turning to solution robustness, [9, 11] have studied the robustness of problems (2b) and (2c) in the Hilbert-space setting. Translated to finite dimensions, the conditions in [9] read:

Condition 5. Given $\bar{x}$, the following two statements hold:
(1) $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$;
(2) there exists $y \in \partial\|\cdot\|_1(\Psi^T\bar{x})$ such that $\Psi y \in \operatorname{Im}(\Phi^T)$ and $J = \{i : |y_i| < 1\} \neq \emptyset$.

This condition is equivalent to Condition 2. Under Condition 5, [9] shows the existence of a constant $C$ (not explicitly given) such that the solution $x_{\delta,\lambda}$ to (2b) obeys $\|\Psi^T(x_{\delta,\lambda} - x^\star)\|_2 \le C\delta$ when $\lambda$ is set proportional to the noise level $\delta$. In order to obtain an explicit formula for $C$, [11] introduces the following:

Condition 6. Let $\hat\Psi = (\Psi\Psi^T)^{-1}\Psi$. Given $\bar{x}$, the following two statements hold:
(1) there exists some $y \in \partial\|\cdot\|_1(\Psi^T\bar{x})$ such that $\Psi y \in \operatorname{Im}(\Phi^T)$;
(2) for some $t \in (0, 1)$, letting $I(t) = \{i : |y_i| > t\}$, the mapping $\hat\Phi := \Phi|_{\operatorname{Span}\{\hat\Psi_i : i \in I(t)\}}$ is injective.

Under this condition, the solutions to (2b) and (2c) are subject to error bounds whose constants depend on $t$, $\hat\Phi$, the lower frame bound of $\Psi$, and other quantities.

Proposition 1. Condition 6 is stronger than Condition 2.
Proof. Let $J = I(t)^c$; then we have $\|y_J\|_\infty \le t < 1$ from the definition of $I(t)$. It remains to show that $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$. For any $x \in \operatorname{Ker}(\Psi_J^T)$, we have
$$x = (\Psi\Psi^T)^{-1}\Psi\Psi^T x = (\Psi\Psi^T)^{-1}\Psi_J\Psi_J^T x + (\Psi\Psi^T)^{-1}\Psi_{J^c}\Psi_{J^c}^T x = (\Psi\Psi^T)^{-1}\Psi_{I(t)}\Psi_{I(t)}^T x.$$
Since $\Phi$ restricted to $\operatorname{Span}\{\hat\Psi_i : i \in I(t)\} = \operatorname{Im}((\Psi\Psi^T)^{-1}\Psi_{I(t)})$ is injective, any such $x$ that additionally lies in $\operatorname{Ker}(\Phi)$ must be zero, which is what we need. □

Paper [18] provides the much stronger condition below, which strengthens Condition 4 by dropping the dependence on the $\Psi$-support (see the definition of $\mathrm{RC}(I)$ below).

Condition 7. Given $\bar{x}$, the index sets $I = \operatorname{supp}(\Psi^T\bar{x})$ and $J = I^c$ satisfy:
(1) $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$;
(2) letting $\Omega_{[J]}$ be given as in Condition 4,
$$\mathrm{RC}(I) := \max_{p \in \mathbb{R}^{|I|},\ \|p\|_\infty \le 1}\ \min_{u \in \operatorname{Ker}(\Psi_J)} \left\|\Omega_{[J]} p - u\right\|_\infty < 1.$$

Under this condition, a nice error bound and a certain kind of weak sign consistency (between $\Psi^T x_{\delta,\lambda}$ and $\Psi^T x^\star$) are given, provided that problem (2b) is solved with the parameter
$$\lambda = \rho\|w\|_2\,\frac{c_J}{2(1 - \mathrm{RC}(I))}$$
for some $\rho > 1$, where $c_J = \|\Psi_J^+\Phi^T(\Phi A_{[J]}\Phi^T - \mathrm{Id})\|_{2,\infty}$. When $1 - \mathrm{RC}(I)$ gets close to 0, this $\lambda$ can become much larger than it should be.

4 Verifying the conditions

In this section, we present a method to verify Condition 1. Our method includes two steps:

Step 1: Let $\Phi = U\Sigma V^T$ be the singular value decomposition of $\Phi$, with $V = [v_1, \dots, v_n]$. Since $\Phi$ has full row rank, we have $\operatorname{Ker}(\Phi) = \operatorname{Span}\{v_{m+1}, \dots, v_n\}$, and $Q = [v_{m+1}, \dots, v_n]$ is a basis matrix of $\operatorname{Ker}(\Phi)$. We verify that $\Psi_J^T Q$ has full column rank, which ensures part (1) of Condition 1.

Step 2: Let $u_1 = -Q^T\Psi_I\operatorname{sign}(\Psi_I^T\bar{x})$ and $A = Q^T\Psi_J$. Solve the convex problem
$$\min_{u \in \mathbb{R}^{|J|}}\ \|u\|_\infty, \quad \text{subject to } Au = u_1. \qquad (4)$$
If the optimal objective of (4) is strictly less than 1, then part (2) of Condition 1 holds. In fact, we have:

Proposition 2. Part (2) of Condition 1 holds if and only if (4) has an optimal objective value strictly less than 1.

Proof. Let $\hat{u}$ be a minimizer of (4) and assume $\|\hat{u}\|_\infty < 1$. Consider the vector $y$ composed of $y_I = \operatorname{sign}(\Psi_I^T\bar{x})$ and $y_J = \hat{u}$. To show part (2) of Condition 1, it suffices to prove $\Psi y \in \operatorname{Im}(\Phi^T)$, or equivalently, $Q^T\Psi y = 0$. Indeed,
$$Q^T\Psi y = Q^T\Psi_J y_J + Q^T\Psi_I y_I = Q^T\Psi_J\hat{u} + Q^T\Psi_I y_I = 0.$$
The converse is obvious. □

Convex program (4) is similar in form to one in [18], though they are used to verify different conditions.
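The two steps translate directly into a short numerical routine: problem (4) is a linear program in the variables $(u, t)$ once $\|u\|_\infty \le t$ is rewritten as $-t \le u_i \le t$. Below is a sketch assuming NumPy and SciPy; the function name and tolerance are ours, for illustration.

```python
# Verify both parts of Condition 1 for given Phi, Psi, xbar.
import numpy as np
from scipy.optimize import linprog

def verify_condition1(Phi, Psi, xbar, tol=1e-8):
    m = np.linalg.matrix_rank(Phi)           # Assumption 1: full row rank
    _, _, Vt = np.linalg.svd(Phi)
    Q = Vt[m:].T                             # Step 1: basis of Ker(Phi)
    s = Psi.T @ xbar
    I = np.where(np.abs(s) > tol)[0]         # I = supp(Psi^T xbar)
    J = np.setdiff1d(np.arange(Psi.shape[1]), I)
    # Part (1): Psi_J^T Q has full column rank.
    part1 = np.linalg.matrix_rank(Psi[:, J].T @ Q) == Q.shape[1]
    # Step 2: min ||u||_inf s.t. (Q^T Psi_J) u = -Q^T Psi_I sign(Psi_I^T xbar)
    A = Q.T @ Psi[:, J]
    u1 = -Q.T @ (Psi[:, I] @ np.sign(s[I]))
    k = len(J)
    if k == 0:                               # no free entries: need u1 = 0
        return part1, bool(np.allclose(u1, 0))
    # LP in z = (u, t): minimize t subject to u_i - t <= 0, -u_i - t <= 0.
    c = np.zeros(k + 1); c[-1] = 1.0
    G = np.block([[np.eye(k), -np.ones((k, 1))],
                  [-np.eye(k), -np.ones((k, 1))]])
    h = np.zeros(2 * k)
    Aeq = np.hstack([A, np.zeros((A.shape[0], 1))])
    res = linprog(c, A_ub=G, b_ub=h, A_eq=Aeq, b_eq=u1,
                  bounds=[(None, None)] * k + [(0, None)])
    part2 = bool(res.success and res.fun < 1 - tol)
    return part1, part2
```

The two returned booleans correspond to parts (1) and (2) of Condition 1: by Theorem 1, the given signal is expected to be the unique (and, by Theorem 2, robust) solution exactly when both are true.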
5 Proof of Theorem 1

We establish Theorem 1 in two steps. Our first step proves the theorem for problem (2a) only. The second step proves Theorem 1 for problems (2b) and (2c).

5.1 Proof of Theorem 1 for problem (2a)

The equivalence of the three statements is shown in the following order: 3) ⟹ 1) ⟹ 2) ⟹ 3).

3) ⟹ 1). Consider any perturbation $\hat{x} + h$ where $h \in \operatorname{Ker}(\Phi)\setminus\{0\}$. Take a subgradient $g \in \partial\|\cdot\|_1(\Psi^T\hat{x})$ obeying $g_I = \operatorname{sign}(\Psi_I^T\hat{x}) = y_I$, $g_K = y_K$, and $\|g_J\|_\infty \le 1$ such that $\langle g_J, \Psi_J^T h\rangle = \|\Psi_J^T h\|_1$. Then,
$$\|\Psi^T(\hat{x}+h)\|_1 \ge \|\Psi^T\hat{x}\|_1 + \langle\Psi g, h\rangle \qquad (5a)$$
$$= \|\Psi^T\hat{x}\|_1 + \langle\Psi g - \Psi y, h\rangle \qquad (5b)$$
$$= \|\Psi^T\hat{x}\|_1 + \langle g - y, \Psi^T h\rangle \qquad (5c)$$
$$= \|\Psi^T\hat{x}\|_1 + \langle g_J - y_J, \Psi_J^T h\rangle \qquad (5d)$$
$$\ge \|\Psi^T\hat{x}\|_1 + \|\Psi_J^T h\|_1\,(1 - \|y_J\|_\infty), \qquad (5e)$$
where (5b) follows from $\Psi y \in \operatorname{Im}(\Phi^T) = \operatorname{Ker}(\Phi)^\perp$ and $h \in \operatorname{Ker}(\Phi)$, (5d) follows from the choice of $g$, and (5e) is an application of the inequality $\langle x, y\rangle \le \|x\|_1\|y\|_\infty$ together with $\langle g_J, \Psi_J^T h\rangle = \|\Psi_J^T h\|_1$. Since $h \in \operatorname{Ker}(\Phi)\setminus\{0\}$ and $\operatorname{Ker}(\Psi_J^T) \cap \operatorname{Ker}(\Phi) = \{0\}$, we have $\|\Psi_J^T h\|_1 > 0$. Together with the condition $\|y_J\|_\infty < 1$, we have $\|\Psi^T(\hat{x}+h)\|_1 > \|\Psi^T\hat{x}\|_1$ for every $h \in \operatorname{Ker}(\Phi)\setminus\{0\}$, which implies that $\hat{x}$ is the unique minimizer of (2a).

1) ⟹ 2). For every $h \in \operatorname{Ker}(\Phi)\setminus\{0\}$, we have $\Phi(\hat{x} + th) = \Phi\hat{x}$, and we can take $t$ small enough around 0 such that $\operatorname{sign}(\Psi_I^T(\hat{x}+th)) = \operatorname{sign}(\Psi_I^T\hat{x})$. Since $\hat{x}$ is the unique solution, for small and nonzero $t$ we have
$$\|\Psi^T\hat{x}\|_1 < \|\Psi^T(\hat{x}+th)\|_1 = \|\Psi_I^T(\hat{x}+th)\|_1 + \|\Psi_{I^c}^T(\hat{x}+th)\|_1 \qquad (6a)$$
$$= \langle\Psi_I^T(\hat{x}+th), \operatorname{sign}(\Psi_I^T(\hat{x}+th))\rangle + \|t\Psi_{I^c}^T h\|_1 \qquad (6b)$$
$$= \langle\Psi_I^T\hat{x} + t\Psi_I^T h, \operatorname{sign}(\Psi_I^T\hat{x})\rangle + \|t\Psi_{I^c}^T h\|_1 \qquad (6c)$$
$$= \langle\Psi_I^T\hat{x}, \operatorname{sign}(\Psi_I^T\hat{x})\rangle + t\langle\Psi_I^T h, \operatorname{sign}(\Psi_I^T\hat{x})\rangle + \|t\Psi_{I^c}^T h\|_1 \qquad (6d)$$
$$= \|\Psi^T\hat{x}\|_1 + t\langle\Psi_I^T h, \operatorname{sign}(\Psi_I^T\hat{x})\rangle + |t|\,\|\Psi_{I^c}^T h\|_1. \qquad (6e)$$
Therefore, taking $t$ of both signs, for any $h \in \operatorname{Ker}(\Phi)\setminus\{0\}$ we have
$$\langle\Psi_I^T h, \operatorname{sign}(\Psi_I^T\hat{x})\rangle < \|\Psi_{I^c}^T h\|_1. \qquad (7)$$
If the condition $\operatorname{Ker}(\Psi_{I^c}^T) \cap \operatorname{Ker}(\Phi) = \{0\}$ does not hold, we can choose a nonzero vector $h \in \operatorname{Ker}(\Psi_{I^c}^T) \cap \operatorname{Ker}(\Phi)$; we also have $-h \in \operatorname{Ker}(\Psi_{I^c}^T) \cap \operatorname{Ker}(\Phi)$. Then (7) gives $\langle\Psi_I^T h, \operatorname{sign}(\Psi_I^T\hat{x})\rangle < 0$ and $\langle\Psi_I^T(-h), \operatorname{sign}(\Psi_I^T\hat{x})\rangle < 0$, which is a contradiction.

It remains to show the existence of $y$ in part (2) of Condition 1. This part is in the spirit of the methods in papers [10] and [19], which are based on linear programming strong duality. We take $\hat{y}$ with restrictions $\hat{y}_I = \operatorname{sign}(\Psi_I^T\hat{x})$ and $\hat{y}_{I^c} = 0$. If such $\hat{y}$ satisfies $\Psi\hat{y} \in \operatorname{Im}(\Phi^T)$, then the existence has been shown. If $\Psi\hat{y} \notin \operatorname{Im}(\Phi^T) = \operatorname{Ker}(\Phi)^\perp$, then we shall construct a new vector to satisfy part (2) of Condition 1. Let $Q$ be a basis matrix of $\operatorname{Ker}(\Phi)$. Then $a := -Q^T\Psi\hat{y}$ must be a nonzero vector. Consider the following problem:
$$\min_{z \in \mathbb{R}^l}\ \|z\|_\infty \quad \text{subject to } Q^T\Psi z = a \text{ and } z_I = 0. \qquad (8)$$
For any minimizer $\hat{z}$ of problem (8), we have $\Psi(\hat{y} + \hat{z}) \in \operatorname{Ker}(\Phi)^\perp = \operatorname{Im}(\Phi^T)$ and $(\hat{y} + \hat{z})_I = \hat{y}_I = \operatorname{sign}(\Psi_I^T\hat{x})$. Thus, it remains to show that the optimal objective of problem (8) is strictly less than 1. To this end, we rewrite problem (8) in the equivalent form
$$\min_z\ \|z_{I^c}\|_\infty \quad \text{subject to } Q^T\Psi_{I^c} z_{I^c} = a, \qquad (9)$$
whose Lagrange dual problem is
$$\max_p\ \langle p, a\rangle \quad \text{subject to } \|\Psi_{I^c}^T Q p\|_1 \le 1. \qquad (10)$$
Note that $Qp \in \operatorname{Ker}(\Phi)$ and $\langle p, a\rangle = -\langle p, Q^T\Psi\hat{y}\rangle = -\langle p, Q^T\Psi_I\operatorname{sign}(\Psi_I^T\hat{x})\rangle = \langle\Psi_I^T(-Qp), \operatorname{sign}(\Psi_I^T\hat{x})\rangle$. By using (7) with $h = -Qp$, for any $p$ we have
$$\langle p, a\rangle = \langle\Psi_I^T(-Qp), \operatorname{sign}(\Psi_I^T\hat{x})\rangle \begin{cases} = 0, & \text{if } Qp = 0; \\ < \|\Psi_{I^c}^T Qp\|_1 \le 1, & \text{otherwise}. \end{cases} \qquad (11)$$
Hence, problem (10) is feasible, and its objective value is strictly less than 1. By the linear programming strong duality property, problems (8) and (9) also have solutions, and their objective value is strictly less than 1, too. This completes the proof.

2) ⟹ 3). Let $J = I^c$ and $K = \emptyset$; then Condition 2 follows. The proof of 3) ⟹ 1) is a standard technique in the compressed sensing community.

5.2 Proof of Theorem 1 for problems (2b) and (2c)

Lemma 1. Let $\gamma > 0$. If $\gamma\|\Phi x - b\|_2^2 + \|\Psi^T x\|_1$ is constant on a convex set $\Omega$, then both $\|\Phi x - b\|_2$ and $\|\Psi^T x\|_1$ are constant on $\Omega$.

Proof. It suffices to prove the case where the convex set has more than one point. Let $x_1$ and $x_2$ be two arbitrary different points in $\Omega$, and consider the line segment $L$ connecting $x_1$ and $x_2$. By the convexity of $\Omega$, we know $L \subseteq \Omega$. Thus, $\hat{c} = \gamma\|\Phi x - b\|_2^2 + \|\Psi^T x\|_1$ is constant on $L$. If $\Phi x_1 - b \neq \Phi x_2 - b$, then for any $0 < \alpha < 1$ we have
$$\gamma\|\Phi(\alpha x_1 + (1-\alpha)x_2) - b\|_2^2 + \|\Psi^T(\alpha x_1 + (1-\alpha)x_2)\|_1 \qquad (12a)$$
$$= \gamma\|\alpha(\Phi x_1 - b) + (1-\alpha)(\Phi x_2 - b)\|_2^2 + \|\alpha\Psi^T x_1 + (1-\alpha)\Psi^T x_2\|_1 \qquad (12b)$$
$$< \alpha\left(\gamma\|\Phi x_1 - b\|_2^2 + \|\Psi^T x_1\|_1\right) + (1-\alpha)\left(\gamma\|\Phi x_2 - b\|_2^2 + \|\Psi^T x_2\|_1\right) \qquad (12c)$$
$$= \alpha\hat{c} + (1-\alpha)\hat{c} = \hat{c}, \qquad (12d)$$
where the strict inequality follows from the strict convexity of $\gamma\|\cdot\|_2^2$ and the convexity of $\|\Psi^T x\|_1$. This means that the points $\alpha x_1 + (1-\alpha)x_2$ on $L$ attain a lower value than $\hat{c}$, which is a contradiction. Therefore, we have $\Phi x_1 - b = \Phi x_2 - b$, from which it is easy to see $\|\Psi^T x_1\|_1 = \|\Psi^T x_2\|_1$. □

We let $X_\lambda$ and $Y_\delta$ denote the sets of solutions to problems (2b) and (2c), respectively; moreover, we assume that these two sets are nonempty. Then, from Lemma 1, we have the following result.

Corollary 1. In problem (2b), $\|\Phi x - b\|_2$ and $\|\Psi^T x\|_1$ are constant on $X_\lambda$; in problem (2c), $\|\Phi x - b\|_2$ and $\|\Psi^T x\|_1$ are constant on $Y_\delta$.

Proof. Since $\frac{1}{2}\|\Phi x - b\|_2^2 + \lambda\|\Psi^T x\|_1$ is constant over $X_\lambda$, the result for problem (2b) follows directly from Lemma 1 (after dividing by $\lambda$, with $\gamma = 1/(2\lambda)$). For problem (2c), if $0 \in Y_\delta$, then we have $Y_\delta = \{0\}$ because of the full row-rankness of $\Psi$, and the result holds trivially. Suppose $0 \notin Y_\delta$. Since the optimal objective $\|\Psi^T x\|_1$ is constant for all $x \in Y_\delta$, it remains to show that $\|\Phi x - b\|_2 = \delta$ for all $x \in Y_\delta$. If there exists a nonzero $\hat{x} \in Y_\delta$ such that $\|\Phi\hat{x} - b\|_2 < \delta$, we can find a nonempty ball $B$ centered at $\hat{x}$ with a sufficiently small radius $\rho > 0$ such that $\|\Phi\tilde{x} - b\|_2 < \delta$ for all $\tilde{x} \in B$. Let $\alpha = \min\{\frac{\rho}{2\|\hat{x}\|_2}, \frac{1}{2}\} \in (0, 1)$. We have $(1-\alpha)\hat{x} \in B$ and $\|(1-\alpha)\Psi^T\hat{x}\|_1 < \|\Psi^T\hat{x}\|_1$, which is a contradiction. □

Proof of Theorem 1 for problems (2b) and (2c). This proof exploits Corollary 1. Since the results of Corollary 1 are identical for problems (2b) and (2c), we present the proof for problem (2b) only.
By assumption, $X_\lambda$ is nonempty, so we pick $\hat{x} \in X_\lambda$. Let $\bar{b} = \Phi\hat{x}$, which is independent of the choice of $\hat{x}$ according to Corollary 1. We introduce the problem
$$\min_x\ \|\Psi^T x\|_1, \quad \text{subject to } \Phi x = \bar{b}, \qquad (13)$$
and let $\bar{X}$ denote its solution set. Now, we show that $X_\lambda = \bar{X}$. Since $\Phi x = \Phi\hat{x}$ and $\|\Psi^T x\|_1 = \|\Psi^T\hat{x}\|_1$ for all $x \in X_\lambda$, and conversely any $x$ obeying $\Phi x = \Phi\hat{x}$ and $\|\Psi^T x\|_1 = \|\Psi^T\hat{x}\|_1$ belongs to $X_\lambda$, it suffices to show that $\|\Psi^T x\|_1 = \|\Psi^T\hat{x}\|_1$ for any $x \in \bar{X}$. Assume this does not hold. Since problem (13) has $\hat{x}$ as a feasible solution and has a finite objective, $\bar{X}$ is nonempty and there exists $\tilde{x} \in \bar{X}$ satisfying $\|\Psi^T\tilde{x}\|_1 < \|\Psi^T\hat{x}\|_1$. But $\|\Phi\tilde{x} - b\|_2 = \|\bar{b} - b\|_2 = \|\Phi\hat{x} - b\|_2$ and $\|\Psi^T\tilde{x}\|_1 < \|\Psi^T\hat{x}\|_1$ mean that $\tilde{x}$ is a strictly better solution to problem (2b) than $\hat{x}$, contradicting the assumption $\hat{x} \in X_\lambda$.

Since $X_\lambda = \bar{X}$, $\hat{x}$ is the unique solution to problem (2b) if and only if it is the unique solution to problem (13). Since problem (13) is of the same form as problem (2a), applying the part of Theorem 1 for problem (2a), which is already proved, we conclude that $\hat{x}$ is the unique solution to problem (2b) if and only if Condition 1 or 2 holds. □

6 Proof of Theorem 2

Lemma 2. Assume that vectors $\bar{x}$ and $y$ satisfy Condition 1. Let $I = \operatorname{supp}(\Psi^T\bar{x})$ and $J = I^c$. We have
$$\|\Psi^T x - \Psi^T\bar{x}\|_1 \le C_3\|\Phi(x - \bar{x})\|_2 + C_4\, d_y(x, \bar{x}) \quad \text{for all } x, \qquad (14)$$
where $d_y(x, \bar{x}) := \|\Psi^T x\|_1 - \|\Psi^T\bar{x}\|_1 - \langle\Psi y, x - \bar{x}\rangle$ is the Bregman distance of the function $\|\Psi^T\cdot\|_1$, and the absolute constants $C_3$, $C_4$ are given in Theorem 2.

Proof. This proof is divided into two parts; they are partially inspired by [9].
1. This part shows that for any $u \in \operatorname{Ker}(\Psi_J^T)$,
$$\|\Psi^T x - \Psi^T\bar{x}\|_1 \le \left(1 + \frac{C_3\|\Phi\|}{\sqrt{\lambda_{\min}(\Psi\Psi^T)}}\right)\|\Psi^T(x - u)\|_1 + C_3\|\Phi(x - \bar{x})\|_2. \qquad (15)$$
2. This part shows that
$$f(x) := \min\left\{\|\Psi^T(x - u)\|_1 : u \in \operatorname{Ker}(\Psi_J^T)\right\} \le (1 - \|y_J\|_\infty)^{-1}\, d_y(x, \bar{x}). \qquad (16)$$
Using the definition of $C_4$, combining (15) and (16) gives (14).

Part 1. Let $u \in \operatorname{Ker}(\Psi_J^T)$. By the triangle inequality of norms, we get
$$\|\Psi^T x - \Psi^T\bar{x}\|_1 \le \|\Psi^T(x - u)\|_1 + \|\Psi^T(u - \bar{x})\|_1. \qquad (17)$$
Since $\bar{x} \in \operatorname{Ker}(\Psi_J^T)$, we have $u - \bar{x} \in \operatorname{Ker}(\Psi_J^T)$ and thus $\|u - \bar{x}\|_2 \le r(J)\|\Phi(u - \bar{x})\|_2$, where $r(J) < +\infty$ follows from part (1) of Condition 1. Using the fact that $\operatorname{supp}(\Psi^T(u - \bar{x})) \subseteq I$, we derive
$$\|\Psi^T(u - \bar{x})\|_1 \le \sqrt{|I|}\,\|\Psi^T(u - \bar{x})\|_2 \qquad (18a)$$
$$\le \sqrt{|I|}\,\|u - \bar{x}\|_2 \qquad (18b)$$
$$\le \sqrt{|I|}\, r(J)\,\|\Phi(u - \bar{x})\|_2 \qquad (18c)$$
$$= C_3\|\Phi(u - \bar{x})\|_2, \qquad (18d)$$
where we have used the assumption $\lambda_{\max}(\Psi\Psi^T) = 1$ and the definition $C_3 = r(J)\sqrt{|I|}$. Furthermore,
$$\|\Phi(u - \bar{x})\|_2 \le \|\Phi(x - u)\|_2 + \|\Phi(x - \bar{x})\|_2 \qquad (19a)$$
$$\le \|\Phi\|\,\|x - u\|_2 + \|\Phi(x - \bar{x})\|_2 \qquad (19b)$$
$$\le \|\Phi\|\,\frac{\|\Psi^T(x - u)\|_2}{\sqrt{\lambda_{\min}(\Psi\Psi^T)}} + \|\Phi(x - \bar{x})\|_2 \qquad (19c)$$
$$\le \|\Phi\|\,\frac{\|\Psi^T(x - u)\|_1}{\sqrt{\lambda_{\min}(\Psi\Psi^T)}} + \|\Phi(x - \bar{x})\|_2. \qquad (19d)$$
Therefore, we get (15) after combining (17), (18), and (19).

Part 2. Since $\langle\Psi y, \bar{x}\rangle = \|\Psi^T\bar{x}\|_1$ implies $d_y(x, \bar{x}) = \|\Psi^T x\|_1 - \langle\Psi y, x\rangle$, it is equivalent to prove
$$f(x) \le (1 - \|y_J\|_\infty)^{-1}\left(\|\Psi^T x\|_1 - \langle\Psi y, x\rangle\right). \qquad (20)$$
Since $u \in \operatorname{Ker}(\Psi_J^T)$ is equivalent to $\Psi_J^T u = 0$, the Lagrangian of the minimization problem in (16) is
$$L(u, v) = \|\Psi^T(x - u)\|_1 + \langle v, \Psi_J^T u\rangle = \|\Psi^T(x - u)\|_1 + \langle\Psi_J v, u - x\rangle + \langle\Psi_J v, x\rangle. \qquad (21)$$
Then $f(x) = \min_u \max_v L(u, v)$. Following the minimax theorem, we derive
$$f(x) = \max_v \min_u L(u, v) = \max_v \min_u\left\{\|\Psi^T(x - u)\|_1 + \langle\Psi_J v, u - x\rangle + \langle\Psi_J v, x\rangle\right\} \qquad (22a)$$
$$= \max_w \min_u\left\{\|\Psi^T(x - u)\|_1 + \langle w, u - x\rangle + \langle w, x\rangle : w \in \operatorname{Im}(\Psi_J)\right\} \qquad (22b)$$
$$= \max_w\left\{\langle w, x\rangle : w \in \partial\|\Psi^T\cdot\|_1(0) \cap \operatorname{Im}(\Psi_J)\right\} \qquad (22c)$$
$$= \max_w\left\{\langle c\Psi y + w, x\rangle : w \in \partial\|\Psi^T\cdot\|_1(0) \cap \operatorname{Im}(\Psi_J)\right\} - c\langle\Psi y, x\rangle, \quad \forall\, c > 0 \qquad (22d)$$
$$= c\,\max_w\left\{\langle\Psi y + w, x\rangle : w \in c^{-1}\,\partial\|\Psi^T\cdot\|_1(0) \cap \operatorname{Im}(\Psi_J)\right\} - c\langle\Psi y, x\rangle, \quad \forall\, c > 0. \qquad (22e)$$
Let $c = (1 - \|y_J\|_\infty)^{-1}$ and $Z_J = \{z \in \mathbb{R}^l : z_I = 0\}$. Since $\|y_J\|_\infty < 1$ by part (2) of Condition 1, we have $c < +\infty$ and
$$y + c^{-1}\,\partial\|\cdot\|_1(0) \cap Z_J \subseteq \partial\|\cdot\|_1(0), \qquad (23)$$
from which we conclude
$$\Psi y + c^{-1}\,\partial\|\Psi^T\cdot\|_1(0) \cap \operatorname{Im}(\Psi_J) \subseteq \partial\|\Psi^T\cdot\|_1(0). \qquad (24)$$
Hence, for any $w \in c^{-1}\,\partial\|\Psi^T\cdot\|_1(0) \cap \operatorname{Im}(\Psi_J)$, it holds that $\Psi y + w \in \partial\|\Psi^T\cdot\|_1(0)$, which by the convexity of $\|\Psi^T\cdot\|_1$ implies $\langle\Psi y + w, x\rangle \le \|\Psi^T x\|_1$. Therefore, $f(x) \le c\left(\|\Psi^T x\|_1 - \langle\Psi y, x\rangle\right)$. □

Lemma 3 ([1], Theorem 3; [10], Lemma 3.5). Suppose that $x^\star \in \mathbb{R}^n$ is a fixed vector obeying $\operatorname{supp}(\Psi^T x^\star) = I$ and that there are vectors $y \in \partial\|\cdot\|_1(\Psi^T x^\star)$ and $\beta$ satisfying $\Psi y = \Phi^T\beta$. Then for every $\delta > 0$ and every data vector $b$ satisfying $\|\Phi x^\star - b\|_2 \le \delta$, the following two statements hold:
1) every minimizer $x_{\delta,\lambda}$ of problem (2b) satisfies $d_y(x_{\delta,\lambda}, x^\star) \le \frac{(\delta + \lambda\|\beta\|_2/2)^2}{\lambda}$ and $\|\Phi x_{\delta,\lambda} - b\|_2 \le \delta + \lambda\|\beta\|_2$;
2) every minimizer $x_\delta$ of problem (2c) satisfies $d_y(x_\delta, x^\star) \le 2\delta\|\beta\|_2$.

From $\Psi y = \Phi^T\beta$ and the full row-rankness of $\Phi$, we have $\beta = (\Phi\Phi^T)^{-1}\Psi y$.
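For completeness, the Bregman distance $d_y$ appearing in Lemmas 2 and 3 can be evaluated directly from its definition. A minimal helper, assuming NumPy:

```python
# d_y(x, xbar) = ||Psi^T x||_1 - ||Psi^T xbar||_1 - <Psi y, x - xbar>
import numpy as np

def bregman_distance(Psi, y, x, xbar):
    return (np.linalg.norm(Psi.T @ x, 1)
            - np.linalg.norm(Psi.T @ xbar, 1)
            - (Psi @ y) @ (x - xbar))
```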
Proof of Theorem 2. First, we derive
$$\|\Psi^T(x_{\delta,\lambda} - x^\star)\|_1 \le C_3\|\Phi(x_{\delta,\lambda} - x^\star)\|_2 + C_4\, d_y(x_{\delta,\lambda}, x^\star) \qquad (25a)$$
$$\le C_3\|\Phi x_{\delta,\lambda} - b\|_2 + C_3\|\Phi x^\star - b\|_2 + C_4\, d_y(x_{\delta,\lambda}, x^\star) \qquad (25b)$$
$$\le C_3(\delta + \lambda\|\beta\|_2) + C_3\delta + C_4\,\frac{(\delta + \lambda\|\beta\|_2/2)^2}{\lambda}, \qquad (25c)$$
where the first and the third inequalities follow from Lemmas 2 and 3, respectively. Substituting $\lambda = C_0\delta$ and collecting like terms in (25c), we obtain the first part of Theorem 2. The second part can be proved in the same way. □

Acknowledgements

The work of H. Zhang is supported by the China Scholarship Council during his visit to Rice University, and in part by the Graduate School of NUDT under Funding of Innovation B110202, Hunan Provincial Innovation Foundation for Postgraduate CX2011B008, and NSFC grants. The work of M. Yan is supported in part by the Center for Domain-Specific Computing (CDSC) funded by an NSF grant and an ARO/ARL MURI grant. The work of W. Yin is supported in part by NSF grants. The authors thank Rachel Ward and Xiaoya Zhang for their discussions.

References

[1] M. Burger and S. Osher, Convergence rates of convex variational regularization, Inverse Problems, 20 (2004).

[2] E. Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathematique, 346 (2008).

[3] E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, Compressed sensing with coherent and redundant dictionaries, Applied and Computational Harmonic Analysis, 31 (2011).

[4] E. J. Candès and Y. Plan, A probabilistic and RIPless theory of compressed sensing, IEEE Transactions on Information Theory, 57 (2011).

[5] A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation, Journal of the American Mathematical Society (2009).

[6] M. Elad, P. Milanfar, and R. Rubinstein, Analysis versus synthesis in signal priors, Inverse Problems, 23 (2007).

[7] J. J. Fuchs, On sparse representations in arbitrary redundant bases, IEEE Transactions on Information Theory, 50 (2004).

[8] J. J. Fuchs, Recovery of exact sparse representations in the presence of bounded noise, IEEE Transactions on Information Theory, 51 (2005).

[9] M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals, Inverse Problems, 27 (2011).
[10] M. Grasmair, M. Haltmeier, and O. Scherzer, Necessary and sufficient conditions for linear convergence of $\ell_1$-regularization, Communications on Pure and Applied Mathematics, 64 (2011).

[11] M. Haltmeier, Stable signal reconstruction via $\ell_1$-minimization in redundant, non-tight frames, IEEE Transactions on Signal Processing, 61 (2013).

[12] F. Krahmer and R. Ward, Compressive imaging: stable and robust recovery from variable density frequency samples, arXiv preprint (2012).

[13] Y. Liu, T. Mi, and S. Li, Compressed sensing with general frames via optimal-dual-based $\ell_1$-analysis, IEEE Transactions on Information Theory, 58 (2012).

[14] S. Nam, M. E. Davies, M. Elad, and R. Gribonval, The cosparse analysis model and algorithms, Applied and Computational Harmonic Analysis, 34 (2013).

[15] D. Needell and R. Ward, Near-optimal compressed sensing guarantees for total variation minimization, arXiv preprint (2012).

[16] D. Needell and R. Ward, Stable image reconstruction using total variation minimization, arXiv preprint (2012).

[17] R. J. Tibshirani, The lasso problem and uniqueness, arXiv preprint (2012).

[18] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili, Robust sparse analysis regularization, IEEE Transactions on Information Theory, 59 (2013).

[19] H. Zhang, W. Yin, and L. Cheng, Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization, arXiv:1209.0652 (2012).

[20] Y. Zhang, Theory of compressive sensing via $\ell_1$-minimization: a non-RIP analysis and extensions, Journal of the Operations Research Society of China, 1 (2013).