MA 796S: Convex Optimization and Interior Point Methods          October 8, 2007

Lecture 1

Lecturer: Kartik Sivaramakrishnan                Scribe: Kartik Sivaramakrishnan

1 Conic programming

Consider the conic program

    min   sum_{i=1}^r c_i^T x_i
    s.t.  sum_{i=1}^r A_i x_i = b                                            (1)
          x_i ∈ K_i,  i = 1, ..., r

and its dual

    max   b^T y
    s.t.  A_i^T y + s_i = c_i,  i = 1, ..., r                                (2)
          s_i ∈ K_i^*

where b, y ∈ R^m; c_i, x_i, s_i ∈ R^{n_i}; and A_i ∈ R^{m × n_i}, i = 1, ..., r.
For each i = 1, ..., r, x_i and s_i are the primal variable and the dual slack
variable associated with the ith cone, and

    K_i^* = { s_i ∈ R^{n_i} : x_i^T s_i ≥ 0 for all x_i ∈ K_i }              (3)

is the dual cone to K_i. We assume that K_i and K_i^*, i = 1, ..., r, are
pointed closed convex cones with nonempty interiors. Let
K = K_1 × K_2 × ... × K_r be the overall cone in (1) and let
n = sum_{i=1}^r n_i denote its size. The overall dual cone is
K^* = K_1^* × K_2^* × ... × K_r^*. We are interested in self-dual cones K, where
K^* = K. Let int(K) denote the interior of the closed convex cone K. The
important self-dual cones include the following:

1. Linear cone:

       R^n_+ = { x ∈ R^n : x ≥ 0 }.                                          (4)

   We have int(R^n_+) = { x ∈ R^n : x > 0 }.
2. Second-order cone:

       Q^n_+ = { x ∈ R^n : x_1 ≥ sqrt( sum_{i=2}^n x_i^2 ) }.                (5)

   The notation x ≥_Q 0 indicates that x lies in a second-order cone. We also
   have int(Q^n_+) = { x ∈ R^n : x_1 > sqrt( sum_{i=2}^n x_i^2 ) }.

3. Semidefinite cone:

       S^n_+ = { X ∈ S^n : X is symmetric and positive semidefinite }.       (6)

   The notation X ⪰ 0 indicates that the matrix X lies in a semidefinite cone.
   We also have int(S^n_+) = { X ∈ S^n : X is symmetric and positive definite }.

Exercise: Show that R^n_+, Q^n_+, and S^n_+ are closed convex cones.

Every convex optimization problem can be written as a conic program. To
illustrate the idea, consider the strictly convex quadratic program

    min   x^T Q x
    s.t.  A x = b                                                            (7)
          x ≥ 0

where x ∈ R^n, Q is a symmetric positive definite matrix of size n,
A ∈ R^{m×n}, and b ∈ R^m. One can reformulate (7) as

    min_{t,x}   t
    s.t.        x^T Q x ≤ t
                A x = b                                                      (8)
                x ≥ 0.

Consider the Cholesky factorization of Q, i.e., Q = R R^T where R ∈ R^{n×n}. We
can rewrite the quadratic inequality constraint in (8) as a second-order cone
constraint as follows:

    x^T Q x ≤ t  ⟺  x^T (R R^T) x ≤ t
                 ⟺  ||R^T x||^2 ≤ t
                 ⟺  || (2 R^T x, t − 1) || ≤ t + 1
                 ⟺  ( t + 1, 2 R^T x, t − 1 ) ≥_Q 0.

This allows us to rewrite (8) as the following conic program

    min_{t,x}   t
    s.t.        A x = b
                x ≥ 0                                                        (9)
                ( t + 1, 2 R^T x, t − 1 ) ≥_Q 0.
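The equivalence between the quadratic constraint and the second-order cone
constraint can be checked numerically. The sketch below (the helper `in_soc` is
illustrative, not from the lecture) draws random instances and confirms that
x^T Q x ≤ t holds exactly when the vector (t+1, 2R^T x, t−1) lies in the
second-order cone:

```python
import numpy as np

def in_soc(v, tol=1e-9):
    """Membership in the second-order cone: v[0] >= ||v[1:]||."""
    return v[0] >= np.linalg.norm(v[1:]) - tol

rng = np.random.default_rng(0)
for _ in range(1000):
    n = 4
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n * np.eye(n)      # symmetric positive definite
    R = np.linalg.cholesky(Q)        # Q = R R^T (numpy returns the lower-triangular factor)
    x = rng.standard_normal(n)
    t = rng.uniform(0.0, 2.0) * (x @ Q @ x)   # t sometimes above, sometimes below x^T Q x
    quad_ok = x @ Q @ x <= t + 1e-9
    soc_vec = np.concatenate(([t + 1.0], 2.0 * R.T @ x, [t - 1.0]))
    assert quad_ok == in_soc(soc_vec)
print("equivalence verified on 1000 random instances")
```

The key identity is ||(2R^T x, t−1)||^2 = 4||R^T x||^2 + (t−1)^2 ≤ (t+1)^2,
which simplifies to x^T Q x ≤ t.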
Setting t = t_1 − t_2, where t_1, t_2 ≥ 0, and u = ( t + 1, 2 R^T x, t − 1 ), we
can rewrite (9) as the following second-order cone program in standard form

    min_{x̄}   c̄^T x̄
    s.t.      Ā x̄ = b̄                                                     (10)
              x̄ ∈ K

where x̄ = ( x̄_1, x̄_2 ) with x̄_1 = ( t_1, t_2, x ) and x̄_2 = u. We have
x̄_1 ≥ 0 and x̄_2 ≥_Q 0. Therefore, in this problem K = K_1 × K_2, where
K_1, K_2 ⊆ R^{n+2} are linear and second-order cones, respectively. The
parameters c̄, Ā, and b̄ in (10) can be easily obtained.

Exercise: What are c̄, Ā, and b̄ in (10)?

The purpose of this lecture is to provide a short introduction to duality
theory, strict complementarity, and optimality conditions, and to introduce the
notions of extreme point solutions and nondegeneracy in conic programs.

2 Conic duality

Consider the conic program

    min   c^T x
    s.t.  A x = b                                                           (11)
          x ∈ K

with dual

    max   b^T y
    s.t.  A^T y + s = c                                                    (12)
          s ∈ K^*

where c, x ∈ R^n, b ∈ R^m, and A ∈ R^{m×n} with m < n. Let x and (y, s) be
feasible solutions in (11) and (12), respectively. We have

    c^T x = (A^T y + s)^T x = y^T A x + s^T x = b^T y + s^T x ≥ b^T y      (13)

where the inequality in (13) follows from x ∈ K, s ∈ K^*, and the definition
(3) of the dual cone. Therefore, we have c^T x ≥ b^T y, and so the quantity
b^T y is a lower bound on the optimal value of (11). The difference
c^T x − b^T y is referred to as the duality gap. This weak duality theorem says
that the objective value of any feasible solution to the primal (minimization)
problem is greater than or equal to the objective value of any feasible
solution to the dual (maximization) problem. The best lower bound we can obtain
in this way on the optimal objective value of (11) is given precisely by the
dual problem (12). We will henceforth refer to (12) as the dual problem to
(11).

The conic problems (11) and (12) minimize a linear functional over the
intersection of an affine subspace and a convex cone. To see this, we will make
the following assumption.
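A quick numerical instance of the weak-duality chain (13), taking K = R^3_+ so
that the conic program is a linear program; the data below is illustrative, not
from the lecture:

```python
import numpy as np

# Weak duality (13) on a small conic program with K = R^3_+.
A = np.array([[1.0, 1.0, 1.0]])          # m = 1, n = 3
x = np.array([0.2, 0.3, 0.5])            # primal feasible: x >= 0
b = A @ x                                # b = [1.0], so Ax = b holds
c = np.array([2.0, 3.0, 4.0])
y = np.array([-1.0])                     # any y with s = c - A^T y >= 0
s = c - A.T @ y                          # s = (3, 4, 5) >= 0, so (y, s) is dual feasible

gap = c @ x - b @ y                      # duality gap
assert np.all(s >= 0)
assert abs(gap - s @ x) < 1e-12          # c^T x - b^T y = s^T x exactly
assert gap >= 0                          # weak duality
print("duality gap:", gap)
```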
Assumption 1  The matrix A is surjective, i.e., for every b ∈ R^m there exists
x ∈ R^n such that A x = b. This implies that the adjoint operator A^T is
injective (one-to-one), i.e., A^T w = A^T z implies w = z; equivalently, A^T
has only the trivial nullspace {0}. From linear algebra, this implies that A
has full row rank.

Suppose assumption 1 is not met, i.e., there is a b ∈ R^m such that A x ≠ b for
all x ∈ R^n. Then there is a nonzero vector e ∈ R^m satisfying A^T e = 0 and
b^T e ≠ 0. Without loss of generality, we assume that b^T e > 0 (else replace e
by −e). Now if the dual problem (12) is feasible, then there exist (ȳ, s̄)
satisfying A^T ȳ + s̄ = c and s̄ ∈ K^*. Consider y = ȳ + µe with µ > 0. Since
A^T e = 0, we have

    A^T y + s̄ = A^T (ȳ + µe) + s̄ = A^T ȳ + s̄ = c,

indicating that (y, s̄) is feasible in (12) too. Moreover,
b^T y = b^T ȳ + µ b^T e → ∞ as µ → ∞, indicating that (12) is unbounded. So if
assumption 1 is not met, then the dual problem is unbounded whenever it is
feasible.

Using assumption 1, we can write (11) in the following fashion. Let
L = { d ∈ R^n : A d = 0 } be the null space of A. Assumption 1 ensures that
there exist x* satisfying A x* = b and (y*, s*) satisfying A^T y* + s* = c. We
have

    A x = b  ⟺  x ∈ x* + L

and

    c^T x = b^T y* + (s*)^T x

for all feasible x in (11). So we can rewrite (11) as

    min   (s*)^T x
    s.t.  x ∈ (x* + L) ∩ K.

Moreover, we have

    A^T y + s = c  ⟺  s = s* − A^T (y − y*)  ⟺  s ∈ s* + L^⊥,

where we have used the fact that the row space of a matrix A is orthogonal to
its null space. Also, we have

    b^T y = c^T x* − (x*)^T s

for all feasible (y, s) in (12). So we can rewrite (12) as

    min   (x*)^T s
    s.t.  s ∈ (s* + L^⊥) ∩ K^*.

Therefore, any conic program can be expressed as the problem of minimizing a
linear function over a convex set that is described as the intersection of an
affine subspace and a closed convex cone.
Unlike linear programs, we have to impose some conditions to achieve strong
duality. We first introduce the notion of strictly feasible solutions (Slater
points) for the primal and dual conic programs. Consider the following regions:

    F(P)   = { x ∈ R^n : A x = b, x ∈ K }
    F^0(P) = { x ∈ R^n : A x = b, x ∈ int(K) }
    F(D)   = { (y, s) ∈ R^m × R^n : A^T y + s = c, s ∈ K^* }               (14)
    F^0(D) = { (y, s) ∈ R^m × R^n : A^T y + s = c, s ∈ int(K^*) }.

The sets F(P) and F(D) denote the feasible regions of the conic programs (11)
and (12), respectively. Moreover, F^0(P) and F^0(D) represent the interiors of
F(P) and F(D), respectively; note that they are open relative to their affine
constraint sets and contain no boundary points of F(P) and F(D). We will refer
to any x ∈ F^0(P) (respectively, (y, s) ∈ F^0(D)) as a strictly feasible primal
(dual) Slater point.

We now discuss the strong duality theorem for optimal solutions to the primal
and dual conic programs. First, we state the strong duality theorem for linear
programs, whose proof can be found in Chvátal [4].

Theorem 2  Consider the primal linear program (11) along with its dual (12). If
F(P) and F(D) are nonempty, then both problems (11) and (12) attain their
optimal solutions and the optimal objective values of (11) and (12) are the
same.

We now state the strong duality theorem for general conic programs, whose proof
can be found in Section 2.4 of Ben-Tal and Nemirovskii [3], Chapter 3 of
Renegar [6], or Section 4 of Todd [7].

Theorem 3  Consider the primal conic problem (11) along with its dual (12).

1. If F^0(P) and F(D) are nonempty, then the dual problem (12) attains its
   optimal solution and the optimal objective values of (11) and (12) are the
   same.

2. If F(P) and F^0(D) are nonempty, then the primal problem (11) attains its
   optimal solution and the optimal objective values of (11) and (12) are the
   same.

3. If F^0(P) and F^0(D) are nonempty, then both problems (11) and (12) attain
   their optimal solutions and the optimal objective values of (11) and (12)
   are the same.
Case 3 in Theorem 3 is similar to Theorem 2 in the LP case. Note that we need
stronger conditions in the general conic case, i.e., strictly feasible primal
and dual solutions (unlike merely feasible primal and dual solutions in the LP
case). We shall now consider some examples involving semidefinite cones to
illustrate some of the pathologies that can occur in the conic case. Our primal
(11) semidefinite program is

    min   C • X
    s.t.  A_i • X = b_i,  i = 1, ..., m                                    (15)
          X ⪰ 0

and the dual (12) semidefinite program is

    max   b^T y
    s.t.  S = C − sum_{i=1}^m y_i A_i                                      (16)
          S ⪰ 0.
The matrices X, S, C, and A_i, i = 1, ..., m, are symmetric matrices of size n.
Moreover, X and S are positive semidefinite matrices. The notation

    C • X = trace(CX) = sum_{i=1}^n sum_{j=1}^n C_ij X_ij                  (17)

represents the Frobenius inner product for symmetric matrices. Moreover,
assumption 1 requires that the matrices A_i, i = 1, ..., m, in (15) are
linearly independent. Since S^n is isomorphic to R^{n(n+1)/2}, this implies
that m ≤ n(n+1)/2.

Consider the primal semidefinite program

    min   [ 0 0 0 ]
          [ 0 0 0 ] • X
          [ 0 0 1 ]

    s.t.  [ 1 0 0 ]           [ 0 1 0 ]
          [ 0 0 0 ] • X = 0,  [ 1 0 0 ] • X = 2
          [ 0 0 0 ]           [ 0 0 2 ]

          X ⪰ 0

with dual

    max   2 y_2
                [ −y_1  −y_2     0     ]
    s.t.  S =   [ −y_2    0      0     ]
                [   0     0   1 − 2y_2 ]
          S ⪰ 0.

The primal constraints say that X_11 = 0 and X_12 + X_33 = 1. Thus any feasible
X is of the form

        [ 0    ξ_1  ξ_2     ]
    X = [ ξ_1  ξ_3  ξ_4     ].
        [ ξ_2  ξ_4  1 − ξ_1 ]

Also X must be positive semidefinite, and this forces ξ_1 = ξ_2 = 0, since the
principal submatrices

    [ 0    ξ_1 ]          [ 0    ξ_2     ]
    [ ξ_1  ξ_3 ] ⪰ 0  and [ ξ_2  1 − ξ_1 ] ⪰ 0

require ξ_1 = 0 and ξ_2 = 0. Thus any feasible X has the form

        [ 0  0    0   ]
    X = [ 0  ξ_3  ξ_4 ].
        [ 0  ξ_4  1   ]

The optimal objective value for the primal is 1, and an optimal solution is
X = Diag(0, 0, 1). In the dual we require S ⪰ 0. So y_2 = 0 (since the
principal submatrix [ −y_1  −y_2 ; −y_2  0 ] ⪰ 0 forces y_2 = 0). Moreover,
y_1 has to be nonpositive. Thus y = (0, 0)^T is an optimal solution to the dual
problem and the dual optimal value is 0. Note that both problems attain their
optimal solutions, but there is a duality gap of 1. We note that neither the
primal nor the dual semidefinite program has a Slater point, and this explains
the duality gap. Moreover, we can also construct examples where there is an
infinite duality gap between the primal and dual
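The duality-gap example above can be checked numerically. The sketch below (a
brute-force grid scan, purely illustrative) verifies that X = Diag(0, 0, 1) is
primal feasible with objective value 1 while no dual-feasible point on a grid
attains an objective above 0:

```python
import numpy as np

# Data of the 3x3 duality-gap example, as written out above.
C  = np.diag([0.0, 0.0, 1.0])
A1 = np.zeros((3, 3)); A1[0, 0] = 1.0
A2 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 2.0]])
b  = np.array([0.0, 2.0])

# Primal: X = Diag(0, 0, 1) is feasible with objective value 1.
X = np.diag([0.0, 0.0, 1.0])
assert abs(np.trace(A1 @ X) - b[0]) < 1e-12
assert abs(np.trace(A2 @ X) - b[1]) < 1e-12
assert np.all(np.linalg.eigvalsh(X) >= -1e-12)
assert abs(np.trace(C @ X) - 1.0) < 1e-12

# Dual: scan a grid of (y1, y2); every dual-feasible point has objective 2*y2 <= 0.
best = -np.inf
for y1 in np.linspace(-2, 2, 81):
    for y2 in np.linspace(-2, 2, 81):
        S = C - y1 * A1 - y2 * A2
        if np.all(np.linalg.eigvalsh(S) >= -1e-9):
            best = max(best, 2 * y2)
assert abs(best) < 1e-6     # dual optimal value 0: a finite duality gap of 1
print("primal value 1, best dual value on grid:", best)
```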
objective values. Now consider

    min   [ 0 1 ]
          [ 1 0 ] • X

    s.t.  [ 1 0 ]           [ 0 0 ]
          [ 0 0 ] • X = 1,  [ 0 1 ] • X = 0

          X ⪰ 0

with dual

    max   y_1
                [ −y_1   1   ]
    s.t.  S =   [  1    −y_2 ]
          S ⪰ 0.

It can be easily seen that the primal has only one feasible point,

        [ 1 0 ]
    X = [ 0 0 ],

which is not a strictly feasible solution. So the primal problem has an optimal
objective value of 0. Now consider the dual. We require S ⪰ 0, i.e., −y_1 ≥ 0,
−y_2 ≥ 0, and y_1 y_2 ≥ 1. So the feasible region is
{ (y_1, y_2) : y_1 ≤ 0, y_2 ≤ 0, y_1 y_2 ≥ 1 }. The dual optimal objective
value is therefore 0, but it is not attained: we can get arbitrarily close with
the feasible solutions (−ε, −1/ε) for arbitrarily small ε > 0.

We also want to relate the existence of primal-dual Slater points to the notion
of robust solvability of these problems. Here is an example which shows that
tweaking one of the data parameters by a small amount can lead to something
drastically different. Consider again

    min   [ 0 0 0 ]
          [ 0 0 0 ] • X
          [ 0 0 1 ]

    s.t.  [ 1 0 0 ]           [ 0 1 0 ]
          [ 0 0 0 ] • X = 0,  [ 1 0 0 ] • X = 2
          [ 0 0 0 ]           [ 0 0 2 ]

          X ⪰ 0.

Any feasible X is of the form

        [ 0    ξ_1  ξ_2     ]
    X = [ ξ_1  ξ_3  ξ_4     ],
        [ ξ_2  ξ_4  1 − ξ_1 ]

and since X ⪰ 0, any feasible X must have ξ_1 = ξ_2 = 0, i.e., the form

        [ 0  0    0   ]
    X = [ 0  ξ_3  ξ_4 ].
        [ 0  ξ_4  1   ]

It follows that an optimal X is Diag(0, 0, 1), with an optimal value of 1. The
dual problem is

    max   2 y_2
                [ −y_1  −y_2     0     ]
    s.t.  S =   [ −y_2    0      0     ]
                [   0     0   1 − 2y_2 ]
          S ⪰ 0

and so y_2 = 0, and y_1 should be nonpositive. Thus y = (0, 0)^T is optimal,
with optimal value 0. Here both problems attain their optimal values, but there
is a gap between them. Note that the dual problem does not have a Slater point
either in this case. Now let us see what happens if we perturb b_1 from its
present value 0 to ε > 0. In this case X_11 = ε, and so X_12 is no longer
constrained to be 0. In fact any feasible X has the form

        [ ε    ξ_1  ξ_2     ]
    X = [ ξ_1  ξ_3  ξ_4     ]
        [ ξ_2  ξ_4  1 − ξ_1 ]

and in fact an optimal X is given by

        [ ε  1    0 ]
    X = [ 1  1/ε  0 ]
        [ 0  0    0 ]

for an optimal value of 0. It can be easily verified (check this!) that the
dual optimal solution is attained at the same point as before the perturbation,
and its optimal value is once again zero. So after the perturbation, both
problems attain their optimal solutions, and the duality gap is zero. This
brings up the issue of robustness: the properties of this semidefinite program
are not robust with respect to small perturbations in the data, i.e., for a
small change in parameters, something entirely different can happen.

3 Complementary slackness, strict complementarity, and optimality conditions

Consider the conic programs (1) and (2). When the optimal values of this pair
of conic programs are equal and attained (e.g., under the conditions of
Theorem 3), any pair of optimal solutions satisfies
x^T s = sum_{i=1}^r x_i^T s_i = 0. Since x_i^T s_i ≥ 0 for i = 1, ..., r, we
must have x_i^T s_i = 0, i = 1, ..., r. Let K_i be the ith cone in (1). We will
use the condition x_i^T s_i = 0 to derive the complementary slackness
conditions for K_i as follows:

1. Linear cone: Suppose K_i = R^{n_i}_+. Consider x_i, s_i ∈ K_i. In this
   case, we have

       x_i^T s_i = sum_{j=1}^{n_i} (x_i)_j (s_i)_j = 0   with x_i, s_i ≥ 0,

   which gives the following complementary slackness conditions for a linear
   cone:

       (x_i)_j (s_i)_j = 0,  j = 1, ..., n_i.                              (18)

   See chapter 5 in Chvátal [4] for more details.
2. Second-order cone: Suppose K_i = Q^{n_i}_+. Write x_i = (x_{i0}, x̄_i) and
   s_i = (s_{i0}, s̄_i), where x_{i0}, s_{i0} ∈ R and x̄_i, s̄_i ∈ R^{n_i − 1}.
   In this case, we have

       x_i^T s_i = x_{i0} s_{i0} + x̄_i^T s̄_i = 0   with x_i, s_i ∈ Q^{n_i}_+,

   which gives the following complementary slackness conditions for a
   second-order cone:

       x_{i0} s̄_i + s_{i0} x̄_i = 0.                                      (19)

   See section 5 in Alizadeh and Goldfarb [1] for more details.

3. Semidefinite cone: Suppose K_i = S^{n_i}_+. Consider X_i, S_i ∈ K_i, i.e.,
   X_i, S_i ⪰ 0. In this case, we have

       X_i • S_i = 0   with X_i, S_i ⪰ 0,

   which gives the following complementary slackness conditions for a
   semidefinite cone:

       X_i S_i = 0.                                                        (20)

   Note that the right hand side in (20) is the zero matrix of size n_i. See
   section 1 in Alizadeh et al. [2] for more details. The complementary
   slackness conditions (20) indicate that X_i and S_i commute, and so they
   are simultaneously diagonalizable (see Horn and Johnson [5]), i.e.,

       X_i = P_i Diag(λ_{i1}, ..., λ_{i n_i}) P_i^T   and
       S_i = P_i Diag(ω_{i1}, ..., ω_{i n_i}) P_i^T,

   where P_i ∈ R^{n_i × n_i} is an orthogonal matrix containing the common
   eigenvectors of X_i and S_i. Moreover, λ_{i1}, ..., λ_{i n_i} and
   ω_{i1}, ..., ω_{i n_i} are the eigenvalues of X_i and S_i, respectively.
   Therefore, we can also rewrite the conditions (20) as

       λ_{ij} ω_{ij} = 0,  j = 1, ..., n_i.                                (21)

We say that a pair of optimal solutions x* and (y*, s*) to (11) and (12)
satisfies strict complementarity if x* + s* ∈ int(K). The Goldman-Tucker
theorem (see theorem 2.4 in Wright [8]) shows that there is always a
primal-dual pair of optimal solutions satisfying strict complementarity in the
linear case. However, there are semidefinite and second-order cone programs
that do not have any primal-dual pair of optimal solutions satisfying strict
complementarity (see Alizadeh et al. [2] and Alizadeh and Goldfarb [1]).

The optimality conditions for conic programs comprise the primal feasibility,
dual feasibility, and complementary slackness conditions.
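The semidefinite complementarity conditions (20)-(21) are easy to illustrate:
build a psd pair X, S sharing an eigenbasis with complementary eigenvalues and
check that X • S = 0 forces XS = 0. The eigenbasis P below is an arbitrary
illustration, not data from the lecture:

```python
import numpy as np

# Complementary slackness for the semidefinite cone (20)-(21).
rng = np.random.default_rng(2)
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal matrix
lam   = np.array([3.0, 1.5, 0.0, 0.0])             # eigenvalues of X
omega = np.array([0.0, 0.0, 2.0, 0.5])             # eigenvalues of S
X = P @ np.diag(lam)   @ P.T
S = P @ np.diag(omega) @ P.T

assert np.all(lam * omega == 0)                    # (21): lambda_j * omega_j = 0
assert abs(np.trace(X @ S)) < 1e-12                # X • S = 0
assert np.allclose(X @ S, np.zeros((4, 4)))        # (20): XS is the zero matrix
assert np.allclose(X @ S, S @ X)                   # X and S commute
print("SDP complementary slackness verified")
```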
Theorem 4  The vectors x ∈ R^n and (y, s) ∈ R^m × R^n are optimal in (1) and
(2), respectively, if and only if the following conditions hold:

1. Primal feasibility: Let x = (x_1, x_2, ..., x_r). We have

       sum_{i=1}^r A_i x_i = b   and   x_i ∈ K_i,  i = 1, ..., r.

2. Dual feasibility: Let s = (s_1, s_2, ..., s_r). We have

       A_i^T y + s_i = c_i,   s_i ∈ K_i^*,  i = 1, ..., r.

3. Complementary slackness: For i = 1, ..., r, x_i and s_i satisfy the
   complementary slackness conditions for the ith cone.
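Theorem 4 can be checked by hand on a tiny linear program (K = R^2_+, r = 1).
The instance below is illustrative, not from the lecture: min x_1 + 2x_2
subject to x_1 + x_2 = 1, x ≥ 0, whose optimum puts all weight on the cheaper
coordinate:

```python
import numpy as np

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

x = np.array([1.0, 0.0])       # optimal primal solution
y = np.array([1.0])            # optimal dual multiplier
s = c - A.T @ y                # dual slack s = (0, 1)

assert np.allclose(A @ x, b) and np.all(x >= 0)   # 1. primal feasibility
assert np.all(s >= 0)                             # 2. dual feasibility
assert np.allclose(x * s, 0.0)                    # 3. complementary slackness (18)
assert np.isclose(c @ x, b @ y)                   # zero duality gap follows
print("optimality conditions hold at (x, y, s)")
```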
4 Extreme point solutions and nondegeneracy

We noted in section 2 that a conic program involves the minimization of a
linear function over a convex set that is the intersection of an affine
subspace and a closed convex cone. Therefore, we know that an optimal solution,
when it exists, will be attained at an extreme point of the feasible region. In
this section, we review the notions of extreme points and nondegeneracy for
linear and semidefinite programs. We first introduce some definitions. For any
point x ∈ R^n, the distance from x to the convex cone K is defined as the
distance from x to the unique closest point in K and is denoted by dist(x, K).
Consider the primal conic program (11) and its feasible region F(P). A point
x ∈ F(P) is said to be an extreme point of F(P) if and only if there is no
nonzero d ∈ R^n such that x ± λd ∈ F(P) for some λ > 0.

Definition 5  Let

    B^K_x = { d ∈ R^n : x ± λd ∈ K for some λ > 0 }
    T^K_x = { d ∈ R^n : dist(x ± λd, K) = O(λ^2) for some λ > 0 }
    N     = { d ∈ R^n : A d = 0 }                                          (22)
    N^⊥   = { d ∈ R^n : d = A^T y for some y ∈ R^m }.

Note that N^⊥ is the orthogonal complement of N. From linear algebra, we know
that N^⊥ is the row space of A.

Exercise: Show that the row space of A is the orthogonal complement of the
null space of A.

Theorem 6  Let x be a feasible solution in (11). Then x is an extreme point of
F(P) if and only if B^K_x ∩ N = {0}.

Proof: Consider x ∈ F(P). If x is not an extreme point of (11), then there is
a nonzero d ∈ R^n such that x ± λd ∈ F(P) for some λ > 0. Therefore,
A(x ± λd) = Ax = b, which implies that d ∈ N. Moreover, x ± λd ∈ K for some
λ > 0, which implies that d ∈ B^K_x. Therefore, d ∈ B^K_x ∩ N, which implies
that B^K_x ∩ N ≠ {0}. Conversely, if there is a nonzero d ∈ B^K_x ∩ N, then
x ± λd ∈ F(P) for some λ > 0, which implies that x is not an extreme point of
(11).

We will now describe B^K_x for linear and semidefinite cones:

1. Let F(P) = { x ∈ R^n : Ax = b, x ≥ 0 } be the feasible set of a linear
   program and let x ∈ F(P). We have

       B^LP_x = { d ∈ R^n : d_i = 0 if x_i = 0, i = 1, ..., n }.           (23)

2.
   Let F(P) = { X ∈ S^n : A_i • X = b_i, i = 1, ..., m, X ⪰ 0 } be the
   feasible set of a semidefinite program and let

       X = [P Q] [ Λ 0 ] [P Q]^T
                 [ 0 0 ]

   be a feasible solution, where P ∈ R^{n×r} with P^T P = I is an orthonormal
   matrix containing the positive eigenspace of X, and Λ ≻ 0 is a diagonal
   matrix of size r containing the positive eigenvalues of X. We have

       B^SDP_X = { [P Q] [ U 0 ] [P Q]^T : U ∈ S^r }.                      (24)
                         [ 0 0 ]
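Theorem 6 with the description (23) gives a concrete extreme-point test for the
linear cone: a feasible x is extreme iff no nonzero null-space direction of A
vanishes off the support of x, i.e., iff the columns of A on the support of x
are linearly independent. The instance below is illustrative, not from the
lecture:

```python
import numpy as np

def is_extreme_point(A, x, tol=1e-9):
    """Extreme-point test via Theorem 6 for K = R^n_+.

    d in B^LP_x ∩ N  <=>  A d = 0 and d_i = 0 wherever x_i = 0, i.e. a nonzero
    null vector of the support columns of A. So x is extreme iff those columns
    have full column rank (trivial null space).
    """
    support = x > tol
    A_sup = A[:, support]
    return np.linalg.matrix_rank(A_sup) == A_sup.shape[1]

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
x_vertex   = np.array([2.0, 1.0, 0.0, 0.0])   # support columns are independent
x_interior = np.array([1.0, 1.0, 1.0, 1.0])   # 4 columns cannot be independent (m = 2)
assert is_extreme_point(A, x_vertex)
assert not is_extreme_point(A, x_interior)
print("extreme-point test via B^LP_x ∩ N works on the sample LP")
```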
For the linear cone, T^LP_x = B^LP_x. However, for the semidefinite cone,
B^SDP_X ⊆ T^SDP_X. In fact, we have

    T^SDP_X = { [P Q] [ U   V ] [P Q]^T : U ∈ S^r, V ∈ R^{r × (n−r)} }.    (25)
                      [ V^T 0 ]

To see this, consider sufficiently small perturbations X ± λ ΔX with
ΔX ∈ T^SDP_X. These matrices are typically indefinite, but a psd matrix can be
recovered by adding to them a matrix W of the form

    W = [P Q] [ 0 0 ] [P Q]^T,   Σ ∈ S^{n−r}_+.
              [ 0 Σ ]

Thus the distance from X ± λ ΔX to S^n_+ is bounded by the norm of the matrix
Σ. We utilize the Schur complement idea to obtain a bound on Σ. Notice that

    X + λ ΔX + W = [P Q] [ Λ + λU   λV ] [P Q]^T,
                         [ λV^T     Σ  ]

so we obtain the following condition on Σ:

    X + λ ΔX + W ⪰ 0  ⟺  [ Λ + λU   λV ] ⪰ 0  ⟺  Σ ⪰ λ^2 V^T (Λ + λU)^{−1} V.
                          [ λV^T     Σ  ]

We have utilized the fact that Λ (since it is positive definite) dominates λU
for small λ, so the matrix Λ + λU is positive definite and hence invertible in
the Schur complement. Thus, loosely speaking, we can choose Σ with
||Σ|| = O(λ^2), so ΔX is in T^SDP_X: for a sufficiently small perturbation λ,
whenever ΔX ∈ T^SDP_X, the matrix X ± λ ΔX is within O(λ^2) of the positive
semidefinite cone. Moreover, we have

    (T^SDP_X)^⊥ = { [P Q] [ 0 0 ] [P Q]^T : W ∈ S^{n−r} }.                 (26)
                          [ 0 W ]

We will now introduce the notion of primal nondegeneracy. For more details, see
Alizadeh et al. [2].

Definition 7  Let x ∈ R^n be a feasible solution in (11). If

    T^K_x + N = R^n,                                                       (27)

then x is primal nondegenerate. Note that the definition (27) can also be
stated as

    (T^K_x)^⊥ ∩ N^⊥ = {0},                                                 (28)

where (T^K_x)^⊥ and N^⊥ denote the orthogonal complements of T^K_x and N,
respectively. It is easy to see that the definitions (27) and (28) are
equivalent: for instance, if there is a nonzero d ∈ (T^K_x)^⊥ ∩ N^⊥, then d is
orthogonal to both T^K_x and N, and so T^K_x + N ≠ R^n.

Similarly, we can define dual nondegeneracy as follows:

Definition 8  Let (y, s) ∈ R^m × R^n be a feasible solution in (12). If

    T^{K^*}_s + N^⊥ = R^n,                                                 (29)

then (y, s) is dual nondegenerate.
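The O(λ^2) tangency in (25) can be observed numerically: for a direction in
T^SDP_X the distance of X + λΔX to the psd cone decays quadratically in λ,
while a direction with a nonzero lower-right block only decays linearly. The
rank-2 example in S^4 below is an arbitrary illustration; the constants in the
assertions are generous bounds, not sharp:

```python
import numpy as np

rng = np.random.default_rng(3)
PQ, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Lam = np.diag([2.0, 1.0])
X = PQ @ np.block([[Lam, np.zeros((2, 2))],
                   [np.zeros((2, 2)), np.zeros((2, 2))]]) @ PQ.T

def dist_to_psd(M):
    """Frobenius distance to S^n_+ = norm of the negative eigenvalue part."""
    w = np.linalg.eigvalsh((M + M.T) / 2)
    return np.linalg.norm(np.minimum(w, 0.0))

U = rng.standard_normal((2, 2)); U = U + U.T
V = rng.standard_normal((2, 2))
dX_tan = PQ @ np.block([[U, V], [V.T, np.zeros((2, 2))]]) @ PQ.T   # in T^SDP_X
dX_out = PQ @ np.block([[U, V], [V.T, -np.eye(2)]]) @ PQ.T         # lower block negative

for lam in [1e-2, 1e-3, 1e-4]:
    d_tan = dist_to_psd(X + lam * dX_tan)
    d_out = dist_to_psd(X + lam * dX_out)
    assert d_tan <= 100.0 * lam**2       # O(lam^2) for tangent directions
    assert d_out >= 0.5 * lam            # only O(lam) otherwise
    print(f"lam={lam:g}  dist(tangent)={d_tan:.2e}  dist(non-tangent)={d_out:.2e}")
```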
We will now review the notions of extreme points and primal nondegeneracy for
linear and semidefinite programs.

1. Let x be a feasible solution to a linear program. Without loss of
   generality, we will assume that x_i > 0, i = 1, ..., r, and x_i = 0,
   i = r + 1, ..., n.

   (a) x is an extreme point if and only if the first r columns of A,
       corresponding to the positive components of x, are linearly independent
       in R^m. This implies that r ≤ m.

   (b) x is primal nondegenerate if and only if the m rows of the submatrix of
       A formed by its first r columns are linearly independent in R^r. This
       implies that r ≥ m.

   Note that if x is a nondegenerate extreme point solution to a linear
   program, then we have B^LP_x ⊕ N = R^n and r = m.

   Exercise: Show all the above statements.

2. Let

       X = [P Q] [ Λ 0 ] [P Q]^T
                 [ 0 0 ]

   be a feasible solution to a semidefinite program, where P ∈ R^{n×r} and
   Q ∈ R^{n×(n−r)} are orthonormal matrices containing the positive eigenspace
   and the null space of X, respectively, and Λ ≻ 0 is a diagonal matrix
   containing the positive eigenvalues of X.

   Theorem 9  X is an extreme point if and only if the matrices P^T A_i P,
   i = 1, ..., m, span S^r.

   Proof: Let X be an extreme point solution. Consider ΔX ∈ B^SDP_X. We have
   ΔX = P U P^T, where U ∈ S^r. Since X is an extreme point, no nonzero such
   ΔX lies in N. Therefore, the equations

       A_i • (P U P^T) = (P^T A_i P) • U = 0,  i = 1, ..., m,

   have only the trivial solution U = 0. This implies that the matrices
   P^T A_i P, i = 1, ..., m, span S^r. By reversing the arguments, we can show
   that if the matrices P^T A_i P span S^r, then X = P Λ P^T is an extreme
   point solution.

   Moreover, theorem 9 implies that m ≥ r(r+1)/2, i.e., r ≤ sqrt(2m). This
   suggests that the rank of an optimal extreme point solution to the primal
   semidefinite program is O(√m).

   Theorem 10  X is primal nondegenerate if and only if the matrices

       B_k = [ P^T A_k P   P^T A_k Q ],  k = 1, ..., m,
             [ Q^T A_k P       0     ]

   are linearly independent in S^n.

   Proof: Let X be a feasible solution that is primal nondegenerate. Suppose
   the matrices B_k, k = 1, ..., m, are not linearly independent.
   Then we have sum_{k=1}^m y_k B_k = 0, where y ≠ 0, i.e., some of the
   components y_k of y are nonzero. Using the definition of the B_k matrices,
   we have

       [P Q]^T ( sum_{k=1}^m y_k A_k ) [P Q] = [ 0 0 ]
                                               [ 0 W ]

   for some W ∈ S^{n−r}.
Therefore,

    sum_{k=1}^m y_k A_k = [P Q] [ 0 0 ] [P Q]^T ∈ (T^SDP_X)^⊥.
                                [ 0 W ]

Since sum_{k=1}^m y_k A_k ∈ N^⊥, we have (T^SDP_X)^⊥ ∩ N^⊥ ≠ {0}, which
contradicts the nondegeneracy assumption on X. Conversely, if the matrices
B_k, k = 1, ..., m, are linearly independent in S^n, then we have
(T^SDP_X)^⊥ ∩ N^⊥ = {0}, which implies that X is nondegenerate.

For linear programs, if the primal (dual) optimal solution is nondegenerate,
then the dual (primal) optimal solution is unique (see Chvátal [4]). For
semidefinite programs, if the primal optimal solution is nondegenerate, then
the dual optimal solution is unique, as we shall show below.

Theorem 11  Let

    X = [P Q] [ Λ 0 ] [P Q]^T,   Λ ≻ 0,
              [ 0 0 ]

be a primal nondegenerate optimal solution to the primal semidefinite program.
Then there exists a unique optimal dual solution (y, S).

Proof: We assume that our semidefinite program satisfies assumption 1, which
in turn implies that the dual slack variable S is uniquely determined by the y
variable. Therefore, we only have to show that there exists a unique optimal
solution y to the dual semidefinite program. The complementary slackness
condition XS = 0 for semidefinite programs implies that an optimal S is of the
form

    S = [P Q] [ 0 0 ] [P Q]^T,   Σ ∈ S^{n−r}_+,
              [ 0 Σ ]

i.e., P^T S P = 0 and P^T S Q = 0. Writing S = C − sum_{i=1}^m y_i A_i, these
two conditions give

    P^T C P = sum_{i=1}^m y_i (P^T A_i P),
    P^T C Q = sum_{i=1}^m y_i (P^T A_i Q),                                 (30)

while the remaining block is Q^T S Q = Q^T C Q − sum_{i=1}^m y_i (Q^T A_i Q).
Suppose there are two optimal solutions ȳ and ỹ to the dual semidefinite
program. Subtracting the corresponding equations (30) gives
sum_{i=1}^m (ȳ_i − ỹ_i) B_i = 0, and so ȳ = ỹ by the linear independence of
the B_i matrices (Theorem 10). This shows that the dual problem has a unique
optimal solution (y, S).

The converse of theorem 11 also holds under the additional assumption of
strict complementarity (see Alizadeh et al. [2]). Thus, the notions of
nondegeneracy and extreme point solutions in conic programs (under the
additional assumption of strict complementarity) are complementary, i.e., if
the primal (dual) optimal solution is nondegenerate, then the dual (primal)
optimal solution is an extreme point.
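The linear-independence criterion of Theorem 10, which the proof of Theorem 11
reuses, can be tested in code by stacking the vectorized B_k matrices and
computing a rank. The random instance below (matrices A_k, basis [P Q]) is
purely illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, m = 4, 2, 3
PQ, _ = np.linalg.qr(rng.standard_normal((n, n)))
P, Q = PQ[:, :r], PQ[:, r:]

def B_matrix(Ak):
    """Form B_k = [[P^T Ak P, P^T Ak Q], [Q^T Ak P, 0]] from Theorem 10."""
    top = np.hstack([P.T @ Ak @ P, P.T @ Ak @ Q])
    bot = np.hstack([Q.T @ Ak @ P, np.zeros((n - r, n - r))])
    return np.vstack([top, bot])

A_list = []
for _ in range(m):
    M = rng.standard_normal((n, n))
    A_list.append(M + M.T)                      # random symmetric constraint matrices

Bs = np.array([B_matrix(Ak).ravel() for Ak in A_list])
independent = np.linalg.matrix_rank(Bs) == m    # True => X is primal nondegenerate
print("B_k linearly independent:", independent)
assert independent
```

For random data the B_k are linearly independent with probability one, which is
why generic instances are primal nondegenerate.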
Since strictly complementary optimal solutions always exist for linear
programs (the Goldman-Tucker theorem), this correspondence holds for all
linear programs. It does not always hold for second-order cone and
semidefinite programs, and we refer the reader to examples in Alizadeh et
al. [2] and Alizadeh and Goldfarb [1]. Similarly, we can show that if the dual
optimal solution to a semidefinite program is nondegenerate, then the primal
semidefinite program has a unique optimal solution. Moreover, the converse is
also true under the assumption of strict complementarity (see Alizadeh et
al. [2]).
We have not discussed the notions of extreme point solutions and primal and
dual nondegeneracy in second-order cone programs. For more details, we refer
the reader to Alizadeh and Goldfarb [1].

References

[1] F. Alizadeh and D. Goldfarb, Second order cone programming, Mathematical
    Programming, 95 (2003).

[2] F. Alizadeh, J.A. Haeberly, and M.L. Overton, Complementarity and
    nondegeneracy in semidefinite programming, Mathematical Programming, 77
    (1997).

[3] A. Ben-Tal and A. Nemirovskii, Lectures on Modern Convex Optimization,
    MPS-SIAM Series on Optimization.

[4] V. Chvátal, Linear Programming, W.H. Freeman and Company, New York.

[5] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press.

[6] J. Renegar, A Mathematical View of Interior-Point Methods in Convex
    Optimization, MPS-SIAM Series on Optimization.

[7] M.J. Todd, Semidefinite Optimization, Acta Numerica, 10 (2001).

[8] S.J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia.
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationSEMIDEFINITE PROGRAM BASICS. Contents
SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationThen x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r
Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised October 2017 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationLecture 14: Optimality Conditions for Conic Problems
EE 227A: Conve Optimization and Applications March 6, 2012 Lecture 14: Optimality Conditions for Conic Problems Lecturer: Laurent El Ghaoui Reading assignment: 5.5 of BV. 14.1 Optimality for Conic Problems
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More informationAgenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms
Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationSemidefinite Programming
Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More informationConvex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version
Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More information7. Symmetric Matrices and Quadratic Forms
Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationLINEAR ALGEBRA REVIEW
LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication
More informationSemidefinite Programming
Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationLecture 8. Strong Duality Results. September 22, 2008
Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation
More informationKey words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone
ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called
More informationThe Q Method for Second-Order Cone Programming
The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method
More informationInterior-Point Methods
Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals
More informationLecture: Duality.
Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More informationThe Simplest Semidefinite Programs are Trivial
The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationLecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016
Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More information3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions
A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence
More informationarxiv: v4 [math.oc] 12 Apr 2017
Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming arxiv:1507.00290v4 [math.oc] 12 Apr 2017 Minghui Liu Gábor Pataki April 14, 2017 Abstract In conic
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationHandout 6: Some Applications of Conic Linear Programming
ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and
More informationDegeneracy in Maximal Clique Decomposition for Semidefinite Programs
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Raghunathan, A.U.; Knyazev, A. TR2016-040 July 2016 Abstract Exploiting
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 2
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 2 Instructor: Farid Alizadeh Scribe: Xuan Li 9/17/2001 1 Overview We survey the basic notions of cones and cone-lp and give several
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationStrong Duality and Minimal Representations for Cone Optimization
Strong Duality and Minimal Representations for Cone Optimization Levent Tunçel Henry Wolkowicz August 2008, revised: December 2010 University of Waterloo Department of Combinatorics & Optimization Waterloo,
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationExample: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma
4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationMidterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane
More informationA QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING
A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More informationA PRIMER ON SESQUILINEAR FORMS
A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form
More informationCopositive Plus Matrices
Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their
More information