On the Sandwich Theorem and an approximation algorithm for MAX CUT
1 On the Sandwich Theorem and an approximation algorithm for MAX CUT. Kees Roos, Technische Universiteit Delft, Faculteit Elektrotechniek, Wiskunde en Informatica. C.Roos@its.tudelft.nl. URL: roos. WI4 060, February A.D. 2004
2 Outline: Conic Optimization (CO); Duality theory for CO; Semidefinite Optimization (SDO); Some examples; Algorithms for SDO; Application to some combinatorial problems: Maximal clique, Graph coloring, Lovász sandwich theorem; Maximal cut problem: Semidefinite relaxation of MAX CUT, Result of Goemans and Williamson, Nemirovski's proof, Original proof; Concluding remarks; Some references. Optimization Group 1
3 General conic optimization. A general conic optimization problem is a problem in the conic form min_{x ∈ R^n} { c^T x : Ax − b ∈ K }, where K is a closed convex pointed cone. We restrict ourselves to the cases where K is either the nonnegative orthant R^m_+ (linear inequality constraints); the Lorentz (or second-order, or ice-cream) cone L^m (conic quadratic constraints); the semidefinite cone S^m_+, i.e. the cone of positive semidefinite matrices of size m × m (Linear Matrix Inequality (LMI) constraints); or a direct product of such cones. In all these cases the above problem can be solved efficiently by an interior-point method.
4 Conic optimization. Conic optimization addresses the problem of minimizing a linear objective function over the intersection of an affine set and a convex cone. The general form is (COP) min_{x ∈ R^n} { c^T x : Ax − b ∈ K }. The convex cone K is a subset of R^m. The objective function c^T x is linear. Ax − b represents an affine function from R^n to R^m; usually A is given as an m × n (constraint) matrix, and b ∈ R^m. Two important facts: many nonlinear problems can be modelled in this way, and under some weak conditions on the underlying cone K, conic optimization problems can be solved efficiently. The easiest and best-known case occurs when the cone K is the nonnegative orthant of R^m, i.e. when K = R^m_+: (LO) min_{x ∈ R^n} { c^T x : Ax − b ∈ R^m_+ }. This is nothing but one of the standard forms of the well-known Linear Optimization (LO) problem. Thus it becomes clear that LO is a special case of CO.
5 Convex cones. A subset K of R^m is a cone if a ∈ K, λ ≥ 0 ⇒ λa ∈ K, (1) and the cone K is a convex cone if moreover a, a′ ∈ K ⇒ a + a′ ∈ K. (2) We will impose three more conditions on K. Recall that CO is a generalization of LO. To obtain duality results for CO similar to those for LO, the cone K should inherit three more properties from the cone underlying LO, namely the nonnegative orthant: R^m_+ = { x = (x_1, ..., x_m)^T : x_i ≥ 0, i = 1, ..., m }. This cone is called the linear cone.
6 Convex cones (cont.). The linear cone is not just a convex cone; it is also pointed, it is closed and it has a nonempty interior. These are exactly the three properties we need. We describe these properties now. A convex cone K is called pointed if it does not contain a line. This property can be stated equivalently as a ∈ K, −a ∈ K ⇒ a = 0. (3) A convex cone K is called closed if it is closed under taking limits: a_i ∈ K (i = 1, 2, ...), a = lim_{i→∞} a_i ⇒ a ∈ K. (4) Finally, denoting the interior of a cone K as int K, we will require that int K ≠ ∅. (5) This means that there exists a vector (in K) such that a ball of positive radius centered at the vector is contained in K.
7 Convex cones (cont.). In conic optimization we only deal with cones K that enjoy all of the above properties. So we always assume that K is a pointed and closed convex cone with a nonempty interior. Apart from the linear cone, two other relevant examples of such cones are: 1. The Lorentz cone L^m = { x ∈ R^m : x_m ≥ √(x_1² + ... + x_{m−1}²) }. This cone is also called the second-order cone, or the ice-cream cone. 2. The positive semidefinite cone S^m_+. This cone lives in the space S^m of m × m symmetric matrices (equipped with the Frobenius inner product ⟨A, B⟩ = Tr(AB) = Σ_{i,j} A_ij B_ij) and consists of all m × m matrices A which are positive semidefinite, i.e., S^m_+ = { A ∈ S^m : x^T A x ≥ 0, ∀x ∈ R^m }. We assume that the cone K in (COP) is a direct product of the form K = K_1 × ... × K_m, where each K_i is either a linear, a Lorentz or a semidefinite cone.
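As a quick numeric sanity check (a Python/NumPy sketch, not part of the slides), membership in the two cones above can be tested directly: the Lorentz condition by comparing the last coordinate with the norm of the rest, and positive semidefiniteness by checking eigenvalues.

```python
import numpy as np

def in_lorentz(x, tol=1e-9):
    """Check x in L^m: x_m >= sqrt(x_1^2 + ... + x_{m-1}^2)."""
    return bool(x[-1] >= np.linalg.norm(x[:-1]) - tol)

def in_psd(A, tol=1e-9):
    """Check symmetric A in S^m_+: all eigenvalues nonnegative."""
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

# (1, 2, 3) lies in L^3 since 3 >= sqrt(1 + 4) ~ 2.236; (3, 3, 1) does not.
print(in_lorentz(np.array([1.0, 2.0, 3.0])))       # True
print(in_lorentz(np.array([3.0, 3.0, 1.0])))       # False
# [[2,1],[1,2]] has eigenvalues 1 and 3, so it is PSD.
print(in_psd(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
```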
8 Conic Duality. Before we discuss the duality theory for conic optimization, we need to define the dual cone of a convex cone K: K* = { λ ∈ R^m : λ^T a ≥ 0, ∀a ∈ K }. Theorem 1. Let K ⊆ R^m be a nonempty cone. Then (i) the set K* is a closed convex cone; (ii) if K has a nonempty interior (i.e., int K ≠ ∅) then K* is pointed; (iii) if K is a closed convex pointed cone, then int K* ≠ ∅; (iv) if K is a closed convex cone, then so is K*, and the cone dual to K* is K itself. Corollary 1. If K ⊆ R^m is a closed pointed convex cone with nonempty interior then so is K*, and vice versa. One may easily verify that the linear, the Lorentz and the semidefinite cone are self-dual. Since K = K_1 × ... × K_m ⇒ K* = K_1* × ... × K_m*, any direct product of linear, Lorentz and semidefinite cones is self-dual.
9 Conic Duality. Now we are ready to deal with the problem dual to a conic problem (COP). We start by observing that whenever x is a feasible solution for (COP), the definition of K* implies λ^T (Ax − b) ≥ 0 for all λ ∈ K*, and hence x satisfies the scalar inequality λ^T Ax ≥ λ^T b, ∀λ ∈ K*. It follows that whenever λ ∈ K* satisfies the relation A^T λ = c, (6) then one has c^T x = (A^T λ)^T x = λ^T Ax ≥ λ^T b = b^T λ for all x feasible for (COP). So, if λ ∈ K* satisfies (6), then the quantity b^T λ is a lower bound for the optimal value of (COP). The best lower bound obtainable in this way is the optimal value of the problem (COD) max_{λ ∈ R^m} { b^T λ : A^T λ = c, λ ∈ K* }. By definition, (COD) is the dual problem of (COP). Using Theorem 1 (iv), one easily verifies that the duality is symmetric: the dual problem is conic and the problem dual to the dual problem is the primal problem.
10 Conic Duality. Indeed, from the construction of the dual problem it immediately follows that we have the weak duality property: if x is feasible for (COP) and λ is feasible for (COD), then c^T x − b^T λ ≥ 0. The crucial question is, of course, whether we have equality of the optimal values whenever (COP) and (COD) have optimal values. Different from the LO case, however, this is in general not the case, unless some additional conditions are satisfied. The following theorem clarifies the situation. We call the problem (COP) solvable if it has a (finite) optimal value, and this value is attained. Before stating the theorem it may be worth pointing out that a finite optimal value is not necessarily attained. For example, the problem min { x : [ x 1 ; 1 y ] ⪰ 0, x, y ∈ R } has optimal value 0, but one may easily verify that this value is not attained. We need one more definition: if there exists an x such that Ax − b ∈ int K then we say that (COP) is strictly feasible. We have similar, and obvious, definitions for (COD) being solvable and strictly feasible, respectively.
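The weak duality property can be illustrated numerically (a Python/NumPy sketch, with a made-up tiny instance): for K = R²_+ any primal-feasible x and dual-feasible λ must satisfy c^T x ≥ b^T λ.

```python
import numpy as np

# Tiny LO instance in conic form: min { c^T x : Ax - b in K }, K = R^2_+ .
A = np.eye(2)
b = np.array([1.0, 1.0])
c = np.array([1.0, 2.0])

x = np.array([2.0, 2.0])     # primal feasible: Ax - b = (1, 1) in K
lam = np.array([1.0, 2.0])   # dual feasible: A^T lam = c, lam in K* = K

assert np.all(A @ x - b >= 0) and np.all(lam >= 0)
assert np.allclose(A.T @ lam, c)

gap = c @ x - b @ lam        # weak duality guarantees gap >= 0
print(gap)                   # 3.0
```

Here b^T λ = 3 certifies that no primal-feasible point can do better than 3, exactly as argued above.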
11 Conic Duality Theorem. Theorem 2. Let the primal problem (COP) and its dual problem (COD) be as given above. Then one has: (i) a. If (COP) is bounded below and strictly feasible, then (COD) is solvable and the respective optimal values are equal. b. If (COD) is bounded above and strictly feasible, then (COP) is solvable, and the respective optimal values are equal. (ii) Suppose that at least one of the two problems (COP) and (COD) is bounded and strictly feasible. Then a primal-dual feasible pair (x, λ) consists of optimal solutions to the respective problems a. if and only if b^T λ = c^T x (zero duality gap), b. if and only if λ^T [Ax − b] = 0 (complementary slackness). This result is slightly weaker than the duality theorem for LO: in the LO case the theorem holds with "feasible" everywhere instead of "strictly feasible". The adjective "strictly" cannot be omitted here, however. For a more extensive discussion and some appropriate counterexamples we refer to the book of Ben-Tal and Nemirovski.
12 Bad duality example. Consider the following conic problem with two variables x = (x_1, x_2)^T and the 3-dimensional ice-cream cone: min { x_2 : Ax − b = (x_1; x_2; x_1) ∈ L³ }. The problem is equivalent to the problem min { x_2 : √(x_1² + x_2²) ≤ x_1 }, i.e., to the problem min { x_2 : x_2 = 0, x_1 ≥ 0 }. The problem is clearly solvable, and its optimal set is the ray { x_1 ≥ 0, x_2 = 0 }. Now let us build the conic dual to our (solvable!) primal. Since the cone dual to an ice-cream cone is this ice-cream cone itself, the dual problem is max { 0 : λ_1 + λ_3 = 0, λ_2 = 1, λ ∈ L³ }. In spite of the fact that the primal problem is solvable, the dual is infeasible: indeed, assuming that λ is dual feasible, we have λ ∈ L³, which means that λ_3 ≥ √(λ_1² + λ_2²); since also λ_1 + λ_3 = 0, we come to λ_2 = 0, which contradicts the equality λ_2 = 1.
13 LO as a special case of SOCO. By definition, the m-dimensional Lorentz cone is given by L^m = { x ∈ R^m : x_m ≥ √(x_1² + ... + x_{m−1}²) }. Hence, the 1-dimensional Lorentz cone is given by L¹ = { x ∈ R : x ≥ 0 }. Thus it follows that R^n_+ = L¹ × L¹ × ... × L¹ (n times). As a consequence, every LO problem can be written as a SOCO problem.
14 SOCO as a special case of SDO. Recall that the m-dimensional Lorentz cone is given by L^m = { x ∈ R^m : x_m ≥ √(x_1² + ... + x_{m−1}²) }. One may easily verify that x ∈ L^m if and only if the arrow-shaped matrix [ x_m, x̄^T ; x̄, x_m I ] ∈ S^m_+, where x̄ = (x_1, ..., x_{m−1})^T and I is the identity matrix of size (m−1) × (m−1). The above matrix depends linearly on (the coordinates of) the vector x, and hence any SOCO constraint can be written as an SDO constraint.
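The arrow-matrix characterization is easy to verify numerically (a Python/NumPy sketch; `arrow` is an illustrative helper name, not from the slides): for random vectors, Lorentz-cone membership and positive semidefiniteness of the arrow matrix coincide.

```python
import numpy as np

def arrow(x):
    """Arrow matrix [[x_m, xbar^T], [xbar, x_m I]] for x in R^m."""
    xbar, xm = x[:-1], x[-1]
    m = len(x)
    M = np.zeros((m, m))
    M[0, 0] = xm
    M[0, 1:] = xbar
    M[1:, 0] = xbar
    M[1:, 1:] = xm * np.eye(m - 1)
    return M

def in_lorentz(x):
    return x[-1] - np.linalg.norm(x[:-1]) >= -1e-9

def psd(M):
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=4)
    assert in_lorentz(x) == psd(arrow(x))
print("arrow-matrix characterization verified on random samples")
```

The agreement is no accident: the eigenvalues of the arrow matrix are x_m − ‖x̄‖, x_m + ‖x̄‖ and x_m (with multiplicity m − 2), so nonnegativity of all of them is exactly the Lorentz condition.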
15 SOCO as a special case of SDO (proof). Let a ∈ R^{m−1} and α ∈ R. Then (a, α) ∈ L^m if and only if ‖a‖ ≤ α. We need to show that this holds if and only if [ α, a^T ; a, αI ] ⪰ 0, where I denotes the (m−1) × (m−1) identity matrix. The latter is equivalent to (β, b^T) [ α, a^T ; a, αI ] (β; b) ≥ 0, ∀β ∈ R, ∀b ∈ R^{m−1}. Thus we obtain
[ α, a^T ; a, αI ] ⪰ 0
⟺ αβ² + 2βb^T a + αb^T b ≥ 0, ∀β ∈ R, ∀b ∈ R^{m−1}
⟺ α²β² + 2αβb^T a + α²b^T b ≥ 0, ∀β ∈ R, ∀b ∈ R^{m−1}, α ≥ 0
⟺ (αβ + b^T a)² + α²b^T b − (b^T a)² ≥ 0, ∀β ∈ R, ∀b ∈ R^{m−1}, α ≥ 0
⟺ α²b^T b − (b^T a)² ≥ 0, ∀b ∈ R^{m−1}, α ≥ 0
⟺ α²a^T a − (a^T a)² ≥ 0, α ≥ 0
⟺ α² ≥ a^T a, α ≥ 0
⟺ ‖a‖ ≤ α.
This proves the claim.
16 Convex quadratic optimization (CQO) as a special case of SOCO. Any convex quadratic problem with quadratic constraints can be written as min { f_0(x) : f_i(x) ≤ 0, i = 1, ..., m }, where f_i(x) = (B_i x + b_i)^T (B_i x + b_i) − c_i^T x − d_i, i = 0, 1, ..., m. The objective can be made linear by introducing an extra variable τ such that f_0(x) ≤ τ, and minimizing τ. Redefining f_0(x) := f_0(x) − τ, the problem becomes min { τ : f_i(x) ≤ 0, i = 0, ..., m }. So it suffices to show that every convex quadratic constraint f_i(x) ≤ 0 can be written as a SOC constraint. Omitting the index i, such a constraint has the form (Bx + b)^T (Bx + b) ≤ c^T x + d, or ‖Bx + b‖² ≤ c^T x + d. This is not yet a SOC constraint! A SOC constraint has the form ‖Gx + g‖ ≤ p^T x + q ⟺ ( Gx + g ; p^T x + q ) ∈ L^k.
17 Convex quadratic optimization (CQO) as a special case of SOCO (cont.). ‖Bx + b‖² ≤ c^T x + d. To put this constraint in the SOC form we observe that c^T x + d = [ (c^T x + d + 1)/2 ]² − [ (c^T x + d − 1)/2 ]². Thus we have ‖Bx + b‖² ≤ c^T x + d ⟺ ‖Bx + b‖² + [ (c^T x + d − 1)/2 ]² ≤ [ (c^T x + d + 1)/2 ]². This is equivalent, for a suitable k (k = rowsize(B) + 2), to ( Bx + b ; (c^T x + d − 1)/2 ; (c^T x + d + 1)/2 ) ∈ L^k.
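Both the splitting identity t = ((t+1)/2)² − ((t−1)/2)² and the resulting SOC reformulation can be checked numerically (a Python/NumPy sketch; `quad_ok` and `soc_ok` are illustrative names for the two sides of the equivalence).

```python
import numpy as np

# The splitting identity t = ((t+1)/2)^2 - ((t-1)/2)^2, checked on a grid.
for t in np.linspace(-5.0, 5.0, 101):
    assert abs(((t + 1) / 2) ** 2 - ((t - 1) / 2) ** 2 - t) < 1e-12

# Hence ||z||^2 <= t  <=>  ||(z, (t-1)/2)|| <= (t+1)/2, a SOC constraint.
def quad_ok(z, t):
    return bool(z @ z <= t + 1e-12)

def soc_ok(z, t):
    return bool(np.hypot(np.linalg.norm(z), (t - 1) / 2) <= (t + 1) / 2 + 1e-12)

rng = np.random.default_rng(1)
for _ in range(100):
    z, t = rng.normal(size=3), rng.normal() * 4
    assert quad_ok(z, t) == soc_ok(z, t)
print("splitting identity and SOC reformulation agree")
```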
18 More on Semidefinite Optimization. A semidefinite optimization problem can be written in the form (SD) d = sup { b^T y : A(y) ⪯ C }, where A(y) := y_1 A_1 + ... + y_m A_m; A_i = A_i^T ∈ R^{n×n}, 1 ≤ i ≤ m; C = C^T ∈ R^{n×n}; and A(y) ⪯ C means: C − A(y) is positive semidefinite (PSD).
19 Convex quadratic optimization (CQO) as a special case of SDO. For z ∈ R^m and ρ ∈ R one has z^T z ≤ ρ ⟺ [ I, z ; z^T, ρ ] ⪰ 0. We have seen before that any convex quadratic constraint has the form ‖Bx + b‖² ≤ c^T x + d. By the above statement (which is a simple version of the so-called Schur complement lemma) this can be equivalently expressed as the SD constraint [ I, Bx + b ; (Bx + b)^T, c^T x + d ] ⪰ 0.
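This simple version of the Schur complement lemma can be spot-checked numerically (a Python/NumPy sketch; `schur_block` is an illustrative helper): the block matrix [ I, z ; z^T, ρ ] is PSD exactly when z^T z ≤ ρ.

```python
import numpy as np

def schur_block(z, rho):
    """Assemble [[I, z], [z^T, rho]] for z in R^m."""
    m = len(z)
    M = np.eye(m + 1)
    M[:m, m] = z
    M[m, :m] = z
    M[m, m] = rho
    return M

def psd(M):
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

rng = np.random.default_rng(2)
for _ in range(200):
    z, rho = rng.normal(size=3), rng.normal() * 3
    assert psd(schur_block(z, rho)) == bool(z @ z <= rho + 1e-9)
print("Schur complement lemma verified on random samples")
```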
20 Semidefinite duality. Primal problem: p = inf { Tr(CX) : Tr(A_i X) = b_i (∀i), X ⪰ 0 }. Dual problem: d = sup { b^T y : Σ_{i=1}^m y_i A_i + S = C, S ⪰ 0 }. Duality gap: Tr(CX) − b^T y = Tr(SX) ≥ 0. Central path: SX = µI, µ > 0. The central path exists if and only if the primal and the dual problem are strictly feasible (IPC); then both problems have optimal solutions and the duality gap is 0 at optimality.
21 Algorithms for SDO: Dikin-type affine scaling approach. Primal-dual search directions (ΔX, Δy, ΔZ) must satisfy Tr(A_i ΔX) = 0, i = 1, ..., m; Σ_{i=1}^m Δy_i A_i + ΔZ = 0. ΔX and ΔZ are orthogonal: Tr(ΔX ΔZ) = 0. Duality gap after the step: Tr((X + ΔX)(Z + ΔZ)). We minimize this duality gap over the so-called Dikin ellipsoid. The search directions follow by solving ΔX + D ΔZ D = − XZX / ( Tr((XZ)²) )^{1/2}, subject to the feasibility conditions. D is the so-called Nesterov-Todd (NT) scaling matrix D := Z^{−1/2} ( Z^{1/2} X Z^{1/2} )^{1/2} Z^{−1/2}. We assume that the matrices A_i are linearly independent.
22 Measure of centrality. The eigenvalues of XZ are real and positive if X, Z ≻ 0, since XZ ∼ X^{−1/2} (XZ) X^{1/2} = X^{1/2} Z X^{1/2} ≻ 0, where ∼ denotes the similarity relation. The proximity to the central path is measured by κ(XZ) := λ_max(XZ) / λ_min(XZ), where λ_max(XZ) denotes the largest eigenvalue of XZ and λ_min(XZ) the smallest.
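The similarity argument above is easy to confirm numerically (a Python/NumPy sketch with randomly generated positive definite X and Z; `sym_sqrt` is an illustrative helper): the eigenvalues of the nonsymmetric product XZ agree with those of the symmetric matrix X^{1/2} Z X^{1/2}, hence are real and positive.

```python
import numpy as np

def sym_sqrt(A):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

rng = np.random.default_rng(3)
B, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
X = B @ B.T + np.eye(3)   # X positive definite
Z = C @ C.T + np.eye(3)   # Z positive definite

# XZ is similar to the symmetric matrix X^{1/2} Z X^{1/2} > 0,
# so its eigenvalues are real and positive.
S = sym_sqrt(X)
lam_sym = np.linalg.eigvalsh(S @ Z @ S)   # ascending, all positive
lam = np.sort(np.linalg.eigvals(X @ Z).real)
assert np.allclose(lam, lam_sym) and np.all(lam_sym > 0)

kappa = lam_sym[-1] / lam_sym[0]          # kappa(XZ)
print(kappa >= 1.0)   # True by construction
```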
23 Primal-dual Dikin affine-scaling algorithm.
Input: a strictly feasible pair (X⁰, Z⁰); a step size parameter α > 0; an accuracy parameter ε > 0.
begin
X := X⁰; Z := Z⁰;
while Tr(XZ) ≥ ε do
begin
X := X + αΔX; y := y + αΔy; Z := Z + αΔZ;
end
end
Theorem 3. Let τ > 1 be such that κ(X⁰Z⁰) ≤ τ. If α = 1/(τ√n), the Dikin Step Algorithm stops after at most τnL iterations, where L := ln( Tr(X⁰Z⁰)/ε ). The output is a feasible primal-dual pair (X*, Z*) satisfying κ(X*Z*) ≤ τ and Tr(X*Z*) ≤ ε.
24 Primal-dual Newton direction. As before, the search directions (ΔX, Δy, ΔZ) must satisfy Tr(A_i ΔX) = 0, i = 1, ..., m; Σ_{i=1}^m Δy_i A_i + ΔZ = 0. We want (X + ΔX)(Z + ΔZ) = µI. Omitting the quadratic term this leads to XZ + ΔX Z + X ΔZ = µI, which can be rewritten as ΔX + X ΔZ Z^{−1} = µZ^{−1} − X. Note that ΔZ will be symmetric, but ΔX possibly not! To overcome this difficulty we use instead the equation ΔX + D ΔZ D = µZ^{−1} − X, and we obtain the so-called Nesterov-Todd (NT) directions. Here D is the same NT scaling matrix as introduced before.
25 Proximity measure. Let (X, Z) be a strictly feasible pair. We measure the distance of this pair to the µ-center by δ(X, Z; µ) := ½ ‖ √µ V^{−1} − V/√µ ‖, where V is determined by µV² = D^{−1/2} X Z D^{1/2}. Theorem 4. If δ := δ(X, Z; µ) < 1 then the full Newton step is strictly feasible and the duality gap attains its target value nµ. Moreover, δ(X⁺, Z⁺; µ) ≤ δ² / √(2(1 − δ²)).
26 Algorithm with full Newton steps.
Input: a proximity parameter τ, 0 ≤ τ < 1; an accuracy parameter ε > 0; X⁰, Z⁰, µ⁰ > 0 such that δ(X⁰, Z⁰; µ⁰) ≤ τ; a barrier update parameter θ, 0 < θ < 1.
begin
X := X⁰; Z := Z⁰; µ := µ⁰;
while nµ ≥ (1 − θ)ε do
begin
X := X + ΔX; Z := Z + ΔZ; µ := (1 − θ)µ;
end
end
Theorem 5. If τ = 1/√2 and θ = 1/(2√n), then the above algorithm with full NT steps requires at most 2√n log(nµ⁰/ε) iterations. The output is a primal-dual pair (X, Z) such that Tr(XZ) ≤ ε.
27 Approximation algorithm for discrete problems, via SDO relaxation. In a graph G = (V, E), find a maximal clique, i.e., a subset C ⊆ V such that i, j ∈ C (i ≠ j) ⇒ {i, j} ∈ E, with |C| maximal. Linear model: ω(G) := max e^T x s.t. x_i + x_j ≤ 1, {i, j} ∉ E (i ≠ j); x_i ∈ {0, 1}, i ∈ V. Quadratic model: ω(G) = max e^T x s.t. x_i x_j = 0, {i, j} ∉ E (i ≠ j); x_i ∈ {0, 1}, i ∈ V. Semidefinite relaxation: ω(G) ≤ ϑ(G) := max Tr(ee^T X) = e^T X e s.t. X_ij = 0, {i, j} ∉ E (i ≠ j); Tr(X) = 1, X ⪰ 0.
28 Lovász sandwich theorem (1979). Semidefinite dual for ϑ(G): ϑ(G) = min λ s.t. Y + ee^T ⪯ λI; Y_ij = 0, {i, j} ∈ E; Y_ii = 0, i ∈ V. A coloring of G with k colors induces a feasible Y with λ = k, yielding ϑ(G) ≤ k. Thus we obtain the sandwich theorem of Lovász: ω(G) ≤ ϑ(G) ≤ χ(G), where ω(G) and χ(G) are the clique and chromatic numbers of G. The name "sandwich theorem" comes from the interesting paper of D.E. Knuth, The sandwich theorem, The Electronic Journal of Combinatorics, Vol. 1, 1-48. The next two sheets provide proofs of both inequalities based on semidefinite duality. The first proof is more or less classical, but the second proof seems to be new. If G is perfect then ω(G) = ϑ(G) = χ(G). (Grötschel, Lovász, Schrijver, 1981)
29 Proof of the first inequality. Let V = {v_1, v_2, ..., v_n}, and let C = {v_1, v_2, ..., v_k} be a clique in G = (V, E) of size k, 1 ≤ k ≤ n. So the vertices v_i ∈ C are mutually connected. Define x_C = (1, ..., 1, 0, ..., 0)^T (k ones followed by n − k zeros) and X := (1/k) x_C x_C^T. Then X satisfies X_ij = 0, {i, j} ∉ E (i ≠ j), and X ⪰ 0, Tr(X) = 1. Since e^T X e = k, ϑ(G) ≥ k.
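The clique certificate above can be checked mechanically (a Python/NumPy sketch; the five-node graph and the clique {0, 1, 2} are made-up examples): the matrix X = (1/k) x_C x_C^T is feasible for the ϑ relaxation and attains e^T X e = k.

```python
import numpy as np

n = 5
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}   # {0,1,2} is a clique
C = [0, 1, 2]
k = len(C)

x_C = np.zeros(n)
x_C[C] = 1.0
X = np.outer(x_C, x_C) / k

# Feasibility for the theta relaxation:
for i in range(n):
    for j in range(i + 1, n):
        if (i, j) not in edges:
            assert X[i, j] == 0.0              # X_ij = 0 on non-edges
assert np.isclose(np.trace(X), 1.0)            # Tr(X) = 1
assert np.all(np.linalg.eigvalsh(X) >= -1e-12) # X >= 0

val = np.ones(n) @ X @ np.ones(n)
print(val)   # 3.0 = clique size, so theta(G) >= 3
```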
30 Proof of the second inequality. Let Γ = (C_1, C_2, ..., C_k) be a coloring of G = (V, E) with k colors. So the C_i are vertex-disjoint cocliques. Let γ_i := |C_i|, 1 ≤ i ≤ k. For i = 1, 2, ..., k define M_i := k ( J_{γ_i} − I_{γ_i} ), where J_{γ_i} denotes the all-one matrix of size γ_i × γ_i and I_{γ_i} the identity matrix of the same size. Then the block-diagonal matrix Y = diag(M_1, M_2, ..., M_k) is dual feasible, with λ = k. Hence ϑ(G) ≤ k.
31 The maximal cut problem. Input: a graph G = (V, E) with rational nonnegative weights a_vw for {v, w} ∈ E. We take a_vw = 0 if {v, w} ∉ E. Goal: partition the nodes into two classes so as to maximize the sum of the weights of the edges whose nodes are in different classes (the weight of the cut). [Figure: a small weighted example graph on the nodes a, b, c, d, e.]
32 The maximal cut problem (cont.). [Figure: the example graph with a cut indicated.] The maximum cut problem (MAX CUT) is NP-hard, even if a_vw ∈ {0, 1} for all {v, w} ∈ E. N.B. The problem is solvable in polynomial time if the graph is planar.
33 Relevance of the problem. In a mathematical sense the maximum cut problem is interesting in itself. But there exist interesting applications in a wide variety of domains: finding the ground state of magnetic particles subject to a field in the Ising spin glass model; minimizing the number of vias (holes) drilled in a two-sided circuit board; solving network design problems; finding the radius of nonsingularity of a square matrix.
34 Solution approaches. Several approaches: integer/linear optimization; enumerative techniques (e.g., branch and cut); heuristics (e.g., local search methods); approximation algorithms.
35 Approximation algorithms. An α-approximation algorithm for an NP-hard optimization problem runs in polynomial time and returns a feasible solution with value not worse than a factor α from optimal (with α < 1 in case of a maximization and α > 1 in case of a minimization problem). For randomized α-approximation algorithms the expected value of the solution is within a factor α of optimal. A 1/2-approximation algorithm of Sahni and Gonzalez (1976) was the best known for MAX CUT for almost 20 years. Goemans and Williamson (1995) improved this ratio from 1/2 to 0.878. If a 0.94-approximation algorithm exists then P = NP. This has been shown by Trevisan, Sorkin, Sudan and Williamson (1996) and by Håstad (1997).
36 Quadratic model for MAX CUT. Let (S, T) be any cut, i.e., a partitioning of the set V of the nodes into two disjoint classes S and T. Assuming |V| = n, the cut can be identified with an n-dimensional {−1, 1}-vector x as follows: x_v = 1 if v ∈ S and x_v = −1 if v ∈ T, for v ∈ V. The weight of the cut (S, T) is then given by (1/4) Σ_{v,w ∈ V} a_vw (1 − x_v x_w). We conclude that MAX CUT can be posed as follows: max_x { (1/4) Σ_{v,w ∈ V} a_vw (1 − x_v x_w) : x_v² = 1, v ∈ V }.
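The identity between the combinatorial cut weight and the quadratic expression can be verified on a small instance (a Python/NumPy sketch; the weight matrix is a made-up example): each cut edge has 1 − x_v x_w = 2 and is counted twice in the double sum, so the factor 1/4 recovers the weight exactly.

```python
import numpy as np

# Weighted graph as a symmetric matrix A (a_vw = 0 on non-edges).
A = np.array([[0, 5, 2, 0],
              [5, 0, 3, 1],
              [2, 3, 0, 4],
              [0, 1, 4, 0]], dtype=float)

def cut_weight_direct(A, S):
    """Sum of weights of edges with exactly one endpoint in S."""
    n = len(A)
    return sum(A[v, w] for v in range(n) for w in range(v + 1, n)
               if (v in S) != (w in S))

def cut_weight_quadratic(A, x):
    """(1/4) sum_{v,w} a_vw (1 - x_v x_w) for a {-1,+1} vector x."""
    return 0.25 * np.sum(A * (1 - np.outer(x, x)))

S = {0, 2}
x = np.array([1.0, -1.0, 1.0, -1.0])   # +1 on S, -1 on its complement
print(cut_weight_direct(A, S), cut_weight_quadratic(A, x))   # 12.0 12.0
```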
37 Semidefinite relaxation of MAX CUT. max_x { (1/4) Σ_{v,w ∈ V} a_vw (1 − x_v x_w) : x_v² = 1, v ∈ V }. Defining the matrix X = xx^T, we have X_vw = x_v x_w, and the matrix X is a symmetric positive semidefinite matrix of rank 1. Thus we can reformulate the problem as max_X { (1/4) Σ_{v,w ∈ V} a_vw (1 − X_vw) : X ⪰ 0, rank(X) = 1, X_vv = 1, v ∈ V }. Omitting the rank constraint we arrive at the following relaxation: max_X { (1/4) Σ_{v,w ∈ V} a_vw (1 − X_vw) : X ⪰ 0, X_vv = 1, v ∈ V }. The optimal solutions of this problem are the same as for the SDO problem min_X { Σ_{v,w ∈ V} a_vw X_vw = Tr(AX) : X ⪰ 0, X_vv = 1, v ∈ V }, where A = (a_vw). Note that X_vv = 1 iff Tr(E_v X) = 1, where (E_v)_vv = 1 and all other elements of E_v are zero.
38 Result of Goemans and Williamson.
OPT = max_x { (1/4) Σ_{v,w ∈ V} a_vw (1 − x_v x_w) : x_v² = 1, v ∈ V } (7)
SDP = max_X { (1/4) Σ_{v,w ∈ V} a_vw (1 − X_vw) : X ⪰ 0, X_vv = 1, v ∈ V } (8)
The relaxation (8) can be used in an ingenious way to obtain an approximation algorithm for the maximal cut problem. It is based on Theorem 6: with α = 0.878, one has α SDP ≤ OPT ≤ SDP, and on a rounding procedure that generates a cut whose expected weight is at least α SDP. (Goemans & Williamson, 1994) S.D.G.
39 Nemirovski's proof of the Goemans-Williamson bound. Theorem 6. One has α SDP ≤ OPT ≤ SDP, with α = 0.878. Proof: The right inequality is obvious. To get the left-hand side inequality, let X = [X_vw] be an optimal solution to the SD relaxation. Since X is positive semidefinite, it is the covariance matrix of a Gaussian random vector ξ with zero mean, so that E{ξ_v ξ_w} = X_vw. Now consider the random vector ζ = sign[ξ] comprised of the signs of the entries in ξ. A realization of ζ is almost surely a vector with coordinates ±1, i.e., it is a cut. A straightforward computation demonstrates that E{ζ_v ζ_w} = (2/π) arcsin(X_vw). It follows that the expected weight of the cut vector ζ is given by (1/4) E { Σ_{v,w ∈ V} a_vw (1 − ζ_v ζ_w) } = (1/4) Σ_{v,w ∈ V} a_vw ( 1 − (2/π) arcsin(X_vw) ) = (1/4) Σ_{v,w ∈ V} a_vw ( (2/π) arccos(X_vw) ); we used that arccos(t) + arcsin(t) = π/2 for −1 ≤ t ≤ 1. Now one may easily verify that if −1 ≤ t ≤ 1 then (2/π) arccos(t) ≥ α(1 − t), α = 0.878. Using also a_vw ≥ 0, this implies that (1/4) E { Σ_{v,w ∈ V} a_vw (1 − ζ_v ζ_w) } ≥ (α/4) Σ_{v,w ∈ V} a_vw (1 − X_vw) = α SDP. The left-hand side in this inequality, by evident reasons, is at most OPT. Thus we have proved that OPT ≥ α SDP.
40 The inequality (2/π) arccos(t) ≥ α(1 − t), α = 0.878. [Figure: graphs of (2/π) arccos(t) and α(1 − t) for t ∈ [−1, 1].]
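The plotted inequality can also be verified numerically (a Python/NumPy sketch): a grid search over t locates the minimum of the ratio (2/π) arccos(t) / (1 − t), which is the Goemans-Williamson constant.

```python
import numpy as np

# Grid search for alpha = min over t in [-1, 1) of (2/pi) * arccos(t) / (1 - t).
t = np.linspace(-1.0, 0.999, 200001)
ratio = (2 / np.pi) * np.arccos(t) / (1 - t)
alpha = ratio.min()
print(0.878 < alpha < 0.879)   # True: the constant is ~0.87856

# The inequality itself holds on the whole grid with alpha = 0.878:
assert np.all((2 / np.pi) * np.arccos(t) >= 0.878 * (1 - t))
```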
41 Original proof of the Goemans-Williamson bound. Theorem 6. One has α SDP ≤ OPT ≤ SDP, with α = 0.878. Proof: As before, let X = [X_vw] be an optimal solution to the SD relaxation. Since X is positive semidefinite, we may write X = U^T U for some k × n matrix U, where k = rank(X). Let u_1, ..., u_n denote the columns of U. Then we have u_v^T u_w = X_vw for all v and w. Note that X_vv = 1 implies that ‖u_v‖ = 1 for each v ∈ V. Let r ∈ R^k be a randomly chosen unit vector in R^k. Define a cut vector ζ: ζ_v = +1 if r^T u_v ≥ 0, and ζ_v = −1 if r^T u_v < 0. What is the expected weight of the cut corresponding to ζ? We claim that Pr(ζ_v ζ_w = −1) = (1/π) arccos(u_v^T u_w) = (1/π) arccos(X_vw). Hence one has E(ζ_v ζ_w) = ( 1 − (1/π) arccos(X_vw) ) − (1/π) arccos(X_vw) = 1 − (2/π) arccos(X_vw) = (2/π) arcsin(X_vw), where we used again that arccos(t) + arcsin(t) = π/2 for −1 ≤ t ≤ 1. Hence (1/4) E { Σ_{v,w ∈ V} a_vw (1 − ζ_v ζ_w) } = (1/4) Σ_{v,w ∈ V} a_vw ( 1 − (2/π) arcsin(X_vw) ) = (1/4) Σ_{v,w ∈ V} a_vw ( (2/π) arccos(X_vw) ). From here we proceed as in Nemirovski's proof.
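The rounding step of this proof can be sketched in a few lines (a Python/NumPy illustration, not the authors' code; `gw_round` and the rank-one test matrix are made-up names/examples): factor X, pick a random direction, and sign the projections.

```python
import numpy as np

def gw_round(X, rng):
    """One random-hyperplane rounding of a feasible X (X >= 0, diag = 1)."""
    w, Q = np.linalg.eigh(X)
    U = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None)))  # rows are the u_v, X = U U^T
    r = rng.normal(size=U.shape[1])                  # random direction
    zeta = np.where(U @ r >= 0, 1.0, -1.0)           # cut vector in {-1,+1}^n
    return zeta

# A feasible X with unit diagonal (here rank one, built from an actual cut).
x = np.array([1.0, -1.0, 1.0, -1.0])
X = np.outer(x, x)
rng = np.random.default_rng(4)
zeta = gw_round(X, rng)
print(sorted(set(zeta.tolist())))   # entries lie in {-1.0, 1.0}
```

Note the small design choice: here the rows of U play the role of the columns u_v on the slide, which is equivalent since X = U U^T still reproduces the inner products u_v^T u_w = X_vw.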
42 Geometric pictures related to the Goemans-Williamson proof. [Figure: unit vectors u_v, u_w at angle θ_vw and a random unit vector r; the region where r^T u_v and r^T u_w have opposite signs is shown in green.] The figure shows n unit vectors u_v and a random unit vector r. If u_v is on one side of the hyperplane r^T u = 0 then ζ_v = 1, and if u_v is on the other side of this hyperplane then ζ_v = −1. What is Pr(ζ_v ζ_w = −1)? In the green region r^T u_v and r^T u_w have opposite signs. If r is a random vector, this happens with probability 2θ_vw / (2π) = θ_vw / π = arccos(u_v^T u_w) / π.
43 Concluding remarks. The last decade gave rise to a revolution in algorithms and software for linear, convex and semidefinite optimization. SDO unifies a wide variety of optimization problems. SDO models can be solved efficiently. This opens the way to many new applications, including applications which could not be handled some years ago. Since 1995, the techniques discussed in this talk have led to numerous improved approximation algorithms for other combinatorial optimization problems, such as MAX SAT, MAX 2SAT, MAX 3SAT, MAX 4SAT, MAX k-CUT, k-coloring, scheduling, etc.
44 Some references.
A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications. Volume 1 of MPS/SIAM Series on Optimization. SIAM, Philadelphia, USA, 2001.
S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM 42, 1115-1145, 1995.
E. de Klerk. Aspects of Semidefinite Programming. Volume 65 in the series Applied Optimization. Kluwer, 2002.
L. Lovász and A. Schrijver. Cones of matrices and set functions and 0-1 optimization. SIAM Journal on Optimization 1, 166-190, 1991.
S. Sahni and T. Gonzalez. P-complete approximation problems. Journal of the ACM 23, 555-565, 1976.
Y. Nesterov and A. Nemirovskii. Interior Point Polynomial Algorithms in Convex Programming. SIAM, 1994.
L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review 38, 49-95, 1996.
More informationChapter 3. Some Applications. 3.1 The Cone of Positive Semidefinite Matrices
Chapter 3 Some Applications Having developed the basic theory of cone programming, it is time to apply it to our actual subject, namely that of semidefinite programming. Indeed, any semidefinite program
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationApplications of the Inverse Theta Number in Stable Set Problems
Acta Cybernetica 21 (2014) 481 494. Applications of the Inverse Theta Number in Stable Set Problems Miklós Ujvári Abstract In the paper we introduce a semidefinite upper bound on the square of the stability
More informationWhat can be expressed via Conic Quadratic and Semidefinite Programming?
What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationFour new upper bounds for the stability number of a graph
Four new upper bounds for the stability number of a graph Miklós Ujvári Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which
More informationSemidefinite programs and combinatorial optimization
Semidefinite programs and combinatorial optimization Lecture notes by L. Lovász Microsoft Research Redmond, WA 98052 lovasz@microsoft.com http://www.research.microsoft.com/ lovasz Contents 1 Introduction
More informationSelected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.
. Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationA full-newton step infeasible interior-point algorithm for linear programming based on a kernel function
A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with
More informationAn Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization
An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,
More informationSemidefinite Programming
Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationLagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual
More informationSEMIDEFINITE PROGRAM BASICS. Contents
SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n
More informationHandout 6: Some Applications of Conic Linear Programming
ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and
More informationCanonical Problem Forms. Ryan Tibshirani Convex Optimization
Canonical Problem Forms Ryan Tibshirani Convex Optimization 10-725 Last time: optimization basics Optimization terology (e.g., criterion, constraints, feasible points, solutions) Properties and first-order
More informationConic Linear Optimization and its Dual. yyye
Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
More informationA new primal-dual path-following method for convex quadratic programming
Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar
More information6.854J / J Advanced Algorithms Fall 2008
MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms
More informationCOURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion
COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F
More informationA solution approach for linear optimization with completely positive matrices
A solution approach for linear optimization with completely positive matrices Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria joint work with M. Bomze (Wien) and F.
More informationApproximation Algorithms
Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs
More informationA PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE
Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ
More informationCSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization
CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of
More informationLecture 4: January 26
10-725/36-725: Conve Optimization Spring 2015 Lecturer: Javier Pena Lecture 4: January 26 Scribes: Vipul Singh, Shinjini Kundu, Chia-Yin Tsai Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationRelations between Semidefinite, Copositive, Semi-infinite and Integer Programming
Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Author: Faizan Ahmed Supervisor: Dr. Georg Still Master Thesis University of Twente the Netherlands May 2010 Relations
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization
Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012
More informationLecture 3: Semidefinite Programming
Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality
More information1 Review of last lecture and introduction
Semidefinite Programming Lecture 10 OR 637 Spring 2008 April 16, 2008 (Wednesday) Instructor: Michael Jeremy Todd Scribe: Yogeshwer (Yogi) Sharma 1 Review of last lecture and introduction Let us first
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More informationA path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal
Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016
U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest
More informationLocal Self-concordance of Barrier Functions Based on Kernel-functions
Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi
More informationLecture 17: Primal-dual interior-point methods part II
10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS
More informationModeling with semidefinite and copositive matrices
Modeling with semidefinite and copositive matrices Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/24 Overview Node and Edge relaxations
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationOn self-concordant barriers for generalized power cones
On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca & Eilyan Bitar School of Electrical and Computer Engineering American Control Conference (ACC) Chicago,
More informationLecture 14: Optimality Conditions for Conic Problems
EE 227A: Conve Optimization and Applications March 6, 2012 Lecture 14: Optimality Conditions for Conic Problems Lecturer: Laurent El Ghaoui Reading assignment: 5.5 of BV. 14.1 Optimality for Conic Problems
More informationNotation and Prerequisites
Appendix A Notation and Prerequisites A.1 Notation Z, R, C stand for the sets of all integers, reals, and complex numbers, respectively. C m n, R m n stand for the spaces of complex, respectively, real
More informationORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016
ORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016 When in doubt on the accuracy of these notes, please cross check with the instructor
More informationSemidefinite Programming
Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i
More informationPrimal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond
Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Tor Myklebust Levent Tunçel September 26, 2014 Convex Optimization in Conic Form (P) inf c, x A(x) = b, x
More informationExample: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma
4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid
More informationDiscrete (and Continuous) Optimization Solutions of Exercises 2 WI4 131
Discrete (and Continuous) Optimization Solutions of Exercises 2 WI4 131 Kees Roos Technische Universiteit Delft Faculteit Electrotechniek, Wiskunde en Informatica Afdeling Informatie, Systemen en Algoritmiek
More informationFull Newton step polynomial time methods for LO based on locally self concordant barrier functions
Full Newton step polynomial time methods for LO based on locally self concordant barrier functions (work in progress) Kees Roos and Hossein Mansouri e-mail: [C.Roos,H.Mansouri]@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More information2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.
FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION G. GU, M. ZANGIABADI, AND C. ROOS Abstract. Some Jordan algebras were proved more than a decade ago to be an indispensable tool
More informationNew stopping criteria for detecting infeasibility in conic optimization
Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:
More informationPOLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS
POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationLecture 17 (Nov 3, 2011 ): Approximation via rounding SDP: Max-Cut
CMPUT 675: Approximation Algorithms Fall 011 Lecture 17 (Nov 3, 011 ): Approximation via rounding SDP: Max-Cut Lecturer: Mohammad R. Salavatipour Scribe: based on older notes 17.1 Approximation Algorithm
More informationA notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations
A notion of for Convex, Semidefinite and Extended Formulations Marcel de Carli Silva Levent Tunçel April 26, 2018 A vector in R n is integral if each of its components is an integer, A vector in R n is
More informationConvex and Semidefinite Programming for Approximation
Convex and Semidefinite Programming for Approximation We have seen linear programming based methods to solve NP-hard problems. One perspective on this is that linear programming is a meta-method since
More informationTamás Terlaky George N. and Soteria Kledaras 87 Endowed Chair Professor. Chair, Department of Industrial and Systems Engineering Lehigh University
5th SJOM Bejing, 2011 Cone Linear Optimization (CLO) From LO, SOCO and SDO Towards Mixed-Integer CLO Tamás Terlaky George N. and Soteria Kledaras 87 Endowed Chair Professor. Chair, Department of Industrial
More information