Semidefinite Programming


Chapter 2

Semidefinite Programming

Given $C \in \mathcal{M}^n$, $A_i \in \mathcal{M}^n$, $i = 1, 2, \ldots, m$, and $b \in \mathbb{R}^m$, the semidefinite programming problem is to find a matrix $X \in \mathcal{M}^n$ for the optimization problem:

$$(SDP)\quad \inf\ C \bullet X \quad \text{subject to}\quad A_i \bullet X = b_i,\ i = 1, 2, \ldots, m,\quad X \succeq 0.$$

Recall that the operation $\bullet$ is the matrix inner product $A \bullet B := \operatorname{tr} A^T B$. The notation $X \succeq 0$ means that $X$ is a positive semidefinite matrix, and $X \succ 0$ means that $X$ is a positive definite matrix. If a point $X \succ 0$ satisfies all equations in (SDP), it is called a (primal) strictly or interior feasible solution.

The dual problem to (SDP) can be written as:

$$(SDD)\quad \sup\ b^T y \quad \text{subject to}\quad \sum_{i=1}^m y_i A_i + S = C,\quad S \succeq 0,$$

which is analogous to the dual (LD) of LP. Here $y \in \mathbb{R}^m$ and $S \in \mathcal{M}^n$. If a point $(y, S \succ 0)$ satisfies all equations in (SDD), it is called a dual interior feasible solution.
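The (SDP)/(SDD) pair and the inner product $\bullet$ can be checked numerically. The following is a small sketch, not from the text, with a hand-built feasible pair ($n = 2$, $m = 1$; the data $C$, $A_1$, $b_1$ are illustrative) assuming NumPy:

```python
import numpy as np

# A tiny (SDP)/(SDD) pair with n = 2, m = 1:
#   inf  C . X   s.t.  A1 . X = b1,  X PSD
C  = np.diag([1.0, 2.0])
A1 = np.eye(2)
b1 = 2.0

def inner(A, B):
    # matrix inner product A . B = tr(A^T B)
    return np.trace(A.T @ B)

# Primal feasible: X = I satisfies A1 . X = tr(X) = 2 and X is PSD.
X = np.eye(2)
assert abs(inner(A1, X) - b1) < 1e-12

# Dual feasible: pick y = 0.5, then S = C - y*A1 = diag(0.5, 1.5) is PSD.
y = 0.5
S = C - y * A1
assert np.all(np.linalg.eigvalsh(S) >= 0)

# The duality gap C . X - b^T y equals X . S and is nonnegative.
gap = inner(C, X) - b1 * y
print(gap, inner(X, S))
```

Here the gap $C \bullet X - b^T y = X \bullet S = 2 \ge 0$, as the weak duality theorem below requires.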

Example 2.1 Let $P(y \in \mathbb{R}^m) = C + \sum_{i=1}^m y_i A_i$, where $C$ and $A_i$, $i = 1, \ldots, m$, are given symmetric matrices. The problem of minimizing the max-eigenvalue of $P(y)$ can be cast as an (SDD) problem.

In semidefinite programming, we minimize a linear function of a matrix in the positive semidefinite matrix cone subject to affine constraints. In contrast to the positive orthant cone of linear programming, the positive semidefinite matrix cone is non-polyhedral (or "non-linear"), but convex. So semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems, such as linear programming, quadratic programming, and convex quadratic minimization with convex quadratic constraints, and finds many applications in engineering, control, and combinatorial optimization.

We have several theorems analogous to Farkas' lemma.

Theorem 2.1 (Farkas' lemma in SDP) Let $A_i \in \mathcal{M}^n$, $i = 1, \ldots, m$, have rank $m$ (i.e., $\sum_{i=1}^m y_i A_i = 0$ implies $y = 0$) and $b \in \mathbb{R}^m$. Then, there exists a symmetric matrix $X \succeq 0$ with

$$A_i \bullet X = b_i,\ i = 1, \ldots, m,$$

if and only if $\sum_{i=1}^m y_i A_i \preceq 0$ and $\sum_{i=1}^m y_i A_i \neq 0$ imply $b^T y < 0$.

Note the difference between LP and SDP.

Theorem 2.2 (Weak duality theorem in SDP) Let $\mathcal{F}_p$ and $\mathcal{F}_d$, the feasible sets for the primal and dual, be non-empty. Then,

$$C \bullet X \geq b^T y \quad \text{where}\quad X \in \mathcal{F}_p,\ (y, S) \in \mathcal{F}_d.$$

The weak duality theorem is identical to that of (LP) and (LD).

Corollary 2.3 (Strong duality theorem in SDP) Let $\mathcal{F}_p$ and $\mathcal{F}_d$ be non-empty and have an interior. Then, $X$ is optimal for (SDP) if and only if the following conditions hold: i) $X \in \mathcal{F}_p$; ii) there is $(y, S) \in \mathcal{F}_d$; iii) $C \bullet X = b^T y$ or $X \bullet S = 0$.

Again note the difference between the above theorem and the strong duality theorem for LP.
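Example 2.1 rests on the identity $\lambda_{\max}(P) = \min\{t : tI - P \succeq 0\}$, which is what lets the max-eigenvalue problem be written in the (SDD) form. A quick numerical check of this identity on a fixed symmetric matrix (the matrix $P$ below is illustrative, not from the text), assuming NumPy:

```python
import numpy as np

# lambda_max(P) = min{ t : t*I - P is PSD } for symmetric P.
P = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam_max = np.linalg.eigvalsh(P).max()

def is_psd(M, tol=1e-9):
    # a symmetric matrix is PSD iff its smallest eigenvalue is >= 0
    return np.linalg.eigvalsh(M).min() >= -tol

# t*I - P is PSD exactly when t >= lambda_max(P):
assert is_psd(lam_max * np.eye(2) - P)
assert not is_psd((lam_max - 0.1) * np.eye(2) - P)
print(lam_max)
```

So minimizing $t$ subject to the matrix inequality $tI - P(y) \succeq 0$ over $(t, y)$ minimizes the max-eigenvalue of $P(y)$.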

Example 2.2 Let $C = \ldots$, $A_1 = \ldots$, $A_2 = \ldots$, and $b = (0;\ 10)$.

Two positive semidefinite matrices are complementary to each other, $X \bullet S = 0$, if and only if $XS = 0$ (Exercise 1.22). From the optimality conditions, the solution set for certain (SDP) and (SDD) is

$$\mathcal{S}_p = \{X \in \mathcal{F}_p,\ (y, S) \in \mathcal{F}_d : C \bullet X - b^T y = 0\},$$

or

$$\mathcal{S}_p = \{X \in \mathcal{F}_p,\ (y, S) \in \mathcal{F}_d : XS = 0\},$$

which is a system of linear matrix inequalities and equations.

In general, we have

Theorem 2.4 (SDP duality theorem) If one of (SDP) or (SDD) has a strictly or interior feasible solution and its optimal value is finite, then the other is feasible and has the same optimal value. If one of (SDP) or (SDD) is unbounded, then the other has no feasible solution.

Note that a duality gap may exist if neither (SDP) nor (SDD) has a strictly feasible point. This is in contrast to (LP) and (LD), where no duality gap exists if both are feasible.

Although semidefinite programs are much more general than linear programs, they are not much harder to solve. It has turned out that most interior-point methods for LP have been generalized to semidefinite programs. As in LP, these algorithms possess polynomial worst-case complexity under certain computation models. They also perform well in practice. We will describe such extensions later in this book.

2.1 Analytic Center

2.1.1 AC for polytope

Let $\Omega$ be a bounded polytope in $\mathbb{R}^m$ represented by $n\ (> m)$ linear inequalities, i.e.,

$$\Omega = \{y \in \mathbb{R}^m : c - A^T y \geq 0\},$$

where $A \in \mathbb{R}^{m \times n}$ and $c \in \mathbb{R}^n$ are given and $A$ has rank $m$. Denote the interior of $\Omega$ by

$$\mathring{\Omega} = \{y \in \mathbb{R}^m : c - A^T y > 0\}.$$

Define

$$d(y) = \prod_{j=1}^n (c_j - a_j^T y),\quad y \in \Omega,$$

where $a_j$ is the $j$th column of $A$. Traditionally, we let $s := c - A^T y$ and call it a slack vector. Thus, the function $d$ is the product of all slack variables. Its logarithm

$$\mathcal{B}(y) := \log d(y) = \sum_{j=1}^n \log(c_j - a_j^T y) = \sum_{j=1}^n \log s_j \qquad (2.1)$$

is called the (dual) potential function, and $-\mathcal{B}(y)$ is the classical logarithmic barrier function. For convenience, in what follows we may write $\mathcal{B}(s)$ in place of $\mathcal{B}(y)$, where $s$ is always equal to $c - A^T y$.

Example 2.3 Let $A = (1, -1)$ and $c = (1; 1)$. Then $\Omega$ is the interval $[-1, 1]$. Let $A = (1, -1, -1)$ and $c = (1; 1; 1)$. Then $\Omega$ is also the interval $[-1, 1]$. Note that, for the two representations respectively,

$$d(-1/2) = (3/2)(1/2) = 3/4 \quad \text{and} \quad \mathcal{B}(-1/2) = \log(3/4),$$

and

$$d'(-1/2) = (3/2)(1/2)(1/2) = 3/8 \quad \text{and} \quad \mathcal{B}'(-1/2) = \log(3/8).$$

The interior point, denoted by $y^a$ with $s^a = c - A^T y^a$, in $\mathring{\Omega}$ that maximizes the potential function is called the analytic center of $\Omega$, i.e.,

$$\mathcal{B}(\Omega) := \mathcal{B}(y^a, \Omega) = \max_{y \in \mathring{\Omega}} \log d(y, \Omega).$$

$(y^a, s^a)$ is uniquely defined, since the potential function is strictly concave in a bounded convex $\Omega$. Setting $\nabla \mathcal{B}(y, \Omega) = 0$ and letting $x^a = (S^a)^{-1} e$, the analytic center $(y^a, s^a)$ together with $x^a$ satisfies the following optimality conditions:

$$Xs = e,\qquad Ax = 0,\qquad A^T y + s = c. \qquad (2.2)$$

Note that adding or deleting a redundant inequality changes the location of the analytic center.
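Since $\mathcal{B}$ is strictly concave, conditions (2.2) suggest computing the analytic center by Newton's method on $\mathcal{B}(y)$. A minimal sketch, assuming NumPy and using the data of Example 2.3's first representation (this is not an algorithm stated in the text; the starting point is arbitrary interior):

```python
import numpy as np

# Maximize B(y) = sum_j log(s_j), s = c - A^T y, by Newton's method.
# grad B = -A s^{-1},  Hess B = -A S^{-2} A^T  with S = diag(s).
# Data: Example 2.3's first representation of [-1, 1]: A = (1, -1), c = (1; 1).
A = np.array([[1.0, -1.0]])          # shape (m, n) = (1, 2)
c = np.array([1.0, 1.0])

y = np.array([0.3])                  # any interior starting point
for _ in range(30):
    s = c - A.T @ y                  # slack vector, stays > 0 near the center
    g = -A @ (1.0 / s)               # gradient of B at y
    H = -(A * (1.0 / s ** 2)) @ A.T  # Hessian of B at y
    y = y - np.linalg.solve(H, g)    # Newton step

s = c - A.T @ y
x = 1.0 / s                          # x^a = (S^a)^{-1} e
print(y, x)
```

The iterates converge to $y^a = 0$, and the computed pair satisfies (2.2): $Xs = e$ by construction and $Ax = 0$ at the center.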

Example 2.4 Consider $\Omega = \{y \in \mathbb{R} : y \geq 0,\ y \leq 1\}$, which is the interval $[0, 1]$. The analytic center is $y^a = 1/2$ with $x^a = (2, 2)^T$.

Consider

$$\Omega = \{y \in \mathbb{R} : \underbrace{y \geq 0, \ldots, y \geq 0}_{n\ \text{times}},\ y \leq 1\},$$

which is, again, the interval $[0, 1]$ but with $y \geq 0$ copied $n$ times. The analytic center for this system is $y^a = n/(n+1)$ with $x^a = ((n+1)/n, \ldots, (n+1)/n,\ n+1)^T$.

The analytic center can be defined when the interior is empty or equalities are present, such as

$$\Omega = \{y \in \mathbb{R}^m : c - A^T y \geq 0,\ By = b\}.$$

Then the analytic center is chosen on the hyperplane $\{y : By = b\}$ to maximize the product of the slack variables $s = c - A^T y$. Thus, the "interior" of $\Omega$ is not used in the sense of the topological interior of a set. Rather, it refers to the interior of the positive orthant of slack variables: $\mathbb{R}^n_+ := \{s : s \geq 0\}$. When we say $\Omega$ has an interior, we mean that

$$\mathring{\mathbb{R}}^n_+ \cap \{s : s = c - A^T y \text{ for some } y \text{ with } By = b\} \neq \emptyset.$$

Again, $\mathring{\mathbb{R}}^n_+ := \{s \in \mathbb{R}^n_+ : s > 0\}$, i.e., the interior of the orthant $\mathbb{R}^n_+$. Thus, if $\Omega$ has only a single point $y$ with $s = c - A^T y > 0$, we still say $\mathring{\Omega}$ is not empty.

Example 2.5 Consider the system $\Omega = \{x : Ax = 0,\ e^T x = n,\ x \geq 0\}$, which is called Karmarkar's canonical set. If $x = e$ is in $\Omega$, then $e$ is the analytic center of $\Omega$, the intersection of the simplex $\{x : e^T x = n,\ x \geq 0\}$ and the hyperplane $\{x : Ax = 0\}$ (Figure 2.1).
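The shift caused by the redundant copies in Example 2.4 is easy to see numerically: the analytic center of the replicated system maximizes the slack product $y^n(1 - y)$. A minimal stdlib sketch (the ternary search below is just one way to maximize the concave log-product; it is not a method from the text):

```python
import math

# Duplicating y >= 0 n times moves the analytic center of [0, 1]
# from 1/2 to n/(n + 1): the center maximizes y^n * (1 - y).
def analytic_center(n):
    f = lambda y: n * math.log(y) + math.log(1.0 - y)  # log of y^n (1 - y)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):               # ternary search on the concave f
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

print(analytic_center(1), analytic_center(5))  # approx 1/2 and 5/6
```

As $n$ grows, the center is pushed toward 1, matching $y^a = n/(n+1)$.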

Figure 2.1: Illustration of the Karmarkar (simplex) polytope and its analytic center.

2.1.2 AC for SDP

Let $\Omega$ be a bounded convex set in $\mathbb{R}^m$ represented by an $n \times n$ matrix inequality ($n > m$), i.e.,

$$\Omega = \left\{y \in \mathbb{R}^m : C - \sum_{i=1}^m y_i A_i \succeq 0\right\}.$$

Let $S = C - \sum_{i=1}^m y_i A_i$ and

$$\mathcal{B}(y) := \log \det(S) = \log \det\left(C - \sum_{i=1}^m y_i A_i\right). \qquad (2.3)$$

The interior point, denoted by $y^a$ with $S^a = C - \sum_{i=1}^m y_i^a A_i$, in $\mathring{\Omega}$ that maximizes the potential function is called the analytic center of $\Omega$, i.e.,

$$\max_{y \in \mathring{\Omega}} \mathcal{B}(y).$$

$(y^a, S^a)$ is uniquely defined, since the potential function is strictly concave in a bounded convex $\Omega$. Setting $\nabla \mathcal{B}(y, \Omega) = 0$ and letting $X^a = (S^a)^{-1}$, the analytic center $(y^a, S^a)$ together with $X^a$ satisfies the following optimality conditions:

$$XS = I,\qquad A_i \bullet X = 0,\ i = 1, \ldots, m,\qquad \sum_{i=1}^m y_i A_i + S = C. \qquad (2.4)$$

2.2 Potential Functions for LP and SDP

We show how potential functions can be defined to solve linear programming and semidefinite programming problems. We assume that, for a given LP data set $(A, b, c)$, both the primal and the dual have interior feasible points. We also let $z^*$ be the optimal value of the standard form (LP) and (LD). Denote the feasible sets of (LP) and (LD) by $\mathcal{F}_p$ and $\mathcal{F}_d$, respectively, $\mathcal{F} = \mathcal{F}_p \times \mathcal{F}_d$, and the interior of $\mathcal{F}$ by $\mathring{\mathcal{F}}$.

2.2.1 Primal potential function for LP

Consider the level set

$$\Omega(z) = \{y \in \mathbb{R}^m : c - A^T y \geq 0,\ -z + b^T y \geq 0\}, \qquad (2.5)$$

where $z < z^*$. Since both (LP) and (LD) have interior feasible points for the given $(A, b, c)$, $\Omega(z)$ is bounded and has an interior for any finite $z$, even though $\Omega := \mathcal{F}_d$ may be unbounded (Exercise 1.23). Clearly, $\Omega(z) \subset \Omega$, and if $z_2 \geq z_1$, then $\Omega(z_2) \subseteq \Omega(z_1)$; the inequality $-z + b^T y \geq 0$ is translated from $z = z_1$ to $z = z_2$.

From the duality theorem again, finding a point in $\Omega(z)$ has a homogeneous primal problem:

$$\text{minimize}\quad c^T x' - z x'_0 \quad \text{subject to}\quad Ax' - b x'_0 = 0,\quad (x', x'_0) \geq 0.$$

For $(x', x'_0)$ satisfying

$$Ax' - b x'_0 = 0,\quad (x', x'_0) > 0,$$

let $x := x'/x'_0 \in \mathring{\mathcal{F}}_p$, i.e.,

$$Ax = b,\quad x > 0.$$

Then, the primal potential function for $\Omega(z)$ (Figure 2.2), as described in the preceding section, is

$$\mathcal{P}(x', \Omega(z)) = (n+1)\log(c^T x' - z x'_0) - \sum_{j=0}^{n} \log x'_j = (n+1)\log(c^T x - z) - \sum_{j=1}^n \log x_j =: \mathcal{P}_{n+1}(x, z).$$

The latter, $\mathcal{P}_{n+1}(x, z)$, is the Karmarkar potential function in the standard LP form with a lower bound $z$ for $z^*$.

One algorithm for solving (LD) is suggested in Figure 2.2. If the objective hyperplane is repeatedly translated to the analytic center, the sequence of new analytic centers will converge to an optimal solution, and the potentials of the new polytopes will decrease to $-\infty$.

As we illustrated before, one can represent $\Omega(z)$ differently:

$$\Omega(z) = \{y : c - A^T y \geq 0,\ \underbrace{-z + b^T y \geq 0, \ldots, -z + b^T y \geq 0}_{\rho\ \text{times}}\}, \qquad (2.6)$$

Figure 2.2: Intersections of a dual feasible region and the objective hyperplane: $b^T y \geq z$ on the left and $b^T y \geq b^T y^a$ on the right.

i.e., $-z + b^T y \geq 0$ is copied $\rho$ times. Geometrically, this representation does not change $\Omega(z)$, but it changes the location of its analytic center. Since the last $\rho$ inequalities in $\Omega(z)$ are identical, they must share the same slack value and the same corresponding primal variable. Let $(x', x'_0)$ be the primal variables. Then the primal problem can be written as

$$\text{minimize}\quad c^T x' - \underbrace{z x'_0 - \cdots - z x'_0}_{\rho\ \text{times}} \quad \text{subject to}\quad Ax' - \underbrace{b x'_0 - \cdots - b x'_0}_{\rho\ \text{times}} = 0,\quad (x', x'_0) \geq 0.$$

Let $x = x'/(\rho x'_0) \in \mathring{\mathcal{F}}_p$. Then, the primal potential function for the new $\Omega(z)$ given by (2.6) is

$$\mathcal{P}(x', \Omega(z)) = (n+\rho)\log(c^T x' - z(\rho x'_0)) - \sum_{j=1}^n \log x'_j - \rho \log x'_0 = (n+\rho)\log(c^T x - z) - \sum_{j=1}^n \log x_j + \rho \log \rho =: \mathcal{P}_{n+\rho}(x, z) + \rho \log \rho.$$

The function

$$\mathcal{P}_{n+\rho}(x, z) = (n+\rho)\log(c^T x - z) - \sum_{j=1}^n \log x_j \qquad (2.7)$$

is an extension of the Karmarkar potential function in the standard LP form with a lower bound $z$ for $z^*$. It represents the logarithmic volume of a coordinate-aligned ellipsoid whose intersection with the affine set $\mathcal{A}_{\Omega(z)}$ contains $\mathcal{S}_{\Omega(z)}$, where $-z + b^T y \geq 0$ is duplicated $\rho$ times.

2.2.2 Dual potential function for LP

We can also develop a dual potential function, symmetric to the primal, for $(y, s) \in \mathring{\mathcal{F}}_d$:

$$\mathcal{B}_{n+\rho}(y, s, \bar{z}) = (n+\rho)\log(\bar{z} - b^T y) - \sum_{j=1}^n \log s_j, \qquad (2.8)$$

where $\bar{z}$ is an upper bound on $z^*$. One can show that it represents the logarithmic volume of a coordinate-aligned ellipsoid whose intersection with the affine set $\{x : Ax = b\}$ contains the primal level set

$$\{x \in \mathcal{F}_p : \underbrace{\bar{z} - c^T x \geq 0, \ldots, \bar{z} - c^T x \geq 0}_{\rho\ \text{times}}\},$$

where $\bar{z} - c^T x \geq 0$ is copied $\rho$ times (Exercise 2.7). For symmetry, we may write $\mathcal{B}_{n+\rho}(y, s, \bar{z})$ simply as $\mathcal{B}_{n+\rho}(s, \bar{z})$, since we can always recover $y$ from $s$ using the equation $A^T y = c - s$.

2.2.3 Primal-dual potential function for LP

A primal-dual potential function for linear programming will be used later. For $x \in \mathring{\mathcal{F}}_p$ and $(y, s) \in \mathring{\mathcal{F}}_d$, it is defined by

$$\psi_{n+\rho}(x, s) := (n+\rho)\log(x^T s) - \sum_{j=1}^n \log(x_j s_j), \qquad (2.9)$$

where $\rho \geq 0$. We have the relation:

$$\psi_{n+\rho}(x, s) = (n+\rho)\log(c^T x - b^T y) - \sum_{j=1}^n \log x_j - \sum_{j=1}^n \log s_j$$

$$= \mathcal{P}_{n+\rho}(x, b^T y) - \sum_{j=1}^n \log s_j = \mathcal{B}_{n+\rho}(s, c^T x) - \sum_{j=1}^n \log x_j.$$

Since

$$\psi_{n+\rho}(x, s) = \rho\log(x^T s) + \psi_n(x, s) \geq \rho\log(x^T s) + n \log n,$$

then, for $\rho > 0$, $\psi_{n+\rho}(x, s) \to -\infty$ implies that $x^T s \to 0$. More precisely, we have

$$x^T s \leq \exp\left(\frac{\psi_{n+\rho}(x, s) - n \log n}{\rho}\right).$$

We have the following theorem:

Theorem 2.5 Define the level set

$$\Psi(\delta) := \{(x, y, s) \in \mathring{\mathcal{F}} : \psi_{n+\rho}(x, s) \leq \delta\}.$$

i) $\Psi(\delta_1) \subset \Psi(\delta_2)$ if $\delta_1 \leq \delta_2$.

ii) $\mathring{\Psi}(\delta) = \{(x, y, s) \in \mathring{\mathcal{F}} : \psi_{n+\rho}(x, s) < \delta\}$.

iii) For every $\delta$, $\Psi(\delta)$ is bounded and its closure $\hat{\Psi}(\delta)$ has non-empty intersection with the solution set.

Later we will show that a potential reduction algorithm generates sequences $\{x^k, y^k, s^k\} \in \mathring{\mathcal{F}}$ such that

$$\psi_{n+\sqrt{n}}(x^{k+1}, y^{k+1}, s^{k+1}) \leq \psi_{n+\sqrt{n}}(x^k, y^k, s^k) - 0.05$$

for $k = 0, 1, 2, \ldots$ This indicates that the level sets shrink at least at a constant rate independently of $m$ or $n$.
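The bound $x^T s \leq \exp((\psi_{n+\rho}(x, s) - n \log n)/\rho)$ follows from the arithmetic-geometric mean inequality $\psi_n(x, s) \geq n \log n$. A random numerical check with illustrative data (vectors and $\rho = \sqrt{n}$ chosen here, not taken from the text):

```python
import math
import random

# Check psi_n(x, s) >= n log n and the resulting duality-gap bound.
random.seed(7)
n = 8
rho = math.sqrt(n)
x = [random.uniform(0.1, 2.0) for _ in range(n)]
s = [random.uniform(0.1, 2.0) for _ in range(n)]

xs = sum(xj * sj for xj, sj in zip(x, s))                     # x^T s
log_prod = sum(math.log(xj * sj) for xj, sj in zip(x, s))     # sum log(x_j s_j)
psi_n = n * math.log(xs) - log_prod                           # rho = 0 case
psi = (n + rho) * math.log(xs) - log_prod                     # general rho

print(psi_n >= n * math.log(n))
print(xs <= math.exp((psi - n * math.log(n)) / rho) + 1e-9)
```

Both checks hold for any positive $x$ and $s$, since $\psi_{n+\rho} = \rho\log(x^T s) + \psi_n$ exactly.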

2.2.4 Potential functions for SDP

The potential functions for SDP are analogous to those for LP. For the given data, we assume that both (SDP) and (SDD) have interior feasible points. Then, for any $X \in \mathring{\mathcal{F}}_p$ and $(y, S) \in \mathring{\mathcal{F}}_d$, the primal potential function is defined by

$$\mathcal{P}_{n+\rho}(X, z) := (n+\rho)\log(C \bullet X - z) - \log\det(X),\quad z \leq z^*;$$

the dual potential function is defined by

$$\mathcal{B}_{n+\rho}(y, S, \bar{z}) := (n+\rho)\log(\bar{z} - b^T y) - \log\det(S),\quad \bar{z} \geq z^*,$$

where $\rho \geq 0$ and $z^*$ designates the optimal objective value.

For $X \in \mathring{\mathcal{F}}_p$ and $(y, S) \in \mathring{\mathcal{F}}_d$, the primal-dual potential function for SDP is defined by

$$\psi_{n+\rho}(X, S) := (n+\rho)\log(X \bullet S) - \log(\det(X)\det(S))$$
$$= (n+\rho)\log(C \bullet X - b^T y) - \log\det(X) - \log\det(S)$$
$$= \mathcal{P}_{n+\rho}(X, b^T y) - \log\det(S) = \mathcal{B}_{n+\rho}(S, C \bullet X) - \log\det(X),$$

where $\rho \geq 0$. Note that if $X$ and $S$ are diagonal matrices, these definitions reduce to those for LP.

Note that we still have (Exercise 2.8)

$$\psi_{n+\rho}(X, S) = \rho\log(X \bullet S) + \psi_n(X, S) \geq \rho\log(X \bullet S) + n \log n.$$

Then, for $\rho > 0$, $\psi_{n+\rho}(X, S) \to -\infty$ implies that $X \bullet S \to 0$. More precisely, we have

$$X \bullet S \leq \exp\left(\frac{\psi_{n+\rho}(X, S) - n \log n}{\rho}\right).$$

We also have the following corollary:

Corollary 2.6 Let (SDP) and (SDD) have non-empty interior and define the level set

$$\Psi(\delta) := \{(X, y, S) \in \mathring{\mathcal{F}} : \psi_{n+\rho}(X, S) \leq \delta\}.$$

i) $\Psi(\delta_1) \subset \Psi(\delta_2)$ if $\delta_1 \leq \delta_2$.

ii) $\mathring{\Psi}(\delta) = \{(X, y, S) \in \mathring{\mathcal{F}} : \psi_{n+\rho}(X, S) < \delta\}$.

iii) For every $\delta$, $\Psi(\delta)$ is bounded and its closure $\hat{\Psi}(\delta)$ has non-empty intersection with the solution set.
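The matrix analogue $\psi_n(X, S) \geq n \log n$ (Exercise 2.8) can be checked numerically on random symmetric positive definite matrices. An illustrative NumPy sketch (a spot check, not a proof; the data is randomly generated):

```python
import numpy as np

# Check psi_n(X, S) = n log(X . S) - log(det X det S) >= n log n
# for random symmetric positive definite X and S.
rng = np.random.default_rng(0)
n = 5
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
X = M1 @ M1.T + n * np.eye(n)        # SPD by construction
S = M2 @ M2.T + n * np.eye(n)

xs = np.trace(X @ S)                 # X . S = tr(X^T S), X symmetric
_, ldX = np.linalg.slogdet(X)        # stable log-determinants
_, ldS = np.linalg.slogdet(S)
psi_n = n * np.log(xs) - (ldX + ldS)

print(psi_n >= n * np.log(n))
```

The inequality is the AM-GM inequality applied to the (positive) eigenvalues of $XS$, since $X \bullet S$ is their sum and $\det(X)\det(S)$ their product.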

2.3 Central Paths of LP and SDP

Many interior-point algorithms find a sequence of feasible points along a central path that connects the analytic center and the solution set. We now present this path, one of the most important foundations for the development of interior-point algorithms.

2.3.1 Central path for LP

Consider a linear program in the standard form (LP) and (LD). Assume that $\mathring{\mathcal{F}} \neq \emptyset$, i.e., both $\mathring{\mathcal{F}}_p \neq \emptyset$ and $\mathring{\mathcal{F}}_d \neq \emptyset$, and denote by $z^*$ the optimal objective value. The central path can be expressed as

$$\mathcal{C} = \left\{(x, y, s) \in \mathring{\mathcal{F}} : Xs = \frac{x^T s}{n}\, e\right\}$$

in the primal-dual form. We also see

$$\mathcal{C} = \left\{(x, y, s) \in \mathring{\mathcal{F}} : \psi_n(x, s) = n \log n\right\}.$$

For any $\mu > 0$, one can derive the central path simply by minimizing the primal LP with a logarithmic barrier function:

$$(P)\quad \text{minimize}\quad c^T x - \mu \sum_{j=1}^n \log x_j \quad \text{subject to}\quad Ax = b,\ x \geq 0.$$

Let $x(\mu) \in \mathring{\mathcal{F}}_p$ be the (unique) minimizer of (P). Then, for some $y \in \mathbb{R}^m$, it satisfies the optimality conditions

$$Xs = \mu e,\qquad Ax = b,\qquad A^T y + s = c. \qquad (2.10)$$

Consider maximizing the dual LP with the barrier function:

$$(D)\quad \text{maximize}\quad b^T y + \mu \sum_{j=1}^n \log s_j \quad \text{subject to}\quad A^T y + s = c,\ s \geq 0.$$

Let $(y(\mu), s(\mu)) \in \mathring{\mathcal{F}}_d$ be the (unique) maximizer of (D). Then, for some $x \in \mathbb{R}^n$, it satisfies the optimality conditions (2.10) as well. Thus, the minimizer $x(\mu)$ and the maximizer $(y(\mu), s(\mu))$ are both on the central path with $x(\mu)^T s(\mu) = n\mu$.
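For a tiny LP, conditions (2.10) can be solved directly. A stdlib sketch with illustrative data (the LP below, its cost vector, and the bisection solver are assumptions for the example, not from the text):

```python
import math

# Tiny LP:  minimize c^T x  s.t.  x1 + x2 = 1, x >= 0, with c = (1, 2).
# Conditions (2.10) give s_j = c_j - y and x_j = mu / s_j, so y solves
#   g(y) = sum_j mu / (c_j - y) = 1,  with y < min_j c_j.
c = [1.0, 2.0]

def central_point(mu):
    g = lambda y: sum(mu / (cj - y) for cj in c)  # strictly increasing in y
    lo, hi = min(c) - 100.0, min(c) - 1e-12
    for _ in range(200):                          # bisection on g(y) = 1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 1.0 else (lo, mid)
    y = 0.5 * (lo + hi)
    s = [cj - y for cj in c]                      # dual slacks, all > 0
    x = [mu / sj for sj in s]                     # so x_j s_j = mu exactly
    return x, y, s

x, y, s = central_point(0.1)
print([xj * sj for xj, sj in zip(x, s)], sum(x))  # each x_j s_j = mu, e^T x = 1
```

The computed point is primal and dual feasible with $x_j s_j = \mu$ for every $j$, hence $x^T s = n\mu$.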

Another way to derive the central path is to consider again the dual level set $\Omega(z)$ of (2.5) for any $z < z^*$ (Figure 2.3). Then, the analytic center $(y(z), s(z))$ of $\Omega(z)$ and a unique point $(x'(z), x'_0(z))$ satisfy

$$Ax'(z) - b x'_0(z) = 0,\quad X'(z)s(z) = e,\quad s(z) = c - A^T y(z),\quad \text{and}\quad x'_0(z)(b^T y(z) - z) = 1.$$

Let $x(z) = x'(z)/x'_0(z)$; then we have

$$Ax(z) = b,\qquad X(z)s(z) = e/x'_0(z) = (b^T y(z) - z)e.$$

Thus, the point $(x(z), y(z), s(z))$ is on the central path with $\mu = b^T y(z) - z$ and

$$c^T x(z) - b^T y(z) = x(z)^T s(z) = n(b^T y(z) - z) = n\mu.$$

As we proved earlier in Section 2.2, $(x(z), y(z), s(z))$ exists and is uniquely defined, which implies the following theorem:

Theorem 2.7 Let both (LP) and (LD) have interior feasible points for the given data set $(A, b, c)$. Then for any $0 < \mu < \infty$, the central path point $(x(\mu), y(\mu), s(\mu))$ exists and is unique.

Figure 2.3: The central path of $y(z)$ in a dual feasible region.

The following theorem further characterizes the central path and utilizes it in solving linear programs.

Theorem 2.8 Let $(x(\mu), y(\mu), s(\mu))$ be on the central path.

i) The central path point $(x(\mu), s(\mu))$ is bounded for $0 < \mu \leq \mu^0$ and any given $0 < \mu^0 < \infty$.

ii) For $0 < \mu' < \mu$, $c^T x(\mu') < c^T x(\mu)$ and $b^T y(\mu') > b^T y(\mu)$.

iii) $(x(\mu), s(\mu))$ converges to an optimal solution pair for (LP) and (LD). Moreover, the limit point $x(0)_{P^*}$ is the analytic center on the primal optimal face, and the limit point $s(0)_{Z^*}$ is the analytic center on the dual optimal face, where $(P^*, Z^*)$ is the strict complementarity partition of the index set $\{1, 2, \ldots, n\}$.

Proof. Note that

$$(x(\mu^0) - x(\mu))^T (s(\mu^0) - s(\mu)) = 0,$$

since $(x(\mu^0) - x(\mu)) \in \mathcal{N}(A)$ and $(s(\mu^0) - s(\mu)) \in \mathcal{R}(A^T)$. This can be rewritten as

$$\sum_j \left( s(\mu^0)_j\, x(\mu)_j + x(\mu^0)_j\, s(\mu)_j \right) = n(\mu^0 + \mu) \leq 2n\mu^0,$$

or

$$\sum_j \left( \frac{x(\mu)_j}{x(\mu^0)_j} + \frac{s(\mu)_j}{s(\mu^0)_j} \right) \leq 2n.$$

Thus, $x(\mu)$ and $s(\mu)$ are bounded, which proves (i). We leave the proof of (ii) as an exercise.

Since $x(\mu)$ and $s(\mu)$ are both bounded, they have at least one limit point, which we denote by $x(0)$ and $s(0)$. Let $x^*_{P^*}$ (with $x^*_{Z^*} = 0$) and $s^*_{Z^*}$ (with $s^*_{P^*} = 0$), respectively, be the unique analytic centers on the primal and dual optimal faces:

$$\{x_{P^*} : A_{P^*} x_{P^*} = b,\ x_{P^*} \geq 0\} \quad \text{and} \quad \{s_{Z^*} : s_{Z^*} = c_{Z^*} - A_{Z^*}^T y \geq 0,\ c_{P^*} - A_{P^*}^T y = 0\}.$$

Again, we have

$$\sum_j \left( s^*_j\, x(\mu)_j + x^*_j\, s(\mu)_j \right) = n\mu,$$

or

$$\sum_{j \in P^*} \frac{x^*_j}{x(\mu)_j} + \sum_{j \in Z^*} \frac{s^*_j}{s(\mu)_j} = n.$$

Thus, we have

$$x(\mu)_j \geq x^*_j/n > 0,\quad j \in P^*,$$

and

$$s(\mu)_j \geq s^*_j/n > 0,\quad j \in Z^*.$$

This implies that

$$x(\mu)_j \to 0,\ j \in Z^*,\quad \text{and}\quad s(\mu)_j \to 0,\ j \in P^*.$$

Furthermore,

$$\prod_{j \in P^*} \frac{x^*_j}{x(\mu)_j} \prod_{j \in Z^*} \frac{s^*_j}{s(\mu)_j} \leq 1,$$

or

$$\prod_{j \in P^*} x^*_j \prod_{j \in Z^*} s^*_j \leq \prod_{j \in P^*} x(\mu)_j \prod_{j \in Z^*} s(\mu)_j.$$

However, $\left(\prod_{j \in P^*} x^*_j\right)\left(\prod_{j \in Z^*} s^*_j\right)$ is the maximal value of the potential function over all interior point pairs on the optimal faces, and $x(0)_{P^*}$ and $s(0)_{Z^*}$ is one such interior point pair on the optimal faces. Thus, we must have

$$\prod_{j \in P^*} x^*_j \prod_{j \in Z^*} s^*_j = \prod_{j \in P^*} x(0)_j \prod_{j \in Z^*} s(0)_j.$$

Therefore,

$$x(0)_{P^*} = x^*_{P^*} \quad \text{and} \quad s(0)_{Z^*} = s^*_{Z^*},$$

since $x^*_{P^*}$ and $s^*_{Z^*}$ are the unique maximizer pair of the potential function. This also implies that the limit point of the central path is unique.

We usually define a neighborhood of the central path as

$$\mathcal{N}(\eta) = \left\{(x, y, s) \in \mathring{\mathcal{F}} : \|Xs - \mu e\| \leq \eta\mu \ \text{and}\ \mu = \frac{x^T s}{n}\right\},$$

where $\|\cdot\|$ can be any norm, or even a one-sided "norm" such as

$$\|x\|^-_\infty = |\min(0, \min_j x_j)|.$$

We have the following theorem:
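A membership test for $\mathcal{N}(\eta)$ is a one-liner once $\mu$ is computed from the pair. A minimal stdlib sketch using the infinity norm (the data points are illustrative, not from the text):

```python
# Is (x, s) in the central-path neighborhood N(eta), infinity-norm version?
def in_neighborhood(x, s, eta):
    n = len(x)
    mu = sum(xj * sj for xj, sj in zip(x, s)) / n     # mu = x^T s / n
    dev = max(abs(xj * sj - mu) for xj, sj in zip(x, s))
    return dev <= eta * mu

# A perfectly centered pair (all x_j s_j equal) is in N(eta) for any eta >= 0;
# an off-center pair needs eta large enough to cover its deviation.
print(in_neighborhood([1.0, 2.0], [2.0, 1.0], 0.0))   # x_j s_j = (2, 2): True
print(in_neighborhood([1.0, 4.0], [2.0, 1.0], 0.3))   # x_j s_j = (2, 4), mu = 3: False
```

Path-following methods keep all iterates inside such a neighborhood while driving $\mu$ to zero.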

Theorem 2.9 Let $(x, y, s) \in \mathcal{N}(\eta)$ for constant $0 < \eta < 1$.

i) The set $\mathcal{N}(\eta) \cap \{(x, y, s) : x^T s \leq n\mu^0\}$ is bounded for any given $\mu^0 < \infty$.

ii) Any limit point of a sequence in $\mathcal{N}(\eta)$ as $\mu \to 0$ is an optimal solution pair for (LP) and (LD). Moreover, for any $j \in P^*$,

$$x_j \geq \frac{(1-\eta)\,x^*_j}{n},$$

where $x^*$ is any optimal primal solution; and for any $j \in Z^*$,

$$s_j \geq \frac{(1-\eta)\,s^*_j}{n},$$

where $s^*$ is any optimal dual solution.

2.3.2 Central path for SDP

Consider the SDP problem (SDP) and its dual (SDD), and assume that $\mathring{\mathcal{F}} \neq \emptyset$, i.e., both $\mathring{\mathcal{F}}_p \neq \emptyset$ and $\mathring{\mathcal{F}}_d \neq \emptyset$. The central path can be expressed as

$$\mathcal{C} = \left\{(X, y, S) \in \mathring{\mathcal{F}} : XS = \mu I,\ 0 < \mu < \infty\right\},$$

or in a symmetric form

$$\mathcal{C} = \left\{(X, y, S) \in \mathring{\mathcal{F}} : X^{.5} S X^{.5} = \mu I,\ 0 < \mu < \infty\right\},$$

where $X^{.5} \in \mathring{\mathcal{M}}^n_+$ is the square root matrix of $X \in \mathring{\mathcal{M}}^n_+$, i.e., $X^{.5} X^{.5} = X$. We also see

$$\mathcal{C} = \left\{(X, y, S) \in \mathring{\mathcal{F}} : \psi_n(X, S) = n \log n\right\}.$$

When $X$ and $S$ are diagonal matrices, this definition is identical to LP. We also have the following corollary:

Corollary 2.10 Let both (SDP) and (SDD) have interior feasible points. Then for any $0 < \mu < \infty$, the central path point $(X(\mu), y(\mu), S(\mu))$ exists and is unique. Moreover,

i) the central path point $(X(\mu), S(\mu))$ is bounded for $0 < \mu \leq \mu^0$ and any given $0 < \mu^0 < \infty$.

ii) For $0 < \mu' < \mu$, $C \bullet X(\mu') < C \bullet X(\mu)$ and $b^T y(\mu') > b^T y(\mu)$.
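On the central path, $X = \mu S^{-1}$, and then the symmetric form $X^{.5} S X^{.5} = \mu I$ holds as well. A NumPy spot check on randomly generated data (illustrative, not from the text):

```python
import numpy as np

# Build a pair with X S = mu * I and verify the symmetric form.
rng = np.random.default_rng(1)
n, mu = 4, 0.25
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)            # symmetric positive definite
X = mu * np.linalg.inv(S)              # so that X S = mu * I

# Square root of X via its eigendecomposition:
w, Q = np.linalg.eigh(X)
X_half = Q @ np.diag(np.sqrt(w)) @ Q.T

sym = X_half @ S @ X_half
print(np.allclose(sym, mu * np.eye(n)))
```

Note that $XS = \mu I$ forces $X$ and $S$ to commute, which is why the symmetric form defines the same set.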

iii) $(X(\mu), S(\mu))$ converges to an optimal solution pair for (SDP) and (SDD); the rank of the limit of $X(\mu)$ is maximal among all optimal solutions of (SDP), and the rank of the limit of $S(\mu)$ is maximal among all optimal solutions of (SDD).

2.4 Notes

General convex problems, such as membership, separation, validity, and optimization, can be solved by the central-section method; see Grötschel, Lovász and Schrijver [170]. Levin [244] and Newman [318] considered the center of gravity of a convex body; Elzinga and Moore [110] considered the center of the max-volume sphere contained in a convex body. A number of Russian mathematicians (for example, Tarasov, Khachiyan and Érlikh [403]) considered the center of the max-volume ellipsoid inscribing the body; Huard and Liêu [190, 191] considered a generic center in the body that maximizes a distance function; and Vaidya [438] considered the volumetric center, the maximizer of the determinant of the Hessian matrix of the logarithmic barrier function. See Kaiser, Morin and Trafalis [210] for a complete survey.

Dyer and Frieze [104] proved that computing the volume of a convex polytope, whether given as a list of facets or of vertices, is #P-hard. Furthermore, Elekes [109] showed that no polynomial-time algorithm can compute the volume of a convex body with less than exponential relative error. Bárány and Füredi [42] further showed that for $\Omega \subset \mathbb{R}^m$, any polynomial-time algorithm that gives an upper bound $\overline{V}(\Omega)$ and a lower bound $\underline{V}(\Omega)$ on the volume of $\Omega$ necessarily has an exponential gap between them: there is no polynomial-time algorithm that would compute $\overline{V}(\Omega)$ and $\underline{V}(\Omega)$ such that $\overline{V}(\Omega)/\underline{V}(\Omega) < (cm/\log m)^m$, where $c$ is a constant independent of $m$. Recently, Dyer, Frieze and Kannan [105] developed a random polynomial-time algorithm that can, with high probability, find a good estimate for the volume of $\Omega$.
Apparently, the result that every convex body contains a unique ellipsoid of maximal volume, and is contained in a unique ellipsoid of minimal volume, was discovered independently by several mathematicians; see, for example, Danzer, Grünbaum and Klee [94]. These authors attributed the first proof to K. Löwner. John [208] later proved the theorem.

Khachiyan and Todd [222] established a polynomial complexity bound for computing an approximate point of the center of the maximal inscribed ellipsoid when the convex body is represented by linear inequalities.

The analytic center for a convex polyhedron given by linear inequalities was introduced by Huard [190], and later by Sonnevend [383]. The function $d(y, \Omega)$ is very similar to Huard's generic distance function, with one exception: property (3) there was stated as "If $\Omega' \subset \Omega$, then $d(y, \Omega') \leq d(y, \Omega)$." The reason for the difference is that the distance function may return different values even for the same polytope under two different representations. The negative logarithm $-\mathcal{B}(y) = -\log d(y, \Omega)$, called the barrier function, was introduced by Frisch [126].

Todd [405] and Ye [465] showed that Karmarkar's potential function represents the logarithmic volume of a coordinate-aligned ellipsoid that contains the feasible region. The Karmarkar potential function in the standard form (LP) with a lower bound $z$ for $z^*$ was seen in Todd and Burrell [413], Anstreicher [24], Gay [133], and Ye and Kojima [477]. The primal potential function with $\rho > 1$ was proposed by Gonzaga [160], Freund [123], and Ye [466, 468]. The primal-dual potential function was proposed by Tanabe [400], and Todd and Ye [415]. Potential functions for LCP and SDP were studied by Kojima et al. [230, 228], Alizadeh [9], and Nesterov and Nemirovskii [327].

McLinden [267] earlier, then Bayer and Lagarias [45, 46], Megiddo [271], and Sonnevend [383], analyzed the central path for linear programming and convex optimization. Megiddo [271] derived the central path simply by minimizing the primal with a logarithmic barrier function, as in Fiacco and McCormick [116]. The central path for LCP, with a more general matrix $M$, was given by Kojima et al. [227] and Güler [174]; the central path theory for SDP was first developed by Nesterov and Nemirovskii [327].
McLinden [267] proved Theorem 2.8 for the monotone LCP, which includes LP.

2.5 Exercises

2.1 Find the min-volume ellipsoid containing a half of the unit ball $\{x \in \mathbb{R}^n : \|x\| \leq 1\}$.

2.2 Verify Examples 2.3, 2.4, and 2.5.

2.3 Compare and contrast the center of gravity of a polytope and its analytic center.

2.4 Consider the maximization problem

$$\text{maximize}\quad f(x) = x_1 \prod_{j=1}^n x_j \quad \text{subject to}\quad e^T x = n,\ x > 0 \in \mathbb{R}^n.$$

Prove that its maximizer is achieved at $x_1 = \beta$ and $x_2 = \cdots = x_n = (n - \beta)/(n - 1) > 0$ for some $1 < \beta < n$.

2.5 Let $\mathring{\Omega} = \{y \in \mathbb{R}^m : c - A^T y > 0\} \neq \emptyset$, $\mathring{\Omega}' = \{y \in \mathbb{R}^m : c' - A^T y > 0\} \neq \emptyset$, and $c' \leq c$. Prove that $\mathcal{B}(\Omega') \leq \mathcal{B}(\Omega)$.

2.6 If $\Omega = \{y : c - A^T y \geq 0\}$ is nonempty, prove that the minimal value of its primal problem is 0; if $\Omega$ is bounded and has an interior, prove that the interior of $X_\Omega := \{x \in \mathbb{R}^n : Ax = 0,\ x \geq 0\}$ is nonempty and $x = 0$ is the unique primal solution.

2.7 Let (LP) and (LD) have interior. Prove that the dual potential function $\mathcal{B}_{n+1}(y, s, \bar{z})$, where $\bar{z}$ is an upper bound on $z^*$, represents the volume of a coordinate-aligned ellipsoid whose intersection with the affine set $\{x : Ax = b\}$ contains the primal level set $\{x \in \mathcal{F}_p : c^T x \leq \bar{z}\}$.

2.8 Let $X, S \in \mathcal{M}^n$ be both positive definite. Prove that

$$\psi_n(X, S) = n\log(X \bullet S) - \log(\det(X)\det(S)) \geq n \log n.$$

2.9 Consider linear programming and the level set

$$\Psi(\delta) := \{(x, y, s) \in \mathring{\mathcal{F}} : \psi_{n+\rho}(x, s) \leq \delta\}.$$

Prove that $\Psi(\delta_1) \subset \Psi(\delta_2)$ if $\delta_1 \leq \delta_2$, and that for every $\delta$, $\Psi(\delta)$ is bounded and its closure $\hat{\Psi}(\delta)$ has non-empty intersection with the solution set.

2.10 Consider the linear program

$$\text{maximize}\quad b^T y \quad \text{subject to}\quad 0 \leq y_1 \leq 1,\ 0 \leq y_2 \leq 1.$$

Draw the feasible region and the (dual) potential level sets

$$\{y : \mathcal{B}_5(y, s, 2) \leq 0\} \quad \text{and} \quad \{y : \mathcal{B}_5(y, s, 2) \leq -10\},$$

respectively, for

1. $b = (1; 0)$;
2. $b = (1; 1)/2$;
3. $b = (2; 1)/3$.

2.11 Consider the polytope

$$\{y \in \mathbb{R}^2 : 0 \leq y_1 \leq 1,\ 0 \leq y_2 \leq 1,\ y_1 + y_2 \leq z\}.$$

Describe how the analytic center changes as $z$ decreases from 10 to 2.

2.12 Consider the linear program

$$\text{maximize}\quad b^T y \quad \text{subject to}\quad 0 \leq y_1 \leq 1,\ 0 \leq y_2 \leq 1.$$

Draw the feasible region, central path, and solution set for

1. $b = (1; 0)$;
2. $b = (1; 1)/2$;
3. $b = (2; 1)/3$.

Finally, sketch a neighborhood for the third central path.

2.13 Prove (ii) of Theorem 2.8.

2.14 Prove Theorem 2.9.


Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

Interior Point Methods in Mathematical Programming

Interior Point Methods in Mathematical Programming Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

LECTURE 10 LECTURE OUTLINE

LECTURE 10 LECTURE OUTLINE LECTURE 10 LECTURE OUTLINE Min Common/Max Crossing Th. III Nonlinear Farkas Lemma/Linear Constraints Linear Programming Duality Convex Programming Duality Optimality Conditions Reading: Sections 4.5, 5.1,5.2,

More information

Lecture 10. Primal-Dual Interior Point Method for LP

Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 1 Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 2 Consider a linear program (P ) minimize c T x subject to Ax = b x 0 and its dual (D) maximize b T y subject to A T y + s = c s 0.

More information

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1,

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1, 4 Duality 4.1 Numerical perturbation analysis example. Consider the quadratic program with variables x 1, x 2, and parameters u 1, u 2. minimize x 2 1 +2x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x

More information

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

An E cient A ne-scaling Algorithm for Hyperbolic Programming

An E cient A ne-scaling Algorithm for Hyperbolic Programming An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such

More information

7. Lecture notes on the ellipsoid algorithm

7. Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear

More information

Lecture 9 Sequential unconstrained minimization

Lecture 9 Sequential unconstrained minimization S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725 Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Theory and Internet Protocols

Theory and Internet Protocols Game Lecture 2: Linear Programming and Zero Sum Nash Equilibrium Xiaotie Deng AIMS Lab Department of Computer Science Shanghai Jiaotong University September 26, 2016 1 2 3 4 Standard Form (P) Outline

More information

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

The Simplest Semidefinite Programs are Trivial

The Simplest Semidefinite Programs are Trivial The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12

More information

Convex optimization. Javier Peña Carnegie Mellon University. Universidad de los Andes Bogotá, Colombia September 2014

Convex optimization. Javier Peña Carnegie Mellon University. Universidad de los Andes Bogotá, Colombia September 2014 Convex optimization Javier Peña Carnegie Mellon University Universidad de los Andes Bogotá, Colombia September 2014 1 / 41 Convex optimization Problem of the form where Q R n convex set: min x f(x) x Q,

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

A ten page introduction to conic optimization

A ten page introduction to conic optimization CHAPTER 1 A ten page introduction to conic optimization This background chapter gives an introduction to conic optimization. We do not give proofs, but focus on important (for this thesis) tools and concepts.

More information

Primal-dual IPM with Asymmetric Barrier

Primal-dual IPM with Asymmetric Barrier Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers

More information

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief

More information

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations A notion of for Convex, Semidefinite and Extended Formulations Marcel de Carli Silva Levent Tunçel April 26, 2018 A vector in R n is integral if each of its components is an integer, A vector in R n is

More information

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,

More information

CSCI : Optimization and Control of Networks. Review on Convex Optimization

CSCI : Optimization and Control of Networks. Review on Convex Optimization CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one

More information

Introduction and Math Preliminaries

Introduction and Math Preliminaries Introduction and Math Preliminaries Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Appendices A, B, and C, Chapter

More information

A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING

A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the

More information

Lecture notes on the ellipsoid algorithm

Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm

More information

LP. Kap. 17: Interior-point methods

LP. Kap. 17: Interior-point methods LP. Kap. 17: Interior-point methods the simplex algorithm moves along the boundary of the polyhedron P of feasible solutions an alternative is interior-point methods they find a path in the interior of

More information

Inner approximation of convex cones via primal-dual ellipsoidal norms

Inner approximation of convex cones via primal-dual ellipsoidal norms Inner approximation of convex cones via primal-dual ellipsoidal norms by Miaolan Xie A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of

More information

Chapter 6 Interior-Point Approach to Linear Programming

Chapter 6 Interior-Point Approach to Linear Programming Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES

PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES YU. E. NESTEROV AND M. J. TODD Abstract. In this paper we continue the development of a theoretical foundation for efficient primal-dual interior-point

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Lecture 15: October 15

Lecture 15: October 15 10-725: Optimization Fall 2012 Lecturer: Barnabas Poczos Lecture 15: October 15 Scribes: Christian Kroer, Fanyi Xiao Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have

More information

SCALING DUALITIES AND SELF-CONCORDANT HOMOGENEOUS PROGRAMMING IN FINITE DIMENSIONAL SPACES

SCALING DUALITIES AND SELF-CONCORDANT HOMOGENEOUS PROGRAMMING IN FINITE DIMENSIONAL SPACES SCALING DUALITIES AND SELF-CONCORDANT HOMOGENEOUS PROGRAMMING IN FINITE DIMENSIONAL SPACES BAHMAN KALANTARI Abstract. In this paper first we prove four fundamental theorems of the alternative, called scaling

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

Lecture 8. Strong Duality Results. September 22, 2008

Lecture 8. Strong Duality Results. September 22, 2008 Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation

More information

A new primal-dual path-following method for convex quadratic programming

A new primal-dual path-following method for convex quadratic programming Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has

More information

The maximal stable set problem : Copositive programming and Semidefinite Relaxations

The maximal stable set problem : Copositive programming and Semidefinite Relaxations The maximal stable set problem : Copositive programming and Semidefinite Relaxations Kartik Krishnan Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180 USA kartis@rpi.edu

More information