Lecture 5. Theorems of Alternatives and Self-Dual Embedding


A system of linear equations may not have a solution. It is well known that either $Ax = c$ has a solution, or $A^T y = 0,\ c^T y = 1$ has a solution, but never both and never neither. A statement of this type is called a theorem of alternatives. There are in fact quite a few theorems of alternatives in the context of linear algebra, as we will discuss shortly.
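As a concrete illustration (a minimal numpy sketch, not part of the original slides, with made-up data), the dichotomy can be decided by projecting $c$ onto the range of $A$; in the unsolvable case the residual itself yields the certificate $y$:

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
c = np.array([1.0, 1.0, 3.0])    # not in the column space of A

x, _, _, _ = np.linalg.lstsq(A, c, rcond=None)
r = c - A @ x                    # component of c orthogonal to range(A)

if np.allclose(r, 0):
    print("Alternative 1: Ax = c has a solution x =", x)
else:
    # r lies in null(A^T), and c^T r = ||r||^2 > 0, so y = r / (c^T r) certifies
    y = r / (c @ r)
    print("Alternative 2: A^T y =", A.T @ y, ", c^T y =", c @ y)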

A very general form of theorem of alternatives is Hilbert's Nullstellensatz. Let $P_i(x)$ be a polynomial in the $n$ complex variables $x_1, \dots, x_n$ (so $x \in C^n$), $i = 1, \dots, m$.

Theorem 1 (Nullstellensatz) Either $P_1(x) = 0,\ P_2(x) = 0,\ \dots,\ P_m(x) = 0$ has a solution $x \in C^n$, or there exist polynomials $Q_1(x), \dots, Q_m(x)$ such that $P_1(x) Q_1(x) + P_2(x) Q_2(x) + \cdots + P_m(x) Q_m(x) = 1$.

For more details, visit Terence Tao's blog at: http://terrytao.wordpress.com/2007/11/26/hilberts-nullstellensatz/

For systems of linear inequalities, the most famous theorem of alternatives is the following Farkas lemma (shown by Farkas around the end of the 19th century).

Theorem 2 (Farkas) Either $Ax = b,\ x \ge 0$ has a solution, or $A^T y \ge 0,\ b^T y < 0$ has a solution.

In fact, more refined forms of theorems of alternatives exist, in the style of the Farkas lemma. Some examples are shown on the next few slides.
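As a hedged illustration (a scipy-based sketch with made-up data; assuming scipy.optimize.linprog is available), one can test both sides of the Farkas alternative numerically: a phase-1 LP for primal feasibility, and a bounded LP search for a certificate $y$:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([-1.0, 0.0])        # forces x1 + x2 = -1, impossible with x >= 0

# Alternative 1: is {x : Ax = b, x >= 0} nonempty?
feas = linprog(c=np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)
print("primal feasible:", feas.status == 0)

# Alternative 2: minimize b^T y subject to -A^T y <= 0, y in [-1, 1]^2;
# a strictly negative optimum is a Farkas certificate of infeasibility.
cert = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(2), bounds=[(-1, 1)] * 2)
print("certificate y =", cert.x, " b^T y =", cert.fun)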

Let $x$ and $y$ be two vectors. Write $x \gneq y$ to mean $x \ge y$ and $x \ne y$; similarly for $x \lneq y$.

Theorem 3 (Gordan) Either $Ax > 0$ has a solution, or $A^T y = 0,\ y \gneq 0$ has a solution.

Theorem 4 (Stiemke) Either $Ax \gneq 0$ has a solution, or $A^T y = 0,\ y > 0$ has a solution.

Theorem 5 (Gale) Either $Ax \le b$ has a solution, or $A^T y = 0,\ b^T y = -1,\ y \ge 0$ has a solution.

Theorem 6 (Tucker) Suppose $A \ne 0$. Either $Ax \gneq 0,\ Bx \ge 0,\ Cx = 0$ has a solution, or $A^T u + B^T v + C^T w = 0,\ u > 0,\ v \ge 0$ has a solution.

Theorem 7 (Motzkin) Suppose $A \ne 0$. Either $Ax > 0,\ Bx \ge 0,\ Cx = 0$ has a solution, or $A^T u + B^T v + C^T w = 0,\ u \gneq 0,\ v \ge 0$ has a solution.

The most important theorem of alternatives is still the Farkas lemma. The Farkas lemma can be proven by the simplex method for linear programming, or by the separation theorem. Interestingly, it can also be proven in an elementary manner, following a recent brilliant expository article by Terence Tao.

Let us slightly rephrase the Farkas lemma: Let $P_i(x) = \sum_{j=1}^n p_{ij} x_j - r_i$, $i = 1, 2, \dots, m$. If the system $P_i(x) \ge 0$, $i = 1, 2, \dots, m$, does not have a solution, then there are $y_i \ge 0$ such that $\sum_{i=1}^m y_i P_i(x) \equiv -1$.

We prove the result by induction on $n$. When $n = 1$, after scaling we can divide the inequalities $P_i(x) \ge 0$ into three groups: $x_1 - a_i \ge 0$, $i \in I_+$; $-x_1 + b_j \ge 0$, $j \in I_-$; $c_k \ge 0$, $k \in I_0$. If there is $k \in I_0$ with $c_k < 0$, then we may let $y_k = -1/c_k$ and $y_l = 0$ for all $l \ne k$. Otherwise, infeasibility means there are $i \in I_+$ and $j \in I_-$ with $a_i > b_j$. We let $y_i = y_j = 1/(a_i - b_j)$ and $0$ elsewhere, yielding $\sum_{i=1}^m y_i P_i(x) \equiv -1$.

Now suppose that the result is proven when the dimension of $x$ is at most $n$. Consider the case of $n + 1$ variables: $\bar x = (x; x_{n+1})$, where $x \in R^n$. Depending on the sign of the coefficient of $x_{n+1}$, we scale the original inequalities to get the following equivalent system: $\bar P_i(\bar x) = x_{n+1} - Q_i(x) \ge 0$, $i \in I_+$; $\bar P_j(\bar x) = -x_{n+1} + Q_j(x) \ge 0$, $j \in I_-$; $\bar P_k(\bar x) = Q_k(x) \ge 0$, $k \in I_0$. If the system $-Q_i(x) + Q_j(x) \ge 0$ for all $i \in I_+$ and $j \in I_-$; $Q_k(x) \ge 0$, $k \in I_0$, has a solution, then the original system has a solution too (take $x_{n+1}$ between $\max_{i \in I_+} Q_i(x)$ and $\min_{j \in I_-} Q_j(x)$).

By the assumption of the lemma, therefore, the reduced system cannot have a solution. Now, using the induction hypothesis, there are $y_{ij}, y_k \ge 0$ such that
$\sum_{i \in I_+,\, j \in I_-} y_{ij} \left( -Q_i(x) + Q_j(x) \right) + \sum_{k \in I_0} y_k Q_k(x) \equiv -1.$
Hence,
$\sum_{i \in I_+} \Big( \sum_{j \in I_-} y_{ij} \Big) \left( x_{n+1} - Q_i(x) \right) + \sum_{j \in I_-} \Big( \sum_{i \in I_+} y_{ij} \Big) \left( -x_{n+1} + Q_j(x) \right) + \sum_{k \in I_0} y_k Q_k(x) \equiv -1,$
since the $x_{n+1}$ terms cancel. The Farkas lemma is thus proven by this simple induction argument.

In fact, the strict separation theorem for convex polytopes follows from the Farkas lemma. The separation theorem in question is: if polytopes $P_1$ and $P_2$ do not intersect, then there is an affine linear function $f(x) = \langle y, x \rangle + y_0$ such that $f(x) \ge 1$ for $x \in P_1$ and $f(x) \le -1$ for $x \in P_2$. Let the extreme points of $P_1$ be $\{p_1, \dots, p_s\}$, and the extreme points of $P_2$ be $\{q_1, \dots, q_t\}$. Then $f(x) \ge 1$ for $x \in P_1$ and $f(x) \le -1$ for $x \in P_2$ if and only if $\langle y, p_i \rangle + y_0 \ge 1$ for $i = 1, \dots, s$, and $\langle y, q_j \rangle + y_0 \le -1$ for $j = 1, \dots, t$.

Let us now use a contradiction argument. Suppose the above system has no solution in $y$ and $y_0$. Then the Farkas lemma asserts that there are $u_i \ge 0$ ($i = 1, \dots, s$) and $v_j \ge 0$ ($j = 1, \dots, t$) such that
$\sum_{i=1}^s u_i \left( \langle y, p_i \rangle + y_0 - 1 \right) + \sum_{j=1}^t v_j \left( -\langle y, q_j \rangle - y_0 - 1 \right) \equiv -1.$
Matching the coefficients of $y$, of $y_0$, and the constant term, this implies
$\sum_{i=1}^s u_i p_i - \sum_{j=1}^t v_j q_j = 0, \quad \sum_{i=1}^s u_i - \sum_{j=1}^t v_j = 0, \quad \text{and} \quad \sum_{i=1}^s u_i + \sum_{j=1}^t v_j = 1.$
Therefore $\sum_{i=1}^s u_i = \sum_{j=1}^t v_j = 1/2$. Moreover,
$P_1 \ni 2 \sum_{i=1}^s u_i p_i = 2 \sum_{j=1}^t v_j q_j \in P_2,$
which contradicts the assumption that $P_1 \cap P_2 = \emptyset$. The same argument works if the polytopes are replaced by polyhedra.
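The proof is constructive in spirit: the separating function can be computed from the extreme points by solving the very feasibility system above. Here is an illustrative scipy sketch (the two triangles are made-up data):

import numpy as np
from scipy.optimize import linprog

P1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # extreme points p_i
P2 = np.array([[3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])    # extreme points q_j

# variables z = (y_1, y_2, y_0); constraints written as A_ub z <= b_ub:
#   -<y, p_i> - y_0 <= -1   and   <y, q_j> + y_0 <= -1
A_ub = np.vstack([np.hstack([-P1, -np.ones((len(P1), 1))]),
                  np.hstack([ P2,  np.ones((len(P2), 1))])])
b_ub = -np.ones(len(P1) + len(P2))
res = linprog(np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
y, y0 = res.x[:2], res.x[2]
print("f(p_i) =", P1 @ y + y0, " f(q_j) =", P2 @ y + y0)   # >= 1 and <= -1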

Let us return to conic optimization models. To study the geometric structure of the problem, introduce the linear subspace $L := \{x \mid Ax = 0\}$ and its orthogonal complement $L^\perp = \{A^T y \mid y \in R^m\}$. Let $a = A^T (A A^T)^{-1} b$, so that $a + L = \{x \mid Ax = b\}$. In this notation, the generic conic optimization problem can be written as

minimize $c^T x$ subject to $x \in (a + L) \cap K$. (1)
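As a quick numerical sanity check (a numpy sketch, not part of the lecture), $a$ is the minimum-norm solution of $Ax = b$, and adding any element of $L = \mathrm{null}(A)$ preserves feasibility:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))      # full row rank with probability 1
b = rng.standard_normal(3)

a = A.T @ np.linalg.solve(A @ A.T, b)
print(np.allclose(A @ a, b))         # a solves Ax = b

N = null_space(A)                    # columns span L
d = N @ rng.standard_normal(N.shape[1])
print(np.allclose(A @ (a + d), b))   # any shift within L stays feasible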

Geometrically, conic optimization is nothing but linear optimization over the intersection of an affine subspace $a + L$ and a given closed convex cone $K$. Such a problem can be feasible or infeasible; its optimal value can be unbounded or finite; and an attainable optimal solution may or may not exist. We shall see first that the situation for a generic conic optimization problem is more complicated than for linear programming.

Let us consider the case of infeasibility first. It is well known that for linear programming the status of being infeasible is rather stable, in the sense that if the data $(A, b)$ corresponds to an infeasible problem instance, then after a sufficiently small perturbation of the data the problem remains infeasible. Why? Consider the following equivalence due to the Farkas lemma:
$\{x \in R^n \mid Ax = b,\ x \in R^n_+\} = \emptyset \iff \{y \in R^m \mid A^T y \in R^n_+,\ b^T y < 0\} \ne \emptyset.$
Since the certificate $y$ satisfies a strict inequality, it remains a valid certificate under small perturbations of $(A, b)$.

For conic optimization, however, this no longer holds. Consider

Example 1
\[
\begin{array}{ll}
\text{minimize} & x_{22} \\
\text{subject to} & x_{11} = b_1,\ x_{12} = b_2, \\
& \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} \succeq 0.
\end{array}
\]
For $(b_1, b_2) = (0, 1)$ the problem is infeasible. However, for $(b_1, b_2) = (\epsilon, 1)$, where $\epsilon$ is any positive number, the problem is feasible.
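A hedged cvxpy sketch of Example 1 (assuming the cvxpy package with its default semidefinite solver; statuses and values are reported up to solver accuracy):

import cvxpy as cp

def solve_example1(b1, b2):
    # 2x2 PSD feasibility: minimize x22 s.t. x11 = b1, x12 = b2, X >= 0
    X = cp.Variable((2, 2), symmetric=True)
    prob = cp.Problem(cp.Minimize(X[1, 1]),
                      [X >> 0, X[0, 0] == b1, X[0, 1] == b2])
    prob.solve()
    return prob.status, prob.value

print(solve_example1(0.0, 1.0))    # infeasible: x11 = 0 and X PSD force x12 = 0
print(solve_example1(1e-6, 1.0))   # feasible; optimal x22 = b2^2 / b1 = 1e6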

Definition 1 We call the conic optimization problem:

weakly infeasible if $(a + L) \cap K = \emptyset$ but $\mathrm{dist}(a + L, K) = 0$;
strongly infeasible if $\mathrm{dist}(a + L, K) > 0$;
weakly feasible if $(a + L) \cap K \ne \emptyset$ but $(a + L) \cap \mathrm{int}\, K = \emptyset$;
strongly feasible if $(a + L) \cap \mathrm{int}\, K \ne \emptyset$.

Theorem 8 (Alternatives of strong infeasibility). $\{x \in R^n \mid Ax = b,\ x \in K\}$ is not strongly infeasible $\iff$ $\{y \in R^m \mid A^T y \in K^*,\ b^T y < 0\} = \emptyset$.

Proof. Note that for $s \in L^\perp$, i.e. $s = A^T y$ for some $y \in R^m$, we have $a^T s = a^T A^T y = b^T y$. Observe the following chain of equivalent statements:
$\{y \in R^m \mid A^T y \in K^*,\ b^T y < 0\} = \emptyset$
$\iff$ for all $s \in L^\perp \cap K^*$ it must follow that $a^T s \ge 0$
$\iff$ $a \in (L^\perp \cap K^*)^* = \mathrm{cl}\,(K + L)$
$\iff$ there are $x_i \in K$ and $\bar x_i \in L$ such that $a - (x_i + \bar x_i) = (a - \bar x_i) - x_i \to 0$
$\iff$ $(a + L) \cap K$ is not strongly infeasible.
The theorem is proven.

Definition 2 We call

$d_P$ a primal improving direction if $d_P \in L \cap K$ and $c^T d_P < 0$;
$\{d_P^i\}$ a primal improving direction sequence if $d_P^i \in K$, $\mathrm{dist}(d_P^i, L) \to 0$, and $\limsup_{i \to \infty} c^T d_P^i < 0$;
$d_D$ a dual improving direction if $d_D \in L^\perp \cap K^*$ and $a^T d_D < 0$;
$\{d_D^i\}$ a dual improving direction sequence if $d_D^i \in K^*$, $\mathrm{dist}(d_D^i, L^\perp) \to 0$, and $\limsup_{i \to \infty} a^T d_D^i < 0$.

If a problem is not strongly infeasible, then it is either feasible or weakly infeasible. The next question is: can we further characterize the feasibility of a conic problem? If so, then by exclusion we will also have characterized weak infeasibility.

Theorem 9 (Alternatives of primal feasibility). $\{x \in R^n \mid Ax = b,\ x \in K\}$ is infeasible if and only if there is a dual improving direction sequence.

Proof. Homogenize the problem. Let
$\bar L = \mathrm{span}\, H(\{x \mid Ax = b\}) = \left\{ \begin{pmatrix} x_0 \\ x \end{pmatrix} \;\middle|\; -x_0 b + Ax = 0 \right\}$
and $\bar K = R_+ \times K$. We have
$\bar L^\perp = \left\{ \begin{pmatrix} s_0 \\ s \end{pmatrix} \;\middle|\; y \in R^m,\ s_0 = -b^T y,\ s = A^T y \right\}$
and $\bar K^* = R_+ \times K^*$. Let $\bar c = \begin{pmatrix} -1 \\ 0_n \end{pmatrix}$.

Theorem 8 states that strong infeasibility of the primal problem is equivalent to the existence of a dual improving direction. Stated the dual way, Theorem 8 can also be written as
$(\bar c + \bar L^\perp) \cap \bar K^*$ is strongly infeasible $\iff$ there is $\bar x \in \bar L \cap \bar K$ with $\bar c^T \bar x < 0$. (2)
It is easy to see that
$\{x \mid Ax = b,\ x \in K\}$ is feasible
$\iff$ there is $\bar x \in \bar L \cap \bar K$ with $\bar c^T \bar x < 0$
$\iff$ (by (2)) $(\bar c + \bar L^\perp) \cap \bar K^*$ is strongly infeasible.

Therefore,
$\{x \mid Ax = b,\ x \in K\}$ is infeasible
$\iff$ $(\bar c + \bar L^\perp) \cap \bar K^*$ is not strongly infeasible
$\iff$ there are $s_0^i \ge 0$ and $s^i \in K^*$ such that $\mathrm{dist}\left( \begin{pmatrix} s_0^i \\ s^i \end{pmatrix},\ \bar c + \bar L^\perp \right) \to 0$
$\iff$ there are $s^i \in K^*$ with $\mathrm{dist}(s^i, L^\perp) \to 0$ and $\limsup_{i \to \infty} a^T s^i < 0$
$\iff$ there is a dual improving direction sequence.

As a consequence, by exclusion, this implies the following result:

Corollary 1 $\{x \in R^n \mid Ax = b,\ x \in K\}$ is weakly infeasible $\iff$ there is a dual improving direction sequence but no dual improving direction.

So far we have been concerned with characterizing infeasible conic programs. How about feasible ones? Recall that we have two types of feasibility: strong feasibility (also known as satisfying the Slater condition), and weak feasibility, which means that the problem is feasible but the Slater condition is not satisfied. It is helpful to derive an equivalent condition for the Slater condition using dual information.

Lemma 1 $(a + L) \cap \mathrm{int}\, K \ne \emptyset$ if and only if for any $d \ne 0$ with $d \in L^\perp \cap K^*$ it follows that $a^T d > 0$.

Proof. Note that $x \in \mathrm{int}\, K$ $\iff$ for all $0 \ne y \in K^*$ it follows that $x^T y > 0$.
($\Longrightarrow$): Let $x \in (a + L) \cap \mathrm{int}\, K$. In particular, write $x = a + z$ with $z \in L$. Take any $d \ne 0$ with $d \in L^\perp \cap K^*$: then $0 < x^T d = (a + z)^T d = a^T d$.
($\Longleftarrow$): If $(a + L) \cap \mathrm{int}\, K = \emptyset$, then $a + L$ and $\mathrm{int}\, K$ can be separated by a hyperplane. That is, there is $s \ne 0$ such that $s^T x \ge 0$ for all $x \in K$ and $s^T (a + y) \le 0$ for all $y \in L$. The last inequality means that $s^T a \le 0$ and $s^T y = 0$ for all $y \in L$. Thus, we have found a non-zero vector $s \in L^\perp \cap K^*$ with $a^T s \le 0$, which contradicts the assumption.

Theorem 10 If the primal conic program satisfies the Slater condition, and its dual is feasible, then the dual has a non-empty and compact optimal solution set.

Proof. Clearly, the objective value of the dual problem is bounded, by the weak duality theorem. The only possibility for the dual problem not to possess an optimal solution is that the optimum is not attained; i.e., there is an infinite sequence $s^i = c - A^T y^i \in (c + L^\perp) \cap K^*$ such that $a^T s^i = a^T c - b^T y^i \to a^T c - (\text{optimal value of the dual problem})$ and $\|s^i\| \to \infty$. Without loss of generality, suppose that $s^i / \|s^i\|$ converges to $d$. Then $d \ne 0$, $d \in L^\perp \cap K^*$, and $a^T d = 0$. This contradicts the primal Slater condition, in light of Lemma 1. Hence the optimal solution set must be non-empty and bounded.

Therefore, if a pair of primal-dual conic programs both satisfy the Slater condition, then attainable optimal solutions exist for both problems, according to Theorem 10. The only remaining issue in this regard is: what about the strong duality theorem? In other words, do these optimal solutions satisfy the complementary slackness condition? The answer is yes.

Theorem 11 If the primal conic program and its dual conic program both satisfy the Slater condition, then the optimal solution sets of both problems are non-empty and compact. Moreover, the optimal solutions are complementary to each other, with zero duality gap.

Proof. The first part of the theorem is simply Theorem 10, applied to the primal and to the dual separately. We now focus on the second part. Let $x^*$ and $s^*$ be optimal solutions for the primal and the dual conic optimization problems, respectively. Write the primal problem in the parametric form:

$(P(a))$ minimize $c^T x$ subject to $x \in a + L$, $x \in K$.

Let the optimal value of $(P(a))$, as a function of the vector $a$, be $v(a)$. Clearly $v(a)$ is a convex function on its domain $D$, the set of $a$ for which $(P(a))$ is feasible. Obviously, $D \supseteq K$. Also, we see that $v(0) = 0$, due to Theorem 8 and the feasibility of the dual.

By the Slater condition we know that $a \in \mathrm{int}\, D$. Let $g$ be a subgradient of $v$ at $a$; that is,
$v(z) \ge v(a) + g^T (z - a)$ for all $z \in D$. (3)
Observe that for any $d \in L$ we have $a + d + L = a + L$. Thus, $v(a + d) = v(a)$ for any $d \in L$. This implies $g \in L^\perp$. Take any $u \in K$. Since $x^* + u$ is a feasible solution for $(P(a + u))$, it follows that $v(a + u) \le c^T (x^* + u) = v(a) + c^T u$. Together with the subgradient inequality (3), this yields $(c - g)^T u \ge 0$ for any $u \in K$; hence $c - g \in K^*$. Thus, $c - g$ is a dual feasible solution, and so $a^T s^* \le a^T (c - g)$.

Taking $z = 0$, the subgradient inequality gives us
$0 = v(0) \ge v(a) + g^T (0 - a) = c^T x^* - g^T a$. (4)
Combining the previous inequality with (4), we have $a^T c - c^T x^* - a^T s^* \ge a^T c - g^T a - a^T s^* \ge 0$. Hence, since $x^* - a \in L$ and $s^* - c \in L^\perp$,
$0 = (x^* - a)^T (s^* - c) = (x^*)^T s^* - \left( c^T x^* + a^T s^* - a^T c \right),$
so $0 \le (x^*)^T s^* = c^T x^* + a^T s^* - a^T c \le 0$. Thus, $(x^*)^T s^* = 0$, and the complementary slackness condition is satisfied.

So far we have discussed the relationship between a primal-dual pair of conic optimization problems

$(P)$ minimize $c^T x$ subject to $Ax = b$, $x \in K$,

and

$(D)$ maximize $b^T y$ subject to $A^T y + s = c$, $s \in K^*$.

In case $K = K^* = R^n_+$ (the linear programming case), the following facts are well known:

If $(P)$ and $(D)$ are both feasible, then both problems are solvable, with attainable optimal solutions;
It is possible that $(P)$ is feasible and unbounded in its objective; in that case, $(D)$ is infeasible. Symmetrically, if $(D)$ is feasible and unbounded, then $(P)$ is infeasible;
It is possible that both $(P)$ and $(D)$ are infeasible.

This can be summarized by the following table, where the row is the status of the primal problem and the column represents the status of the dual problem (here "Feasible" means feasible with finite optimal value):

                Feasible             Unbounded     Infeasible
  Feasible      (P), (D) solvable    impossible    impossible
  Unbounded     impossible           impossible    possible
  Infeasible    impossible           possible      possible

For general conic programs, assuming $(D)$ is feasible, the relationship with $(P)$ is as follows:

  Primal status          Dual side
  strongly feasible      bounded optimal solution set (Theorem 10)
  weakly feasible        no improving direction sequence (Theorem 9)
  weakly infeasible      improving direction sequence but no improving direction (Corollary 1)
  strongly infeasible    improving direction (Theorem 8)

Example 2
\[
\begin{array}{ll}
\text{minimize} & x_{12} + x_{21} \\
\text{subject to} & x_{12} + x_{21} + x_{33} = 1,\ x_{22} = 0, \\
& \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \succeq 0,
\end{array}
\]
with the dual
\[
\begin{array}{ll}
\text{maximize} & y_1 \\
\text{subject to} & \begin{pmatrix} 0 & 1 - y_1 & 0 \\ 1 - y_1 & -y_2 & 0 \\ 0 & 0 & -y_1 \end{pmatrix} \succeq 0.
\end{array}
\]
The primal problem is weakly feasible, while the dual is weakly infeasible.

Example 3
\[
\begin{array}{ll}
\text{minimize} & x_{12} + x_{21} \\
\text{subject to} & x_{12} + x_{21} + x_{23} + x_{32} = 1,\ x_{22} = 0, \\
& \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \succeq 0,
\end{array}
\]
with the dual
\[
\begin{array}{ll}
\text{maximize} & y_1 \\
\text{subject to} & \begin{pmatrix} 0 & 1 - y_1 & 0 \\ 1 - y_1 & -y_2 & -y_1 \\ 0 & -y_1 & 0 \end{pmatrix} \succeq 0.
\end{array}
\]
Both the primal and the dual are weakly infeasible.

Example 4
\[
\begin{array}{ll}
\text{minimize} & x_{12} + x_{21} - x_{33} \\
\text{subject to} & x_{12} + x_{21} + x_{23} + x_{32} = 1,\ x_{22} = 0, \\
& \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \succeq 0,
\end{array}
\]
with the dual
\[
\begin{array}{ll}
\text{maximize} & y_1 \\
\text{subject to} & \begin{pmatrix} 0 & 1 - y_1 & 0 \\ 1 - y_1 & -y_2 & -y_1 \\ 0 & -y_1 & -1 \end{pmatrix} \succeq 0.
\end{array}
\]
In this case, the primal is weakly infeasible, while the dual is strongly infeasible.

Example 5
\[
\begin{array}{ll}
\text{minimize} & x_{12} + x_{21} - x_{33} \\
\text{subject to} & x_{12} + x_{21} + x_{23} + x_{32} = 1,\ x_{22} = -1, \\
& \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \succeq 0,
\end{array}
\]
with the dual
\[
\begin{array}{ll}
\text{maximize} & y_1 - y_2 \\
\text{subject to} & \begin{pmatrix} 0 & 1 - y_1 & 0 \\ 1 - y_1 & -y_2 & -y_1 \\ 0 & -y_1 & -1 \end{pmatrix} \succeq 0.
\end{array}
\]
In this case, both the primal and the dual are strongly infeasible.

Consider the standard conic optimization problems $(P)$ and $(D)$. Take any $x^0 \in \mathrm{int}\, K$, $s^0 \in \mathrm{int}\, K^*$, and $y^0 \in R^m$. Moreover, define $r_p = b - A x^0$, $r_d = s^0 - c + A^T y^0$, and $r_g = 1 + c^T x^0 - b^T y^0$. The following model is self-dual:

$(SD)$ min $\beta \theta$
s.t. $A x - b \tau + r_p \theta = 0$
 $-A^T y + c \tau + r_d \theta - s = 0$
 $b^T y - c^T x + r_g \theta - \kappa = 0$
 $-r_p^T y - r_d^T x - r_g \tau = -\beta$
 $x \in K$, $\tau \ge 0$, $s \in K^*$, $\kappa \ge 0$,

where $\beta = 1 + (x^0)^T s^0 > 1$, and the decision variables are $(y, x, \tau, \theta, s, \kappa)$.

The above self-dual model admits an interior feasible solution $(y, x, \tau, \theta, s, \kappa) = (y^0, x^0, 1, 1, s^0, 1)$.

Proposition 1 The problem $(SD)$ has a maximally complementary optimal solution, denoted by $(y^*, x^*, \tau^*, \theta^*, s^*, \kappa^*)$, such that $\theta^* = 0$ and $(x^*)^T s^* + \tau^* \kappa^* = 0$. Moreover, if $\tau^* > 0$, then $x^*/\tau^*$ is an optimal solution for the primal problem, and $(y^*/\tau^*, s^*/\tau^*)$ is an optimal solution for the dual problem. If $\kappa^* > 0$, then either $c^T x^* < 0$ or $b^T y^* > 0$; in the former case the dual problem is strongly infeasible, and in the latter case the primal problem is strongly infeasible.
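As a sanity check on the algebra (an illustrative numpy sketch for the LP case $K = K^* = R^n_+$, with made-up data), one can verify that $(y^0, x^0, 1, 1, s^0, 1)$ satisfies all four $(SD)$ equations:

import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
x0 = np.ones(n); s0 = np.ones(n); y0 = np.zeros(m)   # interior starting point

r_p = b - A @ x0
r_d = s0 - c + A.T @ y0
r_g = 1 + c @ x0 - b @ y0
beta = 1 + x0 @ s0

y, x, tau, theta, s, kappa = y0, x0, 1.0, 1.0, s0, 1.0
print(np.allclose(A @ x - b * tau + r_p * theta, 0))
print(np.allclose(-A.T @ y + c * tau + r_d * theta - s, 0))
print(np.isclose(b @ y - c @ x + r_g * theta - kappa, 0))
print(np.isclose(-r_p @ y - r_d @ x - r_g * tau, -beta))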

If strict complementarity fails ($\tau^* = \kappa^* = 0$), then one can only conclude that the original problems do not have complementary optimal solutions. In that case, one may wish to identify whether the problems are weakly feasible or weakly infeasible. However, weak infeasibility is slippery terrain: it cannot be certified by a finite proof. The type of certificate that one can expect is in the spirit of: if the problem is feasible at all, then any of its feasible solutions must have a very large norm.

For linear programming, the situation is particularly interesting. Remember we have shown before that if a primal-dual LP pair is strongly feasible, then the primal-dual central path solutions converge to a strictly complementary optimal solution. That is,
$A x(\mu) = b$
$A^T y(\mu) + s(\mu) = c$
$x_i(\mu) s_i(\mu) = \mu$, $i = 1, 2, \dots, n$,
and $x^* := \lim_{\mu \downarrow 0} x(\mu)$ and $s^* := \lim_{\mu \downarrow 0} s(\mu)$ are strictly complementary.
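For illustration, here is a minimal primal-dual path-following sketch for LP (a toy under stated assumptions, not the lecture's algorithm): the data is constructed so that the starting point is strictly feasible, Newton steps on the central path equations drive $\mu \to 0$, and the iterates exhibit strict complementarity:

import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 5
A = rng.standard_normal((m, n))
x = np.ones(n); s = np.ones(n); y = np.zeros(m)
b = A @ x; c = s + A.T @ y           # makes (x, y, s) strictly feasible

mu = 1.0
for _ in range(60):
    mu *= 0.8
    # Newton step for Ax = b, A^T y + s = c, x_i s_i = mu (residuals stay zero):
    r = mu - x * s
    d = x / s
    dy = np.linalg.solve(A @ (d[:, None] * A.T), -A @ (r / s))
    ds = -A.T @ dy
    dx = (r - x * ds) / s
    # damped step keeping x > 0 and s > 0
    alpha = min(1.0, 0.9995 / max(1e-12, np.max(-dx / x), np.max(-ds / s)))
    x += alpha * dx; y += alpha * dy; s += alpha * ds

print("products x_i s_i:", x * s)    # all near mu, i.e. near zero
print("min of x + s:", np.min(x + s))  # bounded away from zero: strict complementarity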

When we apply this to $(SD)$, we obtain
$A x - b \tau = 0$
$-A^T y + c \tau - s = 0$
$b^T y - c^T x - \kappa = 0$
with $x \ge 0$, $s \ge 0$, $\tau \ge 0$, $\kappa \ge 0$ and $x + s > 0$, $\tau + \kappa > 0$.

This constructively proves a well-known result of Goldman and Tucker:

Theorem 12 (Goldman and Tucker, 1956). Suppose that $M$ is skew-symmetric. Then the following system always has a solution:
$M x - s = 0$, $x \ge 0$, $s \ge 0$, $x + s > 0$.

In fact, the theorem of Goldman and Tucker can also be used to prove the previous result: they are equivalent.
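A hedged constructive check of Goldman-Tucker via scipy (the skew-symmetric $M$ is made-up data): for each index $i$, maximize $x_i + (Mx)_i$ over a bounded slice of the feasible cone, then sum the maximizers. The theorem guarantees each LP has a positive optimum, so the sum satisfies $x + Mx > 0$ componentwise:

import numpy as np
from scipy.optimize import linprog

M = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 2.0],
              [1.0, -2.0, 0.0]])    # skew-symmetric
n = M.shape[0]

x_sum = np.zeros(n)
for i in range(n):
    obj = np.zeros(n)
    obj[i] = 1.0
    obj += M[i]                     # maximize x_i + (Mx)_i, i.e. minimize -(...)
    res = linprog(-obj, A_ub=-M, b_ub=np.zeros(n), bounds=[(0, 1)] * n)
    x_sum += res.x                  # sums of feasible points stay feasible (cone)

print("x =", x_sum, " x + Mx =", x_sum + M @ x_sum)   # strictly positive entries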

Key References:

T. Tao, Structure and Randomness, American Mathematical Society, 2008.

Z.Q. Luo, J.F. Sturm, and S. Zhang, Conic Convex Programming and Self-Dual Embedding, Optimization Methods and Software, 14, 169-218, 2000.

J.F. Sturm, Theory and Algorithms for Semidefinite Programming. In: H. Frenk, K. Roos, T. Terlaky, and S. Zhang (eds.), High Performance Optimization, Kluwer Academic Publishers, 1999.

Y. Ye, M.J. Todd, and S. Mizuno, An $O(\sqrt{n} L)$-iteration Homogeneous and Self-Dual Linear Programming Algorithm, Mathematics of Operations Research, 19, 53-67, 1994.