Nondegeneracy of Polyhedra and Linear Programs

Computational Optimization and Applications 7 (1997). © 1997 Kluwer Academic Publishers. Manufactured in The Netherlands.

YANHUI WANG AND RENATO D.C. MONTEIRO
School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta

Received December 23, 1994; Revised September 25, 1995

Abstract. This paper deals with nondegeneracy of polyhedra and linear programming (LP) problems. We allow for the possibility that the polyhedra and the feasible polyhedra of the LP problems under consideration be non-pointed. (A polyhedron is pointed if it has a vertex.) With respect to a given polyhedron, we consider two notions of nondegeneracy and then provide several equivalent characterizations for each of them. With respect to LP problems, we study the notion of constant cost nondegeneracy first introduced by Tsuchiya [25] under a different name, namely dual nondegeneracy. (We do not follow this terminology since the term dual nondegeneracy is already used to refer to a related but different type of nondegeneracy.) We show two main results about constant cost nondegeneracy of an LP problem. The first one shows that constant cost nondegeneracy of an LP problem is equivalent to the condition that the union of all minimal faces of the feasible polyhedron be equal to the set of feasible points satisfying a certain generalized strict complementarity condition. When the feasible polyhedron of an LP is nondegenerate, the second result shows that constant cost nondegeneracy is equivalent to the condition that the set of feasible points satisfying the generalized complementarity condition be equal to the set of feasible points satisfying the same complementarity condition strictly. For the purpose of giving a preview of the paper, the above results specialized to the context of polyhedra and LP problems in standard form are described in the introduction.

Keywords: linear programming, polyhedron, nondegeneracy, constant cost face, complementary slackness

1. Introduction

This paper deals with the subject of nondegeneracy of polyhedra and linear programming (LP) problems. Nondegeneracy is a subject worthy of intensive investigation due to its applications in several branches of mathematical programming, and it has already been studied in several papers in the literature. These include papers dealing with cycling and termination of the simplex method and with the study of sensitivity and parametric analysis (Adler and Monteiro [1], Akgül [2], Aucamp and Steinberg [3], Beale [5], Bland [6], Charnes [7], Dantzig [8], Gal [10, 11], Greenberg [12], Hoffman [15], Magnanti and Orlin [16], Megiddo [17], Monteiro and Mehrotra [18], Ward and Wendell [29], Williams [30], Wolfe [31]), papers dealing with the convergence of the affine scaling interior point algorithm (Barnes [4], Dikin [9], Hall and Vanderbei [14], Monteiro and Tsuchiya [19], Monteiro et al. [20], Tsuchiya [24-26], Vanderbei et al. [28], Vanderbei and Lagarias [27]), etc. The paper by Güler et al. [13] surveys the theoretical and practical issues related to degeneracy in the context of interior point methods for linear programming.

The work of these authors was based on research supported by the National Science Foundation under grant DMI and the Office of Naval Research under grants N and N.

Recall that the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$, where $x, c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$ and $A$ is an $m \times n$ matrix, is said to be primal nondegenerate if every feasible point $x$ has at least $m$ positive components, and strongly primal nondegenerate if every $x \in \mathbb{R}^n$ satisfying $Ax = b$ has at least $m$ nonzero components (see Murty [21], page 121). These two concepts depend on $A$ and $b$ only, and hence only on the feasible polyhedron of the LP problem. The above LP problem is said to be dual nondegenerate if $s(y) = c - A^T y$ has at least $n - m$ nonzero components for every dual feasible solution $y \in \mathbb{R}^m$, and strongly dual nondegenerate (see Murty [21], page 253) if the same property holds for every $y \in \mathbb{R}^m$. Note also that the two types of dual nondegeneracy depend only on $A$ and $c$, and hence only on the dual feasible polyhedron. Hence, it is natural to think of the above notions of nondegeneracy as being concepts associated with polyhedra.

In the first part of the paper (Section 3), we study the concept of nondegeneracy of a general (not necessarily pointed) polyhedron. (A polyhedron is said to be pointed if it contains a vertex.) Two notions of nondegeneracy (corresponding to the polyhedron being nondegenerate and/or strongly nondegenerate) are defined and then several equivalent conditions for each type of nondegeneracy are given. Most of the results derived in Section 3 are well known in the context of pointed polyhedra, but are scattered throughout the literature. Our goal here is to provide a unified treatment of this subject and to generalize it to the context of not necessarily pointed polyhedra. The results of Section 3 are not only interesting in their own right but are also needed for a full understanding of the subject of Section 4.

In the second part of the paper (Section 4), we discuss the concept of constant cost nondegeneracy (or simply, CC-nondegeneracy) of an LP problem whose feasible region is allowed to be a non-pointed polyhedron. Tsuchiya [25] refers to this concept as dual nondegeneracy, a term which is not appropriate since it is already used to refer to a different but related concept. Consider the LP problem optimize $\{b^T y : y \in P\}$, where $P \subseteq \mathbb{R}^m$ is a (not necessarily pointed) polyhedron. A nonempty face $F$ of $P$ is said to be a constant cost face of the LP problem optimize $\{b^T y : y \in P\}$ if $b^T y$ is constant over $F$. When the reference to the LP problem is understood, we simply say that $F$ is a constant cost face. The LP problem optimize $\{b^T y : y \in P\}$ is said to be CC-nondegenerate if every constant cost face is a minimal face of $P$. (A nonempty face is said to be minimal if it does not properly contain any other nonempty face.) One of the main results of Section 4, namely Theorem 4.4, states that CC-nondegeneracy of the LP problem optimize $\{b^T y : y \in P\}$ is equivalent to the condition that the union of all minimal faces of $P$ be equal to the set of feasible points satisfying a certain generalized strict complementarity condition. Moreover, when the polyhedron $P$ is nondegenerate, we show that CC-nondegeneracy is equivalent to the following condition: every point satisfying the generalized complementarity condition must also satisfy it strictly (see Theorem 4.6).

We give below a preview of the main results of the paper when specialized to the context of polyhedra and LP problems in standard form. First, we introduce the following notation. We assume for the remaining part of this section that $A$ is an $m \times n$ matrix and $b$ is an $m$-vector. Given $x \in \mathbb{R}^n$, we let $\sigma(x) \equiv \{i : x_i \ne 0\}$. If $\alpha \subseteq \{1,\dots,m\}$ and $\beta \subseteq \{1,\dots,n\}$, we let $A_{\alpha\beta}$ denote the submatrix $[A_{ij}]_{i \in \alpha,\, j \in \beta}$. If $\alpha = \{1,\dots,m\}$ we denote $A_{\alpha\beta}$ simply by $A_\beta$, and if $\beta = \{1,\dots,n\}$ we denote $A_{\alpha\beta}$ by $A_{\alpha\cdot}$, or simply, $A_\alpha$. Given a vector $x \in \mathbb{R}^p$ and an index set $\alpha \subseteq \{1,\dots,p\}$, we denote the subvector $[x_i]_{i \in \alpha}$ by $x_\alpha$ and the vector $[\,|x_i|\,]_{i=1}^p$ by $|x|$. Given two vectors $x \in \mathbb{R}^p$ and $s \in \mathbb{R}^p$, we denote the vector $[x_i s_i]_{i=1}^p$ by $x \circ s$. If $\alpha$ is a finite set then $|\alpha|$ denotes its cardinality. The Euclidean norm is denoted by $\|\cdot\|$.

The polyhedron $\{x : Ax = b,\ x \ge 0\}$ is said to be nondegenerate if it satisfies any one of the equivalent conditions of the result below.

Theorem 1.1. The following statements are equivalent:
(a) for any $x \in \{x : Ax = b,\ x \ge 0\}$, $|\sigma(x)| \ge m$ (that is, $x$ has at least $m$ positive components);
(b) for any $x \in \{x : Ax = b,\ x \ge 0\}$, the rows of $A_{\sigma(x)}$ are linearly independent (that is, the submatrix of $A$ consisting of the columns corresponding to the positive components of $x$ has full row rank);
(c) for any vertex $x \in \{x : Ax = b,\ x \ge 0\}$, $|\sigma(x)| = m$ (that is, every vertex has exactly $m$ positive components);
(d) for any vertex $x \in \{x : Ax = b,\ x \ge 0\}$, the submatrix $A_{\sigma(x)}$ is nonsingular (that is, the submatrix of $A$ consisting of the columns corresponding to the positive components of $x$ is a basis of $A$);
(e) for any $c \in \mathbb{R}^n$ and any $x \in \{x : Ax = b,\ x \ge 0\}$, the set $\arg\min\{\|x \circ s\| : A^T y + s = c,\ (y,s) \in \mathbb{R}^m \times \mathbb{R}^n\}$ contains exactly one point;
(f) for any $c \in \mathbb{R}^n$ and any constant cost face $F$ of optimize $\{c^T x : Ax = b,\ x \ge 0\}$, the set $\{(y,s) : A^T y + s = c$ and $s \circ x = 0$ for some $x \in F\}$ contains exactly one point;
(g) for any $c \in \mathbb{R}^n$, if the LP problem $\max\{b^T y : A^T y \le c\}$ has an optimal solution then it has a unique optimal solution.

The polyhedron $\{x : Ax = b,\ x \ge 0\}$ is said to be strongly nondegenerate if it satisfies any one of the equivalent conditions of the result below.

Theorem 1.2. The following statements are equivalent:
(a) for any $x \in \mathbb{R}^n$ such that $Ax = b$, $|\sigma(x)| \ge m$;
(b) for any $x \in \mathbb{R}^n$ such that $Ax = b$, the rows of $A_{\sigma(x)}$ are linearly independent;
(c) for any $x \in \mathbb{R}^n$ such that $Ax = b$ and $\mathrm{rank}(A_{\sigma(x)}) = m$, $|\sigma(x)| = m$;
(d) for any $x \in \mathbb{R}^n$ such that $Ax = b$ and $\mathrm{rank}(A_{\sigma(x)}) = m$, the submatrix $A_{\sigma(x)}$ is nonsingular;
(e) for any $c \in \mathbb{R}^n$ and any $x \in \mathbb{R}^n$ satisfying $Ax = b$, the set $\arg\min\{\|x \circ s\| : A^T y + s = c,\ (y,s) \in \mathbb{R}^m \times \mathbb{R}^n\}$ contains exactly one point;
(f) for any $c \in \mathbb{R}^n$ and any set of the form $\mathcal{A} \equiv \{x \in \mathbb{R}^n : A_\sigma x_\sigma = b,\ x_{\sigma^c} = 0\}$, where $\sigma \subseteq \{1,\dots,n\}$ and $c^T x$ is constant on $\mathcal{A}$, the set $\{(y,s) \in \mathbb{R}^{m+n} : A^T y + s = c$ and $s \circ x = 0$ for some $x \in \mathcal{A}\}$ contains exactly one point;
(g) for all $c \in \mathbb{R}^n$, every constant cost face of the LP problem optimize $\{b^T y : A^T y \le c\}$ is a vertex.

Note that in the results above we have not assumed that the polyhedron $\{x : Ax = b,\ x \ge 0\}$ is nonempty.
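Conditions (a) and (b) of Theorem 1.1 are easy to test numerically at a given feasible point. The following Python sketch (the helper name, example data and tolerance are ours, not from the paper) checks both conditions; on the degenerate instance shown, both fail at the given vertex.

```python
import numpy as np

def check_theorem_1_1_at(A, x, tol=1e-9):
    """Check conditions (a) and (b) of Theorem 1.1 at a feasible point x of
    {x : Ax = b, x >= 0}.  (Hypothetical helper; the tolerance is ours.)

    (a): |sigma(x)| >= m, i.e. x has at least m positive components;
    (b): the rows of A_{sigma(x)} are linearly independent, i.e. the columns
         of A indexed by sigma(x) have full row rank m.
    """
    m, n = A.shape
    sigma = np.flatnonzero(x > tol)                      # sigma(x)
    cond_a = sigma.size >= m
    cond_b = (np.linalg.matrix_rank(A[:, sigma]) == m) if sigma.size else (m == 0)
    return cond_a, cond_b

# A degenerate instance: the vertex x has only one positive component although m = 2.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
x = np.array([1.0, 0.0, 0.0])                            # feasible for b = (1, 1)
print(check_theorem_1_1_at(A, x))                        # -> (False, False)
```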

Regarding CC-nondegeneracy of an LP problem, we have the following two main results for standard form LP problems. In the following two results, $c$ denotes an $n$-vector.

Theorem 1.3. The LP problem optimize $\{c^T x : x \in P\}$, where $P \equiv \{x : Ax = b,\ x \ge 0\}$, is CC-nondegenerate if and only if the set of all vertices of $P$ is equal to the set
$$\{x \in P : \exists (y,s) \mbox{ such that } A^T y + s = c,\ x \circ s = 0,\ \mbox{and } x + |s| > 0\}.$$

Theorem 1.4. Assume that $P \equiv \{x : Ax = b,\ x \ge 0\}$ is nondegenerate. Then, the LP problem optimize $\{c^T x : x \in P\}$ is CC-nondegenerate if and only if the two sets
$$\{x \in P : \exists (y,s) \mbox{ such that } A^T y + s = c,\ x \circ s = 0\}$$
and
$$\{x \in P : \exists (y,s) \mbox{ such that } A^T y + s = c,\ x \circ s = 0,\ \mbox{and } x + |s| > 0\}$$
are equal.

We end this introduction by pointing out the relationship between CC-nondegeneracy of an LP problem in standard form and the strong dual nondegeneracy defined above. For the sake of future reference, we repeat the definition below.

Definition 1. The LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is said to be strongly dual nondegenerate if for every $(y,s) \in \mathbb{R}^m \times \mathbb{R}^n$ such that $A^T y + s = c$, the vector $s$ has at least $n - m$ nonzero components.

Using a more general version of Theorem 1.2, namely Theorem 3.9, the following equivalence can be proved under the assumption that $\mathrm{rank}(A) = m$: the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is strongly dual nondegenerate if and only if the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is CC-nondegenerate for every $b \in \mathbb{R}^m$ (see Corollary 3.10).

2. Notation and terminology

In this section, we introduce some additional notation which will be used in the remaining part of the paper. If $M$ is a matrix then $\mathrm{Null}(M)$ denotes the null space of $M$ and $\mathrm{Range}(M)$ denotes the range space of $M$. In our study of nondegeneracy of polyhedra and LP problems, we consider the following polyhedron
$$P \equiv \{y \in \mathbb{R}^m : H_I y \le c_I,\ H_E y = c_E\}, \qquad (1)$$
where $H \in \mathbb{R}^{n \times m}$, $c \in \mathbb{R}^n$ and $I = \{1,\dots,n_1\}$, $E = \{n_1+1,\dots,n\}$.
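To make the set appearing in Theorems 1.3 and 1.4 concrete, note that deciding whether a feasible $x$ admits a pair $(y,s)$ with $A^Ty + s = c$, $x \circ s = 0$ and $x + |s| > 0$ reduces to linear algebra: complementarity pins $A_i^Ty = c_i$ on $\sigma(x)$, and the strictness asks for some such $y$ with $A_i^Ty \ne c_i$ off $\sigma(x)$. The sketch below is our own reduction of that test (hypothetical helper name and tolerance; the reduction is our reading of the generalized strict complementarity condition, not a routine from the paper).

```python
import numpy as np

def admits_strictly_complementary_pair(A, c, x, tol=1e-9):
    """Test whether a feasible x of {Ax = b, x >= 0} lies in the set of Theorem 1.3.

    Such a y exists iff the affine set Y = {y : A_sigma^T y = c_sigma} is nonempty
    and is not contained in any single hyperplane {y : A_i^T y = c_i}, i not in sigma.
    (A finite union of proper affine subsets cannot cover Y.)
    """
    m, n = A.shape
    sigma = np.flatnonzero(x > tol)
    if sigma.size == 0:
        # Y = R^m; only a constraint with A_i = 0 and c_i = 0 is forced to equality.
        return all(np.linalg.norm(A[:, i]) > tol or abs(c[i]) > tol for i in range(n))
    M = np.hstack([A[:, sigma].T, c[sigma][:, None]])     # augmented system [A_sigma^T | c_sigma]
    base_rank = np.linalg.matrix_rank(M)
    if base_rank != np.linalg.matrix_rank(A[:, sigma].T):
        return False                                      # A_sigma^T y = c_sigma has no solution
    for i in range(n):
        if i in sigma:
            continue
        row = np.r_[A[:, i], c[i]][None, :]
        if np.linalg.matrix_rank(np.vstack([M, row])) == base_rank:
            return False                                  # s_i = 0 holds for every admissible y
    return True
```

Under this reading, Theorem 1.3 says that for a CC-nondegenerate standard form problem the test succeeds exactly at the vertices of the feasible polyhedron.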

The polyhedron $P$ is said to be pointed if it contains a vertex. Throughout the remainder of the paper, we let $l$ denote the dimension of the lineality space $\mathrm{Null}(H)$ of $P$. It is well known that $P$ is pointed if and only if $l = 0$. Given $y \in \mathbb{R}^m$ and a set $\Omega \subseteq \mathbb{R}^m$, we let $\nu(y) \equiv \{i \in I : H_i y = c_i\}$ and $\nu(\Omega) \equiv \{i \in I : H_i y = c_i \ \forall\, y \in \Omega\}$. For a face $F$ of $P$, $\mathrm{ri}(F)$ denotes its relative interior. For any $y \in \mathbb{R}^m$, we denote the corresponding slack vector by $s(y) \equiv c - Hy$. When the variable $y$ is understood, we denote $s(y)$ simply by $s$. Also, we denote $s(\hat y)$ by $\hat s$, $s(\bar y)$ by $\bar s$, etc. For $y \in P$, $F(y)$ denotes the smallest face of $P$ containing $y$. Finally, for any set $\alpha \subseteq I$, $\alpha^c$ denotes the set $I \setminus \alpha$.

For the purpose of future reference, we now make the following simple observations. Given $y \in P$ and two faces $F$ and $F'$ of $P$, we have:
$$\nu(y) = \nu(F(y)); \qquad (2)$$
$$y \in F \iff F(y) \subseteq F; \qquad (3)$$
$$y \in \mathrm{ri}\,F \iff F(y) = F; \qquad (4)$$
$$F \subseteq F' \iff \nu(F) \supseteq \nu(F'). \qquad (5)$$
Given a nonempty face $F$ of $P$, there always exists a $y \in F$ such that
$$s_{\nu(F)} = 0, \qquad s_{\nu^c(F)} > 0. \qquad (6)$$

3. Nondegeneracy of a polyhedron

In this section, we discuss two notions of nondegeneracy of a (not necessarily pointed) polyhedron. We then provide several equivalent conditions for these two types of nondegeneracy. The results of this section are not only interesting in their own right but are also needed for the discussion of Section 4. Most of the results derived in this section, when specialized to the context of pointed polyhedra, are well known in the research community but are scattered throughout the literature. Hence, one of our goals is to provide a unified treatment of this subject.

In what follows, $P$ denotes the polyhedron defined in (1). A nonempty face $F$ of $P$ is called a minimal face if it does not have any nonempty face properly contained in it. The following result gives the main properties of minimal faces that are used in our presentation. For its proof, we refer the reader to Chapter 8 of Schrijver [23].

Proposition 3.1. Let $l$ denote the dimension of the lineality space of $P$ (hence, $\mathrm{rank}(H) = m - l$) and let $F$ be a nonempty face of $P$. Then, $F$ is a minimal face of $P$ if and only if $\mathrm{rank}(H_{\nu(F) \cup E}) = m - l$, in which case $|\nu(F) \cup E| \ge m - l$.

Lemma 3.2. Given any $y \in P$, there exists a minimal face $F$ of $P$ such that $\nu(y) \subseteq \nu(F)$.

Proof: Let $y \in P$ be given. We claim that if $F(y)$ is not a minimal face then there exists $\hat y \in P$ such that $\nu(\hat y)$ properly contains $\nu(y)$. It is easy to see that, using this claim a finite number of times, we can construct a point $\bar y$ such that $F \equiv F(\bar y)$ is a minimal face and $\nu(F) = \nu(\bar y) \supseteq \nu(y)$, thereby showing that the lemma holds. To show the claim, assume that $F(y)$ is not a minimal face. It follows from Proposition 3.1 and (2) that $\mathrm{rank}(H_{\nu(y) \cup E}) < \mathrm{rank}(H) = m - l$, or equivalently, that $\mathrm{Null}(H_{\nu(y) \cup E})$ properly contains $\mathrm{Null}(H)$. Then, there exists $d \in \mathrm{Null}(H_{\nu(y) \cup E})$ such that $H_r d \ne 0$ for some $r \in \nu^c(y)$. By multiplying $d$ by $-1$ if necessary, we may assume without loss of generality that $H_r d > 0$. Let $\lambda \equiv \min\{(c_i - H_i y)/(H_i d) : i$ such that $H_i d > 0\}$ and let $\hat y = y + \lambda d$. It is now easy to see that $\nu(\hat y) \supseteq \nu(y) \cup \{r\}$, from which the claim follows.

Given $b, \bar y \in \mathbb{R}^m$, define the following sets
$$E_b(\bar y) \equiv \arg\min\{\|\bar s_I \circ x_I\| : H_I^T x_I + H_E^T x_E = b,\ x \in \mathbb{R}^n\},$$
$$X_b(\bar y) \equiv \{x \in \mathbb{R}^n : H_I^T x_I + H_E^T x_E = b,\ \bar s_I \circ x_I = 0\}, \qquad (7)$$
where $\bar s = s(\bar y)$. Observe that $E_b(\bar y) \ne \emptyset$ whenever $b \in \mathrm{Range}([H_I^T\ H_E^T])$ and that $E_b(\bar y) = X_b(\bar y)$ whenever $X_b(\bar y) \ne \emptyset$. The proof of the following lemma is left to the reader.

Lemma 3.3. Let $b \in \mathbb{R}^m$ and $\bar y \in P$ be given. Then the following statements are equivalent: (a) $F(\bar y)$ is a constant cost face of the LP problem optimize $\{b^T y : y \in P\}$; (b) $b \in \mathrm{Range}([H_{\nu(\bar y)}^T\ H_E^T])$; (c) $X_b(\bar y) \ne \emptyset$.

Lemma 3.4. Let $b \in \mathbb{R}^m$, a face $F$ of $P$ and $\bar y \in \mathrm{ri}(F)$ be given. Then $X_b(\bar y) \subseteq X_b(y)$ for every $y \in F$.

Proof: We have $\nu(\bar y) \subseteq \nu(y)$ for every $y \in F$. Hence the implication $\bar s_I \circ x_I = 0 \Rightarrow s_I \circ x_I = 0$ holds for every $x \in \mathbb{R}^n$, where $\bar s = s(\bar y)$ and $s = s(y)$. This clearly implies the lemma.

The following theorem, which is the first main result of this section, gives several equivalent characterizations of the notion of nondegeneracy of a polyhedron. The first four conditions are primal-type characterizations with appealing geometric meanings while the other three conditions are dual-type characterizations.

Theorem 3.5. The following statements are equivalent:
(a) for any $y \in P$, $|\nu(y) \cup E| \le m - l$ (there are at most $m - l$ active hyperplanes at any feasible point);
(b) for any $y \in P$, the set $\{H_i : i \in \nu(y) \cup E\}$ is linearly independent (the normal vectors to the active hyperplanes at any feasible point are linearly independent);
(c) for any minimal face $F$ of $P$, $|\nu(F) \cup E| = m - l$ (there are exactly $m - l$ hyperplanes containing a minimal face);
(d) for any minimal face $F$ of $P$, the set $\{H_i : i \in \nu(F) \cup E\}$ is linearly independent (the normal vectors to the hyperplanes containing a minimal face are linearly independent);
(e) for any $b \in \mathrm{Range}([H_I^T\ H_E^T])$ and any $\bar y \in P$, $E_b(\bar y)$ contains exactly one point;
(f) for any $b \in \mathbb{R}^m$ and any constant cost face $F$ of the LP problem optimize $\{b^T y : y \in P\}$, the set $\bigcup\{X_b(y) : y \in F\}$ contains exactly one point;
(g) for any $b \in \mathbb{R}^m$, if the linear program
$$\mbox{minimize } c_I^T x_I + c_E^T x_E \quad \mbox{subject to } H_I^T x_I + H_E^T x_E = b,\ x_I \ge 0,\ x_E \ \mbox{unrestricted}, \qquad (8)$$
has an optimal solution, then it has a unique optimal solution.

Proof: (a) $\iff$ (c): The forward implication follows from Proposition 3.1 and the reverse implication follows from Lemma 3.2. (c) $\iff$ (d): Follows immediately from Proposition 3.1. (b) $\iff$ (d): The forward implication is trivial. The reverse implication follows from Lemma 3.2.

(b) $\Rightarrow$ (e): Assume (b) holds and let $b \in \mathrm{Range}([H_I^T\ H_E^T])$ and $\bar y \in P$ be given. Define $(B, N) \equiv (\nu(\bar y), \nu^c(\bar y))$. Since $\bar s_B = 0$, we have $\bar s_I \circ x_I = \bar s_N \circ x_N$. Hence, due to (7), if $x \in E_b(\bar y)$ then $x_N$ is an optimal solution of
$$\mbox{minimize}\ \{\|\bar s_N \circ x_N\|^2 : H_N^T x_N \in b - \mathrm{Range}(H_{B \cup E}^T)\}. \qquad (9)$$
Since $\bar s_N > 0$, (9) is a strictly convex quadratic program. Hence, $x_N$ is uniquely determined. Moreover, by (b), the columns of $H_{B \cup E}^T$ are linearly independent. These two observations, together with the fact that $H_{B \cup E}^T x_{B \cup E} + H_N^T x_N = b$, imply that $x_{B \cup E}$ is also uniquely determined.

(e) $\Rightarrow$ (f): Let $b \in \mathbb{R}^m$, a constant cost face $F$ of optimize $\{b^T y : y \in P\}$ and $y \in F$ be given. By Lemma 3.3, it follows that $X_b(y) \ne \emptyset$. This implies that $E_b(y) = X_b(y)$, and hence, by (e), it follows that $X_b(y)$ contains exactly one point for every $y \in F$. This together with Lemma 3.4 implies that $\bigcup\{X_b(y) : y \in F\}$ contains exactly one point.

(f) $\Rightarrow$ (g): Assume (f) holds and let $b \in \mathbb{R}^m$ be such that (8) has an optimal solution. Let $D$ denote the optimal face of (8). It follows from the duality theorem that the dual of (8), namely the problem maximize $\{b^T y : y \in P\}$, has a nonempty optimal face $F$. By the complementary slackness theorem, we know that $D \subseteq X_b(y)$ for any $y \in F$. This fact, (f) and the fact that $F$ is obviously a constant cost face of maximize $\{b^T y : y \in P\}$ imply that $D$ contains exactly one point.

(g) $\Rightarrow$ (b): Let $\bar y \in P$ be given. We will show that the set $\{H_i : i \in \nu(\bar y) \cup E\}$ is linearly independent. Indeed, let $\bar b \equiv H_{\nu(\bar y)}^T \mathbf{1} + H_E^T \mathbf{1}$. Clearly, $\bar x = (\bar x_{\nu(\bar y)}, \bar x_{\nu^c(\bar y)}, \bar x_E) \equiv (\mathbf{1}, 0, \mathbf{1})$ is a feasible solution of (8) with $b = \bar b$ which, together with $\bar y$, satisfies the strict complementarity condition. Hence, it follows that the optimal face $D$ of problem (8) with $b = \bar b$ is given by $D = \{x \in \mathbb{R}^n : H_{\nu(\bar y)}^T x_{\nu(\bar y)} + H_E^T x_E = \bar b,\ x_{\nu(\bar y)} \ge 0,\ x_{\nu^c(\bar y)} = 0\}$. Since $\bar x \in D$ and $\bar x$ satisfies $\bar x_{\nu(\bar y)} > 0$, it follows that the dimension of $D$ is equal to the dimension of $\mathrm{Null}([H_{\nu(\bar y)}^T\ H_E^T])$. Since (g) holds by assumption, it follows that (8) with $b = \bar b$ has a unique optimal solution, and hence its optimal face has dimension zero. Thus, we conclude that the dimension of $\mathrm{Null}([H_{\nu(\bar y)}^T\ H_E^T])$ is equal to zero, or equivalently, the set $\{H_i : i \in \nu(\bar y) \cup E\}$ is linearly independent.
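The primal-type conditions (a) and (b) of Theorem 3.5 can be tested at a given point of the general polyhedron (1) just as in the standard form case: only the active set $\nu(y)$ and a rank computation are needed. The following Python sketch is our own helper (names and tolerance are assumptions, not from the paper), with $H$ stored as an $n \times m$ array as in (1).

```python
import numpy as np

def check_theorem_3_5_at(H, c, E_idx, y, tol=1e-8):
    """Check conditions (a) and (b) of Theorem 3.5 at a point y of P in (1).
    H is n x m, c has length n, E_idx lists the equality rows (the set E)."""
    n, m = H.shape
    l = m - np.linalg.matrix_rank(H)                     # l = dim Null(H), the lineality dimension
    E_idx = list(E_idx)
    I_idx = [i for i in range(n) if i not in set(E_idx)]
    s = c - H @ y                                        # slack vector s(y) = c - Hy
    nu = [i for i in I_idx if abs(s[i]) <= tol]          # active inequalities nu(y)
    rows = H[nu + E_idx, :]
    cond_a = rows.shape[0] <= m - l                      # |nu(y) ∪ E| <= m - l
    cond_b = (np.linalg.matrix_rank(rows) == rows.shape[0]) if rows.size else True
    return cond_a, cond_b
```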

In view of Theorem 3.5, we introduce the following definition.

Definition 2. $P$ is said to be nondegenerate if any one of the equivalent conditions of Theorem 3.5 holds.

We remark that nondegeneracy of a polyhedron is a notion that depends not only on the polyhedron itself but also on its representation as a system of linear equalities and inequalities. We can also define a stronger notion of nondegeneracy as follows.

Definition 3. $P$ is said to be strongly nondegenerate if $|\nu(y) \cup E| \le m - l$ for every $y \in \{y \in \mathbb{R}^m : H_E y = c_E\}$.

Note that strong nondegeneracy is a condition that depends on $H$, $c$ and the index set $E$ only. More specifically, if $P$ is strongly nondegenerate then any other polyhedron of the form $\{y \in \mathbb{R}^m : H_E y = c_E,\ H_I y \mathbin{?} c_I\}$ is strongly nondegenerate, where $?$ denotes a vector of $\le$ and $\ge$ symbols. Similar to the concept of nondegeneracy, there are several equivalent ways to express the strong nondegeneracy of a polyhedron. In what follows, we discuss this issue. For the purpose of stating the next result, we need to introduce the following set:
$$M \equiv \{y \in \mathbb{R}^m : H_E y = c_E,\ \mathrm{rank}(H_{\nu(y) \cup E}) = m - l\}.$$
Since, by Proposition 3.1,
$$\bigcup\{F : F \mbox{ is a minimal face of } P\} = \{y \in P : \mathrm{rank}(H_{\nu(y) \cup E}) = m - l\},$$
it follows that $M \cap P = \bigcup\{F : F \mbox{ is a minimal face of } P\}$. Hence, the set $M$ is a natural extension of the set $\bigcup\{F : F \mbox{ is a minimal face of } P\}$. A nonempty set $\mathcal{A}$ is said to be an affine set associated with $P$ if $\mathcal{A} = \{y \in \mathbb{R}^m : H_E y = c_E,\ H_\nu y = c_\nu\}$ for some index set $\nu \subseteq I$. It can be shown that for every affine set $\mathcal{A}$ there exists $\bar y \in \mathcal{A}$ such that $\nu(\bar y) = \nu(\mathcal{A})$. We state the following lemmas whose proofs are similar to those of Lemma 3.2, Lemma 3.3 and Lemma 3.4, and hence are left to the reader.

Lemma 3.6. Given any $y \in \mathbb{R}^m$ satisfying $H_E y = c_E$, there exists $\bar y \in M$ such that $\nu(y) \subseteq \nu(\bar y)$.

Lemma 3.7. Let $b \in \mathbb{R}^m$ and an affine set $\mathcal{A} \subseteq \mathbb{R}^m$ be given. Then the following statements are equivalent: (a) $b^T y$ is constant over the set $\mathcal{A}$; (b) $b \in \mathrm{Range}([H_{\nu(\mathcal{A})}^T\ H_E^T])$; (c) $X_b(y) \ne \emptyset$ for every $y \in \mathcal{A}$.

Lemma 3.8. Let $b \in \mathbb{R}^m$ and an affine set $\mathcal{A} \subseteq \mathbb{R}^m$ be given. Let $\bar y \in \mathcal{A}$ be such that $\nu(\bar y) = \nu(\mathcal{A})$. Then $X_b(\bar y) \subseteq X_b(y)$ for every $y \in \mathcal{A}$.

We are now ready to state the second main result of this section. It gives several equivalent characterizations of the notion of strong nondegeneracy of a polyhedron.

Theorem 3.9. The following statements are equivalent:
(a) for any $y \in \mathbb{R}^m$ satisfying $H_E y = c_E$, $|\nu(y) \cup E| \le m - l$ (i.e., $P$ is strongly nondegenerate);
(b) for any $y \in \mathbb{R}^m$ satisfying $H_E y = c_E$, the set $\{H_i : i \in \nu(y) \cup E\}$ is linearly independent;
(c) for any $y \in M$, $|\nu(y) \cup E| = m - l$;
(d) for any $y \in M$, $\{H_i : i \in \nu(y) \cup E\}$ is linearly independent;
(e) for any $b \in \mathrm{Range}([H_I^T\ H_E^T])$ and any $\bar y \in \mathbb{R}^m$ satisfying $H_E \bar y = c_E$, $E_b(\bar y)$ contains exactly one point;
(f) for any $b \in \mathbb{R}^m$ and any nonempty affine set $\mathcal{A} \subseteq \mathbb{R}^m$ over which $b^T y$ is constant, the set $\bigcup\{X_b(y) : y \in \mathcal{A}\}$ contains exactly one point;
(g) for all $b \in \mathbb{R}^m$, every constant cost face of the LP problem
$$\mbox{optimize } c_I^T x_I + c_E^T x_E \quad \mbox{subject to } H_I^T x_I + H_E^T x_E = b,\ x_I \ge 0,\ x_E \ \mbox{unrestricted}, \qquad (10)$$
is a vertex.

Proof: The equivalences (a) $\iff$ (c), (c) $\iff$ (d), (b) $\iff$ (d) and the implications (b) $\Rightarrow$ (e) $\Rightarrow$ (f) can be proved using arguments similar to the ones used to prove the same equivalences and implications of Theorem 3.5, except that now Lemma 3.6, Lemma 3.7 and Lemma 3.8 are used instead of Lemma 3.2, Lemma 3.3 and Lemma 3.4. We next prove that (f) $\Rightarrow$ (g) and (g) $\Rightarrow$ (b).

(f) $\Rightarrow$ (g): Assume (f) holds and let $b \in \mathbb{R}^m$ be given. Suppose that $D$ is a constant cost face of (10). $D$ can be written as $D = \{x \in \mathbb{R}^n : H_I^T x_I + H_E^T x_E = b,\ x_B \ge 0,\ x_N = 0\}$, for some index sets $B \subseteq I$ and $N = I \setminus B$ such that $x_B > 0$ for at least one $x \in D$. Since $D$ is a constant cost face of (10), Lemma 3.3 implies that
$$\begin{pmatrix} c_B \\ c_N \\ c_E \end{pmatrix} \in \mathrm{Range}\begin{pmatrix} H_B & 0 \\ H_N & I_N \\ H_E & 0 \end{pmatrix},$$
where $I_N$ is the $|N| \times |N|$ identity matrix. This implies that there exists a $y \in \mathbb{R}^m$ such that $H_B y = c_B$, $H_E y = c_E$. Define the affine set $\mathcal{A} = \{y \in \mathbb{R}^m : H_B y = c_B,\ H_E y = c_E\}$. Clearly, $D \subseteq X_b(y)$ for any $y \in \mathcal{A}$. Lemma 3.7 and the fact that $H_B^T x_B + H_E^T x_E = b$ for $x \in D$ imply that $b^T y$ is constant over $\mathcal{A}$. Hence, by (f), we conclude that $D$ contains exactly one point, that is, that $D$ is a vertex.

(g) $\Rightarrow$ (b): Let $\bar y$ satisfying $H_E \bar y = c_E$ be given. We will show that the set $\{H_i : i \in \nu(\bar y) \cup E\}$ is linearly independent. Indeed, let $\bar b \equiv H_{\nu(\bar y)}^T \mathbf{1} + H_E^T \mathbf{1}$ and let
$$D \equiv \{x \in \mathbb{R}^n : H_{\nu(\bar y)}^T x_{\nu(\bar y)} + H_E^T x_E = \bar b,\ x_{\nu(\bar y)} \ge 0,\ x_{\nu^c(\bar y)} = 0\}.$$
Clearly, $\bar x = (\bar x_{\nu(\bar y)}, \bar x_{\nu^c(\bar y)}, \bar x_E) = (\mathbf{1}, 0, \mathbf{1}) \in D$. Since $c_{\nu(\bar y) \cup E} = H_{\nu(\bar y) \cup E}\, \bar y$, we know that $c_{\nu(\bar y) \cup E} \perp \mathrm{Null}(H_{\nu(\bar y) \cup E}^T)$, and hence $D$ is a constant cost face of (10) with $b = \bar b$. By condition (g), we know that $D$ is a vertex. This implies that the set $\{H_i : i \in \nu(\bar y) \cup E\}$ is linearly independent.

Corollary 3.10. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$ and assume that $\mathrm{rank}(A) = m$. Then the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is strongly dual nondegenerate if and only if the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is CC-nondegenerate for every $b \in \mathbb{R}^m$.

Proof: The assumption $\mathrm{rank}(A) = m$ implies that the polyhedron $\{y \in \mathbb{R}^m : A^T y \le c\}$ is pointed. By this observation, Definition 1 and Definition 3, we conclude that strong dual nondegeneracy of the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ is equivalent to the condition that the polyhedron $\{y \in \mathbb{R}^m : A^T y \le c\}$ be strongly nondegenerate. From the equivalence of (a) and (g) of Theorem 3.9 and the fact that every minimal face of $\{x : Ax = b,\ x \ge 0\}$ is a vertex, it follows that the latter condition is equivalent to the condition that the LP problem optimize $\{c^T x : Ax = b,\ x \ge 0\}$ be CC-nondegenerate for every $b \in \mathbb{R}^m$.

4. CC-nondegeneracy of a linear program

In this section, we discuss the notion of CC-nondegeneracy with respect to the LP problem
$$\mbox{optimize } b^T y \quad \mbox{subject to } y \in P, \qquad (11)$$
where $b \in \mathbb{R}^m$ and $P$ is the polyhedron defined in (1). Problem (11) is allowed to be either a maximization or a minimization problem. The main results of this section are Theorem 4.4 and Theorem 4.6. Theorem 4.4 gives a characterization of CC-nondegeneracy of (11) for a general feasible polyhedron $P$ while Theorem 4.6 gives an alternative characterization that holds only when $P$ is nondegenerate. Throughout this section we consider the following two sets:
$$C \equiv \{y \in P : X_b(y) \ne \emptyset\} = \{y \in P : \exists\, x \in \mathbb{R}^n \mbox{ such that } H_I^T x_I + H_E^T x_E = b,\ x_I \circ s_I = 0\};$$
$$SC \equiv \{y \in P : \exists\, x \in \mathbb{R}^n \mbox{ such that } H_I^T x_I + H_E^T x_E = b,\ x_I \circ s_I = 0,\ \mbox{and } |x_I| + s_I > 0\}.$$
$C$ is the set of feasible points satisfying the complementarity condition while $SC$ is the set of feasible points satisfying the same condition strictly.
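In computational terms, Lemma 3.3 already provides a simple certificate for membership in $C$: a feasible $y$ belongs to $C$ exactly when $b$ lies in $\mathrm{Range}([H_{\nu(y)}^T\ H_E^T])$, that is, when $F(y)$ is a constant cost face of (11). The following numpy sketch of that test is ours (hypothetical helper; $y$ is assumed feasible and the tolerance handling is an add-on).

```python
import numpy as np

def in_C(H, c, E_idx, b, y, tol=1e-8):
    """Decide y in C via Lemma 3.3: F(y) is a constant cost face of (11)
    iff b lies in Range([H_{nu(y)}^T  H_E^T])."""
    n, m = H.shape
    E_idx = list(E_idx)
    I_idx = [i for i in range(n) if i not in set(E_idx)]
    s = c - H @ y                                              # slack vector s(y)
    nu = [i for i in I_idx if abs(s[i]) <= tol]                # active inequalities at y
    cols = H[nu + E_idx, :].T                                  # the matrix [H_{nu(y)}^T  H_E^T]
    if cols.size == 0:
        return np.linalg.norm(b) <= tol                        # empty range space: only b = 0 qualifies
    x, *_ = np.linalg.lstsq(cols, b, rcond=None)
    return np.linalg.norm(cols @ x - b) <= tol                 # b in the range space?
```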

The main goal of this section is to show that CC-nondegeneracy of (11) is equivalent to the condition $SC = \bigcup\{F : F \mbox{ is a minimal face of } P\}$ (see Theorem 4.4) and, when the feasible polyhedron $P$ is nondegenerate, to the condition $C = SC$ (see Theorem 4.6). We start with the following result which follows as an immediate consequence of Lemma 3.3.

Proposition 4.1. $C = \bigcup\{F : F \mbox{ is a constant cost face of } (11)\}$.

The next result establishes an important relationship between the set $SC$ and the maximal constant cost faces of problem (11). First we need the following definition.

Definition 4. A face $F$ of $P$ is said to be a maximal constant cost face (of problem (11)) if it is a constant cost face and is not properly contained in any other constant cost face.

Theorem 4.2.
$$\bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\} \subseteq SC. \qquad (12)$$

Proof: Let $F$ be a maximal constant cost face for (11) and let $\bar y \in \mathrm{ri}\,F$. We will show that $\bar y \in SC$. By (4), we have $F = F(\bar y)$, and hence $F(\bar y)$ is a maximal constant cost face. Hence, by the equivalence of statements (a) and (b) of Lemma 3.3 we conclude that there exists an $\tilde x \in \mathbb{R}^n$ such that $H_B^T \tilde x_B + H_E^T \tilde x_E = b$, $\tilde x_N = 0$, where $B \equiv \nu(\bar y)$ and $N \equiv I \setminus B$. Clearly, if $|\tilde x_B| > 0$ then $\bar y \in SC$ since $\bar s_N > 0$, $\bar s_B = 0$. Hence, assume that some component of $\tilde x_B$ is zero and let $B_+ = \{i \in B : \tilde x_i \ne 0\}$ and $B_0 = B \setminus B_+ = \{i \in B : \tilde x_i = 0\}$. Consider the problem
$$\mbox{maximize } b^T y \quad \mbox{subject to } H_{B_+} y = c_{B_+},\ H_{B_0} y \le c_{B_0},\ H_N y \le c_N,\ H_E y = c_E \qquad (13)$$
and its dual problem
$$\mbox{minimize } c_{B_+}^T x_{B_+} + c_{B_0}^T x_{B_0} + c_N^T x_N + c_E^T x_E \quad \mbox{subject to } H_{B_+}^T x_{B_+} + H_{B_0}^T x_{B_0} + H_N^T x_N + H_E^T x_E = b,\ x_{B_0} \ge 0,\ x_N \ge 0,\ x_{B_+}, x_E \ \mbox{unrestricted}. \qquad (14)$$
It is easy to verify that $\tilde x$ and $\bar y$ are optimal solutions to problems (14) and (13), respectively. Since every pair of primal and dual linear programs has a pair of optimal solutions satisfying strict complementarity, we conclude that there exist optimal solutions $\hat x$ and $\hat y$ of (14) and (13), respectively, such that
$$\hat x_{B_0} \circ \hat s_{B_0} = 0, \qquad \hat x_{B_0} + \hat s_{B_0} > 0, \qquad (15)$$
$$\hat x_N \circ \hat s_N = 0, \qquad \hat x_N + \hat s_N > 0. \qquad (16)$$
Since every pair of optimal solutions of (13) and (14) satisfies the complementarity condition, we have $\hat x_N \circ \bar s_N = 0$. Since $\bar s_N > 0$, we obtain $\hat x_N = 0$. It then follows from (16) that $\hat s_N > 0$, and hence $\nu(\hat y) \subseteq B = \nu(\bar y)$. By (5), we conclude that $F(\bar y) \subseteq F(\hat y)$. Since $\hat y$ is an optimal solution of (13), it follows that the face $F(\hat y)$ is contained in the optimal face of (13) and hence that $F(\hat y)$ is a constant cost face. Hence, we conclude that $F(\bar y) = F(\hat y)$ since $F(\bar y) \subseteq F(\hat y)$ and $F(\bar y)$ is a maximal constant cost face. Thus, $\nu(\bar y) = \nu(\hat y)$, and since $B_0 \subseteq \nu(\bar y)$ we conclude that $\hat s_{B_0} = 0$. It then follows from (15) that $\hat x_{B_0} > 0$. Using the fact that $|\tilde x_{B_+}| > 0$, $\tilde x_{B_0} = 0$ and $\hat x_{B_0} > 0$, we conclude that there exists a $\delta \in (0, 1]$ sufficiently small such that $|\lambda \hat x_B + (1-\lambda)\tilde x_B| > 0$ for all $0 < \lambda \le \delta$. Defining $\bar x = \delta \hat x + (1-\delta)\tilde x$ and noting that $\bar x_N = \hat x_N = \tilde x_N = 0$, we have $H_I^T \bar x_I + H_E^T \bar x_E = b$, $|\bar x_B| > 0$ and $\bar x_N = 0$. Clearly, this shows that $\bar y \in SC$.

The following simple example shows that the two sets in (12) may differ.

Example 4.3. Consider the LP problem given by
$$\mbox{maximize } y_1 + y_2 \quad \mbox{subject to } y_1 + y_2 \ge 0,\ y_1 + y_2 \le 2,\ y_2 \le 1. \qquad (17)$$
Let $F_1$ denote the face of the feasible polyhedron in which the first constraint is active. Clearly, the only maximal constant cost face is $F_1$. It is easy to verify that $SC = F_1$ and hence that the two sets in (12) differ.

The next result provides a characterization of CC-nondegeneracy of the LP problem (11).

Theorem 4.4. Assume that $C \ne \emptyset$. Then, problem (11) is CC-nondegenerate if and only if
$$SC = \bigcup\{F : F \mbox{ is a minimal face of } P\}, \qquad (18)$$
in which case, we have $SC = C$.

Proof: We first prove the "only if" part. So assume that problem (11) is CC-nondegenerate. Using Proposition 4.1, Theorem 4.2 and the CC-nondegeneracy of problem (11), we obtain
$$C = \bigcup\{F : F \mbox{ constant cost face}\} = \bigcup\{F : F \mbox{ minimal face}\} = \bigcup\{\mathrm{ri}\,F : F \mbox{ minimal face}\} = \bigcup\{\mathrm{ri}\,F : F \mbox{ maximal constant cost face}\} \subseteq SC. \qquad (19)$$

On the other hand, we know that $SC \subseteq C$. Hence, we conclude from (19) that (18) holds and $SC = C$. For the "if" part, note that Theorem 4.2 and relation (18) imply
$$\bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\} \subseteq SC = \bigcup\{F : F \mbox{ is a minimal face}\}.$$
This inclusion clearly implies that problem (11) is CC-nondegenerate.

Theorem 4.4 shows that CC-nondegeneracy of (11) implies that $SC = C$. On the other hand, the reverse implication may not hold, as Example 4.3 illustrates. But when the polyhedron $P$ is nondegenerate, we will show in what follows that the condition $SC = C$ implies that (11) is CC-nondegenerate. With this goal in mind, we introduce the following set:
$$SC_0 \equiv \left\{y \in P : \exists\, B \subseteq \nu(y) \mbox{ such that } \mathrm{rank}\begin{pmatrix} H_B \\ H_E \end{pmatrix} = \mathrm{rank}\begin{pmatrix} H_{\nu(y)} \\ H_E \end{pmatrix}, \mbox{ and } \exists\, x \in \mathbb{R}^n \mbox{ such that } H_I^T x_I + H_E^T x_E = b,\ |x_B| > 0,\ x_{B^c} = 0 \right\}. \qquad (20)$$
The following theorem relates the set $\bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\}$ with the set $SC_0$. Its proof is postponed until the end of this section.

Theorem 4.5. $SC_0 \subseteq \bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\}$.

As a consequence of this theorem, we obtain the following result with respect to problem (11) when $P$ is nondegenerate.

Theorem 4.6. Assume that $P$ is nondegenerate. Then,
(a) $SC = SC_0 = \bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\}$;
(b) problem (11) is CC-nondegenerate if and only if $C = SC$.

Proof: We first prove (a). Combining Theorems 4.2 and 4.5, we obtain
$$SC_0 \subseteq \bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\} \subseteq SC. \qquad (21)$$
Since $P$ is nondegenerate, it follows from Theorem 3.5 that the set $\{H_i : i \in \nu(y) \cup E\}$ is linearly independent for any $y \in P$. This observation implies that $SC = SC_0$, due to the definition of these sets. Hence, (a) follows from (21). For (b), note that the "only if" part follows from Theorem 4.4. For the proof of the "if" part, assume that $C = SC$. This condition, statement (a) and Proposition 4.1 then imply
$$\bigcup\{\mathrm{ri}\,F : F \mbox{ is a maximal constant cost face}\} = SC = C = \bigcup\{F : F \mbox{ is a constant cost face}\}.$$
This equality obviously implies that every constant cost face must be a minimal face, that is, problem (11) is CC-nondegenerate.
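To illustrate Theorem 4.6(b) in the standard form setting of Theorem 1.4 (the example is ours, not from the paper), take $P = \{x \in \mathbb{R}^2 : x_1 + x_2 = 1,\ x \ge 0\}$, which is a nondegenerate polyhedron with vertices $(1,0)$ and $(0,1)$. For $c = (1,1)^T$ the objective is constant on $P$, so $P$ itself is a constant cost face and the problem is not CC-nondegenerate; correspondingly, every $x \in P$ admits $(y, s) = (1, 0)$ with $x \circ s = 0$, while only the points with $x > 0$ satisfy the strict condition $x + |s| > 0$, so the two sets of Theorem 1.4 are $C = P$ and $SC = \mathrm{ri}\,P$, which differ. For $c = (1,0)^T$ the only constant cost faces are the two vertices, the problem is CC-nondegenerate, and a direct computation gives $C = SC = \{(1,0), (0,1)\}$, in agreement with Theorems 1.3, 1.4 and 4.6.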

We now turn our efforts towards proving Theorem 4.5. Several preliminary lemmas are needed. The first one can be proved in the same way as Proposition 3.4 of Nemhauser and Wolsey [22] and hence, we leave its proof to the reader. In what follows, the following notation is used. Given two index sets $B \subseteq \{1,\dots,n\}$ and $N \subseteq \{1,\dots,n\}$ such that $B \cap N = \emptyset$, we denote by $[B, N]$ the polyhedron given by $[B, N] \equiv \{y \in \mathbb{R}^m : H_B y = c_B,\ H_N y \le c_N\}$.

Lemma 4.7. Let $F$ be a face of $P$ and assume that $F = [B, N]$. Assume that $r \in N$ is such that the (possibly empty) set $\{y \in F : H_r y = c_r\}$ has dimension less than $\dim(F) - 1$. Then, $F = [B, N \setminus \{r\}]$.

Lemma 4.8. Let $F$ and $F'$ be two nonempty faces of $P$ such that $F \subseteq F'$ and $F \ne F'$. Then there exists an index $r \in \nu(F) \setminus \nu(F')$ such that the face $\tilde F = \{y \in F' : H_r y = c_r\}$ satisfies $F \subseteq \tilde F$ and $\dim(\tilde F) = \dim(F') - 1$.

Proof: Let $B \equiv \nu(F) \cup E$, $N \equiv \{1,\dots,n\} \setminus B$, $B' \equiv \nu(F') \cup E$ and $N' \equiv \{1,\dots,n\} \setminus B'$. By (5), we know that $B \supsetneq B'$. For $r \in B \setminus B'$, let $F_r \equiv \{y \in F' : H_r y = c_r\}$. Obviously, $F \subseteq F_r \subseteq F'$, and $\dim(F_r) < \dim(F')$, since otherwise we would have $\{y \in F' : H_r y = c_r\} = F'$ and hence that $r \in \nu(F')$, a contradiction. Now assume for contradiction that for every $r \in B \setminus B'$, $\dim(F_r) < \dim(F') - 1$, and let $B \setminus B' = \{r_1,\dots,r_k\}$. Using Lemma 4.7, we can easily show by induction that $F' = [B', N' \setminus \{r_1,\dots,r_j\}]$ for every $j = 1,\dots,k$. In particular, $F' = [B', N]$ since $N = N' \setminus \{r_1,\dots,r_k\}$. Since $F$ is a subface of $F'$, it follows that $F = [B' \cup N_1, N_2]$, where $N_1$ and $N_2$ are index sets such that $N_1 \cup N_2 = N$ and $N_1 \cap N_2 = \emptyset$. Clearly, $B' \cup N_1 \subseteq B$ and since $B \cap N = \emptyset$, we must have $N_1 = \emptyset$ and $N_2 = N$. Hence, it follows that $F = [B', N] = F'$. But this contradicts the assumption that $F \ne F'$. Thus, we conclude there exists an index $r \in B \setminus B'$ such that $\dim(F_r) = \dim(F') - 1$, and the result follows by letting $\tilde F \equiv F_r$.

Lemma 4.9. Let $F$ and $F'$ be two nonempty faces of $P$ such that $F \subseteq F'$ and $F \ne F'$. Then there exist a face $\hat F$ of $P$ and an index $r \in \nu(F) \setminus \nu(\hat F)$ satisfying the following properties:
(a) $F \subseteq \hat F \subseteq F'$ and $\dim(\hat F) = \dim(F) + 1$;
(b) $F = \{y \in \hat F : H_r y = c_r\}$;
(c) $\mathrm{rank}\begin{pmatrix} H_{\nu(F)} \\ H_E \end{pmatrix} = \mathrm{rank}\begin{pmatrix} H_{\nu(\hat F) \cup \{r\}} \\ H_E \end{pmatrix}$.

Proof: The proofs of statements (a) and (b) follow immediately from Lemma 4.8. It remains to show (c). We first prove that $H_r$ is linearly independent from the rows of $H_{\nu(\hat F) \cup E}$. Assume for contradiction that there exist scalars $\alpha_i$ with $i \in \nu(\hat F) \cup E$ such that
$$H_r = \sum_{i \in \nu(\hat F) \cup E} \alpha_i H_i. \qquad (22)$$

Then, using (22) we obtain
$$H_r y = \sum_{i \in \nu(\hat F) \cup E} \alpha_i H_i y = \sum_{i \in \nu(\hat F) \cup E} \alpha_i c_i, \qquad \forall\, y \in \hat F. \qquad (23)$$
Since $F \subseteq \hat F$ and $H_r y = c_r$ for all $y \in F$, it follows from (23) that $\sum_{i \in \nu(\hat F) \cup E} \alpha_i c_i = c_r$ and that $r \in \nu(\hat F)$, a contradiction. We have thus shown that $H_r$ is linearly independent from the rows of $H_{\nu(\hat F) \cup E}$. Using this fact and statement (a), we obtain
$$\mathrm{rank}\begin{pmatrix} H_{\nu(F)} \\ H_E \end{pmatrix} = m - \dim(F) = m - \dim(\hat F) + 1 = \mathrm{rank}\begin{pmatrix} H_{\nu(\hat F)} \\ H_E \end{pmatrix} + 1 = \mathrm{rank}\begin{pmatrix} H_{\nu(\hat F) \cup \{r\}} \\ H_E \end{pmatrix}.$$

Lemma 4.10. Let $F$ be a constant cost face of problem (11). Define the set
$$X(F) \equiv \{x \in \mathbb{R}^n : H_I^T x_I + H_E^T x_E = b, \mbox{ and } \exists\, y \in F \mbox{ such that } s_I = c_I - H_I y \mbox{ satisfies } x_I^T s_I = 0\}. \qquad (24)$$
Then, for any $y \in F$ and any $x \in X(F)$, we have $x_I^T s_I = 0$.

Proof: Let $y \in F$ and $x \in X(F)$ be given. By the definition of $X(F)$, $x$ satisfies $H_I^T x_I + H_E^T x_E = b$ and there exists $\bar y \in F$ such that $x_I^T(c_I - H_I \bar y) = 0$. Since $F$ is a constant cost face and $y, \bar y \in F$, we have $b^T y = b^T \bar y$. Using this fact and the fact that $H_E y = c_E = H_E \bar y$, we obtain
$$x_I^T s_I = x_I^T(c_I - H_I \bar y + H_I \bar y - H_I y) = x_I^T H_I(\bar y - y) = (b - H_E^T x_E)^T(\bar y - y) = b^T(\bar y - y) - x_E^T H_E(\bar y - y) = 0.$$

We are now ready to prove Theorem 4.5.

Proof of Theorem 4.5: For any $\bar y \in SC_0$, we will show that $F(\bar y)$ is a maximal constant cost face, from which the inclusion of the theorem follows. Indeed, let $\bar y \in SC_0$ be given. Since $SC_0 \subseteq C$, it follows that $\bar y \in C$, and hence that $F(\bar y)$ is a constant cost face, due to Proposition 4.1. Assume for contradiction that $F(\bar y)$ is not a maximal constant cost face, that is, there exists a constant cost face $F'$ such that $F(\bar y) \subseteq F'$ and $F(\bar y) \ne F'$. Applying Lemma 4.9 with $F = F(\bar y)$, we conclude that there exist a face $\hat F$ and an index $r \in \nu(\bar y) \setminus \nu(\hat F)$ such that
$$F(\bar y) \subseteq \hat F \subseteq F', \qquad (25)$$
$$\mathrm{rank}\begin{pmatrix} H_{\nu(\bar y)} \\ H_E \end{pmatrix} = \mathrm{rank}\begin{pmatrix} H_{\nu(\hat F) \cup \{r\}} \\ H_E \end{pmatrix}. \qquad (26)$$

Letting $B \equiv \nu(\hat F) \cup \{r\} \subseteq \nu(\bar y)$ and using relation (26), the definition of $SC_0$ and the fact that $\bar y \in SC_0$, we conclude that there exists $\bar x \in \mathbb{R}^n$ such that
$$H_I^T \bar x_I + H_E^T \bar x_E = b, \qquad |\bar x_B| > 0, \qquad \bar x_{B^c} = 0. \qquad (27)$$
By (6), we know there exists $\hat y \in \hat F$ such that
$$\hat s_{\nu^c(\hat F)} > 0, \qquad \hat s_{\nu(\hat F)} = 0. \qquad (28)$$
Hence, we have
$$\bar x_I^T \hat s_I = \bar x_B^T \hat s_B + \bar x_{B^c}^T \hat s_{B^c} = \bar x_{\nu(\hat F)}^T \hat s_{\nu(\hat F)} + \bar x_r \hat s_r + \bar x_{B^c}^T \hat s_{B^c} \qquad (29)$$
$$= \bar x_r \hat s_r \ne 0, \qquad (30)$$
where $\bar x_{\nu(\hat F)}^T \hat s_{\nu(\hat F)}$ is equal to zero by (28), $\bar x_{B^c}^T \hat s_{B^c}$ is equal to zero by (27), and $\bar x_r \hat s_r$ is nonzero due to (27) and (28) and the fact that $r \in B$, $r \notin \nu(\hat F)$. On the other hand, since $B \subseteq \nu(\bar y)$, relation (27) implies that $\bar x_I^T \bar s_I = 0$. This fact, relation (27), the definition of $X(\hat F)$ and the fact that $\bar y \in \hat F$ imply that $\bar x \in X(\hat F)$. Since $F'$ is a constant cost face, it follows from (25) that $\hat F$ is also a constant cost face. Using these two last conclusions, the fact that $\hat y \in \hat F$ and Lemma 4.10, we conclude that $\bar x_I^T \hat s_I = 0$, a fact that contradicts (30).

The reverse inclusion in Theorem 4.5 does not hold in general. To see this, consider the LP problem of maximizing $y_2$ subject to the same set of constraints as the problem in Example 4.3. Then, the vertex $(1, 1)$ is the only maximal constant cost face, but $(1, 1) \notin SC_0$.

References

1. I. Adler and R.D.C. Monteiro, A geometric view of parametric linear programming, Algorithmica, vol. 8.
2. M. Akgül, A note on shadow prices in linear programming, Journal of Operations Research Society, vol. 35.
3. D.C. Aucamp and D.I. Steinberg, The computation of shadow prices in linear programming, Journal of Operations Research, vol. 33.
4. E.R. Barnes, A variation on Karmarkar's algorithm for solving linear programming problems, Mathematical Programming, vol. 36.
5. E.M.L. Beale, Cycling in the dual simplex algorithm, Naval Research Logistics Quarterly, vol. 2.
6. R.G. Bland, New finite pivoting rules for the simplex algorithm, Mathematics of Operations Research, vol. 2.
7. A. Charnes, Optimality and degeneracy in linear programming, Econometrica, vol. 20, April.
8. G. Dantzig, Linear Programming and Extensions, Princeton University Press: Princeton, NJ.
9. I.I. Dikin, Iterative solution of problems of linear and quadratic programming, Doklady Akademii Nauk SSSR, vol. 174. Translated in: Soviet Mathematics Doklady, vol. 8.
10. T. Gal, Postoptimal Analysis, Parametric Programming and Related Topics, McGraw-Hill: New York, 1979.
11. T. Gal, Shadow prices and sensitivity analysis in linear programming under degeneracy, state-of-the-art survey, OR Spektrum, vol. 8.
12. H.J. Greenberg, An analysis of degeneracy, Naval Research Logistics Quarterly.
13. O. Güler, D. Den Hertog, C. Roos, T. Terlaky, and T. Tsuchiya, Degeneracy in interior point methods for linear programming: A survey, Annals of Operations Research, vol. 46.
14. L.A. Hall and R.J. Vanderbei, Two-thirds is sharp for affine scaling, Operations Research Letters, vol. 13.
15. A.J. Hoffman, Cycling in the simplex algorithm, Tech. Report 2974, National Bureau of Standards, Washington, D.C., Dec.
16. T.L. Magnanti and J.B. Orlin, Parametric linear programming and anti-cycling pivoting rules, Mathematical Programming, vol. 41.
17. N. Megiddo, A note on degeneracy in linear programming, Mathematical Programming, vol. 35.
18. R.D.C. Monteiro and S. Mehrotra, A general parametric analysis approach and its implication to sensitivity analysis in interior point methods, Mathematical Programming, vol. 47.
19. R.D.C. Monteiro and T. Tsuchiya, Global convergence of the affine scaling algorithm for convex quadratic programming, Technical Report, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, March.
20. R.D.C. Monteiro, T. Tsuchiya, and Y. Wang, A simplified global convergence proof of the affine scaling algorithm, Annals of Operations Research, vol. 47.
21. K.G. Murty, Linear Programming, John Wiley & Sons.
22. G. Nemhauser and L. Wolsey, Integer and Combinatorial Optimization, John Wiley & Sons.
23. A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons: New York.
24. T. Tsuchiya, Global convergence of the affine-scaling methods for degenerate linear programming problems, Mathematical Programming, vol. 52.
25. T. Tsuchiya, Global convergence property of the affine scaling method for primal degenerate linear programming problems, Mathematics of Operations Research, vol. 17.
26. T. Tsuchiya and M. Muramatsu, Global convergence of a long-step affine scaling algorithm for degenerate linear programming problems, SIAM Journal on Optimization, vol. 5.
27. R.J. Vanderbei and J.C. Lagarias, I.I. Dikin's convergence result for the affine-scaling algorithm, in Mathematical Developments Arising from Linear Programming: Proceedings of a Joint Summer Research Conference held at Bowdoin College, Brunswick, Maine, USA, June/July 1988, J.C. Lagarias and M.J. Todd (eds.), Contemporary Mathematics, vol. 114, American Mathematical Society: Providence, Rhode Island, USA.
28. R.J. Vanderbei, M.S. Meketon, and B.A. Freedman, A modification of Karmarkar's linear programming algorithm, Algorithmica, vol. 1.
29. J.E. Ward and R.E. Wendell, Approaches to sensitivity analysis in linear programming, Annals of Operations Research, vol. 27, pp. 3-38.
30. A.C. Williams, Marginal values in linear programming, Journal of SIAM, vol. 11.
31. P. Wolfe, A technique for resolving degeneracy in linear programming, Journal of SIAM, vol. 11, 1963.


LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides

More information

Minimal inequalities for an infinite relaxation of integer programs

Minimal inequalities for an infinite relaxation of integer programs Minimal inequalities for an infinite relaxation of integer programs Amitabh Basu Carnegie Mellon University, abasu1@andrew.cmu.edu Michele Conforti Università di Padova, conforti@math.unipd.it Gérard Cornuéjols

More information

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark.

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Interior Point Algorithm for Linear Programming Problem and Related inscribed Ellipsoids Applications.

Interior Point Algorithm for Linear Programming Problem and Related inscribed Ellipsoids Applications. International Journal of Computational Science and Mathematics. ISSN 0974-3189 Volume 4, Number 2 (2012), pp. 91-102 International Research Publication House http://www.irphouse.com Interior Point Algorithm

More information

Introduce the idea of a nondegenerate tableau and its analogy with nondenegerate vertices.

Introduce the idea of a nondegenerate tableau and its analogy with nondenegerate vertices. 2 JORDAN EXCHANGE REVIEW 1 Lecture Outline The following lecture covers Section 3.5 of the textbook [?] Review a labeled Jordan exchange with pivoting. Introduce the idea of a nondegenerate tableau and

More information

Lecture slides by Kevin Wayne

Lecture slides by Kevin Wayne LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming

More information

A primal-simplex based Tardos algorithm

A primal-simplex based Tardos algorithm A primal-simplex based Tardos algorithm Shinji Mizuno a, Noriyoshi Sukegawa a, and Antoine Deza b a Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama,

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Minimal inequalities for an infinite relaxation of integer programs

Minimal inequalities for an infinite relaxation of integer programs Minimal inequalities for an infinite relaxation of integer programs Amitabh Basu Carnegie Mellon University, abasu1@andrew.cmu.edu Michele Conforti Università di Padova, conforti@math.unipd.it Gérard Cornuéjols

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n

More information

Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang

Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang Sept. 25, 2007 2 Contents 1 What is Linear Programming? 5 1.1 A Toy Problem.......................... 5 1.2 From

More information

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics JS and SS Mathematics JS and SS TSM Mathematics TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN School of Mathematics MA3484 Methods of Mathematical Economics Trinity Term 2015 Saturday GOLDHALL 09.30

More information

LP Relaxations of Mixed Integer Programs

LP Relaxations of Mixed Integer Programs LP Relaxations of Mixed Integer Programs John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA February 2015 Mitchell LP Relaxations 1 / 29 LP Relaxations LP relaxations We want

More information

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm

More information

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality

More information

A Combinatorial Active Set Algorithm for Linear and Quadratic Programming

A Combinatorial Active Set Algorithm for Linear and Quadratic Programming A Combinatorial Active Set Algorithm for Linear and Quadratic Programming Andrew J Miller University of Wisconsin-Madison 1513 University Avenue, ME 34 Madison, WI, USA 53706-157 ajmiller5@wiscedu Abstract

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Math 273a: Optimization The Simplex method

Math 273a: Optimization The Simplex method Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form

More information

On the projection onto a finitely generated cone

On the projection onto a finitely generated cone Acta Cybernetica 00 (0000) 1 15. On the projection onto a finitely generated cone Miklós Ujvári Abstract In the paper we study the properties of the projection onto a finitely generated cone. We show for

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

Answers to problems. Chapter 1. Chapter (0, 0) (3.5,0) (0,4.5) (2, 3) 2.1(a) Last tableau. (b) Last tableau /2 -3/ /4 3/4 1/4 2.

Answers to problems. Chapter 1. Chapter (0, 0) (3.5,0) (0,4.5) (2, 3) 2.1(a) Last tableau. (b) Last tableau /2 -3/ /4 3/4 1/4 2. Answers to problems Chapter 1 1.1. (0, 0) (3.5,0) (0,4.5) (, 3) Chapter.1(a) Last tableau X4 X3 B /5 7/5 x -3/5 /5 Xl 4/5-1/5 8 3 Xl =,X =3,B=8 (b) Last tableau c Xl -19/ X3-3/ -7 3/4 1/4 4.5 5/4-1/4.5

More information

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P)

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P) Lecture 10: Linear programming duality Michael Patriksson 19 February 2004 0-0 The dual of the LP in standard form minimize z = c T x (P) subject to Ax = b, x 0 n, and maximize w = b T y (D) subject to

More information

Minimal Valid Inequalities for Integer Constraints

Minimal Valid Inequalities for Integer Constraints Minimal Valid Inequalities for Integer Constraints Valentin Borozan LIF, Faculté des Sciences de Luminy, Université de Marseille, France borozan.valentin@gmail.com and Gérard Cornuéjols Tepper School of

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities Linear Programming Murti V Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities murtis@umnedu September 4, 2012 Linear Programming 1 The standard Linear Programming (SLP) problem:

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

ON THE PARAMETRIC LCP: A HISTORICAL PERSPECTIVE

ON THE PARAMETRIC LCP: A HISTORICAL PERSPECTIVE ON THE PARAMETRIC LCP: A HISTORICAL PERSPECTIVE Richard W. Cottle Department of Management Science and Engineering Stanford University ICCP 2014 Humboldt University Berlin August, 2014 1 / 55 The year

More information

Linear and Integer Programming - ideas

Linear and Integer Programming - ideas Linear and Integer Programming - ideas Paweł Zieliński Institute of Mathematics and Computer Science, Wrocław University of Technology, Poland http://www.im.pwr.wroc.pl/ pziel/ Toulouse, France 2012 Literature

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748 COT 6936: Topics in Algorithms! Giri Narasimhan ECS 254A / EC 2443; Phone: x3748 giri@cs.fiu.edu https://moodle.cis.fiu.edu/v2.1/course/view.php?id=612 Gaussian Elimination! Solving a system of simultaneous

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations A notion of for Convex, Semidefinite and Extended Formulations Marcel de Carli Silva Levent Tunçel April 26, 2018 A vector in R n is integral if each of its components is an integer, A vector in R n is

More information

SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD. Emil Klafszky Tamas Terlaky 1. Mathematical Institut, Dept. of Op. Res.

SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD. Emil Klafszky Tamas Terlaky 1. Mathematical Institut, Dept. of Op. Res. SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD FOR QUADRATIC PROGRAMMING Emil Klafszky Tamas Terlaky 1 Mathematical Institut, Dept. of Op. Res. Technical University, Miskolc Eotvos University Miskolc-Egyetemvaros

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

Integer Programming Duality

Integer Programming Duality Integer Programming Duality M. Guzelsoy T. K. Ralphs July, 2010 1 Introduction This article describes what is known about duality for integer programs. It is perhaps surprising that many of the results

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Lecture 7 Duality II

Lecture 7 Duality II L. Vandenberghe EE236A (Fall 2013-14) Lecture 7 Duality II sensitivity analysis two-person zero-sum games circuit interpretation 7 1 Sensitivity analysis purpose: extract from the solution of an LP information

More information

Integer Programming, Part 1

Integer Programming, Part 1 Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

Multicommodity Flows and Column Generation

Multicommodity Flows and Column Generation Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07

More information