SIAM J. OPTIM. Vol. 9, No. 1, pp. 207-216, c 1998 Society for Industrial and Applied Mathematics

ON THE DIMENSION OF THE SET OF RIM PERTURBATIONS FOR OPTIMAL PARTITION INVARIANCE

HARVEY J. GREENBERG, ALLEN G. HOLDER, KEES ROOS, AND TAMÁS TERLAKY

Abstract. Two new dimension results are presented. For linear programs, it is shown that the sum of the dimension of the optimal set and the dimension of the set of objective perturbations for which the optimal partition is invariant equals the number of variables. A decoupling principle shows that the primal and dual results are additive. The main result is then extended to convex quadratic programs, but the dimension relationships are no longer dependent only on problem size. Furthermore, although the decoupling principle does not extend completely, the dimensions are additive, as in the linear case.

Key words. linear programming, optimal partition, polyhedron, polyhedral combinatorics, quadratic programming, computational economics

AMS subject classification. 90C05

1. Introduction and background. Consider the primal-dual linear programs (LPs)

    min{cx : x ≥ 0, Ax = b},    max{yb : s ≥ 0, yA + s = c},

where c is a row vector in R^n, called objective coefficients; x is a column vector in R^n, called levels; b is a column vector in R^m, called right-hand sides; y is a row vector in R^m, called prices; and A is an m × n matrix with rank m. Let P and D denote the primal and dual polyhedra, respectively, and let P* and D* denote their optimality regions, which we assume to be nonempty. Let (x*, y*, s*) be a strictly complementary optimal solution, and let the optimal partition be denoted by (B, N), where B = σ(x*) = {j : x*_j > 0} and N = σ(s*) = {j : s*_j > 0}. For background, see [6]. This paper first presents a result concerning the dimension of P* (respectively, D*) in connection with the set of direction vectors in R^n (respectively, R^m) for which the optimal partition does not change when the objective coefficients (respectively, right-hand sides) are perturbed in that direction.
After establishing fundamental relations for LPs, we consider extensions to convex quadratic programs. The technical terms used throughout this paper are defined in the Mathematical Programming Glossary [2].

Received by the editors February 10, 1997; accepted for publication (in revised form) February 9, 1998; published electronically December 2, 1998.

Mathematics Department, University of Colorado at Denver, Denver, CO (hgreenbe@carbon.cudenver.edu, agholder@tiger.cudenver.edu).

Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, The Netherlands (c.roos@twi.tudelft.nl, t.terlaky@twi.tudelft.nl).
2. Linear programs. Following Greenberg [3], let r = (b, c) denote the rim data, and let H denote the set of rim direction vectors, h = (δb, δc), for which the optimal partition does not change on the interval [r, r + θh] for some θ > 0, i.e.,

    H = {(δb, δc) : there exist x ≥ 0, y, θ > 0 such that Ax = b + θ δb, x_B > 0, x_N = 0; yA + s = c + θ δc, s_B = 0, s_N > 0}.

Here we follow the notation in [1, 6], where a subscript on a vector means it is the subvector restricted to the indexes in the subscript. For example, x_B is the vector of positive levels. This notation extends to matrices: A partitions into [A_B A_N]. Let H_c denote the projection of H onto R^n for changing only c: H_c = {δc : (0, δc) ∈ H}. Similarly, let H_b denote the projection of H onto R^m for changing only b: H_b = {δb : (δb, 0) ∈ H}. Greenberg [3] showed that H is a convex cone that satisfies a decoupling principle: H = H_b ⊕ H_c.

To help build intuition, notice first that if the dimension of the primal optimality region, dim(P*), is zero, this means it is an extreme point. In that case, every vector in R^n can be used to change c without changing the optimal partition, so dim(H_c) = n. At the other extreme, suppose dim(P*) = n − m, such as when c = 0, so every feasible solution is optimal in the primal LP. In that case, H_c consists of change vectors that maintain equal net effects among the positive variables, so dim(H_c) = m. This latter case can be illustrated with the following.

Example. min{Σ_j 0·x_j : Σ_j x_j = 1, x ≥ 0}. In this case, B = {1, ..., n}. In order for this partition not to change for the LP min{Σ_j δc_j x_j : Σ_j x_j = 1, x ≥ 0}, it is necessary and sufficient that δc_j = δc_1 for all j. Thus, dim(H_c) = 1.

In both cases, we see that dim(P*) + dim(H_c) = n. This is what we shall prove in general along with related results.

Theorem 2.1. The following equations hold for any LP whose primal and dual sets have nonempty strict interiors.
1. dim(P*) + dim(H_c) = n.
2. dim(D*) + dim(H_b) = m.
3. dim(P* × D*) + dim(H) = n + m.
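The two boundary cases above can be checked with a short numerical sketch. The following is our own illustration (not from the paper), assuming NumPy; the optimal partition of the example (B = {1, ..., n}, N empty) is hard-coded, and the dimensions are computed from the formulas dim(P*) = |B| − rank(A_B) and dim(H_c) = |N| + rank(A_B) used in the proof:

```python
import numpy as np

# Example: min{ 0·x : sum(x) = 1, x >= 0 } with n = 5.
# Every feasible point is optimal, so B = {1,...,n} and N is empty.
n = 5
A = np.ones((1, n))          # the single equality constraint sum(x) = 1
B = list(range(n))           # indexes of the positive variables
N = []                       # indexes with positive dual slacks (none here)

A_B = A[:, B]
rank_A_B = np.linalg.matrix_rank(A_B)
dim_P_opt = len(B) - rank_A_B     # dimension of the primal optimal set
dim_H_c = len(N) + rank_A_B       # dimension of the invariance cone H_c

print(dim_P_opt, dim_H_c, dim_P_opt + dim_H_c)   # -> 4 1 5
```

As Theorem 2.1 asserts, the two dimensions sum to n.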
Proof. From Lemma IV.44 in [6], we have dim(P*) = |B| − rank(A_B). The conditions for δc ∈ H_c are yA_B = c_B + θ δc_B and yA_N < c_N + θ δc_N for some θ > 0. Thus, δc_N can be arbitrary, so

    dim(H_c) = |N| + dim{δc_B : there exists δy ∈ R^m with δy A_B = δc_B} = |N| + rank(A_B).

This implies dim(P*) + dim(H_c) = |B| + |N| = n.
The second statement has a similar argument. From Lemma IV.44 in [6], dim(D*) = m − rank(A_B). The conditions for δb ∈ H_b are A_B x_B = b + θ δb and x_B > 0. Thus, dim(H_b) = rank(A_B), so dim(D*) + dim(H_b) = m. The last statement follows from the decoupling principle, upon adding the first two equations:

    H = H_b ⊕ H_c  implies  dim(H) = dim(H_b) + dim(H_c).

We now consider some corollaries whose proofs follow directly from the theorem but whose meanings lend insight into how perturbation relates to the dimensions of the primal and dual optimality regions. The dimension of a set is sometimes called the degrees of freedom. If there are n variables and no constraints on their values, the set has the full degrees of freedom, which is n; i.e., each variable can vary independently. When the set is defined by a system of m independent equations, as in our case, we sometimes refer to m as the degrees of freedom lost. Because we assume that there exists a strict interior solution x > 0, there are no implied equalities among the nonnegativity constraints, so dim(P) = n − m. Thus, the feasibility region has m degrees of freedom lost due to the equations that relate the variables.

A meaningful special case is when there is an excess number of columns, say |B| = m + k, and there is enough linear independence retained in the columns so that rank(A_B) = m (recall that we assume rank(A) = m). Then, dim(P*) = k, so dim(H_c) = n − k. Expressed in words, the degrees of freedom lost in varying objective coefficients equals the number of excess columns over those of a basic optimal solution. Furthermore, rank(A_B) = m is equivalent to dim(D*) = 0 (i.e., unique dual solution), so we can say the following.

Corollary 2.2. The following are equivalent.
1. The dual solution is unique.
2. dim(H_c) = n + m − |B|.
3. dim(H_b) = m.

Another special case arises when the LP is a conversion from the inequality constraints A′x ≥ b, where A′ is m × n′ and rank(A′) = m. In that case, A = [A′ −I], and n = n′ + m.
Suppose the structural variables satisfy x′* > 0, so B includes all of the structural variables and some of the surplus variables, say |B| = n′ + k. Then, dim(P*) = k, and Theorem 2.1 implies dim(H_c) = n′ + m − k. Since we do not allow the costs of the surplus variables to be nonzero, we can reduce this by m, giving dim(H_c) = n′ − k. Expressed in words, this says that the degrees of freedom lost in varying structural cost coefficients equals the number of positive surplus variables. A similar result follows for the primal.

The next corollary says, in part, that dim(P*) = 0 if and only if dim(H_c) = n. Expressed in words, this says that the primal solution is unique if and only if every objective coefficient can be perturbed independently without changing the optimal partition. The last equivalence includes the special case of a nondegenerate basic solution, in which case |B| = m, so every right-hand side can be perturbed without changing the optimal partition.

Corollary 2.3. The following are equivalent.
1. The primal solution is unique.
2. dim(H_c) = n.
3. dim(H_b) = |B|.

These corollaries combine into the following, which is the familiar case of a unique strictly complementary optimum which is basic.

Corollary 2.4. The following are equivalent.
1. The primal-dual solution is unique.
2. dim(H_c) = n and dim(H_b) = m.
3. dim(H) = m + n.

The following corollary says that dim(H_c) ≥ m, and it follows from the main theorem since the maximum dimension of P* is n − m. The analogous bound for dim(H_b) is merely that it is nonnegative, since the maximum dimension of D* is m.

Corollary 2.5. There are at least m degrees of freedom to vary the objective coefficients without changing the optimal partition.

In the next section, we extend Theorem 2.1 to convex quadratic programs, and note that care must be taken when specializing it to an LP.

3. Quadratic programs. We now extend Theorem 2.1 to the convex quadratic program

    min{cx + (1/2) x^T Q x : Ax = b, x ≥ 0},

where Q is symmetric and positive semidefinite. We use the Wolfe dual [2]

    max{yb − (1/2) u^T Q u : yA + s − u^T Q = c, s ≥ 0}.

Let QP and QD denote primal and dual feasibility regions, respectively. Let us introduce the following notation:

    QP* = {x : x ∈ QP and x is primal optimal},
    QD* = {(y, s) : (y, s, u) ∈ QD and (y, s, u) is dual optimal},
    QD*_u = {(y, s, u) : (y, s, u) ∈ QD and (y, s, u) is dual optimal}.

Here QP* and QD* denote their optimality regions, except that we define QD* exclusive of the u-variables, while QD*_u denotes the full dual optimality region, to distinguish it from QD*. We shall explain this shortly. Following Jansen [4] and Berkelaar, Roos, and Terlaky [1], an optimal partition is defined by three sets (B, N, T), where

    B = {j : x_j > 0 for some x ∈ QP*},
    N = {j : s_j > 0 for some (y, s) ∈ QD*},
    T = {1, ..., n} \ (B ∪ N).

We assume that the solution obtained is maximal [1]: x*_j > 0 for j ∈ B and s*_j > 0 for j ∈ N. Güler and Ye [5] show that many interior point algorithms converge to a solution whose support sets comprise the maximal partition: B = σ(x*), N = σ(s*), and T = {1, ..., n} \ (B ∪ N). Unlike linear programming, there is no guarantee of a strictly complementary optimal solution, so T need not be empty.
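The relationship between the primal and the Wolfe dual can be illustrated numerically. The sketch below is our own (not from the paper) and assumes NumPy; it constructs a KKT point with u = x by choosing c so that s = 0, and checks that the primal and dual objectives then agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
M = rng.standard_normal((n, n))
Q = M @ M.T                                # symmetric positive semidefinite
A = rng.standard_normal((m, n))

x = np.abs(rng.standard_normal(n)) + 0.1   # a strictly positive point
b = A @ x                                  # make x feasible by construction
y = rng.standard_normal(m)
c = y @ A - x @ Q                          # then yA + s - x'Q = c with s = 0

primal = c @ x + 0.5 * x @ Q @ x           # cx + (1/2) x'Qx
dual = y @ b - 0.5 * x @ Q @ x             # Wolfe dual objective with u = x
print(abs(primal - dual) < 1e-9)           # -> True
```

Since s = 0 here, x is optimal with B = {1, ..., n}; the point of the check is only that the Wolfe dual closes the duality gap when u = x.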
For this and other reasons, there are some important differences (see [1, 4] for details) that affect our extension of Theorem 2.1. In particular, the decoupling principle does not apply, since a change in c affects both primal and dual optimality conditions.
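The dimension counts in this section repeatedly use a rank formula for sets of the form S = {Fu : Gu = 0}, stated as Lemma 3.1 below. As a numerical sanity check — our own sketch, not from the paper, assuming NumPy, with random F and G — the formula dim(S) = rank([F; G]) − rank(G) can be tested directly against a null-space computation:

```python
import numpy as np

rng = np.random.default_rng(1)
m, g, n = 5, 3, 7
F = rng.standard_normal((m, n))
G = rng.standard_normal((g, n))

rank_G = np.linalg.matrix_rank(G)
# The rows of Vt beyond rank(G) form an orthonormal basis of null(G).
Z = np.linalg.svd(G)[2][rank_G:].T
dim_S = np.linalg.matrix_rank(F @ Z)       # dim of {F u : G u = 0}
rhs = np.linalg.matrix_rank(np.vstack([F, G])) - rank_G
print(dim_S, rhs)                          # -> 4 4
```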
We begin our extension with the following lemma. In the proof we use the following notation:

    col(G) = column space of G = {u : u = Gx for some x ∈ R^n},
    N(G) = null space of G = {x : Gx = 0}.

Lemma 3.1. Let F and G be m × n and g × n matrices, respectively, and consider the set

    S = {v : v = Fu for some u with Gu = 0}.

Then, dim(S) = rank([F; G]) − rank(G), where [F; G] denotes F stacked on G.

Proof. Without loss of generality assume G has full row rank, and let {u_1, ..., u_g} be a basis for col(G). Let {v_1, ..., v_s} be a basis for S, where dim(S) = s, and consider the following set of vectors in col([F; G]):

    (w_1; u_1), ..., (w_g; u_g), (v_1; 0), ..., (v_s; 0),    where w_i = F G^T (G G^T)^{-1} u_i.

Once we prove that this is a basis for col([F; G]), we have that g + s = rank([F; G]), which implies the desired result.

First, we shall prove that these vectors are linearly independent. Suppose Σ_i α_i (w_i; u_i) + Σ_j β_j (v_j; 0) = 0. Since {u_1, ..., u_g} is a basis, α = 0, which then implies β = 0, because {v_1, ..., v_s} are also linearly independent.

Second, we shall prove that these vectors span col([F; G]). Let (v; u) = (Fx; Gx) for some x ∈ R^n. Decompose x = y + z, where y ∈ col(G^T) and z ∈ N(G). Then Gx = Gy = G G^T λ, where y = G^T λ, and Fx = Fy + Fz. Since Fz ∈ S, we have Fx = F G^T λ + Σ_j β_j v_j for some β. Also, u = Gx = G G^T λ, and since u ∈ col(G), u = Σ_i α_i u_i for some α. This implies G^T λ = Σ_i α_i G^T (G G^T)^{-1} u_i, so

    Fx = Σ_i α_i F G^T (G G^T)^{-1} u_i + Σ_j β_j v_j.

By the definition of w, we have derived (α, β) such that (v; u) = Σ_i α_i (w_i; u_i) + Σ_j β_j (v_j; 0).

To prove the main theorem, we use the following dimension results of Berkelaar, Roos, and Terlaky [1]:

    (3.1) dim(QP*) = |B| − rank([A_B; Q^B]),
    (3.2) dim(QD*_u) = m − rank([A_B A_T]) + n − rank(Q).

The last portion, n − rank(Q), accounts for the u-variables, because the dual conditions can use x^T Q in place of u^T Q, leaving u to appear only in the equation Qu = Qx. For
our purposes it is not necessary or desirable to include this, so we define the dual optimality region exclusive of the u-variables:

    QD* = {(y, s) : (y, s, x) ∈ QD*_u for some x ∈ QP*}.

Then, (3.2) yields the dimension of the dual optimality region that we shall use:

    (3.3) dim(QD*) = m − rank([A_B A_T]).

As in the linear case, s_N > 0 implies that each component of c_N can vary independently, so dim(H) is the sum of |N| and the dimension of the set of other possible changes. Keeping x_N = x_T = 0 and s_B = s_T = 0, the partition does not change if and only if there exist (δy, δu, δx_B) satisfying the following primal-dual conditions:

    (3.4)  δy A_B − δu Q^B = δc_B,   δy A_T − δu Q^T = δc_T,   A_B δx_B = δb,   Q δu^T = Q^B δx_B,

where δy and δu are row vectors and δx_B is a column vector. Here we follow the notation in [1]:

    Q_I = rows of Q associated with index set I,
    Q^J = columns of Q associated with index set J,
    Q_I^J = submatrix of Q associated with index sets I and J.

The quadratic extensions rely on the fact that the rank of the coefficient matrix of (3.4), which we denote by M, is related to the rank of the matrices found in statements (3.1) and (3.3). These relations are formalized in the following lemma.

Lemma 3.2. The following relations hold for Q positive semidefinite:

    (3.5) rank([A_B A_T]) + rank([A_B; Q^B]) = rank(M) − rank(Q),
    (3.6) rank([Q  Q^B; 0  A_B]) = rank(Q) + rank(A_B).

Proof. To prove (3.5), performing elementary row and column operations on M (using the block Q to clear the blocks involving rows and columns of Q against it) produces the following matrix of the same rank:

    [ A_{B∪T}^T   0   −Q_{B∪T}^B ]
    [     0       Q        0     ]
    [     0       0       A_B    ].

So,

    rank(M) = rank([ A_{B∪T}^T  −Q_{B∪T}^B ; 0  A_B ]) + rank(Q).
The positive semidefiniteness of Q implies that the rows of Q_T^B are linearly dependent on the rows of Q_B^B [1]. Hence, a further series of row and column operations that preserve rank reduces the last matrix so that

    rank([ A_{B∪T}^T  −Q_{B∪T}^B ; 0  A_B ]) = rank([A_B A_T]) + rank([A_B; Q^B]),

which yields the result. The proof of (3.6) is similar, using the positive semidefiniteness of Q in reducing the matrix to row echelon form.

We now have what we need to prove the following extension of Theorem 2.1.

Theorem 3.3. The following equations hold for any convex quadratic program whose primal and dual optimal sets are not empty.
1. dim(QP*) + dim(H_c) = n − |T| + rank([A_B A_T]) − rank(A_B).
2. dim(QD*) + dim(H_b) = m − rank([A_B A_T]) + rank(A_B).
3. dim(QP* × QD*) + dim(H) = n + m − |T|.

Proof. To prove 1, we set δb = 0 in (3.4), and apply Lemmas 3.1 and 3.2 to
produce the following:

    dim(H_c) = |N| + rank(M) − rank([ Q  −Q^B ; 0  A_B ])
             = |N| + rank([A_B A_T]) + rank([A_B; Q^B]) + rank(Q) − rank(Q) − rank(A_B)
             = |N| + rank([A_B A_T]) + rank([A_B; Q^B]) − rank(A_B).

Adding (3.1) to the last statement and substituting n = |B| + |N| + |T| gives the first result.

Similarly, to prove 2, set δc_B = 0 and δc_T = 0 in (3.4). Then, Lemma 3.1 implies

    dim(H_b) = rank(M) − rank(M′),

where M′ is the coefficient matrix of (3.4) with the row block corresponding to δb removed. Using row and column operations on the matrix in the last term together with Lemma 3.2, we obtain

    dim(H_b) = rank(A_B),

where the last equation follows from (3.6). Adding this to (3.3) yields the second result.

The third result does not follow from a decoupling principle, as in the linear case where H = H_b ⊕ H_c. Rather, it needs a development similar to the first two parts just obtained. Using Lemmas 3.1 and 3.2 yields the following equations:
    dim(H) = |N| + rank(M) − rank([ 0  Q  −Q^B ])
           = |N| + rank([A_B A_T]) + rank([A_B; Q^B]).

The sum of the last statement with (3.1) and (3.3), plus substituting n = |B| + |N| + |T|, implies the third result.

Notice that the statements in Theorem 3.3 reduce to the corresponding statements in Theorem 2.1 when T = ∅ and Q = 0, which is the case for an LP. This reduction occurs because we eliminated the u-variables. In fact, the statements in the theorem imply each of the following.

    dim(QP*) + dim(H_c) ≤ n, with equality if T = ∅.
    dim(QD*) + dim(H_b) ≤ m, with equality if T = ∅.
    dim(QP* × QD*) + dim(H) ≤ m + n, with equality if T = ∅.

The reduction of QD* also enables us to have the following extension of Corollary 2.2. In fact, u is unique if and only if Q is positive definite, because it can be any solution to Qu = Qx for any x ∈ QP*.

Corollary 3.4. The following are equivalent.
1. The dual solution is unique.
2. dim(H_c) = |N| + m − rank(A_B) + rank([A_B; Q^B]).
3. dim(H_b) = m + rank(A_B) − rank([A_B A_T]).

The above cases reduce to the corresponding LP cases in Corollary 2.2, where Q = 0 and T = ∅, as does the following extension of Corollary 2.3.

Corollary 3.5. The following are equivalent.
1. The primal solution is unique.
2. dim(H_c) = n − |T| + rank([A_B A_T]) − rank(A_B).
3. dim(H_b) = |B| − rank([A_B; Q^B]) + rank(A_B).

Combining these, despite the absence of a decoupling principle, the dimensions are additive, so we also obtain the following extension of Corollary 2.4.

Corollary 3.6. The following are equivalent.
1. The primal-dual solution is unique.
2. dim(H_c) = n − |T| + rank([A_B A_T]) − rank(A_B) and dim(H_b) = m − rank([A_B A_T]) + rank(A_B).
3. dim(H) = m + n − |T|.

Unlike the LP case, this shows that we can lose degrees of freedom in varying the cost coefficients. For example, if δc_j > 0 for j ∈ T, the partition immediately changes, since s_j = θ δc_j is optimal for the perturbed quadratic program. This loss appears in the last extension, which follows.

Corollary 3.7. There are at least m − |T| + rank([A_B A_T]) − rank(A_B) degrees of freedom to vary the objective coefficients without a change in the optimal partition.
This lower bound on dim(H_c) follows in the same way as in Corollary 2.5, and it is m when T = ∅. More generally, we see that the bound is at most m, which reflects the fact that we can lose some degrees of freedom by lacking strict complementarity.
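The reductions in the proof of Lemma 3.2 exploit the fact that the columns collected in the block Q^B are columns of Q itself, so they can be cleared by column operations. As a numerical sanity check — our own sketch with random data, assuming NumPy — the identity rank([Q Q^B; 0 A_B]) = rank(Q) + rank(A_B) can be verified:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
M = rng.standard_normal((4, n))
Q = M.T @ M                        # symmetric PSD with rank 4
A = rng.standard_normal((m, n))
B = [0, 2, 3]                      # an index set playing the role of B

Q_B = Q[:, B]                      # columns of Q indexed by B
A_B = A[:, B]
big = np.block([[Q, Q_B], [np.zeros((m, n)), A_B]])
lhs = np.linalg.matrix_rank(big)
rhs = np.linalg.matrix_rank(Q) + np.linalg.matrix_rank(A_B)
print(lhs, rhs)                    # -> 7 7
```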
4. Concluding comments. For LPs, the dimension of the cone of rim direction vectors for which the optimal partition does not change has an Eulerian property with the dimension of the optimality region: they sum to the number of variables and equations. This decouples into Eulerian properties for varying the primal and dual right-hand sides separately: cost coefficients change with lost degrees of freedom equal to the dimension of primal space; right-hand sides change with lost degrees of freedom equal to the dimension of dual space. The comparable equation for quadratic programs is not Eulerian in that the sum of dimensions depends on the partition, notably on the number of complementary coordinate pairs that are both zero.

REFERENCES

[1] A. Berkelaar, C. Roos, and T. Terlaky, The optimal set and optimal partition approach to linear and quadratic programming, in Advances in Sensitivity Analysis and Parametric Programming, T. Gal and H. Greenberg, eds., Kluwer Academic Publishers, Boston, MA, 1997, Chapter 6.
[2] H. Greenberg, Mathematical Programming Glossary, online document.
[3] H. Greenberg, Rim Sensitivity Analysis from an Interior Solution, Technical report CCM 86, Center for Computational Mathematics, Mathematics Department, University of Colorado at Denver, Denver, CO.
[4] B. Jansen, Interior Point Techniques in Optimization: Complexity, Sensitivity, and Algorithms, Kluwer Academic Publishers, Boston, MA.
[5] O. Güler and Y. Ye, Convergence behavior of interior-point algorithms, Math. Programming.
[6] C. Roos, T. Terlaky, and J.-P. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley and Sons, Chichester, UK, 1997.
Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).
More informationCopositive Plus Matrices
Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their
More informationMAT 242 CHAPTER 4: SUBSPACES OF R n
MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)
More informationMidterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationLecture: Cone programming. Approximating the Lorentz cone.
Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationOn the projection onto a finitely generated cone
Acta Cybernetica 00 (0000) 1 15. On the projection onto a finitely generated cone Miklós Ujvári Abstract In the paper we study the properties of the projection onto a finitely generated cone. We show for
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationKey words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone
ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More information9. Geometric problems
9. Geometric problems EE/AA 578, Univ of Washington, Fall 2016 projection on a set extremal volume ellipsoids centering classification 9 1 Projection on convex set projection of point x on set C defined
More informationDeterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming
Deterministic Methods for Detecting Redundant Linear Constraints in Semidefinite Programming Daniel Stover Department of Mathematics and Statistics Northen Arizona University,Flagstaff, AZ 86001. July
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationConstrained optimization
Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More informationy Ray of Half-line or ray through in the direction of y
Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem
More informationFarkas Lemma, Dual Simplex and Sensitivity Analysis
Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x
More informationResearch Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007
Computer and Automation Institute, Hungarian Academy of Sciences Research Division H-1518 Budapest, P.O.Box 63. ON THE PROJECTION ONTO A FINITELY GENERATED CONE Ujvári, M. WP 2007-5 August, 2007 Laboratory
More informationA Simple Computational Approach to the Fundamental Theorem of Asset Pricing
Applied Mathematical Sciences, Vol. 6, 2012, no. 72, 3555-3562 A Simple Computational Approach to the Fundamental Theorem of Asset Pricing Cherng-tiao Perng Department of Mathematics Norfolk State University
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More informationMODULE 8 Topics: Null space, range, column space, row space and rank of a matrix
MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x
More informationA new primal-dual path-following method for convex quadratic programming
Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3
ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationINVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA
BULL. AUSRAL. MAH. SOC. VOL. 24 (1981), 357-366. 9C3 INVEX FUNCIONS AND CONSRAINED LOCAL MINIMA B.D. CRAVEN If a certain weakening of convexity holds for the objective and all constraint functions in a
More informationSENSITIVITY ANALYSIS IN CONVEX QUADRATIC OPTIMIZATION: SIMULTANEOUS PERTURBATION OF THE OBJECTIVE AND RIGHT-HAND-SIDE VECTORS
SENSITIVITY ANALYSIS IN CONVEX QUADRATIC OPTIMIZATION: SIMULTANEOUS PERTURBATION OF THE OBJECTIVE AND RIGHT-HAND-SIDE VECTORS ALIREZA GHAFFARI HADIGHEH Department of Mathematics, Azarbaijan University
More informationChap6 Duality Theory and Sensitivity Analysis
Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationA FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS
A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics
More information1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations
The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear
More informationNotes taken by Graham Taylor. January 22, 2005
CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first
More informationGame theory: Models, Algorithms and Applications Lecture 4 Part II Geometry of the LCP. September 10, 2008
Game theory: Models, Algorithms and Applications Lecture 4 Part II Geometry of the LCP September 10, 2008 Geometry of the Complementarity Problem Definition 1 The set pos(a) generated by A R m p represents
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationDuality Theory, Optimality Conditions
5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every
More information3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions
A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence
More informationConvex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version
Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationSelected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.
. Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,
More informationSOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction
ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationNote 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)
Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical
More information4.6 Linear Programming duality
4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationDuality Theory of Constrained Optimization
Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive
More informationRank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about
Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix
More information