The Optimal Set and Optimal Partition Approach to Linear and Quadratic Programming. Arjan B. Berkelaar, Kees Roos and Tamás Terlaky. October 16, 1996


The Optimal Set and Optimal Partition Approach to Linear and Quadratic Programming*

Arjan B. Berkelaar, Kees Roos and Tamás Terlaky

October 16, 1996

Department of Econometrics and Operations Research, Econometric Institute, Faculty of Economics, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands.

Abstract

In this chapter we describe the optimal set approach to sensitivity analysis for LP. We show that optimal partitions and optimal sets remain constant between two consecutive transition-points of the optimal value function. The advantage of using this approach instead of the classical approach (using optimal bases) is shown. Moreover, we present an algorithm to compute the partitions, optimal sets and the optimal value function. This is a new algorithm, and it uses primal and dual optimal solutions. We also extend some of the results to parametric quadratic programming, and discuss differences and resemblances with the linear programming case.

Key words: Sensitivity Analysis, Parametric Programming, Complementarity

* This paper will appear as Chapter 6 in H. Greenberg and T. Gal (editors), Recent Advances in Sensitivity Analysis and Parametric Programming, Kluwer Publishers, 1997.

1 Introduction

In this chapter we deal with parametric versions of linear programming (LP) and convex quadratic programming (QP). First consider the LP problem (P) in standard format,

(P)    min { c^T x : Ax = b, x ≥ 0 },

where c, x ∈ R^n, b ∈ R^m, and A is an m × n matrix with full row rank. The dual problem (D) is written as

(D)    max { b^T y : A^T y + s = c, s ≥ 0 },

where y ∈ R^m and s ∈ R^n. The input data for both problems consists of the matrix A and the vectors b and c. In Chapter 4 it is shown that difficulties may arise when the problem under consideration is degenerate. It is stated there that the optimal basis may not be unique, that multiple optimal solutions may exist, and that the notion of shadow price is not correctly defined.¹ In Chapter 3 strictly complementary solutions were already mentioned. These solutions, although shown to exist as early as 1956 by Goldman and Tucker [10], came into view in the 1980s due to the immense popularity of interior point methods (Den Hertog [14] and Jansen [15]). This popularity was initiated by the seminal paper of Karmarkar [17]. Güler and Ye [12] showed that interior point methods for LP generate a strictly complementary solution (in the limit). This triggered interest in investigating parametric analysis without using bases, instead making use of strictly complementary solutions (Adler and Monteiro [1], Mehrotra and Monteiro [22], and Jansen et al. [16]). In this chapter we describe this approach and formulate a new algorithm to compute the optimal value function, which uses primal and dual optimal solutions.

Let us now turn to the quadratic programming formulation. The general CQP problem is given by

(QP)    min { c^T x + (1/2) x^T Q x : Ax = b, x ≥ 0 },

where c, x ∈ R^n, b ∈ R^m, A is an m × n matrix with full row rank and Q is a symmetric positive semidefinite n × n matrix.
The Wolfe dual of (QP) is given by

(QD)    max { b^T y − (1/2) u^T Q u : A^T y + s − Qu = c, s ≥ 0 },

where y ∈ R^m and u, s ∈ R^n. The input data for both problems consists of the matrices A and Q and the vectors b and c. We assume that Q is positive semidefinite, resulting in a convex quadratic programming problem. Unless stated otherwise, QP refers to a convex quadratic programming problem. It is well known that if there exist optimal solutions for (QP), then there also exist optimal solutions for which x = u. Furthermore, it is clear that LP is a special case of QP.² In this chapter we are only concerned with changes in the vectors b and c; A and Q are taken to be fixed. Although sensitivity analysis and parametric programming for QP are not being performed on a large scale, there is at least one important application on a commercial level. The well-known Markowitz mean-variance model [20] for modern portfolio theory is formulated as a parametric QP. The optimal value function of this parametric QP is known as the efficient frontier. The efficient frontier is a useful tool, used by various financial institutions for portfolio decision problems. Recent studies on computing the efficient frontier (see, e.g., [19, 26, 27]) all use optimal bases. Similar difficulties w.r.t. degeneracy as in the LP case exist for QP (the optimal basis³

¹ See also Gal [9] and Greenberg [11] for a survey, and Rubin and Wagner [24] for an overview of practical implications.
² Take Q to be the zero matrix.
³ Note that the primal or the dual QP problem may have a solution that is not a basic solution. However, if we consider both the primal and the dual QP problem together, then, if the problem has a solution, there always exists an optimal basis (rewrite the primal and dual problem as a linear complementarity problem to see this). In this chapter we take the freedom to speak about an optimal basis in the latter sense.

may not be unique, multiple solutions may exist, etc.). Berkelaar et al. [4] consider the efficient frontier as the outcome of their analysis for parametric QP using maximal complementary solutions. Interior point methods for QP generate such a maximal complementary solution (in the limit). A related result was already shown by McLinden [21] in 1980 (see also Güler and Ye [13]). We describe parametric analysis for QP using maximal complementary solutions in this chapter, and again formulate an algorithm to compute the optimal value function. This algorithm uses both primal and dual optimal solutions and their supports.

Let us denote the optimal value of (QP) and (QD) as z(b, c), with z(b, c) = −∞ if (QP) is unbounded and (QD) infeasible, and z(b, c) = ∞ if (QD) is unbounded and (QP) infeasible. If (QP) and (QD) are both infeasible then z(b, c) is undefined. We call z the optimal value function for the data b and c. Since LP is a special case of QP, we also denote the optimal value function for LP problems by z(b, c), with the same conventions. Although in the literature assumptions are often made to prevent situations concerning degeneracy, we shall not do so here. The main tools we use are the existence of strictly complementary and maximal complementary solutions in LP and QP respectively. Such solutions uniquely define the partition of the problem. We show that the pieces of the optimal value function correspond to intervals for the parameter on which the partition is constant. The proposed algorithms to compute the optimal value function are based on this key result.

This chapter is organized as follows. In Section 2 we consider a transportation (LP) example to show that the classical approach to sensitivity analysis based on optimal bases leads to difficulties in case of degeneracy.
Section 3 describes the optimal partition, and we show how this concept and given optimal solutions can be used to characterize the optimal sets of LP and QP problems. In Section 4 we consider parametric LP based on the optimal set approach. We omit proofs in Section 4 and postpone them to Section 5. In Section 5 we consider parametric QP. Since LP is a special case of QP, it is left to the reader to specialize the proofs to LP. Some results for LP and their proofs can be formulated and presented differently or need no proof at all. Section 4 has been organized so as to concentrate on the resemblances between parametric LP and parametric QP. For a more detailed analysis in LP the reader is referred to Jansen et al. [15, 16]. Finally, we close this chapter by outlining how the ideas of Sections 4 and 5 can be applied to sensitivity analysis.

2 The Optimal Bases Approach - An Example

Commercial packages for LP and QP usually offer the possibility to perform sensitivity analysis. As far as we know, sensitivity analysis in all existing commercial packages is based on optimal bases. As a result, the outcome of the sensitivity analysis is often only partly correct. In this section we show this using an example. The classical approach to sensitivity analysis is based on pivoting methods (such as the Simplex method for LP) for solving LP and QP problems. These methods produce a so-called basic solution of the problem. It suffices for our purpose to know that such a solution is determined by an optimal basis. We briefly consider a small textbook LP problem to illustrate the problems that can occur in case of degeneracy. This example is taken from Jansen [15]. For a more detailed description we refer to Jansen et al. [16] and Jansen [15]. To illustrate the shortcomings of the implemented sensitivity analysis techniques we apply several commercial packages to this small LP problem.
2.1 Comparison of the classical and the new approach

Example 1 We consider a simple transportation problem with three supply and three demand nodes.

Table 1: Sensitivity analysis for a transportation problem - RHS changes

Ranges of supply and demand values
LP package     b1 (2)    b2 (6)    b3 (5)     b4 (3)   b5 (3)   b6 (3)
CPLEX          [0,3]     [4,7]     [1,∞)      [2,7]    [2,5]    [2,5]
LINDO          [1,3]     [2,∞)     [4,7]      [2,4]    [1,4]    [1,7]
PC-PROG        [0,∞)     [4,∞)     [3,6]      [2,5]    [0,5]    [2,5]
XMP            [0,3]     [6,7]     [1,∞)      [2,3]    [2,3]    [2,7]
OSL            [0,3]     [4,7]     (−∞,∞)     [2,7]    [2,5]    [2,5]
Correct range  [0,∞)     [2,∞)     [1,∞)      [0,7]    [0,7]    [0,7]

The problem is

min  x1 + x2 + ... + x9
s.t. x1 + x2 + x3 + x10 = 2
     x4 + x5 + x6 + x11 = 6
     x7 + x8 + x9 + x12 = 5
     x1 + x4 + x7 − x13 = 3
     x2 + x5 + x8 − x14 = 3
     x3 + x6 + x9 − x15 = 3
     x_i ≥ 0,  i = 1, ..., 15.

The results of a sensitivity analysis are shown in Table 1. The columns correspond to the RHS elements; the current value of each element is given in parentheses. The rows of the table correspond to the five packages CPLEX, LINDO, PC-PROG, XMP and OSL, and show the ranges produced by these packages. The last row contains the ranges calculated by the approach outlined in this chapter.⁴ The different ranges in Table 1 are due to the different optimal bases found by the different packages. For each optimal basis the range can be calculated by examining for which values of the RHS element the basis remains optimal. The table demonstrates the weaknesses of the optimal bases approach as implemented in the commercial packages. Sensitivity analysis is considered to be a tool for obtaining information about the bottlenecks and degrees of freedom in the problem. The information provided by the commercial packages is confusing and hardly allows a solid interpretation. The difficulties lie in the fact that an optimal basis need not be unique. In this chapter we show that the optimal partition of a strictly complementary solution for LP, or of a maximal complementary solution for QP, leads to a much more solid analysis. The reason for this is that the optimal partition is the same for any strictly or maximal complementary solution.
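The degenerate LP of Example 1 is easy to reproduce with any LP solver. The following sketch uses scipy.optimize.linprog, which is merely one convenient choice and not part of the original example:

```python
import numpy as np
from scipy.optimize import linprog

# Transportation problem of Example 1: minimize total shipments x1..x9,
# with supply slacks x10..x12 and demand surpluses x13..x15.
c = np.array([1.0] * 9 + [0.0] * 6)
A_eq = np.zeros((6, 15))
A_eq[0, [0, 1, 2, 9]] = 1                   # x1 + x2 + x3 + x10 = 2
A_eq[1, [3, 4, 5, 10]] = 1                  # x4 + x5 + x6 + x11 = 6
A_eq[2, [6, 7, 8, 11]] = 1                  # x7 + x8 + x9 + x12 = 5
A_eq[3, [0, 3, 6]] = 1; A_eq[3, 12] = -1    # x1 + x4 + x7 - x13 = 3
A_eq[4, [1, 4, 7]] = 1; A_eq[4, 13] = -1    # x2 + x5 + x8 - x14 = 3
A_eq[5, [2, 5, 8]] = 1; A_eq[5, 14] = -1    # x3 + x6 + x9 - x15 = 3
b = np.array([2.0, 6.0, 5.0, 3.0, 3.0, 3.0])

res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
print(res.fun)   # 9.0: each of the nine demanded units is shipped exactly once
```

Different solvers, or different pivoting rules within one solver, end in different optimal bases of this degenerate problem, which is exactly why the packages in Table 1 report different, mutually inconsistent ranges.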
3 Optimal Partitions and Optimal Sets

3.1 Linear Programming

The feasible regions of (P) and (D) are denoted as

P := { x : Ax = b, x ≥ 0 },    D := { (y, s) : A^T y + s = c, s ≥ 0 }.

⁴ The range provided by the IBM package OSL (Optimization Subroutine Library) for b3 is not a subrange of the correct range; this must be due to a bug in OSL. The correct range for the optimal basis found by OSL should be [1, ∞).

Assuming that (P) and (D) are both feasible, the optimal sets of (P) and (D) are denoted as P* and D*. We define the index sets B and N by

B := { i : x_i > 0 for some x ∈ P* },
N := { i : s_i > 0 for some (y, s) ∈ D* }.

The Duality Theorem for LP implies that B ∩ N = ∅, and the Goldman-Tucker Theorem [10] that B ∪ N = {1, 2, ..., n}. So B and N form a partition of the full index set. This (ordered) partition, denoted as π = (B, N), is the optimal partition of the problem (P) and of the problem (D). In the rest of this chapter we assume that b and c are such that (P) and (D) have optimal solutions, and π = (B, N) denotes the optimal partition of both problems. By definition, the optimal partition is determined by the set of optimal solutions of (P) and (D). In this section it is made clear that, conversely, the optimal partition provides essential information on the optimal solution sets P* and D*. We use the notation x_B and x_N to refer to the restriction of a vector x ∈ R^n to the coordinate sets B and N respectively. Similarly, A_B denotes the restriction of A to the columns in B, and A_N the restriction of A to the columns in N. Now the sets P* and D* can be described in terms of the optimal partition. The next lemma follows immediately from the Duality Theorem and the definition of the optimal partition for LP, and is therefore stated without proof.

Lemma 1 Let x* ∈ P* and (y*, s*) ∈ D*. Given the optimal partition (B, N) of (P) and (D), the optimal sets of both problems are given by

P* = { x : x ∈ P, x^T s* = 0 } = { x : x ∈ P, x_N = 0 },
D* = { (y, s) : (y, s) ∈ D, s^T x* = 0 } = { (y, s) : (y, s) ∈ D, s_B = 0 }.

The next result deals with the dimensions of the optimal sets of (P) and (D). Here, as usual, the (affine) dimension of a subset of R^k is the dimension of the smallest affine subspace of R^k containing the subset.
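The sets B and N can be read off from the supports of a primal-dual optimal pair. A minimal sketch follows; the tiny LP and the tolerance 1e-8 are illustrative assumptions, and note that a simplex-type solver returns a basic optimal solution, which in degenerate cases is not strictly complementary, so its supports may only be subsets of B and N:

```python
import numpy as np
from scipy.optimize import linprog

def supports(x, s, tol=1e-8):
    """sigma(x) and sigma(s); for a strictly complementary optimal
    pair these are exactly the index sets B and N."""
    B = {i for i, v in enumerate(x) if v > tol}
    N = {i for i, v in enumerate(s) if v > tol}
    return B, N

# Illustrative LP:  min x1 + 2 x2  s.t.  x1 + x2 = 1,  x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
x = res.x
y = res.eqlin.marginals          # dual vector y of the equality rows
s = c - A.T @ y                  # dual slacks, s >= 0 at optimality

B, N = supports(x, s)
print(B, N)                      # {0} {1}: the optimal partition pi = (B, N)
```

Here x* = (1, 0) and s* = (0, 1), so x^T s* = 0 as Lemma 1 requires, and this pair happens to be strictly complementary.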
Lemma 2 One has

dim P* = |B| − rank(A_B),
dim D* = m − rank(A_B).

Lemma 2 immediately implies that (P) has a unique solution⁵ if and only if rank(A_B) = |B|. Clearly this happens if and only if the columns of A_B are linearly independent. Also, (D) has a unique solution if and only if rank(A_B) = m, which happens if and only if the rows of A_B are linearly independent. Thus, both (P) and (D) have a unique solution if and only if A_B is a basis (the unique optimal basis).

⁵ Notice that we speak of uniqueness of the optimal solution, not of the optimal basis (which is not necessarily unique).
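The dimension formulas of Lemma 2 can be evaluated directly once B is known. A sketch on a hand-checked example (the data below are assumptions for illustration): for min x1 + x2 + x3 subject to x1 + x2 = 1, x3 = 1, x ≥ 0, every x with x1 + x2 = 1 and x3 = 1 is optimal, while the dual solution y = (1, 1), s = 0 is unique, so B = {1, 2, 3} and N = ∅.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = [0, 1, 2]                 # B = {1, 2, 3} in 0-based indexing
m = A.shape[0]

r = np.linalg.matrix_rank(A[:, B])
dim_P = len(B) - r            # Lemma 2: dim P* = |B| - rank(A_B)
dim_D = m - r                 # Lemma 2: dim D* = m - rank(A_B)
print(dim_P, dim_D)           # 1 0: a segment of primal optima, a unique dual solution
```

Since rank(A_B) = 2 = m here, the dual solution is unique, and since rank(A_B) < |B| = 3, the primal optimal set is a one-dimensional segment, in agreement with the hand computation.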

3.2 Quadratic Programming

In this section we describe results for QP analogous to those in the preceding section. The proofs of these results are included here, and it is left to the reader to specialize them to LP. The feasible regions of (QP) and (QD) are denoted as

QP := { x : Ax = b, x ≥ 0 },    QD := { (u, y, s) : A^T y + s − Qu = c, s ≥ 0 }.

We start with the Duality Theorem for QP, which is stated without proof (see, e.g., Dorn [8]).

Theorem 3 If x is feasible for (QP) and (u, y, s) for (QD), then these solutions are optimal if and only if Qx = Qu and x^T s = 0.

Assuming that (QP) and (QD) are both feasible, the optimal sets of (QP) and (QD) are denoted as QP* and QD*. These optimal sets can be characterized by maximal complementary solutions and the corresponding partition (Güler and Ye [13]). Let us define

B := { i : x_i > 0 for some x ∈ QP* },
N := { i : s_i > 0 for some (u, y, s) ∈ QD* },
T := {1, ..., n} \ (B ∪ N).

The Duality Theorem for QP implies that B ∩ N = ∅. Note that, contrary to LP, the Goldman-Tucker Theorem [10] does not hold in QP: T may be nonempty. So B, N and T form a partition of the full index set. This (ordered) partition, denoted as π = (B, N, T), is the optimal partition of the problem (QP) and of the problem (QD). A maximal complementary solution (x, y, s) is a solution for which

x_i > 0 ⇔ i ∈ B,    s_i > 0 ⇔ i ∈ N.

The existence of such a solution is a consequence of the convexity of the optimal sets of (QP) and (QD); the notion was introduced by McLinden [21]. He showed an important result concerning such solutions, which was used by Güler and Ye [13] to show that interior point methods (such as those of Anstreicher et al. [2], Carpenter et al. [5] and Vanderbei [25]) generate such a solution (in the limit). In Chapter 3 the definition of the support of a vector was given. In this chapter we denote the support of a vector v by σ(v).
Hence, given a strictly complementary or maximal complementary solution (x*, y*, s*) we have B = σ(x*) and N = σ(s*). In the rest of this chapter we assume that b and c are such that (QP) and (QD) have optimal solutions, and π = (B, N, T) denotes the optimal partition of both problems. By definition, the optimal partition is determined by the set of optimal solutions of (QP) and (QD). In this section we show that, conversely, the optimal partition provides essential information on the optimal solution sets QP* and QD*. In the introduction we already mentioned that there exist optimal solutions for which x = u; when useful we denote a dual optimal solution by (x, y, s) instead of (u, y, s). Later we also use the following well-known result.

Lemma 4 Let (x*, (u*, y*, s*)) and (x̃, (ũ, ỹ, s̃)) both be optimal solutions of (QP) and (QD). Then Qx* = Qx̃ = Qu* = Qũ, c^T x* = c^T x̃ and b^T y* = b^T ỹ.

Proof: Let us first consider the case where x* = u* and x̃ = ũ. Since (x*, y*, s*) and (x̃, ỹ, s̃) are both optimal, we conclude that

c^T x* + (1/2) x*^T Q x* = b^T ỹ − (1/2) x̃^T Q x̃.

Using that A^T ỹ + s̃ − Qx̃ = c and multiplying by (x*)^T, it follows that

(1/2) ( x*^T Q x* + x̃^T Q x̃ − 2 x̃^T Q x* ) = b^T ỹ − c^T x* − x̃^T Q x* = −x*^T s̃ ≤ 0.

Thus (x* − x̃)^T Q (x* − x̃) = 0, which implies Q(x* − x̃) = 0, i.e. Qx* = Qx̃, since Q is positive semidefinite. Using that Qx* = Qx̃ we conclude that c^T x* = c^T x̃ and b^T y* = b^T ỹ. From the Duality Theorem the proof is completed. □

We use the notation x_B, x_N and x_T to refer to the restriction of a vector x ∈ R^n to the coordinate sets B, N and T respectively. Similarly, A_B denotes the restriction of A to the columns in B, A_N the restriction to the columns in N, and A_T the restriction to the columns in T. The matrix Q is partitioned similarly. Now the sets QP* and QD* can be described in terms of the optimal partition. The next lemma follows immediately from the Duality Theorem and the definition of the optimal partition for QP, and is therefore stated without proof.

Lemma 5 Let x* ∈ QP* and (u*, y*, s*) ∈ QD*. Given the optimal partition (B, N, T) of (QP) and (QD), the optimal sets of both problems are given by

QP* = { x : x ∈ QP, x^T s* = 0, Qx = Qu* } = { x : x ∈ QP, x_{N∪T} = 0, Qx = Qu* },
QD* = { (u, y, s) : (u, y, s) ∈ QD, s^T x* = 0, Qu = Qx* } = { (u, y, s) : (u, y, s) ∈ QD, s_{B∪T} = 0, Qu = Qx* }.

Lemma 6 Let M ∈ R^{n×n} be a symmetric positive semidefinite matrix, partitioned according to B, N and T. Then N(M_BB) = N(M_.B).

Proof: Write B̄ := N ∪ T and let x be an arbitrary vector in R^n, partitioned according to B and B̄. Then

x^T M x = x_B^T M_BB x_B + 2 x_B̄^T M_B̄B x_B + x_B̄^T M_B̄B̄ x_B̄.

The result is proven by contradiction. Suppose to the contrary that N(M_BB) ≠ N(M_.B). Take x_B from the null space of M_BB with M_B̄B x_B ≠ 0; then M_BB x_B = 0. Let ε be given and consider the vector x(ε) with blocks x_B and ε x_B̄. Furthermore, let α := x_B̄^T (2 M_B̄B x_B) and β := x_B̄^T M_B̄B̄ x_B̄. Choosing, for instance, x_B̄ = M_B̄B x_B, it can easily be seen that α ≠ 0 and β ≥ 0.
Now we can rewrite x(ε)^T M x(ε) as follows:

x(ε)^T M x(ε) = ε ( x_B̄^T (2 M_B̄B x_B) ) + ε² ( x_B̄^T M_B̄B̄ x_B̄ ) = εα + ε²β.

Thus x(ε)^T M x(ε) < 0 for any ε with εα < 0 and |ε| < |α|/β (where |α|/β = ∞ if β = 0). Since M is positive semidefinite we have a contradiction. This implies the result. □

The next result deals with the dimensions of the optimal sets of (QP) and (QD).

Lemma 7 One has

dim QP* = |B| − rank( [ A_B ; Q_BB ] ),
dim QD* = m − rank( [ A_B  A_T ] ) + n − rank(Q),

where [ A_B ; Q_BB ] denotes the matrix obtained by stacking A_B on top of Q_BB, and [ A_B  A_T ] the matrix whose columns are those of A_B and A_T.

Proof: Let a dual optimal solution u be given. Then, by Lemma 5, the optimal set of (QP) is given by

QP* = { x : Ax = b, x_B ≥ 0, x_{N∪T} = 0, Qx = Qu },

and hence the smallest affine subspace of R^n containing QP* is given by

{ x : A_B x_B = b, x_{N∪T} = 0, Q_.B x_B = Qu }.

The dimension of this affine space is equal to the dimension of the null space of the stacked matrix [ A_B ; Q_.B ]. Since the dimension of the null space of this matrix is |B| − rank( [ A_B ; Q_.B ] ), the first statement follows from Lemma 6. For the proof of the second statement we use that the dual optimal set can be described as in Lemma 5. Let x be a given primal optimal solution. Then the dual optimal set is given by

QD* = { (u, y, s) : A^T y + s − Qu = c, s_{B∪T} = 0, s_N ≥ 0, Qu = Qx }.

This is equivalent to

QD* = { (u, y, s) : A_B^T y − (Qu)_B = c_B, A_T^T y − (Qu)_T = c_T, A_N^T y + s_N − (Qu)_N = c_N, s_B = 0, s_T = 0, s_N ≥ 0, Qu = Qx }.

The smallest affine subspace containing this set is

{ (u, y, s) : A_B^T y − (Qu)_B = c_B, A_T^T y − (Qu)_T = c_T, A_N^T y + s_N − (Qu)_N = c_N, s_B = 0, s_T = 0, Qu = Qx }.

Obviously s_N is uniquely determined by y and u, and any y satisfying A_B^T y − (Qu)_B = c_B and A_T^T y − (Qu)_T = c_T yields a point in the affine (y, s)-space. Hence the dimension of this affine space is equal to the dimension of the null space of [ A_B  A_T ]^T, which equals m − rank( [ A_B  A_T ] ). Furthermore, any u satisfying Qu = Qx yields a point in the affine u-space. Hence the dimension of this affine space is equal to the dimension of the null space of Q, which equals n − rank(Q). Combining these results completes the proof. □

Note that when Q = 0 is substituted in the formulae of Lemma 7, the dimension of the dual set is not equal to the dimension in Lemma 2. Nevertheless, the results are consistent, since for Q = 0 the variable u is not to be taken into account in the formula for the dimension of the dual optimal set.
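Lemma 6 is easy to probe numerically: since M_BB and M_.B have the same number of columns, equality of their null spaces is equivalent to equality of their ranks. A sketch with a randomly generated positive semidefinite matrix (the seed and the choice of B are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 6))
M = R.T @ R                       # symmetric PSD 6x6 matrix of rank <= 3
B = [0, 2, 5]                     # an arbitrary index set B

M_BB = M[np.ix_(B, B)]            # principal submatrix M_BB
M_colsB = M[:, B]                 # column block M_.B
# Lemma 6: N(M_BB) = N(M_.B), hence the two ranks coincide.
print(np.linalg.matrix_rank(M_BB), np.linalg.matrix_rank(M_colsB))
```

For a matrix that is symmetric but not positive semidefinite this rank equality can fail, which is why the positive semidefiniteness assumption in Lemma 6 is essential.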
In this section we have characterized the optimal sets in LP and QP by using the optimal partition. In LP there is a one-to-one relation between the optimal partition and the primal and dual optimal sets: given the optimal partition of an LP problem, the primal and dual optimal sets can be characterized. In QP we need the optimal partition and a dual optimal solution to characterize the primal optimal set, and the optimal partition and a primal optimal solution to characterize the dual optimal set. The results for parametric LP are thus obtained from those for parametric QP using the characterization of the optimal sets by the optimal partition.

4 Parametric Linear Programming

In this section we investigate the effects of changes in the vectors b and c on the optimal value z(b, c) of (P) and (D). We consider one-dimensional parametric perturbations of b and c. So we want to study

z(b + βΔb, c + γΔc),

as a function of the parameters β and γ, where Δb and Δc are given perturbation vectors. So the vectors b and c are fixed, and the variations come from the parameters β and γ. We denote the perturbed problems as (P_β) and (D_γ), and their feasible regions as P_β and D_γ respectively. The dual problem of (P_β) is denoted as (D_β) and the dual problem of (D_γ) as (P_γ). Observe that the feasible region of (D_β) is simply D, and the feasible region of (P_γ) is simply P. We use the superscript * to refer to the optimal set of each of these problems. As already mentioned in the introduction, we postpone proofs that have an analogue in QP to the next section, where we consider parametric convex quadratic programming. We assume that b and c are such that (P) and (D) are both feasible. Then z(b, c) is well defined and finite. It is convenient to introduce the following notation:

b(β) := b + βΔb,    c(γ) := c + γΔc,    f(β) := z(b(β), c),    g(γ) := z(b, c(γ)).

Here the domain of the parameters β and γ is taken as large as possible. Let us consider the domain of f. The function f is defined as long as z(b(β), c) is well defined. Since the feasible region of (D_β) is constant as β varies, and since we assumed that (D_β) is feasible for β = 0, it follows that (D_β) is feasible for all values of β. Therefore f(β) is well defined if the dual problem (D_β) has an optimal solution, and f(β) is not defined (or infinite) if the dual problem (D_β) is unbounded. Using the Duality Theorem it follows that f(β) is well defined if and only if the primal problem (P_β) is feasible. In exactly the same way it can be understood that the domain of g consists of all γ for which (D_γ) is feasible (and (P_γ) bounded). The following theorem is well known.

Theorem 8 The domains of f and g are closed intervals on the real line.
4.1 Optimal value function and optimal sets on a linearity interval

In this section we show that the functions f(β) and g(γ) are piecewise linear on their domains. The pieces correspond to intervals where the partition is constant. For any β in the domain of f we denote the optimal set of (P_β) as P*_β and the optimal set of (D_β) as D*_β. The results in this section are related to similar results already obtained in the 1960s by, e.g., Bereanu [3], Charnes and Cooper [6], and Kelly [18] (see also Dinkelbach [7, Chapter 5, Sections 1 and 2] and Gal [9]). The first theorem shows that the dual optimal set is constant on certain intervals and that f is linear on these intervals. This results from the fact that the optimal partition is constant on certain intervals (see the next section) and the characterization of the optimal sets (see Lemma 1).

Theorem 9 Let β₁ and β₂ > β₁ be such that D*_β₁ = D*_β₂. Then D*_β is constant for all β ∈ [β₁, β₂], and f(β) is linear on the interval [β₁, β₂].

From this theorem we conclude the following result, giving a partition of the domain of f into intervals on which the dual optimal set remains constant.

Theorem 10 The domain of f can be partitioned into a finite set of subintervals such that the dual optimal set is constant on each subinterval.

Using the former two theorems we conclude that f is convex and piecewise linear.

Theorem 11 The optimal value function f(β) is continuous, convex and piecewise linear.

The values of β where the slope of the optimal value function f(β) changes are called transition-points of f, and any interval between two successive transition-points of f is called a linearity interval of f. In a similar way we define transition-points and linearity intervals for g. Each of the above results on f(β) has its analogue for g(γ).

Theorem 12 Let γ₁ and γ₂ > γ₁ be such that P*_γ₁ = P*_γ₂. Then P*_γ is constant for all γ ∈ [γ₁, γ₂], and g(γ) is linear on the interval [γ₁, γ₂].

Theorem 13 The domain of g can be partitioned into a finite set of subintervals such that the primal optimal set is constant on each subinterval.

Theorem 14 The optimal value function g(γ) is continuous, concave and piecewise linear.

4.2 Extreme points of a linearity interval

In this section we assume that β̄ belongs to the interior of a linearity interval [β₁, β₂]. Given an optimal solution of (D_β̄), we show how the extreme points β₁ and β₂ of the linearity interval containing β̄ can be found by solving two auxiliary linear optimization problems. This is stated in the next theorem.

Theorem 15 Let β̄ be arbitrary and let (y*, s*) be any optimal solution of (D_β̄). Then the extreme points of the linearity interval [β₁, β₂] containing β̄ follow from

β₁ = min_{β, x} { β : Ax = b + βΔb, x ≥ 0, x^T s* = 0 },
β₂ = max_{β, x} { β : Ax = b + βΔb, x ≥ 0, x^T s* = 0 }.

Theorem 16 Let β̄ be a transition-point and let (y*, s*) be a strictly complementary optimal solution of (D_β̄). Then the numbers β₁ and β₂ given by Theorem 15 satisfy β₁ = β₂ = β̄.

Proof: If (y*, s*) is a strictly complementary optimal solution of (D_β̄), then it uniquely determines the optimal partition of (D_β̄), and this partition differs from the optimal partitions corresponding to the optimal sets on the linearity intervals surrounding β̄. Hence (y*, s*) does not belong to the optimal sets on the linearity intervals surrounding β̄. Furthermore, in any transition-point the value Δb^T y* satisfies

Δb^T y⁻ < Δb^T y* < Δb^T y⁺,

where y⁻ belongs to the dual optimal set on the linearity interval just to the left of β̄ and y⁺ to that on the linearity interval just to the right of β̄. Hence the theorem follows. □

The corresponding results for g are stated below.
Theorem 17 Let γ̄ be arbitrary and let x* be any optimal solution of (P_γ̄). Then the extreme points of the linearity interval [γ₁, γ₂] containing γ̄ follow from

γ₁ = min_{γ, y, s} { γ : A^T y + s = c + γΔc, s ≥ 0, s^T x* = 0 },
γ₂ = max_{γ, y, s} { γ : A^T y + s = c + γΔc, s ≥ 0, s^T x* = 0 }.

Theorem 18 Let γ̄ be a transition-point and let x* be a strictly complementary optimal solution of (P_γ̄). Then the numbers γ₁ and γ₂ given by Theorem 17 satisfy γ₁ = γ₂ = γ̄.
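Theorem 15 (and analogously Theorem 17) is directly implementable: each extreme point is one LP in the variables (x, β). A sketch on assumed data A = [1 1], b = 1, Δb = 1, c = (1, 1), for which f(β) = 1 + β on its whole domain, so the linearity interval of any β is [−1, ∞):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
db = np.array([1.0])              # perturbation direction Delta b
s_star = np.array([0.0, 0.0])     # dual slacks of an optimal (y*, s*)

n = A.shape[1]
# Auxiliary LPs of Theorem 15 in the variables (x, beta):
#   min/max beta  s.t.  A x - beta*db = b,  x >= 0,  x_i = 0 for i in sigma(s*).
A_aux = np.hstack([A, -db.reshape(-1, 1)])
bounds = [(0.0, 0.0) if s_star[i] > 1e-8 else (0.0, None) for i in range(n)]
bounds.append((None, None))       # beta is a free variable
obj = np.zeros(n + 1)
obj[-1] = 1.0

lo = linprog(obj, A_eq=A_aux, b_eq=b, bounds=bounds)    # beta_1
hi = linprog(-obj, A_eq=A_aux, b_eq=b, bounds=bounds)   # beta_2
beta1 = lo.x[-1]
beta2 = np.inf if hi.status == 3 else hi.x[-1]          # status 3 = unbounded
print(beta1, beta2)               # -1.0 inf
```

An unbounded auxiliary LP simply means that the linearity interval extends to infinity on that side.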

4.3 Left and right derivatives of the value function and optimal sets in a transition-point

In this section we show that the transition-points occur exactly where the optimal value function is not differentiable. We have already seen that the optimal set remains constant on linearity intervals. We first deal with the differentiability of f(β). If the domain of f has a right extreme point, then we may consider the right derivative at this point to be ∞, and if the domain of f has a left extreme point, the left derivative at this point may be taken to be −∞. Then we may say that β is a transition-point of f if and only if the left and the right derivatives of f at β are different. This follows from the definition of a transition-point. Denoting the left and the right derivatives as f′₋(β) and f′₊(β) respectively, the convexity of f implies that at a transition-point β one has

f′₋(β) < f′₊(β).

If dom(f) has a right extreme point, then it is convenient to consider the open interval to the right of this point as a linearity interval where both f and its derivative are ∞. Similarly, if dom(f) has a left extreme point, then we may consider the open interval to the left of this point as a linearity interval where both f and its derivative are −∞. Obviously, these extreme linearity intervals are characterized by the fact that on them the primal problem is infeasible and the dual problem unbounded. The dual problem is unbounded if and only if the set D*_β of optimal solutions is empty.

Theorem 19 Let β ∈ dom(f) and let x* be any optimal solution of (P_β). Then the one-sided derivatives at β satisfy

f′₋(β) = min_{y, s} { Δb^T y : A^T y + s = c, s ≥ 0, s^T x* = 0 },
f′₊(β) = max_{y, s} { Δb^T y : A^T y + s = c, s ≥ 0, s^T x* = 0 }.

Note that we could also have used the definition of the optimal set D*_β in the above theorem. The above theorem reveals that Δb^T y must have the same value for all y ∈ D*_β and for all β ∈ (β₁, β₂).
So we may state the following.

Corollary 20 Let β ∈ dom(f) belong to the linearity interval (β₁, β₂). Then one has

f′(β) = Δb^T y,    ∀β ∈ (β₁, β₂), ∀y ∈ D*_β.

By continuity we may write

f(β) = b^T ȳ + βΔb^T ȳ = b(β)^T ȳ,    ∀β ∈ [β₁, β₂],

where ȳ ∈ D*_β for arbitrary β ∈ (β₁, β₂).

Lemma 21 Let β ∈ dom(f) belong to the linearity interval (β₁, β₂). Moreover, let D*_(β₁,β₂) := D*_β for arbitrary β ∈ (β₁, β₂). Then one has

D*_(β₁,β₂) ⊆ D*_β₁,    D*_(β₁,β₂) ⊆ D*_β₂.

Corollary 22 Let β be a nonextreme transition-point of f, and let β⁺ belong to the open linearity interval just to the right of β and β⁻ to the open linearity interval just to the left of β. Then we have

D*_β⁻ ⊂ D*_β,    D*_β⁺ ⊂ D*_β,    D*_β⁻ ∩ D*_β⁺ = ∅,

where the inclusions are strict.

Two other almost obvious consequences of the above results are the following corollaries.

Corollary 23 Let β be a nonextreme transition-point of f and let β⁺ and β⁻ be as defined in Corollary 22. Then we have

D*_β⁻ = { y ∈ D*_β : Δb^T y = Δb^T y⁻ },    D*_β⁺ = { y ∈ D*_β : Δb^T y = Δb^T y⁺ },

where y⁻ ∈ D*_β⁻ and y⁺ ∈ D*_β⁺.

Corollary 24 Let β be a nonextreme transition-point of f and let β⁺ and β⁻ be as defined in Corollary 22. Then

dim D*_β⁻ < dim D*_β,    dim D*_β⁺ < dim D*_β.

The picture becomes more complete now. Note that Theorem 19 is valid for any value of β in the domain of f. The theorem re-establishes that at a 'non-transition' point, where the left and right derivatives of f are equal, the value of Δb^T y is constant as y runs through the dual optimal set D*_β. But it also makes clear that at a transition-point, where the two derivatives differ, Δb^T y is not constant as y runs through the dual optimal set D*_β. In that case the extreme values of Δb^T y yield the left and the right derivatives of f at β: the left derivative is the minimal and the right derivative the maximal value of Δb^T y as y runs through the dual optimal set D*_β. The converse of Theorem 9 also holds. This is stated in the following theorem.

Theorem 25 If f(β) is linear on the interval [β₁, β₂], where β₁ < β₂, then the dual optimal set D*_β is constant for β ∈ (β₁, β₂).

If β̄ is not a transition-point, then there is only one linearity interval containing β̄, and hence this must be the linearity interval [β₁, β₂] given by Theorem 15. It may be worthwhile to point out that if β̄ is a transition-point, however, there are three linearity intervals containing β̄, namely the singleton interval [β̄, β̄] and the two surrounding linearity intervals. In that case, the linearity interval [β₁, β₂] given by Theorem 15 may be any of these three intervals, and which of the three is obtained depends on the given optimal solution (y*, s*) of (D_β̄).
It can easily be understood that the linearity interval at the right of ç çis found if èy æ ;s æ è happens to be optimal on the right linearity interval. This occurs exactly when æb T y æ = f 0 +è ç çè, due to Corollary 23. Similarly, the linearity interval at the left of ç ç is found if èy æ ;s æ è is optimal on the left linearity interval and this occurs exactly when æb T y æ = f 0,è ç çè, also due to Corollary 23. Finally, if è1è f 0,è ç çèéæb T y æ éf 0 +è ç çè; then we haveç 1 =ç 2 = ç çin Theorem 15. The last situation seems to be most informative. It clearly indicates that ç ç is a transition-point off, which is not apparent in the other two situations. Knowing that ç ç is a transition-point off we can ænd the two one-sided derivatives of f at ç ç as well as optimal solutions for the two surrounding interval at ç ç from Theorem 19. Remark 1 It is interesting to consider the dual optimal set D æ ç when ç runs from,1 to 1. Left from the smallest transition-point èthe transition-point for which ç is minimalè the set D æ ç is constant. It may happen that D æ ç is empty there, due to the absence of optimal solutions for these small values of ç. This occurs if èd ç èisunbounded èwhich means that èp ç è is infeasibleè for the values of ç on the most left open linearity interval. Then, at the ærst transition-point, the set D æ ç increases to a larger set, and when passing to the next open linearity interval the set D æ ç becomes equal to a proper subset of this enlarged set. This process repeats itself at every new transitionpoint: at a transition-point offthe dual optimal set expends itself and when passing to the next open linearity interval it shrinks to a proper subset of the enlarged set. Since the derivative off is monotonically increasing when ç runs from,1 to 1 every new dual optimal set arising in this way diæers from all previous ones. In other words, every transition-point off and every linearity interval of f has its own dual optimal set.
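The interplay between the dual optimal set and the one-sided derivatives can be checked numerically. The sketch below uses a small LP of our own devising (not from this chapter) together with SciPy's `linprog`: at the transition-point $\beta = 0$ it recovers $f'_+$ and $f'_-$ as the maximum and minimum of $\Delta b^T y$ over the dual optimal set, following the characterization referenced above as Theorem 19.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP (our own example): f(beta) = min{x1 + 2*x2 : x1 - x2 = beta, x >= 0}.
# Here f(beta) = beta for beta >= 0 and f(beta) = -2*beta for beta <= 0, so beta = 0
# is a transition-point with f'_-(0) = -2 and f'_+(0) = 1.
A = np.array([[1.0, -1.0]])
c = np.array([1.0, 2.0])
db = np.array([1.0])           # perturbation direction Delta b
x_star = np.array([0.0, 0.0])  # primal optimal solution of (P_0)

# f'_+(0) = max{db^T y : A^T y + s = c, s >= 0, s^T x* = 0}, and f'_-(0) is the minimum.
# Variables are (y, s); the condition s^T x* = 0 fixes s_i = 0 wherever x*_i > 0.
m, n = A.shape
A_eq = np.hstack([A.T, np.eye(n)])                 # rows: A^T y + s = c
bounds = [(None, None)] * m + [(0, 0) if xi > 0 else (0, None) for xi in x_star]
obj = np.concatenate([db, np.zeros(n)])            # db^T y

f_plus = -linprog(-obj, A_eq=A_eq, b_eq=c, bounds=bounds).fun   # maximize db^T y
f_minus = linprog(obj, A_eq=A_eq, b_eq=c, bounds=bounds).fun    # minimize db^T y
print(f_minus, f_plus)   # -2.0 and 1.0: the slopes of f just left and right of 0
```

Any optimal $(y^*, s^*)$ with $\Delta b^T y^*$ strictly between these two values certifies, as in (1), that $\bar\beta = 0$ is a transition-point.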

Each of the above results has a dual analogue for $g(\gamma)$.

Lemma 26. Let $\gamma$ belong to the interior of $\mathrm{dom}(g)$, and let $\gamma^+$ belong to the open linearity interval just to the right of $\gamma$ and $\gamma^-$ to the open linearity interval just to the left of $\gamma$. Moreover, let $x^+ \in \mathcal{P}^*_{\gamma^+}$ and $x^- \in \mathcal{P}^*_{\gamma^-}$. Then one has
$$g'_-(\gamma) = \max_x \{\, \Delta c^T x : x \in \mathcal{P}^*_\gamma \,\} = \Delta c^T x^-, \qquad g'_+(\gamma) = \min_x \{\, \Delta c^T x : x \in \mathcal{P}^*_\gamma \,\} = \Delta c^T x^+.$$

Theorem 27. Let $\gamma \in \mathrm{dom}(g)$ and let $(y^*, s^*)$ be any optimal solution of $(D_\gamma)$. Then the derivatives at $\gamma$ satisfy
$$g'_-(\gamma) = \max_x \{\, \Delta c^T x : Ax = b,\ x \ge 0,\ x^T s^* = 0 \,\}, \qquad g'_+(\gamma) = \min_x \{\, \Delta c^T x : Ax = b,\ x \ge 0,\ x^T s^* = 0 \,\}.$$

Corollary 28. Under the hypothesis of Theorem 25 one has
$$g'(\gamma) = \Delta c^T x, \qquad \forall \gamma \in (\gamma_1, \gamma_2),\ \forall x \in \mathcal{P}^*_\gamma.$$

Corollary 29. Let $\gamma \in \mathrm{dom}(g)$ belong to the linearity interval $(\gamma_1, \gamma_2)$. Moreover, let $\mathcal{P}^*_{(\gamma_1,\gamma_2)} := \mathcal{P}^*_\gamma$ for arbitrary $\gamma \in (\gamma_1, \gamma_2)$. Then one has
$$\mathcal{P}^*_{(\gamma_1,\gamma_2)} \subseteq \mathcal{P}^*_{\gamma_1}, \qquad \mathcal{P}^*_{(\gamma_1,\gamma_2)} \subseteq \mathcal{P}^*_{\gamma_2}.$$

Corollary 30. Let $\gamma$ be a nonextreme transition-point of $g$, and let $\gamma^+$ belong to the open linearity interval just to the right of $\gamma$ and $\gamma^-$ to the open linearity interval just to the left of $\gamma$. Then we have
$$\mathcal{P}^*_{\gamma^-} \subset \mathcal{P}^*_\gamma, \qquad \mathcal{P}^*_{\gamma^+} \subset \mathcal{P}^*_\gamma, \qquad \mathcal{P}^*_{\gamma^-} \cap \mathcal{P}^*_{\gamma^+} = \emptyset,$$
where the inclusions are strict.

Corollary 31. Let $\gamma$ be a nonextreme transition-point of $g$ and let $\gamma^+$ and $\gamma^-$ be as defined in Corollary 30. Then we have
$$\mathcal{P}^*_{\gamma^-} = \{\, x \in \mathcal{P}^*_\gamma : \Delta c^T x = \Delta c^T x^- \,\}, \qquad \mathcal{P}^*_{\gamma^+} = \{\, x \in \mathcal{P}^*_\gamma : \Delta c^T x = \Delta c^T x^+ \,\},$$
where $x^- \in \mathcal{P}^*_{\gamma^-}$ and $x^+ \in \mathcal{P}^*_{\gamma^+}$ are arbitrary.

Corollary 32. Let $\gamma$ be a nonextreme transition-point of $g$ and let $\gamma^+$ and $\gamma^-$ be as defined in Corollary 30. Then
$$\dim \mathcal{P}^*_{\gamma^-} < \dim \mathcal{P}^*_\gamma, \qquad \dim \mathcal{P}^*_{\gamma^+} < \dim \mathcal{P}^*_\gamma.$$

Theorem 33. If $g(\gamma)$ is linear on the interval $[\gamma_1, \gamma_2]$, where $\gamma_1 < \gamma_2$, then the primal optimal set $\mathcal{P}^*_\gamma$ is constant for $\gamma \in (\gamma_1, \gamma_2)$.

4.4 Computing the optimal value function

Using the results of the previous sections, we present in this section an algorithm that yields the optimal value function for a one-dimensional perturbation of the vector $b$ or the vector $c$.

We first deal with a one-dimensional perturbation of the vector $b$ by a scalar multiple of the vector $\Delta b$: we state the algorithm for the calculation of the optimal value function and show that the algorithm finds all its transition-points and linearity intervals. Having done this, it is clear how to treat a one-dimensional perturbation of the vector $c$; we also state the corresponding algorithm and its convergence results.

Assume that we are given optimal solutions $x^*$ of $(P)$ and $(y^*, s^*)$ of $(D)$. Using the notation of the previous sections, the problem $(P_\beta)$ and its dual $(D_\beta)$ arise by replacing the vector $b$ by $b(\beta) = b + \beta \Delta b$; the optimal value of these problems is denoted as $f(\beta)$. So we have $f(0) = c^T x^* = b^T y^*$. The optimal value function is defined on $(-\infty, \infty)$, and $f(\beta) = \infty$ if and only if $(D_\beta)$ is unbounded. Recall from Theorem 11 that $f(\beta)$ is convex and piecewise linear. Algorithm 1 determines $f$ on the nonnegative part of the real line; straightforward modifications, which we leave to the reader, yield an algorithm that generates $f$ on the negative part of the real line.

Algorithm 1 (optimal value function $f(\beta)$, $\beta \ge 0$):

Input: an optimal solution $x^*$ of $(P)$; an optimal solution $(y^*, s^*)$ of $(D)$; a perturbation vector $\Delta b$.

begin
  ready := false; $k := 1$; $x^0 := x^*$;
  solve $f'_+(0) = \max_{y,s} \{\, \Delta b^T y : A^T y + s = c,\ s \ge 0,\ s^T x^0 = 0 \,\}$ and let $(y^0, s^0)$ be an optimal solution;
  while not ready do
    solve $\max_{\beta,x} \{\, \beta : Ax = b + \beta \Delta b,\ x \ge 0,\ x^T s^{k-1} = 0 \,\}$;
    if this problem is unbounded then ready := true
    else
      let $(\beta_k, x^k)$ be an optimal solution;
      solve $f'_+(\beta_k) = \max_{y,s} \{\, \Delta b^T y : A^T y + s = c,\ s \ge 0,\ s^T x^k = 0 \,\}$;
      if this problem is unbounded then ready := true
      else let $(y^k, s^k)$ be an optimal solution; $k := k + 1$;
  end
end

The following theorem states that Algorithm 1 finds the successive transition-points of $f$ on the nonnegative part of the real line, as well as the slopes of $f$ on the successive linearity intervals.

Theorem 34. Algorithm 1 terminates after a finite number of iterations. If $K$ is the number of iterations upon termination, then $\beta_1, \beta_2, \ldots, \beta_K$ are the successive transition-points of $f$ on the nonnegative real line. The optimal value at $\beta_k$ ($1 \le k \le K$) is given by $c^T x^k$, and the slope of $f$ on the interval $(\beta_k, \beta_{k+1})$ ($1 \le k < K$) by $\Delta b^T y^k$.

From Algorithm 1 we thus obtain the linearity intervals and the slope of $f$ on each of them; with these ingredients the optimal value function can easily be drawn. When perturbing the vector $c$ by a scalar multiple of $\Delta c$, so that $c(\gamma) = c + \gamma \Delta c$, the algorithm for the calculation of the optimal value function $g(\gamma)$ can be stated as Algorithm 2 (recall that $g$ is concave). Algorithm 2 finds the successive transition-points of $g$ on the nonnegative real line as well as the slopes of $g$ on the successive linearity intervals.
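Algorithm 1 can be prototyped directly on top of an off-the-shelf LP solver. The sketch below is our own, assuming SciPy's `linprog` (whose status code 3 signals an unbounded LP); the complementarity conditions $x^T s = 0$ are imposed by fixing to zero the coordinates that must vanish. The example data at the end is likewise our own illustration, not taken from the chapter.

```python
import numpy as np
from scipy.optimize import linprog

def algorithm1(A, b, c, db, x0, tol=1e-8):
    """Sketch of Algorithm 1 for beta >= 0: returns (transitions, slopes),
    the transition-points beta_1 < ... < beta_K and the slopes db^T y^k of f
    on the successive linearity intervals.  x^T s = 0 is imposed by fixing
    x_i = 0 where s_i > 0 (and s_i = 0 where x_i > 0)."""
    m, n = A.shape

    def dual_step(x):
        # max{db^T y : A^T y + s = c, s >= 0, s^T x = 0}, variables (y, s)
        A_eq = np.hstack([A.T, np.eye(n)])
        bnds = [(None, None)] * m + [(0, 0) if x[i] > tol else (0, None) for i in range(n)]
        res = linprog(np.concatenate([-db, np.zeros(n)]), A_eq=A_eq, b_eq=c, bounds=bnds)
        return None if res.status == 3 else (res.x[:m], res.x[m:])  # unbounded -> None

    def primal_step(s):
        # max{beta : A x = b + beta*db, x >= 0, x^T s = 0}, variables (beta, x)
        A_eq = np.hstack([-db.reshape(-1, 1), A])          # A x - beta*db = b
        bnds = [(None, None)] + [(0, 0) if s[i] > tol else (0, None) for i in range(n)]
        res = linprog(np.concatenate([[-1.0], np.zeros(n)]), A_eq=A_eq, b_eq=b, bounds=bnds)
        return None if res.status == 3 else (res.x[0], res.x[1:])   # unbounded -> None

    transitions, slopes = [], []
    ys = dual_step(x0)                       # yields f'_+(0) = db^T y^0
    while ys is not None:
        slopes.append(db @ ys[0])
        step = primal_step(ys[1])
        if step is None:                     # f stays linear up to +infinity
            break
        beta_k, x_k = step
        transitions.append(beta_k)
        ys = dual_step(x_k)                  # None here: (P_beta) infeasible beyond beta_k

    return transitions, slopes

# Our own example: f(beta) = min{-x1 : x1 + x2 = beta, x1 + x3 = 1, x >= 0},
# so f(beta) = -beta on [0, 1] and f(beta) = -1 for beta >= 1.
A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
b = np.array([0.0, 1.0])
c = np.array([-1.0, 0.0, 0.0])
db = np.array([1.0, 0.0])
trans, slopes = algorithm1(A, b, c, db, x0=np.array([0.0, 0.0, 1.0]))
print(trans, slopes)   # one transition-point at beta = 1; slopes -1, then 0
```

On this data the algorithm reports the single transition-point $\beta_1 = 1$ and the slopes $-1$ and $0$ of $f$ on the two linearity intervals, in line with Theorem 34.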

Algorithm 2 (optimal value function $g(\gamma)$, $\gamma \ge 0$):

Input: an optimal solution $x^*$ of $(P)$; an optimal solution $(y^*, s^*)$ of $(D)$; a perturbation vector $\Delta c$.

begin
  ready := false; $k := 1$; $(y^0, s^0) := (y^*, s^*)$;
  solve $g'_+(0) = \min_x \{\, \Delta c^T x : Ax = b,\ x \ge 0,\ x^T s^0 = 0 \,\}$ and let $x^0$ be an optimal solution;
  while not ready do
    solve $\max_{\gamma,y,s} \{\, \gamma : A^T y + s = c + \gamma \Delta c,\ s \ge 0,\ s^T x^{k-1} = 0 \,\}$;
    if this problem is unbounded then ready := true
    else
      let $(\gamma_k, y^k, s^k)$ be an optimal solution;
      solve $g'_+(\gamma_k) = \min_x \{\, \Delta c^T x : Ax = b,\ x \ge 0,\ x^T s^k = 0 \,\}$;
      if this problem is unbounded then ready := true
      else let $x^k$ be an optimal solution; $k := k + 1$;
  end
end

Theorem 35. Algorithm 2 terminates after a finite number of iterations. If $K$ is the number of iterations upon termination, then $\gamma_1, \gamma_2, \ldots, \gamma_K$ are the successive transition-points of $g$ on the nonnegative real line. The optimal value at $\gamma_k$ ($1 \le k \le K$) is given by $b^T y^k$, and the slope of $g$ on the interval $(\gamma_k, \gamma_{k+1})$ ($1 \le k < K$) by $\Delta c^T x^k$.

5 Parametric Quadratic Programming

In this section we start to investigate the effect of changes in $b$ and $c$ on the optimal value $z(b, c)$ of $(QP)$ and $(QD)$. Again, as in LP, we consider one-dimensional parametric perturbations of $b$ and $c$. So we want to study
$$z(b + \beta \Delta b,\ c + \gamma \Delta c)$$
as a function of the parameters $\beta$ and $\gamma$, where $\Delta b$ and $\Delta c$ are given perturbation vectors. So, again, the vectors $b$ and $c$ are fixed, and the variations come from the parameters $\beta$ and $\gamma$. The perturbed problems are denoted as $(QP_\beta)$ and $(QD_\gamma)$, and their feasible regions as $\mathcal{QP}_\beta$ and $\mathcal{QD}_\gamma$ respectively. The dual problem of $(QP_\beta)$ is denoted as $(QD_\beta)$ and the dual problem of $(QD_\gamma)$ is denoted as $(QP_\gamma)$. Observe that the feasible region of $(QD_\beta)$ is simply $\mathcal{QD}$ and the feasible region of $(QP_\gamma)$ is simply $\mathcal{QP}$. Again, we use the superscript $*$ to refer to the optimal set of each of these problems. We assume that $b$ and $c$ are such that $(QP)$ and $(QD)$ are both feasible. Then $z(b, c)$ is again well defined and finite. We use the following notation again:
$$b(\beta) := b + \beta \Delta b, \qquad c(\gamma) := c + \gamma \Delta c, \qquad f(\beta) := z(b(\beta), c), \qquad g(\gamma) := z(b, c(\gamma)).$$
The domains of the parameters $\beta$ and $\gamma$ are again taken as large as possible. Let us consider the domain of $f$. This function is defined as long as $z(b(\beta), c)$ is well defined. Therefore, $f(\beta)$ is well defined if the dual problem $(QD_\beta)$ has an optimal solution, and $f(\beta)$ is not defined (or equals infinity) if the dual problem $(QD_\beta)$ is unbounded. Using the Duality Theorem it follows that $f(\beta)$ is well defined if and only if the primal problem $(QP_\beta)$ is feasible. In exactly the same way it can be understood that the domain of $g$ consists of all $\gamma$ for which $(QD_\gamma)$ is feasible (hence $(QP_\gamma)$ bounded). Note the new meaning of the functions $f$ and $g$: we prefer to use the same notation as in the preceding section, although these functions are now defined differently.

Lemma 36. The domains of $f$ and $g$ are convex.

Proof: We give the proof for $f$; the proof for $g$ is similar and therefore omitted. Let $\beta_1, \beta_2 \in \mathrm{dom}(f)$ and $\beta_1 < \beta < \beta_2$. Then $f(\beta_1)$ and $f(\beta_2)$ are finite, which means that both $\mathcal{QP}_{\beta_1}$ and $\mathcal{QP}_{\beta_2}$ are nonempty. Let $x^1 \in \mathcal{QP}_{\beta_1}$ and $x^2 \in \mathcal{QP}_{\beta_2}$. Then $x^1$ and $x^2$ are nonnegative and
$$A x^1 = b + \beta_1 \Delta b, \qquad A x^2 = b + \beta_2 \Delta b.$$
Now consider
$$x := x^1 + \frac{\beta - \beta_1}{\beta_2 - \beta_1}\,(x^2 - x^1) = \frac{(\beta_2 - \beta)\, x^1 + (\beta - \beta_1)\, x^2}{\beta_2 - \beta_1}.$$
Note that $x$ is a convex combination of $x^1$ and $x^2$, and hence $x$ is nonnegative. We proceed by showing that $x \in \mathcal{QP}_\beta$. Using that $A(x^2 - x^1) = (\beta_2 - \beta_1) \Delta b$, this goes as follows:
$$Ax = A x^1 + \frac{\beta - \beta_1}{\beta_2 - \beta_1}\, A(x^2 - x^1) = b + \beta_1 \Delta b + \frac{\beta - \beta_1}{\beta_2 - \beta_1}\,(\beta_2 - \beta_1) \Delta b = b + \beta_1 \Delta b + (\beta - \beta_1) \Delta b = b + \beta \Delta b.$$
This proves that $(QP_\beta)$ is feasible, and hence $\beta \in \mathrm{dom}(f)$, completing the proof. □

Lemma 37. The function $f$ is convex and the function $g$ is concave.

Proof: We present the proof only for $f$; the proof for $g$ is similar. Let $\beta_1, \beta_2$ be elements of the interior of the domain of $f$. Let $\alpha \in (0, 1)$ be given and define $\beta_\alpha := \alpha \beta_1 + (1 - \alpha) \beta_2$. Letting $(x^{(\beta_\alpha)}, y^{(\beta_\alpha)}, s^{(\beta_\alpha)})$ denote an optimal solution of $(QP_{\beta_\alpha})$ and $(QD_{\beta_\alpha})$, we have
$$f(\beta_\alpha) = \alpha \left[ (b + \beta_1 \Delta b)^T y^{(\beta_\alpha)} - \tfrac{1}{2} (x^{(\beta_\alpha)})^T Q x^{(\beta_\alpha)} \right] + (1 - \alpha) \left[ (b + \beta_2 \Delta b)^T y^{(\beta_\alpha)} - \tfrac{1}{2} (x^{(\beta_\alpha)})^T Q x^{(\beta_\alpha)} \right] \le \alpha f(\beta_1) + (1 - \alpha) f(\beta_2),$$
where the inequality holds since the feasible set of $(QD_\beta)$ is independent of $\beta$: each bracketed term is the objective value of $(QD_{\beta_1})$, respectively $(QD_{\beta_2})$, at a feasible point, hence at most $f(\beta_1)$, respectively $f(\beta_2)$. □

Lemma 38. The complements of the domains of $f$ and $g$ are open subsets of the real line.

Proof: Let $\beta$ belong to the complement of the domain of $f$. This means that $(QD_\beta)$ is unbounded, which is equivalent to the existence of a vector $z$ such that
$$A^T z \le 0, \qquad (b + \beta \Delta b)^T z > 0.$$
Fixing $z$ and considering $\beta$ as a variable, the set of all $\beta$ satisfying $(b + \beta \Delta b)^T z > 0$ is an open interval, and for all $\beta$ in this interval $(QD_\beta)$ is unbounded. Hence the complement of the domain of $f$ is open. This concludes the proof. □

A consequence of the last two lemmas is the next theorem (cf. Theorem 8), which requires no further proof.

Theorem 39. The domains of $f$ and $g$ are closed intervals on the real line.

5.1 Optimal value function and optimal partitions on a curvy-linearity interval

In this section we show that the optimal value functions $f(\beta)$ and $g(\gamma)$ are piecewise quadratic on their domains. The pieces correspond to intervals on which the optimal partition is constant. In LP these results are given in terms of the optimal primal and dual sets. In QP this is not possible, since these sets are intertwined: neither the optimal set of $(QP_\beta)$ nor that of $(QD_\beta)$ stays fixed when $\beta$ or $\gamma$ varies. As in LP, the proofs are obtained by using the characterization of the optimal sets by the optimal partition (see Section 6.3). For any $\beta$ in the domain of $f$ we denote the optimal set of $(QP_\beta)$ as $\mathcal{QP}^*_\beta$ and the optimal set of $(QD_\beta)$ as $\mathcal{QD}^*_\beta$. The first theorem (cf. Theorem 9) shows that the partition is constant on certain intervals and that $f$ is quadratic on these intervals.

Theorem 40. Let $\beta_1$ and $\beta_2 > \beta_1$ be such that $\pi_{\beta_1} = \pi_{\beta_2}$. Then $\pi_\beta$ is constant for all $\beta \in [\beta_1, \beta_2]$ and $f(\beta)$ is quadratic on the interval $[\beta_1, \beta_2]$.

Proof: Without loss of generality we assume that $\beta_1 = 0$ and $\beta_2 = 1$. Let $(x^{(\beta_1)}, (u^{(\beta_1)}, y^{(\beta_1)}, s^{(\beta_1)}))$ and $(x^{(\beta_2)}, (u^{(\beta_2)}, y^{(\beta_2)}, s^{(\beta_2)}))$ be maximal complementary solutions of the respective problems. We may assume that $u^{(\beta_1)} = x^{(\beta_1)}$ and $u^{(\beta_2)} = x^{(\beta_2)}$, and we define for $\beta \in (0, 1)$
$$x(\beta) := (1 - \beta)\, x^{(\beta_1)} + \beta\, x^{(\beta_2)}, \quad y(\beta) := (1 - \beta)\, y^{(\beta_1)} + \beta\, y^{(\beta_2)}, \quad s(\beta) := (1 - \beta)\, s^{(\beta_1)} + \beta\, s^{(\beta_2)}. \qquad (2)$$
It is easy to see that $A x(\beta) = b + \beta \Delta b$ and $x(\beta) \ge 0$. Also
$$A^T y(\beta) + s(\beta) - Q x(\beta) = (1 - \beta)\, c + \beta\, c = c,$$
and $s(\beta) \ge 0$. So $(x(\beta), y(\beta), s(\beta))$ is feasible for $(QP_\beta)$ and $(QD_\beta)$. Since $\pi_{\beta_1} = \pi_{\beta_2}$, we have $x(\beta)^T s(\beta) = 0$, hence the proposed solution is optimal for $(QP_\beta)$ and $(QD_\beta)$. Denoting the supports of $x(\beta)$ and $s(\beta)$ by $B$ and $N$ respectively, and their complement by $T$, this implies
$$B \subseteq B_\beta, \qquad N \subseteq N_\beta \qquad \text{and} \qquad T \supseteq T_\beta. \qquad (3)$$
We now show that equality holds. Assuming to the contrary that $T_\beta \subsetneq T$, there exists a maximal complementary solution $(x^{(\beta)}, y^{(\beta)}, s^{(\beta)})$ of $(QP_\beta)$ and $(QD_\beta)$ such that
$$(x^{(\beta)})_i + (s^{(\beta)})_i > 0 \quad \text{for some } i \in T,\ i \notin T_\beta. \qquad (4)$$
Let us now define for $\alpha > 0$
$$\bar{x}(\alpha) := x^{(\beta_2)} + \alpha\,(x^{(\beta_2)} - x^{(\beta_1)}), \quad \bar{y}(\alpha) := y^{(\beta_2)} + \alpha\,(y^{(\beta_2)} - y^{(\beta_1)}), \quad \bar{s}(\alpha) := s^{(\beta_2)} + \alpha\,(s^{(\beta_2)} - s^{(\beta_1)}).$$
For some $\alpha > 0$ small enough it holds that
$$\bar{x}(\alpha)_B > 0, \quad \bar{x}(\alpha)_{N \cup T} = 0, \quad \bar{s}(\alpha)_N > 0, \quad \bar{s}(\alpha)_{B \cup T} = 0, \qquad (5)$$
from which it follows that the proposed solutions are optimal for $(QP_{1+\alpha})$ and $(QD_{1+\alpha})$. Finally, we define
$$\tilde{x} := \frac{\alpha}{1 - \beta + \alpha}\, x^{(\beta)} + \frac{1 - \beta}{1 - \beta + \alpha}\, \bar{x}(\alpha), \quad \tilde{y} := \frac{\alpha}{1 - \beta + \alpha}\, y^{(\beta)} + \frac{1 - \beta}{1 - \beta + \alpha}\, \bar{y}(\alpha), \quad \tilde{s} := \frac{\alpha}{1 - \beta + \alpha}\, s^{(\beta)} + \frac{1 - \beta}{1 - \beta + \alpha}\, \bar{s}(\alpha),$$
which are feasible for $(QP_1)$ and $(QD_1)$. Also,
$$\tilde{x}^T \tilde{s} = \frac{\alpha}{1 - \beta + \alpha} \cdot \frac{1 - \beta}{1 - \beta + \alpha} \left[ (x^{(\beta)})^T \bar{s}(\alpha) + \bar{x}(\alpha)^T s^{(\beta)} \right].$$
Since (3) and (5) imply
$$(x^{(\beta)})^T \bar{s}(\alpha) = (x^{(\beta)})_N^T\, \bar{s}(\alpha)_N = 0 \qquad \text{and} \qquad (s^{(\beta)})^T \bar{x}(\alpha) = (s^{(\beta)})_B^T\, \bar{x}(\alpha)_B = 0,$$
$(\tilde{x}, \tilde{y}, \tilde{s})$ is optimal for $(QP_1)$ and $(QD_1)$. However, if (4) were to hold, we would have a solution of $(QP_1)$ and $(QD_1)$ with either $\tilde{x}_i > 0$ or $\tilde{s}_i > 0$ for some $i \in T$, contradicting the definition of $(B, N, T)$. Thus we conclude that $T_\beta = T$. Using (3), the first part of the theorem follows.

The second part can now be proven almost straightforwardly. From the proof of the first part we know that $(x(\beta), y(\beta), s(\beta))$ defined in (2) is optimal for $(QP_\beta)$ and $(QD_\beta)$ for $\beta \in (0, 1)$. Hence
$$f(\beta) = (b + \beta \Delta b)^T y(\beta) - \tfrac{1}{2}\, x(\beta)^T Q x(\beta) = f(0) + \beta \Delta b^T y^{(\beta_1)} + \beta\, b^T (y^{(\beta_2)} - y^{(\beta_1)}) + \beta^2 \Delta b^T (y^{(\beta_2)} - y^{(\beta_1)}) - \beta\, (x^{(\beta_2)} - x^{(\beta_1)})^T Q x^{(\beta_1)} - \tfrac{1}{2} \beta^2\, (x^{(\beta_2)} - x^{(\beta_1)})^T Q\, (x^{(\beta_2)} - x^{(\beta_1)}).$$
Note that
$$A\,(x^{(\beta_2)} - x^{(\beta_1)}) = \Delta b, \qquad A^T (y^{(\beta_2)} - y^{(\beta_1)}) + s^{(\beta_2)} - s^{(\beta_1)} = Q\,(x^{(\beta_2)} - x^{(\beta_1)}).$$
Multiplying the second equation by $x^{(\beta_2)} - x^{(\beta_1)}$ and by $x^{(\beta_1)}$ respectively, and using the first, gives
$$\Delta b^T (y^{(\beta_2)} - y^{(\beta_1)}) = (x^{(\beta_2)} - x^{(\beta_1)})^T Q\, (x^{(\beta_2)} - x^{(\beta_1)}), \qquad (6)$$
$$b^T (y^{(\beta_2)} - y^{(\beta_1)}) = (x^{(\beta_1)})^T Q\, (x^{(\beta_2)} - x^{(\beta_1)}). \qquad (7)$$
So we may write $f(\beta)$ as
$$f(\beta) = f(0) + \beta \Delta b^T y^{(\beta_1)} + \tfrac{1}{2} \beta^2 \Delta b^T (y^{(\beta_2)} - y^{(\beta_1)}).$$
This concludes the proof. □
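The quadratic dependence established in Theorem 40 can be tested numerically. The toy QP below is our own (it is not from this chapter) and is solved with SciPy's SLSQP method; on $[1, 3]$ its optimal partition is constant and the constraint multiplier is $y(\beta) = \beta/2$, so the quadratic interpolation built from the optimal solutions at the endpoints must reproduce $f(\beta) = \beta^2/4$ on the whole interval.

```python
import numpy as np
from scipy.optimize import minimize

# Our own toy QP: f(beta) = min{ 0.5*(x1^2 + x2^2) : x1 + x2 = beta, x >= 0 }.
# For beta > 0 both coordinates are positive (constant optimal partition),
# the optimum is x = (beta/2, beta/2), the multiplier is y = beta/2, and
# f(beta) = beta^2 / 4.
def f(beta):
    res = minimize(lambda x: 0.5 * x @ x, x0=[1.0, 1.0],
                   constraints=[{'type': 'eq', 'fun': lambda x: x[0] + x[1] - beta}],
                   bounds=[(0, None), (0, None)], method='SLSQP')
    return res.fun

# On [beta1, beta2] = [1, 3], f is the quadratic determined by f(beta1) and the
# endpoint slopes Delta b^T y(beta1), Delta b^T y(beta2) (here Delta b = 1):
#   f(beta) = f(beta1) + (beta - beta1)*y1 + (beta - beta1)^2 / (2*(beta2 - beta1)) * (y2 - y1)
beta1, beta2 = 1.0, 3.0
y1, y2 = beta1 / 2, beta2 / 2
for beta in [1.0, 1.5, 2.0, 2.5, 3.0]:
    interp = f(beta1) + (beta - beta1) * y1 \
             + (beta - beta1) ** 2 / (2 * (beta2 - beta1)) * (y2 - y1)
    assert abs(f(beta) - interp) < 1e-4       # solver matches the quadratic
    assert abs(f(beta) - beta ** 2 / 4) < 1e-4  # ... and the closed form
print("f is quadratic on [1, 3], as Theorem 40 predicts")
```

For $\beta_1 = 0$, $\beta_2 = 1$ this interpolation reduces exactly to the expression $f(0) + \beta \Delta b^T y^{(\beta_1)} + \tfrac{1}{2}\beta^2 \Delta b^T (y^{(\beta_2)} - y^{(\beta_1)})$ derived in the proof above.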
On a general interval $[\beta_1, \beta_2]$ with constant optimal partition, the same derivation (rescaling $[0, 1]$ to $[\beta_1, \beta_2]$) gives $f$ explicitly as
$$f(\beta) = f(\beta_1) + (\beta - \beta_1)\, \Delta b^T y^{(\beta_1)} + \frac{(\beta - \beta_1)^2}{2\,(\beta_2 - \beta_1)}\, \Delta b^T (y^{(\beta_2)} - y^{(\beta_1)}), \qquad \beta \in [\beta_1, \beta_2].$$
Note that we can now calculate the optimal value function between two subsequent transition-points. Theorem 40 implies the following corollary.

Corollary 41. If $\pi_{\beta_1} = \pi_{\beta_2} = \pi$ then $f(\beta)$ is linear on $[\beta_1, \beta_2]$ if and only if $Q x^{(\beta_1)} = Q x^{(\beta_2)}$.

Proof: Assuming again that $\beta_1 = 0$ and $\beta_2 = 1$, we have from the proof of Theorem 40 that $f(\beta)$ is linear on $[0, 1]$ if and only if $\Delta b^T (y^{(\beta_2)} - y^{(\beta_1)}) = 0$. Using (6), this is equivalent to
$$(x^{(\beta_2)} - x^{(\beta_1)})^T Q\, (x^{(\beta_2)} - x^{(\beta_1)}) = 0,$$
which, since $Q$ is positive semidefinite, holds if and only if $Q\,(x^{(\beta_2)} - x^{(\beta_1)}) = 0$. □

As a consequence of Theorem 40 we have the following results (cf. Theorem 9 and Theorem 11).

Theorem 42. The domain of $f$ can be partitioned into a finite set of subintervals such that the optimal partition is constant on each subinterval.

Proof: Since the number of possible partitions is finite and the number of elements in the domain of $f$ is infinite, it follows from Theorem 40 that the domain of $f$ can be partitioned into (open) subintervals on which the partition is constant, while it is different in the singletons in between the subintervals. This implies the result. □

Theorem 43. The optimal value function $f(\beta)$ is continuous, convex and piecewise quadratic.

Proof: Theorem 40 implies that on each subinterval on which the optimal partition is constant the function $f(\beta)$ is quadratic. Since $f(\beta)$ is convex (Lemma 37), it is continuous on the interior of its domain. It remains to be shown that the optimal value function is right-continuous at the left endpoint and left-continuous at the right endpoint of the domain of $f$. To this end we consider the left endpoint of the domain of $f$ (for the right endpoint the proof is similar). Let $\beta^*$ denote the left endpoint; we need to prove that
$$\lim_{\beta \downarrow \beta^*} f(\beta) = f(\beta^*).$$
Consider the limit of $(x(\beta), y(\beta), s(\beta))$ for $\beta \downarrow \beta^*$. Since the dual feasible region is closed and independent of $\beta$, the limit point $(\tilde{y}, \tilde{s})$ of $(y(\beta), s(\beta))$ is dual feasible. Further, the limit point $\tilde{x}$ of $x(\beta)$ is feasible for $(QP_{\beta^*})$. Since the solutions along the sequence are complementary, the limit points are also complementary, hence optimal. Applying Lemma 4 completes the proof. □

The values of $\beta$ where the optimal partition changes are called transition-points of $f$, and any interval between two successive transition-points of $f$ is called a curvy-linearity interval of $f$. In a similar way we define transition-points and curvy-linearity intervals for $g$. Each of the above results on $f(\beta)$ has its analogue for $g(\gamma)$. We state these results without further proof; the omitted proofs are straightforward modifications of the above.

Theorem 44. Let $\gamma_1$ and $\gamma_2 > \gamma_1$ be such that $\pi_{\gamma_1} = \pi_{\gamma_2}$. Then $\pi_\gamma$ is constant for all $\gamma \in [\gamma_1, \gamma_2]$ and $g(\gamma)$ is quadratic on the interval $[\gamma_1, \gamma_2]$.

Corollary 45. If $\pi_{\gamma_1} = \pi_{\gamma_2} = \pi$ then $g(\gamma)$ is linear on $[\gamma_1, \gamma_2]$ if and only if $Q x^{(\gamma_1)} = Q x^{(\gamma_2)}$.

Theorem 46. The domain of $g$ can be partitioned into a finite set of subintervals such that the optimal partition is constant on each subinterval.
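Corollary 45's linearity criterion can also be observed numerically. In the toy parametric QP below (our own example, not from this chapter, solved with SciPy's SLSQP), $Q x^{(\gamma)}$ varies on $[0, 1]$, where $g$ is genuinely quadratic, and is constant on $[1, \infty)$, where $g$ is linear:

```python
import numpy as np
from scipy.optimize import minimize

# Our own toy QP for g:  g(gamma) = min{ 0.5*x1^2 + gamma*x2 : x1 + x2 = 1, x >= 0 },
# i.e. Q = diag(1, 0), c = 0, Delta c = (0, 1).  By hand: x(gamma) = (gamma, 1 - gamma)
# for 0 <= gamma <= 1 and x = (1, 0) for gamma >= 1, so
#   g(gamma) = gamma - 0.5*gamma^2 on [0, 1]   (Q x changes with gamma: quadratic piece)
#   g(gamma) = 0.5 on [1, oo)                  (Q x constant: linear piece, cf. Cor. 45)
def solve(gamma):
    res = minimize(lambda x: 0.5 * x[0] ** 2 + gamma * x[1], x0=[0.5, 0.5],
                   constraints=[{'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1.0}],
                   bounds=[(0, None), (0, None)], method='SLSQP')
    return res.fun, res.x

for gamma in [0.25, 0.5, 0.75]:
    val, _ = solve(gamma)
    assert abs(val - (gamma - 0.5 * gamma ** 2)) < 1e-4   # quadratic piece
for gamma in [1.5, 2.0, 3.0]:
    val, x = solve(gamma)
    assert abs(val - 0.5) < 1e-4                          # linear (constant) piece
    assert abs(x[0] - 1.0) < 1e-3                         # Q x = (1, 0) constant here
print("g is quadratic on [0, 1] and linear beyond the transition-point gamma = 1")
```

The transition-point at $\gamma = 1$ is exactly where the optimal partition, and with it $Q x^{(\gamma)}$, stops changing.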


More information

7. Lecture notes on the ellipsoid algorithm

7. Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear

More information

MAT 242 CHAPTER 4: SUBSPACES OF R n

MAT 242 CHAPTER 4: SUBSPACES OF R n MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)

More information

Lecture notes on the ellipsoid algorithm

Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

SUMS PROBLEM COMPETITION, 2000

SUMS PROBLEM COMPETITION, 2000 SUMS ROBLEM COMETITION, 2000 SOLUTIONS 1 The result is well known, and called Morley s Theorem Many proofs are known See for example HSM Coxeter, Introduction to Geometry, page 23 2 If the number of vertices,

More information

A Strongly Polynomial Simplex Method for Totally Unimodular LP

A Strongly Polynomial Simplex Method for Totally Unimodular LP A Strongly Polynomial Simplex Method for Totally Unimodular LP Shinji Mizuno July 19, 2014 Abstract Kitahara and Mizuno get new bounds for the number of distinct solutions generated by the simplex method

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization 2017-2018 1 Maximum matching on bipartite graphs Given a graph G = (V, E), find a maximum cardinal matching. 1.1 Direct algorithms Theorem 1.1 (Petersen, 1891) A matching M is

More information

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Simplex method(s) for solving LPs in standard form

Simplex method(s) for solving LPs in standard form Simplex method: outline I The Simplex Method is a family of algorithms for solving LPs in standard form (and their duals) I Goal: identify an optimal basis, as in Definition 3.3 I Versions we will consider:

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Advanced Linear Programming: The Exercises

Advanced Linear Programming: The Exercises Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z

More information

Lecture Note 18: Duality

Lecture Note 18: Duality MATH 5330: Computational Methods of Linear Algebra 1 The Dual Problems Lecture Note 18: Duality Xianyi Zeng Department of Mathematical Sciences, UTEP The concept duality, just like accuracy and stability,

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

A New Invariance Property of Lyapunov Characteristic Directions S. Bharadwaj and K.D. Mease Mechanical and Aerospace Engineering University of Califor

A New Invariance Property of Lyapunov Characteristic Directions S. Bharadwaj and K.D. Mease Mechanical and Aerospace Engineering University of Califor A New Invariance Property of Lyapunov Characteristic Directions S. Bharadwaj and K.D. Mease Mechanical and Aerospace Engineering University of California, Irvine, California, 92697-3975 Email: sanjay@eng.uci.edu,

More information

Journal of Universal Computer Science, vol. 3, no. 11 (1997), submitted: 8/8/97, accepted: 16/10/97, appeared: 28/11/97 Springer Pub. Co.

Journal of Universal Computer Science, vol. 3, no. 11 (1997), submitted: 8/8/97, accepted: 16/10/97, appeared: 28/11/97 Springer Pub. Co. Journal of Universal Computer Science, vol. 3, no. 11 (1997), 1250-1254 submitted: 8/8/97, accepted: 16/10/97, appeared: 28/11/97 Springer Pub. Co. Sequential Continuity of Linear Mappings in Constructive

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

LINEAR PROGRAMMING II

LINEAR PROGRAMMING II LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

Parabolic Layers, II

Parabolic Layers, II Discrete Approximations for Singularly Perturbed Boundary Value Problems with Parabolic Layers, II Paul A. Farrell, Pieter W. Hemker, and Grigori I. Shishkin Computer Science Program Technical Report Number

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

CSCI5654 (Linear Programming, Fall 2013) Lecture-8. Lecture 8 Slide# 1

CSCI5654 (Linear Programming, Fall 2013) Lecture-8. Lecture 8 Slide# 1 CSCI5654 (Linear Programming, Fall 2013) Lecture-8 Lecture 8 Slide# 1 Today s Lecture 1. Recap of dual variables and strong duality. 2. Complementary Slackness Theorem. 3. Interpretation of dual variables.

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Continuity of Bçezier patches. Jana Pçlnikovça, Jaroslav Plaçcek, Juraj ç Sofranko. Faculty of Mathematics and Physics. Comenius University

Continuity of Bçezier patches. Jana Pçlnikovça, Jaroslav Plaçcek, Juraj ç Sofranko. Faculty of Mathematics and Physics. Comenius University Continuity of Bezier patches Jana Plnikova, Jaroslav Placek, Juraj Sofranko Faculty of Mathematics and Physics Comenius University Bratislava Abstract The paper is concerned about the question of smooth

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Linear and Combinatorial Optimization

Linear and Combinatorial Optimization Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality

More information

MAXIMA AND MINIMA CHAPTER 7.1 INTRODUCTION 7.2 CONCEPT OF LOCAL MAXIMA AND LOCAL MINIMA

MAXIMA AND MINIMA CHAPTER 7.1 INTRODUCTION 7.2 CONCEPT OF LOCAL MAXIMA AND LOCAL MINIMA CHAPTER 7 MAXIMA AND MINIMA 7.1 INTRODUCTION The notion of optimizing functions is one of the most important application of calculus used in almost every sphere of life including geometry, business, trade,

More information

also has x æ as a local imizer. Of course, æ is typically not known, but an algorithm can approximate æ as it approximates x æ èas the augmented Lagra

also has x æ as a local imizer. Of course, æ is typically not known, but an algorithm can approximate æ as it approximates x æ èas the augmented Lagra Introduction to sequential quadratic programg Mark S. Gockenbach Introduction Sequential quadratic programg èsqpè methods attempt to solve a nonlinear program directly rather than convert it to a sequence

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Linear Programming. Chapter Introduction

Linear Programming. Chapter Introduction Chapter 3 Linear Programming Linear programs (LP) play an important role in the theory and practice of optimization problems. Many COPs can directly be formulated as LPs. Furthermore, LPs are invaluable

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information