© 2000 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 10, No. 2

SIMULTANEOUS PRIMAL-DUAL RIGHT-HAND-SIDE SENSITIVITY ANALYSIS FROM A STRICTLY COMPLEMENTARY SOLUTION OF A LINEAR PROGRAM

HARVEY J. GREENBERG

Abstract. This paper establishes theorems about the simultaneous variation of right-hand sides and cost coefficients in a linear program from a strictly complementary solution. Some results are extensions of those that have been proven for varying the right-hand side of the primal or the dual, but not both; other results are new. In addition, changes in the optimal partition, and what they mean in economic terms, are related to the basis-driven approach, notably to the theory of compatibility. Beyond new theorems about this relation, the transition graph is extended to provide another visualization of the underlying economics.

Key words. linear programming, sensitivity analysis, computational economics, interior point methods, parametric programming, optimal partition

AMS subject classifications. 90C05, 90C31, 49Q12

PII. S

1. Introduction. Consider the primal-dual pair of linear programs:

    P: min{cx : x ≥ 0, Ax ≥ b},    D: max{πb : π ≥ 0, πA ≤ c},

where x is a column vector in R^n of levels; b is a column vector in R^m of right-hand sides; c is a row vector in R^n of objective coefficients; π is a row vector in R^m of prices; and A is an m × n matrix. This paper concerns the simultaneous variation of right-hand sides and objective coefficients (dual right-hand sides), which we call rim data: r = (b, c). The change is of the form θh, where θ > 0 and h is a nonzero direction vector. We have traditionally been concerned with the effect a change has on the optimality of a basis [2]. Here we suppose we have a strictly complementary solution, which is generally not basic (unless the primal-dual solution is unique). A key property of a strictly complementary solution is that it identifies the optimal partition.
While we define this formally in the next section, it is a unique partition of the rows and columns of the linear program matrix, A, into active and inactive parts, somewhat analogous to a partition into basic and nonbasic activities. We are interested in the following questions: Must the optimal partition change for any positive value of θ? If so, what is the new optimal partition? If not, for what range does this partition remain optimal? How does this relate to basic ranges? How does this relate to the differential Lagrangian? How does the optimal objective value change as a function of θ? Previous results [1, 10, 12] answered most of these questions when b or c change separately, but some of those proofs do not have natural extensions to deal with their simultaneous variation, and we shall consider the decoupling principle mentioned in [9].

Received by the editors October 8, 1996; accepted for publication (in revised form) March 7, 1999; published electronically February 10, 2000.
Mathematics Department, University of Colorado at Denver, P.O. Box , Denver, CO (hgreenbe@carbon.cudenver.edu, hgreenbe/).
The rest of this paper is organized as follows. In the next section, we briefly give the terms and concepts needed for the main results. (In general, the technical terms used throughout this paper are defined in the Mathematical Programming Glossary [6].) Then, we consider the first set of questions concerning the optimal partition, both when it does not change and when it does. In doing so, we shall relate this to the differential Lagrangian, and we shall derive the piecewise quadratic form of the objective value from a new vantage point. Finally, we relate the optimal partition change (if any) to basis-driven sensitivity analysis, notably to the theory of compatible bases (see [4]).

2. Terms and concepts. Let P(b) and D(c) denote the primal and dual polyhedra, respectively. For (x, π) ∈ P(b) × D(c), we associate surplus variables, s = Ax − b, and reduced costs, d = c − πA. Let P*(r) and D*(r) denote the primal and dual optimality regions, respectively, which we suppose are not empty. The support set of a nonnegative vector, v, is denoted σ(v) = {k : v_k > 0}. Then, primal-dual optimality can be represented by complementary slackness: σ(x) ∩ σ(d) = ∅ and σ(π) ∩ σ(s) = ∅. As shown by Goldman and Tucker [3], there must exist a strictly complementary solution, whereby the support sets span the rows and columns: σ(π) ∪ σ(s) = {1, ..., m} and σ(x) ∪ σ(d) = {1, ..., n}. This defines the (unique) optimal partition, obtained from any strictly complementary (i.e., interior) solution. Although the optimal partition was discovered in 1956 [3] and has been shown to be an important part of algorithm design [14, 16] and sensitivity analysis [5], it has not become familiar enough to appear in the linear programming textbooks. For that reason we consider a small example to illustrate the optimal partition and related concepts. Later, after presenting the theory in sections 3 and 4, we shall consider another example pertaining to electricity generation from competing sources.
Example. min −x₁ : x ≥ 0, x₁ ≤ b₁, x₂ ≤ b₂. The primal optimality region is the line segment, [(b₁, 0), (b₁, b₂)], whose relative interior simply excludes the extreme points. The optimal partition has σ(x) = {1, 2}, σ(d) = ∅, σ(s) = {2}, and σ(π) = {1}. As long as c does not change, this partition remains optimal (for all b > 0). If c changes such that Δc₂ ≠ 0, one of the two extreme points becomes uniquely optimal, and the optimal partition must change immediately. That is, suppose we have the perturbed problem, where Δc₁ < 1:

    min (−1 + Δc₁)x₁ + Δc₂x₂ : x ≥ 0, x₁ ≤ b₁, x₂ ≤ b₂,

    Δc₂ > 0 ⟹ x* = (b₁, 0),    Δc₂ < 0 ⟹ x* = (b₁, b₂).

In the first case, the optimal partition changes to σ(x) = {1} and σ(d) = {2} (no change in σ(s) and σ(π)). In the second case, we have σ(s) = ∅ and σ(π) = {1, 2} (no change in σ(x) and σ(d)). We call the rows in σ(π) active because they never have surplus in any optimal solution (i.e., s_i = 0 ∀ i ∈ σ(π)), and for each such row we have an optimal solution in which its price is positive (namely, the π obtained). Similarly, we call the columns in σ(x) active because they never have a positive reduced cost (i.e., d_j = 0 ∀ j ∈ σ(x)), and for each such column we have an optimal solution in which its level is positive (namely, the x obtained). The complementary rows and columns are called inactive. The rows in σ(s) never have a positive price, and each inactive row has a positive surplus in at
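These support-set calculations are easy to mechanize. The following sketch (in Python, with illustrative numbers b₁ = 3, b₂ = 2 of my own choosing, and a small tolerance in place of exact arithmetic) checks complementary slackness and strict complementarity for the example's relative-interior solution:

```python
# Support sets and strict complementarity for the example
# min -x1 : x >= 0, x1 <= b1, x2 <= b2, written as Ax >= b with A = -I.
# The point below lies in the relative interior of the optimal face.
TOL = 1e-9

def support(v):
    """sigma(v) = {k : v_k > 0} (1-based, to match the text)."""
    return {k + 1 for k, vk in enumerate(v) if vk > TOL}

b1, b2 = 3.0, 2.0
x = [b1, b2 / 2]            # relative-interior primal point
s = [0.0, b2 - x[1]]        # surplus s = Ax - b = (b1 - x1, b2 - x2) here
pi = [1.0, 0.0]             # dual prices
d = [-1.0 + pi[0], pi[1]]   # reduced costs d = c - pi*A with c = (-1, 0)

# Complementary slackness: the supports are disjoint ...
assert support(x) & support(d) == set() and support(pi) & support(s) == set()
# ... and strict complementarity: the supports span all rows and columns,
# which is exactly what identifies the optimal partition.
assert support(x) | support(d) == {1, 2} and support(pi) | support(s) == {1, 2}
print(support(x), support(d), support(s), support(pi))
```

Strict complementarity is what makes the partition well defined: the four supports printed here are exactly σ(x), σ(d), σ(s), and σ(π) of the text.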
least one optimal solution (namely, the s obtained). The columns in σ(d) never have a positive level, and each inactive column has a positive reduced cost in at least one optimal solution (namely, the d obtained). In [5] several problems were presented to illustrate how the optimal partition provides the information sought, and that this is not available from just one optimal basic solution (unless it is unique). The postoptimal sensitivity analysis examples included job shop scheduling (critical path problem) and peer group identification (DEA). The examples went on to show how the optimal partition helps with debugging, such as finding irreducible infeasible subsystems or all implied equalities with less computational effort than a simplex method, due to knowing when a level is positive in some optimal solution.

Partition A according to the optimal partition:

    A = [ B   N  ]     top rows:      σ(π) = rows active in some optimal solution;
        [ B′  N′ ]     bottom rows:   σ(s) = rows inactive in all optimal solutions;
                       left columns:  σ(x) = columns active in some optimal solution;
                       right columns: σ(d) = columns inactive in all optimal solutions.

Partition the rim data vectors conformally: b = (b_N; b_B′) and c = (c_B, c_N). Also, x = (x_B; x_N), s = (s_N; s_B′), π = (π_N, π_B′), and d = (d_B, d_N).

Let us extend the previous example to illustrate this notation:

    min x₁ + x₃ : x ≥ 0, x₁ ≥ b₁, x₂ ≤ b₂, x₁ + x₂ + x₃ ≤ b₃,

where b₁ + b₂ > b₃ > max{b₁, b₂}. A strictly complementary optimal solution is x = (b₁, ½(b₃ − b₁), 0)ᵗ, d = (0, 0, 1), s = (0, b₂ − ½(b₃ − b₁), ½(b₃ − b₁))ᵗ, and π = (1, 0, 0). The optimal partition, revealed by this solution, has only one active row, {1} (= σ(π)), and two active columns, {1, 2} (= σ(x)). Writing the constraints in the standard form Ax ≥ b (negating the second and third), the induced partitions are as follows:

          Active    Inactive
    A = [  1    0  |   0 ]   } Active
        [  0   −1  |   0 ]   } Inactive
        [ −1   −1  |  −1 ]   } Inactive

    c = (1, 0 | 1),   xᵗ = (b₁, ½(b₃ − b₁) | 0),   d = (0, 0 | 1),
and b = (b₁ | −b₂, −b₃)ᵗ, s = (0 | b₂ − ½(b₃ − b₁), ½(b₃ − b₁))ᵗ, πᵗ = (1 | 0, 0). Using the optimal partition, the original linear programs are equivalent to the following primal-dual pair:

    Primal                                  Dual
    min c_B x_B + c_N x_N :                 max π_N b_N + π_B′ b_B′ :
        B x_B + N x_N − s_N = b_N,              π_N B + π_B′ B′ + d_B = c_B,
        B′ x_B + N′ x_N − s_B′ = b_B′,          π_N N + π_B′ N′ + d_N = c_N,
        x, s ≥ 0,                               π, d ≥ 0.

Maintaining the partition conditions, x_N = 0, s_N = 0, π_B′ = 0, and d_B = 0, we define the following primal and dual polyhedral conditions, which we shall use later:

    P(b; r) = {(x_B, 0) : x_B ≥ 0, B x_B = b_N, B′ x_B ≥ b_B′},
    D(c; r) = {(π_N, 0) : π_N ≥ 0, π_N B = c_B, π_N N ≤ c_N},
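The block structure above is just index bookkeeping. A minimal numpy sketch, using the three-constraint example normalized to the standard form Ax ≥ b (the sign normalization and the 0-based index sets are mine):

```python
import numpy as np

# Partition A into blocks (B, N; B', N') from the optimal partition, using
# the three-variable example written in standard >= form
# (x1 >= b1, -x2 >= -b2, -x1-x2-x3 >= -b3); the sign normalization is mine.
A = np.array([[ 1.,  0.,  0.],
              [ 0., -1.,  0.],
              [-1., -1., -1.]])
active_rows, active_cols = [0], [0, 1]          # sigma(pi), sigma(x), 0-based
inactive_rows = [i for i in range(A.shape[0]) if i not in active_rows]
inactive_cols = [j for j in range(A.shape[1]) if j not in active_cols]

B     = A[np.ix_(active_rows,   active_cols)]   # active rows x active cols
N     = A[np.ix_(active_rows,   inactive_cols)]
Bbar  = A[np.ix_(inactive_rows, active_cols)]   # the block written B' above
Nbar  = A[np.ix_(inactive_rows, inactive_cols)] # the block written N' above
print(B, N, Bbar, Nbar, sep="\n")
```

The same `np.ix_` slices applied to b, c, x, s, π, and d give the conformal rim and solution partitions.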
where the current rim data value, r, determines the partition, B, N. (While P(b; r) = P*(r) and D(c; r) = D*(r), we use P(b′; r) and D(c′; r) to denote the same polyhedral conditions for (b′, c′) ≠ r, keeping the partition fixed at B, N.) Their relative interiors are the strictly complementary solutions:

    ri(P(b; r)) = {(x_B, 0) : x_B > 0, B x_B = b_N, B′ x_B > b_B′},
    ri(D(c; r)) = {(π_N, 0) : π_N > 0, π_N B = c_B, π_N N < c_N}.

We say h = (δb, δc) is admissible if the linear program has an optimal solution for r + θh for some θ > 0. The set of admissible directions, say H, is composed of those h for which the primal and dual feasibility conditions hold:

    H = {(δb, δc) ∈ R^(m+n) : ∃ θ > 0, x ≥ 0, π ≥ 0 such that Ax ≥ b + θδb and πA ≤ c + θδc}.

A basis, ℬ, is optimal at r if its associated primal and dual solutions are feasible. (We use ℬ so as not to confuse the basis with the active submatrix, B, in an optimal partition. In general, ℬ ≠ B unless the solution is unique.) For h ∈ R^(m+n), we say ℬ is compatible with h (and h with ℬ) if ℬ is also optimal for r + θh for some θ > 0. Its range of compatibility is

    ρ(ℬ; h) = sup{θ : ℬ is optimal for r + θh}.

(Note: ℬ is optimal throughout [r, r + ρ(ℬ; h)h].) Let H(ℬ) denote the set of directions compatible with ℬ:

    H(ℬ) = {h ∈ R^(m+n) : ρ(ℬ; h) > 0}.

One of the fundamental theorems of (basic) compatibility [4] is H = ∪_ℬ H(ℬ). We shall relate this to a new theory of compatibility in connection with the optimal partition. Also, we denote the basic spectrum:

    ρ*(h) = sup{ρ(ℬ; h) : ℬ is optimal for r}.

Given h ∈ H, the objective value is z(r + θh) as θ increases from zero. Suppose ℬ is a compatible basis (one must exist), with (x, π) the associated basic solution. Then, since the basis remains optimal on [0, ρ(ℬ; h)], the optimal value is quadratic:

    z(r + θh) = z(r) + θ(δc_ℬ x_ℬ + π_𝒩 δb_𝒩) + θ²(δc_ℬ ℬ⁻¹ δb_𝒩),

where 𝒩 is the complement of ℬ (following notation analogous to the partition, but induced by basic status).
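The quadratic form of the optimal value under a fixed compatible basis can be checked numerically. In this sketch the basis matrix and rim data are arbitrary illustrative numbers of my own; they exercise only the algebra, not optimality of the basis:

```python
import numpy as np

# Numeric check of the quadratic form of z(r + theta*h) while one basis B
# stays optimal: z(theta) = (cB + theta*dcB) B^{-1} (bN + theta*dbN).
B   = np.array([[2., 1.], [1., 3.]])   # stand-in basis matrix (mine)
bN  = np.array([4., 5.]);  cB  = np.array([1., 2.])
dbN = np.array([1., -1.]); dcB = np.array([0.5, 0.25])

xB = np.linalg.solve(B, bN)            # basic levels
pi = np.linalg.solve(B.T, cB)          # prices: pi B = cB
z0 = cB @ xB
lin  = dcB @ xB + pi @ dbN             # linear coefficient
quad = dcB @ np.linalg.solve(B, dbN)   # quadratic coefficient

for theta in (0.0, 0.3, 1.0, 2.5):
    z_theta = (cB + theta * dcB) @ np.linalg.solve(B, bN + theta * dbN)
    assert np.isclose(z_theta, z0 + theta * lin + theta**2 * quad)
print("quadratic form verified")
```

Note that the cross terms δc·B⁻¹b and c·B⁻¹δb collapse to δc·x and π·δb, which is why the linear coefficient is computable from the unperturbed solution alone.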
We shall prove that a similar result holds when the optimal partition does not change. We say z has constant functional form if the coefficients are constant. In particular, z has constant functional form on [0, ρ*(h)] ∀ h ∈ H. Further, if either δb = 0 or δc = 0, the quadratic term is zero and z(r + θh) − z(r) is linear in θ. In this case, we call the range of θ for which z has constant functional form a linearity interval. It has already been proven [1, 10] that the break points of the linearity intervals correspond precisely to where the optimal partition changes (which is not necessarily the same as when the basis must change; see [7] for an example). Here we extend this to the more general rim variation, where the functional form is piecewise quadratic.

3. The optimal partition for the perturbation. Define the range for which the optimal partition does not change for a given direction (h):

    τ(h) ≡ sup{θ : the optimal partition does not change throughout [r, r + θh]}.

In this definition, the left endpoint of the line segment is closed, so if the partition must change at r (for any θ > 0), τ(h) = 0. If 0 < τ(h) < ∞, the optimal partition is invariant on [r, r + τ(h)h), but it could change at r + τ(h)h.
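For intuition about τ(h) = 0 versus τ(h) > 0, the first example of section 2 can be probed directly. The sketch below checks only the reduced-cost part of the old partition's strict-complementarity conditions, which is the part that fails; the data and direction vectors are my own:

```python
# tau(h) probes for the first example (min -x1 : x >= 0, x1 <= b1, x2 <= b2).
# Old partition: all columns active (so d must vanish) and active row {1}.
# A pure b-direction keeps the partition (tau > 0); any move of c2 kills it
# immediately (tau = 0) because d2 = theta*dc2 can no longer be zero.
def d_given_partition(c, pi):
    """Reduced costs d = c - pi*A for A = -I (the <= form rewritten)."""
    return [c[0] + pi[0], c[1] + pi[1]]

def partition_survives(theta, db=(0.0, 0.0), dc=(0.0, 0.0)):
    b = [3.0 + theta * db[0], 2.0 + theta * db[1]]
    c = [-1.0 + theta * dc[0], 0.0 + theta * dc[1]]
    if min(b) <= 0 or c[0] >= 0:
        return False                  # outside the example's validity range
    pi = [-c[0], 0.0]                 # only row 1 may carry a price
    return d_given_partition(c, pi) == [0.0, 0.0]

assert all(partition_survives(t, db=(1.0, -0.5)) for t in (0.1, 1.0, 2.0))
assert not any(partition_survives(t, dc=(0.0, 1.0)) for t in (0.1, 1.0))
print("tau > 0 for the b-direction; tau = 0 once c2 moves")
```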
Lemma 3.1. Suppose h = (δb, δc) is an admissible direction and (Δb, Δc) = θh for θ > 0 such that r + θh has a primal-dual solution. Then, the optimal partition for r + θh is the same as the optimal partition for r if and only if ri(P(b + Δb; r) × D(c + Δc; r)) ≠ ∅. Further, when the optimal partition is the same at both endpoints, it remains the same throughout the line segment, [r, r + θh].

Proof. The first part follows from the uniqueness of the optimal partition, determined by any strictly complementary solution. To show that the optimal partition remains invariant on the line segment, [r, r + θh], let (x⁰, π⁰) be a strictly complementary solution in P*(r) × D*(r), and let (x*, π*) be a strictly complementary solution in P*(r + θh) × D*(r + θh). Suppose r′ = αr + (1 − α)(r + θh) for some α ∈ [0, 1], and define (x, π) = α(x⁰, π⁰) + (1 − α)(x*, π*). Since the optimal partition for r and r + θh is the same, we have

    x_B = αx⁰_B + (1 − α)x*_B > 0 and x_N = αx⁰_N + (1 − α)x*_N = 0;
    π_N = απ⁰_N + (1 − α)π*_N > 0 and π_B′ = απ⁰_B′ + (1 − α)π*_B′ = 0.

Thus, σ(x) = σ(x⁰) and σ(π) = σ(π⁰). Further, writing (b̄, c̄) for the rim data of r + θh and (b′, c′) for that of r′,

    B x_B = B[αx⁰_B + (1 − α)x*_B] = αb_N + (1 − α)b̄_N = b′_N,
    B′ x_B = B′[αx⁰_B + (1 − α)x*_B] > αb_B′ + (1 − α)b̄_B′ = b′_B′,
    π_N B = [απ⁰_N + (1 − α)π*_N]B = αc_B + (1 − α)c̄_B = c′_B,
    π_N N = [απ⁰_N + (1 − α)π*_N]N < αc_N + (1 − α)c̄_N = c′_N.

Thus, σ(s) = σ(s⁰) and σ(d) = σ(d⁰), so (x, π) is a strictly complementary solution for the linear program defined by r′, and it has the same partition. This must therefore be the optimal partition, since it is unique.

Suppose h = (δb, δc) is an admissible direction, so θ*h is an admissible change for some θ* > 0. If the optimal partition for r + θ*h is the same as it is for r, Lemma 3.1 establishes that it is the same for r + θh ∀ θ ∈ [0, θ*]. In that case, the objective value changes with constant functional form.
To see this, use the construction in the proof: (x, π) = α(x⁰, π⁰) + (1 − α)(x*, π*), where (x⁰, π⁰) is strictly complementary for r, (x*, π*) is strictly complementary for r + θ*h, and α = 1 − θ/θ*. Then, since the optimal partition is the same, (x, π) is strictly complementary for r + θh, and

    z(r + θh) = (c + θδc)[(1 − θ/θ*)x⁰ + (θ/θ*)x*]
              = z(r) + θ[c_B(x*_B − x⁰_B)/θ* + δc_B x⁰_B] + θ² δc_B(x*_B − x⁰_B)/θ*.

This proves the following generalization of the linear case [1, 10, 12, 13].

Theorem 3.2 (optimal value function). If the optimal partition does not change at r for the admissible change direction h, then z has constant functional form.

Further, Lemma 3.1 extends to the following convexity property.

Theorem 3.3 (optimal partition convexity). If the optimal partition is the same throughout [r, r + h¹] as it is throughout [r, r + h²], then it is the same throughout [r, r + αh¹ + (1 − α)h²] ∀ α ∈ [0, 1].

Proof. Let (xᵏ, πᵏ) be a strictly complementary solution for k = 1, 2, so they satisfy the primal-dual conditions:

    B xᵏ_B = b_N + δbᵏ_N,      πᵏ_N B = c_B + δcᵏ_B,
    B′ xᵏ_B > b_B′ + δbᵏ_B′,   πᵏ_N N < c_N + δcᵏ_N,
    xᵏ_B > 0, xᵏ_N = 0,        πᵏ_N > 0, πᵏ_B′ = 0.
Define (x, π) = α(x¹, π¹) + (1 − α)(x², π²). Multiply the above by α for k = 1 and by 1 − α for k = 2 to satisfy the following for h = αh¹ + (1 − α)h² = (Δb, Δc):

    B x_B = b_N + Δb_N,      π_N B = c_B + Δc_B,
    B′ x_B > b_B′ + Δb_B′,   π_N N < c_N + Δc_N,
    x_B > 0, x_N = 0,        π_N > 0, π_B′ = 0.

So, (x, π) is a strictly complementary solution for r + αh¹ + (1 − α)h² with the same partition. It follows from Lemma 3.1 that the optimal partition remains the same throughout [r, r + αh¹ + (1 − α)h²].

In the special case that h¹ = (Δb, 0) and h² = (0, Δc), Theorem 3.3 on optimal partition convexity can be strengthened to the following decoupling principle.

Corollary 3.4. The optimal partition does not change in [r, r + (Δb, Δc)] if and only if it does not change in [r, r + (Δb, 0)] ∪ [r, r + (0, Δc)].

Proof. If the optimal partition does not change in [r, r + (Δb, Δc)], the following primal-dual system has a solution:

    B x_B = b_N + Δb_N,      π_N B = c_B + Δc_B,
    B′ x_B > b_B′ + Δb_B′,   π_N N < c_N + Δc_N,
    x_B > 0,                 π_N > 0.

Let (x*, π*) be a solution, and let (x⁰, π⁰) be a strictly complementary solution for r. Then, (x*, π⁰) is a strictly complementary solution for r + (Δb, 0), and (x⁰, π*) is a strictly complementary solution for r + (0, Δc). These imply that the optimal partition does not change in [r, r + (Δb, 0)] ∪ [r, r + (0, Δc)]. Conversely, if the optimal partition does not change in [r, r + (Δb, 0)], there exists x* to satisfy the primal conditions, and if it does not change in [r, r + (0, Δc)], there exists π* to satisfy the dual conditions. Since the partitions are the same, (x*, π*) is a strictly complementary solution for r + (Δb, Δc), so the partition is the same throughout [r, r + (Δb, Δc)].

Let the optimal partition be compatible with h (and h with it) if τ(h) > 0. Define the set of compatible directions: H̄ = {h : τ(h) > 0}. Then, we have the following analogy to the basis compatibility convexity theorem (see [4]).

Theorem 3.5 (partition compatibility). The following properties hold for H̄ and τ.
(1) H̄ is a nonempty convex cone.
(2) τ is quasi-concave on H̄; i.e., τ(αh¹ + (1 − α)h²) ≥ min{τ(h¹), τ(h²)} for h¹, h² ∈ H̄ and α ∈ [0, 1].
(3) H̄ satisfies the decoupling principle; i.e., (δb, δc) ∈ H̄ if and only if (δb, 0) ∈ H̄ and (0, δc) ∈ H̄.

Proof. (1) Suppose h¹, h² ∈ H̄ and define θ* = min{τ(h¹), τ(h²)} > 0. Then, for θ ∈ (0, θ*), there exist (xᵏ, πᵏ) satisfying the strictly complementary primal-dual conditions:

    B xᵏ_B = b_N + θδbᵏ_N,       πᵏ_N B = c_B + θδcᵏ_B,
    B′ xᵏ_B > b_B′ + θδbᵏ_B′,    πᵏ_N N < c_N + θδcᵏ_N,
    xᵏ_B > 0,                    πᵏ_N > 0

for k = 1, 2. Define (x, π) = ½(x¹, π¹) + ½(x², π²), then multiply the above by ½ and
sum to obtain the following:

    B x_B = b_N + ½θ(δb¹_N + δb²_N),       π_N B = c_B + ½θ(δc¹_B + δc²_B),
    B′ x_B > b_B′ + ½θ(δb¹_B′ + δb²_B′),   π_N N < c_N + ½θ(δc¹_N + δc²_N),
    x_B > 0,                               π_N > 0.

Define θ′ = ½θ, and we have the desired result: the optimal partition for r + θ′(h¹ + h²) is the same as the optimal partition for r, so h¹ + h² ∈ H̄. To show that H̄ is nonempty, let h = (b, c), so r + θh = (1 + θ)r. Then, by rescaling (x ↦ (1 + θ)x and π ↦ (1 + θ)π), the strictly complementary solution has the same partition for all θ ≥ 0.

(2) Let θ* = min{τ(h¹), τ(h²)} > 0 and (x, π) = α(x¹, π¹) + (1 − α)(x², π²). For θ ∈ (0, θ*), multiply the first system (k = 1) by α, the second (k = 2) by 1 − α, and sum to prove that (x, π) is a strictly complementary solution for the partition:

    B x_B = b_N + θΔb_N,      π_N B = c_B + θΔc_B,
    B′ x_B > b_B′ + θΔb_B′,   π_N N < c_N + θΔc_N,
    x_B > 0,                  π_N > 0,

where (Δb, Δc) = αh¹ + (1 − α)h². Thus, τ(αh¹ + (1 − α)h²) ≥ sup{θ : θ < θ*} = θ*.

(3) Let h = (δb, δc) ∈ H̄. Then, ∃ θ* > 0 such that for θ ∈ [0, θ*), the primal-dual conditions have a strictly complementary solution, say, (x, π) (with the same partition). Let (x⁰, π⁰) be a strictly complementary solution for r. Then, since these have the same partition, (x, π⁰) is a strictly complementary solution for r + θ(δb, 0) and (x⁰, π) is a strictly complementary solution for r + θ(0, δc). Conversely, if (x, π⁰) is a strictly complementary solution for r + θ(δb, 0) and (x⁰, π) is a strictly complementary solution for r + θ(0, δc), both having the partition defined by B, N, it follows that (x, π) is a strictly complementary solution for r + θ(δb, δc).

Now suppose that h is an admissible direction, but the optimal partition changes: ri(P(b + θδb; r) × D(c + θδc; r)) = ∅ ∀ θ > 0. The following theorem shows the fundamental relationship the new partition has with the differential linear programs that comprise Mills's differential Lagrangian [11] when A does not change. (Mills's theorem was extended [15, 8] to apply to any linear program, rather than the special case of a game.)
Further, this theorem applies generally, even if the optimal partition does not change. The new result is found in part (3); the proofs [1, 10] of parts (1) and (2) do not extend, but they are included here for self-containment.

Theorem 3.6 (optimal partition perturbation). Suppose (x⁰, π⁰) is a strictly complementary solution for r and (δb, δc) is an admissible direction. Define the differential linear programs:

    δP: min{(δc)x : x ∈ P*(r)},    δD: max{π(δb) : π ∈ D*(r)}.

Let x* and π* be respective strictly complementary solutions. There exists θ* > 0 such that the following are true for θ ∈ (0, θ*).
(1) The optimal partition for r + θ(δb, 0) is the same as the optimal partition for δD, and z(r + θ(δb, 0)) = z(r) + θπ*_N(δb_N).
(2) The optimal partition for r + θ(0, δc) is the same as the optimal partition for δP, and z(r + θ(0, δc)) = z(r) + θ(δc_B)x*_B.
(3) The optimal partition for r + θ(δb, δc) is determined by σ(x*) from δP and σ(π*) from δD. Further, z(r + θ(δb, δc)) = z(r) + θ(δc_B x*_B + π*_N δb_N) + θ²(δc_B B⁺ δb_N), where B⁺ is any generalized inverse of B.
Proof. (1) The following proof is from Jansen, Roos, and Terlaky [10]. The dual of δD is min{cξ : Bξ_B + Nξ_N ≥ δb_N, ξ_N ≥ 0}. Since δD has an optimal solution, there is a strictly complementary optimum, say, (ξ*, π*). Consider x = x⁰ + θξ*. Since x⁰_B > 0, there exists θ¹ > 0 for which x_B > 0 for θ ∈ [0, θ¹). Further, x_N = θξ*_N ≥ 0, so x ≥ 0, and we have

    [B N]x = Bx⁰_B + θ(Bξ*_B + Nξ*_N) ≥ b_N + θδb_N.

Further, [B′ N′]x = B′x⁰_B + θ(B′ξ*_B + N′ξ*_N). Since B′x⁰_B > b_B′, there exists θ² > 0 such that [B′ N′]x > b_B′ + θδb_B′ for θ ∈ [0, θ²). Let θ* = min{θ¹, θ²} > 0. So far, we have that (x, π*) satisfies the primal-dual conditions ∀ θ ∈ [0, θ*):

    Bx⁰_B + θ(Bξ*_B + Nξ*_N) ≥ b_N + θδb_N,       π*_N B = c_B,
    B′x⁰_B + θ(B′ξ*_B + N′ξ*_N) > b_B′ + θδb_B′,  π*_N N ≤ c_N,
    x⁰_B + θξ*_B > 0, θξ*_N ≥ 0,                  π*_N ≥ 0.

We now prove that (x, π*) is a strictly complementary solution for r + θ(δb, 0), where θ > 0. Suppose B_i x_B + N_i x_N = b_i + θδb_i. Since B_i x⁰_B = b_i, we must have B_i ξ*_B + N_i ξ*_N = δb_i. This implies π*_i > 0, since (ξ*, π*) is strictly complementary for δD and its dual, so σ(π*) is the complement of σ(s). Also, since (ξ*, π*) is strictly complementary, the complement of σ(d) is σ(ξ*_N) ∪ σ(x⁰_B) = σ(x). Thus, we have proven that (x, π*) is a strictly complementary solution for r + θ(δb, 0), with the same optimal partition as δD, for all θ ∈ (0, θ*). Further, z(r + θ(δb, 0)) = cx = cx⁰ + θcξ*. We have cx⁰ = z(r) and cξ* = π*(δb) (from duality), so z(r + θ(δb, 0)) = z(r) + θπ*(δb). Since π*_B′ = 0 ∀ π* ∈ D*(r), we conclude z(r + θ(δb, 0)) = z(r) + θπ*_N(δb_N).

The proof of (2) is similar, by constructing π = π⁰ + θξ*, where ξ* is the vector of variables for the dual of δP: max{ξb : ξ_B′ ≥ 0, ξ_N B + ξ_B′ B′ ≤ δc_B}.

We now prove (3). From (1), there exists θ¹ > 0 such that the optimal partition does not change throughout (r, r + θ¹(δb, 0)), and the set of active rows is σ(π*). (Note from the proof of (1) that the active columns there come from the dual of δD.)
From (2), there exists θ² > 0 such that the optimal partition does not change throughout (r, r + θ²(0, δc)), and the set of active columns is σ(x*). (By analogy, the proof of (2) obtains the active rows from the dual of δP.) Let θ* = min{θ¹, θ²} > 0. Then, the optimal partition does not change throughout (r, r + θ*(δb, δc)), and its active sets are the rows in σ(π*) and the columns in σ(x*). (Note: We cannot use the solutions in (1) and (2) directly because (x, π) need not be complementary, in which case it is not a solution for r + θh. This proof can be viewed as first moving to r + θ(δb, 0), where θ < θ*, with the optimal partition defined by σ(x) and σ(π*), then changing c by θδc to move to r + θh, where the optimal partition is defined by σ(x*) and σ(π*). Equivalently, we can move first to r + θ(0, δc), with optimal partition defined by σ(x*) and σ(π), then move to r + θh to obtain the same result. This argument is similar to the one used by Roos [13] for a different result.)

Finally, to show that z(r + θh) has the asserted quadratic form, we shall use the defining properties of generalized inverses. Let B correspond to the optimal partition throughout (r, r + θ*h). Then,

    x_B(θ) = B⁺(b_N + θδb_N) + (I − B⁺B)v(θ),

where B⁺ is any generalized inverse of B, and v(θ) is any vector in R^|σ(x)|. The defining property of B⁺ is that BB⁺B = B, and a fundamental property is that the equation has a solution if and only if BB⁺(b_N + θδb_N) = b_N + θδb_N. Since this applies to θ = 0, we must have BB⁺b_N = b_N, which then implies we must also have BB⁺δb_N = δb_N. Similarly, the dual equations are π_N(θ)B = c_B + θδc_B, so we must have

    π_N(θ) = (c_B + θδc_B)B⁺ + u(θ)(I − BB⁺),

where u(θ) is any vector in R^|σ(π)|.
Then,

    z(r + θh) = (c_B + θδc_B)[B⁺(b_N + θδb_N) + (I − B⁺B)v(θ)]
              = c_B B⁺ b_N + θ(δc_B B⁺ b_N + c_B B⁺ δb_N) + θ²(δc_B B⁺ δb_N),

where the terms with v(θ) are zero because c_B + θδc_B = (c_B + θδc_B)B⁺B, so

    (c_B + θδc_B)(I − B⁺B)v(θ) = (c_B + θδc_B)B⁺B(I − B⁺B)v(θ)
                               = (c_B + θδc_B)B⁺(B − BB⁺B)v(θ) = 0.

(The last equation follows from B = BB⁺B.)

Example. min x₁ + 3x₂ + 2x₃ : x ≥ 0, x₁ + x₂ ≥ 1, x₂ + x₃ ≥ 1. A strictly complementary optimal solution is x = (½, ½, ½) and π = (1, 2), so the optimal partition has σ(x) = {1, 2, 3} and σ(π) = {1, 2}, which gives the optimality regions:

    P*(r) = {x : x ≥ 0, x₁ + x₂ = 1, x₂ + x₃ = 1} = {(1 − x₂, x₂, 1 − x₂) : 0 ≤ x₂ ≤ 1},
    D*(r) = {π : π ≥ 0, π₁ = 1, π₁ + π₂ = 3, π₂ = 2} = {(1, 2)}.

For δb = (−1, 0) and δc = (−1, 0, 0), the two differential linear programs and their duals are as follows:

    δP: min{−x₁ : x ∈ P*(r)},                              δD: max{−π₁ : π ∈ D*(r)},
    with dual max{ξ₁ + ξ₂ : ξ₁ ≤ −1, ξ₁ + ξ₂ ≤ 0, ξ₂ ≤ 0},  with dual min{ξ₁ + 3ξ₂ + 2ξ₃ : ξ₁ + ξ₂ ≥ −1, ξ₂ + ξ₃ ≥ 0}.

A strictly complementary solution for δP and its dual is x* = (1, 0, 1) and ξ = (−1, 0), so the optimal partition has σ(x*) = {1, 3}. A strictly complementary solution for δD and its dual is π* = (1, 2) and ξ = (−2, 1, −1), so its optimal partition has σ(π*) = {1, 2}. As given in Theorem 3.6 on optimal partition perturbation, σ(x) = σ(x*) from δP and σ(π) = σ(π*) from δD for the optimal partition in (r, r + θh). Let us verify this. The perturbed linear program is the following primal-dual pair:

    min (1 − θ)x₁ + 3x₂ + 2x₃ : x ≥ 0,     max π₁(1 − θ) + π₂ : π ≥ 0,
        x₁ + x₂ ≥ 1 − θ, x₂ + x₃ ≥ 1,          π₁ ≤ 1 − θ, π₁ + π₂ ≤ 3, π₂ ≤ 2.

For θ ∈ (0, 1), a strictly complementary optimal solution is x = (1 − θ, 0, 1) (so s = 0) and π = (1 − θ, 2) (so d = (0, θ, 0)). Indeed, σ(x) = {1, 3} and σ(π) = {1, 2}.

4. Relation to basic compatibility. Now we develop a range theory for the optimal partition analogous to the range of basic compatibility [4]. Here the optimal partition can change initially but must then remain invariant.
Let Υ(h) denote the greatest value of θ for which the optimal partition does not change throughout (r, r + θh) for h ∈ H. Note that the line segment is open, so the optimal partition need not be the same at the endpoints. In particular, the partition might have to change at r (i.e., τ(h) = 0); otherwise, Υ(h) = τ(h). The optimal partition perturbation theorem (Theorem 3.6) tells us that Υ(h) > 0 when h is admissible, in which case z(r + θh) has constant functional form for θ ∈ (0, Υ(h)). When h is decoupled (i.e., δb = 0 or δc = 0), (0, Υ(h)) is a linearity interval of z(r + θh) − z(r).
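The constant functional form on (0, Υ(h)) is the generalized-inverse quadratic of Theorem 3.6(3). A numpy check with a nonsquare active block is below; B, c_B, and b_N are taken from the section 2 example under my sign normalization, and the direction vectors are my own, chosen so that the consistency conditions BB⁺b_N = b_N and c_B B⁺B = c_B hold:

```python
import numpy as np

# Quadratic form via a generalized inverse when the active block B is not
# square: z(theta) = (cB + theta*dcB) B+ (bN + theta*dbN).
B = np.array([[1., 0.]])            # 1 x 2 active block (active row {1})
Bp = np.linalg.pinv(B)              # one choice of generalized inverse
bN, cB = np.array([3.]), np.array([1., 0.])
dbN, dcB = np.array([0.5]), np.array([0.25, 0.])

# Consistency conditions from the proof of Theorem 3.6(3).
assert np.allclose(B @ Bp @ bN, bN) and np.allclose(cB @ Bp @ B, cB)

z0 = cB @ Bp @ bN
lin = dcB @ Bp @ bN + cB @ Bp @ dbN
quad = dcB @ Bp @ dbN
for theta in (0.0, 0.5, 2.0):
    z_theta = (cB + theta * dcB) @ Bp @ (bN + theta * dbN)
    assert np.isclose(z_theta, z0 + theta * lin + theta**2 * quad)
print("generalized-inverse quadratic verified")
```

Any generalized inverse works here because the v(θ) and u(θ) terms vanish under the consistency conditions, exactly as in the proof.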
The following lemma says that this bounds each basic range of compatibility, which establishes the optimal partition range theorem (Theorem 4.2).

Lemma 4.1. Suppose h is an admissible direction for which ℬ is a compatible basis with range ρ = ρ(ℬ; h). Then, the optimal partition does not change throughout (r, r + ρh).

Proof. From the optimal partition perturbation theorem (Theorem 3.6), there exists θ > 0 such that the optimal partition does not change in (r, r + θh). Let θ* be the supremum value of θ for which this is true. If θ* ≥ ρ, we are done, so suppose θ* < ρ. Let (x⁰, π⁰) be any strictly complementary solution for r + ½θ*h, so that σ(x⁰) and σ(π⁰) determine the optimal partition throughout (r, r + θ*h). We shall reach a contradiction by constructing (x′, π′) that is optimal for r + θh, where θ* < θ < ρ, and σ(x′) = σ(x⁰), σ(s′) = σ(s⁰), σ(π′) = σ(π⁰), and σ(d′) = σ(d⁰). Define α = (ρ − θ)/ρ, so 1 − α = θ/ρ and α ∈ (0, 1). We shall form a convex combination of the strictly complementary solution and the basic solution response values, which we shall prove is feasible and has the same support sets as the strictly complementary solution. Suppose the basic solution for r, (x, π), changes by (Δx, Δπ) for r + ρh. Then, define the following convex combination:

    x′ = αx⁰ + (1 − α)(x + Δx) and π′ = απ⁰ + (1 − α)(π + Δπ).

Clearly, (x′, π′) ≥ 0. Further, we have Δx_ℬ = ρℬ⁻¹δb_𝒩 and Δx_𝒩 = 0, so the primal equations are given by

    B x′_B + N x′_N = B[αx⁰_B + (1 − α)(x_B + Δx_B)] + αN x⁰_N
                    = α[B x⁰_B + N x⁰_N] + (1 − α)[B x_B + ρδb_N]
                    ≥ αb_N + (1 − α)b_N + θδb_N = b_N + θδb_N,

    B′ x′_B + N′ x′_N = B′[αx⁰_B + (1 − α)(x_B + Δx_B)] + αN′ x⁰_N
                      = α[B′ x⁰_B + N′ x⁰_N] + (1 − α)B′[x_B + Δx_B]
                      ≥ αb_B′ + (1 − α)(b_B′ + ρδb_B′) = b_B′ + θδb_B′.

Thus, Ax′ ≥ b + θδb, which proves x′ is feasible in the primal.
Similarly, Δπ_N = ρδc_ℬℬ⁻¹ and Δπ_B′ = 0, so the dual equations are given by

    π′_N B + π′_B′ B′ = [απ⁰_N + (1 − α)(π_N + Δπ_N)]B + απ⁰_B′ B′
                      = α[π⁰_N B + π⁰_B′ B′] + (1 − α)[π_N B + ρδc_B]
                      ≤ αc_B + (1 − α)(c_B + ρδc_B) = c_B + θδc_B,

    π′_N N + π′_B′ N′ = [απ⁰_N + (1 − α)(π_N + Δπ_N)]N + απ⁰_B′ N′
                      = α[π⁰_N N + π⁰_B′ N′] + (1 − α)[π_N + Δπ_N]N
                      ≤ αc_N + (1 − α)(c_N + ρδc_N) = c_N + θδc_N.
Thus, π′A ≤ c + θδc, which proves π′ is feasible in the dual. We have proven that (x′, π′) satisfies the primal-dual conditions for r + θh. We now prove that its support sets are the same as those of (x⁰, π⁰). Let βʲ denote the jth row of ℬ⁻¹. For a nonbasic activity (j), x′_j = αx⁰_j, so j ∈ σ(x′) if and only if j ∈ σ(x⁰). For a basic activity (j), x′_j = αx⁰_j + (1 − α)(x_j + ρβʲδb_𝒩). For j ∈ σ(x⁰), we have x′_j > 0 because x_j + ρβʲδb_𝒩 ≥ 0, so σ(x⁰) ⊆ σ(x′). Now suppose j ∈ σ(x′), so 0 < x′_j = αx⁰_j + (1 − α)x_j + θβʲδb_𝒩. We shall prove that x⁰_j = 0 leads to a contradiction. Upon so doing, we will have proven σ(x′) ⊆ σ(x⁰), thus proving σ(x′) = σ(x⁰). The contradiction comes from the meaning of the optimal partition: every optimal solution, say, x(λ), for r + λh (λ ∈ (0, θ*)) must have x_j(λ) = 0 ∀ j ∉ σ(x⁰). One such optimal solution is the basic one: x_j + λβʲδb_𝒩 = 0. Since this must hold for all λ ∈ (0, θ*), we must have x_j = 0 and βʲδb_𝒩 = 0, so we reach the contradiction: x′_j = 0. Hence, σ(x′) = σ(x⁰). The remaining support set equalities follow in a similar manner.

The opposite inequality does not hold. The optimal partition can be invariant on (r, r + θ*h), but the optimal bases at r may have a range far less than θ*. For example, consider the following linear program:¹

    min x₂ : (x₁, x₂) ≥ 0, 1 ≤ x₁ + x₂ ≤ 4, −1 ≤ x₁ − x₂ ≤ 2, θ ≤ x₂ ≤ 2.

For θ ∈ [0, 2], the strictly complementary solution is (3/2, θ), and the optimal bases correspond to two extreme points, starting with (1, 0) and (2, 0) at θ = 0. No matter which compatible basis is used, ρ*(h) = 1, stopped by the turning point at x₂ = 1 when θ = 1. Thus, the optimal partition does not change throughout [r, r + 2h], but there is no basis that is optimal at r and at r + θh for θ > 1.

Theorem 4.2 (optimal partition range). Υ(h) ≥ ρ*(h).

Proof. This is immediate from Lemma 4.1.
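The gap asserted by the example above can be checked pointwise: the sketch below verifies that (3/2, θ) stays feasible with the same single tight constraint across θ ∈ (0, 2), so the partition is invariant over the whole range even though no single optimal basis covers it (the grid points are my own):

```python
# Check of the counterexample LP:
#   min x2 : x >= 0, 1 <= x1+x2 <= 4, -1 <= x1-x2 <= 2, theta <= x2 <= 2.
# For theta in (0, 2) the point (3/2, theta) is feasible and only the
# constraint x2 >= theta is tight.
def slacks(x1, x2, theta):
    return [x1 + x2 - 1, 4 - (x1 + x2),       # 1 <= x1+x2 <= 4
            x1 - x2 + 1, 2 - (x1 - x2),       # -1 <= x1-x2 <= 2
            x2 - theta, 2 - x2]               # theta <= x2 <= 2

for theta in (0.25, 1.0, 1.75):
    s = slacks(1.5, theta, theta)
    assert min(s) >= 0.0                      # feasible
    tight = [i for i, si in enumerate(s) if si == 0.0]
    assert tight == [4]                       # only x2 >= theta is tight
print("partition invariant on (0, 2)")
```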
Theorem 4.2 says that the range of the perturbation for which the (possibly new) partition remains the same is at least as great as the maximum of the ranges of basic compatibility, taken over all optimal bases. Thus, the associated interval for which z(r + θh) − z(r) has constant functional form in θ is determined by when the optimal partition changes, which could be strictly greater than the basic spectrum. This generalizes the linear case (where h is decoupled). Using the previous example (min x₁ + 3x₂ + 2x₃ from section 3), there are three optimal bases, as follows, with compatibility conditions following the semicolons:

    ℬ¹ = [A₁ A₂]: x¹ = (0, 1, 0), π¹ = (1, 2); δb₁ − δb₂ ≥ 0, δc₁ − δc₂ + δc₃ ≥ 0.
    ℬ² = [A₁ A₃]: x² = (1, 0, 1), π² = (1, 2); δc₁ − δc₂ + δc₃ ≤ 0.
    ℬ³ = [A₂ A₃]: x³ = (0, 1, 0), π³ = (1, 2); −δb₁ + δb₂ ≥ 0, δc₁ − δc₂ + δc₃ ≥ 0.

For δb = (−1, 0) and δc = (−1, 0, 0), only ℬ² is compatible, and its range of compatibility is ρ(ℬ²; h) = ρ*(h) = 1. Thus, the basic compatibility theorem of [4] tells us that z(r + θh) − z(r) has constant functional form if we decrease b₁ and c₁, both at

¹The author thanks Tamás Terlaky for pointing this out and Kees Roos for the example.
unit rate. In particular, we have the following quadratic function for θ ∈ [0, 1]:

z(r + θh) - z(r) = (c_B + θδc_B)[B²]^{-1}(b_N + θδb_N) - c_B[B²]^{-1}b_N = [1 - θ, 2][1 - θ, 1]^T - 3 = -2θ + θ².

The interior point approach gave us the same result, but in a different manner. From one of the main results of the basic compatibility theory [4], we have the following.

Corollary 4.3. h is admissible if and only if Υ(h) > 0.
Proof. By definition, Υ(h) > 0 implies h ∈ H, and the converse follows from Theorem 4.2 and H = {h : ρ^*(h) > 0} [4].

This corollary says that the set of admissible directions equals the set of directions for which the optimal partition is invariant on the associated open line segment. We now consider another example [4] to help understand economic interpretations, introduce the optimal partition transition graph, and illustrate a form of activity analysis built on how the optimal partition changes rather than on how optimal bases change. There are three fuels from which to generate electricity: coal, oil, and uranium. Define six activities, as follows:

PCL: purchase coal, GCL: generate electricity from coal,
POL: purchase oil, GOL: generate electricity from oil,
PUR: purchase uranium, GUR: generate electricity from uranium.

Figure 1 shows the linear program.

Fig. 1. Electricity generation example (columns: purchase activities PCL, POL, PUR and generation activities GCL, GOL, GUR; rows: COST = min, fuel balance rows BCL, BOL, BUR, nuclear limit row LNU with coefficient 1 on GUR and right-hand side 10, and electricity demand row DEL).

The objective is to minimize cost, shown as the first row, while meeting the required electricity demand, shown as the last row. Rows BCL, BOL, and BUR balance the associated fuels: what is purchased must be at least as great as what is used for generation. Row LNU limits the generation from uranium: GUR ≤ 10.
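The optimal dispatch for this linear program can be sketched by loading the cheapest source first. A minimal sketch, assuming demand is 10 units of electricity, electricity yields per unit of fuel are .4 (uranium), .3 (oil), and .33 (coal), and uranium is cheapest per unit of electricity while coal is most expensive (the names and the greedy merit-order form are ours):

```python
# Merit-order dispatch sketch for the electricity example.
# Assumptions: demand of 10 units of electricity; yields .4 (GUR), .3 (GOL),
# .33 (GCL) units of electricity per unit of fuel; GUR limited to 10 units of
# fuel (row LNU); per-unit-of-electricity cost ranks uranium < oil < coal.

YIELD = {"GUR": 0.4, "GOL": 0.3, "GCL": 0.33}
DEMAND = 10.0
GUR_LIMIT = 10.0  # units of uranium fuel (row LNU)

def dispatch():
    """Load the cheapest source first; return fuel levels for GUR, GOL, GCL."""
    gur = GUR_LIMIT                        # cheapest source runs to its limit
    elec = gur * YIELD["GUR"]              # electricity generated from uranium
    gol = (DEMAND - elec) / YIELD["GOL"]   # oil covers the remaining demand
    return {"GUR": gur, "GOL": gol, "GCL": 0.0}

x = dispatch()
print(x)  # uranium supplies 4 units of electricity, oil the other 6, coal none
```

Under these assumed data, uranium generation hits its limit, oil fills the remainder, and coal is idle, matching the optimal solution described next.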
Generation from uranium is the least costly (per unit of electricity generated, including the cost of uranium), so its level is as high as possible, limited to 10 units, which generates 4 units of electricity. The other 6 units are generated from oil, and none is generated from coal. Thus, the levels of PCL and GCL are zero in every optimal solution; however, PCL is in one optimal basis (compatible with increasing the right-hand side) and GCL is in another (compatible with decreasing the right-hand side). Figure 2 shows the active submatrix, where the optimal partition has σ(x) =
{POL, PUR, GOL, GUR} (activities to generate electricity from oil and uranium) and σ(π) equal to all rows (B and N are null).

Fig. 2. Optimal partition for the electricity generation example (active columns σ(x): POL, PUR, GOL, GUR; columns PCL and GCL appear under σ(d); all rows appear under σ(π); the figure displays the corresponding submatrices B and N, including the DEL-row coefficients .33 on GCL, .3 on GOL, and .4 on GUR).

For b_BCL > 0, we increase the right-hand side of the coal balance row, which corresponds to a stockpile requirement. The theory of basic compatibility says that the coal purchase activity (PCL) needs to be in the basis to provide the appropriate response: buy coal. For b_BCL < 0, we are providing free coal, making the cost of electricity generation consist of only the operation and maintenance cost. This is $.80 per unit of coal, which is $2.42 per unit of electricity ($.80/.33). Thus, the generation activity (GCL) needs to be in the basis to provide the appropriate response: displace oil-fired generation with coal-fired generation. The displacement continues until all oil-fired generation is displaced, which occurs at b_BCL = -18.18. A view of these is with the basis transition graph, shown in Figure 3, which is part of the theory of basic compatibility, which we now extend.

Let δb = -e_1 (i.e., decrease the right-hand side of row BCL). An interior point approach first considers the differential linear program:

max{-π_1 : π ∈ D^*(r)} = -min{π_1 : π = (p, 15, 20, .4, 52), p ≤ 18}.

This gives us the new optimal partition for r - θe_1 with θ sufficiently small. (Our goal is to obtain the greatest value of θ, which defines Υ(-e_1).) The new optimal partition adds activity GCL to the set of active columns, so the equations Bx_B = b - θe_1, with x_B comprising x_POL, x_PUR, x_GOL, x_GUR, x_GCL, must hold as θ is increased. This gives the following primal conditions that limit θ:

Υ(-e_1) = max{θ : x ≥ 0, x_GCL = θ, x_POL - x_GOL = 0, x_PUR - x_GUR = 0, x_GUR = 10, .33 x_GCL + .3 x_GOL + .4 x_GUR = 10}.
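The primal conditions above determine every level as a function of θ; a minimal sketch of the elimination (plain Python, names ours):

```python
# From the system above: x_GUR = 10, x_PUR = x_GUR, x_GCL = theta, x_POL = x_GOL,
# and the demand row .33*x_GCL + .3*x_GOL + .4*x_GUR = 10 leaves
#   x_GOL = (10 - .4*10 - .33*theta) / .3 = (6 - .33*theta) / .3,
# so x >= 0 limits theta to .33*theta <= 6.

def levels(theta):
    """Basic levels as a function of theta (feasible while x_GOL >= 0)."""
    x_gur = 10.0
    x_gcl = theta
    x_gol = (10.0 - 0.4 * x_gur - 0.33 * x_gcl) / 0.3
    return {"POL": x_gol, "PUR": x_gur, "GOL": x_gol, "GUR": x_gur, "GCL": x_gcl}

upsilon = 6.0 / 0.33               # largest theta keeping x_GOL >= 0
print(round(upsilon, 2))           # 18.18
assert abs(levels(upsilon)["GOL"]) < 1e-9  # oil-fired generation fully displaced
```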
This reduces to Υ(-e_1) = max{θ : .33θ ≤ 6} = 18.18. While this equals the range we obtained from the basis-driven approach, the reasoning is different. At b_BCL = -18.18, the optimal partition changes again to deactivate oil-fired generation, i.e., exclude activities GOL and POL from the set of active columns. (POL must remain
basic in the theory of basic compatibility, even though its level is zero in every optimal solution, in order to have the correct price of oil, π_BOL.) Analogous to basic compatibility, the optimal partition changes due to an event that makes something change status: from inactive to active, or vice versa. Whereas Figure 3 shows the basis transition graph that was introduced in [4] for varying the amount of coal, Figure 4 introduces a partition transition graph.

Fig. 3. Basis transition graph (optimal bases along b_1, with transition events in one direction: prepare for coal surplus, prepare to displace uranium, prepare to displace oil; and in the other: prepare to generate from oil, prepare to generate from coal, prepare to purchase uranium).

Fig. 4. Optimal partition transition graph (moving left along b_1: deactivate PUR, GUR to stop nuclear generation (displaced); deactivate POL, GOL to stop oil-fired generation (displaced); activate GCL to begin coal-fired generation (displace oil); deactivate PCL to stop coal purchases (not required); moving right: activate PUR, GUR to begin nuclear generation (displace coal); activate POL, GOL to begin oil-fired generation (displace coal); deactivate GCL to stop coal-fired generation (displaced); activate PCL to begin coal purchases (required); the graph also tracks σ(x) on each interval and σ(s) for row BCL).

Notice that in the basis transition graph, events occur at the threshold, choosing the event that is compatible with the particular variation (left or right transition). By contrast, in the optimal partition transition graph, events occur just on one side of each threshold. At b_1 = 0, it is after θ > 0 that coal purchases begin (i.e., activity PCL is activated by entering σ(x)). Similarly, it is after b_1 < 0 that coal-fired generation begins (i.e., activity GCL is activated). As we continue to move to the left, the optimal partition remains invariant on the open interval (-18.18, 0).
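The events in the two transition graphs can be traced by computing the active generation set as θ units of free coal are supplied (b_1 = -θ). A minimal sketch, under the same assumed data as before (demand of 10 units of electricity, yields .33/.3/.4, and oil displaced before uranium):

```python
# Active columns sigma(x) as coal's right-hand side moves left: b_1 = -theta.
# Free coal displaces oil first (most expensive), then uranium; the thresholds
# are theta = 6/.33 (oil fully displaced) and theta = 10/.33 (coal meets all demand).

def sigma_x(theta):
    """Support of the generation/purchase levels at r - theta*e_1 (theta > 0)."""
    coal_elec = min(0.33 * theta, 10.0)                 # coal-fired electricity
    oil_elec = max(0.0, 6.0 - coal_elec)                # oil is displaced first
    uran_elec = max(0.0, min(4.0, 10.0 - coal_elec - oil_elec))
    active = {"GCL"}                                    # coal-fired generation active
    if oil_elec > 0:
        active |= {"POL", "GOL"}
    if uran_elec > 0:
        active |= {"PUR", "GUR"}
    return active

print(sigma_x(10.0))  # theta in (0, 18.18): oil and uranium still generating
print(sigma_x(20.0))  # theta in [18.18, 30.3): sigma(x) = {PUR, GCL, GUR}
```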
At the threshold, all of the oil is displaced by coal, so the optimal partition changes at r - 18.18e_1. It is just before this change that the event occurs: deactivate POL and GOL. Then, the optimal partition is invariant on the half-open interval: θ ∈ [18.18, 30.3) implies σ(x) = {PUR, GCL, GUR} for r - θe_1. This view of events, in which activities are activated or deactivated just before or after the threshold where the optimal partition changes, complements the basic view that describes which basis is a compatible one in terms of events that prepare for the
movement away from the threshold. Of course, phrases like "just before" and "just after" are not mathematical, but the idea is to gain insight from the solution, and this distinction in the two kinds of transition graphs does provide an added vantage point, based on the underlying events.

5. Summary. Here is a summary of the main points:
- The new optimal partition is obtained by solving two differential linear programs, one over the primal optimality region, the other over the dual. The new set of active columns equals that of the primal differential linear program; the new set of active rows equals that of the dual differential linear program.
- The interval for which the objective value has constant functional form, obtained from the range of the (possibly new) optimal partition, contains the interval obtained from the range of compatible bases. Further, this containment can be strict.
- The optimal partition transition graph, which shows threshold events when the optimal partition changes, provides another visualization of the underlying economics.

Acknowledgments. The author gratefully acknowledges helpful comments from Allen Holder, Kees Roos, and Tamás Terlaky. Also, this article benefited from the reviews of Karla Hoffman and an anonymous referee.

REFERENCES

[1] I. Adler and R. Monteiro, A geometric view of parametric linear programming, Algorithmica, 8 (1992).
[2] T. Gal, Postoptimal Analyses, Parametric Programming, and Related Topics, 2nd ed., Walter de Gruyter, Berlin, Germany.
[3] A. Goldman and A. Tucker, Theory of linear programming, in Linear Inequalities and Related Systems, H. Kuhn and A. Tucker, eds., Ann. of Math. Stud. 38, Princeton University Press, Princeton, NJ, 1956.
[4] H. Greenberg, An analysis of degeneracy, Naval Res. Logist., 33 (1986).
[5] H. Greenberg, The use of the optimal partition in a linear programming solution for postoptimal analysis, Oper. Res. Lett., 15 (1994).
[6] H.
Greenberg, Mathematical Programming Glossary, hgreenbe/glossary/glossary.html.
[7] H. Greenberg, Myths and Counterexamples in Mathematical Programming: LP-2, hgreenbe/myths/myths.html.
[8] H. Greenberg, Linear programming 1: Basic principles, in Recent Advances in Sensitivity Analysis and Parametric Programming, T. Gal and H. Greenberg, eds., Kluwer Academic Publishers, Boston, MA.
[9] H. J. Greenberg, A. G. Holder, K. Roos, and T. Terlaky, On the dimension of the set of rim perturbations for optimal partition invariance, SIAM J. Optim., 9 (1999), pp. 207-216.
[10] B. Jansen, C. Roos, and T. Terlaky, An Interior Point Approach to Postoptimal and Parametric Analysis in Linear Programming, Report 92-21, Faculty of Technical Mathematics and Informatics/Computer Science, Delft University of Technology, Delft, The Netherlands.
[11] H. Mills, Marginal values of matrix games and linear programs, in Linear Inequalities and Related Systems, H. Kuhn and A. Tucker, eds., Ann. of Math. Stud. 38, Princeton University Press, Princeton, NJ, 1956.
[12] R. D. C. Monteiro and S. Mehrotra, A general parametric analysis approach and its implication to sensitivity analysis in interior point methods, Math. Programming, 47 (1996).
[13] C. Roos, Interior point approach to linear programming: Theory, algorithms & parametric analysis, in Topics in Engineering Mathematics, A. van der Burgh and J. Simonis, eds., Kluwer Academic Publishers, Norwell, MA, 1992.
[14] C. Roos, T. Terlaky, and J.-P. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley, New York.
[15] A. Williams, Marginal values in linear programming, J. Soc. Indust. Appl. Math., 11 (1963).
[16] S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, PA, 1997.
Chapter 33 MSMYM1 Mathematical Linear Programming 33.1 The Simplex Algorithm The Simplex method for solving linear programming problems has already been covered in Chapter??. A given problem may always
More informationReview Solutions, Exam 2, Operations Research
Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To
More informationA PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More information6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games
6.254 : Game Theory with Engineering Applications Lecture 7: Asu Ozdaglar MIT February 25, 2010 1 Introduction Outline Uniqueness of a Pure Nash Equilibrium for Continuous Games Reading: Rosen J.B., Existence
More informationA Brief Review on Convex Optimization
A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationLinear Programming. Operations Research. Anthony Papavasiliou 1 / 21
1 / 21 Linear Programming Operations Research Anthony Papavasiliou Contents 2 / 21 1 Primal Linear Program 2 Dual Linear Program Table of Contents 3 / 21 1 Primal Linear Program 2 Dual Linear Program Linear
More informationLecture 10: Linear programming duality and sensitivity 0-0
Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to
More informationMAT016: Optimization
MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationAgenda. 1 Duality for LP. 2 Theorem of alternatives. 3 Conic Duality. 4 Dual cones. 5 Geometric view of cone programs. 6 Conic duality theorem
Agenda 1 Duality for LP 2 Theorem of alternatives 3 Conic Duality 4 Dual cones 5 Geometric view of cone programs 6 Conic duality theorem 7 Examples Lower bounds on LPs By eliminating variables (if needed)
More informationSimplex method(s) for solving LPs in standard form
Simplex method: outline I The Simplex Method is a family of algorithms for solving LPs in standard form (and their duals) I Goal: identify an optimal basis, as in Definition 3.3 I Versions we will consider:
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationInteger Programming, Part 1
Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas
More informationEE364a Review Session 5
EE364a Review Session 5 EE364a Review announcements: homeworks 1 and 2 graded homework 4 solutions (check solution to additional problem 1) scpd phone-in office hours: tuesdays 6-7pm (650-723-1156) 1 Complementary
More informationA semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint
Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite
More informationLINEAR PROGRAMMING II
LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality
More information1 date: February 23, 1998 le: papar1. coecient pivoting rule. a particular form of the simplex algorithm.
1 date: February 23, 1998 le: papar1 KLEE - MINTY EAMPLES FOR (LP) Abstract : The problem of determining the worst case behavior of the simplex algorithm remained an outstanding open problem for more than
More information