Chapter 1: Linear Programming
Math 368, © Copyright 2013 R. Clark Robinson, May 22, 2013
Max and Min
For f : D ⊆ R^n → R, f(D) = {f(x) : x ∈ D} is the set of attainable values of f on D, or the image of D under f.
f has a maximum on D at xM ∈ D provided that f(xM) ≥ f(x) for all x ∈ D.
max{f(x) : x ∈ D} = f(xM) is the maximum value of f on D; xM is called a maximizer of f on D.
argmax{f(x) : x ∈ D} = {x ∈ D : f(x) = f(xM)}.
f has a minimum on D at xm ∈ D provided that f(xm) ≤ f(x) for all x ∈ D.
min{f(x) : x ∈ D} = f(xm), the minimum value of f on D.
argmin{f(x) : x ∈ D} = {x ∈ D : f(x) = f(xm)}, the set of minimizers.
f has an extremum at x0 provided that x0 is either a maximizer or a minimizer.
Basic Optimization Problem
There is no maximizer or minimizer of f(x) = x^3 on (0, 1):
argmax{x^3 : 0 < x < 1} = ∅ and argmin{x^3 : 0 < x < 1} = ∅.
Optimization Problem: Does f(x) attain a maximum (or minimum) for some x ∈ D? If so, what is the maximum value (or minimum value) on D, and what are the points at which f(x) attains a maximum (or minimum) subject to x ∈ D?
Notations
v ≥ w in R^n means vi ≥ wi for 1 ≤ i ≤ n.
v ≫ w in R^n means vi > wi for 1 ≤ i ≤ n.
R^n_+ = {x ∈ R^n : x ≥ 0} = {x ∈ R^n : xi ≥ 0 for 1 ≤ i ≤ n}
R^n_++ = {x ∈ R^n : x ≫ 0} = {x ∈ R^n : xi > 0 for 1 ≤ i ≤ n}
Linear Programming: 1.4.1 Wheat-Corn Example
Up to 100 acres of land can be used to grow wheat and/or corn: with x1 acres used to grow wheat and x2 acres used to grow corn, x1 + x2 ≤ 100.
Cost or capital constraint: 5x1 + 10x2 ≤ 800.
Labor constraint: 2x1 + x2 ≤ 150.
Profit (objective function): f(x1, x2) = 80x1 + 60x2.
Problem:
Maximize: 80x1 + 60x2 (profit),
Subject to: x1 + x2 ≤ 100 (land),
5x1 + 10x2 ≤ 800 (capital),
2x1 + x2 ≤ 150 (labor),
x1 ≥ 0, x2 ≥ 0.
All constraints and the objective function are linear: a linear programming problem.
Example, continued
The feasible set F is the set of all points satisfying all the constraints:
x1 + x2 ≤ 100 (land), 5x1 + 10x2 ≤ 800 (capital), 2x1 + x2 ≤ 150 (labor), x1 ≥ 0, x2 ≥ 0.
[Figure: the feasible region in the (x1, x2)-plane with vertices labeled, including (0, 0), (40, 60), and (50, 50).]
Vertices of the feasible set: (0, 0), (75, 0), (50, 50), (40, 60), (0, 80).
Other points where two constraints are tight, (140/3, 170/3), (100, 0), etc., lie outside the feasible set: 140/3 + 170/3 = 310/3 > 100 and 2(100) > 150, ...
Example, continued
Since ∇f = (80, 60) ≠ (0, 0), the maximum must be on the boundary of F.
f(x1, x2) along an edge is a linear combination of its values at the end points. If a maximizer were in the middle of an edge, then f would have the same value at the two end points of this edge. So a maximizer can be found at one of the vertices.
Values at the vertices: f(0, 0) = 0, f(75, 0) = 6000, f(50, 50) = 7000, f(40, 60) = 6800, f(0, 80) = 4800.
The maximum value of f is 7000, with maximizer (50, 50). End of Example
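The vertex check on this slide can be written as a short computation. A minimal sketch in Python (the vertex list and profit function are copied from the example; the names are illustrative):

```python
# Evaluate f(x1, x2) = 80*x1 + 60*x2 at each vertex of the wheat-corn
# feasible set and pick the maximizer.

def profit(x1, x2):
    return 80 * x1 + 60 * x2

# Vertices of the feasible set, as listed on the previous slide.
vertices = [(0, 0), (75, 0), (50, 50), (40, 60), (0, 80)]

best = max(vertices, key=lambda v: profit(*v))
print(best, profit(*best))   # (50, 50) 7000
```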
Standard Max Linear Programming Problem (MLP)
Maximize objective function: f(x) = c · x = c1 x1 + ... + cn xn,
Subject to resource constraints:
a11 x1 + ... + a1n xn ≤ b1
...
am1 x1 + ... + amn xn ≤ bm
xj ≥ 0 for 1 ≤ j ≤ n.
Given data: c = (c1, ..., cn), the m × n matrix A = (aij), and b = (b1, ..., bm) with all bi ≥ 0.
The constraints in matrix notation are Ax ≤ b and x ≥ 0.
Feasible set: F = {x ∈ R^n_+ : Ax ≤ b}.
Minimization Example
Minimize: 3x1 + 2x2,
Subject to: 2x1 + x2 ≥ 4, x1 + x2 ≥ 3, x1 + 2x2 ≥ 4, x1 ≥ 0, and x2 ≥ 0.
[Figure: the unbounded feasible region with vertices (0, 4), (1, 2), (2, 1), (4, 0).]
F is unbounded, but f(x) ≥ 0 is bounded below, so f(x) has a minimum.
Vertices: (4, 0), (2, 1), (1, 2), (0, 4).
Values: f(4, 0) = 12, f(2, 1) = 8, f(1, 2) = 7, f(0, 4) = 8.
min{f(x) : x ∈ F} = 7 and argmin{f(x) : x ∈ F} = {(1, 2)}.
Geometric Method of Solving a Linear Programming Problem
1. Determine or draw the feasible set F. If F = ∅, then the problem has no optimal solution; the problem is called infeasible.
2. The problem is called unbounded, and has no solution, provided that the objective function on F has
a. arbitrarily large positive values for a maximization problem, or
b. arbitrarily large negative values for a minimization problem.
3. A problem is called bounded provided that it is neither infeasible nor unbounded; an optimal solution exists. Determine all the vertices of F and the values at the vertices. Choose the vertex of F producing the maximum or minimum value of the objective function.
Rank of a Matrix
The rank of a matrix A is the dimension of the column space of A, i.e., the largest number of linearly independent columns of A. It is the same as the number of pivots (in the row reduced echelon form of A).
rank(A) = k iff k ≥ 0 is the largest integer such that det(Ak) ≠ 0, where Ak is any k × k submatrix of A formed by selecting any k columns and any k rows; Ak is the submatrix of pivot columns and rows.
Sketch: Let A′ be the submatrix with k linearly independent columns that span the column space; rank(A′) = k, so dim(row space of A′) = k, so select k rows of A′ to get a k × k submatrix Ak with rank(Ak) = k, so det(Ak) ≠ 0.
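The "rank = number of pivots" characterization can be computed directly. A sketch in Python using exact rational arithmetic (the function name and matrix are illustrative, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of lists) via Gaussian elimination with exact
    rational arithmetic, counting the pivots found."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    pivots, col = 0, 0
    for r in range(n_rows):
        # Find a pivot at or below row r, scanning columns left to right.
        while col < n_cols:
            pr = next((i for i in range(r, n_rows) if m[i][col] != 0), None)
            if pr is None:
                col += 1
                continue
            m[r], m[pr] = m[pr], m[r]
            # Eliminate below the pivot.
            for i in range(r + 1, n_rows):
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
            pivots += 1
            col += 1
            break
    return pivots

# The wheat-corn constraint matrix [A, I] has rank 3, as claimed later:
A_I = [[1, 1, 1, 0, 0],
       [5, 10, 0, 1, 0],
       [2, 1, 0, 0, 1]]
print(rank(A_I))  # 3
```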
Slack Variables
For a large number of variables, we need a practical algorithm. The simplex method uses row reduction as the solution method.
First step: make all the inequalities of the type xi ≥ 0.
An inequality of the form ai1 x1 + ... + ain xn ≤ bi for bi ≥ 0 is called a resource constraint.
For a resource constraint, introduce a slack variable si:
ai1 x1 + ... + ain xn + si = bi with si ≥ 0.
The slack variable si represents unused resource. The introduction of a slack variable changes a resource constraint into an equality constraint together with si ≥ 0.
Introducing Slack Variables into the Wheat-Corn Example
x1 + x2 + s1 = 100,
5x1 + 10x2 + s2 = 800,
2x1 + x2 + s3 = 150, with s1, s2, s3 ≥ 0.
In matrix form,
Maximize: (80, 60, 0, 0, 0) · (x1, x2, s1, s2, s3) = 80x1 + 60x2,
Subject to:
[ 1  1  1  0  0 ]                    [ 100 ]
[ 5 10  0  1  0 ] (x1,x2,s1,s2,s3) = [ 800 ]   and x1, x2 ≥ 0, s1, s2, s3 ≥ 0.
[ 2  1  0  0  1 ]                    [ 150 ]
Because of the 1's and 0's in the last 3 columns of the matrix, the rank is 3.
Initial feasible solution: x1 = 0 = x2, s1 = 100, s2 = 800, s3 = 150.
Wheat-Corn Example, continued
[ 1  1  1  0  0 ]                    [ 100 ]
[ 5 10  0  1  0 ] (x1,x2,s1,s2,s3) = [ 800 ]   with xi ≥ 0 and si ≥ 0.
[ 2  1  0  0  1 ]                    [ 150 ]
Initial feasible solution: p0 = (0, 0, 100, 800, 150) with f(p0) = 0.
The rank is 3, so there are 5 − 3 = 2 free variables: x1, x2. p0 is obtained by setting the free variables x1 = x2 = 0 and solving for the dependent variables, which are the 3 slack variables.
Pivot to make a different pair of free variables equal to zero and a different triple of positive variables.
Wheat-Corn Example, continued
[Figure: the feasible set with edges labeled by the tight slack variables s1, s2, s3 and the move from p0 to p1.]
If we leave the vertex (x1, x2) = (0, 0), i.e. p0 = (0, 0, 100, 800, 150), making x1 > 0 the entering variable while keeping x2 = 0, the first slack variable to become zero is s3, when x1 = 75.
We arrive at the vertex (x1, x2) = (75, 0), i.e. p1 = (75, 0, 25, 425, 0). The new solution has two zero variables and three positive variables. We move along one edge from p0 to p1.
f(p1) = 80(75) = 6000 > 0 = f(p0), so p1 is a better feasible solution than p0.
Wheat-Corn Example, continued
[Figure: the feasible set with the path p0 → p1 → p2 and the vertex p3 labeled.]
Repeat, leaving p1 = (75, 0, 25, 425, 0) by making x2 > 0 the entering variable while keeping s3 = 0. The first other variable to become zero is s1. We arrive at p2 = (50, 50, 0, 50, 0).
f(p2) = 80(50) + 60(50) = 7000 > 6000 = f(p1), so p2 is a better feasible solution than p1.
We have moved along another edge of the feasible set from (x1, x2) = (75, 0) and arrived at (x1, x2) = (50, 50).
Wheat-Corn Example, continued
If we leave p2 by making s3 > 0 the entering variable while keeping s1 = 0, the first variable to become zero is s2, and we arrive at p3 = (40, 60, 0, 0, 10).
f(p3) = 80(40) + 60(60) = 6800 < 7000 = f(p2), so p3 is a worse feasible solution than p2.
Let z ∈ F ∖ {p2}. Set v = z − p2 and vj = pj − p2 for j = 1, 3.
f(v1) = f(p1) − f(p2) < 0 and f(v3) = f(p3) − f(p2) < 0.
v1 and v3 are a basis of R^2, so v = y1 v1 + y3 v3. Since v ≠ 0 points into F, (i) y1, y3 ≥ 0 and (ii) y1 > 0 or y3 > 0.
f(z) = f(p2) + f(v) = f(p2) + y1 f(v1) + y3 f(v3) < f(p2).
Since we cannot increase f by moving along either edge going out from p2, p2 is an optimal feasible solution.
Wheat-Corn Example, continued
In this example,
p0 = (0, 0, 100, 800, 150), p1 = (75, 0, 25, 425, 0), p2 = (50, 50, 0, 50, 0), p3 = (40, 60, 0, 0, 10), p4 = (0, 80, 20, 0, 70)
are called basic solutions since at most 3 variables are positive, where 3 is the rank (and the number of constraints). End of Example
Slack Variables Added to a Standard Linear Programming Problem, MLP
All resource constraints. Given b = (b1, ..., bm) ∈ R^m_+, c = (c1, ..., cn) ∈ R^n, and an m × n matrix A = (aij).
Find x ∈ R^n_+ and s ∈ R^m_+ that
maximize: f(x) = c · x
subject to:
a11 x1 + ... + a1n xn + s1 = b1
...
am1 x1 + ... + amn xn + sm = bm
xi ≥ 0, sj ≥ 0 for 1 ≤ i ≤ n, 1 ≤ j ≤ m.
Using matrix notation with I the m × m identity matrix,
maximize f(x) subject to [A, I] (x; s) = b with x ≥ 0, s ≥ 0.
Indicate the partition of the augmented matrix by extra vertical lines: [A | I | b].
Basic Solutions
Assume Ā is an m × (n + m) matrix with rank m (like Ā = [A, I]).
The m dependent variables are called basic variables (the variables of the pivot columns). The n free variables are called non-basic variables (the variables of the non-pivot columns).
A basic solution is a solution p satisfying Āp = b such that the columns corresponding to pi ≠ 0 are linearly independent.
If p is also feasible with p ≥ 0, then it is called a basic feasible solution. Obtain one by setting the n free variables = 0 and solving to get the basic variables ≥ 0, allowing possibly some basic variables = 0.
Linear Algebra Solution of the Wheat-Corn Problem
The augmented matrix for the original wheat-corn problem is
 x1  x2  s1  s2  s3
[ 1   1   1   0   0 | 100 ]
[ 5  10   0   1   0 | 800 ]
[ 2   1   0   0   1 | 150 ]
with free variables x1 and x2 and basic variables s1, s2, and s3.
If we make x1 > 0 while keeping x2 = 0, x1 becomes a new basic variable (for a new pivot column), called the entering variable.
(i) s1 will become zero when x1 = 100/1 = 100,
(ii) s2 will become zero when x1 = 800/5 = 160, and
(iii) s3 will become zero when x1 = 150/2 = 75.
Since s3 becomes zero for the smallest value of x1, s3 is the departing variable, and the new pivot is in the 1st column (for x1) and 3rd row (the old pivot row for s3).
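The minimum-ratio test above is mechanical. A minimal sketch in Python (the column, right-hand sides, and basic-variable names are copied from this slide):

```python
# Minimum-ratio test for entering variable x1 (column 0) in the wheat-corn system.
col = [1, 5, 2]          # coefficients of x1 in the three constraint rows
b = [100, 800, 150]      # right-hand sides
basic = ["s1", "s2", "s3"]

# Only rows with a positive entry in the entering column limit the increase of x1.
ratios = [(bi / ai, name) for ai, bi, name in zip(col, b, basic) if ai > 0]
limit, departing = min(ratios)
print(departing, limit)   # s3 75.0
```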
First Pivot
Row reducing to make a pivot in the first column, third row:
 x1  x2  s1  s2  s3                 x1   x2  s1  s2   s3
[ 1   1   1   0   0 | 100 ]        [ 0  0.5   1   0  -0.5 |  25 ]
[ 5  10   0   1   0 | 800 ]   →    [ 0  7.5   0   1  -2.5 | 425 ]
[ 2   1   0   0   1 | 150 ]        [ 1  0.5   0   0   0.5 |  75 ]
Setting the free variables x2 = s3 = 0 gives the new basic solution p1 = (75, 0, 25, 425, 0). The entries in the right (augmented) column give the values of the new basic variables, which are > 0.
Including the Objective Function in the Matrix
The objective function (or variable) is f = 80x1 + 60x2, or −80x1 − 60x2 + f = 0. Adding a row for this equation and a column for the variable f keeps track of the value of f during row reduction.
  x1   x2  s1  s2  s3  f
[   1    1   1   0   0  0 | 100 ]
[   5   10   0   1   0  0 | 800 ]
[   2    1   0   0   1  0 | 150 ]
[ -80  -60   0   0   0  1 |   0 ]
This matrix including the objective function row is called the tableau. The entries in the column for f are 1 in the objective function row and 0 elsewhere. In the objective function row of this tableau, the entries for the xi are negative.
First Pivot, continued
Row reducing the tableau by making the first column and third row a new pivot:
  x1   x2  s1  s2  s3  f                x1   x2  s1  s2   s3  f
[   1    1   1   0   0  0 | 100 ]      [ 0  0.5   1   0  -0.5  0 |   25 ]
[   5   10   0   1   0  0 | 800 ]  →   [ 0  7.5   0   1  -2.5  0 |  425 ]
[   2    1   0   0   1  0 | 150 ]      [ 1  0.5   0   0   0.5  0 |   75 ]
[ -80  -60   0   0   0  1 |   0 ]      [ 0  -20   0   0    40  1 | 6000 ]
For x2 = s3 = 0: x1 = 75 > 0, s1 = 25 > 0, s2 = 425 > 0. The bottom right entry of 6000 is the new value of f.
Second Pivot
 x1   x2  s1  s2   s3  f
[ 0  0.5   1   0  -0.5  0 |   25 ]
[ 0  7.5   0   1  -2.5  0 |  425 ]
[ 1  0.5   0   0   0.5  0 |   75 ]
[ 0  -20   0   0    40  1 | 6000 ]
x2 and s3 are the free (non-basic) variables.
If we pivot back to make s3 > 0, the value of f becomes smaller, so select x2 as the next entering variable, keeping s3 = 0.
(i) s1 becomes zero when x2 = 25/0.5 = 50,
(ii) s2 becomes zero when x2 = 425/7.5 ≈ 56.67, and
(iii) x1 becomes zero when x2 = 75/0.5 = 150.
Since the smallest positive value of x2 comes from s1, s1 is the departing variable, and we pivot on the 1st row, 2nd column.
Second Pivot, continued
Pivoting on the 1st row, 2nd column:
 x1   x2  s1  s2   s3  f               x1  x2   s1  s2  s3  f
[ 0  0.5   1   0  -0.5  0 |   25 ]    [ 0   1    2   0  -1  0 |   50 ]
[ 0  7.5   0   1  -2.5  0 |  425 ]    [ 0   0  -15   1   5  0 |   50 ]
[ 1  0.5   0   0   0.5  0 |   75 ] →  [ 1   0   -1   0   1  0 |   50 ]
[ 0  -20   0   0    40  1 | 6000 ]    [ 0   0   40   0  20  1 | 7000 ]
The entries in the column for f don't change. The objective function value is f = 7000.
Third Pivot
Why does the objective function decrease when moving along the edge making s3 > 0 an entering variable, keeping s1 = 0?
(i) x1 becomes zero when s3 = 50/1 = 50,
(ii) x2 never becomes zero, since its entry in the s3 column is −1 (the ratio 50/(−1) is negative), and
(iii) s2 becomes zero when s3 = 50/5 = 10.
The smallest positive value of s3 comes from s2, so pivot on the 2nd row, 5th column:
 x1  x2   s1  s2  s3  f               x1  x2   s1    s2  s3  f
[ 0   1    2   0  -1  0 |   50 ]     [ 0   1   -1   0.2   0  0 |   60 ]
[ 0   0  -15   1   5  0 |   50 ]  →  [ 0   0   -3   0.2   1  0 |   10 ]
[ 1   0   -1   0   1  0 |   50 ]     [ 1   0    2  -0.2   0  0 |   40 ]
[ 0   0   40   0  20  1 | 7000 ]     [ 0   0  100    -4   0  1 | 6800 ]
The value of the objective function decreases since the entry in the column for s3 of the objective function row is already positive before the pivot.
Tableau
Drop the column for the variable f from the augmented matrix since it does not play a role in the row reduction (the entry in the last row stays = 1 and the others stay = 0).
The augmented matrix with the objective function row but without the column for the objective function variable is called the tableau.
Steps in the Simplex Method for Standard MLP
1. Set up the tableau so that all bi ≥ 0. An initial feasible basic solution is determined by setting xi = 0 and solving for the si.
2. Choose as entering variable any free variable with a negative entry in the objective function row (often the most negative).
3. From the column selected in the previous step, select the row for which the ratio of the entry in the augmented column divided by the entry in the selected column is the smallest value ≥ 0; the departing variable is the basic variable for this row. Row reduce the matrix using the selected new pivot position.
4. The objective function has no upper bound, and there is no optimal solution, when one column has only nonpositive coefficients above a negative coefficient in the objective function row.
5. The solution is optimal when all entries in the objective function row are nonnegative.
6. If the optimal tableau has a zero entry in the objective row for a nonbasic variable and all basic variables are positive, then the solution is nonunique.
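The steps above can be sketched as a small program. This is a minimal tableau simplex for a standard MLP using exact rational arithmetic, not a robust solver; the function name and structure are illustrative:

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, with all b_i >= 0.
    Follows the tableau steps: entering column (most negative objective
    entry), minimum-ratio test for the departing row, then a pivot."""
    m, n = len(A), len(c)
    # Tableau rows [A | I | b] plus an objective row [-c | 0 | 0].
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([Fraction(-cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start as basic
    while True:
        obj = T[-1][:-1]
        j = min(range(n + m), key=lambda k: obj[k])   # entering column
        if obj[j] >= 0:
            break                                     # all entries >= 0: optimal
        rows = [i for i in range(m) if T[i][j] > 0]
        if not rows:
            raise ValueError("unbounded problem")     # step 4
        r = min(rows, key=lambda i: T[i][-1] / T[i][j])  # ratio test
        piv = T[r][j]
        T[r] = [x / piv for x in T[r]]
        for i in range(m + 1):
            if i != r and T[i][j] != 0:
                f = T[i][j]
                T[i] = [x - f * y for x, y in zip(T[i], T[r])]
        basis[r] = j
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Wheat-corn example: maximum 7000 at (50, 50).
x, val = simplex_max([80, 60], [[1, 1], [5, 10], [2, 1]], [100, 800, 150])
print(x, val)   # [Fraction(50, 1), Fraction(50, 1)] 7000
```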
General Constraints: Requirement Constraints
Make all bi ≥ 0 (by multiplying inequalities by −1 if necessary).
A requirement constraint is given by ai1 x1 + ... + ain xn ≥ bi (these occur especially for a min problem). It requires having at least a minimum amount of a quantity.
There can be a surplus of the quantity, so subtract off a surplus variable to get the equality
ai1 x1 + ... + ain xn − si = bi with si ≥ 0.
To solve the equation initially, also add an artificial variable ri ≥ 0:
ai1 x1 + ... + ain xn − si + ri = bi.
(Walker uses ai, but we use ri to distinguish from the matrix entries aij.)
The initial solution sets the artificial variable ri = bi while si = 0 = xj for all j.
Equality Constraints
For an equality constraint ai1 x1 + ... + ain xn = bi, add one artificial variable:
ai1 x1 + ... + ain xn + ri = bi,
with an initial solution ri = bi ≥ 0 while the xj = 0.
For general constraints (with either a requirement or an equality constraint), the initial solution has all xi = 0 and all surplus variables zero, while slack variables and artificial variables are ≥ 0. This solution is not feasible if any artificial variable is positive for a requirement constraint or an equality constraint.
Minimization Example
Assume two foods are consumed in amounts x1 and x2, with costs per unit of 15 and 7 respectively, yielding (5, 3, 5) and (2, 2, 1) units of three vitamins respectively. The problem is to minimize the cost 15x1 + 7x2, or
Maximize: −15x1 − 7x2
Subject to: 5x1 + 2x2 ≥ 60,
3x1 + 2x2 ≥ 40, and
5x1 + 1x2 ≥ 35.
x = 0 is not a feasible solution, so the initial solution involves the artificial variables.
Minimization Example, continued
The tableau is
 x1  x2  s1  s2  s3  r1  r2  r3
[  5   2  -1   0   0   1   0   0 | 60 ]
[  3   2   0  -1   0   0   1   0 | 40 ]
[  5   1   0   0  -1   0   0   1 | 35 ]
[ 15   7   0   0   0   0   0   0 |  0 ]
To eliminate the artificial variables, preliminary steps force all artificial variables to be zero. Use the artificial objective function, which is the negative sum of the equations that contain artificial variables:
−13x1 − 5x2 + s1 + s2 + s3 + (−r1 − r2 − r3) = −135, i.e.
−13x1 − 5x2 + s1 + s2 + s3 + R = −135,
where R = −r1 − r2 − r3 ≤ 0 is a new variable with a maximum of 0.
Minimization Example, continued
The tableau with the artificial objective function included (but not a column for the variable R) is
  x1  x2  s1  s2  s3  r1  r2  r3
[   5   2  -1   0   0   1   0   0 |   60 ]
[   3   2   0  -1   0   0   1   0 |   40 ]
[   5   1   0   0  -1   0   0   1 |   35 ]
[  15   7   0   0   0   0   0   0 |    0 ]
[ -13  -5   1   1   1   0   0   0 | -135 ]
If the artificial objective function can be made equal to zero, then this gives an initial feasible basic solution with ri = 0 and only original and slack variables positive. Then the artificial variables can be dropped, and we proceed as before.
Minimization Example, continued
Since −13 < −5, x1 is the entering variable. Ratios: 35/5 = 7 < 40/3 ≈ 13.3 and 60/5 = 12, so pivot on a31 (making it into a pivot):
  x1  x2  s1  s2  s3  r1  r2  r3               x1     x2  s1  s2    s3  r1  r2    r3
[   5   2  -1   0   0   1   0   0 |   60 ]    [ 0      1  -1   0     1   1   0    -1 |   25 ]
[   3   2   0  -1   0   0   1   0 |   40 ]    [ 0    7/5   0  -1   3/5   0   1  -3/5 |   19 ]
[   5   1   0   0  -1   0   0   1 |   35 ] →  [ 1    1/5   0   0  -1/5   0   0   1/5 |    7 ]
[  15   7   0   0   0   0   0   0 |    0 ]    [ 0      4   0   0     3   0   0    -3 | -105 ]
[ -13  -5   1   1   1   0   0   0 | -135 ]    [ 0  -12/5   1   1  -8/5   0   0  13/5 |  -44 ]
Minimization Example, continued
Pivoting on a22 (ratios: 19/(7/5) = 95/7 ≈ 13.57 < 25 and 7/(1/5) = 35):
 x1     x2  s1  s2    s3  r1  r2    r3               x1      x2  s1    s2    s3  r1     r2     r3
[ 0      1  -1   0     1   1   0    -1 |   25 ]     [ 0       0  -1   5/7   4/7   1   -5/7   -4/7 |    80/7 ]
[ 0    7/5   0  -1   3/5   0   1  -3/5 |   19 ]     [ 0       1   0  -5/7   3/7   0    5/7   -3/7 |    95/7 ]
[ 1    1/5   0   0  -1/5   0   0   1/5 |    7 ]  →  [ 1       0   0   1/7  -2/7   0   -1/7    2/7 |    30/7 ]
[ 0      4   0   0     3   0   0    -3 | -105 ]     [ 0       0   0  20/7   9/7   0  -20/7   -9/7 | -1115/7 ]
[ 0  -12/5   1   1  -8/5   0   0  13/5 |  -44 ]     [ 0       0   1  -5/7  -4/7   0   12/7   11/7 |   -80/7 ]
Minimization Example, continued
Pivoting on a14 yields an initial feasible basic solution:
 x1  x2  s1    s2    s3  r1     r2     r3                  x1  x2    s1  s2    s3    r1  r2    r3
[ 0   0  -1   5/7   4/7   1   -5/7   -4/7 |    80/7 ]     [ 0   0  -7/5   1   4/5   7/5  -1  -4/5 |   16 ]
[ 0   1   0  -5/7   3/7   0    5/7   -3/7 |    95/7 ]     [ 0   1    -1   0     1     1   0    -1 |   25 ]
[ 1   0   0   1/7  -2/7   0   -1/7    2/7 |    30/7 ]  →  [ 1   0   1/5   0  -2/5  -1/5   0   2/5 |    2 ]
[ 0   0   0  20/7   9/7   0  -20/7   -9/7 | -1115/7 ]     [ 0   0     4   0    -1    -4   0     1 | -205 ]
[ 0   0   1  -5/7  -4/7   0   12/7   11/7 |   -80/7 ]     [ 0   0     0   0     0     1   1     1 |    0 ]
Minimization Example, continued
After these steps, (x1, x2, s1, s2, s3) = (2, 25, 0, 16, 0) is an initial feasible basic solution, and the artificial variables can be dropped. Pivoting on a15 yields the final solution:
 x1  x2    s1  s2    s3                 x1  x2    s1    s2  s3
[ 0   0  -7/5   1   4/5 |   16 ]       [ 0   0  -7/4   5/4   1 |   20 ]
[ 0   1    -1   0     1 |   25 ]       [ 0   1   3/4  -5/4   0 |    5 ]
[ 1   0   1/5   0  -2/5 |    2 ]   →   [ 1   0  -1/2   1/2   0 |   10 ]
[ 0   0     4   0    -1 | -205 ]       [ 0   0   9/4   5/4   0 | -185 ]
All entries in the objective function row are nonnegative, so the maximal solution is (10, 5, 0, 0, 20) with a value of −185. For the original problem, the minimal solution has a value of 185. End of Example
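The reported minimizer is easy to sanity-check against the original constraints; a minimal sketch in Python (the data are copied from the example):

```python
# Check that (x1, x2) = (10, 5) is feasible for the diet-style minimization
# problem and attains cost 15*x1 + 7*x2 = 185.
x1, x2 = 10, 5
cost = 15 * x1 + 7 * x2
constraints = [5*x1 + 2*x2 >= 60,    # first vitamin (tight: 60 = 60)
               3*x1 + 2*x2 >= 40,    # second vitamin (tight: 40 = 40)
               5*x1 + 1*x2 >= 35]    # third vitamin (slack: 55 > 35)
print(cost, all(constraints))  # 185 True
```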
Steps in the Simplex Method with General Constraints
1. Make all bi ≥ 0 in the constraints by multiplying by −1 if necessary.
2. Add a slack variable for each resource inequality; add a surplus variable and an artificial variable for each requirement constraint; and add an artificial variable for each equality constraint.
3. If either requirement constraints or equality constraints are present, then form the artificial objective function by taking the negative sum of all equations that contain artificial variables, dropping the terms involving artificial variables. Set up the tableau matrix. (The row for the artificial objective function has zeroes in the columns of the artificial variables.) An initial solution of the equations including artificial variables is determined by setting all original variables xj = 0, all slack variables si = bi, all surplus variables si = 0, and all artificial variables ri = bi.
Steps for General Constraints, continued
4. Apply the simplex algorithm using the artificial objective function.
a. If it is not possible to make the artificial objective function equal to zero, then there is no feasible solution.
b. If the artificial variables can be made equal to zero, then drop the artificial variables and artificial objective function from the tableau, and continue using the initial feasible basic solution constructed.
5. Apply the simplex algorithm to the actual objective function. The solution is optimal when all entries in the objective function row are nonnegative.
Example with Equality Constraint
Consider the problem:
Maximize: 3x1 + 4x2
Subject to: −2x1 + x2 ≤ 6,
2x1 + 2x2 ≥ 24,
x1 = 8, x1 ≥ 0, x2 ≥ 0.
With slack, surplus, and artificial variables added, the problem becomes
Maximize: 3x1 + 4x2
Subject to: −2x1 + x2 + s1 = 6,
2x1 + 2x2 − s2 + r2 = 24,
x1 + r3 = 8.
The artificial objective function is the negative sum of the 2nd and 3rd rows:
−3x1 − 2x2 + s2 + R = −32, where R = −r2 − r3.
Example, continued
The tableau with variables is
 x1  x2  s1  s2  r2  r3
[ -2   1   1   0  0  0 |   6 ]
[  2   2   0  -1  1  0 |  24 ]
[  1   0   0   0  0  1 |   8 ]
[ -3  -4   0   0  0  0 |   0 ]
[ -3  -2   0   1  0  0 | -32 ]
Pivoting on a31 and then a22:
[ 0   1   1   0  0   2 | 22 ]      [ 0  0  1   1/2  -1/2   3 | 18 ]
[ 0   2   0  -1  1  -2 |  8 ]      [ 0  1  0  -1/2   1/2  -1 |  4 ]
[ 1   0   0   0  0   1 |  8 ]  →   [ 1  0  0     0     0   1 |  8 ]
[ 0  -4   0   0  0   3 | 24 ]      [ 0  0  0    -2     2  -1 | 40 ]
[ 0  -2   0   1  0   3 | -8 ]      [ 0  0  0     0     1   1 |  0 ]
We have attained a feasible solution of (x1, x2, s1, s2) = (8, 4, 18, 0).
Example, continued
We can now drop the artificial objective function and artificial variables. Pivoting on a14:
 x1  x2  s1    s2            x1  x2  s1  s2
[ 0   0   1   1/2 | 18 ]    [ 0   0   2   1 |  36 ]
[ 0   1   0  -1/2 |  4 ]    [ 0   1   1   0 |  22 ]
[ 1   0   0     0 |  8 ] →  [ 1   0   0   0 |   8 ]
[ 0   0   0    -2 | 40 ]    [ 0   0   4   0 | 112 ]
Optimal solution of f = 112 for (x1, x2, s1, s2) = (8, 22, 0, 36). End of Example
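A quick feasibility check of the answer; a sketch in Python, with the constraint signs as reconstructed on these slides:

```python
# Check that x = (8, 22) is feasible for the equality-constraint example
# and gives f = 3*x1 + 4*x2 = 112, with slack s1 = 0 and surplus s2 = 36.
x1, x2 = 8, 22
value = 3 * x1 + 4 * x2
feasible = (-2 * x1 + x2 <= 6) and (2 * x1 + 2 * x2 >= 24) and (x1 == 8)
s1 = 6 - (-2 * x1 + x2)          # slack in the first constraint
s2 = 2 * x1 + 2 * x2 - 24        # surplus in the second constraint
print(value, feasible, s1, s2)   # 112 True 0 36
```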
Duality: Introductory Example
Return to the wheat-corn problem (MLP).
Maximize: z = 80x1 + 60x2
subject to: x1 + x2 ≤ 100 (land), 5x1 + 10x2 ≤ 800 (capital), 2x1 + x2 ≤ 150 (labor), x1, x2 ≥ 0.
Assume excess resources can be rented out, or shortfalls rented, for prices of y1, y2, and y3 (shadow prices of the inputs). The profit is
P = 80x1 + 60x2 + (100 − x1 − x2) y1 + (800 − 5x1 − 10x2) y2 + (150 − 2x1 − x2) y3.
If the farmer profits by renting outside land, then competitors raise the price of land, forcing 100 − x1 − x2 ≥ 0. If 100 − x1 − x2 > 0, then the market sets y1 = 0, so (100 − x1 − x2) y1 = 0 at the optimum. The other constraints are similar. The farmer's perspective yields the original MLP.
Duality: Introductory Example, continued
P = 80x1 + 60x2 + (100 − x1 − x2) y1 + (800 − 5x1 − 10x2) y2 + (150 − 2x1 − x2) y3
If a resource is slack, e.g. 100 − x1 − x2 > 0, then the market sets y1 = 0. The market is minimizing P as a function of (y1, y2, y3).
Rearranging the profit function from the market's perspective yields
P = (80 − y1 − 5y2 − 2y3) x1 + (60 − y1 − 10y2 − y3) x2 + 100y1 + 800y2 + 150y3.
The coefficient of xi represents the net profit after costs of a unit of the i-th good. If the net profit were > 0, then competitors would grow wheat/corn, forcing it ≤ 0:
80 − y1 − 5y2 − 2y3 ≤ 0, or 80 ≤ y1 + 5y2 + 2y3,
60 − y1 − 10y2 − y3 ≤ 0, or 60 ≤ y1 + 10y2 + y3.
If a resource is slack, the market sets its price to zero:
0 = (100 − x1 − x2) y1, 0 = (800 − 5x1 − 10x2) y2, 0 = (150 − 2x1 − x2) y3.
The market is minimizing P = 100y1 + 800y2 + 150y3.
Duality: Introductory Example, continued
The market's perspective results in the dual minimization problem:
Minimize: w = 100y1 + 800y2 + 150y3
Subject to: y1 + 5y2 + 2y3 ≥ 80 (wheat),
y1 + 10y2 + y3 ≥ 60 (corn),
y1, y2, y3 ≥ 0.
MLP: Maximize: z = 80x1 + 60x2
Subject to: x1 + x2 ≤ 100, 5x1 + 10x2 ≤ 800, 2x1 + x2 ≤ 150, x1, x2 ≥ 0.
1. The coefficient matrices of the xi and yi are transposes of each other.
2. The coefficients of the objective function of the MLP become the constants of the inequalities of the dual mlp.
3. The constants of the inequalities of the MLP become the coefficients of the objective function of the dual mlp.
Duality: Introductory Example, continued
For the wheat-corn MLP problem, the final tableau is
  x1   x2  s1  s2  s3            x1  x2   s1  s2  s3
[   1    1   1   0   0 | 100 ]  [ 0   1    2   0  -1 |   50 ]
[   5   10   0   1   0 | 800 ]  [ 0   0  -15   1   5 |   50 ]
[   2    1   0   0   1 | 150 ]→ [ 1   0   -1   0   1 |   50 ]
[ -80  -60   0   0   0 |   0 ]  [ 0   0   40   0  20 | 7000 ]
The optimal solution of the MLP is x1 = 50 and x2 = 50 with a payoff of 7000.
The optimal solution of the dual mlp is (by a theorem given later) y1 = 40, y2 = 0, and y3 = 20 with the same payoff, where 40, 0, and 20 are the entries in the bottom row of the final tableau in the columns associated with the slack variables. End of Example
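The dual solution read off the tableau can be verified directly; a minimal sketch in Python (the data are copied from the example):

```python
# Check that y = (40, 0, 20) is feasible for the dual mlp of the wheat-corn
# problem and that its objective value matches the primal payoff of 7000.
y1, y2, y3 = 40, 0, 20
dual_value = 100*y1 + 800*y2 + 150*y3
feasible = (y1 + 5*y2 + 2*y3 >= 80) and (y1 + 10*y2 + y3 >= 60)
print(dual_value, feasible)  # 7000 True
```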
Example, Bicycle Manufacturing
A bicycle manufacturer makes x1 3-speeds and x2 5-speeds. Maximize the profit given by z = 12x1 + 15x2. The constraints are
20x1 + 30x2 ≤ 2400 (finishing time in minutes),
15x1 + 40x2 ≤ 3000 (assembly time in minutes),
x1 + x2 ≤ 100 (frames used for assembly),
x1 ≥ 0, x2 ≥ 0.
The dual problem is
Minimize: w = 2400y1 + 3000y2 + 100y3,
Subject to: 20y1 + 15y2 + y3 ≥ 12,
30y1 + 40y2 + y3 ≥ 15,
y1 ≥ 0, y2 ≥ 0, y3 ≥ 0.
Bicycle Manufacturing, continued
MLP tableaux:
  x1   x2  s1  s2  s3              x1  x2  s1  s2   s3              x1  x2     s1  s2  s3
[  20   30   1   0   0 | 2400 ]   [ 0  10   1   0  -20 |  400 ]    [ 0   1   1/10   0  -2 |   40 ]
[  15   40   0   1   0 | 3000 ]   [ 0  25   0   1  -15 | 1500 ]    [ 0   0   -5/2   1  35 |  500 ]
[   1    1   0   0   1 |  100 ] → [ 1   1   0   0    1 |  100 ] →  [ 1   0  -1/10   0   3 |   60 ]
[ -12  -15   0   0   0 |    0 ]   [ 0  -3   0   0   12 | 1200 ]    [ 0   0   3/10   0   6 | 1320 ]
The optimal solution has x1 = 60 3-speeds and x2 = 40 5-speeds, with a profit of $1320.
Bicycle Manufacturing, continued
 x1  x2     s1  s2  s3
[ 0   1   1/10   0  -2 |   40 ]
[ 0   0   -5/2   1  35 |  500 ]
[ 1   0  -1/10   0   3 |   60 ]
[ 0   0   3/10   0   6 | 1320 ]
The entries in the objective row under the slack variables are the marginal values of the corresponding constraints. The dual problem has the solution
y1 = 3/10 profit per finishing minute,
y2 = 0 profit per assembly minute,
y3 = 6 profit per frame.
Additional units of the exhausted resources, finishing time and frames, contribute to the profit, but additional assembly time does not.
Rules for Forming the Dual LP
Maximization Problem, MLP                 Minimization Problem, mlp
i-th constraint  Σj aij xj ≤ bi           i-th variable    yi ≥ 0
i-th constraint  Σj aij xj ≥ bi           i-th variable    yi ≤ 0
i-th constraint  Σj aij xj = bi           i-th variable    yi unrestricted
j-th variable    xj ≥ 0                   j-th constraint  Σi aij yi ≥ cj
j-th variable    xj ≤ 0                   j-th constraint  Σi aij yi ≤ cj
j-th variable    xj unrestricted          j-th constraint  Σi aij yi = cj
Standard conditions for the MLP, Σj aij xj ≤ bi or xj ≥ 0, correspond to standard conditions for the mlp, yi ≥ 0 or Σi aij yi ≥ cj.
Nonstandard conditions, ≥ bi or xj ≤ 0, correspond to nonstandard conditions. Equality constraints correspond to unrestricted variables.
These rules follow from the proof of the Duality Theorem (given subsequently).
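For a standard MLP all the rows of the table reduce to one mechanical rule: transpose the data. A minimal sketch in Python (the helper name is illustrative; only the standard case is handled):

```python
# For a standard MLP (max c.x, A x <= b, x >= 0), the dual mlp is
# (min b.y, A^T y >= c, y >= 0): transpose A and swap the roles of b and c.

def dual_of_standard_mlp(c, A, b):
    At = [list(col) for col in zip(*A)]   # transpose of A
    return b, At, c                       # (dual objective, dual matrix, dual rhs)

# Wheat-corn: the dual objective is 100 y1 + 800 y2 + 150 y3.
obj, At, rhs = dual_of_standard_mlp([80, 60], [[1, 1], [5, 10], [2, 1]], [100, 800, 150])
print(obj)   # [100, 800, 150]
print(At)    # [[1, 5, 2], [1, 10, 1]]
print(rhs)   # [80, 60]
```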
Example
Minimize: 8y1 + 10y2 + 4y3
Subject to: 4y1 + 2y2 − 3y3 ≥ 20,
2y1 + 3y2 + 5y3 ≤ 150,
6y1 + 2y2 + 4y3 = 40,
y1 unrestricted, y2 ≥ 0, y3 ≥ 0.
By the table, the dual maximization problem is
Maximize: 20x1 + 150x2 + 40x3
Subject to: 4x1 + 2x2 + 6x3 = 8,
2x1 + 3x2 + 2x3 ≤ 10,
−3x1 + 5x2 + 4x3 ≤ 4,
x1 ≥ 0, x2 ≤ 0, x3 unrestricted.
Example, continued
Maximize: 20x1 + 150x2 + 40x3
Subject to: 4x1 + 2x2 + 6x3 = 8,
2x1 + 3x2 + 2x3 ≤ 10,
−3x1 + 5x2 + 4x3 ≤ 4,
x1 ≥ 0, x2 ≤ 0, x3 unrestricted.
By making the change of variables x2 = −v2 and x3 = v3 − w3, all restrictions on the variables are ≥ 0:
Maximize: 20x1 − 150v2 + 40v3 − 40w3
Subject to: 4x1 − 2v2 + 6v3 − 6w3 = 8,
2x1 − 3v2 + 2v3 − 2w3 ≤ 10,
−3x1 − 5v2 + 4v3 − 4w3 ≤ 4,
x1 ≥ 0, v2 ≥ 0, v3 ≥ 0, w3 ≥ 0.
Example, continued
The tableau for the maximization problem with variables x1, v2, v3, w3, with artificial variable r1, and with slack variables s2 and s3 is
  x1   v2   v3   w3  s2  s3  r1
[   4   -2    6   -6   0   0   1 |  8 ]
[   2   -3    2   -2   1   0   0 | 10 ]
[  -3   -5    4   -4   0   1   0 |  4 ]
[ -20  150  -40   40   0   0   0 |  0 ]
[  -4    2   -6    6   0   0   0 | -8 ]
Pivoting on a11:
 x1     v2    v3     w3  s2  s3    r1
[ 1   -1/2   3/2   -3/2   0   0   1/4 |  2 ]
[ 0     -2    -1      1   1   0  -1/2 |  6 ]
[ 0  -13/2  17/2  -17/2   0   1   3/4 | 10 ]
[ 0    140   -10     10   0   0     5 | 40 ]
[ 0      0     0      0   0   0     1 |  0 ]
Example, continued
Drop the artificial objective function but keep the artificial variable to determine the value of its dual variable. Pivoting on a33:
 x1     v2    v3     w3  s2  s3    r1             x1       v2  v3  w3  s2     s3      r1
[ 1   -1/2   3/2   -3/2   0   0   1/4 |  2 ]     [ 1    11/17   0   0   0  -3/17    2/17 |   4/17 ]
[ 0     -2    -1      1   1   0  -1/2 |  6 ]     [ 0   -47/17   0   0   1   2/17   -7/17 | 122/17 ]
[ 0  -13/2  17/2  -17/2   0   1   3/4 | 10 ]  →  [ 0   -13/17   1  -1   0   2/17    3/34 |  20/17 ]
[ 0    140   -10     10   0   0     5 | 40 ]     [ 0  2250/17   0   0   0  20/17  100/17 | 880/17 ]
Example, continued
From the final tableau,
 x1       v2  v3  w3  s2     s3      r1
[ 1    11/17   0   0   0  -3/17    2/17 |   4/17 ]
[ 0   -47/17   0   0   1   2/17   -7/17 | 122/17 ]
[ 0   -13/17   1  -1   0   2/17    3/34 |  20/17 ]
[ 0  2250/17   0   0   0  20/17  100/17 | 880/17 ]
the optimal solution of the MLP is x1 = 4/17, x2 = −v2 = 0, x3 = v3 − w3 = 20/17 − 0 = 20/17, s2 = 122/17, and s3 = r1 = 0, with maximal value
20(4/17) + 150(0) + 40(20/17) = 880/17.
Example, continued
The optimal solution for the original mlp can also be read off the final tableau,
 x1       v2  v3  w3  s2     s3      r1
[ 1    11/17   0   0   0  -3/17    2/17 |   4/17 ]
[ 0   -47/17   0   0   1   2/17   -7/17 | 122/17 ]
[ 0   -13/17   1  -1   0   2/17    3/34 |  20/17 ]
[ 0  2250/17   0   0   0  20/17  100/17 | 880/17 ]
y1 = 100/17, y2 = 0, y3 = 20/17, with minimal value 8(100/17) + 10(0) + 4(20/17) = 880/17.
Example, continued
Alternative method: First write the minimization problem with all variables ≥ 0 by setting y1 = u1 − z1:
Minimize: 8u1 − 8z1 + 10y2 + 4y3
Subject to: 4u1 − 4z1 + 2y2 − 3y3 ≥ 20,
2u1 − 2z1 + 3y2 + 5y3 ≤ 150,
6u1 − 6z1 + 2y2 + 4y3 = 40,
u1 ≥ 0, z1 ≥ 0, y2 ≥ 0, y3 ≥ 0.
The dual MLP will now have a different tableau than before but the same solution.
Remark on Tableaux with Surplus/Equality Constraints
After the artificial objective function is zero, drop this row but keep the artificial variables to determine the values of the dual variables.
Proceed to make all the entries of the row for the objective function ≥ 0 in the columns of the xi, slack, and surplus variables, but allow negative values in the artificial variable columns.
The dual variables are the entries in the row for the objective function in the columns of the slack variables and artificial variables. For a pair of surplus and artificial variable columns, the value in the artificial variable column is ≥ 0 and is −1 times the value in the surplus variable column.
Notation for the Duality Theorems
MLP: (primal) maximization linear programming problem
Maximize: f(x) = c · x
Subject to: Σj aij xj ≤ bi, ≥ bi, or = bi for 1 ≤ i ≤ m, and xj ≥ 0, ≤ 0, or unrestricted for 1 ≤ j ≤ n.
Feasible set: F_M.
mlp: (dual) minimization linear programming problem
Minimize: g(y) = b · y
Subject to: Σi aij yi ≥ cj, ≤ cj, or = cj for 1 ≤ j ≤ n, and yi ≥ 0, ≤ 0, or unrestricted for 1 ≤ i ≤ m.
Feasible set: F_m.
The dual of the minimization problem is the maximization problem.
Weak Duality Theorem
Theorem (Weak Duality Theorem). Let x ∈ F_M for the MLP and y ∈ F_m for the mlp be any feasible solutions.
a. Then f(x) = c · x ≤ b · y = g(y). Thus an optimal value M of either problem satisfies c · x ≤ M ≤ b · y for any x ∈ F_M and y ∈ F_m.
b. c · x = b · y iff x and y satisfy complementary slackness:
0 = yj (bj − aj1 x1 − ... − ajn xn) for 1 ≤ j ≤ m, and
0 = xi (a1i y1 + ... + ami ym − ci) for 1 ≤ i ≤ n.
In matrix notation, 0 = y · (b − Ax) and 0 = x · (Aᵀy − c).
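Part b can be checked numerically on the wheat-corn pair; a minimal sketch in Python, with the primal solution x = (50, 50) and dual solution y = (40, 0, 20) taken from earlier slides:

```python
# Verify weak duality with equality and complementary slackness on the
# wheat-corn problem: c.x = b.y and all slackness products vanish.
A = [[1, 1], [5, 10], [2, 1]]
b, c = [100, 800, 150], [80, 60]
x, y = [50, 50], [40, 0, 20]

Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]          # A x
ATy = [sum(A[i][j] * y[i] for i in range(3)) for j in range(2)]   # A^T y
primal_value = sum(ci * xi for ci, xi in zip(c, x))               # c . x
dual_value = sum(bi * yi for bi, yi in zip(b, y))                 # b . y
# Complementary slackness products, all of which should be zero here:
slack_primal = [yi * (bi - axi) for yi, bi, axi in zip(y, b, Ax)]
slack_dual = [xj * (atyj - cj) for xj, atyj, cj in zip(x, ATy, c)]
print(primal_value, dual_value, slack_primal, slack_dual)
# 7000 7000 [0, 0, 0] [0, 0]
```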
Complementary Slackness
In linear programming, one usually solves by the simplex method, i.e. row reduction. In nonlinear programming with inequalities, one often solves the complementary slackness equations, the Karush-Kuhn-Tucker equations.
Proof of the Weak Duality Theorem
(1) If Σj aij xj ≤ bi, then yi ≥ 0 and yi (Ax)i = yi Σj aij xj ≤ yi bi.
If Σj aij xj ≥ bi, then yi ≤ 0 and yi (Ax)i = yi Σj aij xj ≤ yi bi.
If Σj aij xj = bi, then yi is arbitrary and yi (Ax)i = yi Σj aij xj = yi bi.
Summing over i: y · Ax = Σi yi (Ax)i ≤ Σi yi bi = y · b, or y · (b − Ax) ≥ 0.
(2) By the same type of argument as (1),
c · x ≤ x · (Aᵀy) = (Aᵀy) · x = y · (Ax) = y · Ax, i.e. (Aᵀy − c) · x ≥ 0.
(a) c · x ≤ y · Ax ≤ y · b.
(b) y · b − c · x = y · (b − Ax) + (Aᵀy − c) · x = 0 iff 0 = (Aᵀy − c) · x and 0 = y · (b − Ax). QED
Feasibility/Boundedness Corollary
Corollary. Assume that MLP and mlp both have feasible solutions. Then MLP is bounded above and has an optimal solution. Also, mlp is bounded below and has an optimal solution.
Proof. If y⁰ ∈ F_m and x ∈ F_M, then f(x) = c·x ≤ b·y⁰, so f is bounded above and has an optimal solution. Similarly, if x⁰ ∈ F_M and y ∈ F_m, then g(y) = b·y ≥ c·x⁰, so g is bounded below and has an optimal solution.
Necessary Conditions for Optimal Solution
Proposition. If x̄ is an optimal solution for MLP, then there is a feasible solution ȳ ∈ F_m of the dual mlp that satisfies the complementary slackness equations
1. ȳ·(b − A x̄) = 0,
2. (Aᵀȳ − c)·x̄ = 0.
Similarly, if ȳ ∈ F_m is an optimal solution of mlp, then there is a feasible solution x̄ ∈ F_M satisfying 1-2.
The proof is the longest of the duality arguments. We prove similar necessary conditions in the nonlinear situation later.
Proof of Necessary Conditions
Let E be the set of i such that b_i = Σ_j a_ij x̄_j, i.e., the iᵗʰ constraint is tight or effective. The gradient of this constraint is the transpose of the iᵗʰ row of A, Rᵢᵀ.
Let E′ be the set of i such that x̄_i = 0 is tight; eᵢ = (0, …, 1, …, 0)ᵀ, and −eᵢ is the gradient of the constraint −x_i ≤ 0.
Assume nondegeneracy, so the gradients of the tight constraints {Rᵢᵀ}_{i∈E} ∪ {−eᵢ}_{i∈E′} = {wᵢ} are linearly independent. (Otherwise take an appropriate subset in the following argument.)
f has a maximum at x̄ on the level set of the constraints with i ∈ E ∪ E′. By Lagrange multipliers,
  ∇f(x̄) = c = Σ_{i∈E} ȳ_i Rᵢᵀ − Σ_{i∈E′} z_i eᵢ.
Setting ȳ_i = 0 for i ∉ E with 1 ≤ i ≤ m, and z_i = 0 for i ∉ E′ with 1 ≤ i ≤ n,
  c = Σ_{1≤i≤m} ȳ_i Rᵢᵀ − Σ_{1≤i≤n} z_i eᵢ = Aᵀȳ − z.  (*)
Proof of Necessary Conditions, contin.
Since ȳ_i = 0 whenever b_i − Σ_j a_ij x̄_j ≠ 0,
  0 = ȳ_i (b_i − Σ_j a_ij x̄_j) for 1 ≤ i ≤ m.
Since z_j = 0 whenever x̄_j ≠ 0, 0 = x̄_j z_j for 1 ≤ j ≤ n.
In vector-matrix form, using (*),
(1) 0 = ȳ·(b − A x̄),
(2) 0 = x̄·z = (Aᵀȳ − c)·x̄.
Still need:
(i) ȳ_i ≥ 0 for a resource constraint,
(ii) ȳ_i ≤ 0 for a requirement constraint,
(iii) ȳ_i unrestricted for an equality constraint,
(iv) z_j = Σ_i a_ij ȳ_i − c_j ≥ 0 for x_j ≥ 0,
(v) z_j ≤ 0 for x_j ≤ 0, and
(vi) z_j = 0 for x_j unrestricted.
Proof of Necessary Conditions, contin.
{Rᵢᵀ}_{i∈E} ∪ {−eᵢ}_{i∈E′} = {w_k} are linearly independent.
Complete to a basis of Rⁿ using vectors perpendicular to these first vectors: W = (w₁, …, w_n) and V = (v₁, …, v_n) such that WᵀV = I. Column v_k is perpendicular to every w_i except for i = k.
  c = Σ_{1≤i≤m} ȳ_i Rᵢᵀ − Σ_{1≤i≤n} z_i eᵢ = Σ_j p_j w_j, where p_j = ȳ_i, z_i, or 0,
  c·v_k = (Σ_j p_j w_j)·v_k = p_k.
Take i ∈ E. The gradient of this constraint is Rᵢᵀ = w_k for some k. Set δ = −1 for a resource and δ = +1 for a requirement constraint, so δ Rᵢᵀ points into F_M (unless it is an = constraint). For small t ≥ 0, x̄ + t δ v_k ∈ F_M, so
  0 ≤ f(x̄) − f(x̄ + t δ v_k) = −t δ c·v_k = −t δ p_k,
so δ p_k ≤ 0, i.e., ȳ_i ≥ 0 for a resource and ȳ_i ≤ 0 for a requirement constraint.
One cannot move off an equality constraint, so ȳ_i is unrestricted.
Proof of Necessary Conditions, contin.
Take i ∈ E′, with ±eᵢ = w_k for some k. Set δ = 1 if the sign constraint is x_i ≥ 0 and δ = −1 if it is x_i ≤ 0, so that δ eᵢ points into F_M (unless x_i is unrestricted).
By the same argument as before, δ p_k ≤ 0, which gives δ z_i ≥ 0.
Therefore z_i ≥ 0 if x_i ≥ 0 and z_i ≤ 0 if x_i ≤ 0.
If x_i is unrestricted, then the constraint is not tight and z_i = 0.
This proves ȳ ∈ F_m and that ȳ satisfies complementary slackness (1) and (2). QED
Optimality and Complementary Slackness
Corollary. Assume that x̄ ∈ F_M is a feasible solution of the primal MLP and ȳ ∈ F_m is a feasible solution of the dual mlp. Then the following are equivalent.
a. x̄ is an optimal solution of MLP and ȳ is an optimal solution of mlp.
b. c·x̄ = b·ȳ.
c. 0 = x̄·(c − Aᵀȳ) and 0 = (b − A x̄)·ȳ.
Proof. (b ⇔ c) Restatement of the Weak Duality Theorem.
(a ⇒ c) By the proposition, there is some y′ ∈ F_m that satisfies complementary slackness with x̄, so by the Weak Duality Theorem c·x̄ = b·y′. Since ȳ is optimal for mlp and by weak duality again, c·x̄ ≤ b·ȳ ≤ b·y′ = c·x̄, so c·x̄ = b·ȳ. By the Weak Duality Theorem, x̄ and ȳ satisfy complementary slackness.
Proof of Corollary, contin.
Proof. (b ⇒ a) If x̄ and ȳ satisfy c·x̄ = b·ȳ, then for any x ∈ F_M and y ∈ F_m,
  c·x ≤ b·ȳ = c·x̄ and b·y ≥ c·x̄ = b·ȳ,
so x̄ and ȳ must be optimal solutions.
Duality Theorem
Theorem. Consider two dual problems MLP and mlp. Then MLP has an optimal solution iff the dual mlp has an optimal solution.
Proof. If MLP has an optimal solution x̄, then mlp has a feasible solution ȳ that satisfies complementary slackness. By the Corollary, ȳ is an optimal solution of mlp. The converse is similar.
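As a numerical sanity check (not part of the proof), both the wheat-corn primal and its dual can be solved with `scipy.optimize.linprog`, an assumed-available library not used in the text, and the optimal values compared:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [5.0, 10.0], [2.0, 1.0]])
b = np.array([100.0, 800.0, 150.0])
c = np.array([80.0, 60.0])

# Primal MLP: maximize c.x with Ax <= b, x >= 0  (linprog minimizes, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
# Dual mlp: minimize b.y with A^T y >= c, y >= 0  (rewrite as -A^T y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

M_primal = -primal.fun
M_dual = dual.fun
print(M_primal, M_dual)  # equal optimal values, as the Duality Theorem asserts
```

Both values come out to 7000, attained at x = (50, 50) and y = (40, 0, 20).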
Duality and Tableau
Theorem. If either MLP or mlp is solved for an optimal solution by the simplex method, then the solution of its dual LP is displayed in the bottom row of the final optimal tableau, in the columns associated with the slack and artificial variables (not the surplus variables).
Proof: Start with MLP. To solve by tableau, need x ≥ 0. Group the equations into resource, requirement, and equality constraints, so the tableau for MLP is
  [ A₁    I₁    0     0     0   | b₁ ]
  [ A₂    0    −I₂    I₂    0   | b₂ ]
  [ A₃    0     0     0     I₃  | b₃ ]
  [ −cᵀ   0     0     0     0   | 0  ]
Proof continued
Row operations to the final tableau are realized by matrix multiplication:
  [ M₁   M₂   M₃   0 ]  [ A₁    I₁    0     0     0   | b₁ ]
  [ ȳ₁ᵀ  ȳ₂ᵀ  ȳ₃ᵀ  1 ]  [ A₂    0    −I₂    I₂    0   | b₂ ]
                        [ A₃    0     0     0     I₃  | b₃ ]
                        [ −cᵀ   0     0     0     0   | 0  ]
  = [ M₁A₁ + M₂A₂ + M₃A₃           M₁    −M₂   M₂    M₃   | Mb  ]
    [ ȳ₁ᵀA₁ + ȳ₂ᵀA₂ + ȳ₃ᵀA₃ − cᵀ   ȳ₁ᵀ   −ȳ₂ᵀ  ȳ₂ᵀ   ȳ₃ᵀ  | ȳ·b ]
The objective function row is not added to the other rows, so the last column of the multiplier matrix is (0, 1)ᵀ.
In the final tableau, the entries in the objective function row are ≥ 0, except possibly in the artificial variable columns, so Aᵀȳ − c = (ȳᵀA − cᵀ)ᵀ ≥ 0, ȳ₁ ≥ 0, and ȳ₂ ≤ 0 (since −ȳ₂ᵀ ≥ 0 in the surplus columns), so ȳ ∈ F_m.
Also c·x_max = ȳ·b = b·ȳ, so ȳ is a minimizer by the Optimality Corollary.
Proof continued
Note: (Aᵀȳ)_i = Lᵢ·ȳ, where Lᵢ is the iᵗʰ column of A.
If x_i ≤ 0, set ξ_i = −x_i ≥ 0. The column in the tableau is −1 times the original column, and the new objective function coefficient is −c_i. Now 0 ≤ (−Lᵢ)·ȳ − (−c_i), so Lᵢ·ȳ ≤ c_i, the corresponding constraint of the dual.
If x_i is arbitrary, set x_i = ξ_i − η_i. Then we get both Lᵢ·ȳ ≥ c_i and Lᵢ·ȳ ≤ c_i, so Lᵢ·ȳ = c_i, the equality constraint of the dual. QED
Sensitivity Analysis
Sensitivity analysis concerns the extent to which more of a resource would increase the maximum value of an MLP.
Example. In the short run:
                   product 1/item   product 2/item   in stock
  profit               $40              $10
  units of paint        15               10            1020
  fasteners             10                2             400
  hours of labor         3                5             420

   x₁     x₂     s₁    s₂    s₃
  [ 15    10     1     0     0  | 1020 ]
  [ 10     2     0     1     0  |  400 ]
  [  3     5     0     0     1  |  420 ]
  [−40   −10     0     0     0  |    0 ]

   x₁     x₂     s₁    s₂    s₃
  [  0    7      1   −1.5    0  |  420 ]
  [  1    0.2    0    0.1    0  |   40 ]
  [  0    4.4    0   −0.3    1  |  300 ]
  [  0   −2      0    4      0  | 1600 ]
Sensitivity Analysis, continued
   x₁     x₂     s₁    s₂    s₃
  [  0    7      1   −1.5    0  |  420 ]
  [  1    0.2    0    0.1    0  |   40 ]
  [  0    4.4    0   −0.3    1  |  300 ]
  [  0   −2      0    4      0  | 1600 ]

   x₁    x₂     s₁       s₂      s₃
  [  0    1     1/7    −3/14     0  |   60 ]
  [  1    0    −1/35    1/7      0  |   28 ]
  [  0    0   −22/35    9/14     1  |   36 ]
  [  0    0     2/7    25/7      0  | 1720 ]
Optimal solution is x₁ = 28, x₂ = 60, s₁ = 0, s₂ = 0, s₃ = 36, with optimal profit of 1720.
Sensitivity Analysis, continued
   x₁    x₂     s₁       s₂      s₃
  [  0    1     1/7    −3/14     0  |   60 ]
  [  1    0    −1/35    1/7      0  |   28 ]
  [  0    0   −22/35    9/14     1  |   36 ]
  [  0    0     2/7    25/7      0  | 1720 ]
The values of an increase of the constrained quantities are y₁ = 2/7, y₂ = 25/7, and y₃ = 0 for the quantity that is not tight.
The increase is largest for the second constraint, the limitation on fasteners.
Next consider the range over which b₂ can be changed while keeping the same basic variables, with s₁ = s₂ = 0.
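The interpretation of y₁ = 2/7, y₂ = 25/7, y₃ = 0 as marginal values of the resources can be checked by re-solving the LP with each resource increased by one unit and comparing optimal profits. This is a numerical illustration; `scipy.optimize.linprog` is an assumed-available library, not part of the text:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[15.0, 10.0], [10.0, 2.0], [3.0, 5.0]])  # paint, fasteners, labor
b = np.array([1020.0, 400.0, 420.0])
c = np.array([40.0, 10.0])

def max_profit(rhs):
    # linprog minimizes, so negate the objective to maximize c.x
    res = linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2, method="highs")
    return -res.fun

base = max_profit(b)   # 1720 at (x1, x2) = (28, 60)
shadow = [max_profit(b + np.eye(3)[i]) - base for i in range(3)]
print(base, shadow)    # shadow prices approximately (2/7, 25/7, 0)
```

The unit increase stays inside each allowable range, so the finite differences reproduce the dual values exactly.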
Sensitivity Analysis, continued
Let δ₂ be the change in the 2nd resource (fasteners): 10x₁ + 2x₂ + s₂ = 400 + δ₂ is the starting form of the constraint.
s₂ and δ₂ play similar roles (and have similar units), so the new final tableau adds δ₂ times the s₂-column to the right side of the equalities. Still need x₁, x₂, s₃ ≥ 0:
  0 ≤ x₂ = 60 − (3/14) δ₂, or δ₂ ≤ 60 · 14/3 = 280,
  0 ≤ x₁ = 28 + (1/7) δ₂, or δ₂ ≥ −28 · 7 = −196,
  0 ≤ s₃ = 36 + (9/14) δ₂, or δ₂ ≥ −36 · 14/9 = −56;
so −56 ≤ δ₂ ≤ 280.
Sensitivity Analysis, continued
The resource can be increased by at most 280 units and decreased by at most 56 units:
  −56 ≤ δ₂ ≤ 280, i.e., 344 = 400 − 56 ≤ b₂ ≤ 400 + 280 = 680.
For this range, x₁, x₂, and s₃ are still the basic variables.
[Figure: feasible sets for resource levels b₂ = 344, 400, and 680, with optimal vertices (20, 72), (28, 60), and (68, 0) respectively; (40, 0) also marked.]
Sensitivity for Change of Constraint Constants
For δ₂ = 280: x₁ = 28 + 280 (1/7) = 68, x₂ = 60 − 280 (3/14) = 0, s₃ = 36 + 280 (9/14) = 216, and z = 1720 + 280 (25/7) = 2720 is the optimal value.
For δ₂ = −56: x₁ = 28 − 56 (1/7) = 20, x₂ = 60 + 56 (3/14) = 72, s₃ = 36 − 56 (9/14) = 0, and z = 1720 − 56 (25/7) = 1520 is the optimal value.
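These endpoint values can be confirmed by solving the perturbed LPs directly, a numerical check with `scipy.optimize.linprog` assumed available:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[15.0, 10.0], [10.0, 2.0], [3.0, 5.0]])
c = np.array([40.0, 10.0])

def solve(b2):
    # maximize c.x subject to the three constraints, with fasteners set to b2
    b = np.array([1020.0, b2, 420.0])
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    return res.x, -res.fun

x_hi, z_hi = solve(400.0 + 280.0)   # delta_2 = +280
x_lo, z_lo = solve(400.0 - 56.0)    # delta_2 = -56
print(z_hi, z_lo)  # 2720 and 1520, at (68, 0) and (20, 72)
```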
General Changes in Tight Constraint
Use the optimal (final) tableau that gives the maximum:
  b̄_i    entry of the iᵗʰ row in the right-hand column of constants,
  c̄_j ≥ 0  entry in the jᵗʰ column of the objective row,
  ā_ij   entry in the iᵗʰ row and jᵗʰ column,
excluding the right-side constants and any artificial variable columns.
  C_j  jᵗʰ column of Ā (note capital and bold, not c̄_j),
  R̄_i  iᵗʰ row of Ā.
General Changes in Constraint, contin.
Change the tight rᵗʰ resource constraint to b_r + δ_r. Assume the s_r-column is the kᵗʰ column.
  [ A   | b + δ_r e_r ]   row-reduces to   [ Ā   | b̄ + δ_r C_k ]
  [ −cᵀ | 0           ]                    [ c̄ᵀ  | M + δ_r c̄_k ],
since the s_r-column of the original tableau is e_r and becomes C_k in the final tableau.
For each basic variable z_i with pivot in the iᵗʰ row, need 0 ≤ b̄_i + δ_r ā_ik:
  for ā_ik > 0, need −b̄_i/ā_ik ≤ δ_r, so max_i { −b̄_i/ā_ik : ā_ik > 0 } ≤ δ_r;
  for ā_ik < 0, need δ_r ≤ −b̄_i/ā_ik, so δ_r ≤ min_i { −b̄_i/ā_ik : ā_ik < 0 }.
Change in the optimal value for δ_r in the allowable range: δ_r c̄_k.
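A minimal sketch of these ratio formulas, applied to the running example: the s₂-column of the final tableau is C_k = (−3/14, 1/7, 9/14) and the constants are b̄ = (60, 28, 36).

```python
import numpy as np

b_bar = np.array([60.0, 28.0, 36.0])        # right-hand constants of final tableau
C_k = np.array([-3.0/14, 1.0/7, 9.0/14])    # s_2-column of final tableau

ratios = -b_bar / C_k                       # -b_i / a_ik for each basic row
lo = max(ratios[C_k > 0])                   # largest lower bound on delta_2
hi = min(ratios[C_k < 0])                   # smallest upper bound on delta_2
print(lo, hi)  # allowable range: -56 <= delta_2 <= 280
```

This reproduces the range −56 ≤ δ₂ ≤ 280 found slide by slide above.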
Change in Slack Constraint Constant
Let s_r be a pivot column in the optimal tableau, i.e., the rᵗʰ resource is slack. To keep the same basic variables, need the changed amount b̄_r + δ_r ≥ 0, i.e., δ_r ≥ −b̄_r.
b_r can be increased by an arbitrary amount. For δ_r in this range, the optimal value is unchanged.
Sensitivity Analysis, continued
   x₁    x₂     s₁       s₂      s₃
  [  0    1     1/7    −3/14     0  |   60 ]
  [  1    0    −1/35    1/7      0  |   28 ]
  [  0    0   −22/35    9/14     1  |   36 ]
  [  0    0     2/7    25/7      0  | 1720 ]
Allowable δ₁ for the first resource:
  δ₁ ≤ min { 28 · 35/1, 36 · 35/22 } = min { 980, 57.27 } ≈ 57.27,
  δ₁ ≥ −min { 60 · 7/1 } = −420.
Change of optimal value: 1720 + (2/7) δ₁.
Allowable δ₃: δ₃ ≥ −36.
Changes in Objective Function Coefficients
For a change from c₁ to c₁ + Δ₁, the changes in the tableaux are:
   x₁      x₂     s₁    s₂    s₃
  [ 15     10     1     0     0  | 1020 ]
  [ 10      2     0     1     0  |  400 ]
  [  3      5     0     0     1  |  420 ]
  [−40−Δ₁ −10     0     0     0  |    0 ]

   x₁    x₂     s₁       s₂      s₃
  [  0    1     1/7    −3/14     0  |   60 ]
  [  1    0    −1/35    1/7      0  |   28 ]
  [  0    0   −22/35    9/14     1  |   36 ]
  [ −Δ₁   0     2/7    25/7      0  | 1720 ]
Adding Δ₁ times the x₁-pivot row to the objective row gives the new objective row
  [  0    0   2/7 − Δ₁/35   25/7 + Δ₁/7   0  | 1720 + 28 Δ₁ ]
Changes in Coefficients, contin.
To keep the objective function row ≥ 0, need
  2/7 − Δ₁/35 ≥ 0 and 25/7 + Δ₁/7 ≥ 0,
  Δ₁ ≤ (2/7) · 35 = 10 and Δ₁ ≥ −25,
so 15 = 40 − 25 ≤ c₁ + Δ₁ ≤ 40 + 10 = 50.
The corresponding optimal value of the objective function is 1720 + 28 Δ₁.
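One can double-check the range 15 ≤ c₁ + Δ₁ ≤ 50 by re-solving the LP for sample coefficients inside the range and verifying that (28, 60) stays optimal with value 1720 + 28 Δ₁. `scipy.optimize.linprog` is an assumed-available library:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[15.0, 10.0], [10.0, 2.0], [3.0, 5.0]])
b = np.array([1020.0, 400.0, 420.0])

vals = {}
for c1 in (20.0, 30.0, 45.0):   # sample coefficients inside [15, 50]
    res = linprog([-c1, -10.0], A_ub=A, b_ub=b,
                  bounds=[(0, None)] * 2, method="highs")
    vals[c1] = (res.x, -res.fun)
print(vals)  # optimizer stays (28, 60); value is 28*c1 + 600
```

At the endpoints c₁ = 15 and c₁ = 50 the optimizer is no longer unique, which is why interior samples are used.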
General Changes in Objective Function Coefficients
Consider a change Δ_k in the coefficient c_k of the objective function, where x_k is a basic variable in the optimal solution and its pivot is in the rᵗʰ row.
  ā_rj denotes the entry in the rᵗʰ row and jᵗʰ column, excluding the right-side constants and any artificial variable columns;
  c̄_j ≥ 0 denotes the entries in the objective row of the optimal tableau.
With the change, the entry in the original tableau's objective row becomes −c_k − Δ_k, and the entry in the optimal tableau changes from 0 to −Δ_k.
To keep x_k basic, need to add Δ_k R̄_r to the objective function row. The entry in the x_k-column is then 0 and the jᵗʰ-column entry is c̄_j + Δ_k ā_rj.
For all j, need c̄_j + Δ_k ā_rj ≥ 0.
Changes in Coefficients, contin.
For c_k + Δ_k of a basic variable with rᵗʰ pivot row, ā_rk = 1 at the pivot.
For ā_rj > 0 in the rᵗʰ pivot row, need −Δ_k ā_rj ≤ c̄_j, or −c̄_j/ā_rj ≤ Δ_k:
  −min_j { c̄_j/ā_rj : ā_rj > 0, j ≠ k } ≤ Δ_k, the maximal decrease of c_k.
For ā_rj < 0 in the rᵗʰ pivot row, need Δ_k ā_rj ≥ −c̄_j, or Δ_k ≤ −c̄_j/ā_rj:
  Δ_k ≤ min_j { −c̄_j/ā_rj : ā_rj < 0 }, the maximal increase of c_k.
If c̄_j = 0 for some ā_rj < 0, then need Δ_k ≤ 0. If c̄_j = 0 for some ā_rj > 0 with j ≠ k, then need Δ_k ≥ 0.
The change in the optimal value is Δ_k b̄_r.
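A sketch of these ratio formulas applied to the running example, where x₁'s pivot row in the final tableau is (1, 0, −1/35, 1/7, 0) and the objective row is (0, 0, 2/7, 25/7, 0):

```python
import numpy as np

row = np.array([1.0, 0.0, -1.0/35, 1.0/7, 0.0])   # pivot row of x1 (k = 0)
cbar = np.array([0.0, 0.0, 2.0/7, 25.0/7, 0.0])   # objective row of final tableau
k = 0

ups = [-cbar[j] / row[j] for j in range(5) if row[j] < 0]             # upper bounds
downs = [cbar[j] / row[j] for j in range(5) if row[j] > 0 and j != k]  # decreases
max_increase = min(ups)            # 10
max_decrease = min(downs)          # 25
print(max_decrease, max_increase)  # Delta_1 in [-25, 10]
```

This recovers the bounds −25 ≤ Δ₁ ≤ 10, i.e., 15 ≤ c₁ + Δ₁ ≤ 50, from the previous slides.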
Sensitivity Analysis for Non-basic Variable
Our example does not have any non-basic variables x_k. If x_k were a non-basic variable in the optimal solution, x_k = 0, then c̄_k − Δ_k ≥ 0 insures that x_k remains a non-basic variable, i.e., Δ_k ≤ c̄_k.
That is, c̄_k ≥ 0 is the minimal increase of c_k needed to make x_k a basic variable making a positive contribution to the optimal solution.
Theory: Convex Combinations
Weighted averages in Rⁿ: For three vectors a₁, a₂, and a₃,
  (a₁ + a₂ + a₃)/3
is the average of each component, the average of these vectors.
  (a₁ + a₂ + a₂ + a₃ + a₃ + a₃)/6 = (a₁ + 2a₂ + 3a₃)/6 = (1/6) a₁ + (2/6) a₂ + (3/6) a₃
is a weighted average of these vectors with weights 1/6, 2/6, and 3/6.
For vectors {a_i}, i = 1, …, k, and numbers t_i ≥ 0 with Σᵏᵢ₌₁ t_i = 1,
  Σᵏᵢ₌₁ t_i a_i
is a weighted average, and is called a convex combination of {a_i}.
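A quick numerical illustration of a convex combination; the vectors are chosen arbitrarily:

```python
import numpy as np

a1 = np.array([6.0, 0.0])
a2 = np.array([0.0, 6.0])
a3 = np.array([6.0, 6.0])

t = np.array([1/6, 2/6, 3/6])           # weights: nonnegative, summing to 1
combo = t[0]*a1 + t[1]*a2 + t[2]*a3     # convex combination of a1, a2, a3
print(t.sum(), combo)                   # 1.0 [4. 5.]
```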
Convex Sets
Definition. A set S ⊂ Rⁿ is convex provided that if x₀ and x₁ are any two points in S, then the convex combination x_t = (1 − t) x₀ + t x₁ is also in S for all 0 ≤ t ≤ 1, i.e., the line segment from x₀ to x₁ lies in S.
[Figure: three convex sets and two non-convex sets.]
Convex Sets, contin.
Each constraint, a_i1 x₁ + ⋯ + a_in x_n ≤ b_i or ≥ b_i, or x_i ≥ 0, defines a closed half-space in Rⁿ.
a_i1 x₁ + ⋯ + a_in x_n = b_i is a hyperplane ((n − 1)-dimensional).
Definition. Any intersection of a finite number of closed half-spaces and possibly some hyperplanes is called a polyhedron.
Convex Sets, contin.
Theorem.
a. Any intersection of convex sets, ∩_j S_j, is convex.
b. A polyhedron is convex. So the feasible set of any LP is convex.
Proof. (a) If x₀, x₁ ∈ ∩_j S_j and 0 ≤ t ≤ 1, then (1 − t) x₀ + t x₁ ∈ S_j for each j, so (1 − t) x₀ + t x₁ ∈ ∩_j S_j.
(b) Each closed half-space and hyperplane is convex, so the intersection is convex.
Convex Sets, contin.
Theorem. If S is a convex set, and p_i ∈ S for 1 ≤ i ≤ k, then any convex combination Σᵏᵢ₌₁ t_i p_i ∈ S.
Proof. Proof is by induction. For k = 2, it follows from the definition of a convex set. Assume true for k − 1 ≥ 2.
If t_k = 1 and t_j = 0 for 1 ≤ j < k, then clear.
If t_k < 1, then Σᵏ⁻¹ᵢ₌₁ t_i = 1 − t_k > 0 and Σᵏ⁻¹ᵢ₌₁ t_i/(1 − t_k) = 1, so
  Σᵏ⁻¹ᵢ₌₁ (t_i/(1 − t_k)) p_i ∈ S by the induction hypothesis.
So Σᵏᵢ₌₁ t_i p_i = (1 − t_k) Σᵏ⁻¹ᵢ₌₁ (t_i/(1 − t_k)) p_i + t_k p_k ∈ S.
Extreme Points and Vertices
Definition. A point p in a nonempty convex set S is called an extreme point provided that if p = (1 − t) x₀ + t x₁ with x₀, x₁ in S and 0 < t < 1, then p = x₀ = x₁.
An extreme point in a polyhedron is called a vertex.
An extreme point of S must be a boundary point of S.
The disk D = { x ∈ R² : ‖x‖ ≤ 1 } is convex. Each point on its boundary is an extreme point.
For the next few theorems, consider the feasible set F = { x ∈ R_+^{n+m} : Ax = b }, with slack and surplus variables included in the x's and constraints.
Vertices
Theorem. Let F = { x ∈ R_+^{n+m} : Ax = b }. Then x ∈ F is a vertex of F if and only if x is a basic feasible solution, i.e., the columns of A with x_j > 0 form a linearly independent set of vectors.
Proof: By reindexing the columns and variables, can assume that x₁ > 0, …, x_r > 0 and x_{r+1} = ⋯ = x_{n+m} = 0.
(a) Assume { Aʲ }, j = 1, …, r, is linearly dependent: there is (β₁, …, β_r) ≠ 0 with β₁ A¹ + ⋯ + β_r Aʳ = 0. For β = (β₁, …, β_r, 0, …, 0)ᵀ, Aβ = 0.
For small λ > 0, w₁ = x + λβ ≥ 0 and w₂ = x − λβ ≥ 0, with A w_i = A x = b.
So w₁, w₂ ∈ F and x = ½ w₁ + ½ w₂, so x is not a vertex.
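The criterion can be checked for the optimal vertex (28, 60) of the sensitivity example: with slacks, x = (28, 60, 0, 0, 36) ∈ F for A_full = [A I], and the columns for the positive components must be linearly independent. This numerical check is an illustration, not part of the text:

```python
import numpy as np

A = np.array([[15.0, 10.0], [10.0, 2.0], [3.0, 5.0]])
A_full = np.hstack([A, np.eye(3)])          # columns for x1, x2, s1, s2, s3
x = np.array([28.0, 60.0, 0.0, 0.0, 36.0])  # optimal vertex with slack values
b = np.array([1020.0, 400.0, 420.0])

assert np.allclose(A_full @ x, b)           # feasible: A_full x = b, x >= 0
cols = A_full[:, x > 0]                     # columns with x_j > 0
rank = np.linalg.matrix_rank(cols)
print(rank, cols.shape[1])  # rank 3 equals the number of columns: independent
```

Since the three columns are linearly independent, the theorem confirms that this basic feasible solution is a vertex of F.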
Proof
(b) Conversely, assume that x ∈ F is not a vertex: x = t y + (1 − t) z for some 0 < t < 1, with y ≠ z in F.
For r < j, 0 = x_j = t y_j + (1 − t) z_j. Since both y_j ≥ 0 and z_j ≥ 0, both must be zero for j > r.
Because y ≠ z are both in F,
  b = A y = y₁ A¹ + ⋯ + y_r Aʳ,
  b = A z = z₁ A¹ + ⋯ + z_r Aʳ,
  0 = (y₁ − z₁) A¹ + ⋯ + (y_r − z_r) Aʳ,
and the columns { Aʲ }, j = 1, …, r, are linearly dependent. QED
Some Vertex is an Optimal Solution
For any convex combination, f(Σ_j t_j x_j) = c·Σ_j t_j x_j = Σ_j t_j c·x_j = Σ_j t_j f(x_j).
Theorem. Assume that F = { x ∈ R_+^{n+m} : Ax = b } for a bounded MLP. Then the following hold.
a. If x⁰ ∈ F, then there exists a basic feasible x_b ∈ F s.t. f(x_b) = c·x_b ≥ c·x⁰ = f(x⁰).
b. There is at least one optimal basic solution.
c. If two or more basic solutions are optimal, then any convex combination of them is also an optimal solution.
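Part (c) can be illustrated with the boundary case c₁ = 50 from the sensitivity discussion, where both vertices (40, 0) and (28, 60) attain the optimal value 2000; any convex combination of them is then feasible and also optimal. A numerical illustration under that assumption:

```python
import numpy as np

A = np.array([[15.0, 10.0], [10.0, 2.0], [3.0, 5.0]])
b = np.array([1020.0, 400.0, 420.0])
c = np.array([50.0, 10.0])            # boundary case c1 = 50

p1 = np.array([40.0, 0.0])            # two optimal vertices, both with value 2000
p2 = np.array([28.0, 60.0])

t = 0.25                              # any 0 <= t <= 1 works
p = (1 - t) * p1 + t * p2             # convex combination (37, 15)
feasible = bool(np.all(A @ p <= b + 1e-9) and np.all(p >= 0))
value = float(c @ p)
print(feasible, value)  # True 2000.0
```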
Proof
a. If x⁰ is already a basic feasible solution, then done. Otherwise, the columns Aʲ with x⁰_j > 0 are linearly dependent. Let A′ be the matrix with only these columns. There is y′ ≠ 0 such that A′ y′ = 0. Adding 0's in the other entries, get y ≠ 0 s.t. A y = 0. Since A(−y) = 0 as well, can assume that c·y ≥ 0.
A[x⁰ + t y] = A x⁰ = b. If y_i ≠ 0, then x⁰_i > 0, so x⁰ + t y ≥ 0 for small t is in F.
Proof a, contin.
Case 1. Assume that c·y > 0 and some component y_i < 0. Then x⁰_i > 0 and x⁰_i + t y_i = 0 for t_i = −x⁰_i / y_i > 0.
As t increases from 0 to t_i, the objective function increases from c·x⁰ to c·[x⁰ + t_i y]. If more than one y_i < 0, then select the one with the smallest t_i.
Have constructed a point in F with one more zero component than x⁰, fewer components with y_i < 0, and a greater value of the objective function. Can continue until either the columns are linearly independent or all y_i ≥ 0.
Proof a, continued
Case 2. If c·y > 0 and y ≥ 0, then x⁰ + t y ∈ F for all t > 0, and c·[x⁰ + t y] = c·x⁰ + t c·y is arbitrarily large. Then MLP is unbounded and has no maximum, a contradiction.
Case 3. If c·y = 0: f(x⁰ + t y) = c·x⁰ is unchanged. Some y_i ≠ 0; considering y and −y, can assume some y_i < 0. Take the first t_i > 0 to make another x⁰_i + t_i y_i = 0.
Eventually, the corresponding columns become linearly independent, giving a basic solution as claimed in part (a).
Proof b,c
(b) There are only finitely many basic feasible solutions, {p_j}, j = 1, …, N. f(x) ≤ max_{1≤j≤N} f(p_j) for x ∈ F by part (a), so the maximum can be found among the f(p_j).
(c) Assume f(p_{j_i}) = M = max{ f(x) : x ∈ F } for i = 1, …, ℓ with ℓ > 1. Then Σℓᵢ₌₁ t_{j_i} p_{j_i} ∈ F and
  f(Σℓᵢ₌₁ t_{j_i} p_{j_i}) = Σℓᵢ₌₁ t_{j_i} f(p_{j_i}) = Σℓᵢ₌₁ t_{j_i} M = M,
so the convex combination is optimal. QED
Validation of Simplex Method
If there are degenerate basic feasible solutions with fewer than m positive basic variables, then the simplex method can cycle: row reduction can lead to matrices with the same positive basic variables but different sets of pivots, interchanging a zero basic variable with a zero free variable. This gives the same vertex of the feasible set.
Need to insure that the method doesn't repeat the same set of basic variables at a vertex (cycle).
Theorem. If a maximum solution exists for a linear programming problem and the simplex algorithm does not cycle among degenerate basic feasible solutions, then the simplex algorithm locates a maximum solution in finitely many steps.