Chapter 1: Linear Programming
Math 368, © Copyright 2013 R. Clark Robinson, May 22, 2013
Max and Min

For f : D ⊆ R^n → R, f(D) = { f(x) : x ∈ D } is the set of attainable values of f on D, or the image of D by f.

f has a maximum on D at x_M ∈ D provided that f(x_M) ≥ f(x) for all x ∈ D.
max{ f(x) : x ∈ D } = f(x_M) is the maximum value of f on D.
x_M is called a maximizer of f on D.
arg max{ f(x) : x ∈ D } = { x ∈ D : f(x) = f(x_M) }.

f has a minimum on D at x_m ∈ D provided that f(x_m) ≤ f(x) for all x ∈ D.
min{ f(x) : x ∈ D } = f(x_m) is the minimum value of f on D.
arg min{ f(x) : x ∈ D } = { x ∈ D : f(x) = f(x_m) } is the set of minimizers.

f has an extremum at x_0 provided that x_0 is either a maximizer or a minimizer.
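On a finite domain these definitions can be evaluated by direct enumeration. A small Python sketch (the helper name arg_max is mine; the quadratic is just an illustrative choice):

```python
def arg_max(f, D):
    """Maximum value and set of maximizers of f on a finite domain D."""
    M = max(f(x) for x in D)
    return M, {x for x in D if f(x) == M}

# f(x) = -(x - 2)^2 on the finite domain {0, 1, 2, 3, 4}:
# the maximum value is 0, attained only at x = 2.
M, maximizers = arg_max(lambda x: -(x - 2) ** 2, range(5))
```

On an infinite domain no such enumeration is possible, and a maximum need not even exist, which is the point of the x^3 example on the next slide.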
Basic Optimization Problem

There is no maximizer or minimizer of f(x) = x^3 on (0, 1):
arg max{ x^3 : 0 < x < 1 } = ∅ and arg min{ x^3 : 0 < x < 1 } = ∅.

Optimization Problem: Does f(x) attain a maximum (or minimum) for some x ∈ D? If so, what is the maximum value (or minimum value) on D, and at what points does f(x) attain a maximum (or minimum) subject to x ∈ D?
Notations

v ≥ w in R^n means v_i ≥ w_i for 1 ≤ i ≤ n.
v ≫ w in R^n means v_i > w_i for 1 ≤ i ≤ n.
R^n_+ = { x ∈ R^n : x ≥ 0 } = { x ∈ R^n : x_i ≥ 0 for 1 ≤ i ≤ n }.
R^n_++ = { x ∈ R^n : x ≫ 0 } = { x ∈ R^n : x_i > 0 for 1 ≤ i ≤ n }.
Linear Programming: Wheat-Corn Example

Up to 100 acres of land can be used to grow wheat and/or corn, with x_1 acres used to grow wheat and x_2 acres used to grow corn: x_1 + x_2 ≤ 100.
Cost or capital constraint: 5x_1 + 10x_2 ≤ 800.
Labor constraint: 2x_1 + x_2 ≤ 150.
Profit (the objective function): f(x_1, x_2) = 80x_1 + 60x_2.

Problem:
Maximize: 80x_1 + 60x_2 (profit),
Subject to: x_1 + x_2 ≤ 100 (land),
5x_1 + 10x_2 ≤ 800 (capital),
2x_1 + x_2 ≤ 150 (labor),
x_1 ≥ 0, x_2 ≥ 0.

All the constraints and the objective function are linear: a linear programming problem.
Example, continued

The feasible set F is the set of all points satisfying all the constraints:
x_1 + x_2 ≤ 100 (land), 5x_1 + 10x_2 ≤ 800 (capital), 2x_1 + x_2 ≤ 150 (labor), x_1 ≥ 0, x_2 ≥ 0.

[Figure: the feasible region, with the vertices (0, 0), (40, 60), and (50, 50) marked.]

Vertices of the feasible set: (0, 0), (75, 0), (50, 50), (40, 60), (0, 80).
Other points where two constraints are tight, such as (140/3, 170/3) and (100, 0), lie outside the feasible set: 140/3 + 170/3 = 310/3 > 100, and 2(100) > 150.
Example, continued

Since ∇f = (80, 60) ≠ (0, 0), the maximum must be on the boundary of F.
Along an edge, f(x_1, x_2) is a linear combination of its values at the end points. If a maximizer were in the middle of an edge, then f would have the same value at the two end points of this edge. So a maximizer can be found at one of the vertices.

Values at the vertices: f(0, 0) = 0, f(75, 0) = 6000, f(50, 50) = 7000, f(40, 60) = 6800, f(0, 80) = 4800.
The maximum value of f is 7000, with maximizer (50, 50). End of Example
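The vertex evaluation above is easy to check directly; a small Python sketch:

```python
# Evaluate f(x1, x2) = 80 x1 + 60 x2 at the vertices of the feasible set.
vertices = [(0, 0), (75, 0), (50, 50), (40, 60), (0, 80)]

def f(v):
    return 80 * v[0] + 60 * v[1]

values = {v: f(v) for v in vertices}
best = max(vertices, key=f)   # (50, 50), with f(best) = 7000
```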
Standard Max Linear Programming Problem (MLP)

Maximize the objective function: f(x) = c·x = c_1 x_1 + ... + c_n x_n,
Subject to the resource constraints:
a_11 x_1 + ... + a_1n x_n ≤ b_1
...
a_m1 x_1 + ... + a_mn x_n ≤ b_m
x_j ≥ 0 for 1 ≤ j ≤ n.

Given data: c = (c_1, ..., c_n), the m × n matrix A = (a_ij), and b = (b_1, ..., b_m) with all b_i ≥ 0.
In matrix notation the constraints are Ax ≤ b and x ≥ 0.
Feasible set: F = { x ∈ R^n_+ : Ax ≤ b }.
Minimization Example

Minimize: 3x_1 + 2x_2,
Subject to: 2x_1 + x_2 ≥ 4, x_1 + x_2 ≥ 3, x_1 + 2x_2 ≥ 4, x_1 ≥ 0, and x_2 ≥ 0.

[Figure: the unbounded feasible region, with the vertices (0, 4), (1, 2), (2, 1), (4, 0) marked.]

F is unbounded, but f(x) ≥ 0 is bounded below, so f(x) has a minimum.
Vertices: (4, 0), (2, 1), (1, 2), (0, 4).
Values: f(4, 0) = 12, f(2, 1) = 8, f(1, 2) = 7, f(0, 4) = 8.
min{ f(x) : x ∈ F } = 7 and arg min{ f(x) : x ∈ F } = {(1, 2)}.
Geometric Method of Solving a Linear Programming Problem

1. Determine or draw the feasible set F. If F = ∅, then the problem has no optimal solution; the problem is called infeasible.
2. The problem is called unbounded, and has no solution, provided that the objective function on F has
a. arbitrarily large positive values for a maximization problem, or
b. arbitrarily large negative values for a minimization problem.
3. A problem is called bounded provided that it is neither infeasible nor unbounded; an optimal solution exists. Determine all the vertices of F and the values of the objective function at the vertices. Choose the vertex of F producing the maximum or minimum value of the objective function.
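For two variables, step 3 can be automated: a vertex is a feasible intersection of two constraint-boundary lines. A minimal sketch (my own helper, stated for a bounded maximization problem with a nonempty feasible set):

```python
from itertools import combinations

def solve_2d_max(c, constraints):
    """Geometric method for a two-variable LP: maximize c.x subject to
    a.x <= b for each (a, b) in constraints.  Include x1 >= 0 and
    x2 >= 0 as the rows ((-1, 0), 0) and ((0, -1), 0).  Intersects every
    pair of boundary lines, keeps the feasible intersections (the
    vertices), and returns the best vertex."""
    eps = 1e-9
    vertices = []
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                      # parallel boundary lines
        p = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)   # Cramer's rule
        if all(a[0] * p[0] + a[1] * p[1] <= b + eps
               for a, b in constraints):
            vertices.append(p)
    return max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])

# Wheat-corn problem: the best vertex is (50, 50), with value 7000.
best = solve_2d_max((80, 60), [((1, 1), 100), ((5, 10), 800),
                               ((2, 1), 150), ((-1, 0), 0), ((0, -1), 0)])
```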
Rank of a Matrix

The rank of a matrix A is the dimension of the column space of A, i.e., the largest number of linearly independent columns of A. It is the same as the number of pivots (in the row reduced echelon form of A).

rank(A) = k iff k ≥ 0 is the largest integer such that det(A_k) ≠ 0, where A_k is some k × k submatrix of A formed by selecting k columns and k rows. A_k can be taken to be the submatrix of pivot columns and rows.

Sketch: Let A′ be the submatrix of k linearly independent columns that span the column space; rank(A′) = k. Then dim(row space of A′) = k, so select k rows of A′ to get a k × k submatrix A_k with rank(A_k) = k, so det(A_k) ≠ 0.
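Rank as the number of pivots can be sketched in a few lines of Python; exact Fraction arithmetic avoids the pitfalls of floating-point pivot tests (the function is my own sketch, not part of the notes):

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix = number of pivots found by row reduction,
    using exact rational arithmetic."""
    A = [[Fraction(a) for a in row] for row in A]
    r = 0                               # number of pivots found so far
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue                    # no pivot in this column
        A[r], A[piv] = A[piv], A[r]     # move pivot row up
        A[r] = [a / A[r][c] for a in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * t for a, t in zip(A[i], A[r])]
        r += 1
    return r
```

For the wheat-corn matrix [A, I] introduced with the slack variables below, the rank is 3, as claimed there.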
Slack Variables

For a large number of variables, a practical algorithm is needed. The simplex method uses row reduction as its solution method.
First step: make all the inequalities of the type x_i ≥ 0.
An inequality of the form a_i1 x_1 + ... + a_in x_n ≤ b_i for b_i ≥ 0 is called a resource constraint.
For a resource constraint, introduce a slack variable s_i:
a_i1 x_1 + ... + a_in x_n + s_i = b_i with s_i ≥ 0.
The slack variable s_i represents unused resource. The introduction of a slack variable changes a resource constraint into an equality constraint together with s_i ≥ 0.
Introducing Slack Variables into the Wheat-Corn Example

x_1 + x_2 + s_1 = 100,
5x_1 + 10x_2 + s_2 = 800,
2x_1 + x_2 + s_3 = 150,
s_1, s_2, s_3 ≥ 0.

In matrix form,
Maximize: (80, 60, 0, 0, 0)·(x_1, x_2, s_1, s_2, s_3) = 80x_1 + 60x_2,
Subject to:

   [ 1  1  1 0 0 ]                       [ 100 ]
   [ 5 10  0 1 0 ] (x_1,x_2,s_1,s_2,s_3) = [ 800 ]
   [ 2  1  0 0 1 ]                       [ 150 ]

and x_1 ≥ 0, x_2 ≥ 0, s_1 ≥ 0, s_2 ≥ 0, s_3 ≥ 0.

Because of the 1's and 0's in the last 3 columns of the matrix, the rank is 3.
Initial feasible solution: x_1 = 0 = x_2, s_1 = 100, s_2 = 800, s_3 = 150.
Wheat-Corn Example, continued

   [ 1  1  1 0 0 | 100 ]
   [ 5 10  0 1 0 | 800 ]   with x_i ≥ 0 and s_i ≥ 0.
   [ 2  1  0 0 1 | 150 ]

Initial feasible solution: p_0 = (0, 0, 100, 800, 150) with f(p_0) = 0.
The rank is 3, so there are 5 - 3 = 2 free variables: x_1 and x_2.
p_0 is obtained by setting the free variables x_1 = x_2 = 0 and solving for the dependent variables, which are the 3 slack variables.
Pivot to make a different pair of variables the free variables equal to zero, and a different triple of variables positive.
Wheat-Corn Example, continued

[Figure: the feasible set with the edge from p_0 to p_1 marked; the tight constraints are labeled s_1, s_2, s_3.]

If we leave the vertex (x_1, x_2) = (0, 0), i.e. p_0 = (0, 0, 100, 800, 150), making x_1 > 0 the entering variable while keeping x_2 = 0, the first slack variable to become zero is s_3, when x_1 = 75.
We arrive at the vertex (x_1, x_2) = (75, 0), i.e. p_1 = (75, 0, 25, 425, 0).
The new solution has two zero variables and three positive variables. We have moved along one edge from p_0 to p_1.
f(p_1) = 80(75) = 6000 > 0 = f(p_0), so p_1 is a better feasible solution than p_0.
Wheat-Corn Example, continued

[Figure: the feasible set with the vertices p_0, p_1, p_2, p_3 marked.]

p_1 = (75, 0, 25, 425, 0). Repeat, leaving p_1 by making x_2 > 0 the entering variable while keeping s_3 = 0. The first other variable to become zero is s_1. We arrive at p_2 = (50, 50, 0, 50, 0).
f(p_2) = 80(50) + 60(50) = 7000 > 6000 = f(p_1), so p_2 is a better feasible solution than p_1.
We have moved along another edge of the feasible set from (x_1, x_2) = (75, 0) and arrived at (x_1, x_2) = (50, 50).
Wheat-Corn Example, continued

If we leave p_2 by making s_3 > 0 the entering variable while keeping s_1 = 0, the first variable to become zero is s_2, and we arrive at p_3 = (40, 60, 0, 0, 10).
f(p_3) = 80(40) + 60(60) = 6800 < 7000 = f(p_2), so p_3 is a worse feasible solution than p_2.

Let z ∈ F ∖ {p_2}. Set v = z - p_2 and v_j = p_j - p_2 for j = 1, 3.
f(v_1) = f(p_1) - f(p_2) < 0 and f(v_3) = f(p_3) - f(p_2) < 0.
v_1 and v_3 are a basis of R^2, so v = y_1 v_1 + y_3 v_3. Since v ≠ 0 points into F, (i) y_1, y_3 ≥ 0 and (ii) y_1 > 0 or y_3 > 0.
f(z) = f(p_2) + f(v) = f(p_2) + y_1 f(v_1) + y_3 f(v_3) < f(p_2).
Since f cannot be increased by moving along either edge going out from p_2, p_2 is an optimal feasible solution.
Wheat-Corn Example, continued

In this example,
p_0 = (0, 0, 100, 800, 150), p_1 = (75, 0, 25, 425, 0), p_2 = (50, 50, 0, 50, 0), p_3 = (40, 60, 0, 0, 10), p_4 = (0, 80, 20, 0, 70)
are called basic solutions, since at most 3 variables are positive, where 3 is the rank (and the number of constraints). End of Example
Slack Variables Added to a Standard Linear Programming Problem, MLP

All constraints are resource constraints. Given b = (b_1, ..., b_m) ∈ R^m_+, c = (c_1, ..., c_n) ∈ R^n, and an m × n matrix A = (a_ij), find x ∈ R^n_+ and s ∈ R^m_+ that
maximize: f(x) = c·x
subject to:
a_11 x_1 + ... + a_1n x_n + s_1 = b_1
...
a_m1 x_1 + ... + a_mn x_n + s_m = b_m
x_i ≥ 0, s_j ≥ 0 for 1 ≤ i ≤ n, 1 ≤ j ≤ m.

Using matrix notation, with I the m × m identity matrix,
maximize f(x) subject to [A, I] (x, s)ᵀ = b with x ≥ 0, s ≥ 0.
Indicate the partition of the augmented matrix by extra vertical lines: [A | I | b].
Basic Solutions

Assume Ā is an m × (n + m) matrix with rank m (like Ā = [A, I]).
The m dependent variables are called basic variables (the variables of the pivot columns).
The n free variables are called non-basic variables (the variables of the non-pivot columns).

A basic solution is a solution p satisfying Āp = b such that the columns corresponding to the p_i ≠ 0 are linearly independent.
If p is also feasible, with p ≥ 0, then it is called a basic feasible solution.
One is obtained by setting the n free variables equal to 0 and solving to get basic variables ≥ 0, allowing possibly some basic variables equal to 0.
Linear Algebra Solution of the Wheat-Corn Problem

The augmented matrix for the original wheat-corn problem is

     x_1  x_2  s_1  s_2  s_3 |
   [  1    1    1    0    0  | 100 ]
   [  5   10    0    1    0  | 800 ]
   [  2    1    0    0    1  | 150 ]

with free variables x_1 and x_2 and basic variables s_1, s_2, and s_3.
If we make x_1 > 0 while keeping x_2 = 0, then x_1 becomes a new basic variable (for a new pivot column), called the entering variable.
(i) s_1 will become zero when x_1 = 100/1 = 100,
(ii) s_2 will become zero when x_1 = 800/5 = 160, and
(iii) s_3 will become zero when x_1 = 150/2 = 75.
Since s_3 becomes zero for the smallest value of x_1, s_3 is the departing variable, and the new pivot is in the 1st column (for x_1) and 3rd row (the old pivot row for s_3).
First Pivot

Row reducing to make a pivot in the first column, third row:

     x_1  x_2   s_1  s_2  s_3  |
   [  0   1/2    1    0  -1/2  |  25 ]
   [  0  15/2    0    1  -5/2  | 425 ]
   [  1   1/2    0    0   1/2  |  75 ]

Setting the free variables x_2 = s_3 = 0 gives the new basic solution p_1 = (75, 0, 25, 425, 0). The entries in the right (augmented) column give the values of the new basic variables, which are > 0.
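The pivot step is ordinary row reduction; a short Python sketch reproducing this first pivot exactly (the helper name is mine):

```python
from fractions import Fraction

def pivot(T, r, c):
    """Row reduce the tableau T to make (r, c) a pivot: scale row r so
    the pivot entry is 1, then clear column c in every other row."""
    T = [row[:] for row in T]
    T[r] = [a / T[r][c] for a in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c] != 0:
            T[i] = [a - T[i][c] * t for a, t in zip(T[i], T[r])]
    return T

# Wheat-corn augmented matrix [A | I | b]; pivot on column x_1, row 3.
T = [[Fraction(v) for v in row] for row in
     [[1, 1, 1, 0, 0, 100], [5, 10, 0, 1, 0, 800], [2, 1, 0, 0, 1, 150]]]
T1 = pivot(T, 2, 0)
# Setting x_2 = s_3 = 0 reads off p_1 = (75, 0, 25, 425, 0).
```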
Including the Objective Function in the Matrix

The objective function (or variable) is f = 80x_1 + 60x_2, or -80x_1 - 60x_2 + f = 0.
Adding a row for this equation and a column for the variable f keeps track of the value of f during row reduction.

     x_1  x_2  s_1  s_2  s_3   f |
   [   1    1    1    0    0   0 | 100 ]
   [   5   10    0    1    0   0 | 800 ]
   [   2    1    0    0    1   0 | 150 ]
   [ -80  -60    0    0    0   1 |   0 ]

This matrix including the objective function row is called the tableau. The entries in the column for f are 1 in the objective function row and 0 elsewhere. In the objective function row of this tableau, the entries for the x_i are negative.
First Pivot, continued

Row reducing the tableau by making the first column and third row a new pivot:

     x_1  x_2   s_1  s_2  s_3   f |
   [  0   1/2    1    0  -1/2   0 |   25 ]
   [  0  15/2    0    1  -5/2   0 |  425 ]
   [  1   1/2    0    0   1/2   0 |   75 ]
   [  0  -20     0    0   40    1 | 6000 ]

For x_2 = s_3 = 0: x_1 = 75 > 0, s_1 = 25 > 0, s_2 = 425 > 0.
The bottom right entry 6000 is the new value of f.
Second Pivot

With x_2 and s_3 the free (non-basic) variables, pivoting back to make s_3 > 0 would make the value of f smaller, so select x_2 as the next entering variable, keeping s_3 = 0.
(i) s_1 becomes zero when x_2 = 25/(1/2) = 50,
(ii) s_2 becomes zero when x_2 = 425/(15/2) = 170/3 ≈ 56.7, and
(iii) x_1 becomes zero when x_2 = 75/(1/2) = 150.
Since the smallest positive value of x_2 comes from s_1, s_1 is the departing variable, and we pivot on the 1st row, 2nd column.
Second Pivot, continued

Pivoting on the 1st row, 2nd column:

     x_1  x_2  s_1  s_2  s_3   f |
   [  0    1    2    0   -1    0 |   50 ]
   [  0    0  -15    1    5    0 |   50 ]
   [  1    0   -1    0    1    0 |   50 ]
   [  0    0   40    0   20    1 | 7000 ]

The entries in the column for f don't change. The value of the objective function is f = 7000.
Third Pivot

Why does the objective function decrease when moving along the edge making s_3 > 0 an entering variable, keeping s_1 = 0?
(i) x_1 becomes zero when s_3 = 50/1 = 50,
(ii) x_2 cannot become zero, since its row has coefficient -1 for s_3, and
(iii) s_2 becomes zero when s_3 = 50/5 = 10.
The smallest positive value of s_3 comes from s_2, so pivot on the 2nd row, 5th column:

     x_1  x_2  s_1   s_2  s_3   f |
   [  0    1   -1    1/5   0    0 |   60 ]
   [  0    0   -3    1/5   1    0 |   10 ]
   [  1    0    2   -1/5   0    0 |   40 ]
   [  0    0  100   -4     0    1 | 6800 ]

The value of the objective function decreases since, before the pivot, the entry in the column for s_3 in the objective function row is already positive.
Tableau

Drop the column for the variable f from the augmented matrix, since it does not play a role in the row reduction (its entry in the last row stays = 1 and the others stay = 0).
The augmented matrix with the objective function row but without the column for the objective function variable is called the tableau.
Steps in the Simplex Method for a Standard MLP

1. Set up the tableau so that all b_i ≥ 0. An initial feasible basic solution is determined by setting x_i = 0 and solving for the s_i.
2. Choose as the entering variable any free variable with a negative entry in the objective function row (often the most negative).
3. In the column selected in the previous step, select the row for which the ratio of the entry in the augmented column divided by the entry in the selected column is the smallest value ≥ 0; the departing variable is the basic variable for this row. Row reduce the matrix using the selected new pivot position.
4. The objective function has no upper bound, and there is no optimal solution, when one column has only nonpositive coefficients above a negative coefficient in the objective function row.
5. The solution is optimal when all the entries in the objective function row are nonnegative.
6. If the optimal tableau has a zero entry in the objective row for a nonbasic variable and all the basic variables are positive, then the solution is nonunique.
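The steps above can be sketched in code. The following is a minimal exact-arithmetic implementation for the standard MLP (my own sketch: most-negative entering rule, smallest-index tie-breaking, no anti-cycling safeguard), checked against the wheat-corn example:

```python
from fractions import Fraction

def simplex(c, A, b):
    """Simplex method for a standard MLP: maximize c.x subject to
    A x <= b, x >= 0, with all b_i >= 0.  Returns (max value, x)."""
    m, n = len(A), len(c)
    # Step 1: tableau [A | I | b] with objective row [-c | 0 | 0].
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(k == i)) for k in range(m)]
         + [Fraction(b[i])]
         for i in range(m)]
    T.append([-Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))        # slack variables start basic
    while True:
        # Step 2: entering column = most negative objective-row entry.
        col = min(range(n + m), key=lambda j: T[m][j])
        if T[m][col] >= 0:
            break                        # step 5: optimal
        rows = [i for i in range(m) if T[i][col] > 0]
        if not rows:
            raise ValueError("unbounded problem")   # step 4
        # Step 3: ratio test picks the departing row, then row reduce.
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        p = T[row][col]
        T[row] = [a / p for a in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                T[i] = [a - T[i][col] * t for a, t in zip(T[i], T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return T[m][-1], x

value, x = simplex([80, 60], [[1, 1], [5, 10], [2, 1]], [100, 800, 150])
# Wheat-corn problem: value 7000 at x = (50, 50).
```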
General Constraints: Requirement Constraints

Make all b_i ≥ 0 (by multiplying inequalities by -1 if necessary).
A requirement constraint is given by a_i1 x_1 + ... + a_in x_n ≥ b_i (these occur especially for a min problem). It requires having at least a minimum amount of a quantity.
There can be a surplus of the quantity, so subtract off a surplus variable to get an equality:
a_i1 x_1 + ... + a_in x_n - s_i = b_i with s_i ≥ 0.
To solve the equation initially, also add an artificial variable r_i ≥ 0:
a_i1 x_1 + ... + a_in x_n - s_i + r_i = b_i.
(Walker uses a_i, but we use r_i to distinguish from the matrix entries a_ij.)
The initial solution sets the artificial variable r_i = b_i while s_i = 0 = x_j for all j.
Equality Constraints

For an equality constraint a_i1 x_1 + ... + a_in x_n = b_i, add one artificial variable:
a_i1 x_1 + ... + a_in x_n + r_i = b_i,
with an initial solution r_i = b_i ≥ 0 while the x_j = 0.
For general constraints (with either requirement or equality constraints), the initial solution has all x_i = 0 and all surplus variables zero, while the slack variables and artificial variables are ≥ 0.
This solution is not feasible if any artificial variable is positive for a requirement constraint or an equality constraint.
Minimization Example

Assume two foods are consumed in amounts x_1 and x_2, with costs per unit of 15 and 7 respectively, and yielding (5, 3, 5) and (2, 2, 1) units of three vitamins respectively. The problem is to minimize the cost 15x_1 + 7x_2, or
Maximize: -15x_1 - 7x_2
Subject to: 5x_1 + 2x_2 ≥ 60, 3x_1 + 2x_2 ≥ 40, 5x_1 + x_2 ≥ 35, x_1 ≥ 0, x_2 ≥ 0.
Since x = 0 is not a feasible solution, the initial solution involves the artificial variables.
Minimization Example, continued

The tableau is

     x_1  x_2  s_1  s_2  s_3  r_1  r_2  r_3 |
   [  5    2   -1    0    0    1    0    0  | 60 ]
   [  3    2    0   -1    0    0    1    0  | 40 ]
   [  5    1    0    0   -1    0    0    1  | 35 ]
   [ 15    7    0    0    0    0    0    0  |  0 ]

To eliminate the artificial variables, preliminary steps force all the artificial variables to be zero. Use the artificial objective function that is the negative of the sum of the equations that contain artificial variables:
-13x_1 - 5x_2 + s_1 + s_2 + s_3 + (-r_1 - r_2 - r_3) = -135, or
-13x_1 - 5x_2 + s_1 + s_2 + s_3 + R = -135,
where R = -r_1 - r_2 - r_3 ≤ 0 is a new variable, with a maximum value of 0.
Minimization Example, continued

The tableau with the artificial objective function included (but not a column for the variable R) is

     x_1  x_2  s_1  s_2  s_3  r_1  r_2  r_3 |
   [  5    2   -1    0    0    1    0    0  |   60 ]
   [  3    2    0   -1    0    0    1    0  |   40 ]
   [  5    1    0    0   -1    0    0    1  |   35 ]
   [ 15    7    0    0    0    0    0    0  |    0 ]
   [-13   -5    1    1    1    0    0    0  | -135 ]

If the artificial objective function can be made equal to zero, then this gives an initial feasible basic solution with r_i = 0 and only original and slack variables positive. Then the artificial variables can be dropped, and we proceed as before.
Minimization Example, continued

Since -13 < -5, x_1 is the entering variable. The ratios are 60/5 = 12, 40/3 ≈ 13.3, and 35/5 = 7; since 7 < 12 < 40/3, we pivot on a_31 (making it into a pivot):

     x_1   x_2   s_1  s_2   s_3  r_1  r_2   r_3  |
   [  0     1    -1    0     1    1    0    -1   |   25 ]
   [  0    7/5    0   -1    3/5   0    1   -3/5  |   19 ]
   [  1    1/5    0    0   -1/5   0    0    1/5  |    7 ]
   [  0     4     0    0     3    0    0    -3   | -105 ]
   [  0  -12/5    1    1   -8/5   0    0   13/5  |  -44 ]
Minimization Example, continued

Pivoting on a_22 (entering x_2; the ratios satisfy 95/7 < 25 < 35):

     x_1  x_2  s_1   s_2    s_3   r_1   r_2    r_3  |
   [  0    0   -1    5/7    4/7    1   -5/7   -4/7  |    80/7 ]
   [  0    1    0   -5/7    3/7    0    5/7   -3/7  |    95/7 ]
   [  1    0    0    1/7   -2/7    0   -1/7    2/7  |    30/7 ]
   [  0    0    0   20/7    9/7    0  -20/7   -9/7  | -1115/7 ]
   [  0    0    1   -5/7   -4/7    0   12/7   11/7  |   -80/7 ]
Minimization Example, continued

Pivoting on a_14 (entering s_2; the ratios are 16 < 30) yields an initial feasible basic solution:

     x_1  x_2   s_1   s_2   s_3   r_1   r_2   r_3 |
   [  0    0   -7/5    1    4/5   7/5   -1   -4/5 |   16 ]
   [  0    1    -1     0     1     1     0    -1  |   25 ]
   [  1    0    1/5    0   -2/5  -1/5    0    2/5 |    2 ]
   [  0    0     4     0    -1    -4     0     1  | -205 ]
   [  0    0     0     0     0     1     1     1  |    0 ]
Minimization Example, continued

After these steps, (2, 25, 0, 16, 0) is an initial feasible basic solution, and the artificial variables can be dropped. Pivoting on a_15 (entering s_3; the ratios are 20 < 25) yields the final solution:

     x_1  x_2  s_1   s_2   s_3 |
   [  0    0  -7/4   5/4    1  |   20 ]
   [  0    1   3/4  -5/4    0  |    5 ]
   [  1    0  -1/2   1/2    0  |   10 ]
   [  0    0   9/4   5/4    0  | -185 ]

All the entries in the objective function row are nonnegative, so the maximal solution is (10, 5, 0, 0, 20) with a value of -185. For the original problem, the minimal solution has a value of 185. End of Example
Steps in the Simplex Method with General Constraints

1. Make all b_i ≥ 0 in the constraints by multiplying by -1 if necessary.
2. Add a slack variable for each resource inequality, add a surplus variable and an artificial variable for each requirement constraint, and add an artificial variable for each equality constraint.
3. If either requirement constraints or equality constraints are present, then form the artificial objective function by taking the negative of the sum of all the equations that contain artificial variables, dropping the terms involving artificial variables. Set up the tableau matrix. (The row for the artificial objective function has zeroes in the columns of the artificial variables.) An initial solution of the equations, including artificial variables, is determined by setting all original variables x_j = 0, all slack variables s_i = b_i, all surplus variables s_i = 0, and all artificial variables r_i = b_i.
Steps for General Constraints, continued

4. Apply the simplex algorithm using the artificial objective function.
a. If it is not possible to make the artificial objective function equal to zero, then there is no feasible solution.
b. If the artificial variables can be made equal to zero, then drop the artificial variables and the artificial objective function from the tableau and continue using the initial feasible basic solution constructed.
5. Apply the simplex algorithm to the actual objective function. The solution is optimal when all the entries in the objective function row are nonnegative.
Example with Equality Constraint

Consider the problem
Maximize: 3x_1 + 4x_2
Subject to: -2x_1 + x_2 ≤ 6, 2x_1 + 2x_2 ≥ 24, x_1 = 8, x_1 ≥ 0, x_2 ≥ 0.

With slack, surplus, and artificial variables added, the problem becomes
Maximize: 3x_1 + 4x_2
Subject to:
-2x_1 + x_2 + s_1 = 6
2x_1 + 2x_2 - s_2 + r_2 = 24
x_1 + r_3 = 8.
The artificial objective function is the negative of the sum of the 2nd and 3rd rows:
-3x_1 - 2x_2 + s_2 + R = -32, where R = -r_2 - r_3.
Example, continued

The tableau with the variables is

     x_1  x_2  s_1  s_2  r_2  r_3 |
   [ -2    1    1    0    0    0  |   6 ]
   [  2    2    0   -1    1    0  |  24 ]
   [  1    0    0    0    0    1  |   8 ]
   [ -3   -4    0    0    0    0  |   0 ]
   [ -3   -2    0    1    0    0  | -32 ]

Pivoting on a_31 and then a_22:

     x_1  x_2  s_1  s_2   r_2  r_3 |
   [  0    0    1   1/2  -1/2   3  | 18 ]
   [  0    1    0  -1/2   1/2  -1  |  4 ]
   [  1    0    0    0     0    1  |  8 ]
   [  0    0    0   -2     2   -1  | 40 ]
   [  0    0    0    0     1    1  |  0 ]

We have attained a feasible solution (x_1, x_2, s_1, s_2) = (8, 4, 18, 0).
Example, continued

We can now drop the artificial objective function and the artificial variables. Pivoting on a_14:

     x_1  x_2  s_1  s_2 |
   [  0    0    2    1  |  36 ]
   [  0    1    1    0  |  22 ]
   [  1    0    0    0  |   8 ]
   [  0    0    4    0  | 112 ]

The optimal solution is f = 112 for (x_1, x_2, s_1, s_2) = (8, 22, 0, 36). End of Example
Duality: Introductory Example

Return to the wheat-corn problem (MLP).
Maximize: z = 80x_1 + 60x_2
Subject to: x_1 + x_2 ≤ 100 (land), 5x_1 + 10x_2 ≤ 800 (capital), 2x_1 + x_2 ≤ 150 (labor), x_1, x_2 ≥ 0.

Assume excess resources can be rented out, or shortfalls rented, at prices y_1, y_2, and y_3 (the shadow prices of the inputs). The profit is
P = 80x_1 + 60x_2 + (100 - x_1 - x_2)y_1 + (800 - 5x_1 - 10x_2)y_2 + (150 - 2x_1 - x_2)y_3.
If there were profit from renting outside land, then competitors would raise the price of land, forcing the farmer to 100 - x_1 - x_2 ≥ 0. If 100 - x_1 - x_2 > 0, then the market sets y_1 = 0, so (100 - x_1 - x_2)y_1 = 0 at the optimum. The other constraints are similar. The farmer's perspective yields the original MLP.
Duality: Introductory Example, continued

P = 80x_1 + 60x_2 + (100 - x_1 - x_2)y_1 + (800 - 5x_1 - 10x_2)y_2 + (150 - 2x_1 - x_2)y_3.

If a resource is slack, e.g. 100 - x_1 - x_2 > 0, then the market sets y_1 = 0:
0 = (100 - x_1 - x_2)y_1,
0 = (800 - 5x_1 - 10x_2)y_2,
0 = (150 - 2x_1 - x_2)y_3.

Rearranging the profit function from the market's perspective yields
P = (80 - y_1 - 5y_2 - 2y_3)x_1 + (60 - y_1 - 10y_2 - y_3)x_2 + 100y_1 + 800y_2 + 150y_3.
The coefficient of x_i represents the net profit, after costs, of a unit of the i-th good. If the net profit were > 0, then competitors would grow wheat/corn, forcing it ≤ 0:
80 - y_1 - 5y_2 - 2y_3 ≤ 0, or 80 ≤ y_1 + 5y_2 + 2y_3,
60 - y_1 - 10y_2 - y_3 ≤ 0, or 60 ≤ y_1 + 10y_2 + y_3.
The market is minimizing P = 100y_1 + 800y_2 + 150y_3.
Duality: Introductory Example, continued

The market's perspective results in the dual minimization problem:
Minimize: w = 100y_1 + 800y_2 + 150y_3
Subject to: y_1 + 5y_2 + 2y_3 ≥ 80 (wheat), y_1 + 10y_2 + y_3 ≥ 60 (corn), y_1, y_2, y_3 ≥ 0.

MLP:
Maximize: z = 80x_1 + 60x_2
Subject to: x_1 + x_2 ≤ 100, 5x_1 + 10x_2 ≤ 800, 2x_1 + x_2 ≤ 150, x_1, x_2 ≥ 0.

1. The coefficient matrices of the x_i and the y_i are transposes of each other.
2. The coefficients of the objective function of the MLP become the constants of the inequalities of the dual mlp.
3. The constants of the inequalities of the MLP become the coefficients of the objective function of the dual mlp.
Duality: Introductory Example, continued

For the wheat-corn MLP problem, the final tableau is

     x_1  x_2  s_1  s_2  s_3 |
   [  0    1    2    0   -1  |   50 ]
   [  0    0  -15    1    5  |   50 ]
   [  1    0   -1    0    1  |   50 ]
   [  0    0   40    0   20  | 7000 ]

The optimal solution of the MLP is x_1 = 50 and x_2 = 50 with a payoff of 7000.
The optimal solution of the dual mlp is (by a theorem given later) y_1 = 40, y_2 = 0, and y_3 = 20 with the same payoff, where 40, 0, and 20 are the entries in the bottom row of the final tableau in the columns associated with the slack variables. End of Example
Example, Bicycle Manufacturing

A bicycle manufacturer makes x_1 3-speeds and x_2 5-speeds, maximizing the profit given by z = 12x_1 + 15x_2. The constraints are
20x_1 + 30x_2 ≤ 2400 (finishing time in minutes),
15x_1 + ... (assembly time in minutes),
x_1 + x_2 ≤ 100 (frames used for assembly),
x_1 ≥ 0, x_2 ≥ 0.

The dual problem is
Minimize: w = 2400y_1 + ... y_2 + 100y_3,
Subject to: 20y_1 + 15y_2 + y_3 ≥ 12, 30y_1 + ... y_2 + y_3 ≥ 15, y_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0.
Bicycle Manufacturing, continued

[Three simplex tableaus for the MLP; the numerical entries were lost in the transcription.]

The optimal solution has x_1 = 60 3-speeds and x_2 = 40 5-speeds, with a profit of $1320.
Bicycle Manufacturing, continued

The y_i are the marginal values of the corresponding constraints. The dual problem has the solution
y_1 = 3/10 profit per finishing minute,
y_2 = 0 profit per assembly minute,
y_3 = 6 profit per frame.
Additional units of the exhausted resources, finishing time and frames, contribute to the profit, but additional assembly time does not.
Rules for Forming the Dual LP

  Maximization Problem, MLP               Minimization Problem, mlp
  i-th constraint Σ_j a_ij x_j ≤ b_i      i-th variable y_i ≥ 0
  i-th constraint Σ_j a_ij x_j ≥ b_i      i-th variable y_i ≤ 0
  i-th constraint Σ_j a_ij x_j = b_i      i-th variable y_i unrestricted
  j-th variable x_j ≥ 0                   j-th constraint Σ_i a_ij y_i ≥ c_j
  j-th variable x_j ≤ 0                   j-th constraint Σ_i a_ij y_i ≤ c_j
  j-th variable x_j unrestricted          j-th constraint Σ_i a_ij y_i = c_j

The standard conditions for the MLP, Σ_j a_ij x_j ≤ b_i or 0 ≤ x_j, correspond to the standard conditions for the mlp, 0 ≤ y_i or Σ_i a_ij y_i ≥ c_j.
Nonstandard conditions, ≥ b_i or x_j ≤ 0, correspond to nonstandard conditions.
Equality constraints correspond to unrestricted variables.
These rules follow from the proof of the Duality Theorem (given subsequently).
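In the standard case (all resource constraints, all variables ≥ 0), the first and fourth rows of the table reduce to a simple transpose rule; a minimal Python sketch (the function name is mine):

```python
def dual_of_standard_mlp(c, A, b):
    """Dual of a standard MLP (maximize c.x subject to A x <= b, x >= 0):
    minimize b.y subject to A^T y >= c, y >= 0.  Returns the dual data
    (objective coefficients, constraint matrix, constraint bounds)."""
    A_t = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return b, A_t, c

# Wheat-corn example: the dual data match the mlp displayed earlier.
b_d, A_t, c_d = dual_of_standard_mlp([80, 60],
                                     [[1, 1], [5, 10], [2, 1]],
                                     [100, 800, 150])
```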
Example

Minimize: 8y_1 + 10y_2 + 4y_3
Subject to: 4y_1 + 2y_2 - 3y_3 ≥ 20,
2y_1 + 3y_2 + 5y_3 ≤ 150,
6y_1 + 2y_2 + 4y_3 = 40,
y_1 unrestricted, y_2 ≥ 0, y_3 ≥ 0.

By the table, the dual maximization problem is
Maximize: 20x_1 + 150x_2 + 40x_3
Subject to: 4x_1 + 2x_2 + 6x_3 = 8,
2x_1 + 3x_2 + 2x_3 ≤ 10,
-3x_1 + 5x_2 + 4x_3 ≤ 4,
x_1 ≥ 0, x_2 ≤ 0, x_3 unrestricted.
Example, continued

Maximize: 20x_1 + 150x_2 + 40x_3
Subject to: 4x_1 + 2x_2 + 6x_3 = 8, 2x_1 + 3x_2 + 2x_3 ≤ 10, -3x_1 + 5x_2 + 4x_3 ≤ 4,
x_1 ≥ 0, x_2 ≤ 0, x_3 unrestricted.

By making the change of variables x_2 = -v_2 and x_3 = v_3 - w_3, all the restrictions on the variables become ≥ 0:
Maximize: 20x_1 - 150v_2 + 40v_3 - 40w_3
Subject to: 4x_1 - 2v_2 + 6v_3 - 6w_3 = 8,
2x_1 - 3v_2 + 2v_3 - 2w_3 ≤ 10,
-3x_1 - 5v_2 + 4v_3 - 4w_3 ≤ 4,
x_1 ≥ 0, v_2 ≥ 0, v_3 ≥ 0, w_3 ≥ 0.
Example, continued

The tableau for the maximization problem, with variables x_1, v_2, v_3, w_3, with artificial variable r_1, and with slack variables s_2 and s_3, is

     x_1  v_2  v_3  w_3  s_2  s_3  r_1 |
   [  4   -2    6   -6    0    0    1  |  8 ]
   [  2   -3    2   -2    1    0    0  | 10 ]
   [ -3   -5    4   -4    0    1    0  |  4 ]
   [-20  150  -40   40    0    0    0  |  0 ]
   [ -4    2   -6    6    0    0    0  | -8 ]

(The last row is the artificial objective function, the negative of the first equation without r_1.) Pivoting on a_33 (entering v_3; the ratios are 8/6, 10/2, 4/4):

     x_1    v_2   v_3  w_3  s_2  s_3  r_1 |
   [ 17/2  11/2    0    0    0  -3/2   1  |  2 ]
   [  7/2  -1/2    0    0    1  -1/2   0  |  8 ]
   [ -3/4  -5/4    1   -1    0   1/4   0  |  1 ]
   [ -50   100     0    0    0   10    0  | 40 ]
   [-17/2 -11/2    0    0    0   3/2   0  | -2 ]
Example, continued

Pivoting on a_11 (entering x_1; the ratio 2/(17/2) = 4/17 is the smallest) makes the artificial objective function zero. Drop the artificial objective function, but keep the artificial variable to determine the value of its dual variable:

     x_1    v_2    v_3  w_3  s_2   s_3     r_1   |
   [  1    11/17    0    0    0   -3/17    2/17  |   4/17 ]
   [  0   -47/17    0    0    1    2/17   -7/17  | 122/17 ]
   [  0   -13/17    1   -1    0    2/17    3/34  |  20/17 ]
   [  0   2250/17   0    0    0   20/17  100/17  | 880/17 ]

All the entries of the objective function row in the columns of the original, slack, and surplus variables are ≥ 0, so this tableau is optimal.
Example, continued

From the final tableau, the optimal solution of the MLP is x_1 = 4/17, x_2 = -v_2 = 0, x_3 = v_3 - w_3 = 20/17 - 0 = 20/17, s_2 = 122/17, and s_3 = r_1 = 0, with maximal value
20(4/17) + 150(0) + 40(20/17) = 880/17.
Example, continued

The optimal solution for the original mlp can also be read off the final tableau, from the objective function row: y_1 = 100/17 (the entry in the column of the artificial variable r_1), y_2 = 0 (column of s_2), y_3 = 20/17 (column of s_3), with minimal value
8(100/17) + 10(0) + 4(20/17) = 880/17.
Example, continued

Alternative method: first write the minimization problem with variables ≥ 0 by setting y_1 = u_1 - z_1:
Minimize: 8u_1 - 8z_1 + 10y_2 + 4y_3
Subject to: 4u_1 - 4z_1 + 2y_2 - 3y_3 ≥ 20,
2u_1 - 2z_1 + 3y_2 + 5y_3 ≤ 150,
6u_1 - 6z_1 + 2y_2 + 4y_3 = 40,
u_1 ≥ 0, z_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0.
The dual MLP will now have a different tableau than before, but the same solution.
Remark on Tableaus with Surplus/Equality Constraints

After the artificial objective function is zero, drop this row but keep the artificial variables, to determine the values of the dual variables. Proceed to make all the entries of the row for the objective function ≥ 0 in the columns of the x_i, slack, and surplus variables, but allow negative values in the artificial variable columns.
The dual variables are the entries in the row for the objective function in the columns of the slack variables and the artificial variables. For a pair of surplus and artificial variable columns, the value in the artificial variable column is ≥ 0 and is -1 times the value in the surplus variable column.
Notation for Duality Theorems

MLP: (primal) maximization linear programming problem
Maximize: f(x) = c·x
Subject to: Σ_j a_ij x_j ≤ b_i, ≥ b_i, or = b_i for 1 ≤ i ≤ m, and x_j ≥ 0, ≤ 0, or unrestricted for 1 ≤ j ≤ n.
Feasible set: F_M.

mlp: (dual) minimization linear programming problem
Minimize: g(y) = b·y
Subject to: Σ_i a_ij y_i ≥ c_j, ≤ c_j, or = c_j for 1 ≤ j ≤ n, and y_i ≥ 0, ≤ 0, or unrestricted for 1 ≤ i ≤ m.
Feasible set: F_m.

The dual of the minimization problem is the maximization problem.
Weak Duality Theorem

Theorem (Weak Duality Theorem). Let x ∈ F_M for the MLP and y ∈ F_m for the mlp be any feasible solutions.
a. Then f(x) = c·x ≤ b·y = g(y). Thus, an optimal value M of either problem satisfies c·x ≤ M ≤ b·y for any x ∈ F_M and y ∈ F_m.
b. c·x = b·y iff x and y satisfy complementary slackness:
0 = y_j (b_j - a_j1 x_1 - ... - a_jn x_n) for 1 ≤ j ≤ m, and
0 = x_i (a_1i y_1 + ... + a_mi y_m - c_i) for 1 ≤ i ≤ n.
In matrix notation, 0 = y·(b - Ax) and 0 = x·(Aᵀy - c).
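Part (a) can be checked numerically on the wheat-corn problem with an arbitrarily chosen feasible pair (the point x = (30, 20) is my own choice for illustration):

```python
# Weak duality check on the wheat-corn problem: for any primal-feasible
# x and dual-feasible y, c.x <= b.y.
c, b = (80, 60), (100, 800, 150)
A = ((1, 1), (5, 10), (2, 1))
x = (30, 20)                      # feasible for the MLP: A x <= b, x >= 0
y = (40, 0, 20)                   # feasible for the mlp: A^T y >= c, y >= 0
assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(3))
assert all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] for j in range(2))
cx = sum(ci * xi for ci, xi in zip(c, x))   # 3600
by = sum(bi * yi for bi, yi in zip(b, y))   # 7000
```

Here c·x = 3600 < 7000 = b·y, and by part (b) the gap reflects the failure of complementary slackness at this non-optimal x.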
Complementary Slackness

In linear programming, one usually solves by the simplex method, i.e., row reduction.
In nonlinear programming with inequalities, one often solves the complementary slackness equations, i.e., the Karush-Kuhn-Tucker equations.
Proof of the Weak Duality Theorem

(1) If Σ_j a_ij x_j ≤ b_i, then y_i ≥ 0 and y_i (Ax)_i = y_i Σ_j a_ij x_j ≤ y_i b_i.
If Σ_j a_ij x_j ≥ b_i, then y_i ≤ 0 and y_i (Ax)_i = y_i Σ_j a_ij x_j ≤ y_i b_i.
If Σ_j a_ij x_j = b_i, then y_i is arbitrary and y_i (Ax)_i = y_i Σ_j a_ij x_j = y_i b_i.
Summing over i, y·Ax = Σ_i y_i (Ax)_i ≤ Σ_i y_i b_i = y·b, or y·(b - Ax) ≥ 0.
(2) By the same type of argument as (1), c·x ≤ x·(Aᵀy) = (Aᵀy)·x = y·(Ax) = y·Ax, i.e., (Aᵀy - c)·x ≥ 0.
Combining, c·x ≤ y·Ax ≤ y·b, which proves (a).
(b) y·b - c·x = y·(b - Ax) + (Aᵀy - c)·x = 0 iff 0 = (Aᵀy - c)·x and 0 = y·(b - Ax), since both terms are ≥ 0. QED
Feasibility/Boundedness

Corollary. Assume that the MLP and the mlp both have feasible solutions. Then the MLP is bounded above and has an optimal solution. Also, the mlp is bounded below and has an optimal solution.

Proof. If y_0 ∈ F_m and x ∈ F_M, then f(x) = c·x ≤ b·y_0, so f is bounded above and has an optimal solution. Similarly, if x_0 ∈ F_M and y ∈ F_m, then g(y) = b·y ≥ c·x_0, so g is bounded below and has an optimal solution.
Necessary Conditions for an Optimal Solution

Proposition. If x̄ is an optimal solution for the MLP, then there is a feasible solution ȳ ∈ F_m of the dual mlp that satisfies the complementary slackness equations:
1. ȳ·(b - Ax̄) = 0,
2. (Aᵀȳ - c)·x̄ = 0.
Similarly, if ȳ ∈ F_m is an optimal solution of the mlp, then there is a feasible solution x̄ ∈ F_M that satisfies 1-2.

The proof is the longest of the duality arguments. We prove similar necessary conditions in the nonlinear situation.
Proof of Necessary Conditions

Let E be the set of i such that b_i = Σ_j a_ij x̄_j, i.e., the i-th constraint is tight or effective. The gradient of this constraint is the transpose of the i-th row of A, R_iᵀ.
Let E′ be the set of i such that x̄_i = 0 is tight; e_i = (0, ..., 1, ..., 0) is the negative of the gradient of the constraint -x_i ≤ 0.
Assume nondegeneracy, so that the gradients of the tight constraints {R_iᵀ}_{i∈E} ∪ {-e_i}_{i∈E′} = {w_k} are linearly independent. (Otherwise take an appropriate subset in the following argument.)
f has a maximum at x̄ on the level set of the constraints for i ∈ E ∪ E′. By Lagrange multipliers,
∇f(x̄) = c = Σ_{i∈E} ȳ_i R_iᵀ - Σ_{i∈E′} z_i e_i.
By setting ȳ_i = 0 for i ∉ E, 1 ≤ i ≤ m, and z_i = 0 for i ∉ E′, 1 ≤ i ≤ n,
c = Σ_{1≤i≤m} ȳ_i R_iᵀ - Σ_{1≤i≤n} z_i e_i = Aᵀȳ - z.  (*)
Proof of Necessary Conditions, continued

Since ȳ_i = 0 whenever b_i - Σ_j a_ij x̄_j ≠ 0, we get 0 = ȳ_i (b_i - Σ_j a_ij x̄_j) for 1 ≤ i ≤ m.
Since z_j = 0 whenever x̄_j ≠ 0, we get 0 = x̄_j z_j for 1 ≤ j ≤ n.
In vector-matrix form, using (*),
(1) 0 = ȳ·(b - Ax̄),
(2) 0 = x̄·z = (Aᵀȳ - c)·x̄.
Still needed for the feasibility of ȳ:
(i) ȳ_i ≥ 0 for a resource constraint,
(ii) ȳ_i ≤ 0 for a requirement constraint,
(iii) ȳ_i unrestricted for an equality constraint,
(iv) z_j = Σ_i a_ij ȳ_i - c_j ≥ 0 for x_j ≥ 0,
(v) z_j ≤ 0 for x_j ≤ 0, and
(vi) z_j = 0 for x_j unrestricted.
Proof of Necessary Conditions, continued

{R_iᵀ}_{i∈E} ∪ {-e_i}_{i∈E′} = {w_k} are linearly independent. Complete them to a basis of R^n using vectors perpendicular to these first vectors. Take W = (w_1, ..., w_n) and V = (v_1, ..., v_n) such that WᵀV = I; the column v_k is perpendicular to every w_i except for i = k.
c = Σ_i ȳ_i R_iᵀ - Σ_i z_i e_i = Σ_j p_j w_j, where p_j = ȳ_i, z_i, or 0, and
c·v_k = (Σ_j p_j w_j)·v_k = p_k.
Take i ∈ E; the gradient of this constraint is R_iᵀ = w_k for some k. Set δ = -1 for a resource constraint and δ = +1 for a requirement constraint, so that δ R_iᵀ points into F_M (unless it is an equality constraint). For small t ≥ 0, x̄ + t δ v_k ∈ F_M, so
0 ≥ f(x̄ + t δ v_k) - f(x̄) = t δ c·v_k = t δ p_k,
so δ p_k ≤ 0, i.e., ȳ_i ≥ 0 for a resource constraint and ȳ_i ≤ 0 for a requirement constraint.
One cannot move off an equality constraint, so ȳ_i is unrestricted.
Proof of Necessary Conditions, continued

Take i ∈ E′; here -e_i = w_k for some k. Set δ = -1 if x_i ≥ 0 and δ = +1 if x_i ≤ 0, so that δ w_k = -δ e_i points into F_M (unless x_i is unrestricted).
By the same argument as before, δ p_k = δ z_i ≤ 0. Therefore z_i ≥ 0 if x_i ≥ 0, and z_i ≤ 0 if x_i ≤ 0.
If x_i is unrestricted, then the constraint is not tight and z_i = 0.
This proves that ȳ ∈ F_m and satisfies complementary slackness (1) and (2). QED
Optimality and Complementary Slackness

Corollary. Assume that x̄ ∈ F_M is a feasible solution of the primal MLP and ȳ ∈ F_m is a feasible solution of the dual mlp. Then the following are equivalent.
a. x̄ is an optimal solution of the MLP and ȳ is an optimal solution of the mlp.
b. c·x̄ = b·ȳ.
c. 0 = x̄·(c - Aᵀȳ) and 0 = (b - Ax̄)·ȳ.

Proof. (b ⇔ c) Restatement of the Weak Duality Theorem.
(a ⇒ c) By the Proposition, there is a y* ∈ F_m that satisfies complementary slackness with x̄, so by the Weak Duality Theorem c·x̄ = b·y*. Since ȳ is optimal for the mlp, b·ȳ ≤ b·y* = c·x̄, while weak duality gives c·x̄ ≤ b·ȳ. Hence c·x̄ = b·ȳ, and by the Weak Duality Theorem x̄ and ȳ satisfy complementary slackness.
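Conditions (b) and (c) can be verified numerically at the optimal wheat-corn pair x̄ = (50, 50), ȳ = (40, 0, 20):

```python
# Complementary slackness check at the optimal pair of the wheat-corn
# problem: x = (50, 50) for the MLP, y = (40, 0, 20) for the mlp.
c, b = (80, 60), (100, 800, 150)
A = ((1, 1), (5, 10), (2, 1))
x, y = (50, 50), (40, 0, 20)
# slack_i = b_i - (A x)_i;  reduced_j = (A^T y)_j - c_j.
slack = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
reduced = [sum(A[i][j] * y[i] for i in range(3)) - c[j] for j in range(2)]
```

Condition (c) holds: every positive y_i pairs with a zero slack (the capital constraint has slack 50 but y_2 = 0), and every positive x_j pairs with a zero reduced cost; condition (b) holds since c·x = b·y = 7000.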
71 Proof of Corollary, contin.

Proof. (b ⇒ a) If x̄ and ȳ satisfy c · x̄ = b · ȳ, then for any x ∈ F_M and y ∈ F_m,
c · x ≤ b · ȳ = c · x̄ and b · y ≥ c · x̄ = b · ȳ,
so x̄ and ȳ must be optimal solutions.
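The equivalences of the Corollary can be checked numerically on the wheat-corn example from the start of the chapter. The candidate pair below is an assumption of this sketch: x̄ = (50, 50), and dual values ȳ obtained by solving the equations of the constraints that are tight at x̄ (they are not quoted from the text).

```python
from fractions import Fraction as F

# Wheat-corn example: maximize c.x subject to A x <= b, x >= 0.
A = [[1, 1], [5, 10], [2, 1]]
b = [100, 800, 150]
c = [80, 60]

# Candidate optimal pair (assumed for this sketch; the dual values solve
# the equations of the constraints that are tight at x).
x = [F(50), F(50)]
y = [F(40), F(0), F(20)]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

cx = dot(c, x)   # primal objective value c.x
by = dot(b, y)   # dual objective value b.y

# Condition (c): complementary slackness.
primal_slack = [b[i] - dot(A[i], x) for i in range(3)]                        # b - A x
dual_slack = [dot([A[i][j] for i in range(3)], y) - c[j] for j in range(2)]   # A^T y - c
cs1 = dot(primal_slack, y)   # (b - A x) . y, should be 0
cs2 = dot(dual_slack, x)     # (A^T y - c) . x, should be 0
```

Since c · x̄ = b · ȳ and both slackness products vanish, conditions (b) and (c) hold, so by the Corollary both points are optimal.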
72 Duality Theorem

Theorem. Consider two dual problems MLP and mlp. Then MLP has an optimal solution if and only if the dual mlp has an optimal solution.

Proof. If MLP has an optimal solution x̄, then mlp has a feasible solution ȳ that satisfies complementary slackness. By the Corollary, ȳ is an optimal solution of mlp. The converse is similar.
73 Duality and Tableau

Theorem. If either MLP or mlp is solved for an optimal solution by the simplex method, then the solution of its dual LP is displayed in the bottom row of the final optimal tableau, in the columns associated with the slack and artificial variables (not the surplus variables).

Proof: Start with MLP. To solve by tableau, need x ≥ 0. Group the equations into resource, requirement, and equality constraints, so the tableau for MLP is

  [ A_1   I_1    0     0     0  | b_1 ]
  [ A_2    0   −I_2   I_2    0  | b_2 ]
  [ A_3    0     0     0    I_3 | b_3 ]
  [ −c     0     0     0     0  |  0  ]
74 Proof continued

The row operations leading to the final tableau are realized by multiplication on the left by a matrix

  [ M_1  M_2  M_3  0 ]
  [ ȳ_1  ȳ_2  ȳ_3  1 ],

giving

  [ M_1 A_1 + M_2 A_2 + M_3 A_3        M_1  −M_2   M_2   M_3 | M b   ]
  [ ȳ_1 A_1 + ȳ_2 A_2 + ȳ_3 A_3 − c   ȳ_1  −ȳ_2   ȳ_2   ȳ_3 | ȳ · b ]

The objective-function row is not added to the other rows, so the last column of the multiplying matrix is (0, 1)ᵀ.
In the final tableau, the entries in the objective-function row are ≥ 0, except possibly in the artificial-variable columns, so
Aᵀ ȳ − c = (ȳᵀ A − cᵀ)ᵀ ≥ 0, ȳ_1 ≥ 0, ȳ_2 ≤ 0, so ȳ ∈ F_m.
Also c · x̄_max = ȳ · b = b · ȳ, so ȳ is a minimizer by the Optimality Corollary.
75 Proof continued

(Note: Aᵀ ȳ has i-th entry L_i · ȳ, where L_i is the i-th column of A.)
If x_i ≤ 0, set ξ_i = −x_i ≥ 0. Its column in the tableau is −1 times the original column, and the new objective-function coefficient is −c_i. Now 0 ≤ (−L_i) · ȳ − (−c_i), so L_i · ȳ ≤ c_i, the constraint of the dual for x_i ≤ 0.
If x_i is arbitrary, set x_i = ξ_i − η_i. Then we get both L_i · ȳ ≥ c_i and −L_i · ȳ ≥ −c_i, so L_i · ȳ = c_i, an equality constraint of the dual. QED
76 Sensitivity Analysis

Sensitivity analysis concerns the extent to which more of a resource would increase the maximum value of an MLP.

Example. In the short run, the amounts in stock limit production of two products, with a profit of $40 per item of product 1 and $10 per item of product 2. The resources are units of paint, fasteners (10 x_1 + 2 x_2 ≤ 400), and hours of labor. Slack variables s_1, s_2, s_3 are introduced and the problem is solved by the simplex method with tableaus in the variables x_1, x_2, s_1, s_2, s_3.
77 Sensitivity Analysis, continued

The optimal solution is x_1 = 28, x_2 = 60, s_1 = 0, s_2 = 0, s_3 = 36, with an optimal profit of 40(28) + 10(60) = 1720.
78 Sensitivity Analysis, continued

From the objective row of the final tableau, the values of an increase of the constrained quantities are y_1 = 2/7, y_2 = 25/7, and y_3 = 0 for the quantity that is not tight.
The increase is largest for the second constraint, the limitation on fasteners.
Next consider the range over which b_2 can be increased while keeping the same basic variables with s_1 = s_2 = 0.
79 Sensitivity Analysis, continued

Let δ_2 be the change in the 2nd resource (fasteners): 10 x_1 + 2 x_2 + s_2 = 400 + δ_2 is the starting form of the constraint.
s_2 and δ_2 play similar roles (and have similar units), so the new final tableau adds δ_2 times the s_2-column to the right side of the equalities.
Still need x_1, x_2, s_3 ≥ 0:
0 ≤ x_2 = 60 − (3/14) δ_2, or δ_2 ≤ (60)(14)/3 = 280,
0 ≤ x_1 = 28 + (1/7) δ_2, or δ_2 ≥ −(28)(7) = −196,
0 ≤ s_3 = 36 + (9/14) δ_2, or δ_2 ≥ −(36)(14)/9 = −56;
so −56 ≤ δ_2 ≤ 280.
80 Sensitivity Analysis, continued

The resource can be increased by at most 280 units and decreased by at most 56 units, or −56 ≤ δ_2 ≤ 280, i.e. 344 ≤ b_2 = 400 + δ_2 ≤ 680. For this range, x_1, x_2, and s_3 are still the basic variables.

[Figure: feasible sets for the fastener resource equal to 344, 400, and 680, showing the vertices (20, 72), (28, 60), (40, 0), and (68, 0).]
81 Sensitivity for Change of Constraint Constants

For δ_2 = 280: x_1 = 28 + 280/7 = 68, x_2 = 60 − (3)(280)/14 = 0, s_3 = 36 + (9)(280)/14 = 216, and z = 1720 + (25/7)(280) = 2720 is the optimal value.
For δ_2 = −56: x_1 = 28 − 56/7 = 20, x_2 = 60 + (3)(56)/14 = 72, s_3 = 36 − (9)(56)/14 = 0, and z = 1720 − (25/7)(56) = 1520 is the optimal value.
82 General Changes in Tight Constraint

Use the optimal (final) tableau that gives the maximum:
b̄_i — entry of the i-th row in the right-hand column of constants,
c̄_j ≥ 0 — entry in the j-th column of the objective row,
ā_ij — entry in the i-th row and j-th column; exclude the right-side constants and any artificial-variable columns,
C_j — j-th column of Ā (note capital and bold, not c̄_j),
R_i — i-th row of Ā.
83 General Changes in Constraint, contin.

Change the tight r-th resource constraint to b_r + δ_r. Assume that the slack s_r is in the k-th column. The initial tableau with right-hand side b + δ_r e_r row-reduces to the final tableau with right-hand side b̄ + δ_r C_k and optimal value M + δ_r c̄_k, where C_k is the k-th column for s_r.
For each basic variable z_i with pivot in the i-th row, need 0 ≤ z_i = b̄_i + δ_r ā_ik.
For ā_ik < 0, need δ_r ≤ b̄_i/(−ā_ik), so δ_r ≤ min_i { −b̄_i/ā_ik : ā_ik < 0 }.
For ā_ik > 0, need −b̄_i/ā_ik ≤ δ_r, so −min_i { b̄_i/ā_ik : ā_ik > 0 } ≤ δ_r.
The change in the optimal value for δ_r in the allowable range is δ_r c̄_k.
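The two min-formulas can be written as a small helper. As an illustration it is run on the s_2-column entries and constants b̄ read off the example's final tableau on the preceding slides; those entries are carried over from the example, not recomputed here.

```python
from fractions import Fraction as F

def allowable_range(b_bar, col_k):
    """Allowable change delta_r in a tight resource so the current basic
    variables stay nonnegative: 0 <= b_bar[i] + delta_r * col_k[i] for all i.
    Returns (lo, hi); None means unbounded on that side."""
    lo = max((-bi / aik for bi, aik in zip(b_bar, col_k) if aik > 0), default=None)
    hi = min((bi / -aik for bi, aik in zip(b_bar, col_k) if aik < 0), default=None)
    return lo, hi

# From the example's final tableau: rows x1, x2, s3 and the s2-column.
b_bar = [F(28), F(60), F(36)]
col_s2 = [F(1, 7), F(-3, 14), F(9, 14)]
lo, hi = allowable_range(b_bar, col_s2)   # expect -56 <= delta_2 <= 280
```

This reproduces the range −56 ≤ δ_2 ≤ 280 found for the fastener constraint.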
84 Change in Slack Constraint Constant

Let s_r correspond to a pivot column in the optimal tableau for a slack r-th resource, with value b̄_r > 0. To keep the same basic variables, need the changed amount to satisfy b̄_r + δ_r ≥ 0, i.e. δ_r ≥ −b̄_r.
b_r can be increased by an arbitrary amount. For δ_r in this range, the optimal value is unchanged.
85 Sensitivity Analysis, continued

Allowable δ_1 for the first resource (paint), using the formulas above with k the s_1-column:
δ_1 ≤ min { 980, … } from the rows with ā_ik < 0, and
−δ_1 ≤ min { … } = 420, i.e. δ_1 ≥ −420, from the rows with ā_ik > 0.
Change of the optimal value: (2/7) δ_1.
Allowable δ_3 for the slack third resource: δ_3 ≥ −36.
86 Changes in Objective Function Coefficients

Consider a change in the first coefficient of the objective function from c_1 to c_1 + Δ_1. In the initial tableau, the entry of the objective row in the x_1-column becomes −c_1 − Δ_1; carrying the same row operations through, the entry in the x_1-column of the final tableau becomes −Δ_1 instead of 0. To keep x_1 basic, Δ_1 times the pivot row of x_1 is added to the objective row.
87 Changes in Coefficients, contin.

To keep the objective-function row ≥ 0, need
2/7 − (1/35) Δ_1 ≥ 0, i.e. Δ_1 ≤ 10, and 25/7 + (1/7) Δ_1 ≥ 0, i.e. Δ_1 ≥ −25,
so 15 ≤ c_1 + Δ_1 ≤ 40 + 10 = 50.
The corresponding optimal value of the objective function is 1720 + 28 Δ_1.
88 General Changes in Objective Function Coefficients

Consider a change Δ_k in the coefficient c_k of x_k in the objective function, where x_k is a basic variable in the optimal solution and its pivot is in the r-th row.
Let ā_rj denote the entries in the r-th row and j-th column, excluding the right-side constants and any artificial-variable columns, and let c̄_j ≥ 0 denote the entries in the objective row of the optimal tableau.
With the change, the original entry in the objective row becomes −c_k − Δ_k, and the entry in the optimal tableau changes from 0 to −Δ_k.
To keep x_k basic, need to add Δ_k R̄_r to the objective-function row. The entry in the x_k-column is then 0, and the entry in the j-th column is c̄_j + Δ_k ā_rj.
For all j, need c̄_j + Δ_k ā_rj ≥ 0.
89 Changes in Coefficients, contin.

c_k + Δ_k is for a basic variable with r-th pivot row, ā_rk = 1 the pivot.
For ā_rj > 0 in the r-th pivot row, need Δ_k ā_rj ≥ −c̄_j, or Δ_k ≥ −c̄_j/ā_rj, so
−Δ_k ≤ min_j { c̄_j/ā_rj : ā_rj > 0, j ≠ k }, the maximal decrease of c_k.
For ā_rj < 0 in the r-th pivot row, need c̄_j ≥ −Δ_k ā_rj, or Δ_k ≤ −c̄_j/ā_rj, so
Δ_k ≤ min_j { −c̄_j/ā_rj : ā_rj < 0 }, the maximal increase of c_k.
If c̄_j = 0 for some ā_rj < 0, then need Δ_k ≤ 0. If c̄_j = 0 for some ā_rj > 0 with j ≠ k, then need Δ_k ≥ 0.
The change in the optimal value is Δ_k b̄_r.
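The same bookkeeping can be sketched in code. The objective row c̄ and the pivot row of x_1 below are inferred from the ranges computed in the example's sensitivity slides; they are assumptions of this sketch, not entries printed in the text.

```python
from fractions import Fraction as F

def coeff_range(c_bar, row_r, k):
    """Allowable change Delta_k in the objective coefficient of the basic
    variable x_k with pivot row row_r: need c_bar[j] + Delta_k*row_r[j] >= 0
    for all j.  Returns (max_decrease, max_increase); None means unbounded."""
    dec = min((cj / arj for j, (cj, arj) in enumerate(zip(c_bar, row_r))
               if arj > 0 and j != k), default=None)   # Delta_k >= -dec
    inc = min((-cj / arj for cj, arj in zip(c_bar, row_r)
               if arj < 0), default=None)              # Delta_k <= inc
    return dec, inc

# Inferred objective row and x1 pivot row of the example's final tableau
# (columns x1, x2, s1, s2, s3).
c_bar = [F(0), F(0), F(2, 7), F(25, 7), F(0)]
row_x1 = [F(1), F(0), F(-1, 35), F(1, 7), F(0)]
dec, inc = coeff_range(c_bar, row_x1, k=0)
# c1 = 40 may decrease by at most dec and increase by at most inc.
```

With these entries the allowable increase is 10, matching the range ending at c_1 + Δ_1 = 50 on the previous slide.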
90 Sensitivity Analysis for Non-basic Variable

Our example does not have any non-basic variables x_k. If x_k were a non-basic variable in the optimal solution, x_k = 0, then c̄_k − Δ_k ≥ 0, i.e. Δ_k ≤ c̄_k, ensures that x_k remains a non-basic variable.
That is, c̄_k ≥ 0 is the minimum increase of c_k needed to make x_k a basic variable and make a positive contribution to the optimal solution.
91 Theory: Convex Combinations

Weighted averages in Rⁿ: For three vectors a_1, a_2, and a_3,
(a_1 + a_2 + a_3)/3
is the average of each component, the average of these vectors.
(a_1 + a_2 + a_2 + a_3 + a_3 + a_3)/6 = (a_1 + 2 a_2 + 3 a_3)/6 = (1/6) a_1 + (2/6) a_2 + (3/6) a_3
is a weighted average of these vectors with weights 1/6, 2/6, and 3/6.
For vectors {a_i}_{i=1}^k and numbers t_i ≥ 0 with Σ_{i=1}^k t_i = 1,
Σ_{i=1}^k t_i a_i is a weighted average, and is called a convex combination of the {a_i}.
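The weighted average above is easy to verify numerically; the three sample vectors here are chosen for illustration.

```python
from fractions import Fraction as F

def convex_combination(points, weights):
    """Componentwise weighted average sum_i t_i a_i, with t_i >= 0, sum t_i = 1."""
    assert all(t >= 0 for t in weights) and sum(weights) == 1
    n = len(points[0])
    return tuple(sum(t * p[j] for t, p in zip(weights, points)) for j in range(n))

a1, a2, a3 = (6, 0), (0, 6), (6, 6)
avg = convex_combination([a1, a2, a3], [F(1, 3)] * 3)            # plain average
wavg = convex_combination([a1, a2, a3], [F(1, 6), F(2, 6), F(3, 6)])
```

Here `avg` is (4, 4) and `wavg` is (4, 5), the weighted average with weights 1/6, 2/6, 3/6.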
92 Convex Sets

Definition. A set S ⊂ Rⁿ is convex provided that if x_0 and x_1 are any two points in S, then the convex combination x_t = (1 − t) x_0 + t x_1 is also in S for all 0 ≤ t ≤ 1, i.e., the line segment from x_0 to x_1 lies in S.

[Figures: three convex sets and two non-convex sets.]
93 Convex Sets, contin.

Each constraint, a_i1 x_1 + ... + a_in x_n ≤ b_i or ≥ b_i, or x_i ≥ 0, defines a closed half-space in Rⁿ.
a_i1 x_1 + ... + a_in x_n = b_i is a hyperplane ((n − 1)-dimensional).

Definition. Any intersection of a finite number of closed half-spaces and possibly some hyperplanes is called a polyhedron.
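A polyhedron membership test is just the conjunction of the half-space inequalities. Here it is checked against the wheat-corn feasible set from the beginning of the chapter, with nonnegativity encoded as two extra half-spaces.

```python
def in_polyhedron(halfspaces, x):
    """halfspaces: list of (a, b) meaning a . x <= b.  Nonnegativity of x_i is
    encoded the same way, as -x_i <= 0."""
    return all(sum(ai * xi for ai, xi in zip(a, x)) <= b for a, b in halfspaces)

feasible = [((1, 1), 100),    # land
            ((5, 10), 800),   # capital
            ((2, 1), 150),    # labor
            ((-1, 0), 0), ((0, -1), 0)]
inside = in_polyhedron(feasible, (40, 60))    # a vertex of the feasible set
outside = in_polyhedron(feasible, (100, 0))   # violates 2(100) + 0 <= 150
```

The point (40, 60) satisfies all five inequalities; (100, 0) fails the labor constraint, as noted in the example.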
94 Convex Sets, contin.

Theorem. a. An intersection of convex sets, ∩_j S_j, is convex.
b. A polyhedron is convex. So the feasible set of any LP is convex.

Proof. (a) If x_0, x_1 ∈ ∩_j S_j and 0 ≤ t ≤ 1, then (1 − t) x_0 + t x_1 ∈ S_j for each j, and so (1 − t) x_0 + t x_1 ∈ ∩_j S_j.
(b) Each closed half-space and hyperplane is convex, so the intersection is convex.
95 Convex Sets, contin.

Theorem. If S is a convex set and p_i ∈ S for 1 ≤ i ≤ k, then any convex combination Σ_{i=1}^k t_i p_i ∈ S.

Proof. The proof is by induction. For k = 2, it follows from the definition of a convex set. Assume it is true for k − 1 ≥ 2.
If t_k = 1 and t_j = 0 for 1 ≤ j < k, then it is clear.
If t_k < 1, then Σ_{i=1}^{k−1} t_i = 1 − t_k > 0, and Σ_{i=1}^{k−1} t_i/(1 − t_k) = 1, so
Σ_{i=1}^{k−1} (t_i/(1 − t_k)) p_i ∈ S by the induction hypothesis.
So, Σ_{i=1}^k t_i p_i = (1 − t_k) Σ_{i=1}^{k−1} (t_i/(1 − t_k)) p_i + t_k p_k ∈ S.
96 Extreme Points and Vertices

Definition. A point p in a nonempty convex set S is called an extreme point provided that if p = (1 − t) x_0 + t x_1 with x_0, x_1 in S and 0 < t < 1, then p = x_0 = x_1.
An extreme point in a polyhedron is called a vertex.
An extreme point of S must be a boundary point of S.
The disk D = { x ∈ R² : ‖x‖ ≤ 1 } is convex. Each point on its boundary is an extreme point.
For the next few theorems, consider the feasible set F = { x ∈ R₊^{n+m} : A x = b }, with the slack and surplus variables included in x and the constraints.
97 Vertices

Theorem. Let F = { x ∈ R₊^{n+m} : A x = b }. Then x̄ ∈ F is a vertex of F if and only if x̄ is a basic feasible solution, i.e., the columns A^j of A with x̄_j > 0 form a linearly independent set of vectors.

Proof: By reindexing the columns and variables, we can assume that x̄_1 > 0, ..., x̄_r > 0 and x̄_{r+1} = ... = x̄_{n+m} = 0.
(a) Assume { A^j }_{j=1}^r is linearly dependent: there is (β_1, ..., β_r) ≠ 0 with β_1 A^1 + ... + β_r A^r = 0. For β = (β_1, ..., β_r, 0, ..., 0), A β = 0.
For small λ > 0, w_1 = x̄ + λ β ≥ 0 and w_2 = x̄ − λ β ≥ 0, with A w_i = A x̄ = b,
so w_1, w_2 ∈ F and x̄ = (1/2) w_1 + (1/2) w_2, so x̄ is not a vertex.
98 Proof

(b) Conversely, assume that x̄ ∈ F is not a vertex: x̄ = t y + (1 − t) z for some 0 < t < 1 with y ≠ z in F.
For j > r, 0 = x̄_j = t y_j + (1 − t) z_j. Since both y_j ≥ 0 and z_j ≥ 0, both must be zero for j > r.
Because y and z are both in F,
b = A y = y_1 A^1 + ... + y_r A^r,
b = A z = z_1 A^1 + ... + z_r A^r, so
0 = (y_1 − z_1) A^1 + ... + (y_r − z_r) A^r,
and the columns { A^j }_{j=1}^r are linearly dependent. QED
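The characterization of vertices can be tested directly: a feasible point of F = { x ≥ 0 : A x = b } is a vertex iff the columns with x_j > 0 are independent. Below, the wheat-corn example is put in equality form with slacks and independence is checked by exact Gaussian elimination; the two candidate points are chosen for illustration.

```python
from fractions import Fraction as F

def rank(cols):
    """Rank of a list of column vectors, by exact Gaussian elimination."""
    M = [[F(c[i]) for c in cols] for i in range(len(cols[0]))]  # matrix rows
    r = 0
    for j in range(len(cols)):
        piv = next((i for i in range(r, len(M)) if M[i][j] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][j] != 0:
                f = M[i][j] / M[r][j]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_vertex(A, x):
    """x assumed feasible (A x = b, x >= 0); vertex iff positive columns independent."""
    cols = [[A[i][j] for i in range(len(A))] for j, xj in enumerate(x) if xj > 0]
    return cols == [] or rank(cols) == len(cols)

# Wheat-corn example in equality form: columns x1, x2, s1, s2, s3.
A = [[1, 1, 1, 0, 0],
     [5, 10, 0, 1, 0],
     [2, 1, 0, 0, 1]]
vertex = is_vertex(A, [50, 50, 0, 50, 0])         # 3 positive, independent columns
not_vertex = is_vertex(A, [20, 20, 60, 500, 90])  # 5 positive columns in R^3
```

The second point is feasible but has five positive components, so its columns must be dependent in R³ and it cannot be a vertex.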
99 Some Vertex is an Optimal Solution

For any convex combination, f(Σ_j t_j x_j) = c · Σ_j t_j x_j = Σ_j t_j c · x_j = Σ_j t_j f(x_j).

Theorem. Assume that F = { x ∈ R₊^{n+m} : A x = b } for a bounded MLP. Then the following hold.
a. If x_0 ∈ F, then there exists a basic feasible x_b ∈ F such that f(x_b) = c · x_b ≥ c · x_0 = f(x_0).
b. There is at least one optimal basic solution.
c. If two or more basic solutions are optimal, then any convex combination of them is also an optimal solution.
100 Proof

a. If x_0 is already a basic feasible solution, then done.
Otherwise, the columns A^i with x_i^0 > 0 are linearly dependent. Let A′ be the matrix with only these columns. There is y′ ≠ 0 such that A′ y′ = 0. Adding 0 in the other entries, we get y ≠ 0 such that A y = 0.
Since A(−y) = 0 as well, we can assume that c · y ≥ 0.
A[x_0 + t y] = A x_0 = b. If y_i ≠ 0, then x_i^0 > 0, so x_0 + t y ≥ 0 for small |t| and is in F.
101 Proof a, contin.

Case 1. Assume that c · y > 0 and some component y_i < 0.
Then x_i^0 > 0 and x_i^0 + t y_i = 0 for t_i = x_i^0/(−y_i) > 0.
As t increases from 0 to t_i, the objective function increases from c · x_0 to c · [x_0 + t_i y].
If more than one y_i < 0, then select the one with the smallest t_i.
We have constructed a point in F with one more zero component than x_0, fewer components with y_i < 0, and a greater value of the objective function.
Continue until either the columns are linearly independent or all y_i ≥ 0.
102 Proof a, continued

Case 2. If c · y > 0 and y ≥ 0, then x_0 + t y ∈ F for all t > 0, and c · [x_0 + t y] = c · x_0 + t c · y is arbitrarily large. The MLP would be unbounded with no maximum, a contradiction.
Case 3. If c · y = 0: f(x_0 + t y) = c · x_0 is unchanged.
Some y_i ≠ 0. Considering y and −y, we can assume some y_i < 0. Take the first t_i > 0 that makes another x_i^0 + t_i y_i = 0.
Eventually, the corresponding columns are linearly independent, giving a basic solution as claimed in part (a).
103 Proof b,c

(b) There are only finitely many basic feasible solutions, {p_j}_{j=1}^N.
By part (a), f(x) ≤ max_{1 ≤ j ≤ N} f(p_j) for x ∈ F, so the maximum can be found among the f(p_j).
(c) Assume f(p_{j_i}) = M = max{ f(x) : x ∈ F } for i = 1, ..., ℓ > 1. For weights t_{j_i} ≥ 0 with Σ_i t_{j_i} = 1, Σ_{i=1}^ℓ t_{j_i} p_{j_i} ∈ F and
f(Σ_{i=1}^ℓ t_{j_i} p_{j_i}) = Σ_{i=1}^ℓ t_{j_i} f(p_{j_i}) = Σ_{i=1}^ℓ t_{j_i} M = M,
so it is optimal. QED
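Part (b) suggests a brute-force method for small problems: enumerate all basic solutions, keep the feasible ones, and take the best. The sketch below applies this to the wheat-corn example in equality form; it is for illustration only, since the simplex method exists precisely to avoid this enumeration.

```python
from fractions import Fraction as F
from itertools import combinations

def solve_square(M, rhs):
    """Solve M z = rhs exactly by Gauss-Jordan; return None if M is singular."""
    n = len(M)
    aug = [[F(M[i][j]) for j in range(n)] + [F(rhs[i])] for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if piv is None:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def best_vertex(A, b, c):
    """Maximize c.x over {x >= 0 : A x = b} by enumerating basic solutions."""
    m, n = len(A), len(A[0])
    best = None
    for B in combinations(range(n), m):          # candidate basic columns
        xB = solve_square([[A[i][j] for j in B] for i in range(m)], b)
        if xB is None or any(v < 0 for v in xB):
            continue                             # singular or infeasible
        x = [F(0)] * n
        for j, v in zip(B, xB):
            x[j] = v
        val = sum(cj * xj for cj, xj in zip(c, x))
        if best is None or val > best[0]:
            best = (val, x)
    return best

A = [[1, 1, 1, 0, 0],    # land     (wheat-corn example with slacks)
     [5, 10, 0, 1, 0],   # capital
     [2, 1, 0, 0, 1]]    # labor
val, x = best_vertex(A, [100, 800, 150], [80, 60, 0, 0, 0])
```

The maximum over the basic feasible solutions is attained at x_1 = x_2 = 50 with value 7000, the vertex (50, 50) of the feasible set listed at the start of the chapter.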
104 Validation of Simplex Method

If there are degenerate basic feasible solutions with fewer than m positive basic variables, then the simplex method can cycle by row-reducing to matrices with the same positive basic variables but different sets of pivots: interchanging one zero basic variable with a zero free variable gives the same vertex of the feasible set.
We need to ensure that the method does not repeat the same set of basic variables at a vertex (cycle).

Theorem. If a maximum solution exists for a linear programming problem and the simplex algorithm does not cycle among degenerate basic feasible solutions, then the simplex algorithm locates a maximum solution in finitely many steps.
More informationLinear Programming Redux
Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains
More information56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker
56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker Answer all of Part One and two (of the four) problems of Part Two Problem: 1 2 3 4 5 6 7 8 TOTAL Possible: 16 12 20 10
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationOptimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems
Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}
More informationOPERATIONS RESEARCH. Michał Kulej. Business Information Systems
OPERATIONS RESEARCH Michał Kulej Business Information Systems The development of the potential and academic programmes of Wrocław University of Technology Project co-financed by European Union within European
More information+ 5x 2. = x x. + x 2. Transform the original system into a system x 2 = x x 1. = x 1
University of California, Davis Department of Agricultural and Resource Economics ARE 5 Optimization with Economic Applications Lecture Notes Quirino Paris The Pivot Method for Solving Systems of Equations...................................
More informationDuality in Linear Programming
Duality in Linear Programming Gary D. Knott Civilized Software Inc. 1219 Heritage Park Circle Silver Spring MD 296 phone:31-962-3711 email:knott@civilized.com URL:www.civilized.com May 1, 213.1 Duality
More informationThe Graphical Method & Algebraic Technique for Solving LP s. Métodos Cuantitativos M. En C. Eduardo Bustos Farías 1
The Graphical Method & Algebraic Technique for Solving LP s Métodos Cuantitativos M. En C. Eduardo Bustos Farías The Graphical Method for Solving LP s If LP models have only two variables, they can be
More information9.1 Linear Programs in canonical form
9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems
More informationLinear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food.
18.310 lecture notes September 2, 2013 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality
More informationLinear Programming Duality
Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve
More informationNumerical Optimization
Linear Programming Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on min x s.t. Transportation Problem ij c ijx ij 3 j=1 x ij a i, i = 1, 2 2 i=1 x ij
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationUnderstanding the Simplex algorithm. Standard Optimization Problems.
Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form
More informationLINEAR PROGRAMMING II
LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality
More informationLecture 2: The Simplex method
Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.
More informationDr. S. Bourazza Math-473 Jazan University Department of Mathematics
Dr. Said Bourazza Department of Mathematics Jazan University 1 P a g e Contents: Chapter 0: Modelization 3 Chapter1: Graphical Methods 7 Chapter2: Simplex method 13 Chapter3: Duality 36 Chapter4: Transportation
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationAdvanced Linear Programming: The Exercises
Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z
More informationLinear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming
Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)
More informationLinear programs Optimization Geoff Gordon Ryan Tibshirani
Linear programs 10-725 Optimization Geoff Gordon Ryan Tibshirani Review: LPs LPs: m constraints, n vars A: R m n b: R m c: R n x: R n ineq form [min or max] c T x s.t. Ax b m n std form [min or max] c
More informationCO350 Linear Programming Chapter 8: Degeneracy and Finite Termination
CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible
More informationOptimisation and Operations Research
Optimisation and Operations Research Lecture 9: Duality and Complementary Slackness Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/
More informationDEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION. Part I: Short Questions
DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION Part I: Short Questions August 12, 2008 9:00 am - 12 pm General Instructions This examination is
More informationPrelude to the Simplex Algorithm. The Algebraic Approach The search for extreme point solutions.
Prelude to the Simplex Algorithm The Algebraic Approach The search for extreme point solutions. 1 Linear Programming-1 x 2 12 8 (4,8) Max z = 6x 1 + 4x 2 Subj. to: x 1 + x 2
More information9.5 THE SIMPLEX METHOD: MIXED CONSTRAINTS
SECTION 9.5 THE SIMPLEX METHOD: MIXED CONSTRAINTS 557 9.5 THE SIMPLEX METHOD: MIXED CONSTRAINTS In Sections 9. and 9., you looked at linear programming problems that occurred in standard form. The constraints
More information