IE 534 Linear Programming Lecture Notes Fall 2011
Lizhi Wang, Iowa State University

1 Introduction

Problem → Model → Algorithm → Solver

An example of a linear program:

max ζ = 2x1 + 3x2 (1)
subject to x1 + 4x2 ≤ 8 (2)
 x1 − x2 ≤ 1 (3)
 x1, x2 ≥ 0 (4)

Here x1 and x2 are variables whose values need to be determined to maximize the linear function ζ = 2x1 + 3x2 subject to the linear constraints (2)-(4). We can get the optimal solution to LP (1)-(4) graphically, as illustrated in Figure 1. There are three steps:

1. Find the region that satisfies all constraints (shaded region in Figure 1),
2. Find the direction in which the objective function is maximizing, and
3. Eyeball the optimal solution (for the example in Figure 1, x1 = 2.4, x2 = 1.4, and ζ = 9).

These three steps can only be used to manually find the optimal solution to linear programs with two variables.

Figure 1: Graphical solution to LP (1)-(4)

Copyright Lizhi Wang, 2011. All rights reserved. If you have to print this document, please consider double-sided printing to save some trees.
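These three steps can also be mimicked numerically for a two-variable LP: every corner of the feasible region is the intersection of two constraints held at equality, so we can enumerate all constraint pairs, keep the feasible intersection points, and pick the best one. A minimal Python sketch of this idea (the encoding below, with x1 ≥ 0 and x2 ≥ 0 written as extra constraint rows, is our own):

```python
from itertools import combinations

# LP (1)-(4) with every constraint written as a row of "a'x <= b",
# including x1 >= 0 and x2 >= 0 as -x1 <= 0 and -x2 <= 0.
A = [(1.0, 4.0), (1.0, -1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [8.0, 1.0, 0.0, 0.0]

def intersect(r1, r2):
    """Solve the 2x2 system given by two constraints held at equality."""
    (a1, a2), (a3, a4) = A[r1], A[r2]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-9:            # parallel constraints: no corner point
        return None
    x1 = (b[r1] * a4 - a2 * b[r2]) / det
    x2 = (a1 * b[r2] - b[r1] * a3) / det
    return (x1, x2)

def feasible(x):
    return all(A[i][0] * x[0] + A[i][1] * x[1] <= b[i] + 1e-9
               for i in range(len(A)))

corners = [p for i, j in combinations(range(len(A)), 2)
           if (p := intersect(i, j)) is not None and feasible(p)]
best = max(corners, key=lambda p: 2 * p[0] + 3 * p[1])
print(best, 2 * best[0] + 3 * best[1])   # (2.4, 1.4) with objective 9.0
```

This enumeration is exactly what makes the graphical method work, and it foreshadows the simplex algorithm's focus on corner points later in these notes.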
In general, a linear program is given as follows:

max_x ζ = c1 x1 + c2 x2 + ... + cn xn (5)
s.t. a1,1 x1 + a1,2 x2 + ... + a1,n xn ≤ b1 (6)
 ...
 ai,1 x1 + ai,2 x2 + ... + ai,n xn = bi (7)
 ...
 aj,1 x1 + aj,2 x2 + ... + aj,n xn ≥ bj (8)
 ...
 am,1 x1 + am,2 x2 + ... + am,n xn ≤ bm (9)
 l1 ≤ x1 ≤ u1 (10)
 l2 ≤ x2 ≤ u2 (11)
 ...
 ln ≤ xn ≤ un (12)

- The variables x1, x2, ..., xn are called decision variables, and a, b, c, l, and u are given parameters.
- A solution x = [x1, x2, ..., xn] is a feasible solution if it satisfies all constraints (6)-(12). Similarly, a solution is an infeasible solution if it violates any constraint.
- The set of all feasible solutions is called the feasible region or feasible set.
- If li = −∞ and ui = +∞, then xi is a free or unrestricted variable.
- The linear function (5) is the objective function.
- ζ represents the value of the objective function, called the objective value. For a given solution x, we use ζ(x) to denote the objective value of x: ζ(x) = c1 x1 + c2 x2 + ... + cn xn.
- The symbol max indicates that we want to maximize the objective value ζ. We may also use min to indicate the opposite.
- The symbol s.t. is short for "subject to" or "such that", which starts the list of constraints that a feasible solution must satisfy.
- A solution x* is optimal if it is feasible and ζ(x*) ≥ ζ(x) for any other feasible solution x.
- The solution of an LP has three possibilities:
 1. There exists an optimal solution, either uniquely or among infinitely many others.
 2. The LP is infeasible, which means that a feasible solution does not exist.
 3. The LP is unbounded, which means that for any real number K, there always exists a feasible solution x such that ζ(x) > K. In that case, the optimal objective value is said to be +∞.
In many situations, it is convenient to study a linear program in the following standard form:

max_x ζ = c1 x1 + c2 x2 + ... + cn xn (13)
s.t. a1,1 x1 + a1,2 x2 + ... + a1,n xn ≤ b1 (14)
 ...
 am,1 x1 + am,2 x2 + ... + am,n xn ≤ bm (15)
 x1, x2, ..., xn ≥ 0 (16)

The definition of a standard form is not unique, but may depend on context or personal preference. The following tricks can be used to transform any linear program to the standard form:

- A minimization objective
 min ζ = c1 x1 + c2 x2 + ... + cn xn
 can be replaced with
 max −ζ = −c1 x1 − c2 x2 − ... − cn xn.
- A greater-than-or-equal-to constraint
 aj,1 x1 + aj,2 x2 + ... + aj,n xn ≥ bj
 can be replaced with
 −aj,1 x1 − aj,2 x2 − ... − aj,n xn ≤ −bj.
- An equality constraint
 ai,1 x1 + ai,2 x2 + ... + ai,n xn = bi
 can be replaced with two less-than-or-equal-to constraints
 ai,1 x1 + ai,2 x2 + ... + ai,n xn ≤ bi
 −ai,1 x1 − ai,2 x2 − ... − ai,n xn ≤ −bi.
- For a variable with both lower and upper bounds
 lk ≤ xk ≤ uk, (17)
 there are four cases:
 1. If lk = −∞ and uk is finite, we can rewrite (17) as uk − xk ≥ 0, and then define x'k = uk − xk. Now, if we substitute xk with uk − x'k, (17) can be replaced by x'k ≥ 0.
 2. If lk is finite and uk = +∞, we can rewrite (17) as 0 ≤ xk − lk, and then define x'k = xk − lk. Now, if we substitute xk with x'k + lk, (17) can be replaced by x'k ≥ 0.
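The bound tricks above are just changes of variables, and they can be sanity-checked numerically. A small Python sketch (the sample values of l, u, and x are arbitrary):

```python
# Check the bound-shift trick: l <= x <= u with finite l, u becomes
# 0 <= x' <= u - l under the substitution x' = x - l, and x = x' + l
# recovers the original variable.
l, u = -3.0, 5.0
for x in [-3.0, 0.0, 2.5, 5.0]:
    xp = x - l                       # substituted variable x' = x - l
    assert 0.0 <= xp <= u - l        # x' satisfies the standard-form bounds
    assert xp + l == x               # substitution recovers the original x

# Check the free-variable split: any real x equals xplus - xminus for some
# xplus, xminus >= 0 (e.g. the positive and negative parts of x).
for x in [-7.2, 0.0, 3.1]:
    xplus, xminus = max(x, 0.0), max(-x, 0.0)
    assert xplus >= 0.0 and xminus >= 0.0
    assert xplus - xminus == x
print("bound substitutions verified")
```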
 3. If both lk and uk are finite, we can rewrite (17) as 0 ≤ xk − lk ≤ uk − lk, and then define x'k = xk − lk. Now, if we substitute xk with x'k + lk, (17) can be replaced by
 x'k ≤ uk − lk
 x'k ≥ 0.
 4. If lk = −∞ and uk = +∞, we can introduce two variables x+k and x−k. Now, if we substitute xk with x+k − x−k, (17) can be replaced by
 x+k ≥ 0
 x−k ≥ 0.

We often write the standard form LP (13)-(16) in matrix notation:

max ζ = c'x
s.t. Ax ≤ b
 x ≥ 0,

where c = [c1; c2; ...; cn] is n×1, x = [x1; x2; ...; xn] is n×1, A is the m×n matrix with entries ai,j, and b = [b1; b2; ...; bm] is m×1. We will use n and m to denote the number of variables and the number of constraints, respectively.

The idea of making decisions to maximize an objective function subject to certain constraints applies to more general forms of mathematical programs:

max{f(x) : G(x) ≤ 0 (m×1), x ∈ X},

where f(·): R^n → R, G(·): R^n → R^m, and X is the feasible region.

Linear Programming has a linear objective function f(x) and linear constraints G(x) ≤ 0. For the linear program (1)-(4),

f(x) = 2x1 + 3x2, G(x) = [1 4; 1 −1][x1; x2] − [8; 1], X = {(x1, x2) : x1 ≥ 0, x2 ≥ 0}.

Nonlinear Programming has a nonlinear objective function f(x) and/or nonlinear constraints G(x) ≤ 0. For example,

max 2x1² + 3x2 (18)
s.t. x1² + 4x2 ≤ 8 (19)
 x1 − x2 ≤ 1 (20)
 x1, x2 ≥ 0 (21)
Here,

f(x) = 2x1² + 3x2, G(x) = [x1² + 4x2 − 8; x1 − x2 − 1], X = {(x1, x2) : x1 ≥ 0, x2 ≥ 0}.

This problem is illustrated in Figure 2.

Figure 2: Graphical solution to nonlinear program (18)-(21)

Integer Programming requires that some or all of the variables be integers. For example,

max 2x1 + 3x2 (22)
s.t. x1 + 4x2 ≤ 8 (23)
 x1 − x2 ≤ 1 (24)
 x1, x2 ∈ {0, 1, 2, ...} (25)

Here,

f(x) = 2x1 + 3x2, G(x) = [1 4; 1 −1][x1; x2] − [8; 1], X = {(x1, x2) : x1, x2 ∈ {0, 1, 2, ...}}.

This problem is illustrated in Figure 3.
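Because the feasible integer points of (22)-(25) lie in a small box, this integer program can be solved by plain enumeration. A short Python sketch (the box bounds 0 ≤ x1 ≤ 8, 0 ≤ x2 ≤ 2 are read off constraint (23)):

```python
from itertools import product

# Enumerate all integer points satisfying (23)-(25) and keep the best.
best = max((p for p in product(range(9), range(3))
            if p[0] + 4 * p[1] <= 8 and p[0] - p[1] <= 1),
           key=lambda p: 2 * p[0] + 3 * p[1])
print(best, 2 * best[0] + 3 * best[1])   # (2, 1) with objective 7
```

Note that the integer optimum (2, 1) with ζ = 7 differs from the LP optimum (2.4, 1.4) with ζ = 9; in general, simply rounding an LP solution is not guaranteed to be feasible, let alone optimal.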
2 Computer Solvers

Figure 3: Graphical solution to integer program (22)-(25)

There are numerous LP computer solvers, among which are GLPK (GNU Linear Programming Kit) and MATLAB. Below are examples of solving the LP (1)-(4) using GLPK and MATLAB.

2.1 GLPK for linear program

Step 0: Download a compiled version of GLPK for Windows from the course web. A good location to extract the files to is C:\User Files. Do NOT extract them to the U: drive.

Step 1: Create a text document named example1.txt in the same folder with GLPK, type the following code, and save the file.

var x1 >= 0;
var x2 >= 0;
maximize zeta: 2 * x1 + 3 * x2;
subject to
c1: x1 + 4 * x2 <= 8;
c2: x1 - x2 <= 1;
end;

Step 2: Open the Command Prompt in the folder and type the following command:

glpsol --math example1.txt --output example1_output.txt

and then press Enter. A new file named example1_output.txt will be created. This file contains the optimal solution x1 = 2.4, x2 = 1.4, and the optimal objective value ζ = 9. For more information about GLPK, read its manual and/or visit its website.

2.2 MATLAB for linear program

Step 1: Open MATLAB, create a new file, and type the following code in the Editor Window:

c = [2, 3]';
A = [1, 4; 1, -1];
b = [8, 1]';
Aeq = [];
beq = [];
lx = [0, 0]';
[x, zeta] = linprog(-c, A, b, Aeq, beq, lx)

The default LP formulation that MATLAB assumes is min{c'x : Ax ≤ b, Aeq x = beq, lx ≤ x ≤ ux}, so we need to give it the negative of c. Save this file somewhere, and give it a name, e.g., example1. Do NOT use linprog or quadprog as the name, because they are reserved for MATLAB system functions.

Step 2: Press F5 or hit the run button. The result will be given in the Command Window:

Optimization terminated.
x =
 2.4000
 1.4000
zeta =
 -9.0000

The optimal solution here is the same as given by GLPK, but the optimal objective value has an opposite sign since MATLAB solves a minimization problem. For more information about MATLAB, read its help document and/or visit its website.

2.3 MATLAB for quadratic program

MATLAB has a function quadprog that solves quadratic programs with linear constraints, for example

min 0.5x1² + x2² − x1x2 − 2x1 − 6x2
s.t. x1 + x2 ≤ 2
 −x1 + 2x2 ≤ 2
 2x1 + x2 ≤ 3
 x1, x2 ≥ 0

This QP can be equivalently rewritten in the following matrix form:

min{c'x + (1/2) x'Qx : Ax ≤ b, x ≥ 0},

where

c = [−2; −6], Q = [1 −1; −1 2], A = [1 1; −1 2; 2 1], b = [2; 2; 3].

Here the matrix Q can be obtained as Qi,j = ∂²f(x)/∂xi∂xj, in which f(x) denotes the objective function. The default QP formulation that MATLAB assumes is min{c'x + 0.5x'Qx : Ax ≤ b, Aeq x = beq, lx ≤ x ≤ ux}. So, we can use the following MATLAB code to solve the problem:
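The rule Qi,j = ∂²f(x)/∂xi∂xj can be verified numerically with central finite differences, since for a quadratic objective the Hessian is a constant matrix. A Python sketch (the evaluation point is arbitrary):

```python
# Numerically check that Q[i][j] = d^2 f / (dx_i dx_j) for the QP objective
# f(x) = 0.5*x1^2 + x2^2 - x1*x2 - 2*x1 - 6*x2, using central differences.
def f(x1, x2):
    return 0.5 * x1**2 + x2**2 - x1 * x2 - 2 * x1 - 6 * x2

def hessian(f, x, h=1e-4):
    """2x2 Hessian of f at x by central finite differences."""
    H = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            def shift(si, sj):
                p = list(x)
                p[i] += si * h
                p[j] += sj * h
                return f(*p)
            H[i][j] = (shift(1, 1) - shift(1, -1)
                       - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)
    return H

H = hessian(f, (0.7, 1.3))
Q = [[1.0, -1.0], [-1.0, 2.0]]
print(H)   # approximately equal to Q
```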
A = [1, 1; -1, 2; 2, 1];
b = [2, 2, 3]';
c = [-2, -6]';
Aeq = [];
beq = [];
Q = [1, -1; -1, 2];
lx = zeros(2,1);
[x, zeta] = quadprog(Q, c, A, b, Aeq, beq, lx)

The optimal solution is x1 = 0.67, x2 = 1.33, and the optimal objective value is ζ = −8.22.

2.4 GLPK for mixed integer linear program

Integer programs that contain both continuous and integer variables are called mixed integer programs. GLPK is able to solve mixed integer linear programs. For example, to solve

min x1 − 2x2 − 3x3 (26)
s.t. x1 + 7x2 − 8x3 ≤ 12 (27)
 −x1 + x2 + 3x3 ≥ 1 (28)
 5x2 + 9x3 = 13 (29)
 x1 ∈ {0, 1}, x2 ∈ {0, 1, 2, 3}, x3 ≥ 0, (30)

we can use the following GLPK code:

var x1 binary;
var x2 integer, >=0, <=3;
var x3 >=0;
minimize zeta: x1 - 2 * x2 - 3 * x3;
subject to
c1: x1 + 7 * x2 - 8 * x3 <= 12;
c2: -x1 + x2 + 3 * x3 >= 1;
c3: 5 * x2 + 9 * x3 = 13;
end;

The optimal solution is x1 = 0, x2 = 2, x3 = 0.33, and the optimal objective value is ζ = −5.

2.5 GLPKMEX for mixed integer linear program

GLPKMEX is a MATLAB interface to GLPK, which enables one to use GLPK through MATLAB code. A copy of GLPKMEX can be downloaded from the course web. With MATLAB's current directory being where GLPKMEX is saved, the following MATLAB code can be used to solve the integer program (26)-(30):

c = [1, -2, -3]';
A = [1, 7, -8; -1, 1, 3; 0, 5, 9];
b = [12, 1, 13]';
lx = [0, 0, 0]';
ux = [1, 3, inf]';
ctype = ['U', 'L', 'S'];
vartype = ['B', 'I', 'C'];
s = 1;
[x, zeta] = glpk(c, A, b, lx, ux, ctype, vartype, s)

For more information about GLPKMEX, read the file glpk.m and/or visit its website.

3 Modeling

Mathematical programming is rooted in real life problems, where people always have some objectives to maximize or minimize, but these objectives are kept from going to infinity by various constraints. An important goal of this course is to develop the skills of using mathematical programming (especially linear programming) to formulate, solve, and analyze real life problems. Three examples are given below.

3.1 Arbitrage in currency exchange

In currency markets, arbitrage is trading among different currencies in order to profit from a rate discrepancy. For example, if the exchange rates between dollar and euro are 1 dollar → 0.7 euro and 1 euro → 1.5 dollar, then a profit can be made by trading from dollar to euro and then back to dollar. Obviously, arbitrage opportunities are not supposed to exist, at least in theory. The following are actual trades made on February 14 of 2002, with minor modification. How can we use an LP model to identify whether an arbitrage opportunity exists in this example? Notice that an arbitrage may involve more than two currencies, e.g., Dollar → Yen → Pound → Euro → Dollar.

[Table of exchange rates, "from \ to", among Dollar, Euro, Pound, and Yen; the numerical rates are not recoverable here.]
3.2 Turning junk into treasure

A company's large machine broke down. Instead of buying a new one, which is very expensive, they find that there are some old and broken machines of the same type in the junk yard, so they decide to reassemble a working machine out of the broken ones. This type of machine has ten different components, connected one after another like a train, as illustrated in Figure 4. Components at the same position from different machines are interchangeable, but components at different positions are not. A 0 means that the component is broken and a 1 means that the component is still good. They plan to reassemble the good components to make a complete working machine. The numbers between components represent the disassembling and reassembling costs. We assume that these costs are the same for all old machines. For example, they can make a working machine from (a) and (b), which requires three assembling points at (#2,#3), (#4,#5), and (#8,#9), with a total cost of 29.

Figure 4: Four examples of old machines, (a)-(d)

Suppose that there are 20 old machines in their junk yard, and the functioning status of their components is given in matrix A, in which each column represents an old machine. The assembling costs are given in vector C, with Ci representing the disassembling and reassembling cost between Ai,j and Ai+1,j for all j. Build a linear programming or integer programming model to find the least costly way to make a complete working machine using these old ones. Which old machines should be used to make it? What is the least possible cost?
[The 10×20 component-status matrix A and the cost vector C appear here in the original notes; their numerical entries are not recoverable.]

3.3 Old McDonald had a farm

A farmer can grow wheat, corn, and beans on his 500-acre farm. The planting costs are $150/acre, $230/acre, and $260/acre, and expected yields are 2.5 tons/acre, 3 tons/acre, and 20 tons/acre, respectively. He needs 200 tons of wheat and 240 tons of corn to feed his cattle. The farmer can sell wheat and corn to a wholesaler at $170/ton and $150/ton. He can also buy wheat and corn
from the wholesaler at $238/ton and $210/ton. Beans sell at $36/ton for the first 6000 tons. Due to economic quotas on bean production, beans in excess of 6000 tons can only be sold at $10/ton. How shall the farmer allocate his 500 acres to maximize his profit?

Define
 xW, xC, xB: acres of wheat, corn, and beans planted,
 wW, wC, wB: tons of wheat, corn, and beans sold at the favorable price,
 eB: tons of beans sold at the lower price,
 yW, yC: tons of wheat and corn purchased.

Then the profit maximization problem can be formulated as:

max ζ = −150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB
s.t. xW + xC + xB ≤ 500
 2.5xW + yW − wW = 200
 3xC + yC − wC = 240
 20xB − wB − eB = 0
 wB ≤ 6000
 xW, xC, xB, wW, wC, wB, eB, yW, yC ≥ 0

The optimal solution of this problem is xW = 120, xC = 80, xB = 300, wW = 100, wC = 0, wB = 6000, eB = 0, yW = 0, yC = 0, ζ = $118,600.

While this solution makes sense, it assumes that the yields (2.5 tons/acre, 3 tons/acre, and 20 tons/acre) are known. However, yields are greatly dependent on the weather and could increase by 20% in a good year and decrease by 20% in a bad one. If we know in advance that next year will be a good year and the yields become 3 tons/acre, 3.6 tons/acre, and 24 tons/acre, then the optimal solution becomes xW = 183.33, xC = 66.67, xB = 250, wW = 350, wC = 0, wB = 6000, eB = 0, yW = 0, yC = 0, ζ = $167,667. If we know in advance that next year will be a bad year and the yields become 2 tons/acre, 2.4 tons/acre, and 16 tons/acre, then the optimal solution becomes xW = 100, xC = 25, xB = 375, wW = 0, wC = 0, wB = 6000, eB = 0, yW = 0, yC = 180, ζ = $59,950. However, without knowing the weather of next year in advance, how can we formulate an LP model to find the optimal solution that maximizes the expected profit?
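The reported optimum of the deterministic LP can be verified directly: check each constraint and recompute the profit. A Python sketch:

```python
# Verify the reported optimum of the farmer LP: feasibility and profit.
xW, xC, xB = 120.0, 80.0, 300.0           # acres planted
yW, yC = 0.0, 0.0                         # tons purchased
wW, wC, wB, eB = 100.0, 0.0, 6000.0, 0.0  # tons sold

assert xW + xC + xB <= 500                # land limit
assert 2.5 * xW + yW - wW == 200          # wheat balance (2.5 t/acre)
assert 3.0 * xC + yC - wC == 240          # corn balance (3 t/acre)
assert 20.0 * xB - wB - eB == 0           # bean balance (20 t/acre)
assert wB <= 6000                         # favorable-price quota

profit = (-150 * xW - 230 * xC - 260 * xB
          - 238 * yW + 170 * wW
          - 210 * yC + 150 * wC
          + 36 * wB + 10 * eB)
print(profit)   # 118600.0
```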
Suppose the probabilities that next year will be a good, average, or bad year are 0.35, 0.4, and 0.25, respectively. Notice that the farm allocation decisions (xW, xC, xB) must be made at the beginning of the year without knowing the weather, while (yW, wW, yC, wC, wB, eB) can be decided after observing the weather. Therefore we can have three recourse copies of the latter group, with superscripts G, A, B denoting the scenario: (yGW, wGW, yGC, wGC, wGB, eGB) for a good year, (yAW, wAW, yAC, wAC, wAB, eAB) for an average year, and (yBW, wBW, yBC, wBC, wBB, eBB) for a bad year. The expected profit can then be defined as:

E[ζ] = −150xW − 230xC − 260xB
 + 0.35 (−238yGW + 170wGW − 210yGC + 150wGC + 36wGB + 10eGB)
 + 0.40 (−238yAW + 170wAW − 210yAC + 150wAC + 36wAB + 10eAB)
 + 0.25 (−238yBW + 170wBW − 210yBC + 150wBC + 36wBB + 10eBB)
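For any fixed planting plan, the optimal recourse decouples by crop: surplus is sold (beans at the quota price first), shortfalls of wheat and corn are purchased, and selling in order to buy back is never profitable because the sale prices are below the purchase prices. This makes E[ζ] easy to evaluate for any candidate plan. A Python sketch, evaluating the mean-yield plan from the previous section:

```python
# Evaluate the expected profit of a candidate first-stage plan by computing
# the optimal recourse in each weather scenario.
def scenario_profit(acres, yield_factor):
    xW, xC, xB = acres
    tW = 2.5 * yield_factor * xW          # tons of wheat harvested
    tC = 3.0 * yield_factor * xC          # tons of corn harvested
    tB = 20.0 * yield_factor * xB         # tons of beans harvested
    profit = -150 * xW - 230 * xC - 260 * xB
    profit += 170 * (tW - 200) if tW >= 200 else -238 * (200 - tW)  # wheat
    profit += 150 * (tC - 240) if tC >= 240 else -210 * (240 - tC)  # corn
    profit += 36 * min(tB, 6000) + 10 * max(tB - 6000, 0)           # beans
    return profit

def expected_profit(acres):
    # good / average / bad years: yields scale by 1.2 / 1.0 / 0.8
    return (0.35 * scenario_profit(acres, 1.2)
            + 0.40 * scenario_profit(acres, 1.0)
            + 0.25 * scenario_profit(acres, 0.8))

print(expected_profit((120, 80, 300)))   # approximately 113020
```

The mean-yield plan (120, 80, 300) has an expected profit of about $113,020, below the $118,600 it earns when yields are exactly average; the stochastic LP searches over all plans to maximize this expectation.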
3.4 Minimizing a convex piecewise linear objective function

A function f: R^n → R is called convex if for all x, y ∈ R^n and all λ ∈ [0, 1], we have
 f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).
A function f: R^n → R is called concave if for all x, y ∈ R^n and all λ ∈ [0, 1], we have
 f(λx + (1 − λ)y) ≥ λf(x) + (1 − λ)f(y).

Some properties of convex and concave functions:
1. If f is a convex function, then −f is a concave function.
2. If f1, f2, ..., fn are all convex functions, then max{f1, f2, ..., fn} is a convex function.
3. If f1, f2, ..., fn are all concave functions, then min{f1, f2, ..., fn} is a concave function.

It is much easier to find the minimum of a convex function than of a concave one. Similarly, it is much easier to find the maximum of a concave function than of a convex one. Consider the following problem:

min f(x1, x2) (31)
s.t. x1 + 4x2 ≤ 8 (32)
 x1 − x2 ≤ 1 (33)
 x1, x2 ≥ 0, (34)

where the objective function f(x1, x2) is a convex piecewise linear function:

f(x1, x2) = max{x1 − 3x2, 4x1 − x2, 2x1 + 5x2}.

To reformulate problem (31)-(34) as a linear program, we use the fact that max{x1 − 3x2, 4x1 − x2, 2x1 + 5x2} is equal to the smallest number z that satisfies z ≥ x1 − 3x2, z ≥ 4x1 − x2, and z ≥ 2x1 + 5x2. Therefore, problem (31)-(34) is equivalent to the following linear program:

min z
s.t. x1 + 4x2 ≤ 8
 x1 − x2 ≤ 1
 z ≥ x1 − 3x2
 z ≥ 4x1 − x2
 z ≥ 2x1 + 5x2
 x1, x2 ≥ 0; z free

Challenge 1: Can you reformulate the problem if the objective function is redefined as f(x1, x2) = min{x1 − 3x2, 4x1 − x2, 2x1 + 5x2}?
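The epigraph trick can be checked pointwise: at any fixed (x1, x2), the smallest z satisfying the three inequalities is exactly the piecewise maximum, so minimizing z over the enlarged problem minimizes f. A Python sketch (the sample points are arbitrary):

```python
# At a fixed (x1, x2), the smallest feasible z in the reformulation equals
# f(x1, x2) = max of the three linear pieces.
pieces = [lambda x1, x2: x1 - 3 * x2,
          lambda x1, x2: 4 * x1 - x2,
          lambda x1, x2: 2 * x1 + 5 * x2]

def f(x1, x2):
    return max(p(x1, x2) for p in pieces)

for x1, x2 in [(0.0, 0.0), (2.4, 1.4), (1.0, 0.0), (0.0, 2.0)]:
    z = f(x1, x2)
    assert all(z >= p(x1, x2) for p in pieces)   # z is feasible
    assert any(z == p(x1, x2) for p in pieces)   # and cannot be any smaller
print("smallest feasible z equals the piecewise maximum at every test point")
```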
3.5 Dealing with binary decision variables

Consider the following nonlinear nonconvex binary program:

max x1 − 2x2 + 5x3 − x1² + 5x2² − x3² − 7x1x2 − 4x1x3 + x2x3 (35)
s.t. x1 + 4x2 − 2x3 ≤ 4 (36)
 4x1x2 + 3x3² − 2x1x3 ≤ x2 + 3 (37)
 (x1, x2, x3) ≠ (1, 0, 1) (38)
 x1, x2, x3 binary (39)

This problem can be reformulated as an LP with binary decision variables by using the following tricks.

- Replace the squared terms x1², x2², and x3² with x1, x2, and x3, respectively (for a binary variable, xi² = xi).
- For other quadratic terms such as x1x3, replace the product with a new binary variable x13 and add two new constraints: 2x13 ≤ x1 + x3 ≤ 1 + x13.
- Replace constraint (38) with a new constraint (1 − x1) + x2 + (1 − x3) ≥ 1.

The resulting model is the following LP with binary decision variables:

max x1 − 2x2 + 5x3 − x1 + 5x2 − x3 − 7x12 − 4x13 + x23 (40)
s.t. x1 + 4x2 − 2x3 ≤ 4 (41)
 4x12 + 3x3 − 2x13 ≤ x2 + 3 (42)
 2x12 ≤ x1 + x2 ≤ 1 + x12 (43)
 2x13 ≤ x1 + x3 ≤ 1 + x13 (44)
 2x23 ≤ x2 + x3 ≤ 1 + x23 (45)
 (1 − x1) + x2 + (1 − x3) ≥ 1 (46)
 x1, x2, x3, x12, x13, x23 binary (47)

Challenge 2: The formulation (40)-(47) can be further simplified because some constraints are redundant. Without actually solving for the optimal solution, can you identify which constraints can be removed and still guarantee that the model gives the correct optimal solution?

Challenge 3: If we have an x1x2x3 term in the objective function, how can you linearize this nonlinear term by introducing new variables and constraints?
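Since everything is binary, the product linearization can be verified exhaustively. A Python sketch:

```python
from itertools import product

# Check that for binary x1, x3, the constraints 2*x13 <= x1 + x3 <= 1 + x13
# force the auxiliary binary variable x13 to equal the product x1*x3.
for x1, x3 in product((0, 1), repeat=2):
    valid = [x13 for x13 in (0, 1)
             if 2 * x13 <= x1 + x3 <= 1 + x13]
    assert valid == [x1 * x3]   # exactly one feasible value: the product
print("linearization is exact on all binary inputs")
```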
We can reformulate this problem as a mixed integer program by introducing binary variables y and a constant M, whose value is set to be sufficiently large that a constraint ai'x ≤ bi + M can be ignored, because it will never be violated:

max c'x
s.t. a1'x ≤ b1 + M(1 − y1)
 a2'x ≤ b2 + M(1 − y2)
 ...
 am'x ≤ bm + M(1 − ym)
 y1 + y2 + ... + ym ≥ k
 x ≥ 0, y binary

Challenge 4: Can you reformulate the problem if the "at least k out of m constraints must hold" requirement is changed to "exactly k out of m constraints must hold"?

4 Optimality Conditions

Different algorithms may vary in details, but most of them contain the same basic steps:

Step 1: Find a feasible solution to start from.
Step 2: Check to see if the current solution is optimal.
Step 3: If the solution is optimal, then stop. Otherwise find a better solution and go to Step 2.

In order for an algorithm to be thorough, it also needs to be able to identify infeasible or unbounded problems.

We first introduce the KKT (Karush-Kuhn-Tucker) conditions, which are widely used for checking the optimality of a solution to mathematical programs. We consider a general form nonlinear program

max{f(x) : G(x) ≥ 0 (m×1)}, (48)

where f(·): R^n → R and G(·): R^n → R^m. The KKT conditions are: there exists a vector y ∈ R^m such that the following constraints hold:

∇f(x) + Σ_{i=1}^m yi ∇Gi(x) = 0 (n×1) (49)
Gi(x) ≥ 0, i = 1, ..., m (50)
yi ≥ 0, i = 1, ..., m (51)
yi Gi(x) = 0, i = 1, ..., m (52)

To interpret the KKT conditions from a more intuitive perspective, let's look at LP (1)-(4) again and use it as an example to see why the KKT conditions (49)-(52) make sense. If we rewrite LP (1)-(4) in the form of (48), then f(x) = 2x1 + 3x2,

G(x) = [G1(x); G2(x); G3(x); G4(x)] = [8 − x1 − 4x2; 1 − x1 + x2; x1; x2],

m = 4, and n = 2.
Figure 4: Figure 1 from a different angle

We look at Figure 1 from a different angle, as if the constraints are walls and the objective function points in the direction of gravity, as shown in Figure 4. Now, what will happen if we put a small pingpong ball inside the two-dimensional feasible region and let it go? Since this environment is exactly analogous to LP (1)-(4), we can imagine that the ball will end up nowhere but the optimal solution to the LP. If you still remember physics: in order for the ball to stop moving, all forces that apply to it must cancel out. The possible forces are gravity, in the direction of ∇f(x), and a force from constraint wall i, in the direction of ∇Gi(x), for i = 1, ..., m. The magnitude of gravity depends on the mass of the ball; let's assume it is 1. For i = 1, ..., m, let the magnitude of the force from wall i be denoted by yi. Now, from the physics perspective, the conditions for the ball to stop are:

All forces cancel out:
 ∇f(x) + Σ_{i=1}^m yi ∇Gi(x) = 0 (n×1) (53)
The ball must stay inside the walls:
 Gi(x) ≥ 0, i = 1, ..., m (54)
Force magnitudes are nonnegative:
 yi ≥ 0, i = 1, ..., m (55)
If the ball does not touch a wall, the force from that wall is zero:
 yi Gi(x) = 0, i = 1, ..., m (56)

Notice that conditions (53)-(56) are exactly the same as the KKT conditions (49)-(52); this is an interpretation of the KKT conditions from the physics perspective. KKT conditions (49)-(52) can also be written as

∇f(x) + Σ_{i=1}^m yi ∇Gi(x) = 0 (n×1) (57)
0 ≤ G(x) ⊥ y ≥ 0 (58)
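Conditions (53)-(56) can be checked numerically for LP (1)-(4) at the corner x = (2.4, 1.4), where only walls 1 and 2 are touched and the force magnitudes turn out to be y = (1, 1, 0, 0). A Python sketch:

```python
# Check conditions (53)-(56) for LP (1)-(4) at x = (2.4, 1.4), y = (1, 1, 0, 0).
x = (2.4, 1.4)
y = (1.0, 1.0, 0.0, 0.0)           # walls 3 and 4 are not touched: zero force

G = [8 - x[0] - 4 * x[1],          # G1: wall x1 + 4x2 <= 8
     1 - x[0] + x[1],              # G2: wall x1 - x2 <= 1
     x[0],                         # G3: wall x1 >= 0
     x[1]]                         # G4: wall x2 >= 0
grad_f = (2.0, 3.0)
grad_G = [(-1.0, -4.0), (-1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]

# (53) all forces cancel out
for j in range(2):
    assert abs(grad_f[j] + sum(y[i] * grad_G[i][j] for i in range(4))) < 1e-9
# (54) ball inside the walls, (55) nonnegative magnitudes, (56) complementarity
assert all(g >= -1e-9 for g in G)
assert all(v >= 0 for v in y)
assert all(abs(y[i] * G[i]) < 1e-9 for i in range(4))
print("KKT conditions hold at (2.4, 1.4)")
```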
Here ⊥ indicates the complementarity of two vectors. If vectors a ∈ R^n and b ∈ R^n are of the same dimension, then 0 ≤ a ⊥ b ≥ 0 means that: (i) a ≥ 0, (ii) b ≥ 0, and (iii) a'b = 0. Two vectors that satisfy condition (iii) are said to be complementary or perpendicular to each other.

In general, for nonlinear programs, the KKT conditions may be neither necessary nor sufficient optimality conditions. For example, the solution (x1 = 1, x2 = 0) does not satisfy the KKT conditions, but it is optimal to

max x1
s.t. x2 + (x1 − 1)³ ≤ 0
 x2 ≥ 0

The solution (x1 = 0.5, x2 = 0.25) satisfies the KKT conditions, but it is not optimal to

min x2
s.t. x1² − x1 + x2 ≥ 0
 0.5 ≤ x1 ≤ 2

For linear programs, the KKT conditions are both necessary and sufficient for optimality, which means that a solution to an LP is optimal if and only if it satisfies the KKT conditions. We now derive the KKT conditions for the standard form linear program

max{c'x : Ax ≤ b, x ≥ 0}, (59)

which can be written in the form of (48) as

max{f(x) = c'x : G(x) = [b − Ax; x] ≥ 0 ((m+n)×1)}.

Then the KKT conditions for a standard form linear program become: there exist two vectors y ∈ R^m and μ ∈ R^n, corresponding to Ax ≤ b and x ≥ 0, respectively, such that the following constraints are satisfied:

c − A'y + μ = 0 (60)
0 ≤ [b − Ax; x] ⊥ [y; μ] ≥ 0 (61)

Equation (60) is equivalent to μ = A'y − c, with which we can substitute μ in (61). Now, (60)-(61) simplify to:

0 ≤ [b − Ax; x] ⊥ [y; A'y − c] ≥ 0 (62)

Condition (62) is necessary and sufficient for the optimality of the standard form LP (59); thus it is also called the optimality condition. As an exercise, let's prove the optimality of the solution (x1 = 2.4, x2 = 1.4) to LP (1)-(4) using the KKT conditions. We plug in the numbers, and the KKT conditions (62)
become: there exists a vector y such that the following constraints hold:

0 ≤ [8 − 2.4 − 4(1.4); 1 − 2.4 + 1.4] ⊥ [y1; y2] ≥ 0
0 ≤ [2.4; 1.4] ⊥ [y1 + y2 − 2; 4y1 − y2 − 3] ≥ 0

It is not hard to find that y = [1; 1] satisfies the above conditions, which confirms the optimality of (x1 = 2.4, x2 = 1.4) for LP (1)-(4).

As another exercise, let's find the optimality condition of a non-standard form LP:

min{b'μ : A'μ ≥ c, μ ≥ 0}. (63)

LP (63) can be equivalently rewritten in the standard form as follows:

max{−b'μ : −A'μ ≤ −c, μ ≥ 0}. (64)

Let's relate (μ, −A', −c, −b) in (64) to (x, A, b, c) in (59), respectively. We also introduce a new variable λ to relate to y in (62). Then the optimality condition for (63) and (64) is:

0 ≤ [A'μ − c; μ] ⊥ [λ; b − Aλ] ≥ 0. (65)

It is interesting to notice that the optimality conditions (62) and (65) are equivalent to each other, in the sense that if (x, y) is a feasible solution to (62), then (μ = y, λ = x) is a feasible solution to (65), and vice versa. For this reason, we can say that LPs (59) and (63) actually share the same optimality condition, if we simply change the notation μ to y in (63):

min{b'y : A'y ≥ c, y ≥ 0}. (66)

Once we find a pair of feasible solutions (x, y) to the optimality condition (62), then x and y are the optimal solutions to LPs (59) and (66), respectively. In the optimality condition (62):

0 ≤ [b − Ax; x] ⊥ [y; A'y − c] ≥ 0,

the left hand side is simply saying that x must be feasible to (59), and the right hand side is saying that y must be feasible to (66). These two problems both have many feasible solutions, but what makes the optimal solutions (x*, y*) optimal is the fact that they also satisfy the complementarity conditions: 0 ≤ b − Ax* ⊥ y* ≥ 0 and 0 ≤ x* ⊥ A'y* − c ≥ 0.

5 Duality

We are ready to introduce perhaps the most important concept in linear programming: duality. Two linear programs, like (59) and (66), that share the same optimality condition in a complementary manner are called dual to each other: the decision variable y in (66) gives the magnitudes of the forces from the constraint walls of (59), and vice versa. Either one of the LPs
can be called the primal problem
and the other is the dual problem. Decision variables of the primal (dual) problem are called primal (dual) variables. Every linear program has a dual problem, which can be found by reformulating the primal problem in the same form as either (59) or (66), and then simplifying its corresponding dual problem. For example, to find the dual problem of

min{c'x : Ax ≥ b}, (67)

we first introduce two non-negative variables x+ and x− to replace the free variable x with x = x+ − x−, and then rewrite (67) in the same form as (66):

min{[c; −c]'[x+; x−] : [A −A][x+; x−] ≥ b, [x+; x−] ≥ 0}, (68)

whose dual problem is

max{b'y : [A'; −A']y ≤ [c; −c], y ≥ 0}, (69)

which can be simplified as

max{b'y : A'y = c, y ≥ 0}. (70)

Now we have found that the dual to (67) is (70). The following table summarizes the primal-dual relations for general form LPs:

 max (primal) | min (dual)
 c1x1 + c2x2 + c3x3 (objective) | b1y1 + b2y2 + b3y3 (objective)
 a1,1x1 + a1,2x2 + a1,3x3 ≤ b1 | y1 ≥ 0
 a2,1x1 + a2,2x2 + a2,3x3 = b2 | y2 free
 a3,1x1 + a3,2x2 + a3,3x3 ≥ b3 | y3 ≤ 0
 x1 ≥ 0 | a1,1y1 + a2,1y2 + a3,1y3 ≥ c1
 x2 ≤ 0 | a1,2y1 + a2,2y2 + a3,2y3 ≤ c2
 x3 free | a1,3y1 + a2,3y2 + a3,3y3 = c3

We can use this table to find the dual of a general form LP directly. For example,

(P21): min x1 + 2x2 + 3x3
s.t. x1 − 3x2 = 5
 2x1 − x2 + 3x3 ≥ 6
 x3 ≤ 4
 x1 ≥ 0, x2 ≤ 0, x3 free

(D21): max 5y1 + 6y2 + 4y3
s.t. y1 free, y2 ≥ 0, y3 ≤ 0
 y1 + 2y2 ≤ 1
 −3y1 − y2 ≥ 2
 3y2 + y3 = 3

This table can also be used to check the optimality of a solution to a non-standard form LP. If (i) x satisfies the constraints of the primal problem, (ii) there exists a y that satisfies the constraints of the dual, and (iii) the corresponding constraints and variables are complementary to each other, then (x, y) is optimal. For example, we can check the optimality of (x1 = 5, x2 = 0, x3 = −4/3)
and (y1 = −1, y2 = 1, y3 = 0) by observing that (i) x is primal feasible, (ii) y is dual feasible, and (iii)

(x1 − 3x2 − 5)y1 = (2x1 − x2 + 3x3 − 6)y2 = (x3 − 4)y3 = x1(y1 + 2y2 − 1) = x2(−3y1 − y2 − 2) = x3(3y2 + y3 − 3) = 0.

The following are some important theorems on duality.

Complementary Slackness: Solutions x and y are optimal to max{c'x : Ax ≤ b, x ≥ 0} and min{b'y : A'y ≥ c, y ≥ 0}, respectively, if and only if condition (62) is met:

0 ≤ [b − Ax; x] ⊥ [y; A'y − c] ≥ 0.

Weak Duality: If x and y are feasible solutions to max{c'x : Ax ≤ b, x ≥ 0} and min{b'y : A'y ≥ c, y ≥ 0}, respectively, then c'x ≤ b'y.

Proof. Since x and y are feasible, we have

0 ≤ (b − Ax)'y = b'y − (Ax)'y = b'y − y'Ax, and
0 ≤ (A'y − c)'x = y'Ax − c'x.

Therefore, c'x ≤ y'Ax ≤ b'y.

Strong Duality: If x and y are optimal solutions to max{c'x : Ax ≤ b, x ≥ 0} and min{b'y : A'y ≥ c, y ≥ 0}, respectively, then c'x = b'y.

Proof. Since x and y are optimal, by complementary slackness, we have

0 = (b − Ax)'y = b'y − (Ax)'y = b'y − y'Ax, and
0 = (A'y − c)'x = y'Ax − c'x.

Therefore, c'x = y'Ax = b'y.

Primal-Dual Possibility Table: Recall that a linear program has three possibilities: finitely optimal, infeasible, or unbounded. The primal-dual pair has nine combinations, but only four of them are possible.

 Primal \ Dual | Finitely optimal | Unbounded | Infeasible
 Finitely optimal | Possible | Impossible | Impossible
 Unbounded | Impossible | Impossible | Possible
 Infeasible | Impossible | Possible | Possible

Two examples of "primal unbounded and dual infeasible", which, with the roles of the two problems swapped, are also examples of "primal infeasible and dual unbounded":

 Primal: max x1 + x2 Dual: min y
 s.t. x1 + 0x2 ≤ 1 s.t. y ≥ 1
 x1, x2 ≥ 0 0y ≥ 1
 y ≥ 0
 Primal: max 3x1 + 4x2 Dual: min y1 − 3y2
 s.t. x1 + x2 ≤ 1 s.t. y1 + 2y2 ≥ 3
 2x1 − x2 ≤ −3 y1 − y2 ≥ 4
 x1, x2 ≥ 0 y1, y2 ≥ 0

Two examples of "both primal and dual are infeasible":

 Primal: min x1 + 2x2 Dual: max y1 + 2y2
 s.t. x1 + x2 = 1 s.t. y1 + y2 = 1
 x1 + x2 = 2 y1 + y2 = 2

 Primal: max 2x1 − x2 Dual: min y1 − 2y2
 s.t. x1 − x2 ≤ 1 s.t. y1 − y2 ≥ 2
 −x1 + x2 ≤ −2 −y1 + y2 ≥ −1
 x1, x2 ≥ 0 y1, y2 ≥ 0

For an LP to be unbounded, there must exist two things: (i) a feasible solution x0, satisfying Ax0 ≤ b, x0 ≥ 0, and (ii) a direction Δx, which leads the objective value c'(x0 + λΔx) towards infinity without violating any constraints as the step size λ approaches infinity. This direction is called an extreme ray. If Δx satisfies

c'Δx > 0, AΔx ≤ 0, Δx ≥ 0,

then Δx is called an extreme ray of the LP max{c'x : Ax ≤ b, x ≥ 0}. In the dual space, if Δy satisfies

b'Δy < 0, A'Δy ≥ 0, Δy ≥ 0,

then Δy is called an extreme ray of the LP min{b'y : A'y ≥ c, y ≥ 0}.

Farkas' Lemma: Let A ∈ R^{m×n} and b ∈ R^{m×1} be a matrix and a vector, respectively. Then exactly one of the following two alternatives holds:
(a) There exists some x ≥ 0 such that Ax ≤ b.
(b) There exists some y ≥ 0 such that A'y ≥ 0 and b'y < 0.

Proof. (a) true ⇒ (b) false: If (a) is true, then for any y ≥ 0 such that A'y ≥ 0, we have b'y ≥ (Ax)'y = x'A'y ≥ 0, which means that (b) is false.
(a) false ⇒ (b) true: Consider max{0'x : Ax ≤ b, x ≥ 0} and its dual min{b'y : A'y ≥ 0, y ≥ 0}. If (a) is false, then max{0'x : Ax ≤ b, x ≥ 0} is infeasible, so its dual is either unbounded or infeasible. It is easy to see that y = 0 is a feasible solution to the dual, so it must be unbounded, which means that there must exist some y ≥ 0 such that A'y ≥ 0 and b'y < 0. Therefore, (b) is true.

A Variation of Farkas' Lemma: Let A ∈ R^{m×n} and b ∈ R^{m×1} be a matrix and a vector, respectively. Then exactly one of the following two alternatives holds:
(a) There exists some x ≥ 0 such that Ax = b.
(b) There exists some y such that A'y ≥ 0 and b'y < 0.

Proof. (a) true ⇒ (b) false: If (a) is true, then for any y such that A'y ≥ 0, we have b'y = (Ax)'y = x'A'y ≥ 0, which means that (b) is
false.
(a) false ⇒ (b) true: Consider max{0'x : Ax = b, x ≥ 0} and its dual min{b'y : A'y ≥ 0}. If (a) is false, then max{0'x : Ax = b, x ≥ 0} is infeasible. According to the Primal-Dual Possibility Table, its dual is either unbounded or infeasible. It is easy to see that y = 0 is a feasible solution to the dual, so it must be unbounded, which means that there must exist some y such that A'y ≥ 0 and b'y < 0. Therefore, (b) is true.
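Farkas' Lemma can be illustrated on concrete data: exhibiting a certificate y for alternative (b) proves that the system in alternative (a) has no solution. A Python sketch (the matrix A, vector b, and certificate y below are our own small example):

```python
# A concrete instance of Farkas' Lemma: the system x >= 0, Ax <= b below is
# infeasible, and y certifies it via alternative (b).
A = [(1.0, 1.0), (-2.0, 1.0), (0.0, -1.0)]   # rows a_i of a hypothetical A
b = [1.0, -3.0, 0.0]
y = [2.0, 1.0, 3.0]

# check alternative (b): y >= 0, A'y >= 0, and b'y < 0
assert all(v >= 0 for v in y)
Aty = [sum(A[i][j] * y[i] for i in range(3)) for j in range(2)]
assert all(v >= 0 for v in Aty)
assert sum(bi * yi for bi, yi in zip(b, y)) < 0
print("y is a Farkas certificate: no x >= 0 satisfies Ax <= b here")
```

Indeed, the second row of this system forces x1 ≥ 1.5 while the first forces x1 ≤ 1, so alternative (a) fails, exactly as the certificate predicts.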
Extended Primal-Dual Possibility Table: Farkas' Lemma enables us to state the primal-dual possibility table in a more revealing manner. We use ∃x0 or ∃y0 to indicate that a feasible solution exists, and ∄Δx or ∄Δy to indicate that an extreme ray does not exist. Farkas' Lemma basically says that ∃x0 ⇔ ∄Δy, ∃y0 ⇔ ∄Δx, ∄x0 ⇔ ∃Δy, and ∄y0 ⇔ ∃Δx. Therefore, we have the following extended primal-dual possibility table:

 primal | | | dual
 optimal | ∃x0 + ∄Δx | ∄Δy + ∃y0 | optimal
 unbounded | ∃x0 + ∃Δx | ∄Δy + ∄y0 | infeasible
 infeasible | ∄x0 + ∃Δx | ∃Δy + ∄y0 | infeasible
 infeasible | ∄x0 + ∄Δx | ∃Δy + ∃y0 | unbounded

6 Simplex Algorithm

Now we know what condition a solution needs to satisfy to be an optimal solution, but how do we come up with such an optimal solution? In this section, we will learn about an algorithm called simplex that finds an optimal solution if the LP has one, or determines that the LP is infeasible or unbounded if that is the case. Simplex is perhaps the most important linear programming algorithm, and we are going to learn about it in great detail.

The basic idea of simplex is based on the observation that the optimal solution to an LP, if it exists, occurs at a corner of the feasible region. This can be verified with the LP examples we have seen in the lecture notes or homework examples. Based on this observation, we can find the optimal solution by (i) starting from a feasible corner point, and (ii) moving to a better corner point until the current one is already optimal. If we cannot find a starting point, then the LP is infeasible; if we can push the objective value to infinity, then the LP is unbounded. While the idea may sound simple and intuitive, we need to rigorously establish its theoretical correctness.

6.1 What exactly is a corner point?
"Corner point" is a nickname for a well-known concept called basic solution, so let's use basic solution instead of corner point from now on. A solution x⁰ is a basic solution if it is uniquely determined by its active constraints at equality. An inequality constraint (Ax)_i ≤ b_i is active at x⁰ if it holds at equality: (Ax⁰)_i = b_i. An equality constraint (Ax)_i = b_i is also considered active as long as the equality holds. To explain the definition of a basic solution, let us suppose the feasible region is defined by Ax ≤ b, which already includes any non-negativity constraints x ≥ 0. For any solution x⁰, let I(x⁰) be the set of its active constraints: I(x⁰) = {i : (Ax⁰)_i = b_i}. By definition, x⁰ satisfies these constraints at equality: (Ax⁰)_{I(x⁰)} = b_{I(x⁰)}. If x⁰ is the only solution to this system, then it is a basic solution. However, if there exists another solution x¹ that also satisfies (Ax¹)_{I(x⁰)} = b_{I(x⁰)}, then x⁰ is not a basic solution. Recall from linear algebra that a necessary condition for a linear system of equations Ax = b to have a unique solution x ∈ R^n is that there exist n linearly independent rows in matrix A.
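For two variables, this uniqueness test is easy to script: a point is basic exactly when its set of active rows contains two linearly independent rows (i.e., some pair with a nonzero 2×2 determinant). A small sketch for LP (1)-(4), with our own helper name `is_basic`:

```python
from itertools import combinations

# Constraints of LP (1)-(4), each written uniformly as a_row . x <= rhs:
# x1 + 4x2 <= 8, x1 - x2 <= 1, -x1 <= 0, -x2 <= 0.
rows = [([1.0, 4.0], 8.0), ([1.0, -1.0], 1.0),
        ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)]

def is_basic(x, tol=1e-9):
    """A 2-variable point is basic iff some pair of its active rows is
    linearly independent, so the active system pins the point down uniquely."""
    active = [a for a, rhs in rows if abs(a[0]*x[0] + a[1]*x[1] - rhs) <= tol]
    return any(abs(p[0]*q[1] - p[1]*q[0]) > tol
               for p, q in combinations(active, 2))

print(is_basic((1.0, 0.0)))   # two independent active rows -> True
print(is_basic((2.0, 1.5)))   # only one active row -> False
print(is_basic((8.0, 0.0)))   # basic even though infeasible -> True
```
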
Given a finite number of vectors V_1, V_2, ..., V_K ∈ R^n, we say that they are linearly dependent if there exist real numbers a_1, a_2, ..., a_K such that Σ_{i=1}^K |a_i| > 0 and Σ_{i=1}^K a_i V_i = 0. Otherwise, they are called linearly independent.

As an example, we look at LP (1)-(4) and Figure 1 again:

(1): max 2x_1 + 3x_2
(2): s.t. x_1 + 4x_2 ≤ 8
(3): x_1 − x_2 ≤ 1
(4): x_1, x_2 ≥ 0

We know intuitively that (x_1 = 1, x_2 = 0) is a basic solution (because it is a corner point). Now let's check with the definition. There are two active constraints at this point: x_1 − x_2 ≤ 1 and x_2 ≥ 0, and (x_1 = 1, x_2 = 0) is uniquely determined by these two constraints at equality; therefore, it is a basic solution. On the other hand, (x_1 = 2, x_2 = 1.5) is not a basic solution, because there is only one active constraint at that point: x_1 + 4x_2 ≤ 8, and (x_1 = 2, x_2 = 1.5) is obviously not the only solution determined by that constraint at equality. Also notice that a basic solution is not required by definition to be feasible. Point (x_1 = 8, x_2 = 0) is not a feasible solution, but it is a basic solution because it is uniquely determined by two active constraints at equality: x_1 + 4x_2 ≤ 8 and x_2 ≥ 0.

6.2 Is the optimal solution always a basic solution?
Unfortunately, the answer is no. First, some LPs may not even have a basic solution. For example, max{0 : x_1, x_2 free}, where we are maximizing the constant 0 over the entire two-dimensional space with no constraints, has no basic solution. Secondly, even if an LP has basic solutions, there may also exist an optimal solution that is not a basic solution. For example, (x_1 = 1, x_2 = 0) is an optimal solution to min{x_2 : 0 ≤ x_1 ≤ 2, x_2 ≥ 0}, but it is not a basic solution. With that being said, there is some good news that still validates the idea of simplex:

(i) If we write an LP in the standard form max{c^T x : Ax ≤ b, x ≥ 0}, there always exists a basic solution. As a matter of fact, the origin (x = 0) is uniquely determined by the active constraints x ≥ 0 at equality, thus it is a basic solution.

(ii) Suppose an LP has at least one basic solution and one optimal solution. It can be proved that: if the optimal solution is unique, then it must be a basic solution; if there are infinitely many optimal solutions, then there exists one that is a basic solution.
6.3 How to find basic solutions?

In the simplex context, it is oftentimes more convenient to consider a non-standard form LP

max{ζ = c^T x : Ax + w = b, x ≥ 0, w ≥ 0}, (71)

which is equivalently reformulated from the standard form LP max{c^T x : Ax ≤ b, x ≥ 0} by simply introducing a new variable w ∈ R^m, called the slack variable, to make the inequality constraint Ax ≤ b an equality one: Ax + w = b. We now write (71) in the following matrix form:

max ζ = c^T x (72)
s.t. Ax = b (73)
x ≥ 0, (74)

where, reusing the symbols, c = [c; 0_{m×1}], A = [A, I_{m×m}], and x = [x; w] ∈ R^{n+m}. Since the dimension of x is (n+m)×1, we need n+m linearly independent active constraints to uniquely determine a basic solution. We already have m rows in Ax = b, so we need at least n constraints in x ≥ 0 to hold at equality. Define N as the indices of constraints in x ≥ 0 that are set to hold at equality, and B as the indices of the other constraints in x ≥ 0. Such a pair (B, N) is an exclusive and exhaustive partition of the set {1, 2, ..., n+m}. For example, if we set x_i = 0, i = 1, ..., n and x_i ≥ 0, i = n+1, ..., n+m, then the partition is (B = {n+1, ..., n+m}, N = {1, ..., n}).

The above conditions are only necessary for a basic solution. To find the sufficient condition for a basic solution, we rewrite (72)-(74) using the definition of N and B:

max ζ = c_B^T x_B + c_N^T x_N (75)
s.t. A_B x_B + A_N x_N = b (76)
x_B ≥ 0, x_N ≥ 0, (77)

where A_B is the collection of columns in A whose indices are in the set B, and x_N and c_N are the collections of elements in x and c, respectively, whose indices are in the set N. Here, the value for
x_N = 0 is uniquely determined. To make sure that the value for x_B is also uniquely determined, we should guarantee that the equation A_B x_B + A_N x_N = A_B x_B = b has a unique solution, which requires that the matrix A_B ∈ R^{m×m} be invertible. This can be achieved by choosing m linearly independent columns in A as the set B. Now we have the necessary and sufficient conditions for a basic partition:

The m elements in the set B should be chosen such that A_B is invertible, and (78)
the n elements in the set N are then determined by N = {1, ..., n+m}\B. (79)

This partition uniquely determines a basic solution (x_B = A_B^{-1} b, x_N = 0). We define: a partition (B, N) that satisfies (78) and (79) is called a basic partition; B and N in a basic partition are called the basis and non-basis, respectively; and the variables x_B and x_N are called basic variables and non-basic variables, respectively. The relationship between a basic partition and a basic solution is that a basic partition (B, N) uniquely determines a basic solution, while for any basic solution x, there exists (uniquely or not) a basic partition that determines this basic solution x. One example of different basic partitions all uniquely determining the same basic solution is the following. Suppose A = [...] and b = [...]; then both (B = {1, 3}, N = {2, 4}) and (B = {1, 4}, N = {2, 3}) uniquely determine the same basic solution x = [...]. (Such coincidences occur when the basic solution is degenerate, i.e., some basic variable equals zero.)

Since a basic partition (B, N) uniquely determines one basic solution, the number of basic solutions is bounded by the number of ways (B, N) can be selected, which is no more than (n+m)!/(n!m!). The set of partitions of {1, 2, ..., n+m} can be divided into the following regions:

A+B+C+D: All possible partitions
B+C+D: Basic partitions, which satisfy (78) and (79)
C+D: Feasible basic partitions, with x_B = A_B^{-1} b ≥ 0
D: Optimal basic partitions

It is relatively easy to enter region B. The following sections will discuss how to enter region C and then D.
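These regions can be made concrete by brute force on a small instance. The sketch below (our own code, not from the notes; practical solvers never enumerate like this) takes LP (1)-(4) in form (71), where the augmented A = [A, I] is 2×4, tests invertibility of each candidate A_B via its determinant, and computes x_B = A_B^{-1} b by Cramer's rule:

```python
from itertools import combinations

# LP (1)-(4) with slacks: [1 4 1 0; 1 -1 0 1][x1 x2 w1 w2]^T = [8, 1]^T
A = [[1.0, 4.0, 1.0, 0.0], [1.0, -1.0, 0.0, 1.0]]
b = [8.0, 1.0]
c = [2.0, 3.0, 0.0, 0.0]

basic, feasible = [], []
for B in combinations(range(4), 2):          # all candidate bases, 0-indexed
    a11, a12 = A[0][B[0]], A[0][B[1]]
    a21, a22 = A[1][B[0]], A[1][B[1]]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        continue                             # A_B singular: not a basic partition
    # x_B = A_B^{-1} b via Cramer's rule
    xB = [(b[0] * a22 - a12 * b[1]) / det, (a11 * b[1] - b[0] * a21) / det]
    basic.append(B)
    if min(xB) >= 0:                         # region C+D: feasible basic partition
        x = [0.0] * 4
        x[B[0]], x[B[1]] = xB
        feasible.append((sum(ci * xi for ci, xi in zip(c, x)), x))

print(len(basic), len(feasible))   # 6 basic partitions, 4 of them feasible
print(max(feasible))               # best corner: zeta = 9 at (x1, x2) = (2.4, 1.4)
```

The best feasible corner reproduces the graphical solution of Figure 1.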
6.4 How to find an initial feasible basic solution to start from? What if the LP is infeasible?

The way the simplex algorithm proceeds is to start from a feasible basic solution, move from one feasible basic solution to another better one, and finally stop at a feasible basic solution that is optimal to the LP. (Let's use "fbs" to abbreviate "feasible basic solution".) We are now ready to discuss how to find an initial fbs to start from. It is not hard to observe that (B = {n+1, ..., n+m}, N = {1, ..., n}) is a basic partition to (71), which corresponds to

B = {n+1, ..., n+m}, N = {1, ..., n}, c_B = 0_{m×1}, c_N = c, A_B = I_{m×m}, A_N = A, x_B = b, x_N = 0_{n×1}.

The partition (B, N) is indeed a basic partition because A_B = I_{m×m} is invertible. If b ≥ 0, then (x = 0, w = b) is also a feasible basic solution. However, if b_i < 0 for some i, then x ≥ 0 is violated, and the basic solution is not feasible. To obtain an fbs in such a case, we need to use a little trick called the big-M method. Define K = {i : b_i < 0, i = 1, ..., m}, and let k = |K|. Now we consider a new problem

max{ζ = c^T x − M Σ_{i=1}^k t_i : Ax + w + Ht = b; x, w, t ≥ 0}. (80)

Here M is an extremely large finite constant, the vector t ∈ R^{k×1} is a new variable called the artificial variable, and the matrix H ∈ R^{m×k} is defined as

H_{i,j} = −1, if i = K(j); 0, otherwise,

where K(j) denotes the j-th element of K. For example, consider the following LP as an instance of (71):

max ζ = 5x_1 + 4x_2 + 3x_3
s.t. 2x_1 + 3x_2 + x_3 + w_4 = 5
4x_1 + x_2 + 2x_3 + w_5 = −11
3x_1 + 4x_2 + 2x_3 + w_6 = −8
x_1, x_2, x_3, w_4, w_5, w_6 ≥ 0.

Then (80) corresponds to the following formulation:

max ζ = 5x_1 + 4x_2 + 3x_3 − M t_7 − M t_8
s.t. 2x_1 + 3x_2 + x_3 + w_4 = 5
4x_1 + x_2 + 2x_3 + w_5 − t_7 = −11
3x_1 + 4x_2 + 2x_3 + w_6 − t_8 = −8
x_1, x_2, x_3, w_4, w_5, w_6, t_7, t_8 ≥ 0.
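The bookkeeping in (80) is easy to script: build K and H from the signs of b, then read off the starting basic values (slacks where b_i ≥ 0, artificials where b_i < 0). A sketch on a right-hand side with two negative entries (our own choice of data):

```python
# Right-hand side with b_2 < 0 and b_3 < 0 (0-indexed rows 1 and 2),
# so rows 1 and 2 each need an artificial variable.
b = [5.0, -11.0, -8.0]
m = len(b)

K = [i for i in range(m) if b[i] < 0]   # rows needing artificials
k = len(K)
# H[i][j] = -1 if row i hosts artificial j, else 0
H = [[-1.0 if i == K[j] else 0.0 for j in range(k)] for i in range(m)]

# Initial fbs: w_i = b_i where b_i >= 0, t_j = -b_{K(j)} where b_i < 0,
# so every basic value equals |b_i| and the starting point is feasible.
w = [b[i] if b[i] >= 0 else 0.0 for i in range(m)]
t = [-b[K[j]] for j in range(k)]

assert all(v >= 0 for v in w + t)       # feasibility of the starting basis
print(K, t)   # [1, 2] [11.0, 8.0]
```
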
In LP (80), since there are k artificial variables but no additional constraints, the dimensions of its partition should be |N| = n + k and |B| = m. The way we initialize this partition is to add all the artificial variables to B and then move the slack indices n + i with b_i < 0 from B to N. It is not hard to see that the basic partition (N = {1, ..., n} ∪ {n + i : i ∈ K}, B = {1, ..., n+m+k}\N) uniquely determines an fbs (x_N = 0, x_B = A_B^{-1} b = |b|). We can use this fbs as a starting point to solve (80) by following the rest of the simplex steps. In the format of (72)-(74), (80) has c = [c; 0_{m×1}; −M 1_{k×1}], A = [A, I_{m×m}, H_{m×k}], and x = [x; w; t] ∈ R^{n+m+k}.

The relation between optimal solutions to (71) and (80) is given in the following propositions.

Proposition 1. Solution (x*, w*) is optimal to (71) if and only if there exists a finitely large M such that (x*, w*, t = 0) is optimal to (80).

Proof. (⟹): Prove by construction. Let y* be the optimal solution to the dual of (71), which is min{ζ = b^T y : A^T y ≥ c, y ≥ 0}. Set M = max_{i=1,...,m} y*_i; then y* is also feasible to the dual of (80), which is min{ζ = b^T y : A^T y ≥ c, H^T y ≥ −M 1_{k×1}; y ≥ 0}. Since (x*, w*, t = 0) and y* are respectively feasible to (80) and its dual with the same objective value, they are also respectively optimal.

(⟸): Prove by contradiction. Suppose (x*, w*, t = 0) is optimal to (80) but (x*, w*) is not optimal to (71); then there must exist a feasible solution (x', w') to (71) with c^T x' > c^T x*. However, this implies that (x', w', t = 0) is a better solution than (x*, w*, t = 0) to (80), because (x', w', t = 0) is feasible to (80) and c^T x' > c^T x*. This contradicts the assumption that (x*, w*, t = 0) is an optimal solution to (80).

In the simplex algorithm, there is a way to make sure that M is sufficiently large. So if we solve (80) with the simplex algorithm and get an optimal solution (x*, w*, t*) with t*_i > 0 for some i, then (71) is infeasible.

Proposition 2. If, for a sufficiently large M, (80) possesses an optimal solution (x*, w*, t*) with t*_i > 0 for some
i, then (71) is infeasible.

Proof. By Proposition 1, (71) does not have an optimal solution. Therefore, it suffices to show that (71) is not unbounded. Let y* be the optimal dual solution to (80); then it is feasible to the dual of (71), which means that (71) is not unbounded.

If (80) is unbounded, then (71) could be either infeasible or unbounded, and we need to solve the following LP to verify:

min{ζ = Σ_{i=1}^k t_i : Ax + w + Ht = b; x, w, t ≥ 0}. (81)

If t = 0 is an optimal solution to (81), then (71) is unbounded; otherwise (71) is infeasible. The possibilities of (80) and their implications for (71) are summarized as follows:
• (80) is optimal with (x*, w*, t* = 0) ⟹ (71) is optimal with (x*, w*).
• (80) is optimal with (x*, w*, t*) and t*_i > 0 for some i ⟹ (71) is infeasible.
• (80) is unbounded and t = 0 is optimal to (81) ⟹ (71) is unbounded.
• (80) is unbounded and t = 0 is not optimal to (81) ⟹ (71) is infeasible.
• LP (80) cannot be infeasible, because (x_N = 0, x_B = |b|) is a feasible solution to (80).

The following diagram provides an overview of the simplex algorithm. Here we refer to the three LPs (71), (80), and (81) as LP0, LP1, and LP2, respectively.

(Flowchart: if b ≥ 0, iterate improve/optimal?/unbounded? on LP0 directly; otherwise solve LP1, and if LP1 is unbounded solve LP2, concluding from t = 0 or t ≠ 0 whether LP0 is optimal, unbounded, or infeasible.) LP0 may be optimal, unbounded, or infeasible. LP1 may be optimal or unbounded, never infeasible. LP2 must be optimal, never unbounded or infeasible.

6.5 How to tell if the current fbs is optimal or not?

One way to check the optimality of a solution is to check the optimality condition (62). In the simplex algorithm, there is a more convenient way to check optimality, by reformulating (75)-
(77) as:

max ζ = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N (82)
s.t. x_B = A_B^{-1} b − A_B^{-1} A_N x_N (83)
x_B ≥ 0, x_N ≥ 0. (84)

Equations (82) and (83) are called a dictionary:

ζ = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N
x_B = A_B^{-1} b − A_B^{-1} A_N x_N

The term (c_N^T − c_B^T A_B^{-1} A_N) is called the reduced cost.

Proposition 3. In (82), for a given feasible basic partition (B, N), if (c_N^T − c_B^T A_B^{-1} A_N) ≤ 0, then the fbs (x_B = A_B^{-1} b, x_N = 0) is optimal to (82)-(84).

Proof. Prove by contradiction. Suppose (x_B = A_B^{-1} b, x_N = 0) is not optimal to (82)-(84), and there exists another fbs x' with ζ(x') > ζ(x). This implies that

ζ(x') − ζ(x) = (c_N^T − c_B^T A_B^{-1} A_N) x'_N − (c_N^T − c_B^T A_B^{-1} A_N) x_N = (c_N^T − c_B^T A_B^{-1} A_N) x'_N > 0,

which is impossible since (c_N^T − c_B^T A_B^{-1} A_N) ≤ 0 and x'_N ≥ 0.

Notice that the reduced cost being non-positive is a sufficient but not necessary condition for the optimality of an fbs. For example, consider the following LP:

max ζ = x_1 + 2x_2 + 3x_3
s.t. x_1 − 3x_3 + w_4 = 0
7x_1 + 2x_2 + 5x_3 + w_5 = 1
x_1, x_2, x_3, w_4, w_5 ≥ 0.

The feasible basic partition (B_1 = {1, 2}, N_1 = {3, 4, 5}) determines the following dictionary:

ζ = 1 − 20x_3 + 6w_4 − w_5
x_1 = 3x_3 − w_4
x_2 = 0.5 − 13x_3 + 3.5w_4 − 0.5w_5,

which gives the fbs (x_1 = 0, x_2 = 0.5, x_3 = 0). Although the reduced cost does contain a positive term, this fbs is actually optimal. To see this, consider another feasible basic partition (B_2 = {2, 4}, N_2 = {1, 3, 5}), which determines the following dictionary:

ζ = 1 − 6x_1 − 2x_3 − w_5
x_2 = 0.5 − 3.5x_1 − 2.5x_3 − 0.5w_5
w_4 = −x_1 + 3x_3

It gives the same fbs (x_1 = 0, x_2 = 0.5, x_3 = 0). Since it has a non-positive reduced cost, we know this fbs is optimal.
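The test in Proposition 3 is mechanical: compute y^T = c_B^T A_B^{-1}, then the reduced cost c_N^T − y^T A_N. A sketch for the two partitions above, with the 2×2 inverse written out by hand (our own helper name `reduced_cost`, bases given 0-indexed):

```python
# Data of the example, variables ordered (x1, x2, x3, w4, w5).
c = [1.0, 2.0, 3.0, 0.0, 0.0]
A = [[1.0, 0.0, -3.0, 1.0, 0.0],
     [7.0, 2.0, 5.0, 0.0, 1.0]]

def reduced_cost(B):
    """Reduced cost c_N - c_B^T A_B^{-1} A_N for this 2-row system."""
    N = [j for j in range(5) if j not in B]
    a11, a12 = A[0][B[0]], A[0][B[1]]
    a21, a22 = A[1][B[0]], A[1][B[1]]
    det = a11 * a22 - a12 * a21
    inv = [[a22 / det, -a12 / det], [-a21 / det, a11 / det]]  # A_B^{-1}
    y = [c[B[0]] * inv[0][0] + c[B[1]] * inv[1][0],           # y^T = c_B^T A_B^{-1}
         c[B[0]] * inv[0][1] + c[B[1]] * inv[1][1]]
    return {j: c[j] - (y[0] * A[0][j] + y[1] * A[1][j]) for j in N}

print(reduced_cost((0, 1)))   # B1 = {x1, x2}: {2: -20.0, 3: 6.0, 4: -1.0}
print(reduced_cost((1, 3)))   # B2 = {x2, w4}: {0: -6.0, 2: -2.0, 4: -1.0}
```

The first partition shows the positive entry +6 on w_4, yet the second partition certifies the same fbs as optimal.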
6.6 How to find a better fbs if the current one is not optimal?

Because of the close relation between an fbs and a feasible basic partition (B, N), the search for a better fbs (and eventually the optimal one) is equivalent to the search for a better feasible basic partition. The way simplex updates a feasible basic partition is by switching one pair of indices between the current basis B and non-basis N at a time. If the current basic partition is (B⁰, N⁰), then we select an i* from B⁰ and a j* from N⁰, and update the basic partition as (B¹ = B⁰\{i*} ∪ {j*}, N¹ = N⁰\{j*} ∪ {i*}). The variable x_{j*} is called the entering variable, because it will enter the basis. Similarly, x_{i*} is called the leaving variable. Geometrically, such an update means a move from an fbs to an adjacent one. Of course, we need to make sure that the new fbs is no worse than the current one in terms of the objective value. Now let's assume that we have obtained a feasible basic partition (B, N) which is not optimal; then we will have to update the basic partition by selecting an entering variable and a leaving variable. The rule for selecting the entering and leaving variables is called a pivoting rule. There are various pivoting rules, one of which, called Bland's rule, is introduced below:

Entering variable x_{j*}: Choose j* = min{j ∈ N : (c_N^T − c_B^T A_B^{-1} A_N)_j > 0}. (85)
Leaving variable x_{i*}: Choose i* = min{i ∈ argmax_{i ∈ B} (A_B^{-1} A_N)_{i,j*} / (A_B^{-1} b)_i}. (86)

After the entering and leaving variables are chosen, we get an updated partition. Bland's rule ensures that the new partition is a better feasible basic partition. The calculation for determining the leaving variable is called the ratio test, because we are trying to find the largest ratio of (A_B^{-1} A_N)_{i,j*} to (A_B^{-1} b)_i. The i that achieves the largest ratio is said to be the winner of the ratio test, and x_{i*} will be the leaving variable. If there is a tie in the ratio test, the smallest winner i* will be selected. The rationale for Bland's rule is
explained in the following example:

max 5x_1 + 4x_2 + 3x_3 (87)
s.t. 2x_1 + 3x_2 + x_3 ≤ 5 (88)
4x_1 + x_2 + 2x_3 ≤ 11 (89)
3x_1 + 4x_2 + 2x_3 ≤ 8 (90)
x_1, x_2, x_3 ≥ 0 (91)

We start by introducing new variables w_4, w_5, w_6 to reformulate (88)-(90) into equality constraints:

max 5x_1 + 4x_2 + 3x_3 (92)
s.t. 2x_1 + 3x_2 + x_3 + w_4 = 5 (93)
4x_1 + x_2 + 2x_3 + w_5 = 11 (94)
3x_1 + 4x_2 + 2x_3 + w_6 = 8 (95)
x_1, x_2, x_3, w_4, w_5, w_6 ≥ 0 (96)

Since the right-hand-side values are all positive, the first fbs is easy to find: x_1 = x_2 = x_3 = 0, w_4 = 5, w_5 = 11, w_6 = 8.
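Putting the pieces together, here is a compact tableau implementation, a sketch under the assumption b ≥ 0 (so the slack basis is a valid starting fbs), using Bland's smallest-index entering rule; the ratio test appears in its standard minimum-ratio form, which is the reciprocal of the max-ratio convention in (86):

```python
def simplex(c, A, b):
    """Maximize c^T x s.t. Ax <= b, x >= 0, assuming b >= 0.
    Entering rule: Bland (smallest index with positive reduced cost).
    Leaving rule: minimum-ratio test, smallest row index on ties."""
    m, n = len(A), len(c)
    basis = list(range(n, n + m))                      # start from the slacks
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] for i in range(m)]
    rhs, red, z = b[:], c[:] + [0.0] * m, 0.0
    while True:
        enter = next((j for j in range(n + m) if red[j] > 1e-9), None)
        if enter is None:                              # reduced cost <= 0: optimal
            return z, [dict(zip(basis, rhs)).get(j, 0.0) for j in range(n)]
        ratios = [(rhs[i] / T[i][enter], i) for i in range(m) if T[i][enter] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, r = min(ratios)                             # winner of the ratio test
        piv = T[r][enter]                              # pivot: x_enter enters,
        T[r] = [v / piv for v in T[r]]                 # basis[r] leaves
        rhs[r] /= piv
        for i in range(m):
            if i != r and abs(T[i][enter]) > 1e-12:
                f, T[i] = T[i][enter], [a - T[i][enter] * p for a, p in zip(T[i], T[r])]
                rhs[i] -= f * rhs[r]
        f = red[enter]
        red = [a - f * p for a, p in zip(red, T[r])]
        z += f * rhs[r]
        basis[r] = enter

zeta, x = simplex([5.0, 4.0, 3.0],
                  [[2.0, 3.0, 1.0], [4.0, 1.0, 2.0], [3.0, 4.0, 2.0]],
                  [5.0, 11.0, 8.0])
print(zeta, x)   # 13.0 [2.0, 0.0, 1.0]
```

On LP (87)-(91) this reaches the optimum ζ = 13 at (x_1, x_2, x_3) = (2, 0, 1) in two pivots.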
More informationAM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1
AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1 Prof. Yiling Chen Fall 2018 Here are some practice questions to help to prepare for the midterm. The midterm will
More informationOptimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 28, 2017 1 / 40 The Key Idea of Newton s Method Let f : R n R be a twice differentiable function
More informationLINEAR PROGRAMMING II
LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality
More informationCS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2)
CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2) Tim Roughgarden February 2, 2016 1 Recap This is our third lecture on linear programming, and the second on linear programming
More informationSupport Vector Machines
Support Vector Machines Ryan M. Rifkin Google, Inc. 2008 Plan Regularization derivation of SVMs Geometric derivation of SVMs Optimality, Duality and Large Scale SVMs The Regularization Setting (Again)
More informationChapter 1 Linear Programming. Paragraph 5 Duality
Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution
More informationCritical Reading of Optimization Methods for Logical Inference [1]
Critical Reading of Optimization Methods for Logical Inference [1] Undergraduate Research Internship Department of Management Sciences Fall 2007 Supervisor: Dr. Miguel Anjos UNIVERSITY OF WATERLOO Rajesh
More informationLinear Programming. H. R. Alvarez A., Ph. D. 1
Linear Programming H. R. Alvarez A., Ph. D. 1 Introduction It is a mathematical technique that allows the selection of the best course of action defining a program of feasible actions. The objective of
More informationThe dual simplex method with bounds
The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the
More informationmin 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14
The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More informationThe Simplex Algorithm and Goal Programming
The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is
More informationCSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017
CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =
More informationEND3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur
END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds
More informationIntroduction to optimization
Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)
More informationIE 400: Principles of Engineering Management. Simplex Method Continued
IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1
More informationAdvanced Linear Programming: The Exercises
Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z
More informationLINEAR PROGRAMMING. Introduction
LINEAR PROGRAMMING Introduction Development of linear programming was among the most important scientific advances of mid-20th cent. Most common type of applications: allocate limited resources to competing
More informationDuality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information
More informationChapter 4 The Simplex Algorithm Part I
Chapter 4 The Simplex Algorithm Part I Based on Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan Lewis Ntaimo 1 Modeling
More informationThe Graphical Method & Algebraic Technique for Solving LP s. Métodos Cuantitativos M. En C. Eduardo Bustos Farías 1
The Graphical Method & Algebraic Technique for Solving LP s Métodos Cuantitativos M. En C. Eduardo Bustos Farías The Graphical Method for Solving LP s If LP models have only two variables, they can be
More informationLecture Notes on Support Vector Machine
Lecture Notes on Support Vector Machine Feng Li fli@sdu.edu.cn Shandong University, China 1 Hyperplane and Margin In a n-dimensional space, a hyper plane is defined by ω T x + b = 0 (1) where ω R n is
More informationLP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP
LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP 1 / 23 Repetition the simplex algorithm: sequence of pivots starting
More informationIntroduction to Mathematical Programming
Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /
More informationThursday, May 24, Linear Programming
Linear Programming Linear optimization problems max f(x) g i (x) b i x j R i =1,...,m j =1,...,n Optimization problem g i (x) f(x) When and are linear functions Linear Programming Problem 1 n max c x n
More informationDistributed Real-Time Control Systems. Lecture Distributed Control Linear Programming
Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can
More informationLagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual
More informationReview Solutions, Exam 2, Operations Research
Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To
More informationM340(921) Solutions Problem Set 6 (c) 2013, Philip D Loewen. g = 35y y y 3.
M340(92) Solutions Problem Set 6 (c) 203, Philip D Loewen. (a) If each pig is fed y kilograms of corn, y 2 kilos of tankage, and y 3 kilos of alfalfa, the cost per pig is g = 35y + 30y 2 + 25y 3. The nutritional
More informationLINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:
LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides
More informationLinear and Integer Optimization (V3C1/F4C1)
Linear and Integer Optimization (V3C1/F4C1) Lecture notes Ulrich Brenner Research Institute for Discrete Mathematics, University of Bonn Winter term 2016/2017 March 8, 2017 12:02 1 Preface Continuous updates
More informationLECTURE 7 Support vector machines
LECTURE 7 Support vector machines SVMs have been used in a multitude of applications and are one of the most popular machine learning algorithms. We will derive the SVM algorithm from two perspectives:
More informationOPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM
OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationNew Artificial-Free Phase 1 Simplex Method
International Journal of Basic & Applied Sciences IJBAS-IJENS Vol:09 No:10 69 New Artificial-Free Phase 1 Simplex Method Nasiruddin Khan, Syed Inayatullah*, Muhammad Imtiaz and Fozia Hanif Khan Department
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 3 Simplex Method for Bounded Variables We discuss the simplex algorithm
More information