Chapter 5 Linear Programming (LP)
- Georgia Wright
- 5 years ago
General constrained optimization problem:

    minimize   f(x)
    subject to x in Omega

Omega, a subset of R^n, is called the constraint set or feasible set; any point x in Omega is called a feasible point. We consider the case when f(x) is a linear function,

    f(x) = c^T x,   x = (x_1, x_2, ..., x_n) in R^n,

where c in R^n is a given constant vector, called the cost coefficient vector.

Example: f : R^3 -> R, f(x) = x_1 - 4x_2 + x_3 = c^T x with c = [1, -4, 1]^T.

Consider the standard convex constraint set

    Omega = {x : Ax = b, x >= 0}.

The notation x >= 0 means that each component of x = [x_1, ..., x_n]^T must be nonnegative: x_i >= 0 for all i.

Example: Omega = {x : 4x_1 + x_2 = 5, x_1 >= 0, x_2 >= 0}, a line segment in the first quadrant of the (x_1, x_2)-plane.
We can write the problem as

    minimize   c^T x
    subject to Ax = b
               x >= 0.

This problem is called a linear programming problem, or linear program (LP). In general, constraint sets involve both equations and inequalities, e.g.

    minimize   x_1 + x_2
    subject to x_1 - x_2 <= 6
               x_1 >= 0.

Example: Production scheduling. Woodworking shop data:

    Inputs        Table   Chair   Input availability
    Labor           5       6           30
    Materials       3       2           12
    Production     x_1     x_2
    Unit price      1       5

Goal: schedule production to maximize revenue.

Total revenue: x_1 + 5x_2.

Production constraints:
- Labor constraint: 5x_1 + 6x_2 <= 30
- Materials constraint: 3x_1 + 2x_2 <= 12
- Physical constraints: x_1 >= 0, x_2 >= 0
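As a quick check, a small LP like this can be handed to SciPy's linprog (this assumes SciPy is available, and uses the constraint coefficients as reconstructed above; linprog minimizes, so the revenue objective is negated):

```python
from scipy.optimize import linprog

# Production scheduling LP: maximize x1 + 5*x2 subject to
# 5*x1 + 6*x2 <= 30 (labor), 3*x1 + 2*x2 <= 12 (materials), x >= 0.
# linprog minimizes, so we pass the negated objective.
res = linprog(c=[-1, -5],
              A_ub=[[5, 6], [3, 2]],
              b_ub=[30, 12],
              bounds=[(0, None), (0, None)])

print(res.x)     # optimal production levels
print(-res.fun)  # maximum revenue
```

The optimizer returns the corner point (0, 5) with revenue 25, which agrees with the geometric analysis later in this chapter.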
Optimization problem (LP) in matrix form:

    maximize   x_1 + 5x_2                 maximize   c^T x
    subject to 5x_1 + 6x_2 <= 30    <=>   subject to Ax <= b
               3x_1 + 2x_2 <= 12                     x_1 >= 0, x_2 >= 0
               x_1 >= 0, x_2 >= 0

with

    A = [5 6; 3 2],   b = [30; 12],   c = [1; 5].

Equivalent form:

    minimize   -c^T x
    subject to Ax <= b
               x_1 >= 0, x_2 >= 0.

Example: Optimal diet. Nutrition table (per unit):

    Vitamin      Milk   Eggs   Daily requirement
    F             2      3          45
    G             4      3          60
    Intake       x_1    x_2
    Unit cost     2      5

For consumers:

Total cost: 2x_1 + 5x_2.

Dietary constraints:
- Vitamin F: 2x_1 + 3x_2 >= 45
- Vitamin G: 4x_1 + 3x_2 >= 60
- Physical constraints: x_1 >= 0, x_2 >= 0

Optimization problem (LP) in matrix form:

    minimize   2x_1 + 5x_2                minimize   c^T x
    subject to 2x_1 + 3x_2 >= 45    <=>   subject to Ax >= b
               4x_1 + 3x_2 >= 60                     x >= 0
               x_1 >= 0, x_2 >= 0

with A = [2 3; 4 3], b = [45; 60], c = [2; 5]. This is called the primal LP.

For sellers (say, of vitamin supplements priced at lambda_1 and lambda_2 per unit of F and G): maximize the total revenue per customer, 45*lambda_1 + 60*lambda_2, while remaining competitive:
- to compete well with milk: 2*lambda_1 + 4*lambda_2 <= 2
- to compete well with eggs: 3*lambda_1 + 3*lambda_2 <= 5

LP for the seller:

    maximize   45*lambda_1 + 60*lambda_2
    subject to 2*lambda_1 + 4*lambda_2 <= 2
               3*lambda_1 + 3*lambda_2 <= 5
               lambda_1, lambda_2 >= 0.

This is called the dual LP.

Geometric method for LP problems in 2-D
Consider the production scheduling example given before:

    maximize   x_1 + 5x_2
    subject to 5x_1 + 6x_2 <= 30
               3x_1 + 2x_2 <= 12
               x_1 >= 0, x_2 >= 0.

Level sets of the objective function: x_1 + 5x_2 = c. The level sets are parallel lines, moving up as c increases.

The maximum value 25 is reached when the level set is "exiting" the constraint set at the corner point [0, 5]^T: x_1 + 5x_2 = 25.

The minimum value 0 is reached when the level set is "exiting" the constraint set at the corner point [0, 0]^T: x_1 + 5x_2 = 0.

In fact, unless the level sets happen to be parallel to one of the edges of the constraint set, the solution will lie at a corner point (or vertex). Even if the level sets happen to be parallel to one of the edges of the constraint set, a corner point will still be an optimal solution. It turns out that a solution of an LP problem (if it exists) always lies at a vertex of the constraint set.
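The geometric recipe, intersect pairs of constraint boundary lines, keep the feasible intersection points, and compare objective values at those corners, can be sketched by brute force (assuming NumPy; this is an illustration for the 2-D example above, not a general algorithm):

```python
from itertools import combinations
import numpy as np

# All constraints of the production LP written as rows of A @ x <= b,
# including the sign constraints -x1 <= 0 and -x2 <= 0.
A = np.array([[5.0, 6.0], [3.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([30.0, 12.0, 0.0, 0.0])
c = np.array([1.0, 5.0])  # objective x1 + 5*x2 (to maximize)

vertices = []
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel boundary lines: no intersection point
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):  # keep only feasible intersections
        vertices.append(v)

best = max(vertices, key=lambda v: c @ v)
print(best, c @ best)
```

Of the six pairwise intersections, four are feasible corner points, and the best of them is (0, 5) with value 25, confirming the level-set picture.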
Therefore, instead of looking for candidate solutions everywhere in the constraint set, we need only focus on the vertices.

Standard form LP problems

LP problem in standard form:

    minimize   c^T x
    subject to Ax = b
               x >= 0

where A is an m x n matrix with m < n and rank(A) = m, and b >= 0.

Example. In standard form:

    minimize   x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = 4
               5x_1 - x_2 + x_3 = 15
               x_1 >= 0, x_2 >= 0, x_3 >= 0.

NOT in standard form:

    maximize   x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 <= 4
               5x_1 - x_2 + x_3 >= 15
               x_1 >= 0, x_2 >= 0

(a maximization, with inequality constraints and a free variable x_3), and

    minimize   x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = -4
               5x_1 - x_2 + x_3 = 15
               x_1 >= 0, x_2 >= 0, x_3 >= 0.
The above is not standard because b_1 = -4 < 0; to be in standard form, we require b_i >= 0.

    minimize   x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = 4
               2x_1 + 4x_2 + 8x_3 = 8
               x_1 >= 0, x_2 >= 0, x_3 >= 0.

The above is not standard because the matrix

    A = [1 2 4; 2 4 8]

has rank 1, not "full rank".

All our analyses and algorithms will apply only to standard form LP problems. What about other variations of LP problems? Any LP problem can be converted into an equivalent standard form LP problem.

How to convert a given LP problem to an equivalent problem in standard form:
- If the problem is a maximization, simply multiply the objective function by -1 to get a minimization.
- If A is not of full rank, we can remove one or more rows.
- If a component of b is negative, say the i-th component, multiply the i-th constraint by -1 to obtain a positive right-hand side.
- For inequality constraints:
  - add slack variables for "<=" constraints,
  - add surplus variables for ">=" constraints,
  - split variables for free variables.

Converting to standard form: Slack variables

Suppose we have a "<=" constraint of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n <= b.

We can convert this inequality constraint into a standard equality constraint by introducing a slack variable x_{n+1}. The constraint above is equivalent to

    a_1 x_1 + a_2 x_2 + ... + a_n x_n + x_{n+1} = b,   x_{n+1} >= 0.

Converting to standard form: Surplus variables

Suppose we have a ">=" constraint of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n >= b.

We can convert this inequality constraint into a standard equality constraint by introducing a surplus variable x_{n+1}. The constraint above is equivalent to

    a_1 x_1 + a_2 x_2 + ... + a_n x_n - x_{n+1} = b,   x_{n+1} >= 0.

Converting to standard form: Nonpositive variables

Suppose one of the variables (say, x_1) has the constraint x_1 <= 0. We can convert it into a usual nonnegative variable by replacing every occurrence of x_1 by its negative, x_1' = -x_1.

Example: The LP problem

    minimize   c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to a_1 x_1 + a_2 x_2 + ... + a_n x_n = b
               x_1 <= 0

is converted to the standard form

    minimize   -c_1 x_1' + c_2 x_2 + ... + c_n x_n
    subject to -a_1 x_1' + a_2 x_2 + ... + a_n x_n = b
               x_1' >= 0.
Converting to standard form: Free variables

Suppose one of the variables (say, x_1) does not have any sign constraint. We can split x_1 into its positive part and negative part by introducing two variables u_1 >= 0 and v_1 >= 0 with x_1 = u_1 - v_1.

Example: The constraint set

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = b,   x_2 >= 0, x_3 >= 0, ..., x_n >= 0

is equivalent to

    a_1 (u_1 - v_1) + a_2 x_2 + ... + a_n x_n = b,   u_1 >= 0, v_1 >= 0, x_2 >= 0, ..., x_n >= 0.

Converting to standard form: An example

Consider the LP problem (not in standard form)

    maximize   x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 <= -4    (1)
               5x_1 - x_2 + x_3 >= 15     (2)
               x_2 >= 0, x_3 <= 0.

We first convert the maximization problem into a minimization problem by multiplying the objective function by -1.

Next, we introduce a slack variable x_4 in the first constraint to convert (1) into

    x_1 + 2x_2 + 4x_3 + x_4 = -4,   x_4 >= 0,

and then multiply the equation by -1 to make the right-hand side positive:

    -x_1 - 2x_2 - 4x_3 - x_4 = 4,   x_4 >= 0.

Next, we introduce a surplus variable x_5 to convert the second constraint (2) into

    5x_1 - x_2 + x_3 - x_5 = 15,   x_5 >= 0.

Next, we substitute x_1 = x_6 - x_7 (x_1 is a free variable), and finally we substitute x_3 = -x_3' (x_3 is nonpositive) in every expression. The resulting standard form is

    minimize   -5x_2 - x_3' - x_6 + x_7
    subject to -2x_2 + 4x_3' - x_4 + 0x_5 - x_6 + x_7 = 4
               -x_2 - x_3' + 0x_4 - x_5 + 5x_6 - 5x_7 = 15
               x_2, x_3', x_4, x_5, x_6, x_7 >= 0.

In matrix form: minimize c^T x subject to Ax = b, x >= 0, with

    x = [x_2, x_3', x_4, x_5, x_6, x_7]^T,
    c = [-5, -1, 0, 0, -1, 1]^T,
    A = [-2  4 -1  0 -1  1; -1 -1  0 -1  5 -5],
    b = [4; 15].

Basic solutions

Consider the LP problem in standard form:

    minimize   c^T x
    subject to Ax = b
               x >= 0

where A is an m x n matrix, m < n, rank(A) = m, and b >= 0.
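The conversion recipe (split free variables, add one slack per "<=" row, flip rows with negative right-hand side) can be sketched in a few lines of NumPy; the helper name to_standard_form and the small numeric instance are made up for this illustration:

```python
import numpy as np

def to_standard_form(c, A_ub, b_ub):
    """Convert  min c@x  s.t.  A_ub @ x <= b_ub  (x free)  into standard
    form  min c_std@z  s.t.  A @ z = b, z >= 0,  by splitting x = u - v,
    adding one slack per row, and flipping rows with a negative rhs."""
    A_ub = np.atleast_2d(np.asarray(A_ub, float))
    b = np.asarray(b_ub, float).copy()
    c = np.asarray(c, float)
    m, n = A_ub.shape
    A = np.hstack([A_ub, -A_ub, np.eye(m)])    # variables z = (u, v, s)
    c_std = np.concatenate([c, -c, np.zeros(m)])
    neg = b < 0
    A[neg] *= -1                               # make the rhs nonnegative
    b[neg] *= -1                               # (slack columns flip too)
    return c_std, A, b

c_std, A, b = to_standard_form([1, -2], [[1, 1], [-1, 2]], [4, -2])
# The original feasible point x = (4, 0) corresponds to
# z = (u, v, s) = (4, 0, 0, 0, 0, 2), which satisfies A z = b, z >= 0.
z = np.array([4.0, 0, 0, 0, 0, 2])
print(A @ z, b)
```

Note that a flipped row turns its slack column into a surplus column (coefficient -1), which is still a valid standard-form variable.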
The feasible points are those x that satisfy Ax = b and x >= 0 (i.e., have nonnegative components). For m < n, there are infinitely many points x satisfying Ax = b.

REVIEW of linear algebra. Recall that there are three types of elementary row operations:
1. Interchanging two rows of the matrix;
2. Multiplying one row of the matrix by a (nonzero) constant;
3. Adding a constant multiple of one row to another row.

Pivoting about the (i, j) position: using the above three types of row operations to make the entry at position (i, j) equal to 1 and all other entries in the j-th column equal to 0.

To solve Ax = b using elementary row operations, we first form the augmented matrix [A, b]. Using the Gauss-Jordan algorithm, [A, b] can be reduced by a sequence of elementary row operations to a unique reduced echelon form.

Assume that the first m columns of A are linearly independent. Pivoting about the (1, 1), (2, 2), ..., (m, m) positions consecutively reduces [A, b] to the reduced echelon form

    [I_m, Y, y_0],   y_0 = [y_10, y_20, ..., y_m0]^T,

called the canonical augmented matrix.
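A pivot operation of this kind is a few lines of NumPy. The sketch below reduces a hypothetical 2x4 augmented matrix to canonical form by pivoting about the (1,1) and (2,2) positions (0-based (0,0) and (1,1) in the code):

```python
import numpy as np

def pivot(T, i, j):
    """Pivot the tableau T about position (i, j): make entry (i, j)
    equal to 1 and every other entry of column j equal to 0, using
    elementary row operations."""
    T = T.astype(float).copy()
    T[i] /= T[i, j]
    for k in range(T.shape[0]):
        if k != i:
            T[k] -= T[k, j] * T[i]
    return T

# A hypothetical augmented matrix [A, b] for a 2x4 system.
T = np.array([[2.0, 1.0, 1.0, 0.0, 10.0],
              [1.0, 3.0, 0.0, 1.0, 15.0]])
T = pivot(T, 0, 0)
T = pivot(T, 1, 1)
print(T)  # the first two columns are now the identity
```

After the two pivots the last column reads off the basic solution x = [3, 4, 0, 0]^T for this system.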
In particular, Ax = b has the solution

    x = [y_0^T, 0^T]^T = [y_10, ..., y_m0, 0, ..., 0]^T,

where y_0 is the last column of the canonical augmented matrix above.

Basic solution: this solution x = [y_0^T, 0^T]^T is called a basic solution to Ax = b. The first m columns of A are called basic columns, the first m variables x_1, ..., x_m are called basic variables, and the last n - m variables x_{m+1}, ..., x_n are called nonbasic variables.

In general, for any solution x of Ax = b:
- x is called a basic solution if x has at most m nonzero components and the corresponding columns of A are linearly independent;
- the corresponding variables (of the possibly nonzero components) are called basic variables;
- the corresponding columns of A are called basic columns, and they form a basis.

Any m basic columns a_{k_1}, a_{k_2}, ..., a_{k_m} of A, with 1 <= k_1 < k_2 < ... < k_m <= n, are linearly independent. In the corresponding canonical augmented matrix, the k_1-th column is e_1 = [1, 0, ..., 0]^T, the k_2-th column is e_2, ..., and the k_m-th column is e_m. In this case a_{k_1}, ..., a_{k_m} are the basic columns and the associated variables x_{k_1}, x_{k_2}, ..., x_{k_m} are the basic variables.

Example: Suppose that, for a 2 x 4 matrix A = [a_1, a_2, a_3, a_4], pivoting reduces [A, b] to a canonical augmented matrix in which columns 2 and 4 are e_1 and e_2 and the last column is [5, 4]^T. Then x_2 and x_4 are basic variables, a_2 and a_4 are basic columns, and

    x = [0, 5, 0, 4]^T

is a basic solution (here m = 2).

There are at most

    C(n, m) = n! / (m!(n - m)!)

basic solutions.

Example: Consider an equation Ax = b with a 2 x 4 coefficient matrix A = [a_1, a_2, a_3, a_4] whose basic solutions work out as follows:

- #1 basic solution: x = [6, 2, 0, 0]^T; basic variables x_1, x_2; basis B = [a_1, a_2]
- #2 basic solution: x = [0, -2, 6, 0]^T; basic variables x_2, x_3; basis B = [a_2, a_3]
- #3 basic solution: x = [0, 0, 0, 2]^T; basic variables x_3, x_4; basis B = [a_3, a_4]
- #4 basic solution: x = [0, 0, 0, 2]^T; basic variables x_1, x_4; basis B = [a_1, a_4]
- #5 basic solution: x = [0, 0, 0, 2]^T; basic variables x_2, x_4; basis B = [a_2, a_4]

Note that x = [2, 1, 0, 1]^T is a solution, but it is not basic.

To look for the remaining basic solution, we set x_2 = x_4 = 0; a solution of the form x = [x_1, 0, x_3, 0]^T for Ax = b is then equivalent to a 2 x 2 system [a_1, a_3][x_1, x_3]^T = b whose augmented matrix is inconsistent. So there are only five basic solutions (out of the C(4, 2) = 6 column pairs).

Basic feasible solutions (BFS)

Consider the constraints of the LP problem, Ax = b, x >= 0, and recall the definition of a basic solution. If x is a basic solution to Ax = b and it also satisfies x >= 0, we call it a basic feasible solution (BFS).

Note that a basic solution x is feasible iff every basic variable is >= 0. If at least one of the basic variables is = 0, we say that the BFS is degenerate; solutions #3 to #5 above show how the same degenerate basic solution can arise from several different bases.

We henceforth assume nondegeneracy (i.e., all the BFSs we consider are nondegenerate).
This makes the derivations simpler; the degenerate case can be handled similarly.

In the example above:
- x = [6, 2, 0, 0]^T is a BFS;
- x = [0, 0, 0, 2]^T is a degenerate BFS;
- x = [0, -2, 6, 0]^T is a basic solution, but not feasible;
- x = [2, 1, 0, 1]^T is feasible, but not basic.

Geometric view

Geometrically, a BFS corresponds to a corner point (vertex) of the constraint set.

Example: Consider the constraint Ax = b, x >= 0, where A = [1, 2, 3] and b = [6]. The feasible set is the portion of the plane x_1 + 2x_2 + 3x_3 = 6 lying in the first octant. The BFSs are

    [6, 0, 0]^T,  [0, 3, 0]^T,  and  [0, 0, 2]^T.

Note that the BFSs are just the vertices of the constraint set. In general, we can think (geometrically) of a BFS as a vertex of the constraint set.

Optimal solutions

A feasible point that minimizes the objective function is called an optimal feasible solution. A BFS that is also optimal is called an optimal basic feasible solution.
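These definitions can be checked by brute force: for each pair of columns that forms a basis, solve for the would-be basic variables and test nonnegativity. The 2x4 system below is a made-up illustration (not the example above), assuming NumPy:

```python
from itertools import combinations
import numpy as np

# A hypothetical 2x4 system used only to illustrate the definitions.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
m, n = A.shape

basic, feasible = [], []
for cols in combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                           # columns do not form a basis
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)  # values of the basic variables
    basic.append(x)
    if np.all(x >= -1e-9):
        feasible.append(x)                 # a basic feasible solution

print(len(basic), len(feasible))  # -> 6 4
```

For this instance all C(4, 2) = 6 column pairs form bases, so there are six basic solutions, of which four are feasible (e.g. [4, 0, 0, -3]^T is basic but not feasible).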
Fundamental Theorem of LP

Consider an LP problem in standard form.
1. If there exists a feasible solution, then there exists a BFS.
2. If there exists an optimal feasible solution, then there exists an optimal BFS.

Or, more informally:
- If the feasible set is nonempty, then it has a vertex.
- If the problem has a minimizer (optimal solution), then one of the vertices is a minimizer.

Consequences of the FTLP

Suppose we are given an LP problem, and assume that a solution exists. To find the solution, it suffices to look among the set of BFSs.
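The FTLP suggests a naive solution method: enumerate all BFSs and keep the best one. A sketch on the production problem in standard form (slacks added; assuming NumPy):

```python
from itertools import combinations
import numpy as np

# Production LP in standard form (slacks x3, x4 added):
#   minimize -x1 - 5*x2   s.t.  5*x1 + 6*x2 + x3 = 30,
#                               3*x1 + 2*x2 + x4 = 12,  x >= 0.
A = np.array([[5.0, 6.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
b = np.array([30.0, 12.0])
c = np.array([-1.0, -5.0, 0.0, 0.0])
m, n = A.shape

best_x, best_val = None, np.inf
for cols in combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    if np.all(x >= -1e-9) and c @ x < best_val:  # feasible and better
        best_x, best_val = x, c @ x

print(best_x, best_val)  # the optimal BFS and its value
```

The best BFS is [0, 5, 0, 2]^T with value -25, i.e. revenue 25 at the vertex (0, 5), matching the geometric method.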
There are only a finite number of BFSs; in fact, there are at most

    C(n, m) = n! / (m!(n - m)!)

basic solutions. Therefore, the FTLP allows us to transform the original problem (an infinite number of feasible points) into a problem over a finite number of points.

However, the total number of BFSs may of course be very large. For example, if m = 5 and n = 50, then C(50, 5) = 2,118,760. The brute-force approach of exhaustively comparing all the BFSs is impractical. Therefore, we need a more computationally efficient method to find the minimizer among the possible BFSs.

Simplex method: an organized way of going from one BFS to another to search for the global minimizer. The method uses numerical linear algebraic techniques. (Sometimes, to avoid confusion, we use boldface letters for vectors.)

Simplex algorithms

The FTLP allows us to limit the search for optimal solutions to a finite number of points (the BFSs).

Simplex method: basic idea
- Start with an initial basis corresponding to a BFS.
- Move to an adjacent BFS (adjacent vertex) in such a way that the objective function decreases.
- If the stopping criterion is satisfied, stop; otherwise, repeat the process.

We move from one basic solution to an adjacent one: if a_p leaves the basis and a_q enters the basis, we pivot about the (r, q) position of the canonical augmented matrix, where r is the row in which the departing basic column a_p holds its 1 (the p-th column equals e_r before the pivot; the q-th column equals e_r after it). Here x_p is the D.V. (departing variable) and x_q is the E.V. (entering variable).

Example: a_2 and a_4 form the original basis. To replace a_2 by a_1, we pivot about position (1, 1) (p = 2, r = 1, q = 1). The new basis is {a_1, a_4}; x_2 is the D.V. and x_1 is the E.V.

Two questions:
1. How do we ensure that the adjacent basic solution we move to is feasible? In other words, given the E.V. x_q, how do we choose a D.V. x_p such that, after pivoting about the (r, q) element, the resulting rightmost column of the canonical augmented matrix has all nonnegative elements?
2. How do we choose the E.V. x_q to ensure that the objective function value at the new basic solution is smaller than the previous value?

Maintaining feasibility: choosing the leaving variable x_p

Feasibility requires that the last column of the canonical augmented matrix remain >= 0. Given A and b, suppose we have a set of basic variables corresponding to a BFS; for simplicity, assume it is {x_1, ..., x_m}, with corresponding basis {a_1, ..., a_m}. Suppose we now want to choose x_q as the E.V., where q > m. One of the vectors a_p, 1 <= p <= m, must leave the basis.

Suppose the canonical augmented matrix for the original basis is

    [A, b] -> [I_m, Y, y_0],

with entries y_{i,j} and last column entries y_{i,0}, where y_{i,0} >= 0 because the current basic solution is feasible.

In general, if a_q replaces a_p by pivoting about the (r, q) element, the last column of the new canonical augmented matrix becomes:

- r-th entry:             y_{r,0} / y_{r,q}
- i-th entry (i != r):    y_{i,0} - (y_{r,0} / y_{r,q}) y_{i,q}

(If no entry of the q-th column is positive, the feasible set is unbounded in the direction of increasing x_q.) Hence, to ensure feasibility, we must choose r such that y_{r,q} > 0 and, for every i with y_{i,q} > 0,

    y_{i,0} - (y_{r,0} / y_{r,q}) y_{i,q} >= 0   <=>   y_{r,0} / y_{r,q} <= y_{i,0} / y_{i,q},

that is,

    r = argmin { y_{i,0} / y_{i,q} : y_{i,q} > 0, i = 1, 2, ..., m }.

We may divide each positive i-th component of the q-th column into the i-th component of the last column to create the "ratio" vector with entries y_{i,0} / y_{i,q}.
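A minimal sketch of this ratio test (the numbers in the usage line are made up):

```python
import numpy as np

def ratio_test(y0, yq):
    """Given the current right-hand side y0 (all >= 0) and the entering
    column yq, return the pivot row r = argmin{ y0[i]/yq[i] : yq[i] > 0 },
    or None if no entry of yq is positive (unbounded problem)."""
    pos = yq > 1e-12
    if not np.any(pos):
        return None
    ratios = np.full(len(y0), np.inf)
    ratios[pos] = y0[pos] / yq[pos]
    return int(np.argmin(ratios))

# Entries with yq <= 0 are skipped: row 1 wins with ratio 12/4 = 3.
r = ratio_test(np.array([6.0, 12.0, 9.0]), np.array([-1.0, 4.0, 2.0]))
print(r)  # -> 1
```

Only rows with positive entries in the entering column compete; the smallest ratio determines the departing row.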
In summary, to maintain feasibility, the leaving variable x_p must correspond to the position r in the above ratio vector with the smallest value.

Maintaining optimality: choosing the entering variable x_q

Once we select a_q, we know how to pick a_p to leave the basis. The question is how to choose the column a_q to enter the basis. The goal is that the new (adjacent) BFS should have a lower objective function value.

Suppose the canonical augmented matrix for the original basis is [I_m, Y, y_0] with entries y_{i,j} as above. The basic cost coefficients are those associated with the basic variables,

    c_0 = [c_1, c_2, ..., c_m]^T.

The objective function value at the BFS x = [y_{1,0}, ..., y_{m,0}, 0, ..., 0]^T is

    z_0 = c^T x = c_0^T y_0 = sum_{i=1}^m c_i y_{i,0}.

Now consider a new basis where a_q enters the basis and a_p leaves (with r = p in this case). The new BFS has basic values y_{i,0} - (y_{r,0}/y_{r,q}) y_{i,q} for i != r, together with x_q = y_{r,0}/y_{r,q}. The objective function value at the new BFS is

    z = sum_{i=1, i != r}^m c_i ( y_{i,0} - (y_{r,0}/y_{r,q}) y_{i,q} ) + c_q (y_{r,0}/y_{r,q})
      = z_0 + ( c_q - sum_{i=1}^m c_i y_{i,q} ) (y_{r,0}/y_{r,q}).

Writing z_q = sum_{i=1}^m c_i y_{i,q} (the q-th component of c_0^T applied to the canonical matrix), we have

    z = z_0 + (c_q - z_q) (y_{r,0}/y_{r,q}).

Hence if c_q - z_q < 0, we conclude z < z_0, i.e., the objective function value at the new BFS is smaller.

Define

    r_i = 0            for i = 1, ..., m (basic),
    r_i = c_i - z_i    for i = m + 1, ..., n (nonbasic).

We call r_i the relative cost coefficient (RCC) of x_i. With this notation,

    z = z_0 + r_q (y_{r,0}/y_{r,q}).
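As a numerical check of this criterion, the RCCs of a given basis can be computed directly as c^T - c_0^T B^{-1} A. Below this is done for the production LP at the basis {x_2, x_4}, which the geometric method identified as optimal (assuming NumPy):

```python
import numpy as np

# Production LP in standard form: min c@x, A@x = b, x >= 0.
A = np.array([[5.0, 6.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
b = np.array([30.0, 12.0])
c = np.array([-1.0, -5.0, 0.0, 0.0])

basis = [1, 3]                    # basic variables x2, x4 (0-based)
B = A[:, basis]
A_can = np.linalg.solve(B, A)     # canonical matrix B^{-1} A
rcc = c - c[basis] @ A_can        # relative cost coefficients

print(rcc)  # zero on basic columns, nonnegative everywhere
```

All RCCs come out nonnegative (here [19/6, 0, 5/6, 0]), so no entering column can decrease the objective: the BFS is optimal.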
Note: the RCC of a basic variable is always 0. If r_q < 0, then the new BFS is better; the smaller (more negative) r_q, the better the new BFS.

In general, if x is any feasible solution, we can derive the equation

    z = c^T x = z_0 + sum_{i=m+1}^n r_i x_i.

The RCC vector is

    r^T = [r_1, r_2, ..., r_n] = c^T - c_0^T A~,

where A~ denotes the canonical (reduced) matrix obtained from A. Optimality condition: r >= 0. This leads to:

Theorem. A BFS is optimal if and only if the corresponding RCC values (the components of c^T - c_0^T A~) are all >= 0.

In summary, we choose a variable x_q with the most negative RCC value as the entering variable.

The simplex algorithm

1. Initialization: given an initial BFS and its canonical augmented matrix.
2. Calculate the RCCs (c^T - c_0^T A~) corresponding to the nonbasic variables.
3. If r_j >= 0 for all j, then STOP: the current BFS is optimal.
4. Otherwise, select a q such that r_q < 0 is the most negative.
5. If no element y_{i,q} > 0 in the q-th column a_q, then STOP: the problem is unbounded; else, calculate r = argmin{ y_{i,0}/y_{i,q} : y_{i,q} > 0 }.
6. Update the canonical augmented matrix by pivoting about the (r, q) element.
7. Go to step 2.

Simplex tableau

We now design a specific algorithm to compute optimal solutions. We focus on the standard form:

    minimize   z = c^T x
    subject to Ax = b, x >= 0,   rank(A) = m,  b >= 0.

Suppose that we already know one set of basic variables x_0. Let c_0 be the vector of basic cost coefficients in c associated with the basic variables x_0. For instance, if x_0 = (x_3, x_5, x_8) is the set of basic variables, then c_0^T = [c_3, c_5, c_8].

Simplex tableau:

              x^T
              c^T
    x_0  c_0  A                 b
              c^T - c_0^T A~    -c_0^T b

where the second column from the left (c_0) and the second row from the top (c^T) are only for convenience in calculating the last row, and can be deleted from the final tableau:

    x_0   A                 b
          c^T - c_0^T A~    -c_0^T b

(A and b here denote the current, canonical tableau entries; the bottom row holds the RCCs and the negative of the current objective value.)
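The whole iteration (steps 1 to 7 above) fits in a short function; the following is a sketch assuming NumPy, for illustration only (no anti-cycling safeguard, so it relies on the nondegeneracy assumption):

```python
import numpy as np

def simplex(A, b, c, basis):
    """Minimize c @ x subject to A @ x = b, x >= 0, starting from a
    feasible basis (a list of column indices whose submatrix B is
    invertible and for which B^{-1} b >= 0)."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    c = np.asarray(c, float)
    basis = list(basis)
    m, n = A.shape
    T = np.hstack([A, b.reshape(-1, 1)])
    T = np.linalg.solve(T[:, basis], T)   # canonical augmented matrix
    while True:
        r = c - c[basis] @ T[:, :n]       # relative cost coefficients
        q = int(np.argmin(r))
        if r[q] >= -1e-9:                 # all RCCs >= 0: optimal
            x = np.zeros(n)
            x[basis] = T[:, n]
            return x, float(c @ x)
        col = T[:, q]
        if np.all(col <= 1e-9):
            raise ValueError("problem is unbounded")
        ratios = np.where(col > 1e-9,
                          T[:, n] / np.where(col > 1e-9, col, 1.0), np.inf)
        rr = int(np.argmin(ratios))       # ratio test: departing row
        T[rr] /= T[rr, q]                 # pivot about (rr, q)
        for i in range(m):
            if i != rr:
                T[i] -= T[i, q] * T[rr]
        basis[rr] = q                     # x_q enters, old basic var leaves

# Production LP in standard form, starting from the slack basis {x3, x4}:
x, val = simplex([[5, 6, 1, 0], [3, 2, 0, 1]], [30, 12], [-1, -5, 0, 0], [2, 3])
print(x, val)
```

Starting from the slack basis, the routine reaches the optimal BFS [0, 5, 0, 2]^T with value -25.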
Example: Consider

    minimize   f(x_1, x_2) = 10x_1 - 80x_2
    subject to 2x_1 + x_2 <= 6
               -2x_1 + 8x_2 <= 8

(x_1 and x_2 are free variables), with the standard form

    minimize   s = 10(y_1 - z_1) - 80(y_2 - z_2)
    subject to 2(y_1 - z_1) + (y_2 - z_2) + w_1 = 6
               -2(y_1 - z_1) + 8(y_2 - z_2) + w_2 = 8
               y_1, z_1, y_2, z_2, w_1, w_2 >= 0,

where x_1 = y_1 - z_1, x_2 = y_2 - z_2, and w_1, w_2 are slack variables. The initial feasible solution is

    x = [y_1, z_1, y_2, z_2, w_1, w_2]^T = [0, 0, 0, 0, 6, 8]^T.

Thus the basic variables and basic cost coefficients are

    x_0 = [w_1, w_2]^T,   c_0 = [0, 0]^T,

and the initial simplex tableau is

           y_1   z_1   y_2   z_2   w_1   w_2  |   b
    w_1      2    -2     1    -1     1     0  |   6
    w_2     -2     2     8    -8     0     1  |   8
    r^T     10   -10   -80    80     0     0  |   0

Simplex method

The simplex algorithm moves from the initial feasible but non-optimal solution to an optimal solution while maintaining feasibility through the iterative procedure. The goal is to reach the optimality condition: all entries of the bottom row (except possibly the very last) are nonnegative.

Step 1. Locate the most negative number in the bottom row of the simplex tableau, excluding the last column. The variable associated with this column is the E.V.; we also call this column the work column. If more than one candidate for the most negative number exists, pick any one.

Step 2. Form ratios by dividing each positive number in the work column, excluding the last row, into the element in the same row and last column. The element in the work column that yields the smallest ratio is the pivot element; the basic variable associated with the pivot row is the D.V. If no element in the work column is positive, stop: the problem has no (finite) solution.

Step 3. Pivot about the pivot element: use elementary row operations to convert the pivot element to 1 and then to reduce all other elements in the work column to 0.

Step 4. Swap the E.V. with the D.V.: replace the variable in the pivot row of the first column by the variable at the top of the work column. The new first column lists the current set of basic variables.

Step 5. Repeat Steps 1 through 4 until there are no negative numbers in the last row, excluding the last column. The basic feasible solution derived from the final set of basic variables is a minimizer: assign to each basic variable (the variables in the first column) the number in the same row and last column, and set all other variables to zero. The negative of the number in the last row and last column is the minimum value.

Solution of the example:

Step 1. The work column is the y_2 column (-80 is the most negative entry in the bottom row).

Step 2. The ratios are 6/1 = 6 and 8/8 = 1, so the entry 8 in the w_2 row is the pivot element: w_2 is the D.V. and y_2 is the E.V.

Step 3. Row operations give the updated simplex tableau

           y_1    z_1   y_2   z_2   w_1   w_2   |   b
    w_1    9/4   -9/4     0     0     1  -1/8   |   5
    y_2   -1/4    1/4     1    -1     0   1/8   |   1
    r^T    -10     10     0     0     0    10   |  80

The basic variable w_2 has been replaced by y_2. In the updated tableau, the basic variables are w_1 and y_2, and the updated basic feasible solution is w_1 = 5, y_2 = 1, all others zero.

We next repeat the same steps. Since there is one negative number, -10, in the last row, the work column is the y_1 column. Its only positive entry is 9/4, with ratio 5 / (9/4) = 20/9, so it is the pivot element: w_1 is the D.V. and y_1 is the E.V. Pivoting (first dividing the pivot row by 9/4, then reducing the other elements of the work column to zero) and replacing w_1 by y_1, we obtain the final simplex tableau

           y_1   z_1   y_2   z_2    w_1     w_2   |    b
    y_1      1    -1     0     0    4/9   -1/18   |  20/9
    y_2      0     0     1    -1    1/9     1/9   |  14/9
    r^T      0     0     0     0   40/9    85/9   | 920/9

Since there is no negative number in the last row, we are done. The optimal solution is

    y_1 = 20/9,  y_2 = 14/9,  z_1 = z_2 = w_1 = w_2 = 0,

with minimum value -920/9.
Now, let us go back to the original optimization problem:

    minimize   f(x_1, x_2) = 10x_1 - 80x_2
    subject to 2x_1 + x_2 <= 6
               -2x_1 + 8x_2 <= 8.

Recall that x_1 = y_1 - z_1 and x_2 = y_2 - z_2. Thus, the final solution: f(x_1, x_2) attains the minimal value -920/9 (about -102.22) at x_1 = 20/9, x_2 = 14/9.

Artificial LP

The simplex method requires an initial basis. How do we choose an initial basis?

Brute force: arbitrarily choose m basic columns and transform the augmented matrix for the problem into canonical form. If the rightmost column is nonnegative, then we have a legitimate (initial) BFS; otherwise, try again. This potentially requires C(n, m) tries, and is therefore not practical.

Consider the standard form LP

    minimize   z = c^T x
    subject to Ax = b, x >= 0,   rank(A) = m, A an m x n matrix, b >= 0.

Define the associated artificial LP:

    minimize   y_1 + y_2 + ... + y_m
    subject to [A, I_m] [x; y] = b
               x >= 0, y >= 0,

where y = [y_1, y_2, ..., y_m]^T is called the vector of artificial variables. Note that the artificial LP has an obvious initial BFS,

    [x; y] = [0; b],

so we can proceed with the simplex algorithm to solve the artificial LP. Note that for y >= 0,

    y_1 + y_2 + ... + y_m >= 0,

and the strict inequality holds if some y_i > 0. So if the minimum value of the artificial LP is 0, then every y_i must be nonbasic (under nondegeneracy), i.e., y_i = 0.

Proposition 16.1: The original LP problem has a BFS if and only if the associated artificial problem has an optimal feasible solution with objective function value zero. In this case, the optimal solution of the artificial LP serves as a BFS for the original LP.

Two-phase simplex method:
- Phase I: solve the artificial problem using the simplex method.
- Phase II: use the BFS resulting from Phase I to initialize the simplex algorithm on the original LP problem.
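The two phases can be checked numerically with SciPy's linprog on the example that follows (assuming SciPy; Phase II is solved directly rather than warm-started, since linprog does not accept an initial basis):

```python
import numpy as np
from scipy.optimize import linprog

# Example 16.4 in standard form: 4*x1 + 2*x2 - x3 = 12, x1 + 4*x2 - x4 = 6.
A = np.array([[4.0, 2.0, -1.0, 0.0],
              [1.0, 4.0, 0.0, -1.0]])
b = np.array([12.0, 6.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

# Phase I: artificial LP  min y1 + y2  s.t.  [A, I][x; y] = b,  x, y >= 0.
A1 = np.hstack([A, np.eye(2)])
c1 = np.concatenate([np.zeros(4), np.ones(2)])
phase1 = linprog(c1, A_eq=A1, b_eq=b, bounds=[(0, None)] * 6)

# Phase II: the original LP (solved directly here as a check).
phase2 = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)

print(phase1.fun)                # optimal value 0 => original LP feasible
print(phase2.fun, phase2.x[:2])  # 54/7 at (18/7, 6/7)
```

Phase I reaching objective value 0 certifies, per Proposition 16.1, that the original problem has a BFS; Phase II then confirms the optimum found by hand below.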
Example 16.4: Consider

    minimize   2x_1 + 3x_2
    subject to 4x_1 + 2x_2 >= 12
               x_1 + 4x_2 >= 6
               x_1, x_2 >= 0.

Solution: Standard form (with surplus variables x_3 and x_4):

    minimize   2x_1 + 3x_2
    subject to 4x_1 + 2x_2 - x_3 = 12
               x_1 + 4x_2 - x_4 = 6
               x_1, ..., x_4 >= 0.

There is no obvious feasible solution.

Phase I: Solve

    minimize   x_5 + x_6
    subject to 4x_1 + 2x_2 - x_3 + x_5 = 12
               x_1 + 4x_2 - x_4 + x_6 = 6
               x_1, ..., x_6 >= 0.

x = [0, 0, 0, 0, 12, 6]^T is an obvious BFS. The simplex tableau is

           x_1   x_2   x_3   x_4   x_5   x_6  |    b
    x_5      4     2    -1     0     1     0  |   12
    x_6      1     4     0    -1     0     1  |    6
    r^T     -5    -6     1     1     0     0  |  -18

The pivoting position is (2, 2): x_6 is the D.V., x_2 is the E.V. The updated tableau:

           x_1   x_2   x_3   x_4   x_5   x_6  |    b
    x_5    7/2     0    -1   1/2     1  -1/2  |    9
    x_2    1/4     1     0  -1/4     0   1/4  |  3/2
    r^T   -7/2     0     1  -1/2     0   3/2  |   -9

Next, pivoting about (1, 1): x_5 is the D.V., x_1 is the E.V. Updating gives

           x_1   x_2    x_3    x_4    x_5    x_6  |     b
    x_1      1     0   -2/7    1/7    2/7   -1/7  |  18/7
    x_2      0     1   1/14   -2/7  -1/14    2/7  |   6/7
    r^T      0     0      0      0      1      1  |     0

Done: the minimum value is 0. Phase I has produced the BFS

    [x_1, x_2, x_3, x_4] = [18/7, 6/7, 0, 0].

Phase II: We can obtain the simplex tableau for the original LP immediately from the one above. Delete the two artificial-variable columns, then replace the last row by c^T - c_0^T A~, where

    c^T = [2, 3, 0, 0],   c_0^T = [2, 3],

and A~ is the remaining (canonical) coefficient block. So the simplex tableau for the original LP is

           x_1   x_2    x_3    x_4  |     b
    x_1      1     0   -2/7    1/7  |  18/7
    x_2      0     1   1/14   -2/7  |   6/7
    r^T      0     0   5/14    4/7  | -54/7

It is already optimal. The solution is x = [18/7, 6/7, 0, 0]^T, with f = 54/7.

Homework:

1. A cereal manufacturer wishes to produce 1000 pounds of a cereal that contains exactly 10% fiber, 2% fat, and 5% sugar (by weight). The cereal is to be produced by combining four items of raw food material in appropriate proportions. These four items have certain combinations of fiber, fat, and sugar content, and are available at various prices per pound. The manufacturer wishes to find the amounts of each of the above items to be used to produce the cereal in the least expensive way. Formulate the problem as a linear programming problem. What can you say about the existence of a solution to this problem?

2. Solve the following linear program graphically:

3. Use the simplex method to solve the following linear program:

4. Convert the following problem to standard form and then solve it using the simplex method:

5. From the textbook: #5.8, 5.12, 5.13, 5.16 (you may combine these four problems into one with several parts).
Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x
More informationOperations Research Lecture 2: Linear Programming Simplex Method
Operations Research Lecture 2: Linear Programming Simplex Method Notes taken by Kaiquan Xu@Business School, Nanjing University Mar 10th 2016 1 Geometry of LP 1.1 Graphical Representation and Solution Example
More informationSimplex Algorithm Using Canonical Tableaus
41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationPart 1. The Review of Linear Programming
In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm
More informationStandard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta
Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau
More informationMATHEMATICAL PROGRAMMING I
MATHEMATICAL PROGRAMMING I Books There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows: Bazaraa, Jarvis
More informationLinear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004
Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define
More informationIE 400 Principles of Engineering Management. The Simplex Algorithm-I: Set 3
IE 4 Principles of Engineering Management The Simple Algorithm-I: Set 3 So far, we have studied how to solve two-variable LP problems graphically. However, most real life problems have more than two variables!
More informationChap6 Duality Theory and Sensitivity Analysis
Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we
More informationPart 1. The Review of Linear Programming
In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation
More informationProfessor Alan H. Stein October 31, 2007
Mathematics 05 Professor Alan H. Stein October 3, 2007 SOLUTIONS. For most maximum problems, the contraints are in the form l(x) k, where l(x) is a linear polynomial and k is a positive constant. Explain
More informationMath 273a: Optimization The Simplex method
Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form
More informationSimplex Method for LP (II)
Simplex Method for LP (II) Xiaoxi Li Wuhan University Sept. 27, 2017 (week 4) Operations Research (Li, X.) Simplex Method for LP (II) Sept. 27, 2017 (week 4) 1 / 31 Organization of this lecture Contents:
More informationLinear Programming Duality
Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve
More informationMath Models of OR: Extreme Points and Basic Feasible Solutions
Math Models of OR: Extreme Points and Basic Feasible Solutions John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 1180 USA September 018 Mitchell Extreme Points and Basic Feasible Solutions
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More informationExample Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones
Chapter 2 THE COMPLEMENTARY PIVOT ALGORITHM AND ITS EXTENSION TO FIXED POINT COMPUTING LCPs of order 2 can be solved by drawing all the complementary cones in the q q 2 - plane as discussed in Chapter.
More informationCSC Design and Analysis of Algorithms. LP Shader Electronics Example
CSC 80- Design and Analysis of Algorithms Lecture (LP) LP Shader Electronics Example The Shader Electronics Company produces two products:.eclipse, a portable touchscreen digital player; it takes hours
More informationAM 121: Intro to Optimization Models and Methods Fall 2018
AM 121: Intro to Optimization Models and Methods Fall 2018 Lecture 5: The Simplex Method Yiling Chen Harvard SEAS Lesson Plan This lecture: Moving towards an algorithm for solving LPs Tableau. Adjacent
More informationM340(921) Solutions Problem Set 6 (c) 2013, Philip D Loewen. g = 35y y y 3.
M340(92) Solutions Problem Set 6 (c) 203, Philip D Loewen. (a) If each pig is fed y kilograms of corn, y 2 kilos of tankage, and y 3 kilos of alfalfa, the cost per pig is g = 35y + 30y 2 + 25y 3. The nutritional
More informationPrelude to the Simplex Algorithm. The Algebraic Approach The search for extreme point solutions.
Prelude to the Simplex Algorithm The Algebraic Approach The search for extreme point solutions. 1 Linear Programming-1 x 2 12 8 (4,8) Max z = 6x 1 + 4x 2 Subj. to: x 1 + x 2
More informationMATH 445/545 Homework 2: Due March 3rd, 2016
MATH 445/545 Homework 2: Due March 3rd, 216 Answer the following questions. Please include the question with the solution (write or type them out doing this will help you digest the problem). I do not
More informationTHE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I
LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More informationSimplex method(s) for solving LPs in standard form
Simplex method: outline I The Simplex Method is a family of algorithms for solving LPs in standard form (and their duals) I Goal: identify an optimal basis, as in Definition 3.3 I Versions we will consider:
More informationLecture 2: The Simplex method
Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.
More informationMULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question.
Math324 - Test Review 2 - Fall 206 MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. Determine the vertex of the parabola. ) f(x) = x 2-0x + 33 ) (0,
More informationOPRE 6201 : 3. Special Cases
OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are
More information3. Replace any row by the sum of that row and a constant multiple of any other row.
Section. Solution of Linear Systems by Gauss-Jordan Method A matrix is an ordered rectangular array of numbers, letters, symbols or algebraic expressions. A matrix with m rows and n columns has size or
More informationThe Simplex Method for Solving a Linear Program Prof. Stephen Graves
The Simplex Method for Solving a Linear Program Prof. Stephen Graves Observations from Geometry feasible region is a convex polyhedron an optimum occurs at a corner point possible algorithm - search over
More informationOPERATIONS RESEARCH. Linear Programming Problem
OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for
More informationThe Simplex Method. Lecture 5 Standard and Canonical Forms and Setting up the Tableau. Lecture 5 Slide 1. FOMGT 353 Introduction to Management Science
The Simplex Method Lecture 5 Standard and Canonical Forms and Setting up the Tableau Lecture 5 Slide 1 The Simplex Method Formulate Constrained Maximization or Minimization Problem Convert to Standard
More informationWeek 3: Simplex Method I
Week 3: Simplex Method I 1 1. Introduction The simplex method computations are particularly tedious and repetitive. It attempts to move from one corner point of the solution space to a better corner point
More informationSection 4.1 Solving Systems of Linear Inequalities
Section 4.1 Solving Systems of Linear Inequalities Question 1 How do you graph a linear inequality? Question 2 How do you graph a system of linear inequalities? Question 1 How do you graph a linear inequality?
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationECE 307 Techniques for Engineering Decisions
ECE 7 Techniques for Engineering Decisions Introduction to the Simple Algorithm George Gross Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign ECE 7 5 9 George
More informationThe Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006
The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006 1 Simplex solves LP by starting at a Basic Feasible Solution (BFS) and moving from BFS to BFS, always improving the objective function,
More information4.5 Simplex method. min z = c T x s.v. Ax = b. LP in standard form
4.5 Simplex method min z = c T x s.v. Ax = b x 0 LP in standard form Examine a sequence of basic feasible solutions with non increasing objective function value until an optimal solution is reached or
More informationMVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis
MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with
More information1 The linear algebra of linear programs (March 15 and 22, 2015)
1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real
More informationTRANSPORTATION PROBLEMS
Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations
More information4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b
4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal
More informationNotes taken by Graham Taylor. January 22, 2005
CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first
More informationSimplex tableau CE 377K. April 2, 2015
CE 377K April 2, 2015 Review Reduced costs Basic and nonbasic variables OUTLINE Review by example: simplex method demonstration Outline Example You own a small firm producing construction materials for
More informationIn Chapters 3 and 4 we introduced linear programming
SUPPLEMENT The Simplex Method CD3 In Chapters 3 and 4 we introduced linear programming and showed how models with two variables can be solved graphically. We relied on computer programs (WINQSB, Excel,
More informationUNIVERSITY of LIMERICK
UNIVERSITY of LIMERICK OLLSCOIL LUIMNIGH Department of Mathematics & Statistics Faculty of Science and Engineering END OF SEMESTER ASSESSMENT PAPER MODULE CODE: MS4303 SEMESTER: Spring 2018 MODULE TITLE:
More informationDistributed Real-Time Control Systems. Lecture Distributed Control Linear Programming
Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can
More informationSlide 1 Math 1520, Lecture 10
Slide 1 Math 1520, Lecture 10 In this lecture, we study the simplex method which is a powerful method for solving maximization/minimization problems developed in the late 1940 s. It has been implemented
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More informationLinear Programming and the Simplex method
Linear Programming and the Simplex method Harald Enzinger, Michael Rath Signal Processing and Speech Communication Laboratory Jan 9, 2012 Harald Enzinger, Michael Rath Jan 9, 2012 page 1/37 Outline Introduction
More informationUnderstanding the Simplex algorithm. Standard Optimization Problems.
Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form
More informationThe Graphical Method & Algebraic Technique for Solving LP s. Métodos Cuantitativos M. En C. Eduardo Bustos Farías 1
The Graphical Method & Algebraic Technique for Solving LP s Métodos Cuantitativos M. En C. Eduardo Bustos Farías The Graphical Method for Solving LP s If LP models have only two variables, they can be
More informationWeek_4: simplex method II
Week_4: simplex method II 1 1.introduction LPs in which all the constraints are ( ) with nonnegative right-hand sides offer a convenient all-slack starting basic feasible solution. Models involving (=)
More informationAM 121: Intro to Optimization
AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript
More informationDuality Theory, Optimality Conditions
5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every
More informationChapter 9: Systems of Equations and Inequalities
Chapter 9: Systems of Equations and Inequalities 9. Systems of Equations Solve the system of equations below. By this we mean, find pair(s) of numbers (x, y) (if possible) that satisfy both equations.
More informationThe use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:
Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable
More informationMAT016: Optimization
MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The
More informationIE 400: Principles of Engineering Management. Simplex Method Continued
IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for
More information3. THE SIMPLEX ALGORITHM
Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).
More informationSlack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0
Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2
More informationSystem of Linear Equations
Chapter 7 - S&B Gaussian and Gauss-Jordan Elimination We will study systems of linear equations by describing techniques for solving such systems. The preferred solution technique- Gaussian elimination-
More informationSECTION 3.2: Graphing Linear Inequalities
6 SECTION 3.2: Graphing Linear Inequalities GOAL: Graphing One Linear Inequality Example 1: Graphing Linear Inequalities Graph the following: a) x + y = 4 b) x + y 4 c) x + y < 4 Graphing Conventions:
More informationThe Simplex Algorithm
8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.
More informationSpecial cases of linear programming
Special cases of linear programming Infeasible solution Multiple solution (infinitely many solution) Unbounded solution Degenerated solution Notes on the Simplex tableau 1. The intersection of any basic
More informationLecture slides by Kevin Wayne
LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming
More informationCOT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748
COT 6936: Topics in Algorithms! Giri Narasimhan ECS 254A / EC 2443; Phone: x3748 giri@cs.fiu.edu https://moodle.cis.fiu.edu/v2.1/course/view.php?id=612 Gaussian Elimination! Solving a system of simultaneous
More informationLinear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017
Linear Programming (Com S 4/ Notes) Yan-Bin Jia Nov 8, Introduction Many problems can be formulated as maximizing or minimizing an objective in the form of a linear function given a set of linear constraints
More informationLinear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming
Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)
More informationOPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM
OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation
More informationThe dual simplex method with bounds
The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the
More informationOPERATIONS RESEARCH. Michał Kulej. Business Information Systems
OPERATIONS RESEARCH Michał Kulej Business Information Systems The development of the potential and academic programmes of Wrocław University of Technology Project co-financed by European Union within European
More informationLecture 5 Simplex Method. September 2, 2009
Simplex Method September 2, 2009 Outline: Lecture 5 Re-cap blind search Simplex method in steps Simplex tableau Operations Research Methods 1 Determining an optimal solution by exhaustive search Lecture
More informationIntroduce the idea of a nondegenerate tableau and its analogy with nondenegerate vertices.
2 JORDAN EXCHANGE REVIEW 1 Lecture Outline The following lecture covers Section 3.5 of the textbook [?] Review a labeled Jordan exchange with pivoting. Introduce the idea of a nondegenerate tableau and
More information