A Review of Linear Programming

Instructor: Farid Alizadeh
IEOR 4600y, Spring 2001
February 14, 2001

1 Overview

In this note we review the basic properties of linear programming, including the primal simplex method, duality theory, and the dual simplex method.

2 Linear Programming And Convex Sets

The standard-form LP is defined as

    min  c^T x
    s.t. Ax = b
         x ≥ 0

Here c, x ∈ R^n, A ∈ R^{m×n} with m < n, and b ∈ R^m. We use the standard form to ease discussion of the theory and algorithms pertaining to linear programming. There are other possible forms, for instance:

    max  w^T x
    s.t. Ax ≤ b

Feasible set: {x : Ax = b, x ≥ 0}
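An inequality form like the one above is brought into standard form by appending one slack variable per constraint and negating the objective. A small numpy sketch, with made-up data, illustrates the conversion:

```python
import numpy as np

# Convert  max w^T x  s.t. Ax <= b, x >= 0  into standard form
# min c^T x  s.t. A_std x = b, x >= 0.  The data are illustrative.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
w = np.array([2.0, 3.0])

m, n = A.shape
A_std = np.hstack([A, np.eye(m)])        # [A | I]: one slack per inequality
c = np.concatenate([-w, np.zeros(m)])    # max w^T x  ==  min (-w)^T x

# Any x >= 0 with Ax <= b maps to a standard-form feasible point:
x = np.array([1.0, 1.0])
x_std = np.concatenate([x, b - A @ x])   # append the slack values
assert np.allclose(A_std @ x_std, b) and (x_std >= 0).all()
```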

Example 1 (Extreme points) One constraint: x1 + x2 + x3 = 1.

Example 2 (Two equality constraints in R^3)

Definition 2 A set C ⊆ R^n is convex if for all x ∈ C and y ∈ C, every point z on the line segment from x to y also belongs to C.

The point z is on the line segment from x to y means z = α(y − x) + x for some 0 ≤ α ≤ 1; equivalently z = αy + (1 − α)x, which is called a convex combination of x and y.

Definition 2 A point z = α1 x1 + α2 x2 + ··· + αn xn, where α1, ..., αn ≥ 0 and Σ αi = 1, is called a convex combination of x1, ..., xn.

Definition 2 The convex hull of a finite set C = {x1, ..., xk} ⊆ R^n is

    conv C = {z ∈ R^n : there exist α1, ..., αk ≥ 0 with Σ αi = 1 such that z = α1 x1 + ··· + αk xk}.

The convex hull of an arbitrary set C is defined as

    conv C := {α1 x1 + ··· + αn xn : αi ≥ 0, Σ αi = 1, xi ∈ C}.

Example 3 (Convex hull) C1 is not a convex set, but conv C1 is.

Fact 2 In a general optimization problem, if the objective function is linear, then

    min c^T x  s.t. x ∈ C

has the same value as

    min c^T x  s.t. x ∈ conv C.

In other words, every such optimization problem can be reduced to one which optimizes over a convex set. Unfortunately, effectively determining conv C is not easy in general and may involve prohibitive computational costs.

Now suppose that the set C over which we are optimizing is a finite set of points with integer coordinates; that is, suppose our optimization problem is in fact an integer programming problem. Then this fact says that we only need to worry about the convex hull of our integer-valued points. But the convex hull of a finite number of points is a polytope, and optimization over polytopes is linear programming. Thus,

Corollary 2 Each integer program is equivalent to some linear programming problem.

Again, this fact should not give us much comfort from a computational point of view. The polytope obtained by taking the convex hull of our integer program may have a huge number of facets; that is, the resulting linear program may have a huge number of constraints (much larger than the original integer program).

Fact 2 If C1 and C2 are two convex sets, then so is C1 ∩ C2.

Definition 2 A point x ∈ C, where C is a convex set, is called extreme if the only way x can be written as a convex combination of points of C is the trivial one x = 1·x; that is, x = αy + (1 − α)z with y, z ∈ C and 0 < α < 1 implies y = z = x.

Example 4 (Extreme points of two convex sets)

Fact 2 Consider the optimization problem min c^T x s.t. x ∈ C, where C is convex. Then the optimum is attained at some extreme point of C.

Now the feasible region of an LP, F = {x : Ax = b, x ≥ 0}, is a convex set, in fact a polyhedron. Thus a linear program has a finite number of extreme points. Hence, to solve a linear programming problem, it suffices to focus on the extreme points and find the one with the best objective value; that point will be optimal over all points in the feasible set of the LP.

Note 1 An optimal point does not necessarily have to be an extreme point, but among all optimal points there will be at least one extreme point.
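The fact that a linear objective attains the same minimum over a finite set C and over conv C is easy to check numerically. The sketch below uses arbitrary integer points chosen only for illustration:

```python
import numpy as np

# A linear objective over conv(C) never beats its best value over C itself:
# c^T (sum_i alpha_i x_i) = sum_i alpha_i (c^T x_i) >= min_i c^T x_i.
rng = np.random.default_rng(0)
C = rng.integers(-5, 6, size=(6, 2)).astype(float)  # six integer points in R^2
c = np.array([2.0, -1.0])

best_vertex_value = min(c @ x for x in C)

ok = True
for _ in range(1000):
    alpha = rng.random(len(C))
    alpha /= alpha.sum()            # alpha_i >= 0, sum alpha_i = 1
    z = alpha @ C                   # a point of conv(C)
    ok = ok and (c @ z >= best_vertex_value - 1e-9)
assert ok
```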

3 The point of the simplex method

Since the optimum occurs at an extreme point (of which there are only finitely many in linear programming), we can sketch the following steps to find the optimal point:

1. Find some extreme point.
2. Check to see if it is optimal.
3. If not, move to a neighboring extreme point which improves the objective function and repeat from step 2.

We now have to give an algebraic characterization of these operations. What is an extreme point of a polyhedron in algebraic terms? What does it mean to move from a point to a neighboring point? How do we decide that the current point does not have a better neighbor?

Consider the feasible set of a standard LP:

    F = {x ∈ R^n : Ax = b and x ≥ 0}

Fact 3 An extreme point of the set F has at most m nonzero coordinates, where m is the number of rows of the matrix A. Equivalently, at least n − m coordinates are zero.

Example 5 (Standard-form LP) Consider the feasible set F = {x : x1 + x2 + x3 = 3, x ≥ 0}. Here m = 1, n = 3, and A = [1, 1, 1]. The feasible set is a triangle, and each of its extreme points has two nonzero coordinates and one zero coordinate.

Definition 3 A basic feasible solution of an LP in standard form is a solution where n − m coordinates are set equal to zero and the remaining coordinates are nonnegative.

4 The Simplex Algorithm (in Matrix Form)

Consider the linear programming problem in standard form:

    min  c^T x
    s.t. Ax = b                                                    (1)
         x ≥ 0
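This characterization can be checked directly on Example 5: fix n − m = 2 coordinates at zero, solve for the remaining one, and keep the nonnegative solutions. A numpy sketch:

```python
import itertools
import numpy as np

# Enumerate basic feasible solutions of the triangle example
# F = {x : x1 + x2 + x3 = 3, x >= 0}  (m = 1, n = 3).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
m, n = A.shape

bfs = []
for basis in itertools.combinations(range(n), m):
    B = A[:, basis]
    if np.linalg.matrix_rank(B) < m:
        continue                       # singular basis candidate, skip
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)
    if (x >= 0).all():
        bfs.append(tuple(x))

# The three extreme points of the triangle, each with n - m = 2 zeros:
assert sorted(bfs) == [(0.0, 0.0, 3.0), (0.0, 3.0, 0.0), (3.0, 0.0, 0.0)]
```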

where A ∈ R^{m×n}, c, x ∈ R^n, b ∈ R^m, and m < n:

    c^T = (c1, ..., cn),

    A = [ a11  ...  a1n ]      x = [ x1 ]      b = [ b1 ]
        [  .         .  ]          [  . ]          [  . ]          (2)
        [ am1  ...  amn ]          [ xn ]          [ bm ]

Remember that the optimum occurs at an extreme point of the feasible set, and that at such a point at most m of the entries of x are nonzero. Thus in the equation Ax = b, since A is m × n, if we fix n − m entries of x, only m are left unknown; but we have m equations, and so we can solve for the unknown entries. Thus at each extreme point, the coordinates of x, and the corresponding columns of A and coordinates of c, are labeled as basic and nonbasic. Without loss of generality, let us assume that the first m coordinates of x and c and the first m columns of A are the basic ones and the rest are nonbasic. Thus the matrix A is split into basic (B) and nonbasic (N) submatrices. Similarly x = (x_B; x_N) and c = (c_B; c_N). A basic feasible solution (BFS) is then one where x = (x_B; x_N = 0).

The simplex algorithm states that one should find an extreme point and check if it is optimal. If it is not, the algorithm moves to one of its neighboring extreme points. A neighboring extreme point has the same basic/nonbasic structure except that one basic and one nonbasic coordinate are swapped. For example, x1 and x2 below are neighbors, where ∗ marks the basic coordinates:

    x1 = (∗, ∗, 0, 0, 0, ∗, 0, ∗, 0, 0, 0, 0)
    x2 = (0, ∗, ∗, 0, 0, ∗, 0, ∗, 0, 0, 0, 0)

We can then rewrite the equations as follows:

    Ax = b   ⟺   [B, N] [x_B; x_N] = b                             (3)

Suppose that we know x_N. Then the equation B x_B + N x_N = b can be solved for x_B:

    x_B = B^{-1} b − B^{-1} N x_N

The value of the objective function is z = c^T x. But

    z = c^T x = c_B^T x_B + c_N^T x_N = c_B^T B^{-1} (b − N x_N) + c_N^T x_N,

thus

    z = c_B^T B^{-1} b + (c_N^T − c_B^T B^{-1} N) x_N.
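This decomposition of z can be verified numerically. The sketch below uses the data of Example 6 from the next section, with basis {x1, x2} and an arbitrary nonnegative choice of x_N:

```python
import numpy as np

# Check z = c_B^T B^{-1} b + (c_N^T - c_B^T B^{-1} N) x_N numerically.
A = np.array([[1.0, 0.0, 1.0, 6.0],
              [0.0, 1.0, 2.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([2.0, -3.0, -2.0, 1.0])
basic, nonbasic = [0, 1], [2, 3]

B, N = A[:, basic], A[:, nonbasic]
cB, cN = c[basic], c[nonbasic]

xN = np.array([0.5, 0.25])                    # any nonnegative choice
xB = np.linalg.solve(B, b - N @ xN)           # x_B = B^{-1}(b - N x_N)
x = np.zeros(4); x[basic] = xB; x[nonbasic] = xN

rN = cN - cB @ np.linalg.solve(B, N)          # reduced-cost row
z_direct = c @ x
z_decomposed = cB @ np.linalg.solve(B, b) + rN @ xN
assert np.isclose(z_direct, z_decomposed)
```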

In particular, at a BFS, x_N = 0 and z = c_B^T B^{-1} b.

How can we determine whether x_B is optimal? Since we are minimizing, if some entry, say j, of

    r_N = c_N^T − c_B^T B^{-1} N

is negative, then by replacing x_j = 0 with x_j = α for some α > 0 we can improve the objective. In essence we are replacing x_N, which is zero, with α e_j, where e_j is the vector with 1 in the j-th entry and 0 elsewhere. (The vector r_N is the last row of the tableau when you solve an LP by hand.) The larger α, the larger the decrease in the objective function. However, we may not be able to increase α indefinitely: while we are increasing the nonbasic variable x_j we must maintain feasibility, that is,

    B x_B + α N e_j = b,   or equivalently   x_B + α B^{-1} N e_j = B^{-1} b.

Thus x_B will change as α changes. Let us call the matrix T := B^{-1} N, and its j-th column t_j. (T is what you get in the tableau when you solve linear programs by hand.) Then

    x_B + α t_j = B^{-1} b.

So far we have determined that since r_j < 0 we can increase the j-th component of x_N; that is, this component is entering the basis. Now we must determine which component of x_B has to leave (become zero). Since x_B + α t_j = B^{-1} b must remain true as we increase α, we must adjust the components of x_B. Let us look at it component by component:

    (x_B)_i + α T_ij = (B^{-1} b)_i

where T_ij is the (i, j) entry of T (that is, the i-th entry of t_j). Now we have two distinct cases:

1. If T_ij ≤ 0, then as α increases we must increase (or leave unchanged) (x_B)_i to keep the balance; in this case there is no problem.

2. If T_ij > 0, then as α increases we must decrease (x_B)_i to keep the balance. We must, however, be careful not to let (x_B)_i fall below zero and become infeasible. This means that α ≤ (x_B)_i / T_ij.

Since α ≤ (x_B)_i / T_ij should hold for all i where T_ij > 0, we can at most set α to

    α = min over {i : T_ij > 0} of (x_B)_i / T_ij.

If this minimum is attained at index k, then the k-th entry of x_B has to leave the basis.
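The entering and leaving rules just derived assemble into the revised simplex method. The following is a bare-bones numpy sketch (dense linear algebra, no anti-cycling rule, and a feasible starting basis is assumed to be given), run on Example 6 from the next section:

```python
import numpy as np

# Minimal revised simplex sketch: reduced costs r_N = c_N - c_B B^{-1} N
# pick the entering variable; the ratio test on t_j = B^{-1} A_j picks
# the leaving one.  An illustration, not a production solver.
def revised_simplex(A, b, c, basis):
    m, n = A.shape
    basis = list(basis)
    while True:
        nonbasic = [j for j in range(n) if j not in basis]
        B = A[:, basis]
        xB = np.linalg.solve(B, b)
        y = np.linalg.solve(B.T, c[basis])       # simplex multipliers
        rN = c[nonbasic] - A[:, nonbasic].T @ y  # reduced costs
        if (rN >= -1e-9).all():                  # no negative reduced cost
            x = np.zeros(n)
            x[basis] = xB
            return x, c @ x
        j = nonbasic[int(np.argmin(rN))]         # entering index
        t = np.linalg.solve(B, A[:, j])          # column t_j of T
        if (t <= 1e-9).all():
            raise ValueError("LP is unbounded")
        ratios = np.full(m, np.inf)
        pos = t > 1e-9
        ratios[pos] = xB[pos] / t[pos]           # ratio test
        basis[int(np.argmin(ratios))] = j        # leaving position swapped

# Example 6 from the next section, starting basis {x1, x2}:
A = np.array([[1.0, 0.0, 1.0, 6.0], [0.0, 1.0, 2.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([2.0, -3.0, -2.0, 1.0])
x, z = revised_simplex(A, b, c, [0, 1])
```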

Example 6 (From basic indices to basic solution) Consider the following LP:

    min  2x1 − 3x2 − 2x3 + x4
    s.t. x1 + x3 + 6x4 = 2
         x2 + 2x3 + x4 = 3
         x1, x2, x3, x4 ≥ 0

Let us assume that somehow we have found that x_B^T = (x1, x2). Then

    B = [1 0; 0 1],   N = [1 6; 2 1],   c_B = (2, −3)^T,   c_N = (−2, 1)^T,

    x_B = B^{-1} b = (2, 3)^T,   x_N = 0.

Now

    r_N = c_N^T − c_B^T B^{-1} N = (2, −8).

We can organize this information in a tableau, as you have learned in your LP courses:

         x1    x2    x3    x4  | B^{-1}b
    x1    1     0     1     6  |   2
    x2    0     1     2     1  |   3
    r     0     0     2    −8  |

Since r4 < 0, we decide that by bringing x4 into the basis we can improve the objective. To decide which basic variable should leave, we check the largest value α can take, and which basic variable is responsible for it:

    α = min{2/6, 3/1} = 2/6 = 1/3.

Thus x1 has to leave the basis and x4 enters, leading to the new tableau:

         x1    x2    x3    x4  | B^{-1}b
    x4   1/6    0    1/6    1  |  1/3
    x2  −1/6    1   11/6    0  |  8/3
    r    4/3    0   10/3    0  |

Can we improve the solution to this example any further? The answer is no. To show this we need duality theory.

5 Duality

Consider the linear program in standard form again:

    min  c^T x
    s.t. Ax = b                                                    (4)
         x ≥ 0

We will refer to it as the primal problem. Define the dual of this LP as:

    max  b^T y
    s.t. A^T y ≤ c                                                 (5)

The constraint A^T y ≤ c may be written as A^T y + s = c, where the entries of s ≥ 0 are sometimes called slack variables. Duality theory indicates that the optimal values of the two problems above are equal.

Lemma 5 (Weak duality) If x is feasible for the primal and (y, s) is feasible for the dual, then b^T y ≤ c^T x.

Proof:

    c^T x − b^T y = c^T x − (Ax)^T y = x^T c − x^T A^T y = x^T (c − A^T y) = x^T s ≥ 0.

The last inequality holds because both x and s consist of nonnegative entries, and thus their inner product has to be nonnegative.

This simple lemma reveals a few key points. First, for any primal-feasible x and dual-feasible (y, s), the quantity x^T s is called the duality gap. Second, if for some primal-feasible x and dual-feasible (y, s) we have x^T s = 0, then, necessarily, x is optimal for the primal and (y, s) is optimal for the dual. Some algorithms maintain feasible primal and dual solutions and strive to improve them by reducing the duality gap.

Now the question is: does every linear program have duality gap zero at the optimum? The answer is yes, provided both the primal and the dual are feasible.

Theorem 5 (Strong duality theorem) If there exists an optimal primal solution x and an optimal dual solution y, then b^T y = c^T x.
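Weak duality and the gap identity c^T x − b^T y = x^T s are easy to confirm numerically; the data below are made up for illustration:

```python
import numpy as np

# Weak duality check on a small standard-form LP (illustrative data).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
c = np.array([1.0, 2.0, 3.0])

x = np.array([1.0, 1.0, 1.0])     # primal feasible: Ax = b, x >= 0
y = np.array([1.0])               # dual feasible: A^T y <= c
s = c - A.T @ y                   # dual slacks, s >= 0

assert np.allclose(A @ x, b) and (x >= 0).all() and (s >= 0).all()
assert b @ y <= c @ x             # weak duality
assert np.isclose(c @ x - b @ y, x @ s)   # the gap is exactly x^T s
```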

Proof: Suppose we apply the simplex method, and after some iterations we reach the point where the last row of the tableau is all nonnegative, that is, c_N^T − c_B^T B^{-1} N ≥ 0. We will prove that we are at the optimum. Recall that x = (x_B, x_N) with x_N = 0. Set y^T = c_B^T B^{-1}.

Claim: y is a dual feasible solution, and it is optimal. We first show that it is feasible, that is, c − A^T y ≥ 0:

    c − A^T y = [c_B; c_N] − [B^T; N^T] y
              = [c_B − B^T y; c_N − N^T y]
              = [c_B − B^T (B^{-T} c_B); c_N − N^T (B^{-T} c_B)]
              = [0; r_N] ≥ 0.

Thus y is dual feasible. To show that it is optimal, we prove y^T b = c^T x:

    y^T b = c_B^T B^{-1} b = c_B^T x_B = c^T x,

where the last equality is justified by noting that x_N = 0.

Note 2 In the simplex method, at each iteration we have access to the vector y^T := c_B^T B^{-1}. This vector can be plugged into the dual, but in general it will not satisfy the constraints: in the proof we showed that c^T − y^T A = (0, r_N^T), which is not nonnegative in general. However, at the optimum, y will be dual feasible, with objective value y^T b = c_B^T B^{-1} b = z, the same value as the primal objective. Therefore y^T = c_B^T B^{-1} is a dual optimal solution.

Theorem 5 (Complementary slackness theorem) If x is a primal optimal solution and (y, s) is a dual optimal solution, then x_i s_i = 0 for i = 1, ..., n.

Proof: At the optimal solutions, c^T x = b^T y, so c^T x − b^T y = x^T s = 0, that is, Σ_i x_i s_i = 0. But because x_i ≥ 0 and s_i ≥ 0 for all i, Σ_i x_i s_i = 0 implies x_i s_i = 0 for each and every i.

Another way of expressing the complementary slackness theorem is the following: if the i-th inequality in the dual is strict (a_i^T y < c_i), then the corresponding x_i = 0. If x_i > 0, that is, x_i is basic, then the corresponding inequality in the dual is satisfied with equality: a_i^T y = c_i.

5.1 Table of Duality

All linear programs have a corresponding dual, not just the ones in standard form. In general, every constraint of an LP corresponds to a variable of its dual, and every variable corresponds to a constraint of the dual. In addition, the right-hand-side vector becomes the objective vector of the dual, and the objective vector goes to the right-hand side. To better understand how the constraints of the primal affect the variables of the dual and how the variables of the primal affect the constraints of the dual, one can refer to the following table:

    MIN                                MAX
    -----------------------------------------------------------
    variable x_i ≥ 0             ⟷   i-th constraint of type ≤
    variable x_i ≤ 0             ⟷   i-th constraint of type ≥
    variable x_i unconstrained   ⟷   i-th constraint of type =
    i-th constraint of type ≥    ⟷   variable y_i ≥ 0
    i-th constraint of type ≤    ⟷   variable y_i ≤ 0
    i-th constraint of type =    ⟷   variable y_i unconstrained

Example 7 (Arbitrary dual) Consider:

    min  2x1 − 3x2 + x3 − x4
    s.t. x1 + x3 = 5              (y1)
         x1 − 2x2 + x3 ≥ 4        (y2)
         x2 + x3 − x4 ≤ 6         (y3)
         x1 ≥ 0, x2 ≤ 0

The dual thus becomes:

    max  5y1 + 4y2 + 6y3
    s.t. y1 + y2 ≤ 2              (x1)
         −2y2 + y3 ≥ −3           (x2)
         y1 + y2 + y3 = 1         (x3)
         −y3 = −1                 (x4)
         y2 ≥ 0, y3 ≤ 0

5.2 Dual Simplex Method

In the simplex method discussed in section 4, at each iteration we had an x which was basic and feasible (that is, x ≥ 0) but not optimal (that is, r_N had negative entries). The simplex method maintains feasibility while pushing toward optimality. Now let us view the simplex method from the point of view of the dual problem. Remember that each constraint of A^T y ≤ c corresponds to a coordinate of x. At the optimum, by the complementary slackness theorem, if x_i > 0 then the i-th constraint is satisfied with equality: a_i^T y = c_i. Now, at each iteration of simplex, we have access to the vector y^T = c_B^T B^{-1}. This vector in general is not feasible for the dual because, as we showed in the proof of the duality theorem,

    c^T − y^T A = (c_B^T − c_B^T B^{-1} B, c_N^T − c_B^T B^{-1} N) = (0, r_N^T)

and since, if we are not at the optimum, r_N has some negative components, A^T y ≤ c is not satisfied. However,

    y^T b = c_B^T B^{-1} b = c_B^T x_B = z.

That is, the objective value at this dual-infeasible point is the same as the value of the primal objective. Thus, from the point of view of the dual problem, the simplex method moves from one infeasible point y to another, at each iteration maintaining the optimality condition (that is, equal primal and dual values) and attempting to become feasible (if r_N ≥ 0, then y is feasible).

This suggests an idea: let us turn the tables in the primal problem. Suppose that we have a solution x, still basic (that is, x = (x_B, x_N) with x_N = 0), but not necessarily feasible (that is, possibly some (x_B)_i < 0). Assume also that the optimality condition holds, that is, r_N^T = c_N^T − c_B^T B^{-1} N ≥ 0. We are going to describe an algorithm that moves from one basic infeasible point to a neighboring one, at each iteration maintaining the optimality condition c_N^T − y^T N ≥ 0 and moving toward a point that is feasible; if we find such a point, then it is primal-optimal. This strategy leads to the dual simplex method.

In this method, we first choose the coordinate that has to leave the basis. This is easy enough: choose a coordinate i where (x_B)_i < 0. Now we must decide which coordinate has to enter the basis. By removing i from the basis we are removing the i-th row of the matrix B. Since y^T = c_B^T B^{-1}, this implies that

    y_new^T = y^T − α (B^{-1})_{i·},

where in general A_{i·} means the i-th row of the matrix A. However, we must maintain the optimality condition c_N^T − y_new^T N ≥ 0. That is:

    c_N^T − (y^T − α (B^{-1})_{i·}) N ≥ 0
    c_N^T − y^T N + α (B^{-1})_{i·} N ≥ 0
    r_N^T + α T_{i·} ≥ 0.

Now let us look at the j-th column of this inequality:

    r_j + α T_ij ≥ 0.

If T_ij ≥ 0, then this does not impose any condition on α: as α increases, r_j + α T_ij remains nonnegative. But if T_ij < 0, we can increase α to at most −r_j / T_ij = r_j / |T_ij|; beyond that, r_j + α T_ij becomes negative and the optimality condition is violated. Therefore,

    α = min over j of { −r_j / T_ij : T_ij < 0 }.

The index j at which this minimum is achieved is the one that enters the basis.
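One dual simplex pivot selection, following the two rules above, looks like this in numpy; the data are the starting basis {x4, x5} of Example 8 below:

```python
import numpy as np

# One dual simplex pivot: pick a basic position i with (x_B)_i < 0 to
# leave, then alpha = min{-r_j / T_ij : T_ij < 0} picks the entering j.
A = np.array([[-1.0, -2.0, -3.0, 1.0, 0.0],
              [-2.0, -2.0, -1.0, 0.0, 1.0]])
b = np.array([-5.0, -6.0])
c = np.array([3.0, 4.0, 5.0, 0.0, 0.0])

basis, nonbasic = [3, 4], [0, 1, 2]
B, N = A[:, basis], A[:, nonbasic]
xB = np.linalg.solve(B, b)                            # (-5, -6): infeasible
rN = c[nonbasic] - c[basis] @ np.linalg.solve(B, N)   # (3, 4, 5): optimality holds

i = int(np.argmin(xB))               # most negative basic value leaves (x5)
Ti = np.linalg.solve(B, N)[i]        # i-th row of T = B^{-1} N
mask = Ti < -1e-9
ratios = np.full(len(rN), np.inf)
ratios[mask] = -rN[mask] / Ti[mask]  # dual ratio test
alpha = ratios.min()
j = int(np.argmin(ratios))           # entering index: x1

assert np.allclose(xB, [-5.0, -6.0])
assert np.isclose(alpha, 1.5) and j == 0
```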

Example 8 (Dual simplex method) Consider the LP:

    min  3x1 + 4x2 + 5x3
    s.t. x1 + 2x2 + 3x3 ≥ 5
         2x1 + 2x2 + x3 ≥ 6
         x1, x2, x3 ≥ 0

which in standard form is

    min  3x1 + 4x2 + 5x3
    s.t. −x1 − 2x2 − 3x3 + x4 = −5
         −2x1 − 2x2 − x3 + x5 = −6
         x1, x2, x3, x4, x5 ≥ 0

Now if we choose {4, 5} to be the indices of the basic variables, then

    B = [1 0; 0 1] = B^{-1},   N = [−1 −2 −3; −2 −2 −1],
    c_B^T = (0, 0),   c_N^T = (3, 4, 5),
    x_B^T = (−5, −6),
    y^T = c_B^T B^{-1} = (0, 0),
    r_N^T = c_N^T − y^T N = c_N^T = (3, 4, 5).

Clearly this solution is basic and infeasible, yet the optimality condition r_N ≥ 0 holds. (Notice that y is dual feasible; check it.) This information can be organized in a tableau:

         x1    x2    x3    x4    x5  | B^{-1}b
    x4   −1    −2    −3     1     0  |  −5
    x5   −2    −2    −1     0     1  |  −6
    r     3     4     5     0     0  |

If we choose x5 to leave the basis, then the smallest r_j / |T_ij| is

    min{3/|−2|, 4/|−2|, 5/|−1|} = 3/2,

attained at index 1. Thus x1 should enter the basis. Pivoting on the (x5, x1) entry of the tableau, we get:

         x1    x2    x3    x4    x5  | B^{-1}b
    x4    0    −1   −5/2    1   −1/2 |  −2
    x1    1     1    1/2    0   −1/2 |   3
    r     0     1    7/2    0    3/2 |

Now we choose x4 to leave the basis, and

    min{1/|−1|, (7/2)/|−5/2|, (3/2)/|−1/2|} = 1,

thus we choose x2 to enter the basis. Pivoting on the (x4, x2) entry, we get:

         x1    x2    x3    x4    x5  | B^{-1}b
    x2    0     1    5/2   −1    1/2 |   2
    x1    1     0    −2     1    −1  |   1
    r     0     0     1     1     1  |

Now we have both feasibility and optimality satisfied, and we have the solution: x = (1, 2, 0, 0, 0) with objective value 11.
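We can double-check this endpoint against duality theory: the primal solution x = (1, 2, 0) of the original inequality form, together with the dual solution y = (1, 1), satisfies strong duality and complementary slackness:

```python
import numpy as np

# Verify the result of Example 8 on the original inequality form:
# min c^T x s.t. Ax >= b, x >= 0, with optimal value 11.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 1.0]])
b = np.array([5.0, 6.0])
c = np.array([3.0, 4.0, 5.0])

x = np.array([1.0, 2.0, 0.0])      # primal solution from the final tableau
y = np.array([1.0, 1.0])           # corresponding dual solution
s = c - A.T @ y                    # dual slacks

assert (A @ x >= b - 1e-9).all() and (x >= 0).all()   # primal feasible
assert (s >= -1e-9).all() and (y >= 0).all()          # dual feasible
assert np.isclose(c @ x, b @ y)                       # both objectives equal 11
assert np.allclose(x * s, 0.0)                        # complementary slackness
```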


More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

March 13, Duality 3

March 13, Duality 3 15.53 March 13, 27 Duality 3 There are concepts much more difficult to grasp than duality in linear programming. -- Jim Orlin The concept [of nonduality], often described in English as "nondualism," is

More information

CHAPTER 2. The Simplex Method

CHAPTER 2. The Simplex Method CHAPTER 2 The Simplex Method In this chapter we present the simplex method as it applies to linear programming problems in standard form. 1. An Example We first illustrate how the simplex method works

More information

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

Linear Programming. Operations Research. Anthony Papavasiliou 1 / 21

Linear Programming. Operations Research. Anthony Papavasiliou 1 / 21 1 / 21 Linear Programming Operations Research Anthony Papavasiliou Contents 2 / 21 1 Primal Linear Program 2 Dual Linear Program Table of Contents 3 / 21 1 Primal Linear Program 2 Dual Linear Program Linear

More information

Linear Programming: Chapter 5 Duality

Linear Programming: Chapter 5 Duality Linear Programming: Chapter 5 Duality Robert J. Vanderbei September 30, 2010 Slides last edited on October 5, 2010 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544

More information

Optimization Methods in Management Science

Optimization Methods in Management Science Optimization Methods in Management Science MIT 15.05 Recitation 8 TAs: Giacomo Nannicini, Ebrahim Nasrabadi At the end of this recitation, students should be able to: 1. Derive Gomory cut from fractional

More information

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

Foundations of Operations Research

Foundations of Operations Research Solved exercises for the course of Foundations of Operations Research Roberto Cordone The dual simplex method Given the following LP problem: maxz = 5x 1 +8x 2 x 1 +x 2 6 5x 1 +9x 2 45 x 1,x 2 0 1. solve

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

3.7 Cutting plane methods

3.7 Cutting plane methods 3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x

More information

Introduction to Mathematical Programming IE406. Lecture 13. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 13. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 13 Dr. Ted Ralphs IE406 Lecture 13 1 Reading for This Lecture Bertsimas Chapter 5 IE406 Lecture 13 2 Sensitivity Analysis In many real-world problems,

More information

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2 Linear programming Saad Mneimneh 1 Introduction Consider the following problem: x 1 + x x 1 x 8 x 1 + x 10 5x 1 x x 1, x 0 The feasible solution is a point (x 1, x ) that lies within the region defined

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Linear Programming. Chapter Introduction

Linear Programming. Chapter Introduction Chapter 3 Linear Programming Linear programs (LP) play an important role in the theory and practice of optimization problems. Many COPs can directly be formulated as LPs. Furthermore, LPs are invaluable

More information

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables

An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables Tomonari Kitahara and Shinji Mizuno February 2012 Abstract Kitahara

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

IE 400: Principles of Engineering Management. Simplex Method Continued

IE 400: Principles of Engineering Management. Simplex Method Continued IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for

More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

ORF 522. Linear Programming and Convex Analysis

ORF 522. Linear Programming and Convex Analysis ORF 522 Linear Programming and Convex Analysis The Simplex Method Marco Cuturi Princeton ORF-522 1 Reminder: Basic Feasible Solutions, Extreme points, Optima Some important theorems last time for standard

More information

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities Linear Programming Murti V Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities murtis@umnedu September 4, 2012 Linear Programming 1 The standard Linear Programming (SLP) problem:

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20.

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal

More information

Chapter 3, Operations Research (OR)

Chapter 3, Operations Research (OR) Chapter 3, Operations Research (OR) Kent Andersen February 7, 2007 1 Linear Programs (continued) In the last chapter, we introduced the general form of a linear program, which we denote (P) Minimize Z

More information

IE 400 Principles of Engineering Management. The Simplex Algorithm-I: Set 3

IE 400 Principles of Engineering Management. The Simplex Algorithm-I: Set 3 IE 4 Principles of Engineering Management The Simple Algorithm-I: Set 3 So far, we have studied how to solve two-variable LP problems graphically. However, most real life problems have more than two variables!

More information

The augmented form of this LP is the following linear system of equations:

The augmented form of this LP is the following linear system of equations: 1 Consider the following LP given in standard form: max z = 5 x_1 + 2 x_2 Subject to 3 x_1 + 2 x_2 2400 x_2 800 2 x_1 1200 x_1, x_2 >= 0 The augmented form of this LP is the following linear system of

More information

A Bound for the Number of Different Basic Solutions Generated by the Simplex Method

A Bound for the Number of Different Basic Solutions Generated by the Simplex Method ICOTA8, SHANGHAI, CHINA A Bound for the Number of Different Basic Solutions Generated by the Simplex Method Tomonari Kitahara and Shinji Mizuno Tokyo Institute of Technology December 12th, 2010 Contents

More information

Operations Research Lecture 2: Linear Programming Simplex Method

Operations Research Lecture 2: Linear Programming Simplex Method Operations Research Lecture 2: Linear Programming Simplex Method Notes taken by Kaiquan Xu@Business School, Nanjing University Mar 10th 2016 1 Geometry of LP 1.1 Graphical Representation and Solution Example

More information

Week 3 Linear programming duality

Week 3 Linear programming duality Week 3 Linear programming duality This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has associated a maximization program that has the

More information

Linear programs Optimization Geoff Gordon Ryan Tibshirani

Linear programs Optimization Geoff Gordon Ryan Tibshirani Linear programs 10-725 Optimization Geoff Gordon Ryan Tibshirani Review: LPs LPs: m constraints, n vars A: R m n b: R m c: R n x: R n ineq form [min or max] c T x s.t. Ax b m n std form [min or max] c

More information

IP Cut Homework from J and B Chapter 9: 14, 15, 16, 23, 24, You wish to solve the IP below with a cutting plane technique.

IP Cut Homework from J and B Chapter 9: 14, 15, 16, 23, 24, You wish to solve the IP below with a cutting plane technique. IP Cut Homework from J and B Chapter 9: 14, 15, 16, 23, 24, 31 14. You wish to solve the IP below with a cutting plane technique. Maximize 4x 1 + 2x 2 + x 3 subject to 14x 1 + 10x 2 + 11x 3 32 10x 1 +

More information