Linear Programming
Jie Wang
University of Massachusetts Lowell, Department of Computer Science
J. Wang (UMass Lowell) Linear Programming 1 / 47
A linear function:
$$f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} a_i x_i,$$
where $a_1, a_2, \ldots, a_n$ are real numbers and $x_1, x_2, \ldots, x_n$ are variables.

A linear equality: $f(x_1, x_2, \ldots, x_n) = b$.

Linear inequalities: $f(x_1, \ldots, x_n) > b$, $f(x_1, \ldots, x_n) \ge b$, $f(x_1, \ldots, x_n) < b$, $f(x_1, \ldots, x_n) \le b$.

Linear inequalities and linear equalities are often referred to as linear constraints. An LP problem maximizes (or minimizes) a linear objective function subject to linear constraints.
LP Standard Form

Standard form: maximization + linear inequalities.

maximize $x_1 + x_2$
subject to
$4x_1 - x_2 \le 8$
$2x_1 + x_2 \le 10$
$5x_1 - 2x_2 \le 2$
$x_1, x_2 \ge 0$.

A minimization problem can be converted to an equivalent maximization problem, and vice versa. For example, maximizing $f(x_1, x_2, \ldots, x_n)$ under a set of constraints is equivalent to minimizing $-f(x_1, x_2, \ldots, x_n)$ with the same set of constraints.
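As a quick sanity check, the standard-form example above can be handed to an off-the-shelf solver. The sketch below uses SciPy, which is not a tool the slides use; since `linprog` minimizes, the objective is negated, exactly the max-to-min conversion just described.

```python
# Solve: maximize x1 + x2 subject to the three inequalities above.
# linprog minimizes, so maximizing x1 + x2 becomes minimizing -x1 - x2.
from scipy.optimize import linprog

c = [-1, -1]
A_ub = [[4, -1],   # 4x1 -  x2 <= 8
        [2,  1],   # 2x1 +  x2 <= 10
        [5, -2]]   # 5x1 - 2x2 <= 2
b_ub = [8, 10, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # optimal point and (un-negated) optimal value
```

Here the optimal value is $x_1 + x_2 = 10$, attained at $(x_1, x_2) = (0, 10)$.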
LP Slack Form

Maximization + linear equalities with slack variables.

maximize $x_1 + x_2$
subject to
$x_3 = 8 - 4x_1 + x_2$
$x_4 = 10 - 2x_1 - x_2$
$x_5 = 2 - 5x_1 + 2x_2$
$x_i \ge 0$ for $i = 1, 2, 3, 4, 5$.

Variables on the left-hand side of the equalities are basic variables; those on the right-hand side are nonbasic variables. Initially, the basic variables are the slack variables and the nonbasic variables are the original variables; these roles change during the simplex procedure.

The slack form turns an LP problem in a lower dimension with a set of inequality constraints into an equivalent LP problem in a higher dimension with a set of equality constraints. Equalities are much easier to handle than inequalities.
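Mechanically, adding slacks means appending an identity block to the constraint matrix. A minimal NumPy sketch (not from the slides; the matrix is the example above):

```python
# Convert A x <= b, x >= 0 into A x + s = b, (x, s) >= 0.
import numpy as np

A = np.array([[4, -1], [2, 1], [5, -2]], dtype=float)
b = np.array([8, 10, 2], dtype=float)
m, n = A.shape

A_slack = np.hstack([A, np.eye(m)])   # slack columns correspond to x3, x4, x5
x = np.array([0.0, 1.0])              # any feasible point of the inequalities
s = b - A @ x                         # the slack values
assert np.allclose(A_slack @ np.concatenate([x, s]), b)
print(s)                              # [9. 9. 4.]
```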
LP Applications

An LP formulation of the single-pair shortest path problem for $(s, t)$:

maximize $d_t$
subject to
$d_v \le d_u + w(u, v)$ for each edge $(u, v) \in E$,
$d_s = 0$.

Note: it is a maximization, not a minimization.

An LP formulation of maximum flow:

maximize $\sum_{v \in V} f_{sv} - \sum_{v \in V} f_{vs}$
subject to
$f_{uv} \le c(u, v)$ for each $u, v \in V$,
$\sum_{v \in V} f_{vu} = \sum_{v \in V} f_{uv}$ for each $u \in V - \{s, t\}$,
$f_{uv} \ge 0$ for each $u, v \in V$.
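To make the shortest-path formulation concrete, here is a SciPy sketch on a tiny made-up digraph (the node numbering and weights are assumptions of this example, not from the slides):

```python
# Shortest-path LP: maximize d_t subject to d_v - d_u <= w(u, v), d_s = 0.
from scipy.optimize import linprog

# A tiny digraph: nodes 0 = s, 1, 2 = t, with edge weights.
edges = {(0, 1): 2, (1, 2): 3, (0, 2): 7}
n = 3                                   # variables d_0, d_1, d_2
c = [0, 0, -1]                          # maximize d_t -> minimize -d_t
A_ub, b_ub = [], []
for (u, v), w in edges.items():         # d_v - d_u <= w(u, v)
    row = [0] * n
    row[v] += 1
    row[u] -= 1
    A_ub.append(row)
    b_ub.append(w)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 0, 0]], b_eq=[0],        # d_s = 0
              bounds=[(None, None)] * n)         # the d_v are otherwise free
print(res.x[2])                         # shortest s-t distance (here 5, via 0-1-2)
```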
Minimum-Cost Flow

Each edge $(u, v)$ in a flow network has a capacity $c(u, v)$ and a cost $a(u, v)$. We want to send $d$ units of flow from $s$ to $t$ while minimizing the total cost:

minimize $\sum_{(u,v) \in E} a(u, v) f_{uv}$
subject to
$f_{uv} \le c(u, v)$ for each $u, v \in V$,
$\sum_{v \in V} f_{vu} = \sum_{v \in V} f_{uv}$ for each $u \in V - \{s, t\}$,
$\sum_{v \in V} f_{sv} - \sum_{v \in V} f_{vs} = d$,
$f_{uv} \ge 0$ for each $u, v \in V$.
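A minimal made-up instance (a SciPy sketch, not part of the slides): send $d = 6$ units across a 3-node network where the two-edge path costs 1 per edge and the direct edge costs 3, all capacities 4.

```python
from scipy.optimize import linprog

# Variables: f_01, f_12, f_02; capacities 4 each; costs 1, 1, 3.
cost = [1, 1, 3]
A_eq = [[1, -1, 0],    # conservation at node 1: f_01 = f_12
        [1,  0, 1]]    # flow out of the source s = 0 equals d = 6
b_eq = [0, 6]
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 4)] * 3)
print(res.x, res.fun)  # fills the cheap path first: (4, 4, 2), total cost 14
```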
Multicommodity Flow

Given a digraph $G = (V, E)$ with $k$ different commodities $K_1, K_2, \ldots, K_k$, where
each edge $(u, v) \in E$ has a capacity $c(u, v) \ge 0$;
$G$ has no antiparallel edges;
$K_i = (s_i, t_i, d_i)$ gives the source, sink, and demand of commodity $i$.

Let $f_i$ denote the flow for commodity $i$, where $f_{iuv}$ is the flow of commodity $i$ from $u$ to $v$, and let $f_{uv} = \sum_{i=1}^{k} f_{iuv}$ denote the aggregate flow. We want to determine whether such a flow exists; that is, there is no objective function to optimize.
Multicommodity Flow Continued

optimize $0$
subject to
$\sum_{i=1}^{k} f_{iuv} \le c(u, v)$ for each $u, v \in V$,
$\sum_{v \in V} f_{iuv} - \sum_{v \in V} f_{ivu} = 0$ for each $u \in V - \{s_i, t_i\}$ and each $i$,
$\sum_{v \in V} f_{i,s_i,v} - \sum_{v \in V} f_{i,v,s_i} = d_i$ for each $i$,
$f_{iuv} \ge 0$ for each $u, v \in V$ and $i = 1, 2, \ldots, k$.
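Since there is nothing to optimize, a solver can be used purely as a feasibility oracle: pass a zero objective and check the status. A sketch on a made-up two-commodity instance (three nodes; commodity 1 sends 4 units from node 0 to node 2, commodity 2 sends 3 units from node 0 to node 1):

```python
from scipy.optimize import linprog

# Variables: f1_01, f1_12, f1_02, f2_01, f2_12, f2_02 on edges
# (0,1), (1,2), (0,2) with joint capacities 5, 5, 3.
c = [0] * 6                          # "optimize 0": feasibility only
A_ub = [[1, 0, 0, 1, 0, 0],          # aggregate flow on (0,1) <= 5
        [0, 1, 0, 0, 1, 0],          # aggregate flow on (1,2) <= 5
        [0, 0, 1, 0, 0, 1]]          # aggregate flow on (0,2) <= 3
b_ub = [5, 5, 3]
A_eq = [[1, -1, 0, 0, 0, 0],         # commodity 1: conservation at node 1
        [1, 0, 1, 0, 0, 0],          # commodity 1: demand d_1 = 4
        [0, 0, 0, 0, 1, 1],          # commodity 2: conservation at node 2
        [0, 0, 0, 1, 0, 1]]          # commodity 2: demand d_2 = 3
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0, 4, 0, 3])
print(res.status)                    # 0 means a feasible flow exists
```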
Feasible Solutions and Feasible Regions

Feasible solutions are values of the variables that satisfy the constraints. The feasible region (a.k.a. the simplex) is the set of feasible solutions, which is a convex polytope: a geometric shape in $d$-dimensional space without any dents.
LP Algorithms

The optimal solution occurs at the boundary of the simplex. It could be at exactly one vertex, in which case there is only one optimal solution; or it could lie along a line segment (more generally, a face), in which case there are multiple optimal solutions.

The simplex algorithm, while having exponential runtime in the worst case, is very efficient in practice. Other algorithms include (1) the ellipsoid method, the first known polynomial-time algorithm; and (2) the interior-point method, which walks through the interior of the simplex instead of along its vertices.
The Simplex Algorithm

The simplex algorithm finds a basic solution from the slack form iteratively as follows:

1. Set each nonbasic variable to zero.
2. Compute the values of the basic variables from the equality constraints. If increasing a nonbasic variable in a constraint would cause the basic variable of that constraint to become 0, the constraint is tight.
3. The goal of an iteration is to reformulate the linear program so that the basic solution (when the nonbasic variables are 0) has a greater objective value. This is done by performing a pivot: select a nonbasic variable $x_e$ (a.k.a. the entering variable) and increase it as much as possible while all constraints remain satisfied. The variable $x_e$ becomes basic and another variable $x_l$ (a.k.a. the leaving variable) becomes nonbasic.
4. The simplex algorithm terminates when all of the coefficients appearing in the objective function have become negative.
An Example

maximize $3x_1 + x_2 + 2x_3$
subject to
$x_1 + x_2 + 3x_3 \le 30$
$2x_1 + 2x_2 + 5x_3 \le 24$
$4x_1 + x_2 + 2x_3 \le 36$
$x_1, x_2, x_3 \ge 0$.

Slack form:

maximize $z = 3x_1 + x_2 + 2x_3$
subject to
$x_4 = 30 - x_1 - x_2 - 3x_3$
$x_5 = 24 - 2x_1 - 2x_2 - 5x_3$
$x_6 = 36 - 4x_1 - x_2 - 2x_3$
$x_1, x_2, x_3, x_4, x_5, x_6 \ge 0$.
Simplex Algorithm Example Continued

Set $x_1 = x_2 = x_3 = 0$ and get $x_4 = 30$, $x_5 = 24$, and $x_6 = 36$.
Select entering variable $x_1$ and leaving variable $x_6$. Solve the corresponding equation for $x_1$ to get
$x_1 = 9 - x_2/4 - x_3/2 - x_6/4$.
Maximize it to get $x_1 = 9$. Substitute the right-hand side of $x_1$ into each remaining equation:

$z = 27 + x_2/4 + x_3/2 - 3x_6/4$
$x_1 = 9 - x_2/4 - x_3/2 - x_6/4$
$x_4 = 21 - 3x_2/4 - 5x_3/2 + x_6/4$
$x_5 = 6 - 3x_2/2 - 4x_3 + x_6/2$.
Simplex Algorithm Example Continued

Set $x_2$, $x_3$, and $x_6$ to 0 to get $z = 27$, $x_1 = 9$, $x_4 = 21$, and $x_5 = 6$.
Select entering variable $x_3$ and leaving variable $x_5$. (Note that $x_6$ cannot be chosen, for increasing it would decrease the value of $z$.) Solve for $x_3$ to get
$x_3 = 3/2 - 3x_2/8 - x_5/4 + x_6/8$.
The maximum value for $x_3$ is $3/2$. Substitute the right-hand side of $x_3$ into the rest of the equations to obtain

$z = 111/4 + x_2/16 - x_5/8 - 11x_6/16$
$x_1 = 33/4 - x_2/16 + x_5/8 - 5x_6/16$
$x_3 = 3/2 - 3x_2/8 - x_5/4 + x_6/8$
$x_4 = 69/4 + 3x_2/16 + 5x_5/8 - x_6/16$.
Simplex Algorithm Example Continued

Set $x_2$, $x_5$, and $x_6$ to 0 to get $z = 111/4$, $x_1 = 33/4$, $x_3 = 3/2$, and $x_4 = 69/4$.
Select entering variable $x_2$; maximize it to get $x_2 = 4$. The leaving variable is $x_3$:
$x_2 = 4 - 8x_3/3 - 2x_5/3 + x_6/3$.
Substitute the right-hand side of $x_2$ into the rest of the equations to obtain

$z = 28 - x_3/6 - x_5/6 - 2x_6/3$
$x_1 = 8 + x_3/6 + x_5/6 - x_6/3$
$x_2 = 4 - 8x_3/3 - 2x_5/3 + x_6/3$
$x_4 = 18 - x_3/2 + x_5/2$.
Simplex Algorithm Example Continued

Now all the coefficients in the objective function are negative, meaning that there is no more room for improvement. The algorithm terminates with $z = 28$ by setting $x_3$, $x_5$, and $x_6$ to 0. The basic solution is
$x_1 = 8$, $x_2 = 4$, $x_3 = 0$, $x_4 = 18$, $x_5 = 0$, $x_6 = 0$.
Note: when ties are present, choose the variable with the smallest index (Bland's rule).
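The hand computation above can be cross-checked with an off-the-shelf solver (a SciPy sketch, not part of the slides):

```python
from scipy.optimize import linprog

# maximize 3x1 + x2 + 2x3 -> minimize the negated objective.
res = linprog([-3, -1, -2],
              A_ub=[[1, 1, 3], [2, 2, 5], [4, 1, 2]],
              b_ub=[30, 24, 36], bounds=[(0, None)] * 3)
print(-res.fun, res.x)   # z = 28 at (x1, x2, x3) = (8, 4, 0)
```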
Pivoting

Pivoting takes as input a slack form. Let $N$ denote the set of indices of the nonbasic variables and $B$ the set of indices of the basic variables in the slack form, and let the slack form be given by $(N, B, A, b, c, v)$, where $v$ is the objective value (the constant term of the objective function). Let $l$ denote the index of the leaving variable $x_l$ and $e$ the index of the entering variable $x_e$. Pivoting returns $(\hat{N}, \hat{B}, \hat{A}, \hat{b}, \hat{c}, \hat{v})$ describing the new slack form.
Pivoting Continued

[This slide shows the Pivot pseudocode as a figure, which is not captured in the transcription.]
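Since the slide's pseudocode is missing from the transcription, the following Python sketch reconstructs one pivot from the $(N, B, A, b, c, v)$ description above; the dictionary-based representation and the worked call are assumptions of this sketch, not the slide's code.

```python
def pivot(N, B, A, b, c, v, l, e):
    """One pivot: x_e (e in N) enters the basis, x_l (l in B) leaves.

    A maps (i, j) -> a_ij for i in B, j in N, for constraints of the
    form x_i = b_i - sum_j a_ij x_j; b maps i -> b_i; c maps j -> c_j;
    v is the objective constant.
    """
    Nh, Bh = (N - {e}) | {l}, (B - {l}) | {e}
    Ah, bh, ch = {}, {}, {}
    # Rewrite the equation for x_l as an equation for x_e.
    bh[e] = b[l] / A[l, e]
    for j in N - {e}:
        Ah[e, j] = A[l, j] / A[l, e]
    Ah[e, l] = 1.0 / A[l, e]
    # Substitute x_e into the remaining constraints.
    for i in B - {l}:
        bh[i] = b[i] - A[i, e] * bh[e]
        for j in N - {e}:
            Ah[i, j] = A[i, j] - A[i, e] * Ah[e, j]
        Ah[i, l] = -A[i, e] * Ah[e, l]
    # Substitute x_e into the objective function.
    vh = v + c[e] * bh[e]
    for j in N - {e}:
        ch[j] = c[j] - c[e] * Ah[e, j]
    ch[l] = -c[e] * Ah[e, l]
    return Nh, Bh, Ah, bh, ch, vh

# First pivot of the running example: entering x1 (e=1), leaving x6 (l=6).
N, B = {1, 2, 3}, {4, 5, 6}
A = {(4, 1): 1.0, (4, 2): 1.0, (4, 3): 3.0,
     (5, 1): 2.0, (5, 2): 2.0, (5, 3): 5.0,
     (6, 1): 4.0, (6, 2): 1.0, (6, 3): 2.0}
b = {4: 30.0, 5: 24.0, 6: 36.0}
c = {1: 3.0, 2: 1.0, 3: 2.0}
Nh, Bh, Ah, bh, ch, vh = pivot(N, B, A, b, c, 0.0, l=6, e=1)
print(vh, bh[1])   # objective constant 27, x1 = 9 in the new basic solution
```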
How Will Variables Change in Pivoting?

Lemma 29.1. In a call to Pivot$(N, B, A, b, c, v, l, e)$ with $a_{le} \ne 0$, let $\bar{x}$ denote the basic solution after this call. Then

1. $\bar{x}_j = 0$ for each $j \in \hat{N}$.
2. $\bar{x}_e = b_l / a_{le}$.
3. $\bar{x}_i = b_i - a_{ie} \hat{b}_e$ for each $i \in \hat{B} - \{e\}$.

Proof. The first statement is true because the basic solution always sets the nonbasic variables to 0. With the nonbasic variables set to 0, we have $\bar{x}_i = \hat{b}_i$ for each $i \in \hat{B}$, since $\bar{x}_i = \hat{b}_i - \sum_{j \in \hat{N}} \hat{a}_{ij} \bar{x}_j$. Since $e \in \hat{B}$, line 3 of Pivot gives
$\bar{x}_e = \hat{b}_e = b_l / a_{le}$.
For $i \in \hat{B} - \{e\}$, line 9 gives
$\bar{x}_i = \hat{b}_i = b_i - a_{ie} \hat{b}_e$.
This completes the proof.
Simplex Code

[This slide shows the simplex pseudocode as a figure, which is not captured in the transcription.]
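Since the slide's code is missing from the transcription, here is a self-contained sketch of the whole loop under the simplifying assumption that every $b_i \ge 0$, so the initial basic solution is feasible and no initialization phase is needed; it uses Bland's smallest-index rule, which (as noted later) guarantees termination.

```python
def simplex(A, b, c):
    """Maximize c.x subject to A x <= b, x >= 0, assuming all b_i >= 0."""
    m, n = len(A), len(c)
    N = set(range(n))                       # nonbasic: the original variables
    B = set(range(n, n + m))                # basic: the slack variables
    a = {(n + i, j): float(A[i][j]) for i in range(m) for j in range(n)}
    bb = {n + i: float(b[i]) for i in range(m)}
    cc = {j: float(c[j]) for j in range(n)}
    v = 0.0
    while True:
        cand = sorted(j for j in N if cc[j] > 1e-9)
        if not cand:                        # no positive coefficient: optimal
            break
        e = cand[0]                         # Bland's rule: smallest index enters
        ratios = [(bb[i] / a[i, e], i) for i in B if a[i, e] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, l = min(ratios)                  # tightest constraint leaves
        # Pivot: same algebra as the Pivot procedure.
        ah, bh, ch = {}, {}, {}
        bh[e] = bb[l] / a[l, e]
        for j in N - {e}:
            ah[e, j] = a[l, j] / a[l, e]
        ah[e, l] = 1.0 / a[l, e]
        for i in B - {l}:
            bh[i] = bb[i] - a[i, e] * bh[e]
            for j in N - {e}:
                ah[i, j] = a[i, j] - a[i, e] * ah[e, j]
            ah[i, l] = -a[i, e] * ah[e, l]
        v += cc[e] * bh[e]
        for j in N - {e}:
            ch[j] = cc[j] - cc[e] * ah[e, j]
        ch[l] = -cc[e] * ah[e, l]
        N, B, a, bb, cc = (N - {e}) | {l}, (B - {l}) | {e}, ah, bh, ch
    x = [bb[j] if j in B else 0.0 for j in range(n)]
    return v, x

v, x = simplex([[1, 1, 3], [2, 2, 5], [4, 1, 2]], [30, 24, 36], [3, 1, 2])
print(v, x)   # optimal value 28 at (8, 4, 0)
```

Note that with Bland's rule the entering variable may differ from the one chosen in the worked example (here $x_2$ enters before $x_3$), but the optimum reached is the same.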
Degeneracy

Each iteration of the simplex algorithm either increases the objective value or leaves the objective value unchanged. The latter phenomenon is degeneracy. Example of degeneracy:

$z = x_1 + x_2 + x_3$
$x_4 = 8 - x_1 - x_2$
$x_5 = x_2 - x_3$.

Suppose that $x_1$ is chosen to be the entering variable and $x_4$ the leaving variable. After pivoting:

$z = 8 + x_3 - x_4$
$x_1 = 8 - x_2 - x_4$
$x_5 = x_2 - x_3$.
Degeneracy Continued

Now $x_3$ is the only option for the entering variable, with $x_5$ leaving. After pivoting:

$z = 8 + x_2 - x_4 - x_5$
$x_1 = 8 - x_2 - x_4$
$x_3 = x_2 - x_5$.

The objective value of 8 has not changed: degeneracy has occurred. Fortunately, $x_2$ can now be chosen to be entering and $x_1$ leaving. After pivoting:

$z = 16 - x_1 - 2x_4 - x_5$
$x_2 = 8 - x_1 - x_4$
$x_3 = 8 - x_1 - x_4 - x_5$.
Cycling

Degeneracy may lead to cycling: the slack forms at two different iterations are identical. If in the last step we instead choose $x_2$ entering and $x_3$ leaving, then after pivoting:

$z = 8 + x_3 - x_4$
$x_1 = 8 - x_3 - x_4 - x_5$
$x_2 = x_3 + x_5$.

The objective row is back to what it was one iteration earlier; continuing to pivot this way, the algorithm can cycle.
Simplex Runtime

Cycling is the only reason that the simplex algorithm might not terminate. By always choosing the variable with the smallest index (Bland's rule), the simplex algorithm always terminates.

The simplex algorithm terminates in at most $\binom{n+m}{m}$ iterations. Proof: $|B| = m$ and there are $n + m$ variables, so there are at most $\binom{n+m}{m}$ ways to choose $B$; since each choice of $B$ determines a slack form, running longer than that would repeat a slack form, i.e., cycle.

In practice, the simplex algorithm runs extremely fast.
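For the running example with $n = 3$ original variables and $m = 3$ constraints, the bound is small (a quick illustration, not from the slides):

```python
from math import comb

n, m = 3, 3
print(comb(n + m, m))   # at most C(6, 3) = 20 distinct bases, so <= 20 iterations
```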
Duality

The dual of a maximization LP problem (a.k.a. the primal) is a minimization LP, and vice versa.

Primal:
maximize $\sum_{j=1}^{n} c_j x_j$
subject to
$\sum_{j=1}^{n} a_{ij} x_j \le b_i$ for $i = 1, 2, \ldots, m$,
$x_j \ge 0$ for $j = 1, 2, \ldots, n$.

Dual:
minimize $\sum_{i=1}^{m} b_i y_i$
subject to
$\sum_{i=1}^{m} a_{ij} y_i \ge c_j$ for $j = 1, 2, \ldots, n$,
$y_i \ge 0$ for $i = 1, 2, \ldots, m$.
Primal-Dual Example

Primal:
maximize $3x_1 + x_2 + 2x_3$
subject to
$x_1 + x_2 + 3x_3 \le 30$
$2x_1 + 2x_2 + 5x_3 \le 24$
$4x_1 + x_2 + 2x_3 \le 36$
$x_1, x_2, x_3 \ge 0$.

Dual:
minimize $30y_1 + 24y_2 + 36y_3$
subject to
$y_1 + 2y_2 + 4y_3 \ge 3$
$y_1 + 2y_2 + y_3 \ge 1$
$3y_1 + 5y_2 + 2y_3 \ge 2$
$y_1, y_2, y_3 \ge 0$.
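Solving both programs numerically (a SciPy sketch, not part of the slides) shows that the two optima coincide, as the LP duality theorem asserts:

```python
from scipy.optimize import linprog

primal = linprog([-3, -1, -2],                     # maximize via negation
                 A_ub=[[1, 1, 3], [2, 2, 5], [4, 1, 2]],
                 b_ub=[30, 24, 36])
dual = linprog([30, 24, 36],                       # minimize as given
               A_ub=[[-1, -2, -4], [-1, -2, -1], [-3, -5, -2]],
               b_ub=[-3, -1, -2])                  # >= constraints negated to <=
print(-primal.fun, dual.fun)                       # both equal 28
```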
Weak LP Lemma

Lemma 29.8. Let $\bar{x}$ be any feasible solution to the primal LP and $\bar{y}$ be any feasible solution to the dual LP. Then
$\sum_{j=1}^{n} c_j \bar{x}_j \le \sum_{i=1}^{m} b_i \bar{y}_i$.

Proof. We have
$\sum_{j=1}^{n} c_j \bar{x}_j \le \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} \bar{y}_i \right) \bar{x}_j$   (dual feasibility: $c_j \le \sum_i a_{ij} \bar{y}_i$)
$= \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} \bar{x}_j \right) \bar{y}_i$
$\le \sum_{i=1}^{m} b_i \bar{y}_i$.   (primal feasibility: $\sum_j a_{ij} \bar{x}_j \le b_i$)
Equality of Primal and Dual

Corollary. Let $\bar{x}$ be a feasible solution to a primal linear program $(A, b, c)$, and let $\bar{y}$ be a feasible solution to the corresponding dual linear program. If
$\sum_{j=1}^{n} c_j \bar{x}_j = \sum_{i=1}^{m} b_i \bar{y}_i$,
then $\bar{x}$ and $\bar{y}$ are optimal solutions to the primal LP and the dual LP, respectively.

Proof. It follows from the Weak LP Lemma: no primal objective value can exceed any dual objective value, so equality means that $\bar{x}$ attains the primal maximum and $\bar{y}$ the dual minimum.
LP Duality Theorem

Suppose that the simplex algorithm returns values $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n)$ for the primal LP $(A, b, c)$. Let $N$ and $B$ denote the nonbasic and basic variables of the final slack form, let $c'_j$ denote the coefficients in the final slack form, and let $\bar{y} = (\bar{y}_1, \bar{y}_2, \ldots, \bar{y}_m)$ be defined as
$\bar{y}_i = -c'_{n+i}$ if $(n + i) \in N$, and $\bar{y}_i = 0$ otherwise.
Then $\bar{x}$ is an optimal solution to the primal LP, $\bar{y}$ is an optimal solution to the dual LP, and
$\sum_{j=1}^{n} c_j \bar{x}_j = \sum_{i=1}^{m} b_i \bar{y}_i$.
Proof

It suffices to show that
$\sum_{j=1}^{n} c_j \bar{x}_j = \sum_{i=1}^{m} b_i \bar{y}_i$,
for then the previous corollary makes both solutions optimal. Suppose the simplex algorithm terminates with a final slack form whose objective function is
$z = v' + \sum_{j \in N} c'_j x_j$,
where $c'_j \le 0$ for all $j \in N$. For the basic variables in the final slack form we can set the coefficients to 0; namely, let $c'_j = 0$ for all $j \in B$.
Proof Continued

We have
$z = v' + \sum_{j \in N} c'_j x_j = v' + \sum_{j \in N} c'_j x_j + \sum_{j \in B} c'_j x_j$   (since $c'_j = 0$ for $j \in B$)
$= v' + \sum_{j=1}^{n+m} c'_j x_j$   (since $N \cup B = \{1, \ldots, n+m\}$).

Since $\bar{x}_j = 0$ for $j \in N$, the basic solution has $\bar{z} = v'$. Since each step in the simplex algorithm performs only elementary operations, the original objective function evaluated at $\bar{x}$ must equal that of the final slack form. Namely,
$\sum_{j=1}^{n} c_j \bar{x}_j = v' + \sum_{j=1}^{n+m} c'_j \bar{x}_j = v' + \sum_{j \in N} c'_j \bar{x}_j + \sum_{j \in B} c'_j \bar{x}_j = v'$.
Proof Continued

We now show that $\bar{y}$ is a feasible solution to the dual LP and that
$\sum_{i=1}^{m} b_i \bar{y}_i = \sum_{j=1}^{n} c_j \bar{x}_j$.
For any values of $x$ satisfying the constraints,
$\sum_{j=1}^{n} c_j x_j = v' + \sum_{j=1}^{n+m} c'_j x_j$
$= v' + \sum_{j=1}^{n} c'_j x_j + \sum_{i=1}^{m} c'_{n+i} x_{n+i}$
$= v' + \sum_{j=1}^{n} c'_j x_j + \sum_{i=1}^{m} (-\bar{y}_i) x_{n+i}$   (by definition of $\bar{y}_i$)
Proof Continued

$= v' + \sum_{j=1}^{n} c'_j x_j + \sum_{i=1}^{m} (-\bar{y}_i) \left( b_i - \sum_{j=1}^{n} a_{ij} x_j \right)$   (by the slack form: $x_{n+i} = b_i - \sum_j a_{ij} x_j$)
$= v' + \sum_{j=1}^{n} c'_j x_j - \sum_{i=1}^{m} b_i \bar{y}_i + \sum_{i=1}^{m} \sum_{j=1}^{n} (a_{ij} x_j) \bar{y}_i$
$= \left( v' - \sum_{i=1}^{m} b_i \bar{y}_i \right) + \sum_{j=1}^{n} \left( c'_j + \sum_{i=1}^{m} a_{ij} \bar{y}_i \right) x_j$.
Proof Continued

Since this identity holds for all values of $x$, matching coefficients implies
$v' - \sum_{i=1}^{m} b_i \bar{y}_i = 0$,
$c'_j + \sum_{i=1}^{m} a_{ij} \bar{y}_i = c_j$ for $j = 1, 2, \ldots, n$.

Thus $\sum_{i=1}^{m} b_i \bar{y}_i = v' = \bar{z}$. Now we show that $\bar{y}$ is feasible. Since $c'_j \le 0$, we have
$c_j = c'_j + \sum_{i=1}^{m} a_{ij} \bar{y}_i \le \sum_{i=1}^{m} a_{ij} \bar{y}_i$,
and each $\bar{y}_i$ is either 0 or $-c'_{n+i} \ge 0$. This completes the proof.
Revisit: LP Formulation for Maximum Flow

We have seen an LP formulation for maximum flow, whose objective function $\sum_{v \in V} f_{sv} - \sum_{v \in V} f_{vs}$ contains a variable for every edge incident to $s$. In the formulation, variable $f_{uv}$ is the value of the flow from $u$ to $v$ for $(u, v) \in E$. Recall that the flow network does not have an edge from $t$ to $s$ (the residual network may change this, but we do not deal with residual networks in the LP formulation).

To make the LP dual simpler, we reformulate the LP for maximum flow with only one variable in the objective function. To do so, we add an edge $(t, s)$ to $E$ with capacity $c(t, s) = +\infty$.
LP Formulation for Maximum Flow

The following LP models the maximum-flow problem (note: we use indices $i$ and $j$ instead of $u$ and $v$ to place ourselves in our comfort zone):

Maximize $f_{ts}$   (only one variable)
Subject to
$f_{ij} \le c(i, j)$ for each $(i, j) \in E - \{(t, s)\}$   (capacity constraints)
$\sum_{j:(i,j) \in E} f_{ij} - \sum_{j:(j,i) \in E} f_{ji} = 0$ for each $i \in V$   (flow conservation)
$f_{ij} \ge 0$ for each $(i, j) \in E$.

Note that there is no need to include the constraint $f_{ts} \le c(t, s)$.
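A sketch of this single-objective-variable formulation on a made-up 3-node network ($s = 0$, $t = 2$; edges $(0,1)$, $(1,2)$, $(0,2)$ with capacities 3, 2, 2, plus the artificial edge $(t, s)$):

```python
from scipy.optimize import linprog

# Variables: f_01, f_12, f_02, f_ts  (f_ts is the flow on the added edge).
c = [0, 0, 0, -1]                     # maximize f_ts -> minimize -f_ts
A_eq = [[1, 0, 1, -1],                # conservation at s: (t,s) enters s
        [-1, 1, 0, 0],                # conservation at node 1
        [0, -1, -1, 1]]               # conservation at t: (t,s) leaves t
b_eq = [0, 0, 0]
bounds = [(0, 3), (0, 2), (0, 2), (0, None)]   # f_ts has no capacity bound
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)                       # maximum flow value: 4
```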
Formulation of the Dual

Each primal constraint corresponds to a dual variable. Let $y_{ij}$ be the variable for constraint $f_{ij} \le c(i, j)$, for $(i, j) \in E - \{(t, s)\}$. Let $y_i$ be the variable for constraint
$\sum_{j:(i,j) \in E} f_{ij} - \sum_{j:(j,i) \in E} f_{ji} = 0$ for $i \in V$.

For the objective function, we look at the constraints in the primal with nonzero right-hand sides and identify the dual variables for these constraints. The following is the objective of the dual:
Minimize $\sum_{(i,j) \in E - \{(t,s)\}} c(i, j) \, y_{ij}$.
Formulation of the Dual Continued

Each primal variable $x$ corresponds to a dual constraint. To form the dual constraint for $x$, follow the steps below:

1. Identify all the primal constraints containing $x$.
2. For each primal constraint containing $x$, let $a$ be the coefficient of $x$ in it and $y$ be the dual variable for this constraint.
3. Sum up all the products $ay$ to form the left-hand side of the dual constraint for $x$.
4. The right-hand side of the dual constraint is the coefficient of $x$ in the primal objective function.
Formulation of the Dual Continued

For a primal variable $f_{ij}$ with $(i, j) \ne (t, s)$, we observe that (1) its coefficient in the primal objective is 0; and (2) $f_{ij}$ appears only in the following three primal constraints:
$f_{ij} \le c(i, j)$   ($y_{ij}$)
the conservation constraint at node $i$, where $f_{ij}$ has coefficient $+1$   ($y_i$)
the conservation constraint at node $j$, where $f_{ij}$ has coefficient $-1$   ($y_j$)

Thus, for the primal variable $f_{ij}$ with $(i, j) \ne (t, s)$, we have the dual constraint
$y_{ij} + y_i - y_j \ge 0$.
The $\ge$ sign is used because the variable appears in one constraint with the $\le$ sign and in the rest with the $=$ sign.
Formulation of the Dual Continued

For the primal variable $f_{ts}$, we note that (1) its coefficient in the primal objective is 1, and (2) it appears in the following two constraints:
$\sum_{j:(t,j) \in E} f_{tj} - \sum_{j:(j,t) \in E} f_{jt} = 0$   ($y_t$, where $f_{ts}$ has coefficient $+1$)
$\sum_{j:(s,j) \in E} f_{sj} - \sum_{j:(j,s) \in E} f_{js} = 0$   ($y_s$, where $f_{ts}$ has coefficient $-1$)

Thus, the dual constraint w.r.t. variable $f_{ts}$ is
$y_t - y_s = 1$.
The $=$ sign is used because the primal constraints involving variable $f_{ts}$ are equations.
The Dual and the Minimum Cut

Finally, the following is the dual of the maximum-flow LP:

Minimize $\sum_{(i,j) \in E - \{(t,s)\}} c(i, j) \, y_{ij}$
Subject to
$y_{ij} + y_i - y_j \ge 0$ for each $(i, j) \in E - \{(t, s)\}$
$y_t - y_s = 1$
$y_{ij}, y_i \ge 0$.

We know that the maximum flow is equal to the capacity of a minimum cut, and any feasible solution to the primal is bounded above by any feasible solution to the dual. Thus, the optimal solution to the dual is the minimum cut.
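A sketch of this dual on a made-up 3-node network (edges $(0,1)$, $(1,2)$, $(0,2)$ with capacities 3, 2, 2; $s = 0$, $t = 2$, artificial edge $(2, 0)$):

```python
from scipy.optimize import linprog

# Variables: y_01, y_12, y_02, y_0, y_1, y_2.
c = [3, 2, 2, 0, 0, 0]                     # minimize sum of c(i,j) y_ij
A_ub = [[-1, 0, 0, -1, 1, 0],              # -(y_01 + y_0 - y_1) <= 0
        [0, -1, 0, 0, -1, 1],              # -(y_12 + y_1 - y_2) <= 0
        [0, 0, -1, -1, 0, 1]]              # -(y_02 + y_0 - y_2) <= 0
b_ub = [0, 0, 0]
A_eq = [[0, 0, 0, -1, 0, 1]]               # y_t - y_s = y_2 - y_0 = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1])
print(res.fun)                             # min-cut capacity: 4
```

The optimum 4 here equals the maximum flow of this network; the cut is $\{(1,2), (0,2)\}$.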
Exercise

As an exercise, let us consider the previous dual as a primal and obtain its dual. Let $x_{ij}$ be the dual variable for the primal constraint
$y_{ij} + y_i - y_j \ge 0$ for $(i, j) \in E - \{(t, s)\}$,
and let $x_{ts}$ denote the dual variable for the primal constraint
$y_t - y_s = 1$.

To form the dual objective, we note that only the primal constraint $y_t - y_s = 1$ has a nonzero right-hand side. Thus, the objective for the dual is
Maximize $x_{ts}$.
Exercise Continued

To form the dual constraints, we note that each primal variable $y_{ij}$ with $(i, j) \ne (t, s)$ appears only in the constraint
$y_{ij} + y_i - y_j \ge 0$,
and it has coefficient $c(i, j)$ in the primal objective function. Thus, for this variable we have the dual constraint
$x_{ij} \le c(i, j)$ for $(i, j) \ne (t, s)$.

Each primal variable $y_i$ has coefficient 0 in the primal objective function. For $i \notin \{s, t\}$, $y_i$ appears in the following constraints:
$y_{ij} + y_i - y_j \ge 0$ for $(i, j) \in E - \{(t, s)\}$,
$y_{ji} + y_j - y_i \ge 0$ for $(j, i) \in E - \{(t, s)\}$.
Exercise Continued

Thus, the dual constraint for the primal variable $y_i$ with $i \notin \{s, t\}$ is
$\sum_{j:(i,j) \in E - \{(t,s)\}} x_{ij} - \sum_{j:(j,i) \in E - \{(t,s)\}} x_{ji} \le 0$.

The primal variable $y_s$ appears in the following constraints:
$y_{sj} + y_s - y_j \ge 0$ for $(s, j) \in E$,
$y_t - y_s = 1$.
Thus, the dual constraint for $y_s$ is
$\sum_{j:(s,j) \in E} x_{sj} - x_{ts} \le 0$.
Likewise, the dual constraint for $y_t$ is
$x_{ts} - \sum_{j:(j,t) \in E} x_{jt} \le 0$.
Exercise Continued

View $x_{ij}$ as flow from node $i$ to node $j$. Because for each $i \in V$ the total flow entering node $i$ is at least the total flow going out of $i$, and there is an edge from $t$ to $s$, all the inequalities in the dual constraints must be equalities; namely, for each $i \in V$,
$\sum_{j:(i,j) \in E} x_{ij} - \sum_{j:(j,i) \in E} x_{ji} = 0$.

Finally, the dual problem is

Maximize $x_{ts}$
Subject to
$x_{ij} \le c(i, j)$ for each $(i, j) \in E - \{(t, s)\}$
$\sum_{j:(i,j) \in E} x_{ij} - \sum_{j:(j,i) \in E} x_{ji} = 0$ for each $i \in V$
$x_{ij} \ge 0$ for each $(i, j) \in E$.

This is exactly the same as the maximum-flow LP formulation.
A Sample Multiple-Choice Question

Unlike the problems in quizzes, the final exam consists of only multiple-choice (including true/false) questions. Example:

1. Which is the correct dual for the following primal LP?

Minimize $3x_1 + 8x_2 + 4x_3$
Subject to
$x_1 + 5x_2 - 3x_4 \ge 7$
$-3x_1 + 2x_3 + 5x_4 \ge 1$
$x_1, x_2, x_3, x_4 \ge 0$.

(a)
Maximize $7y_1 + y_2$
Subject to
$y_1 - 3y_2 \le 3$
$5y_1 \le 8$
$2y_2 \le 4$
$-3y_1 + 5y_2 \le 0$
$y_1, y_2 \ge 0$.
Sample Multiple-Choice Questions Continued

(b)
Maximize $7y_1 + y_2$
Subject to
$y_1 - 3y_2 \le 3$
$-3y_1 + 5y_2 \le 0$
$y_1, y_2 \ge 0$.

(c)
Maximize $3y_1 + 8y_2 + 4y_3$
Subject to
$y_1 - 3y_2 \le 3$
$5y_1 \le 8$
$2y_2 \le 4$
$-3y_1 + 5y_2 \le 0$
$y_1, y_2, y_3 \ge 0$.

(d) None of the above.
More informationSlack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0
Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2
More informationLinear Programming. Operations Research. Anthony Papavasiliou 1 / 21
1 / 21 Linear Programming Operations Research Anthony Papavasiliou Contents 2 / 21 1 Primal Linear Program 2 Dual Linear Program Table of Contents 3 / 21 1 Primal Linear Program 2 Dual Linear Program Linear
More informationSensitivity Analysis
Dr. Maddah ENMG 500 /9/07 Sensitivity Analysis Changes in the RHS (b) Consider an optimal LP solution. Suppose that the original RHS (b) is changed from b 0 to b new. In the following, we study the affect
More informationOn the Number of Solutions Generated by the Simplex Method for LP
Workshop 1 on Large Scale Conic Optimization IMS (NUS) On the Number of Solutions Generated by the Simplex Method for LP Tomonari Kitahara and Shinji Mizuno Tokyo Institute of Technology November 19 23,
More information4. Duality and Sensitivity
4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair
More informationOptimisation and Operations Research
Optimisation and Operations Research Lecture 9: Duality and Complementary Slackness Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/
More informationLecture 11: Post-Optimal Analysis. September 23, 2009
Lecture : Post-Optimal Analysis September 23, 2009 Today Lecture Dual-Simplex Algorithm Post-Optimal Analysis Chapters 4.4 and 4.5. IE 30/GE 330 Lecture Dual Simplex Method The dual simplex method will
More informationExercises - Linear Programming
Chapter 38 Exercises - Linear Programming By Sariel Har-Peled, December 10, 2007 1 Version: 1.0 This chapter include problems that are related to linear programming. 38.1 Miscellaneous Exercise 38.1.1
More information4.6 Linear Programming duality
4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal
More informationThe Maximum Flow Problem with Disjunctive Constraints
The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative
More informationThe Simplex Algorithm: Technicalities 1
1/45 The Simplex Algorithm: Technicalities 1 Adrian Vetta 1 This presentation is based upon the book Linear Programming by Vasek Chvatal 2/45 Two Issues Here we discuss two potential problems with the
More informationDuality of LPs and Applications
Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More information15-780: LinearProgramming
15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear
More informationChap6 Duality Theory and Sensitivity Analysis
Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we
More informationSection Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010
Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts
More information15-850: Advanced Algorithms CMU, Fall 2018 HW #4 (out October 17, 2018) Due: October 28, 2018
15-850: Advanced Algorithms CMU, Fall 2018 HW #4 (out October 17, 2018) Due: October 28, 2018 Usual rules. :) Exercises 1. Lots of Flows. Suppose you wanted to find an approximate solution to the following
More informationDiscrete Optimization 2010 Lecture 3 Maximum Flows
Remainder: Shortest Paths Maximum Flows Discrete Optimization 2010 Lecture 3 Maximum Flows Marc Uetz University of Twente m.uetz@utwente.nl Lecture 3: sheet 1 / 29 Marc Uetz Discrete Optimization Outline
More informationLecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,
Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x
More informationIE 400: Principles of Engineering Management. Simplex Method Continued
IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationDEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION. Part I: Short Questions
DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION Part I: Short Questions August 12, 2008 9:00 am - 12 pm General Instructions This examination is
More information"SYMMETRIC" PRIMAL-DUAL PAIR
"SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationIntroduction to optimization
Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)
More informationOptimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems
Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}
More informationSupplementary lecture notes on linear programming. We will present an algorithm to solve linear programs of the form. maximize.
Cornell University, Fall 2016 Supplementary lecture notes on linear programming CS 6820: Algorithms 26 Sep 28 Sep 1 The Simplex Method We will present an algorithm to solve linear programs of the form
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationOptimization methods NOPT048
Optimization methods NOPT048 Jirka Fink https://ktiml.mff.cuni.cz/ fink/ Department of Theoretical Computer Science and Mathematical Logic Faculty of Mathematics and Physics Charles University in Prague
More informationLinear Programming. Chapter Introduction
Chapter 3 Linear Programming Linear programs (LP) play an important role in the theory and practice of optimization problems. Many COPs can directly be formulated as LPs. Furthermore, LPs are invaluable
More informationNote 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)
Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical
More informationCS Algorithms and Complexity
CS 50 - Algorithms and Complexity Linear Programming, the Simplex Method, and Hard Problems Sean Anderson 2/15/18 Portland State University Table of contents 1. The Simplex Method 2. The Graph Problem
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More informationLecture 5 January 16, 2013
UBC CPSC 536N: Sparse Approximations Winter 2013 Prof. Nick Harvey Lecture 5 January 16, 2013 Scribe: Samira Samadi 1 Combinatorial IPs 1.1 Mathematical programs { min c Linear Program (LP): T x s.t. a
More informationTermination, Cycling, and Degeneracy
Chapter 4 Termination, Cycling, and Degeneracy We now deal first with the question, whether the simplex method terminates. The quick answer is no, if it is implemented in a careless way. Notice that we
More informationΩ R n is called the constraint set or feasible set. x 1
1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We
More informationDiscrete Optimization 23
Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:
More information3.7 Cutting plane methods
3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x
More informationLecture #21. c T x Ax b. maximize subject to
COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the
More informationOn the number of distinct solutions generated by the simplex method for LP
Retrospective Workshop Fields Institute Toronto, Ontario, Canada On the number of distinct solutions generated by the simplex method for LP Tomonari Kitahara and Shinji Mizuno Tokyo Institute of Technology
More informationA Parametric Simplex Algorithm for Linear Vector Optimization Problems
A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear
More informationCSCI5654 (Linear Programming, Fall 2013) Lecture-8. Lecture 8 Slide# 1
CSCI5654 (Linear Programming, Fall 2013) Lecture-8 Lecture 8 Slide# 1 Today s Lecture 1. Recap of dual variables and strong duality. 2. Complementary Slackness Theorem. 3. Interpretation of dual variables.
More informationChapter 1: Linear Programming
Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of
More informationLinear Programming, Lecture 4
Linear Programming, Lecture 4 Corbett Redden October 3, 2016 Simplex Form Conventions Examples Simplex Method To run the simplex method, we start from a Linear Program (LP) in the following standard simplex
More informationThe use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:
Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable
More informationAn introductory example
CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1
More informationTRANSPORTATION PROBLEMS
Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations
More informationDuality Theory, Optimality Conditions
5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More information1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016
AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 7 February 7th Overview In the previous lectures we saw applications of duality to game theory and later to learning theory. In this lecture
More informationmin 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14
The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,
More information3.10 Lagrangian relaxation
3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the
More information