
Optimization in Operations Research, 2nd Edition, Ronald L. Rardin, Solution Manual

Chapter 3 Solutions

Supplement to the 2nd edition of Optimization in Operations Research, by Ronald L. Rardin, Pearson Higher Education, Hoboken NJ, © 2017. As of October 2015.

3-1. (a) x(1) is feasible and only a local max because all constraints are satisfied and no nearby point has a better objective value, but there are better points such as ( , 3). x(2) is infeasible and thus no sort of optimum because it violates constraint x ≥ 0. x(3) is feasible because it satisfies all constraints, but no sort of optimum because it can be improved in the neighborhood. x(4) is feasible and both a local and a global max because it satisfies all constraints and has a better objective value than any other point in the feasible region.

(b) x(1) is infeasible and thus no sort of optimum because it violates constraint x + x 4. x(2) is feasible because it satisfies all constraints, but no kind of optimum because it can be improved in a variety of directions. x(3) is feasible and a local minimum because it cannot be improved in the neighborhood, but not global because other feasible points such as x(4) have better objective values. x(4) is feasible and both a local and a global minimum because it satisfies all constraints and has a better objective value than any other point in the feasible region.

3-2. (a) y(1) = ( , 0, 5) + (3, , 0) = (8, , 5), y(2) = (8, , 5) + 5( , , ) = (3, 8, 0), y(3) = (3, 8, 0) + /(0, 6, 0) = (3, , 0)

(b) y(1) = ( , 0, 5) + ( , 3, ) = (4, 6, ), y(2) = (4, 6, ) + /( , 0, ) = (4 /, 6, ), y(3) = (4 /, 6, ) + (4, 3, ) = (5 /, 4, 6)

3-3. (a) Δw(1) = (4, , 7) − (0, , ) = (4, , 6), Δw(2) = (4, 3, 9) − (4, , 7) = (0, , 8), Δw(3) = (3, 3, ) − (4, 3, 9) = ( , 0, 3)

(b) Δw(1) = (4, 2, 10) − (4, 0, 7) = (0, 2, 3), Δw(2) = (−2, 4, 5) − (4, 2, 10) = (−6, 2, −5), Δw(3) = (5, 5, 5) − (−2, 4, 5) = (7, 1, 0)

3-4. (a) Nonimproving because the objective value worsens in the neighborhood along this direction. (b) Nonimproving because the objective decreases in the neighborhood along this direction. (c) Improving because the objective improves in the neighborhood along this direction. (d) Improving because the objective improves in the neighborhood along this direction. (e) Nonimproving because x(3) is a local minimum. (f) Improving because the objective value improves in the neighborhood along this direction.

3-5. (a) Feasible because movement along this direction retains feasibility in the neighborhood. (b) Infeasible because any movement along this direction produces infeasibility. (c) Feasible because movement along this direction retains feasibility in the neighborhood. (d) Feasible because movement along this direction retains equality of the only active constraint. (e) Feasible because any movement along this direction violates no constraints immediately. (f) Infeasible because movement along this direction immediately violates x ≥ 0.
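
The arithmetic behind 3-2 and 3-3 is ordinary vector addition and subtraction. The small Python sketch below is not part of the manual; it simply recomputes the step directions of 3-3(b), whose data are fully legible above, and the helper names step and directions are of course hypothetical.

def step(point, direction, lam):
    # One move of the form y(k+1) = y(k) + lam * direction.
    return tuple(p + lam * d for p, d in zip(point, direction))

def directions(points):
    # Recover each direction Δw(k) = w(k) − w(k−1) from consecutive iterates.
    return [tuple(b - a for a, b in zip(points[k - 1], points[k]))
            for k in range(1, len(points))]

w = [(4, 0, 7), (4, 2, 10), (-2, 4, 5), (5, 5, 5)]   # the iterates of 3-3(b)
print(directions(w))                  # [(0, 2, 3), (-6, 2, -5), (7, 1, 0)]
print(step((4, 0, 7), (0, 2, 3), 1))  # (4, 2, 10), back to the second iterate
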
3-6. (a) (4 − λ) (0 + 3λ) + 3(6 − λ) 5 is satisfied for all positive λ; (4 − λ) ≥ 0 is satisfied for λ ≤ 4; (0 + 3λ) ≥ 0 is satisfied for all positive λ; (6 − 3λ) ≥ 0 is satisfied for λ ≤ 3. Thus the proper step is λ = min{4, 3} = 3. The model is not unbounded because this λ is finite.

(b) (8 − 2λ) (5 + λ) + 3(3 − λ) 5 is satisfied for all positive λ; (8 − 2λ) ≥ 0 is satisfied for λ ≤ 4; (5 + λ) ≥ 0 is satisfied for all positive λ; (3 − λ) ≥ 0 is satisfied for λ ≤ 3. Thus the proper step is λ = min{4, 3} = 3. The model is not unbounded because this λ is finite.

(c) (0 + λ) (0 + 3λ) + 3(4 + λ) 5 is satisfied for all positive λ; (0 + λ) ≥ 0 is satisfied for all positive λ; (0 + 3λ) ≥ 0 is satisfied for all positive λ; (4 + λ) ≥ 0 is satisfied for all positive λ. Thus λ = +∞, and the model is unbounded.

(d) (0 + λ) (4 + 7λ) + 3(3 + 4λ) 5 is satisfied for all positive λ; (0 + λ) ≥ 0 is satisfied for all positive λ; (4 + 7λ) ≥ 0 is satisfied for all positive λ; (3 + 4λ) ≥ 0 is satisfied for all positive λ. Thus the model is unbounded because steps in this direction can be arbitrarily large without losing feasibility.

3-7. (a) ∇f( , 3, 4, 0, 6) = (4, 0, , 0, ) and ∇f · Δy = (4, 0, , 0, ) · ( , 3, 4, 0, 6) = 6 > 0, so improving for a maximize. (b) ∇f( , 0, 9, 0, 0) = ( , 0, 7, 0, ) and ∇f · Δy = ( , 0, 7, 0, ) · ( , 0, , 0, 4) = 7 > 0, so improving for a maximize. (c) ∇f(3, ) = ( + (3), 3 + 4) = (5, 7) and ∇f · Δy = (5, 7) · (−7, 5) = 0, so more information is needed. (d) ∇f( , ) = (( ) + , + 4) = (5, 6) and ∇f · Δy = (5, 6) · ( , 3) = −3 < 0, so nonimproving for a minimize. (e) ∇f(4, ) = ( (4 − 5), ( + )) and ∇f · Δy = ( , 4) · ( , ) > 0, so improving for a maximize. (f) ∇f( , ) = (( ) + , + ( 3)) and ∇f · Δy = ( , 3) · (3, ) = 0, so more information is needed.

3-8. (a) Δw = ∇f( , 0, 5, ) = (3, , 0, ). (b) Δw = ∇f( , , , 0) = (0, 4, 5, ). (c) Δw = ∇f(3, ) = ( (3 + ), 3) = ( 8, 3). (d) Δw = ∇f( , ) = ( 4, 9 + 4( )) = ( 4, 7).

3-9. (a) (4 ) + (0 ) = 5 < 0, so [i] is inactive; (4) (0) = 8, so [ii] is active; (4) > 0, so [iii] is inactive; (0) = 0, so [iv] is active. (b) (6 ) + (4 ) = 5, so [i] is active; (6) (4) = 8, so [ii] is active; (6) > 0, so [iii] is inactive; (4) > 0, so [iv] is inactive.

3-10. (a) Active constraints are 3y y + 8y3 = 4 and y2 ≥ 0. For these, a(1) · Δy = (3, , 8) · (0, 4, ) = 0 and a(4) · Δy = (0, , 0) · (0, 4, ) = 4 ≥ 0, as required for feasibility. (b) Active constraints are 3y y + 8y3 = 4 and y2 ≥ 0. For these, a(1) · Δy = (3, , 8) · (0, 4, ) = 6 ≠ 0, infeasible for an = constraint, and a(4) · Δy = (0, , 0) · (0, 4, ) = −4 < 0, infeasible for a ≥ constraint. (c) Active constraints are 3y y + 8y3 = 4 and y ≥ 0. In the first, a(1) · Δy = (3, , 8) · ( , 0, ) = 4 ≠ 0 rather than = 0 as required, so infeasible. (d) Active constraints are 3y y + 8y3 = 4 and y1 ≥ 0. For these, a(1) · Δy = (3, , 8) · ( , , ) = 0, as required for feasibility of an = constraint, but a(3) · Δy = (1, 0, 0) · ( , , ) < 0, infeasible for a ≥ 0 constraint.

3-11. (a) Active constraints are w + 3w3 = 8, w1 + w2 + w3 = 4, and w ≥ 0. Thus the conditions are Δw + 3Δw3 = 0, Δw1 + Δw2 + Δw3 = 0, and Δw ≥ 0. (b) Active constraints are w + 3w3 = 8 and w1 + w2 + w3 = 4. Thus the conditions are Δw + 3Δw3 = 0 and Δw1 + Δw2 + Δw3 = 0. (c) Active constraints are w + w = 0 and w w 8. Thus the conditions are Δw + Δw = 0 and Δw Δw 0. (d) The only active constraint is w + w = 0. Thus the condition is Δw + Δw = 0.

3-12. (a) Δy + 5Δy < 0. (b) Direct substitution. (c) Active constraints are y ≥ 0 and y + y 3. (d) The active constraints yield the conditions Δy + 0 Δy ≥ 0 and Δy + Δy 0. (e) Direct substitution shows the direction is feasible. The maximum step is the λ at which constraint y ≥ 0 is encountered.
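
The step-size logic of 3-6 (take the largest λ that keeps every constraint satisfied, and declare the model unbounded when no constraint ever blocks the move) can be sketched in a few lines of Python. This is an illustration, not part of the manual; the starting point and direction echo part (a), and the third direction component −2 is an assumption, since the printed coefficient there is ambiguous.

import math

def max_step(x, dx, ge_rows):
    # Largest lam >= 0 keeping a·x(lam) >= b for every row (a, b);
    # returns math.inf when no row ever blocks the move (unbounded direction).
    lam = math.inf
    for a, b in ge_rows:
        rate = sum(ai * di for ai, di in zip(a, dx))        # change of a·x per unit lam
        if rate < 0:                                        # this row tightens along dx
            slack = sum(ai * xi for ai, xi in zip(a, x)) - b
            lam = min(lam, slack / -rate)
    return lam

# Nonnegativity x1 >= 0, x2 >= 0, x3 >= 0 written as rows a·x >= b.
nonneg = [((1, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]
print(max_step((4, 0, 6), (-1, 3, -2), nonneg))   # 3.0, i.e. lam = min{4, 3} = 3
print(max_step((0, 0, 4), (2, 3, 1), nonneg))     # inf: the direction is unbounded
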

3-13. (a) 3Δx 3Δx3 < 0. (b) 3( ) 3( ) = −6 < 0 as required. (c) Only the first constraint, with (3) + 4(9) = 69, and the fourth constraint, x2 = 0, are active at (3, 0, 9). (d) Δx1 + 3Δx2 + 4Δx3 = 0 and Δx2 ≥ 0. (e) Direct substitution; checking limits on λ, (3 − λ) + (0 + λ) + (9 + λ) 6 gives λ ≤ , and 3 − λ ≥ 0 gives λ ≤ 3. Combining, λ = min{ , 3}. Then x(1) = (3 − , 0 + , 9 + 4 ) = ( , , 3).

3-14. (a) ∇f · Δz(1) = (4, 7) · (2, 0) = 8 > 0 and ∇f · Δz(2) = (4, 7) · (−2, 4) = 20 > 0, as required for improving directions in a maximize.

(b) With both directions improving at all z, the only considerations are when directions are feasible. At z(0) = (0, 0), only Δz(1) is feasible, because Δz(2) has a · Δz(2) = (1, 0) · (−2, 4) = −2 < 0 for active z1 ≥ 0. A maximum feasible step follows Δz(1) for λ = 2 to z(1) = (4, 0). At z(1) = (4, 0), further pursuit of Δz(1) is infeasible, but Δz(2) = (−2, 4) is now feasible. A maximum feasible step follows Δz(2) for λ = 3/4 to z(2) = (5/2, 3). At z(2) = (5/2, 3), further pursuit of Δz(2) is infeasible, but Δz(1) = (2, 0) is again feasible. A maximum feasible step follows Δz(1) for λ = 1/4 to z(3) = (3, 3). At z(3), both directions are infeasible, and the search terminates.

(c) Figure: the search path z(0) = (0, 0) → z(1) = (4, 0) → z(2) = (5/2, 3) → z(3) = (3, 3).

3-15. (a) ∇f · Δz(1) = ( , ) · (0, −5) = −5 < 0 and ∇f · Δz(2) = ( , ) · (−2, 1) < 0, as required for a minimize.

(b) With both directions improving at all z, the only considerations are when directions are feasible. At z(0) = (5, 3), only Δz(1) is feasible, because Δz(2) has a · Δz(2) = (0, 1) · (−2, 1) = 1 > 0 for active z2 ≤ 3. A maximum feasible step follows Δz(1) for λ = 3/5 to z(1) = (5, 0). At z(1) = (5, 0), further pursuit of Δz(1) is infeasible, but Δz(2) = (−2, 1) is now feasible. A maximum feasible step follows Δz(2) for λ = 5/2 to z(2) = (0, 5/2). At z(2) = (0, 5/2), further pursuit of Δz(2) is infeasible, but Δz(1) = (0, −5) is again feasible. A maximum feasible step follows Δz(1) for λ = 1/10 to z(3) = (0, 2). At z(3), both directions are infeasible, and the search terminates.

(c) Figure: the search path z(0) = (5, 3) → z(1) = (5, 0) → z(2) = (0, 5/2) → z(3) = (0, 2).

3-16. (a) (3, , 0) + λ( 3, 3, 9), λ ∈ [0, 1]. Setting λ = /3 in this expression yields z(3); no λ gives z(4). (b) (6, 4, 4) + λ(4, 4, 3), λ ∈ [0, 1]. Setting λ = 3/4 in this expression yields z(3); no λ gives z(4).
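
The search pattern in 3-14 and 3-15 (follow any improving direction that is still feasible up to its maximum step, then switch) is easy to mimic in code. The sketch below is illustrative only: the gradient (4, 7) and directions (2, 0) and (−2, 4) follow the reconstruction of 3-14 above, but the feasible region used here (z1 ≤ 4, z2 ≤ 3, z1 + z2 ≤ 6, z ≥ 0) is merely an assumption chosen to reproduce the printed path, not the book's actual constraints.

import math

LE_ROWS = [((1, 0), 4), ((0, 1), 3), ((1, 1), 6),      # assumed: z1 <= 4, z2 <= 3, z1+z2 <= 6
           ((-1, 0), 0), ((0, -1), 0)]                  # z1 >= 0, z2 >= 0 written as <= rows

def max_step(z, dz):
    # Largest lam keeping a·(z + lam*dz) <= b for every row; inf if never blocked.
    lam = math.inf
    for a, b in LE_ROWS:
        rate = sum(ai * di for ai, di in zip(a, dz))
        if rate > 0:
            room = b - sum(ai * zi for ai, zi in zip(a, z))
            lam = min(lam, room / rate)
    return lam

def search(z, dirs, grad):
    # Repeatedly follow an improving (grad·dz > 0, maximize) feasible direction to
    # its maximum step, as in the 3-14 solution; stop when none remains usable.
    while True:
        for dz in dirs:
            if sum(g * d for g, d in zip(grad, dz)) > 0:
                lam = max_step(z, dz)
                if 1e-9 < lam < math.inf:
                    z = tuple(zi + lam * di for zi, di in zip(z, dz))
                    print("step", lam, "along", dz, "->", z)
                    break
        else:
            return z

search((0, 0), [(2, 0), (-2, 4)], (4, 7))   # (0,0) -> (4,0) -> (5/2,3) -> (3,3)
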

3-17. (a) From the graph, the set is not convex because part of the line segment from x(1) = (0, 3) to x(2) = (3, 0) lies outside the feasible region. (b) From the graph, the set is convex because the line segment between every pair of feasible solutions lies entirely within the feasible region. (c) Convex because all constraints are linear. (d) Convex because all constraints are linear. (e) Not convex because fractional solutions between, say, x(1) = (0, 0, 0, 4) and x(2) = (0, 0, 0, 5) are infeasible. (f) Not convex because fractional solutions between the all-xj = 1 and all-xj = 0 solutions are infeasible.

3-18. (a) Only the first and third constraints are violated at w = 0. Adding nonnegative artificial variables w4 in the first and w5 in the third, and minimizing their sum, produces the Phase I model: min w4 + w5, s.t. 40w1 + 30w2 + 0w3 + w4 = 50, w1 w2 0, 4w1 + w3 + w5 0, w1, w2, w3, w4, w5 ≥ 0. Setting w4 = 50 − 40(0) − 30(0) − 0(0) = 50 and w5 = 0 − 4(0) − (0) = 0 (or any higher value) completes a starting (artificially) feasible solution.

(b) Only the second and third constraints are violated at w = 0. Adding nonnegative artificial variable w3 in the second and w4 in the third, then minimizing their sum, produces the Phase I model: min w3 + w4, s.t. w + w 3, w + w 3, w + w4 w + , w1, w2, w3, w4 ≥ 0. Setting w3 and w4 equal to the amounts by which their constraints are violated at w = 0 (or any higher values) completes a starting (artificially) feasible solution.
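
The construction in 3-18 follows one pattern: evaluate each constraint at the starting point w = 0, give every violated constraint its own nonnegative artificial variable, and minimize the sum of those artificials, starting each at its amount of violation. A small Python sketch of that pattern follows; the constraint data are hypothetical stand-ins, since the exercise's own coefficients are only partly legible above.

def violated(lhs, sense, rhs):
    return (sense == "<=" and lhs > rhs) or \
           (sense == ">=" and lhs < rhs) or \
           (sense == "==" and lhs != rhs)

def phase_one_artificials(constraints, w0):
    # For each violated constraint, record the smallest feasible artificial value
    # (the amount of violation at w0); these become the Phase I objective terms.
    art = {}
    for i, (a, sense, b) in enumerate(constraints):
        lhs = sum(ai * wi for ai, wi in zip(a, w0))
        if violated(lhs, sense, b):
            art[i] = abs(b - lhs)
    return art

rows = [((40, 30, 20), "==", 150),     # hypothetical stand-ins for 3-18(a)-style data
        ((1, -1, 0), ">=", 0),         # satisfied at w = 0: no artificial needed
        ((4, 0, 1), ">=", 20)]
print(phase_one_artificials(rows, (0, 0, 0)))   # {0: 150, 2: 20} -> min of two artificials
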

(c) All three constraints are violated at w = 0. Subtracting nonnegative artificial variable w3 in the first, adding w4 in the second and w5 in the third, and minimizing their sum produces the Phase I model: min w3 + w4 + w5, s.t. (w1 − 3)² + (w2 − 3)² − w3 ≤ 4, w1 + w2 + w4 = 5, w + w5 ≥ 3, w3, w4, w5 ≥ 0. Setting w3 = (3 − 0)² + (3 − 0)² − 4 = 14 (or any larger value), w4 = 5 − (0) − (0) = 5, and w5 = 3 − (0) = 3 (or any larger value) completes a starting (artificially) feasible solution.

(d) Only the third constraint is violated at w = 0. Adding nonnegative artificial variable w3 there and minimizing its value produces the Phase I model: min w3, s.t. w w 9, w = 4w, w + w3 ≥ , w, w3 ≥ 0. Setting w3 equal to the violation of the third constraint at w = 0 (or any greater value) completes a starting (artificially) feasible solution.

3-19. (a) The upper bounds on w1 and w2 make it impossible to also satisfy w1 + w2 ≥ 5. (b) Only the first constraint is violated at w = 0. Adding nonnegative artificial variable w3 there and minimizing its value produces the Phase I model: min w3, s.t. w1 + w2 + w3 ≥ 5, 0 ≤ w1 ≤ , 0 ≤ w2 ≤ , w3 ≥ 0. (c) Setting w3 = 5 (or any greater value) completes a starting (artificially) feasible solution. (d) The optimal solution is w1 = , w2 = , w3 = . The Phase I optimal value w3 > 0 proves the original model is infeasible.

3-20. (a) Stop and conclude the model is infeasible because the artificial variables cannot all be eliminated. (b) Drop the artificial variables and proceed with Phase II from initial solution y = (6, 3, ), which is feasible because both artificials = 0 at the end of Phase I. (c) Drop the artificial variables and proceed with Phase II from initial solution y = ( , 3, ), which is feasible because both artificials = 0 at the end of Phase I. (d) With one artificial positive, the current (y1, y2, y3) solution is not feasible in the original model. But there may still be such a solution, because the optimum obtained is only local. Repeat Phase I from a new starting point.

3-21. Needed artificial variables and their starting (artificially) feasible values are exactly as in Exercise 3-18.

(a) For a maximize model, subtract a large multiple of the artificial variables in the objective to obtain the big-M model: max w1 w2 + 5w3 − M(w4 + w5), s.t. 40w1 + 30w2 + 0w3 + w4 = 50, w1 w2 0, 4w1 + w3 + w5 0, w1, w2, w3, w4, w5 ≥ 0; w4 = 50, w5 = 0.

(b) For a minimize model, add a large multiple of the artificial variables in the objective to obtain the big-M model: min w1 + 5w2 + M(w3 + w4), s.t. w + w 3, w + w 3, w + w4 w + , w1, w2, w3, w4 ≥ 0.

(c) For a minimize model, add a large multiple of the artificial variables in the objective to obtain the big-M model: min w1 + 3w2 + M(w3 + w4 + w5), s.t. (w1 − 3)² + (w2 − 3)² − w3 ≤ 4, w1 + w2 + w4 = 5, w + w5 ≥ 3, w3, w4, w5 ≥ 0; w3 = 14, w4 = 5, w5 = 3.

(d) For a maximize model, subtract a large multiple of the artificial variables in the objective to obtain the big-M model: max (w ) + w( ) − M w3, s.t. w w 9, w = 4w, w + w3 ≥ , w, w3 ≥ 0.
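
In 3-21 the same artificial variables are used, but instead of a separate Phase I objective they are charged a large penalty M in the original objective (subtracted for a max, added for a min). A tiny Python sketch of that bookkeeping follows; the coefficient list is illustrative, not the exercise's exact objective.

def big_m_objective(obj_coeffs, num_artificials, maximize, M=1.0e6):
    # Original objective coefficients followed by -M (max) or +M (min)
    # for each artificial variable.
    penalty = -M if maximize else M
    return list(obj_coeffs) + [penalty] * num_artificials

# e.g. a maximize objective over (w1, w2, w3) with two artificials w4, w5:
print(big_m_objective([1, -1, 5], 2, maximize=True))
# [1, -1, 5, -1000000.0, -1000000.0]
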

3-22. (a) Stop and conclude the model is infeasible if M is big enough, because artificial variables remain positive. Otherwise, increase M and repeat the search. (b) Stop and conclude y = (6, 3, ) is a global optimum for the original model: it is feasible because all artificials = 0, and it is optimal because a global optimum was obtained for the big-M model. (c) Conclude y = ( , 3, ) is a local optimum for the original model because it is feasible with all artificials = 0, but it is possibly not a global optimum because the big-M search yielded only a local one. If desired, repeat the big-M search from a new starting point. (d) Conclude nothing, because only a local optimum has been obtained and some artificials remain positive. Repeat the big-M search from a new starting point and/or using a larger value of M.
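
The case analysis of 3-20 and 3-22 reduces to two yes/no questions: did every artificial variable end at zero, and is the optimum known to be global? The following sketch simply encodes that table; it is illustrative logic mirroring the wording above, not the API of any solver.

def conclude(artificials_all_zero, global_optimum):
    if artificials_all_zero and global_optimum:
        return "solution is feasible and a global optimum of the original model"
    if artificials_all_zero:
        return "feasible, but only a local optimum; restart if a global one is wanted"
    if global_optimum:
        return "original model is infeasible (provided M was large enough)"
    return "conclude nothing: restart from a new point and/or increase M"

for flags in [(True, True), (True, False), (False, True), (False, False)]:
    print(flags, "->", conclude(*flags))
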
