Michælmas 2012 Operations Research III/IV


An inventory example

A company makes windsurfing boards and has orders for 40, 60, 75 and 25 boards respectively over the next four production quarters. All boards ordered must be available by the end of the relevant quarter, and there are initially 10 boards in stock. Using regular labour, 40 boards can be made in a quarter at a cost of 200 per board. Additional boards can be made using overtime labour at a cost of 250 per board. At the end of each quarter a storage cost of 20 per board is incurred for all boards left in stock. What should the production schedule be for the next four quarters?

The decision variables must include x_t = the number of boards made in regular work time in quarter t and y_t = the number of boards made in overtime production in quarter t, as these must be specified in the production schedule. It isn't immediately obvious, but it also turns out to be very useful to include a collection of inventory variables to count the boards left in stock at the ends of quarters, i.e. i_t = the number of boards in stock at the end of quarter t. The cost of production in terms of these variables is simply

    200(x_1 + x_2 + x_3 + x_4) + 250(y_1 + y_2 + y_3 + y_4) + 20(i_1 + i_2 + i_3 + i_4),

which we will want to minimise subject to the variables satisfying all the production constraints. The simplest of these are that regular time production is limited and that a negative number of boards cannot be produced, i.e. 0 ≤ x_t ≤ 40 and y_t ≥ 0 for each t. The only other constraints involve keeping track of the inventory. Let d_t denote the number of boards ordered for quarter t. The inventory variables must satisfy

    i_t = i_{t-1} + (x_t + y_t) - d_t,   t = 1, ..., 4,

since the difference in inventory between the end and start of a quarter equals the difference between what is produced and what is sold. The requirement that all orders are met is equivalent to the constraints i_t ≥ 0 for each t.

To put this into canonical form, introduce slack variables into the upper x_t constraints to get x_t + s_t = 40, so now there are 16 non-negative variables and 8 equality constraints. The problem can now be solved by an entirely routine application of the simplex algorithm. Doing it we find the minimal cost schedule has x^T = (40, 40, 40, 25), y^T = (0, 10, 35, 0) and i^T = (10, 0, 0, 0) for a total cost of 40,450. The model here is not realistic but does give a flavour of how linear programming problems occur in many practical settings.
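Before turning to the spreadsheet solution below, here is a minimal sketch of the same model in Python using scipy.optimize.linprog. This is an assumed alternative tool (it is not part of the Excel/Maple material that follows); it simply restates the twelve variables and the constraints above and should reproduce the 40,450 schedule.

    import numpy as np
    from scipy.optimize import linprog

    d  = [40, 60, 75, 25]                  # boards ordered in each quarter
    i0 = 10                                # boards in stock at the start
    # variable order: x1..x4 (regular), y1..y4 (overtime), i1..i4 (inventory)
    c = [200]*4 + [250]*4 + [20]*4

    # inventory balance: i_t - i_{t-1} - x_t - y_t = -d_t  (with i_0 = 10)
    A_eq = np.zeros((4, 12))
    b_eq = np.zeros(4)
    for t in range(4):
        A_eq[t, t] = -1                    # -x_t
        A_eq[t, 4 + t] = -1                # -y_t
        A_eq[t, 8 + t] = 1                 #  i_t
        if t > 0:
            A_eq[t, 8 + t - 1] = -1        # -i_{t-1}
        b_eq[t] = -d[t] + (i0 if t == 0 else 0)

    bounds = [(0, 40)]*4 + [(0, None)]*8   # 0 <= x_t <= 40, y_t >= 0, i_t >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x.round(1), res.fun)         # expect a schedule costing 40450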
Solution using the Excel Solver add-on

In this little section I assume you have used Excel before. The cells on a spreadsheet are usually used like variable names but can also contain formulas. Choose a block of cells to represent the 12 variables (Excel can sort out slack variables and artificial variables for itself!), e.g. B2 to E4 with B2:E2 representing the x_t (and put some descriptive text in B1). Now enter the objective function formula into a cell, e.g. A2, by typing in

    =200*SUM($B$2:$E$2) + 250*SUM($B$3:$E$3) + 20*SUM($B$4:$E$4)

(the same expression without the $ signs will work I think). Similarly enter the LHS formulas for the four inventory constraints into cells A4 to A7, say. Now click the Solver... item from the Tools menu (the package may have to be installed). This should pop up a window for setting up the LP problem. Start by setting the target cell (clicking on the cell you have chosen to contain the objective fn will do it) and then the Min radio button. Set the Changing Cells to those you chose to contain the variables. Now set the constraints by clicking on the Add button. This pops up another window: in Cell Reference enter the cell containing a constraint formula, with the matching constant in the Constraint window (don't forget the constraints x_t ≤ 40). Add all eight constraints in this way and select Close. Check you have the right constraints set up, then choose Options and check the linear model and non-negative variable boxes, click OK and then Solve. Excel will offer you reports on the answer, sensitivity and limits. You can read the Help to see what these are about, but we will cover the background theory in the lectures on post-optimal analysis. To make Excel's reports intelligible it is also helpful to learn about naming ranges of cells and other ways of setting up the constraints.

I have actually set up an Excel worksheet called Pivot.xls that you can save to your own filespace from a link just below this one on the main page. It contains sheets set up for the above inventory example and also Question 9 on the problem sheet. I have also written a simple pivoting macro which you can access with the button on the first few worksheets (you may have to switch security to medium level). To use this, type in the coefficients of your problem (in canonical form), select the cell where you want to pivot (i.e. click on it) and then press the Pivot button. The first three sheets have coefficients of simple examples for you to try. Enjoy!

Solution using Maple

If you prefer to use Maple then it has many facilities that are useful for linear programming. Start by looking in Help for simplex. I will add more notes here when I have time.

Hillier & Lieberman Chapters 1 and 3 discuss applications of operations research techniques and describe some real world examples. Chapter 3 provides a detailed yet elementary introduction to linear optimisation problems with several good examples.
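The pivoting macro above lives inside Excel; if you would rather experiment in Python, the following is a rough sketch of what the Pivot button does (a Gauss-Jordan pivot on a chosen entry of a tableau). It is an illustration of the idea only, not the macro's actual code.

    import numpy as np

    def pivot(T, r, c):
        """Gauss-Jordan pivot on entry (r, c) of tableau T: scale row r so that
        T[r, c] becomes 1, then clear column c from every other row."""
        T = np.array(T, dtype=float)
        T[r] = T[r] / T[r, c]
        for i in range(T.shape[0]):
            if i != r:
                T[i] = T[i] - T[i, c] * T[r]
        return T

    # tiny example: pivot on row 0, column 0 of a small array of coefficients
    print(pivot([[2.0, 1.0, 4.0],
                 [1.0, 3.0, 7.0]], 0, 0))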

Lectures on the Simplex Algorithm

Defn. Linear programming problem: max or min a linear function c^T x subject to linear constraints. The objective function has no stationary points! Geometry suggests the optimum will occur at a corner point (we will prove this later).

Introductory example: livestock on a farm. Let x_1 denote the number of cows kept and x_2 the number of sheep. We need to solve the following linear programming problem:

    maximise   250x_1 + 45x_2
    subject to x_1 ≤ 50                     (cows)
               x_2 ≤ 200                    (sheep)
               x_1 + x_2/5 ≤ 72             (pasture)
               150x_1 + 25x_2 ≤ 10,000      (labour)

with of course x_1, x_2 ∈ N_0. The graphical method shows that the maximum occurs at the intersection of the boundary lines of the last two constraints.

We can use slack variables (and other tricks) to change the form but not the content of LP problems. Canonical form, i.e.

    max z = c^T x   where   Ax = b, x ≥ 0,

suits algebraic treatment. We can write the introductory example in canonical form by introducing into each of the inequality constraints a non-negative variable to take up the slack (scaling the pasture constraint by 5 and the labour constraint by 1/25 to keep the numbers tidy), i.e.

    maximise   250x_1 + 45x_2
    subject to x_1 + s_1 = 50
               x_2 + s_2 = 200
               5x_1 + x_2 + s_3 = 360
               6x_1 + x_2 + s_4 = 400

with the sign constraints x_1, x_2, s_1, s_2, s_3, s_4 ≥ 0.

Defn. Corner points have an algebraic description as basic feasible solutions. A basis is a set B of m of the x_i with linearly independent columns A_i, and any variable in B is basic. The solution to Ax = b in which the n - m non-basic variables are 0 is the basic solution for B. When the basic variables have non-negative values we have a basic feasible solution (bfs). For any chosen basis we can manipulate Ax = b into the form

    x_B = B^{-1}b - B^{-1}N x_N                                (1.1a)

where B = [A_i], x_i ∈ B. This expresses the feasible set using the non-basic variables x_N. Substitution into the objective function gives us the reduced costs c_B^T B^{-1}N - c_N^T, and these enable assessment of the relative worths of adjacent corner points via

    z + (c_B^T B^{-1}N - c_N^T) x_N = c_B^T B^{-1}b.           (1.1b)

Consider the equation z + γx = c, where γ < 0 and c are constants. As x increases so does z, so in (1.1b) if any of the reduced costs are negative then, by increasing the relevant components of x_N, we can move from the bfs to points in the feasible set where z is larger. When all the reduced costs for a given bfs are non-negative, that bfs is a global maximum of the problem.

The example has at most C(6,4) = 15 basic solutions (and by sketching the feasible set we see that in fact there are only 6 bfs's).
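Equations (1.1a) and (1.1b) are worked through by hand for one basis of this example just below. As a sketch of the same calculation in code (an assumed numpy aid, not part of the original worksheets), the following computes B^{-1}b and the reduced costs for any chosen basis:

    import numpy as np

    def basis_report(A, b, c, basic):
        """Return B^{-1}b, the reduced costs c_B^T B^{-1}N - c_N^T and the
        objective value c_B^T B^{-1}b for the given basic column indices."""
        nonbasic = [j for j in range(A.shape[1]) if j not in basic]
        B, N = A[:, basic], A[:, nonbasic]
        xB = np.linalg.solve(B, b)              # B^{-1} b, as in (1.1a)
        y = np.linalg.solve(B.T, c[basic])      # y^T = c_B^T B^{-1}
        return xB, y @ N - c[nonbasic], y @ b   # as in (1.1b)

    # livestock example in canonical form; columns x1, x2, s1, s2, s3, s4
    A = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 1, 0, 0],
                  [5, 1, 0, 0, 1, 0],
                  [6, 1, 0, 0, 0, 1]], dtype=float)
    b = np.array([50, 200, 360, 400], dtype=float)
    c = np.array([250, 45, 0, 0, 0, 0], dtype=float)
    print(basis_report(A, b, c, [0, 1, 2, 3]))  # the basis {x1, x2, s1, s2}

The output should match the hand calculation that follows: x_B = (40, 160, 10, 40), reduced costs (20, 25) and value 17,200.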

Equations (1.1a) and (1.1b) with the basis {x_1, x_2, s_1, s_2} can be written out in detail as follows. The parameters are

    B = [ 1  0  1  0 ]     N = [ 0  0 ]     b = [  50 ]
        [ 0  1  0  1 ]         [ 0  0 ]         [ 200 ]
        [ 5  1  0  0 ]         [ 1  0 ]         [ 360 ]
        [ 6  1  0  0 ]         [ 0  1 ]         [ 400 ]

with c_B^T = (250, 45, 0, 0) and c_N^T = (0, 0). We now use row operations (or Maple) to find B^{-1}, from which we can calculate B^{-1}b, B^{-1}N and hence c_B^T B^{-1}b and c_B^T B^{-1}N. Doing all this we find that equation (1.1a) says

          [ x_1 ]   [  40 ]   [ -1   1 ]
    x_B = [ x_2 ] = [ 160 ] - [  6  -5 ] [ s_3 ]
          [ s_1 ]   [  10 ]   [  1  -1 ] [ s_4 ]
          [ s_2 ]   [  40 ]   [ -6   5 ]

while equation (1.1b) says

    z + 20 s_3 + 25 s_4 = 17,200

and we see that this basic solution is both feasible and optimal.

Finding a better basis

Any negative reduced cost implies there is an adjacent bfs (corner point) which is better! The problem is to find one and then re-express the problem. To do this we use a table containing all the parameters, specifically

    [ 1   0^T   c_B^T B^{-1}N - c_N^T ] [  z  ]     [ c_B^T B^{-1}b ]
    [ 0    I    B^{-1}N               ] [ x_B ]  =  [ B^{-1}b       ]        (1.1c)
                                        [ x_N ]

where the top row contains (1.1b). We choose a variable to enter the basis, one to push out, and then pivot (use row operations) to describe the problem from the new bfs (corner).

- Select the pivot column (incoming variable x_l, say) with negative reduced cost (most negative is usual).
- With pivot column a (the x_l column of B^{-1}N), if

      a_i > 0 then x_i = 0 when x_l = (B^{-1}b)_i / a_i ≥ 0,              (1.2a)
      a_i = 0 then x_i = (B^{-1}b)_i ≥ 0 for all x_l,                     (1.2b)
      a_i < 0 then x_i = (B^{-1}b)_i - a_i x_l > 0,                       (1.2c)

  so only basic variables x_i with a_i > 0 can be pushed out, and we select the pivot row (outgoing variable x_i) using the minimum ratio rule, i.e. min_i (B^{-1}b)_i / a_i over rows i with a_i > 0, as in (1.2a).
- Gauss elimination: normalise the pivot row and use row operations to make all other elements of the pivot column equal zero.
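A small Python sketch of these selection rules (assuming a tableau stored as in (1.1c), with the z-row first, the z column first and the right-hand side last; this is my own illustration rather than anything from the worksheets):

    import numpy as np

    def choose_pivot(T):
        """Pick a pivot for the tableau T: row 0 is the z-row, column 0 the z
        column, the last column the RHS.  Returns (row, col), or None if the
        current bfs is optimal, or the string 'unbounded'."""
        rc = T[0, 1:-1]                         # reduced costs
        if (rc >= 0).all():
            return None                         # all reduced costs non-negative
        col = 1 + int(np.argmin(rc))            # most negative reduced cost
        a, rhs = T[1:, col], T[1:, -1]
        if (a <= 0).all():
            return "unbounded"                  # only cases (1.2b) and (1.2c) occur
        ratios = np.full(len(a), np.inf)
        ratios[a > 0] = rhs[a > 0] / a[a > 0]   # minimum ratio rule, as in (1.2a)
        return 1 + int(np.argmin(ratios)), col

Together with the pivot function sketched earlier this gives one step of the method; repeating until choose_pivot returns None (or reports an unbounded problem) is the whole algorithm.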

Example 1.2 (on the worksheet). We start with

    T_0   |  z   x_1   x_2   s_1   s_2   s_3  |  rhs
    z     |  1    -3     1     0     0     0  |    0
    s_1   |  0     2    -1     1     0     0  |    4
    s_2   |  0     1    -2     0     1     0  |    2
    s_3   |  0     1     1     0     0     1  |    5

The initial basis is (s_1, s_2, s_3), giving the bfs (x^T, s^T) = (0, 0, 4, 2, 5) with value 0. x_1 has reduced cost -3 so this bfs is not optimal. To bring x_1 in consider the ratios 4/2 = 2, 2/1 = 2 and 5/1 = 5. Both s_1 and s_2 reach 0 at x_1 = 2. Choose (at random; there is no good rule for breaking such ties) s_2 to leave the basis, i.e. the s_2 row is the pivot row. Row operations lead to

    T_1   |  z   x_1   x_2   s_1   s_2   s_3  |  rhs
    z     |  1     0    -5     0     3     0  |    6
    s_1   |  0     0     3     1    -2     0  |    0
    x_1   |  0     1    -2     0     1     0  |    2
    s_3   |  0     0     3     0    -1     1  |    3

Again not optimal as x_2 has reduced cost -5! We continue just as above. The ratios from the x_2 column are 0/3 and 3/3 (we don't use row 2 here as a_2 = -2 < 0) so s_1 leaves. Another pivot leads to

    T_2   |  z   x_1   x_2   s_1    s_2   s_3  |  rhs
    z     |  1     0     0   5/3   -1/3     0  |    6
    x_2   |  0     0     1   1/3   -2/3     0  |    0
    x_1   |  0     1     0   2/3   -1/3     0  |    2
    s_3   |  0     0     0    -1      1     1  |    3

This is a different bfs but the same corner point, and again not optimal. Bring s_2 into the basis. Only a_3 from that column is positive so s_3 leaves. Another pivot...

    T_3   |  z   x_1   x_2    s_1   s_2   s_3  |  rhs
    z     |  1     0     0    4/3     0   1/3  |    7
    x_2   |  0     0     1   -1/3     0   2/3  |    2
    x_1   |  0     1     0    1/3     0   1/3  |    3
    s_2   |  0     0     0     -1     1     1  |    3

This table shows the unique optimal solution, as both non-basic variables have strictly positive reduced costs.

Question: what if rc_j < 0 and all the corresponding a_i ≤ 0? From equation (1.2c) the non-basic variable x_j can be increased indefinitely, and from equation (1.1b)

    c^T x = c_B^T B^{-1}b - rc_j x_j → ∞  as  x_j → ∞,

i.e. the problem has an unbounded maximum.
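The worksheet carrying Example 1.2's data is not reproduced in these notes, so the following quick check uses the problem as read off tableau T_0 above (maximise 3x_1 - x_2 subject to 2x_1 - x_2 ≤ 4, x_1 - 2x_2 ≤ 2, x_1 + x_2 ≤ 5, x ≥ 0). It is only an assumed scipy sanity check of the final table, not part of the lecture material.

    from scipy.optimize import linprog

    # linprog minimises, so negate the objective of the maximisation problem
    res = linprog(c=[-3, 1],
                  A_ub=[[2, -1], [1, -2], [1, 1]],
                  b_ub=[4, 2, 5],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    print(res.x, -res.fun)    # expect x = (3, 2) with value 7, as in T_3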

Finding an initial bfs

In large problems there are too many bases for guessing to be an acceptable procedure (it's OK for small problems). We use the big M method to deal with this. Assume the problem has been pushed into canonical form (which is always possible). Any constraints which have the form

    (i)   A_j x + s_j = b_j ≥ 0, with s_j ≥ 0,

we use as they are and put s_j into the starting basis. All others, i.e. those in the forms

    (ii)  A_j x = b_j ≥ 0,
    (iii) A_j x - s_j = b_j ≥ 0, with s_j ≥ 0,

we modify by including a non-negative artificial variable, e.g. we change A_j x = b_j to A_j x + w_j = b_j, w_j ≥ 0. Note that the modified constraint is not equivalent to the original one! We also modify the objective function by including in it a term -M w_j, where M represents a huge positive value. Now we apply our algorithm, the idea being that this enormous penalty will lead us to force w_j out of the basis, i.e. make it zero, so the original constraint is not modified after all. If we find a maximum where no artificial variables appear then that is the maximum for the original problem. If we don't then the original problem has no feasible solutions.

Example 1.3 Consider the LP problem

    minimise 4x_1 + x_2 + x_3
    where    2x_1 + x_2 + 2x_3 ≥ 4,
             3x_1 + 3x_2 + x_3 = 3,
             x_1 + x_2 + x_3 ≤ 5,   with x_i ≥ 0.

To swap to maximisation change the sign of the objective function. To make the problem canonical put slack variables into the first and third constraints. We can use s_2 in a starting basis but not s_1. We could easily guess a feasible basis but let's use the big M method. Put w_1 and w_2 into the first two constraints to get

    2x_1 + x_2 + 2x_3 - s_1 + w_1 = 4,
    3x_1 + 3x_2 + x_3 + w_2 = 3,

with w_i ≥ 0. Clearly {w_1, w_2, s_2} forms a feasible starting basis for this modified problem. The objective function now contains terms -M w_1 - M w_2 which we must substitute for with non-basic variables (basic variables have zero reduced costs!). Doing that we get

    z + (4 - 5M)x_1 + (1 - 4M)x_2 + (1 - 3M)x_3 + M s_1 = -7M,

so we start with

    T_0  |   x_1     x_2     x_3    s_1   s_2   w_1   w_2  |  rhs
    z    |  4-5M    1-4M    1-3M     M     0     0     0   |  -7M
    w_1  |    2       1       2     -1     0     1     0   |    4
    w_2  |    3       3       1      0     0     0     1   |    3
    s_2  |    1       1       1      0     1     0     0   |    5

(the z column is hidden as it never affects any of our pivot calculations). Bring x_2 into the basis (as 1 - 4M is extremely negative) to replace w_2 (use the ratio rule):

    T_1  |   x_1    x_2     x_3      s_1   s_2   w_1     w_2     |   rhs
    z    |   3-M     0    (2-5M)/3    M     0     0   (4M-1)/3   |  -3M-1
    w_1  |    1      0      5/3      -1     0     1     -1/3     |     3
    x_2  |    1      1      1/3       0     0     0      1/3     |     1
    s_2  |    0      0      2/3       0     1     0     -1/3     |     4

and then (next pivot) bring in x_3 to replace w_1 to find that the basis {x_2, x_3, s_2} is optimal, with objective function value 11/5 for the original minimisation, obtained at x^T = (0, 2/5, 9/5) with s^T = (0, 14/5). The reduced costs for x_1 and s_1 are 13/5 and 2/5 respectively, and so this is the unique optimum. Notice that we can either substitute a large number in for M or handle it algebraically (this is easy enough to program and avoids rounding error problems). It would be more usual to bring in x_1 at the first pivot, but bringing in x_2 gets to the optimum with fewer pivots; a general rule is to choose the most negative reduced cost.

The primal simplex algorithm is the method we have developed for numerically solving LP problems.

Step 0  Put the problem into canonical form, introducing artificial variables if necessary.
Step 1  Check whether any of the reduced costs are negative. If any column of the table has reduced cost rc_j < 0 and all a_i ≤ 0 then the LP has an unbounded maximum, so stop; if any column of the table has rc_j < 0 and at least one a_i > 0 then go to step 2; otherwise go to step 3.
Step 2  Choose a column with negative reduced cost, a row according to the minimum ratio rule, and pivot. Return to step 1.
Step 3  An optimal bfs has been reached. If artificial variables are present in the basis the original LP is not feasible; if not, an optimal bfs has been found for the original LP problem. If any non-basic variables have reduced cost zero the optimum is not unique.
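As a numerical cross-check of Example 1.3: scipy's linprog accepts ≥ and = constraints directly, so no big M is needed when you are not implementing the simplex method yourself. This is only an assumed sanity check of the answer above, not a demonstration of the big M method.

    from scipy.optimize import linprog

    c = [4, 1, 1]                         # minimise 4x1 + x2 + x3
    A_ub = [[-2, -1, -2],                 # 2x1 + x2 + 2x3 >= 4, flipped to <=
            [ 1,  1,  1]]                 # x1 + x2 + x3 <= 5
    b_ub = [-4, 5]
    A_eq = [[3, 3, 1]]                    # 3x1 + 3x2 + x3 = 3
    b_eq = [3]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)]*3, method="highs")
    print(res.x, res.fun)                 # expect (0, 0.4, 1.8) and 11/5 = 2.2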

On corner points and bases

Two earlier claims remain to be established: (i) if there are any feasible points then there is an optimal corner point, and (ii) basic feasible solutions are corner points. Geometrically, corner points are defined by never lying on the line segment between any other pair of feasible points. We'll assume that A is m × n of rank m below.

Theorem 1.1 (Corners are basic feasible solutions) x̄ is a corner point of P = {x ∈ R^n : Ax = b, x ≥ 0} if and only if the columns of A for which x̄_i > 0 are linearly independent.

Proof: We use the equation Ax = Σ_{i=1}^n x_i A_i, where A_i denotes the i-th column of A, and assume that P has a feasible point x̄. Let I = {i : x̄_i > 0}. Then Σ_{i∈I} x̄_i A_i = Σ_{i=1}^n x̄_i A_i = A x̄ = b.

[⇒] Suppose that the columns A_i for i ∈ I are not linearly independent. This means there must exist constants y_i (not all zero) such that Σ_{i∈I} y_i A_i = 0. Choose µ > 0 small enough that µ|y_i| < x̄_i for each i ∈ I, and set y_j = 0 for each j ∉ I. Now consider the points x^+ = x̄ + µy and x^- = x̄ - µy. Both are members of P. To see this for x^+ note that x^+_i = x̄_i + µy_i ≥ x̄_i - µ|y_i| > 0 for each i ∈ I, x^+_j = 0 for j ∉ I, and Ax^+ = A x̄ + µAy = A x̄ = b. The argument for x^- is similar. Obviously

    (x^+ + x^-)/2 = ((x̄ + µy) + (x̄ - µy))/2 = x̄,

so x̄ is midway between x^+ and x^- and so cannot be a corner point of P.

[⇐] Suppose the A_i for i ∈ I are linearly independent and that x̄ = αv + (1-α)w for some v, w ∈ P and α ∈ (0, 1). Clearly I has no more than m members, and for i ∉ I we must have v_i = w_i = 0 as neither can be positive. Hence Σ_{i∈I} v_i A_i = Σ_{i∈I} w_i A_i = b, which implies Σ_{i∈I} (v_i - w_i) A_i = 0, and as these columns are linearly independent v = w = x̄, i.e. x̄ is a corner point.

This result shows that basic feasible solutions are corner points and vice versa. The same geometric idea is used to show that if there are any feasible points then corner points must exist, and that one of them is optimal if there are any optimal points.

Theorem 1.2 (Fundamental Theorem of Linear Programming) For a LP problem in canonical form: (i) if there is a feasible solution then there is a basic feasible solution; (ii) if there is an optimal solution then there is an optimal bfs.

Proof: (i) Suppose x̄ is an element of the feasible set P = {x ∈ R^n : Ax = b, x ≥ 0}. We have A x̄ = Σ_{i=1}^n x̄_i A_i = b, where the A_i are the columns of A. Suppose that p of the x̄_i are non-zero (the first p, say). If p ≤ m there is nothing to do, as x̄ is a basic feasible solution. If p > m then A_1, ..., A_p are linearly dependent, so we can find y_1, ..., y_p, at least one of which is strictly positive, such that Σ_{i=1}^p y_i A_i = 0. Let y = (y_1, ..., y_p, 0, ..., 0). Then A(x̄ - εy) = b for any ε, and picking ε = min{ x̄_i / y_i : y_i > 0 } we see that x̄ - εy ∈ P and has at most p - 1 positive components. We can repeat this process until we have a feasible point which is also basic.

(ii) Assume x̄ is actually an optimal point of P. The value of any point x̄ - εy is simply c^T x̄ - ε c^T y, and for small enough ε (positive or negative) x̄ - εy is feasible, so we must have c^T y = 0 to maintain the optimality of x̄ (e.g. if c^T y < 0 then choose ε < 0 to get a point more valuable than x̄!). Hence the new point found in part (i) is also optimal.
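The proof of part (i) is constructive, and it can be illuminating to watch the construction run. The following numpy sketch is my own illustration (it uses an SVD to find the dependency y and a crude tolerance, so it is not robust numerical code); it repeatedly removes a positive component exactly as in the proof.

    import numpy as np

    def reduce_to_bfs(A, x, tol=1e-9):
        """Follow Theorem 1.2(i): while the columns supporting x are linearly
        dependent, find y with sum_i y_i A_i = 0 supported there (some y_i > 0)
        and move to x - eps*y, zeroing at least one positive component."""
        x = np.array(x, dtype=float)
        while True:
            I = np.where(x > tol)[0]
            if np.linalg.matrix_rank(A[:, I]) == len(I):
                return x                          # supporting columns independent
            y = np.linalg.svd(A[:, I])[2][-1]     # a null vector of those columns
            if y.max() <= tol:
                y = -y                            # ensure some component is positive
            eps = min(x[I[k]] / y[k] for k in range(len(I)) if y[k] > tol)
            x[I] -= eps * y
            x[np.abs(x) < tol] = 0.0

    # tiny demo: (1, 1, 1) is feasible for the single constraint below but not basic
    A = np.array([[1.0, 1.0, 2.0]])
    print(reduce_to_bfs(A, [1.0, 1.0, 1.0]))      # at most one positive entry remains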

Game theory example

Two people repeatedly play a game (something akin to paper/rock/scissors). Player R has a choice of 2 actions while C has 3. If R plays i and C plays j then R pays C the amount a_ij as shown in the table:

    a_ij  |  j = 1   j = 2   j = 3
    i = 1 |    -2       1       0
    i = 2 |     3      -1      -1

The players choose actions with probabilities u_i and v_j, but R has no control over the v_j. A common conservative strategy is called minimax: find the u_i to minimise the maximum expected loss (the worst over all possible v_j). R wishes to minimise

    max_j { f_j(u) } = max( -2u_1 + 3u_2, u_1 - u_2, -u_2 ),

where f_j(u) is the expected payment when C plays j and the minimum is taken over u ≥ 0 such that u_1 + u_2 = 1. This objective function is not linear in the u_i, but there is a trick that enables us to find an equivalent LP problem. What we do is introduce a new variable r and note that in (u_1, u_2, r) space the LP problem

    minimise r subject to u_1 + u_2 = 1, f_j(u) ≤ r for each j, and u_i ≥ 0 for each i,

provides the solution to R's minimax problem. At any ū the minimal r equals max_j f_j(ū), and the rest of the problem is to look for the best value of u.

To numerically solve an example we must deal with the free variable r. We can do this by substituting r = x_1 - x_2 where the x_i ≥ 0, and then R's problem is to

    minimise   x_1 - x_2
    subject to u_1 + u_2 = 1
               -x_1 + x_2 - 2u_1 + 3u_2 ≤ 0
               -x_1 + x_2 + u_1 - u_2 ≤ 0
               -x_1 + x_2 - u_2 ≤ 0

with x_i, u_i ≥ 0. Applying the simplex algorithm we find the optimal basis is {x_1, u_1, u_2, s_3}, which gives a bfs (1/7, 4/7, 3/7, 4/7), and so R pays C no more than 1/7 per round on average when playing action 1 on 4/7 of the rounds and action 2 on the other rounds. The reduced costs for s_1, s_2 and s_3 are 2/7, 5/7 and 0 respectively.

By similar reasoning C's maximin strategy solves the LP problem

    maximise   s
    subject to v_1 + v_2 + v_3 = 1
               s + 2v_1 - v_2 ≤ 0
               s - 3v_1 + v_2 + v_3 ≤ 0

with v_j ≥ 0 and s free. Confirm that the optimal solution is s = 1/7 when v_1 = 2/7, v_2 = 5/7 (and hence v_3 = 0), with reduced costs 4/7 for the first slack variable and 3/7 for the second. The fact that the best strategy for each player equals the other player's reduced costs at optimum is no coincidence.

Hillier & Lieberman Chapter 4 is a long and careful description of how to run the simplex algorithm, with all the important issues illustrated with examples. Chapter 5 uses matrix notation to explain why the algorithm works and how to implement it efficiently on a computer. The lectures so far have covered it all except sections 4.7 and 5.3, which we will treat next (along with Chapter 6), and sections 4.8 and 4.9, which we will not consider.
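As a final check on the game example, the following assumed scipy sketch solves R's minimax LP directly with a free variable r (rather than the r = x_1 - x_2 split used for the hand computation above).

    import numpy as np
    from scipy.optimize import linprog

    a = np.array([[-2, 1, 0],        # amount R pays C when R plays i and C plays j
                  [ 3, -1, -1]], dtype=float)

    # variables (u1, u2, r); minimise r
    c = [0, 0, 1]
    A_ub = np.hstack([a.T, -np.ones((3, 1))])   # a_1j*u1 + a_2j*u2 - r <= 0 for each j
    b_ub = np.zeros(3)
    A_eq = [[1, 1, 0]]                          # u1 + u2 = 1
    b_eq = [1]
    bounds = [(0, None), (0, None), (None, None)]   # r is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    print(res.x, res.fun)            # expect u = (4/7, 3/7) and value r = 1/7
    # recent scipy also exposes the duals (res.ineqlin.marginals), which recover
    # C's strategy (2/7, 5/7, 0) up to sign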
