APPLIED OPTIMIZATION


Systems Analysis Group, Uppsala University

APPLIED OPTIMIZATION

[Cover figure: graphical illustration of the simplex path, from the starting solution (f = 0) to the optimal point (f = 12).]

Thomas Persson, January 1995
Revised in October 1998, Håkan Lanshammar


Contents

1 INTRODUCTION
  1.1 Examples of optimization problems
  1.2 General formulation, mathematical program
2 LINEAR PROGRAMMING
  2.1 Examples of typical problems solved by linear programming
      Efficient use of limited resources
      The alloy problem
  2.2 General forms of the LP problem
  2.3 Graphical solution to a simple LP problem
  2.4 Standard form of an LP problem
  2.5 Basic feasible solution
3 THE SIMPLEX METHOD
  3.1 The idea of the simplex method
  3.2 Setting up a starting solution
  3.3 The simplex method in tabular form
  3.4 Other model forms
  3.5 Multiple optima, degeneracy and cycling
4 SENSITIVITY ANALYSIS AND DUALITY
  4.1 The simplex tableau in matrix form
  4.2 Sensitivity analysis
  4.3 The dual simplex method
  4.4 Duality
      Duality and the simplex method
      Duality and sensitivity analysis

5 INTEGER LINEAR PROGRAMMING
  General form
  Some considerations on solving integer linear programming problems
  The branch-and-bound technique for solving ILP problems
6 DYNAMIC PROGRAMMING
7 NONLINEAR PROGRAMMING
  Analytical conditions for minimum
  Some considerations on numerical algorithms
  Minimization of one-dimensional functions
  Minimization of multi-dimensional functions
REFERENCES

1 Introduction

Systems analysis is a scientific method for describing, analyzing and controlling complex systems. There are different techniques used in systems analysis; some of them are simulation, optimization and operations research. Optimization techniques can be viewed as being a part of operations research, or as a branch of mathematics. This material is intended to be directly applicable to solving different types of optimization problems, and to cover the optimization part of the course Simulation and Operations Research given in the Master of Science program at Uppsala University. The material gives an overview of different types of optimization problems, with the emphasis on linear programming problems. The problem types are presented both from a theoretical and an applied point of view. The topics of the course material are linear programming (LP), integer linear programming (ILP), dynamic programming (DP) and nonlinear programming (NLP).

1.1 Examples of optimization problems

In this section we will give some examples of optimization problems.

Ex 1.1) Cancan, a can manufacturing company, wants to construct a can with the largest possible volume from a specified amount (area) of sheet metal. We know that the area, A, and the volume, V, of a can with radius r and height h are given by:

A = 2πr² + 2πrh
V = πr²h

The problem therefore becomes

maximize V = πr²h
subject to 2πr² + 2πrh = a   (a is the area)
           r, h > 0
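As a sketch (not part of the original text), the can problem of Ex 1.1 can be solved analytically by eliminating h from the area constraint; the helper names below are our own:

```python
import math

def optimal_can(a):
    """Ex 1.1: maximize V = pi r^2 h subject to 2 pi r^2 + 2 pi r h = a.

    Eliminating h = (a - 2 pi r^2) / (2 pi r) gives
    V(r) = (a r - 2 pi r^3) / 2, and dV/dr = 0 yields r = sqrt(a / (6 pi)).
    """
    r = math.sqrt(a / (6 * math.pi))
    h = (a - 2 * math.pi * r**2) / (2 * math.pi * r)
    return r, h

def volume(r, h):
    return math.pi * r**2 * h

if __name__ == "__main__":
    r, h = optimal_can(a=100.0)
    print(r, h, volume(r, h))
```

At the optimum the height equals the diameter, h = 2r, which the code reproduces numerically.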

Ex 1.2) Cancan decides to manufacture two different types of cans. Two different machines have to be used in the process. Each can has its specific material cost and sale price. All data is summarized in the following table:

                     Can 1     Can 2
time in machine 1    1.0 s     … s
time in machine 2    1.5 s     … s
material cost        0.20:-    …:-
sale price           0.40:-    …:-

How many of each type of can should Cancan produce per hour in order to maximize their profit? To formulate the problem we introduce the following variables:

x1 : the number of Can 1 produced per hour
x2 : the number of Can 2 produced per hour
f = f(x1, x2) : the profit per hour when producing and selling x1 and x2 cans, respectively

This gives the problem

maximize f = 0.20x1 + …x2
subject to 1.0x1 + …x2 ≤ …
           1.5x1 + …x2 ≤ …
           x1, x2 ≥ 0

1.2 General formulation, mathematical program

A general optimization problem can be written in the form

(1.1)  max f(x)
       subject to x ∈ F

where x = (x1 x2 … xn)^T is a column vector of n decision variables, and F is the feasible set. f is called the objective function or the cost function. The problem can be formulated in a more "mathematical" form:

(1.2)  max f(x)
       subject to gi(x) ≤ 0,  i = 1, …, m
                  hj(x) = 0,  j = 1, …, p
                  x ∈ X

This is called a mathematical program. (The term mathematical program should not be compared to a computer program, but rather to a mathematical model.) The feasible set F in (1.1) has in (1.2) been divided into two parts: the constraints gi(x) ≤ 0 and hj(x) = 0, and the feasible set X. The feasible set X can be the nonnegative numbers, integers, binary numbers, etc.

Note: A constraint with "≥" instead of "≤" can easily be transformed, e.g. the constraint 2x1 − 3x2 ≥ −6 is equal to −2x1 + 3x2 − 6 ≤ 0.

There are some important special cases of mathematical programs.

Linear programming, LP
- f, g, h are linear
- X is the set of nonnegative numbers

Integer linear programming, ILP
- f, g, h are linear
- X is the set of nonnegative integers. If X = {0, 1} then the problem is called binary programming or zero-one programming.

Nonlinear programming, NLP
- f and/or g, h are nonlinear

LP is described in chapters 2-4, ILP in chapter 5 and a short introduction to NLP is given in chapter 7.

2 Linear programming

2.1 Examples of typical problems solved by linear programming

Efficient use of limited resources

Consider m resources of some sort, e.g. raw material, work force, machines, financial resources, time, which are available in limited amounts denoted by bi. These resources can be used for performing n activities (production processes). Let j be one of these activities and let xj be the (unknown) level of activity j. If the production process consists of producing a specified product, then xj represents the produced quantity. We denote the quantity of resource i (i = 1, …, m) necessary to produce a unit of product j (j = 1, …, n) by aij. The total quantity of resource i used for producing the n products is then determined by:

(2.1)  ai1x1 + ai2x2 + … + ainxn ≤ bi,   i = 1, …, m

Because xj represents the quantity that must be produced, it cannot be a negative number:

(2.2)  xj ≥ 0,   j = 1, …, n

The decision-making process implies the consideration of an economic criterion, e.g. to maximize the profit. Let pj be the unit sale price of product j and cj the unit production cost for product j. Then the total profit is given by:

(2.3)  f = (p1 − c1)x1 + (p2 − c2)x2 + … + (pn − cn)xn

The problem is to find those solutions of the inequality systems (2.1) and (2.2) which maximize the profit (2.3).
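Collecting (2.1)-(2.3), the resource-allocation problem can be written compactly; this is a reconstruction in LaTeX notation of the program the text describes:

```latex
\begin{aligned}
\max \quad & f = \sum_{j=1}^{n} (p_j - c_j)\, x_j \\
\text{subject to} \quad & \sum_{j=1}^{n} a_{ij} x_j \le b_i, \qquad i = 1,\dots,m \\
& x_j \ge 0, \qquad j = 1,\dots,n
\end{aligned}
```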

Ex 2.1) Consider a retired joiner who makes two different types of furniture. Each piece of furniture is manufactured using three types of material. Furniture of type 1 requires 1.0 units of material 1, 2.5 units of material 2 and 1.5 units of material 3. Furniture of type 2 requires 1.5 units of material 1, 2.5 units of material 2 and 2.5 units of material 3. The joiner only has 90, 120, and 100 units of these three materials, respectively, so he must be careful when he decides how many pieces of each type of furniture he is going to manufacture. The material costs for the two types of furniture are 100 US$ and 90 US$, respectively. The joiner can sell furniture of type 1 for 145 US$, and furniture of type 2 for 140 US$. Assume that there is no limit on how many pieces of furniture he can sell and that he wants to maximize the profit. Introduce the following variables:

x1 : the quantity produced of furniture of type 1
x2 : the quantity produced of furniture of type 2

We can now formulate the problem as

max f = 45x1 + 50x2
subject to 1.0x1 + 1.5x2 ≤ 90
           2.5x1 + 2.5x2 ≤ 120
           1.5x1 + 2.5x2 ≤ 100
           x1, x2 ≥ 0

(The solution is given by x1 = 20, x2 = 28, fmax = 2300.)

The alloy problem

In a laboratory of a metallurgical company an alloy is produced using n raw materials Mj (j = 1, …, n). Each raw material Mj contains the same m different substances Si (i = 1, …, m). The content per unit of substance Si in raw material Mj is aij. The alloy has to contain at least li (i = 1, …, m) units of substance Si, and at most ui (i = 1, …, m) units of substance Si. The unit cost of raw material Mj is cj. The problem is to determine the quantities of each raw material Mj in the alloy, so that its cost is minimized. The problem can be formulated as

min f = c1M1 + c2M2 + … + cnMn
subject to li ≤ ai1M1 + ai2M2 + … + ainMn ≤ ui,   i = 1, …, m
           Mj ≥ 0,   j = 1, …, n

2.2 General forms of the LP problem

A general LP problem with n variables and m constraints can always be formulated as (see the general mathematical program in (1.2))

(2.5)  max f = c1x1 + c2x2 + … + cnxn
       subject to
       a11x1 + a12x2 + … + a1nxn ≤ b1
       ...
       al1x1 + al2x2 + … + alnxn ≤ bl
       al+1,1x1 + al+1,2x2 + … + al+1,nxn = bl+1
       ...
       am1x1 + am2x2 + … + amnxn = bm
       x1, x2, …, xn ≥ 0

Note: A "≥ - inequality" can be transformed into a "≤ - inequality" by multiplying both sides of the inequality by −1. For example, the inequality x1 − x2 + 2x3 ≥ −4 is equivalent to −x1 + x2 − 2x3 ≤ 4.

We call the form (2.5) the general form of a linear programming problem. It is common, though, in real problems to have only inequality constraints, i.e.

(2.6)  max f = c1x1 + c2x2 + … + cnxn
       subject to
       a11x1 + a12x2 + … + a1nxn ≤ b1
       a21x1 + a22x2 + … + a2nxn ≤ b2
       ...
       am1x1 + am2x2 + … + amnxn ≤ bm
       x1, x2, …, xn ≥ 0

We call the form (2.6) the canonical form of a linear programming problem.
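As a numerical aside (a sketch, not part of the text): for a canonical-form problem with two variables, the optimum can be found by enumerating the intersections of constraint pairs, since an optimal solution always lies at a vertex of the feasible region. Applied to Ex 2.1:

```python
from itertools import combinations

# Ex 2.1 written as a1*x1 + a2*x2 <= b; nonnegativity in the same form.
constraints = [
    (1.0, 1.5, 90.0),    # material 1
    (2.5, 2.5, 120.0),   # material 2
    (1.5, 2.5, 100.0),   # material 3
    (-1.0, 0.0, 0.0),    # -x1 <= 0
    (0.0, -1.0, 0.0),    # -x2 <= 0
]

def profit(x1, x2):
    return 45.0 * x1 + 50.0 * x2

def feasible(x1, x2, eps=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + eps for a1, a2, b in constraints)

def solve():
    best = None
    for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:          # parallel lines: no vertex
            continue
        x1 = (b * c2 - a2 * d) / det  # Cramer's rule
        x2 = (a1 * d - b * c1) / det
        if feasible(x1, x2) and (best is None or profit(x1, x2) > profit(*best)):
            best = (x1, x2)
    return best

x1, x2 = solve()
print(x1, x2, profit(x1, x2))
```

The returned vertex is x1 = 20, x2 = 28 with profit 2300, agreeing with the solution stated in Ex 2.1.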

The LP problem (2.6) in canonical form can be formulated in matrix form:

(2.7)  max f = c^T x
       subject to Ax ≤ b
                  x ≥ 0

where the vectors x, b and c, and the matrix A, are defined as

x = (x1 … xn)^T,  b = (b1 … bm)^T,  c = (c1 … cn)^T,  A = (aij), i = 1, …, m, j = 1, …, n

2.3 Graphical solution to a simple LP problem

Consider this simple LP problem:

max f = 2x1 + 3x2
subject to x1 + x2 ≤ 5
           x1 + 3x2 ≤ 9
           x1, x2 ≥ 0

The problem can easily be solved graphically, see Figure 2.1.

[Figure 2.1. Graphical solution of an LP problem: the feasible region and the objective function line in the (x1, x2) plane.]

The dotted line represents the objective function f and is drawn for the case when it has the value of 6. When maximizing f, we move the dotted line in the direction of its gradient, so that the value of f grows. We see that the optimal point is the

intersection between the two lines x1 + 3x2 = 9 and x1 + x2 = 5. The solution to the problem is therefore given by

x1 = 3,  x2 = 2,  fmax = 12

Some observations can be made from this example: The solution is never inside the feasible region but always at the boundary. We can either have no solution, one solution, or an infinite number of solutions.
- No solution. This is the case when the feasible set is empty.
- One solution. The solution is then in a corner, i.e. in an intersection between two or more constraints.
- Infinite number of solutions. This can occur when the objective function is parallel to a constraint.

2.4 Standard form of an LP problem

For algorithms solving LP problems, an LP problem is usually formulated in the standard form:

max f = c^T x
subject to Ax = b
           x ≥ 0   (b ≥ 0)

Notice the two differences from (2.7): the equality constraints, and the demand that all coefficients in the b-vector are nonnegative. Any LP problem can be represented in the standard form using the following transformations:

T1) A "≤ - inequality" is transformed into an equality by adding a nonnegative slack variable to the left hand side of the inequality. For example, x1 + x2 + 2x3 ≤ 4 → x1 + x2 + 2x3 + x4 = 4, x4 ≥ 0, where x4 is a slack variable.

T2) A "≥ - inequality" is transformed into an equality by subtracting a nonnegative surplus variable from the left hand side of the inequality. For example, x1 + x2 + 2x3 ≥ 4 → x1 + x2 + 2x3 − x4 = 4, x4 ≥ 0, where x4 is a surplus variable.

T3) Any non-positive variable xj, xj ≤ 0, is transformed into a nonnegative variable by the substitution x'j = −xj, x'j ≥ 0.

T4) A non-restricted variable xj, −∞ ≤ xj ≤ ∞, can be replaced by two nonnegative variables x'j and x''j as follows: xj = x'j − x''j, where x'j, x''j ≥ 0.

T5) Any minimization problem can be transformed into a maximization problem and vice-versa, by changing the signs of the coefficients in the objective function: min f(x) = −max[−f(x)]. For example, minimizing f1 = 5x1 + 7x2 is mathematically equivalent to maximizing f2 = −f1 = −5x1 − 7x2 in that the optimal values of x1 and x2 will be the same. The only difference is that the objective functions f1 and f2 will have opposite signs: min f1 = −max f2 = −max[−f1].

Ex 2.2) The problem

max f = 50x1 + 40x2
subject to 2x1 + 3x2 ≤ 9
           … ≥ 12
           x1 ≥ 0, x2 ≤ 0

formulated in standard form is

max f = 50x1 − 40x'2
subject to 2x1 − 3x'2 + x3 = 9
           … − x4 = 12
           x1, x'2, x3, x4 ≥ 0

where x'2 = −x2, x3 is a slack variable and x4 is a surplus variable.

2.5 Basic feasible solution

Consider an LP problem in standard form:

max f = c^T x
subject to Ax = b
           x ≥ 0   (b ≥ 0)

A is an m × n matrix, i.e. there are m constraints and n variables. In real problems we have more variables than constraints, so m < n. To satisfy the m constraints in the equation system Ax = b we only need m variables. The remaining n − m variables can be set equal to zero. In fact, the following holds: to obtain a solution we can select m of the n variables, set the remaining n − m variables equal to zero, and solve the m equations for the selected variables. This corresponds to the formation of an m × m submatrix B of A. Provided that B is non-singular, the values of the remaining variables can be found, as there are now m equations with m unknown variables. Such a solution is called a basic solution and the m variables are called basic variables. A basic solution which is feasible (i.e. all variables are nonnegative) is called a basic feasible solution.

Let aj (j = 1, …, n) be a column vector corresponding to the constraint coefficients of xj. Then the constraint set Ax = b can be written

a1x1 + a2x2 + … + anxn = b

The vectors a1, a2, …, an are all m-vectors and they belong to a vector space of m dimensions. Any set of m vectors which are linearly independent and span the space forms a basis. Any other of the n − m remaining vectors in the vector space can be expressed as a linear combination of these basic vectors. The submatrix B which corresponds to the basic vectors is called the basis matrix. We introduce the following notation:

x_B : column vector of the m basic variables
x_N : column vector of the n − m nonbasic variables, equal to zero
B : m × m submatrix of A, called the basis matrix
N : m × (n − m) submatrix of A of the remaining columns
c_B : column vector (size m) of the coefficients of the basic variables x_B in the objective function
c_N : column vector (size n − m) of the coefficients of the nonbasic variables x_N in the objective function

If we know which variables are zero, we can easily solve for x_B and f! This can be seen for x_B with the following computation:

Ax = b  ⇔  Bx_B + Nx_N = b  ⇔  x_B = B⁻¹(b − Nx_N) = B⁻¹b   (since x_N = 0)

The objective function f can be written

f = c^T x = c_B^T x_B + c_N^T x_N = c_B^T B⁻¹b   (since x_N = 0)

To summarize, we have

x_B = B⁻¹b   and   f = c_B^T B⁻¹b

The problem is thus to determine which variables should be basic (i.e. ≥ 0) and which should be nonbasic (i.e. = 0).
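To make basic solutions concrete, here is a small sketch (the system below is the standard form of the chapter 2.3 example; the code itself is not from the text) that enumerates every choice of m basic variables, solves for them, and counts the basic feasible solutions:

```python
from itertools import combinations

# Standard-form system with m = 2 constraints and n = 4 variables:
#   x1 +  x2 + x3      = 5
#   x1 + 3x2      + x4 = 9
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 3.0, 0.0, 1.0]]
b = [5.0, 9.0]

def basic_solutions():
    n = len(A[0])
    out = []
    for i, j in combinations(range(n), 2):
        # 2x2 basis matrix B from columns i and j
        b11, b12 = A[0][i], A[0][j]
        b21, b22 = A[1][i], A[1][j]
        det = b11 * b22 - b12 * b21
        if abs(det) < 1e-12:
            continue                           # singular basis: skip
        xi = (b[0] * b22 - b12 * b[1]) / det   # Cramer's rule
        xj = (b11 * b[1] - b[0] * b21) / det
        x = [0.0] * n
        x[i], x[j] = xi, xj
        out.append(x)
    return out

sols = basic_solutions()
feas = [x for x in sols if min(x) >= -1e-9]
print(len(sols), "basic solutions,", len(feas), "of them feasible")
```

Of the C(4, 2) = 6 basic solutions, four are basic feasible solutions; these correspond to the vertices of the feasible region of the chapter 2.3 example.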

3 The simplex method

3.1 The idea of the simplex method

In chapter 2 it was found that if a linear programming problem has a finite optimal solution, then it has at least one optimal solution at an extreme point (a vertex) of the feasible region. The general idea of the simplex method is to move from one vertex of the feasible region to an adjacent vertex which has a better value of the objective function, until no further improvement in the objective can be obtained. At this point, the optimal solution has been found.

Ex 3.1) The simplex method applied to the example problem given in chapter 2.3 will follow this path: from the starting solution with f = 0, via the vertex reached in iteration 1, to the optimal point with f = 12 reached in iteration 2.

A family of LP algorithms known as interior point methods has been developed starting in the 1980's, and can be faster for many (but so far not all) problems. These methods are characterized by constructing a sequence of trial solutions that go through the interior of the solution space, in contrast to the simplex method, which stays on the boundary and examines only the corners (vertices).

The vertices of the feasible region correspond to basic feasible solutions of the constraint equations, associated with particular bases for the column space of the constraints. Mathematically, the simplex method moves from one vertex of the feasible region to an adjacent vertex by moving from one basis of the column space to an 'adjacent' basis. An 'adjacent' basis has one element changed: the new basis has one of the old basic vectors removed and replaced by another vector which was not part of the original basis. The vector to enter the basis is selected so that the value of the objective function will be improved at the new basic feasible solution. The vector to leave the basis is chosen to ensure that the new solution will be feasible.

3.2 Setting up a starting solution

Consider the linear program

(3.1)  max f = c^T x
       subject to Ax ≤ b
                  x ≥ 0

Observe the condition that b ≥ 0! By adding nonnegative slack variables s1, s2, …, sm, one to each constraint, we get the problem in standard form and the constraints become

Ax + Im s = b

where Im is the m × m identity matrix and s is a column vector of the slack variables. The vectors corresponding to the m slack variables are linearly independent, and they span the space. Thus s forms a basis for the space. Expressing b in terms of this basis, the basic solution is:

s = b

Since b ≥ 0, this solution is also feasible, and hence it is a basic feasible solution. The original decision variables x all have the value zero at this solution, since the corresponding column vectors a1, a2, …, an are not in the basis. Referring to the notation introduced in chapter 2.5, the matrix B, the basis matrix, is in this case Im. The basic vector x_B is s, and the nonbasic vector x_N is x. We can reformulate the original problem (3.1) to standard form using slack variables as follows:

(3.2)  max f = c̄^T x̄
       subject to Āx̄ = b
                  x̄ ≥ 0   (b ≥ 0)

where Ā is an appended matrix, Ā = (A Im), and x̄ is an appended vector, x̄ = (x s)^T, with c̄ = (c 0)^T correspondingly.

Then a basic solution of (3.2) is x_N = x = 0 and x_B = s = b. This basic solution represents the starting solution of the simplex method. We notice that a basic solution contains at most m positive (non-zero) values, and that the remaining values are equal to zero. We also notice that in the starting solution f = 0, since the slack variables have coefficients of zero in the objective function. If the LP problem satisfies the following conditions:

1. it is a maximization problem,
2. it is in standard form (with slack variables added) and has an initial basic feasible solution x_B,
3. the slack variables have coefficients of zero in the objective function,

then we can apply the simplex method.

3.3 The simplex method in tabular form

To keep track of the coefficients and to organize the computations, the calculations can be performed in the form of a simplex tableau.

Basic                     Coefficient of                       Right
variable    f    x1   x2  ...  xn    s1   s2  ...  sm          side
f           1   -c1  -c2  ... -cn     0    0  ...   0           0
s1          0   a11  a12  ... a1n     1    0  ...   0           b1
s2          0   a21  a22  ... a2n     0    1  ...   0           b2
...
sm          0   am1  am2  ... amn     0    0  ...   1           bm

Table 3.1. The starting solution in the simplex tableau

The tableau (see Table 3.1) has one column for each of the problem variables (decision and slack): x1, x2, …, xn, s1, s2, …, sm, and one column for the current basic feasible solution (initially equal to the right-hand side vector of the constraints). For completeness we also have a column for the objective function f, since we can consider f to be a permanent basic variable. The top row is the objective row with the objective coefficients with reversed sign, and the current value of f on the right-hand side. We will call this row the "f-row". The other rows are just the constraint coefficients. The simplex method is iterative. It starts with the starting solution seen in Table 3.1. This tableau is called the initial tableau.
During the iterations we will create new tableaux until we reach the final tableau, which gives us the optimal solution to our problem. Basically the method works as follows:

Step 0: Formulate the problem in standard form with nonnegative right-hand sides. This gives us the starting basic feasible solution.
Step 1: Select an entering variable from the current nonbasic variables in such a way that when the entering variable increases, the objective function f increases.
Step 2: Select the leaving variable from the current basic variables so that the new basic solution will be feasible.
Step 3: Determine the values of the (new) basic variables.
Step 4: Check for optimality. If the solution is not optimal, go to step 1.

The steps 1-4 are done according to this:

Step 1: Entry criterion. Select the nonbasic variable xj with the largest negative coefficient in the current "f-row" to enter the basis. The objective function f will then, when xj increases, increase at the fastest rate. The column corresponding to xj is marked in the simplex tableau.

Step 2: Exit criterion. Select the basic variable xbi that reaches zero first when the entering variable is increased. Let

θj = min_i { b̄i / āij : āij > 0 }

where āij is the current coefficient of the entering variable xj in constraint i, and b̄i is the current right-hand side in constraint i. The i which gives us the minimum ratio determines which basic variable is to leave the basis. This row is also marked in the simplex tableau.

Step 3: Pivoting. This is an adjustment process by which the column vector corresponding to xj in the tableau becomes an identity column, with a coefficient of 1 in row i, zero elsewhere. This is achieved with row operations. Row i is the pivot row and āij is the pivot element. The process is the same as the one used in solving linear equations, i.e. Gaussian elimination.

Step 4: Optimality test. If all the coefficients of the nonbasic variables in the "f-row" are nonnegative then the solution is optimal. If not, go to step 1.
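The steps above can be sketched as a small program (a sketch, not code from the text; the helper name simplex and its tie-breaking rules are our own choices):

```python
def simplex(c, A, b):
    """Tabular simplex for max c^T x s.t. Ax <= b, x >= 0, with b >= 0,
    so the slack variables give the starting basis (steps 0-4 above)."""
    m, n = len(A), len(c)
    # f-row: objective coefficients with reversed sign; RHS tracks f.
    frow = [-cj for cj in c] + [0.0] * m + [0.0]
    rows = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
            for i in range(m)]
    basis = [n + i for i in range(m)]       # slacks are basic initially
    while True:
        # Step 1 (entry): most negative coefficient in the f-row.
        j = min(range(n + m), key=lambda k: frow[k])
        if frow[j] > -1e-9:
            break                           # Step 4: optimal
        # Step 2 (exit): minimum ratio test over positive column entries.
        ratios = [(rows[i][-1] / rows[i][j], i)
                  for i in range(m) if rows[i][j] > 1e-9]
        if not ratios:
            raise ValueError("unbounded problem")
        _, i = min(ratios)
        # Step 3 (pivot): make column j an identity column.
        piv = rows[i][j]
        rows[i] = [v / piv for v in rows[i]]
        for r in range(m):
            if r != i and rows[r][j]:
                fac = rows[r][j]
                rows[r] = [v - fac * w for v, w in zip(rows[r], rows[i])]
        fac = frow[j]
        frow = [v - fac * w for v, w in zip(frow, rows[i])]
        basis[i] = j
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = rows[i][-1]
    return frow[-1], x

fmax, x = simplex([2.0, 3.0], [[1.0, 1.0], [1.0, 3.0]], [5.0, 9.0])
print(fmax, x)
```

On the chapter 2.3 example this reproduces fmax = 12 at x = (3, 2), the same vertex found graphically.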

The best way to illustrate the simplex method is by an example:

Ex 3.2) The example problem given in chapter 2.3

max f = 2x1 + 3x2
subject to x1 + x2 ≤ 5
           x1 + 3x2 ≤ 9
           x1, x2 ≥ 0

is in standard form

max f = 2x1 + 3x2
subject to x1 + x2 + x3 = 5
           x1 + 3x2 + x4 = 9
           x1, x2, x3, x4 ≥ 0

where x3, x4 are slack variables. The simplex method will yield these iterations:

Iteration   Basic       Coefficient of                   Right-hand
            variable    f    x1    x2    x3    x4        side
0           f           1   -2    -3     0     0           0
            x3          0    1     1     1     0           5
            x4          0    1     3     0     1           9
1           f           1   -1     0     0     1           9
            x3          0   2/3    0     1   -1/3          2
            x2          0   1/3    1     0    1/3          3
2           f           1    0     0    3/2   1/2         12
            x1          0    1     0    3/2  -1/2          3
            x2          0    0     1   -1/2   1/2          2

In the initial tableau, iteration 0, we see that the starting solution is

x3 = 5, x4 = 9, x1 = x2 = 0, f = 0

This solution is feasible but not optimal. In iteration 1 we have that

x3 = 2, x2 = 3, x1 = x4 = 0, f = 9

In the tableau we see that we still have negative coefficients of the nonbasic variables in the "f-row" (the coefficient of x1 is −1). The solution in iteration 1 is therefore not optimal either. In iteration 2 we have the solution

x1 = 3, x2 = 2, x3 = x4 = 0, f = 12

Now there are no negative coefficients of the nonbasic variables in the "f-row", and the solution is optimal. Since all variables are nonnegative, the solution is also feasible. In iteration 2 we thus have the final tableau.

Note 1: If we rewrite the "f-row" for iteration 1 we get

f = 9 + x1 − x4

If we increase x1 with 1 unit, f will increase with 1, since the current coefficient of x1 is 1. The coefficients of the nonbasic variables in the "f-row" are usually called reduced costs. The current coefficients of the basic variables are always zero.

Note 2: If we rewrite the "f-row" for the final tableau we get

f = 12 − (3/2)x3 − (1/2)x4

It is obvious that the maximum value for f is 12, attained for x3 = x4 = 0.

Note 3: Compare the solutions in iterations 0, 1 and 2 with the path given in example 3.1.

3.4 Other model forms

Thus far we have described how to set up the starting solution, i.e. the initial tableau, when we have all constraints in ≤ form and all bi ≥ 0. When we have constraints in ≥ form with the corresponding bi ≥ 0, or when some constraints are in = form, it is not straightforward to find an initial basic feasible solution. The standard approach is to use artificial variables for the cases just mentioned. An artificial variable is a dummy variable introduced in each constraint that

needs one, only for the purpose of being the initial basic variable for these constraints. As for all variables, a nonnegativity constraint is placed on artificial variables. Since the value of an artificial variable must be zero in the real problem's solution, a penalty is introduced in the objective function, forcing the artificial variable to zero. The iterations of the simplex method then automatically make the artificial variables disappear, i.e. become zero, one at a time. Then the real problem is solved.

The penalty in the objective function is introduced using the Big M method. We introduce a huge, unspecified positive number M as the (negated) coefficient of the artificial variables in the objective function. For example, if the objective function is f = 2x1 + 3x2 and x̄ is an artificial variable, then the objective function is modified to

f = 2x1 + 3x2 − Mx̄

Because of this penalty we thus introduce artificial variables in the objective function, which makes the columns corresponding to artificial variables not be identity columns. Therefore we can't use the artificial variables directly as starting basic variables. But as we will see in example 3.3, we can solve this with some pre-processing before we put the problem into the initial simplex tableau.

≥ inequality constraints

If constraint i is in the form ≥ and bi ≥ 0, then we will subtract a surplus variable to get an equality constraint. For example, the constraint x1 + 3x2 ≥ 8 will be transformed to x1 + 3x2 − x3 = 8, x3 ≥ 0, where x3 is a surplus variable. But since a surplus variable comes with a negative sign, it can't be used as an initial basic variable, since it would not be feasible (x3 = −8 in this case). Therefore we must add an artificial variable to the constraint. In this case the constraint will be x1 + 3x2 − x3 + x̄4 = 8, x3, x̄4 ≥ 0, where x̄4 is an artificial variable. To ensure that x̄4 is forced to zero, we must also modify the objective function.
Equality constraints

If constraint i is in the form = and bi ≥ 0, then we already have that constraint in standard form. But in general it is not easy to find an initial basic variable for that constraint, and therefore we add an artificial variable to the constraint. Of course we must also modify the objective function to force the artificial variable to zero.

Ex 3.3) Consider the problem (P) and (P) transformed to standard form:

(P)
max f = 2x1 + x2
subject to x1 + x2 ≥ 6
           … ≤ 8
           x1, x2 ≥ 0

(P) in standard form
max f = 2x1 + x2 − Mx̄4
subject to x1 + x2 − x3 + x̄4 = 6
           … + x5 = 8
           x1, x2, x3, x̄4, x5 ≥ 0

where x3 is a surplus variable, x̄4 is an artificial variable and x5 is a slack variable. If we put the problem in a simplex tableau we see that we need to do one row operation to make x̄4 a basic variable before we can use the simplex method: subtracting M times the first constraint row from the "f-row" removes the M-coefficient from the x̄4 column and gives the "f-row" coefficients (−2 − M, −1 − M, M, 0, 0) with right-hand side −6M.

Now we can set up the initial tableau and apply the simplex method.

The solution to the problem is, according to the final tableau, fmax = 16 for x1 = 8, x2 = 0. The surplus variable is x3 = 2 and the slack variable is x5 = 0.

Note: When an artificial variable has left the basis it will never enter the basis again, so it will remain zero. Therefore it is not necessary to calculate a column corresponding to an artificial variable after it has left the basis.

3.5 Multiple optima, degeneracy and cycling

Multiple optima. If a coefficient for a nonbasic variable in the "f-row" is zero in an optimal simplex tableau, the variable could be increased and brought into the basis, with no effect on the objective value. A pivot operation would yield a new tableau with a different basis, but the same value of the objective function f: the indifference lines of the objective lie parallel to an edge of the feasible region, and there are multiple optimal solutions, at the vertices at either end of the edge. The pivot moves the solution from one such vertex to the other. Because the move is along the indifference line, f does not change. In general, if x* and x̂* are both optimal solutions, then so is any linear combination of the form λx* + (1 − λ)x̂* with 0 ≤ λ ≤ 1. However, only the vertex points are basic solutions.

Degeneracy and cycling. If one or more variables in the basis x_B are zero, the solution is degenerate. This can happen when it is ambiguous which of two (or more) basic variables is to be the leaving variable, i.e. the minimum ratio is the same for both variables. Then, in the next iteration, the variable not chosen to be the leaving variable will be zero, since the two variables reach zero at the same time. This situation is common in practical problems. The effect of degeneracy is that the objective function will not increase in the next iteration, since b̄i = 0 and therefore θj = min_i { b̄i / āij : āij > 0 } = 0. If degeneracy does not occur, the objective function will increase in each iteration by the quantity −c̄j θj, where c̄j is the current "f-row" coefficient of the entering variable and θj is the minimum ratio for the leaving variable. Since c̄j < 0 in a non-optimal solution and θj > 0 when there is no degeneracy, we will always have −c̄j θj > 0.
Degeneracy can imply that the simplex procedure would repeat the same sequence of iterations without improving the objective function. Therefore the method would not converge to an optimal solution. This phenomenon is called cycling, and is mainly of theoretical interest. It is very seldom that cycling occurs in practice.
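The multiple-optima case can be illustrated with a small sketch (the LP below is a hypothetical example, not from the text): the objective max f = 2x1 + 2x2 is parallel to the constraint x1 + x2 ≤ 5, so the whole edge between the optimal vertices (0, 5) and (4, 1) is optimal with f = 10.

```python
def f(x1, x2):
    return 2.0 * x1 + 2.0 * x2

def feasible(x1, x2, eps=1e-9):
    # Constraints of the hypothetical LP: x1 + x2 <= 5, x1 <= 4, x1, x2 >= 0.
    return (x1 + x2 <= 5 + eps and x1 <= 4 + eps
            and x1 >= -eps and x2 >= -eps)

x_star, x_hat = (0.0, 5.0), (4.0, 1.0)   # two optimal vertices

# Every convex combination lam*x_star + (1 - lam)*x_hat is feasible and
# has the same objective value f = 10, as stated above.
for k in range(11):
    lam = k / 10.0
    x1 = lam * x_star[0] + (1 - lam) * x_hat[0]
    x2 = lam * x_star[1] + (1 - lam) * x_hat[1]
    assert feasible(x1, x2) and abs(f(x1, x2) - 10.0) < 1e-9
print("all convex combinations are feasible and give f = 10")
```

Only the two endpoints of the edge are basic solutions; the interior points of the edge are optimal but not basic.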

4 Sensitivity analysis and duality

4.1 The simplex tableau in matrix form

An LP problem in canonical form, i.e.

max f = c^T x
subject to Ax ≤ b
           x ≥ 0   (b ≥ 0)

can be transformed into the standard form

(4.1)  max f = c^T x
       subject to Ax + Im x_s = b
                  x, x_s ≥ 0   (b ≥ 0)

where Im is the m × m identity matrix and x_s is a column vector of the slack variables. This is basically the same standard form as presented in (3.2). We can put (4.1) directly into an initial simplex tableau in matrix form, since we can use the slack variables as basic variables:

Basic              Coefficient of              Right-hand
variables    f     x        x_s                side
f            1    -c^T      0                  0
x_s          0     A        Im                 b

Table 4.1. The initial simplex tableau in matrix form.

In chapter 2.5 we derived that

x_B = B⁻¹b   and   f = c_B^T x_B = c_B^T B⁻¹b

If we know which variables are basic in any iteration, we can get B and c_B^T from the standard formulation of the problem. From these we can thus calculate the value of f and x_B. The simplex tableau at any iteration therefore is

Basic              Coefficient of                            Right-hand
variables    f     x                   x_s                   side
f            1     c_B^T B⁻¹A − c^T    c_B^T B⁻¹             c_B^T B⁻¹b
x_B          0     B⁻¹A                B⁻¹                   B⁻¹b

Table 4.2. The simplex tableau in matrix form at any iteration.
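As a numerical illustration of Table 4.2 (a sketch, not code from the text), the final tableau of the chapter 2.3 example can be computed directly from its optimal basis {x1, x2}:

```python
# Standard form of the chapter 2.3 example:
#   x1 +  x2 + x3      = 5
#   x1 + 3x2      + x4 = 9,  objective coefficients c = (2, 3, 0, 0).
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 3.0, 0.0, 1.0]]
b = [5.0, 9.0]
c = [2.0, 3.0, 0.0, 0.0]
basis = [0, 1]                        # columns of x1 and x2

def inv2(M):
    (m11, m12), (m21, m22) = M
    det = m11 * m22 - m12 * m21
    return [[m22 / det, -m12 / det], [-m21 / det, m11 / det]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

B = [[A[i][j] for j in basis] for i in range(2)]
Binv = inv2(B)
xB = matvec(Binv, b)                              # B^-1 b
cB = [c[j] for j in basis]
f = sum(cB[i] * xB[i] for i in range(2))          # c_B^T B^-1 b
# f-row coefficients: c_B^T B^-1 A - c^T, column by column.
BinvA = [matvec(Binv, [A[0][j], A[1][j]]) for j in range(4)]
frow = [sum(cB[i] * BinvA[j][i] for i in range(2)) - c[j] for j in range(4)]
print(xB, f, frow)
```

The output reproduces the final tableau of Ex 3.2: x_B = (3, 2), f = 12, and f-row coefficients (0, 0, 3/2, 1/2).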

We get this tableau by using the initial tableau and adding the second row, multiplied from the left by c_B^T B⁻¹, to the first row, and then multiplying the second row from the left by B⁻¹. The same reasoning can be applied to problems with surplus and artificial variables.

Ex 4.1) Consider the problem

max f = 2x1 + x2
subject to … ≤ …
           x1 + 2x2 ≤ 8
           2x1 − x2 ≤ 4
           x1, x2 ≥ 0

Let's assume that we know that the basic variables in the optimal solution are x1, x2 and x4. We can now determine the solution to the problem (or the final tableau if we would like). The problem formulated in standard form is

max f = 2x1 + x2
subject to … + x3 = …
           x1 + 2x2 + x4 = 8
           2x1 − x2 + x5 = 4
           x1, x2, x3, x4, x5 ≥ 0

where x3, x4, x5 are slack variables. We identify B and c_B^T from the columns of A corresponding to the basic variables. Now it is easy to calculate x_B and f:

Hence the solution to the problem is fmax = 8 for x1 = 3, x2 = 2. The slack variables are x3 = 0, x4 = 1, x5 = 0.

4.2 Sensitivity analysis

Often it is of interest to see how a change in one or more parameters in an LP problem influences the optimal solution. It can for example be interesting to examine how a change in a resource bi affects the solution. Instead of applying the simplex method all over again to the revised problem, it is possible to use the solution (final tableau) of the original problem. This is done using the matrix form of the final simplex tableau as described in chapter 4.1.

Ex 4.2) The problem

max f = 10x1 + 4x2
subject to 3x1 + x2 ≤ 30
           2x1 + x2 ≤ 25
           x1, x2 ≥ 0

has the final tableau

Basic              Coefficient of               Right-hand
variables    f    x1    x2    x3    x4          side
f            1     0     0     2     2          110
x2           0     0     1    -2     3           15
x1           0     1     0     1    -1            5

where x3, x4 are slack variables of the two constraints, respectively. Assume that we want to change the problem to

max f = 11x1 + 4x2
subject to 3x1 + x2 ≤ 35
           2x1 + x2 ≤ 25
           x1, x2 ≥ 0

What is the new final tableau? We know that the simplex tableau for the problem

max f = c^T x                        max f = c^T x
subject to Ax ≤ b          ⇔         subject to Ax + Im x_s = b
           x ≥ 0                                x, x_s ≥ 0

at any iteration is (Table 4.2)

Basic              Coefficient of                            Right-hand
variables    f     x                   x_s                   side
f            1     c_B^T B⁻¹A − c^T    c_B^T B⁻¹             c_B^T B⁻¹b
x_B          0     B⁻¹A                B⁻¹                   B⁻¹b

After the changes we have that c_B^T = (4  11) and b = (35  25)^T. To carry out the calculations we also need B⁻¹. We can either set up the standard form of the problem and get B from there, or we can identify B⁻¹ from the final tableau, since B has not changed. We choose to identify B⁻¹ from the final tableau (the x3, x4 columns):

B⁻¹ = ( -2   3 )
      (  1  -1 )

This gives us that

B⁻¹b = (5  10)^T,   c_B^T B⁻¹b = 130   and   c_B^T B⁻¹ = (3  1)

The rest of the tableau does not change. x1 and x2 are basic variables, so their entries in c_B^T B⁻¹A − c^T are (0  0) as before. (Check that!) The new final tableau is

Basic              Coefficient of               Right-hand
variable     f    x1    x2    x3    x4          side
f            1     0     0     3     1          130
x2           0     0     1    -2     3            5
x1           0     1     0     1    -1           10

Since we have only nonnegative coefficients in the "f-row" and the basic variables are nonnegative, the solution is optimal and feasible. We can thus be sure that this really is the final tableau.

Note 1: If the revised final tableau becomes nonoptimal, i.e. it does not pass the optimality test because there are negative coefficients in the "f-row", apply the simplex method to the revised tableau until optimality is reached.

Note 2: If the revised final tableau becomes infeasible, i.e. there are negative basic variables, apply the dual simplex method to the tableau until feasibility is reached. The dual simplex method is described below.
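The recomputation in Ex 4.2 can be sketched as follows (our own illustration; B⁻¹ is read off the slack columns of the old final tableau):

```python
Binv = [[-2.0, 3.0],
        [1.0, -1.0]]           # x3, x4 columns of the old final tableau
b_new = [35.0, 25.0]           # revised right-hand sides
cB_new = [4.0, 11.0]           # revised objective coefficients of (x2, x1)

# x_B = B^-1 b  and  f = c_B^T B^-1 b  (Table 4.2)
xB = [sum(Binv[i][k] * b_new[k] for k in range(2)) for i in range(2)]
f = sum(cB_new[i] * xB[i] for i in range(2))

# f-row coefficients of the slack columns: c_B^T B^-1
frow_slack = [sum(cB_new[i] * Binv[i][j] for i in range(2)) for j in range(2)]
print(xB, f, frow_slack)
```

The result is x_B = (5, 10), f = 130 and slack-column coefficients (3, 1); all nonnegative, so the revised tableau is still optimal and feasible.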

Sensitivity analysis can also be done using parametric programming, in which we introduce some parameters θ1, θ2, ... into the problem. It can then be of interest to know for which values of these parameters the former basic solution still is optimal (i.e. the new optimal solution has the same basic variables as the former), and what the new solution is.

Ex 4.3) The linear programming problem (P)

    max f = -5x1 + 5x2 + 13x3
    subject to  -x1 +  x2 +  3x3 ≤ 20
               12x1 + 4x2 + 10x3 ≤ 90
               x1, x2, x3 ≥ 0

has the optimal final tableau

    Basic     |   | Coefficient of                 | Right-
    variables | f | x1   x2   x3   x4   x5        | hand side
    f         | 1 |  0    0    2    5    0        | 100
    x2        | 0 | -1    1    3    1    0        | 20
    x5        | 0 | 16    0   -2   -4    1        | 10

Assume that the right-hand side of the constraints to (P) is changed to

    b(θ) = ( 20 + 2θ )
           ( 90 -  θ )

For which θ is the basic solution x_B = (x2  x5)^T still feasible? We have a feasible solution if x_B = B^-1 b(θ) ≥ 0. In this case we have that

    B = ( 1  0 ),   B^-1 = (  1  0 ),
        ( 4  1 )           ( -4  1 )

so that

    B^-1 b(θ) = ( 20 + 2θ )
                ( 10 - 9θ )

Feasible solution if -10 ≤ θ ≤ 10/9.

4.3 The dual simplex method

The simplex method starts with a feasible but not optimal solution, a so-called suboptimal solution, and moves towards an optimal solution by striving to satisfy the optimality test. The dual simplex method, on the other hand, deals with an optimal but infeasible solution, a so-called superoptimal solution, and strives to achieve feasibility. We will use this property in sensitivity analysis. Suppose that an optimal solution has been obtained by the simplex method, and it becomes of interest to make some minor changes in the problem. We then calculate the solution to the revised problem using the method described in chapter 4.2. If the former basic solution no longer is feasible, but the optimality

test still is satisfied, we can immediately apply the dual simplex method to this superoptimal solution.

The dual simplex method is described in these five steps:

Step 0: Start with a superoptimal solution.

Step 1: Exit criterion. Select the basic variable x_Bi with the most negative value to leave the basis. The row corresponding to x_Bi is marked in the simplex tableau.

Step 2: Entry criterion. Select the nonbasic variable whose current coefficient in the "f-row" will reach zero first when the leaving variable is increased towards zero. Let

    σ_j = c̄_j / |ā_ij|   for the columns j with ā_ij < 0,

where ā_ij is the current coefficient in row i (the row of the leaving variable x_Bi) and column j, and c̄_j is the current "f-row" coefficient in column j. The j which gives the minimum ratio σ_j determines which nonbasic variable enters the basis. This column is marked in the simplex tableau.

Step 3: Pivoting. This is the same adjustment process as in the original simplex method, whereby the column corresponding to the entering variable becomes an identity column, with a coefficient of 1 in row i and zeros elsewhere. This is achieved with row operations.

Step 4: Feasibility test. If all basic variables are nonnegative the solution is feasible (and still optimal); otherwise return to step 1.

Ex 4.4) Consider the LP problem given in example 4.3. Determine the solution to the problem when θ = 2! We have that

    B^-1 b = ( 24 )   and   c_B^T B^-1 b = 120.
             ( -8 )

We notice that x5 = -8 < 0, so the solution is infeasible. The change in b does not affect any of the coefficients in the "f-row" (see Table 4.2). Thus the solution still passes the optimality test, and we have a superoptimal solution! Use the dual simplex method.
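The feasibility range for θ can also be checked numerically. The sketch below is an added illustration; it takes the perturbed right-hand side of Ex 4.3 as b(θ) = (20 + 2θ, 90 - θ)^T and B^-1 as reconstructed there, both of which are assumptions read from that example.

```python
from fractions import Fraction as F

# B^-1 for the basis (x2, x5) of Ex 4.3; exact rational arithmetic avoids
# any rounding in the feasibility test.
B_inv = [[F(1), F(0)], [F(-4), F(1)]]

def x_B(theta):
    # basic solution B^-1 b(theta) for b(theta) = (20 + 2*theta, 90 - theta)
    b = [20 + 2 * theta, 90 - theta]
    return [sum(B_inv[i][j] * b[j] for j in range(2)) for i in range(2)]

def feasible(theta):
    return all(x >= 0 for x in x_B(theta))

print(x_B(F(10, 9)))       # at the boundary theta = 10/9 the second component is 0
print(feasible(F(1)))      # True  (1 <= 10/9)
print(feasible(F(2)))      # False (x5 = -8 < 0, as used in Ex 4.4)
```

Evaluating x_B(θ) gives (20 + 2θ, 10 - 9θ), so the basis stays feasible exactly for -10 ≤ θ ≤ 10/9.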

    Iter.    Basic     |   | Coefficient of                  | Right-
             variables | f | x1   x2   x3   x4    x5        | hand side
    0        f         | 1 |  0    0    2    5     0        | 120
    initial  x2        | 0 | -1    1    3    1     0        | 24
             x5        | 0 | 16    0   -2   -4     1        | -8
    1        f         | 1 | 16    0    0    1     1        | 112
    final    x2        | 0 | 23    1    0   -5    3/2       | 12
             x3        | 0 | -8    0    1    2   -1/2       | 4

The solution to the problem when θ = 2 is thus f_max = 112 for x1 = 0, x2 = 12, x3 = 4. The slack variables are x4 = x5 = 0.

4.4 Duality

Each LP problem has a dual problem. The original problem is called the primal problem. A problem (P) in canonical form,

    max f = c^T x
    subject to Ax ≤ b                                  (4.2)
               x ≥ 0

has the dual problem (D)

    min g = b^T y
    subject to A^T y ≥ c                               (4.3)
               y ≥ 0

where y is a column vector with m elements. This means that a primal problem with the variables x1, x2, ..., xn is associated with a dual problem with the variables y1, y2, ..., ym, where n is the number of variables and m is the number of constraints in the primal problem. A primal problem where constraint i is an equality constraint will have the condition yi ≥ 0 deleted, i.e. yi is unconstrained.
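The dual simplex pivot of Ex 4.4 can be reproduced with exact rational arithmetic. This is an added sketch; the starting tableau numbers are those reconstructed in the example above.

```python
from fractions import Fraction as F

# Superoptimal tableau of Ex 4.4 (theta = 2); columns x1..x5, then the RHS.
names = ["x1", "x2", "x3", "x4", "x5"]
f_row = [F(0), F(0), F(2), F(5), F(0), F(120)]
rows  = {"x2": [F(-1), F(1), F(3), F(1), F(0), F(24)],
         "x5": [F(16), F(0), F(-2), F(-4), F(1), F(-8)]}

# Step 1 (exit criterion): the most negative basic variable leaves -> x5 (-8).
leave = min(rows, key=lambda k: rows[k][-1])

# Step 2 (entry criterion): minimum ratio c_j / |a_ij| over columns with a_ij < 0.
pivot_row = rows[leave]
enter = min((j for j in range(5) if pivot_row[j] < 0),
            key=lambda j: f_row[j] / -pivot_row[j])

# Step 3 (pivoting): make the entering column an identity column.
piv = pivot_row[enter]
new_pivot = [a / piv for a in pivot_row]
for k in rows:
    if k != leave:
        rows[k] = [a - rows[k][enter] * p for a, p in zip(rows[k], new_pivot)]
f_row = [a - f_row[enter] * p for a, p in zip(f_row, new_pivot)]
del rows[leave]
rows[names[enter]] = new_pivot     # x3 enters the basis

print(f_row[-1], rows["x2"][-1], rows["x3"][-1])   # 112 12 4
```

One pivot already restores feasibility, reproducing f = 112 with x2 = 12 and x3 = 4.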

Ex 4.5) The primal problem

    max f = 4x1 + x2
    subject to  x1 +  x2 ≤ 5
                x1 + 2x2 ≤ 8
               2x1 -  x2 = 4
               x1, x2 ≥ 0

has the dual problem

    min g = 5y1 + 8y2 + 4y3
    subject to y1 +  y2 + 2y3 ≥ 4
               y1 + 2y2 -  y3 ≥ 1
               y1, y2 ≥ 0, y3 unconstrained

Each constraint in the primal problem is thus associated with a variable in the dual problem. Note that if the problem (D) in (4.3) is our original problem, then its dual is the problem (P) in (4.2). There are some important properties of the primal-dual relationship which are described below.

Weak duality: If x is a feasible solution to the primal problem and y is a feasible solution to the dual problem, then

    c^T x ≤ b^T y

Proof: With the conditions given in (4.2) and (4.3) we have that

    c^T x ≤ (A^T y)^T x = y^T Ax ≤ y^T b = b^T y

Strong duality: If x* is an optimal solution to the primal problem, then there exists an optimal solution y* to the dual problem, where

    c^T x* = b^T y*

Duality can be useful in such cases when it is easier to solve the dual problem than the primal problem. For example, if there are more constraints than variables, i.e. m > n, we have to introduce fewer slack and surplus variables if we solve the dual problem, and the computing effort is less. It is also more convenient to choose the form which has constraints of ≤-type, since we then will have slack variables instead of surplus variables; surplus variables often require the use of artificial variables. Duality can also be used in sensitivity analysis, which will be described in chapter 4.4.2.

4.4.1 Duality and the simplex method

A useful property of the simplex method is that it not only gives the optimal solution to the primal problem, but also to the dual problem. The solution to the dual problem is found in the "f-row" as the coefficients of the slack and surplus variables. At each iteration the simplex method simultaneously gives a feasible solution x for the primal problem and a solution y for the dual problem, where c^T x = b^T y. If x is not optimal then y is not feasible. At the final iteration the simplex method gives an optimal solution x* to the primal problem, and an optimal solution y* to the dual problem, where c^T x* = b^T y*. The relationships between the complementary basic solutions are given in Table 4.3, and a summary of the classification of the basic solutions in Table 4.4.

    Primal basic solution       | Dual basic solution
    Suboptimal                  | Superoptimal
    Optimal                     | Optimal
    Superoptimal                | Suboptimal
    Infeasible and non-optimal  | Infeasible and non-optimal

    Table 4.3. Relationships between complementary basic solutions.

                                     Feasible?
    Satisfies the                    Yes         No
    optimality test?   Yes           Optimal     Superoptimal
                       No            Suboptimal  Infeasible and non-optimal

    Table 4.4. Classification of basic solutions.

We know that for the simplex method c^T x = b^T y at any iteration, and that we can express the objective function f as

    f = c^T x = c_B^T x_B = c_B^T B^-1 b

An identification of terms then gives us that y^T = c_B^T B^-1. Hence, if we know c_B^T and B^-1 at the primal problem's optimal solution, we can solve for the dual problem's optimal solution using this expression. If we recall Table 4.2, we see that the term c_B^T B^-1 is found in the "f-row" as the coefficients for the slack variables. Therefore, when using the simplex method it is always possible to identify the solution for the dual problem.

Ex 4.6) Solve the following problem (P)!

    min f = 10x1 + 6x2 + 8x3
    subject to  x1 +  x2 + 2x3 ≥ 2
               5x1 + 3x2 + 2x3 ≥ 1
               x1, x2, x3 ≥ 0

In this case it is easier to solve the dual problem to (P). If we solve (P) directly we have to convert min f to a maximization, and use surplus variables and therefore also artificial variables. The dual problem (D) to (P) is

    max g = 2y1 + y2
    subject to  y1 + 5y2 ≤ 10
                y1 + 3y2 ≤ 6
               2y1 + 2y2 ≤ 8
               y1, y2 ≥ 0

Applying the simplex method to (D) gives the following tableaux:

    Iter.    Basic     |   | Coefficient of                 | Right-
             variables | g | y1   y2   y3   y4    y5       | hand side
    0        g         | 1 | -2   -1    0    0     0       | 0
    initial  y3        | 0 |  1    5    1    0     0       | 10
             y4        | 0 |  1    3    0    1     0       | 6
             y5        | 0 |  2    2    0    0     1       | 8
    1        g         | 1 |  0    1    0    0     1       | 8
    final    y3        | 0 |  0    4    1    0   -1/2      | 6
             y4        | 0 |  0    2    0    1   -1/2      | 2
             y1        | 0 |  1    1    0    0    1/2      | 4

From the final tableau (in the "f-row", as the coefficients of the slack variables y3, y4, y5) we see that the solution to (P) is f_min = 8 for x1 = 0, x2 = 0, x3 = 1.

4.4.2 Duality and sensitivity analysis

Since linear programming is often used in economics, it is common to give an economic interpretation of duality. Consider the primal problem (4.2) and its dual (4.3). For the optimal solution

    max f = min g = b1 y1 + b2 y2 + ... + bm ym

where bi is the amount available of resource i. Each bi yi can therefore be interpreted as the contribution of resource i to the profit. This gives the following interpretation of the dual variable yi: yi is the contribution to the profit per unit of resource i (i = 1, ..., m). It measures the rate at which f could be increased by increasing the amount of resource i, i.e. bi (the increase of bi must be sufficiently small so that the set of basic variables for the optimal solution does not change).
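The identification y^T = c_B^T B^-1 can be verified for Ex 4.6 by solving B^T y = c_B for the final basis of the dual problem. The snippet below is an added check with exact arithmetic; the basis (y3, y4, y1) and its data are read from the final tableau above.

```python
from fractions import Fraction as F

# Final basis of the dual problem (D) in Ex 4.6: (y3, y4, y1).
# B's columns are the constraint columns of those variables; c_B their costs.
B   = [[F(1), F(0), F(1)],
       [F(0), F(1), F(1)],
       [F(0), F(0), F(2)]]
c_B = [F(0), F(0), F(2)]

def solve(A, b):
    """Gauss-Jordan elimination with exact fractions (A is n x n, nonsingular)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [a / M[col][col] for a in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# y^T = c_B^T B^-1  <=>  B^T y = c_B.  Applied to the dual problem, this y is
# the optimal solution of the primal (P): the "f-row" coefficients of the slacks.
B_T = [[B[i][j] for i in range(3)] for j in range(3)]
x = solve(B_T, c_B)
print(x)                                                  # x1 = 0, x2 = 0, x3 = 1
print(sum(F(c) * xi for c, xi in zip([10, 6, 8], x)))     # f = 8, equal to g
```

The objective values agree, f_min = g_max = 8, as strong duality promises.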

The dual variable yi is often referred to as the shadow price for resource i. If a shadow price yi is zero, then resource i is not a limitation. Constraint i is therefore not binding on the optimal solution. Such a constraint is said to be nonbinding, while a constraint corresponding to a resource with a shadow price greater than zero is a binding constraint.

Ex 4.7) Recall example 2.1 about the joiner who makes two products from three types of materials. The amounts available of the three types are 90, 120 and 100 units. The objective function of the dual problem is therefore

    min g = 90y1 + 120y2 + 100y3

The optimal solution to the dual problem is g_min = 2300 US$ for y1 = 0, y2 = 15, y3 = 5. This means that if we had one unit more of material 2, the profit would increase by 15 US$. If we could get one unit more of material 3, then the profit would increase by 5 US$. The shadow price for material 1 is 0. This means that this resource is not limiting, and an increase of the amount available of it would not give an increased profit.
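The shadow-price reading of the joiner example amounts to the identity g = sum(bi * yi). A small added arithmetic check, using only the b and y values quoted in the example (the underlying example 2.1 is not part of this chapter):

```python
# Resource amounts and shadow prices quoted for the joiner example.
b = [90, 120, 100]          # available amounts of materials 1-3
y = [0, 15, 5]              # shadow prices (US$ per unit)

profit = sum(bi * yi for bi, yi in zip(b, y))
print(profit)               # 2300

# One more unit of material 2 raises the profit by its shadow price
# (valid only while the optimal basis stays unchanged):
b_new = [90, 121, 100]
delta = sum(bi * yi for bi, yi in zip(b_new, y)) - profit
print(delta)                # 15
```

Material 1 contributes nothing (y1 = 0), which is exactly the nonbinding-constraint case described above.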

5 Integer linear programming

A special type of linear programming is when some or all decision variables are integer valued. It is often the case that the decision variables only make sense if they have integer values, for example when assigning machines or people to some activity. A shipping company that wants to know how many cargo ships it should buy would not be pleased with the answer 2.25; only an integer number would do.

5.1 General form

A general integer linear programming problem can be written

    max f = Σ_{j=1..n} cj xj

    subject to Σ_{j=1..n} aij xj ≤ bi,   i = 1, 2, ..., m
               xj ≥ 0,                   j = 1, 2, ..., n
               xj integer,               j = 1, 2, ..., p  (p ≤ n)

Here we have ordered the variables so that the first p variables are integer valued. If p = n then we have an integer linear programming (ILP) problem. If p < n then the model is referred to as mixed integer linear programming (MILP). When distinguishing the two types from each other, ILP is called pure integer linear programming.

Sometimes it is of interest to consider problems with decisions of the type yes or no. For example, should we build a new factory here? Should we start the production of a new product? Such decisions can be represented with decision variables restricted to only two values, zero and one. Let xj represent the jth yes-or-no decision. Then

    xj = 1 if decision j is yes,
    xj = 0 if decision j is no.

Variables of this kind are called binary variables, or 0-1 variables. ILP problems that contain only binary variables are called binary programming, or zero-one programming. (The adjective linear is normally dropped here.)

5.2 Some considerations on solving integer linear programming problems

One may believe that ILP problems should be relatively easy to solve: LP problems can be solved very efficiently, and the only difference between LP and ILP problems is that ILP problems have far fewer feasible solutions. Indeed, ILP problems with a bounded feasible region always have a finite number of feasible solutions. Unfortunately, this belief is not true: ILP problems are in general much more difficult to solve than LP problems. The efficiency of the simplex method is guaranteed by the fact that if there exists an optimal solution to an LP problem, there is always an optimal corner-point solution. This is not the case for ILP problems, because the optimal solution most often is an interior point in the feasible region.

Could we not then solve an ILP problem without the integer restriction, and just round off the solution? The answer is no, which this example shows. Solve the following ILP problem:

    max f = 5x1 + 4x2
    subject to 2x1 + 2x2 ≤ 7
               8x1 + 3x2 ≤ 20
               x1, x2 ≥ 0
               x1, x2 integers

The problem is shown graphically in Fig. 5.1. The feasible integer points are shown with filled black circles.

[Figure 5.1. Graphical view of an ILP problem.]
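For a problem this small, the feasible integer points can simply be enumerated. The sketch below is an added check, with the two constraints taken as 2x1 + 2x2 ≤ 7 and 8x1 + 3x2 ≤ 20 as read from the example above:

```python
from itertools import product

# Brute-force check of the small ILP above.
def feasible(x1, x2):
    return 2 * x1 + 2 * x2 <= 7 and 8 * x1 + 3 * x2 <= 20

points = [(x1, x2) for x1, x2 in product(range(5), repeat=2) if feasible(x1, x2)]
best = max(points, key=lambda p: 5 * p[0] + 4 * p[1])

print(best, 5 * best[0] + 4 * best[1])    # (2, 1) 14
print(feasible(2, 2))                     # False: rounding the LP optimum fails
```

The enumeration confirms both claims made next in the text: the integer optimum is (2, 1) with f = 14, and the rounded LP solution (2, 2) is infeasible.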

A graphical solution shows that the optimal solution is x1 = 2, x2 = 1 and f = 14. If we skip the integer restriction we get the solution x1 = 1.9, x2 = 1.6, giving us f = 15.9. This is not an integer solution. If we round off this solution we get x1 = 2, x2 = 2, which is an infeasible solution.

In this simple example it is easy to test all feasible points to see which one is the optimal point. Checking each point for feasibility and, if it is feasible, calculating the value of the objective function may work for very small problems only. Consider for example a BIP problem with 30 variables, which is a fairly small problem. This gives us 2^30 solutions to consider (2^30 is slightly more than a billion, i.e. 10^9). Finding the solution to this problem will take a lot of time even on fast computers. Hence, we must find other methods.

Another difficulty with ILP problems is that there is no easy way of telling if a solution is optimal or not. This is because a point can be a local optimum. In the example given above the point x1 = 0, x2 = 3 is a better point than all its feasible neighbours, but it is not an optimal point.

Since LP problems in general are much easier to solve than ILP problems, many algorithms for ILP use an LP-solver (e.g. the simplex method) on portions of the ILP problem where the integer restriction is deleted. The corresponding LP problem to a given ILP problem is referred to as its LP-relaxation.

5.3 The branch-and-bound technique for solving ILP problems

The branch-and-bound technique uses three basic steps: branching, bounding and fathoming. The basic idea is to divide the ILP problem into smaller subproblems. This is called branching, and is done by partitioning the entire set of feasible solutions into smaller and smaller subsets. We then need a bound for each of these subproblems on how good its best feasible solution can be. The easiest way of obtaining such a bound is to solve the subproblem's LP-relaxation, i.e.
the subproblem without the integer restrictions. A subproblem can be fathomed and dismissed from further consideration depending on the result of the bounding. The branch-and-bound algorithm is given here:

Step 0: Initialization. Let f̂ be the value of the objective function for the best integer solution so far. Set f̂ = -∞. Solve the LP-relaxation of the whole problem. If the solution is not integer valued or is infeasible, go to step 1.

Step 1: Branching. Among the remaining subproblems, select the one created most recently. Of two subproblems created at the same time (subproblems are always created in pairs), select the one with the largest bound. From the optimal solution for the LP-relaxation of that subproblem, choose a variable with a noninteger value to be the

branching variable. Let xj be that variable, and x̄j its value in this solution. Create two new subproblems from the subproblem by adding the constraints xj ≤ ⌊x̄j⌋ and xj ≥ ⌊x̄j⌋ + 1, respectively. (⌊x̄j⌋ is the greatest integer less than or equal to x̄j.)

Step 2: Bounding. For the two new subproblems, obtain their upper bound f̄ on the objective function by applying the simplex method to their LP-relaxations.

Step 3: Fathoming. Apply the following three tests to the new subproblems, and discard those subproblems that are fathomed by any of the tests.

    Test 1. If the upper bound is not better than the best integer solution so far, i.e. f̄ ≤ f̂, discard that subproblem.
    Test 2. If its LP-relaxation has no feasible solutions, discard that subproblem.
    Test 3. If the optimal solution for its LP-relaxation has integer values, discard that subproblem. If the upper bound is better than the current best integer solution, i.e. f̄ > f̂, then store this solution and let f̂ = f̄.

Step 4: Test for optimality. If there are no remaining subproblems, then the solution which gave us f̂ is the optimal solution. Otherwise, return to step 1.

The branch-and-bound technique will be illustrated with the following example.

Ex 5.1) Solve the following ILP problem (P):

    max f = 3x1 + 3x2 + 9x3
    subject to -3x1 + 6x2 + 6x3 ≤ 8
                6x1 - 3x2 + 6x3 ≤ 8
               x1, x2, x3 ≥ 0
               x1, x2, x3 integers

We begin by solving the LP-relaxation of (P). This is problem P1.

    P1: max f = 3x1 + 3x2 + 9x3
        subject to -3x1 + 6x2 + 6x3 ≤ 8
                    6x1 - 3x2 + 6x3 ≤ 8
                   x1, x2, x3 ≥ 0

The solution to P1 is x1 = x2 = 8/3, x3 = 0 and f = 16. Since neither x1 nor x2 is integer valued, we divide P1 into two new subproblems with x1 as the branching variable. The additional constraints in the new subproblems are x1 ≤ 2 and x1 ≥ 3. Hence, we now have the problems P2 and P3.

    P2: max f = 3x1 + 3x2 + 9x3
        subject to -3x1 + 6x2 + 6x3 ≤ 8
                    6x1 - 3x2 + 6x3 ≤ 8
                   x1 ≤ 2
                   x1, x2, x3 ≥ 0

    P3: max f = 3x1 + 3x2 + 9x3
        subject to -3x1 + 6x2 + 6x3 ≤ 8
                    6x1 - 3x2 + 6x3 ≤ 8
                   x1 ≥ 3
                   x1, x2, x3 ≥ 0

The solution to problem P2 is x1 = x2 = 2, x3 = 1/3 and f = 15. P3 has no feasible solution, and is thus discarded from further consideration. The only remaining subproblem is P2. Since x3 is the only noninteger-valued variable, it becomes the branching variable. Problem P2 is now divided into two new subproblems with the additional constraints x3 ≤ 0 and x3 ≥ 1. This gives us the problems P4 and P5.

    P4: Additional constraints in problem P4 compared to problem P1: x1 ≤ 2, x3 ≤ 0.
    P5: Additional constraints in problem P5 compared to problem P1: x1 ≤ 2, x3 ≥ 1.

The solution to problem P4 is x1 = 2, x2 = 7/3, x3 = 0 and f = 13. P5 has the solution x1 = x2 = 2/3, x3 = 1 and f = 13. Neither P4 nor P5 can be discarded, and they have the same upper bound. Therefore it is arbitrary which one of them is chosen for branching. In this example we choose P4, which is divided into two subproblems using x2 as the branching variable.
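The branch-and-bound steps can also be sketched in code. The following is a minimal added illustration, not the notes' algorithm verbatim: it solves each LP-relaxation by brute-force vertex enumeration (workable only for tiny two-variable problems) and omits the largest-bound tie-breaking rule of step 1. It is run on the two-variable ILP of section 5.2, whose constraints are assumed to be 2x1 + 2x2 ≤ 7 and 8x1 + 3x2 ≤ 20 as read from that example.

```python
from fractions import Fraction as F
from itertools import combinations
import math

def lp_max(c, A, b):
    """Solve max c^T x, A x <= b, x >= 0 in two variables by checking every
    intersection of two constraint boundaries. Return (f, x) or None."""
    rows = A + [[F(1), F(0)], [F(0), F(1)]]       # treat x1 >= 0, x2 >= 0 ...
    rhs  = b + [F(0), F(0)]                       # ... as boundary lines too
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a11, a12), (a21, a22) = rows[i], rows[j]
        det = a11 * a22 - a12 * a21
        if det == 0:
            continue                              # parallel lines, no vertex
        x = [(rhs[i] * a22 - a12 * rhs[j]) / det,
             (a11 * rhs[j] - rhs[i] * a21) / det]
        if min(x) < 0 or any(ai[0] * x[0] + ai[1] * x[1] > bi
                             for ai, bi in zip(A, b)):
            continue                              # vertex outside feasible set
        f = c[0] * x[0] + c[1] * x[1]
        if best is None or f > best[0]:
            best = (f, x)
    return best

def branch_and_bound(c, A, b):
    incumbent, f_hat = None, None                 # best integer solution so far
    subproblems = [(A, b)]                        # step 0: whole problem
    while subproblems:                            # step 4: stop when list empty
        A_k, b_k = subproblems.pop()              # step 1: most recent first
        sol = lp_max(c, A_k, b_k)
        if sol is None:
            continue                              # test 2: relaxation infeasible
        f_bar, x = sol
        if f_hat is not None and f_bar <= f_hat:
            continue                              # test 1: bound not better
        frac = [j for j in range(2) if x[j] != int(x[j])]
        if not frac:                              # test 3: integer solution
            incumbent, f_hat = x, f_bar
            continue
        j = frac[0]                               # branching variable
        lo = math.floor(x[j])
        row = [F(1) if k == j else F(0) for k in range(2)]
        subproblems.append((A_k + [row], b_k + [F(lo)]))          # xj <= floor
        neg = [-v for v in row]
        subproblems.append((A_k + [neg], b_k + [F(-(lo + 1))]))   # xj >= floor+1

    return f_hat, incumbent

A = [[F(2), F(2)], [F(8), F(3)]]
b = [F(7), F(20)]
f_opt, x_opt = branch_and_bound([F(5), F(4)], A, b)
print(f_opt, [int(v) for v in x_opt])   # 14 [2, 1]
```

The run reproduces the graphical result of section 5.2: f = 14 at (x1, x2) = (2, 1). Exact fractions are used throughout so that the bounding and integrality tests never suffer from rounding.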


More information

is called an integer programming (IP) problem. model is called a mixed integer programming (MIP)

is called an integer programming (IP) problem. model is called a mixed integer programming (MIP) INTEGER PROGRAMMING Integer Programming g In many problems the decision variables must have integer values. Example: assign people, machines, and vehicles to activities in integer quantities. If this is

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions

More information

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0.

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0. ex-.-. Foundations of Operations Research Prof. E. Amaldi. Dual simplex algorithm Given the linear program minx + x + x x + x + x 6 x + x + x x, x, x. solve it via the dual simplex algorithm. Describe

More information

Math Models of OR: Some Definitions

Math Models of OR: Some Definitions Math Models of OR: Some Definitions John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA September 2018 Mitchell Some Definitions 1 / 20 Active constraints Outline 1 Active constraints

More information

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009 Branch-and-Bound Algorithm November 3, 2009 Outline Lecture 23 Modeling aspect: Either-Or requirement Special ILPs: Totally unimodular matrices Branch-and-Bound Algorithm Underlying idea Terminology Formal

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

Linear Programming in Matrix Form

Linear Programming in Matrix Form Linear Programming in Matrix Form Appendix B We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm,

More information

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker 56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker Answer all of Part One and two (of the four) problems of Part Two Problem: 1 2 3 4 5 6 7 8 TOTAL Possible: 16 12 20 10

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

Math 273a: Optimization The Simplex method

Math 273a: Optimization The Simplex method Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

February 17, Simplex Method Continued

February 17, Simplex Method Continued 15.053 February 17, 2005 Simplex Method Continued 1 Today s Lecture Review of the simplex algorithm. Formalizing the approach Alternative Optimal Solutions Obtaining an initial bfs Is the simplex algorithm

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal

More information

SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: TOTAL GRAND

SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: TOTAL GRAND 1 56:270 LINEAR PROGRAMMING FINAL EXAMINATION - MAY 17, 1985 SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: 1 2 3 4 TOTAL GRAND

More information

56:171 Fall 2002 Operations Research Quizzes with Solutions

56:171 Fall 2002 Operations Research Quizzes with Solutions 56:7 Fall Operations Research Quizzes with Solutions Instructor: D. L. Bricker University of Iowa Dept. of Mechanical & Industrial Engineering Note: In most cases, each quiz is available in several versions!

More information

Simplex method(s) for solving LPs in standard form

Simplex method(s) for solving LPs in standard form Simplex method: outline I The Simplex Method is a family of algorithms for solving LPs in standard form (and their duals) I Goal: identify an optimal basis, as in Definition 3.3 I Versions we will consider:

More information

Concept and Definition. Characteristics of OR (Features) Phases of OR

Concept and Definition. Characteristics of OR (Features) Phases of OR Concept and Definition Operations research signifies research on operations. It is the organized application of modern science, mathematics and computer techniques to complex military, government, business

More information

3. Duality: What is duality? Why does it matter? Sensitivity through duality.

3. Duality: What is duality? Why does it matter? Sensitivity through duality. 1 Overview of lecture (10/5/10) 1. Review Simplex Method 2. Sensitivity Analysis: How does solution change as parameters change? How much is the optimal solution effected by changing A, b, or c? How much

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

March 13, Duality 3

March 13, Duality 3 15.53 March 13, 27 Duality 3 There are concepts much more difficult to grasp than duality in linear programming. -- Jim Orlin The concept [of nonduality], often described in English as "nondualism," is

More information

Worked Examples for Chapter 5

Worked Examples for Chapter 5 Worked Examples for Chapter 5 Example for Section 5.2 Construct the primal-dual table and the dual problem for the following linear programming model fitting our standard form. Maximize Z = 5 x 1 + 4 x

More information

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs

More information

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P)

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P) Lecture 10: Linear programming duality Michael Patriksson 19 February 2004 0-0 The dual of the LP in standard form minimize z = c T x (P) subject to Ax = b, x 0 n, and maximize w = b T y (D) subject to

More information

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive 3.4 Anticycling Lexicographic order In this section we discuss two pivoting rules that are guaranteed to avoid cycling. These are the lexicographic rule and Bland s rule. Definition A vector u R n is lexicographically

More information

Section 4.1 Solving Systems of Linear Inequalities

Section 4.1 Solving Systems of Linear Inequalities Section 4.1 Solving Systems of Linear Inequalities Question 1 How do you graph a linear inequality? Question 2 How do you graph a system of linear inequalities? Question 1 How do you graph a linear inequality?

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

maxz = 3x 1 +4x 2 2x 1 +x 2 6 2x 1 +3x 2 9 x 1,x 2

maxz = 3x 1 +4x 2 2x 1 +x 2 6 2x 1 +3x 2 9 x 1,x 2 ex-5.-5. Foundations of Operations Research Prof. E. Amaldi 5. Branch-and-Bound Given the integer linear program maxz = x +x x +x 6 x +x 9 x,x integer solve it via the Branch-and-Bound method (solving

More information

Systems Analysis in Construction

Systems Analysis in Construction Systems Analysis in Construction CB312 Construction & Building Engineering Department- AASTMT by A h m e d E l h a k e e m & M o h a m e d S a i e d 3. Linear Programming Optimization Simplex Method 135

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

The Simplex Method. Formulate Constrained Maximization or Minimization Problem. Convert to Standard Form. Convert to Canonical Form

The Simplex Method. Formulate Constrained Maximization or Minimization Problem. Convert to Standard Form. Convert to Canonical Form The Simplex Method 1 The Simplex Method Formulate Constrained Maximization or Minimization Problem Convert to Standard Form Convert to Canonical Form Set Up the Tableau and the Initial Basic Feasible Solution

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered: LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides

More information

c) Place the Coefficients from all Equations into a Simplex Tableau, labeled above with variables indicating their respective columns

c) Place the Coefficients from all Equations into a Simplex Tableau, labeled above with variables indicating their respective columns BUILDING A SIMPLEX TABLEAU AND PROPER PIVOT SELECTION Maximize : 15x + 25y + 18 z s. t. 2x+ 3y+ 4z 60 4x+ 4y+ 2z 100 8x+ 5y 80 x 0, y 0, z 0 a) Build Equations out of each of the constraints above by introducing

More information

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight.

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the multi-dimensional knapsack problem, additional

More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

The Graphical Method & Algebraic Technique for Solving LP s. Métodos Cuantitativos M. En C. Eduardo Bustos Farías 1

The Graphical Method & Algebraic Technique for Solving LP s. Métodos Cuantitativos M. En C. Eduardo Bustos Farías 1 The Graphical Method & Algebraic Technique for Solving LP s Métodos Cuantitativos M. En C. Eduardo Bustos Farías The Graphical Method for Solving LP s If LP models have only two variables, they can be

More information

The Simplex Method. Lecture 5 Standard and Canonical Forms and Setting up the Tableau. Lecture 5 Slide 1. FOMGT 353 Introduction to Management Science

The Simplex Method. Lecture 5 Standard and Canonical Forms and Setting up the Tableau. Lecture 5 Slide 1. FOMGT 353 Introduction to Management Science The Simplex Method Lecture 5 Standard and Canonical Forms and Setting up the Tableau Lecture 5 Slide 1 The Simplex Method Formulate Constrained Maximization or Minimization Problem Convert to Standard

More information

Chapter 4 The Simplex Algorithm Part I

Chapter 4 The Simplex Algorithm Part I Chapter 4 The Simplex Algorithm Part I Based on Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan Lewis Ntaimo 1 Modeling

More information

9.1 Linear Programs in canonical form

9.1 Linear Programs in canonical form 9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems

More information

Week 3: Simplex Method I

Week 3: Simplex Method I Week 3: Simplex Method I 1 1. Introduction The simplex method computations are particularly tedious and repetitive. It attempts to move from one corner point of the solution space to a better corner point

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3)

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3) The Simple Method Gauss-Jordan Elimination for Solving Linear Equations Eample: Gauss-Jordan Elimination Solve the following equations: + + + + = 4 = = () () () - In the first step of the procedure, we

More information

MATH 4211/6211 Optimization Linear Programming

MATH 4211/6211 Optimization Linear Programming MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear

More information

56:171 Operations Research Midterm Exam--15 October 2002

56:171 Operations Research Midterm Exam--15 October 2002 Name 56:171 Operations Research Midterm Exam--15 October 2002 Possible Score 1. True/False 25 _ 2. LP sensitivity analysis 25 _ 3. Transportation problem 15 _ 4. LP tableaux 15 _ Total 80 _ Part I: True(+)

More information