Paul E. Fishback. Instructor's Solutions Manual for Linear and Nonlinear Programming with Maple: An Interactive, Applications-Based Approach


1 Paul E. Fishback
Instructor's Solutions Manual for Linear and Nonlinear Programming with Maple: An Interactive, Applications-Based Approach

2 ii

3 Contents

I Linear Programming

1 An Introduction to Linear Programming 3
  1.1 The Basic Linear Programming Problem Formulation
  1.2 Linear Programming: A Graphical Perspective in R^2
  1.3 Basic Feasible Solutions

2 The Simplex Algorithm
  2.1 The Simplex Algorithm
  2.2 Alternative Optimal/Unbounded Solutions and Degeneracy
  2.3 Excess and Artificial Variables: The Big M Method
  2.4 Duality
  2.5 Sufficient Conditions for Local and Global Optimal Solutions
  2.6 Quadratic Programming

iii

4 iv

5 Part I Linear Programming

6

7 Chapter 1

An Introduction to Linear Programming

1.1 The Basic Linear Programming Problem Formulation
1.2 Linear Programming: A Graphical Perspective in R^2
1.3 Basic Feasible Solutions

8 4 Chapter 1. An Introduction to Linear Programming

1.1 The Basic Linear Programming Problem Formulation

1. Express each LP below in matrix inequality form. Then solve the LP using Maple, provided it is feasible and bounded.

(a) maximize z = 6x_1 + 4x_2
    2x_1 + 3x_2 ≤ 9
    x_1 ≥ 4
    x_2 ≤ 6
    x_1, x_2 ≥ 0.

The second constraint may be rewritten as -x_1 ≤ -4, so that the matrix inequality form of the LP is given by

    maximize z = cx    (1.1)
    Ax ≤ b
    x ≥ 0,

where A = [[2, 3], [-1, 0], [0, 1]], c = [6, 4], b = (9, -4, 6)^T, and x = (x_1, x_2)^T. The solution is given by x = (4.5, 0)^T, with z = 27.

(b) maximize z = 3x_1 + 2x_2
    x_1 ≤ 4
    x_1 + 3x_2 ≤ 15
    2x_1 + x_2 = 10
    x_1, x_2 ≥ 0.

The third constraint can be replaced by the two constraints 2x_1 + x_2 ≤ 10 and -2x_1 - x_2 ≤ -10. Thus, the matrix inequality form

9 (c).. The Basic Linear Programming Problem Formulation 5 3 of the LP is (.), with A=, c= [ 3 2 ] 4 5, b=, and 2 2 x x=. x 2 The solution is given by x= [] 3, with z=7. 4 maximize z= x + 4x 2 x + x 2 x + 3 x + x 2 5 x, x 2. (d) The third constraint is identical to x x 2 5. The matrix inequality form of the LP is (.), with A=, c=[ 4 ], b= 3, and x= x. x 5 2 [] 3 The solution is x=, with z=3. 4 minimize z= x + 4x 2 x + 3x 2 5 x + x 2 4 x x 2 2 x, x 2. To express the LP in matrix inequality form with the goal of maximization, we set c= [ 4 ] = [ 4 ]. The first two constraints may be rewritten as x 3x 2 5 and

10 6 Chapter. An Introduction to Linear Programming (e) x x 2 4. The matrix inequality form of the LP becomes (.), 3 with A=, c=[ 4 ] 5, b= 4, and x= x. x 2 2 [] 3 The solution is given by x=, with z=. maximize z=2x x 2 x + 3x 2 8 x + x 2 4 x x 2 2 x, x 2. The first and second constraints are identical to x 3x and x x 2 4, respectively. Thus, A=, c=[ 2 ], 8 b= 4, and x= x. x 2 2 (f) The LP is unbounded. minimize z=2x + 3x 2 3x + x 2 x + x 2 6 x 2. Define x = x,+ x,, where x,+ and x, are nonnegative. Then, as a maximization problem, the LP may be rewritten in terms of three decision variables as maximize z= 2(x,+ x, ) 3x 2 3(x,+ x, ) x 2 (x,+ x, )+x 2 6 x,+, x,, x 2.

11 .. The Basic Linear Programming Problem Formulation 7 The matrix inequality form of the LP becomes (.), with A = 3 3, c= [ ] x,+, b=, and x= x 6,. x 2 The solution is given by x,+ = 3 and x, = x 2 =, with z= The LP is given by minimize z=x + 4x 2 x + 2x 2 5 x x 2 2 x, x 2. The constraint involving the absolute value is identical to 2 x x 2 2, which may be written as the two constraints, x x 2 2 and x +x The matrix inequality form of the LP is (.) with A=, c= [ 4 ] 5, b= 2, and x= x. x 2 2 The solution is given by x=, with z=. 3. If x and x 2 denote the number of chairs and number of tables, respectively produced by the company, then z=5x +7x 2 denotes the revenue, which we seek to maximize. The number of square blocks needed to produce x chairs is 2x, and the number of square blocks needed to produce x 2 tables is 2x 2. Since six square blocks are available, we have the constraint, 2x + 2x 2 6. Similar reasoning, applied to the rectangular blocks, leads to the constraint x + 2x 2 8. Along with sign conditions, these results yield the LP. maximize z=5x + 7x 2 2x + 2x 2 6 x + 2x 2 8 x, x 2.

12 8 Chapter. An Introduction to Linear Programming 2 2 For the matrix inequality form, A=, c= [ 5 7 ], b= 2 ] x= [ x x 2. [] 6, and 8 The solution is given by x= [], with z= (a) If x and x 2 denote the number of grams of grass and number of grams of forb, respectively, consumed by the vole on a given day, then the total foraging time is given by z=45.55x x 2. Since the coefficients in this sum have units of minutes per gram, the units of z are minutes. The products.64x and 2.67x 2 represent the amount of digestive capacity corresponding to eating x grams of grass and x 2 grams of forb, respectively. The total digestive capacity is 3.2 gm-wet, which yields the constraint.64x x Observe that the units of the variables, x and x 2, are gm-dry. Each coefficient in this inequality has units of gm-wet per gm-dry, so the sum.64x x 2 has the desired units of gm-wet. Similar reasoning focusing on energy requirements, leads to the constraint 2.x + 2.3x Along with sign conditions, we arrive at minimize z=45.55x x 2.64x x x + 2.3x x, x 2. (b) The solution, obtained using Maple slpsolve command, is given by x = grams of grass and x grams of forb. The total time spent foraging is z 32.7 minutes. Solutions to Waypoints Waypoint... There are of course many possible combinations. Table. summarizes the outcomes for four choices. The fourth combination requires 35 gallons of stock B, so it does not satisfy the

13 .. The Basic Linear Programming Problem Formulation 9 TABLE.: Production combinations (in gallons) Premium Reg. Unleaded Profit ($) listed constraints. Of the three that do, the combination of 7 gallons premium and 5 gallons regular unleaded results in the greatest profit.

14 10 Chapter 1. An Introduction to Linear Programming

1.2 Linear Programming: A Graphical Perspective in R^2

1. (a) Maximize z = 6x_1 + 4x_2
       2x_1 + 3x_2 ≤ 9
       x_1 ≥ 4
       x_2 ≤ 6
       x_1, x_2 ≥ 0.

The feasible region is shown in Figure 1.1. The solution is given by x = (4.5, 0)^T, with z = 27.

FIGURE 1.1: Feasible region with contours z = 23, z = 25, and z = 27.
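The figure is easily reproduced, and the reported solution checked, in Maple. The commands below are our own sketch in the style of the Waypoint solutions later in this section; the LPSolve call assumes the Optimization package and is not part of the printed solution.

> with(plots): with(Optimization):
> constraints := [2*x1 + 3*x2 <= 9, x1 >= 4, x2 <= 6, x1 >= 0, x2 >= 0]:
> FeasibleRegion := inequal(constraints, x1 = 0..5, x2 = 0..3, optionsfeasible = (color = grey)):
> ObjectiveContours := contourplot(6*x1 + 4*x2, x1 = 0..5, x2 = 0..3, contours = [23, 25, 27]):
> display(FeasibleRegion, ObjectiveContours);
> LPSolve(6*x1 + 4*x2, constraints, maximize);    # reports z = 27 at x1 = 4.5, x2 = 0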

15 .2. Linear Programming: A Graphical Perspective inr 2 (b) maximize z=3x + 2x 2 x 4 x + 3x 2 5 2x + x 2 = x, x 2. The feasible region is the [] line segment shown in Figure.2. The 3 solution is given by x=, with z=7. 4 x 2 z=5 z=7 z=3 x FIGURE.2: Feasible region with contours z = 3, z = 5 and z = 7.

16 2 Chapter. An Introduction to Linear Programming (c) minimize z= x + 4x 2 x + 3x 2 5 x + x 2 4 x x 2 2 x, x 2. The [ feasible ] region is shown in Figure.3. The solution is given by 3 x=, with z=. x 2 z=5 z=3 z= x FIGURE.3: Feasible region with contours z=5, z=3and z=. (d) maximize z=2x + 6x 2 x + 3x 2 6 x + 2x 2 5 x, x 2.

17 .2. Linear Programming: A Graphical Perspective inr 2 3 The feasible region is shown in Figure.4. The LP has alternative [] 3 optimal solutions that fall on the segment connecting x = to 6 x=. Each such solution has an objective value of z=2, and the parametric representation of the segment is given by 3t+6( t) x= t+( t) 6 3t =, t where t. x 2 z=2 z= z=8 x FIGURE.4: Feasible region with contours z=8, z= and z=2.

18 4 Chapter. An Introduction to Linear Programming (e) minimize z=2x + 3x 2 3x + x 2 x + x 2 6 x 2. The [ feasible ] region is shown in Figure.5. The solution is given by /3 x=, with z= 2 3. x 2 z= 4 3 z= z= 2 3 x FIGURE.5: Feasible region with contours z= 4 3, z=and z= The Foraging Herbivore Model), Exercise 4, from Section. is given by minimize z=45.55x x 2.64x x x + 2.3x x, x 2,

19 .2. Linear Programming: A Graphical Perspective inr 2 5 whose [ feasible ] region is shown in Figure.6. The solution is given by x, with z x 2 z=3 z=32.7 z=2 x FIGURE.6: Feasible region with contours z = 3, z = 2 and z = (a) The feasible region, along with the contours z=5, z=, and z = 5, is shown in Figure.7 (b) If M is an arbitrarily large positive real number, then the set of points falling on the contour z = M satisfies 2x x 2 = M, or, equivalently, x 2 = 2x M. However, as Figure.7 indicates, to use the portion of this line that falls within the feasible region, we must have x x 2 2. Combining this inequality with 2x x 2 = M yields x M 2. Thus, the portion of the contour z=m belonging to the feasible region consists of the ray, {(x, x 2 ) x M 2 and x 2 = 2x M}. (c) Fix M and consider the starting point on the ray from (b). Then

20 6 Chapter. An Introduction to Linear Programming x 2 z=5 z= z=5 x FIGURE.7: Feasible region with contours z = 5, z = and z = 5. x = M 2 and x 2 = 2x M=M 4, in which case all constraints and sign conditions are satisfied if M is large enough. (Actually, M 5suffices.) Since M can be made as large as we like, the LP is unbounded. 4. Suppose f is a linear function of x and x 2, and consider the LP given by maximize z= f (x, x 2 ) (.2) x x 2 x, x 2, (a) The feasible region is shown in Figure.8 (b) If f (x, x 2 )=x 2 x, then f (, )=. For all [] other feasible points, the objective value is negative. Hence, x = is the unique optimal solution. (c) If f (x, x 2 )=x, then the LP is unbounded.

21 1.2. Linear Programming: A Graphical Perspective in R^2 17

FIGURE 1.8: Feasible region for LP (1.2).

(d) If f(x_1, x_2) = x_2, then the LP has alternative optimal solutions.

Solutions to Waypoints

Waypoint 1.2.1. The following Maple commands produce the desired feasible region:

> restart: with(plots):
> constraints := [x1 <= 8, 2*x1 + 2*x2 <= 28, 3*x1 + 2*x2 <= 32, x1 >= 0, x2 >= 0]:
> inequal(constraints, x1 = 0..10, x2 = 0..16, optionsfeasible = (color = grey),
  optionsexcluded = (color = white), optionsclosed = (color = black), thickness = 1);

Waypoint 1.2.2. The following commands produce the feasible region, upon which are superimposed the contours z = 20, z = 30, and z = 40.

> restart: with(plots):
> f := (x1, x2) -> 4*x1 + 3*x2;
                         f := (x1, x2) -> 4*x1 + 3*x2
> constraints := [x1 <= 8, 2*x1 + 2*x2 <= 28, 3*x1 + 2*x2 <= 32, x1 >= 0, x2 >= 0]:
> FeasibleRegion := inequal(constraints, x1 = 0..10, x2 = 0..16, optionsfeasible = (color = grey),
  optionsexcluded = (color = white), optionsclosed = (color = black), thickness = 2):

22 18 Chapter 1. An Introduction to Linear Programming

FIGURE 1.9: Feasible region and objective contours z = 20, z = 30, and z = 40.

> ObjectiveContours := contourplot(f(x1, x2), x1 = 0..10, x2 = 0..16, contours = [20, 30, 40], thickness = 2):
> display(FeasibleRegion, ObjectiveContours);

The resulting graph is shown in Figure 1.9. Maple does not label the contours, but the contours must increase in value as x_1 and x_2 increase. This fact, along with the graph, suggests that the solution is given by x_1 = 4, x_2 = 10.
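The graphical estimate can be confirmed numerically. The check below is ours, not part of the Waypoint solution, and assumes the Optimization package:

> with(Optimization):
> LPSolve(4*x1 + 3*x2, {x1 <= 8, 2*x1 + 2*x2 <= 28, 3*x1 + 2*x2 <= 32},
  assume = nonnegative, maximize);    # expected result: z = 46 at x1 = 4, x2 = 10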

23 .3. Basic Feasible Solutions 9.3 Basic Feasible Solutions. For each of the following LPs, write the LP in the standard matrix form maximize z=c x x [A I m ] = b s x, s. Then determine all basic and basic feasible solutions, expressing each solution in vector form. Label each solution next to its corresponding point on the feasible region graph. (a) Maximize z=6x + 4x 2 2x + 3x 2 9 x 4 x 2 6 x, x 2, Note that this LP is the same as that from Exercise a of Section 2 3., where A=, c=[ 6 4 ] 9, b= 4, and x= x. Since x 6 2 s there are three constraints, s= s 2. In this case, the matrix equation, ( ) 5 2 [A I m ] s 3 [] x s = b (.3) has at most = possible basic solutions. Each is obtained by [] x setting two of the five entries of to zero and solving (.3) for s the remaining three, provided the system is consistent. Of the ten possible cases to consider, two lead to inconsistent systems. They arise from setting x = s 2 = and from setting x 2 = s 3 =. The eight remaining consistent systems yield the following results:

24 2 Chapter. An Introduction to Linear Programming (b) i. x =, x 2 =, s = 9, s 2 = 4, s 3 = 6 (basic, but not basic feasible solution) ii. x =, x 2 = 3, s =, s 2 = 4, s 3 = 3 (basic, but not basic feasible solution) iii. x =, x 2 = 6, s = 9, s 2 = 4, s 3 = (basic, but not basic feasible solution) iv. x = 4.5, x 2 =, s =, s 2 =.5, s 3 = 6 (basic feasible solution) v. x = 4, x 2 =, s =, s 2 =, s 3 = 6 (basic feasible solution) vi. x = 4, x 2 = /3, s =, s 2 =, s 3 = 7/3 (basic feasible solution) vii. x = 4.5, x 2 = 6, s =, s 2 = 8.5, s 3 = (basic, but not basic feasible solution) viii. x = 4, x 2 = 6, s = 7, s 2 =, s 3 = (basic, but not basic feasible solution) The feasible region is that from Exercise a of Section.2, and the solution is x = 4.5 and x 2 = with z=27. minimize z= x + 4x 2 x + 3x 2 5 x + x 2 4 x x 2 2 x, x 2. Note that this LP is the same as that from Exercise d of Section 3., where A=, c=[ 4 ] 5, b= 4, and x= x Since x 2 2 s there are three constraints, s= s 2. s 3 As in the previous exercise, there are ten possible cases to consider. In this situation, all ten systems are consistent. The results are as follows: i. x =, x 2 =, s = 5, s 2 = 4, s 3 = 2 (basic, but not basic feasible solution) ii. x =, x 2 = 5/3, s =, s 2 = 7/3, s 3 = /3 (basic, but not basic feasible solution) iii. x =, x 2 = 4, s = 7, s 2 =, s 3 = 6 (basic feasible solution)

25 .3. Basic Feasible Solutions 2 iv. x =, x 2 = 2, s =, s 2 = 6, s 3 = (basic, but not basic feasible solution) v. x = 5, x 2 =, s =, s 2 =, s 3 = 3 (basic, but not basic feasible solution) vi. x = 4, x 2 =, s =, s 2 =, s 3 = 2 (basic, but not basic feasible solution) vii. x = 2, x 2 =, s = 3, s 2 = 2, s 3 = (basic, but not basic feasible solution) viii. x = 7/2, x 2 = /2, s =, s 2 =, s 3 = (basic, but not basic feasible solution) ix. x = /4, x 2 = 3/4, s =, s 2 = /2, s 3 = (basic, but not basic feasible solution) x. x = 3, x 2 =, s =, s 2 =, s 3 = (basic feasible solution) The feasible region is that from Exercise c of Section.2, and the solution is x = 3 and x 2 = with z=. 2. The constraint equations of the standard form of the FuelPro LP are given by x + s = 8 (.4) 2x + 2x 2 + s 2 = 28 3x + 2x 2 + s 3 = 32 If x and s are nonbasic, then both are zero. In this case (.4) is inconsistent. Observe that if x = s =, that the coefficient matrix corresponding to (.4) is given by M= 2, which is not invertible The LP is given by maximize z=3x + 2x 2 x 4 x + 3x 2 5 2x + x 2 = x, x 2. (a) The third constraint is the combination of 2x + x 2 and 2x 2x 2. Thus, the matrix inequality form of the LP has A= and b=

26 22 Chapter 1. An Introduction to Linear Programming

(b) Since there are four constraints, there are four corresponding slack variables, s_1, s_2, s_3, and s_4, which lead to the matrix equation

    maximize z = cx
    [A I_m] [x; s] = b
    x, s ≥ 0,

where x = (x_1, x_2)^T and s = (s_1, s_2, s_3, s_4)^T. If s_3 and s_4 are nonbasic, then both are zero. In this case, the preceding matrix equation becomes

    [[1, 0, 1, 0], [1, 3, 0, 1], [2, 1, 0, 0], [-2, -1, 0, 0]] (x_1, x_2, s_1, s_2)^T = b.

The solution to this matrix equation yields x_1 = 4 - t, x_2 = 2 + 2t, s_1 = t, and s_2 = 5 - 5t, where t is a free quantity. A simple calculation shows that for x_1, x_2, s_1, and s_2 to all remain nonnegative, it must be the case that 0 ≤ t ≤ 1. Note that if t = 0, then x = (4, 2)^T; if t = 1, then x = (3, 4)^T. As t increases from 0 to 1, x varies along the line segment connecting these two points.

4. The FuelPro LP has basic feasible solutions corresponding to the five points

    x = (0, 0)^T, (8, 0)^T, (8, 4)^T, (4, 10)^T, and (0, 14)^T.

Suppose we add a constraint that does not change the current feasible region but passes through one of these points. While there are countless such examples, a simple choice is to add x_2 ≤ 14. Now consider the original FuelPro LP, with this new constraint added, and let s_4 denote the new corresponding slack variable. In the original LP, x = (0, 14)^T corresponded to a basic feasible solution in which the nonbasic variables were x_1 and s_2. In the new LP, any basic solution is obtained by setting two of the six variables, x_1, x_2, s_1, s_2, s_3, and s_4, to zero. If we again choose x_1 and s_2 as nonbasic, then the resulting system of equations yields a basic feasible solution in which s_4 is a basic variable equal to zero. Thus, the LP is degenerate.

27 1.3. Basic Feasible Solutions 23

5. A subset V of R^n is said to be convex if, whenever two points belong to V, so does the line segment connecting them. In other words, x_1, x_2 ∈ V implies that tx_1 + (1 - t)x_2 ∈ V for all 0 ≤ t ≤ 1.

(a) Suppose that the LP is expressed in matrix inequality form as

    maximize z = cx    (1.5)
    Ax ≤ b
    x ≥ 0.

If x_1 and x_2 are feasible, then Ax_1 ≤ b and Ax_2 ≤ b. Thus, if 0 ≤ t ≤ 1, then

    A(tx_1 + (1 - t)x_2) = tAx_1 + (1 - t)Ax_2 ≤ tb + (1 - t)b = b.

By similar reasoning, if x_1, x_2 ≥ 0, then tx_1 + (1 - t)x_2 ≥ 0 whenever 0 ≤ t ≤ 1. This shows that if x_1 and x_2 satisfy the matrix inequality and sign conditions in (1.5), then so does tx_1 + (1 - t)x_2 for 0 ≤ t ≤ 1. Hence, the feasible region is convex.

(b) If x_1 and x_2 are solutions of LP (1.5), then both are feasible. By the previous result, x = tx_1 + (1 - t)x_2 is also feasible if 0 ≤ t ≤ 1. Since x_1 and x_2 are both solutions of the LP, they have a common objective value z_0 = cx_1 = cx_2. Now consider the objective value at x:

    cx = c(tx_1 + (1 - t)x_2) = t cx_1 + (1 - t) cx_2 = t z_0 + (1 - t) z_0 = z_0.

Thus, x = tx_1 + (1 - t)x_2 also yields an objective value of z_0. Since x is also feasible, it must be optimal as well.

6. Suppose the matrix inequality form of the LP is given by

    maximize z = cx    (1.6)
    Ax ≤ b
    x ≥ 0,

28 24 Chapter 1. An Introduction to Linear Programming

and let x* denote an optimal solution. Since x* belongs to the convex hull generated by the set of basic feasible solutions,

    x* = σ_1 x_1 + σ_2 x_2 + ... + σ_p x_p,

where the weights σ_i satisfy σ_i ≥ 0 for all i and σ_1 + σ_2 + ... + σ_p = 1, and where x_1, x_2, ..., x_p are the basic feasible solutions of (1.6). Relabeling the basic feasible solutions if necessary, we may assume that cx_i ≤ cx_1 for 1 ≤ i ≤ p. Now consider the objective value corresponding to x*. We have

    cx* = c(σ_1 x_1 + ... + σ_p x_p)
        = σ_1 (cx_1) + ... + σ_p (cx_p)
        ≤ σ_1 (cx_1) + ... + σ_p (cx_1)
        = (cx_1)(σ_1 + ... + σ_p)
        = cx_1.

Thus the objective value corresponding to x_1 is at least as large as that corresponding to the optimal solution x*, which implies that x_1 is also a solution of (1.6).

Solutions to Waypoints

Waypoint 1.3.1. The matrix equation

    [A I_3] [x; s] = b

has a solution given by x = 0 and s = b. Since the corresponding system has more variables than equations, it must have infinitely many solutions.
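Waypoint 1.3.2 below locates the extreme points by row reduction. The same information can be generated by brute force, solving the system once for each choice of three basic variables. The sketch below is ours, not the text's, and uses the FuelPro data:

> with(LinearAlgebra): with(combinat):
> M := Matrix([[1, 0, 1, 0, 0], [2, 2, 0, 1, 0], [3, 2, 0, 0, 1]]):   # columns correspond to x1, x2, s1, s2, s3
> b := Vector([8, 28, 32]):
> for B in choose([1, 2, 3, 4, 5], 3) do
    if Rank(M[.., B]) = 3 then                        # the three basic columns must be independent
      sol := LinearSolve(M[.., B], b);                # values of the basic variables
      printf("basic columns %a: values %a, feasible = %a\n",
             B, convert(sol, list), evalb(min(op(convert(sol, list))) >= 0));
    end if;
  end do:

The singular choice, in which x1 and s1 are the nonbasic variables, is skipped by the rank test, and the feasible outputs reproduce the five extreme points listed in Waypoint 1.3.2.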

29 .3. Basic Feasible Solutions 25 Waypoint.3.2. To solve the given matrix equation, we row reduce the augmented matrix The result is given by 4 3/2. 4 If we let s 2 and s 3 denote the free variables, then we may write s 2 = t, s 3 = s, where t, s R. With this notation, x = 4+t s, x 2 = 3 2t+s, and s = 4 t+s. We obtain the five extreme points as follows: [] 8. t=28 and s=32 yield x= and s= [] 8 2. t=2 and s=8yield x= and s= 2 8 [] 8 3. t=4and s= yield x= and s= t=s=yields x= and s= 8 5. t=and s=4 yield x= and s= 4 4 These extreme points may then be labeled on the feasible region. Waypoint The feasible region and various contours are shown in Figure?? [] 3/2 6/7 The extreme points are given by the x =, x 2 =, x 3 =, 2/7 [] and x 4 = x s 2. If c=[ 5, 2], A=, b=, x=, and s=, then the x 2 s 2 standard matrix form is given by

30 26 Chapter. An Introduction to Linear Programming x 2 z= z= 3 z= 5 z= 7.5 x FIGURE.: Feasible region with contours z =, z = 3, z = 5 and z= 7.5. maximize z=c x x [A I 2 ] = b s x, s, 3. The two constraint equations can be expressed as 3x + 2x 2 = 6 and 8x + 3x 2 = 2. Substituting x = x 2 = into this equations yields the basic solution x =, x 2 =, s = 6 and s 2 = 6. Performing the same operations using the other three extreme points from () yields the remaining three basic feasible solutions: (a) x =.5, x 2 =, s =.5 and s 2 =

31 .3. Basic Feasible Solutions 27 (b) x = 6 7, x 2= 2 7, s = and s 2 = (c) x =, x 2 = 3, s = and s 2 = 6 4. The contours in Figure?? [ indicate ] the optimal solution occurs at the.5 basic feasible solution x=, where z= 7.5.

32 28 Chapter. An Introduction to Linear Programming

33 Chapter 2

The Simplex Algorithm

2.1 The Simplex Algorithm
2.2 Alternative Optimal/Unbounded Solutions and Degeneracy
2.3 Excess and Artificial Variables: The Big M Method
2.4 Duality
2.5 Sufficient Conditions for Local and Global Optimal Solutions
2.6 Quadratic Programming

34 3 Chapter 2. The Simplex Algorithm 2. The Simplex Algorithm. For each LP, we indicate the tableau after each iteration and highlight the pivot entry used to perform the next iteration. (a) TABLE 2.: Initial tableau z x x 2 s s 2 s 3 RHS TABLE 2.2: Tableau after first iteration z x x 2 s s 2 s 3 RHS TABLE 2.3: Tableau after second iteration z x x 2 s s 2 s 3 RHS (b) The solution is x = 3 and x 2 = 4, with z=7.
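The arithmetic that carries one tableau to the next can be automated. The procedure below is our own illustration and does not appear in the text; it scales the pivot row and clears the remaining entries of the pivot column, assuming the tableau is stored as a Maple Matrix with rational entries.

> with(LinearAlgebra):
> pivot := proc(T0::Matrix, r::posint, c::posint)
    local T, i;
    T := copy(T0);
    RowOperation(T, r, 1/T[r, c], inplace = true);           # scale the pivot row
    for i to RowDimension(T) do
      if i <> r then
        RowOperation(T, [i, r], -T[i, c], inplace = true);   # zero out column c in every other row
      end if;
    end do;
    return T;
  end proc:

For example, if T holds one of the tableaus above, pivot(T, 3, 2) returns the tableau obtained by pivoting on the entry in row 3, column 2, the same step performed by hand between successive tables.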

35 2.. The Simplex Algorithm 3 TABLE 2.4: Final tableau z x x 2 s s 2 s 3 RHS /5 7/5 7 -/5 3/5 3 /5-3/5 2/5 -/5 4 TABLE 2.5: Initial tableau z x x 2 s s 2 s 3 RHS The solution is x = 4 and x 2 = 5, with z=3. (c) The solution is x = 2, x 2 =, and x 3 = 74, with z= 3 3. (d) This is a minimization problem, so we terminate the algorithm when all coefficients in the top row that correspond to nonbasic variables are nonpositive. The solution is x = and x 2 = 4, with z= If we rewrite the first and second constraints as x 3x 2 8 and x x 2 4 and incorporate slack variables as usual, our initial tableau becomes that shown in Table 2.3 The difficulty we face when attempting to perform the simplex algorithm, centers on the fact that the origin, (x, x 2 )=(, ) yields negative values of s and 2. In other words, the origin corresponds to a basic, but not basic feasible, solution. We cannot read off the initial basic feasible solution as we could for Exercise (d). 3. We list those constraints that are binding for each LP. (a) Constraints 2 and 3

36 32 Chapter 2. The Simplex Algorithm TABLE 2.6: Tableau after first iteration z x x 2 s s 2 s 3 RHS TABLE 2.7: Final tableau z x x 2 s s 2 s 3 RHS 5/2 3/ /2 -/2 5 -/2 /2 5 (b) Constraints and 3 (c) Constraints and 2 (d) Constraint 2 Solutions to Waypoints Waypoint 2... We first state the LP in terms of maximization. If we set z= z, then maximize z=5x 2x 2 (2.) x 2 3x + 2x 2 6 8x + 3x 2 2 x, x 2, where z= z. The initial tableau corresponding to LP (2.) is given by Since we are now solving a maximization problem, we pivot on the highlighted entry in this tableau, which yields the following: All entries in the top row corresponding to nonbasic variables are positive. Hence x = 3 2 and x 2 = is a solution of the LP. However, the objective value corresponding to this optimal solution, for the original LP, is given by z= z= 5 2.
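When the work is done with Maple's LPSolve rather than by hand, the sign change is handled for us: LPSolve minimizes by default, and maximizing the negated objective returns the negated optimal value. A small sketch with made-up data (this LP is ours, not the Waypoint's):

> with(Optimization):
> obj := 2*x1 + 3*x2:
> cons := {x1 + x2 >= 4, x1 - x2 <= 2}:
> LPSolve(obj, cons, assume = nonnegative);             # minimum z = 9 at x1 = 3, x2 = 1
> LPSolve(-obj, cons, assume = nonnegative, maximize);  # maximum of zbar = -z is -9, at the same point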

37 2.. The Simplex Algorithm 33 TABLE 2.8: Initial tableau z x x 2 x 3 s s 2 s 3 RHS TABLE 2.9: Tableau after first iteration z x x 2 x 3 s s 2 s 3 RHS -2/3 2/3 5/3 7/3 2/3 /3 /3 4/ TABLE 2.: Final tableau z x x 2 x 3 s s 2 s 3 RHS 3/2 /6 74/3 /2 -/6 /3 /2 -/4 /4 2 2 /2 -/2 TABLE 2.: Initial tableau z x x 2 s s 2 RHS TABLE 2.2: Final tableau z x x 2 s s 2 RHS TABLE 2.3: Initial tableau z x x 2 s s 2 s 3 RHS

38 34 Chapter 2. The Simplex Algorithm TABLE 2.4: Initial tableau z x x 2 s s 2 s 3 RHS TABLE 2.5: Final tableau z x x 2 s s 2 s 3 RHS 3/8 5/8 5/2-3/8 -/8 /2 7/8-3/8 3/2 3/8 /8 3/2

39 2.2. Alternative Optimal/Unbounded Solutions and Degeneracy Alternative Optimal/Unbounded Solutions and Degeneracy. Exercise The initial tableau is shown in Table TABLE 2.6: Initial tableau for Exercise z x x 2 s s 2 RHS After performing two iterations of the simplex algorithm, we obtain the following: TABLE 2.7: Tableau after second iteration for Exercise z x x 2 s s 2 RHS At this stage, the basic variables are x = 3 and x 2 =, with a corresponding objective value z=2. The nonbasic variable, s 2, has a coefficient of zero in the top row. Thus, letting s 2 become basic will not change the value of z. In fact, if we pivot on the highlighted entry in Table 2.7, x 2 replaces s 2 as a nonbasic variable, x = 6, and the objective value is unchanged. 2. Exercise 2 For an LP minimization problem to be unbounded, there must exist, at some stage of the simplex algorithm, a basic feasible solution in which a nonbasic variable has a negative coefficient in the top row and nonpositive coefficients in the remaining rows. 3. Exercise 3 (a) For the current basic solution to not be optimal, it must be the case that a<. However, if the LP is bounded, then the ratio test forces b>. Thus, we may pivot on row and column containing b, in which case, we obtain the following tableau:

40 36 Chapter 2. The Simplex Algorithm TABLE 2.8: Tableau obtained after pivoting on row and column containing b z x x 2 s s 2 RHS a b 3 a b 2 a b b b 3+ 2 b b b 2 b Thus, x = 3+ 2 b, x 2=, and z= 2 a b. (b) If the given basic solution was optimal, then a. However, if the LP has alternative optimal solutions, then a=. In this case, pivoting on the same entry in the original tableau as we did in (b), we would obtain the result in Table 2.8, but with a=. Thus, x = 3+ 2 b, x 2=, and z=. (c) If the LP is unbounded, then a<and b. 4. Exercise 4 (a) The coefficient of the nonbasic variable, s, in the top row of the tableau is negative. Since the coefficient of s in the remaining two rows is also negative, we see that the LP is unbounded. (b) So long as s 2 remains non basic, the equations relating s to each of x and x 2 are given by x 2s = 4 and x 2 s = 5. Thus, for each unit of increase in s, x increases by 2 units; for each unit of increase in s, x 2 increases by unit. (c) Eliminating s from the equations x 2s = 4 and x 2 s = 5, we obtain x 2 = 2 x + 3. (d) When s =, x = 4 and x 2 = 5. Since s, x 4 and x 2 5. Thus,we sketch that portion of the line x 2 = 2 x + 3 for which x 4 and x 2 5. (e) Since z=,, z=x + 3x 2, and x 2 = 2 x + 3, we have a system 5. Exercise 5 of equations whose solution is given by x = and x 2 = 6 5. (a) The LP has 2 decision variables and 4 constraints. Inspection of the

41 2.2. Alternative Optimal/Unbounded Solutions and Degeneracy 37 graph shows that the constraints are given by 2x + x 2 x + x x + x x + x 2 2. By introducing slack variables, s, s 2, s 3, and s 4, we can convert the preceding list of inequalities to a system of equations in six variables, x, x 2, s, s 2, s 3, and s 4. Basic feasible solutions are then most easily determined by substituting each of the extreme points into the system of equations and solving for the slack variables. The results are as follows: 5 [] x = = s 6 4 3/2 2 9/ = = = [] 4 Observe that the extreme point x= yields slack variable values 2 for which s = s 2 = s 3 =. This means that when any two of these three variables are chosen to be nonbasic, the third variable, which is basic, must be zero. Hence, the LP is degenerate. From a graphical perspective, the LP is degenerate because the boundaries of three 4 of the constraints intersect at the single extreme point, x= 2 (b) If the origin is the initial basic feasible solution, at least two simplex algorithm iterations are required before a degenerate basic feasible solution results. This occurs when the objective coefficient of x is larger [] than that of x 2. In this case, the intermediate iteration yields 5 x=. If the objective coefficient of x 2 is positive and larger than that of x, then at most three iterations are required.

42 38 Chapter 2. The Simplex Algorithm Solutions to Waypoints Waypoint The current basic feasible solution consists of x = a, s = 2, and x 2 = s 2 =. The LP has alternative optimal solutions if a nonbasic variable has a coefficient of zero in the top row of the tableau. This occurs when a= or a=2. 2. The LP is unbounded if both a< and a 3<, i.e., if <a 3. The LP is also unbounded if both 2 a< and a 4, i.e., if 2<a 4.
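The two conditions in this Waypoint amount to a mechanical test on a tableau. The helper below is hypothetical (ours, not the text's); it assumes a maximization tableau stored as a Matrix whose first row is the objective row, and it inspects the columns of the nonbasic variables supplied by the caller.

> with(LinearAlgebra):
> classify := proc(T::Matrix, nonbasic::list)
    local j, col;
    for j in nonbasic do
      col := convert(T[2 .. RowDimension(T), j], list);
      if T[1, j] < 0 and max(op(col)) <= 0 then
        printf("column %d: the objective can increase without bound (LP unbounded)\n", j);
      elif T[1, j] = 0 then
        printf("column %d: zero coefficient in the top row (alternative optima possible)\n", j);
      end if;
    end do;
  end proc: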

43 2.3. Excess and Artificial Variables: The Big M Method Excess and Artificial Variables: The Big M Method. (a) We let e and a denote the excess and artificial variables, respectively, that correspond to the first constraint. Similarly, we let e 2 and a 2 correspond to the second constraint. Using M=, our objective becomes one of minimizing z= x + 4x 2 + a + a 2. If s 3 is the slack variable corresponding to the third constraint our initial tableau is as follows: TABLE 2.9: Initial tableau z x x 2 e a e 2 a 2 s 3 RHS To obtain an initial basic feasible solution, we pivot on each of the highlighted entries in this tableau. The result is the following: TABLE 2.2: Tableau indicting initial basic feasible solution z x x 2 e a e 2 a 2 s 3 RHS Three iterations of the simplex algorithm are required before all coefficients in the top row of the tableau that correspond to nonbasic variables are nonpositive. The final tableau is given by TABLE 2.2: Final tableau z x x 2 e a e 2 a 2 s 3 RHS -3/4-397/4 - -7/4 5/2 -/4 /4 -/4 3/2 -/4 /4 3/4 7/2 -/2 /2 - /2 The solution is x = 7 2 and x 2= 3 2, with x 2= 5 2.

44 4 Chapter 2. The Simplex Algorithm (b) We let e and a denote the excess and artificial variables, respectively, that correspond to the first constraint. Similarly, we let e 3 and a 3 correspond to the third constraint. Using M=, our objective becomes one of minimizing z=x + x 2 + a + a 3. If s 2 is the slack variable corresponding to the second constraint our initial tableau is as follows: TABLE 2.22: Initial tableau z x x 2 e a s 2 e 3 a 3 RHS To obtain an initial basic feasible solution, we pivot on each of the highlighted entries in this tableau. The result is the following: TABLE 2.23: Tableau indicting initial basic feasible solution z x x 2 e a s 2 e 3 a 3 RHS Three iterations of the simplex algorithm lead to a final tableau given by TABLE 2.24: Final tableau z x x 2 e a s 2 e 3 a 3 RHS -3/7-697/7 -/7-2 -5/7 5/7 3/7-6 -/7 /7 2/7 6-2/7 2/7-3/7 6 The solution is x = 6 and x 2 = 6, with x 2 = 2. (c) We let a denote the artificial variable corresponding to the first (equality) constraint, and we let e 2 and a 2 denote the excess and artificial variables, respectively, that correspond to the second constraint. Using M =, our objective becomes one of maximizing z=x + 5x 2 + 6x 3 a a 2. The initial tableau is given by To obtain an initial basic feasible solution, we pivot on each of the highlighted entries in this tableau. The result is the following:

45 2.3. Excess and Artificial Variables: The Big M Method 4 TABLE 2.25: Initial tableau z x x 2 x 3 a e 2 a 2 RHS TABLE 2.26: Tableau indicting initial basic feasible solution z x x 2 x 3 a e 2 a 2 RHS Three iterations of the simplex algorithm are required before all coefficients in the top row of the tableau that correspond to nonbasic variables are nonnegative. The final tableau is given by TABLE 2.27: Final tableau z x x 2 x 3 a e 2 a 2 RHS /2 2 /2 25 The solution is x = x 2 = and x 3 = 25, with z=5. Solutions to Waypoints Waypoint The LP is given by where A= ] s= [ s s 2. maximize z=c x [] x [A I 2 ] = b s x, s, , c= [ ] 3.2, b=, x= [ x x 2 ], and

46 42 Chapter 2. The Simplex Algorithm Waypoint Let s denote the slack variable for the first constraint, and let e 2 and a 2 denote the excess and artificial variables, respectively, for the second constraint. Using M =, our objective becomes one of minimizing z=45.55x x 2 + a 2. The initial tableau is as follows: TABLE 2.28: Initial tableau z x x 2 s e 2 a 2 RHS To obtain an initial basic feasible solution, we pivot on the highlighted entry in this tableau. The result is the following: TABLE 2.29: Tableau indicting initial basic feasible solution z x x 2 s e 2 a 2 RHS Two iteration of the simplex algorithm yield the final tableau: TABLE 2.3: Final tableau z x x 2 s e 2 a 2 RHS Thus x is nonbasic so that the total foraging time is minimized when x = grams of grass and x grams of forb, in which case z 32.7 minutes

47 2.4. Duality Duality. At each stage of the simplex algorithm, the decision variables in the primal LP correspond to a basic feasible solutions. Values of the decision variables in the dual LP are recorded by the entries of the vector y. Only at the final iteration of the algorithm, do the dual variables satisfy both ya c and y, i.e., only at the final iteration do the entries of y constitute a basic feasible solution of the dual. 2. The given LP can be restated as maximize z=3x + 4x 2 x 4 x + 3x 2 5 x + 2x 2 5 x x 2 9 x + x 2 6 x x 2 6 x, x 2. In matrix inequality form, this becomes maximize z=c x Ax b x, 4 3 where A=, c= [ 3 4 ] 5 5, b=, and x= The corresponding dual is given by minimize w=y b ya c y, [ x x 2 ].

48 44 Chapter 2. The Simplex Algorithm where y= [ y y 2 y 3 y 4 y 5 ] y 6. In expanded form this is identical to minimize w=4y + 5y 2 5y 3 + 9y 4 + 6(y 5 y 6 ) y + y 2 + y 3 y 4 + (y 5 y 6 ) 3 3y 2 y 3 + y 4 + (y 5 y 6 ) y, y 2, y 3, y 4, y 5, y 6. Now define the new decision variable ỹ= y 5 y 6, which is unrestricted in sign due to the fact y 5 and y 6 are nonnegative. Since y 5 and y 6, together, correspond to the equality constraint in the primal LP, we see that ỹ is a dual variable unrestricted in sign that corresponds to an equality constraint in the primal. 3. The standard maximization LP can be written in matrix inequality form as maximize z=c t x Ax b x, where x belongs tor n, c is a column vector inr n, b belongs tor m, and A is an m-by-n matrix. The dual LP is then expressed as minimize w=y t b y t A c t y, where we have assumed the dual variable vector, y, is a column vector inr m. But, by transpose properties, this is equivalent to maximize w= b t y A t y c y. The preceding LP is a maximization problem, whose dual is given by

49 2.4. Duality 45 minimize z=x t ( c) x t ( A t ) b t x. Using transpose properties again, we may rewrite this final LP as which is the original LP. maximize z=c t x Ax b x, 4. (a) The original LP has four decision variables and three constraints. Thus, the dual LP has three decision variables and four constraints. (b) The current primal solution is not optimal, as the nonbasic variable, s, has a negative coefficient in the top row of the tableau. By the ratio test, we conclude that s will replace x as a basic variable and that s = 4 in the updated basic feasible solution. Since the coefficient of s in the top row is 2, the objective value in the primal will increase to 28. By weak duality, we can conclude that the objective value in the solution of the dual LP is no less than The dual of the given LP is which is infeasible. minimize w= y 2y 2 y y 2 2 y + y 2 y, y 2, 6. (a) The current tableau can also be expressed in partitioned matrix form as z x y yb [, ] [,, ] MA M Mb Comparing this matrix to that given in the problem, we see that

50 46 Chapter 2. The Simplex Algorithm /3 M= /2. Thus, /3 b=m (Mb) 3 6 = 3/2 3 8 = 2, 6 and A=M (MA) 3 2/3 = 3/2 3 4/3 2 3 = 2 3/2. 4 Using A and b, we obtain the original LP: maximize z=2x + 3x 2 2x + 3x 2 8 2x x 2 2 4x + x 2 6 x, x 2. (b) The coefficients of x and x 2 in the top row of the tableau are given by c+ya= [2, 3]+ [,, ]A = [ 4, ] The unknown objective value is z=yb 8 = [,, ] 2 6 = 8. (c) The coefficient of x in the top row of the tableau is -4, so an additional iteration of the simplex algorithm is required. Performing

51 2.4. Duality 47 the ratio test, we see that x replaces s 2 as a nonbasic variable. The updated decision variables become x = and x 2 = 2 3, with a corresponding objective value of z = 22. The slack variable coefficients in the top row of the tableau are s = 3, s 2= 4 3, and s 3=, which indicates that the current solution is optimal. Therefore the /3 dual solution is given by y= 4/3 with objective value w= Since the given LP has two decision variables and four constraints, the corresponding dual has four decision variables, y, y 2, y 3, and y 4 along with two constraints. In expanded form, it is given by minimize z=4y + 6y y y 4 y+2y 3 + 2y 4 y + y 2 + 3y 3 + y 4 5 y, y 2, y 3, y 4. To solve the dual LP using the simplex algorithm, we subtract excess variables, e and e 2, from the first and second constraints, respectively, and add artificial variables, a and a 2. The Big M method with M= yields an initial tableau given by TABLE 2.3: Initial tableau w y y 2 y 3 y 4 e a e 2 a 2 RHS The final tableau is TABLE 2.32: Final tableau w y y 2 y 3 y 4 e a e 2 a 2 RHS -/2-3 -5/2-985/ /2 -/2 -/2 /2 /2 5/2-2 3/2-3/2-7/2. The excess variable coefficients in the top row of this tableau are the

52 48 Chapter 2. The Simplex Algorithm additive inverses of the decision variable values in the solution of the primal. Hence, the optimal solution of the primal is given by x = 5 2 and x 2 = 6, with corresponding objective value z= By the complementary slackness property, x is the optimal solution of y y the LP provided we can find y = 2 inr y 4 such that 2 y 4 [ y A c ] i [x ] i = i 6 and We have [y ] j [b Ax ] j = j 4. b Ax =, 6/7 so that in a dual solution it must be the case that y 3 =. Furthermore, when y 3 =, y + 7y 2 + 7y 4 5 2y 5y 2 + 8y 4 4y y A c= + 2y 2 3y 4. 3y + y 2 + 5y 4 4 4y + 2y 2 + 2y 4 6y + 3y 2 + 4y 4 2 By complementary slackness again, the first, third and fourth components of this vector must equal zero, whence we have a system of 3 linear equations, whose solution is given by y = 3 24, y 2 = 37 24, and y 3 = If we define y = 3/24 37/24 47/2, then w=y b = 53 7 = cx so that x is the optimal solution of the LP by Strong Duality.

53 2.4. Duality Player s optimal mixed strategy is given by the solution of the LP maximize z Ax ze e t x= x, and Player 2 s by the solution of the corresponding dual LP, minimize w ya we t y e= y. [] Here e=. The solution of the first LP is given by x =.5 solution of the second by x =.5.7 and the.3. The corresponding objective value for each LP is z=w=.5, thereby indicating that the game is biased in Player s favor.. (a) Since A is an m-by-n matrix, b belongs tor m, and c is a row vector inr n, the matrix M has m+n rows and m+n [ columns. ] Furthermore, the transpose of a partitioned matrix,, is given A B C D A t C by t, which can be used to show M t = M. B t D t y t x (b) Define vecw =. Then w belongs tor m+n+ and has nonnegative components since x and y are primal feasible and dual fea- sible vectors belonging tor n andr m, respectively. Furthermore, [w ] m+n+ = >. We have m m A b y t Mw = A t m n c t x b t c Ax + b = A t y t c t b t y t + cx

54 5 Chapter 2. The Simplex Algorithm since the primal feasibility of x dictates Ax b, the dual feasibility of y and transpose properties imply A t y t c t, and strong duality forces b t y t = cx. (c) Lettingκ, x, and y be defined as in the hint, we have Mw m m A b κy t = A t m n c t κx b t c κ m m A b y =κ A t m n c t t b t x c Ax + b =κ A t y t c t b t y t + cx Sinceκ >, we conclude that Ax b, A t y t ct, and cx b t y t. Transpose properties permit us to rewrite the second of these matrix inequalities as y A c. Thus x and y are primal feasible and dual feasible, respectively. Moreover, the third inequality is identical to cx y b if we combine the facts b t y t = y b, each of these two quantities is a scalar, and the transpose of a scalar is itself. But the Weak Duality Theorem also dictates cx y b, which implies cx = y b. From this result, we conclude x and y constitute the optimal solutions of the (4.) and (4.3), respectively. Solutions to Waypoints Waypoint In matrix inequality form, the given LP can be written as where vecx= given by x x 2 x 3 maximize z=c x Ax b x, 2 2, c=[3, 5, 2], A=, and b= 3 5 = The dual LP is

55 2.4. Duality 5 minimize w=y b ya c y, where y= [ y y 2 y 3 ] y 4. In expanded form, this is equivalent to minimize w=2y + 2y 2 + 4y 3 + 3y 4 2y + y 2 + 3y 3 y 4 3 y + 2y 2 + y 3 + y 4 5 y + y 2 + 5y 3 + y 4 2 y, y 2, y 3, y 4. 2/3 The original LP has its solution given by x = 2/3 with z = 6 3. The solution of the dual LP is y = [ /3 7/3 ] with w = 6 3. Waypoint If Steve always chooses row one and Ed chooses columns one, two, and three with respective probabilities, x, x 2, and x 3, then Ed s expected winnings are given by f (x, x 2, x 3 )= [ ] A = x x 2 + 2x 3, which coincides with the first entry of the matrix-vector product, Ax. By similar reasoning, f 2 (x, x 2, x 3 )= [ ] A x x 2 and = 2x + 4x 2 x 3 f 3 (x, x 2, x 3 )= [ ] A = 2x + 2x 3, which equal the second and third entries of Ax, respectively. x x 2 x 3 x 3 x x 2 x 3

56 52 Chapter 2. The Simplex Algorithm 2. If Ed always chooses column one and Steve chooses rows one, two, and three with respective probabilities, y, y 2, and y 3, then Steve s expected winnings are given by g (y, y 2, y 3 )= [ y y 2 y 3 ] A = y + 2y 2 2y 3, which coincides with the first entry of the product, ya. By similar reasoning, g 2 (y, y 2, y 3 )= [ y y 2 y 3 ] A = y + 4y 2 and g 3 (y, y 2, y 3 )= [ y y 2 y 3 ] A = 2y y 2 + 2y 3, which equal the second and third entries of ya, respectively. Waypoint To assist in the formulation of the dual, we view the primal objective, z, as the difference of two nonnegative decision variables, z and z 2. Thus the primal LP can be expressed in matrix inequality form as maximize [ z 3 ] z 2 x e e A z 3 e t z 2 e t x x and z, z 2. If the dual LP has its vector of decision variables denoted by [ y w w 2 ], where y is a row vector inr 3 with nonnegative components, and both w and w 2 are nonnegative as well, then the dual is given by

57 2.4. Duality 53 minimize [ 3 y w w 2 ] [ e e A y w w 2 ] e t 3 e t y 3 and w, w 2. If we write w=w w 2, then this formulation to the dual simplifies to be minimize w ya we t y e= y.

58 54 Chapter 2. The Simplex Algorithm 2.5 Sufficient Conditions for Local and Global Optimal Solutions. If f (x, x 2 )=e (x2 +x2 2), then direct computations lead to the following results: f (x)= 2x f (x, x2) 2x 2 f (x, x 2 ) and [ ( 2+4x 2 H f (x)= ) f (x, x2) 4x ] x 2 f (x, x 2 ) 4x x 2 f (x, x 2 ) ( 2+4x 2 ) f (x, x2) 2 so that f (x ) and H f (x ) Thus, Q(x)= f (x )+ f (x ) t (x x )+ 2 (x x ) t H f (x )(x x ) x.895x x x x x 2 The quadratic approximation [ together ] with f are graphed in Figure The solution is given by x=, with z= Calculations establish that Hessian of f is given by H f (x) = φ(x)a, whereφ(x)= and where (74+.5x + x 2 + x 3 ) / A=

59 2.5. Sufficient Conditions for Local and Global Optimal Solutions 55 FIGURE 2.: Plot of f with quadratic approximation Observe thatφ(x)>for x in S and that A is positive semidefinite. (Its eigenvalues are.2,.423, and zero.) Now suppose that x belongs to S and thatλ is an eigenvalue of H f (x ) with corresponding eigenvector v. Then λ v=h f (x )v =φ(x )Av, implying λ φ(x ) is an eigenvalue of A. Sinceφ(x )> and each eigenvalue of A is nonnegative, thenλ must be nonnegative as well. Hence, H f (x ) is positive semidefinite, and f is convex.
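Hessian and eigenvalue computations of this kind are easy to carry out in Maple. A short sketch, using the function f(x_1, x_2) = x_1^2 + x_1 x_2 + 2 x_2^2 - 4 x_2, which we chose to match the gradient and Hessian printed in Exercise 3(a) below; the VectorCalculus and LinearAlgebra packages are assumed.

> f := x1^2 + x1*x2 + 2*x2^2 - 4*x2:
> solve({diff(f, x1) = 0, diff(f, x2) = 0}, {x1, x2});    # the critical point
> H := VectorCalculus:-Hessian(f, [x1, x2]);              # Matrix([[2, 1], [1, 4]])
> LinearAlgebra:-Eigenvalues(H);                          # 3 + sqrt(2) and 3 - sqrt(2), both positive, so f is strictly convex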

60 56 Chapter 2. The Simplex Algorithm 2x + x 3. (a) We have f (x) = 2, which yields the critical point, x + 4x 2 4 [] 2 x =. Since H f (x)= has positive eigenvalues,λ=3± 2, 4 we see that f is strictly convex onr 2 so that x is a global minimum. 2x x (b) We have f (x)= 2 + 4, which yields the critical point, x + 4x 2 8 /3 x =. Since H /3 f (x) = [ 2 4 ] has mixed-sign eigenvalues,λ=8±2 34, x is a saddle point. 4x + 6x (c) We have f (x)= 2 6, which yields the critical point, 6x x x =. Since H f (x) = has negative eigenvalues, 6 λ= 7±3 5, we see that f is strictly concave onr 2 so that x is a global maximum. [ 4x 3 (d) We have f (x)= 24x2 + 48x ] 32, which yields a single 8x critical point at x =. The Hessian simplifies to H /2 f (x) = 2(x 2) 2 and has eigenvalues of 2(x 8 2) 2 and 8. Thus, f is concave onr 2 and x is a global maximum. cos(x ) cos(x (e) We have f (x) = 2 ), which yields four critical sin(x ) sin(x 2 ) π/2 points: x =± and x =±. The Hessian is π/2 sin(x ) cos(x H f (x)= 2 ) cos(x ) sin(x 2 ), cos(x ) sin(x 2 ) sin(x ) cos(x 2 ) π/2 which yields a repeated eigenvalue,λ=, when x =, a π/2 repeated eigenvalueλ=, when x =, and eigenvalues π/2 λ=±, when x =. Thus, x ±π/2 = is a local maximum, π/2 x = is a local minimum, and x = are saddle points. ±π/2 [ 2x /(x (f) We have f (x)= 2 + ) ], which yields a single critical point x 2

61 2.5. Sufficient Conditions for Local and Global Optimal Solutions 57 at the origin. Since f is differentiable and nonnegative onr 2, we conclude immediately that the origin is a global minimum. In addition, H f (x)= 2( x ) 2 (x 2 +)2 The eigenvalues of this matrix are positive when <x <. Thus f is is strictly convex its given domain. ( ) (g) If f (x, x 2 )=e x 2+x x, then f (x)= f (x) so that f has a single critical point at the origin. The Hessian simplifies to H f (x)= x 2 x x 2 f (x) x x 2, which has eigenvalues in terms of x given by x 2 2 f (x) λ= (x 2 +. Since f (x)>and x x2 ) f (x) 2 + x2 <, we have that 2 2 f is strictly concave on its given domain and the origin is a global minimum. 4. We have f (x)=ax b, which is the zero vector precisely when x = A b. The Hessian of f is simply A, so f is strictly convex (resp. strictly concave) onr n when A is positive definite (resp. negative definite). Thus, x is the unique global minimum (resp. unique global maximum) of f. When A= 2, b= 3, and c=6, x 4 =. x 2 7/5. The eigenvalues of 9/5 A areλ= 5± 5, so x is the global minimum. 2 n 5. Write n= n 2. Since n, we lose no generality by assuming n 3. n 3 Solving n t x=d for x 3 yields x 3 as a function of x and x 2 : f :R 2 Rwhere f (x, x 2 )=x 2 + x2 2 + (d n x n 2 x 2 ) 2. The critical point of f is most easily computed using Maple and simplifies to x = [ n d/ n 2, n d/ n 2] The eigenvalues of the Hessian are λ = 2 andλ 2 = 2 n 2 so that f is strictly convex and x n 2 is the global 3 minimum. The corresponding shortest distance is d n n 2 3

62 58 Chapter 2. The Simplex Algorithm /4 For the plane x + 2x 2 + x 3 =, n= 2 and d=. In this case, x = /7 3 3/4 with a corresponding distance of. 4 4x 3 6. The gradient of f is f (x)=, which vanishes at the origin. Since f 2x 2 is nonnegative and f ()=, we see that x = is the global minimum. The Hessian of f at the origin is H f ()= [ 2 ], which is positive semidefinite. The function f (x, x 2 )= x 4 + x2 has a saddle point at the origin because 2 f (x, )= x 4 has a minimum at x = and f (, x 2 )=x 2 has a maximum 2 at x 2 =. The Hessian of f at the origin is again H f ()= [ 2 ]. 7. The function f (x)=x 4 + x4 2 has a single critical point at x =, which is a global minimum since f is nonnegative and f ()=. The Hessian evaluated at x is the zero matrix, whose only eigenvalue is zero. Similarly, f (x)= x 4 x4 2 has a global maximum at x =, where its Hessian is also the zero matrix. 8. (a) Player s optimal mixed strategy is the solution of maximize = z 2x 3x 2 z x + 4x 2 z x + x 2 = x, x 2, 7/ which is given by x =, with corresponding objective value 3/ z = 2. Player 2 s optimal mixed strategy is the solution of the corresponding dual LP, minimize = w 2y y 2 w 3y + 4y 2 w y + y = y, y 2,

63 2.5. Sufficient Conditions for Local and Global Optimal Solutions 59 /2 which is y =, with corresponding objective value w /2 = 2. (b) We have t ] y f (x, y )= x A[ y x = x y 5x 7y + 4, whose gradient, f (x, y ) vanishes at x = 7, y =. The Hessian 2 of f is a constant matrix having eigenvaluesλ=± so that this ( 7 critical point is a saddle point. The game value is f, ) = 2 2. (c) The only solution of f (x, y )= 2 is simply ( 7, 2 (d) A game value 2% higher than 2 is 3 5. The equation f (x, y )= 3 5 simplifies to x y 5x 7y =. The graph of this relation is shown in Figure??. 9. Calculations, which are most easily performed with Maple, establish that f (x, y ) = x t ] y A[ has a saddle point at x y x = d b a+d b c, y = f ( d c a+d b c. Since d b a+d b c, d c a+d b c ) = ). ad bc a+d b c, the game is fair provided det A, or, in other words, if A is noninvertible.
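The symbolic computation referred to in Exercise 9 can be set up as follows. This is our own sketch; we assume the expected payoff is the bilinear form built from the mixed strategies (y, 1 - y) and (x, 1 - x) and a general 2-by-2 payoff matrix A.

> with(LinearAlgebra):
> A := Matrix([[a, b], [c, d]]):
> f := expand(Transpose(<y, 1 - y>) . A . <x, 1 - x>):
> crit := solve({diff(f, x) = 0, diff(f, y) = 0}, {x, y});   # x = (d-b)/(a+d-b-c), y = (d-c)/(a+d-b-c)
> simplify(eval(f, crit));                                   # (a*d - b*c)/(a+d-b-c), zero exactly when det(A) = 0, i.e., a fair game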

64 6 Chapter 2. The Simplex Algorithm y x FIGURE 2.2: The relation x y 5x 7y =.

65 2.6. Quadratic Programming Quadratic Programming 4. (a) The Hessian of f is 6. If p= 2, A=, and b=, then the problem is given as 3 minimize f (x)= 2 xt Qx+p t x Ax=b. (b) The eigenvalues of Q areλ = (repeated) andλ = 2, so Q is positive definite and the problem is convex. Thus, [ µ x ] m m A b = A t Q p 44/9 65/9 /3. = 4/3 /3 The problem then has its solution given by x = 4/3. The corresponding objective value is f (/3, 4/3)= (a) The [ matrix ] Q is indefinite. However, the partitioned matrix, m m A A t is still invertible so that Q [ µ x ] m m A b = A t Q p 259/3 78/3 = 9/3. /3 42/3

66 62 Chapter 2. The Simplex Algorithm 9/3 Thus, the problem has a unique KKT point, x = /3, with 42/3 259/3 corresponding multiplier vector,µ =. The bordered 78/3 Hessian becomes 2 4 B= In this case there are n=3 decision variables and m=2 equality constraints. Since the determinants of both B and its leading principal minor of order 4 are positive, we see that x is the solution. 6 (b) The problem has a unique KKT point given by x = 3/4 with /4 5/2 corresponding multiplier vector,λ = 3. At x only the first two constraints of Cx d are binding. Thus, we set C= and d=, in which case B= There are n = 3 decision variables, m = equality constraints, and k = 2 binding inequality constraints. Since the determinant of B and its leading principal minor of order 4 are positive, we see that x is the solution. 7/4 / (c) The are two KKT points. The first is given by x = with 7/8.445 corresponding multiplier vectors,λ andµ

67 2.6. Quadratic Programming 63 The second constraint is binding at x, so we set C= [ 3 ] and d= [ ], in which case, A B= 3 C A t C t Q = There are n = 4 decision variables, m = 3 equality constraints, and k= binding inequality constraints. Since n m k=, we only need to check that the determinant of B itself has the same sign as ( ) m+k =, which it does. Hence, x = whereas f 7/4 / 7/8 is a solution The second KKT point is given by x with corresponding multiplier vectors,λ = andµ [] In this case neither inequality constraint is binding. Performing an analysis simi-.67 lar to that done for the first KKT point leads to a situation in which the bordered Hessian test is inconclusive. In fact, f (x ) 2., 7/4 / 7/ To show that m m A S A t = S AQ Q Q A t S Q (Q+A t S A)Q, (2.2) m m A S we will verify that the product of A t and S AQ Q Q A t S Q (Q+A t S A)Q


More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

MAT016: Optimization

MAT016: Optimization MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The

More information

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Worked Examples for Chapter 5

Worked Examples for Chapter 5 Worked Examples for Chapter 5 Example for Section 5.2 Construct the primal-dual table and the dual problem for the following linear programming model fitting our standard form. Maximize Z = 5 x 1 + 4 x

More information

Sensitivity Analysis

Sensitivity Analysis Dr. Maddah ENMG 500 /9/07 Sensitivity Analysis Changes in the RHS (b) Consider an optimal LP solution. Suppose that the original RHS (b) is changed from b 0 to b new. In the following, we study the affect

More information

Solutions to Review Questions, Exam 1

Solutions to Review Questions, Exam 1 Solutions to Review Questions, Exam. What are the four possible outcomes when solving a linear program? Hint: The first is that there is a unique solution to the LP. SOLUTION: No solution - The feasible

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm How to Convert an LP to Standard Form Before the simplex algorithm can be used to solve an LP, the LP must be converted into a problem where all the constraints are equations and

More information

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2

More information

Simplex Method for LP (II)

Simplex Method for LP (II) Simplex Method for LP (II) Xiaoxi Li Wuhan University Sept. 27, 2017 (week 4) Operations Research (Li, X.) Simplex Method for LP (II) Sept. 27, 2017 (week 4) 1 / 31 Organization of this lecture Contents:

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The standard Linear Programming (LP) Problem Graphical method of solving LP problem

More information

New Artificial-Free Phase 1 Simplex Method

New Artificial-Free Phase 1 Simplex Method International Journal of Basic & Applied Sciences IJBAS-IJENS Vol:09 No:10 69 New Artificial-Free Phase 1 Simplex Method Nasiruddin Khan, Syed Inayatullah*, Muhammad Imtiaz and Fozia Hanif Khan Department

More information

UNIT-4 Chapter6 Linear Programming

UNIT-4 Chapter6 Linear Programming UNIT-4 Chapter6 Linear Programming Linear Programming 6.1 Introduction Operations Research is a scientific approach to problem solving for executive management. It came into existence in England during

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable

More information

ECE 307 Techniques for Engineering Decisions

ECE 307 Techniques for Engineering Decisions ECE 7 Techniques for Engineering Decisions Introduction to the Simple Algorithm George Gross Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign ECE 7 5 9 George

More information

MATH 445/545 Homework 2: Due March 3rd, 2016

MATH 445/545 Homework 2: Due March 3rd, 2016 MATH 445/545 Homework 2: Due March 3rd, 216 Answer the following questions. Please include the question with the solution (write or type them out doing this will help you digest the problem). I do not

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

Section 4.1 Solving Systems of Linear Inequalities

Section 4.1 Solving Systems of Linear Inequalities Section 4.1 Solving Systems of Linear Inequalities Question 1 How do you graph a linear inequality? Question 2 How do you graph a system of linear inequalities? Question 1 How do you graph a linear inequality?

More information

The Simplex Algorithm and Goal Programming

The Simplex Algorithm and Goal Programming The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is

More information

IE 400: Principles of Engineering Management. Simplex Method Continued

IE 400: Principles of Engineering Management. Simplex Method Continued IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for

More information

Chapter 4 The Simplex Algorithm Part I

Chapter 4 The Simplex Algorithm Part I Chapter 4 The Simplex Algorithm Part I Based on Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan Lewis Ntaimo 1 Modeling

More information

AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1

AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1 AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1 Prof. Yiling Chen Fall 2018 Here are some practice questions to help to prepare for the midterm. The midterm will

More information

Brief summary of linear programming and duality: Consider the linear program in standard form. (P ) min z = cx. x 0. (D) max yb. z = c B x B + c N x N

Brief summary of linear programming and duality: Consider the linear program in standard form. (P ) min z = cx. x 0. (D) max yb. z = c B x B + c N x N Brief summary of linear programming and duality: Consider the linear program in standard form (P ) min z = cx s.t. Ax = b x 0 where A R m n, c R 1 n, x R n 1, b R m 1,and its dual (D) max yb s.t. ya c.

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5, Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds

More information

Sensitivity Analysis and Duality

Sensitivity Analysis and Duality Sensitivity Analysis and Duality Part II Duality Based on Chapter 6 Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3)

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3) The Simple Method Gauss-Jordan Elimination for Solving Linear Equations Eample: Gauss-Jordan Elimination Solve the following equations: + + + + = 4 = = () () () - In the first step of the procedure, we

More information

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive 3.4 Anticycling Lexicographic order In this section we discuss two pivoting rules that are guaranteed to avoid cycling. These are the lexicographic rule and Bland s rule. Definition A vector u R n is lexicographically

More information

MATH 373 Section A1. Final Exam. Dr. J. Bowman 17 December :00 17:00

MATH 373 Section A1. Final Exam. Dr. J. Bowman 17 December :00 17:00 MATH 373 Section A1 Final Exam Dr. J. Bowman 17 December 2018 14:00 17:00 Name (Last, First): Student ID: Email: @ualberta.ca Scrap paper is supplied. No notes or books are permitted. All electronic equipment,

More information

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

Optimization - Examples Sheet 1

Optimization - Examples Sheet 1 Easter 0 YMS Optimization - Examples Sheet. Show how to solve the problem min n i= (a i + x i ) subject to where a i > 0, i =,..., n and b > 0. n x i = b, i= x i 0 (i =,...,n). Minimize each of the following

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG631,Linear and integer optimization with applications The simplex method: degeneracy; unbounded solutions; starting solutions; infeasibility; alternative optimal solutions Ann-Brith Strömberg

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Lecture 11: Post-Optimal Analysis. September 23, 2009

Lecture 11: Post-Optimal Analysis. September 23, 2009 Lecture : Post-Optimal Analysis September 23, 2009 Today Lecture Dual-Simplex Algorithm Post-Optimal Analysis Chapters 4.4 and 4.5. IE 30/GE 330 Lecture Dual Simplex Method The dual simplex method will

More information

56:171 Operations Research Fall 1998

56:171 Operations Research Fall 1998 56:171 Operations Research Fall 1998 Quiz Solutions D.L.Bricker Dept of Mechanical & Industrial Engineering University of Iowa 56:171 Operations Research Quiz

More information

Linear & Integer programming

Linear & Integer programming ELL 894 Performance Evaluation on Communication Networks Standard form I Lecture 5 Linear & Integer programming subject to where b is a vector of length m c T A = b (decision variables) and c are vectors

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

Part 1. The Review of Linear Programming Introduction

Part 1. The Review of Linear Programming Introduction In the name of God Part 1. The Review of Linear Programming 1.1. Spring 2010 Instructor: Dr. Masoud Yaghini Outline The Linear Programming Problem Geometric Solution References The Linear Programming Problem

More information

Linear Programming Duality

Linear Programming Duality Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm

More information

(includes both Phases I & II)

(includes both Phases I & II) Minimize z=3x 5x 4x 7x 5x 4x subject to 2x x2 x4 3x6 0 x 3x3 x4 3x5 2x6 2 4x2 2x3 3x4 x5 5 and x 0 j, 6 2 3 4 5 6 j ecause of the lack of a slack variable in each constraint, we must use Phase I to find

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

UNIVERSITY OF CALICUT SCHOOL OF DISTANCE EDUCATION B Sc. Mathematics (2011 Admission Onwards) II SEMESTER Complementary Course

UNIVERSITY OF CALICUT SCHOOL OF DISTANCE EDUCATION B Sc. Mathematics (2011 Admission Onwards) II SEMESTER Complementary Course UNIVERSITY OF CALICUT SCHOOL OF DISTANCE EDUCATION B Sc. Mathematics (2011 Admission Onwards) II SEMESTER Complementary Course MATHEMATICAL ECONOMICS QUESTION BANK 1. Which of the following is a measure

More information

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food.

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food. 18.310 lecture notes September 2, 2013 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality

More information

Lesson 27 Linear Programming; The Simplex Method

Lesson 27 Linear Programming; The Simplex Method Lesson Linear Programming; The Simplex Method Math 0 April 9, 006 Setup A standard linear programming problem is to maximize the quantity c x + c x +... c n x n = c T x subject to constraints a x + a x

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Module - 03 Simplex Algorithm Lecture 15 Infeasibility In this class, we

More information

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information