CO 602/CM 740: Fundamentals of Optimization Problem Set 4


H. Wolkowicz, Fall 2014. Handed out: Wednesday 2014-Oct-15. Due: Wednesday 2014-Oct-22, in class before lecture starts.

Contents
1 Unique Optimum
 1.1 Solutions Problem 1
  1.1.1 Solution Part 1
  1.1.2 Solution Part 2
2 The simplex method with upper bound constraints
 2.1 Solutions Problem 2
  2.1.1 Solution Part 1
  2.1.2 Solution Part 2
  2.1.3 Solution Part 3
  2.1.4 Solution Part 4
3 Degeneracy
 3.1 Solution Question 3
  3.1.1 Solution Part 1
  3.1.2 Solution Part 2
  3.1.3 Solution Part 3

1 Unique Optimum

Consider a linear programming problem in standard form, and suppose that x* is an optimal basic feasible solution. Consider an optimal basis associated with x*. Let B and N be the sets of basic and nonbasic indices, respectively. Let I be the set of nonbasic indices i for which the corresponding reduced costs are zero.

1. Show that if I is empty, then x* is the only optimal solution.

2. Show that x* is the unique optimal solution if, and only if, the following linear program has an optimal value of zero:

       max   Σ_{i∈I} x_i
       s.t.  Ax = b,                     (1)
             x_i = 0,   i ∈ N \ I,
             x_i ≥ 0,   i ∈ B ∪ I.
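The criterion in Part 2 lends itself to a quick numerical check. The sketch below (the small example LP, the chosen basis, and the tolerance are illustrative assumptions, not part of the problem set) builds the index set I from the reduced costs and solves the auxiliary LP (1) with scipy.optimize.linprog; a positive optimal value flags a second optimal solution:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical standard-form LP: min c^T x, Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0, 2.0])

# An optimal basis: B = {0}, i.e. x* = (1, 0, 0) with value 1.
B, N = [0], [1, 2]
y = np.linalg.solve(A[:, B].T, c[B])                # simplex multipliers
cbar = c[N] - A[:, N].T @ y                         # nonbasic reduced costs
I = [j for j, cb in zip(N, cbar) if abs(cb) < 1e-9] # zero reduced costs

# Auxiliary LP (1): max sum_{i in I} x_i, s.t. Ax = b, x_i = 0 on N\I, x >= 0.
obj = np.zeros(3)
obj[I] = -1.0                          # linprog minimizes, so negate
bounds = [(0, None)] * 3
for j in N:
    if j not in I:
        bounds[j] = (0, 0)             # fix x_i = 0 for i in N\I
res = linprog(obj, A_eq=A, b_eq=b, bounds=bounds)
print(I, -res.fun)                     # positive value => x* is not unique
```

Here I = [1] and the auxiliary value is 1 > 0, matching the fact that (0, 1, 0) is a second optimal solution of the example.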

1.1 Solutions Problem 1

1.1.1 Solution Part 1

Proof. Suppose x* is an optimal basic feasible solution and I = ∅, i.e., the reduced costs satisfy

    c̄_N = c_N − (A_B^{-1} A_N)^T c_B > 0.

WLOG we can reorder the columns and assume that B = {1, ..., m} and N = {m+1, ..., n}. Then

    null(A) = range( −A_B^{-1} A_N ; I_{n−m} ),

where the semicolon denotes vertical stacking. Since x*_N = 0, the feasible points x = x* + αd are exactly those of the form

    x = ( x*_B ; 0 ) + ( −A_B^{-1} A_N ; I_{n−m} ) w,   w ∈ R_+^{n−m},     (2)

where w ≥ 0 is forced by feasibility (x_N ≥ 0). For any such x,

    c^T x = c^T x* + c̄_N^T w.     (3)

Hence if x is feasible and x ≠ x*, then w ≥ 0 with w ≠ 0, and c̄_N > 0 gives c^T x = c^T x* + c̄_N^T w > c^T x*, so x is not optimal. Therefore x* is the only optimal solution.

1.1.2 Solution Part 2

Proof. (⇒) Suppose x* is the unique optimal solution of the original LP; in particular it is a BFS, so N and I are well defined, and x*_i = 0 for all i ∈ N. Since I ⊆ N, x* is feasible for (1) with objective value Σ_{i∈I} x*_i = 0, so the optimal value of (1) is at least zero. Suppose, for contradiction, that it is positive, attained at some x̂ with Σ_{i∈I} x̂_i > 0. Then x̂ is also feasible for the original LP, and clearly x̂ ≠ x*. From (2) and feasibility we can write

    x̂ = x* + ( −A_B^{-1} A_N ; I_{n−m} ) w,   0 ≤ w ∈ R_+^{n−m},   w_i = 0 for i ∈ N \ I.     (4)

Using (3), c^T x̂ = c^T x* + c̄_N^T w = c^T x*, since c̄_i = 0 for i ∈ I and w_i = 0 for i ∉ I. Thus x̂ is a second optimal solution, contradicting uniqueness. Hence the optimal value of (1) is zero.

(⇐) Conversely, suppose the optimal value of (1) is zero, and let x̂ be any optimal solution of the original LP. Write x̂ as in (2) with w ≥ 0. By (3), 0 = c^T x̂ − c^T x* = c̄_N^T w; since c̄_i > 0 for i ∈ N \ I and c̄_i = 0 for i ∈ I, this forces w_i = 0 for all i ∈ N \ I. Then x̂ is feasible for (1), and x̂_i = w_i for i ∈ I, so Σ_{i∈I} w_i ≤ 0. Together with w ≥ 0 this gives w = 0, i.e., x̂ = x*. Therefore x* is the unique optimal solution.

2 The simplex method with upper bound constraints

Consider the LP

    min c^T x   s.t.   Ax = b,   0 ≤ x ≤ u,

where A is m × n with linearly independent rows, and u > 0.

1. Let A_B(1), ..., A_B(m) be m linearly independent columns of A (the basic columns). Partition the set of all i ∉ {B(1), ..., B(m)} into two disjoint subsets L and U, and set

       x_i = 0 if i ∈ L,   x_i = u_i if i ∈ U.

   We then solve the equation Ax = b for the basic variables x_B(1), ..., x_B(m). Show that the resulting vector x is a basic solution. Also, show that it is a nondegenerate solution if, and only if, x_i ≠ 0 and x_i ≠ u_i for every basic variable x_i.

2. For this part and the next, assume that the basic solution constructed in Part 1 is feasible. We form the simplex tableau and compute the reduced costs as usual. Let x_j be a nonbasic variable with x_j = 0 and c̄_j < 0. As usual, increase x_j by θ and adjust the basic variables using x_B ← x_B − θ A_B^{-1} A_j; the leaving variable is the first basic variable to violate one of its bound constraints. What is the largest value of θ? How are the new basic columns determined, i.e., is there a ratio test available?

3. Let x_j be a nonbasic variable with x_j = u_j and c̄_j > 0. We decrease x_j by θ and adjust the basic variables from x_B to x_B + θ A_B^{-1} A_j. Given that we wish to preserve feasibility, what is the largest possible value of θ? Is there a ratio test? How are the new basic columns determined?

4. Assuming that every basic feasible solution is nondegenerate, show that the cost strictly decreases with each iteration and the method terminates.

2.1 Solutions Problem 2

2.1.1 Solution Part 1

By definition, a solution x is basic if all the equality constraints are active and, among all the active constraints, n of them are linearly independent. WLOG, after reordering the columns, we assume that B = {1, ..., m} = {B(1), ..., B(m)} are the indices of the basic (linearly independent) columns, and that L = {m+1, ..., m+l} and U = {m+l+1, ..., n} are the indices at the lower and upper bounds, respectively. We set x_i accordingly for i ∈ L ∪ U and solve the m × m linear system

    A_B x_B = b − Σ_{i∈U} u_i A_i.

This yields the unique solution

    x = ( x_B ; x_L ; x_U ) = ( x_B ; 0 ; u_U )     (5)

of the linear system defined by the three index sets. By uniqueness, the active constraints defined by the three index sets form a linearly independent set of active constraints, i.e., x is a basic solution.

By definition, a basic solution is degenerate if more than n constraints are active. From the above, the equality constraints together with the bound constraints indexed by L ∪ U give exactly n active constraints, since |L| + |U| = n − m. We therefore get more than n active constraints if, and only if, some basic variable satisfies x_i = 0 or x_i = u_i, and the desired result follows.

2.1.2 Solution Part 2

We assume that 0 ≤ x_B = A_B^{-1}( b − Σ_{i∈U} u_i A_i ) ≤ u_B, so we have a BFS as in (5). We can compute the simplex multipliers y^T = c_B^T A_B^{-1} and the reduced costs c̄_L^T = c_L^T − y^T A_L and c̄_U^T = c_U^T − y^T A_U. For j ∈ L the choice of entering variable is unchanged, e.g., Bland's rule can be used, since increasing x_j improves the objective value. For the leaving variable, however, we must ensure that no basic variable drops below zero or rises above its upper bound. Suppose the nonbasic variable x_j = 0 enters the basis. We increase x_j by θ ≥ 0 until a constraint would be violated, i.e., we find the largest θ ≥ 0 such that 0 ≤ x_B − θ A_B^{-1} A_j ≤ u_B:

1. x_B − θ A_B^{-1} A_j ≥ 0:    θ ≤ min over (A_B^{-1}A_j)_i > 0 of  x_{B_i} / (A_B^{-1}A_j)_i ;
2. x_B − θ A_B^{-1} A_j ≤ u_B:  θ ≤ min over (A_B^{-1}A_j)_i < 0 of  ( x_{B_i} − u_{B_i} ) / (A_B^{-1}A_j)_i .

(In addition, the entering variable x_j itself may reach its own upper bound, so θ ≤ u_j as well.)
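The two bound conditions above can be sketched in code. The helper below (the numerical data are hypothetical; the cap θ ≤ u_j on the entering variable is noted in the comments but left out of the helper) returns the largest step θ and the blocking row:

```python
import numpy as np

def max_step(xB, uB, d, tol=1e-12):
    """Largest theta with 0 <= xB - theta*d <= uB, where d = A_B^{-1} A_j.

    Returns (theta, row): row indexes the basic variable that first hits a
    bound (None if no component of d restricts theta). The entering
    variable's own bound u_j would additionally cap theta."""
    theta, row = np.inf, None
    for i in range(len(xB)):
        if d[i] > tol:                       # xB_i decreases toward 0
            t = xB[i] / d[i]
        elif d[i] < -tol:                    # xB_i increases toward uB_i
            t = (xB[i] - uB[i]) / d[i]
        else:
            continue                         # xB_i is unaffected
        if t < theta:
            theta, row = t, i
    return theta, row

# Hypothetical data: basic values, their upper bounds, pivot column.
xB = np.array([2.0, 3.0])
uB = np.array([5.0, 4.0])
d  = np.array([1.0, -0.25])                  # A_B^{-1} A_j
theta, row = max_step(xB, uB, d)
print(theta, row)                            # row 0 blocks first, theta = 2.0
```

Here row 0 reaches its lower bound at θ = 2, before row 1 would reach its upper bound at θ = 4.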

The entering variable, and hence the entering column, is determined using Bland's rule; the leaving column is determined by the ratio test above, i.e., by the first basic variable that reaches its lower or upper bound.

2.1.3 Solution Part 3

The argument is the same as in Part 2, except that the entering variable x_j now decreases from its upper bound u_j. This is handled by changing the appropriate signs: we find the largest θ ≥ 0 such that 0 ≤ x_B + θ A_B^{-1} A_j ≤ u_B (and θ ≤ u_j), which gives the analogous ratio test and determines the leaving variable and the new basic columns.

2.1.4 Solution Part 4

The algorithm progresses strictly toward an optimum by the same reduced-cost argument as for the ordinary simplex method. If every basic feasible solution is nondegenerate, then x_{B_i} > 0 and x_{B_i} < u_{B_i} for all i, so the step length satisfies θ > 0. At each iteration we therefore move a nonzero distance θ in a direction that strictly improves the cost function, so no basic feasible solution is ever revisited and cycling cannot occur. Since there are only finitely many ways to choose the basic variables from B and to assign the nonbasic variables to L and U, the method terminates when no such choice improves the cost function.

3 Degeneracy

Consider the maximization LP

    max   2.3 x1 + 2.15 x2 − 13.55 x3 − 0.4 x4
    s.t.   0.4 x1 + 0.2 x2 −  1.4 x3 − 0.2 x4 ≤ 0
          −7.8 x1 − 1.4 x2 +  7.8 x3 + 0.4 x4 ≤ 0
           x ≥ 0.

1. Show that the simplex method cycles for this LP, i.e., it does not terminate in a finite number of iterations. (Hint: let x1 enter the basis in the first iteration and x2 in the second; break ties by choosing the larger pivot element, i.e., the usual choice for stability of pivots in Gaussian elimination.)

2. Can you perturb the right-hand side by a small amount and stop in a finite number of steps?

3. Do you get a solution (does the algorithm stop) using linprog in MATLAB? (Show your output.)

3.1 Solution Question 3

We solve using tableau iterations.

3.1.1 Solution Part 1

Adding slack variables x5, x6 and converting to minimization gives

    c^T = [ −2.3  −2.15  13.55  0.4  0  0 ],
    A = [  0.4   0.2  −1.4  −0.2   1   0
          −7.8  −1.4   7.8   0.4   0   1 ].

Choose B = {5, 6}, N = {1, 2, 3, 4}. Then A_B = I, c_B^T = [0 0], and x_B = A_B^{-1} b = b = [0 0]^T.

T1:

         0 | −2.3   −2.15   13.55    0.4   0   0
    x5 = 0 |  0.4    0.2    −1.4    −0.2   1   0
    x6 = 0 | −7.8   −1.4     7.8     0.4   0   1

So x1 enters and x5 leaves the basis.

T2 (updates: B = {1, 6}, N = {2, 3, 4, 5}):

         0 |  0    −1      5.5   −0.75    5.75   0
    x1 = 0 |  1     0.5   −3.5   −0.5     2.5    0
    x6 = 0 |  0     2.5  −19.5   −3.5    19.5    1

x2 enters; the ratio test ties at 0, so the larger pivot 2.5 is chosen and x6 leaves the basis.

T3 (updates: B = {1, 2}, N = {3, 4, 5, 6}):

         0 |  0   0   −2.3   −2.15   13.55    0.4
    x1 = 0 |  1   0    0.4    0.2    −1.4    −0.2
    x2 = 0 |  0   1   −7.8   −1.4     7.8     0.4

x3 enters and x1 leaves the basis.

T4 (updates: B = {3, 2}, N = {1, 4, 5, 6}):

         0 |  5.75   0   0   −1       5.5   −0.75
    x3 = 0 |  2.5    0   1    0.5    −3.5   −0.5
    x2 = 0 | 19.5    1   0    2.5   −19.5   −3.5

x4 enters and x2 leaves the basis.

The tableaus T1, T3 and T2, T4 are the same up to a permutation of the columns. So this process cycles: with these pivot rules the simplex method never terminates.

3.1.2 Solution Part 2

Perturbing the right-hand side: let b = [0.006, 0.02]^T. Repeating the tableau iterations now shows that the problem is unbounded.

T1':

             0 | −2.3   −2.15   13.55    0.4   0   0
    x5 = 0.006 |  0.4    0.2    −1.4    −0.2   1   0
    x6 = 0.02  | −7.8   −1.4     7.8     0.4   0   1

x1 enters and x5 leaves the basis (ratio 0.006/0.4 = 0.015).

T2' (updates: B = {1, 6}, N = {2, 3, 4, 5}):

        0.0345 |  0    −1      5.5   −0.75    5.75   0
    x1 = 0.015 |  1     0.5   −3.5   −0.5     2.5    0
    x6 = 0.137 |  0     2.5  −19.5   −3.5    19.5    1

x2 enters and x1 leaves the basis (ratio 0.015/0.5 = 0.03 < 0.137/2.5).

T3' (updates: B = {2, 6}, N = {1, 3, 4, 5}):

        0.0645 |  2    0   −1.5   −1.75   10.75   0
    x2 = 0.03  |  2    1   −7     −1       5      0
    x6 = 0.062 | −5    0   −2     −1       7      1

The reduced cost of x4 is negative while its updated column A̅_4 = (−1, −1)^T ≤ 0, so x4 can be increased without bound and the optimal cost is −∞: the perturbed problem is unbounded, and the method stops after finitely many steps.
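The cycling in Part 1 can also be reproduced mechanically. The sketch below (an illustrative tableau routine, not the course's code) applies the pivot rules from the hint — the most negative reduced cost enters, and ratio-test ties are broken by the larger pivot — and counts pivots until the starting slack basis recurs:

```python
import numpy as np

# Data for the degenerate LP of Problem 3, in min form with slacks x5, x6
# (0-based indices: x1..x6 are columns 0..5).
c = np.array([-2.3, -2.15, 13.55, 0.4, 0.0, 0.0])
A = np.array([[ 0.4,  0.2, -1.4, -0.2, 1.0, 0.0],
              [-7.8, -1.4,  7.8,  0.4, 0.0, 1.0]])
b = np.zeros(2)

basis = [4, 5]                        # start from the slack basis {x5, x6}
start = tuple(basis)
cycle_len = None

for it in range(12):
    B_inv = np.linalg.inv(A[:, basis])
    Tab = B_inv @ A                   # body of the current tableau
    rhs = B_inv @ b
    cbar = c - c[basis] @ Tab         # reduced costs
    if (cbar >= -1e-9).all():
        break                         # optimal: no cycling after all
    j = int(np.argmin(cbar))          # entering: most negative reduced cost
    col = Tab[:, j]
    ratios = np.where(col > 1e-9, rhs / np.where(col > 1e-9, col, 1.0), np.inf)
    t = ratios.min()
    # leaving: among rows attaining the minimum ratio, take the largest pivot
    r = max((i for i in range(len(ratios)) if ratios[i] <= t + 1e-9),
            key=lambda i: col[i])
    basis[r] = j
    if tuple(basis) == start:
        cycle_len = it + 1
        break

print(cycle_len)
```

The basis sequence is {5,6} → {1,6} → {1,2} → {3,2} → {3,4} → {5,4} → {5,6}: the starting basis recurs after six pivots, confirming the cycle exhibited by the tableaus above.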

3.1.3 Solution Part 3

When using MATLAB and linprog we get the output:

    Exiting: The problem is unbounded; the constraints are not restrictive enough.

The MATLAB code is

    f = [2.3 2.15 -13.55 -0.4]';
    A = [ 0.4  0.2 -1.4 -0.2;
         -7.8 -1.4  7.8  0.4];
    b = [.001 .00001]';
    options = optimoptions(@linprog, 'Algorithm', 'simplex');
    x = linprog(-f, A, b, [], [], zeros(4,1), [], [], options);

Therefore MATLAB detects unboundedness and does not cycle.
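The same check can be run outside MATLAB; a sketch with scipy.optimize.linprog on the identical data (the status-code interpretation follows SciPy's documented convention):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize f^T x subject to Ax <= b, x >= 0, i.e. minimize -f^T x.
f = np.array([2.3, 2.15, -13.55, -0.4])
A = np.array([[ 0.4,  0.2, -1.4, -0.2],
              [-7.8, -1.4,  7.8,  0.4]])
b = np.array([0.001, 0.00001])

res = linprog(-f, A_ub=A, b_ub=b, bounds=[(0, None)] * 4)
print(res.status)   # status 3 = unbounded in SciPy's convention
```

SciPy's HiGHS backend likewise reports the LP as unbounded (for instance, the ray d = (0, 1, 0, 1) satisfies Ad ≤ 0 and f^T d = 1.75 > 0), so it, too, terminates rather than cycling.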