Chapter 3, Operations Research (OR)


Kent Andersen
February 7, 2007

1 Linear Programs (continued)

In the last chapter, we introduced the general form of a linear program, which we denote (P),

    Minimize   $Z_P = c^T x$
    subject to $Ax = b$, $x \geq 0_n$,          (P)

and we derived the dual linear program of (P), which we denote (D),

    Maximize   $Z_D = b^T y$
    subject to $A^T y \leq c$.                  (D)

We also argued that the following relationships hold between the linear programs (P) and (D):

(a) If (P) is unbounded, then (D) is infeasible.

(b) If (D) is unbounded, then (P) is infeasible.

(c) If x is feasible for (P) and y is feasible for (D), then $b^T y \leq c^T x$ (weak duality).

(d) If (P) and (D) are both feasible, then there exists a feasible solution x to (P) and a feasible solution y to (D) such that $b^T y = c^T x$ (strong duality).

We proved (a)-(c). Furthermore, we mentioned that part (d) can be proved using Farkas's Lemma, which we restate below. As last time, let

    $X := \{x \in \mathbb{R}^n : Ax = b \text{ and } x \geq 0_n\}$

denote the set of feasible solutions to the problem (P). Farkas's Lemma can be stated as follows.

Lemma 1 (Farkas's Lemma): Exactly one of the following two conditions holds.

(i) $X \neq \emptyset$, or in other words, there exists $x \in \mathbb{R}^n$ such that $x \geq 0_n$ and $Ax = b$.

(ii) There exists $y \in \mathbb{R}^m$ such that $A^T y \leq 0_m$ and $b^T y > 0$.
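To make the two alternatives in Farkas's Lemma concrete, here is a minimal numerical sketch (the toy data is our own, not from the notes) that exhibits a certificate of type (ii) for an infeasible system:

```python
import numpy as np

# Assumed toy system: x1 + x2 = -1, x >= 0 has no solution,
# so alternative (ii) of Lemma 1 must hold.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])

# Farkas certificate: y with A^T y <= 0 and b^T y > 0.
y = np.array([-1.0])

assert np.all(A.T @ y <= 0)  # A^T y = (-1, -1) <= 0
assert b @ y > 0             # b^T y = 1 > 0
```

By Lemma 1, exhibiting such a y proves that X is empty without examining any candidate solutions x.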

There is another relation between (P) and (D), which we now prove using Farkas's Lemma.

Lemma 2: If (P) is infeasible, then (D) is either unbounded or infeasible.

Proof: If (P) is infeasible, then it follows from Farkas's Lemma that property (ii) of Farkas's Lemma holds. In other words, there exists $y^r \in \mathbb{R}^m$ such that $A^T y^r \leq 0_m$ and $b^T y^r > 0$. If (D) is also infeasible, the lemma is clearly true, so we can assume (D) is feasible. Let $\bar{y}$ be an arbitrary feasible solution to (D). We have $A^T \bar{y} \leq c$. Consider the ray starting at $\bar{y}$ in the direction $y^r$, or in other words, the set of points of the form $\bar{y} + \alpha y^r$ for some $\alpha \geq 0$. We claim that any point on this ray is feasible for (D). Indeed, we have $A^T(\bar{y} + \alpha y^r) = A^T \bar{y} + \alpha A^T y^r \leq c + 0_n = c$ (since $A^T \bar{y} \leq c$, $A^T y^r \leq 0_m$ and $\alpha \geq 0$). Hence every point on the ray starting at $\bar{y}$ in the direction $y^r$ is feasible for (D). Finally, since $b^T(\bar{y} + \alpha y^r) = b^T \bar{y} + \alpha (b^T y^r)$ and $b^T y^r > 0$, the objective value of (D) along this ray grows without bound as $\alpha$ increases. Therefore (D) is unbounded.

By essentially the same proof one can also show:

(e) If (D) is infeasible, then (P) is either unbounded or infeasible.

2 Bases, basic solutions and the simplex method

In preparation for the simplex method, we now introduce the concept of a basic solution of a linear program. Consider the equality constraints involved in defining the problem (P),

    $Ax = b$,

and suppose this system involves more variables than equalities, and that all rows of the matrix A are linearly independent. In other words, assume $n \geq m$ and $\mathrm{rank}(A) = m$ (recall that A is an $m \times n$ matrix). The idea behind a basis, and a basic solution, is to solve the system $Ax = b$ in terms of a subset of the variables $x_1, x_2, \ldots, x_n$. In other words, the idea is to use the equalities $Ax = b$ to express some of the variables in terms of the remaining variables.

Definition 1: Let $\{1, 2, \ldots, n\}$ index the variables in the problem (P). A basis for the linear program (P) is a subset $B \subseteq \{1, 2, \ldots, n\}$ of the variables such that

(i) B has size m, that is, $|B| = m$.

(ii) The columns of A corresponding to the variables in B are linearly independent. In other words, if we let $A_B$ denote the (column-induced) submatrix of A indexed by B, then $\mathrm{rank}(A_B) = m$.

The variables in B are called basic variables, and the remaining variables (indexed by $N := \{1, 2, \ldots, n\} \setminus B$) are called non-basic variables. In the following, we let $A_S$ denote the (column-induced) submatrix of A induced by the set S, where S is a subset of the variables. The system $Ax = b$ can be solved so as to express the variables in B in terms of the variables in N by multiplying the equality system $Ax = b$ by $A_B^{-1}$ on both sides. Also observe that, for a basis B of (P), the solution to the following system is unique:

    $Ax = b$,                                   (1)
    $x_j = 0$ for every $j \in N$.              (2)

The unique solution $x^B \in \mathbb{R}^n$ to the system (1)-(2) is called the basic solution corresponding to the basis B. Observe that a basic solution is obtained by setting a subset of the non-negativity constraints $x_j \geq 0$ involved in defining the problem (P) to equalities ($x_j = 0$). Hence, if a basic solution $x^B$ satisfies $x^B_j \geq 0$ for all $j \in B$, then $x^B$ is a feasible solution to (P). This motivates the following definition.

Definition 2: Let $B \subseteq \{1, 2, \ldots, n\}$ be a basis for (P). If $x^B_j \geq 0$ for all $j \in B$, then B is called a feasible basis, and $x^B$ is called a basic feasible solution. Otherwise, B is called an infeasible basis for (P), and $x^B$ is called a basic infeasible solution.

Example 1: Consider, again, the linear program that models the WGC problem. We consider the version of the problem where slack variables $x_3$, $x_4$ and $x_5$ have been introduced in the constraints (see Example 3 of Chapter 2).

    Maximize $Z = 3x_1 + 5x_2$
    subject to
        $x_1 + x_3 = 4$
        $2x_2 + x_4 = 12$
        $3x_1 + 2x_2 + x_5 = 18$
        $x_1, x_2, x_3, x_4, x_5 \geq 0$

Here we have 5 variables, and since the coefficient matrix of the corresponding system $Ax = b$ includes the identity matrix (the coefficients of the slack variables), the rank of A is 3 for this example. Hence every basis for this problem consists of 3 variables. The set $B_1 = \{1, 2, 4\}$ constitutes a basis, since the solution to the system $x_1 + x_3 = 4$, $2x_2 + x_4 = 12$, $3x_1 + 2x_2 + x_5 = 18$, $x_3 = 0$ and $x_5 = 0$ is unique. The corresponding basic solution is given by $(x_1, x_2, x_3, x_4, x_5) = (4, 3, 0, 6, 0)$ (see Fig. 1), and this basis is therefore a feasible basis. Observe that, when a slack variable is non-basic, the corresponding inequality is satisfied with equality by the basic solution. The set $B_2 = \{1, 3, 4\}$ is also a basis for the WGC problem. The corresponding basic solution is given by $(x_1, x_2, x_3, x_4, x_5) = (6, 0, -2, 12, 0)$. Since $x_3$ has a negative value in this basic solution, $B_2$ is an infeasible basis for (P).

The simplex method for solving linear programs is motivated by the following fact (which we will not prove, at least not today).

Theorem 1: If the linear program (P) is feasible and bounded, then there exists a feasible basis B for (P) such that the corresponding basic solution $x^B$ is an optimal solution to (P).

The simplex method can now be described, informally, as follows.

Step 1: Find a feasible basis B for (P).
Step 2: If $x^B$ is optimal, STOP.
Step 3: Find another feasible basis B' of (P) that is better than B. Return to Step 2.

The above steps completely describe the simplex method. However, several issues are not yet clear. How do we find a feasible basis for (P)? How do we verify that a given basic solution is also optimal? If we can answer this last question and conclude that the basis is not optimal, how do we find a better basis, and how is "better" defined? Although we will not go into the details at this point, we can reveal the structure of the new basis B' obtained in Step 3 from the old basis B. The basis B' is obtained from B by simply interchanging a basic variable $i \in B$ with a non-basic variable $j \in \{1, 2, \ldots, n\} \setminus B$. In other words, the new basis B' is of the form $B' = (B \setminus \{i\}) \cup \{j\}$, where i is basic and j is non-basic. This update of the basis is called a pivot, and the bases B and B' are said to be adjacent.

Example 2: Consider, again, the WGC problem as defined in Example 1. It is easy to check that the sets $B_1 := \{3, 4, 5\}$, $B_2 := \{2, 3, 5\}$ and $B_3 := \{1, 2, 3\}$ all define feasible bases for the WGC problem. Furthermore, these bases form a sequence of adjacent bases, and the basic solution corresponding to $B_3$ has $(x_1, x_2) = (2, 6)$, which was demonstrated to be optimal in Chapter 1. Hence, the simplex method can solve the WGC problem with two pivots if the sequence of bases is chosen to be $B_1$, $B_2$, $B_3$.
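The basic solutions appearing in Examples 1 and 2 can be reproduced with a few lines of linear algebra. Below is a minimal numpy sketch (columns are 0-indexed, so the basis {1, 2, 4} becomes [0, 1, 3]; the helper name `basic_solution` is ours, not from the notes):

```python
import numpy as np

# Constraint data of the WGC problem in equality form (Example 1): Ax = b.
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])

def basic_solution(basis):
    """Solve A_B x_B = b and embed the result in R^5 (non-basic entries are zero)."""
    x = np.zeros(A.shape[1])
    x[basis] = np.linalg.solve(A[:, basis], b)
    return x

print(basic_solution([0, 1, 3]))  # B1 = {1,2,4}: (4, 3, 0, 6, 0), feasible
print(basic_solution([0, 2, 3]))  # B2 = {1,3,4}: (6, 0, -2, 12, 0), infeasible (x3 < 0)
```

The feasibility test of Definition 2 is then just a sign check on the computed vector.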

[Figure 1: Basic feasible/infeasible solutions for the WGC problem, plotted in the $(x_1, x_2)$-plane with the infeasible regions shaded.]

3 Finding a feasible basis: A two-phase approach

In the simplex algorithm described above, it is not clear how to start the algorithm. In other words, it is not clear how to find an initial feasible basis B for (P). We now describe an approach for generating such a feasible basis B for (P). Consider the following linear program (P'):

    Minimize   $Z_t = \sum_{i=1}^{m} (t^+_i + t^-_i)$
    subject to $Ax + t^+ - t^- = b$,
               $x \geq 0_n$, $t^+, t^- \geq 0_m$.          (P')

Observe that the equalities $Ax + t^+ - t^- = b$ involved in (P') are the same as the equalities $Ax = b$ involved in (P), except that we have added the extra term $(t^+ - t^-)$. Also note that the problem (P') is always feasible (solving the system $t^+ - t^- = b$, $t^+ \geq 0_m$ and $t^- \geq 0_m$ gives a feasible solution with $x = 0_n$). Finally, note that (P') is bounded (the objective value is bounded below by zero). The variables $t^+$ and $t^-$ are called artificial variables.

Given a feasible solution $(x, t^+, t^-)$ to (P'), we have that x is feasible for (P) if and only if $t^+ - t^- = 0_m$. Hence (P) is feasible if and only if the optimal objective value of (P') is zero (the objective is zero exactly when $t^+ = t^- = 0_m$). Furthermore, an optimal basis for (P') in which all artificial variables are non-basic provides a basic feasible solution to (P). The problem (P') is called the phase 1 problem associated with (P).

A basic feasible solution to (P') is directly available as follows: for every $i \in \{1, 2, \ldots, m\}$ such that $b_i < 0$, declare $t^-_i$ to be basic, and for every $i \in \{1, 2, \ldots, m\}$ such that $b_i \geq 0$, declare $t^+_i$ to be basic. Clearly this gives a basic feasible solution to (P'). Given that Steps 2 and 3 of the simplex method can be performed (to be discussed later), the simplex method can solve the problem (P') starting from this basis, as sketched below. If, after solving the phase 1 problem (P'), it is impossible to eliminate the artificial variables from the basis, then the problem (P) is infeasible and therefore solved. Otherwise, a basic feasible solution for (P) is available, and the simplex algorithm can be started from this basic feasible solution (this second step is also called the phase 2 problem).
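As a concrete illustration, the following sketch builds the phase 1 data in numpy; the function name `phase1_setup` and the column ordering (x, then $t^+$, then $t^-$) are our assumptions, not notation from the notes:

```python
import numpy as np

def phase1_setup(A, b):
    """Build the phase 1 problem (P'): min sum(t+ + t-) s.t. Ax + t+ - t- = b,
    together with the initial feasible basis described above.
    Columns are ordered: x (n cols), t+ (m cols), t- (m cols)."""
    m, n = A.shape
    A1 = np.hstack([A, np.eye(m), -np.eye(m)])          # [A | I | -I]
    c1 = np.concatenate([np.zeros(n), np.ones(2 * m)])  # cost 1 on artificials only
    # Row i contributes t_i+ (column n+i) if b_i >= 0, else t_i- (column n+m+i);
    # the corresponding basic value is |b_i| >= 0, so this basis is feasible.
    basis = [n + i if b[i] >= 0 else n + m + i for i in range(m)]
    return A1, c1, basis
```

Running the simplex method on the data (A1, c1) from this starting basis and checking whether the optimal value is zero then decides feasibility of (P).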

4 Dual basic solutions and optimality

Consider a basis $B \subseteq \{1, 2, \ldots, n\}$ for the problem (P). We now give an interpretation of the basis B for (P) in the dual linear program (D) of (P). This interpretation provides the key for determining whether or not a feasible basis for (P) is also optimal for (P) (Step 2 of the simplex algorithm). Also, the concept of a dual basic solution provides the key for determining a better basis if the current basic solution is not optimal (Step 3 of the simplex algorithm). Consider, again, the dual (D) of (P),

    Maximize   $Z_D = b^T y$
    subject to $A^T y \leq c$.                  (D)

For every variable $j \in \{1, 2, \ldots, n\}$, let $a_{\cdot j}$ denote the j-th column of the matrix A. Since B is a basis for (P), the columns $a_{\cdot j}$ for $j \in B$ are linearly independent. The constraints $A^T y \leq c$ of (D) can be written as

    $(a_{\cdot j})^T y \leq c_j$ for all $j \in \{1, 2, \ldots, n\}$.        (3)

The dual basic solution $y^B \in \mathbb{R}^m$ associated with the basis B of (P) is defined to be the unique solution to the system $(a_{\cdot j})^T y = c_j$ for all $j \in B$. Observe that the dual basic solution $y^B$ is obtained from the subset of the constraints (3) of (D) indexed by B (by setting these inequalities to equalities). The basis B is called dual feasible if $y^B$ satisfies the remaining constraints of (D), or in other words, if $(a_{\cdot j})^T y^B \leq c_j$ for every $j \in \{1, 2, \ldots, n\} \setminus B$. Otherwise B is called a dual infeasible basis.

The following lemma shows that the problem (P) can be solved by finding a basis B for (P) which is feasible for both the linear program (P) and the linear program (D).

Lemma 3: Let B be an arbitrary basis for (P). If B is a feasible basis for both (P) and (D), then $x^B$ is optimal for (P), and $y^B$ is optimal for (D).

Proof: Since B is a feasible basis for both (P) and (D), $x^B$ is feasible for (P), and $y^B$ is feasible for (D) (by definition). Hence, we only need to verify optimality. By weak duality (item (c) in Section 1), it suffices to show $c^T x^B = b^T y^B$. We have

    $c^T x^B = \sum_{j \in B} c_j x^B_j$                          (since $x^B_j = 0$ for $j \notin B$)
             $= \sum_{j \in B} ((a_{\cdot j})^T y^B) x^B_j$       (since $(a_{\cdot j})^T y^B = c_j$ for $j \in B$)
             $= (y^B)^T \sum_{j \in B} a_{\cdot j} x^B_j = (y^B)^T b$   (since $\sum_{j \in B} a_{\cdot j} x^B_j = b$).

Lemma 3 provides the desired certificate of optimality of a basis B of (P). Observe that we have the following consequences of Lemma 3.

(i) If B is a feasible basis for (P), and B is not an optimal basis for (P), then B is an infeasible basis for (D). In other words, there exists a variable $x_j$ of (P), where $j \in \{1, 2, \ldots, n\} \setminus B$, such that $c_j - (a_{\cdot j})^T y^B < 0$ (the j-th constraint of (D) is violated by $y^B$).

(ii) If B is a feasible basis for (P), and the basis B satisfies $c_j - (a_{\cdot j})^T y^B \geq 0$ for all $j \in \{1, 2, \ldots, n\}$, then B is an optimal basis for the problem (P) (the basis B is feasible for both (P) and (D)).

Hence, to check whether or not a feasible basis B for (P) provides an optimal solution to (P) (Step 2 of the simplex method), we can simply compute $y^B$ (by solving a non-singular system), and then check whether $y^B$ satisfies $c_j - (a_{\cdot j})^T y^B \geq 0$ for all $j \in \{1, 2, \ldots, n\}$ (that is, check whether B is dual feasible). If the answer is yes, we have solved the problem (P). If the answer is no, we can find some non-basic variable $x_k$ of (P) that satisfies $c_k - (a_{\cdot k})^T y^B < 0$. Such a variable $x_k$ will, as we will see later, be a candidate for entering the basis (Step 3 of the simplex method).
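The optimality test just described is easy to express in code. Here is a minimal numpy sketch (the function names are ours); note that (P) is a minimization problem, so the WGC objective from Example 1 must be rewritten as $c = (-3, -5, 0, 0, 0)$ before the test applies:

```python
import numpy as np

def dual_basic_solution(A, c, basis):
    """Solve (a_j)^T y = c_j for j in B, i.e. the non-singular system A_B^T y = c_B."""
    return np.linalg.solve(A[:, basis].T, c[basis])

def reduced_costs(A, c, basis):
    """Reduced costs c_j - (a_j)^T y^B; B is dual feasible iff all entries are >= 0."""
    y = dual_basic_solution(A, c, basis)
    return c - A.T @ y

# WGC data in minimization form, basis B3 = {1,2,3} (0-indexed: [0, 1, 2]).
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
c = np.array([-3.0, -5.0, 0.0, 0.0, 0.0])

print(reduced_costs(A, c, [0, 1, 2]))  # (0, 0, 0, 1.5, 1): all >= 0, so B3 is optimal
```

Since $B_3$ is primal feasible (Example 2) and the reduced costs are all non-negative, Lemma 3 confirms that $B_3$ is an optimal basis; a negative entry would instead name a candidate entering variable for Step 3.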