
Chapter 5 Linear Programming (LP)

General constrained optimization problem:

    minimize f(x)
    subject to x ∈ Ω

Ω ⊆ R^n is called the constraint set or feasible set; any point x ∈ Ω is called a feasible point. We consider the case when f(x) is a linear function,

    f(x) = c^T x, x = (x_1, x_2, ..., x_n) ∈ R^n,

where c ∈ R^n is a given constant vector, called the cost coefficient vector.

Example: in R^3,

    f(x) = 3x_1 - 4x_2 + x_3 = c^T x, with c = [3, -4, 1]^T, x = [x_1, x_2, x_3]^T.

Consider the standard convex constraint set Ω = {x : Ax = b, x ≥ 0}.

The notation x ≥ 0 means that each component of x = [x_1, ..., x_n]^T must be nonnegative: x_i ≥ 0 for all i.

Example: Ω = {x : 4x_1 + x_2 = 5, x_1 ≥ 0, x_2 ≥ 0}

(figure: the segment of the line 4x_1 + x_2 = 5 lying in the first quadrant)

We can write the problem as

    minimize c^T x
    subject to Ax = b, x ≥ 0.

This problem is called a linear programming problem, or a linear program (LP). In general, constraint sets involve equations and inequalities.

Example:

    minimize 3x_1 + 2x_2
    subject to x_1 - x_2 ≤ 6, x_1 ≥ 0

Example: Production scheduling. Woodworking shop data:

    Inputs       Table   Chair   Input availability
    Labor          5       6           30
    Materials      3       2           12
    Production    x_1     x_2
    Unit price     1       5

Goal: schedule production to maximize revenue.

Total revenue: x_1 + 5x_2.

Production constraints:
Labor constraint: 5x_1 + 6x_2 ≤ 30
Materials constraint: 3x_1 + 2x_2 ≤ 12

Physical constraints: x_1 ≥ 0, x_2 ≥ 0.

Optimization problem:

    maximize x_1 + 5x_2
    subject to 5x_1 + 6x_2 ≤ 30
               3x_1 + 2x_2 ≤ 12
               x_1 ≥ 0, x_2 ≥ 0

(LP) in matrix form:

    maximize c^T x
    subject to Ax ≤ b, x ≥ 0

    A = [5 6; 3 2], b = [30, 12]^T, c = [1, 5]^T
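As a quick numerical check of this small LP, we can hand it to a solver (a sketch using SciPy's linprog, which minimizes, so the objective is negated to maximize):

```python
# Solve the production-scheduling LP with SciPy (linprog minimizes,
# so we maximize x1 + 5*x2 by minimizing its negative).
from scipy.optimize import linprog

c = [-1, -5]                    # negated objective coefficients
A_ub = [[5, 6], [3, 2]]         # labor and materials constraints
b_ub = [30, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # optimal production levels and revenue
```

The optimum is revenue 25 at (x_1, x_2) = (0, 5), the corner point found by the geometric method of this chapter.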

Equivalent form:

    minimize -c^T x
    subject to Ax ≤ b, x ≥ 0

Example: Optimal diet. Nutrition table:

    Unit price   Vitamin   Milk   Eggs   Daily requirement
    λ_1             F        3      7          45
    λ_2             G        4      2          60
    Intake                  x_1    x_2
    Unit cost                2      5

For consumers, the total cost is 2x_1 + 5x_2.

Dietary constraints:
Vitamin F: 3x_1 + 7x_2 ≥ 45
Vitamin G: 4x_1 + 2x_2 ≥ 60
Physical constraints: x_1 ≥ 0, x_2 ≥ 0

Optimization problem:

    minimize 2x_1 + 5x_2
    subject to 3x_1 + 7x_2 ≥ 45
               4x_1 + 2x_2 ≥ 60
               x_1 ≥ 0, x_2 ≥ 0

(LP) in matrix form:

    minimize c^T x
    subject to Ax ≥ b, x ≥ 0

    A = [3 7; 4 2], b = [45, 60]^T, c = [2, 5]^T

This is called the primal LP.

For sellers, the total revenue (per customer) is 45λ_1 + 60λ_2,
to compete well with milk: 3λ_1 + 4λ_2 ≤ 2,
to compete well with eggs: 7λ_1 + 2λ_2 ≤ 5.

LP for the seller:

    maximize 45λ_1 + 60λ_2
    subject to 3λ_1 + 4λ_2 ≤ 2
               7λ_1 + 2λ_2 ≤ 5
               λ_1, λ_2 ≥ 0

This is called the dual LP.

Geometric method for LP problems in 2-D

Consider the production scheduling example given before:

    maximize x_1 + 5x_2
    subject to 5x_1 + 6x_2 ≤ 30
               3x_1 + 2x_2 ≤ 12
               x_1 ≥ 0, x_2 ≥ 0
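The primal diet LP and its dual can both be handed to a solver to confirm that their optimal values coincide (strong duality). A sketch with SciPy, converting the ≥ rows and the maximization into linprog's minimize/≤ conventions:

```python
# Primal: min 2*x1 + 5*x2  s.t. 3*x1 + 7*x2 >= 45, 4*x1 + 2*x2 >= 60, x >= 0
# Dual:   max 45*l1 + 60*l2 s.t. 3*l1 + 4*l2 <= 2, 7*l1 + 2*l2 <= 5, l >= 0
from scipy.optimize import linprog

# A ">= b" row becomes "-A x <= -b" in linprog's convention.
primal = linprog([2, 5], A_ub=[[-3, -7], [-4, -2]], b_ub=[-45, -60])
# A maximization becomes minimization of the negated objective.
dual = linprog([-45, -60], A_ub=[[3, 4], [7, 2]], b_ub=[2, 5])

print(primal.fun, -dual.fun)   # the two optimal values agree (both 30)
```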

Level sets of the objective function: x_1 + 5x_2 = c. The level sets are parallel lines, moving up as c increases.

The maximum value 25 is reached when the level set is "exiting" the constraint set at the corner point [0, 5]^T: x_1 + 5x_2 = 25.

The minimum value 0 is reached when the level set is "exiting" the constraint set at the corner point [0, 0]^T: x_1 + 5x_2 = 0.

In fact, unless the level sets happen to be parallel to one of the edges of the constraint set, the solution will lie on a corner point (or vertex). Even if the level sets happen to be parallel to one of

the edges of the constraint set, a corner point will be an optimal solution. It turns out that a solution of an LP problem (if it exists) always lies on a vertex of the constraint set. Therefore, instead of looking for candidate solutions everywhere in the constraint set, we need only focus on the vertices.

Standard form LP problems

LP problem in standard form:

    minimize c^T x
    subject to Ax = b, x ≥ 0

where A is an m × n matrix, m < n, rank(A) = m, and b ≥ 0.

Example:

In standard form:

    minimize 3x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = 4
               5x_1 - 3x_2 + x_3 = 15
               x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0

NOT in standard form:

    maximize 3x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 ≤ 4
               5x_1 - 3x_2 + x_3 ≥ 15
               x_1 ≥ 0, x_2 ≥ 0

    minimize 3x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = -4
               5x_1 - 3x_2 + x_3 = 15
               x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0

The above is not standard because b_1 = -4 < 0; standard form requires b_i ≥ 0.

    minimize 3x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 = 4
               2x_1 + 4x_2 + 8x_3 = 8
               x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0

The above is not standard because the 2 × 4 matrix A has rank 1, i.e., it is not of full rank.

All our analyses and algorithms will apply only to standard form LP problems. What about other variations of LP problems? Any LP problem can be converted into an equivalent standard form LP problem.

How to convert a given LP problem into an equivalent problem in standard form:
- If the problem is a maximization, simply multiply the objective function by -1 to get a minimization.
- If A is not of full rank, we can remove one or more rows.
- If a component of b is negative, say the i-th component, multiply the i-th constraint by -1 to obtain a positive right-hand side.
- What about inequality constraints? Add slack variables for ≤ constraints,

add surplus variables for ≥ constraints, and split free variables.

Converting to standard form: slack variables

Suppose we have a constraint of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n ≤ b.

We can convert this inequality into a standard equality constraint by introducing a slack variable x_{n+1}. The constraint is equivalent to

    a_1 x_1 + a_2 x_2 + ... + a_n x_n + x_{n+1} = b, x_{n+1} ≥ 0.

Converting to standard form: surplus variables

Suppose we have a constraint of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n ≥ b.

We can convert this inequality into a standard equality constraint by introducing a surplus variable x_{n+1}.

The constraint is equivalent to

    a_1 x_1 + a_2 x_2 + ... + a_n x_n - x_{n+1} = b, x_{n+1} ≥ 0.

Converting to standard form: nonpositive variables

Suppose one of the variables (say, x_1) has the constraint x_1 ≤ 0. We can convert it into the usual nonnegative variable by replacing every occurrence of x_1 with its negative, x_1' = -x_1.

Example: The LP problem

    minimize c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to a_1 x_1 + a_2 x_2 + ... + a_n x_n = b
               x_1 ≤ 0

is converted to the standard form

    minimize -c_1 x_1' + c_2 x_2 + ... + c_n x_n
    subject to -a_1 x_1' + a_2 x_2 + ... + a_n x_n = b
               x_1' ≥ 0.

Converting to standard form: free variables

Suppose one of the variables (say, x_1) does not have any sign constraint. We can split x_1 into its positive part and negative part by introducing two variables u_1 ≥ 0 and v_1 ≥ 0 with x_1 = u_1 - v_1.

Example: The constraint

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = b
    x_2 ≥ 0, x_3 ≥ 0, ..., x_n ≥ 0

is equivalent to

    a_1 (u_1 - v_1) + a_2 x_2 + ... + a_n x_n = b
    u_1 ≥ 0, v_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, ..., x_n ≥ 0.

Converting to standard form: an example

Consider the LP problem (not in standard form):

    maximize 3x_1 + 5x_2 - x_3
    subject to x_1 + 2x_2 + 4x_3 ≤ -4   (1)
               5x_1 - 3x_2 + x_3 ≥ 15   (2)
               x_2 ≤ 0, x_3 ≥ 0

We first convert the maximization problem into a minimization problem by multiplying the objective function by -1.

Next, we introduce a slack variable x_4 to convert constraint (1) into

    x_1 + 2x_2 + 4x_3 + x_4 = -4, x_4 ≥ 0,

and then multiply the equation by -1 to make the right-hand side positive:

    -x_1 - 2x_2 - 4x_3 - x_4 = 4, x_4 ≥ 0.

Next, we introduce a surplus variable x_5 to convert the second constraint (2) into

    5x_1 - 3x_2 + x_3 - x_5 = 15, x_5 ≥ 0.

Next, we substitute x_1 with x_6 - x_7 in every expression

Finally, we substitute x_2 with -x_2 in every expression. The resulting standard form is

    minimize -3(x_6 - x_7) + 5x_2 + x_3
    subject to -(x_6 - x_7) + 2x_2 - 4x_3 - x_4 = 4
               5(x_6 - x_7) + 3x_2 + x_3 - x_5 = 15
               x_2, x_3, x_4, x_5, x_6, x_7 ≥ 0

or

    minimize 5x_2 + x_3 - 3x_6 + 3x_7
    subject to 2x_2 - 4x_3 - x_4 + 0x_5 - x_6 + x_7 = 4
               3x_2 + x_3 + 0x_4 - x_5 + 5x_6 - 5x_7 = 15
               x_2, x_3, x_4, x_5, x_6, x_7 ≥ 0.

The matrix form is

    minimize c^T x
    subject to Ax = b, x ≥ 0, x = [x_2, x_3, x_4, x_5, x_6, x_7]^T,
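As a sanity check on this conversion (a sketch; the feasible point used here is just one of many), NumPy can verify that a feasible point of the original problem maps to a feasible point of the standard form, with the minimized objective equal to the negated original one:

```python
import numpy as np

# Standard-form data from the conversion: variables [x2, x3, x4, x5, x6, x7].
A = np.array([[2., -4., -1., 0., -1., 1.],
              [3.,  1.,  0., -1., 5., -5.]])
b = np.array([4., 15.])
c = np.array([5., 1., 0., 0., -3., 3.])

# Original feasible point: x1 = 0, x2 = -5, x3 = 0
# (check: 0 + 2*(-5) + 4*0 = -10 <= -4 and 5*0 - 3*(-5) + 0 = 15 >= 15).
# Mapped point: new x2 = -(-5) = 5, slack x4 = 6, surplus x5 = 0, x6 = x7 = 0.
xs = np.array([5., 0., 6., 0., 0., 0.])

assert np.allclose(A @ xs, b)           # equality constraints hold
orig_obj = 3*0 + 5*(-5) - 0             # original (maximized) objective: -25
assert np.isclose(c @ xs, -orig_obj)    # minimized objective is its negative
```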

    A = [2 -4 -1 0 -1 1; 3 1 0 -1 5 -5], b = [4, 15]^T.

Basic solutions

Consider the LP problem in standard form:

    minimize c^T x
    subject to Ax = b, x ≥ 0

where A is an m × n matrix, m < n, rank(A) = m, b ≥ 0. The feasible points are those x that satisfy Ax = b and x ≥ 0 (i.e., have nonnegative components). For m < n, there are infinitely many points x satisfying Ax = b.

REVIEW of linear algebra. Recall that there are three types of elementary row operations:
(a) interchanging two rows of the matrix;
(b) multiplying one row of the matrix by a (nonzero) constant;
(c) adding a constant multiple of one row to another row.

Pivoting about the (i, j) position: using the above three types of row operations to make the entry at the (i, j) position 1 and the remaining entries in the j-th column zero.

To solve Ax = b using elementary row operations, we first form the augmented matrix [A, b]. Using the Gauss-Jordan algorithm, [A, b] can be reduced by a sequence of elementary row operations to a unique reduced echelon form.

Assume that the first m columns of A are linearly independent. Pivoting about the (1, 1), (2, 2), ..., (m, m) positions consecutively reduces [A, b] to the reduced echelon form

    [I_m, Y, y],   y = [y_1, y_2, ..., y_m]^T,

where I_m occupies the first m columns. This is called the canonical augmented matrix.
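The pivoting operation just described can be written in a few lines of NumPy (a sketch; `pivot` is our own helper name). Applied to the 2 × 4 example that appears shortly, pivoting about (1, 2) and then (2, 4) produces its canonical augmented matrix:

```python
import numpy as np

def pivot(T, i, j):
    """Pivot T about (i, j): scale row i so T[i, j] = 1, then zero out
    the rest of column j with row operations."""
    T = T.astype(float).copy()
    T[i] /= T[i, j]
    for k in range(T.shape[0]):
        if k != i:
            T[k] -= T[k, j] * T[i]
    return T

# Augmented matrix [A, b] with A = [1 2 3 4; 5 6 7 8], b = [6, 2]^T.
T = np.array([[1., 2., 3., 4., 6.],
              [5., 6., 7., 8., 2.]])
T = pivot(T, 0, 1)   # make column 2 equal to e1 (x2 basic)
T = pivot(T, 1, 3)   # make column 4 equal to e2 (x4 basic)
print(T)             # [[ 1.5  1.  0.5  0. -5.]  [-0.5  0.  0.5  1.  4.]]
```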

In particular, Ax = b has the solution

    x = [y^T, 0]^T = [y_1, y_2, ..., y_m, 0, ..., 0]^T,

where y is the last column of the canonical augmented matrix.

Basic solution: this solution x = [y^T, 0]^T is called a basic solution to Ax = b. The first m columns of A are called basic columns, the first m variables x_1, ..., x_m are called basic variables, and the last n - m variables x_{m+1}, ..., x_n are called nonbasic variables.

In general, for any solution x of Ax = b, x is called a basic solution if x has at most m nonzero components and the columns of A corresponding to those components are linearly independent. The corresponding variables (of the nonzero components) are called basic variables, and the corresponding columns of A are called basic columns; they form a basis.

Any m basic columns a_{k1}, a_{k2}, ..., a_{km} of A, with 1 ≤ k_1 < k_2 < ... < k_m ≤ n, are linearly independent. In the corresponding canonical augmented matrix, the k_1-th column is e_1 = [1, 0, ..., 0]^T, the k_2-th column is e_2, ..., and the k_m-th column is e_m. In this case a_{k1}, a_{k2}, ..., a_{km} are the basic columns, and the variables associated with them, x_{k1}, x_{k2}, ..., x_{km}, are the basic variables.

Example:

    A = [a_1, a_2, a_3, a_4] = [1 2 3 4; 5 6 7 8], b = [6, 2]^T.

Taking x_2 and x_4 as the basic variables, the canonical augmented matrix is

    [  3/2   1   1/2   0 | -5 ]
    [ -1/2   0   1/2   1 |  4 ]

a_2, a_4 are basic columns, and a corresponding solution is

    x = [0, -5, 0, 4]^T.

This x = [0, -5, 0, 4]^T is a basic solution (m = 2).

There are at most

    C(n, m) = n! / (m! (n - m)!)

basic solutions.

Example: Consider the equation Ax = b with

    A = [a_1, a_2, a_3, a_4] = [1 1 -1 4; 1 -2 -1 1], b = [8, 2]^T,

    [A, b] = [1 1 -1 4 | 8; 1 -2 -1 1 | 2].

#1 basic solution: x = [6, 2, 0, 0]^T, basic variables x_1, x_2, basis B = [a_1, a_2]

#2 basic solution: x = [0, 2, -6, 0]^T, basic variables x_2, x_3, basis B = [a_2, a_3]
#3 basic solution: x = [0, 0, 0, 2]^T, basic variables x_3, x_4, basis B = [a_3, a_4]
#4 basic solution: x = [0, 0, 0, 2]^T, basic variables x_1, x_4, basis B = [a_1, a_4]
#5 basic solution: x = [0, 0, 0, 2]^T, basic variables x_2, x_4, basis B = [a_2, a_4]

Note that x = [3, 1, 0, 1]^T is a solution, but it is not basic.

To find the last candidate basic solution, we set x_2 = x_4 = 0: a solution of the form x = [x_1, 0, x_3, 0]^T for Ax = b is equivalent to

    [1 -1; 1 -1] [x_1; x_3] = [8; 2],

whose augmented matrix

    [1 -1 | 8; 1 -1 | 2]

is inconsistent. So there are only five basic solutions.

Basic feasible solutions (BFS)

Consider the constraints of the LP problem: Ax = b, x ≥ 0.
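Before turning to feasibility, note that the five basic solutions of the example above can be enumerated mechanically by solving Bx_B = b for every nonsingular pair of columns (a small sketch; it does no degeneracy bookkeeping):

```python
import itertools
import numpy as np

A = np.array([[1.,  1., -1., 4.],
              [1., -2., -1., 1.]])
b = np.array([8., 2.])
m, n = A.shape

basic_solutions = []
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                      # columns not linearly independent
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    basic_solutions.append((cols, x))

for cols, x in basic_solutions:
    print(cols, x)                    # 5 of the 6 column pairs form a basis
```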

Recall the definition of a basic solution. If x is a basic solution to Ax = b and it also satisfies x ≥ 0, we call it a basic feasible solution (BFS).

Note that a basic solution x is feasible iff every basic variable is ≥ 0. If at least one of the basic variables is = 0, we say that the BFS is degenerate.

We henceforth assume nondegeneracy (i.e., all the BFSs we consider are not degenerate). This makes the derivations simpler; the degenerate case can be handled similarly.

In the example above:
- x = [6, 2, 0, 0]^T is a BFS;
- x = [0, 0, 0, 2]^T is a degenerate BFS;
- x = [0, 2, -6, 0]^T is a basic solution, but not feasible;
- x = [3, 1, 0, 1]^T is feasible, but not basic.

Geometric view

Geometrically, a BFS corresponds to a corner point (vertex) of the constraint set.

Example: Consider the constraint Ax = b, x ≥ 0, where A = [1, 2, 3], b = [6].

The feasible set is the portion of the plane x_1 + 2x_2 + 3x_3 = 6 lying in the first octant. The BFSs are [6, 0, 0]^T, [0, 3, 0]^T, and [0, 0, 2]^T. Note that the BFSs are just the vertices of the constraint set. In general, we can think (geometrically) of a BFS as a vertex of the constraint set.

Optimal solutions

A feasible point that minimizes the objective function is called an optimal feasible solution. A BFS that is also optimal is called an optimal basic feasible solution.

Fundamental Theorem of LP

Consider an LP problem in standard form.
(a) If there exists a feasible solution, then there exists a BFS.
(b) If there exists an optimal feasible solution, then there exists an optimal BFS.

Or, more informally: if the feasible set is nonempty, then it has a vertex; if the problem has a minimizer (optimal solution), then one of the vertices is a minimizer.

Consequences of the FTLP

Suppose we are given an LP problem, and assume that a solution exists. To find it, it suffices to look among the set of BFSs. There are only finitely many BFSs; in fact, there are at most

    C(n, m) = n! / (m! (n - m)!)

basic solutions. Therefore, the FTLP allows us to transform the original problem (infinitely many feasible points) into a problem over a finite number of points. However, the total number of BFSs may of course be very large.
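Binomial counts like this are easy to evaluate directly with Python's standard library:

```python
from math import comb

# Upper bound on the number of basic solutions:
# the number of ways to choose m = 5 basic columns out of n = 50.
print(comb(50, 5))   # -> 2118760
```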

For example, if m = 5 and n = 50,

    C(50, 5) = 2,118,760.

The brute-force approach of exhaustively comparing all the BFSs is impractical. Therefore, we need a more computationally efficient method to find the minimizer among the possible BFSs. The simplex method is an organized way of going from one BFS to another to search for the global minimizer.

Simplex Algorithms

The FTLP allows us to limit the search for optimal solutions to a finite number of points (the BFSs). Simplex method: an organized way of going from one BFS to another to search for the global minimizer. The method uses numerical linear algebraic techniques. Sometimes, to avoid confusion, we use boldface letters for vectors.

Simplex method: basic idea
- Start with an initial basis corresponding to a BFS.
- Move to an adjacent BFS (adjacent vertex) in such a way that the objective function decreases.
- If the stopping criterion is satisfied, stop; otherwise, repeat the process.

We move from one basic solution to an adjacent one: if a_p leaves the basis and a_q enters the basis, we pivot about the (r, q) position of the canonical augmented matrix, where r is the row in which the p-th column carries its 1. Here x_p is the D.V. (departing variable) and x_q is the E.V. (entering variable).

Example: a_2, a_4 form the original basis. To replace a_2 by a_1, we need to pivot about the (1, 1) position (p =

2, r = 1, q = 1). The new basis is {a_1, a_4}:

    [  1  1  3  0 |  8 ]    pivot about (1,1)    [ 1  1  3  0 |  8 ]
    [ -2  0  1  1 |  2 ]    ---------------->    [ 0  2  7  1 | 18 ]

x_2 is the D.V., x_1 is the E.V.

Two questions:
(a) How do we ensure that the adjacent basic solution we move to is feasible? In other words, given the E.V. x_q, how do we choose the D.V. x_p such that, after pivoting about the (r, q) element (where (r, p) was a pivot position), the resulting rightmost column of the canonical augmented matrix has all nonnegative elements?
(b) How do we choose the E.V. x_q to ensure that the objective function at the new basic solution is smaller than the previous value?

Maintain feasibility (optional): choosing the leaving variable x_p

Feasibility: the last column must remain ≥ 0. Given A and b, suppose we have a set of basic variables corresponding to a BFS. For simplicity, assume it is {x_1, ..., x_m}, with corresponding basis {a_1, ..., a_m}. Suppose we now want to choose x_q as the E.V., where q > m.

One of the vectors a_p, 1 ≤ p ≤ m, must leave the basis. Suppose the canonical augmented matrix for the original basis is

    [ 1 0 ... 0  y_{1,m+1} ... y_{1,n} | y_{1,0} ]
    [ 0 1 ... 0  y_{2,m+1} ... y_{2,n} | y_{2,0} ]
    [ ...                              |   ...   ]
    [ 0 0 ... 1  y_{m,m+1} ... y_{m,n} | y_{m,0} ]

Note that y_{i,0} ≥ 0 by the feasibility assumption. If, for example, a_{m+1} replaces a_1 by pivoting about (1, m+1), the resulting canonical augmented matrix becomes:

row 1 divided by y_{1,m+1}, and a multiple of it subtracted from every other row, so that the last column becomes

    [ y_{1,0} / y_{1,m+1},
      y_{2,0} - (y_{1,0} / y_{1,m+1}) y_{2,m+1},
      ...,
      y_{m,0} - (y_{1,0} / y_{1,m+1}) y_{m,m+1} ]^T.

In general, if a_q replaces a_p by pivoting about (r, q) (where (r, p) was a pivot position), the last column of the new canonical augmented matrix will be:

    r-th element:              y_{r,0} / y_{r,q};
    i-th element (i ≠ r):      y_{i,0} - (y_{r,0} / y_{r,q}) y_{i,q};

i.e., the r-th element is y_{r,0}/y_{r,q} and every other i-th element is y_{i,0} - (y_{r,0}/y_{r,q}) y_{i,q}, i ≠ r. (If y_{i,q} ≤ 0 for every i, then no choice of r keeps the pivot positive and the feasible set is unbounded in the direction of increasing x_q.)

Hence, to ensure feasibility, we have to choose r such that y_{r,q} > 0 and, for all i = 1, 2, ..., m,

    y_{i,0} - (y_{r,0}/y_{r,q}) y_{i,q} = y_{i,q} ( y_{i,0}/y_{i,q} - y_{r,0}/y_{r,q} ) ≥ 0,

which holds if we take

    r = argmin { y_{i,0}/y_{i,q} : y_{i,q} > 0, i = 1, 2, ..., m }.

We may divide the i-th component of the last column by the i-th component of the q-th column to create the "ratio" vector

    [ y_{1,0}/y_{1,q}, y_{2,0}/y_{2,q}, ..., y_{m,0}/y_{m,q} ]^T.

In summary, to maintain feasibility, the leaving variable x_p must correspond to the position in the above ratio vector with the smallest value, i.e.,

    y_{r,0}/y_{r,q} ≤ y_{i,0}/y_{i,q} for all i with y_{i,q} > 0.

Maintain optimality (optional): choosing the entering variable x_q

Finding a better adjacent BFS. Once we select a_q, we know how to pick a_p to leave the basis. The question is how to choose the column a_q to enter the basis. The goal is that the new (adjacent) BFS should have a lower objective function value.

Suppose the canonical augmented matrix for the original basis is

    [ 1 0 ... 0  y_{1,m+1} ... y_{1,n} | y_{1,0} ]
    [ 0 1 ... 0  y_{2,m+1} ... y_{2,n} | y_{2,0} ]
    [ ...                              |   ...   ]
    [ 0 0 ... 1  y_{m,m+1} ... y_{m,n} | y_{m,0} ]

The basic cost coefficients are those associated with the basic variables:

    c_0 = [c_1, c_2, ..., c_m]^T.

The objective function value at the BFS x = [y_{1,0}, ..., y_{m,0}, 0, ..., 0]^T is

    z_0 = c^T x = c_0^T [y_{1,0}, ..., y_{m,0}]^T = Σ_{i=1}^{m} c_i y_{i,0}.

Now consider a new basis where a_q enters and a_p leaves. The new BFS is (with the pivot in row r, r = p in this indexing)

    x' = [ y_{1,0} - (y_{r,0}/y_{r,q}) y_{1,q}, ..., 0, ..., y_{m,0} - (y_{r,0}/y_{r,q}) y_{m,q}, 0, ..., 0, y_{r,0}/y_{r,q}, 0, ..., 0 ]^T,

where the r-th of the first m entries has become 0 and the entry y_{r,0}/y_{r,q} sits in the q-th position.

The objective function value at the new BFS is

    z = c^T x'
      = Σ_{i=1, i≠r}^{m} c_i ( y_{i,0} - (y_{r,0}/y_{r,q}) y_{i,q} ) + c_q (y_{r,0}/y_{r,q})
      = Σ_{i=1}^{m} c_i y_{i,0} - (y_{r,0}/y_{r,q}) Σ_{i=1}^{m} c_i y_{i,q} + c_q (y_{r,0}/y_{r,q})
      = z_0 + ( c_q - Σ_{i=1}^{m} c_i y_{i,q} ) (y_{r,0}/y_{r,q}).

Note that the q-th column of A reduces, in the canonical augmented matrix, to

    [ y_{1,q}, y_{2,q}, ..., y_{m,q} ]^T.

Hence the q-th component of c_0^T A (with A in canonical form) is

    z_q = Σ_{i=1}^{m} c_i y_{i,q},

and we have

    z = z_0 + (c_q - z_q) (y_{r,0}/y_{r,q}).

Hence if c_q - z_q < 0, we conclude z < z_0, i.e., the objective function value at the new BFS is smaller.

Define

    r_i = 0            for i = 1, ..., m (basic),
    r_i = c_i - z_i    for i = m+1, ..., n (nonbasic).

We call the r_i the relative cost coefficients (RCCs) of the variables x_i. Note that the RCC of a basic variable is always 0. If r_q < 0, then the new BFS is better; the smaller (more negative) r_q, the better the new BFS:

    z = z_0 + r_q (y_{r,0}/y_{r,q}).

In general, if x is any feasible solution, we can derive the equation

    z = c^T x = z_0 + Σ_{i=m+1}^{n} r_i x_i.

The RCC (relative cost coefficient) vector is

    r = [r_1, r_2, ..., r_n]^T, with r^T = c^T - c_0^T A (A in canonical form).

Optimality condition: r ≥ 0. This leads to:

Theorem. A BFS is optimal if and only if the corresponding RCC values (the components of c^T - c_0^T A) are all ≥ 0.

In summary, we choose a variable x_q that has the most negative RCC value as the entering variable.

The Simplex Algorithm
1. Initialization: given an initial BFS and its canonical augmented matrix.
2. Calculate the RCCs (c^T - c_0^T A) corresponding to the nonbasic variables.
3. If r_j ≥ 0 for all j, then STOP: the current BFS is optimal.
4. Otherwise, select a q such that r_q < 0 is the most negative.

5. If no element y_{i,q} in the q-th column a_q satisfies y_{i,q} > 0, then STOP: the problem is unbounded. Otherwise, calculate r = argmin{ y_{i,0}/y_{i,q} : y_{i,q} > 0 }.
6. Update the canonical augmented matrix by pivoting about the (r, q) element.
7. Go to step 2.

Simplex tableau

We design a specific algorithm to compute optimal solutions. We focus on the standard form:

    minimize z = c^T x
    subject to Ax = b, x ≥ 0, rank(A) = m, b ≥ 0.

Suppose that we already know one set of basic variables x_0. Let c_0 be the vector of basic cost coefficients in c associated with the basic variables x_0. For instance, if x_0 = (x_2, x_5, x_8) is the set of basic variables, then c_0^T = [c_2, c_5, c_8].

The simplex tableau is

               x^T
               c^T
    x_0  c_0    A                 |  b
               c^T - c_0^T A      | -c_0^T b

where the second column from the left (c_0) and the second row from the top (c^T) are for convenience in calculating the last row, and can be deleted from the final tableau:

    x_0    A                 |  b
          c^T - c_0^T A      | -c_0^T b

(The bottom-right entry is the negative of the current objective value c_0^T b; the row operations of the algorithm keep it in this form.)

Example: Consider

    minimize f(x_1, x_2) = -120x_1 - 80x_2
    subject to 2x_1 + x_2 ≤ 6
               7x_1 + 8x_2 ≤ 28.

With x_1 = y_1 - z_1 and x_2 = y_2 - z_2 (both variables are free) and slack variables w_1, w_2, the standard form is

    minimize s = -120(y_1 - z_1) - 80(y_2 - z_2)
    subject to 2(y_1 - z_1) + (y_2 - z_2) + w_1 = 6
               7(y_1 - z_1) + 8(y_2 - z_2) + w_2 = 28
               y_1, y_2, z_1, z_2, w_1, w_2 ≥ 0.

The initial feasible solution is

    x = [y_1, z_1, y_2, z_2, w_1, w_2]^T = [0, 0, 0, 0, 6, 28]^T.

Thus the basic variables and basic cost coefficients are

    x_0 = [w_1, w_2]^T, c_0 = [0, 0]^T, so c^T - c_0^T A = c^T.

The simplex tableau is

            y_1   z_1   y_2   z_2   w_1   w_2
           -120   120   -80    80     0     0
    w_1 0     2    -2     1    -1     1     0 |  6
    w_2 0     7    -7     8    -8     0     1 | 28
           -120   120   -80    80     0     0 |  0

and the final (trimmed) simplex tableau is

          y_1   z_1   y_2   z_2   w_1   w_2
    w_1     2    -2     1    -1     1     0 |  6
    w_2     7    -7     8    -8     0     1 | 28
         -120   120   -80    80     0     0 |  0

Simplex method. The simplex algorithm moves from the initial feasible but non-optimal solution to an optimal solution while maintaining feasibility through an iterative procedure. The goal is to reach the optimality condition, i.e., all entries of the last row (except for the very last element) are nonnegative.

Step 1. Locate the most negative number in the bottom row of the simplex tableau, excluding the last column. The variable associated with this column is the E.V.; we also call this column the work column. If more

than one candidate for the most negative number exists, pick any one.

Step 2. Form ratios by dividing each positive number in the work column, excluding the last row, into the element in the same row and last column. Designate the element in the work column that yields the smallest ratio as the pivot element. The basic variable associated with the pivot row is the D.V. If no element in the work column is positive, stop: the problem has no solution.

Step 3. Pivot about the pivot element: use elementary row operations to convert the pivot element to 1 and then to reduce all other elements in the work column to 0.

Step 4. Swap the E.V. with the D.V.: replace the variable in the pivot row and the first column by the variable in the first row and the work column. The new first column is the current set of basic variables.

Step 5. Repeat Steps 1 through 4 until there are no negative numbers in the last row, excluding the last column. The basic feasible solution derived from the final set of basic variables is a minimizer: assign to each basic variable (the variables in the first column) the number in the same row and last column; all other variables are zero. The negative of the number in the last row and last column is the minimum value.

Example: Consider the example above, with its standard-form tableau copied below:

          y_1   z_1   y_2   z_2   w_1   w_2
    w_1     2    -2     1    -1     1     0 |  6
    w_2     7    -7     8    -8     0     1 | 28
         -120   120   -80    80     0     0 |  0

Solution:

Step 1. The work column is the first one (-120 is the most negative entry in the bottom row).

Step 2. The ratios are [6, 28] ÷ [2, 7] = [3, 4], so the element 2 is the pivot: w_1 is the D.V. and y_1 is the E.V.

Step 3. We perform row operations to reduce the tableau to the following updated simplex tableau, in which the basic variable w_1 is replaced by y_1:

          y_1   z_1   y_2   z_2   w_1   w_2
    y_1     1    -1   1/2  -1/2   1/2     0 |   3
    w_2     0     0   9/2  -9/2  -7/2     1 |   7
            0     0   -20    20    60     0 | 360

Now, in the updated tableau, the basic variables are y_1 and w_2, and the updated basic feasible solution is y_1 = 3, w_2 = 7, all others zero. We next repeat the same steps: since there is one negative number, -20, in the last row, the work column is the y_2 column, [1/2, 9/2]^T, and the ratios are

    [3, 7] ÷ [1/2, 9/2] = [6, 14/9] ≈ [6, 1.56].

Therefore, the pivot is the element with the smallest ratio: 9/2. Thus w_2 is the D.V. and y_2 is the E.V. We now perform row operations to get the updated simplex tableau, pivoting by first dividing the second row by 9/2:

          y_1   z_1   y_2   z_2   w_1   w_2
    y_1     1    -1   1/2  -1/2   1/2     0 |    3
    w_2     0     0     1    -1  -7/9   2/9 | 14/9
            0     0   -20    20    60     0 |  360

and then reducing all other elements in the work column to zero:

          y_1  z_1  y_2  z_2   w_1                 w_2
    y_1     1   -1    0    0   1/2 + (7/9)(1/2)   -(2/9)(1/2) | 3 - (14/9)(1/2)
    w_2     0    0    1   -1   -7/9                2/9        | 14/9
            0    0    0    0   60 - 140/9          40/9       | 360 + 20·(14/9)

Simplifying and replacing w_2 by y_2, we obtain the updated simplex tableau:

          y_1   z_1   y_2   z_2    w_1    w_2
    y_1     1    -1     0     0    8/9   -1/9 |   20/9
    y_2     0     0     1    -1   -7/9    2/9 |   14/9
            0     0     0     0  400/9   40/9 | 3520/9

Since there is no negative number in the last row, we are done. The optimal solution is

    y_1 = 20/9, y_2 = 14/9, z_1 = z_2 = w_1 = w_2 = 0,

with minimal value -3520/9 ≈ -391.11.
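The tableau iterations above can be reproduced by a compact implementation of the algorithm (a sketch for nondegenerate standard-form problems; `simplex` and its interface are our own naming, not from the text):

```python
import numpy as np

def simplex(A, b, c, basis, tol=1e-9, max_iter=100):
    """Minimize c@x subject to A@x = b, x >= 0, starting from the BFS
    whose basic columns are listed in `basis`."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    T = np.hstack([A, b.reshape(-1, 1)])
    T = np.linalg.solve(T[:, basis], T)        # canonical augmented matrix
    basis = list(basis)
    for _ in range(max_iter):
        r = c - c[basis] @ T[:, :n]            # relative cost coefficients
        q = int(np.argmin(r))
        if r[q] >= -tol:                       # all RCCs >= 0: optimal
            break
        if np.all(T[:, q] <= tol):
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)
        pos = T[:, q] > tol
        ratios[pos] = T[pos, n] / T[pos, q]
        p = int(np.argmin(ratios))             # leaving row: min-ratio test
        T[p] /= T[p, q]                        # pivot about (p, q)
        for i in range(m):
            if i != p:
                T[i] -= T[i, q] * T[p]
        basis[p] = q
    x = np.zeros(n)
    x[basis] = T[:, n]
    return x, c @ x

# The example above: variables [y1, z1, y2, z2, w1, w2], initial basis {w1, w2}.
A = [[2, -2, 1, -1, 1, 0], [7, -7, 8, -8, 0, 1]]
b = [6, 28]
c = [-120, 120, -80, 80, 0, 0]
x, z = simplex(A, b, c, basis=[4, 5])
print(x, z)    # y1 = 20/9, y2 = 14/9, z = -3520/9
```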

Now, let us go back to the original optimization problem:

    minimize f(x_1, x_2) = -120x_1 - 80x_2
    subject to 2x_1 + x_2 ≤ 6
               7x_1 + 8x_2 ≤ 28.

Recall that x_1 = y_1 - z_1 and x_2 = y_2 - z_2. Thus the final solution is: f(x_1, x_2) attains the minimal value -3520/9 ≈ -391.11 when x_1 = 20/9, x_2 = 14/9.

Artificial LP

The simplex method requires an initial basis. How do we choose an initial basis?

Brute force: arbitrarily choose m basic columns and transform the augmented matrix for the problem into canonical form. If the rightmost column is nonnegative, then we have a legitimate (initial) BFS; otherwise, try again. This potentially requires C(n, m) tries, and is therefore not practical.

Consider the standard form LP

    minimize z = c^T x
    subject to Ax = b, x ≥ 0, rank(A) = m, A ∈ R^{m×n}, b ≥ 0.

Define the associated artificial LP:

    minimize y_1 + y_2 + ... + y_m
    subject to [A, I_m] [x; y] = b, [x; y] ≥ 0,

where y = [y_1, y_2, ..., y_m]^T is called the vector of artificial variables. Note that the artificial LP has an obvious initial BFS,

    [x; y] = [0; b],

so we can proceed with the simplex algorithm to solve the artificial LP.

Note that for y ≥ 0,

    y_1 + y_2 + ... + y_m ≥ 0,

and the inequality is strict if any y_i > 0. So if the minimum value of the artificial LP is 0, then every y_i must be zero (and, under nondegeneracy, nonbasic).

Proposition 16.1. The original LP problem has a BFS if and only if the associated artificial problem has an optimal feasible solution with objective function value zero. In this case, the optimal solution of the artificial LP serves as a BFS for the original LP.

Two-phase simplex method:
Phase I: solve the artificial problem using the simplex method.
Phase II: use the BFS resulting from Phase I to initialize the simplex algorithm to solve the original LP problem.
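Phase I can be sketched by handing the artificial problem to any LP solver; here SciPy plays that role for the data of Example 16.4 below (an illustration of the construction, not the tableau method the notes use):

```python
# Phase I for: min 2*x1 + 3*x2  s.t. 4*x1 + 2*x2 >= 12, x1 + 4*x2 >= 6, x >= 0,
# in standard form with surplus variables x3, x4 and artificials x5, x6.
import numpy as np
from scipy.optimize import linprog

A = np.array([[4., 2., -1., 0.], [1., 4., 0., -1.]])
b = np.array([12., 6.])
A_art = np.hstack([A, np.eye(2)])           # [A, I_m]
c_art = np.array([0., 0., 0., 0., 1., 1.])  # minimize y1 + y2

res = linprog(c_art, A_eq=A_art, b_eq=b)    # default bounds are x >= 0
x = res.x[:4]                               # drop the artificial variables
print(res.fun, x)                           # optimal value 0 => x is feasible
assert np.allclose(A @ x, b, atol=1e-8)     # feasible for the original LP
```

A zero optimal value confirms, per Proposition 16.1, that the original problem has a BFS; the x-part of the solver's answer can seed Phase II.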

Example 16.4. Consider

    minimize 2x_1 + 3x_2
    subject to 4x_1 + 2x_2 ≥ 12
               x_1 + 4x_2 ≥ 6
               x_1, x_2 ≥ 0.

Solution:

Standard form:

    minimize 2x_1 + 3x_2
    subject to 4x_1 + 2x_2 - x_3 = 12
               x_1 + 4x_2 - x_4 = 6
               x_1, ..., x_4 ≥ 0.

There is no obvious feasible solution.

Phase I: solve

    minimize x_5 + x_6
    subject to 4x_1 + 2x_2 - x_3 + x_5 = 12
               x_1 + 4x_2 - x_4 + x_6 = 6
               x_1, ..., x_6 ≥ 0.

x = [0, 0, 0, 0, 12, 6]^T is an obvious BFS.

The simplex tableau is

          x_1   x_2   x_3   x_4   x_5   x_6
    x_5     4     2    -1     0     1     0 |  12
    x_6     1     4     0    -1     0     1 |   6
           -5    -6     1     1     0     0 | -18

The pivoting position is (2, 2): x_6 is the D.V., x_2 is the E.V. Updated tableau:

          x_1   x_2   x_3   x_4   x_5   x_6
    x_5   7/2     0    -1   1/2     1  -1/2 |   9
    x_2   1/4     1     0  -1/4     0   1/4 | 3/2
         -7/2     0     1  -1/2     0   3/2 |  -9

Next, we pivot about (1, 1): x_5 is the D.V., x_1 is the E.V.

Updating:

          x_1   x_2    x_3    x_4    x_5    x_6
    x_1     1     0   -2/7    1/7    2/7   -1/7 | 18/7
    x_2     0     1   1/14   -2/7  -1/14    2/7 |  6/7
            0     0      0      0      1      1 |    0

Done: the minimum value is 0, and Phase I has produced the BFS [x_1, x_2, x_3, x_4] = [18/7, 6/7, 0, 0].

Phase II: we can obtain the simplex tableau for the original LP immediately from the tableau above. Delete the two artificial-variable columns, then replace the last row by c^T - c_0^T A, where

    c^T = [2, 3, 0, 0], c_0^T = [2, 3],

    A = [1 0 -2/7 1/7; 0 1 1/14 -2/7].

    c^T - c_0^T A = [0, 0, 5/14, 4/7].

So the simplex tableau for the original LP is

          x_1   x_2    x_3    x_4
    x_1     1     0   -2/7    1/7 |  18/7
    x_2     0     1   1/14   -2/7 |   6/7
            0     0   5/14    4/7 | -54/7

It is already optimal. The solution is

    x = [18/7, 6/7, 0, 0]^T, f = 54/7.

Homework:

1. A cereal manufacturer wishes to produce 1000 pounds of a cereal that contains exactly 10% fiber, 2% fat, and 5% sugar (by weight). The cereal is to be produced by combining four items of raw food material in appropriate proportions. These four items have certain combinations of fiber, fat, and sugar content, and are available at various prices per pound, as shown below:

The manufacturer wishes to find the amounts of each of the above items to be used to produce the cereal in the least expensive way. Formulate the problem as a linear programming problem. What can you say about the existence of a solution to this problem?

2. Solve the following linear program graphically:

3. Use the simplex method to solve the following linear program:

4. Convert the following problem to standard form and then solve it using the simplex method:

5. From the textbook: #5.8, 5.12, 5.13, 5.16 (you may combine these four problems into one with several parts).

6. Project (optional): Write a Matlab program for the simplex algorithm. You may assume standard form.