MATH2070 Optimisation

MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart

Review
The standard Linear Programming (LP) Problem
Graphical method of solving LP problems
The simplex procedural algorithm
Extensions to the LP algorithm to include other conditions
The Dual Problem

The standard Linear Programming (LP) Problem

Motivation
Where do Linear Programming problems occur?
History: Linear Programming became a field of interest during World War II.
Applications: allocation problems, scheduling, blending of raw materials.

Initial Example
A pharmaceutical company can produce two types of drug using three different resources. Each resource is in very limited supply. Production data are listed in the table below (resource usage per unit produced; profit in $1000 per unit):

Resource      Drug 1   Drug 2   Availability
Resource 1       1        0          4
Resource 2       3        2         18
Resource 3       0        2         12
Profit/unit      3        5

Formulation
Using the production data in the table above, define x_1, x_2 to be the number of units produced of each type of drug.
Profit: Z = 3x_1 + 5x_2.
Optimisation: we want to find the optimal x_1 and x_2 that maximise the total profit Z.

Standard LP problem
In full generality, the problem is given by:
Maximise: Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
such that
a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m
and x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0,
where the a_ij and c_i are constants and the b_i > 0 are constants.

Using matrix notation (slightly abusing notation),
Maximise: Z = c^T x such that: Ax ≤ b and x ≥ 0,
where the vectors and matrices are defined as
c = (c_1, c_2, ..., c_n)^T,  x = (x_1, x_2, ..., x_n)^T,  b = (b_1, b_2, ..., b_m)^T,
and A is the m × n matrix of constraint coefficients a_ij, i = 1, ..., m, j = 1, ..., n.
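
For the initial pharmaceutical example this notation reads as follows (a worked instance using the data from the table above):

\[
c = \begin{pmatrix} 3 \\ 5 \end{pmatrix}, \qquad
A = \begin{pmatrix} 1 & 0 \\ 3 & 2 \\ 0 & 2 \end{pmatrix}, \qquad
b = \begin{pmatrix} 4 \\ 18 \\ 12 \end{pmatrix}, \qquad
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\]

so the problem is to maximise Z = c^T x = 3x_1 + 5x_2 subject to Ax ≤ b and x ≥ 0.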

Objective Function and Decision Variables
Decision variables: x^T = (x_1, x_2, ..., x_n), where the x_i are the decision variables and it is assumed that x_i ≥ 0.
Objective function: Z = Z(x) = c^T x = c_1 x_1 + c_2 x_2 + ... + c_n x_n, where the c_i are constants referred to as the cost coefficients; c_i gives the linear increase or decrease in Z per unit increase in x_i.

Constraints
The constraint system Ax ≤ b written in full is
a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m,
a system of m constraints in the n decision variables. The vector b is referred to as the resource vector, and it is assumed that the b_i > 0 are constants.
Positivity condition: note that the resource vector b must have positive elements.

Feasible Solutions
Feasible solution: any point x satisfying Ax ≤ b and x ≥ 0 is called a feasible solution.
Infeasible solution: conversely, if a point x does not satisfy the above conditions it is an infeasible solution.
The feasible solutions form a closed region in the n-dimensional decision space. For the initial example this is a two-dimensional region in the (x_1, x_2) plane.

Feasible Regions
Empty region: it is possible for the feasible region to be empty, that is, no point can be found that satisfies all the constraints. Such a problem is called ill-posed. The feasible region is empty when the constraint equations are inconsistent.
Example (figure): two constraint regions R_1 and R_2 in the (x_1, x_2) plane that do not overlap, so that R_1 ∩ R_2 = ∅.

Optimal solution
A feasible solution that maximises the objective function Z is an optimal solution. The optimal solution is usually denoted x* = (x_1*, x_2*, ..., x_n*). The maximised objective function is denoted Z_max = Z* = Z(x*).

Location of Optimal Solutions
The multivariate objective function Z = c^T x is linear. From the introductory lectures, we know that the optimum must lie on the boundary of the feasible region: the optimal solution x* will lie at either a corner point or along an edge of the feasible polygon.

Feasible Regions
The feasible region is a polytope (the n-dimensional analogue of a polygon): each constraint describes a half-space bounded by a hyperplane, and all the constraints together describe a region in space.

Number of Solutions
No solution: the feasible region is empty.
One optimal solution: the optimal solution is at a corner point.
Many optimal solutions: optimal solutions along an edge.

Unbounded feasible region
Another scenario exists where the feasible region is unbounded, in an example similar to the empty-region one.
Example (figure): the region in the (x_1, x_2) plane defined by x_1 + x_2 ≥ 2 and −x_1 + x_2 ≥ 1 (with x_1, x_2 ≥ 0) is unbounded.

Graphical method of solving LP problems

Return to example problem

Resource      Drug 1   Drug 2   Availability
Resource 1       1        0          4
Resource 2       3        2         18
Resource 3       0        2         12
Profit/unit      3        5       ($1000/unit)

This is a bivariate (two-dimensional) problem, so it can easily be solved graphically.

Feasible Region
The feasible region is bounded by the five straight lines: x_1 = 0, x_2 = 0, x_1 = 4, 3x_1 + 2x_2 = 18, 2x_2 = 12.
Figure: the feasible region in the (x_1, x_2) plane bounded by these five lines.

Find optimal solution(s)
The objective function Z = 3x_1 + 5x_2 yields a single optimal solution.
Figure: level lines of Z = 3x_1 + 5x_2 over the feasible region, whose corner points are (0,0), (4,0), (4,3), (2,6) and (0,6).
Optimal solution: (x_1, x_2) = (2, 6) with Z = 36.

Optimal Solutions
Consider the new objective function Z = 6x_1 + 4x_2.
Figure: level lines of Z = 6x_1 + 4x_2 over the feasible region, through the corner points C1 = (2,6) and C2 = (4,3).
Note: Z = 6x_1 + 4x_2 is parallel to the boundary line 3x_1 + 2x_2 = 18, so every point on the edge between C1 and C2 attains the same optimal value Z = 36.

Limitations of graphical method
The graphical method is only practical in two dimensions. The alternative is the algebraic simplex algorithm: an iterative procedure, created by George Dantzig in 1947.
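
As an aside (not part of the original lecture), small LPs like the drug example can also be solved numerically. The sketch below assumes SciPy is available and uses its linprog routine; since linprog minimises, the profit coefficients are negated.

# Solve the drug example numerically (a sketch; assumes SciPy is installed).
from scipy.optimize import linprog

c = [-3, -5]            # linprog minimises, so negate the coefficients of Z = 3x1 + 5x2
A = [[1, 0],            # x1        <= 4
     [3, 2],            # 3x1 + 2x2 <= 18
     [0, 2]]            # 2x2       <= 12
b = [4, 18, 12]

res = linprog(c, A_ub=A, b_ub=b, method="highs")   # x >= 0 is the default bound
print(res.x, -res.fun)                             # expected: [2. 6.] and 36.0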

The simplex procedural algorithm

Simplex Algorithm
General overview of the method:
Start at an initial feasible corner point, usually x = 0.
Move to an adjacent feasible corner point, by finding the corner with the best potential increase in Z.
Stop when at the optimal solution Z*: optimality is reached when all other adjacent corner points result in a decrease in Z.

Start at the initial feasible corner point
Figure: the feasible region with corner points (0,0), (4,0), (4,3), (2,6), (0,6) and the direction of increasing Z = 3x_1 + 5x_2; the algorithm starts at (0,0) with Z = 0.

First corner point iteration
Figure: move to the adjacent corner point (4,0), where Z = 12.

Second corner point iteration
Figure: move to the adjacent corner point (4,3), where Z = 27.

Third and final corner point iteration
Figure: move to the corner point (2,6), where Z = 36 (the optimum).

Alternative direction: start at the initial point
Figure: starting again at (0,0) with Z = 0, but this time moving in the x_2 direction.

Alternative direction: first corner point
Figure: move to the adjacent corner point (0,6), where Z = 30.

Alternative direction: second and final corner point
Figure: move to the corner point (2,6), where Z = 36, reaching the same optimum.

System of equations
We need to move from inequality constraints to equality constraints, by introducing new variables that reduce the inequalities to equalities.
Example: 2x_1 − 3x_2 ≤ 5 becomes 2x_1 − 3x_2 + x_3 = 5 for some x_3 ≥ 0.
Definition (slack variables): each introduced variable that reduces a less-than inequality constraint to an equality constraint is called a slack variable.

Slack Variables
For each constraint introduce a new variable equal to the difference between the RHS and LHS of the constraint. That is, the system Ax ≤ b,
a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m,
with x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0, is converted into an augmented equality system Ã x̃ = b.

New constraint matrix
After adding slack variables x_s = (x_{n+1}, x_{n+2}, ..., x_{n+m}), one for each constraint:
a_11 x_1 + a_12 x_2 + ... + a_1n x_n + x_{n+1} = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n + x_{n+2} = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n + x_{n+m} = b_m
with x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0, and x_{n+1} ≥ 0, x_{n+2} ≥ 0, ..., x_{n+m} ≥ 0.

Return to example
Initial constraints: x_1 ≤ 4; 3x_1 + 2x_2 ≤ 18; 2x_2 ≤ 12.
Introduce a new variable equal to the difference between the RHS and LHS of each constraint:
x_3 = 4 − x_1; x_4 = 18 − 3x_1 − 2x_2; x_5 = 12 − 2x_2.
New system of constraints:
x_1 + x_3 = 4; 3x_1 + 2x_2 + x_4 = 18; 2x_2 + x_5 = 12.

Algebraic representation
Turn attention to the algebraic representation of the corner-point method. The full problem with slack variables is:
Maximise: Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
such that
a_11 x_1 + a_12 x_2 + ... + a_1n x_n + x_{n+1} = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n + x_{n+2} = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n + x_{n+m} = b_m
and x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0, x_{n+1} ≥ 0, ..., x_{n+m} ≥ 0.

In tableau form

Z | x_1   x_2  ...  x_n  | x_{n+1}  x_{n+2} ... x_{n+m} | RHS
1 | -c_1  -c_2 ... -c_n  |    0        0    ...    0    |  0
0 | a_11  a_12 ... a_1n  |    1        0    ...    0    | b_1
0 | a_21  a_22 ... a_2n  |    0        1    ...    0    | b_2
  | ...                  |    ...                       | ...
0 | a_m1  a_m2 ... a_mn  |    0        0    ...    1    | b_m

Choose the initial corner-point solution at (x_1, x_2, ..., x_n) = 0.
The slack variables then take the values of the resource vector: x_s = (x_{n+1}, x_{n+2}, ..., x_{n+m})^T = b.
Initially, the slack variables are the basic variables and the decision variables are the non-basic variables.

For the example
Maximise Z, where
Z − 3x_1 − 5x_2 = 0
such that:
x_1 + x_3 = 4
3x_1 + 2x_2 + x_4 = 18
2x_2 + x_5 = 12
and x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, x_4 ≥ 0, x_5 ≥ 0.

Geometric interpretation of slack variables
Figure: each boundary line of the feasible region corresponds to one variable being zero (x_1 = 0, x_2 = 0, x_3 = 0, x_4 = 0, x_5 = 0); feasible corner points are labelled F_1, ..., F_5 and infeasible intersection points I_1, ..., I_5.

In tableau form
Step 1: set up the problem with the initial corner-point solution.

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1   -3   -5    0    0    0 |   0
x_3   | 0    1    0    1    0    0 |   4
x_4   | 0    3    2    0    1    0 |  18
x_5   | 0    0    2    0    0    1 |  12

The slack variables are the initial basic variables: x_3 = 4, x_4 = 18, x_5 = 12. The decision variables are set to zero: x = 0.

Step 2: move to an adjacent corner point. Which direction do we move? There are two choices: increase x_1 (towards the corner (4,0)) or increase x_2 (towards the corner (0,6)).
Figure: the feasible region with the direction of increasing Z = 3x_1 + 5x_2.

Choice of movement
From the objective function row, Z − 3x_1 − 5x_2 = 0, choose x_2 to enter the basis.
Which basic variable must leave the basis: x_3, x_4 or x_5? Fix x_1 = 0, increase x_2, and see which basic variable reaches zero first. The constraint equations reduce to
x_3 = 4
2x_2 + x_4 = 18
2x_2 + x_5 = 12,
which imply x_3 = 4, x_4 = 18 − 2x_2, x_5 = 12 − 2x_2.
Thus x_3 is unaffected as x_2 increases, x_4 → 0 as x_2 → 9, and x_5 → 0 as x_2 → 6. Thus x_5 must leave the basis, as it reaches zero first.

Algebraic rule
Variable to enter the basis: choose the variable with the most negative coefficient in the first row of the tableau.
Variable to leave the basis: given that variable x_j is to enter the basis, find
k = arg min_{i=1,...,m} b_i / a_ij, taken over rows with a_ij > 0.
Then the basic variable of row k is to leave the basis.
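
A minimal sketch of these two rules in code (my own illustration, not the lecture's implementation; it assumes NumPy is available and that the problem is a maximisation with ≤ constraints and b > 0, so the slack variables supply the initial basis):

import numpy as np

def simplex(c, A, b):
    """Maximise c^T x subject to A x <= b, x >= 0, with b > 0 (sketch of the tableau method)."""
    m, n = A.shape
    # Build the tableau: objective row first, then one row per constraint.
    T = np.zeros((m + 1, n + m + 1))
    T[0, :n] = -c                      # Z - c^T x = 0, so row 0 holds -c
    T[1:, :n] = A
    T[1:, n:n + m] = np.eye(m)         # slack variables form the initial basis
    T[1:, -1] = b
    basis = list(range(n, n + m))

    while True:
        j = int(np.argmin(T[0, :-1]))  # entering variable: most negative coefficient
        if T[0, j] >= 0:
            break                      # all coefficients non-negative: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 0 else np.inf
                  for i in range(1, m + 1)]
        k = int(np.argmin(ratios)) + 1 # leaving variable: minimum-ratio row
        if ratios[k - 1] == np.inf:
            raise ValueError("problem is unbounded")
        T[k] /= T[k, j]                # pivot: make column j a unit column
        for i in range(m + 1):
            if i != k:
                T[i] -= T[i, j] * T[k]
        basis[k - 1] = j

    x = np.zeros(n + m)
    for row, var in enumerate(basis, start=1):
        x[var] = T[row, -1]
    return x[:n], T[0, -1]

# Drug example: expect x = (2, 6) and Z = 36.
x, Z = simplex(np.array([3.0, 5.0]),
               np.array([[1.0, 0.0], [3.0, 2.0], [0.0, 2.0]]),
               np.array([4.0, 18.0, 12.0]))
print(x, Z)

On the drug example this performs the same pivots as the tableaux that follow.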

Algebraic rule in practice
Return to the example:

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS | Ratio b_i/a_i2
Z     | 1   -3   -5    0    0    0 |   0 |
x_3   | 0    1    0    1    0    0 |   4 |  --
x_4   | 0    3    2    0    1    0 |  18 |   9
x_5   | 0    0    2    0    0    1 |  12 |   6  (min)

Pivot on the x_2 entry of the x_5 row:

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1   -3    0    0    0  5/2 |  30
x_3   | 0    1    0    1    0    0 |   4
x_4   | 0    3    0    0    1   -1 |   6
x_2   | 0    0    1    0    0  1/2 |   6

Continue the algorithm

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS | Ratio b_i/a_i1
Z     | 1   -3    0    0    0  5/2 |  30 |
x_3   | 0    1    0    1    0    0 |   4 |   4
x_4   | 0    3    0    0    1   -1 |   6 |   2  (min)
x_2   | 0    0    1    0    0  1/2 |   6 |  --

x_1 needs to enter the basis while x_4 leaves the basis:

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1    0    0    0    1  3/2 |  36
x_3   | 0    0    0    1 -1/3  1/3 |   2
x_1   | 0    1    0    0  1/3 -1/3 |   2
x_2   | 0    0    1    0    0  1/2 |   6

Step 3: stop when at the optimal solution. How do we know when we have reached an optimal solution?

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1    0    0    0    1  3/2 |  36
x_3   | 0    0    0    1 -1/3  1/3 |   2
x_1   | 0    1    0    0  1/3 -1/3 |   2
x_2   | 0    0    1    0    0  1/2 |   6

Stop when all the coefficients in the first row are non-negative. This is the optimal solution.

Interpret the tableau of the optimal solution
In the final tableau above, any move from the current point will cause a decrease in Z: the objective row reads Z = 36 − x_4 − (3/2) x_5.
Optimal solution at x* = (2, 6) with Z* = 36; the slack variables are x_s = (2, 0, 0).

Tie Breakers: variable entering the basis
It is possible that there is no unique variable to choose under these rules.
Entering variable: there may be a tie when two variables share the most negative coefficient, e.g.

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1    0   -a   -a    0    0 |   C

where a > 0 and C > 0. Which variable should be chosen to enter the basis, x_2 or x_3?

Tie Breakers: variable leaving the basis
It is also possible to have a tie for the variable leaving the basis. For example, consider the drug problem with the second constraint modified to 3x_1 + 2x_2 ≤ 12:
Maximise: Z = 3x_1 + 5x_2
subject to: x_1 ≤ 4, 3x_1 + 2x_2 ≤ 12, 2x_2 ≤ 12, with x_1, x_2 ≥ 0.
Initial simplex tableau:

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS | Ratio
Z     | 1   -3   -5    0    0    0 |   0 |
x_3   | 0    1    0    1    0    0 |   4 |  --
x_4   | 0    3    2    0    1    0 |  12 |   6  (choose either)
x_5   | 0    0    2    0    0    1 |  12 |   6  (choose either)

Tie breaker (cont.)
Choosing x_5 to leave the basis (pivot on the x_2 entry of the x_5 row):

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1   -3    0    0    0  5/2 |  30
x_3   | 0    1    0    1    0    0 |   4
x_4   | 0    3    0    0    1   -1 |   0   (non-positive)
x_2   | 0    0    1    0    0  1/2 |   6

Notice that the resource element of the x_4 row is zero: an assumption of the simplex framework is violated.

Tie breaker (cont.)
Choosing x_4 to leave the basis instead (pivot on the x_2 entry of the x_4 row):

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1  9/2    0    0  5/2    0 |  30   (optimal)
x_3   | 0    1    0    1    0    0 |   4
x_2   | 0  3/2    1    0  1/2    0 |   6
x_5   | 0   -3    0    0   -1    1 |   0   (degeneracy)

Degenerate point: the basic variable x_5 = 0. Degeneracy can cause stalling, and can cause an endless cycle.

Degenerate points
Figure: the feasible region of the modified problem; the corner points (0,6) and (4,0) are degenerate.

Solution upon an edge
Return to the previous pharmaceutical example with Z = 6x_1 + 4x_2.
Figure: level lines of Z = 6x_1 + 4x_2 over the feasible region, through C1 = (2,6) and C2 = (4,3). Note that Z = 6x_1 + 4x_2 is parallel to the boundary line 3x_1 + 2x_2 = 18.

Solution upon an edge (cont.)
Tableau iterations:

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS | Ratio
Z     | 1   -6   -4    0    0    0 |   0 |
x_3   | 0    1    0    1    0    0 |   4 |   4  (min)
x_4   | 0    3    2    0    1    0 |  18 |   6
x_5   | 0    0    2    0    0    1 |  12 |  --

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS | Ratio
Z     | 1    0   -4    6    0    0 |  24 |
x_1   | 0    1    0    1    0    0 |   4 |  --
x_4   | 0    0    2   -3    1    0 |   6 |   3  (min)
x_5   | 0    0    2    0    0    1 |  12 |   6

Basis | Z  x_1  x_2  x_3  x_4  x_5 | RHS
Z     | 1    0    0    0    2    0 |  36   (optimal)
x_1   | 0    1    0    1    0    0 |   4
x_2   | 0    0    1 -3/2  1/2    0 |   3
x_5   | 0    0    0    3   -1    1 |   6

Solution upon an edge (cont.)
In the final tableau, x_1, x_2, x_5 are in the basis and x_3, x_4 are out of the basis. However, x_3 has a zero coefficient in the objective row. The optimal objective function is Z = 36 − 2x_4.

Solution upon an edge (cont.)
We can parametrise the solution to locate the whole optimal edge. From the system of equations in the final tableau, let x_3 = t ≥ 0 (with x_4 = 0). Then
x_1 = 4 − t (requires t ≤ 4),
x_2 = 3 + (3/2)t,
x_5 = 6 − 3t (requires t ≤ 2),
so t ∈ [0, 2]. At t = 0 this gives the corner point (4, 3) and at t = 2 the corner point (2, 6); every point in between is also optimal with Z = 36.

Extensions to the LP algorithm to include other conditions

Minimising the Objective Function
Consider finding the minimum: to minimise Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n, define a new objective function Ẑ = −Z. Then min Z ⇔ max Ẑ.
Figure: a function f(x) and its negative −f(x); the minimum of f(x) occurs at the same x as the maximum of −f(x).

Greater than or equal to constraints
If the problem has a constraint of the form
a^T x = a_1 x_1 + a_2 x_2 + ... + a_n x_n ≥ b > 0,
introduce a surplus variable x_{n+1} ≥ 0 such that
a_1 x_1 + a_2 x_2 + ... + a_n x_n − x_{n+1} = b.
As an example, consider
Maximise: Z = 3x_1 + 5x_2
subject to: x_1 ≤ 4, 3x_1 + 2x_2 ≥ 18, 2x_2 ≤ 12, with x_1, x_2 ≥ 0.

Greater than constraint example
Figure: the feasible region for this problem, the part of the original region lying on or above the line 3x_1 + 2x_2 = 18; the optimal value is Z = 42.

Negative Resource Elements
In the standard problem all resource elements b_j (1 ≤ j ≤ m) are non-negative. Suppose instead that b_j = b < 0. Then the constraint
a^T x = a_1 x_1 + a_2 x_2 + ... + a_n x_n ≤ b
is equivalent to
−a^T x = −a_1 x_1 − a_2 x_2 − ... − a_n x_n ≥ −b,
whose right-hand side −b is positive.

Negative Decision Variable
If x_k ≤ 0, introduce a new variable x̂_k = −x_k. Then x_k ≤ 0 is equivalent to x̂_k ≥ 0.

Unrestricted Decision Variable
If x_k is unrestricted in sign, introduce two new variables, x_k' ≥ 0 and x_k'' ≥ 0, and let x_k = x_k' − x_k''.
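
For instance (a small worked illustration, not from the slides), if x_2 may take either sign, set

\[
x_2 = x_2' - x_2'', \qquad x_2' \ge 0, \ x_2'' \ge 0,
\]

so that a constraint such as x_1 + 2x_2 ≤ 7 becomes x_1 + 2x_2' − 2x_2'' ≤ 7, and the problem is again in standard form with one extra non-negative variable.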

Two-Phase Method
Return to the problem:
Maximise: Z = 3x_1 + 5x_2
subject to: x_1 ≤ 4, 3x_1 + 2x_2 ≥ 18, 2x_2 ≤ 12, with x_1, x_2 ≥ 0.
Introduce slack and surplus variables:
x_1 + x_3 = 4
3x_1 + 2x_2 − x_4 = 18
2x_2 + x_5 = 12.

Finding an initial feasible corner point
In the regular framework we set the decision variables to zero and the slack variables to the resource elements. If we do that here, the equations
x_1 + x_3 = 4, 3x_1 + 2x_2 − x_4 = 18, 2x_2 + x_5 = 12
imply that x_1 = 0, x_2 = 0, x_3 = 4, x_4 = −18, x_5 = 12.
The surplus variable violates the positivity constraint!

Artificial Variables
Introduce an artificial variable for any constraint that involves an equality or a greater-than-or-equal-to condition. For example, introduce x_6 ≥ 0 in the second constraint equation. The constraint equations become
x_1 + x_3 = 4
3x_1 + 2x_2 − x_4 + x_6 = 18
2x_2 + x_5 = 12.
Set the decision and surplus variables to zero, and set the slack and artificial variables to the resource elements (RHS).
Initial corner point: (x_1, x_2, x_3, x_4, x_5, x_6) = (0, 0, 4, 0, 12, 18).

Not a corner point of the actual problem
The initial corner point (x_1, x_2, x_3, x_4, x_5, x_6) = (0, 0, 4, 0, 12, 18) is not in the feasible set of the original problem.
Figure: the feasible region of the original problem (optimal value Z = 42); the starting point (x_1, x_2) = (0, 0) lies outside it.

Reconcile the initial corner point
Feasible solution: a feasible solution of the problem with artificial variables is a feasible solution of the original problem if and only if the artificial variables are zero.
In the example, the second constraint gives
3x_1 + 2x_2 − x_4 = 18 − x_6,
whose right-hand side is smaller than 18 if x_6 > 0, so
3x_1 + 2x_2 − x_4 = 18 if and only if x_6 = 0.
Thus we must force the artificial variables to zero.

Find initial feasible solution
To find a basic feasible solution of the original problem, first maximise
W = − Σ_{k artificial} x_k ≤ 0.
If max W = 0, then all artificial variables are zero, and we can proceed with the regular simplex algorithm.
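
For the example above there is a single artificial variable, x_6, so the phase-one objective is simply (a worked instance of the rule just stated)

\[
\max \; W = -x_6 \quad \Longleftrightarrow \quad \min \; x_6,
\]

and max W = 0 is attained exactly when x_6 = 0, i.e. when 3x_1 + 2x_2 − x_4 = 18 is satisfied without artificial help.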

First Phase: Maximise W

Basis | x_1  x_2  x_3  x_4  x_5  x_6 | RHS
W     |   0    0    0    0    0    1 |   0
Z     |  -3   -5    0    0    0    0 |   0
x_3   |   1    0    1    0    0    0 |   4
      |   3    2    0   -1    0    1 |  18
x_5   |   0    2    0    0    1    0 |  12

Eliminate x_6 from the W row so that the artificial variable is basic:

Basis | x_1  x_2  x_3  x_4  x_5  x_6 | RHS
W     |  -3   -2    0    1    0    0 | -18
Z     |  -3   -5    0    0    0    0 |   0
x_3   |   1    0    1    0    0    0 |   4
x_6   |   3    2    0   -1    0    1 |  18
x_5   |   0    2    0    0    1    0 |  12

After the maximisation of W is complete, remove W and all artificial variables.

First Phase: Maximise W

Basis | x_1  x_2  x_3  x_4  x_5  x_6 | RHS | Ratio
W     |  -3   -2    0    1    0    0 | -18 |
Z     |  -3   -5    0    0    0    0 |   0 |
x_3   |   1    0    1    0    0    0 |   4 |   4  (min)
x_6   |   3    2    0   -1    0    1 |  18 |   6
x_5   |   0    2    0    0    1    0 |  12 |  --

Basis | x_1  x_2  x_3  x_4  x_5  x_6 | RHS | Ratio
W     |   0   -2    3    1    0    0 |  -6 |
Z     |   0   -5    3    0    0    0 |  12 |
x_1   |   1    0    1    0    0    0 |   4 |  --
x_6   |   0    2   -3   -1    0    1 |   6 |   3  (min)
x_5   |   0    2    0    0    1    0 |  12 |   6

Basis | x_1  x_2  x_3  x_4  x_5  x_6 | RHS
W     |   0    0    0    0    0    1 |   0
Z     |   0    0 -9/2 -5/2    0  5/2 |  27
x_1   |   1    0    1    0    0    0 |   4
x_2   |   0    1 -3/2 -1/2    0  1/2 |   3
x_5   |   0    0    3    1    1   -1 |   6

Second Phase: Maximise Z with the new starting point

Basis | x_1  x_2  x_3  x_4  x_5 | RHS | Ratio
Z     |   0    0 -9/2 -5/2    0 |  27 |
x_1   |   1    0    1    0    0 |   4 |   4
x_2   |   0    1 -3/2 -1/2    0 |   3 |  --
x_5   |   0    0    3    1    1 |   6 |   2  (min)

Basis | x_1  x_2  x_3  x_4  x_5 | RHS | Ratio
Z     |   0    0    0   -1  3/2 |  36 |
x_1   |   1    0    0 -1/3 -1/3 |   2 |  --
x_2   |   0    1    0    0  1/2 |   6 |  --
x_3   |   0    0    1  1/3  1/3 |   2 |   6  (min)

Basis | x_1  x_2  x_3  x_4  x_5 | RHS
Z     |   0    0    3    0  5/2 |  42   (optimal)
x_1   |   1    0    1    0    0 |   4
x_2   |   0    1    0    0  1/2 |   6
x_4   |   0    0    3    1    1 |   6

The optimal solution is (x_1, x_2) = (4, 6) with Z = 42.

Big M-method
An alternative to the Two-Phase method. Consider
Maximise: Z = −x_1 + 2x_2
subject to: x_1 + x_2 ≥ 2, −x_1 + x_2 ≥ 1, x_2 ≤ 3, with x_1, x_2 ≥ 0.
Introduce slack, surplus and artificial variables as before, and modify the objective function to penalise the artificial variables:
Z ← Z ± M Σ_{k artificial} x_k.
NB: the sign in front of M penalises against the optimisation.

Big M-method
Constraint equations:
x_1 + x_2 − x_3 + x_6 = 2
−x_1 + x_2 − x_4 + x_7 = 1
x_2 + x_5 = 3
The objective function for maximisation is
Max: Z = −x_1 + 2x_2 − M(x_6 + x_7).
NB: if minimisation were desired we would instead consider
Min: Z = −x_1 + 2x_2 + M(x_6 + x_7).
M is considered to be an arbitrarily large number.

Big M implementation

Basis | x_1    x_2  x_3  x_4  x_5  x_6  x_7 | RHS | Ratio
Z     |   1     -2    0    0    0    M    M |   0 |
      |   1      1   -1    0    0    1    0 |   2 |
      |  -1      1    0   -1    0    0    1 |   1 |
x_5   |   0      1    0    0    1    0    0 |   3 |

Initialise the tableau so that the artificial variables are basic (eliminate M from the x_6 and x_7 columns of the Z row):

Basis | x_1    x_2  x_3  x_4  x_5  x_6  x_7 | RHS | Ratio
Z     |   1  -2M-2    M    M    0    0    0 | -3M |
x_6   |   1      1   -1    0    0    1    0 |   2 |   2
x_7   |  -1      1    0   -1    0    0    1 |   1 |   1  (min)
x_5   |   0      1    0    0    1    0    0 |   3 |   3

Then continue with the regular simplex algorithm.

Big M implementation (cont.)

Basis |   x_1   x_2   x_3   x_4  x_5    x_6    x_7 |  RHS | Ratio
Z     |     1 -2M-2     M     M    0      0      0 |  -3M |
x_6   |     1     1    -1     0    0      1      0 |    2 |   2
x_7   |    -1     1     0    -1    0      0      1 |    1 |   1  (min)
x_5   |     0     1     0     0    1      0      0 |    3 |   3

Basis |   x_1   x_2   x_3   x_4  x_5    x_6    x_7 |  RHS | Ratio
Z     | -2M-1     0     M  -M-2    0      0   2M+2 | -M+2 |
x_6   |     2     0    -1     1    0      1     -1 |    1 |  1/2  (min)
x_2   |    -1     1     0    -1    0      0      1 |    1 |  --
x_5   |     1     0     0     1    1      0     -1 |    2 |   2

Basis |   x_1   x_2   x_3   x_4  x_5    x_6    x_7 |  RHS | Ratio
Z     |     0     0  -0.5  -1.5    0  M+0.5  M+1.5 |  2.5 |
x_1   |     1     0  -0.5   0.5    0    0.5   -0.5 |  0.5 |   1
x_2   |     0     1  -0.5  -0.5    0    0.5    0.5 |  1.5 |  --
x_5   |     0     0   0.5   0.5    1   -0.5   -0.5 |  1.5 |   3

Big M implementation (cont.)

Basis |  x_1  x_2   x_3   x_4  x_5    x_6    x_7 | RHS | Ratio
Z     |    0    0  -0.5  -1.5    0  M+0.5  M+1.5 | 2.5 |
x_1   |    1    0  -0.5   0.5    0    0.5   -0.5 | 0.5 |   1  (min)
x_2   |    0    1  -0.5  -0.5    0    0.5    0.5 | 1.5 |  --
x_5   |    0    0   0.5   0.5    1   -0.5   -0.5 | 1.5 |   3

Basis |  x_1  x_2  x_3  x_4  x_5  x_6  x_7 | RHS | Ratio
Z     |    3    0   -2    0    0  M+2    M |   4 |
x_4   |    2    0   -1    1    0    1   -1 |   1 |  --
x_2   |    1    1   -1    0    0    1    0 |   2 |  --
x_5   |   -1    0    1    0    1   -1    0 |   1 |   1  (min)

Basis |  x_1  x_2  x_3  x_4  x_5  x_6  x_7 | RHS
Z     |    1    0    0    0    2    M    M |   6   (optimal)
x_4   |    1    0    0    1    1    0   -1 |   2
x_2   |    0    1    0    0    1    0    0 |   3
x_3   |   -1    0    1    0    1   -1    0 |   1

The optimal solution of the original problem is (x_1, x_2) = (0, 3) with Z = 6.
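
This final result can be cross-checked numerically (a sketch, not part of the slides; it assumes SciPy, takes the constraint directions as reconstructed above, and rewrites the ≥ constraints as ≤ by negating both sides, so no big-M penalty or artificial variables are needed):

# Numerical cross-check of the Big M example (sketch; assumes SciPy).
from scipy.optimize import linprog

c = [1, -2]              # minimise -Z, where Z = -x1 + 2x2
A = [[-1, -1],           #  x1 + x2 >= 2   rewritten as  -x1 - x2 <= -2
     [ 1, -1],           # -x1 + x2 >= 1   rewritten as   x1 - x2 <= -1
     [ 0,  1]]           #        x2 <= 3
b = [-2, -1, 3]

res = linprog(c, A_ub=A, b_ub=b, method="highs")
print(res.x, -res.fun)   # expected: [0. 3.] and 6.0, matching the final tableau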

The Dual Problem

Dual Problem
Recall the regular problem in matrix notation:
Primal: Maximise Z = c^T x subject to Ax ≤ b, with x ≥ 0.
We refer to this as the Primal problem.

Definition of dual problem
Primal: Maximise Z = c^T x, subject to Ax ≤ b, with x ≥ 0.
Dual: Minimise v = y^T b, subject to y^T A ≥ c^T, with y ≥ 0.
Return to the initial pharmaceutical example:
Maximise: Z = 3x_1 + 5x_2
such that: x_1 ≤ 4, 3x_1 + 2x_2 ≤ 18, 2x_2 ≤ 12, and x_1 ≥ 0, x_2 ≥ 0.

Example: Dual construction
We look for a bound on the optimal value. From the constraints,
Z = 3x_1 + 5x_2 ≤ 3·4 + 5·6 = 42.
Consider a general non-negative linear combination of the constraints:
(x_1) y_1 + (3x_1 + 2x_2) y_2 + (2x_2) y_3 ≤ 4y_1 + 18y_2 + 12y_3.
Define a new objective function v := 4y_1 + 18y_2 + 12y_3. Notice that
Z := 3x_1 + 5x_2 ≤ (y_1 + 3y_2) x_1 + (2y_2 + 2y_3) x_2 ≤ 4y_1 + 18y_2 + 12y_3 =: v.

Example: continued
The last statement is true if y ≥ 0 and
y_1 + 3y_2 ≥ 3 and 2y_2 + 2y_3 ≥ 5.
The dual problem is therefore
Minimise: v = 4y_1 + 18y_2 + 12y_3
such that: y_1 + 3y_2 ≥ 3, 2y_2 + 2y_3 ≥ 5, and y_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0.

Useful applications
The dimensions swap: the dual has one variable per primal constraint and one constraint per primal decision variable.
The optimal values of max Z and min v coincide under certain conditions.
The dual can sometimes simplify the problem to be solved.
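
As a quick numerical illustration of the second point for the pharmaceutical example (a sketch assuming SciPy; the primal and dual are solved independently and their optimal values compared):

# Primal and dual of the pharmaceutical example (sketch; assumes SciPy).
from scipy.optimize import linprog

# Primal: max 3x1 + 5x2  s.t.  x1 <= 4, 3x1 + 2x2 <= 18, 2x2 <= 12, x >= 0.
primal = linprog([-3, -5], A_ub=[[1, 0], [3, 2], [0, 2]], b_ub=[4, 18, 12],
                 method="highs")

# Dual: min 4y1 + 18y2 + 12y3  s.t.  y1 + 3y2 >= 3, 2y2 + 2y3 >= 5, y >= 0
# (the >= constraints are passed to linprog as negated <= constraints).
dual = linprog([4, 18, 12], A_ub=[[-1, -3, 0], [0, -2, -2]], b_ub=[-3, -5],
               method="highs")

print(-primal.fun, dual.fun)   # both should print 36.0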

Example
Maximise: Z = 8x_1 + 30x_2 + 7x_3
subject to: x_1 + 5x_2 + 3x_3 ≤ 10, 4x_1 + 6x_2 + x_3 ≤ 15, with x_1, x_2, x_3 ≥ 0.
The corresponding dual problem is:
Minimise: v = 10y_1 + 15y_2
subject to: y_1 + 4y_2 ≥ 8, 5y_1 + 6y_2 ≥ 30, 3y_1 + y_2 ≥ 7, with y_1, y_2 ≥ 0,
which can be solved graphically.

Graphical solution
Minimise: v = 10y_1 + 15y_2
subject to: y_1 + 4y_2 ≥ 8, 5y_1 + 6y_2 ≥ 30, 3y_1 + y_2 ≥ 7, with y_1, y_2 ≥ 0.
Figure: the feasible region in the (y_1, y_2) plane bounded by the lines y_1 + 4y_2 = 8, 5y_1 + 6y_2 = 30 and 3y_1 + y_2 = 7; the optimal solution is v = 435/7 at (y_1, y_2) = (36/7, 5/7).

Second Example
Return to the pharmaceutical problem. It has the dual problem:
Minimise: v = 4y_1 + 18y_2 + 12y_3
such that: y_1 + 3y_2 ≥ 3, 2y_2 + 2y_3 ≥ 5, and y_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0.
Standardise the problem for the simplex framework, using surplus variables y_4, y_5 and artificial variables y_6, y_7:
Maximise: −v = −4y_1 − 18y_2 − 12y_3
subject to: y_1 + 3y_2 − y_4 + y_6 = 3, 2y_2 + 2y_3 − y_5 + y_7 = 5,
with y_1, y_2, ..., y_7 ≥ 0.

Solve using Two-Phase
First Phase: Maximise W

Basis | y_1  y_2  y_3  y_4  y_5  y_6  y_7 | RHS
W     |   0    0    0    0    0    1    1 |   0
v     |   4   18   12    0    0    0    0 |   0
      |   1    3    0   -1    0    1    0 |   3
      |   0    2    2    0   -1    0    1 |   5

Eliminate the artificial variables from the W row:

Basis | y_1  y_2  y_3  y_4  y_5  y_6  y_7 | RHS
W     |  -1   -5   -2    1    1    0    0 |  -8
v     |   4   18   12    0    0    0    0 |   0
y_6   |   1    3    0   -1    0    1    0 |   3
y_7   |   0    2    2    0   -1    0    1 |   5

Maximise W

Basis |  y_1  y_2  y_3  y_4  y_5  y_6  y_7 | RHS | Ratio
W     |   -1   -5   -2    1    1    0    0 |  -8 |
v     |    4   18   12    0    0    0    0 |   0 |
y_6   |    1    3    0   -1    0    1    0 |   3 |   1  (min)
y_7   |    0    2    2    0   -1    0    1 |   5 |  5/2

Basis |  y_1  y_2  y_3  y_4  y_5  y_6  y_7 | RHS | Ratio
W     |  2/3    0   -2 -2/3    1  5/3    0 |  -3 |
v     |   -2    0   12    6    0   -6    0 | -18 |
y_2   |  1/3    1    0 -1/3    0  1/3    0 |   1 |  --
y_7   | -2/3    0    2  2/3   -1 -2/3    1 |   3 |  3/2  (min)

Basis |  y_1  y_2  y_3  y_4  y_5  y_6  y_7 | RHS
W     |    0    0    0    0    0    1    1 |   0
v     |    2    0    0    2    6   -2   -6 | -36
y_2   |  1/3    1    0 -1/3    0  1/3    0 |   1
y_3   | -1/3    0    1  1/3 -1/2 -1/3  1/2 | 3/2

Phase Two: Discard W
After discarding W and the artificial variables, notice that the tableau is already optimal (all coefficients in the objective row are non-negative):

Basis |  y_1  y_2  y_3  y_4  y_5 | RHS
v     |    2    0    0    2    6 | -36
y_2   |  1/3    1    0 -1/3    0 |   1
y_3   | -1/3    0    1  1/3 -1/2 | 3/2

Max Z = 3x_1 + 5x_2 = 36 = Min v,
with y* = (y_1, y_2, y_3) = (0, 1, 3/2) and x* = (2, 6).
The optimal x can be read off as the cost coefficients of the surplus variables y_4 and y_5 in the objective row. This agrees with the graphical and simplex methods used before.