Fundamental Theorems of Optimization


Fundamental Theorems of Mathematical Programming: (1) maximizing a concave function over a convex set, and (2) maximizing a convex function over a closed bounded convex set.

Maximizing Concave Functions The problem is to maximize a concave function over a convex set. FUNDAMENTAL THEOREM 1: A local optimum is a global optimum.

Proof Assume the contrary. Let point x be a local optimum that is NOT a global optimum, and let point y be a global optimum. Consider the line segment between points x and y.

Proof (figure): the curve has a local maximum at x that is not a global maximum; the global maximum occurs at y.

Proof Since f(y) > f(x), the line segment joining (x, f(x)) and (y, f(y)) lies strictly above the height f(x) at every point other than x itself.

Proof (figure): the line segment from (x, f(x)) to (y, f(y)) lies strictly above f(x) at every point other than x.

Proof Since the curve is concave, the value of f at every point of the segment between x and y is greater than or equal to the height of the line segment joining (x, f(x)) and (y, f(y)).

Proof (figure): the curve lies on or above the line segment, so between x and y the curve is strictly greater than f(x) at every point other than x.

Proof by Contradiction (figure): every neighborhood of x therefore contains feasible points whose value strictly exceeds f(x), so x cannot be a local maximum. Contradiction.
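Theorem 1 can be sanity-checked numerically. Below is a minimal sketch (the function f and the step-halving search are my own illustration, not from the slides): a greedy local search on a concave function, started anywhere in the feasible interval, cannot get stuck below the global maximum.

```python
def hill_climb(f, lo, hi, x, step=1.0, tol=1e-9):
    """Greedy local ascent on [lo, hi]: move to a neighbor while one improves f.

    For a concave f, any local maximum this finds is also global
    (Fundamental Theorem 1), so the start point does not matter.
    """
    while step > tol:
        moved = True
        while moved:
            moved = False
            for nxt in (x + step, x - step):
                if lo <= nxt <= hi and f(nxt) > f(x):
                    x, moved = nxt, True
        step /= 2          # refine once no neighbor at this step improves f
    return x

f = lambda t: -(t - 2.0) ** 2          # concave, global maximum at t = 2
x_star = hill_climb(f, 0.0, 5.0, 0.0)  # start far from the optimum
```

Starting from 0.0 (or any other point in [0, 5]), the search ends at t = 2, the global maximum.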

Maximizing Convex Functions Problem: maximizing convex functions over closed bounded convex sets. Because the feasible region is a closed bounded convex set, every point of the feasible region is a convex combination of its extreme points.

Feasible Region and Extreme Points (figure): a feasible region with its extreme points marked; every point of the region is a convex combination of extreme points, so the feasible region is the set of all convex combinations of its extreme points.

Maximizing Convex Functions Fundamental Theorem 2: There exists an extreme point at which a convex function attains its global maximum over a closed bounded convex set.

Proof by Contradiction Assume the contrary: there is a global maximum at a point p that is not an extreme point, and no global maximum occurs at an extreme point. Therefore, for every extreme point q: f(p) > f(q). Point p is a convex combination of some set of extreme points, so there exist extreme points x_1, x_2, ..., x_k and positive scalars r_1, ..., r_k that sum to 1 such that: p = sum over j of r_j * x_j

Proof Since f is a convex function: f(sum over j of r_j * x_j) <= sum over j of r_j * f(x_j). If f(x_j) is strictly less than the global maximum for every j, then the right-hand side of this inequality is strictly less than the global maximum, and so the left-hand side is too. But the left-hand side is f(p). Hence p is not a global maximum. Contradiction!
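Theorem 2 can likewise be checked numerically. The sketch below (the unit square and the convex function are my own choices) samples random convex combinations of the extreme points of a square and verifies that none beats the best vertex, exactly the Jensen-inequality step of the proof.

```python
import random

random.seed(0)
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]             # extreme points of the square
f = lambda p: (p[0] - 0.2) ** 2 + (p[1] - 0.3) ** 2  # a convex function

best_vertex = max(f(v) for v in verts)               # f(1, 1) = 1.13 here
for _ in range(1000):
    w = [random.random() for _ in verts]
    s = sum(w)
    r = [wi / s for wi in w]                         # positive weights summing to 1
    p = (sum(ri * v[0] for ri, v in zip(r, verts)),
         sum(ri * v[1] for ri, v in zip(r, verts)))
    # Jensen: f(sum r_j x_j) <= sum r_j f(x_j) <= max_j f(x_j)
    assert f(p) <= best_vertex + 1e-12
```

No convex combination of the vertices exceeds the best vertex value, so the global maximum over the square is attained at an extreme point.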

Remembering the Fundamental Theorems Graphically (figures): for a concave function, a local maximum is a global maximum; for a convex function over a closed bounded convex set, there exists an extreme point that is a global maximum.

Introduction to Linear Programming

Maximizing Linear Functions Which of the fundamental theorems can we apply if the objective function is linear?

Maximizing Linear Functions Which of the fundamental theorems can we apply if the objective function is linear? BOTH fundamental theorems, because linear functions are both concave and convex!

Maximizing Linear Functions Maximizing linear functions over closed bounded convex sets allows us to use both theorems. So: every local maximum is a global maximum, and there exists an extreme point that is a global maximum.

Algorithm for Linear Objective Functions Start at an extreme point. While the extreme point is not a local maximum: traverse along the boundary of the feasible region to a better neighboring extreme point, i.e., a point with a higher value of the objective function.

Linear Feasible Regions Can we simplify the problem if the feasible region is bounded by hyperplanes? The feasible region is then a polyhedron with a finite number of extreme points. The objective strictly increases at every step, so the while loop never returns to an extreme point it has already visited, and the algorithm must stop. The loop can iterate at most N times, where N is the number of extreme points of the polyhedron representing the feasible region.

Algorithm for Linear Problems Start at an extreme point. If it is not a local maximum, find a direction of improvement along a boundary to a neighboring better extreme point.

Algorithm for Linear Problems If the extreme point is not a local maximum, find a direction of improvement along a boundary to a neighboring better extreme point.

Algorithm for Linear Problems If the extreme point is a local maximum, stop.

Linear Programming Linear programming is mathematical programming where the objective function and constraints are linear. Example:
Maximize z where 3x0 + 2x1 = z
Subject to:
  2x0 + x1 <= 4
  x0 + 2x1 <= 6
  x0, x1 >= 0
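Because an optimum lies at an extreme point, this tiny LP can be solved by brute force: enumerate the pairwise intersections of the constraint lines (including the sign constraints), keep the feasible ones, and take the best vertex. A sketch under those assumptions; the helper names are my own:

```python
from itertools import combinations

# Each constraint written as a*x0 + b*x1 <= t, including x0 >= 0, x1 >= 0.
cons = [((2, 1), 4), ((1, 2), 6), ((-1, 0), 0), ((0, -1), 0)]

def intersect(c1, c2):
    """Point where both constraints hold with equality (Cramer's rule), or None."""
    (a, b), e = c1
    (c, d), f = c2
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

feasible = lambda p: all(a * p[0] + b * p[1] <= t + 1e-9 for (a, b), t in cons)
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
# best is approximately (2/3, 8/3), with objective value 22/3
```

The feasible vertices are (0, 0), (2, 0), (0, 3) and (2/3, 8/3); the last one maximizes 3x0 + 2x1.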

Canonical Form For now, we restrict ourselves to problems of the form: Maximize z where c.x = z, subject to A.x <= b, x >= 0, where c is a row vector of length n, x is a column vector of length n, A is an m x n matrix, and b is a column vector of length m with all entries non-negative.

Example of Canonical Form Example:
Maximize z where 3x0 + 2x1 = z
Subject to:
  2x0 + x1 <= 4
  x0 + 2x1 <= 6
  x0, x1 >= 0
Here c = [3, 2], A = [2 1; 1 2], b = [4; 6], x = [x0; x1].

Another Canonical Form Convert the inequalities to equalities. The example
Maximize z where 3x0 + 2x1 = z
Subject to: 2x0 + x1 <= 4, x0 + 2x1 <= 6, x0, x1 >= 0
becomes:
Maximize z where 3x0 + 2x1 + 0s0 + 0s1 = z
Subject to:
  2x0 + x1 + s0 + 0s1 = 4
  x0 + 2x1 + 0s0 + s1 = 6
  x0, x1, s0, s1 >= 0
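A quick numeric check of the slack conversion (a sketch; the feasible point x = (1, 1) is my own arbitrary choice): the slacks are whatever is left of each resource, s = b - A.x, and feasibility of x is exactly s >= 0.

```python
A = [[2, 1], [1, 2]]
b = [4, 6]
x = [1, 1]                 # an arbitrary feasible point, chosen for illustration

# Slacks turn each inequality into an equality: s = b - A.x
s = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]

assert all(si >= 0 for si in s)          # x is feasible iff every slack is >= 0
# A.x + I.s = b holds row by row:
assert all(sum(aij * xj for aij, xj in zip(row, x)) + si == bi
           for row, si, bi in zip(A, s, b))
```

Here s = (1, 3): neither constraint is tight at (1, 1).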

The Alternative Canonical Form Max z where c.x = z, subject to A.x <= b, x >= 0 becomes: Max z where c.x + 0.s = z, subject to A.x + I.s = b, x, s >= 0. Here I is the identity matrix.

Relationship to Economics The constraints represent constraints on resources. The columns represent activities. The objective function represents revenue.

Relationship to Economics
Maximize z where 4x0 + 5x1 + 9x2 = z
Subject to:
  2x0 + x1 + 3x2 <= 6
  x0 + 2x1 + 4x2 <= 9
  x0, x1, x2 >= 0
Example: A furniture maker has two scarce resources: wood and labor. The company has 6 units of wood and 9 units of labor, and can make tables, chairs, or cupboards. A table requires 2 units of wood and 1 unit of labor and produces revenue of 4 units. A chair requires 1 unit of wood and 2 units of labor and produces revenue of 5 units. A cupboard requires 3 units of wood and 4 units of labor and produces revenue of 9 units. How many tables, chairs, and cupboards should the company make to maximize revenue?

Definition: Basic Feasible Solution A feasible solution is a vector x such that A.x <= b and x >= 0. A basic solution (of the equality form) is one in which at most m variables are non-zero (and so at least n - m variables are zero), and the columns corresponding to the possibly non-zero variables are linearly independent. Let B be the matrix obtained by putting together the columns of the possibly non-zero variables, and let x_B be the vector formed by those variables. Then the basic solution is x_B = B^(-1).b, with all other variables equal to zero.

Examples of Basic Solutions
Maximize z where 4x0 + 5x1 + 9x2 + 0s0 + 0s1 = z
Subject to:
  2x0 + x1 + 3x2 + s0 + 0s1 = 6
  x0 + 2x1 + 4x2 + 0s0 + s1 = 9
  x0, x1, x2, s0, s1 >= 0
Example 1: Basic variables are s0, s1. Non-basic variables are x0, x1, x2. The columns corresponding to the basic variables are the last two columns, i.e., [1 0; 0 1]. So the basic solution is (s0, s1) = (6, 9) and (x0, x1, x2) = (0, 0, 0).

Examples of Basic Solutions (same system) Example 2: Basic variables are x0, x1. Non-basic variables are x2, s0, s1. The columns corresponding to the basic variables are the first two columns, i.e., [2 1; 1 2]. So the basic solution is (x0, x1) = (1, 4) and (x2, s0, s1) = (0, 0, 0).

Examples of Basic Solutions (same system) Example 3: Basic variables are x1, s0. Non-basic variables are x0, x2, s1. The columns corresponding to the basic variables are the second and fourth columns, i.e., [1 1; 2 0]. So the basic solution is (x1, s0) = (4.5, 1.5) and (x0, x2, s1) = (0, 0, 0).
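All basic solutions of this system can be enumerated directly: choose any 2 of the 5 columns of [A | I], and when they are independent solve x_B = B^(-1).b. A sketch in exact arithmetic (helper names are my own); it reproduces the three examples above:

```python
from fractions import Fraction
from itertools import combinations

# Columns of [A | I] for the example: x0, x1, x2, s0, s1
cols = [[2, 1], [1, 2], [3, 4], [1, 0], [0, 1]]
names = ["x0", "x1", "x2", "s0", "s1"]
b = [6, 9]

def solve2(B, rhs):
    """x_B = B^(-1).rhs for a 2x2 basis matrix B, or None if B is singular."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if det == 0:
        return None                      # dependent columns: not a basis
    return [Fraction(rhs[0] * B[1][1] - B[0][1] * rhs[1], det),
            Fraction(B[0][0] * rhs[1] - rhs[0] * B[1][0], det)]

basic = {}
for i, j in combinations(range(5), 2):
    B = [[cols[i][0], cols[j][0]],       # rows of B are built from the two columns
         [cols[i][1], cols[j][1]]]
    xB = solve2(B, b)
    if xB is not None:
        basic[(names[i], names[j])] = xB
```

For instance basic[("s0", "s1")] is [6, 9], basic[("x0", "x1")] is [1, 4], and basic[("x1", "s0")] is [9/2, 3/2], matching Examples 1-3. A basic solution is feasible exactly when both entries are non-negative.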

Economic Meaning of Positive Slack Variables The slack variable is the variable we added to turn a constraint into an equality. We have one slack variable for every constraint. If a slack variable is positive in a solution, the inequality constraint is not tight. In other words, we are not using all of that resource, which implies that the resource is not scarce.

Theorem A solution is an extreme point of the feasible region if and only if it is a basic feasible solution. Proof outline: first prove that every basic feasible solution is an extreme point; then prove that every feasible solution that is not a basic solution is not an extreme point.

Every basic feasible solution is extreme Proof: Let p be a basic feasible solution. Suppose u and v are two feasible solutions, distinct from p, such that the line segment between u and v passes through p. Thus p is a weighted average of u and v with positive weights. Since u and v are feasible, their elements are non-negative. The only way a weighted average (with positive weights) of non-negative numbers can be zero is for the numbers themselves to be zero.

Proof continued Hence the values of the non-basic variables in u and v are zero. Therefore, the values of the basic variables in both u and v must be B^(-1).b, the same as in p. Hence u and v are not distinct from p: contradiction!

Theorem: Non-basic is not Extreme Proof sketch: Assume there are m+1 or more variables that are strictly positive in a solution p. The column corresponding to at least one of these variables, say x_k, is linearly dependent on the remaining columns. Consider a solution u obtained by perturbing p: increase x_k by an arbitrarily small positive epsilon and adjust the other positive variables in p so that u is feasible. Consider a solution v obtained in the same way by decreasing x_k by an arbitrarily small positive delta. Then p is a convex combination of u and v, so p is not an extreme point.

Theorem Consider the problem: Max z where c.x = z, subject to A.x + I.s = b and x, s >= 0. The basic feasible solution x = 0, s = b is optimal if all the elements of c are non-positive. Proof: For any feasible x, c.x is non-positive since c is non-positive and x is non-negative. Hence z = 0, attained at x = 0, s = b, is optimal.

The Algorithm Start with a basic feasible solution: say x = 0, s = b, where s is the vector of slacks. While a locally optimal extreme point has not been found: increase the value of a non-basic variable that improves the objective function until a basic variable becomes zero; then modify the basis by replacing the basic variable that has become zero with the variable that became positive.

The Algorithm: Computational Steps Start with a basic feasible solution: say x = 0, s = b, where s is the vector of slacks. Always maintain the problem in canonical form: max z where c.x + 0.s = z, subject to A.x + I.s = b, x, s >= 0. While some element of c is positive: increase the value of a non-basic variable corresponding to a positive element of c until a basic variable becomes zero; replace that basic variable in the basis with the variable corresponding to the positive element of c; convert back to canonical form.
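The computational steps above can be sketched as a small tableau implementation. This is my own sketch, not the slides' code: it uses exact Fraction arithmetic, a "first positive objective coefficient" entering rule, and the minimum-ratio leaving rule, with no special handling of degeneracy.

```python
from fractions import Fraction

def simplex(c, A, b):
    """Maximize c.x subject to A.x <= b, x >= 0 (all b_i >= 0), tableau method."""
    m, n = len(A), len(c)
    # Rows of [A | I | b] in exact rational arithmetic.
    rows = [[Fraction(A[i][j]) for j in range(n)]
            + [Fraction(1 if k == i else 0) for k in range(m)]
            + [Fraction(b[i])] for i in range(m)]
    # Objective row c.x + 0.s = z; the last entry holds -(current z).
    obj = [Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))                    # the slacks start out basic
    while any(obj[j] > 0 for j in range(n + m)):
        col = next(j for j in range(n + m) if obj[j] > 0)   # entering variable
        # Minimum-ratio test picks the leaving row (first basic var to hit 0).
        candidates = [(rows[i][-1] / rows[i][col], i)
                      for i in range(m) if rows[i][col] > 0]
        if not candidates:
            raise ValueError("objective is unbounded")
        _, r = min(candidates)
        piv = rows[r][col]
        rows[r] = [v / piv for v in rows[r]]         # scale the pivot row
        for i in range(m):                           # clear the pivot column
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [vi - f * vp for vi, vp in zip(rows[i], rows[r])]
        f = obj[col]
        obj = [vo - f * vp for vo, vp in zip(obj, rows[r])]
        basis[r] = col                               # update the basis
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = rows[i][-1]
    return -obj[-1], x
```

On the running example this reproduces the walkthrough below: x0 enters (s0 leaves), then x1 enters (s1 leaves), ending at x = (1, 4, 0) with z = 24.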

Examples of Simplex Algorithm
Maximize z where 4x0 + 5x1 + 9x2 + 0s0 + 0s1 = z
Subject to:
  2x0 + x1 + 3x2 + s0 + 0s1 = 6
  x0 + 2x1 + 4x2 + 0s0 + s1 = 9
  x0, x1, x2, s0, s1 >= 0
The problem is in canonical form: max z where c.x + 0.s = z, subject to A.x + I.s = b, x, s >= 0, with b non-negative. A basic feasible solution (and hence an extreme point) is s = b, x = 0. Some coefficients of c are positive, so increase any variable with a positive coefficient, say x0.

Examples of Simplex Algorithm (tableau unchanged) Keep increasing x0 until some currently basic variable decreases in value to 0. How large can we make x0? Which basic variable decreases to 0 first?

Examples of Simplex Algorithm (tableau unchanged) When x0 increases to 3, s0 decreases to 0 while s1 is still positive (the limiting ratios are 6/2 = 3 for the first constraint and 9/1 = 9 for the second). So at the next basic feasible solution (and hence extreme point) the basic variables are x0 and s1, while the other variables x1, x2 and s0 are 0.

Examples of Simplex Algorithm (tableau unchanged) Old basic variables: s0, s1. New basic variables: x0, s1. Convert to canonical form for the new basic variables by pivoting on the element in the column of the incoming basic variable (the x0 column) and the row of the outgoing basic variable (the first constraint).

Examples of Simplex Algorithm
Maximize z where 4x0 + 5x1 + 9x2 + 0s0 + 0s1 = z
Subject to:
  2x0 + x1 + 3x2 + s0 + 0s1 = 6
  x0 + 2x1 + 4x2 + 0s0 + s1 = 9
  x0, x1, x2, s0, s1 >= 0
To pivot, our goal is to make the x0 column into a unit vector: transform the column [4, 2, 1] (objective row, first constraint, second constraint) into [0, 1, 0]. First, divide the first constraint by 2.

Examples of Simplex Algorithm
Maximize z where 4x0 + 5x1 + 9x2 + 0s0 + 0s1 = z
Subject to:
  x0 + 0.5x1 + 1.5x2 + 0.5s0 + 0s1 = 3
  x0 + 2x1 + 4x2 + 0s0 + s1 = 9
  x0, x1, x2, s0, s1 >= 0
The x0 column is now [4, 1, 1]. To make it [0, 1, 0], subtract 4 times the first constraint from the objective function.

Examples of Simplex Algorithm
Maximize z where 0x0 + 3x1 + 3x2 - 2s0 + 0s1 = z - 12
Subject to:
  x0 + 0.5x1 + 1.5x2 + 0.5s0 + 0s1 = 3
  x0 + 2x1 + 4x2 + 0s0 + s1 = 9
  x0, x1, x2, s0, s1 >= 0
The x0 column is now [0, 1, 1]. To make it [0, 1, 0], subtract the first constraint from the second.

Examples of Simplex Algorithm
Maximize z where 0x0 + 3x1 + 3x2 - 2s0 + 0s1 = z - 12
Subject to:
  x0 + 0.5x1 + 1.5x2 + 0.5s0 + 0s1 = 3
  0x0 + 1.5x1 + 2.5x2 - 0.5s0 + s1 = 6
  x0, x1, x2, s0, s1 >= 0
The x0 column is now the unit vector [0, 1, 0]; the pivot is complete.

Examples of Simplex Algorithm (tableau unchanged) The problem is again in canonical form. A basic feasible solution is x0 = 3 and s1 = 6. Are there elements of c that are positive? Yes, the coefficient of x1 is positive. So, increase x1.

Examples of Simplex Algorithm (tableau unchanged) What basic variable drops to 0 first when x1 is increased?

Examples of Simplex Algorithm (tableau unchanged) When x1 is increased to 4, s1 drops to 0 while x0 remains positive (the limiting ratios are 3/0.5 = 6 for the first constraint and 6/1.5 = 4 for the second). So, the new basic variables are x0 and x1.

Examples of Simplex Algorithm (tableau unchanged) Old basic variables: x0, s1. New basic variables: x0, x1. Convert to canonical form with the new basic variables: pivot on the column of the incoming basic variable (x1) and the row of the outgoing basic variable (the second constraint).

Examples of Simplex Algorithm
Maximize z where 0x0 + 0x1 - 2x2 - s0 - 2s1 = z - 24
Subject to:
  x0 + 0x1 + (2/3)x2 + (2/3)s0 - (1/3)s1 = 1
  0x0 + x1 + (5/3)x2 - (1/3)s0 + (2/3)s1 = 4
  x0, x1, x2, s0, s1 >= 0
The problem is once again in canonical form. The basic feasible solution for this canonical form is x0 = 1, x1 = 4, with all other variables x2, s0, s1 being 0. Since all coefficients of c are now non-positive, this solution is a local (and hence global) maximum, with z = 24.
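As a final check, the solution read off the last tableau can be substituted back into the original problem:

```python
# Final basic solution from the walkthrough: x0 = 1, x1 = 4, x2 = 0.
x = [1, 4, 0]
c = [4, 5, 9]
A = [[2, 1, 3], [1, 2, 4]]
b = [6, 9]

z = sum(ci * xi for ci, xi in zip(c, x))
slacks = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]

assert z == 24          # matches the constant in the final objective row, z - 24
assert slacks == [0, 0]  # both resource constraints are tight at the optimum
```

In the economic reading: make 1 table and 4 chairs (no cupboards), using all 6 units of wood and all 9 units of labor, for revenue 24.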