MATHEMATICAL PROGRAMMING I


Books

There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows:

Bazaraa, Jarvis and Sherali. Linear Programming and Network Flows. Wiley, 2nd Ed., 1990. A solid reference text.
Papadimitriou, Christos and Steiglitz, Kenneth. Combinatorial Optimization: Algorithms and Complexity. Dover, 1998. Recommended, good value.
Gass, Saul I. Linear Programming: Methods and Applications, 5th edition. Thomson, 1985.
Dantzig, George B. Linear Programming and Extensions. Princeton University Press, 1963. The most widely cited early textbook in the field.
Chvatal, V. Linear Programming. Freeman, 1983.
Luenberger, D. Introduction to Linear and Nonlinear Programming. Addison Wesley, 1984.
Wolsey, Laurence A. Integer Programming. Wiley, 1998.
Taha, H. Operations Research: An Introduction. Prentice-Hall, 7th Ed. (More applied, many examples.)
Winston, Wayne. Operations Research: Applications & Algorithms. Duxbury Press, 1997. (Totally applied.)

Useful websites

1. FAQ page at the Optimization Technology Center, Northwestern University and Argonne National Laboratory: http://www-unix.mcs.anl.gov/otc/guide/faq/linear-programming-faq.html
2. My notes are currently at: http://www.maths.man.ac.uk/~mkt/new_teaching.htm

1. Introduction

Definition A linear programming problem (or LP) is the optimization (maximization or minimization) of a linear function of n real variables subject to a set of linear constraints.

Example 1.1 The following is an LP problem in n = 2 non-negative variables x1, x2:

maximize   x1 + x2          (O.F.)
subject to x1 + x2  ≤ 6     (Constraint 1)
           x1 + 2x2 ≤ 8     (Constraint 2)
           x1, x2 ≥ 0       (Non-negativity)

The variables x1, x2 are the decision variables, which can be represented as a vector x in the positive quadrant of the real 2D space R^2. The function f(x1, x2) = x1 + x2 we wish to maximize is known as the objective function (OF) and represents the value of a particular choice of x1 and x2. The two inequalities that have to be satisfied by a feasible solution to our problem are known as the LP constraints. Finally, the constraints x1, x2 ≥ 0 represent non-negativity of the problem variables. The set of x-values, i.e. all pairs (x1, x2), satisfying all the constraints is a subset S ⊆ R^2 known as the LP's feasible region.

For minimization problems, the value of the OF is required to be as small as possible, and f(x1, x2) = f(x) is often referred to as a cost function. Sometimes we denote the objective function by z(x).

Notes
- Graphical solution of this example (which will be covered in lectures) is only possible for problems in two variables.
- Finding the maximum of z(x) is equivalent to finding the minimum of −z(x), so we can, for theoretical purposes and without loss of generality (w.l.o.g.), consider either max or min problems only. Any additive constant in z(x) can also be ignored.
- A variable x that can take positive or negative values (known as a free or unrestricted in sign (u.r.s.) variable) can easily be incorporated into an LP by defining x = u − v with u, v ≥ 0.
- LP problems are commonly formulated with a mixture of ≤, ≥ and = constraints.
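Because this example is so small, the claim that it can be solved geometrically can be checked by brute force. The sketch below (plain Python, not part of the original notes) enumerates the intersection points of pairs of constraint lines, keeps the feasible ones, and evaluates the objective there:

```python
from fractions import Fraction
from itertools import combinations

# Constraints of Example 1.1 written as a*x1 + b*x2 <= c,
# with non-negativity expressed as -x1 <= 0 and -x2 <= 0.
rows = [
    (Fraction(1), Fraction(1), Fraction(6)),    # x1 +  x2 <= 6
    (Fraction(1), Fraction(2), Fraction(8)),    # x1 + 2x2 <= 8
    (Fraction(-1), Fraction(0), Fraction(0)),   # x1 >= 0
    (Fraction(0), Fraction(-1), Fraction(0)),   # x2 >= 0
]

def feasible(x1, x2):
    return all(a * x1 + b * x2 <= c for a, b, c in rows)

# Candidate corner points: intersections of pairs of constraint lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(rows, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                # parallel lines, no unique intersection
    x1 = (c1 * b2 - c2 * b1) / det    # Cramer's rule for the 2x2 system
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

best = max(x1 + x2 for x1, x2 in vertices)
print(best)     # optimal OF value
```

Running it gives the optimal value 6, attained for instance at (4, 2); Section 2 explains why it suffices to search only these corner points.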

Example 1.2 A firm manufactures two products A and B. To produce each product requires a certain amount of processing on each of three machines I, II, III. The processing times (hours) per unit production of A, B are as given in the table:

       I      II     III
A      0.5    0.4    0.2
B      0.25   0.3    0.4

The total available production time of the machines I, II, III is 40 hours, 60 hours and 30 hours respectively, each week. If the unit profit from A and B is $5 and $3 respectively, determine the weekly production of A and B which will maximize the firm's profit.

Formulation:
Let x1 be the number of items of A to produce per week.
Let x2 be the number of items of B to produce per week.

Producing x1 units of product A consumes 0.5x1 hours on machine I and contributes 5x1 towards profit. Producing x2 items of product B requires in addition 0.25x2 hours on machine I and contributes 3x2 towards profit. The following formulation seeks to maximize profit:

Maximize   5x1 + 3x2                (Objective Function)
subject to 0.5x1 + 0.25x2 ≤ 40      (Constraints)
           0.4x1 + 0.3x2  ≤ 60
           0.2x1 + 0.4x2  ≤ 30
           x1, x2 ≥ 0               (Non-negativity)

This is an optimization problem in 2 non-negative decision variables x1, x2 (the unknowns) and 3 constraints (not counting the non-negativity constraints). More generally, notice that each constraint row can be regarded as a resource constraint. The solution to the LP in this case tells us how best to use scarce resources. Examples of resources that often vary linearly with amounts of production are manpower, materials and time.
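The same brute-force corner search used for Example 1.1 solves this formulation too; the sketch below (not part of the original notes) enumerates the corner points of the feasible region and picks the most profitable one:

```python
from fractions import Fraction
from itertools import combinations

# Example 1.2 as rows a*x1 + b*x2 <= c, including non-negativity.
F = Fraction
rows = [
    (F(1, 2), F(1, 4), F(40)),    # machine I:   0.5x1 + 0.25x2 <= 40
    (F(2, 5), F(3, 10), F(60)),   # machine II:  0.4x1 + 0.3x2  <= 60
    (F(1, 5), F(2, 5), F(30)),    # machine III: 0.2x1 + 0.4x2  <= 30
    (F(-1), F(0), F(0)),          # x1 >= 0
    (F(0), F(-1), F(0)),          # x2 >= 0
]

def profit(x1, x2):
    return 5 * x1 + 3 * x2

best, best_point = None, None
for (a1, b1, c1), (a2, b2, c2) in combinations(rows, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                          # no unique intersection
    x1 = (c1 * b2 - c2 * b1) / det        # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if all(a * x1 + b * x2 <= c for a, b, c in rows):
        if best is None or profit(x1, x2) > best:
            best, best_point = profit(x1, x2), (x1, x2)

print(best_point, best)
```

The best corner is (170/3, 140/3), i.e. roughly 56.7 units of A and 46.7 units of B per week, with weekly profit 1270/3 ≈ $423.33; machines I and III are fully used while machine II has slack capacity.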

Example 1.3 (The diet problem) How do we optimize the choice of n foods (e.g. animal feed) when each food contains some of each of m nutrients? Suppose

a_ij = amount of the i-th nutrient in a unit of the j-th food,  i = 1,...,m; j = 1,...,n
r_i  = yearly requirement of the i-th nutrient,  i = 1,...,m
x_j  = yearly consumption of the j-th food,  j = 1,...,n
c_j  = cost per unit of the j-th food,  j = 1,...,n.

We seek the "best" yearly diet, represented by a vector x that satisfies the nutritional requirement Ax ≥ r, and interpret "best" as least cost:

min c^T x   s.t.   Ax ≥ r,  x ≥ 0.

1.1 Standard Form

For an LP in standard form, all the constraints are equalities (apart from the non-negativity constraints). Suppose there are m such equality constraints. The LP can be a maximization (MAX) or a minimization (MIN) problem. Let

x = (x1,...,xn)^T     be n non-negative real variables,
c^T = (c1, c2,...,cn) be a set of real (OF) coefficients,
A = (a_ij)            be an m × n matrix of real coefficients,
b = (b1,...,bm) ≥ 0   be a non-negative real r.h.s. vector (sometimes called the requirements vector).

The general LP in standard form with n variables and m constraints (MINimization form) is
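In code, the diet problem is just the data (A, r, c) plus a feasibility test and a cost function. The nutrient amounts, requirements and prices below are invented purely for illustration, not taken from the notes:

```python
# Hypothetical diet-problem data: m = 2 nutrients, n = 3 foods.
# A[i][j] = amount of nutrient i in one unit of food j (invented numbers).
A = [[2.0, 1.0, 0.0],    # nutrient 1 content of foods 1..3
     [1.0, 3.0, 2.0]]    # nutrient 2 content of foods 1..3
r = [8.0, 12.0]          # yearly requirement of each nutrient
c = [1.5, 2.0, 0.8]      # cost per unit of each food

def is_feasible(x):
    """Check x >= 0 and Ax >= r component-wise."""
    if any(xj < 0 for xj in x):
        return False
    return all(sum(A[i][j] * x[j] for j in range(len(x))) >= r[i]
               for i in range(len(r)))

def cost(x):
    """The objective c^T x."""
    return sum(cj * xj for cj, xj in zip(c, x))

x = [3.0, 2.0, 2.0]      # a candidate diet
print(is_feasible(x), cost(x))
```

The LP asks for the feasible x of minimum cost; for this candidate the yearly cost is 10.1.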

Minimize   c1 x1 + c2 x2 + ... + cn xn = Σ_{j=1}^n cj xj
subject to a11 x1 + a12 x2 + ... + a1n xn = b1
           a21 x1 + a22 x2 + ... + a2n xn = b2
           ...
           am1 x1 + am2 x2 + ... + amn xn = bm
and        x1, x2, ..., xn ≥ 0.

For mathematical convenience, note that
- bi ≥ 0 for each i (as mentioned above);
- the rows of A will be assumed to be linearly independent.

The last condition (a technicality) ensures, for m ≤ n, that a set of m linearly independent columns of A can be found (known as a basis of R^m).

Example 1.1 (contd.) To convert this problem to standard form, we introduce two non-negative slack variables s1, s2 and rewrite the set of constraints

x1 + x2  ≤ 6
x1 + 2x2 ≤ 8

as

x1 + x2  + s1 = 6
x1 + 2x2 + s2 = 8,

which are equivalent since s1, s2 ≥ 0. Notice that the problem dimensions are changed to m = 2, n = 4.

1.2 Vector-matrix notation

We can write the LP (standard min/maximization form) concisely as

Min/max    c^T x
subject to Ax = b        (SF)
           x ≥ 0

Note that x ≥ 0 is to be interpreted component-wise, i.e. each xj ≥ 0. Equivalently,

Min/max { c^T x | Ax = b, x ≥ 0 }

where x = (x1,...,xn)^T is a column vector and c^T = (c1,...,cn) is a conformable row vector.

Note: In the subsequent notes we will not always adhere strictly (pedantically) to bold face for matrices and vectors. Books also adopt different conventions. Where confusion is unlikely we may also write x (the vector x) as a
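A minimal sketch (not from the notes) of the standard-form data for Example 1.1, checking that a point with both slacks zero satisfies Ax = b exactly:

```python
from fractions import Fraction as F

# Standard form of Example 1.1: variables (x1, x2, s1, s2).
A = [[F(1), F(1), F(1), F(0)],    # x1 +  x2 + s1 = 6
     [F(1), F(2), F(0), F(1)]]    # x1 + 2x2 + s2 = 8
b = [F(6), F(8)]

def residual(x):
    """Return Ax - b for a candidate solution x."""
    return [sum(aij * xj for aij, xj in zip(row, x)) - bi
            for row, bi in zip(A, b)]

# The point (4, 2) of the original problem makes both slacks zero:
x = [F(4), F(2), F(0), F(0)]
print(residual(x))    # [0, 0] means Ax = b holds
```

The same check with x = (0, 0, 6, 8) shows that the origin of the original problem corresponds to a solution whose slacks absorb the whole right-hand side.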

row vector with or without a transpose sign, e.g. x = (1, 0, 0, 5) rather than x^T. Usually vectors are in lower case, the exception being A_j, which denotes the j-th column of the matrix A:

A = ( a11  a12  ...  a1n
      a21  a22  ...  a2n
      ...
      am1  ...  ...  amn )     and     b = (b1, b2, ..., bm)^T.

Assumptions We suppose that m ≤ n; in fact the rank of A is m (full row rank):
⟺ the rows of A are linearly independent (no redundant constraints),
⟺ it is possible to choose (usually in many ways) a subset of m linearly independent columns of A,

B = ( A_{j(1)}, A_{j(2)}, ..., A_{j(m)} ),

to form a basis. The matrix formed from these columns is called the basis matrix B.

1.3 Canonical form

In Example 1.1 the constraints are all in the same direction, and the original formulation may be written briefly in canonical maximization form

maximize   c^T x
subject to Ax ≤ b        (CF1)
           x ≥ 0

where x = (x1, x2)^T, c^T = (1, 1), A = ( 1 1 ; 1 2 ) and b = (6, 8)^T.

The problem

minimize   c^T x
subject to Ax ≥ b        (CF2)
           x ≥ 0

(c.f. the diet problem) is said to be in canonical minimization form. Notice that the direction of the constraint inequalities is determined by whether we have a MAX or a MIN problem. (Intuitively) when maximizing, remember that we have a ceiling-type constraint and, when minimizing, a floor-type constraint.

1.4 General LP problems

Any LP problem may be restructured into either standard form (SF) or one of the canonical forms (CF1), (CF2).

Example 1.4

minimize   x1 − 2x2 − 3x3
subject to x1 + 2x2 + x3 ≤ 14
           3x1 + 2x2 + 4x3 ≥ 12
           x1 − x2 + x3 = 2
           x1, x2 u.r.s., x3 ≤ −3

a) Convert the LP to standard form.

Let x1 = u1 − v1, x2 = u2 − v2 and x3 = −(3 + x̄3), with x̄3 ≥ 0 and uj, vj ≥ 0 (j = 1, 2). Introduce a slack variable s1 in Constraint 1 and a surplus variable s2 in Constraint 2. This results in

minimize   u1 − v1 − 2u2 + 2v2 + 3x̄3  (+9)
subject to u1 − v1 + 2u2 − 2v2 − x̄3 + s1 = 17
           3u1 − 3v1 + 2u2 − 2v2 − 4x̄3 − s2 = 24
           u1 − v1 − u2 + v2 − x̄3 = 5
           u1, v1, u2, v2, x̄3, s1, s2 ≥ 0

b) Obtain the canonical minimization form.

To reverse the inequality in Constraint 1, we multiply by −1. Replace the equality a^T x = b in Constraint 3 by a^T x ≥ b and a^T x ≤ b, then reverse the latter constraint by a sign change:

minimize   u1 − v1 − 2u2 + 2v2 + 3x̄3
subject to −u1 + v1 − 2u2 + 2v2 + x̄3 ≥ −17
           3u1 − 3v1 + 2u2 − 2v2 − 4x̄3 ≥ 24
           u1 − v1 − u2 + v2 − x̄3 ≥ 5
           −u1 + v1 + u2 − v2 + x̄3 ≥ −5
           u1, v1, u2, v2, x̄3 ≥ 0

c) Convert the problem into a maximization: change the objective function (OF) to

maximize   −u1 + v1 + 2u2 − 2v2 − 3x̄3.
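A quick numerical check (the sample point and the u, v split are illustrative choices, not from the notes) that the substitutions of part (a) preserve the objective value once the constant +9 is accounted for:

```python
# Check that the Example 1.4 substitutions x1 = u1 - v1, x2 = u2 - v2,
# x3 = -(3 + xbar3) preserve the objective value (up to the constant +9).

def original_obj(x1, x2, x3):
    return x1 - 2 * x2 - 3 * x3

def transformed_obj(u1, v1, u2, v2, xbar3):
    return u1 - v1 - 2 * u2 + 2 * v2 + 3 * xbar3 + 9

# Sample point with x1, x2 free and x3 <= -3:
x1, x2, x3 = -1.0, 2.5, -4.0

# Split the free variables into non-negative parts and shift x3:
u1, v1 = max(x1, 0.0), max(-x1, 0.0)
u2, v2 = max(x2, 0.0), max(-x2, 0.0)
xbar3 = -x3 - 3.0

print(original_obj(x1, x2, x3) == transformed_obj(u1, v1, u2, v2, xbar3))
```

Both expressions evaluate to 6 at this point, confirming that minimizing the transformed objective (with the constant +9 restored) is equivalent to minimizing the original one.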

2. Basic solutions and extreme points

2.1 Basic solutions

The constraints of an LP in standard form are an underdetermined linear equation system

A x = b        (2.1)

(A is m × n, x is n × 1, b is m × 1) with m < n. There are fewer equations than unknowns, hence an infinite number of solutions.

Definition A solution x of (2.1) corresponding to some basis matrix B, obtained by setting the n − m remaining components of x to zero and solving for the remaining m variables, is known as a basic solution. If, in addition, x ≥ 0, such a solution is said to be feasible for the LP.

If we assume (w.l.o.g.) that the entries of A, x and b are integers, we can bound from above the absolute value of the components of any basic solution.

Lemma (c.f. Papadimitriou & Steiglitz) Let x = (x1,...,xn) be a basic solution. Then

|xj| ≤ m! α^{m−1} β,

where α = max_{i,j} |a_ij| and β = max_{j=1,...,m} |bj|.

Proof The result is trivial if xj is non-basic, since then xj = 0. For xj a basic variable, its value is a sum of products

Σ_{j=1}^m b̂_ij bj

of elements b̂_ij of B^{−1} multiplied by elements of b. Each element of B^{−1} is given by

B^{−1} = Adj B / det B.

Now |det B| is integer valued and non-zero, therefore the denominator is ≥ 1. Adj B is the matrix of cofactors. Each cofactor is the determinant of an (m−1) × (m−1) matrix, i.e. the sum of (m−1)! products of m−1 elements of A. Therefore each element of B^{−1} is bounded in modulus by (m−1)! α^{m−1}. Because each xj is the sum of m elements of B^{−1} multiplied by an element of b, we have

|xj| ≤ m · (m−1)! α^{m−1} β = m! α^{m−1} β

as required.

Example 2.1 Consider the LP

min 2x2 + x4 + 5x7
subject to x1 + x2 + x3 + x4 = 4
           x1 + x5 = 2
           x3 + x6 = 3
           3x2 + x3 + x7 = 6
           x1, x2, x3, x4, x5, x6, x7 ≥ 0

One basis is B = {A4, A5, A6, A7}, which corresponds to the matrix B = I; the corresponding basic solution is x = (0, 0, 0, 4, 2, 3, 6). Another basis corresponds to B = {A2, A5, A6, A7}, with basic solution x = (0, 4, 0, 0, 2, 3, −6). Note that this x is not a feasible solution, since x7 < 0.

Remark: The basic feasible solutions (BFS) of an LP are precisely the vertices or extreme points (EPs) of the feasible region. We will show that the optimum (if it exists) is achieved at a vertex.

Let B be an m × m non-singular submatrix of A (m columns of A). Let x_B denote the components of x corresponding to B, and x_N the remaining n − m (zero) components. For convenience of notation we may reorder the columns of A so that the first m columns relate to B and the remaining columns to an m × (n − m) submatrix N. Then

Ax = ( B  N ) ( x_B ; x_N ) = B x_B + N x_N = b.
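The two bases above can be checked mechanically. The sketch below (not part of the notes) solves B x_B = b with exact rational arithmetic for each basis, and also verifies the bound of the Lemma, which for this data (m = 4, α = 3, β = 6) is 4!·3³·6 = 3888:

```python
from fractions import Fraction
from math import factorial

# Constraint data of Example 2.1: Ax = b with 4 equations, 7 variables.
A = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 0],
     [0, 3, 1, 0, 0, 0, 1]]
b = [4, 2, 3, 6]

def basic_solution(basis):
    """Solve B x_B = b by Gauss-Jordan elimination; zeros elsewhere."""
    m = len(b)
    M = [[Fraction(A[i][j]) for j in basis] + [Fraction(b[i])]
         for i in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [Fraction(0)] * len(A[0])
    for i, j in enumerate(basis):
        x[j] = M[i][m] / M[i][i]
    return x

x1 = basic_solution([3, 4, 5, 6])   # basis {A4, A5, A6, A7}
x2 = basic_solution([1, 4, 5, 6])   # basis {A2, A5, A6, A7}
print(x1)   # (0, 0, 0, 4, 2, 3, 6) -- basic and feasible
print(x2)   # x7 = -6 < 0           -- basic but not feasible

# Lemma bound: |xj| <= m! * alpha^(m-1) * beta
m, alpha, beta = 4, 3, 6
bound = factorial(m) * alpha ** (m - 1) * beta
print(all(abs(v) <= bound for v in x1 + x2))
```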

Since x_N = 0 for this basic solution x, we obtain

B x_B = b,   i.e.   x_B = B^{−1} b.     (2.2)

Definition A BFS (and the corresponding vertex) is called degenerate if it contains more than n − m zeros, i.e. some component of x_B is zero ⟺ the basic solution is degenerate.

Lemma If two distinct bases correspond to the same BFS x, then x is degenerate.

Proof Suppose that B and B′ both determine the same BFS x. Then x has zeros in all the n − m columns not in B. Some such column must belong to B′, so x is degenerate.

Example 2.2 Determine all the basic solutions of the system

x1 + x2 ≤ 6
x2 ≤ 3
x1, x2 ≥ 0

Solution Introduce slack variables s1, s2 to write the system in standard form

x1 + x2 + s1 = 6
x2 + s2 = 3

or in matrix form (with m = 2, n = 4)

( 1 1 1 0 ; 0 1 0 1 ) (x1, x2, s1, s2)^T = (6, 3)^T
    A (2×4)               x (4×1)            b (2×1)

Set n − m = 2 variables to zero to obtain a basic solution if the resulting B-matrix is invertible (so the columns of B form a basis, or minimal spanning set, of R^m).

1. Set s1 = s2 = 0. Then

B = ( 1 1 ; 0 1 ),   B^{−1} = ( 1 −1 ; 0 1 ),   x_B = B^{−1} b = (3, 3)^T,

so x = (x_B^T, x_N^T)^T = (3, 3, 0, 0)^T is a BFS.

2. Set x2 = s1 = 0. Then B = ( 1 0 ; 0 1 ) = I, so x_B = B^{−1} b = b = (6, 3)^T and x = (6, 0, 0, 3)^T is a BFS.

Continue to examine a total of C(4, 2) = 4!/(2! 2!) = 6 selections of basic variables. We obtain (Ex.) the four BFSs

x^(1) = (3, 3, 0, 0)^T,  x^(2) = (6, 0, 0, 3)^T,  x^(3) = (0, 3, 3, 0)^T,  x^(4) = (0, 0, 6, 3)^T.

Ex. The corners or vertices of the feasible region in (x1, x2) space are (0, 0), (0, 3), (6, 0), (3, 3).
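The full enumeration can be automated. The sketch below (not part of the notes) tries all C(4, 2) = 6 choices of basic variables for Example 2.2, discards singular choices, and keeps the feasible solutions:

```python
from fractions import Fraction
from itertools import combinations

# Standard form of Example 2.2: variables (x1, x2, s1, s2).
A = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
b = [6, 3]

basic_feasible = []
for j, k in combinations(range(4), 2):    # choose 2 basic variables
    a, c = A[0][j], A[0][k]
    d, e = A[1][j], A[1][k]
    det = a * e - c * d
    if det == 0:
        continue                          # columns do not form a basis
    # Solve the 2x2 system B x_B = b by Cramer's rule.
    xj = Fraction(b[0] * e - c * b[1], det)
    xk = Fraction(a * b[1] - b[0] * d, det)
    x = [Fraction(0)] * 4
    x[j], x[k] = xj, xk
    if all(v >= 0 for v in x):
        basic_feasible.append(tuple(x))

print(sorted(set(basic_feasible)))
# The (x1, x2) parts of these are the corners of the feasible region.
```

Of the 6 selections, one (x1 and s1 basic) gives a singular B, one gives a basic but infeasible solution (0, 6, 0, −3), and the remaining four are the BFSs listed above.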

Theorem 1 (Existence of a Basic Feasible Solution) Given an LP in standard form, where A is m × n of rank m:
i) if there is a feasible solution, there is a BFS;
ii) if the LP has an optimal solution, there is an optimal BFS.

Proof i) Let A be partitioned by columns as (A1 | A2 | ... | An), i.e. Aj denotes the j-th column of A (an m-vector). Suppose that x = (x1, x2,...,xn)^T is a feasible solution. Then

Ax = x1 A1 + x2 A2 + ... + xn An = b,

where xj ≥ 0 for each j. Let x have p strictly positive components and renumber the columns of A so these are the first p components x1, x2,...,xp. Then

Ax = x1 A1 + x2 A2 + ... + xp Ap = b.     (1)

Case 1: A1,...,Ap are linearly independent. Then p ≤ m. If p = m then A1,...,Am form a basis, i.e. they span R^m. If p < m we can add additional columns from A to complete a basis. Assigning the value zero to the corresponding variables x_{p+1},...,x_m results in a (degenerate) BFS.

Case 2: A1,...,Ap are linearly dependent. By definition, ∃ a non-trivial linear combination of the Aj's summing to zero, i.e.

y1 A1 + y2 A2 + ... + yp Ap = 0,     (2)

where some yj > 0 can be assumed. Eq. (1) − ε · Eq. (2) gives

(x1 − ε y1) A1 + (x2 − ε y2) A2 + ... + (xp − ε yp) Ap = b,     (3)

which is true for any ε. Let y^T = (y1, y2,...,yp, 0,...,0). The vector x − εy satisfies (2.1). Consider ε ≥ 0 increasing from a value of zero and let

ε* = min { xj / yj : yj > 0 }

be the minimum ratio over positive components yj. For this value of ε, at least one coefficient in (3) is zero and x − ε* y has at most p − 1 strictly positive components.

Repeating this process as necessary, we eventually obtain a set of linearly independent columns {Aj}. We are thus back in Case 1 and conclude that we can construct a BFS from a given feasible solution.

ii) Let x^T = (x1, x2,...,xn) be an optimal (hence feasible) solution to the LP, with strictly positive components x1,...,xp (after reordering). Consider the same two cases as before.

Case 1 (A1,...,Ap linearly independent): if p < m, the procedure described before results in an optimal BFS whose OF value Σ cj xj is unchanged through the addition of components with value xj = 0.

Case 2 (A1,...,Ap linearly dependent): the value of the solution x − εy is

c^T (x − εy) = c^T x − ε c^T y.     (4)

For |ε| sufficiently small, x − εy is a feasible solution (all components ≥ 0) of value c^T x − ε c^T y. However, because x is optimal, the value of (4) is not permitted to be less than c^T x (for minimization), and ε may take either sign. Therefore c^T y = 0, and (4) does not change in value, though the number of strictly positive components of x is reduced.

Example 2.3 (illustrating the fundamental theorem) Consider the following LP in standard form:

Maximize 8x1 + 6x2
s.t.  x1 + x2 + s1 = 100
      2x1 + x2 + s2 = 150
      5x1 + 10x2 + s3 = 800
      xj ≥ 0 (j = 1, 2),  si ≥ 0 (i = 1, 2, 3)

1. Identify x and the constants A, b, c for this problem.
2. Construct a BFS from the given feasible solution x^T = (x1, x2, s1, s2, s3) = (30, 65, 5, 25, 0), which has value 630.

Let y^T = (y1, y2, y3, y4, 0) and seek y such that Ay = 0, i.e.

y1 + y2 + y3 = 0
2y1 + y2 + y4 = 0
5y1 + 10y2 = 0

With 3 equations and 4 unknowns, there are an infinite number of possible choices, e.g. let y^T = (−2, 1, 1, 3, 0), and note that c^T y = −10 < 0. Then

x − εy = (30 + 2ε, 65 − ε, 5 − ε, 25 − 3ε, 0)^T.

The minimum ratio over positive y's is

min { 65/1, 5/1, 25/3 } = 5.

Let x′ = x − 5y = (40, 60, 0, 10, 0)^T, with value 630 − 5(−10) = 680. The columns of A corresponding to x1, x2, s2 form the basis matrix

B = ( 1 1 0 ; 2 1 1 ; 5 10 0 ),

which is invertible (verify e.g. |B| ≠ 0). The term basis refers to the vectors A1, A2, A4, which span R^3 (in general R^m), the space of the columns of A. Note: some books refer to B simply as the basis. Hence x′ = (40, 60, 0, 10, 0)^T is a BFS.

Ex. Draw the feasible region S and show that x′ is a corner of S.
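The construction can be verified numerically; the sketch below (not part of the notes) recomputes the minimum ratio and the improved solution x − 5y:

```python
# Numerical check of the BFS construction in Example 2.3.
A = [[1, 1, 1, 0, 0],
     [2, 1, 0, 1, 0],
     [5, 10, 0, 0, 1]]
b = [100, 150, 800]
c = [8, 6, 0, 0, 0]

def Ax(x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def value(x):
    return sum(cj * xj for cj, xj in zip(c, x))

x = [30, 65, 5, 25, 0]     # the given feasible solution, value 630
y = [-2, 1, 1, 3, 0]       # a null-space direction: Ay = 0

assert Ax(x) == b          # x is feasible
assert Ax(y) == [0, 0, 0]  # y is in the null space of A

eps = min(xj / yj for xj, yj in zip(x, y) if yj > 0)   # minimum ratio
x_new = [xj - eps * yj for xj, yj in zip(x, y)]
print(x_new, value(x_new))     # (40, 60, 0, 10, 0), value 680
```

The minimum ratio is 5, the component x3 (= s1) drops to zero, and the resulting point (40, 60, 0, 10, 0) has only three non-zero components, matching the BFS derived above.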

2.2 Geometry of LP (extreme points)

Regarding the vector x as a point in n-dimensional space R^n provides an alternative geometric view and further insight into the solution of LP problems.

Convex sets Let p, q ∈ R^n. The line segment PQ consists of all points λp + (1 − λ)q where 0 < λ < 1. {Such points are termed convex linear combinations of p and q. More generally, a convex linear combination of p1, p2,...,pk is Σ_{i=1}^k λi pi with λi ≥ 0 and Σ_{i=1}^k λi = 1.}

Definition A set K ⊆ R^n is convex if, for x1, x2 ∈ K and for every 0 < λ < 1, the point λx1 + (1 − λ)x2 belongs to K.

Result The feasible region of an LP in standard form,

F = { x | Ax = b, x ≥ 0 },

is convex.

Proof Let x1, x2 ∈ F and consider x = λx1 + (1 − λ)x2 for 0 < λ < 1. Then x is a solution of Ax = b:

Ax = A[λx1 + (1 − λ)x2] = λAx1 + (1 − λ)Ax2 = λb + (1 − λ)b = b.

Also λ, 1 − λ > 0 and x1, x2 ≥ 0, so λx1 + (1 − λ)x2 ≥ 0. Hence x is a feasible solution of the system, i.e. x ∈ F.

Some further definitions useful in understanding the geometric nature of an LP are as follows:
- The region to one side of an inequality, { x ∈ R^n | a^T x ≤ b }, is a (closed) halfspace.
- The region { x ∈ R^n | a^T x = b } is a hyperplane [an (n − 1)-dimensional region, a subspace if b = 0].
- A polyhedral set or polyhedron is the intersection of a finite number of halfspaces.
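The Result can be illustrated numerically: for the standard-form system of Example 2.2, every convex combination of two feasible points remains feasible (a spot check, not a proof; sketch not from the notes):

```python
from fractions import Fraction
from random import Random

# Empirical check that F = {x | Ax = b, x >= 0} is closed under
# convex combinations, using the standard-form system of Example 2.2.
A = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
b = [6, 3]

def in_F(x):
    return (all(v >= 0 for v in x) and
            all(sum(aij * xj for aij, xj in zip(row, x)) == bi
                for row, bi in zip(A, b)))

x1 = [Fraction(3), Fraction(3), Fraction(0), Fraction(0)]
x2 = [Fraction(0), Fraction(0), Fraction(6), Fraction(3)]
assert in_F(x1) and in_F(x2)

rng = Random(0)
for _ in range(100):
    lam = Fraction(rng.randint(1, 99), 100)     # 0 < lambda < 1
    x = [lam * a + (1 - lam) * c for a, c in zip(x1, x2)]
    assert in_F(x)                              # the combination stays in F
print("all convex combinations feasible")
```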

A bounded polyhedron (one that does not extend to infinity in any direction) is termed a polytope.

Result The feasible region of an LP containing a mixture of equality and inequality constraints is also a polyhedron.

Proof Observe that Ax = b can be written as Ax ≤ b and Ax ≥ b.

The extreme points (EPs) or vertices of a polyhedron play a very important part in LP because, if an LP has a finite optimal solution, it is achieved at a vertex.

Definition An extreme point of a convex set K is a point which cannot be expressed as a convex linear combination of two distinct points of K, i.e. x ∈ K is an extreme point if and only if there do not exist y, z ∈ K (y ≠ z) and 0 < λ < 1 such that x = λy + (1 − λ)z.

Theorem 2 (Equivalence of EPs and BFSs) For an LP in standard form: i) BFS ⇒ EP, and ii) EP ⇒ BFS.

Proof i) Let x be a BFS of the LP in standard form. Suppose (w.l.o.g.) that the first p components {xj}_{j=1}^p are strictly positive and xj = 0 for j > p. Then Ax = b reduces to

x1 A1 + x2 A2 + ... + xp Ap = b,

where {Aj} are linearly independent. If x is not an extreme point, ∃ two distinct points y, z ∈ F such that x = λy + (1 − λ)z for some 0 < λ < 1. For i > p, xi = 0 = λyi + (1 − λ)zi, and so yi = zi = 0 (since yi, zi ≥ 0 because y, z ∈ F, and λ, 1 − λ > 0). Therefore y and z have at most p non-zero components, so

y1 A1 + y2 A2 + ... + yp Ap = b
z1 A1 + z2 A2 + ... + zp Ap = b.

Subtracting,

(y1 − z1)A1 + (y2 − z2)A2 + ... + (yp − zp)Ap = 0

with not all coefficients zero (because y ≠ z). This contradicts our assumption that {Aj} are linearly independent.

ii) Let x be an extreme point of F with precisely p non-zero components, so that (w.l.o.g.)

x1 A1 + x2 A2 + ... + xp Ap = b

with x1, x2,...,xp > 0 and xi = 0 (i > p). Suppose (for contradiction) that x is not a BFS, i.e. the corresponding columns of A are linearly dependent:

y1 A1 + y2 A2 + ... + yp Ap = 0

for some coefficients {yj}_{j=1}^p not all zero. Define the n-vector y = (y1, y2,...,yp, 0,...,0)^T, so that Ay = 0. We can find ε sufficiently small that x1 = x + εy ≥ 0 and x2 = x − εy ≥ 0. [NB x1 ≠ x2 because y ≠ 0.] Now x1 and x2 belong to F because

Ax1 = A(x + εy) = Ax + εAy = Ax = b,

and similarly for x2. Since

x = ½(x1 + x2),

x can be written as a convex linear combination of distinct points of F, contradicting our assumption that x is an EP of F.

Consequence We can rephrase the fundamental theorem of LP in terms of extreme points:
1. If the feasible region F is non-empty, it has at least one EP.
2. If the LP has a finite optimal solution (always true if F is bounded), it has an optimal solution which is an EP of F.

Representation of convex polytopes Any point in a convex polytope (i.e. a bounded polyhedron) can be represented as a convex linear combination of its extreme points. This enables an alternative proof of the fundamental theorem. Note that F has a finite number of extreme points, since there are at most C(n, m) sets of basic variables.

Theorem 3 (Fundamental Theorem restated) A linear objective function c^T x achieves its minimum over a convex polytope (bounded polyhedron) F at an extreme point of F.

Proof Let x1, x2,...,xk be the set of EPs of F. Any x ∈ F has the representation

x = λ1 x1 + λ2 x2 + ... + λk xk

for some set of coefficients {λi} with λi ≥ 0 for each i and Σ_{i=1}^k λi = 1, and

c^T x = λ1 c^T x1 + λ2 c^T x2 + ... + λk c^T xk = λ1 z1 + λ2 z2 + ... + λk zk, say.

Let z0 = min {zi}_{i=1}^k be the minimum OF value at any vertex. Then zi ≥ z0 for each i, giving

c^T x ≥ λ1 z0 + λ2 z0 + ... + λk z0 = (λ1 + λ2 + ... + λk) z0 = z0.

If x is optimal, c^T x ≤ z0, so c^T x = z0, showing that the optimal value of the LP is achieved at a vertex with minimum value z0.
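A numerical illustration of Theorem 3 (the cost vector here is an arbitrary choice for illustration, not from the notes): over the polytope of Example 2.2, no convex combination of the vertices beats the best vertex value z0:

```python
from random import Random

# The extreme points of the Example 2.2 polytope in (x1, x2) space.
vertices = [(0, 0), (0, 3), (6, 0), (3, 3)]
c = (-1, -2)                  # minimize -x1 - 2x2 over the polytope

def cost(x):
    return sum(ci * xi for ci, xi in zip(c, x))

z0 = min(cost(v) for v in vertices)     # best value at a vertex

rng = Random(1)
for _ in range(1000):
    w = [rng.random() for _ in vertices]
    s = sum(w)
    lam = [wi / s for wi in w]          # random convex weights
    x = tuple(sum(l * v[i] for l, v in zip(lam, vertices))
              for i in range(2))
    assert cost(x) >= z0 - 1e-12        # never better than the best vertex
print(z0)
```

For this cost vector the minimum is z0 = −9, attained at the vertex (3, 3), exactly as the theorem predicts.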