Linear Programming Inverse Projection Theory Chapter 3


Linear Programming: Inverse Projection Theory (Chapter 3)
University of Chicago Booth School of Business
Kipp Martin
September 26, 2017

Where We Are Headed

We want to solve problems with special structure! Real problems have special structure! One such structure is

    min  c^T x + f^T y
    s.t. Ax + By ≥ b

We learned how to project out the x variables. Later we will see how to do this and take advantage of special structure.

Where We Are Headed

We want to solve problems with special structure! Real problems have special structure! Another example of special structure is

    min  c^T x
    s.t. Ax ≥ b
         Bx ≥ d

Objective: we are going to project out constraints and replace them with variables.

Where We Are Headed

So, the big picture is:

Eliminate variables: replace variables with constraints. This process yields multipliers that are the extreme rays and extreme points of the dual problem.

Eliminate constraints: replace constraints with variables. This process yields variables that correspond to the extreme points and extreme rays of the primal problem.

Bottom line: depending on structure, replace constraints with variables, or variables with constraints.

Special Structure

Projection or inverse projection?


Outline

Inverse Projection
Finite Basis Theorem
Fundamental Theorem of Linear Programming
Solving Linear Programs with Inverse Projection
Dual Relationships
Sensitivity Analysis

Inverse Projection

Consider the following system of inequalities:

    x2 ≥ 2
    0.5x1 - x2 ≥ -8
    -0.5x1 + x2 ≥ -3
    x1 - x2 ≥ -6
    x1, x2 ≥ 0

Inverse Projection

The geometry (we will come back to this):

[Figure: the feasible region in the (x1, x2) plane, with extreme points labeled z1 = 2, z3 = 7/4, z4 = 14, z5 = 4/3, and z6 = 4.]
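Not from the slides, but a quick sanity check: the corner points of the feasible region can be tested against the four inequalities (the function name and point list are my own, read off the geometry):

```python
# Check candidate points against the system
# x2 >= 2, 0.5x1 - x2 >= -8, -0.5x1 + x2 >= -3, x1 - x2 >= -6, x1, x2 >= 0.

def feasible(x1, x2, tol=1e-9):
    return (x2 >= 2 - tol
            and 0.5 * x1 - x2 >= -8 - tol
            and -0.5 * x1 + x2 >= -3 - tol
            and x1 - x2 >= -6 - tol
            and x1 >= -tol and x2 >= -tol)

# Extreme points of the feasible region shown in the figure
corners = [(0, 2), (0, 6), (4, 10), (10, 2)]
print(all(feasible(*p) for p in corners))  # True
print(feasible(0, 0))                      # False: violates x2 >= 2
```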

Inverse Projection

Let's transform the problem slightly:

    x2 - 2x0 - x3 = 0
    0.5x1 - x2 + 8x0 - x4 = 0
    -0.5x1 + x2 + 3x0 - x5 = 0
    x1 - x2 + 6x0 - x6 = 0
    x0 = 1
    x1, x2, x0, x3, x4, x5, x6 ≥ 0

What have I done? What would you do if there were variables that were unrestricted in sign?

Inverse Projection

Eliminate the first constraint. Create a new variable for each combination of positive and negative coefficients. What does this remind you of? Where does the 0.5 come from?

                y1     y2
     1   x2      1      1
    -2   x0     0.5
    -1   x3             1

This leads to the variable transformation:

    x2 = y1 + y2
    x0 = 0.5 y1
    x3 = y2
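The pairing rule behind the table can be sketched in code (my own illustration, not code from the chapter): pair every positive-coefficient variable with every negative-coefficient variable, dividing by the appropriate coefficient. The 0.5 is 1/|-2| from the x0 row.

```python
# Eliminate a constraint sum_i a_i x_i = 0 over nonnegative variables by
# pairing each positive-coefficient variable with each negative-coefficient
# one; each pair gets a new nonnegative variable y.

def eliminate_constraint(coeffs):
    """coeffs: {var: coefficient}. Returns {var: [(pair_var, multiplier)]}
    so that x_var = sum of multiplier * y_pair."""
    pos = [v for v, a in coeffs.items() if a > 0]
    neg = [v for v, a in coeffs.items() if a < 0]
    transform = {v: [] for v in coeffs}
    for p in pos:
        for n in neg:
            y = f"y[{p},{n}]"
            transform[p].append((y, 1.0 / coeffs[p]))    # x_p gets y / a_p
            transform[n].append((y, 1.0 / -coeffs[n]))   # x_n gets y / |a_n|
    return transform

# First constraint of the example: x2 - 2*x0 - x3 = 0
t = eliminate_constraint({"x2": 1, "x0": -2, "x3": -1})
print(t["x0"])  # [('y[x2,x0]', 0.5)] -- the 0.5 in the table
print(t["x2"])  # [('y[x2,x0]', 1.0), ('y[x2,x3]', 1.0)] -- x2 = y1 + y2
```

Substituting the transformation back in makes the eliminated constraint hold identically, which is why it can be dropped.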

Inverse Projection

This transformation leads to the system

    (y1 + y2) - y1 - y2 = 0
    0.5x1 - (y1 + y2) + 4y1 - x4 = 0
    -0.5x1 + (y1 + y2) + 1.5y1 - x5 = 0
    x1 - (y1 + y2) + 3y1 - x6 = 0
    0.5y1 = 1
    x1, y1, y2, x4, x5, x6 ≥ 0

Why are the y variables nonnegative?

Inverse Projection

Simplify and drop the 0 = 0 constraint:

    0.5x1 + 3y1 - y2 - x4 = 0
    -0.5x1 + 2.5y1 + y2 - x5 = 0
    x1 + 2y1 - y2 - x6 = 0
    0.5y1 = 1
    x1, y1, y2, x4, x5, x6 ≥ 0

Why was the original system converted from inequalities to equalities?

Inverse Projection

Now eliminate the constraint 0.5x1 + 3y1 - y2 - x4 = 0 through a variable transformation.

                v1     v2     v3     v4
   0.5   x1      2      2
     3   y1                   1/3    1/3
    -1   y2      1             1
    -1   x4             1             1

The appropriate variable transformation is then

    x1 = 2v1 + 2v2
    y1 = (1/3)v3 + (1/3)v4
    y2 = v1 + v3
    x4 = v2 + v4

Inverse Projection

This transformation gives the system:

    -v2 + (11/6)v3 + (5/6)v4 - x5 = 0
    v1 + 2v2 - (1/3)v3 + (2/3)v4 - x6 = 0
    (1/6)v3 + (1/6)v4 = 1
    v1, v2, v3, v4, x5, x6 ≥ 0

In terms of the original variables,

    x1 = 2v1 + 2v2
    x2 = v1 + (4/3)v3 + (1/3)v4
    x0 = (1/6)v3 + (1/6)v4
    x3 = v1 + v3
    x4 = v2 + v4
    x5 = x5
    x6 = x6

Inverse Projection

Next eliminate the constraint -v2 + (11/6)v3 + (5/6)v4 - x5 = 0 through a variable transformation.

                 w1     w2     w3     w4
    -1   v2       1      1
  11/6   v3     6/11                 6/11
   5/6   v4             6/5    6/5
    -1   x5                    1      1

The corresponding variable transformation is

    v2 = w1 + w2
    v3 = (6/11)w1 + (6/11)w4
    v4 = (6/5)w2 + (6/5)w3
    x5 = w3 + w4

Inverse Projection

This gives the system

    v1 + (20/11)w1 + (14/5)w2 + (4/5)w3 - (2/11)w4 - x6 = 0
    (1/11)w1 + (1/5)w2 + (1/5)w3 + (1/11)w4 = 1
    v1, w1, w2, w3, w4, x6 ≥ 0

Transforming back to the original variables gives

    x1 = 2v1 + 2w1 + 2w2
    x2 = v1 + (8/11)w1 + (2/5)w2 + (2/5)w3 + (8/11)w4
    x0 = (1/11)w1 + (1/5)w2 + (1/5)w3 + (1/11)w4
    x3 = v1 + (6/11)w1 + (6/11)w4
    x4 = w1 + (11/5)w2 + (6/5)w3
    x5 = w3 + w4
    x6 = x6

Inverse Projection

Observation: variable w1 has five positive coefficients in the transformation matrix. Since only three constraints have been eliminated, w1 is not necessary and can be dropped. Why? Refer back to Corollary 2.25 in Section 2.5 of Chapter 2 and note the analogy with dropping constraints when projecting out variables.

Inverse Projection

Finally, eliminate the constraint v1 + (14/5)w2 + (4/5)w3 - (2/11)w4 - x6 = 0 through a variable transformation.

                  z1     z2     z3     z4     z5     z6
      1   v1       1      1
   14/5   w2                   5/14   5/14
    4/5   w3                                 5/4    5/4
  -2/11   w4     11/2          11/2          11/2
     -1   x6              1            1            1

The corresponding variable transformation is

    v1 = z1 + z2
    w2 = (5/14)z3 + (5/14)z4
    w3 = (5/4)z5 + (5/4)z6
    w4 = (11/2)z1 + (11/2)z3 + (11/2)z5
    x6 = z2 + z4 + z6

Inverse Projection

Substituting back gives

    (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
    z1, z2, z3, z4, z5, z6 ≥ 0

    x1 = 2z1 + 2z2 + (5/7)z3 + (5/7)z4
    x2 = 5z1 + z2 + (29/7)z3 + (1/7)z4 + (9/2)z5 + (1/2)z6
    x0 = (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6
    x3 = 4z1 + z2 + 3z3 + 3z5
    x4 = (11/14)z3 + (11/14)z4 + (3/2)z5 + (3/2)z6
    x5 = (11/2)z1 + (11/2)z3 + (27/4)z5 + (5/4)z6
    x6 = z2 + z4 + z6
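As a numerical check of my own (not from the slides): setting z1 = 2 with all other z's zero should satisfy the convexity row and map back to a point of the original polyhedron.

```python
# Expand a z-vector through the final variable transformation
# (only the x0, x1, x2 rows are needed for the check).

def expand(z1=0, z2=0, z3=0, z4=0, z5=0, z6=0):
    x1 = 2*z1 + 2*z2 + (5/7)*z3 + (5/7)*z4
    x2 = 5*z1 + z2 + (29/7)*z3 + (1/7)*z4 + (9/2)*z5 + (1/2)*z6
    x0 = (1/2)*z1 + (4/7)*z3 + (1/14)*z4 + (3/4)*z5 + (1/4)*z6
    return x0, x1, x2

x0, x1, x2 = expand(z1=2)
print(x0, x1, x2)  # 1.0 4.0 10.0
# (x1, x2) = (4, 10) satisfies all four original inequalities:
assert x2 >= 2 and 0.5*x1 - x2 >= -8 and -0.5*x1 + x2 >= -3 and x1 - x2 >= -6
```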

Inverse Projection

How do we get the extreme points? What about extreme rays?

[Figure: the feasible region again, with extreme points labeled z1 = 2, z3 = 7/4, z4 = 14, z5 = 4/3, and z6 = 4.]

Inverse Projection

Exercise: relate the extreme points of the picture to solutions of the system resulting from inverse projection.

Inverse Projection

If we have the polyhedron

    x2 ≥ 2
    0.5x1 - x2 ≥ -8
    -0.5x1 + x2 ≥ -3
    x1 - x2 ≥ -6
    x1, x2 ≥ 0

then the underlying recession cone (which is a polyhedral cone) is

    x2 ≥ 0
    0.5x1 - x2 ≥ 0
    -0.5x1 + x2 ≥ 0
    x1 - x2 ≥ 0
    x1, x2 ≥ 0

Inverse Projection

What does this recession cone look like?

    x2 ≥ 0
    0.5x1 - x2 ≥ 0
    -0.5x1 + x2 ≥ 0
    x1 - x2 ≥ 0
    x1, x2 ≥ 0

What is the relationship to the inverse projection problem?
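A quick check of my own: the direction (2, 1) satisfies every homogeneous inequality, so moving from any feasible point along (2, 1) stays feasible, while a direction such as (1, 2) does not qualify.

```python
# Membership test for the recession cone of the example polyhedron.

def in_recession_cone(d1, d2):
    return (d2 >= 0
            and 0.5*d1 - d2 >= 0
            and -0.5*d1 + d2 >= 0
            and d1 - d2 >= 0
            and d1 >= 0 and d2 >= 0)

print(in_recession_cone(2, 1))   # True  -- a ray of the cone
print(in_recession_cone(1, 2))   # False -- violates 0.5*d1 - d2 >= 0
```

Note that 0.5 d1 - d2 ≥ 0 and -0.5 d1 + d2 ≥ 0 together force d2 = 0.5 d1, so the cone is the single ray generated by (2, 1).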

Inverse Projection

First, let's take the problem

    (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
    z1, z2, z3, z4, z5, z6 ≥ 0

drop z3 (why?), make the substitution

    z1 = 2λ1,  z2 = λ2,  z4 = 14λ4,  z5 = (4/3)λ5,  z6 = 4λ6

and rewrite as

    λ1 + λ4 + λ5 + λ6 = 1
    λ1, λ2, λ4, λ5, λ6 ≥ 0

Inverse Projection

Let's find points in the polyhedron.

    x1 = 4λ1 + 2λ2 + 10λ4
    x2 = 10λ1 + λ2 + 2λ4 + 6λ5 + 2λ6
    λ1 + λ4 + λ5 + λ6 = 1
    λ1, λ2, λ4, λ5, λ6 ≥ 0

If x1 = 2 and x2 = 4, what is the corresponding value of λ? Try λ2 = 1, λ5 = 1/4, and λ6 = 3/4. What did I do in terms of geometry?
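Verifying the suggested multipliers (a check of my own): λ2 = 1, λ5 = 1/4, λ6 = 3/4 should reproduce the point (x1, x2) = (2, 4).

```python
# Plug the suggested lambda values into the representation above.
lam1, lam2, lam4, lam5, lam6 = 0, 1, 0, 1/4, 3/4

x1 = 4*lam1 + 2*lam2 + 10*lam4
x2 = 10*lam1 + lam2 + 2*lam4 + 6*lam5 + 2*lam6
print(x1, x2)                      # 2 4.0
print(lam1 + lam4 + lam5 + lam6)   # 1.0 -- valid convex weights
# Geometrically: (2, 4) = (1/4)(0, 6) + (3/4)(0, 2), then pushed along
# the ray direction (2, 1) with weight lam2 = 1.
```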

Inverse Projection

Oh no! I just thought of something new to worry about! There is no end to it. Worry! Worry! Worry!

We just showed that if x1 = 2 and x2 = 4, then there is a corresponding value of λ. But here is what worries me: how do we know we can get every point in the polyhedron this way? (Recall my worries about the projection cone.)

Inverse Projection

Is it really true that

    P = {x | Ax = b, x ≥ 0} = conv({x^1, ..., x^q}) + cone({x^{q+1}, ..., x^r})

where each x^i for i = 1, ..., r corresponds to a λ_i?

When we did projection, a key result was that projection yields all of the extreme points and extreme rays of the dual polyhedron. We need a similar result for inverse projection!

Inverse Projection

Here is another way to look at the problem. The original problem is:

    min  c^T x
    s.t. Ax = b
         x ≥ 0

The transformed problem (x = Tz with T ≥ 0) that we solve is:

    min  c^T Tz
    s.t. ATz = b
         z ≥ 0

What do we need to prove?

Inverse Projection

My worries really boil down to the following: I can clearly see that if we are given a z ≥ 0, then we have a corresponding x = Tz that is feasible for the original linear program. But here is where I start to get nervous: given an x feasible to my original system, how do I know there is a corresponding z that makes the new system feasible and satisfies x = Tz? So many worries, so little time.

Inverse Projection

Proposition 3.2: If

    P' = {(x, x0) ∈ R^{n+1} | Ax - b x0 = 0, x0 = 1, x ≥ 0}

then (x, x0) ∈ P' if and only if there is a y such that (x, x0, y) is feasible to the transformed system.

Let's go the hard way and show that if we are given a point (x, x0) in P', then there is always a feasible solution to the transformed system. It is sufficient to show that this is the case for projecting out one constraint. Why?

Inverse Projection

In general, we construct the transformation as follows (see the picture on the next page). Index the variables so that the constraint being eliminated has positive coefficients a_{1i} on x_i for i = 1, ..., n1, negative coefficients a_{1,n1+j} on x_{n1+j} for j = 1, ..., n2, and coefficient -b1 > 0 on x0. Then

    x0 = Σ_{j=1}^{n2} y_{0j} / (-b1)
    x_i = Σ_{j=1}^{n2} y_{ij} / a_{1i},              i = 1, ..., n1
    x_{n1+j} = Σ_{i=0}^{n1} y_{ij} / (-a_{1,n1+j}),  j = 1, ..., n2

Note: without loss, b1 < 0.

Inverse Projection

Apply inverse projection to the system Ax - b x0 = 0, x0 = 1, x ≥ 0.

[Figure: a bipartite network. Nodes 0, 1, ..., n1 on the left carry the positive terms -b1 x0, a_{11} x1, ..., a_{1,n1} x_{n1}; nodes n1+1, ..., n1+n2 on the right carry the negative-coefficient terms; the arcs carry the variables y_{ij}.]

Inverse Projection

A key part of the proof: if (x, x0) is feasible to P', how can you find a y so that (x, x0, y) is feasible to the lifted system?

Inverse Projection

Example: Assume the constraint to project out is

    5x1 + 3x2 - x3 - 2x4 + 4x0 = 0

The transformation matrix is

               y03    y04    y13    y14    y23    y24
     4   x0    1/4    1/4
     5   x1                  1/5    1/5
     3   x2                                1/3    1/3
    -1   x3     1             1             1
    -2   x4           1/2           1/2           1/2

What does the corresponding network look like?

Inverse Projection

Solve a transportation problem! For a given x, supply equals demand!

[Figure: the same bipartite network viewed as a transportation problem: node i supplies a_{1i} x_i (node 0 supplies -b1 x0), node n1+j demands -a_{1,n1+j} x_{n1+j}, and the flows are the y_{ij}.]
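The transportation idea can be sketched as follows (my own construction, with a made-up feasible point x0 = x1 = x2 = 1, x3 = 8, x4 = 2 for the example constraint 5x1 + 3x2 - x3 - 2x4 + 4x0 = 0): treat 4x0, 5x1, 3x2 as supplies, x3 and 2x4 as demands, and fill in flows northwest-corner style; the flows are a feasible y.

```python
def northwest_corner(supply, demand):
    """supply, demand: lists of (name, amount) with equal totals.
    Returns {(supply_name, demand_name): flow}."""
    flows = {}
    s = [amt for _, amt in supply]
    d = [amt for _, amt in demand]
    i = j = 0
    while i < len(s) and j < len(d):
        f = min(s[i], d[j])
        if f > 0:
            flows[(supply[i][0], demand[j][0])] = f
        s[i] -= f
        d[j] -= f
        if s[i] == 0:
            i += 1
        else:
            j += 1
    return flows

# Feasible point x0 = x1 = x2 = 1, x3 = 8, x4 = 2:
# 5(1) + 3(1) - 8 - 2(2) + 4(1) = 0, so supply (4 + 5 + 3) = demand (8 + 4).
y = northwest_corner([("x0", 4), ("x1", 5), ("x2", 3)],
                     [("x3", 8), ("x4", 4)])
print(y)  # {('x0', 'x3'): 4, ('x1', 'x3'): 4, ('x1', 'x4'): 1, ('x2', 'x4'): 3}
# Recovering the lifted variables, e.g. x1 = (y13 + y14)/5 = (4 + 1)/5 = 1.
```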

Finite Basis Theorem

Summary: We have taken a polyhedron

    P = {x ∈ R^n | Ax ≥ b, x ≥ 0}

and replaced it with

    P' = {(x, y, x0) ∈ R^n × R^m × R | Ax - Iy - b x0 = 0, x ≥ 0, y ≥ 0, x0 = 1}

where P = proj_x(P').

Finite Basis Theorem

We applied inverse projection to P', that is, to

    Ax - Iy - b x0 = 0, x0 = 1, x ≥ 0, y ≥ 0

and obtained

    d_1 z_1 + ... + d_q z_q = 1               (1)
    z_1, ..., z_q, z_{q+1}, ..., z_r ≥ 0      (2)
    (x, y, x0) = Tz                           (3)

Why does this do it for us?

Finite Basis Theorem

Let x^1, ..., x^r denote the columns of T in (3), after scaling column i by 1/d_i for i = 1, ..., q. Then (1)-(3) imply that (x, y, x0) ∈ P' if and only if

    (x, y, x0) = Σ_{i=1}^{q} z_i x^i + Σ_{i=q+1}^{r} z_i x^i    (4)

where Σ_{i=1}^{q} z_i = 1 and z_i ≥ 0 for i = 1, ..., r.

Finite Basis Theorem

The result expressed in (4) is a finite basis theorem. The finite set of vectors {x^1, ..., x^r} is a finite basis for the polyhedron P'. This result for polyhedra is known as the Minkowski finite basis theorem. Thus a polyhedron can be expressed either as the intersection of a finite number of linear inequalities or in finite basis form.

Finite Basis Theorem

Here is another way to think about the Minkowski finite basis theorem. The finite basis theorem says that any polyhedron P can be expressed as

    P = conv({x^1, ..., x^q}) + cone({x^{q+1}, ..., x^r}).

In other words, a polyhedron is the Minkowski sum of a polytope and a cone. What do we mean by the Minkowski sum of two sets?

Finite Basis Theorem

In the case of a polyhedral cone C = {x ∈ R^n | Ax ≥ 0}, the finite basis theorem says

    C = cone({x^{q+1}, ..., x^r}).

Thus we have the nice duality that a polyhedral cone is finitely generated and (by Weyl) a finitely generated cone is a polyhedral cone.

Finite Basis Theorem

A few comments.

1. Since P = proj_x(P'), the finite basis theorem applies to any polyhedron of the form P = {x ∈ R^n | Ax ≥ b}.

2. We can assume, without loss, that the points in the set {x^1, ..., x^q} are extreme points of P and that the points in the set {x^{q+1}, ..., x^r} are the extreme rays of the recession cone {x ∈ R^n | Ax ≥ 0}.

3. When A is a rational matrix and b is a rational vector, then x^1, ..., x^r are rational vectors. Why?

Fundamental Theorem of Linear Programming

Theorem 3.4: If the linear program min{c^T x | Ax = b, x ≥ 0} has an optimal solution, then there is an optimal extreme point solution. If there is no optimal solution, then the linear program is either unbounded or infeasible.

By the way, how does the simplex algorithm work?

Fundamental Theorem of Linear Programming

Proof: The fundamental theorem of linear programming is a simple consequence of the finite basis theorem. If P = {x | Ax = b, x ≥ 0} is not empty, then there are x^1, ..., x^q, x^{q+1}, ..., x^r in R^n such that

    P = conv({x^1, ..., x^q}) + cone({x^{q+1}, ..., x^r}).

Without loss, assume the set {x^1, ..., x^q} is minimal with respect to taking subsets. Then each of x^1, ..., x^q is an extreme point of P. If x ∈ P, then

    x = Σ_{i=1}^{r} z_i x^i,   Σ_{i=1}^{q} z_i = 1,   z_i ≥ 0, i = 1, ..., r.

Fundamental Theorem of Linear Programming

Proof (continued): If x is an optimal solution to the linear program, then c^T x^i ≥ 0 for i = q+1, ..., r. Why?

The optimality of x implies z_i = 0 if c^T x^i > 0 for i = q+1, ..., r. Why?

Then any x^i with z_i > 0, i = 1, ..., q, is an optimal solution. Why?

Solving Linear Programs

Use inverse projection to solve min{c^T x | Ax ≥ b}. Consider the linear program

    min 2x1 - 3x2
    s.t. x2 ≥ 2
         0.5x1 - x2 ≥ -8
         -0.5x1 + x2 ≥ -3
         x1 - x2 ≥ -6
         x1, x2 ≥ 0

Forget the objective function! Do inverse projection on the constraints.

Solving Linear Programs

Inverse projection on the constraints gave

    (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
    z1, z2, z3, z4, z5, z6 ≥ 0

    x1 = 2z1 + 2z2 + (5/7)z3 + (5/7)z4
    x2 = 5z1 + z2 + (29/7)z3 + (1/7)z4 + (9/2)z5 + (1/2)z6
    x0 = (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6
    x3 = 4z1 + z2 + 3z3 + 3z5
    x4 = (11/14)z3 + (11/14)z4 + (3/2)z5 + (3/2)z6
    x5 = (11/2)z1 + (11/2)z3 + (27/4)z5 + (5/4)z6
    x6 = z2 + z4 + z6

Solving Linear Programs

Transform the objective function:

    c^T x -> c^T Tz = -11z1 + z2 - 11z3 + z4 - (27/2)z5 - (3/2)z6

where T is the transformation matrix. What are the components of T?

Solving Linear Programs

The transformed problem is

    min -11z1 + z2 - 11z3 + z4 - (27/2)z5 - (3/2)z6
    s.t. (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
         z1, z2, z3, z4, z5, z6 ≥ 0

Solving Linear Programs

So the problem is:

    min -11z1 + z2 - 11z3 + z4 - (27/2)z5 - (3/2)z6
    s.t. (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
         z1, z2, z3, z4, z5, z6 ≥ 0

How can you tell if this is unbounded? Solve by finding the best bang-for-buck ratio:

    -11/(1/2) = -22,   -11/(4/7) = -19.25,   1/(1/14) = 14,
    -(27/2)/(3/4) = -18,   -(3/2)/(1/4) = -6.

Therefore the optimal solution is z1 = 2, which corresponds to an optimal x1 = 4, x2 = 10. The optimal objective function value is -22.
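The bang-for-buck calculation can be sketched in code (my own illustration; the variable names are mine): for each z with a nonzero coefficient in the convexity row, compute cost/coefficient and pick the smallest ratio.

```python
# Objective coefficients and convexity-row coefficients of the z problem.
cost = {"z1": -11, "z2": 1, "z3": -11, "z4": 1, "z5": -27/2, "z6": -3/2}
coef = {"z1": 1/2, "z3": 4/7, "z4": 1/14, "z5": 3/4, "z6": 1/4}

# z2 has coefficient 0 in the convexity row; since its cost is +1 >= 0,
# it cannot drive the problem unbounded (a negative cost there would).
ratios = {v: cost[v] / coef[v] for v in coef}
best = min(ratios, key=ratios.get)
print(best, ratios[best])   # z1 -22.0
print(1 / coef[best])       # 2.0 -> optimal z1 = 2
```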

Dual Relationships

Consider the following primal-dual pair.

Primal:

    min 2x1 - 3x2
    s.t. x2 ≥ 2
         0.5x1 - x2 ≥ -8
         -0.5x1 + x2 ≥ -3
         x1 - x2 ≥ -6
         x1, x2 ≥ 0

Dual:

    max 2u1 - 8u2 - 3u3 - 6u4
    s.t. 0.5u2 - 0.5u3 + u4 ≤ 2
         u1 - u2 + u3 - u4 ≤ -3
         u1, u2, u3, u4 ≥ 0

Dual Relationships

Solve the dual with projection:

    -2u1 + 8u2 + 3u3 + 6u4 + z0 ≤ 0   (E0)
    0.5u2 - 0.5u3 + u4 ≤ 2            (E1)
    u1 - u2 + u3 - u4 ≤ -3            (E2)
    -u1 ≤ 0                           (E3)
    -u2 ≤ 0                           (E4)
    -u3 ≤ 0                           (E5)
    -u4 ≤ 0                           (E6)

Project out u1:

    6u2 + 5u3 + 4u4 + z0 ≤ -6         (E0) + 2(E2)
    -u2 + u3 - u4 ≤ -3                (E2) + (E3)
    0.5u2 - 0.5u3 + u4 ≤ 2            (E1)
    -u2 ≤ 0                           (E4)
    -u3 ≤ 0                           (E5)
    -u4 ≤ 0                           (E6)
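The projection step can be mechanized (a sketch of my own, using exact rational arithmetic): combine each row where u1 has a positive coefficient with each row where it has a negative coefficient, with the multiplier that cancels u1.

```python
from fractions import Fraction as F

# Each row: (coefficients on (u1, u2, u3, u4, z0), right-hand side), "<=".
rows = {
    "E0": ([F(-2), F(8), F(3), F(6), F(1)], F(0)),
    "E1": ([F(0), F(1, 2), F(-1, 2), F(1), F(0)], F(2)),
    "E2": ([F(1), F(-1), F(1), F(-1), F(0)], F(-3)),
    "E3": ([F(-1), F(0), F(0), F(0), F(0)], F(0)),
    "E4": ([F(0), F(-1), F(0), F(0), F(0)], F(0)),
    "E5": ([F(0), F(0), F(-1), F(0), F(0)], F(0)),
    "E6": ([F(0), F(0), F(0), F(-1), F(0)], F(0)),
}

def eliminate(rows, k=0):
    """One Fourier-Motzkin step: drop variable k from a system of <= rows."""
    pos = [(n, r) for n, r in rows.items() if r[0][k] > 0]
    neg = [(n, r) for n, r in rows.items() if r[0][k] < 0]
    out = {n: r for n, r in rows.items() if r[0][k] == 0}
    for pn, (pc, pb) in pos:
        for nn, (nc, nb) in neg:
            m = -nc[k] / pc[k]   # positive multiplier cancelling variable k
            out[f"{m}({pn}) + ({nn})"] = (
                [m * a + b for a, b in zip(pc, nc)], m * pb + nb)
    return out

for name, (coeffs, rhs) in eliminate(rows).items():
    print(name, [str(a) for a in coeffs], "<=", str(rhs))
```

The combinations 2(E2) + (E0) and 1(E2) + (E3) reproduce the two new rows shown above.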

Dual Relationships

    6u2 + 5u3 + 4u4 + z0 ≤ -6   (E0) + 2(E2)
    -u2 + u3 - u4 ≤ -3          (E2) + (E3)
    0.5u2 - 0.5u3 + u4 ≤ 2      (E1)
    -u2 ≤ 0                     (E4)
    -u3 ≤ 0                     (E5)
    -u4 ≤ 0                     (E6)

Let y1 and y2 be the dual variables associated with the first two rows, x1 the dual variable associated with (E1), and x4, x5, x6 the dual variables associated with (E4), (E5), and (E6), respectively. The objective function is max z0. Then the dual problem is:

    min 2x1 - 6y1 - 3y2
    s.t. 0.5x1 + 6y1 - y2 - x4 = 0
         -0.5x1 + 5y1 + y2 - x5 = 0
         x1 + 4y1 - y2 - x6 = 0
         y1 = 1
         x1, y1, y2, x4, x5, x6 ≥ 0

Primal-Dual Relationships Summary

What did we just do?

Took the dual of our primal problem.
Projected out a variable of the dual.
Took the dual of the projected problem.
Observed that the resulting problem was the primal with a constraint eliminated.

If I project out all of the variables in the dual, what do the multipliers give me?

Primal-Dual Relationships Summary

Getting stuck: How can you get stuck with inverse projection? What would you do when you get stuck?

Primal-Dual Relationships Summary

One more time!

Eliminate variables: replace variables with constraints. This process yields multipliers that are the extreme rays and extreme points of the dual problem.

Eliminate constraints: replace constraints with variables. This process yields the extreme points and extreme rays of the primal problem.

Sensitivity Analysis

The result of inverse projection on the linear program is

    (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6 = 1
    z1, z2, z3, z4, z5, z6 ≥ 0

    x1 = 2z1 + 2z2 + (5/7)z3 + (5/7)z4
    x2 = 5z1 + z2 + (29/7)z3 + (1/7)z4 + (9/2)z5 + (1/2)z6
    x0 = (1/2)z1 + (4/7)z3 + (1/14)z4 + (3/4)z5 + (1/4)z6
    x3 = 4z1 + z2 + 3z3 + 3z5
    x4 = (11/14)z3 + (11/14)z4 + (3/2)z5 + (3/2)z6
    x5 = (11/2)z1 + (11/2)z3 + (27/4)z5 + (5/4)z6
    x6 = z2 + z4 + z6

Sensitivity Analysis

Let c1 be the objective function coefficient of variable x1 and c2 the objective function coefficient of variable x2. If z0 is the optimal objective function value, then

    z0 = min{4c1 + 10c2, (5/4)c1 + (29/4)c2, 10c1 + 2c2, 0c1 + 6c2, 0c1 + 2c2}.

Or:

    z0 ≤ 4c1 + 10c2
    z0 ≤ (5/4)c1 + (29/4)c2
    z0 ≤ 10c1 + 2c2
    z0 ≤ 0c1 + 6c2
    z0 ≤ 0c1 + 2c2

and z0 is maximized.
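This can be checked numerically (my own script; the five coefficient pairs are the extreme points (4, 10), (5/4, 29/4), (10, 2), (0, 6), (0, 2)):

```python
# z0(c1, c2) is the minimum of c1*x1 + c2*x2 over the extreme points.
points = [(4, 10), (5/4, 29/4), (10, 2), (0, 6), (0, 2)]

def z0(c1, c2):
    return min(c1 * x1 + c2 * x2 for x1, x2 in points)

print(z0(2, -3))   # -22 -- matches the inverse-projection solution
print(z0(5, -3))   # -18 -- the modified objective used below
```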

Sensitivity Analysis

Consider the linear program we have been working with, but with the objective function coefficient of x1 now 5:

    MIN 5 X1 - 3 X2
    SUBJECT TO
    2)   X2 >= 2
    3)   0.5 X1 - X2 >= -8
    4) - 0.5 X1 + X2 >= -3
    5)   X1 - X2 >= -6
    END

Sensitivity Analysis

    LP OPTIMUM FOUND AT STEP 2
    OBJECTIVE VALUE = -18.0000000

    VARIABLE    VALUE        REDUCED COST
    X1          .0000000     2.000000
    X2          6.000000     .0000000

    ROW    SLACK OR SURPLUS    DUAL PRICE
    1      -18.00000           1.000000
    2      4.000000            -.0000000
    3      2.000000            -.0000000
    4      9.000000            -.0000000
    5      .0000000            -3.000000

Sensitivity Analysis

    RANGES IN WHICH THE BASIS IS UNCHANGED:

                   OBJ COEFFICIENT RANGES
    VARIABLE    CURRENT      ALLOWABLE    ALLOWABLE
                COEF         INCREASE     DECREASE
    X1          5.000000     INFINITY     2.000000
    X2          -3.000000    3.000000     2.000000

                   RIGHTHAND SIDE RANGES
    ROW         CURRENT      ALLOWABLE    ALLOWABLE
                RHS          INCREASE     DECREASE
    2           2.000000     4.000000     INFINITY
    3           -8.000000    2.000000     INFINITY
    4           -3.000000    9.000000     INFINITY
    5           -6.000000    4.000000     2.000000

Sensitivity Analysis

Some definitions:

The allowable increase (decrease) on an objective function coefficient for variable x_i is the maximum increase (decrease) in c_i such that the current optimal primal solution remains an optimal solution. Increasing (decreasing) a coefficient by its allowable increase (decrease) introduces alternative primal optima.

The reduced cost on a variable is zero if the variable is currently positive in an optimal solution. If the variable is at zero, then the reduced cost on the variable is equal to its allowable decrease (for minimization).

Sensitivity Analysis

Assume that the linear program min{c^T x | Ax ≥ b, x ≥ 0} has an optimal primal-dual solution (x, u, w), where u is the optimal dual vector for Ax ≥ b and w is the optimal dual vector on x ≥ 0.

Alternative definition: the reduced cost on variable x_j is w_j. Note that w_j is equal to c_j - u^T A_j, where A_j is column j of A.

For homework you will explore the relationship between these definitions.
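For the modified example (c1 = 5), the formula w_j = c_j - u^T A_j can be verified numerically (my own check; the dual values are read from the LINDO output, where the dual price -3 on row 5 corresponds to u4 = 3 in the max dual, which then has objective -6(3) = -18):

```python
# Reduced costs w_j = c_j - u'A_j for the modified objective c = (5, -3).
c = [5.0, -3.0]
u = [0.0, 0.0, 0.0, 3.0]  # only x1 - x2 >= -6 is binding at (0, 6)

# Columns of A for rows x2>=2, 0.5x1-x2>=-8, -0.5x1+x2>=-3, x1-x2>=-6:
A_col = {"x1": [0.0, 0.5, -0.5, 1.0], "x2": [1.0, -1.0, 1.0, -1.0]}

w = {j: c[k] - sum(ui * aij for ui, aij in zip(u, A_col[j]))
     for k, j in enumerate(["x1", "x2"])}
print(w)  # {'x1': 2.0, 'x2': 0.0} -- matches the REDUCED COST column
```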

Sensitivity Analysis

Questions:

Why is 2 the allowable decrease on the objective function coefficient of x1?
Why is 3 the allowable increase on the objective function coefficient of x2?
Why is 2 the reduced cost on x1?

See http://faculty.chicagobooth.edu/kipp.martin/root/htmls/coursework/36900/datafiles/obj_sensitivity.xlsx