Lecture 10: Linear programming duality and sensitivity

The canonical primal-dual pair

Let A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n:

maximize z = c^T x subject to Ax ≤ b, x ≥ 0^n,    (1)

and

minimize w = b^T y subject to A^T y ≥ c, y ≥ 0^m.    (2)

The dual of the LP in standard form

(P)  minimize z = c^T x subject to Ax = b, x ≥ 0^n,

and

(D)  maximize w = b^T y subject to A^T y ≤ c, y free.
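
A quick numerical illustration (an added sketch, not part of the original slides): scipy.optimize.linprog minimizes by default, so the pair (P)/(D) can be checked directly on a small made-up instance; the data A, b, c below are invented for the example.

```python
# Sketch: numerically checking the standard-form pair (P)/(D) on a small
# made-up instance, using scipy.optimize.linprog (assumed available).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.uniform(0.5, 2.0, size=(m, n))
x_feas = rng.uniform(1.0, 2.0, size=n)   # build b so that (P) is feasible
b = A @ x_feas
c = rng.uniform(1.0, 3.0, size=n)        # c > 0 keeps (P) bounded below

# (P): min c^T x  s.t. Ax = b, x >= 0   (linprog's default bounds are x >= 0)
primal = linprog(c, A_eq=A, b_eq=b, method="highs")

# (D): max b^T y  s.t. A^T y <= c, y free -- pass -b to minimize, free bounds
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * m, method="highs")

print("primal optimum z =", primal.fun)
print("dual   optimum w =", -dual.fun)   # equal, by strong duality
```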

Rules for formulating dual LPs

We say that an inequality is canonical if it is of ≤ [respectively, ≥] form in a maximization [respectively, minimization] problem. We say that a variable is canonical if it is ≥ 0. The rule is that the dual variable [constraint] associated with a primal constraint [variable] is canonical if that constraint [variable] is canonical. If the direction of a primal constraint [sign of a primal variable] is opposite to the canonical one, then the corresponding dual variable [dual constraint] is also opposite to the canonical one.

Further, the dual variable [constraint] for a primal equality constraint [free variable] is free [an equality constraint].

Summary:

    primal/dual constraint       dual/primal variable
    canonical inequality         ≥ 0
    non-canonical inequality     ≤ 0
    equality                     unrestricted (free)
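
As a worked illustration of these rules (an added example, not from the original slides), take a minimization problem with one constraint of each type and one variable of each type:

    minimize z = 2x_1 + 3x_2
    subject to  x_1 +  x_2 ≥ 4     (canonical for a min problem)
                x_1 -  x_2 ≤ 1     (non-canonical)
                x_1 + 2x_2 = 6     (equality)
                x_1 ≥ 0, x_2 free.

Its dual is

    maximize w = 4y_1 + y_2 + 6y_3
    subject to  y_1 + y_2 +  y_3 ≤ 2    (from x_1 ≥ 0: canonical ≤ in a max problem)
                y_1 - y_2 + 2y_3 = 3    (from x_2 free: equality)
                y_1 ≥ 0, y_2 ≤ 0, y_3 free,

where y_1 ≥ 0 comes from the canonical first constraint, y_2 ≤ 0 from the non-canonical second one, and y_3 free from the equality.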

Weak Duality Theorem

If x is a feasible solution to (P) and y a feasible solution to (D), then c^T x ≥ b^T y. A similar relation holds for the primal-dual pair (2)-(1): the max problem never has a higher objective value. Proof.

Corollary: If c^T x = b^T y for a feasible primal-dual pair (x, y), then both must be optimal.

Strong Duality Theorem

Strong duality is here established for the pair (P), (D). If one of the problems (P) and (D) has a finite optimal solution, then so does its dual, and their optimal objective values are equal. Proof.

Complementary Slackness Theorem

Let x be a feasible solution to (1) and y a feasible solution to (2). Then x is optimal to (1) and y optimal to (2) if and only if

    x_j (c_j - y^T A_j) = 0,  j = 1, ..., n,    (3a)
    y_i (A_i x - b_i) = 0,    i = 1, ..., m,    (3b)

where A_j is the jth column of A and A_i the ith row of A. Proof.
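
A numerical sanity check of (3a)-(3b) for the pair (1)-(2) (an added sketch on made-up data; scipy.optimize.linprog is assumed available):

```python
# Sketch: checking complementary slackness (3a)-(3b) for the pair (1)-(2).
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
b = np.array([10.0, 15.0])
c = np.array([4.0, 5.0, 3.0])

# (1): max c^T x s.t. Ax <= b, x >= 0  (negate c, since linprog minimizes)
p = linprog(-c, A_ub=A, b_ub=b, method="highs")
# (2): min b^T y s.t. A^T y >= c, y >= 0  (flip signs to get <= form)
d = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

x, y = p.x, d.x
print("z* =", c @ x, " w* =", b @ y)    # equal at optimality
print("(3a):", x * (c - A.T @ y))       # ~0 componentwise
print("(3b):", y * (A @ x - b))         # ~0 componentwise
```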

Necessary and sufficient optimality conditions: strong duality, the global optimality conditions, and the KKT conditions are equivalent for LP

We have seen above that the following statement characterizes the optimality of a primal-dual pair (x, y): x is feasible in (1), y is feasible in (2), and complementarity holds. In other words, we have the following result (think of the KKT conditions!):

Take a vector x ∈ R^n. For x to be an optimal solution to the linear program (1), it is both necessary and sufficient that (a) x is a feasible solution to (1); (b) corresponding to x there is a dual feasible solution y ∈ R^m to (2); and (c) x and y together satisfy complementarity (3).

This is precisely the same as the KKT conditions! Those who wish to establish this should note that there are no multipliers here for the x ≥ 0^n constraints, whereas in the KKT conditions there are. Introduce such a multiplier vector and observe that it can later be eliminated.

Further: suppose that x and y are feasible in (1) and (2), respectively. Then the following statements are equivalent: (a) x and y have the same objective value; (b) x and y solve (1) and (2); (c) x and y satisfy complementarity.

The Simplex method and the global optimality conditions

The Simplex method is remarkable in that it satisfies two of the three conditions at every BFS, with the remaining one satisfied only at optimality: x is feasible once Phase I has been completed, and x and y always satisfy complementarity. Why? If x_j is in the basis, then it has a zero reduced cost, implying that dual constraint j has no slack. If the reduced cost of x_j is non-zero (slack in dual constraint j), then its value is zero.

The feasibility of y^T = c_B^T B^{-1} is not fulfilled until we reach an optimal BFS. How is the incoming criterion related to this? We introduce as incoming variable one with the best reduced cost. Since the reduced cost measures the dual feasibility of y, this means that we select the most violated dual constraint; at the new BFS, that constraint is satisfied (its reduced cost is then zero). The Simplex method hence works to satisfy dual feasibility by forcing a move such that the most violated dual constraint becomes satisfied!
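
To make this concrete, here is a small numpy sketch (an added illustration with made-up data; the choice of basis is an assumption for the example) computing the simplex multipliers y^T = c_B^T B^{-1} and the reduced costs for a given basis of a standard-form problem:

```python
# Sketch: simplex multipliers y^T = c_B^T B^{-1} and reduced costs for a
# chosen basis of min c^T x s.t. Ax = b, x >= 0 (made-up data).
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

basis = [0, 1]                      # assume x_1, x_2 basic (illustrative choice)
B = A[:, basis]
y = np.linalg.solve(B.T, c[basis])  # simplex multipliers: B^T y = c_B
reduced = c - A.T @ y               # reduced costs; zero on basic columns

print("y =", y)
print("reduced costs =", reduced)
# Nonnegative reduced costs <=> A^T y <= c, i.e. y is dual feasible,
# so this basis is optimal exactly when no reduced cost is negative.
```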

Farkas' Lemma revisited

Let A ∈ R^{m×n} and b ∈ R^m. Then exactly one of the systems

    (I)   Ax = b, x ≥ 0^n,
    (II)  A^T y ≤ 0^n, b^T y > 0,

has a feasible solution, and the other system is inconsistent. Proof.
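
A computational illustration (added, with made-up data): feasibility of (I) can be tested with a zero-objective LP, and (II), being a cone, can be tested by normalizing b^T y to at most 1:

```python
# Sketch: deciding which of the Farkas systems (I), (II) is consistent,
# via two auxiliary LPs (made-up data; scipy.optimize.linprog assumed).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0]])
b = np.array([3.0, -2.0])   # try b = A @ [1, 1, 1] instead to make (I) feasible

# (I): Ax = b, x >= 0 -- feasibility LP with zero objective
sys1 = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=b, method="highs")

# (II): A^T y <= 0, b^T y > 0 -- consistent iff the LP
#   max b^T y  s.t.  A^T y <= 0, b^T y <= 1
# attains a strictly positive value (scaling y handles the strict inequality)
m = A.shape[0]
sys2 = linprog(-b,
               A_ub=np.vstack([A.T, b]),
               b_ub=np.append(np.zeros(A.shape[1]), 1.0),
               bounds=[(None, None)] * m,
               method="highs")

print("(I) feasible:", sys1.status == 0)
print("(II) feasible:", sys2.status == 0 and -sys2.fun > 1e-9)
```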

An application of linear programming: the Diet Problem

First motivated by the US Army's desire to meet the nutritional requirements of field GIs while minimizing cost. George Stigler made an educated guess of the optimal solution to the linear program using a heuristic method; his guess for the cost of an optimal diet was $39.93 per year (1939 prices). In the fall of 1947, Jack Laderman of the Mathematical Tables Project of the National Bureau of Standards solved Stigler's model with the new simplex method.

The first large-scale computation in optimization

The LP consisted of nine equations in 77 unknowns. It took nine clerks using hand-operated desk calculators 120 man-days to compute the optimal solution of $39.69. Stigler's guess was thus off by only 24 cents per year. Variations of the problem can be solved on the Internet!

Sensitivity analysis, I: shadow prices are derivatives of a convex function!

Suppose an optimal BFS is non-degenerate. Then c^T x = b^T y = c_B^T B^{-1} b varies linearly as a function of b around its given value. Non-degeneracy also implies that y is unique. Why? The perturbation function b ↦ v(b) is given by

    v(b) = min { c^T x : Ax = b, x ≥ 0^n } = max { b^T y : A^T y ≤ c } = max_{k ∈ K} b^T y^k,

where K is the set of dual feasible basic solutions (DFS); v is a piecewise linear, convex function.

Fact: v convex (and finite in a neighbourhood of b) implies that v is differentiable at b if and only if it has a unique subgradient there.

Here, the derivative with respect to b is y; that is, the change in the optimal value from a change in the right-hand side b equals the dual optimal solution.
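
A numerical illustration of the shadow-price interpretation (an added sketch with made-up data; the eqlin.marginals field of scipy's HiGHS result is an assumption of this example): perturbing b_i and re-solving should change the optimal value at rate y_i, as long as the same basis stays optimal.

```python
# Sketch: shadow prices as derivatives of v(b) = min{c^T x : Ax = b, x >= 0}.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

res = linprog(c, A_eq=A, b_eq=b, method="highs")
y = res.eqlin.marginals          # dual optimal solution / shadow prices

eps = 0.1
res2 = linprog(c, A_eq=A, b_eq=b + np.array([eps, 0.0]), method="highs")
print("y =", y)
print("predicted change:", y[0] * eps)       # y_1 * eps
print("actual change:   ", res2.fun - res.fun)
```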

Sensitivity analysis, II: perturbations in the data

How to find a new optimum through re-optimization when the data have changed: if an element of c changes, then the old BFS is still feasible but may no longer be optimal. Check the new value of the reduced-cost vector c̄ and change the basis if some sign has changed.

If an element of b changes, then the old BFS satisfies the optimality conditions but may no longer be feasible. Check the new value of the vector B^{-1}b and change the basis if some sign has changed. Since the BFS is infeasible but "optimal" (dual feasible), we use a dual version of the Simplex method, the Dual Simplex method:

- Find a negative basic variable; it becomes the outgoing variable x_s.
- Choose among the non-basic variables x_j those for which the element (B^{-1}N)_{sj} < 0; this means that the new basic variable will become positive.
- Among these, choose the incoming variable so that the reduced-cost vector c̄ keeps its sign.
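
A small numpy illustration of the starting point of the Dual Simplex method (added, with made-up data): after changing b, the old optimal basis keeps its reduced costs (dual feasibility) but B^{-1}b picks up a negative entry.

```python
# Sketch: a change in b makes the old optimal basis primal infeasible while
# its reduced costs (dual feasibility) are unaffected (made-up data).
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([-3.0, -2.0, 0.0, 0.0])
basis = [0, 1]                       # optimal basis for b = (4, 6)
B = A[:, basis]

for b in (np.array([4.0, 6.0]), np.array([4.0, 3.0])):
    x_B = np.linalg.solve(B, b)      # B^{-1} b: values of the basic variables
    print("b =", b, "-> x_B =", x_B)
# For b = (4, 3) one basic variable is negative: the reduced costs still have
# the right sign, so the Dual Simplex method re-optimizes from this basis.
```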

Decentralized planning

Consider the following profit maximization problem:

    maximize z = p^T x = Σ_{i=1}^m p_i^T x_i,
    subject to B_i x_i ≤ b_i,  i = 1, ..., m,
               C_1 x_1 + ... + C_m x_m ≤ c,
               x_i ≥ 0^{n_i},  i = 1, ..., m,

that is, the constraint matrix consists of a block-diagonal part diag(B_1, ..., B_m) together with the coupling rows C = (C_1, ..., C_m),

for which we have the following interpretation:

We have m independent subunits, each responsible for finding its optimal production plan. While they are governed by their own objectives, we (the managers) want to solve the overall problem of maximizing the company's profit. The constraints B_i x_i ≤ b_i, x_i ≥ 0^{n_i} describe unit i's own production limits when using its own resources.

The units also use limited resources that are shared.

The joint resource constraint Cx ≤ c is difficult, as well as unwanted, to enforce directly, because doing so would make the planning process centralized. We want the units to maximize their own profits individually, but we must also make sure that they do not violate the resource constraint Cx ≤ c. (This constraint is typically of the form Σ_{i=1}^m C_i x_i ≤ c.) How? ANSWER: solve the LP dual problem!

Generate from the dual solution the dual vector y for the joint resource constraint. Introduce an internal price for the use of this resource, equal to this dual vector. Let each unit optimize its own production plan, with an additional cost term; this is then a decentralized planning process. Each unit i solves its own LP problem:

    maximize_{x_i} [p_i - C_i^T y]^T x_i,  subject to B_i x_i ≤ b_i, x_i ≥ 0^{n_i},

resulting in an optimal production plan!

Decentralized planning is related to Dantzig-Wolfe decomposition, a general technique for solving large-scale LPs by solving a sequence of smaller LPs. More on such techniques in the Project course.
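
A sketch of the price coordination described above (added; data and names are made up, and the ineqlin.marginals field of scipy's HiGHS result is an assumption): solve the full LP once to obtain the dual price y of the joint constraint, then let each unit solve its own small LP with the price-adjusted objective p_i - C_i^T y.

```python
# Sketch of price-directed decentralized planning (made-up data).
import numpy as np
from scipy.optimize import linprog

p = np.array([3.0, 2.0, 4.0, 1.0])        # profits: unit 1 owns x1,x2; unit 2 owns x3,x4
A_own = np.array([[1.0, 1.0, 0.0, 0.0],   # B_1 x_1 <= b_1
                  [0.0, 0.0, 1.0, 2.0]])  # B_2 x_2 <= b_2
b_own = np.array([4.0, 6.0])
C = np.array([[1.0, 1.0, 1.0, 1.0]])      # joint resource constraint Cx <= c
c_joint = np.array([5.0])

# Central problem: max p^T x subject to own + joint constraints
full = linprog(-p, A_ub=np.vstack([A_own, C]),
               b_ub=np.append(b_own, c_joint), method="highs")
y = -full.ineqlin.marginals[-1]           # internal price of the joint resource

# Each unit now maximizes its price-adjusted profit over its own constraints
for i, (cols, row) in enumerate([((0, 1), 0), ((2, 3), 1)], start=1):
    adj = p[list(cols)] - C[0, list(cols)] * y
    sub = linprog(-adj, A_ub=A_own[row:row + 1, list(cols)],
                  b_ub=b_own[row:row + 1], method="highs")
    print(f"unit {i}: price-adjusted profits {adj}, plan {sub.x}")
# The unit holding the marginal activity breaks even at the internal price and
# is therefore indifferent; the joint capacity then fixes its actual level, a
# standard feature of price coordination.
```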