Lecture 10: Linear programming duality


Lecture 10: Linear programming duality
Michael Patriksson, 19 February 2004

The dual of the LP in standard form

minimize z = c^T x        (P)
subject to Ax = b,
           x ≥ 0^n,

and

maximize w = b^T y        (D)
subject to A^T y ≤ c,
           y free.

The canonical primal-dual pair

Let A ∈ R^(m×n), b ∈ R^m, and c ∈ R^n:

maximize z = c^T x        (1)
subject to Ax ≤ b,
           x ≥ 0^n,

and

minimize w = b^T y        (2)
subject to A^T y ≥ c,
           y ≥ 0^m.

Rules for formulating dual LPs

We say that an inequality is canonical if it is of ≤ [respectively, ≥] form in a maximization [respectively, minimization] problem. We say that a variable is canonical if it is ≥ 0. The rule is that the dual variable [constraint] for a primal constraint [variable] is canonical if the other one is canonical. If the direction of a primal constraint [sign of a primal variable] is the opposite of the canonical one, then the dual variable [dual constraint] is also the opposite of the canonical one.
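As a quick sanity check on the (P)/(D) pair above, here is a tiny hand-solvable instance in pure Python; the specific numbers (A, b, c) and the optimal pair are invented for illustration and were verified by hand:

```python
# Tiny standard-form primal (P): minimize c^T x s.t. Ax = b, x >= 0,
# and its dual (D): maximize b^T y s.t. A^T y <= c.
# Illustrative data: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0.
A = [[1.0, 1.0]]   # one equality constraint
b = [1.0]
c = [1.0, 2.0]

x = [1.0, 0.0]     # primal optimal: put all weight on the cheaper variable
y = [1.0]          # dual optimal

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Primal feasibility: Ax = b and x >= 0
assert all(abs(dot(row, x) - bi) < 1e-9 for row, bi in zip(A, b))
assert min(x) >= 0

# Dual feasibility: A^T y <= c (column j of A dotted with y)
ATy = [dot([row[j] for row in A], y) for j in range(len(c))]
assert all(ATy[j] <= c[j] + 1e-9 for j in range(len(c)))

# Strong duality: the optimal objective values coincide
z = dot(c, x)   # primal value
w = dot(b, y)   # dual value
print(z, w)     # 1.0 1.0
```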

Further, the dual variable [constraint] for a primal equality constraint [free variable] is free [an equality constraint]. Summary:

primal/dual constraint      dual/primal variable
canonical inequality        ≥ 0
non-canonical inequality    ≤ 0
equality                    unrestricted (free)

Weak Duality Theorem 11.4

If x is a feasible solution to (P) and y a feasible solution to (D), then c^T x ≥ b^T y. A similar relation holds for the primal-dual pair (1)-(2): the max problem never has a higher objective value.

Proof. c^T x ≥ (A^T y)^T x = y^T Ax = y^T b = b^T y.

Corollary: If c^T x = b^T y for a feasible primal-dual pair (x, y), then they must both be optimal.

Strong Duality Theorem 11.6

In the compendium, strong duality is established for the pair (P) and (D). Here, we establish the result for the pair (1), (2). If one of the problems (1) and (2) has a finite optimal solution, then so does its dual, and their optimal objective values are equal.

Proof. The idea behind the proof is as follows. We suppose first that it is the primal maximization problem that has a finite optimal solution. (This is without loss of generality.) We then take an optimal BFS representing an optimal extreme point x, and thereafter establish (a) that from the optimality condition that the reduced costs are non-positive, we can construct a dual vector of the form y^T = c_B^T B^(-1) which is feasible in (2); and (b) that the objective values c^T x and b^T y are equal. Hence, the dual vector must be optimal in its problem.

(a) Suppose we have added slacks s ∈ R^m, and represented an optimal extreme point x through a basic/non-basic partitioning of (x, s) and correspondingly of (A, I_m) and (c, 0^m). Suppose that the basis is optimal. Then, all the reduced costs of the x and s variables are non-positive.
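The weak-duality chain for the canonical pair (1)-(2) can be traced numerically. In the sketch below, the data and the (deliberately non-optimal) feasible pair are invented for illustration:

```python
# Canonical pair: (1) max c^T x s.t. Ax <= b, x >= 0;
#                 (2) min b^T y s.t. A^T y >= c, y >= 0.
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = [4.0, 12.0, 18.0]
c = [3.0, 5.0]

x = [1.0, 1.0]        # feasible, not optimal, in (1)
y = [1.0, 2.0, 1.0]   # feasible, not optimal, in (2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

Ax  = [dot(row, x) for row in A]
ATy = [dot([row[j] for row in A], y) for j in range(len(c))]

assert all(Ax[i] <= b[i] for i in range(len(b))) and min(x) >= 0   # x feasible
assert all(ATy[j] >= c[j] for j in range(len(c))) and min(y) >= 0  # y feasible

# The chain c^T x <= (A^T y)^T x = y^T Ax <= y^T b:
lhs = dot(c, x)          # 8.0
mid = dot(ATy, x)        # 10.0, equals dot(y, Ax)
rhs = dot(b, y)          # 46.0
assert lhs <= mid <= rhs
assert abs(mid - dot(y, Ax)) < 1e-9
```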

(a, continued) Hence,

c̃_x^T = c^T − c_B^T B^(-1) A ≤ (0^n)^T  and  c̃_s^T = (0^m)^T − c_B^T B^(-1) I_m ≤ (0^m)^T.

Now, let us define a dual vector as follows: y^T := c_B^T B^(-1). We notice that this choice is identical to the one provided in the pricing step of the Simplex method. Therefore, we can say that this vector is obtained for free from having used the Simplex method to find an optimal BFS. Then, c^T − y^T A ≤ (0^n)^T and −y^T I_m ≤ (0^m)^T. In other words, A^T y ≥ c and y ≥ 0^m, that is, y := (c_B^T B^(-1))^T is feasible in (2).

(b) Note that at the BFS chosen, z = c_B^T B^(-1) b = y^T b = w. By this construction of the dual vector y, it always has the same objective value as the primal BFS! In particular, it here has the same objective value as the primal optimal one. Use the above Corollary.

Complementary Slackness Theorem 11.12

Let x be a feasible solution to (1) and y a feasible solution to (2). Then x is optimal in (1) and y is optimal in (2) if and only if

x_j (c_j − y^T A_j) = 0,  j = 1, ..., n,   (3a)
y_i (A_i x − b_i) = 0,    i = 1, ..., m,   (3b)

where A_j is the jth column of A and A_i is the ith row of A.

Proof. From Weak and Strong Duality we have both that c^T x ≤ (A^T y)^T x = y^T Ax ≤ y^T b and that c^T x = b^T y holds. Since we then have equality throughout, it follows that 0 = [c − A^T y]^T x = y^T [Ax − b]. Since each term is sign-restricted, each one must be zero. We are done with this direction. The converse follows similarly: weak duality plus complementarity implies strong duality.
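Conditions (3a)-(3b) can be checked mechanically for a given pair. The optimal pair below belongs to an invented instance of (1)-(2), solved by hand beforehand:

```python
# Check complementary slackness for an optimal primal-dual pair of
# (1) max c^T x s.t. Ax <= b, x >= 0 and (2) min b^T y s.t. A^T y >= c, y >= 0.
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = [4.0, 12.0, 18.0]
c = [3.0, 5.0]

x = [2.0, 6.0]        # optimal in (1): z = 36
y = [0.0, 1.5, 1.0]   # optimal in (2): w = 36

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# (3a): x_j * (c_j - y^T A_j) = 0 for every column j
for j in range(len(c)):
    col = [row[j] for row in A]
    assert abs(x[j] * (c[j] - dot(y, col))) < 1e-9

# (3b): y_i * (A_i x - b_i) = 0 for every row i
for i in range(len(b)):
    assert abs(y[i] * (dot(A[i], x) - b[i])) < 1e-9

# and, per the Corollary, the objective values agree:
assert abs(dot(c, x) - dot(b, y)) < 1e-9   # both 36
```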

Necessary and sufficient optimality conditions: strong duality, the global optimality conditions, and the KKT conditions are equivalent for LP

We have seen above that the following statement characterizes the optimality of a primal-dual pair (x, y): x is feasible in (1), y is feasible in (2), and complementarity holds. In other words, we have the following result (think of the KKT conditions!):

Theorem 11.14: Take a vector x ∈ R^n. For x to be an optimal solution to the linear program (1), it is both necessary and sufficient that (a) x is a feasible solution to (1); (b) corresponding to x there is a dual feasible solution y ∈ R^m to (2); and (c) x and y together satisfy complementarity (3).

This is precisely the KKT conditions! Those who wish to establish this should note that here there are no multipliers for the x ≥ 0^n constraints, while in the KKT conditions there are. Introduce such a multiplier vector and observe that it can later be eliminated.

Further: suppose that x and y are feasible in (1) and (2). Then the following are equivalent: (a) x and y have the same objective value; (b) x solves (1) and y solves (2); (c) x and y satisfy complementarity.

The Simplex method and the global optimality conditions

The Simplex method is remarkable in that it satisfies two of the three conditions at every BFS, and the remaining one is satisfied at optimality:

x is feasible after Phase I has been completed.

x and y always satisfy complementarity. Why? If x_j is in the basis, then it has a zero reduced cost, implying that the jth dual constraint has no slack. If the reduced cost of x_j is non-zero, then x_j's value is zero.
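The dual vector y^T = c_B^T B^(-1) can be recovered from an optimal basis by solving B^T y = c_B. The sketch below does this with a small Gaussian elimination; the LP data and the optimal basis are an invented, hand-verified example:

```python
# Recovering the simplex multipliers y^T = c_B^T B^(-1), i.e. solving
# B^T y = c_B.  Illustrative data: max 3x1 + 5x2 subject to x1 <= 4,
# 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0, with slacks s1, s2, s3; the
# (hand-verified) optimal basis is {x1, x2, s1}.
B = [[1.0, 0.0, 1.0],   # columns: x1, x2, s1 (rows = constraints)
     [0.0, 2.0, 0.0],
     [3.0, 2.0, 0.0]]
c_B = [3.0, 5.0, 0.0]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    sol = [0.0] * n
    for i in reversed(range(n)):                   # back-substitution
        sol[i] = (M[i][n] - sum(M[i][j] * sol[j]
                                for j in range(i + 1, n))) / M[i][i]
    return sol

BT = [[B[i][j] for i in range(3)] for j in range(3)]  # transpose of B
y = solve(BT, c_B)
print(y)  # the simplex multipliers, [0.0, 1.5, 1.0] up to rounding
```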

The feasibility of y^T = c_B^T B^(-1) is not fulfilled until we reach an optimal BFS. How is the incoming criterion related to this? We introduce as incoming variable the variable with the best reduced cost. Since the reduced cost measures the dual feasibility of y, this means that we select the most violated dual constraint; at the new BFS, that constraint is then satisfied (since its reduced cost is then zero). The Simplex method hence works to satisfy dual feasibility!

Farkas' Lemma revisited

Let A be an m × n matrix and b an m × 1 vector. Then exactly one of the systems

Ax = b,           (I)
x ≥ 0^n,

and

A^T y ≤ 0^n,      (II)
b^T y > 0,

has a feasible solution, and the other system is inconsistent.

Proof. Suppose (I) has a solution x. For any y with A^T y ≤ 0^n we then have b^T y = (Ax)^T y = x^T (A^T y) ≤ 0, since x ≥ 0^n; hence b^T y > 0 cannot hold, which means that (II) is infeasible.

Assume instead that (II) is infeasible. Consider the linear program

maximize b^T y            (4)
subject to A^T y ≤ 0^n, y free,

and its dual program

minimize (0^n)^T x        (5)
subject to Ax = b, x ≥ 0^n.

Since (II) is infeasible, y = 0^m is an optimal solution to (4). Hence the Strong Duality Theorem 11.6 gives that there exists an optimal solution to (5). This solution is feasible in (I).
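Farkas' Lemma says exactly one of (I) and (II) is feasible, and each case can be exhibited by a certificate. Both 2×2 instances below are invented for illustration:

```python
def matvec(A, v):
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1.0, 0.0], [0.0, 1.0]]   # A = I for simplicity

# Case 1: b = (1, 1).  System (I) is feasible: x = b >= 0 solves Ax = b.
b1 = [1.0, 1.0]
x = [1.0, 1.0]
assert matvec(A, x) == b1 and min(x) >= 0
# ... so by the lemma no y can satisfy A^T y <= 0 AND b1^T y > 0; indeed
# b1^T y = x^T (A^T y) <= 0 whenever A^T y <= 0, because x >= 0.

# Case 2: b = (-1, 1).  System (I) is infeasible (it forces x1 = -1 < 0),
# and y = (-1, 0) certifies this via (II): A^T y <= 0 and b^T y > 0.
b2 = [-1.0, 1.0]
y = [-1.0, 0.0]
ATy = [dot([row[j] for row in A], y) for j in range(2)]
assert max(ATy) <= 0 and dot(b2, y) > 0
print("both certificates verified")
```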

Decentralized planning

Consider the following profit maximization problem:

maximize z = p^T x = Σ_{i=1}^m p_i^T x_i

subject to
B_1 x_1 ≤ b_1,
B_2 x_2 ≤ b_2,
...
B_m x_m ≤ b_m,
Cx ≤ c,
x_i ≥ 0^(n_i), i = 1, ..., m,

for which we have the following interpretation: we have m independent subunits, each responsible for finding its optimal production plan. While they are governed by their own objectives, we (the managers) want to solve the overall problem of maximizing the company's profit. The constraints B_i x_i ≤ b_i, x_i ≥ 0^(n_i) describe unit i's own production limits, when using its own resources.

The units also share some limited resources. The joint resource constraint Cx ≤ c (typically of the form Σ_{i=1}^m C_i x_i ≤ c) is difficult, as well as unwanted, to enforce directly, because doing so would make the planning centralized. We want the units to maximize their own profits individually, but we must also make sure that they do not violate the joint resource constraint. How?

ANSWER: Solve the LP dual problem! From the dual solution, take the dual vector y for the joint resource constraint. Introduce an internal price for the use of this resource, equal to this dual vector. Let each unit optimize its own production plan, with an additional cost term. This yields a decentralized planning process.

Each unit i then solves its own LP problem:

maximize [p_i − C_i^T y]^T x_i
subject to B_i x_i ≤ b_i, x_i ≥ 0^(n_i),

resulting in an optimal production plan! Decentralized planning is related to Dantzig-Wolfe decomposition, which is a general technique for solving large-scale LPs by solving a sequence of smaller LPs.
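A minimal numeric sketch of this pricing scheme, with two single-variable units; all data (profits p_i, shared-resource usages C_i, limits) are invented, and the price y on the shared resource was obtained by solving the dual of the centralized problem by hand:

```python
# Centralized problem (solved by hand): max 5*x1 + 3*x2
#   s.t. 2*x1 + x2 <= 8 (shared resource), x1 <= 3, x2 <= 4, x >= 0.
# Optimum: x = (2, 4), z = 22; dual price on the shared resource: y = 2.5.
p = [5.0, 3.0]    # unit profits per unit produced
C = [2.0, 1.0]    # shared-resource usage per unit produced
ub = [3.0, 4.0]   # each unit's own production limit
cap = 8.0         # shared-resource capacity
y = 2.5           # internal price for the shared resource

def unit_plan(pi, Ci, ubi):
    """Unit i maximizes its price-adjusted profit (pi - Ci*y)*xi over [0, ubi]."""
    reduced = pi - Ci * y
    if reduced > 0:
        return ubi        # still profitable after the charge: produce at the limit
    if reduced < 0:
        return 0.0        # unprofitable after the charge: produce nothing
    return None           # indifferent: any level in [0, ubi] is optimal

plans = [unit_plan(p[i], C[i], ub[i]) for i in range(2)]
# Unit 2 produces at its limit; unit 1 is exactly indifferent at this price
# (a common degeneracy: the coordinator must break the tie, here with x1 = 2).
assert plans == [None, 4.0]
x = [2.0, plans[1]]
assert C[0] * x[0] + C[1] * x[1] <= cap    # the joint constraint holds
assert p[0] * x[0] + p[1] * x[1] == 22.0   # matches the centralized optimum
```

The indifference of unit 1 illustrates why pure pricing does not always fully decentralize the decision: when the shared constraint is binding, some unit typically sees a zero price-adjusted profit and needs an additional coordination signal.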