Duality. Peter Bro Miltersen (University of Aarhus), Optimization, Lecture 9, February 28, 2006


Duality

Maximize $c^T x$ for $x \in F = \{x \in (\mathbb{R}_+)^n \mid Ax \le b\}$. If we guess $x \in F$, we can say that $c^T x$ is a lower bound for the optimal value without executing the simplex algorithm. Can we make similar easy guesses establishing upper bounds for the optimal value?

Example

Maximize $5x_1 + 6x_2 + 3x_3$
Subject to
$5x_1 + 6x_2 + 3x_3 \le 50$
$4x_1 + 3x_2 + 5x_3 \le 5$
$x_1 + 2x_2 - x_3 \le 1$
$x_1, x_2, x_3 \ge 0$

Getting upper bounds

To get an upper bound on the achievable value of any solution, we can look for a positive linear combination of the constraints upper bounding the objective function. How can we find the best upper bound that can be proved in this way? The best upper bound provable this way is itself described by a linear program!

Weak Duality Theorem

$A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m \times 1}$, $c \in \mathbb{R}^{n \times 1}$.
Primal program P: Maximize $c^T x$ under $Ax \le b$, $x \ge 0$.
Dual program D: Minimize $b^T y$ under $A^T y \ge c$, $y \ge 0$.
If $x$ is a feasible solution to P and $y$ is a feasible solution to D, then the value $c^T x$ is smaller than or equal to the value $b^T y$.
Proof: $c^T x \le (y^T A) x = y^T (Ax) \le y^T b$.
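The chain of inequalities in the proof can be checked numerically on any feasible primal/dual pair. A minimal sketch (numpy assumed available; the LP data below is chosen for illustration and is not from the slides):

```python
import numpy as np

# Illustrative LP: maximize 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([1.0, 1.0])  # some feasible primal point: Ax <= b, x >= 0
y = np.array([3.0, 0.0])  # some feasible dual point:  A^T y >= c, y >= 0

assert np.all(A @ x <= b) and np.all(x >= 0)    # x is feasible for P
assert np.all(A.T @ y >= c) and np.all(y >= 0)  # y is feasible for D

# The proof's chain: c^T x <= (y^T A) x = y^T (A x) <= y^T b
print(c @ x, y @ A @ x, y @ b)  # 5.0 6.0 12.0
assert c @ x <= y @ (A @ x) <= y @ b
```

Any feasible dual point, no matter how crude, certifies an upper bound (here 12.0) on every primal value.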

Some remarks

- If P is unbounded, then D is infeasible.
- The dual program of the dual program is the primal program.
- If D is unbounded, then P is infeasible.

(Strong) Duality Theorem

If P has an optimal solution $x^*$, then D has an optimal solution $y^*$ and $c^T x^* = b^T y^*$.
For any LP maximization instance we can write down an LP minimization instance with the same optimal value.

A similar duality theorem

Value of maximal flow = size of minimal cut. This duality theorem was crucial for showing correctness of Ford-Fulkerson. The LP duality theorem is similarly tied to the correctness of the simplex algorithm.

How to find the optimal dual solution

Maximize $5x_1 + 4x_2 + 3x_3$
Subject to
$2x_1 + 3x_2 + x_3 \le 5$
$4x_1 + x_2 + 2x_3 \le 11$
$3x_1 + 4x_2 + 2x_3 \le 8$
$x_1, x_2, x_3 \ge 0$

Final dictionary

Maximize $z$ subject to $x_1, x_2, \ldots, x_6 \ge 0$ and
$x_3 = 1 + x_2 + 3x_4 - 2x_6$
$x_1 = 2 - 2x_2 - 2x_4 + x_6$
$x_5 = 1 + 5x_2 + 2x_4$
$z = 13 - 3x_2 - x_4 - x_6$

Optimal primal solution: $x_2 = x_4 = x_6 = 0$, $x_3 = 1$, $x_1 = 2$, $x_5 = 1$.
Optimal dual solution: $y_1 = 1$, $y_2 = 0$, $y_3 = 1$.
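The claimed solutions can be verified without rerunning the simplex method: if both points are feasible and their objective values agree, weak duality certifies that both are optimal. A quick check (numpy assumed available):

```python
import numpy as np

# Data of the LP "How to find the optimal dual solution" from the slides.
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0],
              [3.0, 4.0, 2.0]])
b = np.array([5.0, 11.0, 8.0])
c = np.array([5.0, 4.0, 3.0])

x = np.array([2.0, 0.0, 1.0])  # primal solution read off the final dictionary
y = np.array([1.0, 0.0, 1.0])  # dual solution read off the z-row

assert np.all(A @ x <= b) and np.all(x >= 0)    # x is feasible for P
assert np.all(A.T @ y >= c) and np.all(y >= 0)  # y is feasible for D
# Equal objective values: by weak duality, both must be optimal.
print(c @ x, y @ b)  # 13.0 13.0
assert c @ x == y @ b == 13.0
```

This is exactly the certificate idea used later in the "Consequence" slide.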

Proof of strong duality theorem

Primal program P: Maximize $c^T x$ under $Ax \le b$, $x \ge 0$. Solve P using the two-phase simplex method, obtaining optimal solution $x^*$. The last row of the last dictionary has the form
$$z = z^* + \sum_{k=1}^{n+m} \bar c_k x_k.$$
Let $y_i^* = -\bar c_{n+i}$, $i = 1, 2, \ldots, m$. Must show:
1. $y^*$ is a feasible solution to D: Minimize $b^T y$ under $A^T y \ge c$, $y \ge 0$.
2. $b^T y^* = z^*$.

Proof of strong duality theorem

Last row of the last dictionary:
$$z = z^* + \sum_{k=1}^{n+m} \bar c_k x_k.$$
Last row of the first dictionary:
$$z = \sum_{j=1}^{n} c_j x_j.$$
The two expressions are equivalent for all $x$ in
$$\Big\{ x \in \mathbb{R}^{n+m} \;\Big|\; \forall i \in \{1, \ldots, m\} : x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j \Big\}.$$

Proof of strong duality theorem

For all $x \in \{x \in \mathbb{R}^{n+m} \mid \forall i \in \{1, \ldots, m\} : x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j\}$:
$$\sum_{j=1}^{n} c_j x_j = z^* + \sum_{k=1}^{n+m} \bar c_k x_k
= z^* + \sum_{j=1}^{n} \bar c_j x_j + \sum_{i=1}^{m} (-y_i^*)\Big(b_i - \sum_{j=1}^{n} a_{ij} x_j\Big)
= \Big(z^* - \sum_{i=1}^{m} b_i y_i^*\Big) + \sum_{j=1}^{n} \Big(\bar c_j + \sum_{i=1}^{m} a_{ij} y_i^*\Big) x_j$$

Proof of strong duality theorem

For all $x \in \mathbb{R}^n$:
$$\Big(z^* - \sum_{i=1}^{m} b_i y_i^*\Big) + \sum_{j=1}^{n} \Big(\Big(\bar c_j + \sum_{i=1}^{m} a_{ij} y_i^*\Big) - c_j\Big) x_j = 0$$
Hence:
$$\forall j : c_j = \bar c_j + \sum_{i=1}^{m} a_{ij} y_i^* \qquad \text{and} \qquad z^* = \sum_{i=1}^{m} b_i y_i^*.$$

Proof of strong duality theorem

From $\forall j : c_j = \bar c_j + \sum_{i=1}^{m} a_{ij} y_i^*$ and $\bar c_j \le 0$ (all coefficients in the last row of the final dictionary are non-positive), we get $\forall j : c_j \le \sum_{i=1}^{m} a_{ij} y_i^*$; moreover $y_i^* = -\bar c_{n+i} \ge 0$. So $y^*$ is a feasible solution to the dual program.
$z^* = \sum_{i=1}^{m} b_i y_i^* = b^T y^*$, so $y^*$ has the same objective function value as $x^*$.

Consequences of duality theorem (to be seen)

- Software solving LP programs to optimality can be easily checked (by running the software on D as well as P).
- Solving linear programs to optimality is as easy as solving systems of linear inequalities (by solving the system P, D, $c^T x = b^T y$).
- The dual simplex algorithm (solve D rather than P) is sometimes faster than the primal simplex algorithm.
- Optimal mixed strategies in zero-sum games are unexploitable (von Neumann (co-)invented linear programming because of the application to two-player games!).
- Ye's interior point algorithm works by maintaining a solution to the dual and the primal program simultaneously.
- In general, one may very often gain tremendous insight into a problem phrased as a linear program by looking at its dual.

Consequence

Software solving LP programs to optimality can be easily checked. Give the software the primal program as well as the dual program. The solution to the dual program is a certificate that the solution to the primal program is optimal.

Linear Inequalities Problem

Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$.
Output: If $\{x \in \mathbb{R}^n \mid Ax \le b\} = \emptyset$, report Infeasible; otherwise output $x$ so that $Ax \le b$.

Linear Programming

Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$.
Output: $x \in F$ maximizing $\langle c, x \rangle$, where $F = \{x \in \mathbb{R}^n \mid Ax \le b\}$. $F$ is called the set of feasible solutions to the program.
Exceptions: If $F = \emptyset$, report Infeasible. If $\forall v \in \mathbb{R} \; \exists x \in F : \langle c, x \rangle > v$, report Unbounded.

Algorithm for LP, using LI

Convert the LP instance to an instance P in standard form. Check whether P is infeasible using the LI algorithm. If so, report Infeasible. If not, construct the dual D of P. Use the LI algorithm to find $x$ and $y$ so that $x$ satisfies the constraints of P, $y$ satisfies the constraints of D, and $c^T x = b^T y$. If no such $(x, y)$ exists, report Unbounded; otherwise return $x$.
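The combined system P, D, $c^T x = b^T y$ can be posed as a pure feasibility problem with a zero objective. A sketch of this reduction (scipy assumed available; its HiGHS backend stands in here for a generic linear-inequalities solver, and the data is the slides' example):

```python
import numpy as np
from scipy.optimize import linprog

# LP in standard form: max c^T x, Ax <= b, x >= 0.
A = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, 2.0], [3.0, 4.0, 2.0]])
b = np.array([5.0, 11.0, 8.0])
c = np.array([5.0, 4.0, 3.0])
m, n = A.shape

# Unknowns stacked as (x, y). Inequalities: Ax <= b and -A^T y <= -c.
A_ub = np.block([[A, np.zeros((m, m))],
                 [np.zeros((n, n)), -A.T]])
b_ub = np.concatenate([b, -c])
# Equality c^T x - b^T y = 0: by weak duality it forces both points optimal.
A_eq = np.concatenate([c, -b]).reshape(1, -1)

res = linprog(c=np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=[0.0], bounds=(0, None), method="highs")
assert res.success                      # feasible, so P is not unbounded
x, y = res.x[:n], res.x[n:]
assert abs(c @ x - 13.0) < 1e-6        # optimal value of this instance
```

Note the zero objective vector: the LP solver is used only as a feasibility oracle, exactly as the LI algorithm is in the reduction.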

The Ellipsoid Method

The ellipsoid algorithm (1979) for Linear Programming works by using the reduction and solving the Linear Inequalities Problem. The ellipsoid algorithm was the first polynomial-time algorithm for Linear Programming, but it is impractical.

Dual simplex algorithm

By the proof of the duality theorem, doing the simplex algorithm on the primal program can also give us the solution to the dual program. Similarly, doing the simplex algorithm on the dual program can also give us the solution to the primal program: the dual simplex algorithm. Can be useful, as the empirical running time of the simplex algorithm is roughly $\Theta(m \log n)$.

Bill matching game

Max and Miney play the following game: They each, in secret, hide either a one-dollar bill or a hundred-dollar bill (of their own money). Then the bills are revealed. If they differ, Max gets both. If they are the same, Miney gets both. Would you rather be Max or Miney?

Many (but not all) people choose to be Max: he only has to bet 1 dollar to possibly win 100 dollars. On the other hand, if he chooses the strategy of betting 1 dollar, a simple counter-strategy for Miney is to also bet 1 dollar. So who has the advantage, and how should the game be played? How Max should play the game depends on how Miney is going to play the game. But suppose Max has no clue about that!

Cautious strategy (for Max)

Don't try to second-guess what Miney might do. Play the game so that the loss is as small as possible assuming worst-case behavior of Miney (with negative loss = gain). This leads Max to bet 1 dollar... The cautious strategy for Miney is also to bet 1 dollar... but then Max loses 1 dollar every time he plays with Miney!

Randomized cautious strategy (for Max)

Play the game in a randomized way so that the expected loss is as small as possible, assuming worst-case behavior of Miney. A randomized strategy is also called a mixed strategy. A deterministic strategy is also called a pure strategy. Strategy: bet 1 dollar with probability $p$ and 100 dollars with probability $1 - p$. How to choose $p$?

Randomized cautious strategy (for Max)

If Miney bets 1 dollar, Max's expected gain is $g = p \cdot (-1) + (1 - p) \cdot 1 = 1 - 2p$.
If Miney bets 100 dollars, Max's expected gain is $g = p \cdot 100 + (1 - p) \cdot (-100) = 200p - 100$.
Choose $p$ so that $g$ is maximized, where $g = \min(1 - 2p,\; 200p - 100)$.
Solution: $p = \frac{1}{2}$, $g = 0$.

Randomized cautious strategy (for Max)

Finding Max's cautious mixed strategy can be formulated as a linear program. Find $(p, g)$ maximizing $g$ so that
$p \ge 0$
$p \le 1$
$g \le 1 - 2p$
$g \le 200p - 100$
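This two-variable LP is small enough to hand to any LP solver. A sketch using scipy (assumed available; `linprog` minimizes, so we minimize $-g$):

```python
import numpy as np
from scipy.optimize import linprog

# Max's LP: variables (p, g), maximize g subject to
#   g <= 1 - 2p,  g <= 200p - 100,  0 <= p <= 1,  g free.
A_ub = np.array([[2.0, 1.0],       #   2p + g <= 1
                 [-200.0, 1.0]])   # -200p + g <= -100
b_ub = np.array([1.0, -100.0])
res = linprog(c=[0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (None, None)], method="highs")
p, g = res.x
# Matches the slide: bet each bill with probability 1/2, guaranteed gain 0.
assert abs(p - 0.5) < 1e-6 and abs(g) < 1e-6
```

Rewriting $g \le 1 - 2p$ as $2p + g \le 1$ (and similarly for the other constraint) is just putting the LP into the solver's $A_{ub}\,x \le b_{ub}$ form.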

Are cautious strategies too cautious?

In real life, if you are very timid you may get exploited by bullies. Right? Suppose both players play cautiously. Suppose Max learns for sure that Miney will play cautiously. Can he then exploit her by deviating from his cautious strategy?

Randomized cautious strategy (for Miney)

Play the game in a randomized way so that the expected loss is as small as possible, assuming worst-case behavior of Max. Bet 1 dollar with probability $q$ and 100 dollars with probability $1 - q$.
If Max bets 1 dollar, Miney's expected loss is $l = q \cdot (-1) + (1 - q) \cdot 100 = 100 - 101q$.
If Max bets 100 dollars, Miney's expected loss is $l = q \cdot 1 + (1 - q) \cdot (-100) = 101q - 100$.
Choose $q$ so that $l$ is minimized, where $l = \max(100 - 101q,\; 101q - 100)$.
Solution: $q = \frac{100}{101}$, $l = 0$.

Finding Miney's cautious mixed strategy can be formulated as a linear program. Find $(q, l)$ minimizing $l$ so that
$q \ge 0$
$q \le 1$
$l \ge 100 - 101q$
$l \ge 101q - 100$
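Miney's LP can be solved the same way as Max's. A sketch using scipy (assumed available; the $\ge$ constraints are negated into the solver's $\le$ form):

```python
import numpy as np
from scipy.optimize import linprog

# Miney's LP: variables (q, l), minimize l subject to
#   l >= 100 - 101q,  l >= 101q - 100,  0 <= q <= 1,  l free.
A_ub = np.array([[-101.0, -1.0],   # -101q - l <= -100
                 [101.0, -1.0]])   #  101q - l <=  100
b_ub = np.array([-100.0, 100.0])
res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (None, None)], method="highs")
q, l = res.x
# Matches the slide: q = 100/101, guaranteed expected loss 0.
assert abs(q - 100/101) < 1e-6 and abs(l) < 1e-6
```

That Max's guaranteed value and Miney's guaranteed value coincide at 0 is exactly LP duality at work, as the next slide observes.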

0 = 0

Max's guaranteed lower bound on his expected gain when he plays his cautious mixed strategy is equal to Miney's guaranteed upper bound on her expected loss when she plays her cautious mixed strategy. Thus Max cannot exploit Miney if he learns that she will play the cautious strategy. A priori, this is not obvious; intuitively, the cautious strategies are very timid and pessimistic. Since the bounds are the same, both Max and Miney can announce their strategies before playing without making the other player wish to change strategy as a result. The two cautious strategies together are called a Nash equilibrium for the game.