Duality

Maximize $c^T x$ for $x \in F = \{x \in (\mathbb{R}_+)^n \mid Ax \le b\}$.

If we guess $x \in F$, we can say that $c^T x$ is a lower bound for the optimal value without executing the simplex algorithm. Can we make similarly easy guesses establishing upper bounds for the optimal value?

Peter Bro Miltersen (University of Aarhus), Optimization, Lecture 9, February 28, 2006
Example

Maximize $5x_1 + 6x_2 + 3x_3$
Subject to
  $5x_1 + 6x_2 + 3x_3 \le 50$
  $4x_1 + 3x_2 + 5x_3 \le 5$
  $x_1 + 2x_2 - x_3 \le 1$
  $x_1, x_2, x_3 \ge 0$
Getting upper bounds

To get an upper bound on the achievable value of any feasible solution, we can look for a nonnegative linear combination of the constraints that upper bounds the objective function. How can we find the best upper bound that can be proved in this way? The best such upper bound can itself be described by a linear program!
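As a sketch of this idea on the example above (using numpy; the multipliers below are hand-picked for illustration, not computed by any algorithm):

```python
import numpy as np

# The example LP: maximize 5x1 + 6x2 + 3x3 subject to
#   5x1 + 6x2 + 3x3 <= 50
#   4x1 + 3x2 + 5x3 <=  5
#    x1 + 2x2 -  x3 <=  1,   x1, x2, x3 >= 0
A = np.array([[5.0, 6.0, 3.0],
              [4.0, 3.0, 5.0],
              [1.0, 2.0, -1.0]])
b = np.array([50.0, 5.0, 1.0])
c = np.array([5.0, 6.0, 3.0])

def proves_bound(y):
    """A nonnegative y proves the bound y.b whenever y^T A dominates c
    componentwise: then c.x <= (y^T A).x <= y.b for every feasible x >= 0."""
    return bool(np.all(y >= 0) and np.all(A.T @ y >= c))

y1 = np.array([1.0, 0.0, 0.0])   # row 1 alone: objective <= 50
y2 = np.array([0.0, 1.0, 2.0])   # row 2 + 2*row 3: objective <= 7
assert proves_bound(y1) and proves_bound(y2)
print(y1 @ b, y2 @ b)            # 50.0 7.0
```

The second combination proves a much stronger bound than the first; searching over all such $y \ge 0$ for the smallest provable bound is exactly the linear program described above.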
Weak Duality Theorem

$A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $c \in \mathbb{R}^{n}$.
Primal program P: Maximize $c^T x$ under $Ax \le b$, $x \ge 0$.
Dual program D: Minimize $b^T y$ under $A^T y \ge c$, $y \ge 0$.

If $x$ is a feasible solution to P and $y$ is a feasible solution to D, then the value $c^T x$ is smaller than or equal to the value $b^T y$.

Proof: $c^T x \le (y^T A)x = y^T (Ax) \le y^T b$.
Some remarks

If P is unbounded, then D is infeasible.
If D is unbounded, then P is infeasible.
The dual program of the dual program is the primal program.
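The "dual of the dual" remark can be checked mechanically. The sketch below (an illustration, not from the slides) rewrites the dual $\min\{b^T y : A^T y \ge c,\ y \ge 0\}$ back in maximization form $\max\{(-b)^T y : (-A^T) y \le -c,\ y \ge 0\}$, so dualizing is the map $(A, b, c) \mapsto (-A^T, -c, -b)$, and applying it twice returns the primal data:

```python
import numpy as np

def dualize(A, b, c):
    # Primal: max { c.x : Ax <= b, x >= 0 }.
    # Dual:   min { b.y : A^T y >= c, y >= 0 }
    #       = max { (-b).y : (-A^T) y <= -c, y >= 0 },
    # i.e. again a program of the primal form, with data (-A^T, -c, -b).
    return -A.T, -c, -b

A = np.array([[2.0, 3.0], [4.0, 1.0]])
b = np.array([5.0, 11.0])
c = np.array([5.0, 4.0])

A2, b2, c2 = dualize(*dualize(A, b, c))
assert np.allclose(A2, A) and np.allclose(b2, b) and np.allclose(c2, c)
```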
(Strong) Duality Theorem

If P has an optimal solution $x^*$, then D has an optimal solution $y^*$ and $c^T x^* = b^T y^*$.

Thus, for any LP maximization instance we can write down an LP minimization instance with the same optimal value.
A similar duality theorem

Value of maximum flow = size of minimum cut.

This duality theorem was crucial for showing correctness of Ford-Fulkerson. The LP duality theorem is similarly tied to the correctness of the simplex algorithm.
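The max-flow/min-cut equality is easy to observe numerically; a small sketch, assuming the networkx library is available (the network below is an arbitrary illustrative example):

```python
import networkx as nx

# A small capacitated network; 'capacity' is the edge attribute that
# networkx's flow routines read.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

flow_value, _flow = nx.maximum_flow(G, "s", "t")
cut_value, _partition = nx.minimum_cut(G, "s", "t")
assert flow_value == cut_value   # the combinatorial duality theorem
```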
How to find the optimal dual solution

Maximize $5x_1 + 4x_2 + 3x_3$
Subject to
  $2x_1 + 3x_2 + x_3 \le 5$
  $4x_1 + x_2 + 2x_3 \le 11$
  $3x_1 + 4x_2 + 2x_3 \le 8$
  $x_1, x_2, x_3 \ge 0$
Final dictionary

Maximize $z$ subject to $x_1, x_2, \ldots, x_6 \ge 0$ and
  $x_3 = 1 + x_2 + 3x_4 - 2x_6$
  $x_1 = 2 - 2x_2 - 2x_4 + x_6$
  $x_5 = 1 + 5x_2 + 2x_4$
  $z = 13 - 3x_2 - x_4 - x_6$

Optimal primal solution: $x_2 = x_4 = x_6 = 0$, $x_3 = 1$, $x_1 = 2$, $x_5 = 1$.
Optimal dual solution: $y_1 = 1$, $y_2 = 0$, $y_3 = 1$.
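As a sanity check, solving this primal and its dual numerically reproduces the dictionary's answer. A sketch using scipy.optimize.linprog, which minimizes, so the primal objective is negated and the dual's $\ge$ constraints are flipped:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0],
              [3.0, 4.0, 2.0]])
b = np.array([5.0, 11.0, 8.0])
c = np.array([5.0, 4.0, 3.0])

# Primal: maximize c.x with Ax <= b, x >= 0 (negate c since linprog minimizes).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
# Dual: minimize b.y with A^T y >= c, y >= 0, written as -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

assert primal.status == 0 and dual.status == 0
assert np.isclose(-primal.fun, 13.0)            # optimal primal value z* = 13
assert np.isclose(dual.fun, 13.0)               # equal, by strong duality
assert np.allclose(dual.x, [1.0, 0.0, 1.0])     # y* read off the dictionary
```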
Proof of strong duality theorem

Primal program P: Maximize $c^T x$ under $Ax \le b$, $x \ge 0$.
Solve P using the two-phase simplex method, obtaining an optimal solution $x^*$.

Last row of last dictionary:
  $z = z^* + \sum_{k=1}^{n+m} \bar{c}_k x_k$

Let $y_i^* = -\bar{c}_{n+i}$, $i = 1, 2, \ldots, m$. Must show:
1. $y^*$ is a feasible solution to D: Minimize $b^T y$ under $A^T y \ge c$, $y \ge 0$.
2. $b^T y^* = z^*$.
Proof of strong duality theorem

Last row of last dictionary:
  $z = z^* + \sum_{k=1}^{n+m} \bar{c}_k x_k$

Last row of first dictionary:
  $z = \sum_{j=1}^{n} c_j x_j$

The two expressions are equivalent for all $x$ in
  $\{x \in \mathbb{R}^{n+m} \mid \forall i \in \{1, \ldots, m\} : x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j\}$.
Proof of strong duality theorem

For all $x \in \{x \in \mathbb{R}^{n+m} \mid \forall i \in \{1, \ldots, m\} : x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j\}$:

  $\sum_{j=1}^{n} c_j x_j = z^* + \sum_{k=1}^{n+m} \bar{c}_k x_k$
  $\quad = z^* + \sum_{j=1}^{n} \bar{c}_j x_j + \sum_{i=1}^{m} (-y_i^*)\left(b_i - \sum_{j=1}^{n} a_{ij} x_j\right)$
  $\quad = \left(z^* - \sum_{i=1}^{m} b_i y_i^*\right) + \sum_{j=1}^{n} \left(\bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*\right) x_j$
Proof of strong duality theorem

For all $x \in \mathbb{R}^{n}$:
  $\left(z^* - \sum_{i=1}^{m} b_i y_i^*\right) + \sum_{j=1}^{n} \left(\bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^* - c_j\right) x_j = 0$

Hence
  $\forall j : c_j = \bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*$
  $z^* = \sum_{i=1}^{m} b_i y_i^*$
Proof of strong duality theorem

$\forall j : c_j = \bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*$, and $\bar{c}_j \le 0$ in the final dictionary, so
  $\forall j : c_j \le \sum_{i=1}^{m} a_{ij} y_i^*$.

Also $y_i^* = -\bar{c}_{n+i} \ge 0$, so $y^*$ is a feasible solution to the dual program.

$z^* = \sum_{i=1}^{m} b_i y_i^* = b^T y^*$, so $y^*$ has the same objective function value as $x^*$.
Consequences of duality theorem (to be seen)

- Software solving LP programs to optimality can easily be checked (by running the software on D as well as P).
- Solving linear programs to optimality is as easy as solving systems of linear inequalities (by solving the system P, D, $c^T x = b^T y$).
- The dual simplex algorithm (solve D rather than P) is sometimes faster than the primal simplex algorithm.
- Optimal mixed strategies in zero-sum games are unexploitable (von Neumann (co-)invented linear programming because of the application to two-player games!).
- Ye's interior-point algorithm works by maintaining solutions to the dual and the primal program simultaneously.
- In general, one may very often gain tremendous insight into a problem phrased as a linear program by looking at its dual.
Consequence

Software solving LP programs to optimality can easily be checked: give the software the dual program as well as the primal program. The solution to the dual program is a certificate that the solution to the primal program is optimal.
Linear Inequalities Problem (LI)

Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$.
Output: If $\{x \in \mathbb{R}^n \mid Ax \le b\} = \emptyset$, report Infeasible; otherwise output $x$ so that $Ax \le b$.
Linear Programming

Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $c \in \mathbb{R}^{n}$.
Output: $x \in F$ maximizing $\langle c, x \rangle$, where $F = \{x \in \mathbb{R}^n \mid Ax \le b\}$. $F$ is called the set of feasible solutions to the program.
Exceptions: If $F = \emptyset$, report Infeasible. If $\forall v \in \mathbb{R}\ \exists x \in F : \langle c, x \rangle > v$, report Unbounded.
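A minimal sketch of this interface, assuming scipy.optimize.linprog (whose result codes 2 and 3 happen to correspond to the two exceptions above):

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(A, b, c):
    """Maximize <c, x> over F = {x : Ax <= b}; return the optimal x,
    or one of the two exception strings from the problem statement."""
    c = np.asarray(c, dtype=float)
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * len(c))
    if res.status == 2:
        return "Infeasible"
    if res.status == 3:
        return "Unbounded"
    return res.x

# maximize x with x <= 1 and -x <= 0: optimum at x = 1
assert np.isclose(solve_lp([[1.0], [-1.0]], [1.0, 0.0], [1.0])[0], 1.0)
# only -x <= 0: x may grow without bound
assert solve_lp([[-1.0]], [0.0], [1.0]) == "Unbounded"
# x <= 0 and -x <= -1 contradict each other
assert solve_lp([[1.0], [-1.0]], [0.0, -1.0], [1.0]) == "Infeasible"
```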
Algorithm for LP, using LI

Convert the LP instance to an instance P in standard form. Check if P is infeasible using the LI algorithm. If so, report Infeasible. If not, construct the dual D of P. Use the LI algorithm to find $x$ and $y$ so that $x$ satisfies the constraints of P, $y$ satisfies the constraints of D, and $c^T x = b^T y$. If no such $(x, y)$ exist, report Unbounded; otherwise return $x$.
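A sketch of the second step of this reduction (illustrative only: it stands in for a true LI oracle by calling linprog with a zero objective, and it assumes P has already been checked feasible):

```python
import numpy as np
from scipy.optimize import linprog

def solve_by_certificate(A, b, c):
    """Assuming P: max{c.x : Ax <= b, x >= 0} is feasible, look for a pair
    (x, y) satisfying P's constraints, D's constraints and c.x = b.y.
    A zero objective makes this a pure linear-inequalities feasibility check."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    m, n = A.shape
    # Stack z = (x, y); x has n entries, y has m.
    A_ub = np.block([[A, np.zeros((m, m))],        # Ax <= b
                     [np.zeros((n, n)), -A.T]])    # A^T y >= c
    b_ub = np.concatenate([b, -c])
    A_eq = np.concatenate([c, -b]).reshape(1, -1)  # c.x - b.y = 0
    res = linprog(np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[0.0], bounds=[(0, None)] * (n + m))
    if res.status != 0:
        return "Unbounded"                         # no certified pair exists
    return res.x[:n], res.x[n:]

x, y = solve_by_certificate([[2, 3, 1], [4, 1, 2], [3, 4, 2]],
                            [5, 11, 8], [5, 4, 3])
```

On the running example the returned $x$ attains the optimal value 13; by weak duality, any $(x, y)$ satisfying all three conditions certifies that $x$ is optimal.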
The Ellipsoid Method

The ellipsoid algorithm (1979) for Linear Programming works by using this reduction and solving the Linear Inequalities Problem. The ellipsoid algorithm was the first polynomial-time algorithm for Linear Programming, but it is impractical.
Dual simplex algorithm

By the proof of the duality theorem, running the simplex algorithm on the primal program also gives us the solution to the dual program. Similarly, running the simplex algorithm on the dual program also gives us the solution to the primal program: the dual simplex algorithm. This can be useful, as the empirical running time of the simplex algorithm is roughly $\Theta(m \log n)$, so when there are many more constraints than variables it may pay to solve the dual, which has only $n$ constraints.
Bill matching game

Max and Miney play the following game: they each, in secret, hide either a one-dollar bill or a hundred-dollar bill (of their own money). Then the bills are revealed. If they differ, Max gets both. If they are the same, Miney gets both.

Would you rather be Max or Miney?
Many (but not all) people choose to be Max: he only has to bet 1 dollar to possibly win 100 dollars. On the other hand, if he chooses the strategy of betting 1 dollar, a simple counter-strategy for Miney is to also bet 1 dollar. So who has the advantage, and how should the game be played? How Max should play the game depends on how Miney is going to play the game. But suppose Max has no clue about that!
Cautious strategy (for Max)

Don't try to second-guess what Miney might do. Play the game so that the loss is as small as possible assuming worst-case behavior of Miney (with negative loss = gain). This leads Max to bet 1 dollar... The cautious strategy for Miney is also to bet 1 dollar... but then Max loses 1 dollar every time he plays with Miney!
Randomized cautious strategy (for Max)

Play the game in a randomized way so that the expected loss is as small as possible, assuming worst-case behavior of Miney. A randomized strategy is also called a mixed strategy; a deterministic strategy is also called a pure strategy.

Strategy: bet 1 dollar with probability $p$ and 100 dollars with probability $1 - p$. How to choose $p$?
Randomized cautious strategy (for Max)

If Miney bets 1 dollar, Max's expected gain is
  $g = p \cdot (-1) + (1 - p) \cdot 1 = 1 - 2p$.
If Miney bets 100 dollars, Max's expected gain is
  $g = p \cdot 100 + (1 - p) \cdot (-100) = 200p - 100$.

Choose $p$ so that $g$ is maximized, where $g = \min(1 - 2p,\ 200p - 100)$.
Solution: $p = \frac{1}{2}$, $g = 0$.
Randomized cautious strategy (for Max)

Finding Max's cautious mixed strategy can be formulated as a linear program. Find $(p, g)$ maximizing $g$ so that
  $p \ge 0$
  $p \le 1$
  $g \le 1 - 2p$
  $g \le 200p - 100$
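This little program can be handed to an LP solver directly; a sketch with scipy.optimize.linprog over the variable vector $(p, g)$, with the two bound constraints rewritten in "$\le$" form:

```python
import numpy as np
from scipy.optimize import linprog

# maximize g  ->  minimize -g over variables (p, g)
#   g <= 1 - 2p      ->    2p + g <=    1
#   g <= 200p - 100  ->  -200p + g <= -100
res = linprog([0.0, -1.0],
              A_ub=[[2.0, 1.0], [-200.0, 1.0]],
              b_ub=[1.0, -100.0],
              bounds=[(0.0, 1.0), (None, None)])  # 0 <= p <= 1, g free
p, g = res.x
assert np.isclose(p, 0.5) and np.isclose(g, 0.0)
```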
Are cautious strategies too cautious?

In real life, if you are very timid, you may get exploited by bullies. Right? Suppose both players play cautiously, and suppose Max learns for sure that Miney will play cautiously. Can he then exploit her by deviating from his cautious strategy?
Randomized cautious strategy (for Miney)

Play the game in a randomized way so that the expected loss is as small as possible, assuming worst-case behavior of Max. Bet 1 dollar with probability $q$ and 100 dollars with probability $1 - q$.

If Max bets 1 dollar, Miney's expected loss is
  $l = q \cdot (-1) + (1 - q) \cdot 100 = 100 - 101q$.
If Max bets 100 dollars, Miney's expected loss is
  $l = q \cdot 1 + (1 - q) \cdot (-100) = 101q - 100$.

Choose $q$ so that $l$ is minimized, where $l = \max(100 - 101q,\ 101q - 100)$.
Solution: $q = \frac{100}{101}$, $l = 0$.
Finding Miney's cautious mixed strategy can be formulated as a linear program. Find $(q, l)$ minimizing $l$ so that
  $q \ge 0$
  $q \le 1$
  $l \ge 100 - 101q$
  $l \ge 101q - 100$
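Miney's program can be solved the same way (a sketch with scipy.optimize.linprog); her optimal guaranteed loss comes out as $l = 0$, matching Max's optimal guaranteed gain $g = 0$:

```python
import numpy as np
from scipy.optimize import linprog

# minimize l over variables (q, l)
#   l >= 100 - 101q  ->  -101q - l <= -100
#   l >= 101q - 100  ->   101q - l <=  100
res = linprog([0.0, 1.0],
              A_ub=[[-101.0, -1.0], [101.0, -1.0]],
              b_ub=[-100.0, 100.0],
              bounds=[(0.0, 1.0), (None, None)])  # 0 <= q <= 1, l free
q, l = res.x
assert np.isclose(q, 100 / 101) and np.isclose(l, 0.0)
```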
0 = 0

Max's guaranteed lower bound on his expected gain when he plays his cautious mixed strategy is equal to Miney's guaranteed upper bound on her expected loss when she plays her cautious mixed strategy. Thus Max cannot exploit Miney if he learns that she will play the cautious strategy. A priori, this is not obvious: intuitively, the cautious strategies are very timid and pessimistic.

Since the bounds are the same, both Max and Miney can announce their strategies before playing without making the other player wish to change strategy as a result. The two cautious strategies together are called a Nash equilibrium for the game.