Discrete Optimization


Prof. Friedrich Eisenbrand
Martin Niemeier
Due Date: April 15, 2010
Discussions: March 25, April 01

Discrete Optimization, Spring 2010, Solutions 3

You can hand in written solutions for up to two of the exercises marked with (*) or ( ) to obtain bonus points. The due date for this is April 15, 2010, before the exercise session starts. Math students are restricted to exercises marked with (*). Non-math students can choose between (*) and ( ) exercises.

Exercise 1 (*)

Prove the following statement: Let A ∈ R^{m×n} be a matrix and b ∈ R^m be a vector. The system Ax = b, x ≥ 0 has a solution if and only if for all λ ∈ R^m with λ^T A ≥ 0 one has λ^T b ≥ 0.

Hint: Use duality theory! The statement we want to prove is known as Farkas' Lemma.

Solution: Let c = (0, ..., 0)^T ∈ R^n, and consider the linear program

min{c^T x : Ax = b, x ≥ 0}    (1)

and its dual max{b^T y : A^T y ≤ 0}. Setting λ := −y, we get the equivalent LP

max{−b^T λ : A^T λ ≥ 0}.    (2)

Assume that Ax = b, x ≥ 0 has a solution. Thus (1) is feasible. Since c = 0, it is also bounded, as every feasible solution has the same objective value. By duality theory this implies that (2) is feasible and bounded, and

0 = min{c^T x : Ax = b, x ≥ 0} = max{−b^T λ : A^T λ ≥ 0}.

Hence for each λ with A^T λ ≥ 0 we have −b^T λ ≤ 0, i.e. λ^T b ≥ 0. This shows the first part of the claim.

For the converse, assume that λ^T b ≥ 0 for all λ with λ^T A ≥ 0. Observe that the linear program (2) is feasible, as λ = 0 is a solution. Since the objective value −b^T λ is at most 0 for all feasible solutions, the linear program (2) is bounded. By duality theory, its dual (1) is then feasible, which asserts the claim.

Exercise 2
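To make Farkas' Lemma concrete, here is a small self-contained check on a toy system of our own (the matrix, right-hand side and certificate below are illustrative assumptions, not part of the exercise): the system Ax = b, x ≥ 0 is infeasible, and a vector λ with λ^T A ≥ 0 and λ^T b < 0 witnesses this.

```python
from fractions import Fraction as F

# Toy system A x = b, x >= 0: both rows of A are equal but the
# right-hand sides differ, so no x can satisfy both equations.
A = [[F(1), F(1)],
     [F(1), F(1)]]
b = [F(0), F(1)]

# Farkas certificate of infeasibility: lam with lam^T A >= 0 and lam^T b < 0.
lam = [F(1), F(-1)]

lamA = [sum(lam[i] * A[i][j] for i in range(2)) for j in range(2)]
lamb = sum(lam[i] * b[i] for i in range(2))

assert all(v >= 0 for v in lamA)   # lam^T A >= 0 componentwise
assert lamb < 0                    # lam^T b = -1 < 0: system is infeasible
```

By the lemma, such a λ can exist only if Ax = b, x ≥ 0 has no solution, so the two asserts together certify infeasibility.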

1. Consider the linear program max{c^T x : Ax ≤ b} with c ∈ R^n, A ∈ R^{m×n} and b ∈ R^m. The dual of the linear program is min{b^T y : A^T y = c, y ≥ 0}. Let x and y be feasible solutions to the primal and dual LP, respectively. Prove the following statement: The vectors x and y are optimal solutions to their respective LPs if and only if for each i = 1, ..., m we have a_i^T x = b_i or y_i = 0.

Hint: Consider the inner product (b − Ax)^T y and show that it is 0 if and only if x and y are optimal. Why is this sufficient to show?

2. Consider the following linear program:

max x_1 + x_2
subject to 2x_1 + x_2 ≤ 6
x_1 + 2x_2 ≤ 8
3x_1 + 4x_2 ≤ 22
x_1 + 5x_2 ≤ 23

Show that (4/3, 10/3) is an optimal solution by computing an optimal solution of the dual using the first part of this exercise.

Solution:

1. The statement we want to prove is the so-called complementary slackness theorem. Consider the inner product

(b − Ax)^T y.    (3)

As x is a feasible solution to the primal LP, we have b − Ax ≥ 0. Because y is a feasible solution to the dual LP, we have y ≥ 0. Thus (3) is 0 if and only if for each i = 1, ..., m we have b_i − a_i^T x = 0 or y_i = 0. Hence it is sufficient to show that (3) is 0 if and only if x and y are optimal solutions. Observe that

(b − Ax)^T y = b^T y − x^T A^T y = b^T y − x^T c.

The second equality uses the fact that y is a feasible solution to the dual LP. Using duality theory, we know that x and y are optimal if and only if b^T y = c^T x. In other words, (3) is 0 if and only if x and y are optimal solutions.

2. The dual LP is the following:

min 6y_1 + 8y_2 + 22y_3 + 23y_4
subject to 2y_1 + y_2 + 3y_3 + y_4 = 1
y_1 + 2y_2 + 4y_3 + 5y_4 = 1
y_1, y_2, y_3, y_4 ≥ 0

Observe that the solution (4/3, 10/3) satisfies the first two constraints of the primal LP with equality, while the last two constraints are satisfied with strict inequality. Applying the complementary slackness theorem to this situation, we get that if x is optimal, then every optimal solution y of the dual has y_3 = y_4 = 0. Thus the two constraints of the dual simplify to

2y_1 + y_2 = 1
y_1 + 2y_2 = 1.

The system has full rank, thus there is a unique solution: y_1 = y_2 = 1/3 ≥ 0. Thus y = (1/3, 1/3, 0, 0) is feasible. The complementary slackness theorem tells us that x and y are optimal for their respective LPs. We check this by computing the objective values: 6y_1 + 8y_2 + 22y_3 + 23y_4 = 14/3 = x_1 + x_2. Weak duality asserts that x and y are optimal.

Exercise 3

Consider the linear program max{c^T x : Ax ≤ b} with c ∈ R^n, A ∈ R^{m×n} and b ∈ R^m. The dual is min{b^T y : A^T y = c, y ≥ 0}. There are nine situations for the status of the primal and dual LPs: each of the two programs can have a finite optimum, be unbounded, or be infeasible. Decide which of these situations are possible and which are impossible. If a situation is possible, give an example; otherwise give a short argument why it is impossible.
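The primal-dual pair (4/3, 10/3) and (1/3, 1/3, 0, 0) from Exercise 2 can also be verified mechanically with exact rational arithmetic. The following sketch (using only Python's standard library) checks primal and dual feasibility, complementary slackness, and the equal objective values:

```python
from fractions import Fraction as F

# Data of Exercise 2: primal max{c^T x : Ax <= b}, dual min{b^T y : A^T y = c, y >= 0}.
A = [[2, 1], [1, 2], [3, 4], [1, 5]]
b = [6, 8, 22, 23]
c = [1, 1]
x = [F(4, 3), F(10, 3)]                # claimed primal optimum
y = [F(1, 3), F(1, 3), F(0), F(0)]     # claimed dual optimum

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(4)]
assert all(Ax[i] <= b[i] for i in range(4))                # primal feasibility
ATy = [sum(A[i][j] * y[i] for i in range(4)) for j in range(2)]
assert ATy == c and all(v >= 0 for v in y)                 # dual feasibility
# complementary slackness: (b - Ax)^T y = 0
assert sum((b[i] - Ax[i]) * y[i] for i in range(4)) == 0
# equal objective values: c^T x = b^T y = 14/3
assert sum(c[j] * x[j] for j in range(2)) == sum(b[i] * y[i] for i in range(4)) == F(14, 3)
```

Exact fractions avoid the rounding issues that floating point would introduce for values like 4/3.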

Solution:

Primal and dual both feasible and bounded is possible: an example is c = b = (0) and A = (0).

Primal feasible and bounded, dual unbounded is impossible: assume that Ax ≤ b has a solution x. Then by weak duality, c^T x is a lower bound on the objective value of every feasible solution of the dual, in contradiction to the fact that the dual is unbounded.

Primal feasible and bounded, dual infeasible is impossible: if the primal has an optimal solution, the duality theorem tells us that the dual has an optimal solution as well. In particular the dual is feasible.

Primal unbounded, dual feasible and bounded is impossible: assume that A^T y = c, y ≥ 0 has a solution y. Then by weak duality, b^T y is an upper bound on the objective value of every feasible solution of the primal, in contradiction to the fact that the primal is unbounded.

Primal unbounded, dual unbounded is impossible: as seen above, if the primal is unbounded, then the dual is infeasible.

Primal unbounded, dual infeasible is possible: an example is c = (1), b = (0) and A = (0).

Primal infeasible, dual feasible and bounded is impossible: by the strong duality theorem, if the dual is feasible and bounded, so is the primal.

Primal infeasible, dual unbounded is possible: an example is c = (0), b = (−1) and A = (0).

Primal and dual both infeasible is possible: an example is c = (1), b = (−1) and A = (0).

Exercise 4

Suppose you are given an oracle algorithm which, for a given polyhedron P = {x̃ ∈ R^ñ : Ãx̃ ≤ b̃}, returns a feasible solution or asserts that there is none. Show that using a single call of this oracle one can obtain an optimal solution of the LP max{c^T x : x ∈ R^n, Ax ≤ b}, assuming that the LP is feasible and bounded.

Hint: Use duality theory!

Solution: The LP is feasible and bounded, thus an optimal solution must exist. Strong duality tells us that the dual min{b^T y : A^T y = c, y ≥ 0} is feasible and bounded. For optimal solutions x of the primal and y of the dual we have b^T y = c^T x.

Thus every point (x, y) of the polyhedron

c^T x = b^T y
Ax ≤ b
A^T y = c
y ≥ 0

is optimal. Hence with one oracle call for the polyhedron above we obtain an optimal solution of the LP.

Exercise 5 ( )

A directed graph D = (V, A) is a tuple consisting of a set of vertices V and a set of arcs A ⊆ V × V. Given an arc a = (u, v) ∈ A, the vertex u is called the tail of a and v is called the head of a. For a vertex v ∈ V, the set of arcs whose tail is v is denoted δ^out(v) := {a ∈ A : a = (v, u) for some u ∈ V}. Analogously, the set of arcs whose head is v is denoted δ^in(v) := {a ∈ A : a = (u, v) for some u ∈ V}.

Let s, t ∈ V and let c : A → N_0 be a capacity function. A function f : A → N_0 is an s-t-flow if for every vertex v ∈ V \ {s, t} we have flow conservation, i.e.

Σ_{a ∈ δ^in(v)} f(a) − Σ_{a ∈ δ^out(v)} f(a) = 0.

Moreover, 0 ≤ f(a) ≤ c(a) must hold for each a ∈ A. The value of a flow f is defined as Σ_{a ∈ δ^in(t)} f(a) − Σ_{a ∈ δ^out(t)} f(a).

Formulate the problem of finding an s-t-flow of maximum value as a linear program.

Hint: Use the so-called node-arc incidence matrix. It has a row for each vertex and a column for each arc. Each column consists of zeros, except for the entries corresponding to the head and the tail of the arc: the head gets a +1 entry and the tail gets a −1 entry.

Solution: Let A be the node-arc incidence matrix of the graph D, with the rows of s and t removed. We claim that the max-flow LP is:

max Σ_{a ∈ δ^in(t)} x_a − Σ_{a ∈ δ^out(t)} x_a    (4)
subject to Ax = 0    (5)
x ≤ c    (6)
x ≥ 0.    (7)
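As a sanity check of the formulation, the LP data (4)-(7) can be assembled for a tiny digraph of our own choosing (three vertices s, a, t; this example is not from the exercise sheet), and a maximum flow can be verified against the constraints:

```python
# Toy digraph (our own example): arcs s->a, a->t, s->t with capacities 2, 2, 1.
# The maximum s-t flow has value 3 (2 units through a, 1 unit direct).
arcs = [("s", "a"), ("a", "t"), ("s", "t")]
cap = [2, 2, 1]
inner = ["a"]  # vertices other than s and t (rows kept in the incidence matrix)

# node-arc incidence rows for the inner vertices: +1 if the vertex is the
# head of the arc, -1 if it is the tail, 0 otherwise
A = [[(1 if v == h else 0) - (1 if v == u else 0) for (u, h) in arcs]
     for v in inner]

x = [2, 2, 1]  # candidate flow

# constraint (5): flow conservation at every inner vertex
assert all(sum(A[r][j] * x[j] for j in range(len(arcs))) == 0
           for r in range(len(A)))
# constraints (6) and (7): capacities and nonnegativity
assert all(0 <= x[j] <= cap[j] for j in range(len(arcs)))
# objective (4): net flow into t
value = (sum(x[j] for j, (u, h) in enumerate(arcs) if h == "t")
         - sum(x[j] for j, (u, h) in enumerate(arcs) if u == "t"))
assert value == 3
```

With only one inner vertex the matrix A has a single row, but the construction is the same for any digraph.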

Every solution to this LP is a flow: the vector x encodes the flow on each arc of the graph. The constraints (5) ensure that for every node v ∈ V with v ≠ s, t, the incoming flow matches the outgoing flow, i.e. these constraints ensure flow conservation. The constraints (6) and (7) ensure that the capacity constraints and nonnegativity are satisfied. Finally, the objective function encodes the value of the flow. Hence an optimal solution to the LP corresponds to an optimal flow in the graph.

Exercise 6 (*)

In the lecture you have seen the simplex algorithm for linear programs of the form max{c^T x : Ax ≤ b}. We will now derive a simplex algorithm for linear programs of the form

min{c^T x : Ax = b, x ≥ 0}    (8)

with c ∈ R^n, A ∈ R^{m×n} and b ∈ R^m. Throughout the exercise we assume that (8) is feasible and bounded, and that A has full row rank.

For i ∈ {1, ..., n} we define A_i as the i-th column of A. Moreover, for a subset B ⊆ {1, ..., n}, A_B is the matrix A restricted to the columns corresponding to the elements of B. A subset B ⊆ {1, ..., n} with |B| = m such that A_B has full rank is called a basis. The vector x ∈ R^n defined by x_i := 0 for all i ∉ B and x_B := A_B^{-1} b is called the basic solution associated to B. Note that x is a feasible solution to (8) if and only if x ≥ 0.

Given a basis B and some j ∈ {1, ..., n} with j ∉ B, the vector d ∈ R^n defined by d_j := 1, d_i := 0 for all i ∉ B ∪ {j} and d_B := −A_B^{-1} A_j is called the j-th basic direction.

Assume that the basic solution x associated to B is feasible. Moreover assume that x_B > 0.

1. Show that there is a θ > 0 such that x + θd is a feasible solution. Give a formula to compute the largest θ such that x + θd is feasible.

2. Let θ be maximal. Show that there is a basis B' such that x + θd is the basic solution associated to B'.

3. Let x' = x + θd. Show that the objective value of x' differs from that of x by θ(c_j − c_B^T A_B^{-1} A_j).

4. Consider a basis B with basic feasible solution x. Show that if c^T − c_B^T A_B^{-1} A ≥ 0, then x is an optimal solution to (8).
This suggests the following algorithm: Start with some basis B whose associated basic solution is feasible. Compute the reduced costs c̄ := c^T − c_B^T A_B^{-1} A. If c̄ ≥ 0, we have an optimal solution (see part 4). Otherwise, let j be such that c̄_j < 0. Parts 2 and 3 show that if we change the basis along the j-th basic direction, we find a feasible solution with an improved objective value. We repeat these steps until the vector c̄ is nonnegative. This is the way the simplex algorithm is usually introduced in the literature. The algorithm is exactly the same as the one you learned in the lecture. To get an intuition why this is true, show the following:

5. Given a basis B, show that its associated basic solution is feasible if and only if B is a roof of the LP dual to (8).

6. Consider a basis B and its associated feasible basic solution x. As seen before, B is also a roof of the dual LP. Let y be the vertex of that roof. Show that for any j ∈ {1, ..., n} we have c̄_j < 0 if and only if A_j^T y > c_j.

The dual of (8) is

max{b^T y : A^T y ≤ c}.    (9)

Solution:

1. Observe that Ad = 0 holds. Thus for any θ and x' := x + θd we have Ax' = Ax + θAd = b + 0 = b. Hence x' is a feasible solution if and only if x' ≥ 0. Let J := {i ∈ {1, ..., n} : d_i < 0}. Observe that by construction J ⊆ B. If J = ∅, then x + θd ≥ 0 for all θ ≥ 0; thus x + θd is feasible for every θ ≥ 0 and a maximal θ does not exist. If J ≠ ∅, then x' ≥ 0 if and only if x_i + θd_i ≥ 0 for all i ∈ J. We conclude that

θ := min{−x_i / d_i : i ∈ J}

is maximal.

2. Let i ∈ J be such that θ = −x_i / d_i, and let B' := (B \ {i}) ∪ {j}. A_{B'} has full rank because the old basis column A_i can be written as a linear combination of the columns of A_{B'}: 0 = Ad implies A_i = −(1/d_i) A_{B'} d_{B'}. Let x' := x + θd. Observe that x'_i = x_i + θd_i = 0 and hence b = Ax' = A_{B'} x'_{B'}. This shows that x' is the basic solution associated to B'.

3. Observe that

c^T x' − c^T x = c^T x + θ c^T d − c^T x = θ(c_j d_j + c_B^T d_B) = θ(c_j − c_B^T A_B^{-1} A_j).

4. We prove the claim by giving a dual solution of the same objective value. We have c^T − c_B^T A_B^{-1} A ≥ 0, which we can rewrite as A^T ((A_B^T)^{-1} c_B) ≤ c. Thus the vector y := (A_B^T)^{-1} c_B ∈ R^m is a feasible solution for the dual (9). Observe that

b^T y = b^T (A_B^T)^{-1} c_B = c_B^T (A_B^{-1} b) = c_B^T x_B = c^T x.

The weak duality theorem asserts that x is optimal.

5. Since B is a basis, A_B has full rank, and so does A_B^T. Thus the first two properties of a roof are satisfied. We know from the lecture that B is a roof of (9) if and only if the objective vector b can be written as a conic combination of the rows of A^T indexed by B, i.e. of the columns A_i with i ∈ B. Let x be the basic solution associated to B. Then A_B x_B = b, so x_B gives the coefficients of the unique linear combination of b from the columns A_i, i ∈ B. This linear combination is a conic combination if and only if x_B ≥ 0, i.e. if and only if x is feasible.

6. The vertex y of the roof is given as the (unique) solution to the system

A_B^T y = c_B.

Thus y = (A_B^T)^{-1} c_B. Now fix some j. We have

c̄_j := c_j − c_B^T A_B^{-1} A_j = c_j − A_j^T ((A_B^T)^{-1} c_B) = c_j − A_j^T y.

Thus the dual constraint A_j^T y ≤ c_j is violated, i.e. A_j^T y > c_j, if and only if c̄_j < 0.
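The algorithm derived in this exercise can be sketched in code. The following is a minimal, exact-arithmetic implementation under the exercise's assumptions (feasible and bounded LP, full row rank, nondegenerate bases, and a feasible starting basis supplied by the caller); the test instance at the bottom is our own illustration, not from the sheet:

```python
from fractions import Fraction as F

def solve_square(M, rhs):
    """Solve the square system M z = rhs exactly by Gauss-Jordan elimination."""
    n = len(M)
    aug = [[F(v) for v in row] + [F(rhs[i])] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[r][n] for r in range(n)]

def simplex(A, b, c, basis):
    """min{c^T x : Ax = b, x >= 0}, given a feasible starting basis."""
    m, n = len(A), len(A[0])
    while True:
        AB = [[A[r][j] for j in basis] for r in range(m)]
        xB = solve_square(AB, b)                      # x_B = A_B^{-1} b
        # roof vertex y = (A_B^T)^{-1} c_B, cf. parts 4 and 6
        y = solve_square([list(row) for row in zip(*AB)], [c[j] for j in basis])
        # reduced costs c_bar_j = c_j - A_j^T y
        cbar = [c[j] - sum(A[r][j] * y[r] for r in range(m)) for j in range(n)]
        j = next((k for k in range(n) if cbar[k] < 0), None)
        if j is None:                                 # c_bar >= 0: optimal (part 4)
            x = [F(0)] * n
            for k, idx in enumerate(basis):
                x[idx] = xB[k]
            return x
        # j-th basic direction: d_B = -A_B^{-1} A_j
        dB = [-v for v in solve_square(AB, [A[r][j] for r in range(m)])]
        J = [k for k in range(m) if dB[k] < 0]
        assert J, "LP is unbounded"                   # excluded by assumption
        leave = min(J, key=lambda k: -xB[k] / dB[k])  # ratio test (part 1)
        basis[leave] = j                              # basis exchange (part 2)

# Our own instance: min -x1 - 2x2 s.t. x1+x2+x3 = 4, x1+x4 = 2, x2+x5 = 3,
# x >= 0; the slacks x3, x4, x5 form a feasible starting basis.
A = [[1, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1]]
b = [4, 2, 3]
c = [-1, -2, 0, 0, 0]
x_opt = simplex(A, b, c, [2, 3, 4])
assert x_opt == [1, 3, 0, 1, 0]   # optimal value c^T x = -7
```

Recomputing x_B, y and the reduced costs from scratch in every iteration keeps the sketch close to parts 1-4; a practical implementation would instead update a factorization of A_B.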