ORIE 6300 Mathematical Programming I August 25, Lecture 2

ORIE 6300 Mathematical Programming I                        August 25, 2016
Lecture 2
Lecturer: Damek Davis                                    Scribe: Johan Bjorck

Last time, we considered the dual of linear programs in our basic form: max(c^T x : Ax ≤ b). We also considered an algebraic method for certifying optimality of a solution: to help find the optimal solution, we generate upper bounds on the value of feasible solutions by taking nonnegative linear combinations of the rows of the constraints that add up to c. That is, we find y ≥ 0 such that A^T y = c; any such y gives the upper bound c^T x = y^T Ax ≤ y^T b. The best upper bound of this form is given by the following LP:

    min(y^T b : A^T y = c, y ≥ 0).

We call this the dual LP. We also mentioned two theorems regarding the dual LP: the weak duality theorem, which states that the value of the primal LP is at most the value of the dual, and the strong duality theorem, which states that if there exists a feasible solution to either the primal or the dual LP, then the two LPs have the same value.

Today we will consider linear programs in other standard forms and derive rules for taking the duals of these linear programs. See Chapter 5 of Chvátal for a good summary of these rules.

We will consider the standard form of linear programs, which has a precise, technical definition. Most algorithms used for solving linear programs assume that the input is given in standard form, and the formulation is not restrictive in any representational sense. The standard form is defined as

    min(c̄^T x : x ≥ 0, Āx = b̄).

This formulation is the dual of our basic form; we have introduced the bars on each of Ā, b̄, and c̄ to distinguish this program from the dual that we derived last class, min(y^T b : y ≥ 0, A^T y = c). Our new primal is equivalent to that dual under the translation

    b̄ = c,    c̄ = b,    Ā = A^T.

We saw last time in the bakery example that duality is helpful for finding solutions to linear programs.
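As a quick sanity check of this bounding idea, here is a tiny made-up instance (ours, not the bakery LP from last time): any y ≥ 0 with A^T y = c yields the upper bound y^T b on every feasible value, and at an optimal primal-dual pair the bound is tight.

```python
# Hypothetical instance (ours): max x1 + x2  s.t.  x1 <= 1, x2 <= 1, x1 + x2 <= 1.5.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [1.0, 1.0, 1.5]
c = [1.0, 1.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# A dual-feasible certificate: y >= 0 with A^T y = c (here y = (0, 0, 1)).
y = [0.0, 0.0, 1.0]
assert all(yi >= 0.0 for yi in y)
At_y = [sum(A[i][j] * y[i] for i in range(3)) for j in range(2)]
assert At_y == c                  # the chosen rows combine to exactly c

upper = dot(y, b)                 # y^T b = 1.5 bounds every feasible value

# Any feasible x satisfies c^T x <= y^T b; at x = (0.5, 1.0) the bound is
# tight, so x and y are both optimal.
x = [0.5, 1.0]
assert all(dot(A[i], x) <= b[i] for i in range(3))   # Ax <= b
assert dot(c, x) == upper == 1.5
```

Weak duality gives c^T x ≤ y^T b for every such pair; equality certifies optimality of both, which is the content of strong duality.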
There are two ways to think about taking the dual of the standard form of the LP:

1. By reducing to the form considered in the last lecture, max(c^T x : Ax ≤ b).
2. By finding the best bound on the primal.

First, we will rewrite the linear program min(c̄^T x : x ≥ 0, Āx = b̄) in our basic form (namely, as a maximization problem with inequality constraints) and take its dual using the rules we derived last time. This leads to a messy dual, and we will work on simplifying what we obtain. To understand better what the dual is, we will also derive it using the idea that the dual is a way to derive sharp bounds on the primal.

We start by converting our LP into our basic maximization form, which uses inequalities only. First we need to change min to max: instead of minimizing c̄^T x, we can maximize −c̄^T x. This only switches the sign of the optimal value (but not its absolute value, nor the optimal solution); we will switch the sign back later. Furthermore, our basic form consists only of inequalities, so we consider:

    max  −c̄^T x
    s.t.  Āx ≤ b̄
         −Āx ≤ −b̄
         −Ix ≤ 0,

where I is the identity matrix. To get the dual, we introduce one dual variable for each row (i.e., for each inequality constraint) of the primal, so we have one dual variable corresponding to each row of the stacked matrix above. We have three blocks of rows, so we will use three different names for the three blocks of variables: y = (s, t, w), where the variables s, t, and w correspond to the blocks of rows Ā, −Ā, and −I, respectively. So, using the dual formulation from last lecture, we have the following dual LP:

    min  y^T (b̄, −b̄, 0)
    s.t. (Ā; −Ā; −I)^T y = −c̄
         y ≥ 0,

where (Ā; −Ā; −I) denotes the vertical stacking of the three blocks. Since y = (s, t, w), this can be rewritten as

    min  s^T b̄ − t^T b̄
    s.t. Ā^T s − Ā^T t − Iw = −c̄
         s, t, w ≥ 0.

We will simplify this. First, recall that when creating the basic-form equivalent we switched the sign of the objective, so let's switch it back now: instead of minimizing s^T b̄ − t^T b̄, we will maximize t^T b̄ − s^T b̄. Further, notice that s and t never occur by themselves anywhere, only as t − s, so we can use a new variable z = t − s. Both s and t had to be nonnegative, but t − s can be of any sign and any value. The objective function then reads max z^T b̄, and the constraints become −Ā^T z − Iw = −c̄. Note that this change gives an equivalent LP: given any feasible solution using t and s, we get a feasible solution of the same value by setting z = t − s; and similarly, if we have a feasible solution for the LP using the variable z, we can get a feasible solution to the previous LP of the same value by setting t_i = z_i and s_i = 0 if z_i ≥ 0, and s_i = −z_i and t_i = 0 if z_i < 0. The LP looks better if we multiply the constraints by −1. What we have so far is

    max  z^T b̄
    s.t. Ā^T z + Iw = c̄
         w ≥ 0.

We will simplify further by eliminating the variables w. Our equations can be rewritten to say that Iw = c̄ − Ā^T z. The only constraint we have on w is that it is nonnegative, and there is no w in the objective function, so we can write the constraint as c̄ − Ā^T z ≥ 0 and drop the variables w entirely. The final form that we obtain for the dual is

    max  z^T b̄
    s.t. Ā^T z ≤ c̄.

Recall that we use Ā, b̄, and c̄ to denote A^T, c, and b, respectively. Substituting these back into the dual LP above, we obtain an LP that is exactly the same as the primal LP that we started from last time:

    max  z^T c
    s.t. Az ≤ b.

We have proved the following theorem.

Theorem 1  The dual of the dual LP is the same as the original primal LP.

Next we will work on understanding the dual by creating it as a bound for the primal. Here is the bounding idea via an example:

    min  x_1 + x_2
    s.t. 2x_2 + x_3 = 7
         x_1 − x_3 = 2
         x_1, x_2, x_3 ≥ 0.

If we add these two equations, we get that x_1 + 2x_2 = 9 for all feasible solutions (and, up to scaling, this is the only combination that eliminates x_3 from the result). The objective function is to minimize x_1 + x_2, not x_1 + 2x_2. An equivalent form of what we have is that (1/2)x_1 + x_2 = 9/2 for any primal-feasible vector x. From this form, we see that x_1 + x_2 must be at least 9/2, since x_1 is nonnegative. Thus we get a lower bound on the LP by taking linear combinations of the equality constraints,

      y_1 (ā_1 x = b̄_1)
    + y_2 (ā_2 x = b̄_2)
      ...
    + y_n (ā_n x = b̄_n),

where ā_i denotes the i-th row of Ā, such that the coefficient (the "number of copies") of each x_j in the combination is at most c̄_j. This requires:

    y_1 ā_{1j} + y_2 ā_{2j} + ... + y_n ā_{nj} ≤ c̄_j.

The bound we obtain is y^T b̄. Thus, for any feasible x, we have c̄^T x ≥ y^T b̄ whenever Ā^T y ≤ c̄. Hence, the best bound of this form is given by the following linear program:

    max  y^T b̄
    s.t. Ā^T y ≤ c̄.

This is once again the dual of the original primal LP.

So far, we saw how to derive the dual for LPs in two forms: max(c^T x : Ax ≤ b) and min(c^T x : x ≥ 0, Ax = b). The idea of understanding the dual as a bound for the primal leads to very simple rules for creating dual linear programs without having to convert to any normal form. Here is a set of rules for maximization-type problems. For simplicity, we will only have = and ≤ type constraints; I suggest that you take all ≥ type constraints in a maximization problem and multiply them by −1 to get ≤ type constraints before taking the dual. This form has some nonnegative variables and some unconstrained ones, and some equations and some inequalities as constraints:

    max  c^T x + c̄^T x̄
    s.t. Ax + Āx̄ ≤ b
         Âx + Ãx̄ = b̂                (primal LP)
         x ≥ 0   (x̄ unconstrained in sign).

The dual has nonnegative variables y corresponding to the inequality constraints and variables ȳ unconstrained in sign corresponding to the equations; the nonnegative variables x give rise to inequality constraints in the dual, whereas the unconstrained variables x̄ give rise to equations in the dual. The dual LP is:

    min  y^T b + ȳ^T b̂
    s.t. A^T y + Â^T ȳ ≥ c
         Ā^T y + Ã^T ȳ = c̄           (dual LP)
         y ≥ 0   (ȳ unconstrained in sign).

In summary:

    Primal (Max)               Dual (Min)
    ≤ constraint               variable ≥ 0
    = constraint               variable unconstrained
    variable ≥ 0               ≥ constraint
    variable unconstrained     = constraint
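The table can be applied mechanically. Below is a small sketch (our own helper, with our own naming conventions, not from the notes) that transposes the data of a maximization LP and assigns dual senses and signs by reading the table row by row.

```python
def dual_of_max(c, A, b, con_sense, var_sign):
    """Dual of a maximization LP, per the table above.

    con_sense[i] in {"<=", "="} is the sense of primal constraint i;
    var_sign[j] in {">=0", "free"} is the sign restriction on x_j.
    Returns the dual minimization LP's data: objective b, matrix A^T,
    right-hand side c, dual constraint senses, and dual variable signs.
    """
    m, n = len(A), len(c)
    # <= constraint -> dual variable >= 0;  = constraint -> free dual variable.
    y_sign = [">=0" if s == "<=" else "free" for s in con_sense]
    # x_j >= 0 -> dual constraint >=;  free x_j -> dual equation.
    y_sense = [">=" if s == ">=0" else "=" for s in var_sign]
    At = [[A[i][j] for i in range(m)] for j in range(n)]
    return b, At, c, y_sense, y_sign

# The basic form max(c^T x : Ax <= b) with free x should give the dual
# min(y^T b : A^T y = c, y >= 0) from last lecture.
obj, At, rhs, senses, signs = dual_of_max(
    [3.0, 5.0], [[1.0, 2.0], [4.0, 0.0]], [6.0, 8.0],
    con_sense=["<=", "<="], var_sign=["free", "free"])
assert senses == ["=", "="] and signs == [">=0", ">=0"]
assert At == [[1.0, 4.0], [2.0, 0.0]]
```

By Theorem 1, running the same table in the reverse direction on the output recovers the data of the original primal.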

Recall that the dual of the dual is the primal, so for an LP in minimization form we can use the forms above, read in the reverse direction, to take the dual.

As an aside, solving systems of linear inequalities is essentially as difficult as solving linear programs. The earlier discussion suggests an algorithm for solving max(c^T x : Ax ≤ b) given only the ability to decide feasibility of systems of linear inequalities, knowing only that max(c^T x : Ax ≤ b) ≤ B_U^0. Assuming that we also have a lower bound B_L^0 = c^T x_0 for some x_0 in the feasible region, we can follow the scheme below:

    for k = 0, 1, 2, ..., K − 1 do
        if there exists x such that c^T x ≥ (B_L^k + B_U^k)/2 and Ax ≤ b then
            B_L^{k+1} = c^T x;  B_U^{k+1} = B_U^k;  x_{k+1} = x;
        else
            B_L^{k+1} = B_L^k;  B_U^{k+1} = (B_L^k + B_U^k)/2;  x_{k+1} = x_k;
        end if
    end for
    return x_K

As an exercise, the reader is welcome to prove that 0 ≤ c^T x* − c^T x_K ≤ (B_U^0 − B_L^0)/2^K, where x* denotes an optimal solution.
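The scheme above can be sketched in a few lines, assuming we are handed a feasibility oracle: given a threshold t, it returns a feasible x with c^T x ≥ t, or None if no such x exists. The toy instance and oracle below (max x subject to x ≤ 5) are our own illustration, not from the notes.

```python
def bisection_solve(oracle, obj, x0, B_L, B_U, K):
    """Bisection scheme from the notes: oracle(t) returns a feasible x with
    obj(x) >= t, or None if none exists. Assumes x0 is feasible with
    obj(x0) == B_L, and that B_U is a valid upper bound on the optimum."""
    x = x0
    for _ in range(K):
        mid = (B_L + B_U) / 2.0
        cand = oracle(mid)
        if cand is not None:
            x, B_L = cand, obj(cand)   # improve the incumbent; keep B_U
        else:
            B_U = mid                  # midpoint is certified as an upper bound
    return x, B_L, B_U

# Toy instance (ours): max x subject to x <= 5, so the optimum is 5.
def oracle(t):
    return t if t <= 5.0 else None     # a feasible point of value exactly t

x_K, lo, hi = bisection_solve(oracle, lambda v: v, 0.0, 0.0, 8.0, K=20)
# The gap halves each round: hi - lo <= (8 - 0)/2**20, and 5 lies in [lo, hi].
assert lo <= 5.0 <= hi
assert hi - lo <= 8.0 / 2 ** 20 + 1e-12
assert 5.0 - x_K <= 8.0 / 2 ** 20 + 1e-12
```

Each round halves the interval [B_L, B_U], which is exactly the (B_U^0 − B_L^0)/2^K guarantee in the exercise above.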