EE364a Review Session 5

announcements:

homeworks 1 and 2 graded
homework 4 solutions (check solution to additional problem 1)
SCPD phone-in office hours: tuesdays 6-7pm (650-723-1156)

Complementary slackness

consider the (not necessarily convex) optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,   i = 1,...,m
                h_i(x) = 0,   i = 1,...,p.

let x* and (λ*, ν*) be primal and dual optimal points, and suppose strong duality holds (i.e., p* = d*). this means

    f_0(x*) = g(λ*, ν*)
            = inf_x ( f_0(x) + Σ_{i=1}^m λ_i* f_i(x) + Σ_{i=1}^p ν_i* h_i(x) )
            ≤ f_0(x*) + Σ_{i=1}^m λ_i* f_i(x*) + Σ_{i=1}^p ν_i* h_i(x*)
            ≤ f_0(x*).

this result has several important conclusions.

x* minimizes L(x, λ*, ν*) over x (although note that L(x, λ*, ν*) can have other minimizers). we'll see an application of this soon.

another conclusion is that

    Σ_{i=1}^m λ_i* f_i(x*) = 0.

what does this mean?

answer. λ_i* f_i(x*) = 0 for i = 1,...,m, or, equivalently,

    λ_i* > 0  ⟹  f_i(x*) = 0,
    f_i(x*) < 0  ⟹  λ_i* = 0.

Example: minimizing a linear function over a rectangle

let's take a look at a familiar problem,

    minimize    c^T x
    subject to  0 ≤ x ≤ 1.

what is the solution?

answer. if c_i > 0, then x_i* = 0; if c_i < 0, then x_i* = 1; and if c_i = 0, then any 0 ≤ x_i* ≤ 1 is optimal.

we can look at this from another point of view. let us associate dual variables λ with the constraint 0 ≤ x, and dual variables µ with the constraint x ≤ 1. the Lagrangian is

    L(x, λ, µ) = c^T x − λ^T x + µ^T (x − 1).

this is bounded below in x if and only if c = λ − µ. any dual optimal point (λ*, µ*) must satisfy c = λ* − µ*.

suppose c_i > 0; what can we say about λ_i*, x_i*, and µ_i*?

answer. λ_i* = c_i + µ_i* ≥ c_i > 0, so by complementary slackness x_i* = 0, and therefore µ_i* = 0.

suppose c_i < 0; what can we say about λ_i*, x_i*, and µ_i*?

answer. µ_i* = λ_i* − c_i ≥ −c_i > 0, so by complementary slackness x_i* = 1, and therefore λ_i* = 0.

what about when c_i = 0?

answer. then λ_i* − µ_i* = 0. but we cannot have both λ_i* > 0 and µ_i* > 0, since that would imply both x_i* = 0 and x_i* = 1. as a result we must have λ_i* = 0 and µ_i* = 0.

can we write down λ* and µ*?

answer. yes, λ* is the positive part of c, and µ* is the negative part of c, i.e., λ* = max(c, 0), µ* = −min(c, 0) = max(−c, 0) (elementwise).
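
as a quick numerical check of this example, here is a minimal sketch in Python using cvxpy (the package, the data vector c, and the variable names are illustrative assumptions, not part of the notes). the dual values the solver reports for the two box constraints should come out as max(c, 0) and max(−c, 0).

    import numpy as np
    import cvxpy as cp

    c = np.array([1.5, -2.0, 0.0, 0.7, -0.3])
    n = c.size

    x = cp.Variable(n)
    lower = x >= 0          # dual variable lambda
    upper = x <= 1          # dual variable mu
    cp.Problem(cp.Minimize(c @ x), [lower, upper]).solve()

    print(lower.dual_value)  # approx. max(c, 0)  = [1.5, 0, 0, 0.7, 0]
    print(upper.dual_value)  # approx. max(-c, 0) = [0, 2.0, 0, 0, 0.3]
    print(x.value)           # x_i = 0 where c_i > 0, x_i = 1 where c_i < 0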

the Lagrange multipliers λ* and ν* are also a measure of how active a constraint is at the optimum. for example, if we relax the first constraint 0 ≤ x_1 to u_1 ≤ x_1, then

    λ_1* = ∂p*/∂u_1 |_{u_1 = 0}.

the optimal Lagrange multipliers are the local sensitivities of the optimal value with respect to constraint perturbations.
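
the sensitivity interpretation is easy to check numerically; the sketch below (again assuming cvxpy, with made-up data) perturbs the first constraint to u_1 ≤ x_1 and estimates the slope of p* at u_1 = 0 by a finite difference, which should match λ_1* = max(c_1, 0).

    import numpy as np
    import cvxpy as cp

    c = np.array([1.5, -2.0, 0.7])

    def pstar(u1):
        # perturbed problem: minimize c^T x  s.t.  x_1 >= u1, x_2, x_3 >= 0, x <= 1
        x = cp.Variable(3)
        cons = [x[0] >= u1, x[1:] >= 0, x <= 1]
        return cp.Problem(cp.Minimize(c @ x), cons).solve()

    eps = 1e-3
    print((pstar(eps) - pstar(-eps)) / (2 * eps))   # approx. 1.5 = lambda_1*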

Primal and dual optimal points

suppose strong duality holds, i.e., we have p* = d*, and both p* and d* are achieved. when can we find a primal optimal point x* from a dual optimal point (λ*, ν*)?

consider the Lagrangian, evaluated at (λ*, ν*),

    L(x, λ*, ν*) = f_0(x) + Σ_{i=1}^m λ_i* f_i(x) + Σ_{i=1}^p ν_i* h_i(x).

suppose the minimizer of L(x, λ*, ν*), call it x̃, is unique. since x* minimizes L(x, λ*, ν*), we must have x̃ = x*.

Example: least-norm solution over a polyhedron

consider the problem

    minimize    x^T x
    subject to  Ax ≤ b,

with x ∈ R^1000, A ∈ R^(10×1000). we will assume that A is full rank.

the Lagrangian is L(x, λ) = x^T x + λ^T (Ax − b), with minimizer x̃ = −(1/2) A^T λ.

the dual problem can be expressed as

    maximize    −(1/4) λ^T A A^T λ − b^T λ
    subject to  λ ≥ 0.

does strong duality hold?

answer. yes. the constraints are affine, so Slater's condition reduces to feasibility, and because A is full rank we can solve Ax = b, so the problem is feasible.

suppose λ* is optimal for the dual problem. can we find a primal optimal point? and if so, is it unique?

answer. yes: x̃ = −(1/2) A^T λ* minimizes the Lagrangian, and x̃ is unique (the Lagrangian is strictly convex in x), so x* = x̃.

so for the previous example, we can uniquely construct a primal optimal point x* given a dual optimal point λ*.

how many variables does the primal problem have? answer. 1000.

how many variables does the dual problem have? answer. 10.

this is one advantage of solving the dual problem and then constructing the solution of the primal problem from the solution of the dual. we will see later, however, that if we fully exploit the structure of the primal problem, the time taken to solve the two problems is approximately the same.
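
here is a minimal sketch of this construction, assuming cvxpy, with the dimensions shrunk (from 1000 variables and 10 constraints) and with randomly generated A and b, none of which are part of the notes.

    import numpy as np
    import cvxpy as cp

    np.random.seed(0)
    m, n = 10, 100
    A = np.random.randn(m, n)
    b = np.random.randn(m)

    # primal: minimize x^T x  subject to  Ax <= b
    x = cp.Variable(n)
    primal = cp.Problem(cp.Minimize(cp.sum_squares(x)), [A @ x <= b])
    primal.solve()

    # dual: maximize -(1/4) lam^T A A^T lam - b^T lam  subject to  lam >= 0
    lam = cp.Variable(m)
    dual = cp.Problem(cp.Maximize(-0.25 * cp.sum_squares(A.T @ lam) - b @ lam),
                      [lam >= 0])
    dual.solve()

    x_from_dual = -0.5 * A.T @ lam.value           # x* = -(1/2) A^T lambda*
    print(np.linalg.norm(x.value - x_from_dual))   # small
    print(primal.value, dual.value)                # equal up to solver tolerance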

Example: linear program

now consider the problem

    minimize    c^T x
    subject to  Ax ≤ b,

where A is skinny and full rank. suppose also that strong duality holds, and p* and d* are achieved.

the Lagrangian is

    L(x, λ) = c^T x + λ^T (Ax − b).

this is bounded below in x if and only if A^T λ + c = 0. the dual problem can be expressed as

    maximize    −b^T λ
    subject to  A^T λ + c = 0
                λ ≥ 0.

can we find a primal optimal point x* from a dual optimal point λ* by minimizing the Lagrangian?

answer. no. we have L(x, λ*) = −b^T λ*, so every x minimizes L(x, λ*), but not every x is optimal for the primal problem.

can we do this by any other method?

answer. using complementary slackness, we see that if λ_i* > 0, then we must have a_i^T x* = b_i. so if the solution to the resulting set of linear equations is unique, it is a primal optimal point.

it is not always possible to construct a primal optimal point from a dual optimal point, even when strong duality holds.
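
the sketch below (assuming cvxpy; the data are random, with c chosen so that the LP is bounded) illustrates recovering a primal optimal point from λ* through the active constraints.

    import numpy as np
    import cvxpy as cp

    np.random.seed(1)
    m, n = 20, 5                                      # A is skinny: m > n
    A = np.random.randn(m, n)
    b = A @ np.random.randn(n) + np.random.rand(m)    # guarantees a feasible primal
    c = -A.T @ np.random.rand(m)                      # guarantees a bounded primal

    # dual: maximize -b^T lam  s.t.  A^T lam + c = 0, lam >= 0
    lam = cp.Variable(m)
    dual = cp.Problem(cp.Maximize(-b @ lam), [A.T @ lam + c == 0, lam >= 0])
    dual.solve()

    # complementary slackness: lam_i* > 0 forces a_i^T x* = b_i
    active = lam.value > 1e-6
    x_rec, *_ = np.linalg.lstsq(A[active], b[active], rcond=None)

    print(c @ x_rec, dual.value)    # agree when the active equations pin down x*
    print(np.max(A @ x_rec - b))    # <= 0 (up to tolerance): x_rec is feasible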

Theorems of alternatives

we want to determine the feasibility of the following system of (not necessarily convex) inequalities and equalities:

    f_i(x) ≤ 0,   i = 1,...,m,
    h_i(x) = 0,   i = 1,...,p.                                   (1)

we will assume the domain D = ∩_{i=1}^m dom f_i  ∩  ∩_{i=1}^p dom h_i is nonempty.

we can write this problem as

    minimize    0
    subject to  f_i(x) ≤ 0,   i = 1,...,m
                h_i(x) = 0,   i = 1,...,p,

with optimal value

    p* = { 0   if (1) is feasible
         { ∞   if (1) is infeasible.

the dual function is

    g(λ, ν) = inf_{x ∈ D} ( Σ_{i=1}^m λ_i f_i(x) + Σ_{i=1}^p ν_i h_i(x) ).

what happens if there exist λ ≥ 0 and ν for which g(λ, ν) > 0?

answer. d* = ∞ (g is positively homogeneous in (λ, ν), so the positive value can be scaled up without bound).

what happens if there does not exist λ ≥ 0 and ν for which g(λ, ν) > 0?

answer. d* = 0.

so,

    d* = { ∞   if  λ ≥ 0, g(λ, ν) > 0  is feasible
         { 0   if  λ ≥ 0, g(λ, ν) > 0  is infeasible.

recall d* ≤ p*. if (1) is feasible, then p* = 0, so d* = 0, which means that the system

    λ ≥ 0,   g(λ, ν) > 0                                         (2)

is infeasible. if (2) is feasible, then d* = ∞, which means that p* = ∞, and so (1) is infeasible.

note that both systems can be infeasible. (1) and (2) are called weak alternatives, since at most one of the two is feasible. two systems are called strong alternatives if exactly one of the two alternatives holds.

for theorems of alternatives, whether or not the system of inequalities is strict makes a difference. for instance, if we had the system

    f_i(x) < 0,   i = 1,...,m,
    h_i(x) = 0,   i = 1,...,p,                                   (3)

then (2) would no longer be the weak alternative. instead the weak alternative would be

    λ ≥ 0,   λ ≠ 0,   g(λ, ν) ≥ 0.                               (4)

Farkas lemma

the system of inequalities

    Ax ≤ 0,   c^T x < 0,

where A ∈ R^(m×n) and c ∈ R^n, and the system of inequalities

    A^T y + c = 0,   y ≥ 0,

are strong alternatives. how can we show this?

solution. use LP duality. consider the LP

    minimize    c^T x
    subject to  Ax ≤ 0,

and its dual

    maximize    0
    subject to  A^T y + c = 0
                y ≥ 0.

the primal LP is homogeneous, so its optimal value is 0 if the primal inequality system is infeasible, and −∞ if the primal inequality system is feasible. the dual has optimal value 0 if the dual inequality system is feasible, and optimal value −∞ if the dual inequality system is infeasible. since p* = d* (why?), the two systems are strong alternatives.
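
the strong-alternative property can be checked numerically; below is a small sketch assuming cvxpy, with randomly generated A and c (not from the notes). exactly one of the two systems should turn out to be feasible.

    import numpy as np
    import cvxpy as cp

    np.random.seed(2)
    m, n = 4, 3
    A = np.random.randn(m, n)
    c = np.random.randn(n)

    # system 1: Ax <= 0, c^T x < 0.  minimize c^T x over the cone {x : Ax <= 0};
    # the LP is homogeneous, so its value is 0 or -infinity, and system 1 is
    # feasible exactly when the LP is unbounded below.
    x = cp.Variable(n)
    p1 = cp.Problem(cp.Minimize(c @ x), [A @ x <= 0])
    p1.solve()
    sys1_feasible = p1.value < -1e-9

    # system 2: A^T y + c = 0, y >= 0 (a feasibility LP)
    y = cp.Variable(m)
    p2 = cp.Problem(cp.Minimize(0), [A.T @ y + c == 0, y >= 0])
    p2.solve()
    sys2_feasible = p2.status == cp.OPTIMAL

    print(sys1_feasible, sys2_feasible)   # exactly one of these is True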

Application: arbitrage

consider n assets with prices p_1, p_2, ..., p_n. at the end of the investment period the values of the assets are v_1, v_2, ..., v_n.

x_1, x_2, ..., x_n represents the initial investment in each asset (x_j < 0 means that we owe an amount |x_j| of asset j, i.e., a short position). the cost of the initial investment is p^T x, and the final value of the investment is v^T x.

v is uncertain; there are m possible scenarios, v^(1), ..., v^(m). if scenario i occurs, then the final value of the investment is v^(i)T x.

if there is an investment x with p^T x < 0 and v^(i)T x ≥ 0 for i = 1,...,m, then an arbitrage is said to exist.

in finance, it is often assumed that no arbitrage exists, which means that the system of inequalities

    Vx ≥ 0,   p^T x < 0

is infeasible (V is the matrix with rows v^(1)T, ..., v^(m)T). by Farkas lemma, the above system is infeasible if and only if there exists y such that

    V^T y = p,   y ≥ 0.

suppose that V is known, and all the prices except the last price p_n are known. we wish to find the set of prices p_n that are consistent with the no-arbitrage assumption. what kind of set is this?

solution. clearly, we must have p_n ≥ 0, which is consistent with intuition. the set must therefore be an interval (it is the projection of a polyhedron onto R). we can find this interval by solving a pair of LPs.

to find the minimum p_n we solve (over the variables y and p_n)

    minimize    p_n
    subject to  V^T y = p
                y ≥ 0,

and to find the maximum p_n, we solve the same LP with maximization.
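
the pair of LPs can be set up as in the sketch below (assuming cvxpy; the scenario matrix V and the known prices are made-up data, generated from some y ≥ 0 so that the no-arbitrage constraints are feasible).

    import numpy as np
    import cvxpy as cp

    np.random.seed(3)
    m, n = 6, 4                          # m scenarios, n assets
    V = np.random.rand(m, n) + 0.5       # rows are the scenario values v^(i)T
    p_full = V.T @ np.random.rand(m)     # prices generated to be arbitrage-free
    p_known = p_full[:-1]                # known prices p_1, ..., p_{n-1}

    y = cp.Variable(m)
    p_n = cp.Variable()
    cons = [V[:, :-1].T @ y == p_known,  # V^T y = p for the known prices...
            V[:, -1] @ y == p_n,         # ...and for the unknown last price
            y >= 0]

    p_min = cp.Problem(cp.Minimize(p_n), cons).solve()
    p_max = cp.Problem(cp.Maximize(p_n), cons).solve()
    print(p_min, p_max)                  # prices p_n consistent with no arbitrage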