Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers


P. Saengudomlert (2015)

2.4 Dual Problems

Consider a primal convex optimization problem

minimize f(x) subject to g(x) ≤ 0, h(x) = 0,

where g(x) = (g_1(x), …, g_L(x)) and h(x) = (h_1(x), …, h_M(x)). Following the Lagrange multiplier method, we form a modified objective function called the Lagrangian

Λ(x, λ, µ) = f(x) + Σ_{l=1}^{L} λ_l g_l(x) + Σ_{m=1}^{M} µ_m h_m(x) = f(x) + λ^T g(x) + µ^T h(x),

where λ = (λ_1, …, λ_L) and µ = (µ_1, …, µ_M) are called the dual variables.
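As a quick numerical illustration (not part of the original slides), the Lagrangian is simply the objective plus weighted constraint values. The problem data below are hypothetical placeholders chosen for the sketch:

```python
# Hypothetical problem data (for illustration only): f(x) = x1^2 + x2^2,
# one inequality constraint g1(x) = 1 - x1 - x2 <= 0,
# one equality constraint h1(x) = x1 - x2 = 0.
def f(x):
    return x[0]**2 + x[1]**2

def g(x):
    return [1.0 - x[0] - x[1]]        # g(x) <= 0

def h(x):
    return [x[0] - x[1]]              # h(x) = 0

def lagrangian(x, lam, mu):
    """Lambda(x, lam, mu) = f(x) + sum_l lam[l]*g_l(x) + sum_m mu[m]*h_m(x)."""
    return (f(x)
            + sum(l_i * g_i for l_i, g_i in zip(lam, g(x)))
            + sum(m_i * h_i for m_i, h_i in zip(mu, h(x))))

x0 = [0.5, 0.5]                       # feasible: g(x0) = [0.0], h(x0) = [0.0]
print(lagrangian(x0, [1.0], [0.0]))   # 0.5, equal to f(x0) since g, h vanish
```

At a feasible point the inequality term is nonpositive (for λ ≥ 0) and the equality term vanishes, which is exactly the mechanism behind the lower-bound argument on the next slide.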

Define the dual function¹

q(λ, µ) = inf_{x ∈ X} Λ(x, λ, µ),

where X is the domain set for all functions. For λ ≥ 0, the dual function provides a lower bound on f*. To see why, let x′ ∈ F, i.e., any primal feasible solution. Then

q(λ, µ) = inf_{x ∈ X} Λ(x, λ, µ) ≤ Λ(x′, λ, µ) = f(x′) + λ^T g(x′) + µ^T h(x′) ≤ f(x′),

since λ^T g(x′) ≤ 0 for λ ≥ 0 and µ^T h(x′) = 0.

¹The infimum (inf) is the greatest lower bound; the supremum (sup) is the least upper bound. To see why we need inf in addition to min, consider the interval (1, 2): it has no minimum, but its infimum is 1.
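The lower-bound property can be checked numerically. Below is a minimal sketch (not from the slides) for a hypothetical one-dimensional problem, approximating the infimum by a grid search:

```python
# Hypothetical problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0
# (i.e., x >= 1), so the primal optimal cost is f* = 1 at x = 1.
def f(x):
    return x * x

def g(x):
    return 1.0 - x

def q(lam, grid=None):
    """Dual function q(lam) = inf_x f(x) + lam*g(x), approximated on a grid."""
    grid = grid or [i / 1000.0 for i in range(-5000, 5001)]   # x in [-5, 5]
    return min(f(x) + lam * g(x) for x in grid)

f_star = 1.0
for lam in [0.0, 0.5, 1.0, 2.0, 5.0]:
    assert q(lam) <= f_star + 1e-9    # weak duality: q(lam) never exceeds f*
print(round(q(2.0), 6))               # 1.0 -- the bound is tight at lam = 2
```

For this problem the closed form is q(λ) = λ − λ²/4, maximized at λ = 2 where the bound meets f* exactly.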

Denote the supremum of the dual function by

q* = sup_{λ ≥ 0, µ ∈ R^M} q(λ, µ).

For λ ≥ 0, since q(λ, µ) ≤ f(x′) for any feasible x′ and any optimal solution must be feasible, we have q(λ, µ) ≤ f*. Since q* is the supremum of q(λ, µ) over all λ ≥ 0 and µ ∈ R^M, it follows that q* ≤ f*.

Theorem 2.9 (Weak duality theorem): q* ≤ f*.

Example 2.3: Minimize f(x) = (x + 1)² subject to x ≥ 0. The Lagrangian is Λ(x, λ) = (x + 1)² − λx. The dual function is

q(λ) = inf_{x ∈ R} (x + 1)² − λx = 1 − (2 − λ)²/4.

The dual function is concave, with maximum equal to 1. Since the primal optimal cost is f* = 1, it is clear that q(λ) is a lower bound on f*. Note that q* = 1.

[Figure: plot of the concave q(λ), with the dual optimal cost (maximum) 1 attained at λ = 2, and the dual feasible set λ ≥ 0 marked on the axis.]
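A short numerical check (a sketch, not from the slides) confirms the closed form in Example 2.3: setting the derivative 2(x + 1) − λ to zero gives x = λ/2 − 1, and substituting back yields q(λ) = 1 − (2 − λ)²/4.

```python
# Verify q(lam) = inf_x (x+1)^2 - lam*x against a grid-search approximation.
def q_closed(lam):
    return 1.0 - (2.0 - lam) ** 2 / 4.0

def q_numeric(lam):
    xs = [i / 1000.0 for i in range(-10000, 10001)]   # x in [-10, 10]
    return min((x + 1.0) ** 2 - lam * x for x in xs)

for lam in [0.0, 1.0, 2.0, 3.0, 6.0]:
    assert abs(q_closed(lam) - q_numeric(lam)) < 1e-6
print(q_closed(2.0))   # 1.0 -- the dual maximum, matching f* = 1
```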

The dual problem is

maximize q(λ, µ) subject to λ ≥ 0.

A dual feasible solution is a point (λ, µ) such that λ ≥ 0 and (λ, µ) is in the domain of q, i.e.,

D_q = {(λ, µ) | q(λ, µ) > −∞}.

Let F_d denote the feasible set of the dual problem.

NOTE: If the dual problem has an optimal solution, q* is its optimal cost. The quantity f* − q* is called the duality gap. From weak duality, the duality gap is always nonnegative.

Problem 2.6: Let N ≥ 2 be an integer, and let c_1, …, c_N > 0. Consider the following convex optimization problem:

minimize Σ_{i=1}^{N} c_i x_i subject to Σ_{i=1}^{N} e^{−x_i} ≤ 1.

Write down the dual function in terms of N, c_1, …, c_N, and the dual variables. Write down the dual problem.

2.5 Lagrange Multipliers

Definition (Lagrange multipliers): Dual variables (λ*, µ*) ∈ F_d are called Lagrange multipliers for the primal problem if λ* ≥ 0 and

f* = inf_{x ∈ X} Λ(x, λ*, µ*) = q(λ*, µ*).

NOTE: Lagrange multipliers may or may not exist. When they exist, strong duality holds, i.e., f* = q*.

Proof: By the definitions of (λ*, µ*) and q*, we have f* = q(λ*, µ*) ≤ q*. From weak duality, q* ≤ f*. The two inequalities yield f* = q*.

NOTE (cont.): Strong duality does not imply the existence of Lagrange multipliers. When Lagrange multipliers exist, they may not be unique. These two properties will be demonstrated by the upcoming examples.

If strong duality holds and there is a dual optimal solution, then any dual optimal solution is a set of Lagrange multipliers. This property means that we can first try solving the dual problem; if we find a dual optimal solution with zero duality gap, then we have found a set of Lagrange multipliers.

Example 2.3 (cont.): Minimize f(x) = (x + 1)² subject to x ≥ 0. The Lagrangian is Λ(x, λ) = (x + 1)² − λx, and the dual function is q(λ) = 1 − (2 − λ)²/4. The dual problem is to maximize q(λ) = 1 − (2 − λ)²/4 subject to λ ≥ 0. The dual optimal solution is λ* = 2, with dual optimal cost q* = 1. Since q* = f*, there is no duality gap, and λ* = 2 is the unique Lagrange multiplier.

[Figure: plot of q(λ) with the dual optimal cost (maximum) 1 at λ* = 2 and the dual feasible set λ ≥ 0.]

Example 2.4: Consider minimizing f(x) = x_1 + x_2 subject to x_1 ≥ 0. The Lagrangian is

Λ(x, λ) = x_1 + x_2 − λx_1 = (1 − λ)x_1 + x_2.

The dual function is

q(λ) = inf_{x ∈ R²} (1 − λ)x_1 + x_2 = −∞.

Since D_q = ∅, the dual problem is infeasible. It follows that there is no Lagrange multiplier.

Example 2.5: Consider minimizing f(x) = x subject to x² ≤ 0. Since x = 0 is the only feasible solution, x* = 0 is optimal with f* = 0. The Lagrangian is Λ(x, λ) = x + λx². The dual function is

q(λ) = inf_{x ∈ R} x + λx² = −1/(4λ) for λ > 0 (and −∞ for λ ≤ 0).

Note that D_q = (0, ∞). The dual problem is then to maximize −1/(4λ) subject to λ > 0. Since q* = sup_{λ > 0} −1/(4λ) = 0, there is no duality gap. However, there is no dual optimal solution, so there is no Lagrange multiplier.
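The failure mode in Example 2.5 is that the supremum of the dual function is not attained. A minimal sketch (not from the slides) makes this visible:

```python
# Example 2.5 dual function: q(lam) = -1/(4*lam) on lam > 0. The supremum is 0,
# approached as lam -> infinity but never attained, so no dual optimal solution
# (and hence no Lagrange multiplier) exists despite the zero duality gap.
def q(lam):
    assert lam > 0, "q(lam) = -infinity for lam <= 0"
    return -1.0 / (4.0 * lam)

vals = [q(10.0 ** k) for k in range(0, 7)]   # lam = 1, 10, ..., 10^6
assert all(v < 0 for v in vals)              # q(lam) never reaches 0 ...
assert q(10.0 ** 6) > -1e-6                  # ... but gets arbitrarily close
print(vals[0], vals[-1])                     # -0.25 -2.5e-07
```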

Example 2.6: Consider minimizing f(x) = |x| subject to x ≥ 0. By inspection, x* = 0 is optimal with f* = 0. The Lagrangian is Λ(x, λ) = |x| − λx. The dual function is

q(λ) = inf_{x ∈ R} |x| − λx,

which is 0 for λ ∈ [−1, 1] and −∞ for λ ∉ [−1, 1]. It follows that any λ ∈ [0, 1] is dual optimal with q* = 0. Since q* = f*, any λ ∈ [0, 1] is a Lagrange multiplier.
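A grid-search sketch (not from the slides) of the dual function in Example 2.6 shows the flat region of optimal multipliers:

```python
# Example 2.6 dual function: q(lam) = inf_x |x| - lam*x is 0 for |lam| <= 1
# and -infinity otherwise. Every lam in [0, 1] attains q* = 0 = f*, so the
# Lagrange multiplier is not unique.
def q(lam, xs=None):
    xs = xs or [i / 100.0 for i in range(-10000, 10001)]  # x in [-100, 100]
    return min(abs(x) - lam * x for x in xs)

assert q(0.0) == 0.0 and q(0.5) == 0.0 and q(1.0) == 0.0  # flat on [0, 1]
print(q(1.5))   # -50.0 on this truncated grid; the true infimum is -infinity
```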

Slater Conditions

Several sets of conditions can guarantee the existence of Lagrange multipliers. One such set is the Slater conditions.

Slater conditions:
1. X is a convex set.
2. f, g_1, …, g_L are convex functions.
3. h_1, …, h_M are such that h(x) can be expressed as Ax + b for some matrix A and vector b. In addition, A has full rank, i.e., rank(A) = M.
4. The primal optimal cost f* is finite.
5. There exists a feasible solution x′ ∈ X such that g(x′) < 0.

NOTE: The last condition says that there is at least one feasible point that satisfies all inequality constraints strictly, i.e., a point in the strict interior of the region defined by the inequality constraints.