Convex Optimization Theory

Chapter 4 Exercises and Solutions: Extended Version

Dimitri P. Bertsekas
Massachusetts Institute of Technology

Athena Scientific, Belmont, Massachusetts
http://www.athenasc.com

LAST UPDATE February 20, 2010

CHAPTER 4: EXERCISES AND SOLUTIONS

Note: This set of exercises will be periodically updated as new exercises are added. Many of the exercises and solutions given here were developed as part of my earlier convex optimization book [BNO03] (coauthored with Angelia Nedić and Asuman Ozdaglar), and are posted on that book's web site. The contribution of my coauthors in the development of these exercises and their solutions is gratefully acknowledged. Since some of the exercises and/or their solutions have been modified, and new exercises have been added, all errors are my sole responsibility.

4.1 (Augmented Lagrangian Duality for Equality Constraints)

Construct an augmented Lagrangian framework and derive the dual function, similar to the one of Section 4.2.4, for the case of equality constraints, i.e., when we have the constraint $Ex = d$ in place of $g(x) \le 0$, where $E$ is an $m \times n$ matrix and $d \in \Re^m$.

Solution: The dual function has the form
$$q_c(\mu) = \inf_{x \in X} \Big\{ f(x) + \mu'(Ex - d) + \frac{c}{2}\,\|Ex - d\|^2 \Big\}.$$
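As a concrete illustration of this formula (an added example, not part of the original solution), consider the scalar instance $f(x) = \tfrac{1}{2}x^2$, $X = \Re$, $E = 1$, $d = 1$, i.e., minimize $\tfrac{1}{2}x^2$ subject to $x = 1$. Setting the derivative of the infimand to zero gives the minimizing point $x = (c - \mu)/(1 + c)$, and substitution yields
$$q_c(\mu) = \frac{c - \mu^2 - 2\mu}{2(1+c)}.$$
For $c = 0$ this reduces to the ordinary dual $q(\mu) = -\tfrac{1}{2}\mu^2 - \mu$. For every $c \ge 0$, $q_c$ is maximized at $\mu^* = -1$ with $q_c(\mu^*) = \tfrac{1}{2}$, the primal optimal value, while the curvature of $q_c$ is $-1/(1+c)$: increasing $c$ flattens the dual function, illustrating the smoothing effect of the augmentation term.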

4.2

In the context of Section 4.2.2, let
$$F(x,u) = f_1(x) + f_2(Ax + u),$$
where $A$ is an $m \times n$ matrix, and $f_1 : \Re^n \mapsto (-\infty, \infty]$ and $f_2 : \Re^m \mapsto (-\infty, \infty]$ are closed convex functions. Show that the dual function is
$$q(\mu) = -f_1^\star(A'\mu) - f_2^\star(-\mu),$$
where $f_1^\star$ and $f_2^\star$ are the conjugate functions of $f_1$ and $f_2$, respectively. Note: This is the Fenchel duality framework discussed in Section 5.3.5.

Solution: From Section 4.2.1, the dual function is $q(\mu) = -p^\star(-\mu)$, where $p^\star$ is the conjugate of the function $p(u) = \inf_{x \in \Re^n} F(x,u)$. Using the change of variables $z = Ax + u$, we have
$$p^\star(-\mu) = \sup_{u} \Big\{ -\mu'u - \inf_{x} \big\{ f_1(x) + f_2(Ax+u) \big\} \Big\} = \sup_{z,x} \big\{ -\mu'(z - Ax) - f_1(x) - f_2(z) \big\} = f_1^\star(A'\mu) + f_2^\star(-\mu),$$
where $f_1^\star$ and $f_2^\star$ are the conjugate functions of $f_1$ and $f_2$, respectively. Thus,
$$q(\mu) = -f_1^\star(A'\mu) - f_2^\star(-\mu).$$
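As a quick check of this formula on a simple instance (an illustration added here), take $n = m = 1$, $A = 1$, $f_1(x) = \tfrac{1}{2}x^2$, and $f_2(y) = \tfrac{1}{2}(y-1)^2$. The primal value is
$$p(0) = \inf_{x} \Big\{ \tfrac{1}{2}x^2 + \tfrac{1}{2}(x-1)^2 \Big\} = \tfrac{1}{4},$$
attained at $x = \tfrac{1}{2}$. The conjugates are $f_1^\star(\lambda) = \tfrac{1}{2}\lambda^2$ and $f_2^\star(\lambda) = \tfrac{1}{2}\lambda^2 + \lambda$, so
$$q(\mu) = -\tfrac{1}{2}\mu^2 - \big(\tfrac{1}{2}\mu^2 - \mu\big) = -\mu^2 + \mu,$$
which is maximized at $\mu = \tfrac{1}{2}$ with $q\big(\tfrac{1}{2}\big) = \tfrac{1}{4} = p(0)$, as strong duality requires in this case.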

4.3 (An Example of Lagrangian Duality)

Consider the problem
$$\begin{aligned} &\text{minimize} && f(x)\\ &\text{subject to} && x \in X, \quad e_i'x = d_i, \quad i = 1,\ldots,m, \end{aligned} \tag{4.1}$$
where $f : \Re^n \mapsto \Re$ is a convex function, $X$ is a nonempty convex set, and $e_i$ and $d_i$ are given vectors and scalars, respectively. Consider the min common/max crossing framework where $M$ is the subset of $\Re^{m+1}$ given by
$$M = \big\{ \big(e_1'x - d_1, \ldots, e_m'x - d_m, f(x)\big) \mid x \in X \big\},$$
and assume that $w^* < \infty$.

(a) Show that $w^*$ is equal to the optimal value of problem (4.1), and that the max crossing problem is to maximize $q(\mu)$ given by
$$q(\mu) = \inf_{x \in X} \Big\{ f(x) + \sum_{i=1}^m \mu_i (e_i'x - d_i) \Big\}.$$

(b) Show that the corresponding set $\overline{M}$ is convex.

(c) Show that if $X$ is compact, then $q^* = w^*$.

(d) Show that if there exists a vector $\bar{x} \in \operatorname{ri}(X)$ such that $e_i'\bar{x} = d_i$ for all $i = 1,\ldots,m$, then $q^* = w^*$ and the max crossing problem has an optimal solution.

Solution: (a) It is easily seen that $w^*$ is the minimal value of $f(x)$ subject to $x \in X$ and $e_i'x = d_i$, $i = 1,\ldots,m$, i.e., the optimal value of problem (4.1). The corresponding max crossing problem is given by
$$q^* = \sup_{\mu \in \Re^m} q(\mu),$$
where $q(\mu)$ is given by
$$q(\mu) = \inf_{(u,w) \in M} \{ w + \mu'u \} = \inf_{x \in X} \Big\{ f(x) + \sum_{i=1}^m \mu_i(e_i'x - d_i) \Big\}.$$

(b) Consider the set
$$\overline{M} = \big\{ (u_1,\ldots,u_m,w) \mid \text{there exists } x \in X \text{ such that } e_i'x - d_i = u_i \text{ for all } i, \text{ and } f(x) \le w \big\}.$$
We show that $\overline{M}$ is convex. To this end, we consider vectors $(u,w) \in \overline{M}$ and $(\tilde{u},\tilde{w}) \in \overline{M}$, and we show that their convex combinations lie in $\overline{M}$. The definition of $\overline{M}$ implies that for some $x \in X$ and $\tilde{x} \in X$, we have
$$f(x) \le w, \qquad e_i'x - d_i = u_i, \quad i = 1,\ldots,m,$$
$$f(\tilde{x}) \le \tilde{w}, \qquad e_i'\tilde{x} - d_i = \tilde{u}_i, \quad i = 1,\ldots,m.$$
For any $\alpha \in [0,1]$, we multiply these relations by $\alpha$ and $1-\alpha$, respectively, and add. By using the convexity of $f$, we obtain
$$f\big(\alpha x + (1-\alpha)\tilde{x}\big) \le \alpha f(x) + (1-\alpha) f(\tilde{x}) \le \alpha w + (1-\alpha)\tilde{w},$$
$$e_i'\big(\alpha x + (1-\alpha)\tilde{x}\big) - d_i = \alpha u_i + (1-\alpha)\tilde{u}_i, \quad i = 1,\ldots,m.$$
In view of the convexity of $X$, we have $\alpha x + (1-\alpha)\tilde{x} \in X$, so these relations imply that the convex combination of $(u,w)$ and $(\tilde{u},\tilde{w})$ belongs to $\overline{M}$, thus proving that $\overline{M}$ is convex.

(c) We prove this result by showing that all the assumptions of the Min Common/Max Crossing Theorem I are satisfied. By assumption, $w^*$ is finite. It follows from part (b) that the set $\overline{M}$ is convex. Therefore, we only need to show that for every sequence $\{(u_k,w_k)\} \subset M$ with $u_k \to 0$, there holds $w^* \le \liminf_{k \to \infty} w_k$. Consider a sequence $\{(u_k,w_k)\} \subset M$ with $u_k \to 0$. Since $X$ is compact and $f$ is convex by assumption (which implies that $f$ is continuous by Prop. 1.4.6), it follows from Prop. 1.1.9(c) that the set $M$ is compact. Hence, the sequence $\{(u_k,w_k)\}$ has a subsequence that converges to some $(0,\bar{w}) \in M$. Assume without loss of generality that $\{(u_k,w_k)\}$ converges to $(0,\bar{w})$. Since $(0,\bar{w}) \in M$, we obtain
$$w^* = \inf_{(0,w) \in M} w \le \bar{w} = \liminf_{k \to \infty} w_k,$$
proving the desired result, and thus showing that $q^* = w^*$.

(d) We prove this result by showing that all the assumptions of the Min Common/Max Crossing Theorem II are satisfied. By assumption, $w^*$ is finite. It follows from part (b) that the set $\overline{M}$ is convex. Therefore, we only need to show that the set
$$D = \big\{ (e_1'x - d_1, \ldots, e_m'x - d_m) \mid x \in X \big\}$$
contains the origin in its relative interior. The set $D$ can equivalently be written as
$$D = E \cdot X - d,$$
where $E$ is the matrix whose rows are the vectors $e_i'$, $i = 1,\ldots,m$, and $d$ is the vector with components $d_i$, $i = 1,\ldots,m$. By Prop. 1.3.6, it follows that $\operatorname{ri}(D) = E \cdot \operatorname{ri}(X) - d$. Hence the assumption that there exists a vector $\bar{x} \in \operatorname{ri}(X)$ such that $E\bar{x} - d = 0$ implies that $0 \in \operatorname{ri}(D)$, thus showing that $q^* = w^*$ and that the max crossing problem has an optimal solution.
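The assumptions in parts (c) and (d) cannot simply be dropped; a classical duality gap example (an added illustration, not part of the original exercise) shows what can go wrong when they fail. Let $m = 1$ and consider minimizing $f(x) = e^{-\sqrt{x_1 x_2}}$ subject to $x_1 = 0$ (i.e., $e_1 = (1,0)'$, $d_1 = 0$) and $x \in X = \{x \in \Re^2 \mid x \ge 0\}$. Here $f$ is convex over $X$, since $\sqrt{x_1 x_2}$ is concave on $X$ and $e^{-t}$ is convex and decreasing; convexity of $f$ over $X$ is all that the argument of part (b) uses. Every feasible point has $x_1 = 0$, so $w^* = 1$. For $\mu \ge 0$, taking $x_1 = 1/k$ and $x_2 = k^3$ gives
$$f(x) + \mu x_1 = e^{-k} + \mu/k \to 0,$$
and since both terms are nonnegative, $q(\mu) = 0$; for $\mu < 0$, letting $x_1 \to \infty$ with $x_2 = 0$ gives $q(\mu) = -\infty$. Hence $q^* = 0 < 1 = w^*$. Consistent with this gap, $X$ is not compact, and no feasible point belongs to $\operatorname{ri}(X) = \{x \mid x > 0\}$, so neither (c) nor (d) applies.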

4.4 (Lagrangian Duality and Compactness of the Constraint Set)

Consider the problem of Exercise 4.3, but assume that $f$ is linear and $X$ is compact (instead of $f$ and $X$ being convex). Show that $q^*$ is equal to the minimal value of $f(x)$ subject to $x \in \operatorname{conv}(X)$ and $e_i'x = d_i$, $i = 1,\ldots,m$. Hint: Show that
$$\operatorname{conv}(M) = \big\{ \big(e_1'x - d_1, \ldots, e_m'x - d_m, f(x)\big) \mid x \in \operatorname{conv}(X) \big\},$$
and use Exercise 4.3(c).

Solution: Clearly, the max crossing values corresponding to $M$ and $\operatorname{conv}(M)$ are equal [this is true generically, since the closed halfspaces containing $M$ also contain $\operatorname{conv}(M)$]. The expression for $\operatorname{conv}(M)$ in the hint follows from the linearity of $f$. Thus, the min common/max crossing framework for $\operatorname{conv}(M)$ corresponds to the problem of minimizing $f(x)$ subject to $x \in \operatorname{conv}(X)$ and $e_i'x = d_i$, $i = 1,\ldots,m$. Since $X$ is compact, $\operatorname{conv}(X)$ is also compact, and the result follows from Exercise 4.3(c).
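As a concrete illustration (added here, not part of the original exercise), let $X = \{0, 2\} \subset \Re$, $f(x) = x$, and impose the single constraint $x = 1$ (so $e_1 = 1$, $d_1 = 1$). The original problem is infeasible, since no point of $X$ satisfies the constraint, yet the dual function is
$$q(\mu) = \min_{x \in \{0,2\}} \big\{ x + \mu(x - 1) \big\} = \min\{-\mu,\; 2 + \mu\},$$
which is maximized at $\mu = -1$, giving $q^* = 1$. This agrees with the exercise: the minimum of $f(x) = x$ over $x \in \operatorname{conv}(X) = [0,2]$ subject to $x = 1$ is indeed $1$.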