Additional Homework Problems


Robert M. Freund
April 2004
© 2004 Massachusetts Institute of Technology

1 Exercises

1. Let $\mathbb{R}^n_+$ denote the nonnegative orthant, namely $\mathbb{R}^n_+ = \{x \in \mathbb{R}^n \mid x_j \ge 0,\ j = 1, \dots, n\}$. Considering $\mathbb{R}^n_+$ as a cone, prove that $(\mathbb{R}^n_+)^* = \mathbb{R}^n_+$, thus showing that $\mathbb{R}^n_+$ is self-dual.

2. Let $Q^n = \left\{ x \in \mathbb{R}^n \,\middle|\, x_1 \ge \sqrt{\textstyle\sum_{j=2}^n x_j^2} \right\}$. $Q^n$ is called the second-order cone, the Lorentz cone, or the ice-cream cone (I am not making this up). Considering $Q^n$ as a cone, prove that $(Q^n)^* = Q^n$, thus showing that $Q^n$ is self-dual.

3. Prove Corollary 3 of the notes on duality theory, which asserts that the existence of a Slater point for the conic dual problem guarantees strong duality and that the primal attains its optimum.

4. Consider the following minimax problems:
$$\min_{x \in X} \max_{y \in Y} \phi(x, y) \qquad \text{and} \qquad \max_{y \in Y} \min_{x \in X} \phi(x, y),$$
where $X$ and $Y$ are nonempty compact convex sets in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, and $\phi(x, y)$ is convex in $x$ for fixed $y$ and concave in $y$ for fixed $x$.

(a) Show that $\min_{x \in X} \max_{y \in Y} \phi(x, y) \ \ge\ \max_{y \in Y} \min_{x \in X} \phi(x, y)$ holds in the absence of any convexity/concavity assumptions on $X$, $Y$, and/or $\phi(\cdot,\cdot)$. (A quick numerical illustration follows this exercise.)

(b) Show that $f(x) := \max_{y \in Y} \phi(x, y)$ is a convex function in $x$ and that $g(y) := \min_{x \in X} \phi(x, y)$ is a concave function in $y$.

(c) Use a separating hyperplane theorem to prove:
$$\min_{x \in X} \max_{y \in Y} \phi(x, y) \ =\ \max_{y \in Y} \min_{x \in X} \phi(x, y).$$
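Part (a) of Exercise 4 requires no convexity at all, and a quick numerical check makes that plausible before you prove it: for any finite sets of candidate points and any payoff values, the "min of row maxima" never falls below the "max of column minima". A throwaway sketch (the bilinear $\phi$ and the random point sets are my own illustration, not part of the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any finite X, Y and any phi obey min-max >= max-min:
# no convexity or compactness is needed for the weak inequality.
X = rng.normal(size=(50, 3))      # 50 candidate points x in R^3
Y = rng.normal(size=(40, 2))      # 40 candidate points y in R^2
Q = rng.normal(size=(3, 2))
phi = X @ Q @ Y.T                 # phi[i, j] = x_i^T Q y_j

minimax = phi.max(axis=1).min()   # min_x max_y phi(x, y)
maximin = phi.min(axis=0).max()   # max_y min_x phi(x, y)
assert minimax >= maximin - 1e-12
print(f"min max = {minimax:.4f} >= max min = {maximin:.4f}")
```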

5. Let $X$ and $Y$ be nonempty sets in $\mathbb{R}^n$, and let $f(\cdot), g(\cdot) : \mathbb{R}^n \to \mathbb{R}$. Consider the conjugate functions $f^*(\cdot)$ and $g^*(\cdot)$ defined as follows:
$$f^*(u) := \inf_{x \in X} \{ f(x) - u^T x \} \qquad \text{and} \qquad g^*(u) := \sup_{x \in Y} \{ g(x) - u^T x \}.$$

(a) Construct a geometric interpretation of $f^*(\cdot)$ and $g^*(\cdot)$.

(b) Show that $f^*(\cdot)$ is a concave function on $X^* := \{u \mid f^*(u) > -\infty\}$, and $g^*(\cdot)$ is a convex function on $Y^* := \{u \mid g^*(u) < +\infty\}$.

(c) Prove the following weak duality theorem between the conjugate primal problem $\inf\{f(x) - g(x) \mid x \in X \cap Y\}$ and the conjugate dual problem $\sup\{f^*(u) - g^*(u) \mid u \in X^* \cap Y^*\}$:
$$\inf\{f(x) - g(x) \mid x \in X \cap Y\} \ \ge\ \sup\{f^*(u) - g^*(u) \mid u \in X^* \cap Y^*\}.$$

(d) Now suppose that $f(\cdot)$ is a convex function, $g(\cdot)$ is a concave function, $\operatorname{int} X \cap \operatorname{int} Y \ne \emptyset$, and $\inf\{f(x) - g(x) \mid x \in X \cap Y\}$ is finite. Show that equality in part (5c) holds true and that $\sup\{f^*(u) - g^*(u) \mid u \in X^* \cap Y^*\}$ is attained for some $u = \bar u$.

(e) Consider a standard inequality-constrained nonlinear optimization problem, using the following notation:
$$\text{OP}: \quad \text{minimize}_x\ \bar f(x) \quad \text{s.t.}\ \bar g_1(x) \le 0,\ \dots,\ \bar g_m(x) \le 0,\ x \in \bar X.$$
By suitable choices of $f(\cdot)$, $g(\cdot)$, $X$, and $Y$, formulate this problem as an instance of the conjugate primal problem $\inf\{f(x) - g(x) \mid x \in X \cap Y\}$. What is the form of the resulting conjugate dual problem $\sup\{f^*(u) - g^*(u) \mid u \in X^* \cap Y^*\}$?

6. Consider the following problem:
$$z^* = \min\ x_1 + x_2 \quad \text{s.t.}\ x_1^2 + x_2^2 = 4, \quad 2x_1 - x_2 \le 4.$$

(a) Formulate the Lagrange dual of this problem by incorporating both constraints into the objective function via multipliers $u_1, u_2$.

(b) Compute the gradient of $L^*(u)$ at the point $\bar u = (1, 2)$. (A numerical check follows below.)
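For Exercise 6, a numerical sketch can confirm a hand computation of $L^*(\bar u)$ and $\nabla L^*(\bar u)$. It assumes the constraint forms as reconstructed above and the Lagrangian $L(x, u) = x_1 + x_2 + u_1(x_1^2 + x_2^2 - 4) + u_2(2x_1 - x_2 - 4)$; treat it as a check on your own derivation, not as the solution:

```python
import numpy as np

def dual_value_and_grad(u1, u2):
    """Evaluate L*(u) = min_x L(x, u) and grad L*(u) for Exercise 6.

    For u1 > 0 the Lagrangian is a strictly convex quadratic in x, so
    the inner minimizer has a closed form, and by the envelope (Danskin)
    theorem grad L*(u) is the vector of constraint values there.
    """
    assert u1 > 0, "L*(u) = -infinity when u1 <= 0"
    x1 = -(1 + 2 * u2) / (2 * u1)   # from 1 + 2*u1*x1 + 2*u2 = 0
    x2 = (u2 - 1) / (2 * u1)        # from 1 + 2*u1*x2 - u2 = 0
    lstar = x1 + x2 + u1 * (x1**2 + x2**2 - 4) + u2 * (2 * x1 - x2 - 4)
    grad = np.array([x1**2 + x2**2 - 4, 2 * x1 - x2 - 4])
    return lstar, grad

val, grad = dual_value_and_grad(1.0, 2.0)
print("L*(1,2) =", val, "  grad L*(1,2) =", grad)
```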

(c) Starting from $\bar u = (1, 2)$, perform one iteration of the steepest ascent method for the dual problem. In particular, solve the following problem, where $\bar d = \nabla L^*(\bar u)$:
$$\max_\alpha\ L^*(\bar u + \alpha \bar d) \quad \text{s.t.}\ \bar u + \alpha \bar d \ge 0,\ \alpha \ge 0.$$

7. Prove Remark 1 of the notes on conic duality, which states that the dual of the dual is the primal for the conic dual problems of Section 13 of the duality notes.

8. Consider the following very general conic problem:
$$\text{GCP}: \quad z^* = \text{minimum}_x\ c^T x \quad \text{s.t.}\ Ax - b \in K_1,\ x \in K_2,$$
where $K_1 \subset \mathbb{R}^m$ and $K_2 \subset \mathbb{R}^n$ are each a closed convex cone. Derive the following conic dual for this problem:
$$\text{GCD}: \quad v^* = \text{maximum}_y\ b^T y \quad \text{s.t.}\ c - A^T y \in K_2^*,\ y \in K_1^*,$$
and show that the dual of GCD is GCP. How is the conic format of Section 13 of the duality notes a special case of GCP?
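A concrete special case of Exercise 8 worth keeping in mind: with $K_1 = \mathbb{R}^m_+$ and $K_2 = \mathbb{R}^n_+$ (both self-dual by Exercise 1), GCP and GCD collapse to the standard LP dual pair, and their optimal values coincide whenever both sides are feasible. The sketch below builds a random instance that is feasible on both sides and checks this; the instance construction is my own illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 4, 6
A = rng.normal(size=(m, n))

# Choose b and c so that known points x0, y0 are feasible for each side,
# guaranteeing both optima exist (LP strong duality then gives z* = v*).
x0, y0 = rng.uniform(1, 2, n), rng.uniform(1, 2, m)
b = A @ x0 - rng.uniform(0, 1, m)     # x0 satisfies A x - b in R^m_+
c = A.T @ y0 + rng.uniform(0, 1, n)   # y0 satisfies c - A^T y in R^n_+

# GCP with K1 = R^m_+, K2 = R^n_+:  min c^T x  s.t.  A x >= b,  x >= 0.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))

# GCD:  max b^T y  s.t.  A^T y <= c,  y >= 0  (negated to fit linprog).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None))

print("z* =", primal.fun, "  v* =", -dual.fun)
assert np.isclose(primal.fun, -dual.fun)
```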

9. For a (square) matrix $M \in \mathbb{R}^{n \times n}$, define $\operatorname{trace}(M) = \sum_{j=1}^n M_{jj}$, and for two matrices $A, B \in \mathbb{R}^{k \times l}$ define
$$A \bullet B := \sum_{i=1}^k \sum_{j=1}^l A_{ij} B_{ij}.$$
Prove that:

(a) $A \bullet B = \operatorname{trace}(A^T B)$.

(b) $\operatorname{trace}(MN) = \operatorname{trace}(NM)$.

10. Let $S^{k \times k}_+$ denote the cone of positive semidefinite symmetric matrices, namely $S^{k \times k}_+ = \{X \in S^{k \times k} \mid v^T X v \ge 0 \text{ for all } v \in \mathbb{R}^k\}$. Considering $S^{k \times k}_+$ as a cone (under the inner product $A \bullet B$), prove that $(S^{k \times k}_+)^* = S^{k \times k}_+$, thus showing that $S^{k \times k}_+$ is self-dual.
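Both identities in Exercise 9 are worth spot-checking numerically before writing the proofs; a quick verification on random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(3, 5)), rng.normal(size=(3, 5))
M, N = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

# (a) A . B = trace(A^T B): both sides sum the entrywise products.
assert np.isclose(np.sum(A * B), np.trace(A.T @ B))

# (b) trace(MN) = trace(NM): both equal sum_{i,j} M_ij * N_ji.
assert np.isclose(np.trace(M @ N), np.trace(N @ M))
print("trace identities hold on random instances")
```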

11. Consider the problem:
$$\text{P}: \quad z^* = \text{minimum}_{x_1, x_2, x_3}\ x_1 \quad \text{s.t.}\ x_2 + x_3 = 0, \quad -x_1 \le 10, \quad \|(x_1, x_2)\| \le x_3,$$
where $\|\cdot\|$ denotes the Euclidean norm. Note that this problem is feasible (set $x_1 = x_2 = x_3 = 0$), and that the first and third constraints combine to force $x_1 = 0$ in any feasible solution, whereby $z^* = 0$.

We will dualize on the first two constraints, setting $X := \{(x_1, x_2, x_3) \mid \|(x_1, x_2)\| \le x_3\}$. Using multipliers $u_1, u_2$ for the first two constraints, our Lagrangian is:
$$L(x_1, x_2, x_3, u_1, u_2) = x_1 + u_1(x_2 + x_3) + u_2(-x_1 - 10) = -10u_2 + (1 - u_2,\ u_1,\ u_1)^T (x_1, x_2, x_3).$$
Then
$$L^*(u_1, u_2) = \min_{\|(x_1, x_2)\| \le x_3}\ -10u_2 + (1 - u_2,\ u_1,\ u_1)^T (x_1, x_2, x_3).$$

(i) Show that:
$$L^*(u_1, u_2) = \begin{cases} -10u_2 & \text{if } \|(1 - u_2,\ u_1)\| \le u_1 \\ -\infty & \text{if } \|(1 - u_2,\ u_1)\| > u_1, \end{cases}$$
and hence the dual problem can be written as:
$$\text{D}: \quad v^* = \text{maximum}_{u_1, u_2}\ -10u_2 \quad \text{s.t.}\ \|(1 - u_2,\ u_1)\| \le u_1, \quad u_1 \in \mathbb{R},\ u_2 \ge 0.$$

(ii) Show that $v^* = -10$, and that the set of optimal solutions of D comprises the vectors $(u_1, u_2) = (\alpha, 1)$ for all $\alpha \ge 0$. Hence both P and D attain their optima, with a finite duality gap.

(iii) In this example the primal problem is a convex problem. Why is there nevertheless a duality gap? What hypotheses are absent that would otherwise guarantee strong duality?
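A small numerical sketch of the geometry behind parts (ii) and (iii): the dual constraint $\|(1 - u_2, u_1)\| \le u_1$ forces $u_2 = 1$, so every dual feasible point has objective value $-10$, while $z^* = 0$. The grid scan below (my own illustration) makes the collapsed feasible region and the gap visible:

```python
import numpy as np

u1 = np.linspace(0.0, 5.0, 501)
u2 = np.linspace(0.0, 3.0, 301)
U1, U2 = np.meshgrid(u1, u2)

# Dual feasibility ||(1 - u2, u1)|| <= u1, with a tiny grid tolerance.
feas = np.hypot(1.0 - U2, U1) <= U1 + 1e-9

print("feasible u2 values:", np.unique(np.round(U2[feas], 6)))  # only 1.0
print("best dual value on grid:", (-10.0 * U2[feas]).max())     # -10 vs z* = 0
```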