Multi-User Information Theory                                                Jan 17, 2012

Lecture 10

Lecturer: Dr. Haim Permuter                                    Scribe: Wasim Huleihel

I. CONJUGATE FUNCTION

In the previous lectures, we discussed convex sets and convex functions (properties and examples), and we presented some operations that preserve convexity. In this lecture we continue the discussion of convex functions: the definition and examples of conjugate functions will be given, and we will present some general convex optimization problems.

Definition 1 (Conjugate function) Let f : R^n → R. The function f* : R^n → R, defined as

    f*(y) = sup_{x ∈ dom f} ( y^T x − f(x) ),                                    (1)

is called the conjugate function of f. This definition is illustrated in Fig. 1.

Remark 1 The domain of the conjugate function is given by

    dom f* = { y ∈ R^n : sup_{x ∈ dom f} ( y^T x − f(x) ) < ∞ }.                 (2)

Remark 2 The conjugate function f* is a convex function. This can easily be verified using the fact that the supremum of a family of convex functions is convex (in our case, for a fixed x, the difference y^T x − f(x) is an affine function of y, and hence convex). Note that this is true whether or not f is convex.

Remark 3 Here y^T x stands for the inner product in finite-dimensional spaces. In infinite-dimensional vector spaces, the conjugate function is defined via an appropriate inner product. This operation plays an important role in many applications, such as duality.

In the following, we present some examples of conjugate functions.
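To make Definition 1 concrete, here is a minimal numerical sketch (an addition, not part of the original notes): it approximates the supremum in (1) on a finite grid for the illustrative choice f(x) = x^2, whose conjugate has the closed form f*(y) = y^2/4. Python with NumPy is assumed.

```python
import numpy as np

# Approximate f*(y) = sup_x (y*x - f(x)) on a grid, for the toy choice f(x) = x^2.
# The exact conjugate is f*(y) = y^2/4 (maximizer x = y/2), so we can compare.
x = np.linspace(-10.0, 10.0, 100001)   # grid covering the relevant part of dom f
f = x**2

def conjugate(y):
    """Grid approximation of the maximum gap y*x - f(x)."""
    return np.max(y * x - f)

for y in np.linspace(-3.0, 3.0, 7):
    print(f"y = {y:+.1f}   grid f*(y) = {conjugate(y):.4f}   y^2/4 = {y*y/4:.4f}")
```

Swapping in a nonconvex f still produces a convex f* in such experiments, in line with Remark 2.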

[Fig. 1: The conjugate function f*(y) is the maximum gap between the linear function yx and the function f(x).]

Example 1 (Affine function) f(x) = ax + b. By definition, the conjugate function is given by f*(y) = sup_x (yx − ax − b). As a function of x, the difference is bounded iff y − a = 0. The conjugate function is given by f*(y) = −b, and the domain is dom f* = {a}.

Example 2 (Negative logarithm) f(x) = −log x, with dom f = R_++. By definition, the conjugate function is given by f*(y) = sup_{x>0} (yx + log x). As a function of x, the difference is bounded above iff y < 0, and it reaches its maximum at x = −1/y. The conjugate function is given by f*(y) = −log(−y) − 1, and the domain is dom f* = −R_++ = {y : y < 0}.

Example 3 (Exponential) f(x) = e^x. By definition, the conjugate function is given by f*(y) = sup_x (yx − e^x). As a function of x, the difference is bounded above iff y ≥ 0; for y > 0 it reaches its maximum at x = log y. The conjugate function is given by f*(y) = y log y − y (with f*(0) = 0), and the domain is dom f* = R_+.

Example 4 (Entropy) f(x) = x log x, with dom f = R_++. By definition, the conjugate function is given by f*(y) = sup_{x>0} (yx − x log x). As a function of x, the difference is bounded above for all y, and it reaches its maximum at x = e^{y−1}. The conjugate function is given by f*(y) = e^{y−1}, and the domain is dom f* = R. So the conjugate of the entropy is related to the exponential family.
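The closed forms in Examples 2-4 are easy to sanity-check numerically. Below is a small sketch (not from the lecture) that maximizes yx − f(x) over a bounded interval with SciPy and compares against the formulas above; the interval bounds and the sampled y values are ad-hoc choices that happen to contain each maximizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Check Examples 2-4 by maximizing y*x - f(x) numerically on a bounded interval.
cases = {
    "neg-log,  y=-2.0": (lambda x: -np.log(x),   -2.0, (1e-6, 50.0),
                         -np.log(2.0) - 1.0),            # f*(y) = -log(-y) - 1
    "exp,      y=+3.0": (lambda x: np.exp(x),     3.0, (-50.0, 50.0),
                         3.0 * np.log(3.0) - 3.0),       # f*(y) = y log y - y
    "x log x,  y=+0.5": (lambda x: x * np.log(x), 0.5, (1e-6, 50.0),
                         np.exp(0.5 - 1.0)),             # f*(y) = e^{y-1}
}
for name, (f, y, (lo, hi), expected) in cases.items():
    res = minimize_scalar(lambda x: -(y * x - f(x)), bounds=(lo, hi), method="bounded")
    print(f"{name}: numeric f*(y) = {-res.fun:.6f}, closed form = {expected:.6f}")
```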

Example 5 (Log-sum-exp) f(x) = log Σ_{i=1}^n e^{x_i}, with dom f = R^n. By definition, the conjugate function is given by f*(y) = sup_x (y^T x − log Σ_{i=1}^n e^{x_i}). By setting the gradient with respect to x to zero, the critical points are given by

    y_i = e^{x_i} / Σ_{j=1}^n e^{x_j},   i = 1,...,n.                            (3)

The domain of f* is dom f* = {y ∈ R^n : y_i ≥ 0, Σ_{i=1}^n y_i = 1}, i.e., the probability simplex. The conjugate function is given by f*(y) = Σ_{i=1}^n y_i log y_i.

Next, we present two interesting properties of the conjugate function.

Properties of the conjugate function
• Fenchel's inequality: f(x) + f*(y) ≥ x^T y.
• Conjugate of the conjugate: if f is a convex function and its epigraph is a closed set, then f** = f.

II. CONVEX OPTIMIZATION PROBLEMS

The general optimization problem is given by

    minimize    f_0(x)
    s.t.        f_i(x) ≤ 0,   i = 1,...,m
                h_j(x) = 0,   j = 1,...,p,                                       (4)

where f_0(x), {f_i(x)}_{i=1}^m are convex functions, and {h_j(x)}_{j=1}^p are affine functions. The optimization problem (4) describes the problem of finding an x that minimizes the objective f_0 subject to (s.t.) the constraints f_i(x) ≤ 0, i = 1,...,m, and h_j(x) = 0, j = 1,...,p. In the following, we present some classes of optimization problems that can be efficiently solved, e.g., using Matlab.

Linear Programming: linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints.
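As a quick illustration tying Example 5 to Fenchel's inequality, the sketch below (an addition, not part of the notes) evaluates f(x) + f*(y) − x^T y for random x and random y in the simplex; the gap is always nonnegative and vanishes at the maximizing y in (3), i.e., the softmax of x.

```python
import numpy as np

rng = np.random.default_rng(0)

def lse(x):                      # f(x) = log sum_i e^{x_i}
    return np.log(np.sum(np.exp(x)))

def neg_entropy(y):              # f*(y) = sum_i y_i log y_i on the simplex
    return np.sum(y * np.log(y))

for _ in range(3):
    x = rng.normal(size=4)
    y = rng.dirichlet(np.ones(4))          # random point in the simplex
    gap = lse(x) + neg_entropy(y) - x @ y  # Fenchel: gap >= 0
    print(f"gap = {gap:.6f} (>= 0)")

# Equality holds at the maximizing y in (3): y = softmax(x).
x = rng.normal(size=4)
y = np.exp(x) / np.sum(np.exp(x))
print(f"gap at y = softmax(x): {lse(x) + neg_entropy(y) - x @ y:.2e}")
```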

Its feasible region (the set of all possible solutions) is a convex polyhedron. Mathematically, the optimization problem in this case can be put in the following form:

    minimize    c^T x + d
    s.t.        Gx ⪯ h
                Ax = b.                                                          (5)

Linear-Fractional Programming: the linear-fractional optimization problem can be written as

    minimize    (c^T x + d) / (e^T x + f)
    s.t.        Gx ⪯ h
                Ax = b
                e^T x + f > 0.                                                   (6)

Let us transform this optimization problem into a linear one of the form (5). This can easily be done by introducing auxiliary variables defined as

    y = x / (e^T x + f)   and   z = 1 / (e^T x + f).

Using these new variables, the objective function becomes c^T y + dz, and the constraints become

    Gx ⪯ h          →   Gy ⪯ hz
    Ax = b          →   Ay = bz
    e^T x + f > 0   →   z ≥ 0
    e^T y + fz = 1      (new constraint).

Therefore, the optimization problem is given by

    minimize_{y,z}  c^T y + dz
    s.t.            Gy ⪯ hz
                    Ay = bz
                    z ≥ 0
                    e^T y + fz = 1.                                              (7)
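The transformation to (7) is easy to exercise in code. Below is a minimal sketch (not from the notes) using cvxpy on an invented instance, minimize (x_1 + 2)/(x_2 + 4) over the box 0 ⪯ x ⪯ 1; all problem data are illustrative assumptions. The original variable is recovered as x = y/z.

```python
import numpy as np
import cvxpy as cp

# Invented instance of (6): minimize (x1 + 2)/(x2 + 4) subject to 0 <= x <= 1,
# solved via the transformed LP (7). Optimum: x = (0, 1), ratio 2/5.
c, d = np.array([1.0, 0.0]), 2.0
e, f = np.array([0.0, 1.0]), 4.0
G = np.vstack([np.eye(2), -np.eye(2)])   # encodes 0 <= x <= 1 as Gx <= h
h = np.array([1.0, 1.0, 0.0, 0.0])

y = cp.Variable(2)
z = cp.Variable()
constraints = [G @ y <= z * h, z >= 0, e @ y + f * z == 1]
prob = cp.Problem(cp.Minimize(c @ y + d * z), constraints)
prob.solve()

x = y.value / z.value                    # recover the original variable
print("optimal x:", x, "  optimal ratio:", prob.value)
```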

Quadratic Programming: the quadratic optimization problem is given by

    minimize    (1/2) x^T P x + q^T x + r
    s.t.        Gx ⪯ h
                Ax = b,                                                          (8)

where P ∈ S^n_+.

Second-Order Cone Programming: the second-order cone optimization problem is given by

    minimize    f^T x
    s.t.        ‖A_i x + b_i‖_2 ≤ c_i^T x + d_i,   i = 1,...,n
                Fx = g.                                                          (9)

Geometric Programming: this class of problems is not convex, but we will transform it into a convex one.

Definition 2 The function f(x) = c x_1^{q_1} x_2^{q_2} ··· x_n^{q_n}, where c > 0, is called a monomial.

Definition 3 The function f(x) = Σ_{k=1}^K c_k x_1^{q_{1,k}} x_2^{q_{2,k}} ··· x_n^{q_{n,k}}, where c_k > 0, is called a posynomial.

The geometric optimization problem is given by

    minimize    f_0(x)
    s.t.        f_i(x) ≤ 1,   i = 1,...,m
                h_j(x) = 1,   j = 1,...,p,                                       (10)

where f_0(x), {f_i(x)}_{i=1}^m are posynomial functions, and {h_j(x)}_{j=1}^p are monomial functions.
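Before turning to a geometric-programming example, here is a minimal cvxpy sketch of the quadratic program (8), with invented data, added for illustration; the SOCP constraints in (9) can be expressed analogously with cp.norm(A_i @ x + b_i, 2) <= c_i @ x + d_i.

```python
import numpy as np
import cvxpy as cp

# Invented instance of the QP (8): minimize 1/2 x^T P x + q^T x
# subject to x1 + x2 = 1 and x >= 0 (a special case of Gx <= h, Ax = b).
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # P is in S^2_+ (both eigenvalues positive)
q = np.array([-1.0, 0.0])

x = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [cp.sum(x) == 1, x >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()
print("x* =", x.value, "  optimal value =", prob.value)
```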

Example 6 Consider the following optimization problem:

    minimize_{x,y,z}  x/y
    s.t.              2 ≤ x ≤ 3
                      x^2 + 3y/z ≤ √y
                      x/y = z^2.

This is a geometric optimization problem.

Geometric programs are not (in general) convex optimization problems, but they can be transformed into convex problems by a change of variables and a transformation of the objective and constraint functions. We will use the change of variables x_i = e^{y_i}. Using this transformation, the geometric optimization problem given in (10) can be rewritten as

    minimize_y  log [ Σ_{k=1}^{K_0} exp( a_{0,k}^T y + b_{0,k} ) ]
    s.t.        log [ Σ_{k=1}^{K_i} exp( a_{i,k}^T y + b_{i,k} ) ] ≤ 0,   i = 1,...,m
                c_j^T y + d_j = 0,   j = 1,...,p.                                (11)

Since the objective function and the inequality constraints are convex (log-sum-exp functions), and the equality constraints are affine, this is a convex optimization problem.

Semidefinite Programming (SDP): the optimization problem is given by

    minimize    e^T x
    s.t.        x_1 F_1 + ... + x_n F_n ⪯ 0
                Ax = b,                                                          (12)

where F_1,...,F_n are symmetric matrices and ⪯ denotes the matrix inequality with respect to the cone of positive semidefinite matrices.
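To close, here is a minimal sketch of the change of variables leading to (11), on a tiny invented GP: minimize x_1 + x_2 subject to x_1^{-1} x_2^{-1} ≤ 1 (i.e., x_1 x_2 ≥ 1) with x ≻ 0. After the substitution x_i = e^{y_i}, this becomes a log-sum-exp problem that cvxpy handles directly; the instance and variable names are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# GP after the log change of variables x_i = e^{y_i}:
#   minimize  log(e^{y1} + e^{y2})   (transformed posynomial objective x1 + x2)
#   s.t.      -y1 - y2 <= 0          (transformed monomial constraint x1^{-1} x2^{-1} <= 1)
y = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.log_sum_exp(y)), [-y[0] - y[1] <= 0])
prob.solve()

x = np.exp(y.value)                  # map back to the original variables
print("x* =", x, "  original objective x1 + x2 =", x.sum())   # expect x* = (1, 1)
```

An SDP of the form (12) can be posed in the same toolbox using cvxpy's matrix-inequality operator, e.g. x[0] * F1 + x[1] * F2 << 0.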