Convex analysis and profit/cost/support functions

Division of the Humanities and Social Sciences
KC Border
October 2004. Revised January 2009.

Let A be a subset of R^m. Convex analysts may give one of two definitions for the support function of A, as either an infimum or a supremum. Recall that the supremum of a set of real numbers is its least upper bound and the infimum is its greatest lower bound. If A has no upper bound, then by convention sup A = ∞, and if A has no lower bound, then inf A = −∞. For the empty set, sup ∅ = −∞ and inf ∅ = ∞; otherwise inf A ≤ sup A. (This makes a kind of sense: every real number is an upper bound for the empty set, since there is no member of the empty set that is greater. Thus the least upper bound must be −∞. Similarly, every real number is also a lower bound, so the infimum is ∞.) Thus support functions (as infima or suprema) may assume the values ∞ and −∞.

By convention, 0 · ∞ = 0; if λ > 0 is a real number, then λ · ∞ = ∞ and λ · (−∞) = −∞; and if λ < 0 is a real number, then λ · ∞ = −∞ and λ · (−∞) = ∞. These conventions are used to simplify statements involving positive homogeneity.

Rather than choose one definition, I shall give the two definitions different names based on their economic interpretation.

Profit maximization. The profit function π_A of A is defined by

    π_A(p) = sup_{y ∈ A} p · y.

Cost minimization. The cost function c_A of A is defined by

    c_A(p) = inf_{y ∈ A} p · y.

Clearly π_A(p) = −c_A(−p), and equally clearly c_A(p) = −π_A(−p).

π_A is convex, lower semicontinuous, and positively homogeneous of degree 1. c_A is concave, upper semicontinuous, and positively homogeneous of degree 1.
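For concreteness, here is a minimal Python/NumPy sketch of these definitions for a finite A, where the supremum and infimum are attained and reduce to a maximum and a minimum; the helper names profit_function and cost_function are illustrative, not standard.

    import numpy as np

    def profit_function(A, p):
        # pi_A(p) = sup over y in A of p . y; for a finite A the sup is a max
        return np.max(np.asarray(A, dtype=float) @ p)

    def cost_function(A, p):
        # c_A(p) = inf over y in A of p . y; for a finite A the inf is a min
        return np.min(np.asarray(A, dtype=float) @ p)

    # Example: A is the vertex set of the unit square in R^2.
    A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    p = np.array([2.0, -1.0])

    # The identities pi_A(p) = -c_A(-p) and c_A(p) = -pi_A(-p).
    assert profit_function(A, p) == -cost_function(A, -p)
    assert cost_function(A, p) == -profit_function(A, -p)

    # Positive homogeneity of degree 1: pi_A(lam * p) = lam * pi_A(p) for lam > 0.
    lam = 3.5
    assert np.isclose(profit_function(A, lam * p), lam * profit_function(A, p))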

Positive homogeneity of π_A is obvious given the conventions on multiplication of infinities. To see that π_A is convex, let g_x be the linear (hence convex) function defined by g_x(p) = x · p. Then π_A(p) = sup_{x ∈ A} g_x(p). Since the pointwise supremum of a family of convex functions is convex, π_A is convex. Also each g_x is continuous, hence lower semicontinuous, and the supremum of a family of lower semicontinuous functions is lower semicontinuous. See my notes on maximization.

The set {p ∈ R^m : π_A(p) < ∞} is a convex cone, called the effective domain of π_A and denoted dom π_A. The effective domain always includes the point 0 provided A is nonempty (since π_A(0) = 0). By convention π_∅(p) = −∞ for all p, and we say that π_∅ is improper. If A = R^m, then 0 is the only point in the effective domain of π_A.

It is easy to see that the effective domain dom π_A of π_A is a cone: if p ∈ dom π_A, then λp ∈ dom π_A for every λ ≥ 0. (Note that {0} is a (degenerate) cone.) It is also straightforward to show that dom π_A is convex: if π_A(p) < ∞ and π_A(q) < ∞, then for 0 ≤ λ ≤ 1, by convexity of π_A,

    π_A(λp + (1 − λ)q) ≤ λ π_A(p) + (1 − λ) π_A(q) < ∞.

Whether dom π_A is closed is a more delicate question; it need not be closed in general.

Positive homogeneity of c_A is likewise obvious given the conventions on multiplication of infinities. To see that c_A is concave, let g_x be the linear (hence concave) function defined by g_x(p) = x · p. Then c_A(p) = inf_{x ∈ A} g_x(p). Since the pointwise infimum of a family of concave functions is concave, c_A is concave. Also each g_x is continuous, hence upper semicontinuous, and the infimum of a family of upper semicontinuous functions is upper semicontinuous. See my notes on maximization.

The set {p ∈ R^m : c_A(p) > −∞} is a convex cone, called the effective domain of c_A and denoted dom c_A. It always includes the point 0 provided A is nonempty (since c_A(0) = 0). By convention c_∅(p) = ∞ for all p, and we say that c_∅ is improper. If A = R^m, then 0 is the only point in the effective domain of c_A.

As before, dom c_A is a cone: if p ∈ dom c_A, then λp ∈ dom c_A for every λ ≥ 0. It is also convex: if c_A(p) > −∞ and c_A(q) > −∞, then for 0 ≤ λ ≤ 1, by concavity of c_A,

    c_A(λp + (1 − λ)q) ≥ λ c_A(p) + (1 − λ) c_A(q) > −∞.

Again, dom c_A need not be closed.
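To make the pointwise-supremum argument concrete, the following sketch (again with a finite A, so π_A is a maximum of finitely many linear functions) checks the convexity inequality for π_A at randomly drawn price vectors; the names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 3))        # a finite set of 20 points in R^3

    def pi_A(p):
        # pointwise maximum of the linear (hence convex) maps g_x(p) = x . p
        return np.max(A @ p)

    # Convexity: pi_A(l*p + (1-l)*q) <= l*pi_A(p) + (1-l)*pi_A(q).
    for _ in range(1000):
        p, q = rng.normal(size=3), rng.normal(size=3)
        l = rng.uniform()
        assert pi_A(l * p + (1 - l) * q) <= l * pi_A(p) + (1 - l) * pi_A(q) + 1e-12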

Recoverability

Separating Hyperplane Theorem. If A is a closed convex set and x does not belong to A, then there is a nonzero p satisfying p · x > π_A(p); equivalently (replace p by −p), there is a nonzero p satisfying p · x < c_A(p).

For a proof see my notes. Note that given our conventions, A may be empty. From this theorem we easily get the next proposition.

The closed convex hull co A of A is given by

    co A = { y ∈ R^m : (∀ p ∈ R^m) [ p · y ≤ π_A(p) ] } = { y ∈ R^m : (∀ p ∈ R^m) [ p · y ≥ c_A(p) ] }.

Now let f be a continuous real-valued function defined on a closed convex cone D. We can extend f to all of R^m by setting it to ∞ outside of D if f is convex, or to −∞ outside of D if f is concave.

If f is convex and positively homogeneous of degree 1, define

    A = { y ∈ R^m : (∀ p ∈ R^m) [ p · y ≤ f(p) ] }.

Then A is closed and convex and f = π_A.

If f is concave and positively homogeneous of degree 1, define

    A = { y ∈ R^m : (∀ p ∈ R^m) [ p · y ≥ f(p) ] }.

Then A is closed and convex and f = c_A.

Extremizers are subgradients

If ỹ(p) maximizes p over A, that is, if ỹ(p) belongs to A and p · ỹ(p) ≥ p · y for all y ∈ A, then ỹ(p) is a subgradient of π_A at p. That is,

    π_A(p) + ỹ(p) · (q − p) ≤ π_A(q)  for all q ∈ R^m.

If ŷ(p) minimizes p over A, that is, if ŷ(p) belongs to A and p · ŷ(p) ≤ p · y for all y ∈ A, then ŷ(p) is a supergradient of c_A at p. That is,

    c_A(p) + ŷ(p) · (q − p) ≥ c_A(q)  for all q ∈ R^m.

To see the first claim, note that for any q ∈ R^m, by definition we have q · ỹ(p) ≤ π_A(q). Now add π_A(p) − p · ỹ(p) = 0 to the left-hand side to get the subgradient inequality. The second claim is dual: for any q ∈ R^m, by definition q · ŷ(p) ≥ c_A(q); now add c_A(p) − p · ŷ(p) = 0 to the left-hand side to get the supergradient inequality.

Note that π_A(p) may be finite for a closed convex set A, and yet there may be no maximizer. For instance, let A = { (x, y) ∈ R² : x < 0, y < 0, xy ≥ 1 }. Then for p = (1, 0) we have π_A(p) = 0, since (1, 0) · (−1/n, −n) = −1/n → 0, but (1, 0) · (x, y) = x < 0 for each (x, y) ∈ A. Thus there is no maximizer in A.

Similarly, c_A(p) may be finite for a closed convex set A, and yet there may be no minimizer. For instance, let A = { (x, y) ∈ R² : x > 0, y > 0, xy ≥ 1 }. Then for p = (1, 0) we have c_A(p) = 0, since (1, 0) · (1/n, n) = 1/n → 0, but (1, 0) · (x, y) = x > 0 for each (x, y) ∈ A. Thus there is no minimizer in A.
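The subgradient inequality is easy to check numerically when a maximizer exists, for example for a finite A (names illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 4))           # finite A, so a maximizer always exists

    def pi_A(p):
        return np.max(A @ p)

    def y_tilde(p):
        return A[np.argmax(A @ p)]         # a maximizer of p over A

    # Subgradient inequality: pi_A(p) + y_tilde(p) . (q - p) <= pi_A(q) for all q.
    for _ in range(1000):
        p, q = rng.normal(size=4), rng.normal(size=4)
        assert pi_A(p) + y_tilde(p) @ (q - p) <= pi_A(q) + 1e-10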

It turns out that if there is no maximizer of p over A, then π_A has no subgradient at p. In fact the following is true, but I won't present the proof, which relies on the Separating Hyperplane Theorem. (See my notes for a proof.)

Theorem. If A is closed and convex, then x is a subgradient of π_A at p if and only if x ∈ A and x maximizes p over A.

Likewise, if there is no minimizer of p over A, then c_A has no supergradient at p, and we have the dual result.

Theorem. If A is closed and convex, then x is a supergradient of c_A at p if and only if x ∈ A and x minimizes p over A.

Comparative statics

Consequently, if A is closed and convex and ỹ(p) is the unique maximizer of p over A, then π_A is differentiable at p and

    ỹ(p) = ∇π_A(p).

Similarly, if A is closed and convex and ŷ(p) is the unique minimizer of p over A, then c_A is differentiable at p and

    ŷ(p) = ∇c_A(p).

To see that differentiability of π_A implies that the profit maximizer is unique, consider q of the form p ± λe_i, where e_i is the i-th unit coordinate vector and λ > 0. The subgradient inequality for q = p + λe_i is

    λ ỹ_i(p) ≤ π_A(p + λe_i) − π_A(p),

and for q = p − λe_i it is

    −λ ỹ_i(p) ≤ π_A(p − λe_i) − π_A(p).

Dividing these by λ and −λ respectively yields

    ( π_A(p) − π_A(p − λe_i) ) / λ ≤ ỹ_i(p) ≤ ( π_A(p + λe_i) − π_A(p) ) / λ.

Letting λ ↓ 0 yields ỹ_i(p) = D_i π_A(p), so every maximizer coincides with the gradient. Thus if π_A is twice differentiable at p, that is, if the maximizer ỹ(p) is differentiable with respect to p, then

    D_j ỹ_i(p) = D_ij π_A(p).

The same argument applies to c_A. The supergradient inequality for q = p + λe_i is

    λ ŷ_i(p) ≥ c_A(p + λe_i) − c_A(p),

and for q = p − λe_i it is

    −λ ŷ_i(p) ≥ c_A(p − λe_i) − c_A(p).

Dividing by λ and −λ respectively yields

    ( c_A(p + λe_i) − c_A(p) ) / λ ≤ ŷ_i(p) ≤ ( c_A(p) − c_A(p − λe_i) ) / λ.

Letting λ ↓ 0 yields ŷ_i(p) = D_i c_A(p). Thus if c_A is twice differentiable at p, that is, if the minimizer ŷ(p) is differentiable with respect to p, then

    D_j ŷ_i(p) = D_ij c_A(p).
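For a finite A and a generic p (so that the maximizer is unique), the identity ỹ(p) = ∇π_A(p) can be checked with finite differences; this is only an illustration, and the step size and tolerance are ad hoc.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(25, 3))

    def pi_A(p):
        return np.max(A @ p)

    p = rng.normal(size=3)                 # generic p: the maximizer is (almost surely) unique
    y_tilde = A[np.argmax(A @ p)]

    # Central finite differences approximate the gradient of pi_A at p.
    h = 1e-6
    grad = np.array([(pi_A(p + h * e) - pi_A(p - h * e)) / (2 * h) for e in np.eye(3)])

    assert np.allclose(grad, y_tilde, atol=1e-4)   # y_tilde(p) = grad pi_A(p)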

Consequently, the matrix [ D_j ỹ_i(p) ] is positive semidefinite; in particular, D_i ỹ_i ≥ 0. Likewise the matrix [ D_j ŷ_i(p) ] is negative semidefinite; in particular, D_i ŷ_i ≤ 0.

Even without twice differentiability, from the subgradient inequality we have

    π_A(p) + ỹ(p) · (q − p) ≤ π_A(q)
    π_A(q) + ỹ(q) · (p − q) ≤ π_A(p),

so adding the two inequalities we get

    ( ỹ(p) − ỹ(q) ) · (p − q) ≥ 0.

Similarly, from the supergradient inequality we have

    c_A(p) + ŷ(p) · (q − p) ≥ c_A(q)
    c_A(q) + ŷ(q) · (p − q) ≥ c_A(p),

so adding the two inequalities we get

    ( ŷ(p) − ŷ(q) ) · (p − q) ≤ 0.

Thus if q differs from p only in its i-th component, say q_i = p_i + Δp_i, then we have Δỹ_i Δp_i ≥ 0. Dividing by the positive quantity (Δp_i)² does not change this inequality, so Δỹ_i / Δp_i ≥ 0. In the same way, Δŷ_i Δp_i ≤ 0, so Δŷ_i / Δp_i ≤ 0.
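The monotonicity inequality for the maximizer (the "law of supply") can likewise be verified directly for a finite A (illustration only):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(40, 5))

    def y_tilde(p):
        return A[np.argmax(A @ p)]          # a profit-maximizing point at prices p

    # Monotonicity of the maximizer: (y(p) - y(q)) . (p - q) >= 0.
    for _ in range(1000):
        p, q = rng.normal(size=5), rng.normal(size=5)
        assert (y_tilde(p) - y_tilde(q)) @ (p - q) >= -1e-10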

Cyclical monotonicity and empirical restrictions

A real function g : X ⊂ R → R is increasing if x ≥ y implies g(x) ≥ g(y). That is, g(x) − g(y) and x − y have the same sign. An equivalent way to say this is

    ( g(x) − g(y) )( x − y ) ≥ 0 for all x, y,

which can be rewritten as

    g(x)(y − x) + g(y)(x − y) ≤ 0 for all x, y.

A real function g : X ⊂ R → R is decreasing if x ≥ y implies g(x) ≤ g(y). That is, g(x) − g(y) and x − y have opposite signs. An equivalent way to say this is

    ( g(x) − g(y) )( x − y ) ≤ 0 for all x, y,

which can be rewritten as

    g(x)(y − x) + g(y)(x − y) ≥ 0 for all x, y.

We can generalize this to a function g from R^m into R^m like this:

Definition. A function g : X ⊂ R^m → R^m is monotone (increasing) if

    g(x) · (y − x) + g(y) · (x − y) ≤ 0 for all x, y ∈ X,

and monotone (decreasing) if

    g(x) · (y − x) + g(y) · (x − y) ≥ 0 for all x, y ∈ X.

We have already seen that the (sub)gradient of π_A is monotone (increasing) and that the (super)gradient of c_A is monotone (decreasing). More is true.

Definition. A mapping g : X ⊂ R^m → R^m is cyclically monotone (increasing) if for every cycle x_0, x_1, ..., x_n, x_{n+1} = x_0 in X we have

    g(x_0) · (x_1 − x_0) + ··· + g(x_n) · (x_{n+1} − x_n) ≤ 0,

and cyclically monotone (decreasing) if the reverse inequality holds for every such cycle.

If f : X ⊂ R^m → R is convex, and g : X ⊂ R^m → R^m is a selection from the subdifferential of f, that is, if g(x) is a subgradient of f at x for every x, then g is cyclically monotone (increasing).

Proof: Let x_0, x_1, ..., x_n, x_{n+1} = x_0 be a cycle in X. From the subgradient inequality at x_i we have

    f(x_i) + g(x_i) · (x_{i+1} − x_i) ≤ f(x_{i+1}),

or

    g(x_i) · (x_{i+1} − x_i) ≤ f(x_{i+1}) − f(x_i),

for each i = 0, ..., n. Summing gives

    ∑_{i=0}^{n} g(x_i) · (x_{i+1} − x_i) ≤ 0,

where the zero on the right-hand side takes into account f(x_{n+1}) = f(x_0).

Dually, if f : X ⊂ R^m → R is concave, and g is a selection from the superdifferential of f, that is, if g(x) is a supergradient of f at x for every x, then g is cyclically monotone (decreasing): the supergradient inequality gives g(x_i) · (x_{i+1} − x_i) ≥ f(x_{i+1}) − f(x_i) for each i, and summing around the cycle gives ∑_{i=0}^{n} g(x_i) · (x_{i+1} − x_i) ≥ 0.
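Here is a direct numerical check of cyclical monotonicity for the profit-maximizing selection from a finite A, over randomly drawn cycles of prices (illustration only):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(30, 3))

    def y_tilde(p):
        return A[np.argmax(A @ p)]

    # Cyclical monotonicity (increasing):
    # sum_i y(p_i) . (p_{i+1} - p_i) <= 0 around any cycle p_0, ..., p_n, p_{n+1} = p_0.
    for _ in range(200):
        P = rng.normal(size=(6, 3))                       # a cycle of 6 price vectors
        total = sum(y_tilde(P[i]) @ (P[(i + 1) % 6] - P[i]) for i in range(6))
        assert total <= 1e-10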

Corollary. The profit-maximizing correspondence ỹ is cyclically monotone (increasing), and the cost-minimizing correspondence ŷ is cyclically monotone (decreasing).

Remarkably, the converse is true.

Theorem (Rockafellar). Let X be a convex set in R^m and let φ be a cyclically monotone (increasing) correspondence from X into R^m (that is, every selection from φ is a cyclically monotone (increasing) function). Then there is a lower semicontinuous convex function f : X → R such that φ(x) ⊂ ∂f(x), the subdifferential of f at x, for every x.

Dually, if φ is a cyclically monotone (decreasing) correspondence from X into R^m, then there is an upper semicontinuous concave function f : X → R such that φ(x) ⊂ ∂f(x), the superdifferential of f at x, for every x.
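Rockafellar's proof constructs f as a supremum over chains. The following rough sketch renders a finite-data version of that construction in Python; the function name convex_potential and the relaxation scheme are choices made here purely for illustration, with the data generated by profit maximization so that cyclical monotonicity holds.

    import numpy as np

    def convex_potential(Z, G):
        # Given points Z[t] in R^m and values G[t] of a cyclically monotone
        # (increasing) map at those points, build a convex f (a maximum of
        # affine functions) with G[t] a subgradient of f at Z[t].
        T = len(Z)
        V = np.full(T, -np.inf)
        V[0] = 0.0                                 # chains start at Z[0]
        for _ in range(T):                         # Bellman-Ford style relaxation;
            for i in range(T):                     # cyclic monotonicity rules out
                for j in range(T):                 # positive cycles, so this settles
                    V[j] = max(V[j], V[i] + G[i] @ (Z[j] - Z[i]))
        return lambda x: max(V[j] + G[j] @ (np.asarray(x) - Z[j]) for j in range(T))

    # Data generated by profit maximization over a finite set A, hence cyclically monotone.
    rng = np.random.default_rng(5)
    A = rng.normal(size=(20, 3))
    Z = rng.normal(size=(8, 3))                    # observed price vectors
    G = np.array([A[np.argmax(A @ p)] for p in Z]) # observed maximizers

    f = convex_potential(Z, G)
    # Each observed maximizer is a subgradient of f at the corresponding price.
    for t in range(len(Z)):
        q = rng.normal(size=3)
        assert f(Z[t]) + G[t] @ (q - Z[t]) <= f(q) + 1e-9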