Division of the Humanities and Social Sciences

Convex analysis and profit/cost/support functions

KC Border
October 2004. Revised January 2009.

Let $A$ be a subset of $\mathbb{R}^m$. Convex analysts may give one of two definitions for the support function of $A$, as either an infimum or a supremum. Recall that the supremum of a set of real numbers is its least upper bound and the infimum is its greatest lower bound. If $A$ has no upper bound, then by convention $\sup A = \infty$, and if $A$ has no lower bound, then $\inf A = -\infty$. For the empty set, $\sup \emptyset = -\infty$ and $\inf \emptyset = \infty$; otherwise $\inf A \le \sup A$. (This makes a kind of sense: every real number is an upper bound for the empty set, since there is no member of the empty set that is greater. Thus the least upper bound must be $-\infty$. Similarly, every real number is also a lower bound, so the infimum is $\infty$.) Thus support functions (as infima or suprema) may assume the values $\infty$ and $-\infty$. By convention, $0 \cdot \infty = 0$; if $\lambda > 0$ is a real number, then $\lambda \cdot \infty = \infty$ and $\lambda \cdot (-\infty) = -\infty$; and if $\lambda < 0$ is a real number, then $\lambda \cdot \infty = -\infty$ and $\lambda \cdot (-\infty) = \infty$. These conventions are used to simplify statements involving positive homogeneity.

Rather than choose one definition, I shall give the two definitions different names based on their economic interpretation.

Profit maximization. The profit function $\pi_A$ of $A$ is defined by
\[ \pi_A(p) = \sup_{y \in A} p \cdot y. \]

Cost minimization. The cost function $c_A$ of $A$ is defined by
\[ c_A(p) = \inf_{y \in A} p \cdot y. \]

Clearly $\pi_A(p) = -c_A(-p)$, and likewise $c_A(p) = -\pi_A(-p)$.

$\pi_A$ is convex, lower semicontinuous, and positively homogeneous of degree 1. $c_A$ is concave, upper semicontinuous, and positively homogeneous of degree 1.
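For a finite set $A$ the supremum and infimum are attained, so both functions are easy to compute directly. The following minimal Python sketch (the set $A$ and the price vector are invented for illustration) computes both and checks the identity $\pi_A(p) = -c_A(-p)$.

```python
import numpy as np

# Hypothetical finite "production set": each row is a netput vector y in R^2
# (negative components are inputs, positive components are outputs).
A = np.array([[-1.0, 2.0],
              [-2.0, 3.5],
              [-4.0, 5.0]])

def profit(p, A):
    """pi_A(p) = sup_{y in A} p . y; a max, since A is finite."""
    return np.max(A @ p)

def cost(p, A):
    """c_A(p) = inf_{y in A} p . y; a min, since A is finite."""
    return np.min(A @ p)

p = np.array([3.0, 2.0])
print(profit(p, A))    # pi_A(p)
print(-cost(-p, A))    # -c_A(-p), which equals pi_A(p)
```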
Positive homogeneity of $\pi_A$ is obvious given the conventions on multiplication of infinities. To see that it is convex, let $g_x$ be the linear (hence convex) function defined by $g_x(p) = x \cdot p$. Then $\pi_A(p) = \sup_{x \in A} g_x(p)$. Since the pointwise supremum of a family of convex functions is convex, $\pi_A$ is convex. Also each $g_x$ is continuous, hence lower semicontinuous, and the supremum of a family of lower semicontinuous functions is lower semicontinuous. See my notes on maximization.

The set $\{p \in \mathbb{R}^m : \pi_A(p) < \infty\}$ is a closed convex cone, called the effective domain of $\pi_A$, and denoted $\operatorname{dom} \pi_A$. The effective domain will always include the point $0$, provided $A$ is nonempty. By convention $\pi_\emptyset(p) = -\infty$ for all $p$, and we say that $\pi_\emptyset$ is improper. If $A = \mathbb{R}^m$, then $0$ is the only point in the effective domain of $\pi_A$.

It is easy to see that the effective domain $\operatorname{dom} \pi_A$ of $\pi_A$ is a cone, that is, if $p \in \operatorname{dom} \pi_A$, then $\lambda p \in \operatorname{dom} \pi_A$ for every $\lambda \ge 0$. (Note that $\{0\}$ is a (degenerate) cone.) It is also straightforward to show that $\operatorname{dom} \pi_A$ is convex. For if $\pi_A(p) < \infty$ and $\pi_A(q) < \infty$, then for $0 \le \lambda \le 1$, by convexity of $\pi_A$ we have
\[ \pi_A(\lambda p + (1-\lambda) q) \le \lambda \pi_A(p) + (1-\lambda) \pi_A(q) < \infty. \]
The closedness of $\operatorname{dom} \pi_A$ is more difficult.

Positive homogeneity of $c_A$ is obvious given the conventions on multiplication of infinities. To see that it is concave, let $g_x$ be the linear (hence concave) function defined by $g_x(p) = x \cdot p$. Then $c_A(p) = \inf_{x \in A} g_x(p)$. Since the pointwise infimum of a family of concave functions is concave, $c_A$ is concave. Also each $g_x$ is continuous, hence upper semicontinuous, and the infimum of a family of upper semicontinuous functions is upper semicontinuous. See my notes on maximization.

The set $\{p \in \mathbb{R}^m : c_A(p) > -\infty\}$ is a closed convex cone, called the effective domain of $c_A$, and denoted $\operatorname{dom} c_A$. The effective domain will always include the point $0$, provided $A$ is nonempty. By convention $c_\emptyset(p) = \infty$ for all $p$, and we say that $c_\emptyset$ is improper. If $A = \mathbb{R}^m$, then $0$ is the only point in the effective domain of $c_A$.

It is easy to see that the effective domain $\operatorname{dom} c_A$ of $c_A$ is a cone, that is, if $p \in \operatorname{dom} c_A$, then $\lambda p \in \operatorname{dom} c_A$ for every $\lambda \ge 0$. (Note that $\{0\}$ is a (degenerate) cone.) It is also straightforward to show that $\operatorname{dom} c_A$ is convex. For if $c_A(p) > -\infty$ and $c_A(q) > -\infty$, then for $0 \le \lambda \le 1$, by concavity of $c_A$ we have
\[ c_A(\lambda p + (1-\lambda) q) \ge \lambda c_A(p) + (1-\lambda) c_A(q) > -\infty. \]
The closedness of $\operatorname{dom} c_A$ is more difficult.

Recoverability

Separating Hyperplane Theorem. If $A$ is a closed convex set, and $x$ does not belong to $A$, then there is a nonzero $p$ satisfying $p \cdot x > \pi_A(p)$.

Separating Hyperplane Theorem. If $A$ is a closed convex set, and $x$ does not belong to $A$, then there is a nonzero $p$ satisfying $p \cdot x < c_A(p)$.
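The convexity and positive homogeneity of $\pi_A$ are easy to see numerically. The following sketch spot-checks both properties at sampled prices; the randomly generated finite $A$ is my own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))      # hypothetical finite A in R^3

def pi(p):
    return np.max(A @ p)          # pi_A(p) = max_{y in A} p . y

# Positive homogeneity: pi(lam * p) = lam * pi(p) for lam >= 0.
p = rng.normal(size=3)
lam = 2.7
assert np.isclose(pi(lam * p), lam * pi(p))

# Convexity, spot-checked at sampled points:
# pi(t p + (1 - t) q) <= t pi(p) + (1 - t) pi(q).
for _ in range(1000):
    p, q = rng.normal(size=3), rng.normal(size=3)
    t = rng.uniform()
    assert pi(t * p + (1 - t) * q) <= t * pi(p) + (1 - t) * pi(q) + 1e-9
```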
For a proof see my notes. Note that given our conventions, $A$ may be empty. From these theorems we easily get the next proposition.

Proposition. The closed convex hull $\overline{\operatorname{co}} A$ of $A$ is given by
\[ \overline{\operatorname{co}} A = \{ y \in \mathbb{R}^m : (\forall p \in \mathbb{R}^m)\, [\, p \cdot y \le \pi_A(p) \,] \} = \{ y \in \mathbb{R}^m : (\forall p \in \mathbb{R}^m)\, [\, p \cdot y \ge c_A(p) \,] \}. \]

Now let $f$ be a continuous real-valued function defined on a closed convex cone $D$. We can extend $f$ to all of $\mathbb{R}^m$ by setting it to $\infty$ outside of $D$ if $f$ is convex, or to $-\infty$ if $f$ is concave.

If $f$ is convex and positively homogeneous of degree 1, define
\[ A = \{ y \in \mathbb{R}^m : (\forall p \in \mathbb{R}^m)\, [\, p \cdot y \le f(p) \,] \}. \]
Then $A$ is closed and convex and $f = \pi_A$.

If $f$ is concave and positively homogeneous of degree 1, define
\[ A = \{ y \in \mathbb{R}^m : (\forall p \in \mathbb{R}^m)\, [\, p \cdot y \ge f(p) \,] \}. \]
Then $A$ is closed and convex and $f = c_A$.

Extremizers are subgradients

If $\tilde y(p)$ maximizes $p$ over $A$, that is, if $\tilde y(p)$ belongs to $A$ and $p \cdot \tilde y(p) \ge p \cdot y$ for all $y \in A$, then $\tilde y(p)$ is a subgradient of $\pi_A$ at $p$. That is,
\[ \pi_A(p) + \tilde y(p) \cdot (q - p) \le \pi_A(q) \quad \text{for all } q \in \mathbb{R}^m. \]
To see this, note that for any $q \in \mathbb{R}^m$, by definition we have $q \cdot \tilde y(p) \le \pi_A(q)$. Now add $\pi_A(p) - p \cdot \tilde y(p) = 0$ to the left-hand side to get the subgradient inequality.

Note that $\pi_A(p)$ may be finite for a closed convex set $A$, and yet there may be no maximizer. For instance, let $A = \{ (x, y) \in \mathbb{R}^2 : x < 0,\ y < 0,\ xy \ge 1 \}$. Then for $p = (1, 0)$, we have $\pi_A(p) = 0$, as $(1, 0) \cdot (-1/n, -n) = -1/n \to 0$, but $(1, 0) \cdot (x, y) = x < 0$ for each $(x, y) \in A$. Thus there is no maximizer in $A$.

If $\hat y(p)$ minimizes $p$ over $A$, that is, if $\hat y(p)$ belongs to $A$ and $p \cdot \hat y(p) \le p \cdot y$ for all $y \in A$, then $\hat y(p)$ is a supergradient of $c_A$ at $p$. That is,
\[ c_A(p) + \hat y(p) \cdot (q - p) \ge c_A(q) \quad \text{for all } q \in \mathbb{R}^m. \]
To see this, note that for any $q \in \mathbb{R}^m$, by definition we have $q \cdot \hat y(p) \ge c_A(q)$. Now add $c_A(p) - p \cdot \hat y(p) = 0$ to the left-hand side to get the supergradient inequality.

Note that $c_A(p)$ may be finite for a closed convex set $A$, and yet there may be no minimizer. For instance, let $A = \{ (x, y) \in \mathbb{R}^2 : x > 0,\ y > 0,\ xy \ge 1 \}$. Then for $p = (1, 0)$, we have $c_A(p) = 0$, as $(1, 0) \cdot (1/n, n) = 1/n \to 0$, but $(1, 0) \cdot (x, y) = x > 0$ for each $(x, y) \in A$. Thus there is no minimizer in $A$.
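The proposition suggests a crude numerical membership test for the closed convex hull: a point $y$ lies in $\overline{\operatorname{co}} A$ only if $p \cdot y \le \pi_A(p)$ for every $p$, and sampling many directions $p$ approximates the universal quantifier. A sketch follows; the finite $A$, the direction count, and the tolerance are all illustrative choices, and random sampling can miss a separating direction for points barely outside the hull.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 2))                  # hypothetical finite A

def in_closed_convex_hull(y, n_dirs=20000, tol=1e-9):
    """Sampled test of: p . y <= pi_A(p) for all p.
    Necessary for membership; only approximately sufficient,
    since a finite sample of directions may miss a separator."""
    P = rng.normal(size=(n_dirs, 2))          # sampled price directions
    pi_vals = (P @ A.T).max(axis=1)           # pi_A(p) for each sampled p
    return bool(np.all(P @ y <= pi_vals + tol))

print(in_closed_convex_hull(A.mean(axis=0)))        # True: the average of A is in co A
print(in_closed_convex_hull(A.max(axis=0) + 1.0))   # False: beyond every point of A
```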
It turns out that if there is no maximizer of $p$, then $\pi_A$ has no subgradient at $p$. In fact, the following is true, but I won't present the proof, which relies on the Separating Hyperplane Theorem. (See my notes for a proof.)

Theorem. If $A$ is closed and convex, then $x$ is a subgradient of $\pi_A$ at $p$ if and only if $x \in A$ and $x$ maximizes $p$ over $A$.

It turns out that if there is no minimizer of $p$, then $c_A$ has no supergradient at $p$. In fact, the following is true, but I won't present the proof, which relies on the Separating Hyperplane Theorem. (See my notes for a proof.)

Theorem. If $A$ is closed and convex, then $x$ is a supergradient of $c_A$ at $p$ if and only if $x \in A$ and $x$ minimizes $p$ over $A$.

Comparative statics

Consequently, if $A$ is closed and convex, and $\tilde y(p)$ is the unique maximizer of $p$ over $A$, then $\pi_A$ is differentiable at $p$ and
\[ \tilde y(p) = \nabla \pi_A(p). \]
Likewise, if $A$ is closed and convex, and $\hat y(p)$ is the unique minimizer of $p$ over $A$, then $c_A$ is differentiable at $p$ and
\[ \hat y(p) = \nabla c_A(p). \]

To see that differentiability of $\pi_A$ implies the profit maximizer is unique, consider $q$ of the form $p \pm \lambda e_i$, where $e_i$ is the $i$-th unit coordinate vector, and $\lambda > 0$. The subgradient inequality for $q = p + \lambda e_i$ is
\[ \lambda\, \tilde y(p) \cdot e_i \le \pi_A(p + \lambda e_i) - \pi_A(p), \]
and for $q = p - \lambda e_i$ it is
\[ -\lambda\, \tilde y(p) \cdot e_i \le \pi_A(p - \lambda e_i) - \pi_A(p). \]
Dividing these by $\lambda$ and $-\lambda$ respectively yields
\[ \frac{\pi_A(p) - \pi_A(p - \lambda e_i)}{\lambda} \le \tilde y_i(p) \le \frac{\pi_A(p + \lambda e_i) - \pi_A(p)}{\lambda}. \]
Letting $\lambda \downarrow 0$ yields $\tilde y_i(p) = D_i \pi_A(p)$. Thus if $\pi_A$ is twice differentiable at $p$, that is, if the maximizer $\tilde y(p)$ is differentiable with respect to $p$, then
\[ D_j \tilde y_i(p) = D_{ij} \pi_A(p). \]

To see that differentiability of $c_A$ implies the cost minimizer is unique, consider $q$ of the form $p \pm \lambda e_i$, where $e_i$ is the $i$-th unit coordinate vector, and $\lambda > 0$. The supergradient inequality for $q = p + \lambda e_i$ is
\[ \lambda\, \hat y(p) \cdot e_i \ge c_A(p + \lambda e_i) - c_A(p), \]
and for $q = p - \lambda e_i$ it is
\[ -\lambda\, \hat y(p) \cdot e_i \ge c_A(p - \lambda e_i) - c_A(p). \]
Dividing these by $\lambda$ and $-\lambda$ respectively yields
\[ \frac{c_A(p + \lambda e_i) - c_A(p)}{\lambda} \le \hat y_i(p) \le \frac{c_A(p) - c_A(p - \lambda e_i)}{\lambda}. \]
Letting $\lambda \downarrow 0$ yields $\hat y_i(p) = D_i c_A(p)$. Thus if $c_A$ is twice differentiable at $p$, that is, if the minimizer $\hat y(p)$ is differentiable with respect to $p$, then
\[ D_j \hat y_i(p) = D_{ij} c_A(p). \]
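This derivative property is easy to check numerically: for a finite $A$ and a generic $p$, the maximizer is unique and $\pi_A$ is locally linear, so a finite-difference gradient recovers the maximizer exactly. A sketch with an invented random $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))        # hypothetical finite set of netput vectors

def pi(p):
    return np.max(A @ p)

def y_tilde(p):
    return A[np.argmax(A @ p)]       # the maximizer, unique for generic p

p = rng.normal(size=3)
h = 1e-6
# Central-difference gradient of pi_A at p, coordinate by coordinate.
grad = np.array([(pi(p + h * e) - pi(p - h * e)) / (2 * h) for e in np.eye(3)])
print(np.allclose(grad, y_tilde(p)))   # True: grad pi_A(p) = y~(p)
```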
Consequently, the matrix $[\, D_j \tilde y_i(p) \,]$ is positive semidefinite. In particular, $D_i \tilde y_i \ge 0$. Likewise, the matrix $[\, D_j \hat y_i(p) \,]$ is negative semidefinite. In particular, $D_i \hat y_i \le 0$.

Even without twice differentiability, from the subgradient inequality we have
\[ \pi_A(p) + \tilde y(p) \cdot (q - p) \le \pi_A(q), \qquad \pi_A(q) + \tilde y(q) \cdot (p - q) \le \pi_A(p), \]
so adding the two inequalities, we get
\[ (\tilde y(p) - \tilde y(q)) \cdot (p - q) \ge 0. \]
Thus if $q$ differs from $p$ only in its $i$-th component, say $q_i = p_i + \Delta p_i$, then we have $\Delta \tilde y_i\, \Delta p_i \ge 0$. Dividing by the positive quantity $(\Delta p_i)^2$ does not change this inequality, so $\Delta \tilde y_i / \Delta p_i \ge 0$.

Even without twice differentiability, from the supergradient inequality we have
\[ c_A(p) + \hat y(p) \cdot (q - p) \ge c_A(q), \qquad c_A(q) + \hat y(q) \cdot (p - q) \ge c_A(p), \]
so adding the two inequalities, we get
\[ (\hat y(p) - \hat y(q)) \cdot (p - q) \le 0. \]
Thus if $q$ differs from $p$ only in its $i$-th component, say $q_i = p_i + \Delta p_i$, then we have $\Delta \hat y_i\, \Delta p_i \le 0$. Dividing by the positive quantity $(\Delta p_i)^2$ does not change this inequality, so $\Delta \hat y_i / \Delta p_i \le 0$.

Cyclical monotonicity and empirical restrictions

A real function $g : X \subset \mathbb{R} \to \mathbb{R}$ is increasing if $x \ge y \implies g(x) \ge g(y)$. That is, $g(x) - g(y)$ and $x - y$ have the same sign. An equivalent way to say this is
\[ (g(x) - g(y))(x - y) \ge 0 \quad \text{for all } x, y, \]
which can be rewritten as
\[ g(x)(y - x) + g(y)(x - y) \le 0 \quad \text{for all } x, y. \]

A real function $g : X \subset \mathbb{R} \to \mathbb{R}$ is decreasing if $x \ge y \implies g(x) \le g(y)$. That is, $g(x) - g(y)$ and $x - y$ have the opposite sign. An equivalent way to say this is
\[ (g(x) - g(y))(x - y) \le 0 \quad \text{for all } x, y, \]
which can be rewritten as
\[ g(x)(y - x) + g(y)(x - y) \ge 0 \quad \text{for all } x, y. \]
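The inequality $(\tilde y(p) - \tilde y(q)) \cdot (p - q) \ge 0$ is the "law of supply," and it is testable without any derivatives. A spot-check in the same invented finite-$A$ setting as the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 3))          # hypothetical finite production set

def y_tilde(p):
    return A[np.argmax(A @ p)]         # profit-maximizing netputs at prices p

# Law of supply: (y~(p) - y~(q)) . (p - q) >= 0 for every pair of prices.
for _ in range(1000):
    p, q = rng.normal(size=3), rng.normal(size=3)
    assert (y_tilde(p) - y_tilde(q)) @ (p - q) >= -1e-9
```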
We can generalize this to a function $g$ from $\mathbb{R}^m$ into $\mathbb{R}^m$ like this:

Definition. A function $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is monotone (increasing) if
\[ g(x) \cdot (y - x) + g(y) \cdot (x - y) \le 0 \quad \text{for all } x, y \in X. \]

We have already seen that the (sub)gradient of $\pi_A$ is monotone (increasing). More is true.

Definition. A mapping $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is cyclically monotone (increasing) if for every cycle $x_0, x_1, \ldots, x_n, x_{n+1} = x_0$ in $X$, we have
\[ g(x_0) \cdot (x_1 - x_0) + \cdots + g(x_n) \cdot (x_{n+1} - x_n) \le 0. \]

Proposition. If $f : X \subset \mathbb{R}^m \to \mathbb{R}$ is convex, and $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is a selection from the subdifferential of $f$, that is, if $g(x)$ is a subgradient of $f$ at $x$ for every $x$, then $g$ is cyclically monotone (increasing).

Proof: Let $x_0, x_1, \ldots, x_n, x_{n+1} = x_0$ be a cycle in $X$. From the subgradient inequality at $x_i$, we have
\[ f(x_i) + g(x_i) \cdot (x_{i+1} - x_i) \le f(x_{i+1}), \]
or
\[ g(x_i) \cdot (x_{i+1} - x_i) \le f(x_{i+1}) - f(x_i), \]
for each $i = 0, \ldots, n$. Summing gives
\[ \sum_{i=0}^{n} g(x_i) \cdot (x_{i+1} - x_i) \le 0, \]
where the right-hand side takes into account $f(x_{n+1}) = f(x_0)$.

Definition. A function $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is monotone (decreasing) if
\[ g(x) \cdot (y - x) + g(y) \cdot (x - y) \ge 0 \quad \text{for all } x, y \in X. \]

We have already seen that the (super)gradient of $c_A$ is monotone (decreasing). More is true.

Definition. A mapping $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is cyclically monotone (decreasing) if for every cycle $x_0, x_1, \ldots, x_n, x_{n+1} = x_0$ in $X$, we have
\[ g(x_0) \cdot (x_1 - x_0) + \cdots + g(x_n) \cdot (x_{n+1} - x_n) \ge 0. \]

Proposition. If $f : X \subset \mathbb{R}^m \to \mathbb{R}$ is concave, and $g : X \subset \mathbb{R}^m \to \mathbb{R}^m$ is a selection from the superdifferential of $f$, that is, if $g(x)$ is a supergradient of $f$ at $x$ for every $x$, then $g$ is cyclically monotone (decreasing).

Proof: Let $x_0, x_1, \ldots, x_n, x_{n+1} = x_0$ be a cycle in $X$. From the supergradient inequality at $x_i$, we have
\[ f(x_i) + g(x_i) \cdot (x_{i+1} - x_i) \ge f(x_{i+1}), \]
or
\[ g(x_i) \cdot (x_{i+1} - x_i) \ge f(x_{i+1}) - f(x_i), \]
for each $i = 0, \ldots, n$. Summing gives
\[ \sum_{i=0}^{n} g(x_i) \cdot (x_{i+1} - x_i) \ge 0, \]
where the right-hand side takes into account $f(x_{n+1}) = f(x_0)$.
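A quick numerical illustration of the first proposition, again with an invented finite $A$ and $\tilde y$ as the selection from the subdifferential of $\pi_A$: every price cycle should have a nonpositive cycle sum.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 3))       # hypothetical finite production set

def y_tilde(p):
    return A[np.argmax(A @ p)]      # a selection from the subdifferential of pi_A

# Cyclical monotonicity (increasing): for any cycle x0, ..., xn, x_{n+1} = x0,
# sum_i g(x_i) . (x_{i+1} - x_i) <= 0.
for _ in range(200):
    xs = list(rng.normal(size=(5, 3)))
    xs.append(xs[0])                # close the cycle
    s = sum(y_tilde(xs[i]) @ (xs[i + 1] - xs[i]) for i in range(5))
    assert s <= 1e-9
```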
KC Border Convex analysis and profit/cost/support functions 7 Corollary The profit maximizing points correspondence ŷ is cyclically monotonic (increasing) Remarkably the converse is true Theorem (Rockafellar) Let X be a convex set in R m and let φ: X R m be a cyclically monotone (increasing) correspondence (that is, if every selection from φ is a cyclically monotone (increasing) function) Then there is a lower semicontinuous convex function f : X R such that φ(x) f(x) for every x Corollary The cost minimizing points correspondence ŷ is cyclically monotonic (decreasing) Remarkably the converse is true Theorem (Rockafellar) Let X be a convex set in R m and let φ: X R m be a cyclically monotone (decreasing) correspondence (that is, if every selection from φ is a cyclically monotone (decreasing) function) Then there is an upper semicontinuous concave function f : X R such that φ(x) f(x) for every x