A Curious Property of Convex Functions and Mechanism Design

Sergiu Hart[*]

August 11, 2017

Abstract

The solution of a simple problem on convex functions that has nothing to do with mechanism design (namely, the largest convex function with given values on the axes) makes use of the payoff functions of mechanism design.

[*] September 2012 (first version); October 2013 (revised and expanded); November 2016 (minor corrections). The author thanks Noam Nisan, Phil Reny, and Benji Weiss for useful discussions. Research partially supported by an ERC Advanced Investigator Grant. The Hebrew University of Jerusalem (Center for the Study of Rationality, Institute of Mathematics, and Department of Economics). E-mail: hart@huji.ac.il  Web site: http://www.ma.huji.ac.il/hart

1 The S-Transform

Let $b : \mathbb{R}^d_+ \to \mathbb{R}$ be a real convex function defined on the nonnegative orthant of the $d$-dimensional Euclidean space; without loss of generality assume that $b(0) = 0$. For each $z \in \mathbb{R}^d_+$ let
$$s(z) := b'(z; z) - b(z),$$
where $b'(z; z) := \lim_{\delta \downarrow 0} (b(z + \delta z) - b(z))/\delta$ is the directional derivative of $b$ at $z$ in the direction $z$ (see Rockafellar 1970). The function $s$ satisfies $s(0) = 0$ and $s(z) = z \cdot \nabla b(z) - b(z)$ at all points $z$ where $b$ is differentiable (i.e., at almost every $z$). When $d = 1$, we have $s(z) = z b'(z) - b(z)$, and $s$ is a nondecreasing function (for instance, when $b$ is $C^2$ we have $s'(z) = z b''(z) \ge 0$); that is however no longer true when $d > 1$ (cf. Hart and Reny 2015).

Let $d = 2$. For every $z \in \mathbb{R}^2_+$, let $\beta(t) := b(tz)/t$; then $\beta'(t) = s(tz)/t^2$, and so one can reconstruct the function $b$ from the function $s$:[1]
$$b(z) = \int_0^1 \frac{s(tz)}{t^2}\, dt.$$
Let $Sb$ denote the function $s$ obtained from $b$; we will call it the S-transform of $b$.

[1] Assume for instance that $b(z) = 0$ for all $z$ in a neighborhood of $0$.
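The reconstruction formula is easy to test numerically. The following is a minimal sketch (the function $b$ below is my own illustrative choice, not from the paper): it takes $b(x, y) = [x + y - 1]_+$, which vanishes in a neighborhood of $0$ as required by footnote [1], computes $s$ by finite differences, and recovers $b(z)$ from $\int_0^1 s(tz)/t^2\, dt$.

    # Numerical sketch (illustrative, not from the paper): the S-transform and
    # the reconstruction b(z) = int_0^1 s(tz)/t^2 dt for one convex function
    # that vanishes near 0.
    import numpy as np
    from scipy.integrate import quad

    def b(z):
        # a buyer-payoff-style convex function: b(x, y) = [x + y - 1]_+
        x, y = z
        return max(0.0, x + y - 1.0)

    def s(z, eps=1e-6):
        # S-transform s(z) = z . grad b(z) - b(z), gradient by central differences
        z = np.asarray(z, dtype=float)
        grad = np.array([(b(z + eps * e) - b(z - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
        return float(z @ grad - b(z))

    z = np.array([1.2, 0.7])                      # any point with x + y > 1
    recon, _ = quad(lambda t: s(t * z) / t**2, 1e-9, 1.0,
                    points=[1.0 / z.sum()])       # kink of t -> b(tz) at t = 1/(x+y)
    print(b(z), recon)                            # both are (about) 0.9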

2 A Curious Property of the s Functions

Let $b_1(x)$ and $b_2(y)$ be two nondecreasing convex functions defined on $\mathbb{R}_+$, with $b_1(0) = b_2(0) = 0$. Assume for simplicity that $b_1$ and $b_2$ are continuously differentiable (i.e., in $C^1$; see below for the extension to the nondifferentiable case). Let $b^*(x,y)$ be the largest convex function $b$ on $\mathbb{R}^2_+$ such that $b(x, 0) = b_1(x)$ and $b(0, y) = b_2(y)$; we will refer to this as the two-axes condition. The function $b^*$ is well defined since there is always a function satisfying the two-axes condition (e.g., $b(x,y) = b_1(x) + b_2(y)$), and the maximum of the collection of such convex functions is a convex function.

Define[2]
$$s_1(x) := x b_1'(x) - b_1(x), \qquad s_2(y) := y b_2'(y) - b_2(y),$$
$$s^*(x,y) := x b^*_x(x,y) + y b^*_y(x,y) - b^*(x,y).$$
Thus $s_1 := S b_1$, $s_2 := S b_2$, and $s^* = S b^*$. We write $\bar\lambda$ for $1 - \lambda$.

[2] The two partial derivatives of $b^*$ are denoted $b^*_x$ and $b^*_y$.

Theorem 1  Assume that[3] $\sup_x s_1(x) = \sup_y s_2(y) > 0$. Then for every $x, y > 0$ there is $0 < \lambda < 1$ such that
$$b^*(x,y) = \lambda\, b_1\left(\frac{x}{\lambda}\right) + \bar\lambda\, b_2\left(\frac{y}{\bar\lambda}\right),$$
$$s^*(x,y) = s_1\left(\frac{x}{\lambda}\right) = s_2\left(\frac{y}{\bar\lambda}\right),$$
$$b^*_x(x,y) = b_1'\left(\frac{x}{\lambda}\right), \quad\text{and}\quad b^*_y(x,y) = b_2'\left(\frac{y}{\bar\lambda}\right).$$

[3] See below for the extension to the general case.

Thus, $b^*$ is the envelope of the $b_i$ functions along equi-$s$ lines. What appears curious and intriguing here is that the solution of a simple problem on convex functions that has nothing to do with mechanism design (namely, the largest convex function with given values on the axes) involves the payoff functions from mechanism design (with $s$ the seller payoff function and $b$ the buyer payoff function).
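Before turning to the proof, here is a small numerical illustration of Theorem 1 (a sketch; the functions $b_1(x) = x^2$ and $b_2(y) = 2y^2$ are my own choice, not from the paper). For them $s_1(x) = x^2$ and $s_2(y) = 2y^2$, and minimizing $\lambda b_1(x/\lambda) + \bar\lambda b_2(y/\bar\lambda)$ over $\lambda$ (this is formula (1) in the proof below) recovers $b^*(x,y) = (x + \sqrt{2}\, y)^2$, with equal $s$-values at the optimum.

    # Numerical check of Theorem 1 (a sketch with illustrative b1, b2).
    import numpy as np
    from scipy.optimize import minimize_scalar

    b1 = lambda x: x**2
    b2 = lambda y: 2 * y**2
    s1 = lambda x: x**2            # s1(x) = x b1'(x) - b1(x)
    s2 = lambda y: 2 * y**2        # s2(y) = y b2'(y) - b2(y)

    def b_star(x, y):
        # minimize lam*b1(x/lam) + (1-lam)*b2(y/(1-lam)) over 0 < lam < 1
        g = lambda lam: lam * b1(x / lam) + (1 - lam) * b2(y / (1 - lam))
        res = minimize_scalar(g, bounds=(1e-9, 1 - 1e-9), method='bounded')
        return res.fun, res.x

    x, y = 1.5, 0.8
    val, lam = b_star(x, y)
    print(val, (x + np.sqrt(2) * y)**2)       # both ~ b*(x, y)
    print(s1(x / lam), s2(y / (1 - lam)))     # equal (up to the optimizer's tolerance)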

Proof. Extend $b_1$ and $b_2$ to $\mathbb{R}^2_+$: put $b_1(x, y) := b_1(x)$ when $y = 0$ and $b_1(x, y) := +\infty$ otherwise; similarly, put $b_2(x, y) := b_2(y)$ when $x = 0$ and $b_2(x, y) := +\infty$ otherwise. Then $b_1$ and $b_2$ are convex functions on $\mathbb{R}^2_+$; let $\hat b := \mathrm{conv}\{b_1, b_2\}$ be their convex hull (see Rockafellar 1970, page 37), i.e., the greatest convex function $\hat b$ such that $\hat b(x,y) \le b_i(x,y)$ for $i = 1, 2$. The function $\hat b$ is given by[4] $\mathrm{epi}\, \hat b = \mathrm{conv}(\mathrm{epi}\, b_1 \cup \mathrm{epi}\, b_2)$, and satisfies (see Theorem 5.6 in Rockafellar 1970)
$$\hat b(x,y) = \inf\Big\{ \lambda b_1(x_1, y_1) + \bar\lambda b_2(x_2, y_2) : \lambda (x_1, y_1) + \bar\lambda (x_2, y_2) = (x, y),\ 0 \le \lambda \le 1 \Big\}.$$

[4] $\mathrm{epi}\, f$ denotes the epigraph of $f$, i.e., $\{(z, \alpha) \in \mathbb{R}^2_+ \times \mathbb{R} : f(z) \le \alpha\}$.

For $x, y > 0$, the expression $\lambda b_1(x_1, y_1) + \bar\lambda b_2(x_2, y_2)$ is finite only when $0 < \lambda < 1$, $(x_1, y_1) = (x/\lambda, 0)$, and $(x_2, y_2) = (0, y/\bar\lambda)$, and so
$$\hat b(x,y) = \inf_{0 < \lambda < 1} \left\{ \lambda\, b_1\left(\frac{x}{\lambda}\right) + \bar\lambda\, b_2\left(\frac{y}{\bar\lambda}\right) \right\}; \qquad (1)$$
for $y = 0$, it is finite only when $\lambda = 1$, and so $\hat b(x, 0) = b_1(x)$; and for $x = 0$, only when $\lambda = 0$, and so $\hat b(0, y) = b_2(y)$. Thus $\hat b$ is in fact the greatest convex function satisfying the two-axes condition, i.e., $\hat b = b^*$.

Next, the derivative of $\lambda b_1(x/\lambda) + \bar\lambda b_2(y/\bar\lambda)$ with respect to $\lambda$ is
$$b_1\left(\frac{x}{\lambda}\right) + \lambda\, b_1'\left(\frac{x}{\lambda}\right)\left(-\frac{x}{\lambda^2}\right) - b_2\left(\frac{y}{\bar\lambda}\right) + \bar\lambda\, b_2'\left(\frac{y}{\bar\lambda}\right)\frac{y}{\bar\lambda^2} = s_2\left(\frac{y}{\bar\lambda}\right) - s_1\left(\frac{x}{\lambda}\right).$$
This is a nondecreasing function of[5] $\lambda$, and it vanishes when
$$s_1\left(\frac{x}{\lambda}\right) = s_2\left(\frac{y}{\bar\lambda}\right),$$
which thus yields its minimal value. Finally, using the envelope theorem gives $b^*_x(x,y) = \lambda b_1'(x/\lambda) \cdot (1/\lambda) = b_1'(x/\lambda)$ and $b^*_y(x,y) = \bar\lambda b_2'(y/\bar\lambda) \cdot (1/\bar\lambda) = b_2'(y/\bar\lambda)$, and thus $s^*(x,y) = \lambda s_1(x/\lambda) + \bar\lambda s_2(y/\bar\lambda) = s_1(x/\lambda) = s_2(y/\bar\lambda)$.

[5] The expression $s_2(y/\bar\lambda) - s_1(x/\lambda)$ is nonpositive as $\lambda \to 0$ and nonnegative as $\lambda \to 1$ (recall that $s_i(0) = 0 < s_i(t)$ for $t$ large enough).

Remarks. (a) Geometrically, the graph of the function $b^*$ is obtained by connecting with straight lines all pairs of points $((x, 0), b_1(x))$ and $((0, y), b_2(y))$ that satisfy $s_1(x) = s_2(y)$.

(b) If $b_1$ and $b_2$ are the buyer payoff functions in two single-good IC and IR mechanisms (thus $b_1'(x), b_2'(y) \le 1$ for all $x, y$), then $s_1$ and $s_2$ are the corresponding seller payoff functions.[6] In this case the functions $b^*$ and $s^*$ are the payoff functions of the buyer and the seller, respectively, in a two-good IC and IR mechanism. Moreover, along each line connecting $(x, 0)$ with $(0, y)$ such that $s_1(x) = s_2(y)$, the corresponding menu item $(q_1, q_2, s)$ is constant: $q_1(x,y) = b_1'(x)$, $q_2(x,y) = b_2'(y)$, and $s(x,y) = s_1(x) = s_2(y)$. In particular, the mechanism corresponding to $b^*$ is monotonic (i.e., $s^*$ is a nondecreasing function); moreover, the collection of all allocations $q(x,y) = (q_1(x,y), q_2(x,y))$ is well-ordered, i.e., for any $(x,y)$ and $(x',y')$, either $q(x,y) \le q(x',y')$ or $q(x,y) \ge q(x',y')$.

[6] A function $b : \mathbb{R}_+ \to \mathbb{R}$ is a buyer payoff function iff it is convex, its derivatives $b'(x)$ lie in the interval $[0, 1]$, and $b(0) = 0$. Such a function $b(x)$ lies in the convex hull of the functions $[x - p]_+$ for $p \ge 0$ and the identically $0$ function. The corresponding seller payoff function is a nondecreasing function $s$ with $s(0) = 0$, which lies in the convex hull of the functions $p\, \mathbf{1}_{x \ge p}$ for all $p \ge 0$ and the identically $0$ function (without loss of generality we have made $s$ continuous from the right).

(c) Alternative characterization: For every $c \ge 0$, let $h_c(x,y)$ be the largest affine function with $h_c(0, 0) = -c$ such that $h_c(x, 0) \le b_1(x)$ and $h_c(0, y) \le b_2(y)$ for all $x, y \ge 0$. Then $b^*(x,y) = \sup_c h_c(x,y)$.

(d) If, say, $\sup_x s_1(x) =: M_1 < M_2 := \sup_y s_2(y)$, then let $\bar y$ be such that $s_2(\bar y) = M_1$; the characterization of Theorem 1 holds for all $(x, y)$ with $y < \bar y$, and for $y \ge \bar y$ we have $b^*(x,y) = x \lim_{t \to \infty} b_1'(t) + b_2(y)$ (the infimum in (1) is reached as $\lambda \to 0$).
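A short numerical illustration of the alternative characterization in remark (c) (a sketch; the quadratic functions $b_1(x) = x^2$ and $b_2(y) = 2y^2$ are again my own choice): for each $c$ the largest admissible slopes of $h_c$ are computed numerically, and the supremum over $c$ reproduces $b^*(x,y) = (x + \sqrt{2}\, y)^2$.

    # Illustration of remark (c): b*(x,y) = sup_c h_c(x,y), where h_c is the
    # largest affine function with value -c at the origin lying below b1 and b2
    # on the axes.  (Sketch with illustrative b1, b2.)
    import numpy as np
    from scipy.optimize import minimize_scalar

    b1 = lambda x: x**2
    b2 = lambda y: 2 * y**2

    def h(c, x, y):
        # largest admissible slopes: a1 = inf_t (b1(t)+c)/t, a2 = inf_t (b2(t)+c)/t
        a1 = minimize_scalar(lambda t: (b1(t) + c) / t,
                             bounds=(1e-6, 1e3), method='bounded').fun
        a2 = minimize_scalar(lambda t: (b2(t) + c) / t,
                             bounds=(1e-6, 1e3), method='bounded').fun
        return a1 * x + a2 * y - c

    x, y = 1.5, 0.8
    sup_c = max(h(c, x, y) for c in np.linspace(1e-3, 50, 5001))
    print(sup_c, (x + np.sqrt(2) * y)**2)     # both ~ b*(x, y)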

(e) If the functions $b_i$ are not $C^1$, then $b_i'$ and $s_i$ are defined almost everywhere. For every $t \ge 0$ let $x_t := \inf\{x : s_1(x) \ge t\}$ and $y_t := \inf\{y : s_2(y) \ge t\}$ (which could be $+\infty$); then the graph of $b^*(x,y)$ is obtained by connecting with straight lines all pairs of points $((x_t, 0), b_1(x_t))$ and $((0, y_t), b_2(y_t))$. This includes the case of (d) above where $\sup_x s_1(x)$ and $\sup_y s_2(y)$ may be different (if, say, $x_t = +\infty$ and $y_t$ is finite, then the line becomes $\{((x, y_t), b_2(y_t)) : x \ge 0\}$, i.e., in $(x, y)$ space it is parallel to the $x$-axis). Moreover, we have
$$s_1(x) = \int_0^\infty \mathbf{1}_{\left\{\frac{x}{x_t} \ge 1\right\}}\, dt, \qquad s_2(y) = \int_0^\infty \mathbf{1}_{\left\{\frac{y}{y_t} \ge 1\right\}}\, dt, \qquad s^*(x,y) = \int_0^\infty \mathbf{1}_{\left\{\frac{x}{x_t} + \frac{y}{y_t} \ge 1\right\}}\, dt,$$
and so, for random variables $X$ and $Y$,
$$\mathbf{E}[s_1(X)] = \int_0^\infty \mathbf{P}\left[\frac{X}{x_t} \ge 1\right] dt, \qquad \mathbf{E}[s_2(Y)] = \int_0^\infty \mathbf{P}\left[\frac{Y}{y_t} \ge 1\right] dt, \qquad \mathbf{E}[s^*(X, Y)] = \int_0^\infty \mathbf{P}\left[\frac{X}{x_t} + \frac{Y}{y_t} \ge 1\right] dt.$$

(f) In the symmetric case where $b_1(t) = b_2(t)$, it is easy to see that $b^*(x,y) = b_1(x + y) = b_2(x + y)$. This is used in Theorem 28 in Hart and Nisan (2012).
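The integral representation in (e) is easy to check numerically. Here is a sketch for the piecewise-linear function $b_1(x) = \frac{1}{2}[x - 2]_+ + \frac{1}{2}[x - 4]_+$ (my own illustrative choice), whose $s_1$ is a step function taking the values $0, 1, 3$.

    # Numerical illustration of s1(x) = int_0^infty 1_{x/x_t >= 1} dt with
    # x_t = inf{x : s1(x) >= t}, for an illustrative piecewise-linear b1.
    import numpy as np

    def s1(x):
        # s1 = x b1' - b1 is a step function: 0 on [0,2), 1 on [2,4), 3 on [4,oo)
        return np.where(x < 2, 0.0, np.where(x < 4, 1.0, 3.0))

    def x_t(t):
        # generalized inverse x_t = inf{x : s1(x) >= t}  (infinite for t > 3)
        return np.where(t <= 1, 2.0, np.where(t <= 3, 4.0, np.inf))

    t = np.linspace(1e-6, 5, 500_001)            # fine grid on [0, 5]; s1 is at most 3
    for x in (1.0, 3.0, 7.5):
        integral = (x / x_t(t) >= 1).mean() * (t[-1] - t[0])
        print(x, float(s1(x)), round(float(integral), 3))   # last two columns agree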

(g) When $b_1'(x) \in [0, 1]$ for all $x$, then $b_1$ lies in the closed convex hull generated by the functions $[x - p]_+$ for $p \ge 0$. Similarly for $b_2$. However, this does not imply that $b^*$ lies in the closed convex hull of the functions $[a_1 x + a_2 y - p]_+$ with $a_1, a_2 \in [0, 1]$ and $p \ge 0$. An example:
$$b_1(x) = \max\left\{0,\ \tfrac{1}{2} x - 1,\ x - 3\right\}, \qquad b_2(y) = \max\left\{0,\ \tfrac{2}{5} y - 1,\ y - 3\right\},$$
$$b^*(x, y) = \max\left\{0,\ \tfrac{1}{2} x + \tfrac{2}{5} y - 1,\ x + y - 3\right\}.$$
Then
$$b^*(x, y) = \frac{1}{3}\left[\frac{3}{2} x + \frac{6}{5} y - 3\right]_+ + \frac{2}{3}\left[\frac{3}{4} x + \frac{9}{10} y - 3\right]_+ = \frac{1}{2}\left[x + \frac{4}{5} y - 2\right]_+ + \frac{3}{5}\left[\frac{5}{6} x + y - \frac{10}{3}\right]_+$$
(in the first decomposition, where the weights satisfy $1/3 + 2/3 = 1$, we have the linear coefficients $3/2, 6/5 > 1$; in the second decomposition, where all linear coefficients are $\le 1$, we have $1/2 + 3/5 > 1$). Note that
$$b_1(x) = \frac{1}{2}[x - 2]_+ + \frac{1}{2}[x - 4]_+ \qquad\text{and}\qquad b_2(y) = \frac{2}{5}\left[y - \frac{5}{2}\right]_+ + \frac{3}{5}\left[y - \frac{10}{3}\right]_+.$$
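A quick numerical check of this example (a sketch): the three-piece function $b^*$ and its two decompositions coincide everywhere on a grid.

    # Check that the two decompositions of b*(x, y) agree with the max formula.
    import numpy as np

    pos = lambda z: np.maximum(z, 0.0)                       # [.]_+

    b_star = lambda x, y: np.maximum.reduce(
        [np.zeros_like(x), 0.5 * x + 0.4 * y - 1.0, x + y - 3.0])

    dec1 = lambda x, y: (1/3) * pos(1.5 * x + 1.2 * y - 3.0) + \
                        (2/3) * pos(0.75 * x + 0.9 * y - 3.0)
    dec2 = lambda x, y: (1/2) * pos(x + 0.8 * y - 2.0) + \
                        (3/5) * pos(5/6 * x + y - 10/3)

    x, y = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
    print(np.max(np.abs(dec1(x, y) - b_star(x, y))))         # ~ 0
    print(np.max(np.abs(dec2(x, y) - b_star(x, y))))         # ~ 0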

3 A Two-Good Revenue Maximization Problem

Theorem 2  Let $F$ be a two-dimensional cumulative distribution function with density function $f$. Assume that there is $a = (a_1, a_2) \ge 0$ such that $f(x,y) = 0$ when $x < a_1$ or $y < a_2$, and for $(x,y) \ge a$ the function $f(x,y)$ is differentiable and satisfies
$$x f_x(x,y) \le -\alpha_1 f(x,y) \quad\text{and}\quad y f_y(x,y) \le -\alpha_2 f(x,y) \qquad (2)$$
for some $\alpha_1, \alpha_2 \ge 0$ with $\alpha_1 + \alpha_2 = 3$. Then, to maximize revenue, it suffices to consider functions $b$ as obtained from Theorem 1.

Proof. Let $b$ correspond to a two-dimensional IC and IR mechanism; then
$$R(b, F) = \sup_{M > a_1, a_2} r_M(b), \quad\text{where}\quad r_M(b) := \int_{a_2}^{M}\int_{a_1}^{M} \big(x b_x(x,y) + y b_y(x,y) - b(x,y)\big) f(x,y)\, dx\, dy.$$
For each $y$ we integrate by parts the $x b_x(x,y) f(x,y)$ term:
$$\int_{a_1}^{M} b_x(x,y)\, x f(x,y)\, dx = \big[b(x,y)\, x f(x,y)\big]_{x=a_1}^{x=M} - \int_{a_1}^{M} b(x,y) \big(f(x,y) + x f_x(x,y)\big)\, dx$$
$$= M b(M, y) f(M, y) - a_1 b(a_1, y) f(a_1, y) - \int_{a_1}^{M} b(x,y) \big(f(x,y) + x f_x(x,y)\big)\, dx.$$
Similarly for the $y b_y(x,y) f(x,y)$ term; altogether (do not forget the $-b(x,y) f(x,y)$ term) we get
$$r_M(b) = -a_1 \int_{a_2}^{M} b(a_1, y) f(a_1, y)\, dy - a_2 \int_{a_1}^{M} b(x, a_2) f(x, a_2)\, dx$$
$$\qquad + M \int_{a_2}^{M} b(M, y) f(M, y)\, dy + M \int_{a_1}^{M} b(x, M) f(x, M)\, dx$$
$$\qquad - \int_{a_2}^{M}\int_{a_1}^{M} b(x,y) \big(\alpha_1 f(x,y) + x f_x(x,y)\big)\, dx\, dy - \int_{a_1}^{M}\int_{a_2}^{M} b(x,y) \big(\alpha_2 f(x,y) + y f_y(x,y)\big)\, dy\, dx$$
(we split the $3 b(x,y) f(x,y)$ term into two parts, $\alpha_1 b(x,y) f(x,y) + \alpha_2 b(x,y) f(x,y)$, which appear in the last two integrals). Fixing the functions $b(\cdot, a_2)$ and $b(a_1, \cdot)$, and thus the first two integrals above, in order to maximize $r_M(b)$ we should take $b(x,y)$ as large as possible (by (2), all the coefficients of $b$ in the other integrals are nonnegative), and thus $b$ is the maximal convex function with the given values on the axes. Finally, let $M \to \infty$.
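As an illustration of condition (2) (my own example, not from the paper), a product of two Pareto densities $f_i(t) = \frac{1}{2} t^{-3/2}$ on $[1, \infty)$ satisfies (2) with $\alpha_1 = \alpha_2 = 3/2$, in fact with equality; a symbolic check:

    # Illustrative check of condition (2) for a product of Pareto(1/2) densities
    # f_i(t) = (1/2) t^(-3/2) on [1, oo), with alpha_1 = alpha_2 = 3/2.
    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f = sp.Rational(1, 2) * x**sp.Rational(-3, 2) * sp.Rational(1, 2) * y**sp.Rational(-3, 2)
    a1 = a2 = sp.Rational(3, 2)                     # alpha_1 + alpha_2 = 3

    # condition (2): x f_x <= -alpha_1 f and y f_y <= -alpha_2 f on the support
    print(sp.simplify(x * sp.diff(f, x) + a1 * f))  # 0, so it holds with equality
    print(sp.simplify(y * sp.diff(f, y) + a2 * f))  # 0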

If $F$ is a product distribution (i.e., the goods' values are independent: $F = F_1 \times F_2$, with densities $f_1$ and $f_2$, respectively), then condition (2) becomes $x f_1'(x) \le -\alpha_1 f_1(x)$ and $y f_2'(y) \le -\alpha_2 f_2(y)$ for $\alpha_1 + \alpha_2 = 3$; cf. Tang and Wang (2017). If moreover $F_1 = F_2$ (i.e., i.i.d. goods), then $b$ is symmetric and[7] $b(x,y) = b(a, x + y - a) = b(x + y - a, a)$, and so $b$ corresponds to bundling; cf. Theorem 28 in Hart and Nisan (2012).

[7] Make the change of variables $x' = x - a$ and $y' = y - a$.

Remark. The resulting function $s$ is monotonic; cf. Hart and Reny (2015).

4 Higher Dimensions

4.1 One-Dimensional Axes Conditions

Let $b_i : \mathbb{R}_+ \to \mathbb{R}$ be convex functions with $b_i(0) = 0$, and let $b^* : \mathbb{R}^d_+ \to \mathbb{R}$ be the maximal convex function such that $b(x e_i) = b_i(x)$ for every $x \ge 0$ and $1 \le i \le d$, where $e_i$ is the $i$-th unit vector in $\mathbb{R}^d$. Theorem 1 and its proof easily generalize to any dimension $d$.

Theorem 3  Assume that $\sup_x s_i(x) = S > 0$ for all $i$. Then for every $x \in \mathbb{R}^d_+$ with all coordinates $x_i > 0$ there are $0 < \lambda_i < 1$ with $\sum_{i=1}^{d} \lambda_i = 1$ such that
$$b^*(x) = \sum_{i=1}^{d} \lambda_i\, b_i\left(\frac{x_i}{\lambda_i}\right),$$
$$s^*(x) = s_i\left(\frac{x_i}{\lambda_i}\right), \quad\text{and}\quad b^*_{x_i}(x) = b_i'\left(\frac{x_i}{\lambda_i}\right) \quad\text{for all } i.$$
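A numerical illustration of Theorem 3 (a sketch; the functions $b_i(t) = c_i t^2$ are my own choice, for which $s_i(t) = c_i t^2$ and the infimum works out to $b^*(x) = (\sum_i \sqrt{c_i}\, x_i)^2$, attained at $\lambda_i$ proportional to $\sqrt{c_i}\, x_i$):

    # Numerical check of Theorem 3 for d = 3 with b_i(t) = c_i t^2.
    import numpy as np

    rng = np.random.default_rng(1)
    c = np.array([1.0, 2.0, 0.5])
    x = np.array([1.0, 0.7, 1.3])
    obj = lambda lam: np.sum(lam * c * (x / lam) ** 2)   # sum_i lam_i b_i(x_i/lam_i)

    lam = np.sqrt(c) * x / (np.sqrt(c) @ x)              # candidate from Theorem 3
    print(obj(lam), (np.sqrt(c) @ x) ** 2)               # equal: this is b*(x)
    print(c * (x / lam) ** 2)                            # s_i(x_i/lam_i): all equal
    trials = rng.dirichlet(np.ones(3), size=10_000)      # random points of the simplex
    print(min(obj(l) for l in trials) >= obj(lam) - 1e-9)   # True: lam attains the infimum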

4.2 Higher-Dimensional Axes Conditions

In general, given boundary conditions $b_i$, the maximal convex function $b^*$ is given by (cf. (1))
$$b^*(x) = \inf\Big\{ \sum_i \lambda_i\, b_i(y^i) : \sum_i \lambda_i y^i = x,\ \sum_i \lambda_i = 1,\ \lambda_i \ge 0 \Big\}. \qquad (3)$$
The parallel of Theorems 1 and 3 is slightly more complicated. We illustrate it with the special case where $d = 3$ and the given boundary conditions are $b_1(0, x_2, x_3)$, $b_2(x_1, 0, x_3)$ and $b_3(x_1, x_2, 0)$. If the infimum in (3) is attained at a point where all $\lambda_i > 0$, then we have points $y^i$ (with $y^i_i = 0$ and $y^i_j \ge 0$ for $j \ne i$) such that $x = \sum_{i=1}^{3} \lambda_i y^i$, and $(\mu_1, \mu_2, \mu_3, \nu)$ such that
$$s_i(y^i) = \nu = s^*(x),$$
$$\frac{\partial b_i(y^i)}{\partial x_j} = \mu_j = \frac{\partial b^*(x)}{\partial x_j} \quad (j \ne i),$$
$$b^*(x) = \sum_{i=1}^{3} \lambda_i\, b_i(y^i).$$
(If some $\lambda_i = 0$ then the corresponding first-order conditions become inequalities.)

If in addition we are in the symmetric case where $b_i \equiv \beta$ for all $i$, then $b^*$ is also symmetric, and it is given by
$$b^*(x) = \begin{cases} \beta\left(\tfrac{1}{2}(x_1 + x_2 + x_3),\ \tfrac{1}{2}(x_1 + x_2 + x_3)\right), & \text{if } x_i \le \tfrac{1}{2}(x_1 + x_2 + x_3) \text{ for all } i,\\ \beta(x_j + x_k,\ x_i), & \text{if } x_i \ge \tfrac{1}{2}(x_1 + x_2 + x_3) \text{ for some } i. \end{cases}$$
Note that $x_1 \ge (x_1 + x_2 + x_3)/2$ is equivalent to $x_1 \ge x_2 + x_3$ (and in this case we get $\lambda_1 = 0$ in (3)).
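A small sanity check of the symmetric three-good formula (a sketch; the boundary function $\beta(u, v) = u^2 + v^2$ is my own choice): the formula agrees with $\beta$ on each coordinate hyperplane and is midpoint convex along random segments.

    # Sanity check of the symmetric three-good formula for b*.
    import numpy as np

    beta = lambda u, v: u**2 + v**2                 # a symmetric convex boundary function

    def b_star(x):
        x = np.sort(np.asarray(x, dtype=float))[::-1]    # x[0] is the largest coordinate
        sig = x.sum()
        if x[0] <= sig / 2:
            return beta(sig / 2, sig / 2)
        return beta(x[1] + x[2], x[0])

    rng = np.random.default_rng(0)
    p = rng.uniform(0, 5, size=2)
    print(b_star([p[0], p[1], 0.0]), beta(*p))      # boundary condition: the two agree
    u, v = rng.uniform(0, 5, size=(2, 3))
    print(b_star((u + v) / 2) <= (b_star(u) + b_star(v)) / 2 + 1e-9)   # True (convexity)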

References

Hart, S. and N. Nisan (2012), Approximate Revenue Maximization with Multiple Items, The Hebrew University of Jerusalem, Center for the Study of Rationality DP-606; arXiv 1204.1846; revised (2017).

Hart, S. and P. J. Reny (2015), Maximal Revenue with Multiple Goods: Nonmonotonicity and Other Observations, Theoretical Economics 10, 893–922.

Rockafellar, R. T. (1970), Convex Analysis, Princeton University Press.

Tang, P. and Z. Wang (2017), Optimal Mechanisms with Simple Menus, Journal of Mathematical Economics 69, 54–70.