ECON2285: Mathematical Economics


ECON2285: Mathematical Economics
Yulei Luo, SEF of HKU
September 9, 2017
Luo, Y. (SEF of HKU) ME September 9, 2017 1 / 81

Constrained Static Optimization. So far we have focused on finding the maximum or minimum value of a function without restricting the choice variables. In many economic problems, however, the choice variables are constrained by economic considerations. E.g., consumers maximize their utility functions subject to budget constraints; a firm minimizes its cost of production subject to the constraint of its production technology. Such constraints may lower the maximum (or raise the minimum) of the objective function being maximized (or minimized): because we cannot choose freely among all values of the choice variables, the objective function may not be as large as it otherwise could be. A constraint is said to be nonbinding if we could obtain the same level of the objective function with or without imposing it.

Finding the Stationary Values. For illustration, consider a consumer's choice problem: maximize utility u(x1, x2) = x1 x2 + 2 x1, (1) subject to the budget constraint 4 x1 + 2 x2 = 60. (2) This simple constrained problem can be solved by substituting the budget constraint into the objective function, without any new technique. The budget constraint can be rewritten as x2 = (60 − 4 x1)/2 = 30 − 2 x1, which, combined with the utility function, gives u(x1) = x1 (30 − 2 x1) + 2 x1. (3) Setting du/dx1 = 32 − 4 x1 = 0, we get x1* = 8 and x2* = 14. Since d²u/dx1² = −4 < 0, the stationary value constitutes a constrained maximum.
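The substitution step above can be sketched in a few lines of code (my own check, not part of the original notes): substitute x2 = 30 − 2 x1 into u and maximize the resulting single-variable function.

```python
def u_reduced(x1):
    # u(x1) = x1*(30 - 2*x1) + 2*x1 = 32*x1 - 2*x1**2
    return x1 * (30 - 2 * x1) + 2 * x1

# du/dx1 = 32 - 4*x1 = 0  =>  x1* = 8, then x2* = 30 - 2*x1* = 14
x1_star = 32 / 4
x2_star = 30 - 2 * x1_star

print(x1_star, x2_star, u_reduced(x1_star))  # 8.0 14.0 128.0
```

A quick comparison with nearby points (u(7.9), u(8.1)) confirms the stationary value is indeed a maximum.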

The Lagrange multiplier method. However, when the constraint is itself a complicated function, or when it cannot be solved to express one variable as an explicit function of the others, substitution and elimination are not enough to solve the constrained optimization problem. We therefore introduce a new method, the Lagrange multiplier (LM) method, to solve the constrained optimization problem. The essence of the LM method is to convert a constrained extremum problem into a form to which the FOCs of the free extremum problem can still be applied. In general, given an objective function z = f(x, y) (4) subject to the constraint g(x, y) = c, where c is a constant, (5) we can write the Lagrange function as Z = L(x, y, λ) = f(x, y) + λ [c − g(x, y)]. (6)

(Conti.) The symbol λ is called a Lagrange multiplier; how it is determined and interpreted will be discussed later. If c − g(x, y) = 0 always holds, the last term of Z vanishes regardless of the value of λ. Hence, finding the constrained maximum of z is equivalent to finding a critical value of Z. The question now is: how can we make the bracketed expression in (6) vanish? Let's proceed to do so, treating λ as an additional choice variable (in addition to x and y). From the Lagrange function (6), the FOCs are L_x = f_x − λ g_x = 0, (7) L_y = f_y − λ g_y = 0, (8) L_λ = c − g(x, y) = 0. (9) The final equation automatically guarantees satisfaction of the constraint. Since λ [c − g(x, y)] = 0 at such a point, the stationary values of Z in (6) must be identical with those of (4), subject to (5).

Reconsider the above consumer's choice problem. First, we can write the Lagrange function as Z = L(x1, x2, λ) = x1 x2 + 2 x1 + λ [60 − (4 x1 + 2 x2)]. (10) The FOCs are: L_{x1} = x2 + 2 − 4λ = 0, (11) L_{x2} = x1 − 2λ = 0, (12) L_λ = 60 − (4 x1 + 2 x2) = 0. (13) Solving the above equations for the critical values gives x1* = 8, x2* = 14, and λ* = 4. (14)
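Because the three FOCs (11)-(13) are linear in (x1, x2, λ), they can be solved mechanically. A minimal sketch (mine, using hand-rolled Gaussian elimination so no external library is assumed):

```python
def solve_linear(A, b):
    """Solve A x = b for a small square system by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Unknown order (x1, x2, lam):
#   x2 + 2 - 4*lam = 0   ->  0*x1 + 1*x2 - 4*lam = -2
#   x1 - 2*lam     = 0   ->  1*x1 + 0*x2 - 2*lam =  0
#   4*x1 + 2*x2    = 60
A = [[0, 1, -4], [1, 0, -2], [4, 2, 0]]
b = [-2, 0, 60]
x1, x2, lam = solve_linear(A, b)
print(x1, x2, lam)  # 8.0 14.0 4.0
```

This reproduces the critical values in (14); for nonlinear FOCs a numerical root-finder would be needed instead.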

Summary of the Procedure. Step 1: Form the Lagrange function Z = L(x, y, λ) = f(x, y) + λ [c − g(x, y)]. (15) Step 2: Find the critical points of the Lagrangian L(x, y, λ) by computing ∂L/∂x, ∂L/∂y, and ∂L/∂λ and setting each equal to 0 to solve for the optimal (x*, y*, λ*): ∂L/∂x = 0, ∂L/∂y = 0, and ∂L/∂λ = 0. Note that since λ just multiplies the constraint in the definition of L, the equation ∂L/∂λ = 0 is equivalent to the constraint c − g(x, y) = 0. Note also that by introducing the Lagrange multiplier λ into the constrained problem, we have transformed a two-variable constrained problem into the three-variable unconstrained problem of finding the critical points of the function L(x, y, λ).

Total Differential Approach. In the discussion of the free extremum of z = f(x, y), we learned that the necessary FOC can be stated in terms of the total differential dz: dz = f_x dx + f_y dy = 0. (16) This statement remains valid after adding the constraint g(x, y) = c. However, with the constraint, we can no longer take both dx and dy as arbitrary changes as before, because dx and dy are now dependent on each other: g(x, y) = c implies dg = g_x dx + g_y dy = 0. (17)

(Conti.) The FOC in terms of total differentials becomes dz = 0 subject to g(x, y) = c and g_x dx + g_y dy = 0. (18) In order to satisfy this necessary FOC, we must have f_x / g_x = f_y / g_y, (19) which, together with the constraint g(x, y) = c, provides two equations to solve for the critical values of x and y. Hence, the total differential approach yields the same FOCs as the Lagrange multiplier method. Note that the LM method gives the value of λ as a direct by-product.
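A quick numeric check (mine, not from the notes) that the tangency condition (19) holds at the optimum of the earlier example, u = x1 x2 + 2 x1 subject to 4 x1 + 2 x2 = 60, where (x1*, x2*) = (8, 14):

```python
x1, x2 = 8, 14
f_x = x2 + 2      # du/dx1 at the optimum
f_y = x1          # du/dx2 at the optimum
g_x, g_y = 4, 2   # gradient of the linear budget constraint

# The common ratio f_x/g_x = f_y/g_y is exactly the multiplier lambda = 4
assert f_x / g_x == f_y / g_y == 4
print(f_x / g_x)  # 4.0
```

This illustrates the remark that the LM method delivers λ as a by-product: λ is the common ratio in (19).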

An Interpretation of the Lagrange Multiplier. The Lagrange multiplier λ* measures the sensitivity of Z* (the value of Z at the optimum) to a change in the constraint. In other words, it gives a measure of the value of the scarce resource (the effect of an increase in c indicates how the optimal solution is affected by a relaxation of the constraint). Suppose we can express the optimal values (x*, y*, λ*) as implicit functions of c: x* = x*(c), y* = y*(c), λ* = λ*(c), all of which have continuous derivatives. Further, we have the following identities: f_x(x*, y*) − λ* g_x(x*, y*) = 0, (20) f_y(x*, y*) − λ* g_y(x*, y*) = 0, (21) c − g(x*, y*) = 0. (22)

(Conti.) The Lagrange function at the optimum can be written as Z* = L(x*, y*, λ*) = f(x*, y*) + λ* [c − g(x*, y*)], (23) which means that
dZ*/dc = f_x dx*/dc + f_y dy*/dc + (dλ*/dc) [c − g(x*, y*)] + λ* (1 − g_x dx*/dc − g_y dy*/dc)
= (f_x − λ* g_x) dx*/dc + (f_y − λ* g_y) dy*/dc + λ*
= λ*,
where the first two terms vanish by the FOCs (20) and (21), and the term multiplying dλ*/dc vanishes by (22).
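The envelope result dZ*/dc = λ* can be illustrated numerically for the earlier consumer problem (my sketch; the closed forms below come from solving the FOCs for a general budget c): raising the budget from c = 60 to c = 61 should raise the optimal utility by roughly λ* = 4.

```python
def optimum(c):
    # From the FOCs: x1 = 2*lam, x2 = 4*lam - 2, and the budget
    # 4*x1 + 2*x2 = c gives 16*lam - 4 = c.
    lam = (c + 4) / 16
    x1, x2 = 2 * lam, 4 * lam - 2
    return x1 * x2 + 2 * x1  # optimal utility Z*(c)

delta = optimum(61) - optimum(60)
print(optimum(60), delta)  # 128.0 and a change close to lambda = 4
```

The discrete change slightly exceeds 4 because dZ*/dc = λ* is a marginal (derivative) statement, not an exact unit-change one.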

Generalization: n-Variable and Multi-Constraint Case. The optimization problem can be formed as max (or min) over {x1, ..., xn} of z = f(x1, ..., xn), (24) subject to g(x1, ..., xn) = c. (25) It follows that the Lagrange function is Z = L(x1, ..., xn, λ) = f(x1, ..., xn) + λ [c − g(x1, ..., xn)], (26) for which the FOCs are f_i(x1, ..., xn) = λ g_i(x1, ..., xn), i = 1, ..., n, together with g(x1, ..., xn) = c.

(Conti.) If the problem has more than one constraint, say two, g(x1, ..., xn) = c and h(x1, ..., xn) = d, (27) the Lagrange function is Z = f(x1, ..., xn) + λ [c − g(x1, ..., xn)] + µ [d − h(x1, ..., xn)], (28) for which the FOCs are f_i(x1, ..., xn) = λ g_i(x1, ..., xn) + µ h_i(x1, ..., xn), i = 1, ..., n, together with g(x1, ..., xn) = c and h(x1, ..., xn) = d.

Second-Order Conditions. Note that even though Z* is a standard type of extremum with respect to the choice variables, it is not so with respect to the Lagrange multiplier. Equation (23) shows that, unlike (x*, y*), if λ* is replaced by any other value of λ, no effect is produced on Z*, since c − g(x*, y*) = 0. Thus the role played by λ* in the optimal solution differs fundamentally from that of x* and y*. While it is safe to treat λ as another choice variable in the discussion of FOCs, we should treat λ differently in the discussion of SOCs. The new SOCs can again be stated in terms of the second-order total differential d²z, but the presence of the constraint entails certain significant modifications.

Second-Order Total Differential. The constraint g(x, y) = c implies that dg = g_x dx + g_y dy = 0, so dx and dy are no longer both arbitrary: we may take dx as an arbitrary change, but then dy is dependent on dx, i.e., dy = −(g_x / g_y) dx. Note that since g_x and g_y depend on x and y, dy also depends on x and y. Thus,
d²z = d(dz) = ∂(dz)/∂x dx + ∂(dz)/∂y dy
= ∂(f_x dx + f_y dy)/∂x dx + ∂(f_x dx + f_y dy)/∂y dy
= (f_xx dx + f_xy dy + f_y ∂(dy)/∂x) dx + (f_yx dx + f_yy dy + f_y ∂(dy)/∂y) dy
= f_xx dx² + 2 f_xy dx dy + f_yy dy² + f_y d²y.

(Conti.) The last term disqualifies d²z as a quadratic form. But d²z can be transformed into a quadratic form by virtue of the constraint g(x, y) = c: dg = 0 implies d(dg) = g_xx dx² + 2 g_xy dx dy + g_yy dy² + g_y d²y = 0. (29) Solving this equation for d²y and substituting the result into the expression for d²z gives
d²z = (f_xx − (f_y/g_y) g_xx) dx² + 2 (f_xy − (f_y/g_y) g_xy) dx dy + (f_yy − (f_y/g_y) g_yy) dy²
= (f_xx − λ g_xx) dx² + 2 (f_xy − λ g_xy) dx dy + (f_yy − λ g_yy) dy²
= Z_xx dx² + 2 Z_xy dx dy + Z_yy dy²,
where Z_xx = f_xx − λ g_xx, Z_xy = f_xy − λ g_xy, and Z_yy = f_yy − λ g_yy come from partially differentiating the derivatives in (7) and (8), and λ = f_y/g_y by the FOC.

Second-Order Conditions. For a constrained extremum problem, the SO necessary and sufficient conditions are still determined by the SO total differential d²z, for dx and dy satisfying dg = g_x dx + g_y dy = 0. Theorem (SO sufficient conditions): for a maximum of z, d²z negative definite subject to dg = 0; for a minimum of z, d²z positive definite subject to dg = 0. Theorem (SO necessary conditions): for a maximum of z, d²z negative semidefinite subject to dg = 0; for a minimum of z, d²z positive semidefinite subject to dg = 0.

The Bordered Hessian. As in the case of a free extremum, it is possible to express the SO sufficient condition in determinantal form. In the constrained-extremum case we use what is known as a bordered Hessian. Let's first analyze the conditions for the sign definiteness of a two-variable quadratic form subject to a linear constraint: q = a u² + 2 h u v + b v², subject to α u + β v = 0. (33) Since the constraint means that v = −(α/β) u, we have q = (a β² − 2 h α β + b α²) u² / β², (34) which means that q is positive (negative) definite iff a β² − 2 h α β + b α² > 0 (< 0).

(Conti.) It so happens that the following symmetric determinant satisfies
| 0 α β; α a h; β h b | = −(a β² − 2 h α β + b α²). (35)
Consequently, we can state that q is positive (negative) definite subject to α u + β v = 0 iff
| 0 α β; α a h; β h b | < 0 (> 0). (36)

(Conti.) Note that the determinant used in this criterion is nothing but the discriminant of the original quadratic form, | a h; h b |, with a border placed on top and a similar border on the left. The border is merely composed of the two coefficients α and β from the constraint, plus a zero in the principal diagonal. When applied to the quadratic form d²z, the (plain) discriminant consists of the Hessian | Z_xx Z_xy; Z_xy Z_yy |. Given the constraint g_x dx + g_y dy = 0, d²z is positive (negative) definite subject to dg = 0 iff
|H| = | 0 g_x g_y; g_x Z_xx Z_xy; g_y Z_xy Z_yy | < 0 (> 0).

Example: Consumer's Utility Maximization Problem. max u(x1, x2) = x1 x2 subject to x1 + x2/(1 + r) = B. (37) Step 1: the Lagrange function is Z = L(x1, x2, λ) = u(x1, x2) + λ [B − x1 − x2/(1 + r)]. (38) Step 2: the FOCs are Z_λ = B − x1 − x2/(1 + r) = 0, (39) Z_{x1} = x2 − λ = 0, (40) Z_{x2} = x1 − λ/(1 + r) = 0, (41) which can be used to solve for the optimal levels x1* = B/2 and x2* = B(1 + r)/2.

(Conti.) Next, we should check the SO sufficient condition for a maximum. The bordered Hessian for this problem is
|H| = | 0 1 1/(1+r); 1 0 1; 1/(1+r) 1 0 | = 2/(1 + r) > 0. (42)
Thus the SO sufficient condition for a maximum is satisfied.
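The determinant in (42) can be spot-checked numerically (my own sketch; the interest rate r = 0.05 is an arbitrary sample value):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

r = 0.05
H = [[0,        1, 1/(1+r)],
     [1,        0, 1      ],
     [1/(1+r),  1, 0      ]]
print(det3(H), 2/(1+r))  # both equal 2/(1+r) > 0
```

A positive bordered-Hessian determinant confirms the SO sufficient condition for a maximum in this two-variable problem.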

The n-Variable Case. When the optimization problem takes the form max (or min) z = f(x1, ..., xn) subject to g(x1, ..., xn) = c, (43) the Lagrange function is Z = f(x1, ..., xn) + λ [c − g(x1, ..., xn)], (44) and dx1, ..., dxn satisfy the relation dg = g_1 dx1 + ... + g_n dxn = 0, (45) which implies the bordered Hessian
|H| = | 0 g_1 ... g_n; g_1 Z_11 ... Z_1n; ... ; g_n Z_n1 ... Z_nn |.

(Conti.) Its bordered leading principal minors are
|H2| = | 0 g_1 g_2; g_1 Z_11 Z_12; g_2 Z_21 Z_22 |,
|H3| = | 0 g_1 g_2 g_3; g_1 Z_11 Z_12 Z_13; g_2 Z_21 Z_22 Z_23; g_3 Z_31 Z_32 Z_33 |, ..., |Hn| = |H|.
Theorem (Conditions for a Maximum): (1) FO necessary condition: Z_λ = Z_1 = ... = Z_n = 0. (2) SO sufficient condition: |H2| > 0, |H3| < 0, ..., (−1)^n |Hn| > 0.
Theorem (Conditions for a Minimum): (1) FO necessary condition: Z_λ = Z_1 = ... = Z_n = 0. (2) SO sufficient condition: |H2| < 0, |H3| < 0, ..., |Hn| < 0.
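The alternating-sign test for a maximum can be mechanized. A small helper (my own sketch; the function names are illustrative, not from the notes) that checks |H2| > 0, |H3| < 0, ... for a given bordered Hessian:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def is_max_pattern(H_bordered, n_vars):
    """True if |H2|, ..., |Hn| alternate in sign as (-1)^k |Hk| > 0."""
    for k in range(2, n_vars + 1):
        sub = [row[:k + 1] for row in H_bordered[:k + 1]]
        if (-1) ** k * det(sub) <= 0:
            return False
    return True

# Consumer example from the previous slide with r = 0.05 (n = 2 variables):
r = 0.05
H = [[0, 1, 1/(1+r)], [1, 0, 1], [1/(1+r), 1, 0]]
print(is_max_pattern(H, 2))  # True, since |H2| = 2/(1+r) > 0
```

With n = 2 there is only one minor to check; for larger n the same loop walks through |H3|, |H4|, and so on.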

Quasiconcavity and Quasiconvexity. For an unconstrained optimization problem (a free extremum problem), the concavity (convexity) of the objective function guarantees the existence of an absolute maximum (absolute minimum). For a constrained optimization problem, we will show that the quasiconcavity (quasiconvexity) of the objective function guarantees the existence of an absolute maximum (absolute minimum). Quasiconcavity (quasiconvexity), like concavity (convexity), can be either strict or nonstrict.

Definition. A function f is quasiconcave (quasiconvex) iff, for any pair of distinct points u and v in the convex domain of f, and for 0 < θ < 1, f(v) ≥ f(u) implies f(θu + (1 − θ)v) ≥ f(u) (respectively, f(θu + (1 − θ)v) ≤ f(v)). (46) Further, if the weak inequality ≥ (≤) is replaced by the strict inequality > (<), f is said to be strictly quasiconcave (strictly quasiconvex). Fact: quasiconcavity (quasiconvexity) is a weaker condition than concavity (convexity).

Theorem (Negative of a function). If f(x) is quasiconcave (strictly quasiconcave), then −f(x) is quasiconvex (strictly quasiconvex). Proof: use the fact that multiplying an inequality by −1 reverses its sense. Theorem (Concavity vs. quasiconcavity). Any (strictly) concave (convex) function is a (strictly) quasiconcave (quasiconvex) function, but the converse is not true. Proof: use the definitions of concavity and quasiconcavity: f(θu + (1 − θ)v) ≥ θ f(u) + (1 − θ) f(v) ≥ f(u), because we assume that f(v) ≥ f(u).

Theorem (Linear function). If f(x) is linear, then it is quasiconcave as well as quasiconvex. Proof: use the fact that if f(x) is linear, then it is concave as well as convex. Fact: unlike concave (convex) functions, the sum of two quasiconcave (quasiconvex) functions is not necessarily quasiconcave (quasiconvex).

Sometimes it is easier to check quasiconcavity and quasiconvexity by the following alternative definitions. We first introduce the concept of convex sets. Definition: if, for any two points in a set S, the line segment connecting those two points lies entirely in S, then S is said to be a convex set. A corresponding algebraic definition: a set S is convex iff, for any two points u ∈ S and v ∈ S, and for every scalar θ ∈ [0, 1], it is true that w = θu + (1 − θ)v ∈ S. Note that u and v can be points in a space of any dimension.

Fact: to qualify as a convex set, a set of points must contain no holes, and its boundary must not be indented anywhere. See Figure 11.8 in the book. Fact: convex sets and convex functions are distinct concepts. In describing a function, the word "convex" specifies how a curve or surface bends: it must form a valley. In describing a set, the word specifies how the points in the set are packed together: they must not allow any holes, and the boundary must not be indented.

Definition. A function f(x), where x is a vector of variables, is quasiconcave (quasiconvex) iff, for any constant k, the upper level set S⁺ = {x | f(x) ≥ k} (the lower level set S⁻ = {x | f(x) ≤ k}) is convex. See Figure 12.5 (a), (b), and (c). The three functions in Figure 12.5 contain concave as well as convex segments and hence are neither concave nor convex. But the function in Fig. 12.5(a) is quasiconcave, because for any value of k the set S⁺ is convex. The function in Fig. 12.5(b) is quasiconvex, because for any value of k the set S⁻ is convex. The monotonic function in Fig. 12.5(c) is quasiconcave as well as quasiconvex, because both S⁺ and S⁻ are convex. Hence, given only that S⁺ is convex, we can conclude that the function f is quasiconcave, but not necessarily concave. Examples: (1) Z = x² (x ≥ 0) is quasiconvex as well as quasiconcave, since both S⁺ and S⁻ are convex. (2) Z = (x − a)² + (y − b)² is quasiconvex, since S⁻ is convex.

If the function z = f(x1, ..., xn) is twice continuously differentiable, quasiconcavity and quasiconvexity can be checked by the first and second partial derivatives of the function. Define a bordered determinant as follows:
|B| = | 0 f_1 ... f_n; f_1 f_11 ... f_1n; ... ; f_n f_n1 ... f_nn |. (47)
Note that the determinant |B| is different from the bordered Hessian |H|: unlike |H|, the border in |B| is composed of the first derivatives of the function f rather than of the constraint function g. Hence, if |B| satisfies the SO sufficient condition for strict quasiconcavity (specified below), |H| must also satisfy the SO sufficient condition for the constrained maximization problem.

(Conti.) We can define the successive principal minors of |B| as follows:
|B1| = | 0 f_1; f_1 f_11 |, |B2| = | 0 f_1 f_2; f_1 f_11 f_12; f_2 f_21 f_22 |, ..., |Bn| = |B|. (48)
A sufficient condition for f to be quasiconcave on the nonnegative domain is that |B1| < 0, |B2| > 0, ..., (−1)^n |Bn| > 0. (49) For quasiconvexity, the corresponding condition is that |B1| < 0, |B2| < 0, ..., |Bn| < 0. (50) A necessary condition for f to be quasiconcave on the nonnegative domain is that |B1| ≤ 0, |B2| ≥ 0, ..., (−1)^n |Bn| ≥ 0. (51) For quasiconvexity, the corresponding condition is that |B1| ≤ 0, |B2| ≤ 0, ..., |Bn| ≤ 0. (52)

Example: consider z = f(x1, x2) = x1 x2. Since f_1 = x2, f_2 = x1, f_11 = f_22 = 0, and f_12 = f_21 = 1, the relevant principal minors are
|B1| = | 0 x2; x2 0 | = −x2² ≤ 0, |B2| = | 0 x2 x1; x2 0 1; x1 1 0 | = 2 x1 x2 ≥ 0. (53)
Thus z = x1 x2 is quasiconcave on the nonnegative domain.
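These minors are easy to spot-check numerically (my own check; the point (x1, x2) = (2, 3) is an arbitrary nonnegative sample):

```python
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

x1, x2 = 2, 3
f1, f2 = x2, x1              # first partials of z = x1*x2
B1 = 0 * 0 - f1 * f1         # |0 f1; f1 f11| with f11 = 0, i.e. -x2**2
B2 = det3([[0,  f1, f2],
           [f1, 0,  1 ],     # f11 = 0, f12 = 1
           [f2, 1,  0 ]])    # f21 = 1, f22 = 0
print(B1, B2)  # -9 12, i.e. -x2**2 and 2*x1*x2
```

At this point |B1| = −9 ≤ 0 and |B2| = 12 ≥ 0, matching the sign pattern in (53).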

Example: show that z = f(x, y) = x^a y^b (x > 0, y > 0, a, b ∈ (0, 1)) is quasiconcave. Since f_x = a x^(a−1) y^b, f_y = b x^a y^(b−1), f_xx = a(a − 1) x^(a−2) y^b, f_yy = b(b − 1) x^a y^(b−2), and f_xy = f_yx = a b x^(a−1) y^(b−1), we have
|B1| = | 0 f_x; f_x f_xx | = −a² x^(2a−2) y^(2b) < 0, (54)
|B2| = | 0 f_x f_y; f_x f_xx f_xy; f_y f_yx f_yy | = a b (a + b) x^(3a−2) y^(3b−2) > 0, (55)
which means that the function is quasiconcave.
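The sign pattern for the Cobb-Douglas case can also be verified numerically. A sketch of my own (the parameter values a = 0.3, b = 0.4 and the point (x, y) = (2, 3) are arbitrary choices in the stated ranges):

```python
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

a, b, x, y = 0.3, 0.4, 2.0, 3.0
fx  = a * x**(a-1) * y**b
fy  = b * x**a * y**(b-1)
fxx = a * (a-1) * x**(a-2) * y**b
fyy = b * (b-1) * x**a * y**(b-2)
fxy = a * b * x**(a-1) * y**(b-1)

B1 = -fx**2                                           # |B1| = -(f_x)^2
B2 = det3([[0, fx, fy], [fx, fxx, fxy], [fy, fxy, fyy]])
print(B1 < 0, B2 > 0)  # True True
```

Since f_xx < 0, f_yy < 0, and f_xy > 0 for a, b in (0, 1), every term of |B2| = −f_x² f_yy + 2 f_x f_y f_xy − f_y² f_xx is positive, so the sign check holds at any positive (x, y).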

(Conti.) In this case, the condition for concavity can be expressed as
f_xx f_yy − f_xy² = [a(a − 1) x^(a−2) y^b][b(b − 1) x^a y^(b−2)] − [a b x^(a−1) y^(b−1)]² (56)
= a b (a − 1)(b − 1) x^(2a−2) y^(2b−2) − a² b² x^(2a−2) y^(2b−2)
= a b (1 − a − b) x^(2a−2) y^(2b−2), (57)
and this expression is positive (as required for concavity) iff 1 − a − b > 0, i.e., a + b < 1.

Fact: when the function in the constraint is linear, that is, g(x1, ..., xn) = a_1 x1 + ... + a_n xn = c, the bordered determinant |B| and the bordered Hessian |H| have the following relationship: |B| = λ² |H|. (58) Hence, in the linear-constraint case, the two bordered determinants always have the same sign at the stationary point. Fact (Relative vs. absolute extremum): if a function is quasiconcave (quasiconvex), then, for the same reasons as for concave (convex) functions, its relative maximum (relative minimum) is an absolute maximum (absolute minimum).

Utility Maximization and Consumer Demand. Consider the following two-commodity consumer optimization problem: max u(x, y) (u_x > 0, u_y > 0) (59) subject to the budget constraint P_x x + P_y y = B, (60) where P_x, P_y, and B are given exogenously. The Lagrange function is then Z = u(x, y) + λ (B − P_x x − P_y y). (61)

The FOCs are Z_λ = B − P_x x − P_y y = 0, (62) Z_x = u_x − λ P_x = 0, (63) Z_y = u_y − λ P_y = 0. (64) From the last two equations, we have u_x / u_y = P_x / P_y, (65) where u_x / u_y = MRS_xy is called the marginal rate of substitution (MRS) of x for y. Thus we have the well-known equality MRS_xy = P_x / P_y, which is the necessary condition for an interior optimal solution. See the figure of the indifference curve.

If the bordered Hessian is positive, i.e.,
| 0 P_x P_y; P_x u_xx u_xy; P_y u_yx u_yy | = 2 P_x P_y u_xy − P_y² u_xx − P_x² u_yy > 0,
in which all elements are evaluated at the optimum, then the stationary value of u is a maximum.

Static Optimization with Inequality Constraints. So far we have considered optimization problems with equality constraints. Now we shall consider constraints that may be satisfied as inequalities at the solution. Consider the simple optimization problem with an inequality constraint: max f(x, y) subject to g(x, y) ≤ c. (66) We seek the largest value attained by f(x, y) in the admissible or feasible set S of all pairs (x, y) satisfying g(x, y) ≤ c. Note that a problem of minimizing f(x, y) subject to (x, y) ∈ S can be handled by instead studying the problem of maximizing −f(x, y) subject to (x, y) ∈ S. This problem can be solved by an extended Lagrange multiplier method, which involves examining the stationary points of f in the interior of the feasible set S and the behavior of f on the boundary of S. This new method was originally proposed by two Princeton mathematicians, H. W. Kuhn and A. W. Tucker.

Theorem (Recipe for solving the optimization problem with inequality constraints). A. Associate a Lagrange multiplier λ with the constraint g(x, y) ≤ c, and define the Lagrangian function Z = L(x, y) = f(x, y) + λ [c − g(x, y)]. (67) B. Equate the partial derivatives of Z w.r.t. x and y to zero: f_x − λ g_x = 0, f_y − λ g_y = 0. (68) C. Introduce the complementary slackness condition λ ≥ 0 and λ [c − g(x, y)] = 0. (69) D. Require (x, y) to satisfy the constraint g(x, y) ≤ c. (70)

(Conti.) Step C (the complementary slackness condition) is tricky. It requires that λ be nonnegative and, moreover, that λ = 0 if g(x, y) < c. Thus, if λ > 0, we must have g(x, y) = c. Note that the Lagrange multiplier λ can be interpreted as a price (called the shadow price) associated with increasing the right-hand side c of the resource constraint g(x, y) ≤ c by one unit. With this interpretation, prices are nonnegative, and if the resource constraint is not binding, because g(x, y) < c at the optimum, the price associated with increasing c by one unit is 0. It is possible to have both λ = 0 and g(x, y) = c. The two inequalities λ ≥ 0 and c − g(x, y) ≥ 0 are complementary in the sense that at most one can be slack; that is, at most one can hold with strict inequality. Equivalently, at least one must hold with equality.

(Conti.) Conditions (68) and (69) are called the Kuhn-Tucker conditions. Note that they are necessary conditions for the above problem. Note also that with an inequality constraint, one will have Z_λ = c − g(x, y) > 0 at an optimum if the constraint holds with strict inequality at that point. For this reason, we do not differentiate the Lagrangian w.r.t. λ. Example: consider the following problem: max f(x, y) = x² + y² + y − 1 (71) subject to g(x, y) = x² + y² ≤ 1. (72) The Lagrange function is Z = x² + y² + y − 1 + λ (1 − x² − y²).

The Trial-and-Error Approach to Searching for Optimal Solutions (Conti.). The FOCs are then Z_x = 2x − 2λx = 0, (73) Z_y = 2y + 1 − 2λy = 0. (74) The complementary slackness condition is λ ≥ 0 and λ (1 − x² − y²) = 0. (75) We want to find all pairs (x, y) that satisfy these conditions for some suitable value of λ. Begin by looking at (73). This condition implies that 2x(1 − λ) = 0. There are two possibilities: λ = 1 or x = 0. If λ = 1, then (74) implies that 1 = 0, a contradiction. Hence, x = 0.

(Conti.) Given x = 0, suppose x² + y² = 1; then y = ±1. We try y = 1 first. Substituting y = 1 into (74) implies that λ = 3/2, and (75) is satisfied. Hence (x, y) = (0, 1) with λ = 3/2 is a candidate for optimality. Similarly, if y = −1, then λ = 1/2, and (75) is also satisfied. Hence (x, y) = (0, −1) with λ = 1/2 is also a candidate for optimality. Finally, consider the case x = 0 and x² + y² < 1. In this case, (75) implies λ = 0, and (74) implies y = −1/2. Hence (x, y) = (0, −1/2) with λ = 0 is also a candidate for optimality. We conclude that there are three candidates, with f(0, 1) = 1, f(0, −1) = −1, and f(0, −1/2) = −5/4. Because we are maximizing a continuous function over a closed and bounded set, the Extreme Value Theorem guarantees that a solution exists. Thus, (x, y) = (0, 1) solves the maximization problem.
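The last comparison step can be written as a compact re-run of the trial-and-error search (my sketch): evaluate f at the three Kuhn-Tucker candidates and keep the best feasible one.

```python
def f(x, y):
    return x**2 + y**2 + y - 1

def feasible(x, y):
    return x**2 + y**2 <= 1

# Candidates found above, with lambda = 3/2, 1/2, 0 respectively
candidates = [(0, 1), (0, -1), (0, -0.5)]
best = max((p for p in candidates if feasible(*p)), key=lambda p: f(*p))
print(best, f(*best))  # (0, 1) 1
```

The code only compares candidates that the Kuhn-Tucker conditions already produced; it does not itself prove the conditions are necessary.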

Consider the n-variable problem max f(x1, ..., xn) s.t. g_1(x1, ..., xn) ≤ c_1, ..., g_m(x1, ..., xn) ≤ c_m. (76) Theorem (Recipe for solving the general problem with n variables). A) Write down the Lagrange function L = f(x1, ..., xn) + Σ_{j=1}^m λ_j [c_j − g_j(x1, ..., xn)], (77) where λ_j is the Lagrange multiplier associated with the j-th constraint.

Theorem (conti.)
B) Equate all the FOCs to 0: for each i = 1, ..., n,

∂f/∂x_i − Σ_{j=1}^m λ_j ∂g_j/∂x_i = 0. (78)

C) Impose the complementary slackness conditions:

λ_j ≥ 0 (= 0 if c_j − g_j(x_1, ..., x_n) > 0), j = 1, ..., m. (79)

D) Require x_1, ..., x_n to satisfy the constraints

g_j(x_1, ..., x_n) ≤ c_j, j = 1, ..., m. (80)
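Steps B)–D) of the recipe can be bundled into a small verification routine. The helper below is our own sketch (the function name and interface are assumptions, not from the notes); it checks stationarity (78), complementary slackness (79), and feasibility (80) at a candidate point:

```python
import numpy as np

def check_kuhn_tucker(grad_f, grads_g, g_vals, c, lam, tol=1e-8):
    """Check the Kuhn-Tucker conditions for max f s.t. g_j(x) <= c_j
    at a candidate point, given the gradients evaluated there.

    grad_f  : gradient of f at the candidate (length n)
    grads_g : list of gradients of each g_j at the candidate
    g_vals  : values g_j at the candidate
    c       : constants c_j
    lam     : candidate multipliers lambda_j
    """
    grad_f = np.asarray(grad_f, float)
    # (78): grad f - sum_j lambda_j * grad g_j = 0
    stationarity = grad_f - sum(l * np.asarray(g, float)
                                for l, g in zip(lam, grads_g))
    feasible = all(gv <= cj + tol for gv, cj in zip(g_vals, c))   # (80)
    dual = all(l >= -tol for l in lam)                            # (79)
    slack = all(abs(l * (cj - gv)) <= tol
                for l, gv, cj in zip(lam, g_vals, c))             # (79)
    return bool(np.allclose(stationarity, 0, atol=tol)
                and feasible and dual and slack)

# Candidate (x, y) = (0, 1) with lambda = 3/2 from the previous example:
ok = check_kuhn_tucker(grad_f=[0.0, 3.0],     # (2x, 2y + 1) at (0, 1)
                       grads_g=[[0.0, 2.0]],  # (2x, 2y) at (0, 1)
                       g_vals=[1.0], c=[1.0], lam=[1.5])
print(ok)  # True
```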

Theorem (Kuhn-Tucker Sufficient Conditions)
Consider (66) and suppose that (x*, y*) satisfies conditions (68), (69), and (70). If the Lagrange function is concave, then (x*, y*) solves the problem.

Proof.
If (x*, y*) satisfies the conditions in (68), then (x*, y*) is a stationary point of the Lagrangian. Because a stationary point of a concave Lagrangian maximizes the function, we have

L(x*, y*) = f(x*, y*) + λ[c − g(x*, y*)] ≥ f(x, y) + λ[c − g(x, y)].

Rearranging the terms gives

f(x*, y*) − f(x, y) ≥ λ[g(x*, y*) − g(x, y)]. (81)

Proof. (conti.)
If c − g(x*, y*) > 0, then by (69) we have λ = 0, so (81) implies that f(x*, y*) ≥ f(x, y). On the other hand, if c − g(x*, y*) = 0, then λ[g(x*, y*) − g(x, y)] = λ[c − g(x, y)]. Here λ ≥ 0, and c − g(x, y) ≥ 0 for all (x, y) satisfying the constraint, so the right-hand side of (81) is nonnegative and again f(x*, y*) ≥ f(x, y). Hence, (x*, y*) solves problem (66).

A Special Case: Nonnegativity Conditions on the Variables

Many economic variables must be nonnegative by their very nature. It is not difficult to incorporate such constraints into the above formulation. For example, x ≥ 0 can be expressed as h(x, y) = −x ≤ 0, and we introduce an additional Lagrange multiplier to go with it. Consider the problem

max_{(x,y)∈S} f(x, y) subject to g(x, y) ≤ c, x ≥ 0, y ≥ 0. (82)

Note that it can be rewritten as

max_{(x,y)∈S} f(x, y) subject to g(x, y) ≤ c, −x ≤ 0, −y ≤ 0. (83)

(Conti.) The Lagrange function is then

Z = L(x, y, λ) = f(x, y) + λ[c − g(x, y)] + μ_1 x + μ_2 y. (84)

The FOCs are

f_x − λg_x + μ_1 = 0 (85)
f_y − λg_y + μ_2 = 0 (86)
λ ≥ 0, λ[c − g(x, y)] = 0 (87)
μ_1 ≥ 0, μ_1 x = 0 (88)
μ_2 ≥ 0, μ_2 y = 0, (89)

which is equivalent to

f_x − λg_x ≤ 0 (= 0 if x > 0)
f_y − λg_y ≤ 0 (= 0 if y > 0)
λ ≥ 0 (= 0 if c − g(x, y) > 0)

The same idea can obviously be extended to the n-variable problem

max f(x_1, ..., x_n) s.t. g_1(x_1, ..., x_n) ≤ c_1, ..., g_m(x_1, ..., x_n) ≤ c_m, x_1 ≥ 0, ..., x_n ≥ 0. (90)

The necessary FOCs for the solution of (90) are that, for each i = 1, ..., n:

∂f/∂x_i − Σ_{j=1}^m λ_j ∂g_j/∂x_i ≤ 0 (= 0 if x_i > 0) (91)
λ_j ≥ 0 (= 0 if c_j − g_j(x_1, ..., x_n) > 0), j = 1, ..., m. (92)

Example: The consumer's maximization problem is

max u = u(x, y)

subject to

P_x x + P_y y ≤ B (93)
c_x x + c_y y ≤ C (94)
x ≥ 0 (95)
y ≥ 0 (96)

The Lagrange function is

Z = u(x, y) + λ_1[B − (P_x x + P_y y)] + λ_2[C − (c_x x + c_y y)] + μ_1 x + μ_2 y.

Suppose that u(x, y) = xy^2, B = 100, P_x = P_y = 1, C = 120, c_x = 2, and c_y = 1.

(Conti.) The Kuhn-Tucker conditions are

Z_x = y^2 − λ_1 − 2λ_2 ≤ 0, x ≥ 0, xZ_x = 0, or equivalently, y^2 − λ_1 − 2λ_2 ≤ 0 (= 0 if x > 0)
Z_y = 2xy − λ_1 − λ_2 ≤ 0, y ≥ 0, yZ_y = 0.
λ_1 ≥ 0, λ_1[100 − (x + y)] = 0, 100 − (x + y) ≥ 0.
λ_2 ≥ 0, λ_2[120 − (2x + y)] = 0, 120 − (2x + y) ≥ 0.

Again, the solution procedure involves a certain amount of trial and error. We first choose one of the constraints to be nonbinding and solve for x and y. Once found, we use these values to test whether the constraint chosen to be nonbinding is violated. If it is, we redo the procedure choosing another constraint to be nonbinding. If a violation occurs again, we assume both constraints bind, and the solution is determined by the constraints alone.

(Conti.) Step 1: Assume that the second constraint is nonbinding in the solution (that is, 120 > 2x + y), so that λ_2 = 0 by complementary slackness. But let x, λ_1, and y be positive, so that:

Z_x = y^2 − λ_1 = 0
Z_y = 2xy − λ_1 = 0
100 = x + y

Solving for x and y yields a trial solution x = 33 1/3, y = 66 2/3. But substituting this into the second constraint gives 2(33 1/3) + 66 2/3 = 133 1/3 > 120. In other words, this solution violates the second constraint and must be rejected.

(Conti.) Step 2: Reverse the assumption, so that λ_1 = 0, and let x, λ_2, and y be positive, so that:

Z_x = y^2 − 2λ_2 = 0
Z_y = 2xy − λ_2 = 0
120 = 2x + y

Solving for x and y yields a trial solution x = 20, y = 80, which implies that λ_2 = 3200. These values, together with λ_1 = 0, satisfy all the Kuhn-Tucker conditions (note that the first constraint happens to hold with equality, x + y = 100, which is consistent with λ_1 = 0). Thus we accept them as the final solution to the Kuhn-Tucker conditions.
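The two-step trial solution can be cross-checked by solving the problem numerically. A sketch assuming scipy is available (solver and starting point are our choices):

```python
import numpy as np
from scipy.optimize import minimize

# max u = x*y^2  s.t.  x + y <= 100,  2x + y <= 120,  x, y >= 0
u = lambda v: v[0] * v[1]**2
cons = [{"type": "ineq", "fun": lambda v: 100 - (v[0] + v[1])},
        {"type": "ineq", "fun": lambda v: 120 - (2 * v[0] + v[1])}]
res = minimize(lambda v: -u(v), x0=[30.0, 30.0],
               bounds=[(0, None), (0, None)], constraints=cons)
print(np.round(res.x, 3))  # close to the trial-and-error solution (20, 80)
```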

Example: Quasi-linear Preferences

The consumer's problem is to choose two commodities to maximize his utility function:

u(x, y) = y + a ln(x) (97)

subject to

px + qy ≤ I, (98)
x ≥ 0, (99)
y ≥ 0, (100)

where a is a given positive constant, and p and q are both positive prices. First, we construct the Lagrange function as follows:

L = y + a ln(x) + λ[I − px − qy] + μ_1 x + μ_2 y (101)

(Conti.) The Kuhn-Tucker conditions can be written as

a/x − λp ≤ 0, x ≥ 0, (a/x − λp)x = 0 (102)
1 − λq ≤ 0, y ≥ 0, (1 − λq)y = 0 (103)
I − px − qy ≥ 0, λ ≥ 0, (I − px − qy)λ = 0, (104)

in which we have 2^3 = 8 possible patterns of equalities and inequalities, because there are two nonnegative variables and one inequality constraint. First, note that the budget constraint must be binding (that is, px + qy = I: all available income must be used up) because marginal utility is positive: u_x(x, y) > 0 and u_y(x, y) > 0, so consuming more always yields higher utility. Formally, if the BC were not binding, then λ = 0, which would mean a/x ≤ 0 and 1 ≤ 0, both contradictions. This reduces the number of possibilities to 4: x > 0 or x = 0, combined with y > 0 or y = 0. We can also rule out the case x = 0 and y = 0, because it does not satisfy the BC: px + qy = I > 0.

(Conti.) If x = 0 and y = I/q > 0, the second line in the Kuhn-Tucker conditions implies that λ = 1/q, and the first line then requires a/x ≤ λp = p/q, which is impossible as x → 0 (since a/x → +∞), a contradiction. Intuitively, the initial one-unit increase in x yields an unboundedly large gain in utility, so zero consumption of x cannot be optimal.

(Conti.) If y = 0 and x = I/p > 0, the first line in the KT conditions implies that

a/x − λp = 0 ⟹ λ = a/(px) = a/I. (105)

Substituting this into the second line of the KT conditions gives

1 − aq/I ≤ 0 ⟹ I ≤ aq.

If the parameters (I, a, q) satisfy this condition, then x = I/p and y = 0 is a candidate optimal solution. Finally, if both x and y are positive, the first two lines in the KT conditions give

a/x − λp = 0 and 1 − λq = 0 ⟹ x = aq/p and y = I/q − a. (106)

Hence, if I > aq (so that y > 0), this is also a candidate optimal solution.

(Conti.) In sum, the optimal solution ultimately depends on the values of the parameters (I, a, q):

x = I/p and y = 0 if I ≤ aq.
x = aq/p and y = I/q − a if I > aq.
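The two-regime demand function can be coded directly and compared against a numerical optimizer. A sketch assuming scipy (the helper names and test parameter values are our own):

```python
import numpy as np
from scipy.optimize import minimize

def demand(a, p, q, I):
    """Closed-form demands for max y + a*ln(x) s.t. px + qy <= I, x, y >= 0."""
    if I <= a * q:          # corner regime: spend everything on x
        return I / p, 0.0
    return a * q / p, I / q - a   # interior regime

def demand_numeric(a, p, q, I):
    u = lambda v: v[1] + a * np.log(v[0])
    con = {"type": "ineq", "fun": lambda v: I - p * v[0] - q * v[1]}
    res = minimize(lambda v: -u(v), x0=[I / (2 * p), 0.1],
                   bounds=[(1e-9, None), (0, None)], constraints=[con])
    return res.x

print(demand(2, 1, 1, 10))   # interior case, I > aq: (2.0, 8.0)
print(demand(2, 1, 10, 10))  # corner case, I <= aq: (10.0, 0.0)
```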

The Envelope Theorem for Unconstrained Optimization

A maximum-value function is an objective function in which the choice variables have been assigned their optimal values. This function thus becomes a function of the parameters only, indirectly through the parameters' effects on the optimal values of the choice variables, and is also referred to as the indirect objective function. The indirect objective function traces out all the maximum values of the objective function as these parameters vary. Hence, the IOF is an envelope of the set of optimized objective functions generated by varying the parameters. Consider

max u = f(x, y, φ)

where x and y are choice variables and φ is a parameter.

The first-order necessary conditions are

f_x(x, y, φ) = f_y(x, y, φ) = 0. (107)

If the second-order conditions are met, these two equations implicitly define the solutions x = x*(φ) and y = y*(φ). Substituting these solutions back into f gives the IOF (or maximum-value function)

V(φ) = f(x*(φ), y*(φ), φ). (108)

Differentiating V(φ) w.r.t. φ gives

dV/dφ = f_x (∂x*/∂φ) + f_y (∂y*/∂φ) + f_φ = f_φ.

This result means that at the optimum (f_x = f_y = 0), as φ varies with x* and y* allowed to adjust, dV/dφ gives the same result as if x* and y* were treated as constants (only the direct effect needs to be considered). This is the essence of the Envelope theorem.
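The claim dV/dφ = f_φ can be seen numerically. The quadratic objective below is an illustrative assumption (not from the notes), chosen so that x*(φ) = φ, y*(φ) = 2φ and V(φ) = φ^2:

```python
from scipy.optimize import minimize

# Illustrative objective: f(x, y, phi) = -(x - phi)^2 - (y - 2*phi)^2 + phi^2
f = lambda v, phi: -(v[0] - phi)**2 - (v[1] - 2 * phi)**2 + phi**2

def argmax(phi):
    return minimize(lambda v: -f(v, phi), x0=[0.0, 0.0]).x

phi, h = 1.5, 1e-4
V = lambda t: f(argmax(t), t)                  # maximum-value function
dV = (V(phi + h) - V(phi - h)) / (2 * h)       # total derivative dV/dphi
v_star = argmax(phi)
# partial derivative f_phi with x*, y* held fixed at the optimum:
f_phi = (f(v_star, phi + h) - f(v_star, phi - h)) / (2 * h)
print(dV, f_phi)  # both close to 2*phi = 3: the envelope theorem
```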

The Envelope Theorem for Constrained Optimization

The problem becomes

max u = f(x, y, φ) s.t. g(x, y, φ) = 0.

The Lagrangian is then

Z = f(x, y, φ) − λg(x, y, φ).

The FOCs are

Z_x = f_x − λg_x = 0
Z_y = f_y − λg_y = 0
Z_λ = −g(x, y, φ) = 0.

The optimal solution is then x = x*(φ), y = y*(φ), λ = λ*(φ).

Substituting these solutions back into f gives the IOF (or maximum-value function)

V(φ) = f(x*(φ), y*(φ), φ). (109)

Differentiating V(φ) w.r.t. φ gives

dV/dφ = f_x (∂x*/∂φ) + f_y (∂y*/∂φ) + f_φ. (110)

Further, note that since g(x*(φ), y*(φ), φ) = 0, we have

g_x (∂x*/∂φ) + g_y (∂y*/∂φ) + g_φ = 0. (111)

Multiplying (111) by λ and combining it with the expression for dV/dφ gives the Envelope theorem for constrained optimization:

dV/dφ = (f_x − λg_x)(∂x*/∂φ) + (f_y − λg_y)(∂y*/∂φ) + f_φ − λg_φ = Z_φ. (112)
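A familiar special case: when the parameter is income in a budget constraint, dV/dI equals the multiplier λ. The sketch below uses an illustrative problem of our own choosing, max xy s.t. px + y = I, for which V(I) = I^2/(4p) and dV/dI = I/(2p):

```python
from scipy.optimize import minimize

p = 2.0  # assumed price of x; the price of y is normalized to 1

def V(I):
    con = {"type": "eq", "fun": lambda v: p * v[0] + v[1] - I}
    res = minimize(lambda v: -v[0] * v[1], x0=[1.0, 1.0], constraints=[con])
    return -res.fun

I, h = 10.0, 1e-4
dV = (V(I + h) - V(I - h)) / (2 * h)  # numerical dV/dI
print(dV, I / (2 * p))                # both close to 2.5 (= lambda*)
```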

Homogeneous Functions

Definition
A function is said to be homogeneous of degree r if multiplication of each of its independent variables by a constant j alters the value of the function by the proportion j^r, that is,

f(jx_1, ..., jx_n) = j^r f(x_1, ..., x_n). (113)

In economic applications, j is usually taken to be positive.

Example
Given the function f(x, y, w) = x/y + 2w/(3x), if we multiply each variable by j, we get

f(jx, jy, jw) = (jx)/(jy) + 2(jw)/(3jx) = x/y + 2w/(3x) = j^0 f(x, y, w), (114)

which means that this function is homogeneous of degree 0 (j^0 = 1).

Definition
Production functions are usually assumed to be homogeneous of degree 1. They are often referred to as linearly homogeneous functions. Assume that the production function has the following form:

Q = f(K, L) (115)

The mathematical assumption of linear homogeneity amounts to the economic assumption of constant returns to scale (CRTS), because linear homogeneity means that raising all inputs j-fold always raises output (the value of the function) exactly j-fold as well.

Fact
Given a linearly homogeneous production function Q, the average product of labor (APL) and of capital (APK) can be expressed as functions of the capital-labor ratio k = K/L alone.

Fact
Multiplying each independent variable by the factor j = 1/L and using the property of linear homogeneity, we have

APL = Q/L = f(K/L, 1) = f(k, 1) ≡ φ(k). (116)
APK = Q/K = (Q/L)(L/K) = φ(k)/k. (117)

Therefore, while the production function is homogeneous of degree one, both APL and APK are homogeneous of degree zero in K and L.
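These facts are easy to check numerically. The CRS Cobb-Douglas form and the parameter values below are assumptions chosen for illustration:

```python
# A concrete linearly homogeneous technology: Q(K, L) = K^0.3 * L^0.7
Q = lambda K, L: K**0.3 * L**0.7

K, L, j = 4.0, 9.0, 2.5
k = K / L
print(Q(j * K, j * L), j * Q(K, L))   # equal: degree-1 homogeneity
print(Q(K, L) / L, Q(k, 1.0))         # APL = phi(k): depends on k only
print(Q(K, L) / K, Q(k, 1.0) / k)     # APK = phi(k)/k
```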

Fact
Given a linearly homogeneous production function Q, the marginal product of labor (MPL) and of capital (MPK) can also be expressed as functions of the capital-labor ratio k = K/L alone.

Fact
To find the marginal products, we rewrite the total product as Q = Lφ(k). Differentiating it w.r.t. K and L gives

MPK = ∂Q/∂K = L φ'(k) (∂k/∂K) = L φ'(k) (1/L) = φ'(k)
MPL = ∂Q/∂L = φ(k) + L φ'(k) (∂k/∂L) = φ(k) + L φ'(k) (−K/L^2) = φ(k) − φ'(k) k

They are also homogeneous of degree zero in K and L (they remain the same as long as k is held constant).

Theorem (Euler's Theorem)
If Q = f(K, L) is linearly homogeneous, then

K (∂Q/∂K) + L (∂Q/∂L) = Q.

Proof.
K (∂Q/∂K) + L (∂Q/∂L) = K φ'(k) + L [φ(k) − φ'(k) k] = L φ(k) = Q.

This theorem says that the value of a linearly homogeneous function can always be expressed as a sum of terms, each of which is the product of one of the independent variables and the first-order partial derivative w.r.t. that variable. Hence, under conditions of CRTS, if each input is paid the amount of its marginal product, the total product will be exactly exhausted by the distributive shares of all the input factors, i.e., pure economic profit will be zero.
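Euler's theorem can be verified with numerical derivatives; the square-root technology below is an assumed example, not from the notes:

```python
import math

# A linearly homogeneous technology: Q(K, L) = sqrt(K * L)
Q = lambda K, L: math.sqrt(K * L)

def partial(f, x, y, which, h=1e-6):
    """Central-difference partial derivative of f(x, y)."""
    if which == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

K, L = 4.0, 16.0
MPK = partial(Q, K, L, 0)
MPL = partial(Q, K, L, 1)
print(K * MPK + L * MPL, Q(K, L))  # both 8.0: K*Q_K + L*Q_L = Q
```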

Example (Cobb-Douglas Production Function)
One specific production function is the Cobb-Douglas function:

Q = A K^α L^β (118)

where A is a positive constant, and α and β are positive fractions. Major features of this production function are: (1) it is homogeneous of degree (α + β); (2) in the special case α + β = 1, it is linearly homogeneous; (3) its isoquants are negatively sloped throughout and strictly convex for positive values of K and L; (4) it is strictly quasiconcave for positive K and L.

Example (Cobb-Douglas Production Function) (conti.)
For the special case α + β = 1,

Q = A K^α L^{1−α} = A L k^α. (119)
APL = Q/L = A k^α, APK = Q/K = A k^{α−1}. (120)
MPK = ∂Q/∂K = A α k^{α−1}, MPL = ∂Q/∂L = A(1 − α) k^α. (121)

The Euler theorem can be verified as follows:

K (∂Q/∂K) + L (∂Q/∂L) = K A α k^{α−1} + L A(1 − α) k^α = k^α [AαL + A(1 − α)L] = A L k^α = Q.

Example (CES Production Function)
A constant elasticity of substitution (CES) production function is

Q = A [δK^{−ρ} + (1 − δ)L^{−ρ}]^{−1/ρ} (A > 0; 0 < δ < 1; −1 < ρ ≠ 0) (122)

where δ is the distributive parameter, like α in the CD function, and ρ is the substitution parameter, which has no counterpart in the CD function. First, the CES production function is homogeneous of degree one:

A [δ(jK)^{−ρ} + (1 − δ)(jL)^{−ρ}]^{−1/ρ} = jA [δK^{−ρ} + (1 − δ)L^{−ρ}]^{−1/ρ} = jQ. (123)

Second, the CD function is a special case of the CES function: as ρ → 0, the CES function approaches the CD function.

Example (CES Production Function) (conti.)
Proof.
Taking logs on both sides of the CES production function and then taking the limit gives

ln(Q/A) = lim_{ρ→0} −ln[δK^{−ρ} + (1 − δ)L^{−ρ}]/ρ
= lim_{ρ→0} [δK^{−ρ} ln K + (1 − δ)L^{−ρ} ln L] / [δK^{−ρ} + (1 − δ)L^{−ρ}]
= δ ln K + (1 − δ) ln L = ln(K^δ L^{1−δ})
⟹ Q = A K^δ L^{1−δ},

in which we use L'Hôpital's rule.
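The limit can also be seen numerically: the CES output converges to the Cobb-Douglas value as ρ shrinks. The parameter values below are assumptions for illustration:

```python
# CES output approaches the Cobb-Douglas value as rho -> 0
A, delta, K, L = 1.0, 0.4, 3.0, 7.0

def ces(rho):
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho))**(-1 / rho)

cobb_douglas = A * K**delta * L**(1 - delta)

for rho in (1.0, 0.1, 0.001):
    print(rho, ces(rho))
print("CD:", cobb_douglas)  # ces(rho) tends to this value
```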

Least-Cost Combination of Inputs (Application of Homogeneous PFs)

The cost-minimization problem is

min_{a,b} C = aP_a + bP_b (124)

subject to the output constraint

Q(a, b) = Q_0 (125)

where Q_0, P_a, and P_b are given exogenously. The marginal products are positive: Q_a > 0, Q_b > 0. The Lagrangian can be written as

Z = aP_a + bP_b + μ[Q_0 − Q(a, b)]. (126)

(Conti.) The FOCs are

Z_μ = Q_0 − Q(a, b) = 0 (127)
Z_a = P_a − μQ_a = 0 (128)
Z_b = P_b − μQ_b = 0 (129)

The last two equations imply that

P_a/Q_a = P_b/Q_b = μ, (130)

which means that at the optimum, the input-price-to-marginal-product ratio must be the same for each input. This equality can be rewritten as

P_a/P_b = Q_a/Q_b ≡ MRTS_ab, (131)

where MRTS_ab is the marginal rate of technical substitution of a for b. See Figure 12.8 in CW.
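The tangency condition (131) can be confirmed on a concrete problem. The square-root technology, prices, and output target below are our assumptions; for this Q, the MRTS Q_a/Q_b equals b/a:

```python
import numpy as np
from scipy.optimize import minimize

# min P_a*a + P_b*b  s.t.  Q(a, b) = Q0, with Q = a^0.5 * b^0.5
Pa, Pb, Q0 = 4.0, 1.0, 10.0
Q = lambda v: v[0]**0.5 * v[1]**0.5
con = {"type": "eq", "fun": lambda v: Q(v) - Q0}
res = minimize(lambda v: Pa * v[0] + Pb * v[1], x0=[5.0, 5.0],
               bounds=[(1e-6, None)] * 2, constraints=[con])
a, b = res.x
print(Pa / Pb, b / a)  # both close to 4: price ratio = MRTS at the optimum
```

Analytically, ab = 100 and minimizing 4a + 100/a gives a = 5, b = 20, so the tangency holds exactly at the optimum.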

(Conti.) The sufficient SOC can be used to ensure a minimum cost after the FOCs are met. That is, the problem needs a negative bordered Hessian determinant:

|H| = det
[ 0      Q_a       Q_b    ]
[ Q_a   −μQ_aa   −μQ_ab  ]
[ Q_b   −μQ_ba   −μQ_bb  ]
= μ(Q_aa Q_b^2 − 2Q_ab Q_a Q_b + Q_bb Q_a^2) < 0. (132)

Note that the curvature of an isoquant is represented by the second derivative. Along an isoquant, db/da = −Q_a/Q_b, so

d^2 b/da^2 = −(d/da)(Q_a/Q_b)
= −[(Q_aa + Q_ab (db/da)) Q_b − Q_a (Q_ba + Q_bb (db/da))] / Q_b^2
= −(1/Q_b^3)(Q_aa Q_b^2 − 2Q_ab Q_a Q_b + Q_bb Q_a^2).

Hence, satisfaction of the sufficient SOC implies that d^2 b/da^2 > 0, because μ > 0 and Q_b > 0. That is, the isoquant is strictly convex.

Now assume that Q = A a^α b^β. The optimum implies that

P_a/P_b = Q_a/Q_b = (αb)/(βa) ⟹ b/a = (β/α)(P_a/P_b) = a constant.

See Figure 12.9. The expansion path describes the least-cost combinations required to produce varying levels of Q_0 and is the locus of the points of tangency. In the above case, the EP is a straight line. Note that any homogeneous production function gives rise to a linear EP. A more general class of functions, known as homothetic functions, produces linear EPs too.

Definition
Homotheticity can arise from a composite function of the form

H = h(Q(a, b)), h'(Q) ≠ 0, (133)

where Q(a, b) is homogeneous of degree r.

(Conti.) Note that H(a, b) is in general not homogeneous in a and b. Nonetheless, its EPs are linear:

Slope of H-isoquant = −H_a/H_b = −[h'(Q) Q_a]/[h'(Q) Q_b] = −Q_a/Q_b = Slope of Q-isoquant = constant for any given b/a,

given the linearity of the EPs of Q. Check two examples: H = Q^2 (H is also a homogeneous function) and H = exp(Q) (H is not a homogeneous function).

What is the effect of a change in the exogenous input-price ratio P_a/P_b on the optimal input ratio b/a? We introduce a concept, the elasticity of substitution:

σ = (relative change in b/a) / (relative change in P_a/P_b)
= [d(b/a)/(b/a)] / [d(P_a/P_b)/(P_a/P_b)]
= [d(b/a)/d(P_a/P_b)] / [(b/a)/(P_a/P_b)]; (134)

the larger the σ, the greater the substitutability between the two inputs. If b/a is considered as a function of P_a/P_b, then σ is the ratio of a marginal function to an average function. For the CD function,

b/a = (β/α)(P_a/P_b), which means that σ = 1. (135)

For the CES function introduced above,

Q_L/Q_K = P_L/P_K ⟹ K/L = [δ/(1 − δ)]^{1/(1+ρ)} (P_L/P_K)^{1/(1+ρ)} ⟹ σ = 1/(1 + ρ).
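The CES result σ = 1/(1 + ρ) can be checked by differentiating the optimal input ratio numerically in logs. The parameter values are assumptions for illustration:

```python
import math

delta, rho = 0.4, 0.5

def input_ratio(price_ratio):
    """Optimal K/L as a function of PL/PK, from the CES tangency condition."""
    return (delta / (1 - delta))**(1 / (1 + rho)) * price_ratio**(1 / (1 + rho))

# sigma = d ln(K/L) / d ln(PL/PK), via a central difference
p, h = 2.0, 1e-6
sigma = (math.log(input_ratio(p + h)) - math.log(input_ratio(p - h))) / \
        (math.log(p + h) - math.log(p - h))
print(sigma, 1 / (1 + rho))  # both ≈ 0.6667
```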