Lecture Notes for Chapter 12
Kevin Wainwright
April 26, 2014

1 Constrained Optimization

Consider the following utility maximization problem:

$$\max_{x_1, x_2} \; U = U(x_1, x_2) \qquad (1)$$

Subject to:

$$B = P_1 x_1 + P_2 x_2 \qquad (2)$$

Re-write Eq. 2 as

$$x_2 = \frac{B}{P_2} - \frac{P_1}{P_2}x_1 \qquad (2')$$

Now $x_2 = x_2(x_1)$ and $\frac{dx_2}{dx_1} = -\frac{P_1}{P_2}$. Substitute into Eq. 1 for $x_2$:

$$U = U(x_1, x_2(x_1)) \qquad (3)$$

Eq. 3 is an unconstrained function of one variable, $x_1$. Differentiate, using the chain rule:
$$\frac{dU}{dx_1} = \frac{\partial U}{\partial x_1} + \frac{\partial U}{\partial x_2}\frac{dx_2}{dx_1} = 0$$

From Eq. 2' we know $\frac{dx_2}{dx_1} = -\frac{P_1}{P_2}$. Therefore:

$$\frac{dU}{dx_1} = U_1 + U_2\left(-\frac{P_1}{P_2}\right) = 0$$

or

$$\frac{U_1}{U_2} = \frac{P_1}{P_2}$$

This is our usual condition that $MRS(x_2, x_1) = \frac{P_1}{P_2}$, or the consumer's willingness to trade equals his ability to trade.
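The substitution argument is easy to sanity-check symbolically. The sketch below is an editorial addition, not part of the original notes: assuming Python with sympy, it uses the hypothetical utility $U = x_1x_2$ and confirms that the one-variable first-order condition reproduces the tangency condition $U_1/U_2 = P_1/P_2$.

```python
import sympy as sp

x1, x2, B, P1, P2 = sp.symbols('x1 x2 B P1 P2', positive=True)

# Hypothetical utility for illustration: U = x1*x2
U = x1 * x2
x2_of_x1 = B/P2 - (P1/P2)*x1                # Eq. 2': constraint solved for x2

# Eq. 3: substitute to get an unconstrained function of x1 alone
U_sub = U.subs(x2, x2_of_x1)
x1_star = sp.solve(sp.diff(U_sub, x1), x1)[0]
x2_star = x2_of_x1.subs(x1, x1_star)

# Tangency check: U1/U2 should equal P1/P2 at the optimum
MRS = (sp.diff(U, x1) / sp.diff(U, x2)).subs({x1: x1_star, x2: x2_star})
print(x1_star, x2_star, sp.simplify(MRS - P1/P2))   # B/(2*P1), B/(2*P2), 0
```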
The More General Constrained Maximum Problem

Max:

$$y = f(x_1, x_2) \qquad (4)$$

Subject to:

$$g(x_1, x_2) = 0 \qquad (5)$$

Take total differentials of Eq. 4 and Eq. 5:

$$dy = f_1\,dx_1 + f_2\,dx_2 = 0 \qquad (6)$$

$$dg = g_1\,dx_1 + g_2\,dx_2 = 0 \qquad (7)$$
From Eq. 6: $\frac{dx_1}{dx_2} = -\frac{f_2}{f_1}$. From Eq. 7: $\frac{dx_1}{dx_2} = -\frac{g_2}{g_1}$.

Subtract 6 from 7:

$$\left[-\frac{g_2}{g_1} - \left(-\frac{f_2}{f_1}\right)\right]dx_2 = \left(\frac{f_2}{f_1} - \frac{g_2}{g_1}\right)dx_2 = 0$$

Therefore

$$\frac{f_2}{f_1} = \frac{g_2}{g_1} \qquad (8)$$

Eq. 8 says that the level curves of the objective function must be tangent to the level curves of the constraint.

1.1 Lagrange Multiplier Approach

Create a new function called the Lagrangian:

$$L = f(x_1, x_2) + \lambda g(x_1, x_2)$$

Since $g(x_1, x_2) = 0$ when the constraint is satisfied,

$$L = f(x_1, x_2) + \text{zero}$$

We have created a new independent variable $\lambda$ (lambda), which is called the Lagrange multiplier. We now have a function of three variables: $x_1$, $x_2$, and $\lambda$. Now we maximize

$$L = f(x_1, x_2) + \lambda g(x_1, x_2)$$
First Order Conditions

$$L_\lambda = \frac{\partial L}{\partial \lambda} = g(x_1, x_2) = 0 \qquad \text{Eq. 1}$$
$$L_1 = \frac{\partial L}{\partial x_1} = f_1 + \lambda g_1 = 0 \qquad \text{Eq. 2}$$
$$L_2 = \frac{\partial L}{\partial x_2} = f_2 + \lambda g_2 = 0 \qquad \text{Eq. 3}$$

From Eq. 2 and 3 we get:

$$\frac{f_1}{f_2} = \frac{-\lambda g_1}{-\lambda g_2} = \frac{g_1}{g_2}$$

From the 3 F.O.C.s we have 3 equations and 3 unknowns ($x_1$, $x_2$, $\lambda$). In principle we can solve for $x_1$, $x_2$, and $\lambda$.

1.1.1 Example 1

Let: $U = xy$

Subject to: $10 = x + y$, with $P_x = P_y = 1$.

Lagrange:

$$L = f(x, y) + \lambda g(x, y)$$
$$L = xy + \lambda(10 - x - y)$$

F.O.C.

$$L_\lambda = 10 - x - y = 0 \qquad \text{Eq. 1}$$
$$L_x = y - \lambda = 0 \qquad \text{Eq. 2}$$
$$L_y = x - \lambda = 0 \qquad \text{Eq. 3}$$

From (2) and (3) we see that:

$$\frac{y}{x} = \frac{\lambda}{\lambda} = 1 \quad \text{or} \quad y = x \qquad \text{Eq. 4}$$

From (1) and (4) we get:
$$10 - x - x = 0 \quad \text{or} \quad x = 5 \text{ and } y = 5$$

From either (2) or (3) we get $\lambda = 5$.

1.1.2 Example 2: Utility Maximization

Maximize

$$u = 4x^2 + 3xy + 6y^2$$

subject to

$$x + y = 56$$

Set up the Lagrangian equation:

$$L = 4x^2 + 3xy + 6y^2 + \lambda(56 - x - y)$$

Take the first-order partials and set them to zero:

$$L_x = 8x + 3y - \lambda = 0$$
$$L_y = 3x + 12y - \lambda = 0$$
$$L_\lambda = 56 - x - y = 0$$

From the first two equations we get

$$8x + 3y = 3x + 12y \quad \Rightarrow \quad x = 1.8y$$

Substitute this result into the third equation:

$$56 - 1.8y - y = 0$$

$$y = 20, \quad \text{therefore} \quad x = 36, \quad \lambda = 348$$
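These systems are linear, so they can be verified directly. The sketch below is an editorial addition (assuming sympy is available); it solves Example 2's first-order conditions and recovers $y = 20$, $x = 36$, $\lambda = 348$. The same three lines solve Example 1 if you swap in `L = x*y + lam*(10 - x - y)`.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# Example 2: L = 4x^2 + 3xy + 6y^2 + lam*(56 - x - y)
L = 4*x**2 + 3*x*y + 6*y**2 + lam*(56 - x - y)
foc = [sp.diff(L, v) for v in (x, y, lam)]   # Lx, Ly, L_lambda
sol = sp.solve(foc, [x, y, lam], dict=True)[0]
print(sol)   # {x: 36, y: 20, lam: 348}
```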
1.1.3 Example 3: Cost Minimization

A firm produces two goods, $x$ and $y$. Due to a government quota, the firm must produce subject to the constraint $x + y = 42$. The firm's cost function is

$$c(x, y) = 8x^2 - xy + 12y^2$$

The Lagrangian is

$$L = 8x^2 - xy + 12y^2 + \lambda(42 - x - y)$$

The first order conditions are

$$L_x = 16x - y - \lambda = 0$$
$$L_y = -x + 24y - \lambda = 0$$
$$L_\lambda = 42 - x - y = 0$$

Solving these three equations simultaneously yields

$$x = 25, \quad y = 17, \quad \lambda = 383$$
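As with the previous examples, this solution can be checked mechanically. A minimal sketch (an editorial addition, assuming sympy):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# Example 3: L = 8x^2 - xy + 12y^2 + lam*(42 - x - y)
L = 8*x**2 - x*y + 12*y**2 + lam*(42 - x - y)
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)   # {x: 25, y: 17, lam: 383}
```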
1.1.4 Example 4

Max: $U = x_1 x_2$

Subject to: $B = P_1 x_1 + P_2 x_2$

Lagrange:

$$L = x_1 x_2 + \lambda(B - P_1 x_1 - P_2 x_2)$$

F.O.C.

$$L_\lambda = B - P_1 x_1 - P_2 x_2 = 0 \qquad \text{Eq. 1}$$
$$L_1 = x_2 - \lambda P_1 = 0 \qquad \text{Eq. 2}$$
$$L_2 = x_1 - \lambda P_2 = 0 \qquad \text{Eq. 3}$$

From Eq. (2) and (3):

$$\frac{x_2}{x_1} = \frac{P_1}{P_2} \quad (= MRS)$$

Solve for $x_2$:

$$x_2 = \frac{P_1}{P_2}x_1$$

Sub into (1):

$$B = P_1 x_1 + P_2\left(\frac{P_1}{P_2}\right)x_1 = 2P_1 x_1$$

$$x_1^* = \frac{B}{2P_1} \quad \text{and} \quad x_2^* = \frac{B}{2P_2}$$

The solutions for $x_1$ and $x_2$ are the demand functions for $x_1$ and $x_2$.

1.1.5 Properties of Demand Functions

1. "Homogeneous of degree zero": multiply all prices and income by $\alpha$:

$$x_1^* = \frac{\alpha B}{2(\alpha P_1)} = \frac{B}{2P_1}$$

2. "For normal goods demand has a negative slope":

$$\frac{\partial x_1^*}{\partial P_1} = -\frac{B}{2P_1^2} < 0$$

3. "For normal goods the Engel curve has a positive slope":

$$\frac{\partial x_1^*}{\partial B} = \frac{1}{2P_1} > 0$$

In this example $x_1$ and $x_2$ are both normal goods (rather than inferior or Giffen goods).
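All three properties can be verified symbolically from the first-order conditions. The sketch below is an editorial addition (assuming sympy): it derives the demand function for $x_1$ and checks homogeneity of degree zero, the downward-sloping demand curve, and the upward-sloping Engel curve.

```python
import sympy as sp

x1, x2, lam, B, P1, P2, a = sp.symbols('x1 x2 lam B P1 P2 alpha', positive=True)

L = x1*x2 + lam*(B - P1*x1 - P2*x2)
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
d1 = sol[x1]                                # demand for x1: B/(2*P1)

# 1. Homogeneous of degree zero in (P1, P2, B)
print(sp.simplify(d1.subs({B: a*B, P1: a*P1, P2: a*P2}) - d1))   # 0

# 2. Downward-sloping demand: d(x1*)/dP1 = -B/(2*P1**2) < 0
print(sp.diff(d1, P1))

# 3. Upward-sloping Engel curve: d(x1*)/dB = 1/(2*P1) > 0
print(sp.diff(d1, B))
```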
Given: $U = x_1 x_2$

And: $x_1^* = \frac{B}{2P_1}$ and $x_2^* = \frac{B}{2P_2}$

Substituting into the utility function we get:

$$U = x_1^* x_2^* = \left(\frac{B}{2P_1}\right)\left(\frac{B}{2P_2}\right)$$

$$U = \frac{B^2}{4P_1P_2}$$

Now we have utility expressed as a function of prices and income:

$$U^* = U(P_1, P_2, B)$$

This is "The Indirect Utility Function."

At $U = U_0 = \frac{B^2}{4P_1P_2}$ we can re-arrange to get:

$$B = 2P_1^{1/2}P_2^{1/2}U_0^{1/2}$$

This is the "Expenditure Function."
1.2 Minimization and Lagrange

$$\min_{x, y} \; P_x x + P_y y$$

Subject to

$$U_0 = U(x, y)$$

Lagrange:

$$L = P_x x + P_y y + \lambda(U_0 - U(x, y))$$

F.O.C.

$$L_\lambda = U_0 - U(x, y) = 0 \qquad \text{Eq. 1}$$
$$L_x = P_x - \lambda U_x = 0 \qquad \text{Eq. 2}$$
$$L_y = P_y - \lambda U_y = 0 \qquad \text{Eq. 3}$$

From (2) and (3) we get:

$$\frac{P_x}{P_y} = \frac{\lambda U_x}{\lambda U_y} = \frac{U_x}{U_y} = MRS$$

(The same result as in the MAX problem.)

Solving (1), (2), and (3) by Cramer's rule, or some other method, we get:

$$x^* = x(P_x, P_y, U_0)$$
$$y^* = y(P_x, P_y, U_0)$$
$$\lambda^* = \lambda(P_x, P_y, U_0)$$
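For a concrete case, the dual of Example 4 can be solved the same way. The sketch below is an editorial addition (assuming sympy and the hypothetical utility $U = xy$); it produces the compensated (Hicksian) demands and confirms that the tangency condition $U_x/U_y = P_x/P_y$ holds at the optimum.

```python
import sympy as sp

x, y, lam, Px, Py, U0 = sp.symbols('x y lam Px Py U0', positive=True)

# Expenditure minimization with the hypothetical utility U = x*y
L = Px*x + Py*y + lam*(U0 - x*y)
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
s = [s for s in sols if s[lam].is_positive][0]   # keep the economically relevant root

print(s[x], s[y])                    # sqrt(Py*U0/Px), sqrt(Px*U0/Py): Hicksian demands
print(sp.simplify(s[y]/s[x] - Px/Py))   # 0: Ux/Uy = Px/Py at the optimum
```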
1.3 Second Order Conditions

- To determine whether the Lagrangian is at a max or a min we use an approach similar to the Hessian in unconstrained cases.
- Second order conditions are determined from the Bordered Hessian.
- There are two ways of setting up a bordered Hessian.
- We will look at both ways, since both forms are used equally in the economic literature.
- Both ways are equally good.

Given Max:

$$f(x, y) + \lambda g(x, y)$$

F.O.C.s

$$L_\lambda = g(x, y) = 0 \qquad \text{Eq. 1}$$
$$L_x = f_x + \lambda g_x = 0 \qquad \text{Eq. 2}$$
$$L_y = f_y + \lambda g_y = 0 \qquad \text{Eq. 3}$$

For the second order conditions, totally differentiate the F.O.C.s with respect to $x$, $y$, and $\lambda$:

$$g_x\,dx + g_y\,dy = 0 \quad (\text{no } \lambda \text{ in Eq. 1}) \qquad (1')$$
$$(f_{xx} + \lambda g_{xx})dx + (f_{xy} + \lambda g_{xy})dy + g_x\,d\lambda = 0 \qquad (2')$$
$$(f_{yx} + \lambda g_{yx})dx + (f_{yy} + \lambda g_{yy})dy + g_y\,d\lambda = 0 \qquad (3')$$

In matrix form, the coefficient matrix is the Bordered Hessian:

$$\begin{bmatrix} 0 & g_x & g_y \\ g_x & (f_{xx} + \lambda g_{xx}) & (f_{xy} + \lambda g_{xy}) \\ g_y & (f_{yx} + \lambda g_{yx}) & (f_{yy} + \lambda g_{yy}) \end{bmatrix} \begin{bmatrix} d\lambda \\ dx \\ dy \end{bmatrix}$$

Or written as

$$\begin{bmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{bmatrix} \begin{bmatrix} d\lambda \\ dx \\ dy \end{bmatrix}$$

(where $L_{xx} = f_{xx} + \lambda g_{xx}$, etc.)

Notice that the Bordered Hessian is the ordinary Hessian bordered by the first partial derivatives of the constraint.
$$H = \begin{vmatrix} L_{xx} & L_{xy} \\ L_{yx} & L_{yy} \end{vmatrix} \qquad \bar{H} = \begin{vmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{vmatrix}$$

$H$ is the ordinary (unconstrained) Hessian; $\bar{H}$ is the bordered (constrained) Hessian.

1.4 Determining Max or Min with a Single Constraint

Two-variable case:

$$\bar{H}_2 = \begin{vmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{vmatrix} \qquad \text{Max if } |\bar{H}_2| > 0; \quad \text{Min if } |\bar{H}_2| < 0$$

Three-variable case:

$$\bar{H}_3 = \begin{vmatrix} 0 & g_1 & g_2 & g_3 \\ g_1 & L_{11} & L_{12} & L_{13} \\ g_2 & L_{21} & L_{22} & L_{23} \\ g_3 & L_{31} & L_{32} & L_{33} \end{vmatrix} \qquad \text{Max if } |\bar{H}_2| > 0,\; |\bar{H}_3| < 0; \quad \text{Min if } |\bar{H}_2| < 0,\; |\bar{H}_3| < 0$$

n-variable case:

Max: $|\bar{H}_2| > 0,\; |\bar{H}_3| < 0,\; |\bar{H}_4| > 0, \ldots,\; (-1)^n|\bar{H}_n| > 0$

Min: $|\bar{H}_2| < 0,\; |\bar{H}_3| < 0, \ldots,\; |\bar{H}_n| < 0$
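These sign rules are mechanical enough to code. The helper below is an editorial sketch, not from the notes: given a bordered Hessian with the constraint border in row and column 0, it builds the leading bordered minors $|\bar{H}_2|, \ldots, |\bar{H}_n|$ and applies the alternating-sign test.

```python
import sympy as sp

def classify(H_bar):
    """Classify a bordered Hessian (constraint border in row/column 0).

    Max if the minors alternate with sign of |Hk| equal to (-1)**k;
    Min if all minors are negative.
    """
    n = H_bar.shape[0] - 1                     # number of choice variables
    minors = [H_bar[:k+1, :k+1].det() for k in range(2, n+1)]
    if all(sp.sign(m) == (-1)**k for k, m in enumerate(minors, start=2)):
        return 'max'
    if all(sp.sign(m) == -1 for m in minors):
        return 'min'
    return 'inconclusive'

# Example: U = x*y on the budget Px*x + Py*y = B, evaluated at Px = Py = 1,
# so Lxx = Lyy = 0, Lxy = 1, and the border entries are gx = gy = -1.
H_bar = sp.Matrix([[0, -1, -1],
                   [-1, 0, 1],
                   [-1, 1, 0]])
print(classify(H_bar))    # 'max'  (determinant = 2 > 0)
```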
1.5 Alternative Form of the Bordered Hessian

Given

$$\max_{x,y} \; f(x, y) + \lambda g(x, y)$$

F.O.C.s

$$L_x = f_x + \lambda g_x = 0$$
$$L_y = f_y + \lambda g_y = 0$$
$$L_\lambda = g(x, y) = 0$$

Bordered Hessian:

$$\begin{bmatrix} f_{xx} + \lambda g_{xx} & f_{xy} + \lambda g_{xy} & g_x \\ f_{yx} + \lambda g_{yx} & f_{yy} + \lambda g_{yy} & g_y \\ g_x & g_y & 0 \end{bmatrix} \begin{bmatrix} dx \\ dy \\ d\lambda \end{bmatrix}$$

The rules for max or min are the same for this form as well.

1.5.1 Example Max

$$\max \; xy + \lambda(B - P_x x - P_y y)$$

F.O.C.s

$$L_x = y - \lambda P_x = 0$$
$$L_y = x - \lambda P_y = 0$$
$$L_\lambda = B - P_x x - P_y y = 0$$

$$x^* = \frac{B}{2P_x} \qquad y^* = \frac{B}{2P_y} \qquad \lambda^* = \frac{B}{2P_xP_y}$$

S.O.C.s

$$\bar{H}_2 = \begin{vmatrix} 0 & 1 & -P_x \\ 1 & 0 & -P_y \\ -P_x & -P_y & 0 \end{vmatrix}$$
$$\det \bar{H}_2 = 0 - 1\begin{vmatrix} 1 & -P_y \\ -P_x & 0 \end{vmatrix} + (-P_x)\begin{vmatrix} 1 & 0 \\ -P_x & -P_y \end{vmatrix} = P_xP_y + P_xP_y = 2P_xP_y > 0$$

Therefore $L$ is a Max.

1.5.2 Example Min

$$\min \; P_x x + P_y y + \lambda(U_0 - xy)$$

F.O.C.s

$$L_x = P_x - \lambda y = 0$$
$$L_y = P_y - \lambda x = 0$$
$$L_\lambda = U_0 - xy = 0$$

$$x^* = \frac{P_y^{1/2}U_0^{1/2}}{P_x^{1/2}} \qquad y^* = \frac{P_x^{1/2}U_0^{1/2}}{P_y^{1/2}} \qquad \lambda^* = \frac{P_x^{1/2}P_y^{1/2}}{U_0^{1/2}}$$

S.O.C.s

$$\bar{H}_2 = \begin{vmatrix} 0 & -\lambda & -y \\ -\lambda & 0 & -x \\ -y & -x & 0 \end{vmatrix} = \lambda\begin{vmatrix} -\lambda & -x \\ -y & 0 \end{vmatrix} + (-y)\begin{vmatrix} -\lambda & 0 \\ -y & -x \end{vmatrix} = -\lambda xy - \lambda xy = -2\lambda xy < 0$$

Therefore $L$ is a Min.
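Both determinants can be confirmed symbolically. A minimal sketch (an editorial addition, assuming sympy):

```python
import sympy as sp

Px, Py, lam, x, y = sp.symbols('P_x P_y lam x y', positive=True)

# Example Max: alternative-form bordered Hessian of L = xy + lam*(B - Px*x - Py*y)
H_max = sp.Matrix([[0, 1, -Px], [1, 0, -Py], [-Px, -Py, 0]])
print(H_max.det())     # 2*P_x*P_y  (> 0, so a maximum)

# Example Min: L = Px*x + Py*y + lam*(U0 - x*y)
H_min = sp.Matrix([[0, -lam, -y], [-lam, 0, -x], [-y, -x, 0]])
print(H_min.det())     # -2*lam*x*y  (< 0, so a minimum)
```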
1.6 Interpreting λ

Given

$$\max \; U(x, y) + \lambda(B - P_x x - P_y y)$$

By solving the F.O.C.s we get

$$x^* = x(P_x, P_y, B) \qquad y^* = y(P_x, P_y, B) \qquad \lambda^* = \lambda(P_x, P_y, B)$$

Sub $x^*$, $y^*$, $\lambda^*$ back into the Lagrangian:

$$L^* = U(x^*, y^*) + \lambda^*(B - P_x x^* - P_y y^*)$$

Differentiate with respect to the constant $B$:

$$\frac{\partial L^*}{\partial B} = U_x\frac{dx^*}{dB} + U_y\frac{dy^*}{dB} - \lambda^* P_x\frac{dx^*}{dB} - \lambda^* P_y\frac{dy^*}{dB} + \lambda^*\frac{dB}{dB} + (B - P_xx^* - P_yy^*)\frac{d\lambda^*}{dB}$$

Or

$$\frac{\partial L^*}{\partial B} = \underbrace{(U_x - \lambda^* P_x)}_{=0}\frac{dx^*}{dB} + \underbrace{(U_y - \lambda^* P_y)}_{=0}\frac{dy^*}{dB} + \underbrace{(B - P_xx^* - P_yy^*)}_{=0}\frac{d\lambda^*}{dB} + \lambda^*$$

$$\frac{\partial L^*}{\partial B} = \lambda^* = \text{the change in utility from a change in the constant} = \text{Marginal Utility of Money}$$
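This interpretation can be verified for the running example $U = xy$: maximized utility as a function of $B$ is the indirect utility function, and its derivative with respect to $B$ should equal $\lambda^*$. The sketch below is an editorial addition (assuming sympy).

```python
import sympy as sp

x, y, lam, B, Px, Py = sp.symbols('x y lam B Px Py', positive=True)

# Hypothetical utility U = x*y; solve the FOCs symbolically
L = x*y + lam*(B - Px*x - Py*y)
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]

# Maximized utility as a function of B (the indirect utility function)
U_star = sol[x] * sol[y]                            # B**2/(4*Px*Py)
print(sp.simplify(sp.diff(U_star, B) - sol[lam]))   # 0:  dU*/dB = lambda*
```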
L 1 = U 1 λp 1 = 0 L 2 = U 2 λp 2 = 0 L 3 = B P 1 x 1 P 2 x 2 = 0 Solving the FOC s gives x * 1, x * 2, λ Totally differentiate the FOC s with respect to EVERY variable U 11 dx 1 + U 12 dx 2 P 1 dλ λdp 1 = 0 U 21 dx 1 + U 22 dx 2 P 2 dλ λdp 2 = 0 P 1 dx 1 P 2 dx 2 x 1dP 1 x 2dP 2 + dβ = 0 Take exogenous differentials (dp 1, dp 2, dβ) to the other side and set up matrix U 11 U 12 P 1 U 21 U 22 P 2 dx 1 λdp 1 dx 2 = λdp 2 P 1 P 2 0 dλ dβ + x 1 dp 1 + x 2 dp 2 { Where H > 0 } Set dp 1 = dp 2 = 0 find dx 1 dx 1 dβ = dβ 0 U 12 P 1 0 U 22 P 2 1 P 2 0 H U 12 P 1 U 22 P 2 = ( 1) H = H 31 H Where H 31 = U 12 P 1 U 22 P 2 = ( U 12P 1 + U 22 P 1 ) 16
Therefore

$$\frac{dx_1}{dB} = \frac{-|H_{31}|}{|\bar{H}|} \gtrless 0 \; (?)$$

Now set $dP_2 = dB = 0$:

$$\begin{bmatrix} U_{11} & U_{12} & -P_1 \\ U_{21} & U_{22} & -P_2 \\ -P_1 & -P_2 & 0 \end{bmatrix} \begin{bmatrix} \frac{dx_1}{dP_1} \\ \frac{dx_2}{dP_1} \\ \frac{d\lambda}{dP_1} \end{bmatrix} = \begin{bmatrix} \lambda \\ 0 \\ x_1^* \end{bmatrix}$$

By Cramer's rule (expand down column 1):

$$\frac{dx_1}{dP_1} = \frac{\begin{vmatrix} \lambda & U_{12} & -P_1 \\ 0 & U_{22} & -P_2 \\ x_1^* & -P_2 & 0 \end{vmatrix}}{|\bar{H}|} = \lambda\frac{|H_{11}|}{|\bar{H}|} + x_1^*\frac{|H_{31}|}{|\bar{H}|}$$

where

$$|H_{11}| = \begin{vmatrix} U_{22} & -P_2 \\ -P_2 & 0 \end{vmatrix} = -P_2^2 < 0 \qquad |H_{31}| = -U_{12}P_2 + U_{22}P_1$$

But

$$\frac{dx_1}{dB} = \frac{-|H_{31}|}{|\bar{H}|}$$

So

$$\frac{dx_1}{dP_1} = \lambda\frac{|H_{11}|}{|\bar{H}|} - x_1^*\frac{dx_1}{dB} = \left.\frac{dx_1}{dP_1}\right|_{U\text{ held constant}} + (-x_1^*)\frac{dx_1}{dB}$$
2.2 The Slutsky Equation

$$\frac{dx_1}{dP_1} = \lambda\frac{|H_{11}|}{|\bar{H}|} + x_1^*\frac{|H_{31}|}{|\bar{H}|} = \left.\frac{dx_1}{dP_1}\right|_{U\text{ constant}} + (-x_1^*)\frac{dx_1}{dB}$$

$$\frac{dx_1}{dP_1} = \{\text{pure Substitution Effect}\} + \{\text{Income Effect}\}$$

In the usual indifference-curve diagram, the move from A to B is $\lambda\frac{|H_{11}|}{|\bar{H}|}$, the "Substitution Effect," and the move from B to C is $x_1^*\frac{|H_{31}|}{|\bar{H}|}$, the "Income Effect."
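The decomposition can be checked against Example 4. In the sketch below (an editorial addition, assuming sympy), the Marshallian demand is $x_1 = B/(2P_1)$ and the Hicksian demand $x_1 = \sqrt{P_2U_0/P_1}$ comes from the dual expenditure-minimization problem of Section 1.2; evaluated at $U_0 = B^2/(4P_1P_2)$, the substitution and income effects sum to the total price effect.

```python
import sympy as sp

B, P1, P2, U0 = sp.symbols('B P1 P2 U0', positive=True)

x1_m = B/(2*P1)                      # Marshallian demand (from Example 4)
x1_h = sp.sqrt(P2*U0/P1)             # Hicksian demand (from the dual problem)

total  = sp.diff(x1_m, P1)                               # total price effect
subst  = sp.diff(x1_h, P1).subs(U0, B**2/(4*P1*P2))      # evaluated at U0 = U*
income = -x1_m * sp.diff(x1_m, B)                        # -x1 * dx1/dB

print(sp.simplify(total - (subst + income)))             # 0: Slutsky holds
```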
3 Homogeneous Functions

3.1 Constant Returns to Scale

Given $y = f(x_1, x_2, \ldots, x_n)$, if we change all the inputs by a factor of $t$, then

$$f(tx_1, tx_2, \ldots, tx_n) = tf(x_1, x_2, \ldots, x_n) = ty$$

i.e. if we double inputs, we double output.

A constant returns to scale production function is said to be HOMOGENEOUS of DEGREE ONE, or LINEARLY HOMOGENEOUS.

3.2 Homogeneous of Degree r

A function $y = f(x_1, \ldots, x_n)$ is said to be homogeneous of degree $r$ if

$$f(tx_1, tx_2, \ldots, tx_n) = t^r f(x_1, x_2, \ldots, x_n)$$

Example: Let $f(x_1, x_2) = x_1x_2$ and change all the $x_i$'s by $t$:

$$f(tx_1, tx_2) = (tx_1)(tx_2) = t^2(x_1x_2) = t^2 f(x_1, x_2)$$

Therefore $f(x_1, x_2) = x_1x_2$ is homogeneous of degree 2.

3.3 Cobb-Douglas

Let output be $Y = f(K, L) = L^\alpha K^{1-\alpha}$, where $0 < \alpha < 1$.

Multiply $K$ and $L$ by $t$:
f(tl, tk) = (tl) α (tk) 1 α Therefore L α K 1 α is H.O.D one. General Cobb-Douglas: y=l α K β = t α+1 α L α K 1 α tl α K 1 α f(tl, tk) = (tl) α (tk) β = t α+β L α K β L α K β is homogenous of degree α + β 3.4 Further properties of Cobb-Douglas Given y = L α K 1 α MP L = dy dl = dlα 1 K 1 α = α ( ) 1 α K L MP K = dy dk = (1 α)lα K α = (1 α) MP L and MP K are homogenous of degree zero MP L (tl, tk) = α ( ) 1 α tk = α tl MP L and MP K depend only on the K L ratio 20 ( ) α K L ( ) 1 α K L
3.5 The Marginal Rate of Technical Substitution

$$MRTS = \frac{MP_L}{MP_K} = \frac{\alpha\left(\frac{K}{L}\right)^{1-\alpha}}{(1-\alpha)\left(\frac{K}{L}\right)^{-\alpha}} = \frac{\alpha}{1-\alpha}\cdot\frac{K}{L}$$

MRTS is homogeneous of degree zero.

The slope of the isoquant (MRTS) depends only on the $K/L$ ratio, not the absolute levels of $K$ and $L$.

Along any ray from the origin the isoquants are parallel. This is true for all homogeneous functions regardless of the degree.

Given:

$$f(tx_1, \ldots, tx_n) = t^r f(x_1, \ldots, x_n)$$

Differentiate both sides with respect to $x_1$:

$$\frac{df}{d(tx_1)}\cdot\frac{d(tx_1)}{dx_1} = t^r\frac{df}{dx_1}$$
But $\frac{d(tx_1)}{dx_1} = t$, so

$$t\frac{df}{d(tx_1)} = t^r\frac{df}{dx_1}$$

$$\frac{df}{d(tx_1)} = \frac{t^r}{t}\frac{df}{dx_1} = t^{r-1}\frac{df}{dx_1}$$

Therefore: for any function homogeneous of degree $r$, that function's first partial derivatives are homogeneous of degree $r-1$.

3.6 Monotonic Transformations and Homothetic Functions

Let $y = f(x_1, x_2)$ and let $z = g(y)$, where $g'(y) > 0$ and $f(x_1, x_2)$ is H.O.D. $r$.

$g(y)$ is a monotonic transformation of $y$.

We know:

$$MRTS = \frac{f_1}{f_2} = -\frac{dx_2}{dx_1}$$

Totally differentiate $z = g(y)$ and set $dz = 0$:

$$dz = \frac{dg}{dy}\frac{\partial y}{\partial x_1}dx_1 + \frac{dg}{dy}\frac{\partial y}{\partial x_2}dx_2 = 0$$

$$-\frac{dx_2}{dx_1} = \frac{\frac{dg}{dy}\frac{\partial y}{\partial x_1}}{\frac{dg}{dy}\frac{\partial y}{\partial x_2}} = \frac{\frac{\partial y}{\partial x_1}}{\frac{\partial y}{\partial x_2}} = \frac{f_1}{f_2}$$
The slopes of the level curves (isoquants) are invariant to monotonic transformations.

A monotonic transformation of a homogeneous function creates a homothetic function.

Homothetic functions have the same slope properties along a ray from the origin as the homogeneous function. However, homothetic functions are NOT homogeneous.

Example: Let $f(x_1, x_2) = x_1x_2$ (where $r = 2$) and let

$$z = g(y) = \ln(x_1x_2) = \ln x_1 + \ln x_2$$

$$g(f(tx_1, tx_2)) = \ln(tx_1) + \ln(tx_2) = 2\ln t + \ln x_1 + \ln x_2 \neq t^r\ln(x_1x_2)$$

Properties of Homothetic Functions

1. A homothetic function has the same shaped level curves as the homogeneous function that was transformed to create it.

2. Homogeneous production functions cannot produce U-shaped average cost curves, but a homothetic function can.

3. Slopes of level curves (i.e. indifference curves): for homothetic functions the slope of the level curves depends only on the ratio of quantities. i.e. if $y = f(x_1, x_2)$ is homothetic, then

$$\frac{f_1}{f_2} = g\left(\frac{x_2}{x_1}\right)$$
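The log example can be checked directly. The sketch below is an editorial addition (assuming sympy): it shows that $\ln(x_1x_2)$ fails the homogeneity test (the scale factor enters additively) while its level-curve slope is identical to that of $x_1x_2$.

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', positive=True)

f = x1*x2                   # homogeneous of degree 2
z = sp.log(f)               # monotonic transformation -> homothetic, not homogeneous

# z(t*x1, t*x2) picks up an additive 2*log(t), so no t**r factors out
print(sp.expand_log(z.subs({x1: t*x1, x2: t*x2})))   # 2*log(t) + log(x1) + log(x2)

# But the level-curve slope is unchanged: f1/f2 = z1/z2 = x2/x1
print(sp.simplify(sp.diff(f, x1)/sp.diff(f, x2)
                  - sp.diff(z, x1)/sp.diff(z, x2)))  # 0
```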
3.7 Euler's Theorem

Let $f(x_1, x_2)$ be homogeneous of degree $r$. Then

$$f(tx_1, tx_2) = t^r f(x_1, x_2)$$

Differentiate with respect to $t$:

$$\frac{df}{d(tx_1)}\frac{d(tx_1)}{dt} + \frac{df}{d(tx_2)}\frac{d(tx_2)}{dt} = rt^{r-1}f(x_1, x_2)$$

Since $\frac{d(tx_i)}{dt} = x_i$ for all $i$:

$$\frac{df}{d(tx_1)}x_1 + \frac{df}{d(tx_2)}x_2 = rt^{r-1}f(x_1, x_2)$$

This is true for all values of $t$, so let $t = 1$:

$$\frac{df}{dx_1}x_1 + \frac{df}{dx_2}x_2 = f_1x_1 + f_2x_2 = rf(x_1, x_2) \qquad \text{(Euler's Theorem)}$$

If $y = f(L, K)$ is constant returns to scale, then

$$y = MP_L \cdot L + MP_K \cdot K \qquad \text{(Euler's Theorem)}$$

Example: Let $y = L^\alpha K^{1-\alpha}$, where

$$MP_L = \alpha L^{\alpha-1}K^{1-\alpha} \qquad MP_K = (1-\alpha)L^\alpha K^{-\alpha}$$

From Euler's Theorem:
$$y = MP_L \cdot L + MP_K \cdot K = \left(\alpha L^{\alpha-1}K^{1-\alpha}\right)L + \left((1-\alpha)L^\alpha K^{-\alpha}\right)K = \left[\alpha + (1-\alpha)\right]L^\alpha K^{1-\alpha} = L^\alpha K^{1-\alpha} = y$$

3.7.1 Euler's Theorem and Long Run Equilibrium

Suppose $q = f(K, L)$ is H.O.D. 1. Then the profit function for a perfectly competitive firm is

$$\pi = pq - rK - wL = pf(K, L) - rK - wL$$

F.O.C.s

$$\frac{d\pi}{dL} = pf_L - w = 0$$
$$\frac{d\pi}{dK} = pf_K - r = 0$$

(where $f_L = MP_L$ and $f_K = MP_K$), or

$$MP_L = \frac{w}{p}, \qquad MP_K = \frac{r}{p}$$

are necessary conditions for profit maximization. Therefore, at the optimum,

$$\pi^* = pf(K^*, L^*) - wL^* - rK^*$$

From Euler's Theorem:

$$f(K^*, L^*) = MP_K \cdot K^* + MP_L \cdot L^*$$
Substitute into $\pi^*$:

$$\pi^* = p\left[MP_K \cdot K^* + MP_L \cdot L^*\right] - wL^* - rK^*$$

Or, since $pMP_L = w$ and $pMP_K = r$:

$$\pi^* = \left[wL^* + rK^*\right] - wL^* - rK^* = 0$$

In the long run, $\pi = 0$.
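Both steps of the argument can be checked symbolically. The sketch below is an editorial addition (assuming sympy): it verifies Euler's Theorem for the CRS Cobb-Douglas and then confirms that profit vanishes when each factor is paid the value of its marginal product.

```python
import sympy as sp

L, K, a, p = sp.symbols('L K alpha p', positive=True)

y = L**a * K**(1 - a)                    # constant returns to scale
MPL, MPK = sp.diff(y, L), sp.diff(y, K)

# Euler's Theorem with r = 1: MPL*L + MPK*K = y
print(sp.simplify(MPL*L + MPK*K - y))    # 0

# Long-run profit with w = p*MPL and r = p*MPK: pi = p*y - w*L - r*K = 0
print(sp.simplify(p*y - (p*MPL)*L - (p*MPK)*K))   # 0
```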
3.7.2 Concavity and Quasiconcavity

3.7.3 Concavity

Convex level curves and concave in scale. Necessary for an unconstrained optimum.

3.7.4 Quasi-Concavity

Only has convex level curves. Necessary for a constrained optimum.

Example:
1. Concave: $y = x_1^{1/3}x_2^{1/3}$ is H.O.D. 2/3 (diminishing returns); $MRTS = \frac{x_2}{x_1}$

2. Quasi-concave: $y = x_1^2x_2^2$ is H.O.D. 4 (increasing returns); $MRTS = \frac{x_2}{x_1}$
3.8 Review: When to Use the Implicit Function Theorem (Jacobian)

3.8.1 General Form

$$\max \; U(x, y) + \lambda(B - P_x x - P_y y)$$

F.O.C.

$$L_x: U_x - \lambda P_x = 0 \qquad \text{Eq. 1}$$
$$L_y: U_y - \lambda P_y = 0 \qquad \text{Eq. 2}$$
$$L_\lambda: B - P_x x - P_y y = 0 \qquad \text{Eq. 3}$$
Equations 1, 2, and 3 IMPLICITLY define

$$x^* = x^*(B, P_x, P_y) \qquad y^* = y^*(B, P_x, P_y) \qquad \lambda^* = \lambda^*(B, P_x, P_y)$$

S.O.C.

$$|\bar{H}| = \begin{vmatrix} 0 & -P_x & -P_y \\ -P_x & U_{xx} & U_{xy} \\ -P_y & U_{yx} & U_{yy} \end{vmatrix} > 0 \quad \text{(by assumption)}$$

To find $\frac{dx^*}{dP_x}$: use the Implicit Function Theorem.

3.8.2 Specific Form

$$\max \; xy + \lambda(B - P_x x - P_y y)$$

F.O.C.

$$L_x: y - \lambda P_x = 0 \qquad \text{Eq. 1}$$
$$L_y: x - \lambda P_y = 0 \qquad \text{Eq. 2}$$
$$L_\lambda: B - P_x x - P_y y = 0 \qquad \text{Eq. 3}$$

Equations 1, 2, and 3 EXPLICITLY define

$$x^* = \frac{B}{2P_x} \qquad y^* = \frac{B}{2P_y} \qquad \lambda^* = \frac{B}{2P_xP_y}$$

S.O.C.

$$|\bar{H}| = \begin{vmatrix} 0 & -P_x & -P_y \\ -P_x & 0 & 1 \\ -P_y & 1 & 0 \end{vmatrix} = 2P_xP_y > 0$$

To find $\frac{dx^*}{dP_x}$: differentiate $x^*$ directly:

$$\frac{dx^*}{dP_x} = -\frac{B}{2P_x^2} < 0$$
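As a final check, both routes can be run on the specific form. The sketch below is an editorial addition (assuming sympy): it applies the implicit function theorem to the general F.O.C. system (solving the linear system built from the Jacobian) and compares the result, evaluated at the explicit solution, with direct differentiation of $x^* = B/(2P_x)$.

```python
import sympy as sp

x, y, lam, B, Px, Py = sp.symbols('x y lam B Px Py', positive=True)

# Specific form: explicit solution, differentiate directly
x_star = B/(2*Px)
direct = sp.diff(x_star, Px)                     # -B/(2*Px**2)

# General form: implicit function theorem on the FOCs of
# L = x*y + lam*(B - Px*x - Py*y)
F = sp.Matrix([y - lam*Px, x - lam*Py, B - Px*x - Py*y])
J = F.jacobian([x, y, lam])                      # bordered-Hessian-style Jacobian
dF_dPx = sp.diff(F, Px)
implicit = (-J.inv() * dF_dPx)[0]                # dx/dPx from the linear system

# Evaluate the implicit answer at the optimum and compare
opt = {x: B/(2*Px), y: B/(2*Py), lam: B/(2*Px*Py)}
print(sp.simplify(implicit.subs(opt) - direct))  # 0: both routes agree
```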