Optimization Paul Schrimpf September 12, 2018


University of British Columbia, Economics 526

Today's lecture is about optimization. Useful references are chapters 1-5 of Dixit (1990), the optimization chapters of Simon and Blume (1994), chapter 5 of Carter (2001), and chapter 7 of De la Fuente (2000).

We typically model economic agents as optimizing some objective function. Consumers maximize utility.

Example 0.1. A consumer with income $y$ chooses a bundle of goods $x_1, \ldots, x_n$ with prices $p_1, \ldots, p_n$ to maximize utility $u(x)$:
$$\max_{x_1, \ldots, x_n} u(x_1, \ldots, x_n) \quad \text{s.t.} \quad p_1 x_1 + \cdots + p_n x_n \leq y.$$

Firms maximize profits.

Example 0.2. A firm produces a good $y$ with price $p$ and uses inputs $x \in \mathbb{R}^k$ with prices $w \in \mathbb{R}^k$. The firm's production function is $f : \mathbb{R}^k \to \mathbb{R}$. The firm's problem is
$$\max_{y, x} \; p y - w^T x \quad \text{s.t.} \quad f(x) \geq y.$$

1 Unconstrained optimization

Although most economic optimization problems involve some constraints, it is useful to begin by studying unconstrained optimization problems. There are two reasons for this. One is that the results are somewhat simpler and easier to understand without constraints. The second is that we will sometimes encounter unconstrained problems. For example, we can substitute the constraint into the firm's problem from example 0.2 to obtain an unconstrained problem,
$$\max_{x} \; p f(x) - w^T x.$$

1.1 Notation and definitions

An optimization problem refers to finding the maximum or minimum of a function, perhaps subject to some constraints. In economics, the most common optimization problems are utility maximization and profit maximization. Because of this, we will state most of our definitions and results for maximization problems. (This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. See appendix A if any notation is unclear.)

Of course, we could just as well state each definition and result for a minimization problem by reversing the direction of most inequalities. We will start by examining generic unconstrained optimization problems of the form
$$\max_{x} f(x)$$
where $x \in \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R}$. To allow for the possibility that the domain of $f$ is not all of $\mathbb{R}^n$, we will let $U \subseteq \mathbb{R}^n$ be the domain and write $\max_{x \in U} f(x)$ for the maximum of $f$ on $U$. If $F = \max_{x \in U} f(x)$, we mean that $f(x) \leq F$ for all $x \in U$ and $f(x^*) = F$ for some $x^* \in U$.

Definition 1.1. $F = \max_{x \in U} f(x)$ is the maximum of $f$ on $U$ if $f(x) \leq F$ for all $x \in U$ and $f(x^*) = F$ for some $x^* \in U$. There may be more than one such $x^*$. We denote the set of all such $x^*$ by $\arg\max_{x \in U} f(x)$ and might write $x^* \in \arg\max_{x \in U} f(x)$, or, if we know there is only one such $x^*$, we sometimes write $x^* = \arg\max_{x \in U} f(x)$.

Definition 1.2. $x^* \in U$ is a maximizer of $f$ on $U$ if $f(x^*) = \max_{x \in U} f(x)$. The set of all maximizers is denoted $\arg\max_{x \in U} f(x)$.

Definition 1.3. $x^* \in U$ is a strict maximizer of $f$ if $f(x^*) > f(x)$ for all $x \in U$ with $x \neq x^*$.

Definition 1.4. $f$ has a local maximum at $x$ if there exists $\delta > 0$ such that $f(y) \leq f(x)$ for all $y$ in $U$ and within $\delta$ distance of $x$ (we will use the notation $y \in N_\delta(x) \cap U$, which could be read "$y$ in a $\delta$ neighborhood of $x$ intersected with $U$"). Each such $x$ is called a local maximizer of $f$. If $f(y) < f(x)$ for all $y \neq x$ with $y \in N_\delta(x) \cap U$, then we say $f$ has a strict local maximum at $x$. When we want to be explicit about the distinction between a local maximum and the maximum in definition 1.1, we refer to the latter as the global maximum.

Example 1.1. Here are some examples of functions from $\mathbb{R}$ to $\mathbb{R}$ and their maxima and minima.
(1) $f(x) = x^2$ is minimized at $0$ with minimum $0$.
(2) $f(x) = c$ has minimum and maximum $c$. Any $x$ is a maximizer.
(3) $f(x) = \cos(x)$ has maximum $1$ and minimum $-1$. $x = 2\pi n$ for any integer $n$ is a maximizer.
(4) $f(x) = \cos(x) + x/2$ has no global maximizer or minimizer, but has many local ones.

A related concept to the maximum is the supremum.

Definition 1.5. The supremum of $f$ on $U$, written $S = \sup_{x \in U} f(x)$, means that $S \geq f(x)$ for all $x \in U$ and for any $A < S$ there exists $x_A \in U$ such that $f(x_A) > A$.

The supremum is also called the least upper bound. The main difference between the supremum and the maximum is that the supremum can exist when the maximum does not.

Example 1.2. Let $x \in \mathbb{R}$ and $f(x) = \frac{-1}{1 + x^2}$. Then $\max_{x \in \mathbb{R}} f(x)$ does not exist, but $\sup_{x \in \mathbb{R}} f(x) = 0$.

An important fact about real numbers is that if $f : \mathbb{R}^n \to \mathbb{R}$ is bounded above, i.e. there exists $B \in \mathbb{R}$ such that $f(x) \leq B$ for all $x \in \mathbb{R}^n$, then $\sup_{x \in \mathbb{R}^n} f(x)$ exists and is unique. The infimum is to the minimum as the supremum is to the maximum.

1.2 First order conditions

Suppose we want to maximize $f(x)$. If we are at $x$ and travel some small distance in direction $v$, then we can approximate how much $f$ will change by its directional derivative, i.e.
$$f(x + v) \approx f(x) + d f(x; v).$$
If there is some direction $v$ with $d f(x; v) \neq 0$, then we could take either that $v$ step or the $-v$ step and reach a higher function value. Therefore, if $x^*$ is a local maximum, then it must be that $d f(x^*; v) = 0$ for all $v$. From theorem B.1, this is the same as saying $\frac{\partial f}{\partial x_i}(x^*) = 0$ for all $i$.

Theorem 1.1. Let $f : \mathbb{R}^n \to \mathbb{R}$, suppose $f$ has a local maximum at $x^*$, and suppose $f$ is differentiable at $x^*$. Then $\frac{\partial f}{\partial x_i}(x^*) = 0$ for $i = 1, \ldots, n$.

Proof. The paragraph preceding the theorem is an informal proof. The only detail missing is some care to ensure that the approximation is sufficiently accurate. If you are interested, see the past course notes (lec10optimization.pdf) for details.

The first order condition, $\frac{\partial f}{\partial x_i}(x^*) = 0$, is a necessary condition for $x^*$ to be a local maximizer or minimizer of $f$.

Definition 1.6. Any point $x$ such that $f$ is differentiable at $x$ and $D f_x = 0$ is called a critical point of $f$.

If $f$ is differentiable, $f$ cannot have local minima or maxima (= local extrema) at non-critical points. $f$ might have a local extremum at its critical points, but it does not have to. Consider $f(x) = x^3$: $f'(0) = 0$, but $0$ is not a local maximizer or minimizer of $f$. Similarly, if $f : \mathbb{R}^2 \to \mathbb{R}$ with $f(x) = x_1^2 - x_2^2$, then the partial derivatives at $0$ are $0$, but $0$ is not a local minimum or maximum of $f$. (This section uses some results on partial and directional derivatives. See appendix B and the summer math review for reference.)
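To see the first order condition at work numerically, here is a small sketch (my own illustration, not part of the original notes; the production function and price values are made up) that solves the unconstrained firm problem from example 0.2, $\max_x p f(x) - w^T x$, and then checks that the gradient is approximately zero at the computed maximizer, as theorem 1.1 requires.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: Cobb-Douglas technology f(x) = x1^0.3 * x2^0.5,
# output price p and input prices w (all made-up numbers).
p = 2.0
w = np.array([1.0, 1.5])
alpha = np.array([0.3, 0.5])

def profit(x):
    return p * np.prod(x ** alpha) - w @ x

# Optimize over z = log(x) so the inputs stay positive, and flip the sign
# because scipy.optimize.minimize minimizes.
res = minimize(lambda z: -profit(np.exp(z)), x0=np.zeros(2), method="Nelder-Mead")
xstar = np.exp(res.x)

# First order condition: d(profit)/dx_i = p * alpha_i * f(x)/x_i - w_i = 0 at x*.
grad = p * alpha * np.prod(xstar ** alpha) / xstar - w
print("x* =", xstar)
print("gradient at x* =", grad)   # approximately (0, 0)
```

The optimizer only uses function values; the near-zero gradient at the reported solution is exactly the necessary condition of theorem 1.1.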

2 Second order conditions

To determine whether a given critical point is a local minimum, a local maximum, or neither, we can look at the second derivative of the function. Let $f : \mathbb{R}^n \to \mathbb{R}$ and suppose $x^*$ is a critical point, so $\frac{\partial f}{\partial x_i}(x^*) = 0$. To see if $x^*$ is a local maximum, we need to look at $f(x^* + h)$ for small $h$. We could try taking a first order expansion, $f(x^* + h) \approx f(x^*) + D f_{x^*} h$, but we know that $D f_{x^*} = 0$, so this expansion is not useful for comparing $f(x^*)$ with $f(x^* + h)$. If $f$ is twice continuously differentiable, we can instead take a second order expansion of $f$ around $x^*$,
$$f(x^* + v) \approx f(x^*) + D f_{x^*} v + \frac{1}{2} v^T D^2 f_{x^*} v,$$
where $D^2 f_{x^*}$ is the matrix of $f$'s second order partial derivatives. Since $x^*$ is a critical point, $D f_{x^*} = 0$, so
$$f(x^* + v) - f(x^*) \approx \frac{1}{2} v^T D^2 f_{x^*} v.$$
We can see that $x^*$ is a local maximum if $\frac{1}{2} v^T D^2 f_{x^*} v \leq 0$ for all small $v$. The above inequality will certainly hold if $v^T D^2 f_{x^*} v < 0$ for all $v \neq 0$. The Hessian, $D^2 f_{x^*}$, is just some symmetric $n$ by $n$ matrix, and $v^T D^2 f_{x^*} v$ is a quadratic form in $v$. This motivates the following definition.

Definition 2.1. Let $A$ be a symmetric matrix. Then $A$ is
- negative definite if $x^T A x < 0$ for all $x \neq 0$,
- negative semi-definite if $x^T A x \leq 0$ for all $x \neq 0$,
- positive definite if $x^T A x > 0$ for all $x \neq 0$,
- positive semi-definite if $x^T A x \geq 0$ for all $x \neq 0$,
- indefinite if there exists $x_1$ such that $x_1^T A x_1 > 0$ and some other $x_2$ such that $x_2^T A x_2 < 0$.

Later, we will derive some conditions on $A$ that ensure it is negative (semi-)definite. For now, just observe that if $x^*$ is a local maximum, then $D^2 f_{x^*}$ must be negative semi-definite, and if $D^2 f_{x^*}$ is negative definite, then $x^*$ is a strict local maximum. The following theorem restates the results of this discussion.

Theorem 2.1. Let $F : \mathbb{R}^n \to \mathbb{R}$ be twice continuously differentiable and let $x^*$ be a critical point. If
(1) the Hessian, $D^2 F_{x^*}$, is negative definite, then $x^*$ is a strict local maximizer,
(2) the Hessian, $D^2 F_{x^*}$, is positive definite, then $x^*$ is a strict local minimizer,
(3) the Hessian, $D^2 F_{x^*}$, is indefinite, then $x^*$ is neither a local min nor a local max,
(4) the Hessian is positive or negative semi-definite, then $x^*$ could be a local maximum, minimum, or neither.

Proof. The main idea of the proof is contained in the discussion preceding the theorem. The only tricky part is carefully showing that our approximation is good enough.
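For a symmetric matrix, the conditions in definition 2.1 can be checked from the signs of its eigenvalues: all negative means negative definite, all non-positive means negative semi-definite, and so on. The sketch below (my own illustration, not from the notes) classifies a matrix this way; the semi-definite outcome corresponds to the ambiguous case (4) of theorem 2.1.

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(A)
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig <= tol):
        return "negative semi-definite"
    if np.all(eig >= -tol):
        return "positive semi-definite"
    return "indefinite"

# Hessian of F(x) = -x1^2 - x2^2: strict local maximum at the critical point
print(classify(np.array([[-2.0, 0.0], [0.0, -2.0]])))   # negative definite
# Hessian of F(x) = x1^2 - x2^2 (a saddle): neither a max nor a min
print(classify(np.array([[2.0, 0.0], [0.0, -2.0]])))    # indefinite
# Hessian of F(x) = -x1^2 viewed as a function on R^2: the test is inconclusive
print(classify(np.array([[-2.0, 0.0], [0.0, 0.0]])))    # negative semi-definite
```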

When the Hessian is not positive definite, negative definite, or indefinite, the result of this theorem is ambiguous. Let's go over some examples of this case.

Example 2.1. $F : \mathbb{R} \to \mathbb{R}$, $F(x) = x^4$. The first order condition is $4x^3 = 0$, so $x = 0$ is the only critical point. The Hessian is $F''(x) = 12 x^2$, which is $0$ at $x = 0$. However, $x^4$ has a strict local minimum at $0$.

Example 2.2. $F : \mathbb{R}^2 \to \mathbb{R}$, $F(x_1, x_2) = -x_1^2$. The first order condition is $D F_x = (-2 x_1, 0) = 0$, so the points with $x_1 = 0$, $x_2 \in \mathbb{R}$ are all critical points. The Hessian is
$$D^2 F = \begin{pmatrix} -2 & 0 \\ 0 & 0 \end{pmatrix}.$$
This is negative semi-definite because $h^T D^2 F h = -2 h_1^2 \leq 0$. Also, graphing the function would make it clear that the points with $x_1 = 0$, $x_2 \in \mathbb{R}$ are all (non-strict) local maxima.

Example 2.3. $F : \mathbb{R}^2 \to \mathbb{R}$, $F(x_1, x_2) = -x_1^2 + x_2^4$. The first order condition is $D F_x = (-2 x_1, 4 x_2^3) = 0$, so $(0, 0)$ is a critical point. The Hessian is
$$D^2 F_x = \begin{pmatrix} -2 & 0 \\ 0 & 12 x_2^2 \end{pmatrix}.$$
This is negative semi-definite at $0$ because $h^T D^2 F_0 h = -2 h_1^2 \leq 0$. However, $0$ is not a local maximum because $F(0, x_2) > F(0, 0)$ for any $x_2 \neq 0$. $0$ is also not a local minimum because $F(x_1, 0) < F(0, 0)$ for all $x_1 \neq 0$.

In each of these examples, the second order condition is inconclusive because $h^T D^2 F_{x^*} h = 0$ for some $h \neq 0$. In these cases we could determine whether $x^*$ is a local maximum, local minimum, or neither by either looking at higher derivatives of $F$ at $x^*$, or looking at $D^2 F_x$ for all $x$ in a neighborhood of $x^*$. We will not often encounter cases where the second order condition is inconclusive, so we will not study these possibilities in detail.

Example 2.4 (Competitive multi-product firm). Suppose a firm produces $k$ goods using $n$ inputs with production function $f : \mathbb{R}^n \to \mathbb{R}^k$. The prices of the goods are $p$, and the prices of the inputs are $w$, so that the firm's profits are
$$\Pi(x) = p^T f(x) - w^T x.$$

The firm chooses $x$ to maximize profits,
$$\max_{x} \; p^T f(x) - w^T x.$$
The first order condition is
$$p^T D f_{x} - w^T = 0,$$
or, without using matrices,
$$\sum_{j=1}^{k} p_j \frac{\partial f_j}{\partial x_i}(x) = w_i \quad \text{for } i = 1, \ldots, n.$$
The second order condition is that
$$D^2[p^T f]_x = \begin{pmatrix} \sum_{j=1}^{k} p_j \frac{\partial^2 f_j}{\partial x_1^2}(x) & \cdots & \sum_{j=1}^{k} p_j \frac{\partial^2 f_j}{\partial x_1 \partial x_n}(x) \\ \vdots & \ddots & \vdots \\ \sum_{j=1}^{k} p_j \frac{\partial^2 f_j}{\partial x_n \partial x_1}(x) & \cdots & \sum_{j=1}^{k} p_j \frac{\partial^2 f_j}{\partial x_n^2}(x) \end{pmatrix}$$
must be negative semi-definite.

(Note: there is some awkwardness in the notation for the second derivative of a function from $\mathbb{R}^n$ to $\mathbb{R}^k$ when $k > 1$. The first partial derivatives can be arranged in a $k \times n$ matrix. There are then $k \times n \times n$ second partial derivatives. When $k = 1$ it is convenient to write these in an $n \times n$ matrix. When $k > 1$, one could arrange the second partial derivatives in a 3-d array of dimension $k \times n \times n$, or one could argue that a $kn \times n$ matrix makes sense. These higher order derivatives are called tensors, and physicists and mathematicians do have some notation for working with them. However, tensors do not come up that often in economics. Moreover, we can usually avoid needing any such notation by rearranging some operations. In the above example, $f : \mathbb{R}^n \to \mathbb{R}^k$, so $D^2 f$ should be avoided (especially if we combine it with other matrix and vector algebra), but $p^T f(x)$ is a function from $\mathbb{R}^n$ to $\mathbb{R}$, so $D^2[p^T f]_x$ is unambiguously an $n \times n$ matrix.)

3 Constrained optimization

To begin our study of constrained optimization, let's consider a consumer in an economy with two goods.

Example 3.1. Let $x_1$ and $x_2$ be goods with prices $p_1$ and $p_2$. The consumer has income $y$. The consumer's problem is
$$\max_{x_1, x_2} u(x_1, x_2) \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 \leq y.$$
In economics 101, you probably analyzed this problem graphically by drawing indifference curves and the budget set, as in figure 1. In this figure, the curved lines are indifference curves. That is, they are regions where utility is constant. Going outward from the axes they show all $(x_1, x_2)$ such that $u(x_1, x_2) = c_1$, $u(x_1, x_2) = c_2$, $u(x_1, x_2) = c_3$, and $u(x_1, x_2) = c_4$ with $c_1 < c_2 < c_3 < c_4$. The diagonal line is the budget constraint. All points below this line satisfy the constraint $p_1 x_1 + p_2 x_2 \leq y$. The optimal $x_1, x_2$ is the point where an indifference curve is tangent to the budget line.

The slope of the budget line is $-p_1/p_2$. The slope of the indifference curve can be found by implicit differentiation,
$$\frac{\partial u}{\partial x_1}(x) dx_1 + \frac{\partial u}{\partial x_2}(x) dx_2 = 0,$$
so
$$\frac{dx_2}{dx_1} = -\frac{\partial u/\partial x_1(x)}{\partial u/\partial x_2(x)}.$$
So the optimum satisfies
$$\frac{\partial u/\partial x_1(x^*)}{\partial u/\partial x_2(x^*)} = \frac{p_1}{p_2}.$$
The ratio of marginal utilities equals the ratio of prices. Rearranging, we can also say that
$$\frac{\partial u/\partial x_1(x^*)}{p_1} = \frac{\partial u/\partial x_2(x^*)}{p_2}.$$
Suppose we have a small amount of additional income. If we spend it on $x_1$, we would get $1/p_1$ more of $x_1$, which would increase our utility by $\frac{\partial u}{\partial x_1}\frac{1}{p_1}$. At the optimum, the marginal utility from spending additional income on either good should be the same. Let's call this marginal utility of income $\mu$; then we have
$$\mu = \frac{\partial u/\partial x_1(x^*)}{p_1} = \frac{\partial u/\partial x_2(x^*)}{p_2},$$
or
$$\frac{\partial u}{\partial x_1}(x^*) - \mu p_1 = 0, \qquad \frac{\partial u}{\partial x_2}(x^*) - \mu p_2 = 0.$$
This is the standard first order condition for a constrained optimization problem.

Now, let's look at a generic maximization problem with equality constraints,
$$\max_x f(x) \quad \text{s.t.} \quad h(x) = c,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^m$. As in the unconstrained case, consider a small perturbation of $x$ by some small $v$. The function value is then approximately
$$f(x + v) \approx f(x) + D f_x v,$$
and the constraint value is
$$h(x + v) \approx h(x) + D h_x v.$$
$x + v$ will satisfy the constraint only if $D h_x v = 0$. Thus, $x^*$ is a constrained local maximum if for all $v$ such that $D h_{x^*} v = 0$ we also have $D f_{x^*} v = 0$. Now, we want to express this condition in terms of Lagrange multipliers. Remember that $D h_{x^*}$ is an $m \times n$ matrix of partial derivatives, and $D f_{x^*}$ is a $1 \times n$ row vector of partial derivatives.

[Figure 1: Indifference curves]

If $n = 2$ and $m = 1$, this condition can be written as:
$$\frac{\partial h}{\partial x_1}(x^*) v_1 + \frac{\partial h}{\partial x_2}(x^*) v_2 = 0$$
implies
$$\frac{\partial f}{\partial x_1}(x^*) v_1 + \frac{\partial f}{\partial x_2}(x^*) v_2 = 0.$$
Solving the first of these equations for $v_2$, substituting into the second, and rearranging gives
$$\frac{\partial f/\partial x_1(x^*)}{\partial h/\partial x_1(x^*)} = \frac{\partial f/\partial x_2(x^*)}{\partial h/\partial x_2(x^*)},$$
which, as in the consumer example above, we can rewrite as
$$\frac{\partial f}{\partial x_1}(x^*) - \mu \frac{\partial h}{\partial x_1}(x^*) = 0, \qquad \frac{\partial f}{\partial x_2}(x^*) - \mu \frac{\partial h}{\partial x_2}(x^*) = 0$$
for some constant $\mu$. Similar reasoning would show that for any $n$ and $m = 1$, if for all $v$ such that $D h_{x^*} v = 0$ we also have $D f_{x^*} v = 0$, then it is also true that there exists $\mu$ such that
$$\frac{\partial f}{\partial x_i}(x^*) - \mu \frac{\partial h}{\partial x_i}(x^*) = 0 \quad \text{for } i = 1, \ldots, n,$$
or equivalently, using matrix notation, $D f_{x^*} = \mu D h_{x^*}$. Note that, conversely, if such a $\mu$ exists, then trivially for any $v$ with $D h_{x^*} v = 0$, we have $D f_{x^*} v = \mu D h_{x^*} v = 0$.
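Together with the constraint, the condition $D f_{x^*} = \mu D h_{x^*}$ gives $n + 1$ equations in the $n + 1$ unknowns $(x^*, \mu)$. As a small numerical sketch (my own addition, with a made-up utility function and prices), the code below solves this system for a two-good consumer problem and confirms that the gradient of the objective is a multiple of the gradient of the constraint at the solution.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical problem: max log(x1) + 2*log(x2)  s.t.  p1*x1 + p2*x2 = y
p = np.array([1.0, 2.0])
y = 12.0

def Df(x):
    # gradient of the objective u(x) = log(x1) + 2*log(x2)
    return np.array([1.0 / x[0], 2.0 / x[1]])

def foc(z):
    x, mu = z[:2], z[2]
    return np.concatenate([Df(x) - mu * p,      # Df_x - mu * Dh_x = 0, with h(x) = p'x
                           [p @ x - y]])        # budget constraint h(x) = y

sol = fsolve(foc, x0=np.array([3.0, 3.0, 0.3]))
xstar, mu = sol[:2], sol[2]
print("x* =", xstar, " mu =", mu)
# Closed form for this utility: x1 = y/(3*p1) = 4, x2 = 2*y/(3*p2) = 4, mu = 1/4
print("Df(x*) / Dh(x*) =", Df(xstar) / p)       # both ratios equal mu
```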

When there are multiple constraints, i.e. $m > 1$, it would be reasonable to guess that ($D h_{x^*} v = 0$ implies $D f_{x^*} v = 0$) if and only if there exists $\mu \in \mathbb{R}^m$ such that
$$\frac{\partial f}{\partial x_i}(x^*) - \sum_{j=1}^{m} \mu_j \frac{\partial h_j}{\partial x_i}(x^*) = 0,$$
or, in matrix notation, $D f_{x^*} = \mu^T D h_{x^*}$. This is indeed true. It is tedious to show using algebra, but will be easy to show later using some results from linear algebra about the row, column, and null spaces of matrices.

A heuristic geometric argument supporting the previous claims is as follows. Imagine the level sets or contour lines of $h(\cdot)$. The feasible points that satisfy the constraint lie on the contour line with $h(x) = c$. Similarly, imagine the contour lines of $f(\cdot)$. If $x^*$ is a constrained maximizer, then as we move along the contour line $h(x) = c$, we must stay on the same contour line (or set, if $f$ is flat) of $f$. In other words, the contour lines of $h$ and $f$ must be parallel at $x^*$. Gradients give the direction of steepest ascent and are perpendicular to contour lines. Therefore, if the gradients of $h$ and $f$ are parallel, then so are the contour lines. Gradients being parallel just means that they should be multiples of one another, i.e. $D f_{x^*} = \mu^T D h_{x^*}$ for some $\mu$.

The following theorem formally states the preceding result.

Theorem 3.1 (First order condition for maximization with equality constraints). Let $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^m$ be continuously differentiable. Suppose $x^*$ is a constrained local maximizer of $f$ subject to $h(x) = c$. Also assume that $D h_{x^*}$ has rank $m$. Then there exists $\mu^* \in \mathbb{R}^m$ such that $(x^*, \mu^*)$ is a critical point of the Lagrangian,
$$L(x, \mu) = f(x) - \mu^T (h(x) - c),$$
i.e. for $i = 1, \ldots, n$ and $j = 1, \ldots, m$,
$$\frac{\partial L}{\partial x_i}(x^*, \mu^*) = \frac{\partial f}{\partial x_i}(x^*) - \mu^{*T} \frac{\partial h}{\partial x_i}(x^*) = 0$$
$$\frac{\partial L}{\partial \mu_j}(x^*, \mu^*) = -(h_j(x^*) - c_j) = 0.$$

The assumption that $D h_{x^*}$ has rank $m$ is needed to make sure that we don't divide by zero when defining the Lagrange multipliers. When there is a single constraint, it simply says that at least one of the partial derivatives of the constraint is non-zero. This assumption is called the non-degenerate constraint qualification. It is rarely an issue for equality constraints, but can sometimes fail, as in the following examples.

Exercise 3.1. (1) Solve the problem:
$$\max_{x_1, x_2} \; \alpha_1 \log x_1 + \alpha_2 \log x_2 \quad \text{s.t.} \quad (p_1 x_1 + p_2 x_2 - y)^3 = 0.$$
What goes wrong? How does it differ from the following problem:
$$\max_{x_1, x_2} \; \alpha_1 \log x_1 + \alpha_2 \log x_2 \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 - y = 0?$$

(2) Solve the problem: $\max_{x_1, x_2} \; x_1 + x_2$ s.t. … [the constraint is garbled in the source].

3.1 Lagrange multipliers as shadow prices

Recall that in our example of consumer choice, the Lagrange multiplier could be interpreted as the marginal utility of income. A similar interpretation will always apply. Consider a constrained maximization problem,
$$\max_x f(x) \quad \text{s.t.} \quad h(x) = c.$$
From theorem 3.1, the first order conditions are
$$D f_{x^*} - \mu^T D h_{x^*} = 0, \qquad h(x^*) - c = 0.$$
What happens to $x^*$ and $f(x^*)$ if $c$ changes? Let $x^*(c)$ denote the maximizer as a function of $c$. Differentiating the constraint with respect to $c$ shows that
$$D h_{x^*} D_c x^* = I.$$
By the chain rule,
$$D_c\big(f(x^*(c))\big) = D f_{x^*(c)} D_c x^*.$$
Using the first order condition to substitute for $D f_{x^*(c)}$, we have
$$D_c\big(f(x^*(c))\big) = \mu^T D h_{x^*(c)} D_c x^* = \mu^T.$$
Thus, the multiplier, $\mu$, is the derivative of the maximized function with respect to $c$. In economic terms, the multiplier is the marginal value of relaxing the constraint. Because of this, $\mu_j$ is called the shadow price of $c_j$.

Example 3.2 (Cobb-Douglas utility). Consider the consumer's problem in example 3.1 with Cobb-Douglas utility,
$$\max_{x_1, x_2} x_1^{\alpha_1} x_2^{\alpha_2} \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 = y.$$
The first order conditions are
$$\alpha_1 x_1^{\alpha_1 - 1} x_2^{\alpha_2} - p_1 \mu = 0$$
$$\alpha_2 x_1^{\alpha_1} x_2^{\alpha_2 - 1} - p_2 \mu = 0$$
$$p_1 x_1 + p_2 x_2 - y = 0.$$
Solving for $x_1$ and $x_2$ yields
$$x_1^* = \frac{\alpha_1}{\alpha_1 + \alpha_2} \frac{y}{p_1}, \qquad x_2^* = \frac{\alpha_2}{\alpha_1 + \alpha_2} \frac{y}{p_2}.$$
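As a quick numerical sanity check of these demand formulas and of the shadow-price interpretation of $\mu$ (this sketch is my own addition, with made-up values of $\alpha$, $p$, and $y$), the code below solves the problem with a generic constrained optimizer, compares the result to the closed forms, and verifies that $\mu$, recovered from the first order condition $\mu = \frac{\partial u/\partial x_1(x^*)}{p_1}$, matches a finite-difference derivative of the maximized utility with respect to income.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up parameters for the Cobb-Douglas example
alpha = np.array([0.3, 0.7])
p = np.array([1.0, 2.0])

def demands(y):
    # closed-form demands: x_i = alpha_i / (alpha_1 + alpha_2) * y / p_i
    return alpha / alpha.sum() * y / p

def v(y):
    # maximized utility, computed with an equality-constrained solver
    res = minimize(lambda x: -np.prod(x ** alpha), x0=np.array([1.0, 1.0]),
                   method="SLSQP", bounds=[(1e-6, None)] * 2,
                   constraints=[{"type": "eq", "fun": lambda x: p @ x - y}])
    return -res.fun, res.x

y = 10.0
vy, xnum = v(y)
print("numerical demands:", xnum, "  closed form:", demands(y))

# Shadow price: mu = u_x1(x*) / p1 should equal dv/dy
xstar = demands(y)
mu = alpha[0] * np.prod(xstar ** alpha) / xstar[0] / p[0]
dv_dy = (v(y + 1e-4)[0] - v(y - 1e-4)[0]) / 2e-4
print("mu =", mu, "  dv/dy ≈", dv_dy)
```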

The expenditure share of each good, $p_j x_j^* / y$, is constant with respect to income. Many economists, going back to Engel (1857), have studied how expenditure shares vary with income. See Lewbel (2008) for a brief review and additional references. If we solve for $\mu$, we find that
$$\mu = \frac{\alpha_1^{\alpha_1} \alpha_2^{\alpha_2} \, y^{\alpha_1 + \alpha_2 - 1}}{(\alpha_1 + \alpha_2)^{\alpha_1 + \alpha_2 - 1} \, p_1^{\alpha_1} p_2^{\alpha_2}}.$$
$\mu$ is the marginal utility of income in this model. As an exercise, you may want to verify that if we were to take a monotonic transformation of the utility function (such as $\log(x_1^{\alpha_1} x_2^{\alpha_2})$ or $(x_1^{\alpha_1} x_2^{\alpha_2})^\beta$) and re-solve the model, then we would obtain the same demand functions, but a different marginal utility of income.

3.2 Inequality constraints

Now let's consider an inequality instead of an equality constraint:
$$\max_x f(x) \quad \text{s.t.} \quad g(x) \leq b.$$
When the constraint binds, i.e. $g(x^*) = b$, the situation is exactly the same as with an equality constraint. However, the constraints do not necessarily bind at a local maximum, so we must allow for that possibility. Suppose $x^*$ is a constrained local maximum. To begin with, let's consider a simplified case where there is only one constraint, i.e. $g : \mathbb{R}^n \to \mathbb{R}$. If the constraint does not bind, then for small changes to $x^*$ the constraint will still not bind. In other words, locally it is as though we have an unconstrained problem. Then, as in our earlier analysis of unconstrained problems, it must be that $D f_{x^*} = 0$. Now, suppose the constraint does bind. Consider a small change in $x^*$, $v$. Since $x^*$ is a local maximum, it must be that if $f(x^* + v) > f(x^*)$, then $x^* + v$ must violate the constraint, $g(x^* + v) > b$. Taking first order expansions, we can say that if $D f_{x^*} v > 0$, then $D g_{x^*} v > 0$. This will be true if $D f_{x^*} = \lambda D g_{x^*}$ for some $\lambda > 0$. Note that the sign of the multiplier $\lambda$ matters here.

To summarize the results of the previous paragraph: if $x^*$ is a constrained local maximum, then there exists $\lambda^* \geq 0$ such that $D f_{x^*} - \lambda^* D g_{x^*} = 0$. Furthermore, if $\lambda^* > 0$, then $g(x^*) = b$. If $g(x^*) < b$, then $\lambda^* = 0$ (it is also possible, but rare, for both $\lambda^* = 0$ and $g(x^*) = b$). This situation, where if one inequality is strict then another holds with equality and vice versa, is called a complementary slackness condition.

Essentially the same result holds with multiple inequality constraints.

Theorem 3.2 (First order condition for maximization with inequality constraints). Let $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ be continuously differentiable. Suppose $x^*$ is a local maximizer of $f$ subject to $g(x) \leq b$. Suppose that the first $k \leq m$ constraints bind, $g_j(x^*) = b_j$ for $j = 1, \ldots, k$, and that the Jacobian of these constraints,
$$\begin{pmatrix} \frac{\partial g_1}{\partial x_1}(x^*) & \cdots & \frac{\partial g_1}{\partial x_n}(x^*) \\ \vdots & & \vdots \\ \frac{\partial g_k}{\partial x_1}(x^*) & \cdots & \frac{\partial g_k}{\partial x_n}(x^*) \end{pmatrix},$$

has rank $k$. Then there exists $\lambda^* \in \mathbb{R}^m$ such that, for
$$L(x, \lambda) = f(x) - \lambda^T (g(x) - b),$$
we have, for $i = 1, \ldots, n$ and $j = 1, \ldots, m$,
$$\frac{\partial L}{\partial x_i}(x^*, \lambda^*) = \frac{\partial f}{\partial x_i}(x^*) - \lambda^{*T} \frac{\partial g}{\partial x_i}(x^*) = 0$$
$$\lambda_j^* \big(g_j(x^*) - b_j\big) = 0, \qquad \lambda_j^* \geq 0, \qquad g_j(x^*) \leq b_j.$$
Moreover, for each $j$ at least one of $\lambda_j^* = 0$ and $g_j(x^*) = b_j$ holds: if $\lambda_j^* > 0$ then $g_j(x^*) = b_j$, and if $g_j(x^*) < b_j$ then $\lambda_j^* = 0$ (complementary slackness).

Some care needs to be taken regarding the sign of the multipliers and the setup of the problem. Given the setup of our problem, $\max_x f(x)$ s.t. $g(x) \leq b$, if we write the Lagrangian as $L(x, \lambda) = f(x) - \lambda^T(g(x) - b)$, then the $\lambda_j \geq 0$. If we instead wrote the Lagrangian as $L(x, \lambda) = f(x) + \lambda^T(g(x) - b)$, then we would get negative multipliers. If we have a minimization problem, $\min_x f(x)$ s.t. $g(x) \leq b$, the situation is reversed: $f(x) - \lambda^T(g(x) - b)$ leads to negative multipliers, and $f(x) + \lambda^T(g(x) - b)$ leads to positive multipliers. Reversing the direction of the inequality in the constraint, i.e. $g(x) \geq b$ instead of $g(x) \leq b$, also switches the sign of the multipliers. Generally it is not very important if you end up with multipliers with the wrong sign. You will still end up with the same solution for $x^*$. The sign of the multiplier does matter for whether it is the shadow price of $b$ or $-b$.

To solve the first order conditions of an inequality constrained maximization problem, you would first determine or guess which constraints do and do not bind. Then you would impose $\lambda_j = 0$ for the non-binding constraints and solve for $x^*$ and the remaining components of $\lambda^*$. If the resulting $x^*$ does lead to the constraints you guessed not binding indeed not binding, then that is a possible local maximum. To find all possible local maxima, you would have to repeat this for all possible combinations of constraints binding or not. There are $2^m$ such possibilities, so this process can be tedious.

Example 3.3. Let's solve
$$\max_{x_1, x_2} \; x_1 x_2 \quad \text{s.t.} \quad x_1^2 + 2 x_2^2 \leq 3, \quad 2 x_1^2 + x_2^2 \leq 3.$$
The Lagrangian is
$$L(x, \lambda) = x_1 x_2 - \lambda_1 (x_1^2 + 2 x_2^2 - 3) - \lambda_2 (2 x_1^2 + x_2^2 - 3).$$

The first order conditions are
$$0 = x_2 - 2 \lambda_1 x_1 - 4 \lambda_2 x_1$$
$$0 = x_1 - 4 \lambda_1 x_2 - 2 \lambda_2 x_2$$
$$0 = \lambda_1 (x_1^2 + 2 x_2^2 - 3)$$
$$0 = \lambda_2 (2 x_1^2 + x_2^2 - 3).$$
Now, we must guess which constraints bind. Since the problem is symmetric in $x_1$ and $x_2$, we only need to check three possibilities instead of all four. The three possibilities are: neither constraint binds, both bind, or one binds and one does not.

If neither constraint binds, then $\lambda_1 = \lambda_2 = 0$, and the first order conditions imply $x_1 = x_2 = 0$. This results in a function value of $0$. This is feasible, but it is not the maximum since, for example, small positive $x_1$ and $x_2$ would also be feasible and give a higher function value.

Instead, suppose both constraints bind. Then the solutions to $x_1^2 + 2 x_2^2 = 3$ and $2 x_1^2 + x_2^2 = 3$ are $x_1 = \pm 1$ and $x_2 = \pm 1$. Substituting into the first order conditions and solving for $\lambda_1$ and $\lambda_2$ gives
$$0 = 1 - 2 \lambda_1 - 4 \lambda_2$$
$$0 = 1 - 4 \lambda_1 - 2 \lambda_2$$
$$\lambda_1 = \frac{1}{6}, \qquad \lambda_2 = \frac{1}{6}.$$
Both $\lambda_1$ and $\lambda_2$ are positive, so complementary slackness is satisfied.

Finally, let's consider the case where the first constraint binds but not the second. In this case $\lambda_2 = 0$, and taking the ratio of the first order conditions gives $\frac{x_2}{x_1} = \frac{x_1}{2 x_2}$, or $x_1^2 = 2 x_2^2$. Substituting this into the first constraint yields $4 x_2^2 = 3$, so $x_1^2 = 3/2$ and $x_2^2 = 3/4$. However, then $2 x_1^2 + x_2^2 = 15/4 > 3$. The second constraint is violated, so this cannot be the solution.

Fortunately, the economics of a problem often suggest which constraints will bind, and you rarely actually need to investigate all $2^m$ possibilities.

Example 3.4 (Quasi-linear preferences). A consumer's choice of some good(s) of interest, $x$, and spending on all other goods, $z$, is often modeled using quasi-linear preferences, $U(x, z) = u(x) + z$. The consumer's problem is
$$\max_{x, z} \; u(x) + z \quad \text{s.t.} \quad p x + z \leq y, \quad x \geq 0, \quad z \geq 0.$$

Let $\lambda$, $\mu_x$, and $\mu_z$ be the multipliers on the constraints. The first order conditions are
$$u'(x) - \lambda p + \mu_x = 0$$
$$1 - \lambda + \mu_z = 0,$$
along with the complementary slackness conditions
$$\lambda \geq 0, \quad \lambda (p x + z - y) = 0$$
$$\mu_x \geq 0, \quad \mu_x x = 0$$
$$\mu_z \geq 0, \quad \mu_z z = 0.$$
There are $2^3 = 8$ possible combinations of constraints binding or not. However, we can eliminate half of these possibilities by observing that if the budget constraint is slack, $p x + z < y$, then increasing $z$ is feasible and increases the objective function. Therefore, the budget constraint must bind, $p x + z = y$, and $\lambda > 0$. There are four remaining possibilities. Having $x = 0$ and $z = 0$ is not possible because then the budget constraint would not bind. This leaves three possibilities:

(1) $0 < x$ and $0 < z$. That implies $\mu_x = \mu_z = 0$. Rearranging the first order conditions gives $u'(x) = p$, and from the budget constraint $z = y - p x$. Depending on $u$, $p$, and $y$, this could be the solution.

(2) $x = 0$ and $0 < z$. That implies $\mu_z = 0$ and $\mu_x \geq 0$. From the budget constraint, $z = y$. From the first order conditions we have $u'(0) - p + \mu_x = 0$. Since $\mu_x \geq 0$, this equation requires that $u'(0) \leq p$. This also could be the solution, depending on $u$, $p$, and $y$.

(3) $0 < x$ and $z = 0$. That implies $\mu_x = 0$. From the budget constraint, $x = y/p$. Combining the first order conditions to eliminate $\lambda$ gives
$$1 + \mu_z = \frac{u'(y/p)}{p}.$$
Since $\mu_z \geq 0$, this equation requires that $u'(y/p) \geq p$. This also could be the solution, depending on $u$, $p$, and $y$.

Exercise 3.2. Show that replacing the problem
$$\max_x f(x) \quad \text{s.t.} \quad g(x) \leq b$$
with an equality constrained problem with slack variables $s$,
$$\max_{x, s} f(x) \quad \text{s.t.} \quad g(x) + s = b, \quad s \geq 0,$$
leads to the same conclusions as theorem 3.2.

3.3 Second order conditions

As with unconstrained optimization, the first order conditions from the previous section only give a necessary condition for $x^*$ to be a local maximum of $f(x)$ subject to some constraints. To verify that a given $x^*$ that solves the first order conditions is a local maximum, we must look at the second order condition.

As in the unconstrained case, we can take a second order expansion of $f(x)$ around $x^*$:
$$f(x^* + v) - f(x^*) \approx D f_{x^*} v + \frac{1}{2} v^T D^2 f_{x^*} v.$$
This is a constrained problem, so any $x^* + v$ must satisfy the constraints as well. As before, what will really matter are the equality constraints and the binding inequality constraints. To simplify notation, let's just work with equality constraints, say $h(x) = c$. We can take a first order expansion of $h$ around $x^*$ to get
$$h(x^* + v) \approx h(x^*) + D h_{x^*} v = c + D h_{x^*} v.$$
Then $v$ satisfies the constraints if
$$D h_{x^*} v = 0.$$
Thus, $x^*$ is a local maximizer of $f$ subject to $h(x) = c$ if $v^T D^2 f_{x^*} v \leq 0$ for all $v$ such that $D h_{x^*} v = 0$. The following theorem precisely states the result of this discussion.

Theorem 3.3 (Second order condition for constrained maximization). Let $f : U \to \mathbb{R}$ be twice continuously differentiable on $U$, and let $h : U \to \mathbb{R}^l$ and $g : U \to \mathbb{R}^m$ be continuously differentiable on $U \subseteq \mathbb{R}^n$. Suppose $x^* \in \mathrm{interior}(U)$ and there exist $\mu^* \in \mathbb{R}^l$ and $\lambda^* \in \mathbb{R}^m$ such that, for
$$L(x, \lambda, \mu) = f(x) - \lambda^T (g(x) - b) - \mu^T (h(x) - c),$$
we have
$$\frac{\partial L}{\partial x_i}(x^*, \lambda^*, \mu^*) = \frac{\partial f}{\partial x_i}(x^*) - \lambda^{*T} \frac{\partial g}{\partial x_i}(x^*) - \mu^{*T} \frac{\partial h}{\partial x_i}(x^*) = 0 \quad \text{for } i = 1, \ldots, n$$
$$h_j(x^*) - c_j = 0 \quad \text{for } j = 1, \ldots, l$$
$$\lambda_j^* \big(g_j(x^*) - b_j\big) = 0, \quad \lambda_j^* \geq 0, \quad g_j(x^*) \leq b_j \quad \text{for } j = 1, \ldots, m.$$

Let $B$ be the matrix of derivatives of the binding constraints evaluated at $x^*$,
$$B = \begin{pmatrix} \frac{\partial h_1}{\partial x_1}(x^*) & \cdots & \frac{\partial h_1}{\partial x_n}(x^*) \\ \vdots & & \vdots \\ \frac{\partial h_l}{\partial x_1}(x^*) & \cdots & \frac{\partial h_l}{\partial x_n}(x^*) \\ \frac{\partial g_1}{\partial x_1}(x^*) & \cdots & \frac{\partial g_1}{\partial x_n}(x^*) \\ \vdots & & \vdots \\ \frac{\partial g_k}{\partial x_1}(x^*) & \cdots & \frac{\partial g_k}{\partial x_n}(x^*) \end{pmatrix},$$
where $g_1, \ldots, g_k$ are the binding inequality constraints. If
$$v^T D^2 f_{x^*} v < 0$$
for all $v \neq 0$ such that $B v = 0$, then $x^*$ is a strict local constrained maximizer for $f$ subject to $h(x) = c$ and $g(x) \leq b$.

Recall from above that an $n$ by $n$ matrix, $A$, is negative definite if $x^T A x < 0$ for all $x \neq 0$. Similarly, we say that $A$ is negative definite on the null space of $B$ if $x^T A x < 0$ for all $x \in N(B) \setminus \{0\}$. (The null space of an $m \times n$ matrix $B$ is the set of all $x \in \mathbb{R}^n$ such that $B x = 0$. We will discuss null spaces in more detail later in the course.) Thus, the second order condition for constrained optimization could be stated as saying that the Hessian of the objective function must be negative definite on the null space of the Jacobian of the binding constraints. The proof is similar to the proof of the second order condition for unconstrained optimization, so we will omit it.

4 Comparative statics

Often, a maximization problem will involve some parameters. For example, a consumer with Cobb-Douglas utility solves
$$\max_{x_1, x_2} x_1^{\alpha_1} x_2^{\alpha_2} \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 \leq y.$$
The solution to this problem depends on $\alpha_1$, $\alpha_2$, $p_1$, $p_2$, and $y$. We are often interested in how the solution depends on these parameters. In particular, we might be interested in the maximized value function (which in this example is the indirect utility function)
$$v(p_1, p_2, y, \alpha_1, \alpha_2) = \max_{x_1, x_2} x_1^{\alpha_1} x_2^{\alpha_2} \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 \leq y.$$
We might also be interested in the maximizers $x_1^*(p_1, p_2, y, \alpha_1, \alpha_2)$ and $x_2^*(p_1, p_2, y, \alpha_1, \alpha_2)$ (which in this example are the demand functions).

4.1 Envelope theorem

The envelope theorem tells us about the derivatives of the maximum value function. Suppose we have an objective function that depends on some parameters $\theta$. Define the value function as
$$v(\theta) = \max_x f(x, \theta) \quad \text{s.t.} \quad h(x) = c.$$
Let $x^*(\theta)$ denote the maximizer as a function of $\theta$, i.e.
$$x^*(\theta) = \arg\max_x f(x, \theta) \quad \text{s.t.} \quad h(x) = c.$$

By definition, $v(\theta) = f(x^*(\theta), \theta)$. Applying the chain rule, we can calculate the derivative of $v$. For simplicity, we will treat $x$ and $\theta$ as scalars, but the same calculations work when they are vectors:
$$\frac{dv}{d\theta} = \frac{\partial f}{\partial \theta}(x^*(\theta), \theta) + \frac{\partial f}{\partial x}(x^*(\theta), \theta) \frac{dx^*}{d\theta}(\theta).$$
From the first order condition, we know that
$$\frac{\partial f}{\partial x}(x^*(\theta), \theta) \frac{dx^*}{d\theta} = \mu \frac{dh}{dx}(x^*(\theta)) \frac{dx^*}{d\theta}.$$
However, since $x^*(\theta)$ is a constrained maximum for each $\theta$, it must be that $h(x^*(\theta)) = c$ for each $\theta$. Therefore,
$$0 = \frac{d}{d\theta} h(x^*(\theta)) = \frac{dh}{dx}(x^*(\theta)) \frac{dx^*}{d\theta}.$$
Thus, we can conclude that
$$\frac{dv}{d\theta} = \frac{\partial f}{\partial \theta}(x^*(\theta), \theta).$$
More generally, you might also have parameters in the constraint,
$$v(\theta) = \max_x f(x, \theta) \quad \text{s.t.} \quad h(x, \theta) = c.$$
Following the same steps as above, you can show that
$$\frac{dv}{d\theta} = \frac{\partial f}{\partial \theta} - \mu \frac{\partial h}{\partial \theta} = \frac{\partial L}{\partial \theta}.$$
Note that this result also covers the previous case where the constraint did not depend on $\theta$. In that case, $\frac{\partial h}{\partial \theta} = 0$ and we are left with just $\frac{dv}{d\theta} = \frac{\partial f}{\partial \theta}$.

The following theorem summarizes the above discussion.

Theorem 4.1 (Envelope). Let $f : \mathbb{R}^{n+k} \to \mathbb{R}$ and $h : \mathbb{R}^{n+k} \to \mathbb{R}^m$ be continuously differentiable. Define
$$v(\theta) = \max_x f(x, \theta) \quad \text{s.t.} \quad h(x, \theta) = c,$$
where $x \in \mathbb{R}^n$ and $\theta \in \mathbb{R}^k$. Then
$$D_\theta v_\theta = D_\theta f_{(x^*(\theta), \theta)} - \mu^T D_\theta h_{(x^*(\theta), \theta)},$$
where $\mu$ are the Lagrange multipliers from theorem 3.1, $D_\theta f_{(x^*, \theta)}$ denotes the $1 \times k$ matrix of partial derivatives of $f$ with respect to $\theta$ evaluated at $(x^*, \theta)$, and $D_\theta h_{(x^*, \theta)}$ denotes the $m \times k$ matrix of partial derivatives of $h$ with respect to $\theta$ evaluated at $(x^*, \theta)$.

Example 4.1 (Consumer demand). Some of the core results of consumer theory can be shown as simple consequences of the envelope theorem. Consider a consumer choosing goods $x \in \mathbb{R}^n$ with prices $p$ and income $y$. Define
$$v(p, y) = \max_x u(x) \quad \text{s.t.} \quad p^T x \leq y.$$

In this context, $v$ is called the indirect utility function. From the envelope theorem,
$$\frac{\partial v}{\partial p_i}(p, y) = -\mu x_i^*(p, y), \qquad \frac{\partial v}{\partial y}(p, y) = \mu.$$
Taking the ratio, we have a relationship between the (Marshallian) demand function, $x_i^*$, and the indirect utility function, $v$:
$$x_i^*(p, y) = -\frac{\partial v/\partial p_i(p, y)}{\partial v/\partial y(p, y)}.$$
This is known as Roy's identity.

Now consider the mirror problem of minimizing expenditure given a target utility level,
$$e(p, \bar{u}) = \min_x p^T x \quad \text{s.t.} \quad u(x) \geq \bar{u}.$$
In general, $e$ is a (minimized) value function; in this particular context, it is the consumer's expenditure function. Using the envelope theorem again,
$$\frac{\partial e}{\partial p_i}(p, \bar{u}) = x_i^h(p, \bar{u}),$$
where $x^h(p, \bar{u})$ is the constrained minimizer. It is the Hicksian (or compensated) demand function.

Finally, using the fact that $x_i^h(p, \bar{u}) = x_i^*(p, e(p, \bar{u}))$ and differentiating with respect to $p_k$ gives Slutsky's equation:
$$\frac{\partial x_i^h}{\partial p_k} = \frac{\partial x_i^*}{\partial p_k} + \frac{\partial x_i^*}{\partial y} \frac{\partial e}{\partial p_k} = \frac{\partial x_i^*}{\partial p_k} + \frac{\partial x_i^*}{\partial y} x_k^h.$$
Slutsky's equation is useful because we can determine the sign of some of these derivatives. From above we know that
$$\frac{\partial e}{\partial p_i}(p, \bar{u}) = x_i^h(p, \bar{u}),$$
so
$$\frac{\partial x_i^h}{\partial p_j}(p, \bar{u}) = \frac{\partial^2 e}{\partial p_j \partial p_i}(p, \bar{u}).$$
If we fix $p$ and let $x_0 = x^h(p, \bar{u})$, we know that $u(x_0) \geq \bar{u}$, so $x_0$ satisfies the constraint in the minimum expenditure problem. Therefore, for any $\tilde{p}$,
$$\tilde{p}^T x_0 \geq e(\tilde{p}, \bar{u}) = \min_x \tilde{p}^T x \quad \text{s.t.} \quad u(x) \geq \bar{u}.$$

Taking a second order expansion, this gives
$$\tilde{p}^T x_0 \geq e(\tilde{p}, \bar{u}) \approx e(p, \bar{u}) + D_p e_{(p, \bar{u})} (\tilde{p} - p) + \frac{1}{2} (\tilde{p} - p)^T D^2_p e_{(p, \bar{u})} (\tilde{p} - p).$$
Using the facts that $e(p, \bar{u}) = p^T x_0$ and $D_p e_{(p, \bar{u})} = x_0^T$, this simplifies to
$$0 \geq \frac{1}{2} (\tilde{p} - p)^T D^2_p e_{(p, \bar{u})} (\tilde{p} - p).$$
Hence, we know that $D^2_p e_{(p, \bar{u})} = D_p x^h_{(p, \bar{u})}$ is negative semi-definite. In particular, the diagonal terms, $\frac{\partial x_i^h}{\partial p_i}$, must be less than or equal to zero: Hicksian demand curves must slope down. Marshallian demand curves usually do too, but might not when the income effect, $\frac{\partial x_i^*}{\partial y} x_i$, is large.

We can also analyze how the maximizer varies with the parameters. We do this by totally differentiating the first order condition. Consider an equality constrained problem,
$$\max_x f(x, \theta) \quad \text{s.t.} \quad h(x, \theta) = c,$$
where $x \in \mathbb{R}^n$, $\theta \in \mathbb{R}^s$, and $c \in \mathbb{R}^m$. The optimal $x^*$ must satisfy the first order conditions and the constraint:
$$\frac{\partial f}{\partial x_j} - \sum_{k=1}^{m} \mu_k \frac{\partial h_k}{\partial x_j} = 0 \quad \text{for } j = 1, \ldots, n$$
$$h_k(x^*, \theta) - c_k = 0 \quad \text{for } k = 1, \ldots, m.$$
Suppose $\theta$ changes by $d\theta$; let $dx$ and $d\mu$ be the amounts that $x^*$ and $\mu$ have to change by to make the first order conditions still hold. These must satisfy
$$\sum_{l=1}^{n} \frac{\partial^2 f}{\partial x_j \partial x_l} dx_l + \sum_{r=1}^{s} \frac{\partial^2 f}{\partial x_j \partial \theta_r} d\theta_r - \sum_{k=1}^{m} \mu_k \left( \sum_{l=1}^{n} \frac{\partial^2 h_k}{\partial x_j \partial x_l} dx_l + \sum_{r=1}^{s} \frac{\partial^2 h_k}{\partial x_j \partial \theta_r} d\theta_r \right) - \sum_{k=1}^{m} d\mu_k \frac{\partial h_k}{\partial x_j} = 0$$
$$\sum_{l=1}^{n} \frac{\partial h_k}{\partial x_l} dx_l + \sum_{r=1}^{s} \frac{\partial h_k}{\partial \theta_r} d\theta_r = 0,$$
where the first equation is for $j = 1, \ldots, n$ and the second is for $k = 1, \ldots, m$. We have $n + m$ equations to solve for the $n + m$ unknowns $dx$ and $d\mu$. We can express this system of equations more compactly using matrices:
$$\begin{pmatrix} D^2_{x} f - D^2_{x}(\mu^T h) & -(D_x h)^T \\ D_x h & 0 \end{pmatrix} \begin{pmatrix} dx \\ d\mu \end{pmatrix} = - \begin{pmatrix} D^2_{x\theta} f - D^2_{x\theta}(\mu^T h) \\ D_\theta h \end{pmatrix} d\theta,$$
where $D^2_x f$ is the $n \times n$ matrix of second partial derivatives of $f$ with respect to $x$, $D^2_{x\theta}(\mu^T h)$ is the $n \times s$ matrix of second partial derivatives of $\mu^T h$ with respect to combinations of $x$ and $\theta$, etc. Whether expressed using matrices or not, this system of equations is a bit unwieldy. Fortunately, in most applications there will be some simplification.

For example, if the constraints do not depend on $\theta$, we simply get
$$D_\theta x^* = \frac{dx^*}{d\theta} = -\left(D^2_x f\right)^{-1} D^2_{x\theta} f.$$
In other situations, the constraints might depend on $\theta$, but $s$, $m$, and/or $n$ might just be one or two. A similar result holds with inequality constraints, except at values of $\theta$ where changing $\theta$ changes which constraints bind or do not. Such situations are rare, so we will not worry about them.

Example 4.2 (Production theory). Consider a competitive multiple product firm facing output prices $p \in \mathbb{R}^k$ and input prices $w \in \mathbb{R}^n$. The firm's profits as a function of prices are
$$\pi(p, w) = \max_{y, x} \; p^T y - w^T x \quad \text{s.t.} \quad y - f(x) \leq 0.$$
The first order conditions are
$$p^T - \lambda^T = 0$$
$$-w^T + \lambda^T D f_x = 0$$
$$y - f(x) = 0.$$
Totally differentiating with respect to $p$, holding $w$ constant, gives
$$dp^T - d\lambda^T = 0$$
$$d\lambda^T D f_x + dx^T D^2_x(\lambda^T f) = 0$$
$$dy - D f_x \, dx = 0.$$
Combining, we can get
$$dp^T dy = -dx^T D^2_x(\lambda^T f) \, dx.$$
Notice that if we assume the constraint binds and substitute it into the objective function, then the second order condition for this problem is that $v^T D^2_x(p^T f) v < 0$ for all $v \neq 0$. If the second order condition holds, then we must have $-dx^T D^2_x(\lambda^T f) dx > 0$. Therefore $dp^T dy > 0$: increasing output prices increases output. As an exercise, you could use similar reasoning to show that $dw^T dx < 0$: increasing input prices decreases input demand.

Example 4.3 (Contracting with productivity shocks). In class, we were going over the following example, but did not have time to finish, so I am including it here. The firm's problem, with productivity $\theta$ not contractible and observed by the firm but not the worker, is to choose consumption and hours schedules $c(\cdot)$ and $h(\cdot)$ to solve
$$\max_{c(\cdot), h(\cdot)} \sum_{i \in \{H, L\}} q_i \big[\theta_i f(h_i) - c_i\big]$$

subject to
$$\sum_{i \in \{H, L\}} q_i \big[U(c_i) + V(1 - h_i)\big] \geq W_0 \qquad (1)$$
$$\theta_L f(h_H) - c_H \leq \theta_L f(h_L) - c_L \qquad (2)$$
$$\theta_H f(h_L) - c_L \leq \theta_H f(h_H) - c_H. \qquad (3)$$
Which constraint binds? How do $c_i$ and $h_i$ compare with the first-best contract? Recall that the first-best contract (when $\theta$ is observed and contractible, so that constraints (2) and (3) are not present) had $c_H = c_L$ and $h_H > h_L$, and in particular
$$\theta_i f'(h_i) = \frac{V'(1 - h_i)}{U'(c_i)}.$$
With this contract, if $\theta$ is only known to the firm, the firm would always want to lie and declare that productivity is high. Therefore, we know that at least one of (2)-(3) must bind. Moreover, we should suspect that constraint (2) will bind. We can verify that this is the case by looking at the first order conditions. Let $\mu$ be the multiplier for (1), $\lambda_L$ for (2), and $\lambda_H$ for (3). The first order conditions are
$$[c_L]: \quad 0 = -q_L + \mu q_L U'(c_L) - \lambda_L + \lambda_H$$
$$[c_H]: \quad 0 = -q_H + \mu q_H U'(c_H) + \lambda_L - \lambda_H$$
$$[h_L]: \quad 0 = q_L \theta_L f'(h_L) - \mu q_L V'(1 - h_L) + \lambda_L \theta_L f'(h_L) - \lambda_H \theta_H f'(h_L)$$
$$[h_H]: \quad 0 = q_H \theta_H f'(h_H) - \mu q_H V'(1 - h_H) - \lambda_L \theta_L f'(h_H) + \lambda_H \theta_H f'(h_H).$$
Now, let's deduce which constraints must bind. Possible cases:

(1) Both (2) and (3) slack. Then we have the first-best problem, but as noted above, the solution to the first-best problem violates (2), so this case is not possible.

(2) Both (2) and (3) bind. Then
$$c_H - c_L = \theta_L \big[f(h_H) - f(h_L)\big]$$
and
$$c_H - c_L = \theta_H \big[f(h_H) - f(h_L)\big].$$
Assuming $\theta_H > \theta_L$, this implies $c_H = c_L \equiv c$ and $h_H = h_L \equiv h$. The first order conditions for $[c_i]$ imply that
$$\lambda_H - \lambda_L = q_L \big(1 - \mu U'(c)\big) \quad \text{and} \quad \lambda_L - \lambda_H = q_H \big(1 - \mu U'(c)\big).$$
Since each $q_i > 0$, the only possibility is that $1 - \mu U'(c) = 0$ and $\lambda_H = \lambda_L \equiv \lambda$. However, the first order conditions for $[h_i]$ then imply that
$$\frac{\mu V'(1 - h)}{f'(h)} = \theta_L + \frac{\lambda(\theta_L - \theta_H)}{q_L} = \theta_H + \frac{\lambda(\theta_H - \theta_L)}{q_H},$$
so
$$(\theta_H - \theta_L)\left(1 + \frac{\lambda}{q_L} + \frac{\lambda}{q_H}\right) = 0,$$
but this is impossible since $\theta_H > \theta_L$, $\lambda \geq 0$, $q_L > 0$, and $q_H > 0$.

(3) (3) binds, but (2) is slack. This is impossible. If (3) binds and (2) is slack, then
$$c_H - c_L = \theta_H \big[f(h_H) - f(h_L)\big]$$
and
$$c_H - c_L > \theta_L \big[f(h_H) - f(h_L)\big].$$
Since $\theta_H > \theta_L$, these imply $f(h_H) > f(h_L)$, so $h_H > h_L$ and $c_H > c_L$. However, with $\lambda_L = 0$, the first order conditions for $[c_L]$ and $[c_H]$ give $\mu U'(c_L) = 1 - \lambda_H/q_L$ and $\mu U'(c_H) = 1 + \lambda_H/q_H$, so $U'(c_H) \geq U'(c_L)$ and therefore $c_H \leq c_L$, a contradiction.

(4) (2) binds, but (3) is slack. This is the only possibility left. Now we have
$$c_H - c_L = \theta_L \big[f(h_H) - f(h_L)\big]$$
and
$$c_H - c_L < \theta_H \big[f(h_H) - f(h_L)\big].$$
Since $\theta_H > \theta_L$ and $f$ is increasing, this implies $h_H > h_L$ and $c_H > c_L$. Using the fact that $\lambda_H = 0$ and combining $[c_L]$ and $[h_L]$ gives
$$\theta_L f'(h_L) = \frac{V'(1 - h_L)}{U'(c_L)}. \qquad (4)$$
An interpretation of this is that when productivity is low, the marginal product of labor is set equal to the marginal rate of substitution between labor and consumption. This contract features an efficient choice of labor relative to consumption in the low productivity state. To see what happens in the high productivity state, combining $[c_H]$ and $[h_H]$ leads to
$$\theta_H f'(h_H) = \frac{q_H - \lambda_L}{q_H - \lambda_L \theta_L/\theta_H} \, \frac{V'(1 - h_H)}{U'(c_H)}. \qquad (5)$$
Also, note that $q_H - \lambda_L = \mu q_H U'(c_H) > 0$ and $\theta_L < \theta_H$, so
$$\frac{q_H - \lambda_L \theta_L/\theta_H}{q_H - \lambda_L} > 1,$$
and hence
$$\theta_H f'(h_H) < \frac{V'(1 - h_H)}{U'(c_H)}. \qquad (6)$$
Hence, in the high productivity state, there is a distortion in the tradeoff between labor and consumption. This distortion is necessary to prevent the firm from wanting to lie about productivity. We could proceed further and try to eliminate $\lambda_L$ from these expressions, but I do not think it leads to any more useful conclusions.

Appendix A. Notation

$\mathbb{R}$ is the set of real numbers. $\mathbb{R}^n$ is the set of vectors of $n$ real numbers. $x \in \mathbb{R}^k$ is read "$x$ in $\mathbb{R}^k$" and means that $x$ is a vector of $k$ real numbers. $f : \mathbb{R}^n \to \mathbb{R}$ means $f$ is a function from $\mathbb{R}^n$ to $\mathbb{R}$. That is, $f$'s argument is an $n$-tuple of real numbers and its output is a single real number. $U \subseteq \mathbb{R}^n$ means $U$ is a subset of $\mathbb{R}^n$.

When convenient, we will treat $x \in \mathbb{R}^k$ as a $k \times 1$ matrix, so $w^T x = \sum_{i=1}^{k} w_i x_i$. $N_\delta(x)$ is a $\delta$ neighborhood of $x$, meaning the set of points within $\delta$ distance of $x$. For $x \in \mathbb{R}^n$, we will use Euclidean distance, so that $N_\delta(x)$ is the set of $y \in \mathbb{R}^n$ such that $\sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} < \delta$.

Appendix B. Review of derivatives

Partial and directional derivatives were discussed in the summer math review, so we will just briefly restate their definitions and some key facts here.

Definition B.1. Let $f : \mathbb{R}^n \to \mathbb{R}$. The $i$th partial derivative of $f$ is
$$\frac{\partial f}{\partial x_i}(x_0) = \lim_{h \to 0} \frac{f(x_{01}, \ldots, x_{0i} + h, \ldots, x_{0n}) - f(x_0)}{h}.$$
The $i$th partial derivative tells you how much the function changes as its $i$th argument changes.

Definition B.2. Let $f : \mathbb{R}^n \to \mathbb{R}^k$, and let $v \in \mathbb{R}^n$. The directional derivative in direction $v$ at $x$ is
$$d f(x; v) = \lim_{\alpha \to 0} \frac{f(x + \alpha v) - f(x)}{\alpha},$$
where $\alpha \in \mathbb{R}$ is a scalar.

An important result relating partial to directional derivatives is the following.

Theorem B.1. Let $f : \mathbb{R}^n \to \mathbb{R}$ and suppose its partial derivatives exist and are continuous in a neighborhood of $x_0$. Then
$$d f(x_0; v) = \sum_{i=1}^{n} v_i \frac{\partial f}{\partial x_i}(x_0).$$
In this case we will say that $f$ is differentiable at $x_0$.

It is convenient to gather the partial derivatives of a function into a matrix. For a function $f : \mathbb{R}^n \to \mathbb{R}$, we will gather its partial derivatives into a $1 \times n$ matrix,
$$D f_x = \begin{pmatrix} \frac{\partial f}{\partial x_1}(x) & \cdots & \frac{\partial f}{\partial x_n}(x) \end{pmatrix}.$$
We will simply call this matrix the derivative of $f$ at $x$. This helps reduce notation because, for example, we can write
$$d f(x_0; v) = \sum_{i=1}^{n} v_i \frac{\partial f}{\partial x_i}(x_0) = D f_{x_0} v.$$
Similarly, we can define second and higher order partial and directional derivatives.

Definition B.3. Let $f : \mathbb{R}^n \to \mathbb{R}$. The $ij$th second partial derivative of $f$ is
$$\frac{\partial^2 f}{\partial x_i \partial x_j}(x_0) = \lim_{h \to 0} \frac{\frac{\partial f}{\partial x_i}(x_{01}, \ldots, x_{0j} + h, \ldots, x_{0n}) - \frac{\partial f}{\partial x_i}(x_0)}{h}.$$

Definition B.4. Let $f : \mathbb{R}^n \to \mathbb{R}^k$, and let $v, w \in \mathbb{R}^n$. The second directional derivative in directions $v$ and $w$ at $x$ is
$$d^2 f(x; v, w) = \lim_{\alpha \to 0} \frac{d f(x + \alpha w; v) - d f(x; v)}{\alpha},$$
where $\alpha \in \mathbb{R}$ is a scalar.

Theorem B.2. Let $f : \mathbb{R}^n \to \mathbb{R}$ and suppose its first and second partial derivatives exist and are continuous in a neighborhood of $x_0$. Then
$$d^2 f(x_0; v, w) = \sum_{j=1}^{n} \sum_{i=1}^{n} v_i \frac{\partial^2 f}{\partial x_i \partial x_j}(x_0) w_j.$$
In this case we will say that $f$ is twice differentiable at $x_0$. Additionally, if $f$ is twice differentiable, then $\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}$.

We can gather the second partials of $f$ into an $n \times n$ matrix,
$$D^2 f_x = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix},$$
and then write
$$d^2 f(x; v, w) = w^T D^2 f_x v.$$
$D^2 f_x$ is also called the Hessian of $f$.
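Theorems B.1 and B.2 are easy to illustrate numerically: assembling finite-difference approximations to the partial derivatives as in the theorems should reproduce directly computed directional derivatives. The sketch below (my own illustration, not from the notes, using an arbitrary smooth function) does this for both the gradient and the Hessian.

```python
import numpy as np

def f(x):                                   # an arbitrary smooth function R^2 -> R
    return x[0] ** 2 * x[1] + np.sin(x[1])

def partial(f, x, i, h=1e-6):               # ith partial derivative (definition B.1)
    e = np.zeros_like(x); e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

def directional(f, x, v, h=1e-6):           # directional derivative (definition B.2)
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

x = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

grad = np.array([partial(f, x, i) for i in range(2)])
print(directional(f, x, v), "vs", grad @ v)            # theorem B.1: they agree

# Hessian by differencing the partials, then the quadratic form of theorem B.2
H = np.array([[partial(lambda z: partial(f, z, j), x, i, h=1e-4)
               for j in range(2)] for i in range(2)])
quad = (f(x + 1e-3 * v) - 2 * f(x) + f(x - 1e-3 * v)) / 1e-3 ** 2
print(quad, "vs", v @ H @ v)                           # theorem B.2 with w = v: they agree
```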

References

Carter, Michael. 2001. Foundations of Mathematical Economics. Cambridge, MA: MIT Press.
De la Fuente, Angel. 2000. Mathematical Methods and Models for Economists. Cambridge: Cambridge University Press.
Dixit, Avinash K. 1990. Optimization in Economic Theory. Oxford: Oxford University Press.
Engel, Ernst. 1857. "Die Productions- und Consumptionsverhaeltnisse des Koenigsreichs Sachsen." Zeitschrift des Statistischen Bureaus des Koniglich Sachsischen Ministeriums des Inneren (8-9).
Lewbel, Arthur. 2008. "Engel curve." In The New Palgrave Dictionary of Economics, edited by Steven N. Durlauf and Lawrence E. Blume. Basingstoke: Palgrave Macmillan.
Simon, Carl P. and Lawrence Blume. 1994. Mathematics for Economists. New York: Norton.


GS/ECON 5010 section B Answers to Assignment 1 September Q1. Are the preferences described below transitive? Strictly monotonic? Convex? GS/ECON 5010 section B Answers to Assignment 1 September 2011 Q1. Are the preferences described below transitive? Strictly monotonic? Convex? Explain briefly. The person consumes 2 goods, food and clothing.

More information

AP Calculus AB Summer Assignment

AP Calculus AB Summer Assignment AP Calculus AB Summer Assignment Name: When you come back to school, it is my epectation that you will have this packet completed. You will be way behind at the beginning of the year if you haven t attempted

More information

Properties of Walrasian Demand

Properties of Walrasian Demand Properties of Walrasian Demand Econ 2100 Fall 2017 Lecture 5, September 12 Problem Set 2 is due in Kelly s mailbox by 5pm today Outline 1 Properties of Walrasian Demand 2 Indirect Utility Function 3 Envelope

More information

Polynomial Functions of Higher Degree

Polynomial Functions of Higher Degree SAMPLE CHAPTER. NOT FOR DISTRIBUTION. 4 Polynomial Functions of Higher Degree Polynomial functions of degree greater than 2 can be used to model data such as the annual temperature fluctuations in Daytona

More information

Last Revised: :19: (Fri, 12 Jan 2007)(Revision:

Last Revised: :19: (Fri, 12 Jan 2007)(Revision: 0-0 1 Demand Lecture Last Revised: 2007-01-12 16:19:03-0800 (Fri, 12 Jan 2007)(Revision: 67) a demand correspondence is a special kind of choice correspondence where the set of alternatives is X = { x

More information

EC487 Advanced Microeconomics, Part I: Lecture 2

EC487 Advanced Microeconomics, Part I: Lecture 2 EC487 Advanced Microeconomics, Part I: Lecture 2 Leonardo Felli 32L.LG.04 6 October, 2017 Properties of the Profit Function Recall the following property of the profit function π(p, w) = max x p f (x)

More information

Week 9: Topics in Consumer Theory (Jehle and Reny, Chapter 2)

Week 9: Topics in Consumer Theory (Jehle and Reny, Chapter 2) Week 9: Topics in Consumer Theory (Jehle and Reny, Chapter 2) Tsun-Feng Chiang *School of Economics, Henan University, Kaifeng, China November 15, 2015 Microeconomic Theory Week 9: Topics in Consumer Theory

More information

1 Objective. 2 Constrained optimization. 2.1 Utility maximization. Dieter Balkenborg Department of Economics

1 Objective. 2 Constrained optimization. 2.1 Utility maximization. Dieter Balkenborg Department of Economics BEE020 { Basic Mathematical Economics Week 2, Lecture Thursday 2.0.0 Constrained optimization Dieter Balkenborg Department of Economics University of Exeter Objective We give the \ rst order conditions"

More information

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem This chapter will cover three key theorems: the maximum theorem (or the theorem of maximum), the implicit function theorem, and

More information

MATH 1325 Business Calculus Guided Notes

MATH 1325 Business Calculus Guided Notes MATH 135 Business Calculus Guided Notes LSC North Harris By Isabella Fisher Section.1 Functions and Theirs Graphs A is a rule that assigns to each element in one and only one element in. Set A Set B Set

More information

First Variation of a Functional

First Variation of a Functional First Variation of a Functional The derivative of a function being zero is a necessary condition for the etremum of that function in ordinary calculus. Let us now consider the equivalent of a derivative

More information

4.3 How derivatives affect the shape of a graph. The first derivative test and the second derivative test.

4.3 How derivatives affect the shape of a graph. The first derivative test and the second derivative test. Chapter 4: Applications of Differentiation In this chapter we will cover: 41 Maimum and minimum values The critical points method for finding etrema 43 How derivatives affect the shape of a graph The first

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Nonlinear Programming (NLP)

Nonlinear Programming (NLP) Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume

More information

Notes I Classical Demand Theory: Review of Important Concepts

Notes I Classical Demand Theory: Review of Important Concepts Notes I Classical Demand Theory: Review of Important Concepts The notes for our course are based on: Mas-Colell, A., M.D. Whinston and J.R. Green (1995), Microeconomic Theory, New York and Oxford: Oxford

More information

In order to master the techniques explained here it is vital that you undertake plenty of practice exercises so that they become second nature.

In order to master the techniques explained here it is vital that you undertake plenty of practice exercises so that they become second nature. Maima and minima In this unit we show how differentiation can be used to find the maimum and minimum values of a function. Because the derivative provides information about the gradient or slope of the

More information

Convex Optimization Overview (cnt d)

Convex Optimization Overview (cnt d) Conve Optimization Overview (cnt d) Chuong B. Do November 29, 2009 During last week s section, we began our study of conve optimization, the study of mathematical optimization problems of the form, minimize

More information

Econ 121b: Intermediate Microeconomics

Econ 121b: Intermediate Microeconomics Econ 121b: Intermediate Microeconomics Dirk Bergemann, Spring 2012 Week of 1/29-2/4 1 Lecture 7: Expenditure Minimization Instead of maximizing utility subject to a given income we can also minimize expenditure

More information

z = f (x; y) f (x ; y ) f (x; y) f (x; y )

z = f (x; y) f (x ; y ) f (x; y) f (x; y ) BEEM0 Optimization Techiniques for Economists Lecture Week 4 Dieter Balkenborg Departments of Economics University of Exeter Since the fabric of the universe is most perfect, and is the work of a most

More information

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2.

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2. ARE211, Fall 2005 LECTURE #18: THU, NOV 3, 2005 PRINT DATE: NOVEMBER 22, 2005 (COMPSTAT2) CONTENTS 5. Characteristics of Functions. 1 5.1. Surjective, Injective and Bijective functions 1 5.2. Homotheticity

More information

Math 2414 Activity 1 (Due by end of class July 23) Precalculus Problems: 3,0 and are tangent to the parabola axis. Find the other line.

Math 2414 Activity 1 (Due by end of class July 23) Precalculus Problems: 3,0 and are tangent to the parabola axis. Find the other line. Math 44 Activity (Due by end of class July 3) Precalculus Problems: 3, and are tangent to the parabola ais. Find the other line.. One of the two lines that pass through y is the - {Hint: For a line through

More information

Matrices and Systems of Equations

Matrices and Systems of Equations M CHAPTER 3 3 4 3 F 2 2 4 C 4 4 Matrices and Systems of Equations Probably the most important problem in mathematics is that of solving a system of linear equations. Well over 75 percent of all mathematical

More information

Competitive Consumer Demand 1

Competitive Consumer Demand 1 John Nachbar Washington University May 7, 2017 1 Introduction. Competitive Consumer Demand 1 These notes sketch out the basic elements of competitive demand theory. The main result is the Slutsky Decomposition

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Tvestlanka Karagyozova University of Connecticut

Tvestlanka Karagyozova University of Connecticut September, 005 CALCULUS REVIEW Tvestlanka Karagyozova University of Connecticut. FUNCTIONS.. Definition: A function f is a rule that associates each value of one variable with one and only one value of

More information

Structural Properties of Utility Functions Walrasian Demand

Structural Properties of Utility Functions Walrasian Demand Structural Properties of Utility Functions Walrasian Demand Econ 2100 Fall 2017 Lecture 4, September 7 Outline 1 Structural Properties of Utility Functions 1 Local Non Satiation 2 Convexity 3 Quasi-linearity

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

The Ohio State University Department of Economics. Homework Set Questions and Answers

The Ohio State University Department of Economics. Homework Set Questions and Answers The Ohio State University Department of Economics Econ. 805 Winter 00 Prof. James Peck Homework Set Questions and Answers. Consider the following pure exchange economy with two consumers and two goods.

More information

Maximum Value Functions and the Envelope Theorem

Maximum Value Functions and the Envelope Theorem Lecture Notes for ECON 40 Kevin Wainwright Maximum Value Functions and the Envelope Theorem A maximum (or minimum) value function is an objective function where the choice variables have been assigned

More information

Optimality Conditions

Optimality Conditions Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.

More information

Differentiation. 1. What is a Derivative? CHAPTER 5

Differentiation. 1. What is a Derivative? CHAPTER 5 CHAPTER 5 Differentiation Differentiation is a technique that enables us to find out how a function changes when its argument changes It is an essential tool in economics If you have done A-level maths,

More information

Comparative Statics. Autumn 2018

Comparative Statics. Autumn 2018 Comparative Statics Autumn 2018 What is comparative statics? Contents 1 What is comparative statics? 2 One variable functions Multiple variable functions Vector valued functions Differential and total

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

ECON 186 Class Notes: Optimization Part 2

ECON 186 Class Notes: Optimization Part 2 ECON 186 Class Notes: Optimization Part 2 Jijian Fan Jijian Fan ECON 186 1 / 26 Hessians The Hessian matrix is a matrix of all partial derivatives of a function. Given the function f (x 1,x 2,...,x n ),

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

Intro to Nonlinear Optimization

Intro to Nonlinear Optimization Intro to Nonlinear Optimization We now rela the proportionality and additivity assumptions of LP What are the challenges of nonlinear programs NLP s? Objectives and constraints can use any function: ma

More information

Economics 101A (Lecture 3) Stefano DellaVigna

Economics 101A (Lecture 3) Stefano DellaVigna Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem

More information

0.1. Linear transformations

0.1. Linear transformations Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly

More information

The Fundamental Welfare Theorems

The Fundamental Welfare Theorems The Fundamental Welfare Theorems The so-called Fundamental Welfare Theorems of Economics tell us about the relation between market equilibrium and Pareto efficiency. The First Welfare Theorem: Every Walrasian

More information

ECON 186 Class Notes: Derivatives and Differentials

ECON 186 Class Notes: Derivatives and Differentials ECON 186 Class Notes: Derivatives and Differentials Jijian Fan Jijian Fan ECON 186 1 / 27 Partial Differentiation Consider a function y = f (x 1,x 2,...,x n ) where the x i s are all independent, so each

More information

Final Exam - Math Camp August 27, 2014

Final Exam - Math Camp August 27, 2014 Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue

More information

DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION

DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION UNIVERSITY OF MARYLAND: ECON 600. Alternative Methods of Discrete Time Intertemporal Optimization We will start by solving a discrete time intertemporal

More information