EC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline


LONDON SCHOOL OF ECONOMICS
Department of Economics
Professor Leonardo Felli
S.478; x7525

EC /11 Math for Microeconomics, September Course, Part II
Lecture Notes

Course Outline

Lecture 1: Tools for optimization (quadratic forms).
Lecture 2: Tools for optimization (Taylor's expansion) and unconstrained optimization.
Lecture 3: Concavity, convexity, quasi-concavity and economic applications.
Lecture 4: Constrained optimization I: equality constraints, Lagrange theorem.
Lecture 5: Constrained optimization II: inequality constraints, Kuhn-Tucker theorem.
Lecture 6: Constrained optimization III: the maximum value function, envelope theorem, implicit function theorem and comparative statics.

Lecture 1: Tools for Optimization: Quadratic Forms and Taylor's Formula

What is a quadratic form? Quadratic forms are useful because: (i) they are the simplest functions after linear ones; (ii) the conditions used in optimization techniques are stated in terms of quadratic forms; (iii) many economic optimization problems have a quadratic objective function, such as risk minimization problems in finance, where riskiness is measured by the (quadratic) variance of the returns from investments.

Among the functions of one variable, the simplest functions with a unique global extremum are the pure quadratics y = x^2 and y = -x^2. The level curve of a general quadratic form in R^2 is

    a_11 x_1^2 + a_12 x_1 x_2 + a_22 x_2^2 = b

and can take the form of an ellipse, a hyperbola, a pair of lines, or possibly the empty set.

Definition: A quadratic form on R^n is a real-valued function

    Q(x_1, x_2, ..., x_n) = sum_{i <= j} a_ij x_i x_j.

The general quadratic form in R^2 can be written as

    a_11 x_1^2 + a_12 x_1 x_2 + a_22 x_2^2 = (x_1, x_2) [[a_11, a_12], [0, a_22]] (x_1, x_2)^T.

Each quadratic form can be represented as Q(x) = x^T A x, where A is a (unique) symmetric matrix:

    A = [[a_11,   a_12/2, ..., a_1n/2],
         [a_21/2, a_22,   ..., a_2n/2],
         ...,
         [a_n1/2, a_n2/2, ..., a_nn  ]].

Conversely, if A is a symmetric matrix, then the real-valued function Q(x) = x^T A x is a quadratic form.
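This representation can be spot-checked numerically. The sketch below uses my own example form Q(x_1, x_2) = x_1^2 + 4 x_1 x_2 + 3 x_2^2 (not from the notes): the cross coefficient 4 is split as 4/2 = 2 on each off-diagonal entry of the symmetric matrix.

```python
# Check that Q(x) = x^T A x reproduces the quadratic form, with the
# cross term split evenly across the two off-diagonal entries of A.

def quad(x1, x2):
    return x1 ** 2 + 4 * x1 * x2 + 3 * x2 ** 2

def xAx(x1, x2):
    A = [[1, 2], [2, 3]]    # symmetric representation of quad
    v = [x1, x2]
    return sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

for point in [(1, 0), (0, 1), (2, -3), (1.5, 0.5)]:
    assert abs(quad(*point) - xAx(*point)) < 1e-12
```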

Definiteness of quadratic forms

A quadratic form always takes the value 0 at x = 0. We focus on the question of whether x = 0 is a max, a min, or neither. For example, when y = ax^2 with a > 0, ax^2 is non-negative and equals 0 only when x = 0: the form is positive definite, and x = 0 is a global minimizer. If a < 0, then the function is negative definite. In two dimensions, x_1^2 + x_2^2 is positive definite, -x_1^2 - x_2^2 is negative definite, and x_1^2 - x_2^2 is indefinite, since it can take both positive and negative values.

There are two intermediate cases: if the quadratic form is always non-negative but also equals 0 at some non-zero x, it is positive semidefinite; an example is (x_1 + x_2)^2, which is 0 at points with x_1 = -x_2. A quadratic form which is never positive but can be zero at points other than the origin is called negative semidefinite. We apply the same terminology to the symmetric matrix A: the matrix A is positive semidefinite if Q(x) = x^T A x is positive semidefinite, and so on.

Definition: Let A be an (n x n) symmetric matrix. Then A is:

(a) positive definite if x^T A x > 0 for all x != 0 in R^n;
(b) positive semidefinite if x^T A x >= 0 for all x in R^n;
(c) negative definite if x^T A x < 0 for all x != 0 in R^n;
(d) negative semidefinite if x^T A x <= 0 for all x in R^n;
(e) indefinite if x^T A x > 0 for some x in R^n and x^T A x < 0 for some other x in R^n.

Application (later this week): a function y = f(x) of one variable is concave if its second derivative f''(x) <= 0 on some interval. The generalization of this result to higher dimensions states that a function is concave on some region if its second derivative matrix is negative semidefinite for all x in the region.

Testing the definiteness of a matrix:

Definition: The determinant of a matrix is a unique scalar associated with the matrix.

Computing the determinant of a matrix: for a (2 x 2) matrix A = [[a_11, a_12], [a_21, a_22]], det A (also written |A|) is

    det A = a_11 a_22 - a_12 a_21.

For A = [[a_11, a_12, a_13], [a_21, a_22, a_23], [a_31, a_32, a_33]] the determinant is

    det A = a_11 det [[a_22, a_23], [a_32, a_33]] - a_12 det [[a_21, a_23], [a_31, a_33]] + a_13 det [[a_21, a_22], [a_31, a_32]].

Definition: Let A be an (n x n) matrix. A (k x k) submatrix of A formed by deleting (n - k) columns, say columns (i_1, i_2, ..., i_{n-k}), and the same (n - k) rows from A is called a kth-order principal submatrix of A. The determinant of a (k x k) principal submatrix is called a kth-order principal minor of A.

Example: for a general (3 x 3) matrix A there is one third-order principal minor, which is det(A). There are three second-order principal minors and three first-order principal minors.

Definition: Let A be an (n x n) matrix. The kth-order principal submatrix of A obtained by deleting the last (n - k) rows and columns from A is called the kth-order leading principal submatrix of A, denoted A_k. Its determinant is called the kth-order leading principal minor of A, denoted |A_k|.

Let A be an (n x n) symmetric matrix. Then A is positive definite if and only if all its n leading principal minors are strictly positive. A is negative definite if and only if its n leading principal minors alternate in sign as follows: |A_1| < 0, |A_2| > 0, |A_3| < 0, etc.; the kth-order leading principal minor should have the same sign as (-1)^k. A is positive semidefinite if and only if every principal minor of A is non-negative.
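The leading-principal-minor tests can be sketched in code. This is my own illustration (the helper names are not from the notes); it classifies only the two strict cases, since the semidefinite tests need all principal minors, not just the leading ones.

```python
# Definiteness via leading principal minors, for a symmetric matrix
# given as a list of rows.

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0.0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def leading_principal_minors(A):
    """|A_1|, |A_2|, ..., |A_n|."""
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def classify(A):
    minors = leading_principal_minors(A)
    if all(m > 0 for m in minors):
        return "positive definite"
    if all((-1) ** k * m > 0 for k, m in enumerate(minors, 1)):
        return "negative definite"
    return "inconclusive"
```

For example, `classify([[2, 3], [3, 7]])` reproduces the positive-definite example worked out below.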

A is negative semidefinite if and only if every principal minor of odd order is non-positive and every principal minor of even order is non-negative.

Diagonal matrices: A = [[a_1, 0, 0], [0, a_2, 0], [0, 0, a_3]]. These also correspond to the simplest quadratic forms:

    x^T A x = a_1 x_1^2 + a_2 x_2^2 + a_3 x_3^2.

This quadratic form is positive (negative) definite if and only if all the a_i's are positive (negative). It is positive semidefinite if and only if all the a_i's are non-negative, and negative semidefinite if and only if all the a_i's are non-positive. If there are two a_i's of opposite signs, it is indefinite.

Let A be a (2 x 2) symmetric matrix. Then:

    Q(x_1, x_2) = (x_1, x_2) [[a, b], [b, c]] (x_1, x_2)^T = a x_1^2 + 2b x_1 x_2 + c x_2^2.

If a = 0, then Q cannot be negative or positive definite, since Q(1, 0) = 0. So assume that a != 0, and add and subtract b^2 x_2^2 / a to get:

    Q(x_1, x_2) = a x_1^2 + 2b x_1 x_2 + c x_2^2 + (b^2/a) x_2^2 - (b^2/a) x_2^2
                = a (x_1^2 + 2(b/a) x_1 x_2 + (b^2/a^2) x_2^2) - (b^2/a) x_2^2 + c x_2^2
                = a (x_1 + (b/a) x_2)^2 + ((ac - b^2)/a) x_2^2.

If both coefficients, a and (ac - b^2)/a, are positive, then Q is never negative. It equals 0 only when x_1 + (b/a) x_2 = 0 and x_2 = 0, in other words when x_1 = 0 and x_2 = 0. So if

    a > 0 and det A = det [[a, b], [b, c]] = ac - b^2 > 0,

then Q is positive definite. Conversely, if Q is positive definite, then both a and det A = ac - b^2 are positive. Similarly, Q is negative definite if and only if both coefficients are negative, which occurs if and only if a < 0 and ac - b^2 > 0, that is, when the leading principal minors alternate in sign. If ac - b^2 < 0, then the two coefficients have opposite signs and Q is indefinite.

Examples of (2 x 2) matrices:

Consider A = [[2, 3], [3, 7]]. Since |A_1| = 2 > 0 and |A_2| = 14 - 9 = 5 > 0, A is positive definite.

Consider B = [[2, 4], [4, 7]]. Since |B_1| = 2 > 0 and |B_2| = 14 - 16 = -2 < 0, B is indefinite.

Taylor's formula: The second tool that we need for optimization is the Taylor series. For functions from R^1 to R^1, the first-order Taylor approximation is

    f(a + h) ≈ f(a) + f'(a) h.

The approximate equality holds in the following sense. Write f(a + h) as

    f(a + h) = f(a) + f'(a) h + R(h; a)

where R(h; a) is the difference between the two sides of the approximation; by the definition of the derivative f'(a), we have R(h; a)/h -> 0 as h -> 0. Geometrically, this formalizes the approximation of the graph of f by its tangent line at (a, f(a)). Analytically, it describes the best approximation of f by a polynomial of degree 1.

Definition: the kth-order Taylor polynomial of f at x = a is

    P_k(a + h) = f(a) + f'(a) h + (f''(a)/2!) h^2 + ... + (f^[k](a)/k!) h^k,

where

    f(a + h) - P_k(a + h) = R_k(h; a)  with  lim_{h -> 0} R_k(h; a)/h^k = 0.

Example: we compute the first- and second-order Taylor polynomials of the exponential function f(x) = e^x at x = 0. All the derivatives of f at x = 0 equal 1. Then:

    P_1(h) = 1 + h
    P_2(h) = 1 + h + h^2/2.

For h = .2, P_1(.2) = 1.2 and P_2(.2) = 1.22, compared with the actual value of e^{.2}, which is approximately 1.2214.

For functions of several variables:

    F(a + h) ≈ F(a) + (∂F/∂x_1)(a) h_1 + ... + (∂F/∂x_n)(a) h_n,

where R_1(h; a)/||h|| -> 0 as h -> 0. This is the approximation of order 1. Alternatively,

    F(a + h) = F(a) + DF_a h + R_1(h; a)

where DF_a = ((∂F/∂x_1)(a), ..., (∂F/∂x_n)(a)). For order two, the analogue of (f''(a)/2!) h^2 is

    (1/2) h^T D^2F_a h,

where D^2F_a is the Hessian matrix evaluated at x = a:

    D^2F_a = [[∂^2F/∂x_1^2,   ..., ∂^2F/∂x_1∂x_n],
              ...,
              [∂^2F/∂x_n∂x_1, ..., ∂^2F/∂x_n^2  ]].

The extension to order k then follows in the same way.
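The worked numbers for e^x above can be reproduced directly; this is a small sketch of the same computation.

```python
import math

# First- and second-order Taylor polynomials of e^x at 0,
# evaluated at h = 0.2 as in the worked example.

def P1(h):
    return 1 + h

def P2(h):
    return 1 + h + h ** 2 / 2

h = 0.2
print(P1(h), P2(h), math.exp(h))   # 1.2, 1.22, and e^0.2 ≈ 1.2214
```

As expected, the second-order polynomial is the closer approximation.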

Lecture 2: Unconstrained Optimization

Optimization plays a crucial role in economic problems. We start with unconstrained optimization problems.

Definition of extreme points

Definition: The ball B(x, r) centred at x of radius r is the set of all vectors y in R^n whose distance from x is less than r, that is,

    B(x, r) = {y in R^n : ||y - x|| < r}.

Definition: Suppose that f(x) is a real-valued function defined on a subset C of R^n. A point x* in C is:

1. a global maximizer for f(x) on C if f(x*) >= f(x) for all x in C;
2. a strict global maximizer for f(x) on C if f(x*) > f(x) for all x in C such that x != x*;
3. a local maximizer for f(x) if there is a strictly positive number δ such that f(x*) >= f(x) for all x in C with x in B(x*, δ);
4. a strict local maximizer for f(x) if there is a strictly positive number δ such that f(x*) > f(x) for all x in C with x in B(x*, δ) and x != x*;
5. a critical point for f(x) if the first partial derivatives of f(x) exist at x* and ∂f(x*)/∂x_i = 0 for i = 1, 2, ..., n.

Example: find the critical points of F(x, y) = x^3 - y^3 + 9xy. We set

    ∂F/∂x = 3x^2 + 9y = 0;    ∂F/∂y = -3y^2 + 9x = 0;

the critical points are (0, 0) and (3, -3).

Do extreme points exist?

Theorem (Extreme Value Theorem): Suppose that f(x) is a continuous function defined on C, which is compact (closed and bounded) in R^n. Then there exists a point x* in C at which f has a maximum, and there exists a point x_* in C at which f has a minimum. Thus,

    f(x_*) <= f(x) <= f(x*) for all x in C.

Functions of one variable

Necessary condition for a maximum in R: Suppose that f(x) is a differentiable function on an interval I. If x* is a local maximizer of f(x), then either x* is an end point of I or f'(x*) = 0.

Second-order sufficient conditions for a maximum in R: Suppose that f(x), f'(x), f''(x) are all continuous on an interval I and that x* is a critical point of f(x). Then:

1. if f''(x) <= 0 for all x in I, then x* is a global maximizer of f(x) on I;
2. if f''(x) < 0 for all x in I with x != x*, then x* is a strict global maximizer of f(x) on I;
3. if f''(x*) < 0, then x* is a strict local maximizer of f(x) on I.

Functions of several variables

First-order necessary conditions for a maximum in R^n: Suppose that f(x) is a real-valued function for which all first partial derivatives of f(x) exist on a subset C of R^n. If x* is an interior point of C that is a local maximizer of f(x), then x* is a critical point of f(x), that is,

    ∂f(x*)/∂x_i = 0 for i = 1, 2, ..., n.

Can we say whether (0, 0) or (3, -3) is a local maximum or a local minimum, then? For this we have to consider the Hessian, the matrix of second-order partial derivatives. Note that this is a symmetric matrix, since cross-partial derivatives are equal (if the function has continuous second-order partial derivatives; Clairaut's / Schwarz's theorem).

Second-order sufficient conditions for a local maximum in R^n: Suppose that f(x) is a real-valued function for which all first and second partial derivatives of f(x) exist on a subset C of R^n, and suppose that x* is a critical point of f. Then: if D^2 f(x*) is negative (positive) definite, then x* is a strict local maximizer (minimizer) of f(x).

It is also true that if x* is an interior point and a maximum (minimum) of f, then D^2 f(x*) is negative (positive) semidefinite. But it is not true that if x* is a critical point and D^2 f(x*) is negative (positive) semidefinite, then x* is a local maximum. A counterexample is f(x) = x^3, which has the property that D^2 f(0) is semidefinite, but x = 0 is neither a maximum nor a minimum.
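These second-order tests can be checked numerically for the running example F(x, y) = x^3 - y^3 + 9xy; the helper names below are mine.

```python
# Gradient and leading principal minors of the Hessian of
# F(x, y) = x^3 - y^3 + 9xy at its two critical points.

def grad(x, y):
    return (3 * x ** 2 + 9 * y, -3 * y ** 2 + 9 * x)

def hessian_minors(x, y):
    # D^2 F = [[6x, 9], [9, -6y]]; minors: 6x and det = -36xy - 81.
    return 6 * x, (6 * x) * (-6 * y) - 81

assert grad(0, 0) == (0, 0) and grad(3, -3) == (0, 0)
print(hessian_minors(0, 0))    # (0, -81): indefinite, a saddle point
print(hessian_minors(3, -3))   # (18, 243): positive definite, strict local min
```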

Back to the example of F(x, y) = x^3 - y^3 + 9xy. Compute the Hessian:

    D^2 F(x, y) = [[6x, 9], [9, -6y]].

The first-order leading principal minor is 6x, and the second-order leading principal minor is det(D^2 F(x, y)) = -36xy - 81. At (0, 0) these two minors are 0 and -81, so the matrix is indefinite, and this point is neither a local min nor a local max (it is a saddle point). At (3, -3) these two minors are 18 and 243, both positive, and hence it is a strict local minimum of F. Note that it is not a global minimum (why?).

Sketch of proof of the second-order conditions:

    F(x* + h) = F(x*) + DF(x*) h + (1/2) h^T D^2 F(x*) h + R(h).

Ignore R(h) and set DF(x*) = 0. Then

    F(x* + h) - F(x*) ≈ (1/2) h^T D^2 F(x*) h.

If D^2 F(x*) is negative definite, then for all small enough h != 0 the right-hand side is negative. Then F(x* + h) < F(x*) for small enough h; in other words, x* is a strict local maximizer of F.

Concavity and convexity

Definition: A real-valued function f defined on a convex subset U of R^n is concave if for all x, y in U and for all t in [0, 1]:

    f(t x + (1 - t) y) >= t f(x) + (1 - t) f(y).

A real-valued function g defined on a convex subset U of R^n is convex if for all x, y in U and for all t in [0, 1]:

    g(t x + (1 - t) y) <= t g(x) + (1 - t) g(y).

Notice: f is concave if and only if -f is convex. Notice: linear functions are both convex and concave.

A convex set: Definition: A set U is a convex set if for all x in U and y in U and for all t in [0, 1]:

    t x + (1 - t) y in U.

Concave and convex functions need to have convex sets as their domain; otherwise, the conditions above fail.

Let f be a continuous and differentiable function on a convex subset U of R^n. Then f is concave on U if and only if for all x, y in U:

    f(y) - f(x) <= Df(x)(y - x) = (∂f/∂x_1)(x) (y_1 - x_1) + ... + (∂f/∂x_n)(x) (y_n - x_n).

Proof on R^1: since f is concave,

    t f(y) + (1 - t) f(x) <= f(t y + (1 - t) x)
    t (f(y) - f(x)) + f(x) <= f(x + t(y - x))
    f(y) - f(x) <= [f(x + t(y - x)) - f(x)] / t
    f(y) - f(x) <= ([f(x + h) - f(x)] / h) (y - x)

for h = t(y - x). Taking limits as h -> 0, this becomes f(y) - f(x) <= f'(x)(y - x).
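The gradient characterization can be spot-checked numerically. The sketch below uses f(x) = log(x), my own choice of a concave function (not from the notes), and verifies f(y) - f(x) <= f'(x)(y - x) on a small grid; a sampling check, not a proof.

```python
import math

# f(x) = log(x) is concave on (0, inf); f'(x) = 1/x.
# Check the tangent-line inequality at sampled point pairs.
points = [0.5, 1.0, 2.0, 5.0]
for x in points:
    for y in points:
        assert math.log(y) - math.log(x) <= (1 / x) * (y - x) + 1e-12
```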

If f is a continuous and differentiable concave function on a convex set U and x_0 is in U, then Df(x_0)(y - x_0) <= 0 implies f(y) <= f(x_0); if this holds for all y in U, then x_0 is a global maximizer of f.

Proof: we know that

    f(y) - f(x_0) <= Df(x_0)(y - x_0) <= 0,

hence f(y) - f(x_0) <= 0.

Let f be a continuous, twice differentiable function whose domain is a convex open subset U of R^n. If f is a concave function on U and Df(x_0) = 0 for some x_0, then x_0 is a global maximum of f on U.

A continuous, twice differentiable function f on an open convex subset U of R^n is concave on U if and only if the Hessian D^2 f(x) is negative semidefinite for all x in U. The function f is convex if and only if D^2 f(x) is positive semidefinite for all x in U.

Second-order sufficient conditions for a global maximum (minimum) in R^n: Suppose that x* is a critical point of a function f(x) with continuous first- and second-order partial derivatives on R^n. Then x* is:

1. a global maximizer (minimizer) for f(x) if D^2 f(x) is negative (positive) semidefinite on R^n;
2. a strict global maximizer (minimizer) for f(x) if D^2 f(x) is negative (positive) definite on R^n.

The property that critical points of concave functions are global maximizers is an important one in economic theory. For example, many economic principles, such as "marginal rate of substitution equals the price ratio" or "marginal revenue equals marginal cost", are simply the first-order necessary conditions of the corresponding maximization problem, as we will see. Ideally, an economist would like such a rule also to be a sufficient condition guaranteeing that utility or profit is being maximized, so that it can provide a guideline for economic behaviour. This situation does indeed occur when the objective function is concave.

Lecture 3: Concavity, Convexity, Quasi-concavity and Economic Applications

Recall: Definition: A set U is a convex set if for all x in U and y in U and for all t in [0, 1]:

    t x + (1 - t) y in U.

Concave and convex functions need to have convex sets as their domain.

Recall: A real-valued function f defined on a convex subset U of R^n is concave if for all x, y in U and for all t in [0, 1]:

    f(t x + (1 - t) y) >= t f(x) + (1 - t) f(y).

Why are concave functions so useful in economics? Let f_1, ..., f_k be concave functions, each defined on the same convex subset U of R^n, and let a_1, a_2, ..., a_k be positive numbers. Then

    a_1 f_1 + a_2 f_2 + ... + a_k f_k

is a concave function on U. (Proof: in class.)

Consider the problem of maximizing profit for a firm whose production function is y = g(x), where y denotes output and x denotes the input bundle. If p denotes the price of output and w_i is the cost per unit of input i, then the firm's profit function is

    Π(x) = p g(x) - (w_1 x_1 + w_2 x_2 + ... + w_n x_n).

The profit function is a concave function if the production function is concave. This is because -(w_1 x_1 + w_2 x_2 + ... + w_n x_n) is concave (it is linear), g is concave, and the result above applies.
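A concrete numerical instance of this profit problem may help; the numbers here are my own illustration, not from the notes: g(x) = x^0.5 (concave), p = 4, w = 1 with a single input.

```python
# Profit maximization with a concave production function g(x) = x**0.5.
# The FOC p * g'(x) = w gives p / (2 * x**0.5) = w, i.e. x* = (p/(2w))**2.

p, w = 4.0, 1.0

def profit(x):
    return p * x ** 0.5 - w * x

x_star = (p / (2 * w)) ** 2

# Since profit is concave, the FOC is sufficient: no grid point does better.
assert all(profit(i / 10) <= profit(x_star) + 1e-9 for i in range(200))
print(x_star, profit(x_star))   # 4.0 4.0
```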

The first-order conditions

    p ∂g/∂x_i = w_i for i = 1, 2, ..., n

are both necessary and sufficient for an interior profit maximizer.

Quasiconcave and quasiconvex functions

Definition: a level set of a function f defined on U in R^n is

    X_a^f = {x in U : f(x) = a}.

This could be a point, a curve, a plane.

Definition: a function f defined on a convex subset U of R^n is quasiconcave if for every real number a,

    C_a^+ = {x in U : f(x) >= a}

is a convex set. Thus, the level sets of the function bound convex subsets from below.

Definition: a function f is quasiconvex if for every real number a,

    C_a^- = {x in U : f(x) <= a}

is a convex set. Thus, the level sets of the function bound convex subsets from above.

Every concave function is quasiconcave and every convex function is quasiconvex.

Proof: Let x and y be two points in C_a^+, so that f(x) >= a and f(y) >= a. Then

    f(t x + (1 - t) y) >= t f(x) + (1 - t) f(y) >= t a + (1 - t) a = a.

So t x + (1 - t) y is in C_a^+, and hence this set is convex. We have shown that if f is concave, it is also quasi-concave. Try to show that every convex function is quasi-convex.

This is the second advantage of concave functions in economics: concave functions are quasi-concave, and quasi-concavity is simply a desirable property when we talk about economic objective functions such as preferences (why?). The property that the set above any level set of a concave function is a convex set is a natural requirement for utility and production functions. For example, consider an indifference curve C of the concave utility function U, and take two bundles on this indifference curve. The set of bundles which are preferred to them is a convex set; in particular, the bundles that mix their contents are in this preferred set. Then, given any two bundles, a consumer with a concave utility function will always prefer a mixture of the bundles to either of them.

A more important advantage of this shape of the indifference curve is that it displays a diminishing marginal rate of substitution. As one moves left to right along the indifference curve C, increasing consumption of good 1, the consumer is willing to give up more and more units of good 1 to gain an additional unit of good 2. This is a property of concave utility functions because each level set forms the boundary of a convex region.

Any (positive) monotonic transformation of a concave function is quasiconcave. Let y = f(x) be an increasing function on R^1. It is easy to see graphically that the function is both quasiconcave and quasiconvex. The same applies for a

decreasing function. A single-peaked function is quasiconcave.

Consider the following utility function: Q(x, y) = min{x, y}. The region above and to the right of any of this function's level sets is a convex set, and hence Q is quasi-concave.

Let f be a function defined on a convex set U in R^n. Then the following statements are equivalent:

(i) f is a quasiconcave function on U;
(ii) for all x, y in U and t in [0, 1], f(x) >= f(y) implies f(t x + (1 - t) y) >= f(y);
(iii) for all x, y in U and t in [0, 1],

    f(t x + (1 - t) y) >= min{f(x), f(y)}.

You will prove this in class.
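Condition (iii) can be spot-checked for Q(x, y) = min{x, y} by random sampling; a sampling check of this kind suggests quasiconcavity but is of course not a proof.

```python
import random

# For Q(x, y) = min(x, y), check that Q at any convex combination is
# at least the smaller of the two endpoint values (condition (iii)).

def Q(p):
    return min(p)

random.seed(0)
for _ in range(1000):
    a = (random.uniform(-5, 5), random.uniform(-5, 5))
    b = (random.uniform(-5, 5), random.uniform(-5, 5))
    t = random.random()
    mix = (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])
    assert Q(mix) >= min(Q(a), Q(b)) - 1e-12
```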

Lecture 4: Constrained Optimization I: The Lagrangian

We now analyze optimal allocation in the presence of scarce resources; after all, this is what economics is all about. Consider the following problem:

    max_{x_1, x_2, ..., x_n} f(x_1, x_2, ..., x_n)

where (x_1, x_2, ..., x_n) in R^n must satisfy

    g_1(x_1, x_2, ..., x_n) <= b_1, ..., g_k(x_1, x_2, ..., x_n) <= b_k

and

    h_1(x_1, x_2, ..., x_n) = c_1, ..., h_m(x_1, x_2, ..., x_n) = c_m.

The function f is called the objective function, while the g and h functions are the constraint functions: inequality constraints (g) and equality constraints (h).

An example: utility maximization:

    max_{x_1, x_2, ..., x_n} U(x_1, x_2, ..., x_n)

subject to

    p_1 x_1 + p_2 x_2 + ... + p_n x_n <= I,
    x_1 >= 0, x_2 >= 0, ..., x_n >= 0.

In this case we can treat the latter constraints as -x_i <= 0.

Equality constraints:

The simple case of two variables and one equality constraint:

    max_{x_1, x_2} f(x_1, x_2) subject to p_1 x_1 + p_2 x_2 = I.

Geometrical representation: draw the constraint on the (x_1, x_2) plane, and draw a representative sample of level curves of the objective function f. The goal is to find the highest-valued level curve of f which meets the constraint set. It cannot cross the constraint set; it therefore must be tangent to it.

We need to find the slope of a level set of f:

    f(x_1, x_2) = a.

Use total differentiation:

    (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 = 0.

Then:

    dx_2/dx_1 = -(∂f/∂x_1) / (∂f/∂x_2).

So the slope of the level set of f at x* is -(∂f/∂x_1)(x*) / (∂f/∂x_2)(x*)

and the slope of the constraint at x* is -(∂h/∂x_1)(x*) / (∂h/∂x_2)(x*); hence x* satisfies

    (∂f/∂x_1)(x*) / (∂f/∂x_2)(x*) = (∂h/∂x_1)(x*) / (∂h/∂x_2)(x*)

or

    (∂f/∂x_1)(x*) / (∂h/∂x_1)(x*) = (∂f/∂x_2)(x*) / (∂h/∂x_2)(x*).

Let us denote by μ this common value:

    (∂f/∂x_1)(x*) / (∂h/∂x_1)(x*) = (∂f/∂x_2)(x*) / (∂h/∂x_2)(x*) = μ,

and then we can rewrite these two equations as

    (∂f/∂x_1)(x*) - μ (∂h/∂x_1)(x*) = 0
    (∂f/∂x_2)(x*) - μ (∂h/∂x_2)(x*) = 0.

We therefore have three equations in three unknowns:

    (∂f/∂x_1)(x*) - μ (∂h/∂x_1)(x*) = 0
    (∂f/∂x_2)(x*) - μ (∂h/∂x_2)(x*) = 0
    h(x_1*, x_2*) = c.

We can then form the Lagrangian function:

    L(x_1, x_2, μ) = f(x_1, x_2) - μ (h(x_1, x_2) - c)

and then find the critical points of L by setting

    ∂L/∂x_1 = 0,  ∂L/∂x_2 = 0,  ∂L/∂μ = 0,

and this gives us the same equations as above. The variable μ is called the Lagrange multiplier. We have reduced a constrained problem in two variables to an unconstrained problem in three variables.

A caveat: it cannot be that (∂h/∂x_1)(x*) = (∂h/∂x_2)(x*) = 0. Thus, the constraint qualification is that x* is not a critical point of h.

Formally, let f and h be continuously differentiable functions of two variables. Suppose that x* = (x_1*, x_2*) is a solution to

    max f(x_1, x_2) subject to h(x_1, x_2) = c

and that x* is not a critical point of h. Then there is a real number μ* such that (x_1*, x_2*, μ*) is a critical point of the Lagrange function

    L(x_1, x_2, μ) = f(x_1, x_2) - μ (h(x_1, x_2) - c).

An example:

    max_{x_1, x_2} x_1 x_2 subject to x_1 + 4 x_2 = 16.

The constraint qualification is satisfied.

    L(x_1, x_2, μ) = x_1 x_2 - μ (x_1 + 4 x_2 - 16),

and the first-order conditions are:

    x_2 - μ = 0
    x_1 - 4μ = 0
    x_1 + 4 x_2 - 16 = 0,

and the only solution is x_1 = 8, x_2 = 2, μ = 2. A similar analysis easily extends to the case of several equality constraints.
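This worked example can be verified numerically; a small sketch:

```python
# max x1*x2 subject to x1 + 4*x2 = 16, with solution x1 = 8, x2 = 2, mu = 2.

x1, x2, mu = 8.0, 2.0, 2.0

# First-order conditions of L = x1*x2 - mu*(x1 + 4*x2 - 16):
assert x2 - mu == 0
assert x1 - 4 * mu == 0
assert x1 + 4 * x2 - 16 == 0

# Points on the constraint are (16 - 4*s, s); none does better.
assert all((16 - 4 * t / 25) * (t / 25) <= x1 * x2 + 1e-9 for t in range(101))
```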

Inequality constraints:

With equality constraints, we had the following equations:

    (∂f/∂x_1)(x*) - μ (∂h/∂x_1)(x*) = 0
    (∂f/∂x_2)(x*) - μ (∂h/∂x_2)(x*) = 0,

or

    (∂f/∂x_1)(x*) / (∂f/∂x_2)(x*) = (∂h/∂x_1)(x*) / (∂h/∂x_2)(x*),

or

    ∇f(x*) = μ ∇h(x*),

and we had no restrictions on μ.

The simple case of two variables and one inequality constraint:

    max_{x_1, x_2} f(x_1, x_2) subject to g(x_1, x_2) <= b.

Graphical representation: in the graph, the solution is where the level curve of f meets the boundary of the constraint set. This means that the constraint is binding; there is a tangency at the solution. So when the constraint is binding, is it the same as an equality constraint? Not quite: when we look graphically at the constrained optimization problem, even when the constraint is binding, we have a restriction on the Lagrange

multiplier. The gradients are again in line, so that one is a multiple of the other:

    ∇f(x*) = λ ∇g(x*).

But now the sign of λ is important: the gradients must point in the same direction, because otherwise we could increase f and still satisfy the constraint. This means that λ >= 0. This is the main difference between inequality and equality constraints.

We still form the Lagrangian:

    L(x_1, x_2, λ) = f(x_1, x_2) - λ (g(x_1, x_2) - b)

and then find the critical point of L by setting:

    ∂L/∂x_1 = ∂f/∂x_1 - λ ∂g/∂x_1 = 0
    ∂L/∂x_2 = ∂f/∂x_2 - λ ∂g/∂x_2 = 0.

But what about ∂L/∂λ?

Suppose that at the optimal solution g(x_1*, x_2*) < b. At this point the constraint is not binding: the optimal solution is in the interior. The point x* of the optimal solution is then a local maximum (it is an unconstrained maximum). Thus:

    (∂f/∂x_1)(x*) = (∂f/∂x_2)(x*) = 0.

We can still use the Lagrangian, provided that we set λ = 0! In other words, either the constraint is binding, so that g(x_1*, x_2*) - b = 0, or it is not binding and then λ = 0. In short, the following complementary slackness condition has to be satisfied:

    λ (g(x_1*, x_2*) - b) = 0.

Lecture 5: Constrained Optimization II: Inequality Constraints

We describe formally the constrained optimization problem with inequality constraints. Let f and g be continuously differentiable functions of two variables. Suppose that x* = (x_1*, x_2*) is a solution to

    max f(x_1, x_2) subject to g(x_1, x_2) <= b

and that x* is not a critical point of g if g(x_1*, x_2*) = b. Then, given the Lagrange function

    L(x_1, x_2, λ) = f(x_1, x_2) - λ (g(x_1, x_2) - b),

there is a real number λ* such that:

    ∂L(x*, λ*)/∂x_1 = 0
    ∂L(x*, λ*)/∂x_2 = 0
    λ* (g(x_1*, x_2*) - b) = 0
    λ* >= 0
    g(x_1*, x_2*) <= b.

An example: ABC is a perfectly competitive, profit-maximizing firm, producing output y from input x according to y = x^{0.5}. The price of output is 2, and the price of input is 1. Negative levels of x are impossible. Also, the firm cannot buy more than a > 0 units of input. The firm's maximization problem is therefore

    max f(x) = 2 x^{0.5} - x subject to g(x) = x <= a

(and x >= 0, which we will ignore for now). The Lagrangian is:

    L(x, λ) = 2 x^{0.5} - x - λ (x - a)

The first-order condition is:

    x^{-0.5} - 1 - λ = 0.

Let us write down all the information that we have:

    x^{-0.5} - 1 - λ = 0
    λ (x - a) = 0
    λ >= 0
    x <= a

and solve the system of equations. It is easiest to divide the analysis into two cases: λ > 0 and λ = 0.

Suppose that λ > 0. This means that the constraint is binding. Then we know that x = a. The full solution is therefore:

    x = a,  λ = a^{-0.5} - 1.

When is this solution viable? We need to keep consistency, so if we assume that λ > 0, then we need to ensure it:

    a^{-0.5} - 1 > 0  ⟺  a < 1.

What if λ = 0? This means that the constraint is not binding. From the first-order condition:

    x^{-0.5} - 1 = 0  ⟹  x = 1.

The full solution is therefore x = 1, λ = 0, and this solution holds for all a >= 1.
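The case analysis above can be written as a small function of the input cap a; the helper name is mine.

```python
# Kuhn-Tucker case analysis for max 2*x**0.5 - x subject to x <= a:
# either the constraint binds (a < 1) or it is slack (a >= 1).

def solve_firm(a):
    """Return (x, lam) for the firm's problem with input cap a > 0."""
    if a < 1:
        return a, a ** -0.5 - 1     # binding: x = a, lam = a**-0.5 - 1 > 0
    return 1.0, 0.0                 # slack: interior optimum x = 1

print(solve_firm(0.25))   # (0.25, 1.0)
print(solve_firm(4.0))    # (1.0, 0.0)
```

In both cases complementary slackness λ(x - a) = 0 holds by construction.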

Several inequality constraints:

The generalization is easy; however, now some constraints may be binding and some may not be.

An example: we have to maximize f(x, y, z) = xyz subject to the constraints that x + y + z <= 1 and that x >= 0, y >= 0 and z >= 0. The Lagrangian is

    L = xyz - λ_1 (x + y + z - 1) + λ_2 x + λ_3 y + λ_4 z.

Solving the Lagrange problem will give us a set of critical points; the optimal solution will be a subset of this. But we can already restrict this set of critical points, because it is obvious that λ_2 = λ_3 = λ_4 = 0. If one of these were positive, for example λ_2 > 0, then it must mean, by complementary slackness, that x = 0. But then the value of xyz is 0, and obviously we can do better than that (for example, with x = y = z = .1). Thus, the non-negativity constraints cannot bind.

This leaves us with a problem with one constraint, and we have to decide whether λ_1 > 0 or λ_1 = 0. But obviously the constraint must bind: if x + y + z < 1, we can increase one of the variables, still satisfy the constraint, and increase the value of the function. From the first-order conditions

    yz - λ_1 = 0
    xz - λ_1 = 0
    xy - λ_1 = 0

we then find that yz = xz = xy, and hence it follows that x = y = z = 1/3 at the optimal solution.
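A coarse grid search confirms that the symmetric point is best among feasible points with the constraint binding; a sampling check, not a proof.

```python
# max xyz subject to x + y + z <= 1, x, y, z >= 0: compare the value
# at (1/3, 1/3, 1/3) against a grid on the binding constraint.

best = (1 / 3) ** 3
n = 20
for i in range(n + 1):
    for j in range(n + 1 - i):
        x, y = i / n, j / n
        z = 1 - x - y          # the constraint binds at an optimum
        assert x * y * z <= best + 1e-12
```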

We have looked at max f(x, y) subject to g(x, y) <= b, and we have characterized necessary conditions for a maximum: if x* is a solution to a constrained optimization problem (it maximizes f subject to some constraints), it is also a critical point of the Lagrangian.

We find the critical points of the Lagrangian. Can we then say that these are the solutions of the constrained optimization problem? In other words: can we say that these are maximizers of the Lagrangian, and if they are maximizers of the Lagrangian, are they also maximizers of f (subject to the constraint)?

To determine the answer, let (x*, y*, λ) satisfy all necessary conditions for a maximum. It is clear that if (x*, y*) is a maximizer of the Lagrangian, it also maximizes f. To see this, note that λ [g(x*, y*) - b] = 0. Thus,

    f(x*, y*) = f(x*, y*) - λ (g(x*, y*) - b).

By λ >= 0 and g(x, y) <= b for all other (x, y),

    f(x, y) - λ (g(x, y) - b) >= f(x, y).

Since (x*, y*) maximizes the Lagrangian, for all other (x, y):

    f(x*, y*) - λ (g(x*, y*) - b) >= f(x, y) - λ (g(x, y) - b),

which implies that

    f(x*, y*) >= f(x, y).

So if (x*, y*) maximizes the Lagrangian, it also maximizes f(x, y) subject to g(x, y) <= b.

Recall the main results from unconstrained optimization:

If f is a concave function defined on a convex subset X in R^n and x_0 is a point in the interior at which Df(x_0) = 0, then x_0 maximizes f(x) on X, that is, f(x) <= f(x_0) for all x.

You have shown in class that in the constrained optimization problem, if f is concave and g is convex, then the Lagrangian function is also concave. This means that we can use first-order conditions.

The Kuhn-Tucker Theorem: Consider the problem of maximizing f(x) subject to the constraint that g(x) <= b. Assume that f and g are differentiable, f is concave, g is convex, and that the constraint qualification holds. Then x* solves this problem if and only if there is a scalar λ such that

    ∂L(x*, λ)/∂x_i = ∂f(x*)/∂x_i - λ ∂g(x*)/∂x_i = 0 for all i
    λ >= 0
    g(x*) <= b
    λ [b - g(x*)] = 0.

Mechanically (that is, without thinking...), one can solve constrained optimization problems in the following way. Form the Lagrangian

    L(x, λ) = f(x) - λ (g(x) - b).

Suppose that there exists λ* such that the first-order conditions are satisfied, that is:

    ∂L(x*, λ*)/∂x_i = 0 for all i
    λ* >= 0
    λ_i* (g_i(x*) - b_i) = 0.

Assume that g_1 to g_e are binding and that g_{e+1} to g_m are not binding. Write (g_1, ..., g_e) as g_E. Assume also that the Hessian of L with respect to x at (x*, λ*) is negative definite on the linear constraint set {v : Dg_E(x*) v = 0}, that is:

    v != 0, Dg_E(x*) v = 0  ⟹  v^T (D_x^2 L(x*, λ*)) v < 0.

Then x* is a strict local constrained max of f on the constraint set. To check this condition, we form the bordered Hessian:

    Q = [[0,          Dg_E(x*)        ],
         [Dg_E(x*)^T, D_x^2 L(x*, λ*)]].

If the last n - e leading principal minors of Q alternate in sign, with the sign of the determinant of the largest matrix the same as the sign of (-1)^n, then the sufficient second-order conditions hold for a candidate point to be a solution of the constrained maximization problem.

Lecture 6: Constrained Optimization III: Maximum Value Functions

Profit functions and indirect utility functions are examples of maximum value functions, whereas cost functions and expenditure functions are minimum value functions.

Maximum value function, a definition: if x(b) solves the problem of maximizing f(x) subject to g(x) <= b, the maximum value function is v(b) = f(x(b)). The maximum value function is non-decreasing.

Maximum value functions and the interpretation of the Lagrange multiplier

Consider the problem of maximizing f(x_1, x_2, ..., x_n) subject to the k inequality constraints

    g_1(x_1, ..., x_n) <= b_1, ..., g_k(x_1, ..., x_n) <= b_k,

where b = (b_1, ..., b_k). Let x_1*(b*), ..., x_n*(b*) denote the optimal solution and let λ_1(b*), ..., λ_k(b*) be the corresponding Lagrange multipliers. Suppose that as b varies near b*, x_1*(b), ..., x_n*(b) and λ_1(b), ..., λ_k(b) are differentiable functions and that x*(b*) satisfies the constraint qualification. Then for each j = 1, 2, ..., k:

    λ_j(b*) = (∂/∂b_j) f(x*(b*)).

Proof: For simplicity, we do here the case of a single equality constraint, with f and h being functions of two variables. The Lagrangian is

    L(x, y, λ; b) = f(x, y) - λ (h(x, y) - b)

The solution satisfies:

0 = ∂L/∂x (x*(b), y*(b), λ*(b); b) = f_x(x*(b), y*(b)) − λ*(b) h_x(x*(b), y*(b)),
0 = ∂L/∂y (x*(b), y*(b), λ*(b); b) = f_y(x*(b), y*(b)) − λ*(b) h_y(x*(b), y*(b)),

for all b. Furthermore, since h(x*(b), y*(b)) = b for all b,

h_x(x*, y*) ∂x*/∂b + h_y(x*, y*) ∂y*/∂b = 1

for every b. Therefore, using the chain rule, we have:

d f(x*(b), y*(b))/db = f_x(x*, y*) ∂x*/∂b + f_y(x*, y*) ∂y*/∂b
                     = λ*(b)[h_x(x*, y*) ∂x*/∂b + h_y(x*, y*) ∂y*/∂b]
                     = λ*(b).

The economic interpretation of the multiplier is that of a shadow price. For example, in the application to a firm maximizing profits, it tells us how valuable another unit of input would be to the firm's profits, or how much the maximum value changes for the firm when the constraint is relaxed. In other words, it is the maximum amount the firm would be willing to pay to acquire another unit of input. Recall that

L(x, y, λ; b) = f(x, y) − λ(g(x, y) − b),

so that

d f(x*(b), y*(b))/db = λ*(b) = ∂L(x*(b), y*(b), λ*(b); b)/∂b.
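The shadow-price result λ*(b) = dv/db is easy to confirm numerically. A minimal sketch under my own example (not from the notes): for max ln x + ln y subject to x + y = b, the solution is x* = y* = b/2, so v(b) = 2 ln(b/2) and the multiplier from the FOC 1/x = λ is λ*(b) = 2/b:

```python
import math

def v(b):
    # maximum value function of: max ln x + ln y  s.t.  x + y = b
    x = y = b / 2                 # the optimal solution
    return math.log(x) + math.log(y)

def lam(b):
    return 2 / b                  # multiplier implied by the FOC 1/x* = lam

b, h = 3.0, 1e-6
dv_db = (v(b + h) - v(b - h)) / (2 * h)   # central finite difference of v
print(abs(dv_db - lam(b)) < 1e-6)          # -> True: dv/db matches lam(b)
```

Relaxing the constraint by one unit raises the maximized value by approximately λ*(b), exactly as the derivation above states.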

Hence, what we have found above is simply a particular case of the envelope theorem, which says that

d f(x*(b); b)/db = ∂L(x*(b), λ*(b); b)/∂b.

Maximum value functions and the Envelope theorem: Consider the problem of maximizing f(x_1, x_2, ..., x_n) subject to the k equality constraints

h_1(x_1, x_2, ..., x_n; c) = 0, ..., h_k(x_1, x_2, ..., x_n; c) = 0.

Let x*_1(c), ..., x*_n(c) denote the optimal solution and let μ_1(c), ..., μ_k(c) be the corresponding Lagrange multipliers. Suppose that x*_1(c), ..., x*_n(c) and μ_1(c), ..., μ_k(c) are differentiable functions and that x*(c) satisfies the constraint qualification. Then:

d f(x*(c); c)/dc = ∂L(x*(c), μ(c); c)/∂c.

Note: if h_j(x_1, x_2, ..., x_n; c) = 0 can be expressed as h_j(x_1, x_2, ..., x_n) − c = 0, then we are back at the previous case, in which we found that

d f(x*(c); c)/dc = ∂L(x*(c), μ(c); c)/∂c = μ_j(c).

But the statement is more general.

We will prove this for the simple case of an unconstrained problem. Let φ(x; a) be a continuous function of x ∈ Rⁿ and the scalar a. For any a, consider the maximization problem max_x φ(x; a). Let x*(a) be the solution of this problem and a continuous and differentiable function of a. We will show that

d φ(x*(a); a)/da = ∂φ(x*(a); a)/∂a.
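The more general case, where the parameter enters the constraint rather than shifting its level, can also be checked numerically. A sketch with my own example (not from the notes): max x·y subject to h(x, y, c) = x + c·y − 4 = 0 has solution x* = 2, y* = 2/c, μ* = 2/c, so v(c) = 4/c while ∂L/∂c = −μ·y = −4/c²:

```python
# Envelope theorem with the parameter c inside the constraint:
# max x*y  s.t.  x + c*y - 4 = 0. The FOCs give y = mu and x = mu*c,
# hence x* = 2, y* = mu* = 2/c and v(c) = x* * y* = 4/c.

def v(c):
    return 2 * (2 / c)            # maximum value function f(x*(c), y*(c))

def dL_dc(c):
    mu = y = 2 / c
    return -mu * y                # dL/dc = -mu * dh/dc = -mu * y

c, eps = 2.0, 1e-6
dv_dc = (v(c + eps) - v(c - eps)) / (2 * eps)   # finite difference of v(c)
print(abs(dv_dc - dL_dc(c)) < 1e-6)              # -> True
```

Both sides equal −1 at c = 2, matching d f(x*(c); c)/dc = ∂L/∂c even though c is not a simple constraint level here.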

We compute via the chain rule that

d φ(x*(a); a)/da = Σ_i ∂φ/∂x_i (x*(a); a) · ∂x*_i/∂a + ∂φ/∂a (x*(a); a) = ∂φ/∂a (x*(a); a),

since ∂φ/∂x_i (x*(a); a) = 0 for all i by the first order conditions. Intuitively, when we are already at a maximum, slightly changing the parameters of the problem or the constraints does not affect the value through changes in the solution x*(a), because ∂φ/∂x_i (x*(a); a) = 0. When we use the envelope theorem, though, we have to make sure that we do not jump to another solution in a discrete manner.
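The unconstrained argument above can be illustrated directly. A minimal sketch with a made-up objective (my own choice): for φ(x; a) = −(x − a)² + a², the FOC gives x*(a) = a, and the total derivative of the value through both channels equals the partial derivative holding x fixed at x*(a):

```python
def phi(x, a):
    return -(x - a) ** 2 + a ** 2

def x_star(a):
    return a                      # FOC: -2(x - a) = 0  =>  x* = a

a, h = 1.5, 1e-6
# total derivative: a moves AND the maximizer x*(a) re-optimizes
total = (phi(x_star(a + h), a + h) - phi(x_star(a - h), a - h)) / (2 * h)
# partial derivative: a moves while x is held fixed at x*(a)
partial = (phi(x_star(a), a + h) - phi(x_star(a), a - h)) / (2 * h)
print(abs(total - partial) < 1e-6)   # -> True: both equal 2a = 3.0
```

The re-optimization of x contributes nothing to first order, exactly because ∂φ/∂x vanishes at the maximum.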

Comparative Statics

More generally in economic theory, once we pin down an equilibrium or a solution to an optimization problem, we are interested in how the exogenous variables change the value of the endogenous variables. We have been using the Implicit Function Theorem (IFT) throughout without stating it or explaining why we can use it. The IFT allows us to be assured that a set of simultaneous equations:

F_1(y_1, ..., y_n; x_1, ..., x_m) = 0
F_2(y_1, ..., y_n; x_1, ..., x_m) = 0
...
F_n(y_1, ..., y_n; x_1, ..., x_m) = 0

will define a set of implicit functions:

y_1 = f_1(x_1, ..., x_m)
y_2 = f_2(x_1, ..., x_m)
...
y_n = f_n(x_1, ..., x_m)

In other words, what the conditions of the IFT serve to do is to assure that the n equations can in principle be solved for the n variables y_1, ..., y_n, even if we may not be able to obtain the solution in an explicit form.

Given the set of simultaneous equations above, if the functions F_1, ..., F_n all have continuous partial derivatives with respect to all the x and y variables, and if at a point (y*, x*) that solves the set of simultaneous equations the determinant of the (n × n) Jacobian with respect to the y-variables is not 0:

      | ∂F_1/∂y_1  ∂F_1/∂y_2  ...  ∂F_1/∂y_n |
|J| = | ∂F_2/∂y_1  ∂F_2/∂y_2  ...  ∂F_2/∂y_n | ≠ 0,
      |    ...        ...      ...     ...   |
      | ∂F_n/∂y_1  ∂F_n/∂y_2  ...  ∂F_n/∂y_n |

then there exists an m-dimensional neighbourhood of x* in which the variables y_1, ..., y_n are functions of x_1, ..., x_m according to the f_i functions defined above. These functions are satisfied at x* and y*. They also satisfy the set of simultaneous equations for every vector x in the neighbourhood, thereby giving to the set of simultaneous equations above the status of a set of identities in this neighbourhood. Moreover, the implicit functions f_i are continuous and have continuous partial derivatives with respect to all the x variables.

It is then possible to find the partial derivatives of the implicit functions without having to solve them for the y variables. Taking advantage of the fact that in the neighbourhood of the solution the set of equations has the status of identities, we can take the total differential of each equation and write dF_j = 0. When considering only dx_1 ≠ 0 and setting the rest dx_i = 0, the result, in matrix notation, is (we will go through an example later in class):

[ ∂F_1/∂y_1  ...  ∂F_1/∂y_n ]   [ ∂y_1/∂x_1 ]      [ ∂F_1/∂x_1 ]
[ ∂F_2/∂y_1  ...  ∂F_2/∂y_n ] · [ ∂y_2/∂x_1 ]  = − [ ∂F_2/∂x_1 ]
[    ...     ...     ...    ]   [    ...    ]      [    ...    ]
[ ∂F_n/∂y_1  ...  ∂F_n/∂y_n ]   [ ∂y_n/∂x_1 ]      [ ∂F_n/∂x_1 ]
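This linear system is exactly what one solves in practice. A sketch with a hypothetical two-equation system of my own (F_1 = y_1² + y_2 − x = 0, F_2 = y_1 + y_2² − 2 = 0, which holds at y_1 = y_2 = 1, x = 2):

```python
import numpy as np

# Jacobian of (F1, F2) w.r.t. (y1, y2), evaluated at the solution (1, 1, 2):
J = np.array([[2.0, 1.0],      # [dF1/dy1, dF1/dy2] = [2*y1, 1]
              [1.0, 2.0]])     # [dF2/dy1, dF2/dy2] = [1, 2*y2]
Fx = np.array([-1.0, 0.0])     # [dF1/dx, dF2/dx]

assert abs(np.linalg.det(J)) > 1e-12        # the IFT's Jacobian condition
dy_dx = np.linalg.solve(J, -Fx)             # solves J * (dy/dx) = -dF/dx
print(np.allclose(dy_dx, [2 / 3, -1 / 3]))  # -> True
```

The IFT delivers ∂y_1/∂x = 2/3 and ∂y_2/∂x = −1/3 at this point without ever solving the system explicitly for y_1 and y_2.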

Finally, since |J| is non-zero, there is a unique nontrivial solution to this linear system, which by Cramer's rule can be identified in the following way:

∂y_j/∂x_1 = |J_j| / |J|,

where J_j is the matrix obtained by replacing the j-th column of J with the right-hand-side vector. This is for general problems. Optimization problems have a unique feature: the condition that indeed |J| ≠ 0 holds (what is J? It is simply the matrix of second partial derivatives of L, or what we call the bordered Hessian). We will see that later on. This means that we can indeed take the maximum value function, or a set of equilibrium conditions, totally differentiate them, and find how the endogenous variables change with the exogenous ones in the neighbourhood of the solution. For example, for the case of optimization with one equality constraint, the system

F_1(λ, x, y; b) = 0
F_2(λ, x, y; b) = 0
F_3(λ, x, y; b) = 0

is given by

b − g(x, y) = 0
f_x − λ g_x = 0
f_y − λ g_y = 0.

We need to ensure that the Jacobian is not zero, and then we can use total differentiation.
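Cramer's rule itself is mechanical to apply. A small generic sketch (the helper name is my own) that builds each J_j by column replacement and checks the answer against a direct linear solve on a 2×2 example:

```python
import numpy as np

def cramer(J, b):
    """Solve J y = b via Cramer's rule: y_j = det(J_j) / det(J)."""
    detJ = np.linalg.det(J)          # must be non-zero for a unique solution
    y = np.empty(len(b))
    for j in range(len(b)):
        Jj = J.copy()
        Jj[:, j] = b                 # replace column j with the RHS vector
        y[j] = np.linalg.det(Jj) / detJ
    return y

J = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
print(np.allclose(cramer(J, b), np.linalg.solve(J, b)))  # -> True
```

For numerical work a direct solve is preferred, but Cramer's rule makes each ∂y_j/∂x_1 an explicit ratio of determinants, which is what the comparative-statics formulas use.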

Coming back to the condition about the Jacobian, we need to ensure that:

      | ∂F_1/∂λ  ∂F_1/∂x  ∂F_1/∂y |
|J| = | ∂F_2/∂λ  ∂F_2/∂x  ∂F_2/∂y | ≠ 0,
      | ∂F_3/∂λ  ∂F_3/∂x  ∂F_3/∂y |

or:

| 0    g_x              g_y             |
| g_x  f_xx − λ g_xx    f_xy − λ g_xy   | ≠ 0,
| g_y  f_xy − λ g_xy    f_yy − λ g_yy   |

but the determinant of J is that of the bordered Hessian H. Whenever the sufficient second order conditions are satisfied, we know that the determinant of the bordered Hessian is not zero (in fact it is positive). Now we can totally differentiate the equations:

g_x dx + g_y dy − 1 db = 0
(f_xx − λ g_xx)dx + (f_xy − λ g_xy)dy − g_x dλ = 0
(f_yx − λ g_yx)dx + (f_yy − λ g_yy)dy − g_y dλ = 0,

where at the equilibrium solution one can then solve for ∂x/∂b, ∂y/∂b, ∂λ/∂b.
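A worked numerical sketch of this procedure (my own example, not from the notes): for max x·y subject to x + y = b, the solution is x* = y* = λ* = b/2, so every comparative-statics derivative with respect to b should equal 1/2. Stacking the totally differentiated equations with unknowns (dx, dy, dλ) per unit db:

```python
import numpy as np

# max x*y s.t. x + y = b at x* = y* = lam* = b/2: here f_xy = 1 and all
# other second derivatives (f_xx, f_yy, g_xx, g_xy, g_yy) are zero, so the
# totally differentiated system in (dx, dy, dlam), with db = 1, is:
#   g_x dx + g_y dy                  = db  ->  dx + dy      = 1
#   (f_xy - lam g_xy) dy - g_x dlam  = 0   ->  dy - dlam    = 0
#   (f_yx - lam g_yx) dx - g_y dlam  = 0   ->  dx - dlam    = 0
A = np.array([[1.0, 1.0,  0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0]])
rhs = np.array([1.0, 0.0, 0.0])
dx_db, dy_db, dlam_db = np.linalg.solve(A, rhs)
print(np.allclose([dx_db, dy_db, dlam_db], [0.5, 0.5, 0.5]))  # -> True
```

The linear solve recovers ∂x/∂b = ∂y/∂b = ∂λ/∂b = 1/2, which matches differentiating the closed-form solution x* = y* = λ* = b/2 directly.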


Week 7: The Consumer (Malinvaud, Chapter 2 and 4) / Consumer November Theory 1, 2015 (Jehle and 1 / Reny, 32 Week 7: The Consumer (Malinvaud, Chapter 2 and 4) / Consumer Theory (Jehle and Reny, Chapter 1) Tsun-Feng Chiang* *School of Economics, Henan University, Kaifeng, China November 1, 2015 Week 7: The Consumer

More information

Final Exam Advanced Mathematics for Economics and Finance

Final Exam Advanced Mathematics for Economics and Finance Final Exam Advanced Mathematics for Economics and Finance Dr. Stefanie Flotho Winter Term /5 March 5 General Remarks: ˆ There are four questions in total. ˆ All problems are equally weighed. ˆ This is

More information

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22.

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22. 61 Matrices Definition: A Matrix A is a rectangular array of the form A 11 A 12 A 1n A 21 A 22 A 2n A m1 A m2 A mn The size of A is m n, where m is the number of rows and n is the number of columns The

More information

EconS 301. Math Review. Math Concepts

EconS 301. Math Review. Math Concepts EconS 301 Math Review Math Concepts Functions: Functions describe the relationship between input variables and outputs y f x where x is some input and y is some output. Example: x could number of Bananas

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2 Monotone Function Function f is called monotonically increasing, if Chapter 3 x x 2 f (x ) f (x 2 ) It is called strictly monotonically increasing, if f (x 2) f (x ) Convex and Concave x < x 2 f (x )

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Lecture 4: Optimization. Maximizing a function of a single variable

Lecture 4: Optimization. Maximizing a function of a single variable Lecture 4: Optimization Maximizing or Minimizing a Function of a Single Variable Maximizing or Minimizing a Function of Many Variables Constrained Optimization Maximizing a function of a single variable

More information

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Chapter 13 Convex and Concave Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Monotone Function Function f is called monotonically increasing, if x 1 x 2 f (x 1 ) f (x 2 ) It

More information

EC487 Advanced Microeconomics, Part I: Lecture 2

EC487 Advanced Microeconomics, Part I: Lecture 2 EC487 Advanced Microeconomics, Part I: Lecture 2 Leonardo Felli 32L.LG.04 6 October, 2017 Properties of the Profit Function Recall the following property of the profit function π(p, w) = max x p f (x)

More information

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris University of California, Davis Department of Agricultural and Resource Economics ARE 5 Lecture Notes Quirino Paris Karush-Kuhn-Tucker conditions................................................. page Specification

More information

Optimization. Sherif Khalifa. Sherif Khalifa () Optimization 1 / 50

Optimization. Sherif Khalifa. Sherif Khalifa () Optimization 1 / 50 Sherif Khalifa Sherif Khalifa () Optimization 1 / 50 Y f(x 0 ) Y=f(X) X 0 X Sherif Khalifa () Optimization 2 / 50 Y Y=f(X) f(x 0 ) X 0 X Sherif Khalifa () Optimization 3 / 50 A necessary condition for

More information

a = (a 1; :::a i )

a = (a 1; :::a  i ) 1 Pro t maximization Behavioral assumption: an optimal set of actions is characterized by the conditions: max R(a 1 ; a ; :::a n ) C(a 1 ; a ; :::a n ) a = (a 1; :::a n) @R(a ) @a i = @C(a ) @a i The rm

More information

DEPARTMENT OF MANAGEMENT AND ECONOMICS Royal Military College of Canada. ECE Modelling in Economics Instructor: Lenin Arango-Castillo

DEPARTMENT OF MANAGEMENT AND ECONOMICS Royal Military College of Canada. ECE Modelling in Economics Instructor: Lenin Arango-Castillo Page 1 of 5 DEPARTMENT OF MANAGEMENT AND ECONOMICS Royal Military College of Canada ECE 256 - Modelling in Economics Instructor: Lenin Arango-Castillo Final Examination 13:00-16:00, December 11, 2017 INSTRUCTIONS

More information

Constrained optimization.

Constrained optimization. ams/econ 11b supplementary notes ucsc Constrained optimization. c 2016, Yonatan Katznelson 1. Constraints In many of the optimization problems that arise in economics, there are restrictions on the values

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

ECON 255 Introduction to Mathematical Economics

ECON 255 Introduction to Mathematical Economics Page 1 of 5 FINAL EXAMINATION Winter 2017 Introduction to Mathematical Economics April 20, 2017 TIME ALLOWED: 3 HOURS NUMBER IN THE LIST: STUDENT NUMBER: NAME: SIGNATURE: INSTRUCTIONS 1. This examination

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Static Problem Set 2 Solutions

Static Problem Set 2 Solutions Static Problem Set Solutions Jonathan Kreamer July, 0 Question (i) Let g, h be two concave functions. Is f = g + h a concave function? Prove it. Yes. Proof: Consider any two points x, x and α [0, ]. Let

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information