Boston College
Math Review Session (2nd part)
Lecture Notes
August 2007
Nadezhda Karamcheva
www2.bc.edu/~karamche

Contents

1 Quadratic Forms and Definite Matrices
  1.1 Quadratic Forms
  1.2 Definiteness of Quadratic Forms
  1.3 Test for Definiteness

2 Unconstrained Optimization
  2.1 First Order Conditions
  2.2 Second Order Sufficient Conditions
  2.3 Second Order Necessary Conditions
  2.4 Global Max and Min (Sufficient Conditions)
  2.5 Economic Applications

3 Constrained Optimization
  3.1 Equality Constraints
  3.2 Meaning of the Multiplier
  3.3 Second Order Sufficient Conditions

4 Calculus of Several Variables
  4.1 Weierstrass's and Mean Value Theorems
  4.2 Taylor Polynomials on $\mathbb{R}^1$
  4.3 Taylor Polynomials in $\mathbb{R}^n$

1 Quadratic Forms and Definite Matrices

1.1 Quadratic Forms

There are several reasons for studying quadratic forms. The natural starting point for the study of optimization problems is the simplest such problem: the optimization of a quadratic form. Furthermore, the second order conditions that distinguish a max from a min in economic optimization problems are stated in terms of quadratic forms. Finally, a number of economic optimization problems have a quadratic objective function, for example, risk minimization problems in finance, where riskiness is measured by the (quadratic) variance of the returns from investments, or least squares minimization problems in econometrics (OLS).

Definition 1 (Quadratic Form) A quadratic form on $\mathbb{R}^n$ is a real-valued function $Q(\cdot): \mathbb{R}^n \to \mathbb{R}$, $x \mapsto Q(x)$, of the form
$$Q(x_1, \ldots, x_n) = \sum_{i \le j} a_{ij} x_i x_j,$$
in which each term is a monomial of degree two.

As Theorem 13.3 suggests, there is a close relationship between symmetric matrices and quadratic forms. For every quadratic form $Q(x)$ there is a symmetric matrix $A$ such that $Q(x) = x^T A x$, and every symmetric matrix $A$ yields a quadratic form $Q(x) = x^T A x$.

Example: The general two dimensional quadratic form $a_{11}x_1^2 + a_{12}x_1x_2 + a_{22}x_2^2$ can be written as
$$\begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} a_{11} & \frac{1}{2}a_{12} \\ \frac{1}{2}a_{12} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$

1.2 Definiteness of Quadratic Forms

A quadratic form always takes the value zero at the point $x = 0$. Its distinguishing characteristic is the set of values it takes when $x \neq 0$. We focus here on the question of whether $x = 0$ is a max, a min, or neither of the quadratic form under consideration.
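To make the matrix representation concrete, here is a minimal numerical sketch (an illustration added here, not part of the original notes) checking that the symmetric matrix of the two dimensional example reproduces the quadratic form; the coefficient values are arbitrary.

```python
import numpy as np

# Arbitrary coefficients of Q(x1, x2) = a11*x1^2 + a12*x1*x2 + a22*x2^2
a11, a12, a22 = 3.0, -2.0, 5.0

# Symmetric matrix representation: the off-diagonal entries are a12/2
A = np.array([[a11, a12 / 2],
              [a12 / 2, a22]])

x = np.array([1.7, -0.4])   # any test point
Q_direct = a11 * x[0]**2 + a12 * x[0] * x[1] + a22 * x[1]**2
Q_matrix = x @ A @ x        # x^T A x

print(Q_direct, Q_matrix)   # the two values agree
assert np.isclose(Q_direct, Q_matrix)
```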

Example: Consider the one variable quadratic form $y = ax^2$. If $a > 0$, then $ax^2 \ge 0$ always, and it equals $0$ only when $x = 0$. Such a quadratic form is called positive definite, and thus $x = 0$ is its global minimizer. If $a < 0$, then $ax^2 \le 0$ always, and it equals $0$ only when $x = 0$. Such a quadratic form is called negative definite, and thus $x = 0$ is its global maximizer. Now let's move to two or more dimensions. A symmetric matrix is called positive definite, positive semidefinite, negative definite, etc. according to the definiteness of the corresponding quadratic form $Q(x) = x^T A x$.

Definition 2 (Definiteness) Let $A$ be an $n \times n$ symmetric matrix. Then $A$ is
a) positive definite if $x^T A x > 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
b) positive semidefinite if $x^T A x \ge 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
c) negative definite if $x^T A x < 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
d) negative semidefinite if $x^T A x \le 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
e) indefinite if $x^T A x > 0$ for some $x \neq 0$ in $\mathbb{R}^n$ and $x^T A x < 0$ for some other $x \neq 0$ in $\mathbb{R}^n$.

Remark 1 (Definiteness of Quadratic Forms) A matrix that is positive (negative) definite is automatically positive (negative) semidefinite, but not the other way around. Otherwise, every symmetric matrix falls into exactly one of the above five categories.

Example: Consider these quadratic forms:
- $Q_1(x_1, x_2) = x_1^2 + x_2^2$ is always $> 0$ at $(x_1, x_2) \neq (0, 0)$: positive definite.
- $Q_2(x_1, x_2) = -x_1^2 - x_2^2$ is always $< 0$ at $(x_1, x_2) \neq (0, 0)$: negative definite.
- $Q_3(x_1, x_2) = x_1^2 - x_2^2$ takes both negative and positive values, e.g., $Q_3(1, 0) = 1$ and $Q_3(0, 1) = -1$: indefinite.
- $Q_4(x_1, x_2) = (x_1 + x_2)^2$ is always $\ge 0$, but $Q_4(x) = 0$ at some $x \neq 0$: positive semidefinite.
- $Q_5(x_1, x_2) = -(x_1 + x_2)^2$ is always $\le 0$, but $Q_5(x) = 0$ at some $x \neq 0$: negative semidefinite.

Application 1: The definiteness of a symmetric matrix plays an important role in economic theory. For example, for a function $y = f(x)$ of one variable, the sign of the second derivative $f''(x_0)$ at a critical point $x_0$ gives a necessary and a sufficient condition for determining whether $x_0$ is a maximum of $f$, a minimum of $f$, or neither. The generalization of this second derivative test to higher dimensions involves checking whether the second derivative matrix (the Hessian) of $f$ is positive definite, negative definite, or indefinite at a critical point of $f$.

Application 2: In a similar vein, a function $y = f(x)$ of one variable is concave if its second derivative $f''(x) \le 0$ on some interval. The generalization of this result to higher dimensions states that a function is concave on some region if its second derivative matrix is negative semidefinite for all $x$ in the region.

Now we will go over a simple test for the definiteness of a quadratic form or of a symmetric matrix. First, consider the definitions of a principal minor and a leading principal minor.

1.3 Test for Definiteness

Definition 3 (Principal Minor) Let $A$ be an $n \times n$ matrix. A $k \times k$ submatrix of $A$ formed by deleting $n-k$ columns, say columns $i_1, i_2, \ldots, i_{n-k}$, and the corresponding $n-k$ rows from $A$ is called a $k$th order principal submatrix of $A$. The determinant of a $k \times k$ principal submatrix is called a $k$th order principal minor of $A$.

Example: Let $A$ be a $3 \times 3$ matrix. Then there are:
1. one third order principal minor: $\det(A)$;
2. three second order principal minors:
$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$ formed by deleting column 3 and row 3; $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}$ formed by deleting column 2 and row 2; $\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$ formed by deleting column 1 and row 1;
3. three first order principal minors: $a_{11}$, formed by deleting the last 2 rows and columns; $a_{22}$, formed by deleting the first and third columns and the corresponding rows; $a_{33}$, formed by deleting the first 2 rows and columns.

Among the $k$th order principal minors of a given matrix, there is one special one that we want to highlight.

Definition 4 (Leading Principal Minor) Let $A$ be an $n \times n$ matrix. The $k$th order principal submatrix of $A$ obtained by deleting the last $n-k$ rows and the last $n-k$ columns from $A$ is called the $k$th order leading principal submatrix of $A$, denoted $A_k$. Its determinant is called the $k$th order leading principal minor of $A$, denoted $|A_k|$.

An $n \times n$ matrix has $n$ leading principal submatrices: the top-leftmost $1 \times 1$ submatrix, the top-leftmost $2 \times 2$ submatrix, etc.

Example: Let $A$ be a $3 \times 3$ matrix. Then the leading principal minors are $a_{11}$, $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, and $\det(A)$.

The theorem below provides an algorithm which uses the leading principal minors to determine the definiteness of a given matrix.

Theorem 1 (Definiteness of Symmetric Matrices) Let $A$ be an $n \times n$ symmetric matrix. Then:
a) $A$ is positive definite if and only if all its $n$ leading principal minors are (strictly) positive;
b) $A$ is negative definite if and only if its $n$ leading principal minors alternate in sign as follows: $|A_1| < 0$, $|A_2| > 0$, $|A_3| < 0$, etc.; the $k$th order leading principal minor should have the same sign as $(-1)^k$;
c) if some $k$th order leading principal minor of $A$ (or some pair of them) is nonzero but does not fit either of the above two sign patterns, then $A$ is indefinite. This case occurs when $A$ has a negative $k$th order leading principal minor for an even integer $k$, or when $A$ has a negative $k$th order leading principal minor and a positive $l$th order leading principal minor for two distinct odd integers $k$ and $l$.

Theorem 2 (Semidefiniteness of Symmetric Matrices) Let $A$ be an $n \times n$ symmetric matrix. Then: $A$ is positive semidefinite if and only if every principal minor of $A$ is $\ge 0$; $A$ is negative semidefinite if and only if every principal minor of odd order is $\le 0$ and every principal minor of even order is $\ge 0$.
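The following sketch (illustrative code added here, not part of the notes) implements the tests of Theorems 1 and 2 with numpy; the name `classify` and the tolerance `tol` are choices made for exposition.

```python
import numpy as np
from itertools import combinations

def leading_principal_minors(A):
    """Determinants of the top-left k x k submatrices, k = 1..n."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

def principal_minors(A, k):
    """All k-th order principal minors: keep the rows/columns in idx."""
    n = A.shape[0]
    return [np.linalg.det(A[np.ix_(idx, idx)])
            for idx in combinations(range(n), k)]

def classify(A, tol=1e-10):
    """Definiteness of a symmetric matrix via Theorems 1 and 2."""
    n = A.shape[0]
    lpm = leading_principal_minors(A)
    if all(m > tol for m in lpm):                      # Theorem 1(a)
        return "positive definite"
    if all((m < -tol if k % 2 == 1 else m > tol)       # Theorem 1(b)
           for k, m in enumerate(lpm, start=1)):
        return "negative definite"
    pm = [(k, m) for k in range(1, n + 1) for m in principal_minors(A, k)]
    if all(m >= -tol for _, m in pm):                  # Theorem 2, psd
        return "positive semidefinite"
    if all((m <= tol if k % 2 == 1 else m >= -tol)     # Theorem 2, nsd
           for k, m in pm):
        return "negative semidefinite"
    return "indefinite"

# Q4(x1, x2) = (x1 + x2)^2 corresponds to A = [[1, 1], [1, 1]]
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))   # positive semidefinite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```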

2 Unconstrained Optimization

2.1 First Order Conditions

Definition 5 (r-ball) Let $X^*$ be a vector in $\mathbb{R}^n$ and let $r$ be a positive number. The r-ball about $X^*$ is the set $B_r(X^*) \equiv \{X \in \mathbb{R}^n : \|X - X^*\| < r\}$.

Definition 6 (Open Set) A set $U$ in $\mathbb{R}^n$ is open if for each $X \in U$ there exists an open ball $B_r(X)$ around $X$ completely contained in $U$.

Definition 7 (Max) Let $F: U \to \mathbb{R}^1$ be a real-valued function of $n$ variables whose domain $U$ is a subset of $\mathbb{R}^n$. Then:
a) A point $X^* \in U$ is a max of $F$ on $U$ if $F(X^*) \ge F(X)$ for all $X \in U$.
b) A point $X^* \in U$ is a strict max of $F$ on $U$ if $F(X^*) > F(X)$ for all $X \neq X^*$ in $U$.
c) A point $X^* \in U$ is a local max of $F$ on $U$ if there is an r-ball $B_r(X^*)$ about $X^*$ such that $F(X^*) \ge F(X)$ for all $X \in B_r(X^*) \cap U$.
d) A point $X^* \in U$ is a strict local max of $F$ on $U$ if there is an r-ball $B_r(X^*)$ about $X^*$ such that $F(X^*) > F(X)$ for all $X \neq X^*$ in $B_r(X^*) \cap U$.

Definition 8 (Interior Point) $X^*$ is an interior point of a set $U \subseteq \mathbb{R}^n$ if there is an r-ball $B_r(X^*)$ about $X^*$ such that $B_r(X^*) \subseteq U$.

Definition 9 (Critical Point) The n-vector $X^*$ is a critical point of a $C^1$ function $F: U \to \mathbb{R}^1$ with domain $U \subseteq \mathbb{R}^n$ if $X^*$ satisfies $\frac{\partial F}{\partial x_i}(X^*) = 0$ for all $i = 1, \ldots, n$.

Remark 2 (F.O.C. for Max and Min with 1 Variable) If $x_0$ is an interior max or min of $f: U \to \mathbb{R}^1$ where $U \subseteq \mathbb{R}^1$, then $x_0$ is a critical point of $f$.

Note that a function is neither increasing nor decreasing on an interval about an interior max or min; that is, $f'(x_0) = 0$ or is undefined.

Theorem 3 (F.O.C. for Max or Min with n Variables) Let $F: U \to \mathbb{R}^1$ be a $C^1$ function defined on $U \subseteq \mathbb{R}^n$. If (i) $X^*$ is a local max or min of $F$ in $U$ and (ii) $X^*$ is an interior point of $U$, then $\frac{\partial F}{\partial x_i}(X^*) = 0$ for all $i = 1, \ldots, n$.

Proof. Let $B = B_r(X^*)$ be a ball about $X^*$ in $U$ such that $F(X^*) \ge F(X)$ for all $X \in B$. Since $X^*$ maximizes $F$ on $B$, along each line segment through $X^*$ that lies in $B$ and is parallel to one of the axes, $F$ takes on its maximum value at $X^*$. In other words, $x_i^*$ maximizes the function of one variable $x_i \mapsto F(x_1^*, \ldots, x_{i-1}^*, x_i, x_{i+1}^*, \ldots, x_n^*)$ for $x_i \in (x_i^* - r, x_i^* + r)$. Apply the one-variable maximization criterion (Remark 2) to conclude that $\frac{\partial F}{\partial x_1}(X^*) = \cdots = \frac{\partial F}{\partial x_n}(X^*) = 0$. The argument for the min case is similar. Q.E.D.

Suggested Exercise 1 Find the local maxes and mins of $F(x, y) = x^3 - y^3 + 9xy$.

2.2 Second Order Sufficient Conditions

Definition 10 (Gradient, Jacobian and Hessian)
i) The gradient vector of a $C^1$ function $F: U \to \mathbb{R}^1$ with domain $U \subseteq \mathbb{R}^n$, evaluated at $X^*$, is the array $\nabla F(X^*)$ of first derivatives of $F$ of the form
$$\nabla F(X^*) = \begin{bmatrix} \frac{\partial F}{\partial x_1}(X^*) \\ \vdots \\ \frac{\partial F}{\partial x_n}(X^*) \end{bmatrix}.$$
Note that $\nabla F(X^*)$ is an $n \times 1$ vector.
ii) The Jacobian matrix of an array of functions $F \equiv [F_1, F_2, \ldots, F_m]^T$, where each function is $C^1$, evaluated at $X^*$, is the array of the form

$$DF(X^*) = \begin{bmatrix} \frac{\partial F_1}{\partial x_1}(X^*) & \cdots & \frac{\partial F_1}{\partial x_n}(X^*) \\ \vdots & \ddots & \vdots \\ \frac{\partial F_m}{\partial x_1}(X^*) & \cdots & \frac{\partial F_m}{\partial x_n}(X^*) \end{bmatrix}.$$
Note that for one single function $F: U \to \mathbb{R}^1$ with $U \subseteq \mathbb{R}^n$, the Jacobian vector is $DF(X^*) = \left[ \frac{\partial F}{\partial x_1}(X^*), \ldots, \frac{\partial F}{\partial x_n}(X^*) \right]$.
iii) The Hessian matrix of a function $F \in C^2$ evaluated at $X^*$ is the array $D^2 F(X^*)$ of second derivatives of $F$ of the form
$$D^2 F(X^*) = \begin{bmatrix} \frac{\partial^2 F}{\partial x_1^2}(X^*) & \cdots & \frac{\partial^2 F}{\partial x_1 \partial x_n}(X^*) \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 F}{\partial x_n \partial x_1}(X^*) & \cdots & \frac{\partial^2 F}{\partial x_n^2}(X^*) \end{bmatrix} = \begin{bmatrix} F_{11} & \cdots & F_{1n} \\ \vdots & \ddots & \vdots \\ F_{n1} & \cdots & F_{nn} \end{bmatrix}.$$
Note that the Hessian matrix has size $n \times n$.

Remark 3 (Young's Theorem) Suppose that $F: U \to \mathbb{R}^1$, where $U \subseteq \mathbb{R}^n$, is a $C^2$ function on an open set $J \subseteq U$. Then for all $X \in J$ and for each pair of indices $i, j$: $\frac{\partial^2 F}{\partial x_i \partial x_j} = \frac{\partial^2 F}{\partial x_j \partial x_i}$.

Proposition 1 (Symmetry of the Hessian) The Hessian matrix of a $C^2$ function $F: U \to \mathbb{R}^1$ with $U \subseteq \mathbb{R}^n$ is symmetric.

Proof. Follows directly from Remark 3. Q.E.D.

Theorem 4 (S.O.S.C. for Max and Min) Let $F: U \to \mathbb{R}^1$ be $C^2$ with open domain $U \subseteq \mathbb{R}^n$. Suppose $X^*$ is a critical point (see Definition 9) of $F$.
a) If the Hessian $D^2 F(X^*)$ is a negative definite matrix, then $X^*$ is a strict local maximum of $F$.
b) If the Hessian $D^2 F(X^*)$ is a positive definite matrix, then $X^*$ is a strict local minimum of $F$.
c) If the Hessian $D^2 F(X^*)$ is indefinite, then $X^*$ is neither a local maximum nor a local minimum of $F$. $X^*$, in such a case, is called a saddle point.
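As a concrete illustration of Definition 10 (a sketch added here, not part of the original notes), sympy can compute the gradient and Hessian symbolically; the function chosen is the one from Suggested Exercise 1.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**3 - y**3 + 9*x*y                    # function from Suggested Exercise 1

grad = sp.Matrix([F]).jacobian([x, y]).T   # gradient as a 2 x 1 column
H = sp.hessian(F, (x, y))                  # 2 x 2 Hessian matrix

print(grad)   # Matrix([[3*x**2 + 9*y], [-3*y**2 + 9*x]])
print(H)      # Matrix([[6*x, 9], [9, -6*y]])
```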

Later, when we develop the theory of Taylor polynomials, we will show how to approximate a $C^2$ function by its Taylor polynomial of order two about $X^*$.

Proof (sketch; the details on the Taylor expansion are in the last section). For part (a), let $X^*$ be a critical point of $F$ and let $h \in \mathbb{R}^n$. Use a Taylor expansion of $F$ around $X^*$ to get
$$F(X^* + h) = F(X^*) + DF(X^*)h + \tfrac{1}{2} h' D^2 F(X^*) h + R(h),$$
where $DF(X^*)$ is the Jacobian of $F$, $D^2 F(X^*)$ is the Hessian, and $R(h)$ is a remainder term such that $R(h) \to 0$ as $h \to 0$. Ignore the remainder term and recall that $DF(X^*) = 0$ (since $X^*$ is a critical point) to get
$$F(X^* + h) - F(X^*) \approx \tfrac{1}{2} h' D^2 F(X^*) h.$$
For small $h \neq 0$, since $D^2 F(X^*)$ is negative definite (by hypothesis), $\tfrac{1}{2} h' D^2 F(X^*) h < 0$; then $F(X^* + h) - F(X^*) < 0$, or $F(X^* + h) < F(X^*)$, hence $X^*$ is a strict local maximum. A similar argument works for parts (b) and (c). Q.E.D.

Remark 4 (Theorem 4 Restated) Let $F: U \to \mathbb{R}^1$ be $C^2$ with open domain $U \subseteq \mathbb{R}^n$. Suppose $X^*$ is a critical point, so that $\frac{\partial F}{\partial x_i}(X^*) = 0$ for all $i = 1, \ldots, n$. Then:
a) If the $n$ leading principal minors of the Hessian alternate in sign, that is,
$$F_{x_1 x_1} < 0, \quad \begin{vmatrix} F_{x_1 x_1} & F_{x_1 x_2} \\ F_{x_2 x_1} & F_{x_2 x_2} \end{vmatrix} > 0, \quad \begin{vmatrix} F_{x_1 x_1} & F_{x_1 x_2} & F_{x_1 x_3} \\ F_{x_2 x_1} & F_{x_2 x_2} & F_{x_2 x_3} \\ F_{x_3 x_1} & F_{x_3 x_2} & F_{x_3 x_3} \end{vmatrix} < 0, \ldots$$
at $X^*$, then $X^*$ is a strict local max of $F$.
b) If the $n$ leading principal minors of the Hessian are all positive, i.e.,
$$F_{x_1 x_1} > 0, \quad \begin{vmatrix} F_{x_1 x_1} & F_{x_1 x_2} \\ F_{x_2 x_1} & F_{x_2 x_2} \end{vmatrix} > 0, \quad \begin{vmatrix} F_{x_1 x_1} & F_{x_1 x_2} & F_{x_1 x_3} \\ F_{x_2 x_1} & F_{x_2 x_2} & F_{x_2 x_3} \\ F_{x_3 x_1} & F_{x_3 x_2} & F_{x_3 x_3} \end{vmatrix} > 0, \ldots$$
at $X^*$, then $X^*$ is a strict local min of $F$.
c) If some non-zero leading principal minors of the Hessian violate both sign patterns in (a) and (b), then $X^*$ is a saddle point of $F$.

Note that the remark restates Theorem 4 with the equivalent characterizations of positive definiteness and negative definiteness.

Suggested Exercise 2 Find the critical points of $F(x, y) = x^3 - y^3 + 9xy$. Find the Hessian matrix and apply Theorem 4, or the remark following it, to find the max or min.

2.3 Second Order Necessary Conditions

Note that while the statements in the section above (sufficient conditions) have the form $A \Rightarrow B$, we now state statements of the form $B \Rightarrow A$. Second order necessary conditions for a max or min of a function are weaker than the sufficient conditions: a local max requires us to replace negative definite by negative semidefinite, and a local min requires us to replace positive definite by positive semidefinite. That is why we do not use the qualifier "strict."

Theorem 5 (S.O.N.C. for Max and Min) Let $F: U \to \mathbb{R}^1$ be $C^2$ with open domain $U \subseteq \mathbb{R}^n$. Suppose $X^*$ is an interior point of $U$ and suppose that $X^*$ is a local min of $F$ (respectively, local max). Then $DF(X^*) = 0$ and $D^2 F(X^*)$ is positive semidefinite (respectively, negative semidefinite).

The proof is given in the last section.

Remark 5 (Theorem 5 Restated) Let $F: U \to \mathbb{R}^1$ be $C^2$ with open domain $U \subseteq \mathbb{R}^n$. Suppose $X^*$ is an interior point of $U$.
a) If $X^*$ is a local max of $F$, then $DF(X^*) = 0$, all the principal minors of the Hessian $D^2 F(X^*)$ of odd order are $\le 0$, and all the principal minors of even order are $\ge 0$.
b) If $X^*$ is a local min of $F$, then $DF(X^*) = 0$ and all the principal minors of the Hessian $D^2 F(X^*)$ are $\ge 0$.

Note that this remark restates Theorem 5 with the alternative characterizations of positive and negative semidefiniteness of the Hessian matrix.

Suggested Exercise 3 Find the local max or min of the following two functions: $f_1 = x^4$ and $f_2 = -x^4$.
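Exercises 1 and 2 can be checked mechanically with a sketch like the following (illustrative code added here, not part of the notes): solve the first order conditions, then classify each critical point through the leading principal minors of the Hessian (Remark 4).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = x**3 - y**3 + 9*x*y

grad = [sp.diff(F, v) for v in (x, y)]
critical_points = sp.solve(grad, [x, y], dict=True)   # first order conditions

H = sp.hessian(F, (x, y))
for pt in critical_points:
    Hp = H.subs(pt)
    d1, d2 = Hp[0, 0], Hp.det()   # leading principal minors
    if d1 > 0 and d2 > 0:
        kind = "strict local min"
    elif d1 < 0 and d2 > 0:
        kind = "strict local max"
    elif d2 < 0:
        kind = "saddle point"     # 2 x 2 Hessian with negative determinant is indefinite
    else:
        kind = "test inconclusive"
    print(pt, kind)
# Expected: (0, 0) is a saddle point, (3, -3) is a strict local min.
```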

Suggested Exercise 4 Find the local max or min of the following function: $F(x, y) = x^4 + x^2 - 6xy + 3y^2$.

Suggested Exercise 5 For the function $F(x, y, z) = x^2 + 6xy + y^2 - 3yz + 4z^2 - 10x - 5y - 21z$ defined on $\mathbb{R}^3$, find the critical points and classify them as local max, local min, saddle point, or "can't tell."

2.4 Global Max and Min (Sufficient Conditions)

Definition 11 (Convex Set) A set $U \subseteq \mathbb{R}^n$ is convex if, whenever $X$ and $Y$ are points of $U$, the line segment $l(X, Y) \equiv \{tX + (1-t)Y : 0 \le t \le 1\}$ joining $X$ to $Y$ is also in $U$.

Definition 12 (Concave Function) A real-valued function $g: U \to \mathbb{R}^1$ with convex domain $U \subseteq \mathbb{R}^n$ is a concave function if for all $X, Y$ in $U$ and for all $t \in [0, 1]$, $g(tX + (1-t)Y) \ge t g(X) + (1-t) g(Y)$.

Definition 13 (Convex Function) A real-valued function $g: U \to \mathbb{R}^1$ with convex domain $U \subseteq \mathbb{R}^n$ is a convex function if $-g(\cdot)$ is concave.

Remark 6 Do not confuse the notion of a convex function with that of a convex set. A set $U$ is a convex set if whenever $x$ and $y$ are points in $U$, the line segment joining $x$ to $y$ is also in $U$. The definition of a concave or convex function requires that whenever $f$ is defined at $x$ and $y$, it is defined on the line segment joining them. Thus concave and convex functions are required to have convex domains.

Remark 7 Many introductory calculus texts call convex functions "concave up" and concave functions "concave down." We will stick with the more classical terms: convex and concave.

Definition 14 (Restriction) The restriction of $F: U \to \mathbb{R}^1$, with $U$ convex and $U \subseteq \mathbb{R}^n$, to the line segment connecting $X, Y$ in $U$ is $g_{xy}(t) = F(tX + (1-t)Y)$ for $t \in [0, 1]$.

We usually develop a geometric intuition for concave and convex functions of one variable in Calculus I.

We can recognize a concave function by its graph, because the inequality in the definition of a concave function has the following geometric interpretation: a function $f$ is concave if and only if any secant line connecting two points on the graph of $f$ lies below the graph. A function is convex if and only if any secant line connecting two points on its graph lies above the graph. In developing an intuition for concave functions of several variables, it is useful to notice that a function of $n$ variables on a convex set $U$ is concave if and only if its restriction to any line segment in $U$ is a concave function of one variable. This should be intuitively clear, since the definition of a concave function is a statement about its behavior on line segments.

Lemma 1 Let $F: U \to \mathbb{R}^1$ be a function with convex domain $U \subseteq \mathbb{R}^n$. Then $F$ is concave if and only if its restriction to every line segment in $U$ is a concave function.

Proof. Let $X, Y$ be arbitrary points in $U$ and let $g_{xy}(t) \equiv F(tX + (1-t)Y)$, where $t \in [0, 1]$; i.e., $g_{xy}$ is the restriction of $F$ to the line segment joining $X$ and $Y$. We proceed in two steps.
i) ($\Leftarrow$) Suppose every $g_{xy}$ is concave; we need to prove that $F$ is concave. Thus:
$$F(tX + (1-t)Y) = g_{xy}(t) \quad \text{by definition}$$
$$= g_{xy}(t \cdot 1 + (1-t) \cdot 0) \ge t g_{xy}(1) + (1-t) g_{xy}(0) \quad \text{since } g_{xy} \text{ is concave}$$
$$= t F(X) + (1-t) F(Y) \quad \text{by definition of } g_{xy}.$$
ii) ($\Rightarrow$) Suppose $F$ is concave; we want to prove that $g_{xy}(t)$ is concave. Fix $s_1$ and $s_2$ in $[0, 1]$. Then
$$g_{xy}(ts_1 + (1-t)s_2) = F\{(ts_1 + (1-t)s_2)X + (1 - ts_1 - (1-t)s_2)Y\}$$
$$= F\{t[s_1 X + (1-s_1)Y] + (1-t)[s_2 X + (1-s_2)Y]\}$$
$$\ge t F(s_1 X + (1-s_1)Y) + (1-t) F(s_2 X + (1-s_2)Y)$$
$$= t g_{xy}(s_1) + (1-t) g_{xy}(s_2).$$
The first line follows by the definition of $g_{xy}$, the second by rearranging terms, the third by the concavity of $F$, and the last by the definition of $g_{xy}$. Q.E.D.

Lemma 2 Let $g: I \to \mathbb{R}^1$ be a $C^1$ function on an interval $I \subseteq \mathbb{R}^1$. Then:
a) $g$ is concave iff $g(Y) - g(X) \le Dg(X)(Y - X)$ for all $X, Y \in I$;
b) $g$ is convex iff $g(Y) - g(X) \ge Dg(X)(Y - X)$ for all $X, Y \in I$.

Proof. For part (a):
i) ($\Rightarrow$) Suppose $g$ is concave on $I$, let $X, Y \in I$, and let $t \in (0, 1]$. Then $t g(Y) + (1-t) g(X) \le g(tY + (1-t)X)$ by the concavity of $g$, and it follows that
$$g(Y) - g(X) \le \frac{g(tY + (1-t)X) - g(X)}{t(Y - X)}\,(Y - X).$$
Let $\Delta \equiv t(Y - X)$ and note that $\Delta \to 0$ as $t \to 0$. Rewrite the last expression as
$$g(Y) - g(X) \le \frac{g(X + \Delta) - g(X)}{\Delta}\,(Y - X).$$
Let $t \to 0$, hence $\Delta \to 0$, and get $g(Y) - g(X) \le Dg(X)(Y - X)$.
ii) ($\Leftarrow$) Suppose $g(\tilde{X}) - g(\tilde{Y}) \le Dg(\tilde{Y})(\tilde{X} - \tilde{Y})$ for any $\tilde{X}, \tilde{Y} \in I$. Consider in particular the pair $\tilde{X} = X$ and $\tilde{Y} = (1-t)X + tY$; then
$$g(X) - g((1-t)X + tY) \le Dg((1-t)X + tY)\,(X - [(1-t)X + tY]) = -t\,Dg((1-t)X + tY)(Y - X).$$
Now consider the pair $\tilde{X} = Y$ and $\tilde{Y} = (1-t)X + tY$; then
$$g(Y) - g((1-t)X + tY) \le Dg((1-t)X + tY)\,(Y - [(1-t)X + tY]) = (1-t)\,Dg((1-t)X + tY)(Y - X).$$
Next, multiply the first inequality by $(1-t)$, the second by $t$, and add them to get
$$(1-t)g(X) + t g(Y) \le g((1-t)X + tY),$$
hence $g$ is concave. This proves part (a); part (b) follows along similar lines. Q.E.D.
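A quick numerical spot-check of the gradient inequality of Lemma 2 (and of its multivariable version, Lemma 3, stated next) is the following sketch, added here as an illustration; the concave function $F(x) = -\|x\|^2$ is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

F = lambda x: -np.sum(x**2)   # a concave function on R^n
DF = lambda x: -2 * x         # its gradient

# Lemma 3: F concave  <=>  F(Y) - F(X) <= DF(X) @ (Y - X) for all X, Y
for _ in range(1000):
    X, Y = rng.normal(size=3), rng.normal(size=3)
    assert F(Y) - F(X) <= DF(X) @ (Y - X) + 1e-12
print("gradient inequality holds at all sampled pairs")
```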

Lemma 3 Let $F: U \to \mathbb{R}^1$ be a $C^1$ function with convex domain $U \subseteq \mathbb{R}^n$.
a) $F$ is concave on $U$ iff for all $X, Y$ in $U$, $F(Y) - F(X) \le DF(X)(Y - X)$.
b) $F$ is convex on $U$ iff for all $X, Y$ in $U$, $F(Y) - F(X) \ge DF(X)(Y - X)$.

Proof. By Lemma 1, $F$ is concave iff for every pair $X, Y$ the restriction function $g_{xy}(t) \equiv F(tY + (1-t)X)$ is concave. By Lemma 2, $g_{xy}$ is concave iff $g_{xy}(t) - g_{xy}(s) \le Dg_{xy}(s)(t - s)$ for every $t, s \in [0, 1]$. In particular, for $t = 1$ and $s = 0$, by the definition of $g_{xy}$ and the Chain Rule ($Dg_{xy}(0) = DF(X)(Y - X)$), this is equivalent to $F(Y) - F(X) \le DF(X)(Y - X)$. This proves part (a); part (b) follows from a similar argument. Q.E.D.

Theorem 6 (Global Max and Min) Let $F: U \to \mathbb{R}^1$ be a $C^2$ function with open and convex domain $U \subseteq \mathbb{R}^n$.
a) The following three conditions are equivalent:
i) $F$ is a concave function on $U$;
ii) $F(Y) - F(X) \le DF(X)(Y - X)$ for all $X, Y$ in $U$;
iii) $D^2 F(X)$ is negative semidefinite for all $X$ in $U$.
b) The following three conditions are equivalent:
i) $F$ is a convex function on $U$;
ii) $F(Y) - F(X) \ge DF(X)(Y - X)$ for all $X, Y$ in $U$;
iii) $D^2 F(X)$ is positive semidefinite for all $X$ in $U$.
c) If $F$ is a concave function on $U$ and $DF(X^*) = 0$ for some $X^*$ in $U$, then $X^*$ is a global max of $F$ on $U$.
d) If $F$ is a convex function on $U$ and $DF(X^*) = 0$ for some $X^*$ in $U$, then $X^*$ is a global min of $F$ on $U$.

Proof. I) To prove: (a.i) $\Leftrightarrow$ (a.ii). Follows from Lemma 3.
II) To prove: (a.i) $\Leftrightarrow$ (a.iii).
i) ($\Leftarrow$) As in the proof of Lemma 3, let $X, Y$ be arbitrary points in $U$ and let $g_{xy}(t) \equiv F(tY + (1-t)X)$. It follows that $F$ is concave iff $g_{xy}$ is concave, which is equivalent to $D^2 g_{xy}(t) \le 0$ for all $t \in [0, 1]$. Let $Z \equiv tY + (1-t)X$; recall $Dg_{xy}(t) = DF(Z)(Y - X)$, so $D^2 g_{xy}(t) = (Y - X)' D^2 F(Z)(Y - X)$. By hypothesis $D^2 F(Z)$ is negative semidefinite, so $D^2 g_{xy}(t) = (Y - X)' D^2 F(Z)(Y - X) \le 0$; hence $g_{xy}(t)$ is concave, and therefore $F$ is concave.
ii) ($\Rightarrow$) Suppose $F$ is concave on $U$, let $Z$ be an arbitrary point in $U$, and let $V$ be an arbitrary displacement vector in $\mathbb{R}^n$. We want to show that $V' D^2 F(Z) V \le 0$. Since $U$ is an open set, there is $t_0 > 0$ such that $Y = Z + t_0 V \in U$. Since $F$ is concave, the restriction of $F$ to the segment from $Z$ to $Y$ is concave, and its second derivative at $0$ is nonpositive; then
$$0 \ge (Y - Z)' D^2 F(Z)(Y - Z) = (t_0 V)' D^2 F(Z)(t_0 V) = t_0^2\, V' D^2 F(Z) V,$$
hence $D^2 F(Z)$ is negative semidefinite for all $Z$ in $U$.

(I) and (II) prove part (a) of the theorem; part (b) follows by an argument similar to (I) and (II).
III) To prove part (c) of the theorem: if $F$ is concave, we already showed that $F(Y) - F(X) \le DF(X)(Y - X)$ for any $X, Y$ in $U$. In particular, consider $X = X^*$ and notice that $DF(X^*) = 0$ by hypothesis; then $F(Y) - F(X^*) \le DF(X^*)(Y - X^*)$ implies $F(Y) - F(X^*) \le 0$, therefore $X^*$ is a global max. This proves (c); part (d) is shown by a similar argument. Q.E.D.

2.5 Economic Applications

Suggested Exercise 6 A firm uses two inputs to produce a single product. If its production function is $Q = x^{1/4} y^{1/4}$, and if it sells its output for a dollar a unit and buys each input for $4 a unit, find its profit-maximizing input bundle. (Check the second order conditions.)

Suggested Exercise 7 Suppose that a firm has a Cobb-Douglas production function $Q = x^a y^b$ and that it faces output price $p$ and input prices $w$ and $r$, respectively. Solve the first order conditions for a profit-maximizing input bundle. Use the second order conditions to determine the values of the parameters $a, b, p, w, r$ for which this solution is a global max.

Suggested Exercise 8 Least squares analysis.

3 Constrained Optimization

3.1 Equality Constraints

The class of constrained maximization problems can be written as:
$$\max F(X) \qquad (1)$$
where $X \in \mathbb{R}^n$ must satisfy
$$h_1(X) = c_1, \; \ldots, \; h_m(X) = c_m \qquad (2)$$

and
$$g_1(X) \le b_1, \; \ldots, \; g_k(X) \le b_k, \qquad (3)$$
where the $c_j$'s and $b_i$'s are constants. The function $F(\cdot)$ is called the objective function, while $h_1, \ldots, h_m$ and $g_1, \ldots, g_k$ are called constraint functions. Note that the $h_i$'s are equality constraints and the $g_j$'s are inequality constraints. The problem of concern to us here is limited to equality constraints. This problem can be thought of as the problem of finding the n-tuple $X^* = (x_1^*, \ldots, x_n^*)$ that maximizes (1) subject to the constraints (2).

For illustrative purposes, consider the problem with one equality constraint: max $F(x_1, x_2)$ subject to $h_1(x_1, x_2) = c_1$. A geometric visualization of this problem is given in Figure 1. Note from Figure 1 that at the point $X^* = (x_1^*, x_2^*)$ the level curve of $F(\cdot)$ and the constraint are tangent to each other, i.e., both have a common slope. We will exploit this observation in order to find a characterization of the solution for this class of problems and its generalization to $m$ restrictions.

[Figure 1: level curves of $F$ and the constraint curve $h_1(x_1, x_2) = c_1$, tangent at the optimum $X^*$.]

In order to find the slope of the level curve at the optimum point $X^*$, recall from the implicit function theorem that for a function $H(y_1, y_2)$,
$$\frac{dy_2}{dy_1} = -\frac{\partial H}{\partial y_1}(Y^o) \Big/ \frac{\partial H}{\partial y_2}(Y^o),$$
where $Y^o$ is a fixed point. In particular, according to Figure 1, the slope $\frac{dx_2}{dx_1}$ for $F$ and for $h_1$ must be the same at $X^*$; this is
$$-\frac{\partial F / \partial x_1}{\partial F / \partial x_2} = -\frac{\partial h_1 / \partial x_1}{\partial h_1 / \partial x_2},$$
which can be written as
$$\frac{\partial F / \partial x_1}{\partial h_1 / \partial x_1} = \frac{\partial F / \partial x_2}{\partial h_1 / \partial x_2} = \lambda, \qquad (4)$$
where $\lambda$ is the common value of these ratios at $X^*$. We are assuming that the ratios above do not have zero denominators. Now rewrite (4) as two equations:
$$\frac{\partial F}{\partial x_1} - \lambda \frac{\partial h_1}{\partial x_1} = 0 \qquad (5)$$
$$\frac{\partial F}{\partial x_2} - \lambda \frac{\partial h_1}{\partial x_2} = 0 \qquad (6)$$
and note that the system (5)-(6) has three unknowns: $(x_1, x_2, \lambda)$. We need a third equation in order to solve the system; that equation is the constraint $h_1(\cdot)$ itself:
$$h_1(x_1, x_2) - c_1 = 0. \qquad (7)$$
Hence, the pair $(X^*, \lambda^*)$ that maximizes $F(x_1, x_2)$ subject to $h_1(x_1, x_2) = c_1$ solves the system (5)-(7). The system (5)-(7) can be written in terms of a function $L$ with three arguments $x_1, x_2, \lambda$. This function is called the Lagrangian function,
$$L(x_1, x_2, \lambda) \equiv F(x_1, x_2) - \lambda [h_1(x_1, x_2) - c_1],$$
and $\lambda$ is called the Lagrange multiplier. The equations of the system (5)-(7) can be recovered by considering $\frac{\partial L(\cdot)}{\partial x_1} = 0$, $\frac{\partial L(\cdot)}{\partial x_2} = 0$, $\frac{\partial L(\cdot)}{\partial \lambda} = 0$, that is, by considering the f.o.c. of the unconstrained maximization of $L(\cdot)$.

We can think of the Lagrange method as transforming a constrained maximization into an unconstrained maximization by constructing the Lagrangian function. The transformation requires the introduction of one more variable in the Lagrangian function: the multiplier. It is important to note that the transformation requires $\frac{\partial h_1}{\partial x_1} \neq 0$ or $\frac{\partial h_1}{\partial x_2} \neq 0$ or both; otherwise the ratios in (4) would not be well defined. This is called the constraint qualification. If the constraint is linear, the constraint qualification is automatically satisfied.

Note that the constraint qualification in the problem presented requires the Jacobian of the constraint function to be different from zero, that is, $Dh_1(X^*) \neq [0, 0]$. For the case of a constraint function with domain in $\mathbb{R}^n$ we require $Dh_1(X^*) \neq O$, where $O$ is a $1 \times n$ vector of zeros.

The constraint qualification for the more general case of $m$ equality constraints requires the matrix $DH(X^*)$ to be of rank $m$, where $H$ is a column vector of the functions $h_i(\cdot)$. That is, $H = [h_1(X), \ldots, h_m(X)]'$ and
$$DH(X^*) = \begin{bmatrix} \frac{\partial h_1}{\partial x_1}(X^*) & \cdots & \frac{\partial h_1}{\partial x_n}(X^*) \\ \vdots & \ddots & \vdots \\ \frac{\partial h_m}{\partial x_1}(X^*) & \cdots & \frac{\partial h_m}{\partial x_n}(X^*) \end{bmatrix}$$
is of size $m \times n$. This implies that the constraint set has a well-defined $(n-m)$-dimensional tangent plane everywhere.

Definition 15 (N.D.C.Q.) The functions $h_1, \ldots, h_m$ satisfy the nondegenerate constraint qualification at $X^*$ if the rank of the Jacobian matrix $DH(X^*)$ is $m$.
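A small numerical sketch of checking the NDCQ (added here as an illustration; the constraints are the ones of Suggested Exercise 11 below): stack the constraint gradients into the Jacobian $DH(X^*)$ and compare its rank with $m$.

```python
import numpy as np

# Constraints in R^3: h1(x, y, z) = y^2 + z^2 and h2(x, y, z) = x*z
def DH(x, y, z):
    """Jacobian of H = [h1, h2]': one row of partial derivatives per constraint."""
    return np.array([[0.0, 2*y, 2*z],   # gradient of h1
                     [z,   0.0, x  ]])  # gradient of h2

X = (3.0, 0.0, 1.0)   # a feasible point: 0^2 + 1^2 = 1 and 3*1 = 3

# NDCQ holds at X iff rank(DH(X)) equals the number of constraints, m = 2
print(np.linalg.matrix_rank(DH(*X)))   # prints 2
```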

Theorem 7 (Constrained Max and Min) Let $F(X), h_1(X), \ldots, h_m(X)$ be $C^1$ functions with domain in $\mathbb{R}^n$. Consider the problem: maximize $F(X)$ subject to $h_1(X) = c_1, \ldots, h_m(X) = c_m$. Suppose that:
(i) $X^*$ satisfies $h_1(X^*) = c_1, \ldots, h_m(X^*) = c_m$;
(ii) $X^*$ is a local max or min of $F(\cdot)$ on $C_h \equiv \{X : h_1(X) = c_1, \ldots, h_m(X) = c_m\}$;
(iii) $X^*$ satisfies the NDCQ condition.
Then there exists $\lambda^* \equiv (\lambda_1^*, \ldots, \lambda_m^*)$ such that $(X^*, \lambda^*)$ is a critical point of the Lagrangian
$$L(X, \lambda) \equiv F(X) - \lambda_1 [h_1(X) - c_1] - \ldots - \lambda_m [h_m(X) - c_m].$$
In other words, $\frac{\partial L}{\partial x_1}(X^*, \lambda^*) = 0, \ldots, \frac{\partial L}{\partial x_n}(X^*, \lambda^*) = 0$ and $\frac{\partial L}{\partial \lambda_1}(X^*, \lambda^*) = 0, \ldots, \frac{\partial L}{\partial \lambda_m}(X^*, \lambda^*) = 0$.

Proof. By assumption (iii), $X^*$ satisfies the NDCQ condition, hence the rank of the Jacobian matrix $DH(X^*)$ is $m$. Next, we claim that the Jacobian matrix of the column vector of functions $G = [F \; h_1 \; h_2 \; \ldots \; h_m]'$ is not of maximum rank at $X^*$, i.e., the matrix
$$DG(X^*) = \begin{bmatrix} \frac{\partial F}{\partial x_1}(X^*) & \cdots & \frac{\partial F}{\partial x_n}(X^*) \\ \frac{\partial h_1}{\partial x_1}(X^*) & \cdots & \frac{\partial h_1}{\partial x_n}(X^*) \\ \vdots & \ddots & \vdots \\ \frac{\partial h_m}{\partial x_1}(X^*) & \cdots & \frac{\partial h_m}{\partial x_n}(X^*) \end{bmatrix} \qquad (8)$$
has rank strictly less than $m+1$. To prove this claim we are going to assume that the rank of $DG(X^*)$ is $m+1$ and generate a contradiction. Let $c_0 = F(X^*)$ and consider the system of equations
$$F(X) = c_0, \quad h_1(X) = c_1, \quad \ldots, \quad h_m(X) = c_m. \qquad (9)$$
We know, by assumption (i) and the definition of $c_0$, that $X^*$ is a solution of (9). Notice that the matrix (8) is the Jacobian associated with the system (9), and assume that the rank of $DG$ is $m+1$. By the Implicit Function Theorem we can perturb the $c$'s and still find a solution to the system (9); in particular, we can find a solution $\tilde{X}$ such that the perturbed system
$$F(\tilde{X}) = c_0 + \varepsilon, \quad h_1(\tilde{X}) = c_1, \quad \ldots, \quad h_m(\tilde{X}) = c_m$$
is satisfied, where $\varepsilon$ is a small positive number. The last $m$ equations of the system show that $\tilde{X}$ satisfies the constraints of the original problem. However, $F(\tilde{X}) > F(X^*)$,

a contradiction to assumption (ii) of the theorem. This proves our claim. Next, the claim just proved implies that there exist scalars $\alpha_0, \alpha_1, \ldots, \alpha_m$, not all zero, such that
$$\alpha_0 \begin{bmatrix} \frac{\partial F}{\partial x_1}(X^*) \\ \vdots \\ \frac{\partial F}{\partial x_n}(X^*) \end{bmatrix} + \alpha_1 \begin{bmatrix} \frac{\partial h_1}{\partial x_1}(X^*) \\ \vdots \\ \frac{\partial h_1}{\partial x_n}(X^*) \end{bmatrix} + \cdots + \alpha_m \begin{bmatrix} \frac{\partial h_m}{\partial x_1}(X^*) \\ \vdots \\ \frac{\partial h_m}{\partial x_n}(X^*) \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (10)$$
Next we claim that, since $X^*$ satisfies the NDCQ condition, $\alpha_0 \neq 0$. To prove this (second) claim, simply notice that if $\alpha_0 = 0$ then by (10) the gradient vectors of the constraint functions would be linearly dependent, that is, $\alpha_1 \nabla h_1 + \alpha_2 \nabla h_2 + \cdots + \alpha_m \nabla h_m = 0$ with the $\alpha_i$ not all zero, contradicting NDCQ. Letting $\lambda_1^* = -\frac{\alpha_1}{\alpha_0}, \ldots, \lambda_m^* = -\frac{\alpha_m}{\alpha_0}$, (10) can be written as
$$\nabla F(X^*) - \lambda_1^* \nabla h_1(X^*) - \ldots - \lambda_m^* \nabla h_m(X^*) = 0. \qquad \text{Q.E.D.}$$

Suggested Exercise 9 Solve the following constrained maximization problem: maximize $f(x_1, x_2) = x_1 x_2$ subject to $h(x_1, x_2) \equiv x_1 + 4x_2 = 16$.

Suggested Exercise 10 Solve the following constrained maximization problem: maximize $f(x_1, x_2) = x_1^2 x_2$ subject to $(x_1, x_2)$ in the constraint set $C_h = \{(x_1, x_2) : 2x_1^2 + x_2^2 = 3\}$.

Suggested Exercise 11 Find the max and min of $f(x, y, z) = yz + xz$ subject to $y^2 + z^2 = 1$ and $xz = 3$.
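A sketch of the Lagrangian recipe applied to Suggested Exercise 9 (illustrative code added here, not part of the notes): form $L$ and solve the first order conditions in $(x_1, x_2, \lambda)$.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

F = x1 * x2
h = x1 + 4*x2            # constraint h(x1, x2) = 16
L = F - lam * (h - 16)   # Lagrangian

foc = [sp.diff(L, v) for v in (x1, x2, lam)]   # first order conditions
sol = sp.solve(foc, [x1, x2, lam], dict=True)
print(sol)   # [{x1: 8, x2: 2, lambda: 2}]
```

At the solution, $f(8, 2) = 16$ and $\lambda^* = 2$; the role of the multiplier is the subject of the next subsection.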

3.2 Meaning of the Multiplier

Theorem 8 (Meaning of the Multiplier) Let $F(X), h_1(X), \ldots, h_m(X)$ be $C^1$ functions with domain in $\mathbb{R}^n$, let $C = (c_1, \ldots, c_m)$ be an m-tuple of exogenous parameters, and consider the problem
$$\max F(X) \quad \text{subject to} \quad h_1(X) = c_1, \ldots, h_m(X) = c_m. \qquad (11)$$
Suppose that:
(i) the pair $(X^*, \lambda^*)$ is the solution to the aforementioned problem, where $\lambda^* = (\lambda_1^*, \ldots, \lambda_m^*)$ are the corresponding Lagrange multipliers;
(ii) $x_1^*(c_1, \ldots, c_m), \ldots, x_n^*(c_1, \ldots, c_m)$ and $\lambda_1^*(c_1, \ldots, c_m), \ldots, \lambda_m^*(c_1, \ldots, c_m)$ are differentiable functions of $C$.
Then for each $j = 1, \ldots, m$,
$$\frac{\partial}{\partial c_j} F(x_1^*(c_1, \ldots, c_m), \ldots, x_n^*(c_1, \ldots, c_m)) = \lambda_j^*(c_1, \ldots, c_m).$$

Proof. We prove the statement for $j = 1$; the argument for the other $c_j$ is identical. Let $X^*(C) \equiv (x_1^*(c_1, \ldots, c_m), \ldots, x_n^*(c_1, \ldots, c_m))$. Since $X^*(C)$ is a solution to problem (11), by Theorem 7, for a generic $x_i$ we have
$$\frac{\partial F}{\partial x_i}(X^*(C)) = \sum_{j=1}^m \lambda_j^*(c_1, \ldots, c_m) \frac{\partial h_j}{\partial x_i}(X^*(C)). \qquad (12)$$
Moreover, since $h_1(X^*(C)) = c_1$, differentiating with respect to $c_1$ we have
$$\sum_{i=1}^n \frac{\partial h_1}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1} = 1. \qquad (13)$$
Note also that, by the Chain Rule, differentiating $h_j(X^*(C)) = c_j$ with respect to $c_1$ for $j \neq 1$ gives
$$\sum_{i=1}^n \frac{\partial h_j}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1} = 0. \qquad (14)$$
Next, using the Chain Rule to differentiate $F(\cdot)$ with respect to $c_1$, we get
$$\frac{d}{dc_1} F(X^*(C)) = \sum_{i=1}^n \frac{\partial F}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1}. \qquad (15)$$

Substituting $\frac{\partial F}{\partial x_i}(X^*(C))$ from (12) into (15), we get
$$\frac{d}{dc_1} F(X^*(C)) = \sum_{i=1}^n \sum_{j=1}^m \lambda_j^*(c_1, \ldots, c_m) \frac{\partial h_j}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1},$$
or
$$\frac{d}{dc_1} F(X^*(C)) = \sum_{j=1}^m \lambda_j^*(c_1, \ldots, c_m) \sum_{i=1}^n \frac{\partial h_j}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1}.$$
Now recognize from (14) that the inner summation is zero for all $j \neq 1$, so the last expression simplifies to
$$\frac{d}{dc_1} F(X^*(C)) = \lambda_1^*(c_1, \ldots, c_m) \sum_{i=1}^n \frac{\partial h_1}{\partial x_i}(X^*(C)) \frac{\partial x_i^*}{\partial c_1},$$
and from (13) we know that $\sum_{i=1}^n \frac{\partial h_1}{\partial x_i}(\cdot) \frac{\partial x_i^*}{\partial c_1} = 1$; hence $\frac{d}{dc_1} F(X^*(C)) = \lambda_1^*(c_1, \ldots, c_m)$. Q.E.D.

From Theorem 8 we conclude that the Lagrange multiplier can be interpreted as the change in the maximum achieved by the objective function if we relax (tighten) the corresponding constraint by a small amount. Consider, for example, the case in which $F$ is a utility function with two arguments $x_1$ and $x_2$, and suppose that $h(x_1, x_2) = Y_0$ is the budget constraint given an income of $Y_0$ dollars. The Lagrange multiplier associated with this problem tells us by how much happiness (measured by $F$) increases if we increase the individual's income a little; this is called the marginal utility of income.

Suggested Exercise 12 Redo Exercise 10 using the constraint $2x_1^2 + x_2^2 = 3.3$.
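Theorem 8 can be spot-checked numerically on the problem of Exercise 9 (an illustration added here; the constraint perturbation of 0.1 is an arbitrary choice): compare the change in the maximized value with $\lambda^* \Delta c$.

```python
import sympy as sp

x1, x2, lam, c = sp.symbols('x1 x2 lambda c', real=True)

# Exercise 9's problem with the constraint level c left as a parameter
L = x1*x2 - lam*(x1 + 4*x2 - c)
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)],
               [x1, x2, lam], dict=True)[0]
F_star = (x1*x2).subs(sol)               # maximized value as a function of c

print(sp.simplify(F_star))               # c**2/16
print(sp.diff(F_star, c).subs(c, 16))    # 2 = lambda* at c = 16, as Theorem 8 asserts

# Finite-difference check: relax the constraint from 16 to 16.1
print(F_star.subs(c, sp.Rational(161, 10)) - F_star.subs(c, 16))  # 321/1600 ~ 0.2 = 2 * 0.1
```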

3.3 Second Order Sufficient Conditions

Theorem 9 (S.O.S.C. for Constrained Max) Let $F(X), h_1(X), \ldots, h_m(X)$ be $C^2$ functions with domain in $\mathbb{R}^n$ and consider the problem: maximize $F(X)$ subject to $h_1(X) = c_1, \ldots, h_m(X) = c_m$. Suppose that:
(i) $X^* = (x_1^*, \ldots, x_n^*)$ satisfies $h_j(X^*) = c_j$ for $j = 1, \ldots, m$;
(ii) there exists $\lambda^* = (\lambda_1^*, \ldots, \lambda_m^*)$ such that $(X^*, \lambda^*)$ is a critical point of the Lagrangian, that is, $\frac{\partial L}{\partial x_1} = 0, \ldots, \frac{\partial L}{\partial x_n} = 0, \frac{\partial L}{\partial \lambda_1} = 0, \ldots, \frac{\partial L}{\partial \lambda_m} = 0$ at $(X^*, \lambda^*)$;
(iii) the Hessian of $L$ with respect to $X$ at $(X^*, \lambda^*)$, $D_X^2 L(X^*, \lambda^*)$, is negative definite on the linear constraint set $\{V : DH(X^*)V = 0\}$, that is,
$$V \neq 0 \text{ and } DH(X^*)V = 0 \implies V' D_X^2 L(X^*, \lambda^*) V < 0.$$
Then $X^*$ is a strict local constrained max of $F$. The proof is left as an exercise.

Suggested Exercise 13 In Exercise 10, use the second order conditions to determine which of the critical points are local maxima and which are local minima.

4 Calculus of Several Variables

4.1 Weierstrass's and Mean Value Theorems

Definition 16 (Compact Set) A closed and bounded set in $\mathbb{R}^n$ is called a compact set.

Definition 17 (Connected Set) A subset $U \subseteq \mathbb{R}^1$ is connected if for all $x, z \in U$, if $x \le y \le z$ then $y \in U$.

Remark 8 (Weierstrass's Theorem) Let $F: U \to \mathbb{R}^1$ be a continuous function whose domain is a compact subset $U \subseteq \mathbb{R}^n$. Then there exist points $X_m$ and $X_M$ in $U$ such that $F(X_m) \le F(X) \le F(X_M)$ for all $X \in U$. These $X_m$ and $X_M$ are the global min and max of $F$ on $U$.

Theorem 10 (Rolle's Theorem) Suppose that $F: [a, b] \to \mathbb{R}^1$ is continuous on $[a, b]$ and $C^1$ on $(a, b)$. If $F(a) = F(b) = 0$, then there is a point $c \in (a, b)$ such that $F'(c) = 0$.

Proof. If $F$ is constant on its domain, then $F'(c) = 0$ follows directly. Assume, without loss of generality, that $F$ is somewhere positive on $(a, b)$. By Weierstrass's theorem, $F$ achieves its max at some point $c \in [a, b]$. Since $F(c) > 0$, $c$ must be in $(a, b)$. By the usual first order condition for an interior max on $\mathbb{R}^1$, $F'(c) = 0$. Q.E.D.

Theorem 11 (Mean Value Theorem) Let $F: U \to \mathbb{R}^1$ be a continuous function, $C^1$ on the interior of its domain, where the domain is a connected interval $U \subseteq \mathbb{R}^1$. For any points $a, b$ in $U$ there is a point $c$ between $a$ and $b$ such that $F(b) - F(a) = F'(c)(b - a)$.

Proof. Construct the function
$$g_0(x) \equiv F(b) - F(x) + \frac{F(b) - F(a)}{b - a}(x - b).$$
Note that $g_0(a) = 0$ and $g_0(b) = 0$. By Rolle's theorem, there is a point $c \in (a, b)$ such that $g_0'(c) = 0$. Differentiating $g_0(x)$ with respect to $x$ we get $g_0'(x) = -F'(x) + \frac{F(b) - F(a)}{b - a}$; evaluating this expression at $c$ and equating it to zero gives $F'(c) = \frac{F(b) - F(a)}{b - a}$, or $F(b) - F(a) = F'(c)(b - a)$. Q.E.D.

4.2 Taylor Polynomials on $\mathbb{R}^1$

Theorem 12 (Taylor Polynomials in One Dimension) Let $F: U \to \mathbb{R}^1$ be a $C^2$ function whose domain is a connected interval $U \subseteq \mathbb{R}^1$. For any points $a$ and $a + h$ in $U$ there exists a point $c_2$ between $a$ and $a + h$ such that
$$F(a + h) = F(a) + F'(a)h + \tfrac{1}{2} F''(c_2) h^2.$$

Proof. Fix $a$ and $a + h$ in $U$. Define a function
$$g_1(x) \equiv F(x) - [F(a) + F'(a)(x - a)] - M_1 (x - a)^2, \quad \text{where } M_1 = \tfrac{1}{h^2}[F(a + h) - F(a) - F'(a)h],$$
so that $g_1(a + h) = 0$. Note that $g_1(a) = 0$ and $g_1'(x) = F'(x) - F'(a) - 2M_1(x - a)$, thus $g_1(a) = g_1'(a) = 0$. Since $g_1(a) = g_1(a + h) = 0$, we can apply Rolle's theorem and conclude that there exists $c_1$ between $a$ and $a + h$ such that $g_1'(c_1) = 0$. Moreover, since $g_1'(c_1) = 0$ and $g_1'(a) = 0$, by Rolle's theorem again we conclude that there is a $c_2$ between $c_1$ and $a$ such that $g_1''(c_2) = 0$. Next, note that $g_1''(x) = F''(x) - 2M_1$, so $F''(c_2) = 2M_1$; therefore $F''(c_2) = \tfrac{2}{h^2}[F(a + h) - F(a) - F'(a)h]$, or $F(a + h) = F(a) + F'(a)h + \tfrac{1}{2}F''(c_2)h^2$.

The term $\tfrac{1}{2}F''(c_2)h^2$ is called the residual of the Taylor expansion and is denoted $R_1(h; a) \equiv \tfrac{1}{2}F''(c_2)h^2$. Note that $\frac{R_1(h; a)}{h} = \tfrac{1}{2}F''(c_2)h$ and $c_2 \to a$ whenever $h \to 0$, because we know that $c_2$ lies between $a$ and $a + h$. Thus
$$\lim_{h \to 0} \frac{R_1(h; a)}{h} = \lim_{h \to 0} \tfrac{1}{2} F''(c_2) h = \tfrac{1}{2} F''(a) \cdot 0 = 0. \qquad \text{Q.E.D.}$$
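Looking back at Theorem 11, a tiny sympy illustration (added here as a sketch; the function $F(x) = x^3$ on $[0, 2]$ is an arbitrary choice): solve $F'(c) = \frac{F(b) - F(a)}{b - a}$ for the intermediate point $c$.

```python
import sympy as sp

x = sp.symbols('x', real=True, positive=True)
F = x**3
a, b = 0, 2

slope = (F.subs(x, b) - F.subs(x, a)) / (b - a)   # (8 - 0)/2 = 4
c = sp.solve(sp.Eq(sp.diff(F, x), slope), x)      # solve 3*c**2 = 4
print(c)   # [2*sqrt(3)/3], which lies in (0, 2) as the theorem asserts
```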

Remark 9 (Taylor Polynomials of Order k) Let $F: U \to \mathbb{R}^1$ be a $C^{k+1}$ function whose domain is a connected interval $U \subseteq \mathbb{R}^1$. For any points $a$ and $a + h$ in $U$ there exists a point $c$ between $a$ and $a + h$ such that
$$F(a + h) = F(a) + F'(a)h + \tfrac{1}{2}F''(a)h^2 + \cdots + \tfrac{1}{k!}F^{[k]}(a)h^k + \tfrac{1}{(k+1)!}F^{[k+1]}(c)h^{k+1}.$$
The proof follows the same lines as the proof of Theorem 12; for $k = 2$, redefine
$$g_1(x) \equiv F(x) - [F(a) + F'(a)(x - a) + \tfrac{1}{2}F''(a)(x - a)^2] - M_1(x - a)^3,$$
where $M_1 = \tfrac{1}{h^3}[F(a + h) - F(a) - F'(a)h - \tfrac{1}{2}F''(a)h^2]$. Similarly to the case of Theorem 12, we define the residual as $R_k(h; a) \equiv \tfrac{1}{(k+1)!}F^{[k+1]}(c)h^{k+1}$, where $c \to a$ as $h \to 0$; then the residual has the property
$$\lim_{h \to 0} \frac{R_k(h; a)}{h^k} = \lim_{h \to 0} \tfrac{1}{(k+1)!} F^{[k+1]}(a) \frac{h^{k+1}}{h^k} = 0.$$

4.3 Taylor Polynomials in $\mathbb{R}^n$

Theorem 13 (Taylor Polynomial in n Dimensions) Let $F: U \to \mathbb{R}^1$ be a $C^2$ function whose domain is an open set $U \subseteq \mathbb{R}^n$. Let $a$ be a point in $U$. Then there exists a $C^2$ function $h \mapsto R_2(h; a)$ such that for any point $a + h$ in $U$ with the property that the line segment $l(a, a + h)$ lies in $U$:
$$F(a + h) = F(a) + DF(a)h + \tfrac{1}{2} h' D^2 F(a) h + R_2(h; a),$$
where $\frac{R_2(h; a)}{\|h\|^2} \to 0$ as $h \to 0$.

Proof. Define the $C^1$ function $\varphi: [0, 1] \to U$ by $\varphi(t) \equiv a + th$; that is, $\varphi(t)$ parameterizes the line segment $l(a, a + h)$. Define $f: [0, 1] \to \mathbb{R}^1$ by $f(t) \equiv F(\varphi(t))$. Applying the one dimensional Taylor expansion of Remark 9 to $f$ around zero, for any $t \in [0, 1]$ we can write $f(t) = f(0) + f'(0)t + \tfrac{1}{2}f''(0)t^2 + R(t)$, where $R(t)/t^2 \to 0$ as $t \to 0$. Now, by the definition of $f$ and the Chain Rule, $f'(t) = DF(a + th)h$ and $f''(t) = h' D^2 F(a + th)h$; then the expression above implies
$$F(a + th) = F(a) + DF(a)(th) + \tfrac{1}{2}(th)' D^2 F(a)(th) + R(t).$$
Define $\tilde{h} \equiv th$ and get $F(a + \tilde{h}) = F(a) + DF(a)\tilde{h} + \tfrac{1}{2}\tilde{h}' D^2 F(a)\tilde{h} + R_2(\tilde{h}; a)$. Finally, note that $R_2(\tilde{h}; a)/\|\tilde{h}\|^2 \to 0$ as $\tilde{h} \to 0$. Q.E.D.

Remark 10 (Taylor Polynomials of Order k in n Dimensions) Let $F: U \to \mathbb{R}^1$ with $U \subseteq \mathbb{R}^n$ be a $C^k$ function in a ball $B$ around $a \in U$. Then there exists a $C^k$ function $R_k(\cdot; a)$ such that for any point $a + h$ in $B$,
$$F(a + h) = F(a) + DF(a)(h) + \tfrac{1}{2!}D^2F(a)(h, h) + \cdots + \tfrac{1}{k!}D^kF(a)(h, \ldots, h) + R_k(h; a),$$
where $\frac{R_k(h; a)}{\|h\|^k} \to 0$ as $h \to 0$ and
$$D^k F(a)(h, \ldots, h) \equiv \sum_{k_1 + \cdots + k_n = k} \frac{k!}{k_1! \cdots k_n!} \frac{\partial^k F}{\partial x_1^{k_1} \cdots \partial x_n^{k_n}}(a)\, h_1^{k_1} h_2^{k_2} \cdots h_n^{k_n},$$
where the summation is over all n-tuples $(k_1, \ldots, k_n)$ of nonnegative integers such that $k_1 + \cdots + k_n = k$. The proof is similar to the proof of Theorem 13: we parameterize $F$ on a line segment $\varphi$ and use Remark 9.

Suggested Exercise 14 Compute the first and second order Taylor polynomials of the exponential function $f(x) = e^x$ at $x = 0$.

Suggested Exercise 15 Compute the Taylor polynomial of order 2 of $F(x, y) = x^{1/4} y^{3/4}$ at $(1, 1)$.
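A sympy sketch for checking Exercises 14 and 15 (illustrative code added here): `series` handles the one-variable case, and the two-variable polynomial is assembled from the gradient and Hessian of Definition 10, with $h = x - 1$ and $k = y - 1$.

```python
import sympy as sp

x, y, h, k = sp.symbols('x y h k', real=True)

# Exercise 14: Taylor polynomials of e^x at 0
print(sp.series(sp.exp(x), x, 0, 2).removeO())   # 1 + x
print(sp.series(sp.exp(x), x, 0, 3).removeO())   # 1 + x + x**2/2

# Exercise 15: order-2 Taylor polynomial of F = x**(1/4) * y**(3/4) at (1, 1)
F = x**sp.Rational(1, 4) * y**sp.Rational(3, 4)
a = {x: 1, y: 1}
grad = sp.Matrix([F.diff(x), F.diff(y)]).subs(a)
H = sp.hessian(F, (x, y)).subs(a)
step = sp.Matrix([h, k])
P2 = F.subs(a) + (grad.T * step)[0] + sp.Rational(1, 2) * (step.T * H * step)[0]
print(sp.expand(P2))   # 1 + h/4 + 3*k/4 - 3*h**2/32 + 3*h*k/16 - 3*k**2/32
```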

At this point we are ready to present a proof of Theorem 5. For ease of exposition, Theorem 5 is stated again.

Theorem 5 Let $F: U \to \mathbb{R}^1$ be $C^2$ with open domain $U \subseteq \mathbb{R}^n$. Suppose $X^*$ is an interior point of $U$ and suppose that $X^*$ is a local min of $F$ (respectively, local max). Then $DF(X^*) = 0$ and $D^2 F(X^*)$ is positive semidefinite (respectively, negative semidefinite).

Proof. We claim that $DF(X^*) = 0$ and $W' D^2 F(X^*) W < 0$ imply that the function $t \mapsto F(X^* + tW)$ is decreasing in $t$ for small positive $t$. To prove the claim, fix $X^*$ and $W$ and consider a Taylor expansion of $F$ around $X^*$:
$$F(X^* + tW) = F(X^*) + DF(X^*)(tW) + \tfrac{1}{2}(tW)' D^2 F(X^*)(tW) + R_2.$$
Next, for small $t$ the remainder $R_2$ is negligible, and imposing $DF(X^*) = 0$ we have $F(X^* + tW) \approx F(X^*) + \tfrac{t^2}{2} W' D^2 F(X^*) W$; then $W' D^2 F(X^*) W < 0$ implies $F(X^* + tW) < F(X^*)$, that is, $F$ is decreasing along the direction $W$ near $X^*$. This proves the claim. Next, since such an $F$ is decreasing in a neighborhood of $X^*$, $X^*$ cannot be a local min. Therefore, if a critical point $X^*$ is a local min of $F$, there is no $W$ such that $W' D^2 F(X^*) W < 0$; in other words, $V' D^2 F(X^*) V \ge 0$ for any $V \in \mathbb{R}^n$. Hence $D^2 F(X^*)$ is positive semidefinite. The case for the max follows a similar argument. Q.E.D.


More information

UNCONSTRAINED OPTIMIZATION PAUL SCHRIMPF OCTOBER 24, 2013

UNCONSTRAINED OPTIMIZATION PAUL SCHRIMPF OCTOBER 24, 2013 PAUL SCHRIMPF OCTOBER 24, 213 UNIVERSITY OF BRITISH COLUMBIA ECONOMICS 26 Today s lecture is about unconstrained optimization. If you re following along in the syllabus, you ll notice that we ve skipped

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

ECON 186 Class Notes: Optimization Part 2

ECON 186 Class Notes: Optimization Part 2 ECON 186 Class Notes: Optimization Part 2 Jijian Fan Jijian Fan ECON 186 1 / 26 Hessians The Hessian matrix is a matrix of all partial derivatives of a function. Given the function f (x 1,x 2,...,x n ),

More information

E 600 Chapter 3: Multivariate Calculus

E 600 Chapter 3: Multivariate Calculus E 600 Chapter 3: Multivariate Calculus Simona Helmsmueller August 21, 2017 Goals of this lecture: Know when an inverse to a function exists, be able to graphically and analytically determine whether a

More information

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S}

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S} 6 Optimization The interior of a set S R n is the set int X = {x 2 S : 9 an open box B such that x 2 B S} Similarly, the boundary of S, denoted@s, istheset @S := {x 2 R n :everyopenboxb s.t. x 2 B contains

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

Functions of Several Variables

Functions of Several Variables Functions of Several Variables The Unconstrained Minimization Problem where In n dimensions the unconstrained problem is stated as f() x variables. minimize f()x x, is a scalar objective function of vector

More information

OR MSc Maths Revision Course

OR MSc Maths Revision Course OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision

More information

Higher order derivative

Higher order derivative 2 Î 3 á Higher order derivative Î 1 Å Iterated partial derivative Iterated partial derivative Suppose f has f/ x, f/ y and ( f ) = 2 f x x x 2 ( f ) = 2 f x y x y ( f ) = 2 f y x y x ( f ) = 2 f y y y

More information

The Derivative. Appendix B. B.1 The Derivative of f. Mappings from IR to IR

The Derivative. Appendix B. B.1 The Derivative of f. Mappings from IR to IR Appendix B The Derivative B.1 The Derivative of f In this chapter, we give a short summary of the derivative. Specifically, we want to compare/contrast how the derivative appears for functions whose domain

More information

Caculus 221. Possible questions for Exam II. March 19, 2002

Caculus 221. Possible questions for Exam II. March 19, 2002 Caculus 221 Possible questions for Exam II March 19, 2002 These notes cover the recent material in a style more like the lecture than the book. The proofs in the book are in section 1-11. At the end there

More information

Econ Slides from Lecture 14

Econ Slides from Lecture 14 Econ 205 Sobel Econ 205 - Slides from Lecture 14 Joel Sobel September 10, 2010 Theorem ( Lagrange Multipliers ) Theorem If x solves max f (x) subject to G(x) = 0 then there exists λ such that Df (x ) =

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Copyrighted Material. L(t, x(t), u(t))dt + K(t f, x f ) (1.2) where L and K are given functions (running cost and terminal cost, respectively),

Copyrighted Material. L(t, x(t), u(t))dt + K(t f, x f ) (1.2) where L and K are given functions (running cost and terminal cost, respectively), Chapter One Introduction 1.1 OPTIMAL CONTROL PROBLEM We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Mathematical Preliminaries for Microeconomics: Exercises

Mathematical Preliminaries for Microeconomics: Exercises Mathematical Preliminaries for Microeconomics: Exercises Igor Letina 1 Universität Zürich Fall 2013 1 Based on exercises by Dennis Gärtner, Andreas Hefti and Nick Netzer. How to prove A B Direct proof

More information

g(t) = f(x 1 (t),..., x n (t)).

g(t) = f(x 1 (t),..., x n (t)). Reading: [Simon] p. 313-333, 833-836. 0.1 The Chain Rule Partial derivatives describe how a function changes in directions parallel to the coordinate axes. Now we shall demonstrate how the partial derivatives

More information

Week 3: Sets and Mappings/Calculus and Optimization (Jehle September and Reny, 20, 2015 Chapter A1/A2)

Week 3: Sets and Mappings/Calculus and Optimization (Jehle September and Reny, 20, 2015 Chapter A1/A2) Week 3: Sets and Mappings/Calculus and Optimization (Jehle and Reny, Chapter A1/A2) Tsun-Feng Chiang *School of Economics, Henan University, Kaifeng, China September 20, 2015 Week 3: Sets and Mappings/Calculus

More information

Math Camp Notes: Linear Algebra I

Math Camp Notes: Linear Algebra I Math Camp Notes: Linear Algebra I Basic Matrix Operations and Properties Consider two n m matrices: a a m A = a n a nm Then the basic matrix operations are as follows: a + b a m + b m A + B = a n + b n

More information

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if

More information

14 March 2018 Module 1: Marginal analysis and single variable calculus John Riley. ( x, f ( x )) are the convex combinations of these two

14 March 2018 Module 1: Marginal analysis and single variable calculus John Riley. ( x, f ( x )) are the convex combinations of these two 4 March 28 Module : Marginal analysis single variable calculus John Riley 4. Concave conve functions A function f( ) is concave if, for any interval [, ], the graph of a function f( ) is above the line

More information

3.5 Quadratic Approximation and Convexity/Concavity

3.5 Quadratic Approximation and Convexity/Concavity 3.5 Quadratic Approximation and Convexity/Concavity 55 3.5 Quadratic Approximation and Convexity/Concavity Overview: Second derivatives are useful for understanding how the linear approximation varies

More information

Differentiation. f(x + h) f(x) Lh = L.

Differentiation. f(x + h) f(x) Lh = L. Analysis in R n Math 204, Section 30 Winter Quarter 2008 Paul Sally, e-mail: sally@math.uchicago.edu John Boller, e-mail: boller@math.uchicago.edu website: http://www.math.uchicago.edu/ boller/m203 Differentiation

More information

Chapter 8: Taylor s theorem and L Hospital s rule

Chapter 8: Taylor s theorem and L Hospital s rule Chapter 8: Taylor s theorem and L Hospital s rule Theorem: [Inverse Mapping Theorem] Suppose that a < b and f : [a, b] R. Given that f (x) > 0 for all x (a, b) then f 1 is differentiable on (f(a), f(b))

More information

Mathematical Foundations II

Mathematical Foundations II Mathematical Foundations 2-1- Mathematical Foundations II A. Level and superlevel sets 2 B. Convex sets and concave functions 4 C. Parameter changes: Envelope Theorem I 17 D. Envelope Theorem II 41 48

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

ARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1

ARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1 AREA, Fall 5 LECTURE #: WED, OCT 5, 5 PRINT DATE: OCTOBER 5, 5 (GRAPHICAL) CONTENTS 1. Graphical Overview of Optimization Theory (cont) 1 1.4. Separating Hyperplanes 1 1.5. Constrained Maximization: One

More information

Econ Slides from Lecture 8

Econ Slides from Lecture 8 Econ 205 Sobel Econ 205 - Slides from Lecture 8 Joel Sobel September 1, 2010 Computational Facts 1. det AB = det BA = det A det B 2. If D is a diagonal matrix, then det D is equal to the product of its

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Differentiable Functions

Differentiable Functions Differentiable Functions Let S R n be open and let f : R n R. We recall that, for x o = (x o 1, x o,, x o n S the partial derivative of f at the point x o with respect to the component x j is defined as

More information

The Inverse Function Theorem 1

The Inverse Function Theorem 1 John Nachbar Washington University April 11, 2014 1 Overview. The Inverse Function Theorem 1 If a function f : R R is C 1 and if its derivative is strictly positive at some x R, then, by continuity of

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

Economics 101A (Lecture 3) Stefano DellaVigna

Economics 101A (Lecture 3) Stefano DellaVigna Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem

More information

Lecture 8: Basic convex analysis

Lecture 8: Basic convex analysis Lecture 8: Basic convex analysis 1 Convex sets Both convex sets and functions have general importance in economic theory, not only in optimization. Given two points x; y 2 R n and 2 [0; 1]; the weighted

More information

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012 (Homework 1: Chapter 1: Exercises 1-7, 9, 11, 19, due Monday June 11th See also the course website for lectures, assignments, etc) Note: today s lecture is primarily about definitions Lots of definitions

More information

Mathematical Foundations -1- Convexity and quasi-convexity. Convex set Convex function Concave function Quasi-concave function Supporting hyperplane

Mathematical Foundations -1- Convexity and quasi-convexity. Convex set Convex function Concave function Quasi-concave function Supporting hyperplane Mathematical Foundations -1- Convexity and quasi-convexity Convex set Convex function Concave function Quasi-concave function Supporting hyperplane Mathematical Foundations -2- Convexity and quasi-convexity

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Problem Set 2 Solutions

Problem Set 2 Solutions EC 720 - Math for Economists Samson Alva Department of Economics Boston College October 4 2011 1. Profit Maximization Problem Set 2 Solutions (a) The Lagrangian for this problem is L(y k l λ) = py rk wl

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information