Notes for Econ by Gabriel A. Lozada

The equation numbers and page numbers refer to Knut Sydsæter and Peter J. Hammond's textbook Mathematics for Economic Analysis (ISBN X, 1995).

1. Convexity, Quadratic Forms, and Minors

Let A denote a matrix. It does not have to be square. A minor of A of order r is obtained by deleting all but r rows and r columns of A, then taking the determinant of the resulting r × r matrix. [p. 466; this page has a good example.] One can show that a matrix of dimension m × n has

    ( m! / [r! (m − r)!] ) · ( n! / [r! (n − r)!] )

minors of order r, for r = 1, ..., n.

Now let A denote a square matrix. A principal minor of A of order r is obtained by deleting all but r rows and the corresponding r columns of A, then taking the determinant of the resulting r × r matrix. [p. 536] (For example, if you keep the first, third, and fourth rows, then you have to keep the first, third, and fourth columns.) A principal minor of A of order r is denoted by Δ_r of A. [p. 638] One can show that a square matrix of dimension n × n has n! / [r! (n − r)!] principal minors of order r, for r = 1, ..., n.

Again let A denote a square matrix. A leading principal minor of A of order r is obtained by deleting all but the first r rows and the first r columns of A, then taking the determinant of the resulting r × r matrix. [p. 534] A leading principal minor of A of order r is denoted by D_r of A. A square matrix of dimension n × n has only 1 leading principal minor of order r, for r = 1, ..., n.

Example. Suppose A is a 4 × 4 matrix whose diagonal entries are 1, 6, 11, and 16. This matrix is not symmetric. Usually one is interested in the minors only of symmetric matrices, but there is nothing wrong with finding the minors of this non-symmetric matrix. The leading principal minor of order 1 of A is D_1 = 1. There are four principal minors of order 1 of A; they are the Δ_1's: 1 = D_1, 6, 11, and 16. There are sixteen minors of A of order 1. [important]
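These definitions are easy to check numerically. The following Python sketch is my own illustration, not part of the notes; the 4 × 4 test matrix is a non-symmetric stand-in I chose so that its diagonal entries are 1, 6, 11, and 16, matching the example's Δ_1's.

```python
import numpy as np
from itertools import combinations

def leading_principal_minor(A, r):
    """D_r: determinant of the top-left r x r submatrix of A."""
    return np.linalg.det(A[:r, :r])

def principal_minors(A, r):
    """All Delta_r: keep r rows and the SAME r columns, then take the det."""
    n = A.shape[0]
    return [np.linalg.det(A[np.ix_(idx, idx)])
            for idx in combinations(range(n), r)]

# Illustrative non-symmetric 4 x 4 matrix (my choice) with diagonal 1, 6, 11, 16
A = np.arange(1.0, 17.0).reshape(4, 4)

assert abs(leading_principal_minor(A, 1) - 1.0) < 1e-9       # D_1 = 1
assert np.allclose(sorted(principal_minors(A, 1)), [1, 6, 11, 16])
assert len(principal_minors(A, 1)) == 4   # n!/[r!(n-r)!] = 4 of order 1
assert A.size == 16                       # sixteen order-1 minors (the entries)
```

Note that `len(principal_minors(A, r))` reproduces the counting formula n!/[r!(n−r)!] for each order r.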

The leading principal minor of order 2 of A is D_2, the determinant of the submatrix formed from rows and columns 1 and 2. There are six principal minors of order 2 of A; they are the Δ_2's: one from rows and columns 1 and 2 (this one is D_2), and one each from rows and columns 1 and 3; 1 and 4; 2 and 3; 2 and 4; and 3 and 4. There are thirty-six minors of A of order 2.

The leading principal minor of order 3 of A is D_3, the determinant of the submatrix formed from rows and columns 1, 2, and 3. There are four principal minors of order 3 of A; they are the Δ_3's: one from rows and columns 1, 2, and 3 (this one is D_3), and one each from rows and columns 1, 2, and 4; 1, 3, and 4; and 2, 3, and 4. There are sixteen minors of A of order 3.

The leading principal minor of order 4 of A is D_4 = |A|. There is only one principal minor of order 4 of A; it is Δ_4, and it is equal to |A|. There is only one minor of order 4 of A; it is |A|. [End of Example]

Let f be a C² function mapping R^n into R^1. Denote the Hessian matrix of f(x) by ∇²f(x); this matrix has dimension n × n. Let "D_r of ∇²f(x)" denote the r-th-order leading principal minor of the Hessian of f. Let "Δ_r of ∇²f(x)" denote all the r-th-order principal minors of the Hessian of f.

Proposition 1. One has, for S an open convex subset of R^n:

    D_r of ∇²f(x) > 0 for r = 1, ..., n and for all x ∈ S        (1)

    ⟺  ∇²f(x) is positive definite for all x ∈ S                 (2)

    ⇒  f(x) is strictly convex on S.                             (3)
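The equivalence of (1) and (2) (Sylvester's criterion) can be verified numerically; here is a minimal sketch, using a test function of my own choosing rather than one from the notes.

```python
import numpy as np

def leading_minors_all_positive(H, tol=1e-12):
    """Condition (1): every leading principal minor D_r of H is > 0."""
    n = H.shape[0]
    return all(np.linalg.det(H[:r, :r]) > tol for r in range(1, n + 1))

# Hessian of f(x, y) = x^2 + x*y + y^2, which is strictly convex on R^2
H = np.array([[2.0, 1.0], [1.0, 2.0]])

assert leading_minors_all_positive(H)        # D_1 = 2 > 0, D_2 = 3 > 0
assert np.all(np.linalg.eigvalsh(H) > 0)     # agrees with (2): positive definite
```

Keep in mind that this leading-minor test characterizes positive definiteness only; for positive semidefiniteness, all principal minors Δ_r must be examined, not just the leading ones.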

Also,

    All the Δ_r of ∇²f(x) ≥ 0 for r = 1, ..., n and for all x ∈ S    (4)

    ⟺  ∇²f(x) is positive semidefinite for all x ∈ S                 (5)

    ⟺  f(x) is convex on S.                                          (6)

If ∇²f(x) is replaced by an arbitrary symmetric matrix, it is still true that (1) ⟺ (2) and (4) ⟺ (5).

Proof. (1) ⟺ (2): Th. (a). (2) ⇒ (3): 17.24, p. 637, or Th. (b). (4) ⟺ (5): see the textbook. (5) ⟺ (6): 17.26, p. 638, or Th. (b).

The typical procedure is to check (1) first. If (1) doesn't apply because one of the signs was strictly negative, then the contrapositive of (4) ⟺ (6) tells you that the function is not convex. (This is because each D_i is one of the Δ_i's.)

[Proposition 1′: Similarly,

    D_r of ∇²f(x) alternate in sign beginning with < 0 for r = 1, ..., n and x ∈ S       (1′)

    ⟺  ∇²f(x) is negative definite for all x ∈ S                                         (2′)

    ⇒  f(x) is strictly concave on S.                                                    (3′)

Also,

    All the Δ_r of ∇²f(x) alternate in sign beginning with ≤ 0 for r = 1, ..., n and x ∈ S   (4′)

    ⟺  ∇²f(x) is negative semidefinite for all x ∈ S                                         (5′)

    ⟺  f(x) is concave on S.]                                                                (6′)

The relevant section of your textbook has an excellent treatment of quasiconvexity and quasiconcavity. You should learn it. However, you should replace the theorem on p. 647 with the remarks which come between here and the end of this section.

Simon and Blume (p. 526) give the following definition (they use different notation and a different but equivalent statement): Let S be an open convex

subset of R^n. A C¹ function f : S → R is quasiconvex at x* ∈ S if and only if

    ∇f(x*)·(z − x*) > 0 implies f(z) > f(x*), for all z ∈ S.        (7)

One says that f is quasiconvex on S if (7) holds for all x* ∈ S. [Aside: A quasiconcave function is defined by reversing both inequalities in (7).]

Simon and Blume (p. 527) give the following definition: Let S be an open convex subset of R^n. A C¹ function f : S → R is pseudoconvex at x* ∈ S if

    ∇f(x*)·(z − x*) ≥ 0 implies f(z) ≥ f(x*), for all z ∈ S.        (8)

One says that f is pseudoconvex on S if (8) holds for all x* ∈ S.¹ [Aside: A pseudoconcave function is defined by reversing both inequalities in (8).]

¹ Simon and Blume actually give ">" instead of "≥" as the second relation in (8); however, their definition of pseudoconcavity completely agrees with mine, and I believe their definition of pseudoconvexity is wrong. To see why, combine Simon and Blume's Theorems 21.2 and 21.3: f concave ⟺ f(z) − f(x*) ≤ ∇f(x*)·(z − x*). So, as they note in their Corollary 21.4: f concave ⇒ [∇f(x*)·(z − x*) ≤ 0 ⇒ f(z) ≤ f(x*)]. Then Simon and Blume define a function f to be pseudoconcave if, as I wrote above, ∇f(x*)·(z − x*) ≤ 0 ⇒ f(z) ≤ f(x*). By analogy and by Simon and Blume's Theorem 21.2, f convex ⟺ f(z) − f(x*) ≥ ∇f(x*)·(z − x*); then, in the spirit of their Corollary 21.4: f convex ⇒ [∇f(x*)·(z − x*) ≥ 0 ⇒ f(z) ≥ f(x*)]. The analogous definition of f being pseudoconvex is therefore ∇f(x*)·(z − x*) ≥ 0 ⇒ f(z) ≥ f(x*), which is (8) above. My definition is also the same as the one in S. Schaible, "Second-Order Characterizations of Pseudoconvex Quadratic Functions," Journal of Optimization Theory and Applications, vol. 21, no. 1, Jan. 1977. Also, note that since everyone defines pseudoconcavity by ∇f(x*)·(z − x*) ≤ 0 ⇒ f(z) ≤ f(x*), and since that implies ∇(−f)(x*)·(z − x*) ≥ 0 ⇒ (−f)(z) ≥ (−f)(x*), my definition ensures that if f is pseudoconcave then −f is pseudoconvex, as is appropriate.
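Definition (8) can be spot-checked numerically. The sketch below is my own construction (the function and the point x* are assumptions chosen for illustration): for a convex, hence pseudoconvex, function it samples many points z and confirms that ∇f(x*)·(z − x*) ≥ 0 never occurs together with f(z) < f(x*).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(p):
    """A convex (hence pseudoconvex) test function: f(x, y) = x^2 + y^2."""
    return p[0] ** 2 + p[1] ** 2

def grad_f(p):
    return 2.0 * p

# Spot-check (8): grad f(x*).(z - x*) >= 0  must force  f(z) >= f(x*)
xstar = np.array([1.0, -0.5])
violations = 0
for _ in range(1000):
    z = rng.uniform(-3, 3, size=2)
    if grad_f(xstar) @ (z - xstar) >= 0 and f(z) < f(xstar):
        violations += 1

assert violations == 0
```

A sampling check like this can only refute pseudoconvexity, never prove it; proving it requires an argument such as Proposition 3 below in the notes.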

Pseudoconvexity is not very interesting in and of itself (though see point 4 of Section 4 below); its main use lies in its similarity with quasiconvex functions. By a theorem of Simon and Blume (p. 528), one has:

Proposition 2. [Pseudoconvexity and Quasiconvexity.] Let S be a convex subset of R^n and let f : S → R be a C¹ function. Then:

1. f is pseudoconvex on S ⇒ f is quasiconvex on S.

2. If S is open and if ∇f(x) ≠ 0 for all x ∈ S, then: f is pseudoconvex on S ⟺ f is quasiconvex on S.

[Proposition 2′: Similarly,

1. f is pseudoconcave on S ⇒ f is quasiconcave on S.

2. If S is open and if ∇f(x) ≠ 0 for all x ∈ S, then: f is pseudoconcave on S ⟺ f is quasiconcave on S.]

The neat thing about pseudoconvex functions is that "there is an easy second derivative test for such functions, a test that is also usually the easiest check that a given function is quasiconvex" [I am partially quoting from Simon and Blume, p. 528]. Here is that test [Simon and Blume, Th. on p. 530]:

Proposition 3. [Test of Pseudoconvexity.] Let f be a C² function defined in an open, convex set S in R^n. Define the bordered Hessian determinants δ_r(x), r = 1, ..., n, by

    δ_r(x) = det [ 0        f_1(x)    ⋯   f_r(x)  ]
                 [ f_1(x)   f_11(x)   ⋯   f_1r(x) ]
                 [ ⋮        ⋮              ⋮      ]
                 [ f_r(x)   f_r1(x)   ⋯   f_rr(x) ]

where f_i(x) denotes ∂f(x)/∂x_i and f_ij(x) denotes ∂²f(x)/∂x_i∂x_j. A sufficient condition for f to be pseudoconvex is that δ_r(x) < 0 for r = 2, ..., n, and all x ∈ S.

[Proposition 3′: Similarly, a sufficient condition for f to be pseudoconcave is that the δ_r(x) alternate in sign beginning with > 0 for r = 2, ..., n, and all x ∈ S.]

In other words, if you want to check quasiconvexity, you usually check pseudoconvexity, using Proposition 3, then appeal to Proposition 2.
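The determinants δ_r of Proposition 3 are straightforward to assemble numerically. Here is a minimal sketch; the test function and evaluation point are my own choices, not taken from the notes.

```python
import numpy as np

def delta_r(grad, hess, r):
    """delta_r(x): det of the (r+1)x(r+1) matrix that borders the top-left
    r x r block of the Hessian with the first r gradient components."""
    B = np.zeros((r + 1, r + 1))
    B[0, 1:] = grad[:r]      # top border: f_1, ..., f_r
    B[1:, 0] = grad[:r]      # left border: f_1, ..., f_r
    B[1:, 1:] = hess[:r, :r]  # Hessian block: f_ij
    return np.linalg.det(B)

# f(x, y) = x^2 + y^2 at the point (1, 2): gradient (2, 4), Hessian 2I
grad = np.array([2.0, 4.0])
hess = 2.0 * np.eye(2)
d2 = delta_r(grad, hess, 2)

assert d2 < 0   # delta_2 < 0, consistent with f being pseudoconvex (it is convex)
```

To apply Proposition 3 in earnest one would evaluate δ_2, ..., δ_n at every x in S (or argue their signs analytically); a single point, as here, is only illustrative.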

2. First-Order Conditions

This is Theorem 18.6 from p. 437 of Simon and Blume:

Proposition 4. Suppose that f, h_1, ..., h_j, and g_1, ..., g_k are C¹ functions of n variables. Suppose that x* ∈ R^n is a local minimum of f(x) on the constraint set defined by the j equalities and k inequalities

    h_1(x) = c_1, ..., h_j(x) = c_j,        (9)

    g_1(x) ≥ b_1, ..., g_k(x) ≥ b_k.        (10)

Without loss of generality, we can assume that the first k_0 inequality constraints are binding ("active") at x* and that the other k − k_0 inequality constraints are not binding. Suppose that the following "nondegenerate constraint qualification" is satisfied: the rank at x* of the Jacobian matrix of the equality constraints and the binding inequality constraints,

    [ ∂h_1(x*)/∂x_1    ⋯   ∂h_1(x*)/∂x_n   ]
    [       ⋮                    ⋮          ]
    [ ∂h_j(x*)/∂x_1    ⋯   ∂h_j(x*)/∂x_n   ]
    [ ∂g_1(x*)/∂x_1    ⋯   ∂g_1(x*)/∂x_n   ]
    [       ⋮                    ⋮          ]
    [ ∂g_k0(x*)/∂x_1   ⋯   ∂g_k0(x*)/∂x_n  ]

is k_0 + j, as large as it can be. Form the Lagrangian

    L(x, λ, µ) = f(x) + Σ_{i=1}^{j} λ_i [c_i − h_i(x)] + Σ_{i=1}^{k} µ_i [b_i − g_i(x)].

Then there exist multipliers λ* and µ* such that:

1. ∂L(x*, λ*, µ*)/∂λ_i = 0 for all i = 1, ..., j. This is equivalent to: h_i(x*) = c_i for all i = 1, ..., j. [important, as are conditions 2 and 3 below]

2. ∂L(x*, λ*, µ*)/∂x_i = 0 for all i = 1, ..., n. (Or, if in addition one requires that x_i ≥ 0, this condition becomes: x_i ≥ 0, ∂L(x*, λ*, µ*)/∂x_i ≥ 0, and x_i [∂L(x*, λ*, µ*)/∂x_i] = 0.)

3. µ*_i ≥ 0, g_i(x*) − b_i ≥ 0, and µ*_i [g_i(x*) − b_i] = 0 for all i = 1, ..., k.

These three conditions are often called the Kuhn-Tucker conditions. The last condition is sometimes called the "complementary slackness" condition. Be sure to read the accompanying Note in Simon and Blume. (Also, there are other constraint qualifications you can use instead of the nondegenerate constraint qualification used above. The other constraint qualifications are given on p. 476 of Simon and Blume.)

[Proposition 4′: For a maximum, change (10) to:

    g_1(x) ≤ b_1, ..., g_k(x) ≤ b_k.        (10′)

Then the Lagrangian is formed in the same way. Condition 1 is unchanged. The first sentence of Condition 2 is unchanged. In the second, parenthesized sentence of Condition 2, the sign of ∂L(x*, λ*, µ*)/∂x_i should be changed from ≥ 0 to ≤ 0. Condition 3 becomes: µ*_i ≥ 0, g_i(x*) − b_i ≤ 0, and µ*_i [g_i(x*) − b_i] = 0 for all i = 1, ..., k.]

By the way, it is easy to prove that if g_1, ..., g_k are all quasiconcave, then the set of x's which satisfy (10) is a convex set. Here is the proof. For i = 1, 2, ..., k, define S_i = {x : g_i(x) ≥ b_i}. This is an upper level set of g_i. It is convex if g_i is a quasiconcave function. If g_1, ..., g_k are all quasiconcave, then S_1, ..., S_k are all convex, and their intersection ∩_{i=1}^{k} S_i is also convex [[17.13], p. 620]. This intersection is the set of all x which satisfy the constraints (10). This completes the proof. (See the conditions on g_1, ..., g_k in Section 4, points 3, 4, and 5.)

[Aside: Similarly, if g_1, ..., g_k are all quasiconvex, then {x : (10′) is satisfied} is a convex set. See Section 4, points 3, 4, and 5.]

3. Second-Order Conditions: Local

Let L be the Lagrangian of the optimization problem.
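Before developing the second-order machinery, conditions 1 to 3 of Proposition 4 can be verified by hand on a small problem. The problem below is my own illustration, not one from the notes: minimize x₁² + x₂² subject to x₁ + x₂ ≥ 1, whose solution is x* = (1/2, 1/2) with multiplier µ* = 1.

```python
import numpy as np

# Minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 >= 1 (so b = 1).
# Lagrangian in the notes' sign convention: L = f(x) + mu * (b - g(x)).
xstar = np.array([0.5, 0.5])
mu = 1.0
b = 1.0

grad_f = 2.0 * xstar             # gradient of the objective at x*
grad_g = np.array([1.0, 1.0])    # gradient of the constraint

stationarity = grad_f - mu * grad_g      # condition 2: dL/dx = 0 at (x*, mu*)
slack = xstar.sum() - b                  # g(x*) - b >= 0
comp_slackness = mu * slack              # condition 3: mu * (g - b) = 0

assert np.allclose(stationarity, 0.0)
assert mu >= 0 and slack >= 0 and comp_slackness == 0.0
```

Here the constraint is binding, so µ* > 0 and the complementary slackness product vanishes through the slack term; at an interior solution it would vanish through µ* = 0 instead.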
In Section 2, I named the Lagrange multipliers λ if they were associated with one of the j equality constraints and µ if they were associated with one of the k

inequality constraints. In this section: (a) ignore all the nonbinding inequality constraints at (x*, λ*, µ*); and (b) rename the Lagrange multipliers of the binding inequality constraints λ_{j+1}, λ_{j+2}, ..., λ_m, where m is the number of equality constraints plus the number of binding inequality constraints. It is allowed to have m = 0; if m = 0 then there are no Lagrange multipliers. Denote the m binding Lagrange multipliers collectively by λ. Let there be n variables with respect to which the optimization is occurring; denote these variables collectively by x.

First, a heuristic explanation from Simon and Blume (p. 399). Using a multidimensional version of Taylor's Theorem, one can show that

    f(x* + z) = f(x*) + ∇f(x*)·z + (1/2) z^T [∇²f(x*)] z + R(z)        (11)

where a T superscript denotes the transpose and where R(z) is a remainder term. I will write

    f(x* + z) ≈ f(x*) + ∇f(x*)·z + (1/2) z^T [∇²f(x*)] z.              (12)

The first-order condition gives ∇f(x*) = 0 at x*. Therefore

    f(x* + z) − f(x*) ≈ (1/2) z^T [∇²f(x*)] z.                         (13)

If f(x*) is to really be a local minimum, then the left-hand side of (13) should be greater than or equal to zero, which is equivalent to ∇²f(x*) being positive semidefinite. Furthermore, if f(x*) is to really be a strict local minimum, then the left-hand side of (13) should be greater than zero, which is equivalent to ∇²f(x*) being positive definite.

[Aside: Similarly, if f(x*) is to really be a local maximum, then the left-hand side of (13) should be less than or equal to zero, which is equivalent to ∇²f(x*) being negative semidefinite. If f(x*) is to really be a strict local maximum, then the left-hand side of (13) should be less than zero, which is equivalent to ∇²f(x*) being negative definite.]

Let ∇²L be the following particular Hessian of the Lagrangian: first differentiate L with respect to all the Lagrange multipliers, then differentiate

it with respect to the original variables x:

    ∇²L = [ L_λ1λ1  ⋯  L_λ1λm   L_λ1x1  ⋯  L_λ1xn ]
          [   ⋮           ⋮        ⋮          ⋮    ]
          [ L_λmλ1  ⋯  L_λmλm   L_λmx1  ⋯  L_λmxn ]
          [ L_x1λ1  ⋯  L_x1λm   L_x1x1  ⋯  L_x1xn ]
          [   ⋮           ⋮        ⋮          ⋮    ]
          [ L_xnλ1  ⋯  L_xnλm   L_xnx1  ⋯  L_xnxn ]

If you do this right, ∇²L should have an m × m zero matrix in its upper left-hand corner:

    ∇²L = [ L_λλ   L_λx ]  =  [ 0           L_λx ]
          [ L_xλ   L_xx ]     [ (L_λx)^T    L_xx ]

where L_λx is an m × n matrix and where a T superscript denotes the transpose. One has the following result:

Proposition 5a. A sufficient condition for the point (x*, λ*) identified in Proposition 4 to be a strict local minimum is that (−1)^m has the same sign as all of the following when they are evaluated at (x*, λ*): D_{2m+1} of ∇²L, D_{2m+2} of ∇²L, ..., D_{m+n} of ∇²L. [important]

[Proposition 5a′: Similarly, one will have a strict local maximum if, when they are evaluated at (x*, λ*), the following alternate in sign beginning with the sign of (−1)^{m+1}: D_{2m+1} of ∇²L, D_{2m+2} of ∇²L, ..., D_{m+n} of ∇²L.]

If m = 0, this is equivalent to the condition that ∇²L (which in such a case equals ∇²f(x)) be positive definite, which occurs iff f(x) is strictly convex. (See the discussion concerning (13).) If m > 0, suppose as above that the binding constraints are written h_1(x) = c_1, h_2(x) = c_2, ..., h_m(x) = c_m. Abbreviate this by h(x) = c. As Simon and Blume write (p. 459), "the natural linear constraint set for this problem is the hyperplane which is tangent to the constraint set {x ∈ R^n : h(x) = c} at the point x*.... the tangent space to {h(x) = c} at the

point x* is the set of vectors z such that ∇h(x*)·z = 0 [i.e., ∇h_i(x*)·z = 0 for all i = 1, 2, ..., m] (considered as vectors with their tails at x*)." It can be shown that the above second-order sufficient condition for a minimum (Proposition 5a) is equivalent to saying that, for any non-zero vector z which is tangent to the constraint set, that is, for any non-zero vector z which obeys ∇h(x*)·z = 0, (14) holds. [not very important] Formally:

Proposition 5b. A sufficient condition for the point (x*, λ*) identified in Proposition 4 to be a strict local minimum is that for any non-zero vector z which obeys ∇h(x*)·z = 0, one has

    z^T [∇²_x L(x*, λ*)] z > 0.        (14)

In a sense, then, ∇²L is "positive definite with respect to vectors which satisfy a linearized version of the constraints" (or "which satisfy the constraints to first order"). [See also p. 530.]

[Proposition 5b′: A sufficient condition for a strict local maximum is that the sign in (14) be reversed. When this holds, there is a sense in which ∇²L is "negative definite with respect to vectors which satisfy a linearized version of the constraints."]

There is a second-order necessary condition for a minimum, also. This condition is that the strict inequality ">" in (14) should be replaced by a weak inequality. Formally:

Proposition 6b. A necessary condition for the point (x*, λ*) identified in Proposition 4 to be a local minimum is that for any non-zero vector z which obeys ∇h(x*)·z = 0, one has

    z^T [∇²_x L(x*, λ*)] z ≥ 0.        (15)

In other words, ∇²L should be "positive semidefinite with respect to vectors which satisfy a linearized version of the constraints" (or "which satisfy the constraints to first order").

[Proposition 6b′: A necessary condition for a local maximum is that the sign in (15) be reversed. When this holds, there is a sense in which ∇²L is "negative semidefinite with respect to vectors which satisfy a linearized version of the constraints."]
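Propositions 5a and 5b can be checked against each other numerically. The example below is entirely my own (one linear equality constraint, m = 1, n = 2, minimizing x₁² + x₂² subject to x₁ + x₂ = 2, so x* = (1, 1) and λ* = 2): it assembles ∇²L with the required zero corner block, checks the sign of D_{2m+1} = D_3 against (−1)^m, and checks z^T[∇²_x L]z on the tangent space.

```python
import numpy as np

# min x1^2 + x2^2  s.t.  h(x) = x1 + x2 = 2; L = f(x) + lambda*(c - h(x)).
m, n = 1, 2
grad_h = np.array([1.0, 1.0])
L_xx = 2.0 * np.eye(n)           # Hessian of L in x at (x*, lambda*)
L_lx = -grad_h.reshape(1, n)     # d2L / dlambda dx = -grad h  (an m x n block)

# Bordered Hessian with the m x m zero matrix in the upper left-hand corner
H2L = np.block([[np.zeros((m, m)), L_lx],
                [L_lx.T,           L_xx]])

# Proposition 5a: D_{2m+1}, ..., D_{m+n} (here only D_3 = det H2L) must
# share the sign of (-1)^m
D3 = np.linalg.det(H2L)
assert (-1) ** m * D3 > 0        # satisfied: strict local minimum

# Proposition 5b: z^T L_xx z > 0 for nonzero z in the tangent space
# {z : grad_h . z = 0}, which here is spanned by z = (1, -1)
z = np.array([1.0, -1.0])
assert abs(grad_h @ z) < 1e-12
assert z @ L_xx @ z > 0
```

Both criteria agree, as Proposition 5b's equivalence with Proposition 5a promises; the bordered-determinant route avoids having to parametrize the tangent space when m and n are larger.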
To check the condition given in Proposition 6b, using (4) and (5) for inspiration, the most obvious analog of Proposition 5a would be:

Incorrect Guess: A necessary condition for the point (x*, λ*) identified in Proposition 4 to be a local minimum is that (−1)^m or zero have the same sign as all of the following when they are evaluated at (x*, λ*): Δ_{2m+1} of ∇²L, Δ_{2m+2} of ∇²L, ..., Δ_{m+n} of ∇²L.

This guess is incorrect because it is needlessly strict. It is actually not necessary to check every one of Δ_{2m+1} of ∇²L, Δ_{2m+2} of ∇²L, ..., Δ_{m+n} of ∇²L; you only have to check some of them. The following proposition describes which of the Δ's have to be checked.

Proposition 6a. Define "Δ̄_i of ∇²L" to be the subset of the "Δ_i of ∇²L" formed by only considering those Δ_i of ∇²L which retain (parts of) the first m rows and first m columns of ∇²L. (If m = 0, there is no difference between the Δ's and the Δ̄'s.) Then a necessary condition for the point (x*, λ*) identified in Proposition 4 to be a local minimum is that (−1)^m or zero have the same sign as all of the following when they are evaluated at (x*, λ*): Δ̄_{2m+1} of ∇²L, Δ̄_{2m+2} of ∇²L, ..., Δ̄_{m+n} of ∇²L. [See Theorem M.D.3 (ii) on p. 938 of Mas-Colell, Whinston, and Green, and see p. 468 of Simon and Blume. For the case of m = 0, Proposition 6a is the same result as given in Th. (d), p. 639 of your textbook.]

The typical procedure is to check Proposition 5a first. If Proposition 5a doesn't apply because one of the signs was strictly the same as (−1)^{m+1}, then Proposition 6a tells you that (x*, λ*) is not a local minimum point. (This is because each D_i is one of the Δ̄_i's.)

[Proposition 6a′: The version of Proposition 6a for a local maximum requires that the following, when they are evaluated at (x*, λ*), alternate in sign beginning with the sign of (−1)^{m+1} or zero (then having the sign of (−1)^{m+2} or zero, and so forth): Δ̄_{2m+1} of ∇²L, Δ̄_{2m+2} of ∇²L, ..., Δ̄_{m+n} of ∇²L.]

While it is nice that Proposition 6a tells you to check only the Δ̄_i's instead of all the Δ_i's, there still are very many Δ̄_i's in a typical problem. Here is an example.

Example. Suppose n = 6 and m = 2. Then ∇²L is 8 × 8. Proposition 5a requires checking D_5, D_6, D_7, and D_8.
Proposition 6a requires checking

Δ̄_5, Δ̄_6, Δ̄_7, and Δ̄_8. For all these Δ̄'s, you keep rows and columns 1 and 2; this is what makes the Δ̄_i's a subset of the Δ_i's. For example, each Δ̄_5 will keep rows and columns 1 and 2, then choose three out of the remaining six rows and columns. The number of combinations of x things taken y at a time is x!/( y! (x − y)! ). Hence the number of ways of choosing 3 out of the remaining 6 rows and columns is 6!/( 3! (6 − 3)! ) = 20 ways, so there are 20 Δ̄_5's (compared with 56 Δ_5's and one D_5). Similarly, there are 6!/( 4! (6 − 4)! ) = 15 Δ̄_6's (compared with 28 Δ_6's and one D_6), and 6!/( 5! (6 − 5)! ) = 6 Δ̄_7's (compared with 8 Δ_7's and one D_7). There is only one Δ̄_8; it equals Δ_8 and D_8.

If you are curious, in this example the twenty Δ̄_5's contain the following rows and columns: 12345 (= D_5), 12346, 12347, 12348, 12356, 12357, 12358, 12367, 12368, 12378, 12456, 12457, 12458, 12467, 12468, 12478, 12567, 12568, 12578, and 12678. The fifteen Δ̄_6's contain the following rows and columns: 123456 (= D_6), 123457, 123458, 123467, 123468, 123478, 123567, 123568, 123578, 123678, 124567, 124568, 124578, 124678, and 125678. The six Δ̄_7's contain the following rows and columns: 1234567 (= D_7), 1234568, 1234578, 1234678, 1235678, and 1245678.

4. Second-Order Conditions: Global

Let (x*, λ*) be a point identified in Proposition 4. Let j be the number of equality constraints and k be the number of inequality constraints.

1. If j = k = 0 (an unconstrained problem) and f(x) is convex, then x* is a global minimum point of f in S. (The converse also holds.) [p. 631] Furthermore, if j = k = 0 and f(x) is strictly convex, then x* is the unique global minimum point of f in S. (The converse also holds.)

2. If k = 0 (only equality constraints) and L(x) is convex in x, then x* is a global constrained minimum point of f. [p. 667] Furthermore, if k = 0 and L(x) is strictly convex in x, then x* is the unique global constrained minimum point of f.

3. If j = 0 (only inequality constraints), if g_1, ..., g_k are all concave, and if f(x) is convex, then x* is a global constrained minimum point of f. [p. 699] Furthermore, if j = 0 and f(x) is strictly convex, then x* is the unique global constrained minimum point of f.

4. Suppose either: (a) j = 0 (only inequality constraints); or (b) j > 0 but all the equality constraints are linear. Then if the binding g_1, ..., g_k are all quasiconcave, and if f(x) is pseudoconvex, then x* is a global constrained minimum point of f. [p. 700; the theorems on pp. 528 and 532 of Simon and Blume; and Mas-Colell, Whinston, and Green, Th. M.K.3.]

5. If the constraint set defined in Proposition 4 is a convex set (which occurs, for example, when j = 0 and g_1, ..., g_k are all quasiconcave) and if f(x) is strictly quasiconvex, then x* is the unique global constrained minimum point of f. If f(x) is quasiconvex but not strictly so, then the set of solutions x* is a convex set. [Mas-Colell, Whinston, and Green, Th. M.K.4. See also Simon and Blume, Th. on p. 533.]

Note on (5): Strict quasiconcavity (and, by analogy, strict quasiconvexity) is defined on p. 646 of your text. It means that the contour lines cannot have straight segments.

[Aside: Similarly, using the problem defined in Proposition 4 and (10′):

1. If j = k = 0 (an unconstrained problem) and f(x) is concave, then x* is a global maximum point of f in S. (The converse also holds.) Furthermore, if j = k = 0 and f(x) is strictly concave, then x* is the unique global maximum point of f in S. (The converse also holds.)

2. If k = 0 (only equality constraints) and L(x) is concave in x, then x* is a global constrained maximum point of f. Furthermore, if k = 0 and L(x) is strictly concave in x, then x* is the unique global constrained maximum point of f.

3. If j = 0 (only inequality constraints), if g_1, ..., g_k are all convex, and if f(x) is concave, then x* is a global constrained maximum point of f. Furthermore, if j = 0 and f(x) is strictly concave, then x* is the unique global constrained maximum point of f.

4. Suppose either: (a) j = 0 (only inequality constraints); or (b) j > 0 but all the equality constraints are linear.
Then if g_1, ..., g_k are all quasiconvex, and if f(x) is pseudoconcave, then x* is a global constrained maximum point of f.

5. If the constraint set defined in Proposition 4 is a convex set (which occurs, for example, when j = 0 and g_1, ..., g_k are all quasiconvex) and if f(x) is strictly quasiconcave, then x* is the unique global constrained maximum point of f. If f(x) is quasiconcave but not strictly so, then the set of solutions x* is a convex set.]
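The Δ̄ bookkeeping in the example at the end of Section 3 (n = 6, m = 2) follows from the binomial-coefficient formula quoted there; here is a quick check in Python (my own sketch, not part of the notes).

```python
from math import comb

# From the Section 3 example: n = 6 variables, m = 2 binding constraints,
# so the bordered Hessian is (m + n) x (m + n) = 8 x 8.
n, m = 6, 2

def num_dbar(r, n, m):
    """Number of order-r minors that keep all m border rows/columns and
    choose the other r - m from the remaining n: C(n, r - m)."""
    return comb(n, r - m)

# Proposition 6a asks about orders 2m + 1, ..., m + n, i.e. 5 through 8
dbar_counts = {r: num_dbar(r, n, m) for r in range(2 * m + 1, m + n + 1)}
all_counts = {r: comb(m + n, r) for r in range(2 * m + 1, m + n + 1)}

assert dbar_counts == {5: 20, 6: 15, 7: 6, 8: 1}   # the Delta-bar's
assert all_counts == {5: 56, 6: 28, 7: 8, 8: 1}    # all principal minors
```

The counts reproduce the 20, 15, 6, and 1 Δ̄'s (versus 56, 28, 8, and 1 ordinary principal minors) worked out in the example.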


More information

ARE202A, Fall Contents

ARE202A, Fall Contents ARE202A, Fall 2005 LECTURE #2: WED, NOV 6, 2005 PRINT DATE: NOVEMBER 2, 2005 (NPP2) Contents 5. Nonlinear Programming Problems and the Kuhn Tucker conditions (cont) 5.2. Necessary and sucient conditions

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Mathematical Economics. Lecture Notes (in extracts)

Mathematical Economics. Lecture Notes (in extracts) Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter

More information

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 UNCONSTRAINED OPTIMIZATION 1. Consider the problem of maximizing a function f:ú n 6 ú within a set A f ú n. Typically, A might be all of ú

More information

EC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes.

EC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes. London School of Economics Department of Economics Dr Francesco Nava Offi ce: 32L.3.20 EC400 Math for Microeconomics Syllabus 2016 The course is based on 6 sixty minutes lectures and on 6 ninety minutes

More information

Week 4: Calculus and Optimization (Jehle and Reny, Chapter A2)

Week 4: Calculus and Optimization (Jehle and Reny, Chapter A2) Week 4: Calculus and Optimization (Jehle and Reny, Chapter A2) Tsun-Feng Chiang *School of Economics, Henan University, Kaifeng, China September 27, 2015 Microeconomic Theory Week 4: Calculus and Optimization

More information

Economics 101A (Lecture 3) Stefano DellaVigna

Economics 101A (Lecture 3) Stefano DellaVigna Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem

More information

Review of Optimization Methods

Review of Optimization Methods Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):

More information

CHAPTER 1. LINEAR ALGEBRA Systems of Linear Equations and Matrices

CHAPTER 1. LINEAR ALGEBRA Systems of Linear Equations and Matrices CHAPTER 1. LINEAR ALGEBRA 3 Chapter 1 LINEAR ALGEBRA 1.1 Introduction [To be written.] 1.2 Systems of Linear Equations and Matrices Why are we interested in solving simultaneous equations? We often have

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

Calculus and optimization

Calculus and optimization Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Optimization. A first course on mathematics for economists

Optimization. A first course on mathematics for economists Optimization. A first course on mathematics for economists Xavier Martinez-Giralt Universitat Autònoma de Barcelona xavier.martinez.giralt@uab.eu II.3 Static optimization - Non-Linear programming OPT p.1/45

More information

ARE211, Fall2015. Contents. 2. Univariate and Multivariate Differentiation (cont) Taylor s Theorem (cont) 2

ARE211, Fall2015. Contents. 2. Univariate and Multivariate Differentiation (cont) Taylor s Theorem (cont) 2 ARE211, Fall2015 CALCULUS4: THU, SEP 17, 2015 PRINTED: SEPTEMBER 22, 2015 (LEC# 7) Contents 2. Univariate and Multivariate Differentiation (cont) 1 2.4. Taylor s Theorem (cont) 2 2.5. Applying Taylor theory:

More information

CONVEX FUNCTIONS AND OPTIMIZATION TECHINIQUES A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

CONVEX FUNCTIONS AND OPTIMIZATION TECHINIQUES A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF CONVEX FUNCTIONS AND OPTIMIZATION TECHINIQUES A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN MATHEMATICS SUBMITTED TO NATIONAL INSTITUTE OF TECHNOLOGY,

More information

Econ 508-A FINITE DIMENSIONAL OPTIMIZATION - NECESSARY CONDITIONS. Carmen Astorne-Figari Washington University in St. Louis.

Econ 508-A FINITE DIMENSIONAL OPTIMIZATION - NECESSARY CONDITIONS. Carmen Astorne-Figari Washington University in St. Louis. Econ 508-A FINITE DIMENSIONAL OPTIMIZATION - NECESSARY CONDITIONS Carmen Astorne-Figari Washington University in St. Louis August 12, 2010 INTRODUCTION General form of an optimization problem: max x f

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

1.4 FOUNDATIONS OF CONSTRAINED OPTIMIZATION

1.4 FOUNDATIONS OF CONSTRAINED OPTIMIZATION Essential Microeconomics -- 4 FOUNDATIONS OF CONSTRAINED OPTIMIZATION Fundamental Theorem of linear Programming 3 Non-linear optimization problems 6 Kuhn-Tucker necessary conditions Sufficient conditions

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Constrained maxima and Lagrangean saddlepoints

Constrained maxima and Lagrangean saddlepoints Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 10: Constrained maxima and Lagrangean saddlepoints 10.1 An alternative As an application

More information

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

Notes I Classical Demand Theory: Review of Important Concepts

Notes I Classical Demand Theory: Review of Important Concepts Notes I Classical Demand Theory: Review of Important Concepts The notes for our course are based on: Mas-Colell, A., M.D. Whinston and J.R. Green (1995), Microeconomic Theory, New York and Oxford: Oxford

More information

CONSTRAINED OPTIMALITY CRITERIA

CONSTRAINED OPTIMALITY CRITERIA 5 CONSTRAINED OPTIMALITY CRITERIA In Chapters 2 and 3, we discussed the necessary and sufficient optimality criteria for unconstrained optimization problems. But most engineering problems involve optimization

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

Finite Dimensional Optimization Part I: The KKT Theorem 1

Finite Dimensional Optimization Part I: The KKT Theorem 1 John Nachbar Washington University March 26, 2018 1 Introduction Finite Dimensional Optimization Part I: The KKT Theorem 1 These notes characterize maxima and minima in terms of first derivatives. I focus

More information

Optimality Conditions

Optimality Conditions Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.

More information

Boston College. Math Review Session (2nd part) Lecture Notes August,2007. Nadezhda Karamcheva www2.bc.

Boston College. Math Review Session (2nd part) Lecture Notes August,2007. Nadezhda Karamcheva www2.bc. Boston College Math Review Session (2nd part) Lecture Notes August,2007 Nadezhda Karamcheva karamche@bc.edu www2.bc.edu/ karamche 1 Contents 1 Quadratic Forms and Definite Matrices 3 1.1 Quadratic Forms.........................

More information

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,

More information

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions UBC Economics 526 October 17, 2012 .1.2.3.4 Section 1 . max f (x) s.t. h(x) = c f : R n R, h : R n R m Draw picture of n = 2 and m = 1 At optimum, constraint tangent to level curve of function Rewrite

More information

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy.

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy. April 15, 2009 CHAPTER 4: HIGHER ORDER DERIVATIVES In this chapter D denotes an open subset of R n. 1. Introduction Definition 1.1. Given a function f : D R we define the second partial derivatives as

More information

Optimisation theory. The International College of Economics and Finance. I. An explanatory note. The Lecturer and Class teacher: Kantorovich G.G.

Optimisation theory. The International College of Economics and Finance. I. An explanatory note. The Lecturer and Class teacher: Kantorovich G.G. The International College of Economics and Finance The course syllabus Optimisation theory The eighth semester I. An explanatory note The Lecturer and Class teacher: Kantorovich G.G. Requirements to students:

More information

Concave programming. Concave programming is another special case of the general constrained optimization. subject to g(x) 0

Concave programming. Concave programming is another special case of the general constrained optimization. subject to g(x) 0 1 Introduction Concave programming Concave programming is another special case of the general constrained optimization problem max f(x) subject to g(x) 0 in which the objective function f is concave and

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland EC720.01 - Math for Economists Boston College, Department of Economics Fall 2010 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

The Karush-Kuhn-Tucker (KKT) conditions

The Karush-Kuhn-Tucker (KKT) conditions The Karush-Kuhn-Tucker (KKT) conditions In this section, we will give a set of sufficient (and at most times necessary) conditions for a x to be the solution of a given convex optimization problem. These

More information

Lecture 3: Basics of set-constrained and unconstrained optimization

Lecture 3: Basics of set-constrained and unconstrained optimization Lecture 3: Basics of set-constrained and unconstrained optimization (Chap 6 from textbook) Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 9, 2018 Optimization basics Outline Optimization

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 7: Quasiconvex Functions I 7.1 Level sets of functions For an extended real-valued

More information

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2 Monotone Function Function f is called monotonically increasing, if Chapter 3 x x 2 f (x ) f (x 2 ) It is called strictly monotonically increasing, if f (x 2) f (x ) Convex and Concave x < x 2 f (x )

More information

Lectures 9 and 10: Constrained optimization problems and their optimality conditions

Lectures 9 and 10: Constrained optimization problems and their optimality conditions Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained

More information

Mathematical Preliminaries for Microeconomics: Exercises

Mathematical Preliminaries for Microeconomics: Exercises Mathematical Preliminaries for Microeconomics: Exercises Igor Letina 1 Universität Zürich Fall 2013 1 Based on exercises by Dennis Gärtner, Andreas Hefti and Nick Netzer. How to prove A B Direct proof

More information

Optimization Theory. Lectures 4-6

Optimization Theory. Lectures 4-6 Optimization Theory Lectures 4-6 Unconstrained Maximization Problem: Maximize a function f:ú n 6 ú within a set A f ú n. Typically, A is ú n, or the non-negative orthant {x0ú n x$0} Existence of a maximum:

More information

QUADRATIC FORMS AND DEFINITE MATRICES. a 11 a 1n. a n1 a nn a 1i x i

QUADRATIC FORMS AND DEFINITE MATRICES. a 11 a 1n. a n1 a nn a 1i x i QUADRATIC FORMS AND DEFINITE MATRICES 1 DEFINITION AND CLASSIFICATION OF QUADRATIC FORMS 11Definitionofaquadraticform LetAdenoteannxnsymmetricmatrixwithrealentriesand letxdenoteanncolumnvectorthenq=x AxissaidtobeaquadraticformNotethat

More information

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the

More information

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Karush-Kuhn-Tucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

Econ Slides from Lecture 14

Econ Slides from Lecture 14 Econ 205 Sobel Econ 205 - Slides from Lecture 14 Joel Sobel September 10, 2010 Theorem ( Lagrange Multipliers ) Theorem If x solves max f (x) subject to G(x) = 0 then there exists λ such that Df (x ) =

More information

7) Important properties of functions: homogeneity, homotheticity, convexity and quasi-convexity

7) Important properties of functions: homogeneity, homotheticity, convexity and quasi-convexity 30C00300 Mathematical Methods for Economists (6 cr) 7) Important properties of functions: homogeneity, homotheticity, convexity and quasi-convexity Abolfazl Keshvari Ph.D. Aalto University School of Business

More information

ARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1

ARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1 AREA, Fall 5 LECTURE #: WED, OCT 5, 5 PRINT DATE: OCTOBER 5, 5 (GRAPHICAL) CONTENTS 1. Graphical Overview of Optimization Theory (cont) 1 1.4. Separating Hyperplanes 1 1.5. Constrained Maximization: One

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris University of California, Davis Department of Agricultural and Resource Economics ARE 5 Lecture Notes Quirino Paris Karush-Kuhn-Tucker conditions................................................. page Specification

More information

The Envelope Theorem

The Envelope Theorem The Envelope Theorem In an optimization problem we often want to know how the value of the objective function will change if one or more of the parameter values changes. Let s consider a simple example:

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Chapter 13 Convex and Concave Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Monotone Function Function f is called monotonically increasing, if x 1 x 2 f (x 1 ) f (x 2 ) It

More information

Fractional Programming

Fractional Programming Fractional Programming Natnael Nigussie Goshu Addis Ababa Science and Technology University P.o.box: 647, Addis Ababa, Ethiopia e-mail: natnaelnigussie@gmail.com August 26 204 Abstract Fractional Programming,

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx.

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx. Two hours To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER CONVEX OPTIMIZATION - SOLUTIONS xx xxxx 27 xx:xx xx.xx Answer THREE of the FOUR questions. If

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Part IB Optimisation

Part IB Optimisation Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

Mathematical Appendix

Mathematical Appendix Ichiro Obara UCLA September 27, 2012 Obara (UCLA) Mathematical Appendix September 27, 2012 1 / 31 Miscellaneous Results 1. Miscellaneous Results This first section lists some mathematical facts that were

More information

MATHEMATICAL ECONOMICS: OPTIMIZATION. Contents

MATHEMATICAL ECONOMICS: OPTIMIZATION. Contents MATHEMATICAL ECONOMICS: OPTIMIZATION JOÃO LOPES DIAS Contents 1. Introduction 2 1.1. Preliminaries 2 1.2. Optimal points and values 2 1.3. The optimization problems 3 1.4. Existence of optimal points 4

More information

A Note on Two Different Types of Matrices and Their Applications

A Note on Two Different Types of Matrices and Their Applications A Note on Two Different Types of Matrices and Their Applications Arjun Krishnan I really enjoyed Prof. Del Vecchio s Linear Systems Theory course and thought I d give something back. So I ve written a

More information

Chapter 4 - Convex Optimization

Chapter 4 - Convex Optimization Chapter 4 - Convex Optimization Justin Leduc These lecture notes are meant to be used by students entering the University of Mannheim Master program in Economics. They constitute the base for a pre-course

More information

Lecture Unconstrained optimization. In this lecture we will study the unconstrained problem. minimize f(x), (2.1)

Lecture Unconstrained optimization. In this lecture we will study the unconstrained problem. minimize f(x), (2.1) Lecture 2 In this lecture we will study the unconstrained problem minimize f(x), (2.1) where x R n. Optimality conditions aim to identify properties that potential minimizers need to satisfy in relation

More information

A Necessary and Sufficient Condition for a Unique Maximum with an Application to Potential Games

A Necessary and Sufficient Condition for a Unique Maximum with an Application to Potential Games Towson University Department of Economics Working Paper Series Working Paper No. 2017-04 A Necessary and Sufficient Condition for a Unique Maximum with an Application to Potential Games by Finn Christensen

More information

Technical Results on Regular Preferences and Demand

Technical Results on Regular Preferences and Demand Division of the Humanities and Social Sciences Technical Results on Regular Preferences and Demand KC Border Revised Fall 2011; Winter 2017 Preferences For the purposes of this note, a preference relation

More information