BEEM103 Optimization Techniques for Economists
Lecture Week 4
Dieter Balkenborg, Department of Economics, University of Exeter

"Since the fabric of the universe is most perfect, and is the work of a most perfect creator, nothing whatsoever takes place in the universe in which some form of maximum or minimum does not appear." (Leonhard Euler, 1744)

1 Objective

Two types of optimization problems occur frequently in economics: unconstrained optimization problems, such as profit maximization, and constrained optimization problems, such as the utility maximization problem of a consumer constrained by a budget. In this handout we discuss the main step in solving an unconstrained optimization problem, namely how to find the two first order conditions and how to solve them. The solutions of this system of two equations in two unknowns can be local maxima, minima or saddle points. How to distinguish between these types ("second order conditions") will be discussed in the next handout. In order to solve the first order conditions one has to solve a simultaneous system of equations, so we review methods to solve linear simultaneous systems. Thereafter we develop the Lagrangian approach for constrained optimization problems.

2 Optimization problems

2.1 Unconstrained optimization

Suppose we want to find the absolute maximum of a function $z = f(x, y)$ in two variables, i.e., we want to find the pair of numbers $(x^*, y^*)$ such that

$$f(x^*, y^*) \geq f(x, y)$$

holds for all $(x, y)$. If $(x^*, y^*)$ is such a maximum, the following must hold: if we freeze the variable $y$ at the optimal value $y^*$ and vary only the variable $x$, then the function $f(x, y^*)$, which is now a function in just one variable, has a maximum at $x^*$. Typically the maximum of this function in one variable will be a critical point. Hence we expect the

partial derivative $\partial z/\partial x$ to be zero at the optimum. Similarly we expect the partial derivative $\partial z/\partial y$ to be zero at the optimum. We are thus led to the first order conditions

$$\frac{\partial z}{\partial x}\Big|_{x=x^*,\,y=y^*} = 0 \qquad \frac{\partial z}{\partial y}\Big|_{x=x^*,\,y=y^*} = 0$$

which must typically be satisfied. We speak of first order conditions because only the first derivatives are involved. The conditions do not tell us whether we have a maximum, a minimum or a saddle point. To find out the latter we will have to look at the second derivatives and the so-called second order conditions, as will be discussed in a later lecture.

Example 1. Consider a price-taking firm with the production function $Q(K, L)$. Let $r$ be the interest rate (the price of capital $K$), let $w$ be the wage rate (the price of labour $L$) and let $P$ be the price of the output the firm is producing. When the firm uses $K$ units of capital and $L$ units of labour she can produce $Q(K, L)$ units of the output and hence make a revenue of $P \cdot Q(K, L)$ by selling each unit of output at the market price $P$. Her production costs will then be her total expenditure on the inputs capital and labour,

$$rK + wL.$$

Her profit will be the difference

$$\pi(K, L) = P \cdot Q(K, L) - rK - wL.$$

The firm tries to find the input combination $(K^*, L^*)$ which maximizes her profit. To find it we want to solve the first order conditions

$$\frac{\partial \pi}{\partial K} = P\frac{\partial Q}{\partial K} - r = 0 \tag{1}$$

$$\frac{\partial \pi}{\partial L} = P\frac{\partial Q}{\partial L} - w = 0 \tag{2}$$

These conditions are intuitive. Suppose for instance that $P\frac{\partial Q}{\partial K} - r > 0$. $\frac{\partial Q}{\partial K}$ is the marginal product of capital, i.e., it tells us how much more can be produced by using one more unit of capital. $P\frac{\partial Q}{\partial K}$ is the marginal increase in revenue when one more unit of capital is used and the additional output is sold on the market. $r$ is the marginal cost of using one more unit of capital. $P\frac{\partial Q}{\partial K} - r > 0$ therefore means that the marginal profit from using one more unit of capital is positive, i.e., it increases profit to produce more by using more capital, and therefore we cannot be at a profit optimum. Correspondingly, $P\frac{\partial Q}{\partial K} - r < 0$ would mean that profit can be increased by producing less and using less capital.
So (1) should hold in the optimum. Similarly (2) should hold.
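First order conditions like (1) and (2) can be checked numerically with finite differences. The sketch below uses the Cobb-Douglas specification of the later examples as reconstructed here ($Q = K^{1/6}L^{1/2}$ with $P = 12$, $r = 1$, $w = 3$); the function names are illustrative only.

```python
# Hypothetical profit function pi(K, L) = P*Q(K, L) - r*K - w*L with the
# (assumed) specification Q = K**(1/6) * L**(1/2), P = 12, r = 1, w = 3.
P, r, w = 12.0, 1.0, 3.0

def profit(K, L):
    return P * K**(1/6) * L**(1/2) - r * K - w * L

def gradient(f, x, y, h=1e-6):
    """Approximate both partial derivatives by central differences."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

# At the candidate optimum K = L = 8 both first order conditions
# dpi/dK = 0 and dpi/dL = 0 should hold up to numerical error.
gK, gL = gradient(profit, 8.0, 8.0)
print(abs(gK) < 1e-5, abs(gL) < 1e-5)  # True True
```

Central differences are accurate enough here that any sign of a non-zero marginal profit would show up immediately.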

It is useful to rewrite the two first order conditions as

$$\frac{\partial Q}{\partial K} = \frac{r}{P} \tag{3}$$

$$\frac{\partial Q}{\partial L} = \frac{w}{P} \tag{4}$$

Thus, for both inputs it must be the case that the marginal product is equal to the ratio of input price to output price. When we divide here the two left-hand sides and equate them with the quotient of the two right-hand sides we obtain as a consequence

$$\frac{\partial Q/\partial K}{\partial Q/\partial L} = \frac{r/P}{w/P} = \frac{r}{w} \tag{5}$$

so the marginal rate of substitution must be equal to the ratio of the input prices (which is the relative price of capital in terms of labour).

Example 2. Let us be more specific and assume that the production function is $Q(K, L) = K^{1/6} L^{1/2}$ and that the output and input prices are $P = 12$, $r = 1$ and $w = 3$. Then

$$\frac{\partial Q}{\partial K} = \frac{1}{6} K^{-5/6} L^{1/2} \qquad \frac{\partial Q}{\partial L} = \frac{1}{2} K^{1/6} L^{-1/2}$$

The first order conditions (3) and (4) become

$$\frac{1}{6} K^{-5/6} L^{1/2} = \frac{1}{12} \quad\text{or}\quad 2 K^{-5/6} L^{1/2} = 1 \tag{6}$$

$$\frac{1}{2} K^{1/6} L^{-1/2} = \frac{3}{12} \quad\text{or}\quad 6 K^{1/6} L^{-1/2} = 3 \tag{7}$$

This is a simultaneous system of two equations in two unknowns. The implied condition (5) becomes

$$\frac{(1/6)\, K^{-5/6} L^{1/2}}{(1/2)\, K^{1/6} L^{-1/2}} = \frac{L}{3K} = \frac{1}{3}$$

which simplifies to $L = K$.

Example 3. A monopolist with total cost function $TC(Q) = Q^2$ sells his product in two different countries. When he sells $Q_A$ units of the good in country A he will obtain the price $P_A = 12 - 3Q_A$ for each unit. When he sells $Q_B$ units of the good in country B he obtains the price $P_B = 24 - 4Q_B$.

How much should the monopolist sell in the two countries in order to maximize profit? To solve this problem we first calculate the profit function as the sum of the revenues in each country minus the production costs. Total revenue in country A is

$$TR_A = P_A Q_A = (12 - 3Q_A)\, Q_A$$

Total revenue in country B is

$$TR_B = P_B Q_B = (24 - 4Q_B)\, Q_B$$

Total production costs are

$$TC(Q_A + Q_B) = (Q_A + Q_B)^2$$

Therefore the profit is

$$\pi(Q_A, Q_B) = (12 - 3Q_A)\, Q_A + (24 - 4Q_B)\, Q_B - (Q_A + Q_B)^2,$$

a quadratic function in $Q_A$ and $Q_B$. In order to find the profit maximum we must find the critical points, i.e., we must solve the first order conditions

$$\frac{\partial \pi}{\partial Q_A} = -3Q_A + (12 - 3Q_A) - 2(Q_A + Q_B) = 12 - 8Q_A - 2Q_B = 0$$

$$\frac{\partial \pi}{\partial Q_B} = -4Q_B + (24 - 4Q_B) - 2(Q_A + Q_B) = 24 - 2Q_A - 10Q_B = 0$$

or

$$8Q_A + 2Q_B = 12 \tag{8}$$
$$2Q_A + 10Q_B = 24$$

Thus we have to solve a linear simultaneous system of two equations in two unknowns.

3 Simultaneous systems of equations

3.1 Linear systems

We illustrate four methods to solve a linear system of equations using the example

$$5x + 7y = 25 \tag{9}$$
$$4x - 6y = -18$$

3.1.1 Using the slope-intercept form

We rewrite both linear equations in slope-intercept form:

$$5x + 7y = 25 \;\Rightarrow\; 7y = 25 - 5x \;\Rightarrow\; y = \frac{25}{7} - \frac{5}{7}x$$

$$4x - 6y = -18 \;\Rightarrow\; 4x + 18 = 6y \;\Rightarrow\; y = \frac{2}{3}x + 3$$

[Figure: the two lines $y = \frac{25}{7} - \frac{5}{7}x$ and $y = \frac{2}{3}x + 3$ plotted in the $(x, y)$-plane.]

The solutions to each equation form a line, which we have now described as the graph of a linear function. At the intersection point of the two lines the two linear functions must have the same $y$-value. Hence

$$\frac{25}{7} - \frac{5}{7}x = \frac{2}{3}x + 3 \qquad | \times 21$$

$$75 - 15x = 14x + 63$$

$$12 = 29x$$

$$x = \frac{12}{29}$$

We have found the value of $x$ in a solution. To calculate the value of $y$ we use one of the linear functions in slope-intercept form:

$$y = \frac{2}{3} \cdot \frac{12}{29} + 3 = \frac{8}{29} + \frac{87}{29} = \frac{95}{29}$$

The solution to the system of equations is $x = \frac{12}{29}$, $y = \frac{95}{29}$. It is strongly recommended to check the result in the original system of equations:

$$5 \cdot \frac{12}{29} + 7 \cdot \frac{95}{29} = \frac{60 + 665}{29} = \frac{725}{29} = 25 \qquad\quad 4 \cdot \frac{12}{29} - 6 \cdot \frac{95}{29} = \frac{48 - 570}{29} = -\frac{522}{29} = -18$$

3.1.2 The method of substitution

First we solve one of the equations in (9), say the second, for one of the variables, say $y$:

$$4x - 6y = -18 \;\Rightarrow\; 4x + 18 = 6y \;\Rightarrow\; y = \frac{4}{6}x + \frac{18}{6} = \frac{2}{3}x + 3 \tag{10}$$

Then we replace $y$ in the other equation by the result. (Do not forget to place brackets around the expression.) We obtain an equation in only one variable, $x$, which we solve for $x$.

$$5x + 7y = 25$$

$$5x + 7\left(\frac{2}{3}x + 3\right) = 25$$

$$5x + \frac{14}{3}x + 21 = 25$$

$$\frac{29}{3}x = 4$$

$$x = \frac{12}{29}$$

We have found $x$ and can now use (10) to find $y$:

$$y = \frac{2}{3} \cdot \frac{12}{29} + 3 = \frac{8}{29} + \frac{87}{29} = \frac{95}{29}$$

Hence the solution is $x = \frac{12}{29}$, $y = \frac{95}{29}$.

3.1.3 The method of elimination

To eliminate $x$ we proceed in two stages: first we multiply the first equation by the coefficient of $x$ in the second equation, and we multiply the second equation by the coefficient of $x$ in the first equation:

$$5x + 7y = 25 \;\; | \times 4 \qquad\qquad 20x + 28y = 100$$
$$4x - 6y = -18 \;\; | \times 5 \qquad\qquad 20x - 30y = -90$$

Then we subtract the second equation from the first equation:

$$28y - (-30y) = 100 - (-90)$$
$$58y = 190$$
$$y = \frac{190}{58} = \frac{95}{29}$$

Having found $y$ we use one of the original equations to find $x$:

$$5x + 7y = 25$$
$$5x + 7 \cdot \frac{95}{29} = 25$$
$$5x = 25 - \frac{665}{29} = \frac{60}{29}$$
$$x = \frac{12}{29}$$
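The two-stage elimination procedure can be mirrored step by step in exact rational arithmetic. A minimal sketch, assuming the example system as reconstructed here ($5x + 7y = 25$, $4x - 6y = -18$); `solve_by_elimination` is an illustrative name:

```python
from fractions import Fraction

def solve_by_elimination(a, b, e, c, d, f):
    """Solve ax+by=e, cx+dy=f by eliminating x, then back-substituting."""
    a, b, e, c, d, f = map(Fraction, (a, b, e, c, d, f))
    # Stage 1: scale each equation by the other's x-coefficient.
    b1, e1 = c * b, c * e          # first equation times c
    d2, f2 = a * d, a * f          # second equation times a
    # Stage 2: subtract to eliminate x, then solve for y.
    y = (e1 - f2) / (b1 - d2)
    # Back-substitute into the first original equation.
    x = (e - b * y) / a
    return x, y

x, y = solve_by_elimination(5, 7, 25, 4, -6, -18)
print(x, y)  # 12/29 95/29
```

Using `Fraction` keeps every intermediate value exact, so the result can be compared literally with the hand computation above.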

Instead of first eliminating $x$ we could have first eliminated $y$:

$$5x + 7y = 25 \;\; | \times 6 \qquad\qquad 30x + 42y = 150$$
$$4x - 6y = -18 \;\; | \times 7 \qquad\qquad 28x - 42y = -126 \;\; | +$$

$$58x = 24 \qquad x = \frac{24}{58} = \frac{12}{29}$$

3.1.4 Cramer's rule

This is a little preview of linear algebra. In linear algebra the system of equations is written as

$$\begin{bmatrix} 5 & 7 \\ 4 & -6 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 25 \\ -18 \end{bmatrix}$$

where $\begin{bmatrix} x \\ y \end{bmatrix}$ and $\begin{bmatrix} 25 \\ -18 \end{bmatrix}$ are so-called column vectors, simply two numbers written below each other and surrounded by square brackets. $\begin{bmatrix} 5 & 7 \\ 4 & -6 \end{bmatrix}$ is the $2 \times 2$ matrix of coefficients. A $2 \times 2$ matrix is a system of four numbers arranged in a square and surrounded by square brackets. With each $2 \times 2$ matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

we associate a new number called the determinant

$$\det A = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - cb$$

Notice that the determinant is indicated by vertical lines, in contrast to square brackets for a matrix. For instance,

$$\begin{vmatrix} 5 & 7 \\ 4 & -6 \end{vmatrix} = 5 \cdot (-6) - 4 \cdot 7 = -30 - 28 = -58$$

Cramer's rule uses determinants to give an explicit formula for the solution of a linear simultaneous system of equations of the form

$$ax + by = e$$
$$cx + dy = f$$

or

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} e \\ f \end{bmatrix}$$
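The rule is easy to implement directly for the $2 \times 2$ case. A sketch in exact rational arithmetic (`det2` and `cramer2` are illustrative names; the test system is the example system as reconstructed here):

```python
from fractions import Fraction

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - c * b

def cramer2(a, b, e, c, d, f):
    """Solve ax+by=e, cx+dy=f by Cramer's rule; needs a nonzero determinant."""
    D = det2(a, b, c, d)
    if D == 0:
        raise ValueError("determinant is zero: no unique solution")
    # Replace the x-column (respectively the y-column) by the constants.
    x = Fraction(det2(e, b, f, d), D)
    y = Fraction(det2(a, e, c, f), D)
    return x, y

print(cramer2(5, 7, 25, 4, -6, -18))  # (Fraction(12, 29), Fraction(95, 29))
```

Applied to the example system (9) it reproduces the solution found by the other methods.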

Namely,

$$x = \frac{\begin{vmatrix} e & b \\ f & d \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{ed - bf}{ad - bc} \qquad\qquad y = \frac{\begin{vmatrix} a & e \\ c & f \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{af - ce}{ad - bc}$$

Thus $x$ and $y$ are calculated as quotients of two determinants. In both cases we divide by the determinant of the matrix of coefficients. For $x$ the determinant in the numerator is the determinant of the matrix obtained by replacing in the matrix of coefficients the coefficients $a$ and $c$ of $x$ by the constant terms $e$ and $f$, i.e., the first column is replaced by the column vector of the constant terms. Similarly, for $y$ the determinant in the numerator is the determinant of the matrix obtained by replacing in the matrix of coefficients the coefficients $b$ and $d$ of $y$ by the constant terms $e$ and $f$, i.e., the second column is replaced by the column vector of the constant terms. The method works only if the determinant in the denominator is not zero. In our example,

$$x = \frac{\begin{vmatrix} 25 & 7 \\ -18 & -6 \end{vmatrix}}{\begin{vmatrix} 5 & 7 \\ 4 & -6 \end{vmatrix}} = \frac{25 \cdot (-6) - 7 \cdot (-18)}{5 \cdot (-6) - 7 \cdot 4} = \frac{-150 + 126}{-58} = \frac{-24}{-58} = \frac{12}{29}$$

$$y = \frac{\begin{vmatrix} 5 & 25 \\ 4 & -18 \end{vmatrix}}{\begin{vmatrix} 5 & 7 \\ 4 & -6 \end{vmatrix}} = \frac{5 \cdot (-18) - 4 \cdot 25}{5 \cdot (-6) - 7 \cdot 4} = \frac{-90 - 100}{-58} = \frac{190}{58} = \frac{95}{29}$$

Exercise 4. Use the above methods to find the optimum in Example 3.

3.2 Existence and uniqueness

A linear simultaneous system of equations

$$ax + by = e$$
$$cx + dy = f$$

can have zero, exactly one, or infinitely many solutions. To see that only these cases can arise, bring both equations into slope-intercept form

$$y = \frac{e}{b} - \frac{a}{b}x \qquad\qquad y = \frac{f}{d} - \frac{c}{d}x$$

(We assume here for simplicity that the coefficients $b$ and $d$ are not zero. The arguments can easily be extended to these cases, except when all four coefficients are zero. The latter case is trivial: either there is no solution, or all pairs of numbers $(x, y)$ are solutions.)

If the slopes of these two linear functions are the same, they describe identical or parallel lines. The slopes are identical when $\frac{a}{b} = \frac{c}{d}$, i.e., when the determinant of the matrix of

coefficients, $ad - cb$, is zero. If in addition the intercepts are equal, both equations describe the same line. In this case all points $(x, y)$ on the line are solutions. If the intercepts are different, the two equations describe parallel lines. These do not intersect and hence there is no solution.

Example 5. The system

$$x + 2y = 1$$
$$2x + 4y = 4$$

has no solution: the two lines are parallel.

[Figure: the two parallel lines in the $(x, y)$-plane.]

There is no common solution. Trying to find one yields a contradiction:

$$x + 2y = 1 \;\; | \times 2 \;\Rightarrow\; 2x + 4y = 2 \neq 4$$

3.3 One equation non-linear, one linear

Consider, for instance,

$$y^2 + 2x - 1 = 0$$
$$y + x = 1$$

In this case we solve the linear equation for one of the variables and substitute the result into the non-linear equation. As a result we obtain one non-linear equation in a single unknown. This makes the problem simpler, but admittedly some luck is needed to solve the non-linear equation in one variable. Solving the linear equation for $x$ can sometimes

give a simpler non-linear equation than solving for $y$, and vice versa. One may have to try both possibilities. In our example it is convenient to solve for $x$:

$$x = 1 - y$$

$$y^2 + 2(1 - y) - 1 = y^2 - 2y + 1 = (y - 1)^2 = 0$$

So the unique solution is $y = 1$ and $x = 0$.

3.4 Two non-linear equations

There is no general method. One needs to memorize some tricks which work in special cases. In the example of profit maximization with a Cobb-Douglas production function we were led to the system

$$2 K^{-5/6} L^{1/2} = 1$$
$$6 K^{1/6} L^{-1/2} = 3$$

This is a simultaneous system of two equations in two unknowns which looks hard to solve. However, as we have seen, division of the two equations shows that $L = K$ must hold in a critical point. We can substitute this into, say, the first equation and obtain

$$2 K^{-5/6} K^{1/2} = 1$$
$$2 K^{-1/3} = 1$$
$$\sqrt[3]{K} = 2$$
$$K = 8$$

Thus the solution to the first order conditions (and, in fact, the optimum) is $K^* = L^* = 8$.

Remark 6. This method works with any Cobb-Douglas production function $Q(K, L) = K^a L^b$ where the indices $a$ and $b$ are positive and sum to less than 1.

Remark 7. The first order conditions (6) and (7) form a "hidden" linear system of equations. Namely, if you take logarithms (introduced later!) you get the system

$$\ln 2 - \tfrac{5}{6} \ln K + \tfrac{1}{2} \ln L = 0$$
$$\ln 6 + \tfrac{1}{6} \ln K - \tfrac{1}{2} \ln L = \ln 3$$

which is linear in $\ln K$ and $\ln L$. One can solve this system for $\ln K$ and $\ln L$, which then gives us the values for $L$ and $K$.
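The log-linearization trick of Remark 7 can be checked numerically: taking logarithms turns the two first order conditions into a linear $2 \times 2$ system in $\ln K$ and $\ln L$, which any of the methods above solves. A sketch, assuming the first order conditions as reconstructed here ($2K^{-5/6}L^{1/2} = 1$ and $6K^{1/6}L^{-1/2} = 3$):

```python
import math

# Taking logs of the two first order conditions gives the linear system
#   -(5/6)*lnK + (1/2)*lnL = -ln 2
#    (1/6)*lnK - (1/2)*lnL = ln 3 - ln 6
# in the unknowns lnK = ln K and lnL = ln L.
a, b, e = -5/6, 1/2, -math.log(2)
c, d, f = 1/6, -1/2, math.log(3) - math.log(6)

# Solve it with Cramer's rule.
D = a * d - c * b
lnK = (e * d - f * b) / D
lnL = (a * f - c * e) / D

K, L = math.exp(lnK), math.exp(lnL)
print(round(K, 6), round(L, 6))  # 8.0 8.0
```

Exponentiating the solution of the linear system recovers the same optimum $K = L = 8$ found by substitution above.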

4 Young's theorem

Example:

$$z = f(x, y) = x^3 + 3x^2y^2 + y^4$$

This is a function in two variables. When we calculate the two partial derivatives we obtain two new functions in two variables:

$$\frac{\partial z}{\partial x} = 3x^2 + 6xy^2 \qquad\qquad \frac{\partial z}{\partial y} = 6x^2y + 4y^3$$

Each of these two functions can again be partially differentiated in the two directions. So there are four second partial derivatives:

$$\frac{\partial^2 z}{\partial x^2} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial}{\partial x}\left(3x^2 + 6xy^2\right) = 6x + 6y^2$$

$$\frac{\partial^2 z}{\partial y \partial x} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial}{\partial y}\left(3x^2 + 6xy^2\right) = 12xy$$

$$\frac{\partial^2 z}{\partial x \partial y} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial}{\partial x}\left(6x^2y + 4y^3\right) = 12xy$$

$$\frac{\partial^2 z}{\partial y^2} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial}{\partial y}\left(6x^2y + 4y^3\right) = 6x^2 + 12y^2$$

We observe that out of these four new functions two are the same, namely the cross derivatives $\frac{\partial^2 z}{\partial y \partial x}$ and $\frac{\partial^2 z}{\partial x \partial y}$. Thus it does not matter whether we first take the derivative with respect to $x$ and then with respect to $y$, or vice versa. Young's theorem states that the two cross derivatives $\frac{\partial^2 z}{\partial y \partial x}$ and $\frac{\partial^2 z}{\partial x \partial y}$ are always the same, provided some mild conditions are satisfied (the second derivatives must exist and must be continuous functions). The four second derivatives are conveniently arranged in the form of a $2 \times 2$ matrix called the Hessian matrix. In our example:

$$H = \begin{bmatrix} \dfrac{\partial^2 z}{\partial x^2} & \dfrac{\partial^2 z}{\partial y \partial x} \\[6pt] \dfrac{\partial^2 z}{\partial x \partial y} & \dfrac{\partial^2 z}{\partial y^2} \end{bmatrix} = \begin{bmatrix} 6x + 6y^2 & 12xy \\ 12xy & 6x^2 + 12y^2 \end{bmatrix}$$

5 Second order conditions

A critical point of a function $z = f(x, y)$ can be a peak, a trough or a saddle point:

[Figure: three surface plots showing a peak, a saddle ($z = x^2 - y^2$) and a trough ($z = x^2 + y^2$).]

As in the case of a function in one variable, the second derivatives can be used to decide which of the three cases occurs. However, the criterion that separates a saddle point from peaks and troughs may come as a surprise.

Theorem 8. Suppose that the function $z = f(x, y)$ has a critical point at $(x^*, y^*)$. If the determinant of the Hessian

$$\det H = \begin{vmatrix} \dfrac{\partial^2 z}{\partial x^2} & \dfrac{\partial^2 z}{\partial y \partial x} \\[6pt] \dfrac{\partial^2 z}{\partial x \partial y} & \dfrac{\partial^2 z}{\partial y^2} \end{vmatrix} = \frac{\partial^2 z}{\partial x^2}\frac{\partial^2 z}{\partial y^2} - \frac{\partial^2 z}{\partial x \partial y}\frac{\partial^2 z}{\partial y \partial x} = \frac{\partial^2 z}{\partial x^2}\frac{\partial^2 z}{\partial y^2} - \left(\frac{\partial^2 z}{\partial x \partial y}\right)^2$$

is negative at $(x^*, y^*)$ then $(x^*, y^*)$ is a saddle point.

If this determinant is positive at $(x^*, y^*)$ then $(x^*, y^*)$ is a peak or a trough. In this case the signs of $\frac{\partial^2 z}{\partial x^2}\big|_{(x^*, y^*)}$ and $\frac{\partial^2 z}{\partial y^2}\big|_{(x^*, y^*)}$ are the same. If both signs are positive, then $(x^*, y^*)$ is a trough. If both signs are negative, then $(x^*, y^*)$ is a peak. Nothing can be said if the determinant is zero.

$$\det H \;\begin{cases} < 0: & \text{saddle point} \\[4pt] > 0: & \begin{cases} \dfrac{\partial^2 z}{\partial x^2} > 0: & \text{trough} \\[4pt] \dfrac{\partial^2 z}{\partial x^2} < 0: & \text{peak} \end{cases} \end{cases}$$

Remark 9. If $\frac{\partial^2 z}{\partial x^2}$ is positive and $\frac{\partial^2 z}{\partial y^2}$ is negative, or vice versa, then the determinant is automatically negative and hence one has a saddle point. This is so because $\frac{\partial^2 z}{\partial x^2}\frac{\partial^2 z}{\partial y^2}$ is negative and therefore so is $\frac{\partial^2 z}{\partial x^2}\frac{\partial^2 z}{\partial y^2} - \left(\frac{\partial^2 z}{\partial x \partial y}\right)^2$.

Remark 10. When the determinant is zero, more complicated saddle points are possible. For instance, $z = 3yx^2 - y^3$ has a critical point at $(0, 0)$ where all second derivatives are zero. Its graph is called a monkey saddle.

Example 11. $z = f(x, y) = x^2 - y^2$. The partial derivatives are $\frac{\partial z}{\partial x} = 2x$ and $\frac{\partial z}{\partial y} = -2y$. Clearly, $(0, 0)$ is the only critical point. The determinant of the Hessian is

$$\det H = \begin{vmatrix} 2 & 0 \\ 0 & -2 \end{vmatrix} = (2)(-2) - 0 \cdot 0 = -4 < 0$$

Hence the function has a saddle point at $(0, 0)$.

Example 12. $z = -\frac{1}{20}x^2 + 10yx - \frac{1}{200}y^2$ has the two partial derivatives $\frac{\partial z}{\partial x} = -\frac{1}{10}x + 10y$ and $\frac{\partial z}{\partial y} = 10x - \frac{1}{100}y$. It is not difficult to show that $(0, 0)$ is the only critical point. The determinant of the Hessian is

$$\det H = \begin{vmatrix} -\frac{1}{10} & 10 \\ 10 & -\frac{1}{100} \end{vmatrix} = \frac{1}{1000} - 100 < 0$$

Hence the function has a saddle point at $(0, 0)$ and not a maximum, although both $\frac{\partial^2 z}{\partial x^2}$ and $\frac{\partial^2 z}{\partial y^2}$ are negative. The graph of the function and the level curves are as follows:

[Figure: the graph of the function and its level curves.]

Exercise 13. Show that the critical point in Example 3 is a peak.

Example 14. We have shown in the handouts of the previous weeks that the profit function

$$\pi(K, L) = 12 K^{1/6} L^{1/2} - K - 3L$$

has a critical point at $K = L = 8$. Is it a relative maximum? We have

$$\frac{\partial^2 \pi}{\partial K^2} = -\frac{5}{3} K^{-11/6} L^{1/2} \qquad \frac{\partial^2 \pi}{\partial K \partial L} = \frac{\partial^2 \pi}{\partial L \partial K} = K^{-5/6} L^{-1/2} \qquad \frac{\partial^2 \pi}{\partial L^2} = -3 K^{1/6} L^{-3/2}$$

and hence

$$H = \begin{bmatrix} -\frac{5}{3} K^{-11/6} L^{1/2} & K^{-5/6} L^{-1/2} \\[4pt] K^{-5/6} L^{-1/2} & -3 K^{1/6} L^{-3/2} \end{bmatrix}$$

For $K = L = 8$ we obtain (since $8^{4/3} = (\sqrt[3]{8})^4 = 2^4 = 16$)

$$\det H \big|_{K=8,\, L=8} = \begin{vmatrix} -\frac{5}{48} & \frac{1}{16} \\[4pt] \frac{1}{16} & -\frac{3}{16} \end{vmatrix} = \frac{5}{48} \cdot \frac{3}{16} - \frac{1}{256} = \frac{5}{256} - \frac{1}{256} = \frac{4}{256} = \frac{1}{64} > 0$$

Thus the determinant is positive. Since $\frac{\partial^2 \pi}{\partial K^2}\big|_{K=8,\, L=8} = -\frac{5}{48} < 0$ we can conclude that $K = L = 8$ is a peak.

6 Absolute maxima or minima

The role of intervals for functions in one variable is taken over by convex sets when considering functions in two variables. A region of the $(x, y)$-plane is called convex if, for every two points $(x_1, y_1)$ and $(x_2, y_2)$ in the region, it contains the line segment

connecting these two points. (Notice: there are concave and convex functions and there are concave and convex lenses, but there is no such thing as a concave set.)

[Figure: two convex sets and two non-convex sets.]

The $(x, y)$-plane and the set of all non-negative coordinate pairs $(x, y)$ are also convex sets. An absolute maximum of a function $f(x, y)$ in a region is a point $(x^*, y^*)$ in the region such that $f(x^*, y^*) \geq f(x, y)$ holds with respect to all other points in the region.

Theorem 15. Suppose the function $f(x, y)$ is defined on a convex set and has there a unique critical point $(x^*, y^*)$. If the determinant of the Hessian at this point is positive and if $\frac{\partial^2 f}{\partial x^2}$ is negative (positive), then $(x^*, y^*)$ is an absolute maximum (minimum) on the convex set.

The conditions are satisfied in many examples we have discussed, for instance in the profit-maximization problem above.

Remark 16. Suppose the function $f(x, y)$ is defined on a convex set. Suppose the determinant of the Hessian is always positive in the region and that $\frac{\partial^2 f}{\partial x^2}$ is always positive (respectively, negative). Then the function is convex (respectively, concave). Hereby the function is convex (concave) if the area above (below) the graph of the function is convex.

7 Quadratic forms

A quadratic form in $n$ variables is

$$z(\vec{x}) = \sum_{i=1}^{n} \sum_{j=1}^{n} b_{ij}\, x_i x_j = \sum_{i=1}^{n} b_{ii}\, x_i^2 + 2 \sum_{i=1}^{n} \sum_{j > i} b_{ij}\, x_i x_j$$

In matrix form, with $B$ a symmetric matrix ($B = B'$):

$$z(\vec{x}) = \vec{x}\,' B \vec{x}$$

A quadratic function adds a linear part and a constant:

$$\tilde{z}(\vec{x}) = \sum_{i=1}^{n} \sum_{j=1}^{n} b_{ij}\, x_i x_j + \sum_{i=1}^{n} a_i x_i + c = \vec{x}\,' B \vec{x} + A\vec{x} + c$$

(the linear part and the constant amount to a translation of the graph).
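The two ways of writing a quadratic form, as a double sum and as the matrix product $\vec{x}\,'B\vec{x}$, can be cross-checked numerically. A small sketch with an arbitrary symmetric matrix (the values are illustrative only):

```python
def quad_form_sum(B, x):
    """Quadratic form as the double sum  sum_i sum_j b_ij * x_i * x_j."""
    n = len(x)
    return sum(B[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def quad_form_matrix(B, x):
    """Quadratic form as the matrix product  x' B x."""
    n = len(x)
    Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Bx[i] for i in range(n))

# An arbitrary symmetric 2x2 matrix and a test vector.
B = [[2, 3],
     [3, -1]]
x = [1, 2]

# Both formulas agree: 2*1 + 2*3*(1*2) + (-1)*4 = 2 + 12 - 4 = 10.
print(quad_form_sum(B, x), quad_form_matrix(B, x))  # 10 10
```

For a symmetric $B$ the off-diagonal terms appear twice in the double sum, which is exactly the factor 2 in front of the $i < j$ terms above.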

A quadratic form is called positive or negative (semi-)definite according to:

positive definite: $\vec{x}\,' B \vec{x} > 0$ for all $\vec{x} \in \mathbb{R}^n$, $\vec{x} \neq 0$
positive semi-definite: $\vec{x}\,' B \vec{x} \geq 0$ for all $\vec{x} \in \mathbb{R}^n$
negative definite: $\vec{x}\,' B \vec{x} < 0$ for all $\vec{x} \in \mathbb{R}^n$, $\vec{x} \neq 0$
negative semi-definite: $\vec{x}\,' B \vec{x} \leq 0$ for all $\vec{x} \in \mathbb{R}^n$

The quadratic form is indefinite if $\vec{x}\,' B \vec{x} < 0$ for some $\vec{x} \in \mathbb{R}^n$ and $\vec{y}\,' B \vec{y} > 0$ for some $\vec{y} \in \mathbb{R}^n$.

Theorem 17 (Sylvester's law of inertia). There exists a linear change of coordinates

$$x_i = \sum_{j=1}^{n} g_{ij}\, y_j$$

that transforms the quadratic form into

$$\hat{z}(\vec{y}) = \sum_{i=1}^{n_+} y_i^2 - \sum_{i=n_+ + 1}^{n_+ + n_-} y_i^2$$

with $n_+, n_- \geq 0$ and $n_+ + n_- \leq n$. The numbers $n_+$ and $n_-$ are invariants, i.e., they do not depend on the specific coordinate change. (Notice: an empty sum $\sum_{i=1}^{0} y_i^2$ is 0.)

In matrix notation, $\vec{y}\,' \hat{B} \vec{y} = (G\vec{x})' \hat{B}\, (G\vec{x})$ with $\vec{y} = G\vec{x}$, $G$ invertible and $\hat{B}$ a diagonal matrix with $+1$, $-1$, $0$ on the diagonal. Positive definiteness means $n_+ = n$.

For $n = 2$, assume $a \neq 0$ and write the quadratic form as $z = ax_1^2 + 2bx_1x_2 + cx_2^2$. Completing the square,

$$z = a\left(x_1^2 + 2\frac{b}{a}x_1x_2 + \left(\frac{b}{a}\right)^2 x_2^2\right) + \left(c - \frac{b^2}{a}\right)x_2^2 = a\left(x_1 + \frac{b}{a}x_2\right)^2 + \left(c - \frac{b^2}{a}\right)x_2^2$$

The first coordinate change

$$u_1 = x_1 + \frac{b}{a}x_2 \qquad\qquad u_2 = x_2$$

brings the quadratic form into

$$a u_1^2 + \left(c - \frac{b^2}{a}\right) u_2^2$$

The second coordinate change $y_1 = \sqrt{|a|}\, u_1$ and (if necessary) $y_2 = \sqrt{\left|c - \frac{b^2}{a}\right|}\, u_2$ brings the quadratic form into the desired form. Notice:

$$\det \hat{B} = (\det G)^2 \det B$$

One distinguishes $k \times k$ minors, $k \times k$ principal minors, and $k \times k$ "not-so-principal" minors.

Theorem 18. The quadratic form $B$ is negative definite if and only if, for each $k$, the determinant of the leading principal $k \times k$ minor has sign $(-1)^k$.

Theorem 19. Let $z = f(\vec{x})$ be twice differentiable on a convex set. Assume the Hessian matrix is negative definite except on a set of measure zero. Then $f$ is strictly concave.

Theorem 20. Let $h(u)$ be an increasing function and $g(\vec{x})$ a concave function on a convex domain. Then the composite function $h(g(\vec{x}))$ is quasi-concave.
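The sign test of Theorem 18 is easy to sketch for the $2 \times 2$ case and ties the chapter back to the second order conditions: applied to the Hessian of the profit function of Example 14 at $K = L = 8$ (entries as reconstructed here: $-5/48$, $1/16$, $-3/16$), it confirms that the critical point is a peak.

```python
from fractions import Fraction

def is_negative_definite_2x2(B):
    """Leading-principal-minor test for a 2x2 matrix:
    the 1x1 leading minor must have sign (-1)^1 (i.e. be negative)
    and the 2x2 leading minor (det B) sign (-1)^2 (i.e. be positive)."""
    m1 = B[0][0]
    m2 = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return m1 < 0 and m2 > 0

# Hessian of pi(K, L) = 12*K**(1/6)*L**(1/2) - K - 3*L at K = L = 8.
H = [[Fraction(-5, 48), Fraction(1, 16)],
     [Fraction(1, 16),  Fraction(-3, 16)]]

# Negative definite Hessian at the critical point => peak (local maximum).
print(is_negative_definite_2x2(H))  # True
```

With exact fractions the $2 \times 2$ leading minor evaluates to $1/64$, the same determinant computed by hand in Example 14.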