Chapter 8: Nonlinear programming. Extreme value theorem; maxima and minima. LC Abueg: Mathematical Economics.


Chapter 8: Nonlinear programming
Lectures in Mathematical Economics
L. Cagandahan Abueg, De La Salle University School of Economics

Extreme value theorem; maxima and minima

Definition. Let f: D → R, D ⊆ Rⁿ. A point x* in D is said to be a global maximizer of f iff

f(x) ≤ f(x*), ∀x ∈ D.

The value f(x*) is called the global maximum of f.

Definition. Let f: D → R, D ⊆ Rⁿ. A point x# in D is said to be a global minimizer of f iff

f(x) ≥ f(x#), ∀x ∈ D.

The value f(x#) is called the global minimum of f.

Definition. Let f: D → R, D ⊆ Rⁿ. A point x* in D is said to be a local maximizer of f iff there is a neighborhood V_r(x*) (with r > 0) such that

f(x) ≤ f(x*), ∀x ∈ D ∩ V_r(x*).

The value f(x*) is called the local maximum of f.

Definition. Let f: D → R, D ⊆ Rⁿ. A point x# in D is said to be a local minimizer of f iff there is a neighborhood V_r(x#) (with r > 0) such that

f(x) ≥ f(x#), ∀x ∈ D ∩ V_r(x#).

The value f(x#) is called the local minimum of f.

[Figure: graph of f(x) on [a, b], marking a global maximizer, a local minimizer, a global minimizer, and a local maximizer.]

Definition. A maximizer or a minimizer of a function f is, in general, referred to as an optimizer (or an optimum point) of f. The value of f at the optimum point is called an optimum of f.

Suprema and infima

Definition. Let S ⊆ R. A number v is said to be an upper bound of S if s ≤ v, ∀s ∈ S. A number w is said to be a lower bound of S if w ≤ s, ∀s ∈ S.

Definition. Let S ⊆ R. S is said to be bounded above if S has an upper bound; bounded below if S has a lower bound; bounded if S has both an upper bound and a lower bound; and unbounded if it is not bounded.

Definition. Let S ⊆ R. If S is bounded above, then an upper bound u is said to be a supremum (or a least upper bound) of S if no number smaller than u is an upper bound of S. We denote u := sup S.

Definition. Let S ⊆ R. If S is bounded below, then a lower bound z is said to be an infimum (or a greatest lower bound) of S if no number bigger than z is a lower bound of S. We denote z := inf S.

[Figure: the real line, with the lower bounds w of S to the left of S and the upper bounds v to the right; inf S and sup S mark the endpoints of S.]

In symbols, we have

u = sup S = min{v : s ≤ v, ∀s ∈ S}  and  z = inf S = max{w : w ≤ s, ∀s ∈ S}.

Example. Consider the interval [−1, 2] ⊆ R. We have sup[−1, 2] = 2 and inf[−1, 2] = −1. Now consider the interval (−1, 2) ⊆ R. We have sup(−1, 2) = 2 and inf(−1, 2) = −1.

Example. The intervals [a, ∞) and (a, ∞) have infima equal to a, but no suprema exist. Also, the intervals (−∞, b) and (−∞, b] have suprema equal to b, but no infima exist.

Remark. Let S ⊆ R. If S has a lower bound or an upper bound (or both), that bound must be finite. Moreover, the supremum or infimum of a set may not be an element of the set. For bounded intervals, the sup and inf are always the (finite) endpoints.

Extreme value theorem

Theorem 8.1. [B. Bolzano (1830)] If I is a closed and bounded interval and f: I → R is continuous on I, then f is bounded on I. Moreover, if

M = sup_{x∈I} f(x)  and  m = inf_{x∈I} f(x),

then there exist x_m, x_M ∈ I such that f(x_m) = m and f(x_M) = M.

[Figure: a continuous function f on [a, b] attaining its maximum f(x_M) = M and its minimum f(x_m) = m.]

Theorem 8.2. [K. Weierstrass (1860)] Let K be a compact subset of Rⁿ. Suppose that f: K → R is continuous. Then f has a maximizer and a minimizer in K.

Unconstrained optimization

Critical points

Definition. Let f: D → R, D ⊆ Rⁿ. A point x₀ in D is said to be a critical point of f iff f′(x₀) = 0 or f′(x₀) does not exist. A critical point x₀ in D that satisfies f′(x₀) = 0 is called a stationary point.

Univariate case

Theorem 8.3. [Necessary condition for local optima] Let f: D → R, D ⊆ R, and suppose further that f is differentiable at x*, where x* ∈ int D. If x* is a local optimizer of f, then f′(x*) = 0.

Example. Find the stationary point(s) of the function y = exp(x) − x + 2.
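As a quick numerical check of the example (a sketch only: it assumes the example function is y = exp(x) − x + 2, so that f′(x) = eˣ − 1, and the bisection helper below is illustrative rather than part of the notes):

```python
import math

def f_prime(x):
    # derivative of f(x) = exp(x) - x + 2
    return math.exp(x) - 1.0

def bisect_root(g, lo, hi, tol=1e-10):
    """Find a root of g on [lo, hi] by bisection (assumes a sign change)."""
    glo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) * glo > 0:
            lo, glo = mid, g(mid)   # root is in the right half
        else:
            hi = mid                # root is in the left half
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

x_star = bisect_root(f_prime, -2.0, 2.0)   # f'(x) = e^x - 1 = 0 at x = 0
```

The single stationary point is x* = 0, where f′ changes sign from negative to positive.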

Remark. The converse of Theorem 8.3 is not true. Consider y = x³.

Theorem 8.4. [Sufficient condition for local optima] Let f: D → R, D ⊆ R, and suppose further that f is twice continuously differentiable on int D containing x*, and that f′(x*) = 0. Then:

(i) f″(x*) > 0 ⟹ f(x*) < f(x), ∀x ∈ V_r(x*) ∩ D, x ≠ x*;
(ii) f″(x*) < 0 ⟹ f(x*) > f(x), ∀x ∈ V_r(x*) ∩ D, x ≠ x*.

Remark. The condition that f(x*) < f(x), ∀x ∈ V_r(x*) ∩ D, x ≠ x*, means that x* is a strict local minimizer of f. Similarly, the condition that f(x*) > f(x), ∀x ∈ V_r(x*) ∩ D, x ≠ x*, means that x* is a strict local maximizer of f.

Example. Consider the function y = f(x) = ax² + bx + c, a ≠ 0.

Example. Consider the cubic polynomial f(x) = 2x³ − 3x² − 12x + 1.
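The second-derivative test of Theorem 8.4 can be sketched for the cubic example (the helper name is illustrative; the derivatives f′(x) = 6x² − 6x − 12 = 6(x − 2)(x + 1) and f″(x) = 12x − 6 are computed by hand):

```python
def second_derivative_test(fpp_at_x):
    """Theorem 8.4: f''(x*) > 0 gives a strict local min; f''(x*) < 0 a strict local max."""
    if fpp_at_x > 0:
        return "strict local minimizer"
    if fpp_at_x < 0:
        return "strict local maximizer"
    return "test inconclusive"

# f(x) = 2x^3 - 3x^2 - 12x + 1: f'(x) = 6(x - 2)(x + 1), stationary at x = -1 and x = 2
stationary = [-1.0, 2.0]
fpp = lambda x: 12.0 * x - 6.0
labels = {x: second_derivative_test(fpp(x)) for x in stationary}
```

Since f″(−1) = −18 < 0 and f″(2) = 18 > 0, the cubic has a strict local maximum at x = −1 and a strict local minimum at x = 2.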

Example. Consider the cubic polynomial y = (1/3)x³ − x.

Remark. Consider again the function y = x³.

Theorem 8.5. Let f: D → R, D ⊆ R, and suppose further that f has a continuous nth-order derivative on V_r(x*) contained in D. Suppose that

f⁽ᵏ⁾(x*) = 0, k = 1, …, n − 1,  and  f⁽ⁿ⁾(x*) ≠ 0.

i. If n is even and f⁽ⁿ⁾(x*) > 0, then x* is a strict local minimizer of f.
ii. If n is even and f⁽ⁿ⁾(x*) < 0, then x* is a strict local maximizer of f.
iii. If n is odd, then x* is a horizontal inflection point of f.

Example. Consider the quintic polynomial g(x) = (1/5)x⁵ − (1/3)x³.

Example. Consider the quartic polynomial h(x) = x⁴ − 6x² + 8x + 10.
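A minimal sketch of the nth-derivative test (Theorem 8.5), applied to the quintic g(x) = (1/5)x⁵ − (1/3)x³; the helper name is illustrative, and the derivatives g′ = x⁴ − x², g″ = 4x³ − 2x, g‴ = 12x² − 2 are hard-coded from a hand computation:

```python
def nth_derivative_test(derivs_at_x):
    """Theorem 8.5: derivs_at_x = [f'(x*), f''(x*), f'''(x*), ...].
    The order of the first nonzero derivative decides the classification."""
    for k, d in enumerate(derivs_at_x, start=1):
        if d != 0:
            if k % 2 == 1:
                return "horizontal inflection point"
            return "strict local minimizer" if d > 0 else "strict local maximizer"
    return "inconclusive"

# g(x) = x^5/5 - x^3/3: stationary points solve g'(x) = x^2(x^2 - 1) = 0
d1 = lambda x: x**4 - x**2
d2 = lambda x: 4 * x**3 - 2 * x
d3 = lambda x: 12 * x**2 - 2
at = lambda x: [d1(x), d2(x), d3(x)]

label_origin = nth_derivative_test(at(0.0))   # g' = g'' = 0, g''' = -2 != 0, n = 3 odd
label_one = nth_derivative_test(at(1.0))      # g' = 0, g'' = 2 > 0, n = 2 even
label_neg = nth_derivative_test(at(-1.0))     # g' = 0, g'' = -2 < 0, n = 2 even
```

So x* = 0 is a horizontal inflection point, while x* = 1 and x* = −1 are a strict local minimizer and maximizer, respectively.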

Example. Consider a perfectly competitive firm with revenue function R(q) and cost function C(q). The profit function of the firm is given by Π(q) = R(q) − C(q).

[Figure: marginal cost MC and average cost AC curves against price p, with the optimal output q* marked.]

[Figure: total revenue and total cost curves, with MR(q₁*) = MC(q₁*) and MR(q₂*) = MC(q₂*) at the outputs q₁* and q₂*.]

Exercise. Consider the function f(x) = x³/(x² − 3x + 2). State the necessary restrictions on x (since this is a rational function). Determine all critical points. Among those, which are stationary?

Exercise. Let y = F(z), z = f(x) be differentiable functions, and let y be increasing in z. Prove that x* is a local maximizer of f iff x* is a local maximizer of F ∘ f.

Review: matrix calculus

Definition. Consider the function y = f(x) = f(x₁, x₂, …, xₙ). If f is continuously differentiable, then the vector

∇f = [∂y/∂x₁  ∂y/∂x₂  …  ∂y/∂xₙ]ᵀ

is called the gradient of f, read "del f".

Definition. Consider the function y = f(x) = f(x₁, x₂, …, xₙ). If f is twice continuously differentiable, then the matrix H = H(x₁, x₂, …, xₙ), where

H(f) = [ ∂²f/∂x₁²    ∂²f/∂x₁∂x₂  …  ∂²f/∂x₁∂xₙ ]
       [ ∂²f/∂x₂∂x₁  ∂²f/∂x₂²    …  ∂²f/∂x₂∂xₙ ]
       [ ⋮                                      ]
       [ ∂²f/∂xₙ∂x₁  ∂²f/∂xₙ∂x₂  …  ∂²f/∂xₙ²   ],

is called the Hessian matrix, and its determinant is the Hessian determinant. We may also denote the Hessian matrix as H(f) = H = ∇²f.
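The gradient and Hessian can be approximated by central differences; the sketch below (helper names illustrative, not from the notes) checks the definitions on the test function f(x, y) = x² + 3xy, whose exact gradient is (2x + 3y, 3x) and whose exact Hessian is [[2, 3], [3, 0]]:

```python
def gradient(f, x, h=1e-5):
    """Central-difference approximation of the gradient of f at the point x (a list)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian matrix of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

f = lambda v: v[0]**2 + 3 * v[0] * v[1]   # f(x, y) = x^2 + 3xy
g = gradient(f, [1.0, 2.0])               # exact value: (8, 3)
H = hessian(f, [1.0, 2.0])                # exact value: [[2, 3], [3, 0]]
```

Note that the approximate Hessian is symmetric, as Young's theorem requires for twice continuously differentiable functions.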

Multivariate case

Theorem 8.6. [Necessary condition for local optima] Let f: D → R, D ⊆ Rⁿ, and suppose further that f has partial derivatives at x*, where x* ∈ int D. If x* is a local optimizer of f, then ∇f(x*) = θₙ (the zero vector).

Definition. A function f(x, y) is said to have a saddle point (x#, y#) iff

f(x, y#) ≤ f(x#, y#) ≤ f(x#, y), ∀(x, y) ∈ D_f.

Remark. A saddle point is an n-dimensional analogue of the horizontal point of inflection.

Theorem 8.7. [Sufficient condition for local optima] Let f: D → R, D ⊆ Rⁿ, and suppose further that f has continuous second-order partials on V_r(x*) ⊆ D and that ∇f(x*) = θₙ.

If ∇²f(x*) is positive definite (resp., negative definite, indefinite), then x* is a strict local minimizer (resp., strict local maximizer, saddle point) of f.

Example. Consider f(x, y) = x² − y² + x − y.

Example. Classify the stationary points of the following functions:
(i) f(x, y) = x² − y² + xy;
(ii) g(x, y) = x² + y² + xy + x + 5y;
(iii) h(x, y) = exp(−x² − y²).

[Figure: surface plot of f(x, y) = x² − y² + xy.]

[Figure: surface plot of g(x, y) = x² + y² + xy + x + 5y.]

[Figure: surface plot of h(x, y) = exp(−x² − y²).]

Exercise. Consider the function z = f(x, y) = xy + x²y − xy³. Show that there are six optimizers for this function:

(0, 0), (−1, 0), (0, 1), (0, −1), (−2/5, √5/5), (−2/5, −√5/5).

Exercise. Verify that g(x, y, z) = x² + y² + z² − xyz has four saddle points:

(x*, y*, z*) = (2, 2, 2), (−2, −2, 2), (−2, 2, −2), (2, −2, −2).

Remark. In the case of a function with two variables z = f(x, y), we can restate the sufficiency condition: let f: D → R, D ⊆ R², and suppose further that f has continuous second-order partials on V_r((x*, y*)) ⊆ D

and

f_x(x*, y*) = 0,  f_y(x*, y*) = 0.

Define

D(x, y) = f_xx(x, y) f_yy(x, y) − [f_xy(x, y)]².

(This assumes that, for this class of multivariate functions, Young's theorem holds; see Theorem 3.15.)

Then:
(i) D(x*, y*) > 0 and f_xx(x*, y*), f_yy(x*, y*) > 0 ⟹ (x*, y*) is a local minimum;
(ii) D(x*, y*) > 0 and f_xx(x*, y*), f_yy(x*, y*) < 0 ⟹ (x*, y*) is a local maximum;
(iii) D(x*, y*) < 0 ⟹ (x*, y*) is a saddle point;
(iv) D(x*, y*) = 0 ⟹ no conclusion can be made about (x*, y*).
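The two-variable test can be sketched directly from the definition of D(x, y); the stationary points and second-order partials below are computed by hand for the quadratic examples f(x, y) = x² − y² + xy and g(x, y) = x² + y² + xy + x + 5y, and the helper name is illustrative:

```python
def D_test(fxx, fyy, fxy):
    """Two-variable second-order test at a stationary point: D = fxx*fyy - fxy^2."""
    D = fxx * fyy - fxy**2
    if D > 0 and fxx > 0:
        return "local minimum"
    if D > 0 and fxx < 0:
        return "local maximum"
    if D < 0:
        return "saddle point"
    return "no conclusion"

# f(x, y) = x^2 - y^2 + xy: f_x = 2x + y, f_y = -2y + x vanish at (0, 0);
# fxx = 2, fyy = -2, fxy = 1
saddle_label = D_test(2.0, -2.0, 1.0)

# g(x, y) = x^2 + y^2 + xy + x + 5y: g_x = 2x + y + 1, g_y = x + 2y + 5
# vanish at (1, -3); gxx = 2, gyy = 2, gxy = 1
min_label = D_test(2.0, 2.0, 1.0)
```

For f, D = −5 < 0 gives a saddle point at the origin; for g, D = 3 > 0 with g_xx > 0 gives a local minimum at (1, −3).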

Example. Consider a perfectly competitive firm with revenue function R(x, y) and cost function C(x, y), where x and y are the two goods produced by the said firm. The firm's objective is to maximize profit. Find the necessary and sufficient conditions that the firm must satisfy, if x* and y* exist.

Solution: Formulate the profit function Π(x, y) as follows: Π(x, y) = R(x, y) − C(x, y).

Exercise. Consider the function z = f(x, y) = x²(1 + y)³ + y², defined over the Cartesian plane. Show that f has a unique local minimum at the origin, but it does not have a global minimum.

Convex sets

Definition. A set C is said to be convex iff

∀x, y ∈ C, ∀r ∈ [0, 1]: rx + (1 − r)y ∈ C.

We call rx + (1 − r)y a convex combination of x and y. If, moreover, rx + (1 − r)y lies in the interior of C for all distinct x, y ∈ C and all r in (0, 1), we call C a strictly convex set.

Convex set: the line segment (i.e., the set of convex combinations) is contained in the set, possibly along the border/edge/boundary of the set.

Strictly convex set: the line segment cannot lie along the boundary of the set (at most the endpoints of the segment may touch it).

Example. Consider the [real] Cartesian plane R² = {(a, b): a, b ∈ R}. Define the following sets:

S₁ = [0, 1] × [0, 1],  C₁ = {(x, y): x² + y² ≤ 1}.

S₁ is called the unit square and C₁ is called the unit circle.

[Figure: the unit circle C₁, a strictly convex set, and the unit square S₁, a convex set.]

Example. In microeconomics, the set defined by the budget constraint

pₓx + p_y y ≤ m

is called the budget set of the consumer, which is a convex set. The property of convexity of the budget set implies perfect divisibility of x and y: the possibility of consuming these in fractional units.

[Figure: the budget set, which is [always] convex.]

Exercise. True or false.
i. If A and B are convex sets, then both their union and intersection are convex.
ii. Let U and V be convex subsets of the reals. Define U + V = {u + v: u ∈ U, v ∈ V}. Then U + V is convex.
iii. Consider the linear programming problem max z = cᵀx s.t. Ax ≤ b, x ≥ θₙ. Then the set of feasible [and optimal] solutions is convex. Hint: Consider Exhibit A, Special Cases, in Chapter 6.

Convex functions

Definition. Let f: C → R, where C is a convex subset of R. We say that f is a convex function iff

∀x, y ∈ C, ∀r ∈ [0, 1]: f(rx + (1 − r)y) ≤ r f(x) + (1 − r) f(y).

Definition. Let f: C → R, where C is a convex subset of R. We say that f is a strictly convex function iff

∀x, y ∈ C with x ≠ y, ∀r ∈ (0, 1): f(rx + (1 − r)y) < r f(x) + (1 − r) f(y).

[Figure: the chord from (x₁, f(x₁)) to (x₂, f(x₂)) lies above the graph of f, illustrating f(rx₁ + (1 − r)x₂) ≤ r f(x₁) + (1 − r) f(x₂).]
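The defining chord inequality can be spot-checked numerically on a grid of sample points. This is only a necessary check on the sample, not a proof of convexity (the helper name is illustrative):

```python
def is_convex_on_sample(f, points, rs=(0.25, 0.5, 0.75), tol=1e-12):
    """Check f(r*x + (1-r)*y) <= r*f(x) + (1-r)*f(y) over a finite sample.
    Passing is necessary (on the sample) but not sufficient for convexity."""
    for x in points:
        for y in points:
            for r in rs:
                z = r * x + (1 - r) * y
                if f(z) > r * f(x) + (1 - r) * f(y) + tol:
                    return False
    return True

pts = [i / 10 for i in range(-20, 21)]                    # grid on [-2, 2]
convex_sq = is_convex_on_sample(lambda x: x * x, pts)     # x^2: chord lies above graph
convex_neg = is_convex_on_sample(lambda x: -x * x, pts)   # -x^2: inequality fails
```

The same sampler with the inequality reversed would check the concavity definition given below.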

[Figure: a function that is not strictly convex, and a strictly convex function.]

Concave functions

Definition. Let g: C → R, where C is a convex subset of R. We say that g is a concave function iff

∀x, y ∈ C, ∀r ∈ [0, 1]: g(rx + (1 − r)y) ≥ r g(x) + (1 − r) g(y).

Definition. Let g: C → R, where C is a convex subset of R. We say that g is a strictly concave function iff

∀x, y ∈ C with x ≠ y, ∀r ∈ (0, 1): g(rx + (1 − r)y) > r g(x) + (1 − r) g(y).

[Figure: the chord from (x₁, f(x₁)) to (x₂, f(x₂)) lies below the graph, illustrating f(rx₁ + (1 − r)x₂) ≥ r f(x₁) + (1 − r) f(x₂); a strictly concave function, and a function that is not strictly concave.]

Example. The function exp(x) is convex on the whole real line, and ln(x) is concave on (0, ∞).

[Figure: graphs of y = exp(x) and y = ln(x).]

Remark. In general, the quadratic function y = ax² + bx + c, a ≠ 0, is concave on R if a < 0, and convex on R if a > 0. In particular, the function y = x² is convex on the whole real line, while the function y = −x² + 2x is concave on the real line.

[Figure: graphs of y = x² and y = −x² + 2x.]

Example. The cubic function y = x³ is concave on (−∞, 0] and is convex on [0, ∞). On the whole of the real line, it is neither concave nor convex.

[Figure: graph of y = x³.]

Example. The hyperbola y = 1/x is concave on (−∞, 0) and is convex on (0, ∞). On the whole of its domain, it is neither concave nor convex.

[Figure: graph of y = 1/x.]

Example. The absolute value function y = |x| is convex on the whole of R, but it is both concave and convex on each of the intervals (−∞, 0] and [0, ∞).

[Figure: graph of y = |x|.]

Exercise.
i. Prove: a line [in the Cartesian plane] is both convex and concave.
ii. Is the requirement of convexity of the domain essential for the class of concave and convex functions? Explain why or why not.

Remark. For either a concave function g(x) or a convex function f(x), graphically, there are no breaks or jumps in the respective graphs of g and f, and the respective domains are convex sets.

[Figure: a function defined on a nonconvex domain; the nonconvexity of the domain will not make f a valid convex function.]

[Figure: a function with a jump discontinuity; the presence of the jump discontinuity will not make f a valid concave function.]

To avoid confusion with the word "convex", old terminology used the word "convex" to denote a convex set and the word "concave" to denote the following: concave upward (for convex functions), and concave downward (for concave functions).

Theorem 8.8. A concave function is continuous in the interior of its domain.

Theorem 8.9. A convex function is continuous in the interior of its domain.

Remark. If f is concave (resp., convex), then −f is convex (resp., concave).

Theorem 8.10. Let f(x) be twice continuously differentiable on an open interval (a, b).
i. f is concave on (a, b) iff f″(x) ≤ 0 for every x in (a, b).
ii. f is convex on (a, b) iff f″(x) ≥ 0 for every x in (a, b).

Theorem 8.11. Let f(x) be twice continuously differentiable on an open interval (a, b).
i. If f″(x) < 0 for every x in (a, b), then f is strictly concave on (a, b).
ii. If f″(x) > 0 for every x in (a, b), then f is strictly convex on (a, b).

Remark. The converse of Theorem 8.11 is not true. For example, the function g(x) = x⁴ is strictly convex on R, but g″(0) = 0.

Recall that we had a similar case when we considered the function h(x) = x³. There, x* = 0 is a horizontal point of inflection. Thus, the condition f″(x*) = 0 does not by itself yield the conclusion that a stationary point x* is a horizontal point of inflection.

[Figure: graphs of y = x⁴ and y = x² near (0, 0). The point x* = 0 is the minimum point of y = x⁴, but it does not satisfy the sufficiency condition.]

Theorem 8.12. Let f(x) have continuous second-order partial derivatives on an [open] convex subset C of Rⁿ.
i. f is concave on C iff the Hessian matrix ∇²f is negative semidefinite on C.
ii. f is convex on C iff the Hessian matrix ∇²f is positive semidefinite on C.

Theorem 8.13. Let f(x) have continuous second-order partial derivatives on an [open] convex subset C of Rⁿ.
i. If the Hessian matrix ∇²f is negative definite ∀x ∈ C, then f is strictly concave on C.
ii. If the Hessian matrix ∇²f is positive definite ∀x ∈ C, then f is strictly convex on C.

Example. Consider the function f(x, y) = −x² − y².

[Figure: surface plot of f(x, y) = −x² − y².]

By the remark after Theorem 8.9, g(x, y) := −f(x, y) = x² + y² is a strictly convex function.

[Figure: surface plot of g(x, y) = x² + y².]

Theorem 8.14. Let C be convex, f, g: C → R, C ⊆ Rⁿ. If f and g are (strictly) concave on C, then
i. f + g is (strictly) concave on C;
ii. rf is (strictly) concave on C for every r > 0;
iii. sf is (strictly) convex on C for every s < 0.

Example. Consider the functions y = f(x) = −x² and y = g(x) = 2x, and let r = 1, s = −2.

Example. Consider a firm in the perfectly competitive market producing output q with revenue function R(q) and cost function C(q). The profit function is given by Π(q) = R(q) − C(q). If p is the market price of q, then R(q) = pq.

Theorem 8.15. Let C be convex, f, g: C → R, C ⊆ Rⁿ. If f and g are (strictly) convex on C, then
i. f + g is (strictly) convex on C;
ii. rf is (strictly) convex on C for every r > 0;
iii. sf is (strictly) concave on C for every s < 0.

Example. Consider the functions y = f(x) = x² + 1 and y = g(x) = (5/2)x, and let r = 2, s = −1.

Theorem 8.16. Let C be convex, f: C → R, C ⊆ Rⁿ. Let the function g: f(C) → R be defined on f(C), where f(C) is convex. If both f and g are (strictly) concave (with respect to their domains), with g increasing on f(C), then g ∘ f is (strictly) concave on C.

Theorem 8.17. Let C be convex, f: C → R, C ⊆ Rⁿ. Let the function g: f(C) → R be defined on f(C), where f(C) is convex. If both f and g are (strictly) convex (with respect to their domains), with g increasing on f(C), then g ∘ f is (strictly) convex on C.

Exercise. Consider the function z = f(x, y) = Cxᵃyᵇ, a, b, C > 0. Show that
i. z is concave if a + b = 1;
ii. z is strictly concave iff a + b < 1;
iii. z is not concave iff a + b > 1.

Theorem 8.18. Let C be an open convex set, f: C → R, x* ∈ C ⊆ Rⁿ, where f has continuous first-order partial derivatives on C. If f is (strictly) concave on C, then x* is a (strict) local maximizer of f iff x* is a stationary point of f.

Theorem 8.19. Let C be an open convex set, f: C → R, x* ∈ C ⊆ Rⁿ, where f has continuous first-order partial derivatives on C. If f is (strictly) convex on C, then x* is a (strict) local minimizer of f iff x* is a stationary point of f.

Theorem 8.20. Every local maximizer of a concave function is a global maximizer. Also, every local minimizer of a convex function is a global minimizer.

Theorem 8.21. A local maximizer of a strictly concave function is unique. Also, a local minimizer of a strictly convex function is unique.

[Figure: a strictly convex function, whose local minimum is unique and is also the global minimum; and a strictly concave function, whose local maximum is unique and is also the global maximum.]

Example. Consider w = x² + xz − y + y² + yz + 3z².

Exercise. To make economic sense, what restrictions must be imposed on the parameters a, b, c, and d of the total cost function C(q) = aq³ + bq² + cq + d? Justify your answer.

Optimization

Definition. The general optimization problem, also known as the mathematical programming problem (or a constrained optimization problem), is given by

max (or min) f(x) subject to x ∈ X,

where f is a real-valued function and X is a subset of the Euclidean n-space Rⁿ. We call f the objective function, and X is called the feasible set or the feasible region. An element of the feasible set is called a feasible point or feasible solution. The feasible set is described by equalities or inequalities, which we call constraints.

If the objective function and all the constraints are linear, we call the optimization problem a linear programming problem. If either the objective function or at least one of the constraints is nonlinear, we call the optimization problem a nonlinear programming problem.

Definition. Consider the nonlinear programming problem

max z = f(x, y) s.t. g(x, y) = b.

The above problem is also called a constrained optimization problem. We call z = f(x, y) the objective function and g(x, y) = b the constraint.

Geometric interpretation

Remark. Consider the constrained problem

max z = f(x, y) s.t. ax + by = c.

The constraint restricts the values of f to the points satisfying the constraint, i.e., the intersection of the surface f and the plane ax + by = c, thereby maximizing f on this plane.

[Figure: the surface z = f(x, y) intersected with the plane ax + by = c.]

Definition. Given the problem

max (or min) z = f(x, y) s.t. g(x, y) = b,

the Lagrangean function (or simply, Lagrangean) is defined by

ℒ(λ, x, y) = f(x, y) + λ[b − g(x, y)].

The scalar λ is called the Lagrange multiplier.

Remark. The gradient of ℒ from the above problem is given by

∇ℒ(λ, x, y) = [∂ℒ/∂λ  ∂ℒ/∂x  ∂ℒ/∂y]ᵀ = [ℒ_λ  ℒ_x  ℒ_y]ᵀ.

The Hessian matrix of ℒ (with arguments suppressed) is given by

∇²ℒ = [ ℒ_λλ  ℒ_λx  ℒ_λy ]
      [ ℒ_xλ  ℒ_xx  ℒ_xy ]
      [ ℒ_yλ  ℒ_yx  ℒ_yy ],

called the bordered Hessian.

Theorem 8.22. [Necessary condition] Given the problem

max (or min) z = f(x, y) s.t. g(x, y) = b,

where f and g have continuous first-order partial derivatives, let (x*, y*) be an optimizer of f on the feasible set and suppose that

∇g(x*, y*) = [g_x(x*, y*)  g_y(x*, y*)]ᵀ ≠ [0  0]ᵀ.

Then there is a scalar λ* such that ∇ℒ(λ*, x*, y*) = θ₃, i.e., (λ*, x*, y*) is a stationary point of ℒ.

Theorem 8.23. [Sufficient condition] Given the problem

max (or min) z = f(x, y) s.t. g(x, y) = b,

where f and g have continuous second-order partial derivatives, let (x*, y*) be an optimizer of f on the feasible set and suppose that

g_x(x*, y*) ≠ 0 or g_y(x*, y*) ≠ 0, and that λ*, x*, y* satisfy ∇ℒ(λ*, x*, y*) = θ₃. Then:

(i) det[∇²ℒ(λ*, x*, y*)] < 0 ⟹ (x*, y*) is a local min of f;
(ii) det[∇²ℒ(λ*, x*, y*)] > 0 ⟹ (x*, y*) is a local max of f.

Example. Consider the firm's problem

max F(K, L) = 5KL s.t. K + L = 10.

Example. Examine for maxima or minima:

f(x, y) = kxy s.t. x² + y² = b,

where b and k are positive constants.
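For the firm's problem max 5KL s.t. K + L = 10, the substitution method gives a short sketch of the solution. The computation below is hand-derived (K* = L* = 5, F* = 125), with λ* recovered from the first-order condition F_K = λ g_K; this matches the stationary point (λ*, K*, L*) = (25, 5, 5) quoted later in the chapter:

```python
# Substitute L = 10 - K into F(K, L) = 5KL: F(K) = 5K(10 - K) = 50K - 5K^2.
# F'(K) = 50 - 10K = 0  =>  K* = 5, and so L* = 10 - K* = 5, F* = 125.
K_star = 50 / 10
L_star = 10 - K_star
F_star = 5 * K_star * L_star

# First-order condition F_K = lambda * g_K with g(K, L) = K + L (so g_K = 1):
# 5 * L* = lambda*  =>  lambda* = 25.
lam_star = 5 * L_star
```

Since F(K) = 50K − 5K² is strictly concave in K, the stationary point is the unique global maximizer on the constraint.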

Remark. Given the problem

max (or min) f(x, y) subject to g(x, y) = b,

it is possible to solve the above by the graphical method, or by the substitution method: rewrite the constraint and substitute it into the objective function, as if it were an unconstrained problem.

Example. What is the point on the line x + 2y = 4 that is closest to the origin?

[Figure: the line ℓ through (4, 0) and (0, 2), with the closest point P to the origin (0, 0).]

Graphical method: A careful sketching of the objective function and the constraint will yield the point.

Substitution method: We can rewrite the constraint as x = 4 − 2y.

Lagrangean method: Recall that the constrained problem is given by

min z = x² + y² s.t. x + 2y = 4.
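The substitution method for the closest-point problem can be sketched as follows (minimizing the squared distance, which has the same minimizer as the distance itself):

```python
# Substitution method: x = 4 - 2y, so z(y) = (4 - 2y)^2 + y^2 = 5y^2 - 16y + 16.
# z'(y) = 10y - 16 = 0  =>  y* = 8/5 = 1.6, and x* = 4 - 2(1.6) = 0.8.
y_star = 16 / 10
x_star = 4 - 2 * y_star
dist_sq = x_star**2 + y_star**2   # minimum squared distance = 16/5 = 3.2
```

Since z(y) is a strictly convex quadratic (z″ = 10 > 0), the stationary point (0.8, 1.6) is the global minimizer.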

Exercise.
i. What is the maximum product of n positive numbers whose sum is unity?
ii. Show that of all rectangles with a fixed perimeter P, the square has the largest area.

Remark. In the previous exercise (item i), if there are two positive numbers, we have the nonlinear programming problem

max z = xy s.t. x + y = 1, x, y > 0.

Exercise. Consider again the problem

max F(K, L) = 5KL s.t. K + L = 10.

Verify the solution obtained previously using the Lagrangean method by using the graphical and substitution methods.

Theorem 8.24. Consider the problem

max (or min) z = f(x, y) s.t. g(x, y) = b,

where f and g have continuous second-order partial derivatives and b is a parameter. Suppose that (x*, y*) solves the problem and λ* is the associated Lagrange multiplier. If f(x*, y*) is a differentiable function of b, then

df(x*, y*)/db = λ*.

Theorem 8.25. Consider the problem

max (or min) z = f(x) subject to g(x) = b.

Suppose that x* solves the problem and f(x*) is a differentiable function of b. Then

df(x*)/db = λ*.
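Theorem 8.24 can be verified numerically on the earlier firm's problem, where (by substitution, K* = b/2) the value function is V(b) = 5b²/4, and λ* = 25 at b = 10. This is a sketch; the finite-difference step h is an illustrative choice:

```python
def value_function(b):
    """Maximum of 5KL subject to K + L = b (by substitution, K* = L* = b/2)."""
    K = b / 2
    return 5 * K * (b - K)    # equals 5b^2/4

# Central difference dV/db at b = 10 should approximate lambda* = 25.
h = 1e-6
b = 10.0
dV_db = (value_function(b + h) - value_function(b - h)) / (2 * h)
```

The multiplier thus measures the marginal value of relaxing the constraint: one more unit of b raises the maximized objective by approximately λ*.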

Exercise. Consider the problem

min z = f(x, y) = x² + y² s.t. (x − 1)³ − y² = 0.

Show that the method of Lagrange multipliers does not work in this case, and explain why.

Exercise. Examine for optima using the accompanying Lagrangean function:

z = x² + y² s.t. xy = 1.

Verify the answers using the graphical method. Does the substitution method apply?

Definition. Consider the problem

max f(x) s.t. gₖ(x) = bₖ, k = 1, …, m, x ∈ Rⁿ, m < n,

with Lagrangean

ℒ(λ, x) = f(x) + Σₖ₌₁ᵐ λₖ[bₖ − gₖ(x)].
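For the first exercise above, the failure of the Lagrange method can be traced to the constraint: (x − 1)³ = y² forces x ≥ 1, so the minimum of x² + y² on the feasible set is at (1, 0), where ∇g vanishes and Theorem 8.22's hypothesis ∇g ≠ θ fails. A small sketch with hand-computed gradients:

```python
# Constraint g(x, y) = (x - 1)^3 - y^2 = 0 forces x >= 1, so the constrained
# minimum of f(x, y) = x^2 + y^2 is at (1, 0). There,
#   grad g = (3(x - 1)^2, -2y) = (0, 0)   but   grad f = (2x, 2y) = (2, 0) != (0, 0),
# so no lambda can solve grad f = lambda * grad g.
gx, gy = 3 * (1 - 1)**2, -2 * 0
fx, fy = 2 * 1, 2 * 0
constraint_qualification_holds = (gx, gy) != (0, 0)
```

Because the constraint qualification fails at the optimizer, the stationarity condition of Theorem 8.22 never detects the solution (1, 0).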

A point (λ*, x*) is called a saddle point of ℒ iff

ℒ(λ*, x) ≤ ℒ(λ*, x*) ≤ ℒ(λ, x*).

Remark. The saddle point of the Lagrangean is defined similarly to that of a function f(x, y); i.e., a point (x#, y#) is a saddle point of f iff

f(x, y#) ≤ f(x#, y#) ≤ f(x#, y), ∀(x, y) ∈ D_f.

Theorem 8.26. Given the problem max f(x) s.t. gₖ(x) = bₖ, k = 1, …, m, x ∈ Rⁿ, where f, g₁, g₂, …, gₘ are real-valued functions defined on X ⊆ Rⁿ, m < n: if (λ*, x*) is a saddle point of ℒ, then

gₖ(x*) − bₖ = 0, k = 1, …, m.

Remark. More generally, we have the next result, stated in a theorem.

Theorem 8.27. Given the problem max f(x) s.t. gₖ(x) = bₖ, k = 1, …, m, x ∈ Rⁿ, where f, g₁, g₂, …, gₘ are real-valued functions defined on X ⊆ Rⁿ, m < n: if (λ*, x*) is a saddle point of ℒ, then x* is a global maximizer of f subject to the constraints gₖ(x) = bₖ, k = 1, …, m.

Exercise. Consider again the problem

max F(K, L) = 5KL s.t. K + L = 10.

Verify that the stationary point (λ*, K*, L*) = (25, 5, 5) is a saddle point of the corresponding Lagrangean function.

To end...

"It is true that a mathematician who is not also something of a poet will never be a perfect mathematician." (Karl Weierstrass)