Constrained Optimization in Two Variables


James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University
November 17, 2016

Outline

1. Lagrange Multipliers
2. What Does the Lagrange Multiplier Mean?

Let's look at the problem of finding the minimum or maximum of a function $f(x,y)$ subject to the constraint $g(x,y) = c$, where $c$ is a constant. Let's suppose we have a point $(x_0, y_0)$ where an extremum value occurs, and assume $f$ and $g$ are differentiable at that point. Then, at another point $(x, y)$ which satisfies the constraint, we have

$$g(x,y) = g(x_0, y_0) + g_x(x_0, y_0)(x - x_0) + g_y(x_0, y_0)(y - y_0) + E_g(x, y, x_0, y_0)$$

But $g(x,y) = g(x_0, y_0) = c$, so we have

$$0 = g_x^0 (x - x_0) + g_y^0 (y - y_0) + E_g(x, y, x_0, y_0)$$

where we let $g_x^0 = g_x(x_0, y_0)$ and $g_y^0 = g_y(x_0, y_0)$. Now, given an $x$, we assume we can find a $y$ value so that $g(x,y) = c$ for $x$ in $B_r(x)$ for some $r > 0$. Of course, this value need not be unique. Let $\phi(x)$ be a function chosen so that

$$\phi(x) \in \{ y \mid g(x,y) = c \}$$

For example, if $g(x,y) = c$ were the circle $x^2 + y^2 = 25$, then $\phi(x) = \pm\sqrt{25 - x^2}$ for $-5 \le x \le 5$.

Clearly, we can't find a full interval $B_r(x)$ when $x = 5$ or $x = -5$, so let's assume the point $(x_0, y_0)$ where the extremum value occurs does have such a local interval around it, where we can find corresponding $y$ values for all $x$ in $B_r(x_0)$. So locally we have a function $\phi(x)$ defined around $x_0$ so that $g(x, \phi(x)) = c$ for $x \in B_r(x_0)$. Then we have

$$0 = g_x^0 (x - x_0) + g_y^0 (\phi(x) - \phi(x_0)) + E_g(x, \phi(x), x_0, \phi(x_0))$$

Now divide through by $x - x_0$ to get

$$0 = g_x^0 + g_y^0 \left( \frac{\phi(x) - \phi(x_0)}{x - x_0} \right) + \frac{E_g(x, \phi(x), x_0, \phi(x_0))}{x - x_0}$$

Thus,

$$g_y^0 \left( \frac{\phi(x) - \phi(x_0)}{x - x_0} \right) = -g_x^0 - \frac{E_g(x, \phi(x), x_0, \phi(x_0))}{x - x_0}$$

and assuming $g_x^0 \ne 0$ and $g_y^0 \ne 0$, we can solve to find

$$\frac{\phi(x) - \phi(x_0)}{x - x_0} = -\frac{g_x^0}{g_y^0} - \frac{1}{g_y^0} \, \frac{E_g(x, \phi(x), x_0, \phi(x_0))}{x - x_0}$$

Since $g$ is differentiable at $(x_0, \phi(x_0))$, $\frac{E_g(x, \phi(x), x_0, \phi(x_0))}{x - x_0} \to 0$ as $x \to x_0$. Thus $\phi$ is differentiable, and therefore also continuous, with

$$\phi'(x_0) = \lim_{x \to x_0} \frac{\phi(x) - \phi(x_0)}{x - x_0} = -\frac{g_x^0}{g_y^0}$$

Now since both $g_x^0 \ne 0$ and $g_y^0 \ne 0$, the fraction $-\frac{g_x^0}{g_y^0} \ne 0$ too. This means locally $\phi(x)$ is either strictly increasing or strictly decreasing; i.e., $\phi$ is a strictly monotone function. Let's assume $\phi'(x_0) > 0$, so $\phi$ is increasing on some interval $(x_0 - R, x_0 + R)$. Let's also assume the extreme value is a minimum, so on this interval we know $f(x,y) \ge f(x_0, y_0)$ whenever $g(x,y) = c$. This means
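The implicit derivative formula $\phi'(x_0) = -g_x^0/g_y^0$ is easy to sanity-check numerically. Here is a minimal sketch in plain Python, using the circle example $x^2 + y^2 = 25$ from above, that compares a finite-difference slope of $\phi$ against the formula:

```python
import math

# Upper branch of the circle x^2 + y^2 = 25: phi(x) = sqrt(25 - x^2).
def phi(x):
    return math.sqrt(25.0 - x * x)

# Partial derivatives of g(x, y) = x^2 + y^2.
def g_x(x, y):
    return 2.0 * x

def g_y(x, y):
    return 2.0 * y

x0 = 3.0
y0 = phi(x0)  # the point (3, 4) on the circle

# Central finite difference for phi'(x0) versus -g_x/g_y.
h = 1.0e-6
fd_slope = (phi(x0 + h) - phi(x0 - h)) / (2.0 * h)
formula_slope = -g_x(x0, y0) / g_y(x0, y0)  # -x0/y0 = -0.75

print(fd_slope, formula_slope)
```

The two slopes agree to many decimal places, which is just the implicit function theorem at work on this branch.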

$f(x, \phi(x)) \ge f(x_0, \phi(x_0))$ on $(x_0 - R, x_0 + R)$. Now do an expansion for $f$ to get

$$f(x, \phi(x)) = f(x_0, \phi(x_0)) + f_x^0 (x - x_0) + f_y^0 (\phi(x) - \phi(x_0)) + E_f(x, \phi(x), x_0, \phi(x_0))$$

where $f_x^0 = f_x(x_0, \phi(x_0))$ and $f_y^0 = f_y(x_0, \phi(x_0))$. This implies

$$f(x, \phi(x)) - f(x_0, \phi(x_0)) = f_x^0 (x - x_0) + f_y^0 (\phi(x) - \phi(x_0)) + E_f(x, \phi(x), x_0, \phi(x_0))$$

Now we have assumed $f(x, \phi(x)) \ge f(x_0, \phi(x_0))$, so we have

$$f_x^0 (x - x_0) + f_y^0 (\phi(x) - \phi(x_0)) + E_f(x, \phi(x), x_0, \phi(x_0)) \ge 0$$

If $x - x_0 > 0$, we find

$$f_x^0 + f_y^0 \left( \frac{\phi(x) - \phi(x_0)}{x - x_0} \right) + \frac{E_f(x, \phi(x), x_0, \phi(x_0))}{x - x_0} \ge 0$$

Now take the limit as $x \to x_0^+$ to find $f_x^0 + f_y^0 \, \phi'(x_0) \ge 0$.

When $x - x_0 < 0$, we argue similarly to find $f_x^0 + f_y^0 \, \phi'(x_0) \le 0$. Combining, we have

$$0 \le f_x^0 + f_y^0 \, \phi'(x_0) \le 0$$

which tells us that at this extreme value we have the equation $f_x^0 + f_y^0 \, \phi'(x_0) = 0$. This implies $f_y^0 \, \phi'(x_0) = -f_x^0$, and since $\phi'(x_0) = -g_x^0/g_y^0$, we get $f_y^0 = f_x^0 \, g_y^0 / g_x^0$. Next note

$$\nabla f(x_0, y_0) = \begin{bmatrix} f_x^0 \\ f_y^0 \end{bmatrix} = \begin{bmatrix} f_x^0 \\ f_x^0 \, g_y^0 / g_x^0 \end{bmatrix} = \frac{f_x^0}{g_x^0} \begin{bmatrix} g_x^0 \\ g_y^0 \end{bmatrix}$$

This says there is a scalar $\lambda^* = \frac{f_x^0}{g_x^0}$ so that $(\nabla f)(x_0, y_0) = \lambda^* \, (\nabla g)(x_0, y_0)$ where $g(x_0, y_0) = c$. The value $\frac{f_x^0}{g_x^0}$ is called the Lagrange Multiplier for this extremal problem. This is the basis for the Lagrange Multiplier Technique for a constrained optimization problem.

The value $\lambda^*$ above is called a Lagrange Multiplier of the extremum problem we are trying to solve. We can do a similar sort of analysis in the case where the extremum is a maximum too. Our analysis assumes that the point $(x_0, y_0)$ where the extremum occurs is like an interior point of the set $\{(x,y) \mid g(x,y) = c\}$. That is, we assume for such an $x_0$ there is an interval $B_r(x_0)$ with every $x \in B_r(x_0)$ having a corresponding value $y = \phi(x)$ so that $g(x, \phi(x)) = c$. So the argument does not handle boundary points such as the $x = \pm 5$ in our previous example. To implement the Lagrange Multiplier technique, we define a new function

$$H(x, y, \lambda) = f(x,y) + \lambda \, (g(x,y) - c)$$

The critical points we seek are the ones where the gradient of $f$ is a multiple of the gradient of $g$, for the reasons we discussed above. Note the $\lambda$ we use here is the negative of the Lagrange Multiplier $\lambda^*$.

To find them, we set the partials of $H$ equal to zero:

$$H_x = 0 \implies f_x = -\lambda \, g_x$$
$$H_y = 0 \implies f_y = -\lambda \, g_y$$
$$H_\lambda = 0 \implies g(x,y) = c$$

The first two lines are the statement that the gradient of $f$ is a multiple of the gradient of $g$, and the third line is the statement that the constraint must be satisfied. As usual, solving these three nonlinear equations gives the interior point solutions, and we must always also include the boundary points that satisfy the constraint as well.
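Setting the three partials of $H$ to zero is easy to automate with a computer algebra system. The sketch below uses the sympy library (an assumption; any CAS works the same way) on a hypothetical illustration not in the text: extremize $f = xy$ subject to $x + y = 10$:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Hypothetical illustration: extremize f = x*y subject to x + y = 10.
f = x * y
g = x + y - 10  # constraint written as g(x, y) - c

# H(x, y, lam) = f + lam * (g(x, y) - c)
H = f + lam * g

# Critical points: all three partials of H vanish.
eqs = [sp.diff(H, v) for v in (x, y, lam)]
sols = sp.solve(eqs, [x, y, lam], dict=True)
print(sols)  # x = y = 5, lam = -5
```

Here $H_x = y + \lambda = 0$ and $H_y = x + \lambda = 0$ force $x = y$, and the constraint then gives $x = y = 5$ with $\lambda = -5$; as the text notes, the multiplier $\lambda^*$ is the negative of this $\lambda$.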

Example: Extremize $2x + 3y$ subject to $x^2 + y^2 = 25$.

Solution: Set $H(x, y, \lambda) = (2x + 3y) + \lambda (x^2 + y^2 - 25)$. The critical point equations are then

$$H_x = 2 + \lambda (2x) = 0$$
$$H_y = 3 + \lambda (2y) = 0$$
$$H_\lambda = x^2 + y^2 - 25 = 0$$

Solution (continued): Solving for $\lambda$, we find $\lambda = -1/x = -3/(2y)$. Thus, $x = 2y/3$. Using this relationship in the constraint, we find $(2y/3)^2 + y^2 = 25$, or $13y^2/9 = 25$. Thus $y^2 = 225/13$, so $y = \pm 15/\sqrt{13}$. This means $x = \pm 10/\sqrt{13}$, with $x$ and $y$ sharing the same sign. We find

$$f(10/\sqrt{13}, \, 15/\sqrt{13}) = 65/\sqrt{13} = 5\sqrt{13} \approx 18.03$$
$$f(-10/\sqrt{13}, \, -15/\sqrt{13}) = -65/\sqrt{13} = -5\sqrt{13} \approx -18.03$$
$$f(-5, 0) = -10, \quad f(5, 0) = 10$$
$$f(0, -5) = -15, \quad f(0, 5) = 15$$

So the extreme values here occur at the interior critical points $\pm(10/\sqrt{13}, \, 15/\sqrt{13})$; the boundary points $(\pm 5, 0)$ and $(0, \pm 5)$ give values of smaller magnitude.
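A quick numerical check of this example, sketched in plain Python: evaluate $f$ at the critical point and then brute-force the whole circle to confirm nothing larger is hiding.

```python
import math

def f(x, y):
    return 2.0 * x + 3.0 * y

# Interior critical point from the multiplier equations: x = 2y/3 on the circle.
x_c = 10.0 / math.sqrt(13.0)
y_c = 15.0 / math.sqrt(13.0)
f_crit = f(x_c, y_c)  # 65/sqrt(13) = 5*sqrt(13), about 18.03

# Brute-force scan of the constraint circle x^2 + y^2 = 25.
n = 200000
f_best = max(f(5.0 * math.cos(t), 5.0 * math.sin(t))
             for t in (2.0 * math.pi * k / n for k in range(n)))

print(f_crit, f_best)
```

The scan maximum matches the critical point value and, in particular, exceeds $f(0, 5) = 15$, so the extreme is interior.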

Example: Extremize $\sin(xy)$ subject to $x^2 + y^2 = 1$.

Solution: Set $H(x, y, \lambda) = \sin(xy) + \lambda (x^2 + y^2 - 1)$. The critical point equations are then

$$H_x = y \cos(xy) + \lambda (2x) = 0$$
$$H_y = x \cos(xy) + \lambda (2y) = 0$$
$$H_\lambda = x^2 + y^2 - 1 = 0$$

Solution (continued): Solving for $\lambda$ in the first equation, we find $\lambda = -y \cos(xy)/(2x)$. Now use this in the second equation to find $x \cos(xy) - y^2 \cos(xy)/x = 0$. Thus, $\cos(xy)\,(x - y^2/x) = 0$.

The first case is $\cos(xy) = 0$. This implies $xy = \pi/2 + n\pi$. These are rectangular hyperbolas, and the closest ones to the origin are $xy = \pm\pi/2$; for $xy = \pi/2 \approx 1.57$, the closest point to the origin is $(\sqrt{\pi/2}, \sqrt{\pi/2}) \approx (1.25, 1.25)$. This point is outside the constraint set, so this case does not matter.

The second case is $x^2 = y^2$, which, using the constraint, implies $x^2 = 1/2$. Thus there are four possibilities: $(1/\sqrt{2}, 1/\sqrt{2})$, $(1/\sqrt{2}, -1/\sqrt{2})$, $(-1/\sqrt{2}, 1/\sqrt{2})$ and $(-1/\sqrt{2}, -1/\sqrt{2})$. The other possibilities are the circle's boundary points $(0, \pm 1)$ and $(\pm 1, 0)$. We find

$$\sin(xy)\big|_{(1/\sqrt{2},\,1/\sqrt{2})} = \sin(xy)\big|_{(-1/\sqrt{2},\,-1/\sqrt{2})} = \sin(1/2) \approx 0.48$$
$$\sin(xy)\big|_{(1/\sqrt{2},\,-1/\sqrt{2})} = \sin(xy)\big|_{(-1/\sqrt{2},\,1/\sqrt{2})} = -\sin(1/2) \approx -0.48$$
$$\sin(xy)\big|_{(\pm 1,\,0)} = \sin(xy)\big|_{(0,\,\pm 1)} = 0$$

So the extreme values here occur at the interior points of the constraint.
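The same kind of brute-force verification, again a plain-Python sketch, works for this example: evaluate $\sin(xy)$ at all the candidate points and scan the unit circle.

```python
import math

def f(x, y):
    return math.sin(x * y)

r = 1.0 / math.sqrt(2.0)

# Interior critical points and the boundary-style points from the text.
candidates = [(r, r), (-r, -r), (r, -r), (-r, r),
              (1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
vals = [f(x, y) for x, y in candidates]
print(max(vals), min(vals))  # sin(1/2) and -sin(1/2), about +/-0.479

# A scan of the whole unit circle confirms nothing larger is hiding.
n = 200000
scan_best = max(f(math.cos(t), math.sin(t))
                for t in (2.0 * math.pi * k / n for k in range(n)))
```

On the circle $xy = \tfrac{1}{2}\sin(2t)$, so $\sin(xy)$ can never exceed $\sin(1/2)$, which is exactly what the scan reports.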

What Does the Lagrange Multiplier Mean?

Let's change our extremum problem by assuming the constraint constant is changing as a function of $x$; i.e., the problem is now to find the extreme values of $f(x,y)$ subject to $g(x,y) = C(x)$. We can use our tangent plane analysis like before. We assume the extreme value for constraint value $C(x_0)$ occurs at an interior point of the constraint. We have

$$g(x, \phi(x)) - C(x) = 0$$

Then

$$0 = g_x^0 + g_y^0 \left( \frac{\phi(x) - \phi(x_0)}{x - x_0} \right) + \frac{E_g(x, \phi(x), x_0, \phi(x_0))}{x - x_0} - \left( \frac{C(x) - C(x_0)}{x - x_0} \right)$$

We assume the constraint bound function $C$ is differentiable, and so letting $x \to x_0$, we find

$$g_x^0 + g_y^0 \, \phi'(x_0) = C'(x_0)$$

Now assume $(x_1, \phi(x_1))$ extremizes $f$ for the constraint bound value $C(x_1)$.

We have

$$f(x_1, \phi(x_1)) - f(x_0, \phi(x_0)) = f_x^0 (x_1 - x_0) + f_y^0 (\phi(x_1) - \phi(x_0)) + E_f(x_1, \phi(x_1), x_0, \phi(x_0))$$

or

$$\frac{f(x_1, \phi(x_1)) - f(x_0, \phi(x_0))}{x_1 - x_0} = f_x^0 + f_y^0 \left( \frac{\phi(x_1) - \phi(x_0)}{x_1 - x_0} \right) + \frac{E_f(x_1, \phi(x_1), x_0, \phi(x_0))}{x_1 - x_0}$$

Now assuming $C'(x) \ne 0$ locally around $x_0$, we can write

$$\left( \frac{f(x_1, \phi(x_1)) - f(x_0, \phi(x_0))}{C(x_1) - C(x_0)} \right) \left( \frac{C(x_1) - C(x_0)}{x_1 - x_0} \right) = f_x^0 + f_y^0 \left( \frac{\phi(x_1) - \phi(x_0)}{x_1 - x_0} \right) + \frac{E_f(x_1, \phi(x_1), x_0, \phi(x_0))}{x_1 - x_0}$$

Now let $\Theta(x)$ be defined by $\Theta(x) = f(x, \phi(x))$ when $(x, \phi(x))$ is the extreme value for the problem of extremizing $f(x,y)$ subject to $g(x,y) = C(x)$. This is called the Optimal Value Function for this problem. Rewriting, we have

$$\left( \frac{\Theta(x_1) - \Theta(x_0)}{C(x_1) - C(x_0)} \right) \left( \frac{C(x_1) - C(x_0)}{x_1 - x_0} \right) = f_x^0 + f_y^0 \left( \frac{\phi(x_1) - \phi(x_0)}{x_1 - x_0} \right) + \frac{E_f(x_1, \phi(x_1), x_0, \phi(x_0))}{x_1 - x_0}$$

Now let $x_1 \to x_0$. We obtain

$$\lim_{x_1 \to x_0} \left( \frac{\Theta(x_1) - \Theta(x_0)}{C(x_1) - C(x_0)} \right) C'(x_0) = f_x^0 + f_y^0 \, \phi'(x_0)$$

Thus, the rate of change of the optimal value with respect to a change in the constraint bound, $\frac{d\Theta}{dC}$, is well-defined and

$$\frac{d\Theta}{dC}(C_0) \, C'(x_0) = f_x^0 + f_y^0 \, \phi'(x_0)$$

where $C_0$ is the value $C(x_0)$.

Assuming $C'(x_0) \ne 0$, we have

$$\frac{d\Theta}{dC}(C_0) = \frac{f_x^0}{C'(x_0)} + \frac{f_y^0}{C'(x_0)} \, \phi'(x_0)$$

But we know at the extreme value for $x_0$ that $f_y^0 = \frac{g_y^0}{g_x^0} f_x^0$. Thus,

$$\frac{d\Theta}{dC}(C_0) = \frac{f_x^0}{C'(x_0)} + \frac{g_y^0}{g_x^0} \, \frac{f_x^0}{C'(x_0)} \, \phi'(x_0) = \frac{f_x^0}{g_x^0} \, \frac{1}{C'(x_0)} \left( g_x^0 + g_y^0 \, \phi'(x_0) \right)$$

But the Lagrange Multiplier $\lambda^*$ for the extremal value for $x_0$ is $\lambda^* = \frac{f_x^0}{g_x^0}$. So

$$\frac{d\Theta}{dC}(C_0) = \lambda^* \, \frac{1}{C'(x_0)} \left( g_x^0 + g_y^0 \, \phi'(x_0) \right)$$

But $C'(x_0) = g_x^0 + g_y^0 \, \phi'(x_0)$. Hence, we find

$$\frac{d\Theta}{dC}(C_0) = \lambda^*$$
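The identity $\frac{d\Theta}{dC} = \lambda^*$ can be checked numerically. The sketch below (plain Python; the scan-based `theta` is just a crude stand-in for the true optimal value function) reuses the $2x + 3y$ example, now with constraint bound $x^2 + y^2 = c$:

```python
import math

# Optimal value Theta(c) = max of 2x + 3y on x^2 + y^2 = c,
# estimated by a dense scan of the circle of radius sqrt(c).
def theta(c, n=100000):
    r = math.sqrt(c)
    return max(2.0 * r * math.cos(t) + 3.0 * r * math.sin(t)
               for t in (2.0 * math.pi * k / n for k in range(n)))

c0 = 25.0
x_star = 10.0 / math.sqrt(13.0)   # maximizer x-coordinate at c0 = 25
lam_star = 2.0 / (2.0 * x_star)   # lambda* = f_x / g_x at the maximizer

# Finite difference of the optimal value in the constraint bound.
h = 1.0e-3
dtheta_dc = (theta(c0 + h) - theta(c0 - h)) / (2.0 * h)

print(dtheta_dc, lam_star)  # both about 0.3606
```

Here $\Theta(c) = \sqrt{13}\,\sqrt{c}$ exactly, so $\frac{d\Theta}{dc} = \frac{\sqrt{13}}{2\sqrt{c}}$, which at $c = 25$ matches $\lambda^* = \sqrt{13}/10$ as the derivation promises.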

We see from our argument that the Lagrange Multiplier $\lambda^*$ is the rate of change of the optimal value with respect to the constraint bound. There are four cases:

$\text{sign}(\lambda^*) = \text{sign}(f_x^0 / g_x^0) = +/+ = +$, which implies the optimal value goes up if the constraint bound is increased.

$\text{sign}(\lambda^*) = \text{sign}(f_x^0 / g_x^0) = -/- = +$, which implies the optimal value goes up if the constraint bound is increased.

$\text{sign}(\lambda^*) = \text{sign}(f_x^0 / g_x^0) = +/- = -$, which implies the optimal value goes down if the constraint bound is increased.

$\text{sign}(\lambda^*) = \text{sign}(f_x^0 / g_x^0) = -/+ = -$, which implies the optimal value goes down if the constraint bound is increased.

Hence, we can interpret the Lagrange Multiplier as a price: the change in optimal value due to a change in the constraint is essentially the gain or loss of value experienced due to the constraint change. For example, if $\lambda^*$ is very small, it says the rate of change of the optimal value with respect to constraint modification is small too. This implies the optimal value is insensitive to constraint modification.

Homework 37

37.1 Find the extreme values of $\sin(xy)$ subject to $x^2 + 2y^2 = 1$ using the Lagrange Multiplier Technique.

37.2 Find the extreme values of $\cos(xy)$ subject to $x^2 + 2y^2 = 1$ using the Lagrange Multiplier Technique.

37.3 If $a_n \to 1$ and $b_n \to 3$, use an $\epsilon$-$N$ proof to show $5a_n/b_n \to 5/3$.

37.4 If $a_n \to 1$ and $b_n \to 2$, use an $\epsilon$-$N$ proof to show $5a_n b_n \to 10$.