Constrained Optimization in Two Variables


James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University
November 17, 2016

Outline:
Constrained Optimization
What Does the Lagrange Multiplier Mean?

Let's look at the problem of finding the minimum or maximum of a function f(x, y) subject to the constraint g(x, y) = c, where c is a constant. Let's suppose we have a point (x_0, y_0) where an extremum occurs and assume f and g are differentiable at that point. Then, at another point (x, y) which satisfies the constraint, we have

g(x, y) = g(x_0, y_0) + g_x(x_0, y_0)(x - x_0) + g_y(x_0, y_0)(y - y_0) + E_g(x, y, x_0, y_0)

But g(x, y) = g(x_0, y_0) = c, so we have

0 = g_x (x - x_0) + g_y (y - y_0) + E_g(x, y, x_0, y_0)

where we let g_x = g_x(x_0, y_0) and g_y = g_y(x_0, y_0). Now, given an x, we assume we can find y values so that g(x, y) = c for x in B_r(x_0) for some r > 0. Of course, this value need not be unique. Let φ(x) be the function defined by

φ(x) = { y : g(x, y) = c }

For example, if g(x, y) = c were the circle x² + y² = 25, then φ(x) = ±√(25 - x²) for -5 ≤ x ≤ 5. Clearly, we can't find a full circle B_r(x_0) when x_0 = -5 or x_0 = 5, so let's assume the point (x_0, y_0) where the extremum occurs does have such a local circle around it where we can find corresponding y values for all x in B_r(x_0). So locally, we have a function φ(x) defined around x_0 so that g(x, φ(x)) = c for x in B_r(x_0). Then we have

0 = g_x (x - x_0) + g_y (φ(x) - φ(x_0)) + E_g(x, φ(x), x_0, φ(x_0))

Now divide through by x - x_0 to get

0 = g_x + g_y (φ(x) - φ(x_0))/(x - x_0) + E_g(x, φ(x), x_0, φ(x_0))/(x - x_0)

Thus,

g_y (φ(x) - φ(x_0))/(x - x_0) = -g_x - E_g(x, φ(x), x_0, φ(x_0))/(x - x_0)

and assuming g_x ≠ 0 and g_y ≠ 0, we can solve to find

(φ(x) - φ(x_0))/(x - x_0) = -g_x/g_y - (1/g_y) E_g(x, φ(x), x_0, φ(x_0))/(x - x_0)

Since g is differentiable at (x_0, φ(x_0)), E_g(x, φ(x), x_0, φ(x_0))/(x - x_0) → 0 as x → x_0. Thus φ is differentiable, and therefore also continuous, with

φ'(x_0) = lim_{x → x_0} (φ(x) - φ(x_0))/(x - x_0) = -g_x/g_y

Now since both g_x ≠ 0 and g_y ≠ 0, the fraction -g_x/g_y ≠ 0 too. This means locally φ(x) is either strictly increasing or strictly decreasing; i.e., φ is a strictly monotone function.
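As a quick sanity check of this formula, here is a short Python sketch, assuming the sympy library is available, that verifies φ'(x) = -g_x/g_y along the upper branch of the circle x² + y² = 25 used above; it is just an illustration, not part of the original derivation.

    import sympy as sp

    x, y = sp.symbols('x y')
    g = x**2 + y**2                  # constraint g(x, y) = 25 from the circle example

    # phi(x) for the upper branch of the circle
    phi = sp.sqrt(25 - x**2)

    # the formula from the derivation, phi'(x) = -g_x/g_y, evaluated along y = phi(x)
    formula = (-sp.diff(g, x) / sp.diff(g, y)).subs(y, phi)

    # the difference simplifies to 0, so phi'(x) = -g_x/g_y on this branch
    print(sp.simplify(sp.diff(phi, x) - formula))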

Let's assume φ'(x_0) > 0, so φ is increasing on some interval (x_0 - R, x_0 + R). Let's also assume the extreme value is a minimum, so we know on this interval f(x, y) ≥ f(x_0, y_0) with g(x, y) = c. This means f(x, φ(x)) ≥ f(x_0, φ(x_0)) on (x_0 - R, x_0 + R). Now do an expansion for f to get

f(x, φ(x)) = f(x_0, φ(x_0)) + f_x (x - x_0) + f_y (φ(x) - φ(x_0)) + E_f(x, φ(x), x_0, φ(x_0))

where f_x = f_x(x_0, φ(x_0)) and f_y = f_y(x_0, φ(x_0)). This implies

f(x, φ(x)) - f(x_0, φ(x_0)) = f_x (x - x_0) + f_y (φ(x) - φ(x_0)) + E_f(x, φ(x), x_0, φ(x_0))

Now we have assumed f(x, φ(x)) ≥ f(x_0, φ(x_0)), so we have

0 ≤ f_x (x - x_0) + f_y (φ(x) - φ(x_0)) + E_f(x, φ(x), x_0, φ(x_0))

If x - x_0 > 0 we find

0 ≤ f_x + f_y (φ(x) - φ(x_0))/(x - x_0) + E_f(x, φ(x), x_0, φ(x_0))/(x - x_0)

Now take the limit as x → x_0 from the right to find 0 ≤ f_x + f_y φ'(x_0).

When x - x_0 < 0, we argue similarly to find 0 ≥ f_x + f_y φ'(x_0). Combining, we have

f_x + f_y φ'(x_0) ≤ 0 ≤ f_x + f_y φ'(x_0)

which tells us at this extreme value we have the equation f_x + f_y φ'(x_0) = 0. This implies f_y = -f_x/φ'(x_0) = f_x g_y/g_x, since φ'(x_0) = -g_x/g_y. Next note

∇f(x_0, y_0) = ( f_x, f_y ) = ( f_x, f_x g_y/g_x ) = (f_x/g_x) ( g_x, g_y ) = (f_x/g_x) ∇g(x_0, y_0)

This says there is a scalar λ = f_x/g_x so that

∇f(x_0, y_0) = λ ∇g(x_0, y_0),  where g(x_0, y_0) = c.

The value λ = f_x/g_x is called the Lagrange Multiplier for this extremal problem, and this is the basis for the Lagrange Multiplier Technique for a constrained optimization problem. We can do a similar sort of analysis in the case the extremum is a maximum too.

Our analysis assumes that the point (x_0, y_0) where the extremum occurs is like an interior point of the set {(x, y) : g(x, y) = c}. That is, we assume for such an x_0 there is an interval B_r(x_0) with any x in B_r(x_0) having a corresponding value y = φ(x) so that g(x, φ(x)) = c. So the argument does not handle boundary points such as x = ±5 in our previous example.

To implement the Lagrange Multiplier technique, we define a new function

H(x, y, λ) = f(x, y) + λ ( g(x, y) - c )

The critical points we seek are the ones where the gradient of f is a multiple of the gradient of g, for the reasons we discussed above. Note the λ we use here is the negative of the Lagrange Multiplier.

To find them, we set the partials of H equal to zero:

∂H/∂x = 0  ⟹  f_x = -λ g_x
∂H/∂y = 0  ⟹  f_y = -λ g_y
∂H/∂λ = 0  ⟹  g(x, y) = c

The first two lines are the statement that the gradient of f is a multiple of the gradient of g, and the third line is the statement that the constraint must be satisfied. As usual, solving these three nonlinear equations gives the interior point solutions, and we must always also include the boundary points that satisfy the constraint as well.
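This system is easy to set up and solve symbolically. Here is a minimal Python sketch, assuming the sympy library is available; the objective f = xy and the constraint x + y = 10 are a made-up toy illustration, not one of the lecture's examples.

    import sympy as sp

    x, y, lam = sp.symbols('x y lam')

    # Toy problem (not from the lecture): extremize f = x*y subject to g = x + y = 10.
    f = x*y
    g = x + y
    c = 10

    # The multiplier function H(x, y, lam) = f + lam*(g - c) from the discussion above.
    H = f + lam*(g - c)

    # Set the three partials of H to zero and solve the nonlinear system.
    eqs = [sp.diff(H, v) for v in (x, y, lam)]
    print(sp.solve(eqs, [x, y, lam], dict=True))
    # -> [{x: 5, y: 5, lam: -5}]: the interior critical point is (5, 5) with lam = -5;
    # the Lagrange Multiplier itself is -lam = 5 = f_x/g_x at that point.

Remember that this only finds the interior critical points; any boundary points of the constraint set still have to be checked by hand, as in the examples below.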

Example: Extremize 2x + 3y subject to x² + y² = 25.

Solution: Set H(x, y, λ) = (2x + 3y) + λ(x² + y² - 25). The critical point equations are then

∂H/∂x = 2 + λ(2x) = 0
∂H/∂y = 3 + λ(2y) = 0
∂H/∂λ = x² + y² - 25 = 0

Solving for λ, we find -λ = 1/x = 3/(2y). Thus, x = 2y/3. Using this relationship in the constraint, we find (2y/3)² + y² = 25 or 13y²/9 = 25. Thus, y² = 225/13 or y = ±15/√13. This means x = ±10/√13. We find

f(10/√13, 15/√13) = 65/√13 ≈ 18.03
f(-10/√13, -15/√13) = -65/√13 ≈ -18.03
f(-5, 0) = -10
f(5, 0) = 10
f(0, -5) = -15
f(0, 5) = 15

So the extreme values here occur at the interior points (±10/√13, ±15/√13), which beat the boundary points (±5, 0) and (0, ±5).

Example: Extremize sin(xy) subject to x² + y² = 1.

Solution: Set H(x, y, λ) = sin(xy) + λ(x² + y² - 1). The critical point equations are then

∂H/∂x = y cos(xy) + λ(2x) = 0
∂H/∂y = x cos(xy) + λ(2y) = 0
∂H/∂λ = x² + y² - 1 = 0

Solution (continued): Solving for λ in the first equation, we find λ = -y cos(xy)/(2x). Now use this in the second equation to find x cos(xy) - 2y² cos(xy)/(2x) = 0. Thus, cos(xy)(x - y²/x) = 0.

The first case is cos(xy) = 0. This implies xy = π/2 + nπ for an integer n. These are rectangular hyperbolas, and the closest ones to the origin are xy = ±π/2 ≈ ±1.57; for xy = π/2 the point of the hyperbola closest to the origin is (√(π/2), √(π/2)) ≈ (1.25, 1.25), at distance √π ≈ 1.77 from the origin. These hyperbolas lie outside the constraint set, so this case does not matter.

The second case is x² = y², which, using the constraint, implies x² = 1/2. Thus there are four possibilities: (1/√2, 1/√2), (1/√2, -1/√2), (-1/√2, 1/√2) and (-1/√2, -1/√2). The other possibilities are the circle's boundary points (0, ±1) and (±1, 0). We find

sin(xy) at (1/√2, 1/√2) and (-1/√2, -1/√2):  sin(1/2) ≈ 0.48
sin(xy) at (1/√2, -1/√2) and (-1/√2, 1/√2):  -sin(1/2) ≈ -0.48
sin(xy) at (±1, 0) and (0, ±1):  0

So the extreme values occur here at the interior points of the constraint.
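Both examples boil down to evaluating f at a short list of candidate points. Here is a small Python check of those values (standard library only; it is just the arithmetic above, not part of the lecture).

    from math import sin, sqrt

    def f1(x, y):          # Example 1 objective
        return 2*x + 3*y

    def f2(x, y):          # Example 2 objective
        return sin(x*y)

    # Example 1 candidates: interior critical points plus boundary points of the constraint.
    candidates_1 = [(10/sqrt(13), 15/sqrt(13)), (-10/sqrt(13), -15/sqrt(13)),
                    (5, 0), (-5, 0), (0, 5), (0, -5)]
    for p in candidates_1:
        print(p, round(f1(*p), 4))
    # the interior points give +-65/sqrt(13) ~ +-18.03, beating the boundary values +-10 and +-15

    # Example 2 candidates.
    s = 1/sqrt(2)
    candidates_2 = [(s, s), (s, -s), (-s, s), (-s, -s), (1, 0), (-1, 0), (0, 1), (0, -1)]
    for p in candidates_2:
        print(p, round(f2(*p), 4))
    # the interior points give +-sin(1/2) ~ +-0.48; all the boundary points give 0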

Let's change our extremum problem by assuming the constraint constant is changing as a function of x; i.e., the problem is now to find the extreme values of f(x, y) subject to g(x, y) = C(x). We can use our tangent plane analysis as before. We assume the extreme value for constraint value C(x) occurs at an interior point of the constraint. We have

g(x, φ(x)) - C(x) = 0

Then

0 = g_x + g_y (φ(x) - φ(x_0))/(x - x_0) - (C(x) - C(x_0))/(x - x_0) + E_g(x, φ(x), x_0, φ(x_0))/(x - x_0)

We assume the constraint bound function C is differentiable, and so letting x → x_0, we find

g_x + g_y φ'(x_0) = C'(x_0)

Now assume (x_1, φ(x_1)) extremizes f for the constraint bound value C(x_1). We have

f(x_1, φ(x_1)) - f(x_0, φ(x_0)) = f_x (x_1 - x_0) + f_y (φ(x_1) - φ(x_0)) + E_f(x_1, φ(x_1), x_0, φ(x_0))

or

( f(x_1, φ(x_1)) - f(x_0, φ(x_0)) ) / (x_1 - x_0) = f_x + f_y (φ(x_1) - φ(x_0))/(x_1 - x_0) + E_f(x_1, φ(x_1), x_0, φ(x_0))/(x_1 - x_0)

Now, assuming C'(x) ≠ 0 locally around x_0, we can write

( f(x_1, φ(x_1)) - f(x_0, φ(x_0)) ) / ( C(x_1) - C(x_0) )
  = ( (x_1 - x_0)/(C(x_1) - C(x_0)) ) [ f_x + f_y (φ(x_1) - φ(x_0))/(x_1 - x_0) + E_f(x_1, φ(x_1), x_0, φ(x_0))/(x_1 - x_0) ]

Now let Θ(x) be defined by Θ(x) = f(x, φ(x)) when (x, φ(x)) is the extreme value for the problem of extremizing f(x, y) subject to g(x, y) = C(x). This is called the Optimal Value Function for this problem. Rewriting, we have

( Θ(x_1) - Θ(x_0) ) / ( C(x_1) - C(x_0) )
  = ( (x_1 - x_0)/(C(x_1) - C(x_0)) ) [ f_x + f_y (φ(x_1) - φ(x_0))/(x_1 - x_0) + E_f(x_1, φ(x_1), x_0, φ(x_0))/(x_1 - x_0) ]

Now let x_1 → x_0. We obtain

lim_{x_1 → x_0} ( Θ(x_1) - Θ(x_0) ) / ( C(x_1) - C(x_0) ) = ( f_x + f_y φ'(x_0) ) / C'(x_0)

Thus, the rate of change of the optimal value with respect to a change in the constraint bound, dΘ/dC, is well-defined and

dΘ/dC (C_0) = ( f_x + f_y φ'(x_0) ) / C'(x_0)

where C_0 is the value C(x_0).

Assuming C'(x_0) ≠ 0, we have

dΘ/dC (C_0) = f_x / C'(x_0) + ( f_y / C'(x_0) ) φ'(x_0)

But we know at the extreme value for x_0 that f_y = (g_y/g_x) f_x. Thus,

dΘ/dC (C_0) = f_x / C'(x_0) + ( g_y f_x / ( g_x C'(x_0) ) ) φ'(x_0)
            = ( f_x / C'(x_0) ) ( 1 + (g_y/g_x) φ'(x_0) )
            = ( f_x / g_x ) ( 1 / C'(x_0) ) ( g_x + g_y φ'(x_0) )

But the Lagrange Multiplier λ for the extremal value at x_0 is λ = f_x/g_x. So

dΘ/dC (C_0) = λ ( g_x + g_y φ'(x_0) ) / C'(x_0)

But C'(x_0) = g_x + g_y φ'(x_0). Hence, we find dΘ/dC (C_0) = λ.

We see from our argument that the Lagrange Multiplier λ is the rate of change of the optimal value with respect to the constraint bound. There are four cases:

sign(λ) = sign(f_x/g_x) = (+)/(+) = +, which implies the optimal value goes up if the constraint bound is increased.
sign(λ) = sign(f_x/g_x) = (-)/(-) = +, which implies the optimal value goes up if the constraint bound is increased.
sign(λ) = sign(f_x/g_x) = (+)/(-) = -, which implies the optimal value goes down if the constraint bound is increased.
sign(λ) = sign(f_x/g_x) = (-)/(+) = -, which implies the optimal value goes down if the constraint bound is increased.

Hence, we can interpret the Lagrange Multiplier as a price: the change in optimal value due to a change in the constraint is essentially the value gained or lost due to the constraint change. For example, if λ is very small, the rate of change of the optimal value with respect to constraint modification is small too. This implies the optimal value is insensitive to constraint modification.
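We can confirm dΘ/dC = λ symbolically on the first example by treating the constraint bound as a symbol c. This is a minimal sketch assuming the sympy library is available; the solve call is expected to return only the positive branch, i.e. the maximum, because of the positivity assumptions on the symbols.

    import sympy as sp

    x, y, lam, c = sp.symbols('x y lam c', positive=True)

    # Example 1's objective with a variable constraint bound: maximize 2x + 3y on x^2 + y^2 = c.
    f = 2*x + 3*y
    g = x**2 + y**2

    # grad f = lam * grad g plus the constraint; lam here is the Lagrange Multiplier itself
    # (not the negative one used in H).
    sols = sp.solve([sp.diff(f, x) - lam*sp.diff(g, x),
                     sp.diff(f, y) - lam*sp.diff(g, y),
                     g - c], [x, y, lam], dict=True)

    s = sols[0]                                      # the positive branch, i.e. the maximum
    Theta = sp.simplify(f.subs(s))                   # optimal value function Theta(c)
    print(Theta)                                     # sqrt(13)*sqrt(c)
    print(sp.simplify(sp.diff(Theta, c) - s[lam]))   # 0: dTheta/dc equals the multiplier lam

Here Θ(c) = √(13c), so dΘ/dc = √13/(2√c), which is exactly the multiplier λ = f_x/g_x at the optimum.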

Homework 37

37.1 Find the extreme values of sin(xy) subject to x² + 2y² = 1 using the Lagrange Multiplier Technique.
37.2 Find the extreme values of cos(xy) subject to x² + 2y² = 1 using the Lagrange Multiplier Technique.
37.3 If a_n → 1 and b_n → 3, use an ε - N proof to show 5a_n/b_n → 5/3.
37.4 If a_n → 1 and b_n → 2, use an ε - N proof to show 5a_n b_n → 10.