Optimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints


Objectives: To study the optimization of functions of multiple variables subject to equality constraints, using the method of constrained variation and the method of Lagrange multipliers.

Constrained optimization

A function of multiple variables, $f(X)$, is to be optimized subject to one or more equality constraints in many variables. These equality constraints, $g_j(X)$, may or may not be linear. The problem statement is as follows:

Maximize (or minimize) $f(X)$, subject to $g_j(X) = 0$, $j = 1, 2, \ldots, m$

where

$$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

Constrained optimization (contd.)

With the condition that $m \le n$; or else, if $m > n$, the problem becomes overdefined and there will be no solution. Of the many available methods, the method of constrained variation and the method of Lagrange multipliers are discussed here.

Solution by method of Constrained Variation

For the optimization problem defined above, let us consider a specific case with $n = 2$ and $m = 1$ before we proceed to find the necessary and sufficient conditions for a general problem using Lagrange multipliers. The problem statement is as follows:

Minimize $f(x_1, x_2)$, subject to $g(x_1, x_2) = 0$

For $f(x_1, x_2)$ to have a minimum at a point $X^* = [x_1^*, x_2^*]$, a necessary condition is that the total derivative of $f(x_1, x_2)$ must be zero at $[x_1^*, x_2^*]$:

$$df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 = 0 \qquad (1)$$

Method of Constrained Variation (contd.)

Since $g(x_1^*, x_2^*) = 0$ at the minimum point, variations $dx_1$ and $dx_2$ about the point $[x_1^*, x_2^*]$ must be admissible variations, i.e. the perturbed point must still lie on the constraint:

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = 0 \qquad (2)$$

Assuming $dx_1$ and $dx_2$ are small, the Taylor series expansion of this gives us

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = g(x_1^*, x_2^*) + \frac{\partial g}{\partial x_1}(x_1^*, x_2^*)\, dx_1 + \frac{\partial g}{\partial x_2}(x_1^*, x_2^*)\, dx_2 = 0 \qquad (3)$$

Method of Constrained Variation (contd.)

or

$$dg = \frac{\partial g}{\partial x_1} dx_1 + \frac{\partial g}{\partial x_2} dx_2 = 0 \quad \text{at } [x_1^*, x_2^*] \qquad (4)$$

which is the condition that must be satisfied for all admissible variations. Assuming $\partial g / \partial x_2 \ne 0$, (4) can be rewritten as

$$dx_2 = -\frac{\partial g / \partial x_1}{\partial g / \partial x_2}(x_1^*, x_2^*)\, dx_1 \qquad (5)$$

Method of Constrained Variation (contd.)

Equation (5) indicates that once the variation in $x_1$ ($dx_1$) is chosen arbitrarily, the variation in $x_2$ ($dx_2$) is decided automatically to satisfy the condition for an admissible variation. Substituting equation (5) in (1), we have:

$$df = \left( \frac{\partial f}{\partial x_1} - \frac{\partial g / \partial x_1}{\partial g / \partial x_2} \frac{\partial f}{\partial x_2} \right) \Bigg|_{(x_1^*,\, x_2^*)} dx_1 = 0 \qquad (6)$$

The expression in parentheses is called the constrained variation of $f$. Equation (6) has to be satisfied for all $dx_1$, hence we have

$$\left( \frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2} \frac{\partial g}{\partial x_1} \right) \Bigg|_{(x_1^*,\, x_2^*)} = 0 \qquad (7)$$

This gives us the necessary condition for $[x_1^*, x_2^*]$ to be an extreme point (maximum or minimum).
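
To make condition (7) concrete, here is a minimal sketch that applies it to a hypothetical toy problem not taken from the lecture: minimizing $f = x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$. It assumes the sympy library is available.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Hypothetical problem: minimize f = x1^2 + x2^2 subject to g = x1 + x2 - 1 = 0
f = x1**2 + x2**2
g = x1 + x2 - 1

# Constrained-variation condition (7): df/dx1 * dg/dx2 - df/dx2 * dg/dx1 = 0
cond = sp.diff(f, x1)*sp.diff(g, x2) - sp.diff(f, x2)*sp.diff(g, x1)

# Solve condition (7) together with the constraint g = 0
print(sp.solve([cond, g], [x1, x2], dict=True))  # [{x1: 1/2, x2: 1/2}]
```

Condition (7) alone is only necessary; whether the resulting point is a minimum or a maximum still has to be checked, e.g. with the sufficiency condition given later.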

Solution by method of Lagrange multipliers

Continuing with the same specific case of the optimization problem with $n = 2$ and $m = 1$, we define a quantity $\lambda$, called the Lagrange multiplier, as

$$\lambda = -\frac{\partial f / \partial x_2}{\partial g / \partial x_2}\Bigg|_{(x_1^*,\, x_2^*)} \qquad (8)$$

Using this in (7),

$$\left( \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} \right)\Bigg|_{(x_1^*,\, x_2^*)} = 0 \qquad (9)$$

And (8) can be rewritten as

$$\left( \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} \right)\Bigg|_{(x_1^*,\, x_2^*)} = 0 \qquad (10)$$

Solution by method of Lagrange multipliers (contd.)

Also, the constraint equation has to be satisfied at the extreme point:

$$g(x_1, x_2)\big|_{(x_1^*,\, x_2^*)} = 0 \qquad (11)$$

Hence equations (9) to (11) represent the necessary conditions for the point $[x_1^*, x_2^*]$ to be an extreme point. Note that $\lambda$ could be expressed in terms of $\partial g / \partial x_1$ as well, in which case $\partial g / \partial x_1$ has to be non-zero. Thus, these necessary conditions require that at least one of the partial derivatives of $g(x_1, x_2)$ be non-zero at an extreme point.

Solution by method of Lagrange multipliers (contd.)

The conditions given by equations (9) to (11) can also be generated by constructing a function $L$, known as the Lagrangian function, as

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\, g(x_1, x_2) \qquad (12)$$

Alternatively, treating $L$ as a function of $x_1$, $x_2$ and $\lambda$, the necessary conditions for its extremum are given by

$$\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_1}(x_1, x_2) = 0$$
$$\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_2}(x_1, x_2) = 0$$
$$\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) = g(x_1, x_2) = 0 \qquad (13)$$
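
For the same hypothetical toy problem as above, a short sympy sketch of conditions (13): construct the Lagrangian (12) and set all three first partials to zero.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')

# Same hypothetical problem: f = x1^2 + x2^2, g = x1 + x2 - 1
L = x1**2 + x2**2 + lam*(x1 + x2 - 1)   # Lagrangian, equation (12)

# Necessary conditions (13): dL/dx1 = dL/dx2 = dL/dlambda = 0
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, [x1, x2, lam], dict=True))
# [{x1: 1/2, x2: 1/2, lambda: -1}]
```

As expected, the stationary point agrees with the one obtained from the constrained-variation condition (7).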

Necessary conditions for a general problem

For a general problem with $n$ variables and $m$ equality constraints, the problem is defined as shown earlier:

Maximize (or minimize) $f(X)$, subject to $g_j(X) = 0$, $j = 1, 2, \ldots, m$, where

$$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

In this case the Lagrangian function, $L$, will have one Lagrange multiplier $\lambda_j$ for each constraint $g_j(X)$:

$$L(x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m) = f(X) + \lambda_1 g_1(X) + \lambda_2 g_2(X) + \ldots + \lambda_m g_m(X) \qquad (14)$$

Necessary conditions for a general problem (contd.)

$L$ is now a function of $n + m$ unknowns, $x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m$, and the necessary conditions for the problem defined above are given by

$$\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i}(X) + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i}(X) = 0, \quad i = 1, 2, \ldots, n$$
$$\frac{\partial L}{\partial \lambda_j} = g_j(X) = 0, \quad j = 1, 2, \ldots, m \qquad (15)$$

which represent $n + m$ equations in terms of the $n + m$ unknowns $x_i$ and $\lambda_j$. The solution to this set of equations gives us

$$X^* = \begin{bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{bmatrix} \quad \text{and} \quad \lambda^* = \begin{bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{bmatrix} \qquad (16)$$
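
The $n + m$ conditions (15) translate directly into code. The sketch below is an illustration, not part of the lecture: the helper name lagrange_stationary_points and the $n = 3$, $m = 2$ instance are my own choices, assuming sympy.

```python
import sympy as sp

def lagrange_stationary_points(f, gs, xs):
    """Solve the n + m necessary conditions (15) for f subject to
    the equality constraints gs (each g_j(X) = 0)."""
    lams = sp.symbols(f'lambda1:{len(gs) + 1}')       # lambda_1 .. lambda_m
    L = f + sum(lj*gj for lj, gj in zip(lams, gs))    # Lagrangian (14)
    eqs = [sp.diff(L, v) for v in (*xs, *lams)]       # conditions (15)
    return sp.solve(eqs, [*xs, *lams], dict=True)     # solution (16)

# Hypothetical instance with n = 3, m = 2
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + x2**2 + x3**2
gs = [x1 + x2 + x3 - 1, x1 - x2]
print(lagrange_stationary_points(f, gs, [x1, x2, x3]))
# [{x1: 1/3, x2: 1/3, x3: 1/3, lambda1: -2/3, lambda2: 0}]
```

Symbolic solving works for small smooth problems like this one; larger systems are usually handed to a numerical root finder instead.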

Sufficient conditions for a general problem

A sufficient condition for $f(X)$ to have a relative minimum at $X^*$ is that each root of the polynomial in $\epsilon$, defined by the following determinant equation, be positive:

$$\begin{vmatrix}
L_{11} - \epsilon & L_{12} & \cdots & L_{1n} & g_{11} & \cdots & g_{m1} \\
L_{21} & L_{22} - \epsilon & \cdots & L_{2n} & g_{12} & \cdots & g_{m2} \\
\vdots & & \ddots & \vdots & \vdots & & \vdots \\
L_{n1} & L_{n2} & \cdots & L_{nn} - \epsilon & g_{1n} & \cdots & g_{mn} \\
g_{11} & g_{12} & \cdots & g_{1n} & 0 & \cdots & 0 \\
\vdots & & & \vdots & \vdots & \ddots & \vdots \\
g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & \cdots & 0
\end{vmatrix} = 0 \qquad (17)$$

Sufficient conditions for a general problem (contd.)

where

$$L_{ij} = \frac{\partial^2 L}{\partial x_i \partial x_j}(X^*, \lambda^*), \quad \text{for } i, j = 1, 2, \ldots, n$$
$$g_{pq} = \frac{\partial g_p}{\partial x_q}(X^*), \quad \text{where } p = 1, 2, \ldots, m \text{ and } q = 1, 2, \ldots, n \qquad (18)$$

Similarly, a sufficient condition for $f(X)$ to have a relative maximum at $X^*$ is that each root of the polynomial in $\epsilon$ defined by equation (17) be negative. If equation (17), on solving, yields roots some of which are positive and others negative, then the point $X^*$ is neither a maximum nor a minimum.
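
The determinant condition (17) can also be checked mechanically. The following sketch (the helper name and the toy data are assumptions, not from the lecture) forms the bordered matrix with $L_{ij} - \epsilon$ on the diagonal per (17)-(18) and returns the roots in $\epsilon$:

```python
import sympy as sp

def sufficiency_roots(L, gs, xs, point):
    """Roots of the polynomial in epsilon from determinant (17),
    evaluated at a candidate point {symbol: value} incl. multipliers."""
    n, m = len(xs), len(gs)
    eps = sp.symbols('epsilon')
    Lmat = sp.Matrix(n, n, lambda i, j: sp.diff(L, xs[i], xs[j]))  # L_ij
    G = sp.Matrix(m, n, lambda p, q: sp.diff(gs[p], xs[q]))        # g_pq
    A = sp.BlockMatrix([[Lmat - eps*sp.eye(n), G.T],
                        [G, sp.zeros(m, m)]]).as_explicit()
    return sp.solve(A.det().subs(point), eps)  # all roots > 0 -> minimum

# Toy problem from above: f = x1^2 + x2^2, g = x1 + x2 - 1
x1, x2, lam = sp.symbols('x1 x2 lambda')
L = x1**2 + x2**2 + lam*(x1 + x2 - 1)
print(sufficiency_roots(L, [x1 + x2 - 1], [x1, x2],
                        {x1: sp.Rational(1, 2), x2: sp.Rational(1, 2), lam: -1}))
# [2] -> the single root epsilon = 2 is positive, so a relative minimum
```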

Example

Minimize $f(X) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 + 7x_1 + 5x_2$,

subject to $x_1 + x_2 = 5$,

or $g(X) = x_1 + x_2 - 5 = 0$

The Lagrangian is

$$L(x_1, x_2, \lambda_1) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 + 7x_1 + 5x_2 + \lambda_1 (x_1 + x_2 - 5)$$

and the necessary conditions (13) give

$$\frac{\partial L}{\partial x_1} = 6x_1 + 6x_2 + 7 + \lambda_1 = 0 \;\Rightarrow\; 6(x_1 + x_2) + 7 + \lambda_1 = 0 \;\Rightarrow\; \lambda_1 = -37$$

$$\frac{\partial L}{\partial x_2} = 6x_1 + 10x_2 + 5 + \lambda_1 = 0 \;\Rightarrow\; 3x_1 + 5x_2 = -\tfrac{1}{2}(5 + \lambda_1)$$
$$\Rightarrow\; 3(x_1 + x_2) + 2x_2 = -\tfrac{1}{2}(5 + \lambda_1) \;\Rightarrow\; 15 + 2x_2 = -\tfrac{1}{2}(5 - 37) = 16$$

$$\frac{\partial L}{\partial \lambda_1} = x_1 + x_2 - 5 = 0$$

Hence $x_2 = \tfrac{1}{2}$ and $x_1 = \tfrac{9}{2}$, so that

$$X^* = \begin{bmatrix} 9/2 \\ 1/2 \end{bmatrix}; \quad \lambda^* = [-37]$$

To check sufficiency, the determinant equation (17) with $n = 2$ and $m = 1$ becomes

$$\begin{vmatrix} L_{11} - \epsilon & L_{12} & g_{11} \\ L_{21} & L_{22} - \epsilon & g_{12} \\ g_{11} & g_{12} & 0 \end{vmatrix} = \begin{vmatrix} 6 - \epsilon & 6 & 1 \\ 6 & 10 - \epsilon & 1 \\ 1 & 1 & 0 \end{vmatrix} = 2\epsilon - 4 = 0$$

whose only root is $\epsilon = 2 > 0$, so $X^*$ is a relative minimum.
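
As a cross-check of the example as reconstructed above, the sympy sketch below solves the necessary conditions (13) and the sufficiency determinant (17):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda1')

# The example problem: f = 3x1^2 + 6x1x2 + 5x2^2 + 7x1 + 5x2, g = x1 + x2 - 5
f = 3*x1**2 + 6*x1*x2 + 5*x2**2 + 7*x1 + 5*x2
g = x1 + x2 - 5
L = f + lam*g

# Necessary conditions (13)
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), g], [x1, x2, lam], dict=True)
print(sol)  # [{x1: 9/2, x2: 1/2, lambda1: -37}]

# Sufficiency: determinant (17) with n = 2, m = 1
eps = sp.symbols('epsilon')
A = sp.Matrix([[6 - eps, 6, 1],
               [6, 10 - eps, 1],
               [1, 1, 0]])
print(sp.solve(A.det(), eps))  # [2] -> positive root, relative minimum
```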

Thank you