MATH529 Fundamentals of Optimization Constrained Optimization I


MATH529 Fundamentals of Optimization: Constrained Optimization I
Marco A. Montes de Oca
Mathematical Sciences, University of Delaware, USA

Motivating Example

min cost(b) = 1000 + 31b_1 + 20b_2 + 50b_3 + ... + 40b_n (EUR)

subject to:
b_1 + b_2 + ... + b_n ≥ 6500 (Casks)
0.6b_1 + 0.5b_2 ≤ 30 (Tons of ingredient 1)
0.3b_1 + 0.1b_2 + 0.4b_4 + 0.5b_12 ≤ 20 (Tons of ingredient 2)
...
0.1b_4 + 0.05b_6 ≤ 5 (Tons of ingredient n)
b_i ≥ 100, i ∈ {1, 2, ..., n} (Casks; minimum production)

Effects of constraints
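A problem of this shape is a linear program. As a minimal sketch, here is a pure-Python vertex-enumeration solver for a hypothetical two-variable version of the blending model; the coefficients (220 casks, 150 tons) and the inequality directions are illustrative assumptions, since the transcription drops the inequality signs:

```python
from itertools import combinations

# Tiny two-variable stand-in for the blending LP above (made-up numbers):
#   min 1000 + 31*b1 + 20*b2
#   s.t. b1 + b2 >= 220            (casks demanded)
#        0.6*b1 + 0.5*b2 <= 150    (tons of ingredient 1 available)
#        b1 >= 100, b2 >= 100      (minimum production)
# Every constraint is stored in the form a1*b1 + a2*b2 >= c.
constraints = [
    (1.0, 1.0, 220.0),     # b1 + b2 >= 220
    (-0.6, -0.5, -150.0),  # 0.6*b1 + 0.5*b2 <= 150, flipped to >=
    (1.0, 0.0, 100.0),     # b1 >= 100
    (0.0, 1.0, 100.0),     # b2 >= 100
]

def cost(b1, b2):
    return 1000 + 31 * b1 + 20 * b2

def feasible(b1, b2, tol=1e-9):
    return all(a1 * b1 + a2 * b2 >= c - tol for a1, a2, c in constraints)

# A bounded LP attains its optimum at a vertex of the feasible polygon,
# i.e. at the intersection of two constraint boundaries.
best = None
for (a1, a2, c), (d1, d2, e) in combinations(constraints, 2):
    det = a1 * d2 - a2 * d1
    if abs(det) < 1e-12:          # parallel boundaries: no vertex
        continue
    b1 = (c * d2 - a2 * e) / det  # Cramer's rule for the 2x2 system
    b2 = (a1 * e - c * d1) / det
    if feasible(b1, b2) and (best is None or cost(b1, b2) < best[0]):
        best = (cost(b1, b2), b1, b2)

print(best)  # cheapest feasible vertex: (6500.0, 100.0, 120.0)
```

The cheap ingredient (b2) is pushed up to meet the cask demand while b1 stays at its minimum. For the full n-variable problem one would of course use a real LP solver rather than enumerating vertices.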

Definitions

A general model of a constrained optimization problem is:

min f(x), x ∈ R^n
subject to
c_i(x) = 0, i ∈ E
c_i(x) ≥ 0 (or c_i(x) ≤ 0), i ∈ I

where f is called the objective function, the functions c_i(x), i ∈ E, are the equality constraints, and the functions c_i(x), i ∈ I, are the inequality constraints.

Definitions

The feasible set Ω ⊆ R^n is the set of points that satisfy the constraints:

Ω = {x : c_i(x) ≥ 0, i ∈ I; c_i(x) = 0, i ∈ E}

Therefore, a constrained optimization problem can be stated simply as

min f(x), x ∈ Ω

Example: One equality constraint

Suppose you want to solve max f(x) = x_1 x_2 + 2x_1, subject to g(x) = 2x_1 + x_2 = 1.
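The definition of Ω translates directly into code. A minimal sketch, with constraint functions and test points made up for illustration:

```python
# A point x is in Omega when every equality constraint is (numerically)
# zero and every inequality constraint is >= 0.
def in_omega(x, equalities, inequalities, tol=1e-9):
    return (all(abs(c(x)) <= tol for c in equalities) and
            all(c(x) >= -tol for c in inequalities))

# Example: Omega = {x : 2*x1 + x2 - 1 = 0, x1 >= 0}
eqs = [lambda x: 2 * x[0] + x[1] - 1]
ineqs = [lambda x: x[0]]

print(in_omega((0.75, -0.5), eqs, ineqs))  # True: on the line, x1 >= 0
print(in_omega((1.0, 0.0), eqs, ineqs))    # False: 2*1 + 0 - 1 != 0
```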

Solution method 1: Substitution. From the constraint, we see that x_2 = 1 − 2x_1. Thus, f(x) can be rewritten as a function of x_1 alone:

f(x_1) = x_1(1 − 2x_1) + 2x_1
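After the substitution, the reduced objective is an ordinary one-variable quadratic, so the maximizer can be read off directly. A small sketch:

```python
# Substituting x2 = 1 - 2*x1 gives f(x1) = x1*(1 - 2*x1) + 2*x1
# = -2*x1**2 + 3*x1, a downward parabola a*x^2 + b*x with a = -2, b = 3,
# maximized at x1 = -b/(2a).
a, b = -2.0, 3.0
x1 = -b / (2 * a)         # critical point of the reduced objective
x2 = 1 - 2 * x1           # recover x2 from the constraint
f_max = x1 * x2 + 2 * x1  # value of the original objective

print(x1, x2, f_max)  # 0.75 -0.5 1.125
```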

This method works in only a few cases; e.g., it does not work when one cannot solve the constraint for one of the variables.

Solution method 2: Lagrange multipliers. Define the Lagrangian function

Z = f(x) + λ(0 − c_1(x)) = x_1 x_2 + 2x_1 + λ(1 − 2x_1 − x_2)

where λ is a so-called Lagrange multiplier and c_1(x) = 2x_1 + x_2 − 1. If the constraint is satisfied, then c_1(x) = 0, and Z is identical to f. Therefore, as long as c_1(x) = 0, searching for the maximum of Z is the same as searching for the maximum of f. So, how do we always satisfy c_1(x) = 0?

Solution method 2: Lagrange multipliers. With Z = Z(λ, x_1, x_2), the condition ∇Z = 0 implies

∂Z/∂λ = 1 − 2x_1 − x_2 = 0, or simply c_1(x) = 0 (the original constraint),
∂Z/∂x_1 = x_2 + 2 − 2λ = 0, and
∂Z/∂x_2 = x_1 − λ = 0.

Solving this system: x_1 = 3/4, x_2 = −1/2, and λ = 3/4. A second-order condition should be used to tell whether (3/4, −1/2) is a maximum or a minimum.

Lagrange multipliers

In general, to find an extremum x* of f(x) subject to c_1(x) = 0, we define the Lagrangian function

L(x, λ) = f(x) − λ c_1(x).

Then ∇L(x*, λ*) = 0 implies

c_1(x*) = 0 and ∇f(x*) − λ* ∇c_1(x*) = 0, or equivalently ∇f(x*) = λ* ∇c_1(x*).
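In this example the three first-order conditions happen to be linear in (x_1, x_2, λ), so the system can be solved mechanically. A minimal sketch using plain Gaussian elimination (no external libraries assumed):

```python
# First-order conditions of Z, written as a linear system A @ [x1, x2, lam] = rhs:
#   -2*x1 -   x2          = -1   (dZ/dlam = 0, the constraint)
#           x2   - 2*lam  = -2   (dZ/dx1 = 0)
#    x1          -   lam  =  0   (dZ/dx2 = 0)

def solve3(A, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]  # augmented matrix
    n = 3
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= factor * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

A = [[-2.0, -1.0,  0.0],
     [ 0.0,  1.0, -2.0],
     [ 1.0,  0.0, -1.0]]
rhs = [-1.0, -2.0, 0.0]
x1, x2, lam = solve3(A, rhs)
print(x1, x2, lam)  # 0.75 -0.5 0.75
```

For nonlinear constraints the first-order system is generally nonlinear and needs an iterative solver, but the structure (constraint equation plus gradient equations) is the same.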

Exercise

Find all the extrema of f(x) = x_1² + x_2², subject to x_1² + x_2 = 1.

Lagrange Multiplier Method: Derivation

Total differential approach: Let z = f(x, y) (objective function) and g(x, y) = c (equality constraint). At an extremum, the first-order condition translates into dz = 0, so

dz = f_x dx + f_y dy = 0. (1)

Since dx and dy are not independent (due to the constraint), we take the differential of g as well:

dg = g_x dx + g_y dy = 0. (2)

From (2), dx = −(g_y/g_x) dy. Substituting into (1): −f_x (g_y/g_x) dy + f_y dy = 0, which implies f_x/g_x = f_y/g_y. If we call this common ratio λ, so that f_x/g_x = f_y/g_y = λ, then f_x − λg_x = 0 and f_y − λg_y = 0, which are the Lagrange-multiplier equations.
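For the exercise above, the first-order conditions ∇f = λ∇c with c(x) = x_1² + x_2 − 1 split into two cases (x_1 = 0, or λ = 1). A sketch that enumerates both cases and evaluates f at each candidate:

```python
import math

# Exercise: extrema of f(x) = x1^2 + x2^2 subject to x1^2 + x2 - 1 = 0.
# grad f = lam * grad c reads:
#   2*x1 = lam * 2*x1   ->  x1 * (1 - lam) = 0   (case split)
#   2*x2 = lam * 1
#   x1**2 + x2 = 1
def f(x1, x2):
    return x1**2 + x2**2

candidates = []

# Case 1: x1 = 0, so the constraint gives x2 = 1, and lam = 2*x2 = 2.
candidates.append((0.0, 1.0, 2.0))

# Case 2: lam = 1, so x2 = lam/2 = 1/2 and x1 = +/- sqrt(1 - x2).
x2 = 0.5
for x1 in (math.sqrt(1 - x2), -math.sqrt(1 - x2)):
    candidates.append((x1, x2, 1.0))

for x1, x2, lam in candidates:
    assert abs(x1**2 + x2 - 1) < 1e-12  # each candidate is feasible
    print((round(x1, 4), x2, lam, f(x1, x2)))
```

Comparing values: f = 3/4 at (±1/√2, 1/2), which are the constrained minima, while (0, 1) with f = 1 is a local maximum along the constraint curve.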

Lagrange Multiplier Method: Derivation

Taylor series approach: If we want to keep satisfying the constraint as we move from x to x + s, then

0 = c_1(x + s) ≈ c_1(x) + ∇c_1(x)^T s, and since c_1(x) = 0, we need ∇c_1(x)^T s = 0. (1)

Additionally, if we want to decrease the function as we move, we require

∇f(x)^T s < 0. (2)

Only when ∇f(x) = λ∇c_1(x) can we not find an s satisfying both (1) and (2).
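This argument can be checked numerically on the earlier example f(x) = x_1 x_2 + 2x_1 with c_1(x) = 2x_1 + x_2 − 1 (a sketch; the test points are chosen for illustration):

```python
# grad c1 = (2, 1) everywhere, so s = (1, -2) satisfies grad_c1 . s = 0:
# it is a first-order feasible (tangent) direction.
def grad_f(x1, x2):
    return (x2 + 2.0, x1)

s = (1.0, -2.0)
grad_c1 = (2.0, 1.0)
assert grad_c1[0] * s[0] + grad_c1[1] * s[1] == 0.0  # condition (1)

def directional(x1, x2):
    g = grad_f(x1, x2)
    return g[0] * s[0] + g[1] * s[1]  # grad_f . s along the constraint

# At a feasible but non-stationary point, grad_f . s != 0, so moving along
# +s or -s changes f while staying (to first order) on the constraint:
print(directional(0.0, 1.0))    # nonzero at (0, 1)

# At the Lagrange point (3/4, -1/2), grad f = 0.75 * grad c1, so
# grad_f . s = 0: no first-order improving feasible direction exists.
print(directional(0.75, -0.5))  # 0.0
```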

Shadow Price

The Lagrange multiplier measures the sensitivity of the optimal solution to changes in the constraint. For example, assume that λ = λ(B), x = x(B), and y = y(B), and that you want to maximize U(x) subject to g(x) = B (so that c_1(x) = B − g(x) = 0). Then L(x, λ) = U(x) + λ(B − g(x)). By the Chain Rule:

dL/dB = (∂L/∂x_1)(dx_1/dB) + (∂L/∂x_2)(dx_2/dB) + (∂L/∂λ)(dλ/dB) + ∂L/∂B
      = (U_x1 − λg_x1) dx_1/dB + (U_x2 − λg_x2) dx_2/dB + (B − g(x)) dλ/dB + λ.

Since the first-order conditions say that U_x1 − λg_x1 = 0, U_x2 − λg_x2 = 0, and B − g(x) = 0, we have dL/dB = λ. (This equation answers the question: will a slight relaxation of the budget constraint increase or decrease the optimal value of U?)
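The identity dL/dB = λ can be illustrated numerically on a toy instance (not from the slides): maximize U(x, y) = xy subject to x + y = B, where the first-order conditions give x = y = B/2 and λ = B/2. The sketch below recovers λ as a finite difference of the optimal value:

```python
# Maximize U(x, y) = x*y subject to x + y = B. Substituting y = B - x,
# the optimal value U*(B) is found by ternary search over x in [0, B].
def U_star(B):
    lo, hi = 0.0, B
    for _ in range(200):  # shrink the bracket around the maximum
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if m1 * (B - m1) < m2 * (B - m2):
            lo = m1
        else:
            hi = m2
    x = (lo + hi) / 2
    return x * (B - x)

B, h = 10.0, 1e-4
# Central difference estimate of dU*/dB, which should equal lambda = B/2.
shadow = (U_star(B + h) - U_star(B - h)) / (2 * h)
print(shadow)  # approximately 5.0, i.e. lambda = B/2
```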

Example

Use the Lagrange-multiplier method to find the stationary values of z = x − 3y − xy, subject to x + y = 6. Will a slight relaxation of the constraint increase or decrease the optimal value of z? At what rate?
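Reading the objective as z = x − 3y − xy (the transcription drops the minus signs, so this form is an assumption), the first-order conditions are again linear and can be checked in a few lines:

```python
# L = x - 3*y - x*y + lam*(6 - x - y); first-order conditions:
#   dL/dx   = 1 - y - lam  = 0
#   dL/dy   = -3 - x - lam = 0
#   dL/dlam = 6 - x - y    = 0
# Subtracting the first two equations gives y - x = 4; with x + y = 6:
x = 1.0
y = 5.0
lam = 1 - y  # from dL/dx = 0
z = x - 3 * y - x * y

# Verify all three first-order conditions hold:
assert 1 - y - lam == 0
assert -3 - x - lam == 0
assert x + y == 6
print(x, y, lam, z)  # 1.0 5.0 -4.0 -19.0
```

Since dz*/dB = λ = −4 < 0, slightly relaxing the constraint (raising the right-hand side above 6) decreases the optimal value of z, at a rate of 4 units of z per unit of relaxation.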