Paul Schrimpf. UBC Economics 526. Constrained optimization. First order conditions. Second order conditions.


UBC Economics 526 October 17, 2012


Section 1: First order conditions

Consider
\[ \max_x f(x) \;\text{ s.t. }\; h(x) = c, \qquad f: \mathbb{R}^n \to \mathbb{R}, \quad h: \mathbb{R}^n \to \mathbb{R}^m. \]
Draw a picture for $n = 2$ and $m = 1$. At the optimum, the constraint is tangent to a level curve of the objective:
\[ \frac{\partial f/\partial x_1 (x^*)}{\partial f/\partial x_2 (x^*)} = \frac{\partial h/\partial x_1 (x^*)}{\partial h/\partial x_2 (x^*)}. \]
Rewrite as
\[ \frac{\partial f/\partial x_1 (x^*)}{\partial h/\partial x_1 (x^*)} = \frac{\partial f/\partial x_2 (x^*)}{\partial h/\partial x_2 (x^*)} \equiv \mu, \]
so that
\begin{align}
\frac{\partial f}{\partial x_1}(x^*) - \mu \frac{\partial h}{\partial x_1}(x^*) &= 0 \tag{1} \\
\frac{\partial f}{\partial x_2}(x^*) - \mu \frac{\partial h}{\partial x_2}(x^*) &= 0 \tag{2} \\
h(x^*) - c &= 0. \tag{3}
\end{align}
These are the critical-point conditions of the Lagrangian $L(x, \mu) \equiv f(x) - \mu (h(x) - c)$.
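As a concrete check of conditions (1)-(3), here is a minimal sketch, using an example of my own rather than one from the lecture, that forms the Lagrangian in SymPy and solves the critical-point equations for $\max x_1 x_2$ subject to $x_1 + x_2 = c$ with $c = 4$:

```python
# A minimal sketch (hypothetical example, not from the slides) of the
# Lagrangian first order conditions: max x1*x2 s.t. x1 + x2 = 4.
import sympy as sp

x1, x2, mu = sp.symbols("x1 x2 mu", real=True)
c = 4
f = x1 * x2          # objective
h = x1 + x2          # constraint function, h(x) = c

L = f - mu * (h - c)  # Lagrangian L(x, mu) = f(x) - mu*(h(x) - c)

# Critical points of the Lagrangian: dL/dx1 = dL/dx2 = dL/dmu = 0
foc = [sp.diff(L, v) for v in (x1, x2, mu)]
print(sp.solve(foc, (x1, x2, mu)))  # {x1: 2, x2: 2, mu: 2}
```

The solution $x_1 = x_2 = 2$ with $\mu = 2$ is the point where a level curve of $f$ is tangent to the constraint line.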

FOC with equality

Theorem. Let $f: U \to \mathbb{R}$ and $h: U \to \mathbb{R}^m$ be continuously differentiable on $U \subseteq \mathbb{R}^n$. Suppose $x^* \in \operatorname{interior}(U)$ is a local maximizer of $f$ on $U$ subject to $h(x) = c$. Also assume that $Dh_{x^*}$ has rank $m$. Then there exists $\mu^* \in \mathbb{R}^m$ such that $(x^*, \mu^*)$ is a critical point of the Lagrangian
\[ L(x, \mu) = f(x) - \mu^T (h(x) - c), \]
i.e.
\begin{align*}
\frac{\partial L}{\partial x_i}(x^*, \mu^*) &= \frac{\partial f}{\partial x_i}(x^*) - \mu^{*T} \frac{\partial h}{\partial x_i}(x^*) = 0 \\
\frac{\partial L}{\partial \mu_j}(x^*, \mu^*) &= h_j(x^*) - c_j = 0
\end{align*}
for $i = 1, \dots, n$ and $j = 1, \dots, m$.

Now consider
\[ \max_{x \in U} f(x) \;\text{ s.t. }\; g(x) \leq b. \]
Binding constraints, $g_j(x^*) = b_j$, work just like equality constraints:
\[ Df_{x^*} - \sum_j \lambda_j Dg_{j,x^*} = 0. \]
$Df_{x^*}$ is the direction in which $f$ increases, so moving from $x^*$ in that direction must violate some binding constraint, or $x^*$ could not be a maximizer:
\begin{align*}
g_j(x^* + \delta Df_{x^*}^T) &> b_j \\
g_j(x^*) + \delta Dg_{j,x^*} Df_{x^*}^T + o(\delta) &> b_j \\
Dg_{j,x^*} Df_{x^*}^T &> 0.
\end{align*}
Combining this with the first order condition gives $\lambda_j > 0$ for binding constraints. Slack constraints, $g_j(x^*) < b_j$, do not restrict small moves away from $x^*$, so they get $\lambda_j = 0$. Thus $\lambda_j \geq 0$, with $\lambda_j = 0$ whenever $g_j(x^*) < b_j$ (the complementary slackness condition).

FOC with inequality

Theorem. Let $f: U \to \mathbb{R}$ and $g: U \to \mathbb{R}^m$ be continuously differentiable on $U \subseteq \mathbb{R}^n$. Suppose $x^* \in \operatorname{interior}(U)$ is a local maximizer of $f$ on $U$ subject to $g(x) \leq b$. Suppose that the first $k \leq m$ constraints bind, $g_j(x^*) = b_j$ for $j = 1, \dots, k$, and that the Jacobian of these binding constraints has rank $k$. Then there exists $\lambda^* \in \mathbb{R}^m$ such that, for
\[ L(x, \lambda) = f(x) - \lambda^T (g(x) - b), \]
we have
\begin{align*}
\frac{\partial L}{\partial x_i}(x^*, \lambda^*) &= \frac{\partial f}{\partial x_i}(x^*) - \lambda^{*T} \frac{\partial g}{\partial x_i}(x^*) = 0 \\
\lambda_j^* \frac{\partial L}{\partial \lambda_j}(x^*, \lambda^*) &= \lambda_j^* \left( g_j(x^*) - b_j \right) = 0 \\
\lambda_j^* &\geq 0 \\
g(x^*) &\leq b
\end{align*}
for $i = 1, \dots, n$ and $j = 1, \dots, m$.
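To see complementary slackness numerically, the following sketch, an illustrative problem of my own choosing rather than one from the slides, solves an inequality-constrained problem with SciPy and shows one constraint binding and one slack:

```python
# Hedged illustration: max x1*x2 s.t. x1 + 2*x2 <= 4 and x1 <= 3.
# SLSQP takes inequality constraints written in the form fun(x) >= 0.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: -x[0] * x[1]  # minimize -f to maximize f
constraints = [
    {"type": "ineq", "fun": lambda x: 4 - x[0] - 2 * x[1]},  # binds: lambda > 0
    {"type": "ineq", "fun": lambda x: 3 - x[0]},             # slack: lambda = 0
]
res = minimize(objective, x0=[0.5, 0.5], method="SLSQP", constraints=constraints)
print(res.x)  # approx [2., 1.]: first constraint binds, second is slack
```

At the solution $x^* \approx (2, 1)$ the first constraint holds with equality while $x_1 = 2 < 3$, so by complementary slackness its multiplier is zero.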

A similar result holds for problems with mixed equality and inequality constraints.

Section 2: Second order conditions

Second order expansion of $f(x)$ around $x^*$:
\[ f(x^* + v) - f(x^*) = Df_{x^*} v + \tfrac{1}{2} v^T D^2 f_{x^*} v + r(v, x^*). \]
A feasible $x^* + v$ must satisfy
\[ h(x^* + v) = h(x^*) + Dh_{x^*} v + r_h(v, x^*) = c, \]
so $Dh_{x^*} v = 0$ to first order. The first order condition $Df_{x^*} = \mu^{*T} Dh_{x^*}$ then implies $Df_{x^*} v = 0$, leaving
\[ f(x^* + v) - f(x^*) = \tfrac{1}{2} v^T D^2 f_{x^*} v + r(v, x^*). \]
Roughly, then, $x^*$ is a local maximizer of $f$ subject to $h(x) = c$ if $v^T D^2 f_{x^*} v \leq 0$ for all $v$ such that $Dh_{x^*} v = 0$; the next theorem makes this precise with a strict inequality.

Second order condition for constrained maximization

Theorem. Let $f: U \to \mathbb{R}$ be twice continuously differentiable on $U$, and $h: U \to \mathbb{R}^l$ and $g: U \to \mathbb{R}^m$ be continuously differentiable on $U \subseteq \mathbb{R}^n$. Suppose $x^* \in \operatorname{interior}(U)$ and there exist $\mu^* \in \mathbb{R}^l$ and $\lambda^* \in \mathbb{R}^m$ such that the first order conditions for
\[ L(x, \lambda, \mu) = f(x) - \lambda^T (g(x) - b) - \mu^T (h(x) - c) \]
are satisfied. Let $B$ be the matrix of derivatives of the binding constraints evaluated at $x^*$. If $v^T D^2 f_{x^*} v < 0$ for all $v \neq 0$ such that $Bv = 0$, then $x^*$ is a strict local constrained maximizer of $f$ subject to $h(x) = c$ and $g(x) \leq b$.

Definition. Let $A$ be an $n \times n$ symmetric matrix and $B$ be $m \times n$. Then $A$ is
- negative definite on $N(B)$ if $x^T A x < 0$ for all $x \in N(B) \setminus \{0\}$,
- positive definite on $N(B)$ if $x^T A x > 0$ for all $x \in N(B) \setminus \{0\}$,
- indefinite on $N(B)$ if there is some $x_1 \in N(B) \setminus \{0\}$ such that $x_1^T A x_1 > 0$ and some other $x_2 \in N(B) \setminus \{0\}$ such that $x_2^T A x_2 < 0$.

Checking definiteness using determinants

Theorem. Let $A$ be an $n \times n$ symmetric matrix and $B$ be $m \times n$. Then $A$ is negative definite on $N(B)$ iff the last $n - m$ leading principal minors of
\[ \begin{pmatrix} 0 & B \\ B^T & A \end{pmatrix} \]
alternate in sign, and the final, $(n+m)$th, leading principal minor (the determinant of the whole matrix) has the same sign as $(-1)^n$.
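A small numerical illustration of this bordered-matrix test. The matrices below are my own choice, matching the earlier $\max x_1 x_2$ s.t. $x_1 + x_2 = c$ example, where $A = D^2 f$ and $B = Dh$; with $n = 2$ and $m = 1$ there is just one minor to inspect, of order $n + m = 3$:

```python
# Sketch: check negative definiteness on N(B) via leading principal
# minors of the bordered matrix [[0, B], [B.T, A]].
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # A = D^2 f for f = x1*x2
B = np.array([[1.0, 1.0]])              # B = Dh for h = x1 + x2
n, m = A.shape[0], B.shape[0]

bordered = np.block([[np.zeros((m, m)), B], [B.T, A]])

# Last n - m leading principal minors: orders 2m + 1, ..., n + m
minors = [np.linalg.det(bordered[:k, :k]) for k in range(2 * m + 1, n + m + 1)]
print(minors)  # approx [2.0]: same sign as (-1)^n = +1, so negative definite on N(B)
```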

Checking definiteness using eigenvalues

Write $B = \begin{pmatrix} B_1 & B_2 \end{pmatrix}$ with $\operatorname{rank} B = \operatorname{rank} B_1 = m$. Then
\[ 0 = Bx = \begin{pmatrix} B_1 & B_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \implies x_1 = -B_1^{-1} B_2 x_2, \]
so $x \in N(B)$ iff $x = \begin{pmatrix} -B_1^{-1} B_2 \\ I_{n-m} \end{pmatrix} x_2$ for some $x_2 \in \mathbb{R}^{n-m}$. For such $x$, partitioning $A = \begin{pmatrix} A_1 & A_2 \\ A_2^T & A_3 \end{pmatrix}$,
\begin{align*}
x^T A x &= x_2^T \begin{pmatrix} -B_1^{-1} B_2 \\ I_{n-m} \end{pmatrix}^T \begin{pmatrix} A_1 & A_2 \\ A_2^T & A_3 \end{pmatrix} \begin{pmatrix} -B_1^{-1} B_2 \\ I_{n-m} \end{pmatrix} x_2 \\
&= x_2^T \underbrace{\left( B_2^T (B_1^T)^{-1} A_1 B_1^{-1} B_2 - B_2^T (B_1^T)^{-1} A_2 - A_2^T B_1^{-1} B_2 + A_3 \right)}_{C} x_2.
\end{align*}
$A$ is negative definite on $N(B)$ iff $C$ is negative definite on $\mathbb{R}^{n-m}$, i.e. iff $C$ has all negative eigenvalues.
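The same example checked via the eigenvalues of $C$, again as a sketch, under the assumption that $B_1$ is the first $m$ columns of $B$ and is invertible:

```python
# Sketch: reduce x^T A x on N(B) to a (n-m)x(n-m) matrix C and test
# its eigenvalues (same A, B as the determinant example above).
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 1.0]])
n, m = A.shape[0], B.shape[0]

B1, B2 = B[:, :m], B[:, m:]
# Columns of Z form a basis for N(B): x = (-B1^{-1} B2 x2, x2)
Z = np.vstack([-np.linalg.solve(B1, B2), np.eye(n - m)])
C = Z.T @ A @ Z
print(np.linalg.eigvalsh(C))  # [-2.]: all negative, so negative definite on N(B)
```

Both tests agree that $A$ is negative definite on $N(B)$, confirming that $x^* = (2, 2)$ in the running example is a strict local constrained maximizer.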

Section 3: Interpretation of Lagrange multipliers

Theorem. Under the conditions of the theorem in Section 2, let $x^*(b, c)$ denote the solution of the constrained maximization problem
\[ \max_{x \in U} f(x) \;\text{ s.t. }\; g(x) \leq b, \quad h(x) = c, \]
and let $\lambda(b, c)$ and $\mu(b, c)$ denote the corresponding Lagrange multipliers. Then for each $j = 1, \dots, m$,
\[ \frac{\partial}{\partial b_j} f(x^*(b, c)) = \lambda_j(b, c), \]
and for each $j = 1, \dots, l$,
\[ \frac{\partial}{\partial c_j} f(x^*(b, c)) = \mu_j(b, c). \]
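A quick symbolic check of this shadow-price interpretation, using the running example of my own rather than one from the slides: for $\max x_1 x_2$ s.t. $x_1 + x_2 = c$, the value function is $f^*(c) = c^2/4$, and its derivative equals the multiplier $\mu = c/2$:

```python
# Sketch: verify d f(x*(c))/dc = mu(c) for max x1*x2 s.t. x1 + x2 = c.
import sympy as sp

c = sp.symbols("c", positive=True)
x1, x2, mu = sp.symbols("x1 x2 mu", real=True)
L = x1 * x2 - mu * (x1 + x2 - c)
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, mu)], (x1, x2, mu), dict=True)[0]

value = (x1 * x2).subs(sol)                      # f*(c) = c**2/4
print(sp.simplify(sp.diff(value, c) - sol[mu]))  # 0: d f*/dc equals mu
```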

Section 4: Envelope theorem

Let $f: U \times A \to \mathbb{R}$ where $U \subseteq \mathbb{R}^n$ and $A \subseteq \mathbb{R}^k$. Consider
\[ \max_{x \in U} f(x, \alpha). \]
Let $x^*(\alpha)$ be a local maximizer. Using the chain rule,
\begin{align*}
\frac{d}{d\alpha_j} f(x^*(\alpha), \alpha) &= \sum_{i=1}^n \frac{\partial f}{\partial x_i} \frac{\partial x_i^*}{\partial \alpha_j} + \frac{\partial f}{\partial \alpha_j} \\
&= \frac{\partial f}{\partial \alpha_j}(x^*(\alpha), \alpha),
\end{align*}
where the second line follows from the first order condition.
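A short symbolic verification of this envelope result. The objective $f(x, a) = -(x - a)^2 + a$ is a hypothetical example, not from the notes:

```python
# Sketch: total derivative of the value function equals the partial
# derivative of f with respect to the parameter, evaluated at x*(a).
import sympy as sp

x, a = sp.symbols("x a", real=True)
f = -(x - a) ** 2 + a
xstar = sp.solve(sp.diff(f, x), x)[0]     # x*(a) = a from the FOC

total = sp.diff(f.subs(x, xstar), a)      # d/da f(x*(a), a)
partial = sp.diff(f, a).subs(x, xstar)    # df/da evaluated at x*(a)
print(total, partial)                     # 1 1: the two agree
```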

Let $f: U \times A \to \mathbb{R}$ and $h: U \times A \to \mathbb{R}^l$ where $U \subseteq \mathbb{R}^n$ and $A \subseteq \mathbb{R}^k$. Consider
\[ \max_{x \in U} f(x, \alpha) \;\text{ s.t. }\; h(x, \alpha) = 0. \]
Let $x^*(\alpha)$ be a local maximizer, and let $L(x^*(\alpha), \mu^*(\alpha), \alpha)$ be the Lagrangian. Using the chain rule,
\begin{align*}
\frac{d}{d\alpha_j} L(x^*(\alpha), \mu^*(\alpha), \alpha) &= \sum_{i=1}^n \frac{\partial L}{\partial x_i} \frac{\partial x_i^*}{\partial \alpha_j} + \sum_{k=1}^l \frac{\partial L}{\partial \mu_k} \frac{\partial \mu_k^*}{\partial \alpha_j} + \frac{\partial L}{\partial \alpha_j} \\
&= \frac{\partial L}{\partial \alpha_j}(x^*(\alpha), \mu^*(\alpha), \alpha),
\end{align*}
where the two sums vanish by the first order conditions. Since $h(x^*(\alpha), \alpha) = 0$, we have $f(x^*(\alpha), \alpha) = L(x^*(\alpha), \mu^*(\alpha), \alpha)$, so
\[ \frac{d}{d\alpha_j} f(x^*(\alpha), \alpha) = \frac{\partial L}{\partial \alpha_j}(x^*(\alpha), \mu^*(\alpha), \alpha). \]