Unit: Optimality Conditions and the Karush-Kuhn-Tucker Theorem


Goals

1. What is the gradient of a function? What are its properties?
2. How can it be used to find a linear approximation of a nonlinear function?
3. Given a continuously differentiable function, which equations are fulfilled at local optima in the following cases?
   1. Unconstrained
   2. Equality constraints
   3. Inequality constraints
4. How can this be used to find Pareto fronts analytically?
5. How can conditions for local efficiency in multiobjective optimization be stated?

Optimality conditions for differentiable problems

Consider a continuously differentiable function f. A sufficient condition for a point x* in R to be a local maximum is, for instance, f'(x*) = 0 and f''(x*) < 0. Necessary conditions, such as f'(x*) = 0, can be used to restrict the set of candidate solutions. If sufficient conditions are met, the solution is guaranteed to be locally (Pareto) optimal, so they provide us with verified solutions. A condition that is both necessary and sufficient characterizes exactly all solutions to the problem.
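As a minimal sketch of this in wxMaxima (the function here is an assumed example for illustration), the necessary condition f'(x) = 0 yields the candidates, and the sign of f''(x) at each candidate decides between maximum and minimum:

Program: wxmaxima:
f(x) := -x^4 + 2*x^2;              /* example function, assumed for illustration */
cand: solve(diff(f(x), x) = 0, x); /* necessary condition f'(x) = 0: x = -1, 1, 0 */
d2: diff(f(x), x, 2);              /* second derivative -12*x^2 + 4 */
map(lambda([s], subst(s, d2)), cand); /* -8 at x = -1, 1 (maxima), 4 at x = 0 (minimum) */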

Recall: Derivatives

Recall: Partial Derivatives https://www.khanacademy.org/math/multivariablecalculus/partial_derivatives_topic/partial_derivatives/v/partial-derivatives
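As a quick reminder, partial derivatives can be computed symbolically in wxMaxima (the function here is an assumed example):

Program: wxmaxima:
f(x1, x2) := x1^2 * x2 + sin(x2);  /* example function, assumed for illustration */
diff(f(x1, x2), x1);               /* partial derivative w.r.t. x1: 2*x1*x2 */
diff(f(x1, x2), x2);               /* partial derivative w.r.t. x2: x1^2 + cos(x2) */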

Gradient / Linear Taylor Approximations
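A sketch of computing the gradient and the corresponding linear Taylor approximation in wxMaxima; the expansion point (1, 0) is an assumption chosen for illustration:

Program: wxmaxima:
f(x1, x2) := 20*exp(-x1^2 - x2^2);                /* the example function used below */
grad: [diff(f(x1, x2), x1), diff(f(x1, x2), x2)]; /* gradient vector */
taylor(f(x1, x2), [x1, x2], [1, 0], 1);           /* linear Taylor approximation at (1, 0) */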

Example 1-Dimension

Gradient computation: Example

Program: wxmaxima:
load(draw);
draw3d(explicit(20*exp(-x^2-y^2)-10, x, 0, 2, y, -3, 3),
       contour_levels = 15,
       contour = both,
       surface_hide = true);

Gradient properties

The plot shows the gradient at different points of the function (gradient field)

f(x1, x2) = 20 exp(-x1^2 - x2^2)

Program: wxmaxima:
f1(x1,x2) := 20*exp(-x1^2 - x2^2);
gx1f1(x1,x2) := diff(f1(x1,x2), x1, 1);  /* partial derivative w.r.t. x1 */
gx2f1(x1,x2) := diff(f1(x1,x2), x2, 1);  /* partial derivative w.r.t. x2 */
load(drawdf);
drawdf([gx1f1(x1,x2), gx2f1(x1,x2)], [x1,-2,2], [x2,-2,2]);

Single objective, unconstrained

Single objective, unconstrained (example)

Constraints (equalities)

For m equality constraints g_i(x) = 0 on x in R^n, the Lagrange multiplier rule applies. Exercise: show that this yields m+n equations with m+n+1 unknowns. A worked sketch follows below.
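A minimal sketch of the multiplier rule in wxMaxima, on an assumed example problem (maximize f(x,y) = x*y subject to x + y = 4):

Program: wxmaxima:
f: x*y;        /* objective (example, assumed) */
g: x + y - 4;  /* equality constraint g(x,y) = 0 */
L: f - lam*g;  /* Lagrangian */
solve([diff(L, x) = 0, diff(L, y) = 0, diff(L, lam) = 0], [x, y, lam]);
/* result: x = 2, y = 2, lam = 2 */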

Constraints (equalities) - interpretation

At the optimum, the level curve of f and the constraint curve have a common tangent line (figure).

Example: One equality constraint, three dimensions

Level surface of f: f(x1,x2,x3) = const. All solutions that satisfy the equality constraint are located on the gray surface (figure).

Example: Two equality constraints, three dimensions

Points on the intersection of the two planes satisfy both constraints (figure).
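This can be visualized directly; a sketch with two assumed constraint planes, whose line of intersection is the feasible set:

Program: wxmaxima:
load(draw);
/* two example constraint surfaces (assumed): x + y + z = 1 and x - y = 0 */
draw3d(implicit(x + y + z = 1, x, -2, 2, y, -2, 2, z, -2, 2),
       color = red,
       implicit(x - y = 0, x, -2, 2, y, -2, 2, z, -2, 2));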

Constraints (inequalities)

Harold W. Kuhn, American mathematician, 1925-2014
Albert W. Tucker, Canadian mathematician, 1905-1995

Constraint (inequality)
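For a problem of the form minimize f(x) subject to g_i(x) <= 0, i = 1, ..., m, the first-order KKT conditions state that at a local minimum x* (under a constraint qualification such as LICQ) there exist multipliers mu_i >= 0 such that

grad f(x*) + sum_i mu_i grad g_i(x*) = 0   (stationarity)
mu_i g_i(x*) = 0 for all i                 (complementary slackness)
g_i(x*) <= 0 for all i                     (feasibility)

Equivalently, -grad f(x*) lies in the cone spanned by the gradients of the active constraints.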

Geometrical interpretation of the KKT conditions (figure): at the optimum x, the negative gradient -grad f lies in the cone spanned by the gradients of the active constraints, grad g_a1 and grad g_a2.
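A sketch of verifying the KKT conditions on an assumed example (minimize f(x,y) = (x-2)^2 + (y-1)^2 subject to x + y <= 2, where the constraint is active at the optimum):

Program: wxmaxima:
f: (x-2)^2 + (y-1)^2;
g: x + y - 2;  /* constraint x + y <= 2, active at the optimum */
/* stationarity grad(f) + lam*grad(g) = 0 together with the active constraint g = 0 */
solve([diff(f, x) + lam*diff(g, x) = 0,
       diff(f, y) + lam*diff(g, y) = 0,
       g = 0],
      [x, y, lam]);
/* result: x = 3/2, y = 1/2, lam = 1 >= 0, so the KKT conditions hold at (3/2, 1/2) */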

Multiobjective Optimization [cf. Miettinen 99]

Unconstrained Multiobjective Optimization

In 2-dimensional spaces this criterion reduces to the observation that either one of the objectives has a zero gradient (necessary condition for ideal points) or the gradients are parallel.

Strategy: Solve multiobjective optimization problems by level set continuation

Strategy: Find efficient points using the determinant

For two objectives in two variables, alpha1 grad f1(x) + alpha2 grad f2(x) = 0 with alpha1, alpha2 >= 0 means the two gradients are linearly dependent, i.e. det[grad f1(x), grad f2(x)] = 0. Efficient points can therefore be found by searching for points with det[grad f1(x), grad f2(x)] = 0 (necessary condition).
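A worked sketch on an assumed bi-objective example, f1 = (x-1)^2 + y^2 and f2 = (x+1)^2 + y^2, whose minima lie at (1,0) and (-1,0):

Program: wxmaxima:
f1: (x-1)^2 + y^2;  /* example objectives, assumed for illustration */
f2: (x+1)^2 + y^2;
/* matrix whose columns are the two gradients */
M: matrix([diff(f1, x), diff(f2, x)],
          [diff(f1, y), diff(f2, y)]);
ratsimp(determinant(M));  /* = -8*y, so candidate points satisfy y = 0 */
/* the efficient set is the segment y = 0, -1 <= x <= 1 between the two minima */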

Take home messages

1. The gradient is the vector of first-order partial derivatives and is perpendicular to level curves; the Hessian contains the second-order partial derivatives.
2. Local linearization yields optimality conditions; in the single-objective case: zero gradient and a positive definite (minimum) or negative definite (maximum) Hessian.
3. The Lagrange multiplier rule can be used to solve constrained optimization problems with equality constraints.
4. The KKT conditions generalize it to inequality constraints; the negative gradient points into the cone spanned by the gradients of the active constraints.
5. The KKT conditions for multiobjective optimization require, for an interior point to be optimal, that a nonnegative combination of the objective gradients vanishes; for two objectives, the gradients point in exactly opposite directions.
6. The KKT conditions define an equation system whose solution set is an at most (m-1)-dimensional manifold, where m is the number of objectives.

References

Kuhn, Harold W., and Albert W. Tucker. "Nonlinear programming." Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. Vol. 5. 1951.
Miettinen, Kaisa. Nonlinear Multiobjective Optimization. Vol. 12. Springer, 1999.
Miettinen, Kaisa. "Some methods for nonlinear multi-objective optimization." Evolutionary Multi-Criterion Optimization. Springer Berlin Heidelberg, 2001.
Hillermeier, C. (2001). Generalized homotopy approach to multiobjective optimization. Journal of Optimization Theory and Applications, 110(3), 557-583.
Schütze, O., Coello Coello, C. A., Mostaghim, S., Talbi, E. G., & Dellnitz, M. (2008). Hybridizing evolutionary strategies with continuation methods for solving multi-objective problems. Engineering Optimization, 40(5), 383-402.