Newton-Type Algorithms for Nonlinear Constrained Optimization

Newton-Type Algorithms for Nonlinear Constrained Optimization
Angelika Altmann-Dieses, Faculty of Management Science and Engineering, Karlsruhe University of Applied Sciences
Moritz Diehl, Department of Microsystems Engineering (IMTEK) and Department of Mathematics, University of Freiburg
Some slide material was provided by W. Bangerth and K. Mombaur.

Overview: Constrained Optimization
- Necessary optimality conditions (KKT conditions)
- Newton-type methods for equality constrained optimisation
- How to treat inequalities? Example: Interior Point Methods

Nonlinear Programming
General problem formulation:
$$\min_x f(x) \quad \text{s.t.} \quad g(x) = 0, \;\; h(x) \geq 0$$
- $f$: objective function / cost function
- $g$: equality constraints
- $h$: inequality constraints
$f$, $g$, $h$ shall be smooth (twice differentiable) functions.
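To make the general formulation concrete, here is a minimal Python sketch of one such problem; the specific choices of f, g, and h (distance to a point, the unit circle, the upper half-plane) are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Hypothetical instance of  min_x f(x)  s.t.  g(x) = 0,  h(x) >= 0:
# minimize the distance to p, on the unit circle, in the upper half-plane.
p = np.array([2.0, 1.0])

def f(x):
    # objective / cost function
    return 0.5 * np.sum((x - p) ** 2)

def g(x):
    # equality constraints, g(x) = 0
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

def h(x):
    # inequality constraints, h(x) >= 0
    return np.array([x[1]])
```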

Recall: ball on a spring without constraints
$$\min_{x \in \mathbb{R}^m} f(x)$$
[Figure: contour lines of $f$ with the gradient vector $\nabla f(x) = \left(\tfrac{\partial f}{\partial x_1}(x), \ldots, \tfrac{\partial f}{\partial x_m}(x)\right)^\top$.]
Unconstrained minimum $x^*$: $\nabla f(x^*) = 0$, i.e., $\tfrac{\partial f}{\partial x_i}(x^*) = 0$ for $i = 1, \ldots, m$.
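Here is a minimal sketch of this picture in Python, with an assumed spring-plus-gravity energy and parameter values of my own; since the energy is quadratic, a single Newton step lands exactly on the point where the gradient vanishes.

```python
import numpy as np

# Assumed "ball on a spring" energy (not from the slides):
# spring energy plus gravity, f(x) = 0.5*k*||x||^2 + m_ball*grav*x[1].
k, m_ball, grav = 10.0, 1.0, 9.81

def f(x):
    return 0.5 * k * (x @ x) + m_ball * grav * x[1]

def grad_f(x):
    return k * x + np.array([0.0, m_ball * grav])

# The unconstrained minimum solves grad_f(x) = 0; the Hessian of f is k*I,
# so one Newton step from anywhere gives x* = (0, -m*g/k).
hess = k * np.eye(2)
x = np.zeros(2)
x_star = x - np.linalg.solve(hess, grad_f(x))
print(x_star, grad_f(x_star))   # gradient vanishes at the minimum
```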

Now: ball on a spring with constraints
$$\min_x f(x) \quad \text{s.t.} \quad h_1(x) \geq 0, \;\; h_2(x) \geq 0$$
Constrained minimum: $\nabla f(x^*) = \mu \, \nabla h(x^*)$, where $\mu \geq 0$ is the Lagrange multiplier and $\nabla h$ is the gradient of the active constraint; the inactive constraint satisfies $h(x^*) > 0$.
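As a concrete check with one active constraint (an example of my own, not from the slides): take $f(x) = \tfrac{1}{2}\|x - p\|^2$ with $p = (1, -2)$ and the single constraint $h(x) = x_2 \geq 0$. The ball wants to move to $p$ below the floor, so the constraint is active and $x^* = (1, 0)$. Then
$$\nabla f(x^*) = x^* - p = \begin{pmatrix} 0 \\ 2 \end{pmatrix}, \qquad \nabla h(x^*) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
so indeed $\nabla f(x^*) = \mu \, \nabla h(x^*)$ with multiplier $\mu = 2 \geq 0$.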

Ball on a spring with two active constraints
Equilibrium of forces with constraint forces:
$$\nabla f(x^*) = \mu_1 \nabla h_1(x^*) + \mu_2 \nabla h_2(x^*), \qquad \mu_1, \mu_2 \geq 0,$$
for the problem $\min_x f(x)$ s.t. $h_1(x) \geq 0$, $h_2(x) \geq 0$.
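The multipliers can be computed numerically from the equilibrium of forces; a small sketch with assumed toy data (mine, not from the slides), where two coordinate constraints act like walls:

```python
import numpy as np

# Pull the ball towards p while h1(x) = x[0] >= 0 and h2(x) = x[1] >= 0
# act like two walls; both constraints are active at the origin.
p = np.array([-1.0, -2.0])
x_star = np.array([0.0, 0.0])

grad_f = x_star - p                 # gradient of f(x) = 0.5*||x - p||^2
grad_h = np.array([[1.0, 0.0],      # rows: grad h1, grad h2
                   [0.0, 1.0]])

# Equilibrium of forces: grad_f = mu1*grad_h1 + mu2*grad_h2, solved for mu.
mu = np.linalg.solve(grad_h.T, grad_f)
print(mu)                           # [1. 2.] -- both multipliers >= 0
```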

Multipliers as shadow prices
What happens if we relax a constraint? Old constraint: $h(x) \geq 0$; new constraint: $h(x) + \varepsilon \geq 0$. The feasible set becomes bigger, so the new minimum $f_\varepsilon^*$ becomes smaller. How much would we gain?
$$f_\varepsilon^* \approx f^* - \mu \varepsilon$$
Multipliers show the hidden cost of constraints.
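A one-dimensional example (my own, for illustration): minimize $f(x) = x^2$ subject to $h(x) = x - 1 \geq 0$. The minimum is $x^* = 1$ with $f^* = 1$ and multiplier $\mu = 2$ (from $2x^* = \mu$). Relaxing the constraint to $x - 1 + \varepsilon \geq 0$ gives $x_\varepsilon^* = 1 - \varepsilon$ and
$$f_\varepsilon^* = (1 - \varepsilon)^2 = 1 - 2\varepsilon + \varepsilon^2 \approx f^* - \mu \varepsilon,$$
confirming the shadow-price interpretation to first order.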

The Lagrangian Function
For constrained problems, introduce a modification of the objective function:
$$\mathcal{L}(x, \lambda, \mu) := f(x) - \sum_i \lambda_i g_i(x) - \sum_i \mu_i h_i(x)$$
- equality multipliers $\lambda_i$ may have both signs in a solution
- inequality multipliers $\mu_i$ cannot be negative (cf. shadow prices)
- for inactive constraints, multipliers $\mu_i$ are zero
Equilibrium of forces can now be written as: $\nabla_x \mathcal{L}(x, \lambda, \mu) = 0$.

Necessary optimality conditions
Karush-Kuhn-Tucker (KKT) conditions: If $x^*$ is a local minimum, then
- $x^*$ is feasible, i.e., $g(x^*) = 0$ and $h(x^*) \geq 0$,
- there exist $\lambda^*, \mu^*$ such that $\nabla_x \mathcal{L}(x^*, \lambda^*, \mu^*) = 0$,
- $\mu^* \geq 0$,
- $\mu^{*\top} h(x^*) = 0$, i.e., $\mu_i^* = 0$ or $h_i(x^*) = 0$ for each $i$.
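These four conditions are easy to verify numerically at a candidate point. A minimal sketch for the one-constraint ball-on-spring example used above (assumed data, not from the slides):

```python
import numpy as np

# Toy problem:  min 0.5*||x - p||^2  s.t.  h(x) = x[1] >= 0,  p = (1, -2).
# Candidate solution and multiplier from the force-equilibrium picture:
p = np.array([1.0, -2.0])
x, mu = np.array([1.0, 0.0]), np.array([2.0])

grad_f = x - p                      # gradient of the objective
h = np.array([x[1]])                # inequality constraint value
grad_h = np.array([[0.0, 1.0]])     # row: gradient of h

stationarity  = grad_f - grad_h.T @ mu      # grad_x L(x, mu) = 0 ?
feasible      = np.all(h >= 0)              # h(x) >= 0 ?
dual_feasible = np.all(mu >= 0)             # mu >= 0 ?
complementary = np.isclose(mu @ h, 0.0)     # mu^T h(x) = 0 ?

print(np.allclose(stationarity, 0), feasible, dual_feasible, complementary)
# -> True True True True: all KKT conditions hold at (x, mu)
```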

Overview: Constrained Optimization
- Necessary optimality conditions (KKT conditions)
- Newton-type methods for equality constrained optimisation
- How to treat inequalities? Example: Interior Point Methods

IPOPT: an Interior Point Optimization Algorithm
Andreas Wächter, Lorenz T. Biegler: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program., Ser. A 106, 25-57 (2006). DOI 10.1007/s10107-004-0559-y. Springer-Verlag.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.

CasADi optimisation environment
Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, Moritz Diehl: CasADi: A software framework for nonlinear optimization and optimal control (submitted).
Existing reference: Joel Andersson, Johan Åkesson, and Moritz Diehl: CasADi: A Symbolic Package for Automatic Differentiation and Optimal Control. In S. Forth et al. (eds.), Recent Advances in Algorithmic Differentiation, Lecture Notes in Computational Science and Engineering 87, Springer-Verlag Berlin Heidelberg, 2012. DOI 10.1007/978-3-642-30023-3_27.
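As an illustration of how these pieces fit together, here is a minimal sketch using CasADi's nlpsol interface with IPOPT as the NLP solver; the toy problem (distance to a point, on the unit circle) is an assumption for demonstration, not from the slides.

```python
import casadi as ca

# Toy NLP: minimize the distance to p = (2, 1) subject to the equality
# constraint that x lies on the unit circle.
x = ca.SX.sym('x', 2)
f = 0.5 * ((x[0] - 2)**2 + (x[1] - 1)**2)
g = x[0]**2 + x[1]**2 - 1

# nlpsol builds an NLP solver; 'ipopt' selects the interior-point code IPOPT.
# Derivatives are generated automatically by CasADi's algorithmic differentiation.
solver = ca.nlpsol('solver', 'ipopt', {'x': x, 'f': f, 'g': g})

# lbg = ubg = 0 encodes the equality constraint g(x) = 0.
sol = solver(x0=[1.0, 0.0], lbg=0, ubg=0)
print(sol['x'])       # primal solution, approx. (2, 1)/sqrt(5)
print(sol['lam_g'])   # Lagrange multiplier of the circle constraint
```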

Summary
- The Lagrangian function plays an important role in constrained optimization.
- The KKT conditions are necessary optimality conditions.
- Newton-type methods try to find a KKT point by successive linearizations.
- Inequality constraints can be addressed by Interior Point (IP) methods, e.g. in the IPOPT code.
- Derivatives of problem functions can be provided automatically, e.g. by the CasADi optimisation environment.