Computational Optimization. Augmented Lagrangian NW 17.3


Upcoming Schedule
No class April 18.
Friday, April 25: in-class presentations. Projects due, unless you present April 25 (free extension until Monday for 4/25 presenters).
Monday, April 28: evening class presentations, pizza provided.
Tuesday, April 29: in-class presentations.
Exam May 6, Tuesday, open notes/book.

General Equality Problem
(NLP)   min f(x)   s.t.   h_i(x) = 0,  i ∈ E

Augmented Lagrangian
Consider min f(x) s.t. h(x) = 0.
Start with the Lagrangian L(x, λ) = f(x) − λ'h(x).
Add a penalty term: L_A(x, λ, μ) = f(x) − λ'h(x) + (μ/2)‖h(x)‖².
The penalty helps ensure that the point is feasible.

Lagrange Multiplier Estimate
L_A(x, λ, μ) = f(x) − λ'h(x) + (μ/2)‖h(x)‖²
Setting ∇_x L_A(x, λ, μ) = ∇f(x) − Σ_i λ_i ∇h_i(x) + μ Σ_i h_i(x) ∇h_i(x) = 0 and regrouping gives
∇f(x) − Σ_i [λ_i − μ h_i(x)] ∇h_i(x) = 0.
The bracketed term looks like the Lagrange multiplier! This suggests the update
λ_i^{k+1} = λ_i^k − μ_k h_i(x^k)   (17.39)

In-Class Exercise
Consider min x³ s.t. x + 1 = 0.
Find x*, λ* satisfying the KKT conditions.
Write out the augmented Lagrangian L_A(x, λ*, μ).
Plot f(x), L(x, λ*), L_A(x, λ*, 4), L_A(x, λ*, 16), L_A(x, λ*, 40).
Compare these functions near x*.
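
A minimal plotting sketch for this exercise (my illustration, not part of the original slides), assuming numpy and matplotlib are available; the multiplier value in the comment follows from 3x² − λ = 0 at the only feasible point x* = −1, giving λ* = 3.

import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x**3                      # objective of the exercise
h = lambda x: x + 1.0                   # equality constraint h(x) = 0
lam_star = 3.0                          # from 3*(x*)**2 - lam = 0 at x* = -1

def L_A(x, lam, mu):
    # augmented Lagrangian L_A(x, lam, mu) = f(x) - lam*h(x) + (mu/2)*h(x)^2
    return f(x) - lam * h(x) + 0.5 * mu * h(x)**2

x = np.linspace(-2.0, 0.0, 400)
plt.plot(x, f(x), label="f(x)")
plt.plot(x, L_A(x, lam_star, 0.0), label="L(x, lam*)")        # plain Lagrangian (mu = 0)
for mu in (4, 16, 40):
    plt.plot(x, L_A(x, lam_star, mu), label=f"L_A(x, lam*, {mu})")
plt.axvline(-1.0, color="gray", linestyle="--")               # x* = -1
plt.legend()
plt.show()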

Augmented Lagrangian Algorithm for Equality Constraints (Framework 17.3)
Given x^0, λ^0, μ^0 > 0, tol > 0.
For k = 0, 1, 2, ...
Find an approximate minimizer x^k of L_A(x, λ^k, μ^k) such that ‖∇_x L_A(x^k, λ^k, μ^k)‖ ≤ tol.
If optimal, stop.
Update the Lagrange multipliers: λ_i^{k+1} = λ_i^k − μ_k h_i(x^k)   (17.39).
Choose a new penalty μ^{k+1} ≥ μ^k.
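
A minimal Python sketch of this outer loop (an illustration of the framework, not the textbook's code), assuming equality constraints only, scipy's BFGS solver for the unconstrained subproblem, and an arbitrary factor-of-10 penalty increase:

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, lam0, mu0=10.0, tol=1e-6, max_outer=20):
    # Solve min f(x) s.t. h(x) = 0, where h returns a vector of constraint values.
    x, lam, mu = np.asarray(x0, float), np.asarray(lam0, float), mu0
    for k in range(max_outer):
        # approximately minimize L_A(x, lam, mu) over x
        L_A = lambda z: f(z) - lam @ h(z) + 0.5 * mu * np.sum(h(z)**2)
        x = minimize(L_A, x, method="BFGS", options={"gtol": tol}).x
        if np.linalg.norm(h(x)) < tol:     # feasible and subproblem-optimal: stop
            break
        lam = lam - mu * h(x)              # multiplier update (17.39)
        mu *= 10.0                         # choose a new (larger) penalty
    return x, lam

# example: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 = 0
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
h = lambda x: np.array([x[0] + x[1] - 2.0])
print(augmented_lagrangian(f, h, x0=[0.0, 0.0], lam0=[0.0]))   # expect x* = (0.5, 1.5)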

AL Has Nice Properties
The penalty term can improve conditioning and convexity.
It automatically gives estimates of the Lagrange multipliers.
Only a finite penalty term is needed.

Theorem 17.5
Let x* be a local solution of the equality-constrained NLP at which LICQ and the second-order sufficient conditions are satisfied, with multipliers λ*. Then for all μ sufficiently large, x* is a strict local minimizer of L_A(x, λ*, μ).
Only a finite penalty term is needed!
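
A quick sanity check of the theorem on the in-class exercise (my worked example, not on the original slide), using x* = −1 and λ* = 3:
L_A(x, λ*, μ) = x³ − 3(x + 1) + (μ/2)(x + 1)²
∇_x L_A(x*, λ*, μ) = 3(−1)² − 3 + μ·0 = 0 for every μ, and
∇²_xx L_A(x*, λ*, μ) = 6(−1) + μ = μ − 6,
so x* is a strict local minimizer of L_A(·, λ*, μ) exactly when μ > 6: a finite threshold, as the theorem promises. Of the penalties in the exercise, μ = 4 is too small while μ = 16 and μ = 40 suffice.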

Why AL Works
The AL minimizer is close to the true solution if the penalty μ is large enough or if the multiplier estimate λ is close enough to λ*.
The subproblems have a strict local minimum, so unconstrained minimization methods should work well.

Add Bound Constraints
Original problem: min f(x) s.t. h_i(x) = 0, i ∈ E, l ≤ x ≤ u.
Put only the equality constraints into the augmented Lagrangian and keep the bounds explicit in the subproblem:
min_x L_A(x, λ^k, μ^k) = f(x) − (λ^k)'h(x) + (μ^k/2)‖h(x)‖²   s.t.   l ≤ x ≤ u

Algorithm 17.4 for the Bound-Constrained Case
Just put the nonlinear equalities in the augmented Lagrangian subproblem and keep the bounds as is.
If the iterate is nearly feasible, update the multipliers and penalty parameters; otherwise just increase the penalty.
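
A sketch of one outer iteration for this bound-constrained case (my illustration; the L-BFGS-B subproblem solver and the feasibility threshold eta are assumptions, not part of Algorithm 17.4 as stated):

import numpy as np
from scipy.optimize import minimize

def al_bound_constrained_step(f, h, x, lam, mu, bounds, eta=1e-3, tol=1e-6):
    # Bounds stay explicit in the subproblem; only the equalities h(x) = 0 enter L_A.
    L_A = lambda z: f(z) - lam @ h(z) + 0.5 * mu * np.sum(h(z)**2)
    x_new = minimize(L_A, x, method="L-BFGS-B", bounds=bounds,
                     options={"gtol": tol}).x
    if np.linalg.norm(h(x_new)) <= eta:
        lam = lam - mu * h(x_new)          # nearly feasible: update the multipliers (17.39)
    else:
        mu *= 10.0                         # otherwise just increase the penalty
    return x_new, lam, mu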

Inequality Problems
The method of multipliers can be extended to inequality constraints g_j(x) ≥ 0, j = 1, ..., m, using a penalty parameter t:
L(x, u, t) = f(x) + (1/(2t)) Σ_{j=1}^m { ([u_j − t g_j(x)]_+)² − u_j² }
If strict complementarity holds, this function is twice differentiable.

Inequality Problems
∇_x L(x, u, t) = ∇f(x) − Σ_{j=1}^m [u_j − t g_j(x)]_+ ∇g_j(x) = 0
∇_u L(x, u, t) = 0  ⇔  [u_j − t g_j(x)]_+ − u_j = 0,  j = 1, ..., m
A KKT point of the augmented Lagrangian is a KKT point of the original problem.
The estimate of the Lagrange multiplier is û_j = [u_j − t g_j(x)]_+.

Inequality Problems
At a stationary point of the augmented Lagrangian, let û_j = [u_j − t g_j(x)]_+ ≥ 0. Then
∇_x L(x, u, t) = ∇f(x) − Σ_{j=1}^m û_j ∇g_j(x) = 0,
and ∇_u L(x, u, t) = 0 requires [u_j − t g_j(x)]_+ = u_j. Checking the KKT conditions case by case:
if g_j(x) > 0, then for t sufficiently large û_j = 0;
if g_j(x) = 0, then û_j ≥ 0 and û_j g_j(x) = 0;
if g_j(x) < 0, then for t sufficiently large we get a contradiction, so x must be feasible.
Hence ∇f(x) − Σ_{j=1}^m û_j ∇g_j(x) = 0 with û_j = [u_j − t g_j(x)]_+, i.e., x and û satisfy the KKT conditions of the original problem.
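
A one-line illustration of this multiplier estimate (my notation, for constraints g_j(x) ≥ 0 as above):

import numpy as np

def multiplier_estimate(u, t, g_vals):
    # u_hat_j = max(u_j - t*g_j(x), 0): zero for inactive constraints once t is large,
    # nonnegative and complementary for active ones.
    return np.maximum(np.asarray(u) - t * np.asarray(g_vals), 0.0)

# example: with u = [1, 1] and t = 100, an inactive constraint (g = 0.5) gets u_hat = 0
print(multiplier_estimate([1.0, 1.0], 100.0, [0.5, 0.0]))      # -> [0. 1.]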

NLP Family of Algorithms
Basic method: Sequential Quadratic Programming, Sequential Linear Programming, Augmented Lagrangian, Projection or Reduced Gradient.
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient.
Space: Direct, Null, Range.
Constraints: Active Set, Barrier, Penalty.
Step size: Line Search, Trust Region.

Hybrid Approaches
A method can be any combination of these building blocks.
MINOS: for linear programs it uses the simplex method. The generalization of this to nonlinear programs with linear constraints is the reduced-gradient method. Nonlinear constraints are handled via the augmented Lagrangian, and a BFGS estimate of the Hessian is used.