
Optimization: Nonlinear Optimization without Constraints

Nonlinear optimization without constraints

Unconstrained minimization: $\min_x f(x)$, where $f(x)$ is nonlinear and non-convex.

Algorithms:
1. Newton and quasi-Newton methods
2. Methods with direction determination and line minimization
3. Nelder-Mead method

Newton's algorithm

Second-order Taylor expansion:
$$f(x) = f(x_0) + \nabla^T f(x_0)\,(x - x_0) + \tfrac{1}{2}(x - x_0)^T H(x_0)\,(x - x_0) + O(\|x - x_0\|_2^3)$$

Dropping the remainder gives an unconstrained quadratic problem in $\Delta x = x - x_0$:
$$\min_{\Delta x}\ \tfrac{1}{2}\Delta x^T H(x_0)\,\Delta x + \nabla^T f(x_0)\,\Delta x$$
$$\Delta x_{\mathrm{opt}} = (x - x_0)_{\mathrm{opt}} = -H^{-1}(x_0)\,\nabla f(x_0), \quad \text{so} \quad x_{\mathrm{opt}} = x_0 - H^{-1}(x_0)\,\nabla f(x_0)$$

Newton's algorithm:
$$x_{k+1} = x_k - H^{-1}(x_k)\,\nabla f(x_k)$$

Newton's algorithm (continued)

Figure: the quadratic model $f(x_0) + \nabla^T f(x_0)\,(x - x_0) + \tfrac{1}{2}(x - x_0)^T H(x_0)\,(x - x_0)$ around $x_0$, with the step $d = x - x_0$ leading to $x_{\mathrm{opt}}$.

In terms of the step $d = x - x_0$:
$$\min_d\ f(x_0) + \nabla^T f(x_0)\,d + \tfrac{1}{2} d^T H(x_0)\,d$$
$$d = -H^{-1}(x_0)\,\nabla f(x_0), \qquad x_{\mathrm{opt}} = x_0 + d = x_0 - H^{-1}(x_0)\,\nabla f(x_0)$$
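As a concrete illustration (not from the slides), here is a minimal Python sketch of the Newton iteration; the function names and the test problem $f(x) = (x_1 - 1)^4 + (x_2 + 2)^2$ are assumptions, and the linear system $H\,d = -\nabla f$ is solved rather than forming $H^{-1}$ explicitly:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton iteration x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # gradient small enough: stop
            break
        d = np.linalg.solve(hess(x), -g)     # solve H d = -g, no explicit inverse
        x = x + d
    return x

# Assumed example: f(x) = (x1 - 1)^4 + (x2 + 2)^2, minimum at (1, -2)
grad = lambda x: np.array([4 * (x[0] - 1) ** 3, 2 * (x[1] + 2)])
hess = lambda x: np.array([[12 * (x[0] - 1) ** 2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, [3.0, 2.0]))        # -> close to [1, -2]
```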

Levenberg-Marquardt and quasi-Newton algorithms

Computing the Hessian matrix $H(x_k)$ is time-consuming, and the Newton step runs into problems when $H(x_k)$ is close to singular.

Solution: choose an approximation $\hat{H}_k$:
- Levenberg-Marquardt algorithm
- Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method
- Davidon-Fletcher-Powell (DFP) quasi-Newton method

Modified Newton algorithm: $x_{k+1} = x_k - \hat{H}_k^{-1}\,\nabla f(x_k)$

Modified Newton algorithm: $x_{k+1} = x_k - \hat{H}_k^{-1}\,\nabla f(x_k)$

Levenberg-Marquardt algorithm:
$$\hat{H}_k = \lambda I + H(x_k)$$

Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method:
$$\hat{H}_k = \hat{H}_{k-1} + \frac{q_k q_k^T}{q_k^T s_k} - \frac{\hat{H}_{k-1}^T s_k s_k^T \hat{H}_{k-1}}{s_k^T \hat{H}_{k-1} s_k}$$
where $s_k = x_k - x_{k-1}$ and $q_k = \nabla f(x_k) - \nabla f(x_{k-1})$.

Davidon-Fletcher-Powell quasi-Newton method (approximates the inverse Hessian):
$$\hat{D}_k = \hat{D}_{k-1} + \frac{s_k s_k^T}{q_k^T s_k} - \frac{\hat{D}_{k-1} q_k q_k^T \hat{D}_{k-1}^T}{q_k^T \hat{D}_{k-1} q_k}$$
with the same $s_k$ and $q_k$, and update $x_{k+1} = x_k - \hat{D}_k\,\nabla f(x_k)$: no inverse needed!
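A minimal sketch (my illustration, not the slides' code) of the modified Newton iteration with the BFGS update above; the function names, the test problem, and the curvature-condition safeguard are assumptions:

```python
import numpy as np

def bfgs_update(H, s, q):
    """BFGS update of the Hessian approximation, as on the slide."""
    Hs = H @ s
    return H + np.outer(q, q) / (q @ s) - np.outer(Hs, Hs) / (s @ Hs)

def quasi_newton(grad, x0, tol=1e-8, max_iter=200):
    """Modified Newton iteration x_{k+1} = x_k - H_k^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                       # initial Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - np.linalg.solve(H, g)    # solve instead of inverting
        g_new = grad(x_new)
        s, q = x_new - x, g_new - g
        if q @ s > 1e-12:                    # curvature safeguard (assumption)
            H = bfgs_update(H, s, q)
        x, g = x_new, g_new
    return x

# Assumed example: f(x) = (x1 - 1)^2 + 5 (x2 - 2)^2
print(quasi_newton(lambda x: np.array([2 * (x[0] - 1), 10 * (x[1] - 2)]),
                   [0.0, 0.0]))              # -> approx. [1, 2]
```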

Nonlinear least squares problems

$$e(x) = \begin{bmatrix} e_1(x) & e_2(x) & \dots & e_N(x) \end{bmatrix}^T \quad (N \text{ components})$$
$$f(x) = \|e(x)\|_2^2 = e^T(x)\,e(x)$$
$$\nabla f(x) = 2\,\nabla e(x)\,e(x)$$
$$H(x) = 2\,\nabla e(x)\,\nabla^T e(x) + 2\sum_{i=1}^{N} \nabla^2 e_i(x)\,e_i(x)$$
with $\nabla e$ the Jacobian of $e$ and $\nabla^2 e_i$ the Hessian of $e_i$.

For $e(x) \approx 0$ the second term can be dropped: $\hat{H}(x) = 2\,\nabla e(x)\,\nabla^T e(x)$.

Gauss-Newton method:
$$x_{k+1} = x_k - \left(\nabla e(x_k)\,\nabla^T e(x_k)\right)^{-1} \nabla e(x_k)\,e(x_k)$$

Levenberg-Marquardt method:
$$x_{k+1} = x_k - \left(\tfrac{\lambda}{2} I + \nabla e(x_k)\,\nabla^T e(x_k)\right)^{-1} \nabla e(x_k)\,e(x_k)$$
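A hedged sketch of the Gauss-Newton iteration above, written with the row-wise Jacobian $J = \nabla^T e$ so the normal equations read $J^T J\,\Delta x = J^T e$; the exponential-fit example, its data, and all names are assumptions for illustration:

```python
import numpy as np

def gauss_newton(residual, jac, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton for min ||e(x)||^2, using H ~ 2 J^T J (second-order
    terms of the Hessian dropped)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        e, J = residual(x), jac(x)
        step = np.linalg.solve(J.T @ J, J.T @ e)   # normal equations
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Assumed example: fit y = a * exp(b * t) in the least-squares sense.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.7, 3.6, 4.9])
residual = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(x[1] * t),
                                 x[0] * t * np.exp(x[1] * t)])
print(gauss_newton(residual, jac, [1.0, 0.1]))     # -> approx. [2, 0.3]
```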

Direction determination & line minimization

$n$-dimensional minimization: $x^* = \arg\min_x f(x)$.

Each iteration combines two steps:
- Direction determination: choose a search direction $d_k$ at $x_k$.
- One-dimensional minimization: minimize $f(x)$ over the line $x = x_k + d_k s$, $s \in \mathbb{R}$:
$$x_{k+1} = x_k + d_k s_k \quad \text{with} \quad s_k = \arg\min_s f(x_k + d_k s)$$

Direction determination & line minimization (continued)

Figure: one-dimensional cross-section $f_k(s) = f(x_k + d_k s)$ along the line $x_k + d_k s$, with $s = 0$ at $x_k$.

Line minimization

Given an initial point $x_k$ and a search direction $d_k$, the line minimization is
$$\min_s f(x_k + d_k s) = \min_s f_k(s)$$
Methods:
- Fixed / variable step method
- Parabolic / cubic interpolation
- Golden section / Fibonacci method

Parabolic interpolation

Figure: (a) fitting a parabola $p_k$ through three function values $f_k(s_1)$, $f_k(s_2)$, $f_k(s_3)$; its minimizer yields the new point $s_4$. (b) fitting a parabola through two function values and one derivative.
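One common form of the three-point step in panel (a) is the minimizer $s_4$ of the parabola through $(s_1, f_1)$, $(s_2, f_2)$, $(s_3, f_3)$; the helper below is an illustrative sketch with assumed names, not the lecture's code:

```python
def parabolic_step(s1, f1, s2, f2, s3, f3):
    """Minimizer s4 of the parabola through (s1,f1), (s2,f2), (s3,f3),
    the standard successive-parabolic-interpolation formula."""
    num = (s2 - s1) ** 2 * (f2 - f3) - (s2 - s3) ** 2 * (f2 - f1)
    den = (s2 - s1) * (f2 - f3) - (s2 - s3) * (f2 - f1)
    return s2 - 0.5 * num / den   # caller must guard against den == 0

# f(s) = (s - 1.5)^2 is itself a parabola, so a single step is exact.
f = lambda s: (s - 1.5) ** 2
print(parabolic_step(0.0, f(0.0), 1.0, f(1.0), 3.0, f(3.0)))  # -> 1.5
```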

Golden section method

Suppose the minimum lies in $[a_l, d_l]$ and $f_k$ is unimodal on $[a_l, d_l]$. Construct two interior points:
$$b_l = \lambda a_l + (1-\lambda)\,d_l, \qquad c_l = (1-\lambda)\,a_l + \lambda d_l$$
and keep the subinterval that must contain the minimum.

Golden section method: $\lambda = \tfrac{1}{2}(\sqrt{5} - 1) \approx 0.6180$, so that one interior point carries over to the next iteration and only one new function evaluation is needed per iteration.
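A minimal golden-section sketch (assumed names and test function); note how one interior point and its function value are reused each iteration, so a single new evaluation suffices:

```python
import numpy as np

def golden_section(f, a, d, tol=1e-8):
    """Golden-section search for a unimodal f on [a, d]."""
    lam = (np.sqrt(5) - 1) / 2                # ~0.6180
    b, c = lam * a + (1 - lam) * d, (1 - lam) * a + lam * d
    fb, fc = f(b), f(c)
    while d - a > tol:
        if fb < fc:                           # minimum lies in [a, c]
            d = c
            c, fc = b, fb                     # reuse old b as new c
            b = lam * a + (1 - lam) * d
            fb = f(b)                         # one new evaluation
        else:                                 # minimum lies in [b, d]
            a = b
            b, fb = c, fc                     # reuse old c as new b
            c = (1 - lam) * a + lam * d
            fc = f(c)                         # one new evaluation
    return (a + d) / 2

print(golden_section(lambda s: (s - 2.0) ** 2, 0.0, 5.0))   # -> ~2.0
```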

Fibonacci method

Fibonacci sequence $\{\mu_k\} = 0, 1, 1, 2, 3, 5, 8, 13, 21, \dots$ with $\mu_k = \mu_{k-1} + \mu_{k-2}$, $\mu_0 = 0$, $\mu_1 = 1$.

Select $n$ such that $\tfrac{1}{\mu_n}(b_0 - a_0) \le \varepsilon$. Next use
$$\lambda_l = \frac{\mu_{n-2-l}}{\mu_{n-1-l}}$$
This also allows points to be reused from one iteration to the next. The Fibonacci method gives the optimal interval reduction for a given number of function evaluations.
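A small sketch of the selection of $n$ (illustrative only, assumed names): find the smallest $n$ with $\mu_n \ge (b_0 - a_0)/\varepsilon$, i.e. the criterion on the slide:

```python
def fibonacci_n(a0, b0, eps):
    """Smallest n with (1 / mu_n) * (b0 - a0) <= eps."""
    mu_prev, mu = 0, 1                   # mu_0 = 0, mu_1 = 1
    n = 1
    while mu < (b0 - a0) / eps:
        mu_prev, mu = mu, mu_prev + mu   # mu_k = mu_{k-1} + mu_{k-2}
        n += 1
    return n

print(fibonacci_n(0.0, 5.0, 1e-3))       # n needed for a final interval <= 1e-3
```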

Determination of the search direction

- Gradient methods and conjugate-gradient methods
- Perpendicular search methods:
  - Perpendicular method
  - Powell's perpendicular method

Gradient and conjugate-gradient methods

Steepest descent: $d_k = -\nabla f(x_k)$

Conjugate gradient methods: $d_k = -\hat{H}_k^{-1}\,\nabla f(x_k)$
- Levenberg-Marquardt direction
- Broyden-Fletcher-Goldfarb-Shanno direction
- Davidon-Fletcher-Powell direction

Fletcher-Reeves direction:
$$d_k = -\nabla f(x_k) + \mu_k d_{k-1} \quad \text{where} \quad \mu_k = \frac{\nabla^T f(x_k)\,\nabla f(x_k)}{\nabla^T f(x_{k-1})\,\nabla f(x_{k-1})}$$
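A sketch of nonlinear conjugate gradients with the Fletcher-Reeves coefficient above, using SciPy's scalar minimizer for the inner line minimization; the function names and quadratic test problem are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=200):
    """Nonlinear CG with the Fletcher-Reeves coefficient from the slide."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                      # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = minimize_scalar(lambda s: f(x + s * d)).x   # line minimization
        x = x + s * d
        g_new = grad(x)
        mu = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves coefficient
        d = -g_new + mu * d
        g = g_new
    return x

# Assumed example: f(x) = x1^2 + 10 x2^2
f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
print(fletcher_reeves(f, grad, [3.0, 1.0]))     # -> approx. [0, 0]
```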

Perpendicular search method

Perpendicular set of search directions (the coordinate axes):
$$d_0 = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \end{bmatrix}^T,\quad d_1 = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \end{bmatrix}^T,\quad \dots,\quad d_{n-1} = \begin{bmatrix} 0 & 0 & 0 & \dots & 1 \end{bmatrix}^T$$

Figure: coordinate-wise line minimizations producing the iterates $x_0, x_1, x_2, x_3, \dots$ in two dimensions.

Powell's perpendicular method

Initial point: $x_0$.

First set of search directions $S_1 = (d_0, d_1, \dots, d_{n-1})$: results in $x_1, \dots, x_n$. Then perform a search in the direction $x_n - x_0$.

New set of search directions: drop $d_0$ and add $x_n - x_0$, giving $S_2 = (d_1, d_2, \dots, d_{n-1}, x_n - x_0)$: results in $x_{n+1}, \dots, x_{2n}$. Then perform a search in the direction $x_{2n} - x_n$.

New set of search directions: drop $d_1$ and add $x_{2n} - x_n$, giving $S_3 = (d_2, d_3, \dots, d_{n-1}, x_n - x_0, x_{2n} - x_n)$, and so on. A sketch of one possible implementation follows.
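One possible sketch of Powell's scheme (assumed names; `scipy.optimize.minimize_scalar` performs each line minimization, and the cycle count and tolerance are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell(f, x0, n_cycles=20, tol=1e-10):
    """Direction-set cycle: n line searches, then replace the oldest
    direction with the overall move x_n - x_0 and search along it."""
    x = np.asarray(x0, dtype=float)
    S = list(np.eye(len(x)))                 # initial perpendicular directions
    for _ in range(n_cycles):
        x_start = x.copy()
        for d in S:                          # line search along each d_i
            s = minimize_scalar(lambda s: f(x + s * d)).x
            x = x + s * d
        new_dir = x - x_start                # overall move of this cycle
        if np.linalg.norm(new_dir) < tol:
            break
        s = minimize_scalar(lambda s: f(x + s * new_dir)).x
        x = x + s * new_dir
        S = S[1:] + [new_dir]                # drop d_0, append x_n - x_0
    return x

# Assumed example: f(x) = (x1 - 1)^2 + (x1 - x2)^2, minimum at (1, 1)
print(powell(lambda x: (x[0] - 1) ** 2 + (x[0] - x[1]) ** 2, [0.0, 0.0]))
```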

Powell's perpendicular method (continued)

Figure: iterates $x_0, x_1, x_2, x_3, x_4, \dots$ of Powell's perpendicular method in two dimensions.

Nelder-Mead method

Vertices of a simplex: $(x_0, x_1, x_2)$, with $f(x_0) > f(x_1)$ and $f(x_0) > f(x_2)$, i.e. $x_0$ is the worst vertex.

Reflection: $x_3 = x_1 + x_2 - x_0$, the point reflection of $x_0$ around the centroid $x_c = (x_1 + x_2)/2$.

Figure: simplex with C = contraction, R = reflection, E = expansion along the line through $x_0$ and $x_c$.

The Nelder-Mead method does not use the gradient. It is less efficient than the previous methods if there are more than 3-4 variables.

Figure: reflection in the Nelder-Mead algorithm, mapping the worst vertex $x_0$ of the simplex $(x_0, x_1, x_2)$ to $x_3$.
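The reflection step is a one-liner; for the full method (reflection, expansion, contraction, shrink) SciPy's implementation can be used. The vertices and test function below are assumed examples:

```python
import numpy as np
from scipy.optimize import minimize

# Reflection step from the slide: reflect the worst vertex x0 through the
# centroid x_c = (x1 + x2) / 2 of the remaining vertices.
x0 = np.array([0.0, 0.0])          # assumed worst vertex
x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
x3 = x1 + x2 - x0                  # == 2 * x_c - x0
print(x3)                          # -> [1. 1.]

# Full Nelder-Mead on an assumed Rosenbrock-type test function:
res = minimize(lambda x: (x[0] - 1) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2,
               x0=[-1.0, 1.0], method='Nelder-Mead')
print(res.x)                       # -> approx. [1, 1]
```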

Summary

Nonlinear programming without constraints: standard form $\min_x f(x)$.

Three main classes of algorithms:
1. Newton and quasi-Newton methods
2. Methods with direction determination and line minimization
3. Nelder-Mead method

Test: Classification of optimization problems I

$$\max_{x \in \mathbb{R}^3} \exp(x_1 - x_2) \quad \text{s.t.} \quad 2x_1 + 3x_2 - 5x_3 \le 1$$

This problem is/can be recast as (check the most restrictive answer):
- linear programming problem
- quadratic programming problem
- nonlinear programming problem

Test: Classification of optimization problems II

$$\max_{x \in \mathbb{R}^3} x_1 x_2 + x_2 x_3 \quad \text{s.t.} \quad x_1^2 + x_2^2 + x_3^2 \le 1$$

This problem is/can be recast as (check the most restrictive answer):
- linear programming problem
- quadratic programming problem
- nonlinear programming problem