Geometry optimization


Geometry optimization
Trygve Helgaker, Centre for Theoretical and Computational Chemistry, Department of Chemistry, University of Oslo, Norway
European Summer School in Quantum Chemistry (ESQC) 2011, Torre Normanna, Sicily, Italy, September 18 - October 1, 2011

Geometry optimization
Good standard methods are available for minimization:
- Fletcher: Practical Methods of Optimization (1980)
- Dennis and Schnabel: Numerical Methods for Unconstrained Optimization and Nonlinear Equations (1983)
- Gill, Murray, and Wright: Practical Optimization (1981)
Methods for saddle points are much less developed:
- less intuitive and experimental information is available for saddle points
- many methods have been considered over the years, but: "Localization of a saddle point is easy to make only in laboratories other than our own" (Havlas and Zahradník)

Overview
1 Stationary points: minima, saddle points
2 Strategies for optimizations: local and global regions
3 The local region: linear and quadratic models; Newton's method; Hessian updates and the quasi-Newton method; convergence rates and stopping criteria
4 Global strategies for minimizations: the trust-region method; the line-search method; coordinate systems; the initial Hessian; comparison of methods
5 Global strategies for saddle-point optimizations: Levenberg-Marquardt trajectories; gradient extremals; image surfaces

Multivariate smooth functions
Taylor expansion of a smooth function f about the current point x_c:
  f(x) = f_c + s^T g_c + (1/2) s^T H_c s + ...,  s = x - x_c
For a multivariate function f in x, the gradient g_c and the Hessian H_c at x_c are
  g_c = (∂f/∂x_1, ..., ∂f/∂x_n)^T,  (H_c)_ij = ∂²f/∂x_i ∂x_j
Diagonal Hessian representation:
  f(x) = f(x_c) + Σ_i φ_i σ_i + (1/2) Σ_i λ_i σ_i²
- φ_i: gradient elements in the diagonal representation
- λ_i: Hessian eigenvalues
- Hessian index: the number of negative Hessian eigenvalues

Stationary points
A smooth function f(x) has a stationary point at x* if the gradient vanishes there: g(x*) = 0 (zero slope). Stationary points include minima, maxima, and inflection points. The function f(x) has a minimum at x* (the minimizer) if, in addition, the Hessian index is zero. [Figure: 1D stationary points; quadratic surfaces with a small and a large Hessian eigenvalue.]

Strong and weak minima
At a minimum, all Hessian eigenvalues are nonnegative:
- if, in addition, all eigenvalues are positive, we have a strong minimum
- if one or more eigenvalues are zero, we have a weak minimum
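The Hessian-index classification above is easy to check numerically. The sketch below (illustrative helper names, NumPy assumed) counts negative eigenvalues to distinguish a minimum, a first-order saddle point, and a weak minimum:

```python
import numpy as np

def hessian_index(H, tol=1e-10):
    """Number of negative Hessian eigenvalues (0 = minimum, k = kth-order saddle)."""
    return int(np.sum(np.linalg.eigvalsh(H) < -tol))

H_min    = np.diag([2.0, 2.0])    # index 0, all eigenvalues positive: strong minimum
H_saddle = np.diag([-2.0, 2.0])   # index 1: first-order saddle point (transition state)
H_weak   = np.diag([0.0, 2.0])    # index 0 but one zero eigenvalue: weak minimum
```

Note that the index alone does not distinguish a strong from a weak minimum; the zero eigenvalues must be checked separately.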

Local and global minima
A minimum x* is global if f(x) ≥ f(x*) for all x; a minimum x* that is not global is said to be local. [Figure: a strong local minimum, a weak local minimum, and the global minimum.] Most practical methods do not distinguish between local and global minima.

Saddle points
A saddle point is a stationary point with one or more negative Hessian eigenvalues; a kth-order saddle point has Hessian index k. The gradient and the Hessian are both needed to characterize a stationary point.
- Potential energy surfaces: a minimum is a stable molecular conformation; a 1st-order saddle point is a transition state.
- Electronic-structure energy functions: a minimum is the ground state; a saddle point is an excited state.

Minima and saddle points
[Surface plot illustrating minima and saddle points.]

Strategies for optimization: global and local regions
Any optimization is iterative, proceeding in steps or iterations. At each step, a local model m(x) of the surface f(x) is constructed; the model must be (locally) accurate, flexible, and easy to determine. A search proceeds in two regions: the global region and the local region.
Local region:
- the local model m(x) represents f(x) accurately around the optimizer x*
- take a step to the optimizer of the model m(x)
- this region usually presents few problems
Global region:
- the local model m(x) does not represent f(x) accurately around the optimizer x*
- the model cannot locate x* but must instead guide us in the right general direction
- relatively simple for minimizations, difficult in saddle-point searches

The local region
In the local region, the local model extends to the optimizer x* of the true function. We can then proceed in a simple manner:
1 construct a local model m_c(x) of f(x) around the current point x_c, with m_c(x_c) = f(x_c) and m_c(x*) ≈ f(x*)
2 determine the stationary point x_+ of the local model: dm_c(x)/dx = 0 at x = x_+
3 if x_+ = x_c, terminate; otherwise, set x_c = x_+ and iterate again
The convergence of the optimization depends on the quality of the local model. We build the local model by expansion around the current point: the linear model and the quadratic model.

The linear model
The local linear (or affine) model arises by truncation after the first-order term:
  m_A(x) = f(x_c) + g_c^T s
The linear model is typically constructed from the exact gradient. On its own it is not very useful, since it is unbounded, contains no curvature information, and has no stationary points. It forms the basis for the steepest-descent method in combination with line search (vide infra).

The second-order model
In the second-order (SO) model, we truncate the expansion after second order:
  m_SO(x) = f(x_c) + g_c^T s + (1/2) s^T H_c s
This requires the exact gradient g_c and Hessian H_c at the current point. The SO model contains full information about the local slope and curvature. Unlike the first-order (linear) model, the SO model has a stationary point; this point may or may not be close to the true stationary point, but in the local region it is.

Newton's method
The SO model is given by m_SO(x) = f(x_c) + g_c^T s + (1/2) s^T H_c s. Differentiating the SO model and setting the result to zero, we obtain
  dm_SO(s)/ds = g_c + H_c s = 0  ⇒  s = -H_c^{-1} g_c
The new point x_+ and the current point x_c are therefore related by the Newton step:
  x_+ = x_c - H_c^{-1} g_c
The Newton step does not discriminate between minima and maxima. When iterated, we obtain Newton's method.
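The iterated Newton step can be sketched in a few lines. This is a bare, illustrative implementation (no globalization, invertible Hessian assumed; all names are made up), demonstrated on a quadratic surface, where the SO model is exact and a single step reaches x*:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, maxit=50):
    """Bare Newton iteration x+ = xc - Hc^{-1} gc (no globalization)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)      # the Newton step
    return x

# A quadratic test surface f = (x - 1)^2 + 10 (y - 2)^2 with minimizer (1, 2):
grad = lambda p: np.array([2.0 * (p[0] - 1.0), 20.0 * (p[1] - 2.0)])
hess = lambda p: np.diag([2.0, 20.0])
xstar = newton(grad, hess, [5.0, -3.0])
```

Because the step is the stationary point of the model, the same code converges equally well to maxima and saddle points; it does not discriminate between them.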

Convergence of Newton's method
The relation between the new and old points is the Newton step x_+ = x_c - H_c^{-1} g_c. Subtracting the true optimizer x*, we obtain a relation between the new and old errors:
  e_+ = e_c - H_c^{-1} g_c,  e_+ = x_+ - x*,  e_c = x_c - x*
We next expand the gradient and the inverted Hessian around the true optimizer x*:
  g_c = g* + H* e_c + O(e_c²) = H* e_c + O(e_c²)  (since g* = 0)
  H_c^{-1} = H*^{-1} + O(e_c)
Inserted in the error expression above, these expansions give
  e_+ = e_c - H_c^{-1} g_c = e_c - (H*^{-1} + O(e_c))(H* e_c + O(e_c²)) = O(e_c²)
We conclude that Newton's method converges quadratically: e_+ = O(e_c²). Close to the optimizer, the number of correct digits doubles in each iteration.

The quasi-Newton method
If the exact Hessian is unavailable or expensive, use an approximation to it. This gives the more general quadratic model
  m_Q(x) = f(x_c) + g_c^T s + (1/2) s^T B_c s,  B_c ≈ H_c
with the associated quasi-Newton step x_+ = x_c - B_c^{-1} g_c. In the quasi-Newton method, B is iteratively improved upon. At each iteration, the exact Hessian satisfies the relation
  g_+ - g_c = H_+ (x_+ - x_c) + O((x_+ - x_c)²)
By analogy, we require the new approximate Hessian to satisfy the quasi-Newton condition
  g_+ - g_c = B_+ (x_+ - x_c)
The new Hessian is updated in a simple manner from B_c, g_+ - g_c, and x_+ - x_c:
  B_+ = f(B_c, g_+ - g_c, x_+ - x_c)
Several update schemes are available.
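The digit-doubling behaviour is easy to observe on a one-dimensional example: Newton's method applied to the stationary condition g(x) = x² - 2 = 0 (i.e., minimizing f(x) = x³/3 - 2x from x = 2), where each error is roughly the square of the previous one. The example is illustrative, not from the slides:

```python
import math

# Newton iteration x+ = x - g(x)/g'(x) for g(x) = x^2 - 2, starting at x = 2;
# the minimizer of f(x) = x^3/3 - 2x is x* = sqrt(2).
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

errs = [abs(x - math.sqrt(2.0)) for x in xs]   # quadratic decay: e+ = O(e_c^2)
```

Printing `errs` shows errors of roughly 6e-1, 9e-2, 2e-3, 2e-6, 2e-12: the number of correct digits doubles per iteration, as derived above.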

Hessian updates
Apart from the quasi-Newton condition, other properties are often imposed.
Hereditary symmetry: B_c symmetric ⇒ B_+ symmetric. Powell symmetric Broyden (PSB) update:
  B_+ = B_c + (T_c s_c^T + s_c T_c^T)/(s_c^T s_c) - (T_c^T s_c)/(s_c^T s_c)² s_c s_c^T,  T_c = (g_+ - g_c) - B_c s_c
This requires only simple matrix and vector manipulations.
Hereditary positive definiteness: B_c positive definite ⇒ B_+ positive definite. Broyden-Fletcher-Goldfarb-Shanno (BFGS) update:
  B_+ = B_c + (y_c y_c^T)/(y_c^T s_c) - (B_c s_c s_c^T B_c)/(s_c^T B_c s_c),  y_c = g_+ - g_c
Many other schemes are available.

Convergence in the local region
Consider a sequence x_k that converges to x*:
  lim_{k→∞} x_k = x* (convergent sequence),  e_k = x_k - x* (error vector)
Linear, superlinear, and quadratic rates of convergence:
- lim_{k→∞} ‖e_{k+1}‖/‖e_k‖ = a, 0 < a < 1: linear convergence (steepest descent, gradient only)
- lim_{k→∞} ‖e_{k+1}‖/‖e_k‖ = 0: superlinear convergence (quasi-Newton, updated Hessian)
- lim_{k→∞} ‖e_{k+1}‖/‖e_k‖² = a: quadratic convergence (Newton, exact Hessian)
The local region presents few problems for methods based on the quadratic model, although convergence to weak minima will still be slow: such minima require a quartic model for fast convergence. As our model improves, fewer but more expensive steps are needed for convergence.
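The BFGS formula can be checked concretely. On a quadratic surface with Hessian H, the gradient difference is exactly y = H s, so one update must make B_+ satisfy the quasi-Newton condition B_+ s = y while preserving symmetry and positive definiteness. The matrices below are made up for illustration:

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS: B+ = B + y y^T/(y^T s) - (B s)(B s)^T/(s^T B s)."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])       # a made-up positive-definite "true" Hessian
B = np.eye(3)                         # initial approximate Hessian
s = np.array([0.3, -0.2, 0.5])        # step x+ - xc
y = H @ s                             # gradient difference g+ - gc (quadratic f)
B_new = bfgs_update(B, s, y)
```

Hereditary positive definiteness holds here because y^T s = s^T H s > 0; when that curvature condition fails, the BFGS update can lose positive definiteness and is usually skipped or damped.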

Stopping criteria
An optimization is terminated when one or several convergence criteria are satisfied. Typically, the following conditions are used:
- the gradient norm: ‖g_c‖ ≤ ε
- the predicted second-order change in the energy: |(1/2) g_c^T H_c^{-1} g_c| ≤ ε
- the norm of the (quasi-)Newton step: ‖H_c^{-1} g_c‖ ≤ ε
In addition, we should always check the structure of the Hessian (the Hessian index). Finally, inspect the solution and use common sense!

The global region
Optimization in the local region is fairly simple. We shall now consider the more difficult global region.
Local region:
- the local model m(x) represents f(x) accurately around the optimizer x*
- take a step to the optimizer of the model m(x)
- the same approach works for minima and saddle points
Global region:
- the local model m(x) does not represent f(x) accurately around the optimizer x*
- the model must guide us in the right general direction
- this is relatively simple in minimizations but difficult in saddle-point searches
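The three criteria combine naturally into one test; the sketch below uses a single illustrative threshold and a hypothetical helper name:

```python
import numpy as np

def converged(g, H, eps=1e-6):
    """Combine the three criteria: gradient norm, predicted second-order
    energy change, and Newton-step norm (threshold eps is illustrative)."""
    step = np.linalg.solve(H, g)                 # H^{-1} g
    de = 0.5 * (g @ step)                        # (1/2) g^T H^{-1} g
    return bool(np.linalg.norm(g) <= eps and abs(de) <= eps
                and np.linalg.norm(step) <= eps)

H = np.diag([2.0, 10.0])
tight = converged(np.array([1e-8, 1e-8]), H)     # essentially at the minimum
loose = converged(np.array([0.1, 0.0]), H)       # far from convergence
```

A small gradient alone can be misleading near a flat region, which is why the step and energy-change criteria (and the Hessian index) are checked as well.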

Strategies for minimization
Global strategies are needed when the local model represents f(x) poorly around x*. [Figure: a Newton step from x_c that leads away from the minimizer, increasing the energy.] A useful global strategy for minimization is to reduce the function f(x) (sufficiently) at each step. The method should be globally convergent: it should converge to some (possibly local) minimum from any starting point. However, we cannot ensure that the minimum found is global. There are two standard global strategies: the trust-region method and the line-search method.

The trust region
In the trust-region method, we recognize that the second-order model is good only in some region around x_c: the trust region (TR). The trust region cannot be specified in detail; we assume that it is a hypersphere
  s^T s ≤ h²,  h = trust radius
where the trust radius h is updated by a feedback mechanism. This gives us the restricted second-order (RSO) model
  m_RSO(x) = f(x_c) + g_c^T s + (1/2) s^T H_c s,  s^T s ≤ h²
At each iteration, we then minimize m_RSO(x) subject to s^T s ≤ h².

Stationary points in the trust region
The trust region may or may not contain a stationary point in its interior; however, there are always two or more stationary points on the boundary. Consider the function f(x, y) expanded at (12, 8) in s_x = x - 12 and s_y = y - 8:
  f(x, y) = 8(x - y)² + (x + y)²
          = 528 + [104, -24] [s_x, s_y]^T + (1/2) [s_x, s_y] [[18, -14], [-14, 18]] [s_x, s_y]^T
With trust radius h = 1, there are four stationary points (A, B, C, D) on the boundary. In the global region, we minimize f on the boundary and step to point A.

The level-shifted Newton step
To determine the stationary points on the boundary, we use Lagrange's method:
  L(s, µ) = m_RSO(s) - (1/2) µ (s^T s - h²)  (Lagrangian)
The stationary points are obtained by setting the gradient to zero:
  dL/ds = g_c + H_c s - µs = 0  ⇒  s(µ) = -(H_c - µI)^{-1} g_c
We obtain the level-shifted Newton step s(µ), which depends on µ; we select µ such that the step is to the boundary. Note: there are always at least two stationary points on the boundary.
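The worked example can be verified directly: evaluating f(x, y) = 8(x - y)² + (x + y)² and its derivatives at (12, 8) reproduces the constant, gradient, and Hessian of the expansion (a quick NumPy check, not part of the original slides):

```python
import numpy as np

x, y = 12.0, 8.0
f0 = 8.0 * (x - y)**2 + (x + y)**2              # value at the expansion point
g0 = np.array([16.0 * (x - y) + 2.0 * (x + y),  # df/dx
               -16.0 * (x - y) + 2.0 * (x + y)])  # df/dy
H0 = np.array([[16.0 + 2.0, -16.0 + 2.0],
               [-16.0 + 2.0, 16.0 + 2.0]])      # constant Hessian of the quadratic
```

The Hessian eigenvalues are 4 and 32, both positive, so the trouble in the example comes purely from the minimizer lying far outside the unit trust region, not from indefiniteness.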

The trust-region algorithm
1 Construct the restricted second-order model of the surface at x_c:
  m_RSO(s) = f(x_c) + g_c^T s + (1/2) s^T H_c s,  ‖s‖ ≤ h_c
2 Take the Newton step s(0) = -H_c^{-1} g_c if ‖s(0)‖ < h_c and H_c has the correct structure.
3 Otherwise, take the level-shifted Newton step to the minimum on the boundary:
  s(µ) = -(H_c - µI)^{-1} g_c,  µ < min(0, λ_1),  ‖s(µ)‖ = h_c
The Levenberg-Marquardt trajectory is the step s(µ) as a function of µ.

Trust-radius update
The trust radius h_c is updated by a feedback mechanism:
  R_c = (actual change)/(predicted change) = (f_+ - f_c)/(g_c^T s + (1/2) s^T H_c s) = 1 + O(s³)
Important safety measure: we always reject the step if the function increases, and calculate a new step with reduced radius. The method is typically implemented with the exact Hessian; an updated Hessian may not be accurate enough for an unbiased search in all directions.

The line-search method
If the Newton step must be rejected, it may still provide a direction for a line search. In the line-search method, such searches form the basis for the global optimization.
Line search: a one-dimensional search along a descent direction until an acceptable reduction in the function is obtained. A descent direction is a vector z such that z^T g_c < 0. Examples of descent directions:
- the steepest-descent step z = -g_c, since -g_c^T g_c < 0
- the Newton step z = -B_c^{-1} g_c with a positive-definite Hessian, since -g_c^T B_c^{-1} g_c < 0; the BFGS step guarantees a positive-definite Hessian
The Newton step is usually better than the steepest-descent step.

Line searches
An exact line search is expensive and unnecessary. An inexact or partial line search tries the Newton step first and, if necessary, backtracks until an acceptable step is found. Line searches are often used in connection with updated Hessians (quasi-Newton methods) and are relatively stable and efficient. However, backtracking does not make full use of the available information, since the Hessian is used only to generate the direction and not the length of the step: the coupling between direction and length is ignored.
Trust-region method:
- first step size, then direction
- handles indefinite Hessians naturally, but is less suited for updated Hessians
- guaranteed convergence: conservative
Line-search method:
- first direction, then step size
- handles indefinite Hessians poorly, but is well suited for updated Hessians
- no guarantee of convergence: risky
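The "try the full step first, then backtrack" strategy can be sketched with the standard Armijo sufficient-decrease test. The function, constants, and test surface below are illustrative choices, not taken from the slides:

```python
import numpy as np

def backtrack(f, grad, x, z, alpha=1.0, c=1e-4, shrink=0.5, maxit=40):
    """Inexact line search: try the full step (alpha = 1) first, then shrink
    alpha until f(x + a z) <= f(x) + c a g^T z (Armijo condition) holds."""
    fx, slope = f(x), grad(x) @ z        # slope < 0 for a descent direction z
    for _ in range(maxit):
        if f(x + alpha * z) <= fx + c * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha

# Steepest-descent direction in a narrow quadratic valley:
f = lambda p: (p[0] - 1.0)**2 + 100.0 * p[1]**2
grad = lambda p: np.array([2.0 * (p[0] - 1.0), 200.0 * p[1]])
x = np.array([0.0, 1.0])
z = -grad(x)                              # z^T g_c < 0, so z is a descent direction
a = backtrack(f, grad, x, z)
```

In this valley the full steepest-descent step overshoots badly, so the search backtracks to a small α; a Newton direction with a positive-definite B_c would typically accept α = 1 here, which is why it is usually the better choice.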

Coordinate systems
A judicious choice of coordinates may improve convergence by reducing quadratic couplings and higher-order (anharmonic) terms.
Cartesian coordinates:
- simple to set up and automate; universal and of uniform quality
- strong couplings and anharmonicities; contain rotations and translations
- well suited for ab initio calculations
Internal coordinates:
- primitive internal coordinates: bond lengths, bond angles, dihedral angles
- physically well motivated: small couplings and anharmonicities
- a nonredundant system is difficult to set up; the solution is to use redundant internal coordinates, with the redundancies controlled by projections

Initial Hessian
The efficiency of first-order methods depends on the quality of the initial Hessian. The exact initial Hessian gives the fewest iterations but is expensive; a more efficient scheme may be to use a less accurate but cheaper initial Hessian. A good approximate Hessian is easiest to set up in primitive internal coordinates: the diagonal harmonic model Hessian with diagonal elements
  B_pp = 0.45 ρ_ij (bond length),  0.15 ρ_ij ρ_jk (bond angle),  0.005 ρ_ij ρ_jk ρ_kl (dihedral angle)
where ρ_ij is a decaying model function of the interatomic distance r_ij relative to a reference distance R_ij:
  ρ_ij(r_ij) = exp[α_ij(R_ij² - r_ij²)]

Numerical comparisons
Total numbers of iterations and timings for 30 representative molecules (the Baker set), comparing first-order quasi-Newton (BFGS) methods with different initial Hessians (unit Cartesian, diagonal, internal diagonal, harmonic model Hessian) against the second-order Newton method with the exact Hessian, with optimizations in both Cartesian and redundant internal coordinates. [Table of iteration counts and timings corrupted in this transcription.] The best method is the BFGS quasi-Newton method in redundant internal coordinates with the initial harmonic model Hessian (hmh).

Saddle points
Saddle-point optimizations are more difficult than minimizations: less experimental and intuitive information is available, and the methods are less developed and stable. A large number of methods are in use. The local region presents few problems provided a second-order model is used: the Newton step is always to the stationary point of the second-order model, be it a minimum, a maximum, or a saddle point. All difficulties with saddle-point optimizations are in the global region; in particular, it is hard to measure progress in saddle-point optimizations.

Minima and saddle points
[Surface plot illustrating minima and saddle points.]

Levenberg-Marquardt trajectories
A simple approach is to explore other solutions to the restricted second-order problem:
  s(µ) = -(H_c - µI)^{-1} g_c
Select walks to reduce or increase the function along the various modes. Note: the trajectories depend on the expansion point. This approach has been used with some success.

Gradient extremals
Levenberg-Marquardt trajectories depend on the expansion point. Are there well-defined lines connecting the stationary points of a smooth function?
Steepest descent: follow the gradient down from the saddle point
- not locally defined (not recognizable)
- the intrinsic reaction coordinate
Gradient extremals: connect stationary points
- locally defined (recognizable) by the condition H(x) g(x) = λ(x) g(x)
- the gradient is an eigenvector of the Hessian at gradient extremals

From stationary points to gradient extremals
Consider the gradient in the diagonal representation of the Hessian. At a stationary point, all elements are zero; at a gradient extremal, all elements except one are zero:
  φ(x_sp) = (0, 0, ..., 0)^T,  φ(x_ge) = (φ(t), 0, ..., 0)^T
Gradient extremals are therefore points where we have relaxed just one of the conditions for a stationary point: 3N - 6 conditions specify a point, while 3N - 7 conditions specify a line. Only one nonzero gradient component in the eigenvector basis implies the condition
  H(x) g(x) = λ(x) g(x)
It should be possible to follow gradient extremals between stationary points.
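For a purely quadratic surface f(x) = (1/2) x^T H x, the extremal condition is easy to verify: along an eigenvector axis through the stationary point, the gradient g = Hx is itself an eigenvector of H, while at a generic point it is not. A small NumPy check with an arbitrary symmetric Hessian:

```python
import numpy as np

H = np.array([[18.0, -14.0], [-14.0, 18.0]])    # arbitrary constant Hessian
lam, V = np.linalg.eigh(H)

x_on = 0.7 * V[:, 0]           # point on the lowest-eigenvector axis through x* = 0
g_on = H @ x_on                # gradient of f(x) = (1/2) x^T H x
resid = np.linalg.norm(H @ g_on - lam[0] * g_on)   # H g = lambda g on the extremal

x_off = np.array([1.0, 0.3])   # generic point: not on a gradient extremal
g_off = H @ x_off
cross = (H @ g_off)[0] * g_off[1] - (H @ g_off)[1] * g_off[0]  # H g not parallel to g
```

For a quadratic surface the gradient extremals are exactly the eigenvector axes; on an anharmonic surface they curve, which is the difficulty noted on the next slide.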

Gradient extremals as optimum ascent paths
A gradient extremal corresponds to an optimum ascent path: optimize the gradient norm on the contour line f(x) = k,
  ∇[g^T g - 2µ(f(x) - k)] = 0  ⇒  H(x) g(x) = µ(x) g(x)
Some properties: gradient extremals are locally defined and intersect at stationary points; however, they are not necessarily tangent to the gradient, curve a lot, and are difficult to follow.

Image functions
Imagine a function f̄(x) with the following properties:
  f(x) minimum ↔ f̄(x) saddle point
  f(x) saddle point ↔ f̄(x) minimum
The function f̄(x) is said to be the image function of f(x). We may locate a saddle point of f(x) by minimizing f̄(x)! In general, we cannot construct an image function (it may not even exist), but we know its gradient and Hessian, and this is sufficient for second-order optimizations.

Trust-region image minimization
The gradients and Hessians of a function f and its image f̄ are related as (in the Hessian eigenbasis, for a first-order saddle point with mode 1 inverted)
  φ(x) = (φ_1, φ_2)^T,  λ(x) = (λ_1, λ_2)
  φ̄(x) = (-φ_1, φ_2)^T,  λ̄(x) = (-λ_1, λ_2)
To minimize f̄(x), we must use the trust-region method; a line search is impossible. The level-shifted Newton step for the image function is now given by
  s̄(µ) = -(H̄_c - µI)^{-1} ḡ_c = -φ_1/(λ_1 + µ) v_1 - φ_2/(λ_2 - µ) v_2
that is, a simple sign change in the level-shift parameter µ for one mode: the level-shifted Newton method maximizes this mode and minimizes all others. Trust-region image minimization is typically applied to the lowest Hessian mode: very robust but not selective.
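The mode inversion can be demonstrated on a toy surface f(x, y) = x⁴/4 - x²/2 + y²/2, which has a first-order saddle point at the origin. The sketch below (illustrative names; the plain µ = 0 step, omitting the level shift and trust radius of the full scheme) flips the sign of the lowest eigenvalue and its gradient component and converges to the saddle point:

```python
import numpy as np

def image_newton_step(g, H):
    """Newton step on the image function: invert the lowest Hessian eigenvalue
    and the corresponding gradient component (mu = 0 sketch, no level shift)."""
    lam, V = np.linalg.eigh(H)
    phi = V.T @ g                        # gradient in the Hessian eigenbasis
    lam_bar = lam.copy(); lam_bar[0] = -lam[0]
    phi_bar = phi.copy(); phi_bar[0] = -phi[0]
    return -V @ (phi_bar / lam_bar)

# Toy surface with a 1st-order saddle at the origin: f = x^4/4 - x^2/2 + y^2/2
p = np.array([0.3, 0.4])
for _ in range(12):
    g = np.array([p[0]**3 - p[0], p[1]])
    H = np.diag([3.0 * p[0]**2 - 1.0, 1.0])
    p = p + image_newton_step(g, H)
```

In this region the Hessian already has index 1, so the image step coincides with the plain Newton step (the two sign flips cancel); the inversion matters in the global region, where it is combined with the level shift to force a maximization along the chosen mode only.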