Adjoint Optimization


Adjoint Optimization: On State Constraints and Second-Order Adjoint Computation
Eka Suwartadi, Norwegian University of Science and Technology

Why adjoint optimization?
An efficient way to compute gradients for optimal control.
Requires only two simulations (one forward, one adjoint) regardless of the number of decision variables n.
Much more efficient than finite differences or integrating the sensitivity equations, which require n+1 simulations.

Adjoint State Constraint

Big Picture
The adjoint optimizer adjusts the control inputs of the oil reservoir model and receives the reservoir states back.
Objective function: NPV, subject to constraints.
Control inputs: BHP, injection/production rates.
States: pressure, water saturation.

Adjoint Optimization Algorithm
1. Define the Lagrangian:
   max_{u ∈ U} J(u)   subject to   c(x, u) = 0
   L(x, u, λ) = J(x, u) + λ^T c(x, u)

Adjoint Optimization Algorithm (cont'd)
2. Impose the first-order optimality condition with respect to the states:
   ∇_x L(x, u, λ) |_{x = x(u), λ = λ(u)} = 0
   which gives the adjoint equations:
   (∂c/∂x)^T λ + (∂J/∂x)^T = 0

Adjoint Optimization Algorithm (cont'd)
3. The gradient with respect to u:
   ∇J(u) = ∇_u L(x, u, λ) |_{x = x(u), λ = λ(u)}
   ∇_u L(x, u, λ)^T = ∂J/∂u + λ^T ∂c/∂u
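
To make these three steps concrete, here is a minimal sketch in Python/NumPy for a generic discrete constraint c(x, u) = 0. The model callables (solve_forward, J_x, J_u, c_x, c_u) are placeholders standing in for a particular simulator; they are not part of the slides.

    import numpy as np

    def adjoint_gradient(u, solve_forward, J_x, J_u, c_x, c_u):
        # Gradient of J(x(u), u) via one forward and one adjoint solve.
        # J_x, J_u return dJ/dx and dJ/du (1-D arrays); c_x, c_u return Jacobians.
        x = solve_forward(u)                              # forward simulation: c(x, u) = 0
        lam = np.linalg.solve(c_x(x, u).T, -J_x(x, u))    # adjoint eqs: c_x^T lam = -J_x^T
        return J_u(x, u) + c_u(x, u).T @ lam              # dJ/du + lam^T dc/du

Whatever the model, the cost is one forward solve plus one linear (adjoint) solve, independent of the number of controls.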

Adjoint in the Presence of State Constraints
   max_{u ∈ U} J(u)   subject to   c(x, u) = 0,   g(x, u) ≤ 0
State constraints are problematic for adjoint optimization: because the states are functions of the control inputs, they make the Jacobian of the constraints difficult (and expensive) to compute.

Why is the Jacobian difficult to compute?
Objective function: J(x^1, …, x^N, u^1, …, u^N) = Σ_{i=1}^N J^i, with J : U → R.
Because J is scalar-valued, the adjoint delivers the whole gradient ∇_u J ∈ R^{n_u} in a single adjoint solve.
The state constraints g : R^{n_x} × R^{n_u} → R^{n_g} are vector-valued; their Jacobian with respect to u, through x(u), lives in R^{n_g × n_u}, so an adjoint-based evaluation needs one adjoint solve per constraint.
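
A short sketch of that cost, reusing the placeholder callables from the gradient example above: every row of the constraint Jacobian needs its own adjoint solve, so the work grows with the number of state constraints n_g.

    import numpy as np

    def constraint_jacobian(x, u, g_x, g_u, c_x, c_u):
        # Jacobian of g(x(u), u) w.r.t. u: one adjoint solve per constraint row.
        Gx, Gu = g_x(x, u), g_u(x, u)          # shapes (n_g, n_x) and (n_g, n_u)
        rows = []
        for j in range(Gx.shape[0]):
            mu_j = np.linalg.solve(c_x(x, u).T, -Gx[j])   # adjoint solve for row j
            rows.append(Gu[j] + c_u(x, u).T @ mu_j)
        return np.vstack(rows)                 # (n_g, n_u): n_g adjoint solves in total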

Mitigating the State Constraint Problem
1. KS (Kreisselmeier-Steinhauser) function: aggregate the constraints into a single one,
   KS(x, ρ) = (1/ρ) ln Σ_{j=1}^m exp(ρ g_j(x))
   (a small code sketch follows this list).
2. Smoothed penalty function, e.g. Sarma's work (2006).
3. Jacobian approximation, e.g. the TR1 and TR2 algorithms (A. Griewank, 2005).
4. Barrier function or exact penalty function.
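
A minimal NumPy sketch of the KS aggregation in item 1, assuming the g_j(x) ≤ 0 convention used above; shifting by max g_j is the usual log-sum-exp trick against overflow for large ρ and is an implementation detail, not something stated on the slides.

    import numpy as np

    def ks(g, rho=100.0):
        # Kreisselmeier-Steinhauser aggregate: KS = (1/rho) * ln(sum_j exp(rho * g_j)).
        # A smooth, conservative upper bound on max_j g_j; it tightens as rho grows.
        g = np.asarray(g, dtype=float)
        gmax = g.max()
        return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

Enforcing the single constraint KS(x, ρ) ≤ 0 then needs only one adjoint solve instead of one per original constraint.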

Barrier Function
A (logarithmic) barrier function keeps the iterates strictly feasible:
   max_u  J(u) + µ Σ_n log(−g_n(u))
The solution of the original problem is approached as µ → 0.
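
A sketch of the barrier (continuation) loop under the same g ≤ 0 convention; J and g are placeholder callables and SciPy's BFGS stands in for whatever unconstrained, adjoint-gradient solver is actually used, so this illustrates the idea rather than the exact method in the slides.

    import numpy as np
    from scipy.optimize import minimize

    def barrier_solve(J, g, u0, mu0=1.0, shrink=0.1, n_outer=5):
        # Maximize J(u) subject to g(u) <= 0 with a log barrier, driving mu -> 0.
        # Assumes the iterates stay strictly feasible (g(u) < 0), as barrier methods require.
        u, mu = np.asarray(u0, dtype=float), mu0
        for _ in range(n_outer):
            obj = lambda v, mu=mu: -(J(v) + mu * np.sum(np.log(-g(v))))
            u = minimize(obj, u, method="BFGS").x
            mu *= shrink        # tighten the barrier before the next outer iteration
        return u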

A Simple Example

A Simple Example (cont'd)
   J(v^n, s^n, u^n, y^n) = Σ_{n=1}^N J^n
subject to the pressure equation
   [ B^n(s^{n-1})   C   D ] [ v^n ]   [ F u^n ]
   [ C^T            0   0 ] [ p^n ] = [   0   ]
   [ D^T            0   0 ] [ π^n ]   [ H u^n ]
and the saturation equation, which is numerically integrated using the Newton method.
Initial state and controls:
   x^0 = (v^0, p^0, π^0, s^0)^T,   u^n = (q_inj, q_prd1, q_prd2, q_prd3, q_prd4)^T = (u_1, u_2, u_3, u_4, u_5)^T
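
As an illustration of the pressure step, here is a minimal sparse solve of the saddle-point system above; the blocks B, C, D and the control matrices F, H are assumed to be pre-assembled (they are model-specific and not given in the slides).

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def pressure_step(B, C, D, F, H, u):
        # Solve the hybrid pressure system for (v, p, pi) at one time step.
        # F maps controls into the velocity block, H into the pi block.
        nv, npp, npi = B.shape[0], C.shape[1], D.shape[1]
        A = sp.bmat([[B,   C,    D   ],
                     [C.T, None, None],
                     [D.T, None, None]], format="csc")
        rhs = np.concatenate([F @ u, np.zeros(npp), H @ u])
        sol = spla.spsolve(A, rhs)
        return sol[:nv], sol[nv:nv + npp], sol[nv + npp:]    # v, p, pi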

A Simple Example (cont'd)
The outputs are
   y^n := (u^n, s^n_prd1, s^n_prd2, s^n_prd3, s^n_prd4)^T

Objective function evolution (figure)

Constraint satisfaction (figure)

Conclusion
An efficient adjoint method that honours state constraints has been tested successfully.

Second Order Adjoint Computation

Overview
Adjoint optimization typically ends up with the BFGS/L-BFGS method, which gives a superlinear convergence rate.
Why not use a second-order adjoint / Newton method? Newton's method gives a quadratic convergence rate.
Second-order information increases the convergence rate in optimization.

A Motivating Example of the Adjoint Newton Method
Navon et al. (1997, 2007)

References on the Adjoint Newton Method
1. M. Heinkenschloss, Rice University (2008), Numerical Solution of Implicitly Constrained Optimization Problems.
2. M. Heinkenschloss, ACM (1999), An Interface Between Optimization and Application for the Numerical Solution of Optimal Control Problems.
3. K. Ito and K. Kunisch, SIAM (2008), Lagrange Multiplier Approach to Variational Problems and Applications, Chapter 5.

Optimization Formulation
U ⊂ R^{n_u} is a convex set,   Ĵ : U → R,   c : R^{n_x} × R^{n_u} → R^{n_x}
   min_{u ∈ U} Ĵ(u)   subject to   c(x, u) = 0

Hessian Computation Procedure
1. Define the Lagrangian:
   L(x, u, λ) = J(x, u) + λ^T c(x, u)

Hessian Computation Procedure (cont'd)
2. Impose the first-order optimality condition with respect to the states:
   ∇_x L(x, u, λ) |_{x = x(u), λ = λ(u)} = 0
   which gives the adjoint equations:
   (∂c/∂x)^T λ + (∂J/∂x)^T = 0

Hessian Computation Procedure (cont'd)
3. The gradient with respect to u:
   ∇Ĵ(u) = ∇_u L(x, u, λ) |_{x = x(u), λ = λ(u)}
   ∇_u L(x, u, λ)^T = ∂J/∂u + λ^T ∂c/∂u

Hessian Computation Procedure (cont'd)
4. Solve the Newton equation with the conjugate-gradient method to obtain δu (denoted v below).
5. Solve for w:   c_x(x(u), u) w = −c_u(x(u), u) v
6. Solve for p:   c_x(x(u), u)^T p = −∇_xx L(x, u, λ) w − ∇_xu L(x, u, λ) v
7. The Hessian-vector product is then
   ∇²Ĵ(u) v = c_u(x(u), u)^T p + ∇_ux L(x, u, λ) w + ∇_uu L(x, u, λ) v
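
A minimal sketch of steps 5-7 as a matrix-free Hessian-vector product; the Lagrangian second-derivative blocks (L_xx, L_xu, L_ux, L_uu) and the constraint Jacobians are placeholder callables, since the slides do not fix a particular implementation.

    import numpy as np

    def hessvec(v, x, u, lam, c_x, c_u, L_xx, L_xu, L_ux, L_uu):
        # Hessian-vector product  (grad^2 J(u)) v  via second-order adjoints (steps 5-7).
        Cx, Cu = c_x(x, u), c_u(x, u)
        w = np.linalg.solve(Cx, -Cu @ v)                                          # step 5
        p = np.linalg.solve(Cx.T, -(L_xx(x, u, lam) @ w + L_xu(x, u, lam) @ v))   # step 6
        return Cu.T @ p + L_ux(x, u, lam) @ w + L_uu(x, u, lam) @ v               # step 7

With such a routine, the Newton equation in step 4 can be solved matrix-free, e.g. by wrapping hessvec in a scipy.sparse.linalg.LinearOperator and calling the conjugate-gradient solver.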

Application to the Oil Reservoir Model
   max_u J(u^k, x^k) = Σ_{k=1}^N J^k
subject to
   [ B^k(s^{k-1})   C   D ] [ v^k ]   [ F u^k ]
   [ C^T            0   0 ] [ p^k ] = [   0   ]      (1)
   [ D^T            0   0 ] [ π^k ]   [ H u^k ]
   s^k = s^{k-1} + Δt i^k(v^k, s^k, u^k)              (2)
with initial input u^0 and state x^0 = (p^0, s^0)^T.

Application to the Oil Reservoir Model (cont'd)
Apply the Hessian-vector procedure:
1. Compute the pressure and saturation solutions using the IMPES method.
2. Compute the Lagrange multipliers by solving the adjoint equations of the augmented functional
   J̄ = Σ_{k=1}^N J^k + λ_v^{kT} (B^k v^k + C p^k + D π^k − F u^k) + λ_p^{kT} C^T v^k
       + λ_π^{kT} (D^T v^k − H u^k) + λ_s^{kT} (s^k − s^{k-1} − Δt i^k(v^k, s^k, u^k))

Application to the Oil Reservoir Model (cont'd)
Setting the derivatives of J̄ to zero gives the adjoint equations, solved backward for k = N, …, 1:
   ∂J̄/∂v^k = ∂J^k/∂v^k + λ_v^{kT} B^k + λ_p^{kT} C^T + λ_π^{kT} D^T − Δt λ_s^{kT} ∂i^k/∂v^k
   ∂J̄/∂p^k = λ_v^{kT} C
   ∂J̄/∂π^k = λ_v^{kT} D
   ∂J̄/∂s^k = ∂J^k/∂s^k + λ_v^{(k+1)T} ∂(B^{k+1} v^{k+1})/∂s^k + λ_s^{kT} (I − Δt ∂i^k/∂s^k) − λ_s^{(k+1)T}
and the gradient components
   ∂J̄/∂u^k = ∂J^k/∂u^k − λ_v^{kT} F − λ_π^{kT} H

Application to the Oil Reservoir Model (cont'd)
3. Use the conjugate-gradient method to compute δu.
4. Solve for w, the directional derivatives of the states: the linearized pressure and saturation equations, driven by F δu^k and H δu^k, give w_v^k, w_p^k, w_π^k and w_s^k for k = 0, …, N.
5. Solve for p, the directional derivatives of the Lagrange multipliers, backward in time:
   (I − Δt ∂i^k/∂s^k)^T p_s^k is obtained from a right-hand side assembled from w^k, p^{k+1}, the second derivatives of J^k, and ∂(B^{k+1} v^{k+1})/∂s^k.

Application to the Oil Reservoir Model (cont'd)
Then, for k = N, …, 1, solve
   [ B^k   C   D ] [ p_v^k ]   [ Δt (∂i^k/∂v^k)^T p_s^k ]
   [ C^T   0   0 ] [ p_p^k ] = [           0            ]
   [ D^T   0   0 ] [ p_π^k ]   [           0            ]
6. The second-order adjoint (Hessian-vector) contribution is
   ∇²_u J̄ δu = −p_v^{kT} F − p_π^{kT} H

Hessian Validation
Compare against a finite-difference Hessian.
Check the self-adjointness (symmetry) property of the Hessian.
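
A small sketch of both checks, assuming placeholder callables grad(u) for the adjoint gradient and hessvec(u, v) for the Hessian-vector product (neither signature is specified in the slides).

    import numpy as np

    def validate_hessian(grad, hessvec, u, n_trials=5, eps=1e-6, seed=0):
        # 1) compare hessvec against central finite differences of the gradient;
        # 2) check self-adjointness:  w^T (H v)  should equal  v^T (H w).
        rng = np.random.default_rng(seed)
        for _ in range(n_trials):
            v = rng.standard_normal(u.size)
            w = rng.standard_normal(u.size)
            fd = (grad(u + eps * v) - grad(u - eps * v)) / (2 * eps)
            print("finite-difference error:", np.linalg.norm(hessvec(u, v) - fd))
            print("symmetry error:         ", abs(w @ hessvec(u, v) - v @ hessvec(u, w)))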

Current Status
The theoretical derivation has been completed.
The adjoint-based Hessian computation is being implemented.
Collaboration with Stein Krogstad (SINTEF).
A paper will be presented at SPE-RCSC.

Thank you!
