Numerical Methods for PDE-Constrained Optimization


Slide 1: Numerical Methods for PDE-Constrained Optimization
Richard H. Byrd (University of Colorado at Boulder), Frank E. Curtis (Northwestern University), and Jorge Nocedal (Northwestern University)
Courant Institute of Mathematical Sciences, 2007

Slide 2: Outline
- Motivating Example: Data Assimilation in Weather Forecasting
  - Problem Formulation
  - Solution Techniques
- Inexact SQP Methods for Equality Constrained Optimization
  - Background and Basics
  - Algorithm Overview
- Numerical Results
  - Optimization Test Problems: Robustness
  - PDE-Constrained Test Problems: Practicality
- Related and Future Work
  - Inexact SQP for nonconvex optimization

Slide 3 (Problem Formulation): Outline (as on slide 2)

Slide 4 (Problem Formulation): Data assimilation in weather forecasting
Goal: an up-to-date global weather forecast for the next 7 to 10 days (Fisher, Nocedal, Trémolet, and Wright, 2007).
If the entire initial state of the atmosphere (temperatures, pressures, wind patterns, humidities) were known at a certain point in time, then an accurate forecast could be obtained by integrating the atmospheric model equations forward in time. The flow is described by the Navier-Stokes equations and further sophistications of atmospheric physics and dynamics (none of which are discussed here).

Slide 5 (Problem Formulation): In reality: partial information
- Limited amount of data (satellites, buoys, planes, ground-based sensors)
- Each observation is subject to error
- Observations are nonuniformly distributed around the globe (satellite paths, densely populated areas)
(Graphics courtesy of Yannick Trémolet)

Slide 6 (Problem Formulation): In reality: partial information (continuation of slide 5; further graphics courtesy of Yannick Trémolet)

Slide 7 (Problem Formulation): Data assimilation: defining the unknowns
This formulation is currently in operational use at the European Centre for Medium-Range Weather Forecasts (ECMWF).
- We want values for an initial state; call it x_0.
- For a given x_0, we could integrate our atmospheric models forward to forecast the state of the atmosphere at N time points:
    x_i = M(x_{i-1}), i = 1, ..., N   (x_i: state of the atmosphere at time i)
- We observe the atmosphere at these N time points:
    y_1, ..., y_N   (y_i: observed state at time i)
- Let x_b (the background state) be the values at the initial time point obtained from the previous forecast; it carries over old information.

Slide 8 (Problem Formulation): Data assimilation as an optimization problem
Define the stacked residual
    f(x_0) = (x_0 - x_b, x_1 - y_1, ..., x_N - y_N)
and choose x_0 as the initial state most likely to have given the observed data:
    min_{x_0} (1/2)‖f(x_0)‖²_{R⁻¹} = (1/2)(x_0 - x_b)^T (R_b)⁻¹ (x_0 - x_b) + (1/2) Σ_{i=1}^{N} (x_i - y_i)^T (R_i)⁻¹ (x_i - y_i)
- (1/2)‖f(x_0)‖²_{R⁻¹}: distance measure between observed and expected behavior
- R = (R_b, R_1, ..., R_N): background and observation error covariance matrices (the choice of these values is a separate, but important, issue)
In current forecasts, x_0 contains an enormous number of unknowns (on the order of 10^7).
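To make the objective concrete, here is a minimal sketch of how the strong-constraint objective could be evaluated: the model M is integrated forward from x_0 and the weighted background and observation misfits are accumulated. The toy linear model, covariances, and dimensions below are illustrative assumptions, not part of the ECMWF system.

```python
import numpy as np

def fourdvar_objective(x0, xb, ys, M, Rb_inv, R_inv):
    """(1/2)(x0-xb)^T Rb^{-1} (x0-xb) + (1/2) sum_i (xi-yi)^T Ri^{-1} (xi-yi),
    with x_i = M(x_{i-1}) obtained by integrating the model forward."""
    J = 0.5 * (x0 - xb) @ Rb_inv @ (x0 - xb)   # background term
    xi = x0
    for yi in ys:                              # observation terms, i = 1..N
        xi = M(xi)                             # forward model step
        J += 0.5 * (xi - yi) @ R_inv @ (xi - yi)
    return J

# Toy usage: a 3-variable linear "atmosphere" observed at N = 4 time points
rng = np.random.default_rng(0)
M = lambda x: 0.95 * x                         # stand-in model operator
xb = rng.standard_normal(3)
ys = [rng.standard_normal(3) for _ in range(4)]
print(fourdvar_objective(xb, xb, ys, M, np.eye(3), np.eye(3)))
```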

Slide 9 (Solution Techniques): Outline (as on slide 2)

Slide 10 (Solution Techniques): Computational issues
The data assimilation optimization problem:
    min_{x_0} (1/2)‖f(x_0)‖²_{R⁻¹} = (1/2)(x_0 - x_b)^T (R_b)⁻¹ (x_0 - x_b) + (1/2) Σ_{i=1}^{N} (x_i - y_i)^T (R_i)⁻¹ (x_i - y_i)
Difficulties include:
- the problem is very large
- the objective is nonconvex (the operators M are nonlinear)
- exact derivative information is not available
- solutions are needed in real time

Slide 11 (Solution Techniques): Current algorithm: nonlinear elimination
Given a guess of the initial state x_0:
1. Apply the operators M to compute the expected state at the N time points,
       x_i = M(x_{i-1}), i = 1, ..., N,
   and evaluate the objective (1/2)‖f(x_0)‖²_{R⁻¹}.
2. Compute a step d toward an improved solution:
   a. derive the sensitivities of x_1, ..., x_N with respect to x_0 to form J(x_0), the Jacobian of f(x_0) (note: this can only be done inexactly);
   b. solve the quadratic problem
          min_d (1/2)‖f(x_0) + J(x_0) d‖²_{R⁻¹}
A sketch of one such step appears below.
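The following sketch mimics one pass of this procedure on a toy problem: the stacked residual f(x_0) is formed by forward integration, J(x_0) is approximated column by column by finite differences (reflecting that the sensitivities are available only inexactly), and the Gauss-Newton subproblem is solved by least squares. The names and the identity weighting (R = I) are assumptions made for readability.

```python
import numpy as np

def residual(x0, xb, ys, M):
    """Stacked residual f(x0) = (x0 - xb, x1 - y1, ..., xN - yN)."""
    rs, xi = [x0 - xb], x0
    for yi in ys:
        xi = M(xi)                  # forward integration x_i = M(x_{i-1})
        rs.append(xi - yi)
    return np.concatenate(rs)

def gauss_newton_step(x0, xb, ys, M, eps=1e-6):
    """Approximate J(x0) by finite differences, then min_d ||f(x0) + J d||^2."""
    f0 = residual(x0, xb, ys, M)
    J = np.empty((f0.size, x0.size))
    for j in range(x0.size):        # one forward sweep per column: inexact J
        e = np.zeros(x0.size); e[j] = eps
        J[:, j] = (residual(x0 + e, xb, ys, M) - f0) / eps
    d, *_ = np.linalg.lstsq(J, -f0, rcond=None)
    return d

M = lambda x: 0.95 * x              # toy linear model
xb, ys = np.ones(3), [np.ones(3)] * 4
print(gauss_newton_step(np.zeros(3), xb, ys, M))   # step toward the data
```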

Slide 12 (Solution Techniques): Illustration of the solution process (figure only)

Slide 13 (Solution Techniques): Recent advances: multilevel schemes
We are interested in solving the subproblem
    min_d q(d) = (1/2)‖f(x_0) + J(x_0) d‖²_{R⁻¹}
The Conjugate Gradient or Lanczos method is applicable, but not at high resolutions. Instead:
- define a restriction operator S such that x̂_i = S x_i, i = 0, ..., N;
- solve min_{d̂} q̂(d̂), where q̂ is a lower-resolution analog of q;
- use a prolongation operator S⁺ to map the computed step into the higher-resolution space: x_0 ← x_0 + S⁺ d̂.
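As a toy illustration of the restriction/prolongation mechanics (the actual operators in a weather model are far more sophisticated), one could take S to average adjacent grid values and S⁺ to inject each coarse value back onto the fine grid; both choices here are assumptions for the sketch.

```python
import numpy as np

def restrict(x):
    """Restriction S: average adjacent pairs (fine -> coarse, n -> n/2)."""
    return 0.5 * (x[0::2] + x[1::2])

def prolong(xc):
    """Prolongation S+: repeat each coarse value (coarse -> fine, n/2 -> n)."""
    return np.repeat(xc, 2)

x0 = np.linspace(0.0, 1.0, 8)       # fine-resolution state
d_hat = -0.1 * restrict(x0)         # stand-in for the coarse subproblem solve
x0 = x0 + prolong(d_hat)            # x0 <- x0 + S+ d_hat
```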

Slide 14 (Solution Techniques): Future study: weak constraints
The previous formulation assumes that model errors can be neglected. We can lift this questionable assumption by imposing the weak constraints
    x_i ≈ M(x_{i-1}), i = 1, ..., N.
These approximate equalities can be imposed through a penalty term in the objective:
    min (1/2)(x_0 - x_b)^T (R_b)⁻¹ (x_0 - x_b) + (1/2) Σ_{i=1}^{N} (x_i - y_i)^T (R_i)⁻¹ (x_i - y_i) + (1/2) Σ_{i=1}^{N} (x_i - M(x_{i-1}))^T (Q_i)⁻¹ (x_i - M(x_{i-1})),
where the Q_i are model error covariances.
Note: the state vectors x_1, ..., x_N become unknowns, so the problem dimension can increase from that of x_0 alone to roughly (N+1) times as large!

Slide 15 (Solution Techniques): Future study: weak constraints
The weak-constraint (penalty) formulation above can be seen as a step toward constrained optimization:
    min_{x_0, ..., x_N} (1/2)(x_0 - x_b)^T (R_b)⁻¹ (x_0 - x_b) + (1/2) Σ_{i=1}^{N} (x_i - y_i)^T (R_i)⁻¹ (x_i - y_i)
    s.t. x_i = M(x_{i-1}), i = 1, ..., N
(Note: in solving the constrained formulation, the constraints are satisfied only in the limit.)

Slide 16 (Solution Techniques): Challenges in PDE-Constrained Optimization
- Optimization method: nonlinear elimination; composite-step methods; full-space methods
- DO versus OD: Discretize-Optimize versus Optimize-Discretize
- Multilevel schemes: compute steps at low resolution; optimize at low resolution
- Inexactness: efficiency; robustness
(Overview of the important optimization questions)

Slide 17 (Solution Techniques): Challenges in PDE-Constrained Optimization (same diagram as slide 16, highlighting the methods arising in the data assimilation problem)

Slide 18 (Solution Techniques): Challenges in PDE-Constrained Optimization (same diagram as slide 16, highlighting our contribution to the field...)

Slide 19 (Solution Techniques): Computational savings with inexact methods
Inexact step computations are...
- effective for unconstrained optimization (Steihaug, 1983): min_{x ∈ R^n} f(x)
- effective for systems of nonlinear equations (Dembo, Eisenstat, and Steihaug, 1982): F(x) = 0, or min_{x ∈ R^n} ‖F(x)‖²
- necessary for many PDE-constrained problems
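A minimal sketch of the inexact Newton idea for F(x) = 0, in the spirit of Dembo, Eisenstat, and Steihaug: the linear Newton system is solved only until its relative residual falls below a forcing term η. The crude steepest-descent inner solver and the toy diagonal problem are assumptions for illustration, not the methods of the cited papers.

```python
import numpy as np

def inexact_solve(J, b, eta):
    """Inner solver: steepest descent on ||J d - b||^2, stopped as soon as
    the relative residual drops below the forcing term eta."""
    d = np.zeros_like(b)
    r = b - J @ d
    while np.linalg.norm(r) > eta * np.linalg.norm(b):
        g = J.T @ r                          # negative gradient direction
        d += ((g @ g) / ((J @ g) @ (J @ g))) * g
        r = b - J @ d
    return d

def inexact_newton(F, Jac, x, eta=0.1, tol=1e-8, iters=50):
    """Outer loop: solve J(x) d = -F(x) only until ||J d + F|| <= eta ||F||."""
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x + inexact_solve(Jac(x), -Fx, eta)
    return x

F = lambda x: x + 0.1 * x**3 - 1.0           # toy componentwise system
Jac = lambda x: np.diag(1.0 + 0.3 * x**2)
print(inexact_newton(F, Jac, np.zeros(5)))   # -> roots near 0.92
```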

Slide 20 (Background and Basics): Outline (as on slide 2)

Slide 21 (Background and Basics): Equality Constrained Optimization
Minimize an objective subject to mathematical equalities:
    min_{x ∈ R^n} f(x)  s.t.  c(x) = 0
(where the constraints c(x) = 0 contain the PDE)

Slide 22 (Background and Basics): Characterizing Optimal Solutions
Defining the Lagrangian
    L(x, λ) = f(x) + λ^T c(x)
and the derivatives
    g(x) = ∇f(x)  and  A(x) = [∇c_i(x)]^T (the constraint Jacobian),
we have the first-order optimality conditions
    g(x) + A(x)^T λ = 0,   c(x) = 0.

Slide 23 (Background and Basics): Method of choice: Sequential Quadratic Programming (SQP)
At a given iterate x_k, formulate the quadratic subproblem
    min_{d ∈ R^n} f_k + g_k^T d + (1/2) d^T W_k d
    s.t. c_k + A_k d = 0
for some symmetric positive definite W_k (≈ ∇²_{xx} L_k) to compute a step, or, equivalently, solve the primal-dual system
    [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]
    [ A_k    0   ] [ δ_k ] = - [       c_k       ]
Measure progress toward a solution with the merit function
    φ_π(x) = f(x) + π ‖c(x)‖,  π > 0.
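The sketch below assembles and solves this primal-dual system directly for a small dense problem; this is the exact-step baseline that the inexact method later relaxes. The helper names (sqp_step, merit) and the dense factorization are assumptions for illustration.

```python
import numpy as np

def sqp_step(g, W, A, c, lam):
    """Solve [W A^T; A 0][d; delta] = -[g + A^T lam; c] by factorization."""
    n, m = W.shape[0], A.shape[0]
    K = np.block([[W, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g + A.T @ lam, c]))
    return sol[:n], sol[n:]          # primal step d_k, multiplier step delta_k

def merit(f_val, c_val, pi):
    """Merit function phi_pi(x) = f(x) + pi * ||c(x)||."""
    return f_val + pi * np.linalg.norm(c_val)
```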

Slide 24 (Background and Basics): SQP
for k = 1, 2, ...
    Compute step: solve the primal-dual system above for (d_k, δ_k)
    Set π_k ≥ π_{k-1}
    Line search: find α_k satisfying
        φ_{π_k}(x_k + α_k d_k) ≤ φ_{π_k}(x_k) + η α_k Dφ_{π_k}(d_k)
endfor
(It works!)
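Putting the pieces together, here is a hedged sketch of the full loop on a toy equality-constrained problem, reusing sqp_step and merit from the sketch above. W_k is fixed to the identity, and the penalty update uses the standard rule π ≥ (g^T d + ½ d^T W d)/((1 - τ)‖c‖) so that Dφ_π(d) = g^T d - π‖c‖ is negative; these simplifications are assumptions, not the authors' exact choices.

```python
import numpy as np

def sqp(fg, cA, x, lam, pi=1.0, eta=1e-4, tau=0.5, iters=50):
    """fg(x) -> (f, g); cA(x) -> (c, A). W_k = I for simplicity."""
    for _ in range(iters):
        f, g = fg(x); c, A = cA(x)
        if np.linalg.norm(g + A.T @ lam) + np.linalg.norm(c) < 1e-10:
            break
        W = np.eye(x.size)
        d, dlam = sqp_step(g, W, A, c, lam)
        nc = np.linalg.norm(c)
        if nc > 0:                   # keep the merit direction a descent one
            pi = max(pi, (g @ d + 0.5 * d @ (W @ d)) / ((1 - tau) * nc) + 1e-4)
        Dphi = g @ d - pi * nc       # directional derivative of phi_pi
        alpha = 1.0
        while (merit(fg(x + alpha * d)[0], cA(x + alpha * d)[0], pi)
               > merit(f, c, pi) + eta * alpha * Dphi):
            alpha *= 0.5             # backtracking (Armijo) line search
        x, lam = x + alpha * d, lam + alpha * dlam
    return x, lam

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1   (solution (0.5, 0.5))
fg = lambda x: (x @ x, 2.0 * x)
cA = lambda x: (np.array([x[0] + x[1] - 1.0]), np.array([[1.0, 1.0]]))
print(sqp(fg, cA, np.array([2.0, 0.0]), np.zeros(1)))
```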

Slides 25-26 (Background and Basics): SQP (the same algorithm, repeated with illustrations)

Slide 27 (Background and Basics): SQP (same algorithm), with the key properties:
- the step is good: Dφ_{π_k}(d_k) < 0
- the algorithm is well-defined: α_k > 0
- π will stabilize
- reducing φ_π drives the search toward a solution of the optimization problem

Slide 28 (Algorithm Overview): Outline (as on slide 2)

Slide 29 (Algorithm Overview): Inexactness is necessary
An SQP algorithm requires the exact solution of the system
    [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]
    [ A_k    0   ] [ δ_k ] = - [       c_k       ]
However, for many PDE-constrained problems we cannot form or factor this iteration matrix. Alternatively, we can apply an iterative solver which, at each inner iteration, yields
    [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]   [ ρ_k ]
    [ A_k    0   ] [ δ_k ] = - [       c_k       ] + [ r_k ]
with residuals (ρ_k, r_k). Iterative methods only require mechanisms for computing products with W_k and with A_k and its transpose, so the algorithm is matrix-free.
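The following sketch shows the matrix-free pattern: a SciPy LinearOperator exposes only KKT matrix-vector products (built from products with W_k, A_k, and A_k^T), MINRES is truncated after a few inner iterations, and the residuals (ρ_k, r_k) are read off afterward. The random data and the iteration cap are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)
n, m = 50, 10
B = rng.standard_normal((n, n))
W = B.T @ B + n * np.eye(n)              # stand-in SPD W_k
A = rng.standard_normal((m, n))          # stand-in constraint Jacobian A_k
g_lam = rng.standard_normal(n)           # stands for g_k + A_k^T lam_k
c = rng.standard_normal(m)               # stands for c_k

def kkt_matvec(v):
    """Product with [W A^T; A 0] using only products with W, A, and A^T."""
    d, delta = v[:n], v[n:]
    return np.concatenate([W @ d + A.T @ delta, A @ d])

K = LinearOperator((n + m, n + m), matvec=kkt_matvec)
rhs = -np.concatenate([g_lam, c])
sol, info = minres(K, rhs, maxiter=15)   # truncated inner iteration
res = kkt_matvec(sol) - rhs              # residual vector (rho_k, r_k)
rho, r = res[:n], res[n:]
print(np.linalg.norm(rho), np.linalg.norm(r))
```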

Slide 30 (Algorithm Overview): Inexact SQP
Idea: terminate the inner iterations when ‖(ρ_k, r_k)‖ is small.
for k = 1, 2, ...
    Compute step: iteratively solve the residual system above for (d_k, δ_k)
    Set π_k ≥ π_{k-1}
    Line search: φ_{π_k}(x_k + α_k d_k) ≤ φ_{π_k}(x_k) + η α_k Dφ_{π_k}(d_k)
endfor

Slides 31-32 (Algorithm Overview): Inexact SQP
For any level of inexactness:
- the step may be an ascent direction for φ_π
- the penalty parameter may tend to ∞

Slide 33 (Algorithm Overview): Inexact SQP
The difficulty: we don't know when to terminate the iterative solver, since for any level of inexactness the step may be an ascent direction for φ_π and the penalty parameter may tend to ∞.

Slide 34 (Algorithm Overview): Main idea: inexactness based on models
Modern optimization methods work with models. For the merit function φ_π(x) = f(x) + π ‖c(x)‖, define a model about x_k:
    m_π(d) = f_k + g_k^T d + (1/2) d^T W_k d + π ‖c_k + A_k d‖.
For a given d_k, we can estimate the reduction in φ_π by evaluating
    mred_π(d_k) = m_π(0) - m_π(d_k) = -g_k^T d_k - (1/2) d_k^T W_k d_k + π(‖c_k‖ - ‖r_k‖)
(recall r_k = c_k + A_k d_k).
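In code, this model reduction is directly computable from quantities the iterative solver already has on hand; a sketch (names assumed):

```python
import numpy as np

def mred(g, W, d, c, r, pi):
    """mred_pi(d) = -g^T d - (1/2) d^T W d + pi * (||c|| - ||r||),
    where r = c + A d is the constraint residual of the inexact step."""
    return (-g @ d - 0.5 * d @ (W @ d)
            + pi * (np.linalg.norm(c) - np.linalg.norm(r)))
```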

Slide 35 (Algorithm Overview): Termination Tests — we may terminate the iterative solver applied to the residual system if the computed component d_k satisfies one of the following two tests.

Slide 36 (Algorithm Overview): Termination Tests
Test 1: d_k yields a sufficiently large reduction in the model of the merit function for the most recent penalty parameter π_{k-1}; i.e.,
    mred_{π_{k-1}}(d_k) ≥ σ π_{k-1} ‖c_k‖.

Slide 37 (Algorithm Overview): Termination Tests
Test 2: d_k yields a reduction in the linear model of the constraints (‖r_k‖ < ‖c_k‖); in this case the penalty parameter is increased to some π_k > π_{k-1} such that
    mred_{π_k}(d_k) ≥ σ π_k ‖c_k‖.

Slide 38 (Algorithm Overview): An Inexact SQP Algorithm (Byrd, Curtis, and Nocedal, 2007)
Given parameters 0 < ɛ, τ, σ, η < 1 and β > 0
Initialize x_0, λ_0, and π_{-1} > 0
for k = 0, 1, 2, ..., until convergence
    Set π_k = π_{k-1}
    Iteratively solve
        [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]   [ ρ_k ]
        [ A_k    0   ] [ δ_k ] = - [       c_k       ] + [ r_k ]
    Terminate the inner iterations when
        ‖ρ_k‖ ≤ max{β ‖c_k‖, ɛ ‖g_k + A_k^T λ_k‖}  and  mred_{π_k}(d_k) ≥ σ π_k ‖c_k‖   (test 1)
    or
        ‖ρ_k‖ ≤ ɛ ‖c_k‖  and  ‖r_k‖ ≤ β ‖c_k‖ with ‖r_k‖ < ‖c_k‖, increasing π_k so that
        π_k ≥ (g_k^T d_k + (1/2) d_k^T W_k d_k) / ((1 - τ)(‖c_k‖ - ‖r_k‖))   (test 2)
    Perform line search: φ_{π_k}(x_k + α_k d_k) ≤ φ_{π_k}(x_k) + η α_k Dφ_{π_k}(d_k)
    Set (x_{k+1}, λ_{k+1}) ← (x_k, λ_k) + α_k (d_k, δ_k)
endfor
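A hedged sketch of the inner-iteration acceptance logic implied by the algorithm above; the exact inequalities follow this transcription's reconstruction, and the published algorithm (Byrd, Curtis, and Nocedal, 2007) may differ in detail. All parameter values are placeholders.

```python
import numpy as np

def accept_inner_iterate(g, W, A, lam, c, d, rho, r, pi,
                         eps=0.1, beta=10.0, sigma=0.1, tau=0.1):
    """Return (accept?, updated pi) for the current inexact step (d, delta)."""
    nc, nr = np.linalg.norm(c), np.linalg.norm(r)
    mred = lambda p: -g @ d - 0.5 * d @ (W @ d) + p * (nc - nr)
    # Test 1: dual residual small enough and sufficient model reduction
    if (np.linalg.norm(rho) <= max(beta * nc, eps * np.linalg.norm(g + A.T @ lam))
            and mred(pi) >= sigma * pi * nc):
        return True, pi
    # Test 2: genuine reduction in the linearized constraints; raise pi so the
    # model-reduction condition holds for the new penalty parameter
    if nr < nc and np.linalg.norm(rho) <= eps * nc:
        pi = max(pi, (g @ d + 0.5 * d @ (W @ d)) / ((1 - tau) * (nc - nr)) + 1e-4)
        return True, pi
    return False, pi
```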

Slide 39 (Algorithm Overview): Summary: our contribution
- SQP is the algorithm of choice for large-scale constrained optimization
- Introducing inexactness is a difficult issue, and naïve approaches can fail to ensure convergence
- We present two simple termination tests for the iterative solver, where the level of inexactness is based on models of a merit function
- The algorithm is globally convergent
- Numerical experiments are considered next...

Slide 40 (Optimization Test Problems: Robustness): Outline (as on slide 2)

Slide 41 (Optimization Test Problems: Robustness): Question: Is the algorithm robust?
44 problems from standard optimization test sets (CUTEr and COPS).
Algorithm A: inexactness based on the residual test ‖(ρ_k, r_k)‖ ≤ κ ‖(g_k + A_k^T λ_k, c_k)‖, 0 < κ < 1, run for three values of κ.
Algorithm B: inexactness based on model reductions. (Note: κ can be ≥ 1!)
    Algorithm:   Alg. A (three κ settings)   Alg. B
    % Solved:    45%    80%    86%           100%

Slide 42 (PDE-Constrained Test Problems: Practicality): Outline (as on slide 2)

Slide 43 (PDE-Constrained Test Problems: Practicality): Question: Is the algorithm practical?
Two model inverse problems in PDE-constrained optimization (Haber and Hanson, 2007) of the form
    min_{(y,z)} (1/2)‖Qz - d‖² + γ R(y - y_ref)
    s.t. A(y)z = q
where the objective = data fitting + regularization, and the constraints = the PDE.
Example applications:
- elliptic PDE: groundwater modeling, DC resistivity
- parabolic PDE: optical tomography, electromagnetic imaging

Slide 44 (PDE-Constrained Test Problems: Practicality): Question: Is the algorithm practical?
First problem (n = 8192, t = 4096): elliptic PDE (groundwater modeling, DC resistivity). Solution obtained in 9 iterations.
[Sample output table: iteration k, KKT residual, inner iterations, κ_k = ‖(ρ_k, r_k)‖ / ‖(g_k + A_k^T λ_k, c_k)‖, the termination test triggered, and penalty parameter updates (current or new π); the overall average κ_k is also reported.]

Slide 45 (PDE-Constrained Test Problems: Practicality): Question: Is the algorithm practical?
Second problem (n = 69632, t = 65536): parabolic PDE (optical tomography, electromagnetic imaging). Solution obtained in 11 iterations.
[Sample output table in the same format as on slide 44.]

Slide 46 (Inexact SQP for nonconvex optimization): Outline (as on slide 2)

Slide 47 (Inexact SQP for nonconvex optimization): Convexity assumption
The step computation procedure:
    min_{d ∈ R^n} f_k + g_k^T d + (1/2) d^T W_k d
    s.t. c_k + A_k d = 0
    [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]
    [ A_k    0   ] [ δ_k ] = - [       c_k       ]

Slide 48 (Inexact SQP for nonconvex optimization): Nonconvex optimization
If W_k is not positive definite on the null space of the constraints:
- there is no guarantee of descent
- the subproblem may be unbounded
Without a factorization of the primal-dual matrix
    [ W_k  A_k^T ]
    [ A_k    0   ]
we may not know whether the subproblem is convex or not. We could always set W_k to a simple positive definite matrix, but then we may be distorting second-order information (if available).

Slide 49 (Inexact SQP for nonconvex optimization): First step: recognizing good steps
If the subproblem is convex, then a solution to
    [ W_k  A_k^T ] [ d_k ]     [ g_k + A_k^T λ_k ]
    [ A_k    0   ] [ δ_k ] = - [       c_k       ]    (1)
is a good step. If the subproblem is nonconvex, then a solution to (1) may still be a good step.
Idea: characterize d_k based on properties of the decomposition
    d_k = u_k + v_k, where A_k u_k = 0 and v_k lies in range(A_k^T).
(Note: ‖d_k‖² = ‖u_k‖² + ‖v_k‖².)

Slide 50 (Inexact SQP for nonconvex optimization): Central claim
A step is good if either termination test for the inexact SQP algorithm is satisfied and, for some small θ > 0, we have
    (a) θ ‖u_k‖ ≤ ‖v_k‖   or   (b) d_k^T W_k d_k ≥ θ ‖u_k‖².
Notice that (a) implies the step d_k is sufficiently parallel to v_k, and (b) implies the curvature is sufficiently positive along d_k.

Slide 51 (Inexact SQP for nonconvex optimization): Estimating properties of the step
Observing that
    ‖v_k‖ ≥ ‖A_k v_k‖ / ‖A_k‖ = ‖A_k d_k‖ / ‖A_k‖,
we find
    θ ‖d_k‖ ≤ ‖A_k d_k‖ / ‖A_k‖   ⟹   θ ‖u_k‖ ≤ ‖v_k‖
and
    d_k^T W_k d_k ≥ θ (‖d_k‖² - ‖A_k d_k‖² / ‖A_k‖²)   ⟹   d_k^T W_k d_k ≥ θ ‖u_k‖².

Slide 52 (Inexact SQP for nonconvex optimization): Proposed Algorithm
Apply an iterative solver to the primal-dual system. Stop if a termination test is satisfied and either
- the step is sufficiently parallel to v_k: θ ‖d_k‖ ≤ ‖A_k d_k‖ / ‖A_k‖, or
- the curvature is sufficiently positive: d_k^T W_k d_k ≥ θ (‖d_k‖² - ‖A_k d_k‖² / ‖A_k‖²).
If θ ‖d_k‖ > ‖A_k d_k‖ / ‖A_k‖, then perturb W_k to a modified W̄_k satisfying
    θ (‖d_k‖² - ‖A_k d_k‖² / ‖A_k‖²) ≤ d_k^T W̄_k d_k
and iterate on the perturbed system. A sketch of these tests follows.
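The two stopping conditions and the fallback perturbation can be sketched as follows; adding multiples of the identity to W_k is one simple choice of perturbation, an assumption here rather than the authors' prescription.

```python
import numpy as np

def step_quality_ok(W, A, d, theta=0.1):
    """(a) theta ||d|| <= ||A d|| / ||A||  (step parallel enough to v_k), or
       (b) d^T W d >= theta (||d||^2 - ||A d||^2 / ||A||^2)  (good curvature)."""
    nd, nA = np.linalg.norm(d), np.linalg.norm(A, 2)
    nAd = np.linalg.norm(A @ d)
    if theta * nd <= nAd / nA:
        return True
    return d @ (W @ d) >= theta * (nd**2 - (nAd / nA)**2)

def perturb_hessian(W, A, d, theta=0.1, mu=1.0):
    """Add mu*I to W until the curvature condition holds (d assumed nonzero);
    in the actual method one would then re-solve the perturbed system."""
    while not step_quality_ok(W, A, d, theta):
        W = W + mu * np.eye(W.shape[0])
    return W
```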

Slide 53: Conclusion
We have:
1. described an important and challenging sample problem in PDE-constrained optimization;
2. described a globally convergent algorithm in the framework of a powerful method, sequential quadratic programming;
3. shown the algorithm to be robust and efficient for realistic applications;
4. described some ideas for extending the approach to nonconvex problems.
