Direct Methods. Moritz Diehl. Optimization in Engineering Center (OPTEC) and Electrical Engineering Department (ESAT), K.U. Leuven, Belgium.

Overview
- Direct Single Shooting
- Direct Collocation
- Direct Multiple Shooting
- Structure Exploitation by Condensing
- Structure Exploitation by Riccati Recursion

Simplified Optimal Control Problem in ODE

(Figure: a state trajectory x(t) over [0, T], starting from the initial value x_0, driven by controls u(t), subject to path constraints h(x, u) ≥ 0 and a terminal constraint r(x(T)) ≥ 0.)

$$
\begin{aligned}
\min_{x(\cdot),\,u(\cdot)} \quad & \int_0^T L(x(t),u(t))\,dt \;+\; E(x(T))\\
\text{subject to}\quad
& x(0)-x_0 = 0, && \text{(fixed initial value)}\\
& \dot x(t)-f(x(t),u(t)) = 0,\quad t\in[0,T], && \text{(ODE model)}\\
& h(x(t),u(t)) \ge 0,\quad t\in[0,T], && \text{(path constraints)}\\
& r(x(T)) \ge 0. && \text{(terminal constraints)}
\end{aligned}
$$

Recall: Optimal Control Family Tree

- Hamilton-Jacobi-Bellman Equation: tabulation in state space.
- Indirect Methods, Pontryagin: solve a boundary value problem.
- Direct Methods: transform into a Nonlinear Program (NLP).
  - Single Shooting: only discretized controls in the NLP (sequential).
  - Collocation: discretized controls and states in the NLP (simultaneous).
  - Multiple Shooting: controls and node start values in the NLP (simultaneous/hybrid).

Direct Methods: First discretize, then optimize

Transcribe the infinite-dimensional problem into a finite-dimensional Nonlinear Programming Problem (NLP), and solve the NLP.

Pros and cons:
+ Can use state-of-the-art methods for the NLP solution.
+ Can treat inequality constraints and multipoint constraints much more easily.
- Obtains only a suboptimal/approximate solution.

Nowadays these are the most commonly used methods, due to their easy applicability and robustness.

Direct Single Shooting [Hicks, Ray 1971; Sargent, Sullivan 1977]

Discretize the controls u(t) on a fixed grid 0 = t_0 < t_1 < ... < t_N = T, and regard the states x(t) on [0, T] as dependent variables.

(Figure: discretized controls u(t; q) with parameters q_0, ..., q_{N-1}, and the resulting state trajectory x(t; q) starting from x_0.)

Use numerical integration to obtain the state as a function x(t; q) of the finitely many control parameters q = (q_0, q_1, ..., q_{N-1}).

NLP in Direct Single Shooting

After control discretization and numerical ODE solution, we obtain the NLP

$$
\begin{aligned}
\min_{q}\quad & \int_0^T L(x(t;q),u(t;q))\,dt \;+\; E(x(T;q))\\
\text{subject to}\quad
& h(x(t_i;q),u(t_i;q)) \ge 0,\quad i=0,\ldots,N, && \text{(discretized path constraints)}\\
& r(x(T;q)) \ge 0. && \text{(terminal constraints)}
\end{aligned}
$$

Solve with a finite-dimensional optimization solver, e.g. Sequential Quadratic Programming (SQP). This is the sequential approach: the ODE is solved anew inside every optimization iteration.
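
To make the sequential approach concrete, here is a minimal single shooting sketch in Python. The toy problem (scalar dynamics ẋ = x + u, running cost x² + u², control bound u ≤ 1, and a terminal penalty in place of a terminal constraint) and the use of scipy's solve_ivp and SLSQP are illustrative assumptions, not part of the slides.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N, T, x0 = 20, 1.0, 1.0
t_grid = np.linspace(0.0, T, N + 1)

def simulate(q):
    """Integrate state and running cost for piecewise constant controls q."""
    def rhs(t, y):
        i = min(int(t / T * N), N - 1)        # index of the active control interval
        x = y[0]
        return [x + q[i], x**2 + q[i]**2]     # [ODE model f, integrand L]
    sol = solve_ivp(rhs, [0.0, T], [x0, 0.0], t_eval=t_grid, rtol=1e-8)
    return sol.y[0], sol.y[1, -1]             # states on the grid, integral of L

def objective(q):
    x, running_cost = simulate(q)
    return running_cost + 10.0 * x[-1]**2     # Lagrange term + terminal penalty E

# discretized path constraints h(x(t_i; q), u(t_i; q)) >= 0, here simply 1 - q_i >= 0
cons = [{"type": "ineq", "fun": lambda q: 1.0 - q}]

res = minimize(objective, np.zeros(N), constraints=cons, method="SLSQP")
print("optimal control parameters q:", np.round(res.x, 3))
```

The only NLP variables are the N control parameters q; the states never appear as unknowns, which is exactly the sequential character noted above.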

Solution by Standard SQP

Summarize the problem as

$$
\min_{q}\ F(q)\quad\text{s.t.}\quad H(q)\ge 0.
$$

Solve it e.g. by Sequential Quadratic Programming (SQP), starting with a guess q^0 for the controls; set k := 0.

1. Evaluate F(q^k), H(q^k) by ODE solution, and their derivatives.
2. Compute a correction Δq^k by solving the QP
$$
\min_{\Delta q}\ \nabla F(q^k)^T\Delta q+\tfrac12\,\Delta q^T A_k\,\Delta q\quad\text{s.t.}\quad H(q^k)+\nabla H(q^k)^T\Delta q\ge 0.
$$
3. Perform the step q^{k+1} = q^k + α_k Δq^k with a step length α_k determined by line search; set k := k + 1 and go to 1.
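
A bare-bones version of this loop, heavily simplified: derivatives by finite differences, a fixed Hessian approximation A_k = I, full steps α_k = 1 (no line search), and the QP subproblem itself handed to scipy's SLSQP. The function names and the small test problem are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize, approx_fprime

def sqp(F, H, q0, iters=20, eps=1e-6):
    """Toy SQP: at q^k build the QP subproblem from the slide and take a full step."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        g = approx_fprime(q, F, eps)                       # gradient of F at q^k
        Hq = np.atleast_1d(H(q))                           # constraint values, H(q) >= 0
        J = np.array([approx_fprime(q, lambda z, j=j: np.atleast_1d(H(z))[j], eps)
                      for j in range(Hq.size)])            # constraint Jacobian
        A = np.eye(q.size)                                 # Hessian approximation A_k
        # QP subproblem:  min_d  grad F^T d + 0.5 d^T A d   s.t.  H(q^k) + J d >= 0
        qp = minimize(lambda d: g @ d + 0.5 * d @ A @ d, np.zeros(q.size),
                      constraints=[{"type": "ineq", "fun": lambda d: Hq + J @ d}],
                      method="SLSQP")
        if np.linalg.norm(qp.x) < 1e-8:                    # converged: zero correction
            break
        q = q + qp.x                                       # full step, alpha_k = 1
    return q

# Illustrative usage: minimize 0.5*((q0-2)^2 + (q1-3)^2) s.t. 4 - q0 - q1 >= 0
q_star = sqp(lambda q: 0.5 * ((q[0] - 2.0)**2 + (q[1] - 3.0)**2),
             lambda q: np.array([4.0 - q[0] - q[1]]),
             q0=[0.0, 0.0])
print(q_star)    # approximately [1.5, 2.5]
```

For this quadratic test objective the identity is the exact Hessian, so the full-step iteration converges quickly; in general the choice of A_k matters, as the next slide discusses.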

Hessian in the Quadratic Subproblem

The matrix A_k in the QP

$$
\min_{\Delta q}\ \nabla F(q^k)^T\Delta q+\tfrac12\,\Delta q^T A_k\,\Delta q\quad\text{s.t.}\quad H(q^k)+\nabla H(q^k)^T\Delta q\ge 0
$$

is called the Hessian matrix. Several variants exist:

- Exact Hessian: $A_k=\nabla_q^2\mathcal{L}(q^k,\mu^k)$, with $\mu$ the constraint multipliers. Delivers fast quadratic local convergence.
- Update the Hessian using consecutive Lagrange gradients, e.g. by the BFGS formula: superlinear convergence.
- In case of a least-squares objective $F(q)=\tfrac12\|R(q)\|_2^2$ one can also use the Gauss-Newton Hessian $A_k=\left(\tfrac{\partial R}{\partial q}(q^k)\right)^T \tfrac{\partial R}{\partial q}(q^k)$ (good linear convergence).
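
As a numerical illustration (plain numpy, not from the slides), the BFGS update and the Gauss-Newton Hessian can be written in a few lines:

```python
import numpy as np

def bfgs_update(A, s, y):
    """BFGS update of the Hessian approximation A_k, using the step s = q^{k+1} - q^k
    and the difference of Lagrange gradients y; assumes the curvature condition
    y^T s > 0 (otherwise the update would be skipped or damped)."""
    As = A @ s
    return A - np.outer(As, As) / (s @ As) + np.outer(y, y) / (y @ s)

def gauss_newton_hessian(J):
    """Gauss-Newton Hessian A_k = J^T J for a least-squares objective
    F(q) = 0.5 * ||R(q)||_2^2, where J = dR/dq evaluated at q^k."""
    return J.T @ J

# tiny usage on made-up data
A_next = bfgs_update(np.eye(2), s=np.array([0.1, 0.0]), y=np.array([0.3, 0.1]))
print(A_next)
```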

Direct Single Shooting: Pros and Cons

Sequential simulation and optimization.

+ Can use state-of-the-art ODE/DAE solvers.
+ Few degrees of freedom even for large ODE/DAE systems.
+ Active set changes are easily treated.
+ Need only an initial guess for the controls q.
- Cannot use knowledge of x in the initialization (e.g. in tracking problems).
- The ODE solution x(t; q) can depend very nonlinearly on q.
- Unstable systems are difficult to treat.

Often used in engineering applications, e.g. in the packages gOPT (PSE), DYOS (Marquardt), ...

Direct Collocation (Sketch) [Tsang et al. 1975]

Discretize controls and states on a fine grid, with node values s_i ≈ x(t_i).

Replace the infinite ODE

$$
0=\dot x(t)-f(x(t),u(t)),\quad t\in[0,T],
$$

by finitely many equality constraints

$$
c_i(q_i,s_i,s_{i+1})=0,\quad i=0,\ldots,N-1,\qquad\text{e.g.}\quad
c_i(q_i,s_i,s_{i+1}) := s_{i+1}-s_i-(t_{i+1}-t_i)\,f\!\left(\frac{s_i+s_{i+1}}{2},\,q_i\right).
$$

Also approximate the integrals, e.g.

$$
\int_{t_i}^{t_{i+1}} L(x(t),u(t))\,dt \;\approx\; l_i(q_i,s_i,s_{i+1}) := L\!\left(\frac{s_i+s_{i+1}}{2},\,q_i\right)(t_{i+1}-t_i).
$$

NLP in Direct Collocation

After discretization we obtain a large scale, but sparse NLP:

$$
\begin{aligned}
\min_{s,\,q}\quad & \sum_{i=0}^{N-1} l_i(q_i,s_i,s_{i+1}) + E(s_N)\\
\text{subject to}\quad
& s_0-x_0=0, && \text{(fixed initial value)}\\
& c_i(q_i,s_i,s_{i+1})=0,\quad i=0,\ldots,N-1, && \text{(discretized ODE model)}\\
& h(s_i,q_i)\ge 0,\quad i=0,\ldots,N, && \text{(discretized path constraints)}\\
& r(s_N)\ge 0. && \text{(terminal constraints)}
\end{aligned}
$$

Solve e.g. with an SQP method for sparse problems, or with interior point methods (IPM).
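
For concreteness, a minimal collocation sketch with the midpoint discretization above. The toy problem (ẋ = x + u, x(0) = 1, L = x² + u², control bound u ≤ 1, terminal penalty in place of r(s_N) ≥ 0) and the use of scipy's general-purpose SLSQP, which ignores the sparsity, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

N, T, x0 = 20, 1.0, 1.0
dt = T / N

def split(w):
    """Decision vector w = (s_0, ..., s_N, q_0, ..., q_{N-1})."""
    return w[:N + 1], w[N + 1:]

def objective(w):
    s, q = split(w)
    xm = 0.5 * (s[:-1] + s[1:])                              # midpoint states
    # l_i(q_i, s_i, s_{i+1}) = L((s_i+s_{i+1})/2, q_i) * (t_{i+1} - t_i), with L = x^2 + u^2
    return np.sum((xm**2 + q**2) * dt) + 10.0 * s[-1]**2     # + terminal penalty E(s_N)

def dynamics_residuals(w):
    s, q = split(w)
    xm = 0.5 * (s[:-1] + s[1:])
    # c_i = s_{i+1} - s_i - dt * f((s_i+s_{i+1})/2, q_i), with f(x, u) = x + u
    return s[1:] - s[:-1] - dt * (xm + q)

cons = [{"type": "eq",   "fun": dynamics_residuals},
        {"type": "eq",   "fun": lambda w: split(w)[0][0] - x0},   # s_0 - x_0 = 0
        {"type": "ineq", "fun": lambda w: 1.0 - split(w)[1]}]     # path constraint 1 - q_i >= 0

w0 = np.concatenate([np.full(N + 1, x0), np.zeros(N)])            # states can be initialized too
res = minimize(objective, w0, constraints=cons, method="SLSQP")
s_opt, q_opt = split(res.x)
print("optimal node values:", np.round(s_opt, 3))
```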

What is a sparse NLP?

The general NLP

$$
\min_{w}\ F(w)\quad\text{s.t.}\quad G(w)=0,\ \ H(w)\ge 0,
$$

is called sparse if the Jacobians (derivative matrices)

$$
\nabla_w G^T=\frac{\partial G}{\partial w}=\left(\frac{\partial G_i}{\partial w_j}\right)_{ij}
\qquad\text{and}\qquad \nabla_w H^T
$$

contain many zero elements. In SQP or IPM methods, this makes the subproblems much cheaper to build and to solve.
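
A small standalone illustration of what "many zero elements" means: for chained constraints that couple only neighbouring variables (as the collocation conditions c_i do), a finite-difference Jacobian is almost entirely zero. The particular constraint function below is an assumption for illustration only.

```python
import numpy as np

n = 50
def G(w):
    # chained equality constraints G_i(w) = w_{i+1} - w_i - 0.1 * w_i^2  ->  bidiagonal Jacobian
    return w[1:] - w[:-1] - 0.1 * w[:-1]**2

w = np.random.rand(n)
eps = 1e-7
# finite-difference Jacobian dG/dw, shape (n-1, n)
J = np.array([(G(w + eps * e) - G(w)) / eps for e in np.eye(n)]).T
print("nonzero fraction of dG/dw:", np.mean(np.abs(J) > 1e-10))   # about 2/n
```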

Direct Collocation: Pros and Cons

Simultaneous simulation and optimization.

+ Large scale, but very sparse NLP.
+ Can use knowledge of x in the initialization.
+ Can treat unstable systems well.
+ Robust handling of path and terminal constraints.
- Adaptivity needs a new grid and changes the NLP dimensions.

Successfully used for practical optimal control, e.g. by Biegler and Wächter (IPOPT), Betts, Bock/Schulz (OCPRSQP), v. Stryk (DIRCOL), ...

Direct Multiple Shooting [Bock and Plitt, 1981]

Discretize the controls piecewise constant on a coarse grid:

$$
u(t)=q_i\quad\text{for}\ t\in[t_i,t_{i+1}].
$$

Solve the ODE on each interval [t_i, t_{i+1}] numerically, starting from an artificial initial value s_i:

$$
\dot x_i(t;s_i,q_i)=f(x_i(t;s_i,q_i),q_i),\quad t\in[t_i,t_{i+1}],\qquad x_i(t_i;s_i,q_i)=s_i.
$$

This yields trajectory pieces x_i(t; s_i, q_i). Also numerically compute the integrals

$$
l_i(s_i,q_i):=\int_{t_i}^{t_{i+1}} L(x_i(t;s_i,q_i),q_i)\,dt.
$$

Sketch of Direct Multiple Shooting

(Figure: on the grid t_0 < t_1 < ... < t_N, each trajectory piece x_i starts at the node value s_i with control q_i; in general the endpoint x_i(t_{i+1}; s_i, q_i) does not yet match the next node value s_{i+1}.)

NLP in Direct Multiple Shooting

$$
\begin{aligned}
\min_{s,\,q}\quad & \sum_{i=0}^{N-1} l_i(s_i,q_i)+E(s_N)\\
\text{subject to}\quad
& s_0-x_0=0, && \text{(initial value)}\\
& s_{i+1}-x_i(t_{i+1};s_i,q_i)=0,\quad i=0,\ldots,N-1, && \text{(continuity)}\\
& h(s_i,q_i)\ge 0,\quad i=0,\ldots,N, && \text{(discretized path constraints)}\\
& r(s_N)\ge 0. && \text{(terminal constraints)}
\end{aligned}
$$
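
A minimal multiple shooting sketch for the same toy problem as before (ẋ = x + u, x(0) = 1): each interval is integrated with scipy's solve_ivp from its artificial initial value s_i, and the NLP is handed to SLSQP. Problem data and solver choice are illustrative assumptions; a real implementation would exploit the block structure discussed next.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N, T, x0 = 5, 1.0, 1.0
dt = T / N

def integrate_interval(s_i, q_i):
    """Return x_i(t_{i+1}; s_i, q_i) and the integral l_i(s_i, q_i) of L = x^2 + u^2."""
    def rhs(t, y):
        return [y[0] + q_i, y[0]**2 + q_i**2]
    sol = solve_ivp(rhs, [0.0, dt], [s_i, 0.0], rtol=1e-8)
    return sol.y[0, -1], sol.y[1, -1]

def split(w):
    return w[:N + 1], w[N + 1:]          # node values s_0..s_N, controls q_0..q_{N-1}

def objective(w):
    s, q = split(w)
    return sum(integrate_interval(s[i], q[i])[1] for i in range(N)) + 10.0 * s[-1]**2

def continuity(w):
    s, q = split(w)
    # s_{i+1} - x_i(t_{i+1}; s_i, q_i) = 0 for i = 0, ..., N-1
    return np.array([s[i + 1] - integrate_interval(s[i], q[i])[0] for i in range(N)])

cons = [{"type": "eq",   "fun": continuity},
        {"type": "eq",   "fun": lambda w: split(w)[0][0] - x0},
        {"type": "ineq", "fun": lambda w: 1.0 - split(w)[1]}]     # path constraint 1 - q_i >= 0

w0 = np.concatenate([np.full(N + 1, x0), np.zeros(N)])            # initialize node values from knowledge of x
res = minimize(objective, w0, constraints=cons, method="SLSQP")
print("node values:", np.round(split(res.x)[0], 3))
```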

Structured NLP

Summarize all variables as w := (s_0, q_0, s_1, q_1, ..., s_N). This gives the structured NLP

$$
\min_{w}\ F(w)\quad\text{s.t.}\quad G(w)=0,\ \ H(w)\ge 0.
$$

The Jacobian ∇G(w^k)^T contains the dynamic model equations. The Jacobians and the Hessian of the NLP are block sparse, which can be exploited in the numerical solution procedure...

QP = Discrete Time Problem

The QP subproblem arising from this structured NLP is itself a linear-quadratic discrete time optimal control problem:

$$
\begin{aligned}
\min_{s,\,q}\quad &
\sum_{i=0}^{N-1}\frac12
\begin{bmatrix}1\\ s_i\\ q_i\end{bmatrix}^{T}
\begin{bmatrix}0 & q_i^{T} & s_i^{T}\\ q_i & Q_i & S_i^{T}\\ s_i & S_i & R_i\end{bmatrix}
\begin{bmatrix}1\\ s_i\\ q_i\end{bmatrix}
\;+\;\frac12
\begin{bmatrix}1\\ s_N\end{bmatrix}^{T}
\begin{bmatrix}0 & p_N^{T}\\ p_N & P_N\end{bmatrix}
\begin{bmatrix}1\\ s_N\end{bmatrix}\\[1ex]
\text{subject to}\quad
& s_0 - x_0^{\mathrm{fix}} = 0, && \text{(initial)}\\
& s_{i+1} - A_i s_i - B_i q_i - c_i = 0,\quad i=0,\ldots,N-1, && \text{(system)}\\
& C_i s_i + D_i q_i - c_i \ge 0,\quad i=0,\ldots,N-1, && \text{(path)}\\
& C_N s_N - c_N \ge 0. && \text{(terminal)}
\end{aligned}
$$

Interpretation of Continuity Conditions

In direct multiple shooting, the continuity conditions s_{i+1} = x_i(t_{i+1}; s_i, q_i) represent a discrete time dynamic system.

The linearized (reduced) continuity conditions, used in condensing to eliminate s_1, ..., s_N, represent a linear discrete time system:

$$
\Delta s_{i+1} = \big(x_i(t_{i+1};s_i,q_i)-s_{i+1}\big) + X_i\,\Delta s_i + Y_i\,\Delta q_i,\quad i=0,\ldots,N-1,
$$

with the sensitivity matrices X_i = ∂x_i/∂s_i(t_{i+1}; s_i, q_i) and Y_i = ∂x_i/∂q_i(t_{i+1}; s_i, q_i).

If the original system is linear, continuity is perfectly satisfied in all SQP iterations.

The Lagrange multipliers λ_i of the continuity conditions are approximations of the adjoint variables. They indicate the costs of continuity.
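
The sensitivity matrices X_i = ∂x_i/∂s_i and Y_i = ∂x_i/∂q_i can be approximated e.g. by finite differences on the interval integration. A tiny scalar illustration, again with the assumed toy dynamics ẋ = x + u over one shooting interval:

```python
import numpy as np
from scipy.integrate import solve_ivp

dt, eps = 0.1, 1e-6

def x_end(s_i, q_i):
    """x_i(t_{i+1}; s_i, q_i) for xdot = x + u on an interval of length dt."""
    sol = solve_ivp(lambda t, x: [x[0] + q_i], [0.0, dt], [s_i], rtol=1e-10)
    return sol.y[0, -1]

s_i, q_i = 1.0, 0.5
X_i = (x_end(s_i + eps, q_i) - x_end(s_i, q_i)) / eps   # approx exp(dt)
Y_i = (x_end(s_i, q_i + eps) - x_end(s_i, q_i)) / eps   # approx exp(dt) - 1
print(X_i, Y_i)
```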

Condensing Technique [Bock, Plitt, 1984]

As before in multiple shooting for BVPs, one can use condensing of the linearized system (continuity) equations,

$$
\begin{bmatrix}
X_0 & Y_0 & -I & & & & \\
 & & X_1 & Y_1 & -I & & \\
 & & & & \ddots & \ddots & \\
 & & & & X_{N-1} & Y_{N-1} & -I
\end{bmatrix}
\begin{bmatrix}
s_0\\ q_0\\ s_1\\ q_1\\ \vdots\\ q_{N-1}\\ s_N
\end{bmatrix}
=
\begin{bmatrix}
c_1\\ c_2\\ \vdots\\ c_N
\end{bmatrix},
$$

to eliminate s_1, ..., s_N from the QP. This results in a condensed QP in the variables s_0 and q_0, ..., q_{N-1} only.
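
A hedged sketch of the elimination step: given the blocks X_i, Y_i and the continuity residuals, the eliminated node variables follow from s_0 and q_0, ..., q_{N-1} by forward substitution. The random data below merely stands in for an actual multiple shooting linearization.

```python
import numpy as np

N, nx, nu = 5, 2, 1
rng = np.random.default_rng(0)
X = [rng.standard_normal((nx, nx)) for _ in range(N)]   # X_i = dx_i/ds_i
Y = [rng.standard_normal((nx, nu)) for _ in range(N)]   # Y_i = dx_i/dq_i
res = [rng.standard_normal(nx) for _ in range(N)]       # continuity residuals x_i(t_{i+1}) - s_{i+1}

def eliminate_states(ds0, dq):
    """Forward substitution along the linearized continuity conditions:
    ds_{i+1} = residual_i + X_i ds_i + Y_i dq_i."""
    ds = [np.asarray(ds0, dtype=float)]
    for i in range(N):
        ds.append(res[i] + X[i] @ ds[i] + Y[i] @ dq[i])
    return ds

# After condensing, only the first node variable and the controls remain as QP variables;
# here we just evaluate the eliminated states for one candidate point.
dq = [np.zeros(nu) for _ in range(N)]
print(eliminate_states(np.zeros(nx), dq)[-1])
```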

Riccati Recursion

An alternative to condensing: use a Riccati recursion within the QP solver, addressing the full, uncondensed, but block sparse QP problem.

- Same algorithm as the discrete time Riccati difference equation.
- Linear effort in the number N of shooting nodes, compared to O(N^3) for the condensed QP.
- Use an interior point method to deal with the inequalities, or Schur-complement type reduction techniques.
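
For the unconstrained special case of the QP above (no inequalities, zero gradient terms), the Riccati recursion reduces to the classical backward LQR sweep. A minimal sketch with assumed example matrices A, B, Q, R, P_N:

```python
import numpy as np

nx, nu, N = 2, 1, 20
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(nx), 0.1 * np.eye(nu)
P = np.eye(nx)                                               # terminal cost P_N

K = [None] * N
for i in reversed(range(N)):                                 # O(N) backward sweep
    K[i] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # feedback gain K_i
    P = Q + A.T @ P @ (A - B @ K[i])                         # Riccati difference equation

# forward sweep: the optimal controls follow as q_i = -K_i s_i
s = np.array([1.0, 0.0])
for i in range(N):
    q = -K[i] @ s
    s = A @ s + B @ q
print("terminal state:", s)
```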

Direct Multiple Shooting: Pros and Cons

Simultaneous simulation and optimization.

+ Uses adaptive ODE/DAE solvers.
+ The NLP nevertheless has fixed dimensions.
+ Can use knowledge of x in the initialization (here via bounds; more important in an online context).
+ Can treat unstable systems well.
+ Robust handling of path and terminal constraints.
+ Easy to parallelize.
- Not as sparse as collocation.

Used for practical optimal control, e.g. by Franke (HQP), Terwen (DaimlerChrysler), Santos and Biegler, and Bock et al. (MUSCOD-II).