Numerical Optimal Control Overview. Moritz Diehl
1 Numerical Optimal Control Overview Moritz Diehl
2 Simplified Optimal Control Problem in ODE

[Figure: initial value x_0, states x(t), controls u(t) over t \in [0, T]; path constraints h(x, u) \le 0; terminal constraint r(x(T)) \le 0.]

\min_{x(\cdot), u(\cdot)}  \int_0^T L(x(t), u(t)) \, dt + E(x(T))

subject to
  x(0) - x_0 = 0                                   (fixed initial value)
  \dot{x}(t) - f(x(t), u(t)) = 0,  t \in [0, T]    (ODE model)
  h(x(t), u(t)) \le 0,  t \in [0, T]               (path constraints)
  r(x(T)) \le 0                                    (terminal constraints)
3 More General Optimal Control Problems

Many features are left out here for simplicity of presentation:
- multiple dynamic stages
- differential algebraic equations (DAE) instead of ODE
- explicit time dependence
- constant design parameters
- multipoint constraints r(x(t_0), x(t_1), ..., x(t_end)) = 0
4 Optimal Control Family Tree

Three basic families:
- Hamilton-Jacobi-Bellman equation / dynamic programming
- Indirect methods / calculus of variations / Pontryagin
- Direct methods (control discretization)
5 Principle of Optimality

Any subarc of an optimal trajectory is also optimal.

[Figure: initial value x_0, optimal states x(t) and controls u(t) on [0, T]; intermediate value \bar{x} at time \bar{t}.]

The subarc on [\bar{t}, T] is the optimal solution for the initial value \bar{x}.
6 Dynamic Programming Cost-to-go

IDEA: Introduce the optimal cost-to-go function on [\bar{t}, T]:

J(\bar{x}, \bar{t}) := \min_{x, u} \int_{\bar{t}}^{T} L(x, u) \, dt + E(x(T))   s.t.  x(\bar{t}) = \bar{x}, ...

Introduce a grid 0 = t_0 < ... < t_N = T and use the principle of optimality on the intervals [t_k, t_{k+1}]:

J(x_k, t_k) = \min_{x, u} \int_{t_k}^{t_{k+1}} L(x, u) \, dt + J(x(t_{k+1}), t_{k+1})   s.t.  x(t_k) = x_k, ...
7 Dynamic Programming Recursion

Starting from J(x, t_N) = E(x), compute recursively backwards, for k = N-1, ..., 0:

J(x_k, t_k) := \min_{x, u} \int_{t_k}^{t_{k+1}} L(x, u) \, dt + J(x(t_{k+1}), t_{k+1})   s.t.  x(t_k) = x_k, ...

by solving short-horizon problems for all possible x_k and tabulating J in state space.
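The backward recursion above can be sketched in a few lines of Python for a scalar system. This is a hypothetical toy example (stable linear dynamics, quadratic costs, all grids chosen for illustration, not from the slides): state and control are discretized, and the cost-to-go table is updated backwards with linear interpolation between state grid points.

```python
import numpy as np

# Tabulated dynamic programming for a scalar toy system (hypothetical example):
# compute J(x_k, t_k) backwards from J(x, t_N) = E(x) on a state grid.
T, N = 3.0, 30
h = T / N
xs = np.linspace(-1.0, 1.0, 201)          # state grid
us = np.linspace(-1.0, 1.0, 41)           # control grid

def f(x, u):                              # ODE right-hand side (toy dynamics)
    return -x + u

L = lambda x, u: x**2 + u**2              # stage cost
E = lambda x: 10.0 * x**2                 # terminal cost

J = E(xs)                                 # J(., t_N)
for k in range(N - 1, -1, -1):
    Jnew = np.empty_like(J)
    for i, x in enumerate(xs):
        # one Euler step per candidate control, clipped to the grid
        xnext = np.clip(x + h * f(x, us), xs[0], xs[-1])
        # stage cost plus interpolated cost-to-go, minimized over u
        cand = h * L(x, us) + np.interp(xnext, xs, J)
        Jnew[i] = cand.min()
    J = Jnew                              # J(., t_k)
```

The tabulation over `xs` is exactly what makes the approach intractable in high dimensions: the grid size grows exponentially with the state dimension.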
11 Hamilton-Jacobi-Bellman (HJB) Equation

Dynamic programming with infinitely small timesteps leads to the Hamilton-Jacobi-Bellman (HJB) equation:

-\frac{\partial J}{\partial t}(x, t) = \min_u \left( L(x, u) + \frac{\partial J}{\partial x}(x, t) f(x, u) \right)   s.t.  h(x, u) \le 0.

Solve this partial differential equation (PDE) backwards for t \in [0, T], starting at the end of the horizon with J(x, T) = E(x).

NOTE: The optimal controls for state x at time t are obtained from

u^*(x, t) = \arg\min_u \left( L(x, u) + \frac{\partial J}{\partial x}(x, t) f(x, u) \right)   s.t.  h(x, u) \le 0.
12 Dynamic Programming / HJB

Dynamic programming applies to discrete time systems, HJB to continuous time systems.

Pros and cons:
+ Searches the whole state space, finds the global optimum.
+ Optimal feedback controls are precomputed.
+ Analytic solution to some problems possible (linear systems with quadratic cost lead to the Riccati equation). Viscosity solutions (Lions et al.) exist for quite general nonlinear problems.
- But: in general intractable, because it is a partial differential equation (PDE) in a high dimensional state space: the curse of dimensionality.

Possible remedy: approximate J, e.g. in the framework of neuro-dynamic programming [Bertsekas 1996]. Used for practical optimal control of small scale systems e.g. by Bonnans, Zidani, Lee, Back, ...
13 Indirect Methods

For simplicity, regard only the problem without inequality constraints:

[Figure: initial value x_0, states x(t), controls u(t) on [0, T], terminal cost E(x(T)).]

\min_{x(\cdot), u(\cdot)}  \int_0^T L(x(t), u(t)) \, dt + E(x(T))

subject to
  x(0) - x_0 = 0                                   (fixed initial value)
  \dot{x}(t) - f(x(t), u(t)) = 0,  t \in [0, T]    (ODE model)
14 Pontryagin's Minimum Principle

OBSERVATION: In HJB, the optimal controls

u^*(t) = \arg\min_u \left( L(x, u) + \frac{\partial J}{\partial x}(x, t) f(x, u) \right)

depend only on the derivative \frac{\partial J}{\partial x}(x, t), not on J itself!

IDEA: Introduce the adjoint variables \lambda(t) := \frac{\partial J}{\partial x}(x(t), t)^T \in \mathbb{R}^{n_x} and get the controls from Pontryagin's minimum principle:

u^*(t, x, \lambda) = \arg\min_u \underbrace{L(x, u) + \lambda^T f(x, u)}_{\text{Hamiltonian} =: H(x, u, \lambda)}

QUESTION: How to obtain \lambda(t)?
15 Adjoint Differential Equation

Differentiate the HJB equation

-\frac{\partial J}{\partial t}(x, t) = \min_u H\left(x, u, \frac{\partial J}{\partial x}(x, t)^T\right)

with respect to x and obtain the adjoint differential equation:

-\dot{\lambda}(t)^T = \frac{\partial H}{\partial x}(x(t), u^*(t, x, \lambda), \lambda(t)).

Likewise, differentiate J(x, T) = E(x) and obtain the terminal condition

\lambda(T)^T = \frac{\partial E}{\partial x}(x(T)).
16 How to Obtain an Explicit Expression for the Controls?

In the simplest case,

u^*(t) = \arg\min_u H(x(t), u, \lambda(t))

is defined by

\frac{\partial H}{\partial u}(x(t), u^*(t), \lambda(t)) = 0   (calculus of variations, Euler-Lagrange).

In the presence of path constraints, the expression for u^*(t) changes whenever the set of active constraints changes. This leads to state dependent switches. If the minimum of the Hamiltonian is locally not unique, singular arcs occur; their treatment needs higher order derivatives of H.
17 Necessary Optimality Conditions

Summarize the optimality conditions as a boundary value problem:

  x(0) = x_0                                                                  (initial value)
  \dot{x}(t) = f(x(t), u^*(t)),  t \in [0, T]                                 (ODE model)
  \dot{\lambda}(t) = -\frac{\partial H}{\partial x}(x(t), u^*(t), \lambda(t))^T,  t \in [0, T]   (adjoint equations)
  u^*(t) = \arg\min_u H(x(t), u, \lambda(t)),  t \in [0, T]                   (minimum principle)
  \lambda(T) = \frac{\partial E}{\partial x}(x(T))^T                          (adjoint final value)

Solve with so-called gradient methods, shooting methods, or collocation.
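Such a boundary value problem can be handed to a generic BVP solver. A minimal sketch, using scipy.integrate.solve_bvp on a hypothetical toy instance chosen so that the minimum principle has a closed form: minimize \int_0^T x^2 + u^2 dt subject to \dot{x} = u, x(0) = 1; then H = x^2 + u^2 + \lambda u, and \partial H / \partial u = 0 gives u^* = -\lambda / 2.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy Pontryagin BVP (hypothetical example): xdot = u, cost x^2 + u^2,
# so u* = -lambda/2, lambdadot = -dH/dx = -2x, lambda(T) = dE/dx = 0.
x0, T = 1.0, 1.0

def odes(t, y):
    x, lam = y
    u = -lam / 2.0                  # minimum principle in closed form
    return np.vstack([u,            # xdot = f(x, u*) = u*
                      -2.0 * x])    # adjoint equation

def bc(ya, yb):
    return np.array([ya[0] - x0,    # x(0) = x0
                     yb[1]])        # lambda(T) = 0 (no terminal cost)

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
# Analytic solution of this instance: x(t) = cosh(T - t) / cosh(T)
```

This instance is linear, so the solver converges even from the zero initial guess; for strongly nonlinear and unstable adjoint dynamics (the generic case, as the next slide notes), a good initial guess becomes essential.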
18 Indirect Methods: Pros and Cons

"First optimize, then discretize."

+ Boundary value problem with only 2 n_x ODEs.
+ Can treat large scale systems.
- Only necessary conditions for local optimality.
- Need an explicit expression for u^*(t); singular arcs are difficult to treat.
- The ODE is strongly nonlinear and unstable.
- Inequalities lead to an ODE with state dependent switches.

Possible remedy: use an interior point method in function space for the inequalities, e.g. Weiser and Deuflhard, Bonnans and Laurent-Varin.

Used for optimal control e.g. in satellite orbit planning at CNES ...
19 Direct Methods

"First discretize, then optimize": transcribe the infinite dimensional problem into a finite dimensional Nonlinear Programming Problem (NLP), and solve the NLP.

Pros and cons:
+ Can use state-of-the-art methods for the NLP solution.
+ Can treat inequality constraints and multipoint constraints much more easily.
- Obtains only a suboptimal/approximate solution.

Nowadays the most commonly used methods, due to their easy applicability and robustness.
20 Direct Methods Overview

We treat three direct methods:
- Direct single shooting (sequential simulation and optimization)
- Direct collocation (simultaneous simulation and optimization)
- Direct multiple shooting (simultaneous resp. hybrid)
21 Direct Single Shooting [Hicks 1971, Sargent 1978]

Discretize the controls u(t) on a fixed grid 0 = t_0 < t_1 < ... < t_N = T, and regard the states x(t) on [0, T] as dependent variables.

[Figure: states x(t; q) starting from x_0; discretized controls u(t; q) with values q_0, ..., q_{N-1}.]

Use numerical integration to obtain the state as a function x(t; q) of the finitely many control parameters q = (q_0, q_1, ..., q_{N-1}).
22 NLP in Direct Single Shooting

After control discretization and numerical ODE solution, obtain the NLP:

\min_q  \int_0^T L(x(t; q), u(t; q)) \, dt + E(x(T; q))

subject to
  h(x(t_i; q), u(t_i; q)) \le 0,  i = 0, ..., N    (discretized path constraints)
  r(x(T; q)) \le 0                                 (terminal constraints)

Solve with a finite dimensional optimization solver, e.g. Sequential Quadratic Programming (SQP).
23 Solution by Standard SQP

Summarize the problem as  \min_q F(q)  s.t.  H(q) \le 0.

Solve e.g. by Sequential Quadratic Programming (SQP), starting with a guess q_0 for the controls. For k = 0, 1, ...:

1. Evaluate F(q_k), H(q_k) by ODE solution, and their derivatives!
2. Compute a correction \Delta q_k by solution of the QP:

   \min_{\Delta q}  \nabla F(q_k)^T \Delta q + \frac{1}{2} \Delta q^T A_k \Delta q   s.t.  H(q_k) + \nabla H(q_k)^T \Delta q \le 0.

3. Perform the step q_{k+1} = q_k + \alpha_k \Delta q_k with step length \alpha_k determined by line search.
24 ODE Sensitivities

How to compute the sensitivity \frac{\partial x}{\partial q}(t; q) of a numerical ODE solution x(t; q) with respect to the controls q? Four ways:

1. External Numerical Differentiation (END)
2. Variational Differential Equations
3. Automatic Differentiation
4. Internal Numerical Differentiation (IND)
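Two of these four options can be compared in a short sketch (a hypothetical setup: the scalar dynamics \dot{x} = (1+x)x + q with one constant control q, RK4 with a fixed step). The variational differential equation \dot{s} = (1 + 2x)s + 1 with s(0) = 0 is integrated alongside the state, and the result is checked against external numerical differentiation, i.e. a central finite difference across two integrator calls.

```python
import numpy as np

def rk4(f, x0, T, n):
    """Fixed-step RK4 integrator for xdot = f(x)."""
    h, x = T / n, x0
    for _ in range(n):
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x

def sim(q, x0=0.02, T=1.0, n=100):
    """x(T; q) for xdot = (1+x)x + q."""
    return rk4(lambda x: (1 + x)*x + q, x0, T, n)

def sim_with_sens(q, x0=0.02, T=1.0, n=100):
    """Augmented system: state x and sensitivity s = dx/dq,
    with sdot = (df/dx) s + df/dq = (1 + 2x) s + 1."""
    f = lambda z: np.array([(1 + z[0])*z[0] + q,
                            (1 + 2*z[0])*z[1] + 1.0])
    return rk4(f, np.array([x0, 0.0]), T, n)

eps = 1e-6
fd = (sim(eps) - sim(-eps)) / (2*eps)   # END: differentiate the integrator output
_, var = sim_with_sens(0.0)             # variational differential equation
```

Both approximate the same derivative; END is trivial to implement but limited in accuracy by the difference quotient, which is why IND and the variational/AD routes are preferred in serious implementations.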
25 Numerical Test Problem

\min_{x(\cdot), u(\cdot)}  \int_0^3 x(t)^2 + u(t)^2 \, dt

subject to
  x(0) = x_0                               (initial value)
  \dot{x} = (1 + x) x + u,  t \in [0, 3]   (ODE model)
  -1 \le x(t) \le 1,  -1 \le u(t) \le 1,  t \in [0, 3]   (bounds)
  x(3) = 0                                 (zero terminal constraint)

Remark: the dynamics are unstable; since |u| \le 1, growth is uncontrollable whenever (1 + x) x > 1.
26 Single Shooting Optimization for x_0 = 0.05

Choose N = 30 equal control intervals. Initialize with the steady state controls u(t) \equiv 0. The initial value x_0 = 0.05 is the maximum possible, because the initial trajectory explodes otherwise.
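This single shooting setup can be sketched with off-the-shelf tools. The sketch below uses scipy.optimize's SLSQP (a generic SQP code standing in for the structured solvers discussed here) and, as a deliberate deviation, a milder hypothetical initial value x_0 = 0.02 instead of 0.05, so that the initial rollout stays tame and the plain setup converges robustly; the state bounds are dropped for brevity, keeping only the control bounds and the terminal constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Direct single shooting for the test problem (milder x0 = 0.02, no state bounds).
T, N, x0 = 3.0, 30, 0.02
h = T / N

def rollout(q):
    """Integrate state and cost over [0, T] with piecewise-constant controls q."""
    x, cost = x0, 0.0
    traj = [x]
    for i in range(N):
        for _ in range(4):                      # 4 RK4 substeps of length h/4
            f = lambda x: (1 + x) * x + q[i]
            k1 = f(x); k2 = f(x + h/8*k1); k3 = f(x + h/8*k2); k4 = f(x + h/4*k3)
            cost += h/4 * (x**2 + q[i]**2)      # rectangle rule on the stage cost
            x += h/24 * (k1 + 2*k2 + 2*k3 + k4)
        traj.append(x)
    return cost, np.array(traj)

obj = lambda q: rollout(q)[0]
term = lambda q: rollout(q)[1][-1]              # terminal constraint x(3) = 0
res = minimize(obj, np.zeros(N), method="SLSQP",
               bounds=[(-1.0, 1.0)] * N,
               constraints={"type": "eq", "fun": term})
```

Note how only the controls q are NLP variables: the states are eliminated by the rollout, which is exactly what makes the objective depend very nonlinearly on q for unstable dynamics.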
27 Single Shooting: Initialization
28 Single Shooting: First Iteration
29 Single Shooting: 2nd Iteration
30 Single Shooting: 3rd Iteration
31 Single Shooting: 4th Iteration
32 Single Shooting: 5th Iteration
33 Single Shooting: 6th Iteration
34 Single Shooting: 7th Iteration and Solution
35 Direct Single Shooting: Pros and Cons

Sequential simulation and optimization.
+ Can use state-of-the-art ODE/DAE solvers.
+ Few degrees of freedom even for large ODE/DAE systems.
+ Active set changes are easily treated.
+ Need only an initial guess for the controls q.
- Cannot use knowledge of x in the initialization (e.g. in tracking problems).
- The ODE solution x(t; q) can depend very nonlinearly on q.
- Unstable systems are difficult to treat.

Often used in engineering applications, e.g. in the packages gopt (PSE), DYOS (Marquardt), ...
36 Direct Collocation (Sketch) [Tsang 1975]

Discretize the controls and states on a fine grid, with node values s_i \approx x(t_i). Replace the infinite ODE

0 = \dot{x}(t) - f(x(t), u(t)),  t \in [0, T]

by finitely many equality constraints, e.g.

c_i(q_i, s_i, s_{i+1}) = 0,  i = 0, ..., N-1,  with
c_i(q_i, s_i, s_{i+1}) := \frac{s_{i+1} - s_i}{t_{i+1} - t_i} - f\left(\frac{s_i + s_{i+1}}{2}, q_i\right).

Approximate also the integrals, e.g.

\int_{t_i}^{t_{i+1}} L(x(t), u(t)) \, dt \approx l_i(q_i, s_i, s_{i+1}) := L\left(\frac{s_i + s_{i+1}}{2}, q_i\right) (t_{i+1} - t_i).
37 NLP in Direct Collocation

After discretization, obtain a large scale but sparse NLP:

\min_{s, q}  \sum_{i=0}^{N-1} l_i(q_i, s_i, s_{i+1}) + E(s_N)

subject to
  s_0 - x_0 = 0                                   (fixed initial value)
  c_i(q_i, s_i, s_{i+1}) = 0,  i = 0, ..., N-1    (discretized ODE model)
  h(s_i, q_i) \le 0,  i = 0, ..., N               (discretized path constraints)
  r(s_N) \le 0                                    (terminal constraints)

Solve e.g. with an SQP method for sparse problems.
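The same test problem can be transcribed this way in a few lines. The sketch below (again with the milder hypothetical x_0 = 0.02, and SLSQP as a generic stand-in for a sparsity-exploiting SQP solver) takes the node states s_0, ..., s_N and controls q_0, ..., q_{N-1} as NLP variables and imposes the midpoint collocation constraints c_i = 0 directly; the state bounds now become simple bounds on the variables.

```python
import numpy as np
from scipy.optimize import minimize

# Direct collocation (midpoint rule) for the test problem with x0 = 0.02.
T, N, x0 = 3.0, 30, 0.02
h = T / N
f = lambda x, u: (1 + x) * x + u

def split(w):
    return w[:N + 1], w[N + 1:]                 # node states s, controls q

def eqcons(w):
    s, q = split(w)
    mid = (s[:-1] + s[1:]) / 2.0
    c = (s[1:] - s[:-1]) / h - f(mid, q)        # c_i(q_i, s_i, s_{i+1}) = 0
    return np.concatenate([[s[0] - x0], c, [s[-1]]])   # initial, ODE, terminal

def obj(w):
    s, q = split(w)
    mid = (s[:-1] + s[1:]) / 2.0
    return h * np.sum(mid**2 + q**2)            # sum of l_i

res = minimize(obj, np.zeros(2 * N + 1), method="SLSQP",
               bounds=[(-1.0, 1.0)] * (2 * N + 1),
               constraints={"type": "eq", "fun": eqcons})
```

Because the states are variables, the zero initial guess is already almost feasible here, in contrast to single shooting, where an initial guess can only be supplied through the controls.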
38 What Is a Sparse NLP?

The general NLP

\min_w F(w)   s.t.   G(w) = 0,  H(w) \le 0

is called sparse if the Jacobians (derivative matrices)

\nabla_w G^T = \frac{\partial G}{\partial w} = \left( \frac{\partial G_i}{\partial w_j} \right)_{ij}   and   \nabla_w H^T

contain many zero elements. In SQP methods, this makes the QP much cheaper to build and to solve.
39 Direct Collocation: Pros and Cons

Simultaneous simulation and optimization.
+ Large scale, but very sparse NLP.
+ Can use knowledge of x in the initialization.
+ Can treat unstable systems well.
+ Robust handling of path and terminal constraints.
- Adaptivity needs a new grid, which changes the NLP dimensions.

Successfully used for practical optimal control e.g. by Biegler and Wächter (IPOPT), Betts, ...
40 Direct Multiple Shooting [Bock 1984]

Discretize the controls piecewise on a coarse grid:

u(t) = q_i  for  t \in [t_i, t_{i+1}].

Solve the ODE on each interval [t_i, t_{i+1}] numerically, starting from an artificial initial value s_i:

\dot{x}_i(t; s_i, q_i) = f(x_i(t; s_i, q_i), q_i),  t \in [t_i, t_{i+1}],
x_i(t_i; s_i, q_i) = s_i.

Obtain the trajectory pieces x_i(t; s_i, q_i). Also numerically compute the integrals

l_i(s_i, q_i) := \int_{t_i}^{t_{i+1}} L(x_i(t; s_i, q_i), q_i) \, dt.
41 Sketch of Direct Multiple Shooting

[Figure: node values s_0, ..., s_N on the grid t_0 < t_1 < ... < t_N with controls q_i; trajectory pieces x_i(t; s_i, q_i) started at each s_i, with a visible continuity gap between x_i(t_{i+1}; s_i, q_i) and s_{i+1}.]
42 NLP in Direct Multiple Shooting

\min_{s, q}  \sum_{i=0}^{N-1} l_i(s_i, q_i) + E(s_N)

subject to
  s_0 - x_0 = 0                                          (initial value)
  s_{i+1} - x_i(t_{i+1}; s_i, q_i) = 0,  i = 0, ..., N-1 (continuity)
  h(s_i, q_i) \le 0,  i = 0, ..., N                      (discretized path constraints)
  r(s_N) \le 0                                           (terminal constraints)
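A sketch of this NLP for the test problem follows the collocation transcription closely, except that each interval is now integrated by an inner ODE solver and continuity is enforced as equality constraints (again a hedged toy setup: milder hypothetical x_0 = 0.02, fixed-step RK4 in place of an adaptive solver, and SLSQP in place of a structure-exploiting SQP code).

```python
import numpy as np
from scipy.optimize import minimize

# Direct multiple shooting for the test problem with x0 = 0.02.
T, N, x0 = 3.0, 30, 0.02
h = T / N

def integrate(si, qi, substeps=4):
    """RK4 over one interval: returns x_i(t_{i+1}; s_i, q_i) and l_i(s_i, q_i)."""
    hs, x, cost = h / substeps, si, 0.0
    f = lambda x: (1 + x) * x + qi
    for _ in range(substeps):
        k1 = f(x); k2 = f(x + hs/2*k1); k3 = f(x + hs/2*k2); k4 = f(x + hs*k3)
        cost += hs * (x**2 + qi**2)
        x += hs/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x, cost

def split(w):
    return w[:N + 1], w[N + 1:]                  # node values s, controls q

def eqcons(w):
    s, q = split(w)
    gaps = [s[i + 1] - integrate(s[i], q[i])[0] for i in range(N)]  # continuity
    return np.concatenate([[s[0] - x0], gaps, [s[-1]]])

def obj(w):
    s, q = split(w)
    return sum(integrate(s[i], q[i])[1] for i in range(N))

res = minimize(obj, np.zeros(2 * N + 1), method="SLSQP",
               bounds=[(-1.0, 1.0)] * (2 * N + 1),
               constraints={"type": "eq", "fun": eqcons})
```

The intervals are independent of each other given (s_i, q_i), which is what makes the method easy to parallelize and lets each interval use its own adaptive integrator without changing the NLP dimensions.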
43 Structured NLP

Summarize all variables as w := (s_0, q_0, s_1, q_1, ..., s_N). Obtain the structured NLP

\min_w F(w)   s.t.   G(w) = 0,  H(w) \le 0.

The Jacobian \nabla G(w_k)^T contains the dynamic model equations. The Jacobians and the Hessian of the NLP are block sparse, which can be exploited in the numerical solution procedure.
44 Test Example: Initialization with u(t) \equiv 0
45 Multiple Shooting: First Iteration
46 Multiple Shooting: 2nd Iteration
47 Multiple Shooting: 3rd Iteration and Solution
48 Direct Multiple Shooting: Pros and Cons

Simultaneous simulation and optimization.
+ Uses adaptive ODE/DAE solvers,
+ but the NLP has fixed dimensions.
+ Can use knowledge of x in the initialization (here bounds; more important in an online context).
+ Can treat unstable systems well.
+ Robust handling of path and terminal constraints.
+ Easy to parallelize.
- Not as sparse as collocation.

Used for practical optimal control e.g. by Franke (ABB) ("HQP"), Terwen (Daimler); Bock et al. ("MUSCOD-II"); in the ACADO Toolkit; ...
49 Conclusions: Optimal Control Family Tree

- Hamilton-Jacobi-Bellman equation: tabulation in state space
- Indirect methods, Pontryagin: solve a boundary value problem
- Direct methods: transform into a Nonlinear Program (NLP)
  - Single shooting: only discretized controls in the NLP (sequential)
  - Collocation: discretized controls and states in the NLP (simultaneous)
  - Multiple shooting: controls and node start values in the NLP (simultaneous/hybrid)
50 Literature

- T. Binder, L. Blank, H. G. Bock, R. Bulirsch, W. Dahmen, M. Diehl, T. Kronseder, W. Marquardt, J. P. Schlöder, and O. v. Stryk: Introduction to Model Based Optimization of Chemical Processes on Moving Horizons. In Grötschel, Krumke, Rambau (eds.): Online Optimization of Large Scale Systems: State of the Art, Springer, 2001.
- John T. Betts: Practical Methods for Optimal Control Using Nonlinear Programming. SIAM, Philadelphia, 2001.
- Dimitri P. Bertsekas: Dynamic Programming and Optimal Control. Athena Scientific, Belmont, 2000 (Vol. I) & 2001 (Vol. II).
- A. E. Bryson and Y. C. Ho: Applied Optimal Control. Hemisphere/Wiley, 1975.
More informationNoncausal Optimal Tracking of Linear Switched Systems
Noncausal Optimal Tracking of Linear Switched Systems Gou Nakura Osaka University, Department of Engineering 2-1, Yamadaoka, Suita, Osaka, 565-0871, Japan nakura@watt.mech.eng.osaka-u.ac.jp Abstract. In
More informationComputational Issues in Nonlinear Dynamics and Control
Computational Issues in Nonlinear Dynamics and Control Arthur J. Krener ajkrener@ucdavis.edu Supported by AFOSR and NSF Typical Problems Numerical Computation of Invariant Manifolds Typical Problems Numerical
More informationFormula Sheet for Optimal Control
Formula Sheet for Optimal Control Division of Optimization and Systems Theory Royal Institute of Technology 144 Stockholm, Sweden 23 December 1, 29 1 Dynamic Programming 11 Discrete Dynamic Programming
More information1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx
PDE-constrained optimization and the adjoint method Andrew M. Bradley November 16, 21 PDE-constrained optimization and the adjoint method for solving these and related problems appear in a wide range of
More informationLinear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters
Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 204 Emo Todorov (UW) AMATH/CSE 579, Winter
More informationConstrained optimization: direct methods (cont.)
Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a
More informationOptimal control of nonstructured nonlinear descriptor systems
Optimal control of nonstructured nonlinear descriptor systems TU Berlin DFG Research Center Institut für Mathematik MATHEON Workshop Elgersburg 19.02.07 joint work with Peter Kunkel Overview Applications
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationGraph Coarsening Method for KKT Matrices Arising in Orthogonal Collocation Methods for Optimal Control Problems
Graph Coarsening Method for KKT Matrices Arising in Orthogonal Collocation Methods for Optimal Control Problems Begüm Şenses Anil V. Rao University of Florida Gainesville, FL 32611-6250 Timothy A. Davis
More informationCE 191: Civil and Environmental Engineering Systems Analysis. LEC 05 : Optimality Conditions
CE 191: Civil and Environmental Engineering Systems Analysis LEC : Optimality Conditions Professor Scott Moura Civil & Environmental Engineering University of California, Berkeley Fall 214 Prof. Moura
More informationNumerical Methods for Constrained Optimal Control Problems
Numerical Methods for Constrained Optimal Control Problems Hartono Hartono A Thesis submitted for the degree of Doctor of Philosophy School of Mathematics and Statistics University of Western Australia
More informationRobust Process Control by Dynamic Stochastic Programming
Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7 D-14195 Berlin-Dahlem Germany MARC C. STEINBACH Robust Process Control by Dynamic Stochastic Programming ZIB-Report 04-20 (May 2004) ROBUST
More informationEE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games
EE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games S. S. Sastry REVISED March 29th There exist two main approaches to optimal control and dynamic games: 1. via the Calculus of Variations
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More information2 A. BARCLAY, P. E. GILL AND J. B. ROSEN where c is a vector of nonlinear functions, A is a constant matrix that denes the linear constraints, and b l
SQP METHODS AND THEIR APPLICATION TO NUMERICAL OPTIMAL CONTROL ALEX BARCLAY y, PHILIP E. GILL y, AND J. BEN ROSEN z Abstract. In recent years, general-purpose sequential quadratic programming (SQP) methods
More informationAn Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization
An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns
More informationA NEW COMPUTATIONAL METHOD FOR OPTIMAL CONTROL OF A CLASS OF CONSTRAINED SYSTEMS GOVERNED BY PARTIAL DIFFERENTIAL EQUATIONS
A NEW COMPUTATIONAL METHOD FOR OPTIMAL CONTROL OF A CLASS OF CONSTRAINED SYSTEMS GOVERNED BY PARTIAL DIFFERENTIAL EQUATIONS Nicolas Petit Mark Milam Richard Murray Centre Automatique et Systèmes École
More informationOptimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations
Martino Bardi Italo Capuzzo-Dolcetta Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations Birkhauser Boston Basel Berlin Contents Preface Basic notations xi xv Chapter I. Outline
More informationGame Theory Extra Lecture 1 (BoB)
Game Theory 2014 Extra Lecture 1 (BoB) Differential games Tools from optimal control Dynamic programming Hamilton-Jacobi-Bellman-Isaacs equation Zerosum linear quadratic games and H control Baser/Olsder,
More informationHamilton-Jacobi-Bellman Equation Feb 25, 2008
Hamilton-Jacobi-Bellman Equation Feb 25, 2008 What is it? The Hamilton-Jacobi-Bellman (HJB) equation is the continuous-time analog to the discrete deterministic dynamic programming algorithm Discrete VS
More informationMethods for Computing Periodic Steady-State Jacob White
Introduction to Simulation - Lecture 15 Methods for Computing Periodic Steady-State Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Periodic Steady-state problems Application
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More informationLecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.
Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference
More informationAn Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game
An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game Marzieh Khakestari (Corresponding author) Institute For Mathematical Research, Universiti Putra Malaysia, 43400
More informationProper Orthogonal Decomposition in PDE-Constrained Optimization
Proper Orthogonal Decomposition in PDE-Constrained Optimization K. Kunisch Department of Mathematics and Computational Science University of Graz, Austria jointly with S. Volkwein Dynamic Programming Principle
More informationBlock Condensing with qpdunes
Block Condensing with qpdunes Dimitris Kouzoupis Rien Quirynen, Janick Frasch and Moritz Diehl Systems control and optimization laboratory (SYSCOP) TEMPO summer school August 5, 215 Dimitris Kouzoupis
More informationCHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME. Chapter2 p. 1/67
CHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Chapter2 p. 1/67 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Main Purpose: Introduce the maximum principle as a necessary condition to be satisfied by any optimal
More informationHomework Solution # 3
ECSE 644 Optimal Control Feb, 4 Due: Feb 17, 4 (Tuesday) Homework Solution # 3 1 (5%) Consider the discrete nonlinear control system in Homework # For the optimal control and trajectory that you have found
More informationProject Optimal Control: Optimal Control with State Variable Inequality Constraints (SVIC)
Project Optimal Control: Optimal Control with State Variable Inequality Constraints (SVIC) Mahdi Ghazaei, Meike Stemmann Automatic Control LTH Lund University Problem Formulation Find the maximum of the
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationUCLA Chemical Engineering. Process & Control Systems Engineering Laboratory
Constrained Innite-Time Nonlinear Quadratic Optimal Control V. Manousiouthakis D. Chmielewski Chemical Engineering Department UCLA 1998 AIChE Annual Meeting Outline Unconstrained Innite-Time Nonlinear
More information