Numerical Optimal Control Part 2: Discretization techniques, structure exploitation, calculation of gradients

1 Numerical Optimal Control Part 2: Discretization techniques, structure exploitation, calculation of gradients. SADCO Summer School and Workshop on Optimal and Model Predictive Control OMPC 2013, Bayreuth. Institute of Mathematics and Applied Computing, Department of Aerospace Engineering, Universität der Bundeswehr München (UniBw M), matthias.gerdts@unibw.de. Fotos (München): Magnus Manske (Panorama), Luidger (Theatinerkirche), Kurmis (Chin. Turm), Arad Mojtahedi (Olympiapark), Max-k (Deutsches Museum), Oliver Raupach (Friedensengel), Andreas Praefcke (Nationaltheater)

2 Schedule and Contents
Time / Topic
9:00-10:30 Introduction, overview, examples, indirect method
10:30-11:00 Coffee break
11:00-12:30 Discretization techniques, structure exploitation, calculation of gradients, extensions: sensitivity analysis, mixed-integer optimal control
12:30-14:00 Lunch break
14:00-15:30 Function space methods: gradient and Newton type methods
15:30-16:00 Coffee break
16:00-17:30 Numerical experiments

3 Indirect, Direct, and Function Space Methods
Optimal Control Problem
Indirect method: based on necessary optimality conditions (minimum principle); leads to a boundary value problem (BVP); the BVP needs to be solved numerically by, e.g., multiple shooting methods.
Direct discretization method: based on discretization, e.g. collocation or direct shooting; leads to a finite dimensional optimization problem (NLP); the NLP needs to be solved numerically by, e.g., SQP.
Function space methods: work in Banach spaces; generalizations of gradient method, Lagrange-Newton method, SQP method, ...; discretization only at subproblem level.

4 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

5 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

6 Optimal Control Problem
Optimal Control Problem (OCP): Minimize ϕ(x(t_0), x(t_f)) subject to the differential equation (t_0, t_f fixed)
  x'(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
the control and state constraints
  c(t, x(t), u(t)) ≤ 0, t_0 ≤ t ≤ t_f,
and the boundary conditions
  ψ(x(t_0), x(t_f)) = 0.

7 Direct Discretization Idea
infinite dimensional optimal control problem OCP with state x and control u
  --(discretization x_h, u_h)-->
finite dimensional optimization problem (NLP): Minimize J(x_h, u_h) subject to G(x_h, u_h) ≤ 0, H(x_h, u_h) = 0
Remarks: the NLP can be large-scale and sparse or small and dense, depending on the type of discretization; many different approaches for discretization exist: full discretization, collocation, pseudospectral methods, direct multiple shooting methods, ...

8 Direct Discretization Idea
[Figure: state approximation x_h (BDF, RK) with values x_1, ..., x_M on the state grid t_0, ..., t_M; control approximation u_h (B-splines) with values u_1, ..., u_N on the control grid t_0, ..., t_N]

9 State Discretization
Throughout we use one-step methods for the discretization of the differential equation
  x'(t) = f(t, x(t), u(t))

10 State Discretization
Throughout we use one-step methods for the discretization of the differential equation
  x'(t) = f(t, x(t), u(t))
on a grid (equidistant for simplicity)
  G_h := {t_i := t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ

11 State Discretization
Throughout we use one-step methods for the discretization of the differential equation
  x'(t) = f(t, x(t), u(t))
on a grid (equidistant for simplicity)
  G_h := {t_i := t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
One-step Methods: The one-step method is defined by the recursion
  x_{i+1} = x_i + h Φ(t_i, x_i, u_h, h), i = 0, 1, ..., N - 1,
and provides approximations x_i = x_h(t_i) ≈ x(t_i) at t_i ∈ G_h. The function Φ is called increment function. It depends on u_h, which represents a control parameterization to be discussed later.

12 State Discretization
Throughout we use one-step methods for the discretization of the differential equation
  x'(t) = f(t, x(t), u(t))
on a grid (equidistant for simplicity)
  G_h := {t_i := t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
One-step Methods: The one-step method is defined by the recursion
  x_{i+1} = x_i + h Φ(t_i, x_i, u_h, h), i = 0, 1, ..., N - 1,
and provides approximations x_i = x_h(t_i) ≈ x(t_i) at t_i ∈ G_h. The function Φ is called increment function. It depends on u_h, which represents a control parameterization to be discussed later.
Remark: The control approximation u_h needs to be defined. There are different ways to do it; all of them lead to different discretization schemes for OCP.

13 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)

14 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)
Heun's method:
  k_1 = f(t_i, x_i, ?)
  k_2 = f(t_i + h, x_i + h k_1, ?)   } stage derivatives
  x_{i+1} = x_i + (h/2)(k_1 + k_2)

15 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)
Heun's method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h, x_i + h k_1, u_i)   } stage derivatives
  x_{i+1} = x_i + (h/2)(k_1 + k_2)

16 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)
Heun's method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h, x_i + h k_1, u_i)   } stage derivatives
  x_{i+1} = x_i + (h/2)(k_1 + k_2)
  Φ(t, x, u, h) = 1/2 ( f(t, x, u) + f(t + h, x + h f(t, x, u), u) )

17 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)
Heun's method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h, x_i + h k_1, u_{i+1})   } stage derivatives
  x_{i+1} = x_i + (h/2)(k_1 + k_2)

18 State Discretization
Examples (one-step methods)
Euler method: x_{i+1} = x_i + h f(t_i, x_i, u_i), Φ(t, x, u, h) = f(t, x, u)
Heun's method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h, x_i + h k_1, u_{i+1})   } stage derivatives
  x_{i+1} = x_i + (h/2)(k_1 + k_2)
  Φ(t, x, u_i, u_{i+1}, h) = 1/2 ( f(t, x, u_i) + f(t + h, x + h f(t, x, u_i), u_{i+1}) )

19 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, ?)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, ?)
  x_{i+1} = x_i + h k_2

20 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, u_i)
  x_{i+1} = x_i + h k_2

21 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, u_i)
  x_{i+1} = x_i + h k_2
  Φ(t, x, u, h) = f( t + h/2, x + (h/2) f(t, x, u), u )

22 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, (u_i + u_{i+1})/2)
  x_{i+1} = x_i + h k_2

23 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, (u_i + u_{i+1})/2)
  x_{i+1} = x_i + h k_2
  Φ(t, x, u_i, u_{i+1}, h) = f( t + h/2, x + (h/2) f(t, x, u_i), (u_i + u_{i+1})/2 )

24 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, u_{i+1/2})
  x_{i+1} = x_i + h k_2

25 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, u_{i+1/2})
  x_{i+1} = x_i + h k_2
  Φ(t, x, u_i, u_{i+1/2}, h) = f( t + h/2, x + (h/2) f(t, x, u_i), u_{i+1/2} )

26 State Discretization
Examples (one-step methods)
Modified Euler method:
  k_1 = f(t_i, x_i, u_i)
  k_2 = f(t_i + h/2, x_i + (h/2) k_1, u_{i+1/2})
  x_{i+1} = x_i + h k_2
  Φ(t, x, u_i, u_{i+1/2}, h) = f( t + h/2, x + (h/2) f(t, x, u_i), u_{i+1/2} )
In general: Runge-Kutta methods (explicit or implicit)
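As a concrete illustration of these increment functions, here is a minimal Python sketch of the explicit Euler and Heun steps with a piecewise constant control approximation; the dynamics f at the end are a made-up toy example, not one of the problems from these slides.

```python
import numpy as np

def euler_step(f, t, x, u, h):
    # explicit Euler: Phi(t, x, u, h) = f(t, x, u)
    return x + h * f(t, x, u)

def heun_step(f, t, x, u, h):
    # Heun: both stage derivatives use the same control value u_i
    k1 = f(t, x, u)
    k2 = f(t + h, x + h * k1, u)
    return x + 0.5 * h * (k1 + k2)

def integrate(f, x0, t0, tf, u_grid, step):
    """Apply a one-step method on an equidistant grid with a
    piecewise constant control u_h(t) = u_i on [t_i, t_{i+1})."""
    N = len(u_grid)
    h = (tf - t0) / N
    x = np.asarray(x0, dtype=float)
    xs = [x]
    for i in range(N):
        x = step(f, t0 + i * h, x, u_grid[i], h)
        xs.append(x)
    return np.array(xs)

# hypothetical toy dynamics x'(t) = -x(t) + u(t) (illustration only)
f = lambda t, x, u: -x + u
traj = integrate(f, x0=[1.0], t0=0.0, tf=1.0, u_grid=np.zeros(10), step=heun_step)
```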

27 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:

28 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:
collocation points t_i ≤ τ_1 < τ_2 < ... < τ_s ≤ t_{i+1}

29 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:
collocation points t_i ≤ τ_1 < τ_2 < ... < τ_s ≤ t_{i+1}
collocation conditions: p(t_i) = x_i, p'(τ_k) = f(τ_k, p(τ_k), u(τ_k)), k = 1, ..., s

30 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:
collocation points t_i ≤ τ_1 < τ_2 < ... < τ_s ≤ t_{i+1}
collocation conditions: p(t_i) = x_i, p'(τ_k) = f(τ_k, p(τ_k), u(τ_k)), k = 1, ..., s
define x_{i+1} := p(t_{i+1})

31 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:
collocation points t_i ≤ τ_1 < τ_2 < ... < τ_s ≤ t_{i+1}
collocation conditions: p(t_i) = x_i, p'(τ_k) = f(τ_k, p(τ_k), u(τ_k)), k = 1, ..., s
define x_{i+1} := p(t_{i+1})
Example: For s = 2, τ_1 = t_i, τ_2 = t_{i+1}, the collocation idea yields
  x_{i+1} = x_i + (h/2)( f(t_i, x_i, u_i) + f(t_{i+1}, x_{i+1}, u_{i+1}) ).
This is the implicit trapezoidal rule!

32 Collocation
Collocation idea: Approximate the solution x of the initial value problem
  x'(t) = f(t, x(t), u(t)), x(t_i) = x_i,
in [t_i, t_{i+1}] by a polynomial p : [t_i, t_{i+1}] → R^{n_x} of degree s.
Construction:
collocation points t_i ≤ τ_1 < τ_2 < ... < τ_s ≤ t_{i+1}
collocation conditions: p(t_i) = x_i, p'(τ_k) = f(τ_k, p(τ_k), u(τ_k)), k = 1, ..., s
define x_{i+1} := p(t_{i+1})
Example: For s = 2, τ_1 = t_i, τ_2 = t_{i+1}, the collocation idea yields
  x_{i+1} = x_i + (h/2)( f(t_i, x_i, u_i) + f(t_{i+1}, x_{i+1}, u_{i+1}) ).
This is the implicit trapezoidal rule!
In general: Every collocation method corresponds to an implicit Runge-Kutta method.
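The implicit trapezoidal rule requires solving a (generally nonlinear) equation for x_{i+1} in every step. A minimal sketch of one such step, using scipy's fsolve on a hypothetical toy dynamics, could look as follows.

```python
import numpy as np
from scipy.optimize import fsolve

def trapezoidal_step(f, t_i, x_i, u_i, u_ip1, h):
    """One step of the implicit trapezoidal rule (collocation with s = 2,
    tau_1 = t_i, tau_2 = t_{i+1}): solve the defect equation for x_{i+1}."""
    def defect(x_ip1):
        return x_i + 0.5 * h * (f(t_i, x_i, u_i) + f(t_i + h, x_ip1, u_ip1)) - x_ip1
    # explicit Euler predictor as starting guess for the nonlinear solve
    return fsolve(defect, x_i + h * f(t_i, x_i, u_i))

# hypothetical stiff toy dynamics x'(t) = -10 x(t) + u(t) (illustration only)
f = lambda t, x, u: -10.0 * x + u
x_next = trapezoidal_step(f, t_i=0.0, x_i=np.array([1.0]), u_i=0.0, u_ip1=0.0, h=0.1)
```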

33 Discretization of Optimal Control Problems
Grid (equidistant for simplicity):
  G_h := {t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
Control parameterization (approximation of u on G_h):

34 Discretization of Optimal Control Problems
Grid (equidistant for simplicity):
  G_h := {t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
Control parameterization (approximation of u on G_h):
piecewise constant or continuous and piecewise linear approximation (local support): u_i ≈ u(t_i) (t_i ∈ G_h) ⟹ u_h = (u_0, u_1, ..., u_N)^⊤ ∈ R^{(N+1) n_u}

35 Discretization of Optimal Control Problems
Grid (equidistant for simplicity):
  G_h := {t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
Control parameterization (approximation of u on G_h):
piecewise constant or continuous and piecewise linear approximation (local support): u_i ≈ u(t_i) (t_i ∈ G_h) ⟹ u_h = (u_0, u_1, ..., u_N)^⊤ ∈ R^{(N+1) n_u}
interpolating cubic spline (non-local support, twice continuously differentiable)

36 Discretization of Optimal Control Problems
Grid (equidistant for simplicity):
  G_h := {t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
Control parameterization (approximation of u on G_h):
piecewise constant or continuous and piecewise linear approximation (local support): u_i ≈ u(t_i) (t_i ∈ G_h) ⟹ u_h = (u_0, u_1, ..., u_N)^⊤ ∈ R^{(N+1) n_u}
interpolating cubic spline (non-local support, twice continuously differentiable)
polynomials (non-local support, smooth): pseudospectral methods

37 Discretization of Optimal Control Problems
Grid (equidistant for simplicity):
  G_h := {t_0 + ih; i = 0, 1, ..., N}, h = (t_f - t_0)/N, N ∈ ℕ
Control parameterization (approximation of u on G_h):
piecewise constant or continuous and piecewise linear approximation (local support): u_i ≈ u(t_i) (t_i ∈ G_h) ⟹ u_h = (u_0, u_1, ..., u_N)^⊤ ∈ R^{(N+1) n_u}
interpolating cubic spline (non-local support, twice continuously differentiable)
polynomials (non-local support, smooth): pseudospectral methods
B-spline approximations of order k (local support, arbitrary smoothness possible):
  u_h(t) = u_h(t; w) := Σ_{i=1}^{N+k-1} w_i B_i^k(t), w = (w_1, ..., w_M)^⊤ ∈ R^{M n_u}, M := N + k - 1
B_i^k: basis functions (elementary B-splines); w: control parameterization

38 Discretization of Optimal Control Problems
Example (elementary B-splines: basis functions)
[Figure: elementary B-splines of orders k = 2, 3, 4 on [t_0, t_f] = [0, 1], N = 5, equidistant grid]
Elementary B-splines of order k = 1 are piecewise constant basis functions.

39 Discretization of Optimal Control Problems
Definition (elementary B-spline of order k): Let k ∈ ℕ. Define the auxiliary grid G_h^k := {τ_i | i = 1, ..., N + 2k - 1} with auxiliary grid points
  τ_i := t_0, if 1 ≤ i ≤ k,
  τ_i := t_{i-k}, if k + 1 ≤ i ≤ N + k - 1,
  τ_i := t_N, if N + k ≤ i ≤ N + 2k - 1.

40 Discretization of Optimal Control Problems
Definition (elementary B-spline of order k): Let k ∈ ℕ. Define the auxiliary grid G_h^k := {τ_i | i = 1, ..., N + 2k - 1} with auxiliary grid points
  τ_i := t_0, if 1 ≤ i ≤ k,
  τ_i := t_{i-k}, if k + 1 ≤ i ≤ N + k - 1,
  τ_i := t_N, if N + k ≤ i ≤ N + 2k - 1.
The elementary B-splines B_i^k(·) of order k for i = 1, ..., N + k - 1 are defined by the recursion
  B_i^1(t) := 1, if τ_i ≤ t < τ_{i+1}, and 0 otherwise ("piecewise constant"),
  B_i^k(t) := (t - τ_i)/(τ_{i+k-1} - τ_i) · B_i^{k-1}(t) + (τ_{i+k} - t)/(τ_{i+k} - τ_{i+1}) · B_{i+1}^{k-1}(t),
with the convention 0/0 = 0 whenever auxiliary grid points coincide.

41 Discretization of Optimal Control Problems
Example: For k = 2 we obtain the continuous and piecewise linear functions
  B_i^2(t) = (t - τ_i)/(τ_{i+1} - τ_i), if τ_i ≤ t < τ_{i+1},
  B_i^2(t) = (τ_{i+2} - t)/(τ_{i+2} - τ_{i+1}), if τ_{i+1} ≤ t < τ_{i+2},
  B_i^2(t) = 0, otherwise.
For k = 3 we obtain the continuously differentiable functions
  B_i^3(t) = (t - τ_i)^2 / ((τ_{i+2} - τ_i)(τ_{i+1} - τ_i)), if t ∈ [τ_i, τ_{i+1}),
  B_i^3(t) = (t - τ_i)(τ_{i+2} - t) / ((τ_{i+2} - τ_i)(τ_{i+2} - τ_{i+1})) + (τ_{i+3} - t)(t - τ_{i+1}) / ((τ_{i+3} - τ_{i+1})(τ_{i+2} - τ_{i+1})), if t ∈ [τ_{i+1}, τ_{i+2}),
  B_i^3(t) = (τ_{i+3} - t)^2 / ((τ_{i+3} - τ_{i+1})(τ_{i+3} - τ_{i+2})), if t ∈ [τ_{i+2}, τ_{i+3}),
  B_i^3(t) = 0, otherwise.

42 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.

43 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.
Elementary B-splines are bounded and sum up to one.

44 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.
Elementary B-splines are bounded and sum up to one.
For k ≥ 2 the B-splines B_i^k(·) are (k - 2)-times continuously differentiable.

45 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.
Elementary B-splines are bounded and sum up to one.
For k ≥ 2 the B-splines B_i^k(·) are (k - 2)-times continuously differentiable.
For k ≥ 3 the derivative obeys the recursion
  d/dt B_i^k(t) = (k - 1)/(τ_{i+k-1} - τ_i) · B_i^{k-1}(t) - (k - 1)/(τ_{i+k} - τ_{i+1}) · B_{i+1}^{k-1}(t).

46 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.
Elementary B-splines are bounded and sum up to one.
For k ≥ 2 the B-splines B_i^k(·) are (k - 2)-times continuously differentiable.
For k ≥ 3 the derivative obeys the recursion
  d/dt B_i^k(t) = (k - 1)/(τ_{i+k-1} - τ_i) · B_i^{k-1}(t) - (k - 1)/(τ_{i+k} - τ_{i+1}) · B_{i+1}^{k-1}(t).
Elementary B-splines possess local support: supp(B_i^k) ⊆ [τ_i, τ_{i+k}] with B_i^k(t) > 0 if t ∈ (τ_i, τ_{i+k}) and B_i^k(t) = 0 otherwise, for k > 1.

47 Discretization of Optimal Control Problems
Remarks:
The elementary B-splines B_i^k(·), i = 1, ..., N + k - 1, restricted to the intervals [t_j, t_{j+1}], j = 0, ..., N - 1, are polynomials of degree at most k - 1.
Elementary B-splines are bounded and sum up to one.
For k ≥ 2 the B-splines B_i^k(·) are (k - 2)-times continuously differentiable.
For k ≥ 3 the derivative obeys the recursion
  d/dt B_i^k(t) = (k - 1)/(τ_{i+k-1} - τ_i) · B_i^{k-1}(t) - (k - 1)/(τ_{i+k} - τ_{i+1}) · B_{i+1}^{k-1}(t).
Elementary B-splines possess local support: supp(B_i^k) ⊆ [τ_i, τ_{i+k}] with B_i^k(t) > 0 if t ∈ (τ_i, τ_{i+k}) and B_i^k(t) = 0 otherwise, for k > 1.
Most often k = 1 (piecewise constant) or k = 2 (continuous, piecewise linear) are used in the discretization of optimal control problems.
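The recursion for the elementary B-splines and the control parameterization u_h(t; w) translate directly into code. The following sketch (plain recursive evaluation, no attempt at efficiency) follows the definition above with the k-fold end knots of the auxiliary grid; the grid and the coefficients at the bottom are arbitrary illustration values.

```python
import numpy as np

def aux_grid(t, k):
    """Auxiliary grid with k-fold end knots, as in the definition above."""
    return np.concatenate((np.full(k - 1, t[0]), t, np.full(k - 1, t[-1])))

def bspline(i, k, tau, t):
    """Elementary B-spline B_i^k(t) via the recursion (0/0 interpreted as 0).
    Half-open intervals are used, so evaluation at t = t_N needs special care."""
    if k == 1:
        return 1.0 if tau[i] <= t < tau[i + 1] else 0.0
    def ratio(num, den):
        return num / den if den != 0.0 else 0.0
    return (ratio(t - tau[i], tau[i + k - 1] - tau[i]) * bspline(i, k - 1, tau, t)
            + ratio(tau[i + k] - t, tau[i + k] - tau[i + 1]) * bspline(i + 1, k - 1, tau, t))

def u_h(t, w, grid, k):
    """Control parameterization u_h(t; w) = sum_i w_i B_i^k(t)."""
    tau = aux_grid(grid, k)
    M = len(grid) - 1 + k - 1          # M = N + k - 1 coefficients
    return sum(w[i] * bspline(i, k, tau, t) for i in range(M))

grid = np.linspace(0.0, 1.0, 6)                # N = 5
w = np.array([0.0, 1.0, 0.5, -0.5, 1.0, 0.0])  # M = N + k - 1 = 6 for k = 2
value = u_h(0.3, w, grid, k=2)
```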

48 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

49 Full Discretization of OCP
State discretization: For a given control parameterization u_h(·; w), approximations x_i ≈ x(t_i), t_i ∈ G_h, are obtained by a one-step method
  x_{i+1} = x_i + h Φ(t_i, x_i, w, h), i = 0, 1, ..., N - 1.
This yields a state approximation
  x_h = (x_0, x_1, ..., x_N)^⊤ ∈ R^{(N+1) n_x}
Example (explicit Euler method): x_{i+1} = x_i + h f(t_i, x_i, u_i), i = 0, 1, ..., N - 1.

50 Full Discretization of OCP
Discretization of the optimal control problem with Mayer-type objective function and B-spline control parameterization of order k yields:
Fully discretized optimal control problem (D-OCP): Find x_h = (x_0, ..., x_N)^⊤ ∈ R^{(N+1) n_x} and w = (w_1, ..., w_M)^⊤ ∈ R^{M n_u} such that ϕ(x_0, x_N) becomes minimal subject to the discretized dynamic constraints
  x_i + h Φ(t_i, x_i, w, h) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
the discretized control and state constraints
  c(t_i, x_i, u_h(t_i; w)) ≤ 0, i = 0, 1, ..., N,
and the boundary conditions
  ψ(x_0, x_N) = 0.

51 Full Discretization of OCP
Remark: The constraints
  x_i + h Φ(t_i, x_i, w, h) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
can be understood in two ways, if Φ defines a Runge-Kutta method with stages k_j, j = 1, ..., s:

52 Full Discretization of OCP
Remark: The constraints
  x_i + h Φ(t_i, x_i, w, h) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
can be understood in two ways, if Φ defines a Runge-Kutta method with stages k_j, j = 1, ..., s:
The stage equations k_j = ... are not explicitly added as equality constraints in D-OCP; e.g. for Heun's method one would only add the constraints
  x_i + (h/2)( f(t_i, x_i, u_i) + f(t_{i+1}, x_i + h f(t_i, x_i, u_i), u_i) ) - x_{i+1} = 0

53 Full Discretization of OCP
Remark: The constraints
  x_i + h Φ(t_i, x_i, w, h) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
can be understood in two ways, if Φ defines a Runge-Kutta method with stages k_j, j = 1, ..., s:
The stage equations k_j = ... are not explicitly added as equality constraints in D-OCP; e.g. for Heun's method one would only add the constraints
  x_i + (h/2)( f(t_i, x_i, u_i) + f(t_{i+1}, x_i + h f(t_i, x_i, u_i), u_i) ) - x_{i+1} = 0
The stage equations k_j = ... are explicitly added as equality constraints in D-OCP; e.g. for Heun's method one would add the constraints
  k_1 - f(t_i, x_i, u_i) = 0
  k_2 - f(t_{i+1}, x_i + h k_1, u_i) = 0
  x_i + (h/2)(k_1 + k_2) - x_{i+1} = 0
This version is typically implemented for implicit Runge-Kutta methods.

54 Full Discretization of OCP
The fully discretized optimal control problem is a standard nonlinear optimization problem, which is large-scale but exhibits a sparse structure:
Nonlinear Optimization Problem (NLP): Minimize J(z) := ϕ(x_0, x_N) w.r.t. z = (x_h, w) subject to the constraints H(z) = 0, G(z) ≤ 0, where
  H(z) := ( x_0 + h Φ(t_0, x_0, w, h) - x_1 ; ... ; x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h) - x_N ; ψ(x_0, x_N) ),
  G(z) := ( c(t_0, x_0, u_h(t_0; w)) ; ... ; c(t_N, x_N, u_h(t_N; w)) ).

55 Sparsity Structures in D-OCP
  J'(z) = ( ϕ'_{x_0}  0  ...  0  ϕ'_{x_N}  0 ),

56 Sparsity Structures in D-OCP
  J'(z) = ( ϕ'_{x_0}  0  ...  0  ϕ'_{x_N}  0 ),
  H'(z) =
    [ M_0  -Id                              h Φ'_w[t_0]     ]
    [        ...   ...                          ...         ]
    [            M_{N-1}  -Id               h Φ'_w[t_{N-1}] ]
    [ ψ'_{x_0}               ψ'_{x_N}       0               ],   M_j := Id + h Φ'_x[t_j]

57 Sparsity Structures in D-OCP
  J'(z) = ( ϕ'_{x_0}  0  ...  0  ϕ'_{x_N}  0 ),
  H'(z) =
    [ M_0  -Id                              h Φ'_w[t_0]     ]
    [        ...   ...                          ...         ]
    [            M_{N-1}  -Id               h Φ'_w[t_{N-1}] ]
    [ ψ'_{x_0}               ψ'_{x_N}       0               ],   M_j := Id + h Φ'_x[t_j]
  G'(z) =
    [ c'_x[t_0]                       c'_u[t_0] u'_{h,w}(t_0; w) ]
    [            ...                          ...                ]
    [                     c'_x[t_N]   c'_u[t_N] u'_{h,w}(t_N; w) ],

58 Sparsity Structures in D-OCP
  J'(z) = ( ϕ'_{x_0}  0  ...  0  ϕ'_{x_N}  0 ),
  H'(z) =
    [ M_0  -Id                              h Φ'_w[t_0]     ]
    [        ...   ...                          ...         ]
    [            M_{N-1}  -Id               h Φ'_w[t_{N-1}] ]
    [ ψ'_{x_0}               ψ'_{x_N}       0               ],   M_j := Id + h Φ'_x[t_j]
  G'(z) =
    [ c'_x[t_0]                       c'_u[t_0] u'_{h,w}(t_0; w) ]
    [            ...                          ...                ]
    [                     c'_x[t_N]   c'_u[t_N] u'_{h,w}(t_N; w) ],
  u'_{h,w}(t_j; w) = ( B_1^k(t_j) Id   B_2^k(t_j) Id   ...   B_M^k(t_j) Id )
Observation: the B_i^k have local support ⟹ most entries in u'_{h,w}(t_j; w) and in Φ'_w[t_j] vanish ⟹ H'(z) and G'(z) have a large-scale and sparse structure.

59 Sparsity Structures in the Full Discretization Approach
Lagrange function:
  L(z, λ, µ) = ϕ(x_0, x_N) + σ^⊤ ψ(x_0, x_N) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( x_i + h Φ(t_i, x_i, w, h) - x_{i+1} ) + Σ_{i=0}^{N} µ_i^⊤ c(t_i, x_i, u_h(t_i; w))
Hessian matrix:
  L''_{zz}(z, λ, µ) =
    [ L''_{x_0,x_0}  ...  L''_{x_0,x_N}  L''_{x_0,w} ]
    [      ...       ...       ...           ...     ]
    [ L''_{x_N,x_0}  ...  L''_{x_N,x_N}  L''_{x_N,w} ]
    [ L''_{w,x_0}    ...  L''_{w,x_N}    L''_{w,w}   ]
Note: The blocks L''_{x_j,w} = (L''_{w,x_j})^⊤ and L''_{w,w} are sparse matrices if B-splines are used.

60 Full Euler Discretization I
B-splines of order k = 1 (piecewise constant) and k = 2 (continuous and piecewise linear) satisfy
  u_{j-1} := u_h(t_{j-1}; w) = Σ_{i=1}^{N+k-1} w_i B_i^k(t_{j-1}) = w_j   (j = 1, ..., N + k - 1)
Hence: The function value of u at t_j and the coefficient w_{j+1} coincide. This will be exploited in the following examples.

61 Full Euler Discretization II
Example (D-OCP using explicit Euler): Find z = (x_h, u_h) with x_h = (x_0, ..., x_N)^⊤ ∈ R^{(N+1) n_x}, u_h = (u_0, ..., u_N)^⊤ ∈ R^{(N+1) n_u} such that J(z) = ϕ(x_0, x_N) becomes minimal subject to the constraints
  x_i + h f(t_i, x_i, u_i) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
  c(t_i, x_i, u_i) ≤ 0, i = 0, 1, ..., N,
  ψ(x_0, x_N) = 0.
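To make the structure of this NLP concrete, the following sketch assembles and solves a small instance with scipy's dense SLSQP method (only as an illustration; the sparse large-scale solvers mentioned later are the realistic choice). As test problem it uses the LQ example from the Hermite-Simpson "Caution" slide below, rewritten in Mayer form with the integral cost as an extra state q; the path constraints c are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

# test problem in Mayer form (assumed here only as an illustration):
#   minimize q(1)  s.t.  x' = x/2 + u,  q' = (u^2 + 2 x^2)/2,  x(0) = 1, q(0) = 0
def f(t, s, u):
    x, q = s
    return np.array([0.5 * x + u, 0.5 * (u**2 + 2.0 * x**2)])

N, t0, tf = 20, 0.0, 1.0
h = (tf - t0) / N
nx, nu = 2, 1

def unpack(z):
    xs = z[: (N + 1) * nx].reshape(N + 1, nx)
    us = z[(N + 1) * nx :].reshape(N + 1, nu)
    return xs, us

def objective(z):                      # Mayer objective phi(x_0, x_N)
    xs, _ = unpack(z)
    return xs[-1, 1]                   # q(t_f)

def defects(z):                        # x_i + h f(t_i, x_i, u_i) - x_{i+1} = 0
    xs, us = unpack(z)
    d = [xs[i] + h * f(t0 + i * h, xs[i], us[i, 0]) - xs[i + 1] for i in range(N)]
    d.append(xs[0] - np.array([1.0, 0.0]))   # boundary condition psi = 0
    return np.concatenate(d)

z0 = np.zeros((N + 1) * (nx + nu))
res = minimize(objective, z0, constraints={"type": "eq", "fun": defects}, method="SLSQP")
x_opt, u_opt = unpack(res.x)
```

The equality constraints are exactly the Euler defects plus the boundary condition; inequality path constraints would be added analogously as an "ineq" constraint block.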

62 Full Euler Discretization III
Sparsity (Φ = f):
  H'(z) =
    [ M_0  -Id                      | h f'_u[t_0]                    ]
    [        ...   ...              |       ...                      ]
    [            M_{N-1}  -Id       |            h f'_u[t_{N-1}]  0  ]
    [ ψ'_{x_0}             ψ'_{x_N} | 0                              ],
  G'(z) =
    [ c'_x[t_0]                | c'_u[t_0]             ]
    [           ...            |       ...             ]
    [               c'_x[t_N]  |           c'_u[t_N]   ],
where M_j = Id + h f'_x[t_j], j = 0, ..., N - 1.

63 Full Euler Discretization IV
Lagrange function:
  L(z, λ, µ) = ϕ(x_0, x_N) + σ^⊤ ψ(x_0, x_N) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( x_i + h f(t_i, x_i, u_i) - x_{i+1} ) + Σ_{i=0}^{N} µ_i^⊤ c(t_i, x_i, u_i)
Hessian matrix: only the diagonal blocks L''_{x_i,x_i}, the coupling blocks L''_{x_i,u_i} = (L''_{u_i,x_i})^⊤, L''_{u_i,u_i}, i = 0, ..., N, and the blocks L''_{x_0,x_N} = (L''_{x_N,x_0})^⊤ (from ϕ and ψ) are nonzero:
  L''_{zz}(z, λ, µ) =
    [ L''_{x_0,x_0}       L''_{x_0,x_N} | L''_{x_0,u_0}                ]
    [             ...                   |        ...                   ]
    [ L''_{x_N,x_0}       L''_{x_N,x_N} |               L''_{x_N,u_N}  ]
    [ L''_{u_0,x_0}                     | L''_{u_0,u_0}                ]
    [             ...                   |        ...                   ]
    [                     L''_{u_N,x_N} |               L''_{u_N,u_N}  ]

64 Full Trapezoidal Discretization
Example (D-OCP using the trapezoidal rule (collocation)): Find z = (x_h, u_h) with x_h = (x_0, ..., x_N)^⊤ ∈ R^{(N+1) n_x}, u_h = (u_0, ..., u_N)^⊤ ∈ R^{(N+1) n_u} such that J(z) = ϕ(x_0, x_N) becomes minimal subject to the constraints
  x_i + (h/2)( f(t_i, x_i, u_i) + f(t_{i+1}, x_{i+1}, u_{i+1}) ) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
  c(t_i, x_i, u_i) ≤ 0, i = 0, 1, ..., N,
  ψ(x_0, x_N) = 0.
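Compared with the Euler sketch above, the trapezoidal discretization only changes the defect constraints. A hedged sketch of the corresponding constraint function (to be plugged into the same kind of NLP) is:

```python
import numpy as np

def trapezoidal_defects(xs, us, f, t0, h):
    """Defect constraints of the trapezoidal (collocation) discretization:
    x_i + h/2 (f(t_i, x_i, u_i) + f(t_{i+1}, x_{i+1}, u_{i+1})) - x_{i+1} = 0.
    xs: (N+1, nx) array, us: (N+1, nu) array; returns the stacked defects."""
    N = xs.shape[0] - 1
    d = [xs[i] + 0.5 * h * (f(t0 + i * h, xs[i], us[i])
                            + f(t0 + (i + 1) * h, xs[i + 1], us[i + 1])) - xs[i + 1]
         for i in range(N)]
    return np.concatenate(d)
```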

65 Full Hermite-Simpson Discretization (Collocation) I
Example (Hermite-Simpson, collocation): The Hermite-Simpson rule reads
  x_{i+1} = x_i + (h/6)( f_i + 4 f_{i+1/2} + f_{i+1} ), i = 0, 1, ..., N - 1,
where
  f_i := f(t_i, x_i, u_i), f_{i+1} := f(t_{i+1}, x_{i+1}, u_{i+1}), f_{i+1/2} := f(t_i + h/2, x_{i+1/2}, u_{i+1/2}),
  x_{i+1/2} := (1/2)(x_i + x_{i+1}) + (h/8)( f_i - f_{i+1} ).
Herein, we need to specify what u_{i+1/2} is supposed to be. Several choices are possible, e.g.:
if a continuous and piecewise linear control approximation is chosen, then u_{i+1/2} = (1/2)(u_i + u_{i+1});
u_{i+1/2} can be introduced as an additional optimization variable without specifying any relations to u_i and u_{i+1}.

66 Full Hermite-Simpson Discretization (Collocation) II
Caution! See what happens if you apply the modified Euler method with additional optimization variables u_{k+1/2} at the midpoints t_k + h/2 to the following problem:
Minimize
  (1/2) ∫_0^1 ( u(t)^2 + 2 x(t)^2 ) dt
subject to the constraints
  x'(t) = (1/2) x(t) + u(t), x(0) = 1.
The optimal solution is
  x̂(t) = (2 exp(3t) + exp(3)) / ( exp(3t/2)(2 + exp(3)) ),  û(t) = 2(exp(3t) - exp(3)) / ( exp(3t/2)(2 + exp(3)) ).

67 Numerical Solution of D-OCP
It remains to solve the large-scale and sparse NLP by, e.g.,
sequential quadratic programming (SQP), e.g. WORHP or SNOPT,
interior-point methods, e.g. IPOPT or KNITRO,
any other software package suitable for large-scale nonlinear programs.
Caution: Designing a code for large-scale nonlinear programming is non-trivial, because
sparse Hessian and Jacobian approximations are expensive to compute in general (graph coloring problem),
regularization techniques for singular or indefinite Hessians are required,
one cannot use standard BFGS updates, since the update leads to large-scale and dense matrices (fill-in!),
linear equation solvers are needed to solve linear equations with large-scale saddle point matrices (KKT matrices) of type
  [ L''_{zz}  A^⊤ ]
  [ A         S   ]

68 Grid Refinement
Approaches:
Refinement based on the local discretization error of the state dynamics and local refinement at junction points of active/inactive state constraints, see [1, 2].
Refinement taking into account the discretization error in the optimality system including adjoint equations, see [3].
[1] Betts, J. T. and Huffman, W. P. Mesh Refinement in Direct Transcription Methods for Optimal Control. Optimal Control Applications and Methods, 19:1-21.
[2] C. Büskens. Optimierungsmethoden und Sensitivitätsanalyse für optimale Steuerprozesse mit Steuer- und Zustandsbeschränkungen. PhD thesis, Fachbereich Mathematik, Westfälische Wilhelms-Universität Münster, Münster, Germany.
[3] J. Laurent-Varin, F. Bonnans, N. Berend, C. Talbot, and M. Haddou. On the refinement of discretization for optimal control problems. IFAC Symposium on Automatic Control in Aerospace, St. Petersburg, 2004.

69 Pseudospectral Methods
Approach: global approximation of control and state by Legendre or Chebyshev polynomials, direct discretization, sparse nonlinear programming solver.
Advantages: exponential (or spectral) rate of convergence (faster than polynomial); good accuracy already with coarse grids.
Disadvantages: oscillations for non-differentiable trajectories.
[1] G. Elnagar, M. A. Kazemi, and M. Razzaghi. The Pseudospectral Legendre Method for Discretizing Optimal Control Problems. IEEE Transactions on Automatic Control, 40.
[2] J. Vlassenbroeck and R. Van Dooren. A Chebyshev Technique for Solving Nonlinear Optimal Control Problems. IEEE Transactions on Automatic Control, 33.
[3] F. Fahroo and I. M. Ross. Direct Trajectory Optimization by a Chebyshev Pseudospectral Method. Journal of Guidance, Control, and Dynamics, 25, 2002.

70 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

71 Reduced Discretization (Direct Single Shooting)
Fully discretized optimal control problem: Minimize ϕ(x_0, x_N) w.r.t. x_h = (x_0, ..., x_N)^⊤ ∈ R^{(N+1) n_x} and w = (w_1, ..., w_M)^⊤ ∈ R^{M n_u} subject to
  x_i + h Φ(t_i, x_i, w, h) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
  c(t_i, x_i, u_h(t_i; w)) ≤ 0, i = 0, 1, ..., N,
  ψ(x_0, x_N) = 0.

72 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),

73 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),

74 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h)

75 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...

76 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...
  x_N = x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h)

77 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...
  x_N = x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h) = X_{N-1}(x_0, w) + h Φ(t_{N-1}, X_{N-1}(x_0, w), w, h) =: X_N(x_0, w).

78 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...
  x_N = x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h) = X_{N-1}(x_0, w) + h Φ(t_{N-1}, X_{N-1}(x_0, w), w, h) =: X_N(x_0, w).
Remarks:

79 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...
  x_N = x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h) = X_{N-1}(x_0, w) + h Φ(t_{N-1}, X_{N-1}(x_0, w), w, h) =: X_N(x_0, w).
Remarks:
The state trajectory is fully determined by the initial value x_0 and the control parameterization w (direct shooting idea).

80 Reduced Discretization (Direct Single Shooting)
Reduction of size by solving the discretized differential equations:
  x_0 =: X_0(x_0, w),
  x_1 = x_0 + h Φ(t_0, x_0, w, h) =: X_1(x_0, w),
  x_2 = x_1 + h Φ(t_1, x_1, w, h) = X_1(x_0, w) + h Φ(t_1, X_1(x_0, w), w, h) =: X_2(x_0, w),
  ...
  x_N = x_{N-1} + h Φ(t_{N-1}, x_{N-1}, w, h) = X_{N-1}(x_0, w) + h Φ(t_{N-1}, X_{N-1}(x_0, w), w, h) =: X_N(x_0, w).
Remarks:
The state trajectory is fully determined by the initial value x_0 and the control parameterization w (direct shooting idea).
The above procedure is nothing else than solving the initial value problem x'(t) = f(t, x(t), u_h(t; w)), x(t_0) = x_0 with the one-step method with increment function Φ.

81 Reduced Discretization (Direct Single Shooting)
Reduced Discretization (RD-OCP): Minimize ϕ(x_0, X_N(x_0, w)) with respect to x_0 ∈ R^{n_x} and w ∈ R^{M n_u} subject to the constraints
  ψ(x_0, X_N(x_0, w)) = 0,
  c(t_j, X_j(x_0, w), u_h(t_j; w)) ≤ 0, j = 0, 1, ..., N.
Remarks: much smaller than D-OCP (fewer optimization variables and fewer constraints), but: more nonlinear than D-OCP.

82 Reduced Discretization (Direct Single Shooting)
The reduced discretization is again a finite dimensional nonlinear optimization problem, but of reduced size:
Reduced Nonlinear Optimization Problem (R-NLP): Minimize J(z) := ϕ(x_0, X_N(x_0, w)) w.r.t. z = (x_0, w) ∈ R^{n_x + M n_u} subject to the constraints H(z) = 0, G(z) ≤ 0, where
  H(z) := ψ(x_0, X_N(x_0, w)),
  G(z) := ( c(t_0, x_0, u_h(t_0; w)) ; c(t_1, X_1(x_0, w), u_h(t_1; w)) ; ... ; c(t_N, X_N(x_0, w), u_h(t_N; w)) ).
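A minimal single-shooting sketch for the same hypothetical LQ test problem as before (explicit Euler, piecewise constant control, fixed initial state so that z reduces to w): the whole state trajectory is recomputed inside the objective, and the optimizer below simply uses finite-difference gradients; the sensitivity and adjoint techniques that follow are the systematic alternatives.

```python
import numpy as np
from scipy.optimize import minimize

# Mayer-form test problem as in the full-discretization sketch (assumption, not from the slides)
def f(t, s, u):
    x, q = s
    return np.array([0.5 * x + u, 0.5 * (u**2 + 2.0 * x**2)])

N, t0, tf = 50, 0.0, 1.0
h = (tf - t0) / N
x0 = np.array([1.0, 0.0])           # fixed initial state, so z = w only

def X_N(w):                         # forward recursion X_0, ..., X_N
    x = x0.copy()
    for i in range(N):
        x = x + h * f(t0 + i * h, x, w[i])
    return x

def objective(w):                   # phi(x_0, X_N(x_0, w)) = q(t_f)
    return X_N(w)[1]

res = minimize(objective, np.zeros(N), method="BFGS")   # gradients by finite differences here
u_opt = res.x
```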

83 Reduced Discretization (Direct Single Shooting)
Derivatives (required in gradient based optimization methods):
  J'(z) = ( ϕ'_{x_0} + ϕ'_{x_f} X_{N,x_0}    ϕ'_{x_f} X_{N,w} )
  G'(z) = [ c'_x[t_0]                   c'_u[t_0] u'_{h,w}(t_0; w)
            c'_x[t_1] X_{1,x_0}         c'_x[t_1] X_{1,w} + c'_u[t_1] u'_{h,w}(t_1; w)
            ...                         ...
            c'_x[t_N] X_{N,x_0}         c'_x[t_N] X_{N,w} + c'_u[t_N] u'_{h,w}(t_N; w) ]
  H'(z) = ( ψ'_{x_0} + ψ'_{x_f} X_{N,x_0}    ψ'_{x_f} X_{N,w} )
How to compute the sensitivities X_{i,x_0}(x_0, w), X_{i,w}(x_0, w), i = 1, ..., N?

84 Computation of Derivatives Numerical Optimal Control Part 2: Discretization techniques, structure exploitation, calculation of gradients Different approaches exist: (a) The sensitivity differential equation approach is advantageous if the number of constraints is (much) larger than the number of variables in the discretized problem.

85 Computation of Derivatives Numerical Optimal Control Part 2: Discretization techniques, structure exploitation, calculation of gradients Different approaches exist: (a) The sensitivity differential equation approach is advantageous if the number of constraints is (much) larger than the number of variables in the discretized problem. (b) The adjoint equation approach is preferable if the number of constraints is less than the number of variables in the discretized problem.

86 Computation of Derivatives
Different approaches exist:
(a) The sensitivity differential equation approach is advantageous if the number of constraints is (much) larger than the number of variables in the discretized problem.
(b) The adjoint equation approach is preferable if the number of constraints is less than the number of variables in the discretized problem.
(c) A powerful tool for the evaluation of derivatives is algorithmic differentiation. This approach assumes that the evaluation of a function is performed by a FORTRAN or C procedure. Algorithmic differentiation means that the complete procedure is differentiated step by step using, roughly speaking, chain and product rules. The result is again a FORTRAN or C procedure that provides the derivative of the function.

87 Computation of Derivatives
Different approaches exist:
(a) The sensitivity differential equation approach is advantageous if the number of constraints is (much) larger than the number of variables in the discretized problem.
(b) The adjoint equation approach is preferable if the number of constraints is less than the number of variables in the discretized problem.
(c) A powerful tool for the evaluation of derivatives is algorithmic differentiation. This approach assumes that the evaluation of a function is performed by a FORTRAN or C procedure. Algorithmic differentiation means that the complete procedure is differentiated step by step using, roughly speaking, chain and product rules. The result is again a FORTRAN or C procedure that provides the derivative of the function.
(d) The approximation by finite differences is straightforward, but has the drawback of being computationally expensive and often suffers from low accuracy.

88 Computation of Derivatives by Sensitivity Analysis
Given: one-step method
  X_0(x_0, w) = x_0,
  X_{i+1}(x_0, w) = X_i(x_0, w) + h Φ(t_i, X_i(x_0, w), w, h), i = 0, 1, ..., N - 1.

89 Computation of Derivatives by Sensitivity Analysis
Given: one-step method
  X_0(x_0, w) = x_0,
  X_{i+1}(x_0, w) = X_i(x_0, w) + h Φ(t_i, X_i(x_0, w), w, h), i = 0, 1, ..., N - 1.
Goal: Compute sensitivities
  S_i := X_i'(x_0, w) = ( X_{i,x_0}(x_0, w)   X_{i,w}(x_0, w) ), i = 0, 1, ..., N.

90 Computation of Derivatives by Sensitivity Analysis
Given: one-step method
  X_0(x_0, w) = x_0,
  X_{i+1}(x_0, w) = X_i(x_0, w) + h Φ(t_i, X_i(x_0, w), w, h), i = 0, 1, ..., N - 1.
Goal: Compute sensitivities
  S_i := X_i'(x_0, w) = ( X_{i,x_0}(x_0, w)   X_{i,w}(x_0, w) ), i = 0, 1, ..., N.
Idea: Differentiate the one-step method step-by-step with respect to x_0 and w.

91 Computation of Derivatives by Sensitivity Analysis
Sensitivity Computation (Internal Numerical Differentiation, IND)
Init:
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + M n_u)}.

92 Computation of Derivatives by Sensitivity Analysis
Sensitivity Computation (Internal Numerical Differentiation, IND)
Init:
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + M n_u)}.
For i = 0, 1, ..., N - 1 compute
  S_{i+1} = S_i + h ( Φ'_x[t_i] S_i + Φ'_w[t_i] ∂w/∂(x_0, w) ),

93 Computation of Derivatives by Sensitivity Analysis
Sensitivity Computation (Internal Numerical Differentiation, IND)
Init:
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + M n_u)}.
For i = 0, 1, ..., N - 1 compute
  S_{i+1} = S_i + h ( Φ'_x[t_i] S_i + Φ'_w[t_i] ∂w/∂(x_0, w) ),
where
  ∂w/∂(x_0, w) = ( 0   Id ) ∈ R^{M n_u × (n_x + M n_u)}.

94 Computation of Derivatives by Sensitivity Analysis
Sensitivity Computation (Internal Numerical Differentiation, IND)
Init:
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + M n_u)}.
For i = 0, 1, ..., N - 1 compute
  S_{i+1} = S_i + h ( Φ'_x[t_i] S_i + Φ'_w[t_i] ∂w/∂(x_0, w) ),
where
  ∂w/∂(x_0, w) = ( 0   Id ) ∈ R^{M n_u × (n_x + M n_u)}.
Note: The dimension of S_i depends on the dimension n_x + M n_u of the optimization vector (x_0, w), but not on the number of constraints of the reduced optimization problem. Hence, this approach is useful if the number of constraints exceeds the number of optimization variables.

95 Computation of Derivatives by Sensitivity Analysis
Explicit Euler method:
  X_{i+1}(x_0, w) = X_i(x_0, w) + h f(t_i, X_i(x_0, w), u_i), i = 0, 1, ..., N - 1.
Recall: u_{j-1} = w_j, j = 1, ..., N, i.e. w = (u_0, u_1, ..., u_{N-1})^⊤.
Sensitivity Computation (Internal Numerical Differentiation, IND)
Init:
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + N n_u)}.
For i = 0, 1, ..., N - 1 compute
  S_{i+1} = S_i + h ( f'_x[t_i] S_i + f'_u[t_i] ∂u_i/∂(x_0, w) ),
where ∂u_i/∂(x_0, w) = ( 0  ...  0  Id  0  ...  0 ) with the identity in the i-th n_u × n_u block.

96 IND and Sensitivity Differential Equation
Discretization of the sensitivity differential equation
  S'(t) = f'_x[t] S(t) + f'_u[t] u'_{h,(x_0,w)}(t; w),  S(t_0) = ( Id   0 ) ∈ R^{n_x × (n_x + N n_u)}
with the explicit Euler method yields the internal numerical differentiation formulae
  S_{i+1} = S_i + h ( f'_x[t_i] S_i + f'_u[t_i] ∂u_i/∂(x_0, w) ), i = 0, 1, ..., N - 1,
  S_0 = ( Id   0 ) ∈ R^{n_x × (n_x + N n_u)}.
Conclusion: The IND approach for the calculation of sensitivities coincides with the discretization of the sensitivity differential equation, if the same one-step method and control approximation are used.
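A sketch of the IND recursion for the explicit Euler case, propagating S_i forward together with the state; the scalar dynamics and its Jacobians are hypothetical illustration data.

```python
import numpy as np

# hypothetical scalar example x' = -x^2 + u, so f_x = -2x and f_u = 1
f  = lambda t, x, u: -x**2 + u
fx = lambda t, x, u: np.array([[-2.0 * x[0]]])
fu = lambda t, x, u: np.array([[1.0]])

def euler_with_sensitivities(x0, w, t0, h):
    """Explicit Euler with piecewise constant control w = (u_0, ..., u_{N-1});
    returns X_N and S_N = dX_N/d(x_0, w) via the IND recursion."""
    N, nx, nu = len(w), len(x0), 1
    x = np.array(x0, dtype=float)
    S = np.hstack((np.eye(nx), np.zeros((nx, N * nu))))       # S_0 = (Id 0)
    for i in range(N):
        dui = np.zeros((nu, nx + N * nu))
        dui[:, nx + i * nu : nx + (i + 1) * nu] = np.eye(nu)  # du_i/d(x_0, w)
        # sensitivity update uses the current state x_i, then the state is advanced
        S = S + h * (fx(t0 + i * h, x, w[i]) @ S + fu(t0 + i * h, x, w[i]) @ dui)
        x = x + h * f(t0 + i * h, x, w[i])
    return x, S

xN, SN = euler_with_sensitivities([1.0], np.zeros(10), 0.0, 0.1)
```

Each column of S_N approximates the derivative of X_N with respect to one component of (x_0, w) and could be checked against finite differences.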

97 Gradient Computation by Adjoint Equation Given: One-step discretization scheme (z = (x 0, w) ): X 0 (z) = x 0, X i+1 (z) = X i (z) + hφ(t i, X i (z), w, h), i = 0, 1,..., N 1,

98 Gradient Computation by Adjoint Equation Given: One-step discretization scheme (z = (x 0, w) ): X 0 (z) = x 0, X i+1 (z) = X i (z) + hφ(t i, X i (z), w, h), i = 0, 1,..., N 1, A function Γ of type Γ(z) := γ(x 0, X N (z), w). This can be the objective function of the reduced optimal control problem or the boundary conditions or any of the discretized state constraints (with N replaced by i).

99 Gradient Computation by Adjoint Equation Given: One-step discretization scheme (z = (x 0, w) ): Goals: X 0 (z) = x 0, X i+1 (z) = X i (z) + hφ(t i, X i (z), w, h), i = 0, 1,..., N 1, A function Γ of type Γ(z) := γ(x 0, X N (z), w). This can be the objective function of the reduced optimal control problem or the boundary conditions or any of the discretized state constraints (with N replaced by i). Compute gradient of Γ with respect to z.

100 Gradient Computation by Adjoint Equation Given: One-step discretization scheme (z = (x 0, w) ): Goals: X 0 (z) = x 0, X i+1 (z) = X i (z) + hφ(t i, X i (z), w, h), i = 0, 1,..., N 1, A function Γ of type Γ(z) := γ(x 0, X N (z), w). This can be the objective function of the reduced optimal control problem or the boundary conditions or any of the discretized state constraints (with N replaced by i). Compute gradient of Γ with respect to z. Avoid the costly computation of the sensitivity matrices S i, i = 0,..., N, with IND.

101 Gradient Computation by Adjoint Equation
Define the auxiliary functional Γ_a using multipliers λ_i, i = 1, ..., N:
  Γ_a(z) := Γ(z) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( X_{i+1}(z) - X_i(z) - h Φ(t_i, X_i(z), w, h) )

102 Gradient Computation by Adjoint Equation
Define the auxiliary functional Γ_a using multipliers λ_i, i = 1, ..., N:
  Γ_a(z) := Γ(z) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( X_{i+1}(z) - X_i(z) - h Φ(t_i, X_i(z), w, h) )
Differentiating Γ_a with respect to z leads to the expression
  Γ_a'(z) = ( γ'_{x_0} - λ_1^⊤ - h λ_1^⊤ Φ'_x[t_0] ) S_0
            + Σ_{i=1}^{N-1} ( λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] ) S_i
            + ( γ'_{x_N} + λ_N^⊤ ) S_N
            + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.

103 Gradient Computation by Adjoint Equation
Define the auxiliary functional Γ_a using multipliers λ_i, i = 1, ..., N:
  Γ_a(z) := Γ(z) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( X_{i+1}(z) - X_i(z) - h Φ(t_i, X_i(z), w, h) )
Differentiating Γ_a with respect to z leads to the expression
  Γ_a'(z) = ( γ'_{x_0} - λ_1^⊤ - h λ_1^⊤ Φ'_x[t_0] ) S_0
            + Σ_{i=1}^{N-1} ( λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] ) S_i
            + ( γ'_{x_N} + λ_N^⊤ ) S_N
            + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.
The terms S_i = X_i'(z) are just the sensitivities of the sensitivity equation approach, which we do not (!) want to compute here.

104 Gradient Computation by Adjoint Equation
Define the auxiliary functional Γ_a using multipliers λ_i, i = 1, ..., N:
  Γ_a(z) := Γ(z) + Σ_{i=0}^{N-1} λ_{i+1}^⊤ ( X_{i+1}(z) - X_i(z) - h Φ(t_i, X_i(z), w, h) )
Differentiating Γ_a with respect to z leads to the expression
  Γ_a'(z) = ( γ'_{x_0} - λ_1^⊤ - h λ_1^⊤ Φ'_x[t_0] ) S_0
            + Σ_{i=1}^{N-1} ( λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] ) S_i
            + ( γ'_{x_N} + λ_N^⊤ ) S_N
            + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.
The terms S_i = X_i'(z) are just the sensitivities of the sensitivity equation approach, which we do not (!) want to compute here.
Idea: Choose λ_i such that the terms involving S_i vanish.

105 Gradient Computation by Adjoint Equation
Discrete adjoint equation (to be solved backwards in time):
  λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] = 0, i = 0, ..., N - 1
Transversality condition (terminal condition at t = t_N):
  λ_N = -γ'_{x_N}(x_0, X_N(z), w)^⊤

106 Gradient Computation by Adjoint Equation
Discrete adjoint equation (to be solved backwards in time):
  λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] = 0, i = 0, ..., N - 1
Transversality condition (terminal condition at t = t_N):
  λ_N = -γ'_{x_N}(x_0, X_N(z), w)^⊤
Then:
  Γ_a'(z) = ( γ'_{x_0} - λ_0^⊤ ) S_0 + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z,
where S_0 = ( Id   0 ).

107 Gradient Computation by Adjoint Equation
Discrete adjoint equation (to be solved backwards in time):
  λ_i^⊤ - λ_{i+1}^⊤ - h λ_{i+1}^⊤ Φ'_x[t_i] = 0, i = 0, ..., N - 1
Transversality condition (terminal condition at t = t_N):
  λ_N = -γ'_{x_N}(x_0, X_N(z), w)^⊤
Then:
  Γ_a'(z) = ( γ'_{x_0} - λ_0^⊤ ) S_0 + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z,
where S_0 = ( Id   0 ).
What is the relation between Γ_a and Γ?

108 Gradient Computation by Adjoint Equation
Theorem: It holds
  Γ'(z) = Γ_a'(z) = ( γ'_{x_0} - λ_0^⊤ ) S_0 + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.
Notes:

109 Gradient Computation by Adjoint Equation
Theorem: It holds
  Γ'(z) = Γ_a'(z) = ( γ'_{x_0} - λ_0^⊤ ) S_0 + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.
Notes:
With Γ'(z) = Γ_a'(z) we found a formula for the gradient of Γ.

110 Gradient Computation by Adjoint Equation
Theorem: It holds
  Γ'(z) = Γ_a'(z) = ( γ'_{x_0} - λ_0^⊤ ) S_0 + ( γ'_w - Σ_{i=0}^{N-1} h λ_{i+1}^⊤ Φ'_w[t_i] ) ∂w/∂z.
Notes:
With Γ'(z) = Γ_a'(z) we found a formula for the gradient of Γ.
The size of the adjoint equation does not depend on the dimension of w and in particular not on the number N that defines the grid. But an individual adjoint equation has to be solved for every function whose gradient is required. For the reduced optimal control problem these functions are the objective function, the boundary conditions, and the discretized state constraints.
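The following sketch implements this backward recursion for the explicit Euler scheme with piecewise constant control and a function Γ(z) = γ(X_N(z)) of the final state, using the sign convention λ_N = -∇_{x_N}γ from the reconstruction above; dynamics, Jacobians, and γ are hypothetical illustration data.

```python
import numpy as np

# hypothetical data: x' = -x^2 + u as before, gamma(x) = 0.5 * x[0]**2
f  = lambda t, x, u: -x**2 + u
fx = lambda t, x, u: np.array([[-2.0 * x[0]]])
fu = lambda t, x, u: np.array([[1.0]])
gamma_x = lambda x: np.array([x[0]])              # gradient of gamma w.r.t. x_N

def gradient_by_adjoints(x0, w, t0, h):
    N = len(w)
    xs = [np.array(x0, dtype=float)]
    for i in range(N):                            # forward sweep: store the trajectory
        xs.append(xs[-1] + h * f(t0 + i * h, xs[-1], w[i]))
    lam = -gamma_x(xs[N])                         # transversality: lambda_N = -gamma_x^T
    grad_w = np.zeros(N)
    for i in range(N - 1, -1, -1):                # backward sweep
        # gradient w.r.t. u_i: -h * lambda_{i+1}^T f_u[t_i]
        grad_w[i] = -h * (lam @ fu(t0 + i * h, xs[i], w[i]))[0]
        # adjoint recursion: lambda_i = lambda_{i+1} + h f_x[t_i]^T lambda_{i+1}
        lam = lam + h * fx(t0 + i * h, xs[i], w[i]).T @ lam
    grad_x0 = -lam                                # here gamma_x0 = 0, so dGamma/dx_0 = -lambda_0
    return grad_x0, grad_w

gx0, gw = gradient_by_adjoints([1.0], np.zeros(10), 0.0, 0.1)
```

One forward sweep plus one backward sweep yields the full gradient with respect to (x_0, w) at the cost of a single state-like recursion, independently of how many control parameters there are, in line with the note above.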

111 Gradient Computation by Adjoint Equation
Example: We compare the CPU times for the emergency landing maneuver without dynamic pressure constraint for the sensitivity equation approach and the adjoint equation approach for different values of N.
[Figure: CPU time [s] and CPU time per iteration [s] versus N for the sensitivity equation and the adjoint equation approach]

112 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

113 Optimal Control Problem and its Euler Discretization
Goal: Derive adjoint estimates from the optimal solution of D-OCP. Approach: Compare necessary conditions for OCP and D-OCP.
OCP: Find an absolutely continuous state vector x and an essentially bounded control vector u such that ϕ(x(t_0), x(t_f)) becomes minimal subject to the differential equation
  x'(t) = f(t, x(t), u(t)) a.e. in [t_0, t_f],
the control-state constraints
  c(t, x(t), u(t)) ≤ 0 a.e. in [t_0, t_f],
the pure state constraints
  s(t, x(t)) ≤ 0 in [t_0, t_f],
and the boundary conditions
  ψ(x(t_0), x(t_f)) = 0.
D-OCP: Find x_h = (x_0, ..., x_N) and u_h = (u_0, ..., u_{N-1}) such that ϕ(x_0, x_N) becomes minimal subject to the discretized dynamic constraints
  x_i + h_i f(t_i, x_i, u_i) - x_{i+1} = 0, i = 0, 1, ..., N - 1,
the discretized control-state constraints
  c(t_i, x_i, u_i) ≤ 0, i = 0, 1, ..., N - 1,
the discretized pure state constraints
  s(t_i, x_i) ≤ 0, i = 0, 1, ..., N,
and the boundary conditions
  ψ(x_0, x_N) = 0.

114 Optimal Control Problem
Augmented Hamilton function:
  Ĥ(t, x, u, λ, η) := λ^⊤ f(t, x, u) + η^⊤ c(t, x, u)
Local Minimum Principle: there exist multipliers l_0 ≥ 0, (l_0, σ, λ, η) ≠ 0, such that:
Adjoint differential equation:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
Transversality conditions:
  λ(t_0) = -( l_0 ϕ'_{x_0} + σ^⊤ ψ'_{x_0} )^⊤,  λ(t_f) = ( l_0 ϕ'_{x_f} + σ^⊤ ψ'_{x_f} )^⊤
Stationarity of the augmented Hamilton function: almost everywhere we have
  Ĥ'_u(t, x̂(t), û(t), λ(t), η(t)) = 0
Complementarity conditions:
  η(t) ≥ 0,  η(t)^⊤ c(t, x̂(t), û(t)) = 0,
  ∫_{t_0}^{t_f} s(t, x̂(t))^⊤ dµ(t) = 0,  ∫_{t_0}^{t_f} h(t)^⊤ dµ(t) ≥ 0 for all h ∈ C(I, R^{n_s}), h ≥ 0.

115 Approximation of Adjoints
Theorem (Discrete Minimum Principle)
Assumptions: ϕ, f_0, f, c, s, ψ are continuously differentiable; (x̂_h, û_h) is a local minimum of D-OCP.
Then there exist multipliers (l_0, κ, λ, ζ, ν) ≠ 0 with l_0 ≥ 0 and:
Discrete adjoint equations: for i = 0, ..., N - 1 we have
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
Discrete transversality conditions:
  λ_0 = -( l_0 ϕ'_{x_0}(x̂_0, x̂_N) + κ^⊤ ψ'_{x_0}(x̂_0, x̂_N) )^⊤
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N

116 Approximation of Adjoints
Theorem (Discrete Minimum Principle, continued)
Discrete stationarity conditions: for i = 0, ..., N - 1 we have
  Ĥ'_u(t_i, x̂_i, û_i, λ_{i+1}, ζ_i / h_i) = 0
Discrete complementarity conditions:
  0 ≤ ζ_i,  ζ_i^⊤ c(t_i, x̂_i, û_i) = 0, i = 0, ..., N - 1,
  0 ≤ ν_i,  ν_i^⊤ s(t_i, x̂_i) = 0, i = 0, ..., N.
Proof: evaluation of the Fritz John conditions from nonlinear programming.

117 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Interpretation:

118 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Adjoints, continuous:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
  λ(t_f) = l_0 ϕ'_{x_f}(x(t_0), x(t_f))^⊤ + ψ'_{x_f}(x(t_0), x(t_f))^⊤ σ
Interpretation:

119 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Adjoints, continuous:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
  λ(t_f) = l_0 ϕ'_{x_f}(x(t_0), x(t_f))^⊤ + ψ'_{x_f}(x(t_0), x(t_f))^⊤ σ
Interpretation:
  κ ≈ σ,  λ_i ≈ λ(t_i),  ζ_i ≈ h_i η(t_i)

120 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Adjoints, continuous:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
  λ(t_f) = l_0 ϕ'_{x_f}(x(t_0), x(t_f))^⊤ + ψ'_{x_f}(x(t_0), x(t_f))^⊤ σ
Interpretation:
  κ ≈ σ,  λ_i ≈ λ(t_i),  ζ_i ≈ h_i η(t_i)
  ν_i ≈ µ(t_{i+1}) - µ(t_i), i = 0, ..., N - 1

121 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Adjoints, continuous:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
  λ(t_f) = l_0 ϕ'_{x_f}(x(t_0), x(t_f))^⊤ + ψ'_{x_f}(x(t_0), x(t_f))^⊤ σ
Interpretation:
  κ ≈ σ,  λ_i ≈ λ(t_i),  ζ_i ≈ h_i η(t_i)
  ν_i ≈ µ(t_{i+1}) - µ(t_i), i = 0, ..., N - 1
ν_N is interpreted as jump height at t = t_f, i.e. ν_N ≈ µ(t_f) - µ(t_f-). Note that µ can jump at t_f.

122 Approximation of Adjoints
Comparison with the Minimum Principle
Adjoints, discrete:
  λ_i = λ_N + Σ_{j=i}^{N-1} h_j Ĥ'_x(t_j, x̂_j, û_j, λ_{j+1}, ζ_j / h_j)^⊤ + Σ_{j=i}^{N-1} s'_x(t_j, x̂_j)^⊤ ν_j
  λ_N = l_0 ϕ'_{x_f}(x̂_0, x̂_N)^⊤ + ψ'_{x_f}(x̂_0, x̂_N)^⊤ κ + s'_x(t_N, x̂_N)^⊤ ν_N
Adjoints, continuous:
  λ(t) = λ(t_f) + ∫_t^{t_f} Ĥ'_x(τ, x̂(τ), û(τ), λ(τ), η(τ))^⊤ dτ + ∫_t^{t_f} s'_x(τ, x̂(τ))^⊤ dµ(τ)
  λ(t_f) = l_0 ϕ'_{x_f}(x(t_0), x(t_f))^⊤ + ψ'_{x_f}(x(t_0), x(t_f))^⊤ σ
Interpretation:
  κ ≈ σ,  λ_i ≈ λ(t_i),  ζ_i ≈ h_i η(t_i)
  ν_i ≈ µ(t_{i+1}) - µ(t_i), i = 0, ..., N - 1
ν_N is interpreted as jump height at t = t_f, i.e. ν_N ≈ µ(t_f) - µ(t_f-). Note that µ can jump at t_f.
λ_N is interpreted as the left-sided limit of λ at t_f, i.e. λ_N ≈ λ(t_f-). Note that λ may jump at t_f.

123 Approximation of Adjoints
Comparison with the Minimum Principle
Complementarity conditions, discrete:
  0 ≤ ν_i,  ν_i^⊤ s(t_i, x̂_i) = 0, i = 0, ..., N
Complementarity conditions, continuous:
  ∫_{t_0}^{t_f} s(t, x̂(t))^⊤ dµ(t) = 0,  ∫_{t_0}^{t_f} h(t)^⊤ dµ(t) ≥ 0 for all h ∈ C(I, R^{n_s}), h ≥ 0
Interpretation: for h = (h_0, ..., h_N) ≥ 0 the condition 0 ≤ ν_i ≈ µ(t_{i+1}) - µ(t_i) (monotonicity of µ) implies
  0 ≤ Σ_{i=0}^{N} h_i ν_i ≈ ∫_{t_0}^{t_f} h(t) dµ(t).
Summing up ν_i^⊤ s(t_i, x̂_i) = 0 yields
  0 = Σ_{i=0}^{N} ν_i^⊤ s(t_i, x̂_i) ≈ ∫_{t_0}^{t_f} s(t, x̂(t))^⊤ dµ(t).

124 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

125 Convergence for Nonlinear Problems
order O(h) for mixed control-state constraints and Euler discretization, see [3]; assumptions: u is continuous, ...
order O(h) for mixed control-state and pure state constraints and Euler discretization, see [1]; assumptions: u ∈ W^{1,∞}, ...
second order O(h^2), see [2]; assumptions: u ∈ W^{1,∞} and u' is of bounded variation, ...
order O(h^p) for certain Runge-Kutta methods, see [4]; assumptions: u^{(p-1)} is of bounded variation, ...
[1] A. L. Dontchev, W. W. Hager, and K. Malanowski. Error Bounds for Euler Approximation of a State and Control Constrained Optimal Control Problem. Numerical Functional Analysis and Optimization, 21(5-6).
[2] A. L. Dontchev, W. W. Hager, and V. M. Veliov. Second-Order Runge-Kutta Approximations in Control Constrained Optimal Control. SIAM Journal on Numerical Analysis, 38(1).
[3] K. Malanowski, C. Büskens, and H. Maurer. Convergence of Approximations to Nonlinear Optimal Control Problems. In Anthony Fiacco, editor, Mathematical Programming with Data Perturbations, volume 195, Dekker, Lecture Notes in Pure and Applied Mathematics.
[4] W. W. Hager. Runge-Kutta methods in optimal control and the transformed adjoint system. Numerische Mathematik, 87(2), 2000.

126 Convergence for Linear and Linear-Quadratic Problems
Linear control problems, Euler discretization, see [3]: order O(h) in the L^1-norm for u; assumptions: bang-bang control, ...
Linear control problems, Euler discretization, see [1]: (i) order O(h) in the L^1-norm for u, (ii) order O(√h) in the L^2-norm for u, order O(h) in the L^∞-norm for x; assumptions: no singular arcs, finite number of switches, structural stability, ...
Linear-quadratic control problems, Euler discretization, see [2]: (i) order O(h) in the L^1-norm for u, (ii) order O(h) in the L^∞-norm for x; assumptions: no singular arcs, finite number of switches, structural stability, ...
[1] W. Alt, R. Baier, M. Gerdts, and F. Lempio. Approximation of linear control problems with bang-bang solutions. Optimization: A Journal of Mathematical Programming and Operations Research, 2011.
[2] W. Alt, R. Baier, M. Gerdts, and F. Lempio. Error bounds for Euler approximation of linear-quadratic control problems with bang-bang solutions. Numerical Algebra, Control and Optimization, 2 (2012).
[3] V. M. Veliov. Error analysis of discrete approximations to bang-bang optimal control problems: the linear case. Control Cybern., 34 (2005).

127 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

128 Virtual Test-Drive: Double-Lane-Change Maneuver Boundary: piecewise polynomials

129 Virtual Test-Drive: Double-Lane-Change Maneuver
Minimize t_f + α ∫_0^{t_f} ‖u(t)‖² dt subject to
x'(t) = v(t) cos ψ(t)
y'(t) = v(t) sin ψ(t)
ψ'(t) = v(t) tan δ(t) / l
v'(t) = u_2(t)
δ'(t) = u_1(t)
and
δ(t) ∈ [−30, 30] in [degree]
v(t) ∈ [0, 6] in [m/s]
u_1(t) ∈ [−30, 30] in [degree/s]
u_2(t) ∈ [−1, 1] in [m/s²]
+ initial conditions & track bounds
Notation:
δ: steering angle
v: velocity
ψ: yaw angle
l: distance from rear axle to front axle
(x, y): position of midpoint of rear axle of car
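For illustration, a short Python sketch of this kinematic single-track model (added here; the wheel base value, the state ordering, and the use of radians are assumptions, not taken from the slides) together with one explicit Euler step as used in a full discretization:

```python
import numpy as np

def kinematic_rhs(state, u, l=2.7):
    """Right-hand side of the kinematic single-track model above.
    state = (x, y, psi, v, delta), angles in radians;
    u = (u1, u2) = (steering rate, acceleration); l = wheel base [m]."""
    x, y, psi, v, delta = state
    u1, u2 = u
    return np.array([v * np.cos(psi),        # x'
                     v * np.sin(psi),        # y'
                     v / l * np.tan(delta),  # psi'
                     u2,                     # v'
                     u1])                    # delta'

def euler_step(state, u, h, l=2.7):
    """One explicit Euler step x_{i+1} = x_i + h * f(x_i, u_i)."""
    return state + h * kinematic_rhs(state, u, l)
```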

130 Virtual Test-Drive: Double-Lane-Change Maneuver
Figure: computed solution — state 2 vs. state 1 (path y over x), states 3-5 vs. time t, controls 1-2 vs. time t, adjoints 1-5 vs. time t, parameter 1 vs. time t.

131 A Coupled ODE-PDE Optimal Control Problem
Truck with a water tank: horizontal control force u.
Figure: truck (distance d_T) towing a water tank (distance d_W) of length L with bottom elevation B(x) and water surface B(x) + h(t, x).
Notation: m_T: mass of truck; m_W: mass of water tank; d_T: distance truck; d_W: distance water tank; B(x): bottom elevation of tank; h(t, x): water level relative to bottom at time t and position x; v(t, x): horizontal water velocity at time t and position x; L: length of water tank; c: spring force constant; k: damper force constant.

132 A Coupled ODE-PDE Optimal Control Problem
Coupled ODE-PDE system (Ω := (0, T) × (0, L)):
ODE:
m_T d_T''(t) = u(t) − F(d_T, d_W, d_T', d_W'),  d_T(0) = 0, d_T'(0) = (d_T')_0, d_T(T) = d_T^T
m_W d_W''(t) = F(d_T, d_W, d_T', d_W') + m_W a_W(t),  d_W(0) = d_W^0, d_W'(0) = (d_W')_0, d_W(T) = d_W^T
Saint-Venant:
h_t + (h v)_x = 0,  (t, x) ∈ Ω
v_t + (½ v² + g h)_x = −g B_x − (1/m_W) F(d_T, d_W, d_T', d_W')
i.c./b.c.:
h(0, x) = h_0(x), x ∈ [0, L]
v(0, x) = v_0(x), x ∈ [0, L]
v(t, 0) = v(t, L) = 0, t ∈ [0, T]
h_x(t, 0) = −B_x(0) − (1/(g m_W)) F(d_T, d_W, d_T', d_W')(t), t ∈ [0, T]
h_x(t, L) = −B_x(L) − (1/(g m_W)) F(d_T, d_W, d_T', d_W')(t), t ∈ [0, T]
Spring/damper force and total acceleration of the water tank:
F(d_T, d_W, d_T', d_W') = c (d_T − d_W + d) + k (d_T' − d_W')
a_W(t) = (1/L) ∫_0^L v_t(t, x) dx = −(g/L) (B(L) − B(0)) − (1/m_W) F(d_T, d_W, d_T', d_W') − (g/L) (h(t, L) − h(t, 0))

133 A Coupled ODE-PDE Optimal Control Problem
Approximation: Lax-Friedrichs scheme (1st order in time, 2nd order in space, explicit) on an equidistant grid (obey the CFL condition!)
h_i^n ≈ h(t_n, x_i), v_i^n ≈ v(t_n, x_i), (t_n, x_i) := (n Δt, i Δx) (n = 0,...,N, i = 0,...,M, Δt = T/N, Δx = L/M)
Discretized Saint-Venant equations: for i = 1,...,M−1 and n = 0,...,N−1,
(1/Δt) (h_i^{n+1} − ½ (h_{i+1}^n + h_{i−1}^n)) + (1/(2Δx)) (h_{i+1}^n v_{i+1}^n − h_{i−1}^n v_{i−1}^n) = 0,
(1/Δt) (v_i^{n+1} − ½ (v_{i+1}^n + v_{i−1}^n)) + (1/(2Δx)) (½ (v_{i+1}^n)² + g h_{i+1}^n − ½ (v_{i−1}^n)² − g h_{i−1}^n) = −g B_x(x_i) − (1/m_W) F(d_T(t_n), d_W(t_n), d_T'(t_n), d_W'(t_n)),
+ boundary and initial conditions.
Optimal control problem (braking maneuver): Minimize α_0 T + α_1 (d_T'(T)² + d_W'(T)²) (T free) subject to the discretized Saint-Venant equations, truck dynamics, and control bounds.
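A minimal Python sketch of one Lax-Friedrichs step as written above (illustrative only; the source term of the momentum equation is passed in by the caller, so the coupling force F and the sign conventions are not hard-coded, and the boundary values at x = 0, L must be set separately from the boundary conditions):

```python
import numpy as np

def lax_friedrichs_step(h, v, dt, dx, g=9.81, source=0.0):
    """Advance water level h and velocity v by one time step.
    Only interior points i = 1,...,M-1 are updated; `source` is the
    right-hand side of the discretized momentum equation at time t_n
    (e.g. -g*B_x(x_i) - F/m_W), given as a scalar or an array on the grid."""
    lam = dt / (2.0 * dx)
    q1 = h * v                 # mass flux h*v
    q2 = 0.5 * v**2 + g * h    # momentum "flux" v^2/2 + g*h
    h_new, v_new = h.copy(), v.copy()
    h_new[1:-1] = 0.5 * (h[2:] + h[:-2]) - lam * (q1[2:] - q1[:-2])
    v_new[1:-1] = (0.5 * (v[2:] + v[:-2]) - lam * (q2[2:] - q2[:-2])
                   + dt * (source if np.isscalar(source) else source[1:-1]))
    return h_new, v_new
```

For this explicit scheme the CFL condition mentioned above roughly requires Δt ≤ Δx / max(|v| + √(g h)).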

134 A Coupled ODE-PDE Optimal Control Problem
Results: figures of the water level h [m] over x and time, the water velocity v over x and time, the coupling force F [N] over time, and the control u [N] over time.
(α_0 = 0.1, α_1 = 100, d_T(T) = 100, d_W(T) = 95, d_T'(0) = 10, d_W'(0) = 10, N = 600, M = 20, L = 4, m_T = 2000, m_W = 4000, h_0 = 1, c = 40000, k = 10000, g = 9.81)

135 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

136 Optimal Control Problem: Mixed-Integer OCP
Minimize φ(x(t_0), x(t_f)) s.t.
ẋ(t) − f(t, x(t), u(t), v(t)) = 0
s(t, x(t)) ≤ 0
ψ(x(t_0), x(t_f)) = 0
u(t) ∈ U
v(t) ∈ V
Notation:
state: x ∈ W^{1,∞}([t_0, t_f], R^{n_x})
controls: u ∈ L^∞([t_0, t_f], R^{n_u}), v ∈ L^∞([t_0, t_f], R^{n_v})
U ⊆ R^{n_u} convex and closed, V = {v^1,...,v^M} discrete

137 Solution Approaches
Indirect approach: exploit necessary optimality conditions (minimum principle)
Direct discretization approaches:
(a) variable time transformation [Lee, Teo, Jennings, Rehbock, G., ...]
(b) Branch & Bound [von Stryk, G., ...]
(c) sum-up-rounding strategy [Sager]
(d) stochastic/heuristic optimization [Schlüter/G., ...]

138 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

139 Discretization
Main grid: G_h: t_i = t_0 + i h, i = 0,...,N, h = (t_f − t_0)/N
Minor grid: G_{h,M}: τ_{i,j} = t_i + j h/M, j = 0,...,M, i = 0,...,N−1
M = number of discrete values in V = {v^1,...,v^M}
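As a small illustration (Python; the function name and array layout are chosen here), both grids can be generated as follows:

```python
import numpy as np

def grids(t0, tf, N, M):
    """Main grid t_i = t0 + i*h and minor grid tau_{i,j} = t_i + j*h/M."""
    h = (tf - t0) / N
    t = t0 + h * np.arange(N + 1)                   # t_0, ..., t_N
    tau = t[:N, None] + (h / M) * np.arange(M + 1)  # shape (N, M+1): tau_{i,j}
    return t, tau

# example: t, tau = grids(0.0, 1.0, N=10, M=3)
```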

140 Idea
Replace the discrete control v by a fixed and piecewise constant function on the minor grid according to
v_{G_{h,M}}(τ) = v^j for τ ∈ [τ_{i,j−1}, τ_{i,j}), i = 0,...,N−1, j = 1,...,M.
Figure: piecewise constant function taking the values v^1,...,v^M on consecutive minor intervals of each major interval [t_i, t_{i+1}].

141 Idea: Variable Time Transformation
Variable time transformation:
t = t(τ), t(τ) := t_0 + ∫_{t_0}^{τ} w(s) ds, τ ∈ [t_0, t_f], and t_f − t_0 = ∫_{t_0}^{t_f} w(s) ds.
Remark: w is the speed of running through [t_0, t_f]:
dt/dτ = w(τ), τ ∈ [t_0, t_f].
If w(τ) = 0 in [τ_{i,j}, τ_{i,j+1}), then [t(τ_{i,j}), t(τ_{i,j+1})] shrinks to {t(τ_{i,j})}.

142 Time Transformation
Figure: piecewise constant speed w(τ) on the minor grid and the resulting transformed time t(τ); minor intervals with w = 0 are mapped to single points of the original time axis t_i, ..., t_{i+2}.

143 New Control
Consider w as a new control subject to the restrictions:
w(τ) ≥ 0 for all τ (no running back in time),
w(τ) piecewise constant on the minor grid G_{h,M},
major grid points are invariant under the time transformation: ∫_{t_i}^{t_{i+1}} w(τ) dτ = t_{i+1} − t_i = h, i = 0,...,N−1.
Control set:
W := { w ∈ L^∞([t_0, t_f], R) : w(τ) ≥ 0, w piecewise constant on G_{h,M}, ∫_{t_i}^{t_{i+1}} w(τ) dτ = t_{i+1} − t_i for all i }
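A sketch in Python (illustrative; the array shape of w is an assumption made here) of how a piecewise constant speed w on the minor grid induces the transformed time t(τ); the row-sum check encodes the invariance of the major grid points:

```python
import numpy as np

def transformed_time(w, t0, tf):
    """w has shape (N, M): one value per minor interval [tau_{i,j-1}, tau_{i,j}].
    Returns t(tau) at all minor grid points, obtained from
    t(tau) = t0 + integral of w, i.e. cumulative sums of w * h/M."""
    N, M = w.shape
    h = (tf - t0) / N
    # each row must satisfy sum_j w[i, j] * h/M = h  (major grid invariance)
    assert np.allclose(w.sum(axis=1) * h / M, h)
    increments = w.reshape(-1) * (h / M)
    return t0 + np.concatenate(([0.0], np.cumsum(increments)))
```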

144 Backtransformation
Figure: the fixed discrete control v_{G_{h,M}}(τ), the speed w(τ), and the corresponding control v(s) = v_{G_{h,M}}(t^{-1}(s)); minor intervals on which w vanishes collapse to single points of the original time axis.

145 Transformed Optimal Control Problem TOCP
Minimize φ(x(t_0), x(t_f)) s.t.
ẋ(τ) − w(τ) f(τ, x(τ), u(τ), v_{G_{h,M}}(τ)) = 0
s(τ, x(τ)) ≤ 0
ψ(x(t_0), x(t_f)) = 0
u(τ) ∈ U
w ∈ W
Remarks:
If w(τ) ≡ 0 in [τ_{i,j}, τ_{i,j+1}], then x remains constant therein!
v_{G_{h,M}} is the fixed function defined before!
TOCP has only continuous controls, no discrete controls anymore!

146 Virtual Test-Drives: Optimal Control Problem
Simulation of Test-Drives:
model of automobile → differential equations
model of test-course → state constraints, boundary conditions
model of driver → optimal control problem

147 Single-Track Model
Usage: basic investigation of lateral dynamics (up to 0.4 g), controller design.
Figure: single-track model with geometric parameters l_f, l_r, e_SP, slip angles α_f, α_r, wheel velocities v_f, v_r, vehicle velocity v, side-slip angle β, yaw angle ψ, steering angle δ, position (x, y), and tire forces F_lf, F_sf, F_lr, F_sr.

148 Equations of Motion
ẋ = v cos(ψ − β)
ẏ = v sin(ψ − β)
v̇ = [ (F_lr − F_Ax) cos β + F_lf cos(δ + β) − (F_sr − F_Ay) sin β − F_sf sin(δ + β) ] / m
β̇ = w_z − [ (F_lr − F_Ax) sin β + F_lf sin(δ + β) + (F_sr − F_Ay) cos β + F_sf cos(δ + β) ] / (m v)
ψ̇ = w_z
ẇ_z = [ F_sf l_f cos δ − F_sr l_r − F_Ay e_SP + F_lf l_f sin δ ] / I_zz
δ̇ = w_δ
Lateral tire forces F_sf, F_sr: magic formula [Pacejka 93]
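For illustration, a direct Python transcription of these equations of motion (a sketch only: all tire and aerodynamic forces F_lf, F_lr, F_sf, F_sr, F_Ax, F_Ay are supplied by the caller, e.g. from the magic formula, and the state ordering is chosen here):

```python
import numpy as np

def single_track_rhs(state, w_delta, forces, m, Izz, lf, lr, eSP):
    """Equations of motion as written above.
    state  = (x, y, v, beta, psi, wz, delta)
    forces = (F_lf, F_lr, F_sf, F_sr, F_Ax, F_Ay)"""
    x, y, v, beta, psi, wz, delta = state
    F_lf, F_lr, F_sf, F_sr, F_Ax, F_Ay = forces
    dx = v * np.cos(psi - beta)
    dy = v * np.sin(psi - beta)
    dv = ((F_lr - F_Ax) * np.cos(beta) + F_lf * np.cos(delta + beta)
          - (F_sr - F_Ay) * np.sin(beta) - F_sf * np.sin(delta + beta)) / m
    dbeta = wz - ((F_lr - F_Ax) * np.sin(beta) + F_lf * np.sin(delta + beta)
                  + (F_sr - F_Ay) * np.cos(beta)
                  + F_sf * np.cos(delta + beta)) / (m * v)
    dpsi = wz
    dwz = (F_sf * lf * np.cos(delta) - F_sr * lr
           - F_Ay * eSP + F_lf * lf * np.sin(delta)) / Izz
    ddelta = w_delta
    return np.array([dx, dy, dv, dbeta, dpsi, dwz, ddelta])
```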

149 Equations of Motion: Details
Longitudinal tire forces (rear wheel drive):
F_lf = −F_Bf − F_Rf,  F_lr = M_wheel(φ, μ)/R − F_Br − F_Rr
(F_Rf, F_Rr: rolling resistances front/rear)
Power train torque [Neculau 92]:
M_wheel(φ, μ) = i_g(μ) · i_t · M_mot(φ, μ)
(i_g: gear transmission, i_t: motor torque transmission, M_mot: motor torque)
Controls:
w_δ ∈ [w_min, w_max]: steering angle velocity
F_Bf, F_Br ∈ [F_min, F_max]: braking forces front/rear
φ ∈ [0, 1]: gas pedal position
μ ∈ {1, 2, 3, 4, 5}: gear

150 Result: 20 grid points
Figures: steering angle velocity w_δ and gear shift μ over time. Braking force: F_B(t) ≡ 0; gas pedal position: φ(t) ≡ 1.
Complete enumeration (1 [s] to solve DOCP): ... years
Branch & Bound: 23 m 52 s, objective value: ...
Transformation: 2 m ... s, objective value: ...

151 Result: 40 grid points
Figures: steering angle velocity w_δ and gear shift μ over time. Braking force: F_B(t) ≡ 0; gas pedal position: φ(t) ≡ 1.
Complete enumeration (1 [s] to solve DOCP): ... years
Branch & Bound: 232 h 25 m 31 s, objective value: ...
Transformation: 9 m ... s, objective value: ...

152 Result: 80 grid points
Figures: steering angle velocity w_δ and gear shift μ over time. Braking force: F_B(t) ≡ 0; gas pedal position: φ(t) ≡ 1.
65 m ... s, objective value: ...

153 F8 Aircraft
Minimize T subject to
ẋ_1 = −0.877 x_1 + x_3 − 0.088 x_1 x_3 + 0.47 x_1² − 0.019 x_2² − x_1² x_3 + 3.846 x_1³ − 0.215 v + 0.28 x_1² v + 0.47 x_1 v² + 0.63 v³
ẋ_2 = x_3
ẋ_3 = −4.208 x_1 − 0.396 x_3 − 0.47 x_1² − 3.564 x_1³ − 20.967 v + 6.265 x_1² v + 46 x_1 v² + 61.4 v³
v ∈ {−0.05236, 0.05236}
x(0) = (0.4655, 0, 0), x(T) = (0, 0, 0)
Source: MIOCP benchmark collection by Sebastian Sager
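A Python sketch of the F8 dynamics (added here for illustration; the coefficients follow the standard F-8 benchmark formulation from the cited collection and are used to spell out digits that are hard to read in the transcription above, so they should be checked against the original source):

```python
import numpy as np

def f8_rhs(t, x, v):
    """F-8 aircraft dynamics; v is the bang-bang control taking one of the
    two admissible values above."""
    x1, x2, x3 = x
    dx1 = (-0.877 * x1 + x3 - 0.088 * x1 * x3 + 0.47 * x1**2 - 0.019 * x2**2
           - x1**2 * x3 + 3.846 * x1**3
           - 0.215 * v + 0.28 * x1**2 * v + 0.47 * x1 * v**2 + 0.63 * v**3)
    dx2 = x3
    dx3 = (-4.208 * x1 - 0.396 * x3 - 0.47 * x1**2 - 3.564 * x1**3
           - 20.967 * v + 6.265 * x1**2 * v + 46.0 * x1 * v**2 + 61.4 * v**3)
    return np.array([dx1, dx2, dx3])
```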

154 F8 Aircraft (N = 500, T = ...)
Figure: states 1-3 and the control v (normalized values) over normalized time.

155 Lotka-Volterra Fishing Problem
Minimize ∫_0^{12} (x_1(t) − 1)² + (x_2(t) − 1)² dt subject to
ẋ_1(t) = x_1(t) − x_1(t) x_2(t) − 0.4 x_1(t) v(t)
ẋ_2(t) = −x_2(t) + x_1(t) x_2(t) − 0.2 x_2(t) v(t)
v(t) ∈ {0, 1}
x(0) = (0.5, 0.7)
Source: MIOCP benchmark collection by Sebastian Sager
Background: optimal fishing strategy on a fixed time horizon to bring the biomasses of both predator and prey fish to a prescribed steady state.
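A short simulation sketch (Python with SciPy, illustrative only) of the controlled Lotka-Volterra dynamics; in the relaxed problem v may take any value in [0, 1] instead of {0, 1}:

```python
from scipy.integrate import solve_ivp

def lotka_volterra_rhs(t, x, v):
    """Predator-prey dynamics with fishing control v (v = 0: no fishing)."""
    x1, x2 = x
    return [x1 - x1 * x2 - 0.4 * x1 * v,
            -x2 + x1 * x2 - 0.2 * x2 * v]

# uncontrolled trajectory on [0, 12] from the initial state of the slide
sol = solve_ivp(lotka_volterra_rhs, (0.0, 12.0), [0.5, 0.7],
                args=(0.0,), max_step=0.05)
```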

156 Lotka-Volterra Fishing Problem: Solutions
Figures: relaxed solution, solution without switching costs, and solution with switching costs (control v1 and states x1, x2 over time t).

157 References
[1] M. Gerdts. Solving mixed-integer optimal control problems by Branch&Bound: a case study from automobile test-driving with gear shift. Optimal Control Applications and Methods, 26(1):1-18.
[2] M. Gerdts. A variable time transformation method for mixed-integer optimal control problems. Optimal Control Applications and Methods, 27(3).
[3] S. Sager. Numerical methods for mixed-integer optimal control problems. PhD thesis, Naturwissenschaftlich-Mathematische Gesamtfakultät, Universität Heidelberg, Heidelberg, Germany.
[4] S. Sager. MIOCP benchmark site.
[5] S. Sager, H. G. Bock, and G. Reinelt. Direct methods with maximal lower bound for mixed-integer optimal control problems. Mathematical Programming (A), 118(1).
[6] H. W. J. Lee, K. L. Teo, V. Rehbock, and L. S. Jennings. Control parametrization enhancing technique for optimal discrete-valued control problems. Automatica, 35(8).
[7] A. Siburian. Numerical Methods for Robust, Singular and Discrete Valued Optimal Control Problems. PhD thesis, Curtin University of Technology, Perth, Australia, 2004.

158 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

159 Parametric Optimization
NLP(p): Minimize f(z, p) s.t. g_i(z, p) = 0, i = 1,...,n_E, g_i(z, p) ≤ 0, i = n_E + 1,...,n_g
Notation:
f, g_i : R^{n_z} × R^{n_p} → R, i = 1,...,n_g, twice continuously differentiable
parameter p ∈ R^{n_p} (not an optimization variable!)
Active inequalities: I(z, p) := { i ∈ {n_E + 1,...,n_g} : g_i(z, p) = 0 }
Active set: A(z, p) := {1,...,n_E} ∪ I(z, p)

164 Parametric Optimization: Definition
z* is a strongly regular local minimum of NLP(p_0) iff
(a) z* is feasible.
(b) z* fulfills the linear independence constraint qualification (LICQ).
(c) The KKT conditions hold at (z*, λ*) with Lagrange multiplier λ*.
(d) The strict complementarity condition holds: λ*_i − g_i(z*, p_0) > 0 for all i = n_E + 1,...,n_g.
(e) The second-order sufficient condition holds: L''_zz(z*, λ*, p_0)(d, d) > 0 for all 0 ≠ d ∈ T_C(z*, p_0) with the critical cone
T_C(z, p) := { d ∈ R^{n_z} : g'_{i,z}(z, p)(d) ≤ 0 for i ∈ I(z, p) with λ_i = 0; g'_{i,z}(z, p)(d) = 0 for i ∈ I(z, p) with λ_i > 0; g'_{i,z}(z, p)(d) = 0 for i ∈ {1,...,n_E} }.
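As a small constructed illustration of these conditions (an example added here, not taken from the slides), consider the scalar problem

```latex
\min_{z \in \mathbb{R}} \; f(z,p) = \tfrac{1}{2}(z-p)^2
\quad \text{s.t.} \quad g(z,p) = 1 - z \le 0 .
```

For p_0 = 0 the minimizer is z* = 1 with multiplier λ* = 1 > 0 (from stationarity of L(z, λ, p) = ½(z − p)² + λ(1 − z)), so LICQ, strict complementarity and the second-order condition (L''_zz = 1 > 0) hold and z* is strongly regular. For p near p_0 the constraint stays active, z(p) ≡ 1, λ(p) = 1 − p, and dz/dp(p_0) = 0; at p = 1 strict complementarity fails and the active set changes (z(p) = p for p > 1), which is exactly where the sensitivity theorem below stops applying.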

169 Parametric Optimization
Sensitivity Theorem [Fiacco 83]: Let z* be a strongly regular local minimum of NLP(p_0) for the nominal parameter p_0. Then there exist neighborhoods B_ε(p_0) and B_δ(z*, λ*) such that:
NLP(p) has a unique strongly regular local minimum (z(p), λ(p)) ∈ B_δ(z*, λ*) for each p ∈ B_ε(p_0).
z(p) and λ(p) are continuously differentiable with respect to p.
The active set remains unchanged for each p ∈ B_ε(p_0): A(z*, p_0) = A(z(p), p).
Proof: implicit function theorem + stability of LICQ, the critical cone, and the second-order sufficient condition.
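The sensitivity differentials themselves can be obtained from the linearized KKT system, a standard consequence of the implicit function theorem used in the proof (a sketch; notation as above with A := A(z*, p_0) the active set and L the Lagrangian, and the multipliers of inactive constraints have zero sensitivity):

```latex
\begin{pmatrix}
L''_{zz}(z^*,\lambda^*,p_0) & g'_{A,z}(z^*,p_0)^{\top} \\
g'_{A,z}(z^*,p_0) & 0
\end{pmatrix}
\begin{pmatrix}
\frac{dz}{dp}(p_0) \\[1mm]
\frac{d\lambda_A}{dp}(p_0)
\end{pmatrix}
= -
\begin{pmatrix}
L''_{zp}(z^*,\lambda^*,p_0) \\
g'_{A,p}(z^*,p_0)
\end{pmatrix}
```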

170 Applications
dependence of consistent initial values on parameters in DAEs
computation of gradients for bilevel optimization problems, e.g. Stackelberg games, path planning under safety requirements (air races)
parametric sensitivity analysis of reachable sets
model-predictive control: updates without re-optimization
see Vryan Palma's talk on Wednesday, September 11, 9:40-10:05

178 Realtime Optimization [Büskens, Maurer]
Real-Time Approximation for NLP(p)
Given (offline): the nominal parameter p_0, a strongly regular local minimum z*, and the sensitivity differentials dz/dp(p_0).
Taylor approximation / real-time approximation (online): for a perturbed parameter p ≈ p_0 compute
z̃(p) := z(p_0) + dz/dp(p_0)(p − p_0)
and use z̃(p) as an approximation of z(p).
Limitations:
approximation error ‖z̃(p) − z(p)‖ = o(‖p − p_0‖)
constraint violation due to linearization (projection required if feasibility is an issue)
linearization only justified in the neighborhood B_ε(p_0), same active set

183 Corrector Iteration for Feasibility [Büskens]
NLP(p): Minimize f(z, p) s.t. g(z, p) = 0
Realtime approximation: z^[0](p) := z̃(p) = z(p_0) + dz/dp(p_0)(p − p_0)
Introducing z̃(p) into the constraints yields the constraint violation ε^[0] := g(z̃(p), p).
Idea of the corrector iteration:
perform a sensitivity analysis with respect to ε for the nominal parameter ε_0 = 0
apply the correction (fixed point iteration):
z^[i+1](p) := z^[i](p) − dz/dε(p_0, ε_0) ε^[i],  ε^[i] := g(z^[i](p), p)
(note the minus sign!)
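A Python sketch of the real-time approximation followed by the corrector iteration (illustrative; all names and shapes are chosen here, and g, dz_dp, dz_deps are assumed to come from the offline sensitivity analysis):

```python
def realtime_with_corrector(z_nom, dz_dp, p, p0, g, dz_deps, iterations=2):
    """First-order update z(p) ~ z_nom + dz/dp (p - p0), followed by the
    fixed point correction z <- z - dz/deps * g(z, p)."""
    z = z_nom + dz_dp @ (p - p0)
    for _ in range(iterations):
        eps = g(z, p)              # current constraint violation
        z = z - dz_deps @ eps      # note the minus sign
    return z
```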

184 Contents Direct Discretization Methods Full Discretization Reduced Discretization and Shooting Methods Adjoint Estimation Convergence Examples Mixed-Integer Optimal Control Variable Time Transformation Parametric Sensitivity Analysis and Realtime-Optimization Parametric Optimal Control Problems

185 Optimal Control Problem
OCP(p): Minimize φ(x(t_0), x(t_f), p) s.t.
x'(t) − f(x(t), u(t), p) = 0 a.e. in [t_0, t_f]
c(x(t), u(t), p) ≤ 0 a.e. in [t_0, t_f]
ψ(x(t_0), x(t_f), p) = 0
Notation:
state: x ∈ W^{1,∞}([t_0, t_f], R^{n_x})
control: u ∈ L^∞([t_0, t_f], R^{n_u})
parameter: p (not an optimization variable!)

187 Realtime Optimization [Büskens, Maurer]
Real-Time Approximation for DOCP(p)
Given (offline): the nominal parameter p_0, a strongly regular local minimum z* of DOCP(p_0), and the sensitivity differentials dz/dp(p_0).
Taylor approximation / real-time approximation (online): for a perturbed parameter p ≈ p_0 compute
x̃_1(p) := x_1(p_0) + dx_1/dp(p_0)(p − p_0),  ũ_i(p) := u_i(p_0) + du_i/dp(p_0)(p − p_0),
and use x̃_1(p) as an approximation of the initial value x_1(p) and ũ_i(p) as an approximate optimal control for the parameter p.

188 Real-time optimal control: Application to DOCP(p)
Open-loop scheme: the nominal solution and its sensitivities are combined online with the measured parameter p to form the real-time control u(t, p), which is applied to the plant ẋ = f(x, u, p) with state x(t).
Open-loop real-time update formula (p: perturbation of p_0), online:
u_i(p) ≈ u_i(p_0) + du_i/dp(p_0)(p − p_0), i = 0,...,N
Real-time control approximation: piecewise linear/constant with u_i(p).

189 Example: Pendulum
Optimal control problem: Minimize ∫_0^{4.5} u(t)² dt subject to
ẋ_1(t) = x_3(t) − 2 x_1(t) y_2(t)
ẋ_2(t) = x_4(t) − 2 x_2(t) y_2(t)
m ẋ_3(t) = −2 x_1(t) y_1(t) + u(t) x_2(t)/l
m ẋ_4(t) = −m g − 2 x_2(t) y_1(t) − u(t) x_1(t)/l
0 = x_1(t)² + x_2(t)² − l²
0 = x_1(t) x_3(t) + x_2(t) x_4(t)
x_1(0) = x_2(0) = 1/√2, x_3(0) = x_4(0) = 0, x_1(4.5) = x_3(4.5) = 0
and −1 ≤ u(t) ≤ 1

190 Example: Pendulum
Figure: nominal control u and the sensitivities du/dm and du/dg over time [s] (nominal values m = 1, g = 9.81); smallest eigenvalue of the reduced Hessian for N = 400.

191 Full Car Model BMW 1800/2000 [von Heydenaber 80]
ẋ(t) = f(x(t), y(t), u(t))
0 = g(x(t), y(t), u(t))
Notation:
state: x(t) ∈ R^37, y(t) ∈ R^4
control: u(t) (steering angle velocity)
DAE: index 1, semi-explicit, g_y non-singular, piecewise defined dynamics (wheel ground contact: yes/no)

192 Real-time: BMW (10 %, 30 % perturbation)

193 Real-time optimal control: Numerical Results
Objective: linear combination of final time and steering effort (to be minimized)
Initial velocity: 25 [m/s]
Real-time parameter: p_0 = (offset 3.5 [m] (enters the state constraints), centre of gravity 0.56 [m] (enters the car's dynamics))
Nominal control: see figure
Sensitivity differentials for t_f: dt_f/dp(p_0) = (...); sensitivity differentials for u: see figure
Accuracy: optimality/feasibility 10^{-11}, DSRTSE (10^{-11} diff., 10^{-4} alg.), N = 81, M = 1

194 Real-time optimal control: Numerical Results
Errors (table): perturbation [%] | objective (abs./rel.) | boundary conditions #1/#2 (abs. max.) | state constraint (abs. max.) | control (abs. / abs. L2) | structure of control (diff. *) or eq.)
*) Figure: nominal control (left) and optimal control for a perturbation of 5% (right).

195 Emergency Landing Manoeuvre in Realtime
Scenario: propulsion system breakdown
Goal: maximization of range w.r.t. the current position
Controls: lift coefficient, angle of bank
No thrust available; no fuel consumption (constant mass)

196 References
[1] A. V. Fiacco. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, volume 165 of Mathematics in Science and Engineering. Academic Press, New York.
[2] J. F. Bonnans and A. Shapiro. Perturbation Analysis of Optimization Problems. Springer Series in Operations Research. Springer, New York.
[3] K. Malanowski and H. Maurer. Sensitivity analysis for parametric control problems with control-state constraints. Computational Optimization and Applications, 5(3).
[4] K. Malanowski and H. Maurer. Sensitivity analysis for optimal control problems subject to higher order state constraints. Annals of Operations Research, 101:43-73.
[5] H. Maurer and D. Augustin. Sensitivity analysis and real-time control of parametric optimal control problems using boundary value methods. In M. Grötschel, S. O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems. Springer.
[6] H. Maurer and H. J. Pesch. Solution differentiability for parametric nonlinear control problems with control-state constraints. Journal of Optimization Theory and Applications, 86(2).
[7] C. Büskens. Real-time solutions for perturbed optimal control problems by a mixed open- and closed-loop strategy. In M. Grötschel, S. O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems. Springer.
[8] C. Büskens and H. Maurer. Sensitivity analysis and real-time control of parametric optimal control problems using nonlinear programming methods. In M. Grötschel, S. O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems. Springer.
[9] H. J. Pesch. Numerical computation of neighboring optimum feedback control schemes in real-time. Applied Mathematics and Optimization, 5.
[10] H. J. Pesch. Real-time computation of feedback controls for constrained optimal control problems, I, II. Optimal Control Applications and Methods, 10(2), 1989.

197 References
J. T. Betts. Practical Methods for Optimal Control Using Nonlinear Programming, volume 3 of Advances in Design and Control. SIAM, Philadelphia.
M. Gerdts. Optimal Control of ODEs and DAEs. DeGruyter, Berlin, 2011.

198 Numerical Optimal Control Part 2: Discretization techniques, structure exploitation, calculation of gradients
Thanks for your Attention! Questions?
Further information:
Fotos: München: Magnus Manske (Panorama), Luidger (Theatinerkirche), Kurmis (Chin. Turm), Arad Mojtahedi (Olympiapark), Max-k (Deutsches Museum), Oliver Raupach (Friedensengel), Andreas Praefcke (Nationaltheater)

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 9 Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 9 Initial Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

More information

Direct Transcription of Optimal Control Problems with Finite Elements on Bernstein Basis

Direct Transcription of Optimal Control Problems with Finite Elements on Bernstein Basis Direct Transcription of Optimal Control Problems with Finite Elements on Bernstein Basis Lorenzo A. Ricciardi, Massimiliano Vasile University of Strathclyde, Glasgow, United Kingdom, G1 1XJ The paper introduces

More information

Lecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 3: Interpolation and Polynomial Approximation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 10, 2015 2 Contents 1.1 Introduction................................ 3 1.1.1

More information

Consistency and Convergence

Consistency and Convergence Jim Lambers MAT 77 Fall Semester 010-11 Lecture 0 Notes These notes correspond to Sections 1.3, 1.4 and 1.5 in the text. Consistency and Convergence We have learned that the numerical solution obtained

More information

An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations.

An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations. An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations by Tong Chen A thesis submitted in conformity with the requirements

More information

THE ESA NLP-SOLVER WORHP RECENT DEVELOPMENTS AND APPLICATIONS

THE ESA NLP-SOLVER WORHP RECENT DEVELOPMENTS AND APPLICATIONS THE ESA NLP-SOLVER WORHP RECENT DEVELOPMENTS AND APPLICATIONS Dennis Wassel, Florian Wolff, Jan Vogelsang, and Christof Büskens Center for Industrial Mathematics, University of Bremen, PO Box 33 04 40,

More information

Kybernetika. Terms of use: Persistent URL: Institute of Information Theory and Automation AS CR, 2015

Kybernetika. Terms of use: Persistent URL:   Institute of Information Theory and Automation AS CR, 2015 Kybernetika Yousef Edrisi Tabriz; Mehrdad Lakestani Direct solution of nonlinear constrained quadratic optimal control problems using B-spline functions Kybernetika, Vol. 5 (5), No., 8 98 Persistent URL:

More information

Suboptimal Open-loop Control Using POD. Stefan Volkwein

Suboptimal Open-loop Control Using POD. Stefan Volkwein Institute for Mathematics and Scientific Computing University of Graz, Austria PhD program in Mathematics for Technology Catania, May 22, 2007 Motivation Optimal control of evolution problems: min J(y,

More information

Optimal control and applications to aerospace: some results and challenges

Optimal control and applications to aerospace: some results and challenges Optimal control and applications to aerospace: some results and challenges E. Trélat Abstract This article surveys the classical techniques of nonlinear optimal control such as the Pontryagin Maximum Principle

More information

Ordinary Differential Equations

Ordinary Differential Equations Ordinary Differential Equations Professor Dr. E F Toro Laboratory of Applied Mathematics University of Trento, Italy eleuterio.toro@unitn.it http://www.ing.unitn.it/toro September 19, 2014 1 / 55 Motivation

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

Integration, differentiation, and root finding. Phys 420/580 Lecture 7

Integration, differentiation, and root finding. Phys 420/580 Lecture 7 Integration, differentiation, and root finding Phys 420/580 Lecture 7 Numerical integration Compute an approximation to the definite integral I = b Find area under the curve in the interval Trapezoid Rule:

More information