Convergence of the Forward-Backward Sweep Method in Optimal Control


Michael McAsey$^a$, Libin Mou$^a$, Weimin Han$^b$

$^a$ Department of Mathematics, Bradley University, Peoria, IL
$^b$ Department of Mathematics, University of Iowa, Iowa City, IA

Abstract

The Forward-Backward Sweep Method is a numerical technique for solving optimal control problems. The technique is reviewed and a convergence theorem is proved for a basic type of optimal control problem. Examples illustrate the performance of the method.

Keywords: Optimal control, Numerical solution, Convergence

1 Introduction

Optimal control problems are often difficult to solve explicitly. Yet when either learning or teaching optimal control it is helpful to have some examples with closed-form solutions. It is also useful to have a simple numerical scheme that can produce approximate solutions of problems for which closed-form solutions are not available. In their textbook [20], Lenhart and Workman provide just that. The Forward-Backward Sweep Method (FBSM) in [20] is easy to program and runs quickly. The method is designed to solve the differential-algebraic system generated by the Maximum Principle that characterizes the solution. A detailed convergence analysis of the method is not appropriate for the intended audience of [20], but after seeing the method work on problems from several disciplines, it is natural to ask about its convergence properties. In this paper we prove a convergence result for the method applied to a very basic class of problems.

Corresponding author: Michael McAsey, Department of Mathematics, Bradley University, Peoria, IL 61625, mcasey@bradley.edu.

The literature on numerical solutions of optimal control problems is large. To put some of it into perspective, consider a basic problem: choose a control $u(t)$ to optimize an integral $\int_{t_0}^{T} f(t, x(t), u(t))\,dt$ subject to a differential equation constraint $x'(t) = g(t, x(t), u(t))$, $x(t_0) = x_0$. The main analytical technique is provided by Pontryagin's Maximum Principle, which gives necessary conditions that the control $u(t)$ and the state $x(t)$ must satisfy. These conditions can be solved explicitly in a few examples. However, for most problems, especially problems that also involve additional constraints on the state or control, the conditions are too algebraically involved to be solved explicitly. So numerical approaches are used to construct approximations to the solutions. Useful surveys on numerical methods can be found in texts, articles, and introductions to articles; examples include [5], [6], [7], [8], and [12].

Numerical techniques for optimal control problems can often be classified as either direct or indirect. For a direct method, the differential equation and the integral are discretized and the problem is converted into a nonlinear programming problem. Many choices are available for discretizing the integral and the differential equation, and for solving the resulting nonlinear programming problem, leading to several different numerical methods. For example, in an early paper [15], Hager considers an unconstrained problem to minimize the final state value subject to the differential equation $x' = f(x(t), u(t))$. The paper treats discretizations by both one-step and multistep approximations. Dontchev, Hager, and co-authors have produced not only convergence results but also rates of convergence for direct techniques on problems that include state and control constraints; for a sample, see [9], [10], [11], [12], [15].

Indirect methods approximate solutions to optimal control problems by numerically solving the boundary value problem for the differential-algebraic system generated by the Maximum Principle. Techniques for solving boundary value problems can be found in the venerable book by Keller [18] and include shooting, finite difference, and collocation methods. More recently, Iserles's book [17] solves boundary value problems via finite element methods. The addition of an algebraic constraint or of state/control constraints presents additional difficulties that do not appear in classical boundary value problems. Relevant for the present paper is work by Hackbusch [14] that approximates solutions to boundary value problems with two parabolic equations. The books [1], [16], [19] have extensive treatments of differential-algebraic systems, concentrating more on initial value problems than on boundary value problems. The paper by Bertolazzi et al. [4] has an informative introduction, highlighting the various numerical techniques in optimal control and their advantages and disadvantages.

The idea exploited by the FBSM can be seen in the way one of the equations is solved in a forward direction and the other is solved backwards with updates from the first. An early reference to a technique that has the forward-backward flavor is [21], where the update step is different from the one considered here.

In [13] Enright and Muir use both explicit and implicit Runge-Kutta methods (and an average of the two) for two-point boundary value problems, an approach that also has some of the flavor of the FBSM.

In section 2 of this paper, we describe the type of optimal control problems to which we will apply the FBSM. These are among the most basic in the subject. In section 3 we investigate the convergence issue for the simplest case of the method. We prove both a continuous version and a discrete version of the convergence theorem. In section 4 we illustrate the numerical performance of the method through simulations of solutions to a couple of examples in optimal control. The paper closes in section 5 with a few remarks on more general problems.

2 Basic problem

The basic problem to be considered is to choose a control function $u(t)$ to maximize an integral objective function:

$$\max_u \int_{t_0}^{t_1} f(t, x(t), u(t))\,dt \tag{2.1}$$

subject to the state equation

$$x'(t) = g(t, x(t), u(t)), \quad x(t_0) = x_0. \tag{2.2}$$

For this formulation of the problem, assume that $x$ and $u$ are vector-valued functions on $[t_0, t_1]$ with values in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Assume $f$ and $g$ map $\mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^m$ into $\mathbb{R}$ and $\mathbb{R}^n$, respectively. The basic problem can be generalized in several ways. Some of these include: (1) the terminal value of the state $x(t_1)$ may be fixed; (2) the end time $t_1$ could be a choice variable; and (3) the objective may include a scrap function $\phi(x(t_1))$ in addition to the integral. There are more variations on the basic problem, of course, but these are the main problems in [20] that can be solved by the FBSM.

The FBSM is one of the so-called indirect methods for solving optimal control problems. Begin by using the Maximum Principle to characterize the method as applied to the basic problem. This is considered in detail in [20] (p. 13) and we provide a brief sketch. Assume that $f$ and $g$ are continuously differentiable in all three variables. We also assume that a solution to the basic problem exists, where $x$ is continuously differentiable and $u$ is piecewise continuous. Form the Hamiltonian

$$H(t, x, u, \lambda) = \lambda_0 f(t, x, u) + \lambda g(t, x, u),$$

where $\lambda = \lambda(t)$ is the adjoint or co-state variable. (We will take the constant $\lambda_0$ to be equal to 1 for the problems considered here, although in general this cannot be assumed.) The Maximum Principle says that there is a co-state variable $\lambda(t)$ such that an optimal state $x(t)$ and optimal control $u(t)$ must necessarily

(1) satisfy the state equation, $x'(t) = g(t, x(t), u(t))$, $x(t_0) = x_0$;

(2) satisfy the co-state equation

$$\frac{d\lambda}{dt} = -\frac{\partial H}{\partial x}, \quad \lambda(t_1) = 0; \tag{2.3}$$

and (3) optimize the Hamiltonian, considered as a function of the control.

These three conditions result in a two-point boundary value problem together with an additional algebraic equation coming from the optimality condition (3). Assuming enough structure on the functions, condition (3) can be written as $\partial H/\partial u = 0$. Although it is not necessary for the numerical algorithm, in many problems this equation can be solved for $u$, and that is how we shall state the FBSM.

In brief, the Forward-Backward Sweep Method first solves the state equation $x' = g(t, x, u)$ forward in time with a Runge-Kutta routine, then solves the co-state equation (2.3) backwards in time with the Runge-Kutta solver, and then updates the control. This produces a new approximation of the state, co-state, and control $(x, \lambda, u)$. The method continues by using these new updates to calculate new Runge-Kutta approximations and control updates, with the goal of finding a fixed point $(x, \lambda, u)$. The method terminates when there is sufficient agreement between the states, co-states, and controls of two passes through the approximation loop.
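For concreteness, here is a minimal Python sketch of the loop just described. The RK4 stepping, the averaging of the control at half steps, and the relative $l^1$ stopping test mirror the implementation details reported in Section 4, but the function names and signatures ($g$, $h_1$, $h_2$, $h_3$ in the notation introduced in Section 3 below) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fbsm(g, h1, h2, h3, x0, t0, t1, n=100, delta=1e-6, max_sweeps=500):
    """Sketch of the Forward-Backward Sweep Method.

    g, h1, h2, h3: problem data as in (3.1)-(3.3); h3 must accept
    NumPy arrays.  RK4 stepping and midpoint averaging of u are
    illustrative choices.
    """
    t = np.linspace(t0, t1, n + 1)
    h = (t1 - t0) / n
    u = np.zeros(n + 1)                          # initial guess u^(0) = 0
    x = np.zeros(n + 1)
    lam = np.zeros(n + 1)
    f = lambda s, xx, uu, ll: h1(s, xx, uu) + ll * h2(s, xx, uu)
    for _ in range(max_sweeps):
        u_old = u.copy()
        x[0] = x0                                # forward sweep: x' = g
        for j in range(n):
            um = 0.5 * (u[j] + u[j + 1])
            k1 = g(t[j], x[j], u[j])
            k2 = g(t[j] + h / 2, x[j] + h / 2 * k1, um)
            k3 = g(t[j] + h / 2, x[j] + h / 2 * k2, um)
            k4 = g(t[j] + h, x[j] + h * k3, u[j + 1])
            x[j + 1] = x[j] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        lam[n] = 0.0                             # backward sweep: lambda' = f
        for j in range(n, 0, -1):
            um = 0.5 * (u[j] + u[j - 1])
            xm = 0.5 * (x[j] + x[j - 1])
            k1 = f(t[j], x[j], u[j], lam[j])
            k2 = f(t[j] - h / 2, xm, um, lam[j] - h / 2 * k1)
            k3 = f(t[j] - h / 2, xm, um, lam[j] - h / 2 * k2)
            k4 = f(t[j] - h, x[j - 1], u[j - 1], lam[j] - h * k3)
            lam[j - 1] = lam[j] - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        u = h3(t, x, lam)                        # control update
        # relative l^1 agreement between two passes, as in Section 4
        if delta * np.sum(np.abs(u)) - np.sum(np.abs(u - u_old)) > 0:
            break
    return t, x, lam, u
```

For instance, in the first example of Section 4 below one would pass g = lambda t, x, u: -x + u, h1 = lambda t, x, u: -x, h2 = lambda t, x, u: 1.0, and h3 = lambda t, x, lam: -lam.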

3 Convergence of the FBSM

To explain the idea of the convergence analysis, we first consider the FBSM at the continuous level; this is done in Subsection 3.1. The argument is then adapted to the convergence study of the FBSM applied to the discretized system in Subsection 3.2. Throughout this section we assume that an optimal solution exists for the basic problem (2.1)-(2.2). The Lipschitz conditions to be assumed shortly are enough to be able to apply the Maximum Principle (see [23, p. 85] for a statement), which in turn implies that the boundary value problem of interest has a solution. Thus the real problem is solving a boundary value problem of a specific form.

3.1 Convergence for the continuous system

For notational simplicity, we express the problem as finding $(x(t), \lambda(t), u(t))$ such that

$$x'(t) = g(t, x(t), u(t)), \quad x(t_0) = x_0, \tag{3.1}$$

$$\lambda'(t) = h_1(t, x(t), u(t)) + \lambda(t)\, h_2(t, x(t), u(t)), \quad \lambda(t_1) = 0, \tag{3.2}$$

$$u(t) = h_3(t, x(t), \lambda(t)). \tag{3.3}$$

Here $x_0$ and $t_0 < t_1$ are given real numbers, and $g$, $h_1$ and $h_2$ are given functions satisfying the continuity properties mentioned in Section 2, so that the system (3.1)-(3.3) has a unique solution $(x(t), \lambda(t), u(t))$. The equation (3.3) is interpreted as $u$ being defined uniquely by the optimality condition; there is no need to actually have an explicit formula for $h_3$. The use of the form (3.3) is for convenience.

The FBSM for the system (3.1)-(3.3) reads as follows.

Initialization: choose an initial guess $u^{(0)} (= u^{(0)}(t))$.

Iteration: for $k \ge 0$, solve

$$\frac{dx^{(k+1)}(t)}{dt} = g(t, x^{(k+1)}(t), u^{(k)}(t)), \quad x^{(k+1)}(t_0) = x_0, \tag{3.4}$$

$$\frac{d\lambda^{(k+1)}(t)}{dt} = h_1(t, x^{(k+1)}(t), u^{(k)}(t)) + \lambda^{(k+1)}(t)\, h_2(t, x^{(k+1)}(t), u^{(k)}(t)), \quad \lambda^{(k+1)}(t_1) = 0, \tag{3.5}$$

$$u^{(k+1)}(t) = h_3(t, x^{(k+1)}(t), \lambda^{(k+1)}(t)). \tag{3.6}$$

For a convergence analysis of the above FBSM, we will make the following assumptions.

(A) The functions $g$, $h_1$, $h_2$ and $h_3$ are Lipschitz continuous with respect to their second and third arguments, with Lipschitz constants $L_g$, $L_{h_1}$, etc.; e.g.,

$$|g(t, x_1, u_1) - g(t, x_2, u_2)| \le L_g \big(|x_1 - x_2| + |u_1 - u_2|\big).$$

Moreover, $\Lambda = \|\lambda\|_\infty < \infty$ and $H = \|h_2\|_\infty < \infty$.

Note that in the convergence analysis of numerical methods for ODEs, it is standard to assume Lipschitz conditions ([2]). In the proof of the next theorem, we will apply a simple form of the well-known Gronwall inequality (see [3]): suppose $f$ and $g$ are continuous functions on $[a, b]$ and $g$ is non-decreasing; then

$$f(t) \le g(t) + c \int_a^t f(s)\,ds \;\Longrightarrow\; f(t) \le e^{c(t-a)}\, g(t), \quad t \in [a, b]. \tag{3.7}$$

Similarly, if $f$ and $g$ are continuous functions on $[a, b]$ and $g$ is non-increasing, then

$$f(t) \le g(t) + c \int_t^b f(s)\,ds \;\Longrightarrow\; f(t) \le e^{c(b-t)}\, g(t), \quad t \in [a, b]. \tag{3.8}$$

Theorem 3.1 Under the assumptions (A), if

$$c_0 \equiv L_{h_3} \Big\{ \big[e^{L_g(t_1-t_0)} - 1\big] + (L_{h_1} + \Lambda L_{h_2}) \frac{1}{H} \big[e^{H(t_1-t_0)} - 1\big] \big[e^{L_g(t_1-t_0)} + 1\big] \Big\} < 1, \tag{3.9}$$

then we have convergence: as $k \to \infty$,

$$\max_{t_0 \le t \le t_1} |x(t) - x^{(k)}(t)| + \max_{t_0 \le t \le t_1} |\lambda(t) - \lambda^{(k)}(t)| + \max_{t_0 \le t \le t_1} |u(t) - u^{(k)}(t)| \to 0. \tag{3.10}$$

Proof. Denote the errors $e_x^{(k)} = x - x^{(k)}$, $e_\lambda^{(k)} = \lambda - \lambda^{(k)}$, $e_u^{(k)} = u - u^{(k)}$. These errors are all functions of $t$ as well as $k$. From (3.1) and (3.4), we have

$$\frac{de_x^{(k+1)}(t)}{dt} = g(t, x(t), u(t)) - g(t, x^{(k+1)}(t), u^{(k)}(t)), \quad e_x^{(k+1)}(t_0) = 0.$$

Then

$$e_x^{(k+1)}(t) = \int_{t_0}^{t} \big[g(s, x(s), u(s)) - g(s, x^{(k+1)}(s), u^{(k)}(s))\big]\,ds.$$

Apply the Lipschitz condition on $g$:

$$|e_x^{(k+1)}(t)| \le L_g \int_{t_0}^{t} \big[|e_x^{(k+1)}(s)| + |e_u^{(k)}(s)|\big]\,ds, \quad t \in [t_0, t_1]. \tag{3.11}$$

Similarly, from (3.2) and (3.5), we have

$$e_\lambda^{(k+1)}(t) = -\int_t^{t_1} \Big\{ h_1(s, x(s), u(s)) - h_1(s, x^{(k+1)}(s), u^{(k)}(s)) + \lambda(s)\big[h_2(s, x(s), u(s)) - h_2(s, x^{(k+1)}(s), u^{(k)}(s))\big] + e_\lambda^{(k+1)}(s)\, h_2(s, x^{(k+1)}(s), u^{(k)}(s)) \Big\}\,ds.$$

Hence,

$$|e_\lambda^{(k+1)}(t)| \le \int_t^{t_1} \Big[ (L_{h_1} + \Lambda L_{h_2}) \big( |e_x^{(k+1)}(s)| + |e_u^{(k)}(s)| \big) + H\, |e_\lambda^{(k+1)}(s)| \Big]\,ds, \quad t \in [t_0, t_1]. \tag{3.12}$$

Furthermore, from (3.3) and (3.6), we obtain

$$|e_u^{(k+1)}(t)| \le L_{h_3} \big[ |e_x^{(k+1)}(t)| + |e_\lambda^{(k+1)}(t)| \big], \quad t \in [t_0, t_1]. \tag{3.13}$$

Apply Gronwall's inequality (3.7) to (3.11) to obtain

$$|e_x^{(k+1)}(t)| \le e^{L_g(t-t_0)}\, L_g \int_{t_0}^{t} |e_u^{(k)}(s)|\,ds, \quad t \in [t_0, t_1]. \tag{3.14}$$

Apply Gronwall's inequality (3.8) to (3.12) to obtain

$$|e_\lambda^{(k+1)}(t)| \le e^{H(t_1-t)} (L_{h_1} + \Lambda L_{h_2}) \int_t^{t_1} \big[ |e_x^{(k+1)}(s)| + |e_u^{(k)}(s)| \big]\,ds, \quad t \in [t_0, t_1].$$

Then plug (3.14) into the right side of this inequality and use integration by parts to obtain

$$|e_\lambda^{(k+1)}(t)| \le e^{H(t_1-t)} (L_{h_1} + \Lambda L_{h_2}) \Big\{ \big[e^{L_g(t_1-t_0)} - e^{L_g(t-t_0)}\big] \int_{t_0}^{t} |e_u^{(k)}(s)|\,ds + \big[e^{L_g(t_1-t_0)} - e^{L_g(t-t_0)} + 1\big] \int_t^{t_1} |e_u^{(k)}(s)|\,ds \Big\}, \quad t \in [t_0, t_1]. \tag{3.15}$$

Use (3.14) and (3.15) in (3.13):

$$|e_u^{(k+1)}(t)| \le L_{h_3} \Big\{ \Big( L_g e^{L_g(t-t_0)} + e^{H(t_1-t)} (L_{h_1} + \Lambda L_{h_2}) \big[e^{L_g(t_1-t_0)} - e^{L_g(t-t_0)}\big] \Big) \int_{t_0}^{t} |e_u^{(k)}(s)|\,ds + e^{H(t_1-t)} (L_{h_1} + \Lambda L_{h_2}) \big[e^{L_g(t_1-t_0)} - e^{L_g(t-t_0)} + 1\big] \int_t^{t_1} |e_u^{(k)}(s)|\,ds \Big\}, \quad t \in [t_0, t_1]. \tag{3.16}$$

We integrate (3.16) over the interval $[t_0, t_1]$ to obtain

$$\int_{t_0}^{t_1} |e_u^{(k+1)}(t)|\,dt \le c_1 \int_{t_0}^{t_1} |e_u^{(k)}(t)|\,dt,$$

where

$$c_1 = L_{h_3} \int_{t_0}^{t_1} \Big\{ L_g e^{L_g(t-t_0)} + (L_{h_1} + \Lambda L_{h_2})\, e^{H(t_1-t)} \big[e^{L_g(t_1-t_0)} - e^{L_g(t-t_0)} + 1\big] \Big\}\,dt \le c_0.$$

Hence,

$$\int_{t_0}^{t_1} |e_u^{(k)}(t)|\,dt \le (c_1)^k \int_{t_0}^{t_1} |e_u^{(0)}(t)|\,dt. \tag{3.17}$$

Thus, if $c_1 < 1$, which is valid under the assumption (3.9), we have

$$\int_{t_0}^{t_1} |e_u^{(k)}(t)|\,dt \to 0 \quad \text{as } k \to \infty. \tag{3.18}$$

Using this convergence in (3.14), (3.15) and (3.16), we conclude the statement (3.10).

Remark 3.2 The condition (3.9) is valid if $L_{h_3}$ is sufficiently small, or if $L_g(t_1 - t_0)$ and $L_{h_1} + \Lambda L_{h_2}$ are sufficiently small. As is seen from the proof, this condition can be replaced by the weaker one $c_1 < 1$. It is possible to further sharpen the condition (3.9). Other iteration methods may be studied as well. For the iteration, one may consider using

$$\frac{dx^{(k+1)}}{dt} = g(t, x^{(k)}(t), u^{(k)}(t)), \quad x(t_0) = x_0,$$

instead of (3.4); then it only requires an integration to get $x^{(k+1)}$. A similar comment applies to (3.5). The price to pay is slower convergence.
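The constant $c_0$ of (3.9) is straightforward to evaluate for given problem data. The small transcription below simply codes the formula; the Lipschitz constants in the example call are placeholder assumptions for illustration only.

```python
import math

def c0(Lg, Lh1, Lh2, Lh3, Lam, H, t0, t1):
    """The constant of condition (3.9); c0 < 1 guarantees convergence.
    Requires H = sup|h2| > 0."""
    d = t1 - t0
    return Lh3 * ((math.exp(Lg * d) - 1.0)
                  + (Lh1 + Lam * Lh2) / H
                  * (math.exp(H * d) - 1.0) * (math.exp(Lg * d) + 1.0))

# hypothetical constants on a short horizon: the printed value is below 1
print(c0(Lg=1.0, Lh1=1.0, Lh2=0.1, Lh3=1.0, Lam=1.0, H=0.1, t0=0.0, t1=0.2))
```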

3.2 The numerical algorithm and convergence for the discretized system

In a numerical implementation of the FBSM we are not actually solving the state and co-state differential equations (3.1) and (3.2), of course, but are instead finding numerical approximations of the solutions at discrete points in the interval. The convergence theorem in this section shows that when the Lipschitz constants are small enough or the time interval is short enough, there is a grid size and an iteration count so that the error between the solution at the nodes and the discrete approximation can be made small.

Recall the system that is being solved:

$$x'(t) = g(t, x(t), u(t)), \quad x(0) = a,$$
$$\lambda'(t) = h_1(t, x(t), u(t)) + \lambda\, h_2(t, x(t), u(t)), \quad \lambda(T) = 0, \tag{3.19}$$
$$u(t) = h_3(t, x(t), \lambda(t)).$$

For notational convenience the interval is now assumed to be $[0, T]$ and the initial state is denoted by $a$. The assumptions (A) remain in force for this section. Recall these assumptions: the functions $g$, $h_1$, $h_2$, $h_3$ are continuous in $t$ and Lipschitz in $x$, $u$, and $\lambda$. We continue to assume that a solution exists to the optimal control problem (2.1)-(2.2). These hypotheses and the Maximum Principle then imply that the boundary value problem (3.19) has a solution. In this section we also assume that the solutions $x(t)$, $\lambda(t)$ and $u(t)$ are continuous.

Let $n$ be a positive integer and define the step size $h = T/n$. Denote $x_j = x(t_j)$, $\lambda_j = \lambda(t_j)$, and $u_j = u(t_j)$, where $t_j = jh$ and $x(t), \lambda(t), u(t)$ are the actual solutions of the system (3.19). For each $k \ge 0$, let $x_j^k, \lambda_j^k, u_j^k$ be the $k$-th approximations to $x_j, \lambda_j, u_j$, as defined below. Our goal is to show that the approximations converge to the solution as $k \to \infty$ and $h \to 0$.

3.2.1 Discrete Approximations

Consider a discrete approximation to a general initial value problem $y' = g(t, y)$, $\alpha \le t \le \beta$, $y(\alpha) = y_0$, having the one-step form

$$y_{j+1} = y_j + h\, G(t_j, y_j, h; g),$$

where $G$ is an increment function such that

$$\eta_g(h) \equiv \sup\{ |g(t, y) - G(t, y, h; g)| : \alpha \le t \le \beta,\ -\infty < y < \infty \} \to 0 \tag{3.20}$$

as $h \to 0$. The classical methods (e.g., Euler and Runge-Kutta) are of this form.
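For example, the increment functions for forward Euler and classical fourth-order Runge-Kutta, written in this form, are as follows (a standard construction, not code from the paper):

```python
def euler_increment(g, t, y, h):
    """G(t, y, h; g) for the forward Euler method."""
    return g(t, y)

def rk4_increment(g, t, y, h):
    """G(t, y, h; g) for the classical fourth-order Runge-Kutta method."""
    k1 = g(t, y)
    k2 = g(t + h / 2, y + h / 2 * k1)
    k3 = g(t + h / 2, y + h / 2 * k2)
    k4 = g(t + h, y + h * k3)
    return (k1 + 2 * k2 + 2 * k3 + k4) / 6

# one step then reads: y_next = y + h * rk4_increment(g, t, y, h)
```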

This scheme will be applied to approximate $x(t)$ using the first equation, and then applied to the second equation backwards in time to approximate $\lambda(t)$. For simplicity, the increment functions associated with $g(t, x, u)$ and with $f(t, x, u, \lambda) = h_1(t, x, u) + \lambda\, h_2(t, x, u)$ are denoted by $G$ and $F$:

$$G(t, x, u) = G(t, x, u, h; g); \qquad F(t, x, u, \lambda) = G(t, x, u, \lambda, h; f).$$

For $j = 0, \dots, n-1$ define the forward and backward difference operators

$$\Delta_j x = x_{j+1} - x_j, \qquad \delta_j x = x_j - x_{j+1}.$$

The operators $\Delta_j$ and $\delta_j$ apply to the approximating sequences $x_j^k, \lambda_j^k, u_j^k$ as well.

3.2.2 The Algorithm

Initialization: choose an initial guess $u_j^0$, $j = 0, \dots, n$.

Iteration: for $k \ge 0$, define $x_j^{k+1}, \lambda_j^{k+1}, u_j^{k+1}$, $j = 0, \dots, n$, by the equations

$$\Delta_j x^{k+1} = h\, G(t_j, x_j^{k+1}, u_j^k), \quad x_0^{k+1} = a, \quad j = 0, \dots, n-1,$$
$$\delta_{j-1} \lambda^{k+1} = -h\, F(t_j, x_j^{k+1}, u_j^k, \lambda_j^{k+1}), \quad \lambda_n^{k+1} = 0, \quad j = n, \dots, 1, \tag{3.21}$$
$$u_j^{k+1} = h_3(t_j, x_j^{k+1}, \lambda_j^{k+1}), \quad j = 0, \dots, n.$$
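A direct transcription of (3.21) into Python, using the simplest choice of increments ($G = g$ and $F = h_1 + \lambda h_2$, i.e., forward Euler in each direction), might look like the following sketch; the fixed sweep count and zero initial guess are illustrative assumptions.

```python
import numpy as np

def discrete_fbsm(g, h1, h2, h3, a, T, n, sweeps=50):
    """Iteration (3.21) with the Euler increments G = g and
    F = h1 + lam * h2 (an illustrative choice)."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.zeros(n + 1)                       # initial guess u^0_j = 0
    x = np.zeros(n + 1)
    lam = np.zeros(n + 1)
    for _ in range(sweeps):
        x[0] = a                              # x^{k+1}_0 = a
        for j in range(n):                    # Delta_j x^{k+1} = h G(...)
            x[j + 1] = x[j] + h * g(t[j], x[j], u[j])
        lam[n] = 0.0                          # lambda^{k+1}_n = 0
        for j in range(n, 0, -1):             # delta_{j-1} lambda^{k+1} = -h F(...)
            F = h1(t[j], x[j], u[j]) + lam[j] * h2(t[j], x[j], u[j])
            lam[j - 1] = lam[j] - h * F
        u = np.array([h3(t[j], x[j], lam[j]) for j in range(n + 1)])
    return t, x, lam, u
```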

3.2.3 Convergence Theorem

Theorem 3.3 Suppose the assumptions (A) hold, and suppose that either the Lipschitz constants are small or $T$ is small. Then

$$\max\{|x(t_j) - x_j^k| + |\lambda(t_j) - \lambda_j^k| + |u(t_j) - u_j^k| : j = 0, \dots, n\} \to 0 \quad \text{as } k \to \infty,\ n \to \infty.$$

That is, for every $\varepsilon > 0$ there exist $N, K > 0$ such that

$$\max\{|x(t_j) - x_j^k| + |\lambda(t_j) - \lambda_j^k| + |u(t_j) - u_j^k| : j = 0, \dots, n\} < \varepsilon$$

for all $n > N$ and $k > K$.

Remark 3.4 The first approximating equation can be replaced by

$$\Delta_j x^{k+1} = h\, G(t_j, x_j^k, u_j^k), \quad x_0^{k+1} = a. \tag{3.22}$$

See Remark 3.6 for a discussion.

Proof. Denote the errors $e_{xj}^k = x_j - x_j^k$, $e_{\lambda j}^k = \lambda_j - \lambda_j^k$, $e_{uj}^k = u_j - u_j^k$ for $k \ge 0$, $j = 0, 1, \dots, n$. The proof follows the general outline of the proof in the continuous case. The essence is to find bounds for the errors in $x$ and $\lambda$ in terms of the error in $u$, and then to show that this last error can be made small. Define the following average errors:

$$E_x^k = h \sum_{j=0}^{n} |e_{xj}^k|, \qquad E_\lambda^k = h \sum_{j=0}^{n} |e_{\lambda j}^k|, \qquad E_u^k = h \sum_{j=0}^{n} |e_{uj}^k|. \tag{3.23}$$

Since $h = T/n$, $E_x^k$ is an average error of the approximation $x_j^k$. The idea of the proof is to show that $E_u^k \to 0$ as $k \to \infty$ and $h \to 0$, which implies the desired result.

Inequality for $e_{xj}^k$. Note that $e_{x0}^{k+1} = 0$. From the equations for $x$ and $x^{k+1}$ we get, for $i = 0, \dots, n-1$,

$$\Delta_i e_x^{k+1} = e_{x(i+1)}^{k+1} - e_{xi}^{k+1} = (x_{i+1} - x_{i+1}^{k+1}) - (x_i - x_i^{k+1}) = (x_{i+1} - x_i) - (x_{i+1}^{k+1} - x_i^{k+1}) = \Delta_i x - \Delta_i x^{k+1}.$$

It follows that

$$\Delta_i e_x^{k+1} = \Delta_i x - \Delta_i x^{k+1} = \int_{t_i}^{t_{i+1}} \big[g(t, x(t), u(t)) - G(t_i, x_i^{k+1}, u_i^k)\big]\,dt. \tag{3.24}$$

To analyze the preceding difference, subtract and add both of the quantities $g(t_i, x(t_i), u(t_i))$ and $g(t_i, x_i^{k+1}, u_i^k)$. First, using the continuity of $g$, $x$, and $u$, we get

$$|g(t, x(t), u(t)) - g(t_i, x(t_i), u(t_i))| \le \omega_g(h), \tag{3.25}$$

where $\omega_g(h)$ is the oscillation of the function $g(t, x(t), u(t))$ considered as a function of $t$; that is, $\omega_g(h) = \sup_{r,s \in [0,T],\, |r-s| \le h} |g(r, x(r), u(r)) - g(s, x(s), u(s))|$. Second, by the Lipschitz condition on $g$, we get

$$|g(t_i, x(t_i), u(t_i)) - g(t_i, x_i^{k+1}, u_i^k)| \le L_{gx} |x(t_i) - x_i^{k+1}| + L_{gu} |u(t_i) - u_i^k| = L_{gx} |e_{xi}^{k+1}| + L_{gu} |e_{ui}^k|. \tag{3.26}$$

Third, by the definition of $\eta_g$ in (3.20), we have

$$|g(t_i, x_i^{k+1}, u_i^k) - G(t_i, x_i^{k+1}, u_i^k)| \le \eta_g(h). \tag{3.27}$$

Putting the three pieces (3.25)-(3.27) together, we have

$$|\Delta_i e_x^{k+1}| \le L_{gx}\, h\, |e_{xi}^{k+1}| + L_{gu}\, h\, |e_{ui}^k| + o_1(h) \tag{3.28}$$

for $i = 0, \dots, n-1$, where $o_1(h) = h\,\omega_g(h) + h\,\eta_g(h)$. Note that

$$e_{xj}^{k+1} = e_{x0}^{k+1} + \sum_{i=0}^{j-1} \Delta_i e_x^{k+1}, \qquad e_{x0}^{k+1} = 0.$$

This and (3.28) imply

$$|e_{xj}^{k+1}| \le \sum_{i=0}^{j-1} \big[L_{gx}\, h\, |e_{xi}^{k+1}| + L_{gu}\, h\, |e_{ui}^k| + o_1(h)\big]. \tag{3.29}$$

Sum both sides of (3.29) over $j$ and change the order of summation to get

$$\sum_{j=0}^{n} |e_{xj}^{k+1}| \le \sum_{i=0}^{n-1} (n-i)\big[L_{gx}\, h\, |e_{xi}^{k+1}| + L_{gu}\, h\, |e_{ui}^k| + o_1(h)\big].$$

Multiply both sides of this inequality by $h = T/n$ and note that $(n-i)h \le nh = T$. Using the notation (3.23) for the average errors, we get

$$E_x^{k+1} \le T \big[L_{gx} E_x^{k+1} + L_{gu} E_u^k + n\, o_1(h)\big]. \tag{3.30}$$

So if $T L_{gx} < 1$, then

$$E_x^{k+1} \le \frac{T}{1 - T L_{gx}} \big[L_{gu} E_u^k + n\, o_1(h)\big]. \tag{3.31}$$

Thus we have a bound for the errors in $x$ in terms of the errors in $u$.

Inequality for $e_{\lambda j}^k$. Next we derive a similar inequality for the errors in $\lambda$. Using the equation in (3.19) for $\lambda$ and the one in (3.21) for $\lambda^{k+1}$, we get, for $j = n, \dots, 1$,

$$\delta_{j-1} e_\lambda^{k+1} = \delta_{j-1}\lambda - \delta_{j-1}\lambda^{k+1} = \int_{t_j}^{t_{j-1}} \big[f(t, x(t), u(t), \lambda(t)) - F(t_j, x_j^{k+1}, u_j^k, \lambda_j^{k+1})\big]\,dt$$
$$= \int_{t_j}^{t_{j-1}} \big[f(t, x(t), u(t), \lambda(t)) - f(t_j, x(t_j), u(t_j), \lambda(t_j))\big]\,dt + \int_{t_j}^{t_{j-1}} \big[f(t_j, x(t_j), u(t_j), \lambda(t_j)) - f(t_j, x_j^{k+1}, u_j^k, \lambda_j^{k+1})\big]\,dt + \int_{t_j}^{t_{j-1}} \big[f(t_j, x_j^{k+1}, u_j^k, \lambda_j^{k+1}) - F(t_j, x_j^{k+1}, u_j^k, \lambda_j^{k+1})\big]\,dt.$$

Using a computation similar to (3.28), we get

$$|\delta_{j-1} e_\lambda^{k+1}| \le L_{fx}\, h\, |e_{xj}^{k+1}| + L_{fu}\, h\, |e_{uj}^k| + L_{f\lambda}\, h\, |e_{\lambda j}^{k+1}| + o_2(h),$$

where $o_2(h) = h\,\omega_f(h) + h\,\eta_f(h)$. Recall that $\omega_f(h)$ is the oscillation function of $f(t, x(t), u(t), \lambda(t))$, $\eta_f(h)$ is defined as in (3.20), and $L_{fx}, L_{fu}, L_{f\lambda}$ are the Lipschitz constants of $f = h_1(t, x, u) + \lambda\, h_2(t, x, u)$ with respect to $x$, $u$, $\lambda$, respectively. Since

$$e_{\lambda,j-1}^{k+1} = e_{\lambda n}^{k+1} + \sum_{i=j}^{n} \delta_{i-1} e_\lambda^{k+1}$$

and $e_{\lambda n}^{k+1} = 0$, the triangle inequality gives

$$|e_{\lambda,j-1}^{k+1}| \le \sum_{i=j}^{n} |\delta_{i-1} e_\lambda^{k+1}| \le \sum_{i=j}^{n} \big[L_{fx}\, h\, |e_{xi}^{k+1}| + L_{fu}\, h\, |e_{ui}^k| + L_{f\lambda}\, h\, |e_{\lambda i}^{k+1}| + o_2(h)\big]. \tag{3.32}$$

The next step is to rewrite (3.32) so that the errors in $\lambda$ appear on the left side only.

Eliminate $e_{\lambda j}^k$. We need the following discrete Gronwall inequality for sequences $f_n$, $p_n$, and $k_n$.

Lemma 3.5 Assume that $g_0 \ge 0$, $p_n \ge 0$ and $k_n \ge 0$ for $n \ge 0$, and that

$$f_0 \le g_0; \qquad f_n \le g_0 + \sum_{j=0}^{n-1} p_j + \sum_{j=0}^{n-1} k_j f_j \quad \text{for } n \ge 1.$$

Then

$$f_n \le \Big(g_0 + \sum_{j=0}^{n-1} p_j\Big)\, e^{\sum_{j=0}^{n-1} k_j} \quad \text{for } n \ge 1.$$

A proof can be found in Quarteroni and Valli [22], p. 14. Apply this lemma to (3.32) (running backwards) with $g_0 = 0$, $p_j = L_{fx}\, h\, |e_{xj}^{k+1}| + L_{fu}\, h\, |e_{uj}^k| + o_2(h)$ and $k_j = L_{f\lambda}\, h$ to get

$$|e_{\lambda,j-1}^{k+1}| \le M_{j-1} \sum_{i=j}^{n} \big[L_{fx}\, h\, |e_{xi}^{k+1}| + L_{fu}\, h\, |e_{ui}^k| + o_2(h)\big], \quad j = n, \dots, 1, \tag{3.33}$$

where $M_j = e^{L_{f\lambda} h (n-j)}$. Note that $M_j \le M_0 = e^{T L_{f\lambda}}$ because $hn = T$. This gives a bound for the errors in $\lambda$ in terms of the errors in $x$ and $u$.

Show $E_u^k \to 0$. By the third equation in (3.21) we obtain, for $j = 0, \dots, n$,

$$|e_{uj}^{k+1}| \le L_{h_3} \big[ |e_{xj}^{k+1}| + |e_{\lambda j}^{k+1}| \big]. \tag{3.34}$$

Replace $j$ by $j+1$ in (3.33) and substitute into (3.34) to get

$$|e_{uj}^{k+1}| \le L_{h_3} \Big[ |e_{xj}^{k+1}| + M_j \sum_{i=j+1}^{n} \big(L_{fx}\, h\, |e_{xi}^{k+1}| + L_{fu}\, h\, |e_{ui}^k| + o_2(h)\big) \Big]. \tag{3.35}$$

Sum (3.35) over $j$ from $j = 0$ to $j = n$ to get

$$\sum_{j=0}^{n} |e_{uj}^{k+1}| \le L_{h_3} \Big[ \sum_{j=0}^{n} |e_{xj}^{k+1}| + \sum_{j=0}^{n} M_j \sum_{i=j+1}^{n} \big(L_{fx}\, h\, |e_{xi}^{k+1}| + L_{fu}\, h\, |e_{ui}^k| + o_2(h)\big) \Big] = L_{h_3} \Big[ \sum_{i=1}^{n} K_i\, |e_{xi}^{k+1}| + L_{fu} \sum_{i=1}^{n} N_i\, h\, |e_{ui}^k| + o_2(h) \sum_{i=1}^{n} N_i \Big], \tag{3.36}$$

where, for $i = 1, \dots, n$,

$$N_i = \sum_{j=0}^{i-1} M_j = \frac{e^{L_{f\lambda} h (n+1)} - e^{L_{f\lambda} h (n-i+1)}}{e^{L_{f\lambda} h} - 1}; \qquad K_i = 1 + L_{fx}\, N_i\, h.$$

Note that $N_i \le i M_0$. It follows that

$$K_i \le 1 + T M_0 L_{fx}; \qquad \sum_{i=1}^{n} N_i \le \tfrac{1}{2} n(n+1) M_0 \le n^2 M_0.$$

Now (3.36) implies that

$$\sum_{j=0}^{n} |e_{uj}^{k+1}| \le L_{h_3} \Big[ (1 + T M_0 L_{fx}) \sum_{i=1}^{n} |e_{xi}^{k+1}| + L_{fu} M_0\, nh \sum_{i=1}^{n} |e_{ui}^k| + n^2 M_0\, o_2(h) \Big]. \tag{3.37}$$

Multiplying both sides of (3.37) by $h$, using $T = nh$ and the definition of the average errors, we get

$$E_u^{k+1} \le L_{h_3} \big[ (1 + T M_0 L_{fx}) E_x^{k+1} + L_{fu} M_0 T E_u^k + M_0 T^2 o_2(h) h^{-1} \big].$$

Combining this with (3.31), we get, for $k = 0, 1, 2, \dots$,

$$E_u^{k+1} \le B E_u^k + o_3(h), \tag{3.38}$$

where

$$B = L_{h_3} L_{fu} M_0 T + L_{h_3} (1 + T M_0 L_{fx}) \frac{T L_{gu}}{1 - T L_{gx}}, \qquad o_3(h) = L_{h_3} M_0 T^2 \frac{o_2(h)}{h} + L_{h_3} (1 + T M_0 L_{fx}) \frac{T^2 o_1(h)}{(1 - T L_{gx})\, h}.$$

Iterating (3.38), we obtain

$$E_u^k \le B^k E_u^0 + \sum_{i=0}^{k-1} B^i o_3(h).$$

Note that $B < 1$ when either $T$ is small or the Lipschitz constants are small. Therefore $B^k \to 0$ and $\sum_{i=0}^{k-1} B^i \le (1-B)^{-1}$ is bounded. Moreover, the definitions of $o_1(h)$ and $o_2(h)$ imply that $o_1(h)/h \to 0$ and $o_2(h)/h \to 0$ as $h \to 0$, so that $o_3(h) \to 0$. Therefore $E_u^k \to 0$ as $k \to \infty$ and $h \to 0$.

All the pieces are now in place. By (3.31), we also get $E_x^k \to 0$ as $k \to \infty$ and $h \to 0$. Going back to (3.29), we see that

$$\max_{j=0,\dots,n} |e_{xj}^{k+1}| \le L_{gx} E_x^{k+1} + L_{gu} E_u^k + T\, o_1(h)/h \to 0$$

as $k \to \infty$ and $h \to 0$. From (3.35) we see that

$$\max_{j=0,\dots,n} |e_{uj}^{k+1}| \le L_{h_3} \Big[ \max_{j=0,\dots,n} |e_{xj}^{k+1}| + M_0 \big(L_{fx} E_x^{k+1} + L_{fu} E_u^k + T\, o_2(h)/h\big) \Big] \to 0.$$

Finally, from (3.33), as $k \to \infty$ and $h \to 0$,

$$\max_{j=0,\dots,n} |e_{\lambda j}^{k+1}| \le M_0 \big[ L_{fx} E_x^{k+1} + L_{fu} E_u^k + T\, o_2(h)/h \big] \to 0.$$

This finishes the proof.

Remark 3.6 The proof for the alternative approximating equation

$$\Delta_j x^{k+1} = h\, G(t_j, x_j^k, u_j^k), \quad x_0^{k+1} = a, \tag{3.39}$$

is similar. In this case, (3.30) is replaced by

$$E_x^{k+1} \le T \big[L_{gx} E_x^k + L_{gu} E_u^k + n\, o_1(h)\big],$$

and we do not have (3.31). Then (3.38) is replaced by

$$E_u^{k+1} \le A E_x^k + B E_u^k + o_4(h),$$

with $A$, $B$, $o_4(h)$ similar expressions in $T$ and the Lipschitz constants of $f$ and $g$. So we get the following iterative inequality for the average errors:

$$\begin{pmatrix} E_x^{k+1} \\ E_u^{k+1} \end{pmatrix} \le \mathcal{A} \begin{pmatrix} E_x^k \\ E_u^k \end{pmatrix} + C, \qquad \text{where } \mathcal{A} = \begin{pmatrix} L_{gx} T & L_{gu} T \\ A & B \end{pmatrix}, \quad C = \begin{pmatrix} T n\, o_1(h) \\ o_4(h) \end{pmatrix}.$$

Under the same conditions, we have $\|\mathcal{A}\| < 1$ and $C \to 0$, which imply the desired results.

4 Examples

4.1 A successful example

The following simple linear-quadratic problem has been used as an example in several papers; see, for example, Vlassenbroeck and Van Dooren [24]:

$$\max_u\ -\frac{1}{2}\int_0^1 \big[x(t)^2 + u(t)^2\big]\,dt \tag{4.40}$$

subject to the state equation

$$x'(t) = -x(t) + u(t), \quad x(0) = 1. \tag{4.41}$$

The Maximum Principle can be used to construct an analytic solution. The co-state equation is $\lambda'(t) = \lambda(t) - x(t)$, and the optimizing condition on the Hamiltonian gives $u(t) = -\lambda(t)$. Together with the state equation, the result is the following linear differential-algebraic system:

$$x'(t) = -x(t) + u(t), \quad x(0) = 1, \tag{4.42}$$
$$\lambda'(t) = \lambda(t) - x(t), \quad \lambda(1) = 0, \tag{4.43}$$
$$u(t) = -\lambda(t). \tag{4.44}$$

The solution is

$$x(t) = \frac{\sqrt{2}\cosh(\sqrt{2}(t-1)) - \sinh(\sqrt{2}(t-1))}{\sqrt{2}\cosh(\sqrt{2}) + \sinh(\sqrt{2})}, \qquad \lambda(t) = \frac{-\sinh(\sqrt{2}(t-1))}{\sqrt{2}\cosh(\sqrt{2}) + \sinh(\sqrt{2})}.$$
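These formulas are easy to transcribe for checking a numerical run; the helper below (with $u = -\lambda$ as above) is a convenience sketch, not part of the original paper.

```python
import numpy as np

S2 = np.sqrt(2.0)
D = S2 * np.cosh(S2) + np.sinh(S2)           # common denominator

def x_exact(t):
    return (S2 * np.cosh(S2 * (t - 1)) - np.sinh(S2 * (t - 1))) / D

def lam_exact(t):
    return -np.sinh(S2 * (t - 1)) / D

def u_exact(t):
    return -lam_exact(t)

# quick consistency checks: x(0) = 1 and lambda(1) = 0
assert abs(x_exact(0.0) - 1.0) < 1e-12 and abs(lam_exact(1.0)) < 1e-12
```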

The optimal value of the objective functional $J$, the final value of the state $x(1)$, and the initial value of the co-state $\lambda(0)$ can be computed from these formulas for comparison with the numerical results.

The numerical computations of the FBSM algorithm were implemented in Mathematica. The initial guess for the control is $u \equiv 0$. The differential equation solver used is fourth-order Runge-Kutta on the interval $[0, 1]$, partitioned into $N$ subintervals. The stopping criterion is determined by computing the relative errors for the state, the co-state, and the control, and requiring that all three be less than a specified value $\delta$. The desired relative error for the state variable, for example, is $\|x^{k-1} - x^k\| / \|x^k\| < \delta$, where $\|\cdot\|$ is the $l^1$-norm, $\|x^k\| = \sum_{j=1}^{N} |x_j^k|$. The inequality is rewritten as $0 < \delta \|x^k\| - \|x^{k-1} - x^k\|$; the test is then to terminate the iteration loop when this and the corresponding expressions for the control and co-state all become positive.
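The termination test in this form is a one-liner; the sketch below assumes the iterates are stored as NumPy arrays.

```python
import numpy as np

def close_enough(old, new, delta):
    """Relative l1 test written, as in the text, so that the loop stops
    when delta*||new||_1 - ||old - new||_1 becomes positive."""
    return delta * np.sum(np.abs(new)) - np.sum(np.abs(old - new)) > 0

# terminate when the test passes for the state, co-state, and control:
# stop = all(close_enough(o, n, delta) for o, n in
#            [(x_old, x), (lam_old, lam), (u_old, u)])
```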

Table 1 summarizes these computations for various choices of the relative error $\delta$ and the number of subintervals $N$. The number of times that the Runge-Kutta algorithm was called for the state (and the co-state) variable is given in the column labeled count. Also shown are the errors in the computed values of the objective $J$, the final state value $x_N$, and the co-state at time 0. (The co-state and control are negatives of one another in this example.) The estimate $J_{\text{approx}}$ of the value of the objective is found by using Simpson's Rule with the computed values of the state and control and the relevant value of $N$.

Table 1 shows that for this example the count is generally unaffected by $N$, the number of subintervals used. This indicates that a factor affecting the speed of convergence of the method is the choice of the tolerance $\delta$ for the relative error between iterations. The value of $\delta$ is not a measurement of the error between the true and computed values of the state, co-state, control, or objective, yet there seems to be a general association between $\delta$ and these other errors. The error in the objective $J$ appears to decrease with decreasing $\delta$, for $\delta$ small enough.

Table 1. Errors and iteration counts for various tolerances $\delta$ and numbers of subintervals $N$ (columns: $\delta$, $N$, count, $|J - J_{\text{approx}}|$, $|x(1) - x_N|$, $|\lambda(0) - \lambda_0|$).

4.2 A less-than-successful example

As suggested by the convergence theorems, the FBSM has limitations. In this subsection we show an example for which the method does not converge. Further experimentation reveals some relationships between the Lipschitz constants and the length of the time interval in the example. Finally, a serendipitous solution is found by averaging iterates.

The example is found in [20], p. 17, where its purpose is to illustrate the use of the Maximum Principle to construct a closed-form solution; its location in [20] is prior to any discussion of the FBSM. It is another linear-quadratic problem, quite similar to the previous example. The problem is to choose the control $u(t)$ so that

$$\max_u\ -\frac{1}{2}\int_0^1 \big[3x(t)^2 + u(t)^2\big]\,dt \tag{4.45}$$

subject to the state equation

$$x'(t) = x(t) + u(t), \quad x(0) = 1. \tag{4.46}$$

Using the Maximum Principle to find the associated linear differential equations, we find

$$x'(t) = x(t) + u(t), \quad x(0) = 1,$$
$$\lambda'(t) = -3x(t) - \lambda(t), \quad \lambda(1) = 0,$$
$$u(t) = -\lambda(t).$$

The solution is

$$x(t) = \frac{3e^{2t} + e^{4-2t}}{3 + e^4}, \qquad \lambda(t) = \frac{3(e^{4-2t} - e^{2t})}{3 + e^4}.$$

Using the FBSM with the control initially set identically to zero and $\delta = .001$, the method fails to terminate even after thousands of iterations of the differential equation solvers.

Since the coefficient 3 in the objective function is large, we experimented with a parameterized version of the example. Let $m$ and $T$ be positive parameters and consider the related problem of choosing $u(t)$ so that

$$\max_u\ -\frac{1}{2}\int_0^T \big[m\, x(t)^2 + u(t)^2\big]\,dt \tag{4.47}$$

subject to the state equation

$$x'(t) = x(t) + u(t), \quad x(0) = 1. \tag{4.48}$$

Table 2 shows examples of $m$ and the largest value of $T$ for which the algorithm achieves the specified tolerance $\delta$ using $N = 100$ subintervals on the interval $[0, T]$, with the maximum number of iterations allowed being 1500. The values of $T$ are the largest such that the algorithm with $T + .01$ does not meet the tolerance in fewer than 1500 iterations.

Table 2. Largest $T$ for which the FBSM meets the tolerance, for several values of $m$ (columns: $m$, $T$, count).

Table 3. Iteration counts with $T$ held fixed, for the same values of $m$ (columns: $m$, $T$, count).

The large values of count in Table 2 should be compared with those in Table 3, in which $T$ is fixed.
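A scan of the kind summarized in Table 2 can be scripted directly; the sketch below only shows the search structure, and fbsm_converges is a hypothetical wrapper that runs the FBSM on (4.47)-(4.48) and reports whether the tolerance was met within the iteration cap.

```python
def largest_T(m, fbsm_converges, T0=0.10, step=0.01, T_max=5.0):
    """Scan T upward in steps of 0.01 for a fixed weight m.

    fbsm_converges(m, T) -> bool is assumed to run the FBSM with N = 100
    and report whether the tolerance was met within 1500 iterations.
    """
    T = T0
    while T + step <= T_max and fbsm_converges(m, T + step):
        T += step
    return T
```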

Figure 1: Approximation of x(t) from above and below.

While examining the original example of this subsection, with $m = 3$ and $T = 1$, we observed the following behavior of the state variable: for large enough $k$ and for all $j = 1, 2, \dots, N$ we find (1) $x_j^{2k} < x(t_j) < x_j^{2k+1}$ and (2) $x_j^k = x_j^{k+2}$. That is, the algorithm produced alternating upper and lower approximations to the state variable that did not converge to the solution. A convergent alternation phenomenon had been seen previously in examples that achieved the required tolerance. Figure 1 shows a few iterations illustrating this phenomenon for $m = 2.9$, $T = 1$ and $N = 30$, together with the actual solution; see also p. 181 in [20]. The situation is different for $m = 3$ and $T = 1$, since the iterations do not converge to the solution: they represent period-2 solutions of the discrete system that are different from the actual solution. The iterates eventually settle into a pattern with period 2, as seen in Figure 2 with $N = 30$.

Figure 2: Periodic, nonconvergent approximations of x(t).

But all is not lost: the average of the upper and lower estimates is the actual solution! So even though the FBSM itself does not yield the solution, the average of the iterates does yield the solution in this example.
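Exploiting the averaging observation requires only one extra line once the iterates have settled; a minimal sketch:

```python
import numpy as np

def averaged_state(x_even, x_odd):
    """Average of two consecutive settled period-2 FBSM state iterates;
    in the m = 3, T = 1 example this average recovers the true solution."""
    return 0.5 * (np.asarray(x_even) + np.asarray(x_odd))
```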

5 More general problems

The Forward-Backward Sweep Method can be generalized to other optimal control problems. Lenhart and Workman [20] show how to do this for problems with bounded controls, $a \le u(t) \le b$, by changing the characterization of $u$ from, say, $u(t) = h_3(t, x(t), \lambda(t))$ in (3.3) to $u(t) = \min(b, \max(a, h_3(t, x(t), \lambda(t))))$.

Problems involving fixed endpoints, i.e., with both $x(0) = x_0$ and $x(T) = x_T$ given, are also considered in [20], using an Adapted Forward-Backward Sweep Method. The added feature is a shooting method. Guess a value for the co-state at the terminal time, $\lambda(T) = \theta$, and use the FBSM to find the value of the state at the terminal time, $x(T) = x_N$. The idea is to think of the map from $\theta$ to $x_N = x_{N,\theta}$ as a function $V(\theta)$ and then use a secant method to find a root of $V(\theta) = x_T - x_{N,\theta}$. A similar method is described to solve problems with so-called scrap functions, in which the goal is to optimize a functional of the form $\phi(x(T)) + \int_0^T f(t, x(t), u(t))\,dt$. It is also shown how to use a modification of the method to solve free terminal time problems, in which $T$ is itself a choice variable. For each of the problems just described, the code seems to work well, and it would be of interest to have convergence results similar to those in Section 3 for these additional methods.
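Both adaptations are small at the code level. The sketch below shows the control projection and the secant iteration on $V(\theta) = x_T - x_{N,\theta}$; fbsm_terminal_state is a hypothetical wrapper that runs the FBSM with terminal co-state guess $\theta$ and returns the resulting $x_N$.

```python
import numpy as np

def project(u, a, b):
    """Bounded controls: u = min(b, max(a, h3(t, x, lam)))."""
    return np.minimum(b, np.maximum(a, u))

def shoot(fbsm_terminal_state, x_T, theta0, theta1, tol=1e-8, max_iter=50):
    """Secant iteration for V(theta) = x_T - x_{N,theta} = 0."""
    V0 = x_T - fbsm_terminal_state(theta0)
    V1 = x_T - fbsm_terminal_state(theta1)
    for _ in range(max_iter):
        if abs(V1) < tol:
            break
        theta0, theta1 = theta1, theta1 - V1 * (theta1 - theta0) / (V1 - V0)
        V0, V1 = V1, x_T - fbsm_terminal_state(theta1)
    return theta1
```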

References

[1] U.M. Ascher and L.R. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia.

[2] K. Atkinson, An Introduction to Numerical Analysis, 2nd ed., John Wiley, New York.

[3] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, 3rd ed., Springer-Verlag, New York.

[4] E. Bertolazzi, F. Biral, and M. Da Lio, Symbolic-numeric efficient solution of optimal control problems for multibody systems, J. Comput. Appl. Math. 185 (2006).

[5] J.T. Betts, Survey of numerical methods for trajectory optimization, Journal of Guidance, Control, and Dynamics 21 (1998).

[6] J.T. Betts, Practical Methods for Optimal Control Using Nonlinear Programming, SIAM, Philadelphia.

[7] R. Bulirsch, E. Nerz, H.J. Pesch, and O. von Stryk, Combining direct and indirect methods in optimal control: range maximization of a hang glider, in: Optimal Control: Calculus of Variations, Optimal Control Theory and Numerical Methods (papers from the Second Conference held at the University of Freiburg, May 26-June 1), R. Bulirsch, A. Miele, J. Stoer, and K.H. Well (eds.), International Series of Numerical Mathematics 111, Birkhäuser Verlag, Basel.

[8] F.L. Chernousko and A.A. Lyubushin, Method of successive approximations for solution of optimal control problems, Optimal Control Applications and Methods 3 (1982).

[9] A.L. Dontchev, Error estimates for a discrete approximation to constrained control problems, SIAM J. Numer. Anal. 18 (1981).

[10] A.L. Dontchev, W.W. Hager, and K. Malanowski, Error bounds for the Euler approximation of a state and control constrained optimal control problem, Numer. Funct. Anal. Optim. 21 (2000).

[11] A.L. Dontchev, W.W. Hager, and V.M. Veliov, Second-order Runge-Kutta approximations in control constrained optimal control, SIAM J. Numer. Anal. 38 (2000).

[12] A.L. Dontchev and W.W. Hager, The Euler approximation in state constrained optimal control, Math. Comp. 70 (2000).

[13] W.H. Enright and P.H. Muir, Efficient classes of Runge-Kutta methods for two-point boundary value problems, Computing 37 (1986).

[14] W. Hackbusch, A numerical method for solving parabolic equations with opposite orientations, Computing 20 (1978).

[15] W.W. Hager, Rates of convergence for discrete approximations to unconstrained control problems, SIAM J. Numer. Anal. 13 (1976).

[16] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer, Berlin.

[17] A. Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Cambridge.

[18] H.B. Keller, Numerical Methods for Two-Point Boundary-Value Problems, Blaisdell Publishing Company, Waltham, Mass.

[19] P. Kunkel and V. Mehrmann, Differential-Algebraic Equations: Analysis and Numerical Solution, European Mathematical Society, Zürich.

[20] S. Lenhart and J.T. Workman, Optimal Control Applied to Biological Models, Chapman & Hall/CRC, Boca Raton.

[21] S.K. Mitter, The successive approximation method for the solution of optimal control problems, Automatica 3 (1966).

[22] A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin.

[23] A. Seierstad and K. Sydsaeter, Optimal Control with Economic Applications, North-Holland, Amsterdam.

[24] J. Vlassenbroeck and R. Van Dooren, A Chebyshev technique for solving nonlinear optimal control problems, IEEE Trans. Automat. Control 33 (1988).


More information

Mathematics for chemical engineers. Numerical solution of ordinary differential equations

Mathematics for chemical engineers. Numerical solution of ordinary differential equations Mathematics for chemical engineers Drahoslava Janovská Numerical solution of ordinary differential equations Initial value problem Winter Semester 2015-2016 Outline 1 Introduction 2 One step methods Euler

More information

Lecture 42 Determining Internal Node Values

Lecture 42 Determining Internal Node Values Lecture 42 Determining Internal Node Values As seen in the previous section, a finite element solution of a boundary value problem boils down to finding the best values of the constants {C j } n, which

More information

The Bock iteration for the ODE estimation problem

The Bock iteration for the ODE estimation problem he Bock iteration for the ODE estimation problem M.R.Osborne Contents 1 Introduction 2 2 Introducing the Bock iteration 5 3 he ODE estimation problem 7 4 he Bock iteration for the smoothing problem 12

More information

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Ram Sharan Adhikari Assistant Professor Of Mathematics Rogers State University Mathematical

More information

Ordinary Differential Equation Theory

Ordinary Differential Equation Theory Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit

More information

OPTIMAL CONTROL CHAPTER INTRODUCTION

OPTIMAL CONTROL CHAPTER INTRODUCTION CHAPTER 3 OPTIMAL CONTROL What is now proved was once only imagined. William Blake. 3.1 INTRODUCTION After more than three hundred years of evolution, optimal control theory has been formulated as an extension

More information

The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1

The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1 The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1 1 School of Electronics and Information, Northwestern Polytechnical

More information

SYMMETRIC PROJECTION METHODS FOR DIFFERENTIAL EQUATIONS ON MANIFOLDS

SYMMETRIC PROJECTION METHODS FOR DIFFERENTIAL EQUATIONS ON MANIFOLDS BIT 0006-3835/00/4004-0726 $15.00 2000, Vol. 40, No. 4, pp. 726 734 c Swets & Zeitlinger SYMMETRIC PROJECTION METHODS FOR DIFFERENTIAL EQUATIONS ON MANIFOLDS E. HAIRER Section de mathématiques, Université

More information

UNCONDITIONAL STABILITY OF A CRANK-NICOLSON ADAMS-BASHFORTH 2 IMPLICIT-EXPLICIT NUMERICAL METHOD

UNCONDITIONAL STABILITY OF A CRANK-NICOLSON ADAMS-BASHFORTH 2 IMPLICIT-EXPLICIT NUMERICAL METHOD UNCONDITIONAL STABILITY OF A CRANK-NICOLSON ADAMS-BASHFORTH IMPLICIT-EXPLICIT NUMERICAL METHOD ANDREW JORGENSON Abstract. Systems of nonlinear partial differential equations modeling turbulent fluid flow

More information

Lecture 7 - Separable Equations

Lecture 7 - Separable Equations Lecture 7 - Separable Equations Separable equations is a very special type of differential equations where you can separate the terms involving only y on one side of the equation and terms involving only

More information

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints S. Lopes, F. A. C. C. Fontes and M. d. R. de Pinho Officina Mathematica report, April 4, 27 Abstract Standard

More information

INVERSION IN INDIRECT OPTIMAL CONTROL

INVERSION IN INDIRECT OPTIMAL CONTROL INVERSION IN INDIRECT OPTIMAL CONTROL François Chaplais, Nicolas Petit Centre Automatique et Systèmes, École Nationale Supérieure des Mines de Paris, 35, rue Saint-Honoré 7735 Fontainebleau Cedex, France,

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

arxiv: v2 [math.na] 21 May 2018

arxiv: v2 [math.na] 21 May 2018 SHORT-MT: Optimal Solution of Linear Ordinary Differential Equations by Conjugate Gradient Method arxiv:1805.01085v2 [math.na] 21 May 2018 Wenqiang Yang 1, Wenyuan Wu 1, and Robert M. Corless 2 1 Chongqing

More information

Simple Iteration, cont d

Simple Iteration, cont d Jim Lambers MAT 772 Fall Semester 2010-11 Lecture 2 Notes These notes correspond to Section 1.2 in the text. Simple Iteration, cont d In general, nonlinear equations cannot be solved in a finite sequence

More information

ON A WEIGHTED INTERPOLATION OF FUNCTIONS WITH CIRCULAR MAJORANT

ON A WEIGHTED INTERPOLATION OF FUNCTIONS WITH CIRCULAR MAJORANT ON A WEIGHTED INTERPOLATION OF FUNCTIONS WITH CIRCULAR MAJORANT Received: 31 July, 2008 Accepted: 06 February, 2009 Communicated by: SIMON J SMITH Department of Mathematics and Statistics La Trobe University,

More information

Chapter 5. Pontryagin s Minimum Principle (Constrained OCP)

Chapter 5. Pontryagin s Minimum Principle (Constrained OCP) Chapter 5 Pontryagin s Minimum Principle (Constrained OCP) 1 Pontryagin s Minimum Principle Plant: (5-1) u () t U PI: (5-2) Boundary condition: The goal is to find Optimal Control. 2 Pontryagin s Minimum

More information

NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS

NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS TWMS J Pure Appl Math V5, N2, 24, pp22-228 NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS S ASADI, AH BORZABADI Abstract In this paper, Haar wavelet benefits are applied to the delay

More information

NUMERICAL METHODS. lor CHEMICAL ENGINEERS. Using Excel', VBA, and MATLAB* VICTOR J. LAW. CRC Press. Taylor & Francis Group

NUMERICAL METHODS. lor CHEMICAL ENGINEERS. Using Excel', VBA, and MATLAB* VICTOR J. LAW. CRC Press. Taylor & Francis Group NUMERICAL METHODS lor CHEMICAL ENGINEERS Using Excel', VBA, and MATLAB* VICTOR J. LAW CRC Press Taylor & Francis Group Boca Raton London New York CRC Press is an imprint of the Taylor & Francis Croup,

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Exponential Integrators

Exponential Integrators Exponential Integrators John C. Bowman (University of Alberta) May 22, 2007 www.math.ualberta.ca/ bowman/talks 1 Exponential Integrators Outline Exponential Euler History Generalizations Stationary Green

More information

An Overly Simplified and Brief Review of Differential Equation Solution Methods. 1. Some Common Exact Solution Methods for Differential Equations

An Overly Simplified and Brief Review of Differential Equation Solution Methods. 1. Some Common Exact Solution Methods for Differential Equations An Overly Simplified and Brief Review of Differential Equation Solution Methods We will be dealing with initial or boundary value problems. A typical initial value problem has the form y y 0 y(0) 1 A typical

More information

The Milne error estimator for stiff problems

The Milne error estimator for stiff problems 13 R. Tshelametse / SAJPAM. Volume 4 (2009) 13-28 The Milne error estimator for stiff problems Ronald Tshelametse Department of Mathematics University of Botswana Private Bag 0022 Gaborone, Botswana. E-mail

More information

FIXED POINT ITERATION

FIXED POINT ITERATION FIXED POINT ITERATION The idea of the fixed point iteration methods is to first reformulate a equation to an equivalent fixed point problem: f (x) = 0 x = g(x) and then to use the iteration: with an initial

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators

Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators MATEMATIKA, 8, Volume 3, Number, c Penerbit UTM Press. All rights reserved Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators Annie Gorgey and Nor Azian Aini Mat Department of Mathematics,

More information

Optimal periodic locomotion for a two piece worm with an asymmetric dry friction model

Optimal periodic locomotion for a two piece worm with an asymmetric dry friction model 1st International Symposium on Mathematical Theory of Networks and Systems July 7-11, 014. Optimal periodic locomotion for a two piece worm with an asymmetric dry friction model Nak-seung Patrick Hyun

More information

Research Article The Numerical Solution of Problems in Calculus of Variation Using B-Spline Collocation Method

Research Article The Numerical Solution of Problems in Calculus of Variation Using B-Spline Collocation Method Applied Mathematics Volume 2012, Article ID 605741, 10 pages doi:10.1155/2012/605741 Research Article The Numerical Solution of Problems in Calculus of Variation Using B-Spline Collocation Method M. Zarebnia

More information

A POSTERIORI ERROR ESTIMATES FOR THE BDF2 METHOD FOR PARABOLIC EQUATIONS

A POSTERIORI ERROR ESTIMATES FOR THE BDF2 METHOD FOR PARABOLIC EQUATIONS A POSTERIORI ERROR ESTIMATES FOR THE BDF METHOD FOR PARABOLIC EQUATIONS GEORGIOS AKRIVIS AND PANAGIOTIS CHATZIPANTELIDIS Abstract. We derive optimal order, residual-based a posteriori error estimates for

More information

The collocation method for ODEs: an introduction

The collocation method for ODEs: an introduction 058065 - Collocation Methods for Volterra Integral Related Functional Differential The collocation method for ODEs: an introduction A collocation solution u h to a functional equation for example an ordinary

More information