Bindel, Fall 2011 Intro to Scientific Computing (CS 3220) Week 12: Monday, Apr 18. HW 7 is posted, and will be due in class on 4/25.

Logistics

HW 6 is due at 11:59 tonight. HW 7 is posted, and will be due in class on 4/25. The prelim is graded; an analysis and rubric are on CMS.

Problem du jour

For implicit methods like backward Euler or the trapezoidal rule, we need to solve a nonlinear equation at each update. For backward Euler, for example, we have

    y_{n+1} - h f(y_{n+1}) - y_n = 0.

What is Newton's iteration for computing y_{n+1} given y_n?

Answer: The Newton iteration is

    y_{n+1}^{new} = y_{n+1}^{old} - (I - h f'(y_{n+1}^{old}))^{-1} (y_{n+1}^{old} - h f(y_{n+1}^{old}) - y_n).

We could also do something less expensive by computing and factoring an approximation to the Jacobian once and re-using it until the convergence becomes too slow. If you use Matlab's implicit ODE solvers, you can specify the Jacobian (or, if specifying the Jacobian directly is expensive, you can specify the sparsity pattern of the Jacobian). For a non-stiff problem where h f' is not too large, note that we could also use fixed point iteration:

    y_{n+1}^{new} = y_n + h f(y_{n+1}^{old}).

In order to do Newton, fixed point, or any other iteration, though, we need a good initial guess. We can get a good initial guess by polynomial interpolation through previous points. In the context of ODE solvers, this is called a predictor, which we pair together with a corrector to get the implicit method.
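As a concrete sketch of this update, here is backward Euler with Newton iteration for a scalar problem (illustrative Python; the test equation y' = -y^3 and all names are invented for the example):

```python
def backward_euler_newton(f, df, y_n, h, tol=1e-12, maxit=20):
    """One backward Euler step: solve y - h*f(y) - y_n = 0 for y by Newton.

    f and df are the right-hand side and its derivative with respect to y;
    the previous value y_n serves as the (crude) initial guess."""
    y = y_n
    for _ in range(maxit):
        g = y - h * f(y) - y_n       # residual of the implicit equation
        dg = 1.0 - h * df(y)         # scalar Jacobian: 1 - h f'(y)
        y_new = y - g / dg           # Newton update
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Illustrative test problem: y' = -y^3, y(0) = 1
f  = lambda y: -y**3
df = lambda y: -3.0 * y**2

y, h = 1.0, 0.1
for _ in range(10):                  # ten backward Euler steps to t = 1
    y = backward_euler_newton(f, df, y, h)
print(y)
```

For a scalar problem the "factor an approximate Jacobian once" trick just means freezing dg; in the vector case one would factor I - h f'(y) and reuse that factorization across iterations.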

An aside: higher-order differential equations

So far, we have been talking about first-order equations. It is worth spending a moment reminding ourselves how to convert higher-order systems to first-order form. For example, with the auxiliary variable v = y', the second-order equation

    m y'' + b y' + k y = f(t)

becomes the first-order system

    y' = v
    v' = (f(t) - b v - k y) / m.

More generally, we can always convert high-order equations to first-order form by introducing auxiliary variables. However, it is sometimes better to recognize that higher-order differential equations are special and treat them directly rather than converting them to the more general form. This is done, for example, in the Newmark family of methods commonly used to time-step finite element simulations (which are second-order in time).

The Runge-Kutta concept

Runge-Kutta methods evaluate f(t, y) multiple times in order to get higher-order accuracy. For example, the classical Runge-Kutta scheme is

    K_0 = f(t_n, y_n)
    K_1 = f(t_n + h/2, y_n + (h/2) K_0)
    K_2 = f(t_n + h/2, y_n + (h/2) K_1)
    K_3 = f(t_n + h, y_n + h K_2)
    y_{n+1} = y_n + (h/6) (K_0 + 2 K_1 + 2 K_2 + K_3).

Note that if f is a function of time alone, this is simply Simpson's rule. This is no accident.

Runge-Kutta methods are frequently used in pairs in which a high-order method and a lower-order method can be computed with the same evaluations. Perhaps the most popular such methods are the Fehlberg 4(5) and

Dormand-Prince 4(5) pairs; the Matlab code ode45 uses the Dormand-Prince pair. The difference between the two methods is then used as an estimate of the local error in the lower-order method. If a local error estimate seems too large, it is natural to try again with a shorter step based on an asymptotic expansion of the error. This method of step control works well on many problems in practice, but it is not foolproof (as we will see in HW 7). For example, in some settings the adaptive error control may suggest a time step which is fine for local error, but terrible for stability.

Adaptive time stepping routines generally use tolerances for both absolute and relative errors. A time step is accepted if

    |e_i| < max(rtol_i |y_i|, atol_i),

where rtol_i and atol_i are the tolerances for the ith component of the solution vector. The error tolerances have default values (10^{-3} relative and 10^{-6} absolute), but in practice it may be a good idea to set the tolerances yourself.

In principle, comparing two methods gives us an error estimate only for the lower-order method. However, one often takes a step with the higher-order method anyway (at least for non-stiff problems). This cheat works well in practice, but we use the dignified-sounding name of local extrapolation to dodge awkward questions about its mathematical legitimacy.

There is a bewildering variety of Runge-Kutta methods. Some are explicit, others are implicit. Some preserve interesting structural properties. Some are based on equally-spaced interpolation points; others evaluate on Gauss-Legendre points. In some, the stages can be computed one at a time; in others, the stages all depend on each other. But these methods are beyond the scope of the current discussion.

Matlab's ode45

For most non-stiff problems, ode45 is a good first choice of integrator. The basic calling sequence is

    [tout, yout] = ode45(f, tspan, y0);

The function f(t,y) returns a column vector.
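As a toy illustration of the acceptance test above, here is a step-doubling error controller built on the classical Runge-Kutta step (a deliberately simplified Python sketch; ode45 itself uses the embedded Dormand-Prince pair rather than step doubling, and the growth and shrink factors below are arbitrary choices):

```python
def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for scalar y' = f(t, y)."""
    K0 = f(t, y)
    K1 = f(t + h/2, y + h/2 * K0)
    K2 = f(t + h/2, y + h/2 * K1)
    K3 = f(t + h, y + h * K2)
    return y + h/6 * (K0 + 2*K1 + 2*K2 + K3)

def rk4_adaptive(f, t, y, t_end, h, rtol=1e-6, atol=1e-9):
    """Integrate y' = f(t, y) with step doubling for error control.

    A full step is compared against two half steps; the difference
    estimates the local error, and a step is accepted only if
    |err| < max(rtol*|y|, atol), echoing the test in the notes."""
    while t < t_end - 1e-14:
        h = min(h, t_end - t)            # do not step past t_end
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        err = abs(y_half - y_full)
        if err < max(rtol * abs(y_half), atol):
            t, y = t + h, y_half         # accept the two-half-step value
            h *= 1.5                     # grow the step (ad hoc factor)
        else:
            h *= 0.5                     # reject; retry with a shorter step
    return y

# Test problem y' = y, y(0) = 1, so y(1) = e
print(rk4_adaptive(lambda t, y: y, 0.0, 1.0, 1.0, 0.1))   # close to e
```

A production controller would instead choose the new step from the asymptotic error model (for RK4, local error roughly proportional to h^5) rather than these fixed factors; the accepted two-half-step value has error roughly err/15 by the usual Richardson argument.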
On output, tout is a column vector of evaluation times and yout is a matrix of solution values (one per row). Usually, tspan has two entries: tspan = [t0 tmax]. However, we can also specify points where we want solution values. In general, the underlying

ODE solver does not put time steps at each of these points; instead it fills in the values using polynomial interpolation (this is called dense output). The ode45 function takes an optional argument called opt that contains a structure produced by odeset. Using odeset, we can set error tolerances, put bounds on the step size, indicate to the solver that certain components must be non-negative, look for special events, or request diagnostic output.

The multistep concept

The Runge-Kutta methods proceed from time t_n to time t_{n+1}, then stop looking at t_n. An alternative is to use not only the behavior at t_n, but also the behavior at previous times t_{n-1}, t_{n-2}, etc. Methods that do this are multistep methods. Most such methods are based on polynomial interpolation.

For non-stiff problems, the Adams family are the most popular multistep methods. The k-step explicit Adams methods (or Adams-Bashforth methods) interpolate f(t_j, y_j) at the points t_{n-k}, ..., t_n with a degree k polynomial p(t). Then, in order to estimate

    y(t_{n+1}) = y(t_n) + ∫_{t_n}^{t_{n+1}} f(s, y(s)) ds,

one computes

    y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} p(s) ds.

The implicit Adams methods (or Adams-Moulton methods) also interpolate through the unknown point. Though they are not A-stable, Adams-Moulton methods have larger stability regions and smaller error constants than Adams-Bashforth methods. Often, the two are used together to form a predictor-corrector pair: predict with Adams-Bashforth, then correct to Adams-Moulton. Because these methods are typically used for non-stiff problems, fixed point iteration often provides an adequate corrector.

With multistep methods, we can adapt not only the time step, but also the order. Very high-order methods may be appropriate when the solution is smooth and we want either to minimize the number of time steps or to meet very strict accuracy requirements. The Matlab routine ode113 implements a variable-order Adams-Bashforth-Moulton predictor-corrector solver.
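A minimal member of this family pairs the two-step Adams-Bashforth predictor with the trapezoidal rule (the one-step Adams-Moulton corrector) in predict-evaluate-correct-evaluate (PECE) form. The Python below is an illustrative sketch, not ode113's actual algorithm; the RK2 bootstrap for the second starting value is one arbitrary choice among several:

```python
def abm2_pece(f, t0, y0, h, nsteps):
    """Two-step Adams-Bashforth predictor + trapezoidal (Adams-Moulton)
    corrector in PECE form for scalar y' = f(t, y).

    Multistep methods need extra starting values; here a single Heun
    (RK2) step supplies the second one."""
    t, y = t0, y0
    f_old = f(t, y)
    y1 = y + h/2 * (f_old + f(t + h, y + h * f_old))   # Heun bootstrap
    t1 = t + h
    f_new = f(t1, y1)
    ts, ys = [t0, t1], [y0, y1]
    for _ in range(nsteps - 1):
        # Predict with Adams-Bashforth: y_{n+1} = y_n + h/2 (3 f_n - f_{n-1})
        yp = ys[-1] + h/2 * (3*f_new - f_old)
        # Evaluate at the predicted point, then correct with the trapezoid rule
        yc = ys[-1] + h/2 * (f(ts[-1] + h, yp) + f_new)
        ts.append(ts[-1] + h)
        ys.append(yc)
        f_old, f_new = f_new, f(ts[-1], yc)            # final evaluate
    return ts, ys

# Test problem y' = y, y(0) = 1; exact solution e^t
ts, ys = abm2_pece(lambda t, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])                                          # close to e
```

Here the corrector is applied only once; iterating the correction to convergence would be exactly the fixed point iteration mentioned earlier.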
The Adams methods interpolate the function values f; the backward differentiation formulas (BDF) instead interpolate y. The next step y_{n+1} is

chosen so that the polynomial interpolating (t_{n-k}, y_{n-k}) through (t_{n+1}, y_{n+1}) has derivative at t_{n+1} equal to f(t_{n+1}, y_{n+1}). Matlab's solver ode15s uses a variable-order numerical differentiation formula (a close relative of BDF). The ode15s code would be a typical first choice for stiff problems.
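For the linear test equation y' = λy, the implicit BDF solve can be done in closed form, which makes for a compact sketch of the BDF idea (illustrative Python; ode15s's NDF variant and its order and step control are far more elaborate):

```python
def bdf2_linear(lam, y0, h, nsteps):
    """BDF2 applied to the linear test problem y' = lam*y.

    The BDF2 formula
        (3*y_{n+1} - 4*y_n + y_{n-1}) / (2*h) = f(t_{n+1}, y_{n+1})
    is implicit, but for f(y) = lam*y it solves in closed form:
        y_{n+1} = (4*y_n - y_{n-1}) / (3 - 2*h*lam).
    A single backward Euler step supplies the second starting value."""
    y_prev = y0
    y = y0 / (1.0 - h * lam)             # backward Euler bootstrap step
    for _ in range(nsteps - 1):
        y_prev, y = y, (4*y - y_prev) / (3 - 2*h*lam)
    return y

# Decay problem y' = -y, y(0) = 1; exact solution exp(-t)
print(bdf2_linear(-1.0, 1.0, 0.01, 100))   # close to exp(-1) = 0.3679...
```

For a nonlinear f, each step would instead require a Newton solve of the BDF equation, just as in the backward Euler example at the start of these notes.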