Solving Differential Equations Numerically

Differential equations are ubiquitous in physics, since the laws of nature often take on a simple form when expressed in terms of infinitesimal changes of the variables. Many differential equations do not have a simple analytic solution in terms of well-known functions, so numerical methods must be used to solve them. Here we present three similar methods: Euler's algorithm and two Runge-Kutta algorithms. These fall under the category of finite-difference methods for solving differential equations.

We first develop these algorithms for a single function of one variable, x(t), that satisfies a first-order differential equation. Let x(t) satisfy the following differential equation:

    dx/dt = f(t, x)        (1)

The approach in finite-difference methods is to discretize the t-variable. The easiest way to discretize t is to use equal spacing, δ, between the values. That is, the chosen t values are indexed by the integer i: t_i = iδ, where i goes from zero to some maximum value imax. The continuous function x(t) becomes an array x[i] = x(t_i), and the continuous differential equation is transformed into an algebraic equation containing the x(t_i).

Euler Algorithm

In the Euler algorithm, the derivative is replaced by its finite-difference approximation. Defining x_i ≡ x(t_i), we have

    dx/dt ≈ (x_{i+1} - x_i)/δ        (2)

Substituting into the differential equation gives

    (x_{i+1} - x_i)/δ ≈ f(t_i, x_i)        (3)

Solving for x_{i+1} yields

    x_{i+1} ≈ x_i + δ f(t_i, x_i)        (4)

If the initial condition is such that x_0 is known, this equation can be used to find x_1. The equation can then be used again to find x_2, and so on. The Euler algorithm is equivalent to keeping only the term linear in δ in the Taylor expansion. The Taylor expansion of x(t + δ) about x(t) is given by

    x(t + δ) = x(t) + δ x'(t) + (δ²/2) x''(t) + ...

Using x'(t) = f(t, x), this becomes

    x(t + δ) = x(t) + δ f(t, x) + (δ²/2) f'(t, x) + ...

Neglecting terms of order δ² or higher gives the approximation

    x(t + δ) ≈ x(t) + δ f(t, x)        (5)

which is the Euler algorithm. The following is an example of code for the Euler algorithm (the update inside the time-stepping loop):

    x2 = x1 + del*f(t,x1);
    x1 = x2;

Note that in this code we do not save all the x(t_i), but only the most recent values.
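For concreteness, here is a minimal, self-contained version of this loop. It is a sketch under assumptions, not code from the notes: the test equation dx/dt = -x with x(0) = 1 (exact solution e^{-t}) is chosen purely for illustration, and the names f, x1, x2, del, imax follow the fragment above.

    #include <stdio.h>
    #include <math.h>

    /* Assumed test equation: dx/dt = f(t,x) = -x. */
    static double f(double t, double x) {
        (void)t;                      /* no explicit time dependence here */
        return -x;
    }

    int main(void) {
        double del = 0.01;            /* step size delta */
        double t = 0.0, x1 = 1.0, x2; /* initial condition x(0) = 1 */
        int i, imax = 100;            /* integrate out to t = 1 */

        for (i = 0; i < imax; i++) {
            x2 = x1 + del * f(t, x1); /* Euler step, Eq. (4) */
            x1 = x2;                  /* keep only the most recent value */
            t += del;
        }
        printf("Euler: x(%.2f) = %.6f, exact = %.6f\n", t, x1, exp(-t));
        return 0;
    }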

The example above is for a first-order differential equation. In physics, however, we often encounter second-order differential equations. One can always turn a second-order differential equation into two first-order ones. We demonstrate this with Newton's law of motion in one dimension:

    d²x/dt² = F(t, x, v_x)/m        (6)

where F(t, x, v_x) is the force and v_x = dx/dt is the velocity of the particle. Treating v_x(t) as a function of time in its own right, we have the pair of first-order equations

    dv_x/dt = F(t, x, v_x)/m
    dx/dt = v_x

We can now apply the Euler algorithm to each equation. Below is an example of code to handle this situation:

    v1 = initial velocity;
    v2 = v1 + del*F(t,x1,v1)/m;
    x2 = x1 + del*v1;
    x1 = x2;
    v1 = v2;

There are some variations on the Euler algorithm for second-order differential equations. One variation is to use v2 instead of v1 in the equation for x2:

    v1 = initial velocity;
    v2 = v1 + del*F(t,x1,v1)/m;
    x2 = x1 + del*v2;
    x1 = x2;
    v1 = v2;

This simple change yields more accurate results when the motion is oscillatory, as the sketch below illustrates.
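As a concrete check of this claim, here is a minimal sketch, assuming a harmonic force F = -kx with m = k = 1 (so the exact motion is cos t). It uses the v2 variant; this velocity-first form is often called the semi-implicit Euler or Euler-Cromer method.

    #include <stdio.h>

    /* Assumed harmonic force, k = 1; no explicit t or v dependence. */
    static double F(double t, double x, double v) {
        (void)t; (void)v;
        return -x;
    }

    int main(void) {
        double del = 0.01, m = 1.0;
        double t = 0.0, x1 = 1.0, v1 = 0.0, x2, v2;
        int i;

        for (i = 0; i < 1000; i++) {      /* integrate out to t = 10 */
            v2 = v1 + del * F(t, x1, v1) / m;
            x2 = x1 + del * v2;           /* v2 here: the oscillation-friendly variant */
            x1 = x2;
            v1 = v2;
            t += del;
        }
        /* For this variant the energy (x^2 + v^2)/2 should stay close to its
           initial value 0.5; using v1 in the position update instead lets it
           drift steadily upward. */
        printf("t = %.1f  x = %.4f  v = %.4f  E = %.4f\n",
               t, x1, v1, 0.5 * (x1 * x1 + v1 * v1));
        return 0;
    }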

Another variation for a second-order differential equation is to use the finite-difference approach directly on the second derivative. The second derivative at x_i can be approximated as

    d²x/dt² ≈ [(x_{i+1} - x_i)/δ - (x_i - x_{i-1})/δ]/δ        (7)

Collecting terms gives

    d²x/dt² ≈ (x_{i+1} + x_{i-1} - 2x_i)/δ²        (8)

Substituting this finite-difference formula for the second derivative into Newton's law of motion yields

    (x_{i+1} + x_{i-1} - 2x_i)/δ² ≈ F(t_i, x_i, v_i)/m        (9)

From this equation we can solve for x_{i+1}:

    x_{i+1} ≈ 2x_i - x_{i-1} + δ² F(t_i, x_i, v_i)/m        (10)

This form is particularly useful when F does not depend on the velocity v_i, since then no separate velocity update is needed.
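Here is a minimal sketch of the update in Eq. (10), again assuming the harmonic force F = -kx with m = k = 1. Eq. (10) is essentially the Verlet method widely used in molecular dynamics. It needs two starting positions; a common choice, used below, is to generate the second one with a Taylor step from the initial position and velocity.

    #include <stdio.h>

    /* Assumed velocity-independent force, k = 1. */
    static double F(double t, double x) {
        (void)t;
        return -x;
    }

    int main(void) {
        double del = 0.01, m = 1.0;
        double x0 = 1.0, v0 = 0.0;          /* initial conditions */
        double xprev = x0;                  /* x_{i-1} */
        /* starting step: x_1 = x_0 + delta v_0 + (delta^2/2) F/m */
        double x = x0 + del * v0 + 0.5 * del * del * F(0.0, x0) / m;
        double xnext, t;
        int i;

        for (i = 1; i < 1000; i++) {
            t = i * del;
            xnext = 2.0 * x - xprev + del * del * F(t, x) / m;  /* Eq. (10) */
            xprev = x;
            x = xnext;
        }
        printf("x(%.2f) = %.6f\n", i * del, x);  /* x at t = 10 */
        return 0;
    }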

Runge-Kutta Methods

The Euler algorithm is conceptually easy to understand. To improve its accuracy, one can decrease the step size and increase the number of iteration points; however, the computation time increases with the number of iteration points. As you might guess, there are more efficient algorithms for increasing accuracy. We first describe one such method, the second-order Runge-Kutta algorithm, and then show why it is more accurate.

We want the best estimate of x(t + δ) given knowledge of x(t) and f(t, x). The Euler algorithm uses the first two terms of the Taylor expansion, with the derivative evaluated at x(t) at time t. By the mean-value theorem, there are intermediate values t = t* and x = x* such that

    x(t + δ) = x(t) + δ f(t*, x*)        (11)

A simple improvement over the Euler method is to evaluate the derivative at x = x_m, where x_m lies halfway between x(t) and x(t + δ). We do not know x_m exactly, but we can use the Euler algorithm to approximate it. The two-step process is as follows:

    x_m ≈ x(t) + (δ/2) f(t, x(t))
    t_m ≡ t + δ/2
    x(t + δ) ≈ x(t) + δ f(t_m, x_m)

Note that x_m is determined via the Euler algorithm, but with a step size of δ/2. The result is still approximate but, as we will show, accurate to order δ². The above method is one type of second-order Runge-Kutta algorithm. An example of code for the algorithm is:

    xm = x1 + del*f(t,x1)/2;
    tm = t + del/2;
    x2 = x1 + del*f(tm,xm);
    x1 = x2;

For a second-order differential equation, this algorithm must be applied to both x and dx/dt. In the case of Newton's law of motion, the code would look like:

    v1 = initial velocity;
    vm = v1 + del*F(t,x1,v1)/m/2;
    xm = x1 + del*v1/2;
    tm = t + del/2;
    v2 = v1 + del*F(tm,xm,vm)/m;
    x2 = x1 + del*vm;
    x1 = x2;
    v1 = v2;

Although each step of size δ takes nearly twice as many lines of code as in the Euler algorithm, this Runge-Kutta algorithm can converge somewhat faster than the Euler algorithm. In the next section we derive the general form of the second-order Runge-Kutta algorithm.
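Before moving on, here is a matching self-contained sketch of the midpoint scheme, using the same assumed test equation dx/dt = -x, x(0) = 1 as the earlier Euler example so the two can be compared directly.

    #include <stdio.h>
    #include <math.h>

    /* Assumed test equation: dx/dt = f(t,x) = -x. */
    static double f(double t, double x) {
        (void)t;
        return -x;
    }

    int main(void) {
        double del = 0.01;
        double t = 0.0, x1 = 1.0, x2, xm, tm;
        int i;

        for (i = 0; i < 100; i++) {          /* integrate out to t = 1 */
            xm = x1 + del * f(t, x1) / 2.0;  /* Euler half-step to the midpoint */
            tm = t + del / 2.0;
            x2 = x1 + del * f(tm, xm);       /* full step with the midpoint slope */
            x1 = x2;
            t += del;
        }
        printf("RK2: x(%.2f) = %.8f, exact = %.8f\n", t, x1, exp(-t));
        return 0;
    }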

Development of Runge-Kutta Forms

The approach in second-order Runge-Kutta methods is to use only first-derivative information to best estimate the change in x(t) over one step. A good way to accomplish this is to combine two terms: one using derivative information at the start of the interval, f(t_0, x_0), and a second using derivative information somewhere within the interval, f(t_0 + αδ, x_0 + βδ f(t_0, x_0)). Here α and β are constants, the interval starts at time t_0, and x_0 ≡ x(t_0). The ansatz for this approach is

    x(t_0 + δ) = x(t_0) + w_1 δ f(t_0, x_0) + w_2 δ f(t_0 + αδ, x_0 + βδ f(t_0, x_0))        (12)

where w_1 is the weight given to the derivative at the start of the interval and w_2 is the weight given to the derivative within the interval. Note that the Euler algorithm is recovered if w_1 = 1 and w_2 = 0.

To compare with the Taylor expansion, we expand the right-hand side of the equation about (t_0, x_0). Expanding f(t_0 + αδ, x_0 + βδ f(t_0, x_0)), we have

    f(t_0 + αδ, x_0 + βδ f(t_0, x_0)) = f(t_0, x_0) + αδ ∂f/∂t + βδ f ∂f/∂x + ...        (13)

with corrections of order δ². Note that the derivatives are evaluated at t = t_0 and x = x_0. Substituting into Eq. (12) yields

    x(t_0 + δ) = x(t_0) + w_1 δ f(t_0, x_0) + w_2 δ [ f(t_0, x_0) + αδ ∂f/∂t + βδ f ∂f/∂x + ... ]        (14)

    x(t_0 + δ) = x(t_0) + (w_1 + w_2) δ f(t_0, x_0) + α w_2 δ² ∂f/∂t + β w_2 δ² f ∂f/∂x + ...        (15)

to order δ². We need to compare this expression with the Taylor expansion of x(t). The Taylor expansion of x(t_0 + δ) about x(t_0) is given by

    x(t_0 + δ) = x(t_0) + δ dx/dt + (δ²/2) d²x/dt² + ...        (16)

to order δ². The first derivative of x(t) at t_0 equals f(t_0, x_0). The second derivative of x(t) with respect to t equals the first total derivative of f with respect to t, d²x/dt² = df/dt, and f has both an implicit and an explicit time dependence:

    df/dt = ∂f/∂t + (∂f/∂x)(dx/dt)

so that

    d²x/dt² = ∂f/∂t + f ∂f/∂x

Substituting this expression for the second derivative into the Taylor expansion yields

    x(t_0 + δ) = x(t_0) + δ f(t_0, x_0) + (δ²/2) ∂f/∂t + (δ²/2) f ∂f/∂x + ...        (17)

to order δ², where we have used dx/dt at t_0 equal to f(t_0, x_0). Comparing Eq. (15) with Eq. (17), we see that if w_1, w_2, α and β satisfy the conditions

    w_1 + w_2 = 1
    α w_2 = 1/2
    β w_2 = 1/2

then the ansatz agrees with the Taylor expansion to order δ². This is one order of δ better than the Euler algorithm. There is no unique solution to these three equations: they constrain α to equal β and w_2 = 1 - w_1, but w_1 can take on any value between 0 and 1. One of the simplest choices is w_1 = 0. This choice gives w_2 = 1 and α = β = 1/2, and is the choice we presented above:

    x(t_0 + δ) = x(t_0) + δ f(t_0 + δ/2, x_0 + (δ/2) f(t_0, x_0))        (18)

Written out as before, this is

    x_m ≈ x(t_0) + (δ/2) f(t_0, x_0)
    t_m ≡ t_0 + δ/2
    x(t_0 + δ) ≈ x(t_0) + δ f(t_m, x_m)
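Any allowed choice of w_1 gives a valid second-order scheme. For example, w_1 = w_2 = 1/2 with α = β = 1 gives the scheme usually called Heun's method (or the improved Euler method). A minimal sketch of a single step follows; the helper name heun_step is ours, not from the notes, and f is a right-hand-side function as in the earlier examples.

    /* One step of Heun's method: the w1 = w2 = 1/2, alpha = beta = 1
       member of the second-order Runge-Kutta family. */
    double heun_step(double t, double x, double del,
                     double (*f)(double, double)) {
        double k1 = f(t, x);                  /* slope at the start of the interval */
        double k2 = f(t + del, x + del * k1); /* slope at the Euler-predicted endpoint */
        return x + del * (k1 + k2) / 2.0;     /* equal weights w1 = w2 = 1/2 */
    }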

The Taylor-expansion analysis allows us to determine the accuracy of the algorithms. For the Euler algorithm, x(t + δ) is exact up to first order in δ; that is, the error of a single step decreases as δ². If the step size δ is divided by N, then the error for x(t + δ/N) is 1/N² as large as the error for x(t + δ). However, with 1/N the step size, one needs N steps to reach x(t + δ), so the overall gain is a decrease of 1/N in the error. For the second-order Runge-Kutta algorithm, the single-step error decreases as δ³. If δ is decreased by a factor of N, the error for x(t + δ/N) decreases by a factor of 1/N³; since N steps are needed, the overall gain is a decrease of 1/N² in the error. However, since the second-order Runge-Kutta algorithm requires about 5/3 the computing time per δ/N step of the Euler algorithm, there is often only a marginal gain in accuracy per unit of computing time.

Fourth-Order Runge-Kutta Algorithm

The Runge-Kutta algorithm most commonly used for initial-value differential equations is the fourth-order Runge-Kutta method, which gives excellent accuracy per unit of computing time. We state the algorithm here without derivation:

    K_1 = δ f(t, x)
    K_2 = δ f(t + δ/2, x + K_1/2)
    K_3 = δ f(t + δ/2, x + K_2/2)
    K_4 = δ f(t + δ, x + K_3)

    x(t + δ) = x(t) + (K_1 + 2K_2 + 2K_3 + K_4)/6

For the fourth-order Runge-Kutta algorithm, the single-step error decreases as δ⁵, and there ends up being a net gain in accuracy per unit of computing time.
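As with the earlier algorithms, a minimal self-contained sketch may be useful; it again assumes the test equation dx/dt = -x with x(0) = 1. Note that even with a step size ten times coarser than in the earlier examples, the fourth-order method is far more accurate.

    #include <stdio.h>
    #include <math.h>

    /* Assumed test equation: dx/dt = f(t,x) = -x. */
    static double f(double t, double x) {
        (void)t;
        return -x;
    }

    int main(void) {
        double del = 0.1;              /* a coarse step still does well */
        double t = 0.0, x = 1.0;
        double K1, K2, K3, K4;
        int i;

        for (i = 0; i < 10; i++) {     /* integrate out to t = 1 */
            K1 = del * f(t, x);
            K2 = del * f(t + del / 2.0, x + K1 / 2.0);
            K3 = del * f(t + del / 2.0, x + K2 / 2.0);
            K4 = del * f(t + del, x + K3);
            x += (K1 + 2.0 * K2 + 2.0 * K3 + K4) / 6.0;
            t += del;
        }
        printf("RK4: x(%.1f) = %.10f, exact = %.10f\n", t, x, exp(-t));
        return 0;
    }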