Astronomy 8824: Numerical Methods Notes 2 Ordinary Differential Equations


Reading: Numerical Recipes, chapter on Integration of Ordinary Differential Equations (which is ch. 15, 16, or 17 depending on edition). Concentrate on the first three sections. Also look briefly at the first sections of the following chapter on Two Point Boundary Value Problems.

For orbit integrations, 3.4 of the current edition of Binney & Tremaine (Galactic Dynamics) is a good discussion, especially on the topics of symplectic integration schemes and individual particle timesteps.

We'll discuss some general aspects of integrating ODEs, while the example in the problem set will be a relatively simple case of integrating orbits in a gravitational potential, which is often relevant to astronomers!

A Side Note: Systems of Linear Equations

Reference: Numerical Recipes, chapter 2.

The system of equations

    a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + ... + a_{1n} x_n = b_1
    a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + ... + a_{2n} x_n = b_2
    ...
    a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + ... + a_{nn} x_n = b_n

can be written in matrix form A x = b.

If the equations are linearly independent, then the matrix A is non-singular and invertible. The solution to the equations is

    x = A⁻¹ A x = A⁻¹ b.

The computational task of matrix inversion is discussed in chapter 2 of NR. For small, non-singular matrices it is straightforward, allowing a straightforward solution to a system of equations. For example, in ipython try

    import numpy as np
    a = np.array([[1, 9, 7], [2, 4, 12], [3, 7, 3]])
    b = np.array([9, -3, 7])
    ainv = np.linalg.inv(a)
    np.dot(a, ainv)     # close to the identity matrix
    x = np.dot(ainv, b)
    np.dot(a, x)        # recovers b

The problem becomes more challenging when:

- The solutions have to be done many, many times, in which case it's important to be efficient.
- The equations are degenerate or nearly so, in which case one needs singular value decomposition.
- The number of matrix elements is very large. If many of the matrix elements are zero, then one can use sparse matrix methods.

Initial Value Problems

Higher-order differential equations can be reduced to systems of first-order differential equations. For example,

    d²y/dx² + q(x) dy/dx = r(x)

can be rewritten as

    dy/dx = z(x)
    dz/dx = r(x) − q(x) z(x),

two first-order equations that can be integrated simultaneously.

The generic problem to solve is therefore a system of coupled first-order ODEs,

    dy_i(x)/dx = f_i(x, y_1, y_2, ..., y_N),   i = 1, ..., N.

For the case of an orbit in a gravitational potential we have

    d²x/dt² = −∇Φ(x),

which can be written

    dx/dt = v
    dv/dt = −∇Φ(x).
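To make the reduction concrete, here is a minimal sketch (not from the notes) of the right-hand-side function for this system, using a point-mass potential Φ = −GM/r with GM = 1 as an illustrative choice; the function name `derivs` and the state packing are assumptions:

```python
import numpy as np

def derivs(t, y, GM=1.0):
    """Right-hand side of the coupled first-order system for an orbit.

    y packs position and velocity as [x1, x2, x3, v1, v2, v3];
    dx/dt = v and dv/dt = -grad(Phi) = -GM x / r^3 for a point mass.
    """
    x, v = y[:3], y[3:]
    r = np.sqrt(np.dot(x, x))
    a = -GM * x / r**3            # gravitational acceleration
    return np.concatenate([v, a])

# Circular orbit at r = 1: speed v = sqrt(GM/r) = 1
y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
print(derivs(0.0, y0))            # velocity part, then acceleration (-1, 0, 0)
```

Any integrator below can then advance the six-component state vector y without caring that it mixes positions and velocities.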

This can be written in the dy_i(x)/dx form above, though for orbits it is most straightforward to just think in terms of x and v.

An initial value problem is one in which all values of y_i are specified at the initial value of x. For example, one might specify x and v at t = 0 and integrate forward in time.

Runge-Kutta

We'll now think about integrating a single ODE for a variable y(x).

The simplest way to integrate an ODE, like the simplest way to do a numerical integral, is Euler's method:

    y_{n+1} = y_n + h f(x_n, y_n)

to advance from x_n to x_{n+1} = x_n + h. As with numerical integrals, this is a bad method; one can get higher accuracy and greater stability with only slightly more work.

Mid-point method (2nd-order Runge-Kutta)

Use the derivative at x_n to advance to x_{n+1/2}, then use the derivative at x_{n+1/2} to advance from x_n to x_{n+1}:

    k_1 = h f(x_n, y_n)
    k_2 = h f(x_n + h/2, y_n + k_1/2)
    y_{n+1} = y_n + k_2,

with an error O(h³) per step. Since the number of steps is ∝ 1/h, the error in the integration should scale as h².

4th-order Runge-Kutta

One can evaluate the derivative at more intermediate points and fit a higher-order function. This involves more evaluations per step, but it should allow one to take larger steps. The most commonly used scheme is 4th order, which seems to be a good compromise between these two considerations:

    k_1 = h f(x_n, y_n)
    k_2 = h f(x_n + h/2, y_n + k_1/2)
    k_3 = h f(x_n + h/2, y_n + k_2/2)
    k_4 = h f(x_n + h, y_n + k_3)
    y_{n+1} = y_n + k_1/6 + k_2/3 + k_3/3 + k_4/6 + O(h⁵).
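As a sketch of how these formulas turn into code (illustrative, not the NR routine), here is a single RK4 step plus an empirical check that halving h reduces the global error by about 2⁴ = 16 on the test problem dy/dx = −y, whose exact solution is e^{−x}:

```python
import numpy as np

def rk4_step(f, x, y, h):
    """Advance y from x to x + h with one 4th-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + k1/6 + k2/3 + k3/3 + k4/6

def integrate(f, x0, y0, x1, nstep):
    """Integrate from x0 to x1 with nstep fixed RK4 steps."""
    h = (x1 - x0) / nstep
    x, y = x0, y0
    for _ in range(nstep):
        y = rk4_step(f, x, y, h)
        x += h
    return y

f = lambda x, y: -y               # test problem: dy/dx = -y, y(0) = 1
err100 = abs(integrate(f, 0.0, 1.0, 1.0, 100) - np.exp(-1.0))
err200 = abs(integrate(f, 0.0, 1.0, 1.0, 200) - np.exp(-1.0))
print(err100 / err200)            # close to 16 for a 4th-order method
```

Doubling the step count should shrink the error by roughly 2⁴, confirming the O(h⁴) global scaling.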

(See NR, eq. 17.1.3.)

Adaptive Step Size

If computing time is not an issue, you can just integrate your ODE multiple times with steadily decreasing h and check for convergence to your desired accuracy.

However, the step size required for an accurate integration may vary by a large factor through the domain. For example, integrating a highly elliptical orbit requires much smaller timesteps near pericenter than through most of the orbit, so using a fixed timestep may be wasteful.

You would also like your code to give you an estimate of the accuracy of its answer.

NR discusses strategies for doing this within Runge-Kutta. The most straightforward idea is to advance from x to x + H, where H is a macroscopic jump, using small steps h. Keep halving h (doubling the number of steps) until the desired accuracy is achieved. Start with this value of h for the next H-step. If it's more accurate than needed, increase h; if it's less accurate than needed, decrease h. There is a somewhat more efficient way to achieve this effect, described in NR and implemented in their routines.

This approach is quite general. For a specific problem, you may have physical intuition about what should guide the size of steps. For example, the accuracy of integration may depend on the ratio of the timestep to the local dynamical time (Gρ)⁻¹ᐟ² for a gravitational calculation or the sound-crossing time L/c_s for a hydrodynamic calculation. In such situations, you may be able to choose a constant scaling factor that multiplies some function of local conditions, rather than using step-doubling or other Runge-Kutta strategies.

Bulirsch-Stoer

This is analogous to Romberg integration for numerical integrals. Take a sequence of integrations with increasing numbers of steps and correspondingly decreasing h. Using the results for this sequence, extrapolate to the result for h = 0.

NR is enthusiastic about this method for cases that require greater efficiency or higher accuracy than Runge-Kutta.
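The step-halving idea can be sketched as follows; this is a simplified stand-in for the NR machinery, using the mid-point method for the substeps, and the names `advance_H` and `tol` are illustrative:

```python
import math

def midpoint_step(f, x, y, h):
    """One 2nd-order Runge-Kutta (mid-point) step."""
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    return y + k2

def advance_H(f, x, y, H, tol=1e-8, nmax=2**20):
    """Advance from x to x + H, halving the substep h until two
    successive refinements agree to within tol."""
    n, prev = 1, None
    while n <= nmax:
        h = H / n
        xx, yy = x, y
        for _ in range(n):
            yy = midpoint_step(f, xx, yy, h)
            xx += h
        if prev is not None and abs(yy - prev) < tol:
            return yy, n          # converged result and the step count used
        prev = yy
        n *= 2
    raise RuntimeError("step-halving did not converge")

y1, n = advance_H(lambda x, y: -y, 0.0, 1.0, 1.0)
print(y1, math.exp(-1.0), n)
```

In a full adaptive driver one would carry the converged n forward as the starting guess for the next H-step, as described above.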

However, it's more susceptible to problems if the integration isn't smooth everywhere.

Leapfrog Integration

Leapfrog is a 2nd-order integration scheme for gravitational problems. From the initial conditions x_0, v_0, take a half-step to get v_{1/2}. Then leapfrog the positions over the velocities, then the velocities over the positions, and so forth:

    v_{1/2} = v_0 − ∇Φ(x_0) Δt/2
    x_1 = x_0 + v_{1/2} Δt
    v_{3/2} = v_{1/2} − ∇Φ(x_1) Δt
    x_2 = x_1 + v_{3/2} Δt

etc.

I am almost sure that this is equivalent to 2nd-order Runge-Kutta, but for the specific case of equations where ẋ depends only on v and v̇ depends only on x, which makes it possible to do fewer evaluations.

Important: You will usually want to output both positions and velocities. If your arrays store positions on the whole steps and velocities on the half-steps, then you must advance the velocities half a step before output (but be careful to get them back on the half-step before continuing the integration). You also need to synchronize positions and velocities before computing energy conservation checks.

Stiff Equations and Implicit Integration

Sometimes you want to integrate a system in which there are two or more very different timescales. If you integrate with timesteps short compared to the shorter timescale, your calculation may never finish.

A common example is a hydrodynamic system with cooling. You are typically interested in phenomena happening on the sound-crossing timescale L/c_s, where L is the scale of the system. But in a dense region, the cooling timescale may become orders of magnitude shorter than the sound-crossing timescale.

If you have to do the short-timescale integration accurately to get the long-timescale behavior right, then you're stuck. But sometimes the short-timescale behavior doesn't matter too much (e.g., it matters that the gas gets cold, but it doesn't matter exactly what its temperature is), and you just need your integration to be stable.
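A minimal leapfrog sketch for a 1-D harmonic oscillator (chosen purely for illustration: unit mass and Φ = x²/2, so the acceleration is −x), including the half-step synchronization of the velocity before the energy check:

```python
import math

def leapfrog(x, v, dt, nstep, accel=lambda x: -x):
    """Leapfrog with v stored on half-steps and x on whole steps."""
    v_half = v + accel(x) * dt / 2          # initial half-step for v
    for _ in range(nstep):
        x = x + v_half * dt                 # drift x over the velocity
        v_half = v_half + accel(x) * dt     # kick v over the position
    # synchronize: pull v back half a step to the same time as x
    v_sync = v_half - accel(x) * dt / 2
    return x, v_sync

x0, v0 = 1.0, 0.0                            # start at rest at x = 1
E0 = 0.5 * v0**2 + 0.5 * x0**2
x1, v1 = leapfrog(x0, v0, dt=0.01, nstep=int(2 * math.pi / 0.01))
E1 = 0.5 * v1**2 + 0.5 * x1**2
print(x1, v1, E1 - E0)                       # near (1, 0) after one period
```

Without the final half-step correction, the energy computed from x and v_half would show a spurious O(Δt) error, which is exactly the synchronization pitfall noted above.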

In this case, an implicit scheme for the short-timescale behavior may be useful. Here is an example from NR. Consider

    dy/dt = −c y,   c > 0.

The explicit Euler scheme is

    y_{n+1} = y_n + h y′_n = (1 − ch) y_n.

If h > 2/c, then the method is unstable, with |y_n| → ∞ as n → ∞. One can avoid this behavior by substituting y′_{n+1} for y′_n, obtaining the implicit equation

    y_{n+1} = y_n + h y′_{n+1}.

In this particular case, the implicit equation is trivially solved:

    y_{n+1} = y_n / (1 + ch),

which gives a stable result even for large h.

More generally, the implicit equations may be solved by matrix inversion (if they're linear) or by some form of iteration (which we'll get to in two weeks) if they're non-linear.

If you have a stiff set of equations, then sometimes the physics of the problem will suggest an alternative solution. For example, when timescales for ionization are short compared to other timescales in the system, the relative abundances may always be close to ionization equilibrium, where recombination rates balance ionization rates. This gives a set of equations that can be solved as a function of local conditions (e.g., density, temperature), and perhaps stored in a lookup table.

If you're interested in the evolution of a hierarchical triple star system where the outer orbital period is much longer than the inner orbital period, it may be adequate to treat the inner binary as a ring with mass spread over the orbit according to the amount of time the stars spend there.

Two Point Boundary Value Problems

Sometimes your boundary conditions aren't all specified at the same point. For example, you might know an initial position but be trying to achieve a specific final velocity.
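The stability contrast is easy to see numerically. A tiny sketch with c = 10 and h = 1, so ch = 10 > 2 and the explicit scheme must blow up while the implicit one decays toward zero as the true solution does:

```python
c, h = 10.0, 1.0                  # h > 2/c, far outside the explicit stability limit
y_exp = y_imp = 1.0
for _ in range(50):
    y_exp = (1 - c * h) * y_exp   # explicit Euler: multiplies by -9 each step
    y_imp = y_imp / (1 + c * h)   # implicit Euler: divides by 11 each step
print(abs(y_exp), y_imp)          # explicit diverges; implicit decays to ~0
```

The implicit result is not accurate for such a large h, but it is stable, which is often all the stiff component of a calculation requires.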

Stellar structure is a classic example: some boundary conditions are known at the center of the star, but others are known at the surface.

Two general strategies for this kind of problem are the shooting method and relaxation methods.

In the simplest version of the shooting method, you take a guess at the initial boundary values and see where you end up. Then you change an initial boundary value and see how that changes where you end up. You can now do some form of iteration (as discussed in two weeks) to try to zero in on the correct initial boundary condition that satisfies your final boundary condition. It's like firing your artillery shell and continuously adjusting the angle of your gun until you hit your desired target.

Sometimes it may be very difficult to get to the right final boundary condition. In such cases it may be better to integrate from both sides and try to meet in the middle, e.g., from both the center of the star and the surface of the star.

In relaxation methods, you start with an approximate guess at the full solution. You then use finite differences to calculate the errors in this guess at every point in the domain, and you adjust your solution to eliminate these errors. If your guess is close, then you may be able to reduce the problem of eliminating the errors to a problem of solving simultaneous linear equations.

Relaxation methods are most useful when you have a very good guess to start with. For example, you may want to solve for a sequence of main-sequence stellar models with increasing mass. The solution at mass M, perhaps scaled in some way, then becomes a good starting guess for the solution at mass M + ε. One could do the evolution of a star of fixed M in a similar way, with the composition changing slightly from one time to the next.
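The shooting method can be sketched on a toy linear problem, u″ = −u with u(0) = 0 and target u(1) = 1: the solution for initial slope s is u = s sin x, so the exact answer is s = 1/sin(1). Bisection on the initial slope stands in here for the root-finding iteration discussed above; the names and step counts are illustrative choices:

```python
import math

def rhs(x, y):
    # State y = (u, u'); u'' = -u becomes (u', -u).
    return (y[1], -y[0])

def shoot(s, nstep=200):
    """RK4-integrate u'' = -u from x = 0 to 1 with u(0) = 0, u'(0) = s;
    return the endpoint value u(1)."""
    h = 1.0 / nstep
    x, y = 0.0, (0.0, s)
    for _ in range(nstep):
        k1 = rhs(x, y)
        k2 = rhs(x + h/2, (y[0] + h*k1[0]/2, y[1] + h*k1[1]/2))
        k3 = rhs(x + h/2, (y[0] + h*k2[0]/2, y[1] + h*k2[1]/2))
        k4 = rhs(x + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
        y = (y[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             y[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
        x += h
    return y[0]

# Bisect on the initial slope until u(1) hits the target value 1.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(mid, 1.0 / math.sin(1.0))   # exact slope is 1/sin(1) ≈ 1.1884
```

Each trial integration is one "shot"; the bisection adjusts the unknown initial condition until the far boundary condition is met, exactly the artillery analogy above.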