Runge-Kutta methods. With orders of Taylor methods yet without derivatives of f(t, y(t))


First order Taylor expansion in two variables

Theorem: Suppose that $f(t, y)$ and all its partial derivatives are continuous on $D \stackrel{\text{def}}{=} \{(t, y) : a \le t \le b,\ c \le y \le d\}$. Let $(t_0, y_0), (t, y) \in D$, $\Delta t = t - t_0$, $\Delta y = y - y_0$. Then
$$f(t, y) = P_1(t, y) + R_1(t, y),$$
where
$$P_1(t, y) = f(t_0, y_0) + \Delta t\, \frac{\partial f}{\partial t}(t_0, y_0) + \Delta y\, \frac{\partial f}{\partial y}(t_0, y_0),$$
$$R_1(t, y) = \frac{\Delta t^2}{2}\, \frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \Delta t\, \Delta y\, \frac{\partial^2 f}{\partial t\, \partial y}(\xi, \mu) + \frac{\Delta y^2}{2}\, \frac{\partial^2 f}{\partial y^2}(\xi, \mu)$$
for some point $(\xi, \mu) \in D$.

Second order Taylor expansion in two variables

Theorem: Suppose that $f(t, y)$ and all its partial derivatives are continuous on $D \stackrel{\text{def}}{=} \{(t, y) : a \le t \le b,\ c \le y \le d\}$. Let $(t_0, y_0), (t, y) \in D$, $\Delta t = t - t_0$, $\Delta y = y - y_0$. Then
$$f(t, y) = P_2(t, y) + R_2(t, y),$$
where
$$P_2(t, y) = f(t_0, y_0) + \left(\Delta t\, \frac{\partial f}{\partial t}(t_0, y_0) + \Delta y\, \frac{\partial f}{\partial y}(t_0, y_0)\right) + \left(\frac{\Delta t^2}{2}\, \frac{\partial^2 f}{\partial t^2}(t_0, y_0) + \Delta t\, \Delta y\, \frac{\partial^2 f}{\partial t\, \partial y}(t_0, y_0) + \frac{\Delta y^2}{2}\, \frac{\partial^2 f}{\partial y^2}(t_0, y_0)\right),$$
$$R_2(t, y) = \frac{1}{3!} \sum_{j=0}^{3} \binom{3}{j}\, \Delta t^{3-j}\, \Delta y^{j}\, \frac{\partial^3 f}{\partial t^{3-j}\, \partial y^{j}}(\xi, \mu)$$
for some point $(\xi, \mu) \in D$.

Runge-Kutta Method of Order Two (III): Midpoint Method

$$w_0 = \alpha, \qquad w_{j+1} = w_j + h\, f\!\left(t_j + \frac{h}{2},\ w_j + \frac{h}{2} f(t_j, w_j)\right), \quad j = 0, 1, \dots, N-1.$$

Two function evaluations for each $j$; second order accuracy; no need for derivative calculations.

General 2nd order Runge-Kutta Methods

$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, N-1, \qquad w_{j+1} = w_j + h\Big(a_1 f(t_j, w_j) + a_2 f\big(t_j + \alpha_2,\ w_j + \delta_2 f(t_j, w_j)\big)\Big).$$

Two function evaluations for each $j$. We want to choose $a_1, a_2, \alpha_2, \delta_2$ for the highest possible order of accuracy.

Local truncation error:
$$\tau_{j+1}(h) = \frac{y(t_{j+1}) - y(t_j)}{h} - \Big(a_1 f(t_j, y(t_j)) + a_2 f\big(t_j + \alpha_2,\ y(t_j) + \delta_2 f(t_j, y(t_j))\big)\Big)$$
$$= y'(t_j) + \frac{h}{2}\, y''(t_j) + O(h^2) - \Big((a_1 + a_2)\, f(t_j, y(t_j)) + a_2 \alpha_2\, \frac{\partial f}{\partial t}(t_j, y(t_j)) + a_2 \delta_2\, f(t_j, y(t_j))\, \frac{\partial f}{\partial y}(t_j, y(t_j))\Big) + O(h^2),$$
where
$$y'(t_j) = f(t_j, y(t_j)), \qquad y''(t_j) = \frac{df}{dt}(t_j, y(t_j)) = \frac{\partial f}{\partial t}(t_j, y(t_j)) + f(t_j, y(t_j))\, \frac{\partial f}{\partial y}(t_j, y(t_j)).$$

For any choice with
$$a_1 + a_2 = 1, \qquad a_2 \alpha_2 = a_2 \delta_2 = \frac{h}{2},$$
we have a second order method: $\tau_{j+1}(h) = O(h^2)$. Four parameters, three equations.

General 2nd order Runge-Kutta Methods

$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, N-1, \qquad w_{j+1} = w_j + h\Big(a_1 f(t_j, w_j) + a_2 f\big(t_j + \alpha_2,\ w_j + \delta_2 f(t_j, w_j)\big)\Big),$$
with $a_1 + a_2 = 1$, $a_2 \alpha_2 = a_2 \delta_2 = \frac{h}{2}$.

Midpoint method ($a_1 = 0$, $a_2 = 1$, $\alpha_2 = \delta_2 = \frac{h}{2}$):
$$w_{j+1} = w_j + h\, f\!\left(t_j + \frac{h}{2},\ w_j + \frac{h}{2} f(t_j, w_j)\right).$$

Modified Euler method ($a_1 = a_2 = \frac{1}{2}$, $\alpha_2 = \delta_2 = h$):
$$w_{j+1} = w_j + \frac{h}{2}\Big(f(t_j, w_j) + f\big(t_{j+1},\ w_j + h f(t_j, w_j)\big)\Big).$$
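
To make the parameter family concrete, here is a small Python sketch added to this transcript (not code from the slides; `rk2_step` and its arguments are made-up names). Writing $\alpha_2 = c\,h$ and $\delta_2 = d\,h$, the order conditions become $a_1 + a_2 = 1$ and $a_2 c = a_2 d = \frac{1}{2}$.

```python
def rk2_step(f, t, w, h, a1, a2, c, d):
    """General explicit two-stage Runge-Kutta step with alpha2 = c*h, delta2 = d*h.
    Second order accuracy requires a1 + a2 == 1 and a2*c == a2*d == 1/2."""
    k1 = f(t, w)
    k2 = f(t + c * h, w + d * h * k1)
    return w + h * (a1 * k1 + a2 * k2)

# Midpoint method:        a1 = 0,   a2 = 1,   c = d = 1/2
# Modified Euler method:  a1 = 1/2, a2 = 1/2, c = d = 1
```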

3rd order Runge-Kutta Method (rarely used in practice)

$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, N-1,$$
$$w_{j+1} = w_j + \frac{h}{4}\left(f(t_j, w_j) + 3 f\!\left(t_j + \frac{2h}{3},\ w_j + \frac{2h}{3} f\!\left(t_j + \frac{h}{3},\ w_j + \frac{h}{3} f(t_j, w_j)\right)\right)\right) \stackrel{\text{def}}{=} w_j + h\, \phi(t_j, w_j).$$

Local truncation error:
$$\tau_{j+1}(h) = \frac{y(t_{j+1}) - y(t_j)}{h} - \phi(t_j, y(t_j)) = O(h^3).$$
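
A corresponding Python sketch of one step of this third-order method, assuming the reconstruction above (an added illustration; `rk3_step` is a made-up name):

```python
def rk3_step(f, t, w, h):
    """One step of the third-order method w_{j+1} = w_j + h*phi(t_j, w_j) above."""
    k1 = f(t, w)
    k2 = f(t + h / 3, w + (h / 3) * k1)
    k3 = f(t + 2 * h / 3, w + (2 * h / 3) * k2)
    return w + (h / 4) * (k1 + 3 * k3)
```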

4th order Runge-Kutta Method

$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, N-1,$$
$$k_1 = h\, f(t_j, w_j),$$
$$k_2 = h\, f\!\left(t_j + \frac{h}{2},\ w_j + \frac{1}{2} k_1\right),$$
$$k_3 = h\, f\!\left(t_j + \frac{h}{2},\ w_j + \frac{1}{2} k_2\right),$$
$$k_4 = h\, f(t_{j+1},\ w_j + k_3),$$
$$w_{j+1} = w_j + \frac{1}{6}(k_1 + 2 k_2 + 2 k_3 + k_4).$$

Four function evaluations per step.
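
The classical fourth-order step translates directly into code. The sketch below is an added illustration (the names `rk4_step` and `f` are placeholders), using the same $k$-notation as the slide:

```python
def rk4_step(f, t, w, h):
    """One step of the classical 4th order Runge-Kutta method."""
    k1 = h * f(t, w)
    k2 = h * f(t + h / 2, w + k1 / 2)
    k3 = h * f(t + h / 2, w + k2 / 2)
    k4 = h * f(t + h, w + k3)
    return w + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```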

Example

Initial value ODE:
$$\frac{dy}{dt} = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5,$$
with exact solution $y(t) = (1 + t)^2 - 0.5\, e^t$.
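
For a quick check of the methods above on this IVP, one might run something like the following added sketch (not from the slides; the step size $h = 0.25$ is an arbitrary choice for illustration, and `rk4_step` is the sketch given earlier):

```python
import math

def f(t, y):
    return y - t**2 + 1              # right-hand side of the IVP

def y_exact(t):
    return (1 + t)**2 - 0.5 * math.exp(t)

t, w, h = 0.0, 0.5, 0.25             # y(0) = 0.5; h = 0.25 chosen for illustration
while t < 2.0 - 1e-12:
    w = rk4_step(f, t, w, h)         # one classical RK4 step
    t += h
    print(f"t = {t:.2f}  w = {w:.6f}  error = {abs(w - y_exact(t)):.2e}")
```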

Adaptive Error Control (I)

$$\frac{dy}{dt} = f(t, y(t)), \quad a \le t \le b, \quad y(a) = \alpha.$$

Consider a variable-step method with a well-chosen function $\phi(t, w, h)$:
$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, \text{ choose step-size } h_j = t_{j+1} - t_j, \text{ set } w_{j+1} = w_j + h_j\, \phi(t_j, w_j, h_j).$$

Adaptively choose the step-size to satisfy a given tolerance.

Adaptive Error Control (II)

Given an order-$n$ method
$$w_0 = \alpha; \quad \text{for } j = 0, 1, \dots, \quad w_{j+1} = w_j + h\, \phi(t_j, w_j, h), \quad h = t_{j+1} - t_j,$$
with local truncation error (LTE)
$$\tau_{j+1}(h) = \frac{y(t_{j+1}) - y(t_j)}{h} - \phi(t_j, y(t_j), h) = O(h^n).$$

Given a tolerance $\tau > 0$, we would like to estimate the largest step-size $h$ for which $|\tau_{j+1}(h)| \le \tau$.

Approach: estimate $\tau_{j+1}(h)$ with an order-$(n+1)$ method
$$\tilde w_{j+1} = \tilde w_j + h\, \tilde\phi(t_j, \tilde w_j, h), \quad \text{for } j \ge 0.$$

LTE Estimation (I)

Assume $w_j \approx y(t_j)$ and $\tilde w_j \approx y(t_j)$ (we are only estimating the LTE).

$\phi(t, w, h)$ is an order-$n$ method:
$$\tau_{j+1}(h) \stackrel{\text{def}}{=} \frac{y(t_{j+1}) - y(t_j)}{h} - \phi(t_j, y(t_j), h) \approx \frac{y(t_{j+1}) - \big(w_j + h\, \phi(t_j, w_j, h)\big)}{h} = \frac{y(t_{j+1}) - w_{j+1}}{h} = O(h^n).$$

$\tilde\phi(t, w, h)$ is an order-$(n+1)$ method:
$$\tilde\tau_{j+1}(h) \stackrel{\text{def}}{=} \frac{y(t_{j+1}) - y(t_j)}{h} - \tilde\phi(t_j, y(t_j), h) \approx \frac{y(t_{j+1}) - \big(\tilde w_j + h\, \tilde\phi(t_j, \tilde w_j, h)\big)}{h} = \frac{y(t_{j+1}) - \tilde w_{j+1}}{h} = O(h^{n+1}).$$

LTE Estimation (III)

Assume $w_j \approx y(t_j)$ and $\tilde w_j \approx y(t_j)$ (we are only estimating the LTE).

$\phi(t, w, h)$ is an order-$n$ method: $\displaystyle \tau_{j+1}(h) \approx \frac{y(t_{j+1}) - w_{j+1}}{h} = O(h^n)$.

$\tilde\phi(t, w, h)$ is an order-$(n+1)$ method: $\displaystyle \tilde\tau_{j+1}(h) \approx \frac{y(t_{j+1}) - \tilde w_{j+1}}{h} = O(h^{n+1})$.

Therefore
$$\tau_{j+1}(h) \approx \frac{y(t_{j+1}) - w_{j+1}}{h} = \frac{y(t_{j+1}) - \tilde w_{j+1}}{h} + \frac{\tilde w_{j+1} - w_{j+1}}{h} = O(h^{n+1}) + \frac{\tilde w_{j+1} - w_{j+1}}{h}.$$

LTE estimate:
$$\tau_{j+1}(h) \approx \frac{\tilde w_{j+1} - w_{j+1}}{h} = O(h^n).$$

step-size selection (I)

LTE estimate:
$$\tau_{j+1}(h) \approx \frac{\tilde w_{j+1} - w_{j+1}}{h}.$$

Since $\tau_{j+1}(h) = O(h^n)$, assume $\tau_{j+1}(h) \approx K h^n$, where $K$ is independent of $h$. Then $K$ should satisfy
$$K h^n \approx \frac{\tilde w_{j+1} - w_{j+1}}{h}.$$

Assume the LTE for a new step-size $q h$ satisfies the given tolerance $\epsilon$:
$$|\tau_{j+1}(q h)| \le \epsilon;$$
we need to estimate $q$.

step-size selection (II)

LTE estimate:
$$K h^n \approx \tau_{j+1}(h) \approx \frac{\tilde w_{j+1} - w_{j+1}}{h}.$$

Assume the LTE for $q h$ satisfies the given tolerance $\epsilon$:
$$|\tau_{j+1}(q h)| \approx |K (q h)^n| = q^n\, |K h^n| \approx q^n\, \frac{|\tilde w_{j+1} - w_{j+1}|}{h} \le \epsilon.$$

New step-size estimate:
$$q \le \left(\frac{\epsilon h}{|\tilde w_{j+1} - w_{j+1}|}\right)^{1/n}.$$

Summary

Given an order-$n$ method
$$w_{j+1} = w_j + h\, \phi(t_j, w_j, h), \quad h = t_{j+1} - t_j, \quad j \ge 0,$$
and an order-$(n+1)$ method
$$\tilde w_{j+1} = \tilde w_j + h\, \tilde\phi(t_j, \tilde w_j, h), \quad j \ge 0:$$
for each $j$, compute
$$w_{j+1} = w_j + h\, \phi(t_j, w_j, h), \qquad \tilde w_{j+1} = w_j + h\, \tilde\phi(t_j, w_j, h);$$
the new step-size $q h$ should satisfy
$$q \le \left(\frac{\epsilon h}{|\tilde w_{j+1} - w_{j+1}|}\right)^{1/n} = \left(\frac{\epsilon}{|\tilde\phi(t_j, w_j, h) - \phi(t_j, w_j, h)|}\right)^{1/n}.$$
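
As an added illustration of this summary (not from the slides; `suggest_q` is a made-up name), the LTE estimate and the step-size factor $q$ can be computed as:

```python
def suggest_q(w_next, w_tilde_next, h, eps, n):
    """Step-size factor q for an order-n method: a step of size q*h is
    predicted to satisfy the tolerance eps, based on the LTE estimate."""
    lte_estimate = abs(w_tilde_next - w_next) / h
    if lte_estimate == 0.0:
        return float("inf")   # the two methods agree to machine precision
    return (eps / lte_estimate) ** (1.0 / n)
```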

Runge-Kutta-Fehlberg: 4th order method, 5th order estimate

$$w_{j+1} = w_j + \frac{25}{216} k_1 + \frac{1408}{2565} k_3 + \frac{2197}{4104} k_4 - \frac{1}{5} k_5,$$
$$\tilde w_{j+1} = w_j + \frac{16}{135} k_1 + \frac{6656}{12825} k_3 + \frac{28561}{56430} k_4 - \frac{9}{50} k_5 + \frac{2}{55} k_6,$$
where
$$k_1 = h\, f(t_j, w_j),$$
$$k_2 = h\, f\!\left(t_j + \frac{h}{4},\ w_j + \frac{1}{4} k_1\right),$$
$$k_3 = h\, f\!\left(t_j + \frac{3h}{8},\ w_j + \frac{3}{32} k_1 + \frac{9}{32} k_2\right),$$
$$k_4 = h\, f\!\left(t_j + \frac{12h}{13},\ w_j + \frac{1932}{2197} k_1 - \frac{7200}{2197} k_2 + \frac{7296}{2197} k_3\right),$$
$$k_5 = h\, f\!\left(t_j + h,\ w_j + \frac{439}{216} k_1 - 8 k_2 + \frac{3680}{513} k_3 - \frac{845}{4104} k_4\right),$$
$$k_6 = h\, f\!\left(t_j + \frac{h}{2},\ w_j - \frac{8}{27} k_1 + 2 k_2 - \frac{3544}{2565} k_3 + \frac{1859}{4104} k_4 - \frac{11}{40} k_5\right).$$
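
These coefficients can be packaged into a single step that returns both the 4th-order value and the 5th-order estimate. The Python sketch below is an added illustration; `rkf45_step` is a made-up name.

```python
def rkf45_step(f, t, w, h):
    """One Runge-Kutta-Fehlberg step: returns (w_next, w_tilde_next),
    the 4th-order value and the 5th-order estimate."""
    k1 = h * f(t, w)
    k2 = h * f(t + h/4,     w + k1/4)
    k3 = h * f(t + 3*h/8,   w + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, w + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h,       w + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2,     w - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    w_next       = w + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    w_tilde_next = w + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55
    return w_next, w_tilde_next
```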

Runge-Kutta-Fehlberg: step-size selection procedure

Compute a conservative value for $q$:
$$q = \left(\frac{\epsilon h}{2\, |\tilde w_{j+1} - w_{j+1}|}\right)^{1/4}.$$

Make a restricted step-size change:
$$h \leftarrow \begin{cases} 0.1\, h, & \text{if } q \le 0.1, \\ 4\, h, & \text{if } q \ge 4, \\ q\, h, & \text{otherwise.} \end{cases}$$

The step-size can't be too big: $h = \min(h, h_{\max})$.

The step-size can't be too small: if $h < h_{\min}$, then declare failure.
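
Putting the step and the step-size rules together, an adaptive driver along these lines might look as follows. This is an added sketch: the loop structure, variable names, and the rule of accepting a step only when the LTE estimate meets the tolerance follow the usual Runge-Kutta-Fehlberg presentation and are not spelled out verbatim on the slides.

```python
def rkf45_adaptive(f, a, b, alpha, eps, h_max, h_min):
    """Adaptive Runge-Kutta-Fehlberg driver following the step-size rules above.
    Returns lists of mesh points t_j and approximations w_j."""
    t, w, h = a, alpha, h_max
    ts, ws = [t], [w]
    while t < b:
        h = min(h, b - t)                      # do not step past b
        w_next, w_tilde = rkf45_step(f, t, w, h)
        lte_est = abs(w_tilde - w_next) / h    # LTE estimate |w~ - w| / h
        if lte_est <= eps:                     # accept the step
            t, w = t + h, w_next
            ts.append(t)
            ws.append(w)
        # conservative step-size factor, clamped to [0.1, 4]
        if w_tilde != w_next:
            q = (eps * h / (2 * abs(w_tilde - w_next))) ** 0.25
        else:
            q = 4.0
        q = max(0.1, min(q, 4.0))              # restricted step-size change
        h = min(q * h, h_max)                  # step-size can't be too big
        if h < h_min:
            raise RuntimeError("step-size below h_min: declare failure")
    return ts, ws
```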

Runge-Kutta-Fehlberg: example

Initial value ODE:
$$\frac{dy}{dt} = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5,$$
with exact solution $y(t) = (1 + t)^2 - 0.5\, e^t$.

 t_j       h_j       y(t_j)    w_j       w̃_j
 0.00000   0.00000   0.50000   0.50000   0.50000
 0.25000   0.25000   0.92049   0.92049   0.92049
 0.48655   0.23655   1.39649   1.39649   1.39649
 0.72933   0.24278   1.95374   1.95375   1.95375
 0.97933   0.25000   2.58642   2.58643   2.58643
 1.22933   0.25000   3.26045   3.26046   3.26046
 1.47933   0.25000   3.95208   3.95210   3.95210
 1.72933   0.25000   4.63081   4.63083   4.63083
 1.97933   0.25000   5.25747   5.25749   5.25749
 2.00000   0.02067   5.30547   5.30549   5.30549
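
A run like the one tabulated above could be reproduced with the adaptive driver sketched earlier (assuming `f`, `y_exact`, and `rkf45_adaptive` from the previous sketches are in scope). The tolerance and the bounds $h_{\max}$, $h_{\min}$ are not stated in this transcript, so the values below are assumptions; $h_{\max} = 0.25$ is suggested by the largest $h_j$ in the table.

```python
# Assumed settings: eps, h_max, h_min are illustrative, not taken from the slides.
ts, ws = rkf45_adaptive(f, a=0.0, b=2.0, alpha=0.5, eps=1e-5, h_max=0.25, h_min=0.01)
for t, w in zip(ts, ws):
    print(f"t = {t:.5f}   w = {w:.5f}   y = {y_exact(t):.5f}")
```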

Runge-Kutta-Fehlberg: solution plots

Initial value ODE: $\frac{dy}{dt} = y - t^2 + 1$, $0 \le t \le 2$, $y(0) = 0.5$. [Plots not included in the transcript.]

Runge-Kutta-Fehlberg: solution errors

Initial value ODE: $\frac{dy}{dt} = y - t^2 + 1$, $0 \le t \le 2$, $y(0) = 0.5$. [Error plots not included in the transcript.]

Runge-Kutta-Fehlberg: truncation errors

Initial value ODE: $\frac{dy}{dt} = y - t^2 + 1$, $0 \le t \le 2$, $y(0) = 0.5$.

$$\text{actual} \stackrel{\text{def}}{=} \frac{|y(t_j) - w_j|}{h_j}, \qquad \text{estimate} \stackrel{\text{def}}{=} \frac{|\tilde w_j - w_j|}{h_j}.$$

[Truncation-error plots not included in the transcript.]