THE TRAPEZOIDAL QUADRATURE RULE

[Figure: computing the area under y = 1/(1+x^2) on [0,1] with the trapezoidal rule]

THE TRAPEZOIDAL QUADRATURE RULE

From Chapter 5, we have the quadrature formula
$$\int_a^b g(x)\,dx = \frac{b-a}{2}\left[g(a) + g(b)\right] - \frac{(b-a)^3}{12}\,g''(\xi)$$
for some $a \le \xi \le b$.
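As an illustration only (not part of the original notes), here is a minimal Python sketch that applies this single-interval formula to the integrand from the figure, $g(x) = 1/(1+x^2)$ on $[0,1]$, whose exact integral is $\arctan(1) = \pi/4$; the function name trap_quadrature is our own.

    import math

    def trap_quadrature(g, a, b):
        # single-interval trapezoidal rule: (b - a)/2 * [g(a) + g(b)]
        return (b - a) / 2.0 * (g(a) + g(b))

    g = lambda x: 1.0 / (1.0 + x * x)      # integrand shown in the figure
    approx = trap_quadrature(g, 0.0, 1.0)  # = 0.75
    exact = math.pi / 4.0                  # arctan(1) - arctan(0)
    print(approx, exact, exact - approx)   # the gap is -(b-a)^3/12 * g''(xi) for some xi in [0,1]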

THE TRAPEZOIDAL RULE FOR ODEs

Integrate $Y'(x) = f(x, Y(x))$ over the interval $[x_n, x_{n+1}]$:
$$Y(x_{n+1}) = Y(x_n) + \int_{x_n}^{x_{n+1}} f(x, Y(x))\,dx$$
Apply the trapezoidal quadrature rule to this integral:
$$Y(x_{n+1}) = Y(x_n) + \frac{h}{2}\left[f(x_n, Y(x_n)) + f(x_{n+1}, Y(x_{n+1}))\right] - \frac{h^3}{12}\,Y'''(\xi_n)$$
Dropping the error term, we have the trapezoidal rule:
$$y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\right]$$
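For the special case $f(x, y) = \lambda y$ the implicit equation can be solved for $y_{n+1}$ in closed form, which gives a quick way to see the method in action. The following Python sketch (our own illustration; the name trapezoidal_linear is hypothetical) integrates $y' = -y$, $y(0) = 1$, and exhibits the expected second-order behaviour: the error drops by roughly a factor of 4 each time $h$ is halved.

    import math

    def trapezoidal_linear(lam, y0, x0, xend, h):
        # trapezoidal rule for y' = lam*y; the implicit equation is linear, so
        #   y_{n+1} = y_n + (h/2)*(lam*y_n + lam*y_{n+1})
        # solves exactly to y_{n+1} = (1 + h*lam/2)/(1 - h*lam/2) * y_n
        factor = (1.0 + h * lam / 2.0) / (1.0 - h * lam / 2.0)
        y = y0
        for _ in range(round((xend - x0) / h)):
            y = factor * y
        return y

    # y' = -y, y(0) = 1, exact solution exp(-x)
    for h in (0.1, 0.05, 0.025):
        err = abs(math.exp(-1.0) - trapezoidal_linear(-1.0, 1.0, 0.0, 1.0, h))
        print(h, err)   # errors shrink by ~4 per halving, consistent with O(h^2)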

$$y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\right], \qquad n \ge 0$$
This is a one-step implicit method. Its truncation error is
$$T_{n+1}(Y) = -\frac{h^3}{12}\,Y'''(\xi_n)$$
For its error, we can use the general theory of §6.3; or directly,
$$\left|Y(x_n) - y_h(x_n)\right| \le e^{2K(x_n - x_0)}\left|Y_0 - y_h(x_0)\right| + \frac{e^{2K(x_n - x_0)} - 1}{K}\,\frac{h^2}{12}\,\left\|Y'''\right\|_\infty$$
provided $hK \le 1$ and the Lipschitz condition is satisfied.

STABILITY

Let $|y_0 - z_0| \le \epsilon$ and
$$z_{n+1} = z_n + \frac{h}{2}\left[f(x_n, z_n) + f(x_{n+1}, z_{n+1})\right], \qquad n \ge 0$$
Then
$$|y_n - z_n| \le \epsilon\, e^{2K(x_n - x_0)}, \qquad x_0 \le x_n \le b$$

ASYMPTOTIC ERROR FORMULA

We can show
$$Y(x_n) - y_h(x_n) = D(x_n)\,h^2 + O(h^3)$$
$$D'(x) = f_y(x, Y(x))\,D(x) - \frac{1}{12}\,Y'''(x), \qquad D(x_0) = 0$$
assuming $y_0 = Y_0$.

RICHARDSON EXTRAPOLATION

At any node point $x$ common to the use of stepsizes $h$ and $2h$, we have
$$Y(x) - y_h(x) = D(x)\,h^2 + O(h^3)$$
$$Y(x) - y_{2h}(x) = 4\,D(x)\,h^2 + O(h^3)$$
Then
$$Y(x) - y_{2h}(x) \doteq 4\left[Y(x) - y_h(x)\right]$$
$$Y(x) \doteq y_h(x) + \tfrac{1}{3}\left[y_h(x) - y_{2h}(x)\right]$$
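A small Python sketch of this extrapolation on the same linear test problem $y' = -y$ (our own illustration; the helper trap_solve is hypothetical and assumes $f(x,y) = \lambda y$ so that the implicit step is exact):

    import math

    def trap_solve(lam, y0, xend, h):
        # trapezoidal solution of y' = lam*y, y(0) = y0, evaluated at x = xend
        factor = (1.0 + h * lam / 2.0) / (1.0 - h * lam / 2.0)
        return y0 * factor ** round(xend / h)

    lam, x, h = -1.0, 1.0, 0.1
    exact = math.exp(lam * x)
    y_h, y_2h = trap_solve(lam, 1.0, x, h), trap_solve(lam, 1.0, x, 2 * h)
    y_rich = y_h + (y_h - y_2h) / 3.0          # Richardson extrapolation
    print(abs(exact - y_h))                    # O(h^2) error
    print(abs(exact - y_2h))                   # roughly 4x larger
    print(abs(exact - y_rich))                 # markedly smaller than either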

ITERATIVE SOLUTION

To solve for $y_{n+1}$ in
$$y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\right]$$
define
$$y_{n+1}^{(j+1)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(j)})\right]$$
for $j = 0, 1, 2, \ldots$, with an initial guess $y_{n+1}^{(0)} \approx y_{n+1}$.

To analyze the convergence:
$$y_{n+1} - y_{n+1}^{(j+1)} = \frac{h}{2}\left[f(x_{n+1}, y_{n+1}) - f(x_{n+1}, y_{n+1}^{(j)})\right]$$
$$\left|y_{n+1} - y_{n+1}^{(j+1)}\right| = \frac{h}{2}\left|f(x_{n+1}, y_{n+1}) - f(x_{n+1}, y_{n+1}^{(j)})\right| \le \frac{hK}{2}\left|y_{n+1} - y_{n+1}^{(j)}\right|, \qquad j \ge 0$$
According to this, convergence is assured if we choose $h$ so small that
$$\frac{hK}{2} < 1$$
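A Python sketch of one trapezoidal step solved by this fixed-point iteration (our own illustration; the name trap_step_fixed_point and the test problem $y' = -y^2$, whose exact solution through $y(0) = 1$ is $1/(1+x)$, are not from the notes):

    def trap_step_fixed_point(f, x_n, y_n, h, y_guess, n_iter=10):
        # solve y_{n+1} = y_n + (h/2)*[f(x_n,y_n) + f(x_{n+1},y_{n+1})] by iterating
        #   y^{(j+1)} = y_n + (h/2)*[f(x_n,y_n) + f(x_{n+1},y^{(j)})],
        # which converges when (h/2)*|df/dy| < 1 near the root
        fn = f(x_n, y_n)
        y = y_guess
        for _ in range(n_iter):
            y = y_n + (h / 2.0) * (fn + f(x_n + h, y))
        return y

    f = lambda x, y: -y * y                    # y' = -y^2, y(0) = 1, exact y = 1/(1+x)
    y1 = trap_step_fixed_point(f, 0.0, 1.0, 0.1, y_guess=1.0)
    print(y1, 1.0 / 1.1)                       # one step of size h = 0.1 vs the exact value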

If we use the mean value theorem, we get a better idea of the nature of the convergence:
$$y_{n+1} - y_{n+1}^{(j+1)} \doteq \frac{h}{2}\,\frac{\partial f(x_{n+1}, y_{n+1})}{\partial y}\left[y_{n+1} - y_{n+1}^{(j)}\right], \qquad j = 0, 1, 2, \ldots$$
Thus the crucial factor is
$$\frac{h}{2}\,\frac{\partial f(x_{n+1}, y_{n+1})}{\partial y}$$
and we need its magnitude to be less than 1. The smaller this factor, the faster the convergence. Also, note that with most stable differential equations, this partial derivative is negative. Thus the error in the iterates will alternate between negative and positive, which means the iterates oscillate about the desired solution $y_{n+1}$.
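The alternating sign is easy to see numerically. The following illustration (our own, reusing the test problem $y' = -y^2$, for which $\partial f/\partial y = -2y < 0$) measures each iterate against a well-converged reference value:

    f = lambda x, y: -y * y
    h, x_n, y_n = 0.1, 0.0, 1.0
    fn = f(x_n, y_n)

    # well-converged reference value for y_{n+1}
    y_ref = y_n
    for _ in range(50):
        y_ref = y_n + (h / 2.0) * (fn + f(x_n + h, y_ref))

    # the first few iterates straddle y_{n+1}: their errors alternate in sign
    y = y_n                                    # initial guess y^{(0)} = y_n
    for j in range(1, 5):
        y = y_n + (h / 2.0) * (fn + f(x_n + h, y))
        print(j, y - y_ref)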

CHOOSING THE INITIAL GUESS

We generally choose $y_{n+1}^{(0)}$ so that $y_{n+1} - y_{n+1}^{(0)} = O(h^p)$ for some $p > 0$. How should we choose $y_{n+1}^{(0)}$? Three common choices, compared in the sketch below, are:

(a) The answer at the preceding node: $y_{n+1}^{(0)} = y_n$

(b) Euler's method: $y_{n+1}^{(0)} = y_n + h\,f(x_n, y_n)$

(c) Midpoint method: $y_{n+1}^{(0)} = y_{n-1} + 2h\,f(x_n, y_n)$
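A minimal Python comparison of the three predictors, again on $y' = -y^2$ with exact solution $1/(1+x)$ (an illustration with names of our own choosing; to isolate the predictor error we supply $y_{n-1}$ and $y_n$ from the exact solution):

    f = lambda x, y: -y * y                    # exact solution through y(0)=1 is 1/(1+x)
    h = 0.1
    x_nm1, x_n, x_np1 = 0.0, 0.1, 0.2
    y_nm1, y_n = 1.0 / (1.0 + x_nm1), 1.0 / (1.0 + x_n)      # exact past values
    exact = 1.0 / (1.0 + x_np1)

    guesses = {
        "(a) previous value":     y_n,                         # error O(h)
        "(b) Euler predictor":    y_n + h * f(x_n, y_n),       # error O(h^2)
        "(c) midpoint predictor": y_nm1 + 2 * h * f(x_n, y_n), # error O(h^3)
    }
    for name, g in guesses.items():
        print(name, abs(exact - g))            # errors decrease from (a) to (c)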

THE LOCAL SOLUTION

An important idea to consider is that of the local solution of the differential equation. Given that we have gotten to the point $(x_n, y_n)$ with the trapezoidal method, consider the initial value problem
$$y' = f(x, y), \quad x \ge x_n, \qquad y(x_n) = y_n$$
Denote the solution of this problem by $u_n(x)$. It is the exact solution of the differential equation from $x_n$ onwards, based on our best knowledge of the solution at $x_n$. Thus
$$u_n'(x) = f(x, u_n(x)), \quad x \ge x_n, \qquad u_n(x_n) = y_n$$
It is this solution that we are truly estimating at each step.

If we proceed in analogy with the derivation of Euler's method,
$$u_n(x_{n+1}) = u_n(x_n) + h\,u_n'(x_n) + \frac{h^2}{2}\,u_n''(\xi_n) = y_n + h\,f(x_n, y_n) + \frac{h^2}{2}\,u_n''(\xi_n)$$
If we let
$$y_{n+1}^{(0)} = y_n + h\,f(x_n, y_n)$$
then
$$u_n(x_{n+1}) - y_{n+1}^{(0)} = \frac{h^2}{2}\,u_n''(\xi_n)$$

Similarly, for the derivation of the trapezoidal method,
$$\begin{aligned} u_n(x_{n+1}) &= u_n(x_n) + \frac{h}{2}\left[f(x_n, u_n(x_n)) + f(x_{n+1}, u_n(x_{n+1}))\right] - \frac{h^3}{12}\,u_n'''(\zeta_n) \\ &= y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, u_n(x_{n+1}))\right] - \frac{h^3}{12}\,u_n'''(\zeta_n) \end{aligned}$$
From this, we can derive
$$u_n(x_{n+1}) - y_{n+1} = -\frac{h^3}{12}\,u_n'''(x_n) + O(h^4)$$
as is described in the text.

If we use the Euler predictor, $y_{n+1}^{(0)} = y_n + h\,f(x_n, y_n)$, then
$$y_{n+1} - y_{n+1}^{(0)} = \frac{h^2}{2}\,u_n''(\xi_n) + \frac{h^3}{12}\,u_n'''(x_n) + O(h^4) = \frac{h^2}{2}\,u_n''(\xi_n) + O(h^3)$$
Return to the error in the iteration,
$$\left|y_{n+1} - y_{n+1}^{(j+1)}\right| \le \frac{hK}{2}\left|y_{n+1} - y_{n+1}^{(j)}\right|, \qquad j \ge 0$$
Then
$$\left|y_{n+1} - y_{n+1}^{(1)}\right| \le O(h^3), \qquad \left|y_{n+1} - y_{n+1}^{(2)}\right| \le O(h^4)$$

Usually we try to make the iteration error less significant than the truncation error of the trapezoidal method,
$$T_{n+1}(Y) = -\frac{h^3}{12}\,Y'''(\xi_n)$$
Thus we would use two iterations of the trapezoidal iteration equation,
$$y_{n+1}^{(j+1)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(j)})\right], \qquad j = 0, 1$$
and then let $y_{n+1} = y_{n+1}^{(2)}$.
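Putting the pieces together, here is a sketch (our own, with a hypothetical name trap_euler_pc) of the resulting predictor-corrector scheme: an Euler predictor followed by two passes of the trapezoidal corrector, applied to $y' = -y$, $y(0) = 1$:

    import math

    def trap_euler_pc(f, x0, y0, xend, h):
        # Euler predictor (error O(h^2)) + two trapezoidal corrections (iteration error O(h^4)),
        # then accept y_{n+1} = y^{(2)}
        x, y = x0, y0
        for _ in range(round((xend - x0) / h)):
            fn = f(x, y)
            y_new = y + h * fn                          # predictor
            for _ in range(2):                          # two corrector passes
                y_new = y + (h / 2.0) * (fn + f(x + h, y_new))
            x, y = x + h, y_new
        return y

    for h in (0.1, 0.05):
        err = abs(math.exp(-1.0) - trap_euler_pc(lambda x, y: -y, 0.0, 1.0, 1.0, h))
        print(h, err)   # overall error behaves like O(h^2), as for the exact trapezoidal rule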

If we repeat this discussion with the midpoint predictor,
$$y_{n+1}^{(0)} = y_{n-1} + 2h\,f(x_n, y_n)$$
then we can derive
$$u_n(x_{n+1}) - y_{n+1}^{(0)} = \frac{5h^3}{12}\,u_n'''(x_n) + O(h^4)$$
Combined with the local error for the trapezoidal rule,
$$u_n(x_{n+1}) - y_{n+1} = -\frac{h^3}{12}\,u_n'''(x_n) + O(h^4)$$
we have
$$y_{n+1} - y_{n+1}^{(0)} = \frac{h^3}{2}\,u_n'''(x_n) + O(h^4)$$
For the iteration, we need to iterate only once, obtaining
$$\left|y_{n+1} - y_{n+1}^{(1)}\right| \le O(h^4)$$
We then proceed with $y_{n+1} = y_{n+1}^{(1)}$.
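A sketch of this variant (our own illustration; trap_midpoint_pc is a hypothetical name): the midpoint predictor needs $y_{n-1}$, so the first step here falls back on the Euler predictor with two corrections; afterwards a single correction per step suffices.

    import math

    def trap_midpoint_pc(f, x0, y0, xend, h):
        ys = [y0]
        x = x0
        for n in range(round((xend - x0) / h)):
            y_n = ys[-1]
            fn = f(x, y_n)
            if n == 0:
                y_new = y_n + h * fn                    # Euler predictor on the first step
                n_corr = 2
            else:
                y_new = ys[-2] + 2.0 * h * fn           # midpoint predictor, error O(h^3)
                n_corr = 1                              # one correction brings it to O(h^4)
            for _ in range(n_corr):
                y_new = y_n + (h / 2.0) * (fn + f(x + h, y_new))
            ys.append(y_new)
            x += h
        return ys[-1]

    for h in (0.1, 0.05):
        err = abs(math.exp(-1.0) - trap_midpoint_pc(lambda x, y: -y, 0.0, 1.0, 1.0, h))
        print(h, err)   # again O(h^2) overall, but with only one correction per step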

A-STABILITY

For reasons examined in a later section (§6.9), we look at the behaviour of numerical solutions to
$$y' = \lambda y, \qquad y(0) = 1$$
when $\lambda$ is a real or complex number with $\operatorname{Re}(\lambda) < 0$. The true solution is $Y(x) = e^{\lambda x}$; and as $x \to \infty$, $Y(x) \to 0$. We ask when the numerical solution has the same behaviour.

The trapezoidal method for this case is
$$y_{n+1} = y_n + \frac{h}{2}\left[\lambda y_n + \lambda y_{n+1}\right]$$
We can solve for $y_{n+1}$ to get
$$y_{n+1} = \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}\,y_n, \qquad y_0 = 1$$

Together with $y_0 = 1$, this leads to
$$y_n = \left(\frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}\right)^{n}, \qquad n \ge 0$$
If $\lambda < 0$ is real, then clearly $y_n \to 0$ as $n \to \infty$. By a slightly more complicated argument, the same is true if $\operatorname{Re}(\lambda) < 0$. Numerical methods for which this is true, independent of the size of $h$, are called A-stable methods. A-stable methods are very useful in solving stiff differential equations, which we explore in §6.9.
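This is easy to see numerically. The sketch below (our own illustration) compares the trapezoidal amplification factor with Euler's for a stiff value $\lambda = -50$ and step $h = 0.1$, so $h\lambda = -5$: Euler's factor has magnitude 4 and its iterates explode, while the trapezoidal factor has magnitude below 1 and its iterates decay.

    lam, h, n_steps = -50.0, 0.1, 40

    r_trap = (1.0 + h * lam / 2.0) / (1.0 - h * lam / 2.0)   # trapezoidal factor, |r| ~ 0.43
    r_euler = 1.0 + h * lam                                  # Euler factor, |r| = 4
    y_trap = y_euler = 1.0
    for _ in range(n_steps):
        y_trap *= r_trap
        y_euler *= r_euler

    print(abs(r_trap), abs(r_euler))
    print(y_trap, y_euler)   # trapezoidal value is tiny; Euler value has blown up (~4^40)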