Outline. 1 Numerical Integration. 2 Numerical Differentiation. 3 Richardson Extrapolation



Main Ideas
Quadrature based on polynomial interpolation:
  Methods:
    Method of undetermined coefficients (e.g., Adams-Bashforth)
    Lagrangian interpolation
  Rules: Midpoint, Trapezoidal, Simpson, Newton-Cotes, Gaussian quadrature
Quadrature based on piecewise polynomial interpolation:
  Composite trapezoidal rule
  Composite Simpson
  Richardson extrapolation
Differentiation:
  Taylor series / Richardson extrapolation
  Derivatives of Lagrange polynomials
  Derivative matrices

Integration
For f : R -> R, the definite integral over the interval [a, b],
    I(f) = ∫_a^b f(x) dx,
is defined by the limit of Riemann sums
    R_n = Σ_{i=1}^n (x_{i+1} - x_i) f(ξ_i)
The Riemann integral exists provided the integrand f is bounded and continuous almost everywhere
Absolute condition number of integration with respect to perturbations in the integrand is b - a
Integration is inherently well-conditioned because of its smoothing effect

Numerical Quadrature
Quadrature rule is a weighted sum of a finite number of sample values of the integrand function
To obtain desired level of accuracy at low cost,
  How should sample points be chosen?
  How should their contributions be weighted?
Computational work is measured by number of evaluations of the integrand function required

Quadrature Rules
An n-point quadrature rule has form
    Q_n(f) = Σ_{i=1}^n w_i f(x_i)
Points x_i are called nodes or abscissas
Multipliers w_i are called weights
Quadrature rule is
  open   if a < x_1 and x_n < b
  closed if a = x_1 and x_n = b
Can also have nodes x_1, ..., x_n outside [a, b] (Adams-Bashforth timesteppers)

Quadrature Rules, continued
Quadrature rules are based on polynomial interpolation
Integrand function f is sampled at a finite set of points
Polynomial interpolating those points is determined
Integral of interpolant is taken as estimate for integral of original function
In practice, interpolating polynomial is not determined explicitly but used to determine weights corresponding to nodes
If Lagrange interpolation is used, then weights are given by
    w_i = ∫_a^b ℓ_i(x) dx,   i = 1, ..., n

Method of Undetermined Coefficients
Alternative derivation of quadrature rule uses method of undetermined coefficients
To derive n-point rule on interval [a, b], take nodes x_1, ..., x_n as given and consider weights w_1, ..., w_n as coefficients to be determined
Force quadrature rule to integrate first n polynomial basis functions exactly, and by linearity, it will then integrate any polynomial of degree n - 1 exactly
Thus we obtain system of moment equations that determines weights for quadrature rule

Quadrature Overview
    I(f) := ∫_a^b f(t) dt ≈ Σ_{i=1}^n w_i f_i =: Q_n(f),   f_i := f(t_i)
Idea is to minimize the number of function evaluations. Small n is good.
Several strategies:
  global rules
  composite rules
  composite rules + extrapolation
  adaptive rules

Global (Interpolatory) Quadrature Rules
Generally, approximate f(t) by a polynomial interpolant, p(t):
    f(t) ≈ p(t) = Σ_{i=1}^n ℓ_i(t) f_i
    I(f) = ∫_a^b f(t) dt ≈ ∫_a^b p(t) dt =: Q_n(f)
    Q_n(f) = ∫_a^b ( Σ_{i=1}^n ℓ_i(t) f_i ) dt = Σ_{i=1}^n ( ∫_a^b ℓ_i(t) dt ) f_i = Σ_{i=1}^n w_i f_i,   w_i = ∫_a^b ℓ_i(t) dt
We will see two types of global (interpolatory) rules:
  Newton-Cotes: interpolatory on uniformly spaced nodes.
  Gauss rules: interpolatory on optimally chosen point sets.

Examples
Midpoint rule: ℓ_1(t) = 1
    w_1 = ∫_a^b ℓ_1(t) dt = ∫_a^b dt = (b - a)
Trapezoidal rule: ℓ_1(t) = (t - b)/(a - b),  ℓ_2(t) = (t - a)/(b - a)
    w_1 = ∫_a^b ℓ_1(t) dt = (b - a)/2 = ∫_a^b ℓ_2(t) dt = w_2
Simpson's rule:
    w_1 = ∫_a^b ℓ_1(t) dt = (b - a)/6 = ∫_a^b ℓ_3(t) dt = w_3
    w_2 = ∫_a^b ℓ_2(t) dt = 2(b - a)/3
These are the Newton-Cotes quadrature rules for n = 1, 2, and 3, respectively.
The midpoint rule is open. Trapezoid and Simpson's rules are closed.

Finding Weights: Method of Undetermined Coefficients
Example 1: Find w_i for given nodes t_1, t_2, t_3 on [a, b], n = 3.
First approach: take f = 1, t, t^2 and require each to be integrated exactly:
    I(1)   = Σ_{i=1}^3 w_i       = ∫_a^b dt
    I(t)   = Σ_{i=1}^3 w_i t_i   = (t^2/2) |_a^b
    I(t^2) = Σ_{i=1}^3 w_i t_i^2 = (t^3/3) |_a^b
Results in a 3x3 matrix system for the w_i's.

Finding Weights: Method of Undetermined Coefficients
Second approach: Choose f so that some of the coefficients multiplying the w_i's vanish:
    I_1 = w_1 (t_1 - t_2)(t_1 - t_3) = ∫_a^b (t - t_2)(t - t_3) dt
    I_2 = w_2 (t_2 - t_1)(t_2 - t_3) = ∫_a^b (t - t_1)(t - t_3) dt
    I_3 = w_3 (t_3 - t_1)(t_3 - t_2) = ∫_a^b (t - t_1)(t - t_2) dt
Corresponds to the Lagrange interpolant approach.

Method of Undetermined Coefficients
Example 2: Find w_i for [a, b] = [0, 1], n = 3, but using f_i = f(t_i) = f(i), with i = -2, -1, and 0.  (The t_i's are outside the interval (a, b).)
Result should be exact for f(t) in P_0, P_1, and P_2.  Take f = 1, f = t, and f = t^2:
    w_{-2} + w_{-1} + w_0 = ∫_0^1 dt     = 1
    -2 w_{-2} - w_{-1}    = ∫_0^1 t dt   = 1/2
    4 w_{-2} + w_{-1}     = ∫_0^1 t^2 dt = 1/3
Find w_{-2} = 5/12,  w_{-1} = -16/12,  w_0 = 23/12.
This example is useful for finding integration coefficients for explicit timestepping methods that will be seen in later chapters.

Example: Undetermined Coefficients
Derive 3-point rule Q_3(f) = w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3) on interval [a, b] using monomial basis
Take x_1 = a, x_2 = (a + b)/2, and x_3 = b as nodes
First three monomials are 1, x, and x^2
Resulting system of moment equations is
    w_1 + w_2 + w_3                    = ∫_a^b 1 dx   = x |_a^b       = b - a
    w_1 a + w_2 (a + b)/2 + w_3 b      = ∫_a^b x dx   = (x^2/2) |_a^b = (b^2 - a^2)/2
    w_1 a^2 + w_2 ((a + b)/2)^2 + w_3 b^2 = ∫_a^b x^2 dx = (x^3/3) |_a^b = (b^3 - a^3)/3

Example, continued
In matrix form, linear system is
    [ 1        1              1   ] [w_1]   [   b - a        ]
    [ a     (a + b)/2         b   ] [w_2] = [ (b^2 - a^2)/2  ]
    [ a^2   ((a + b)/2)^2     b^2 ] [w_3]   [ (b^3 - a^3)/3  ]
Solving the system by Gaussian elimination, we obtain weights
    w_1 = (b - a)/6,   w_2 = 2(b - a)/3,   w_3 = (b - a)/6
which is known as Simpson's rule

Method of Undetermined Coefficients
More generally, for any n and choice of nodes x_1, ..., x_n, the Vandermonde system
    [ 1          1          ...   1         ] [w_1]   [    b - a        ]
    [ x_1        x_2        ...   x_n       ] [w_2] = [ (b^2 - a^2)/2   ]
    [ ...        ...        ...   ...       ] [...]   [     ...         ]
    [ x_1^{n-1}  x_2^{n-1}  ...   x_n^{n-1} ] [w_n]   [ (b^n - a^n)/n   ]
determines the weights w_1, ..., w_n
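
As a concrete check of the moment-equation approach, the sketch below (Python/NumPy, not part of the original slides; the helper name interpolatory_weights is mine) builds this Vandermonde system for the nodes a, (a+b)/2, b and solves for the weights, which should reproduce Simpson's rule.

```python
import numpy as np

def interpolatory_weights(nodes, a, b):
    """Solve the moment equations V w = m for the quadrature weights,
    where row k of V contains x_i^k and m_k = (b^{k+1} - a^{k+1})/(k+1)."""
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T        # V[k, i] = x_i**k
    m = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, m)

a, b = 0.0, 1.0
w = interpolatory_weights(np.array([a, (a + b) / 2, b]), a, b)
print(w)   # ~[1/6, 2/3, 1/6] on [0, 1], i.e., Simpson's rule
```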

Stability of Quadrature Rules
Absolute condition number of quadrature rule is sum of magnitudes of weights,
    Σ_{i=1}^n |w_i|
If weights are all nonnegative, then absolute condition number of quadrature rule is b - a, same as that of underlying integral, so rule is stable
If any weights are negative, then absolute condition number can be much larger, and rule can be unstable

Conditioning
Absolute condition number of integration:
    I(f) = ∫_a^b f(t) dt,   I(f̂) = ∫_a^b f̂(t) dt
    |I(f) - I(f̂)| = | ∫_a^b (f - f̂) dt | ≤ (b - a) ||f - f̂||_∞
Absolute condition number is b - a.

Conditioning
Absolute condition number of quadrature:
    |Q_n(f) - Q_n(f̂)| ≤ Σ_{i=1}^n |w_i| |f_i - f̂_i| ≤ ( Σ_{i=1}^n |w_i| ) max_i |f_i - f̂_i| ≤ ( Σ_{i=1}^n |w_i| ) ||f - f̂||_∞
    C = Σ_{i=1}^n |w_i|
If Q_n(f) is interpolatory, then Σ w_i = (b - a):
    Q_n(1) = Σ_{i=1}^n w_i = ∫_a^b 1 dt = (b - a).
If w_i ≥ 0, then C = (b - a).
Otherwise, C > (b - a) and can be arbitrarily large as n -> ∞.

Newton-Cotes Quadrature
Newton-Cotes quadrature rules use equally spaced nodes in interval [a, b]
Midpoint rule
    M(f) = (b - a) f((a + b)/2)
Trapezoid rule
    T(f) = (b - a)/2 (f(a) + f(b))
Simpson's rule
    S(f) = (b - a)/6 (f(a) + 4 f((a + b)/2) + f(b))

Example: Newton-Cotes Quadrature
Approximate integral I(f) = ∫_0^1 exp(-x^2) dx ≈ 0.746824
    M(f) = (1 - 0) exp(-1/4) ≈ 0.778801
    T(f) = (1/2) [exp(0) + exp(-1)] ≈ 0.683940
    S(f) = (1/6) [exp(0) + 4 exp(-1/4) + exp(-1)] ≈ 0.747180
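
A quick numerical check of these three values (a small Python sketch, not from the slides):

```python
import numpy as np

f = lambda x: np.exp(-x**2)
a, b = 0.0, 1.0
m = 0.5 * (a + b)

M = (b - a) * f(m)                              # midpoint rule
T = 0.5 * (b - a) * (f(a) + f(b))               # trapezoid rule
S = (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))  # Simpson's rule

print(M, T, S)   # ~0.778801, ~0.683940, ~0.747180 (exact ~0.746824)
```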

Error Estimation
Expanding integrand f in Taylor series about midpoint m = (a + b)/2 of interval [a, b],
    f(x) = f(m) + f'(m)(x - m) + f''(m)/2 (x - m)^2 + f'''(m)/6 (x - m)^3 + f^(4)(m)/24 (x - m)^4 + ...
Integrating from a to b, odd-order terms drop out, yielding
    I(f) = f(m)(b - a) + f''(m)/24 (b - a)^3 + f^(4)(m)/1920 (b - a)^5 + ...
         = M(f) + E(f) + F(f) + ...
where E(f) and F(f) represent first two terms in error expansion for midpoint rule

Error Estimation, continued
If we substitute x = a and x = b into Taylor series, add the two series together, observe once again that odd-order terms drop out, solve for f(m), and substitute into midpoint rule, we obtain
    I(f) = T(f) - 2 E(f) - 4 F(f) - ...
Thus, provided length of interval is sufficiently small and f^(4) is well behaved, midpoint rule is about twice as accurate as trapezoid rule
Halving length of interval decreases error in either rule by factor of about 1/8

Error Estimation, continued
Difference between midpoint and trapezoid rules provides estimate for error in either of them
    T(f) - M(f) = 3 E(f) + 5 F(f) + ...
so
    E(f) ≈ (T(f) - M(f)) / 3
Weighted combination of midpoint and trapezoid rules eliminates E(f) term from error expansion
    I(f) = (2/3) M(f) + (1/3) T(f) - (2/3) F(f) + ... = S(f) - (2/3) F(f) + ...
which gives alternate derivation for Simpson's rule and estimate for its error

Example: Error Estimation
We illustrate error estimation by computing approximate value for integral ∫_0^1 x^2 dx = 1/3
    M(f) = (1 - 0)(1/2)^2 = 1/4
    T(f) = (1/2)(0^2 + 1^2) = 1/2
    E(f) ≈ (T(f) - M(f))/3 = (1/4)/3 = 1/12
Error in M(f) is about 1/12, error in T(f) is about 1/6
Also,
    S(f) = (2/3) M(f) + (1/3) T(f) = (2/3)(1/4) + (1/3)(1/2) = 1/3
which is exact for this integral, as expected

Example
    f(x) = x^2,   I(f) = ∫_{-1}^{1} f(x) dx = x^3/3 |_{-1}^{1} = 2/3
    M(f) = 2 f(0) = 0          (error = 2/3)
    T(f) = f(-1) + f(1) = 2    (error = 4/3)
Error for midpoint rule is generally 1/2 that of trapezoidal rule.

Accuracy of Quadrature Rules
Quadrature rule is of degree d if it is exact for every polynomial of degree d, but not exact for some polynomial of degree d + 1
By construction, n-point interpolatory quadrature rule is of degree at least n - 1
Rough error bound
    |I(f) - Q_n(f)| ≤ (1/4) h^{n+1} ||f^(n)||_∞
where h = max{x_{i+1} - x_i : i = 1, ..., n - 1}, shows that Q_n(f) -> I(f) as n -> ∞, provided f^(n) remains well behaved
Higher accuracy can be obtained by increasing n or by decreasing h

Accuracy of Newton-Cotes Quadrature
Since n-point Newton-Cotes rule is based on polynomial interpolant of degree n - 1, we expect rule to have degree n - 1
Thus, we expect midpoint rule to have degree 0, trapezoid rule degree 1, Simpson's rule degree 2, etc.
From Taylor series expansion, error for midpoint rule depends on second and higher derivatives of integrand, which vanish for linear as well as constant polynomials
So midpoint rule integrates linear polynomials exactly, hence its degree is 1 rather than 0
Similarly, error for Simpson's rule depends on fourth and higher derivatives, which vanish for cubics as well as quadratic polynomials, so Simpson's rule is of degree 3

Accuracy of Newton-Cotes Quadrature
In general, odd-order Newton-Cotes rule gains extra degree beyond that of polynomial interpolant on which it is based
    n-point Newton-Cotes rule is of degree n - 1 if n is even, but of degree n if n is odd
This phenomenon is due to cancellation of positive and negative errors

Newton-Cotes Weights (in units of h, for the n-point closed rules)
    n\j |   1    |   2    |    3    |    4    |   5    |   6
     2  |  1/2   |  1/2   |         |         |        |
     3  |  1/3   |  4/3   |   1/3   |         |        |
     4  |  3/8   |  9/8   |   9/8   |   3/8   |        |
     5  | 14/45  | 64/45  |  24/45  |  64/45  | 14/45  |
     6  | 95/288 | 125/96 | 125/144 | 125/144 | 125/96 | 95/288

Drawbacks of Newton-Cotes Rules
Newton-Cotes quadrature rules are simple and often effective, but they have drawbacks
Using large number of equally spaced nodes may incur erratic behavior associated with high-degree polynomial interpolation (e.g., weights may be negative)
Indeed, every n-point Newton-Cotes rule with n ≥ 11 has at least one negative weight, and Σ_{i=1}^n |w_i| -> ∞ as n -> ∞, so Newton-Cotes rules become arbitrarily ill-conditioned
Newton-Cotes rules are not of highest degree possible for number of nodes used
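
One way to see this numerically is to compute Newton-Cotes weights for increasing n by solving the moment equations, as in the earlier sketch, and watch the minimum weight and Σ|w_i|. This is only an illustrative sketch (the function name nc_weights is mine, and the Vandermonde solve itself becomes ill-conditioned for large n):

```python
import numpy as np

def nc_weights(n, a=-1.0, b=1.0):
    """Weights of the n-point closed Newton-Cotes rule via the moment equations."""
    x = np.linspace(a, b, n)
    V = np.vander(x, n, increasing=True).T
    m = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, m)

for n in (3, 5, 9, 11, 15):
    w = nc_weights(n)
    print(n, w.min(), np.abs(w).sum())   # min weight turns negative; sum of |w_i| grows
```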

Newton-Cotes Formulae: What Could Go Wrong?
Demo: newton_cotes.m
Newton-Cotes formulae are interpolatory.
For high n, Lagrange interpolants through uniform points are ill-conditioned.
In quadrature, this conditioning is manifest through negative quadrature weights (bad).
Imagine how the gradient of Q responds to f_i in this case.

Gaussian Quadrature
Gaussian quadrature rules are based on polynomial interpolation, but nodes as well as weights are chosen to maximize degree of resulting rule
With 2n parameters, we can attain degree of 2n - 1
Gaussian quadrature rules can be derived by method of undetermined coefficients, but resulting system of moment equations that determines nodes and weights is nonlinear
Also, nodes are usually irrational, even if endpoints of interval are rational
Although inconvenient for hand computation, nodes and weights are tabulated in advance and stored in subroutine for use on computer

Lagrange Polynomials: Good and Bad Point Distributions
[Plots of Lagrange basis functions φ_1 and φ_4 for N = 4, 7, 8 on uniform vs. Gauss-Lobatto-Legendre points.]
We can see that for N = 8 one of the uniform weights is close to becoming negative.

Lagrange Polynomials: Good and Bad Point Distributions (demo: gll_txt.m)
All weights are positive.

Example: Gaussian Quadrature Rule
Derive two-point Gaussian rule on [-1, 1],
    G_2(f) = w_1 f(x_1) + w_2 f(x_2)
where nodes x_i and weights w_i are chosen to maximize degree of resulting rule
We use method of undetermined coefficients, but now nodes as well as weights are unknown parameters to be determined
Four parameters are to be determined, so we expect to be able to integrate cubic polynomials exactly, since cubics depend on four parameters

Example, continued
Requiring rule to integrate first four monomials exactly gives moment equations
    w_1 + w_2             = ∫_{-1}^{1} 1 dx   = x |_{-1}^{1}        = 2
    w_1 x_1 + w_2 x_2     = ∫_{-1}^{1} x dx   = (x^2/2) |_{-1}^{1}  = 0
    w_1 x_1^2 + w_2 x_2^2 = ∫_{-1}^{1} x^2 dx = (x^3/3) |_{-1}^{1}  = 2/3
    w_1 x_1^3 + w_2 x_2^3 = ∫_{-1}^{1} x^3 dx = (x^4/4) |_{-1}^{1}  = 0

Example, continued
One solution of this system of four nonlinear equations in four unknowns is given by
    x_1 = -1/√3,   x_2 = 1/√3,   w_1 = 1,   w_2 = 1
Another solution reverses signs of x_1 and x_2
Resulting two-point Gaussian rule has form
    G_2(f) = f(-1/√3) + f(1/√3)
and by construction it has degree three
In general, for each n there is unique n-point Gaussian rule, and it is of degree 2n - 1
Gaussian quadrature rules can also be derived using orthogonal polynomials
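
A minimal check (Python, my own sketch) that this two-point rule integrates the monomials 1, x, x^2, x^3 exactly on [-1, 1]:

```python
import numpy as np

x = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # two-point Gauss nodes
w = np.array([1.0, 1.0])                   # two-point Gauss weights

for k in range(4):                         # monomials 1, x, x^2, x^3
    exact = (1.0**(k + 1) - (-1.0)**(k + 1)) / (k + 1)
    print(k, w @ x**k, exact)              # quadrature matches the exact integral
```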

Gauss Quadrature, I
Consider I := ∫_{-1}^{1} f(x) dx.
Find w_i, x_i, i = 1, ..., n, to maximize the degree of accuracy, M.
Cardinality: |P_M| = M + 1;  #{w_i} + #{x_i} = 2n;  M + 1 = 2n  =>  M = 2n - 1
Indeed, it is possible to find x_i and w_i such that all polynomials of degree ≤ M = 2n - 1 are integrated exactly.
The n nodes, x_i, are the zeros of the nth-order Legendre polynomial.
The weights, w_i, are the integrals of the cardinal Lagrange polynomials associated with these nodes:
    w_i = ∫_{-1}^{1} ℓ_i(x) dx,   ℓ_i(x) in P_{n-1},   ℓ_i(x_j) = δ_ij.
Error scales like
    |I - Q_n| ≤ C f^(2n)(ξ) / (2n)!     (Q_n exact for f(x) in P_{2n-1})
The n nodes are roots of orthogonal polynomials.

Change of Interval
Gaussian rules are somewhat more difficult to apply than Newton-Cotes rules because weights and nodes are usually derived for some specific interval, such as [-1, 1]
Given interval of integration [a, b] must be transformed into standard interval for which nodes and weights have been tabulated
To use quadrature rule tabulated on interval [-1, 1],
    ∫_{-1}^{1} f(x) dx ≈ Σ_{i=1}^n w_i f(x_i)
to approximate integral on interval [a, b],
    I(g) = ∫_a^b g(t) dt
we must change variable from x in [-1, 1] to t in [a, b]

Change of Interval, continued
Many transformations are possible, but simple linear transformation
    t = ((b - a) x + a + b) / 2
has advantage of preserving degree of quadrature rule
Generally, the translation is written as
    t_i = a + (ξ_i + 1)/2 (b - a)
When ξ = -1, t = a.  When ξ = 1, t = b.
Here ξ_i, i = 1, ..., n, are the Gauss points on (-1, 1).

Use of Gauss Quadrature
Generally, the translation is much simpler:
    t_i = a + (ξ_i + 1)/2 (b - a)
When ξ = -1, t = a.  When ξ = 1, t = b.
Here ξ_i, i = 1, ..., n, are the Gauss points on (-1, 1).
So, you simply look up* the (ξ_i, w_i) pairs, use the formula above to get t_i, then evaluate
    Q_n = (b - a)/2 Σ_i w_i f(t_i)
(*that is, call a function)
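
In NumPy, for example, the lookup step can be numpy.polynomial.legendre.leggauss; the sketch below (my own illustration, not the slides' code) maps the tabulated nodes to [a, b] exactly as above:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_integrate(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    xi, w = leggauss(n)                        # nodes/weights on (-1, 1)
    t = a + 0.5 * (xi + 1.0) * (b - a)         # map nodes to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(t))

print(gauss_integrate(np.exp, 0.0, 1.0, 5))    # ~ e - 1 = 1.718281828...
```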

Use of Gauss Quadrature There is a lot of software, in most every language, for computing the nodes and weights for all of the Gauss, Gauss-Lobatto, Gauss-Radau rules (Chebyshev, Legendre, etc.)

Gaussian Quadrature
Gaussian quadrature rules have maximal degree and optimal accuracy for number of nodes used
Weights are always positive and approximate integral always converges to exact integral as n -> ∞
Unfortunately, Gaussian rules of different orders have no nodes in common (except possibly midpoint), so Gaussian rules are not progressive (except for Gauss-Chebyshev)
Thus, estimating error using Gaussian rules of different order requires evaluating integrand function at full set of nodes of both rules

Gauss Quadrature Example

Gauss Quadrature Derivation
Gauss Quadrature: I
Suppose g(x) in P_m with zeros x_0, x_1, ..., x_{m-1}:
    g(x) = A x^m + a_{m-1} x^{m-1} + ... + a_1 x + a_0 = A (x - x_0)(x - x_1) ··· (x - x_{m-1}) in P_m
Let n < m and consider the first n + 1 zeros:
    g(x) = (x - x_0)(x - x_1) ··· (x - x_n) · A (x - x_{n+1}) ··· (x - x_{m-1}) =: q_{n+1}(x) r(x)
with
    q_{n+1}(x) := (x - x_0)(x - x_1) ··· (x - x_n),   r(x) := A (x - x_{n+1}) ··· (x - x_{m-1}) in P_{m-(n+1)}

Consider f(x) in P_m, m > n + 1, and let p_n(x) in P_n be the Lagrange interpolating polynomial with p_n(x_i) = f(x_i), i = 0, ..., n.  Then
    g(x) := f(x) - p_n(x) in P_m,   g(x_i) = f(x_i) - p_n(x_i) = 0,  i = 0, ..., n,
    g(x) = (x - x_0)(x - x_1) ··· (x - x_n) · A (x - x_{n+1}) ··· (x - x_{m-1}) = q_{n+1}(x) r(x),
with r(x) in P_{m-(n+1)}.
It follows that, for any f(x) in P_m, m > n + 1, we can write
    f(x) = p_n(x) + q_{n+1}(x) r(x),
with r(x) in P_{m-(n+1)}.

Notice that
    ∫_{-1}^{1} f(x) dx = ∫_{-1}^{1} p_n(x) dx + ∫_{-1}^{1} q_{n+1}(x) r(x) dx
                       = ∫_{-1}^{1} ( Σ_{i=0}^n f_i ℓ_i(x) ) dx + ∫_{-1}^{1} q_{n+1}(x) r(x) dx
                       = Σ_{i=0}^n f_i ∫_{-1}^{1} ℓ_i(x) dx + ∫_{-1}^{1} q_{n+1}(x) r(x) dx
                       = Σ_{i=0}^n f_i w_i + ∫_{-1}^{1} q_{n+1}(x) r(x) dx,
with
    w_i := ∫_{-1}^{1} ℓ_i(x) dx.
Note that the quadrature rule is exact iff ∫_{-1}^{1} q_{n+1}(x) r(x) dx = 0.

Orthogonal Polynomials
The Legendre polynomials of degree k, π_k(x) in P_k, are orthogonal polynomials satisfying
    (π_i, π_j) := ∫_{-1}^{1} π_i(x) π_j(x) dx = δ_ij
{π_0, π_1, ..., π_k} form a basis (a spanning set) for P_k.
That is, for any p_k(x) in P_k there is a unique set of coefficients α_i such that
    p_k(x) = α_0 π_0(x) + α_1 π_1(x) + ... + α_k π_k(x).
Note that, if j > k, then
    (π_j, p_k) = Σ_{i=0}^{k} α_i (π_j, π_i) = 0.

Returning to quadrature, our rule will be exact if
    ∫_{-1}^{1} q_{n+1}(x) r(x) dx = 0.
We get to choose the nodes, x_0, x_1, ..., x_n, that define q_{n+1}.
If we choose the nodes to be the zeros of π_{n+1}(x), then the integral will vanish if r(x) in P_k, with k < n + 1.
Recall that r in P_{m-(n+1)}, which implies
    m - (n + 1) < n + 1   =>   m < 2n + 2   =>   m ≤ 2n + 1.
Thus, using n + 1 nodes, x_0, x_1, ..., x_n (the zeros of π_{n+1}), we have a quadrature rule that is exact for all polynomials of degree up to 2n + 1.

Gauss Quadrature: II
A common way to state the Gauss quadrature problem is:
    Find n + 1 weights, w_i, and points, x_i, that will maximize the degree of polynomial, m, for which the quadrature rule will be exact.
Since you have 2n + 2 degrees of freedom (w_i, x_i), i = 0, ..., n, you should be able to be exact for a space of functions having cardinality 2n + 2.
The cardinality of P_m is m + 1.  Setting m + 1 = 2n + 2 we find m = 2n + 1.

Gauss-Lobatto Quadrature
This is similar to the Gauss quadrature problem stated previously, save that we constrain x_0 = -1 and x_n = +1, which means we now have only 2n degrees of freedom.
We should expect m = 2n - 1, and this is indeed the case when we choose the x_i's to be the zeros of (1 - x^2) P'_n(x).
As before, the weights are the integrals of the Lagrange cardinal functions, ℓ_i(x).

Arbitrary Domains
Usually, we're interested in more general integrals, i.e.,
    Ĩ := ∫_a^b f(x) dx.
The integral can be computed using a change of basis, f[x(ξ)], with
    x(ξ) := a + (b - a)/2 (ξ + 1).
Note that
    dx = (b - a)/2 dξ = J dξ,
where J is the Jacobian associated with the map from [-1, 1] to [a, b].

With this change of basis, we have
    Ĩ = (b - a)/2 ∫_{-1}^{1} f[x(ξ)] dξ = (b - a)/2 ∫_{-1}^{1} f̂(ξ) dξ
      ≈ (b - a)/2 Σ_{i=0}^{n} w_i f̂(ξ_i)
      = (b - a)/2 Σ_{i=0}^{n} w_i f(x_i)
      = (b - a)/2 Σ_{i=0}^{n} w_i f_i.
The quadrature weights stay the same.
The quadrature points are given by x_i = a + (b - a)(1 + ξ_i)/2.

Here, the ξ_i's are the zeros of π_{n+1}(ξ) or (1 - ξ^2) P'_n(ξ), depending on whether one wants Gauss-Legendre (GL) or Gauss-Lobatto-Legendre (GLL) quadrature.
There are closed-form expressions for the quadrature weights:
    Gauss:          w_i = 2 / [ (1 - ξ_i^2) (P'_{n+1}(ξ_i))^2 ]
    Gauss-Lobatto:  w_i = 2 / [ n(n + 1) (P_n(ξ_i))^2 ],
where P_k is the kth-order Legendre polynomial normalized such that P_k(1) = 1.
Most algorithms for finding the ξ_i's are based on solving an eigenvalue problem for a tridiagonal matrix (the companion matrix).
The matlab script zwgll.m returns the set (ξ_i, w_i) for n + 1 node points.

Composite Rules Main Idea: Use your favorite rule on each panel and sum across all panels. Particularly good if f(x) not differentiable at the knot points.

Composite Quadrature
Alternative to using more nodes and higher degree rule is to subdivide original interval into subintervals, then apply simple quadrature rule in each subinterval
Summing partial results then yields approximation to overall integral
This approach is equivalent to using piecewise interpolation to derive composite quadrature rule
Composite rule is always stable if underlying simple rule is stable
Approximate integral converges to exact integral as number of subintervals goes to infinity provided underlying simple rule has degree at least zero

Composite Quadrature Rules
Composite Trapezoidal (Q_CT) and Composite Simpson (Q_CS) rules work by breaking the interval [a, b] into panels and then applying either the trapezoidal or Simpson method to each panel.
Q_CT is the most common, particularly since Q_CS is readily derived via Richardson extrapolation at no extra work.
Q_CT can be combined with Richardson extrapolation to get higher order accuracy, not quite competitive with Gauss quadrature, but a significant improvement. (This combination is known as Romberg integration.)
For periodic functions, Q_CT is a Gauss quadrature rule.

Examples: Composite Quadrature
Subdivide interval [a, b] into k subintervals of length h = (b - a)/k, letting x_j = a + jh, j = 0, ..., k
Composite midpoint rule
    M_k(f) = Σ_{j=1}^{k} (x_j - x_{j-1}) f((x_{j-1} + x_j)/2) = h Σ_{j=1}^{k} f((x_{j-1} + x_j)/2)
Composite trapezoid rule
    T_k(f) = Σ_{j=1}^{k} (x_j - x_{j-1})/2 (f(x_{j-1}) + f(x_j)) = h ( (1/2) f(a) + f(x_1) + ... + f(x_{k-1}) + (1/2) f(b) )

Implementation of Composite Trapezoidal Rule
Assuming uniform spacing h = (b - a)/k,
    Q_CT := Σ_{j=1}^{k} Q_j = Σ_{j=1}^{k} (h/2)(f_{j-1} + f_j)
          = (h/2) f_0 + h f_1 + h f_2 + ... + h f_{k-1} + (h/2) f_k
          = Σ_{j=0}^{k} w_j f_j
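
A direct implementation of these weights (Python sketch, my own; the convergence check at the end uses ∫_0^1 e^x dx = e - 1):

```python
import numpy as np

def composite_trapezoid(f, a, b, k):
    """Composite trapezoidal rule with k panels of width h = (b-a)/k."""
    x = np.linspace(a, b, k + 1)
    h = (b - a) / k
    w = np.full(k + 1, h)          # interior weights are h ...
    w[0] = w[-1] = h / 2.0         # ... endpoint weights are h/2
    return np.dot(w, f(x))

for k in (4, 8, 16, 32):
    err = abs(composite_trapezoid(np.exp, 0.0, 1.0, k) - (np.e - 1.0))
    print(k, err)                  # error drops by ~4x per halving of h: O(h^2)
```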

Composite Trapezoidal Rule
    I_j := ∫_{x_{j-1}}^{x_j} f(x) dx = (h/2)(f_{j-1} + f_j) + O(h^3) =: Q_j + O(h^3) = Q_j + c_j h^3 + higher order terms,
    |c_j| ≤ (1/12) max_{[x_{j-1}, x_j]} |f''(x)|
    I = ∫_a^b f(x) dx = Σ_{j=1}^{k} Q_j + Σ_{j=1}^{k} c_j h^3 + h.o.t. = Q_CT + Σ_{j=1}^{k} c_j h^3 + h.o.t.
    |I - Q_CT| ≈ | Σ_{j=1}^{k} c_j h^3 | ≤ h^3 k max_j |c_j| = h^2 max_j |c_j| (b - a)

Other Considerations with Composite Trapezoidal Rule
Composite Trapezoidal Rule: (uniform spacing, n panels)
    T_h := h [ f_0/2 + f_1 + f_2 + ... + f_{n-1} + f_n/2 ]
         = h [ Σ_{i=1}^{n-1} f_i + (1/2)(f_0 + f_n) ]
         = Σ_{i=0}^{n} w_i f_i,   w_0 = w_n = h/2,  w_i = h otherwise.
Stable (w_i > 0).

Composite Midpoint Rule: (uniform spacing, n panels)
    M_h := h [ f_{1/2} + f_{3/2} + ... + f_{n-1/2} ]
         = h Σ_{i=1}^{n} f_{i-1/2} = h Σ_{i=1}^{n} f(x_0 + ih - h/2)
         = Σ_{i=1}^{n} w_i f_{i-1/2},   w_i = h.
(Note the number of function evaluations.)
Accuracy?  Ĩ - I_h = ??

Single Panel: x in [x_{i-1}, x_i].  Taylor series about the panel midpoint x_{i-1/2}:
    f(x) = f_{i-1/2} + (x - x_{i-1/2}) f'_{i-1/2} + (x - x_{i-1/2})^2/2 f''_{i-1/2} + (x - x_{i-1/2})^3/3! f'''_{i-1/2} + ...
Integrate:
    Ĩ_i := ∫_{x_{i-1}}^{x_i} f(x) dx
         = h f_{i-1/2} + 0 + (h^3/24) f''_{i-1/2} + 0 + (h^5/1920) f^(4)_{i-1/2} + 0 + (h^7/322560) f^(vi)_{i-1/2} + ...
         = M_i + (h^3/24) f''_{i-1/2} + (h^5/1920) f^(4)_{i-1/2} + (h^7/322560) f^(vi)_{i-1/2} + ...
Therefore, the midpoint rule error (single panel) is:
    M_i = Ĩ_i - (h^3/24) f''_{i-1/2} - (h^5/1920) f^(4)_{i-1/2} - (h^7/322560) f^(vi)_{i-1/2} - ...
Locally, the midpoint rule is O(h^3) accurate.

Trapezoidal Rule: (single panel)  Taylor series about x_{i-1/2}:
    f(x_{i-1}) = f_{i-1/2} - (h/2) f'_{i-1/2} + (h/2)^2/2! f''_{i-1/2} - (h/2)^3/3! f'''_{i-1/2} + ...
    f(x_i)     = f_{i-1/2} + (h/2) f'_{i-1/2} + (h/2)^2/2! f''_{i-1/2} + (h/2)^3/3! f'''_{i-1/2} + ...
Take the average (times h):
    (h/2) [f_{i-1} + f_i] = M_i + (h^3/8) f''_{i-1/2} + (h^5/384) f^(4)_{i-1/2} + (h^7/46080) f^(vi)_{i-1/2} + ...
                          = Ĩ_i + (h^3/12) f''_{i-1/2} + (h^5/480) f^(4)_{i-1/2} + (h^7/53760) f^(vi)_{i-1/2} + ...
                          = Ĩ_i + c_2 h^3 f''_{i-1/2} + c_4 h^5 f^(4)_{i-1/2} + c_6 h^7 f^(vi)_{i-1/2} + ...
Leading-order error for the trapezoid rule is twice that of the midpoint rule.

Composite Rule: Sum the trapezoid rule across n panels:
    I_CT := h [ Σ_{i=1}^{n-1} f_i + (1/2)(f_0 + f_n) ] = Σ_{i=1}^{n} (h/2) [f_{i-1} + f_i]
          = Σ_{i=1}^{n} [ Ĩ_i + c_2 h^3 f''_{i-1/2} + c_4 h^5 f^(4)_{i-1/2} + ... ]
          = Ĩ + c_2 h^2 [ h Σ_{i=1}^{n} f''_{i-1/2} ] + c_4 h^4 [ h Σ_{i=1}^{n} f^(4)_{i-1/2} ] + ...
          = Ĩ + c_2 h^2 [ ∫_a^b f'' dx + h.o.t. ] + c_4 h^4 [ ∫_a^b f^(4) dx + h.o.t. ] + ...
          = Ĩ + (h^2/12) [f'(b) - f'(a)] + O(h^4).
Global truncation error is O(h^2) and has a particularly elegant form.
Can estimate f'(a) and f'(b) with an O(h^2)-accurate formula to yield O(h^4) accuracy.
With care, can also precisely define the coefficients for h^4, h^6, and other terms (Euler-Maclaurin Sum Formula).
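
The leading error term can be verified (and removed) numerically. The sketch below (mine, not the slides') subtracts h^2/12 [f'(b) - f'(a)] using the exact endpoint derivatives of e^x for illustration; in practice one would use O(h^2) one-sided difference estimates instead:

```python
import numpy as np

f, df = np.exp, np.exp            # f = e^x, so f' = e^x as well
a, b, exact = 0.0, 1.0, np.e - 1.0

for k in (4, 8, 16, 32):
    h = (b - a) / k
    x = np.linspace(a, b, k + 1)
    T = h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))       # composite trapezoid
    T_corr = T - h**2 / 12.0 * (df(b) - df(a))         # endpoint correction
    print(k, abs(T - exact), abs(T_corr - exact))      # O(h^2) vs O(h^4)
```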

Examples
Apply the (composite) trapezoidal rule for several endpoint conditions, f'(a) and f'(b):
  1. Standard case (nothing special).
  2. Lucky case (f'(a) = f'(b) = 0).
  3. Unlucky case (f'(b) = ∞).
  4. Really lucky case (f^(k)(a) = f^(k)(b), k = 1, 2, ...).
Functions on [a, b] = [0, 1]:
  (1) f(x) = e^x
  (2) f(x) = e^x (1 - cos 2πx)
  (3) f(x) = √x
  (4) f(x) = log(2 + cos 2πx)
quad.m example.

Working with 1D Nodal Bases (demos: trap_v_gll.m, trap_txt.m)

Working with 1D Nodal Bases on GLL Points (demos: trap_v_gll.m, gll_txt.m)

Working with 1D Nodal Bases on GLL Points: Spectral Convergence

Working with 1D Nodal Bases: What is the convergence behavior for highly oscillatory functions? (demo: trap_v_gll_k.m)

Strategies to improve to O(h^4) or higher?
Endpoint Correction:
  Estimate f'(a) and f'(b) to O(h^2) using available f_i data.  How?
  Q: What happens if you don't have at least O(h^2) accuracy?
  Requires knowing the c_2 coefficient. :(   (demo: trap_endpoint.m)
Richardson Extrapolation:
    I_h  = Ĩ + c_2 h^2 + O(h^4)
    I_2h = Ĩ + 4 c_2 h^2 + O(h^4)      (reuses f_i, i = even!)
    I_R = (4/3) I_h - (1/3) I_2h = I_Simpson!

Composite Trapezoidal + Richardson Extrapolation
Can in fact show that if f in C^{2K+2} then
    I = Q_CT + c_2 h^2 + c_4 h^4 + c_6 h^6 + ... + c_{2K} h^{2K} + O(h^{2K+2})
Suggests the following strategy:
    (1)  I = Q_{CT(h)}  + c_2 h^2 + c_4 h^4 + c_6 h^6 + ...        [original error O(h^2)]
    (2)  I = Q_{CT(2h)} + c_2 (2h)^2 + c_4 (2h)^4 + c_6 (2h)^6 + ...
Take [4x(1) - (2)] (eliminate the O(h^2) term):
    4I - I = 4 Q_{CT(h)} - Q_{CT(2h)} + c'_4 h^4 + c'_6 h^6 + ...
    I = (4/3) Q_{CT(h)} - (1/3) Q_{CT(2h)} + ĉ_4 h^4 + ĉ_6 h^6 + ... = Q_{S(h)} + ĉ_4 h^4 + ĉ_6 h^6 + h.o.t.
Here, Q_{S(h)} ≡ (4/3) Q_{CT(h)} - (1/3) Q_{CT(2h)}                [new error O(h^4)]


Richardson + Composite Trapezoidal Rule
    a = x_0, x_1, x_2, x_3, x_4, ..., x_k = b      (k even)
    Q_{CT(h)}  weights:  h/2,  h,   h,    h,   ...,  h,    h/2
    Q_{CT(2h)} weights:  h,    0,   2h,   0,   ...,  0,    h
    (4/3) Q_{CT(h)} - (1/3) Q_{CT(2h)} weights:  h/3, 4h/3, 2h/3, 4h/3, ..., 4h/3, h/3
Richardson + Composite Trapezoidal = Composite Simpson
But we never compute it this way.  Just use
    Q_CS = (4 Q_{CT(h)} - Q_{CT(2h)}) / 3
No new function evaluations required!

Repeated Richardson Extrapolation (Romberg Integration)
We can repeat the extrapolation process to get rid of the O(h^4) term.  And repeat again, to get rid of the O(h^6) term.
    T_{k,0} = trapezoidal rule with h = (b - a)/2^k
    T_{k,j} = (4^j T_{k,j-1} - T_{k-1,j-1}) / (4^j - 1)
    h:    T_{0,0}
    h/2:  T_{1,0}  T_{1,1}
    h/4:  T_{2,0}  T_{2,1}  T_{2,2}
    h/8:  T_{3,0}  T_{3,1}  T_{3,2}  T_{3,3}
          O(h^2)   O(h^4)   O(h^6)   O(h^8)
Idea works just as well if errors are of form c_1 h + c_2 h^2 + c_3 h^3 + ..., but tabular form would involve 2^j instead of 4^j
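
A compact Romberg tableau in Python (my own sketch; T[k][j] follows the indexing above):

```python
import numpy as np

def romberg(f, a, b, levels):
    """Build the Romberg tableau T[k][j]; T[k][0] is the trapezoid rule with 2^k panels."""
    T = [[None] * (levels + 1) for _ in range(levels + 1)]
    for k in range(levels + 1):
        n = 2**k
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        T[k][0] = h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))
        for j in range(1, k + 1):
            T[k][j] = (4**j * T[k][j - 1] - T[k - 1][j - 1]) / (4**j - 1)
    return T

T = romberg(np.exp, 0.0, 1.0, 4)
print(T[4][4], np.e - 1.0)    # extrapolated value vs. exact integral
```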


Richardson Example
    I = ∫_0^1 e^x dx
Initial values, all created from the same 17 values of f(x):
    1.8591409142
    1.7539310925
    1.7272219046
    1.7205185922
    1.7188411286
Using these 5 values, we build the table (extrapolate) to get more precision.
    None            Round 1         Round 2         Round 3         Round 4
    1.8591409142
    1.7539310925    1.7188611519
    1.7272219046    1.7183188419    1.7182826879
    1.7205185922    1.7182841547    1.7182818422    1.7182818288
    1.7188411286    1.7182819741    1.7182818287    1.7182818285    1.7182818285

Richardson Example
Error for Richardson extrapolation (aka Romberg integration):
    1/h   None         Round 1      Round 2      Round 3      Round 4
    1     1.4086e-01
    2     3.5649e-02   5.7932e-04
    4     8.9401e-03   3.7013e-05   8.5947e-07
    8     2.2368e-03   2.3262e-06   1.3759e-08   3.3549e-10
    16    5.5930e-04   1.4559e-07   2.1631e-10   1.349e-12    3.149e-14
          O(h^2)       O(h^4)       O(h^6)       O(h^8)       O(h^10)
Gauss Quadrature Results:
    n     Qn           E
    2     1.8591e+00   1.4086e-01
    3     1.7189e+00   5.7932e-04
    4     1.7183e+00   1.0995e-06
    5     1.7183e+00   1.666e-09
    6     1.7183e+00   7.846e-13
    7     1.7183e+00   0

Richardson Extrapolation
In many problems, such as numerical integration or differentiation, approximate value for some quantity is computed based on some step size
Ideally, we would like to obtain limiting value as step size approaches zero, but we cannot take step size arbitrarily small because of excessive cost or rounding error
Based on values for nonzero step sizes, however, we may be able to estimate value for step size of zero
One way to do this is called Richardson extrapolation

Richardson Extrapolation, continued
Let F(h) denote value obtained with step size h
If we compute value of F for some nonzero step sizes, and if we know theoretical behavior of F(h) as h -> 0, then we can extrapolate from known values to obtain approximate value for F(0)
Suppose that
    F(h) = a_0 + a_1 h^p + O(h^r)   as h -> 0
for some p and r, with r > p
Assume we know values of p and r, but not a_0 or a_1 (indeed, F(0) = a_0 is what we seek)

Richardson Extrapolation, continued
Suppose we have computed F for two step sizes, say h and h/q for some positive integer q
Then we have
    F(h)   = a_0 + a_1 h^p + O(h^r)
    F(h/q) = a_0 + a_1 (h/q)^p + O(h^r) = a_0 + a_1 q^{-p} h^p + O(h^r)
This system of two linear equations in two unknowns a_0 and a_1 is easily solved to obtain
    a_0 = F(h) + (F(h) - F(h/q)) / (q^{-p} - 1) + O(h^r)
Accuracy of improved value, a_0, is O(h^r)

Numerical Integration Numerical Differentiation Romberg Integration, continued Extrapolated value, though improved, is still only approximate, not exact, and its accuracy is still limited by step size and arithmetic precision used If F (h) is known for several values of h, then extrapolation process can be repeated to produce still more accurate approximations, up to limitations imposed by finite-precision arithmetic Michael T. Heath Scientific Computing 56 / 6

Example: Richardson Extrapolation
Use Richardson extrapolation to improve accuracy of finite difference approximation to derivative of function sin(x) at x = 1
Using first-order accurate forward difference approximation, we have
    F(h) = a_0 + a_1 h + O(h^2)
so p = 1 and r = 2 in this instance
Using step sizes of h = 0.5 and h/2 = 0.25 (i.e., q = 2), we obtain
    F(h)   = (sin(1.5) - sin(1)) / 0.5   = 0.312048
    F(h/2) = (sin(1.25) - sin(1)) / 0.25 = 0.430055

Example, continued
Extrapolated value is then given by
    F(0) = a_0 ≈ F(h) + (F(h) - F(h/2)) / ((1/2)^1 - 1) = 2 F(h/2) - F(h) = 0.548061
For comparison, correctly rounded result is cos(1) = 0.540302
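
The same computation as a short script (Python, my own sketch):

```python
import numpy as np

def fwd_diff(f, x, h):
    return (f(x + h) - f(x)) / h          # first-order forward difference

h = 0.5
F_h  = fwd_diff(np.sin, 1.0, h)           # ~0.312048
F_h2 = fwd_diff(np.sin, 1.0, h / 2)       # ~0.430055
a0 = 2.0 * F_h2 - F_h                     # Richardson: eliminate the O(h) term
print(F_h, F_h2, a0, np.cos(1.0))         # extrapolated ~0.548061 vs exact 0.540302
```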

Example: Romberg Integration
As another example, evaluate
    ∫_0^{π/2} sin(x) dx
Using composite trapezoid rule, we have
    F(h) = a_0 + a_1 h^2 + O(h^4)
so p = 2 and r = 4 in this instance
With h = π/2, F(h) = F(π/2) = 0.785398
With q = 2, F(h/2) = F(π/4) = 0.948059

Example, continued
Extrapolated value is then given by
    F(0) = a_0 ≈ F(h) + (F(h) - F(h/2)) / ((1/2)^2 - 1) = (4 F(h/2) - F(h)) / 3 = 1.002280
which is substantially more accurate than values previously computed (exact answer is 1)

Romberg Integration
Continued Richardson extrapolations using composite trapezoid rule with successively halved step sizes is called Romberg integration
It is capable of producing very high accuracy (up to limit imposed by arithmetic precision) for very smooth integrands
It is often implemented in automatic (though nonadaptive) fashion, with extrapolations continuing until change in successive values falls below specified error tolerance

Double Integrals
Approaches for evaluating double integrals include
  Use automatic one-dimensional quadrature routine for each dimension, one for outer integral and another for inner integral
  Use product quadrature rule resulting from applying one-dimensional rule to successive dimensions
  Use non-product quadrature rule for regions such as triangles

Tensor-Product Integration
As with interpolation, if domain can be expressed in rectangular form, then can use tensor products of 1D interpolants:
    Q_n = Σ_{j=0}^{N} Σ_{i=0}^{N} w_i w_j f(ξ_i, ξ_j)
The weights are just the 1D quadrature weights.
More complex domains handled by mappings of [-1, 1] to more general shapes and/or by using composite multidomain integration.
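
A tensor-product Gauss rule on the square [-1, 1]^2 (Python sketch, mine; leggauss supplies the 1D nodes and weights):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def tensor_gauss_2d(f, N):
    """Integrate f(x, y) over [-1,1]^2 with an (N+1)x(N+1) tensor-product Gauss rule."""
    xi, w = leggauss(N + 1)
    X, Y = np.meshgrid(xi, xi)                 # 2D node grid
    W = np.outer(w, w)                         # product weights w_i * w_j
    return np.sum(W * f(X, Y))

f = lambda x, y: np.exp(x + y)
print(tensor_gauss_2d(f, 5), (np.e - 1.0 / np.e)**2)   # compare with exact value
```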

Multiple Integrals
To evaluate multiple integrals in higher dimensions, only generally viable approach is Monte Carlo method
Function is sampled at n points distributed randomly in domain of integration, and mean of function values is multiplied by area (or volume, etc.) of domain to obtain estimate for integral
Error in estimate goes to zero as 1/√n, so to gain one additional decimal digit of accuracy requires increasing n by factor of 100
For this reason, Monte Carlo calculations of integrals often require millions of evaluations of integrand
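
A minimal Monte Carlo sketch for this kind of problem (Python, my own example; the 1/√n decay is visible by comparing sample sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, dim, n):
    """Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim."""
    pts = rng.random((n, dim))
    return np.mean(f(pts))          # volume of the unit cube is 1

f = lambda p: np.exp(np.sum(p, axis=1))           # integrand exp(x1 + ... + x6)
exact = (np.e - 1.0)**6                           # exact integral over [0,1]^6
for n in (10**4, 10**6):
    print(n, abs(mc_integrate(f, 6, n) - exact))  # error shrinks roughly like 1/sqrt(n)
```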

Multiple Integrals, continued
Monte Carlo method is not competitive for dimensions one or two, but strength of method is that its convergence rate is independent of number of dimensions
For example, one million points in six dimensions amounts to only ten points per dimension, which is much better than any type of conventional quadrature rule would require for same level of accuracy

Numerical Differentiation
Differentiation is inherently sensitive, as small perturbations in data can cause large changes in result
Differentiation is inverse of integration, which is inherently stable because of its smoothing effect
For example, two functions shown below have very similar definite integrals but very different derivatives

Numerical Differentiation, continued
To approximate derivative of function whose values are known only at discrete set of points, good approach is to fit some smooth function to given data and then differentiate approximating function
If given data are sufficiently smooth, then interpolation may be appropriate, but if data are noisy, then smoothing approximating function, such as least squares spline, is more appropriate

Numerical Differentiation Techniques
Three common approaches for deriving formulas:
  Taylor series
  Taylor series + Richardson extrapolation
  Differentiate Lagrange interpolants
Readily programmed; see, e.g., Fornberg's spectral methods text.

Example Matlab Code

Using Taylor Series to Derive Difference Formulas
Taylor Series:
    (1)  f_{j+1} = f_j + h f'_j + (h^2/2) f''_j + (h^3/3!) f'''_j + (h^4/4!) f^(4)(ξ_+)
    (2)  f_j     = f_j
    (3)  f_{j-1} = f_j - h f'_j + (h^2/2) f''_j - (h^3/3!) f'''_j + (h^4/4!) f^(4)(ξ_-)
Approximation of f'_j := f'(x_j):
    (1/h)  [(1) - (2)]:  (f_{j+1} - f_j)/h       = f'_j + (h/2) f''_j + h.o.t.
    (1/2h) [(1) - (3)]:  (f_{j+1} - f_{j-1})/(2h) = f'_j + (h^2/3!) f'''_j + h.o.t.

    h:   (f_{j+1} - f_j)/h    = f'_j + c_1 h + c_2 h^2 + c_3 h^3 + ...
    2h:  (f_{j+2} - f_j)/(2h) = f'_j + 2 c_1 h + 4 c_2 h^2 + 8 c_3 h^3 + ...
Take 2 x [first] - [second]:
    2 (f_{j+1} - f_j)/h - (f_{j+2} - f_j)/(2h) = (4 f_{j+1} - 4 f_j - f_{j+2} + f_j)/(2h)
        = (-3 f_j + 4 f_{j+1} - f_{j+2})/(2h) = f'_j - 2 c_2 h^2 - 6 c_3 h^3 - ...
Formula is improved from O(h) to O(h^2)
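
A quick order-of-accuracy check for the extrapolated one-sided formula (Python, my own sketch):

```python
import numpy as np

f, df, x = np.sin, np.cos, 1.0
for h in (0.1, 0.05, 0.025):
    fwd = (f(x + h) - f(x)) / h                               # O(h) forward difference
    ext = (-3*f(x) + 4*f(x + h) - f(x + 2*h)) / (2*h)         # O(h^2) extrapolated formula
    print(h, abs(fwd - df(x)), abs(ext - df(x)))              # errors scale like h vs h^2
```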

Finite Difference Approximations
Given smooth function f : R -> R, we wish to approximate its first and second derivatives at point x
Consider Taylor series expansions
    f(x + h) = f(x) + f'(x) h + f''(x) h^2/2 + f'''(x) h^3/6 + ...
    f(x - h) = f(x) - f'(x) h + f''(x) h^2/2 - f'''(x) h^3/6 + ...
Solving for f'(x) in first series, obtain forward difference approximation
    f'(x) = (f(x + h) - f(x))/h - f''(x) h/2 - ... ≈ (f(x + h) - f(x))/h
which is first-order accurate since dominant term in remainder of series is O(h)

Finite Difference Approximations, continued
Similarly, from second series derive backward difference approximation
    f'(x) = (f(x) - f(x - h))/h + f''(x) h/2 + ... ≈ (f(x) - f(x - h))/h
which is also first-order accurate
Subtracting second series from first series gives centered difference approximation
    f'(x) = (f(x + h) - f(x - h))/(2h) - f'''(x) h^2/6 + ... ≈ (f(x + h) - f(x - h))/(2h)
which is second-order accurate

Finite Difference Approximations, continued
Adding both series together gives centered difference approximation for second derivative
    f''(x) = (f(x + h) - 2 f(x) + f(x - h))/h^2 - f^(4)(x) h^2/12 + ... ≈ (f(x + h) - 2 f(x) + f(x - h))/h^2
which is also second-order accurate
Finite difference approximations can also be derived by polynomial interpolation, which is less cumbersome than Taylor series for higher-order accuracy or higher-order derivatives, and is more easily generalized to unequally spaced points