
> 5. Numerical Integration > Review of Interpolation

Find $p_n(x)$ with $p_n(x_j) = y_j$, $j = 0, 1, 2, \dots, n$.

Solution: $p_n(x) = y_0\,\ell_0(x) + y_1\,\ell_1(x) + \dots + y_n\,\ell_n(x)$, where
$$\ell_k(x) = \prod_{j=0,\ j \neq k}^{n} \frac{x - x_j}{x_k - x_j}.$$

Theorem. Let $y_j = f(x_j)$ with $f(x)$ smooth, and let $p_n$ interpolate $f(x)$ at $x_0 < x_1 < \dots < x_n$. For any $x \in (x_0, x_n)$ there is a $\xi$ such that
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\, \underbrace{(x - x_0)(x - x_1)\cdots(x - x_n)}_{\psi(x)}.$$

Example ($n = 1$, linear):
$$f(x) - p_1(x) = \frac{f''(\xi)}{2}(x - x_0)(x - x_1), \qquad p_1(x) = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0) + y_0.$$

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

Find the area under the curve $y = f(x)$:
$$\int_a^b f(x)\,dx = \sum_{j=0}^{N-1} \underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{interpolate } f(x) \text{ on } (x_j,\,x_{j+1}) \text{ and integrate exactly}} \approx \sum_{j=0}^{N-1} (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}$$

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

Trapezoidal Rule:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \int_{x_j}^{x_{j+1}} p_1(x)\,dx = \int_{x_j}^{x_{j+1}} \left\{ \frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}(x - x_j) + f(x_j) \right\} dx = (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}$$
On a single interval $[a, b]$:
$$T_1(f) = (b - a)\,\frac{f(a) + f(b)}{2} \qquad (6.1)$$

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

Another derivation of the Trapezoidal Rule. Seek
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1} f(x_{j+1})$$
exact on polynomials of degree $\le 1$ (i.e., on $1$ and $x$):
$$f(x) \equiv 1: \quad \int_{x_j}^{x_{j+1}} f(x)\,dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1} \cdot 1$$
$$f(x) = x: \quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2 - x_j^2}{2} = w_j x_j + w_{j+1} x_{j+1}$$

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

Example. Approximate the integral
$$I = \int_0^1 \frac{dx}{1+x}.$$
The true value is $I = \ln(2) = 0.693147$. Using (6.1), we obtain
$$T_1 = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4} = 0.75$$
and the error is
$$I - T_1(f) = -0.0569 \qquad (6.2)$$

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

To improve on approximation (6.1) when $f(x)$ is not a nearly linear function on $[a, b]$, break the interval $[a, b]$ into smaller subintervals and apply (6.1) on each subinterval. If the subintervals are small enough, then $f(x)$ will be nearly linear on each one.

Example. Evaluate the preceding example by using $T_1(f)$ on two subintervals of equal length. For two subintervals,
$$I = \int_0^{1/2} \frac{dx}{1+x} + \int_{1/2}^{1} \frac{dx}{1+x} \approx T_2 = \frac{1}{2}\cdot\frac{1 + \frac{2}{3}}{2} + \frac{1}{2}\cdot\frac{\frac{2}{3} + \frac{1}{2}}{2} = \frac{17}{24} = 0.70833$$
and the error
$$I - T_2 = -0.0152 \qquad (6.3)$$
is about $\frac{1}{4}$ of that for $T_1$ in (6.2).

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

We derive the general formula for calculations using $n$ subintervals of equal length $h = \frac{b-a}{n}$. The endpoints of each subinterval are then $x_j = a + jh$, $j = 0, 1, \dots, n$. Breaking the integral into $n$ subintegrals,
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \dots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$
$$\approx h\,\frac{f(x_0) + f(x_1)}{2} + h\,\frac{f(x_1) + f(x_2)}{2} + \dots + h\,\frac{f(x_{n-1}) + f(x_n)}{2}$$
The trapezoidal numerical integration rule:
$$T_n(f) = h\left(\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \dots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right) \qquad (6.4)$$
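To make (6.4) concrete, here is a minimal Python sketch (NumPy assumed; the function name `trapezoid` and the driver line are our own illustration, not part of the lecture):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_n(f) of (6.4) on n equal subintervals."""
    x = np.linspace(a, b, n + 1)   # nodes x_j = a + j*h
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Reproduce the running example: T_2 for f(x) = 1/(1+x) on [0, 1]
print(trapezoid(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 2))   # 17/24 = 0.70833...
```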

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

With a sequence of increasing values of $n$, $T_n(f)$ will usually be an increasingly accurate approximation of $I(f)$. But which sequence of $n$ should be used? If $n$ is doubled repeatedly, then the function values used in each $T_{2n}(f)$ will include all earlier function values used in the preceding $T_n(f)$. Thus, the doubling of $n$ will ensure that all previously computed information is used in the new calculation, making the trapezoidal rule less expensive than it would be otherwise.
$$T_2(f) = h\left(\frac{f(x_0)}{2} + f(x_1) + \frac{f(x_2)}{2}\right) \quad \text{with} \quad h = \frac{b-a}{2},\ x_0 = a,\ x_1 = \frac{a+b}{2},\ x_2 = b.$$
Also
$$T_4(f) = h\left(\frac{f(x_0)}{2} + f(x_1) + f(x_2) + f(x_3) + \frac{f(x_4)}{2}\right)$$
with
$$h = \frac{b-a}{4},\quad x_0 = a,\quad x_1 = \frac{3a+b}{4},\quad x_2 = \frac{a+b}{2},\quad x_3 = \frac{a+3b}{4},\quad x_4 = b.$$
Only $f(x_1)$ and $f(x_3)$ need to be evaluated.

> 5. Numerical Integration > 5.1 The Trapezoidal Rule

Example. We give calculations of $T_n(f)$ for three integrals:
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx = 0.74682413281243$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) = 1.3258176636680$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} = 3.6275987284684$$

  n    I(1) Error   Ratio    I(2) Error   Ratio    I(3) Error   Ratio
  2    1.55E-2               -1.33E-1              -5.61E-1
  4    3.84E-3      4.02     -3.59E-3     37.0     -3.76E-2     14.9
  8    9.59E-4      4.01      5.64E-4     -6.37    -1.93E-4     195.0
 16    2.40E-4      4.00      1.44E-4      3.92    -5.19E-9     37,600.0
 32    5.99E-5      4.00      3.60E-5      4.00        *
 64    1.50E-5      4.00      9.01E-6      4.00        *
128    3.74E-6      4.00      2.25E-6      4.00        *

The error for $I^{(1)}$, $I^{(2)}$ decreases by a factor of 4 when $n$ doubles; for $I^{(3)}$ the answers for $n = 32, 64, 128$ were correct up to the limits due to rounding error on the computer (16 decimal digits).

> 5. Numerical Integration > 5.1.1 Simpson's rule

To improve on $T_1(f)$ in (6.1), use quadratic interpolation to approximate $f(x)$ on $[a, b]$. Let $P_2(x)$ be the quadratic polynomial that interpolates $f(x)$ at $a$, $c = \frac{a+b}{2}$, and $b$:
$$I(f) = \int_a^b f(x)\,dx \approx \int_a^b P_2(x)\,dx \qquad (6.5)$$
$$= \int_a^b \left( \frac{(x-c)(x-b)}{(a-c)(a-b)}\,f(a) + \frac{(x-a)(x-b)}{(c-a)(c-b)}\,f(c) + \frac{(x-a)(x-c)}{(b-a)(b-c)}\,f(b) \right) dx$$
This can be evaluated directly, but it is easier with a change of variables. Let $h = \frac{b-a}{2}$ and $u = x - a$. Then
$$\int_a^b \frac{(x-c)(x-b)}{(a-c)(a-b)}\,dx = \frac{1}{2h^2} \int_a^{a+2h} (x-c)(x-b)\,dx = \frac{1}{2h^2} \int_0^{2h} (u-h)(u-2h)\,du = \frac{1}{2h^2} \left( \frac{u^3}{3} - \frac{3h}{2}u^2 + 2h^2 u \right)\Bigg|_0^{2h} = \frac{h}{3}$$
and similarly for the other two terms, giving
$$S_2(f) = \frac{h}{3}\left( f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b) \right) \qquad (6.6)$$

> 5. Numerical Integration > 5.1.1 Simpson's rule

Example. For
$$I = \int_0^1 \frac{dx}{1+x}$$
we have $h = \frac{b-a}{2} = \frac{1}{2}$ and
$$S_2(f) = \frac{1}{6}\left(1 + 4\cdot\frac{2}{3} + \frac{1}{2}\right) = \frac{25}{36} = 0.69444 \qquad (6.7)$$
and the error is
$$I - S_2 = \ln(2) - S_2 = -0.00130$$
while the error for the trapezoidal rule (the number of function evaluations is the same for both $S_2$ and $T_2$) was $I - T_2 = -0.0152$. The error in $S_2$ is smaller than that in (6.3) for $T_2$ by a factor of 12, a significant increase in accuracy.

> 5. Numerical Integration > 5.1.1 Simpson's rule

[Figure: An illustration of Simpson's rule (6.6): $y = f(x)$ and $y = P_2(x)$]

> 5. Numerical Integration > 5.1.1 Simpson's rule

The rule $S_2(f)$ will be an accurate approximation to $I(f)$ if $f(x)$ is nearly quadratic on $[a, b]$. For the other cases, proceed in the same manner as for the trapezoidal rule. Let $n$ be an even integer, $h = \frac{b-a}{n}$, and define the evaluation points for $f(x)$ by $x_j = a + jh$, $j = 0, 1, \dots, n$. We follow the idea from the trapezoidal rule, but break $[a, b] = [x_0, x_n]$ into larger intervals, each containing three interpolation node points:
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \dots + \int_{x_{n-2}}^{x_n} f(x)\,dx$$
$$\approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right] + \frac{h}{3}\left[f(x_2) + 4f(x_3) + f(x_4)\right] + \dots + \frac{h}{3}\left[f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right]$$

> 5. Numerical Integration > 5.1.1 Simpson's rule

Simpson's rule:
$$S_n(f) = \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \dots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big) \qquad (6.8)$$
It has been among the most popular numerical integration methods for more than two centuries.
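A hedged Python sketch of (6.8), in the same style as the trapezoid sketch above (the even-`n` check and vectorized slicing are our implementation choices):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson rule S_n(f) of (6.8); n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3.0) * (y[0] + y[-1]
                        + 4.0 * y[1:-1:2].sum()    # odd-indexed nodes
                        + 2.0 * y[2:-1:2].sum())   # interior even-indexed nodes

print(simpson(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 2))   # 25/36 = 0.69444...
```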

> 5. Numerical Integration > 5.1.1 Simpson's rule

Example. Evaluate the integrals
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx = 0.74682413281243$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) = 1.3258176636680$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} = 3.6275987284684$$

  n    I(1) Error   Ratio    I(2) Error   Ratio    I(3) Error   Ratio
  2    -3.56E-4               8.66E-2              -1.26
  4    -3.12E-5     11.4      3.95E-2      2.2      1.37E-1     -9.2
  8    -1.99E-6     15.7      1.95E-3     20.3      1.23E-2     11.2
 16    -1.25E-7     15.9      4.02E-6    485.0      6.43E-5    191.0
 32    -7.79E-9     16.0      2.33E-8    172.0      1.71E-9   37,600.0
 64    -4.87E-10    16.0      1.46E-9     16.0         *
128    -3.04E-11    16.0      9.15E-11    16.0         *

For $I^{(1)}$, $I^{(2)}$, the ratio by which the error decreases approaches 16. For $I^{(3)}$, the errors converge to zero much more rapidly.

> 5. Numerical Integration > 5.2 Error formulas

Theorem. Let $f \in C^2[a, b]$, $n \in \mathbb{N}$. The error in integrating $I(f) = \int_a^b f(x)\,dx$ using the trapezoidal rule
$$T_n(f) = h\left[\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \dots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right]$$
is given by
$$E_n^T \equiv I(f) - T_n(f) = -\frac{h^2(b-a)}{12}\,f''(c_n) \qquad (6.9)$$
where $c_n$ is some unknown point in $[a, b]$, and $h = \frac{b-a}{n}$.

> 5. Numerical Integration > 5.2 Error formulas

Theorem. Suppose that $f \in C^2[a, b]$, and $h = \max_j (x_{j+1} - x_j)$. Then
$$\left| \int_a^b f(x)\,dx - \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) + f(x_j)}{2} \right| \le \frac{b-a}{12}\,h^2 \max_{a \le x \le b} |f''(x)|.$$
Proof: Let $I_j$ be the $j$th subinterval and $p_1$ the linear interpolant on $I_j$ at $x_j$, $x_{j+1}$. Local error:
$$f(x) - p_1(x) = \frac{f''(\xi)}{2}\,\underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)}.$$
Then
$$\left| \int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right| =$$

> 5. Numerical Integration > 5.2 Error formulas

proof (continued)
$$= \left| \frac{1}{2} \int_{x_j}^{x_{j+1}} f''(\xi)\,(x - x_j)(x - x_{j+1})\,dx \right| \le \frac{1}{2} \max_{a \le x \le b} |f''(x)| \int_{x_j}^{x_{j+1}} |x - x_j|\,|x - x_{j+1}|\,dx = \frac{1}{2} \max_{a \le x \le b} |f''(x)| \cdot \frac{(x_{j+1} - x_j)^3}{6}.$$
Hence
$$\text{local error} \le \frac{1}{12} \max_{a \le x \le b} |f''(x)|\,h_j^3, \qquad h_j = x_{j+1} - x_j.$$

> 5. Numerical Integration > 5.2 Error formulas

Finally,
$$\text{global error} = \left| \int_a^b f(x)\,dx - \sum_j (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right| = \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right) \right|$$
$$\le \sum_j \text{local error} \le \frac{1}{12} \max_{a \le x \le b} |f''(x)| \sum_{j=0}^{n-1} \underbrace{(x_{j+1} - x_j)^3}_{\le\ h^2 (x_{j+1} - x_j)} \le \frac{1}{12} \max_{a \le x \le b} |f''(x)|\,h^2 \underbrace{\sum_{j=0}^{n-1} (x_{j+1} - x_j)}_{b-a}.$$

> 5. Numerical Integration > 5.2 Error formulas

Example. Recall the example
$$I(f) = \int_0^1 \frac{dx}{1+x} = \ln 2.$$
Here $f(x) = \frac{1}{1+x}$, $[a, b] = [0, 1]$, and $f''(x) = \frac{2}{(1+x)^3}$. Then by (6.9),
$$E_n^T(f) = -\frac{h^2}{12}\,f''(c_n), \qquad 0 \le c_n \le 1, \quad h = \frac{1}{n}.$$
This cannot be computed exactly since $c_n$ is unknown. But
$$\max_{0 \le x \le 1} |f''(x)| = \max_{0 \le x \le 1} \frac{2}{(1+x)^3} = 2$$
and therefore
$$|E_n^T(f)| \le \frac{h^2}{12}\cdot 2 = \frac{h^2}{6}.$$
For $n = 1$ and $n = 2$ we have
$$|E_1^T(f)| \le \frac{1}{6} = 0.167, \qquad |E_2^T(f)| \le \frac{(1/2)^2}{6} = 0.0417,$$
while the actual errors are $-0.0569$ and $-0.0152$.

> 5. Numerical Integration > 5.2 Error formulas

A possible weakness in the trapezoidal rule can be inferred from the assumption of the theorem for the error. If $f(x)$ does not have two continuous derivatives on $[a, b]$, does $T_n(f)$ converge more slowly? YES, for some functions, especially if the first derivative is not continuous.

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

The error formula (6.9),
$$E_n^T(f) \equiv I(f) - T_n(f) = -\frac{h^2(b-a)}{12}\,f''(c_n),$$
can only be used to bound the error, because $f''(c_n)$ is unknown. This can be improved by a more careful consideration of the error formula. A central element of the proof of (6.9) lies in the local error
$$\int_\alpha^{\alpha+h} f(x)\,dx - h\,\frac{f(\alpha) + f(\alpha+h)}{2} = -\frac{h^3}{12}\,f''(c) \qquad (6.10)$$
for some $c \in [\alpha, \alpha+h]$.

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Recall the derivation of the trapezoidal rule $T_n(f)$ and use the local error (6.10):
$$E_n^T(f) = \int_a^b f(x)\,dx - T_n(f) = \int_{x_0}^{x_n} f(x)\,dx - T_n(f)$$
$$= \left( \int_{x_0}^{x_1} f(x)\,dx - h\,\frac{f(x_0)+f(x_1)}{2} \right) + \left( \int_{x_1}^{x_2} f(x)\,dx - h\,\frac{f(x_1)+f(x_2)}{2} \right) + \dots + \left( \int_{x_{n-1}}^{x_n} f(x)\,dx - h\,\frac{f(x_{n-1})+f(x_n)}{2} \right)$$
$$= -\frac{h^3}{12} f''(\gamma_1) - \frac{h^3}{12} f''(\gamma_2) - \dots - \frac{h^3}{12} f''(\gamma_n)$$
with $\gamma_1 \in [x_0, x_1],\ \gamma_2 \in [x_1, x_2],\ \dots,\ \gamma_n \in [x_{n-1}, x_n]$, and
$$E_n^T(f) = -\frac{h^2}{12} \underbrace{\big( h f''(\gamma_1) + \dots + h f''(\gamma_n) \big)}_{=\ (b-a)\,f''(c_n)}, \qquad c_n \in [a, b].$$

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

To estimate the trapezoidal error, observe that $h f''(\gamma_1) + \dots + h f''(\gamma_n)$ is a Riemann sum for the integral
$$\int_a^b f''(x)\,dx = f'(b) - f'(a) \qquad (6.11)$$
The Riemann sum is based on the partition $[x_0, x_1], [x_1, x_2], \dots, [x_{n-1}, x_n]$ of $[a, b]$. As $n \to \infty$, this sum will approach the integral (6.11). With (6.11), we find an asymptotic estimate (it improves as $n$ increases):
$$E_n^T(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big) =: \tilde{E}_n^T(f). \qquad (6.12)$$
As long as $f'(x)$ is computable, $\tilde{E}_n^T(f)$ will be very easy to compute.

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Example. Again consider
$$I = \int_0^1 \frac{dx}{1+x}.$$
Then $f'(x) = -\frac{1}{(1+x)^2}$, and the asymptotic estimate (6.12) yields
$$\tilde{E}_n^T = -\frac{h^2}{12}\left( -\frac{1}{(1+1)^2} + \frac{1}{(1+0)^2} \right) = -\frac{h^2}{16}, \qquad h = \frac{1}{n}$$
and for $n = 1$ and $n = 2$:
$$\tilde{E}_1^T = -\frac{1}{16} = -0.0625, \qquad \tilde{E}_2^T = -0.0156$$
$$I - T_1 = -0.0569, \qquad I - T_2 = -0.0152$$

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

The estimate $\tilde{E}_n^T(f) = -\frac{h^2}{12}(f'(b) - f'(a))$ has several practical advantages over the earlier formula (6.9), $E_n^T(f) = -\frac{h^2(b-a)}{12} f''(c_n)$:

1. It confirms that when $n$ is doubled (or $h$ is halved), the error decreases by a factor of about 4, provided that $f'(b) - f'(a) \neq 0$. This agrees with the results for $I^{(1)}$ and $I^{(2)}$.
2. (6.12) implies that the convergence of $T_n(f)$ will be more rapid when $f'(b) - f'(a) = 0$. This is a partial explanation of the very rapid convergence observed with $I^{(3)}$.
3. (6.12) leads to a more accurate numerical integration formula by taking $\tilde{E}_n^T(f)$ into account:
$$I(f) - T_n(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big)$$
$$I(f) \approx T_n(f) - \frac{h^2}{12}\big(f'(b) - f'(a)\big) =: CT_n(f), \qquad (6.13)$$
the corrected trapezoidal rule.
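A small sketch of the corrected rule (6.13); it assumes the derivative `df` is supplied analytically (our assumption for the demo), and should reproduce the $CT_4$ entry of the table that follows:

```python
import numpy as np

def trapezoid(f, a, b, n):
    x, h = np.linspace(a, b, n + 1), (b - a) / n
    y = f(x)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def corrected_trapezoid(f, df, a, b, n):
    """Corrected trapezoidal rule CT_n(f) of (6.13); df is f'."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h**2 / 12.0 * (df(b) - df(a))

f  = lambda x: np.exp(-x**2)
df = lambda x: -2.0 * x * np.exp(-x**2)
print(corrected_trapezoid(f, df, 0.0, 1.0, 4))   # ≈ 0.7468162 vs I = 0.7468241
```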

> 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Example. Recall the integral $I^{(1)}$,
$$I = \int_0^1 e^{-x^2}\,dx = 0.74682413281243$$

  n    I - T_n(f)   ~E_n(f)      CT_n(f)            I - CT_n(f)   Ratio
  2    1.545E-2     1.533E-2     0.746698561877     1.26E-4
  4    3.840E-3     3.832E-3     0.746816175313     7.96E-6       15.8
  8    9.585E-4     9.580E-4     0.746823634224     4.99E-7       16.0
 16    2.395E-4     2.395E-4     0.746824101633     3.12E-8       16.0
 32    5.988E-5     5.988E-5     0.746824130863     1.95E-9       16.0
 64    1.497E-5     1.497E-5     0.746824132690     1.22E-10      16.0

Table: Example of $CT_n(f)$ and $\tilde{E}_n(f)$

Note that the estimate $\tilde{E}_n^T(f) = \frac{h^2}{6}\,e^{-1}$, $h = \frac{1}{n}$, is a very accurate estimator of the true error. Also, the error in $CT_n(f)$ converges to zero at a more rapid rate than does the error for $T_n(f)$. When $n$ is doubled, the error in $CT_n(f)$ decreases by a factor of about 16.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

Theorem. Assume $f \in C^4[a, b]$ and $n \in \mathbb{N}$ even. The error in using Simpson's rule is
$$E_n^S(f) = I(f) - S_n(f) = -\frac{h^4(b-a)}{180}\,f^{(4)}(c_n) \qquad (6.14)$$
with $c_n \in [a, b]$ an unknown point, and $h = \frac{b-a}{n}$. Moreover, this error can be estimated with the asymptotic error formula
$$\tilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.15)$$
Note that (6.14) says that Simpson's rule is exact for all $f(x)$ that are polynomials of degree $\le 3$, whereas the quadratic interpolation on which Simpson's rule is based is exact only for $f(x)$ a polynomial of degree $\le 2$. The degree of precision being 3 leads to the power $h^4$ in the error, rather than the power $h^3$, which would have been produced on the basis of the error in quadratic interpolation. The higher power $h^4$ and the simple form of the method are what historically have caused Simpson's rule to become the most popular numerical integration rule.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

Example. Recall (6.7), where $S_2(f)$ was applied to $I = \int_0^1 \frac{dx}{1+x}$:
$$S_2(f) = \frac{1}{6}\left(1 + 4\cdot\frac{2}{3} + \frac{1}{2}\right) = \frac{25}{36} = 0.69444$$
$$f(x) = \frac{1}{1+x}, \qquad f'''(x) = -\frac{6}{(1+x)^4}, \qquad f^{(4)}(x) = \frac{24}{(1+x)^5}$$
The exact error is given by
$$E_n^S(f) = -\frac{h^4}{180}\,f^{(4)}(c_n), \qquad h = \frac{1}{n}$$
for some $0 \le c_n \le 1$. We can bound it by
$$|E_n^S(f)| \le \frac{h^4}{180}\cdot 24 = \frac{2h^4}{15}.$$
The asymptotic error is given by
$$\tilde{E}_n^S(f) = -\frac{h^4}{180}\left( -\frac{6}{(1+1)^4} + \frac{6}{(1+0)^4} \right) = -\frac{h^4}{32}.$$
For $n = 2$, $\tilde{E}_2^S = -0.00195$; the actual error is $-0.00130$.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

The behavior of $I(f) - S_n(f)$ can be derived from (6.15):
$$\tilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
i.e., when $n$ is doubled, $h$ is halved, and $h^4$ decreases by a factor of 16. Thus, the error $E_n^S(f)$ should decrease by the same factor, provided that $f'''(a) \neq f'''(b)$. This is the error behavior observed with the integrals $I^{(1)}$ and $I^{(2)}$. When $f'''(a) = f'''(b)$, the error will decrease more rapidly, which is a partial explanation of the rapid convergence for $I^{(3)}$.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

The theory of asymptotic error formulae
$$E_n(f) \approx \tilde{E}_n(f) \qquad (6.16)$$
such as those for $E_n^T(f)$ and $E_n^S(f)$, says that the accuracy of (6.16) will vary with the integrand $f$, which is illustrated by the two cases $I^{(1)}$ and $I^{(2)}$. From (6.14) and (6.15) we infer that Simpson's rule will not perform as well if $f(x)$ is not four times continuously differentiable on $[a, b]$.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

Example. Use Simpson's rule to approximate
$$I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.$$

  n    Error       Ratio
  2    2.860E-2
  4    1.014E-2    2.82
  8    3.587E-3    2.83
 16    1.268E-3    2.83
 32    4.485E-4    2.83

Table: Simpson's rule for $\sqrt{x}$

The Ratio column shows that the convergence is much slower than the factor of 16 expected for smooth integrands.

> 5. Numerical Integration > 5.2.2 Error formulae for Simpson's rule

As was done for the trapezoidal rule, a corrected Simpson's rule can be defined:
$$CS_n(f) = S_n(f) - \frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.17)$$
This will usually be a more accurate approximation than $S_n(f)$.

> 5. Numerical Integration > 5.2.3 Richardson extrapolation

The error estimates for the trapezoidal rule (6.12),
$$E_n^T(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big),$$
and Simpson's rule (6.15),
$$\tilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
are both of the form
$$I - I_n \approx \frac{c}{n^p} \qquad (6.18)$$
where $I_n$ denotes the numerical integral and $h = \frac{b-a}{n}$. The constants $c$ and $p$ vary with the method and the function. With most integrands $f(x)$, $p = 2$ for the trapezoidal rule and $p = 4$ for Simpson's rule. There are other numerical methods that satisfy (6.18), with other values of $p$ and $c$. We use (6.18) to obtain a computable estimate of the error $I - I_n$, without needing to know $c$ explicitly.

> 5. Numerical Integration > 5.2.3 Richardson extrapolation

Replacing $n$ by $2n$ and comparing to (6.18),
$$I - I_{2n} \approx \frac{c}{(2n)^p} = \frac{1}{2^p}\cdot\frac{c}{n^p} \approx \frac{1}{2^p}\,(I - I_n) \qquad (6.19)$$
and solving for $I$ gives Richardson's extrapolation formula:
$$(2^p - 1)\,I \approx 2^p I_{2n} - I_n$$
$$I \approx \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) \equiv R_{2n} \qquad (6.20)$$
$R_{2n}$ is an improved estimate of $I$, based on using $I_n$, $I_{2n}$, $p$, and the assumption (6.18). How much more accurate it is than $I_{2n}$ depends on the validity of (6.18) and (6.19).

> 5. Numerical Integration > 5.2.3 Richardson extrapolation

To estimate the error in $I_{2n}$, compare it with the more accurate value $R_{2n}$:
$$I - I_{2n} \approx R_{2n} - I_{2n} = \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) - I_{2n}$$
$$I - I_{2n} \approx \frac{1}{2^p - 1}\big(I_{2n} - I_n\big) \qquad (6.21)$$
This is Richardson's error estimate.

> 5. Numerical Integration > 5.2.3 Richardson extrapolation

Example. Using the trapezoidal rule to approximate
$$I = \int_0^1 e^{-x^2}\,dx = 0.74682413281243,$$
we have
$$T_2 = 0.7313702518, \qquad T_4 = 0.7429840978.$$
Using (6.20), $I \approx \frac{1}{2^p - 1}(2^p I_{2n} - I_n)$ with $p = 2$ and $n = 2$, we obtain
$$I \approx R_4 = \frac{1}{3}(4 I_4 - I_2) = \frac{1}{3}(4 T_4 - T_2) = 0.7468553797.$$
The error in $R_4$ is $-0.0000312$; from the previous table, $R_4$ is more accurate than $T_{32}$. To estimate the error in $T_4$, use (6.21) to get
$$I - T_4 \approx \frac{1}{3}(T_4 - T_2) = 0.0038713.$$
The actual error in $T_4$ is $0.00384$; thus (6.21) is a very accurate error estimate.
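The same computation as a Python sketch (the helper name `richardson` is ours; the T-values are taken from the example above):

```python
def richardson(I_n, I_2n, p):
    """Richardson extrapolation R_2n of (6.20) and error estimate (6.21)."""
    R = (2**p * I_2n - I_n) / (2**p - 1)
    est = (I_2n - I_n) / (2**p - 1)   # estimate of I - I_2n
    return R, est

# Trapezoidal rule values from the example (p = 2)
R4, est = richardson(0.7313702518, 0.7429840978, p=2)
print(R4, est)   # 0.74685537..., 0.0038712...
```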

> 5. Numerical Integration > 5.2.4 Periodic Integrands

Definition. A function $f(x)$ is periodic with period $\tau$ if
$$f(x) = f(x + \tau), \qquad \forall x \in \mathbb{R} \qquad (6.22)$$
and this relation should not hold for any smaller value of $\tau$. For example, $f(x) = e^{\cos(\pi x)}$ is periodic with period $\tau = 2$. If $f(x)$ is periodic and differentiable, then its derivatives are also periodic with period $\tau$.

> 5. Numerical Integration > 5.2.4 Periodic Integrands

Consider integrating $I = \int_a^b f(x)\,dx$ with the trapezoidal or Simpson's rule, and assume that $b - a$ is an integer multiple of the period $\tau$. Assume $f(x) \in C^\infty[a, b]$ (it has derivatives of any order). Then for all derivatives of $f(x)$, the periodicity of $f(x)$ implies that
$$f^{(k)}(a) = f^{(k)}(b), \qquad k \ge 0 \qquad (6.23)$$
If we now look at the asymptotic error formulae for the trapezoidal and Simpson's rules, they become zero because of (6.23). Thus, the error formulae $\tilde{E}_n^T(f)$ and $\tilde{E}_n^S(f)$ should converge to zero more rapidly when $f(x)$ is a periodic function, provided $b - a$ is an integer multiple of the period of $f$.

> 5. Numerical Integration > 5.2.4 Periodic Integrands

The asymptotic error formulae $\tilde{E}_n^T(f)$ and $\tilde{E}_n^S(f)$ can be extended to higher-order terms in $h$, using the Euler–MacLaurin expansion; the higher-order terms are multiples of $f^{(k)}(b) - f^{(k)}(a)$ for all odd integers $k \ge 1$. Using this, we can prove that the errors $E_n^T(f)$ and $E_n^S(f)$ converge to zero even more rapidly than was implied by the earlier comments when $f(x)$ is periodic. Note that the trapezoidal rule is the preferred integration rule when we are dealing with smooth periodic integrands. The earlier results for the integral $I^{(3)}$ illustrate this.

> 5. Numerical Integration > 5.2.4 Periodic Integrands

Example. The ellipse with boundary
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$$
has area $\pi a b$. For the case in which the area is $\pi$ (and thus $ab = 1$), we study the variation of the perimeter of the ellipse as $a$ and $b$ vary. The ellipse has the parametric representation
$$(x, y) = (a\cos\theta,\ b\sin\theta), \qquad 0 \le \theta \le 2\pi \qquad (6.24)$$
By using the standard formula for the perimeter, and using the symmetry of the ellipse about the $x$-axis, the perimeter is given by
$$P = 2\int_0^\pi \sqrt{\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2}\,d\theta = 2\int_0^\pi \sqrt{a^2\sin^2\theta + b^2\cos^2\theta}\,d\theta$$

> 5. Numerical Integration > 5.2.4 Periodic Integrands

Since $ab = 1$, we write this as
$$P(b) = 2\int_0^\pi \sqrt{b^{-2}\sin^2\theta + b^2\cos^2\theta}\,d\theta = \frac{2}{b}\int_0^\pi \sqrt{(b^4 - 1)\cos^2\theta + 1}\,d\theta \qquad (6.25)$$
We consider only the case with $1 \le b < \infty$. Since the perimeters of the two ellipses
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1 \quad \text{and} \quad \left(\frac{x}{b}\right)^2 + \left(\frac{y}{a}\right)^2 = 1$$
are equal, we can always consider the case in which the $y$-axis of the ellipse is larger than or equal to its $x$-axis; and this also shows
$$P\!\left(\frac{1}{b}\right) = P(b), \qquad b > 0 \qquad (6.26)$$

> 5. Numerical Integration > 5.2.4 Periodic Integrands

The integrand of $P(b)$,
$$f(\theta) = \frac{2}{b}\left[(b^4 - 1)\cos^2\theta + 1\right]^{1/2},$$
is periodic with period $\pi$. As discussed above, the trapezoidal rule is the natural choice for numerical integration of (6.25). Nonetheless, there is a variation in the behaviour of $f(\theta)$ as $b$ varies, and this will affect the accuracy of the numerical integration.

[Figure: The graph of the integrand $f(\theta)$ for $b = 2, 5, 8$]

> 5. Numerical Integration > 5.2.4 Periodic Integrands

   n      b = 2        b = 5        b = 8
   8      8.575517     19.918814    31.269068
  16      8.578405     20.044483    31.953632
  32      8.578422     20.063957    32.008934
  64      8.578422     20.065672    32.018564
 128      8.578422     20.065716    32.019660
 256      8.578422     20.065717    32.019709

Table: Trapezoidal rule approximation of (6.25)

Note that as $b$ increases, the trapezoidal rule converges more slowly. This is due to the integrand $f(\theta)$ changing more rapidly as $b$ increases. For large $b$, $f(\theta)$ changes very rapidly in the vicinity of $\theta = \frac{\pi}{2}$; and this causes the trapezoidal rule to be less accurate than when $b$ is smaller, near 1. To obtain a certain accuracy in the perimeter $P(b)$, we must increase $n$ as $b$ increases.

> 5. Numerical Integration > 5.2.4 Periodic Integrands

[Figure: The graph of the perimeter function $z = P(b)$ for the ellipse]

The graph of $P(b)$ reveals that $P(b) \approx 4b$ for large $b$. Returning to (6.25), we have for large $b$:
$$P(b) \approx \frac{2}{b}\int_0^\pi \left(b^4\cos^2\theta\right)^{1/2} d\theta = 2b\int_0^\pi |\cos\theta|\,d\theta = 4b$$
We need to estimate the error in the above approximation to know when we can use it to replace $P(b)$; but it provides a way to avoid the integration of (6.25) for the most badly behaved cases.

> 5. Numerical Integration > Review and more

Review:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \underbrace{\int_{x_j}^{x_{j+1}} p_n(x)\,dx}_{I_j}$$
where $p_n(x)$ interpolates at the points $x_j^{(0)}, x_j^{(1)}, \dots, x_j^{(n)}$ on $[x_j, x_{j+1}]$.

Local error:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = \int_{x_j}^{x_{j+1}} \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\psi(x)\,dx \quad \text{(the integrand is the error in interpolation)}$$
where $\psi(x) = (x - x_j^{(0)})(x - x_j^{(1)})\cdots(x - x_j^{(n)})$.

> 5. Numerical Integration > Review and more

[Figure: $\psi(x)$ on $[x_j, x_{j+1}]$]

Conclusion:
1. exact on $\mathcal{P}_n$;
2. local error $\le C\,\max|f^{(n+1)}(x)|\,h^{n+2}$;
3. global error $\le C\,\max|f^{(n+1)}(x)|\,h^{n+1}(b-a)$.

> 5. Numerical Integration > Review and more

Observation: If $\xi$ is a point in $(x_j, x_{j+1})$, then (if $g$ is continuous)
$$g(\xi) = g(x_{j+1/2}) + \underbrace{(\xi - x_{j+1/2})}_{\le\,h}\,g'(\eta),$$
i.e., $g(\xi) = g(x_{j+1/2}) + O(h)$.

> 5. Numerical Integration > Review and more

Local error:
$$\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} \underbrace{f^{(n+1)}(\xi)}_{=\,f^{(n+1)}(x_{j+1/2}) + O(h)}\,\psi(x)\,dx$$
$$= \underbrace{\frac{1}{(n+1)!}\,f^{(n+1)}(x_{j+1/2}) \int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{Dominant Term, } O(h^{n+2})} + \underbrace{\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} f^{(n+2)}(\eta(x))\,(\xi - x_{j+1/2})\,\psi(x)\,dx}_{\text{Higher Order Terms, } O(h^{n+3}):\ \le\,\frac{C}{(n+1)!}\max|f^{(n+2)}|\,h^{n+3}}$$

> 5. Numerical Integration > Review and more

The dominant term: case $N = 1$, Trapezoidal Rule:
$$\psi(x) = (x - x_j)(x - x_{j+1})$$
[Figure: $\psi(x)$ on $[x_j, x_{j+1}]$]

> 5. Numerical Integration > Review and more

The dominant term: case $N = 2$, Simpson's Rule:
$$\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1}), \qquad \int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0$$
[Figure: $\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1})$]

> 5. Numerical Integration > Review and more

The dominant term: case $N = 3$, Simpson's 3/8 Rule: local error $= O(h^5)$
$$\psi(x) = (x - x_j)(x - x_{j+1/3})(x - x_{j+2/3})(x - x_{j+1})$$
[Figure: $\psi(x)$ for the four equally spaced nodes]

> 5. Numerical Integration > Review and more

The dominant term: case $N = 4$:
$$\int \psi(x)\,dx = 0 \implies \text{local error} = O(h^7)$$
[Figure: $\psi(x)$ on $[x_j, x_{j+1}]$]

> 5. Numerical Integration > Review and more

Simpson's Rule is exact on $\mathcal{P}_2$ (and on $\mathcal{P}_3$ actually). With $x_{j+1/2} = (x_j + x_{j+1})/2$, seek:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1/2} f(x_{j+1/2}) + w_{j+1} f(x_{j+1})$$

> 5. Numerical Integration > Review and more

Exact on $1$, $x$, $x^2$:
$$1: \quad \int_{x_j}^{x_{j+1}} 1\,dx = x_{j+1} - x_j = w_j\cdot 1 + w_{j+1/2}\cdot 1 + w_{j+1}\cdot 1,$$
$$x: \quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2 - x_j^2}{2} = w_j x_j + w_{j+1/2}\,x_{j+1/2} + w_{j+1} x_{j+1},$$
$$x^2: \quad \int_{x_j}^{x_{j+1}} x^2\,dx = \frac{x_{j+1}^3 - x_j^3}{3} = w_j x_j^2 + w_{j+1/2}\,x_{j+1/2}^2 + w_{j+1} x_{j+1}^2.$$
A $3 \times 3$ linear system with solution:
$$w_j = \frac{1}{6}(x_{j+1} - x_j); \qquad w_{j+1/2} = \frac{4}{6}(x_{j+1} - x_j); \qquad w_{j+1} = \frac{1}{6}(x_{j+1} - x_j).$$

> 5. Numerical Integration > Review and more

Theorem. Let $h = \max(x_{j+1} - x_j)$ and $I(f) = \int_a^b f(x)\,dx$. For the Simpson's rule approximation:
$$|I(f) - S(f)| \le \frac{b-a}{2880}\,h^4 \max_{a \le x \le b} |f^{(4)}(x)|$$
Trapezoid rule versus Simpson's rule:
$$\frac{\text{Cost in TR}}{\text{Cost in SR}} = \frac{2 \text{ function evaluations/interval} \times \text{no. intervals}}{3 \text{ function evaluations/interval} \times \text{no. intervals}} = \frac{2}{3} \quad \text{(reducible to } \tfrac{1}{2} \text{ if storing the previous values)};$$
$$\frac{\text{Accuracy in TR}}{\text{Accuracy in SR}} = \frac{h^2\,\frac{b-a}{12}}{h^4\,\frac{b-a}{2880}} = \frac{240}{h^2}.$$
E.g., for $h = \frac{1}{100}$: $\frac{240}{h^2} = 2.4 \times 10^6$, i.e., SR is more accurate than TR by a factor of $2.4 \times 10^6$.

> 5. Numerical Integration > Review and more

What if there is round-off error? Suppose we use the method with
$$f(x_j)_{\text{computed}} = f(x_j)_{\text{true}} \pm \varepsilon_j, \qquad \varepsilon_j = O(\text{machine precision}) = \varepsilon$$
$$\int_a^b f(x)\,dx \approx \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1})_{\text{computed}} + f(x_j)_{\text{computed}}}{2} = \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) \pm \varepsilon_{j+1} + f(x_j) \pm \varepsilon_j}{2}$$
$$= \underbrace{\sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) + f(x_j)}{2}}_{\text{value in exact arithmetic}} + \underbrace{\sum_j (x_{j+1} - x_j)\,\frac{\pm\varepsilon_{j+1} \pm \varepsilon_j}{2}}_{\text{contribution of round-off error, } \le\,\varepsilon(b-a)}$$

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

The numerical methods studied in the first sections were based on integrating linear (trapezoidal rule) and quadratic (Simpson's rule) interpolating polynomials, and the resulting formulae were applied on subdivisions of ever smaller subintervals. We consider now a numerical method based on exact integration of polynomials of increasing degree; no subdivision of the integration interval $[a, b]$ is used. Recall Section 4.4 of Chapter 4 on approximation of functions.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Let $f(x) \in C[a, b]$. Then $\rho_n(f)$ denotes the smallest error bound that can be attained in approximating $f(x)$ with a polynomial $p(x)$ of degree $\le n$ on the given interval $a \le x \le b$. The polynomial $m_n(x)$ that yields this approximation is called the minimax approximation of degree $n$ for $f(x)$, and $\rho_n(f)$ is called the minimax error:
$$\max_{a \le x \le b} |f(x) - m_n(x)| = \rho_n(f) \qquad (6.27)$$

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Let $f(x) = e^{-x^2}$ for $x \in [0, 1]$.

  n    rho_n(f)       n    rho_n(f)
  1    5.30E-2        6    7.82E-6
  2    1.79E-2        7    4.62E-7
  3    6.63E-4        8    9.64E-8
  4    4.63E-4        9    8.05E-9
  5    1.62E-5       10    9.16E-10

Table: Minimax errors for $e^{-x^2}$, $0 \le x \le 1$

The minimax errors $\rho_n(f)$ converge to zero rapidly, although not at a uniform rate.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration If we have a numerical integration formula to integrate low- to moderate-degree polynomials exactly, then the hope is that the same formula will integrate other functions f(x) almost exactly, if f(x) is well approximable by such polynomials.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

To illustrate the derivation of such integration formulae, we restrict ourselves to the integral
$$I(f) = \int_{-1}^{1} f(x)\,dx.$$
The integration formula is to have the general form (the Gaussian numerical integration method)
$$I_n(f) = \sum_{j=1}^{n} w_j f(x_j) \qquad (6.28)$$
and we require that the nodes $\{x_1, \dots, x_n\}$ and weights $\{w_1, \dots, w_n\}$ be so chosen that $I_n(f) = I(f)$ for all polynomials $f(x)$ of as large a degree as possible.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case $n = 1$. The integration formula has the form
$$\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1) \qquad (6.29)$$
Using $f(x) \equiv 1$ and forcing equality in (6.29): $2 = w_1$. Using $f(x) = x$: $0 = w_1 x_1$, which implies $x_1 = 0$. Hence (6.29) becomes
$$\int_{-1}^{1} f(x)\,dx \approx 2 f(0) \equiv I_1(f) \qquad (6.30)$$
This is the midpoint formula, and it is exact for all linear polynomials. To see that (6.30) is not exact for quadratics, let $f(x) = x^2$. Then the error in (6.30) is
$$\int_{-1}^{1} x^2\,dx - 2\cdot(0)^2 = \frac{2}{3} \neq 0,$$
hence (6.30) has degree of precision 1.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case $n = 2$. The integration formula is
$$\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1) + w_2 f(x_2) \qquad (6.31)$$
and it has four unspecified quantities: $x_1$, $x_2$, $w_1$, $w_2$. To determine these, we require (6.31) to be exact for the four monomials
$$f(x) = 1,\ x,\ x^2,\ x^3,$$
obtaining 4 equations:
$$2 = w_1 + w_2$$
$$0 = w_1 x_1 + w_2 x_2$$
$$\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2$$
$$0 = w_1 x_1^3 + w_2 x_2^3$$

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case $n = 2$ (continued). This is a nonlinear system with a solution
$$w_1 = w_2 = 1, \qquad x_1 = -\frac{\sqrt{3}}{3}, \qquad x_2 = \frac{\sqrt{3}}{3} \qquad (6.32)$$
and another one based on reversing the signs of $x_1$ and $x_2$. This yields the integration formula
$$\int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \equiv I_2(f) \qquad (6.33)$$
which has degree of precision 3 (exact on all polynomials of degree $\le 3$ and not exact for $f(x) = x^4$).

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Approximate
$$I = \int_{-1}^{1} e^x\,dx = e - e^{-1} = 2.3504024.$$
Using
$$\int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \equiv I_2(f)$$
we get
$$I_2 = e^{-\sqrt{3}/3} + e^{\sqrt{3}/3} = 2.3426961, \qquad I - I_2 = 0.0077063.$$
The error is quite small, considering we are using only 2 node points.
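A two-line numerical check of (6.33) for this example (plain Python, our own illustration):

```python
import math

q = math.sqrt(3.0) / 3.0
I2 = math.exp(-q) + math.exp(q)              # w_1 = w_2 = 1 in (6.33)
print(I2, (math.e - math.exp(-1.0)) - I2)    # 2.3426961..., error 0.0077063...
```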

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case $n > 2$. We seek the formula (6.28), $I_n(f) = \sum_{j=1}^{n} w_j f(x_j)$, which has $2n$ unspecified parameters $x_1, \dots, x_n, w_1, \dots, w_n$, by forcing the integration formula to be exact for the $2n$ monomials
$$f(x) = 1,\ x,\ x^2,\ \dots,\ x^{2n-1}.$$
In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n - 1$. This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
$$2 = w_1 + w_2 + \dots + w_n$$
$$0 = w_1 x_1 + w_2 x_2 + \dots + w_n x_n$$
$$\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2 + \dots + w_n x_n^2$$
$$\vdots$$
$$\frac{2}{2n-1} = w_1 x_1^{2n-2} + w_2 x_2^{2n-2} + \dots + w_n x_n^{2n-2}$$
$$0 = w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \dots + w_n x_n^{2n-1} \qquad (6.34)$$
The resulting formula $I_n(f)$ has degree of precision $2n - 1$.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Solving this system is a formidable problem. The nodes $\{x_i\}$ and weights $\{w_i\}$ have been calculated and collected in tables for the most commonly used values of $n$.

  n    x_i                  w_i
  2    ±0.5773502692        1.0
  3    ±0.7745966692        0.5555555556
        0.0                 0.8888888889
  4    ±0.8611363116        0.3478548451
       ±0.3399810436        0.6521451549
  5    ±0.9061798459        0.2369268851
       ±0.5384693101        0.4786286705
        0.0                 0.5688888889
  6    ±0.9324695142        0.1713244924
       ±0.6612093865        0.3607615730
       ±0.2386191861        0.4679139346
  7    ±0.9491079123        0.1294849662
       ±0.7415311856        0.2797053915
       ±0.4058451514        0.3818300505
        0.0                 0.4179591837
  8    ±0.9602898565        0.1012285363
       ±0.7966664774        0.2223810345
       ±0.5255324099        0.3137066459
       ±0.1834346425        0.3626837834

Table: Nodes and weights for Gaussian quadrature formulae

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

There is also another approach to the development of the numerical integration formula (6.28), using the theory of orthogonal polynomials. From that theory, it can be shown that the nodes $\{x_1, \dots, x_n\}$ are the zeros of the Legendre polynomial of degree $n$ on the interval $[-1, 1]$. Recall that these polynomials were introduced in Section 4.7. For example,
$$P_2(x) = \frac{1}{2}(3x^2 - 1)$$
and its roots are the nodes given in (6.32):
$$x_1 = -\frac{\sqrt{3}}{3}, \qquad x_2 = \frac{\sqrt{3}}{3}.$$
Since the Legendre polynomials are well known, the nodes $\{x_j\}$ can be found without any recourse to the nonlinear system (6.34).

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

The sequence of formulae (6.28) is called the Gaussian numerical integration method. From its definition, $I_n(f)$ uses $n$ nodes, and it is exact for all polynomials of degree $\le 2n - 1$. $I_n(f)$ is limited to $\int_{-1}^{1} f(x)\,dx$, an integral over $[-1, 1]$. But this limitation is easily removed. Given an integral
$$I(f) = \int_a^b f(x)\,dx \qquad (6.35)$$
introduce the linear change of variable
$$x = \frac{b + a + t(b - a)}{2}, \qquad -1 \le t \le 1 \qquad (6.36)$$
transforming the integral to
$$I(f) = \frac{b-a}{2} \int_{-1}^{1} \tilde{f}(t)\,dt \qquad (6.37)$$
with
$$\tilde{f}(t) = f\!\left(\frac{b + a + t(b - a)}{2}\right).$$
Now apply $\tilde{I}_n$ to this new integral.
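A sketch of an $n$-point Gauss rule on a general $[a, b]$ using the transformation (6.36)-(6.37). It leans on NumPy's tabulated Gauss-Legendre nodes and weights (`numpy.polynomial.legendre.leggauss`) rather than solving (6.34):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature mapped from [-1, 1] to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * ((b + a) + t * (b - a))           # change of variable (6.36)
    return 0.5 * (b - a) * np.dot(w, f(x))

# I(2) from the examples below: arctan(4) = 1.3258176...
print(gauss_legendre(lambda x: 1.0 / (1.0 + x**2), 0.0, 4.0, 7))
```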

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Apply Gaussian numerical integration to the three integrals
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx, \qquad I^{(2)} = \int_0^4 \frac{dx}{1+x^2}, \qquad I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)},$$
which were used as examples for the trapezoidal and Simpson's rules. All are reformulated as integrals over $[-1, 1]$. The error results are:

  n    Error in I(1)   Error in I(2)   Error in I(3)
  2     2.29E-4        -2.33E-2         8.23E-1
  3     9.55E-6        -3.49E-2        -4.30E-1
  4    -3.35E-7         1.90E-3         1.77E-1
  5     6.05E-9         1.70E-3        -8.12E-2
  6    -7.77E-11        2.74E-4         3.55E-2
  7     7.89E-13       -6.45E-5        -1.58E-2
 10        *            1.27E-6         1.37E-3
 15        *            7.40E-10       -2.33E-5
 20        *               *            3.96E-7

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

If these results are compared to those of the trapezoidal and Simpson's rules, then Gaussian integration of $I^{(1)}$ and $I^{(2)}$ is much more efficient than those rules. But the integration of the periodic integrand $I^{(3)}$ is not as efficient as with the trapezoidal rule. These results are also true for most other integrals. Except for periodic integrands, Gaussian numerical integration is usually much more accurate than the trapezoidal and Simpson rules. This is even true for many integrals in which the integrand does not have a continuous derivative.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Use Gaussian integration on
$$I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.$$
The results are:

  n    I - I_n     Ratio
  2   -7.22E-3
  4   -1.16E-3     6.2
  8   -1.69E-4     6.9
 16   -2.30E-5     7.4
 32   -3.00E-6     7.6
 64   -3.84E-7     7.8

where $n$ is the number of node points. The Ratio column is defined as
$$\frac{I - I_{n/2}}{I - I_n}$$
and it shows that the error behaves like
$$I - I_n \approx \frac{c}{n^3} \qquad (6.38)$$
for some $c$. The error using Simpson's rule has an empirical rate of convergence proportional to only $\frac{1}{n^{1.5}}$, a much slower rate than (6.38).

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

A result that relates the minimax error to the Gaussian numerical integration error:

Theorem. Let $f \in C[a, b]$, $n \ge 1$. Then, if we apply Gaussian numerical integration to $I = \int_a^b f(x)\,dx$, the error in $I_n$ satisfies
$$|I(f) - I_n(f)| \le 2(b-a)\,\rho_{2n-1}(f) \qquad (6.39)$$
where $\rho_{2n-1}(f)$ is the minimax error of degree $2n - 1$ for $f(x)$ on $[a, b]$.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Using the table of minimax errors

  n    rho_n(f)       n    rho_n(f)
  1    5.30E-2        6    7.82E-6
  2    1.79E-2        7    4.62E-7
  3    6.63E-4        8    9.64E-8
  4    4.63E-4        9    8.05E-9
  5    1.62E-5       10    9.16E-10

apply (6.39) to
$$I = \int_0^1 e^{-x^2}\,dx.$$
For $n = 3$, the above bound implies
$$|I - I_3| \le 2\rho_5(e^{-x^2}) = 3.24 \times 10^{-5}.$$
The actual error is $9.95 \times 10^{-6}$.

> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Gaussian numerical integration is not as simple to use as the trapezoidal and Simpson rules, partly because the Gaussian nodes and weights do not have simple formulae, and also because the error is harder to predict. Nonetheless, the increase in the speed of convergence is so rapid and dramatic in most instances that the method should always be considered seriously when one is doing many integrations. Estimating the error is quite difficult, and most people satisfy themselves by looking at two or more successive values. If $n$ is doubled, then repeatedly comparing two successive values, $I_n$ and $I_{2n}$, is almost always adequate for estimating the error in $I_n$:
$$I - I_n \approx I_{2n} - I_n$$
This is somewhat inefficient, but the speed of convergence of $I_n$ is so rapid that this will still not diminish its advantage over most other methods.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

A common problem is the evaluation of integrals of the form
$$I(f) = \int_a^b w(x) f(x)\,dx \qquad (6.40)$$
with $f(x)$ a well-behaved function and $w(x)$ a possibly (and often) ill-behaved function. Gaussian quadrature has been generalized to handle such integrals for many functions $w(x)$. Examples include
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx, \qquad \int_0^1 \sqrt{x}\,f(x)\,dx, \qquad \int_0^1 f(x)\ln\!\left(\frac{1}{x}\right)dx.$$
The function $w(x)$ is called a weight function.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

We begin by imitating the development given earlier in this section, and we do so for the special case of
$$I(f) = \int_0^1 \frac{f(x)}{\sqrt{x}}\,dx$$
in which $w(x) = \frac{1}{\sqrt{x}}$. As before, we seek numerical integration formulae of the form
$$I_n(f) = \sum_{j=1}^{n} w_j f(x_j) \qquad (6.41)$$
and we require that the nodes $\{x_1, \dots, x_n\}$ and the weights $\{w_1, \dots, w_n\}$ be so chosen that $I_n(f) = I(f)$ for polynomials $f(x)$ of as large a degree as possible.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

Case $n = 1$. The integration formula has the form
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1)$$
We force equality for $f(x) = 1$ and $f(x) = x$. This leads to the equations
$$w_1 = \int_0^1 \frac{1}{\sqrt{x}}\,dx = 2, \qquad w_1 x_1 = \int_0^1 \frac{x}{\sqrt{x}}\,dx = \frac{2}{3}.$$
Solving for $w_1$ and $x_1$, we obtain the formula
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\!\left(\frac{1}{3}\right) \qquad (6.42)$$
and it has degree of precision 1.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

Case $n = 2$. The integration formula has the form
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) \qquad (6.43)$$
We force equality for $f(x) = 1, x, x^2, x^3$. This leads to the equations
$$w_1 + w_2 = \int_0^1 \frac{dx}{\sqrt{x}} = 2$$
$$w_1 x_1 + w_2 x_2 = \int_0^1 \frac{x}{\sqrt{x}}\,dx = \frac{2}{3}$$
$$w_1 x_1^2 + w_2 x_2^2 = \int_0^1 \frac{x^2}{\sqrt{x}}\,dx = \frac{2}{5}$$
$$w_1 x_1^3 + w_2 x_2^3 = \int_0^1 \frac{x^3}{\sqrt{x}}\,dx = \frac{2}{7}$$
This has the solution
$$x_1 = \frac{3}{7} - \frac{2}{35}\sqrt{30} = 0.115587, \qquad x_2 = \frac{3}{7} + \frac{2}{35}\sqrt{30} = 0.741556$$
$$w_1 = 1 + \frac{1}{18}\sqrt{30} = 1.304290, \qquad w_2 = 1 - \frac{1}{18}\sqrt{30} = 0.695710$$
The resulting formula (6.43) has degree of precision 3.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

Case $n > 2$. We seek the formula (6.41), which has $2n$ unspecified parameters $x_1, \dots, x_n, w_1, \dots, w_n$, by forcing the integration formula to be exact for the $2n$ monomials $f(x) = 1, x, x^2, \dots, x^{2n-1}$. In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n - 1$. This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
$$w_1 + w_2 + \dots + w_n = 2$$
$$w_1 x_1 + w_2 x_2 + \dots + w_n x_n = \frac{2}{3}$$
$$w_1 x_1^2 + w_2 x_2^2 + \dots + w_n x_n^2 = \frac{2}{5}$$
$$\vdots$$
$$w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \dots + w_n x_n^{2n-1} = \frac{2}{4n-1} \qquad (6.44)$$
The resulting formula $I_n(f)$ has degree of precision $2n - 1$. As before, this system is very difficult to solve directly, but there are alternative methods of deriving $\{x_i\}$ and $\{w_i\}$, based on looking at the polynomials that are orthogonal with respect to the weight function $w(x) = \frac{1}{\sqrt{x}}$ on the interval $[0, 1]$.

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

Example. We evaluate
$$I = \int_0^1 \frac{\cos(\pi x)}{\sqrt{x}}\,dx = 0.74796566683146$$
using (6.42) and (6.43):
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\!\left(\frac{1}{3}\right) = I_1 = 1.0$$
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) = I_2 = 0.740519$$
$I_2$ is a reasonable estimate of $I$, with $I - I_2 = 0.00745$.
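The $n = 2$ computation, written out in plain Python with the nodes and weights derived above (our own check of the example):

```python
import math

s = math.sqrt(30.0)
x1, x2 = 3.0/7.0 - 2.0*s/35.0, 3.0/7.0 + 2.0*s/35.0   # nodes for w(x) = 1/sqrt(x)
w1, w2 = 1.0 + s/18.0, 1.0 - s/18.0                   # corresponding weights

g = lambda x: math.cos(math.pi * x)
print(w1 * g(x1) + w2 * g(x2))   # ≈ 0.74052; compare I ≈ 0.74797
```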

> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

A general theory can be developed for the weighted Gaussian quadrature
$$I(f) = \int_a^b w(x) f(x)\,dx \approx \sum_{j=1}^{n} w_j f(x_j) = I_n(f) \qquad (6.45)$$
It requires the following assumptions for the weight function $w(x)$:
1. $w(x) > 0$ for $a < x < b$;
2. for all integers $n \ge 0$, $\int_a^b w(x)\,|x|^n\,dx < \infty$.
These hypotheses are the same as were assumed for the generalized least squares approximation theory following Section 4.7 of Chapter 4. This is not accidental, since both Gaussian quadrature and least squares approximation theory depend on the subject of orthogonal polynomials. The node points $\{x_j\}$ solving the system (6.44) are the zeros of the degree-$n$ orthogonal polynomial on $[0, 1]$ with respect to the weight function $w(x) = \frac{1}{\sqrt{x}}$. For the generalization (6.45), the nodes $\{x_i\}$ are the zeros of the degree-$n$ orthogonal polynomial on $[a, b]$ with respect to the weight function $w(x)$.

> 5. Numerical Integration > Supplement

Gauss's idea: The optimal abscissas of the $\kappa$-point Gaussian quadrature formula are precisely the roots of the orthogonal polynomial for the same interval and weighting function.
$$\int_a^b f(x)\,dx = \underbrace{\sum_j \int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{composite formula}} = \sum_j \frac{x_{j+1} - x_j}{2} \int_{-1}^{1} f\!\left(\frac{x_{j+1} - x_j}{2}\,t + \frac{x_{j+1} + x_j}{2}\right) dt$$
with
$$\int_{-1}^{1} g(t)\,dt \approx \underbrace{\sum_{l=1}^{\kappa} w_l\,g(q_l)}_{\kappa\text{-point Gauss Rule}}$$
for maximum accuracy; $w_1, \dots, w_\kappa$: weights, $q_1, \dots, q_\kappa$: quadrature points on $(-1, 1)$. Exact on polynomials $p(x) \in \mathcal{P}_{2\kappa-1}$, i.e., on $1, t, t^2, \dots, t^{2\kappa-1}$.

> 5. Numerical Integration > Supplement

Example: 3-point Gauss, exact on $\mathcal{P}_5$, i.e., exact on $1, t, t^2, t^3, t^4, t^5$:
$$\int_{-1}^{+1} g(t)\,dt \approx w_1 g(q_1) + w_2 g(q_2) + w_3 g(q_3)$$
$$\int_{-1}^{1} 1\,dt = 2 = w_1\cdot 1 + w_2 + w_3\cdot 1$$
$$\int_{-1}^{1} t\,dt = 0 = w_1 q_1 + w_2 q_2 + w_3 q_3$$
$$\int_{-1}^{1} t^2\,dt = \frac{2}{3} = w_1 q_1^2 + w_2 q_2^2 + w_3 q_3^2$$
$$\int_{-1}^{1} t^3\,dt = 0 = w_1 q_1^3 + w_2 q_2^3 + w_3 q_3^3$$
$$\int_{-1}^{1} t^4\,dt = \frac{2}{5} = w_1 q_1^4 + w_2 q_2^4 + w_3 q_3^4$$
$$\int_{-1}^{1} t^5\,dt = 0 = w_1 q_1^5 + w_2 q_2^5 + w_3 q_3^5$$
Guess: $q_1 = -q_3$, $q_2 = 0$ ($q_1 < q_2 < q_3$), $w_1 = w_3$.

> 5. Numerical Integration > Supplement

Example: 3-point Gauss, exact on $\mathcal{P}_5$ (continued). With this guess:
$$2w_1 + w_2 = 2, \qquad 2w_1 q_1^2 = \frac{2}{3}, \qquad 2w_1 q_1^4 = \frac{2}{5},$$
hence
$$q_1^2 = \frac{3}{5}, \qquad q_1 = -\sqrt{\frac{3}{5}}, \qquad q_3 = \sqrt{\frac{3}{5}}$$
$$w_1 = \frac{5}{9}, \qquad w_3 = \frac{5}{9}, \qquad w_2 = \frac{8}{9}$$
A. H. Stroud and D. Secrest: Gaussian Quadrature Formulas. Englewood Cliffs, NJ: Prentice-Hall, 1966.
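A quick degree-of-precision check of the 3-point rule (our own illustration): it reproduces $\int_{-1}^{1} t^4\,dt = 2/5$ exactly but not $\int_{-1}^{1} t^6\,dt = 2/7$:

```python
import math

q = math.sqrt(3.0 / 5.0)
nodes, weights = [-q, 0.0, q], [5.0/9.0, 8.0/9.0, 5.0/9.0]

for k in (4, 6):
    approx = sum(w * t**k for w, t in zip(weights, nodes))
    print(k, approx, 2.0 / (k + 1))   # exact for k = 4, off for k = 6
```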

> 5. Numerical Integration > Supplement

The idea of Gauss, continued: Gauss-Lobatto.
$$\int_{-1}^{1} g(t)\,dt \approx w_1 g(-1) + w_2 g(q_2) + \dots + w_{k-1} g(q_{k-1}) + w_k g(1)$$
$k$ nodes located as in a $k$-point formula; it is exact on $\mathcal{P}_{2k-3}$. (The order is decreased by 2 compared with the Gauss quadrature formula.)

> 5. Numerical Integration > Supplement

Adaptive Quadrature. Problem: Given $\int_a^b f(x)\,dx$ and a preassigned tolerance $\varepsilon$, compute $I(f) \approx \int_a^b f(x)\,dx$
(a) with assured accuracy,
$$\left| \int_a^b f(x)\,dx - I(f) \right| < \varepsilon,$$
(b) at minimal / near-minimal cost (number of function evaluations).
Strategy: LOCALIZE!

> 5. Numerical Integration > Supplement

Localization Theorem. Let $I(f) = \sum_j I_j(f)$, where $I_j(f) \approx \int_{x_j}^{x_{j+1}} f(x)\,dx$. If
$$\left| \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right| < \frac{\varepsilon}{b-a}\,(x_{j+1} - x_j) \quad (= \text{local tolerance}),$$
then
$$\left| \int_a^b f(x)\,dx - I(f) \right| < \varepsilon \quad (= \text{tolerance}).$$
Proof:
$$\left| \int_a^b f(x)\,dx - I(f) \right| = \left| \sum_j \int_{x_j}^{x_{j+1}} f(x)\,dx - \sum_j I_j(f) \right| = \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right) \right|$$
$$\le \sum_j \left| \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right| \le \sum_j \frac{\varepsilon}{b-a}\,(x_{j+1} - x_j) = \frac{\varepsilon}{b-a} \sum_j (x_{j+1} - x_j) = \frac{\varepsilon}{b-a}\,(b-a) = \varepsilon.$$

> 5. Numerical Integration > Supplement

Need: an estimator for the local error, and a strategy for
- when to cut $h$, to ensure accuracy;
- when to increase $h$, to ensure minimal cost.
One approach: halving and doubling! Recall the trapezoidal rule
$$I_j \approx (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}.$$
A priori estimate:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = -\frac{(x_{j+1} - x_j)^3}{12}\,f''(s_j)$$
for some $s_j$ in $(x_j, x_{j+1})$.

> 5. Numerical Integration > Supplement

Step 1: compute
$$I_j = \frac{f(x_{j+1}) + f(x_j)}{2}\,(x_{j+1} - x_j)$$
Step 2: cut the interval in half and reuse the trapezoidal rule:
$$\tilde{I}_j = \frac{f(x_j) + f(x_{j+1/2})}{2}\,(x_{j+1/2} - x_j) + \frac{f(x_{j+1}) + f(x_{j+1/2})}{2}\,(x_{j+1} - x_{j+1/2})$$
Error estimates:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = -\frac{h_j^3}{12}\,f''(\xi_j) = e_j \quad \text{(1st use of trapezoid rule)}$$
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde{I}_j = -\frac{(h_j/2)^3}{12}\,f''(\eta_1) - \frac{(h_j/2)^3}{12}\,f''(\eta_2) = \frac{1}{4}\left(-\frac{h_j^3}{12}\,f''(\xi_j)\right) + O(h_j^4) = \tilde{e}_j \quad \text{(2nd use of TR)}$$

> 5. Numerical Integration > Supplement

Subtracting, with $e_j = 4\tilde{e}_j + $ Higher Order Terms:
$$\tilde{I}_j - I_j = 3\tilde{e}_j + O(h^4) \implies \tilde{e}_j = \frac{\tilde{I}_j - I_j}{3} + \underbrace{\text{Higher Order Terms}}_{O(h^4)}$$

> 5. Numerical Integration > Supplement

4-point Gauss: exact on $\mathcal{P}_7$; local error $O(h^9)$; global error $O(h^8)$. A priori estimates:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = C\,(x_{j+1} - x_j)^9 f^{(8)}(\xi_j) = C h_j^9 f^{(8)}(\xi_j) = e_j$$
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde{I}_j = 2C\left(\frac{h_j}{2}\right)^9 f^{(8)}(\tilde{\xi}_j) + O(h^{10}) = \frac{C}{2^8}\,h_j^9 f^{(8)}(\tilde{\xi}_j) + O(h^{10}) = \tilde{e}_j$$
$$\tilde{I}_j - I_j = 255\,\tilde{e}_j + O(h^{10}) \implies \tilde{e}_j = \frac{\tilde{I}_j - I_j}{255} + \underbrace{\text{Higher Order Terms}}_{O(h^{10})}$$

> 5. Numerical Integration > Supplement

Algorithm.
Input: $a$, $b$, $f(x)$; upper error tolerance $\varepsilon_{\max}$; initial mesh width $h$.
Initialize: Integral $= 0.0$; $x_L = a$; $\varepsilon_{\min} = \varepsilon_{\max}/2^{k+3}$.
(*) $x_R = x_L + h$. (If $x_R > b$, set $x_R = b$: do the integral one more time and stop.)
Compute on $[x_L, x_R]$: $I$, $\tilde{I}$, and
$$EST = \frac{|\tilde{I} - I|}{2^{k+1} - 1} \quad \text{(if the rule is exact on } \mathcal{P}_k\text{)}$$

> 5. Numerical Integration > Supplement

"Error is just right": if $\varepsilon_{\min}\frac{h}{b-a} < EST < \varepsilon_{\max}\frac{h}{b-a}$:
Integral $\leftarrow$ Integral $+\ \tilde{I}$; $x_L \leftarrow x_R$; go to (*).
"Error is too small": if $EST \le \varepsilon_{\min}\frac{h}{b-a}$:
Integral $\leftarrow$ Integral $+\ \tilde{I}$; $x_L \leftarrow x_R$; $h \leftarrow 2h$; go to (*).
"Error is too big": if $EST \ge \varepsilon_{\max}\frac{h}{b-a}$:
$h \leftarrow h/2.0$; go to (*).
STOP. END.
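A minimal Python sketch of the halving/doubling loop above, specialized to the trapezoidal rule ($k = 1$, so the divisor is $2^{k+1}-1 = 3$, and $\varepsilon_{\min} = \varepsilon_{\max}/2^{k+3} = \varepsilon_{\max}/16$ follows the initialization step). Details such as the panel-width scaling are our reading of the algorithm, not a definitive implementation:

```python
def adaptive_trapezoid(f, a, b, eps_max, h0):
    """Adaptive trapezoidal quadrature with interval halving/doubling."""
    eps_min = eps_max / 16.0
    integral, xL, h = 0.0, a, h0
    while xL < b:
        xR = min(xL + h, b)
        xM = 0.5 * (xL + xR)
        I1 = 0.5 * (xR - xL) * (f(xL) + f(xR))                # one panel
        I2 = 0.25 * (xR - xL) * (f(xL) + 2.0*f(xM) + f(xR))   # two half panels
        est = abs(I2 - I1) / 3.0          # local error estimate (I~ - I)/3
        tol = (xR - xL) / (b - a)
        if est >= eps_max * tol:          # error too big: halve h and retry
            h *= 0.5
            continue
        integral += I2                    # accept the refined value
        xL = xR
        if est <= eps_min * tol:          # error too small: double h
            h *= 2.0
    return integral

print(adaptive_trapezoid(lambda x: x**0.5, 0.0, 1.0, 1e-5, 0.1))   # ≈ 2/3
```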

> 5. Numerical Integration > Supplement

Trapezium rule:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx (x_{j+1} - x_j)\,\frac{f(x_{j+1}) + f(x_j)}{2}$$
$$\int_{x_j}^{x_{j+1}} \big(f(x) - p_1(x)\big)\,dx = \int_{x_j}^{x_{j+1}} \frac{f''(\xi)}{2}\,\underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)}\,dx = \left( \frac{f''(x_{j+1/2})}{2} + O(h) \right) \underbrace{\int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{integrate exactly}}$$

> 5. Numerical Integration > Supplement

The mysteries of $\psi(x)$:
$$\psi(x) = (x - q_1)(x - q_2)\cdots(x - q_7)$$
[Figure: $\psi(x)$ for seven nodes $q_1 < q_2 < \dots < q_7$ in $[x_j, x_{j+1}]$]

> 5. Numerical Integration > Supplement

Error in $(k+1)$-point quadrature. $p_k(x)$ interpolates $f(x)$ at
$$(x_j \le)\ q_1 < q_2 < \dots < q_{k+1}\ (\le x_{j+1})$$
$$\implies f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,\psi(x)$$
$$\underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{true}} - \underbrace{\int_{x_j}^{x_{j+1}} p_k(x)\,dx}_{\text{approx}} = \int_{x_j}^{x_{j+1}} \frac{\psi(x)}{(k+1)!}\,f^{(k+1)}(\xi)\,dx$$

> 5. Numerical Integration > Supplement

1. A simple error bound. Ignoring the oscillation of $\psi(x)$:
$$|\text{error}| \le \frac{\max|f^{(k+1)}|}{(k+1)!} \underbrace{\int_{x_j}^{x_{j+1}} |\psi(x)|\,dx}_{=\,\int |x - q_1|\cdots|x - q_{k+1}|\,dx\ \le\ h^{k+1}\int_{x_j}^{x_{j+1}} dx} \le \frac{\max|f^{(k+1)}|\,(x_{j+1} - x_j)^{k+2}}{(k+1)!}$$
[Figure: $\psi(x)$ and $|\psi(x)|$ on $[x_j, x_{j+1}]$]

> 5. Numerical Integration > Supplement

2. Analysis without cancellation.
Lemma. Let $\xi, x \in (x_j, x_{j+1})$. Then
$$f^{(k+1)}(\xi) = f^{(k+1)}(x) + (\xi - x)\,f^{(k+2)}(\eta) \quad \text{(MVT)}$$
for some $\eta$ between $\xi$ and $x$, and $|\xi - x| \le x_{j+1} - x_j \le h$.

> 5. Numerical Integration > Supplement

2. Analysis without cancellation (continued).
$$\text{error} = \text{true} - \text{approx} = \frac{1}{(k+1)!} \int_{x_j}^{x_{j+1}} \psi(x)\,\Big[ \underbrace{f^{(k+1)}(x)}_{\text{fixed}} + \underbrace{(\xi - x)\,f^{(k+2)}(\eta)}_{O(h)} \Big]\,dx$$
$$= \frac{f^{(k+1)}(x)}{(k+1)!} \underbrace{\int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{=\,0\ \text{if}\ \int_{x_j}^{x_{j+1}} \psi(x)dx\,=\,0} + \frac{1}{(k+1)!} \int_{x_j}^{x_{j+1}} f^{(k+2)}(\eta)\,\underbrace{(\xi - x)}_{\le\,h}\,\underbrace{\psi(x)}_{\le\,h^{k+1}}\,dx \le \frac{\max|f^{(k+2)}|}{(k+1)!}\,h^{k+3}$$
This is the error for Simpson's rule, i.e., cancellation.

> 5. Numerical Integration > Supplement

Lemma. $\psi(x)$ interpolates zero at $k+1$ points ($\deg\psi(x) = k+1$). The general result: if $p(q_l) = 0$, $l = 1, \dots, k+1$, and $p \in \mathcal{P}_{k+1}$, then
$$p(x) = \text{Constant}\cdot\psi(x).$$

> 5. Numerical Integration > Supplement

Questions:
1) How to pick the points $q_1, \dots, q_{k+1}$ so that
$$\int_{-1}^{1} g(x)\,dx \approx w_1 g(q_1) + \dots + w_{k+1} g(q_{k+1}) \qquad (6.46)$$
integrates $\mathcal{P}_{k+m}$ exactly?
2) What does this imply about the error?
Remark. For $m = 1$: pick $q_1, \dots, q_{k+1}$ so that $\int_{-1}^{1} \psi(x)\,dx = 0$, and then the error converges as $O(h^{k+3})$.

> 5. Numerical Integration > Supplement

Step 1. Let $r_1$ be some fixed point on $[-1, 1]$: $-1 < q_1 < q_2 < \dots < r_1 < \dots < q_k < q_{k+1}$, and set
$$p_{k+1}(x) = p_k(x) + \psi(x)\,\frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \qquad (6.47)$$
where $p_k$ interpolates $g(x)$ at $q_1, \dots, q_{k+1}$.
Claim: $p_{k+1}$ interpolates $g(x)$ at the $k+2$ points $q_1, \dots, q_{k+1}, r_1$.
Suppose now that (6.46) is exact on $\mathcal{P}_{k+1}$; then from (6.47),
$$\underbrace{\int_{-1}^{1} g(x)\,dx - \int_{-1}^{1} p_{k+1}(x)\,dx}_{\text{error in }(k+2)\text{-point quadrature rule, } E_{k+2}} = \underbrace{\int_{-1}^{1} g(x)\,dx - \int_{-1}^{1} p_k(x)\,dx}_{\text{error in }(k+1)\text{-point quadrature rule, } E_{k+1}} - \frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \int_{-1}^{1} \psi(x)\,dx$$

> 5. Numerical Integration > Supplement

Step 1, Conclusion 1. So
$$E_{k+2} = E_{k+1} - \frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \int_{-1}^{1} \psi(x)\,dx.$$
If $\int_{-1}^{1} \psi(x)\,dx = 0$, then the error in the $(k+1)$-point rule is exactly the same as if we had used $k+2$ points.

> 5. Numerical Integration > Supplement

Step 2. Let $r_1$, $r_2$ be fixed points in $[-1, 1]$, and interpolate at the $k+3$ points $q_1, \dots, q_{k+1}, r_1, r_2$:
$$p_{k+2}(x) = p_k(x) + \psi(x)(x - r_1)\,\frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)} + \psi(x)(x - r_2)\,\frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)} \qquad (6.48)$$
Consider the error in a rule with $k + 1 + 2$ points:
$$\text{error in }(k+3)\text{-point rule} = \int_{-1}^{1} g(x)\,dx - \int_{-1}^{1} p_{k+2}(x)\,dx$$
$$= \int_{-1}^{1} g(x)\,dx - \int_{-1}^{1} p_k(x)\,dx - \frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)} \int_{-1}^{1} \psi(x)(x - r_1)\,dx - \frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)} \int_{-1}^{1} \psi(x)(x - r_2)\,dx$$

> 5. Numerical Integration > Supplement

Step 2, Conclusion 2. So
$$E_{k+3} = E_{k+1} + \text{Const}\int_{-1}^{1} \psi(x)(x - r_1)\,dx + \text{Const}\int_{-1}^{1} \psi(x)(x - r_2)\,dx.$$
If $\int_{-1}^{1} \psi(x)\,dx = 0$ and $\int_{-1}^{1} x\,\psi(x)\,dx = 0$, then the error in the $(k+1)$-point rule is the same as in a $(k+3)$-point rule.

> 5. Numerical Integration > Supplement

Continuing in this way,
$$E_{k+1+m} = E_{k+1} + C_0 \int_{-1}^{1} \psi(x)\,dx + C_1 \int_{-1}^{1} \psi(x)\,x\,dx + \dots \qquad (6.49)$$
$$+\ C_{m-1} \int_{-1}^{1} \psi(x)\,x^{m-1}\,dx \qquad (6.50)$$
Conclusion 3. If $\int_{-1}^{1} \psi(x)\,x^j\,dx = 0$ for $j = 0, \dots, m-1$, then the error is as good as using $m$ extra points.

> 5. Numerical Integration > Supplement

Overview: Interpolating Quadrature. Interpolate $f(x)$ at $q_0, q_1, q_2, \dots, q_k$ by $p_k(x)$:
$$f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,(x - q_0)(x - q_1)\cdots(x - q_k)$$
Gauss rules:
$$\int_{-1}^{1} f(x)\,dx \approx \int_{-1}^{1} p_k(x)\,dx \implies \text{error} = \frac{1}{(k+1)!} \int_{-1}^{1} f^{(k+1)}(\xi)\,\psi(x)\,dx$$
Pick the $q_l$ to maximize exactness. What is the accuracy? What are the $q_l$'s?

> 5. Numerical Integration > Supplement

Overview (continued). Interpolating at the $k + 1 + m$ points $q_0, \dots, q_k, r_1, \dots, r_m$ gives error $E_{k+m}$; interpolating at the $k+1$ points $q_0, \dots, q_k$ gives error $E_k$, with
$$E_{k+m} = E_k + c_0 \int_{-1}^{1} \psi(x)\cdot 1\,dx + c_1 \int_{-1}^{1} \psi(x)\,x\,dx + \dots + c_{m-1} \int_{-1}^{1} \psi(x)\,x^{m-1}\,dx$$
Definition. $p(x)$ is the $(\mu+1)$-st orthogonal polynomial on $[-1, 1]$ (weight $w(x) \equiv 1$) if $p(x) \in \mathcal{P}_{\mu+1}$ and
$$\int_{-1}^{1} p(x)\,x^l\,dx = 0, \quad l = 0, \dots, \mu, \qquad \text{i.e.,} \quad \int_{-1}^{1} p(x)\,q(x)\,dx = 0 \quad \forall q \in \mathcal{P}_\mu.$$

> 5. Numerical Integration > Supplement

Overview (continued). Pick $q_0, q_1, \dots, q_k$ so that
$$\int_{-1}^{1} \psi(x)\cdot 1\,dx = 0, \qquad \int_{-1}^{1} \psi(x)\,x\,dx = 0, \qquad \dots, \qquad \int_{-1}^{1} \psi(x)\,x^{m-1}\,dx = 0,$$
i.e.,
$$\int_{-1}^{1} \underbrace{\psi(x)}_{\deg\,k+1}\,\underbrace{q(x)}_{\deg\,\le\,m-1}\,dx = 0 \quad \forall q \in \mathcal{P}_{m-1}.$$
So, maximum accuracy is obtained if $\psi(x)$ is the orthogonal polynomial of degree $k+1$:
$$\int_{-1}^{1} \psi(x)\,q(x)\,dx = 0 \quad \forall q \in \mathcal{P}_k \implies m - 1 = k, \quad m = k + 1.$$
So, the Gauss quadrature points are the roots of the orthogonal polynomial.

> 5. Numerical Integration > Supplement

Overview: Adaptivity. $\tilde{I} = I_1 + I_2$; the trapezium rule's local error is $O(h^3)$:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I = e \approx 4\tilde{e} \qquad (e \approx 8e_1,\ e \approx 8e_2, \text{ so } e \approx 4\tilde{e})$$
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde{I} = \tilde{e} = e_1 + e_2$$
$$\tilde{I} - I = 3\tilde{e}\ (+\text{ Higher Order Terms}) \implies \tilde{e} = \frac{\tilde{I} - I}{3}\ (+\text{ Higher Order Terms})$$

> 5. Numerical Integration > Supplement

Overview: Final Observation.
$$\text{True} - I \approx 4\tilde{e}, \qquad \text{True} - \tilde{I} \approx \tilde{e}$$
So we can solve: 2 equations, 2 unknowns ($\tilde{e}$, True). Solving for True:
$$\text{True} \approx \tilde{I} + \tilde{e} \approx \tilde{I} + \frac{\tilde{I} - I}{3} = \frac{4}{3}\tilde{I} - \frac{1}{3}I\ (+\text{ Higher Order Terms})$$

> 5. Numerical Integration > 5.4 Numerical Differentiation

To numerically calculate the derivative of $f(x)$, begin by recalling the definition of the derivative:
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
This justifies using
$$f'(x) \approx \frac{f(x+h) - f(x)}{h} \equiv D_h f(x) \qquad (6.51)$$
for small values of $h$. $D_h f(x)$ is called a numerical derivative of $f(x)$ with stepsize $h$.

> 5. Numerical Integration > 5.4 Numerical Differentiation

Example. Use $D_h f$ to approximate the derivative of $f(x) = \cos(x)$ at $x = \frac{\pi}{6}$ (true value $f'(\pi/6) = -0.5$).

  h           D_h f        Error      Ratio
  0.1         -0.54243     0.04243
  0.05        -0.52144     0.02144    1.98
  0.025       -0.51077     0.01077    1.99
  0.0125      -0.50540     0.00540    1.99
  0.00625     -0.50270     0.00270    2.00
  0.003125    -0.50135     0.00135    2.00

Looking at the error column, we see the error is nearly proportional to $h$; when $h$ is halved, the error is almost halved.
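The experiment in the table, as a short Python sketch (our own reproduction):

```python
import math

f = lambda x: math.cos(x)
x0, exact = math.pi / 6.0, -0.5   # f'(pi/6) = -sin(pi/6) = -0.5

h = 0.1
for _ in range(6):
    Dh = (f(x0 + h) - f(x0)) / h  # forward difference D_h f of (6.51)
    print(h, Dh, exact - Dh)      # error halves roughly as h halves
    h /= 2.0
```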

> 5. Numerical Integration > 5.4 Numerical Differentiation

To explain the behaviour in this example, Taylor's theorem can be used to find an error formula. Expanding $f(x+h)$ about $x$, we get
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(c)$$
for some $c$ between $x$ and $x+h$. Substituting on the right side of (6.51), we obtain
$$D_h f(x) = \frac{1}{h}\left\{ \left[ f(x) + h f'(x) + \frac{h^2}{2} f''(c) \right] - f(x) \right\} = f'(x) + \frac{h}{2} f''(c)$$
$$f'(x) - D_h f(x) = -\frac{h}{2} f''(c) \qquad (6.52)$$
The error is proportional to $h$, agreeing with the results in the table above.

> 5. Numerical Integration > 5.4 Numerical Differentiation

For that example,
$$f'\!\left(\frac{\pi}{6}\right) - D_h f\!\left(\frac{\pi}{6}\right) = \frac{h}{2}\cos(c) \qquad (6.53)$$
where $c$ is between $\frac{\pi}{6}$ and $\frac{\pi}{6} + h$. Let's check that if $c$ is replaced by $\frac{\pi}{6}$, then the RHS of (6.53) agrees with the error column in the table. As seen in the example, we use the formula (6.51) with a positive stepsize $h > 0$. The formula (6.51) is commonly known as the forward difference formula for the first derivative. We can formally replace $h$ by $-h$ in (6.51) to obtain the formula
$$f'(x) \approx \frac{f(x) - f(x-h)}{h}, \qquad h > 0 \qquad (6.54)$$
This is the backward difference formula for the first derivative. A derivation similar to that leading to (6.52) shows that
$$f'(x) - \frac{f(x) - f(x-h)}{h} = \frac{h}{2} f''(c) \qquad (6.55)$$
for some $c$ between $x$ and $x - h$. Thus, we expect the accuracy of the backward difference formula to be almost the same as that of the forward difference formula.

> 5. Numerical Integration > 5.4.1 Differentiation Using Interpolation

Let $P_n(x)$ denote the degree-$n$ polynomial that interpolates $f(x)$ at the $n+1$ node points $x_0, \dots, x_n$. To calculate $f'(x)$ at some point $x = t$, use
$$f'(t) \approx P_n'(t) \qquad (6.56)$$
Many different formulae can be obtained by
1. varying $n$, and by
2. varying the placement of the nodes $x_0, \dots, x_n$ relative to the point $t$ of interest.