Fundamental Methods of Numerical Extrapolation With Applications


Eric Hung-Lin Liu
ehliu@mit.edu

Keywords: numerical analysis, extrapolation, Richardson, Romberg, numerical differentiation, numerical integration

Abstract

Extrapolation is an incredibly powerful technique for increasing speed and accuracy in various numerical tasks in scientific computing. As we will see, extrapolation can transform even the most mundane of algorithms (such as the Trapezoid Rule) into an extremely fast and accurate algorithm, increasing the rate of convergence by more than one order of magnitude. Within this paper, we will first present some background theory to motivate the derivation of Richardson and Romberg Extrapolation and to provide support for their validity and stability as numerical techniques. Then we will present examples from differentiation and integration to demonstrate the power of extrapolation. We will also give some MATLAB code to show some simple implementation methods, in addition to discussing error bounds and differentiating between true and false convergence.

1 Introduction

In many practical applications, we often find ourselves attempting to compute continuous quantities on discrete machines of limited precision. In several classic applications (e.g. numerical differentiation), this is accomplished by tending the step-size to zero. Unfortunately, as we will see, this creates severe problems with floating point round-off errors. In other situations, such as numerical integration, tending the step-size toward zero does not cause extreme round-off errors, but it can be severely limiting in terms of execution time: decreasing the step-size by a factor of 10 increases the required function evaluations by a factor of 10. Extrapolation, the topic of this paper, attempts to rectify these problems by computing weighted averages of relatively poor estimates to eliminate error modes; specifically, we will be examining various forms of Richardson Extrapolation. Richardson and Richardson-like schemes are but a subset of all extrapolation methods, but they are among the most common. Moreover, they have numerous applications and are not terribly difficult to understand and implement. We will demonstrate that various numerical methods taught in introductory calculus classes are in fact insufficient, but they can all be greatly improved through proper application of extrapolation techniques.

1.1 Background: A Few Words on Series Expansions

We are very often able to associate some infinite sequence {A_n} with a known function A(h). For example, A(h) may represent a first divided differences scheme for differentiation, where h is the step-size. Note that h may be continuous or discrete. The sequence {A_n} is described by A_n = A(h_n), n ∈ ℕ, for a monotonically decreasing sequence {h_n} that converges to 0 in the limit. Thus lim_{h→0⁺} A(h) = A and similarly lim_{n→∞} A_n = A. Computing A(0) is exactly what we want to do: continuing the differentiation example, roughly speaking, A(0) is the point where the divided difference becomes the derivative. Interestingly enough, although we require that A(h) have an asymptotic series expansion, we are only curious about its form. Specifically, we will examine functions with expansions that take this form:

    A(h) = A + Σ_{k=1}^{s} α_k h^{p_k} + O(h^{p_{s+1}})  as h → 0⁺    (1)

where p_k ≠ 0 for all k, p_1 < p_2 < ⋯ < p_{s+1}, and all the α_k are independent of h. The p_k can in fact be imaginary, in which case we compare only the real parts. Clearly, having p_1 > 0 guarantees the existence of lim_{h→0⁺} A(h) = A. In the case where that limit does not exist, A is the antilimit of A(h), and then we know that p_i ≤ 0 for at least i = 1. However, as long as the sequence {p_k} is monotonically increasing such that lim_{k→∞} p_k = +∞, (1) exists and A(h) has the expansion A(h) ≈ A + Σ_{k=1}^{∞} α_k h^{p_k}. We do not need to declare any conditions on the convergence of the infinite series. We do assume that the p_k are known, but we do not need to know the α_k since, as we will see, these constant factors are not significant.

Now suppose that p_1 > 0 (hence it is a true p). Then when h is sufficiently small, A(h) approximates A with error A(h) − A = O(h^{p_1}). It is worth noting that if p_1 is rather large, then A(h) − A ≈ 0 even for values of h that are not very small. However, in most applications p_1 will be small (say p_1 < 4), so it would appear that we must resort to smaller and smaller values of h. But as previously noted, this is almost always prohibitive in terms of round-off errors and/or function evaluations. We should not despair, however, because we are now standing on the doorstep of the fundamental idea behind Richardson Extrapolation. That idea is to use equation (1) to eliminate the h^{p_1} error term, thus obtaining a new approximation A_1(h) with A_1(h) − A = O(h^{p_2}). It is obvious that |A_1(h) − A| < |A(h) − A| for small h, since p_2 > p_1. As we will see later, this is accomplished by taking a weighted average of A(h) and A(h/s).

2 Simple Numerical Differentiation and Integration Schemes

Before continuing with our development of the theory of extrapolation, we will briefly review differentiation by first divided differences and quadrature using the (composite) trapezoid and midpoint rules. These are among the simplest algorithms; they are straightforward enough to be presented in first-year calculus classes. Being all first-order approximations, their error modes are significant, and they have little application in real situations. However, as we will see, applying a simple extrapolation scheme makes these simple methods extremely powerful, conquering difficult problems quickly and achieving higher accuracy in the process.

2.1 Differentiation via First Divided Differences

Suppose we have an arbitrary function f(x) ∈ C¹[x₀ − ε, x₀ + ε] for some ε > 0. Then we apply a simple linear fit to f(x) in the aforementioned region to approximate f′(x). We can begin by assuming the existence of additional derivatives and computing a Taylor Expansion:

    f(x) = f(x₀) + f′(x₀)(x − x₀) + (1/2) f″(x₀)(x − x₀)² + O(h³)

where h = x − x₀ and 0 < h ≤ ε. Rearranging gives

    f′(x₀) = [f(x₀ + h) − f(x₀)]/h − (1/2) f″(x₀) h + O(h²)

and dropping all but the first term yields the first divided difference approximation:

    f′(x₀) ≈ [f(x₀ + h) − f(x₀)]/h    (2)

From the Taylor Expansion, it would seem that our first divided difference approximation has truncation error on the order of O(h). Thus one might imagine that letting h → 0 will drive the error to 0 at a rate linear in h. Unfortunately, due to the limitations of finite precision arithmetic, shrinking h can only decrease the truncation error a certain amount before arithmetic error becomes dominant. Thus selecting a very small h is not a viable solution, which is sadly a misconception of many calculus students.

Let us examine exactly how this arithmetic error arises. Because of the discrete nature of computers, a number a is represented as a(1 + ε₁), where ε₁ is some small relative error. Then for a subtraction:

    fl(a − b) = a(1 + ε₁) − b(1 + ε₂) = (a − b) + (aε₁ − bε₂) = (a − b)(1 + ε₃),  with |(a − b)ε₃| ≤ (|a| + |b|)ε

Thus the floating point representation of the difference numerator, fl[f(x + h) − f(x)], is f(x + h) − f(x) + 2f(x)ε + ⋯, where 2f(x)ε represents the arithmetic error due to subtraction. So the full divided difference satisfies:

    f′(x) = fl[(f(x + h) − f(x))/h] ± 2f(x)ε/h − (1/2) f″(x) h + ⋯    (3)

Again, 2f(x)ε/h is the arithmetic error, while (1/2) f″(x) h is the truncation error from taking only the linear term of the Taylor Expansion. Since one term acts like 1/h and the other like h, there exists an optimum choice of h such that the total error is minimized:

    h_opt ≈ 2 ( f(x) ε / f″(x) )^{1/2}    (4)

implying that h_opt ~ √eps, where eps is machine precision, roughly 10⁻¹⁶ in IEEE double precision. Thus our best error is on the order of (f″(x) f(x) ε)^{1/2} ≈ 10⁻⁸. This is rather dismal, and we certainly can do better. Naturally, it is impossible to reduce the arithmetic error: it is a property of double precision representations. So we look to the truncation error: a prime suspect for extrapolation.
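The trade-off between the truncation and arithmetic terms of equation (3) is easy to observe numerically. The following sketch (in Python rather than the paper's MATLAB; the specific step-sizes are illustrative choices, not from the paper) applies the first divided difference to d(eˣ)/dx at x = 1 for a large, a near-optimal, and a tiny step:

```python
import math

def fwd_diff(f, x, h):
    # First divided difference, equation (2): (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

exact = math.e  # d(e^x)/dx at x = 1 is e

# Truncation error dominates for large h, round-off error for tiny h;
# the total error is smallest near h ~ sqrt(eps) ~ 1e-8.
errors = {h: abs(fwd_diff(math.exp, 1.0, h) - exact)
          for h in (1e-3, 1e-8, 1e-12)}
```

Printing `errors` shows the V-shaped error profile of Figure 1: neither a large nor a tiny h wins, only the balanced middle.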

[Figure 1: Failure of First Divided Differences. Approximation errors from d(eˣ)/dx at x = 1, plotted as log₁₀(error) against log₁₀(dx).]

The extent of this problem can be seen in Fig. 1. It displays the numerical errors resulting from the first divided difference approximation of d(eˣ)/dx at x = 1. The y-axis represents the logarithm of the error, while the x-axis shows the approximate step size. The log-log scale is convenient for visualizing rough order-of-magnitude approximations of numerical errors, with the additional benefit of linearizing functions with polynomial error expressions. This method is commonplace in numerical analysis.

Note that for practical purposes, we should not be applying first divided differences to estimate derivatives. The previous method is O(h). By using centered divided differences, we can obtain O(h²) without any additional work. We initially avoided this because the example of numerical errors is most severe using first differences. The formula for the centered differences approximation is:

    δ₀(h) = [f(x₀ + h) − f(x₀ − h)] / (2h)  for 0 < h ≤ ε    (5)

The derivative approximation then has error:

    δ₀(h) − f′(x₀) = f‴(ξ(h)) h²/3! = O(h²)    (6)

Sidi provides the full expansion, obtained with a Taylor series expansion around x₀:

    δ₀(h) − f′(x₀) = Σ_{k=1}^{s} [f^{(2k+1)}(x₀)/(2k+1)!] h^{2k} + R_s(h),
    where R_s(h) = [f^{(2s+3)}(ξ(h))/(2s+3)!] h^{2s+2} = O(h^{2s+2})    (7)
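The O(h²) behavior claimed in equation (6) can be confirmed by halving h and watching the error shrink by about 2² = 4. A small Python check (the test function eˣ and step values are illustrative assumptions, not from the paper):

```python
import math

def centered_diff(f, x, h):
    # Centered divided difference, equation (5)
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.e
err = lambda h: abs(centered_diff(math.exp, 1.0, h) - exact)

# For an O(h^2) method, halving h should cut the error roughly fourfold.
ratio = err(0.1) / err(0.05)
```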

2.2 Integration via the Trapezoid and Midpoint Rules

First we should note that since the integration schemes presented are used over a range of interpolation points {x_j}, the given formulas correspond to the Composite Trapezoid and Midpoint schemes, although we will drop the "composite" label.

Composite Trapezoid Rule:

    ∫ₐᵇ f(x) dx = h [ (1/2)f(a) + Σ_{j=1}^{n−1} f(x_j) + (1/2)f(b) ] + O(h²)    (8)

for a uniformly chosen set of points {x_j} in the range [a, b] such that x_j = a + jh and h = (b − a)/n, where n is the total number of subintervals. Observe that all the Trapezoid Rule really does is fit a line to the function between each pair of points in the interval. The Trapezoid Rule is a special case (n = 1) of the (n+1)-point closed Newton-Cotes formula. Also, Simpson's Rule is another special case (n = 2) of this formula. The composite rules arise from iterating the Newton-Cotes expressions over a range of points. The following is adapted from Burden and Faires:

Theorem 1. Suppose that Σ_{i=0}^{n} a_i f(x_i) denotes the (n+1)-point closed Newton-Cotes formula with x₀ = a, x_n = b, and h = (b − a)/n. Then there exists ξ ∈ (a, b) such that

    ∫ₐᵇ f(x) dx = Σ_{i=0}^{n} a_i f(x_i) + [h^{n+3} f^{(n+2)}(ξ)/(n + 2)!] ∫₀ⁿ t²(t − 1)⋯(t − n) dt

if n is even and f ∈ C^{n+2}[a, b], and

    ∫ₐᵇ f(x) dx = Σ_{i=0}^{n} a_i f(x_i) + [h^{n+2} f^{(n+1)}(ξ)/(n + 1)!] ∫₀ⁿ t(t − 1)⋯(t − n) dt

if n is odd and f ∈ C^{n+1}[a, b].

Proof. As implied in the introduction, we are assuming the validity of the following expansion:

    ∫_{x₀}^{x_n} f(x) dx ≈ Σ_{i=0}^{n} a_i f(x_i),  where  a_i = ∫_{x₀}^{x_n} L_i(x) dx = ∫_{x₀}^{x_n} Π_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j) dx

with L_i being the i-th term of the Lagrange Interpolating Polynomial. We can then exploit the properties of the Lagrange Interpolating Polynomial along with Taylor Expansions to arrive at the result. The proof is omitted here, but a very detailed explanation can be found in Isaacson and Keller, along with more details about the specific error analysis. An alternative, more difficult derivation that avoids the Lagrange Interpolating Polynomial can be found in Sidi.

There is a parallel (n+1)-point open Newton-Cotes formula that does not evaluate the endpoints a, b, which is useful for avoiding endpoint singularities as in ∫₀¹ (1/x) dx. The Midpoint Rule is a member of this group.
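Both composite rules are easy to state in code. A minimal Python sketch (the integrand eˣ on [0, 1] and the subinterval count are illustrative choices), with the midpoint rule included alongside the trapezoid rule of equation (8):

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoid Rule, equation (8), with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

def midpoint(f, a, b, n):
    # Composite Midpoint Rule: sample each subinterval at its center,
    # never touching the endpoints a and b
    h = (b - a) / n
    return h * sum(f(a + (j + 0.5) * h) for j in range(n))

exact = math.e - 1  # integral of e^x over [0, 1]
t_err = abs(trapezoid(math.exp, 0.0, 1.0, 100) - exact)
m_err = abs(midpoint(math.exp, 0.0, 1.0, 100) - exact)
```

Both errors scale as O(h²), with the midpoint error roughly half the trapezoid error (and of opposite sign), which is why the two curves nearly coincide in Figure 3.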

Theorem 2. Suppose that Σ_{i=0}^{n} a_i f(x_i) denotes the (n+1)-point open Newton-Cotes formula with x₋₁ = a, x_{n+1} = b, and h = (b − a)/(n + 2). Then there exists ξ ∈ (a, b) such that

    ∫ₐᵇ f(x) dx = Σ_{i=0}^{n} a_i f(x_i) + [h^{n+3} f^{(n+2)}(ξ)/(n + 2)!] ∫₋₁^{n+1} t²(t − 1)⋯(t − n) dt

if n is even and f ∈ C^{n+2}[a, b], and

    ∫ₐᵇ f(x) dx = Σ_{i=0}^{n} a_i f(x_i) + [h^{n+2} f^{(n+1)}(ξ)/(n + 1)!] ∫₋₁^{n+1} t(t − 1)⋯(t − n) dt

if n is odd and f ∈ C^{n+1}[a, b].

Proof. The proof is analogous to that of Theorem 1, employing Lagrange Interpolating Polynomials in the same fashion. Again, the full proof can be found in Isaacson and Keller.

For completeness, here is the Composite Midpoint Rule. Again, the error term has been simplified because the exact coefficients are not of interest, as we will soon see. Additionally, notice that as with the Composite Trapezoid Rule, the error term power is one less than the one expressed by the Newton-Cotes formulas. This is simply because the composite rule has summed n copies of the simple Newton-Cotes rules.

Composite Midpoint Rule:

    ∫ₐᵇ f(x) dx = h Σ_{j=0}^{n−1} f(a + (j + 1/2)h) + O(h²)    (9)

Now, having reviewed the basics of numeric integration and differentiation, we will finally introduce the Richardson (differentiation) and Romberg (integration) extrapolation procedures. The reader may find it surprising that only these most basic of numeric algorithms are necessary to proceed, but the reason for this will soon become clear. The two methods are in fact essentially the same, so we will only derive the Romberg algorithm. Then we will present numeric results demonstrating the superiority of extrapolation over the basic algorithms. We will also show that one can arrive at Simpson's rule simply by performing extrapolation on the Trapezoid rule. (Note how we have avoided the most simple schemes of left and right hand sums. The reasoning is left to the reader.)

3 The Richardson and Romberg Extrapolation Technique

3.1 Conceptual and Analytic Approach

The fundamental idea behind these extrapolation schemes is the use of multiple low-accuracy evaluations to eliminate error modes and create a highly accurate result. As mentioned earlier, we are assuming that the working functions have the following property: A(h) − A = O(h^{p₁}), where p₁ is known or obtainable. To motivate our understanding of extrapolation, we will write some arbitrary quadrature rule I as a function of its associated step-size h: I(h). For example, the Trapezoid Rule could be written as:

    I(h) = h [ (1/2)f(a) + Σ_{j=1}^{n−1} f(x_j) + (1/2)f(b) ] + O(h²)

As previously mentioned, suppose the expansion I(h) = I₀ + h^{p₁} k₁ + h^{p₂} k₂ + ⋯ exists, where the {p_i} ⊂ ℝ are the set of error powers, the {k_i} ⊂ ℝ are some known or unknown constants, and I₀ is the exact value. Now consider:

    I(h/s) = I₀ + (h^{p₁}/s^{p₁}) k₁ + (h^{p₂}/s^{p₂}) k₂ + ⋯

To eliminate the h^{p₁} term (because it is the dominant error power), we will form a weighted average of I(h) and I(h/s):

    I*(h) = [s^{p₁} I(h/s) − I(h)] / (s^{p₁} − 1)    (10)

This works because

    s^{p₁} I(h/s) − I(h) = (s^{p₁} − 1) I₀ + 0·h^{p₁} + (s^{p₁−p₂} − 1) h^{p₂} k₂ + ⋯    (11)

so that

    I*(h) = I₀ + [(s^{p₁−p₂} − 1)/(s^{p₁} − 1)] h^{p₂} k₂ + ⋯ = I₀ + O(h^{p₂})    (12)

Equivalently:

    I*(h) = I(h/s) + [I(h/s) − I(h)] / (s^{p₁} − 1)    (13)

And that is the basic idea underlying Romberg Extrapolation. Note that s is the factor by which we decrease the step-size in each new evaluation of I; s = 2 is a very common choice. From (13), we can continue iteratively (or recursively) to eliminate O(h^{p₂}); using that result, we can eliminate O(h^{p₃}), and so on indefinitely, in theory. In reality, we are limited by floating point precision. The following table presents a graphical representation of the extrapolation process. Notice that the left-most column is the only column that involves actual function evaluations. The second column represents estimates of I with the O(h^{p₁}) term eliminated using the algorithm given above; the third column eliminates O(h^{p₂}), and so forth. The actual approximations that should be used for error checking can be found by reading off the entries on the main diagonal of this matrix.

Table 1: Iteration of a Generic Extrapolation Method

    I₀(h)
    I₀(h/s)    I₁(h)
    I₀(h/s²)   I₁(h/s)    I₂(h)
    I₀(h/s³)   I₁(h/s²)   I₂(h/s)   I₃(h)
    ...

Note: more generally, we need not decrease h geometrically by factors of 1/s, but this approach is often easiest to visualize, and it simplifies the algebra in most applications without damaging performance. Common choices for s are small integers (e.g. s = 2), because larger integers may cause the number of function evaluations to grow prohibitively quickly.
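A single application of equation (13) already shows the payoff. In this Python sketch (the integrand eˣ on [0, 1] and the subinterval counts are illustrative assumptions), one extrapolation step on the Trapezoid Rule with s = 2 and p₁ = 2 yields Simpson-level accuracy:

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoid Rule with n subintervals, step h = (b - a)/n
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

# I(h) and I(h/s) with s = 2
I_h = trapezoid(math.exp, 0.0, 1.0, 8)
I_h2 = trapezoid(math.exp, 0.0, 1.0, 16)

# Equation (13) with p1 = 2: the weighted average eliminates the h^2 term
I_star = I_h2 + (I_h2 - I_h) / (2**2 - 1)

exact = math.e - 1
raw_err = abs(I_h2 - exact)   # O(h^2) error of the finer trapezoid estimate
ext_err = abs(I_star - exact)  # O(h^4) error after one extrapolation step
```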

3.2 A Few Words on Implementation

When implementing Richardson or Romberg extrapolation, it is most common to do it iteratively, unless storage is an extreme issue. Although the problem can be solved just as easily using recursion, iteration is generally preferred. In an iterative method, storage of the full table/matrix is not necessary; conveniently, it is only necessary to store the last two rows, although one should keep at least a short history (1 or 2 entries) from the main diagonal for error comparisons. That is to say, when the difference between successive estimates decreases to some given tolerance (the smallest possible tolerance being machine-zero), we should terminate the algorithm because we are done. However, it is possible for the extrapolation method to stall locally and generate two successive entries that are within tolerance even though the result may be relatively far from the desired precision. Thus it is common practice to check errors across every second or third term to avoid this problem.

The following example demonstrates a simple MATLAB implementation that avoids storing the full table. It only checks for termination in successive terms, and it assumes the error powers follow the arithmetic progression 2, 4, 6, ....

    while log10(abs(diff(I))) > log10(eps)  % only checks error of successive terms
        h(k) = (b-a)/2^k;                   % step-size for the next row
        R(2,1) = midpoint(a, b, h(end));    % use the midpoint rule to obtain the first entry
        for j = 2:k                         % perform extrapolation steps
            R(2,j) = R(2,j-1) + (R(2,j-1) - R(1,j-1))/(4^(j-1) - 1);
        end
        I(k) = R(end);                      % store the latest result in an array
        for m = 1:k                         % copy the second row to the first row;
            R(1,m) = R(2,m);                % the next iteration will overwrite the current second row
        end
        k = k + 1;
    end

If the full matrix is desired, the loop would be similar to the following. Here, we allow for arbitrary error powers.

    % xs are the x-locations along the axis where the function is evaluated
    % ys are the corresponding y-values
    % p is the array of error powers
    n = length(xs);
    R = zeros(n);    % create the table
    R(:,1) = ys;
    for j = 1:n-1
        for i = 1:(n-j)
            R(i,j+1) = R(i,j) + (R(i,j) - R(i+1,j))/((xs(i+j)/xs(i))^(p(j)/j) - 1);
        end
    end
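For readers not working in MATLAB, the same two-row scheme can be sketched in Python (this is an illustrative re-implementation built on the trapezoid rule with the standard error powers 2, 4, 6, ..., not code from the paper):

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoid Rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

def romberg(f, a, b, levels):
    """Return the last main-diagonal Romberg estimate of the integral.

    Row k starts from the trapezoid rule with step (b-a)/2^k; column j
    eliminates the h^(2j) error term via equation (13) with s = 2.
    Only the previous row is retained, as in the MATLAB loop above.
    """
    prev = [trapezoid(f, a, b, 1)]
    for k in range(1, levels):
        row = [trapezoid(f, a, b, 2 ** k)]
        for j in range(1, k + 1):
            row.append(row[j - 1] + (row[j - 1] - prev[j - 1]) / (4 ** j - 1))
        prev = row  # the old row is no longer needed
    return prev[-1]

result = romberg(math.exp, 0.0, 1.0, 6)
```

With only six rows (at most 33 function evaluations), the estimate of ∫₀¹ eˣ dx is already accurate to roughly ten digits or more.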

4 Graphical Demonstration of Results

4.1 Differentiation and Integration Revisited

To see the power of extrapolation, we present two illustrative examples. First, we return to d(eˣ)/dx at x = 1. Figure 2 shows the original plot of the first divided difference approximation followed by the un-extrapolated centered difference (red). The two black lines represent one and two extrapolation steps, with the single step being naturally less accurate. Thus we can see that extrapolation increased the rate of convergence here by an incredible amount. Not only that, but it also handed us 14 decimal places of accuracy, compared to the mere 7 achieved by first differences. Even the single step showed great improvement over the non-extrapolated centered difference. But unfortunately, not even extrapolation can eliminate the problems of floating point errors. What it can do, and has done here, is eliminate truncation error due to poor initial approximation methods.

[Figure 2: First and Extrapolated Centered Differences for d(eˣ)/dx at x = 1, plotted as log₁₀(error) against log₁₀(h).]

Figure 3 displays the LHS, Trapezoid, Midpoint, and Simpson Rules in their standard forms (dotted lines). If it is not clear, LHS is the black line marked by asterisks, the midpoint/trapezoid rules virtually lie on top of each other, and the pink line marked by Xs is Simpson's Rule. Once extrapolated (solid lines), Simpson, Midpoint, and Trapezoid are nearly indistinguishable. (Why does this imply that we should never start with Simpson's Rule when extrapolating?) Even the lowly Left Hand Sum is now performing at a decent level. But clearly, the Midpoint or Trapezoid Rule is the best choice here. The deciding factor between them is typically where we want the function evaluations to occur (i.e. Midpoint avoids the endpoints).
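The improvement Figure 2 shows for the centered difference can be reproduced in a few lines of Python (step-size and test function are illustrative choices): one Richardson step with s = 2, p₁ = 2 upgrades the O(h²) centered difference to O(h⁴).

```python
import math

def centered(f, x, h):
    # Centered divided difference, equation (5)
    return (f(x + h) - f(x - h)) / (2 * h)

def centered_extrap(f, x, h):
    # One Richardson step (s = 2, p1 = 2): eliminates the h^2 error term
    return (4 * centered(f, x, h / 2) - centered(f, x, h)) / 3

exact = math.e
plain_err = abs(centered(math.exp, 1.0, 0.1) - exact)
extrap_err = abs(centered_extrap(math.exp, 1.0, 0.1) - exact)
```

Even at the coarse step h = 0.1, the extrapolated value is orders of magnitude closer to the true derivative; iterating the step (as in Table 1) produces the second black line of Figure 2.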

[Figure 3: LHS, Trapezoid, Midpoint, and Simpson Rules, extrapolated and un-extrapolated, plotted as log₁₀(error) against log₁₀(h).]

4.2 Finding Error Powers Graphically

Oftentimes we find ourselves attempting to extrapolate expressions where the error powers have not been analytically pre-derived for our convenience. We have already seen a few examples of this; other situations include most canned methods for numerically solving ODEs. However, not all cases are so nice. For example, it is easy to apply extrapolation to evaluate infinite series or sequences, but the error terms are usually not obvious in those cases. Differentiation and integration are special in that it is relatively easy to analytically show how to form the {p_i}. In other situations, we are left with two options: we can attempt to analytically derive the error terms, or we can use graphical methods. This is another situation where using log-log plots is ideal. We begin with:

    I(h) − I(h/s) = (1 − s^{−p₁}) h^{p₁} k₁ + (1 − s^{−p₂}) h^{p₂} k₂ + ⋯ = (1 − s^{−p₁}) h^{p₁} k₁ + O(h^{p₂})

From here, we would like to determine p₁. We can do this by first assuming that for sufficiently small values of h (in practice, on the order of 1/100 and smaller is sufficient), we can make the approximation O(h^{p₂}) ≈ 0. Now we apply log₁₀ to both sides; note that we will now drop the subscript on p₁ and assume that log means log₁₀. So now we have:

    log(I(h) − I(h/s)) = p log(h) + log(1 − s^{−p}) + log k₁    (14)

At this point, we can Taylor expand log(1 − s^{−p}) = −(1/ln 10)(s^{−p} + (1/2)s^{−2p} + (1/3)s^{−3p} + ⋯), and we have two options. If we believe that the error powers will be integers or simple fractions (e.g. 1/2, 1/3, 1/4 and similarly simple ones), then we can make the additional approximation log(1 − s^{−p}) ≈ log(1) = 0. As we will see, this series of approximations actually does not really harm our ability to determine p in most situations. The alternative is to retain more terms from the expansion of log(1 − s^{−p}) and solve for p using the differences log(I(h) − I(h/s)). Proceeding with the first option, we have:

    log(I(h) − I(h/s)) = p log(h) + C    (15)

Figure 4 begins with the Trapezoid Rule extrapolated once using the known first error term, p₁ = 2. The slope of this (red) line shows the next error power we would need to use in order to continue with the extrapolation process. Not surprisingly, the plot shows that p₂ = 4. Table 2 displays the successive slopes between step-sizes. As predicted, the slope values converge to approximately 4 fairly quickly. In the first few entries, the step-sizes are not yet small enough, and in the last few values, the slope begins to diverge from 4 as floating point errors come into play. But in the center, there is little doubt that the correct choice is p₂ = 4, if you did not believe the plot. The other lines in Figure 4 demonstrate what happens as we continue setting successive error powers. Here it is clear that the progression for the trapezoid rule goes in multiples of 2.

5 Conclusion

Extrapolation allows us to achieve greater precision with fewer function evaluations than un-extrapolated methods. In some cases (as we saw with the simple difference schemes), extrapolation is the only way to improve numeric behavior using the same algorithm (i.e. without switching to centered differences or something better). Using sets of poor estimates with small step-sizes, we can easily and quickly eliminate error modes, reaching results that would be otherwise unobtainable without significantly smaller step-sizes and more function evaluations. Thus we turn relatively poor numeric schemes like the Trapezoid Rule into excellent performers. Of course, the material presented here is certainly not the limit of extrapolation methods.
The Richardson and Romberg techniques are part of a fundamental set of methods based on polynomial approximations. There exist other methods using rational function approximations (e.g. Padé, Bulirsch-Stoer) along with generalizations of the material presented here. For example, Sidi gives a detailed analysis of extrapolating functions whose expansions are more complicated than A(h) = A + Σ_{k=1}^{s} α_k h^{p_k} + O(h^{p_{s+1}}). The next order of complexity can solve problems where we can write A(y) = A + Σ_{k=1}^{s} α_k φ_k(y) + O(φ_{s+1}(y)), where A(y) and the φ_k(y) are assumed known, while again the α_k are not required. Still, the methods presented here are more than sufficient for many applications, including evaluating integrals, derivatives, infinite sums, infinite sequences, function approximations at a point (e.g. evaluating π or γ), and more.

5.1 From Trapezoid to Simpson and Beyond

On a closing note, we will now quickly outline how Romberg extrapolation can be used to easily derive Simpson's rule from the Trapezoid rule. More generally, extrapolating the n = i closed Newton-Cotes formula results in the n = i + 1 version. By now, this should not be all that surprising. We start with s = 2 and p = 2; the choice of p is based on the error term of the closed Newton-Cotes formula. Alternatively, it could have been determined graphically, as shown earlier.

[Figure 4: Using Slope to Find Error Exponents. Trapezoid Rule applied to ∫_{0.25}^{1.25} exp(−x²) dx, plotted as log₁₀ of successive differences against log₁₀(h).]

Table 2: Slope between step-sizes

    4.61680926238264
    4.12392496687081
    4.02980316622319
    4.00738318535940
    4.00184180151224
    4.00045117971785
    4.00034982057854
    3.99697103596297
    4.08671822443851
    4.02553509210714
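The slopes in Table 2 are easy to reproduce. The Python sketch below (an illustrative re-computation; the specific subinterval counts are assumptions) extrapolates the trapezoid rule once on ∫_{0.25}^{1.25} exp(−x²) dx and then estimates the next error power from successive differences, per equation (15); the result should land near p₂ = 4:

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoid Rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

def trap_extrap_once(f, a, b, n):
    # One extrapolation step with s = 2, p1 = 2, equation (13)
    return (4 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3

f = lambda x: math.exp(-x * x)  # the integrand of Figure 4
a, b = 0.25, 1.25

def successive_diff(n):
    # |I(h) - I(h/2)| for the once-extrapolated rule
    return abs(trap_extrap_once(f, a, b, n) - trap_extrap_once(f, a, b, 2 * n))

# Slope of log10|I(h) - I(h/2)| against log10(h) reveals the next error power
slope = math.log10(successive_diff(16) / successive_diff(32)) / math.log10(2)
```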

Applying equation (13) to the Trapezoid Rule with s = 2 and p = 2, where trap(h) uses step h on n subintervals:

    [4 trap(h/2) − trap(h)] / (2² − 1) = Simp(h/2)

    = (h/3)[ f(a) + 2 Σ_{j=1}^{2n−1} f(a + jh/2) + f(b) ] − (h/6)[ f(a) + 2 Σ_{j=1}^{n−1} f(a + jh) + f(b) ]

    = (h/6)[ f(a) + 2 Σ_{j=1}^{n−1} f(a + jh) + 4 Σ_{j=1}^{n} f(a + (2j − 1)h/2) + f(b) ]

Thus Simpson's Rule is:

    ∫ₐᵇ f(x) dx = (h/6)[ f(a) + 2 Σ_{j=1}^{n−1} f(a + jh) + 4 Σ_{j=1}^{n} f(a + (2j − 1)h/2) + f(b) ] + O(h⁴)    (16)

References

[1] Burden, R. and D. Faires, Numerical Analysis, 8th Ed., Thomson, 2005.
[2] Isaacson, E. and H. B. Keller, Analysis of Numerical Methods, John Wiley and Sons, New York, 1966.
[3] Press, W. H., et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, 1992.
[4] Sidi, Avram, Practical Extrapolation Methods, Cambridge University Press, 2003.
[5] Stoer, J. and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980.
[6] Kirpekar, Sujit, Implementation of the Bulirsch Stoer Extrapolation Method, U.C. Berkeley, 2003.