Notes for Numerical Analysis Math 54

by S. Adjerid

Virginia Polytechnic Institute and State University

(A Rough Draft)


Contents

1 Polynomial Interpolation
  1.1 Review
  1.2 Introduction
  1.3 Lagrange Interpolation
  1.4 Interpolation error and convergence
    1.4.1 Interpolation error
    1.4.2 Convergence
  1.5 Interpolation at Chebyshev points
  1.6 Hermite interpolation
    1.6.1 Lagrange form of Hermite interpolation polynomials
    1.6.2 Newton form of Hermite interpolation polynomial
    1.6.3 Hermite interpolation error
  1.7 Spline Interpolation
    1.7.1 Piecewise Lagrange interpolation
    1.7.2 Cubic spline interpolation
    1.7.3 Convergence of cubic splines
    1.7.4 B-splines
  1.8 Interpolation in multiple dimensions
  1.9 Least-squares Approximations


Chapter 1

Polynomial Interpolation

1.1 Review

Mean-value theorem: Let $f \in C[a,b]$ be differentiable on $(a,b)$. Then there exists $c \in (a,b)$ such that
$$f'(c) = \frac{f(b)-f(a)}{b-a}.$$

Weighted mean-value theorem: If $f \in C[a,b]$ and $g(x) > 0$ on $[a,b]$, then there exists $c \in (a,b)$ such that
$$\int_a^b f(x)g(x)\,dx = f(c)\int_a^b g(x)\,dx.$$

Rolle's theorem: If $f \in C[a,b]$ is differentiable on $(a,b)$ and $f(a) = f(b)$, then there exists $c \in (a,b)$ such that $f'(c) = 0$.

Generalized Rolle's theorem: If $f \in C[a,b]$ is $n+1$ times differentiable on $(a,b)$ and admits $n+2$ zeros in $[a,b]$, then there exists $c \in (a,b)$ such that $f^{(n+1)}(c) = 0$.

Intermediate value theorem: If $f \in C[a,b]$ with $f(a) \neq f(b)$, then for each $y$ between $f(a)$ and $f(b)$ there exists $c \in (a,b)$ such that $f(c) = y$.

1.2 Introduction

From the Webster dictionary, the definition of interpolation reads as follows: "Interpolation is the act of introducing something, especially spurious and foreign; the act of calculating values of functions between values already known."

Our goal is to approximate a set of data points or a function by a simpler polynomial function. Given a set of data points $x_i$, $i = 0, 1, \ldots, m$, with $x_i \neq x_j$ if $i \neq j$, we would like to construct a polynomial $p_n(x)$ such that
$$p_n^{(k)}(x_i) = f^{(k)}(x_i), \quad i = 0, 1, \ldots, m, \quad k = 0, 1, \ldots, n_i - 1, \quad \text{with } n = \sum_{i=0}^{m} n_i - 1.$$

Lagrange interpolation: $n_i = 1$, $m \geq 1$, $p_n(x_i) = f(x_i)$, $i = 0, 1, \ldots, n$.

Taylor interpolation: $n_0 = n+1$, $m = 0$, $p_n^{(k)}(x_0) = f^{(k)}(x_0)$, $k = 0, 1, \ldots, n$.

Hermite interpolation: $n_i \geq 1$, $m \geq 1$, $p_n^{(k)}(x_i) = f^{(k)}(x_i)$, $i = 0, 1, \ldots, m$, $k = 0, 1, \ldots, n_i - 1$.

Why interpolation? For instance, interpolation is used to approximate integrals and derivatives,
$$\int_a^b f(x)\,dx \approx \int_a^b p_m(x)\,dx, \qquad f^{(k)}(x) \approx p_m^{(k)}(x),$$
and it plays a major role in approximating differential equations.

Taylor interpolation: We first study Taylor polynomials, defined as
$$p_n(x) = f(x_0) + (x-x_0)f'(x_0) + \frac{(x-x_0)^2}{2}f^{(2)}(x_0) + \cdots + \frac{(x-x_0)^n}{n!}f^{(n)}(x_0).$$

The interpolation error, or remainder, in Taylor expansions is written as
$$f(x) - p_n(x) = \frac{(x-x_0)^{n+1}}{(n+1)!}f^{(n+1)}(\xi).$$

Example: $f(x) = \sin(x)$, $x_0 = 0$:
$$p_{2n+1}(x) = \sum_{k=0}^{n} \frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$
The interpolation error can be written as
$$|\sin(x) - p_{2n+1}(x)| = \frac{|x|^{2n+3}}{(2n+3)!}|\cos(\xi)| < \frac{|x|^{2n+3}}{(2n+3)!}.$$
On the interval $0 < x < 1/2$ the interpolation error is bounded as
$$|\sin(x) - p_{2n+1}(x)| < \frac{1}{2^{2n+3}(2n+3)!}.$$

1.3 Lagrange Interpolation

Lagrange form: Given a set of points $(x_i, f(x_i))$, $i = 0, 1, 2, \ldots, n$, with $x_j \neq x_i$ for $i \neq j$, we define the Lagrange coefficient polynomials $l_i(x)$, $i = 0, 1, \ldots, n$, such that
$$l_i(x_j) = \begin{cases} 1, & \text{if } i = j,\\ 0, & \text{otherwise}, \end{cases}$$
given by
$$l_i(x) = \frac{(x-x_0)(x-x_1)\cdots(x-x_{i-1})(x-x_{i+1})\cdots(x-x_n)}{(x_i-x_0)(x_i-x_1)\cdots(x_i-x_{i-1})(x_i-x_{i+1})\cdots(x_i-x_n)}.$$
The Lagrange form of the interpolation polynomial is
$$p_n(x) = \sum_{i=0}^{n} f(x_i)\,l_i(x).$$
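The construction above can be sketched in a few lines of Python (a small illustrative helper, not part of the notes); it interpolates the data $x = [0,1,2]$, $f = [-2,-1,2]$ and recovers $p_2(x) = x^2 - 2$.

```python
def lagrange_basis(xs, i, x):
    """Evaluate the Lagrange coefficient polynomial l_i at x."""
    num, den = 1.0, 1.0
    for j, xj in enumerate(xs):
        if j != i:
            num *= x - xj
            den *= xs[i] - xj
    return num / den

def lagrange_interp(xs, ys, x):
    """Evaluate p_n(x) = sum_i f(x_i) l_i(x)."""
    return sum(y * lagrange_basis(xs, i, x) for i, y in enumerate(ys))

# data x = [0, 1, 2], f = [-2, -1, 2] gives p_2(x) = x^2 - 2
xs, ys = [0.0, 1.0, 2.0], [-2.0, -1.0, 2.0]
assert all(abs(lagrange_interp(xs, ys, t) - (t * t - 2.0)) < 1e-12
           for t in (-1.0, 0.5, 1.7, 3.0))
```

Note that the interpolant agrees with $x^2-2$ off the nodes as well, since two polynomials of degree at most 2 that agree at three points are identical.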

Example: Let us consider the data set $x = [0, 1, 2]$, $f = [-2, -1, 2]$:
$$l_0(x) = \frac{(x-1)(x-2)}{(-1)(-2)} = (x^2 - 3x + 2)/2,$$
$$l_1(x) = \frac{x(x-2)}{(1)(-1)} = -x^2 + 2x,$$
$$l_2(x) = \frac{x(x-1)}{(2)(1)} = (x^2 - x)/2,$$
$$p_2(x) = -2\,l_0(x) - l_1(x) + 2\,l_2(x) = x^2 - 2.$$

Example: $f(x) = \cos^5(x)$ using the 8 points $x = [0, 1, 2, 3, 4, 5, 6, 7]$ and the points $x_i = 0.5\,i$, $i = 0, 1, \ldots, 14$.

A Matlab example:

x = 0:7;                          % 8 interpolation points
y = cos(x).^5;
c = polyfit(x, y, length(x)-1);   % coefficients of the interpolating polynomial
xi = 0:0.1:7;
zi = cos(xi).^5;                  % exact values
yi = polyval(c, xi);              % interpolant values
subplot(2,1,1)
plot(xi, yi, '-.', xi, zi, x, y, '*');
title('Interpolation')
subplot(2,1,2)
plot(xi, zi-yi, x, zeros(1,length(x)), '-*');
title('Interpolation Error')

Newton form and divided differences: We develop a procedure to compute $a_0, a_1, \ldots, a_n$ such that the interpolation polynomial has the form
$$p_n(x) = a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + \cdots + a_n(x-x_0)\cdots(x-x_{n-1})$$

where $a_0 = f(x_0)$, $a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$, and, in general, $a_k = f[x_0, x_1, \ldots, x_k]$ is called the $k$th divided difference. All divided differences are generated by the following recurrence formula:
$$f[x_i] = f(x_i), \qquad f[x_i, x_k] = \frac{f[x_k] - f[x_i]}{x_k - x_i},$$
$$f[x_0, x_1, \ldots, x_{k-1}, x_k] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}.$$

The divided differences are arranged in a table:

x_i    f(x_i)    1st DD        2nd DD            3rd DD                4th DD
x_0    f(x_0)
                 f[x_0,x_1]
x_1    f(x_1)                  f[x_0,x_1,x_2]
                 f[x_1,x_2]                      f[x_0,x_1,x_2,x_3]
x_2    f(x_2)                  f[x_1,x_2,x_3]                        f[x_0,x_1,x_2,x_3,x_4]
                 f[x_2,x_3]                      f[x_1,x_2,x_3,x_4]
x_3    f(x_3)                  f[x_2,x_3,x_4]
                 f[x_3,x_4]
x_4    f(x_4)

The forward Newton polynomial can be written as
$$p_4(x) = f(x_0) + f[x_0,x_1](x-x_0) + f[x_0,x_1,x_2](x-x_0)(x-x_1) + f[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2) + f[x_0,x_1,x_2,x_3,x_4](x-x_0)(x-x_1)(x-x_2)(x-x_3).$$

The backward Newton polynomial can be written as
$$p_4(x) = f(x_4) + f[x_3,x_4](x-x_4) + f[x_2,x_3,x_4](x-x_4)(x-x_3) + f[x_1,x_2,x_3,x_4](x-x_4)(x-x_3)(x-x_2) + f[x_0,x_1,x_2,x_3,x_4](x-x_4)(x-x_3)(x-x_2)(x-x_1).$$
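The table-building recurrence and the forward Newton form can be sketched as follows (plain Python, not part of the notes); `divided_differences` returns the upper diagonal of the table, i.e. the forward Newton coefficients, and `newton_eval` evaluates the Newton form.

```python
def divided_differences(xs, ys):
    """Return [f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]] (forward coefficients)."""
    c = list(ys)
    for k in range(1, len(xs)):
        # update from the bottom so c[i] becomes f[x_{i-k}, ..., x_i]
        for i in range(len(xs) - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, x):
    """Evaluate the forward Newton polynomial with coefficients c."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = c[i] + p * (x - xs[i])
    return p

# x = [0, 1, 2], f = [-2, -1, 2]: coefficients f[x_0] = -2, f[x_0,x_1] = 1,
# f[x_0,x_1,x_2] = 1, so p_2(x) = -2 + x + x(x-1) = x^2 - 2
xs, ys = [0.0, 1.0, 2.0], [-2.0, -1.0, 2.0]
c = divided_differences(xs, ys)
assert c == [-2.0, 1.0, 1.0]
assert abs(newton_eval(xs, c, 0.5) - (0.5**2 - 2.0)) < 1e-12
```

The evaluation loop is exactly the nested-multiplication scheme described later in this section.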

Example: Consider the divided-difference table for data at the nodes $x_i = -2, -1, 0, 1, 2, 3$, built with the recurrence above. The forward Newton polynomial is
$$p_5(x) = 1 - 8(x+2) + 2(x+2)(x+1) - \tfrac{10}{3}(x+2)(x+1)x + \cdots,$$
with the remaining terms built on $(x+2)(x+1)x(x-1)$ and $(x+2)(x+1)x(x-1)(x-2)$. The backward Newton polynomial is
$$p_5(x) = -4 - 12(x-3) + 18(x-3)(x-2) + \cdots,$$
with the remaining terms built on $(x-3)(x-2)(x-1)$, $(x-3)(x-2)(x-1)x$, and $(x-3)(x-2)(x-1)x(x+1)$.

Remarks:

(i) The upper diagonal of the table contains the coefficients of the forward Newton polynomial.

(ii) The lower diagonal of the table contains the coefficients of the backward Newton polynomial.

(iii) $p_k(x)$ interpolates $f$ at $x_0, x_1, \ldots, x_k$ and is obtained as

$$p_k(x) = f(x_0) + \sum_{j=1}^{k} f[x_0, x_1, \ldots, x_j]\prod_{i=0}^{j-1}(x - x_i).$$

(iv) The polynomial $p_2(x)$ that interpolates $f$ at $x_2$, $x_3$ and $x_4$ is
$$p_2(x) = f(x_2) + f[x_2,x_3](x-x_2) + f[x_2,x_3,x_4](x-x_2)(x-x_3).$$

(v) If we decide to add an additional point, it should be added at the bottom of the table for forward Newton polynomials and at the top for backward Newton polynomials.

(vi) $f[x_0, x_1, \ldots, x_k] = \frac{f^{(k)}(\xi)}{k!}$, $\xi \in [a,b]$.

Nested multiplication: An efficient algorithm to evaluate the Newton polynomial can be obtained by writing
$$p_n(x) = a_1 + (x-x_1)\Bigl(a_2 + (x-x_2)\bigl(\cdots\, (a_{n-1} + (x-x_{n-1})(a_n + a_{n+1}(x-x_n)))\,\cdots\bigr)\Bigr).$$

Matlab program:

% input: a(i), i=1,...,n+1 (Newton coefficients), x(i), i=1,...,n+1 (nodes),
% and the evaluation point t
p = a(n+1);
for i = n:-1:1
    p = a(i) + p*(t - x(i));
end
% p = p_n(t)

1.4 Interpolation error and convergence

In this section we study the interpolation error and the convergence of interpolation polynomials to the interpolated function.

1.4.1 Interpolation error

Theorem. Let $f \in C[a,b]$ and let $x_0, x_1, x_2, \ldots, x_n$ be $n+1$ distinct points in $[a,b]$. Then there exists a unique polynomial $p_n$ of degree at most $n$ such that $p_n(x_i) = f(x_i)$, $i = 0, 1, \ldots, n$.

Proof. Existence: We define
$$L_i(x) = \frac{\prod_{j=0,\, j\neq i}^{n}(x - x_j)}{\prod_{j=0,\, j\neq i}^{n}(x_i - x_j)}$$
and
$$p_n(x) = \sum_{i=0}^{n} f(x_i)\,L_i(x).$$
One can verify that
$$L_i(x_j) = \delta_{ij} = \begin{cases} 1, & i = j,\\ 0, & \text{otherwise}, \end{cases}$$
so that $p_n(x_j) = f(x_j)$.

Uniqueness: Assume there are two polynomials $q_n(x)$ and $p_n(x)$ such that
$$q_n(x_j) = p_n(x_j) = f(x_j), \quad j = 0, 1, 2, \ldots, n,$$
and consider the difference $d_n(x) = p_n(x) - q_n(x)$. Since $d_n(x_j) = 0$, $j = 0, 1, \ldots, n$, the polynomial $d_n(x)$, of degree at most $n$, has $n+1$ roots. By the fundamental theorem of algebra, $d_n(x) \equiv 0$.

Theorem. Let $f \in C[a,b]$ be $(n+1)$ times differentiable on $(a,b)$ and let $x_0, x_1, \ldots, x_n$ be $n+1$ distinct points in $[a,b]$. If $p_n(x)$ is such that $p_n(x_i) = f(x_i)$, $i = 0, 1, \ldots, n$, then for each $x \in [a,b]$ there exists $\xi(x) \in [a,b]$ such that
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}W(x), \qquad W(x) = \prod_{i=0}^{n}(x - x_i).$$

Proof. Let $x \in [a,b]$ with $x \neq x_i$, $i = 0, 1, \ldots, n$, and define the function
$$g(t) = f(t) - p_n(t) - \frac{f(x) - p_n(x)}{W(x)}W(t).$$
We note that $g$ has $n+2$ roots, i.e., $g(x_i) = 0$, $i = 0, 1, \ldots, n$, and $g(x) = 0$. Using the generalized Rolle's theorem, there exists $\xi(x) \in (a,b)$ such that $g^{(n+1)}(\xi(x)) = 0$, which leads to
$$g^{(n+1)}(\xi(x)) = f^{(n+1)}(\xi(x)) - 0 - \frac{f(x) - p_n(x)}{W(x)}(n+1)! = 0. \tag{1.1.1}$$
We solve (1.1.1) to find $f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}W(x)$, which completes the proof.

Corollary 1. Assume that $M_{n+1} = \max_{x\in[a,b]}|f^{(n+1)}(x)|$. Then
$$|f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!}|W(x)|, \quad \forall\, x \in [a,b],$$
and
$$\max_{x\in[a,b]}|f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!}\max_{x\in[a,b]}|W(x)|.$$

Proof. The proof is straightforward.
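Corollary 1 is easy to check numerically. The sketch below (plain Python, not part of the notes, with a small `lagrange_interp` helper introduced here for illustration) interpolates $f(x) = e^{x/3}$ on $[0,1]$ at five equally spaced nodes and compares the actual error at $x = 0.3$ with the bound, using $M_5 = e^{1/3}/3^5$.

```python
import math

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, yi in enumerate(ys):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xs[i] - xj)
        total += yi * li
    return total

f = lambda t: math.exp(t / 3)
xs = [0.0, 0.25, 0.5, 0.75, 1.0]          # n = 4
x = 0.3
W = math.prod(x - xi for xi in xs)        # W(0.3)
M5 = math.exp(1 / 3) / 3**5               # max of |f^(5)| on [0, 1]
bound = M5 * abs(W) / math.factorial(5)
actual = abs(f(x) - lagrange_interp(xs, [f(xi) for xi in xs], x))
assert actual <= bound                    # the corollary's bound holds
```

The bound here is on the order of $10^{-8}$, and the actual error is smaller still, since $f^{(5)}$ does not attain its maximum at the relevant $\xi(x)$.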

Examples:
$$\|f - p_1\|_\infty \leq \frac{M_2 h^2}{8} \quad \text{on } [x_0, x_0+h],$$
$$\|f - p_2\|_\infty \leq \frac{M_3 h^3}{9\sqrt{3}} \quad \text{on } [x_0, x_0+2h],$$
$$\|f - p_3\|_\infty \leq \frac{M_4 h^4}{24} \quad \text{on } [x_0, x_0+3h].$$

Example: Let us interpolate $f(x) = e^{x/3}$ on $[0,1]$ at $x_0, x_1, \ldots, x_n$. The $(n+1)$st derivative is $f^{(n+1)}(x) = \frac{e^{x/3}}{3^{n+1}}$, so
$$M_{n+1} = \max_{x\in[0,1]}|f^{(n+1)}(x)| = \frac{e^{1/3}}{3^{n+1}}.$$
The interpolation error can be bounded as
$$|f(x) - p_n(x)| \leq \frac{e^{1/3}\,|W(x)|}{3^{n+1}(n+1)!}, \quad x \in [0,1].$$
For instance, for $n = 4$ and $x = [0, 1/4, 1/2, 3/4, 1]$,
$$W(x) = x(x - 1/4)(x - 1/2)(x - 3/4)(x - 1).$$
The error at $x = 0.3$ can be bounded as
$$|f(0.3) - p_4(0.3)| \leq \frac{e^{1/3}\,|W(0.3)|}{3^5\, 5!} \approx 4.5 \times 10^{-8}.$$

Example: Let us consider $f(x) = \cos(x) + x$ on $[0,2]$, which satisfies $\max_{x\in[0,2]}|f^{(k)}(x)| \leq 1$ for $k \geq 2$. The interpolation error can be bounded as follows.

Case 1: Two interpolation points with $h = 2$:
$$\max_{x\in[0,2]}|f(x) - p_1(x)| \leq \frac{h^2 M_2}{8} \leq 4/8 = 0.5.$$

Case 2: Three interpolation points with $h = 1$:
$$\max_{x\in[0,2]}|f(x) - p_2(x)| \leq \frac{h^3 M_3}{9\sqrt{3}} \leq \frac{1}{9\sqrt{3}} \approx 0.0641.$$

Case 3: Four interpolation points with $h = 2/3$:
$$\max_{x\in[0,2]}|f(x) - p_3(x)| \leq \frac{h^4 M_4}{24} \leq \frac{(2/3)^4}{24} \approx 0.0082.$$

1.4.2 Convergence

We start by reviewing the convergence of functions and defining simple and uniform convergence of sequences of functions. Let $f_n(x)$, $n = 0, 1, \ldots$, be a sequence of continuous functions on $[a,b]$.

Definition 1 (Simple convergence). $f_n(x)$ converges simply to $f(x)$ if and only if at every $x \in [a,b]$,
$$\lim_{n\to\infty}|f_n(x) - f(x)| = 0.$$

Definition 2 (Uniform convergence). $f_n$ converges uniformly to $f$ if and only if
$$\lim_{n\to\infty}\max_{x\in[a,b]}|f_n(x) - f(x)| = 0.$$

Example: Let us consider the sequence
$$f_n(x) = \frac{1}{1 + nx}, \quad n = 0, 1, \ldots, \quad x \in [0,1].$$

(i) The sequence $f_n$ converges simply to $0$ for all $0 < x \leq 1$, while $f_n(0) = 1$. However, $f_n$ does not converge uniformly on $[0,1]$, since $\|f_n\|_\infty = 1$.

(ii) The sequence $f_n$ converges uniformly to $0$ on $[2,3]$, since there $\|f_n\|_\infty = 1/(1+2n)$.

Next, we will study the uniform convergence of interpolation polynomials on a fixed interval $[a,b]$ as the number of interpolation points approaches

infinity. Let $h = (b-a)/n$ and let $x_i = a + ih$, $i = 0, 1, 2, \ldots, n$, be equidistant interpolation points. Let $p_n(x)$ denote the Lagrange interpolation polynomial, i.e., $p_n(x_i) = f(x_i)$, $i = 0, \ldots, n$, and let us study the limit
$$\lim_{n\to\infty}\|f - p_n\|_\infty = \lim_{n\to\infty}\max_{x\in[a,b]}|f(x) - p_n(x)|.$$
For $x \in [a,b]$, $|x - x_i| \leq (b-a)$, which leads to $|W(x)| \leq (b-a)^{n+1}$. Thus, the interpolation error is bounded as
$$\|f - p_n\|_\infty \leq \frac{M_{n+1}}{(n+1)!}(b-a)^{n+1}.$$
We have uniform convergence when $\frac{M_{n+1}}{(n+1)!}(b-a)^{n+1} \to 0$ as $n \to \infty$.

Theorem. Let $f$ be an analytic function on a disk centered at $(a+b)/2$ with radius $r > 3(b-a)/2$. Then the interpolation polynomial $p_n(x)$ satisfying $p_n(x_i) = f(x_i)$, $i = 0, 1, 2, \ldots, n$, converges to $f$ as $n \to \infty$, i.e.,
$$\lim_{n\to\infty}\|f - p_n\|_\infty = 0.$$

Proof. A function is analytic at $(a+b)/2$ if it admits a power series expansion that converges on a disk $C_r$ of radius $r$ centered at $(a+b)/2$. Applying Cauchy's formula,
$$f^{(k)}(x) = \frac{k!}{2\pi i}\oint_{C_r}\frac{f(z)}{(z-x)^{k+1}}\,dz, \quad x \in [a,b],$$
$$|f^{(k)}(x)| \leq \frac{k!}{2\pi}\oint_{C_r}\frac{|f(z)|}{|z-x|^{k+1}}\,|dz|, \quad x \in [a,b].$$

[Figure: the circle $C_r$ of radius $r$ centered at $(a+b)/2$ in the complex plane, the interval $[a,b]$ on the real axis, a point $z$ on $C_r$, a point $x \in [a,b]$, the distance $d$ from $(a+b)/2$ to $x$, and a pole of $f$ marked outside $C_r$.]

Let $z$ be a point on the circle $C_r$ and $x \in [a,b]$. From the triangle with vertices $z$, $(a+b)/2$, and $x$, the triangle inequality yields
$$|z - x| + d \geq r.$$
Noting that $d \leq (b-a)/2$, the triangle inequality gives
$$|z - x| \geq r - (b-a)/2.$$
Assume $r > \frac{b-a}{2}$ (so that $[a,b] \subset C_r$) to obtain
$$M_k \leq \frac{k!}{2\pi}\,\max_{z\in C_r}|f(z)|\,\bigl(r - (b-a)/2\bigr)^{-(k+1)}\,2\pi r = \frac{r}{r - (b-a)/2}\cdot\frac{k!\,\max_{z\in C_r}|f(z)|}{\bigl(r - (b-a)/2\bigr)^{k}}.$$
Using $k = n+1$, the interpolation error may be bounded as
$$|f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!}(b-a)^{n+1} \leq \max_{z\in C_r}|f(z)|\,\frac{r}{r - (b-a)/2}\left(\frac{b-a}{r - (b-a)/2}\right)^{n+1}.$$
Finally, we have uniform convergence if $\frac{b-a}{r - (b-a)/2} < 1$, i.e., $r > \frac{3}{2}(b-a)$, which establishes the theorem.
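The role of the analyticity hypothesis can be seen numerically. The sketch below (plain Python, not part of the notes) interpolates $f(x) = 1/(4+x^2)$, whose poles at $\pm 2i$ violate the hypothesis on $[-10,10]$, at equally spaced points; the maximum error grows with $n$ instead of decaying.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolation polynomial."""
    c = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_form(xs, c, x):
    """Nested-multiplication evaluation of the Newton form."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = c[i] + p * (x - xs[i])
    return p

f = lambda x: 1.0 / (4.0 + x * x)
errors = {}
for n in (4, 8, 16):
    xs = [-10.0 + 20.0 * i / n for i in range(n + 1)]
    c = newton_coeffs(xs, [f(x) for x in xs])
    grid = [-10.0 + 20.0 * j / 1000 for j in range(1001)]
    errors[n] = max(abs(f(x) - newton_form(xs, c, x)) for x in grid)
print(errors)   # the maximum error grows as n increases
```

This is exactly the Runge phenomenon discussed next: the large errors appear near the endpoints of the interval.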

Examples of analytic functions are $\sin(z)$, $e^z$, and $\cos(z^2)$.

Runge phenomenon: The function $f(x) = \frac{1}{4+x^2}$ is $C^\infty[-10,10]$, but we do not have uniform convergence on $[-10,10]$ when using the $n+1$ equally spaced interpolation points $x_i = -10 + ih$, $i = 0, 1, \ldots, n$, $h = 20/n$. The function $f(z)$ has two poles $z = \pm 2i$ and thus cannot be analytic on a disk $C_r$ with radius $r > 3(b-a)/2 = 30$ centered at $(a+b)/2 = 0$. We cannot apply the previous theorem, and we should expect convergence problems for $x$ far away from the origin. The largest interval $[-a,a]$ for which the theorem applies must satisfy $2 > r > 3(2a)/2 = 3a$, which corresponds to $a < 2/3$. Actually, the interpolant may converge on a larger interval, because this is only a sufficient condition.

Interpolation errors and divided differences: The Newton form of $p_n(x)$ that interpolates $f$ at $x_i$, $i = 0, 1, \ldots, n$, is
$$p_n(x) = f[x_0] + f[x_0,x_1](x-x_0) + \sum_{i=2}^{n} f[x_0,\ldots,x_i]\prod_{j=0}^{i-1}(x-x_j).$$
We prove a theorem relating interpolation errors and divided differences.

Theorem. If $f \in C[a,b]$ is $n+1$ times differentiable, then for every $x \in [a,b]$,
$$f(x) - p_n(x) = f[x_0, x_1, \ldots, x_n, x]\prod_{i=0}^{n}(x - x_i)$$
and
$$f[x_0, x_1, x_2, \ldots, x_k] = \frac{f^{(k)}(\xi)}{k!}, \quad \xi \in \bigl[\min_{i=0,\ldots,k} x_i,\ \max_{i=0,\ldots,k} x_i\bigr].$$

Proof. Let us introduce another point $x$ distinct from $x_i$, $i = 0, 1, \ldots, n$, and let $p_{n+1}$ interpolate $f$ at $x_0, x_1, \ldots, x_n$ and $x$; thus

$$p_{n+1}(x) = p_n(x) + f[x_0, x_1, \ldots, x_n, x]\prod_{i=0}^{n}(x - x_i).$$
Combining the equation $p_{n+1}(x) = f(x)$ with the interpolation error formula, we write
$$f(x) - p_n(x) = f[x_0, x_1, \ldots, x_n, x]\prod_{i=0}^{n}(x - x_i) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}\prod_{i=0}^{n}(x - x_i).$$
This leads to
$$f[x_0, x_1, \ldots, x_k] = \frac{f^{(k)}(\xi)}{k!}, \quad \xi \in \bigl[\min_{i=0,\ldots,k} x_i,\ \max_{i=0,\ldots,k} x_i\bigr],$$
which completes the proof.

Remark:
$$\lim_{x_i \to x_0,\ i=1,\ldots,k} f[x_0, x_1, \ldots, x_k] = \frac{f^{(k)}(x_0)}{k!}.$$

1.5 Interpolation at Chebyshev points

In the previous section we have shown that, for some functions, uniform convergence does not occur with uniformly spaced interpolation points. Now we study the interpolation error on $[-1,1]$ when the $n+1$ interpolation points $x_i^*$, $i = 0, 1, \ldots, n$, in $[-1,1]$ are selected such that
$$\|W^*\|_\infty = \min_{Q\in\tilde{P}_{n+1}}\|Q\|_\infty,$$
where $\tilde{P}_n$ is the set of monic polynomials
$$\tilde{P}_n = \Bigl\{Q \in P_n \ \Big|\ Q = x^n + \sum_{i=0}^{n-1} c_i x^i\Bigr\}$$

and $W^*(x) = \prod_{i=0}^{n}(x - x_i^*)$.

Question: Are there interpolation points $x_i^*$, $i = 0, 1, 2, \ldots, n$, in $[-1,1]$ such that
$$\|W^*\|_\infty = \min_{x_i\in[-1,1],\ i=0,1,\ldots,n}\|W\|_\infty\,?$$
If the above statement is true, the interpolation error can be bounded by
$$\|e_n\|_\infty \leq \frac{M_{n+1}}{(n+1)!}\|W^*\|_\infty.$$

The answer: The best interpolation points $x_i^*$, $i = 0, 1, 2, \ldots, n$, are the roots of the Chebyshev polynomial $T_{n+1}(x)$, where
$$T_k(x) = \cos(k\arccos(x)), \quad k = 0, 1, 2, \ldots$$
In the following theorem we prove some properties of Chebyshev polynomials.

Theorem. The Chebyshev polynomials $T_k(x)$, $k = 0, 1, 2, \ldots$, satisfy the following properties:

(i) $|T_k(x)| \leq 1$ for all $-1 \leq x \leq 1$;

(ii) $T_{k+1}(x) = 2xT_k(x) - T_{k-1}(x)$;

(iii) $T_k(x)$ has $k$ roots $x_j^* = \cos\bigl(\frac{2j+1}{2k}\pi\bigr) \in [-1,1]$, $j = 0, 1, \ldots, k-1$;

(iv) $T_k(x) = 2^{k-1}\prod_{j=0}^{k-1}(x - x_j^*)$;

(v) if $\tilde{T}_k(x) = \frac{T_k(x)}{2^{k-1}}$, then $\max_{x\in[-1,1]}|\tilde{T}_k(x)| = \frac{1}{2^{k-1}}$.

Proof. We obtain (i) by noting that the range of the cosine function is $[-1,1]$. To obtain (ii) we write
$$T_{k+1}(x) = \cos(k\arccos(x) + \arccos(x)) = \cos(k\theta + \theta),$$

where $\theta = \arccos(x)$, and write
$$T_{k+1} = \cos(k\theta + \theta), \qquad T_{k-1} = \cos(k\theta - \theta).$$
Use the trigonometric identity $\cos(a \pm b) = \cos(a)\cos(b) \mp \sin(a)\sin(b)$ to obtain
$$\cos(k\theta + \theta) = \cos(k\theta)\cos(\theta) - \sin(k\theta)\sin(\theta)$$
and
$$\cos(k\theta - \theta) = \cos(k\theta)\cos(\theta) + \sin(k\theta)\sin(\theta).$$
Adding the previous equations, we obtain
$$T_{k+1}(x) + T_{k-1}(x) = 2\cos(k\theta)\cos(\theta) = 2xT_k(x).$$
This proves (ii). To obtain the roots we set $\cos(k\arccos(x)) = 0$, which leads to
$$k\arccos(x) = \frac{2j+1}{2}\pi, \quad j = 0, \pm 1, \pm 2, \ldots$$
If we solve for $x$, we obtain the roots
$$x_j^* = \cos\Bigl(\frac{2j+1}{2k}\pi\Bigr), \quad j = 0, 1, \ldots, k-1.$$
We use induction to prove (iv). Step 1: $T_1(x) = 2^0 x$ and $T_2(x) = 2^1 x^2 - 1$, so the leading coefficient is $2^{k-1}$ for $k = 1, 2$. Step 2: Assume $T_k = 2^{k-1}x^k + \sum_{i=0}^{k-1}c_i x^i$ for $k = 1, 2, \ldots, n$; using (ii) we write
$$T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x) = 2^n x^{n+1} + \sum_{i=0}^{n} a_i x^i.$$

This establishes (iv), i.e., $T_k = 2^{k-1}\prod_{j=0}^{k-1}(x - x_j^*)$. Applying (iv), we show that (v) is true.

Corollary 2. If $\tilde{T}_n(x)$ is the monic Chebyshev polynomial of degree $n$, then
$$\max_{-1\leq x\leq 1}|\tilde{Q}_n(x)| \geq \max_{-1\leq x\leq 1}|\tilde{T}_n(x)| = \frac{1}{2^{n-1}}, \quad \forall\, \tilde{Q}_n \in \tilde{P}_n.$$

Proof. Assume there is another monic polynomial $\tilde{R}_n(x)$ such that
$$\max_{-1\leq x\leq 1}|\tilde{R}_n(x)| < \frac{1}{2^{n-1}}.$$
We also note that
$$\tilde{T}_n(z_k) = \frac{(-1)^k}{2^{n-1}}, \quad z_k = \cos(k\pi/n), \quad k = 0, 1, 2, \ldots, n.$$
The $(n-1)$-degree polynomial $d_{n-1}(x) = \tilde{T}_n(x) - \tilde{R}_n(x)$ satisfies
$$d_{n-1}(z_0) > 0, \quad d_{n-1}(z_1) < 0, \quad d_{n-1}(z_2) > 0, \quad d_{n-1}(z_3) < 0, \ldots$$
So $d_{n-1}(x)$ changes sign between each pair $z_k$ and $z_{k+1}$, $k = 0, 1, 2, \ldots, n-1$, and thus has $n$ roots. Thus $d_{n-1}(x) = 0$ identically, i.e., $\tilde{T}_n(x)$ and $\tilde{R}_n(x)$ are identical. This leads to a contradiction with the assumption above.

Below are the first five Chebyshev polynomials:
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x, \quad T_4(x) = 8x^4 - 8x^2 + 1.$$

Example of Chebyshev points:

k    x_0^*            x_1^*            x_2^*            x_3^*
1    0
2    \sqrt{2}/2       -\sqrt{2}/2
3    \sqrt{3}/2       0                -\sqrt{3}/2
4    cos(\pi/8)       cos(3\pi/8)      cos(5\pi/8)      cos(7\pi/8)

Application to interpolation: Let $p_n(x) \in P_n$ interpolate $f(x) \in C^{n+1}[-1,1]$ at the roots $x_j^*$, $j = 0, 1, 2, \ldots, n$, of $T_{n+1}(x)$. Thus, we can write the interpolation error formula as
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}\tilde{T}_{n+1}(x).$$
Using (v) from the previous theorem and assuming $\|f^{(n+1)}\|_{\infty,[-1,1]} \leq M_{n+1}$, we obtain
$$\max_{x\in[-1,1]}|f(x) - p_n(x)| \leq \frac{M_{n+1}}{2^n(n+1)!}.$$

Remarks:

1. We note that this choice of interpolation points reduces the error significantly.

2. With Chebyshev points, $p_n$ converges uniformly to $f$ when $f \in C^1[-1,1]$ only. The function $f$ does not have to be analytic (see Gautschi).

Example 1: Consider $f(x) = e^x$, $x \in [-1,1]$.

Case 1: With three points, $n = 2$:
$$\|e_2\|_\infty \leq \frac{M_3}{2^2\,3!} = e/24 \approx 0.113.$$

Case 2: With six points, $n = 5$:

$$\|e_5\|_\infty \leq \frac{M_6}{2^5\,6!} = e/(720\cdot 32) \approx 1.18\times 10^{-4}.$$

How many Chebyshev points are needed to have $\|e_n\|_\infty < 10^{-8}$? We require
$$\|e_n\|_\infty \leq \frac{M_{n+1}}{2^n(n+1)!} = \frac{e}{2^n(n+1)!} < 10^{-8},$$
which first holds for $n = 9$, where $\frac{e}{2^9\,10!} \approx 1.5\times 10^{-9}$. Thus, 10 Chebyshev points are needed.

Chebyshev points on $[a,b]$: Chebyshev points can be used on an arbitrary interval $[a,b]$ using the linear transformation
$$x = \frac{a+b}{2} + \frac{b-a}{2}\,t, \quad -1 \leq t \leq 1. \tag{1.1.2a}$$
We also need the inverse mapping
$$t = 2\,\frac{x-a}{b-a} - 1, \quad a \leq x \leq b. \tag{1.1.2b}$$
First, we order the Chebyshev nodes in $[-1,1]$ as
$$t_k^* = \cos\Bigl(\pi - \frac{2k+1}{2n+2}\pi\Bigr) = -\cos\Bigl(\frac{2k+1}{2n+2}\pi\Bigr), \quad k = 0, 1, 2, \ldots, n,$$
and we define the interpolation nodes on an arbitrary interval $[a,b]$ as
$$x_k^* = \frac{a+b}{2} + \frac{b-a}{2}\,t_k^*, \quad k = 0, 1, 2, \ldots, n.$$

Remarks:

1. $x_0^* < x_1^* < \cdots < x_n^*$.

2. The $x_k^*$ are symmetric with respect to the center $(a+b)/2$.

3. The $x_k^*$ are independent of the interpolated function $f$.
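A small sketch (plain Python, not part of the notes) generates the mapped Chebyshev nodes and checks two of the facts above: the three-term recurrence reproduces $\cos(k\arccos x)$, and the nodes on $[a,b]$ come out ordered and symmetric about the midpoint.

```python
import math

def chebyshev_T(k, x):
    """Evaluate T_k(x) via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def chebyshev_nodes(a, b, n):
    """The n+1 Chebyshev interpolation nodes on [a, b], in increasing order."""
    ts = [-math.cos((2 * k + 1) * math.pi / (2 * n + 2)) for k in range(n + 1)]
    return [(a + b) / 2 + (b - a) / 2 * t for t in ts]

# recurrence vs. the closed form cos(k arccos x)
for x in (-0.9, -0.3, 0.2, 0.7):
    assert abs(chebyshev_T(5, x) - math.cos(5 * math.acos(x))) < 1e-12

nodes = chebyshev_nodes(0.0, 1.0, 4)
assert all(u < v for u, v in zip(nodes, nodes[1:]))        # increasing
assert all(abs((u - 0.5) + (v - 0.5)) < 1e-12              # symmetric about 1/2
           for u, v in zip(nodes, reversed(nodes)))
```

The nodes are precisely the roots of $T_{n+1}$ mapped by (1.1.2a), so $T_{n+1}$ vanishes at the pre-images $t_k^*$.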

Theorem. Let $f \in C^{n+1}[a,b]$ and let $p_n$ interpolate $f$ at the Chebyshev nodes $x_k^*$, $k = 0, 1, 2, \ldots, n$, in $[a,b]$. Then
$$\max_{x\in[a,b]}|f(x) - p_n(x)| \leq 2M_{n+1}\,\frac{(b-a)^{n+1}}{4^{n+1}(n+1)!},$$
where $M_{n+1} = \max_{x\in[a,b]}|f^{(n+1)}(x)|$.

Proof. It suffices to rewrite $W(x) = \prod_{i=0}^{n}(x - x_i^*)$ using the mapping (1.1.2) to find that
$$x - x_i^* = \frac{b-a}{2}(t - t_i^*)$$
and
$$W(x) = \prod_{i=0}^{n}(x - x_i^*) = \Bigl(\frac{b-a}{2}\Bigr)^{n+1}\prod_{i=0}^{n}(t - t_i^*) = \Bigl(\frac{b-a}{2}\Bigr)^{n+1}\tilde{T}_{n+1}(t).$$
Finally, using $\|\tilde{T}_{n+1}\|_{\infty,[-1,1]} = \frac{1}{2^n}$, we complete the proof.

Example 2: Consider $f(x) = 3^x = e^{x\ln 3}$, $x \in [0,1]$, whose derivatives are $f^{(n+1)}(x) = (\ln 3)^{n+1}e^{x\ln 3}$. Noting that $f^{(n+1)}$ is a monotonically increasing function, $M_{n+1} = f^{(n+1)}(1) = 3(\ln 3)^{n+1}$. Therefore,
$$\|e_n\|_\infty \leq \frac{2\cdot 3\,(\ln 3)^{n+1}}{4^{n+1}(n+1)!} = \frac{6\,(\ln 3)^{n+1}}{4^{n+1}(n+1)!}.$$

Evaluating this bound for increasing numbers of Chebyshev points gives:

# of Chebyshev points    Error bound
2                        2.3e-1
3                        2.1e-2
4                        1.4e-3
5                        7.8e-5
6                        3.6e-6

1.6 Hermite interpolation

We restate the general Hermite interpolation problem by letting $x_0 < x_1 < x_2 < \cdots < x_m$ be $m+1$ distinct points and requiring
$$f^{(k)}(x_i) = p_n^{(k)}(x_i), \quad k = 0, 1, \ldots, n_i - 1, \quad i = 0, 1, \ldots, m, \tag{1.1.3}$$
where $\sum_{i=0}^{m} n_i = n+1$ and $n_i \geq 1$. We note that $n_i = 1$, $i = 0, \ldots, m$, leads to Lagrange interpolation.

1.6.1 Lagrange form of Hermite interpolation polynomials

Theorem. There exists a unique polynomial $p_n(x)$ that satisfies (1.1.3) with $n_i = 2$ and $n = 2m+1$.

Proof. Existence: We study the special case $n_i = 2$, $i = 0, 1, 2, \ldots, m$, where
$$l_{i,1}(x) = (x - x_i)\,l_i(x)^2 \tag{1.1.4}$$

and
$$l_{i,0}(x) = l_i(x)^2 - 2\,l_i'(x_i)(x - x_i)\,l_i(x)^2. \tag{1.1.5}$$
Now, we can verify that $l_{i,1}(x_j) = 0$, $j = 0, 1, 2, \ldots, m$, and from
$$l_{i,1}'(x) = 2(x - x_i)\,l_i'(x)\,l_i(x) + l_i(x)^2$$
that $l_{i,1}'(x_j) = \delta_{ij}$. One can easily check that $l_{i,0}(x_j) = \delta_{ij}$. For $l_{i,0}'(x)$ we have
$$l_{i,0}'(x) = \bigl(1 - 2\,l_i'(x_i)(x - x_i)\bigr)\,2\,l_i'(x)\,l_i(x) - 2\,l_i'(x_i)\,l_i(x)^2,$$
and thus $l_{i,0}'(x_j) = 0$, $j = 0, 1, \ldots, m$. Existence of the Hermite interpolation polynomial is established by writing the Lagrange form of the Hermite polynomial as
$$p_n(x) = \sum_{i=0}^{m} f(x_i)\,l_{i,0}(x) + \sum_{i=0}^{m} f'(x_i)\,l_{i,1}(x). \tag{1.1.6}$$

Uniqueness: Assume there are two polynomials $p_n(x)$ and $q_n(x)$ that satisfy (1.1.3), and consider the difference $d_n(x) = p_n(x) - q_n(x)$, which satisfies
$$d_n^{(s)}(x_j) = 0, \quad s = 0, 1, \quad j = 0, 1, \ldots, m.$$
Thus, $d_n(x)$ is a polynomial of degree at most $n$ with $n+1$ roots, counting the multiplicity of each root. The fundamental theorem of algebra shows that $d_n(x)$ is identically zero. With this we establish the uniqueness of $p_n$ and finish the proof of the theorem.
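The basis (1.1.4)-(1.1.5) can be exercised directly. The sketch below (plain Python, not part of the notes) assembles (1.1.6) and checks that it reproduces a cubic exactly, since the Hermite interpolant at three double nodes has degree at most 5; it uses the closed form $l_i'(x_i) = \sum_{j\neq i} 1/(x_i - x_j)$, which follows from differentiating the product defining $l_i$.

```python
def hermite_interp(xs, ys, dys, x):
    """Lagrange form of the Hermite interpolant, eq. (1.1.6)."""
    m = len(xs)
    total = 0.0
    for i in range(m):
        li = 1.0
        for j in range(m):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        dli_at_xi = sum(1.0 / (xs[i] - xs[j]) for j in range(m) if j != i)
        li1 = (x - xs[i]) * li * li                              # l_{i,1}
        li0 = li * li - 2.0 * dli_at_xi * (x - xs[i]) * li * li  # l_{i,0}
        total += ys[i] * li0 + dys[i] * li1
    return total

# f(x) = x^3 with double nodes at 0, 1, 2: degree 3 <= 5, so the result is exact
xs = [0.0, 1.0, 2.0]
ys = [x**3 for x in xs]
dys = [3.0 * x**2 for x in xs]
assert abs(hermite_interp(xs, ys, dys, 1.5) - 1.5**3) < 1e-10
```

By the uniqueness argument just proved, the interpolant of a polynomial of degree at most $2m+1$ is that polynomial itself, which is what the assertion checks.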

1.6.2 Newton form of Hermite interpolation polynomial

Using the relation
$$f[x_0, x_0+h, x_0+2h, \ldots, x_0+kh] = \frac{f^{(k)}(\xi)}{k!}, \quad x_0 < \xi < x_0 + kh, \tag{1.1.7}$$
and taking the limit as $h \to 0$, we obtain
$$f[x_0, x_0, x_0, \ldots, x_0] = \frac{f^{(k)}(x_0)}{k!}. \tag{1.1.8}$$
The divided-difference table for the data $(x_k + ih,\, f(x_k + ih))$, $i = 0, 1, 2, 3, 4$, thus converges, as $h \to 0$, to the table shown below, where we recover the Taylor polynomial about $x_k$ (here with $n_k = 5$). Using this observation we initialize the table for $x_k$ and $n_k = 5$ as follows:

(i) every point $x_i$ is repeated $n_i$ times;

(ii) we set $z_i = x_k$, $z_{i+1} = x_k$, ..., $z_{i+4} = x_k$;

(iii) we initialize $f[z_i, z_{i+1}, \ldots, z_{i+s}] = \frac{f^{(s)}(x_k)}{s!}$,

as shown in the following table:

z_i      x_k   f(x_k)
z_{i+1}  x_k   f(x_k)   f'(x_k)
z_{i+2}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!
z_{i+3}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!   f'''(x_k)/3!
z_{i+4}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!   f'''(x_k)/3!   f^{(4)}(x_k)/4!

The general formula for divided differences with repeated arguments, for $x_0 \leq x_1 \leq \cdots \leq x_n$, is given by
$$f[x_i, x_{i+1}, \ldots, x_{i+k}] = \begin{cases} \dfrac{f[x_{i+1}, x_{i+2}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}, & \text{if } x_i \neq x_{i+k},\\[2mm] \dfrac{f^{(k)}(x_i)}{k!}, & \text{if } x_i = x_{i+k}. \end{cases}$$

Example: Let $f(x) = x^4 + 1$, $f'(x) = 4x^3$. We will construct a polynomial $p_5(x)$ such that $p(x_i) = f(x_i)$ and $p'(x_i) = f'(x_i)$ at $x_i = -1, 0, 1$. The Hermite divided-difference table is given as

z_i   f(z_i)   1st DD        2nd DD   3rd DD   4th DD   5th DD
-1    2
               f'(-1) = -4
-1    2                     3
               -1                    -2
0     1                     1                  1
               f'(0) = 0                                0
0     1                     1                  1
               1                     2
1     2                     3
               f'(1) = 4
1     2

The forward Hermite polynomial is given as
$$p_5(x) = 2 - 4(x+1) + 3(x+1)^2 - 2(x+1)^2 x + (x+1)^2 x^2 = 1 + x^4. \tag{1.1.9}$$
The Hermite polynomial that interpolates $f$ and $f'$ at $x = -1, 0$ is given as
$$p_3(x) = 2 - 4(x+1) + 3(x+1)^2 - 2(x+1)^2 x. \tag{1.1.10}$$
The backward Hermite polynomial is given as
$$p_5(x) = 2 + 4(x-1) + 3(x-1)^2 + 2(x-1)^2 x + (x-1)^2 x^2 = 1 + x^4. \tag{1.1.11}$$
The Hermite polynomial that interpolates $f$ and $f'$ at $x = 0, 1$ is given by
$$p_3(x) = 2 + 4(x-1) + 3(x-1)^2 + 2(x-1)^2 x. \tag{1.1.12}$$

Example: Consider the data with $m = 1$, $n_0 = 2$, $n_1 = 3$, given in the following table:

x_i        0     1
f(x_i)     1     2
f'(x_i)    0     1
f''(x_i)   NA    2

We write the divided-difference table as

z_i   f(z_i)   1st DD        2nd DD            3rd DD   4th DD
0     1
               f'(0) = 0
0     1                     1
               1                               -1
1     2                     0                           2
               f'(1) = 1                       1
1     2                     f''(1)/2! = 1
               f'(1) = 1
1     2

The Hermite polynomial is given as
$$H_4(x) = 1 + 0(x-0) + (x-0)^2 - (x-0)^2(x-1) + 2(x-0)^2(x-1)^2 = 1 + 2x^2 - x^3 + 2x^2(x-1)^2. \tag{1.1.13}$$

1.6.3 Hermite interpolation error

Theorem. Let $f(x) \in C[a,b]$ be $2m+2$ times differentiable on $(a,b)$, and consider $x_0 < x_1 < x_2 < \cdots < x_m$ in $[a,b]$ with $n_i = 2$, $i = 0, 1, \ldots, m$. If $p_{2m+1}$ is the Hermite polynomial such that $p_{2m+1}^{(k)}(x_i) = f^{(k)}(x_i)$, $i = 0, 1, \ldots, m$, $k = 0, 1$, then there exists $\xi(x) \in [a,b]$ such that
$$f(x) - p_{2m+1}(x) = \frac{f^{(2m+2)}(\xi(x))}{(2m+2)!}W(x), \tag{1.1.14a}$$
where
$$W(x) = \prod_{i=0}^{m}(x - x_i)^2. \tag{1.1.14b}$$

Proof. We consider the special case $n_i = 2$, i.e., $n = 2m+1$, select an arbitrary point $x \in [a,b]$, $x \neq x_i$, $i = 0, \ldots, m$, and define the function
$$g(t) = f(t) - p_{2m+1}(t) - \frac{f(x) - p_{2m+1}(x)}{W(x)}W(t). \tag{1.1.15}$$

We note that $g$ has $m+2$ roots, i.e., $g(x_i) = 0$, $i = 0, 1, \ldots, m$, and $g(x) = 0$. Applying the generalized Rolle's theorem we show that $g'(\xi_i) = 0$, $i = 0, 1, \ldots, m$, where $\xi_i \in [a,b]$, $\xi_i \neq x_j$, $\xi_i \neq x$. Using (1.1.3) with $n_i = 2$ we also have $g'(x_i) = 0$, $i = 0, 1, \ldots, m$. Thus, $g'(t)$ has $2m+2$ roots in $[a,b]$. Applying the generalized Rolle's theorem again, we show that there exists $\xi \in (a,b)$ such that
$$g^{(2m+2)}(\xi) = 0.$$
Combining this with (1.1.15) yields
$$0 = f^{(2m+2)}(\xi) - \frac{f(x) - p_{2m+1}(x)}{W(x)}(2m+2)!.$$
Solving for $f(x) - p_{2m+1}(x)$ leads to (1.1.14).

Corollary 3. If $f(x)$ and $p_{2m+1}(x)$ are as in the previous theorem, then
$$|f(x) - p_{2m+1}(x)| \leq \frac{M_{2m+2}}{(2m+2)!}|W(x)|, \quad x \in [a,b], \tag{1.1.16}$$
and
$$\|f - p_{2m+1}\|_{\infty,[a,b]} \leq \frac{M_{2m+2}}{(2m+2)!}(b-a)^{2m+2}. \tag{1.1.17}$$

Proof. The proof is straightforward. At this point we would like to note that we can prove a uniform convergence result under the same conditions as for Lagrange interpolation.

Example: Let $f(x) = \sin(x)$, $x \in [0, \pi/2]$, and let $p_5(x)$ interpolate $f$ and $f'$ at $x_i = 0,\ 0.2,\ \frac{\pi}{2}$, with $n_i = 2$ and $2m+2 = 6$.

Using the error bound with $M_{2m+2} = 1$, $n_i = 2$, and
$$W(x) = \bigl[(x-0)(x-0.2)(x-\tfrac{\pi}{2})\bigr]^2,$$
we obtain
$$|e_5(1.1)| \leq \frac{|W(1.1)|}{6!} \approx \frac{0.2172}{720} \approx 3.0\times 10^{-4}. \tag{1.1.18}$$

1.7 Spline Interpolation

In this section we study piecewise polynomial interpolation and write the interpolation errors in terms of the subdivision size and the degree of the polynomials.

1.7.1 Piecewise Lagrange interpolation

We construct the piecewise linear interpolant of the data $(x_i, f(x_i))$, $i = 0, 1, \ldots, n$, with $x_0 < x_1 < \cdots < x_n$, as
$$P_1(x) = p_{1,i}(x) = f(x_i)\frac{x - x_{i+1}}{x_i - x_{i+1}} + f(x_{i+1})\frac{x - x_i}{x_{i+1} - x_i}, \quad x \in [x_i, x_{i+1}], \quad i = 0, 1, \ldots, n-1. \tag{1.1.19}$$
The interpolation error on $(x_i, x_{i+1})$ is bounded as
$$|e_{1,i}(x)| \leq \frac{M_{2,i}}{2}|(x - x_i)(x - x_{i+1})|, \quad x \in (x_i, x_{i+1}), \tag{1.1.20}$$
where $M_{2,i} = \max_{x_i\leq x\leq x_{i+1}}|f^{(2)}(x)|$. This can be written as
$$\|e_{1,i}\|_\infty \leq \frac{M_{2,i}h_i^2}{8}, \quad h_i = x_{i+1} - x_i. \tag{1.1.21}$$
The global error is

$$\|e_1\|_\infty \leq \frac{M_2 H^2}{8}, \quad H = \max_{i=0,\ldots,n-1} h_i. \tag{1.1.22}$$

Theorem. Let $f \in C^0[a,b]$ be twice differentiable on $(a,b)$. If $P_1(x)$ is the piecewise linear interpolant of $f$ at $x_i = a + ih$, $i = 0, 1, \ldots, n$, $h = (b-a)/n$, then $P_1$ converges uniformly to $f$ as $n \to \infty$.

Proof. We prove this theorem using the error estimate (1.1.22).

For piecewise quadratic interpolation we select $h = (b-a)/n$ and construct $p_{2,i}$ that interpolates $f$ at $x_i$, $(x_i + x_{i+1})/2$, and $x_{i+1}$. In this case the interpolation error is bounded as
$$\|e_{2,i}\|_\infty \leq \frac{M_{3,i}(h_i/2)^3}{9\sqrt{3}}, \quad h_i = x_{i+1} - x_i. \tag{1.1.23}$$
A global bound is
$$\|e_2\|_\infty \leq \frac{M_3(H/2)^3}{9\sqrt{3}}, \quad H = \max_i h_i. \tag{1.1.24}$$

Theorem. Let $f \in C^0[a,b]$ be $m+1$ times differentiable on $(a,b)$, and let $x_0 < x_1 < \cdots < x_n$ with $h_i = x_{i+1} - x_i$ and $H = \max_i h_i$. If $P_m(x)$ is the piecewise polynomial of degree $m$ on each subinterval $[x_i, x_{i+1}]$ and $P_m(x)$ interpolates $f$ on $[x_i, x_{i+1}]$ at $x_{i,k} = x_i + k\tilde{h}_i$, $k = 0, 1, \ldots, m$, $\tilde{h}_i = h_i/m$, then $P_m$ converges uniformly to $f$ as $H \to 0$.

Proof. Again, we prove this theorem using the error bound
$$\|e_m\|_\infty \leq \frac{M_{m+1}H^{m+1}}{(m+1)!}. \tag{1.1.25}$$

Similarly, we may construct piecewise Hermite interpolation polynomials following the same line of reasoning as for Lagrange interpolation.
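The bound (1.1.22) is easy to verify numerically. The sketch below (plain Python, not part of the notes) interpolates $f(x) = \sin(x)$ on $[0,\pi]$ piecewise linearly; since $M_2 = 1$, the measured maximum error must stay below $H^2/8$.

```python
import math

def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise linear interpolant P_1 at x (xs increasing)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside the interpolation interval")

# f = sin on [0, pi]; M_2 = 1, so the global bound (1.1.22) is H^2 / 8
n = 8
xs = [math.pi * i / n for i in range(n + 1)]
ys = [math.sin(x) for x in xs]
H = math.pi / n
samples = [math.pi * j / 1000 for j in range(1001)]
err = max(abs(math.sin(x) - piecewise_linear(xs, ys, x)) for x in samples)
assert err <= H * H / 8
```

Doubling $n$ divides $H$ by two, so the bound, and in practice the error, shrinks by roughly a factor of four, consistent with the $O(H^2)$ rate.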

1.7.2 Cubic spline interpolation

We use piecewise polynomials of degree three that are $C^2$ and interpolate the data, i.e., $S(x_k) = f(x_k) = y_k$, $k = 0, 1, \ldots, n$.

Algorithm:

(i) Order the points $x_k$, $k = 0, 1, \ldots, n$, such that
$$a = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n = b.$$

(ii) Let $S(x)$ be a piecewise spline defined by $n$ cubic polynomials such that
$$S(x) = S_k(x) = a_k + b_k(x - x_k) + c_k(x - x_k)^2 + d_k(x - x_k)^3, \quad x_k \leq x \leq x_{k+1}.$$

(iii) Find $a_k, b_k, c_k, d_k$, $k = 0, 1, \ldots, n-1$, such that
(1) $S(x_k) = y_k$, $k = 0, 1, \ldots, n$;
(2) $S_k(x_{k+1}) = S_{k+1}(x_{k+1})$, $k = 0, \ldots, n-2$;
(3) $S_k'(x_{k+1}) = S_{k+1}'(x_{k+1})$, $k = 0, \ldots, n-2$;
(4) $S_k''(x_{k+1}) = S_{k+1}''(x_{k+1})$, $k = 0, \ldots, n-2$.

Theorem. If $A$ is an $n \times n$ strictly diagonally dominant matrix, i.e.,
$$|a_{kk}| > \sum_{i=1,\, i\neq k}^{n}|a_{ki}|, \quad k = 1, 2, \ldots, n,$$
then $A$ is nonsingular.

Proof. By contradiction, we assume that $A$ is singular, i.e., there exists a nonzero vector $x$ such that $Ax = 0$, and let $x_k$ be such that $|x_k| = \max_i|x_i|$. This leads to
$$a_{kk}x_k = -\sum_{i=1,\, i\neq k}^{n} a_{ki}x_i. \tag{1.1.26}$$
Taking the absolute value and using the triangle inequality, we obtain
$$|a_{kk}||x_k| \leq \sum_{i=1,\, i\neq k}^{n}|a_{ki}||x_i|. \tag{1.1.27}$$

Dividing both sides by $|x_k|$, we get
$$|a_{kk}| \leq \sum_{i=1,\, i\neq k}^{n}|a_{ki}|\frac{|x_i|}{|x_k|} \leq \sum_{i=1,\, i\neq k}^{n}|a_{ki}|. \tag{1.1.28}$$
This leads to a contradiction, since $A$ is strictly diagonally dominant.

Theorem. Let us consider the set of data points $(x_i, f(x_i))$, $i = 0, 1, \ldots, n$, such that $x_0 < x_1 < \cdots < x_n$. If $S''(x_0) = S''(x_n) = 0$, then there exists a unique piecewise cubic polynomial that satisfies the conditions (iii).

Proof. Existence: We set $S''(x_k) = m_k$ and $h_k = x_{k+1} - x_k$, and use piecewise linear interpolation of $S''$ to write
$$S_k''(x) = m_k\frac{x - x_{k+1}}{x_k - x_{k+1}} + m_{k+1}\frac{x - x_k}{x_{k+1} - x_k} = -\frac{m_k}{h_k}(x - x_{k+1}) + \frac{m_{k+1}}{h_k}(x - x_k), \quad x_k \leq x \leq x_{k+1}.$$
With this definition of $S''$, condition (4) is automatically satisfied. Integrating $S_k''(x)$ twice, we obtain
$$S_k(x) = -\frac{m_k}{6h_k}(x - x_{k+1})^3 + \frac{m_{k+1}}{6h_k}(x - x_k)^3 + p_k(x_{k+1} - x) + q_k(x - x_k).$$
We need to find $m_k$, $q_k$, and $p_k$, $k = 0, 1, 2, \ldots, n-1$. In order to enforce the conditions (1) and (2), we write
$$S_k(x_k) = y_k = \frac{m_k h_k^2}{6} + p_k h_k, \qquad S_k(x_{k+1}) = y_{k+1} = \frac{m_{k+1}h_k^2}{6} + q_k h_k.$$
Solving for $p_k$ and $q_k$ gives
$$p_k = \frac{y_k}{h_k} - \frac{m_k h_k}{6}, \tag{1.1.29a}$$

$$q_k = \frac{y_{k+1}}{h_k} - \frac{m_{k+1}h_k}{6}. \tag{1.1.29b}$$
We note that if $m_k$, $k = 0, 1, \ldots, n$, are known, the previous equations may be used to compute $p_k$ and $q_k$. Now, substituting $p_k$ and $q_k$ into the equation for $S_k$, we have
$$S_k(x) = -\frac{m_k}{6h_k}(x - x_{k+1})^3 + \frac{m_{k+1}}{6h_k}(x - x_k)^3 + \Bigl(\frac{y_k}{h_k} - \frac{m_k h_k}{6}\Bigr)(x_{k+1} - x) + \Bigl(\frac{y_{k+1}}{h_k} - \frac{m_{k+1}h_k}{6}\Bigr)(x - x_k). \tag{1.1.30}$$
We apply condition (3) to enforce the continuity of $S'(x)$:
$$S_k'(x_{k+1}) = S_{k+1}'(x_{k+1}), \quad k = 0, 1, \ldots, n-2, \tag{1.1.31}$$
where
$$S_k'(x) = -\frac{m_k}{2h_k}(x - x_{k+1})^2 + \frac{m_{k+1}}{2h_k}(x - x_k)^2 - \Bigl(\frac{y_k}{h_k} - \frac{m_k h_k}{6}\Bigr) + \Bigl(\frac{y_{k+1}}{h_k} - \frac{m_{k+1}h_k}{6}\Bigr), \quad x_k \leq x \leq x_{k+1}, \tag{1.1.32}$$
and
$$S_{k+1}'(x) = -\frac{m_{k+1}}{2h_{k+1}}(x - x_{k+2})^2 + \frac{m_{k+2}}{2h_{k+1}}(x - x_{k+1})^2 - \Bigl(\frac{y_{k+1}}{h_{k+1}} - \frac{m_{k+1}h_{k+1}}{6}\Bigr) + \Bigl(\frac{y_{k+2}}{h_{k+1}} - \frac{m_{k+2}h_{k+1}}{6}\Bigr), \quad x_{k+1} \leq x \leq x_{k+2}. \tag{1.1.33}$$
Taking the limit from the left at $x_{k+1}$ leads to
$$S_k'(x_{k+1}) = \frac{m_{k+1}h_k}{3} + \frac{m_k h_k}{6} + d_k. \tag{1.1.34}$$
Taking the limit from the right at $x_{k+1}$ yields

$$S_{k+1}'(x_{k+1}) = -\frac{m_{k+1}h_{k+1}}{3} - \frac{m_{k+2}h_{k+1}}{6} + d_{k+1},$$
where
$$d_k = \frac{y_{k+1} - y_k}{h_k}, \quad k = 0, 1, \ldots, n-1.$$
Using (1.1.31) we obtain the following system of $n-1$ equations in the $n+1$ unknowns $m_0, m_1, \ldots, m_n$:
$$\text{(I)} \qquad m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2}h_{k+1} = 6(d_{k+1} - d_k), \quad k = 0, 1, 2, \ldots, n-2. \tag{1.1.35}$$
Now we need to close the system by adding two more equations, obtained from $S''(x_0) = 0$ and $S''(x_n) = 0$, which leads to
$$m_0 = 0, \qquad m_n = 0. \tag{1.1.36}$$
This is called the natural spline. The system (1.1.35) and (1.1.36) leads to
$$\text{(I.NAT)} \qquad \begin{cases} 2(h_0 + h_1)m_1 + h_1 m_2 = u_0,\\ m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2}h_{k+1} = u_k, & 1 \leq k \leq n-3,\\ h_{n-2}m_{n-2} + 2(h_{n-2} + h_{n-1})m_{n-1} = u_{n-2}, \end{cases}$$
where $u_k = 6(d_{k+1} - d_k)$. In matrix form we write
$$\begin{bmatrix} 2(h_0+h_1) & h_1 & & & \\ h_1 & 2(h_1+h_2) & h_2 & & \\ & h_2 & 2(h_2+h_3) & h_3 & \\ & & \ddots & \ddots & \ddots \\ & & & h_{n-2} & 2(h_{n-2}+h_{n-1}) \end{bmatrix} \begin{bmatrix} m_1\\ m_2\\ m_3\\ \vdots\\ m_{n-1} \end{bmatrix} = \begin{bmatrix} u_0\\ u_1\\ u_2\\ \vdots\\ u_{n-2} \end{bmatrix}.$$
The resulting matrix is symmetric, strictly diagonally dominant, and positive definite, so the system yields a unique solution.
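The natural-spline construction can be sketched end to end (plain Python, not part of the notes): solve (I.NAT) with a tridiagonal elimination for the moments $m_k$, then evaluate $S_k$ via (1.1.29)-(1.1.30).

```python
def natural_spline_moments(xs, ys):
    """Solve system (I.NAT) for the moments m_0..m_n (with m_0 = m_n = 0)."""
    n = len(xs) - 1
    h = [xs[k + 1] - xs[k] for k in range(n)]
    d = [(ys[k + 1] - ys[k]) / h[k] for k in range(n)]
    u = [6 * (d[k + 1] - d[k]) for k in range(n - 1)]
    diag = [2 * (h[k] + h[k + 1]) for k in range(n - 1)]
    for k in range(1, n - 1):               # forward elimination (tridiagonal)
        w = h[k] / diag[k - 1]
        diag[k] -= w * h[k]
        u[k] -= w * u[k - 1]
    m = [0.0] * (n + 1)
    for k in range(n - 2, -1, -1):          # back substitution
        upper = h[k + 1] * m[k + 2] if k < n - 2 else 0.0
        m[k + 1] = (u[k] - upper) / diag[k]
    return m

def spline_eval(xs, ys, m, x):
    """Evaluate the cubic spline S_k(x) of (1.1.30) on the proper subinterval."""
    n = len(xs) - 1
    k = next(i for i in range(n) if x <= xs[i + 1])
    h = xs[k + 1] - xs[k]
    p = ys[k] / h - m[k] * h / 6            # (1.1.29a)
    q = ys[k + 1] / h - m[k + 1] * h / 6    # (1.1.29b)
    return (-m[k] * (x - xs[k + 1])**3 + m[k + 1] * (x - xs[k])**3) / (6 * h) \
           + p * (xs[k + 1] - x) + q * (x - xs[k])

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, 1.0]
m = natural_spline_moments(xs, ys)
assert all(abs(spline_eval(xs, ys, m, xk) - yk) < 1e-12 for xk, yk in zip(xs, ys))
```

The elimination never pivots, which is justified precisely by the strict diagonal dominance of the matrix above.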

Other splines include:

Not-a-Knot Spline: We add the two conditions S'''_0(x_1) = S'''_1(x_1), which leads to

(m_1 - m_0)/h_0 = (m_2 - m_1)/h_1,   (1.1.37)

and S'''_{n-2}(x_{n-1}) = S'''_{n-1}(x_{n-1}), which leads to

(m_n - m_{n-1})/h_{n-1} = (m_{n-1} - m_{n-2})/h_{n-2}.   (1.1.38)

Solving (1.1.37) and (1.1.38) for m_0 and m_n we obtain

m_0 = (1 + h_0/h_1) m_1 - (h_0/h_1) m_2,   (1.1.39)
m_n = -(h_{n-1}/h_{n-2}) m_{n-2} + (1 + h_{n-1}/h_{n-2}) m_{n-1}.   (1.1.40)

Substituting into the system (1.1.35) we obtain

(I.NK)
  (3h_0 + 2h_1 + h_0^2/h_1) m_1 + (h_1 - h_0^2/h_1) m_2 = u_0,
  m_k h_k + 2 m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = u_k,   k = 1, ..., n-3,
  (h_{n-2} - h_{n-1}^2/h_{n-2}) m_{n-2} + (2h_{n-2} + 3h_{n-1} + h_{n-1}^2/h_{n-2}) m_{n-1} = u_{n-2},

where u_k = 6(d_{k+1} - d_k), k = 0, 1, ..., n-2. We solve the system for m_1, m_2, ..., m_{n-1} and use (1.1.39) and (1.1.40) to find m_0 and m_n. We then use (1.1.29) to find p_k and q_k, k = 0, 1, ..., n-1. Finally, we use the formula (1.1.30) that defines S_k(x).
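Equations (1.1.39)-(1.1.40) are easy to check in exact rational arithmetic. The sketch below is ours (it borrows the interior moments m_1 = -4/15, m_2 = -1/15 from the worked example that follows); it recovers the end moments and lets one verify that S''' is continuous across the first and last interior nodes:

```python
from fractions import Fraction as F

def not_a_knot_end_moments(h, m):
    """Recover m_0 and m_n from the interior moments m_1,...,m_{n-1}
    via (1.1.39)-(1.1.40); h holds h_0,...,h_{n-1}."""
    m0 = (1 + h[0] / h[1]) * m[0] - (h[0] / h[1]) * m[1]
    mn = -(h[-1] / h[-2]) * m[-2] + (1 + h[-1] / h[-2]) * m[-1]
    return m0, mn

# interior moments from the worked not-a-knot example (n = 3, f(x) = x/(2+x))
h = [F(2), F(1), F(1)]
m1, m2 = F(-4, 15), F(-1, 15)
m0, m3 = not_a_knot_end_moments(h, [m1, m2])
```

The divided ratios (m_1 - m_0)/h_0 and (m_2 - m_1)/h_1 then agree, which is exactly the not-a-knot condition (1.1.37); similarly at the right end.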

Clamped Spline: We close the system (I) using the conditions

S'_0(x_0) = f'(x_0),   S'_{n-1}(x_n) = f'(x_n).

From

S'_0(x) = -(m_0/(2h_0))(x_1 - x)^2 + (m_1/(2h_0))(x - x_0)^2 - (y_0/h_0 - m_0 h_0/6) + (y_1/h_0 - m_1 h_0/6)

we obtain

S'_0(x_0) = -m_0 h_0/3 - m_1 h_0/6 + (y_1 - y_0)/h_0.

The boundary equation S'_0(x_0) = f'(x_0) leads to

2 m_0 h_0 + m_1 h_0 = 6(d_0 - f'(x_0)).   (1.1.41)

The boundary condition S'_{n-1}(x_n) = f'(x_n) yields the equation

S'_{n-1}(x_n) = m_n h_{n-1}/3 + m_{n-1} h_{n-1}/6 + d_{n-1} = f'(x_n),

which leads to

2 m_n h_{n-1} + m_{n-1} h_{n-1} = 6(f'(x_n) - d_{n-1}).   (1.1.42)

Now, the system (I) is reduced to

(I.CL)
  (3h_0/2 + 2h_1) m_1 + h_1 m_2 = u_0 - 3(d_0 - f'(x_0)),
  m_k h_k + 2 m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = u_k,   k = 1, 2, ..., n-3,
  h_{n-2} m_{n-2} + (2h_{n-2} + 3h_{n-1}/2) m_{n-1} = u_{n-2} - 3(f'(x_n) - d_{n-1}).

Matrix formulation for the not-a-knot spline: A M = U,

where

A = [ 3h_0 + 2h_1 + h_0^2/h_1    h_1 - h_0^2/h_1                                         ]
    [ h_1                        2(h_1+h_2)        h_2                                   ]
    [                            h_2               2(h_2+h_3)   h_3                      ]
    [                                              ...          ...        h_{n-2}      ]
    [                 h_{n-2} - h_{n-1}^2/h_{n-2}    2h_{n-2} + 3h_{n-1} + h_{n-1}^2/h_{n-2} ]

M = [m_1, m_2, ..., m_{n-1}]^t,   U = [u_0, u_1, ..., u_{n-2}]^t.

The system admits a unique solution since the matrix is strictly diagonally dominant.

Example 1: Consider the function f(x) = x/(2+x) at the points -1, 1, 2, 3.

x_i    f(x_i)    1st DD    6*(2nd DD)
-1     -1
                  2/3
 1      1/3                u_0 = -3
                  1/6
 2      1/2                u_1 = -2/5
                  1/10
 3      3/5

with h_0 = 2, h_1 = 1, h_2 = 1. The not-a-knot system

[ 3h_0 + 2h_1 + h_0^2/h_1   h_1 - h_0^2/h_1          ] [ m_1 ]   [ u_0 ]
[ h_1 - h_2^2/h_1           2h_1 + 3h_2 + h_2^2/h_1  ] [ m_2 ] = [ u_1 ]

becomes

[ 12   -3 ] [ m_1 ]   [ -3   ]
[  0    6 ] [ m_2 ] = [ -2/5 ]

The solution is m_1 = -4/15, m_2 = -1/15.
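The 2x2 solve can be double-checked with exact rational arithmetic (an illustrative sketch, not part of the notes):

```python
from fractions import Fraction as F

h0, h1, h2 = F(2), F(1), F(1)
u0, u1 = F(-3), F(-2, 5)                 # u_k = 6 (d_{k+1} - d_k)
# entries of the 2x2 not-a-knot system (I.NK) for n = 3
a11 = 3*h0 + 2*h1 + h0**2 / h1           # first-row diagonal
a12 = h1 - h0**2 / h1
a21 = h1 - h2**2 / h1
a22 = 2*h1 + 3*h2 + h2**2 / h1
# solve by Cramer's rule
det = a11 * a22 - a12 * a21
m1 = (u0 * a22 - a12 * u1) / det
m2 = (a11 * u1 - a21 * u0) / det
```

This reproduces m_1 = -4/15 and m_2 = -1/15.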

Matrix formulation for the clamped spline:

[ 3h_0/2 + 2h_1   h_1                                     ] [ m_1     ]   [ u_0 - 3(d_0 - f'(x_0))         ]
[ h_1             2(h_1+h_2)   h_2                        ] [ m_2     ]   [ u_1                            ]
[                 h_2          2(h_2+h_3)   h_3           ] [ m_3     ] = [ u_2                            ]
[                              ...          ...  h_{n-2}  ] [ ...     ]   [ ...                            ]
[                          h_{n-2}   2h_{n-2} + 3h_{n-1}/2] [ m_{n-1} ]   [ u_{n-2} - 3(f'(x_n) - d_{n-1}) ]

The system admits a unique solution since the matrix is symmetric, strictly diagonally dominant, and positive definite (SPD).

Example 1: Let us interpolate the function f(x) = x/(2+x), for which f'(x) = 2/(2+x)^2. Thus S'_0(x_0) = f'(-1) = 2 and S'_2(x_3) = f'(3) = 2/25. To compute the u_i, i = 0, 1, we use the following table

x_i    f(x_i)    1st DD    6*(2nd DD)
-1     -1
                  2/3
 1      1/3                u_0 = -3
                  1/6
 2      1/2                u_1 = -2/5
                  1/10
 3      3/5

with h_0 = 2, h_1 = 1, h_2 = 1. The clamped system is

[ 3h_0/2 + 2h_1   h_1             ] [ m_1 ]   [ u_0 - 3(d_0 - f'(x_0)) ]
[ h_1             2h_1 + 3h_2/2   ] [ m_2 ] = [ u_1 - 3(f'(x_3) - d_2) ]
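The entries of this system, its solution, and the recovered end moments can be computed in exact rational arithmetic; the following sketch (ours) mirrors the calculation carried out below:

```python
from fractions import Fraction as F

h0, h1, h2 = F(2), F(1), F(1)
d0, d2 = F(2, 3), F(1, 10)           # first divided differences
fp0, fp3 = F(2), F(2, 25)            # f'(-1) and f'(3)
u0, u1 = F(-3), F(-2, 5)
# 2x2 clamped system (I.CL) for n = 3 (symmetric, so a21 = a12)
a11 = 3*h0/2 + 2*h1
a12 = h1
a22 = 2*h1 + 3*h2/2
b1 = u0 - 3*(d0 - fp0)
b2 = u1 - 3*(fp3 - d2)
det = a11*a22 - a12*a12
m1 = (b1*a22 - a12*b2) / det         # Cramer's rule
m2 = (a11*b2 - a12*b1) / det
# endpoint moments from (1.1.41)-(1.1.42)
m0 = 3*(d0 - fp0)/h0 - m1/2
m3 = 3*(fp3 - d2)/h2 - m2/2
```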

[ 5   1   ] [ m_1 ]   [ 1      ]
[ 1   7/2 ] [ m_2 ] = [ -17/50 ]

The solution is m_1 = 64/275, m_2 = -9/55, and from (1.1.41)-(1.1.42),

m_0 = 3(f[x_0,x_1] - f'(x_0))/h_0 - m_1/2 = -582/275,
m_3 = 3(f'(x_3) - f[x_2,x_3])/h_2 - m_2/2 = 6/275.

On [-1,1]: p_0 = y_0/h_0 - m_0 h_0/6 = 113/550, q_0 = y_1/h_0 - m_1 h_0/6 = 49/550, so

S_0(x) = (m_0/12)(1-x)^3 + (m_1/12)(x+1)^3 + (113/550)(1-x) + (49/550)(x+1).

On [1,2]: p_1 = y_1/h_1 - m_1 h_1/6 = 81/275, q_1 = y_2/h_1 - m_2 h_1/6 = 29/55, so

S_1(x) = (m_1/6)(2-x)^3 + (m_2/6)(x-1)^3 + (81/275)(2-x) + (29/55)(x-1).

On [2,3]: p_2 = y_2/h_2 - m_2 h_2/6 = 29/55, q_2 = y_3/h_2 - m_3 h_2/6 = 164/275, so

S_2(x) = (m_2/6)(3-x)^3 + (m_3/6)(x-2)^3 + (29/55)(3-x) + (164/275)(x-2).

Examples of natural spline approximations

Example 1: Let f(x) = |x| and construct the natural cubic spline interpolant at the points x_i = -2 + i, i = 0, 1, 2, 3, 4.

x_i    f(x_i)    1st DD    6*(2nd DD)
-2     2
                  -1
-1     1                   u_0 = 0
                  -1
 0     0                   u_1 = 12
                   1
 1     1                   u_2 = 0
                   1
 2     2

We note that h_i = h = 1, i = 0, 1, 2, 3, and m_0 = m_4 = 0, which leads to the following system

[ 4  1  0 ] [ m_1 ]   [ 0  ]
[ 1  4  1 ] [ m_2 ] = [ 12 ]
[ 0  1  4 ] [ m_3 ]   [ 0  ]

The solution is m_1 = -6/7, m_2 = 24/7, m_3 = -6/7.

On [-2,-1]: p_0 = (y_0 - m_0 h^2/6)/h = 2, q_0 = (y_1 - m_1 h^2/6)/h = 1 + 1/7 = 8/7,

S_0(x) = -(1/7)(x+2)^3 + 2(-1-x) + (8/7)(x+2).

On [-1,0]: p_1 = (y_1 - m_1 h^2/6)/h = 8/7, q_1 = (y_2 - m_2 h^2/6)/h = -4/7,

S_1(x) = (1/7)x^3 + (4/7)(x+1)^3 - (8/7)x - (4/7)(x+1).

On [0,1]: p_2 = (y_2 - m_2 h^2/6)/h = -4/7, q_2 = (y_3 - m_3 h^2/6)/h = 8/7,

S_2(x) = (4/7)(1-x)^3 - (1/7)x^3 - (4/7)(1-x) + (8/7)x.

On [1,2]: p_3 = (y_3 - m_3 h^2/6)/h = 8/7, q_3 = (y_4 - m_4 h^2/6)/h = 2,

S_3(x) = -(1/7)(2-x)^3 + (8/7)(2-x) + 2(x-1).

In summary, p = (2, 8/7, -4/7, 8/7) and q = (8/7, -4/7, 8/7, 2).

Example 2: Consider again the data for f(x) = x/(2+x) at the points -1, 1, 2, 3:

x_i    f(x_i)    1st DD    6*(2nd DD)
-1     -1
                  2/3
 1      1/3                u_0 = -3
                  1/6
 2      1/2                u_1 = -2/5
                  1/10
 3      3/5

with h_0 = 2, h_1 = 1, h_2 = 1. The natural cubic spline conditions give m_0 = m_3 = 0. The other coefficients satisfy the system

[ 2(h_0+h_1)   h_1        ] [ m_1 ]   [ u_0 ]          [ 6  1 ] [ m_1 ]   [ -3   ]
[ h_1          2(h_1+h_2) ] [ m_2 ] = [ u_1 ],  i.e.,  [ 1  4 ] [ m_2 ] = [ -2/5 ]

which admits the solution m_1 = -58/115, m_2 = 3/115.

On [-1,1]: p_0 = (y_0 - m_0 h_0^2/6)/h_0 = -1/2, q_0 = (y_1 - m_1 h_0^2/6)/h_0 = 77/230.
On [1,2]:  p_1 = (y_1 - m_1 h_1^2/6)/h_1 = 48/115, q_1 = (y_2 - m_2 h_1^2/6)/h_1 = 57/115.
On [2,3]:  p_2 = (y_2 - m_2 h_2^2/6)/h_2 = 57/115, q_2 = (y_3 - m_3 h_2^2/6)/h_2 = 3/5.

Thus p = [-1/2, 48/115, 57/115] and q = [77/230, 57/115, 3/5].

Matlab commands for splines

x = 0:1:10; y = sin(x);
xi = 0:0.2:10; yi = sin(xi);
y1 = interp1(x,y,xi);            % piecewise linear interpolation
plot(x,y,'o',xi,yi)              % plot the data and the exact function
hold on; plot(xi,y1);
y2 = interp1(x,y,xi,'spline');   % spline interpolation
y3 = interp1(x,y,xi,'cubic');    % piecewise cubic interpolation
y4 = spline(x,y,xi);             % not-a-knot spline
plot(xi,y4); plot(xi,y2); plot(xi,y3);
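To complement the Matlab commands, here is a small pure-Python evaluator for formula (1.1.30) (the function name is ours), applied to the Example 2 coefficients; the moments m_1 = -58/115 and m_2 = 3/115 come from solving the 2x2 system above:

```python
def eval_spline(t, xs, m, p, q):
    """Evaluate the cubic spline of (1.1.30) at t:
    S_k(t) = m_k (x_{k+1}-t)^3/(6 h_k) + m_{k+1} (t-x_k)^3/(6 h_k)
             + p_k (x_{k+1}-t) + q_k (t-x_k)."""
    k = 0
    while k < len(xs) - 2 and t > xs[k+1]:
        k += 1                                # locate the interval [x_k, x_{k+1}]
    h = xs[k+1] - xs[k]
    return (m[k] * (xs[k+1] - t)**3 / (6*h) + m[k+1] * (t - xs[k])**3 / (6*h)
            + p[k] * (xs[k+1] - t) + q[k] * (t - xs[k]))

# natural spline of x/(2+x) from Example 2
xs = [-1.0, 1.0, 2.0, 3.0]
m = [0.0, -58/115, 3/115, 0.0]
p = [-1/2, 48/115, 57/115]
q = [77/230, 57/115, 3/5]
```

Evaluating at the nodes reproduces the data values, as the interpolation conditions require.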

Convergence of cubic splines

We will study the uniform convergence of the clamped cubic spline for f ∈ C^4[a,b]. We first write the matrix formulation for the clamped cubic spline as A M = B, where M = [m_0, m_1, ..., m_n]^t and B = [b_0, b_1, ..., b_n]^t. Let us recall that the clamped end conditions read

h_0 m_0/3 + h_0 m_1/6 = (y_1 - y_0)/h_0 - f'(x_0)

and

h_{n-1} m_{n-1}/6 + h_{n-1} m_n/3 = f'(x_n) - (y_n - y_{n-1})/h_{n-1}.

We combine (1.1.41), (1.1.42) and (1.1.35), dividing each equation by a suitable factor so that every diagonal entry equals 2, to write the (n+1) x (n+1) matrix A as

a_{i,j} = 2,                          if i = j,
a_{i,j} = 1,                          if (i,j) = (0,1) or (i,j) = (n, n-1),
a_{i,j} = h_i/(h_i + h_{i-1}),        if j = i+1, 1 <= i <= n-1,
a_{i,j} = h_{i-1}/(h_i + h_{i-1}),    if j = i-1, 1 <= i <= n-1,
a_{i,j} = 0,                          otherwise.   (1.1.43)

Thus the (n+1) by (n+1) matrix A can be written as

A = [ 2                        1                               ]
    [ h_0/(h_0+h_1)            2        h_1/(h_0+h_1)          ]
    [      ...                ...            ...               ]
    [ h_{i-1}/(h_{i-1}+h_i)    2        h_i/(h_{i-1}+h_i)      ]   (1.1.44)
    [      ...                ...            ...               ]
    [                          1        2                      ]

The right-hand side B is defined by

b_0 = (6/h_0)((y_1 - y_0)/h_0 - f'(x_0)),   (1.1.45)

b_i = (6/(h_i + h_{i-1}))((y_{i+1} - y_i)/h_i - (y_i - y_{i-1})/h_{i-1}),   i = 1, 2, ..., n-1,   (1.1.46)

b_n = (6/h_{n-1})(f'(x_n) - (y_n - y_{n-1})/h_{n-1}).   (1.1.47)

Lemma. Let A be defined by (1.1.43) and let Az = w. Then ||z||_∞ <= ||w||_∞.

Proof. Let z_k be such that |z_k| = ||z||_∞. The k-th equation of Az = w reads

a_{k,k-1} z_{k-1} + 2 z_k + a_{k,k+1} z_{k+1} = w_k.   (1.1.48)

Applying the triangle inequality we have

||w||_∞ >= |w_k| = |a_{k,k-1} z_{k-1} + 2 z_k + a_{k,k+1} z_{k+1}|   (1.1.49)
        >= 2|z_k| - a_{k,k-1} |z_{k-1}| - a_{k,k+1} |z_{k+1}|   (1.1.50)
        >= (2 - (a_{k,k-1} + a_{k,k+1})) |z_k|.   (1.1.51)

Using the fact that a_{k,k-1} + a_{k,k+1} = 1 we complete the proof.

In order to state the following lemma we let F = [f''(x_0), f''(x_1), ..., f''(x_n)]^t, R = B - AF = A(M - F), and H = max_{i=0,...,n-1} h_i.

Lemma. If f ∈ C^4[a,b] and ||f^(4)||_∞ <= M_4, then

||M - F||_∞ <= ||R||_∞ <= (3/4) M_4 H^2.   (1.1.52)

Proof. For the first component,

r_0 = b_0 - 2 f''(x_0) - f''(x_1) = (6/h_0)((y_1 - y_0)/h_0 - f'(x_0)) - 2 f''(x_0) - f''(x_1).   (1.1.53)

Using Taylor expansions we write

(y_1 - y_0)/h_0 = (f(x_0 + h_0) - f(x_0))/h_0   (1.1.54)
              = (1/h_0)(h_0 f'(x_0) + (h_0^2/2) f''(x_0) + (h_0^3/6) f'''(x_0) + (h_0^4/24) f^(4)(τ_1))   (1.1.55)

and

f''(x_0 + h_0) = f''(x_0) + h_0 f'''(x_0) + (h_0^2/2) f^(4)(τ_2).   (1.1.56)

Thus,

r_0 = (h_0^2/4) f^(4)(τ_1) - (h_0^2/2) f^(4)(τ_2),   (1.1.57)

|r_0| <= (3/4) H^2 M_4.   (1.1.58)

Similarly for

r_n = b_n - f''(x_{n-1}) - 2 f''(x_n),   (1.1.59)

with

b_n = (6/h_{n-1})(f'(x_n) - (y_n - y_{n-1})/h_{n-1}).   (1.1.60)

Using Taylor series,

(f(x_n) - f(x_{n-1}))/h_{n-1} = (1/h_{n-1})(h_{n-1} f'(x_n) - (h_{n-1}^2/2) f''(x_n) + (h_{n-1}^3/6) f'''(x_n) - (h_{n-1}^4/24) f^(4)(τ_1)),   (1.1.61)

f''(x_{n-1}) = f''(x_n - h_{n-1}) = f''(x_n) - h_{n-1} f'''(x_n) + (h_{n-1}^2/2) f^(4)(τ_2),   (1.1.62)

and we obtain, as before,

|r_n| <= (3/4) H^2 M_4.   (1.1.63)

For the interior components,

r_j = b_j - μ_j f''(x_{j-1}) - 2 f''(x_j) - λ_j f''(x_{j+1}),   (1.1.64a)

where

μ_j = h_{j-1}/(h_j + h_{j-1}),   λ_j = h_j/(h_j + h_{j-1}),   (1.1.64b)

b_j = (6/(h_{j-1} + h_j))((y_{j+1} - y_j)/h_j - (y_j - y_{j-1})/h_{j-1}).   (1.1.64c)

Using Taylor expansions about x_j we write

(y_{j+1} - y_j)/h_j = f'(x_j) + (h_j/2) f''(x_j) + (h_j^2/6) f'''(x_j) + (h_j^3/24) f^(4)(τ_1),   (1.1.64d)
(y_j - y_{j-1})/h_{j-1} = f'(x_j) - (h_{j-1}/2) f''(x_j) + (h_{j-1}^2/6) f'''(x_j) - (h_{j-1}^3/24) f^(4)(τ_2),   (1.1.64e)
f''(x_{j-1}) = f''(x_j) - h_{j-1} f'''(x_j) + (h_{j-1}^2/2) f^(4)(τ_3),   (1.1.64f)
f''(x_{j+1}) = f''(x_j) + h_j f'''(x_j) + (h_j^2/2) f^(4)(τ_4).   (1.1.64g)

Note that τ_i ∈ (x_{j-1}, x_{j+1}). Combining (1.1.64) we obtain

r_j = (1/(h_j + h_{j-1})) [ (h_j^3/4) f^(4)(τ_1) + (h_{j-1}^3/4) f^(4)(τ_2) - (h_{j-1}^3/2) f^(4)(τ_3) - (h_j^3/2) f^(4)(τ_4) ].   (1.1.65)

This can be bounded as

|r_j| <= (3/4) M_4 (h_j^3 + h_{j-1}^3)/(h_j + h_{j-1}).   (1.1.66)

Without loss of generality we assume h_j >= h_{j-1} and write

(h_j^3 + h_{j-1}^3)/(h_j + h_{j-1}) = h_j^2 (1 + (h_{j-1}/h_j)^3)/(1 + h_{j-1}/h_j) <= h_j^2 <= H^2.   (1.1.67)

Thus,

|r_j| <= (3/4) M_4 H^2.   (1.1.68)

Finally, using the preceding lemma we have

||M - F||_∞ <= ||R||_∞ <= (3/4) M_4 H^2,   (1.1.69)

which completes the proof.

Theorem. Let f ∈ C^4[a,b], a <= x_0 < x_1 < ... < x_n <= b, h_j = x_{j+1} - x_j and H = max_{j=0,...,n-1} h_j. Assume there exists K > 0 independent of H such that

H/h_j <= K,   j = 0, 1, ..., n-1.   (1.1.70)

If S(x) is the clamped cubic spline approximation of f at x_i, i = 0, 1, ..., n, then there exist C_k > 0 independent of H such that

||f^(k) - S^(k)||_{∞,[a,b]} <= C_k M_4 K H^{4-k},   k = 0, 1, 2, 3,   (1.1.71)

where M_4 = ||f^(4)||_∞.

Proof. For k = 3 and x ∈ [x_{j-1}, x_j], by adding and subtracting a few auxiliary terms the error can be written as

e'''(x) = f'''(x) - S'''(x) = f'''(x) - (m_j - m_{j-1})/h_{j-1}

        = f'''(x) - (m_j - f''(x_j))/h_{j-1} + (m_{j-1} - f''(x_{j-1}))/h_{j-1}
          - (f''(x_j) - f''(x))/h_{j-1} + (f''(x_{j-1}) - f''(x))/h_{j-1}.   (1.1.72)

Using the preceding lemma we bound the terms

|(m_j - f''(x_j))/h_{j-1}| <= 3 M_4 H^2/(4 h_{j-1}),   |(m_{j-1} - f''(x_{j-1}))/h_{j-1}| <= 3 M_4 H^2/(4 h_{j-1}).   (1.1.73)

We use Taylor series to obtain

f''(x_j) - f''(x) = (x_j - x) f'''(x) + ((x_j - x)^2/2) f^(4)(τ_1)

and

f''(x_{j-1}) - f''(x) = (x_{j-1} - x) f'''(x) + ((x_{j-1} - x)^2/2) f^(4)(τ_2),

so that

|e'''(x)| <= 3 M_4 H^2/(2 h_{j-1}) + (1/h_{j-1}) | h_{j-1} f'''(x) - (x_j - x) f'''(x) + (x_{j-1} - x) f'''(x)
             - ((x_j - x)^2/2) f^(4)(τ_1) + ((x_{j-1} - x)^2/2) f^(4)(τ_2) |.   (1.1.74)-(1.1.75)

The f''' terms cancel out (since (x_j - x) - (x_{j-1} - x) = h_{j-1}) to give

|e'''(x)| <= 3 M_4 H^2/(2 h_{j-1}) + (M_4/(2 h_{j-1})) [(x_j - x)^2 + (x_{j-1} - x)^2].

Using

||(x_j - x)^2 + (x_{j-1} - x)^2||_{∞,[x_{j-1},x_j]} = h_{j-1}^2 <= H^2,

we write

|e'''(x)| <= 3 M_4 H^2/(2 h_{j-1}) + M_4 H^2/(2 h_{j-1}) = 2 M_4 H^2/h_{j-1}.   (1.1.76)

Since H/h_{j-1} <= K we have

|f'''(x) - S'''(x)| <= 2 M_4 K H   for all x.   (1.1.77)

For k = 2, we note that for each x ∈ (a,b) there is an x_j such that |x_j - x| <= H/2. Now we rewrite e''(x) as

e''(x) = f''(x) - S''(x) = f''(x_j) - S''(x_j) + ∫_{x_j}^x (f'''(t) - S'''(t)) dt.   (1.1.78)

Using |∫ g dx| <= ∫ |g| dx and the preceding lemma we obtain

|e''(x)| <= (3/4) M_4 H^2 + |x - x_j| max_{x ∈ [a,b]} |f'''(x) - S'''(x)|   (1.1.79)
         <= (3/4) M_4 H^2 + (H/2)(2 M_4 K H) <= (7/4) K M_4 H^2,   since K >= 1.   (1.1.80)

Thus,

||f'' - S''||_∞ <= (7/4) K M_4 H^2.   (1.1.81)

For k = 1, we consider e(t) = f(t) - S(t). Since e(x_j) = 0, by Rolle's theorem there exist ξ_j, j = 0, 1, ..., n-1, such that e'(ξ_j) = 0; moreover e'(x_0) = e'(x_n) = 0 for the clamped spline. For every x ∈ [a,b] there exists ξ_i such that |x - ξ_i| <= H, and e'(x) can be written as

f'(x) - S'(x) = ∫_{ξ_i}^x (f''(t) - S''(t)) dt.   (1.1.82)

Thus,

|e'(x)| <= |x - ξ_i| ||e''||_∞ <= (7/4) M_4 K H^3.   (1.1.83)

For k = 0, for every x ∈ [a,b] there is an x_j such that |x - x_j| <= H/2, and we write

f(x) - S(x) = ∫_{x_j}^x (f'(t) - S'(t)) dt,   (1.1.84)

so that

|e(x)| <= |x - x_j| ||e'||_∞ <= (7/8) M_4 K H^4.   (1.1.85)

We conclude that S^(k) converges uniformly to f^(k), k = 0, 1, 2, 3, as H -> 0. Optimal bounds are proved by Birkhoff and de Boor (see Burden and Faires):

||f - S||_∞ <= (5/384) M_4 H^4.   (1.1.86)

We recall the Hermite interpolation error

|f - H_3| <= M_4 H^4/384.   (1.1.87)

Comparing the cubic spline and Hermite interpolation errors, we see that the ratio between the spline and the Hermite error bounds is only 5. We also note that Hermite interpolation requires the derivative at all the interpolation points, while the clamped spline needs the derivatives at the end points only.

Optimality of Splines: The optimality is in the sense that the cubic spline has the smallest curvature. For a curve defined by y = f(x) the curvature is

κ = |f''(x)| / [1 + (f'(x))^2]^{3/2}.

Here the curvature is approximated by |f''(x)|, and ∫_a^b S''(x)^2 dx is minimized. More precisely, we state the following theorem.

Theorem. If f ∈ C^2[a,b] and S(x) is the natural cubic spline that interpolates f at the n+1 points x_i, i = 0, 1, ..., n, then

∫_a^b S''(x)^2 dx <= ∫_a^b f''(x)^2 dx.   (1.1.88)

Proof. We consider the function e(x) = f(x) - S(x), with e(x_i) = 0, i = 0, 1, ..., n, and write the approximate curvature of f = S + e as

∫_a^b f''(x)^2 dx = ∫_a^b (S''(x) + e''(x))^2 dx
                  = ∫_a^b S''(x)^2 dx + ∫_a^b e''(x)^2 dx + 2 ∫_a^b S''(x) e''(x) dx.   (1.1.89)

We complete the proof by showing that the last term on the right-hand side of (1.1.89) is zero. Integrating by parts we obtain

∫_{x_i}^{x_{i+1}} e''(x) S''(x) dx = S''(x) e'(x) |_{x=x_i}^{x=x_{i+1}} - ∫_{x_i}^{x_{i+1}} S'''(x) e'(x) dx.

Summing over all intervals, the boundary terms telescope, since e' and S'' are continuous, and vanish because S''(a) = S''(b) = 0. Noting that S'''(x) = C_i is a constant on (x_i, x_{i+1}) we obtain

Sum_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} e''(x) S''(x) dx = -Sum_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} C_i e'(x) dx
                                                  = -Sum_{i=0}^{n-1} C_i [e(x_{i+1}) - e(x_i)] = 0,

where we used the fact that e(x_i) = 0. This establishes ∫_a^b S''(x) e''(x) dx = 0, which combined with (1.1.89) leads to (1.1.88).

The same result holds for the clamped cubic spline with S'(a) = f'(a) and S'(b) = f'(b); we follow the same line of reasoning to prove it. Thus, among all C^2 functions interpolating f at x_0, ..., x_n, the natural cubic spline has the smallest approximate curvature. In particular, this includes the clamped and not-a-knot splines.
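To see the fourth-order convergence established above in practice, the following self-contained sketch (ours, not from the notes) builds the natural spline of sin on [0, π] — a case where f'' vanishes at both endpoints, so the natural end conditions introduce no extra error — and measures how the maximum error drops when H is halved; the bound (1.1.71) predicts a reduction factor of roughly 2^4 = 16.

```python
import math

def max_spline_error(n):
    """Natural cubic spline of sin on [0, pi] with n equal subintervals;
    returns the maximum absolute error on a fine sample grid."""
    xs = [math.pi * k / n for k in range(n + 1)]
    ys = [math.sin(t) for t in xs]
    h = math.pi / n
    # right-hand side u_k = 6 (d_{k+1} - d_k) and main diagonal 2(h + h) = 4h
    u = [6.0 * (ys[k+2] - 2*ys[k+1] + ys[k]) / h for k in range(n - 1)]
    diag = [4.0 * h] * (n - 1)
    # Thomas algorithm for the tridiagonal system (I.NAT)
    for k in range(1, n - 1):
        w = h / diag[k-1]
        diag[k] -= w * h
        u[k] -= w * u[k-1]
    m = [0.0] * (n + 1)
    for k in range(n - 1, 0, -1):
        m[k] = (u[k-1] - h * m[k+1]) / diag[k-1]
    p = [ys[k] / h - m[k] * h / 6.0 for k in range(n)]
    q = [ys[k+1] / h - m[k+1] * h / 6.0 for k in range(n)]
    # sample the error of formula (1.1.30) on a fine grid
    err = 0.0
    for j in range(800):
        t = math.pi * (j + 0.5) / 800
        k = min(int(t / h), n - 1)
        s = (m[k] * (xs[k+1] - t)**3 + m[k+1] * (t - xs[k])**3) / (6.0 * h) \
            + p[k] * (xs[k+1] - t) + q[k] * (t - xs[k])
        err = max(err, abs(s - math.sin(t)))
    return err

ratio = max_spline_error(8) / max_spline_error(16)
```

The computed ratio comes out close to 16, consistent with the O(H^4) estimate.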

B-splines

We describe a system of B-splines (B stands for "basis") from which other splines can be obtained. We first start with B-splines of degree 0, i.e., piecewise constant splines defined as

B_i^0(x) = 1 if x_i <= x < x_{i+1}, and 0 otherwise,   i ∈ Z.   (1.1.90)

Properties of B_i^0(x):

1. B_i^0(x) >= 0 for all x and i.
2. Sum_{i=-∞}^{∞} B_i^0(x) = 1 for all x.
3. The support of B_i^0(x) is [x_i, x_{i+1}).
4. B_i^0(x) ∈ C^{-1}, i.e., it is discontinuous.

We show property (2) by noting that for arbitrary x there exists m such that x ∈ [x_m, x_{m+1}), and then

Sum_{i=-∞}^{∞} B_i^0(x) = B_m^0(x) = 1.

We use the recurrence formula

B_i^k(x) = ((x - x_i)/(x_{i+k} - x_i)) B_i^{k-1}(x) + ((x_{i+k+1} - x)/(x_{i+k+1} - x_{i+1})) B_{i+1}^{k-1}(x),   for k > 0,   (1.1.91)

to generate the next basis functions.

Properties of B_i^1:

1. The B_i^1(x) are the classical piecewise linear hat functions, equal to 1 at x_{i+1} and zero at all other nodes.
2. B_i^1(x) ∈ C^0.
3. Sum_{i=-∞}^{∞} B_i^1(x) = 1 for all x.

4. The support of B_i^1(x) is (x_i, x_{i+2}).
5. B_i^1(x) >= 0 for all x and i.

In general, for arbitrary k one can show that:

1. The B_i^k(x) are piecewise polynomials of degree k.
2. B_i^k(x) ∈ C^{k-1}.
3. Sum_{i=-∞}^{∞} B_i^k(x) = 1 for all x.
4. The support of B_i^k(x) is (x_i, x_{i+k+1}).
5. B_i^k(x) >= 0 for all x and i.
6. The B_i^k(x), -∞ < i < ∞, are linearly independent, i.e., they form a basis.

See the figure for plots of the first four B-splines.

Interpolation using B-splines:

(i) For k = 0, we construct a piecewise constant spline interpolant by writing

f(x) ≈ P_0(x) = Sum_{i=-∞}^{∞} c_i B_i^0(x).   (1.1.92)

Using the properties of B_i^0(x) we show that c_i = f(x_i).

(ii) For k = 1, we construct a piecewise linear spline interpolant by writing

f(x) ≈ P_1(x) = Sum_{i=-∞}^{∞} c_i B_i^1(x).   (1.1.93)

Again, using B_i^1(x_{j+1}) = δ_{ij} we show that c_i = f(x_{i+1}).

(iii) For k = 3, we construct a piecewise cubic spline interpolant by writing

f(x) ≈ P_3(x) = Sum_{i=-∞}^{∞} c_i B_i^3(x).   (1.1.94)
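The recurrence (1.1.91) translates directly into code (the Cox–de Boor recursion). A minimal evaluator, as a sketch of ours for a uniform knot sequence; the guards skip zero-length knot intervals so the formulas never divide by zero:

```python
def bspline(i, k, t, x):
    """Evaluate B_i^k(x) on the knot sequence t via the recurrence (1.1.91)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i+1] else 0.0
    s = 0.0
    if t[i+k] > t[i]:
        s += (x - t[i]) / (t[i+k] - t[i]) * bspline(i, k-1, t, x)
    if t[i+k+1] > t[i+1]:
        s += (t[i+k+1] - x) / (t[i+k+1] - t[i+1]) * bspline(i+1, k-1, t, x)
    return s

t = [float(j) for j in range(10)]    # uniform knots 0, 1, ..., 9
```

The hat property of B_i^1 and the partition-of-unity property (3) can then be checked directly at sample points.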


More information

Chapter 1 Numerical approximation of data : interpolation, least squares method

Chapter 1 Numerical approximation of data : interpolation, least squares method Chapter 1 Numerical approximation of data : interpolation, least squares method I. Motivation 1 Approximation of functions Evaluation of a function Which functions (f : R R) can be effectively evaluated

More information

CS 257: Numerical Methods

CS 257: Numerical Methods CS 57: Numerical Methods Final Exam Study Guide Version 1.00 Created by Charles Feng http://www.fenguin.net CS 57: Numerical Methods Final Exam Study Guide 1 Contents 1 Introductory Matter 3 1.1 Calculus

More information

4. We accept without proofs that the following functions are differentiable: (e x ) = e x, sin x = cos x, cos x = sin x, log (x) = 1 sin x

4. We accept without proofs that the following functions are differentiable: (e x ) = e x, sin x = cos x, cos x = sin x, log (x) = 1 sin x 4 We accept without proofs that the following functions are differentiable: (e x ) = e x, sin x = cos x, cos x = sin x, log (x) = 1 sin x x, x > 0 Since tan x = cos x, from the quotient rule, tan x = sin

More information

Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University. (A Rough Draft)

Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University. (A Rough Draft) Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) 1 2 Contents 1 Error Analysis 5 2 Nonlinear Algebraic Equations 7 2.1 Convergence

More information

Spline Methods Draft. Tom Lyche and Knut Mørken

Spline Methods Draft. Tom Lyche and Knut Mørken Spline Methods Draft Tom Lyche and Knut Mørken 25th April 2003 2 Contents 1 Splines and B-splines an Introduction 3 1.1 Convex combinations and convex hulls..................... 3 1.1.1 Stable computations...........................

More information

CALCULUS JIA-MING (FRANK) LIOU

CALCULUS JIA-MING (FRANK) LIOU CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion

More information

Math 334 A1 Homework 3 (Due Nov. 5 5pm)

Math 334 A1 Homework 3 (Due Nov. 5 5pm) Math 334 A1 Homework 3 Due Nov. 5 5pm No Advanced or Challenge problems will appear in homeworks. Basic Problems Problem 1. 4.1 11 Verify that the given functions are solutions of the differential equation,

More information

Taylor and Maclaurin Series

Taylor and Maclaurin Series Taylor and Maclaurin Series MATH 211, Calculus II J. Robert Buchanan Department of Mathematics Spring 2018 Background We have seen that some power series converge. When they do, we can think of them as

More information

f (r) (a) r! (x a) r, r=0

f (r) (a) r! (x a) r, r=0 Part 3.3 Differentiation v1 2018 Taylor Polynomials Definition 3.3.1 Taylor 1715 and Maclaurin 1742) If a is a fixed number, and f is a function whose first n derivatives exist at a then the Taylor polynomial

More information

Chapter 2 Interpolation

Chapter 2 Interpolation Chapter 2 Interpolation Experiments usually produce a discrete set of data points (x i, f i ) which represent the value of a function f (x) for a finite set of arguments {x 0...x n }. If additional data

More information

WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- lim

WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- lim WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- INSTRUCTOR: CEZAR LUPU Problem Let < x < and x n+ = x n ( x n ), n =,, 3, Show that nx n = Putnam B3, 966 Question? Problem E 334 from the American

More information

Math Review for Exam Answer each of the following questions as either True or False. Circle the correct answer.

Math Review for Exam Answer each of the following questions as either True or False. Circle the correct answer. Math 22 - Review for Exam 3. Answer each of the following questions as either True or False. Circle the correct answer. (a) True/False: If a n > 0 and a n 0, the series a n converges. Soln: False: Let

More information

Introductory Numerical Analysis

Introductory Numerical Analysis Introductory Numerical Analysis Lecture Notes December 16, 017 Contents 1 Introduction to 1 11 Floating Point Numbers 1 1 Computational Errors 13 Algorithm 3 14 Calculus Review 3 Root Finding 5 1 Bisection

More information

WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- lim

WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- lim WORKSHEET FOR THE PUTNAM COMPETITION -REAL ANALYSIS- INSTRUCTOR: CEZAR LUPU Problem Let < x < and x n+ = x n ( x n ), n =,, 3, Show that nx n = Putnam B3, 966 Question? Problem E 334 from the American

More information

Numerical Methods for Differential Equations Mathematical and Computational Tools

Numerical Methods for Differential Equations Mathematical and Computational Tools Numerical Methods for Differential Equations Mathematical and Computational Tools Gustaf Söderlind Numerical Analysis, Lund University Contents V4.16 Part 1. Vector norms, matrix norms and logarithmic

More information

Lecture 10 Polynomial interpolation

Lecture 10 Polynomial interpolation Lecture 10 Polynomial interpolation Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn

More information

Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) a n z n. n=0

Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) a n z n. n=0 Lecture 22 Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) Recall a few facts about power series: a n z n This series in z is centered at z 0. Here z can

More information

Interpolation and Approximation

Interpolation and Approximation Interpolation and Approximation The Basic Problem: Approximate a continuous function f(x), by a polynomial p(x), over [a, b]. f(x) may only be known in tabular form. f(x) may be expensive to compute. Definition:

More information

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that Problem 1A. Suppose that f is a continuous real function on [, 1]. Prove that lim α α + x α 1 f(x)dx = f(). Solution: This is obvious for f a constant, so by subtracting f() from both sides we can assume

More information

8.7 Taylor s Inequality Math 2300 Section 005 Calculus II. f(x) = ln(1 + x) f(0) = 0

8.7 Taylor s Inequality Math 2300 Section 005 Calculus II. f(x) = ln(1 + x) f(0) = 0 8.7 Taylor s Inequality Math 00 Section 005 Calculus II Name: ANSWER KEY Taylor s Inequality: If f (n+) is continuous and f (n+) < M between the center a and some point x, then f(x) T n (x) M x a n+ (n

More information

SPLINE INTERPOLATION

SPLINE INTERPOLATION Spline Background SPLINE INTERPOLATION Problem: high degree interpolating polynomials often have extra oscillations. Example: Runge function f(x = 1 1+4x 2, x [ 1, 1]. 1 1/(1+4x 2 and P 8 (x and P 16 (x

More information

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 = Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

Cubic Splines. Antony Jameson. Department of Aeronautics and Astronautics, Stanford University, Stanford, California, 94305

Cubic Splines. Antony Jameson. Department of Aeronautics and Astronautics, Stanford University, Stanford, California, 94305 Cubic Splines Antony Jameson Department of Aeronautics and Astronautics, Stanford University, Stanford, California, 94305 1 References on splines 1. J. H. Ahlberg, E. N. Nilson, J. H. Walsh. Theory of

More information

Fixed point iteration and root finding

Fixed point iteration and root finding Fixed point iteration and root finding The sign function is defined as x > 0 sign(x) = 0 x = 0 x < 0. It can be evaluated via an iteration which is useful for some problems. One such iteration is given

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include PUTNAM TRAINING POLYNOMIALS (Last updated: December 11, 2017) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

2016 EF Exam Texas A&M High School Students Contest Solutions October 22, 2016

2016 EF Exam Texas A&M High School Students Contest Solutions October 22, 2016 6 EF Exam Texas A&M High School Students Contest Solutions October, 6. Assume that p and q are real numbers such that the polynomial x + is divisible by x + px + q. Find q. p Answer Solution (without knowledge

More information

Ma 530 Power Series II

Ma 530 Power Series II Ma 530 Power Series II Please note that there is material on power series at Visual Calculus. Some of this material was used as part of the presentation of the topics that follow. Operations on Power Series

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive

More information

Section 3.7. Rolle s Theorem and the Mean Value Theorem

Section 3.7. Rolle s Theorem and the Mean Value Theorem Section.7 Rolle s Theorem and the Mean Value Theorem The two theorems which are at the heart of this section draw connections between the instantaneous rate of change and the average rate of change of

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes Root finding: 1 a The points {x n+1, }, {x n, f n }, {x n 1, f n 1 } should be co-linear Say they lie on the line x + y = This gives the relations x n+1 + = x n +f n = x n 1 +f n 1 = Eliminating α and

More information

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim An idea how to solve some of the problems 5.2-2. (a) Does not converge: By multiplying across we get Hence 2k 2k 2 /2 k 2k2 k 2 /2 k 2 /2 2k 2k 2 /2 k. As the series diverges the same must hold for the

More information

SOLVED PROBLEMS ON TAYLOR AND MACLAURIN SERIES

SOLVED PROBLEMS ON TAYLOR AND MACLAURIN SERIES SOLVED PROBLEMS ON TAYLOR AND MACLAURIN SERIES TAYLOR AND MACLAURIN SERIES Taylor Series of a function f at x = a is ( f k )( a) ( x a) k k! It is a Power Series centered at a. Maclaurin Series of a function

More information

Interpolation Atkinson Chapter 3, Stoer & Bulirsch Chapter 2, Dahlquist & Bjork Chapter 4 Topics marked with are not on the exam

Interpolation Atkinson Chapter 3, Stoer & Bulirsch Chapter 2, Dahlquist & Bjork Chapter 4 Topics marked with are not on the exam Interpolation Atkinson Chapter 3, Stoer & Bulirsch Chapter 2, Dahlquist & Bjork Chapter 4 Topics marked with are not on the exam 1 Polynomial interpolation, introduction Let {x i } n 0 be distinct real

More information

Numerical Methods of Approximation

Numerical Methods of Approximation Contents 31 Numerical Methods of Approximation 31.1 Polynomial Approximations 2 31.2 Numerical Integration 28 31.3 Numerical Differentiation 58 31.4 Nonlinear Equations 67 Learning outcomes In this Workbook

More information

Notes for Numerical Analysis Math 5466 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) Contents Numerical Methods for ODEs 5. Introduction............................

More information

Review of Power Series

Review of Power Series Review of Power Series MATH 365 Ordinary Differential Equations J. Robert Buchanan Department of Mathematics Fall 2018 Introduction In addition to the techniques we have studied so far, we may use power

More information

Lecture 4: Fourier Transforms.

Lecture 4: Fourier Transforms. 1 Definition. Lecture 4: Fourier Transforms. We now come to Fourier transforms, which we give in the form of a definition. First we define the spaces L 1 () and L 2 (). Definition 1.1 The space L 1 ()

More information

Numerical Methods I: Polynomial Interpolation

Numerical Methods I: Polynomial Interpolation 1/31 Numerical Methods I: Polynomial Interpolation Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 16, 2017 lassical polynomial interpolation Given f i := f(t i ), i =0,...,n, we would

More information

8.2 Discrete Least Squares Approximation

8.2 Discrete Least Squares Approximation Chapter 8 Approximation Theory 8.1 Introduction Approximation theory involves two types of problems. One arises when a function is given explicitly, but we wish to find a simpler type of function, such

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

Series Solution of Linear Ordinary Differential Equations

Series Solution of Linear Ordinary Differential Equations Series Solution of Linear Ordinary Differential Equations Department of Mathematics IIT Guwahati Aim: To study methods for determining series expansions for solutions to linear ODE with variable coefficients.

More information

Fourier Series. 1. Review of Linear Algebra

Fourier Series. 1. Review of Linear Algebra Fourier Series In this section we give a short introduction to Fourier Analysis. If you are interested in Fourier analysis and would like to know more detail, I highly recommend the following book: Fourier

More information

CHAPTER 4. Interpolation

CHAPTER 4. Interpolation CHAPTER 4 Interpolation 4.1. Introduction We will cover sections 4.1 through 4.12 in the book. Read section 4.1 in the book on your own. The basic problem of one-dimensional interpolation is this: Given

More information

LEGENDRE POLYNOMIALS AND APPLICATIONS. We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.

LEGENDRE POLYNOMIALS AND APPLICATIONS. We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates. LEGENDRE POLYNOMIALS AND APPLICATIONS We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.. Legendre equation: series solutions The Legendre equation is

More information

Polynomial Approximations and Power Series

Polynomial Approximations and Power Series Polynomial Approximations and Power Series June 24, 206 Tangent Lines One of the first uses of the derivatives is the determination of the tangent as a linear approximation of a differentiable function

More information

SOLUTIONS FOR PRACTICE FINAL EXAM

SOLUTIONS FOR PRACTICE FINAL EXAM SOLUTIONS FOR PRACTICE FINAL EXAM ANDREW J. BLUMBERG. Solutions () Short answer questions: (a) State the mean value theorem. Proof. The mean value theorem says that if f is continuous on (a, b) and differentiable

More information

Polynomial Interpolation

Polynomial Interpolation Chapter Polynomial Interpolation. Introduction Suppose that we have a two sets of n + real numbers {x i } n+ i= and {y i} n+ i=, and that the x i are strictly increasing: x < x < x 2 < < x n. Interpolation

More information