
Notes for Numerical Analysis Math 5465 by S. Adjerid, Virginia Polytechnic Institute and State University (A Rough Draft)


Contents

1 Polynomial Interpolation
  1.1 Review
  1.2 Introduction
  1.3 Lagrange Interpolation
  1.4 Interpolation error and convergence
      1.4.1 Interpolation error
      1.4.2 Convergence
  1.5 Interpolation at Chebyshev points
  1.6 Hermite interpolation
      1.6.1 Lagrange form of Hermite interpolation polynomials
      1.6.2 Newton form of Hermite interpolation polynomial
      1.6.3 Hermite interpolation error
  1.7 Spline Interpolation
      1.7.1 Piecewise Lagrange interpolation
      1.7.2 Cubic spline interpolation
      1.7.3 Convergence of cubic splines
      1.7.4 B-splines
  1.8 Interpolation in multiple dimensions
  1.9 Least-squares Approximations


Chapter 1: Polynomial Interpolation

1.1 Review

Mean-value theorem: Let $f \in C[a,b]$ be differentiable on $(a,b)$. Then there exists $c \in (a,b)$ such that
$$f'(c) = \frac{f(b) - f(a)}{b - a}.$$

Weighted mean-value theorem: If $f \in C[a,b]$ and $g(x) > 0$ on $[a,b]$, then there exists $c \in (a,b)$ such that
$$\int_a^b f(x) g(x)\,dx = f(c) \int_a^b g(x)\,dx.$$

Rolle's theorem: If $f \in C[a,b]$ is differentiable on $(a,b)$ and $f(a) = f(b)$, then there exists $c \in (a,b)$ such that $f'(c) = 0$.

Generalized Rolle's theorem: If $f \in C[a,b]$ is $n+1$ times differentiable on $(a,b)$ and admits $n+2$ zeros in $[a,b]$, then there exists $c \in (a,b)$ such that $f^{(n+1)}(c) = 0$.

Intermediate value theorem: If $f \in C[a,b]$ with $f(a) \neq f(b)$, then for each $y$ between $f(a)$ and $f(b)$ there exists $c \in (a,b)$ such that $f(c) = y$.

1.2 Introduction

From the Webster dictionary, the definition of interpolation reads as follows: "Interpolation is the act of introducing something, especially spurious and foreign; the act of calculating values of functions between values already known."

Our goal is to approximate a set of data points or a function by a simpler polynomial function. Given distinct points $x_i$, $i = 0, 1, \dots, m$ (so $x_i \neq x_j$ if $i \neq j$), we would like to construct a polynomial $p_n(x)$ such that
$$p_n^{(k)}(x_i) = f^{(k)}(x_i), \quad i = 0, 1, \dots, m, \quad k = 0, 1, \dots, n_i - 1,$$
with $n = \sum_{i=0}^m n_i - 1$.

Lagrange interpolation: $n_i = 1$, $m = n$: $p_n(x_i) = f(x_i)$, $i = 0, 1, \dots, n$.

Taylor interpolation: $n_0 = n + 1$, $m = 0$: $p_n^{(k)}(x_0) = f^{(k)}(x_0)$, $k = 0, 1, \dots, n$.

Hermite interpolation: $n_i \geq 1$, $m \geq 1$: $p_n^{(k)}(x_i) = f^{(k)}(x_i)$, $i = 0, 1, \dots, m$, $k = 0, 1, \dots, n_i - 1$.

Why interpolation? For instance, interpolation is used to approximate integrals and derivatives,
$$\int_a^b f(x)\,dx \approx \int_a^b p_m(x)\,dx, \qquad f^{(k)}(x) \approx p_m^{(k)}(x),$$
and it plays a major role in approximating differential equations.

Taylor interpolation: We first study Taylor polynomials, defined as
$$p_n(x) = f(x_0) + (x - x_0) f'(x_0) + \frac{(x - x_0)^2}{2} f^{(2)}(x_0) + \cdots + \frac{(x - x_0)^n}{n!} f^{(n)}(x_0).$$
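The Taylor polynomial above is easy to evaluate directly from a list of derivative values. A minimal sketch in Python (the notes' own examples use Matlab; the function name and the choice $f = e^x$, $x_0 = 0$ are ours for illustration):

```python
import math

def taylor_eval(derivs, x0, x):
    """Evaluate p_n(x) = sum_k f^(k)(x0) (x - x0)^k / k! given
    derivs = [f(x0), f'(x0), ..., f^(n)(x0)]."""
    return sum(d * (x - x0) ** k / math.factorial(k)
               for k, d in enumerate(derivs))

# f(x) = e^x about x0 = 0: every derivative equals 1 at the origin.
p5 = taylor_eval([1.0] * 6, 0.0, 0.5)   # degree-5 Taylor polynomial at x = 0.5
assert abs(p5 - math.exp(0.5)) < 1e-4   # remainder is of size 0.5^6 e^0.5 / 6!
```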

The interpolation error, or remainder, in Taylor expansions is written as
$$f(x) - p_n(x) = \frac{(x - x_0)^{n+1}}{(n+1)!} f^{(n+1)}(\xi).$$

Example: $f(x) = \sin(x)$, $x_0 = 0$:
$$p_{2n+1}(x) = \sum_{k=0}^n \frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$
The interpolation error can be written as
$$|\sin(x) - p_{2n+1}(x)| = \frac{|x|^{2n+3}}{(2n+3)!} |\cos(\xi)| < \frac{|x|^{2n+3}}{(2n+3)!}.$$
On the interval $0 < x < 1/2$ the interpolation error is bounded as
$$|\sin(x) - p_{2n+1}(x)| < \frac{1}{2^{2n+3} (2n+3)!}.$$

1.3 Lagrange Interpolation

Lagrange form: Given a set of points $(x_i, f(x_i))$, $i = 0, 1, \dots, n$, with $x_i \neq x_j$ for $i \neq j$, we define the Lagrange coefficient polynomials $l_i(x)$, $i = 0, 1, \dots, n$, such that
$$l_i(x_j) = \begin{cases} 1, & i = j \\ 0, & \text{otherwise,} \end{cases}$$
given by
$$l_i(x) = \frac{(x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)}.$$
The Lagrange form of the interpolation polynomial is
$$p_n(x) = \sum_{i=0}^n f(x_i) l_i(x).$$
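The Lagrange form translates directly into code. A minimal sketch in Python (the notes use Matlab; the function names and test data here are ours, with $f(x) = x^2 - 2$ chosen so that the degree-2 interpolant reproduces $f$ exactly, by uniqueness):

```python
def lagrange_basis(xs, i, x):
    """Evaluate the Lagrange coefficient polynomial l_i at x."""
    num = den = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            num *= (x - xj)
            den *= (xs[i] - xj)
    return num / den

def lagrange_interp(xs, fs, x):
    """Evaluate p_n(x) = sum_i f(x_i) l_i(x)."""
    return sum(f * lagrange_basis(xs, i, x) for i, f in enumerate(fs))

xs = [0.0, 1.0, 2.0]
fs = [x * x - 2 for x in xs]                 # samples of f(x) = x^2 - 2
assert lagrange_basis(xs, 1, xs[1]) == 1.0   # l_i(x_i) = 1
assert lagrange_basis(xs, 1, xs[0]) == 0.0   # l_i(x_j) = 0 for j != i
assert abs(lagrange_interp(xs, fs, 3.0) - 7.0) < 1e-12   # 3^2 - 2 = 7
```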

Example: Let us consider the data set $x = [0, 1, 2]$, $f = [-2, -1, 2]$:
$$l_0(x) = \frac{(x-1)(x-2)}{(-1)(-2)} = (x^2 - 3x + 2)/2,$$
$$l_1(x) = \frac{x(x-2)}{(1)(-1)} = -x^2 + 2x,$$
$$l_2(x) = \frac{x(x-1)}{(2)(1)} = (x^2 - x)/2,$$
$$p_2(x) = -2\, l_0(x) - l_1(x) + 2\, l_2(x) = x^2 - 2.$$

Example: $f(x) = \cos(x)^5$ using the 8 points $x = [0, 1, 2, 3, 4, 5, 6, 7]$, and using the points $x_i = 0.5\,i$, $i = 0, 1, \dots, 14$.

A Matlab example:

x = [0 1 2 3 4 5 6 7];
y = cos(x).^5;
c = polyfit(x, y, length(x)-1);
xi = 0:0.1:7;
zi = cos(xi).^5;
yi = polyval(c, xi);
subplot(2,1,1)
plot(xi, yi, '-.', xi, zi, x, y, '*');
title('Interpolation')
subplot(2,1,2)
plot(xi, zi - yi, x, zeros(1, length(x)), '-*');
title('Interpolation Error')

Newton form and divided differences: We develop a procedure to compute $a_0, a_1, \dots, a_n$ such that the interpolation polynomial has the form
$$p_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0) \cdots (x - x_{n-1}),$$

where $a_0 = f(x_0)$, $a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$, and in general
$$a_k = f[x_0, x_1, \dots, x_k],$$
which is called the $k$-th divided difference. All divided differences are generated by the recurrence
$$f[x_i] = f(x_i), \qquad f[x_i, x_k] = \frac{f[x_k] - f[x_i]}{x_k - x_i},$$
$$f[x_0, x_1, \dots, x_{k-1}, x_k] = \frac{f[x_1, \dots, x_k] - f[x_0, \dots, x_{k-1}]}{x_k - x_0}.$$

The divided-difference table is laid out as

x_i    f(x_i)   1st DD        2nd DD            3rd DD                4th DD
x_0    f(x_0)
                f[x_0,x_1]
x_1    f(x_1)                 f[x_0,x_1,x_2]
                f[x_1,x_2]                      f[x_0,x_1,x_2,x_3]
x_2    f(x_2)                 f[x_1,x_2,x_3]                          f[x_0,x_1,x_2,x_3,x_4]
                f[x_2,x_3]                      f[x_1,x_2,x_3,x_4]
x_3    f(x_3)                 f[x_2,x_3,x_4]
                f[x_3,x_4]
x_4    f(x_4)

The forward Newton polynomial can be written as
$$p_4(x) = f(x_0) + f[x_0,x_1](x-x_0) + f[x_0,x_1,x_2](x-x_0)(x-x_1) + f[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2) + f[x_0,x_1,x_2,x_3,x_4](x-x_0)(x-x_1)(x-x_2)(x-x_3).$$

The backward Newton polynomial can be written as
$$p_4(x) = f(x_4) + f[x_3,x_4](x-x_4) + f[x_2,x_3,x_4](x-x_4)(x-x_3) + f[x_1,x_2,x_3,x_4](x-x_4)(x-x_3)(x-x_2) + f[x_0,x_1,x_2,x_3,x_4](x-x_4)(x-x_3)(x-x_2)(x-x_1).$$
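The recurrence can be sketched in a few lines of Python (the notes use Matlab; the helper names are ours, and exact rational arithmetic is used so the coefficients can be checked against hand computation, here for $f(x) = x^3$ at the nodes $0, 1, 2, 3$):

```python
from fractions import Fraction

def divided_differences(xs, fs):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]], the top edge of the table."""
    coeffs = [Fraction(f) for f in fs]
    for k in range(1, len(xs)):
        # After pass k, coeffs[i] holds f[x_{i-k}, ..., x_i].
        for i in range(len(xs) - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Nested multiplication: p = a_n; then p = a_i + p*(x - x_i) downward."""
    p = coeffs[-1]
    for a, xi in zip(coeffs[-2::-1], xs[-2::-1]):
        p = a + p * (x - xi)
    return p

xs, fs = [0, 1, 2, 3], [0, 1, 8, 27]            # f(x) = x^3
coeffs = divided_differences(xs, fs)
assert coeffs == [0, 1, 3, 1]                   # forward Newton coefficients
assert newton_eval(xs, coeffs, 4) == 64         # exact for a cubic: 4^3
```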

Example:

x_i    f(x_i)   1st DD   2nd DD   3rd DD   4th DD   5th DD
-2       1
                 -8
-1      -7                2
                 -4               -10/3
 0     -11               -8                 10/3
                -20                10                -11/6
 1     -31               22                -35/6
                 24               -40/3
 2      -7              -18
                -12
 3     -19

The forward Newton polynomial is
$$p_5(x) = 1 - 8(x+2) + 2(x+2)(x+1) - \frac{10}{3}(x+2)(x+1)x + \frac{10}{3}(x+2)(x+1)x(x-1) - \frac{11}{6}(x+2)(x+1)x(x-1)(x-2).$$

The backward Newton polynomial is given by
$$p_5(x) = -19 - 12(x-3) - 18(x-3)(x-2) - \frac{40}{3}(x-3)(x-2)(x-1) - \frac{35}{6}(x-3)(x-2)(x-1)x - \frac{11}{6}(x-3)(x-2)(x-1)x(x+1).$$

Remarks:

(i) The upper diagonal of the table contains the coefficients of the forward Newton polynomial.

(ii) The lower diagonal contains the coefficients of the backward Newton polynomial.

(iii) $p_k(x)$ interpolates $f$ at $x_0, x_1, \dots, x_k$ and is obtained as
$$p_k(x) = f(x_0) + \sum_{j=1}^k f[x_0, x_1, \dots, x_j] \prod_{i=0}^{j-1} (x - x_i).$$

(iv) The polynomial $p_2(x)$ that interpolates $f$ at $x_2$, $x_3$ and $x_4$ is
$$p_2(x) = f(x_2) + f[x_2, x_3](x - x_2) + f[x_2, x_3, x_4](x - x_2)(x - x_3).$$

(v) If we decide to add an additional point, it should be added at the bottom of the table for forward Newton polynomials and at the top for backward Newton polynomials.

(vi) $f[x_0, x_1, \dots, x_k] = \frac{f^{(k)}(\xi)}{k!}$, $\xi \in [a, b]$.

Nested multiplication: An efficient algorithm to evaluate the Newton polynomial is obtained by writing (with the coefficients in 1-based indexing)
$$p_n(x) = a_1 + (a_2 + \cdots (a_{n-1} + (a_n + a_{n+1}(x - x_n))(x - x_{n-1})) \cdots )(x - x_1).$$

Matlab program:

% input: a(i), i=1,...,n+1, x(i), i=1,...,n+1, and x
p = a(n+1);
for i = n:-1:1
    p = a(i) + p*(x - x(i));
end
% p = p_n(x)

1.4 Interpolation error and convergence

In this section we study the interpolation error and the convergence of interpolation polynomials to the interpolated function.

1.4.1 Interpolation error

Theorem 1.4.1. Let $f \in C[a,b]$ and let $x_0, x_1, \dots, x_n$ be $n+1$ distinct points in $[a,b]$. Then there exists a unique polynomial $p_n$ of degree at most $n$ such that $p_n(x_i) = f(x_i)$, $i = 0, 1, \dots, n$.

Proof. Existence: we define
$$L_i(x) = \frac{\prod_{j=0, j \neq i}^n (x - x_j)}{\prod_{j=0, j \neq i}^n (x_i - x_j)},$$
and one can verify that
$$L_i(x_j) = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & \text{otherwise,} \end{cases}$$
so that
$$p_n(x) = \sum_{i=0}^n f(x_i) L_i(x)$$
satisfies $p_n(x_j) = f(x_j)$.

Uniqueness: assume there are two polynomials $q_n(x)$ and $p_n(x)$ such that
$$q_n(x_j) = p_n(x_j) = f(x_j), \quad j = 0, 1, \dots, n,$$
and consider the difference $d_n(x) = p_n(x) - q_n(x)$. Since $d_n(x_j) = 0$, $j = 0, 1, \dots, n$, the polynomial $d_n$, of degree at most $n$, has $n+1$ roots. By the fundamental theorem of algebra, $d_n(x) \equiv 0$.

Theorem 1.4.2. Let $f \in C[a,b]$ be $(n+1)$ times differentiable on $(a,b)$ and let $x_0, x_1, \dots, x_n$ be $n+1$ distinct points in $[a,b]$. If $p_n(x)$ is such that $p_n(x_i) = f(x_i)$, $i = 0, 1, \dots, n$, then for each $x \in [a,b]$ there exists $\xi(x) \in [a,b]$ such that
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} W(x),$$
where $W(x) = \prod_{i=0}^n (x - x_i)$.

Proof. Let $x \in [a,b]$ with $x \neq x_i$, $i = 0, 1, \dots, n$, and define the function
$$g(t) = f(t) - p_n(t) - \frac{f(x) - p_n(x)}{W(x)} W(t).$$
We note that $g$ has $n+2$ roots, i.e., $g(x_i) = 0$, $i = 0, 1, \dots, n$, and $g(x) = 0$. Using the generalized Rolle's theorem, there exists $\xi(x) \in (a,b)$ such that $g^{(n+1)}(\xi(x)) = 0$, which leads to
$$g^{(n+1)}(\xi(x)) = f^{(n+1)}(\xi(x)) - 0 - \frac{f(x) - p_n(x)}{W(x)} (n+1)! = 0. \tag{1.1.1}$$
We solve (1.1.1) to find $f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} W(x)$, which completes the proof.

Corollary 1. Assume that $\max_{x \in [a,b]} |f^{(n+1)}(x)| = M_{n+1}$. Then
$$|f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!} |W(x)|, \quad \forall x \in [a,b],$$
and
$$\max_{x \in [a,b]} |f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!} \max_{x \in [a,b]} |W(x)|.$$

Proof. The proof is straightforward.
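Corollary 1 is easy to check numerically. A hedged sketch in Python (the choice $f = e^x$ on $[0,1]$ with nodes $0, 1/2, 1$, so $n = 2$ and $M_3 = e$, is ours for illustration):

```python
import math

xs = [0.0, 0.5, 1.0]   # interpolation nodes for f(x) = e^x on [0, 1]

def p2(x):
    # Lagrange form of the quadratic interpolant of e^x at xs.
    total = 0.0
    for i, xi in enumerate(xs):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += math.exp(xi) * li
    return total

def bound(x):
    # Corollary 1: |f(x) - p_2(x)| <= M_3 |W(x)| / 3!, with M_3 = e on [0, 1].
    W = (x - xs[0]) * (x - xs[1]) * (x - xs[2])
    return math.e * abs(W) / math.factorial(3)

# The pointwise bound holds at every sample (small slack for rounding).
for k in range(101):
    x = k / 100.0
    assert abs(math.exp(x) - p2(x)) <= bound(x) + 1e-15
```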

Examples:
$$\|f - p_1\|_\infty \leq \frac{M_2 h^2}{8} \quad \text{on } [x_0, x_0 + h],$$
$$\|f - p_2\|_\infty \leq \frac{M_3 h^3}{9\sqrt{3}} \quad \text{on } [x_0, x_0 + 2h],$$
$$\|f - p_3\|_\infty \leq \frac{M_4 h^4}{24} \quad \text{on } [x_0, x_0 + 3h].$$

Example: Let us interpolate $f(x) = e^{x/3}$ on $[0,1]$ at $x_0, x_1, \dots, x_n$. The $(n+1)$-st derivative is $f^{(n+1)}(x) = e^{x/3}/3^{n+1}$, so
$$M_{n+1} = \max_{x \in [0,1]} |f^{(n+1)}(x)| = \frac{e^{1/3}}{3^{n+1}}.$$
The interpolation error can be bounded as
$$|f(x) - p_n(x)| \leq \frac{e^{1/3} |W(x)|}{3^{n+1} (n+1)!}, \quad x \in [0,1].$$
For instance, for $n = 4$ and $x = [0, 1/4, 1/2, 3/4, 1]$,
$$W(x) = x(x - 1/4)(x - 1/2)(x - 3/4)(x - 1).$$
The error at $x = 0.3$ can be bounded as
$$|f(0.3) - p_4(0.3)| \leq \frac{e^{1/3} |W(0.3)|}{3^5 \cdot 5!} \approx 0.452 \times 10^{-7}.$$

Example: Let us consider $f(x) = \cos(x) + x$ on $[0,2]$, which satisfies $\max_{x \in [0,2]} |f^{(k)}(x)| \leq 1$ for $k \geq 2$. The interpolation error can be bounded as follows.

Case 1: two interpolation points with $h = 2$:
$$\max_{x \in [0,2]} |f(x) - p_1(x)| \leq \frac{h^2 M_2}{8} \leq \frac{4}{8} = 0.5.$$

Case 2: three interpolation points with $h = 1$:
$$\max_{x \in [0,2]} |f(x) - p_2(x)| \leq \frac{h^3 M_3}{9\sqrt{3}} \leq \frac{1}{9\sqrt{3}} \approx 0.064.$$

Case 3: four interpolation points with $h = 2/3$:
$$\max_{x \in [0,2]} |f(x) - p_3(x)| \leq \frac{h^4 M_4}{24} \leq \frac{(2/3)^4}{24} \approx 0.0082.$$

1.4.2 Convergence

We start by reviewing the convergence of functions, defining simple and uniform convergence of sequences of functions. Let $f_n(x)$, $n = 0, 1, \dots$, be a sequence of continuous functions on $[a,b]$.

Definition 1 (Simple convergence). $f_n(x)$ converges simply (pointwise) to $f(x)$ if and only if at every $x \in [a,b]$, $\lim_{n \to \infty} |f_n(x) - f(x)| = 0$.

Definition 2 (Uniform convergence). $f_n$ converges uniformly to $f$ if and only if $\lim_{n \to \infty} \max_{x \in [a,b]} |f_n(x) - f(x)| = 0$.

Example: Let us consider the sequence
$$f_n(x) = \frac{1}{1 + nx}, \quad n = 0, 1, \dots, \quad x \in [0,1].$$

(i) The sequence $f_n$ converges simply to $0$ for all $0 < x \leq 1$, while $f_n(0) = 1$. However, $f_n$ does not converge uniformly, since $\|f_n\|_\infty = 1$.

(ii) The sequence $f_n$ converges uniformly to $0$ on $[2,3]$, since there $\|f_n\|_\infty = 1/(1 + 2n) \to 0$.

Next we study the uniform convergence of interpolation polynomials on a fixed interval $[a,b]$ as the number of interpolation points approaches infinity. Let $h = (b-a)/n$ and let $x_i = a + ih$, $i = 0, 1, \dots, n$, be equidistant interpolation points. Let $p_n(x)$ denote the Lagrange interpolation polynomial, i.e., $p_n(x_i) = f(x_i)$, $i = 0, \dots, n$, and let us study the limit
$$\lim_{n \to \infty} \|f - p_n\|_\infty = \lim_{n \to \infty} \max_{x \in [a,b]} |f(x) - p_n(x)|.$$
For $x \in [a,b]$ we have $|x - x_i| \leq (b-a)$, which leads to $|W(x)| \leq (b-a)^{n+1}$. Thus the interpolation error is bounded as
$$\|f - p_n\|_\infty \leq \frac{M_{n+1}}{(n+1)!} (b-a)^{n+1}.$$
We have uniform convergence when $\frac{M_{n+1}}{(n+1)!}(b-a)^{n+1} \to 0$ as $n \to \infty$.

Theorem 1.4.3. Let $f$ be an analytic function on a disk centered at $(a+b)/2$ with radius $r > 3(b-a)/2$. Then the interpolation polynomial $p_n(x)$ satisfying $p_n(x_i) = f(x_i)$, $i = 0, 1, \dots, n$, converges to $f$ as $n \to \infty$, i.e.,
$$\lim_{n \to \infty} \|f - p_n\|_\infty = 0.$$

Proof. A function is analytic at $(a+b)/2$ if it admits a power series expansion that converges on a disk $C_r$ of radius $r$ centered at $(a+b)/2$. Applying Cauchy's formula,
$$f^{(k)}(x) = \frac{k!}{2\pi i} \oint_{C_r} \frac{f(z)}{(z - x)^{k+1}}\,dz, \quad x \in [a,b],$$
$$|f^{(k)}(x)| \leq \frac{k!}{2\pi} \oint_{C_r} \frac{|f(z)|}{|z - x|^{k+1}}\,|dz|, \quad x \in [a,b].$$

[Figure: the interval $[a,b]$ with midpoint $(a+b)/2$, the circle $C_r$ of radius $r$ centered at $(a+b)/2$, a point $z$ on $C_r$, a pole marked outside $C_r$, and the distance $d$ from $(a+b)/2$ to $x$.]

Let $z$ be a point on the circle $C_r$ and let $x \in [a,b]$. From the triangle with vertices $z$, $(a+b)/2$ and $x$, the triangle inequality yields
$$|z - x| + d \geq r,$$
where $d$ is the distance from $(a+b)/2$ to $x$. Noting that $d \leq (b-a)/2$, the triangle inequality gives
$$|z - x| \geq r - (b-a)/2.$$
Assume $r > (b-a)/2$ (so that $[a,b]$ lies inside $C_r$) to obtain
$$M_k \leq \frac{k!}{2\pi} \max_{z \in C_r} |f(z)| \, \big(r - (b-a)/2\big)^{-(k+1)} \, 2\pi r = \frac{r \, k! \, \max_{z \in C_r} |f(z)|}{\big(r - (b-a)/2\big)^{k+1}}.$$
Using $k = n+1$, the interpolation error may be bounded as
$$|f(x) - p_n(x)| \leq \frac{M_{n+1}}{(n+1)!} (b-a)^{n+1} \leq \max_{z \in C_r} |f(z)| \, \frac{r}{r - (b-a)/2} \left( \frac{b-a}{r - (b-a)/2} \right)^{n+1}.$$
Finally, we have uniform convergence if $\frac{b-a}{r - (b-a)/2} < 1$, i.e., $r > \frac{3}{2}(b-a)$, which establishes the theorem.

Examples of analytic functions are $\sin(z)$, $e^z$, $\cos(z^2)$.

Runge phenomenon: The function $f(x) = \frac{1}{4 + x^2}$ is $C^\infty[-10, 10]$, but we do not have uniform convergence on $[-10, 10]$ when using the $n+1$ equally spaced interpolation points $x_i = -10 + ih$, $i = 0, 1, \dots, n$, $h = 20/n$. The function $f(z)$ has two poles $z = \pm 2i$, and thus cannot be analytic on a disk $C_r$ of radius $r > 3(b-a)/2 = 30$ centered at $(a+b)/2 = 0$. We cannot apply the previous theorem, and we should expect convergence problems for $x$ far from the origin. The largest interval $[-a, a]$ satisfying $r > 3(b-a)/2 = 3a$ corresponds to $a < 2/3$. Actually, the interpolants may converge on a larger interval, because this is only a sufficient condition.

Interpolation errors and divided differences: The Newton form of the polynomial $p_n(x)$ that interpolates $f$ at $x_i$, $i = 0, 1, \dots, n$, is
$$p_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + \sum_{i=2}^n f[x_0, \dots, x_i] \prod_{j=0}^{i-1} (x - x_j).$$
We prove a theorem relating interpolation errors and divided differences.

Theorem 1.4.4. If $f \in C[a,b]$ is $n+1$ times differentiable, then for every $x \in [a,b]$,
$$f(x) - p_n(x) = f[x_0, x_1, \dots, x_n, x] \prod_{i=0}^n (x - x_i)$$
and
$$f[x_0, x_1, x_2, \dots, x_k] = \frac{f^{(k)}(\xi)}{k!}, \quad \xi \in \Big[\min_{i=0,\dots,k} x_i,\ \max_{i=0,\dots,k} x_i\Big].$$

Proof. Let us introduce another point $x$ distinct from the $x_i$, $i = 0, 1, \dots, n$, and let $p_{n+1}$ interpolate $f$ at $x_0, x_1, \dots, x_n$ and $x$; thus
$$p_{n+1}(x) = p_n(x) + f[x_0, x_1, \dots, x_n, x] \prod_{i=0}^n (x - x_i).$$
Combining the equation $p_{n+1}(x) = f(x)$ and the interpolation error formula, we write
$$f(x) - p_n(x) = f[x_0, x_1, \dots, x_n, x] \prod_{i=0}^n (x - x_i) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} \prod_{i=0}^n (x - x_i).$$
This leads to
$$f[x_0, x_1, \dots, x_k] = \frac{f^{(k)}(\xi)}{k!}, \quad \xi \in \Big[\min_{i=0,\dots,k} x_i,\ \max_{i=0,\dots,k} x_i\Big],$$
which completes the proof.

Remark:
$$\lim_{x_i \to x_0,\ i = 1, \dots, k} f[x_0, x_1, \dots, x_k] = \frac{f^{(k)}(x_0)}{k!}.$$

1.5 Interpolation at Chebyshev points

In the previous section we have shown that, for some functions, uniform convergence does not occur with uniformly spaced interpolation points. Now we study the interpolation error on $[-1, 1]$, where the $n+1$ interpolation points $x^*_i$, $i = 0, 1, \dots, n$, in $[-1,1]$ are selected such that
$$\|W^*\|_\infty = \min_{Q \in \tilde{P}_{n+1}} \|Q\|_\infty,$$
where $\tilde{P}_n$ is the set of monic polynomials
$$\tilde{P}_n = \Big\{ Q \in P_n \ \Big|\ Q = x^n + \sum_{i=0}^{n-1} c_i x^i \Big\}$$

and $W^*(x) = \prod_{i=0}^n (x - x^*_i)$.

Question: Are there interpolation points $x^*_i$, $i = 0, 1, \dots, n$, in $[-1,1]$ such that
$$\|W^*\|_\infty = \min_{x_i \in [a,b],\ i=0,\dots,n} \|W\|_\infty?$$
If so, the interpolation error can be bounded by $\|e_n\|_\infty \leq \frac{M_{n+1}}{(n+1)!} \|W^*\|_\infty$.

The answer: the best interpolation points $x^*_i$, $i = 0, 1, \dots, n$, are the roots of the Chebyshev polynomial $T_{n+1}(x)$, where
$$T_k(x) = \cos(k \arccos(x)), \quad k = 0, 1, 2, \dots.$$
In the following theorem we prove some properties of Chebyshev polynomials.

Theorem 1.5.1. The Chebyshev polynomials $T_k(x)$, $k = 0, 1, 2, \dots$, satisfy the following properties:

(i) $|T_k(x)| \leq 1$ for all $-1 \leq x \leq 1$;

(ii) $T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x)$;

(iii) $T_k(x)$ has $k$ roots $x^*_j = \cos\big(\frac{2j+1}{2k}\pi\big)$, $j = 0, 1, \dots, k-1$, all in $[-1,1]$;

(iv) $T_k(x) = 2^{k-1} \prod_{j=0}^{k-1} (x - x^*_j)$;

(v) if $\tilde{T}_k(x) = \frac{T_k(x)}{2^{k-1}}$, then $\max_{x \in [-1,1]} |\tilde{T}_k(x)| = \frac{1}{2^{k-1}}$.

Proof. We obtain (i) by noting that the range of the cosine function is $[-1,1]$. To obtain (ii) we write, with $\theta = \arccos(x)$,
$$T_{k+1}(x) = \cos(k \arccos(x) + \arccos(x)) = \cos(k\theta + \theta), \qquad T_{k-1}(x) = \cos(k\theta - \theta).$$
Using the trigonometric identity $\cos(a \pm b) = \cos(a)\cos(b) \mp \sin(a)\sin(b)$, we obtain
$$\cos(k\theta + \theta) = \cos(k\theta)\cos(\theta) - \sin(k\theta)\sin(\theta),$$
$$\cos(k\theta - \theta) = \cos(k\theta)\cos(\theta) + \sin(k\theta)\sin(\theta).$$
Adding the previous equations gives
$$T_{k+1}(x) + T_{k-1}(x) = 2\cos(k\theta)\cos(\theta) = 2x\,T_k(x),$$
which proves (ii). To obtain the roots we set $\cos(k \arccos(x)) = 0$, which leads to
$$k \arccos(x) = \frac{2j+1}{2}\pi, \quad j = 0, \pm 1, \pm 2, \dots.$$
Solving for $x$ leads to the roots
$$x^*_j = \cos\Big(\frac{2j+1}{2k}\pi\Big), \quad j = 0, 1, \dots, k-1.$$
We use induction to prove (iv). Step 1: $T_1(x) = 2^0 x$ and $T_2(x) = 2^1 x^2 - 1$, so (iv) is true for $k = 1, 2$. Step 2: assume $T_k = 2^{k-1} x^k + \sum_{i=0}^{k-1} c_i x^i$ for $k = 1, 2, \dots, n$ and use (ii) to write
$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x) = 2^n x^{n+1} + \sum_{i=0}^n a_i x^i.$$
This establishes (iv), i.e., $T_k(x) = 2^{k-1} \prod_{j=0}^{k-1} (x - x^*_j)$. Applying (iv), we show that (v) is true.

Corollary 2. If $\tilde{T}_n(x)$ is the monic Chebyshev polynomial of degree $n$, then
$$\max_{-1 \leq x \leq 1} |\tilde{Q}_n(x)| \geq \max_{-1 \leq x \leq 1} |\tilde{T}_n(x)| = \frac{1}{2^{n-1}}, \quad \forall\, \tilde{Q}_n \in \tilde{P}_n.$$

Proof. Assume there is another monic polynomial $\tilde{R}_n(x)$ such that
$$\max_{-1 \leq x \leq 1} |\tilde{R}_n(x)| < \frac{1}{2^{n-1}}.$$
We also note that
$$\tilde{T}_n(z_k) = \frac{(-1)^k}{2^{n-1}}, \quad z_k = \cos(k\pi/n), \quad k = 0, 1, 2, \dots, n.$$
The $(n-1)$-degree polynomial $d_{n-1}(x) = \tilde{T}_n(x) - \tilde{R}_n(x)$ satisfies
$$d_{n-1}(z_0) > 0, \quad d_{n-1}(z_1) < 0, \quad d_{n-1}(z_2) > 0, \quad d_{n-1}(z_3) < 0, \dots$$
So $d_{n-1}(x)$ changes sign between each pair $z_k$, $z_{k+1}$, $k = 0, 1, \dots, n-1$, and thus has $n$ roots. Thus $d_{n-1}(x) = 0$ identically, i.e., $\tilde{T}_n(x)$ and $\tilde{R}_n(x)$ are identical. This leads to a contradiction with the assumption above.

Below are the first five Chebyshev polynomials:
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x, \quad T_4(x) = 8x^4 - 8x^2 + 1.$$

Example of Chebyshev points:

k    x*_0        x*_1        x*_2        x*_3
1    0
2    √2/2        -√2/2
3    √3/2        0           -√3/2
4    cos(π/8)    cos(3π/8)   cos(5π/8)   cos(7π/8)

Application to interpolation: Let $p_n(x) \in P_n$ interpolate $f(x) \in C^{n+1}[-1,1]$ at the roots $x^*_j$, $j = 0, 1, \dots, n$, of $T_{n+1}(x)$. Then we can write the interpolation error formula as
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} \tilde{T}_{n+1}(x).$$
Using (v) from the previous theorem and assuming $\|f^{(n+1)}\|_{\infty,[-1,1]} \leq M_{n+1}$, we obtain
$$\max_{x \in [-1,1]} |f(x) - p_n(x)| \leq \frac{M_{n+1}}{2^n (n+1)!}.$$

Remarks:

1. We note that this choice of interpolation points reduces the error significantly.

2. With Chebyshev points, $p_n$ converges uniformly to $f$ already when $f \in C^1[-1,1]$; the function $f$ does not have to be analytic (see Gautschi).

Example 1: Consider $f(x) = e^x$, $x \in [-1,1]$.

Case 1: with three points, $n = 2$:
$$\|e_2\|_\infty \leq \frac{M_3}{2^2 \cdot 3!} = \frac{e}{24} \approx 0.113.$$

Case 2: with six points, $n = 5$:
$$\|e_5\|_\infty \leq \frac{M_6}{2^5 \cdot 6!} = \frac{e}{720 \cdot 32} \approx 0.118 \times 10^{-3}.$$

How many Chebyshev points are needed to have $\|e_n\|_\infty < 10^{-8}$? We need
$$\|e_n\|_\infty \leq \frac{M_{n+1}}{2^n (n+1)!} = \frac{e}{(n+1)!\, 2^n} \approx 0.146 \times 10^{-8} \quad \text{for } n = 9.$$
Thus, 10 Chebyshev points are needed.

Chebyshev points on $[a,b]$: Chebyshev points can be used on an arbitrary interval $[a,b]$ via the linear transformation
$$x = \frac{a+b}{2} + \frac{b-a}{2}\,t, \quad -1 \leq t \leq 1. \tag{1.1.2a}$$
We also need the inverse mapping
$$t = 2\,\frac{x-a}{b-a} - 1, \quad a \leq x \leq b. \tag{1.1.2b}$$
First, we order the Chebyshev nodes in $[-1,1]$ increasingly as
$$t^*_k = \cos\Big(\frac{2(n-k)+1}{2n+2}\pi\Big), \quad k = 0, 1, 2, \dots, n,$$
and we define the interpolation nodes on an arbitrary interval $[a,b]$ as
$$x^*_k = \frac{a+b}{2} + \frac{b-a}{2}\,t^*_k, \quad k = 0, 1, 2, \dots, n.$$

Remarks:

1. $x^*_0 < x^*_1 < \cdots < x^*_n$.

2. The $x^*_k$ are symmetric with respect to the center $(a+b)/2$.

3. The $x^*_k$ are independent of the interpolated function $f$.
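The node construction and the recurrence (ii) can be sketched together in Python (the notes use Matlab; the function names are ours). The sketch checks that the mapped nodes are increasing and that they are indeed roots of $T_{n+1}$:

```python
import math

def chebyshev_nodes(a, b, n):
    """The n+1 Chebyshev interpolation nodes on [a, b], in increasing order."""
    ts = [math.cos((2 * (n - k) + 1) * math.pi / (2 * n + 2))
          for k in range(n + 1)]
    return [(a + b) / 2 + (b - a) / 2 * t for t in ts]

def cheb_T(k, x):
    """Evaluate T_k via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    for _ in range(k):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

nodes = chebyshev_nodes(-1.0, 1.0, 4)
assert all(nodes[i] < nodes[i + 1] for i in range(4))     # increasing order
assert all(abs(cheb_T(5, t)) < 1e-12 for t in nodes)      # roots of T_5
assert abs(cheb_T(4, 0.5) - (8 * 0.5**4 - 8 * 0.5**2 + 1)) < 1e-12
```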

Theorem 1.5.2. Let $f \in C^{n+1}[a,b]$ and let $p_n$ interpolate $f$ at the Chebyshev nodes $x^*_k$, $k = 0, 1, \dots, n$, in $[a,b]$. Then
$$\max_{x \in [a,b]} |f(x) - p_n(x)| \leq 2 M_{n+1} \frac{(b-a)^{n+1}}{4^{n+1} (n+1)!},$$
where $M_{n+1} = \max_{x \in [a,b]} |f^{(n+1)}(x)|$.

Proof. It suffices to rewrite $W(x) = \prod_{i=0}^n (x - x^*_i)$ using the mapping (1.1.2) to find that
$$x - x^*_i = \frac{b-a}{2}(t - t^*_i)$$
and
$$W(x) = \prod_{i=0}^n (x - x^*_i) = \Big(\frac{b-a}{2}\Big)^{n+1} \prod_{i=0}^n (t - t^*_i) = \Big(\frac{b-a}{2}\Big)^{n+1} \tilde{T}_{n+1}(t).$$
Finally, using $\|\tilde{T}_{n+1}\|_{\infty,[-1,1]} = \frac{1}{2^n}$, we complete the proof.

Example 2: Consider $f(x) = 3^x = e^{\ln(3)x}$, $x \in [0,1]$, whose derivative is $f^{(n+1)}(x) = \ln(3)^{n+1} e^{\ln(3)x}$. Noting that $f^{(n+1)}$ is a monotonically increasing function, $M_{n+1} = f^{(n+1)}(1) = 3\ln(3)^{n+1}$. Therefore,
$$\|e_n\|_\infty \leq \frac{2 \cdot 3 \ln(3)^{n+1}}{4^{n+1}(n+1)!} = \frac{6 \ln(3)^{n+1}}{4^{n+1}(n+1)!}.$$

# of Chebyshev points    Error bound
2                        0.22
3                        0.0207
4                        0.00142
5                        0.000078
6                        0.358 × 10⁻⁵
7                        0.140 × 10⁻⁶
8                        0.48 × 10⁻⁸
9                        0.14 × 10⁻⁹

1.6 Hermite interpolation

We restate the general Hermite interpolation problem by letting $x_0 < x_1 < x_2 < \cdots < x_m$ be $m+1$ distinct points such that
$$f^{(k)}(x_i) = p_n^{(k)}(x_i), \quad k = 0, 1, \dots, n_i - 1, \quad i = 0, 1, \dots, m, \tag{1.1.3}$$
where $\sum_{i=0}^m n_i = n+1$ and $n_i \geq 1$. We note that $n_i = 1$, $i = 0, \dots, m$, leads to Lagrange interpolation.

1.6.1 Lagrange form of Hermite interpolation polynomials

Theorem 1.6.1. There exists a unique polynomial $p_n(x)$ that satisfies (1.1.3) with $n_i = 2$ and $n = 2m+1$.

Proof. Existence: we study the special case $n_i = 2$, $i = 0, 1, 2, \dots, m$, where
$$l_{i,1}(x) = (x - x_i)\, l_i(x)^2 \tag{1.1.4}$$

and
$$l_{i,0}(x) = l_i(x)^2 - 2\, l_i'(x_i)(x - x_i)\, l_i(x)^2. \tag{1.1.5}$$
Now we can verify that $l_{i,1}(x_j) = 0$, $j = 0, 1, 2, \dots, m$, and, from
$$l_{i,1}'(x) = 2(x - x_i)\, l_i'(x)\, l_i(x) + l_i(x)^2,$$
that $l_{i,1}'(x_j) = \delta_{ij}$. One can easily check that $l_{i,0}(x_j) = \delta_{ij}$. For $l_{i,0}'(x)$ we have
$$l_{i,0}'(x) = \big(1 - 2\, l_i'(x_i)(x - x_i)\big)\, 2\, l_i'(x)\, l_i(x) - 2\, l_i'(x_i)\, l_i(x)^2,$$
and thus $l_{i,0}'(x_j) = 0$, $j = 0, 1, \dots, m$. Existence of the Hermite interpolation polynomial is established by writing the Lagrange form of the Hermite polynomial as
$$p_n(x) = \sum_{i=0}^m f(x_i)\, l_{i,0}(x) + \sum_{i=0}^m f'(x_i)\, l_{i,1}(x). \tag{1.1.6}$$

Uniqueness: assume there are two polynomials $p_n(x)$ and $q_n(x)$ that satisfy (1.1.3), and consider the difference $d_n(x) = p_n(x) - q_n(x)$, which satisfies
$$d_n^{(s)}(x_j) = 0, \quad s = 0, 1, \quad j = 0, 1, \dots, m.$$
Thus $d_n(x)$ is a polynomial of degree at most $n$ that has $n+1$ roots, counting the multiplicity of each root. The fundamental theorem of algebra shows that $d_n(x)$ is identically zero. With this we establish the uniqueness of $p_n$ and finish the proof of the theorem.

1.6.2 Newton form of Hermite interpolation polynomial

Using the relation
$$f[x_0, x_0 + h, x_0 + 2h, \dots, x_0 + kh] = \frac{f^{(k)}(\xi)}{k!}, \quad x_0 < \xi < x_0 + kh, \tag{1.1.7}$$
and taking the limit as $h \to 0$, we obtain
$$f[x_0, x_0, x_0, \dots, x_0] = \frac{f^{(k)}(x_0)}{k!}. \tag{1.1.8}$$
The divided-difference table for the data $(x_k + ih,\, f(x_k + ih))$, $i = 0, 1, 2, 3, 4$, will converge to the table shown below, in which we recover the Taylor polynomial about $x_k$, here with $n_k = 5$. Using this observation, we initialize the table for $x_k$ and $n_k = 5$ as follows:

(i) every point $x_i$ is repeated $n_i$ times;

(ii) we set $z_i = x_k,\ z_{i+1} = x_k,\ \dots,\ z_{i+4} = x_k$;

(iii) we initialize $f[z_i, z_{i+1}, \dots, z_{i+s}] = \frac{f^{(s)}(x_k)}{s!}$, as shown in the following table:

z_i      x_k   f(x_k)
z_{i+1}  x_k   f(x_k)   f'(x_k)
z_{i+2}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!
z_{i+3}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!   f'''(x_k)/3!
z_{i+4}  x_k   f(x_k)   f'(x_k)   f''(x_k)/2!   f'''(x_k)/3!   f⁽⁴⁾(x_k)/4!

The general formula for divided differences with repeated arguments, for $x_0 \leq x_1 \leq \cdots \leq x_n$, is
$$f[x_i, x_{i+1}, \dots, x_{i+k}] = \begin{cases} \dfrac{f[x_{i+1}, x_{i+2}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i}, & \text{if } x_i \neq x_{i+k}, \\[2mm] \dfrac{f^{(k)}(x_i)}{k!}, & \text{if } x_i = x_{i+k}. \end{cases}$$

Example: Let $f(x) = x^4 + 1$, so $f'(x) = 4x^3$. We will construct a polynomial $p_5(x)$ such that $p_5(x_i) = f(x_i)$ and $p_5'(x_i) = f'(x_i)$ at $x_i = -1, 0, 1$. The Hermite divided-difference table is given as

z_i      f(z_i)   1st DD        2nd DD             3rd DD   4th DD   5th DD
z_0 = -1   2
z_1 = -1   2      f'(-1) = -4
z_2 =  0   1      -1            3
z_3 =  0   1      f'(0) = 0     1                  -2
z_4 =  1   2      1             1                  0        1
z_5 =  1   2      f'(1) = 4     3                  2        1        0

The forward Hermite polynomial is given as
$$p_5(x) = 2 - 4(x+1) + 3(x+1)^2 - 2(x+1)^2 x + (x+1)^2 x^2 = 1 + x^4. \tag{1.1.9}$$
The Hermite polynomial that interpolates $f$ and $f'$ at $x = -1, 0$ is given as
$$p_3(x) = 2 - 4(x+1) + 3(x+1)^2 - 2(x+1)^2 x. \tag{1.1.10}$$
The backward Hermite polynomial is given as
$$p_5(x) = 2 + 4(x-1) + 3(x-1)^2 + 2(x-1)^2 x + (x-1)^2 x^2 = 1 + x^4. \tag{1.1.11}$$
The Hermite polynomial that interpolates $f$ and $f'$ at $x = 0, 1$ is given by
$$p_3(x) = 2 + 4(x-1) + 3(x-1)^2 + 2(x-1)^2 x. \tag{1.1.12}$$

Example: Consider the data with $m = 1$, $n_0 = 2$, $n_1 = 3$ given in the following table:

x_i        0    1
f(x_i)     1    2
f'(x_i)    0    1
f''(x_i)   NA   2

We write the divided-difference table as

z_i      f(z_i)   1st DD        2nd DD              3rd DD   4th DD
z_0 = 0    1
z_1 = 0    1      f'(x_0) = 0
z_2 = 1    2      1             1
z_3 = 1    2      f'(x_1) = 1   0                   -1
z_4 = 1    2      f'(x_1) = 1   f''(x_1)/2! = 1     1        2

The Hermite polynomial is given as
$$H_4(x) = 1 + 0(x - 0) + (x-0)^2 - (x-0)^2(x-1) + 2(x-0)^2(x-1)^2 = 1 + 2x^2 - x^3 + 2x^2(x-1)^2. \tag{1.1.13}$$
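The repeated-node table for the case $n_i = 2$ at every node can be sketched in Python (the notes use Matlab; the helper name is ours). The sketch reproduces the upper diagonal of the $f(x) = x^4 + 1$ table above, using exact rational arithmetic:

```python
from fractions import Fraction

def hermite_coeffs(xs, fs, dfs):
    """Newton coefficients for Hermite interpolation with every node doubled
    (n_i = 2): xs are the nodes, fs the values f(x_i), dfs the f'(x_i)."""
    z = [x for x in xs for _ in (0, 1)]          # each node repeated twice
    n = len(z)
    prev = [Fraction(fs[i // 2]) for i in range(n)]   # 0th divided differences
    coeffs = [prev[0]]
    for k in range(1, n):
        cur = []
        for i in range(n - k):
            if z[i] == z[i + k]:
                # Repeated argument (only possible for k = 1 here):
                # f[z_i, z_i] = f'(z_i).
                cur.append(Fraction(dfs[i // 2]))
            else:
                cur.append((prev[i + 1] - prev[i]) / (z[i + k] - z[i]))
        coeffs.append(cur[0])
        prev = cur
    return coeffs

# f(x) = x^4 + 1, f'(x) = 4x^3 at x = -1, 0, 1: upper diagonal of the table.
assert hermite_coeffs([-1, 0, 1], [2, 1, 2], [-4, 0, 4]) == [2, -4, 3, -2, 1, 0]
```

The returned list is exactly the coefficient sequence of the forward Hermite polynomial (1.1.9).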

We note that $g$ has $m+2$ roots, i.e., $g(x_i) = 0$, $i = 0, 1, \dots, m$, and $g(x) = 0$. Applying the generalized Rolle's theorem we show that $g'(\xi_i) = 0$, $i = 0, 1, \dots, m$, where $\xi_i \in [a,b]$, $\xi_i \ne x_j$, $\xi_i \ne x$. Using (1.1.3) with $n_i = 2$ we also have $g'(x_i) = 0$, $i = 0, 1, \dots, m$. Thus $g'(t)$ has $2m+2$ roots in $[a,b]$. Applying the generalized Rolle's theorem repeatedly we show that there exists $\xi \in (a,b)$ such that

$$g^{(2m+2)}(\xi) = 0.$$

Combining this with (1.1.15) yields

$$0 = f^{(2m+2)}(\xi) - \frac{f(x) - p_{2m+1}(x)}{W(x)} (2m+2)!$$

Solving for $f(x) - p_{2m+1}(x)$ leads to (1.1.14).

Corollary 3. If $f(x)$ and $p_{2m+1}(x)$ are as in the previous theorem, then

$$|f(x) - p_{2m+1}(x)| \le \frac{M_{2m+2}}{(2m+2)!} |W(x)|, \qquad x \in [a,b], \tag{1.1.16}$$

and

$$\|f - p_{2m+1}\|_{\infty,[a,b]} \le \frac{M_{2m+2}}{(2m+2)!} (b-a)^{2m+2}. \tag{1.1.17}$$

Proof. The proof is straightforward.

At this point we would like to note that we can prove a uniform convergence result under the same conditions as for Lagrange interpolation.

Example: Let $f(x) = \sin(x)$, $x \in [0, \pi/2]$, and let $p_5(x)$ interpolate $f$ and $f'$ at $x_i = 0, 0.2, \pi/2$, with $n_i = 2$ and $2m+2 = 6$.

Using the error bound (1.1.16) with $M_{2m+2} = 1$, $n_i = 2$, and

$$W(x) = [(x-0)(x-0.2)(x-\pi/2)]^2,$$

we obtain

$$|e_5(1.1)| \le \frac{|W(1.1)|}{6!} \approx \frac{0.2172}{720} \approx 3.017 \times 10^{-4}. \tag{1.1.18}$$

1.7 Spline Interpolation

In this section we study piecewise polynomial interpolation and write the interpolation errors in terms of the subdivision size and the degree of the polynomials.

1.7.1 Piecewise Lagrange interpolation

We construct the piecewise linear interpolant of the data $(x_i, f(x_i))$, $i = 0, 1, \dots, n$, with $x_0 < x_1 < \cdots < x_n$, as

$$P_1(x) = p_{1,i}(x) = f(x_i) \frac{x - x_{i+1}}{x_i - x_{i+1}} + f(x_{i+1}) \frac{x - x_i}{x_{i+1} - x_i}, \qquad x \in [x_i, x_{i+1}], \quad i = 0, 1, \dots, n-1. \tag{1.1.19}$$

The interpolation error on $(x_i, x_{i+1})$ is bounded as

$$|e_{1,i}(x)| \le \frac{M_{2,i}}{2} |(x - x_i)(x - x_{i+1})|, \qquad x \in (x_i, x_{i+1}), \tag{1.1.20}$$

where $M_{2,i} = \max_{x_i \le x \le x_{i+1}} |f^{(2)}(x)|$. This can be written as

$$\|e_{1,i}\|_\infty \le \frac{M_{2,i} h_i^2}{8}, \qquad h_i = x_{i+1} - x_i. \tag{1.1.21}$$

The global error is
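The bound (1.1.21) is easy to check numerically. Below is a small Python sketch (helper names are ours) for $f(x) = \sin x$ on $[0, \pi/2]$, where $M_2 = \max|f''| = 1$, so the bound is simply $H^2/8$.

```python
# Numerical check of the piecewise-linear error bound ||e_1|| <= M_2 H^2 / 8
# for f(x) = sin(x) on [0, pi/2] (so M_2 = 1).
import math

def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise-linear interpolant P_1 at x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside the interpolation interval")

n = 8
b = math.pi / 2
xs = [(i / n) * b for i in range(n + 1)]
ys = [math.sin(x) for x in xs]
H = b / n

# sample the error densely over [0, pi/2]
err = max(abs(math.sin((j / 1000) * b)
              - piecewise_linear(xs, ys, (j / 1000) * b))
          for j in range(1001))
bound = H**2 / 8          # M_2 = 1
print(err <= bound)       # True
```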

1.7. SPLINE INTERPOLATION 33

$$\|e_1\|_\infty \le \frac{M_2 H^2}{8}, \qquad H = \max_{i=0,\dots,n-1} h_i. \tag{1.1.22}$$

Theorem 1.7.1. Let $f \in C^0[a,b]$ be twice differentiable on $(a,b)$. If $P_1(x)$ is the piecewise linear interpolant of $f$ at $x_i = a + ih$, $i = 0, 1, \dots, n$, $h = (b-a)/n$, then $P_1$ converges uniformly to $f$ as $n \to \infty$.

Proof. We prove this theorem using the error estimate (1.1.22).

For piecewise quadratic interpolation we select $h = (b-a)/n$ and construct $p_{2,i}$ that interpolates $f$ at $x_i$, $(x_i + x_{i+1})/2$, and $x_{i+1}$. In this case the interpolation error is bounded as

$$\|e_{2,i}\|_\infty \le \frac{M_{3,i}(h_i/2)^3}{9\sqrt{3}}, \qquad h_i = x_{i+1} - x_i. \tag{1.1.23}$$

A global bound is

$$\|e_2\|_\infty \le \frac{M_3 (H/2)^3}{9\sqrt{3}}, \qquad H = \max_i h_i. \tag{1.1.24}$$

Theorem 1.7.2. Let $f \in C^0[a,b]$ be $m+1$ times differentiable on $(a,b)$ and $x_0 < x_1 < \cdots < x_n$ with $h_i = x_{i+1} - x_i$ and $H = \max_i h_i$. If $P_m(x)$ is the piecewise polynomial of degree $m$ on each subinterval $[x_i, x_{i+1}]$ that interpolates $f$ on $[x_i, x_{i+1}]$ at $x_{i,k} = x_i + k\tilde h_i$, $k = 0, 1, \dots, m$, $\tilde h_i = h_i/m$, then $P_m$ converges uniformly to $f$ as $H \to 0$.

Proof. Again we prove this theorem using the error bound

$$\|e_m\|_\infty \le \frac{M_{m+1} H^{m+1}}{(m+1)!}, \qquad H = \max_i h_i. \tag{1.1.25}$$

Similarly, we may construct piecewise Hermite interpolation polynomials following the same line of reasoning as for Lagrange interpolation.

1.7.2 Cubic spline interpolation

We use piecewise polynomials of degree three that are $C^2$ and interpolate the data, i.e., $S(x_k) = f(x_k) = y_k$, $k = 0, 1, \dots, n$.

Algorithm:

(i) Order the points $x_k$, $k = 0, 1, \dots, n$, such that

$$a = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n = b.$$

(ii) Let $S(x)$ be a piecewise cubic spline defined by $n$ cubic polynomials such that

$$S(x) = S_k(x) = a_k + b_k(x - x_k) + c_k(x - x_k)^2 + d_k(x - x_k)^3, \qquad x_k \le x \le x_{k+1}.$$

(iii) Find $a_k, b_k, c_k, d_k$, $k = 0, 1, \dots, n-1$, such that

(1) $S(x_k) = y_k$, $k = 0, 1, \dots, n$
(2) $S_k(x_{k+1}) = S_{k+1}(x_{k+1})$, $k = 0, \dots, n-2$
(3) $S_k'(x_{k+1}) = S_{k+1}'(x_{k+1})$, $k = 0, \dots, n-2$
(4) $S_k''(x_{k+1}) = S_{k+1}''(x_{k+1})$, $k = 0, \dots, n-2$

Theorem 1.7.3. If $A$ is an $n \times n$ strictly diagonally dominant matrix, i.e., $|a_{kk}| > \sum_{i=1, i \ne k}^{n} |a_{ki}|$, $k = 1, 2, \dots, n$, then $A$ is nonsingular.

Proof. By contradiction, assume that $A$ is singular, i.e., there exists a nonzero vector $x$ such that $Ax = 0$, and let $x_k$ be such that $|x_k| = \max_i |x_i|$. This leads to

$$a_{kk} x_k = -\sum_{i=1, i \ne k}^{n} a_{ki} x_i. \tag{1.1.26}$$

Taking the absolute value and using the triangle inequality we obtain

$$|a_{kk}| |x_k| \le \sum_{i=1, i \ne k}^{n} |a_{ki}| |x_i|. \tag{1.1.27}$$

Dividing both sides by $|x_k|$ we get

$$|a_{kk}| \le \sum_{i=1, i \ne k}^{n} |a_{ki}| \frac{|x_i|}{|x_k|} \le \sum_{i=1, i \ne k}^{n} |a_{ki}|. \tag{1.1.28}$$

This leads to a contradiction since $A$ is strictly diagonally dominant.

Theorem 1.7.4. Let us consider the set of data points $(x_i, f(x_i))$, $i = 0, 1, \dots, n$, such that $x_0 < x_1 < \cdots < x_n$. If $S''(x_0) = S''(x_n) = 0$, then there exists a unique piecewise cubic polynomial that satisfies the conditions (iii).

Proof. Existence: we set $S''(x_k) = m_k$ and $h_k = x_{k+1} - x_k$, and use piecewise linear interpolation of $S''$ to write

$$S_k''(x) = m_k \frac{x - x_{k+1}}{x_k - x_{k+1}} + m_{k+1} \frac{x - x_k}{x_{k+1} - x_k} = \frac{m_k}{h_k}(x_{k+1} - x) + \frac{m_{k+1}}{h_k}(x - x_k), \qquad x_k \le x \le x_{k+1}.$$

With this definition of $S''$, condition (4) is automatically satisfied. Integrating $S_k''(x)$ twice we obtain

$$S_k(x) = \frac{m_k}{6h_k}(x_{k+1} - x)^3 + \frac{m_{k+1}}{6h_k}(x - x_k)^3 + p_k(x_{k+1} - x) + q_k(x - x_k).$$

We need to find $m_k$, $q_k$, and $p_k$, $k = 0, 1, 2, \dots, n-1$. In order to enforce the conditions (1) and (2) we write

$$S_k(x_k) = y_k = \frac{m_k h_k^2}{6} + p_k h_k, \qquad S_k(x_{k+1}) = y_{k+1} = \frac{m_{k+1} h_k^2}{6} + q_k h_k.$$

Solving for $p_k$ and $q_k$ yields

$$p_k = \frac{y_k}{h_k} - \frac{m_k h_k}{6}, \tag{1.1.29a}$$

$$q_k = \frac{y_{k+1}}{h_k} - \frac{m_{k+1} h_k}{6}. \tag{1.1.29b}$$

We note that if the $m_k$, $k = 0, 1, \dots, n$, are known, the previous equations may be used to compute $p_k$ and $q_k$. Now, substituting $p_k$ and $q_k$ into the equation for $S_k$, we have

$$S_k(x) = \frac{m_k}{6h_k}(x_{k+1} - x)^3 + \frac{m_{k+1}}{6h_k}(x - x_k)^3 + \left(\frac{y_k}{h_k} - \frac{m_k h_k}{6}\right)(x_{k+1} - x) + \left(\frac{y_{k+1}}{h_k} - \frac{m_{k+1} h_k}{6}\right)(x - x_k). \tag{1.1.30}$$

We apply condition (3) to enforce the continuity of $S'(x)$:

$$S_k'(x_{k+1}) = S_{k+1}'(x_{k+1}), \qquad k = 0, 1, \dots, n-2, \tag{1.1.31}$$

where

$$S_k'(x) = -\frac{m_k}{2h_k}(x_{k+1} - x)^2 + \frac{m_{k+1}}{2h_k}(x - x_k)^2 - \left(\frac{y_k}{h_k} - \frac{m_k h_k}{6}\right) + \left(\frac{y_{k+1}}{h_k} - \frac{m_{k+1} h_k}{6}\right), \qquad x_k \le x \le x_{k+1}, \tag{1.1.32}$$

$$S_{k+1}'(x) = -\frac{m_{k+1}}{2h_{k+1}}(x_{k+2} - x)^2 + \frac{m_{k+2}}{2h_{k+1}}(x - x_{k+1})^2 - \left(\frac{y_{k+1}}{h_{k+1}} - \frac{m_{k+1} h_{k+1}}{6}\right) + \left(\frac{y_{k+2}}{h_{k+1}} - \frac{m_{k+2} h_{k+1}}{6}\right), \qquad x_{k+1} \le x \le x_{k+2}. \tag{1.1.33}$$

Taking the limit from the left at $x_{k+1}$ leads to

$$S_k'(x_{k+1}) = \frac{m_{k+1} h_k}{3} + \frac{m_k h_k}{6} + d_k. \tag{1.1.34}$$

Taking the limit from the right at $x_{k+1}$ yields

$$S_{k+1}'(x_{k+1}) = -\frac{m_{k+1} h_{k+1}}{3} - \frac{m_{k+2} h_{k+1}}{6} + d_{k+1},$$

where

$$d_k = \frac{y_{k+1} - y_k}{h_k}, \qquad k = 0, 1, \dots, n-1.$$

Using (1.1.31) we obtain the following system of $n-1$ equations in $n+1$ unknowns:

$$\text{(I)} \qquad m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = 6(d_{k+1} - d_k), \qquad k = 0, 1, 2, \dots, n-2. \tag{1.1.35}$$

Now we need to close the system by adding two more equations. Setting $S''(x_0) = 0$ and $S''(x_n) = 0$ leads to

$$m_0 = 0, \qquad m_n = 0. \tag{1.1.36}$$

This is called the natural spline. With $u_k = 6(d_{k+1} - d_k)$, the system (1.1.35) and (1.1.36) lead to

(I.NAT)
$$2(h_0 + h_1)m_1 + h_1 m_2 = u_0,$$
$$m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = u_k, \qquad 1 \le k \le n-3,$$
$$h_{n-2} m_{n-2} + 2(h_{n-2} + h_{n-1}) m_{n-1} = u_{n-2}.$$

In matrix form we write

$$\begin{bmatrix} 2(h_0+h_1) & h_1 & 0 & \cdots & 0 \\ h_1 & 2(h_1+h_2) & h_2 & & \vdots \\ 0 & h_2 & 2(h_2+h_3) & \ddots & 0 \\ \vdots & & \ddots & \ddots & h_{n-2} \\ 0 & \cdots & 0 & h_{n-2} & 2(h_{n-2}+h_{n-1}) \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ \vdots \\ m_{n-1} \end{bmatrix} = \begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{n-2} \end{bmatrix}$$

The resulting matrix is symmetric, strictly diagonally dominant, and positive definite, and thus the system has a unique solution.
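The tridiagonal system (I.NAT) can be assembled and solved directly. A minimal Python sketch (function name is ours), using the Thomas algorithm, which is valid here because the matrix is strictly diagonally dominant:

```python
# Assemble and solve the natural-spline system (I.NAT) for the second
# moments m_1, ..., m_{n-1}; m_0 = m_n = 0 by (1.1.36).
def natural_spline_moments(xs, ys):
    n = len(xs) - 1
    h = [xs[k + 1] - xs[k] for k in range(n)]
    d = [(ys[k + 1] - ys[k]) / h[k] for k in range(n)]
    u = [6 * (d[k + 1] - d[k]) for k in range(n - 1)]
    a = [h[k] for k in range(1, n - 1)]                # subdiagonal
    b = [2 * (h[k] + h[k + 1]) for k in range(n - 1)]  # diagonal
    c = [h[k + 1] for k in range(n - 2)]               # superdiagonal
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n - 1):
        w = a[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        u[i] -= w * u[i - 1]
    m = [0.0] * (n + 1)
    m[n - 1] = u[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        m[i + 1] = (u[i] - c[i] * m[i + 2]) / b[i]
    return m

# Example: f(x) = |x| at x = -2, -1, 0, 1, 2 (worked out later in the notes)
print(natural_spline_moments([-2, -1, 0, 1, 2], [2, 1, 0, 1, 2]))
# [0.0, -6/7, 24/7, -6/7, 0.0] up to rounding
```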

Other splines include:

Not-a-Knot Spline: We add the following two conditions: $S_0'''(x_1) = S_1'''(x_1)$, which leads to

$$\frac{m_1 - m_0}{h_0} = \frac{m_2 - m_1}{h_1}, \tag{1.1.37}$$

and the condition $S_{n-2}'''(x_{n-1}) = S_{n-1}'''(x_{n-1})$, which leads to

$$\frac{m_n - m_{n-1}}{h_{n-1}} = \frac{m_{n-1} - m_{n-2}}{h_{n-2}}. \tag{1.1.38}$$

Solving (1.1.37) and (1.1.38) for $m_0$ and $m_n$ we obtain

$$m_0 = (1 + h_0/h_1) m_1 - (h_0/h_1) m_2, \tag{1.1.39}$$

$$m_n = -(h_{n-1}/h_{n-2}) m_{n-2} + (1 + h_{n-1}/h_{n-2}) m_{n-1}. \tag{1.1.40}$$

Substituting into the system (1.1.35) we obtain

(I.NK)
$$(3h_0 + 2h_1 + h_0^2/h_1) m_1 + (h_1 - h_0^2/h_1) m_2 = u_0,$$
$$m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = u_k, \qquad k = 1, \dots, n-3,$$
$$(h_{n-2} - h_{n-1}^2/h_{n-2}) m_{n-2} + (2h_{n-2} + 3h_{n-1} + h_{n-1}^2/h_{n-2}) m_{n-1} = u_{n-2},$$

where $u_k = 6(d_{k+1} - d_k)$, $k = 0, 1, \dots, n-2$. We solve the system for $m_1, m_2, \dots, m_{n-1}$ and use (1.1.39) and (1.1.40) to find $m_0$ and $m_n$. We then use (1.1.29) to find $p_k$ and $q_k$, $k = 0, 1, \dots, n-1$. Finally, we use the formula (1.1.30) that defines $S_k(x)$.

Clamped Spline: We close the system (I) using the conditions

$$S_0'(x_0) = f'(x_0), \qquad S_{n-1}'(x_n) = f'(x_n).$$

From

$$S_0'(x) = -\frac{m_0}{2h_0}(x_1 - x)^2 + \frac{m_1}{2h_0}(x - x_0)^2 - \left(\frac{y_0}{h_0} - \frac{m_0 h_0}{6}\right) + \left(\frac{y_1}{h_0} - \frac{m_1 h_0}{6}\right)$$

we obtain

$$S_0'(x_0) = -\frac{m_0 h_0}{3} - \frac{m_1 h_0}{6} + \frac{y_1 - y_0}{h_0}.$$

The boundary equation $S_0'(x_0) = f'(x_0)$ then leads to

$$2m_0 h_0 + m_1 h_0 = 6(d_0 - f'(x_0)). \tag{1.1.41}$$

The boundary condition $S_{n-1}'(x_n) = f'(x_n)$ yields the equation

$$S_{n-1}'(x_n) = \frac{m_n h_{n-1}}{3} + \frac{m_{n-1} h_{n-1}}{6} + d_{n-1} = f'(x_n),$$

which leads to

$$2m_n h_{n-1} + m_{n-1} h_{n-1} = 6(f'(x_n) - d_{n-1}). \tag{1.1.42}$$

Now, eliminating $m_0$ and $m_n$, the system (I) is reduced to

(I.CL)
$$(3h_0/2 + 2h_1) m_1 + h_1 m_2 = u_0 - 3(d_0 - f'(x_0)),$$
$$m_k h_k + 2m_{k+1}(h_k + h_{k+1}) + m_{k+2} h_{k+1} = u_k, \qquad k = 1, 2, \dots, n-3,$$
$$h_{n-2} m_{n-2} + (2h_{n-2} + 3h_{n-1}/2) m_{n-1} = u_{n-2} - 3(f'(x_n) - d_{n-1}).$$

Matrix formulation for the not-a-knot spline: $AM = U$,

where

$$A = \begin{bmatrix} 3h_0 + 2h_1 + \frac{h_0^2}{h_1} & h_1 - \frac{h_0^2}{h_1} & 0 & \cdots & 0 \\ h_1 & 2(h_1+h_2) & h_2 & & \vdots \\ 0 & h_2 & 2(h_2+h_3) & \ddots & 0 \\ \vdots & & \ddots & \ddots & h_{n-2} \\ 0 & \cdots & 0 & h_{n-2} - \frac{h_{n-1}^2}{h_{n-2}} & 2h_{n-2} + 3h_{n-1} + \frac{h_{n-1}^2}{h_{n-2}} \end{bmatrix},$$

$$M = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ \vdots \\ m_{n-1} \end{bmatrix}, \qquad U = \begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{n-2} \end{bmatrix}.$$

The system admits a unique solution since the matrix is strictly diagonally dominant.

Example 1: Consider the function $f(x) = x/(2+x)$ at the points $-1, 1, 2, 3$.

$x_i$   $f(x_i)$  1st DD   $6\times$(2nd diff)
$-1$    $-1$
                  $2/3$
$1$     $1/3$              $-3$
                  $1/6$
$2$     $1/2$              $-2/5$
                  $1/10$
$3$     $3/5$

Here $h_0 = 2$, $h_1 = 1$, $h_2 = 1$. The not-a-knot system is

$$\begin{bmatrix} 3h_0 + 2h_1 + h_0^2/h_1 & h_1 - h_0^2/h_1 \\ h_1 - h_2^2/h_1 & 2h_1 + 3h_2 + h_2^2/h_1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} u_0 \\ u_1 \end{bmatrix}, \qquad \text{i.e.,} \qquad \begin{bmatrix} 12 & -3 \\ 0 & 6 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} -3 \\ -2/5 \end{bmatrix}.$$

The solution is $m_1 = -4/15$, $m_2 = -1/15$.
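The $2 \times 2$ system above can be checked with exact rational arithmetic, as in this short Python sketch:

```python
# Checking the 2x2 not-a-knot system of the example with Fractions.
from fractions import Fraction as F

h0, h1, h2 = F(2), F(1), F(1)
u0, u1 = F(-3), F(-2, 5)
# coefficients from system (I.NK) with n = 3
a11 = 3*h0 + 2*h1 + h0**2/h1
a12 = h1 - h0**2/h1
a21 = h1 - h2**2/h1
a22 = 2*h1 + 3*h2 + h2**2/h1
# Cramer's rule for the 2x2 system
det = a11*a22 - a12*a21
m1 = (u0*a22 - a12*u1) / det
m2 = (a11*u1 - a21*u0) / det
print(m1, m2)   # -4/15 -1/15
```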

Matrix formulation for the clamped spline:

$$\begin{bmatrix} 3h_0/2 + 2h_1 & h_1 & 0 & \cdots & 0 \\ h_1 & 2(h_1+h_2) & h_2 & & \vdots \\ 0 & h_2 & 2(h_2+h_3) & \ddots & 0 \\ \vdots & & \ddots & \ddots & h_{n-2} \\ 0 & \cdots & 0 & h_{n-2} & 2h_{n-2} + 3h_{n-1}/2 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ \vdots \\ m_{n-1} \end{bmatrix} = \begin{bmatrix} u_0 - 3(d_0 - f'(x_0)) \\ u_1 \\ u_2 \\ \vdots \\ u_{n-2} - 3(f'(x_n) - d_{n-1}) \end{bmatrix}$$

The system admits a unique solution since the matrix is symmetric, strictly diagonally dominant, and positive definite (SPD).

Example 1: Let us interpolate the function $f(x) = x/(2+x)$, for which $f'(x) = 2/(2+x)^2$. Thus $S_0'(x_0) = f'(-1) = 2$ and $S_{n-1}'(x_3) = f'(3) = 2/25$. To compute the $u_i$, $i = 0, 1$, we use the following table:

$x_i$   $f(x_i)$  1st DD   $6\times$(2nd diff)
$-1$    $-1$
                  $2/3$
$1$     $1/3$              $-3$
                  $1/6$
$2$     $1/2$              $-2/5$
                  $1/10$
$3$     $3/5$

Here $h_0 = 2$, $h_1 = 1$, $h_2 = 1$, and the system is

$$\begin{bmatrix} 3h_0/2 + 2h_1 & h_1 \\ h_1 & 2h_1 + 3h_2/2 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} u_0 - 3(d_0 - f'(x_0)) \\ u_1 - 3(f'(x_3) - d_2) \end{bmatrix},$$

i.e.,

$$\begin{bmatrix} 5 & 1 \\ 1 & 7/2 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -17/50 \end{bmatrix}.$$

The solution is $m_1 = 64/275$, $m_2 = -9/55$, and from (1.1.41) and (1.1.42),

$$m_0 = 3\,\frac{f[x_0,x_1] - f'(x_0)}{h_0} - \frac{m_1}{2} = -\frac{582}{275}, \qquad m_3 = 3\,\frac{f'(x_3) - f[x_2,x_3]}{h_2} - \frac{m_2}{2} = \frac{6}{275}.$$

On $[-1,1]$: $p_0 = y_0/h_0 - m_0 h_0/6 = 113/550$, $q_0 = y_1/h_0 - m_1 h_0/6 = 49/550$, and

$$S_0(x) = -\frac{582}{3300}(1-x)^3 + \frac{64}{3300}(x+1)^3 + \frac{113}{550}(1-x) + \frac{49}{550}(x+1).$$

On $[1,2]$: $p_1 = y_1/h_1 - m_1 h_1/6 = 81/275$, $q_1 = y_2/h_1 - m_2 h_1/6 = 29/55$, and

$$S_1(x) = \frac{64}{1650}(2-x)^3 - \frac{9}{330}(x-1)^3 + \frac{81}{275}(2-x) + \frac{29}{55}(x-1).$$

On $[2,3]$: $p_2 = y_2/h_2 - m_2 h_2/6 = 29/55$, $q_2 = y_3/h_2 - m_3 h_2/6 = 164/275$, and

$$S_2(x) = -\frac{9}{330}(3-x)^3 + \frac{6}{1650}(x-2)^3 + \frac{29}{55}(3-x) + \frac{164}{275}(x-2).$$

Examples of natural spline approximations
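The clamped-spline numbers in this example can be verified exactly, again with rational arithmetic (a sketch; variable names are ours):

```python
# Verifying the clamped-spline example: solve the reduced 2x2 system
# (I.CL) and recover the eliminated end moments from (1.1.41)-(1.1.42).
from fractions import Fraction as F

h0, h1, h2 = F(2), F(1), F(1)
d0, d1, d2 = F(2, 3), F(1, 6), F(1, 10)     # first divided differences
u0, u1 = 6*(d1 - d0), 6*(d2 - d1)           # -3 and -2/5
fpa, fpb = F(2), F(2, 25)                   # f'(-1) and f'(3)

a11, a12, b1 = 3*h0/2 + 2*h1, h1, u0 - 3*(d0 - fpa)
a21, a22, b2 = h1, 2*h1 + 3*h2/2, u1 - 3*(fpb - d2)
det = a11*a22 - a12*a21
m1 = (b1*a22 - a12*b2) / det
m2 = (a11*b2 - a21*b1) / det
# end moments m_0 and m_n from the boundary equations
m0 = 3*(d0 - fpa)/h0 - m1/2
m3 = 3*(fpb - d2)/h2 - m2/2
print(m0, m1, m2, m3)   # -582/275 64/275 -9/55 6/275
```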

Example 1: Let $f(x) = |x|$ and construct the natural cubic spline interpolant at the points $x_i = -2 + i$, $i = 0, 1, 2, 3, 4$.

$x_i$   $f(x_i)$  1st DD  $6\times$(2nd diff)
$-2$    $2$
                  $-1$
$-1$    $1$               $0$
                  $-1$
$0$     $0$               $12$
                  $1$
$1$     $1$               $0$
                  $1$
$2$     $2$

We note that $h_i = h = 1$, $i = 0, 1, 2, 3$, and $m_0 = m_4 = 0$, which leads to the following system:

$$\begin{bmatrix} 4 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 4 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 12 \\ 0 \end{bmatrix}.$$

The solution is $m_1 = -6/7$, $m_2 = 24/7$, $m_3 = -6/7$.

On $[-2,-1]$: $p_0 = y_0/h - m_0 h/6 = 2$, $q_0 = y_1/h - m_1 h/6 = 1 + 1/7 = 8/7$, and

$$S_0(x) = -\frac{1}{7}(x+2)^3 + 2(-1-x) + \frac{8}{7}(x+2).$$

On $[-1,0]$: $p_1 = y_1/h - m_1 h/6 = 8/7$, $q_1 = y_2/h - m_2 h/6 = -4/7$, and

$$S_1(x) = \frac{1}{7}x^3 + \frac{4}{7}(x+1)^3 + \frac{8}{7}(0-x) - \frac{4}{7}(x+1).$$

On $[0,1]$: $p_2 = y_2/h - m_2 h/6 = -4/7$, $q_2 = y_3/h - m_3 h/6 = 8/7$, and

$$S_2(x) = \frac{4}{7}(1-x)^3 - \frac{1}{7}x^3 - \frac{4}{7}(1-x) + \frac{8}{7}x.$$

On $[1,2]$: $p_3 = y_3/h - m_3 h/6 = 8/7$, $q_3 = y_4/h - m_4 h/6 = 2$, and

$$S_3(x) = -\frac{1}{7}(2-x)^3 + 2(x-1) + \frac{8}{7}(2-x).$$

In summary, $p = (2, 8/7, -4/7, 8/7)$ and $q = (8/7, -4/7, 8/7, 2)$.

Example 2: Consider again $f(x) = x/(2+x)$ at the points $-1, 1, 2, 3$:

$x_i$   $f(x_i)$  1st DD  $6\times$(2nd diff)
$-1$    $-1$
                  $2/3$
$1$     $1/3$             $-3$
                  $1/6$
$2$     $1/2$             $-2/5$
                  $1/10$
$3$     $3/5$

Here $h_0 = 2$, $h_1 = 1$, $h_2 = 1$. The natural cubic spline has $m_0 = m_3 = 0$, and the other coefficients satisfy the system

$$\begin{bmatrix} 2(h_0+h_1) & h_1 \\ h_1 & 2(h_1+h_2) \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} -3 \\ -2/5 \end{bmatrix}, \qquad \text{i.e.,} \qquad \begin{bmatrix} 6 & 1 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} -3 \\ -2/5 \end{bmatrix},$$
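As a check on formula (1.1.30), the following Python sketch evaluates the pieces of the natural spline of $f(x) = |x|$ computed above and verifies the defining conditions: the pieces interpolate the data and $S'$ is continuous at the interior knots.

```python
# Check formula (1.1.30) for the natural spline of f(x) = |x| above.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [2.0, 1.0, 0.0, 1.0, 2.0]
m  = [0.0, -6/7, 24/7, -6/7, 0.0]    # moments found above
h  = 1.0

def S(k, x):
    """Cubic piece (1.1.30) on [x_k, x_{k+1}] with p_k, q_k substituted."""
    pk = ys[k] / h - m[k] * h / 6
    qk = ys[k + 1] / h - m[k + 1] * h / 6
    return (m[k] / (6 * h) * (xs[k + 1] - x) ** 3
            + m[k + 1] / (6 * h) * (x - xs[k]) ** 3
            + pk * (xs[k + 1] - x) + qk * (x - xs[k]))

def dS(k, x):
    """Exact derivative of the piece, as in (1.1.32)."""
    pk = ys[k] / h - m[k] * h / 6
    qk = ys[k + 1] / h - m[k + 1] * h / 6
    return (-m[k] / (2 * h) * (xs[k + 1] - x) ** 2
            + m[k + 1] / (2 * h) * (x - xs[k]) ** 2 - pk + qk)

for k in range(4):                                   # condition (1)-(2)
    assert abs(S(k, xs[k]) - ys[k]) < 1e-12
    assert abs(S(k, xs[k + 1]) - ys[k + 1]) < 1e-12
for k in range(3):                                   # condition (3)
    assert abs(dS(k, xs[k + 1]) - dS(k + 1, xs[k + 1])) < 1e-12
print("all spline conditions satisfied")
```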

which admits the solution $m_1 = -58/115$, $m_2 = 3/115$.

On $[-1,1]$: $p_0 = y_0/h_0 - m_0 h_0/6 = -1/2$, $q_0 = y_1/h_0 - m_1 h_0/6 = 77/230$.

On $[1,2]$: $p_1 = y_1/h_1 - m_1 h_1/6 = 48/115$, $q_1 = y_2/h_1 - m_2 h_1/6 = 57/115$.

On $[2,3]$: $p_2 = y_2/h_2 - m_2 h_2/6 = 57/115$, $q_2 = y_3/h_2 - m_3 h_2/6 = 3/5$.

In summary, $p = [-1/2, 48/115, 57/115]$ and $q = [77/230, 57/115, 3/5]$.

Matlab commands for splines:

x = 0:1:10; y = sin(x);
xi = 0:0.2:10; yi = sin(xi);
y1 = interp1(x,y,xi);           % piecewise linear interpolation
plot(x,y,'o',xi,yi);            % plot the data and the exact function
hold on;
plot(xi,y1);
y2 = interp1(x,y,xi,'spline');  % spline interpolation
y3 = interp1(x,y,xi,'cubic');   % piecewise cubic interpolation
y4 = spline(x,y,xi);            % not-a-knot spline
plot(xi,y2); plot(xi,y3); plot(xi,y4);
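An equivalent sketch in Python, assuming NumPy and SciPy are available; `CubicSpline`'s `bc_type` argument selects between the spline variants discussed above.

```python
# Python/SciPy counterpart of the Matlab commands above (assumes
# numpy and scipy are installed).
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(0.0, 10.5, 1.0)
y = np.sin(x)
xi = np.arange(0.0, 10.0, 0.2)

y1 = np.interp(xi, x, y)                        # piecewise linear
nak = CubicSpline(x, y, bc_type='not-a-knot')   # not-a-knot spline
nat = CubicSpline(x, y, bc_type='natural')      # natural spline
clm = CubicSpline(x, y,                         # clamped spline
                  bc_type=((1, np.cos(0.0)), (1, np.cos(10.0))))

# all variants interpolate the data
for s in (nak, nat, clm):
    assert np.allclose(s(x), y)
```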

1.7.3 Convergence of cubic splines

We will study the uniform convergence of the clamped cubic spline for $f \in C^4[a,b]$. We first write the matrix formulation for the clamped cubic spline as $AM = B$, where $M = [m_0, m_1, \dots, m_n]^t$ and $B = [b_0, b_1, \dots, b_n]^t$. Let us recall that

$$S_0'(x_0) = -\frac{h_0 m_0}{3} - \frac{h_0 m_1}{6} + \frac{y_1 - y_0}{h_0} = f'(x_0)$$

and

$$S_{n-1}'(x_n) = \frac{h_{n-1} m_{n-1}}{6} + \frac{h_{n-1} m_n}{3} + \frac{y_n - y_{n-1}}{h_{n-1}} = f'(x_n).$$

We combine (1.1.41), (1.1.42), and (1.1.35), normalizing each row so that the diagonal entry is 2, to write the $(n+1) \times (n+1)$ matrix $A$ as

$$a_{ij} = \begin{cases} 2, & \text{if } i = j, \\ 1, & \text{if } (i,j) = (0,1) \text{ or } (i,j) = (n,n-1), \\ \dfrac{h_i}{h_i + h_{i-1}}, & \text{if } j = i+1, \ 1 \le i \le n-1, \\ \dfrac{h_{i-1}}{h_i + h_{i-1}}, & \text{if } j = i-1, \ 1 \le i \le n-1, \\ 0, & \text{otherwise,} \end{cases} \tag{1.1.43}$$

i.e., the $(n+1) \times (n+1)$ matrix $A$ can be written as

$$A = \begin{bmatrix} 2 & 1 & & & 0 \\ \ddots & \ddots & \ddots & & \\ & \frac{h_{i-1}}{h_i + h_{i-1}} & 2 & \frac{h_i}{h_i + h_{i-1}} & \\ & & \ddots & \ddots & \ddots \\ 0 & & & 1 & 2 \end{bmatrix}. \tag{1.1.44}$$

The right-hand side $B$ is defined by

$$b_0 = \frac{6}{h_0}\left(\frac{y_1 - y_0}{h_0} - f'(x_0)\right), \tag{1.1.45}$$

$$b_i = \frac{6}{h_i + h_{i-1}}\left(\frac{y_{i+1} - y_i}{h_i} - \frac{y_i - y_{i-1}}{h_{i-1}}\right), \qquad i = 1, 2, \dots, n-1, \tag{1.1.46}$$

$$b_n = \frac{6}{h_{n-1}}\left(f'(x_n) - \frac{y_n - y_{n-1}}{h_{n-1}}\right). \tag{1.1.47}$$

Lemma 1.7.1. Let $A$ be defined by (1.1.43) and let $Az = w$. Then

$$\|z\|_\infty \le \|w\|_\infty.$$

Proof. Let $z_k$ be such that $|z_k| = \|z\|_\infty$. The $k$th equation of $Az = w$ reads

$$a_{k,k-1} z_{k-1} + 2z_k + a_{k,k+1} z_{k+1} = w_k. \tag{1.1.48}$$

Applying the triangle inequality we have

$$\|w\|_\infty \ge |w_k| = |a_{k,k-1} z_{k-1} + 2z_k + a_{k,k+1} z_{k+1}| \tag{1.1.49}$$
$$\ge 2|z_k| - a_{k,k-1}|z_{k-1}| - a_{k,k+1}|z_{k+1}| \tag{1.1.50}$$
$$\ge \big(2 - (a_{k,k-1} + a_{k,k+1})\big)|z_k|. \tag{1.1.51}$$

Using the fact that $a_{k,k-1} + a_{k,k+1} = 1$ we complete the proof.

In order to state the following lemma we let $F = [f''(x_0), f''(x_1), \dots, f''(x_n)]^t$, $R = B - AF = A(M - F)$, and $H = \max_{i=0,\dots,n-1} h_i$.

Lemma 1.7.2. If $f \in C^4[a,b]$ and $\|f^{(4)}\|_\infty \le M_4$, then

$$\|M - F\|_\infty \le \|R\|_\infty \le \frac{3}{4} M_4 H^2. \tag{1.1.52}$$

Proof. First,

$$r_0 = b_0 - 2f''(x_0) - f''(x_1) = \frac{6}{h_0}\left(\frac{y_1 - y_0}{h_0} - f'(x_0)\right) - 2f''(x_0) - f''(x_1). \tag{1.1.53}$$

Using Taylor expansions we write

$$\frac{y_1 - y_0}{h_0} = \frac{f(x_0 + h_0) - f(x_0)}{h_0} = \frac{1}{h_0}\left(h_0 f'(x_0) + \frac{h_0^2 f''(x_0)}{2} + \frac{h_0^3 f'''(x_0)}{6} + \frac{h_0^4 f^{(4)}(\tau_1)}{24}\right) \tag{1.1.54}$$

and

$$f''(x_1) = f''(x_0 + h_0) = f''(x_0) + h_0 f'''(x_0) + \frac{h_0^2 f^{(4)}(\tau_2)}{2}, \tag{1.1.56}$$

which lead to

$$r_0 = \frac{h_0^2 f^{(4)}(\tau_1)}{4} - \frac{h_0^2 f^{(4)}(\tau_2)}{2}, \tag{1.1.57}$$

$$|r_0| \le \frac{3H^2 M_4}{4}. \tag{1.1.58}$$

Similarly, for

$$r_n = b_n - f''(x_{n-1}) - 2f''(x_n), \qquad b_n = \frac{6}{h_{n-1}}\left(f'(x_n) - \frac{y_n - y_{n-1}}{h_{n-1}}\right), \tag{1.1.59}$$

we use the Taylor series

$$\frac{y_n - y_{n-1}}{h_{n-1}} = -\frac{1}{h_{n-1}}\left(-h_{n-1} f'(x_n) + \frac{h_{n-1}^2}{2} f''(x_n) - \frac{h_{n-1}^3}{6} f'''(x_n) + \frac{h_{n-1}^4}{24} f^{(4)}(\tau_1)\right) \tag{1.1.61}$$

and

$$f''(x_{n-1}) = f''(x_n - h_{n-1}) = f''(x_n) - h_{n-1} f'''(x_n) + \frac{h_{n-1}^2}{2} f^{(4)}(\tau_2), \tag{1.1.62}$$

which, as before, lead to

$$|r_n| \le \frac{3H^2 M_4}{4}. \tag{1.1.63}$$

For the interior rows,

$$r_j = b_j - \mu_j f''(x_{j-1}) - 2f''(x_j) - \lambda_j f''(x_{j+1}), \tag{1.1.64a}$$

where

$$\mu_j = \frac{h_{j-1}}{h_j + h_{j-1}}, \qquad \lambda_j = \frac{h_j}{h_j + h_{j-1}}, \tag{1.1.64b}$$

and

$$b_j = \frac{6}{h_{j-1} + h_j}\left(\frac{y_{j+1} - y_j}{h_j} - \frac{y_j - y_{j-1}}{h_{j-1}}\right). \tag{1.1.64c}$$

Using Taylor expansions we write

$$\frac{y_{j+1} - y_j}{h_j} = f'(x_j) + \frac{h_j f''(x_j)}{2} + \frac{h_j^2 f'''(x_j)}{6} + \frac{h_j^3 f^{(4)}(\tau_1)}{24}, \tag{1.1.64d}$$

$$\frac{y_j - y_{j-1}}{h_{j-1}} = f'(x_j) - \frac{h_{j-1} f''(x_j)}{2} + \frac{h_{j-1}^2 f'''(x_j)}{6} - \frac{h_{j-1}^3 f^{(4)}(\tau_2)}{24}, \tag{1.1.64e}$$

$$f''(x_{j-1}) = f''(x_j) - h_{j-1} f'''(x_j) + \frac{h_{j-1}^2 f^{(4)}(\tau_3)}{2}, \tag{1.1.64f}$$

$$f''(x_{j+1}) = f''(x_j) + h_j f'''(x_j) + \frac{h_j^2 f^{(4)}(\tau_4)}{2}. \tag{1.1.64g}$$

Note that $\tau_i \in (x_{j-1}, x_{j+1})$. Combining (1.1.64) we obtain

$$r_j = \frac{1}{h_j + h_{j-1}}\left[\frac{h_j^3 f^{(4)}(\tau_1)}{4} + \frac{h_{j-1}^3 f^{(4)}(\tau_2)}{4} - \frac{h_{j-1}^3 f^{(4)}(\tau_3)}{2} - \frac{h_j^3 f^{(4)}(\tau_4)}{2}\right]. \tag{1.1.65}$$

This can be bounded as

$$|r_j| \le \frac{3}{4} M_4 \, \frac{h_j^3 + h_{j-1}^3}{h_j + h_{j-1}}. \tag{1.1.66}$$

Without loss of generality we assume $h_j \ge h_{j-1}$ and write

$$\frac{h_j^3 + h_{j-1}^3}{h_j + h_{j-1}} = h_j^2 \, \frac{1 + (h_{j-1}/h_j)^3}{1 + h_{j-1}/h_j} \le h_j^2 \le H^2. \tag{1.1.67}$$

Thus,

$$|r_j| \le \frac{3}{4} M_4 H^2. \tag{1.1.68}$$

Finally, using Lemma 1.7.1 we have

$$\|M - F\|_\infty \le \|R\|_\infty \le \frac{3}{4} M_4 H^2, \tag{1.1.69}$$

which completes the proof.

Theorem 1.7.5. Let $f(x) \in C^4[a,b]$, $a \le x_0 < x_1 < \cdots < x_n \le b$, $h_j = x_{j+1} - x_j$, and $H = \max_{j=0,\dots,n-1} h_j$. Assume there exists $K > 0$ independent of $H$ such that

$$\frac{H}{h_j} \le K, \qquad j = 0, 1, \dots, n-1. \tag{1.1.70}$$

If $S(x)$ is the clamped cubic spline approximation of $f$ at $x_i$, $i = 0, 1, \dots, n$, then there exists $C_k > 0$ independent of $H$ such that

$$\|f^{(k)} - S^{(k)}\|_{\infty,[a,b]} \le C_k M_4 K H^{4-k}, \qquad k = 0, 1, 2, 3, \tag{1.1.71}$$

where $M_4 = \|f^{(4)}\|_\infty$.

Proof. For $k = 3$ and $x \in [x_{j-1}, x_j]$, by adding and subtracting a few auxiliary terms the error can be written as

$$e'''(x) = f'''(x) - S'''(x) = f'''(x) - \frac{m_j - m_{j-1}}{h_{j-1}}$$

$$= f'''(x) - \frac{m_j - f''(x_j)}{h_{j-1}} + \frac{m_{j-1} - f''(x_{j-1})}{h_{j-1}} - \frac{f''(x_j) - f''(x)}{h_{j-1}} + \frac{f''(x_{j-1}) - f''(x)}{h_{j-1}}. \tag{1.1.72}$$

Using Lemma 1.7.2 we bound the terms

$$\left|\frac{m_j - f''(x_j)}{h_{j-1}}\right| \le \frac{3M_4 H^2}{4h_{j-1}}, \qquad \left|\frac{m_{j-1} - f''(x_{j-1})}{h_{j-1}}\right| \le \frac{3M_4 H^2}{4h_{j-1}}. \tag{1.1.73}$$

We use Taylor series to obtain

$$f''(x_j) - f''(x) = (x_j - x)f'''(x) + \frac{(x_j - x)^2 f^{(4)}(\tau_1)}{2}$$

and

$$f''(x_{j-1}) - f''(x) = (x_{j-1} - x)f'''(x) + \frac{(x_{j-1} - x)^2 f^{(4)}(\tau_2)}{2},$$

and we bound the error as

$$|e'''(x)| \le \frac{3M_4 H^2}{2h_{j-1}} + \frac{1}{h_{j-1}}\left|h_{j-1} f'''(x) - (x_j - x)f'''(x) + (x_{j-1} - x)f'''(x)\right| + \frac{M_4}{2h_{j-1}}\left[(x_j - x)^2 + (x_{j-1} - x)^2\right]. \tag{1.1.74}$$

The $f'''$ terms cancel out, since $(x_j - x) - (x_{j-1} - x) = h_{j-1}$, to give

$$|e'''(x)| \le \frac{3M_4 H^2}{2h_{j-1}} + \frac{M_4}{2h_{j-1}}\left[(x_j - x)^2 + (x_{j-1} - x)^2\right]. \tag{1.1.75}$$

Using

$$\|(x_j - x)^2 + (x_{j-1} - x)^2\|_{\infty,[x_{j-1},x_j]} = h_{j-1}^2 \le H^2,$$

we write

$$|e'''(x)| \le \frac{3M_4 H^2}{2h_{j-1}} + \frac{M_4 H^2}{2h_{j-1}} = \frac{2M_4 H^2}{h_{j-1}}. \tag{1.1.76}$$

Since $H/h_{j-1} \le K$ we have

$$|f'''(x) - S'''(x)| \le 2M_4 K H \qquad \forall x. \tag{1.1.77}$$

For $k = 2$, we note that for each $x \in (a,b)$ there is an $x_j$ such that $|x_j - x| \le H/2$. Now we rewrite $e''(x)$ as

$$e''(x) = f''(x) - S''(x) = f''(x_j) - S''(x_j) + \int_{x_j}^{x} (f'''(t) - S'''(t)) \, dt. \tag{1.1.78}$$

Using $|\int g| \le \int |g| \, dx$ and Lemma 1.7.2 we obtain

$$|e''(x)| \le \frac{3M_4 H^2}{4} + |x - x_j| \, \|f''' - S'''\|_{\infty} \tag{1.1.79}$$

$$\le \frac{3M_4 H^2}{4} + M_4 K H^2 \le \frac{7KM_4 H^2}{4}, \qquad K \ge 1. \tag{1.1.80}$$

Thus,

$$\|f'' - S''\|_\infty \le \frac{7KM_4 H^2}{4}. \tag{1.1.81}$$

For $k = 1$, we consider $e(t) = f(t) - S(t)$. Since $e(x_j) = 0$, by Rolle's theorem there exist $\xi_j$, $j = 0, 1, \dots, n-1$, such that $e'(\xi_j) = 0$; moreover, $e'(x_0) = e'(x_n) = 0$. For every $x \in [a,b]$ there exists a $\xi_i$ such that $|x - \xi_i| \le H$, and $e'(x)$ can be written as

$$f'(x) - S'(x) = \int_{\xi_i}^{x} (f''(t) - S''(t)) \, dt. \tag{1.1.82}$$

Thus,

$$|e'(x)| \le |x - \xi_i| \, \|e''\|_\infty \le \frac{7}{4} M_4 K H^3. \tag{1.1.83}$$
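The $O(H^4)$ rate in (1.1.71) with $k = 0$ can be observed numerically: halving $H$ should reduce the maximum error by a factor of roughly $2^4 = 16$. A sketch assuming SciPy is available:

```python
# Empirical convergence rate of the clamped cubic spline for f = sin
# on [0, pi] (assumes numpy and scipy are installed).
import numpy as np
from scipy.interpolate import CubicSpline

def max_err(n):
    x = np.linspace(0.0, np.pi, n + 1)
    s = CubicSpline(x, np.sin(x),
                    bc_type=((1, 1.0), (1, -1.0)))  # f'(0)=1, f'(pi)=-1
    t = np.linspace(0.0, np.pi, 2001)
    return np.max(np.abs(np.sin(t) - s(t)))

e8, e16 = max_err(8), max_err(16)
print(e8 / e16)   # close to 2^4 = 16
```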

For $k = 0$, for every $x \in [a,b]$ there is an $x_j$ such that $|x - x_j| \le H/2$, and we write

$$f(x) - S(x) = \int_{x_j}^{x} (f'(t) - S'(t)) \, dt, \tag{1.1.84}$$

$$|e(x)| \le |x - x_j| \, \|e'\|_\infty \le \frac{7}{8} M_4 K H^4. \tag{1.1.85}$$

We conclude that $S^{(k)}$ converges uniformly to $f^{(k)}$ for $k = 0, 1, 2, 3$ as $H \to 0$. Optimal bounds are proved by Birkhoff and de Boor (see Burden and Faires):

$$\|f - S\|_\infty \le \frac{5}{384} M_4 H^4. \tag{1.1.86}$$

We recall the piecewise Hermite interpolation error

$$|f - H_3| \le \frac{1}{24} \cdot \frac{1}{16} M_4 H^4 = \frac{M_4 H^4}{384}. \tag{1.1.87}$$

Comparing the cubic spline and Hermite interpolation errors, we see that the ratio between the spline and Hermite error bounds is only 5. We also note that Hermite interpolation requires the derivative at all the interpolation points, while the clamped spline needs the derivatives at the end points only.

Optimality of Splines: The optimality is in the sense that the cubic spline has the smallest curvature. For a curve defined by $y = f(x)$ the curvature is

$$\kappa = \frac{|f''(x)|}{[1 + (f'(x))^2]^{3/2}}.$$

Here the curvature is approximated by $|f''(x)|$, and $\int_a^b S''(x)^2 \, dx$ is minimized. More precisely, we state the following theorem.

Theorem 1.7.6. If $f \in C^2[a,b]$ and $S(x)$ is the natural cubic spline that interpolates $f$ at the $n+1$ points $x_i$, $i = 0, 1, \dots, n$, then

$$\int_a^b S''(x)^2 \, dx \le \int_a^b f''(x)^2 \, dx. \tag{1.1.88}$$

Proof. We consider the function $e(x) = f(x) - S(x)$ with $e(x_i) = 0$, $i = 0, 1, \dots, n$, and write the approximate curvature of $f = S + e$ as

$$\int_a^b f''(x)^2 \, dx = \int_a^b (S''(x) + e''(x))^2 \, dx = \int_a^b S''(x)^2 \, dx + \int_a^b e''(x)^2 \, dx + 2\int_a^b S''(x) e''(x) \, dx. \tag{1.1.89}$$

We complete the proof by showing that the last term in the right-hand side of (1.1.89) is 0. Integrating by parts we obtain

$$\int_{x_i}^{x_{i+1}} e''(x) S''(x) \, dx = S''(x) e'(x) \Big|_{x=x_i}^{x=x_{i+1}} - \int_{x_i}^{x_{i+1}} S'''(x) e'(x) \, dx.$$

Summing over all intervals, and using the facts that $e'$ is continuous and $S''(a) = S''(b) = 0$, the boundary terms cancel. Noting that $S'''(x) = C_i$ is a constant on $(x_i, x_{i+1})$ we obtain

$$\sum_{i=0}^{n-1} \int_{x_i}^{x_{i+1}} e''(x) S''(x) \, dx = -\sum_{i=0}^{n-1} C_i \int_{x_i}^{x_{i+1}} e'(x) \, dx = -\sum_{i=0}^{n-1} C_i \left[e(x_{i+1}) - e(x_i)\right] = 0,$$

where we used the fact that $e(x_i) = 0$. This establishes $\int_a^b S''(x) e''(x) \, dx = 0$, and combining it with (1.1.89) leads to (1.1.88).

The same result holds for the clamped cubic spline with $S'(a) = f'(a)$ and $S'(b) = f'(b)$; we follow the same line of reasoning to prove it. Thus, among all $C^2$ functions interpolating $f$ at $x_0, \dots, x_n$, the natural cubic spline has the smallest (approximate) curvature; in particular, the competing $C^2$ interpolants include the clamped and not-a-knot splines.
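Theorem 1.7.6 can be illustrated numerically, for example for $f(x) = \sin x$ on $[0, \pi]$, where $f'' = -\sin x$ and $\int_0^\pi f''(x)^2\,dx = \pi/2$. A sketch assuming SciPy is available:

```python
# Numerical illustration of Theorem 1.7.6: the natural spline's
# curvature integral is below the interpolated function's
# (assumes numpy and scipy are installed).
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, np.pi, 7)
s = CubicSpline(x, np.sin(x), bc_type='natural')

t = np.linspace(0.0, np.pi, 4001)
dt = t[1] - t[0]

def trap(v):
    """Composite trapezoid rule on the uniform grid t."""
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) * dt

int_spline = trap(s(t, 2) ** 2)    # integral of S''(x)^2
int_f = trap(np.sin(t) ** 2)       # integral of f''(x)^2, f'' = -sin
print(bool(int_spline <= int_f))   # True
```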

1.7.4 B-splines

We describe a system of B-splines (B stands for basis) from which other splines can be obtained. We first start with B-splines of degree 0, i.e., piecewise constant splines defined as

$$B_i^0(x) = \begin{cases} 1, & x_i \le x < x_{i+1}, \\ 0, & \text{otherwise,} \end{cases} \qquad i \in \mathbb{Z}. \tag{1.1.90}$$

Properties of $B_i^0(x)$:

1. $B_i^0(x) \ge 0$ for all $x$ and $i$
2. $\sum_{i=-\infty}^{\infty} B_i^0(x) = 1$ for all $x$
3. The support of $B_i^0(x)$ is $[x_i, x_{i+1})$
4. $B_i^0(x) \in C^{-1}$, i.e., it is discontinuous.

We show property (2) by noting that for an arbitrary $x$ there exists an $m$ such that $x \in [x_m, x_{m+1})$, and then write

$$\sum_{i=-\infty}^{\infty} B_i^0(x) = B_m^0(x) = 1.$$

We use the recurrence formula to generate the next basis functions:

$$B_i^k(x) = \frac{x - x_i}{x_{i+k} - x_i} B_i^{k-1}(x) + \frac{x_{i+k+1} - x}{x_{i+k+1} - x_{i+1}} B_{i+1}^{k-1}(x), \qquad k > 0. \tag{1.1.91}$$

Properties of $B_i^1$:

1. The $B_i^1(x)$ are the classical piecewise linear hat functions, equal to 1 at $x_{i+1}$ and zero at all other nodes.
2. $B_i^1(x) \in C^0$
3. $\sum_{i=-\infty}^{\infty} B_i^1(x) = 1$ for all $x$

4. The support of $B_i^1(x)$ is $(x_i, x_{i+2})$
5. $B_i^1(x) \ge 0$ for all $x$ and $i$

In general, for arbitrary $k$ one can show that:

1. The $B_i^k(x)$ are piecewise polynomials of degree $k$
2. $B_i^k(x) \in C^{k-1}$
3. $\sum_{i=-\infty}^{\infty} B_i^k(x) = 1$ for all $x$
4. The support of $B_i^k(x)$ is $(x_i, x_{i+k+1})$
5. $B_i^k(x) \ge 0$ for all $x$ and $i$
6. The $B_i^k(x)$, $-\infty < i < \infty$, are linearly independent, i.e., they form a basis.

See Figure 1.7.4 for plots of the first four B-splines.

Interpolation using B-splines:

(i) For $k = 0$, we construct a piecewise constant spline interpolant by writing

$$f(x) \approx P_0(x) = \sum_{i=-\infty}^{\infty} c_i B_i^0(x). \tag{1.1.92}$$

Using the properties of $B_i^0(x)$ we show that $c_i = f(x_i)$.

(ii) For $k = 1$, we construct a piecewise linear spline interpolant by writing

$$f(x) \approx P_1(x) = \sum_{i=-\infty}^{\infty} c_i B_i^1(x). \tag{1.1.93}$$

Again, using $B_i^1(x_{j+1}) = \delta_{ij}$ we show that $c_i = f(x_{i+1})$.

(iii) For $k = 3$, we construct a piecewise cubic spline interpolant by writing

$$f(x) \approx P_3(x) = \sum_{i=-\infty}^{\infty} c_i B_i^3(x). \tag{1.1.94}$$
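The recurrence (1.1.91) translates directly into code. The following minimal Python sketch (names are ours) uses the uniform integer knots $x_i = i$, for which both denominators in (1.1.91) equal $k$, and checks the nonnegativity and partition-of-unity properties listed above.

```python
# Cox-de Boor recurrence (1.1.91) on the uniform knots x_i = i.
def bspline(i, k, x):
    """B_i^k(x) on the integer knots x_j = j."""
    if k == 0:
        return 1.0 if i <= x < i + 1 else 0.0
    # on integer knots, x_{i+k} - x_i = x_{i+k+1} - x_{i+1} = k
    left = (x - i) / k * bspline(i, k - 1, x)
    right = (i + k + 1 - x) / k * bspline(i + 1, k - 1, x)
    return left + right

for k in (1, 2, 3):
    for x in (0.0, 0.3, 1.5, 2.7):
        vals = [bspline(i, k, x) for i in range(-k - 1, 4)]
        assert all(v >= 0 for v in vals)              # property 5
        assert abs(sum(vals) - 1.0) < 1e-12           # property 3
print("partition of unity verified")
```

The range of $i$ in the check covers every basis function whose support $(x_i, x_{i+k+1})$ meets the sample points; all other terms of the infinite sum vanish.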