3 Interpolation of Functions


3(a) Polynomial Interpolation

Read Ch. 3 (Section 3.5 optional). Applications: approximation theory, numerical integration & differentiation, integration of ODEs & PDEs.

Interpolation problem: Given distinct numbers $x_0, x_1, \dots, x_n$ (nodes) and the corresponding function values $f(x_0) = y_0, f(x_1) = y_1, \dots, f(x_n) = y_n$, find the function $F$ in a given class $\mathcal{F}$ such that
$$F(x_i) = y_i, \quad i = 0, 1, \dots, n.$$

We consider mostly the case of polynomial interpolation, where $\mathcal{P}_m$ is the set of polynomials over the real field with degree at most $m$:
$$P(x) = a_0 + a_1 x + \cdots + a_m x^m, \quad a_i \in \mathbb{R}.$$
If $m = n$, then we showed in Ch. 2, as a consequence of Horner's method, that there can be at most one solution. Directly, if $P_1, P_2 \in \mathcal{P}_n$ are solutions, then $P_1 - P_2 \in \mathcal{P}_n$ and $(P_1 - P_2)(x_i) = 0$, $i = 0, \dots, n$. By the fundamental theorem of algebra, $P_1 = P_2$. Hence the solution, if it exists, is unique.

Uniqueness + linearity $\Rightarrow$ existence. The conditions
$$a_0 + a_1 x_i + \cdots + a_n x_i^n = y_i, \quad i = 0, 1, \dots, n,$$
form the linear system
$$\begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad \text{or} \quad Va = y.$$
Because uniqueness implies this square linear system has at most one solution for every $y \in \mathbb{R}^{n+1}$, the matrix $V$ must be nonsingular. Hence the solution also exists.
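As an illustration of this existence argument, one can set up the system $Va = y$ and solve it numerically. The following is a minimal sketch in Python/NumPy; the function name vandermonde_interp and the sample data are my own illustration, not part of the notes, and in practice the Lagrange or Newton forms introduced below are preferred for conditioning.

import numpy as np

def vandermonde_interp(x, y):
    """Coefficients a_0, ..., a_n of the interpolating polynomial,
    obtained by solving the Vandermonde system V a = y directly."""
    x = np.asarray(x, dtype=float)
    V = np.vander(x, increasing=True)        # rows: 1, x_i, x_i^2, ..., x_i^n
    return np.linalg.solve(V, np.asarray(y, dtype=float))

# Example: interpolate f(x) = 1/(1 + x^2) at 5 equally spaced nodes
x_nodes = np.linspace(-5.0, 5.0, 5)
y_nodes = 1.0 / (1.0 + x_nodes**2)
a = vandermonde_interp(x_nodes, y_nodes)
print(a)                                     # coefficients a_0, ..., a_4
print(np.polyval(a[::-1], x_nodes))          # reproduces y_nodes up to round-off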

Exercise. The matrix $V_n(x_0, x_1, \dots, x_n)$ is called the Vandermonde matrix. Show that
$$\det V_n(x_0, x_1, \dots, x_n) = \prod_{0 \le i < j \le n} (x_i - x_j).$$
This gives a direct proof that $V_n(x)$ is nonsingular if $x_i \ne x_j$, $i \ne j$. [Hint: Show that $\det V_n(x)$ is a polynomial of degree $n$ in $x_n$, that its roots are $x_0, \dots, x_{n-1}$, and that the coefficient of $x_n^n$ is $\det V_{n-1}(x_0, x_1, \dots, x_{n-1})$.]

Theorem (Existence & Uniqueness). Given $(n+1)$ distinct numbers $x_0, x_1, \dots, x_n$, then for any $y_0, y_1, \dots, y_n$ there exists a unique $P \in \mathcal{P}_n$ such that $P(x_i) = y_i$, $i = 0, \dots, n$.

Lagrange interpolation: Given $x_0, x_1, \dots, x_n$, the Lagrange polynomials $L_i(x)$, $i = 0, 1, \dots, n$, are defined by $L_i \in \mathcal{P}_n$ and
$$L_i(x) = \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}, \qquad L_i(x_i) = 1, \quad L_i(x_j) = 0, \; i \ne j.$$
Then
$$P(x) = \sum_{i=0}^{n} y_i L_i(x)$$
is an explicit representation of the interpolating polynomial.
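A direct transcription of the Lagrange representation $P(x) = \sum_i y_i L_i(x)$ might look as follows; this is a sketch in Python (the helper name lagrange_eval is mine, not from the text), written for clarity rather than efficiency.

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the interpolating polynomial in Lagrange form at the point x."""
    total = 0.0
    for i, yi in enumerate(y_nodes):
        # L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        Li = 1.0
        for j, xj in enumerate(x_nodes):
            if j != i:
                Li *= (x - xj) / (x_nodes[i] - xj)
        total += yi * Li
    return total

# The cardinal property L_i(x_j) = delta_ij makes P reproduce the data:
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 2.0, 0.0], 1.0))   # -> 2.0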

Error estimation:

Theorem. Let $f \in C^{n+1}(I)$ where $I = I[x_0, \dots, x_n, \bar{x}]$ is the smallest interval which contains $\bar{x}$ and all the $x_i$'s. Then there exists $\xi \in I$ such that
$$f(\bar{x}) - P(\bar{x}) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(\bar{x} - x_0) \cdots (\bar{x} - x_n).$$

Proof: The result is trivially true if $\bar{x} = x_i$ for some $i$. Hence we may assume that $\bar{x} \ne x_i$ for any $i$. Because $x_0, x_1, \dots, x_n, \bar{x}$ are all distinct, there is a constant $K$ such that
$$F(x) \equiv f(x) - P(x) - K(x - x_0) \cdots (x - x_n)$$
vanishes for $x = \bar{x}$: $F(\bar{x}) = 0$. Consequently, $F(x)$ has at least the $(n+2)$ zeros $x_0, \dots, x_n, \bar{x}$ in $I$. By the generalized Rolle's theorem, there exists $\xi \in I$ such that $F^{(n+1)}(\xi) = 0$. Since $P^{(n+1)}(x) \equiv 0$,
$$0 = F^{(n+1)}(\xi) = f^{(n+1)}(\xi) - (n+1)!\,K \quad \Longrightarrow \quad K = \frac{f^{(n+1)}(\xi)}{(n+1)!}. \qquad \text{QED}$$

Exercise: Check that if $x_i = x_0 + ih$, $h > 0$ (uniform nodes), then for $x \in [x_0, x_n]$,
$$|(x - x_0) \cdots (x - x_n)| \le n!\,h^{n+1},$$
and thus
$$\max_{x \in [x_0, x_n]} |f(x) - P(x)| \le \frac{h^{n+1}}{n+1} \max_{[x_0, x_n]} |f^{(n+1)}(x)|.$$

However:
- Interpolants $P(x)$ can be very poor approximations outside of the interval $[x_0, x_n]$.
- Even inside a fixed interval $[a, b] = [x_0, x_n]$ with $h = \frac{b-a}{n}$, the successive approximations $P_n(x)$ with uniform nodes need not converge! The Runge example is $f(x) = \frac{1}{1+x^2}$, $-5 < x < 5$.

It can be shown that for any $3.64 < |x| < 5$,
$$\lim_{n \to \infty} |f(x) - P_n(x)| = +\infty.$$
See Isaacson & Keller (1966).

Although interpolants on uniform grids may fail to converge, it can be shown that for any $f \in C^1[a, b]$, grid points $\{x_0, \dots, x_n\}$ may be chosen so that $\lim_{n \to \infty} \|f - P_n\|_\infty = 0$ uniformly on $[a, b]$.

Neville's Algorithm

For given support points $(x_i, y_i)$, $i = 0, \dots, n$, denote by $P_{i_0 i_1 \cdots i_k} \in \mathcal{P}_k$ the unique interpolating polynomial for the subset $(x_{i_0}, y_{i_0}), \dots, (x_{i_k}, y_{i_k})$.

Theorem: $P_i(x) = y_i$ and
$$P_{i_0 i_1 \cdots i_k}(x) = \frac{(x - x_{i_0})\,P_{i_1 i_2 \cdots i_k}(x) - (x - x_{i_k})\,P_{i_0 i_1 \cdots i_{k-1}}(x)}{x_{i_k} - x_{i_0}}.$$

Proof: See proof of Theorem 3.5 in the text.

This leads to Neville's algorithm to iteratively evaluate $P_n$ at any chosen point $x = \bar{x}$:

k = 0                 k = 1           k = 2             k = 3
x_0   y_0 = P_0(x̄)
                      P_01(x̄)
x_1   y_1 = P_1(x̄)                   P_012(x̄)
                      P_12(x̄)                          P_0123(x̄)
x_2   y_2 = P_2(x̄)                   P_123(x̄)
                      P_23(x̄)
x_3   y_3 = P_3(x̄)

Abbreviate, for $j \ge k$, $Q_{j,k} = P_{j-k, j-k+1, \dots, j}$.

Tableau (for efficient evaluation), with $Q_{i0} = y_i$:

x_0   y_0 = Q_00
                    Q_11
x_1   y_1 = Q_10            Q_22
                    Q_21            Q_33
x_2   y_2 = Q_20            Q_32
                    Q_31
x_3   y_3 = Q_30

$$Q_{jk} = \frac{(\bar{x} - x_{j-k})\,Q_{j,k-1} - (\bar{x} - x_j)\,Q_{j-1,k-1}}{x_j - x_{j-k}} = Q_{j,k-1} + \frac{Q_{j,k-1} - Q_{j-1,k-1}}{\dfrac{\bar{x} - x_{j-k}}{\bar{x} - x_j} - 1}, \qquad 1 \le k \le j, \quad j = 0, 1, \dots, n.$$
See Algorithm 3.1 of the text. The special case with $\bar{x} = 0$ will be very useful later for extrapolation methods.
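The tableau translates directly into a short routine. Here is a sketch of the $Q_{jk}$ recursion in Python (the name neville is mine, and the code follows the recursion above rather than Algorithm 3.1 verbatim); only one column of the tableau is kept in memory.

def neville(x_nodes, y_nodes, xbar):
    """Evaluate the interpolating polynomial at xbar by Neville's algorithm.
    Q[j] holds Q_{j,k} at stage k; entries are overwritten in place, top-down."""
    n = len(x_nodes) - 1
    Q = list(y_nodes)                        # Q_{j,0} = y_j
    for k in range(1, n + 1):
        for j in range(n, k - 1, -1):
            Q[j] = ((xbar - x_nodes[j - k]) * Q[j]
                    - (xbar - x_nodes[j]) * Q[j - 1]) / (x_nodes[j] - x_nodes[j - k])
    return Q[n]                              # Q_{n,n} = P_{0,1,...,n}(xbar)

# Example: P(1.5) for the data (0,1), (1,2), (2,2)
print(neville([0.0, 1.0, 2.0], [1.0, 2.0, 2.0], 1.5))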

Newton's Interpolation & Divided Differences

Note that $P_{0,1,\dots,n}(x) - P_{0,1,\dots,n-1}(x)$ is a polynomial of degree at most $n$ with $n$ roots $x_0, x_1, \dots, x_{n-1}$. Thus,
$$P_{0,1,\dots,n}(x) = P_{0,1,\dots,n-1}(x) + a_n (x - x_0) \cdots (x - x_{n-1}).$$
Iteratively,
$$P(x) \equiv P_{0,1,\dots,n}(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0)(x - x_1) \cdots (x - x_{n-1}),$$
which is useful for evaluation by Horner's method. With support points $(x_0, y_0), \dots, (x_n, y_n)$,
$$P_{i_0 i_1 \cdots i_k}(x) = P_{i_0 i_1 \cdots i_{k-1}}(x) + f[x_{i_0}, x_{i_1}, \dots, x_{i_k}](x - x_{i_0}) \cdots (x - x_{i_{k-1}})$$
$$= f[x_{i_0}] + f[x_{i_0}, x_{i_1}](x - x_{i_0}) + \cdots + f[x_{i_0}, x_{i_1}, \dots, x_{i_k}](x - x_{i_0}) \cdots (x - x_{i_{k-1}}).$$

The divided differences are defined iteratively by
$$f[x_i] = f(x_i), \qquad f[x_{i_0}, \dots, x_{i_k}] = \frac{f[x_{i_1}, \dots, x_{i_k}] - f[x_{i_0}, \dots, x_{i_{k-1}}]}{x_{i_k} - x_{i_0}}.$$

Tableau (see Algorithm 3.2):

x_0   y_0 = f[x_0]
                     f[x_0,x_1]
x_1   y_1 = f[x_1]                f[x_0,x_1,x_2]
                     f[x_1,x_2]                    f[x_0,x_1,x_2,x_3]
x_2   y_2 = f[x_2]                f[x_1,x_2,x_3]
                     f[x_2,x_3]
x_3   y_3 = f[x_3]

Theorem (Newton's Interpolation Formula). If $f \in C^{n+1}(I)$ for $I = I[x_0, \dots, x_n, x]$, then for some $\xi \in I$,
$$(*) \qquad f(x) = f(x_0) + \sum_{k=1}^{n} f[x_0, x_1, \dots, x_k](x - x_0) \cdots (x - x_{k-1}) + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0) \cdots (x - x_n).$$

Proof: Follows from the above and the uniqueness of the interpolating polynomial in $\mathcal{P}_n$. In fact, the formula holds with
$$\frac{f^{(n+1)}(\xi)}{(n+1)!} = f[x_0, \dots, x_n, x],$$
since if $x$ is added as an $(n+2)$nd point,
$$f(x) = P_{n+1}(x) = P_n(x) + f[x_0, \dots, x_n, x](x - x_0) \cdots (x - x_n). \qquad \text{QED}$$
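The divided-difference tableau and the Newton form lend themselves to an equally compact implementation. The sketch below (Python; the names divided_differences and newton_eval are mine) builds the coefficients $f[x_0], f[x_0,x_1], \dots$ and then evaluates the Newton form by a Horner-like nested scheme.

def divided_differences(x, y):
    """Return the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    coef = list(y)
    n = len(x)
    for k in range(1, n):
        for j in range(n - 1, k - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (x[j] - x[j - k])
    return coef

def newton_eval(x, coef, t):
    """Evaluate the Newton form at t by nested (Horner-like) multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (t - x[k]) + coef[k]
    return result

x_nodes = [0.0, 1.0, 2.0, 4.0]
y_nodes = [1.0, 2.0, 2.0, 5.0]
c = divided_differences(x_nodes, y_nodes)
print(newton_eval(x_nodes, c, 1.0))        # reproduces the data at a node: 2.0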

Theorem (Properties of Divided Differences)
(i) $f[x_0, \dots, x_n]$ is symmetric in $x_0, \dots, x_n$.
(ii) For $f \in C^n(I[x_0, \dots, x_n])$, there exists $\xi \in I[x_0, \dots, x_n]$ such that
$$f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi)}{n!}.$$
(iii) For $f \in C^n(I[x_0, \dots, x_n])$, with $t_0 = 1 - \sum_{i=1}^{n} t_i$,
$$f[x_0, \dots, x_n] = \int_0^1 dt_1 \int_0^{1 - t_1} dt_2 \cdots \int_0^{1 - \sum_{i=1}^{n-1} t_i} dt_n \; f^{(n)}(t_0 x_0 + t_1 x_1 + \cdots + t_n x_n)$$
(Hermite-Genocchi formula).

Proof:
(i) $P_{0,1,\dots,n}$ is uniquely determined by $(x_0, y_0), \dots, (x_n, y_n)$, and $f[x_0, \dots, x_n]$ is the coefficient of its highest-order term.
(ii) Use (*) and the analogous formula for $n-1$ to infer that
$$f[x_0, \dots, x_n](x - x_0) \cdots (x - x_{n-1}) + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0) \cdots (x - x_n) = \frac{f^{(n)}(\xi')}{n!}(x - x_0) \cdots (x - x_{n-1}).$$
Setting $x = x_n$ gives the result $f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi')}{n!}$.
(iii) Exercise: Use induction and integration by parts. QED

The Hermite-Genocchi formula shows that $f[x_0, \dots, x_n]$ may be continuously extended to points where the arguments coincide. For example,
$$f[\underbrace{x_0, \dots, x_0}_{(n+1)\ \text{times}}] = \frac{f^{(n)}(x_0)}{n!}.$$
Also, if $f \in C^{n+2}$,
$$\frac{d}{dx} f[x_0, \dots, x_n, x] = \lim_{h \to 0} \frac{f[x_0, \dots, x_n, x+h] - f[x_0, \dots, x_n, x]}{h} = \lim_{h \to 0} f[x_0, \dots, x_n, x, x+h] = f[x_0, \dots, x_n, x, x].$$

These facts are of particular utility in Hermite interpolation (see below).

Finite difference formulas: For sequences $y_i$, $i \in \mathbb{Z}$, define
$$(\Delta y)_i = y_{i+1} - y_i \quad \text{(forward difference operator)}, \qquad (\nabla y)_i = y_i - y_{i-1} \quad \text{(backward difference operator)}.$$

Exercise:
$$(\Delta^n y)_0 = \sum_{k=0}^{n} \binom{n}{k} (-1)^k y_{n-k}, \qquad (\nabla^n y)_0 = \sum_{k=0}^{n} \binom{n}{k} (-1)^k y_{-k},$$
where $\binom{n}{k} = \frac{n!}{k!(n-k)!}$.

For a uniform grid, $x_i = x_0 + ih$, $h > 0$, and
$$f[x_0, x_1, \dots, x_k] = \frac{1}{k!} \frac{\Delta^k f(x_0)}{h^k}, \qquad f[x_{n-k}, \dots, x_{n-1}, x_n] = \frac{1}{k!} \frac{\nabla^k f(x_n)}{h^k}.$$
Check it! Also, with $x = x_0 + sh$,
$$(x - x_0) \cdots (x - x_{k-1}) = s(s-1) \cdots (s-k+1)\,h^k,$$
so that
$$P_n(x) = \sum_{k=0}^{n} \binom{s}{k} \Delta^k f(x_0) \qquad \text{(Newton forward difference formula)},$$
and with $x = x_n - sh$,
$$(x - x_n) \cdots (x - x_{n-k+1}) = (-1)^k s(s-1) \cdots (s-k+1)\,h^k,$$
so that
$$P_n(x) = \sum_{k=0}^{n} (-1)^k \binom{s}{k} \nabla^k f(x_n) \qquad \text{(Newton backward difference formula)}.$$
For points close to $x_0$, the forward formula is most accurate, and for those close to $x_n$, the backward formula is best. In the middle, zigzag. See F. B. Hildebrand, 2nd ed. (1974), for a complete discussion.
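On a uniform grid the Newton forward formula needs only a difference table of the data. A small Python sketch (the name newton_forward_eval and the test data are mine):

def newton_forward_eval(x0, h, y, x):
    """Evaluate the Newton forward-difference interpolant at x,
    using P_n(x0 + s*h) = sum_k C(s, k) * Delta^k y_0."""
    diffs = list(y)
    delta0 = [diffs[0]]                      # Delta^k y_0, k = 0, 1, ...
    for k in range(1, len(y)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        delta0.append(diffs[0])
    s = (x - x0) / h
    total, binom = 0.0, 1.0                  # binom = s(s-1)...(s-k+1)/k! = C(s, k)
    for k, dk in enumerate(delta0):
        total += binom * dk
        binom *= (s - k) / (k + 1)
    return total

# f(x) = x^2 sampled at 0, 1, 2, 3; interpolate at x = 1.5
print(newton_forward_eval(0.0, 1.0, [0.0, 1.0, 4.0, 9.0], 1.5))   # -> 2.25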

Hermite interpolation

Consider sample points $(x_i, y_i^{(k)})$, $k = 0, 1, \dots, n_i$, for $i = 0, 1, \dots, m$ with $x_0 < x_1 < \cdots < x_m$. The Hermite interpolation problem is to find $P \in \mathcal{P}_n$ with
$$n + 1 = \sum_{i=0}^{m} (n_i + 1)$$
such that
$$(*) \qquad P^{(k)}(x_i) = y_i^{(k)}, \quad k = 0, 1, \dots, n_i, \quad i = 0, 1, \dots, m.$$

Theorem
(i) (Existence & Uniqueness) For arbitrary sample points $x_0 < x_1 < \cdots < x_m$ and values $y_i^{(k)}$, $k = 0, 1, \dots, n_i$, $i = 0, 1, \dots, m$, there exists a unique $P \in \mathcal{P}_n$ which satisfies (*).
(ii) (Error estimate) If $f \in C^{n+1}(I[x_0, x_1, \dots, x_m, \bar{x}])$, then
$$f(\bar{x}) = P(\bar{x}) + \frac{f^{(n+1)}(\xi)}{(n+1)!} (\bar{x} - x_0)^{n_0+1} \cdots (\bar{x} - x_m)^{n_m+1}$$
for some $\xi \in I[x_0, x_1, \dots, x_m, \bar{x}]$.

Proofs: Entirely analogous to those for Lagrange interpolation.

There is an explicit representation in terms of
$$P(x) = \sum_{i=0}^{m} \sum_{k=0}^{n_i} y_i^{(k)} L_{ik}(x),$$
where the $L_{ik} \in \mathcal{P}_n$ are generalized Lagrange polynomials, satisfying
$$L_{ik}^{(s)}(x_j) = \begin{cases} 1 & \text{if } i = j,\ k = s, \\ 0 & \text{otherwise.} \end{cases}$$
See Stoer & Bulirsch (1980), Section 2.1.5. Also, generalizations of Neville's algorithm & Newton's formula exist. E.g., the unique cubic polynomial which solves $P(a) = f(a)$, $P'(a) = f'(a)$ and $P(b) = f(b)$, $P'(b) = f'(b)$ is obtained from the divided-difference tableau with repeated nodes

a   f(a) = f[a]
                   f'(a) = f[a,a]
a   f(a) = f[a]                     f[a,a,b]
                   f[a,b]                         f[a,a,b,b]
b   f(b) = f[b]                     f[a,b,b]
                   f'(b) = f[b,b]
b   f(b) = f[b]

which yields
$$P(x) = f(a) + f'(a)(x-a) + f[a,a,b](x-a)^2 + f[a,a,b,b](x-a)^2(x-b).$$
See Project #5 for examples. For a general discussion, see Stoer & Bulirsch, Section 2.1.5.

Other important interpolation problems: rational functions $P(x)/Q(x)$, and trigonometric polynomials $c_0 + c_1 e^{ix} + \cdots + c_{n-1} e^{i(n-1)x}$.

3(b) Trigonometric Interpolation

Theorem. Given $N = 2n+1$ distinct points $x_0, x_1, \dots, x_{N-1} \in \mathbb{T}$ and arbitrary numbers $y_0, y_1, \dots, y_{N-1}$, real or complex, there is a unique trigonometric polynomial $T \in \mathcal{T}_n := \{\sum_{k=-n}^{n} c_k e^{ikx} : c_k \in \mathbb{C}\}$ such that
$$T(x_k) = y_k, \quad k = 0, 1, \dots, N-1.$$

Proof: The Haar condition for the system $e^{ikx}$, $|k| \le n$, is that the following determinant not vanish:
$$0 \ne \begin{vmatrix} e^{-inx_0} & e^{-inx_1} & \cdots & e^{-inx_{N-1}} \\ e^{-i(n-1)x_0} & e^{-i(n-1)x_1} & \cdots & e^{-i(n-1)x_{N-1}} \\ \vdots & & & \vdots \\ e^{inx_0} & e^{inx_1} & \cdots & e^{inx_{N-1}} \end{vmatrix} = e^{-in(x_0 + x_1 + \cdots + x_{N-1})} \begin{vmatrix} 1 & 1 & \cdots & 1 \\ e^{ix_0} & e^{ix_1} & \cdots & e^{ix_{N-1}} \\ \vdots & & & \vdots \\ e^{i(N-1)x_0} & e^{i(N-1)x_1} & \cdots & e^{i(N-1)x_{N-1}} \end{vmatrix}.$$
The latter is a Vandermonde determinant in the complex numbers $z_0 = e^{ix_0}, \dots, z_{N-1} = e^{ix_{N-1}}$. Thus the condition becomes
$$0 \ne e^{-in(x_0 + x_1 + \cdots + x_{N-1})} \prod_{k < j} (e^{ix_k} - e^{ix_j}),$$
which is true as long as $x_k \ne x_j$ for $k \ne j$. QED

It may be shown (see Homework!) that the interpolating trigonometric polynomial may be written as
$$T(x) = \sum_{j=0}^{N-1} y_j\, t_j(x),$$

where $t_j(x)$ is the Fourier-Lagrange polynomial
$$t_j(x) = \prod_{k \ne j} \frac{\sin\!\left(\frac{x - x_k}{2}\right)}{\sin\!\left(\frac{x_j - x_k}{2}\right)}.$$
Furthermore, if interpolation at an even number of points $N = 2n$ is required, then it may be shown that a unique solution $T(x)$ exists for trigonometric polynomials of the form
$$T(x) = \tfrac{1}{2} a_0 + \sum_{k=1}^{n-1} \left[a_k \cos(2\pi kx) + b_k \sin(2\pi kx)\right] + d_n \cos(2\pi nx + \varphi)$$
for any choice of the phase $\varphi$. See A. Zygmund, Trigonometric Series, Chapter X, Section 3. We usually take $\varphi = 0$, $d_n = a_n$.

For $f \in C(\mathbb{T})$, the unique polynomial $I_n(f) \in \mathcal{T}_n$ of the above forms which satisfies
$$I_n(f; x_k) = f(x_k), \quad k = 0, 1, \dots, N-1,$$
is called the interpolating trigonometric polynomial. It is interesting to ask for the behavior of $I_n(f)$ as $n \to \infty$. However, little is known except for the following special case.

Interpolation on the uniform grid: Nodal points are taken at
$$x_k = x_0 + \frac{k}{N}, \quad k = 0, 1, \dots, N-1,$$
with $x_0 \in \mathbb{T}$ arbitrary. The following is known:

Theorem. Let $I_n(f)$ be the interpolating trigonometric polynomial on a uniform grid for $f \in C(\mathbb{T})$. Then
$$\varrho_n(f) \le \|f - I_n(f)\|_\infty \le C \log(n+2)\, \varrho_n(f)$$
for some constant $C$, with $\varrho_n(f) := \inf_{T_n \in \mathcal{T}_n} \|f - T_n\|_\infty$.

Proof: See A. Zygmund, Trigonometric Series, Ch. X, Section 5.

Contrast this behavior with that for ordinary polynomials (Runge phenomenon). Trigonometric polynomials on uniform grids are much better behaved (in fact, almost optimal).

There is a very elegant method of finding $I_n(f)$ for a uniform grid of $N$ points, based upon the following:

Discrete Fourier Series

For $f = (f_0, f_1, \dots, f_{N-1}) \in \mathbb{C}^N$, define $\hat{f} = \mathcal{F}f$ by
$$\hat{f}_k = \frac{1}{N} \sum_{l=0}^{N-1} f_l\, e^{-2\pi i kl/N} \equiv F_k, \quad k \in \mathbb{Z}.$$
This is called the discrete Fourier transform of $f$. Note that $\hat{f}_{k+N} = \hat{f}_k$ for all $k \in \mathbb{Z}$. Thus it is possible to view $\hat{f}$ as a function on $\mathbb{Z}_N$, the integers modulo $N$. Likewise, $f$ may be periodically extended to $\mathbb{Z}$, $f_{k+N} = f_k$, and considered as a function on $\mathbb{Z}_N$. The following is basic:

Theorem. The set of functions
$$e_l^{(N)}(k) = \frac{1}{\sqrt{N}}\, e^{2\pi i kl/N}, \quad l = 0, 1, \dots, N-1,$$
is an orthonormal basis for $\ell^2(\mathbb{Z}_N)$. Thus, any function $f \in \ell^2(\mathbb{Z}_N)$ may be expanded into a Fourier series
$$f_k = \sum_{l=0}^{N-1} \hat{f}_l\, e^{2\pi i kl/N} \qquad \text{with} \qquad \hat{f}_l = \frac{1}{N} \sum_{k=0}^{N-1} f_k\, e^{-2\pi i kl/N}.$$
There is a Parseval equality
$$\frac{1}{N} \sum_{k=0}^{N-1} |f_k|^2 = \sum_{l=0}^{N-1} |\hat{f}_l|^2.$$

Proof: For $k = 0, 1, \dots, N-1$, the complex numbers $\omega_k \equiv e^{2\pi i k/N}$ are the $N$th roots of unity. Hence, for $k \ne 0$,
$$\sum_{l=0}^{N-1} \omega_k^l = \frac{\omega_k^N - 1}{\omega_k - 1} = 0,$$

while for $k = 0$,
$$\sum_{l=0}^{N-1} \omega_0^l = \sum_{l=0}^{N-1} 1 = N.$$
It follows that
$$\sum_{l=0}^{N-1} e^{2\pi i kl/N} = N \delta_{k,0}.$$
Substituting $k = n - m$ gives
$$\sum_{l=0}^{N-1} e^{2\pi i nl/N}\, e^{-2\pi i ml/N} = N \delta_{n,m},$$
which is the first statement. The rest of the theorem follows easily. QED

Then there is:

Theorem. If $I_n(f)$ is the unique interpolating polynomial in $\mathcal{T}_n$ for the $N$ points
$$x_k = \frac{k}{N}, \quad k = 0, 1, \dots, N-1$$
(with $N = 2n$ or $2n+1$, according as $N$ is even or odd), then $I_n(f)$ is given in terms of the discrete Fourier transform $\hat{f}$ of the vector $f = (f(x_0), f(x_1), \dots, f(x_{N-1}))$ as
$$I_n(f; x) = \sum_{|l| \le n} \hat{f}_l\, e^{2\pi i lx}, \quad N = 2n+1 \text{ odd},$$
and
$$I_n(f; x) = \sum_{|l| \le n-1} \hat{f}_l\, e^{2\pi i lx} + (\mathrm{Re}\,\hat{f}_n) \cos(2\pi nx), \quad N = 2n \text{ even},$$
with the phase $\varphi = 0$ in the case $N = 2n$, even.

Proof: Consider first $N = 2n+1$, odd. Since $I_n \in \mathcal{T}_n$ as defined above, we need only verify the interpolation property. However, as defined,
$$I_n(x_k) = \sum_{l=-n}^{n} \hat{f}_l\, e^{2\pi i lx_k} = \sum_{l=-n}^{n} \hat{f}_l\, e^{2\pi i lk/N} = e^{-2\pi i nk/N} \sum_{l=0}^{2n} \hat{f}_{l-n}\, e^{2\pi i lk/N}.$$

But
$$\hat{f}_{l-n} = \frac{1}{N} \sum_{p=0}^{N-1} f(x_p)\, e^{-2\pi i (l-n)p/N} = \frac{1}{N} \sum_{p=0}^{N-1} \left( e^{2\pi i np/N} f(x_p) \right) e^{-2\pi i lp/N}$$
is just the discrete Fourier transform $\hat{g}_l$ of $g_p = e^{2\pi i np/N} f(x_p)$. Thus, by the inversion formula above,
$$I_n(x_k) = e^{-2\pi i nk/N} g_k = e^{-2\pi i nk/N}\, e^{2\pi i nk/N} f(x_k) = f(x_k),$$
as claimed. For $N = 2n$, even, an identical argument shows that both
$$\sum_{l=-n}^{n-1} \hat{f}_l\, e^{2\pi i lx_k} = f(x_k) \quad \text{and} \quad \sum_{l=-(n-1)}^{n} \hat{f}_l\, e^{2\pi i lx_k} = f(x_k).$$
Thus, also,
$$I_n(x_k) = \frac{1}{2} \left[ \sum_{l=-n}^{n-1} \hat{f}_l\, e^{2\pi i lx_k} + \sum_{l=-(n-1)}^{n} \hat{f}_l\, e^{2\pi i lx_k} \right] = f(x_k).$$
We have used $\mathrm{Re}[\hat{f}_n e^{2\pi i nx_k}] = \mathrm{Re}[\hat{f}_n] \cos(2\pi nx_k)$, since $e^{2\pi i nx_k} = \cos(2\pi nx_k) = (-1)^k$. QED

Equivalently,

Theorem. The trigonometric polynomials
$$T(x) = \frac{a_0}{2} + \sum_{l=1}^{n} \left[a_l \cos(2\pi lx) + b_l \sin(2\pi lx)\right]$$
and
$$T(x) = \frac{a_0}{2} + \sum_{l=1}^{n-1} \left[a_l \cos(2\pi lx) + b_l \sin(2\pi lx)\right] + \frac{a_n}{2} \cos(2\pi nx),$$
for $N = 2n+1$ and $N = 2n$, respectively, satisfy $T(x_k) = f_k$, $k = 0, 1, \dots, N-1$, for $x_k = k/N$ if and only if the coefficients of $T(x)$ are given by
$$a_l = \frac{2}{N} \sum_{k=0}^{N-1} f_k \cos\!\left(\frac{2\pi kl}{N}\right), \qquad b_l = \frac{2}{N} \sum_{k=0}^{N-1} f_k \sin\!\left(\frac{2\pi kl}{N}\right).$$

Proof: Obvious!
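For real data, the coefficient formulas of the last theorem can be evaluated directly (or read off from an FFT, up to normalization). A minimal sketch in Python/NumPy; the function names and the test data are mine, and the evaluation routine is written for the odd case N = 2n+1 only.

import numpy as np

def trig_interp_coeffs(f):
    """a_l = (2/N) sum_k f_k cos(2 pi k l / N), b_l likewise with sin,
    for l = 0, ..., N//2."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    k = np.arange(N)
    L = N // 2 + 1
    a = np.array([2.0 / N * np.sum(f * np.cos(2 * np.pi * k * l / N)) for l in range(L)])
    b = np.array([2.0 / N * np.sum(f * np.sin(2 * np.pi * k * l / N)) for l in range(L)])
    return a, b

def trig_interp_eval(a, b, N, x):
    """Evaluate T(x) for odd N = 2n+1 (first form of the theorem)."""
    n = (N - 1) // 2
    T = a[0] / 2
    for l in range(1, n + 1):
        T += a[l] * np.cos(2 * np.pi * l * x) + b[l] * np.sin(2 * np.pi * l * x)
    return T

f = np.array([0.0, 1.0, 4.0, 4.0, 1.0])          # N = 5 samples at x_k = k/5
a, b = trig_interp_coeffs(f)
print([trig_interp_eval(a, b, 5, k / 5) for k in range(5)])   # reproduces f_k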

Aliasing Error: Consider a function of the form $f(x) = A e^{2\pi i (p + sN)x}$ with $|p| \le n$, $s \in \mathbb{Z}$, $s \ne 0$, and with, for simplicity, $N = 2n+1$. Since
$$f(x_k) = A e^{2\pi i (p x_k + sk)} = A e^{2\pi i p x_k} \quad \text{for } x_k = \frac{k}{N},$$
it follows that $I_n(f; x) = A e^{2\pi i px}$. Thus, all large wavenumbers $p + sN$, $s \ne 0$, are folded back to the wavenumbers $p$, $|p| \le n$, in the interpolating polynomial. This is called aliasing error in $I_n(f; x)$.

Fast Fourier Transform

Naively, the calculation of the discrete Fourier transform requires $N^2$ complex multiplications. However, with appropriate choices of $N$, e.g. $N = 2^n$, $n \in \mathbb{N}$, the number of operations can be reduced to the order of $N \log N$. Such methods have a long history, but first became popular in computing after the work of J. W. Cooley & J. W. Tukey, Math. Comput. (1965). For a history, see E. O. Brigham, The Fast Fourier Transform (Englewood Cliffs, Prentice-Hall, 1974). There are two basic methods.

Cooley-Tukey method: We follow the elegant derivation of Danielson-Lanczos (1942). To simplify the derivation, we define
$$F_k = \frac{1}{N} F_k^0, \quad k = 0, 1, \dots, N-1,$$

where
$$F_k^0 = \sum_{l=0}^{N-1} f_l\, e^{-2\pi i kl/N}$$
has the normalization factor $\frac{1}{N}$ omitted. Recall $N = 2^n$. The basic observation is that $F_k^0$ for each $k$ can be written as the sum of two discrete Fourier transforms, each of length $\frac{N}{2} = 2^{n-1}$. One is formed from the even-numbered points of the original $N$, the other from the odd-numbered points:
$$(*) \qquad F_k^0 = \sum_{l=0}^{N/2-1} e^{-2\pi i k(2l)/N} f_{2l} + \sum_{l=0}^{N/2-1} e^{-2\pi i k(2l+1)/N} f_{2l+1} = \sum_{l=0}^{N/2-1} e^{-2\pi i kl/(N/2)} f_{2l} + \omega_n^k \sum_{l=0}^{N/2-1} e^{-2\pi i kl/(N/2)} f_{2l+1} \equiv F_k^{00} + \omega_n^k F_k^{01},$$
with $\omega_n \equiv e^{-2\pi i/2^n}$. Here $k$ still runs over $k = 0, 1, \dots, N-1$. However, if we note that $F_k^{00}$, $F_k^{01}$ are periodic under translations of $k$ by $\frac{N}{2}$, and that
$$\omega_n^{k + \frac{N}{2}} = \omega_n^k\, e^{-\pi i} = -\omega_n^k,$$
then we obtain
$$F_{k + \frac{N}{2}}^0 = F_k^{00} - \omega_n^k F_k^{01}.$$
The two relations
$$(R_n) \qquad \begin{cases} F_k^0 = F_k^{00} + \omega_n^k F_k^{01} \\[2pt] F_{k + \frac{N}{2}}^0 = F_k^{00} - \omega_n^k F_k^{01} \end{cases} \qquad k = 0, 1, \dots, \tfrac{N}{2} - 1,$$
with the definitions of $F_k^{00}$, $F_k^{01}$ above, replace the previous expression (*) for $F_k^0$, $k = 0, 1, \dots, N-1$. Notice that one multiplication is now required for the two terms, giving a savings by a factor of 2 in the number of operations. However, the main savings in the algorithm comes from nesting of multiplications (cf. Horner's method for polynomials). A careful operation count is given below.

The savings are increased by iterating. If we adopt the convention $b_{-1} = 0$, then the general recursion may be written, for each $m = n, n-1, \dots, 1$,
$$(R_m) \qquad \begin{cases} F_k^{b_{-1} b_0 \cdots b_{n-m-1}} = F_k^{b_{-1} b_0 \cdots b_{n-m-1} 0} + \omega_m^k\, F_k^{b_{-1} b_0 \cdots b_{n-m-1} 1} \\[2pt] F_{k + 2^{m-1}}^{b_{-1} b_0 \cdots b_{n-m-1}} = F_k^{b_{-1} b_0 \cdots b_{n-m-1} 0} - \omega_m^k\, F_k^{b_{-1} b_0 \cdots b_{n-m-1} 1} \end{cases}$$
with $\omega_m \equiv e^{-2\pi i/2^m}$ and
$$F_k^{b_{-1} b_0 \cdots b_{n-m}} = \sum_{l=0}^{2^{m-1} - 1} e^{-2\pi i kl/2^{m-1}}\, f_{2^{n-m+1} l + (2^{n-m} b_{n-m} + \cdots + 2 b_1 + b_0)}$$
for $k = 0, 1, \dots, 2^{m-1} - 1$. The quantities $b_0, b_1, \dots, b_{n-1}$ are all bits, taking values 0 or 1, which indicate whether the sublattice chosen is even or odd, respectively.

Exercise: Prove by induction!

At the final iteration, one obtains
$$F_0^{b_{-1} b_0 \cdots b_{n-1}} = f_{2^{n-1} b_{n-1} + \cdots + 2 b_1 + b_0}.$$
Note that $2^{n-1} b_{n-1} + \cdots + 2 b_1 + b_0$ is nothing but the binary expansion of $k = 0, 1, \dots, 2^n - 1$. Therefore, the discrete Fourier transform $F_k = \hat{f}_k$ of $f_k$ can be obtained from the following algorithm:

Step 1: Initialize the array $F_0^{b_{-1} b_0 \cdots b_{n-1}}$ in bit-reversed order, using the final-iteration identity above.
Step 2: Use $(R_m)$ for $m = 1, \dots, n$ to calculate the arrays $F_k^{b_{-1} b_0 \cdots b_{n-m-1}}$, $k = 0, 1, \dots, 2^m - 1$, from the arrays $F_k^{b_{-1} b_0 \cdots b_{n-m}}$, $k = 0, 1, \dots, 2^{m-1} - 1$.
Step 3: Calculate $F_k$ from $F_k^0 = F_k^{b_{-1}}$ via $F_k = \frac{1}{N} F_k^0$, $k = 0, 1, \dots, 2^n - 1$.

Operation Count: Note that the relations $(R_m)$ at stage $m$ of Step 2 require multiplication of the factors indexed by $2^{m-1}$ values of $k$ and $2^{n-m}$ values of $(b_0, b_1, \dots, b_{n-m-1}, 1)$, i.e.
$$2^{m-1} \cdot 2^{n-m} = 2^{n-1} \text{ multiplications}$$
for each $m = 1, 2, \dots, n$, or
$$2^{n-1} \cdot n = \frac{N}{2} \log_2 N \text{ multiplications.}$$

Including the $N$ multiplications by the factor $\frac{1}{N}$ for each $k = 0, 1, \dots, N-1$ in Step 3 gives a total of
$$\frac{N}{2} \log_2 N + N \text{ multiplications.}$$

Sande-Tukey method: Another FFT method exists with the same operation count, based upon an opposite idea, i.e. combining terms in the sum rather than separating into subsums. See the Homework! This is sometimes called decimation-in-frequency, whereas the Cooley-Tukey method is called decimation-in-time.

Additional Remarks on FFT: Many additional practical aspects of the FFT are nicely discussed in Numerical Recipes [NR]:
- FFTs can be carried out not only for $N = 2^n$ but for any factorization $N = N_1 \cdots N_p$. Highly optimized codings exist for taking small-$N$ discrete Fourier transforms, e.g. for $N = 2, 3, 4, 5, 7, 8, 11, 13, 16$, etc., called Winograd FFTs. See NR, Section 12.2.
- If the function $f$ is real, then $F_{-k} = \overline{F_k}$. This symmetry can be used to further economize the FFT. See NR, Section 12.3.
- There are also Fast-Fourier-Sine and Fast-Fourier-Cosine Transforms. See NR, Section 12.3.
- Special bookkeeping methods can be employed to reduce unnecessary copying of arrays in multidimensional FFTs. See NR.
- It is possible to compute convolutions
$$(f * g)(x) = \int_0^1 f(x - y)\, g(y)\, dy$$
with the FFT, using the convolution theorem: $\widehat{(f * g)}(n) = \hat{f}(n)\, \hat{g}(n)$. This can be done with algorithms which eliminate the bit-reordering steps required in the usual FFT. See NR.
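As a concrete companion to the Danielson-Lanczos/Cooley-Tukey derivation, here is a compact radix-2 FFT sketch in Python (names mine). It returns the unnormalized sums $F_k^0$; dividing by $N$ gives the $\hat{f}_k$ of these notes. A recursion is used for clarity; the in-place, bit-reversed formulation of Steps 1-3 above avoids the extra array copies made here.

import numpy as np

def fft_radix2(f):
    """Recursive radix-2 FFT for len(f) a power of two.
    Returns F^0_k = sum_l f_l e^{-2 pi i k l / N} (no 1/N factor)."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f
    F_even = fft_radix2(f[0::2])             # transform of even-indexed points
    F_odd = fft_radix2(f[1::2])              # transform of odd-indexed points
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors w^k
    return np.concatenate([F_even + w * F_odd,        # F^0_k,        k < N/2
                           F_even - w * F_odd])       # F^0_{k + N/2}

f = np.random.rand(16)
print(np.allclose(fft_radix2(f), np.fft.fft(f)))      # True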

3(c) Rational Interpolation

Problems with polynomial interpolation:
- Functions with poles on the real interval of interpolation, such as
$$\cot x = \frac{\cos x}{\sin x} \quad \text{at } x = 0 \text{ in } [0, 2\pi],$$
or with poles in the complex plane very near the interval of interpolation, such as
$$\frac{1}{\varepsilon^2 + x^2} = \frac{1}{(x + i\varepsilon)(x - i\varepsilon)} \quad \text{with poles at } x = \pm i\varepsilon \text{ near } [-1, 1],$$
are not accurately interpolated with polynomials.
- Polynomials usually oscillate, so that error bounds frequently exceed the average approximation error by a significant amount.

Better interpolants are found with rational functions
$$R_{m,n}(x) = \frac{P_m(x)}{Q_n(x)} = \frac{a_0 + a_1 x + \cdots + a_m x^m}{b_0 + b_1 x + \cdots + b_n x^n}.$$
The pair $[m, n]$ is called the degree-type, and $N = m + n$ is called the index. The number of free constants is $N + 1$, since one of the $N + 2$ constants may be chosen arbitrarily. (A frequent convention is to take $b_0 = 1$, when that is possible.)

Interpolation problem: For $N + 1$ distinct points $(x_i, y_i)$, $i = 0, 1, \dots, N$, find $R_{m,n}$ so that
$$(A) \qquad R_{m,n}(x_i) = y_i, \quad i = 0, 1, \dots, N.$$
For any solution, it must hold that $P_m(x_i) - y_i Q_n(x_i) = 0$, or
$$(B) \qquad a_0 + a_1 x_i + \cdots + a_m x_i^m - y_i (b_0 + b_1 x_i + \cdots + b_n x_i^n) = 0, \quad i = 0, 1, \dots, N.$$
(B) is a necessary, but not sufficient, condition for (A).

Example: Take the support points

x_i | 0   1   2
y_i | 1   2   2

with $m = n = 1$. Then (B) reads
$$a_0 - b_0 = 0, \qquad a_0 + a_1 - 2(b_0 + b_1) = 0, \qquad a_0 + 2a_1 - 2(b_0 + 2b_1) = 0,$$
which has a solution, unique up to a common constant factor, given by
$$a_0 = 0, \quad b_0 = 0, \quad a_1 = 2, \quad b_1 = 1.$$
Thus $R_{1,1}(x) = \frac{2x}{x}$, or equivalently $\tilde{R}_{1,1}(x) \equiv 2$. However, $(x_0, y_0) = (0, 1)$ is missed. Thus (B) is satisfied but (A) is not.

Moral: A rational interpolant of specified degree-type $[m, n]$ may not exist!

The notion of equivalence of rational expressions used above is that
$$R^{(i)}(x) = \frac{P^{(i)}(x)}{Q^{(i)}(x)}, \quad i = 1, 2 \quad (Q^{(i)}(x) \not\equiv 0),$$
are equivalent, $R^{(1)} \sim R^{(2)}$, if
$$P^{(1)}(x)\, Q^{(2)}(x) \equiv P^{(2)}(x)\, Q^{(1)}(x).$$
A rational expression is called relatively prime if its numerator $P(x)$ and denominator $Q(x)$ are not both divisible by a common polynomial of positive degree. Given the rational expression $R(x)$, we denote by $\tilde{R}(x)$ the corresponding equivalent relatively prime rational expression, obtained by cancelling the common factors of $P(x)$, $Q(x)$. It is unique up to a common constant factor in the numerator & denominator. We now have the following theorem:

Theorem. In any rational interpolation problem, the solution of the linear system (B) exists and is unique, in the following sense:

(i) The homogeneous linear system (B) has nontrivial solutions, and for each such solution $R_{m,n}(x) = \frac{P_m(x)}{Q_n(x)}$ one has $Q_n \not\equiv 0$, i.e. it defines a rational function.
(ii) Any two nontrivial solutions are equivalent: $R^{(1)} \sim R^{(2)}$.

Proof:
(i) Since the homogeneous linear system (B) has $n + m + 1$ equations for $n + m + 2$ unknowns, there are solutions with
$$(*) \qquad (a_0, \dots, a_m, b_0, \dots, b_n) \ne (0, \dots, 0, 0, \dots, 0).$$
In that case, also $(b_0, \dots, b_n) \ne (0, \dots, 0)$. Otherwise, from (B),
$$P_m(x_i) = y_i Q_n(x_i) = 0, \quad i = 0, 1, \dots, n + m.$$
Since $P_m$ has degree at most $m$, this would imply $P_m(x) \equiv 0$, contradicting (*). Thus $Q_n(x) \not\equiv 0$.
(ii) Given two solutions $R^{(1)}, R^{(2)}$, consider
$$S(x) := P^{(1)}(x)\, Q^{(2)}(x) - P^{(2)}(x)\, Q^{(1)}(x) \in \mathcal{P}_{m+n}.$$
Since
$$S(x_i) = (y_i Q^{(1)}(x_i))\, Q^{(2)}(x_i) - (y_i Q^{(2)}(x_i))\, Q^{(1)}(x_i) = 0$$
for $i = 0, 1, \dots, n + m$, we have $S(x) \equiv 0$ and thus $R^{(1)} \sim R^{(2)}$. QED

Important Remark: The converse to (ii) is false: not every rational expression $\tilde{R}$ equivalent to a solution $R$ of (B) is itself a solution. In the previous example, we saw that $\tilde{R}$ failed to be a solution of (B) [and hence of (A)]. Since (B) is necessary for (A), either the solution $R$ of (B) solves (A), or else (A) is not solvable at all. In the latter case, there must be support points $(x_i, y_i)$ which are not hit by $\tilde{R}$. These are called inaccessible. Thus, (A) is solvable iff there are no inaccessible points. The following theorem is basic:

Theorem: For any rational interpolation problem (A), a solution exists iff, for any solution $R$ of (B), its relatively prime version $\tilde{R}$ is also a solution of (B).

Proof: Suppose $R_{m,n}(x) = \frac{P_m(x)}{Q_n(x)}$ is a solution of (B). Then a point $x_i$, $i = 0, 1, \dots, m + n$, is inaccessible if and only if $Q_n(x_i) = 0$;

otherwise, dividing (B) by $Q_n(x_i)$ would give $R_{m,n}(x_i) = y_i$. However, for any solution of (B), $Q_n(x_i) = 0$ iff both $Q_n(x_i) = 0$ and $P_m(x_i) = 0$, since $P_m(x_i) = y_i Q_n(x_i)$. In that case, $Q_n(x)$ & $P_m(x)$ must have a common factor $(x - x_i)$ and cannot be relatively prime. Thus, an inaccessible point $x_i$ exists iff no solution $R_{m,n}(x)$ of (B) is relatively prime. QED

A set of support points $(x_p, y_p)$, $p = 0, 1, \dots, s$, is said to be in special position if it is interpolated by a rational expression of degree-type $[k, l]$ with $k + l < s$.

Theorem. In a nonsolvable rational interpolation problem, the accessible support points are in special position.

Proof: Let $R_{m,n}$ be the solution of (B) and $x_{i_1}, \dots, x_{i_a}$ the inaccessible points, $a \ge 1$. Then $P_m(x)$ & $Q_n(x)$ have a common factor $(x - x_{i_1}) \cdots (x - x_{i_a})$, whose cancellation gives
$$P_k(x) = \frac{P_m(x)}{(x - x_{i_1}) \cdots (x - x_{i_a})}, \quad k = m - a, \qquad Q_l(x) = \frac{Q_n(x)}{(x - x_{i_1}) \cdots (x - x_{i_a})}, \quad l = n - a,$$
so that
$$R_{k,l}(x) \equiv \frac{P_k(x)}{Q_l(x)}$$
solves the interpolation problem for the $m + n + 1 - a$ accessible points. Since $k + l + 1 = m + n + 1 - 2a < m + n + 1 - a$, the accessible points are obviously in special position. QED

Thus, nonsolvable rational interpolation problems are highly nongeneric!

Algorithms to Solve Rational Interpolation Problems

(i) Inverse & Reciprocal Differences; Continued Fraction Representations

The algorithms discussed here solve for the rational interpolants $R_{m,n}$ along the main diagonal of the $[m, n]$ plane.

Inverse differences are defined recursively by
$$\varphi(x_{i_1}, \dots, x_{i_{l-1}}, x_{i_l}, x_{i_{l+1}}) = \frac{x_{i_l} - x_{i_{l+1}}}{\varphi(x_{i_1}, \dots, x_{i_{l-1}}, x_{i_l}) - \varphi(x_{i_1}, \dots, x_{i_{l-1}}, x_{i_{l+1}})},$$
initiated with
$$\varphi(x_i, x_j) = \frac{x_i - x_j}{y_i - y_j} = \frac{x_i - x_j}{f(x_i) - f(x_j)}.$$
Note: It is possible for $\varphi = \infty$, when denominators vanish. Also, $\varphi(x_{i_1}, \dots, x_{i_l})$ is generally not symmetric in its arguments $x_{i_1}, \dots, x_{i_l}$.

Theorem. When the solution $R_{n,n}(x)$ to a rational interpolation problem of degree-type $[n, n]$ exists, it is represented by the continued fraction
$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cfrac{x - x_2}{\varphi(x_0, x_1, x_2, x_3) + \cdots + \cfrac{x - x_{2n-1}}{\varphi(x_0, x_1, x_2, \dots, x_{2n})}}}}$$

Proof: We use $P_k$, $Q_k$ generically to denote elements of $\mathcal{P}_k$. Thus, if $R_{n,n}(x) = \frac{P_n(x)}{Q_n(x)}$ exists, it follows that $R_{n,n}(x_0) = f_0$ and thus
$$R_{n,n}(x) = f_0 + R_{n,n}(x) - R_{n,n}(x_0) = f_0 + \frac{P_n(x)}{Q_n(x)} - \frac{P_n(x_0)}{Q_n(x_0)} = f_0 + (x - x_0)\frac{P_{n-1}(x)}{Q_n(x)} = f_0 + \frac{x - x_0}{Q_n(x)/P_{n-1}(x)}.$$

Since also $R_{n,n}(x_1) = f_1$, it follows that
$$\frac{Q_n(x_1)}{P_{n-1}(x_1)} = \frac{x_1 - x_0}{f_1 - f_0} = \varphi(x_0, x_1).$$
Hence, it follows in the same way that
$$\frac{Q_n(x)}{P_{n-1}(x)} = \varphi(x_0, x_1) + \frac{Q_n(x)}{P_{n-1}(x)} - \frac{Q_n(x_1)}{P_{n-1}(x_1)} = \varphi(x_0, x_1) + (x - x_1)\frac{Q_{n-1}(x)}{P_{n-1}(x)} = \varphi(x_0, x_1) + \frac{x - x_1}{P_{n-1}(x)/Q_{n-1}(x)},$$
and thus
$$\frac{P_{n-1}(x_2)}{Q_{n-1}(x_2)} = \frac{x_2 - x_1}{\varphi(x_0, x_2) - \varphi(x_0, x_1)} = \varphi(x_0, x_1, x_2).$$
Continuing in this manner, one obtains
$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{Q_n(x)/P_{n-1}(x)} = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{P_{n-1}(x)/Q_{n-1}(x)}} = \cdots = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cdots + \cfrac{x - x_{2n-1}}{\varphi(x_0, x_1, \dots, x_{2n})}}}$$
The same argument shows as well that
$$R_{n,n-1}(x) = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cdots + \cfrac{x - x_{2n-2}}{\varphi(x_0, x_1, \dots, x_{2n-1})}}}$$
when the problem of degree-type $[n, n-1]$ is solvable. QED

The inverse differences are not fully symmetric in their arguments, but so-called reciprocal differences $\varrho(x_{i_0}, \dots, x_{i_k})$ can be defined iteratively as

$$(*) \qquad \varrho(x_{i_0}, x_{i_1}, \dots, x_{i_{k-1}}, x_{i_k}) = \frac{x_{i_0} - x_{i_k}}{\varrho(x_{i_0}, \dots, x_{i_{k-1}}) - \varrho(x_{i_1}, \dots, x_{i_k})} + \varrho(x_{i_1}, \dots, x_{i_{k-1}}),$$
initiated as $\varrho(x_i) = f_i$. This may be arranged in a tableau:

x_0   f_0
              ϱ(x_0,x_1)
x_1   f_1                   ϱ(x_0,x_1,x_2)
              ϱ(x_1,x_2)                      ϱ(x_0,x_1,x_2,x_3)
x_2   f_2                   ϱ(x_1,x_2,x_3)
              ϱ(x_2,x_3)
x_3   f_3

Theorem
(i) The reciprocal difference $\varrho(x_0, \dots, x_k)$ is symmetric in its arguments; and
(ii) for $k \ge 1$,
$$\varphi(x_0, x_1, \dots, x_k) = \varrho(x_0, \dots, x_k) - \varrho(x_0, \dots, x_{k-2}).$$

Proof: See L. M. Milne-Thomson, The Calculus of Finite Differences (London, Macmillan, 1933).

Combining the two theorems gives Thiele's continued fraction representation
$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{\varrho(x_0, x_1) + \cfrac{x - x_1}{\varrho(x_0, x_1, x_2) - \varrho(x_0) + \cdots + \cfrac{x - x_{2n-1}}{\varrho(x_0, \dots, x_{2n}) - \varrho(x_0, \dots, x_{2n-2})}}}$$
and a similar representation for $R_{n,n-1}(x)$.

Example: With $x_i = i$, $i = 0, 1, 2, 3$, one computes $f_0 = 0$, $\varphi(x_0, x_1) = 1$, $\varphi(x_0, x_1, x_2) = \tfrac{1}{2}$, and $\varphi(x_0, x_1, x_2, x_3) = -\tfrac{1}{2}$, so that
$$R_{2,1}(x) = 0 + \cfrac{x}{1 + \cfrac{x - 1}{\tfrac{1}{2} + \cfrac{x - 2}{-\tfrac{1}{2}}}} = \frac{9x - 4x^2}{7 - 2x}.$$
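The inverse differences and the resulting continued fraction are easy to generate and evaluate numerically. Below is a rough Python sketch (the names are mine); it returns the coefficients $f_0, \varphi(x_0,x_1), \varphi(x_0,x_1,x_2), \dots$ and evaluates the continued fraction from the bottom up. No safeguards against vanishing denominators ($\varphi = \infty$) are included. The test data re-use the support values consistent with the worked example above.

def thiele_coefficients(x, f):
    """Return [f_0, phi(x_0,x_1), phi(x_0,x_1,x_2), ...] for the continued fraction."""
    n = len(x)
    p = list(f)                        # p[i] = phi(x_0,...,x_{k-1}, x_i) at stage k
    coeffs = [p[0]]
    for k in range(1, n):
        p = p[:k] + [(x[i] - x[k - 1]) / (p[i] - p[k - 1]) for i in range(k, n)]
        coeffs.append(p[k])
    return coeffs

def thiele_eval(x, coeffs, t):
    """Evaluate f_0 + (t-x_0)/(c_1 + (t-x_1)/(c_2 + ...)) from the bottom up."""
    val = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        val = coeffs[k] + (t - x[k]) / val
    return val

x_nodes = [0.0, 1.0, 2.0, 3.0]
f_nodes = [0.0, 1.0, 2.0 / 3.0, -9.0]
c = thiele_coefficients(x_nodes, f_nodes)        # -> [0, 1, 0.5, -0.5]
print([thiele_eval(x_nodes, c, xi) for xi in x_nodes])   # reproduces f_nodes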

For computational purposes it is generally better to leave the solution $R_{n,n}$ or $R_{n,n-1}$ in the form of a continued fraction, rather than converting to a rational fraction. In this way, the number of operations (multiplications and divisions) is significantly reduced.

Operation Count

Degree type    Rational fraction    Continued fraction
[3, 3]                 5                     3
[4, 3]                 6                     4
[4, 4]                 7                     4
[5, 4]                 7                     5
[5, 5]                 7                     5
[6, 5]                 8                     6
[6, 6]                 9                     6

(However, the number of divisions is greatly increased, so that rational fractions may be preferred if divisions are more costly!)

Note that the continued fraction formulation deserves emphasis because the rational interpolants of type $[n, n]$ and $[n, n-1]$ are generally the most accurate approximations among all rational functions of the same index!

(ii) Algorithms of the Neville Type

We use
$$R_{m,n}^{(s)}(x) = \frac{P_m^{(s)}(x)}{Q_n^{(s)}(x)}$$
to denote the rational interpolant with $R_{m,n}^{(s)}(x_i) = f_i$, $i = s, s+1, \dots, s+m+n$. Then the following theorem gives the basis for a Neville-type algorithm to generate the $R_{m,n}^{(s)}$:

Theorem: For $m, n \ge 1$,
$$R_{m,n}^{(s)}(x) = R_{m-1,n}^{(s+1)}(x) + \frac{R_{m-1,n}^{(s+1)}(x) - R_{m-1,n}^{(s)}(x)}{\dfrac{x - x_s}{x - x_{s+m+n}}\left[1 - \dfrac{R_{m-1,n}^{(s+1)}(x) - R_{m-1,n}^{(s)}(x)}{R_{m-1,n}^{(s+1)}(x) - R_{m-1,n-1}^{(s+1)}(x)}\right] - 1}$$
and
$$R_{m,n}^{(s)}(x) = R_{m,n-1}^{(s+1)}(x) + \frac{R_{m,n-1}^{(s+1)}(x) - R_{m,n-1}^{(s)}(x)}{\dfrac{x - x_s}{x - x_{s+m+n}}\left[1 - \dfrac{R_{m,n-1}^{(s+1)}(x) - R_{m,n-1}^{(s)}(x)}{R_{m,n-1}^{(s+1)}(x) - R_{m-1,n-1}^{(s+1)}(x)}\right] - 1}.$$

Proof: See Homework!

This can be used to calculate the $R_{m,n}$ along zig-zag paths in the $[m, n]$ plane.

Preferred path: $[0, 0] \to [0, 1] \to [1, 1] \to [1, 2] \to [2, 2] \to \cdots$

Along this path set
$$T_{ij}(x) \equiv R_{m,n}^{(s)}(x) \quad \text{for } i = s + m + n, \; j = m + n.$$
The recursion becomes
$$T_{i,-1} \equiv 0, \qquad T_{i0} = f_i,$$
$$T_{i,k} = T_{i,k-1} + \frac{T_{i,k-1} - T_{i-1,k-1}}{\dfrac{x - x_{i-k}}{x - x_i}\left[1 - \dfrac{T_{i,k-1} - T_{i-1,k-1}}{T_{i,k-1} - T_{i-1,k-2}}\right] - 1}.$$
This is identical with the recursion in Neville's algorithm for polynomials, except for the bracketed term $[\,\cdot\,]$, which there equals 1.

In a tableau this is:

       [0,0]           [0,1]       [1,1]       [1,2]
x_0   f_0 = T_00
                       T_11
x_1   f_1 = T_10                   T_22
                       T_21                    T_33
x_2   f_2 = T_20                   T_32
                       T_31
x_3   f_3 = T_30

This algorithm is especially useful if the values of the rational interpolant are required at a particular point $x$, rather than the rational function itself (i.e. its coefficients). An important modern application of the algorithm is in extrapolation methods to solve initial-value problems for ODEs, the Bulirsch-Stoer methods. This is discussed in Numerical Recipes, Section 16.4, and in C. W. Gear, Numerical Initial-Value Problems in ODEs (Prentice-Hall, 1971). It is perhaps the best known way to obtain high-accuracy solutions to ODEs with minimal computational effort!
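The $T_{i,k}$ recursion is equally short to code. The following Python sketch (name mine) evaluates the diagonal rational interpolant at a fixed point $x$; it is essentially the kernel of the Bulirsch-Stoer extrapolation step, without the guards against vanishing denominators that a production version (cf. Numerical Recipes) would add.

def rational_neville(x_nodes, f_nodes, x):
    """Evaluate the diagonal rational interpolant at x via the T_{i,k} recursion."""
    n = len(x_nodes)
    T_prev = [0.0] * n                 # column k-2 (starts as T_{i,-1} = 0)
    T = list(f_nodes)                  # column k-1 (starts as T_{i,0} = f_i)
    for k in range(1, n):
        T_new = list(T)
        for i in range(k, n):
            num = T[i] - T[i - 1]
            ratio = (x - x_nodes[i - k]) / (x - x_nodes[i])
            denom = ratio * (1.0 - num / (T[i] - T_prev[i - 1])) - 1.0
            T_new[i] = T[i] + num / denom
        T_prev, T = T, T_new
    return T[n - 1]

# Example: the [1,1] interpolant through (1,2), (2,5), (3,4) is (7x-9)/(2x-3)
print(rational_neville([1.0, 2.0, 3.0], [2.0, 5.0, 4.0], 2.5))   # -> 4.25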

3(d) Piecewise Polynomial Interpolation & Splines

Given a partition
$$\Delta: \quad a = x_0 < x_1 < \cdots < x_m = b$$
of an interval $[a, b]$ (the $x_i$ are called knots, breakpoints, or nodes), the Hermite function space $H_\Delta^{(n)}$ is defined to be the space of functions $\varphi: [a, b] \to \mathbb{R}$ such that $\varphi \in H_\Delta^{(n)}$ iff
(i) $\varphi \in C^n[a, b]$,
(ii) $\varphi|_{[x_i, x_{i+1}]} \in \mathcal{P}_{2n+1}$.

One can choose $P_i = \varphi|_{[x_i, x_{i+1}]} \in \mathcal{P}_{2n+1}$ to be the Hermite interpolating polynomial with
$$P_i^{(k)}(x_i) = f^{(k)}(x_i) \quad \& \quad P_i^{(k)}(x_{i+1}) = f^{(k)}(x_{i+1}) \quad \text{for } k = 0, 1, \dots, n.$$
This is widely used with $n = 1$ and the cubic Hermite polynomials considered in Project #5. It can be shown that if $f \in C^{2n+2}[a, b]$, then for $x \in [x_i, x_{i+1}]$,
$$|f^{(k)}(x) - P_i^{(k)}(x)| \le \frac{|(x - x_i)(x - x_{i+1})|^{n-k+1}\,(x_{i+1} - x_i)^k}{k!\,(2n - 2k + 2)!} \max_{x \in [x_i, x_{i+1}]} |f^{(2n+2)}(x)|,$$
and thus
$$\|f^{(k)} - \varphi^{(k)}\|_\infty \le \frac{\|\Delta\|^{2n-k+2}}{2^{2n-2k+2}\, k!\,(2n - 2k + 2)!}\, \|f^{(2n+2)}\|_\infty$$
for $k = 0, 1, \dots, n+1$, where $\|\Delta\| = \max_{0 \le i \le m-1}(x_{i+1} - x_i)$. Unlike simple polynomial interpolation, the error goes to zero as $\|\Delta\| \to 0$! See Ciarlet, Schultz & Varga, Numer. Math. (1967).

Exercise: Show directly from the error estimate on Hermite interpolation that the $k = 0$ bound is true, i.e.
$$\|f - \varphi\|_\infty \le \frac{\|\Delta\|^{2n+2}}{2^{2n+2}\,(2n+2)!}\, \|f^{(2n+2)}\|_\infty.$$

Note $\|f\|_\infty = \max_{x \in [a,b]} |f(x)|$ for $f \in C[a, b]$. Thus, piecewise polynomial interpolation is convergent! However, the use of Hermite interpolants requires knowledge of derivatives such as $f'(x_i)$, $f'(x_{i+1})$ to calculate $P_i = \varphi|_{[x_i, x_{i+1}]}$ (although this is employed for Bézier curves in computer graphics: see Section 3.5).

Definition. Given a partition $\Delta: a = x_0 < x_1 < \cdots < x_n = b$ of an interval $[a, b]$, the space of spline functions of order $m$, $\mathcal{S}_\Delta^{(m)}$, consists of $S: [a, b] \to \mathbb{R}$ such that
(i) $S \in C^{m-1}[a, b]$,
(ii) $S|_{[x_i, x_{i+1}]} \in \mathcal{P}_m$.

Note: If $S \in \mathcal{S}_\Delta^{(m)}$, then $S' \in \mathcal{S}_\Delta^{(m-1)}$ and $\int S \in \mathcal{S}_\Delta^{(m+1)}$.

The most common case, $m = 3$, gives the cubic splines. They are finding applications in graphics and, increasingly, in numerical methods. For instance, spline functions may be used as trial functions in connection with the Rayleigh-Ritz-Galerkin method for solving boundary-value problems of ordinary and partial differential equations. Introductions are, for instance, Greville (1969), Schultz (1973), Böhmer (1974), and de Boor (1978).

Theoretical Foundations

Consider a set $Y := \{y_0, y_1, \dots, y_n\}$ of $n + 1$ real numbers. We denote by $S_\Delta(Y; \cdot)$ an interpolating cubic spline function $S_\Delta$ with
$$S_\Delta(Y; x_i) = y_i, \quad i = 0, 1, \dots, n.$$
Such an interpolating spline function $S_\Delta(Y; \cdot)$ is not uniquely determined by the set $Y$ of support ordinates. Roughly speaking, there are still two degrees of freedom left, calling for suitable additional requirements. The following three additional requirements are most commonly considered:

(a) $S_\Delta''(Y; a) = S_\Delta''(Y; b) = 0$,
(b) $S_\Delta^{(k)}(Y; a) = S_\Delta^{(k)}(Y; b)$ for $k = 0, 1, 2$: $S_\Delta(Y; \cdot)$ is periodic,
(c) $S_\Delta'(Y; a) = y_0'$, $S_\Delta'(Y; b) = y_n'$ for given numbers $y_0'$, $y_n'$.

We will confirm that each of these three conditions by itself ensures uniqueness of the interpolating spline function $S_\Delta(Y; \cdot)$. A prerequisite of condition (b) is, of course, that $y_n = y_0$. For this purpose, and to establish a characteristic minimum property of spline functions, we consider, for integer $m > 0$, the sets

$$\mathcal{K}^m[a, b] = \{ f: [a, b] \to \mathbb{R} : f^{(m)} \in L^2[a, b] \}.$$
By $\mathcal{K}_p^m[a, b]$ we denote the set of all functions in $\mathcal{K}^m[a, b]$ with $f^{(k)}(a) = f^{(k)}(b)$ for $k = 0, 1, \dots, m-1$. We call such functions periodic, because they arise as restrictions to $[a, b]$ of functions which are periodic with period $b - a$. Note that $S_\Delta \in \mathcal{K}^3[a, b]$, and that $S_\Delta(Y; \cdot) \in \mathcal{K}_p^3[a, b]$ if (b) holds.

If $f \in \mathcal{K}^2[a, b]$, then we can define
$$\|f\|^2 := \int_a^b |f''(x)|^2\, dx.$$
Note that $\|f\| \ge 0$. However, $\|f\| = 0$ may hold for functions which are not identically zero, for instance for all linear functions $f(x) \equiv cx + d$. The function $\|\cdot\|$ is called a seminorm (in this case, the Sobolev seminorm).

We proceed to show a fundamental identity due to Holladay [see for instance Ahlberg, Nilson, and Walsh (1967)].

Lemma. If $f \in \mathcal{K}^2[a, b]$, if $\Delta = \{a = x_0 < x_1 < \cdots < x_n = b\}$ is a partition of the interval $[a, b]$, and if $S_\Delta$ is a spline function with knots $x_i \in \Delta$, then
$$\|f - S_\Delta\|^2 = \|f\|^2 - \|S_\Delta\|^2 - 2\left[ (f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_a^b - \sum_{i=1}^{n} (f(x) - S_\Delta(x))\, S_\Delta'''(x)\Big|_{x_{i-1}^+}^{x_i^-} \right].$$
Here $g(x)\big|_v^u$ stands for $g(u) - g(v)$. Since $S_\Delta'''(x)$ is piecewise constant with possible discontinuities at the knots $x_1, \dots, x_{n-1}$, we have to use the left and right limits of $S_\Delta'''(x)$ at the locations $x_i$ and $x_{i-1}$, respectively, in the above formula. This is indicated by the notation $x_i^-$, $x_{i-1}^+$.

Proof: By the definition of $\|\cdot\|$,
$$\|f - S_\Delta\|^2 = \int_a^b |f''(x) - S_\Delta''(x)|^2\, dx = \|f\|^2 - 2\int_a^b f''(x)\, S_\Delta''(x)\, dx + \|S_\Delta\|^2 = \|f\|^2 - 2\int_a^b (f''(x) - S_\Delta''(x))\, S_\Delta''(x)\, dx - \|S_\Delta\|^2.$$

Integration by parts gives, for $i = 1, 2, \dots, n$,
$$\int_{x_{i-1}}^{x_i} (f''(x) - S_\Delta''(x))\, S_\Delta''(x)\, dx = (f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_{x_{i-1}}^{x_i} - \int_{x_{i-1}}^{x_i} (f'(x) - S_\Delta'(x))\, S_\Delta'''(x)\, dx$$
$$= (f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_{x_{i-1}}^{x_i} - (f(x) - S_\Delta(x))\, S_\Delta'''(x)\Big|_{x_{i-1}^+}^{x_i^-} + \int_{x_{i-1}}^{x_i} (f(x) - S_\Delta(x))\, S_\Delta^{(4)}(x)\, dx.$$
Now $S_\Delta^{(4)}(x) \equiv 0$ on the subintervals $(x_{i-1}, x_i)$, and $f'$, $S_\Delta'$, $S_\Delta''$ are continuous on $[a, b]$. Adding these formulas for $i = 1, 2, \dots, n$ yields the proposition of the lemma, since
$$\sum_{i=1}^{n} (f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_{x_{i-1}}^{x_i} = (f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_a^b.$$

With the help of this lemma we will prove the important minimum-norm property of spline functions.

Theorem. Given a partition $\Delta := \{a = x_0 < x_1 < \cdots < x_n = b\}$ of the interval $[a, b]$, values $Y := \{y_0, \dots, y_n\}$, and a function $f \in \mathcal{K}^2[a, b]$ with $f(x_i) = y_i$ for $i = 0, 1, \dots, n$, then
$$\|f\|^2 \ge \|S_\Delta(Y; \cdot)\|^2,$$
and more precisely
$$\|f\|^2 - \|S_\Delta(Y; \cdot)\|^2 = \|f - S_\Delta(Y; \cdot)\|^2 \ge 0$$
holds for every spline function $S_\Delta(Y; \cdot)$, provided one of the conditions
(a) $S_\Delta''(Y; a) = S_\Delta''(Y; b) = 0$,
(b) $f \in \mathcal{K}_p^2[a, b]$, $S_\Delta(Y; \cdot)$ periodic,
(c) $f'(a) = S_\Delta'(Y; a)$, $f'(b) = S_\Delta'(Y; b)$,
is met. In each of these cases, the spline function $S_\Delta(Y; \cdot)$ is uniquely determined. The existence of such spline functions will be shown later.

Proof: In each of the above three cases (a), (b), (c), the expression

$$(f'(x) - S_\Delta'(x))\, S_\Delta''(x)\Big|_a^b - \sum_{i=1}^{n} (f(x) - S_\Delta(x))\, S_\Delta'''(x)\Big|_{x_{i-1}^+}^{x_i^-} = 0$$
vanishes in the Holladay identity if $S_\Delta \equiv S_\Delta(Y; \cdot)$. This proves the minimum property of the spline function $S_\Delta(Y; \cdot)$. Its uniqueness can be seen as follows: suppose $\tilde{S}_\Delta(Y; \cdot)$ is another such spline function. Letting $\tilde{S}_\Delta(Y; \cdot)$ play the role of the function $f \in \mathcal{K}^2[a, b]$ in the theorem, the minimum property of $S_\Delta(Y; \cdot)$ requires that
$$\|\tilde{S}_\Delta(Y; \cdot) - S_\Delta(Y; \cdot)\|^2 = \|\tilde{S}_\Delta(Y; \cdot)\|^2 - \|S_\Delta(Y; \cdot)\|^2 \ge 0,$$
and since $S_\Delta(Y; \cdot)$ and $\tilde{S}_\Delta(Y; \cdot)$ may switch roles,
$$\|\tilde{S}_\Delta(Y; \cdot) - S_\Delta(Y; \cdot)\|^2 = \int_a^b \left( \tilde{S}_\Delta''(Y; x) - S_\Delta''(Y; x) \right)^2 dx = 0.$$
Since $S_\Delta''(Y; \cdot)$ and $\tilde{S}_\Delta''(Y; \cdot)$ are both continuous,
$$\tilde{S}_\Delta''(Y; x) \equiv S_\Delta''(Y; x),$$
from which $\tilde{S}_\Delta(Y; x) \equiv S_\Delta(Y; x) + cx + d$ follows by integration. But $\tilde{S}_\Delta(Y; x) = S_\Delta(Y; x)$ holds for $x = a, b$, and this implies $c = d = 0$. QED

The minimum-norm property of the spline function expressed in the theorem implies in case (a) that, among all functions $f$ in $\mathcal{K}^2[a, b]$ with $f(x_i) = y_i$, $i = 0, 1, \dots, n$, it is precisely the spline function $S_\Delta(Y; \cdot)$ with $S_\Delta''(Y; x) = 0$ for $x = a, b$ that minimizes the integral
$$\|f\|^2 = \int_a^b |f''(x)|^2\, dx.$$
The spline function of case (a) is often referred to as the natural spline. In case (b), with $\|f\|$ minimized over the more restricted set $\mathcal{K}_p^2[a, b]$, the function is called the periodic spline, and in case (c), minimizing over $\{f \in \mathcal{K}^2[a, b] : f'(a) = y_0',\ f'(b) = y_n'\}$, the clamped spline.

The expression
$$\frac{f''(x)}{(1 + f'(x)^2)^{3/2}}$$
indicates the curvature of the function $f(x)$ at $x \in [a, b]$. If $|f'(x)|$ is small compared to 1, then the curvature is approximately equal to $f''(x)$. The value $\|f\|$ provides us therefore with an approximate measure of the total curvature of the function $f$ in the interval $[a, b]$. In this sense, the natural spline function is the "straightest" function which interpolates the given support points $(x_i, y_i)$, $i = 0, 1, \dots, n$.

Determining Interpolating Spline Functions

In this section, we will describe computational methods for determining cubic spline functions which assume prescribed values at their knots and satisfy one of the side conditions (a), (b), (c). In the course of this, we will also have proved the existence of such spline functions; their uniqueness has already been established.

In what follows, $\Delta = \{x_i \mid i = 0, 1, \dots, n\}$ will be a fixed partition of the interval $[a, b]$ by knots $a = x_0 < x_1 < \cdots < x_n = b$, and $Y = \{y_i \mid i = 0, 1, \dots, n\}$ will be a set of $n + 1$ prescribed real numbers. In addition, let
$$h_{j+1} := x_{j+1} - x_j, \quad j = 0, 1, \dots, n-1.$$
We refer to the values of the second derivatives at the knots $x_j$,
$$M_j := S_\Delta''(Y; x_j), \quad j = 0, 1, \dots, n,$$
of the desired spline function $S_\Delta(Y; \cdot)$ as the moments $M_j$ of $S_\Delta(Y; \cdot)$. We will show that spline functions are readily characterized by their moments, and that the moments of the interpolating spline function can be calculated as the solution of a system of linear equations.

Note that the second derivative $S_\Delta''(Y; \cdot)$ of the spline function coincides with a linear function in each interval $[x_j, x_{j+1}]$, $j = 0, \dots, n-1$, and that these linear functions can be described in terms of the moments $M_i$ of $S_\Delta(Y; \cdot)$:
$$S_\Delta''(Y; x) = M_j \frac{x_{j+1} - x}{h_{j+1}} + M_{j+1} \frac{x - x_j}{h_{j+1}} \quad \text{for } x \in [x_j, x_{j+1}].$$
By integration,
$$S_\Delta'(Y; x) = -M_j \frac{(x_{j+1} - x)^2}{2h_{j+1}} + M_{j+1} \frac{(x - x_j)^2}{2h_{j+1}} + A_j,$$
$$S_\Delta(Y; x) = M_j \frac{(x_{j+1} - x)^3}{6h_{j+1}} + M_{j+1} \frac{(x - x_j)^3}{6h_{j+1}} + A_j(x - x_j) + B_j,$$
for $x \in [x_j, x_{j+1}]$, $j = 0, 1, \dots, n-1$, where $A_j$, $B_j$ are constants of integration. From $S_\Delta(Y; x_j) = y_j$, $S_\Delta(Y; x_{j+1}) = y_{j+1}$, we obtain the following

equations for these constants $A_j$ and $B_j$:
$$M_j \frac{h_{j+1}^2}{6} + B_j = y_j, \qquad M_{j+1} \frac{h_{j+1}^2}{6} + A_j h_{j+1} + B_j = y_{j+1}.$$
Consequently,
$$B_j = y_j - M_j \frac{h_{j+1}^2}{6}, \qquad A_j = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{h_{j+1}}{6}(M_{j+1} - M_j).$$
This yields the following representation of the spline function in terms of its moments:
$$S_\Delta(Y; x) = \alpha_j + \beta_j(x - x_j) + \gamma_j(x - x_j)^2 + \delta_j(x - x_j)^3 \quad \text{for } x \in [x_j, x_{j+1}],$$
where
$$\alpha_j := y_j, \qquad \gamma_j := \frac{M_j}{2},$$
$$\beta_j := S_\Delta'(Y; x_j) = -\frac{M_j h_{j+1}}{2} + A_j = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{2M_j + M_{j+1}}{6}\, h_{j+1},$$
$$\delta_j := \frac{S_\Delta'''(Y; x_j^+)}{6} = \frac{M_{j+1} - M_j}{6 h_{j+1}}.$$
Thus $S_\Delta(Y; \cdot)$ has been characterized by its moments $M_j$. The task of calculating these moments will now be addressed. The continuity of $S_\Delta'(Y; \cdot)$ at the knots $x = x_j$, $j = 1, 2, \dots, n-1$ [namely, the relations $S_\Delta'(Y; x_j^-) = S_\Delta'(Y; x_j^+)$] yields $n - 1$ equations for the moments $M_j$. Substituting the values for $A_j$ and $B_j$ gives
$$S_\Delta'(Y; x) = -M_j \frac{(x_{j+1} - x)^2}{2h_{j+1}} + M_{j+1} \frac{(x - x_j)^2}{2h_{j+1}} + \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{h_{j+1}}{6}(M_{j+1} - M_j).$$

For $j = 1, 2, \dots, n-1$, we have therefore
$$S_\Delta'(Y; x_j^-) = \frac{y_j - y_{j-1}}{h_j} + \frac{h_j}{3} M_j + \frac{h_j}{6} M_{j-1},$$
$$S_\Delta'(Y; x_j^+) = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{h_{j+1}}{3} M_j - \frac{h_{j+1}}{6} M_{j+1},$$
and since $S_\Delta'(Y; x_j^+) = S_\Delta'(Y; x_j^-)$,
$$(*) \qquad \frac{h_j}{6} M_{j-1} + \frac{h_j + h_{j+1}}{3} M_j + \frac{h_{j+1}}{6} M_{j+1} = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j}$$
for $j = 1, 2, \dots, n-1$. These are $n - 1$ equations for the $n + 1$ unknown moments. Two further equations can be gained separately from each of the side conditions (a), (b), and (c).

Case (a): $S_\Delta''(Y; a) = M_0 = 0 = M_n = S_\Delta''(Y; b)$.

Case (b): $S_\Delta''(Y; a) = S_\Delta''(Y; b) \;\Rightarrow\; M_0 = M_n$, and
$$S_\Delta'(Y; a) = S_\Delta'(Y; b) \;\Rightarrow\; \frac{h_n}{6} M_{n-1} + \frac{h_n + h_1}{3} M_n + \frac{h_1}{6} M_1 = \frac{y_1 - y_n}{h_1} - \frac{y_n - y_{n-1}}{h_n}.$$
The latter condition is identical with (*) for $j = n$ if we put
$$h_{n+1} := h_1, \quad M_{n+1} := M_1, \quad y_{n+1} := y_1.$$
Recall that (b) requires $y_n = y_0$.

Case (c):
$$S_\Delta'(Y; a) = y_0' \;\Rightarrow\; \frac{h_1}{3} M_0 + \frac{h_1}{6} M_1 = \frac{y_1 - y_0}{h_1} - y_0',$$
$$S_\Delta'(Y; b) = y_n' \;\Rightarrow\; \frac{h_n}{6} M_{n-1} + \frac{h_n}{3} M_n = y_n' - \frac{y_n - y_{n-1}}{h_n}.$$
These last two equations in cases (a)-(c), as well as those in (*), can be written in a common format:
$$\mu_j M_{j-1} + 2 M_j + \lambda_j M_{j+1} = d_j, \quad j = 1, 2, \dots, n-1,$$

upon introducing the abbreviations
$$\lambda_j := \frac{h_{j+1}}{h_j + h_{j+1}}, \qquad \mu_j := 1 - \lambda_j = \frac{h_j}{h_j + h_{j+1}},$$
$$d_j := \frac{6}{h_j + h_{j+1}} \left\{ \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j} \right\}, \qquad j = 1, 2, \dots, n-1.$$
In case (a), we define in addition
$$\lambda_0 := 0, \quad d_0 := 0, \qquad \mu_n := 0, \quad d_n := 0,$$
and in case (c)
$$\lambda_0 := 1, \quad d_0 := \frac{6}{h_1}\left( \frac{y_1 - y_0}{h_1} - y_0' \right), \qquad \mu_n := 1, \quad d_n := \frac{6}{h_n}\left( y_n' - \frac{y_n - y_{n-1}}{h_n} \right).$$
This leads in cases (a) and (c) to the following system of linear equations for the moments $M_i$:
$$\begin{aligned}
2M_0 + \lambda_0 M_1 &= d_0, \\
\mu_1 M_0 + 2M_1 + \lambda_1 M_2 &= d_1, \\
&\;\;\vdots \\
\mu_{n-1} M_{n-2} + 2M_{n-1} + \lambda_{n-1} M_n &= d_{n-1}, \\
\mu_n M_{n-1} + 2M_n &= d_n.
\end{aligned}$$
In matrix notation, we have
$$\begin{pmatrix}
2 & \lambda_0 & & & 0 \\
\mu_1 & 2 & \lambda_1 & & \\
& \mu_2 & 2 & \ddots & \\
& & \ddots & \ddots & \lambda_{n-1} \\
0 & & & \mu_n & 2
\end{pmatrix}
\begin{pmatrix} M_0 \\ M_1 \\ \vdots \\ M_n \end{pmatrix}
=
\begin{pmatrix} d_0 \\ d_1 \\ \vdots \\ d_n \end{pmatrix}.$$

The periodic case (b) also requires further definitions,
$$\lambda_n := \frac{h_1}{h_n + h_1}, \qquad \mu_n := 1 - \lambda_n = \frac{h_n}{h_n + h_1}, \qquad d_n := \frac{6}{h_n + h_1} \left\{ \frac{y_1 - y_n}{h_1} - \frac{y_n - y_{n-1}}{h_n} \right\},$$
which then lead to the following linear system of equations for the moments $M_1, M_2, \dots, M_n$ ($= M_0$):
$$\begin{pmatrix}
2 & \lambda_1 & & & \mu_1 \\
\mu_2 & 2 & \lambda_2 & & \\
& \ddots & \ddots & \ddots & \\
& & \mu_{n-1} & 2 & \lambda_{n-1} \\
\lambda_n & & & \mu_n & 2
\end{pmatrix}
\begin{pmatrix} M_1 \\ M_2 \\ \vdots \\ M_n \end{pmatrix}
=
\begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}.$$
The coefficients $\lambda_i$, $\mu_i$ depend only on the location of the knots $x_j$, and not on the prescribed values $y_i \in Y$ nor on $y_0'$, $y_n'$ in case (c).

Theorem. The above systems of linear equations are nonsingular for any partition $\Delta$ of $[a, b]$.

This means that the above systems of linear equations have unique solutions for arbitrary right-hand sides, and that consequently the problem of interpolation by cubic splines has a unique solution in each of the three cases (a), (b), (c).

Proof: Consider the $(n+1) \times (n+1)$ matrix
$$A = \begin{pmatrix}
2 & \lambda_0 & & & 0 \\
\mu_1 & 2 & \lambda_1 & & \\
& \mu_2 & 2 & \ddots & \\
& & \ddots & \ddots & \lambda_{n-1} \\
0 & & & \mu_n & 2
\end{pmatrix}$$
of the linear system. Note from their definitions that
$$\lambda_i \ge 0, \quad \mu_i \ge 0, \quad \lambda_i + \mu_i = 1$$

for all coefficients $\lambda_i$, $\mu_i$. Hence $A$ is strictly diagonally dominant and its nonsingularity follows directly. However, for later purposes we shall require also the following stronger property:
$$\|Az\| \ge \|z\| \quad \text{for every vector } z = (z_0, \dots, z_n)^T.$$
Indeed, let $r$ be such that $|z_r| = \max_i |z_i|$ and $w := Az$. Then
$$\mu_r z_{r-1} + 2 z_r + \lambda_r z_{r+1} = w_r \qquad (\mu_0 := 0,\ \lambda_n := 0).$$
By the definition of $r$ and because $\mu_r + \lambda_r = 1$,
$$\max_i |w_i| \ge |w_r| \ge 2|z_r| - \mu_r |z_{r-1}| - \lambda_r |z_{r+1}| \ge 2|z_r| - \mu_r |z_r| - \lambda_r |z_r| = (2 - \mu_r - \lambda_r)|z_r| = |z_r| = \max_i |z_i|.$$

To solve the equations, we may proceed as follows: subtract $\mu_1/2$ times the first equation from the second, thereby annihilating $\mu_1$, and then a suitable multiple of the second equation from the third to annihilate $\mu_2$, and so on. This leads to a triangular system of equations which can be solved in a straightforward fashion. Note that this method is just the Gaussian elimination algorithm for tridiagonal matrices:

q_0 := -lambda_0 / 2;  u_0 := d_0 / 2;  lambda_n := 0;
for k := 1, 2, ..., n do
begin
    p_k := mu_k * q_{k-1} + 2;
    q_k := -lambda_k / p_k;
    u_k := (d_k - mu_k * u_{k-1}) / p_k
end;
M_n := u_n;
for k := n-1, n-2, ..., 0 do
    M_k := q_k * M_{k+1} + u_k;

It can be shown that $p_k > 0$, so that the $q_k$, $u_k$ are well defined (Exercise).
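A direct transcription of the moment equations and the elimination scheme above, for the natural spline case (a), might look as follows in Python (the function name and the test data are mine; case (c) differs only in the first and last rows of the system).

def natural_spline_moments(x, y):
    """Moments M_j of the natural cubic spline (case (a): M_0 = M_n = 0),
    via the tridiagonal elimination scheme above."""
    n = len(x) - 1
    h = [x[j + 1] - x[j] for j in range(n)]          # h_{j+1} = x_{j+1} - x_j
    # lambda_j, mu_j, d_j for j = 1, ..., n-1, plus the case-(a) boundary rows
    lam = [0.0] + [h[j] / (h[j - 1] + h[j]) for j in range(1, n)] + [0.0]
    mu = [0.0] + [1.0 - lam[j] for j in range(1, n)] + [0.0]
    d = [0.0] + [6.0 / (h[j - 1] + h[j]) *
                 ((y[j + 1] - y[j]) / h[j] - (y[j] - y[j - 1]) / h[j - 1])
                 for j in range(1, n)] + [0.0]
    # forward elimination (q_k, u_k), then back substitution for M_k
    q = [-lam[0] / 2.0]
    u = [d[0] / 2.0]
    for k in range(1, n + 1):
        p = mu[k] * q[k - 1] + 2.0
        q.append(-lam[k] / p)
        u.append((d[k] - mu[k] * u[k - 1]) / p)
    M = [0.0] * (n + 1)
    M[n] = u[n]
    for k in range(n - 1, -1, -1):
        M[k] = q[k] * M[k + 1] + u[k]
    return M

# Moments of the natural spline through (0,0), (1,1), (2,0), (3,1)
print(natural_spline_moments([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0]))
# -> [0.0, -4.0, 4.0, 0.0]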

The linear system in case (b) can be solved in a similar, but not as straightforward, fashion. An ALGOL program by C. Reinsch can be found in Bulirsch and Rutishauser (1968). The reader can find more details in Greville (1969) and de Boor (1972), ALGOL programs in Herriot and Reinsch (1971), and FORTRAN programs in de Boor (1978). These references also contain information and algorithms for the higher-order spline functions $\mathcal{S}_\Delta^{(m)}$, $m \ge 2$, and other generalizations.

Convergence Properties of Spline Functions

Interpolating polynomials may not converge to a function $f$ whose values they interpolate, even if the partitions are chosen arbitrarily fine (recall the Runge example). In contrast, we will show in this section that, under mild conditions on the function $f$ and the partitions, the interpolating spline functions do converge towards $f$ as the fineness of the underlying partitions approaches zero.

We will show first that the moments of the interpolating spline function converge to the second derivatives of the given function. More precisely, consider a fixed partition $\Delta = \{a = x_0 < x_1 < \cdots < x_n = b\}$ of $[a, b]$, and let
$$M = \begin{pmatrix} M_0 \\ \vdots \\ M_n \end{pmatrix}$$
be the vector of moments $M_j$ of the spline function $S_\Delta(Y; \cdot)$ with $f(x_j) = y_j$ for $j = 0, 1, \dots, n$, as well as $S_\Delta'(Y; a) = f'(a)$, $S_\Delta'(Y; b) = f'(b)$. We are thus dealing with case (c). The vector $M$ of moments satisfies the equation $AM = d$, which expresses the linear system of equations in matrix form. The components of $d$ are the $d_j$ previously defined. Let $F$ and $r$ be the vectors
$$F := \begin{pmatrix} f''(x_0) \\ f''(x_1) \\ \vdots \\ f''(x_n) \end{pmatrix}, \qquad r := d - AF = A(M - F).$$

Writing $\|z\| := \max_i |z_i|$ for vectors $z$, and
$$\|\Delta\| := \max_j |x_{j+1} - x_j|$$
for the fineness of the partition, we have:

Proposition: If $f \in C^4[a, b]$ and $|f^{(4)}(x)| \le L$ for $x \in [a, b]$, then
$$\|M - F\| \le \|r\| \le \tfrac{3}{4} L \|\Delta\|^2.$$

Proof: By definition, $r_0 = d_0 - 2f''(x_0) - f''(x_1)$, and by the definition of $d_0$,
$$r_0 = \frac{6}{h_1}\left( \frac{y_1 - y_0}{h_1} - y_0' \right) - 2f''(x_0) - f''(x_1).$$
Using Taylor's theorem to express $y_1 = f(x_1)$ and $f''(x_1)$ in terms of the value and the derivatives of the function $f$ at $x_0$ yields
$$r_0 = \frac{6}{h_1}\left[ f'(x_0) + \frac{h_1}{2} f''(x_0) + \frac{h_1^2}{6} f'''(x_0) + \frac{h_1^3}{24} f^{(4)}(\tau_1) - f'(x_0) \right] - \left[ 2f''(x_0) + f''(x_0) + h_1 f'''(x_0) + \frac{h_1^2}{2} f^{(4)}(\tau_2) \right] = \frac{h_1^2}{4} f^{(4)}(\tau_1) - \frac{h_1^2}{2} f^{(4)}(\tau_2)$$
with $\tau_1, \tau_2 \in [x_0, x_1]$. Therefore
$$|r_0| \le \tfrac{3}{4} L \|\Delta\|^2.$$
Analogously, we find for $r_n = d_n - \mu_n f''(x_{n-1}) - 2f''(x_n)$ that
$$|r_n| \le \tfrac{3}{4} L \|\Delta\|^2.$$
For the remaining components of $r = d - AF$, we find similarly
$$r_j = d_j - \mu_j f''(x_{j-1}) - 2f''(x_j) - \lambda_j f''(x_{j+1}) = \frac{6}{h_j + h_{j+1}}\left[ \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j} \right] - \frac{h_j f''(x_{j-1}) + 2(h_j + h_{j+1}) f''(x_j) + h_{j+1} f''(x_{j+1})}{h_j + h_{j+1}}.$$

Taylor's formula at $x_j$ then gives
$$r_j = \frac{6}{h_j + h_{j+1}} \left\{ \left[ f'(x_j) + \frac{h_{j+1}}{2} f''(x_j) + \frac{h_{j+1}^2}{6} f'''(x_j) + \frac{h_{j+1}^3}{24} f^{(4)}(\tau_1) \right] - \left[ f'(x_j) - \frac{h_j}{2} f''(x_j) + \frac{h_j^2}{6} f'''(x_j) - \frac{h_j^3}{24} f^{(4)}(\tau_2) \right] \right\}$$
$$\quad - \frac{1}{h_j + h_{j+1}} \left\{ h_j \left[ f''(x_j) - h_j f'''(x_j) + \frac{h_j^2}{2} f^{(4)}(\tau_3) \right] + 2 f''(x_j)(h_j + h_{j+1}) + h_{j+1} \left[ f''(x_j) + h_{j+1} f'''(x_j) + \frac{h_{j+1}^2}{2} f^{(4)}(\tau_4) \right] \right\}$$
$$= \frac{1}{h_j + h_{j+1}} \left[ \frac{h_{j+1}^3}{4} f^{(4)}(\tau_1) + \frac{h_j^3}{4} f^{(4)}(\tau_2) - \frac{h_j^3}{2} f^{(4)}(\tau_3) - \frac{h_{j+1}^3}{2} f^{(4)}(\tau_4) \right].$$
Here $\tau_1, \dots, \tau_4 \in [x_{j-1}, x_{j+1}]$. Therefore
$$|r_j| \le \frac{3}{4} L\, \frac{h_{j+1}^3 + h_j^3}{h_j + h_{j+1}} \le \frac{3}{4} L \|\Delta\|^2 \quad \text{for } j = 1, 2, \dots, n-1.$$
In sum, $\|r\| \le \frac{3}{4} L \|\Delta\|^2$, and since $r = A(M - F)$, it follows from the property $\|Az\| \ge \|z\|$ established above that $\|M - F\| \le \|r\|$. QED

Theorem. Suppose $f \in C^4[a, b]$ and $|f^{(4)}(x)| \le L$ for $x \in [a, b]$. Let $\Delta$ be a partition $\Delta = \{a = x_0 < \cdots < x_n = b\}$ of the interval $[a, b]$, and $K$ a constant such that
$$\frac{\|\Delta\|}{x_{j+1} - x_j} \le K \quad \text{for } j = 0, \dots, n-1.$$
If $S_\Delta$ is the spline function which interpolates the values of the function $f$ at the knots $x_0, \dots, x_n \in \Delta$ and satisfies $S_\Delta'(x) = f'(x)$ for $x = a, b$, then there exist constants $C_k \le 2$, which do not depend on the partition $\Delta$, such that for $x \in [a, b]$,
$$|f^{(k)}(x) - S_\Delta^{(k)}(x)| \le C_k L K \|\Delta\|^{4-k}, \quad k = 0, 1, 2, 3.$$

Note that the constant $K \ge 1$ bounds the deviation of the partition from uniformity.

Proof: We prove the theorem first for $k = 3$. For $x \in [x_{j-1}, x_j]$,
$$S_\Delta'''(x) - f'''(x) = \frac{M_j - M_{j-1}}{h_j} - f'''(x) = \frac{M_j - f''(x_j)}{h_j} - \frac{M_{j-1} - f''(x_{j-1})}{h_j} + \frac{f''(x_j) - f''(x) - [f''(x_{j-1}) - f''(x)]}{h_j} - f'''(x).$$
Using the previous proposition and Taylor's theorem at $x$,
$$f''(x_j) - f''(x) = (x_j - x) f'''(x) + \frac{(x_j - x)^2}{2} f^{(4)}(\eta_1), \qquad f''(x_{j-1}) - f''(x) = (x_{j-1} - x) f'''(x) + \frac{(x_{j-1} - x)^2}{2} f^{(4)}(\eta_2),$$
with $\eta_1, \eta_2 \in [x_{j-1}, x_j]$, we conclude that
$$|S_\Delta'''(x) - f'''(x)| \le \frac{3}{2} L \frac{\|\Delta\|^2}{h_j} + \frac{L}{2}\, \frac{(x_j - x)^2 + (x - x_{j-1})^2}{h_j} \le \frac{3}{2} L \frac{\|\Delta\|^2}{h_j} + \frac{L}{2} h_j.$$
By hypothesis, $\|\Delta\|/h_j \le K$ for every $j$. Thus
$$|f'''(x) - S_\Delta'''(x)| \le 2 L K \|\Delta\|.$$

To prove the proposition for $k = 2$, we observe: for each $x \in (a, b)$ there exists a closest knot $x_j = x_j(x)$, for which $|x_j(x) - x| \le \frac{1}{2}\|\Delta\|$. From
$$f''(x) - S_\Delta''(x) = f''(x_j(x)) - S_\Delta''(x_j(x)) + \int_{x_j(x)}^{x} (f'''(t) - S_\Delta'''(t))\, dt,$$
and since $K \ge 1$,
$$|f''(x) - S_\Delta''(x)| \le \frac{3}{4} L \|\Delta\|^2 + L K \|\Delta\|^2 \le \frac{7}{4} L K \|\Delta\|^2, \quad x \in [a, b].$$

We consider $k = 1$ next. In addition to the boundary points $\xi_0 := a$, $\xi_{n+1} := b$, there exist, by Rolle's theorem, $n$ further points $\xi_j \in (x_{j-1}, x_j)$, $j = 1, \dots, n$, with
$$f'(\xi_j) = S_\Delta'(\xi_j), \quad j = 0, 1, \dots, n+1.$$

For any $x \in [a, b]$ there exists a closest one of the above points $\xi_j = \xi_j(x)$, for which consequently $|\xi_j(x) - x| < \|\Delta\|$. Thus
$$f'(x) - S_\Delta'(x) = \int_{\xi_j(x)}^{x} (f''(t) - S_\Delta''(t))\, dt,$$
and
$$|f'(x) - S_\Delta'(x)| \le \frac{7}{4} L K \|\Delta\|^2 \cdot \|\Delta\| = \frac{7}{4} L K \|\Delta\|^3, \quad x \in [a, b].$$
The case $k = 0$ remains. Since
$$f(x) - S_\Delta(x) = \int_{x_j(x)}^{x} (f'(t) - S_\Delta'(t))\, dt,$$
it follows from the above result for $k = 1$ that
$$|f(x) - S_\Delta(x)| \le \frac{7}{4} L K \|\Delta\|^3 \cdot \frac{\|\Delta\|}{2} = \frac{7}{8} L K \|\Delta\|^4, \quad x \in [a, b]. \qquad \text{QED}$$

Clearly, this theorem implies that for sequences
$$\Delta_m = \{a = x_0^{(m)} < x_1^{(m)} < \cdots < x_{n_m}^{(m)} = b\}, \quad m = 0, 1, \dots,$$
of partitions with $\|\Delta_m\| \to 0$ and
$$\sup_{m,j} \frac{\|\Delta_m\|}{|x_{j+1}^{(m)} - x_j^{(m)}|} \le K < +\infty,$$
the corresponding spline functions $S_{\Delta_m}$ and their first three derivatives converge to $f$ and its corresponding derivatives uniformly on $[a, b]$. Note that even the third derivative $f'''$ is uniformly approximated by $S_{\Delta_m}'''$, a usually discontinuous sequence of step functions.

Remark: The estimates of the theorem have been improved by Hall and Meyer (1976):
$$|f^{(k)}(x) - S_\Delta^{(k)}(x)| \le c_k L \|\Delta\|^{4-k}, \quad k = 0, 1, 2, 3,$$
with $c_0 := 5/384$, $c_1 := 1/24$, $c_2 := 3/8$, $c_3 := (K + K^{-1})/2$. Here $c_0$ and $c_1$ are optimal.


More information

Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations

Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations D. R. Wilkins Academic Year 1996-7 1 Number Systems and Matrix Algebra Integers The whole numbers 0, ±1, ±2, ±3, ±4,...

More information

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material.

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material. Poisson Solvers William McLean April 21, 2004 Return to Math3301/Math5315 Common Material 1 Introduction Many problems in applied mathematics lead to a partial differential equation of the form a 2 u +

More information

Lectures 9-10: Polynomial and piecewise polynomial interpolation

Lectures 9-10: Polynomial and piecewise polynomial interpolation Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j

More information

Interpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning

Interpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning 76 Chapter 7 Interpolation 7.1 Interpolation Definition 7.1.1. Interpolation of a given function f defined on an interval [a,b] by a polynomial p: Given a set of specified points {(t i,y i } n with {t

More information

On the positivity of linear weights in WENO approximations. Abstract

On the positivity of linear weights in WENO approximations. Abstract On the positivity of linear weights in WENO approximations Yuanyuan Liu, Chi-Wang Shu and Mengping Zhang 3 Abstract High order accurate weighted essentially non-oscillatory (WENO) schemes have been used

More information

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous: MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is

More information

Chapter 8. P-adic numbers. 8.1 Absolute values

Chapter 8. P-adic numbers. 8.1 Absolute values Chapter 8 P-adic numbers Literature: N. Koblitz, p-adic Numbers, p-adic Analysis, and Zeta-Functions, 2nd edition, Graduate Texts in Mathematics 58, Springer Verlag 1984, corrected 2nd printing 1996, Chap.

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

Numerical Analysis: Approximation of Functions

Numerical Analysis: Approximation of Functions Numerical Analysis: Approximation of Functions Mirko Navara http://cmp.felk.cvut.cz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office 104a

More information

Introductory Numerical Analysis

Introductory Numerical Analysis Introductory Numerical Analysis Lecture Notes December 16, 017 Contents 1 Introduction to 1 11 Floating Point Numbers 1 1 Computational Errors 13 Algorithm 3 14 Calculus Review 3 Root Finding 5 1 Bisection

More information

Numerical Methods I Orthogonal Polynomials

Numerical Methods I Orthogonal Polynomials Numerical Methods I Orthogonal Polynomials Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 Nov. 4th and 11th, 2010 A. Donev (Courant Institute)

More information

CALCULUS JIA-MING (FRANK) LIOU

CALCULUS JIA-MING (FRANK) LIOU CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion

More information

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018 Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

MATH 117 LECTURE NOTES

MATH 117 LECTURE NOTES MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set

More information

2

2 1 Notes for Numerical Analysis Math 54 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) 2 Contents 1 Polynomial Interpolation 5 1.1 Review...............................

More information

TD 1: Hilbert Spaces and Applications

TD 1: Hilbert Spaces and Applications Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that Problem 1A. Suppose that f is a continuous real function on [, 1]. Prove that lim α α + x α 1 f(x)dx = f(). Solution: This is obvious for f a constant, so by subtracting f() from both sides we can assume

More information

Abstract. 2. We construct several transcendental numbers.

Abstract. 2. We construct several transcendental numbers. Abstract. We prove Liouville s Theorem for the order of approximation by rationals of real algebraic numbers. 2. We construct several transcendental numbers. 3. We define Poissonian Behaviour, and study

More information

Real Analysis - Notes and After Notes Fall 2008

Real Analysis - Notes and After Notes Fall 2008 Real Analysis - Notes and After Notes Fall 2008 October 29, 2008 1 Introduction into proof August 20, 2008 First we will go through some simple proofs to learn how one writes a rigorous proof. Let start

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include PUTNAM TRAINING POLYNOMIALS (Last updated: December 11, 2017) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

Chapter 3 Interpolation and Polynomial Approximation

Chapter 3 Interpolation and Polynomial Approximation Chapter 3 Interpolation and Polynomial Approximation Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128A Numerical Analysis Polynomial Interpolation

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Roots of Unity, Cyclotomic Polynomials and Applications

Roots of Unity, Cyclotomic Polynomials and Applications Swiss Mathematical Olympiad smo osm Roots of Unity, Cyclotomic Polynomials and Applications The task to be done here is to give an introduction to the topics in the title. This paper is neither complete

More information

Convergence rates of derivatives of a family of barycentric rational interpolants

Convergence rates of derivatives of a family of barycentric rational interpolants Convergence rates of derivatives of a family of barycentric rational interpolants J.-P. Berrut, M. S. Floater and G. Klein University of Fribourg (Switzerland) CMA / IFI, University of Oslo jean-paul.berrut@unifr.ch

More information

Jim Lambers MAT 460/560 Fall Semester Practice Final Exam

Jim Lambers MAT 460/560 Fall Semester Practice Final Exam Jim Lambers MAT 460/560 Fall Semester 2009-10 Practice Final Exam 1. Let f(x) = sin 2x + cos 2x. (a) Write down the 2nd Taylor polynomial P 2 (x) of f(x) centered around x 0 = 0. (b) Write down the corresponding

More information

Factorization in Integral Domains II

Factorization in Integral Domains II Factorization in Integral Domains II 1 Statement of the main theorem Throughout these notes, unless otherwise specified, R is a UFD with field of quotients F. The main examples will be R = Z, F = Q, and

More information

Computational Physics

Computational Physics Interpolation, Extrapolation & Polynomial Approximation Lectures based on course notes by Pablo Laguna and Kostas Kokkotas revamped by Deirdre Shoemaker Spring 2014 Introduction In many cases, a function

More information

MADHAVA MATHEMATICS COMPETITION, December 2015 Solutions and Scheme of Marking

MADHAVA MATHEMATICS COMPETITION, December 2015 Solutions and Scheme of Marking MADHAVA MATHEMATICS COMPETITION, December 05 Solutions and Scheme of Marking NB: Part I carries 0 marks, Part II carries 30 marks and Part III carries 50 marks Part I NB Each question in Part I carries

More information

0 Sets and Induction. Sets

0 Sets and Induction. Sets 0 Sets and Induction Sets A set is an unordered collection of objects, called elements or members of the set. A set is said to contain its elements. We write a A to denote that a is an element of the set

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

MORE ON CONTINUOUS FUNCTIONS AND SETS

MORE ON CONTINUOUS FUNCTIONS AND SETS Chapter 6 MORE ON CONTINUOUS FUNCTIONS AND SETS This chapter can be considered enrichment material containing also several more advanced topics and may be skipped in its entirety. You can proceed directly

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Gábor P. Nagy and Viktor Vígh University of Szeged Bolyai Institute Winter 2014 1 / 262 Table of contents I 1 Introduction, review Complex numbers Vectors and matrices Determinants

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

The discrete and fast Fourier transforms

The discrete and fast Fourier transforms The discrete and fast Fourier transforms Marcel Oliver Revised April 7, 1 1 Fourier series We begin by recalling the familiar definition of the Fourier series. For a periodic function u: [, π] C, we define

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Gaussian integers. 1 = a 2 + b 2 = c 2 + d 2.

Gaussian integers. 1 = a 2 + b 2 = c 2 + d 2. Gaussian integers 1 Units in Z[i] An element x = a + bi Z[i], a, b Z is a unit if there exists y = c + di Z[i] such that xy = 1. This implies 1 = x 2 y 2 = (a 2 + b 2 )(c 2 + d 2 ) But a 2, b 2, c 2, d

More information

Orthogonality of hat functions in Sobolev spaces

Orthogonality of hat functions in Sobolev spaces 1 Orthogonality of hat functions in Sobolev spaces Ulrich Reif Technische Universität Darmstadt A Strobl, September 18, 27 2 3 Outline: Recap: quasi interpolation Recap: orthogonality of uniform B-splines

More information

Proofs. Chapter 2 P P Q Q

Proofs. Chapter 2 P P Q Q Chapter Proofs In this chapter we develop three methods for proving a statement. To start let s suppose the statement is of the form P Q or if P, then Q. Direct: This method typically starts with P. Then,

More information

Chapter Five Notes N P U2C5

Chapter Five Notes N P U2C5 Chapter Five Notes N P UC5 Name Period Section 5.: Linear and Quadratic Functions with Modeling In every math class you have had since algebra you have worked with equations. Most of those equations have

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

FOURIER SERIES, HAAR WAVELETS AND FAST FOURIER TRANSFORM

FOURIER SERIES, HAAR WAVELETS AND FAST FOURIER TRANSFORM FOURIER SERIES, HAAR WAVELETS AD FAST FOURIER TRASFORM VESA KAARIOJA, JESSE RAILO AD SAMULI SILTAE Abstract. This handout is for the course Applications of matrix computations at the University of Helsinki

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Lecture 1 INF-MAT : Chapter 2. Examples of Linear Systems

Lecture 1 INF-MAT : Chapter 2. Examples of Linear Systems Lecture 1 INF-MAT 4350 2010: Chapter 2. Examples of Linear Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo August 26, 2010 Notation The set of natural

More information

Vectors in Function Spaces

Vectors in Function Spaces Jim Lambers MAT 66 Spring Semester 15-16 Lecture 18 Notes These notes correspond to Section 6.3 in the text. Vectors in Function Spaces We begin with some necessary terminology. A vector space V, also

More information

Integration, differentiation, and root finding. Phys 420/580 Lecture 7

Integration, differentiation, and root finding. Phys 420/580 Lecture 7 Integration, differentiation, and root finding Phys 420/580 Lecture 7 Numerical integration Compute an approximation to the definite integral I = b Find area under the curve in the interval Trapezoid Rule:

More information

Immerse Metric Space Homework

Immerse Metric Space Homework Immerse Metric Space Homework (Exercises -2). In R n, define d(x, y) = x y +... + x n y n. Show that d is a metric that induces the usual topology. Sketch the basis elements when n = 2. Solution: Steps

More information

MS 3011 Exercises. December 11, 2013

MS 3011 Exercises. December 11, 2013 MS 3011 Exercises December 11, 2013 The exercises are divided into (A) easy (B) medium and (C) hard. If you are particularly interested I also have some projects at the end which will deepen your understanding

More information

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u. 5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field

More information

Function approximation

Function approximation Week 9: Monday, Mar 26 Function approximation A common task in scientific computing is to approximate a function. The approximated function might be available only through tabulated data, or it may be

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Math 489AB A Very Brief Intro to Fourier Series Fall 2008

Math 489AB A Very Brief Intro to Fourier Series Fall 2008 Math 489AB A Very Brief Intro to Fourier Series Fall 8 Contents Fourier Series. The coefficients........................................ Convergence......................................... 4.3 Convergence

More information

Solutions Final Exam May. 14, 2014

Solutions Final Exam May. 14, 2014 Solutions Final Exam May. 14, 2014 1. (a) (10 points) State the formal definition of a Cauchy sequence of real numbers. A sequence, {a n } n N, of real numbers, is Cauchy if and only if for every ɛ > 0,

More information

On a max norm bound for the least squares spline approximant. Carl de Boor University of Wisconsin-Madison, MRC, Madison, USA. 0.

On a max norm bound for the least squares spline approximant. Carl de Boor University of Wisconsin-Madison, MRC, Madison, USA. 0. in Approximation and Function Spaces Z Ciesielski (ed) North Holland (Amsterdam), 1981, pp 163 175 On a max norm bound for the least squares spline approximant Carl de Boor University of Wisconsin-Madison,

More information

Sets, Structures, Numbers

Sets, Structures, Numbers Chapter 1 Sets, Structures, Numbers Abstract In this chapter we shall introduce most of the background needed to develop the foundations of mathematical analysis. We start with sets and algebraic structures.

More information

The Fast Fourier Transform: A Brief Overview. with Applications. Petros Kondylis. Petros Kondylis. December 4, 2014

The Fast Fourier Transform: A Brief Overview. with Applications. Petros Kondylis. Petros Kondylis. December 4, 2014 December 4, 2014 Timeline Researcher Date Length of Sequence Application CF Gauss 1805 Any Composite Integer Interpolation of orbits of celestial bodies F Carlini 1828 12 Harmonic Analysis of Barometric

More information

Size properties of wavelet packets generated using finite filters

Size properties of wavelet packets generated using finite filters Rev. Mat. Iberoamericana, 18 (2002, 249 265 Size properties of wavelet packets generated using finite filters Morten Nielsen Abstract We show that asymptotic estimates for the growth in L p (R- norm of

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

Math 321 Final Examination April 1995 Notation used in this exam: N. (1) S N (f,x) = f(t)e int dt e inx.

Math 321 Final Examination April 1995 Notation used in this exam: N. (1) S N (f,x) = f(t)e int dt e inx. Math 321 Final Examination April 1995 Notation used in this exam: N 1 π (1) S N (f,x) = f(t)e int dt e inx. 2π n= N π (2) C(X, R) is the space of bounded real-valued functions on the metric space X, equipped

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods by Hae-Soo Oh Department of Mathematics, University of North Carolina at Charlotte, Charlotte, NC 28223 June

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

ARCS IN FINITE PROJECTIVE SPACES. Basic objects and definitions

ARCS IN FINITE PROJECTIVE SPACES. Basic objects and definitions ARCS IN FINITE PROJECTIVE SPACES SIMEON BALL Abstract. These notes are an outline of a course on arcs given at the Finite Geometry Summer School, University of Sussex, June 26-30, 2017. Let K denote an

More information

Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) a n z n. n=0

Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) a n z n. n=0 Lecture 22 Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) Recall a few facts about power series: a n z n This series in z is centered at z 0. Here z can

More information