Hermite Interpolation


Jim Lambers
MAT 772
Fall Semester 2010-11
Lecture Notes

These notes correspond to Sections 4 and 5 in the text.

Hermite Interpolation

Suppose that the interpolation points are perturbed so that two neighboring points $x_i$ and $x_{i+1}$, $0 \le i < n$, approach each other. What happens to the interpolating polynomial? In the limit, as $x_{i+1} \to x_i$, the interpolating polynomial $p_n(x)$ not only satisfies $p_n(x_i) = y_i$, but also the condition
$$p_n'(x_i) = \lim_{x_{i+1} \to x_i} \frac{y_{i+1} - y_i}{x_{i+1} - x_i}.$$
It follows that in order to ensure uniqueness, the data must specify the value of the derivative of the interpolating polynomial at $x_i$. In general, the inclusion of an interpolation point $x_i$ $k$ times within the set $x_0, \ldots, x_n$ must be accompanied by specification of $p_n^{(j)}(x_i)$, $j = 0, \ldots, k-1$, in order to ensure a unique solution. These values are used in place of divided differences of identical interpolation points in Newton interpolation.

Interpolation with repeated interpolation points is called osculatory interpolation, since it can be viewed as the limit of distinct interpolation points approaching one another, and the term "osculatory" is based on the Latin word for "kiss".

In the case where each of the interpolation points $x_0, x_1, \ldots, x_n$ is repeated exactly once, the interpolating polynomial for a differentiable function $f(x)$ is called the Hermite polynomial of $f(x)$, and is denoted by $H_{2n+1}(x)$, since this polynomial must have degree $2n+1$ in order to satisfy the $2n+2$ constraints
$$H_{2n+1}(x_i) = f(x_i), \quad H_{2n+1}'(x_i) = f'(x_i), \quad i = 0, 1, \ldots, n.$$

To satisfy these constraints, we define, for $i = 0, 1, \ldots, n$,
$$H_i(x) = [L_i(x)]^2 \, (1 - 2 L_i'(x_i)(x - x_i)), \qquad K_i(x) = [L_i(x)]^2 \, (x - x_i),$$
where, as before, $L_i(x)$ is the $i$th Lagrange polynomial for the interpolation points $x_0, x_1, \ldots, x_n$. It can be verified directly that these polynomials satisfy, for $i, j = 0, 1, \ldots, n$,
$$H_i(x_j) = \delta_{ij}, \qquad H_i'(x_j) = 0,$$

$$K_i(x_j) = 0, \qquad K_i'(x_j) = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta
$$\delta_{ij} = \begin{cases} 1 & i = j, \\ 0 & i \ne j. \end{cases}$$
It follows that
$$H_{2n+1}(x) = \sum_{i=0}^n \left[ f(x_i) H_i(x) + f'(x_i) K_i(x) \right]$$
is a polynomial of degree $2n+1$ that satisfies the above constraints.

To prove that this is the unique polynomial of degree $2n+1$ that does so, we assume that there is another polynomial $\tilde{H}_{2n+1}$ of degree $2n+1$ that satisfies the constraints. Because $H_{2n+1}(x_i) = \tilde{H}_{2n+1}(x_i) = f(x_i)$ for $i = 0, 1, \ldots, n$, the difference $H_{2n+1} - \tilde{H}_{2n+1}$ has at least $n+1$ zeros. It follows from Rolle's Theorem that $H_{2n+1}' - \tilde{H}_{2n+1}'$ has $n$ zeros that lie within the intervals $(x_i, x_{i+1})$ for $i = 0, 1, \ldots, n-1$. Furthermore, because $H_{2n+1}'(x_i) = \tilde{H}_{2n+1}'(x_i) = f'(x_i)$ for $i = 0, 1, \ldots, n$, it follows that $H_{2n+1}' - \tilde{H}_{2n+1}'$ has $n+1$ additional zeros, for a total of at least $2n+1$. However, $H_{2n+1}' - \tilde{H}_{2n+1}'$ is a polynomial of degree $2n$, and the only way that a polynomial of degree $2n$ can have $2n+1$ zeros is if it is identically zero. Therefore $H_{2n+1} - \tilde{H}_{2n+1}$ is a constant, and since it vanishes at the interpolation points, $H_{2n+1} = \tilde{H}_{2n+1}$: the Hermite polynomial is unique.

Using a similar approach as for the Lagrange interpolating polynomial, combined with ideas from the proof of the uniqueness of the Hermite polynomial, the following result can be proved.

Theorem. Let $f$ be $2n+2$ times continuously differentiable on $[a, b]$, and let $H_{2n+1}$ denote the Hermite polynomial of $f$ with interpolation points $x_0, x_1, \ldots, x_n$ in $[a, b]$. Then there exists a point $\xi(x) \in [a, b]$ such that
$$f(x) - H_{2n+1}(x) = \frac{f^{(2n+2)}(\xi(x))}{(2n+2)!} (x - x_0)^2 (x - x_1)^2 \cdots (x - x_n)^2.$$
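The construction above is easy to check numerically. The following pure-Python sketch (the helper names are my own, not from the text) evaluates the basis polynomials $H_i$ and $K_i$ from the Lagrange polynomials, using the identity $L_i'(x_i) = \sum_{m \ne i} 1/(x_i - x_m)$, and verifies that with $n = 1$ the degree-3 Hermite polynomial reproduces the cubic $f(x) = x^3$ exactly, as the error theorem predicts (the $f^{(4)}$ term vanishes).

```python
def lagrange(i, xs, x):
    """Evaluate the i-th Lagrange basis polynomial L_i at x."""
    p = 1.0
    for m, xm in enumerate(xs):
        if m != i:
            p *= (x - xm) / (xs[i] - xm)
    return p

def lagrange_deriv_at_node(i, xs):
    """L_i'(x_i) = sum over m != i of 1/(x_i - x_m)."""
    return sum(1.0 / (xs[i] - xm) for m, xm in enumerate(xs) if m != i)

def hermite(xs, fvals, dvals, x):
    """Evaluate H_{2n+1}(x) = sum_i [ f(x_i) H_i(x) + f'(x_i) K_i(x) ]."""
    total = 0.0
    for i in range(len(xs)):
        Li = lagrange(i, xs, x)
        Hi = Li**2 * (1.0 - 2.0 * lagrange_deriv_at_node(i, xs) * (x - xs[i]))
        Ki = Li**2 * (x - xs[i])
        total += fvals[i] * Hi + dvals[i] * Ki
    return total

# With n = 1 (two nodes), H_3 has degree 3, so it reproduces f(x) = x^3.
xs = [0.0, 1.0]
f = lambda x: x**3
df = lambda x: 3 * x**2
approx = hermite(xs, [f(x) for x in xs], [df(x) for x in xs], 0.5)
print(approx)  # should agree with 0.5**3 = 0.125 up to rounding
```

For production use one would build the Newton form instead of summing the basis polynomials, but this direct form mirrors the derivation above.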
The representation of the Hermite polynomial in terms of Lagrange polynomials and their derivatives is not practical, because of the difficulty of differentiating and evaluating these polynomials. Instead, one can construct the Hermite polynomial using a Newton divided-difference table, in which each entry corresponding to two identical interpolation points is filled with the value of $f'(x)$ at the common point. Then, the Hermite polynomial can be represented using the Newton divided-difference formula.

Differentiation

We now discuss how polynomial interpolation can be applied to help solve a fundamental problem from calculus that frequently arises in scientific applications: the problem of computing the derivative of a given function $f(x)$.

Finite Difference Approximations

Recall that the derivative of $f(x)$ at a point $x_0$, denoted $f'(x_0)$, is defined by
$$f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.$$
This definition suggests a method for approximating $f'(x_0)$. If we choose $h$ to be a small positive constant, then
$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}.$$
This approximation is called the forward difference formula.

To estimate the accuracy of this approximation, we note that if $f''(x)$ exists on $[x_0, x_0 + h]$, then, by Taylor's Theorem, $f(x_0 + h) = f(x_0) + f'(x_0) h + f''(\xi) h^2 / 2$, where $\xi \in [x_0, x_0 + h]$. Solving for $f'(x_0)$, we obtain
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{f''(\xi)}{2} h,$$
so the error in the forward difference formula is $O(h)$. We say that this formula is first-order accurate.

The forward-difference formula is called a finite difference approximation to $f'(x_0)$, because it approximates $f'(x)$ using values of $f(x)$ at points that have a small, but finite, distance between them, as opposed to the definition of the derivative, which takes a limit and therefore computes the derivative using an infinitely small value of $h$.

The forward-difference formula, however, is just one example of a finite difference approximation. If we replace $h$ by $-h$ in the forward-difference formula, where $h$ is still positive, we obtain the backward-difference formula
$$f'(x_0) \approx \frac{f(x_0) - f(x_0 - h)}{h}.$$
Like the forward-difference formula, the backward-difference formula is first-order accurate. If we average these two approximations, we obtain the centered-difference formula
$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0 - h)}{2h}.$$
To determine the accuracy of this approximation, we assume that $f'''(x)$ exists on the interval $[x_0 - h, x_0 + h]$, and then apply Taylor's Theorem again to obtain
$$f(x_0 + h) = f(x_0) + f'(x_0) h + \frac{f''(x_0)}{2} h^2 + \frac{f'''(\xi_+)}{6} h^3,$$
$$f(x_0 - h) = f(x_0) - f'(x_0) h + \frac{f''(x_0)}{2} h^2 - \frac{f'''(\xi_-)}{6} h^3,$$

where $\xi_+ \in [x_0, x_0 + h]$ and $\xi_- \in [x_0 - h, x_0]$. Subtracting the second equation from the first and solving for $f'(x_0)$ yields
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} - \frac{f'''(\xi_+) + f'''(\xi_-)}{12} h^2.$$
By the Intermediate Value Theorem, $f'''(x)$ must assume every value between $f'''(\xi_-)$ and $f'''(\xi_+)$ on the interval $(\xi_-, \xi_+)$, including the average of these two values. Therefore, we can simplify this equation to
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} - \frac{f'''(\xi)}{6} h^2,$$
where $\xi \in [x_0 - h, x_0 + h]$. We conclude that the centered-difference formula is second-order accurate. This is due to the cancellation of the terms involving $f''(x_0)$.

Example. Consider a function $f(x)$ defined by a complicated composition of sines, quotients, and square roots, and suppose that our goal is to compute $f'(0.25)$. Differentiating, using the Quotient Rule and the Chain Rule, we obtain a lengthy expression involving $\sin$, $\cos$, and fractional powers. Evaluating this monstrous expression at $x = 0.25$ yields $f'(0.25) = 9.098770$.

An alternative approach is to use a centered difference approximation,
$$f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}.$$
Using this formula with $x = 0.25$ and $h = 0.005$, we obtain the approximation
$$f'(0.25) \approx \frac{f(0.255) - f(0.245)}{0.01} = 9.074495,$$
which has absolute error of roughly $2.4 \times 10^{-2}$. While the complicated function must be evaluated twice to obtain this approximation, that is still much less work than using differentiation rules to compute $f'(x)$, and then evaluating $f'(x)$, which is much more complicated than $f(x)$.
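The first- and second-order behavior derived above can be observed directly. The sketch below (the function in the example above is complicated, so as a stand-in assumption we use $f(x) = e^{\sin x}$, whose exact derivative $\cos(x)\, e^{\sin x}$ is known) halves the spacing $h$ and checks that the forward-difference error roughly halves while the centered-difference error roughly quarters.

```python
import math

def forward_diff(f, x0, h):
    """First-order forward difference: error is O(h)."""
    return (f(x0 + h) - f(x0)) / h

def centered_diff(f, x0, h):
    """Second-order centered difference: error is O(h^2)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Stand-in test function, NOT the one from the example above:
# f(x) = exp(sin x), with exact derivative cos(x) * exp(sin x).
f = lambda x: math.exp(math.sin(x))
df = lambda x: math.cos(x) * math.exp(math.sin(x))

x0 = 0.25
fwd = [abs(forward_diff(f, x0, h) - df(x0)) for h in (0.02, 0.01)]
ctr = [abs(centered_diff(f, x0, h) - df(x0)) for h in (0.02, 0.01)]
# Halving h should roughly halve the forward error (first order)
# and quarter the centered error (second order).
print(fwd[0] / fwd[1], ctr[0] / ctr[1])
```

The error ratios approach 2 and 4 only as $h \to 0$; for moderate $h$ they carry higher-order corrections, which is why the check uses loose tolerances.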

While Taylor's Theorem can be used to derive formulas with higher-order accuracy simply by evaluating $f(x)$ at more points, this process can be tedious. An alternative approach is to compute the derivative of the interpolating polynomial that fits $f(x)$ at these points. Specifically, suppose we want to compute the derivative at a point $x_0$ using the data
$$(x_{-j}, y_{-j}), \ldots, (x_{-1}, y_{-1}), (x_0, y_0), (x_1, y_1), \ldots, (x_k, y_k),$$
where $j$ and $k$ are known nonnegative integers, $x_{-j} < x_{-j+1} < \cdots < x_{k-1} < x_k$, and $y_i = f(x_i)$ for $i = -j, \ldots, k$. Then, a finite difference formula for $f'(x_0)$ can be obtained by analytically computing the derivatives of the Lagrange polynomials $\{L_{n,i}(x)\}_{i=-j}^{k}$ for these points, where $n = j + k$, and the values of these derivatives at $x_0$ are the proper weights for the function values $y_{-j}, \ldots, y_k$. If $f(x)$ is $n+1$ times continuously differentiable on $[x_{-j}, x_k]$, then we obtain an approximation of the form
$$f'(x_0) = \sum_{i=-j}^{k} y_i L_{n,i}'(x_0) + \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=-j,\, i \ne 0}^{k} (x_0 - x_i),$$
where $\xi \in [x_{-j}, x_k]$.

Among the best-known finite difference formulas that can be derived using this approach is the second-order-accurate three-point formula
$$f'(x_0) = \frac{-3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h)}{2h} + \frac{f'''(\xi)}{3} h^2, \quad \xi \in [x_0, x_0 + 2h],$$
which is useful when there is no information available about $f(x)$ for $x < x_0$. If there is no information available about $f(x)$ for $x > x_0$, then we can replace $h$ by $-h$ in the above formula to obtain a second-order-accurate three-point formula that uses the values of $f(x)$ at $x_0$, $x_0 - h$ and $x_0 - 2h$.

Another formula is the five-point formula
$$f'(x_0) = \frac{f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)}{12h} + \frac{f^{(5)}(\xi)}{30} h^4, \quad \xi \in [x_0 - 2h, x_0 + 2h],$$
which is fourth-order accurate. The reason it is called a five-point formula, even though it uses the value of $f(x)$ at only four points, is that it is derived from the Lagrange polynomials for the five points $x_0 - 2h$, $x_0 - h$, $x_0$, $x_0 + h$, and $x_0 + 2h$. However, $f(x_0)$ is not used in the formula because $L_{4,2}'(x_0) = 0$, where $L_{4,2}(x)$ is the
Lagrange polynomial that is equal to one at $x_0$ and zero at the other four points. If we do not have any information about $f(x)$ for $x < x_0$, then we can use the following five-point formula that actually uses the values of $f(x)$ at five points,
$$f'(x_0) = \frac{-25 f(x_0) + 48 f(x_0 + h) - 36 f(x_0 + 2h) + 16 f(x_0 + 3h) - 3 f(x_0 + 4h)}{12h} + \frac{f^{(5)}(\xi)}{5} h^4,$$

where $\xi \in [x_0, x_0 + 4h]$. As before, we can replace $h$ by $-h$ to obtain a similar formula that approximates $f'(x_0)$ using the values of $f(x)$ at $x_0$, $x_0 - h$, $x_0 - 2h$, $x_0 - 3h$, and $x_0 - 4h$.

The strategy of differentiating Lagrange polynomials to approximate derivatives can be used to approximate higher-order derivatives. For example, the second derivative can be approximated using a centered difference formula,
$$f''(x_0) \approx \frac{f(x_0 + h) - 2 f(x_0) + f(x_0 - h)}{h^2},$$
which is second-order accurate.

Example. We will construct a formula for approximating $f'(x)$ at a given point $x_0$ by interpolating $f(x)$ at the points $x_0$, $x_0 + h$, and $x_0 + 2h$ using a second-degree polynomial $p_2(x)$, and then approximating $f'(x_0)$ by $p_2'(x_0)$. Since $p_2(x)$ should be a good approximation of $f(x)$ near $x_0$, especially when $h$ is small, its derivative should be a good approximation to $f'(x)$ near this point.

Using Lagrange interpolation, we obtain
$$p_2(x) = f(x_0) L_{2,0}(x) + f(x_0 + h) L_{2,1}(x) + f(x_0 + 2h) L_{2,2}(x),$$
where $\{L_{2,j}(x)\}_{j=0}^{2}$ are the Lagrange polynomials for the points $x_0$, $x_1 = x_0 + h$ and $x_2 = x_0 + 2h$. Recall that these polynomials satisfy
$$L_{2,j}(x_k) = \delta_{jk} = \begin{cases} 1 & \text{if } j = k, \\ 0 & \text{otherwise}. \end{cases}$$
Using the formula for the Lagrange polynomials,
$$L_{2,j}(x) = \prod_{i=0,\, i \ne j}^{2} \frac{x - x_i}{x_j - x_i},$$
we obtain
$$L_{2,0}(x) = \frac{(x - (x_0 + h))(x - (x_0 + 2h))}{(x_0 - (x_0 + h))(x_0 - (x_0 + 2h))} = \frac{x^2 - (2x_0 + 3h) x + (x_0 + h)(x_0 + 2h)}{2h^2},$$
$$L_{2,1}(x) = \frac{(x - x_0)(x - (x_0 + 2h))}{(x_0 + h - x_0)(x_0 + h - (x_0 + 2h))} = -\frac{x^2 - (2x_0 + 2h) x + x_0 (x_0 + 2h)}{h^2},$$
$$L_{2,2}(x) = \frac{(x - x_0)(x - (x_0 + h))}{(x_0 + 2h - x_0)(x_0 + 2h - (x_0 + h))} = \frac{x^2 - (2x_0 + h) x + x_0 (x_0 + h)}{2h^2}.$$

It follows that We conclude that f (x 0 ) p (x 0), where L,0(x) = x (x 0 + 3h) L,1(x) = x (x 0 + ) h L,(x) = x (x 0 + h) p (x 0 ) = f(x 0 )L,0(x 0 ) + f(x 0 + h)l,0(x 0 ) + f(x 0 + )L,0(x 0 ) f(x 0 ) 3 + f(x 0 + h) h + f(x 0 + ) 1 3f(x 0) + 4f(x 0 + h) f(x 0 + ) Using Taylor s Theorem to write f(x 0 + h) and f(x 0 + ) in terms of Taylor polynomials centered at x 0, it can be shown that the error in this approximation is O(h ), and that this formula is exact when f(x) is a polynomial of degree or less In a practical implementation of finite difference formulas, it is essential to note that roundoff error in evaluating f(x) is bounded independently of the spacing h between points at which f(x) is evaluated It follows that the roundoff error in the approximation of f (x) actually increases as h decreases, because the errors incurred by evaluating f(x) are divided by h Therefore, one must choose h sufficiently small so that the finite difference formula can produce an accurate approximation, and sufficiently large so that this approximation is not too contaminated by roundoff error 7