ERROR IN LINEAR INTERPOLATION

Let $P_1(x)$ be the linear polynomial interpolating $f(x)$ at $x_0$ and $x_1$. Assume $f(x)$ is twice continuously differentiable on an interval $[a, b]$ which contains the points $x_0 < x_1$. Then for $a \le x \le b$,

$$f(x) - P_1(x) = \frac{(x - x_0)(x - x_1)}{2} f''(c_x)$$

for some $c_x$ between the minimum and maximum of $x_0$, $x_1$, and $x$.

We usually use $P_1(x)$ as an approximation of $f(x)$ for $x \in [x_0, x_1]$. Then for an error bound,

$$|f(x) - P_1(x)| \le \frac{(x - x_0)(x_1 - x)}{2} \max_{x_0 \le x \le x_1} |f''(x)|, \qquad x \in [x_0, x_1].$$

Easily, with $h = x_1 - x_0$,

$$\max_{x_0 \le x \le x_1} (x - x_0)(x_1 - x) = \frac{h^2}{4}.$$

Therefore,

$$|f(x) - P_1(x)| \le \frac{h^2}{8} \max_{x_0 \le x \le x_1} |f''(x)|, \qquad x \in [x_0, x_1].$$
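This bound is easy to check numerically. Below is a minimal sketch in Python; the function $f(x) = \cos x$ and the node values are illustrative choices only.

```python
import numpy as np

# Check |f(x) - P1(x)| <= h^2/8 * max|f''| on [x0, x1].
f = np.cos
d2f = lambda x: -np.cos(x)          # second derivative of cos x

x0, x1 = 1.0, 1.1
h = x1 - x0

# Linear interpolant through (x0, f(x0)) and (x1, f(x1)).
P1 = lambda x: f(x0) + (f(x1) - f(x0)) / h * (x - x0)

xs = np.linspace(x0, x1, 10001)
observed = np.max(np.abs(f(xs) - P1(xs)))
bound = h**2 / 8 * np.max(np.abs(d2f(xs)))

print(f"max observed error = {observed:.3e}")
print(f"h^2/8 * max|f''|   = {bound:.3e}")    # observed error stays below the bound
```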

EXAMPLE

Let $f(x) = \log_{10} x$; and in line with typical tables of $\log_{10} x$, we take $1 \le x_0 \le x \le x_1 \le 10$, and again let $h = x_1 - x_0$. Then

$$f''(x) = \frac{-\log_{10} e}{x^2}, \qquad \max_{x_0 \le x \le x_1} |f''(x)| = \frac{\log_{10} e}{x_0^2}.$$

Thus,

$$|\log_{10} x - P_1(x)| \le \frac{h^2}{8} \cdot \frac{\log_{10} e}{x_0^2}, \qquad x \in [x_0, x_1].$$

Typical high school algebra textbooks contain tables of $\log_{10} x$ with a spacing of $h = 0.01$. What is the error in this case? Since $x_0 \ge 1$,

$$|\log_{10} x - P_1(x)| \le \frac{0.01^2}{8} \log_{10} e \doteq 5.4287 \times 10^{-6}.$$

In most tables, a typical entry is given to only four decimal places to the right of the decimal point, e.g.

$$\log 5.41 \doteq 0.7332$$

Therefore the entries are in error by as much as $0.00005$. Comparing this with the interpolation error, we see the latter (with $h = 0.01$) is less important than the rounding errors in the table entries.

From the bound

$$|\log_{10} x - P_1(x)| \le \frac{h^2 \log_{10} e}{8 x_0^2} \doteq 0.0543\, \frac{h^2}{x_0^2}$$

we see the error decreases as $x_0$ increases, and it is about 100 times smaller for points near 10 than for points near 1.

THE QUADRATIC CASE

For $n = 2$, we have

$$f(x) - P_2(x) = \frac{(x - x_0)(x - x_1)(x - x_2)}{3!} f^{(3)}(c_x) \qquad (*)$$

with $c_x$ some point between the minimum and maximum of the points in $\{x, x_0, x_1, x_2\}$.

To illustrate the use of this formula, consider the case of evenly spaced nodes:

$$x_1 = x_0 + h, \qquad x_2 = x_1 + h$$

Further suppose we have $x_0 \le x \le x_2$, as we would usually have when interpolating in a table of given function values (e.g. $\log_{10} x$). The quantity

$$\Psi_2(x) = (x - x_0)(x - x_1)(x - x_2)$$

can be evaluated directly for a particular $x$.
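As an illustration, the following sketch evaluates $\Psi_2(x)$ at one particular $x$ and compares the pointwise quantity $|\Psi_2(x)|/3! \cdot \max|f^{(3)}|$ from $(*)$ with the actual error of quadratic interpolation of $f(x) = \log_{10} x$; the nodes, the spacing, and the evaluation point are arbitrary choices.

```python
import numpy as np

f = lambda x: np.log10(x)
d3f = lambda x: 2 * np.log10(np.e) / x**3      # third derivative of log10(x)

x0, h = 5.0, 0.01                              # illustrative node spacing
x1, x2 = x0 + h, x0 + 2 * h
x = 5.013                                      # a particular evaluation point

# Quadratic interpolant via the Lagrange form.
def P2(t):
    L0 = (t - x1) * (t - x2) / ((x0 - x1) * (x0 - x2))
    L1 = (t - x0) * (t - x2) / ((x1 - x0) * (x1 - x2))
    L2 = (t - x0) * (t - x1) / ((x2 - x0) * (x2 - x1))
    return f(x0) * L0 + f(x1) * L1 + f(x2) * L2

Psi2 = (x - x0) * (x - x1) * (x - x2)
pointwise_bound = abs(Psi2) / 6 * abs(d3f(x0))   # |f'''| is decreasing, so its max on [x0,x2] is at x0

print(f"actual error    = {abs(f(x) - P2(x)):.3e}")
print(f"pointwise bound = {pointwise_bound:.3e}")
```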

Figure: Graph of $\Psi_2(x) = (x + h)\,x\,(x - h)$, using $(x_0, x_1, x_2) = (-h, 0, h)$.

In the formula $(*)$, however, we do not know $c_x$, and therefore we replace $f^{(3)}(c_x)$ with a maximum of $|f^{(3)}(x)|$ as $x$ varies over $x_0 \le x \le x_2$. This yields

$$|f(x) - P_2(x)| \le \frac{|\Psi_2(x)|}{3!} \max_{x_0 \le x \le x_2} |f^{(3)}(x)| \qquad (**)$$

If we want a uniform bound for $x_0 \le x \le x_2$, we must compute

$$\max_{x_0 \le x \le x_2} |\Psi_2(x)| = \max_{x_0 \le x \le x_2} |(x - x_0)(x - x_1)(x - x_2)|$$

With the change of variables $x = x_1 + th$,

$$x - x_0 = (1 + t)h, \qquad x - x_1 = th, \qquad x - x_2 = (t - 1)h$$

$$\max_{x_0 \le x \le x_2} |\Psi_2(x)| = h^3 \max_{-1 \le t \le 1} |t^3 - t| = \frac{2h^3}{3\sqrt{3}}$$

Combined with $(**)$, this yields

$$|f(x) - P_2(x)| \le \frac{h^3}{9\sqrt{3}} \max_{x_0 \le x \le x_2} |f^{(3)}(x)|$$

for $x_0 \le x \le x_2$.
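A quick numerical check of the maximum used above (a sketch; the grid resolution is arbitrary): the maximum of $|t^3 - t|$ on $[-1, 1]$ occurs at $t = \pm 1/\sqrt{3}$ and equals $2/(3\sqrt{3})$.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 2_000_001)
numerical_max = np.max(np.abs(t**3 - t))
closed_form = 2 / (3 * np.sqrt(3))

print(f"numerical max of |t^3 - t| on [-1,1] = {numerical_max:.10f}")
print(f"2 / (3*sqrt(3))                      = {closed_form:.10f}")
# The critical points are t = +-1/sqrt(3), where |t^3 - t| = 2/(3*sqrt(3)).
```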

For $f(x) = \log_{10} x$, with $1 \le x_0 \le x \le x_2 \le 10$, this leads to

$$|\log_{10} x - P_2(x)| \le \frac{h^3}{9\sqrt{3}} \max_{x_0 \le x \le x_2} \left| \frac{2 \log_{10} e}{x^3} \right| \le 0.05572\, \frac{h^3}{x_0^3}$$

For the case of $h = 0.01$, we have

$$|\log_{10} x - P_2(x)| \le \frac{5.57 \times 10^{-8}}{x_0^3} \le 5.57 \times 10^{-8}$$

For comparison,

$$|\log_{10} x - P_1(x)| \le 5.43 \times 10^{-6}$$

Question: How much larger could we make $h$ so that quadratic interpolation would have an error comparable to that of linear interpolation of $\log_{10} x$ with $h = 0.01$?

The error bound for the linear interpolation was $5.43 \times 10^{-6}$, and therefore we want the same to be true of quadratic interpolation. Using a simpler bound, we want to find $h$ so that

$$|\log_{10} x - P_2(x)| \le 0.05572\, h^3 \le 5 \times 10^{-6}$$

This is true if $h \doteq 0.04477$. Therefore a spacing of $h = 0.04$ would be sufficient. A table with this spacing and quadratic interpolation would have an error comparable to a table with $h = 0.01$ and linear interpolation.
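The required spacing follows directly from solving the cubic bound for $h$; a one-line check:

```python
# Solve 0.05572 * h**3 = 5e-6 for h.
h = (5e-6 / 0.05572) ** (1 / 3)
print(f"h = {h:.5f}")   # approximately 0.04477, so a spacing of h = 0.04 suffices
```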

AN ERROR FORMULA: THE GENERAL CASE

Recall the general interpolation problem: find a polynomial $P_n(x)$ for which

$$\deg(P_n) \le n, \qquad P_n(x_i) = f(x_i), \quad i = 0, 1, \dots, n$$

with distinct node points $\{x_0, \dots, x_n\}$ and a given function $f(x)$.

Let $[a, b]$ be a given interval on which $f(x)$ is $(n+1)$-times continuously differentiable; and assume the points $x_0, \dots, x_n$, and $x$ are contained in $[a, b]$. Then

$$f(x) - P_n(x) = \frac{(x - x_0)(x - x_1) \cdots (x - x_n)}{(n+1)!} f^{(n+1)}(c_x)$$

with $c_x$ some point between the minimum and maximum of the points in $\{x, x_0, \dots, x_n\}$.

As shorthand, introduce

$$\Psi_n(x) = (x - x_0)(x - x_1) \cdots (x - x_n)$$

a polynomial of degree $n + 1$ with roots $\{x_0, \dots, x_n\}$. Then

$$f(x) - P_n(x) = \frac{\Psi_n(x)}{(n+1)!} f^{(n+1)}(c_x)$$

When bounding the error we replace $f^{(n+1)}(c_x)$ with its maximum over the interval containing $\{x, x_0, \dots, x_n\}$, as we have illustrated earlier in the linear and quadratic cases. We concentrate on understanding the size of

$$\Psi_n(x) \qquad \text{or} \qquad \frac{\Psi_n(x)}{(n+1)!}$$

ERROR FOR EVENLY SPACED NODES

We consider first the case in which the node points are evenly spaced, as this seems the natural way to define the points at which interpolation is carried out. Moreover, using evenly spaced nodes is the case to consider for table interpolation.

For evenly spaced node points on $[0, 1]$,

$$h = \frac{1}{n}, \qquad x_0 = 0, \; x_1 = h, \; x_2 = 2h, \; \dots, \; x_n = nh = 1$$

For this case,

$$\Psi_n(x) = x (x - h)(x - 2h) \cdots (x - 1)$$

Using the following table

    n    M_n          n    M_n
    1    1.25E-1      6    4.76E-7
    2    2.41E-2      7    2.20E-8
    3    2.06E-3      8    9.11E-10
    4    1.48E-4      9    3.39E-11
    5    9.01E-6     10    1.15E-12

we can observe that the maximum

$$M_n \equiv \max_{x_0 \le x \le x_n} \frac{|\Psi_n(x)|}{(n+1)!}$$

becomes smaller with increasing $n$. Graphs of $\Psi_n(x)$ are shown next for the cases $n = 2, \dots, 9$.
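The rapid decay of this maximum with $n$ can be observed with a short grid computation; the sketch below estimates $\max_{[0,1]} |\Psi_n(x)|/(n+1)!$ as defined above and is meant to show the qualitative trend (the normalization used to produce the table entries may differ slightly, so the digits need not match exactly).

```python
import numpy as np
from math import factorial

def Mn(n, samples=200_001):
    """Grid estimate of max over [0,1] of |Psi_n(x)| / (n+1)! with nodes x_i = i/n."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    x = np.linspace(0.0, 1.0, samples)
    psi = np.ones_like(x)
    for xi in nodes:
        psi *= x - xi
    return np.max(np.abs(psi)) / factorial(n + 1)

for n in range(1, 11):
    print(f"n = {n:2d}   M_n ~ {Mn(n):.2E}")   # decreases rapidly with n
```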

Figure: Graphs of $\Psi_n(x)$ on $[0, 1]$ for $n = 2, 3, 4, 5$.

Figure: Graphs of $\Psi_n(x)$ on $[0, 1]$ for $n = 6, 7, 8, 9$.

Figure: More detailed view of the graph of $\Psi_6(x) = (x - x_0)(x - x_1) \cdots (x - x_6)$ with evenly spaced nodes.

What can we learn from the given graphs? Observe that there is enormous variation in the size of $\Psi_n(x)$ as $x$ varies over $[0, 1]$; and thus there is also enormous variation in the error as $x$ so varies. For example, in the $n = 9$ case,

$$\max_{x_0 \le x \le x_1} \frac{|\Psi_n(x)|}{(n+1)!} = 3.39 \times 10^{-11}, \qquad \max_{x_4 \le x \le x_5} \frac{|\Psi_n(x)|}{(n+1)!} = 6.89 \times 10^{-13}$$

and the ratio of these two errors is approximately 49. Thus the interpolation error is likely to be around 49 times larger when $x_0 \le x \le x_1$ as compared to the case when $x_4 \le x \le x_5$.

When doing table interpolation, the point $x$ at which we interpolate should be centrally located with respect to the interpolation nodes $\{x_0, \dots, x_n\}$ being used to define the interpolation, if possible.
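The ratio quoted above is easy to confirm. The sketch below assumes the nodes $x_i = i/9$ on $[0,1]$ and compares the maximum of $|\Psi_9(x)|$ over the outer subinterval $[x_0, x_1]$ with that over the central subinterval $[x_4, x_5]$; the ratio is independent of the factorial normalization.

```python
import numpy as np

n = 9
nodes = np.linspace(0.0, 1.0, n + 1)

def max_abs_psi(lo, hi, samples=100_001):
    """Grid estimate of max |Psi_n(x)| for x in [lo, hi]."""
    x = np.linspace(lo, hi, samples)
    psi = np.ones_like(x)
    for xi in nodes:
        psi *= x - xi
    return np.max(np.abs(psi))

outer = max_abs_psi(nodes[0], nodes[1])    # [x0, x1]
central = max_abs_psi(nodes[4], nodes[5])  # [x4, x5]
print(f"ratio of outer to central maximum = {outer / central:.1f}")  # about 49
```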

Discussion on Convergence

Consider now the problem of using an interpolation polynomial to approximate a given function $f(x)$ on a given interval $[a, b]$. In particular, take interpolation nodes

$$a \le x_0 < x_1 < \cdots < x_{n-1} < x_n \le b$$

and produce the interpolation polynomial $P_n(x)$ that interpolates $f(x)$ at the given node points. We would like to have

$$\max_{a \le x \le b} |f(x) - P_n(x)| \to 0 \quad \text{as } n \to \infty$$

Does it happen? Recall the error bound

$$\max_{a \le x \le b} |f(x) - P_n(x)| \le \max_{a \le x \le b} \frac{|\Psi_n(x)|}{(n+1)!} \cdot \max_{a \le x \le b} |f^{(n+1)}(x)|$$

We begin with an example using evenly spaced node points.

RUNGE'S EXAMPLE

Use evenly spaced node points:

$$h = \frac{b - a}{n}, \qquad x_i = a + ih \quad \text{for } i = 0, \dots, n$$

For some functions, such as $f(x) = e^x$, the maximum error goes to zero quite rapidly. But the size of the derivative term $f^{(n+1)}(x)$ in

$$\max_{a \le x \le b} |f(x) - P_n(x)| \le \max_{a \le x \le b} \frac{|\Psi_n(x)|}{(n+1)!} \cdot \max_{a \le x \le b} |f^{(n+1)}(x)|$$

can badly hurt or destroy the convergence in other cases.

In particular, we show the graph of $f(x) = 1/(1 + x^2)$ and $P_n(x)$ on $[-5, 5]$ for the cases $n = 8$ and $n = 12$; the case $n = 10$ is in the text on page 127. It can be proven that for this function, the maximum error on $[-5, 5]$ does not converge to zero. Thus the use of evenly spaced nodes is not necessarily a good approach to approximating a function $f(x)$ by interpolation.
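The divergence is easy to see numerically. A minimal sketch (the degrees shown are an arbitrary selection) interpolates $f(x) = 1/(1 + x^2)$ at evenly spaced nodes on $[-5, 5]$ and prints the maximum error, which grows with $n$.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)
a, b = -5.0, 5.0

def lagrange_eval(nodes, values, x):
    """Evaluate the interpolating polynomial at the points x (Lagrange form)."""
    total = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        L = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += values[i] * L
    return total

x = np.linspace(a, b, 5001)
for n in (4, 8, 12, 16):
    nodes = np.linspace(a, b, n + 1)          # evenly spaced nodes
    Pn = lagrange_eval(nodes, f(nodes), x)
    print(f"n = {n:2d}   max error on [-5,5] = {np.max(np.abs(f(x) - Pn)):.3e}")
```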

Figure: Runge's example with $n = 10$, showing $y = P_{10}(x)$ and $y = 1/(1 + x^2)$.

OTHER CHOICES OF NODES

Recall the general error bound

$$\max_{a \le x \le b} |f(x) - P_n(x)| \le \max_{a \le x \le b} \frac{|\Psi_n(x)|}{(n+1)!} \cdot \max_{a \le x \le b} |f^{(n+1)}(x)|$$

There is nothing we can really do about the derivative term for $f$; but we can examine the way of defining the nodes $\{x_0, \dots, x_n\}$ within the interval $[a, b]$. We ask how these nodes can be chosen so that the maximum of $|\Psi_n(x)|$ over $[a, b]$ is made as small as possible.

There is an elegant solution to this problem, and it is taken up in Section 4.6. The node points $\{x_0, \dots, x_n\}$ turn out to be the zeros of a particular polynomial $T_{n+1}(x)$ of degree $n + 1$, called a Chebyshev polynomial. These zeros are known explicitly, and with them

$$\max_{a \le x \le b} |\Psi_n(x)| = \left( \frac{b - a}{2} \right)^{n+1} \frac{1}{2^n}$$

This turns out to be smaller than for evenly spaced cases; and although this polynomial interpolation does not work for all functions $f(x)$, it works for all differentiable functions and more.
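To see the improvement, the sketch below compares a grid estimate of $\max_{[a,b]} |\Psi_n(x)|$ for evenly spaced nodes against Chebyshev nodes on $[-5, 5]$, together with the closed-form value $((b-a)/2)^{n+1}/2^n$; the interval and degrees are illustrative choices.

```python
import numpy as np

a, b = -5.0, 5.0

def max_abs_psi(nodes, samples=200_001):
    """Grid estimate of max over [a,b] of |(x - x_0)...(x - x_n)|."""
    x = np.linspace(a, b, samples)
    psi = np.ones_like(x)
    for xi in nodes:
        psi *= x - xi
    return np.max(np.abs(psi))

for n in (4, 8, 12):
    even = np.linspace(a, b, n + 1)
    k = np.arange(n + 1)
    # Zeros of the Chebyshev polynomial T_{n+1}, mapped from [-1,1] to [a,b].
    cheb = (a + b) / 2 + (b - a) / 2 * np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    closed_form = ((b - a) / 2) ** (n + 1) / 2 ** n
    print(f"n = {n:2d}   even: {max_abs_psi(even):.3e}   "
          f"Chebyshev: {max_abs_psi(cheb):.3e}   formula: {closed_form:.3e}")
```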

ANOTHER ERROR FORMULA

Recall the error formula

$$f(x) - P_n(x) = \frac{\Psi_n(x)}{(n+1)!} f^{(n+1)}(c), \qquad \Psi_n(x) = (x - x_0)(x - x_1) \cdots (x - x_n)$$

with $c$ between the minimum and maximum of $\{x_0, \dots, x_n, x\}$. A second formula is given by

$$f(x) - P_n(x) = \Psi_n(x)\, f[x_0, \dots, x_n, x]$$

Showing this requires a simple, but somewhat subtle, argument. Let $P_{n+1}(x)$ denote the polynomial of degree $n + 1$ which interpolates $f(x)$ at the points $\{x_0, \dots, x_n, x_{n+1}\}$. Then

$$P_{n+1}(x) = P_n(x) + f[x_0, \dots, x_n, x_{n+1}]\, (x - x_0) \cdots (x - x_n)$$

Substituting $x = x_{n+1}$, and using the fact that $P_{n+1}(x)$ interpolates $f(x)$ at $x_{n+1}$, we have

$$f(x_{n+1}) = P_n(x_{n+1}) + f[x_0, \dots, x_n, x_{n+1}]\, (x_{n+1} - x_0) \cdots (x_{n+1} - x_n)$$

In this formula, the number $x_{n+1}$ is completely arbitrary, other than being distinct from the points in $\{x_0, \dots, x_n\}$. To emphasize this fact, replace $x_{n+1}$ by $x$ throughout the formula, obtaining

$$f(x) = P_n(x) + f[x_0, \dots, x_n, x]\, (x - x_0) \cdots (x - x_n) = P_n(x) + \Psi_n(x)\, f[x_0, \dots, x_n, x]$$

provided $x \ne x_0, \dots, x_n$. This formula is also easily true for $x$ a node point, provided $f(x)$ is sufficiently differentiable (to ensure the existence of the divided difference $f[x_0, \dots, x_n, x]$). This shows

$$f(x) - P_n(x) = \Psi_n(x)\, f[x_0, \dots, x_n, x] \qquad (1)$$
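Formula (1) can be verified numerically; in the sketch below the function and nodes are arbitrary choices.

```python
import numpy as np

def divided_difference(xs, ys):
    """Return the highest-order divided difference f[x_0, ..., x_k] (standard table)."""
    d = list(map(float, ys))
    for level in range(1, len(xs)):
        for i in range(len(xs) - 1, level - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - level])
    return d[-1]

def lagrange_eval(nodes, values, x):
    """Evaluate the interpolating polynomial P_n at a single point x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        L = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += values[i] * L
    return total

f = np.sin
nodes = [0.0, 0.3, 0.7, 1.0]           # x_0, ..., x_n (n = 3)
x = 0.5                                # evaluation point, not a node

error = f(x) - lagrange_eval(nodes, [f(t) for t in nodes], x)
psi = np.prod([x - t for t in nodes])
dd = divided_difference(nodes + [x], [f(t) for t in nodes] + [f(x)])

print(f"f(x) - P_n(x)            = {error:.12e}")
print(f"Psi_n(x) * f[x_0..x_n,x] = {psi * dd:.12e}")   # the two agree to rounding
```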

Compare the two error formulas:

$$f(x) - P_n(x) = \Psi_n(x)\, f[x_0, \dots, x_n, x] = \frac{\Psi_n(x)}{(n+1)!} f^{(n+1)}(c)$$

Then

$$f[x_0, \dots, x_n, x] = \frac{f^{(n+1)}(c)}{(n+1)!}$$

for some $c$ between the smallest and largest of the numbers in $\{x_0, \dots, x_n, x\}$.

To make this symmetric in its arguments, let $x = x_{n+1}$ and $m = n + 1$. Then

$$f[x_0, \dots, x_{m-1}, x_m] = \frac{f^{(m)}(c)}{m!} \qquad (2)$$

with $c$ an unknown number between the smallest and largest of the numbers in $\{x_0, \dots, x_m\}$. This was given in the preceding section, where divided differences were introduced.
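Relation (2) can also be checked numerically. In the sketch below, the choice $f(x) = e^x$ is convenient (and purely illustrative) because every derivative is again $e^x$, so the implied point $c$ can be recovered as $\ln(m!\, f[x_0, \dots, x_m])$ and checked to lie inside the node interval.

```python
from math import exp, log, factorial

def dd(xs, ys):
    """Recursive divided difference f[x_0, ..., x_m]."""
    if len(xs) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

xs = [0.0, 0.2, 0.5, 0.9, 1.0]          # x_0, ..., x_m with m = 4
m = len(xs) - 1
value = dd(xs, [exp(t) for t in xs])

c = log(factorial(m) * value)           # f^(m)(c) = e^c, so c = ln(m! * f[x_0,...,x_m])
print(f"f[x_0,...,x_m] = {value:.6f}")
print(f"implied c      = {c:.6f}  (lies in [{min(xs)}, {max(xs)}])")
```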