Lectures 9-10: Polynomial and piecewise polynomial interpolation

Let f be a function which is only known at the nodes x_1, x_2, ..., x_n, i.e., all we know about the function f are its values y_j = f(x_j), j = 1, 2, ..., n. For instance, we may have obtained these values through measurements and now would like to determine f(x) for other values of x.

Example 9.1: Assume that you need to evaluate cos(π/6), but the trigonometric function key on your calculator is broken and you do not have access to a computer. You do remember that cos(0) = 1, cos(π/4) = 1/√2, and cos(π/2) = 0. How can we use this information about the cosine function to determine an approximation of cos(π/6)?

Example 9.2: Let x represent time (in hours) and f(x) the amount of rain falling at time x. Assume that f(x) is measured once an hour at a weather station. We would like to determine the total amount of rain fallen during a 24-hour period, i.e., we would like to compute \int_0^{24} f(x) dx. How can we determine an estimate of this integral?

Example 9.3: Let f(x) represent the position of a car at time x and assume that we know f(x) at the times x_1, x_2, ..., x_n. How can we determine the velocity at time x? Can we also find out the acceleration?

Interpolation by polynomials or piecewise polynomials provides approaches to solving the problems in the above examples. We first discuss polynomial interpolation and then turn to interpolation by piecewise polynomials.

1 Polynomial interpolation

Given n distinct nodes x_1, x_2, ..., x_n and associated function values y_1, y_2, ..., y_n, determine the polynomial p(x) of degree at most n-1 such that

    p(x_j) = y_j,   j = 1, 2, ..., n.   (1)

The polynomial is said to interpolate the values y_j at the nodes x_j, and is referred to as the interpolating polynomial.

Example 9.4: Let n = 2. Then the interpolating polynomial is linear and is given by

    p(x) = y_1 + \frac{y_2 - y_1}{x_2 - x_1} (x - x_1).

Example 9.1 cont'd: We may seek to approximate cos(π/6) by first determining the polynomial p of degree at most 2, which interpolates cos(x) at x = 0, x = π/4, and x = π/2, and then evaluate p(π/6).

Before dwelling more on applications of interpolating polynomials, we have to establish that they exist and are unique. We also will consider several representations of the interpolating polynomial, starting with the power form

    p(x) = a_1 + a_2 x + a_3 x^2 + \cdots + a_n x^{n-1}.   (2)

This is a polynomial of degree at most n-1. We would like to determine the coefficients a_j, which multiply powers of x, so that the conditions (1) are satisfied.
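To make this concrete, the following Octave/MATLAB sketch (not part of the original notes) computes the power-form coefficients for the data of Example 9.1 by imposing the conditions (1), i.e., by solving the linear system written out in matrix form in (3) below; the use of vander, backslash, and polyval is an implementation choice.

    % Power-form interpolation of the data of Example 9.1.
    x = [0; pi/4; pi/2];      % distinct nodes x_1, ..., x_n
    y = [1; 1/sqrt(2); 0];    % data y_j = cos(x_j)
    n = numel(x);
    A = fliplr(vander(x));    % rows [1, x_j, x_j^2, ..., x_j^(n-1)], cf. (3)
    a = A \ y;                % coefficients a_1, ..., a_n of (2), ascending powers
    polyval(flipud(a), pi/6)  % p(pi/6) is about 0.8508; cos(pi/6) is about 0.8660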

This gives the equations

    a_1 + a_2 x_j + a_3 x_j^2 + \cdots + a_n x_j^{n-1} = y_j,   j = 1, 2, ..., n,

which can be expressed in matrix form,

    \begin{bmatrix}
    1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
    1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
    \vdots & \vdots & \vdots & & \vdots \\
    1 & x_n & x_n^2 & \cdots & x_n^{n-1}
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
    =
    \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.   (3)

The above matrix is known as a Vandermonde matrix. It is nonsingular when the nodes x_j are distinct. For instance, when n = 2, we have

    \det \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \end{bmatrix} = x_2 - x_1.

Below we will be able to conclude that Vandermonde matrices with distinct nodes are nonsingular without evaluating their determinants.

The representation of a polynomial p(x) in terms of the powers of x, as in (2), is convenient for many applications, because this representation easily can be integrated or differentiated. However, Vandermonde matrices generally are severely ill-conditioned; this is illustrated in Exercise 9.2. When the function values y_j are obtained by measurements, and therefore are contaminated by measurement errors, the ill-conditioning implies that the computed coefficients a_j may differ significantly from the coefficients that would have been obtained with error-free data. Moreover, round-off errors introduced during the solution of the linear system of equations (3) also can give rise to a large propagated error in the computed coefficients. We are therefore interested in investigating other polynomial bases than the power basis.

The Newton basis for polynomials of degree n-1 is given by

    1,   x - x_1,   (x - x_1)(x - x_2),   ...,   \prod_{j=1}^{n-1} (x - x_j).

We seek to determine the interpolating polynomial in the form

    p(x) = a_1 + a_2 (x - x_1) + a_3 (x - x_1)(x - x_2) + \cdots + a_n \prod_{j=1}^{n-1} (x - x_j).   (4)

The interpolation conditions (1) give the linear system of equations

    \begin{bmatrix}
    1 & & & & \\
    1 & x_2 - x_1 & & & \\
    1 & x_3 - x_1 & (x_3 - x_1)(x_3 - x_2) & & \\
    \vdots & \vdots & \vdots & \ddots & \\
    1 & x_n - x_1 & (x_n - x_1)(x_n - x_2) & \cdots & \prod_{j=1}^{n-1} (x_n - x_j)
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
    =
    \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}

with a lower triangular matrix. We refer to these equations as the Newton equations. The matrices in the Newton equations have smaller condition numbers than the corresponding Vandermonde matrices; cf. Exercise 9.3. The Newton form (4) of the interpolation polynomial is quite easy to use for hand computations. For instance, Example 9.4 uses this representation.
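As an illustration (not part of the original notes), the Newton coefficients can be computed by forward substitution on the lower triangular system above, and the Newton form (4) can be evaluated by nested multiplication. A minimal Octave/MATLAB sketch, e.g., saved as newton_coeffs.m and newton_eval.m:

    function a = newton_coeffs(x, y)
      % Coefficients a_1, ..., a_n of the Newton form (4) by forward substitution.
      x = x(:);  y = y(:);
      n = numel(x);
      a = zeros(n, 1);
      for k = 1:n
        % k-th row of the Newton matrix: [1, x_k-x_1, (x_k-x_1)(x_k-x_2), ...]
        row = cumprod([1; x(k) - x(1:k-1)])';
        a(k) = (y(k) - row(1:k-1) * a(1:k-1)) / row(k);
      end
    end

    function pt = newton_eval(a, x, t)
      % Evaluate p(t) = a_1 + (t-x_1)(a_2 + (t-x_2)(a_3 + ...)) at the points t.
      n = numel(a);
      pt = a(n) * ones(size(t));
      for k = n-1:-1:1
        pt = a(k) + (t - x(k)) .* pt;
      end
    end

For the data of Example 9.1, newton_coeffs([0; pi/4; pi/2], [1; 1/sqrt(2); 0]) followed by newton_eval again yields p(π/6) ≈ 0.8508, since all representations describe the same interpolating polynomial.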

Example 9.5: Consider the situation when we have determined the interpolating polynomial of Example 9.4 and would like to add another data point {x_3, y_3}. Thus, we would like to compute the polynomial of degree at most 2 that interpolates the data points {x_j, y_j}, j = 1, 2, 3. The desired interpolation polynomial is obtained by adding a quadratic term to the polynomial of Example 9.4,

    p(x) = y_1 + \frac{y_2 - y_1}{x_2 - x_1} (x - x_1) + a_3 (x - x_1)(x - x_2).   (5)

This polynomial satisfies the interpolation conditions p(x_j) = y_j for j = 1, 2 for any coefficient a_3. This coefficient is determined by the interpolation condition p(x_3) = y_3. Let a_2 = \frac{y_2 - y_1}{x_2 - x_1}. Substituting x = x_3 into (5) then yields the expression

    a_3 = \frac{y_3 - y_1 - a_2 (x_3 - x_1)}{(x_3 - x_1)(x_3 - x_2)}.

These computations amount to one step of forward substitution in the solution of the Newton equations.

The determinant of a lower triangular matrix is the product of its diagonal entries. These are all nonvanishing in the Newton equation matrix when the nodes x_j are distinct. Hence the matrix is nonsingular when the nodes are distinct. This shows that the coefficients in the Newton form are uniquely determined by the interpolation conditions (1). We conclude that the polynomial interpolation problem has a unique solution when the polynomial is expressed in Newton form.

Given a polynomial in Newton form, we may express it in power form. The transformation to power form is unique. Thus, the interpolation conditions (1) also determine the interpolating polynomial in power form uniquely. It follows that the Vandermonde matrix in (3) is nonsingular when the nodes are distinct. (If the matrix were singular, then the interpolating polynomial either would not be unique or would not exist.)

The last polynomial basis we consider consists of the Lagrange polynomials

    l_k(x) = \prod_{j ≠ k} \frac{x - x_j}{x_k - x_j},   k = 1, 2, ..., n.

These polynomials are of degree n-1 and satisfy

    l_k(x_j) = \begin{cases} 1, & k = j, \\ 0, & k ≠ j. \end{cases}   (6)

Consider the interpolation polynomial expressed in Lagrange form,

    p(x) = \sum_{k=1}^{n} a_k l_k(x).   (7)

The interpolation conditions (1) become

    \begin{bmatrix}
    l_1(x_1) & l_2(x_1) & \cdots & l_n(x_1) \\
    l_1(x_2) & l_2(x_2) & \cdots & l_n(x_2) \\
    \vdots & \vdots & & \vdots \\
    l_1(x_n) & l_2(x_n) & \cdots & l_n(x_n)
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
    =
    \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.   (8)

Using the property (6), we see that the matrix reduces to the identity matrix, and it follows that the coefficients a_k in (7) are given by

    a_k = y_k,   k = 1, 2, ..., n.

Thus, no linear system of equations has to be solved in order to determine the Lagrange form of the interpolation polynomial. The only drawback of the representation (7) is that it is somewhat cumbersome to evaluate. Recall that the identity matrix has condition number 1, which is minimal.

Let the nodes x_j be distinct and live in the real interval [a, b], and let f(x) be a function which is n times differentiable in [a, b]. Let f^{(n)}(x) denote the nth derivative. Assume that y_j = f(x_j), j = 1, 2, ..., n. Then the difference f(x) - p(x) can be expressed as

    f(x) - p(x) = \prod_{j=1}^{n} (x - x_j) \frac{f^{(n)}(ξ)}{n!},   (9)

where ξ is a function of x_1, x_2, ..., x_n and x. The exact value of ξ is difficult to pin down; however, it is known that ξ is in the interval [a, b] when x and x_1, x_2, ..., x_n are there. One can derive the expression by using a variant of the mean-value theorem from Calculus. We will not prove the error formula (9) in this course. Instead, we will use the formula to learn about some properties of the polynomial interpolation problem.

Usually, the nth derivative of f is not available and only the product over the interpolation points can be studied easily. It is remarkable how much useful information can be gained by investigating this product.

First we note that the interpolation error max_{a ≤ x ≤ b} |f(x) - p(x)| is likely to be larger when the interval [a, b] is long than when it is short. We can see this by doubling the size of the interval, i.e., we multiply a, b, x, and the x_j by 2. Then the product in the right-hand side of (9) is replaced by

    \prod_{j=1}^{n} (2x - 2x_j) = 2^n \prod_{j=1}^{n} (x - x_j),

which shows that the interpolation error might be multiplied by 2^n when doubling the size of the interval. Actual computations show that, indeed, the error typically increases with the length of the interval when other relevant quantities are not changed.

The error formula (9) also raises the question how the interpolation points should be distributed in the interval [a, b] in order to give a small error max_{a ≤ x ≤ b} |f(x) - p(x)|. For instance, we may want to choose interpolation points x_j that solve the minimization problem

    \min_{x_j} \max_{a ≤ x ≤ b} \prod_{j=1}^{n} |x - x_j|.   (10)

This fairly complicated problem turns out to have a simple solution! Let for the moment a = -1 and b = 1. Then the solution is given by

    x_j = \cos\left( \frac{2j - 1}{2n} π \right),   j = 1, 2, ..., n.   (11)

These points are the projection of n equidistant points on the upper half of the unit circle onto the x-axis; see Figure 1. For intervals with endpoints a < b, the solution of (10) is given by

    x_j = \frac{1}{2}(b + a) + \frac{1}{2}(b - a) \cos\left( \frac{2j - 1}{2n} π \right),   j = 1, 2, ..., n.   (12)

Finally, the presence of a high-order derivative in the error formula (9) suggests that interpolation polynomials are likely to give small approximation errors when the function has many continuous derivatives that are not very large in magnitude. In the next subsection, we will discuss a modification of polynomial interpolation that can give better approximations when the function to be approximated does not have many (or any) continuous derivatives.
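A small Octave/MATLAB sketch (not part of the original notes) of the two ingredients just discussed: the Chebyshev points (12) and the evaluation of the interpolating polynomial in Lagrange form (7). The function names are illustrative; they could be saved as cheb_points.m and lagrange_eval.m.

    function x = cheb_points(n, a, b)
      % The n points (12) that solve the minimization problem (10) on [a, b].
      j = (1:n)';
      x = (b + a)/2 + (b - a)/2 * cos((2*j - 1)*pi/(2*n));
    end

    function pt = lagrange_eval(x, y, t)
      % Evaluate p(t) = sum_k y_k l_k(t), cf. (7), at the points t.
      n = numel(x);
      pt = zeros(size(t));
      for k = 1:n
        lk = ones(size(t));
        for j = [1:k-1, k+1:n]
          lk = lk .* (t - x(j)) / (x(k) - x(j));   % build l_k(t)
        end
        pt = pt + y(k) * lk;
      end
    end

The double loop reflects the remark above that the Lagrange form is somewhat cumbersome to evaluate: every evaluation point requires on the order of n^2 operations.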

Figure 1: Upper half of the unit circle with 8 equidistant points marked in red, and their projection onto the x-axis marked in magenta. The latter are, from left to right, the points x_1, x_2, ..., x_8 defined by (11) for n = 8.

    grind size    pressure (bar)
        1             11
        2             15
        3             14.5
        4             16
        5             18

Table 1: Grind size (left-hand side column) versus pressure (right-hand side column).

Exercises

Exercise 9.1: Solve the interpolation problem of Example 9.1.

Exercise 9.2: Let V_n be an n × n Vandermonde matrix determined by n equidistant nodes in the interval [-1, 1]. How quickly does the condition number of V_n grow with n? Linearly, quadratically, cubically, ..., exponentially? Use the MATLAB or Octave functions vander and cond. Determine the growth experimentally; a possible starting point is sketched after Exercise 9.4 below. Describe how you designed the experiments. Show your MATLAB or Octave codes and relevant input and output.

Exercise 9.3: Let L_n be the n × n lower triangular matrix determined by Newton interpolation at n equidistant nodes in the interval [-1, 1]. How do the condition numbers of L_n and V_n compare? How quickly does the condition number of L_n grow with n? Determine the growth experimentally. Show your MATLAB or Octave codes and relevant input and output.

Exercise 9.4: The pressure in a fully automatic espresso machine varies with the size of the coffee grinds. The pressure is measured in bar and the grind size in units, where 1 signifies a coarse grind and 5 a fine one. The relation is tabulated in Table 1. (a) Determine by linear interpolation which pressure grind size 1 gives rise to. Which data should be used? (b) Determine by quadratic interpolation which pressure grind size 1 gives rise to. Which data should be used? Are the results for linear and quadratic interpolation close?
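A possible starting point (not part of the original notes) for the experiment in Exercise 9.2; the range of n used here is an arbitrary choice.

    for n = 5:5:30
      nodes = linspace(-1, 1, n)';       % n equidistant nodes in [-1, 1]
      kappa = cond(vander(nodes));       % condition number of V_n
      fprintf('n = %2d   cond(V_n) = %.3e\n', n, kappa);
    end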

    n       2    3    4    5     6
    Γ(n)    1    2    6    24    120

Table 2: n and Γ(n).

Exercise 9.5: The Γ-function is defined by

    Γ(x) = \int_0^\infty t^{x-1} e^{-t} dt.

Direct evaluation of the integral yields Γ(1) = 1, and integration by parts shows that Γ(x + 1) = xΓ(x). In particular, for integer values n > 1, we obtain that Γ(n + 1) = nΓ(n) and therefore Γ(n + 1) = n(n - 1)(n - 2) \cdots 1. We would like to determine an estimate of Γ(4.5) using the tabulated values of Table 2.

(a) Determine approximations of Γ(4.5) by interpolation at 3 and 5 nodes. Which nodes should be used? Determine the actual value of Γ(4.5). Are the computed approximations close? Which one is more accurate?

(b) Also investigate the following approach. Instead of interpolating Γ(x), interpolate ln(Γ(x)) by polynomials at 3 and 5 nodes. Evaluate the computed polynomial at 4.5 and exponentiate. How do the computed approximations in (a) and (b) compare? Explain!

Exercise 9.6: (a) Interpolate the function f(x) = e^x at 20 equidistant nodes in [-1, 1]. This gives an interpolation polynomial p of degree at most 19. Measure the approximation error f(x) - p(x) by measuring the difference at many equidistant nodes t_j, j = 1, 2, ..., N, in [-1, 1]. We refer to the quantity max_{1 ≤ j ≤ N} |f(t_j) - p(t_j)| as the error. Compute the error.

(b) Repeat the above computations with the function f(x) = e^x/(1 + 25x^2). Plot p(t_j) - f(t_j), j = 1, 2, ..., N. Where in the interval [-1, 1] is the error the largest?

(c) Repeat the computations in (a) using Chebyshev points (11) as interpolation points. How do the errors compare for equidistant and Chebyshev points? Plot the error.

(d) Repeat the computations in (b) using Chebyshev points (11) as interpolation points. How do the errors compare for equidistant and Chebyshev points?

Exercise 9.7: Compute an approximation of the integral

    \int_0^1 x \exp(x^2) dx

by first interpolating the integrand by a 3rd degree polynomial and then integrating the polynomial. Which representation of the polynomial is most convenient to use? Specify which interpolation points you use.

Exercise 9.8: The function f(t) gives the position of a ball at time t. Table 3 displays a few values of f and t. Interpolate f by a quadratic polynomial and estimate the velocity and acceleration of the ball at time t = 1.

Table 3: t and f(t).

2 Interpolation by piecewise polynomials

In the above subsection, we sought to determine one polynomial that approximates a function on a specified interval. This works well if either one of the following conditions holds:

- the polynomial required to achieve desired accuracy is of fairly low degree;
- the function has a few continuous derivatives and interpolation can be carried out at the Chebyshev points (11) or (12).

A quite natural and different approach to approximate a function on an interval is to first split the interval into subintervals and then approximate the function by a polynomial of fairly low degree on each subinterval.

Example 9.6: We would like to approximate a function on the interval [-1, 1]. Let the function values y_j = f(x_j) be available, where x_1 = -1, x_2 = 0, x_3 = 1, and y_1 = y_3 = 0, y_2 = 1. It is easy to approximate f(x) by a linear function on each subinterval [x_1, x_2] and [x_2, x_3]. We obtain

    p(x) = y_1 + \frac{y_2 - y_1}{x_2 - x_1} (x - x_1) = x + 1,   -1 ≤ x ≤ 0,

    p(x) = y_2 + \frac{y_3 - y_2}{x_3 - x_2} (x - x_2) = 1 - x,   0 ≤ x ≤ 1.

The MATLAB command plot([-1,0,1],[0,1,0]) gives the continuous graph of Figure 2. This is a piecewise linear approximation of the unknown function f(x). If f(x) indeed is a piecewise linear function with a kink at x = 0, then the computed approximation is appropriate. On the other hand, if f(x) displays the trajectory of a baseball, then the smoother function p(x) = 1 - x^2, which is depicted by the dashed curve, may be a more suitable approximation of f(x), since baseball trajectories do not exhibit kinks, even if some players occasionally may wish they did.

Figure 2: Example 9.6: Quadratic polynomial p(x) = 1 - x^2 (red dashed graph) and piecewise linear approximation (continuous blue graph).

Piecewise linear functions give better approximations of a smooth function if more interpolation points {x_j, y_j} are used. We can increase the accuracy of the approximation by reducing the lengths of the subintervals. We conclude that piecewise linear approximations of functions are easy to compute. These approximations display kinks. Many interpolation points may be required to determine a piecewise linear approximant of desired accuracy. The benefit of piecewise linear approximation is the ease of the computations on each subinterval.

There are several ways to modify piecewise linear functions to give them a more pleasing look. Here we will discuss how to use derivative information to obtain smoother approximants. A different approach, which uses Bézier curves, is described in the next lecture.
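For illustration (not part of the original notes), the piecewise linear interpolant of Example 9.6 can also be evaluated with the built-in Octave/MATLAB function interp1, whose default method is piecewise linear; the sketch below plots it together with the smoother quadratic 1 - x^2.

    x = [-1, 0, 1];                     % nodes of Example 9.6
    y = [ 0, 1, 0];                     % function values
    t = linspace(-1, 1, 201);           % plotting grid
    plot(t, interp1(x, y, t), 'b-', t, 1 - t.^2, 'r--');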

Assume that not only the function values y_j = f(x_j), but also the derivative values y'_j = f'(x_j), are available at the nodes a ≤ x_1 < x_2 < ... < x_n ≤ b. We can then on each subinterval, say [x_j, x_{j+1}], approximate f(x) by a polynomial that interpolates both f(x) and f'(x) at the endpoints. Thus, we would like to determine a polynomial p_j(x) such that

    p_j(x_j) = y_j,   p_j(x_{j+1}) = y_{j+1},   p'_j(x_j) = y'_j,   p'_j(x_{j+1}) = y'_{j+1}.   (13)

These are 4 conditions, and we seek to determine a polynomial of degree 3,

    p_j(x) = a_1 + a_2 x + a_3 x^2 + a_4 x^3,   (14)

which satisfies these conditions. Our reason for choosing a polynomial of degree 3 is that it has 4 coefficients, one for each condition. Substituting the polynomial (14) into the conditions (13) gives the linear system of equations

    \begin{bmatrix}
    1 & x_j & x_j^2 & x_j^3 \\
    1 & x_{j+1} & x_{j+1}^2 & x_{j+1}^3 \\
    0 & 1 & 2 x_j & 3 x_j^2 \\
    0 & 1 & 2 x_{j+1} & 3 x_{j+1}^2
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}
    =
    \begin{bmatrix} y_j \\ y_{j+1} \\ y'_j \\ y'_{j+1} \end{bmatrix}.   (15)

The last two rows impose interpolation of the derivative values. The matrix can be shown to be nonsingular when x_j ≠ x_{j+1}.

The polynomials p_1(x), p_2(x), ..., p_{n-1}(x) provide a piecewise cubic polynomial approximation of f(x) on the whole interval [a, b]. They can be computed independently and yield an approximation with a continuous derivative on [a, b]. The latter can be seen as follows: The polynomial p_j is defined and differentiable on the interval [x_j, x_{j+1}] for j = 1, 2, ..., n-1. What remains to be established is that our approximant also has a continuous derivative at the interpolation points. This, however, follows from the interpolation conditions (13). We have that

    lim_{x ↗ x_{j+1}} p'_j(x) = p'_j(x_{j+1}) = y'_{j+1},    lim_{x ↘ x_{j+1}} p'_{j+1}(x) = p'_{j+1}(x_{j+1}) = y'_{j+1}.

The existence of the limits follows from the continuity of each polynomial and its derivative on the interval where it is defined, and the other equalities are the interpolation conditions. Thus, p'_j(x_{j+1}) = p'_{j+1}(x_{j+1}), which shows the continuity of the derivative at x_{j+1} of our piecewise cubic polynomial approximant.
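As a small illustration (not part of the original notes), the following Octave/MATLAB sketch determines the cubic (14) on a single subinterval by solving the 4 × 4 system (15); the endpoint data are made up for this example.

    xj = 0;   xj1 = 1;                % subinterval endpoints x_j, x_j+1
    yj = 0;   yj1 = 1;                % function values y_j, y_j+1
    dj = 1;   dj1 = 0;                % derivative values y'_j, y'_j+1
    M = [1  xj    xj^2     xj^3;
         1  xj1   xj1^2    xj1^3;
         0  1     2*xj     3*xj^2;
         0  1     2*xj1    3*xj1^2];  % matrix of (15)
    a = M \ [yj; yj1; dj; dj1];       % coefficients a_1, ..., a_4 of (14)
    t = linspace(xj, xj1, 101);
    pt = a(1) + a(2)*t + a(3)*t.^2 + a(4)*t.^3;   % p_j evaluated on [x_j, x_j+1]
    plot(t, pt);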

The use of piecewise cubic polynomials as described gives attractive approximations. However, the approach discussed requires derivative information to be available. When no derivative information is explicitly known, modifications of the scheme outlined can be used. A simple modification is to use estimates of the derivative values of the function f(x) at the nodes; see Exercise 9.10. Another possibility is to impose the conditions

    p_j(x_j) = y_j,   p_j(x_{j+1}) = y_{j+1},   p'_j(x_j) = p'_{j-1}(x_j),   p''_j(x_j) = p''_{j-1}(x_j),

for j = 2, 3, ..., n-1. Thus, at the subinterval boundaries x_2, x_3, ..., x_{n-1} we require the piecewise cubic polynomial to have continuous first and second derivatives. However, these derivatives are not required to take on prescribed values.

The piecewise cubic polynomials obtained in this manner are known as splines. They are popular design tools in industry. Their determination requires the solution of a linear system of equations, which is somewhat complicated to derive. We will therefore omit its derivation. Also, extra conditions at the interval endpoints have to be imposed in order to make the linear system solvable with a unique solution.

Exercise 9.9: Consider the function in Example 9.6. Assume that we also know the derivative values y'_1 = 2, y'_2 = 0, and y'_3 = -2. Determine a piecewise polynomial approximation on [-1, 1] by using the interpolation conditions (13). Plot the resulting function.

Exercise 9.10: Assume the derivative values in the above exercise are not available. How can one determine estimates of these values? Use these estimates in the interpolation conditions (13) and compute a piecewise cubic approximation. How does it compare with the one from Exercise 9.9 and with the piecewise linear approximation of Example 9.6?

Exercise 9.11: Compute a spline approximant, e.g., by using the function spline in MATLAB or Octave.
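A possible starting point (not part of the original notes) for Exercise 9.11, using made-up data; the built-in spline function evaluates a cubic spline interpolant through the given points.

    x = 0:5;                          % nodes (illustrative data only)
    y = [0, 1, 0, 2, 1, 3];           % corresponding function values
    t = linspace(0, 5, 301);          % evaluation grid
    s = spline(x, y, t);              % cubic spline interpolant evaluated at t
    plot(x, y, 'ko', t, s, 'b-');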