Review I: Interpolation

Varun Shankar

January, 2016

1 Introduction

In this document, we review interpolation by polynomials. Unlike many reviews, we will not stop there: we will discuss how to differentiate and integrate these interpolants, allowing us to approximate derivatives and integrals of unknown functions.

2 Interpolation versus Approximation

Let $X = \{x_k\}_{k=0}^{N}$ be some set of points (or nodes) at which we are provided samples of some function $f(x)$. Thus, we are given $y_k = f(x_k)$. The goal of most approximation techniques is to construct a proxy for $f(x)$ using the supplied $y_k$ values. If this proxy is reasonable, we can use it to give us more information about the function $f$, despite having been given only discrete samples of the function $f$. Such a proxy is in general called an approximant. This technique of generating proxies is called function approximation.

An interpolant is an approximant that exactly agrees with the $y_k$ values at the $x_k$ nodes. This is not always reasonable, especially if you are supplied with some spurious $y_k$ values. Let us formalize things a bit further. Let $I_X f$ be the interpolant to the function $f$ on the set of nodes $X$. Then, at $X = \{x_k\}_{k=0}^{N}$, we require that

$$I_X f = f|_X, \qquad (1)$$
$$\implies I_X f(x_k) = f(x_k) = y_k, \quad k = 0, \ldots, N. \qquad (2)$$

This notation is quite abstract, but therein lies its utility. This formula will make more sense when we discuss specific interpolants.
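To make the abstract condition (2) concrete, here is a minimal sketch, assuming NumPy (with `np.polyfit` standing in for a generic interpolant constructor; the nodes and the function are arbitrary illustrative choices):

```python
# A minimal sketch of the interpolation condition (2): a degree-N polynomial
# fit through N+1 samples reproduces the data exactly at the nodes.
import numpy as np

N = 5
x = np.linspace(0, 1, N + 1)     # nodes X = {x_k}, k = 0, ..., N
y = np.cos(2 * np.pi * x)        # samples y_k = f(x_k)

# Fitting a degree-N polynomial through N+1 points yields an interpolant I_X f.
p = np.poly1d(np.polyfit(x, y, N))

# The defining property: I_X f(x_k) = y_k at every node.
assert np.allclose(p(x), y)
```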

A note on indices

With some interpolants, it is common to use indices $k = 0, \ldots, N$. On the other hand, for implementation purposes, it is often more useful to think of the indices as $k = 1, \ldots, N$. We will freely switch between the two representations. Note that certain index representations make more sense for certain programming languages: C/C++ arrays are indexed from 0, Matlab arrays from 1, and Fortran lets you start with any index you want.

3 Polynomial Interpolation

Let us focus on the most powerful interpolation tool in 1D: polynomial interpolation. This is a bold statement; everyone has his/her own favorite interpolation technique (mine are RBFs). This issue is further confounded by some myths surrounding polynomial interpolation, which we will now attempt to dispel.

The goal is to find a polynomial $p$ that interpolates $f$ at the $N+1$ points $x_k$. In other words, we seek an interpolant of the form

$$p(x) = \sum_{k=0}^{N} a_k x^k, \qquad (3)$$
$$p(x_k) = y_k, \qquad (4)$$

where the $a_k$ are the unknown interpolation coefficients. To find these coefficients, we simply need to enforce the interpolation conditions $p(x_k) = y_k$. This gives rise to a linear system of the form

$$\underbrace{\begin{bmatrix}
1 & x_0 & x_0^2 & x_0^3 & \cdots & x_0^N \\
1 & x_1 & x_1^2 & x_1^3 & \cdots & x_1^N \\
1 & x_2 & x_2^2 & x_2^3 & \cdots & x_2^N \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_N & x_N^2 & x_N^3 & \cdots & x_N^N
\end{bmatrix}}_{V}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix}
=
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} \qquad (5)$$

The above matrix $V$ is called the Vandermonde matrix. To find the unknown $a_k$ values, we need to solve this dense linear system. The obvious choice is some variant of Gaussian elimination (some form of LU decomposition). The cost of doing so is $O(N^3)$.

Of course, as you should already know, the above Vandermonde matrix is typically disastrously ill-conditioned for most common choices of the nodes $x_k$. In fact, the condition number grows like $\kappa(V) \sim e^N$ as $N \to \infty$. This is problematic! At $N \approx 37$, this puts $\kappa(V) \approx 10^{16}$. Since computers can only store 16 digits of precision when using double-precision arithmetic, this means our answer will not even be correct to one digit of precision. The only way to really compute a polynomial interpolant for large $N$ is to somehow avoid construction of the Vandermonde matrix $V$ altogether. However, for small $N$ (say 2 to 6), the monomial form of the polynomial interpolant is fine.
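To see this ill-conditioning in practice, the following sketch, assuming NumPy, builds $V$ with `np.vander`, solves for the coefficients, and prints the condition number as $N$ grows (`monomial_coeffs` is a name chosen here for illustration):

```python
# A sketch of monomial interpolation via the Vandermonde system V a = y,
# and of the roughly exponential growth of cond(V) with N.
import numpy as np

def monomial_coeffs(x, y):
    """Solve V a = y for the monomial coefficients a_k of eq. (3)."""
    V = np.vander(x, increasing=True)   # columns: 1, x, x^2, ..., x^N
    return np.linalg.solve(V, y)

# Safe for small N:
x = np.linspace(-1, 1, 6)
print(monomial_coeffs(x, np.exp(x))[:3])   # a_0, a_1, a_2 for f(x) = e^x

# But hopeless for large N:
for N in (5, 10, 20, 40):
    x = np.linspace(-1, 1, N + 1)
    print(N, np.linalg.cond(np.vander(x, increasing=True)))
```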

Given a fixed node set $X$ in 1D and some fixed set of data, there is a unique polynomial interpolant through that data. That said, it is possible to write the same interpolant in many different forms. There are two important forms: the Lagrange form and the Newton form. We will focus on the Lagrange form and a form derived from the Lagrange form.

The Lagrange form of the polynomial interpolant is given by

$$p(x) = \sum_{k=0}^{N} y_k \ell_k(x), \qquad (6)$$
$$\ell_k(x) = \frac{\displaystyle\prod_{j=0,\, j \neq k}^{N} (x - x_j)}{\displaystyle\prod_{j=0,\, j \neq k}^{N} (x_k - x_j)}. \qquad (7)$$

The Lagrange basis $\ell_k$ has the so-called Kronecker delta property

$$\ell_k(x_j) = \begin{cases} 1 & k = j \\ 0 & k \neq j \end{cases}$$

Notice that we never form a matrix in this scenario. The basis is computed from the node locations, and the coefficients are the function samples $y_k$ themselves.

At this point, most texts remark that the Lagrange form

1. is numerically unstable;
2. is expensive to evaluate ($O(N^2)$ per evaluation);
3. has to be recomputed when new nodes and samples are added;
4. is worse than the Newton form.

These are all true statements! However, it would be premature to abandon the Lagrange basis at this point.
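A direct implementation of (6)-(7) makes points 2 and 3 above visible: every evaluation redoes the $O(N^2)$ product, and nothing is precomputed. This is a sketch for illustration (the name `lagrange_eval` is ours), not a recommendation for practice:

```python
# Direct evaluation of the Lagrange form (6)-(7): O(N^2) work per point,
# but no linear system to solve: the nodes and samples define everything.
def lagrange_eval(x_nodes, y, x):
    """Evaluate p(x) = sum_k y_k * ell_k(x) by forming each ell_k directly."""
    p = 0.0
    for k in range(len(x_nodes)):
        ell_k = 1.0
        for j in range(len(x_nodes)):
            if j != k:                  # products over j != k in eq. (7)
                ell_k *= (x - x_nodes[j]) / (x_nodes[k] - x_nodes[j])
        p += y[k] * ell_k
    return p

# Quadratic through (0, 1), (1, 3), (2, 7), i.e. p(x) = x^2 + x + 1:
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 1.5))   # 4.75
```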

We will now improve the Lagrange interpolation formula to ameliorate the above difficulties.

3.1 Barycentric Lagrange Interpolation

To define the improved Lagrange interpolation formula, first define

$$\ell(x) = (x - x_0)(x - x_1) \cdots (x - x_N), \qquad (8)$$

and the barycentric weights by

$$w_k = \frac{1}{\prod_{j \neq k} (x_k - x_j)}, \quad k = 0, \ldots, N. \qquad (9)$$

Then $\ell_k(x) = \ell(x) \dfrac{w_k}{x - x_k}$, and the interpolant becomes

$$p(x) = \ell(x) \sum_{k=0}^{N} \frac{w_k}{x - x_k} y_k. \qquad (10)$$

This is the first form of the barycentric formula. It is easy to show that adding a new node $x_{N+1}$ now costs only $O(N)$ flops. This formula can be made even more stable, yielding

$$p(x) = \frac{\displaystyle\sum_{k=0}^{N} \frac{w_k}{x - x_k} y_k}{\displaystyle\sum_{k=0}^{N} \frac{w_k}{x - x_k}}, \qquad (11)$$

which is the second form of the barycentric formula. This form is used in practice in most tools for polynomial interpolation.

3.2 Weights for specific point distributions

If the nodes are equispaced on any interval, the weights can be easily computed as $w_k = (-1)^k \binom{N}{k}$. However, recall that polynomial interpolation on equispaced nodes is highly unstable, susceptible to the Runge phenomenon; this manifests as exponentially growing weights $w_k$. This, to many, was the final nail in the coffin for polynomial interpolation.

However, that issue too is surmountable if one is allowed to change the node set $X$. If $X$ is chosen to be the set of Chebyshev zeros, the weights are given by $w_k = (-1)^k \sin\left(\frac{(2k+1)\pi}{2N+2}\right)$. This set of points does not include the endpoints of the interval. If $X$ is chosen to be the set of Chebyshev extrema, the weights are given by $w_k = (-1)^k \delta_k$, where $\delta_k = \frac{1}{2}$ if $k = 0$ or $k = N$, and $\delta_k = 1$ otherwise. These points include the endpoints of the interval.

Use barycentric interpolation with any Chebyshev nodes, and suddenly polynomial interpolation is stable for thousands of points. Contrast that with the degree-37 polynomial interpolant allowed by Vandermonde matrices! Confusingly, when people say "Chebyshev nodes", they can mean either the zeros or the extrema.
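Putting the pieces together, here is a minimal sketch, assuming NumPy, of the second barycentric form (11) on Chebyshev extrema with the closed-form weights above (`barycentric_eval` is a name chosen for illustration; the test function is Runge's example):

```python
# The second barycentric form (11) on Chebyshev extrema: stable for large N.
import numpy as np

def barycentric_eval(x_nodes, y, w, x):
    """Evaluate p(x) via eq. (11); return y_k directly if x hits a node."""
    d = x - x_nodes
    hit = np.nonzero(d == 0)[0]
    if hit.size > 0:
        return y[hit[0]]
    t = w / d
    return np.sum(t * y) / np.sum(t)

N = 1000
k = np.arange(N + 1)
x_nodes = np.cos(k * np.pi / N)          # Chebyshev extrema on [-1, 1]
w = (-1.0) ** k                          # w_k = (-1)^k delta_k
w[0] *= 0.5                              # delta_k = 1/2 at the endpoints
w[-1] *= 0.5
y = 1.0 / (1.0 + 25.0 * x_nodes**2)      # Runge's function
print(barycentric_eval(x_nodes, y, w, 0.3))   # close to 1/3.25
```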

4 Differentiating and integrating polynomial interpolants

Since the monomial form is stable for small values of $N$, and the barycentric form for all values of $N$, we will use a simple rule of thumb: use monomials when it is safe to do so, and the barycentric form when necessary. However, regardless of how we compute the interpolant, recall that there is a unique polynomial interpolant to the data. For convenience, we will work with the monomial or Lagrange forms, rather than the barycentric form.

Consider again the polynomial interpolant to the data $y_k$, $k = 0, \ldots, N$, given by

$$p(x) = \sum_{k=0}^{N} a_k x^k. \qquad (12)$$

Now, assuming the data $y_k$ is sampled from some function $f(x)$, it is natural to attempt an approximation to the derivative of $f$ using the derivative of $p$. Say we want $\frac{\partial f}{\partial x}$. This can be approximated by

$$\frac{\partial f}{\partial x} \approx \frac{\partial p}{\partial x} = \sum_{k=0}^{N} a_k \frac{\partial}{\partial x} x^k \qquad (13)$$
$$= \sum_{k=1}^{N} a_k k x^{k-1}. \qquad (14)$$

It is also possible (though a little more painful) to differentiate the Lagrange basis. Why would one do this instead of the approach above? Recall: when we use the Lagrange form of the interpolating polynomial, there is no linear system to solve, and the interpolation coefficients are the function samples $y_k$. Thus, we have

$$\frac{\partial f}{\partial x} \approx \frac{\partial p}{\partial x} = \sum_{k=0}^{N} y_k \frac{\partial}{\partial x} \ell_k(x). \qquad (15)$$

We will revisit the idea of differentiating the Lagrange form shortly.

In general, the above procedure applies for any linear operator $\mathcal{L}$ in place of the first derivative; for example, $\mathcal{L} = \frac{\partial^2}{\partial x^2} + 7$. Note that integrals are also linear operators. Consider approximating the definite integral of a function over the interval $[x_i, x]$. We have

$$\int_{x_i}^{x} f \, dx \approx \int_{x_i}^{x} p \, dx = \sum_{k=0}^{N} a_k \int_{x_i}^{x} x^k \, dx \qquad (16)$$
$$= \sum_{k=0}^{N} a_k \left[ \frac{x^{k+1}}{k+1} - \frac{x_i^{k+1}}{k+1} \right]. \qquad (17)$$

Alternatively, one could use the Lagrange form again, giving us

$$\int_{x_i}^{x} f \, dx \approx \sum_{k=0}^{N} y_k \int_{x_i}^{x} \ell_k(x) \, dx, \qquad (18)$$

with this approach doing away with the need for computing interpolation coefficients.

Step back and think for a moment on the power of this approach. The insights here will be used over and over again for the rest of this course.
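As a quick sketch of eqs. (14) and (17), assuming NumPy (the nodes, the function $\sin x$, and the integration limits are arbitrary illustrative choices):

```python
# Differentiating and integrating a monomial interpolant: once the
# coefficients a_k are known, both operations act directly on them.
import numpy as np

x = np.linspace(0, 1, 6)                 # small N, so monomials are safe
y = np.sin(x)
a = np.linalg.solve(np.vander(x, increasing=True), y)
k = np.arange(len(a))

# Eq. (14): p'(x) = sum_{k=1}^{N} a_k k x^{k-1}
da = (a * k)[1:]                         # coefficients of p', increasing order
print(np.polyval(da[::-1], 0.5), np.cos(0.5))       # approximates f'(0.5)

# Eq. (17) with x_i = 0 and x = 1: the integral of p over [0, 1]
integral = np.sum(a / (k + 1) * (1.0 ** (k + 1) - 0.0 ** (k + 1)))
print(integral, 1.0 - np.cos(1.0))       # approximates the integral of sin
```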

4.1 Error Analysis

Let $f$ be a function in $C^{N+1}[a, b]$, and let $p$ be the polynomial interpolant to that function. Then, for each $x \in [a, b]$, there exists a point $\xi_x \in [a, b]$ such that

$$f(x) - p(x) = \frac{f^{(N+1)}(\xi_x)}{(N+1)!} \prod_{k=0}^{N} (x - x_k). \qquad (19)$$

This formula applies to points not in $X$, the set of interpolation nodes. At the interpolation nodes, the error is exactly zero (which is the definition of interpolation). At Chebyshev zeros, we can show that the product term is minimized. We can then obtain a sharper error bound (indeed, the best possible):

$$|f(x) - p(x)| \leq \frac{1}{2^N (N+1)!} \max_{t \in [a,b]} \left| f^{(N+1)}(t) \right|. \qquad (20)$$

We will use both of these error estimates over the course of this semester.

5 Looking ahead

Next, we will revisit finite differences and quadrature from the perspective of polynomial interpolation. We will compare the Taylor series approach of generating finite differences to the polynomial approach, and decide which approach is more general/powerful. We will skip that particular discussion for quadrature, and jump straight to the generation of quadrature rules from polynomial interpolation.