Numerical Analysis: Interpolation Part 1


Numerical Analysis: Interpolation Part 1
Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar's notes)
2018/2019, Fall Semester
BGU CS Interpolation (ver. 1.00) AY 2018/2019, Fall Semester

Outline
1. The Interpolation Polynomial
2. Analyzing the Error in the Interpolation Polynomial

Definition (a polynomial from $\mathbb{R}$ to $\mathbb{R}$). A polynomial from $\mathbb{R}$ to $\mathbb{R}$ is a function of the form
$$P_n(x) = \sum_{j=0}^{n} a_j x^j = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0,$$
where $n$ is a nonnegative integer and $(a_j)_{j=0}^{n}$ are real numbers.

Using Polynomials for Representing/Approximating Functions
The importance of polynomials for representing/approximating functions stems from the ease of computing $P_n(x)$, $\frac{d}{dx}P_n(x)$ and $\int P_n(x)\,dx$, and from the fact that they uniformly approximate continuous functions (see the theorem below).

Theorem (Weierstrass). Every continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial. More formally, let $f \in C([a,b])$. Then for every $\varepsilon > 0$ there exists a polynomial $P_n$ such that, for every $x \in [a,b]$, $|f(x) - P_n(x)| < \varepsilon$.

The fundamental problem:
Definition (the polynomial interpolation problem). Given $n+1$ points $((x_i, y_i))_{i=0}^{n}$, with $x_i \in \mathbb{R}$, $y_i \in \mathbb{R}$ for $i \in \{0,1,\dots,n\}$, such that the $x_i$'s are distinct (i.e., $i \ne j \Rightarrow x_i \ne x_j$), find a polynomial $P(x)$ of minimal degree such that $P$ "interpolates the data", i.e., $P(x_i) = y_i$ for all $i \in \{0,1,\dots,n\}$.

Example. Assume the setting from the previous slide, with the additional assumption that $y_i = c$ for every $i$, where $c \in \mathbb{R}$ is some constant. The zeroth-order polynomial $P(x) \equiv c$ interpolates the data. Since its degree is zero, it is also minimal. Clearly, if $c' \in \mathbb{R}$ is some other constant, with $c' \ne c$, then the zeroth-order polynomial $Q(x) \equiv c'$ does not interpolate the data, since, for every $i$, $Q(x_i) = c' \ne c = y_i$. It follows that $P(x) \equiv c$ is the unique polynomial that satisfies the requirements.

Some Observations from that Example
In the definition of polynomial interpolation, only the $x_i$'s are required to be distinct, not the $y_i$'s. We just saw, in that particular example, existence and uniqueness of the interpolation polynomial. As we will see, the interpolation polynomial always exists and is always unique, not just in that example.

Theorem. Given $((x_i, y_i))_{i=0}^{n}$, with $x_i \ne x_j$ whenever $i \ne j$, there exists a unique polynomial $P_n$, of degree no greater than $n$, such that $P_n(x_i) = y_i$ for all $i \in \{0,1,\dots,n\}$.

Proof (uniqueness). A proof by contradiction. Suppose there are two different polynomials, $P_n$ and $Q_n$, such that $P_n(x_i) = y_i = Q_n(x_i)$ for all $i \in \{0,1,\dots,n\}$. Thus, defining a third polynomial by $S(x) = P_n(x) - Q_n(x)$, we get $S(x_i) = P_n(x_i) - Q_n(x_i) = 0$ for all $i \in \{0,1,\dots,n\}$. This implies that $S$ has at least $n+1$ distinct roots, while, by construction, the degree of $S$ is no higher than $n$. By the fundamental theorem of algebra, a polynomial of degree $n$ has exactly $n$ roots (counting complex roots and multiplicities), unless it is the zero polynomial. It follows that $S(x) \equiv 0$, and thus $P_n \equiv Q_n$, a contradiction.

Proof (existence). A proof by induction.
Base: $n = 0$, given $(x_0, y_0)$. Choose $P_0(x) \equiv C_0 = y_0$.
Assumption: suppose there exists a polynomial $P_{k-1}$, of degree no higher than $k-1$, that interpolates the first $k$ points: $P_{k-1}(x_i) = y_i$ for $i \in \{0,1,\dots,k-1\}$.
Step: we now build a polynomial $P_k$, of degree no higher than $k$, that interpolates the first $k+1$ points: $P_k(x_i) = y_i$ for $i \in \{0,1,\dots,k\}$. (Continued on the next slide.)

Proof (existence, continued). We do this by setting $P_k(x) = P_{k-1}(x) + C_k \prod_{j=0}^{k-1}(x - x_j)$. Now, $\prod_{j=0}^{k-1}(x - x_j)$ is a polynomial of degree $k$ that vanishes on $(x_j)_{j=0}^{k-1}$, so $P_k$ interpolates the first $k$ points, and its degree is no higher than $k$. We still need it to interpolate $(x_k, y_k)$, the $(k+1)$-th point. This determines $C_k$:
$$y_k \overset{\text{required}}{=} P_k(x_k) = P_{k-1}(x_k) + C_k \underbrace{\prod_{j=0}^{k-1}(x_k - x_j)}_{\ne\, 0} \quad\Longrightarrow\quad C_k = \frac{y_k - P_{k-1}(x_k)}{\prod_{j=0}^{k-1}(x_k - x_j)}.$$

The proof of existence was also constructive, in the sense that it gives a concrete formula for the interpolation polynomial:

Definition (Newton's form of the interpolation polynomial). The form we saw in the proof,
$$P_k(x) = C_0 + C_1(x - x_0) + C_2(x - x_0)(x - x_1) + \dots + C_k(x - x_0)\cdots(x - x_{k-1}),$$
is called Newton's form of the interpolation polynomial.

$$P_k(x) = \begin{cases} C_0 & k = 0 \\ P_{k-1}(x) + C_k \prod_{j=0}^{k-1}(x - x_j) & k \in \{1,\dots,n\} \end{cases}
\qquad\text{where } C_k = \begin{cases} y_0 & k = 0 \\ \dfrac{y_k - P_{k-1}(x_k)}{\prod_{j=0}^{k-1}(x_k - x_j)} & k \in \{1,\dots,n\}. \end{cases}$$
Example ($n = 0$, a single point). Given a single point $(x_0, y_0)$, the interpolation polynomial is $P_0(x) = C_0 = y_0$.

Example ($n = 1$, two points). Given two points $((x_i, y_i))_{i=0}^{1}$, the interpolation polynomial is (the line)
$$P_1(x) = C_0 + C_1(x - x_0) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x - x_0).$$
Indeed, $P_1(x_0) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x_0 - x_0) = y_0$ and $P_1(x_1) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x_1 - x_0) = y_1$.
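The recursion for $C_k$ translates directly into a short routine. A minimal sketch in Python (the function names `newton_coeffs` and `newton_eval` are ours, not from the slides):

```python
def newton_coeffs(xs, ys):
    """Coefficients C_0..C_n of Newton's form, computed incrementally:
    C_k = (y_k - P_{k-1}(x_k)) / prod_{j<k} (x_k - x_j)."""
    cs = []
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        p, basis = 0.0, 1.0          # evaluate P_{k-1}(x_k) on the fly
        for j in range(k):
            p += cs[j] * basis
            basis *= xk - xs[j]      # running product prod_{j'<=j} (x_k - x_{j'})
        cs.append((yk - p) / basis)  # for k = 0 this reduces to y_0
    return cs

def newton_eval(cs, xs, x):
    """Evaluate C_0 + C_1 (x - x_0) + ... + C_n (x - x_0)...(x - x_{n-1})."""
    p, basis = 0.0, 1.0
    for k, c in enumerate(cs):
        p += c * basis
        basis *= x - xs[k]
    return p

# Two-point case: the line through (1, 2) and (3, 6) has C_0 = 2, C_1 = 2.
cs = newton_coeffs([1.0, 3.0], [2.0, 6.0])
```

Adding a new data point only appends one coefficient; the previously computed $C_0,\dots,C_{k-1}$ are untouched, which is the practical appeal of Newton's form.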

The Interpolation Polynomial by Solving a Linear System
Here is another way to derive the interpolation polynomial. The $n+1$ equations
$$P_n(x_i) = \sum_{j=0}^{n} a_j x_i^j = y_i, \qquad i = 0, 1, \dots, n,$$
are linear w.r.t. the $a_j$'s:
$$\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{bmatrix}.$$

Writing this system as $V a = y$, where $V$ is the $(n+1)\times(n+1)$ matrix above and $a, y$ are the $(n+1)\times 1$ vectors of coefficients and values, we can solve via $a = V^{-1} y$. $V$ is called the Vandermonde matrix. Assuming the $x_i$'s are distinct, we know that $V$ is guaranteed (by the theorem) to be invertible. By uniqueness, we know the solution will coincide with Newton's form.

Problems with This Method
Computationally expensive: matrix inversion is expensive; that said, in the remainder of the course we will study practical methods for solving a linear system without inversion.
Ill-conditioned: the values of $a = [a_0\ a_1\ a_2\ \dots\ a_n]^T$ might be determined inaccurately. We will later see how to characterize the condition number of a linear system.
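For concreteness, here is a sketch of the Vandermonde approach, with textbook Gaussian elimination with partial pivoting standing in for the practical solvers mentioned above (the function name and implementation details are ours, not the course's reference code):

```python
def vandermonde_coeffs(xs, ys):
    """Solve V a = y for the monomial coefficients [a_0, ..., a_n],
    where V[i][j] = x_i**j, via Gaussian elimination with partial pivoting."""
    n = len(xs)
    # augmented matrix [V | y]
    A = [[xi ** j for j in range(n)] + [yi] for xi, yi in zip(xs, ys)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]       # swap in the largest pivot
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    a = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * a[c] for c in range(r + 1, n))
        a[r] = (A[r][n] - s) / A[r][r]
    return a

# points sampled from 1 + x^2 are recovered as coefficients [1, 0, 1]
a = vandermonde_coeffs([0.0, 1.0, 2.0], [1.0, 2.0, 5.0])
```

Even with pivoting, the ill-conditioning of $V$ remains: for many nodes, or nodes spread over a wide interval, the computed $a_j$'s lose accuracy, which is exactly the second problem listed above.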

Uniqueness
We again emphasize that the interpolation polynomial is unique: whether it is expressed via
1. $P_n(x) = \sum_{j=0}^{n} a_j x^j$, or
2. $P_n(x) = C_0 + \sum_{k=1}^{n} C_k \prod_{j=0}^{k-1} (x - x_j)$,
it is the same polynomial.

Remark. Recall that one advantage of polynomials is the ease of their evaluation. By inspection of $P_n(x) = \sum_{j=0}^{n} a_j x^j$, it seems that evaluating $x \mapsto P_n(x)$ requires $n$ additions and $\sum_{j=1}^{n} j = \frac{n(n+1)}{2}$ multiplications (computing each power $x^j$ from scratch). This is $O(n^2)$. It is possible, however, to reduce the number of operations to $O(n)$, using Horner's rule (see next slide).

Horner's Rule (AKA Nested Polynomials)
Set $b_n \leftarrow a_n$, and then, iteratively, set $b_{k-1} \leftarrow a_{k-1} + b_k x$, until obtaining $b_0 = a_0 + b_1 x$, which is equal to $P_n(x)$.
Example:
$$P_4(x) = \sum_{j=0}^{4} a_j x^j = a_0 + x\,\big(a_1 + x\,(a_2 + x\,(a_3 + a_4 x))\big).$$
This gives $O(n)$ additions and $O(n)$ multiplications (instead of $O(n)$ and $O(n^2)$).
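The iteration above is a three-line loop. A sketch, assuming the coefficients are stored lowest-degree first (the list layout is our convention):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_n x^n with n multiplications
    and n additions; coeffs = [a_0, a_1, ..., a_n]."""
    result = 0.0
    for a in reversed(coeffs):   # b_n = a_n, then b_{k-1} = a_{k-1} + x * b_k
        result = result * x + a
    return result                # b_0 = P_n(x)

# 2x^3 + 3x^2 - 4x - 5 at x = 2: 16 + 12 - 8 - 5 = 15
value = horner([-5.0, -4.0, 3.0, 2.0], 2.0)
```

Besides the operation count, Horner's rule also tends to be numerically friendlier than summing separately computed powers of $x$.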

Returning to the Uniqueness of the Interpolation Polynomial
Since the interpolation polynomial is unique, if the $n+1$ points were sampled from a polynomial of degree at most $n$, then interpolation recovers that polynomial exactly.

Example. Let $P(x) = 2x^3 + 3x^2 - 4x - 5$, and consider the following 4 sampled points:
i   :  0   1   2   3
x_i : -1   0   2   3
y_i :  0  -5  15  64
Using Newton's form: $P_0(x) \equiv C_0 = y_0 = 0$.

Example (continued). $P_0(x) \equiv 0$ (from the previous slide).
$$C_1 = \frac{y_1 - P_0(x_1)}{x_1 - x_0} = \frac{-5 - 0}{0 + 1} = -5, \qquad P_1(x) = P_0(x) + C_1(x - x_0) = 0 - 5(x + 1) = -5x - 5.$$

Example (continued). $P_1(x) = -5x - 5$ (from the previous slide).
$$C_2 = \frac{y_2 - P_1(x_2)}{(x_2 - x_0)(x_2 - x_1)} = \frac{15 - (-10 - 5)}{(2 + 1)(2)} = \frac{30}{6} = 5,$$
$$P_2(x) = P_1(x) + C_2(x - x_0)(x - x_1) = -5x - 5 + 5(x + 1)x = 5x^2 - 5.$$

Example (continued). $P_2(x) = 5x^2 - 5$ (from the previous slide).
$$C_3 = \frac{y_3 - P_2(x_3)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} = \frac{64 - (5 \cdot 9 - 5)}{(3 + 1)(3)(3 - 2)} = \frac{24}{12} = 2,$$
$$P_3(x) = P_2(x) + C_3(x - x_0)(x - x_1)(x - x_2) = 5x^2 - 5 + 2(x + 1)x(x - 2) = 5x^2 - 5 + (2x^2 + 2x)(x - 2) = 5x^2 - 5 + 2x^3 + 2x^2 - 4x^2 - 4x = 2x^3 + 3x^2 - 4x - 5,$$
as expected.

$P_n(x)$ as a Linear Combination of Basis Functions
An alternative way to represent the interpolation polynomial is as the following linear combination of basis functions:
$$P_n(x) = \sum_{i=0}^{n} y_i L_i(x),$$
where $(L_i(x))_{i=0}^{n}$ are polynomials that depend on the values of the $x_i$'s but not on the values of the $y_i$'s. Since we cannot control the $y_i$'s, the most general constraint on $L_i(x)$ is
$$L_i(x_j) = \delta_{ij} = \begin{cases} 0 & i \ne j \\ 1 & i = j \end{cases} \quad\text{(Kronecker delta)},$$
together with the restriction that $L_i(x)$ is a polynomial of degree no higher than $n$.

The constraint $i \ne j \Rightarrow L_i(x_j) = 0$ implies the form
$$L_i(x) = C_i \prod_{j : j \ne i} (x - x_j),$$
where the value of $C_i$ is found via
$$L_i(x_i) = 1 = C_i \prod_{j : j \ne i} (x_i - x_j) \quad\Longrightarrow\quad C_i = \Big(\prod_{j : j \ne i} (x_i - x_j)\Big)^{-1}.$$

In other words, we have just derived the family of basis functions $(L_i(x))_{i=0}^{n}$:
$$L_i(x) = \prod_{j=0,\, j \ne i}^{n} \frac{x - x_j}{x_i - x_j} = \prod_{j=0}^{i-1} \frac{x - x_j}{x_i - x_j} \prod_{j=i+1}^{n} \frac{x - x_j}{x_i - x_j}.$$
And as mentioned earlier, the polynomial
$$P_n(x) = \sum_{i=0}^{n} y_i L_i(x) = \sum_{i=0}^{n} y_i \prod_{j=0,\, j \ne i}^{n} \frac{x - x_j}{x_i - x_j}$$
satisfies $P_n(x_i) = y_i$, so by uniqueness of the interpolation polynomial, it is the same polynomial we saw in other forms.

This is Lagrange's form of the interpolation polynomial. It is often called "Lagrange approximation", but of course if the data came from a polynomial (of degree no higher than $n$) then it is exact. Remark: this form is easy to construct but expensive to evaluate.

Example. Again let $P(x) = 2x^3 + 3x^2 - 4x - 5$ and consider
i   :  0   1   2   3
x_i : -1   0   2   3
y_i :  0  -5  15  64
Lagrange's form of the interpolation polynomial is given by
$$P_3(x) = \sum_{i=0}^{3} y_i L_i(x) = 0 \cdot L_0(x) - 5 L_1(x) + 15 L_2(x) + 64 L_3(x),$$
where
$L_0(x)$: no need to compute it, since $y_0 = 0$;
$L_1(x) = \frac{(x+1)(x-2)(x-3)}{(0+1)(0-2)(0-3)} = \frac{1}{6}(x+1)(x-2)(x-3)$;
$L_2(x) = \frac{(x+1)x(x-3)}{(2+1)(2-0)(2-3)} = -\frac{1}{6}(x+1)x(x-3)$;
$L_3(x) = \frac{(x+1)x(x-2)}{(3+1)(3-0)(3-2)} = \frac{1}{12}(x+1)x(x-2)$.
Hence
$$P_3(x) = -\tfrac{5}{6}(x+1)(x-2)(x-3) - \tfrac{15}{6}(x+1)x(x-3) + \tfrac{32}{6}(x+1)x(x-2) = \dots = 2x^3 + 3x^2 - 4x - 5,$$
as expected.
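Lagrange's form is equally direct to code: evaluate each $L_i(x)$ as a product and sum. A sketch (the function name is ours):

```python
def lagrange_eval(xs, ys, x):
    """P_n(x) = sum_i y_i * L_i(x), with L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# the example's data, sampled from P(x) = 2x^3 + 3x^2 - 4x - 5
xs, ys = [-1.0, 0.0, 2.0, 3.0], [0.0, -5.0, 15.0, 64.0]
```

Note the cost: each evaluation point requires $O(n^2)$ operations, which is the "expensive to evaluate" remark above in concrete form.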

Analyzing the Error in the Interpolation Polynomial

The Interpolation Polynomial's Error. Our examples so far involved the (exact) recovery of a polynomial. Polynomial interpolation, however, can also be used to approximate non-polynomials.

Example (approximating $\sin(x)$). Let $f(x) = \sin(x)$. It is easy to evaluate $f$ at several key points:
i   :  0    1    2    3    4
x_i : -π  -π/2   0   π/2   π
y_i :  0   -1    0    1    0
We can now apply the tools we learned to find a polynomial, of degree at most 4, that approximates $\sin(x)$ on $[-\pi, \pi]$. We get the following result:
$$P_4(x) = -\frac{8}{3\pi^3} x^3 + \frac{8}{3\pi} x.$$
Exercise: verify this at home.
A natural question: how good is this approximation?
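Since the interpolation polynomial of degree at most 4 through five points is unique, the exercise can be checked numerically by confirming that the closed form hits all five tabulated values:

```python
from math import pi

def p4(x):
    # the closed form claimed for the interpolant of sin on [-pi, pi]
    return -8.0 / (3.0 * pi ** 3) * x ** 3 + 8.0 / (3.0 * pi) * x

# P_4 matches the table at all five nodes; by uniqueness it is the interpolant
for xi, yi in [(-pi, 0.0), (-pi / 2, -1.0), (0.0, 0.0), (pi / 2, 1.0), (pi, 0.0)]:
    assert abs(p4(xi) - yi) < 1e-12
```

By the symmetry of the data the interpolant is an odd polynomial, which is why the $x^4$, $x^2$ and constant terms vanish.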

Error Analysis
More generally, how can we quantify the interpolation error? Can we derive an upper bound on it?

Theorem. Let $f \in C^{n+1}[a,b]$ and let $P_n(x)$ be the interpolation polynomial of $f$ at nodes $(x_i)_{i=0}^{n} \subset [a,b]$. Then the interpolation error at $x \in [a,b]$ is given by
$$E_n(x) \equiv f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$$
for some $\xi \in [a,b]$.

Before the Proof...
$$E_n(x) \equiv f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$$
This theoretical result is of paramount importance since it lets us know, a priori, the expected maximal error, and enables us to design an interpolation polynomial to meet a given accuracy criterion.

Example. Find the degree of the interpolation polynomial $P_n(x)$ that will guarantee an approximation error $\le 10^{-5}$ for $f(x) = \sin(x)$ on $[-\pi, \pi]$ (even without knowing the nodes $(x_i)_{i=0}^{n}$).
Solution: since $|f^{(n+1)}(\xi)| \le 1$ and $|x - x_i| \le 2\pi$ for $x, x_i \in [-\pi, \pi]$,
$$|E_n(x)| = \frac{|f^{(n+1)}(\xi)|}{(n+1)!} \prod_{i=0}^{n} |x - x_i| \le \frac{(2\pi)^{n+1}}{(n+1)!} \overset{\text{required}}{\le} 10^{-5}.$$
By search we can find that $n = 24$ suffices: $\frac{(2\pi)^{25}}{25!} \approx 5.8 \times 10^{-6}$. Note, however, that by using a better placement of the nodes, we will be able to get the desired accuracy using a much smaller $n$.
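The search for the smallest sufficient degree can be mechanized. A sketch under the bound derived above, i.e. $|\sin^{(n+1)}| \le 1$ and $|x - x_i| \le 2\pi$ (the function name is ours):

```python
from math import factorial, pi

def min_degree(tol):
    """Smallest n with (2*pi)**(n+1) / (n+1)! <= tol, the a-priori error
    bound for interpolating sin on [-pi, pi] with arbitrary nodes there."""
    n = 0
    while (2 * pi) ** (n + 1) / factorial(n + 1) > tol:
        n += 1
    return n
```

The bound first grows with $n$ (the numerator $(2\pi)^{n+1}$ dominates) before the factorial takes over, so the loop runs past the hump and stops at the first sufficient degree.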

Before the proof of the theorem, we need a lemma.
Lemma. If $g \in C^{k-1}[a,b]$ has $k$ roots in $[a,b]$, then $g^{(k-1)}$ has at least one root in $[a,b]$.
Proof of the lemma. By Rolle's theorem (a special case of the Mean Value Theorem): $g'$ has at least $k-1$ roots in $[a,b]$; $g''$ has at least $k-2$ roots in $[a,b]$; ...; $g^{(k-1)}$ has at least one root in $[a,b]$ (note that $k - (k-1) = 1$).

Proof of the theorem. If $x = x_i$ for some $i$, the claim holds trivially, so we only need to prove it for the case $x \ne x_i$, $i \in \{0,1,\dots,n\}$. Define a new function whose argument is $t \in [a,b]$, and for which $x$ is a constant parameter:
$$g(t; x) = \underbrace{E_n(t)}_{f(t) - P_n(t)} - \underbrace{E_n(x)}_{f(x) - P_n(x)} \prod_{j=0}^{n} \frac{t - x_j}{x - x_j}.$$
Note that $t \mapsto g(t; x)$ is in $C^{n+1}[a,b]$. This $g$ was designed to satisfy the following property: it has $n+2$ distinct roots. To see that, take $t = x$:
$$g(x; x) = E_n(x) - E_n(x) \prod_{j=0}^{n} \frac{x - x_j}{x - x_j} = E_n(x) - E_n(x) = 0.$$
Thus, $x$ is a root of $t \mapsto g(t; x)$. Moreover, each of the $x_i$'s is a root (the proof continues on the next slide):

Proof of the theorem, continued.
$$g(x_i; x) = E_n(x_i) - E_n(x) \prod_{j=0}^{n} \frac{x_i - x_j}{x - x_j} = 0 - E_n(x) \cdot 0 = 0.$$
Together, this means $g$ has $n+2$ distinct roots. By the lemma, $g^{(n+1)}$ (w.r.t. $t$) has at least one root: there exists $\xi \in [a,b]$ such that $g^{(n+1)}(\xi; x) = 0$. Inspect this derivative:
$$g^{(n+1)}(t; x) = E_n^{(n+1)}(t) - E_n(x) \frac{d^{n+1}}{dt^{n+1}} \prod_{j=0}^{n} \frac{t - x_j}{x - x_j} = f^{(n+1)}(t) - \underbrace{P_n^{(n+1)}(t)}_{0} - E_n(x) \frac{(n+1)!}{\prod_{j=0}^{n}(x - x_j)},$$
where we used the fact that $\prod_{j=0}^{n}(t - x_j)$ is a monic polynomial of degree $n+1$ in $t$, whose $(n+1)$-th derivative is $(n+1)!$. Setting $t = \xi$:
$$0 = g^{(n+1)}(\xi; x) = f^{(n+1)}(\xi) - E_n(x) \frac{(n+1)!}{\prod_{j=0}^{n}(x - x_j)} \quad\Longrightarrow\quad E_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{j=0}^{n}(x - x_j).$$

Connection to Taylor's Expansion
The expression
$$E_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{j=0}^{n}(x - x_j)$$
is similar to the remainder of Taylor's expansion, $\frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}$. Taylor's expansion approximates a function using a single point (while here we use $n+1$ points). Thus, we can interpret the Taylor expansion as taking the limits of the distances between all our $n+1$ points to zero, collapsing them to a single point.

Bounding the Error
We are now in a position to discuss a bound on the error. Clearly, the theorem is useful only if we can bound $f^{(n+1)}(x)$. If we can assume, or prove, a bound on $|f^{(n+1)}(x)|$ (as we could in the example with $f(x) = \sin(x)$), then
$$|f^{(n+1)}(x)| < M_{n+1} \ \ \forall x \in [a,b] \quad\Longrightarrow\quad |E_n(x)| = |f(x) - P_n(x)| \le \frac{M_{n+1}}{(n+1)!} \max_{x \in [a,b]} \prod_{j=0}^{n} |x - x_j|.$$

Bounding the Error (continued)
However, we cannot always assume or prove such a bound! Thus, in such cases, there is no guarantee that increasing $n$ will decrease a bound on the error. In fact, we cannot even guarantee that $\lim_{n \to \infty} E_n = 0$.

Evenly-Spaced Nodes
Often, it is convenient to sample the function at evenly-spaced points: $x_i = x_0 + i h$, $i = 1, \dots, n$. In this case, it is possible to derive more convenient error bounds.

Example. Consider a quadratic interpolation on $[a,b]$, where $x_0 = a$, $x_1 = a + h$, and $x_2 = a + 2h = b$. By the theorem, if $M_3$ is an upper bound on $|f^{(3)}(x)|$,
$$|E_2(x)| \le \frac{M_3}{3!} \max_{x \in [a,b]} |(x - x_0)(x - x_1)(x - x_2)| = \frac{M_3}{3!} \max_{x \in [a,b]} |H(x)|,$$
where
$$H(x) = (x - x_0)(x - x_1)(x - x_2) = x^3 - x^2(x_0 + x_1 + x_2) + x(x_0 x_1 + x_0 x_2 + x_1 x_2) - x_0 x_1 x_2.$$
Looking for the extremum, we set $H'(x) = 3x^2 - 2x(x_0 + x_1 + x_2) + (x_0 x_1 + x_0 x_2 + x_1 x_2) = 0$. Now substitute $x_i = x_0 + i h$ and obtain
$$0 = 3x^2 - 6x(x_0 + h) + 3\left(x_0^2 + 2 x_0 h + \tfrac{2}{3} h^2\right),$$
$$x_{\max} = \frac{6(x_0 + h) \pm \sqrt{36(x_0 + h)^2 - 36\left(x_0^2 + 2 x_0 h + \tfrac{2}{3} h^2\right)}}{6} = x_0 + h \pm \sqrt{\tfrac{1}{3} h^2} = x_0 + h\left(1 \pm \tfrac{1}{\sqrt{3}}\right).$$
(Continued on the next slide.)

Example (continued). Going back to $\max_{x \in [a,b]} |H(x)|$, we get
$$|H_{\max}| = |(x_{\max} - x_0)(x_{\max} - x_0 - h)(x_{\max} - x_0 - 2h)| = \left| h\left(1 \pm \tfrac{1}{\sqrt{3}}\right) \cdot h\left(\pm\tfrac{1}{\sqrt{3}}\right) \cdot h\left(\pm\tfrac{1}{\sqrt{3}} - 1\right) \right| = h^3 \left| \tfrac{1}{\sqrt{3}}\left(\tfrac{1}{3} - 1\right) \right| = \frac{2}{3\sqrt{3}} h^3,$$
and thus
$$|E_2(x)| \le \frac{M_3}{3!} \max_{x \in [a,b]} |H(x)| = \frac{h^3}{9\sqrt{3}} M_3.$$

More generally:
Theorem. Let $f \in C^{n+1}([a,b])$ and let $P_n(x)$ be its interpolation polynomial for the evenly-spaced nodes $x_i = x_0 + ih$, $i = 0, 1, \dots, n$. If $|f^{(n+1)}(x)| \le M_{n+1}$ for all $x \in [a,b]$, then
$$|E_n(x)| = |f(x) - P_n(x)| \le O(h^{n+1})\, M_{n+1}.$$
We omit the proof. Some particular cases:
$$|E_0(x)| \le h M_1, \qquad |E_1(x)| \le \tfrac{1}{8} h^2 M_2, \qquad |E_2(x)| \le \tfrac{1}{9\sqrt{3}} h^3 M_3, \qquad |E_3(x)| \le \tfrac{1}{24} h^4 M_4.$$
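The $E_2$ bound can be sanity-checked numerically. For $f(x) = x^3$ we have $f''' \equiv 6$, so $M_3 = 6$ and the bound becomes $2h^3/(3\sqrt{3})$; moreover, for this $f$ the error $f - P_2$ is exactly the monic cubic $(x - x_0)(x - x_1)(x - x_2)$, so the bound should be essentially attained. A sketch (the interval and $h$ are arbitrary choices of ours):

```python
from math import sqrt

a, h = 0.0, 0.5
x0, x1, x2 = a, a + h, a + 2 * h

def err(x):
    # For f(x) = x^3, f - P_2 is a monic cubic vanishing at the three nodes,
    # i.e. exactly the node polynomial H(x) = (x - x0)(x - x1)(x - x2).
    return (x - x0) * (x - x1) * (x - x2)

grid = [a + 2 * h * i / 100000 for i in range(100001)]
observed = max(abs(err(x)) for x in grid)
bound = 2 * h ** 3 / (3 * sqrt(3))   # = h^3 * M_3 / (9*sqrt(3)) with M_3 = 6
```

Here `observed` stays below `bound` and agrees with it to grid resolution, as the derivation of $|H_{\max}|$ predicts.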

Reminder: increasing the number of nodes does not guarantee convergence of $P_n(x)$ to $f(x)$. This is true even if the points are evenly spaced, despite the fact that $|E_n(x)| \le O(h^{n+1})\, M_{n+1}$, since $M_{n+1}$, the bound on $|f^{(n+1)}(x)|$, might increase faster than $O(h^{n+1})$ decreases, so the error might, in fact, increase. This is the notorious Runge's phenomenon, expressed via sharp oscillations near the interval's endpoints.
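The classic demonstration (the specific function is not on the slide) uses Runge's function $f(x) = 1/(1 + 25x^2)$ on $[-1, 1]$: with evenly-spaced nodes, the maximal error grows with $n$ instead of shrinking. A sketch using a plain Lagrange evaluation:

```python
def lagrange_eval(xs, ys, x):
    """P_n(x) = sum_i y_i * prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def max_error(n, samples=1001):
    """Max |f - P_n| over a fine grid, for n+1 evenly-spaced nodes on [-1, 1]."""
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    ys = [runge(x) for x in xs]
    grid = [-1.0 + 2.0 * i / (samples - 1) for i in range(samples)]
    return max(abs(runge(x) - lagrange_eval(xs, ys, x)) for x in grid)

# the error grows with n: sharp oscillations appear near the endpoints
errors = [max_error(n) for n in (5, 10, 20)]
```

Running this, `errors` is strictly increasing and exceeds 1 already at $n = 20$, exactly the divergence the slide warns about; placing the nodes non-uniformly (denser near the endpoints) tames it, which is the "better placement of the nodes" remark from the earlier example.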

Version Log
11/11/2018, ver 1.00.