2301678 Scientific Computing
Chapter 2: Interpolation and Approximation
Paisan Nakmahachalasint, Paisan.N@chula.ac.th

Contents
1. Polynomial interpolation
2. Spline functions
3. The best approximation problem
4. Chebyshev polynomials
5. A near-minimax approximation method
6. Legendre polynomials and least squares approximation

1. Polynomial Interpolation
Mathematical functions sometimes cannot be evaluated exactly. On a computer we are limited to the elementary arithmetic operations $+$, $-$, $\times$, and $\div$; combinations of these operations allow the evaluation of only polynomials and rational functions, so all other functions must be approximated in terms of them. Interpolation is the process of finding and evaluating a function whose graph passes through a given set of points. The interpolating function is usually chosen from a restricted class of functions, and polynomials are the most commonly used class. Spline functions refer to the class of piecewise polynomial functions.

1. Polynomial Interpolation
Interpolation was originally used to tabulate common mathematical functions, but that use is far less important today. Interpolation remains an important tool for producing computable approximations to commonly used functions: to numerically integrate or differentiate a function, we often replace the function with a simpler approximating expression. Interpolation is also widely used in computer graphics to produce smooth curves and surfaces when the geometric objects of interest are given at only a discrete set of data points.

1.1 Linear Interpolation
Linear interpolation is the construction of a straight line passing through two given data points. It is used here as an introduction to more general polynomial interpolation. Given two points $(x_0, y_0)$ and $(x_1, y_1)$ with $x_0 \neq x_1$, the straight line drawn through them is the graph of the linear polynomial
$$P_1(x) = \frac{(x_1 - x)y_0 + (x - x_0)y_1}{x_1 - x_0}.$$
This function interpolates the value $y_i$ at the point $x_i$; that is, $P_1(x_i) = y_i$ for $i = 0, 1$.
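This formula can be sketched in a few lines of Python (the function name is my own choice, not from the slides):

```python
def linear_interp(x0, y0, x1, y1, x):
    """Evaluate P_1(x), the line through (x0, y0) and (x1, y1)."""
    return ((x1 - x) * y0 + (x - x0) * y1) / (x1 - x0)

print(linear_interp(0.0, 1.0, 2.0, 3.0, 1.0))  # midpoint of the segment -> 2.0
```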

1.2 Quadratic Interpolation
When the graph of the data is curved rather than straight, we can do better by looking at polynomials of degree greater than 1. Given three data points $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$, with $x_0, x_1, x_2$ being distinct numbers, the quadratic polynomial passing through these points is constructed as follows:
$$P_2(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x)$$

1.2 (Cont'd)
$$L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}, \quad L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}, \quad L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}$$
We see that $L_i(x_j) = \delta_{ij}$ for $0 \le i, j \le 2$, and hence $P_2(x_i) = y_i$ for $i = 0, 1, 2$.

1.3 Higher-Degree Interpolation
Consider the general case where $n+1$ data points $(x_0, y_0), \ldots, (x_n, y_n)$ are given, with all the $x_i$'s distinct. The interpolating polynomial of degree $\le n$ is given by
$$P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \cdots + y_n L_n(x)$$
with each $L_i(x)$ a polynomial of degree $n$ given by
$$L_i(x) = \frac{(x - x_0)\cdots(x - x_{i-1})(x - x_{i+1})\cdots(x - x_n)}{(x_i - x_0)\cdots(x_i - x_{i-1})(x_i - x_{i+1})\cdots(x_i - x_n)}$$
for $i = 0, 1, \ldots, n$. This formula is called Lagrange's formula.
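Lagrange's formula translates directly into code. A minimal sketch (names are my own choices):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial P_n at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0  # accumulate the basis polynomial L_i(x)
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

# three points on y = x^2 reproduce the parabola exactly
print(lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0))  # -> 9.0
```

Note that evaluating $P_n$ this way costs $O(n^2)$ operations per point, which is one motivation for the divided-difference form of Section 1.4.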

1.3 (Cont'd)
It is easy to show that $L_i(x_j) = \delta_{ij}$ for $0 \le i, j \le n$, and hence $P_n(x_j) = y_j$ for $j = 0, 1, \ldots, n$. With linear interpolation, it was obvious that there was only one straight line passing through two given data points. But with $n+1$ data points, it is less obvious that there is only one interpolating polynomial of degree $\le n$ whose graph passes through the points.
Question 1. Prove that there is only one polynomial $P_n(x)$ among all polynomials of degree $\le n$ that satisfies the interpolating conditions $P_n(x_i) = y_i$, $i = 0, 1, \ldots, n$, where the $x_i$'s are distinct.

1.4 Divided Differences
Lagrange's formula is well suited for many theoretical uses of interpolation, but it is less desirable when actually computing the value of an interpolating polynomial. For example, knowing $P_2(x)$ does not lead to a less expensive way to evaluate $P_3(x)$. For this reason, we introduce an alternative and more easily computed formulation for the interpolating polynomials $P_1(x), P_2(x), \ldots, P_n(x)$. As a needed preliminary for this new formula, we introduce a discrete version of the derivative of a function $f(x)$.

1.4 (Cont'd)
Let $x_0$ and $x_1$ be distinct numbers, and define
$$f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}.$$
This is called the first-order divided difference of $f(x)$. If $f(x)$ is differentiable on an interval containing $x_0$ and $x_1$, then the mean value theorem implies $f[x_0, x_1] = f'(c)$ for some $c$ between $x_0$ and $x_1$. If $x_0$ and $x_1$ are close together, then
$$f[x_0, x_1] \approx f'\!\left(\frac{x_0 + x_1}{2}\right).$$

1.4 (Cont'd)
We define higher-order divided differences recursively in terms of lower-order ones. Let $x_0, x_1, x_2$ be distinct real numbers, and define
$$f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}.$$
This is called the second-order divided difference. For $x_0, x_1, x_2, x_3$ distinct, define
$$f[x_0, x_1, x_2, x_3] = \frac{f[x_1, x_2, x_3] - f[x_0, x_1, x_2]}{x_3 - x_0}.$$

1.4 (Cont'd)
In general, let $x_0, x_1, \ldots, x_n$ be $n+1$ distinct numbers, and define
$$f[x_0, x_1, \ldots, x_n] = \frac{f[x_1, \ldots, x_n] - f[x_0, \ldots, x_{n-1}]}{x_n - x_0}.$$
This is the divided difference of order $n$, sometimes also called the Newton divided difference.

1.4 (Cont'd)
Theorem 1. Let $n \ge 1$, and assume $f(x)$ is $n$ times continuously differentiable on some interval $\alpha \le x \le \beta$. Let $x_0, x_1, \ldots, x_n$ be $n+1$ distinct numbers in $[\alpha, \beta]$. Then
$$f[x_0, x_1, \ldots, x_n] = \frac{1}{n!} f^{(n)}(c)$$
for some unknown point $c$ lying between the minimum and maximum of the numbers $x_0, x_1, \ldots, x_n$.

1.5 Properties of Divided Differences
Divided differences have a number of special properties that can simplify work with them. First, let $(i_0, i_1, \ldots, i_n)$ denote a permutation (or rearrangement) of the integers $(0, 1, \ldots, n)$. Then it can be shown that
$$f[x_{i_0}, x_{i_1}, \ldots, x_{i_n}] = f[x_0, x_1, \ldots, x_n].$$
The original definition seems to imply that the order of $x_0, x_1, \ldots, x_n$ matters in the calculation of $f[x_0, x_1, \ldots, x_n]$, but the above equation asserts that it does not. The proof in general is nontrivial, and we consider only the cases $n = 1$ and $n = 2$.

1.5 (Cont'd)
For $n = 1$,
$$f[x_1, x_0] = \frac{f(x_0) - f(x_1)}{x_0 - x_1} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = f[x_0, x_1].$$
For $n = 2$,
$$f[x_0, x_1, x_2] = \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)}.$$
Interchanging $x_0, x_1, x_2$ permutes the terms of the sum, but the sum itself remains the same.

1.5 (Cont'd)
A second useful property is that the definition of divided differences can be extended to the case where some or all of the node points $x_i$ coincide, provided that $f(x)$ is sufficiently differentiable. For example, define
$$f[x_0, x_0] = \lim_{x_1 \to x_0} \frac{f(x_1) - f(x_0)}{x_1 - x_0} = f'(x_0).$$
For arbitrary $n \ge 1$, letting all of the nodes approach $x_0$ leads to the definition
$$f[\underbrace{x_0, x_0, \ldots, x_0}_{n+1 \text{ arguments}}] = \frac{1}{n!} f^{(n)}(x_0).$$

1.5 (Cont'd)
For cases where only some of the nodes are coincident, we can rearrange the arguments to extend the definition of divided differences. For example,
$$f[x_0, x_1, x_0] = f[x_0, x_0, x_1] = \frac{f[x_0, x_1] - f[x_0, x_0]}{x_1 - x_0} = \frac{f[x_0, x_1] - f'(x_0)}{x_1 - x_0}.$$

1.6 Newton's Divided Difference Interpolation Formula
Lagrange's formula is very inconvenient for actually calculating a sequence of interpolation polynomials of increasing degree. We avoid this problem by using the divided differences of the data being interpolated to calculate $P_n(x)$. Let $P_n(x)$ denote the polynomial interpolating $f(x_i)$ at $x_i$, for $i = 0, 1, \ldots, n$. Thus $\deg(P_n) \le n$ and
$$P_n(x_i) = y_i, \quad i = 0, 1, \ldots, n.$$

1.6 (Cont'd)
The interpolation polynomials can then be written as follows:
$$P_1(x) = f(x_0) + (x - x_0) f[x_0, x_1]$$
$$P_2(x) = f(x_0) + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2]$$
$$\vdots$$
$$P_n(x) = f(x_0) + (x - x_0) f[x_0, x_1] + \cdots + (x - x_0)(x - x_1)\cdots(x - x_{n-1}) f[x_0, \ldots, x_n]$$
This is called Newton's divided difference formula for the interpolating polynomial.
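Newton's formula is convenient computationally because the coefficients $f[x_0], f[x_0,x_1], \ldots, f[x_0,\ldots,x_n]$ can be tabulated once and the polynomial then evaluated by nested multiplication. A sketch (function names are my own):

```python
def divided_differences(xs, ys):
    """Return [f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]], computed in place."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        # after pass k, coef[i] holds f[x_{i-k}, ..., x_i] for i >= k
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate Newton's divided difference form by nested multiplication."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = coef[i] + (x - xs[i]) * p
    return p

xs, ys = [1.0, 2.0, 4.0], [1.0, 3.0, 9.0]
c = divided_differences(xs, ys)   # -> [1.0, 2.0, 1/3]
print(newton_eval(xs, c, 2.0))    # reproduces the data point -> 3.0
```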

1.6 (Cont'd)
Note that for $k \ge 0$,
$$P_{k+1}(x) = P_k(x) + (x - x_0)\cdots(x - x_k) f[x_0, \ldots, x_{k+1}].$$
Thus, once the divided difference coefficients have been computed, we can go from degree $k$ to degree $k+1$ with a minimum of calculation. We will prove the formula for $P_1(x)$. Clearly $P_1(x_0) = f(x_0)$, and
$$P_1(x_1) = f(x_0) + (x_1 - x_0)\left[\frac{f(x_1) - f(x_0)}{x_1 - x_0}\right] = f(x_0) + [f(x_1) - f(x_0)] = f(x_1).$$

1.6 (Cont'd)
Thus $\deg(P_1) \le 1$, and $P_1$ satisfies the interpolation conditions. By the uniqueness of polynomial interpolation, Newton's divided difference formula for $P_1(x)$ is the linear interpolation polynomial to $f(x)$ at $x_0, x_1$.
Question 2. Prove that Newton's divided difference formula for $P_2(x)$ is the quadratic interpolation polynomial to $f(x)$ at $x_0, x_1, x_2$.

2.1 Spline Interpolation
The simplest method of interpolating data is to connect the node points by straight-line segments. This is called piecewise linear interpolation. For example, consider the data

x | 0   | 1   | 2   | 2.5 | 3   | 3.5   | 4
y | 2.5 | 0.5 | 0.5 | 1.5 | 1.5 | 1.125 | 0

(Figure: the piecewise linear interpolant of these data.)

2.1 (Cont'd)
We may instead use the polynomial interpolant $P_6(x)$ through the same seven data points. (Figure: graph of $P_6(x)$.) Although its graph is smooth, it is quite different from that of the piecewise linear interpolant.

2.1 (Cont'd)
To pose the problem more generally, suppose $n$ data points $(x_i, y_i)$, $i = 1, \ldots, n$, are given. For simplicity, assume $x_1 < x_2 < \cdots < x_n$, and let $a = x_1$, $b = x_n$. We seek a function $s(x)$ defined on $[a, b]$ that interpolates the data:
$$s(x_i) = y_i, \quad i = 1, \ldots, n.$$
For smoothness of $s(x)$, we require that $s'(x)$ and $s''(x)$ be continuous. In addition, the curve should connect the data points $(x_i, y_i)$ in the same general way as the piecewise linear interpolant does.

2.2 Natural Cubic Spline
If we require that $s'(x)$ not change too rapidly between node points, then $s''(x)$ should be as small as possible; more precisely, we require that
$$\int_a^b [s''(x)]^2 \, dx$$
be as small as possible. There is a unique solution $s(x)$ satisfying the following:
- $s(x)$ is a polynomial of degree $\le 3$ on each subinterval $[x_{i-1}, x_i]$, $i = 2, 3, \ldots, n$;
- $s(x)$, $s'(x)$, and $s''(x)$ are continuous for $a \le x \le b$;
- $s''(x_1) = s''(x_n) = 0$.
The function $s(x)$ is called the natural cubic spline.

2.2 (Cont'd)
We will now construct $s(x)$. Introduce the variables $M_1, \ldots, M_n$ with
$$M_i \equiv s''(x_i), \quad i = 1, 2, \ldots, n.$$
We first express $s(x)$ in terms of the (unknown) values $M_i$; then we produce a system of linear equations from which the values $M_i$ can be calculated.

2.2 (Cont'd)
Since $s(x)$ is cubic on each interval $[x_{i-1}, x_i]$, the function $s''(x)$ is linear on that interval. A linear function is determined by its values at two points, and we use
$$s''(x_{i-1}) = M_{i-1}, \quad s''(x_i) = M_i.$$
Then, for $x_{i-1} \le x \le x_i$,
$$s''(x) = \frac{(x_i - x) M_{i-1} + (x - x_{i-1}) M_i}{x_i - x_{i-1}}.$$


2.2 (Cont'd)
We will now form the second antiderivative of $s''(x)$ on $[x_{i-1}, x_i]$ and apply the interpolating conditions
$$s(x_{i-1}) = y_{i-1}, \quad s(x_i) = y_i.$$
After quite a bit of manipulation, this results in
$$s(x) = \frac{(x_i - x)^3 M_{i-1} + (x - x_{i-1})^3 M_i}{6(x_i - x_{i-1})} + \frac{(x_i - x) y_{i-1} + (x - x_{i-1}) y_i}{x_i - x_{i-1}} - \frac{x_i - x_{i-1}}{6}\left[(x_i - x) M_{i-1} + (x - x_{i-1}) M_i\right]$$
for $x_{i-1} \le x \le x_i$.

2.2 (Cont'd)
This formula applies on each of the intervals $[x_1, x_2], \ldots, [x_{n-1}, x_n]$. The formulas for adjacent intervals $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$ agree at their common point $x = x_i$ because of the interpolating condition $s(x_i) = y_i$, which is common to both definitions. This implies that $s(x)$ is continuous over the entire interval $[a, b]$. Similarly, $s''(x)$ is continuous on $[a, b]$. To ensure the continuity of $s'(x)$ over $[a, b]$, the formulas for $s'(x)$ on $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$ are required to give the same value at their common point $x = x_i$, for $i = 2, 3, \ldots, n-1$.

2.2 (Cont'd)
After a great deal of simplification, this leads to the following system of linear equations:
$$\frac{x_i - x_{i-1}}{6} M_{i-1} + \frac{x_{i+1} - x_{i-1}}{3} M_i + \frac{x_{i+1} - x_i}{6} M_{i+1} = \frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \frac{y_i - y_{i-1}}{x_i - x_{i-1}}$$
for $i = 2, 3, \ldots, n-1$. These $n-2$ equations, together with $M_1 = M_n = 0$, determine the values $M_1, \ldots, M_n$, and hence the interpolating function $s(x)$. The above system is called a tridiagonal system.


2.2 (Cont'd)
Question 3. Calculate the natural cubic spline interpolating the data
$$\left\{(1, 1), \left(2, \tfrac{1}{2}\right), \left(3, \tfrac{1}{3}\right), \left(4, \tfrac{1}{4}\right)\right\}.$$
Answer.
$$s(x) = \begin{cases} \tfrac{1}{12}x^3 - \tfrac{1}{4}x^2 - \tfrac{1}{3}x + \tfrac{3}{2}, & 1 \le x \le 2 \\[2pt] -\tfrac{1}{12}x^3 + \tfrac{3}{4}x^2 - \tfrac{7}{3}x + \tfrac{17}{6}, & 2 \le x \le 3 \\[2pt] -\tfrac{1}{12}x + \tfrac{7}{12}, & 3 \le x \le 4 \end{cases}$$
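This construction can be checked numerically. The sketch below (function names are my own) assembles the tridiagonal system for the $M_i$ of Question 3 and evaluates $s(x)$ with the formula from the preceding slides:

```python
import numpy as np

def natural_spline_moments(x, y):
    """Solve the tridiagonal system for M_i = s''(x_i) of the natural spline."""
    n = len(x)
    h = np.diff(x)                       # subinterval lengths
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # natural end conditions M_1 = M_n = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1] / 6
        A[i, i] = (h[i - 1] + h[i]) / 3
        A[i, i + 1] = h[i] / 6
        b[i] = (y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1]
    return np.linalg.solve(A, b)

def spline_eval(x, y, M, t):
    """Evaluate s(t) on the subinterval [x_{i-1}, x_i] containing t."""
    i = int(np.clip(np.searchsorted(x, t), 1, len(x) - 1))
    h = x[i] - x[i - 1]
    return (((x[i] - t) ** 3 * M[i - 1] + (t - x[i - 1]) ** 3 * M[i]) / (6 * h)
            + ((x[i] - t) * y[i - 1] + (t - x[i - 1]) * y[i]) / h
            - h / 6 * ((x[i] - t) * M[i - 1] + (t - x[i - 1]) * M[i]))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 1.0 / x
M = natural_spline_moments(x, y)
print(M)                          # -> [0, 1/2, 0, 0]
print(spline_eval(x, y, M, 1.5))  # first cubic piece at x = 1.5 -> 0.71875
```

With $M = (0, \tfrac{1}{2}, 0, 0)$, expanding the per-interval formula reproduces the three pieces in the answer above.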

2.3 Other Spline Functions
Up to this point, we have not considered the accuracy of the interpolating spline $s(x)$. This is satisfactory when only the data points are known and we merely want a smooth curve that looks correct to the eye. But often we want the spline to interpolate a known function, and then we are also interested in accuracy. Let $f(x)$ be given on $[a, b]$. We consider the case where the interpolation of $f(x)$ is performed at evenly spaced values of $x$: let $n > 1$,
$$h = \frac{b - a}{n - 1}, \quad x_i = a + (i-1)h, \quad i = 1, 2, \ldots, n.$$

2.3 (Cont'd)
Let $s_n(x)$ be the natural cubic spline interpolating $f(x)$ at $x_1, \ldots, x_n$. Then it can be shown that
$$\max_{a \le x \le b} |f(x) - s_n(x)| \le c h^2,$$
where $c$ depends on $f''(a)$, $f''(b)$, and $\max_{a \le x \le b} |f^{(4)}(x)|$. The primary reason that the approximation $s_n(x)$ does not converge more rapidly is that $f''(x)$ is generally nonzero at $x = a$ and $x = b$, whereas $s_n''(a) = s_n''(b) = 0$ by definition. For functions $f(x)$ with $f''(a) = f''(b) = 0$, the right-hand side of the above estimate can be replaced by $c h^4$.

2.3 (Cont'd)
To improve on $s_n(x)$, we can look at other cubic spline functions $s(x)$ that interpolate $f(x)$. We say that $s(x)$ is a cubic spline on $[a, b]$ if
- $s(x)$ is cubic on each subinterval $[x_{i-1}, x_i]$;
- $s(x)$, $s'(x)$, and $s''(x)$ are all continuous on $[a, b]$.
If $s(x)$ is chosen to satisfy $s(x_i) = y_i = f(x_i)$ for $i = 1, \ldots, n$, then we can choose endpoint conditions (or boundary conditions) for $s(x)$ that result in a better approximation to $f(x)$. If possible, require
$$s'(x_1) = f'(x_1), \quad s'(x_n) = f'(x_n) \qquad \text{or} \qquad s''(x_1) = f''(x_1), \quad s''(x_n) = f''(x_n).$$

3.1 The Best Approximation Problem
We will look at the concept of the best possible approximation, illustrated with improvements to the Taylor polynomials for $f(x) = e^x$. Let $f(x)$ be a given function that is continuous on some interval $a \le x \le b$. If $p(x)$ is a polynomial, then we are interested in measuring
$$E(p) = \max_{a \le x \le b} |f(x) - p(x)|,$$
the maximum possible error in the approximation of $f(x)$ by $p(x)$ on the interval $[a, b]$.

3.1 (Cont'd)
For each degree $n > 0$, define
$$\rho_n(f) = \min_{\deg(p) \le n} E(p) = \min_{\deg(p) \le n} \left[\max_{a \le x \le b} |f(x) - p(x)|\right].$$
This is the smallest possible value of $E(p)$ that can be attained with a polynomial of degree $\le n$; it is called the minimax error. It can be shown that there is a unique polynomial of degree $\le n$ for which the maximum error on $[a, b]$ is $\rho_n(f)$. This polynomial is called the minimax polynomial of order $n$, and we denote it here by $m_n(x)$.

Example
Let $f(x) = e^x$ on $-1 \le x \le 1$, and consider linear polynomial approximations to $f(x)$. The linear Taylor polynomial is $t_1(x) = 1 + x$, and
$$E(t_1) = \max_{-1 \le x \le 1} |e^x - t_1(x)| = e - 2 \approx 0.718.$$
The linear minimax polynomial to $e^x$ on $[-1, 1]$ is
$$m_1(x) = a + bx \approx 1.2643 + 1.1752x,$$
where $b = \frac{e - e^{-1}}{2}$ and $a$ is chosen so that the error equioscillates. Then
$$E(m_1) = \max_{-1 \le x \le 1} |e^x - m_1(x)| = e - a - b \approx 0.279.$$
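These two error values are easy to confirm numerically on a dense grid (a rough sketch; the grid maximum only approximates the true maximum):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 100001)
t1_err = np.max(np.abs(np.exp(xs) - (1.0 + xs)))              # Taylor t_1
m1_err = np.max(np.abs(np.exp(xs) - (1.2643 + 1.1752 * xs)))  # minimax m_1
print(t1_err, m1_err)  # roughly 0.718 and 0.279
```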

Example
Again let $f(x) = e^x$ for $-1 \le x \le 1$. The cubic minimax polynomial to $e^x$ on $[-1, 1]$ is
$$m_3(x) = 0.994579 + 0.995668x + 0.542973x^2 + 0.179533x^3,$$
compared to the Taylor approximation
$$t_3(x) = 1 + x + \tfrac{1}{2}x^2 + \tfrac{1}{6}x^3.$$
The errors are $E(m_3) \approx 0.00553$ and $E(t_3) \approx 0.0516$.

3.1 (Cont'd)
These examples illustrate several general properties of the minimax approximation:
- $m_n(x)$ is usually a significant improvement on the Taylor polynomial $t_n(x)$.
- The larger values of the error $f(x) - m_n(x)$ are dispersed over $[a, b]$, whereas the Taylor error $f(x) - t_n(x)$ is much smaller around the point of expansion.
- The error $f(x) - m_n(x)$ is oscillatory on $[a, b]$. It can be shown that this error changes sign at least $n+1$ times inside the interval $[a, b]$, and the sizes of the oscillations are equal.

3.2 Accuracy of the Minimax Approximation
For $f(x) = e^x$, it appears that $m_n(x)$ is very accurate for relatively small values of $n$. This can be made more precise for some commonly occurring functions such as $e^x$, $\cos x$, and others. Assume $f(x)$ has an infinite number of continuous derivatives on $[a, b]$. The minimax error satisfies
$$\rho_n(f) \le \frac{[(b-a)/2]^{n+1}}{(n+1)!\, 2^n} \max_{a \le x \le b} |f^{(n+1)}(x)|.$$
This error bound will not always become smaller with increasing $n$, but it gives a fairly accurate bound for many common functions $f(x)$.

Example
Let $f(x) = e^x$ for $-1 \le x \le 1$. Then
$$\rho_n(e^x) \le \frac{e}{(n+1)!\, 2^n}.$$

n | bound           | ρ_n(f)
1 | 6.80 × 10^{-1}  | 2.79 × 10^{-1}
2 | 1.13 × 10^{-1}  | 4.50 × 10^{-2}
3 | 1.42 × 10^{-2}  | 5.53 × 10^{-3}
4 | 1.42 × 10^{-3}  | 5.47 × 10^{-4}
5 | 1.18 × 10^{-4}  | 4.52 × 10^{-5}
6 | 8.43 × 10^{-6}  | 3.21 × 10^{-6}
7 | 5.27 × 10^{-7}  | 2.00 × 10^{-7}

4.1 Chebyshev Polynomials
We introduce a family of polynomials, the Chebyshev polynomials, that are used in many parts of numerical analysis and, more generally, in mathematics and physics. A few of their properties are given in this section; in the next section they are used to produce a polynomial approximation close to the minimax approximation.

4.1 (Cont'd)
For an integer $n \ge 0$, define the function
$$T_n(x) = \cos\left(n \cos^{-1} x\right), \quad -1 \le x \le 1.$$
This may not appear to be a polynomial, but we will show that it is a polynomial of degree $n$. To simplify the manipulation of the definition, introduce
$$\theta = \cos^{-1} x, \quad \text{or} \quad x = \cos\theta, \quad 0 \le \theta \le \pi.$$
Then $T_n(x) = \cos n\theta$.

Example
(Figure: graphs of the first few Chebyshev polynomials on $[-1, 1]$.)
$$T_0(x) = \cos 0 = 1$$
$$T_1(x) = \cos\theta = x$$
$$T_2(x) = \cos 2\theta = 2\cos^2\theta - 1 = 2x^2 - 1$$
$$T_3(x) = \cos 3\theta = 4\cos^3\theta - 3\cos\theta = 4x^3 - 3x$$

4.2 The Triple Recursion Relation
Recall the trigonometric addition formulas
$$\cos(\alpha \pm \beta) = \cos\alpha \cos\beta \mp \sin\alpha \sin\beta.$$
For any $n \ge 1$, apply these identities to get
$$T_{n+1}(x) = \cos[(n+1)\theta] = \cos n\theta \cos\theta - \sin n\theta \sin\theta$$
$$T_{n-1}(x) = \cos[(n-1)\theta] = \cos n\theta \cos\theta + \sin n\theta \sin\theta.$$
Adding these two equations gives
$$T_{n+1}(x) + T_{n-1}(x) = 2\cos n\theta \cos\theta = 2x\, T_n(x).$$
This is called the triple recursion relation for the Chebyshev polynomials.
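The triple recursion gives a convenient way to evaluate $T_n(x)$ without any trigonometric calls. A sketch (the function name is my own):

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_n(x) via T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    t_prev, t = np.ones_like(x), x  # T_0 and T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-1.0, 1.0, 101)
# agrees with the closed form cos(n arccos x)
print(np.max(np.abs(chebyshev_T(5, x) - np.cos(5 * np.arccos(x)))))
```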

4.3 The Minimum Size Property
Note that $|T_n(x)| \le 1$ for $-1 \le x \le 1$ and for all $n \ge 0$. Also note that
$$T_n(x) = 2^{n-1} x^n + \text{lower-degree terms}, \quad n \ge 1.$$
Introduce a modified version of $T_n(x)$:
$$\widetilde{T}_n(x) = \frac{1}{2^{n-1}} T_n(x) = x^n + \text{lower-degree terms}, \quad n \ge 1.$$
Then
$$|\widetilde{T}_n(x)| \le \frac{1}{2^{n-1}} \quad \text{for } -1 \le x \le 1 \text{ and all } n \ge 1.$$

4.3 (Cont'd)
A polynomial whose highest-degree term has coefficient 1 is called a monic polynomial. The monic polynomial $\widetilde{T}_n(x)$ has size $\frac{1}{2^{n-1}}$ on $[-1, 1]$, and this becomes smaller as the degree $n$ increases. In contrast, $\max_{-1 \le x \le 1} |x^n| = 1$; thus $x^n$ is a monic polynomial whose size does not change with increasing $n$.
Theorem. Let $n \ge 1$ be an integer, and consider all possible monic polynomials of degree $n$. Then the degree-$n$ monic polynomial with the smallest maximum absolute value on $[-1, 1]$ is the modified Chebyshev polynomial $\widetilde{T}_n(x)$, and its maximum value on $[-1, 1]$ is $\frac{1}{2^{n-1}}$.

5.1 A Near-Minimax Approximation Method
For polynomial approximations to $f(x)$, we consider using an interpolating polynomial. The most obvious choice is an evenly spaced set of interpolation nodes on the interval $a \le x \le b$ of interest. Unfortunately, this often gives an interpolating polynomial that is a very poor approximation to $f(x)$. To keep things simple, choose the special approximation interval $-1 \le x \le 1$, and initially consider an approximating polynomial of degree $n = 3$. Let $x_0, x_1, x_2, x_3$ be the interpolation nodes in $[-1, 1]$, and let $c_3(x)$ denote the polynomial of degree $\le 3$ that interpolates $f(x)$ at $x_0, x_1, x_2, x_3$.

5.1 (Cont'd)
It can be shown that the interpolation error is given by
$$f(x) - c_3(x) = \frac{(x - x_0)(x - x_1)(x - x_2)(x - x_3)}{4!} f^{(4)}(c_x)$$
for $-1 \le x \le 1$ and for some $c_x$ in $[-1, 1]$. The nodes are to be chosen so that the maximum value of $|f(x) - c_3(x)|$ on $[-1, 1]$ is made as small as possible. The only quantity on the right-hand side that we can use to influence the size of the error is the degree-4 polynomial
$$\omega(x) = (x - x_0)(x - x_1)(x - x_2)(x - x_3).$$

5.1 (Cont'd)
We want to choose the interpolation nodes $x_0, x_1, x_2, x_3$ so that $\max_{-1 \le x \le 1} |\omega(x)|$ is as small as possible. It is easy to see that
$$\omega(x) = x^4 + \text{lower-degree terms},$$
a monic polynomial of degree 4. By the theorem of the preceding section, the smallest possible value for the maximum is obtained with
$$\omega(x) = \widetilde{T}_4(x) = \frac{T_4(x)}{2^3} = x^4 - x^2 + \frac{1}{8}.$$

5.1 (Cont'd)
This choice of $\omega(x)$ implicitly defines the interpolation nodes: they are the zeros of $\omega(x)$, which in turn are the zeros of $T_4(x)$. In this case, $T_4(x) = \cos 4\theta$ with $x = \cos\theta$, which is zero when
$$4\theta = \pm\frac{\pi}{2}, \pm\frac{3\pi}{2}, \pm\frac{5\pi}{2}, \pm\frac{7\pi}{2}, \ldots$$
$$x = \cos\frac{\pi}{8}, \cos\frac{3\pi}{8}, \cos\frac{5\pi}{8}, \cos\frac{7\pi}{8}, \ldots$$
using $\cos(-\theta) = \cos\theta$. The first four values of $x$ are distinct, and the successive values merely repeat them. Thus the nodes are approximately $\pm 0.382683$ and $\pm 0.923880$.

Example
Let $f(x) = e^x$ on $[-1, 1]$. By evaluating $c_3(x)$ at a large number of points, we find that
$$\max_{-1 \le x \le 1} |e^x - c_3(x)| \approx 0.00666.$$
Compare this error to $\rho_3(e^x) \approx 0.00553$.

5.1 (Cont'd)
The construction of $c_3(x)$ generalizes to finding a degree-$n$ near-minimax approximation to $f(x)$ on $[-1, 1]$. The interpolation error is given by
$$f(x) - c_n(x) = \frac{(x - x_0)\cdots(x - x_n)}{(n+1)!} f^{(n+1)}(c_x), \quad -1 \le x \le 1,$$
and we seek to minimize
$$\max_{-1 \le x \le 1} |(x - x_0)\cdots(x - x_n)|.$$

5.1 (Cont'd)
The polynomial being minimized is monic of degree $n+1$. The minimum is attained by the monic polynomial
$$\widetilde{T}_{n+1}(x) = \frac{1}{2^n} T_{n+1}(x).$$
Thus the interpolation nodes are the zeros of $T_{n+1}(x)$, which are given by
$$x_i = \cos\left(\frac{2i+1}{2n+2}\,\pi\right), \quad i = 0, 1, \ldots, n.$$
The near-minimax approximation $c_n(x)$ of degree $n$ is obtained by interpolating $f(x)$ at these $n+1$ nodes on $[-1, 1]$.
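A sketch of the whole procedure for $f(x) = e^x$ with $n = 3$ (np.polyfit through exactly $n+1$ points reproduces the interpolating polynomial):

```python
import numpy as np

n = 3
i = np.arange(n + 1)
nodes = np.cos((2 * i + 1) * np.pi / (2 * n + 2))  # zeros of T_4
coeffs = np.polyfit(nodes, np.exp(nodes), n)       # the interpolant c_3
xs = np.linspace(-1.0, 1.0, 5001)
err = np.max(np.abs(np.exp(xs) - np.polyval(coeffs, xs)))
print(err)  # close to the 0.00666 quoted in the example above
```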

5.2 Odd and Even Functions
We say that $f(x)$ is even if $f(-x) = f(x)$ for all $x$; such functions have graphs symmetric about the $y$-axis. We say $f(x)$ is odd if $f(-x) = -f(x)$ for all $x$; such functions are said to be symmetric about the origin. If $f(x)$ is even or odd on $[-1, 1]$, then $n$ should be chosen in a more restricted way: if $f(x)$ is odd, choose $n$ even; if $f(x)$ is even, choose $n$ odd. This results in $c_n(x)$ having degree only $n - 1$, but it gives an approximation of the appropriate form.

6.1 Least Squares Approximation
We now turn to polynomial approximation with a small average error over the interval of approximation. If a function $f(x)$ is approximated by $p(x)$ over $[a, b]$, the average error is defined by
$$E(p; f) \equiv \sqrt{\frac{1}{b-a} \int_a^b \left(f(x) - p(x)\right)^2 dx}.$$
This is also called the root-mean-square error. Note that minimizing $E(p; f)$ over different choices of $p(x)$ is equivalent to minimizing $\int_a^b (f(x) - p(x))^2\, dx$.

Example
Let $f(x) = e^x$, and let $p(x) = \alpha_0 + \alpha_1 x$. We want to choose $\alpha_0, \alpha_1$ so as to minimize the integral
$$g(\alpha_0, \alpha_1) \equiv \int_{-1}^{1} (e^x - \alpha_0 - \alpha_1 x)^2\, dx.$$
Its minimum can be found by solving
$$\frac{\partial g}{\partial \alpha_0} = 0 \quad \text{and} \quad \frac{\partial g}{\partial \alpha_1} = 0.$$
Solving this system of equations yields
$$\alpha_0 = \frac{e - e^{-1}}{2} \approx 1.1752 \quad \text{and} \quad \alpha_1 = 3e^{-1} \approx 1.1036.$$

Example (Cont'd)
We denote the resulting linear approximation by $l_1(x) = \alpha_0 + \alpha_1 x$. It is called the best linear approximation to $e^x$ in the sense of least squares. The error is
$$\max_{-1 \le x \le 1} |e^x - l_1(x)| \approx 0.439.$$

Approximation          | Maximum Error | RMS Error
Taylor $t_1(x)$        | 0.718         | 0.246
Least squares $l_1(x)$ | 0.439         | 0.162
Chebyshev $c_1(x)$     | 0.372         | 0.184
Minimax $m_1(x)$       | 0.279         | 0.190

6.1 (Cont'd)
Now consider $E(p; f)$ for a general function $f(x)$ on $[a, b]$. We seek a polynomial $p(x)$ of degree $\le n$ that minimizes $E(p; f)$. Writing
$$p(x) = \alpha_0 + \alpha_1 x + \cdots + \alpha_n x^n,$$
we define
$$g(\alpha_0, \ldots, \alpha_n) \equiv \int_a^b (f(x) - \alpha_0 - \alpha_1 x - \cdots - \alpha_n x^n)^2\, dx,$$
leading to a set of $n+1$ equations that must be satisfied by a minimizing set $\alpha_0, \alpha_1, \ldots, \alpha_n$ for $g$.

6.1 (Cont'd)
A minimizer for $g(\alpha_0, \ldots, \alpha_n)$ can be found from the conditions
$$\frac{\partial g}{\partial \alpha_i} = 0, \quad i = 0, 1, \ldots, n.$$
For the special case $[a, b] = [0, 1]$, the linear system is
$$\sum_{j=0}^{n} \frac{\alpha_j}{i + j + 1} = \int_0^1 x^i f(x)\, dx, \quad i = 0, 1, \ldots, n.$$
The coefficient matrix here is the Hilbert matrix; the linear system is ill-conditioned and difficult to solve accurately even for $n = 5$.

6.2 Legendre Polynomials
A better approach to minimizing $E(p; f)$ requires the introduction of a special set of polynomials, the Legendre polynomials, defined by $P_0(x) = 1$ and
$$P_n(x) = \frac{1}{n!\, 2^n} \frac{d^n}{dx^n}\left[(x^2 - 1)^n\right], \quad n = 1, 2, \ldots$$
For example,
$$P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}(3x^2 - 1), \quad P_3(x) = \tfrac{1}{2}(5x^3 - 3x), \quad P_4(x) = \tfrac{1}{8}(35x^4 - 30x^2 + 3).$$

6.2 (Cont'd)
We introduce the inner product $(f, g) = \int_a^b f(x) g(x)\, dx$ (here with $[a, b] = [-1, 1]$). Properties:
- $\deg P_n = n$ and $P_n(1) = 1$ for $n \ge 0$.
- Triple recursion: $P_{n+1}(x) = \frac{2n+1}{n+1}\, x P_n(x) - \frac{n}{n+1}\, P_{n-1}(x)$, $n \ge 1$.
- Orthogonality: $(P_i, P_j) = 0$ for $i \neq j$, and $(P_i, P_i) = \frac{2}{2i+1}$.
- All zeros of $P_n(x)$ are simple and lie in $[-1, 1]$.
- Every polynomial $p(x)$ of degree $\le n$ can be written as $p(x) = \sum_{i=0}^{n} \beta_i P_i(x)$, with the choice of $\beta_0, \beta_1, \ldots, \beta_n$ uniquely determined by $p(x)$.

6.3 Solving for the Least Squares Approximation
It is sufficient to solve the problem on the interval $[-1, 1]$. We seek to minimize
$$g \equiv (f - p,\, f - p) = \left(f - \sum_{i=0}^{n} \beta_i P_i,\; f - \sum_{i=0}^{n} \beta_i P_i\right).$$
Using the orthogonality of the $P_i(x)$,
$$g = (f, f) - 2\sum_{i=0}^{n} \beta_i (f, P_i) + \sum_{i=0}^{n} \beta_i^2 (P_i, P_i) = (f, f) - \sum_{i=0}^{n} \frac{(f, P_i)^2}{(P_i, P_i)} + \sum_{i=0}^{n} (P_i, P_i)\left[\beta_i - \frac{(f, P_i)}{(P_i, P_i)}\right]^2.$$

6.3 (Cont'd)
$g$ is smallest when
$$\beta_i = \frac{(f, P_i)}{(P_i, P_i)}, \quad i = 0, 1, \ldots, n,$$
and the minimum for this choice of coefficients is
$$g = (f, f) - \sum_{i=0}^{n} \frac{(f, P_i)^2}{(P_i, P_i)}.$$
We call
$$l_n(x) = \sum_{i=0}^{n} \frac{(f, P_i)}{(P_i, P_i)}\, P_i(x)$$
the least squares approximation of degree $n$ to $f(x)$ on $[-1, 1]$.

Example
For $f(x) = e^x$ on $[-1, 1]$, the cubic least squares approximation is
$$l_3(x) = 0.996294 + 0.997955x + 0.536722x^2 + 0.176139x^3.$$

Approximation          | Maximum Error | RMS Error
Taylor $t_3(x)$        | 0.0516        | 0.0145
Least squares $l_3(x)$ | 0.0112        | 0.00334
Chebyshev $c_3(x)$     | 0.00666       | 0.00384
Minimax $m_3(x)$       | 0.00553       | 0.00388
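The coefficients of $l_3$ can be reproduced with NumPy's Legendre utilities; here the inner products $(f, P_i)$ are approximated by Gauss-Legendre quadrature (a sketch, not part of the slides):

```python
import numpy as np
from numpy.polynomial import legendre as leg

xg, wg = leg.leggauss(20)        # 20-point Gauss-Legendre rule on [-1, 1]
f = np.exp(xg)
n = 3
# beta_i = (f, P_i) / (P_i, P_i), with (P_i, P_i) = 2 / (2i + 1)
beta = [(2 * i + 1) / 2 * np.sum(wg * f * leg.legval(xg, [0.0] * i + [1.0]))
        for i in range(n + 1)]
coeffs = leg.leg2poly(beta)      # convert sum beta_i P_i(x) to powers of x
print(coeffs)  # close to [0.996294, 0.997955, 0.536722, 0.176139]
```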