Biostatistics Department Technical Report BST2017-001

Calculation of Orthogonal Polynomials and Their Derivatives

Charles R. Katholi, PhD
May 2017

Department of Biostatistics
School of Public Health
University of Alabama at Birmingham
ckatholi@uab.edu

Calculation of Orthogonal Polynomials and Their Derivatives

Charles R. Katholi, PhD

May 27, 2017

1 Introduction

The purpose of this report is to explore the calculation of orthogonal polynomials and their derivatives. The basic method follows the approach given by Emerson (1968) [2]. Given a set of points x_1, ..., x_n, polynomials p_j(x_i), j = 0, ..., m, where p_j is of degree j, are found such that the matrix of values

    P = \begin{pmatrix}
        p_0(x_1) & p_1(x_1) & p_2(x_1) & \cdots & p_m(x_1) \\
        p_0(x_2) & p_1(x_2) & p_2(x_2) & \cdots & p_m(x_2) \\
        p_0(x_3) & p_1(x_3) & p_2(x_3) & \cdots & p_m(x_3) \\
        \vdots   & \vdots   & \vdots   &        & \vdots   \\
        p_0(x_n) & p_1(x_n) & p_2(x_n) & \cdots & p_m(x_n)
        \end{pmatrix}                                                (1.1)

is orthonormal, that is, such that P^T P = I_{m+1}, where I_{m+1} is the (m+1) x (m+1) identity matrix. From this matrix of values, recursion coefficients A_j, B_j and C_j, j = 1, ..., m, are found and used to calculate values of the polynomials at any point x. It will be shown that the derivatives of the polynomials can also be found recursively, utilizing the constants A_j, B_j and C_j defined in the next section.

2 Derivations of the methods

2.1 Calculation of the matrix P of equation (1.1)

We shall utilize the notation of Emerson's paper [2]. Thus, let x_i, i = 1, ..., n, be given values of x and let w_i be a weight associated with each x_i. We shall find the A_j, B_j and C_j such that at any x the values of the orthogonal polynomials are given by the simple recursion (equation (6) in Emerson),

    p_j(x) = (A_j x + B_j) p_{j-1}(x) - C_j p_{j-2}(x),   j = 2, 3, ..., m < n,       (2.1)

where p_{-1}(x) = 0 for all x and p_0(x) = (\sum_{i=1}^{n} w_i)^{-1/2} for all x. In the derivation of these recursion constants we make use of the orthonormality conditions

    \sum_{i=1}^{n} w_i p_j(x_i) p_k(x_i) = 1 if j = k, and 0 if j != k.               (2.2)

The values of A_j, B_j and C_j are then found recursively from the equations

    A_j = [ \sum_{i=1}^{n} w_i x_i^2 p_{j-1}^2(x_i)
            - ( \sum_{i=1}^{n} w_i x_i p_{j-1}^2(x_i) )^2
            - ( \sum_{i=1}^{n} w_i x_i p_{j-1}(x_i) p_{j-2}(x_i) )^2 ]^{-1/2}
    B_j = -A_j \sum_{i=1}^{n} w_i x_i p_{j-1}^2(x_i)                                  (2.3)
    C_j = A_j \sum_{i=1}^{n} w_i x_i p_{j-1}(x_i) p_{j-2}(x_i)

Given w_i and x_i, i = 1, ..., n, the calculation of the A_j, B_j and C_j can be summarized in the following steps:

1. For the weights w_i calculate s_w = [\sum_{i=1}^{n} w_i]^{-1/2}. For i = 1, ..., n set p_{-1}(x_i) = 0 and p_0(x_i) = s_w.

2. For j = 1, 2, ..., m:

   (i) Calculate

        s_1 = \sum_{i=1}^{n} w_i x_i^2 p_{j-1}^2(x_i)
        s_2 = \sum_{i=1}^{n} w_i x_i p_{j-1}^2(x_i)
        s_3 = \sum_{i=1}^{n} w_i x_i p_{j-1}(x_i) p_{j-2}(x_i)

   (ii) Calculate

        A_j = { s_1 - s_2^2 - s_3^2 }^{-1/2},   B_j = -A_j s_2,   C_j = A_j s_3

   (iii) For i = 1, 2, ..., n calculate

        p_j(x_i) = (A_j x_i + B_j) p_{j-1}(x_i) - C_j p_{j-2}(x_i)

3. End of the loop started at step 2.

2.2 Approximating a function using the orthogonal polynomials

At this point the n x (m+1) orthogonal matrix P has been calculated. Next, given observations of a function y = f(x) for x = x_1, x_2, ..., x_n, we determine coefficients α_j, j = 0, ..., m, such that f(x) is approximated on the range of the x_i by f(x) ≈ \sum_{j=0}^{m} α_j p_j(x). This is accomplished by finding the least squares solution of the n x (m+1) system of linear equations

    P α = y,   y = (y_1, y_2, ..., y_n)^T.

Because of the orthogonality of P, the solution is trivially found to be α = P^T y. Noting that C_1 is arbitrary and so can be set to zero, the approximation of f(x) can be found for any x by setting p_0(x) = s_w, using the recursion of equation (2.1) to calculate p_1(x), ..., p_m(x), and then forming the linear combination \hat{f}(x) = \sum_{j=0}^{m} α_j p_j(x).
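To make steps 1-3 and the least squares fit of Section 2.2 concrete, here is a minimal sketch in Python/NumPy. The report's own computations were done in Fortran; this translation, and the function name emerson_orpoly, are illustrative assumptions, not the author's code.

    import numpy as np

    def emerson_orpoly(x, w, m):
        """Steps 1-3 of Section 2.1: return the n x (m+1) matrix P of
        equation (1.1) and the recursion constants, stored so that
        A[j], B[j], C[j] belong to p_j (index 0 is an unused placeholder)."""
        x, w = np.asarray(x, float), np.asarray(w, float)
        n = x.size
        P = np.zeros((n, m + 1))
        A, B, C = np.zeros(m + 1), np.zeros(m + 1), np.zeros(m + 1)
        P[:, 0] = 1.0 / np.sqrt(w.sum())      # p_0(x_i) = s_w, a constant
        p_jm2 = np.zeros(n)                   # p_{-1}(x_i) = 0
        for j in range(1, m + 1):
            s1 = np.sum(w * x**2 * P[:, j - 1]**2)
            s2 = np.sum(w * x * P[:, j - 1]**2)
            s3 = np.sum(w * x * P[:, j - 1] * p_jm2)
            A[j] = 1.0 / np.sqrt(s1 - s2**2 - s3**2)
            B[j] = -A[j] * s2
            C[j] = A[j] * s3
            P[:, j] = (A[j] * x + B[j]) * P[:, j - 1] - C[j] * p_jm2
            p_jm2 = P[:, j - 1]
        return P, A, B, C

    # With unit weights, P^T P = I_{m+1}, and the least squares coefficients
    # of Section 2.2 are simply alpha = P^T y.
    x = np.linspace(-1.0, 1.0, 11)
    P, A, B, C = emerson_orpoly(x, np.ones_like(x), 4)
    assert np.allclose(P.T @ P, np.eye(5))
    alpha = P.T @ (1 - x + x**2 + 3 * x**3)   # fit a cubic (cf. Section 3.2)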

2.3 The derivative of the model function

If we wish to use the model to estimate the derivative of the approximating polynomial function, we require the derivatives p_j'(x), j = 0, 1, ..., m. Noting that p_0(x) is a constant independent of x, so that p_0'(x) = 0 for all x, and that C_1 = 0, we can find p_j'(x) by differentiating equation (2.1) to obtain the recursion

    p_j'(x) = A_j p_{j-1}(x) + (A_j x + B_j) p_{j-1}'(x) - C_j p_{j-2}'(x),   j = 1, 2, ..., m.   (2.4)

Since the recurrence of equation (2.4) requires p_{j-1}(x), it is necessary for any j to first apply the recursion of equation (2.1) and then the recursion of equation (2.4). The justification for equation (2.4) is given in Appendix A.
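As a companion to the construction sketch above, the paired recursions (2.1) and (2.4) can be evaluated together at an arbitrary point. The helper below is hypothetical (not the author's ORPOLY.F95); it takes the A, B, C arrays from emerson_orpoly and s_w = (\sum w_i)^{-1/2}:

    def eval_orpoly(t, A, B, C, s_w, m):
        """Return arrays p, dp with p[j] = p_j(t) and dp[j] = p_j'(t),
        j = 0..m, via equations (2.1) and (2.4)."""
        p = np.zeros(m + 1)
        dp = np.zeros(m + 1)           # p_0 is constant, so dp[0] = 0
        p[0] = s_w
        p_jm2, dp_jm2 = 0.0, 0.0       # p_{-1} = 0 and p_{-1}' = 0; C_1 = 0
        for j in range(1, m + 1):
            p[j] = (A[j] * t + B[j]) * p[j - 1] - C[j] * p_jm2
            dp[j] = A[j] * p[j - 1] + (A[j] * t + B[j]) * dp[j - 1] - C[j] * dp_jm2
            p_jm2, dp_jm2 = p[j - 1], dp[j - 1]
        return p, dp

A fitted model and its derivative at t are then alpha @ p and alpha @ dp, which is how derivative curves like those in the figures of Section 3 could be produced.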

3 Some examples

3.1 Approximating the Runge function

A well-known example from elementary numerical analysis is the interpolation of the Runge function

    f(x) = 1 / (1 + 25 x^2),   x ∈ [-1, 1].                                           (3.1)

Standard divided-difference interpolation based on equally spaced points fails completely, due to rapid oscillation of the polynomial approximation near -1 and 1 [1], [3]. The problem is caused by the equal spacing of the interpolation points and becomes worse as the degree of the polynomial increases. It can be remedied by basing the interpolation table on the Chebyshev points on the interval [a, b],

    x_j = ( a + b - (a - b) \cos[ (2j - 1)π / (2n) ] ) / 2,   j = 1, ..., n.          (3.2)

In this case a = -1 and b = 1. Rather than approach this problem through an interpolating polynomial, we shall use the method of least squares for approximation by a set of orthogonal polynomials based on function evaluations at the Chebyshev points. We consider two approximations based on n = 51 Chebyshev points on [-1, 1] and polynomials of maximum degree 10 and 20, respectively. Noting that the Runge function is symmetric about x = 0, we expect that only the polynomials of even degree will contribute to the approximation, and that proves to be true. Figures 1 and 2 below illustrate the fits based on polynomials up to degree 10 and 20, respectively. The ANOVA table associated with the approximation based on n = 51 points and polynomials up to degree 10 is

    Source       df   SS        MS        F
    Regression   11   5.13254   0.46659   307.155
    Error        40   0.06763   0.00159
    Total        51   5.19331

    R^2 = 0.9883

Table 1: Analysis of variance summary for approximating the Runge function by a set of orthogonal polynomials up to degree 10, with the x values taken to be the Chebyshev points on [-1, 1], based on n = 51 values. The values of the function are not assumed to be subject to any additive errors beyond normal computational rounding errors.

Similarly, taking n = 51 and the maximum polynomial degree m = 20 leads to Table 2.

    Source       df   SS        MS          F
    Regression   21   5.19216   0.2474590   6491.4
    Error        30   0.00114   0.0000381
    Total        51   5.19331

    R^2 = 0.9998

Table 2: Analysis of variance summary for approximating the Runge function by a set of orthogonal polynomials up to degree 20, with the x values taken to be the Chebyshev points on [-1, 1], based on n = 51 values. The values of the function are not assumed to be subject to any additive errors beyond normal computational rounding errors.

In Figure 1 we note that the polynomial model of degree 10 fails to capture the peak of the function at x = 0 and shows significant oscillation in the tails of the function. These differences are made clear in the second figure, which plots the derivatives. The large and rapid changes of the derivative of the polynomial model near -1 and 1 make the extent of the poor fit clear. On the other hand, the 20th-degree polynomial approximation shown in Figure 2 is a great improvement: the oscillation in the tails is greatly reduced and the fit at x = 0 is much better. Again, the derivatives display the remaining problems in the tails of the function. This example is a genuinely hard problem, and the approximation by polynomials of maximum degree 20 does much to tame the bad behavior.

Figure 1: Approximation of the Runge function f(x) = (1 + 25x^2)^{-1} by orthogonal polynomials of maximum degree 10 by the method of least squares.

Figure 2: Approximation of the Runge function f(x) = (1 + 25x^2)^{-1} by orthogonal polynomials of maximum degree 20 by the method of least squares.
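For readers without access to the author's Fortran program, the flavor of this experiment can be reproduced with the hypothetical Python helpers sketched in Section 2, assuming unit weights:

    n, m = 51, 20
    jj = np.arange(1, n + 1)
    x = np.cos((2 * jj - 1) * np.pi / (2 * n))   # equation (3.2) with a = -1, b = 1
    y = 1.0 / (1.0 + 25.0 * x**2)                # the Runge function (3.1)
    w = np.ones(n)
    P, A, B, C = emerson_orpoly(x, w, m)
    alpha = P.T @ y                              # least squares coefficients
    # By the symmetry noted above, the odd-degree entries of alpha should be
    # numerically negligible.
    s_w = 1.0 / np.sqrt(w.sum())
    for t in (-0.95, 0.0, 0.95):
        p, dp = eval_orpoly(t, A, B, C, s_w, m)
        print(t, alpha @ p, alpha @ dp)          # fitted value and derivative at t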

3.2 A true polynomial data set

For this example we choose the polynomial f(x) = 1 - x + x^2 + 3x^3 and generate pseudo-data with the following SAS code:

    data cubic;   /* data set name assumed; the extracted listing omitted the DATA statement */
      seed = 12345;
      call streaminit(seed);
      do i = 1 to 30;
        j = i - 1;
        x = mod(j, 5)*0.25 + 0.5*rand('uniform');
        y = 1 - x + x**2 + 3*x**3 + 0.5*rand('normal');
        output;
      end;
    run;

Note that in this example the values of x have been subjected to a random element. This is a device for generating values that are different but more or less the same. The y values are generated subject to an error that is distributed N(0, 0.25). We would expect to be able to capture the third-degree polynomial nearly exactly, and this is precisely what we find. The results of the regression are summarized in the following analysis of variance table.

    Source       df   SS          MS          F
    Regression    5   1328517.6   265703.52   1434650.0
    Error        25   4.6301195   0.1852044
    Total        30   1328522.3

    R^2 = 0.9999997

Table 3: Analysis of variance summary for fitting a simple cubic polynomial using orthogonal polynomials up to and including degree four. In this case the values of the observations are subject to an additive error distributed as N(0, 0.25).

Although not shown, the coefficient for the fourth-degree polynomial is small relative to the other coefficients and is not significant. For this example the plots of the fit and of the derivative are so close to the true values as to be indistinguishable.
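The same scheme can be cross-checked with the Python helpers sketched earlier. NumPy's generator stands in for the SAS random stream, so the draws, and hence the exact sums of squares, will differ; only the qualitative conclusion should match:

    rng = np.random.default_rng(12345)           # seed reused for flavor, not equivalence
    jj = np.arange(30)
    x = (jj % 5) * 0.25 + 0.5 * rng.uniform(size=30)
    y = 1 - x + x**2 + 3 * x**3 + 0.5 * rng.normal(size=30)
    P, A, B, C = emerson_orpoly(x, np.ones(30), 4)
    alpha = P.T @ y
    print(alpha)   # the degree-4 coefficient should be small relative to the rest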

4 Summary

We have shown how to calculate the derivatives of a set of orthogonal polynomials given the constants for the three-term recursion which generates the polynomial values. We have seen that these same constants can be used in a recursion to calculate the derivatives of the polynomials, and hence of any model function based on a linear combination of the polynomials. A downside of this approach for SAS users is that the procedure ORPOL does not return the recurrence coefficients that are calculated by the methods described in this report [2]. The computations for this report were done using a program written in Fortran 2003. A subroutine called ORPOLY.F95 is available from the author.

Appendices

A Derivation of equation (2.4)

Note that in the construction of the recursion given in equation (2.1), the constants A_j, B_j and C_j, j = 1, 2, ..., m, are found recursively, and once they are found, the value of the polynomials at any point x can be found using equation (2.1),

    p_j(x) = (A_j x + B_j) p_{j-1}(x) - C_j p_{j-2}(x),   j = 1, 2, ..., m < n.

In particular, given any x we can use the recursion to find the values of the polynomials at the points x + h, where h is a small number. Then by elementary calculus we know that

    p_j'(x) = \lim_{h \to 0} ( p_j(x + h) - p_j(x) ) / h,

and from application of the Taylor expansion we can write

    p_j(x + h) = p_j(x) + h p_j'(x) + O(h^2).                                         (A.1)

From equation (2.1) we have that

    p_j(x + h) = (A_j (x + h) + B_j) p_{j-1}(x + h) - C_j p_{j-2}(x + h).

If we now replace p_k(x + h), k = j, j-1, j-2, with the appropriate expressions from equation (A.1) in equation (2.1), we obtain

    p_j(x + h) = (A_j x + B_j)[ p_{j-1}(x) + h p_{j-1}'(x) ]
                 + A_j h [ p_{j-1}(x) + h p_{j-1}'(x) ]
                 - C_j [ p_{j-2}(x) + h p_{j-2}'(x) ] + O(h^2).

Subtracting equation (2.1) and dividing by h then leads to

    ( p_j(x + h) - p_j(x) ) / h = (A_j x + B_j) p_{j-1}'(x) - C_j p_{j-2}'(x)
                                  + A_j p_{j-1}(x) + h A_j p_{j-1}'(x) + O(h).        (A.2)

Taking the limit of both sides of this equation as h → 0 leads to the result given in equation (2.4).

References

[1] de Boor, Carl (2001), A Practical Guide to Splines, revised edition, Springer, New York.

[2] Emerson, Phillip L. (1968), Numerical Construction of Orthogonal Polynomials from a General Recurrence Formula, Biometrics, Vol. 24, pp. 695-701.

[3] Forsythe, G., Malcolm, M. and Moler, C. (1977), Computer Methods for Mathematical Computations, Prentice-Hall Series in Automatic Computation, Prentice-Hall.