Advanced Computational Fluid Dynamics AA215A
Lecture 2: Approximation Theory
Antony Jameson
Winter Quarter, 2016, Stanford, CA
Last revised on January 7, 2016

Contents

2 Approximation Theory
2.1 Least Squares Approximation and Projection
2.2 Alternative Proof of Least Squares Solution
2.3 Bessel's Inequality
2.4 Orthonormal Systems
2.5 Construction of Orthogonal Functions
2.6 Orthogonal Polynomials
2.7 Zeros of Orthogonal Polynomials
2.8 Legendre Polynomials
2.9 Chebyshev Polynomials
2.10 Best Approximation Polynomial
2.11 Best Interpolating Polynomial
2.12 Jacobi Polynomials
2.13 Fourier Series
2.14 Error Estimate for Fourier Series
2.15 Orthogonality Relations for Discrete Fourier Series
2.16 Trigonometric Interpolation
2.17 Error Estimate for Trigonometric Interpolation
2.18 Fourier Cosine Series
2.19 Cosine Interpolation
2.20 Chebyshev Expansions
2.21 Chebyshev Interpolation
2.22 Portraits of Fourier, Legendre and Chebyshev

Lecture 2: Approximation Theory

2.1 Least Squares Approximation and Projection

Suppose that we wish to approximate a given function $f$ by a linear combination of independent basis functions $\phi_j$, $j = 1, \ldots, n$. In general we wish to choose the coefficients $c_j$ to minimize the error

$\left\| f - \sum_{j=1}^n c_j \phi_j \right\|$    (2.1)

in some norm. Ideally we might use the infinity norm, but it turns out to be computationally very expensive to do this, requiring an iterative process. The best known method for finding the best approximation in the infinity norm is the exchange algorithm (M. J. D. Powell). On the other hand it is very easy to calculate the best approximation in the Euclidean norm, as follows. Choose the $c_j$ to minimize the least squares integral

$J(c_1, c_2, \ldots, c_n) = \int \left( \sum_{j=1}^n c_j \phi_j - f \right)^2 dx$    (2.2)

which is equivalent to minimizing the Euclidean norm. Then we require

$0 = \frac{\partial J}{\partial c_i} = 2 \int \left( \sum_{j=1}^n c_j \phi_j - f \right) \phi_i \, dx$    (2.3)

Thus the best approximation is

$f^* = \sum_{j=1}^n c_j^* \phi_j$    (2.4)

where

$\left( \sum_{j=1}^n c_j^* \phi_j - f, \, \phi_i \right) = (f^* - f, \phi_i) = 0, \quad \text{for } i = 1, 2, \ldots, n$    (2.5)

or

$\sum_{j=1}^n a_{ij} c_j^* = b_i$    (2.6)

where

$a_{ij} = (\phi_i, \phi_j), \quad b_i = (f, \phi_i)$    (2.7)

Notice that since $f^*$ is a linear combination of the $\phi_j$, these equations state that

$(f - f^*, f^*) = 0$    (2.8)

Thus the least squares approximation $f^*$ is orthogonal to the error $f - f^*$. This means that $f^*$ is the projection of $f$ onto the space spanned by the basis functions $\phi_j$, in the same way that a three dimensional vector might be projected onto a plane. If the $\phi_j$ form an orthonormal set,

$a_{ij} = \delta_{ij}$    (2.9)

where $\delta_{ij} = 1$ if $i = j$ and $0$ if $i \ne j$, and we have directly

$c_i = (f, \phi_i)$    (2.10)
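As a concrete illustration, the normal equations (2.6)-(2.7) can be assembled and solved numerically. The following is a minimal sketch; the monomial basis, the target function $e^x$, and the quadrature grid are illustrative choices, not from the lecture.

```python
# Least squares approximation via the normal equations (2.6)-(2.7).
import numpy as np

def trapz(y, x):
    # composite trapezoidal rule for the inner product integrals
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

x = np.linspace(0.0, 1.0, 2001)            # quadrature grid on [0, 1]
f = np.exp(x)                              # function to approximate
phi = [x**j for j in range(4)]             # basis functions phi_j = x^j

A = np.array([[trapz(pi * pj, x) for pj in phi] for pi in phi])  # a_ij
b = np.array([trapz(f * pi, x) for pi in phi])                   # b_i
c = np.linalg.solve(A, b)                  # coefficients of f*

f_star = sum(cj * pj for cj, pj in zip(c, phi))
print("max |f - f*|:", np.abs(f - f_star).max())
# The error f - f* is orthogonal to every basis function, eq. (2.5):
print([round(trapz((f - f_star) * pi, x), 10) for pi in phi])
```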

2.2 Alternative Proof of Least Squares Solution

Let

$f^* = \sum_j c_j^* \phi_j$    (2.11)

be the best approximation and consider another approximation

$\sum_j c_j \phi_j$    (2.12)

Then, if $f - f^*$ is orthogonal to all the $\phi_j$,

$\left\| \sum_j c_j \phi_j - f \right\|^2 = \left\| \sum_j (c_j - c_j^*) \phi_j - (f - f^*) \right\|^2$    (2.13)

$= \left\| \sum_j (c_j - c_j^*) \phi_j \right\|^2 + \| f - f^* \|^2 \ge \| f - f^* \|^2$    (2.14)

Recall from the Lecture 1 notes that $\| \cdot \| = (\cdot, \cdot)^{1/2}$.

2.3 Bessel's Inequality

Since $f^*$ is orthogonal to $f - f^*$,

$(f^*, f - f^*) = 0$    (2.15)

and the Pythagorean theorem gives

$\| f \|^2 = \| f^* + (f - f^*) \|^2 = \| f^* \|^2 + \| f - f^* \|^2$    (2.16)

Also, if the $\phi_j$ are orthonormal,

$\| f^* \|^2 = \sum_j c_j^2$    (2.17)

so we have Bessel's inequality

$\sum_j c_j^2 \le \| f \|^2$    (2.18)

Thus the series $\sum_j c_j^2$ is convergent, and if $\| f - f^* \| \to 0$ as $n \to \infty$ we have Parseval's formula

$\sum_j c_j^2 = \| f \|^2$    (2.19)

This is the case for orthogonal polynomials because of the Weierstrass theorem.

2.4 Orthonormal Systems

The functions $\phi_i$ form an orthonormal set if they are orthogonal and $\| \phi_i \| = 1$. We can write

$(\phi_i, \phi_j) = \delta_{ij}$    (2.20)

where $\delta_{ij} = 1$ if $i = j$ and $0$ if $i \ne j$. An example of an orthogonal set is

$\phi_j = \cos(jx), \quad \text{for } j = 0, 1, \ldots, n$    (2.21)

on the interval $[0, \pi]$. Then if $j \ne k$

$(\phi_j, \phi_k) = \int_0^\pi \cos(jx) \cos(kx) \, dx = \frac{1}{2} \int_0^\pi \left( \cos(j-k)x + \cos(j+k)x \right) dx = 0$    (2.22)

Also, for $j > 0$,

$(\phi_j, \phi_j) = \int_0^\pi \cos^2(jx) \, dx = \frac{1}{2} \int_0^\pi \left( 1 + \cos(2jx) \right) dx = \frac{\pi}{2}$    (2.23)

2.5 Construction of Orthogonal Functions

This can be done by Gram Schmidt orthogonalization. Define the inner product

$(f, g) = \int_a^b f(x) g(x) \, dx$    (2.24)

Functions $g_i$ are called independent if

$\sum_{i=1}^n \alpha_i g_i = 0 \text{ implies } \alpha_i = 0 \text{ for } i = 1, \ldots, n$    (2.25)

Let $g_i(x)$, $i = 1, 2, \ldots, n$, be any set of independent functions. Then set

$f_1(x) = d_1 g_1(x)$    (2.26)

$f_2(x) = d_2 \left( g_2(x) - c_{21} f_1(x) \right)$    (2.27)

$\vdots$

$f_n(x) = d_n \left( g_n(x) - c_{n1} f_1(x) - \ldots - c_{n,n-1} f_{n-1}(x) \right)$    (2.28)

We require

$(f_i, f_j) = \delta_{ij}$    (2.29)

This gives

$d_1 = (g_1, g_1)^{-1/2}$    (2.30)

Then $0 = (f_2, f_1) = d_2 \left( (f_1, g_2) - c_{21} \right)$, and in general

$c_{jk} = (f_k, g_j)$    (2.31)

while the $d_k$ are easily obtained from

$(f_k, f_k) = 1$    (2.32)
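A minimal sketch of the Gram Schmidt procedure of Section 2.5, applied to the monomials on $[-1, 1]$; the grid and quadrature are illustrative choices.

```python
# Gram-Schmidt orthonormalization of functions, eqs. (2.26)-(2.32).
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

x = np.linspace(-1.0, 1.0, 4001)
inner = lambda u, v: trapz(u * v, x)      # (f, g) = int_a^b f g dx, eq. (2.24)

g = [x**i for i in range(4)]              # independent functions g_i
f_orth = []
for gi in g:
    # subtract projections onto previous f_k: c_jk = (f_k, g_j), eq. (2.31)
    fi = gi - sum(inner(fk, gi) * fk for fk in f_orth)
    fi = fi / np.sqrt(inner(fi, fi))      # normalize so (f_i, f_i) = 1
    f_orth.append(fi)

# Check orthonormality: the Gram matrix should be close to the identity
G = np.array([[inner(u, v) for v in f_orth] for u in f_orth])
print(np.round(G, 6))
# Up to normalization, these are the Legendre polynomials of Section 2.8
```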

2.6 Orthogonal Polynomials

A similar approach can be used to construct orthogonal polynomials. Given orthogonal polynomials $\phi_j(x)$, $j = 0, 1, \ldots, n$, we can generate a polynomial of degree $n+1$ by multiplying $\phi_n(x)$ by $x$, and we can make it orthogonal to the previous $\phi_j(x)$ by subtracting a linear combination of these polynomials. Thus, we set

$\phi_{n+1}(x) = \alpha_n x \phi_n(x) - \sum_{i=0}^n c_{ni} \phi_i(x)$    (2.33)

Now,

$\alpha_n (x \phi_n, \phi_j) - \sum_{i=0}^n c_{ni} (\phi_i, \phi_j) = 0$    (2.34)

But $(\phi_i, \phi_j) = 0$ for $i \ne j$, so

$c_{nj} \| \phi_j \|^2 = \alpha_n (x \phi_n, \phi_j) = \alpha_n (\phi_n, x \phi_j)$    (2.35)

Since $x \phi_j$ is a polynomial of order $j+1$, this vanishes except for $j = n, n-1$. Thus

$\phi_{n+1}(x) = \alpha_n x \phi_n(x) - c_{nn} \phi_n(x) - c_{n,n-1} \phi_{n-1}(x)$    (2.36)

where

$c_{nn} = \frac{\alpha_n (\phi_n, x \phi_n)}{\| \phi_n \|^2}, \quad c_{n,n-1} = \frac{\alpha_n (\phi_{n-1}, x \phi_n)}{\| \phi_{n-1} \|^2}$    (2.37)

and $\alpha_n$ is arbitrary.

2.7 Zeros of Orthogonal Polynomials

An orthogonal polynomial $P_n(x)$ for the interval $[a, b]$ has $n$ distinct zeros in $[a, b]$. Suppose it had only $k < n$ zeros $x_1, x_2, \ldots, x_k$ at which $P_n(x)$ changes sign. Then consider

$Q_k(x) = (x - x_1) \ldots (x - x_k)$    (2.38)

where for $k = 0$ we define $Q_0(x) = 1$. Then $Q_k(x)$ changes sign at the same points as $P_n(x)$, and

$\int_a^b P_n(x) Q_k(x) w(x) \, dx \ne 0$    (2.39)

But this is impossible since $P_n(x)$ is orthogonal to all polynomials of degree $< n$.

2.8 Legendre Polynomials

These are orthogonal in the interval $[-1, 1]$ with a constant weight function $w(x) = 1$. They can be generated from Rodrigues' formula

$L_0(x) = 1$    (2.40)

$L_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \left[ (x^2 - 1)^n \right], \quad \text{for } n = 1, 2, \ldots$    (2.41)

The first few are

$L_0 = 1$
$L_1(x) = x$
$L_2(x) = \frac{1}{2}(3x^2 - 1)$
$L_3(x) = \frac{1}{2}(5x^3 - 3x)$    (2.42)
$L_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)$
$L_5(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x)$

and they can be successively generated for $n \ge 1$ by the recurrence formula

$L_{n+1}(x) = \frac{2n+1}{n+1} x L_n(x) - \frac{n}{n+1} L_{n-1}(x)$    (2.43)

They are alternately odd and even and are normalized so that

$L_n(1) = 1, \quad L_n(-1) = (-1)^n$    (2.44)

while for $|x| < 1$

$|L_n(x)| < 1$    (2.45)

Also

$(L_n, L_j) = \int_{-1}^1 L_n(x) L_j(x) \, dx = \begin{cases} 0 & \text{if } n \ne j \\ \frac{2}{2n+1} & \text{if } n = j \end{cases}$    (2.46)

Writing the recurrence relation as

$L_n(x) = \frac{2n-1}{n} x L_{n-1}(x) - \frac{n-1}{n} L_{n-2}(x)$    (2.47)

it can be seen that the leading coefficient of $L_{n-1}(x)$ is multiplied by $(2n-1)/n$ when $L_n(x)$ is generated. Hence the leading coefficient is

$a_n = \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{1 \cdot 2 \cdot 3 \cdots n} = \frac{(2n)!}{2^n (n!)^2}$    (2.48)

$L_n(x)$ also satisfies the differential equation

$\frac{d}{dx} \left[ (1 - x^2) \frac{d}{dx} L_n(x) \right] + n(n+1) L_n(x) = 0$    (2.49)
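A minimal sketch checking the recurrence (2.43), the normalization (2.44), and the orthogonality relation (2.46) numerically; the grid resolution is an illustrative choice.

```python
# Generate Legendre polynomials by the recurrence (2.43) and verify (2.44), (2.46).
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

x = np.linspace(-1.0, 1.0, 4001)
L = [np.ones_like(x), x.copy()]           # L_0 = 1, L_1 = x
for n in range(1, 5):                     # generate L_2, ..., L_5 by (2.43)
    L.append(((2*n + 1) * x * L[n] - n * L[n - 1]) / (n + 1))

# (L_n, L_j) should be 0 for n != j and 2/(2n+1) for n = j
for n in range(6):
    row = [trapz(L[n] * L[j], x) for j in range(6)]
    print(np.round(row, 4), " diagonal should be", round(2 / (2*n + 1), 4))

# Endpoint normalization (2.44): L_n(1) = 1
print([Ln[-1] for Ln in L])
```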

2.9 Chebyshev Polynomials

Let

$x = \cos\phi, \quad 0 \le \phi \le \pi$    (2.50)

and define

$T_n(x) = \cos(n\phi)$    (2.51)

in the interval $[-1, 1]$. Since

$\cos(n+1)\phi + \cos(n-1)\phi = 2 \cos\phi \cos n\phi, \quad n \ge 1$    (2.52)

we have

$T_0(x) = 1$    (2.53)

$T_1(x) = x$    (2.54)

$\vdots$

$T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$    (2.55)

These are the Chebyshev polynomials.

Property 1: The leading coefficient is $2^{n-1}$ for $n \ge 1$.

Property 2:

$T_n(-x) = (-1)^n T_n(x)$    (2.56)

Property 3: $T_n(x)$ has $n$ zeros at

$x_k = \cos\left( \frac{2k+1}{2n} \pi \right), \quad \text{for } k = 0, \ldots, n-1$    (2.57)

and $n+1$ extrema in $[-1, 1]$, with the values $(-1)^k$, at

$x_k = \cos\frac{k\pi}{n}, \quad \text{for } k = 0, \ldots, n$    (2.58)

Property 4 (Continuous orthogonality): They are orthogonal in the inner product

$(f, g) = \int_{-1}^1 \frac{f(x) g(x)}{\sqrt{1 - x^2}} \, dx$    (2.59)

since

$(T_r, T_s) = \int_0^\pi \cos r\theta \cos s\theta \, d\theta = \begin{cases} 0 & \text{if } r \ne s \\ \frac{\pi}{2} & \text{if } r = s \ne 0 \\ \pi & \text{if } r = s = 0 \end{cases}$    (2.60)

Property 5 (Discrete orthogonality): They are orthogonal in the discrete inner product

$(f, g) = \sum_{j=0}^n f(x_j) g(x_j)$    (2.61)

where the $x_j$ are the zeros of $T_{n+1}(x)$ and

$\arccos x_j = \theta_j = \frac{2j+1}{2n+2} \pi, \quad j = 0, 1, \ldots, n$    (2.62)

Then

$(T_r, T_s) = \sum_{j=0}^n \cos r\theta_j \cos s\theta_j = \begin{cases} 0 & \text{if } r \ne s \\ \frac{n+1}{2} & \text{if } r = s \ne 0 \\ n+1 & \text{if } r = s = 0 \end{cases}$    (2.63)

Property 6 (Minimax): $\left( 1/2^{n-1} \right) T_n(x)$ has the smallest maximum norm in $[-1, 1]$ of all polynomials with leading coefficient unity.

Proof: Suppose $\|p_n(x)\|_\infty$ were smaller. Then at the $n+1$ extrema $x_0, \ldots, x_n$ of $T_n(x)$, which all have the same magnitude of unity, we would have alternately

$p_n(x_0) < \frac{1}{2^{n-1}} T_n(x_0)$    (2.64)

$p_n(x_1) > \frac{1}{2^{n-1}} T_n(x_1)$    (2.65)

$\vdots$

Thus $p_n(x) - \left( 1/2^{n-1} \right) T_n(x)$ changes sign $n$ times in $[-1, 1]$, but this is impossible since it is of degree $n-1$ and would then have $n$ roots.
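A minimal sketch verifying the discrete orthogonality of Property 5 at the zeros of $T_{n+1}(x)$, eqs. (2.62)-(2.63); the value of $n$ is an arbitrary choice.

```python
# Discrete orthogonality of Chebyshev polynomials, eqs. (2.62)-(2.63).
import numpy as np

n = 6
j = np.arange(n + 1)
theta = (2*j + 1) * np.pi / (2*n + 2)     # eq. (2.62)
xj = np.cos(theta)                        # zeros of T_{n+1}

def T(r, x):
    return np.cos(r * np.arccos(x))       # T_r(x) = cos(r arccos x), eq. (2.51)

# Gram matrix of discrete inner products (T_r, T_s) = sum_j T_r(x_j) T_s(x_j)
G = np.array([[np.sum(T(r, xj) * T(s, xj)) for s in range(n + 1)]
              for r in range(n + 1)])
print(np.round(G, 10))  # 0 off-diagonal, (n+1)/2 on diagonal, n+1 at r = s = 0
```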

2.10 Best Approximation Polynomial

Let $f(x) - P_n(x)$ have maximum deviations, $+e$, $-e$, alternately at $n+2$ points $x_0, \ldots, x_{n+1}$ in $[a, b]$. Then $P_n(x)$ minimizes $\| f(x) - P_n(x) \|_\infty$.

Proof: Suppose $\| f(x) - Q_n(x) \|_\infty < e$. Then at $x_0, \ldots, x_{n+1}$

$Q_n(x) - P_n(x) = f(x) - P_n(x) - \left( f(x) - Q_n(x) \right)$    (2.66)

has the same sign as $f(x) - P_n(x)$. Thus it alternates in sign at the $n+2$ points, giving $n+1$ sign changes. But this is impossible since it is of degree $n$ and has only $n$ roots.

2.11 Best Interpolating Polynomial

The remainder for the Lagrange interpolation polynomial is

$f(x) - P_n(x) = w_n(x) \frac{f^{(n+1)}(\xi)}{(n+1)!}$    (2.67)

where

$w_n(x) = (x - x_0)(x - x_1) \ldots (x - x_n)$    (2.68)

Thus we can minimize $\| w_n(x) \|_\infty$ in $[-1, 1]$ by making the $x_k$ the zeros of $T_{n+1}(x)$. Then

$w_n(x) = \frac{1}{2^n} T_{n+1}(x)$    (2.69)

$\| w_n(x) \|_\infty = \frac{1}{2^n}$    (2.70)
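A minimal sketch comparing $\| w_n \|_\infty$ for equispaced nodes against the Chebyshev nodes of (2.69)-(2.70); the degree $n$ is an arbitrary choice.

```python
# Compare the node polynomial w_n(x), eq. (2.68), for two node choices.
import numpy as np

n = 10
x = np.linspace(-1.0, 1.0, 100001)

def w(nodes, x):
    # w_n(x) = (x - x_0)(x - x_1)...(x - x_n)
    return np.prod([x - xk for xk in nodes], axis=0)

equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*n + 2))  # zeros of T_{n+1}

print("equispaced :", np.abs(w(equi, x)).max())
print("Chebyshev  :", np.abs(w(cheb, x)).max(), "  1/2^n =", 0.5**n)
```

The Chebyshev nodes attain the bound $1/2^n$, while equispaced nodes give a substantially larger maximum, which is the source of the Runge phenomenon.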

2.12 Jacobi Polynomials

Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ are orthogonal in the interval $[-1, 1]$ for the weight function

$w(x) = (1 - x)^\alpha (1 + x)^\beta$    (2.71)

Thus the family of Jacobi polynomials includes both the Legendre polynomials

$L_n(x) = P_n^{(0,0)}(x)$    (2.72)

and (up to normalization) the Chebyshev polynomials

$T_n(x) = P_n^{(-1/2,-1/2)}(x)$    (2.73)

2.13 Fourier Series

Let $f(\theta)$ be periodic with period $2\pi$ and square integrable on $[0, 2\pi]$. Consider the approximation of $f(\theta)$ by

$S_n(\theta) = \frac{a_0}{2} + \sum_{r=1}^n \left( a_r \cos r\theta + b_r \sin r\theta \right)$    (2.74)

Let us minimize

$\| f - S_n \| = \left\{ \int_0^{2\pi} \left( f(\theta) - S_n(\theta) \right)^2 d\theta \right\}^{1/2}$    (2.75)

Let

$J(a_0, a_1, \ldots, a_n, b_1, \ldots, b_n) = \| f - S_n \|^2$    (2.76)

The trigonometric functions satisfy the orthogonality relations

$\int_0^{2\pi} \cos(r\theta) \cos(s\theta) \, d\theta = \begin{cases} 0 & \text{if } r \ne s \\ \pi & \text{if } r = s \ne 0 \\ 2\pi & \text{if } r = s = 0 \end{cases}$    (2.77)

$\int_0^{2\pi} \sin(r\theta) \sin(s\theta) \, d\theta = \begin{cases} 0 & \text{if } r \ne s \\ \pi & \text{if } r = s \ne 0 \end{cases}$    (2.78)

$\int_0^{2\pi} \sin(r\theta) \cos(s\theta) \, d\theta = 0$    (2.79)

(The case $r = s = 0$ in (2.77) is accounted for by the factor $\frac{1}{2}$ multiplying $a_0$.) At the minimum, allowing for these,

$0 = \frac{\partial J}{\partial a_r} = -2 \int_0^{2\pi} f(\theta) \cos(r\theta) \, d\theta + 2 a_r \int_0^{2\pi} \cos^2(r\theta) \, d\theta$    (2.80)

whence

$a_r = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos(r\theta) \, d\theta$    (2.81)

and similarly

$b_r = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \sin(r\theta) \, d\theta$    (2.82)

Also, since $\| f - S_n \|^2 \ge 0$, we obtain Bessel's inequality

$\frac{a_0^2}{2} + \sum_{r=1}^n \left( a_r^2 + b_r^2 \right) \le \frac{1}{\pi} \int_0^{2\pi} f^2(\theta) \, d\theta$    (2.83)

and since the right hand side is independent of $n$, the sum converges and

$\lim_{r \to \infty} a_r = 0, \quad \lim_{r \to \infty} b_r = 0$    (2.84)

Also

$\lim_{n \to \infty} \| f - S_n \| = 0$    (2.85)

If not, it would violate Weierstrass's theorem translated to trigonometric functions (Isaacson and Keller).
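A minimal sketch computing the least squares Fourier coefficients (2.81)-(2.82) by quadrature and forming the partial sum (2.74); the target function is an illustrative choice.

```python
# Fourier coefficients by quadrature, eqs. (2.81)-(2.82), and the partial sum (2.74).
import numpy as np

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

theta = np.linspace(0.0, 2*np.pi, 20001)
f = np.exp(np.sin(theta))                 # smooth 2*pi-periodic test function

n = 8
a = [trapz(f * np.cos(r*theta), theta) / np.pi for r in range(n + 1)]
b = [trapz(f * np.sin(r*theta), theta) / np.pi for r in range(n + 1)]

S = a[0]/2 + sum(a[r]*np.cos(r*theta) + b[r]*np.sin(r*theta)
                 for r in range(1, n + 1))
print("max |f - S_n| =", np.abs(f - S).max())  # decays rapidly for smooth f
```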

2.14 Error Estimate for Fourier Series

Let $f(\theta)$ have $K$ continuous derivatives. Then the Fourier coefficients decay as

$a_r = O\left( \frac{1}{r^K} \right), \quad b_r = O\left( \frac{1}{r^K} \right)$    (2.86)

and

$f(\theta) - S_n(\theta) = O\left( \frac{1}{n^{K-1}} \right)$    (2.87)

Integrating repeatedly by parts,

$a_r = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos(r\theta) \, d\theta = -\frac{1}{r\pi} \int_0^{2\pi} f'(\theta) \sin(r\theta) \, d\theta = -\frac{1}{r^2 \pi} \int_0^{2\pi} f''(\theta) \cos(r\theta) \, d\theta = \ldots$    (2.88)

Let $M = \sup |f^{(K)}(\theta)|$. Then, since $|\cos(r\theta)| \le 1$ and $|\sin(r\theta)| \le 1$,

$|a_r| \le \frac{2M}{r^K}$    (2.89)

and similarly

$|b_r| \le \frac{2M}{r^K}$    (2.90)

Also

$|S_m(\theta) - S_n(\theta)| = \left| \sum_{r=n+1}^m \left( a_r \cos r\theta + b_r \sin r\theta \right) \right| \le \sum_{r=n+1}^m \frac{4M}{r^K} \le 4M \int_n^\infty \frac{d\xi}{\xi^K} = \frac{4M}{K-1} \frac{1}{n^{K-1}}$    (2.91)

where the integral contains the sum of the rectangles shown in the sketch. Thus the partial sums $S_n(\theta)$ form a Cauchy sequence and converge uniformly to $f(\theta)$, and

$|f(\theta) - S_n(\theta)| \le \frac{4M}{K-1} \frac{1}{n^{K-1}}$    (2.92)

2.15 Orthogonality Relations for Discrete Fourier Series

Consider

$S = \sum_{j=0}^{2N-1} e^{ir\theta_j} = \sum_{j=0}^{2N-1} \omega^{rj}$    (2.93)

where $\theta_j = j\pi/N$ and

$\omega = e^{i\pi/N}, \quad \omega^{2N} = 1$    (2.94)

Then

$S = \begin{cases} \dfrac{1 - \omega^{2Nr}}{1 - \omega^r} = 0 & \text{if } \omega^r \ne 1 \\ 2N & \text{if } \omega^r = 1 \end{cases}$    (2.95)

Therefore

$\sum_{j=0}^{2N-1} e^{ir\theta_j} e^{-is\theta_j} = \begin{cases} 2N & \text{if } r - s = 0, \pm 2N, \pm 4N, \ldots \\ 0 & \text{otherwise} \end{cases}$    (2.96)

Taking the real part,

$\sum_{j=0}^{2N-1} \cos(r-s)\theta_j = 2N \text{ if } r - s = 0, \pm 2N, \pm 4N, \ldots, \text{ and } 0 \text{ otherwise}$    (2.97)

$\sum_{j=0}^{2N-1} \cos(r+s)\theta_j = 2N \text{ if } r + s = 0, \pm 2N, \pm 4N, \ldots, \text{ and } 0 \text{ otherwise}$    (2.98)

so that, for $0 \le r, s \le N$,

$\sum_{j=0}^{2N-1} \cos r\theta_j \cos s\theta_j = \sum_{j=0}^{2N-1} \frac{1}{2} \left( \cos(r-s)\theta_j + \cos(r+s)\theta_j \right) = \begin{cases} 0 & \text{if } r \ne s \\ N & \text{if } r = s \ne 0, N \\ 2N & \text{if } r = s = 0, N \end{cases}$    (2.99)

Note that $\sum_{j=0}^{2N-1} \omega^{rj}$ is the sum of $2N$ equally spaced unit vectors, as sketched, except in the case when $\omega^r = 1$.
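A minimal sketch checking the discrete orthogonality relation (2.99) on the $2N$ equally spaced points $\theta_j = j\pi/N$; the value of $N$ and the $(r, s)$ pairs are arbitrary choices.

```python
# Discrete orthogonality of cosines, eq. (2.99).
import numpy as np

N = 8
theta = np.arange(2*N) * np.pi / N

for r, s in [(3, 5), (3, 3), (0, 0), (N, N), (2, N)]:
    val = np.sum(np.cos(r*theta) * np.cos(s*theta))
    print(f"r={r}, s={s}: {val:.10f}")
# Expected: 0 for r != s, N for r = s != 0, N, and 2N for r = s = 0 or N
```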

2.16 Trigonometric Interpolation

Instead of finding a least squares fit we can calculate the coefficients of the sum so that the sum exactly fits the function at equal intervals around the circle. Let $A_r$ and $B_r$ be chosen so that

$U_n(\theta) = \frac{A_0}{2} + \sum_{r=1}^{n-1} \left( A_r \cos r\theta + B_r \sin r\theta \right) + \frac{A_n}{2} \cos n\theta = f(\theta)$    (2.100)

at the $2n$ points

$\theta_j = \frac{j\pi}{n}, \quad j = 0, 1, \ldots, 2n-1$    (2.101)

It is easy to verify that

$\sum_{j=0}^{2n-1} \cos r\theta_j \cos s\theta_j = \begin{cases} 0 & \text{if } r \ne s \\ n & \text{if } r = s \ne 0, n \\ 2n & \text{if } r = s = 0, n \end{cases}$    (2.102)

$\sum_{j=0}^{2n-1} \sin r\theta_j \sin s\theta_j = \begin{cases} 0 & \text{if } r \ne s \\ n & \text{if } r = s \ne 0, n \\ 0 & \text{if } r = s = 0, n \end{cases}$    (2.103)

$\sum_{j=0}^{2n-1} \sin r\theta_j \cos s\theta_j = 0$    (2.104)

It follows, on multiplying through by $\cos r\theta_j$ or $\sin r\theta_j$ and summing over the $\theta_j$, that

$A_r = \frac{1}{n} \sum_{j=0}^{2n-1} f(\theta_j) \cos r\theta_j$    (2.105)

$B_r = \frac{1}{n} \sum_{j=0}^{2n-1} f(\theta_j) \sin r\theta_j$    (2.106)

for $r = 0, \ldots, n$. Note that, writing

$c_0 = \frac{A_0}{2}$    (2.107)

$c_r = \frac{1}{2} \left( A_r - iB_r \right)$    (2.108)

$c_{-r} = \frac{1}{2} \left( A_r + iB_r \right)$    (2.109)

the sum can be expressed as

$U_n(\theta) = {\sum_{r=-n}^{n}}'' \, c_r e^{ir\theta}$    (2.110)

where

$c_r = \frac{1}{2n} \sum_{j=0}^{2n-1} f(\theta_j) e^{-ir\theta_j}$    (2.111)

and $\sum''$ is defined as

${\sum_{r=-n}^{n}}'' \, a_r = \sum_{r=-n}^{n} a_r - \frac{1}{2} \left( a_{-n} + a_n \right)$    (2.112)
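A minimal sketch of trigonometric interpolation: the coefficients come from the discrete sums (2.105)-(2.106), and the exact fit (2.100) at the nodes is verified. The sample function is an illustrative choice.

```python
# Trigonometric interpolation, eqs. (2.100)-(2.106).
import numpy as np

n = 8
theta_j = np.arange(2*n) * np.pi / n      # 2n equally spaced points, eq. (2.101)
f = lambda t: np.exp(np.sin(t)) * np.cos(t)
fj = f(theta_j)

A = np.array([np.sum(fj * np.cos(r*theta_j)) / n for r in range(n + 1)])
B = np.array([np.sum(fj * np.sin(r*theta_j)) / n for r in range(n + 1)])

def U(t):
    # eq. (2.100): half weights on the r = 0 and r = n cosine terms
    s = A[0]/2 + A[n]/2 * np.cos(n*t)
    for r in range(1, n):
        s += A[r]*np.cos(r*t) + B[r]*np.sin(r*t)
    return s

print("max fit error at nodes:", np.abs(U(theta_j) - fj).max())  # ~ 1e-15
t = np.linspace(0, 2*np.pi, 1001)
print("max error off nodes  :", np.abs(U(t) - f(t)).max())
```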

2.17 Error Estimate for Trigonometric Interpolation

In order to estimate the error of the interpolation we compare the interpolation coefficients $A_r$ and $B_r$ with the Fourier coefficients $a_r$ and $b_r$. The total error can then be estimated as the sum of the error in the truncated Fourier terms and the additional error introduced by replacing the Fourier coefficients by the interpolation coefficients. The interpolation coefficients can be expressed in terms of the Fourier coefficients by substituting the infinite Fourier series for $f(\theta)$ into the formula for the interpolation coefficients. For this purpose it is convenient to use the complex form. Thus

$C_r = \frac{1}{2n} \sum_{j=0}^{2n-1} f(\theta_j) e^{-ir\theta_j}$    (2.113)

$= \frac{1}{2n} \sum_{j=0}^{2n-1} e^{-ir\theta_j} \sum_{k=-\infty}^{\infty} c_k e^{ik\theta_j}$    (2.114)

$= \frac{1}{2n} \sum_{k=-\infty}^{\infty} c_k \sum_{j=0}^{2n-1} e^{i(k-r)\theta_j}$    (2.115)

Here

$\sum_{j=0}^{2n-1} e^{i(k-r)\theta_j} = \begin{cases} 2n & \text{if } k - r = 2qn \text{ where } q = 0, \pm 1, \pm 2, \ldots \\ 0 & \text{if } k - r \ne 2qn \end{cases}$    (2.116)

Hence

$C_r = c_r + \sum_{k=1}^{\infty} \left( c_{2kn+r} + c_{-2kn+r} \right)$    (2.117)

and similarly

$C_{-r} = c_{-r} + \sum_{k=1}^{\infty} \left( c_{2kn-r} + c_{-2kn-r} \right)$    (2.118)

Accordingly

$A_r = C_r + C_{-r} = a_r + \sum_{k=1}^{\infty} \left( a_{2kn+r} + a_{2kn-r} \right)$    (2.119)

and similarly

$B_r = i(C_r - C_{-r}) = b_r + \sum_{k=1}^{\infty} \left( b_{2kn+r} - b_{2kn-r} \right)$    (2.120)

Thus the difference between the interpolation and Fourier coefficients can be seen to be an aliasing error in which all the higher harmonics are lumped into the base modes. Now we can estimate $|A_r - a_r|$ and $|B_r - b_r|$ using

$|a_r| \le \frac{2M}{r^K}, \quad |b_r| \le \frac{2M}{r^K}$    (2.121)

We get

$|A_r - a_r| \le 2M \sum_{k=1}^{\infty} \left( \frac{1}{(2kn-r)^K} + \frac{1}{(2kn+r)^K} \right) = \frac{2M}{(2n)^K} \sum_{k=1}^{\infty} \left( \frac{1}{\left( k - \frac{r}{2n} \right)^K} + \frac{1}{\left( k + \frac{r}{2n} \right)^K} \right)$    (2.122)

Since $r \le n$ this is bounded by

$\frac{2M}{(2n)^K} \sum_{k=1}^{\infty} \left( \frac{1}{\left( k - \frac{1}{2} \right)^K} + \frac{1}{k^K} \right) \le \frac{2M}{(2n)^K} \left( 2^K + 1 + \int_{1/2}^{\infty} \frac{d\xi}{\xi^K} + \int_1^{\infty} \frac{d\xi}{\xi^K} \right)$    (2.123)

where the sums over $k \ge 2$ are again compared with integrals. Using a similar estimate for $B_r$, we get

$|A_r - a_r| < \frac{5M}{n^K}, \quad |B_r - b_r| < \frac{5M}{n^K}$    (2.124)

for $K \ge 2$. Finally we can estimate the error of the interpolation sum $U_n(\theta)$ as

$|f(\theta) - U_n(\theta)| \le |f(\theta) - S_n(\theta)| + |S_n(\theta) - U_n(\theta)|$    (2.125)

$< \frac{4M}{K-1} \frac{1}{n^{K-1}} + \sum_{r=0}^{n} \left( |A_r - a_r| + |B_r - b_r| \right)$    (2.126)

$< \frac{4M}{K-1} \frac{1}{n^{K-1}} + \frac{10M(n+1)}{n^K}$    (2.127)
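A minimal sketch of the aliasing relation (2.116)-(2.117): on $2n$ points the harmonic $k = r + 2n$ is indistinguishable from mode $r$, so its content is lumped into $C_r$. The values of $n$ and $r$ are arbitrary choices.

```python
# Aliasing of high harmonics onto the base modes, eqs. (2.113), (2.116)-(2.117).
import numpy as np

n, r = 8, 3
theta_j = np.arange(2*n) * np.pi / n

def C(f_j, r):
    # discrete coefficient, eq. (2.113)
    return np.sum(f_j * np.exp(-1j * r * theta_j)) / (2*n)

f_low  = np.exp(1j * r * theta_j)             # mode r
f_high = np.exp(1j * (r + 2*n) * theta_j)     # mode r + 2n, aliased onto r

print(np.round(C(f_low, r), 12))    # 1: mode r is captured exactly
print(np.round(C(f_high, r), 12))   # also 1: the high harmonic masquerades as mode r
```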

2.18 Fourier Cosine Series

For a function defined for $0 \le \theta \le \pi$ the Fourier cosine series is

$S_c(\theta) = \sum_{r=0}^{\infty} a_r \cos r\theta$    (2.128)

where

$a_0 = \frac{1}{\pi} \int_0^{\pi} f(\theta) \, d\theta$    (2.129)

$a_r = \frac{2}{\pi} \int_0^{\pi} f(\theta) \cos(r\theta) \, d\theta, \quad r > 0$    (2.130)

Since $\frac{d}{d\theta} \cos(r\theta) = 0$ at $\theta = 0, \pi$, the function will not be well approximated at $0, \pi$ unless $f'(0) = f'(\pi) = 0$. The cosine series in fact implies an even continuation at $\theta = 0, \pi$. Now if all derivatives of $f(\theta)$ of order $1, 2, \ldots, K-1$ are continuous, and $f^{(p)}(0) = f^{(p)}(\pi) = 0$ for all odd $p < K$, and $f^{(K)}(\theta)$ is integrable, then integration by parts down to the coefficients of $f^{(K)}(\theta)$ gives

$|a_r| \le \frac{2M}{r^K}$    (2.131)

2.19 Cosine Interpolation

Let $f(\theta)$ be approximated by

$U_n(\theta) = \sum_{r=0}^{n} a_r \cos r\theta$    (2.132)

and let

$U_n(\theta_j) = f(\theta_j) \quad \text{for } j = 0, 1, \ldots, n$    (2.133)

where

$\theta_j = \frac{2j+1}{2n+2} \pi$    (2.134)

Then

$\sum_{j=0}^{n} \cos r\theta_j \cos s\theta_j = \begin{cases} 0 & \text{if } r \ne s \\ \frac{n+1}{2} & \text{if } r = s \ne 0 \\ n+1 & \text{if } r = s = 0 \end{cases}$    (2.135)

Now if we multiply (2.133) by $\cos r\theta_j$ and sum over $j$, we find that

$a_r = \frac{2}{n+1} \sum_{j=0}^{n} f(\theta_j) \cos r\theta_j \quad \text{for } 0 < r \le n$    (2.136)

$a_0 = \frac{1}{n+1} \sum_{j=0}^{n} f(\theta_j)$    (2.137)

2.20 Chebyshev Expansions

Let

$g(x) = \sum_{r=0}^{\infty} a_r T_r(x)$    (2.138)

approximate $f(x)$ in the interval $[-1, 1]$. Then $G(\theta) = g(\cos\theta)$ is the Fourier cosine series for $F(\theta) = f(\cos\theta)$ for $0 \le \theta \le \pi$, since

$T_r(\cos\theta) = \cos r\theta$    (2.139)

$G(\theta) = g(\cos\theta) = \sum_{r=0}^{\infty} a_r \cos r\theta$    (2.140)

Thus

$a_0 = \frac{1}{\pi} \int_0^{\pi} f(\cos\theta) \, d\theta = \frac{1}{\pi} \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}} \, dx$    (2.141)

and

$a_r = \frac{2}{\pi} \int_0^{\pi} f(\cos\theta) \cos r\theta \, d\theta = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x) T_r(x)}{\sqrt{1-x^2}} \, dx$    (2.142)

Now from the theory of Fourier cosine series, if $f^{(p)}(x)$ is continuous for $-1 \le x \le 1$ and $p = 0, 1, \ldots, K-1$, and $f^{(K)}(x)$ is integrable,

$|a_r| \le \frac{2M}{r^K}$    (2.143)

Since $|T_r(x)| \le 1$ for $|x| \le 1$, the remainder after $n$ terms is $O\left( 1/n^{K-1} \right)$. Now

$F'(\theta) = -f'(\cos\theta) \sin\theta$    (2.144)

$F''(\theta) = f''(\cos\theta) \sin^2\theta - f'(\cos\theta) \cos\theta$    (2.145)

$F'''(\theta) = -f'''(\cos\theta) \sin^3\theta + 3f''(\cos\theta) \sin\theta \cos\theta + f'(\cos\theta) \sin\theta$    (2.146)

and in general, if $F^{(p)}$ is bounded,

$F^{(p)}(0) = F^{(p)}(\pi) = 0$    (2.147)

when $p$ is odd, since then every term of $F^{(p)}$ contains the factor $\sin\theta$. As a result, the favorable error estimate applies, provided that derivatives of $f$ exist up to the required order.

2.21 Chebyshev Interpolation

We can transform cosine interpolation to interpolation with Chebyshev polynomials by setting

$G_n(x) = \sum_{r=0}^{n} a_r T_r(x)$    (2.148)

and choosing the coefficients so that

$G_n(x_j) = f(x_j) \quad \text{for } j = 0, 1, \ldots, n$    (2.149)

where

$x_j = \cos\theta_j = \cos\left( \frac{2j+1}{2n+2} \pi \right)$    (2.150)

These are the zeros of $T_{n+1}(x)$. Now for any $k < n$ we can find the discrete least squares approximation. When $k = n$, because of the above equation, the error is zero, so the least squares approximation is the interpolation polynomial. Moreover, since $G_n(x)$ is a polynomial of degree $n$, this is the Lagrange interpolation polynomial at the zeros of $T_{n+1}(x)$.
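A minimal sketch of Chebyshev interpolation: the coefficients come from the discrete sums (2.136)-(2.137) evaluated at the zeros of $T_{n+1}(x)$, eq. (2.150). The Runge-type test function is an illustrative choice.

```python
# Chebyshev interpolation at the zeros of T_{n+1}, eqs. (2.148)-(2.150).
import numpy as np

n = 12
j = np.arange(n + 1)
theta_j = (2*j + 1) * np.pi / (2*n + 2)
x_j = np.cos(theta_j)                     # zeros of T_{n+1}, eq. (2.150)

f = lambda x: 1.0 / (1.0 + 16.0*x**2)     # Runge-type test function
fj = f(x_j)

a = np.array([(2.0/(n + 1)) * np.sum(fj * np.cos(r*theta_j))
              for r in range(n + 1)])
a[0] /= 2.0                               # a_0 carries the factor 1/(n+1), eq. (2.137)

def G(x):
    # G_n(x) = sum a_r T_r(x), using T_r(x) = cos(r arccos x)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return sum(a[r] * np.cos(r*t) for r in range(n + 1))

x = np.linspace(-1.0, 1.0, 2001)
print("max error at nodes:", np.abs(G(x_j) - fj).max())  # ~ machine precision
print("max error overall :", np.abs(G(x) - f(x)).max())
```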

2.22 Portraits of Fourier, Legendre and Chebyshev

Figure 2.1: Joseph Fourier (1768-1830)

Figure 2.2: Adrien-Marie Legendre (1752-1833)

Figure 2.3: Pafnuty Lvovich Chebyshev (1821-1894)