Inner Product Spaces


In ℝⁿ we defined an inner product u·v = u₁v₁ + u₂v₂ + ... + uₙvₙ. Another notation sometimes used is u·v = ⟨u, v⟩. The inner product in ℝⁿ has several important properties (see the theorem on inner products in the textbook) that we have used over and over. Written with the ⟨u, v⟩ notation, they are

a) ⟨u, v⟩ = ⟨v, u⟩
b) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
c) ⟨cu, v⟩ = c⟨u, v⟩
d) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0.

Using the inner product, we then defined length ‖u‖ = ⟨u, u⟩^(1/2) and the distance between two vectors: ‖u − v‖ = ⟨u − v, u − v⟩^(1/2). Finally we discussed the angle between vectors and defined orthogonality (u ⊥ v) by ⟨u, v⟩ = 0.

Earlier in the course, we took the essential properties of vectors in ℝⁿ as the starting point for defining more general vector spaces V. In the same spirit, we now use the essential properties a)-d) of the inner product in ℝⁿ as a guide for inner products in any vector space V. For a vector space V with real scalars, an inner product is a rule ⟨u, v⟩ that produces a scalar for every pair of vectors u, v in V in such a way that a)-d) are true. We call such a rule an inner product because it acts like an inner product in ℝⁿ. (Properties a)-d) are modified slightly when complex scalars are allowed.)

A vector space V with an inner product defined on it is called an inner product space. Because any inner product acts just like the inner product from ℝⁿ, many of the theorems we proved about inner products for ℝⁿ are also true in any inner product space. You can look at an introduction to this material in Section 6.7 of the textbook.

Here is a little more detail using one specific example, C[−π, π], the vector space of all continuous real-valued functions defined on the interval [−π, π]. We'll call this vector space C for short. Everything hereafter in these notes happens in C.

For vectors (functions) f, g in C, define an inner product by

   ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx   (a scalar!)

A purely heuristic comment: in Calculus I this integral is defined (roughly) as follows:

   • divide [−π, π] into n subintervals of length 2π/n and pick a point xᵢ in each subinterval
   • form the Riemann sum f(x₁)g(x₁)(2π/n) + ... + f(xₙ)g(xₙ)(2π/n)
   • let n → ∞: the integral is the limit of the Riemann sums

The Riemann sum Σ f(xᵢ)g(xᵢ)(2π/n) resembles the definition of the dot product in ℝⁿ: if you imagine f(x₁), ..., f(xₙ) and g(x₁), ..., g(xₙ) as the coordinates of the vectors f and g, then Σ f(xᵢ)g(xᵢ) is analogous to an inner product in ℝⁿ: add up the products of corresponding coordinates from f and g.
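Here is a rough numeric illustration of that heuristic, written as a small MATLAB sketch (the two sample functions are chosen just for illustration and are not used later): the integral inner product sits next to the Riemann-sum "dot product" of sampled values.

    % Rough numeric illustration: the inner product <f,g> as an integral,
    % next to the Riemann-sum "dot product" of sampled values.
    f = @(x) x.^2;                                    % sample function in C[-pi,pi]
    g = @(x) cos(x);                                  % another sample function
    ip_integral = integral(@(x) f(x).*g(x), -pi, pi)  % <f,g> = integral of f*g, about -12.566
    n  = 1000;                                        % number of subintervals
    xi = -pi + (2*pi/n)*((1:n) - 0.5);                % midpoint of each subinterval
    ip_riemann = sum(f(xi).*g(xi))*(2*pi/n)           % Riemann sum: nearly the same number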

Notice that ⟨f, g⟩ does satisfy a)-d); that is, ⟨f, g⟩ behaves like the inner product in ℝⁿ:

a) ⟨f, g⟩ = ⟨g, f⟩ because ∫ f(x)g(x) dx = ∫ g(x)f(x) dx
b) ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩ because ∫ (f(x) + g(x))h(x) dx = ∫ f(x)h(x) dx + ∫ g(x)h(x) dx
c) ⟨cf, g⟩ = c⟨f, g⟩ because ∫ c f(x)g(x) dx = c ∫ f(x)g(x) dx
d) ⟨f, f⟩ ≥ 0 because ∫ f(x)² dx ≥ 0. And ⟨f, f⟩ = 0 if and only if f = 0 (the constant function 0).

You should try to convince yourself about d). Checking it needs the fact that the functions f in C are continuous. Why?

Using this new inner product, we make definitions in C parallel to the definitions in ℝⁿ:

   Define the norm of f by ‖f‖ = ⟨f, f⟩^(1/2) = (∫_{−π}^{π} f(x)² dx)^(1/2)
   Define the distance between f and g as ‖f − g‖ = (∫_{−π}^{π} (f(x) − g(x))² dx)^(1/2)

There are certainly other ways to define the distance between two functions f and g. This one is called the mean square distance between f and g. It's popular with mathematicians and statisticians because it resembles the definition of distance in ℝⁿ as a square root of a sum of squares, where "sum" is replaced by "integral". Moreover, ‖f − g‖ behaves nicely, with properties similar to ordinary distance in ℝⁿ. Notice that because of the squaring in the formula, large differences |f(x) − g(x)| have more influence than small differences in calculating the distance ‖f − g‖.

We say f and g are orthogonal if ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx = 0.

For example, sin and cos are orthogonal on [−π, π]:

   ⟨sin, cos⟩ = ∫_{−π}^{π} sin(x)cos(x) dx = (1/2)∫_{−π}^{π} sin(2x) dx = −(1/4)cos(2x) |_{−π}^{π} = 0

Most of the tools we developed using inner products in ℝⁿ still work in C. For example:

For a subspace W of C we define W⊥ = {f in C : ⟨f, g⟩ = 0 for all g in W}.

Orthogonal Decomposition Theorem: Suppose f is in C and that W is a subspace of C with an orthogonal basis {g₁, ..., gₙ}. Then we can write f = f̂ + z where f̂ is in W and z is in W⊥, and f̂ and z are unique. f̂ is called the projection of f on W, also denoted proj_W f, and
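As a quick sanity check, here is a small base-MATLAB sketch (my own illustration, not a computation from the notes) that verifies the orthogonality of sin and cos numerically and evaluates one mean square distance:

    % Quick numeric checks of the new inner product (illustration only).
    ip = @(f,g) integral(@(x) f(x).*g(x), -pi, pi);        % <f,g> on C[-pi,pi]
    ip(@sin, @cos)                                          % ~0: sin and cos are orthogonal
    nrm  = @(f) sqrt(ip(f, f));                             % ||f||
    dist = @(f,g) nrm(@(x) f(x) - g(x));                    % mean square distance ||f - g||
    dist(@sin, @cos)                                        % sqrt(2*pi), about 2.5066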

   f̂ = (⟨f, g₁⟩/⟨g₁, g₁⟩) g₁ + ... + (⟨f, gₙ⟩/⟨gₙ, gₙ⟩) gₙ

f̂ is the function in W closest to f, meaning that ‖f − f̂‖ < ‖f − g‖ for all g in W with g ≠ f̂.

If {f₁, f₂, ..., fₙ} is a basis for a subspace W of C, we can convert the basis into an orthogonal basis {g₁, ..., gₙ} using the same Gram-Schmidt formulas as in ℝⁿ.

Note: Unlike ℝⁿ, C is infinite dimensional, but that doesn't matter for the results we just listed. What does matter is that the subspace W is finite dimensional: W has a finite (orthogonal) basis {g₁, ..., gₙ}.

Some approximations in C: three examples

Example 1  Consider the subspace of C:

   W = Span{1, cos x, cos 2x, ..., cos nx, sin x, sin 2x, ..., sin nx}

Functions in W are the linear combinations of the functions that span W; these are sometimes called trigonometric polynomials:

   g(x) = c₀ + a₁ cos x + ... + aₙ cos nx + b₁ sin x + ... + bₙ sin nx     (*)
        = c₀ + Σ_{k=1}^{n} (aₖ cos kx + bₖ sin kx)

proj_W f is one such function: it is in W and it is the best approximation to f from W, meaning that if g is in W, then the distance ‖f − g‖ is smallest possible when g = proj_W f.

Finding proj_W f is relatively easy because the functions 1, sin x, sin 2x, ..., sin nx, cos x, cos 2x, ..., cos nx form an orthogonal basis for W. However, we should check that fact. Doing so involves some integrations that use some trig identities:

   sin A sin B = (1/2)[cos(A − B) − cos(A + B)]
   cos A cos B = (1/2)[cos(A − B) + cos(A + B)]
   sin A cos B = (1/2)[sin(A − B) + sin(A + B)]
   sin²A = (1/2)(1 − cos 2A),  cos²A = (1/2)(1 + cos 2A)

• 1 is orthogonal to each of the other functions cos x, ..., cos nx, sin x, ..., sin nx:

   ⟨1, cos kx⟩ = ∫_{−π}^{π} cos(kx) dx = (1/k) sin(kx) |_{−π}^{π} = 0
   ⟨1, sin kx⟩ = ∫_{−π}^{π} sin(kx) dx = −(1/k) cos(kx) |_{−π}^{π} = 0

• sin kx and cos kx are orthogonal:

   ∫_{−π}^{π} sin(kx) cos(kx) dx = (1/(2k)) sin²(kx) |_{−π}^{π} = 0

• finally, for k ≠ m:

   ∫_{−π}^{π} sin(mx) sin(kx) dx = (1/2)∫_{−π}^{π} [cos((m − k)x) − cos((m + k)x)] dx
      = (1/2)[ sin((m − k)x)/(m − k) − sin((m + k)x)/(m + k) ] |_{−π}^{π} = 0

   ∫_{−π}^{π} cos(mx) cos(kx) dx = (1/2)∫_{−π}^{π} [cos((m − k)x) + cos((m + k)x)] dx
      = (1/2)[ sin((m − k)x)/(m − k) + sin((m + k)x)/(m + k) ] |_{−π}^{π} = 0

   ∫_{−π}^{π} sin(mx) cos(kx) dx = (1/2)∫_{−π}^{π} [sin((m − k)x) + sin((m + k)x)] dx
      = −(1/2)[ cos((m − k)x)/(m − k) + cos((m + k)x)/(m + k) ] |_{−π}^{π} = 0

We can also compute two other integrals that we will need:

   ∫_{−π}^{π} cos²(kx) dx = ∫_{−π}^{π} (1 + cos 2kx)/2 dx = [ x/2 + sin(2kx)/(4k) ] |_{−π}^{π} = π
   ∫_{−π}^{π} sin²(kx) dx = ∫_{−π}^{π} (1 − cos 2kx)/2 dx = [ x/2 − sin(2kx)/(4k) ] |_{−π}^{π} = π

Notation Alert: If f is any function in C, we can compute proj_W f, the function in W closest to f. As you might recognize, these calculations are closely related to a topic called Fourier analysis. In Fourier analysis, some functions f have what's called a Fourier transform, denoted by f̂. That is something very different from f̂ = proj_W f; so in this example we will use only the notation proj_W f instead of f̂ (to avoid possible confusion with the Fourier transform, for those who know something about it).

proj_W f is a function that looks like (*). We can use the projection formula to determine its coefficients.
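These relations are easy to spot-check numerically; here is a short base-MATLAB sketch (my own, with k = 3 and m = 5 chosen arbitrarily):

    % Numeric spot-checks of the orthogonality relations above, with k = 3, m = 5.
    ip = @(f,g) integral(@(x) f(x).*g(x), -pi, pi);
    ip(@(x) ones(size(x)), @(x) cos(3*x))     % ~0: 1 is orthogonal to cos 3x
    ip(@(x) sin(3*x),      @(x) cos(3*x))     % ~0: sin 3x is orthogonal to cos 3x
    ip(@(x) sin(5*x),      @(x) sin(3*x))     % ~0: different frequencies are orthogonal
    ip(@(x) cos(3*x),      @(x) cos(3*x))     % ~pi
    ip(@(x) sin(3*x),      @(x) sin(3*x))     % ~pi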

   proj_W f = (⟨f, 1⟩/⟨1, 1⟩)·1 + (⟨f, cos x⟩/⟨cos x, cos x⟩) cos x + ... + (⟨f, cos nx⟩/⟨cos nx, cos nx⟩) cos nx
                                + (⟨f, sin x⟩/⟨sin x, sin x⟩) sin x + ... + (⟨f, sin nx⟩/⟨sin nx, sin nx⟩) sin nx

The denominators don't depend on f:

   ⟨1, 1⟩ = ∫_{−π}^{π} 1 dx = 2π
   ⟨cos kx, cos kx⟩ = ∫_{−π}^{π} cos²(kx) dx = π   (see the calculation above)
   ⟨sin kx, sin kx⟩ = ∫_{−π}^{π} sin²(kx) dx = π   (see the calculation above)

So

   proj_W f = (1/2π)(∫ f(x) dx) + (1/π)(∫ f(x) cos x dx) cos x + ... + (1/π)(∫ f(x) cos nx dx) cos nx
                                + (1/π)(∫ f(x) sin x dx) sin x + ... + (1/π)(∫ f(x) sin nx dx) sin nx

that is,

   proj_W f = a₀ + Σ_{k=1}^{n} (aₖ cos kx + bₖ sin kx)     (**)

   where a₀ = (1/2π) ∫_{−π}^{π} f(x) dx, and aₖ = (1/π) ∫_{−π}^{π} f(x) cos kx dx, bₖ = (1/π) ∫_{−π}^{π} f(x) sin kx dx for 1 ≤ k ≤ n.

The coefficients aₖ and bₖ used here to write proj_W f are called the Fourier coefficients of f, and the trigonometric series (**) is the nth Fourier approximation for f.

Fact: As n → ∞, the approximation error (as measured by our distance function) ‖f − proj_W f‖ → 0. This is called mean square convergence. Mean square convergence is not equivalent to saying that for each x in [−π, π], lim_{n→∞} proj_W f(x) = f(x) (this is called pointwise convergence). It's a much harder problem to characterize for which f's and x's that is true.
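The coefficient formulas translate directly into a few lines of MATLAB. The sketch below is my own illustration (the choices f(x) = e^x and n = 4 are arbitrary), not the computation done in the notes:

    % Compute the n-th Fourier approximation of a function f on [-pi,pi].
    f  = @(x) exp(x);                 % any continuous f; exp is just an example
    n  = 4;
    a0 = integral(f, -pi, pi)/(2*pi);
    a  = zeros(1, n);  b = zeros(1, n);
    for k = 1:n
        a(k) = integral(@(x) f(x).*cos(k*x), -pi, pi)/pi;
        b(k) = integral(@(x) f(x).*sin(k*x), -pi, pi)/pi;
    end
    xs = linspace(-pi, pi, 400);
    projWf = a0*ones(size(xs));
    for k = 1:n
        projWf = projWf + a(k)*cos(k*xs) + b(k)*sin(k*xs);
    end
    plot(xs, f(xs), xs, projWf)       % f and its n-th Fourier approximation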

Concrete example: Find the nth Fourier approximation for f(x) = x on the interval [−π, π] and compare the graphs.

   a₀ = (1/2π) ∫_{−π}^{π} x dx = 0

   aₖ = (1/π) ∫_{−π}^{π} x cos kx dx = 0, because x cos(kx) is an odd function (for an odd function over [−π, π] the positive and negative areas between the graph and the x-axis cancel out)

   bₖ = (1/π) ∫_{−π}^{π} x sin kx dx = (1/π) [ sin(kx)/k² − x cos(kx)/k ] |_{−π}^{π}    (integration by parts)
      = (1/π)( −2π cos(kπ)/k ) = (2/k)(−1)^(k+1)

So for f(x) = x on [−π, π], we get the approximation

   x ≈ b₁ sin x + ... + bₙ sin nx = 2 sin x − (2/2) sin 2x + (2/3) sin 3x − ... + (−1)^(n+1)(2/n) sin nx
     = 2( sin x − sin 2x/2 + sin 3x/3 − ... + (−1)^(n+1) sin nx/n )

Here are three graphs for comparison, for three increasing values of n (graphs not reproduced here).

Notice (not clear in the pictures) that every Fourier approximation 2( sin x − sin 2x/2 + ... + (−1)^(n+1) sin nx/n ) has value 0 at the endpoints, but the function f(x) = x is not 0 at the endpoints: at −π and π the values of the Fourier polynomials do have a limit, 0, but 0 ≠ f(−π) and 0 ≠ f(π).
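Here is a small MATLAB sketch (mine) that draws f(x) = x together with a few of these partial sums; the particular values n = 2, 5, 10 are my own choice, not necessarily the ones used for the original graphs:

    % Partial sums 2*(sin x - sin 2x/2 + sin 3x/3 - ...) versus f(x) = x.
    xs = linspace(-pi, pi, 600);
    plot(xs, xs, 'k', 'LineWidth', 1.5)      % the function f(x) = x
    hold on
    for n = [2 5 10]
        s = zeros(size(xs));
        for k = 1:n
            s = s + 2*(-1)^(k+1)*sin(k*xs)/k;
        end
        plot(xs, s)                          % n-th Fourier approximation
    end
    hold off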

Example 2  We are still working in C = C[−π, π]. Let W be the subspace of C containing the polynomials of degree ≤ 5:

   W = Span{1, x, x², x³, x⁴, x⁵}

Find the polynomial in W closest to the function sin. MATLAB will handle the details for us.

The polynomial we want is q = proj_W sin; q is the best approximation from W to the function sin, meaning that the approximation error

   ‖q − sin‖ = ( ∫_{−π}^{π} |q(x) − sin x|² dx )^(1/2)

is smaller than

   ‖p − sin‖ = ( ∫_{−π}^{π} |p(x) − sin x|² dx )^(1/2)

for any p in W, p ≠ q.

It's easy to compute proj_W sin if we have an orthonormal basis for W, so we convert the standard basis for W,

   {v₁, v₂, ..., v₆} = {1, x, x², ..., x⁵},

into an orthonormal basis that we'll call {e₁, e₂, e₃, ..., e₆} using the Gram-Schmidt process. Since MATLAB is going to do the work, we will normalize at each step in the process. (For hand calculations, the arithmetic would be simpler if we just used Gram-Schmidt to get an orthogonal basis and, after Gram-Schmidt is completed, normalized each of those vectors.)

Note: the integrations below were done using MATLAB. Notice that every integration used to find the eᵢ's is very easy, but the constants that arise are messy and pile up fast; they can easily lead to errors when the computation is done by hand. Try to compute at least e₁, e₂, e₃ for yourself (with or without computer assistance) to be sure you understand what's going on. The steps are the same as for the usual Gram-Schmidt process in ℝⁿ.

We start the process with v₁ = 1. But v₁ is not a unit vector in C because ‖v₁‖ = ‖1‖ = (∫_{−π}^{π} 1² dx)^(1/2) = √(2π). So we normalize and let

   e₁ = v₁/‖v₁‖ = 1/‖1‖ = 1/√(2π)

For j = 2, ..., 6 in turn we use the Gram-Schmidt formula, normalizing at each step to get a unit vector:

   eⱼ = ( vⱼ − ⟨vⱼ, e₁⟩e₁ − ⟨vⱼ, e₂⟩e₂ − ... − ⟨vⱼ, eⱼ₋₁⟩eⱼ₋₁ ) / ‖ vⱼ − ⟨vⱼ, e₁⟩e₁ − ⟨vⱼ, e₂⟩e₂ − ... − ⟨vⱼ, eⱼ₋₁⟩eⱼ₋₁ ‖

So

   e₂ = ( v₂ − ⟨v₂, e₁⟩e₁ ) / ‖v₂ − ⟨v₂, e₁⟩e₁‖
      = ( x − (∫_{−π}^{π} x·(1/√(2π)) dx)(1/√(2π)) ) / ‖ x − (∫_{−π}^{π} x·(1/√(2π)) dx)(1/√(2π)) ‖
      = x/‖x‖   (since ∫_{−π}^{π} x dx = 0)
      = x / ( ∫_{−π}^{π} x² dx )^(1/2) = x / √(2π³/3) = √(3/(2π³)) x

Then

   e₃ = ( v₃ − ⟨v₃, e₁⟩e₁ − ⟨v₃, e₂⟩e₂ ) / ‖ v₃ − ⟨v₃, e₁⟩e₁ − ⟨v₃, e₂⟩e₂ ‖
      = ( x² − (∫ x² e₁ dx) e₁ − (∫ x² e₂ dx) e₂ ) / ‖ x² − (∫ x² e₁ dx) e₁ − (∫ x² e₂ dx) e₂ ‖

Notice that each integration requires no calculus harder than ∫ xⁿ dx. But already the constants are becoming a headache.
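Here is a sketch of how this normalized Gram-Schmidt loop can be set up in MATLAB (it assumes the Symbolic Math Toolbox; the variable names are mine, and the displayed forms may differ algebraically from the ones written above):

    % Normalized Gram-Schmidt on {1, x, x^2, ..., x^5} with the integral inner product.
    syms x
    ip = @(f,g) int(f*g, x, -pi, pi);          % <f,g> on C[-pi,pi]
    v  = [sym(1), x, x^2, x^3, x^4, x^5];      % standard basis for W
    e  = sym(zeros(1, 6));
    for j = 1:6
        u = v(j);
        for i = 1:j-1
            u = u - ip(v(j), e(i))*e(i);       % remove the components along e_1, ..., e_{j-1}
        end
        e(j) = simplify(u/sqrt(ip(u, u)));     % normalize
    end
    e(1:3)                                     % compare with e_1, e_2, e_3 above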

Continuing in this way gives

   e₃ = √(45/(8π⁵)) ( x² − π²/3 )

   e₄ = √(175/(8π⁷)) ( x³ − (3π²/5) x )

   e₅ = √(11025/(128π⁹)) ( x⁴ − (6π²/7) x² + 3π⁴/35 )

and finally

   e₆ = √(43659/(128π¹¹)) ( x⁵ − (10π²/9) x³ + (5π⁴/21) x )

Then e₁, ..., e₆ form an orthonormal basis for W. With them, we can use the projection formula to compute

   q(x) = proj_W sin x = ⟨sin x, e₁⟩e₁ + ⟨sin x, e₂⟩e₂ + ... + ⟨sin x, e₆⟩e₆

        = ( ∫_{−π}^{π} (sin x) e₁ dx ) e₁ + ( ∫_{−π}^{π} (sin x) e₂ dx ) e₂ + ... + ( ∫_{−π}^{π} (sin x) e₆ dx ) e₆

        = ( ∫_{−π}^{π} (sin x)·(1/√(2π)) dx )·(1/√(2π))
          + ( ∫_{−π}^{π} (sin x)·√(3/(2π³)) x dx )·√(3/(2π³)) x
          + ( ∫_{−π}^{π} (sin x)·√(45/(8π⁵)) (x² − π²/3) dx )·√(45/(8π⁵)) (x² − π²/3)
          + ... three more integral terms corresponding to e₄, e₅, and e₆

(Notice that the integrations needed are now a bit more challenging because they involve terms like ∫ xⁿ sin x dx. But they are manageable, with enough patience, using integration by parts.)
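Continuing the MATLAB sketch from above (it reuses x, ip, and e from that block), the projection itself is one more short loop:

    % Project sin onto W using the orthonormal basis e_1, ..., e_6.
    q = sym(0);
    for j = 1:6
        q = q + ip(sin(x), e(j))*e(j);     % <sin, e_j> e_j
    end
    q = expand(q);                         % exact coefficients: combinations of powers of pi
    vpa(q, 6)                              % decimal version of the best degree-5 approximation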

When all the smoke clears and terms are combined, MATLAB produces an exact expression for q(x) whose coefficients are (messy) combinations of powers of π.    (***)

If we convert the exact coefficients in (***) to approximate decimal coefficients, we have

   q(x) ≈ 0.98786 x − 0.15527 x³ + 0.005643 x⁵

Remember that everything in these examples is happening on the interval [−π, π]: on that interval, the polynomial q is the best fit to the function sin from among the polynomials in W (the polynomials with degree ≤ 5). Best fit means that if p is any polynomial with degree ≤ 5 and p ≠ q, then

   ‖q − sin‖ = ( ∫_{−π}^{π} (q(x) − sin x)² dx )^(1/2) < ( ∫_{−π}^{π} (p(x) − sin x)² dx )^(1/2) = ‖p − sin‖

Example 3  For comparison with q(x), here is a degree 5 polynomial approximation for sin x that you should already know from Calculus II: the 5th degree Taylor polynomial

   T₅(x) = f(0) + f′(0)x + (f″(0)/2!)x² + ... + (f⁽⁵⁾(0)/5!)x⁵

For f(x) = sin x, this gives

   T₅(x) = x − x³/3! + x⁵/5!

With approximate decimal coefficients,

   T₅(x) ≈ x − 0.166667 x³ + 0.0083333 x⁵    (compare the coefficients with those in q(x))

Because T₅(x) is constructed using derivatives of f at 0, it gives a better approximation for sin x near 0, but the approximation error gets larger and larger as x moves farther from 0. We can see this in the figure on the next page.
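Again continuing the MATLAB sketch (it reuses x and the symbolic q from the earlier blocks), we can compare the two mean square errors directly; the smaller number for q is exactly what "best fit in W" means:

    % Mean square errors of q and of the Taylor polynomial T5 over [-pi,pi].
    T5 = x - x^3/factorial(3) + x^5/factorial(5);
    err_q  = vpa(sqrt(int((q  - sin(x))^2, x, -pi, pi)), 6)    % error for the projection q
    err_T5 = vpa(sqrt(int((T5 - sin(x))^2, x, -pi, pi)), 6)    % noticeably larger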

The figure below shows the graphs of sin x, T₅(x), and q(x) on the interval [−π, π]. Across the whole interval [−π, π], the graphs of sin x and q(x) are so close that you can't see a difference between them (at the scale of this graph). Near 0 you also can't see the difference between T₅(x) and sin x, but that difference becomes clear in the picture as you move closer to the endpoints −π and π.

The table on the following page illustrates three things (the third is not clear from the graphs above):

i) As we move away from 0 toward ±π, the approximation to sin x using T₅(x) is not as good as the q(x) approximation.

ii) Linear algebra gives us the q(x) approximation. It has the advantage that it gives a good approximation for sin x over the whole interval [−π, π].

iii) Near 0, the Taylor polynomial T₅(x) is actually a better approximation than q(x) to sin x (in some sense, T₅(x) is the best of all degree 5 polynomial approximations to sin x near 0).

All table values were rounded to 4 significant digits (so, for example, an entry of 0.0000 is not exactly 0). For x running from −3.1416 to 3.1416 in steps of 0.2, the table lists sin x, T₅(x), q(x), and the two errors |sin x − T₅(x)| and |sin x − q(x)|. [The individual rows are not reproduced here.] The pattern it shows: the T₅ error is essentially 0 near x = 0 but grows steadily toward the endpoints, reaching about 0.52 at x = ±π; the q error is never that small near 0 (it hovers around a few thousandths), but it stays small over the whole interval, never exceeding about 0.016 even at x = ±π.

***Note: To be honest, when we picked q as a best approximation for sin on [−π, π], we did so to make ‖sin − q‖ = ( ∫_{−π}^{π} (q(x) − sin x)² dx )^(1/2) as small as possible; we didn't actually look at the point-by-point error |q(x) − sin x|, as we do in the table. For a Fourier approximation F_n(x) = a₀ + Σ_{k=1}^{n} (aₖ cos kx + bₖ sin kx) to a function f, we specifically said that having ‖F_n − f‖ get small might not force |F_n(x) − f(x)| to get small for particular values of x. Nevertheless, the table helps give a feel for how the polynomial q(x) is a better approximation to the sin function over the whole interval [−π, π] than some other polynomial of degree ≤ 5 such as T₅(x).
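For readers who want to regenerate a table like this one, here is a final MATLAB sketch (it again reuses the symbolic q from the earlier blocks; everything else is base MATLAB, and the grid of x values matches the one described above):

    % Tabulate sin x, T5(x), q(x) and the two errors on a grid over [-pi,pi].
    qf  = matlabFunction(q);                          % numeric version of the symbolic q
    T5f = @(x) x - x.^3/6 + x.^5/120;
    xs  = [-3.1416 + 0.2*(0:31), 3.1416]';            % -3.1416, -2.9416, ..., 3.0584, 3.1416
    T   = table(xs, sin(xs), T5f(xs), qf(xs), abs(sin(xs) - T5f(xs)), abs(sin(xs) - qf(xs)), ...
                'VariableNames', {'x', 'sin_x', 'T5', 'q', 'err_T5', 'err_q'});
    disp(T)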