Further Mathematical Methods (Linear Algebra) 2002

Solutions for Problem Sheet 9

In this problem sheet, we derived a new result about orthogonal projections and used it to find least squares approximations to some simple functions. We also looked at Fourier series in theory and in practice.

1. We are given a vector $x \in \mathbb{R}^n$ and told that $S$ is the subspace of $\mathbb{R}^n$ spanned by $x$, i.e. $S = \operatorname{Lin}\{x\}$. Now, we know that the matrix $P$ representing the orthogonal projection of $\mathbb{R}^n$ onto the range of a matrix $A$ is given by $P = A(A^tA)^{-1}A^t$. So, if we let $S$ be the range of $A$, i.e. $A$ is the matrix with $x$ as its [only] column vector, we can see that
$$P = x(x^tx)^{-1}x^t = x\left(\|x\|^2\right)^{-1}x^t = \frac{xx^t}{\|x\|^2},$$
as required.¹ ²

Footnote 1: Note that if $x$ is a unit vector, this derivation could replace part of our analysis of the $n \times n$ matrices $E_i$ considered in Problem Sheet 8.

Footnote 2: Bearing in mind a footnote in an earlier set of solutions, you may balk at this solution. However, our convention holds good here. To see this, we can re-run the argument above without assuming that $x^tx$ is a scalar, but treating it as a $1\times1$ matrix. Consider $P = x(x^tx)^{-1}x^t = x\left[\|x\|^2\right]^{-1}x^t$. But, the inverse of a $1\times1$ matrix $[a]$ is just $[a]^{-1} = [a^{-1}]$, as $[a][a^{-1}] = [a^{-1}][a] = [aa^{-1}] = [1]$, the identity matrix. Thus, $P = x\left[\|x\|^{-2}\right]x^t$. However, the quantity $\left[\|x\|^{-2}\right]x^t$ is equivalent to the $1\times n$ vector $x^t$ with each element multiplied by the scalar $\|x\|^{-2}$. Consequently, taking out this common factor, we get $\left[\|x\|^{-2}\right]x^t = \|x\|^{-2}x^t$, and hence the desired result.

2. (a) We are asked to find an orthonormal basis for $P_3[-\pi,\pi]$, i.e. the vector space spanned by the vectors $\{1, x, x^2, x^3\}$, using the inner product
$$\langle f, g\rangle = \int_{-\pi}^{\pi} f(x)g(x)\,dx,$$
where $f : x \mapsto f(x)$ and $g : x \mapsto g(x)$ for all $x \in [-\pi,\pi]$. Clearly, the set $\{1, x, x^2, x^3\}$ is a basis for $P_3[-\pi,\pi]$, and so we can construct an orthonormal basis for this space by using the Gram-Schmidt procedure:

Taking $v_1 = 1$, we get
$$\|v_1\|^2 = \langle v_1, v_1\rangle = \int_{-\pi}^{\pi}dx = \left[x\right]_{-\pi}^{\pi} = 2\pi,$$
and so we set $e_1 = 1/\sqrt{2\pi}$.

Taking $v_2 = x$, we construct the vector $u_2 = v_2 - \langle v_2, e_1\rangle e_1 = x$, since
$$\langle v_2, e_1\rangle = \left\langle x, \frac{1}{\sqrt{2\pi}}\right\rangle = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi}x\,dx = \frac{1}{\sqrt{2\pi}}\left[\frac{x^2}{2}\right]_{-\pi}^{\pi} = 0.$$
Then, we need to normalise this vector, i.e. as
$$\|u_2\|^2 = \langle u_2, u_2\rangle = \int_{-\pi}^{\pi}x^2\,dx = \left[\frac{x^3}{3}\right]_{-\pi}^{\pi} = \frac{2\pi^3}{3},$$
we set $e_2 = \sqrt{\frac{3}{2\pi^3}}\,x$.
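(As a quick cross-check on this Gram-Schmidt computation, and on the two steps that follow, the whole run can be reproduced symbolically. The sketch below uses sympy; the helper name `inner` is our own, not something from the course.)

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # The L^2 inner product on [-pi, pi] used in Question 2
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

ortho = []
for v in [1, x, x**2, x**3]:
    u = v - sum(inner(v, e) * e for e in ortho)        # subtract components along e_1, ..., e_{k-1}
    ortho.append(sp.expand(u / sp.sqrt(inner(u, u))))  # normalise

for e in ortho:
    print(e)
# Expect: 1/sqrt(2*pi), then multiples of x, x**2 - pi**2/3 and x**3 - 3*pi**2*x/5
```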

Taking $v_3 = x^2$, we construct the vector $u_3$ where
$$u_3 = v_3 - \langle v_3, e_1\rangle e_1 - \langle v_3, e_2\rangle e_2 = x^2 - \frac{\pi^2}{3},$$
since
$$\langle v_3, e_1\rangle = \left\langle x^2, \frac{1}{\sqrt{2\pi}}\right\rangle = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi}x^2\,dx = \frac{1}{\sqrt{2\pi}}\cdot\frac{2\pi^3}{3},$$
and, the integrand being odd,
$$\langle v_3, e_2\rangle = \sqrt{\frac{3}{2\pi^3}}\int_{-\pi}^{\pi}x^3\,dx = 0.$$
Then, we need to normalise this vector, i.e. as
$$\|u_3\|^2 = \int_{-\pi}^{\pi}\left(x^2 - \frac{\pi^2}{3}\right)^2dx = \int_{-\pi}^{\pi}\left(x^4 - \frac{2\pi^2}{3}x^2 + \frac{\pi^4}{9}\right)dx = \frac{2\pi^5}{5} - \frac{4\pi^5}{9} + \frac{2\pi^5}{9} = \frac{8\pi^5}{45},$$
we set $e_3 = \sqrt{\frac{45}{8\pi^5}}\left(x^2 - \frac{\pi^2}{3}\right)$.

Taking $v_4 = x^3$, we construct the vector $u_4$ where
$$u_4 = v_4 - \langle v_4, e_1\rangle e_1 - \langle v_4, e_2\rangle e_2 - \langle v_4, e_3\rangle e_3 = x^3 - \frac{3\pi^2}{5}x,$$
since $\langle v_4, e_1\rangle = 0$ and $\langle v_4, e_3\rangle = 0$ (odd integrands again), whilst
$$\langle v_4, e_2\rangle = \sqrt{\frac{3}{2\pi^3}}\int_{-\pi}^{\pi}x^4\,dx = \sqrt{\frac{3}{2\pi^3}}\cdot\frac{2\pi^5}{5}.$$
Then, we need to normalise this vector, i.e. as
$$\|u_4\|^2 = \int_{-\pi}^{\pi}\left(x^3 - \frac{3\pi^2}{5}x\right)^2dx = \int_{-\pi}^{\pi}\left(x^6 - \frac{6\pi^2}{5}x^4 + \frac{9\pi^4}{25}x^2\right)dx = \frac{2\pi^7}{7} - \frac{12\pi^7}{25} + \frac{6\pi^7}{25} = \frac{8\pi^7}{175},$$
we set $e_4 = \sqrt{\frac{175}{8\pi^7}}\left(x^3 - \frac{3\pi^2}{5}x\right)$.

Consequently, the set of vectors
$$\left\{\frac{1}{\sqrt{2\pi}},\ \sqrt{\frac{3}{2\pi^3}}\,x,\ \sqrt{\frac{45}{8\pi^5}}\left(x^2 - \frac{\pi^2}{3}\right),\ \sqrt{\frac{175}{8\pi^7}}\left(x^3 - \frac{3\pi^2}{5}x\right)\right\}$$
is an orthonormal basis for $P_3[-\pi,\pi]$.

Hence, to find a least squares approximation to $\sin x$ in $P_3[-\pi,\pi]$, we just have to evaluate the orthogonal projection of the vector $\sin x$ onto $P_3[-\pi,\pi]$, i.e.
$$\langle\sin x, e_1\rangle e_1 + \langle\sin x, e_2\rangle e_2 + \langle\sin x, e_3\rangle e_3 + \langle\sin x, e_4\rangle e_4,$$
where $\sin x : x \mapsto \sin x$.

To evaluate these inner products, it is convenient to note that, using integration by parts twice, we have for $n \geq 2$:
$$I_n = \int_{-\pi}^{\pi}x^n\sin x\,dx = \left[-x^n\cos x\right]_{-\pi}^{\pi} + n\int_{-\pi}^{\pi}x^{n-1}\cos x\,dx = \left[1 - (-1)^n\right]\pi^n + n\left\{\left[x^{n-1}\sin x\right]_{-\pi}^{\pi} - (n-1)\int_{-\pi}^{\pi}x^{n-2}\sin x\,dx\right\},$$
i.e.
$$I_n = \left[1 - (-1)^n\right]\pi^n - n(n-1)I_{n-2}.$$
If $n = 0$, we have
$$I_0 = \int_{-\pi}^{\pi}\sin x\,dx = 0,$$
i.e. $I_0 = 0$. If $n = 1$, we have
$$I_1 = \int_{-\pi}^{\pi}x\sin x\,dx = \left[-x\cos x\right]_{-\pi}^{\pi} + \int_{-\pi}^{\pi}\cos x\,dx = 2\pi + 0,$$
i.e. $I_1 = 2\pi$. So, we can see that the reduction formula that we have just derived gives us:
$$I_2 = \left[1 - (-1)^2\right]\pi^2 - 2\cdot1\cdot I_0 = 0, \quad\text{and}\quad I_3 = \left[1 - (-1)^3\right]\pi^3 - 3\cdot2\cdot I_1 = 2\pi^3 - 12\pi.$$
Thus, looking at each of the terms that we have to evaluate, we get:
$$\langle\sin x, e_1\rangle = \frac{1}{\sqrt{2\pi}}I_0 = 0, \quad\text{and so}\quad \langle\sin x, e_1\rangle e_1 = 0;$$
$$\langle\sin x, e_2\rangle = \sqrt{\frac{3}{2\pi^3}}\,I_1, \quad\text{and so}\quad \langle\sin x, e_2\rangle e_2 = \frac{3}{2\pi^3}\cdot2\pi\,x = \frac{3}{\pi^2}\,x;$$
$$\langle\sin x, e_3\rangle = \sqrt{\frac{45}{8\pi^5}}\left(I_2 - \frac{\pi^2}{3}I_0\right) = 0, \quad\text{and so}\quad \langle\sin x, e_3\rangle e_3 = 0;$$
$$\langle\sin x, e_4\rangle = \sqrt{\frac{175}{8\pi^7}}\left(I_3 - \frac{3\pi^2}{5}I_1\right) = \sqrt{\frac{175}{8\pi^7}}\left(2\pi^3 - 12\pi - \frac{6\pi^3}{5}\right) = \sqrt{\frac{175}{8\pi^7}}\cdot\frac{4\pi}{5}\left(\pi^2 - 15\right),$$
and so
$$\langle\sin x, e_4\rangle e_4 = \frac{175}{8\pi^7}\cdot\frac{4\pi}{5}\left(\pi^2 - 15\right)\left(x^3 - \frac{3\pi^2}{5}x\right) = \frac{35}{2\pi^6}\left(\pi^2 - 15\right)\left(x^3 - \frac{3\pi^2}{5}x\right).$$
Our desired approximation to $\sin x$ is therefore
$$\frac{3}{\pi^2}x + \frac{35}{2\pi^6}\left(\pi^2 - 15\right)\left(x^3 - \frac{3\pi^2}{5}x\right),$$
which on simplifying yields
$$\frac{5}{2\pi^6}\left[3\pi^2\left(21 - \pi^2\right)x + 7\left(\pi^2 - 15\right)x^3\right].$$
So, to calculate the mean square error associated with this approximation, we use the result from the lectures, i.e.
$$\mathrm{MSE} = \|f\|^2 - \sum_{i=1}^{4}\langle f, e_i\rangle^2,$$
which, as
$$\|f\|^2 = \langle f, f\rangle = \int_{-\pi}^{\pi}\sin^2x\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\left[1 - \cos(2x)\right]dx = \pi + 0 = \pi,$$
and as $\langle\sin x, e_2\rangle^2 = \frac{3}{2\pi^3}(2\pi)^2 = \frac{6}{\pi}$ and $\langle\sin x, e_4\rangle^2 = \frac{175}{8\pi^7}\cdot\frac{16\pi^2}{25}\left(\pi^2 - 15\right)^2 = \frac{14}{\pi^5}\left(\pi^2 - 15\right)^2$, gives us
$$\mathrm{MSE} = \pi - \frac{6}{\pi} - \frac{14}{\pi^5}\left(\pi^2 - 15\right)^2.$$
Evaluating this expression then tells us that the mean square error for this approximation is 0.0276 to four decimal places.
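(A numerical sanity check, not part of the original solutions: the reduction formula values and the final mean square error can be confirmed with scipy quadrature.)

```python
import numpy as np
from scipy.integrate import quad

pi = np.pi

# The least squares cubic found in part (a)
p = lambda x: 5/(2*pi**6) * (3*pi**2*(21 - pi**2)*x + 7*(pi**2 - 15)*x**3)

# Check I_1 = 2*pi and I_3 = 2*pi^3 - 12*pi
I1, _ = quad(lambda x: x * np.sin(x), -pi, pi)
I3, _ = quad(lambda x: x**3 * np.sin(x), -pi, pi)
print(I1 - 2*pi, I3 - (2*pi**3 - 12*pi))   # both ~ 0

# MSE = ||sin - p||^2 should match pi - 6/pi - 14*(pi^2 - 15)^2/pi^5 ~ 0.0276
mse, _ = quad(lambda x: (np.sin(x) - p(x))**2, -pi, pi)
print(mse, pi - 6/pi - 14*(pi**2 - 15)**2/pi**5)
```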

(b) We are told that the first two [non-zero] terms in the Taylor series for $\sin x$ are given by
$$x - \frac{x^3}{3!},$$
for all $x \in [-\pi,\pi]$, and we are asked to find the mean square error between $\sin x$ and this approximation. This is fairly straightforward, as most of the integrals that have to be evaluated have already been calculated in (a).³ So, using the definition of the mean square error, it should be clear that:
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left[f(x) - g(x)\right]^2dx = \int_{-\pi}^{\pi}\left[\sin x - x + \frac{x^3}{6}\right]^2dx = \int_{-\pi}^{\pi}\left[\sin^2x + x^2 + \frac{x^6}{36} - 2x\sin x + \frac{x^3}{3}\sin x - \frac{x^4}{3}\right]dx$$
$$= \pi + \frac{2\pi^3}{3} + \frac{\pi^7}{126} - 2I_1 + \frac{I_3}{3} - \frac{2\pi^5}{15},$$
i.e.
$$\mathrm{MSE} = \frac{\pi^7}{126} - \frac{2\pi^5}{15} + \frac{4\pi^3}{3} - 7\pi.$$
Evaluating this expression then tells us that the mean square error for this approximation is 2.5185 to four decimal places.

Footnote 3: Notice that we cannot use the nice mean square error formula which we used in (a) here. This is because this Taylor series is not written in terms of an orthonormal basis.

So, which of these cubics, i.e. the one calculated in (a) or the Taylor series, provides the best approximation to $\sin x$? Well, by looking at the mean square errors it should be clear that the least squares approximation calculated in (a) gives a better approximation to $\sin x$ than the Taylor series. Indeed, the mean square error for the Taylor series is roughly ninety times bigger than the mean square error for the result calculated in part (a). To put this all into perspective, these results are illustrated in Figure 1.
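(Again as a sketch of our own, the comparison between the two cubics can be reproduced in a few lines; the ratio of the two printed values should be roughly ninety.)

```python
import numpy as np
from scipy.integrate import quad

pi = np.pi
ls = lambda x: 5/(2*pi**6) * (3*pi**2*(21 - pi**2)*x + 7*(pi**2 - 15)*x**3)  # part (a)
taylor = lambda x: x - x**3/6                                                # part (b)

for name, g in [("least squares", ls), ("Taylor", taylor)]:
    mse, _ = quad(lambda x: (np.sin(x) - g(x))**2, -pi, pi)
    print(name, round(mse, 4))   # ~0.0276 and ~2.5185
```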

[Figure 1: This figure illustrates the function $\sin x$, the first two [non-zero] terms of its Taylor series, and the least squares approximation calculated in part (a). Notice that the Taylor series gives a very good approximation to $\sin x$ for small $x$, but then starts to deviate rapidly from this function (we should expect this, since we know that the Taylor series for $\sin x$ is a good approximation for small $x$). However, the least squares approximation calculated in part (a) gives a good approximation for all values of $x$ in the interval $[-\pi,\pi]$.]

3. We are asked to find least squares approximations to two functions over the interval $[0,1]$ using the inner product
$$\langle f, g\rangle = \int_0^1 f(x)g(x)\,dx,$$
where $f : x \mapsto f(x)$ and $g : x \mapsto g(x)$ for all $x \in [0,1]$. We shall do each of these in turn.

Firstly, we are asked to find a least squares approximation to $x$ by a function of the form $a + be^x$. That is, we need to find the orthogonal projection of the vector $x$ onto the subspace spanned by the vectors $1$ and $e^x$, where $e^x : x \mapsto e^x$. So, we note that the set of vectors $\{1, e^x\}$ is linearly independent since, for any $x \in [0,1]$, the Wronskian for these functions is given by
$$W(x) = \begin{vmatrix}1 & e^x\\ 0 & e^x\end{vmatrix} = e^x \neq 0,$$
i.e. this set of vectors gives us a basis for the subspace that we are going to orthogonally project onto. However, to do the orthogonal projection, we need an orthonormal basis for this subspace, and to get this we use the Gram-Schmidt procedure:

Taking $v_1 = 1$, we get
$$\|v_1\|^2 = \langle v_1, v_1\rangle = \int_0^1 dx = \left[x\right]_0^1 = 1,$$
and so we set $e_1 = 1$.

Taking $v_2 = e^x$, we construct the vector $u_2$ where
$$u_2 = v_2 - \langle v_2, e_1\rangle e_1 = e^x - (e - 1),$$
since
$$\langle v_2, e_1\rangle = \langle e^x, 1\rangle = \int_0^1 e^x\,dx = \left[e^x\right]_0^1 = e - 1.$$
Then, we need to normalise this vector, i.e. as
$$\|u_2\|^2 = \int_0^1\left[e^x - (e-1)\right]^2dx = \int_0^1\left[e^{2x} - 2(e-1)e^x + (e-1)^2\right]dx = \frac{e^2 - 1}{2} - 2(e-1)^2 + (e-1)^2 = \frac{e^2 - 1}{2} - (e-1)^2 = \frac{(3-e)(e-1)}{2},$$
which is a positive quantity since $2 < e < 3$, we set
$$e_2 = \frac{1}{\alpha}\left[e^x - (e-1)\right] \quad\text{where}\quad \alpha = \sqrt{\frac{(3-e)(e-1)}{2}}.$$

Consequently, the set of vectors $\left\{1, \frac{1}{\alpha}\left[e^x - (e-1)\right]\right\}$ is an orthonormal basis for the subspace spanned by the vectors $\{1, e^x\}$. Hence, to find a least squares approximation to $x$ in this subspace, we just have to evaluate
$$\langle x, e_1\rangle e_1 + \langle x, e_2\rangle e_2,$$
where $x : x \mapsto x$. To do this, we note that
$$\langle x, 1\rangle = \int_0^1 x\,dx = \left[\frac{x^2}{2}\right]_0^1 = \frac{1}{2}, \quad\text{and so}\quad \langle x, e_1\rangle = \frac{1}{2};$$
$$\langle x, e^x\rangle = \int_0^1 xe^x\,dx = \left[xe^x\right]_0^1 - \int_0^1 e^x\,dx = e - (e - 1) = 1,$$
and so
$$\langle x, e_2\rangle = \frac{1}{\alpha}\left[\langle x, e^x\rangle - (e-1)\langle x, 1\rangle\right] = \frac{1}{\alpha}\left[1 - \frac{e-1}{2}\right] = \frac{3-e}{2\alpha}.$$
Our desired approximation to $x$ is therefore
$$\frac{1}{2} + \frac{3-e}{2\alpha^2}\left[e^x - (e-1)\right],$$
which, since $\alpha^2 = (3-e)(e-1)/2$, on simplifying yields
$$-\frac{1}{2} + \frac{e^x}{e-1}.$$
So, using the form given in the question, the least squares approximation to $x$ over the interval $[0,1]$ has $a = -1/2$ and $b = 1/(e-1)$.

Secondly, we are asked to find a least squares approximation to $e^x$ by a function of the form $a + bx$. That is, we need to find the orthogonal projection of the vector $e^x$ onto the subspace spanned by the vectors $1$ and $x$. However, the set of vectors $\{1, x\}$ is actually a basis for the vector space $P_1[0,1]$, and we know from an earlier problem sheet that an orthonormal basis for this vector space is $\{1, \sqrt{3}(2x-1)\}$. Hence, to find the required least squares approximation to $e^x$, we just have to evaluate
$$\langle e^x, e_1\rangle e_1 + \langle e^x, e_2\rangle e_2,$$
where $e_1 = 1$ and $e_2 = \sqrt{3}(2x-1)$. To do this, we note that
$$\langle e^x, 1\rangle = \int_0^1 e^x\,dx = \left[e^x\right]_0^1 = e - 1, \quad\text{and so}\quad \langle e^x, e_1\rangle = e - 1;$$
$$\langle e^x, x\rangle = \int_0^1 xe^x\,dx = 1 \text{ (from above)}, \quad\text{and so}\quad \langle e^x, e_2\rangle = \sqrt{3}\left(2\langle e^x, x\rangle - \langle e^x, 1\rangle\right) = \sqrt{3}(3 - e).$$
Our desired approximation to $e^x$ is therefore
$$(e - 1) + 3(3 - e)(2x - 1),$$
which on simplifying yields
$$(4e - 10) + (18 - 6e)x.$$
So, using the form given in the question, the least squares approximation to $e^x$ over the interval $[0,1]$ has $a = 4e - 10$ and $b = 18 - 6e$.
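(Both projections can be checked against a direct discrete least squares fit on a fine grid, which approximates the $L^2$ projection on $[0,1]$; this sketch, with our own variable names, should reproduce $a$ and $b$ to several decimal places.)

```python
import numpy as np

x = np.linspace(0, 1, 100001)   # fine grid on [0, 1]
e = np.e

# Fit x ~ a + b*e^x : expect a = -1/2, b = 1/(e - 1)
A = np.column_stack([np.ones_like(x), np.exp(x)])
print(np.linalg.lstsq(A, x, rcond=None)[0], [-0.5, 1/(e - 1)])

# Fit e^x ~ a + b*x : expect a = 4e - 10, b = 18 - 6e
B = np.column_stack([np.ones_like(x), x])
print(np.linalg.lstsq(B, np.exp(x), rcond=None)[0], [4*e - 10, 18 - 6*e])
```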

4. We are asked to find a vector $q \in P_2(\mathbb{R})$ such that
$$p(1) = \langle p, q\rangle,$$
for every vector $p : x \mapsto p(x)$ in $P_2(\mathbb{R})$, where the inner product defined on this vector space is given by
$$\langle p, q\rangle = \sum_{i=-1}^{1}p(i)q(i).$$
(Notice that this is an instantiation of the inner product given in an earlier problem sheet, where in this case $-1, 0, 1$ are the three fixed and distinct real numbers required to define it.) So, we need to find a quadratic $q(x)$ [say $ax^2 + bx + c$] such that its inner product with any other quadratic $p(x)$ [say $\alpha x^2 + \beta x + \gamma$] gives $p(1)$ [which in this case is $\alpha + \beta + \gamma$]. So, taking $p$ and $q$ as suggested, we have
$$\langle p, q\rangle = \sum_{i=-1}^{1}p(i)q(i) = (a - b + c)(\alpha - \beta + \gamma) + c\gamma + (a + b + c)(\alpha + \beta + \gamma),$$
and this must equal $p(1)$, i.e. $\alpha + \beta + \gamma$. So, to find $q$, we equate the coefficients of the terms in $\alpha$, $\beta$ and $\gamma$ to get
$$\alpha: \quad (a - b + c) + (a + b + c) = 2a + 2c = 1,$$
$$\beta: \quad -(a - b + c) + (a + b + c) = 2b = 1,$$
$$\gamma: \quad (a - b + c) + c + (a + b + c) = 2a + 3c = 1,$$
which on solving gives us $a = b = 1/2$ and $c = 0$. Thus, the required quadratic $q$ is given by
$$q(x) = \frac{x^2 + x}{2}.$$
It may surprise you that there is only one quadratic that does this job, or indeed that there is one at all. But, there is actually some theory that guarantees its existence and uniqueness (although we do not cover it in this course). Notice that we can check that this answer is correct since, for any quadratic $p(x) = ax^2 + bx + c$, it gives
$$\langle p, q\rangle = (a - b + c)\,\underbrace{q(-1)}_{=\,0} + c\,\underbrace{q(0)}_{=\,0} + (a + b + c)\,\underbrace{q(1)}_{=\,1} = a + b + c = p(1),$$
as desired.
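(The reproducing property of $q$ is easy to test for a few random quadratics; a minimal sketch of ours:)

```python
import numpy as np

rng = np.random.default_rng(0)
q = lambda t: (t**2 + t) / 2

for _ in range(3):
    a, b, c = rng.standard_normal(3)
    p = lambda t: a*t**2 + b*t + c
    inner = sum(p(i) * q(i) for i in (-1, 0, 1))   # the discrete inner product of Question 4
    print(np.isclose(inner, p(1)))                 # True each time
```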

The rest of the questions on this problem sheet are intended to develop your intuitions as to what a Fourier series is in terms of orthogonal projections onto subspaces. To do this, we start by considering an example where the function we are dealing with lies in $\operatorname{Lin}(G_n)$ for some $n$.⁴ Indeed, Question 9 gives us a possible application of this kind of situation.

Footnote 4: For simplicity, we shall deal with cases where the required $n$ is small!

5. We are asked to consider the function $\cos(3x)$ defined over the interval $[-\pi,\pi]$. To find the Fourier series of orders 2, 3 and 4 that represent this function, we evaluate integrals of the form
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(3x)\cos(kx)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos[(k+3)x] + \cos[(k-3)x]\right\}dx,$$
where if $k = 3$, we have
$$a_3 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos(6x) + 1\right\}dx = 0 + \frac{2\pi}{2\pi} = 1,$$
and if $k \neq 3$, we get $a_k = 0$. Also, as $\cos(3x)$ is an even function, the $b_k$ coefficients will all be zero too. Thus, we find that the required Fourier series are:

Order   Fourier series   MSE
2       0                $\pi$
3       $\cos(3x)$       0
4       $\cos(3x)$       0

where the mean square error for the second order Fourier series is given by
$$\int_{-\pi}^{\pi}\cos^2(3x)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\left[\cos(6x) + 1\right]dx = \pi,$$
and the third and fourth order Fourier series have a mean square error of zero. This tells us [unsurprisingly, perhaps] that the Fourier series for $\cos(3x)$ is $\cos(3x)$, and that $\cos(3x)$ is in $\operatorname{Lin}(G_3)$ and $\operatorname{Lin}(G_4)$, but not in $\operatorname{Lin}(G_2)$ (as one would expect from the mean square errors).

Other Problems

Here we derived the formulae for calculating Fourier series. We then saw how to apply these formulae by finding the Fourier series of some simple functions.

Warning: The remaining solutions will make prodigious use of the following facts about sines and cosines:

- If $r \in \mathbb{Z}$, then $\sin(r\pi) = 0$ and $\cos(r\pi) = (-1)^r$.
- The sine function is odd, i.e. $\sin(-x) = -\sin(x)$, whereas the cosine function is even, i.e. $\cos(-x) = \cos(x)$.
- The integral of an odd function over the interval $[-\pi,\pi]$ is zero.
- The following trigonometric identities:
$$\sin\theta + \sin\phi = 2\sin\left(\frac{\theta+\phi}{2}\right)\cos\left(\frac{\theta-\phi}{2}\right), \qquad \sin\theta - \sin\phi = 2\cos\left(\frac{\theta+\phi}{2}\right)\sin\left(\frac{\theta-\phi}{2}\right),$$
$$\cos\theta + \cos\phi = 2\cos\left(\frac{\theta+\phi}{2}\right)\cos\left(\frac{\theta-\phi}{2}\right), \qquad \cos\theta - \cos\phi = -2\sin\left(\frac{\theta+\phi}{2}\right)\sin\left(\frac{\theta-\phi}{2}\right),$$
from which, among other things, the product-to-sum and double angle formulae follow.

You should bear them in mind when looking at many of the integrals that are evaluated below!

6. We are asked to consider the subset of $F[-\pi,\pi]$ given by $G_n = \{g_0, g_1, \ldots, g_n, g_{n+1}, \ldots, g_{2n}\}$, where
$$g_0(x) = \frac{1}{\sqrt{2\pi}},$$
and
$$g_1(x) = \frac{\cos(x)}{\sqrt{\pi}},\quad g_2(x) = \frac{\cos(2x)}{\sqrt{\pi}},\quad \ldots,\quad g_n(x) = \frac{\cos(nx)}{\sqrt{\pi}},$$
whereas (for convenience we relabel these)
$$h_1(x) = g_{n+1}(x) = \frac{\sin(x)}{\sqrt{\pi}},\quad h_2(x) = g_{n+2}(x) = \frac{\sin(2x)}{\sqrt{\pi}},\quad \ldots,\quad h_n(x) = g_{2n}(x) = \frac{\sin(nx)}{\sqrt{\pi}}.$$
To show that $G_n$ is an orthonormal set when using the inner product
$$\langle f, g\rangle = \int_{-\pi}^{\pi}f(x)g(x)\,dx,$$

we have to establish that they satisfy the orthonormality condition, i.e.
$$\langle g_i, g_j\rangle = \begin{cases}1 & \text{if } i = j,\\ 0 & \text{if } i \neq j.\end{cases}$$
Firstly, we can see that the vector $g_0$ satisfies this condition, as
$$\langle g_0, g_0\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi}dx = 1,$$
and so it is unit; whilst for $1 \leq k \leq n$, we have
$$\langle g_0, g_k\rangle = \frac{1}{\pi\sqrt{2}}\int_{-\pi}^{\pi}\cos(kx)\,dx = \frac{1}{\pi\sqrt{2}}\left[\frac{\sin(kx)}{k}\right]_{-\pi}^{\pi} = 0,$$
and
$$\langle g_0, h_k\rangle = \frac{1}{\pi\sqrt{2}}\int_{-\pi}^{\pi}\sin(kx)\,dx = \frac{1}{\pi\sqrt{2}}\left[-\frac{\cos(kx)}{k}\right]_{-\pi}^{\pi} = 0,$$
and so the vector $g_0$ is orthogonal to the other vectors in $G_n$.

Secondly, note that for $1 \leq k, l \leq n$,
$$\langle g_k, g_l\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(kx)\cos(lx)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos[(k+l)x] + \cos[(k-l)x]\right\}dx,$$
and so if $k \neq l$, then
$$\langle g_k, g_l\rangle = \frac{1}{2\pi}\left[\frac{\sin[(k+l)x]}{k+l} + \frac{\sin[(k-l)x]}{k-l}\right]_{-\pi}^{\pi} = 0,$$
whereas if $k = l$, we have
$$\langle g_k, g_k\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos(2kx) + 1\right\}dx = \frac{1}{2\pi}\left[\frac{\sin(2kx)}{2k} + x\right]_{-\pi}^{\pi} = 1.$$
Thus, we can see that, for $1 \leq k \leq n$, the vectors $g_k$ are unit and mutually orthogonal.

Thirdly, using a similar calculation, we note that for $1 \leq k, l \leq n$,
$$\langle h_k, h_l\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi}\sin(kx)\sin(lx)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos[(k-l)x] - \cos[(k+l)x]\right\}dx,$$
and so if $k \neq l$, then
$$\langle h_k, h_l\rangle = \frac{1}{2\pi}\left[\frac{\sin[(k-l)x]}{k-l} - \frac{\sin[(k+l)x]}{k+l}\right]_{-\pi}^{\pi} = 0,$$
whereas if $k = l$, we have
$$\langle h_k, h_k\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{1 - \cos(2kx)\right\}dx = \frac{1}{2\pi}\left[x - \frac{\sin(2kx)}{2k}\right]_{-\pi}^{\pi} = 1.$$
Thus, we can see that, for $1 \leq k \leq n$, the vectors $h_k$ are unit and mutually orthogonal.

Lastly, taking $1 \leq k, l \leq n$, we can see that
$$\langle g_k, h_l\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(kx)\sin(lx)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\sin[(k+l)x] - \sin[(k-l)x]\right\}dx,$$
and so if $k \neq l$, then
$$\langle g_k, h_l\rangle = \frac{1}{2\pi}\left[-\frac{\cos[(k+l)x]}{k+l} + \frac{\cos[(k-l)x]}{k-l}\right]_{-\pi}^{\pi} = 0,$$
whereas if $k = l$, we have
$$\langle g_k, h_k\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sin(2kx)\,dx = \frac{1}{2\pi}\left[-\frac{\cos(2kx)}{2k}\right]_{-\pi}^{\pi} = 0.$$
Thus, for $1 \leq k, l \leq n$, the vectors $g_k$ and $h_l$ are mutually orthogonal. Consequently, we have shown that the vectors in $G_n$ are orthonormal, as required.
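(The orthonormality of $G_n$ can also be confirmed numerically by assembling the Gram matrix of inner products; the sketch below, ours rather than the course's, should print the $(2n+1)\times(2n+1)$ identity matrix.)

```python
import numpy as np
from scipy.integrate import quad

pi, n = np.pi, 3
funcs = ([lambda x: 1/np.sqrt(2*pi)]
         + [lambda x, k=k: np.cos(k*x)/np.sqrt(pi) for k in range(1, n + 1)]
         + [lambda x, k=k: np.sin(k*x)/np.sqrt(pi) for k in range(1, n + 1)])

gram = [[round(quad(lambda x: f(x)*g(x), -pi, pi)[0], 8) for g in funcs] for f in funcs]
print(np.array(gram))   # identity, up to quadrature error
```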

Further, we are asked to show that any trigonometric polynomial of order $n$ or less can be represented by a vector in $\operatorname{Lin}(G_n)$. This is clearly the case, as a trigonometric polynomial of order $n$ or less is of the form
$$c_0 + c_1\cos(x) + \cdots + c_n\cos(nx) + d_1\sin(x) + \cdots + d_n\sin(nx),$$
and this can therefore be represented by the vector
$$\sqrt{2\pi}\,c_0\,g_0 + \sqrt{\pi}\,c_1\,g_1 + \cdots + \sqrt{\pi}\,c_n\,g_n + \sqrt{\pi}\,d_1\,h_1 + \cdots + \sqrt{\pi}\,d_n\,h_n,$$
which is in $\operatorname{Lin}(G_n)$, as required.

Note: In this question, we have established some orthogonality relations which may be useful later. For convenience, we list them here for easy reference:
$$\int_{-\pi}^{\pi}\cos(kx)\cos(lx)\,dx = \begin{cases}\pi & \text{if } k = l,\\ 0 & \text{if } k \neq l,\end{cases} \qquad \int_{-\pi}^{\pi}\sin(kx)\sin(lx)\,dx = \begin{cases}\pi & \text{if } k = l,\\ 0 & \text{if } k \neq l,\end{cases} \qquad \int_{-\pi}^{\pi}\cos(kx)\sin(lx)\,dx = 0,$$
where $k$ and $l$ are positive integers. Further, just in case you haven't twigged, we note that
$$\int_{-\pi}^{\pi}\sin(kx)\,dx = 0 \quad\text{and}\quad \int_{-\pi}^{\pi}\cos(kx)\,dx = 0,$$
when, again, $k$ is a positive integer.

7. Let us suppose that the vector $f \in F[-\pi,\pi]$ represents a function that is not in $\operatorname{Lin}(G_n)$. Using the result from the lectures, we can see that the orthogonal projection of $f$ onto $\operatorname{Lin}(G_n)$⁵ is just
$$Pf = \sum_{k=0}^{2n}\langle f, g_k\rangle g_k.$$
As this series represents the orthogonal projection, we know (from the lectures) that it will minimise the quantity $\|f - g\|$ over $g \in \operatorname{Lin}(G_n)$. That is, it minimises the quantity
$$\|f - g\|^2 = \langle f - g, f - g\rangle = \int_{-\pi}^{\pi}\left[f(x) - g(x)\right]^2dx,$$
where $f : x \mapsto f(x)$ and $g : x \mapsto g(x)$. Further, we must show that this series can be written as
$$\frac{a_0}{2} + \sum_{k=1}^{n}\left\{a_k\cos(kx) + b_k\sin(kx)\right\},$$

Footnote 5: Clearly, $G_n \subseteq F[-\pi,\pi]$, and we are considering the subspace given by $\operatorname{Lin}(G_n)$. (This is a subspace by a theorem from the lectures.)

where the coefficients are given by
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(kx)\,dx \quad\text{and}\quad b_k = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(kx)\,dx.$$
To do this, we note that
$$Pf(x) = \sum_{k=0}^{2n}\langle f, g_k\rangle g_k = \langle f, g_0\rangle g_0 + \sum_{k=1}^{n}\langle f, g_k\rangle g_k + \sum_{k=n+1}^{2n}\langle f, g_k\rangle g_k,$$
and rewriting this in terms of functions we get⁶
$$Pf = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(u)\,du + \sum_{k=1}^{n}\left\{\left[\frac{1}{\pi}\int_{-\pi}^{\pi}f(u)\cos(ku)\,du\right]\cos(kx) + \left[\frac{1}{\pi}\int_{-\pi}^{\pi}f(u)\sin(ku)\,du\right]\sin(kx)\right\},$$
which gives the required result when we replace the integrals with the appropriate coefficients.⁷ Indeed, as noted in the question, this is the Fourier series of order $n$ representing $f(x)$.

Footnote 6: When writing the integrals that correspond to the inner products, we use the dummy variable $u$ to replace the variable $x$ to avoid confusion.

Footnote 7: Notice that the coefficient $a_0$ is a special case of the coefficient $a_k$ (if we set $k = 0$ in the formula for $a_k$, then $\cos(kx) = 1$). This little simplification is the reason for the extra factor of a half in the first term of this series.

As it turns out, this series can sometimes be simplified due to the fact that the integral of an odd function over the range $[-\pi,\pi]$ is zero.⁸ So, if $f(x)$ is an odd (even) function, then as the product of an odd (even) function and an even (odd) function is an odd function, we can see that the $a_k$ (respectively, $b_k$) coefficients will be zero. Thus, a useful result is that the Fourier series of order $n$ will be
$$\sum_{k=1}^{n}b_k\sin(kx)\ \text{ if } f(x) \text{ is odd,}\qquad\text{and}\qquad \frac{a_0}{2} + \sum_{k=1}^{n}a_k\cos(kx)\ \text{ if } f(x) \text{ is even.}$$

Footnote 8: Again, recall that an odd function is such that $f(-x) = -f(x)$ and an even function is such that $f(-x) = f(x)$.

8. Following on from Question 5, when we are asked to find the Fourier series of order $n$ representing the function $\sin(3x)$ over the interval $[-\pi,\pi]$, it should be obvious (i.e. there should be no need to integrate!) that it is just $\sin(3x)$ if $n \geq 3$ and zero if $n \leq 2$. The corresponding mean square errors are zero if $n \geq 3$ and
$$\int_{-\pi}^{\pi}\sin^2(3x)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\left[1 - \cos(6x)\right]dx = \pi,$$
if $n \leq 2$. But, if you are not convinced, we can verify this result by calculating the coefficients. In this case, because $\sin(3x)$ is an odd function the $a_k$ are all zero, and so we just have to evaluate (for $k \geq 1$) integrals of the form
$$b_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\sin(3x)\sin(kx)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{\cos[(k-3)x] - \cos[(k+3)x]\right\}dx,$$
where if $k = 3$, we have
$$b_3 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{1 - \cos(6x)\right\}dx = \frac{2\pi}{2\pi} - 0 = 1,$$
and if $k \neq 3$, we get $b_k = 0$, as before.
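(The coefficient formulae of Question 7 translate directly into code. The sketch below, with our own helper name `fourier_coeffs`, computes the $a_k$ and $b_k$ by quadrature and confirms the claim just made for $\sin(3x)$.)

```python
import numpy as np
from scipy.integrate import quad

pi = np.pi

def fourier_coeffs(f, n):
    """Return a_0..a_n and b_1..b_n for f on [-pi, pi], as in Question 7."""
    a = [quad(lambda x: f(x) * np.cos(k*x), -pi, pi)[0] / pi for k in range(n + 1)]
    b = [quad(lambda x: f(x) * np.sin(k*x), -pi, pi)[0] / pi for k in range(1, n + 1)]
    return a, b

a, b = fourier_coeffs(lambda x: np.sin(3*x), 4)
print(np.round(a, 10))   # all zero: sin(3x) is odd
print(np.round(b, 10))   # [0, 0, 1, 0] -- only b_3 survives
```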

Note: In case you hadn't guessed, this all works because the vectors being used are linearly independent⁹ and so the series that we find is unique. Thus, whether we calculate a Fourier series by calculating the coefficients, or by some other method (say, using trigonometric identities), we will always get the same result! To illustrate this, let us consider a bonus question!

Footnote 9: Recall that, in an earlier problem sheet, we established that orthogonal vectors are linearly independent.

Question: Find the Fourier series for $\cos^3x$ and $\sin^3x$ by evaluating the coefficients. Further, amaze your friends by doing this in four lines using complex numbers!

Answer: As $\cos^3x$ is an even function, the $b_k$ coefficients are zero and so we only need to find the $a_k$, i.e.
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos^3x\cos(kx)\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos^2x\cdot\cos x\cos(kx)\,dx$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{1 + \cos(2x)\right\}\cdot\frac{1}{2}\left\{\cos[(k+1)x] + \cos[(k-1)x]\right\}dx$$
$$= \frac{1}{4\pi}\int_{-\pi}^{\pi}\left\{\cos[(k+1)x] + \cos[(k-1)x] + \frac{1}{2}\left[\cos[(k+3)x] + \cos[(k-1)x]\right] + \frac{1}{2}\left[\cos[(k+1)x] + \cos[(k-3)x]\right]\right\}dx$$
$$\therefore\quad a_k = \frac{1}{8\pi}\int_{-\pi}^{\pi}\left\{3\cos[(k+1)x] + 3\cos[(k-1)x] + \cos[(k+3)x] + \cos[(k-3)x]\right\}dx.$$
Thus, for $k = 1$ and $k = 3$ we get $a_1 = 3/4$ and $a_3 = 1/4$ respectively, whereas for other non-negative values of $k$ we get $a_k = 0$. Consequently, the Fourier series for $\cos^3x$ is
$$\frac{1}{4}\left[3\cos x + \cos(3x)\right].$$
Similarly, for $\sin^3x$, an odd function, the $a_k$ coefficients are zero and so we only need to find the $b_k$, i.e.
$$b_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\sin^3x\sin(kx)\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi}\sin^2x\cdot\sin x\sin(kx)\,dx$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi}\left\{1 - \cos(2x)\right\}\cdot\frac{1}{2}\left\{\cos[(k-1)x] - \cos[(k+1)x]\right\}dx$$
$$= \frac{1}{4\pi}\int_{-\pi}^{\pi}\left\{\cos[(k-1)x] - \cos[(k+1)x] - \frac{1}{2}\left[\cos[(k+1)x] + \cos[(k-3)x]\right] + \frac{1}{2}\left[\cos[(k+3)x] + \cos[(k-1)x]\right]\right\}dx$$
$$\therefore\quad b_k = \frac{1}{8\pi}\int_{-\pi}^{\pi}\left\{3\cos[(k-1)x] - 3\cos[(k+1)x] - \cos[(k-3)x] + \cos[(k+3)x]\right\}dx.$$
Thus, for $k = 1$ and $k = 3$ we get $b_1 = 3/4$ and $b_3 = -1/4$ respectively, whereas for other positive values of $k$ we get $b_k = 0$. Consequently, the Fourier series for $\sin^3x$ is
$$\frac{1}{4}\left[3\sin x - \sin(3x)\right].$$
These results should look familiar, as they are just the triple angle identities for sines and cosines.
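(Both expansions are classical identities, so a symbolic check is immediate; this two-line verification with sympy is ours.)

```python
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(sp.cos(x)**3 - (3*sp.cos(x) + sp.cos(3*x))/4))  # 0
print(sp.simplify(sp.sin(x)**3 - (3*sp.sin(x) - sp.sin(3*x))/4))  # 0
```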

An alternative way to derive them is to use complex numbers, and it is convenient for us to pause for a moment to note some useful results. We can write complex numbers in their exponential form, i.e.
$$e^{\pm i\theta} = \cos\theta \pm i\sin\theta,$$
which allows us to write
$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} \quad\text{and}\quad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}.$$
This form is particularly useful when used in conjunction with De Moivre's theorem, which tells us that
$$\left(e^{\pm i\theta}\right)^n = (\cos\theta \pm i\sin\theta)^n = \cos(n\theta) \pm i\sin(n\theta).$$
In what follows, we use these ideas together with a rather nice substitution, namely $z = e^{i\theta}$. (Notice that this implies that $1/z = z^{-1} = e^{-i\theta}$.) So, to proceed, we consider the identity¹⁰
$$\left(z \pm \frac{1}{z}\right)^3 = z^3 \pm \frac{1}{z^3} \pm 3\left(z \pm \frac{1}{z}\right).$$
Now, if we let $z = e^{ix}$ and take the $+$ sign, we get
$$(2\cos x)^3 = 2\cos(3x) + 3(2\cos x) \implies \cos^3x = \frac{1}{4}\left[3\cos x + \cos(3x)\right],$$
whereas, taking the $-$ sign, we find
$$(2i\sin x)^3 = 2i\sin(3x) - 3(2i\sin x) \implies \sin^3x = \frac{1}{4}\left[3\sin x - \sin(3x)\right],$$
which are the desired results.

Footnote 10: To get this, use the Binomial Theorem (remember?) as I have, or just expand out the brackets. Either way, this should be obvious!

9. We are asked to show that
$$\frac{1}{2} + \cos(x) + \cos(2x) + \cdots + \cos(nx) = \frac{\sin\left[\left(n + \frac{1}{2}\right)x\right]}{2\sin\left(\frac{x}{2}\right)},$$
without integrating to find the coefficients, and assuming that $x \neq 2r\pi$ where $r \in \mathbb{Z}$. To do this, we consider the series
$$S_n = 1 + e^{ix} + e^{2ix} + \cdots + e^{inx},$$
which is a geometric progression, and so summing this we get
$$S_n = \frac{e^{i(n+1)x} - 1}{e^{ix} - 1},$$
provided that $x \neq 2r\pi$ (as if this were the case, we would have $e^{ix} = 1$ and the denominator would vanish). Multiplying the top and bottom of this expression by $e^{-ix/2}$ gives
$$S_n = \frac{e^{i\left(n+\frac{1}{2}\right)x} - e^{-ix/2}}{e^{ix/2} - e^{-ix/2}} = \frac{\left[\cos\left(n+\frac{1}{2}\right)x + i\sin\left(n+\frac{1}{2}\right)x\right] - \left[\cos\left(\frac{x}{2}\right) - i\sin\left(\frac{x}{2}\right)\right]}{2i\sin\left(\frac{x}{2}\right)},$$
i.e.
$$S_n = \frac{\left[\sin\left(\frac{x}{2}\right) + \sin\left(n+\frac{1}{2}\right)x\right] + i\left[\cos\left(\frac{x}{2}\right) - \cos\left(n+\frac{1}{2}\right)x\right]}{2\sin\left(\frac{x}{2}\right)},$$
and taking the real part of this expression yields
$$1 + \cos(x) + \cos(2x) + \cdots + \cos(nx) = \frac{1}{2} + \frac{\sin\left(n+\frac{1}{2}\right)x}{2\sin\left(\frac{x}{2}\right)},$$
which gives the desired result.
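(The identity is easy to test numerically away from the excluded points $x = 2r\pi$; a small sketch of ours:)

```python
import numpy as np

lhs = lambda x, n: 0.5 + sum(np.cos(k * x) for k in range(1, n + 1))
rhs = lambda x, n: np.sin((n + 0.5) * x) / (2 * np.sin(x / 2))

x = np.linspace(0.1, 3.1, 7)      # stays away from multiples of 2*pi
for n in (1, 4, 10):
    print(np.allclose(lhs(x, n), rhs(x, n)))   # True for each n
```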

and taing the real part of this expression yields + cos(x) + cos(x) + + cos(nx) = + sin(n + )x sin( x), which gives the desired result. If the condition that x rπ with r N is violated, then the left-hand-side of this expression becomes + + + {{ + = + n. n times But, the right-hand-side is indeterminate and so we need to evaluate sin(n + lim )x (n + x rπ sin( x) = lim ) cos(n + )x x rπ cos( x) sin(n + lim )x x rπ sin( x) = n +. = (n + ) cos(n + )rπ cos(rπ) = (n + )( )(n+)r ( ) r :Using l Hôpital s rule. Thus, the result still holds if x = rπ with r N.. We are ased to find the Fourier series of order n of three functions which are defined in the interval [, π. Further, we must calculate the mean square error in each case. The first function to consider is f(x) = π x. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form a = π (see Question 7) and so for =, we have whereas, for, we have a = π a = π (π x) = π (π x) cos(x), {[ (π x) sin(x) + Also, for, we have to evaluate integrals of the form b = (π x) sin(x) π = {[ (π x) cos(x) π = [ π cos( π) π b = ( ), [πx x = π, sin(x) =. cos(x) (again, see Question 7). Thus, the Fourier series of order n representing the function π x is π + ( ) sin(x).

Using this, the mean square error between a function $f(x)$ and the corresponding Fourier series of order $n$, $g(x)$, is given by
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left[f(x) - g(x)\right]^2dx,$$
and so in this case we have
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left[x + \sum_{k=1}^{n}\frac{2(-1)^k}{k}\sin(kx)\right]^2dx.$$
This looks a bit tricky, but on squaring the bracket you should be able to convince yourself that this gives
$$\mathrm{MSE} = \int_{-\pi}^{\pi}x^2\,dx + \sum_{k=1}^{n}\frac{4(-1)^k}{k}\int_{-\pi}^{\pi}x\sin(kx)\,dx + \sum_{k=1}^{n}\sum_{\substack{l=1\\ l\neq k}}^{n}\frac{4(-1)^{k+l}}{kl}\int_{-\pi}^{\pi}\sin(kx)\sin(lx)\,dx + \sum_{k=1}^{n}\frac{4}{k^2}\int_{-\pi}^{\pi}\sin^2(kx)\,dx.$$
But, noting that
$$\int_{-\pi}^{\pi}x\sin(kx)\,dx = \left[-\frac{x\cos(kx)}{k}\right]_{-\pi}^{\pi} + \frac{1}{k}\int_{-\pi}^{\pi}\cos(kx)\,dx = \frac{2\pi}{k}(-1)^{k+1},$$
as well as our earlier results, we can see that this is just
$$\mathrm{MSE} = \frac{2\pi^3}{3} - 8\pi\sum_{k=1}^{n}\frac{1}{k^2} + 0 + 4\pi\sum_{k=1}^{n}\frac{1}{k^2} = \frac{2\pi^3}{3} - 4\pi\sum_{k=1}^{n}\frac{1}{k^2}.$$
This Fourier series is illustrated in Figure 2.

[Figure 2: The function $\pi - x$ and its Fourier series of increasing orders. Notice that the Fourier series of order $n$ for $\pi - x$ behaves unusually as $x \to \pm\pi$. There is a reason for this, but we will not discuss it in this course.]
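(The closed form for the mean square error can be checked directly against the defining integral; the sketch below, ours, does so for a few values of $n$.)

```python
import numpy as np
from scipy.integrate import quad

pi = np.pi

def mse(n):
    """||f - g_n||^2 for f(x) = pi - x and its order-n Fourier series."""
    g = lambda x: pi + sum(2*(-1)**k / k * np.sin(k*x) for k in range(1, n + 1))
    return quad(lambda x: (pi - x - g(x))**2, -pi, pi)[0]

for n in (1, 3, 10):
    closed = 2*pi**3/3 - 4*pi*sum(1/k**2 for k in range(1, n + 1))
    print(round(mse(n), 6), round(closed, 6))   # the two columns agree
```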

whereas, for, we have a = {[ x sin(x) π = { = π = π π a = ( ). x sin(x) {[ x cos(x) [ π ( ) + + x sin(x) cos(x) Also, as f(x) = x is an even function, we now (from the last part of Question 7) that the b (for n) will be zero. Thus, the Fourier series of order n representing the function x is π + ( ) cos(x). Using this, the mean square error between a function, f(x) and the corresponding Fourier series of order n, g(x) is given by and so in this case we have MSE = MSE = [ x π [f(x) g(x), ( ) cos(x). This [again, loos a bit tricy, but squaring the bracet you should be able to convince yourself that this gives [ π ( ) ) MSE = x π 8 (x π ( ) cos(x) + l= (l ) ( ) +l l cos(x) cos(lx) + ( ) cos (x). But, noting that ) ) [ (x π = (x π x + π x = 9 x π + π 9 x = π π π + π 9 π = 8 π, and ) ) (x π cos(x) = [(x π sin(x) = {[ x cos(x) (x π = ( ) π ) cos(x) = π ( ), + + x sin(x) cos(x)

as well as our earlier results, we can see that this is just
$$\mathrm{MSE} = \frac{8\pi^5}{45} - 32\pi\sum_{k=1}^{n}\frac{1}{k^4} + 16\pi\sum_{k=1}^{n}\frac{1}{k^4} = \frac{8\pi^5}{45} - 16\pi\sum_{k=1}^{n}\frac{1}{k^4}.$$
This Fourier series is illustrated in Figure 3.

[Figure 3: The function $x^2$ and its Fourier series of increasing orders.]

The third function to consider is $f(x) = |x|$. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}|x|\cos(kx)\,dx$$
(see Question 7), and so for $k = 0$, we have
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}|x|\,dx = \frac{1}{\pi}\left\{\int_{-\pi}^{0}(-x)\,dx + \int_{0}^{\pi}x\,dx\right\} = \frac{1}{\pi}\left\{\frac{\pi^2}{2} + \frac{\pi^2}{2}\right\} = \pi,$$
whereas, for $k \geq 1$, since $|x|\cos(kx)$ is even, we have
$$a_k = \frac{2}{\pi}\int_{0}^{\pi}x\cos(kx)\,dx = \frac{2}{\pi}\left\{\left[\frac{x\sin(kx)}{k}\right]_{0}^{\pi} - \frac{1}{k}\int_{0}^{\pi}\sin(kx)\,dx\right\} = \frac{2}{\pi k}\left[\frac{\cos(kx)}{k}\right]_{0}^{\pi} \implies a_k = \frac{2}{\pi k^2}\left[(-1)^k - 1\right].$$
Also, as $f(x) = |x|$ is an even function, we know (from the last part of Question 7) that the $b_k$ (for $1 \leq k \leq n$) will be zero. Thus, the Fourier series of order $n$ representing the function $|x|$ is
$$\frac{\pi}{2} + \frac{2}{\pi}\sum_{k=1}^{n}\frac{(-1)^k - 1}{k^2}\cos(kx).$$
Using this, the mean square error between $f(x)$ and the corresponding Fourier series of order $n$, $g(x)$, is given by
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left[f(x) - g(x)\right]^2dx,$$

and so in this case we have
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left[\left(|x| - \frac{\pi}{2}\right) - \frac{2}{\pi}\sum_{k=1}^{n}\frac{(-1)^k - 1}{k^2}\cos(kx)\right]^2dx.$$
This [again] looks a bit tricky, but on squaring the bracket you should be able to convince yourself that this gives
$$\mathrm{MSE} = \int_{-\pi}^{\pi}\left(|x| - \frac{\pi}{2}\right)^2dx - \frac{4}{\pi}\sum_{k=1}^{n}\frac{(-1)^k - 1}{k^2}\int_{-\pi}^{\pi}\left(|x| - \frac{\pi}{2}\right)\cos(kx)\,dx$$
$$\qquad + \frac{4}{\pi^2}\sum_{k=1}^{n}\sum_{\substack{l=1\\ l\neq k}}^{n}\frac{\left[(-1)^k - 1\right]\left[(-1)^l - 1\right]}{k^2l^2}\int_{-\pi}^{\pi}\cos(kx)\cos(lx)\,dx + \frac{4}{\pi^2}\sum_{k=1}^{n}\frac{\left[(-1)^k - 1\right]^2}{k^4}\int_{-\pi}^{\pi}\cos^2(kx)\,dx.$$
But, noting that
$$\int_{-\pi}^{\pi}\left(|x| - \frac{\pi}{2}\right)^2dx = \int_{-\pi}^{\pi}\left(x^2 - \pi|x| + \frac{\pi^2}{4}\right)dx = \frac{2\pi^3}{3} - \pi^3 + \frac{\pi^3}{2} = \frac{\pi^3}{6},$$
and
$$\int_{-\pi}^{\pi}\left(|x| - \frac{\pi}{2}\right)\cos(kx)\,dx = \int_{-\pi}^{\pi}|x|\cos(kx)\,dx - \frac{\pi}{2}\int_{-\pi}^{\pi}\cos(kx)\,dx = \frac{2}{k^2}\left[(-1)^k - 1\right],$$
as well as our earlier results, we can see that this is just
$$\mathrm{MSE} = \frac{\pi^3}{6} - \frac{8}{\pi}\sum_{k=1}^{n}\frac{\left[(-1)^k - 1\right]^2}{k^4} + \frac{4}{\pi}\sum_{k=1}^{n}\frac{\left[(-1)^k - 1\right]^2}{k^4} = \frac{\pi^3}{6} - \frac{4}{\pi}\sum_{k=1}^{n}\frac{\left[(-1)^k - 1\right]^2}{k^4}.$$
This Fourier series is illustrated in Figure 4.

[Figure 4: The function $|x|$ and its Fourier series of increasing orders. Notice that two consecutive orders can give the same series here, as the even-order terms are zero!]

Note: You may wonder why we have bothered to calculate the mean square error in this question (as it is not normally calculated, or even introduced, in methods courses). We take this opportunity to note the following facts:
$$\sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{\pi^2}{6}, \qquad \sum_{k=1}^{\infty}\frac{1}{k^4} = \frac{\pi^4}{90}, \qquad \sum_{k=1}^{\infty}\frac{\left[(-1)^k - 1\right]^2}{k^4} = \frac{\pi^4}{24}.$$
Thus, if we let $n \to \infty$ in the above results, the mean square errors will all tend to zero. This indicates that if we take the Fourier series of these functions (i.e. their representation in terms of a trigonometric polynomial of infinite degree), we will have an exact expression for them in terms of sines and cosines. Indeed, this seems to imply that these functions are in the subspace of $F[-\pi,\pi]$ spanned by the vectors in $G_\infty$. This idea is explored further (in a more transparent context) in Questions 5 and 8. However, you may care to take a look at the illustrations of these Fourier series that are given in Figures 2, 3 and 4. (Notice how the Fourier series gives a better approximation to the function in question as we increase the number of terms!) Also observe how the mean square error is much easier to find if we use the result derived in the lectures (as we did in Question 2) instead of just evaluating the appropriate norm (as we did in Questions 5, 8 and 10)!

Remark: There is, of course, much more to Fourier series than we have covered here. Once again, you should note that the Fourier series of a function is the series that we get as $n$ (the order in our exposition) tends to infinity.
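(The three series values, and hence the vanishing of the mean square errors, can be checked numerically; a last sketch of ours:)

```python
import numpy as np

pi = np.pi
k = np.arange(1, 200001, dtype=float)

print(np.sum(1/k**2), pi**2/6)                      # agree to ~5 decimal places
print(np.sum(1/k**4), pi**4/90)                     # both ~ 1.082323
print(np.sum(((-1.0)**k - 1)**2 / k**4), pi**4/24)  # both ~ 4.058712

# Substituting these limits into the three MSE formulae gives zero in each case:
print(2*pi**3/3 - 4*pi*(pi**2/6),
      8*pi**5/45 - 16*pi*(pi**4/90),
      pi**3/6 - (4/pi)*(pi**4/24))
```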