Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra)
Solutions For Problem Sheet 9

In this problem sheet, we derived a new result about orthogonal projections and used orthogonal projections to find least squares approximations to some simple functions. We also looked at Fourier series in theory and in practice.

1. We are given a vector x ∈ R^n and told that S is the subspace of R^n spanned by x, i.e. S = Lin{x}. Now, we know that the matrix P representing the orthogonal projection of R^n onto the range of A is given by

   P = A(A^t A)^{-1} A^t.

So, if we let S be the range of A, i.e. A is the matrix with x as its [only] column vector, we can see that

   P = x(x^t x)^{-1} x^t = x(‖x‖^2)^{-1} x^t = x x^t / ‖x‖^2,

as required.¹ ²

2. (a) We are asked to find an orthonormal basis for P_3[−π, π], i.e. the vector space spanned by the vectors {1, x, x^2, x^3}, using the inner product

   ⟨f, g⟩ = ∫_{−π}^{π} f(x) g(x) dx,

where f : x ↦ f(x), g : x ↦ g(x) for all x ∈ [−π, π]. Clearly, the set {1, x, x^2, x^3} is a basis for P_3[−π, π], and so we can construct an orthonormal basis for this space by using the Gram-Schmidt procedure, i.e.

Taking v_1 = 1, we get

   ‖v_1‖^2 = ⟨v_1, v_1⟩ = ∫_{−π}^{π} 1 dx = [x]_{−π}^{π} = 2π,

and so we set e_1 = 1/√(2π).

Taking v_2 = x, we construct the vector u_2 where

   u_2 = v_2 − ⟨v_2, e_1⟩ e_1 = x,

since ⟨v_2, e_1⟩ = ⟨x, 1/√(2π)⟩ = (1/√(2π)) ∫_{−π}^{π} x dx = (1/√(2π)) [x^2/2]_{−π}^{π} = 0. Then, we need to normalise this vector, i.e. as

   ‖u_2‖^2 = ⟨u_2, u_2⟩ = ∫_{−π}^{π} x^2 dx = [x^3/3]_{−π}^{π} = 2π^3/3,

we set e_2 = √(3/(2π^3)) x.

¹ Note that if x is a unit vector, this derivation could replace part of our analysis of the n × n matrices E_i considered in Problem Sheet 8.
² Bearing in mind the corresponding footnote in an earlier set of solutions, you may balk at this solution. However, our convention holds good here. To see this, we can re-run the argument above without assuming that x^t x is a scalar, but treating it as a 1 × 1 matrix. Consider P = x(x^t x)^{-1} x^t = x[‖x‖^2]^{-1} x^t. But the inverse of a 1 × 1 matrix [a] is just [a]^{-1} = [a^{-1}], as [a][a^{-1}] = [a^{-1}][a] = [a a^{-1}] = [1], the 1 × 1 identity matrix. Thus, P = x[‖x‖^{-2}] x^t. However, the quantity [‖x‖^{-2}] x^t is equivalent to the 1 × n vector x^t with each element multiplied by the scalar ‖x‖^{-2}. Consequently, taking out this common factor we get [‖x‖^{-2}] x^t = ‖x‖^{-2} x^t, and hence the desired result.

Taking v_3 = x^2, we construct the vector u_3 where

   u_3 = v_3 − ⟨v_3, e_1⟩ e_1 − ⟨v_3, e_2⟩ e_2 = x^2 − π^2/3,

since

   ⟨v_3, e_1⟩ = ⟨x^2, 1/√(2π)⟩ = (1/√(2π)) ∫_{−π}^{π} x^2 dx = (1/√(2π)) (2π^3/3),

and

   ⟨v_3, e_2⟩ = √(3/(2π^3)) ∫_{−π}^{π} x^3 dx = 0.

Then, we need to normalise this vector, i.e. as

   ‖u_3‖^2 = ⟨u_3, u_3⟩ = ∫_{−π}^{π} (x^2 − π^2/3)^2 dx = ∫_{−π}^{π} (x^4 − (2π^2/3) x^2 + π^4/9) dx = 2π^5/5 − 4π^5/9 + 2π^5/9 = 8π^5/45,

we set e_3 = √(45/(8π^5)) (x^2 − π^2/3).

Taking v_4 = x^3, we construct the vector u_4 where

   u_4 = v_4 − ⟨v_4, e_1⟩ e_1 − ⟨v_4, e_2⟩ e_2 − ⟨v_4, e_3⟩ e_3 = x^3 − (3π^2/5) x,

since

   ⟨v_4, e_1⟩ = (1/√(2π)) ∫_{−π}^{π} x^3 dx = 0,

and

   ⟨v_4, e_2⟩ = √(3/(2π^3)) ∫_{−π}^{π} x^4 dx = √(3/(2π^3)) (2π^5/5),

whilst

   ⟨v_4, e_3⟩ = √(45/(8π^5)) ∫_{−π}^{π} x^3 (x^2 − π^2/3) dx = 0.

Then, we need to normalise this vector, i.e. as

   ‖u_4‖^2 = ⟨u_4, u_4⟩ = ∫_{−π}^{π} (x^3 − (3π^2/5) x)^2 dx = ∫_{−π}^{π} (x^6 − (6π^2/5) x^4 + (9π^4/25) x^2) dx = 2π^7/7 − 12π^7/25 + 6π^7/25 = 8π^7/175,

we set e_4 = √(175/(8π^7)) (x^3 − (3π^2/5) x).

Consequently, the set of vectors

   { 1/√(2π), √(3/(2π^3)) x, √(45/(8π^5)) (x^2 − π^2/3), √(175/(8π^7)) (x^3 − (3π^2/5) x) }

is an orthonormal basis for P_3[−π, π].
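As a quick numerical sanity check of this basis (a minimal sketch, not part of the original solutions: it assumes NumPy is available and approximates the inner products by the trapezoidal rule on a fine grid), the matrix of inner products of the four basis vectors should come out as the 4 x 4 identity:

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)

def ip(u, v):
    # inner product <u, v> = integral of u*v over [-pi, pi], trapezoidal rule
    w = u * v
    return float((w[1:] + w[:-1]).sum() * (x[1] - x[0]) / 2)

e = [np.full_like(x, 1 / np.sqrt(2 * np.pi)),                        # e_1
     np.sqrt(3 / (2 * np.pi**3)) * x,                                # e_2
     np.sqrt(45 / (8 * np.pi**5)) * (x**2 - np.pi**2 / 3),           # e_3
     np.sqrt(175 / (8 * np.pi**7)) * (x**3 - 3 * np.pi**2 * x / 5)]  # e_4

print(np.round([[ip(u, v) for v in e] for u in e], 6))   # ~ identity matrix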

Hence, to find a least squares approximation to sin x in P_3[−π, π] we just have to evaluate the orthogonal projection of the vector sin x onto P_3[−π, π], i.e.

   ⟨sin x, e_1⟩ e_1 + ⟨sin x, e_2⟩ e_2 + ⟨sin x, e_3⟩ e_3 + ⟨sin x, e_4⟩ e_4,

where sin x : x ↦ sin x. To do this, it is convenient to note that, using integration by parts twice, we have for n ≥ 2:

   I_n = ∫_{−π}^{π} x^n sin x dx = [−x^n cos x]_{−π}^{π} + n ∫_{−π}^{π} x^{n−1} cos x dx
       = [1 − (−1)^n] π^n + n { [x^{n−1} sin x]_{−π}^{π} − (n − 1) ∫_{−π}^{π} x^{n−2} sin x dx },

whilst [x^{n−1} sin x]_{−π}^{π} = 0, so that

   I_n = [1 − (−1)^n] π^n − n(n − 1) I_{n−2}.

If n = 0, we have I_0 = ∫_{−π}^{π} sin x dx = 0, i.e. I_0 = 0.

If n = 1, we have I_1 = ∫_{−π}^{π} x sin x dx = [−x cos x]_{−π}^{π} + ∫_{−π}^{π} cos x dx = 2π + 0, i.e. I_1 = 2π.

So, we can see that the reduction formula that we have just derived gives us:

   I_2 = [1 − (−1)^2] π^2 − 2·1·I_0 = 0, and I_3 = [1 − (−1)^3] π^3 − 3·2·I_1 = 2π^3 − 12π.

Thus, we can see that, looking at each of the terms that we have to evaluate, we get:

   ⟨sin x, e_1⟩ = (1/√(2π)) I_0 = 0 and so ⟨sin x, e_1⟩ e_1 = 0.

   ⟨sin x, e_2⟩ = √(3/(2π^3)) I_1 = √(6/π) and so ⟨sin x, e_2⟩ e_2 = (3/(2π^3)) · 2π · x = (3/π^2) x.

   ⟨sin x, e_3⟩ = √(45/(8π^5)) (I_2 − (π^2/3) I_0) = 0 and so ⟨sin x, e_3⟩ e_3 = 0.

   ⟨sin x, e_4⟩ = √(175/(8π^7)) (I_3 − (3π^2/5) I_1) = √(175/(8π^7)) (2π^3 − 12π − 6π^3/5) = √(175/(8π^7)) (4π/5)(π^2 − 15) and so
   ⟨sin x, e_4⟩ e_4 = (35/(2π^6)) (π^2 − 15) (x^3 − (3π^2/5) x),

and so our desired approximation to sin x is

   (3/π^2) x + (35/(2π^6)) (π^2 − 15) (x^3 − (3π^2/5) x),

which on simplifying yields

   (5/(2π^6)) [ 3(21 − π^2) π^2 x + 7(π^2 − 15) x^3 ].
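Both this projection and the two mean square errors found below can be confirmed numerically. Here is a minimal sketch (assuming NumPy and trapezoidal quadrature; it is an independent check, not part of the original solutions):

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule

f = np.sin(x)
e = [np.full_like(x, 1 / np.sqrt(2 * np.pi)),
     np.sqrt(3 / (2 * np.pi**3)) * x,
     np.sqrt(45 / (8 * np.pi**5)) * (x**2 - np.pi**2 / 3),
     np.sqrt(175 / (8 * np.pi**7)) * (x**3 - 3 * np.pi**2 * x / 5)]

# Orthogonal projection of sin x onto P_3[-pi, pi] ...
proj = sum(integral(f * ei) * ei for ei in e)

# ... compared with the simplified least squares cubic obtained above.
ls = (5 / (2 * np.pi**6)) * (3 * (21 - np.pi**2) * np.pi**2 * x + 7 * (np.pi**2 - 15) * x**3)
print(np.max(np.abs(proj - ls)))          # ~ 0: the two cubics agree

# Mean square errors of the least squares cubic and of the Taylor cubic x - x^3/6.
print(integral((f - ls)**2))              # ~ 0.0276
print(integral((f - (x - x**3 / 6))**2))  # ~ 2.5185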

So, to calculate the mean square error associated with this approximation, we use the result from the lectures, i.e.

   MSE = ‖f‖^2 − Σ_{i=1}^{4} ⟨f, e_i⟩^2,

which, as

   ‖f‖^2 = ⟨f, f⟩ = ∫_{−π}^{π} sin^2 x dx = ∫_{−π}^{π} (1 − cos 2x)/2 dx = π,

gives us

   MSE = π − [ 6/π + (175/(8π^7)) (4π/5)^2 (π^2 − 15)^2 ] = π − 6/π − (14/π^5)(π^2 − 15)^2.

Evaluating this expression then tells us that the mean square error for this approximation is 0.0276 to four decimal places.

(b) We are told that the first two [non-zero] terms in the Taylor series for sin x are given by

   x − x^3/3!,

for all x ∈ [−π, π], and we are asked to find the mean square error between sin x and this approximation. This is fairly straightforward as most of the integrals that have to be evaluated have already been calculated in (a). So, using the definition of the mean square error, it should be clear that:³

   MSE = ∫_{−π}^{π} [f(x) − g(x)]^2 dx = ∫_{−π}^{π} [sin x − x + x^3/6]^2 dx
       = ∫_{−π}^{π} [sin^2 x + x^2 + x^6/36 − 2x sin x + (x^3/3) sin x − x^4/3] dx
       = π + 2π^3/3 + π^7/126 − 2 I_1 + I_3/3 − 2π^5/15

   ∴ MSE = π^7/126 − 2π^5/15 + 4π^3/3 − 7π.

Evaluating this expression then tells us that the mean square error for this approximation is 2.5185 to four decimal places.

So, which of these cubics, i.e. the one calculated in (a) or the Taylor series, provides the best approximation to sin x? Well, by looking at the mean square errors it should be clear that the least squares approximation calculated in (a) gives a better approximation to sin x than the Taylor series. Indeed, the mean square error for the Taylor series is roughly ninety times bigger than the mean square error for the result calculated in part (a). To put this all into perspective, these results are illustrated in Figure 1.

[Figure 1: This figure illustrates the function sin x, the first two [non-zero] terms in the Taylor series for sin x, and the least squares approximation for sin x calculated in part (a). Notice that the Taylor series gives a very good approximation to sin x for small x, but then starts to deviate rapidly from this function (we should expect this since we know that the Taylor series for sin x is a good approximation for small x). However, the least squares approximation for sin x calculated in part (a) gives a good approximation for all values of x in the interval [−π, π].]

3. We are asked to find least squares approximations to two functions over the interval [0, 1] using the inner product

   ⟨f, g⟩ = ∫_0^1 f(x) g(x) dx,

where f : x ↦ f(x) and g : x ↦ g(x) for all x ∈ [0, 1]. We shall do each of these in turn:

Firstly, we are asked to find a least squares approximation to x by a function of the form a + b e^x. That is, we need to find the orthogonal projection of the vector x onto the subspace spanned by the vectors 1 and e^x, where e^x : x ↦ e^x. So, we note that the set of vectors {1, e^x} is linearly independent since, for any x ∈ [0, 1], the Wronskian for these functions is given by

   W(x) = | 1  e^x ; 0  e^x | = e^x ≠ 0,

i.e. this set of vectors gives us a basis for the subspace that we are going to orthogonally project onto. However, to do the orthogonal projection, we need an orthonormal basis for this subspace and to get this, we use the Gram-Schmidt procedure:

³ Notice that we can not use the nice mean square error formula which we used in (a) here. This is because this Taylor series is not written in terms of an orthonormal basis.

Taking v_1 = 1, we get

   ‖v_1‖^2 = ⟨v_1, v_1⟩ = ∫_0^1 1 dx = [x]_0^1 = 1,

and so we set e_1 = 1.

Taking v_2 = e^x, we construct the vector u_2 where

   u_2 = v_2 − ⟨v_2, e_1⟩ e_1 = e^x − (e − 1),

since ⟨v_2, e_1⟩ = ⟨e^x, 1⟩ = ∫_0^1 e^x dx = [e^x]_0^1 = e − 1. Then, we need to normalise this vector, i.e. as

   ‖u_2‖^2 = ⟨u_2, u_2⟩ = ∫_0^1 [e^x − (e − 1)]^2 dx = ∫_0^1 [e^{2x} − 2(e − 1) e^x + (e − 1)^2] dx
           = (e^2 − 1)/2 − 2(e − 1)^2 + (e − 1)^2 = (e − 1)[(e + 1)/2 − (e − 1)] = (3 − e)(e − 1)/2,

which is a positive quantity since 2 < e < 3, we set

   e_2 = (1/α) [e^x − (e − 1)], where α = √((3 − e)(e − 1)/2).

Consequently, the set of vectors

   { 1, (1/α)[e^x − (e − 1)] }, where α = √((3 − e)(e − 1)/2),

is an orthonormal basis for the subspace spanned by the vectors {1, e^x}. Hence, to find a least squares approximation to x in this subspace we just have to evaluate

   ⟨x, e_1⟩ e_1 + ⟨x, e_2⟩ e_2,

where x : x ↦ x. To do this, we note that

   ⟨x, 1⟩ = ∫_0^1 x dx = [x^2/2]_0^1 = 1/2 and so ⟨x, e_1⟩ = 1/2,

and

   ⟨x, e^x⟩ = ∫_0^1 x e^x dx = [x e^x]_0^1 − ∫_0^1 e^x dx = e − (e − 1) = 1 and so
   ⟨x, e_2⟩ = (1/α) [⟨x, e^x⟩ − (e − 1)⟨x, 1⟩] = (1/α) [1 − (e − 1)/2] = (3 − e)/(2α),

and so our desired approximation to x is

   1/2 + ((3 − e)/(2α^2)) [e^x − (e − 1)] = 1/2 + [e^x − (e − 1)]/(e − 1),

which on simplifying yields

   −1/2 + e^x/(e − 1).

So, using the form given in the question, the least squares approximation to x over the interval [0, 1] has a = −1/2 and b = 1/(e − 1).

Secondly, we are asked to find a least squares approximation to e^x by a function of the form a + b x. That is, we need to find the orthogonal projection of the vector e^x onto the subspace spanned by the vectors 1 and x. However, the set of vectors {1, x} is actually a basis for the vector space P_1[0, 1] and we know from an earlier Problem Sheet that an orthonormal basis for this vector space is {1, √3 (2x − 1)}. Hence, to find the required least squares approximation to e^x we just have to evaluate

   ⟨e^x, e_1⟩ e_1 + ⟨e^x, e_2⟩ e_2,

where e_1 = 1 and e_2 = √3 (2x − 1). To do this, we note that

   ⟨e^x, 1⟩ = ∫_0^1 e^x dx = [e^x]_0^1 = e − 1 and so ⟨e^x, e_1⟩ = e − 1,

and

   ⟨e^x, x⟩ = ∫_0^1 x e^x dx = 1 (from above) and so ⟨e^x, e_2⟩ = √3 (2⟨e^x, x⟩ − ⟨e^x, 1⟩) = √3 (3 − e),

and so our desired approximation to e^x is

   (e − 1) + 3(3 − e)(2x − 1),

which on simplifying yields

   (4e − 10) + (18 − 6e) x.

So, using the form given in the question, the least squares approximation to e^x over the interval [0, 1] has a = 2(2e − 5) and b = 6(3 − e).
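Both fits can also be recovered directly from the normal equations, without orthonormalising first; the sketch below (assuming NumPy, with the Gram matrix and right-hand side computed by quadrature on [0, 1]) reproduces a = −1/2, b = 1/(e − 1) and a = 4e − 10, b = 18 − 6e:

import numpy as np

x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]

def ip(u, v):
    # inner product <u, v> = integral of u*v over [0, 1], trapezoidal rule
    w = u * v
    return float((w[1:] + w[:-1]).sum() * dx / 2)

def best_fit(f, basis):
    # Solve the normal equations G c = b for the least squares coefficients.
    G = np.array([[ip(u, v) for v in basis] for u in basis])
    b = np.array([ip(f, u) for u in basis])
    return np.linalg.solve(G, b)

ones = np.ones_like(x)
print(best_fit(x, [ones, np.exp(x)]), [-0.5, 1 / (np.e - 1)])          # a + b e^x fit to x
print(best_fit(np.exp(x), [ones, x]), [4 * np.e - 10, 18 - 6 * np.e])  # a + b x fit to e^x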

4. We are asked to find a vector q ∈ P_2(R) such that

   p(1) = ⟨p, q⟩

for every vector p : x ↦ p(x) in P_2(R), where the inner product defined on this vector space is given by

   ⟨p, q⟩ = Σ_{i=−1}^{1} p(i) q(i).

(Notice that this is an instantiation of the inner product given in an earlier Problem Sheet where, in this case, −1, 0, 1 are the three fixed and distinct real numbers required to define it.) So, we need to find a quadratic q(x) [say a x^2 + b x + c] such that its inner product with any other quadratic p(x) [say α x^2 + β x + γ] gives p(1) [which in this case is α + β + γ]. So, taking p and q as suggested we have

   ⟨p, q⟩ = Σ_{i=−1}^{1} p(i) q(i) = (a − b + c)(α − β + γ) + c γ + (a + b + c)(α + β + γ),

and this must equal p(1), i.e. α + β + γ. So, to find q we equate the coefficients of the terms in α, β and γ to get

   (a − b + c) + (a + b + c) = 1  ⟹  a + c = 1/2,
   −(a − b + c) + (a + b + c) = 1  ⟹  b = 1/2,
   (a − b + c) + c + (a + b + c) = 1  ⟹  2a + 3c = 1,

which on solving gives us a = b = 1/2 and c = 0. Thus, the required quadratic q is given by

   q(x) = (x^2 + x)/2.

It may surprise you that there is only one quadratic that does this job, or indeed that there is one at all. But, there is actually some theory that guarantees its existence and uniqueness (although we do not cover it in this course). Notice that we can check that this answer is correct since, for any quadratic p(x) = a x^2 + b x + c, it gives

   ⟨p, q⟩ = (a − b + c) q(−1) + c q(0) + (a + b + c) q(1) = (a − b + c)·0 + c·0 + (a + b + c)·1 = a + b + c = p(1),

as desired.

The rest of the questions on this problem sheet are intended to develop your intuitions as to what a Fourier series is in terms of orthogonal projections onto subspaces. To do this, we start by considering an example where the function we are dealing with lies in Lin(G_n) for some n. Indeed, Question 9 gives us a possible application of this kind of situation.⁴

5. We are asked to consider the function cos(3x) defined over the interval [−π, π]. To find the Fourier series of orders 2, 3 and 4 that represent this function we evaluate integrals of the form

   a_k = (1/π) ∫_{−π}^{π} cos(3x) cos(kx) dx = (1/2π) ∫_{−π}^{π} {cos(3 + k)x + cos(3 − k)x} dx,

where if k = 3, we have

   a_3 = (1/2π) ∫_{−π}^{π} {cos(6x) + 1} dx = 0 + 2π/(2π) = 1,

and if k ≠ 3, we get a_k = 0. Also, as cos(3x) is an even function, the b_k coefficients will be zero too.

⁴ For simplicity, we shall deal with cases where the required n is small!
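Before tabulating these series, here is a quick numerical check of the coefficients (a sketch assuming NumPy; it simply evaluates the a_k and b_k integrals by the trapezoidal rule): only a_3 survives.

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule

f = np.cos(3 * x)
a = [integral(f * np.cos(k * x)) / np.pi for k in range(5)]
b = [integral(f * np.sin(k * x)) / np.pi for k in range(1, 5)]
print(np.round(a, 6))   # ~ [0, 0, 0, 1, 0]
print(np.round(b, 6))   # ~ [0, 0, 0, 0]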

Thus, we find that the required Fourier series are:

   Order   Fourier series   MSE
     2          0            π
     3       cos(3x)         0
     4       cos(3x)         0

where the mean square error for the second order Fourier series is given by

   ∫_{−π}^{π} cos^2(3x) dx = ∫_{−π}^{π} [cos(6x) + 1]/2 dx = π,

and the third and fourth order Fourier series have a mean square error of zero. Which tells us [unsurprisingly, perhaps] that the Fourier series for cos(3x) is cos(3x), and that cos(3x) is in Lin(G_3) and Lin(G_4), but not in Lin(G_2) (as one would expect from the mean square errors).

Other Problems

Here we derived the formulae for calculating Fourier series. We then saw how to apply these formulae by finding the Fourier series of some simple functions.

Warning: The remaining solutions will make prodigious use of the following facts about sines and cosines:

   If r ∈ N, then sin(rπ) = 0 and cos(rπ) = (−1)^r.

   The sine function is odd, i.e. sin(−x) = −sin(x), whereas the cosine function is even, i.e. cos(−x) = cos(x).

   The integral of an odd function over the interval [−π, π] is zero.

   The following trigonometric identities:

      sin θ + sin φ = 2 sin((θ + φ)/2) cos((θ − φ)/2),
      sin θ − sin φ = 2 cos((θ + φ)/2) sin((θ − φ)/2),
      cos θ + cos φ = 2 cos((θ + φ)/2) cos((θ − φ)/2),
      cos θ − cos φ = −2 sin((θ + φ)/2) sin((θ − φ)/2),

   from which, among other things, the double angle formulae follow.

You should bear them in mind when looking at many of the integrals that are evaluated below!

6. We are asked to consider the subset of F[−π, π] given by G_n = {g_0, g_1, ..., g_n, g_{n+1}, ..., g_{2n}} where

   g_0(x) = 1/√(2π), and g_1(x) = (1/√π) cos(x), g_2(x) = (1/√π) cos(2x), ..., g_n(x) = (1/√π) cos(nx),

whereas (for convenience we relabel these)

   h_1(x) = g_{n+1}(x) = (1/√π) sin(x), h_2(x) = g_{n+2}(x) = (1/√π) sin(2x), ..., h_n(x) = g_{2n}(x) = (1/√π) sin(nx).

To show that G_n is an orthonormal set when using the inner product

   ⟨f, g⟩ = ∫_{−π}^{π} f(x) g(x) dx,

we have to establish that they satisfy the orthonormality condition, i.e.

   ⟨g_i, g_j⟩ = 1 if i = j and 0 if i ≠ j.

Firstly, we can see that the vector g_0 satisfies this condition, as

   ⟨g_0, g_0⟩ = (1/2π) ∫_{−π}^{π} dx = 1,

and so it is unit; whilst for 1 ≤ k ≤ n, we have

   ⟨g_0, g_k⟩ = (1/(√(2π) √π)) ∫_{−π}^{π} cos(kx) dx = (1/(√(2π) √π)) [sin(kx)/k]_{−π}^{π} = 0,

and

   ⟨g_0, h_k⟩ = (1/(√(2π) √π)) ∫_{−π}^{π} sin(kx) dx = (1/(√(2π) √π)) [−cos(kx)/k]_{−π}^{π} = 0,

and so the vector g_0 is orthogonal to the other vectors in G_n.

Secondly, note that for 1 ≤ k, l ≤ n,

   ⟨g_k, g_l⟩ = (1/π) ∫_{−π}^{π} cos(kx) cos(lx) dx = (1/2π) ∫_{−π}^{π} {cos(k + l)x + cos(k − l)x} dx,

and so if k ≠ l, then

   ⟨g_k, g_l⟩ = (1/2π) [ sin(k + l)x/(k + l) + sin(k − l)x/(k − l) ]_{−π}^{π} = 0,

whereas if k = l, we have

   ⟨g_k, g_k⟩ = (1/2π) ∫_{−π}^{π} {cos(2kx) + 1} dx = (1/2π) [ sin(2kx)/(2k) + x ]_{−π}^{π} = 1.

Thus, we can see that, for 1 ≤ k ≤ n, the vectors g_k are unit and mutually orthogonal.

Thirdly, using a similar calculation, we note that for 1 ≤ k, l ≤ n,

   ⟨h_k, h_l⟩ = (1/π) ∫_{−π}^{π} sin(kx) sin(lx) dx = (1/2π) ∫_{−π}^{π} {cos(k − l)x − cos(k + l)x} dx,

and so if k ≠ l, then

   ⟨h_k, h_l⟩ = (1/2π) [ sin(k − l)x/(k − l) − sin(k + l)x/(k + l) ]_{−π}^{π} = 0,

whereas if k = l, we have

   ⟨h_k, h_k⟩ = (1/2π) ∫_{−π}^{π} {1 − cos(2kx)} dx = (1/2π) [ x − sin(2kx)/(2k) ]_{−π}^{π} = 1.

Thus, we can see that, for 1 ≤ k ≤ n, the vectors h_k are unit and mutually orthogonal.

Lastly, taking 1 ≤ k, l ≤ n, we can see that

   ⟨g_k, h_l⟩ = (1/π) ∫_{−π}^{π} cos(kx) sin(lx) dx = (1/2π) ∫_{−π}^{π} {sin(k + l)x − sin(k − l)x} dx,

and so if k ≠ l, then

   ⟨g_k, h_l⟩ = (1/2π) [ −cos(k + l)x/(k + l) + cos(k − l)x/(k − l) ]_{−π}^{π} = 0,

whereas if k = l, we have

   ⟨g_k, h_k⟩ = (1/2π) ∫_{−π}^{π} sin(2kx) dx = (1/2π) [ −cos(2kx)/(2k) ]_{−π}^{π} = 0.

Thus, for 1 ≤ k, l ≤ n, the vectors g_k and h_l are mutually orthogonal. Consequently, we have shown that the vectors in G_n are orthonormal, as required.

Further, we are asked to show that any trigonometric polynomial of order n or less can be represented by a vector in Lin(G_n). This is clearly the case as a trigonometric polynomial of degree n or less is of the form

   c_0 + c_1 cos(x) + ... + c_n cos(nx) + d_1 sin(x) + ... + d_n sin(nx),

and this can therefore be represented by the vector

   √(2π) c_0 g_0 + √π c_1 g_1 + ... + √π c_n g_n + √π d_1 h_1 + ... + √π d_n h_n,

which is in Lin(G_n), as required.

Note: In this question, we have established some orthogonality relations which may be useful later. For convenience, we list them here for easy reference:

   ∫_{−π}^{π} cos(kx) cos(lx) dx = π if k = l and 0 if k ≠ l,
   ∫_{−π}^{π} sin(kx) sin(lx) dx = π if k = l and 0 if k ≠ l,
   ∫_{−π}^{π} cos(kx) sin(lx) dx = 0,

where k and l are positive integers. Further, just in case you haven't twigged, we note that

   ∫_{−π}^{π} sin(kx) dx = 0 and ∫_{−π}^{π} cos(kx) dx = 0,

when k, again, is a positive integer.
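These orthogonality relations are easy to confirm numerically; the sketch below (assuming NumPy) builds the vectors of G_3 on a fine grid and checks that their matrix of inner products is the 7 x 7 identity:

import numpy as np

n = 3
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule

G = [np.full_like(x, 1 / np.sqrt(2 * np.pi))]                  # g_0
G += [np.cos(k * x) / np.sqrt(np.pi) for k in range(1, n + 1)] # g_1, ..., g_n
G += [np.sin(k * x) / np.sqrt(np.pi) for k in range(1, n + 1)] # h_1, ..., h_n

gram = np.array([[integral(u * v) for v in G] for u in G])
print(np.round(gram, 6))    # ~ (2n + 1) x (2n + 1) identity matrix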

7. Let us suppose that the vector f ∈ F[−π, π] represents a function that is not in Lin(G_n). Using the result from the lectures we can see that the orthogonal projection of f onto Lin(G_n) is just⁵

   Pf = Σ_{k=0}^{2n} ⟨f, g_k⟩ g_k.

As this series represents the orthogonal projection, we know (from the lectures) that it will minimise the quantity given by ‖f − g‖ where g ∈ Lin(G_n). That is, it minimises the quantity

   ‖f − g‖^2 = ⟨f − g, f − g⟩ = ∫_{−π}^{π} [f(x) − g(x)]^2 dx,

where f : x ↦ f(x) and g : x ↦ g(x). Further, we must show that this series can be written as

   a_0/2 + Σ_{k=1}^{n} {a_k cos(kx) + b_k sin(kx)},

where the coefficients are given by

   a_k = (1/π) ∫_{−π}^{π} f(x) cos(kx) dx and b_k = (1/π) ∫_{−π}^{π} f(x) sin(kx) dx.

To do this, we note that

   Pf(x) = Σ_{k=0}^{2n} ⟨f, g_k⟩ g_k(x) = ⟨f, g_0⟩ g_0(x) + Σ_{k=1}^{n} ⟨f, g_k⟩ g_k(x) + Σ_{k=n+1}^{2n} ⟨f, g_k⟩ g_k(x),

and rewriting this in terms of functions we get⁶

   Pf = (1/2π) ∫_{−π}^{π} f(u) du + Σ_{k=1}^{n} { (1/π) ∫_{−π}^{π} f(u) cos(ku) du } cos(kx) + Σ_{k=1}^{n} { (1/π) ∫_{−π}^{π} f(u) sin(ku) du } sin(kx),

which gives the required result when we replace the integrals with the appropriate coefficients.⁷ Indeed, as noted in the question, this is the Fourier series of order n representing f(x).

As it turns out, this series can sometimes be simplified due to the fact that the integral of an odd function over the range [−π, π] is zero.⁸ So, if f(x) is an odd (even) function, then as the product of an odd (even) function and an even (odd) function is an odd function, we can see that the a_k (b_k) coefficients will be zero. Thus, a useful result is that the Fourier series of order n will be

   Σ_{k=1}^{n} b_k sin(kx) if f(x) is odd, and a_0/2 + Σ_{k=1}^{n} a_k cos(kx) if f(x) is even.

8. Following on from the earlier questions, when we are asked to find the Fourier series of order n representing the function sin(3x) over the interval [−π, π], it should be obvious (i.e. there should be no need to integrate!) that it is just sin(3x) if n ≥ 3 and zero if n ≤ 2. The corresponding mean square errors are zero if n ≥ 3 and

   ∫_{−π}^{π} sin^2(3x) dx = ∫_{−π}^{π} [1 − cos(6x)]/2 dx = π,

if n ≤ 2. But, if you are not convinced, we can verify this result by calculating the coefficients. In this case, because sin(3x) is an odd function the a_k are zero and so we just have to evaluate (for k ≥ 1) an integral of the form

   b_k = (1/π) ∫_{−π}^{π} sin(3x) sin(kx) dx = (1/2π) ∫_{−π}^{π} {cos(3 − k)x − cos(3 + k)x} dx,

where if k = 3, we have

   b_3 = (1/2π) ∫_{−π}^{π} {1 − cos(6x)} dx = 2π/(2π) − 0 = 1,

and if k ≠ 3, we get b_k = 0, as before.

⁵ Clearly, G_n ⊆ F[−π, π] and we are considering the subspace given by Lin(G_n). (This is a subspace by a theorem from the lectures.)
⁶ When writing the integrals that correspond to the inner products, we use the dummy variable u to replace the variable x to avoid confusion.
⁷ Notice that the coefficient a_0 is a special case of the coefficient a_k (as if we set k = 0 in the formula for a_k, then cos(kx) = 1). This little simplification is the reason for the extra factor of a half in the first term of this series.
⁸ Again, recall that an odd function is such that f(−x) = −f(x) and an even function is such that f(−x) = f(x).
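The following sketch (assuming NumPy) implements the order-n Fourier series from the coefficient formulae in Question 7 and applies it to sin(3x); as Question 8 states, the mean square error is π for n = 2 and zero for n = 3:

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule

def fourier_partial_sum(f, n):
    # Order-n Fourier series of f on [-pi, pi], built from the a_k and b_k formulae.
    s = (integral(f) / np.pi) / 2 * np.ones_like(x)
    for k in range(1, n + 1):
        s += (integral(f * np.cos(k * x)) / np.pi) * np.cos(k * x)
        s += (integral(f * np.sin(k * x)) / np.pi) * np.sin(k * x)
    return s

f = np.sin(3 * x)
for n in (2, 3):
    print(n, integral((f - fourier_partial_sum(f, n))**2))   # n = 2: ~ pi, n = 3: ~ 0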

Note: In case you hadn't guessed, this all works because the vectors being used are linearly independent⁹ and so the series that we find is unique. Thus, whether we calculate a Fourier series by calculating the coefficients, or by some other method (say, using trigonometric identities), we will always get the same result! To illustrate this, let us consider a bonus question!

Question: Find the Fourier series for cos^3 x and sin^3 x by evaluating the coefficients. Further, amaze your friends by doing this in four lines using complex numbers!

Answer: As cos^3 x is an even function, the b_k coefficients are zero and so we only need to find the a_k, i.e.

   a_k = (1/π) ∫_{−π}^{π} cos^3 x cos(kx) dx
       = (1/π) ∫_{−π}^{π} cos^2 x · cos x cos(kx) dx
       = (1/4π) ∫_{−π}^{π} {1 + cos(2x)} {cos(k + 1)x + cos(k − 1)x} dx
       = (1/4π) ∫_{−π}^{π} {cos(k + 1)x + cos(k − 1)x + cos(2x) cos(k + 1)x + cos(2x) cos(k − 1)x} dx
       = (1/4π) ∫_{−π}^{π} {cos(k + 1)x + cos(k − 1)x + (1/2)[cos(k + 3)x + cos(k − 1)x] + (1/2)[cos(k + 1)x + cos(k − 3)x]} dx

   ∴ a_k = (1/8π) ∫_{−π}^{π} {3 cos(k + 1)x + 3 cos(k − 1)x + cos(k + 3)x + cos(k − 3)x} dx.

Thus, for k = 1 and k = 3 we get a_1 = 3/4 and a_3 = 1/4 respectively, whereas for other non-negative values of k we get a_k = 0. Consequently, the Fourier series for cos^3 x is

   (1/4)[3 cos x + cos(3x)].

Similarly, for sin^3 x, an odd function, the a_k coefficients are zero and so we only need to find the b_k, i.e.

   b_k = (1/π) ∫_{−π}^{π} sin^3 x sin(kx) dx
       = (1/π) ∫_{−π}^{π} sin^2 x · sin x sin(kx) dx
       = (1/4π) ∫_{−π}^{π} {1 − cos(2x)} {cos(k − 1)x − cos(k + 1)x} dx
       = (1/4π) ∫_{−π}^{π} {cos(k − 1)x − cos(k + 1)x − cos(2x) cos(k − 1)x + cos(2x) cos(k + 1)x} dx
       = (1/4π) ∫_{−π}^{π} {cos(k − 1)x − cos(k + 1)x − (1/2)[cos(k + 1)x + cos(k − 3)x] + (1/2)[cos(k + 3)x + cos(k − 1)x]} dx

   ∴ b_k = (1/8π) ∫_{−π}^{π} {3 cos(k − 1)x − 3 cos(k + 1)x − cos(k − 3)x + cos(k + 3)x} dx.

Thus, for k = 1 and k = 3 we get b_1 = 3/4 and b_3 = −1/4 respectively, whereas for other positive values of k we get b_k = 0. Consequently, the Fourier series for sin^3 x is

   (1/4)[3 sin x − sin(3x)].

⁹ Recall that, in an earlier Problem Sheet, we established that orthogonal vectors are linearly independent.
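These two series can be checked in a couple of lines (a sketch assuming NumPy); the maximum difference between each cube and its claimed Fourier series is zero to rounding error:

import numpy as np

x = np.linspace(-np.pi, np.pi, 10001)
print(np.max(np.abs(np.cos(x)**3 - (3 * np.cos(x) + np.cos(3 * x)) / 4)))   # ~ 0
print(np.max(np.abs(np.sin(x)**3 - (3 * np.sin(x) - np.sin(3 * x)) / 4)))   # ~ 0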

These results should look familiar as they are just the triple angle identities for sines and cosines. An alternative way to derive them would be to use complex numbers, and it is convenient for us to pause for a moment to note some useful results:

We can write complex numbers in their exponential form, i.e.

   e^{±iθ} = cos θ ± i sin θ,

which allows us to write

   cos θ = (e^{iθ} + e^{−iθ})/2 and sin θ = (e^{iθ} − e^{−iθ})/(2i).

This form is particularly useful when used in conjunction with De Moivre's theorem, which tells us that

   (e^{±iθ})^n = (cos θ ± i sin θ)^n = cos(nθ) ± i sin(nθ).

In what follows we use these ideas together with a rather nice substitution, namely z = e^{iθ}. (Notice that this implies that 1/z = z^{−1} = e^{−iθ}.) So, to proceed we consider the following identity¹⁰

   (z ± 1/z)^3 = z^3 ± 3z + 3/z ± 1/z^3 = (z^3 ± 1/z^3) ± 3(z ± 1/z).

Now, if we let z = e^{ix} and take the '+' we get

   (2 cos x)^3 = 2 cos(3x) + 3(2 cos x)  ⟹  cos^3 x = (3/4) cos x + (1/4) cos(3x),

whereas, taking the '−' we find

   (2i sin x)^3 = 2i sin(3x) − 3(2i sin x)  ⟹  sin^3 x = (3/4) sin x − (1/4) sin(3x),

which are the desired results.

9. We are asked to show that

   1/2 + cos(x) + cos(2x) + ... + cos(nx) = sin((n + 1/2)x) / (2 sin(x/2)),

without integrating to find the coefficients and assuming that x ≠ 2rπ where r ∈ Z. To do this we consider the series

   S_n = 1 + e^{ix} + e^{2ix} + ... + e^{inx},

which is a geometric progression, and so summing this we get

   S_n = (e^{i(n+1)x} − 1)/(e^{ix} − 1),

provided that x ≠ 2rπ (as if this were the case, e^{ix} = 1). Multiplying the top and bottom of this expression by e^{−ix/2} gives

   S_n = (e^{i(n+1/2)x} − e^{−ix/2}) / (e^{ix/2} − e^{−ix/2})
       = { [cos((n + 1/2)x) + i sin((n + 1/2)x)] − [cos(x/2) − i sin(x/2)] } / (2i sin(x/2))

   ∴ S_n = { i[cos(x/2) − cos((n + 1/2)x)] + [sin(x/2) + sin((n + 1/2)x)] } / (2 sin(x/2)),

¹⁰ To get this, use the Binomial Theorem (remember?) as I have, or just expand out the brackets. Either way this should be obvious!

14 and taing the real part of this expression yields + cos(x) + cos(x) + + cos(nx) = + sin(n + )x sin( x), which gives the desired result. If the condition that x rπ with r N is violated, then the left-hand-side of this expression becomes {{ + = + n. n times But, the right-hand-side is indeterminate and so we need to evaluate sin(n + lim )x (n + x rπ sin( x) = lim ) cos(n + )x x rπ cos( x) sin(n + lim )x x rπ sin( x) = n +. = (n + ) cos(n + )rπ cos(rπ) = (n + )( )(n+)r ( ) r :Using l Hôpital s rule. Thus, the result still holds if x = rπ with r N.. We are ased to find the Fourier series of order n of three functions which are defined in the interval [, π. Further, we must calculate the mean square error in each case. The first function to consider is f(x) = π x. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form a = π (see Question 7) and so for =, we have whereas, for, we have a = π a = π (π x) = π (π x) cos(x), {[ (π x) sin(x) + Also, for, we have to evaluate integrals of the form b = (π x) sin(x) π = {[ (π x) cos(x) π = [ π cos( π) π b = ( ), [πx x = π, sin(x) =. cos(x) (again, see Question 7). Thus, the Fourier series of order n representing the function π x is π + ( ) sin(x).

10. We are asked to find the Fourier series of order n of three functions which are defined on the interval [−π, π]. Further, we must calculate the mean square error in each case.

The first function to consider is f(x) = π − x. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form

   a_k = (1/π) ∫_{−π}^{π} (π − x) cos(kx) dx

(see Question 7) and so for k = 0, we have

   a_0 = (1/π) ∫_{−π}^{π} (π − x) dx = (1/π) [πx − x^2/2]_{−π}^{π} = 2π,

whereas, for k ≥ 1, we have

   a_k = (1/π) { [(π − x) sin(kx)/k]_{−π}^{π} + (1/k) ∫_{−π}^{π} sin(kx) dx } = 0.

Also, for k ≥ 1, we have to evaluate integrals of the form

   b_k = (1/π) ∫_{−π}^{π} (π − x) sin(kx) dx = (1/π) { [−(π − x) cos(kx)/k]_{−π}^{π} − (1/k) ∫_{−π}^{π} cos(kx) dx } = (1/π) [2π cos(kπ)/k − 0]

   ∴ b_k = 2(−1)^k/k

(again, see Question 7). Thus, the Fourier series of order n representing the function π − x is

   π + Σ_{k=1}^{n} (2(−1)^k/k) sin(kx).

Using this, the mean square error between a function f(x) and the corresponding Fourier series of order n, g(x), is given by

   MSE = ∫_{−π}^{π} [f(x) − g(x)]^2 dx,

and so in this case we have

   MSE = ∫_{−π}^{π} [ x + Σ_{k=1}^{n} (2(−1)^k/k) sin(kx) ]^2 dx.

This looks a bit tricky, but squaring the bracket you should be able to convince yourself that this gives

   MSE = ∫_{−π}^{π} x^2 dx + 2 Σ_{k=1}^{n} (2(−1)^k/k) ∫_{−π}^{π} x sin(kx) dx
         + Σ_{k≠l} (4(−1)^{k+l}/(kl)) ∫_{−π}^{π} sin(kx) sin(lx) dx + Σ_{k=1}^{n} (4/k^2) ∫_{−π}^{π} sin^2(kx) dx.

But, noting that

   ∫_{−π}^{π} x sin(kx) dx = [−x cos(kx)/k]_{−π}^{π} + (1/k) ∫_{−π}^{π} cos(kx) dx = −2π(−1)^k/k,

as well as our earlier results, we can see that this is just

   MSE = 2π^3/3 − 8π Σ_{k=1}^{n} 1/k^2 + 0 + 4π Σ_{k=1}^{n} 1/k^2 = 2π^3/3 − 4π Σ_{k=1}^{n} 1/k^2.

This Fourier series is illustrated in Figure 2.

[Figure 2: The function π − x and its Fourier series of three increasing orders. (Notice that the Fourier series of order n for π − x behaves unusually as x → ±π. There is a reason for this, but we will not discuss it in this course.)]
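The closed form just obtained can be compared with a direct numerical evaluation of the mean square error for a few orders; a minimal sketch (assuming NumPy):

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule

f = np.pi - x
for n in (1, 3, 5):
    s = np.pi + sum(2 * (-1)**k / k * np.sin(k * x) for k in range(1, n + 1))
    mse = integral((f - s)**2)
    closed = 2 * np.pi**3 / 3 - 4 * np.pi * sum(1 / k**2 for k in range(1, n + 1))
    print(n, mse, closed)    # the two columns agree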

The second function to consider is f(x) = x^2. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form

   a_k = (1/π) ∫_{−π}^{π} x^2 cos(kx) dx

(see Question 7) and so for k = 0, we have

   a_0 = (1/π) ∫_{−π}^{π} x^2 dx = (1/π) [x^3/3]_{−π}^{π} = 2π^2/3,

whereas, for k ≥ 1, we have

   a_k = (1/π) { [x^2 sin(kx)/k]_{−π}^{π} − (2/k) ∫_{−π}^{π} x sin(kx) dx } = (1/π) { 0 − (2/k)(−2π(−1)^k/k) }

   ∴ a_k = 4(−1)^k/k^2.

Also, as f(x) = x^2 is an even function, we know (from the last part of Question 7) that the b_k (for 1 ≤ k ≤ n) will be zero. Thus, the Fourier series of order n representing the function x^2 is

   π^2/3 + Σ_{k=1}^{n} (4(−1)^k/k^2) cos(kx).

Using this, the mean square error between a function f(x) and the corresponding Fourier series of order n, g(x), is given by

   MSE = ∫_{−π}^{π} [f(x) − g(x)]^2 dx,

and so in this case we have

   MSE = ∫_{−π}^{π} [ x^2 − π^2/3 − Σ_{k=1}^{n} (4(−1)^k/k^2) cos(kx) ]^2 dx.

This [again] looks a bit tricky, but squaring the bracket you should be able to convince yourself that this gives

   MSE = ∫_{−π}^{π} (x^2 − π^2/3)^2 dx − 2 Σ_{k=1}^{n} (4(−1)^k/k^2) ∫_{−π}^{π} (x^2 − π^2/3) cos(kx) dx
         + Σ_{k≠l} (16(−1)^{k+l}/(k^2 l^2)) ∫_{−π}^{π} cos(kx) cos(lx) dx + Σ_{k=1}^{n} (16/k^4) ∫_{−π}^{π} cos^2(kx) dx.

But, noting that

   ∫_{−π}^{π} (x^2 − π^2/3)^2 dx = ∫_{−π}^{π} (x^4 − (2π^2/3) x^2 + π^4/9) dx = 2π^5/5 − 4π^5/9 + 2π^5/9 = 8π^5/45,

and

   ∫_{−π}^{π} (x^2 − π^2/3) cos(kx) dx = ∫_{−π}^{π} x^2 cos(kx) dx = [x^2 sin(kx)/k]_{−π}^{π} − (2/k) ∫_{−π}^{π} x sin(kx) dx = 4π(−1)^k/k^2,

as well as our earlier results, we can see that this is just

   MSE = 8π^5/45 − 32π Σ_{k=1}^{n} 1/k^4 + 0 + 16π Σ_{k=1}^{n} 1/k^4 = 8π^5/45 − 16π Σ_{k=1}^{n} 1/k^4.

This Fourier series is illustrated in Figure 3.

[Figure 3: The function x^2 and its Fourier series of three increasing orders.]

The third function to consider is f(x) = |x|. To find the required Fourier series, we have to calculate the coefficients by evaluating integrals of the form

   a_k = (1/π) ∫_{−π}^{π} |x| cos(kx) dx

(see Question 7) and so for k = 0, we have

   a_0 = (1/π) ∫_{−π}^{π} |x| dx = (1/π) { ∫_{−π}^{0} (−x) dx + ∫_{0}^{π} x dx } = (1/π) { π^2/2 + π^2/2 } = π,

whereas, for k ≥ 1, we have

   a_k = (1/π) { ∫_{−π}^{0} (−x) cos(kx) dx + ∫_{0}^{π} x cos(kx) dx }
       = (1/π) { [−x sin(kx)/k]_{−π}^{0} + (1/k) ∫_{−π}^{0} sin(kx) dx + [x sin(kx)/k]_{0}^{π} − (1/k) ∫_{0}^{π} sin(kx) dx }
       = (1/π) { (1/k) [−cos(kx)/k]_{−π}^{0} + (1/k) [cos(kx)/k]_{0}^{π} }
       = (1/(πk^2)) { [(−1)^k − 1] + [(−1)^k − 1] }

   ∴ a_k = 2[(−1)^k − 1]/(πk^2).

Also, as f(x) = |x| is an even function, we know (from the last part of Question 7) that the b_k (for 1 ≤ k ≤ n) will be zero. Thus, the Fourier series of order n representing the function |x| is

   π/2 + Σ_{k=1}^{n} (2[(−1)^k − 1]/(πk^2)) cos(kx).

Using this, the mean square error between a function f(x) and the corresponding Fourier series of order n, g(x), is given by

   MSE = ∫_{−π}^{π} [f(x) − g(x)]^2 dx,

and so in this case we have

   MSE = ∫_{−π}^{π} [ |x| − π/2 − Σ_{k=1}^{n} (2[(−1)^k − 1]/(πk^2)) cos(kx) ]^2 dx.

This [again] looks a bit tricky, but squaring the bracket you should be able to convince yourself that this gives

   MSE = ∫_{−π}^{π} (|x| − π/2)^2 dx − (4/π) Σ_{k=1}^{n} ([(−1)^k − 1]/k^2) ∫_{−π}^{π} (|x| − π/2) cos(kx) dx
         + (4/π^2) Σ_{k≠l} ([(−1)^k − 1][(−1)^l − 1]/(k^2 l^2)) ∫_{−π}^{π} cos(kx) cos(lx) dx + (4/π^2) Σ_{k=1}^{n} ([(−1)^k − 1]^2/k^4) ∫_{−π}^{π} cos^2(kx) dx.

But, noting that

   ∫_{−π}^{π} (|x| − π/2)^2 dx = ∫_{−π}^{π} (x^2 − π|x| + π^2/4) dx = 2π^3/3 − π^3 + π^3/2 = π^3/6,

and

   ∫_{−π}^{π} (|x| − π/2) cos(kx) dx = ∫_{−π}^{π} |x| cos(kx) dx = ∫_{−π}^{0} (−x) cos(kx) dx + ∫_{0}^{π} x cos(kx) dx = 2[(−1)^k − 1]/k^2,

as well as our earlier results, we can see that this is just

   MSE = π^3/6 − (8/π) Σ_{k=1}^{n} [(−1)^k − 1]^2/k^4 + 0 + (4/π) Σ_{k=1}^{n} [(−1)^k − 1]^2/k^4 = π^3/6 − (4/π) Σ_{k=1}^{n} [(−1)^k − 1]^2/k^4.

This Fourier series is illustrated in Figure 4.

[Figure 4: The function |x| and its Fourier series of three increasing orders. (Notice that two of these Fourier series coincide here, as the second order term is zero!)]
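The corresponding checks for x^2 and |x| (a sketch assuming NumPy) compare directly computed mean square errors with the two closed forms derived above:

import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
integral = lambda w: float((w[1:] + w[:-1]).sum() * dx / 2)   # trapezoidal rule
n = 4

# x^2 against pi^2/3 + sum 4(-1)^k cos(kx)/k^2.
s = np.pi**2 / 3 + sum(4 * (-1)**k / k**2 * np.cos(k * x) for k in range(1, n + 1))
print(integral((x**2 - s)**2),
      8 * np.pi**5 / 45 - 16 * np.pi * sum(1 / k**4 for k in range(1, n + 1)))

# |x| against pi/2 + sum 2[(-1)^k - 1] cos(kx)/(pi k^2).
s = np.pi / 2 + sum(2 * ((-1)**k - 1) / (np.pi * k**2) * np.cos(k * x) for k in range(1, n + 1))
print(integral((np.abs(x) - s)**2),
      np.pi**3 / 6 - (4 / np.pi) * sum(((-1)**k - 1)**2 / k**4 for k in range(1, n + 1)))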

Note: You may wonder why we have bothered to calculate the mean square error in this question (as it is not normally calculated, or even introduced, in methods courses). We take this opportunity to note the following facts:

   Σ_{k=1}^{∞} 1/k^2 = π^2/6,   Σ_{k=1}^{∞} 1/k^4 = π^4/90,   Σ_{k=1}^{∞} [(−1)^k − 1]^2/k^4 = π^4/24.

Thus, if we let n → ∞ in the above results, the mean square errors will tend to zero. This indicates that if we take the Fourier series of these functions (i.e. their representation in terms of a trigonometric polynomial of infinite degree), we will have an exact expression for them in terms of sines and cosines. Indeed, this seems to imply that these functions are in the subspace of F[−π, π] spanned by the vectors in G_∞. This idea is explored further (in a more transparent context) in Questions 5 and 8. However, you may care to take a look at the illustrations of these Fourier series that are given in Figures 2, 3 and 4. (Notice how the Fourier series gives a better approximation to the function in question as we increase the number of terms!) Also observe how the mean square error is much easier to find if we use the result derived in the lectures (as we did in Question 2) instead of just evaluating the appropriate norm (as we did in Questions 5, 8 and 10)!

Remark: There is, of course, much more to Fourier series than we have covered here. Once again, you should note that the Fourier series of a function is the series that we get as n (the order in our exposition) tends to infinity.
