
Lecture 5. Inner product spaces (cont'd)

Orthogonality in inner product spaces

An important property of inner product spaces is orthogonality. Let X be an inner product space. If ⟨x,y⟩ = 0 for two elements x, y ∈ X, then x and y are said to be orthogonal (to each other). Mathematically, this relation is written as x ⊥ y, just as is done for vectors in R^n.

We're now going to need a few ideas and definitions for the discussion that is coming up. Recall that a subspace Y of a vector space X is a nonempty subset Y ⊆ X such that for all y_1, y_2 ∈ Y and all scalars c_1, c_2, the element c_1 y_1 + c_2 y_2 ∈ Y, i.e., Y is itself a vector space. This implies that Y must contain the zero element 0. Moreover, the subspace Y is convex: for every x, y ∈ Y, the segment joining x and y, i.e., the set of all convex combinations,

    z = αx + (1 − α)y,  0 ≤ α ≤ 1,  (1)

is contained in Y.

A vector space X is said to be the direct sum of two subspaces Y and Z, written as follows,

    X = Y ⊕ Z,  (2)

if each x ∈ X has a unique representation of the form

    x = y + z,  y ∈ Y, z ∈ Z.  (3)

The sets Y and Z are said to be algebraic complements of each other.

In R^n, and in inner product spaces X in general, it is convenient to consider spaces that are orthogonal to each other. Let S ⊆ X be a subset of X and define

    S^⊥ = {x ∈ X | x ⊥ S} = {x ∈ X | ⟨x,y⟩ = 0 for all y ∈ S}.  (4)

The set S^⊥ is said to be the orthogonal complement of S.
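As a concrete illustration of definition (4), here is a small numerical sketch in Python with NumPy (an illustration added to these notes, not part of the original lecture; the helper name is ours). It computes an orthonormal basis for S^⊥ when S is the span of finitely many vectors in R^n, using the singular value decomposition: the right singular vectors belonging to (numerically) zero singular values span the orthogonal complement.

import numpy as np

def orthogonal_complement(S):
    # S is an (m, n) array whose m rows span the subspace span(S) in R^n.
    # The right singular vectors with zero singular value span span(S)-perp.
    S = np.atleast_2d(S)
    _, sing_vals, Vt = np.linalg.svd(S)
    rank = int(np.sum(sing_vals > 1e-12))
    return Vt[rank:]          # orthonormal rows spanning the complement

# Example: S = span{(1, 1, 0)} in R^3; its complement is two-dimensional.
B = orthogonal_complement(np.array([[1.0, 1.0, 0.0]]))
print(B)                                   # two orthonormal rows
print(B @ np.array([1.0, 1.0, 0.0]))       # both inner products are ~0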

Aside: Some remarks regarding the idea of a complement

This section was not covered in the lecture. It is included here only for supplementary purposes. You will not be examined on this material.

The concept of an algebraic complement does not have to invoke the use of orthogonality. With thanks to a student who once raised the question of algebraic vs. orthogonal complementarity in class, let us consider the following example. Let X denote the (11-dimensional) vector space of polynomials of degree at most 10, i.e.,

    X = { Σ_{k=0}^{10} c_k x^k, c_k ∈ R, 0 ≤ k ≤ 10 }.  (5)

Equivalently,

    X = span{1, x, x^2, ..., x^10}.  (6)

Now define

    Y = span{1, x, x^2, x^3, x^4, x^5},  Z = span{x^6, x^7, x^8, x^9, x^10}.  (7)

First of all, Y and Z are subspaces of X. Furthermore, X is a direct sum of the subspaces Y and Z. However, the spaces Y and Z are not orthogonal complements of each other. First of all, for the notion of orthogonal complementarity, we would have to define an interval of support, e.g., [−1,1], over which the inner products of the functions are defined. (And then we would have to make sure that all inner products involving these functions are defined.) Using the linearly independent functions x^k, 0 ≤ k ≤ 10, one can then construct (via Gram-Schmidt orthogonalization) an orthogonal set of polynomial basis functions {φ_k(x)}, 0 ≤ k ≤ 10, over X. It is possible that the first few members of this orthogonal set will contain the functions x^k, 0 ≤ k ≤ 5, which come from the set Y. But the remaining members of the orthogonal set will contain higher powers of x, i.e., x^k, 6 ≤ k ≤ 10, as well as lower powers of x, i.e., x^k, 0 ≤ k ≤ 5. In other words, the remaining members of the orthogonal set will not be elements of the set Z: they will have nonzero components in Y. See also Example 3 below.

Examples:

1. Let X be the Hilbert space R^3 and S ⊂ X defined as S = span{(1,0,0)}. Then S^⊥ = span{(0,1,0),(0,0,1)}. In this case both S and S^⊥ are subspaces.

2. As before, X = R^3 but with S = {(c,0,0) | c ∈ [0,1]}. Now S is no longer a subspace but simply a subset of X. Nevertheless, S^⊥ is the same set as in 1. above, i.e., S^⊥ = span{(0,1,0),(0,0,1)}. We have to include all elements of X that are orthogonal to the elements of S. That being said, we shall normally be working more along the lines of Example 1, i.e., subspaces and their orthogonal complements.

3. Further to the discussion of algebraic vs. orthogonal complementarity, consider the same spaces X, Y and Z as defined in Eqs. (6) and (7), but defined over the interval [−1,1]. The orthogonal polynomials φ_k(x) over [−1,1] that may be constructed from the functions x^k, 0 ≤ k ≤ 10, are the so-called Legendre polynomials P_n(x), listed below:

    P_0(x) = 1
    P_1(x) = x
    P_2(x) = (1/2)(3x^2 − 1)
    P_3(x) = (1/2)(5x^3 − 3x)
    P_4(x) = (1/8)(35x^4 − 30x^2 + 3)
    P_5(x) = (1/8)(63x^5 − 70x^3 + 15x)
    P_6(x) = (1/16)(231x^6 − 315x^4 + 105x^2 − 5)
    P_7(x) = (1/16)(429x^7 − 693x^5 + 315x^3 − 35x)
    P_8(x) = (1/128)(6435x^8 − 12012x^6 + 6930x^4 − 1260x^2 + 35)
    P_9(x) = (1/128)(12155x^9 − 25740x^7 + 18018x^5 − 4620x^3 + 315x)
    P_10(x) = (1/256)(46189x^10 − 109395x^8 + 90090x^6 − 30030x^4 + 3465x^2 − 63)  (8)

These polynomials satisfy the following orthogonality property,

    ∫_{−1}^{1} P_m(x) P_n(x) dx = (2/(2n+1)) δ_mn,  (9)

where δ_mn is the Kronecker delta,

    δ_mn = { 1, m = n;  0, m ≠ n }.  (10)

From the above table, we see that the Legendre polynomials P_n(x), 0 ≤ n ≤ 5, belong to the space Y, whereas the polynomials P_n, 6 ≤ n ≤ 10, do not belong solely to Z. This suggests that the spaces Y and Z are not orthogonal complements. However, the following spaces are orthogonal complements:

    Y_1 = span{P_0, P_1, P_2, P_3, P_4, P_5},  Z_1 = span{P_6, P_7, P_8, P_9, P_10}.  (11)

(Actually, Y_1 is identical to Y defined earlier.)

There is, however, another decomposition going on in this space, which is made possible by the fact that the interval [−1,1] is symmetric with respect to the point x = 0. Note that the polynomials P_n(x) are either even or odd. This suggests that we should consider the following subsets of X,

    Ỹ = {u ∈ X | u is an even function},  Z̃ = {u ∈ X | u is an odd function}.  (12)

It is a quite simple exercise to show that any function f(x) defined on an interval [−a,a] may be written as a sum of an even function and an odd function. This implies that any element u ∈ X may be expressed in the form

    u = v + w,  v ∈ Ỹ, w ∈ Z̃.  (13)

Therefore the spaces Ỹ and Z̃ are algebraic complements. In terms of the inner product of functions over [−1,1], however, Ỹ and Z̃ are also orthogonal complements, since

    ∫_{−1}^{1} f(x) g(x) dx = 0  (14)

if f and g have different parity.
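The orthogonality property (9) is easy to spot-check numerically. The following sketch, in Python with NumPy (an added illustration, not part of the lecture; the helper P is ours), integrates products of Legendre polynomials over [−1,1] with Gauss-Legendre quadrature, which is exact for the polynomial degrees involved here:

import numpy as np
from numpy.polynomial import legendre as L

# 32-point Gauss-Legendre rule: exact for polynomials of degree <= 63,
# more than enough for the products P_m * P_n with m, n <= 10.
nodes, weights = L.leggauss(32)

def P(n, x):
    # Evaluate the Legendre polynomial P_n at x.
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return L.legval(x, coeffs)

for m in range(11):
    for n in range(11):
        integral = np.sum(weights * P(m, nodes) * P(n, nodes))
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(integral - expected) < 1e-12

print("orthogonality of P_0, ..., P_10 on [-1,1] verified")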

The discussion that follows will be centered around Hilbert spaces, i.e., complete inner product spaces. This is because we shall need the closure properties of these spaces, i.e., that they contain the limit points of all sequences. The following result is very important.

The Projection Theorem for Hilbert spaces

Theorem: Let H be a Hilbert space and Y ⊆ H any closed subspace of H. (Note: This means that Y contains its limit points. In the case of finite-dimensional spaces, e.g., R^n, a subspace is closed. But a subspace of an infinite-dimensional vector space need not be closed.) Now let Z = Y^⊥. Then for any x ∈ H, there is a unique decomposition of the form

    x = y + z,  y ∈ Y, z ∈ Z = Y^⊥.  (15)

The point y is called the (orthogonal) projection of x on Y.

This is an extremely important result from analysis, and equally important in applications. We'll examine its implications a little later, in terms of best approximations in a Hilbert space. In the figure below, we provide a sketch that will hopefully illustrate the situation.

[Figure: a sketch of the Hilbert space H, the subspace Y drawn as a region through 0, a point x ∈ H, and its orthogonal projection y ∈ Y.]

The space Y is contained between the two lines that emanate from 0. Note that Y lies on both sides of 0: if p ∈ Y, then −p, which lies on the other side of 0, is also an element of Y. The point y ∈ Y is the orthogonal projection of x onto the set Y and may be viewed as the point of intersection between Y and the line which extends from x to the set Y in such a way that it is perpendicular to Y. The examples that we consider should clarify this idea.
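Before moving on, here is a small numerical illustration of the Projection Theorem in H = R^3, written in Python with NumPy (an added sketch, not part of the lecture). It builds the orthogonal projector onto a two-dimensional subspace Y, and verifies the decomposition x = y + z together with two facts derived on the next page: idempotency of the projection and the Pythagorean relation ‖x‖^2 = ‖y‖^2 + ‖z‖^2.

import numpy as np

# Y = span{(1,0,0), (0,1,1)}, a (closed) subspace of R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])              # columns span Y

# Orthogonal projector onto Y: P = A (A^T A)^{-1} A^T.
P = A @ np.linalg.solve(A.T @ A, A.T)

x = np.array([1.0, 2.0, 3.0])
y = P @ x            # projection of x on Y
z = x - y            # residual, an element of Y-perp

print(np.allclose(P @ P, P))                  # True: P is idempotent
print(np.allclose(A.T @ z, 0.0))              # True: z is orthogonal to Y
print(np.allclose(x @ x, y @ y + z @ z))      # True: Pythagorean relation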

From Eq. (15), we can define a mapping P_Y : H → Y, the projection of H onto Y, so that

    P_Y : x ↦ y.  (16)

Note that

    P_Y : H → Y,  P_Y : Y → Y,  P_Y : Y^⊥ → {0}.  (17)

Furthermore, P_Y is an idempotent operator, i.e.,

    P_Y^2 = P_Y.  (18)

This follows from the fact that P_Y(x) = y and P_Y(y) = y, implying that P_Y(P_Y(x)) = P_Y(x). Finally, note that

    ‖x‖^2 = ‖y‖^2 + ‖z‖^2.  (19)

This follows from the fact that the norm is defined by means of the inner product:

    ‖x‖^2 = ⟨x,x⟩
          = ⟨y+z, y+z⟩
          = ⟨y,y⟩ + ⟨y,z⟩ + ⟨z,y⟩ + ⟨z,z⟩
          = ‖y‖^2 + ‖z‖^2,  (20)

where the final line results from the fact that ⟨y,z⟩ = ⟨z,y⟩ = 0, since y ∈ Y and z ∈ Z = Y^⊥.

Example: Let H = R^3, Y = span{(1,0,0)} and Y^⊥ = span{(0,1,0),(0,0,1)}. Then x = (1,2,3) admits the unique expansion

    (1,2,3) = (1,0,0) + (0,2,3),  (21)

where y = (1,0,0) ∈ Y and z = (0,2,3) ∈ Y^⊥. Here y is the unique projection of x on Y.

Orthogonal/orthonormal sets of a Hilbert space

Let H denote a Hilbert space. A set {u_1, u_2, ..., u_n} ⊂ H is said to form an orthogonal set in H if

    ⟨u_i, u_j⟩ = 0 for i ≠ j.  (22)

If, in addition,

    ⟨u_i, u_i⟩ = ‖u_i‖^2 = 1,  1 ≤ i ≤ n,  (23)

then the {u_i} are said to form an orthonormal set in H.

You will not be surprised by the following result, since you have most probably seen it in earlier courses in linear algebra.

Theorem: An orthogonal set {u_1, u_2, ...} not containing the zero element 0 is linearly independent.

Proof: Assume that there are scalars c_1, c_2, ..., c_n such that

    c_1 u_1 + c_2 u_2 + ... + c_n u_n = 0.  (24)

For each k = 1, 2, ..., n, form the inner product of both sides of the above equation with u_k, i.e.,

    ⟨u_k, c_1 u_1 + c_2 u_2 + ... + c_n u_n⟩ = ⟨u_k, 0⟩ = 0.  (25)

By the orthogonality of the u_i, the LHS of the above equation reduces to c_k ‖u_k‖^2, implying that

    c_k ‖u_k‖^2 = 0,  k = 1, 2, ..., n.  (26)

By assumption, however, u_k ≠ 0, implying that ‖u_k‖^2 ≠ 0. This implies that all scalars c_k are zero, which means that the set {u_1, u_2, ..., u_n} is linearly independent.

Note: As you have also seen in courses in linear algebra, given a linearly independent set {v_1, v_2, ..., v_n}, we can always construct an orthonormal set {e_1, e_2, ..., e_n} via the Gram-Schmidt orthogonalization procedure. Moreover,

    span{v_1, v_2, ..., v_n} = span{e_1, e_2, ..., e_n}.  (27)

More on this later.

We have now arrived at the most important result of this section.

Best approximation in Hilbert spaces

Recall the idea of the best approximation in normed linear spaces, discussed a couple of lectures ago. Let X be an infinite-dimensional normed linear space. Furthermore, let u_i ∈ X, 1 ≤ i ≤ n, be a set of n linearly independent elements of X and define the n-dimensional subspace,

    S_n = span{u_1, u_2, ..., u_n}.  (28)

Then let x be an arbitrary element of X. We wish to find the best approximation to x in the subspace S_n. It will be given by the element y_n ∈ S_n that lies closest to x, i.e.,

    y_n = arg min_{v ∈ S_n} ‖x − v‖.  (29)

(The variables used above may be different from those used in the earlier lecture.)

We are going to use the same idea of best approximation, but in a Hilbert space setting, where we have the additional property that an inner product exists in our space. This, of course, opens the door to the idea of orthogonality, which will play an important role. The best approximation in Hilbert spaces may be phrased as follows:

Theorem: Let {e_1, e_2, ..., e_n} be an orthonormal set in a Hilbert space H. Define Y = span{e_i}_{i=1}^n; Y is a subspace of H. Then for any x ∈ H, the best approximation of x in Y is given by the unique element

    y = P_Y(x) = Σ_{k=1}^n c_k e_k (the projection of x onto Y),  (30)

where

    c_k = ⟨x, e_k⟩,  k = 1, 2, ..., n.  (31)

The c_k are called the Fourier coefficients of x w.r.t. the set {e_k}. Furthermore,

    Σ_{k=1}^n |c_k|^2 ≤ ‖x‖^2.  (32)

Proof: Any element v ∈ Y may be written in the form

    v = Σ_{k=1}^n c_k e_k.  (33)

The best approximation to x in Y is the point y ∈ Y that minimizes the distance ‖x − v‖ over v ∈ Y, i.e.,

    y = arg min_{v ∈ Y} ‖x − v‖.  (34)

In other words, we must find scalars c_1, c_2, ..., c_n such that the distance,

    f(c_1, c_2, ..., c_n) = ‖x − Σ_{k=1}^n c_k e_k‖,  (35)

is minimized. Here, f : R^n (or C^n) → R. It is easier to consider the non-negative squared distance function,

    g(c_1, c_2, ..., c_n) = ‖x − Σ_{k=1}^n c_k e_k‖^2 = ⟨x − Σ_{k=1}^n c_k e_k, x − Σ_{l=1}^n c_l e_l⟩.  (36)

Minimizing g is equivalent to minimizing f.

A quick derivation may be done for the real-scalar-valued case, i.e., c_i ∈ R. Using the linearity properties of the inner product, we can first expand the right-hand side into four components as follows,

    ⟨x − Σ_{k=1}^n c_k e_k, x − Σ_{l=1}^n c_l e_l⟩ = ⟨x,x⟩ − ⟨x, Σ_{l=1}^n c_l e_l⟩ − ⟨Σ_{k=1}^n c_k e_k, x⟩ + ⟨Σ_{k=1}^n c_k e_k, Σ_{l=1}^n c_l e_l⟩.  (37)

The first term on the RHS is simply ⟨x,x⟩ = ‖x‖^2. The second term, using once again the linearity properties of the inner product, may be expressed as follows,

    ⟨x, Σ_{l=1}^n c_l e_l⟩ = Σ_{l=1}^n ⟨x, c_l e_l⟩ = Σ_{l=1}^n c_l ⟨x, e_l⟩.  (38)

Likewise, the third term becomes

    ⟨Σ_{k=1}^n c_k e_k, x⟩ = Σ_{k=1}^n c_k ⟨e_k, x⟩ = Σ_{k=1}^n c_k ⟨x, e_k⟩.  (39)

The final inner product on the RHS of Eq. (37) becomes

    ⟨Σ_{k=1}^n c_k e_k, Σ_{l=1}^n c_l e_l⟩ = Σ_{k=1}^n Σ_{l=1}^n c_k c_l ⟨e_k, e_l⟩ = Σ_{k=1}^n c_k^2,  (40)

where the final line is a result of the orthonormality of the e_k. From all of these calculations, Eq. (37) becomes

    g(c_1, c_2, ..., c_n) = ‖x‖^2 − Σ_{l=1}^n c_l ⟨x, e_l⟩ − Σ_{k=1}^n c_k ⟨x, e_k⟩ + Σ_{k=1}^n c_k^2.  (41)

Recall that we would like to minimize this function of n variables. We first impose the necessary stationarity conditions for a minimum,

    ∂g/∂c_p = −⟨x, e_p⟩ − ⟨e_p, x⟩ + 2c_p = 0,  p = 1, 2, ..., n.  (42)

Therefore,

    c_p = ⟨x, e_p⟩,  p = 1, 2, ..., n,  (43)

which identifies a unique point c ∈ R^n, to which corresponds a unique element y ∈ Y. In order to check that this point corresponds to a minimum, we examine the second partial derivatives,

    ∂²g / (∂c_j ∂c_i) = 2δ_ij.  (44)

In other words, the Hessian matrix is diagonal and positive definite for all c. Therefore the point corresponds to a global minimum.

The complex-scalar case, i.e., c_k ∈ C, is slightly more complicated. A proof (which, of course, would include the real-valued case) is given at the end of this section.

Finally, substitution of these (real or complex) values of c_k into the squared distance function in Eq. (36) yields the result

    g(c_1, c_2, ..., c_n) = ‖x‖^2 − Σ_{l=1}^n |c_l|^2 − Σ_{k=1}^n |c_k|^2 + Σ_{k=1}^n |c_k|^2 = ‖x‖^2 − Σ_{k=1}^n |c_k|^2,  (45)

which, since g ≥ 0, then implies Eq. (32). The proof of the Theorem is complete.

Some additional comments regarding the best approximation:

1. The above result implies that the element x ∈ H may be expressed uniquely as

    x = y + z,  y ∈ Y, z ∈ Y^⊥,  (46)

where y ∈ Y is the best approximation to x in Y as given in Eq. (30). To see this, define

    z = x − y = x − Σ_{k=1}^n c_k e_k,  (47)

where the c_k are given by (31). For l = 1, 2, ..., n, take the inner product of e_l with both sides of this equation to give

    ⟨z, e_l⟩ = ⟨x, e_l⟩ − Σ_{k=1}^n c_k ⟨e_k, e_l⟩ = ⟨x, e_l⟩ − c_l = 0.  (48)

Therefore z ⊥ e_l, l = 1, 2, ..., n, implying that z ∈ Y^⊥.

2. Here, we simply repeat the fact that the best approximation y ∈ Y to x in Eq. (30) is the projection of x onto Y, i.e., y = P_Y(x).

3. The term z in (46) may be viewed as the residual in the approximation x ≈ y. The norm of this residual is the magnitude of the error of the approximation x ≈ y, i.e.,

    Δ_n = ‖z‖ = ‖x − y‖.  (49)

Also recall that Δ_n is the distance between x and the space S_n.

4. As the dimension n of the orthonormal set {e_1, e_2, ..., e_n} is increased, we expect to obtain better approximations to the element x, unless, of course, x is an element of one of these finite-dimensional spaces, in which case we arrive at zero approximation error, and no further improvement is possible. For an n ≥ 1, define the subspace

    Y_n = span{e_1, e_2, ..., e_n},  (50)

and let y_n = P_{Y_n}(x) be the (unique) best approximation to x in Y_n. Then the (magnitude of the) error of the approximation x ≈ y_n is given by

    ‖z_n‖ = ‖x − y_n‖.  (51)

We expect that ‖z_{n+1}‖ ≤ ‖z_n‖.

5. Note also that the inequality

    Σ_{k=1}^n |c_k|^2 ≤ ‖x‖^2  (52)

holds for all appropriate values of n > 0. (If H is finite-dimensional, i.e., dim(H) = N, then n = 1, 2, ..., N.) In other words, the partial sums on the left are bounded from above. This inequality, known as Bessel's inequality, will have important consequences.

Some examples

We now consider some examples to illustrate the property that the best approximation of an element x ∈ H in a subspace Y ⊆ H is the projection P_Y(x).

1. The finite-dimensional case H = R^3, as an easy starter. Let x = (a,b,c) = a e_1 + b e_2 + c e_3. Geometrically, the e_i may be visualized as the i, j and k unit vectors, which form an orthonormal set. Now let Y = span{e_1, e_2}. Then y = P_Y(x) = (a,b,0), which lies in the e_1-e_2 plane. Moreover, the distance between x and y is the distance from x to the e_1-e_2 plane:

    ‖x − y‖ = [(a−a)^2 + (b−b)^2 + (c−0)^2]^{1/2} = |c|.  (53)

2. Now let H = L^2[−π,π], the space of square-integrable functions on [−π,π], and consider the following set of functions,

    e_1(x) = 1/√(2π),  e_2(x) = (1/√π) cos x,  e_3(x) = (1/√π) sin x.  (54)

These three functions form an orthonormal set in H, i.e.,

    ⟨e_i, e_j⟩ = ∫_{−π}^{π} e_i(x) e_j(x) dx = δ_ij.  (55)

(The reader is invited to verify the above statement.) Let Y_3 = span{e_1, e_2, e_3}. Now consider the function f(x) = x^2 in L^2[−π,π]. The best approximation to f in Y_3 will be given by the function

    f_3 = P_{Y_3} f = ⟨f, e_1⟩ e_1 + ⟨f, e_2⟩ e_2 + ⟨f, e_3⟩ e_3.  (56)

We now compute the Fourier coefficients:

    ⟨f, e_1⟩ = (1/√(2π)) ∫_{−π}^{π} x^2 dx = (√2/3) π^{5/2},
    ⟨f, e_2⟩ = (1/√π) ∫_{−π}^{π} x^2 cos x dx = −4√π,
    ⟨f, e_3⟩ = (1/√π) ∫_{−π}^{π} x^2 sin x dx = 0.  (57)

The final result is

    f_3(x) = (√2/3) π^{5/2} · (1/√(2π)) − 4√π · (1/√π) cos x = π^2/3 − 4 cos x.  (58)

Note: This result is identical to the result obtained by the traditional Fourier series method, where you simply worked with the cos x and sin x functions and computed the expansion coefficients using the formulas from AMATH 231, cf. Lecture 2. Computationally, more work is

involved in producing the above result, because you have to work with factors and powers of π that may eventually disappear. The advantage of the above approach is to illustrate the best approximation idea in terms of projections onto spans of orthonormal sets.

Finally, we compute the error of the above approximation to be (via MAPLE)

    ‖f − f_3‖_2 = [∫_{−π}^{π} (x^2 − π^2/3 + 4 cos x)^2 dx]^{1/2} ≈ 2.034.  (59)

The approximation is sketched in the top figure on the next page.

3. Same space and function f(x) = x^2 as above, but we add two more elements to our orthonormal basis set,

    e_4(x) = (1/√π) cos 2x,  e_5(x) = (1/√π) sin 2x.  (60)

Now define Y_5 = span{e_1, ..., e_5}. The approximation to f in this space, f_5 = P_{Y_5} f, is

    f_5 = Σ_{k=1}^5 ⟨f, e_k⟩ e_k = f_3 + ⟨f, e_4⟩ e_4 + ⟨f, e_5⟩ e_5.  (61)

As you already know from AMATH 231, the first three coefficients of the expansion do not have to be recomputed. This is the advantage of working with an orthonormal basis set. The final two Fourier coefficients are now computed,

    ⟨f, e_4⟩ = (1/√π) ∫_{−π}^{π} x^2 cos 2x dx = √π,
    ⟨f, e_5⟩ = (1/√π) ∫_{−π}^{π} x^2 sin 2x dx = 0.  (62)

The final result is

    f_5(x) = π^2/3 − 4 cos x + cos 2x.  (63)

This approximation is sketched in the bottom figure on the next page. Finally, the error of this approximation is computed to be (via MAPLE)

    ‖f − f_5‖_2 ≈ 0.998,  (64)

which is lower than the approximation error yielded by f_3, as expected.

[Figure: the approximation f_3(x) = (P_{Y_3} f)(x) to f(x) = x^2 on [−π,π], with error ‖f − f_3‖_2 ≈ 2.034.]

[Figure: the approximation f_5(x) = (P_{Y_5} f)(x) to f(x) = x^2 on [−π,π], with error ‖f − f_5‖_2 ≈ 0.998.]
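The coefficients in (57) and (62) and the error values (59) and (64) are easy to confirm numerically. The following sketch, in Python with SciPy (an added illustration standing in for the MAPLE computations mentioned above), recomputes them by direct integration and via the identity ‖f − f_n‖_2^2 = ‖f‖^2 − Σ c_k^2 from the proof of the best approximation theorem:

import numpy as np
from scipy.integrate import quad

f = lambda x: x**2
e = [lambda x: 1/np.sqrt(2*np.pi),
     lambda x: np.cos(x)/np.sqrt(np.pi),
     lambda x: np.sin(x)/np.sqrt(np.pi),
     lambda x: np.cos(2*x)/np.sqrt(np.pi),
     lambda x: np.sin(2*x)/np.sqrt(np.pi)]

# Fourier coefficients c_k = <f, e_k> on [-pi, pi].
c = [quad(lambda x, ek=ek: f(x)*ek(x), -np.pi, np.pi)[0] for ek in e]
print(c[0], np.sqrt(2)/3 * np.pi**2.5)    # both ~ 8.2465
print(c[1], -4*np.sqrt(np.pi))            # both ~ -7.0898
print(c[3], np.sqrt(np.pi))               # both ~ 1.7725

# L^2 errors ||f - f_3||_2 and ||f - f_5||_2.
norm_f_sq = quad(lambda x: f(x)**2, -np.pi, np.pi)[0]   # = 2 pi^5 / 5
err3 = np.sqrt(norm_f_sq - sum(ck**2 for ck in c[:3]))  # ~ 2.034
err5 = np.sqrt(norm_f_sq - sum(ck**2 for ck in c[:5]))  # ~ 0.998
print(err3, err5)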

4. Now consider the space H = L^2[−1,1] and the following set of functions,

    e_1(x) = 1/√2,  e_2(x) = √(3/2) x,  e_3(x) = √(5/2) · (1/2)(3x^2 − 1).  (65)

These three functions form an orthonormal set on [−1,1]. Moreover, span{e_1, e_2, e_3} = span{1, x, x^2}. These functions result from the application of the Gram-Schmidt orthogonalization procedure to the linearly independent set {1, x, x^2}.

5. We now consider a slightly more intriguing example that will provide a preview to our study of wavelet functions later in this course. Consider the function space L^2[0,1] and the two elements e_1 and e_2 given by

    e_1(x) = 1, 0 ≤ x ≤ 1;    e_2(x) = { 1, 0 ≤ x ≤ 1/2;  −1, 1/2 < x ≤ 1 }.  (66)

They are sketched in the figure below.

[Figure: graphs of y = e_1(x) and y = e_2(x) on [0,1]; both take only the values 1 and −1.]

It is not too hard to see that these two functions form an orthonormal set in L^2[0,1], i.e.,

    ⟨e_1, e_1⟩ = ⟨e_2, e_2⟩ = 1,  ⟨e_1, e_2⟩ = 0.  (67)

Now let f(x) = x^2 as before. Let us first consider the subspace Y_1 = span{e_1}. It is the one-dimensional subspace of functions in L^2[0,1] that are constant on the interval. The approximation to f in this space is given by

    f_1 = (P_{Y_1} f) = ⟨f, e_1⟩ e_1 = ⟨f, e_1⟩,  (68)

since e_1 = 1. The Fourier coefficient is given by

    ⟨f, e_1⟩ = ∫_0^1 x^2 dx = 1/3.  (69)

Therefore, the function f_1(x) = 1/3, sketched in the left subfigure below, is the best constant-function approximation to f(x) = x^2 on the interval. It is the mean value of f on [0,1].

Now consider the space Y_2 = span{e_1, e_2}. The best approximation to f in this space will be given by

    f_2 = ⟨f, e_1⟩ e_1 + ⟨f, e_2⟩ e_2.  (70)

The first term, of course, has already been computed. The second Fourier coefficient is given by

    ⟨f, e_2⟩ = ∫_0^1 x^2 e_2(x) dx
             = ∫_0^{1/2} x^2 dx − ∫_{1/2}^1 x^2 dx
             = (1/3) x^3 |_0^{1/2} − (1/3) x^3 |_{1/2}^1
             = −1/4.  (71)

Therefore

    f_2 = (1/3) e_1 − (1/4) e_2.  (72)

In order to get the graph of f_2 from that of f_1, we simply subtract 1/4 from the value 1/3 over the interval [0,1/2] and add 1/4 to the value 1/3 over the interval (1/2,1]. The result is

    f_2(x) = { 1/12, 0 ≤ x ≤ 1/2;  7/12, 1/2 < x ≤ 1 }.  (73)

The graph of f_2(x) is sketched in the right subfigure below. The values 1/12 and 7/12 correspond to the mean values of f(x) = x^2 over the intervals [0,1/2] and (1/2,1], respectively. (These should, of course, agree with your calculations in Problem Set No. 1.)

[Figure: left, y = x^2 on [0,1] with its mean value 1/3; right, y = x^2 with the piecewise-constant approximation f_2, equal to 1/12 on [0,1/2] and 7/12 on (1/2,1].]

The space Y_2 is the vector space of functions in L^2[0,1] that are piecewise constant over the half-intervals [0,1/2] and (1/2,1]. The function f_2 is the best approximation to x^2 from this space.
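The values 1/12 and 7/12 are just the means of x^2 over the two half-intervals, which a few lines of Python with SciPy confirm (an added sketch, not part of the original notes):

import numpy as np
from scipy.integrate import quad

f = lambda x: x**2
e1 = lambda x: 1.0
e2 = lambda x: 1.0 if x <= 0.5 else -1.0

c1 = quad(lambda x: f(x)*e1(x), 0, 1)[0]                  # <f, e1> = 1/3
c2 = quad(lambda x: f(x)*e2(x), 0, 1, points=[0.5])[0]    # <f, e2> = -1/4

# f2 = c1*e1 + c2*e2 takes the values c1 + c2 and c1 - c2:
print(c1 + c2)    # 1/12, the mean of x^2 over [0, 1/2]
print(c1 - c2)    # 7/12, the mean of x^2 over (1/2, 1]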

A natural question to ask is, "What would be the next functions in this set of piecewise-constant orthonormal functions?" Two possible candidates are the functions sketched below.

[Figure: graphs of y = e_3(x) and y = e_4(x) on [0,1], piecewise constant with values 1 and −1, changing sign on quarter-intervals.]

These, in fact, are called Walsh functions and have been used in signal/image processing. However, another set of functions which can be employed, and which will be quite relevant later in the course, includes the following:

[Figure: graphs of y = e_3(x) and y = e_4(x), supported on [0,1/2] and [1/2,1] respectively and taking the values √2, −√2 and 0.]

These are the next two Haar wavelet functions. We claim that the space Y_4 = span{e_1, e_2, e_3, e_4} is the set of all functions in L^2[0,1] that are piecewise constant on the quarter-intervals [0,1/4], (1/4,1/2], (1/2,3/4] and (3/4,1].

A note on the Gram-Schmidt orthogonalization procedure

As you may recall from earlier courses in linear algebra, the Gram-Schmidt procedure allows the construction of an orthonormal set of elements {e_k}_{k=1}^n from a linearly independent set {v_k}_{k=1}^n, with span{e_k}_{k=1}^n = span{v_k}_{k=1}^n. Here we simply recall the procedure. Start with an element, say v_1, and define

    e_1 = v_1 / ‖v_1‖.  (74)

Now take element v_2 and remove the component of e_1 from v_2 by defining

    z_2 = v_2 − ⟨v_2, e_1⟩ e_1.  (75)

We check that e_1 ⊥ z_2:

    ⟨e_1, z_2⟩ = ⟨e_1, v_2⟩ − ⟨e_1, v_2⟩ ⟨e_1, e_1⟩ = 0.  (76)

Now define

    e_2 = z_2 / ‖z_2‖.  (77)

We continue the procedure, taking v_3 and eliminating the components of e_1 and e_2 from it. Define

    z_3 = v_3 − ⟨v_3, e_1⟩ e_1 − ⟨v_3, e_2⟩ e_2.  (78)

It is straightforward to show that z_3 ⊥ e_1 and e_2. Then define

    e_3 = z_3 / ‖z_3‖.  (79)

In general, from a knowledge of {e_1, ..., e_{k−1}}, we can produce the next element e_k as follows:

    e_k = z_k / ‖z_k‖,  where  z_k = v_k − Σ_{i=1}^{k−1} ⟨v_k, e_i⟩ e_i.  (80)

Of course, if the inner product space in which we are working is finite-dimensional, then the procedure terminates at k = n = dim(H). But when H is infinite-dimensional, we may, at least in principle, be able to continue the process indefinitely, producing a countably infinite orthonormal set of elements {e_k}. The next question is, "Is such an orthonormal set useful?" The answer is, "Yes." (A short code sketch of the procedure is given after the appendix below.)

Appendix: Best approximation in the complex-scalar case

We now consider the squared distance function g(c) in Eq. (36) in the case that H is a complex Hilbert space, i.e., c ∈ C^n:

    g(c_1, c_2, ..., c_n) = ‖x − Σ_{k=1}^n c_k e_k‖^2
      = ⟨x − Σ_{k=1}^n c_k e_k, x − Σ_{l=1}^n c_l e_l⟩
      = ‖x‖^2 − ⟨x, Σ_{l=1}^n c_l e_l⟩ − ⟨Σ_{k=1}^n c_k e_k, x⟩ + Σ_{k=1}^n Σ_{l=1}^n ⟨c_k e_k, c_l e_l⟩

      = ‖x‖^2 − Σ_{k=1}^n [ c̄_k ⟨x, e_k⟩ + c_k ⟨e_k, x⟩ − |c_k|^2 ]
      = ‖x‖^2 + Σ_{k=1}^n [ |⟨x, e_k⟩|^2 − c̄_k ⟨x, e_k⟩ − c_k ⟨e_k, x⟩ + |c_k|^2 ] − Σ_{k=1}^n |⟨x, e_k⟩|^2
      = ‖x‖^2 + Σ_{k=1}^n |⟨x, e_k⟩ − c_k|^2 − Σ_{k=1}^n |⟨x, e_k⟩|^2.  (81)

Here the bar denotes complex conjugation, and the completion of the square uses the fact that ⟨e_k, x⟩ is the conjugate of ⟨x, e_k⟩.

The first and last terms are fixed. The middle term is a sum of nonnegative numbers. The minimum value is achieved when all of these terms are zero. Consequently, g(c_1, ..., c_n) is a minimum if and only if c_k = ⟨x, e_k⟩ for k = 1, 2, ..., n. As in the real case, we have

    ‖x‖^2 ≥ Σ_{k=1}^n |⟨x, e_k⟩|^2.  (82)
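Looking back at the Gram-Schmidt procedure of Eqs. (74)-(80), here is a compact sketch in Python with NumPy for vectors in R^n or C^n (an added illustration; in a general inner product space only the inner product and norm would change, and the helper name is ours):

import numpy as np

def gram_schmidt(V):
    # Orthonormalize the rows of V (assumed linearly independent),
    # following e_k = z_k/||z_k||, z_k = v_k - sum_i <v_k, e_i> e_i.
    E = []
    for v in V:
        z = v.astype(complex)
        for e in E:
            z = z - np.vdot(e, v) * e    # remove the component along e
        E.append(z / np.linalg.norm(z))
    return np.array(E)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(V)
print(np.allclose(E @ E.conj().T, np.eye(3)))   # True: rows are orthonormal

Note that np.vdot(e, v) conjugates its first argument, so it computes ⟨v, e⟩ under the convention that the inner product is conjugate-linear in its second slot, matching the convention used in these notes.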

Lecture 6. Inner product spaces (cont'd)

Complete orthonormal basis sets in an infinite-dimensional Hilbert space

Let H be an infinite-dimensional Hilbert space. Let us also suppose that we have an infinite sequence of orthonormal elements {e_k} ⊂ H, with ⟨e_i, e_j⟩ = δ_ij. We consider the finite orthonormal sets E_n = {e_1, e_2, ..., e_n}, n = 1, 2, ..., and define

    V_n = span{e_1, e_2, ..., e_n},  n = 1, 2, ....  (83)

Clearly, each V_n is an n-dimensional subspace of H. Recall that for an x ∈ H, the best approximation to x in V_n is given by

    y_n = P_{V_n}(x) = Σ_{k=1}^n ⟨x, e_k⟩ e_k,  (84)

with

    ‖y_n‖^2 = Σ_{k=1}^n |⟨x, e_k⟩|^2.  (85)

Let us denote the error associated with the approximation x ≈ y_n as

    Δ_n = ‖x − y_n‖.  (86)

Note that this error is the distance between x and y_n as defined by the norm ‖·‖ on H which, in turn, is defined by the inner product ⟨·,·⟩ on H.

Now consider V_{n+1} = span{e_1, e_2, ..., e_n, e_{n+1}}, which we may write as

    V_{n+1} = V_n ⊕ span{e_{n+1}}.  (87)

It follows that

    V_n ⊂ V_{n+1}.  (88)

This, in turn, implies that

    Δ_{n+1} ≤ Δ_n.  (89)

We can achieve the same error Δ_n in V_{n+1} by imposing the condition that the coefficient c_{n+1} of e_{n+1} is zero in the approximation. By allowing c_{n+1} to vary, it might be possible to obtain a better

approximation. In other words, since we are minimizing over a larger set, we can't do any worse than we did before.

If our Hilbert space H were finite-dimensional, i.e., dim(H) = N > 0, then Δ_N = 0 for all x ∈ H. But in the case that H is infinite-dimensional, we would like that

    Δ_n → 0 as n → ∞, for all x ∈ H.  (90)

In other words, all approximation errors go to zero in the limit. (Of course, in the particular case that x ∈ V_N, then Δ_N = 0. But we want to be able to say something about all x ∈ H.) Then we shall be able to write the infinite-sum result,

    x = Σ_{k=1}^∞ ⟨x, e_k⟩ e_k.  (91)

The property (90) will hold provided that the orthonormal set {e_k} is a complete or maximal orthonormal set in H.

Definition: An orthonormal set {e_k} is said to be complete or maximal if the following is true:

    If ⟨x, e_k⟩ = 0 for all k, then x = 0.  (92)

The idea is that the {e_k} elements "detect" everything in the Hilbert space H. And if none of them detect anything in an element x ∈ H, then x must be the zero element.

Now, how do we know if a complete orthonormal set can exist in a given Hilbert space? The answer is that if the Hilbert space is separable, then such a complete, countably infinite set exists. (A separable space contains a dense countable subset.) OK, so this doesn't help yet, because we now have to know whether our Hilbert space of interest is separable. Let it suffice here to state that most of the Hilbert spaces that we use in applications are separable. (See Note below.) Therefore, complete orthonormal basis sets can exist. And the final icing on the cake is the fact that, for separable Hilbert spaces, the Gram-Schmidt orthogonalization procedure can produce such a complete orthonormal basis.

Note to above: In the case of L^2[a,b], we have the following results from advanced analysis:

1. The space of all polynomials P[a,b] defined on [a,b] is dense in L^2[a,b]. That means that given any function u ∈ L^2[a,b] and an ε > 0, we can find an element p ∈ P[a,b] such that ‖u − p‖_2 < ε.

2. The set of polynomials with rational coefficients, call it P_R[a,b], a subset of P[a,b], is dense in P[a,b]. (You may know that the set of rational numbers is dense in R.) And, finally, the set P_R[a,b] is countable. (The set of rational numbers in R is countable.)

3. Therefore the set P_R[a,b] is a dense and countable subset of L^2[a,b].

Complete orthonormal basis sets and generalized Fourier series

We now conclude our discussion of complete orthonormal basis sets in a Hilbert space. In what follows, we let H denote an infinite-dimensional Hilbert space (for example, the space of square-integrable functions L^2[−π,π]). It may help to recall the following definition.

Definition: An orthonormal set {e_k} is said to be complete or maximal if the following is true:

    If ⟨x, e_k⟩ = 0 for all k, then x = 0.  (93)

Here is the main result:

Theorem: Let {e_k} denote an orthonormal set on a separable Hilbert space H. Then the following statements are equivalent:

1. The set {e_k} is complete (or maximal). (In other words, it serves as a complete basis for H.)

2. For any x ∈ H,

    x = Σ_{k=1}^∞ ⟨x, e_k⟩ e_k.  (94)

(In other words, x has a unique representation in the basis {e_k}.)

3. For any x ∈ H,

    ‖x‖^2 = Σ_{k=1}^∞ |⟨x, e_k⟩|^2.  (95)

This is called Parseval's equation.

Notes:

1. The expansion in Eq. (94) is also called a Generalized Fourier Series. Note that the basis elements e_k do not have to be sine or cosine functions; they can be polynomials in x, and the term Generalized Fourier Series may still be used.

2. The coefficients c_k = ⟨x, e_k⟩ in Eq. (94) are often called Fourier coefficients, even if the e_k are not sine or cosine functions.

3. Most important is the fact that, from Eq. (95), the infinite sequence of Fourier coefficients c = (c_1, c_2, ...) is square-summable. In other words, Σ_{l=1}^∞ |c_l|^2 < ∞, i.e., c is an element of the sequence space l^2. Parseval's relation in Eq. (95) may be rewritten as follows,

    ‖x‖_{L^2} = ‖c‖_{l^2}.  (96)

An important consequence of the above theorem: As before, let V_n = span{e_1, e_2, ..., e_n}. For a given x ∈ H, let y_n ∈ V_n be the best approximation of x in V_n, so that

    y_n = Σ_{k=1}^n c_k e_k,  c_k = ⟨x, e_k⟩.  (97)

Then the magnitude of the error of the approximation x ≈ y_n is given by

    Δ_n = ‖x − y_n‖ = ‖Σ_{k=1}^∞ c_k e_k − Σ_{k=1}^n c_k e_k‖ = ‖Σ_{k=n+1}^∞ c_k e_k‖ = [Σ_{k=n+1}^∞ |c_k|^2]^{1/2}.  (98)

In other words, the magnitude of the error is the magnitude (in l^2 norm) of the "tail" of the sequence of Fourier coefficients c, i.e., the sequence of Fourier coefficients {c_{n+1}, c_{n+2}, ...} that has been "thrown away" in order to produce the approximation x ≈ y_n. This truncation occurs because the basis functions e_{n+1}, e_{n+2}, ... do not belong to V_n.
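As a quick check of Eq. (98), here is a sketch in Python with NumPy (an added illustration) that compares the directly computed L^2 error with the l^2 norm of the discarded coefficient tail, for the function f(x) = x^2 from the previous lecture, whose coefficients in the orthonormal cosine basis are known exactly:

import numpy as np

# f(x) = x^2 on [-pi, pi]: c_0 = <f, 1/sqrt(2 pi)> = sqrt(2) pi^{5/2} / 3,
# c_k = <f, cos(kx)/sqrt(pi)> = 4 sqrt(pi) (-1)^k / k^2 for k >= 1.
n = 5
x = np.linspace(-np.pi, np.pi, 100001)
Sn = np.pi**2/3 + sum(4*(-1)**k/k**2 * np.cos(k*x) for k in range(1, n+1))

# Left side of (98): the L^2 error, by a simple Riemann sum.
lhs = np.sqrt(np.sum((x**2 - Sn)**2) * (x[1] - x[0]))

# Right side of (98): the l^2 norm of the discarded coefficient tail
# (truncated far out, since it converges rapidly).
ks = np.arange(n+1, 100001, dtype=float)
rhs = np.sqrt(np.sum(16*np.pi / ks**4))

print(lhs, rhs)   # both ~ 0.31: the error is the tail of the coefficients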

Some alternate versions of Fourier sine/cosine series

1. We have already seen and used one important orthonormal basis set: the (normalized) cosine/sine basis functions for Fourier series on [−π,π]:

    {e_k} = { 1/√(2π), (1/√π) cos x, (1/√π) sin x, (1/√π) cos 2x, ... }.  (99)

These functions form a complete orthonormal basis in the space of real-valued square-integrable functions L^2[−π,π]. This is a natural, but particular, case of the more general class of orthonormal sine/cosine functions on the interval [−a,a], where a > 0:

    {e_k} = { 1/√(2a), (1/√a) cos(πx/a), (1/√a) sin(πx/a), (1/√a) cos(2πx/a), ... }.  (100)

These functions form a complete orthonormal basis in the space of real-valued square-integrable functions L^2[−a,a]. When a = π, we have the usual Fourier series functions cos kx and sin kx. We shall return to this set in the next lecture.

2. Sometimes it is convenient to employ the following set of complex-valued square-integrable functions on [−π,π]:

    e_k = (1/√(2π)) exp(ikx),  k = ..., −2, −1, 0, 1, 2, ....  (101)

Note that the index k is infinite in both directions. These functions form a complete set in the complex-valued space L^2[−π,π]. The orthonormality of this set with respect to the complex-valued inner product on [−π,π] is left as an exercise. Because of Euler's formula,

    e^{ikx} = cos kx + i sin kx,  (102)

expansions in this basis are related to Fourier series expansions. In a sense, this set of complex-valued functions combines the infinite cosine and sine sequences from Eq. (99) to produce one doubly infinite sequence of functions. We shall return to this set in the near future.

By means of a scaling of the above result, it is easy to show that the following set forms a complete orthonormal basis for the complex-valued space L^2[−a,a]:

    e_k = (1/√(2a)) exp(ikπx/a),  k = ..., −2, −1, 0, 1, 2, ....  (103)

3. In many books and discussions, the interval of interest is [0,1]. Using a change of variable in Eq. (99), one can show that the following sequence of functions

    {e_k} = { 1, √2 cos 2πx, √2 sin 2πx, √2 cos 4πx, √2 sin 4πx, ... }  (104)

forms an orthonormal basis over the space L^2[0,1].
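The orthonormality of the complex exponential set (101), left as an exercise above, can be spot-checked numerically. A sketch in Python with SciPy (an added illustration; the helper names are ours), using the complex-valued inner product ⟨f,g⟩ = ∫_{−π}^{π} f(x) conj(g(x)) dx:

import numpy as np
from scipy.integrate import quad

def e(k):
    return lambda x: np.exp(1j * k * x) / np.sqrt(2 * np.pi)

def inner(f, g):
    # Complex inner product <f, g> on [-pi, pi]; quad is real-valued,
    # so integrate the real and imaginary parts separately.
    h = lambda x: f(x) * np.conj(g(x))
    re = quad(lambda x: h(x).real, -np.pi, np.pi)[0]
    im = quad(lambda x: h(x).imag, -np.pi, np.pi)[0]
    return re + 1j * im

for k in range(-2, 3):
    for l in range(-2, 3):
        val = inner(e(k), e(l))
        assert abs(val - (1.0 if k == l else 0.0)) < 1e-10

print("e_k = exp(ikx)/sqrt(2 pi), k = -2, ..., 2, are orthonormal")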

Lecture 7. Inner product spaces (cont'd)

Convergence of Fourier series expansions

Here we briefly discuss some convergence properties of Fourier series expansions, namely,

1. pointwise convergence: the convergence of a series at a point x,

2. uniform convergence: the convergence of a series on an interval [a,b] in the ‖·‖_∞ norm/metric,

3. convergence in mean: the convergence of a series on an interval [a,b] in the ‖·‖_2 norm/metric.

You may have seen some, perhaps all, of these ideas in AMATH 231 (or equivalent). We shall cover them briefly, and without proof. What will be of greater concern to us in this course is the rate of convergence of a series and its importance in signal/image processing.

Recall that the Fourier series expansion of a function f(x) over the interval [−π,π] has the following form,

    f(x) = a_0 + Σ_{k=1}^∞ [a_k cos kx + b_k sin kx],  x ∈ [−π,π].  (105)

The right-hand side of Eq. (105) is clearly a 2π-periodic function of x. As such, one expects that it will represent a 2π-periodic function. Indeed, as you probably saw in AMATH 231, this is the case. If we consider values of x outside the interval (−π,π), then the series will represent the so-called 2π-extension of f(x): one essentially takes the graph of f(x) on (−π,π) and copies it on each interval (−π + 2kπ, π + 2kπ), k = −1, −2, ... and k = 1, 2, .... There are some potential complications, however, at the "connection points" (2k+1)π, k ∈ Z. It is for this reason that we used the open interval (−π,π) above.

Case 1: f is 2π-periodic. In this case, there are no problems with translating the graph of f(x) on [−π,π], since f(−π) = f(π). Two contiguous graphs will intersect at the connection points. Without loss of generality, let us simply assume that f(x) is continuous on [−π,π]. Then the graph of its 2π-extension is continuous at all x ∈ R. A sketch is given below.

[Figure: the graph of f(x) on [−π,π] together with its copies (extensions) on [−3π,−π] and [π,3π]. Case 1: 2π-extension of a 2π-periodic function f(x).]

Case 2: f is not 2π-periodic. In particular, f(−π) ≠ f(π). Then there is no way that two contiguous graphs of f(x) will intersect at the connection points: there will be discontinuities, as sketched below.

[Figure: the graph of f(x) on [−π,π] and its copies on [−3π,−π] and [π,3π], with jumps at the connection points. Case 2: 2π-extension of a function f(x) that is not 2π-periodic.]

The existence of such discontinuities in the 2π-extension of f will have consequences regarding the convergence of the Fourier series to the function f(x), even on (−π,π). Of course, there may be other discontinuities of f inside the interval (−π,π), which will also affect the convergence.

We now state some convergence results, starting with the weakest result, i.e., the result that has minimal assumptions on f(x).

Convergence Result No. 1: f is square-integrable on [−π,π]. Mathematically, we require that f ∈ L^2[−π,π], that is,

    ∫_{−π}^{π} |f(x)|^2 dx < ∞.  (106)

As we discussed in a previous lecture, the function f(x) doesn't have to be continuous: it can be piecewise continuous, or even worse! For example, it doesn't even have to be bounded.

In the applications examined in this course, however, we shall be dealing with bounded functions. In engineering parlance, if f satisfies the condition in Eq. (106), then it is said to have finite energy.

In this case, the convergence result is as follows: The Fourier series in (105) converges in mean to f. In other words, the Fourier series converges to f in the L^2 norm/metric. By this we mean that the partial sums S_n of the Fourier series converge to f as follows,

    ‖f − S_n‖_2 → 0 as n → ∞.  (107)

Note that this property does not imply that the partial sums S_n converge pointwise, i.e., that f(x) − S_n(x) → 0 as n → ∞ at a given x ∈ [−π,π].

As for a proof of this convergence result: it follows from the Theorem stated at the beginning of this lecture. The sine/cosine basis used in Fourier series is complete in L^2[−π,π].

On the other side of the spectrum, we have the strongest result, i.e., the result that has quite stringent demands on the behaviour of f.

Convergence Result No. 2: f is 2π-periodic and continuous on [−π,π]. In this case, the Fourier series in (105) converges uniformly to f on [−π,π]. This means that the Fourier series converges to f in the ‖·‖_∞ norm/metric: from previous discussions, this implies that the partial sums S_n converge to f as follows,

    ‖f − S_n‖_∞ → 0 as n → ∞.  (108)

This is a very strong result: it implies that the partial sums S_n converge pointwise to f. For all x ∈ [−π,π],

    f(x) − S_n(x) → 0 as n → ∞.  (109)

But the result is actually stronger than this, since the pointwise convergence is uniform over the interval [−π,π], in an "ε-ribbon"-like fashion. This comes from the definition of the ‖·‖_∞ norm. A proof of this result can be found in the book by Boggess and Narcowich; see Theorem 1.3 and its proof.

Example: Consider the function

    f(x) = |x| = { −x, −π < x ≤ 0;  x, 0 < x ≤ π },  (110)

which is continuous on [−π,π]. Moreover, its 2π-extension is also continuous on R, since f(−π) = f(π) = π. Because the function f(x) is even, the expansion is only in terms of the cosine functions. The series has the form (Exercise)

    f(x) = a_0 + Σ_{k=1}^∞ a_k cos kx,  a_0 = π/2,  a_k = { −4/(πk^2), k odd;  0, k even },  k ≥ 1.  (111)

In the figure below is presented a plot of the partial sum S_9(x), which is comprised of only six nonzero coefficients, a_0, a_1, a_3, a_5, a_7, a_9. Despite the fact that we use only six terms of the Fourier series, an excellent approximation to f(x) is achieved over the interval [−π,π]. Using more nonzero coefficients, i.e., higher partial sums, produces an approximation that is virtually indistinguishable from the plot of the function f(x) in the figure!

[Figure: partial sum S_9(x) of the Fourier cosine series expansion (111) of the 2π-periodic function f(x) = |x|; the function f(x) is also plotted.]

Convergence Result No. 2 is applicable in this case, so we may conclude that the Fourier series converges uniformly to f(x) over the entire interval [−π,π]. That being said, we notice that the degree of accuracy achieved at the points x = 0, x = ±π is not the same as at other points, in particular x = ±π/2. Even though uniform convergence is guaranteed, the rate of convergence is seen to be a

little slower at these "kinks." These points actually represent singularities of the function: not points of discontinuity of the function f(x), but of its derivative f'(x). Even such singularities can affect the rate of convergence of a Fourier series expansion. We'll say more about this later.

Uniform convergence implies convergence in mean

We expect that if the stronger convergence result, No. 2, applies to a function f, then the weaker result, No. 1, will also apply to it, i.e., uniform convergence on [−π,π] implies L^2 convergence on [−π,π]. A quick way to see this is that if f ∈ C[a,b], it is bounded on [a,b], implying that it must be in L^2[a,b]. But let's go through the mathematical details, since they are revealing.

Suppose that f ∈ C[a,b]. Its L^2 norm is given by

    ‖f‖_2 = [∫_a^b |f(x)|^2 dx]^{1/2}.  (113)

Since f ∈ C[a,b], it is bounded on [a,b]. Let M denote the value of the infinity norm of f, i.e.,

    M = max_{a ≤ x ≤ b} |f(x)| = ‖f‖_∞.  (114)

Now return to Eq. (113) and note that, from the basic properties of integrals,

    ∫_a^b |f(x)|^2 dx ≤ ∫_a^b M^2 dx = M^2 (b − a).  (115)

Substituting this result into (113), we have

    ‖f‖_2 ≤ M √(b − a) = √(b − a) ‖f‖_∞.  (116)

Now replace f with f − S_n:

    ‖f − S_n‖_2 ≤ √(b − a) ‖f − S_n‖_∞.  (117)

Uniform convergence implies that the RHS goes to zero as n → ∞. This, in turn, implies that the LHS goes to zero as n → ∞, which implies convergence in L^2, proving the desired result.

Convergence Results No. 1 and No. 2 appear to represent opposite sides of the spectrum in terms of the behaviour of f. Result No. 1 assumes that f is square-integrable over the interval, whereas Result No. 2

assumes a good deal more, namely continuity. The following result, where f is assumed to be piecewise continuous, is a kind of intermediate result which is quite applicable in signal and image processing. Recall that f is said to be piecewise continuous on an interval I if it is continuous at all x ∈ I with the exception of a finite number of points in I. In this way, it can have jumps.

Convergence Result No. 3: f is piecewise C^1 on [−π,π]. In this case:

1. The Fourier series converges uniformly to f on any closed interval [a,b] that does not contain a point of discontinuity of f.

2. If p denotes a point of discontinuity of f, then at p the Fourier series converges to the value

    [f(p+) + f(p−)] / 2,  (118)

where

    f(p+) = lim_{h→0+} f(p+h),  f(p−) = lim_{h→0+} f(p−h).  (119)

Note: The piecewise C^1 requirement, as opposed to piecewise C^0, guarantees that the slopes f'(x) of tangents to the curve remain finite as x approaches points of discontinuity, both from the left and from the right. A proof of this convergence result may be found in the book by Boggess and Narcowich, cf. Theorem 1.22 (p. 63) and Theorem 1.28.

Example: Consider the function defined by

    f(x) = { −1, −π < x ≤ 0;  1, 0 < x ≤ π }.  (120)

Because the function f(x) is odd, the expansion is only in terms of the sine functions (as you found in Problem Set No. 1). The series has the form

    f(x) = Σ_{k=1}^∞ b_k sin kx,  b_k = { 4/(kπ), k odd;  0, k even }.  (121)

Clearly, f(x) is discontinuous at x = 0 because of the jump there. But its 2π-extension is also discontinuous at x = ±π. In the figure below is presented a plot of the partial sum S_51(x) of the Fourier series expansion of this function.

[Figure: partial sum S_51(x) of the Fourier sine series expansion (121) of the 2π-periodic piecewise constant function f(x) = −1 for −π < x ≤ 0, 1 for 0 < x ≤ π; the function f(x) is also plotted.]

Clearly, f(x) is continuous at all x ∈ [−π,π] except at x = 0 and x = ±π. In the vicinity of these points, the convergence of the Fourier series appears to be slower: one would need a good number of additional terms in the expansion in order to approximate f(x) near these points to the accuracy demonstrated elsewhere, say near x = ±π/2. According to the first point of Convergence Result No. 3, the Fourier series converges uniformly on any closed interval [a,b] that does not contain the points of discontinuity x = 0, x = π, x = −π. Even though the convergence on such a closed interval is uniform, it may not necessarily be very rapid. Compare this result to that obtained with only six terms of the Fourier series of the continuous function f(x) = |x| shown in the previous figure. We'll return to this point in the next lecture.

Intuitively, one may imagine that it takes a great deal of effort for the series to approximate the value f(x) = −1 for negative values of x near 0, and then to jump up to approximate the value f(x) = 1 for positive values of x near 0. As such, more terms of the expansion are required because of the dramatic jump in the function. We shall return to this point in the next lecture as well.

At each of the three discontinuities in the plot, we see that the second point of Convergence Result No. 3, regarding the behaviour of the Fourier series at a discontinuity, is obeyed. For example, at x = 0, the series converges to zero, because all terms are zero: sin(k · 0) = 0 for all k. And zero is precisely the average value of the left and right limits f(0−) = −1 and f(0+) = 1. The same holds true at x = π and x = −π.
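Both examples are easy to reproduce. The following sketch in Python with NumPy (an added stand-in for the plots above, not part of the original notes) evaluates the partial sum S_9 for f(x) = |x| and S_51 for the square wave, and exhibits the behaviour near the jump that is discussed next:

import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)

# Cosine series of |x|: a_0 = pi/2, a_k = -4/(pi k^2) for odd k.
S9 = np.pi/2 + sum(-4/(np.pi*k**2) * np.cos(k*x) for k in range(1, 10, 2))
print(np.max(np.abs(S9 - np.abs(x))))   # ~0.063, attained at the "kinks"

# Sine series of the square wave: b_k = 4/(k pi) for odd k.
S51 = sum(4/(k*np.pi) * np.sin(k*x) for k in range(1, 52, 2))
sq = np.where(x > 0, 1.0, -1.0)
print(np.max(np.abs(S51 - sq)))   # 1.0 at x = 0: no uniform convergence
print(np.max(S51))                # ~1.18: the overshoot near the jump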

The visible oscillatory behaviour of the partial sum function S_51(x) in the plot is called "Gibbs ringing" or the "Gibbs artifact." For lower partial sums, i.e., S_n(x) for n < 51, the oscillatory nature is even more pronounced. Such "ringing" is a fact of life in image processing, since images generally contain a good number of discontinuities, namely edges. Since image compression methods such as JPEG rely on the truncation of Fourier series, they are generally plagued by ringing artifacts near edges. This is yet another point that will be addressed later in this course.

The moral of the story regarding discontinuities: They affect the rate of convergence of Fourier series

As suggested by the previous example, discontinuities of a function f(x) create problems for its Fourier series expansion by slowing down its rate of convergence. At a jump discontinuity, the convergence may be quite slow, with the partial sums demonstrating Gibbs ringing. Another way to look at this situation is as follows: generally, a higher number of terms in the Fourier series expansion, or higher frequencies, are needed in order to approximate a function f(x) near points of discontinuity.

But, in fact, it doesn't stop there: the existence of points of discontinuity actually affects the rate of convergence in other regions of the interval of expansion. To see this, let's return to the two examples studied above, i.e., the functions

    f_1(x) = |x| = { −x, −π < x ≤ 0;  x, 0 < x ≤ π },  (123)

and

    f_2(x) = { −1, −π < x ≤ 0;  1, 0 < x ≤ π }.  (124)

Note that we have subscripted them for convenience. Recall that the function f_1(x) is continuous on [−π,π] and its 2π-extension is continuous for all x ∈ R. On the other hand, f_2(x) has a discontinuity at x = 0, and its 2π-extension has discontinuities at all points kπ, k ∈ Z. We noticed how well a rather low number of terms (six nonzero coefficients) in the Fourier expansion of f_1(x) approximated it over the interval [−π,π]. On the other hand, we saw how the discontinuities of f_2(x) affected the performance of the Fourier expansion, even for a much larger number of terms (i.e., 51).

This is not so surprising when we examine the decay rates of the Fourier series coefficients for each function:

1. For f_1(x), the coefficients a_k decay as O(1/k^2) as k → ∞.

2. For f_2(x), the coefficients b_k decay as O(1/k) as k → ∞.

The coefficients for f_1 are seen to decay more rapidly than those of f_2. As such, you don't have to go to such high k values (which multiply sine and cosine functions, of maximum absolute value 1) for the coefficients a_k to become negligible to some prescribed accuracy ε. (Of course, there is the infinite tail of the series to worry about, but the above reasoning is still valid.)

The other important point is that the rate of decay of the coefficients affects the convergence over the entire interval, not just around points of discontinuity. This has been viewed as a disadvantage of Fourier series expansions: a "bad" point p, i.e., a point of discontinuity, even near or at the end of an interval, will affect the convergence of a Fourier series over the entire interval, even if the function f(x) is very "nice" on the other side of the interval. We illustrate this situation in the sketch on the left in the figure below.

Researchers in the signal/image processing community recognized this problem years ago and came up with a clever solution: if the convergence of the Fourier series over the entire interval [a,b] is being affected by such a bad point p, why not split the interval into two subintervals, say A = [a,c] and B = [c,b], and perform separate Fourier series expansions over each subinterval? Perhaps in this way, the number of coefficients saved by the "niceness" of f(x) over [a,c] might exceed the number of coefficients needed to accommodate the bad point p. The idea is illustrated in the sketch on the right in the figure below.

The above discussion is, of course, rather simplified, but it does describe the basic idea behind "block coding," i.e., partitioning a signal or image into subblocks and Fourier-coding each subblock, as opposed to coding the entire signal/image. Block coding is the basis of the JPEG compression method for images as well as of the MPEG method for video sequences. More on this later.

Greater degree of smoothness implies faster decay of Fourier series coefficients

The effect of discontinuities on the rate of convergence of Fourier series expansions does not end with the discussion above. Recall that the Fourier series for the continuous function f_1(x) given above

[Figure: left, a function f(x) on [a,b] with a "bad" point of discontinuity p near the right endpoint, approximated by a single Fourier series on [a,b]; right, the interval split at c, with separate Fourier series on [a,c] (the "nice" region of smoothness of f(x)) and on [c,b] (which contains the bad point p).]

demonstrated quite rapid convergence. But it is possible that a series will demonstrate even more rapid convergence, due to the fact that its Fourier series coefficients a_k and b_k decay even more rapidly than 1/k^2. Recall that the function f_1(x) is continuous, but that its derivative f_1'(x) is only piecewise continuous, having discontinuities at x = 0 and x = ±π. Functions with greater degrees of smoothness, i.e., higher-order continuous derivatives, will have Fourier series with more rapid convergence. We simply state the following result without proof:

Theorem: Suppose that f(x) is 2π-periodic and C^n[−π,π] for some n > 0, that is, its nth derivative (and all lower-order derivatives) is continuous. Then the Fourier series coefficients a_k and b_k in Eq. (105) decay as

    a_k, b_k = O(1/k^{n+1}) as k → ∞.

An idea of the proof is as follows. To avoid complications, suppose that f is piecewise continuous, corresponding to n = 0 above; the coefficients must decay at least as quickly as 1/k, since they comprise a square-summable sequence in l^2. Now consider the function

    g(x) = ∫_0^x f(s) ds,  (125)

which is a continuous function of x (Exercise). The Fourier series coefficients of g(x) may be obtained by termwise integration of the coefficients of f(x) (AMATH 231). This implies that the series coefficients of g(x) will decay at least as quickly as 1/k^2. Integrate again, etc.

In other words, the more "regular" or smooth a function f(x) is, the faster the decay of its Fourier series coefficients, implying that you can generally approximate f(x) to a desired accuracy

over the interval with a fewer number of terms in the Fourier series expansion. Conversely, the more "irregular" a function f(x) is, the slower the decay of its FS coefficients, so that you'll need more terms in the FS expansion to approximate it to a desired accuracy. This feature of regularity/approximability is very well known and appreciated in the signal and image processing field. In fact, it is a very important, and still ongoing, field of research in analysis.

The above discussion may seem somewhat "handwavy" and imprecise. Let's look at the problem in a little more detail. And we'll consider the more general case in which a function f(x) is expressed in terms of a set of functions {φ_k(x)} which form a complete and orthonormal basis on an interval [a,b], i.e.,

    f(x) = Σ_{k=1}^∞ c_k φ_k(x),  c_k = ⟨f, φ_k⟩.  (126)

Here, the equation is understood in the L^2 sense, i.e., the sequence of partial sums S_n(x), defined as

    S_n(x) = Σ_{k=1}^n c_k φ_k(x),  (127)

converges to f in the L^2 norm/metric, i.e.,

    ‖f − S_n‖_2 → 0 as n → ∞.  (128)

The expression in the above equation is the magnitude of the error associated with the approximation f(x) ≈ S_n(x), which we shall simply refer to as the error in the approximation. This error may be expressed in terms of the Fourier coefficients c_k. First note that

    f(x) − S_n(x) = Σ_{k=n+1}^∞ c_k φ_k.  (129)

Therefore the L^2-squared error is given by

    ‖f − S_n‖_2^2 = ⟨f − S_n, f − S_n⟩ = ⟨Σ_{k=n+1}^∞ c_k φ_k, Σ_{l=n+1}^∞ c_l φ_l⟩ = Σ_{k=n+1}^∞ |c_k|^2.  (130)

Thus,

    ‖f − S_n‖_2 = [Σ_{k=n+1}^∞ |c_k|^2]^{1/2}.  (131)

Recall that for the above sum of an infinite series to be finite, the coefficients c_k must tend to zero sufficiently rapidly. The above summation of coefficients starting at k = n+1 may be viewed as involving the "tail" of the infinite sequence of coefficients c_k, as sketched schematically below.

[Figure: schematic plot of |c_k|^2 vs. k, with the tail of the infinite sequence beginning at k = n+1.]

For a fixed n > 0, the greater the rate of decay of the coefficients c_k, the smaller the area under the curve that connects the tops of the lines representing the coefficient magnitudes, i.e., the smaller the magnitude of the term on the right of Eq. (131), hence the smaller the error in the approximation. From a signal processing point of view, more of the "signal" is concentrated in the first n coefficients c_k.

From the examples presented earlier, we see that singularities in the function/signal, e.g., discontinuities of the function, will generally reduce the rate of decay of the Fourier coefficients. As such, for a given n, the error of approximation by the partial sum S_n will be larger. This implies that in order to achieve a certain accuracy in our approximation, we shall have to employ more coefficients in our expansion. In the case of the Fourier series, this implies the use of functions sin kx and cos kx with higher k, i.e., higher frequencies.

Unfortunately, such singularities cannot be avoided, especially in the case of images. Images are defined by edges, i.e., sharp changes in greyscale values, which are precisely the points of discontinuity in an image. However, singularities are not the only reason that the rate of decay of Fourier coefficients may be reduced, as we'll see below.
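To quantify how strongly the decay rate matters, the following sketch in Python with NumPy (an added illustration; the helper name is ours) estimates how many coefficients are needed to drive the error (131) below a tolerance, for tail coefficients decaying like the two examples above: |c_k| = 4/(√π k^2) over odd k for f_1, and |c_k| = 4/(√π k) over odd k for f_2.

import numpy as np

def n_needed(p, tol):
    # Smallest odd k after which the coefficient tail, in l^2 norm,
    # falls below tol, for |c_k| = 4/(sqrt(pi) k^p) over odd k.
    ks = np.arange(1, 2_000_001, 2, dtype=float)
    mags2 = (4 / (np.sqrt(np.pi) * ks**p))**2
    tail2 = np.maximum(np.sum(mags2) - np.cumsum(mags2), 0.0)
    return int(ks[np.argmax(np.sqrt(tail2) < tol)])

print(n_needed(2, 0.01))   # ~21: the smooth f_1 needs a handful of terms
print(n_needed(1, 0.01))   # ~25,000: the discontinuous f_2 needs thousands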


More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Advanced Computational Fluid Dynamics AA215A Lecture 2 Approximation Theory. Antony Jameson

Advanced Computational Fluid Dynamics AA215A Lecture 2 Approximation Theory. Antony Jameson Advanced Computational Fluid Dynamics AA5A Lecture Approximation Theory Antony Jameson Winter Quarter, 6, Stanford, CA Last revised on January 7, 6 Contents Approximation Theory. Least Squares Approximation

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Computer Problems for Fourier Series and Transforms

Computer Problems for Fourier Series and Transforms Computer Problems for Fourier Series and Transforms 1. Square waves are frequently used in electronics and signal processing. An example is shown below. 1 π < x < 0 1 0 < x < π y(x) = 1 π < x < 2π... and

More information

Math 489AB A Very Brief Intro to Fourier Series Fall 2008

Math 489AB A Very Brief Intro to Fourier Series Fall 2008 Math 489AB A Very Brief Intro to Fourier Series Fall 8 Contents Fourier Series. The coefficients........................................ Convergence......................................... 4.3 Convergence

More information

Approximation theory

Approximation theory Approximation theory Xiaojing Ye, Math & Stat, Georgia State University Spring 2019 Numerical Analysis II Xiaojing Ye, Math & Stat, Georgia State University 1 1 1.3 6 8.8 2 3.5 7 10.1 Least 3squares 4.2

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 0: Vector spaces 0.1 Basic notation Here are some of the fundamental sets and spaces

More information

A Primer in Econometric Theory

A Primer in Econometric Theory A Primer in Econometric Theory Lecture 1: Vector Spaces John Stachurski Lectures by Akshay Shanker May 5, 2017 1/104 Overview Linear algebra is an important foundation for mathematics and, in particular,

More information

8.5 Taylor Polynomials and Taylor Series

8.5 Taylor Polynomials and Taylor Series 8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:

More information

Practice Exercises on Differential Equations

Practice Exercises on Differential Equations Practice Exercises on Differential Equations What follows are some exerices to help with your studying for the part of the final exam on differential equations. In this regard, keep in mind that the exercises

More information

Separation of Variables in Linear PDE: One-Dimensional Problems

Separation of Variables in Linear PDE: One-Dimensional Problems Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Lecture notes: Introduction to Partial Differential Equations

Lecture notes: Introduction to Partial Differential Equations Lecture notes: Introduction to Partial Differential Equations Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 Classification of Partial Differential

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Orthonormal Systems. Fourier Series

Orthonormal Systems. Fourier Series Yuliya Gorb Orthonormal Systems. Fourier Series October 31 November 3, 2017 Yuliya Gorb Orthonormal Systems (cont.) Let {e α} α A be an orthonormal set of points in an inner product space X. Then {e α}

More information

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright cs416: introduction to scientific computing 01/9/07 Unit : Solving Scalar Equations Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright 1 Introduction We now

More information

Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra) 2002 Further Mathematical Methods (Linear Algebra) Solutions For Problem Sheet 9 In this problem sheet, we derived a new result about orthogonal projections and used them to find least squares approximations

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 Jesús De Loera, UC Davis February 18, 2012 Orthogonal Vectors and Subspaces (3.1). In real life vector spaces come with additional METRIC properties!! We have

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Introduction to Signal Spaces

Introduction to Signal Spaces Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product

More information

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning:

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning: Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u 2 1 + u 2 2 = u 2. Geometric Meaning: u v = u v cos θ. u θ v 1 Reason: The opposite side is given by u v. u v 2 =

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Lecture 34. Fourier Transforms

Lecture 34. Fourier Transforms Lecture 34 Fourier Transforms In this section, we introduce the Fourier transform, a method of analyzing the frequency content of functions that are no longer τ-periodic, but which are defined over the

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

14 Fourier analysis. Read: Boas Ch. 7.

14 Fourier analysis. Read: Boas Ch. 7. 14 Fourier analysis Read: Boas Ch. 7. 14.1 Function spaces A function can be thought of as an element of a kind of vector space. After all, a function f(x) is merely a set of numbers, one for each point

More information

Math 115 ( ) Yum-Tong Siu 1. Derivation of the Poisson Kernel by Fourier Series and Convolution

Math 115 ( ) Yum-Tong Siu 1. Derivation of the Poisson Kernel by Fourier Series and Convolution Math 5 (006-007 Yum-Tong Siu. Derivation of the Poisson Kernel by Fourier Series and Convolution We are going to give a second derivation of the Poisson kernel by using Fourier series and convolution.

More information

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,

More information

96 CHAPTER 4. HILBERT SPACES. Spaces of square integrable functions. Take a Cauchy sequence f n in L 2 so that. f n f m 1 (b a) f n f m 2.

96 CHAPTER 4. HILBERT SPACES. Spaces of square integrable functions. Take a Cauchy sequence f n in L 2 so that. f n f m 1 (b a) f n f m 2. 96 CHAPTER 4. HILBERT SPACES 4.2 Hilbert Spaces Hilbert Space. An inner product space is called a Hilbert space if it is complete as a normed space. Examples. Spaces of sequences The space l 2 of square

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

Lecture 1. 1, 0 x < 1 2 1, 2. x < 1, 0, elsewhere. 1

Lecture 1. 1, 0 x < 1 2 1, 2. x < 1, 0, elsewhere. 1 0 - - -3 Lecture Introductory mathematical ideas The term wavelet, meaning literally little wave, originated in the early 980s in its French version ondelette in the work of Morlet and some French seismologists.

More information

7. Dimension and Structure.

7. Dimension and Structure. 7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Midterm 1 Review. Distance = (x 1 x 0 ) 2 + (y 1 y 0 ) 2.

Midterm 1 Review. Distance = (x 1 x 0 ) 2 + (y 1 y 0 ) 2. Midterm 1 Review Comments about the midterm The midterm will consist of five questions and will test on material from the first seven lectures the material given below. No calculus either single variable

More information

Differentiation. f(x + h) f(x) Lh = L.

Differentiation. f(x + h) f(x) Lh = L. Analysis in R n Math 204, Section 30 Winter Quarter 2008 Paul Sally, e-mail: sally@math.uchicago.edu John Boller, e-mail: boller@math.uchicago.edu website: http://www.math.uchicago.edu/ boller/m203 Differentiation

More information

Lagrange Multipliers

Lagrange Multipliers Optimization with Constraints As long as algebra and geometry have been separated, their progress have been slow and their uses limited; but when these two sciences have been united, they have lent each

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

Reproducing Kernel Hilbert Spaces

Reproducing Kernel Hilbert Spaces 9.520: Statistical Learning Theory and Applications February 10th, 2010 Reproducing Kernel Hilbert Spaces Lecturer: Lorenzo Rosasco Scribe: Greg Durrett 1 Introduction In the previous two lectures, we

More information

Topic Subtopics Essential Knowledge (EK)

Topic Subtopics Essential Knowledge (EK) Unit/ Unit 1 Limits [BEAN] 1.1 Limits Graphically Define a limit (y value a function approaches) One sided limits. Easy if it s continuous. Tricky if there s a discontinuity. EK 1.1A1: Given a function,

More information

A glimpse of Fourier analysis

A glimpse of Fourier analysis Chapter 7 A glimpse of Fourier analysis 7.1 Fourier series In the middle of the 18th century, mathematicians and physicists started to study the motion of a vibrating string (think of the strings of a

More information

Linear algebra and differential equations (Math 54): Lecture 10

Linear algebra and differential equations (Math 54): Lecture 10 Linear algebra and differential equations (Math 54): Lecture 10 Vivek Shende February 24, 2016 Hello and welcome to class! As you may have observed, your usual professor isn t here today. He ll be back

More information

1.10 Continuity Brian E. Veitch

1.10 Continuity Brian E. Veitch 1.10 Continuity Definition 1.5. A function is continuous at x = a if 1. f(a) exists 2. lim x a f(x) exists 3. lim x a f(x) = f(a) If any of these conditions fail, f is discontinuous. Note: From algebra

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Chapter 3a Topics in differentiation. Problems in differentiation. Problems in differentiation. LC Abueg: mathematical economics

Chapter 3a Topics in differentiation. Problems in differentiation. Problems in differentiation. LC Abueg: mathematical economics Chapter 3a Topics in differentiation Lectures in Mathematical Economics L Cagandahan Abueg De La Salle University School of Economics Problems in differentiation Problems in differentiation Problem 1.

More information

Fourier Series. 1. Review of Linear Algebra

Fourier Series. 1. Review of Linear Algebra Fourier Series In this section we give a short introduction to Fourier Analysis. If you are interested in Fourier analysis and would like to know more detail, I highly recommend the following book: Fourier

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

Linear Models Review

Linear Models Review Linear Models Review Vectors in IR n will be written as ordered n-tuples which are understood to be column vectors, or n 1 matrices. A vector variable will be indicted with bold face, and the prime sign

More information

Appendix E : Note on regular curves in Euclidean spaces

Appendix E : Note on regular curves in Euclidean spaces Appendix E : Note on regular curves in Euclidean spaces In Section III.5 of the course notes we posed the following question: Suppose that U is a connected open subset of R n and x, y U. Is there a continuous

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Sequence convergence, the weak T-axioms, and first countability

Sequence convergence, the weak T-axioms, and first countability Sequence convergence, the weak T-axioms, and first countability 1 Motivation Up to now we have been mentioning the notion of sequence convergence without actually defining it. So in this section we will

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Lecture 27. Wavelets and multiresolution analysis (cont d) Analysis and synthesis algorithms for wavelet expansions

Lecture 27. Wavelets and multiresolution analysis (cont d) Analysis and synthesis algorithms for wavelet expansions Lecture 7 Wavelets and multiresolution analysis (cont d) Analysis and synthesis algorithms for wavelet expansions We now return to the general case of square-integrable functions supported on the entire

More information

Vector Calculus. Lecture Notes

Vector Calculus. Lecture Notes Vector Calculus Lecture Notes Adolfo J. Rumbos c Draft date November 23, 211 2 Contents 1 Motivation for the course 5 2 Euclidean Space 7 2.1 Definition of n Dimensional Euclidean Space........... 7 2.2

More information

February 13, Option 9 Overview. Mind Map

February 13, Option 9 Overview. Mind Map Option 9 Overview Mind Map Return tests - will discuss Wed..1.1 J.1: #1def,2,3,6,7 (Sequences) 1. Develop and understand basic ideas about sequences. J.2: #1,3,4,6 (Monotonic convergence) A quick review:

More information

PART II : Least-Squares Approximation

PART II : Least-Squares Approximation PART II : Least-Squares Approximation Basic theory Let U be an inner product space. Let V be a subspace of U. For any g U, we look for a least-squares approximation of g in the subspace V min f V f g 2,

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

1 Question related to polynomials

1 Question related to polynomials 07-08 MATH00J Lecture 6: Taylor Series Charles Li Warning: Skip the material involving the estimation of error term Reference: APEX Calculus This lecture introduced Taylor Polynomial and Taylor Series

More information

Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra) 2002 Further Mathematical Methods (Linear Algebra) 22 Solutions For Problem Sheet 3 In this Problem Sheet, we looked at some problems on real inner product spaces. In particular, we saw that many different

More information

Fourier and Partial Differential Equations

Fourier and Partial Differential Equations Chapter 5 Fourier and Partial Differential Equations 5.1 Fourier MATH 294 SPRING 1982 FINAL # 5 5.1.1 Consider the function 2x, 0 x 1. a) Sketch the odd extension of this function on 1 x 1. b) Expand the

More information

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0.

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0. Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that (1) () () (4) x 1 + x y = x 1 y + x y y x = x y x αy = α x y x x > 0 for x 0 Consequently, (5) (6)

More information

Functional Analysis HW #5

Functional Analysis HW #5 Functional Analysis HW #5 Sangchul Lee October 29, 2015 Contents 1 Solutions........................................ 1 1 Solutions Exercise 3.4. Show that C([0, 1]) is not a Hilbert space, that is, there

More information

Ma 530 Power Series II

Ma 530 Power Series II Ma 530 Power Series II Please note that there is material on power series at Visual Calculus. Some of this material was used as part of the presentation of the topics that follow. Operations on Power Series

More information

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19 Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each

More information

MATH 250 TOPIC 13 INTEGRATION. 13B. Constant, Sum, and Difference Rules

MATH 250 TOPIC 13 INTEGRATION. 13B. Constant, Sum, and Difference Rules Math 5 Integration Topic 3 Page MATH 5 TOPIC 3 INTEGRATION 3A. Integration of Common Functions Practice Problems 3B. Constant, Sum, and Difference Rules Practice Problems 3C. Substitution Practice Problems

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information