MATH 220: INNER PRODUCT SPACES, SYMMETRIC OPERATORS, ORTHOGONALITY


When discussing separation of variables, we noted that at the last step we need to express the inhomogeneous initial or boundary data as a superposition of functions arising in the process of separation of variables. For instance, for the Dirichlet problem $\Delta u = 0$, $u|_{\partial D} = h$, we had to express $h$ as

(1)  $h(\theta) = A_0 + \sum_{n=1}^\infty R^n \big(A_n \cos(n\theta) + B_n \sin(n\theta)\big)$,

i.e. we had to find constants $A_n$ and $B_n$ so that this expression holds. The basic framework for this is inner product spaces, which we now discuss.

Definition 1. An inner product on a complex vector space $V$ is a map

$\langle \cdot, \cdot \rangle : V \times V \to \mathbb{C}$

such that

(i) $\langle \cdot, \cdot \rangle$ is linear in the first slot:

$\langle c_1 v_1 + c_2 v_2, w \rangle = c_1 \langle v_1, w \rangle + c_2 \langle v_2, w \rangle$,  $c_1, c_2 \in \mathbb{C}$, $v_1, v_2, w \in V$,

(ii) $\langle \cdot, \cdot \rangle$ is Hermitian symmetric: $\langle v, w \rangle = \overline{\langle w, v \rangle}$, with the bar denoting the complex conjugate,

(iii) $\langle \cdot, \cdot \rangle$ is positive definite: $v \in V$ implies $\langle v, v \rangle \geq 0$, and $\langle v, v \rangle = 0$ if and only if $v = 0$.

A vector space with an inner product is also called an inner product space. While one should write $(V, \langle \cdot, \cdot \rangle)$ to specify the inner product space, one typically says merely that $V$ is an inner product space when the inner product is understood. For real vector spaces, one makes essentially the same definition, except that, as the complex conjugate does not make sense, one simply has symmetry:

$V$ real vector space $\Rightarrow$ $\langle v, w \rangle = \langle w, v \rangle$, $v, w \in V$.

We also introduce the notation for the norm associated to this inner product:

$\|v\| = \langle v, v \rangle^{1/2}$,

where the square root is the unique non-negative square root of a non-negative number (see (iii)). Thus, $\langle v, v \rangle = \|v\|^2$.

We note some examples:

(i) $V = \mathbb{R}^n$, with inner product

$\langle x, y \rangle = \sum_{j=1}^n x_j y_j$,

where $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$. Thus $\|x\|^2 = \sum_{j=1}^n x_j^2$.

(ii) $V = \mathbb{C}^n$, with inner product

$\langle x, y \rangle = \sum_{j=1}^n x_j \overline{y_j}$,

where $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$. Thus $\|x\|^2 = \sum_{j=1}^n |x_j|^2$, which explains why we need Hermitian symmetry for complex vector spaces.

(iii) $V = \mathbb{R}^n$, with inner product

$\langle x, y \rangle = \sum_{j=1}^n a_j x_j y_j$,

where $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$, $a_j > 0$ for all $j$. Thus $\|x\|^2 = \sum_{j=1}^n a_j x_j^2$.

(iv) $V = C^0(\bar\Omega)$ (complex valued continuous functions on the closure of $\Omega$), where $\Omega$ is a bounded domain in $\mathbb{R}^n$, with inner product

$\langle f, g \rangle = \int_\Omega f(x)\, \overline{g(x)}\, dx$.

Thus, $\|f\|^2 = \int_\Omega |f(x)|^2\, dx$. We often write $\|f\|_{L^2} = \|f\|_{L^2(\Omega)}$ for this norm.

(v) $V = C^0(\bar\Omega)$, as above, with inner product

$\langle f, g \rangle = \int_\Omega f(x)\, \overline{g(x)}\, a(x)\, dx$,

where $a \in C^0(\bar\Omega)$, $a > 0$, is fixed. Thus, $\|f\|^2 = \int_\Omega |f(x)|^2 a(x)\, dx$. We may write $\|f\|_{L^2(\Omega, a(x)\,dx)}$ for this norm.

(vi) Let $N > n/2$, and let $V = \{f \in C^0(\mathbb{R}^n) : (1 + |x|)^N f$ is bounded$\}$, with the inner product

$\langle f, g \rangle = \int_{\mathbb{R}^n} f(x)\, \overline{g(x)}\, dx$.

Note that the integral converges under this decay assumption. Thus, $\|f\|^2 = \int_{\mathbb{R}^n} |f(x)|^2\, dx$. We often write $\|f\|_{L^2} = \|f\|_{L^2(\mathbb{R}^n)}$ for this norm.

A few properties of inner products should be observed immediately. First, the inner product is conjugate-linear in the second slot (which simply means linear if the vector space is real):

$\langle v, c_1 w_1 + c_2 w_2 \rangle = \overline{\langle c_1 w_1 + c_2 w_2, v \rangle} = \overline{c_1 \langle w_1, v \rangle + c_2 \langle w_2, v \rangle} = \bar{c}_1 \overline{\langle w_1, v \rangle} + \bar{c}_2 \overline{\langle w_2, v \rangle} = \bar{c}_1 \langle v, w_1 \rangle + \bar{c}_2 \langle v, w_2 \rangle$.

A map $V \times V \to \mathbb{C}$ which is linear in the first variable and conjugate-linear in the second variable is called sesquilinear. Second, by (ii), if $v \in V$ then $\langle v, v \rangle = \overline{\langle v, v \rangle}$, so $\langle v, v \rangle$ is real. Thus, (iii) is the statement that this real number is non-negative, and it is actually positive if $v \neq 0$. Also, by linearity, denoting the zero vector in $V$ by $0_V$ (usually we simply denote this by $0$, but here this care clarifies the calculation),

$\langle 0_V, v \rangle = \langle 0 \cdot 0_V, v \rangle = 0\, \langle 0_V, v \rangle = 0$, $v \in V$,

and by Hermitian symmetry then

$\langle v, 0_V \rangle = \overline{\langle 0_V, v \rangle} = 0$.

We also note a useful property of $\|\cdot\|$:

$\|cv\|^2 = \langle cv, cv \rangle = c \langle v, cv \rangle = c \bar{c} \langle v, v \rangle = |c|^2 \|v\|^2$, $c \in \mathbb{C}$, $v \in V$,

so

$\|cv\| = |c|\, \|v\|$.

This property of $\|\cdot\|$ is called absolute homogeneity (of degree 1).

One concept that is tremendously useful in inner product spaces is orthogonality:

Definition 2. Suppose $V$ is an inner product space. For $v, w \in V$ we say that $v$ is orthogonal to $w$ if $\langle v, w \rangle = 0$.

Note that $\langle v, w \rangle = 0$ if and only if $\langle w, v \rangle = 0$, so $v$ is orthogonal to $w$ if and only if $w$ is orthogonal to $v$, and so we often say simply that $v$ and $w$ are orthogonal.

As an illustration of its use, let's prove Pythagoras' theorem:

Lemma 0.1. Suppose $V$ is an inner product space, $v, w \in V$, and $v$ and $w$ are orthogonal. Then

$\|v + w\|^2 = \|v\|^2 + \|w\|^2 = \|v - w\|^2$.

Proof. Since $v - w = v + (-w)$, the statement about $v - w$ follows from the statement for $v + w$ and $\|-w\| = \|w\|$. Now,

$\langle v+w, v+w \rangle = \langle v, v+w \rangle + \langle w, v+w \rangle = \langle v, v \rangle + \langle v, w \rangle + \langle w, v \rangle + \langle w, w \rangle = \langle v, v \rangle + \langle w, w \rangle$

by the orthogonality of $v$ and $w$, proving the result.

One use of orthogonality is the following:

Lemma 0.2. Suppose $v, w \in V$, $w \neq 0$. Then there exist unique $v^\parallel, v^\perp \in V$ such that $v = v^\parallel + v^\perp$, $v^\parallel = cw$ for some $c \in \mathbb{C}$, and $\langle v^\perp, w \rangle = 0$.

Proof. If $v = v^\parallel + v^\perp$, then taking the inner product with $w$ and using $v^\parallel = cw$ we deduce

$\langle v, w \rangle = \langle cw, w \rangle + \langle v^\perp, w \rangle = c \|w\|^2$,

so as $w \neq 0$,

$c = \frac{\langle v, w \rangle}{\|w\|^2}$.

Thus, $v^\parallel = cw$ and $v^\perp = v - cw$, giving uniqueness. On the other hand, if we let

$c = \frac{\langle v, w \rangle}{\|w\|^2}$, $v^\parallel = cw$, $v^\perp = v - cw$,

then $v^\parallel + v^\perp = v$ and $v^\parallel = cw$ are satisfied, so we merely need to check $\langle v^\perp, w \rangle = 0$. But

$\langle v^\perp, w \rangle = \langle v, w \rangle - c \langle w, w \rangle = \langle v, w \rangle - \frac{\langle v, w \rangle}{\|w\|^2}\, \|w\|^2 = 0$,

so the desired vectors $v^\parallel$ and $v^\perp$ indeed exist.

One calls $v^\parallel$ the orthogonal projection of $v$ to the span of $w$.

The inner product can be used to define a notion of convergence:

Definition 3. We say that a sequence $v_j$ converges to $v$ in $V$ if $\|v_j - v\| \to 0$ as $j \to \infty$.
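The decomposition in Lemma 0.2 is easy to experiment with numerically. The following sketch (the function names are ours, not from the notes) computes the orthogonal projection in $\mathbb{R}^n$ with the standard inner product, and checks the conclusion of Pythagoras' theorem on the resulting pieces:

```python
# Illustrative sketch (not from the notes): Lemma 0.2 in R^n with the
# standard inner product <x, y> = sum_j x_j y_j.

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def project(v, w):
    """Split v = v_par + v_perp with v_par = c*w and <v_perp, w> = 0,
    where c = <v, w> / ||w||^2 as in the proof of Lemma 0.2."""
    c = inner(v, w) / inner(w, w)
    v_par = tuple(c * wi for wi in w)
    v_perp = tuple(vi - pi for vi, pi in zip(v, v_par))
    return v_par, v_perp

v, w = (3.0, 4.0), (1.0, 1.0)
v_par, v_perp = project(v, w)

# v_perp is orthogonal to w
assert abs(inner(v_perp, w)) < 1e-12
# Pythagoras: ||v||^2 = ||v_par||^2 + ||v_perp||^2
assert abs(inner(v, v) - inner(v_par, v_par) - inner(v_perp, v_perp)) < 1e-12
```

Here $c = 7/2$, so $v^\parallel = (3.5, 3.5)$ and $v^\perp = (-0.5, 0.5)$, and indeed $25 = 24.5 + 0.5$.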

In order to make this a useful tool, we need to be able to estimate the inner product using the norm. This is achieved by the Cauchy-Schwarz inequality, which we already mentioned when discussing energy estimates:

Lemma 0.3. In an inner product space $V$,

$|\langle v, w \rangle| \leq \|v\|\, \|w\|$, $v, w \in V$.

Proof. If $w = 0$, then both sides vanish, so we may assume $w \neq 0$. Write $v = v^\parallel + v^\perp$ as in Lemma 0.2, so

$v^\parallel = cw$, $c = \frac{\langle v, w \rangle}{\|w\|^2}$.

Then by Pythagoras' theorem, using $\langle v^\perp, v^\parallel \rangle = \bar{c}\, \langle v^\perp, w \rangle = 0$,

$\|v\|^2 = \|v^\parallel\|^2 + \|v^\perp\|^2 \geq \|v^\parallel\|^2 = |c|^2 \|w\|^2 = \frac{|\langle v, w \rangle|^2}{\|w\|^2}$.

Multiplying through by $\|w\|^2$ and taking the non-negative square root completes the proof of the lemma.

A useful consequence of the Cauchy-Schwarz inequality is the triangle inequality for the norm:

Lemma 0.4. In an inner product space $V$, $\|v + w\| \leq \|v\| + \|w\|$.

Proof. One only needs to prove the equivalent estimate where one takes the square of both sides:

$\|v + w\|^2 \leq \|v\|^2 + 2\|v\|\,\|w\| + \|w\|^2$.

But

$\|v+w\|^2 = \langle v+w, v+w \rangle = \|v\|^2 + \langle v, w \rangle + \langle w, v \rangle + \|w\|^2 \leq \|v\|^2 + 2\|v\|\,\|w\| + \|w\|^2$,

where the last inequality is the Cauchy-Schwarz inequality.

Recall that in general, if one has a vector space $V$, one defines the notion of a norm on it as follows:

Definition 4. Suppose $V$ is a vector space. A norm on $V$ is a map $\|\cdot\| : V \to \mathbb{R}$ such that

(i) (positive definiteness) $\|v\| \geq 0$ for all $v \in V$, and $\|v\| = 0$ if and only if $v = 0$,

(ii) (absolute homogeneity) $\|cv\| = |c|\, \|v\|$, $v \in V$, and $c$ a scalar (so $c \in \mathbb{R}$ or $c \in \mathbb{C}$, depending on whether $V$ is real or complex),

(iii) (triangle inequality) $\|v + w\| \leq \|v\| + \|w\|$ for all $v, w \in V$.

Thus, Lemma 0.4 shows that the map $\|\cdot\| : V \to \mathbb{R}$ we defined on an inner product space is indeed a norm in this sense, i.e. our use of the word norm was justified.

Some examples of a normed vector space where the norm does not come from an inner product are:

(i) $V = C^0(\bar\Omega)$ (complex valued continuous functions on the closure of $\Omega$), where $\Omega$ is a bounded domain in $\mathbb{R}^n$, and

$\|f\|_{C^0} = \sup_{x \in \bar\Omega} |f(x)|$;

(ii) $V = C^0(\bar\Omega)$ (complex valued continuous functions on the closure of $\Omega$), where $\Omega$ is a bounded domain in $\mathbb{R}^n$, with norm

$\|f\|_{L^1} = \int_\Omega |f(x)|\, dx$.

Note that in a normed vector space, with convergence defined as for inner product spaces, the vector space operations are (jointly) continuous:

Lemma 0.5. If $V$ is a normed vector space, $v_j \to v$, $w_j \to w$ in $V$ and $c_j \to c$ in the scalars, then $c_j v_j \to cv$ and $v_j + w_j \to v + w$ in $V$.

Proof. Consider the sequence $\{v_j + w_j\}_{j=1}^\infty$ first. Then

$\|(v_j + w_j) - (v + w)\| = \|(v_j - v) + (w_j - w)\| \leq \|v_j - v\| + \|w_j - w\| \to 0$

as $v_j \to v$, $w_j \to w$. Similarly,

$\|c_j v_j - cv\| = \|(c_j v_j - c_j v) + (c_j v - cv)\| \leq \|c_j v_j - c_j v\| + \|c_j v - cv\| = \|c_j (v_j - v)\| + \|(c_j - c) v\| = |c_j|\, \|v_j - v\| + |c_j - c|\, \|v\| \to 0$,

since convergent sequences in $\mathbb{R}$ or $\mathbb{C}$ are bounded.

We used in the proof that convergent sequences in the scalars are bounded; this is also true for convergent sequences in a normed vector space $V$. Namely, if $v_j \to v$ in $V$, then for $j$ sufficiently large, $\|v_j - v\| \leq 1$, hence

$\|v_j\| = \|(v_j - v) + v\| \leq \|v_j - v\| + \|v\| \leq \|v\| + 1$.

Thus, for all but finitely many $j$, $\|v_j\| \leq \|v\| + 1$, hence the sequence $\{\|v_j\|\}_{j=1}^\infty$ is bounded.

We can now show that the inner product is jointly continuous with respect to the notion of convergence we discussed:

Lemma 0.6. Suppose that $V$ is an inner product space. If $v_j, w_j \in V$, and $v_j \to v$, $w_j \to w$ in $V$, then $\langle v_j, w_j \rangle \to \langle v, w \rangle$.

Proof. We have

$\langle v_j, w_j \rangle - \langle v, w \rangle = \langle v_j, w_j \rangle - \langle v_j, w \rangle + \langle v_j, w \rangle - \langle v, w \rangle = \langle v_j, w_j - w \rangle + \langle v_j - v, w \rangle$.

Thus,

$|\langle v_j, w_j \rangle - \langle v, w \rangle| \leq |\langle v_j, w_j - w \rangle| + |\langle v_j - v, w \rangle| \leq \|v_j\|\, \|w_j - w\| + \|v_j - v\|\, \|w\| \to 0$,

since $\{\|v_j\|\}_{j=1}^\infty$ is bounded.

Suppose now that we have a sequence of orthogonal non-zero vectors $\{x_j\}_{j=1}^\infty$ in an inner product space $V$, and suppose that

$v = \sum_{j=1}^\infty c_j x_j$,

i.e. this sum converges to $v$, which, recall, means that with $v_N = \sum_{j=1}^N c_j x_j$, $v_N \to v$ as $N \to \infty$. Then, by the continuity of the inner product, for any $w$,

$\langle v, w \rangle = \sum_{j=1}^\infty c_j \langle x_j, w \rangle$.

Applying this with $w = x_k$, all inner products on the right hand side but the one with $j = k$ vanish, and we deduce that

$\langle v, x_k \rangle = c_k \|x_k\|^2$,

hence

(2)  $c_k = \frac{\langle v, x_k \rangle}{\|x_k\|^2}$.

As these orthogonal collections of vectors are so useful, we make a definition.

Definition 5. An orthogonal set $S$ of vectors in an inner product space $V$ is a subset of $V$ consisting of non-zero vectors such that if $x, x' \in S$ and $x \neq x'$, then $x$ is orthogonal to $x'$.

Now, one could check easily by an explicit computation that the functions we have considered, in terms of which we would like to express our initial or boundary data, are orthogonal to each other. Thus, if we can write our data as an infinite linear combination of these functions, then the coefficients can be determined easily. For instance, if we want to write

$h(\theta) = a_0 + \sum_{n=1}^\infty \big(a_n \cos(n\theta) + b_n \sin(n\theta)\big)$,

where we consider the inner product on functions on $S^1$ to be given by

$\langle f, g \rangle = \int_0^{2\pi} f(\theta)\, \overline{g(\theta)}\, d\theta$,

then we must have

$a_0 = \frac{\langle h, 1 \rangle}{\|1\|^2} = \frac{\int_0^{2\pi} h(\theta)\, d\theta}{\int_0^{2\pi} 1\, d\theta} = \frac{1}{2\pi} \int_0^{2\pi} h(\theta)\, d\theta$,

$a_n = \frac{\int_0^{2\pi} h(\theta) \cos(n\theta)\, d\theta}{\int_0^{2\pi} \cos^2(n\theta)\, d\theta} = \frac{1}{\pi} \int_0^{2\pi} h(\theta) \cos(n\theta)\, d\theta$,

$b_n = \frac{\int_0^{2\pi} h(\theta) \sin(n\theta)\, d\theta}{\int_0^{2\pi} \sin^2(n\theta)\, d\theta} = \frac{1}{\pi} \int_0^{2\pi} h(\theta) \sin(n\theta)\, d\theta$.

Here we used the computation $\cos(2n\theta) = 2\cos^2(n\theta) - 1$, so $\cos^2(n\theta) = \frac{1}{2}(\cos(2n\theta) + 1)$, and thus

$\int_0^{2\pi} \cos^2(n\theta)\, d\theta = \frac{1}{2} \int_0^{2\pi} (\cos(2n\theta) + 1)\, d\theta = \frac{1}{4n} \sin(2n\theta)\Big|_0^{2\pi} + \frac{1}{2}\,\theta\Big|_0^{2\pi} = \pi$,

with a similar result with cosine replaced by sine. In particular, returning to (1), we deduce that

$A_0 = \frac{1}{2\pi} \int_0^{2\pi} h(\theta)\, d\theta$,

$A_n = \frac{1}{\pi R^n} \int_0^{2\pi} h(\theta) \cos(n\theta)\, d\theta$,

$B_n = \frac{1}{\pi R^n} \int_0^{2\pi} h(\theta) \sin(n\theta)\, d\theta$.

We of course need to discuss whether we can indeed write $h$ in this form, but before this we should give a conceptual reason why the functions we considered are automatically orthogonal to each other. For this purpose we need to consider symmetric operators on $V$. Indeed, the operators $A$ we want to consider are not defined on all of $V$ (or if we define $A$ on $V$, it will not map $V$ to itself), so we have to enlarge our framework.
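The coefficient formulas for $a_0$, $a_n$, $b_n$ can be checked numerically for a concrete $h$. This is an illustrative sketch with our own choice of test function and quadrature rule (none of the names below come from the notes); taking $h(\theta) = 3 + 2\cos\theta - \sin 2\theta$, the formulas should recover $a_0 = 3$, $a_1 = 2$, $b_2 = -1$, all other coefficients $0$:

```python
import math

def quad(f, a, b, n=20000):
    """Composite midpoint rule; accurate enough here for smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def fourier_coeffs(h, nmax):
    """a_0 = (1/2pi) int h; a_n = (1/pi) int h cos(nt); b_n = (1/pi) int h sin(nt)."""
    two_pi = 2 * math.pi
    a0 = quad(h, 0, two_pi) / two_pi
    a = [quad(lambda t, n=n: h(t) * math.cos(n * t), 0, two_pi) / math.pi
         for n in range(1, nmax + 1)]
    b = [quad(lambda t, n=n: h(t) * math.sin(n * t), 0, two_pi) / math.pi
         for n in range(1, nmax + 1)]
    return a0, a, b

h = lambda t: 3 + 2 * math.cos(t) - math.sin(2 * t)
a0, a, b = fourier_coeffs(h, 2)
assert abs(a0 - 3) < 1e-6 and abs(a[0] - 2) < 1e-6 and abs(b[1] + 1) < 1e-6
```

The same quadrature also verifies directly that the trigonometric functions involved are pairwise orthogonal for the inner product above.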

So we consider operators defined on a linear subspace $D$ of $V$. In order to make this definition well-behaved, we need to assume that $D$ is dense in $V$, i.e. if $v \in V$ then there exists a sequence $v_j \in D$ such that $v_j \to v$.

Definition 6. A linear operator $A : D \to V$ is called symmetric if

$\langle Av, w \rangle = \langle v, Aw \rangle$

for all $v, w \in D$.

Recall that an eigenvector of $A$ is a non-zero element $v$ of $D$ such that $Av = \lambda v$ for some $\lambda \in \mathbb{C}$; $\lambda$ is then an eigenvalue of $A$.

Lemma 0.7. Suppose $A : D \to V$ is symmetric. Then all eigenvalues of $A$ are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.

Proof. Suppose that $\lambda$ is an eigenvalue of $A$, so for some $v \in D$, $v \neq 0$, $Av = \lambda v$. Then

$\lambda \|v\|^2 = \langle \lambda v, v \rangle = \langle Av, v \rangle = \langle v, Av \rangle = \langle v, \lambda v \rangle = \bar\lambda \|v\|^2$,

so dividing through by $\|v\|^2$ we deduce that $\lambda = \bar\lambda$, so $\lambda$ is real.

Now suppose that $\lambda$ and $\mu$ are eigenvalues of $A$ (so they are both real), and $Av = \lambda v$, $Aw = \mu w$. Then

$\lambda \langle v, w \rangle = \langle \lambda v, w \rangle = \langle Av, w \rangle = \langle v, Aw \rangle = \langle v, \mu w \rangle = \mu \langle v, w \rangle$.

If $\lambda \neq \mu$, then rearranging and dividing by $\lambda - \mu$ gives $\langle v, w \rangle = 0$, as claimed.

Now, the operator $A = \frac{d^2}{d\theta^2}$, defined for functions $f \in C^2(S^1)$ (which, recall, can be identified with $2\pi$-periodic functions on $\mathbb{R}$), is symmetric, since

$\langle Af, g \rangle = \int_0^{2\pi} f''(\theta)\, \overline{g(\theta)}\, d\theta = f'(\theta)\overline{g(\theta)}\Big|_0^{2\pi} - \int_0^{2\pi} f'(\theta)\, \overline{g'(\theta)}\, d\theta = f'(\theta)\overline{g(\theta)}\Big|_0^{2\pi} - f(\theta)\overline{g'(\theta)}\Big|_0^{2\pi} + \int_0^{2\pi} f(\theta)\, \overline{g''(\theta)}\, d\theta = \langle f, Ag \rangle$,

where the boundary terms vanish due to periodicity. A similar computation for the operator $A = \frac{d^2}{dx^2}$ defined for functions $f \in C^2([0, l])$ gives

$\langle Af, g \rangle = \int_0^l f''(x)\, \overline{g(x)}\, dx = f'(x)\overline{g(x)}\Big|_0^l - \int_0^l f'(x)\, \overline{g'(x)}\, dx = \big(f'(x)\overline{g(x)} - f(x)\overline{g'(x)}\big)\Big|_0^l + \int_0^l f(x)\, \overline{g''(x)}\, dx = \big(f'(x)\overline{g(x)} - f(x)\overline{g'(x)}\big)\Big|_0^l + \langle f, Ag \rangle.$

Thus, the operator $A$ is symmetric provided we restrict it to a domain $D$ so that the boundary terms vanish.
For instance,

(i) $A = \frac{d^2}{dx^2}$ defined for functions $f \in C^2([0, l])$ such that $f(0) = 0 = f(l)$ (Dirichlet boundary condition) is symmetric,

(ii) $A = \frac{d^2}{dx^2}$ defined for functions $f \in C^2([0, l])$ such that $f'(0) = 0 = f'(l)$ (Neumann boundary condition) is symmetric,

(iii) $A = \frac{d^2}{dx^2}$ defined for functions $f \in C^2([0, l])$ such that $f'(0) - a f(0) = 0$ and $f'(l) - b f(l) = 0$ for some given $a, b \in \mathbb{R}$ (Robin boundary condition) is symmetric.
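A discrete analogue may make the role of the boundary condition concrete: replacing $\frac{d^2}{dx^2}$ on $[0, l]$ by the three-point difference quotient at interior grid points, with the Dirichlet condition $f = 0$ imposed at the endpoints, yields a symmetric matrix, mirroring the vanishing of the boundary terms above. This finite-difference sketch is our own illustration, not part of the notes:

```python
def second_difference_matrix(m):
    """m x m matrix of the stencil f_{j-1} - 2 f_j + f_{j+1} at interior
    grid points; the Dirichlet condition f = 0 at both endpoints means the
    stencil simply drops the out-of-range neighbor in the first/last row."""
    A = [[0.0] * m for _ in range(m)]
    for j in range(m):
        A[j][j] = -2.0
        if j > 0:
            A[j][j - 1] = 1.0
        if j < m - 1:
            A[j][j + 1] = 1.0
    return A

A = second_difference_matrix(6)
# Symmetry of the matrix gives <Av, w> = <v, Aw> for the dot product on R^6.
assert all(A[i][j] == A[j][i] for i in range(6) for j in range(6))
```

By Lemma 0.7, the eigenvectors of this matrix for distinct eigenvalues are then automatically orthogonal, just as the continuum eigenfunctions below are.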

As another example, the operator $A = -i\frac{d}{dx}$, defined on $C^1$ functions which are $2l$-periodic, mapping into $C^0([0, 2l])$ with the standard inner product, is symmetric, since

$\langle Af, g \rangle = \int_0^{2l} -i f'(x)\, \overline{g(x)}\, dx = -i f(x)\overline{g(x)}\Big|_0^{2l} + i \int_0^{2l} f(x)\, \overline{g'(x)}\, dx = \langle f, Ag \rangle,$

where the boundary term vanishes by periodicity. As a consequence, we deduce:

Corollary 0.8. The following sets of functions are all orthogonal to each other in the respective vector spaces.

(i) In $V = C^0([0, l])$, with the standard inner product,

$f_n(x) = \sin\left(\frac{n\pi x}{l}\right)$, $n \geq 1$, $n \in \mathbb{Z}$.

(ii) In $V = C^0([0, l])$, with the standard inner product,

$f_0(x) = 1$, $f_n(x) = \cos\left(\frac{n\pi x}{l}\right)$, $n \geq 1$, $n \in \mathbb{Z}$.

(iii) In $V = C^0([0, 2l])$, with the standard inner product,

$f_n(x) = e^{in\pi x/l}$, $n \in \mathbb{Z}$.

(iv) In $V = C^0([0, 2l])$, with the standard inner product,

$f_0(x) = 1$, $f_n(x) = \cos\left(\frac{n\pi x}{l}\right)$, $n \geq 1$, $n \in \mathbb{Z}$, $g_n(x) = \sin\left(\frac{n\pi x}{l}\right)$, $n \geq 1$, $n \in \mathbb{Z}$.

Proof. These are all consequences of being eigenfunctions of symmetric operators, namely:

(i) $A = \frac{d^2}{dx^2}$, with Dirichlet boundary condition; the eigenvalue of $f_n$ is $\lambda_n = -\frac{n^2\pi^2}{l^2}$,

(ii) $A = \frac{d^2}{dx^2}$, with Neumann boundary condition; the eigenvalue of $f_n$ is $\lambda_n = -\frac{n^2\pi^2}{l^2}$,

(iii) $A = -i\frac{d}{dx}$, on $2l$-periodic $C^1$ functions; the eigenvalue of $f_n$ is $\lambda_n = \frac{n\pi}{l}$,

(iv) $A = \frac{d^2}{dx^2}$, on $2l$-periodic $C^2$ functions; the eigenvalue of $f_n$ is $\lambda_n = -\frac{n^2\pi^2}{l^2}$, which is also the eigenvalue of $g_n$.

Thus, in all cases but the last, the functions are eigenfunctions of a symmetric operator with distinct eigenvalues, hence are orthogonal. In the last case, we have all of the claimed orthogonality by the same argument, except the orthogonality of $f_n$ to $g_n$. This can be seen easily, however, as the cosines are even around $x = l$, while the sines are odd around $x = l$, so the product is odd around $x = l$, hence its integral over the interval $[0, 2l]$, which is symmetric around $l$, vanishes.

We also have the following general symmetry result for the Laplacian:

Proposition 0.9. Let $\Omega$ be a bounded domain with smooth boundary, $V = C^0(\bar\Omega)$.
The Laplacian $\Delta$, defined on any one of the following domains:

(i) (Dirichlet) $D = \{f \in C^2(\bar\Omega) : f|_{\partial\Omega} = 0\}$,

(ii) (Neumann) $D = \{f \in C^2(\bar\Omega) : \frac{\partial f}{\partial n}\big|_{\partial\Omega} = 0\}$,

(iii) (Robin) $D = \{f \in C^2(\bar\Omega) : \big(\frac{\partial f}{\partial n} - a f\big)\big|_{\partial\Omega} = 0\}$,

is symmetric. Here, in case (iii), $a$ is a given continuous function on $\partial\Omega$.

Proof. We recall Green's identity,

$\int_\Omega \big(f\, \Delta g - (\Delta f)\, g\big)\, dx = \int_{\partial\Omega} \left(f\, \frac{\partial g}{\partial n} - \frac{\partial f}{\partial n}\, g\right) dS(x)$.

Replacing $g$ by $\bar g$, we have

$\int_\Omega \big(f\, \Delta \bar g - (\Delta f)\, \bar g\big)\, dx = \int_{\partial\Omega} \left(f\, \frac{\partial \bar g}{\partial n} - \frac{\partial f}{\partial n}\, \bar g\right) dS(x)$.

Under each of the conditions listed above, the right hand side vanishes. Thus,

$\int_\Omega f\, \Delta\bar g\, dx = \int_\Omega (\Delta f)\, \bar g\, dx$,

so $\Delta$ is symmetric.

We recall how Green's identity is proved: consider the vector field $f \nabla g$. By the divergence theorem,

$\int_\Omega \mathrm{div}(f \nabla g)\, dx = \int_{\partial\Omega} f\, \frac{\partial g}{\partial n}\, dS(x)$,

and similarly

$\int_\Omega \mathrm{div}(g \nabla f)\, dx = \int_{\partial\Omega} g\, \frac{\partial f}{\partial n}\, dS(x)$.

Subtracting these two yields

$\int_{\partial\Omega} \left(f\, \frac{\partial g}{\partial n} - \frac{\partial f}{\partial n}\, g\right) dS(x) = \int_\Omega \big(\mathrm{div}(f \nabla g) - \mathrm{div}(g \nabla f)\big)\, dx = \int_\Omega \big(f\, \Delta g + \nabla f \cdot \nabla g - g\, \Delta f - \nabla g \cdot \nabla f\big)\, dx = \int_\Omega \big(f\, \Delta g - (\Delta f)\, g\big)\, dx$,

as claimed.

We still need to discuss whether any $v$ in $V$ can be written as a stated linear combination. This is in general not the case: for instance, if $V = \mathbb{R}^2$, and $x_1 = (1, 0)$, then $\{x_1\}$ is an orthogonal set of non-zero vectors, but not every element of $V$ can be written as a linear combination of $\{x_1\}$ (i.e. is not, in general, a multiple of $x_1$): we need another vector, such as $x_2 = (0, 1)$. We thus make the following definition.

Definition 7. Suppose $V$ is an inner product space. We say that an orthogonal set $\{x_n\}_{n=1}^\infty$ is complete if for every $v \in V$ there exist scalars $c_n$ such that $v = \sum_{n=1}^\infty c_n x_n$.

Note that, as discussed before, if the $c_n$ exist, they are automatically unique: they are determined by (2). In fact, we can always define a series by the formula (2): given $v \in V$, let

(3)  $v_n = \sum_{k=1}^n c_k x_k$,  $c_k = \frac{\langle v, x_k \rangle}{\|x_k\|^2}$;

the question is whether $v_n \to v$. The coefficients $c_k$ in (3) may be called the generalized Fourier coefficients of $v$. Note that $v_n$ has the useful property that $v - v_n$ is orthogonal to $x_j$ for $1 \leq j \leq n$. Indeed,

$\langle v - v_n, x_j \rangle = \langle v, x_j \rangle - \sum_{k=1}^n c_k \langle x_k, x_j \rangle = c_j \|x_j\|^2 - c_j \|x_j\|^2 = 0$,  $j \leq n$,

hence $v - v_n$ is also orthogonal to any vector of the form $\sum_{k=1}^n b_k x_k$, i.e. to the linear span of $\{x_1, \ldots, x_n\}$, in particular to $v_n$ itself. Thus, by Pythagoras' theorem, we have

$\|v\|^2 = \|v - v_n\|^2 + \|v_n\|^2 \geq \|v_n\|^2 = \sum_{j,k=1}^n c_j \bar{c}_k \langle x_j, x_k \rangle = \sum_{k=1}^n |c_k|^2 \|x_k\|^2$.

This is Bessel's inequality. Thus, we deduce that for any $v \in V$, the generalized Fourier coefficients satisfy

$\sum_{k=1}^n |c_k|^2 \|x_k\|^2 \leq \|v\|^2$,

hence, as these partial sums form a bounded monotone increasing (which, recall, means non-decreasing) sequence, $\sum_{k=1}^\infty |c_k|^2 \|x_k\|^2$ converges, and as $\|v\|^2$ is an upper bound for the partial sums, the limit is $\leq \|v\|^2$. If this limit is $\|v\|^2$, then we deduce that $\|v - v_n\| \to 0$ as $n \to \infty$, so with the converse being clear we conclude the following:

Lemma 0.10. Suppose $V$ is an inner product space. An orthogonal set $\{x_n\}_{n=1}^\infty$ is complete if and only if for every $v \in V$ the generalized Fourier coefficients satisfy

$\sum_{k=1}^\infty |c_k|^2 \|x_k\|^2 = \|v\|^2$.

This may be called Bessel's equality.

Now, let's compute $\|v - \sum_{k=1}^n a_k x_k\|^2$, where $a_k \in \mathbb{C}$. This is best done by writing $v = v^\perp + v_n$, where $v_n = \sum_{k=1}^n c_k x_k$, the $c_k$ being the generalized Fourier coefficients, for then $v^\perp = v - v_n$ is orthogonal to all of $x_1, \ldots, x_n$, and hence

$\left\|v - \sum_{k=1}^n a_k x_k\right\|^2 = \left\|v^\perp + \sum_{k=1}^n (c_k - a_k) x_k\right\|^2 = \|v^\perp\|^2 + \sum_{k=1}^n |c_k - a_k|^2 \|x_k\|^2$.

In particular,

$\left\|v - \sum_{k=1}^n a_k x_k\right\|^2 \geq \|v^\perp\|^2$,

and equality holds if and only if $a_k = c_k$ for $k = 1, \ldots, n$. This is the least squares approximation:

Proposition 0.11. Given $v \in V$, and an orthogonal set $\{x_1, \ldots, x_n\}$, the choice of $a_k$ that minimizes the error $\|v - \sum_{k=1}^n a_k x_k\|^2$ of the approximation of $v$ by $\sum_{k=1}^n a_k x_k$ is given by the generalized Fourier coefficients, $c_k$.

Thus, even if the generalized Fourier series $\sum_k c_k x_k$ does not converge to the function, its partial sums give the best possible approximations of $v$ in the precise sense of this Proposition.

We now actually turn to completeness statements. We prove in the next lecture that all of the orthogonal sets listed in Corollary 0.8 are complete. Moreover, for the Laplacian we have the following result, whose proof is beyond our tools in this class.
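Bessel's inequality and the least squares property can be seen in a small example. In $\mathbb{R}^3$ with the standard inner product, take the (incomplete) orthogonal set $\{(1,0,0), (0,2,0)\}$; the following sketch, our own illustration rather than anything from the notes, computes the generalized Fourier coefficients via formula (2) and checks that perturbing them only increases the error:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

v = (2.0, -1.0, 5.0)
xs = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]      # orthogonal, but not complete
c = [inner(v, x) / inner(x, x) for x in xs]   # formula (2): c = [2.0, -0.5]

def sq_error(coeffs):
    """|| v - sum_k coeffs[k] * x_k ||^2"""
    approx = [sum(ck * xk[i] for ck, xk in zip(coeffs, xs)) for i in range(3)]
    diff = [vi - ai for vi, ai in zip(v, approx)]
    return inner(diff, diff)

# Bessel's inequality: sum |c_k|^2 ||x_k||^2 <= ||v||^2  (here 5 <= 30)
assert sum(ck**2 * inner(xk, xk) for ck, xk in zip(c, xs)) <= inner(v, v)

# Least squares (Proposition 0.11): any other coefficients do worse
best = sq_error(c)
assert sq_error([c[0] + 0.3, c[1]]) > best
assert sq_error([c[0], c[1] - 0.2]) > best
```

The minimal error here is $\|v^\perp\|^2 = 25$, the squared norm of the component $(0, 0, 5)$ outside the span, which no choice of coefficients can remove.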

11 Theorem.12. Suppose is a bounded domain in R n with smooth boundary. For the Laplacian, defined on the subspace D of C () given by (i), (ii) or (iii) of Proposition.9, the eigenspace for each eigenvalue λ R is finite dimensional, the eigenvalues can be arranged in an increasing sequence, tending to infinity, λ 1 λ 2..., and choosing an orthogonal basis x k,j, j = 1,..., N k, of the λ j -eigenspace, the orthogonal set {x k,j : k Z, k 1, j = 1,..., N k } is complete. The analogue of this theorem for variable coefficient symmetric elliptic second order differential operators with appropriate boundary conditions is also valid. There is one more issue that should be observed. For the generalized Fourier series of v as in (3), and for n < m, m (4) v n v m 2 = c k 2 x k 2. k=n+1 Since c k 2 x k 2 converges, the differences of the partial sums, on the right hand side of (4), go to as n, m. Thus, by (4), the same holds for the differences of the partial sums of c kx k. In an ideal world, this ought to imply that the series c kx k converges. However, this need not be a case. Definition 8. Suppose that V is a normed vector space. We say that a sequence {v n } n=1 is a Cauchy sequence if v n v m as n, m. Explicitly, this means that given any ɛ > there is N > such that for n, m N, v n v m < ɛ, i.e. beyond a certain point, all elements of the sequence are within ɛ of each other. We say that a normed vector space V is complete if every Cauchy sequence converges. Thus, we see that for any inner product space and any orthogonal set inside it, the generalized Fourier series for any v V is Cauchy. If V is complete, it will thus converge to some v V (not necessarily to v though: recall the R 2 example). Now, an example of a complete normed vector space is, for as above, V = C () with the C -norm: f C = sup f(x). x Unfortunately, an example of an incomplete normed vector space is V = C () with the L 2 -norm, i.e. our inner product space. 
It turns out that every incomplete normed vector space $V$ can be completed, i.e. there is a complete normed vector space $\hat V$ and an inclusion map $\iota : V \to \hat V$ which is linear, such that for $v \in V$, $\|\iota(v)\|_{\hat V} = \|v\|_V$, and such that for any $v' \in \hat V$, there is a sequence $\{v_n\}_{n=1}^\infty$ in $V$ such that $\iota(v_n) \to v'$ in $\hat V$ (i.e. the image of $V$ under $\iota$ is dense in $\hat V$). This completion is (essentially) unique. Note that if $\{v_n\}_{n=1}^\infty$ is Cauchy in $V$, then $\{\iota(v_n)\}_{n=1}^\infty$ is Cauchy in $\hat V$, so it converges. Moreover, $\iota$ is one-to-one, so one can simply think of elements of $V$ as elements of $\hat V$. It is also useful to note that if $V$ is an inner product space, then so is $\hat V$.

The actual construction of the completion is very similar to how the real numbers are constructed from the rationals. There are Cauchy sequences of rationals which do not converge inside $\mathbb{Q}$: for instance, take truncated decimal expansions of $\sqrt{2}$. One constructs $\mathbb{R}$ by considering the set of such Cauchy sequences, more precisely equivalence classes of Cauchy sequences: two Cauchy sequences $\{v_n\}_{n=1}^\infty$ and $\{v_n'\}_{n=1}^\infty$

are equivalent if $\|v_n - v_n'\| \to 0$ as $n \to \infty$. For instance, in the case of $\mathbb{R}$, one wants the decimal and binary expansions of $\sqrt{2}$ to be equivalent, i.e. give the same element of $\mathbb{R}$, namely $\sqrt{2}$, which is exactly what is accomplished.

We thus make a bold definition. Suppose that $\Omega$ is a bounded domain in $\mathbb{R}^n$. We let $L^2(\Omega)$ be the completion $\hat V$ of $V = C^0(\bar\Omega)$ in the $L^2(\Omega)$-norm. Thus, $L^2(\Omega)$ is a complete inner product space, and $C^0(\bar\Omega)$ can be regarded as a subset of $L^2(\Omega)$. Similarly, we let $L^1(\Omega)$ be the completion $\hat V$ of $V = C^0(\bar\Omega)$ in the $L^1(\Omega)$-norm, which is thus a complete normed vector space. We also let $L^2(\mathbb{R}^n)$ be the completion $\hat V$ of

$V_N = \{f \in C^0(\mathbb{R}^n) : (1 + |x|)^N f$ is bounded$\}$,

with the inner product

$\langle f, g \rangle = \int_{\mathbb{R}^n} f(x)\, \overline{g(x)}\, dx$,

where we fixed some $N > n/2$. It is straightforward to show that the completion is independent of the choice of such an $N$, for if $n/2 < N' < N$, then any element of $V_{N'}$ can be approximated by elements of $V_N$ in the $L^2(\mathbb{R}^n)$-norm.

While this definition of $L^2(\Omega)$ may sound radical indeed, elements of $L^2(\Omega)$ are actually distributions, i.e. very familiar objects. First, note that if $\phi \in C_c^\infty(\Omega)$, $f \in L^2(\Omega)$, we can define

$\iota_f(\phi) = \langle f, \bar\phi \rangle$.

By Cauchy-Schwarz, we have

$|\iota_f(\phi)| \leq \|f\|_{L^2}\, \|\bar\phi\|_{L^2} = \|f\|_{L^2} \left(\int_\Omega |\phi|^2\, dx\right)^{1/2} \leq \|f\|_{L^2} \left(\int_\Omega 1\, dx\right)^{1/2} \sup_x |\phi(x)|$,

so $\iota_f$ is indeed a distribution, i.e. it is continuous $C_c^\infty(\Omega) \to \mathbb{C}$ (linearity being clear). If $f \in C^0(\bar\Omega)$, this agrees with the usual definition, since then $\langle f, \bar\phi \rangle = \int_\Omega f\phi\, dx$. Thus, $\iota : L^2(\Omega) \to \mathcal{D}'(\Omega)$.

In order to be able to consider $L^2(\Omega)$ a subset of $\mathcal{D}'(\Omega)$, we also need that $\iota$ is injective. But this is easy to see: first, for $f \in L^2(\Omega)$ there is a sequence $\phi_n \in C_c^\infty(\Omega)$ such that $\phi_n \to \bar f$ in $L^2(\Omega)$. Indeed, the existence of such a sequence with $\phi_n$ replaced by $f_n \in C^0(\bar\Omega)$ follows from the definition of $L^2(\Omega)$ as a completion, so one merely needs to check that the $f_n$ can be approximated within, say, $1/n$, in $L^2$-norm, by some $\phi_n$, but this is straightforward, and now all integrals are simply Riemann integrals.
Now, for this sequence $\phi_n$, by the continuity of the inner product,

$\langle f, \bar\phi_n \rangle \to \langle f, f \rangle = \|f\|_{L^2}^2$.

Now, if $f \neq 0$, then the right hand side is non-zero, so in particular for sufficiently large $n$,

$\iota_f(\phi_n) = \langle f, \bar\phi_n \rangle \neq 0$,

showing that $\iota_f$ is not the zero distribution, so $\iota$ is injective. Hence we can indeed simply think of $L^2(\Omega)$ as a subspace of distributions.

We also remark that if $V$ is an inner product space, and $\{x_n\}_{n=1}^\infty$ is a complete orthogonal set in $V$, then it is also complete in the completion $\hat V$ of $V$: this follows immediately since for $v' \in \hat V$ we can take a sequence $v_n \in V$ such that $v_n \to v'$, and use that the generalized Fourier series of each $v_n$ converges to $v_n$, as well as the fact that the generalized Fourier series of $v'$ converges to some $v'' \in \hat V$, to show that $v'' = v'$.