Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

David Glickenstein

December 7, 2015

1 Inner product spaces

In this chapter, we will only consider the fields $\mathbb{R}$ and $\mathbb{C}$.

Definition 1. Let $V$ be a vector space over $F = \mathbb{R}$ or $\mathbb{C}$. An inner product on $V$ is a function $V \times V \to F$, denoted $(x, y) \mapsto \langle x, y \rangle$, such that the following hold for all $x, y, z \in V$ and $c \in F$:

1. $\langle x + z, y \rangle = \langle x, y \rangle + \langle z, y \rangle$.

2. $\langle cx, y \rangle = c \langle x, y \rangle$.

3. $\langle y, x \rangle = \overline{\langle x, y \rangle}$, where the bar denotes complex conjugation.

4. $\langle x, x \rangle > 0$ if $x \neq \vec{0}$.

Here are some observations. The first two conditions say that the inner product is linear in the first component. The third condition is called being conjugate symmetric (or symmetric if $F = \mathbb{R}$). Linearity in the first component together with conjugate symmetry implies conjugate linearity in the second component (see Theorem 8); when $F = \mathbb{R}$ the form is linear in both components, and being linear in both components is called being bilinear. Notice that conjugate symmetry implies that $\langle x, x \rangle \in \mathbb{R}$ even if $F = \mathbb{C}$, since $\langle x, x \rangle = \overline{\langle x, x \rangle}$. The second condition also implies that $\langle \vec{0}, \vec{0} \rangle = 0$, which together with the fourth condition is called being positive definite. From all of this, we could have just specified that $\langle \cdot, \cdot \rangle$ is linear in the first component, conjugate symmetric, and positive definite.
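The axioms in Definition 1 can be sanity-checked numerically. The sketch below (assuming NumPy is available; the helper name `inner` and the test vectors are mine, not from the text) implements the standard inner product on $\mathbb{C}^n$ with the text's convention of linearity in the first slot, and spot-checks all four axioms.

```python
import numpy as np

def inner(x, y):
    # Text's convention: <x, y> = sum_i x_i * conj(y_i),
    # linear in the first slot, conjugate linear in the second.
    # (Note: np.vdot conjugates its *first* argument instead.)
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x, y, z = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
c = 2.0 - 1.5j

assert np.isclose(inner(x + z, y), inner(x, y) + inner(z, y))  # additivity
assert np.isclose(inner(c * x, y), c * inner(x, y))            # homogeneity
assert np.isclose(inner(y, x), np.conj(inner(x, y)))           # conjugate symmetry
assert np.isclose(inner(x, x).imag, 0) and inner(x, x).real > 0  # positivity
```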

We often write $\|v\|^2$ to represent $\langle v, v \rangle$. Since $\|v\|^2 \geq 0$, it has a unique nonnegative square root that we call $\|v\|$.

Example 2. The first example is the dot product on $\mathbb{R}^n$: if $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$, then
$$\langle x, y \rangle = x \cdot y = \sum_{i=1}^{n} x_i y_i.$$

Example 3. The standard inner product on $\mathbb{C}^n$ is
$$\langle x, y \rangle = \sum_{i=1}^{n} x_i \overline{y_i}.$$
Check the properties.

Example 4. Given any inner product $\langle \cdot, \cdot \rangle$, we can multiply it by a positive real number $r > 0$ to get another inner product $\langle x, y \rangle' = r \langle x, y \rangle$. Note that it would not be an inner product if $r \leq 0$ or if $r$ were not a real number.

Example 5. For continuous, real-valued functions on $[0, 1]$, there is an inner product
$$\langle f, g \rangle = \int_0^1 f(t)\, g(t)\, dt.$$
Note that it is important that the functions be continuous to ensure that $\langle f, f \rangle = \int_0^1 f(t)^2\, dt > 0$ if $f \neq \vec{0}$.

Example 6. Consider the vector space $\mathbb{C}^{n \times n}$. We define the conjugate transpose, or adjoint, $A^*$ of a matrix $A$ by specifying the entries:
$$(A^*)_{ij} = \overline{A_{ji}}.$$
We can now define an inner product by
$$\langle A, B \rangle = \operatorname{tr}(B^* A).$$
Let's check it is an inner product. First,
$$\langle A + A', B \rangle = \operatorname{tr}(B^*(A + A')) = \operatorname{tr}(B^* A + B^* A') = \operatorname{tr}(B^* A) + \operatorname{tr}(B^* A') = \langle A, B \rangle + \langle A', B \rangle.$$
Also,
$$\overline{\langle A, B \rangle} = \overline{\operatorname{tr}(B^* A)} = \operatorname{tr}((B^* A)^*) = \operatorname{tr}(A^* B) = \langle B, A \rangle.$$
Also,
$$\langle A, A \rangle = \operatorname{tr}(A^* A) = \sum_i (A^* A)_{ii} = \sum_i \sum_j (A^*)_{ij} A_{ji} = \sum_{j,i} \overline{A_{ji}}\, A_{ji} = \sum_{j,i} |A_{ji}|^2,$$
and so this is nonnegative and equals zero if and only if $A = 0$.
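As a quick numerical illustration of Example 6 (a sketch assuming NumPy; the helper name `frob` is mine), the trace form $\operatorname{tr}(B^*A)$ is conjugate symmetric, and $\langle A, A \rangle$ equals the sum of squared moduli of the entries.

```python
import numpy as np

def frob(A, B):
    # <A, B> = tr(B* A), with B* the conjugate transpose of B.
    return np.trace(B.conj().T @ A)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Conjugate symmetry: <B, A> = conj(<A, B>).
assert np.isclose(frob(B, A), np.conj(frob(A, B)))
# <A, A> = sum of |A_ij|^2, hence real and nonnegative.
assert np.isclose(frob(A, A), np.sum(np.abs(A) ** 2))
```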

Definition 7. A vector space $V$ together with an inner product is called an inner product space. If the field is $\mathbb{C}$, we call it a complex inner product space, and if the field is $\mathbb{R}$, we call it a real inner product space.

The following are basic properties of inner product spaces.

Theorem 8. Let $V$ be an inner product space. Then for $x, y, z \in V$ and $c \in F$, the following statements are true:

1. $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$.

2. $\langle x, cy \rangle = \bar{c} \langle x, y \rangle$.

3. $\langle x, \vec{0} \rangle = \langle \vec{0}, x \rangle = 0$.

4. $\langle x, x \rangle = 0$ if and only if $x = \vec{0}$.

5. If $\langle x, y \rangle = \langle x, z \rangle$ for all $x \in V$, then $y = z$.

Proof. Most of these follow pretty easily. The last one can be shown as follows. Suppose $\langle x, y \rangle = \langle x, z \rangle$ for all $x \in V$. Then $\langle x, y - z \rangle = 0$ for all $x$. In particular, taking $x = y - z$, we get that $\langle y - z, y - z \rangle = 0$. But that implies $y - z = \vec{0}$.

Note that the first two statements in the above theorem are called being conjugate linear in the second component.

Recall the definition of the length, or norm:
$$\|x\| = \sqrt{\langle x, x \rangle}.$$
This generalizes the Euclidean norm $\|(x_1, \ldots, x_n)\| = \sqrt{x_1^2 + \cdots + x_n^2}$.

Many properties are still true for inner product spaces:

Theorem 9. Let $V$ be an inner product space over $F = \mathbb{R}$ or $\mathbb{C}$. Then for $x, y \in V$ and $c \in F$, the following are true:

1. $\|cx\| = |c| \|x\|$.

2. $\|x\| = 0$ if and only if $x = \vec{0}$. In general, $\|x\| \geq 0$.

3. (Cauchy-Schwarz inequality) $|\langle x, y \rangle| \leq \|x\| \|y\|$.

4. (Triangle inequality) $\|x + y\| \leq \|x\| + \|y\|$.

Proof. We will just prove the last two. If $y = \vec{0}$, the Cauchy-Schwarz inequality is immediate, so assume $y \neq \vec{0}$. For any $c \in F$, consider
$$0 \leq \|x - cy\|^2 = \langle x - cy, x - cy \rangle = \|x\|^2 - \langle cy, x \rangle - \langle x, cy \rangle + \|cy\|^2 = \|x\|^2 - 2 \operatorname{Re}\left(\bar{c} \langle x, y \rangle\right) + |c|^2 \|y\|^2.$$
Now take $c = \frac{\langle x, y \rangle}{\|y\|^2}$ and notice that
$$0 \leq \|x\|^2 - 2 \frac{|\langle x, y \rangle|^2}{\|y\|^2} + \frac{|\langle x, y \rangle|^2}{\|y\|^2} = \|x\|^2 - \frac{|\langle x, y \rangle|^2}{\|y\|^2},$$
which implies the Cauchy-Schwarz inequality. We can now show
$$\|x + y\|^2 = \langle x + y, x + y \rangle = \|x\|^2 + 2 \operatorname{Re} \langle x, y \rangle + \|y\|^2 \leq \|x\|^2 + 2 \|x\| \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2,$$
since $2 \operatorname{Re} \langle x, y \rangle \leq 2 |\langle x, y \rangle| \leq 2 \|x\| \|y\|$ by the Cauchy-Schwarz inequality.
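The proof of Theorem 9 is easy to spot-check numerically (a sketch assuming NumPy; the `inner` helper repeats the standard inner product on $\mathbb{C}^n$): with the optimal $c = \langle x, y \rangle / \|y\|^2$ from the proof, the residual $x - cy$ is orthogonal to $y$, and both inequalities hold.

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n, linear in the first slot.
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(2)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)
norm = lambda v: np.sqrt(inner(v, v).real)

c = inner(x, y) / inner(y, y)              # the choice of c in the proof
assert np.isclose(inner(x - c * y, y), 0)  # x - cy is orthogonal to y
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12  # Cauchy-Schwarz
assert norm(x + y) <= norm(x) + norm(y) + 1e-12       # triangle inequality
```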

Definition 10. Let $V$ be an inner product space. Vectors $x$ and $y$ in $V$ are orthogonal (perpendicular) if $\langle x, y \rangle = 0$. A subset $S \subseteq V$ is orthogonal if any two distinct vectors in $S$ are orthogonal. A vector $x$ in $V$ is a unit vector if $\|x\| = 1$. A subset $S \subseteq V$ is orthonormal if $S$ is orthogonal and consists entirely of unit vectors.

Note that $S = \{x_1, \ldots, x_k\}$ is orthonormal if and only if $\langle x_i, x_j \rangle = \delta_{ij}$. Also note that we can make an orthonormal set from an orthogonal set of nonzero vectors by replacing each vector $x$ by $\frac{1}{\|x\|} x$. This will not change the orthogonality, since
$$\left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle = \frac{1}{\|x\| \|y\|} \langle x, y \rangle$$
because $\|y\| \in \mathbb{R}$. We call this process normalizing the set.

Proposition 11. If $V$ is an inner product space and $S \subseteq V$ is an orthogonal set of nonzero vectors, then $S$ is linearly independent.

Proof. Suppose $a_1 x_1 + \cdots + a_k x_k = \vec{0}$ for scalars $a_1, \ldots, a_k$ and vectors $x_1, \ldots, x_k$ in $S$. Then we see that
$$0 = \langle a_1 x_1 + \cdots + a_k x_k, x_i \rangle = a_i \|x_i\|^2,$$
and since $\|x_i\|^2 \neq 0$, we must have $a_i = 0$. This can be done for all $i$.

2 Problems

FIS Section 6.1 exercises 2, 3, 8, 10, 11, 16, 17, 19, 20, 22

3 Orthonormal bases

Definition 12. Let $V$ be an inner product space. A subset of $V$ is an orthonormal basis for $V$ if it is an ordered basis that is orthonormal.

The standard basis is orthonormal for the usual inner product. So is any rotation of the standard basis! Orthonormal bases make it easier to compute coefficients than in an arbitrary basis.
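Proposition 11 and the normalization process can be illustrated concretely. In this sketch (assuming NumPy; the vectors are my own), the Gram matrix $G_{ij} = \langle x_i, x_j \rangle$ of an orthogonal set of nonzero vectors is diagonal with nonzero determinant, and becomes the identity after normalizing.

```python
import numpy as np

# An orthogonal (not yet orthonormal) set in R^3, one vector per row.
S = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])

gram = S @ S.T                        # Gram matrix G_ij = <x_i, x_j>
assert np.allclose(gram, np.diag(np.diag(gram)))  # orthogonal: G is diagonal
assert np.linalg.det(gram) != 0       # nonzero vectors => linearly independent

U = S / np.linalg.norm(S, axis=1, keepdims=True)  # normalize each row
assert np.allclose(U @ U.T, np.eye(3))            # now <x_i, x_j> = delta_ij
```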

Theorem 13. Let $V$ be an inner product space and let $S = \{v_1, v_2, \ldots, v_k\}$ be an orthogonal subset of $V$ consisting of nonzero vectors. If $y \in \operatorname{span} S$, then
$$y = \sum_{i=1}^{k} \frac{\langle y, v_i \rangle}{\langle v_i, v_i \rangle} v_i.$$

Proof. Since $y \in \operatorname{span} S$, there exist scalars $a_1, \ldots, a_k$ such that $y = \sum a_i v_i$. We can now take the inner product with $v_j$ for $j = 1, \ldots, k$ and find that
$$\langle y, v_j \rangle = \left\langle \sum_i a_i v_i, v_j \right\rangle = \sum_i a_i \langle v_i, v_j \rangle = a_j \langle v_j, v_j \rangle,$$
and so (since $\|v_j\| \neq 0$) $a_j = \frac{\langle y, v_j \rangle}{\langle v_j, v_j \rangle}$. Hence
$$y = \sum_{i=1}^{k} \frac{\langle y, v_i \rangle}{\langle v_i, v_i \rangle} v_i.$$

Note that if $S$ is orthonormal, then the denominators are all $1$.

We can always take a linearly independent set and use it to find an orthogonal set with the same span. We do this by considering orthogonal projections of vectors onto a subspace.

Theorem 14. Let $W$ be a finite dimensional subspace of the inner product space $V$. Then for a vector $y \in V$, there is a unique vector $u \in W$ that minimizes $\|y - w\|^2$ over all $w \in W$.

Proof. Suppose there is a $u \in W$ such that $\langle w, y - u \rangle = 0$ for all $w \in W$. Then for any $w \in W$ (so that $u - w \in W$ as well),
$$\|y - w\|^2 = \|(u - w) + (y - u)\|^2 = \langle (u - w) + (y - u), (u - w) + (y - u) \rangle$$
$$= \|u - w\|^2 + \langle u - w, y - u \rangle + \langle y - u, u - w \rangle + \|y - u\|^2 = \|u - w\|^2 + \|y - u\|^2 \geq \|y - u\|^2,$$
with equality if and only if $w = u$. We can produce such a $u$ when $W$ is finite dimensional using Theorem 18 below; the numerical sketch that follows illustrates both theorems.
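Theorems 13 and 14 combine into a concrete recipe: expanding with the coefficients $\langle y, v_i \rangle / \langle v_i, v_i \rangle$ produces the closest point of $W$ to $y$. A sketch assuming NumPy (the basis vectors and target are my own):

```python
import numpy as np

rng = np.random.default_rng(7)
# Orthogonal basis of a plane W inside R^3 (Theorem 13's hypotheses).
v1, v2 = np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])
y = rng.normal(size=3)

# u = sum <y, v_i>/<v_i, v_i> v_i is the orthogonal projection onto W.
u = (np.dot(y, v1) / np.dot(v1, v1)) * v1 + (np.dot(y, v2) / np.dot(v2, v2)) * v2

# The residual y - u is orthogonal to W.
assert np.isclose(np.dot(y - u, v1), 0) and np.isclose(np.dot(y - u, v2), 0)

# u minimizes ||y - w|| over w in W (Theorem 14): test random w's.
for _ in range(100):
    a, b = rng.normal(size=2)
    w = a * v1 + b * v2
    assert np.linalg.norm(y - u) <= np.linalg.norm(y - w) + 1e-12
```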

Definition 15. We call the assignment of $u$ in Theorem 14 the orthogonal projection of $y$ onto $W$, denoted $u = P_W(y)$. It is an important fact that $\langle y - P_W(y), w \rangle = 0$ for all $w \in W$.

Definition 16. The orthogonal complement of $W$, written $W^\perp$ (pronounced "W perp"), is the set of all vectors $v \in V$ such that $\langle v, w \rangle = 0$ for all $w \in W$.

Proposition 17. $W^\perp$ is a subspace of $V$.

Proof. It is straightforward to see that $\langle \vec{0}, w \rangle = 0$ for all $w \in W$, so $\vec{0} \in W^\perp$. If $v, u \in W^\perp$ and $c \in F$, then
$$\langle cv + u, w \rangle = c \langle v, w \rangle + \langle u, w \rangle = 0,$$
so $cv + u \in W^\perp$.

We can construct an orthogonal set from a linearly independent set by starting with the first vector and then, at each step, subtracting from the next vector its projections onto the vectors already produced.

Theorem 18. Let $V$ be an inner product space and let $S = \{w_1, \ldots, w_n\}$ be a linearly independent subset of $V$. Define $S' = \{v_1, \ldots, v_n\}$ by $v_1 = w_1$ and
$$v_k = w_k - \sum_{j=1}^{k-1} \frac{\langle w_k, v_j \rangle}{\langle v_j, v_j \rangle} v_j$$
for $k = 2, \ldots, n$. Then $S'$ is an orthogonal set of nonzero vectors such that $\operatorname{span} S' = \operatorname{span} S$.

Proof. We show inductively that $v_k$ is orthogonal to $v_1, \ldots, v_{k-1}$. It is clear that
$$\langle v_2, v_1 \rangle = \left\langle w_2 - \frac{\langle w_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1, v_1 \right\rangle = \langle w_2, v_1 \rangle - \frac{\langle w_2, v_1 \rangle}{\langle v_1, v_1 \rangle} \langle v_1, v_1 \rangle = 0.$$
We then use the inductive hypothesis, $\langle v_i, v_j \rangle = 0$ for distinct $i, j \leq k - 1$, to see that for each $i \leq k - 1$,
$$\langle v_k, v_i \rangle = \left\langle w_k - \sum_{j=1}^{k-1} \frac{\langle w_k, v_j \rangle}{\langle v_j, v_j \rangle} v_j, v_i \right\rangle = \langle w_k, v_i \rangle - \frac{\langle w_k, v_i \rangle}{\langle v_i, v_i \rangle} \langle v_i, v_i \rangle = 0.$$
Thus $S'$ is orthogonal. Each $v_k$ is nonzero, since $v_k = \vec{0}$ would express $w_k$ as a linear combination of $v_1, \ldots, v_{k-1} \in \operatorname{span}\{w_1, \ldots, w_{k-1}\}$, contradicting linear independence. Hence $S'$ is linearly independent (by Proposition 11), and since each element of $S'$ is in the span of $S$, $\operatorname{span} S' \subseteq \operatorname{span} S$, and hence $\operatorname{span} S' = \operatorname{span} S$ (since they have the same dimension).

Note: this process of producing an orthogonal set is called the Gram-Schmidt process.
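The construction in Theorem 18 is exactly the classical Gram-Schmidt algorithm. Below is a direct, unoptimized transcription (a sketch assuming NumPy; in floating point one would normally prefer the modified Gram-Schmidt variant or a QR factorization for numerical stability).

```python
import numpy as np

def gram_schmidt(W):
    """Return an orthogonal list V with span(V) = span(W).

    W is a list of linearly independent 1-D arrays; implements
    v_k = w_k - sum_j <w_k, v_j>/<v_j, v_j> v_j from Theorem 18.
    """
    V = []
    for w in W:
        v = w.astype(complex)
        for u in V:
            # np.vdot(u, w) = sum conj(u_i) w_i = <w, u> in the text's convention.
            v = v - (np.vdot(u, w) / np.vdot(u, u)) * u
        V.append(v)
    return V

W = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
V = gram_schmidt(W)
for i in range(3):
    for j in range(i):
        assert np.isclose(np.vdot(V[i], V[j]), 0)  # pairwise orthogonal
```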

Theorem 19. Suppose that $S = \{v_1, \ldots, v_k\}$ is an orthonormal set in an $n$-dimensional inner product space $V$. Then:

1. $S$ can be extended to an orthonormal basis $\{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$ for $V$.

2. If $W = \operatorname{span} S$, then $S_1 = \{v_{k+1}, \ldots, v_n\}$ is an orthonormal basis for $W^\perp$.

3. If $W$ is any subspace of $V$, then $\dim V = \dim W + \dim W^\perp$.

Proof. By the replacement theorem, $S$ can be extended to a basis, and then the Gram-Schmidt process can be used to turn this into an orthogonal set. Normalizing then gives an orthonormal basis.

$S_1$ is an orthonormal (hence linearly independent) subset of $W^\perp$, since each $v_j$ with $j > k$ is orthogonal to $v_1, \ldots, v_k$ and hence to all of $W$. Since $\{v_1, \ldots, v_n\}$ is a basis, any vector $w \in W^\perp$ can be written as a linear combination of these vectors. Since $w$ satisfies $\langle w, v_i \rangle = 0$ for $i = 1, \ldots, k$, the first $k$ coefficients vanish, so $w$ is in the span of $S_1$; hence $S_1$ is a basis for $W^\perp$.

The dimension statement is now clear: for an arbitrary subspace $W$, first choose an orthonormal basis $S$ of $W$ (via Gram-Schmidt and normalizing); then $S$ is a basis for $W$, $S_1$ is a basis for $W^\perp$, and $\{v_1, \ldots, v_n\}$ is a basis for $V$.

4 Problems

FIS Section 6.2 exercises 4, 5, 7, 8, 10, 13, 22.

5 Adjoints and eigenvalues

Definition 20. Suppose $T$ is a linear operator on an inner product space $V$. If $T^*$ is a linear operator on $V$ such that
$$\langle T(x), y \rangle = \langle x, T^*(y) \rangle$$
for all $x, y \in V$, we say $T^*$ is the adjoint of $T$. We read $T^*$ as "T star."

Theorem 21. Let $V$ be a finite-dimensional inner product space and let $T$ be a linear operator on $V$. Then there exists a unique function $T^* : V \to V$ such that for all $x, y \in V$,
$$\langle T(x), y \rangle = \langle x, T^*(y) \rangle.$$
Furthermore, $T^*$ is linear and hence is the adjoint of $T$.

In order to prove this theorem, we need to show that every linear functional on a finite-dimensional inner product space can be represented in terms of the inner product. We will then use this idea to construct the adjoint.

Theorem 22. Let $V$ be a finite dimensional inner product space over $F$ and let $g : V \to F$ be a linear transformation. Then there exists a unique $y \in V$ such that $g(x) = \langle x, y \rangle$ for all $x \in V$.

Proof. Let $\beta = \{v_1, \ldots, v_n\}$ be an orthonormal basis for $V$. Any candidate must have the form $y = \sum b_i v_i$ for some scalars $b_i$, and in order for $g(x) = \langle x, y \rangle$ to hold we must have
$$g(v_i) = \langle v_i, y \rangle = \overline{b_i},$$
i.e., $b_i = \overline{g(v_i)}$. Hence any $y$ that satisfies the equation must have $b_i = \overline{g(v_i)}$ (this proves uniqueness). We now confirm that $y = \sum \overline{g(v_i)}\, v_i$ satisfies the theorem: if $x = \sum a_i v_i$, then
$$\langle x, y \rangle = \left\langle \sum_{i=1}^n a_i v_i, \sum_{j=1}^n \overline{g(v_j)}\, v_j \right\rangle = \sum_{i=1}^n \sum_{j=1}^n a_i\, g(v_j) \langle v_i, v_j \rangle = \sum_{i=1}^n a_i\, g(v_i) = g(x).$$

Proof of Theorem 21. Let $\beta = \{v_1, \ldots, v_n\}$ be an orthonormal basis for $V$ and define $g_i : V \to F$ by $g_i(x) = \langle T(x), v_i \rangle$. By Theorem 22, there exists a unique $y_i \in V$ such that $g_i(x) = \langle x, y_i \rangle$, and so
$$\langle T(x), v_i \rangle = \langle x, y_i \rangle.$$
We can thus define a linear transformation $T^*$ by setting $T^*(v_i) = y_i$ on the basis and extending linearly to a transformation on $V$. Now, for any vector $y \in V$, $y = \sum b_i v_i$ for some scalars $b_1, \ldots, b_n$, and we check that
$$\langle T(x), y \rangle = \left\langle T(x), \sum b_i v_i \right\rangle = \sum \overline{b_i} \langle T(x), v_i \rangle = \sum \overline{b_i} \langle x, T^*(v_i) \rangle = \sum \langle x, b_i T^*(v_i) \rangle = \langle x, T^*(y) \rangle.$$
$T^*$ is unique, since if there were another linear transformation $U$ satisfying the properties, then for any $y \in V$,
$$\langle x, T^*(y) \rangle = \langle T(x), y \rangle = \langle x, U(y) \rangle$$
would hold for all $x \in V$, implying that $T^*(y) = U(y)$. (See 6.1, problem 9.)

Notice that the adjoint works the other way as well:
$$\langle x, T(y) \rangle = \overline{\langle T(y), x \rangle} = \overline{\langle y, T^*(x) \rangle} = \langle T^*(x), y \rangle.$$
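Theorem 22's proof is constructive: the representing vector is $y = \sum_i \overline{g(v_i)}\, v_i$. A sketch assuming NumPy (the functional $g$, built from an arbitrary coefficient vector $w$, is my own choice, not from the text):

```python
import numpy as np

n = 3
rng = np.random.default_rng(3)
w = rng.normal(size=n) + 1j * rng.normal(size=n)
g = lambda x: np.sum(w * x)        # an arbitrary linear functional on C^n

# The standard basis of C^n is orthonormal; build y = sum conj(g(v_i)) v_i.
E = np.eye(n, dtype=complex)
y = sum(np.conj(g(E[i])) * E[i] for i in range(n))

x = rng.normal(size=n) + 1j * rng.normal(size=n)
inner = lambda a, b: np.sum(a * np.conj(b))   # <a, b>, linear in the first slot
assert np.isclose(g(x), inner(x, y))          # g(x) = <x, y> (Theorem 22)
```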

Note that our proof of the existence of an adjoint used a basis, and it turns out that for a linear operator on an infinite dimensional vector space, the existence of an adjoint is not guaranteed. However, most properties we will derive hold whenever the adjoint exists.

Recall that for a matrix $A$, we denote its conjugate transpose by $A^*$ (conjugate transpose means we take the transpose and replace each entry with its complex conjugate). This is related to the adjoint.

Theorem 23. Let $V$ be a finite dimensional inner product space and let $\beta$ be an orthonormal basis for $V$. If $T$ is a linear operator on $V$, then
$$[T^*]_\beta = [T]_\beta^*.$$

It might help to be clear about the content here. If one writes the matrix for the adjoint transformation $T^*$, this matrix is the same as the conjugate transpose of the matrix for the transformation $T$ (provided the basis used in both cases is orthonormal).

Proof. If we write $A = [T]_\beta$ and $B = [T^*]_\beta$, then by orthonormality we have that
$$B_{ij} = \langle T^*(v_j), v_i \rangle = \langle v_j, T(v_i) \rangle = \overline{\langle T(v_i), v_j \rangle} = \overline{A_{ji}}.$$
It follows that $B = A^*$.

Corollary 24. Left multiplication by an $n \times n$ matrix $A$ satisfies $(L_A)^* = L_{A^*}$.

Theorem 25. Let $V$ be an inner product space and let $T, U$ be linear operators on $V$ that have adjoints (this is always true if $V$ is finite dimensional). Then:

1. $(T + U)^* = T^* + U^*$.

2. $(cT)^* = \bar{c}\, T^*$ for any $c \in F$.

3. $(TU)^* = U^* T^*$.

4. $(T^*)^* = T$.

5. $I_V^* = I_V$.

Proof. The proofs are pretty straightforward. We prove the first and third. For any $x, y \in V$,
$$\langle (T + U)(x), y \rangle = \langle T(x) + U(x), y \rangle = \langle T(x), y \rangle + \langle U(x), y \rangle = \langle x, T^*(y) \rangle + \langle x, U^*(y) \rangle = \langle x, (T^* + U^*)(y) \rangle.$$
Also,
$$\langle TU(x), y \rangle = \langle T(U(x)), y \rangle = \langle U(x), T^*(y) \rangle = \langle x, U^* T^*(y) \rangle.$$
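These adjoint identities are easy to spot-check numerically (a sketch assuming NumPy): in the standard orthonormal basis of $\mathbb{C}^n$, the adjoint of $L_A$ is $L_{A^*}$ (Theorem 23 and Corollary 24), and $(TU)^* = U^* T^*$ becomes $(AB)^* = B^* A^*$.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda a, b: np.sum(a * np.conj(b))   # standard inner product on C^3
Astar = A.conj().T                            # conjugate transpose

# <L_A x, y> = <x, L_{A*} y> for all x, y.
assert np.isclose(inner(A @ x, y), inner(x, Astar @ y))
# (TU)* = U* T* at the matrix level: (AB)* = B* A*.
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```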

6 Problems

FIS Section 6.3 exercises 4, 6, 8, 10, 12, 13, 15

7 Schur's theorem and the (baby) spectral theorem

We would like to understand when there exists an orthonormal basis of eigenvectors. We first prove Schur's theorem, which says that we can represent a linear transformation by an upper triangular matrix.

Theorem 26 (Schur). Let $T$ be a linear operator on a finite dimensional inner product space $V$. Suppose the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\beta$ of $V$ such that $[T]_\beta$ is upper triangular.

The main idea behind the proof is that we can restrict $T$ to a subspace. If $W$ is a subspace of $V$ and $T(W) \subseteq W$, then we say that $W$ is $T$-invariant. Hence there is a linear operator $T_W \in \mathcal{L}(W)$ given by $T_W(x) = T(x)$. If $W$ is $T$-invariant and $\beta$ is a basis for $V$ that is an extension of a basis for $W$, then $[T]_\beta$ has block form
$$\begin{pmatrix} [T_W] & * \\ 0 & * \end{pmatrix},$$
where the $*$'s are unknown but generally not zero.

We now see that if $z$ is an eigenvector of $T^*$ with eigenvalue $\lambda$, and $z^\perp = \{x \in V : \langle x, z \rangle = 0\}$, then $z^\perp$ is $T$-invariant: for $y \in z^\perp$,
$$\langle T(y), z \rangle = \langle y, T^*(z) \rangle = \langle y, \lambda z \rangle = \bar{\lambda} \langle y, z \rangle = 0.$$

Proof. We first notice that the characteristic polynomial of $T^*$ satisfies
$$\det(T^* - \bar{\lambda} I) = \det\left((T - \lambda I)^*\right) = \overline{\det(T - \lambda I)},$$
since the determinant is invariant under transpose and conjugating a matrix conjugates its determinant. Hence if the characteristic polynomial of $T$ splits, then $T^*$ has an eigenvector $z$. Then $z^\perp$ is $T$-invariant, and so we can use the inductive hypothesis on the restriction of $T$ to $z^\perp$ to produce an orthonormal basis of $z^\perp$ in which the restriction is upper triangular; appending the unit vector $z / \|z\|$ gives an orthonormal basis $\beta$ of $V$ for which $[T]_\beta$ is upper triangular. This completes the proof.
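Schur's theorem is the abstract counterpart of the numerical (complex) Schur decomposition $A = Q T Q^*$ with $Q$ unitary and $T$ upper triangular. A quick check, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Complex Schur form: A = Q T Q*, Q unitary, T upper triangular.
T, Q = schur(A, output='complex')

assert np.allclose(Q @ Q.conj().T, np.eye(4))   # Q is unitary
assert np.allclose(T, np.triu(T))               # T is upper triangular
assert np.allclose(Q @ T @ Q.conj().T, A)       # A = Q T Q*
```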

Notice the following consequence:

Theorem 27. Suppose $T \in \mathcal{L}(V)$ for a finite dimensional inner product space $V$. If $T$ is self-adjoint, i.e., $T = T^*$, and its characteristic polynomial splits, then $V$ has an orthonormal basis of eigenvectors of $T$.

Proof. Since $T = T^*$, we have that $[T]_\beta = [T^*]_\beta = [T]_\beta^*$. If we choose the basis $\beta$ from Schur's theorem, $[T]_\beta$ is upper triangular and $[T]_\beta^*$ is lower triangular, so $[T]_\beta$ is both upper and lower triangular, i.e., diagonal. Hence the vectors in $\beta$ are eigenvectors of $T$.

Note that we needed the characteristic polynomial to split, which is not automatic for real inner product spaces, so we need the following lemma to complete the proof.

Lemma 28. Let $T$ be a self-adjoint operator on a finite dimensional inner product space $V$. Then:

1. $T$ has real eigenvalues.

2. The characteristic polynomial of $T$ splits.

Proof. If $\lambda$ is an eigenvalue with eigenvector $v$, then
$$\lambda \langle v, v \rangle = \langle T(v), v \rangle = \langle v, T^*(v) \rangle = \langle v, T(v) \rangle = \langle v, \lambda v \rangle = \bar{\lambda} \langle v, v \rangle,$$
and so $\lambda = \bar{\lambda}$ and $\lambda$ is real.

The second statement follows from the fundamental theorem of algebra if $F = \mathbb{C}$. If $F = \mathbb{R}$, then we can consider $A = [T]_\beta$ for an orthonormal basis $\beta$, and then consider the linear transformation $L_A : \mathbb{C}^n \to \mathbb{C}^n$ given by $L_A(x) = Ax$. Since $T$ is self-adjoint, $A^* = A$, and hence $L_A$ is self-adjoint and therefore has real eigenvalues. Since all polynomials split over the complex numbers, the characteristic polynomial of $L_A$ splits, and since its roots are real, it factors into linear factors with real coefficients. The characteristic polynomial of $L_A$ is the same as that of $T$, and therefore the characteristic polynomial of $T$ splits (over $\mathbb{R}$).

Remark 29. For real inner product spaces, the theorem above is actually an "if and only if." For complex inner product spaces, self-adjointness can be relaxed to normality, which means that $T$ and $T^*$ commute, i.e., $TT^* = T^*T$. For a normal operator, any eigenvector $x$ of $T$, with eigenvalue $\lambda$, is an eigenvector for $T^*$, since
$$0 = \langle Tx - \lambda x, Tx - \lambda x \rangle = \langle x, T^*Tx \rangle - \bar{\lambda} \langle Tx, x \rangle - \lambda \langle x, Tx \rangle + |\lambda|^2 \langle x, x \rangle$$
$$= \langle x, TT^*x \rangle - \bar{\lambda} \langle x, T^*x \rangle - \lambda \langle T^*x, x \rangle + |\lambda|^2 \langle x, x \rangle = \langle T^*x - \bar{\lambda} x, T^*x - \bar{\lambda} x \rangle,$$
and so $x$ is an eigenvector for $T^*$ with eigenvalue $\bar{\lambda}$.

8 Problems
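A final numerical illustration of the spectral theorem and Remark 29 (a sketch assuming NumPy; the matrices are randomly generated): a Hermitian matrix has real eigenvalues and an orthonormal basis of eigenvectors, and for a normal matrix each eigenvector of $A$ is an eigenvector of $A^*$ with the conjugate eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = M + M.conj().T                      # Hermitian (self-adjoint) matrix

w, U = np.linalg.eigh(H)                # eigendecomposition for Hermitian matrices
assert np.allclose(w.imag, 0)           # real eigenvalues (Lemma 28)
assert np.allclose(U.conj().T @ U, np.eye(3))  # orthonormal eigenvectors (Thm 27)

# A normal (here unitary) matrix: eigenvectors of A are eigenvectors
# of A* with conjugate eigenvalues (Remark 29).
A = U @ np.diag(np.exp(1j * w)) @ U.conj().T   # unitary, so A A* = A* A
vals, vecs = np.linalg.eig(A)
for lam, x in zip(vals, vecs.T):
    assert np.allclose(A.conj().T @ x, np.conj(lam) * x)
```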