Math 108b: Notes on the Spectral Theorem

From section 6.3, we know that every linear operator T on a finite-dimensional inner product space V has an adjoint T*. (T* is defined as the unique linear operator on V such that ⟨T(x), y⟩ = ⟨x, T*(y)⟩ for every x, y ∈ V; see Theorems 6.8 and 6.9.) When V is infinite-dimensional, the adjoint T* may or may not exist. One useful fact (Theorem 6.10) is that if β is an orthonormal basis for a finite-dimensional inner product space V, then [T*]_β = ([T]_β)*. That is, the matrix representation of the operator T* is the conjugate transpose of the matrix representation of T.

For a general vector space V and a linear operator T, we have already asked the question "when is there a basis of V consisting only of eigenvectors of T?" This is exactly when T is diagonalizable. Now, for an inner product space V, we know how to check whether vectors are orthogonal and how to define the norms of vectors, so we can ask: when is there an orthonormal basis of V consisting only of eigenvectors of T? Clearly, if there is such a basis, T is diagonalizable; moreover, eigenvectors with distinct eigenvalues must be orthogonal.

Definitions. Let V be an inner product space and let T ∈ L(V).
(a) T is normal if T*T = TT*.
(b) T is self-adjoint if T* = T.
For the next two definitions, assume V is finite-dimensional:
(c) T is unitary if F = C and ‖T(x)‖ = ‖x‖ for every x ∈ V.
(d) T is orthogonal if F = R and ‖T(x)‖ = ‖x‖ for every x ∈ V.

Notes.
1. Self-adjoint ⇒ Normal. (Clearly, if T* = T, then T and T* commute!)
2. Unitary or Orthogonal ⇒ Invertible. (Show that N(T) = {0}.)
3. Unitary or Orthogonal ⇒ Normal. (This is because, when T is unitary (or orthogonal), we must have T*T = TT* = I, the identity operator on V; see Theorem 6.18.)
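To make the definitions concrete, here is a small numerical sketch (my own addition, not from the notes): on Cⁿ with the standard inner product, the adjoint of a matrix is its conjugate transpose, so all three properties can be checked directly. The example matrices H, U, N are hypothetical choices for illustration.

```python
import numpy as np

def adjoint(A):
    """Matrix of T* in an orthonormal basis: the conjugate transpose."""
    return A.conj().T

def is_normal(A):
    return np.allclose(A @ adjoint(A), adjoint(A) @ A)

def is_self_adjoint(A):
    return np.allclose(A, adjoint(A))

def is_unitary(A):
    return np.allclose(adjoint(A) @ A, np.eye(A.shape[0]))

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])    # self-adjoint (Hermitian)
U = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)  # unitary
N = np.array([[1, 1j], [1j, 1]])                # normal but not self-adjoint

assert is_self_adjoint(H) and is_normal(H)      # Note 1: self-adjoint => normal
assert is_unitary(U) and is_normal(U)           # Note 3: unitary => normal
assert is_normal(N) and not is_self_adjoint(N)  # the converse of Note 1 fails
```

The last matrix N shows that normality is strictly weaker than self-adjointness: N N* = N* N = 2I, yet N* ≠ N.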

Example. A simple example of an orthogonal operator is the rotation operator T_θ on R². In the standard basis β,

A_θ = [T_θ]_β = ( cos θ   −sin θ )
                ( sin θ    cos θ )

Notice that A_θ is orthogonal. Also, A_θ* = A_θᵀ ≠ A_θ (unless sin θ = 0), so A_θ is not self-adjoint. However, it is easy to check that A_θ A_θ* = A_θ* A_θ = I, so A_θ is normal. If the matrix A_θ were over the complex field, it would have an orthonormal basis of eigenvectors by the spectral theorem (see below)! However, over the reals, we see that the characteristic polynomial of A_θ does not split (unless sin θ = 0); therefore, there are no real eigenvalues, and therefore no eigenvectors at all, for the operator T_θ on R².

Spectral Theorem 1. Let T be a linear operator on a finite-dimensional complex inner product space V. Then T is normal if and only if there exists an orthonormal basis (for V) consisting of eigenvectors of T.

Spectral Theorem 2. Let T be a linear operator on a finite-dimensional real inner product space V. Then T is self-adjoint if and only if there exists an orthonormal basis (for V) consisting of eigenvectors of T.

We'll prove the simple direction first: Assume that there is an orthonormal basis β consisting of eigenvectors of T. Then we know that [T]_β is diagonal, and also that [T*]_β = ([T]_β)* is diagonal. Since diagonal matrices commute, we have

[T*T]_β = [T*]_β [T]_β = [T]_β [T*]_β = [TT*]_β.

Hence T*T = TT*, so T is normal. This holds whether F = C or F = R.

We'll see in the lemmas below that T normal implies that every eigenvector of T with eigenvalue λ is also an eigenvector of T* with eigenvalue λ̄. Since β is a basis of eigenvectors of both T and T*, we have that both [T]_β and [T*]_β are diagonal matrices with the eigenvalues on the diagonal. In the case F = R, every eigenvalue of T is real, so λ̄ = λ, and we must have [T*]_β = [T]_β. Therefore, in the case of a real inner product space, we know that T is not only normal but also self-adjoint!

Lemma (pg. 269). Let V be a finite-dimensional inner product space and T ∈ L(V). If T has an eigenvector with eigenvalue λ, then T* has an eigenvector with eigenvalue λ̄.

See the book for the proof. Note that the above lemma is true for any linear operator T, but we do not know that the eigenvectors for T and T* are related! The next theorem states that when T is normal, the eigenvectors for T and T* are the same; this is the fact we used in the proof of the spectral theorem above.
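This relationship between the eigenvectors of T and T* can be checked numerically. The following sketch (an illustration I am adding, with a matrix chosen for the example) verifies that for a normal matrix, each eigenvector of A is also an eigenvector of A* with the conjugate eigenvalue.

```python
import numpy as np

# A normal (in fact orthogonal) matrix: rotation by 90 degrees.
# Its eigenvalues are +i and -i, with complex eigenvectors.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)                    # T(v) = lam v
    assert np.allclose(A.conj().T @ v, np.conj(lam) * v)  # T*(v) = conj(lam) v
```

For a non-normal matrix (e.g. [[1, 1], [0, 1]]) the second assertion would fail: the lemma only guarantees that T* has *some* eigenvector with eigenvalue λ̄, not the same one.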

Theorem 6.15 (pg. 371). Let V be an inner product space and T ∈ L(V) a normal operator. Then:
(c) If T(x) = λx (for some x ∈ V and some λ ∈ F), then T*(x) = λ̄x.
(d) If λ₁ and λ₂ are distinct eigenvalues of T with eigenvectors x₁ and x₂, then ⟨x₁, x₂⟩ = 0.

Sketch of proof: First prove that, since T is normal, ‖T(x)‖ = ‖T*(x)‖ for every x ∈ V. Then, since U = T − λI is normal with U* = T* − λ̄I, we know that U(x) = 0 if and only if U*(x) = 0. In other words, T(x) = λx if and only if T*(x) = λ̄x. For (d), simply compute λ₁⟨x₁, x₂⟩ = ⟨T(x₁), x₂⟩ = ⟨x₁, T*(x₂)⟩ = ⟨x₁, λ̄₂x₂⟩ = λ₂⟨x₁, x₂⟩. Since λ₁ ≠ λ₂, we must have ⟨x₁, x₂⟩ = 0.

We're almost ready to finish the proof of the first spectral theorem, but first we'll need the following theorem:

Schur's Theorem. Let T be a linear operator on a finite-dimensional inner product space V. Suppose the characteristic polynomial of T splits. Then there exists an orthonormal basis β for V such that the matrix [T]_β is upper triangular.

Proof: Induct on n = dim V. Of course, if n = 1, take any basis β = {x} with ‖x‖ = 1. Then [T]_β = (a) (where T(x) = ax) is upper triangular. Now assume the result for n − 1: that is, suppose that if V is any inner product space of dimension n − 1 and T is any linear operator on V whose characteristic polynomial splits, then there exists an orthonormal basis β for V such that [T]_β is upper triangular. We wish to show this result for n.

Fix an n-dimensional inner product space V and a T ∈ L(V) such that the characteristic polynomial of T splits. This implies that T has an eigenvalue λ. Therefore, T* has an eigenvector with eigenvalue λ̄ (by the Lemma, pg. 369). Let z be a unit eigenvector of T*, so T*(z) = λ̄z. Consider the space W = span({z})⊥. Clearly, W is (n − 1)-dimensional, and T|_W is a linear operator on W, so by the induction hypothesis there exists an orthonormal basis γ for W such that [T|_W]_γ is upper triangular. (Notice that the characteristic polynomial of T|_W splits, since W is T-invariant and the characteristic polynomial of T|_W then divides that of T. Check the invariance below:)

Let y ∈ W. We want to show that T(y) is in W. For this, compute ⟨T(y), z⟩ = ⟨y, T*(z)⟩ = ⟨y, λ̄z⟩ = λ⟨y, z⟩ = 0, since y ⊥ z. Hence T(y) ⊥ z, so T(y) ∈ W.

Finally, β = γ ∪ {z} (with z listed last) is orthonormal, and [T]_β is upper triangular: the columns corresponding to γ are those of [T|_W]_γ with a zero appended in the last row (since T maps W into W), and the last column is a = [T(z)]_β ∈ Fⁿ:

[T]_β = ( [T|_W]_γ   a )
        (     0        )
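The induction in Schur's proof can be carried out numerically. The sketch below (my own illustration, not code from the notes, and not production-quality) finds one unit eigenvector, rotates it into the first coordinate with a unitary change of basis, and recurses on the remaining (n − 1)-dimensional block.

```python
import numpy as np

def schur_triangularize(A):
    """Return (Q, T) with Q unitary, T upper triangular, and A = Q T Q*,
    mirroring the induction in the proof of Schur's theorem."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex), A.copy()
    # Over C the characteristic polynomial splits, so an eigenvector exists.
    _, vecs = np.linalg.eig(A)
    z = vecs[:, 0]                      # unit eigenvector (numpy normalizes)
    # Extend {z} to an orthonormal basis of C^n: QR of [z | I] keeps z
    # (up to a phase) as the first column and fills in a complement.
    Q1, _ = np.linalg.qr(np.column_stack([z, np.eye(n, dtype=complex)]))
    B = Q1.conj().T @ A @ Q1            # first column of B is (lam, 0, ..., 0)
    Q2, T2 = schur_triangularize(B[1:, 1:])   # induction hypothesis
    Q = Q1 @ np.block([[np.eye(1, dtype=complex), np.zeros((1, n - 1))],
                       [np.zeros((n - 1, 1)), Q2]])
    T = np.block([[B[:1, :1], B[:1, 1:] @ Q2],
                  [np.zeros((n - 1, 1), dtype=complex), T2]])
    return Q, T

A = np.array([[2.0, 1.0, 0.0], [0.5, 3.0, 1.0], [1.0, 0.0, 1.0]])
Q, T = schur_triangularize(A)
assert np.allclose(Q.conj().T @ Q, np.eye(3))   # Q unitary: columns form beta
assert np.allclose(T, np.triu(T))               # [T]_beta upper triangular
assert np.allclose(Q @ T @ Q.conj().T, A)       # A = Q T Q*
```

Note this version triangularizes from the top-left using an eigenvector of A itself, which is equivalent to the proof's use of an eigenvector of T* for the last basis vector.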

Proof of Spectral Theorem 1. Assume T is normal. Since F = C, we know (from the fundamental theorem of algebra) that the characteristic polynomial of T splits. By Schur's theorem, we can find an orthonormal basis β = {v₁, v₂, ..., v_n} such that A = [T]_β is upper triangular. Clearly v₁ is an eigenvector of T (since T(v₁) = A₁₁v₁). The fact that T is normal will imply that v₂, ..., v_n are also eigenvectors!

Assume v₁, v₂, ..., v_{k−1} are all eigenvectors, and let λ₁, λ₂, ..., λ_{k−1} be the corresponding eigenvalues. Since A is upper triangular, looking at the k-th column of A we have A_{mk} = 0 for m > k. This lets us compute

T(v_k) = A_{1k}v₁ + A_{2k}v₂ + ... + A_{kk}v_k.

From Theorem 6.15, the fact that T is normal implies that T*(v_j) = λ̄_j v_j for j < k. Since the basis is orthonormal, we have the formula A_{jk} = ⟨T(v_k), v_j⟩ for the coefficients in the equation above. Computing, for j < k,

A_{jk} = ⟨T(v_k), v_j⟩ = ⟨v_k, T*(v_j)⟩ = ⟨v_k, λ̄_j v_j⟩ = λ_j⟨v_k, v_j⟩ = 0.

Therefore, we have simply that T(v_k) = A_{kk}v_k, so v_k is an eigenvector of T. By induction, we have shown that v₁, v₂, ..., v_n are all eigenvectors of T, so β is an orthonormal basis consisting only of eigenvectors of T, and the spectral theorem is proven.

Before we can prove the second version of the spectral theorem, for F = R, we need the following lemma:

Lemma (pg. 373). Let T be a self-adjoint operator on a finite-dimensional inner product space V. Then the following two facts hold (whether F = R or F = C):
(a) Every eigenvalue of T is real.
(b) The characteristic polynomial of T splits.

Proof of (a): From Theorem 6.15, if x is an eigenvector of T, we have both T(x) = λx for some λ ∈ F and T*(x) = λ̄x. However, since T is self-adjoint, T*(x) = T(x). Therefore λx = λ̄x. Since x ≠ 0, we have λ = λ̄; i.e., λ is real.

Proof of (b): When F = C, we already know the characteristic polynomial splits, so suppose F = R. Let β be any orthonormal basis for V; we know that A = [T]_β satisfies A* = A. Define, for every x ∈ Cⁿ, S(x) = Ax. Then S has the same characteristic polynomial as T, and since S is a linear operator on a complex inner product space, the characteristic polynomial of S splits. Moreover, since S is self-adjoint, each eigenvalue λ₁, ..., λ_n is real (by part (a)), so the polynomial

p_T(t) = p_S(t) = (λ₁ − t)(λ₂ − t)···(λ_n − t)

splits over the field F = R.
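Part (a) of the lemma is easy to confirm numerically; the following check (an illustration I am adding, with a matrix chosen for the example) shows that the eigenvalues of a self-adjoint (Hermitian) matrix are real even when its entries are complex.

```python
import numpy as np

# A self-adjoint matrix with genuinely complex entries.
H = np.array([[2.0, 1 - 2j], [1 + 2j, -1.0]])
assert np.allclose(H, H.conj().T)       # H* = H, so H is self-adjoint

vals = np.linalg.eigvals(H)
assert np.allclose(vals.imag, 0.0)      # every eigenvalue is real
```

(Here the characteristic polynomial is t² − t − 7, whose roots are real, as the lemma predicts.)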

Proof of Spectral Theorem 2: Assume F = R and T is a self-adjoint linear operator on V. Then the characteristic polynomial of T splits (by the lemma above), and Schur's theorem implies that there exists an orthonormal basis β for V such that A = [T]_β is upper triangular. The same proof as for the first spectral theorem now works, since T is normal; but it is easier to note that since T* = T, both A and Aᵀ = A* = A are upper triangular. A matrix that is both upper and lower triangular is diagonal, so A is diagonal.

Finally, we will show the next theorem, which includes the fact that unitary (and orthogonal) operators must be normal!

Theorem 6.18. Let V be a finite-dimensional inner product space and T ∈ L(V). The following are equivalent:
(a) T*T = TT* = I
(b) ⟨T(x), T(y)⟩ = ⟨x, y⟩ for every x, y ∈ V
(e) ‖T(x)‖ = ‖x‖ for every x ∈ V

Proof:
(a) ⇒ (b): Assume (a), T*T = TT* = I. Given x, y ∈ V, since T*T(x) = I(x) = x, we have x, y⟩ aside: ⟨x, y⟩ = ⟨T*T(x), y⟩ = ⟨T(x), T(y)⟩.

(b) ⇒ (e): Assume (b), ⟨x, y⟩ = ⟨T(x), T(y)⟩ for every x, y ∈ V. Let β = {v₁, v₂, ..., v_n} be an orthonormal basis for V. Then T(β) = {T(v₁), T(v₂), ..., T(v_n)} is an orthonormal basis for V (since, from (b), ⟨T(v_i), T(v_j)⟩ = ⟨v_i, v_j⟩ = δ_{ij}). Given any x ∈ V, write x = Σⁿ_{i=1} a_i v_i; we can easily compute ‖x‖² = Σⁿ_{i=1} |a_i|². Since T is linear, T(x) = Σⁿ_{i=1} a_i T(v_i); the same computation shows ‖T(x)‖² = Σⁿ_{i=1} |a_i|², so ‖T(x)‖ = ‖x‖.
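Before finishing the proof, here is a quick numerical check of the three equivalent conditions (a sketch I am adding, not from the notes): a random unitary matrix, built via QR, preserves both inner products and norms.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)          # orthonormalizing the columns gives a unitary Q

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

assert np.allclose(Q.conj().T @ Q, np.eye(4))                 # (a) Q*Q = I
assert np.allclose(np.vdot(Q @ x, Q @ y), np.vdot(x, y))      # (b) <Qx, Qy> = <x, y>
assert np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # (e) ||Qx|| = ||x||
```

(np.vdot conjugates its first argument, so it computes the standard inner product on Cⁿ.)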

(e) ⇒ (a): Assume (e), ‖T(x)‖² = ‖x‖² for every x ∈ V. For all x ∈ V,

⟨x, x⟩ = ‖x‖² = ‖T(x)‖² = ⟨T(x), T(x)⟩ = ⟨x, T*T(x)⟩.

This implies, for all x ∈ V, that

(∗)  ⟨x, (I − T*T)(x)⟩ = 0.

Let U = I − T*T. Check that U is self-adjoint: U* = I* − (T*T)* = I − T*T = U. Then, by the spectral theorem, there exists an orthonormal basis β consisting of eigenvectors of U. For each x ∈ β, U(x) = λx for some eigenvalue λ, and by (∗), 0 = ⟨x, U(x)⟩ = λ̄⟨x, x⟩. Since x ≠ 0, we must have λ = 0. This implies that [U]_β is the zero matrix! Hence U = I − T*T = T₀ (the zero transformation on V), so T*T = I. Since V is finite-dimensional, this proves that T* must be the inverse of T, and therefore TT* = I as well.
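Putting everything together, Spectral Theorem 1 in matrix form says a normal matrix A factors as A = V D V* with V unitary and D diagonal. A closing numerical illustration (my own addition, with a matrix chosen for the example):

```python
import numpy as np

# A normal matrix (skew-symmetric, so A A* = A* A); eigenvalues are 2i and -2i.
A = np.array([[0.0, -2.0], [2.0, 0.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

vals, V = np.linalg.eig(A)
# The eigenvalues are distinct, so by Theorem 6.15(d) the unit eigenvectors
# returned by eig are automatically orthogonal, making V unitary.
assert np.allclose(V.conj().T @ V, np.eye(2))           # orthonormal eigenbasis
assert np.allclose(V @ np.diag(vals) @ V.conj().T, A)   # A = V D V*
```

Note that over R this matrix has no eigenvectors at all (it is the rotation example scaled by 2), which is exactly why Spectral Theorem 1 requires F = C for merely normal operators.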