Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1)


Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1) Travis Schedler Thurs, Nov 18, 2010 (version: Wed, Nov 17, 2:15 PM)

Goals (2)
- Isometries
- Positive operators
- Polar and singular value decompositions
- Unitary matrices and classical matrix groups
- Preview of rest of course!

Characterization of isometries (3)
Correction: An isometry V → W need not be invertible even if V and W are finite-dimensional: e.g., if V = 0, the zero map is an isometry! However, it must be injective, so it is invertible if dim V = dim W < ∞.
Proposition. If T* exists (e.g., if V is finite-dimensional), then T ∈ L(V, W) is an isometry iff T*T = I_V.
Proof. First, T is an isometry iff ⟨T*Tv, v⟩ = ⟨v, v⟩ for all v. Next, T*T, and hence S := T*T − I, is always self-adjoint. So T is an isometry iff ⟨Sv, v⟩ = 0 for all v. Now apply the skipped result from last time, below.
Proposition (Proposition 7.4). If T is self-adjoint, then ⟨Tv, v⟩ = 0 for all v iff T = 0.
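As a quick numerical illustration (not from the slides), the characterization T*T = I_V can be checked directly for a rotation of R^2, and for the inclusion R^2 → R^3, which is an isometry that is not invertible:

```python
import numpy as np

# Sketch: verify "T is an isometry iff T*T = I_V" for a plane rotation,
# and note that an isometry V -> W need not be invertible.
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # isometry of R^2
assert np.allclose(T.T @ T, np.eye(2))           # T*T = I_V

# The inclusion R^2 -> R^3 as a 3x2 matrix: injective isometry, not invertible.
J = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
assert np.allclose(J.T @ J, np.eye(2))           # still T*T = I_V
```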

Proof of the proposition (4)
Proposition (Proposition 7.4). If T is self-adjoint, then ⟨Tv, v⟩ = 0 for all v iff T = 0.
Proof. Assume ⟨Tv, v⟩ = 0 for all v. Then ⟨T(v + u), v + u⟩ − ⟨T(v), v⟩ − ⟨T(u), u⟩ = 0 for all v, u. Thus, ⟨T(v), u⟩ + ⟨T(u), v⟩ = 0 for all u, v. When T is self-adjoint, this says 2 Re⟨T(v), u⟩ = 0 for all u, v. Plugging in iu for u, also 2 Im⟨T(v), u⟩ = 0 for all u, v. Thus T(v) = 0 for all v, i.e., T = 0.
Note: From another skipped result on last week's slides, when V is finite-dimensional, ⟨Tv, v⟩ = 0 for all v iff T = 0, or F = R and T is anti-self-adjoint.
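The note at the end is worth seeing concretely (my example, not the lecture's): over F = R, rotation by 90 degrees is anti-self-adjoint and satisfies ⟨Tv, v⟩ = 0 for every v, yet T ≠ 0 — so self-adjointness (or F = C) really is needed.

```python
import numpy as np

# Rotation by 90 degrees: anti-self-adjoint (skew-symmetric) over R.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(T.T, -T)        # T* = -T

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    assert abs(v @ T @ v) < 1e-12  # <Tv, v> = 0 for every v
assert not np.allclose(T, 0)       # ...yet T != 0
```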

Positive operators (5)
Definition. A positive operator T is a self-adjoint operator such that ⟨Tv, v⟩ ≥ 0 for all v.
In view of the spectral theorem, a self-adjoint operator is positive iff its eigenvalues are nonnegative (part of Theorem 7.27).
Theorem (Remainder of Theorem 7.27). (i) Every operator of the form T = S*S is positive. (ii) Every positive operator admits a positive square root.
Proof. (i) First, T* = (S*S)* = S*S = T is self-adjoint. Next, ⟨Tv, v⟩ = ⟨Sv, Sv⟩ ≥ 0 for all v. (ii) Given an orthonormal eigenbasis of T, let √T be the operator with the same orthonormal eigenbasis, but with the nonnegative square roots of the eigenvalues.
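Both parts of the theorem can be sketched numerically (my illustration; real symmetric matrices stand in for self-adjoint operators):

```python
import numpy as np

# (i) S*S is positive; (ii) build its positive square root from an
# orthonormal eigenbasis, exactly as in the proof.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = S.T @ S                          # part (i): S*S

evals, Q = np.linalg.eigh(T)         # orthonormal eigenbasis, real eigenvalues
assert np.all(evals >= -1e-10)       # positive <=> nonnegative eigenvalues
sqrtT = Q @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ Q.T
assert np.allclose(sqrtT @ sqrtT, T)  # it squares back to T
assert np.allclose(sqrtT, sqrtT.T)    # and is itself self-adjoint
```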

Polar decomposition (6)
After the spectral theorem, the second-most important theorem of Chapters 6 and 7 is:
Theorem (Polar decomposition: Theorem 7.41). Every T ∈ L(V) equals S√(T*T) for some isometry S.
Main difficulty: T need not be invertible!
Lemma. For all v ∈ V, ‖Tv‖ = ‖√(T*T) v‖.
Proof. ‖√(T*T) v‖² = ⟨√(T*T) v, √(T*T) v⟩ = ⟨(√(T*T))* √(T*T) v, v⟩ = ⟨T*Tv, v⟩ = ⟨Tv, Tv⟩ = ‖Tv‖².
Corollary: null(T) = null(√(T*T)). We may thus define S_1 : range(√(T*T)) → range(T) by S_1(√(T*T) v) = Tv. Thus, for all v ∈ V, S_1 √(T*T) v = Tv. Also, ‖S_1 u‖ = ‖u‖ for all u (= √(T*T) v) by the lemma. So S_1 is an isometry.

Completion of proof (7)
We only have to extend S_1 to an isometry on all of V. Note that range(√(T*T)) ⊕ range(√(T*T))^⊥ = V = range(T) ⊕ range(T)^⊥. Thus, the extensions of S_1 : range(√(T*T)) → range(T) to an isometry S : V → V are exactly S = S_1 ⊕ S_2, where S_2 : range(√(T*T))^⊥ → range(T)^⊥ is an isometry. Since these are inner product spaces of the same dimension, there always exists such an isometry, obtained by taking an orthonormal basis to an orthonormal basis. Recall here that T_1 ⊕ T_2 on U_1 ⊕ U_2 means (T_1 ⊕ T_2)(u_1 + u_2) = T_1(u_1) + T_2(u_2), for u_1 ∈ U_1, u_2 ∈ U_2.
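The whole construction can be sketched in code (my sketch; for simplicity it assumes T is invertible, so S = T·√(T*T)^{-1} and the extension step S_2 is not needed):

```python
import numpy as np

# Polar decomposition T = S * sqrt(T*T) following the proof:
# sqrt(T*T) via an orthonormal eigenbasis, then S_1(sqrt(T*T) v) = Tv,
# i.e. S = T @ inv(P) when T (hence P) is invertible.
rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))          # invertible with probability 1

evals, Q = np.linalg.eigh(T.T @ T)
P = Q @ np.diag(np.sqrt(evals)) @ Q.T    # P = sqrt(T*T), positive
S = T @ np.linalg.inv(P)                 # S_1(P v) = T v  =>  S = T P^{-1}

assert np.allclose(S.T @ S, np.eye(3))   # S is an isometry
assert np.allclose(S @ P, T)             # T = S * sqrt(T*T)
```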

Singular value decomposition (SVD) (8)
Let V be finite-dimensional and T ∈ L(V).
Theorem. There exist orthonormal bases (e_1, ..., e_n) and (f_1, ..., f_n) and nonnegative values s_i ≥ 0 such that Te_i = s_i f_i.
The s_i are called the singular values.
Proof. Let (e_1, ..., e_n) be an orthonormal eigenbasis of the positive operator √(T*T). Let s_1, ..., s_n be the nonnegative eigenvalues. Using the polar decomposition, T = S√(T*T) for S an isometry. Let f_i := Se_i. Then Te_i = s_i f_i.
Corollary: M_{(e_i),(f_i)}(T) = D, where D is diagonal with entries s_i ≥ 0. Improvement on normal form: (e_i), (f_i) orthonormal!
Note: To compute, first find the eigenvalues s_i² of T*T, then an orthonormal eigenbasis e_i. This yields f_i, S, and √(T*T)!
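The computational recipe in the final note can be sketched directly (my illustration; it assumes all s_i > 0, so each f_i = s_i^{-1} T e_i is determined):

```python
import numpy as np

# SVD recipe: eigenvalues s_i^2 and orthonormal eigenbasis e_i of T*T,
# then f_i := s_i^{-1} T e_i.
rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

s2, E = np.linalg.eigh(T.T @ T)      # columns of E are the e_i
s = np.sqrt(s2)                      # singular values
F = (T @ E) / s                      # columns f_i = s_i^{-1} T e_i

assert np.allclose(F.T @ F, np.eye(3))   # (f_i) orthonormal
assert np.allclose(T @ E, F * s)         # T e_i = s_i f_i
```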

Unitary matrices and the A = U_1 D U_2^{-1} decomposition (9)
Corollary: If A = M_{(v_i)}(T) for any orthonormal basis, then A = U_1 D U_2^{-1} for the change-of-basis matrices U_1, U_2. Moreover, the columns of U_1 and U_2 form orthonormal bases of Mat(n, 1, F): such matrices are called unitary.
Proposition. The following are equivalent for U ∈ Mat(n, n, F): (a) the columns form an orthonormal basis; (b) UU* = I = U*U; (c) U = M_{(e_i)}(S) for some isometry S ∈ L(V) and some orthonormal basis (e_i) of V.
Proof. The first two are immediately equivalent. In general, M_{(e_i)}(S) has columns equal to the Se_i in the basis (e_i), so ⟨Se_i, Se_j⟩ = the dot product of the i-th and j-th columns of M_{(e_i)}(S).

Example of SVD (A = U_1 D U_2^{-1} decomposition)! (10)
Let A = [10 2; 5 11]. Compute the decomposition A = U_1 D U_2^{-1}.
First step: find the eigenvalues and an eigenbasis of A*A = [125 75; 75 125].
Char. poly of A*A = x² − (tr A*A)x + det(A*A) = x² − 250x + 10000 = (x − 200)(x − 50).
Eigenvalues: s_1² = 200, s_2² = 50.
Nullspace of A*A − 200I = [−75 75; 75 −75]: spanned by e_1 := (1/√2)(1, 1).
Same computation for e_2: get e_2 := (1/√2)(−1, 1).
Alternative: e_2 is determined up to scaling by the condition e_1 ⊥ e_2.

Computation continued (11)
Recall: A = [10 2; 5 11], s_1² = 200, s_2² = 50, e_1 = (1/√2)(1, 1), e_2 = (1/√2)(−1, 1).
Now, f_1 = s_1^{-1} A e_1 = (1/20)(12, 16) = (3/5, 4/5).
Similarly, f_2 = s_2^{-1} A e_2 = (1/10)(−8, 6) = (−4/5, 3/5).
Caution: f_2 ⊥ f_1, but this only determines f_2 up to scaling by absolute value one (here ±1)!
Now, U_1 = (f_1 f_2) = [3/5 −4/5; 4/5 3/5],
D = [s_1 0; 0 s_2] = [10√2 0; 0 5√2],
and U_2 = (e_1 e_2) = (1/√2)[1 −1; 1 1]. Check: A = U_1 D U_2^{-1}!
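The worked example can be checked numerically; the steps above assemble as:

```python
import numpy as np

# Verify the worked example A = U_1 D U_2^{-1} with the computed
# singular values sqrt(200) = 10*sqrt(2) and sqrt(50) = 5*sqrt(2).
A  = np.array([[10.0, 2.0], [5.0, 11.0]])
U1 = np.array([[3/5, -4/5], [4/5, 3/5]])                   # columns f_1, f_2
D  = np.diag([10*np.sqrt(2), 5*np.sqrt(2)])
U2 = (1/np.sqrt(2)) * np.array([[1.0, -1.0], [1.0, 1.0]])  # columns e_1, e_2

assert np.allclose(U1 @ D @ np.linalg.inv(U2), A)
assert np.allclose(U1.T @ U1, np.eye(2))   # U_1 unitary
assert np.allclose(U2.T @ U2, np.eye(2))   # U_2 unitary
```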

Classical matrix groups (12)
Definition. A group is a set G with an associative operation G × G → G with an identity and inverses (not necessarily commutative).
Definition. GL(n, F) ⊆ Mat(n, n, F) is the group of invertible matrices — the general linear group — under multiplication. SL(n, F) ⊆ Mat(n, n, F) is the subgroup of matrices of det = 1 — the special linear group. (We haven't defined the determinant yet. But det(AB) = det(A) det(B), so SL(n, F) is closed under multiplication.)
Definition. O(n, R) = the group of orthogonal (= unitary) matrices: O such that OᵗO = I = OOᵗ. U(n, C) = the group of unitary matrices: U such that U*U = I = UU*.
Note: We also have O(n, F): matrices such that OᵗO = I = OOᵗ, for any F. Hence U(n, C) ≠ O(n, C)!
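The closure claim for SL(n, F) follows purely from multiplicativity of the determinant, which is easy to see on a small example (my illustration):

```python
import numpy as np

# det(AB) = det(A)det(B), so the det = 1 matrices are closed
# under multiplication, as claimed for SL(n, F).
A = np.array([[2.0, 3.0], [1.0, 2.0]])   # det = 4 - 3 = 1
B = np.array([[1.0, 5.0], [0.0, 1.0]])   # det = 1

assert np.isclose(np.linalg.det(A), 1.0)
assert np.isclose(np.linalg.det(B), 1.0)
assert np.isclose(np.linalg.det(A @ B), 1.0)   # product stays in SL(2)
```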

Finite subgroups; platonic solids (13)
O(n, R) = the group generated by reflections (and rotations)!
Big question: what are the finite subgroups G < O(n, R)?
In case n = 2: just the cyclic groups C_m = { [cos θ −sin θ; sin θ cos θ] : θ = 2πk/m, 1 ≤ k ≤ m }, and the dihedral groups C_m ∪ TC_m, where T is a reflection.
In case n = 3: the groups of rotations only are: the cyclic and dihedral groups acting on a plane, and the platonic solid rotation groups: the groups of rotations fixing the platonic solids! The other groups are G ∪ TG, where G is as above and T is a single reflection.
This gives a classification of the platonic solids: they correspond to the only three finite rotation groups in O(3, R) which don't fix a plane!
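A minimal sketch (mine, not the lecture's) of the n = 2 case: the cyclic group C_m realized as rotation matrices really is a finite subgroup of O(2, R).

```python
import numpy as np

# C_m inside O(2, R): rotations by theta = 2*pi*k/m.
def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

m = 6
C = [rot(2*np.pi*k/m) for k in range(m)]

# Closed under multiplication: rot(2pi/6) * rot(4pi/6) = rot(6pi/6).
assert np.allclose(C[1] @ C[2], C[3])
# Every element is orthogonal, and the generator has order m.
assert all(np.allclose(g.T @ g, np.eye(2)) for g in C)
assert np.allclose(np.linalg.matrix_power(C[1], m), np.eye(2))
```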

Classical groups of V; bilinear and quadratic forms (14)
Similarly: GL(V), SL(V) ⊆ L(V) are the groups of invertible, resp. det = 1, linear transformations (V = vector space); O(V), U(V) are the groups of isometries in the cases F = R, C (V = inner product space).
For general F, we can define groups O(V) if we generalize inner product spaces: a bilinear form ⟨ , ⟩ : V × V → F is a map which is additive and homogeneous (in both slots!). It is symmetric if ⟨u, v⟩ = ⟨v, u⟩. It is nondegenerate if ⟨u, v⟩ = 0 for all u ∈ V implies v = 0, and also ⟨v, u⟩ = 0 for all u ∈ V implies v = 0.
Definition. Let V have a (nondegenerate symmetric) bilinear form. Then O(V) = {T ∈ L(V) : ⟨u, v⟩ = ⟨Tu, Tv⟩ for all u, v ∈ V}.
Quadratic form: q(v) := ⟨v, v⟩, satisfying q(λv) = λ²q(v) and the parallelogram identity; it uniquely determines ⟨ , ⟩ if 1/2 ∈ F.
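A hedged example (mine, not from the slide): the symmetric bilinear form ⟨u, v⟩ = uᵗBv with B = diag(1, −1) is nondegenerate but not an inner product, yet O(V) still makes sense — hyperbolic rotations preserve it.

```python
import numpy as np

# Nondegenerate symmetric bilinear form <u, v> = u^t B v, B = diag(1, -1).
B = np.diag([1.0, -1.0])
t = 0.5
T = np.array([[np.cosh(t), np.sinh(t)],     # a "hyperbolic rotation"
              [np.sinh(t), np.cosh(t)]])

assert np.allclose(T.T @ B @ T, B)   # <Tu, Tv> = <u, v> for all u, v: T in O(V)

q = lambda v: v @ B @ v              # the associated quadratic form
v = np.array([2.0, 1.0])
assert np.isclose(q(T @ v), q(v))    # T preserves q
assert np.isclose(q(3*v), 9*q(v))    # q(lambda v) = lambda^2 q(v)
```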

Preview of rest of course: C case (15)
Characteristic polynomial: write an upper-triangular matrix A = M(T). If the diagonal entries are λ_1, ..., λ_N, then char. poly := (x − λ_1) ⋯ (x − λ_N).
Theorem 1: This does not depend on the basis. Call it χ_T(x) = x^N + a_{N−1}x^{N−1} + ⋯ + a_0.
Definition: tr(T) = −a_{N−1} = Σ λ_i; det(T) = (−1)^N a_0 = Π λ_i.
Theorem 2 (Cayley–Hamilton): χ_T(T) = 0.
Theorem 3: tr(T) = tr(A) = the sum of the diagonal entries. Corollary: tr(T) + tr(S) = tr(T + S) (sum of the sums of eigenvalues = sum of the eigenvalues of the sum!)
Theorem 4: (−1)^N a_0 = det(A) = an explicit formula we will study, satisfying det(AB) = det(A) det(B). Case F = R: |det(A)| = the volume of A(unit n-cube). Corollary: det(TS) = det(T) det(S). Corollary: χ_T(x) = det(xI − M(T)). So the a_i are polynomial functions in the entries of A!
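All four previewed theorems can be checked numerically on a random matrix (my sketch; over R here, though the statements are for C):

```python
import numpy as np

# chi_T(x) = det(xI - A), Cayley-Hamilton chi_T(A) = 0,
# tr = sum of eigenvalues, det = product of eigenvalues.
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

coeffs = np.poly(A)                  # monic coefficients of det(xI - A)
# Cayley-Hamilton: evaluate the matrix polynomial chi_T(A).
chi_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - i)
            for i, c in enumerate(coeffs))
assert np.allclose(chi_A, 0, atol=1e-8)

evals = np.linalg.eigvals(A)
assert np.isclose(np.trace(A), evals.sum().real)           # tr = sum of lambda_i
assert np.isclose(np.linalg.det(A), np.prod(evals).real)   # det = product of lambda_i
```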

Arbitrary F (16)
Definition: χ_T(x) = det(xI − M(T)).
Theorem 1′: This does not depend on the basis.
Theorem 2 (Cayley–Hamilton): χ_T(T) = 0.
Corollary: The roots of χ_T are exactly the eigenvalues.

Jordan canonical form (17)
Finally, back to F = C. The following only requires Theorem 1:
Theorem. For F = C, in some basis, T has block diagonal form whose blocks are of the form
[λ 1 ; λ 1 ; ... ; λ 1 ; λ]
(λ down the diagonal, 1s on the adjacent diagonal, zeros elsewhere).
More generally (unfortunately we don't have time to prove): for F = R, we can say the same thing if we also allow λ to be a 2 × 2 block [a −b; b a], and then 1 becomes the 2 × 2 identity I (alternatively, a 2 × 2 matrix with a single entry 1 in the corner). For general F, we can say the same if we allow λ to be a matrix with irreducible characteristic polynomial, and replace 1 by the matrix with a single entry 1 in the corner.
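A small sketch of a single Jordan block (my example, written with the 1s above the diagonal): its only eigenvalue is λ, and J − λI is nilpotent of order equal to the block size, which is exactly what prevents diagonalization.

```python
import numpy as np

# One 3x3 Jordan block with eigenvalue lam.
lam = 5.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
N = J - lam * np.eye(3)

assert np.allclose(np.linalg.eigvals(J), lam)             # only eigenvalue: lam
assert np.allclose(np.linalg.matrix_power(N, 3), 0)       # N^3 = 0 (nilpotent)
assert not np.allclose(np.linalg.matrix_power(N, 2), 0)   # but N^2 != 0
```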