Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1) Travis Schedler Thurs, Nov 18, 2010 (version: Wed, Nov 17, 2:15 PM)
Goals (2) Isometries Positive operators Polar and singular value decompositions Unitary matrices and classical matrix groups Preview of rest of course!
Characterization of isometries (3) Correction: An isometry $T: V \to W$ need not be invertible even if $V$ and $W$ are finite-dimensional: e.g., if $V = 0$, the zero map is an isometry! However, it must be injective, so it is invertible if $\dim V = \dim W < \infty$. Proposition: If $T^*$ exists (e.g., if $V$ is finite-dimensional), then $T \in \mathcal{L}(V, W)$ is an isometry iff $T^* T = I_V$. Proof. First, $T$ is an isometry iff $\langle T^* T v, v \rangle = \langle v, v \rangle$ for all $v$. Next, $T^* T$, and hence $S := T^* T - I$, is always self-adjoint. So $T$ is an isometry iff $\langle S v, v \rangle = 0$ for all $v$. Now, apply the skipped result from last time below. Proposition (Proposition 7.4): If $T$ is self-adjoint, then $\langle T v, v \rangle = 0$ for all $v$ iff $T = 0$.
Proof of the proposition (4) Proposition (Proposition 7.4): If $T$ is self-adjoint, then $\langle T v, v \rangle = 0$ for all $v$ iff $T = 0$. Proof. Assume $\langle T v, v \rangle = 0$ for all $v$. Then $\langle T(v + u), v + u \rangle - \langle T v, v \rangle - \langle T u, u \rangle = 0$ for all $v, u$. Thus $\langle T v, u \rangle + \langle T u, v \rangle = 0$ for all $u, v$. When $T$ is self-adjoint, this says $2\,\mathrm{Re}\,\langle T v, u \rangle = 0$ for all $u, v$. Plugging in $iu$ for $u$, also $2\,\mathrm{Im}\,\langle T v, u \rangle = 0$ for all $u, v$. Thus $\langle T v, u \rangle = 0$ for all $u$, so $T v = 0$ for all $v$, i.e., $T = 0$. Note: From another skipped result on last week's slides, when $V$ is finite-dimensional, $\langle T v, v \rangle = 0$ for all $v$ iff $T = 0$, or $F = \mathbb{R}$ and $T$ is anti-self-adjoint.
Positive operators (5) Definition: A positive operator $T$ is a self-adjoint operator such that $\langle T v, v \rangle \geq 0$ for all $v$. In view of the spectral theorem, a self-adjoint operator is positive iff its eigenvalues are nonnegative (part of Theorem 7.27). Theorem (Remainder of Theorem 7.27): (i) Every operator of the form $T = S^* S$ is positive. (ii) Every positive operator admits a positive square root. Proof. (i) First, $T^* = (S^* S)^* = S^* S = T$ is self-adjoint. Next, $\langle T v, v \rangle = \langle S v, S v \rangle \geq 0$ for all $v$. (ii) For any orthonormal eigenbasis of $T$, let $\sqrt{T}$ be the operator with the same orthonormal eigenbasis, but with the nonnegative square roots of the eigenvalues.
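For matrices, the construction in (ii) is directly computable: diagonalize in an orthonormal eigenbasis and take nonnegative square roots of the eigenvalues. A minimal numpy sketch (variable names such as `sqrtT` are my own):

```python
import numpy as np

# Part (i): form a positive operator T = S*S from an arbitrary S.
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = S.T @ S  # self-adjoint with nonnegative eigenvalues

# Part (ii): orthonormal eigenbasis (columns of Q), then replace each
# eigenvalue by its nonnegative square root.
w, Q = np.linalg.eigh(T)
sqrtT = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

print(np.allclose(sqrtT @ sqrtT, T))  # sqrt(T) squares back to T
print(np.allclose(sqrtT, sqrtT.T))    # sqrt(T) is self-adjoint
```

The `np.clip` guards against tiny negative eigenvalues produced by floating-point roundoff.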
Polar decomposition (6) After the spectral theorem, the second-most important theorem of Chapters 6 and 7 is: Theorem (Polar decomposition: Theorem 7.41): Every $T \in \mathcal{L}(V)$ equals $S \sqrt{T^* T}$ for some isometry $S$. Main difficulty: $T$ need not be invertible! Lemma: For all $v \in V$, $\|T v\| = \|\sqrt{T^* T}\, v\|$. Proof. $\|\sqrt{T^* T}\, v\|^2 = \langle \sqrt{T^* T}\, v, \sqrt{T^* T}\, v \rangle = \langle (\sqrt{T^* T})^* \sqrt{T^* T}\, v, v \rangle = \langle T^* T v, v \rangle = \langle T v, T v \rangle = \|T v\|^2$. Corollary: $\mathrm{null}(T) = \mathrm{null}(\sqrt{T^* T})$. We may thus define $S_1 : \mathrm{range}(\sqrt{T^* T}) \to \mathrm{range}(T)$ by $S_1(\sqrt{T^* T}\, v) = T v$. Thus, for all $v \in V$, $S_1 \sqrt{T^* T}\, v = T v$. Also, $\|S_1 u\| = \|u\|$ for all $u$ $(= \sqrt{T^* T}\, v)$ by the lemma. So $S_1$ is an isometry.
Completion of proof (7) We only have to extend $S_1$ to an isometry on all of $V$. Note that $\mathrm{range}(\sqrt{T^* T}) \oplus \mathrm{range}(\sqrt{T^* T})^\perp = V = \mathrm{range}(T) \oplus \mathrm{range}(T)^\perp$. Thus, the extensions of $S_1 : \mathrm{range}(\sqrt{T^* T}) \to \mathrm{range}(T)$ to an isometry $S : V \to V$ are exactly $S = S_1 \oplus S_2$, where $S_2 : \mathrm{range}(\sqrt{T^* T})^\perp \to \mathrm{range}(T)^\perp$ is an isometry. Since these are inner product spaces of the same dimension, there always exists such an isometry, obtained by taking an orthonormal basis to an orthonormal basis. Recall here that $T_1 \oplus T_2$ on $U_1 \oplus U_2$ means $(T_1 \oplus T_2)(u_1 + u_2) = T_1(u_1) + T_2(u_2)$ for $u_1 \in U_1$, $u_2 \in U_2$.
Singular value decomposition (SVD) (8) Let $V$ be finite-dimensional and $T \in \mathcal{L}(V)$. Theorem: There exist orthonormal bases $(e_1, \ldots, e_n)$ and $(f_1, \ldots, f_n)$ and nonnegative values $s_i \geq 0$ such that $T e_i = s_i f_i$. The $s_i$ are called the singular values. Proof. Let $(e_1, \ldots, e_n)$ be an orthonormal eigenbasis of the positive operator $\sqrt{T^* T}$. Let $s_1, \ldots, s_n$ be the nonnegative eigenvalues. Using polar decomposition, $T = S \sqrt{T^* T}$ for $S$ an isometry. Let $f_i := S e_i$. Then $T e_i = s_i f_i$. Corollary: $M_{(e_i),(f_i)}(T) = D$, where $D$ is diagonal with entries $s_i \geq 0$. Improvement on normal form: $(e_i)$, $(f_i)$ orthonormal! Note: To compute, first find the eigenvalues $s_i^2$ of $T^* T$, then an orthonormal eigenbasis $e_i$. This yields the $f_i$, $S$, and $\sqrt{T^* T}$!
Unitary matrices and $A = U_1 D U_2^{-1}$ decomposition (9) Corollary: If $A = M_{(v_i)}(T)$ for any orthonormal basis, then $A = U_1 D U_2^{-1}$ for the change-of-basis matrices $U_1, U_2$. Moreover, the columns of $U_1$ and $U_2$ form orthonormal bases of $\mathrm{Mat}(n, 1, F)$: such matrices are called unitary. Proposition: The following are equivalent for $U \in \mathrm{Mat}(n, n, F)$: the columns form an orthonormal basis; $U U^* = I = U^* U$; $U = M_{(e_i)}(S)$ for some isometry $S \in \mathcal{L}(V)$ and some orthonormal basis $(e_i)$ of $V$. Proof. The first two are immediately equivalent. In general, $M_{(e_i)}(S)$ has columns equal to the $S e_i$ in the basis $(e_i)$, so $\langle S e_i, S e_j \rangle$ = the dot product of the $i$-th and $j$-th columns of $M_{(e_i)}(S)$.
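A quick numerical illustration of the first two equivalences (a sketch; `qr` is just a convenient way to manufacture orthonormal columns):

```python
import numpy as np

# Build a unitary matrix by orthonormalizing a random complex matrix via QR.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Uc, _ = np.linalg.qr(M)  # columns of Uc form an orthonormal basis of C^3

# The (i, j) entry of U*U is the inner product of columns i and j,
# so orthonormal columns is exactly the statement U*U = I.
print(np.allclose(Uc.conj().T @ Uc, np.eye(3)))  # U*U = I
print(np.allclose(Uc @ Uc.conj().T, np.eye(3)))  # UU* = I as well
```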
Example of SVD ($A = U_1 D U_2^{-1}$ decomposition)! (10) Let $A = \begin{pmatrix} 10 & 2 \\ 5 & 11 \end{pmatrix}$. Compute the decomposition $A = U_1 D U_2^{-1}$. First step: find the eigenvalues and an eigenbasis of $A^* A = \begin{pmatrix} 125 & 75 \\ 75 & 125 \end{pmatrix}$. Char. poly of $A^* A$ = $x^2 - (\mathrm{tr}\, A^* A) x + \det A^* A = x^2 - 250 x + 10000 = (x - 200)(x - 50)$. Eigenvalues: $s_1^2 = 200$, $s_2^2 = 50$. Nullspace of $A^* A - 200 I = \begin{pmatrix} -75 & 75 \\ 75 & -75 \end{pmatrix}$: spanned by $e_1 := \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Same computation for $e_2$: get $e_2 := \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. Alternative: $e_2$ is determined up to scaling, so that $e_1 \perp e_2$.
Computation continued (11) Recall: $A = \begin{pmatrix} 10 & 2 \\ 5 & 11 \end{pmatrix}$, $s_1^2 = 200$, $s_2^2 = 50$, $e_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $e_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. Now, $f_1 = s_1^{-1} A e_1 = \frac{1}{20} \begin{pmatrix} 12 \\ 16 \end{pmatrix} = \begin{pmatrix} 3/5 \\ 4/5 \end{pmatrix}$. Similarly, $f_2 = s_2^{-1} A e_2 = \frac{1}{10} \begin{pmatrix} 8 \\ -6 \end{pmatrix} = \begin{pmatrix} 4/5 \\ -3/5 \end{pmatrix}$. Caution: $f_2 \perp f_1$, but this only determines $f_2$ up to scaling by absolute value one (here $\pm 1$)! Now, $U_1 = (f_1\ f_2) = \begin{pmatrix} 3/5 & 4/5 \\ 4/5 & -3/5 \end{pmatrix}$, $D = \begin{pmatrix} s_1 & 0 \\ 0 & s_2 \end{pmatrix} = \begin{pmatrix} 10\sqrt{2} & 0 \\ 0 & 5\sqrt{2} \end{pmatrix}$, and $U_2 = (e_1\ e_2) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. Check: $A = U_1 D U_2^{-1}$!
Classical matrix groups (12) Definition: A group is a set $G$ with an associative operation $G \times G \to G$ with an identity and inverses (not necessarily commutative). Definition: $GL(n, F) \subseteq \mathrm{Mat}(n, n, F)$ is the group of invertible matrices ("general linear group"), under multiplication. $SL(n, F) \subseteq \mathrm{Mat}(n, n, F)$ is the subgroup of matrices of $\det = 1$ ("special linear group"). (We haven't defined determinant yet. But $\det(AB) = \det(A) \det(B)$, so $SL(n, F)$ is closed under multiplication.) Definition: $O(n, \mathbb{R})$ = the group of orthogonal (= unitary) matrices: $O$ such that $O^t O = I = O O^t$. $U(n, \mathbb{C})$ = the group of unitary matrices: $U$ such that $U^* U = I = U U^*$. Note: We also have $O(n, F)$: matrices such that $O^t O = I = O O^t$, for any $F$. Hence $U(n, \mathbb{C}) \neq O(n, \mathbb{C})$!
Finite subgroups; platonic solids (13) $O(n, \mathbb{R})$ = group generated by reflections (and rotations)! Big question: What are the finite subgroups $G < O(n, \mathbb{R})$? In case $n = 2$: Just the cyclic groups $C_m = \left\{ \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} : \theta = 2\pi k/m,\ 1 \leq k \leq m \right\}$, and the dihedral groups $C_m \cup T C_m$, where $T$ is a reflection. In case $n = 3$: The groups of rotations only are: cyclic and dihedral groups on the plane, and platonic solid rotation groups: groups of rotations fixing the platonic solids! The other groups are $G \cup TG$, where $G$ is as above and $T$ is a single reflection. This gives a classification of the platonic solids: their rotation groups are the only three finite rotation groups in $O(3, \mathbb{R})$ which don't fix a plane!
Classical groups of $V$; bilinear and quadratic forms (14) Similarly: $GL(V), SL(V) \subseteq \mathcal{L}(V)$ are the groups of invertible, resp. $\det = 1$, linear transformations ($V$ = vector space); $O(V), U(V)$ are the groups of isometries in the cases $F = \mathbb{R}, \mathbb{C}$ ($V$ = inner product space). For general $F$, we can define groups $O(V)$ if we generalize inner product spaces: A bilinear form $\langle \cdot, \cdot \rangle : V \times V \to F$ is a map which is additive and homogeneous (in both slots!). It is symmetric if $\langle u, v \rangle = \langle v, u \rangle$. It is nondegenerate if $\langle u, v \rangle = 0$ for all $u \in V$ implies $v = 0$, and also $\langle v, u \rangle = 0$ for all $u \in V$ implies $v = 0$. Definition: Let $V$ have a (nondegenerate symmetric) bilinear form. Then $O(V) := \{ T \in \mathcal{L}(V) : \langle u, v \rangle = \langle Tu, Tv \rangle \text{ for all } u, v \in V \}$. Quadratic form: $q(v) := \langle v, v \rangle$, satisfying $q(\lambda v) = \lambda^2 q(v)$ and the parallelogram identity; it uniquely determines $\langle \cdot, \cdot \rangle$ if $1/2 \in F$.
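The last point can be illustrated over $\mathbb{R}$ (where $1/2$ exists): the quadratic form recovers the symmetric bilinear form by polarization, $\langle u, v \rangle = \tfrac{1}{2}(q(u+v) - q(u) - q(v))$. A numpy sketch (names `B`, `form`, `q` are mine):

```python
import numpy as np

# A symmetric bilinear form on R^3 given by a symmetric matrix B: <u, v> = u^T B v.
rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
B = (B + B.T) / 2  # symmetrize

def form(u, v):
    return u @ B @ v

def q(v):  # the associated quadratic form q(v) = <v, v>
    return form(v, v)

u, v = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(q(2.5 * v), 2.5**2 * q(v)))                  # q(lv) = l^2 q(v)
print(np.isclose((q(u + v) - q(u) - q(v)) / 2, form(u, v)))   # polarization
```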
Preview of rest of course: $\mathbb{C}$ case (15) Characteristic polynomial: write an upper-triangular matrix $A = M(T)$. If the diagonal entries are $\lambda_1, \ldots, \lambda_N$, then char. poly $:= (x - \lambda_1) \cdots (x - \lambda_N)$. Theorem 1: This does not depend on the basis. Call it $\chi_T(x) = x^N + a_{N-1} x^{N-1} + \cdots + a_0$. Definition: $\mathrm{tr}(T) = -a_{N-1} = \sum \lambda_i$; $\det(T) = (-1)^N a_0 = \prod \lambda_i$. Theorem 2 (Cayley-Hamilton): $\chi_T(T) = 0$. Theorem 3: $\mathrm{tr}(T) = \mathrm{tr}(A)$ = sum of diagonal entries. Corollary: $\mathrm{tr}(T) + \mathrm{tr}(S) = \mathrm{tr}(T + S)$ (sum of sums of eigenvalues = sum of eigenvalues of the sum!) Theorem 4: $\det(A)$ = an explicit formula we will study, satisfying $\det(AB) = \det(A) \det(B)$. Case $F = \mathbb{R}$: $|\det(A)|$ = volume of $A(\text{unit } n\text{-cube})$. Corollary: $\det(TS) = \det(T) \det(S)$. Corollary: $\chi_T(x) = \det(xI - M(T))$. So the $a_i$ are polynomial functions in the entries of $A$!
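These previewed facts can be spot-checked numerically: `np.poly` returns the monic coefficients of $\det(xI - A)$, so evaluating that polynomial at $A$ should give $0$ (Cayley-Hamilton), and the coefficients encode trace and determinant. A sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

# Monic characteristic polynomial det(xI - A), highest power first: [1, c1, c2, c3].
c = np.poly(A)

# Cayley-Hamilton: chi_A(A) = A^3 + c1 A^2 + c2 A + c3 I = 0.
chiA = sum(ci * np.linalg.matrix_power(A, len(c) - 1 - i)
           for i, ci in enumerate(c))
print(np.allclose(chiA, 0, atol=1e-8))

# tr(A) = sum of eigenvalues = -c1;  det(A) = product of eigenvalues = (-1)^3 c3.
print(np.isclose(np.trace(A), -c[1]))
print(np.isclose(np.linalg.det(A), -c[3]))
```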
Arbitrary $F$ (16) Definition: $\chi_T(x) = \det(xI - M(T))$. Theorem 1': Does not depend on basis. Theorem 2 (C-H): $\chi_T(T) = 0$. Corollary: The roots of $\chi_T$ are exactly the eigenvalues.
Jordan canonical form (17) Finally, back to $F = \mathbb{C}$. The following only requires Theorem 1: Theorem: For $F = \mathbb{C}$, in some basis, $T$ has block diagonal form with blocks $\begin{pmatrix} \lambda & & & \\ 1 & \lambda & & \\ & \ddots & \ddots & \\ & & 1 & \lambda \end{pmatrix}$. More generally (unfortunately we don't have time to prove): For $F = \mathbb{R}$, we can say the same thing if we also allow $\lambda$ to be a $2 \times 2$ block $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$, and then the $1$ becomes $I$ (alternatively, a $2 \times 2$ matrix with a single $1$ entry). For general $F$, we can say the same if we allow $\lambda$ to be a matrix with irreducible characteristic polynomial, and replace the $1$ by a matrix with a single $1$ in the corner.
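As a sanity check, sympy computes the Jordan form exactly over $\mathbb{Q}$. (Note: sympy places the off-diagonal $1$s above the diagonal, the transpose of the convention drawn above; the example matrix is my own, chosen to have a nontrivial block.)

```python
from sympy import Matrix

# Eigenvalue 2 has algebraic multiplicity 2 but only a 1-dimensional
# eigenspace, so the Jordan form has a genuine 2x2 block.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

P, J = A.jordan_form()  # A = P * J * P^{-1}
print(J)
print(A == P * J * P.inv())
```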