
Problem F02.10. Let $T$ be a linear operator on a finite dimensional complex inner product space $V$ such that $T^*T = TT^*$. Show that there is an orthonormal basis of $V$ consisting of eigenvectors of $T$.

Solution. When $T$ satisfies $T^*T = TT^*$, we call $T$ normal. We prove this by induction on the dimension of the space that $T$ operates on. If $T$ operates on a $1$-dimensional space, the claim is obvious. Suppose the claim holds for any normal $T$ operating on an $(n-1)$-dimensional space ($n \ge 2$). By the fundamental theorem of algebra, the characteristic polynomial of $T$ has a root $\lambda$, which is an eigenvalue of $T$. Let $v$ be a corresponding non-zero eigenvector (wlog $\|v\| = 1$). Then $v^\perp = \{x \in V : (x, v) = 0\}$ has dimension $n-1$. Also, if $x \in v^\perp$, then
$$(Tx, v) = (x, T^*v) = (x, \overline{\lambda} v) = \lambda (x, v) = 0,$$
using that $T^*v = \overline{\lambda} v$, which we verify below. Thus $v^\perp$ is $T$-invariant. Then the restriction of $T$ to $v^\perp$ is a normal operator on an $(n-1)$-dimensional space. Then by our inductive hypothesis, there is an orthonormal basis $\{v_2, \ldots, v_n\}$ for $v^\perp$ consisting of eigenvectors of $T$. Then $\{v, v_2, \ldots, v_n\}$ is an orthonormal set with $n$ elements and is thus a basis for $V$. It remains to prove the claim used above, that $v$ is also an eigenvector of $T^*$, and then we will have the required basis.

Since $T$ is normal, so is $T - cI$ for every $c \in \mathbb{C}$. Also, for any normal operator $S$ and any vector $x$, we see
$$\|Sx\|^2 = (Sx, Sx) = (x, S^*Sx) = (x, SS^*x) = (S^*x, S^*x) = \|S^*x\|^2.$$
Then since $v$ is an eigenvector of $T$, we have $(T - \lambda I)v = 0$. Then
$$0 = \|(T - \lambda I)v\|^2 = \|(T - \lambda I)^*v\|^2 = \|(T^* - \overline{\lambda} I)v\|^2.$$
Thus $T^*v = \overline{\lambda} v$, so $v$ is also an eigenvector of $T^*$, as claimed. Thus $\{v, v_2, \ldots, v_n\}$ is a basis of $V$ consisting of eigenvectors of $T$. This completes the induction and the proof.

Problem W02.8. Let $T : V \to W$ and $S : W \to X$ be linear transformations of real finite dimensional vector spaces. Prove that
$$\operatorname{rank}(T) + \operatorname{rank}(S) - \dim(W) \le \operatorname{rank}(S \circ T) \le \max\{\operatorname{rank}(T), \operatorname{rank}(S)\}.$$

Solution. By the Rank-Nullity Theorem,
$$\operatorname{rank}(T) + \dim(\ker(T)) = \dim(V), \quad (1)$$
$$\operatorname{rank}(S) + \dim(\ker(S)) = \dim(W), \quad (2)$$
$$\operatorname{rank}(S \circ T) + \dim(\ker(S \circ T)) = \dim(V). \quad (3)$$
Adding (1) and (2) and then subtracting (3) gives
$$\operatorname{rank}(T) + \operatorname{rank}(S) - \operatorname{rank}(S \circ T) + \dim(\ker(T)) + \dim(\ker(S)) - \dim(\ker(S \circ T)) = \dim(W).$$
Let $\{v_1, \ldots, v_l\}$ be a basis for $\ker(T)$. Then $(S \circ T)(v_i) = 0$ for each $i$, so $\ker(T) \subseteq \ker(S \circ T)$. Thus we can extend this to a basis $\{v_1, \ldots, v_l, y_1, \ldots, y_k\}$ for $\ker(S \circ T)$. Then for each $j = 1, \ldots, k$, we have
$$0 = (S \circ T)(y_j) = S(T(y_j)).$$

Hence T (y j ) ker(s) for each j. Further if a 1,..., a k C are such that Then a 1 T (y 1 ) + + a k T (y k ) = 0, T (a 1 y 1 + + a k y k ) = 0 so a 1 y 1 + + a k y k ker(t ) so there are b 1,..., b l C such that a 1 y 1 + + a k y k = b 1 v 1 + b l v l = a 1 y 1 + + a k y k b 1 v 1 b l v l = 0. But these vectors form a basis for ker(s T ) so in particular, a 1 = = a k = 0. Thus {T (y 1 ),..., T (y k )} is a linearly independent subset of ker(s) and so dim(ker(s)) k. Hence and so the equation above yields or dim(ker(t )) + dim(ker(s)) dim(ker(s T )) rank(t ) + rank(s) rank(s T ) dim(w ) rank(t ) + rank(s) dim(w ) rank(s T ) which is the first half of the inequality. Now suppose that {x 1,..., x m } is a basis for im(s T ). Then there are u 1,..., u m V such that (S T )(u i ) = x i, i = 1,..., m. Thus S(T (u i )) = x i for each i, and so in particular x i im(s) for each i and so we have m linearly independent vectors in im(s). This gives rank(s T ) rank(s) max{rank(s), rank(t )}. This is the second half of the inequality. Problem W02.10. Let V be a finite dimensional complex inner product space and f : V C a linear functional. Show that there exists a vector w V such that f(v) = (v, w) for all v V. Solution. Let v 1,..., v n be an orthonormal basis for V. Given f V, put f(v i ) = α i C. Next set w = α 1 v 1 + + α n v n. For any v V, there are β 1,..., β n C such that Then and ( (v, w) = β i v i, v = β 1 v 1 + + β n v n. f(v) = β 1 f(v 1 ) + + β n f(v n ) = β 1 α 1 + + β n α n i=1 Thus by orthonormality, ) α j v j = j=1 (v, w) = i=1 (β i v i, α j v j ) = j=1 β i α i = f(v). i=1 2 i=1 β i α j (v i, v j ). j=1

Since v was arbitrary, f(v) = (v, w) for all v V. Problem W02.11. Let V be a finite dimensional complex inner product space and T : V V a linear transformation. Prove that there exists an orthonormal ordered basis for V such that the matrix representation of T in this basis is upper triangular. Solution. We prove this by induction on the dimension of the space T acts upon. If T is acting on a 1-dimensional space, the claim is obvious. Suppose the claim holds for linear maps acting on n 1 dimensional spaces. Let dim(v ) = n. By the fundamental theorem of algebra, there is an eigenvalue λ C and corresponding non-zero eigenvector 0 v 1 V of T ; wlog v 1 = 1. Then v1 = {x V : (x, v 1 ) = 0} is an n 1-dimensional space. Also for x v1, we have (T (x), v 1 ) = (x, T (v 1 )) = (x, λv 1 ) = λ(x, v 1 ) = 0. Hence v 1 is T -invariant. Thus T v 1 is an operator acting on an n 1-dimensional space. By our inductive hypothesis, there is an orthonormal basis {v 2,..., v n } for v such that the matrix of T is upper-triangular with respect to this basis. Then {v 1, v 2,..., v n } is an orthonormal basis for V. Further, T (v 1 ) = n j=1 a 1jv j for some a 1j C and by assumption T (v i ) = n j=i a ijx j. Thus the matrix of T with respect to this basis is a 11 a 12 a 13 a 1n 0 a 22 a 23 a 2n A = 0 0 a 33 a 3n...... 0 0 0 a nn Problem S03.8. Let V be an n-dimensional complex vector space and T : V V a linear operator. Suppose that the characteristic polynomial of T has n distinct roots. Show that there is a basis B of V such that the matrix representation of T in the basis B is diagonal. Solution. Since each root of the characteristic polynomial (and thus each eigenvalue of T ) is distinct and since eigenvectors corresponding to different eigenvalues are linearly independent, each eigenspace E λ is a one-dimensional T -invariant subspace. Let λ 1,..., λ n be the distinct eigenvalues of T with corresponding eigenvectors v 1,..., v n. We know that eigenvectors corresponding to distinct eigenvalues are linearly independent, thus E λi E λj = {0} whenever i j. Further, since we have n-linearly independent vectors, {v 1,..., v n } is a basis for V. The matrix of T with respect to this basis is λ 1 λ 2 0. [T ] =..,. 0.. λ n 3

since $T(v_i) = \lambda_i v_i$, $i = 1, \ldots, n$.

Problem S03.9. Let $A \in M_3(\mathbb{R})$ satisfy $\det(A) = 1$ and $A^tA = I = AA^t$ where $I$ is the identity matrix. Prove that the characteristic polynomial of $A$ has $1$ as a root.

Solution. Clearly the characteristic polynomial of $A$ has a real root since it has odd degree. Let $\lambda$ be a real root of the characteristic polynomial. Then $\lambda$ is an eigenvalue of $A$. Suppose $0 \ne v \in \mathbb{R}^3$ is a normalized eigenvector corresponding to $\lambda$. Then
$$\lambda^2 = \lambda^2 (v, v) = (\lambda v, \lambda v) = (Av, Av) = (v, A^tAv) = (v, v) = 1.$$
Thus $\lambda = \pm 1$. If $\lambda = 1$, then we are done. If $\lambda = -1$, suppose $\mu, \nu \in \mathbb{C}$ are the other eigenvalues of $A$. Then $1 = \det(A) = \mu\nu\lambda = -\mu\nu$. If $\mu, \nu$ are not real, they must be a conjugate pair since $A$ is real. But this is impossible, because then $\mu\nu = |\mu|^2 \ge 0$. Thus both $\mu, \nu$ are real. By the same reasoning as above, $\mu, \nu = \pm 1$. Then $\mu\nu = -1$ forces $\mu = 1$, $\nu = -1$ (or vice versa). Thus $A$ has $1$ as an eigenvalue and so the characteristic polynomial of $A$ has $1$ as a root.

Problem S03.10. Let $V$ be a finite dimensional real inner product space and $T : V \to V$ a hermitian linear operator. Suppose the matrix representation of $T^2$ in the standard basis has trace zero. Prove that $T$ is the zero operator.

Solution. Let $\dim(V) = n$ and let $A$ be the matrix of $T$ in the standard basis. Since $T$ is hermitian, so is $A$, and thus by the spectral theorem there is an orthonormal basis $\{v_1, \ldots, v_n\}$ for $\mathbb{R}^n$ consisting of eigenvectors of $A$. Let $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$ be the corresponding eigenvalues (repeats are allowed and the eigenvalues are real since $A$ is hermitian). Then
$$Av_i = \lambda_i v_i \implies A^2 v_i = \lambda_i A v_i = \lambda_i^2 v_i.$$
Thus $(\lambda_i^2, v_i)$ is an eigenpair for $A^2$, which is the matrix of $T^2$. We are given that the trace of this matrix is zero, but the trace is the sum of the eigenvalues. Hence
$$\sum_{i=1}^{n} \lambda_i^2 = 0 \implies \lambda_1 = \cdots = \lambda_n = 0;$$
again this holds since all eigenvalues of a hermitian operator are real. Then $Av_i = 0$, $i = 1, \ldots, n$. But this means $A$ sends a basis for $\mathbb{R}^n$ to zero. This is only possible if $A$ is the zero matrix. Thus $T$ is the zero transformation.

Problem F03.9. Consider a $3 \times 3$ real symmetric matrix with determinant $6$. Assume that $(1, 2, 3)$ and $(0, 3, -2)$ are eigenvectors with corresponding eigenvalues $1$ and $2$ respectively.
(a) Give an eigenvector of the form $(1, x, y)$ which is linearly independent from the two vectors above.
(b) What is the eigenvalue of this eigenvector?

Solution. We answer the questions in the reverse order. The product of the eigenvalues equals the determinant, so the third eigenvalue is $3$. This answers (b). By the spectral theorem, the eigenspaces corresponding to distinct eigenvalues will be orthogonal. Here all eigenvalues are distinct. Since the first two eigenvectors span a two dimensional space, any vector orthogonal to both will necessarily be a third eigenvector. Taking the cross product of the two vectors gives a vector which is orthogonal to both. We see
$$(1, 2, 3) \times (0, 3, -2) = \det \begin{pmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ 1 & 2 & 3 \\ 0 & 3 & -2 \end{pmatrix} = (-13, 2, 3).$$
Then $v = (1, -2/13, -3/13)$ is an eigenvector of the desired form. This answers (a).

Problem F03.10. (a) Take $t \in \mathbb{R}$ such that $t$ is not an integer multiple of $\pi$. Prove that if
$$A = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$$
then there is no invertible real matrix $B$ such that $BAB^{-1}$ is diagonal.
(b) Do the same for
$$A = \begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}$$
where $\lambda \in \mathbb{R} \setminus \{0\}$.

Solution. (a) $A$ doesn't have any real eigenvalues, so it cannot be diagonalizable in $M_2(\mathbb{R})$. Indeed, any eigenvalue $\lambda$ of $A$ would need to satisfy
$$(\cos t - \lambda)^2 + \sin^2 t = 0 \implies \cos t - \lambda = \pm i \sin t \implies \lambda = \cos t \pm i \sin t,$$
which is not real since $t$ is not an integer multiple of $\pi$.
(b) The only eigenvalue of $A$ is $1$ and it has algebraic multiplicity $2$. However, the only eigenvector corresponding to this eigenvalue (up to scaling) is $v = (1, 0)$. Thus the geometric multiplicity of the eigenvalue is $1$. Since $A$ has an eigenvalue whose algebraic and geometric multiplicities are unequal, $A$ is not diagonalizable.

Problem S04.7. Let $V$ be a finite dimensional real vector space and $U, W \subseteq V$ be subspaces of $V$. Show both of the following:
(a) $U^0 \cap W^0 = (U + W)^0$
(b) $(U \cap W)^0 = U^0 + W^0$

Solution.

(a) Let f U 0 W 0. Then f U 0 and f W 0. Take x U + W. Then x = u + w for some u U, w W. We see Thus f (U + W ) 0. f(x) = f(u + w) = f(u) + f(w) = 0 + 0 = 0. Now take f (U + W ) 0. For any u U, we have u U + W. Then f(u) = 0. Hence, since u was arbitrary, f U 0. Similarly, f(w) = 0 for all w W so f W 0. Thus f U 0 W 0. We conclude that U 0 W 0 = (U + W ) 0. (b) Let f (U W ) 0. For any Z V, define f Z V by f Z (v) = f(v), v Z, f Z (v) = 0, v V Z. Then f = f U + f V U. Clearly f V U U 0. We must show f U W 0. Let w W. If w U, then f V U (w) = 0, and f(w) = 0 since w U W. Thus 0 = f(w) = f U (w)+f V U (w) = f U (w). Otherwise w U, in which case f U (w) = 0 by definition. Hence, f U (w) = 0 for all w W and so f U W 0. Then f = f U + f V U U 0 + W 0. Take f U 0 + W 0. Then f = f 1 + f 2, f 1 U 0, f 2 W 0. For any v U W, we have v U and v W. Then f(v) = f 1 (v) + f 2 (v) = 0 + 0 = 0. Hence f(v) = 0 for all v U W so f (U W ) 0. We conclude (U W ) 0 = U 0 + W 0. Problem S04.9. Let V be a finite dimensional real inner product space and T : V V a linear operator. Show the following are equivalent: (a) (T x, T y) = (x, y) for all x, y V, (b) T x = x for all x V, (c) T T = I, where T is the adjoint of T and I : V V is the identity map, (d) T T = I. Solution. (a) = (b) by taking x = y. Assume (b) is true. Then for any x, y V, (x, x) + 2(x, y) + (y, y) = (x + y, x + y) = (T x + T y, T x + T y) = (T x, T x) + 2(T x, T y) + (T y, T y) = (x, x) + 2(T x, T y) + (y, y). 6

Thus 2(x, y) = 2(T x, T y) and so (b) = (a). Assume (a), (b) are true. Consider, for any v V, ((T T I)v, (T T I)v) = (T T v v, T T v v) = (T T v, T T v) (T T v, v) (v, T T v) + (v, v) = (T T v, T T v) (T v, T v) (T v, T v) + (v, v) = (T T v, T T v) (v, v) (v, v) + (v, v) [by (a)] = (T T v, T T v) (v, v) = (T v, T T T v) (v, v) = (v, T T v) (v, v) [by (a)] = (T v, T v) (v, v) = 0 [by (b)]. Thus (T T I)v = 0 for all v V so T T = I. Thus (b) = (c). Assume (c) is true. Then for any v V, (v, v) = (v, Iv) = (v, T T v) = (T v, T v). Hence (c) = (b). Thus far we have that (a),(b),(c) are equivalent. Assume these hold.then (a) implies (T T x, T T y) = (T x, T y) = (T T x, y) = (T T x, (T T I)y) = 0 for all x, y V. From (c) we see that T, T are bijective, thus so is T T. Hence (T T I)y is orthogonal to all of V so (T T I)y = 0. However y was arbitrary, so this implies that T T I = 0 so T T = I. Thus (a),(b),(c) imply (d). Assume (d) is true. Then for any x, y V, we have Then (x, y) = (x, Iy) = (x, T T y) = (T x, T x). (T T x, T T y) = (T x, T y) = (T T x, y) = (T T x, (T T I)y) = 0. This implies (c) (and thus (a),(b)) by the same reasoning as above. Problem F04.10. Let T : R n R n be a linear transformation and for λ C, define the subspace V (λ) = {v V : (T λi) n v = 0 for some n 1}. This is called the generalized eigenspace of λ. 1. Prove there for each λ C there is a fixed N N such that V (λ) = ker ( (T λi) N). 2. Prove that if λ µ then V (λ) V (µ) = {0}. Hint: use the fact that Solution. I = T λi µ λ + T µi λ µ. 7

1. Let v 1,..., v k be a basis for V (λ). For i = 1,..., k, define N i N to be equal to the least n N such that (T λi) n v i = 0. Take N = max{n 1,..., N k }. Then (T λi) N v i = 0 for all i = 1,..., k. Since we can build any vector in V (λ) from a linear combination of v 1,..., v k, we see V (λ) = ker ( (T λi) N). 2. By part 1., there are N 1, N 2 N such that V (λ) = ker ( (T λi) N 1 ) and V (µ) = ker ( (T µi) N 2 ). Take M = max{n 1, N 2 }. We see that ( T λi I = I 2M = µ λ + T µi λ µ Since (T λi) and (T µi) commute with each other, by the binomial theorem, if we expand the right hand side above, each summand will have a factor of (T λi) or (T µi) which has a power greater than or equal to M. Thus if v V (λ) V (µ), then ( T λi v = Iv = µ λ + T µi ) 2M v = 0. λ µ Hence V (λ) V (µ) = {0}. Problem S05.1. ) 2M Given n 1, let tr : M n (C) C denote the trace operator: tr(a) = A kk, A M n (C). k=1 (a) Determine a basis for the kernel of tr. (b) For X M n (C), show that tr(x) = 0 if and only if there are matrices A 1,..., A m, B 1,..., B m M n (C) such that m X = A j B j B j A j. Solution. j=1 (a) Since dim(m n (C)) = n 2 and since the range of tr is C which has dimension 1 as a vector space over itself, we know from the Rank-Nullity theorem that dim ker(tr) = n 2 1. Thus to find a basis for the kernel, it is sufficient to find n 2 1 linearly independent matrices on which the trace operator vanishes. For i = 1,..., n, j = 1,... n,, define E ij M n (C) be such that the entry in the i th row and j th column is 1 and the other entries are 0. Then the matrices E ij, i j are clearly linearly independent and have 0 trace. This constitutes n 2 n linearly independent matrices with 0 trace. For n 1 more, consider F k = E 11 E kk, k = 2,..., n. These are also linearly independent. Thus the collection forms a basis for the kernel of tr. {E ij : i, j {1,..., n}, i j} {F k : k = 2,..., n} 8

(b) It is well-known that tr(ab) = tr(ba), A, B M n (C). Thus if X has the specified form, m m tr(x) = tr(a j B j ) tr(b j A j ) = tr(a j B j ) tr(a j B j ) = 0. j=1 Conversely, if X has trace 0, we must have some complex numbers α ij, 1 i, j n, i j and β k, k = 2,..., n such that Further, we see that Thus Further Thus X = X = i,j=1,i j j=1 α ij E ij + β k F k. k=2 E ij = E ii E ij and 0 = E ij E ii. α ij E ij = E ii (α ij E ij ) (α ij E ij )E ii, i, j = 1,..., n, i j. i,j=1,i j F k = E k1 E 1k E 1k E k1, k = 2,..., n. E ii (α ij E ij ) (α ij E ij )E ii + This is the desired form up to renaming matrices and indices. (β k E k1 )E 1k E 1k (β k E k1 ). Problem S05.2. Let V be a finite-dimensional vector space and let V denote the dual space of V. For a set W V, define and for a set U V, define k=2 W = {f V : f(w) = 0 for all w W } V U = {v V : f(v) = 0 for all f U} V. (a) Show that for any W V, (W ) = Span (W ). (b) Let W be a subspace of V. Give an explicit isomorphism bewteen (V/W ) and W. Show that it is an isomorphism. Solution. (a) Let W V. Take x Span (W ). Then there are scalars α 1,..., α n and vectors w 1,..., w n W such that x = α 1 w 1 + + α n w n. Then for any f W we have f(x) = α 1 f(w 1 ) + + α n f(w n ) = α 1 0 + + α n 0 = 0. 9

Since $f$ was arbitrary, $f(x) = 0$ for all $f \in W^\perp$. Hence by definition $x \in (W^\perp)^\perp$. Thus $\operatorname{Span}(W) \subseteq (W^\perp)^\perp$.

Now take $x \notin \operatorname{Span}(W)$. Extend a basis of $\operatorname{Span}(W)$ to a basis of $V$ containing $x$, and define $f \in V^*$ by $f(x) = 1$ and $f = 0$ on the remaining basis vectors. Then $f$ vanishes on $W$, so $f \in W^\perp$, but $f(x) \ne 0$, so $x \notin (W^\perp)^\perp$. Hence $(W^\perp)^\perp \subseteq \operatorname{Span}(W)$. Thus $(W^\perp)^\perp = \operatorname{Span}(W)$.

(b) Define $\varphi : (V/W)^* \to W^\perp$ by $\varphi(f) = g_f$, $f \in (V/W)^*$, where $g_f$ is the functional which sends $x$ to $f(x + W)$. Then for any $w \in W$, we have
$$[\varphi(f)](w) = g_f(w) = f(w + W) = f(0) = 0.$$
Thus $\varphi(f)$ is indeed in $W^\perp$ when $f \in (V/W)^*$, and $\varphi$ is clearly linear. Further, if $\varphi(f) = 0$, then $[\varphi(f)](x) = 0$, i.e. $f(x + W) = 0$ for all $x \in V$. Hence $f$ is the zero functional. Thus $\varphi(f) = 0$ implies $f = 0$, which tells us that $\varphi$ is injective. Then since
$$\dim((V/W)^*) = \dim(V/W) = \dim(V) - \dim(W) = \dim(W^\perp),$$
the injectivity of $\varphi$ implies bijectivity, so $\varphi$ is an isomorphism.

Problem S05.3. Let $A$ be a Hermitian-symmetric $n \times n$ complex matrix. Show that if $(Av, v) \ge 0$ for all $v \in \mathbb{C}^n$ then there exists an $n \times n$ matrix $T$ such that $A = T^*T$.

Solution. By the spectral theorem, there is an orthonormal basis $\{v_1, \ldots, v_n\}$ of $\mathbb{C}^n$ consisting of eigenvectors of $A$. Let $\lambda_1, \ldots, \lambda_n$ be the corresponding eigenvalues. Since $A$ is Hermitian, all $\lambda_i$ are real. Further, for each $i$,
$$0 \le (Av_i, v_i) = (\lambda_i v_i, v_i) = \lambda_i (v_i, v_i) = \lambda_i.$$
Thus we can define a matrix $T$ such that $Tv_i = \sqrt{\lambda_i}\, v_i$ (recall that a linear operator is uniquely identified by how it treats basis vectors). Then $T^*v_i = \overline{\sqrt{\lambda_i}}\, v_i = \sqrt{\lambda_i}\, v_i$ since $T$ is clearly diagonal with respect to this basis and $\sqrt{\lambda_i}$ is real. Then
$$Av_i = \lambda_i v_i = \sqrt{\lambda_i} \sqrt{\lambda_i}\, v_i = \sqrt{\lambda_i}\, T v_i = T^*\!\big(\sqrt{\lambda_i}\, v_i\big) = T^*(Tv_i) = (T^*T)v_i.$$
Then since $A$ and $T^*T$ agree on basis elements, they are the same matrix.

Problem S05.4. We say that $I \subseteq M_n(\mathbb{C})$ is a two-sided ideal in $M_n(\mathbb{C})$ if
(i) for all $A, B \in I$, $A + B \in I$,
(ii) for all $A \in I$ and $B \in M_n(\mathbb{C})$, $AB$ and $BA$ are in $I$.

Show that the only two-sided ideals in M n (C) are {0} and M n (C) itself. Solution. Let I be a two sided ideal in M n (C). Suppose there is a non-zero matrix A I. Then by multiplying A by elementary matrices we can transform A A which has a nonzero entry in its first row and column and is still in I. Next, by multiplying on the left and right by a matrix B such that (B) 11 = 1 and (B) ij = 0 otherwise. We arrive at a matrix A I which is zero everywhere except for a nonzero entry in the first row and column. Again, multiplying by an elementary matrix, we can reduce A to a matrix A 11 I which has a 1 in the first row and first column and zeroes elsewhere. Performing similar steps, we can create matrices A ii I which have a 1 in the i th row and i th column and zeroes elsewhere. Adding all these matrices together, we see have the identity matrix I I. Then for any C M n (C), we have CI = C I. Hence I = M n (C). Thus the only two-sided ideals in M n (C) are {0} and M n (C). Problem F05.6. (a) Prove that if P is a real-coefficient polynomial and A a real-symmetric matrix, then λ is an eigenvalue of A if and only if P (λ) is an eigenvalue of P (A). (b) Use (a) to prove that if A is real symmetric, then A 2 is non-negative definite. (c) Check part (b) by verifying directly that det(a 2 ) and trace(a 2 ) are non-negative when A is real-symetric. Solution. (a) Suppose λ is an eigenvalue of A. Then there is an eigenvector v 0 such that Av = λv Then A 2 v = A(Av) = A(λv) = λav = λ 2 v. Similarly, A k v = λ k v for k = 3, 4, 5,.... Let P be the polynomial Then P (x) = a 0 + a 1 x + + a m x m. P (A)v = a 0 Iv + a 1 Av + + a m A m v = a 0 v + a 1 λv + + a m λ m v = (a 0 + a 1 λ + + a m λ m )v = P (λ)v. So P (λ) is an eigenvalue of P (A) (note, we didn t need the assumption that A is real-symmetric for this direction). By the spectral theorem, A is (orthogonally) similar to a diagonal matrix D. A = UDU where U is orthogonal. Then P (A) = a 0 I + a 1 UDU + + a m (UDU ) m = a 0 UU + a 1 UDU + + a m UD m U = U(a 0 I + a 1 D + + a m D m )U = UP (D)U. Say 11

Thus P (A) is similar to P (D). Now if P (λ) is an eigenvalue of P (A) then it is also an eigenvalue of P (D). But P (D) is diagonal so P (λ) lies on the diagonal of P (D) and so λ lies on the diagonal of D and thus is an eigenvalue of D. Then λ is also an eigenvalue of A since A is similar to D (note, this argument also didn t require that A is symmetric, just that A is diagonalizable). (b) If A is n n, then there is an orthonormal basis for R n consisting of eigenvectors {v 1,..., v n } of A. Suppose the corresponding eigenvalues are λ 1,..., λ n (possibly with repititions). All these eigenvalues are real since A is symmetric. Also, λ 2 1,..., λ 2 n are eigenvalues of A 2 corresponding to the same eigenvalues. Finally, for any x R n, x = α 1 v 1 + + α n v n for some α 1,..., α n R. Thus gives x t A 2 x = (α 1 v 1 + + α n v n ) t (α 1 λ 2 1v 1 + + α n λ 2 nv n ) = αi 2 λ 2 i, i=1 by orthogonality. this last expression is clearly non-negative since all λ i and α i are real. Thus A 2 is non-negative definite. (c) We see det(a 2 ) = det(aa) = det(a)det(a) = (det(a)) 2 0. Also, the entries of A 2 are Thus the diagonal entries are (A 2 ) ij = a ik a kj. k=1 (A 2 ) ii = a ik a ki = a 2 ik, by symmetry. k=1 k=1 Thus all diagonal entries of A 2 are non-negative and so the trace is non-negative. Problem F05.9. Suppose U, W are subspaces of a finite-dimensional vector space V. (a) Show that dim(u W ) = dim(u) + dim(w ) dim(span (U, W )). (b) Let n = dim(v ). Use part (a) to show that if k < n then an intersection of k subspaces of dimension n 1 always has dimension at least n k. Solution. (a) This is called the dimension formula and is proven as follows. Let {x 1,..., x k } be a basis for U W. Extend this separately to a basis {x 1,..., x k, u 1,..., u l } of U and {x 1,..., x k, w 1,..., w m } of W. 12

Then dim(u W ) = k, dim(u) = k + l and dim(w ) = k + m. So it remains to prove that dim(span (U, W )) = k + l + m; the result will follow. To do this, we just throw all the vectors together and if there is any justice in the world {x 1,..., x k, u 1,..., u l, w 1,..., w m } will form a basis for Span (U, W ). Take any y Span (U, W ). Then y can be built from vectors in U and W. But each of those vectors can be built by basis vectors of U and W and so y can be built using the basis vectors of U and W. That is {x 1,..., x k, u 1,..., u l } {x 1,..., x k, w 1,..., w m } = {x 1,..., x k, u 1,..., u l, w 1,..., w m } forms a spanning set for Span (U, W ). such that Take scalars α 1,..., α k, β 1,..., β l, γ 1,..., γ m α 1 x 1 + + α k x }{{} k + β 1 u 1 + + β l u l + γ }{{} 1 w 1 + + γ m w m = 0. }{{}.=x.=u.=w Then w = x u U. Also, it is clear w W. Then w U W. Hence there are scalars µ 1,..., µ k such that Then w = µ 1 x 1 + + µ k x k. γ 1 w 1 + + γ m w m µ 1 x 1 µ k x k = 0. But these vectors form a basis for W and thus a linealry independent set. Thus γ 1 = = γ m = 0 (and the same for µ i but these won t matter). Then w = 0 so x + u = 0. But the vectors comprising x and u form a basis for U, thus α 1 = = α k = β 1 = = β l = 0. Thus is a linearly independent set. {x 1,..., x k, u 1,..., u l, w 1,..., w m } From this we see that there is a basis for Span (U, W ) which has k + l + m elements so dim(span (U, W )) = k + l + m and the proof is completed. (b) We prove the claim by induction on k. If k = 1, the result is trivial. Suppose the result holds from some k 1. Let V 1,..., V k, V k+1 be subspaces of V of dimension n 1. Then dim ( k+1 i=1 V i) = dim ( Vk+1 ( k i=1v i )) = dim (Vk+1 )+dim ( k i=1v k ) dim ( Span ( Vk+1, k i=1v i )), by the dimensionality formula. But Span ( V k+1, k i=1v i ) has dimension at most n, Vk+1 has dimension n 1 and by our inductive hypothesis, k i=1v i has dimension at least n k. Then dim ( k+1 i=1 V i) n 1 + n k n = n k 1 = n (k + 1). Thus the claim holds for k + 1. This completes the proof. 13

Problem F05.10. (a) For n = 2, 3, 4,..., is there an n n matrix A with A n 1 0 and A n = 0? (b) Is there an n n upper triangular matrix A with A n 0 and A n+1 = 0? Solution. (a) Yes. Let A be the matrix which A i,i+1 = 1 for i = 1,..., n 1 and A ij = 0 otherwise. That is, 1 on the first superdiagonal and 0 elsewhere. (b) No. Suppose A n+1 = 0. Suppose that λ is an eigenvalue of A with nonzero eigenvector v. Then Av = λv = A 2 v = λav = λ 2 v = = A n+1 v = λ n Av = λ n+1 v. But A n+1 = 0 so λ n+1 v = 0. Then v 0 implies that λ n+1 = 0 which gives λ = 0. Thus all eigenvalues of A are zero. Then the minimal polynomial of A has only zero as a root and thus m A (x) = x k for some k N. However, the degree of m A is at most n. So m A (A) = 0 = A k = 0, for some k n = A n = 0. [Note: the assumption that A is upper triangular is unnecessary.] Problem S06.7. Prove that if a, λ C with a 0, then 1 a 0 T = 0 1 a 0 0 λ is not diagonalizable. Solution. A matrix is diagonalizable if and only if the geometric multiplicity of each eigenvalue is equal to the algebraic multiplicity of the eigenvalue. Here 1 is an eigenvalue of T of algebraic multiplicity 2 but 0 a 0 v 1 0 (T I)v = 0 = 0 0 a v 2 = 0. 0 0 λ 1 v 3 0 Then v 2 = v 3 = 0 and so v = (1, 0, 0) t spans the eigenspace corresponding to the eigenvalue 1. Hence the geometric multiplicity of this eigenvalue is only 1. Hence the matrix is not diagonalizable. Problem S06.8. A linear transformation T is called orthogonal if it is non-singular and T t = T 1. Prove that if T : R 2n+1 R 2n+1 is orthogonal, then there is v R 2n+1 such that T (v) = v or T (v) = v. 14

Solution. The characteristic polynomial p T (t) is an odd degree polynomial over R so there is a real root λ. Then there is a nonzero vector v R 2n+1 such that T (v) = λv so λ 2 (v, v) = (λv, λv) = (T (v), T (v)) = (v, T t T (v)) = (v, Iv) = (v, v). Since (v, v) 0, this gives λ 2 = 1 so λ = ±1 and hence T (v) = ±v. Problem S06.9. Let S be a real symmetric matrix. (a) Prove that all eigenvalues of S are real. (b) State and prove the spectral theorem. Solution. (a) Let λ C be an eigenvalue of S with nonzero eigenvector v. λ(v, v) = (λv, v) = (Sv, v) = (v, Sv) = (v, λv) = λ(v, v). Hence since v 0, we have λ = λ so λ R. (b) Spectral Theorem. Let S be a symmetric matrix in M n (F) where F = R or F = C. Then there is an orthonormal basis for F n consisting of eigenvectors of S. In particular, S is orthogonally diagonalizable. Proof. We prove this by induction on n where n is the dimension of the space that S operates on. For n = 1, the statement is obvious since the operator S simply scales vectors. Assume that if a symmetric matrix operates on an (n 1)-dimensional space, then the statement holds. If S is an n n matrix, then it operates on F n. By fundamental theorem of algebra, there is an eigenvalue λ of S and by the above proof, λ R. Let 0 v F n be a corresponding eigenvector; without loss of generality, v = 1. Then v = {x F n : (x, v) = 0} is an (n 1)-dimensional subspace of F n. Also, if x v, then (Sx, v) = (x, Sv) = (x, λv) = λ(x, v) = 0. Hence v is an S-invariant subspace. Thus S v is a symmetric matrix operating on an (n 1)-dimensional space. By inductive hypothesis, there is an orthonormal basis {x 1,..., x n 1 } of v which consists of eigenvectors of S v and thus eigenvectors of S. Then {x 1,..., x n 1, v} is an orthonormal set in F n (and thus a basis for F n ) which consists of eigenvectors of S. In particular, if λ 1,..., λ n 1, λ are the corresponding eigenvalues (there can be repetitions) and we put P = [x 1 x n 1 v], then λ 1 λ 2 SP = P....= P D, λ n 1 λ so S = P DP 1 and since {x 1,..., x n 1, v} is an orthonormal set P 1 = P t so S = P DP t and thus S is orthogonally diagonalizable. 15
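The spectral theorem just proved is easy to check numerically. The following is a small numpy sketch (an illustration only, not part of the exam solution); the matrix size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

B = rng.standard_normal((5, 5))
S = (B + B.T) / 2                             # a random real symmetric matrix

w, P = np.linalg.eigh(S)                      # eigh assumes symmetric/Hermitian input
assert np.all(np.isreal(w))                   # eigenvalues are real
assert np.allclose(P.T @ P, np.eye(5))        # eigenvectors form an orthonormal basis
assert np.allclose(P @ np.diag(w) @ P.T, S)   # S = P D P^t
```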

Problem S06.10. Let $Y$ be an arbitrary set of commuting matrices in $M_n(\mathbb{C})$. Prove that there exists a non-zero vector $v \in \mathbb{C}^n$ which is a common eigenvector of all matrices in $Y$.

Solution. Let $A \in Y$. Then by the fundamental theorem of algebra, $A$ has an eigenvalue $\lambda$. Let $E_\lambda = \ker(A - \lambda I)$ and let $0 \ne x \in E_\lambda$. Then for arbitrary $B \in Y$,
$$Ax = \lambda x \implies BAx = B(\lambda x) \implies A(Bx) = \lambda(Bx).$$
Hence $Bx \in E_\lambda$. Thus $E_\lambda$ is $B$-invariant. Hence by the fundamental theorem of algebra, $B|_{E_\lambda}$ has an eigenvalue and thus an eigenvector $v \ne 0$. This $v$ is a simultaneous eigenvector of $A$ and $B$. Since $B$ was arbitrary, $v$ is a simultaneous eigenvector of all matrices in $Y$.

Problem W06.7. Let $V$ be a complex inner product space. State and prove the Cauchy-Schwarz inequality for $V$.

Solution. Cauchy-Schwarz Inequality. Let $V$ be an inner product space over $\mathbb{C}$ and $v, w \in V$. Then
$$|(v, w)| \le \|v\|\,\|w\|,$$
with equality if and only if $v, w$ are linearly dependent.

Proof. If $w = 0$, the inequality is trivially satisfied (it is actually equality and $v, w$ are linearly dependent, so all statements hold). Assume $w \ne 0$. For any $c \in \mathbb{C}$, we have
$$0 \le (v - cw, v - cw) = (v, v) - \bar{c}(v, w) - c\overline{(v, w)} + |c|^2 (w, w).$$
Putting $c = \frac{(v, w)}{(w, w)}$ (possible since $w \ne 0$) we see
$$0 \le (v, v) - \frac{|(v, w)|^2}{(w, w)} - \frac{|(v, w)|^2}{(w, w)} + \frac{|(v, w)|^2}{(w, w)^2}(w, w) = (v, v) - \frac{|(v, w)|^2}{(w, w)} \implies \frac{|(v, w)|^2}{(w, w)} \le (v, v).$$
Multiplying by $(w, w)$, we see
$$|(v, w)|^2 \le (v, v)(w, w) = \|v\|^2 \|w\|^2 \implies |(v, w)| \le \|v\|\,\|w\|.$$
We also notice that equality holds if and only if $(v - cw, v - cw) = 0$, which holds if and only if $v = cw$.

Problem W06.8. Let $T : V \to W$ be a linear transformation of finite dimensional inner product spaces. Show that there exists a unique linear transformation $T^t : W \to V$ such that
$$(Tv, w)_W = (v, T^t w)_V \quad \text{for all } v \in V,\ w \in W.$$

Solution. We prove uniqueness first. Suppose there are linear maps $R, S : W \to V$ both satisfying the condition. Then for all $v \in V$, $w \in W$,
$$(v, Rw)_V = (Tv, w)_W = (v, Sw)_V \implies (v, (R - S)w)_V = 0.$$
Thus $(R - S)w = 0$ since it is orthogonal to all of $V$. However, $w$ was arbitrary, so this implies $R = S$.
For existence, let $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ be orthonormal bases of $V$ and $W$ respectively. If $T$ has matrix $A$ with respect to these bases, define $T^t$ to have matrix $A^t$ with respect to these bases. Then $T^t$ satisfies the condition.

Problem W06.9. Let $A \in M_3(\mathbb{R})$ be invertible and satisfy $A = A^t$ and $\det(A) = 1$. Prove that $1$ is an eigenvalue of $A$.

Solution. The conclusion isn't actually true. The matrix
$$A = \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & 6 \end{pmatrix}$$
is symmetric and has determinant $1$ but doesn't have $1$ as an eigenvalue. It is likely that we were supposed to assume that $A^t = A^{-1}$ rather than $A^t = A$. In this case, see S03.9.

Problem S07.2. Let $U, V, W$ be $n$-dimensional vector spaces and $T : U \to V$, $S : V \to W$ be linear transformations. Prove that if $S \circ T : U \to W$ is invertible, then both $T$ and $S$ are invertible.

Solution. First suppose that $T$ is not invertible. Then the kernel of $T$ is nontrivial, so there is a non-zero vector $u \in U$ such that $T(u) = 0$. But then $S \circ T(u) = 0$, so the kernel of $S \circ T$ is nontrivial and so $S \circ T$ is not invertible; a contradiction. Thus $T$ is invertible.
Now assume that $S$ is not invertible. Then the kernel of $S$ is nontrivial, so there is a non-zero vector $v \in V$ such that $S(v) = 0$. However, $T$ is invertible, and thus surjective, so there is $u \in U$ such that $T(u) = v$ (and $u \ne 0$ since $v \ne 0$). Then $S \circ T(u) = S(T(u)) = S(v) = 0$, so the kernel of $S \circ T$ is nontrivial and $S \circ T$ is not invertible; a contradiction. Hence $S$ is invertible.

Problem S07.3. Consider the space of infinite sequences of real numbers
$$S = \{(a_0, a_1, a_2, \ldots) : a_n \in \mathbb{R},\ n = 0, 1, 2, \ldots\}.$$
For each pair of real numbers $A$ and $B$, prove that the set of solutions $(x_0, x_1, x_2, \ldots)$ of the linear recursion
$$x_{n+2} = Ax_{n+1} + Bx_n, \quad n = 0, 1, 2, \ldots$$
is a subspace of $S$ of dimension $2$.

Solution. Let $S_0$ be the set of all solutions to the recurrence relation. We first show that $S_0$ is indeed a subspace of $S$. Take $x, y \in S_0$ and put $z = x + y$. Then
$$z_{n+2} = x_{n+2} + y_{n+2} = Ax_{n+1} + Bx_n + Ay_{n+1} + By_n = A(x_{n+1} + y_{n+1}) + B(x_n + y_n) = Az_{n+1} + Bz_n.$$

Thus z S 0. Next for α R and x S 0, put w = αx. Then w n+2 = αx n+2 = α(ax n+1 + Bx n ) = A(αx n+1 ) + B(αx n ) = Aw n+1 + Bw n. Thus w S 0. Hence S 0 is closed under addition and multiplication by scalars and is thus a subspace of S. Next we show that if u, v S 0 are such that u 0 = 1, u 1 = 0, v 0 = 0, v 1 = 1 then u, v form a basis for S 0. It is clear that they are linearly independent and for x = (x 0, x 1, x 2,...) S 0, we have x = x 0 u + x 1 v. Indeed, x 0 u + x 1 v agrees with x in the first and second entries. Assume that it agrees with x in the (n + 1) th and (n + 2) th entries for some n 0. Then Consider x n+1 = x 0 u n+1 + x 1 v n+1 and x n+2 = x 0 u n+2 + x 1 v n+2. x n+3 = Ax n+2 + Bx n+1 = A(x 0 u n+2 + x 1 v n+2 ) + B(x 0 u n+1 + x 1 v n+1 ) = x 0 (Au n+2 + Bu n+1 ) + x 1 (Av n+2 + Bv n+1 ) = x 0 u n+3 + x 1 v n+3. Hence by induction it is true that x = x 0 u + x 1 v. Thus u, v span S 0. Hence dim(s 0 ) = 2. Problem S07.4. Suppose that A is a symmetric n n real matrix and let λ 1,..., λ l be the distinct eigenvalues of A. Find the sets { ( X = x R n : lim x t A 2k x ) } 1/k exists k and { L = lim k ( x t A 2k x ) 1/k : x X }. Solution. We prove that X = R n and L = {0, λ 2 1,..., λ 2 l }. By taking the zero vector for x, it is clear that the zero vector is in X and zero is in L. We ll focus on non-zero vectors henceforth. Since A is symmetric, by the spectral theorem there is an orthonormal basis {x 1,..., x n } for R n consisting of eigenvectors of A. Suppose that µ 1,..., µ n R are the corresponding eigenvalues [the eigenvalues are real since A is symmetric; also repititions are allowed so this is a slightly different list from λ 1,..., λ l ]. Let 0 x R n. Then there are α 1,..., α n R (not all zero) such that x = α 1 x 1 +... + α n x n. Then so by orthogonality, A 2k x = α 1 A 2k x 1 + + α n A 2k x n = α 1 µ 2k 1 x 1 + + α n µ 2k n x n From here, it is clear that x t A 2k x = α 2 1µ 2k 1 + + α n µ 2k n. x t A 2k x α 2 jµ 2k j 18

for all j = 1,..., n. Let m {1,..., n} be an index satisfying α m 0, and if α j 0 for some j = 1,..., n, then µ j µ m. That is, m is the index of the largest of the µ s which has a non-zero coefficient in the representation of x in this basis (note: such m always exists; it is not necessarily unique, but that won t matter). Then Then using our two bounds, we see that x t A 2k x = α1µ 2 2k 1 + + α n µ 2k n nαmµ 2 2k m. α 2/k m µ 2 m ( x t A 2k x ) 1/k (nα 2 m ) 1/k µ 2 m. ( Thus by the squeeze theorem, lim x t A 2k x ) 1/k = µ 2 m. k This shows that X = R n. It also shows that for every non-zero x R n, we have ( x t A 2k x ) 1/k = µ 2 j for some j = 1,..., n. Thus the eigenvalues of A are the only possible lim k values for the limit. To see that each eigenvalue is indeed achieved, notice that ( x t j A 2k x j ) 1/k = µ 2 j for all k N. Thus lim k ( x t j A 2k x j ) 1/k = µ 2 j for each j = 1,..., n. Problem S07.5. Let T be a normal linear operator on a finite dimensional complex inner product space V. Prove that if v is an eigenvector of T, then v is also an eigenvector of the adjoint T. Solution. First, consider if S is any normal operator and x C n, then (Sx, Sx) = (x, S Sx) = (x, SS x) = (S x, S x). Thus Sx = S x for all x C n. Let v be an eigenvector of T with corresponding eigenvalue λ C. Since T is normal, so is (T λi). Then 0 = (T λi)v = (T λi) x = (T λi)v. Hence T v = λv so v is an eigenvector of T corresponding to eigenvalue λ. Problem F07.3. Let V be a vector space and T a linear transformation such that T v and v are linearly dependent for every v V. Prove that T is a scalar multiple of the identity transformation. Solution. If dim(v ) = 1, the result is trivial. Assume that dim(v ) > 1 Suppose x, y V are linearly independent. Then there are scalars α, β, γ such that T x = αx, T y = βy, T (x + y) = γ(x + y). 19

Then 0 = (γ α)x + (γ β)y. But x, y were taken to be linearly independent, so α = γ = β. Since the same argument would work for any linearly independent vectors, we see that T v = αv for all v V. Thus T = αi. Problem F07.12. (a) Suppose that x 0 < x 1 <... < x n are points in [a, b]. Define linear functionals on P n (the space of all polynomials of degree less than or equal to n) by l j (p) = p(x j ), j = 0, 1,..., n, p P n. Show that the set {l j } n j=0 is linearly independent. (b) Show that there are unique coefficients c j R such that b a p(t)dt = c j l j (p) j=0 Solution. for all p P n. (a) Let α 0, α 1,..., α n R be such that Put p i (x) = α 0 l 0 + α 1 l 1 + + α n l n = 0. n m=0,m i (x x m ), i = 0, 1,..., n. Then each p i is a polynomial of degree n and l i (p i ) 0 since x i x j when i j. However, l j (p i ) = 0 when i j since (x x j ) is a factor of p i when i j. Thus for each i = 0, 1,..., n, α 0 l 0 (p i ) + α 1 l 1 (p i ) +... + α n l n (p i ) = 0 = α i l i (p i ) = 0 = α i = 0. Hence l 0, l 1,..., l n are linearly independent. (b) For any finite dimensional vector space V, we know dim(v ) = dim(v ). Here P n has dimension n + 1 and we have found n + 1 linearly independent members of (P n ). Thus l 0, l 1,..., l n form a basis for (P n ). Since l(p) = b p(x)dx, p a Pn defines a linear functional on P n, we know there are unique c 0, c 1,..., c n R such that l = c 0 l 0 + c 1 l 1 + + c n l n. Hence b a p(x)dx = c j l j (p), for all p P n. j=0 20
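As a concrete illustration of part (b) of F07.12 (my own addition, not part of the original solution), the weights $c_j$ can be computed by enforcing exactness on the monomial basis $1, t, \ldots, t^n$; the interval and nodes in the numpy sketch below are arbitrary choices.

```python
import numpy as np

# Compute the unique weights c_j with  integral_a^b p = sum_j c_j p(x_j)  for all
# p in P_n, by requiring exactness on 1, t, ..., t^n.
a, b = 0.0, 1.0
x = np.array([0.0, 0.25, 0.6, 1.0])           # n + 1 = 4 distinct nodes in [a, b]
n = len(x) - 1

V = np.vander(x, n + 1, increasing=True).T    # V[k, j] = x_j**k
moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n + 1)])
c = np.linalg.solve(V, moments)               # unique since the nodes are distinct

# Check exactness on a random polynomial of degree <= n.
p = np.polynomial.Polynomial(np.random.default_rng(2).standard_normal(n + 1))
exact = p.integ()(b) - p.integ()(a)
assert np.isclose(exact, np.dot(c, p(x)))
```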

Problem S08.8. Assume that $V$ is an $n$-dimensional vector space and that $T$ is a linear transformation $T : V \to V$ such that $T^2 = T$. Prove that every $v \in V$ can be written uniquely as $v = v_1 + v_2$ such that $T(v_1) = v_1$ and $T(v_2) = 0$.

Solution. Note that if $v \in \operatorname{im}(T)$, then $v = T(w)$ for some $w \in V$ and so $T(v) = T^2(w) = T(w) = v$. Thus $T$ fixes members of its image, so $T(v - T(v)) = 0$ for all $v \in V$. For any $v \in V$, write
$$v = T(v) + (v - T(v)) =: v_1 + v_2.$$
It is clear that this is one way to represent $v$ in the desired way. Assume that $v = v_1 + v_2 = x_1 + x_2$, where $T(v_1) = v_1$, $T(x_1) = x_1$ and $T(v_2) = 0$, $T(x_2) = 0$. Then
$$T(v) = T(v_1) + T(v_2) = T(x_1) + T(x_2) \implies T(v_1) = T(x_1) \implies v_1 = x_1.$$
Then $v_1 + v_2 = v_1 + x_2 \implies v_2 = x_2$. This gives uniqueness of such a representation.

Problem S08.9. Let $V$ be a finite-dimensional vector space over $\mathbb{R}$.
(a) Show that if $V$ has odd dimension and $T : V \to V$ is a real linear transformation, then $T$ has a non-zero eigenvector $v \in V$.
(b) Show that for every even positive integer $n$, there is a vector space $V$ over $\mathbb{R}$ of dimension $n$ and a real linear transformation $T : V \to V$ such that there is no non-zero $v \in V$ that satisfies $T(v) = \lambda v$ for some $\lambda \in \mathbb{R}$.

Solution.
(a) The characteristic polynomial $p_T$ of $T$ is a polynomial over $\mathbb{R}$ of degree $\dim(V)$, which is odd. Every odd degree polynomial over $\mathbb{R}$ has a root in $\mathbb{R}$. We know this root (say $\lambda \in \mathbb{R}$) is an eigenvalue of $T$ and thus there is a non-zero eigenvector $v \in V$ such that $T(v) = \lambda v$. Hence $T$ has an eigenvector.
(b) Let $V = \mathbb{R}^n$ for positive even $n$ and let $T$ be the left multiplication operator for
$$A = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & -1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix}.$$
It is easy to see that $T$ has characteristic polynomial $p_T(t) = (-t)^n + (-1)^n$. But $n$ is even, so $p_T(t) = t^n + 1$. This equation has no roots in $\mathbb{R}$ when $n$ is even, and so there is no real eigenvalue and thus no non-zero $v \in \mathbb{R}^n$ such that $T(v) = \lambda v$ for some $\lambda \in \mathbb{R}$.
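A quick numerical check of part (b) (not part of the original solution): for even $n$, the matrix above, with ones on the subdiagonal and $-1$ in the top-right corner, has characteristic polynomial $t^n + 1$ and hence no real eigenvalues. The numpy sketch below verifies this for the arbitrary choice $n = 4$.

```python
import numpy as np

n = 4                                      # any even n
A = np.zeros((n, n))
A[1:, :-1] = np.eye(n - 1)                 # ones on the first subdiagonal
A[0, -1] = -1.0                            # -1 in the top-right corner

eigvals = np.linalg.eigvals(A)
print(eigvals)                             # the four primitive 8th roots of unity
assert not np.any(np.isclose(eigvals.imag, 0.0))   # no real eigenvalue
```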

Problem F08.7. Suppose that T is a complex n n matrix and that λ 1,..., λ k are distinct eigenvalues of T with corresponding eigenvectors v 1,..., v k. Show that v 1,..., v k are linearly independent. Solution. We use induction. First, it is clear that {v 1 } is a linearly independent set because eigenvectors are necessarily non-zero. Now assume that {v 1,..., v l } are linearly independent for some 2 l < k. Let α 1,..., α l, α l+1 C be such that α 1 v 1 + + α l v l + α l+1 v l+1 = 0. Then (T λ l+1 I)(α 1 v 1 + + α l v l + α l+1 v l+1 ) = 0. But (T λ l+1 I)v i = (λ i λ l+1 )v i for i = 1,..., l and (T λ l+1 I)v l+1 = 0. Thus α 1 (λ 1 λ l+1 )v 1 + + α l (λ l λ l+1 )v l = 0. But these vectors are linearly independent by our inductive hypothesis, so α 1 (λ 1 λ l+1 ) = = α l (λ l λ l+1 ) = 0. However, since the eigenvalues are distinct, this implies that α 1 = = α l = 0. Hence 0 = α 1 v 1 + + α l v l + α l+1 v l+1 = α l+1 v l+1 = α l+1 = 0. Thus {v 1,..., v l, v l+1 } is a linearly independent set. By induction, we conclude that {v 1,..., v k } are linearly independent. Problem F08.8. Must the eigenvectors of a linear transformation T : C n C n span C n? Solution. No. Take any non-diagonalizable matrix A M n (C) and let T (x) = Ax, x C n. For example, if 1 1 0 A = 0 1 1. 0 0 2 Then, up to multiplication by scalars, the eigenvectors of A (and thus T ) are 1 v 1 = 0 and 1 v 2 = 1 0 1 which clearly do not span C 3. Problem F08.9. (a) Prove that any linear transformation T : C C must have an eigenvector. (b) Is (a) true for any linear transformation T : R n R n? 22

Solution.
(a) By the fundamental theorem of algebra, $p_T(t) = \det(T - tI)$ has a root $\lambda \in \mathbb{C}$. Then $\det(T - \lambda I) = 0$, so $T - \lambda I$ is a singular transformation. Hence there is a non-zero $v \in \mathbb{C}^n$ such that $(T - \lambda I)(v) = 0$, or $T(v) = \lambda v$; that is, $v$ is an eigenvector of $T$.
(b) No. Let
$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
and define $T : \mathbb{R}^2 \to \mathbb{R}^2$ by $T(x) = Ax$, $x \in \mathbb{R}^2$. Suppose $T$ has an eigenvector $0 \ne v \in \mathbb{R}^2$. Then there is $\lambda \in \mathbb{R}$ such that $T(v) = \lambda v$. If $v = (v_1, v_2)$, this implies that $v_2 = \lambda v_1$ and $-v_1 = \lambda v_2$, where at least one of $v_1, v_2$ is non-zero. Assume $v_1 \ne 0$. Then by the second equation, neither $\lambda$ nor $v_2$ is zero. Also, by the first equation,
$$v_2 = \lambda v_1 = -\lambda^2 v_2 \implies \lambda^2 = -1,$$
a contradiction to the fact that $\lambda \in \mathbb{R}$. Hence there is no eigenvector for $T$.

Problem F08.11. Consider the Poisson equation with periodic boundary condition
$$-\frac{\partial^2 u}{\partial x^2} = f, \quad x \in (0, 1), \qquad u(0) = u(1).$$
A second order accurate approximation to the problem is given by $Au = \Delta x^2 f$ where
$$A = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 & -1 \\ -1 & 2 & -1 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -1 & 2 & -1 \\ -1 & 0 & \cdots & 0 & -1 & 2 \end{pmatrix},$$
$u = [u_0, u_1, \ldots, u_{n-1}]^t$, $f = [f_0, f_1, \ldots, f_{n-1}]^t$, and $u_i \approx u(x_i)$ with $x_i = i\,\Delta x$, $\Delta x = 1/n$ and $f_i = f(x_i)$ for $i = 0, 1, \ldots, n-1$.
(a) Show that $A$ is singular.
(b) What conditions must $f$ satisfy so that a solution exists?

Solution.
(a) Notice $Av = 0$ when $v = [1\ 1\ \cdots\ 1]^t$, so $A$ has a nontrivial null space and is thus singular (in fact, up to scalar multiplication, this is the only vector in the null space of $A$).
(b) We need $f$ in the range of $A$ for a solution to exist. The Fredholm alternative tells us that $\operatorname{range}(A) = \operatorname{null}(A^t)^\perp$. But $A^t = A$, so we simply need $f \perp \operatorname{null}(A)$. By (a), this is equivalent to
$$(v, f) = 0 \iff f_0 + f_1 + \cdots + f_{n-1} = 0.$$
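Both parts of F08.11 are easy to verify numerically. The sketch below is my own addition, assuming numpy; the grid size $n = 8$ is arbitrary. It checks that the periodic second-difference matrix annihilates the constant vector and that the discrete system is solvable exactly when the $f_i$ sum to zero.

```python
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = -1                      # periodic wrap-around entries

ones = np.ones(n)
assert np.allclose(A @ ones, 0)               # the constant vector is in the null space
assert np.linalg.matrix_rank(A) == n - 1      # and it spans the null space

rng = np.random.default_rng(3)
f = rng.standard_normal(n)
f -= f.mean()                                 # compatibility: the entries of f sum to zero
u, *_ = np.linalg.lstsq(A, f, rcond=None)     # a particular solution of Au = f
assert np.allclose(A @ u, f)                  # the compatible system is solvable
```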

Problem F08.12. Consider the least squares problem
$$\min_{x \in \mathbb{R}^n} \|Ax - b\|$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $m \ge n$. Prove that if $x$ and $x + \alpha z$ ($\alpha \ne 0$) are both minimizers, then $z \in \operatorname{null}(A)$.

Solution. Write $b = b_1 + b_2$ where $b_1$ is the orthogonal projection of $b$ onto $\operatorname{im}(A)$; i.e., $b_1$ is the unique vector in $\operatorname{im}(A)$ such that $b_1 - b \perp \operatorname{im}(A)$. We know that $b_1$ is the closest vector to $b$ which lies in $\operatorname{im}(A)$. Then the minimizers of $\|Ax - b\|$ are exactly those vectors $x \in \mathbb{R}^n$ such that $Ax = b_1$. Let $x$ be such a minimizer. Then for any $y \in \mathbb{R}^n$, $Ay \in \operatorname{im}(A)$ and so $(Ay, b_1 - b) = 0$. But then
$$0 = (Ay, Ax - b) = (y, A^t(Ax - b)).$$
Since $y$ is arbitrary, this implies that $A^t(Ax - b)$ is orthogonal to all of $\mathbb{R}^n$, so $A^t(Ax - b) = 0$, or $A^tAx = A^tb$. That is, any minimizer $x$ of $\|Ax - b\|$ must satisfy the normal equations: $A^tAx = A^tb$.
Assume that both $x$ and $x + \alpha z$, $\alpha \ne 0$, are minimizers of $\|Ax - b\|$. Then
$$A^tb = A^tAx = A^tA(x + \alpha z) \implies \alpha A^tAz = 0 \implies A^tAz = 0.$$
Taking the inner product of $A^tAz$ with $z$, we see
$$(z, A^tAz) = 0 \implies (Az, Az) = 0 \implies \|Az\|^2 = 0 \implies Az = 0.$$
Thus $z \in \operatorname{null}(A)$.

Problem F09.4. Let $V$ be a finite dimensional inner product space and let $U$ be a subspace of $V$. Show that $\dim(U) + \dim(U^\perp) = \dim(V)$.

Solution. Suppose $\dim(U) = n$. Then $\dim(V) = n + k$ for some $k \in \mathbb{N}_0$. If $k = 0$, then $U = V$ and the result holds trivially. Assume $k > 0$. Let $\{x_1, \ldots, x_n\}$ form an orthonormal basis for $U$. We can extend this to an orthonormal basis $\{x_1, \ldots, x_n, y_1, \ldots, y_k\}$ for $V$. If we can prove that $\{y_1, \ldots, y_k\}$ is an orthonormal basis for $U^\perp$, then we will be done.
Since the set $\{x_1, \ldots, x_n, y_1, \ldots, y_k\}$ is orthonormal, for any $j = 1, \ldots, k$ and $i = 1, \ldots, n$, we have $(y_j, x_i) = 0$. Hence $y_j$ is orthogonal to each basis member of $U$ and hence orthogonal to $U$; that is, $y_j \in U^\perp$ for each $j$.
Take $y \in U^\perp$. Then $y \in V$, so there are scalars $\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_k$ such that
$$y = \alpha_1 x_1 + \cdots + \alpha_n x_n + \beta_1 y_1 + \cdots + \beta_k y_k.$$
Since each $x_i \in U$, we have $(y, x_i) = 0$. Taking the inner product of $y$ with each $x_i$ in turn, we find $\alpha_i = 0$ for each $i$. Thus $y = \beta_1 y_1 + \cdots + \beta_k y_k$. Hence $\{y_1, \ldots, y_k\}$ is a spanning set for $U^\perp$; it is also a linearly independent set since it is part of the basis for $V$. Thus $\{y_1, \ldots, y_k\}$ is a basis for $U^\perp$, so $\dim(U^\perp) = k$ and the claim is proven.
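As a small numerical illustration of F09.4 (my own addition, not part of the original solution), an orthonormal basis of $U$ and of $U^\perp$ can be read off from an SVD; the dimensions and seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
dim_V = 7
M = rng.standard_normal((dim_V, 3))      # U = column span of M (generically 3-dimensional)

Q, s, _ = np.linalg.svd(M, full_matrices=True)
r = np.linalg.matrix_rank(M)
U_basis = Q[:, :r]                       # orthonormal basis of U
U_perp_basis = Q[:, r:]                  # orthonormal basis of U-perp

assert np.allclose(U_basis.T @ U_perp_basis, 0)            # the two bases are orthogonal
assert U_basis.shape[1] + U_perp_basis.shape[1] == dim_V   # dim U + dim U-perp = dim V
```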

Problem F09.5. satisfy then necessarily b 1 = = b n = 0. Show that if α 1,..., α n R are all different and some b 1,..., b n R b i e αit for all t ( 1, 1) Solution. Let T : C ( 1, 1) C ( 1, 1) be defined by i=1 (T f)(t) = f (t), t ( 1, 1). We see that for each α i, if we define f i C ( 1, 1) by f i (t) = e αit, t ( 1, 1) then T f i = α i f i. Hence the functions f i are eigenvectors of T corresponding to different eigenvalues. But eigenvectors of a linear operator corresponding to different eigenvalues are linearly independent. Hence which is the desired conclusion. b 1 f 1 + + b n f n = 0 = b 1 = = b n = 0 Problem F09.12. Let n 2 and let V be an n-dimensional vector space over C with a set of basis vectors e 1,..., e n. Let T be the linear transformation of V satisfying T (e i ) = e i+1, i = 1,..., n 1 and T (e n ) = e 1. (a) Show that T has 1 as an eigenvalue. Find an eigenvector with eigenvalue 1 and show that it is unique up to scaling. (b) Is T diagonalizable? Solution. (a) Let x = e 1 + e 2 + + e n V. Then T (x) = T (e 1 ) + T (e 2 ) + + T (e n ) = e 2 + e 3 + + e 1 = e 1 + e 2 + + e n = x. Further x 0 since the basis vectors are linearly independent. Thus x is an eigenvector of T with eigenvalue 1. Conversely, if v is an eigenvector of T corresponding to eigenvalue 1, then T (v) = v. Also there are α 1,..., α n C such that v = α 1 e 1 + + α n e n. Then which gives α 1 e 1 + + α n e n = v = T (v) = α 1 e 2 + α 2 e 3 + α n e 1 (α 1 α n )e 1 + (α 2 α 1 )e 2 + (α n α n 1 )e n = 0. But these vectors are linearly independent so α 1 α n = 0, α 2 α 1 = 0,, α n α n 1 = 0 which gives α 1 = = α n. Call this value α C. Then v = αe 1 + + αe n = αx. Hence the eigenvector is unique up to scaling. 25

(b) Since T (e j ) = e j+1 for all j = 1,..., n 1 and T (e n ) = e 1, the matrix [T ] B is given by 0 0 0 0 1 1 0 0 0 1 0 [T ] B =............. 0 0 0 1 0 Then p T (t) = det ([T ] B ti) = ±(t n 1). But t n 1 has distinct roots for all n. Thus the eigenvalues of T are distinct and so T is diagonalizable. Problem S10.1. Let u 1,..., u n be an orthonormal basis for R n and let y 1,..., y n be a collection of vectors in R n such that n j=1 y j 2 < 1. Show that u 1 + y 1,..., u n + y n form a basis for R n. Solution. Since invertible linear maps send one basis to another, it suffices to show that there is an invertible linear map L : R n R n such that L(u j ) = u j + y j for all j = 1,..., n. Define a linear map, T : R n R n by T (u j ) = y j, j = 1,..., n (a linear map is uniquely determined by how it treats basis elements). Let x R n. Then there are scalars α 1,..., α n such that x = n j=1 α ju j. Then T (x) = α j T (u j ) j=1 = α j y j j=1 α j y j j=1 ( ) 1/2 ( ) 1/2 α j 2 y j 2. j=1 j=1 ( n But by orthogonality of u 1,..., u n, we have x = j=1 α j 2) 1/2. Thus we have that T n j=1 y j 2 < 1. But T < 1 implies that I T is invertible. Further, we see that (I T )(u j ) = u j + y j, j = 1,..., n. Hence u 1 + y 1,..., u n + y n form a basis for R n. Problem S10.3. Let S, T be two normal transformations in the complex finite dimensional inner product space V such that ST = T S. Prove that there is a basis for V consisting of vectors which are simultaneous eigenvectors of S and T. Solution. Suppose that n = dim(v ). Let λ 1,..., λ k be the distinct eigenvalues of S. Then by the spectral theorem, the eigenvectors of S can be taken to form an orthonormal basis 26

for $V$, which means that $E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_k} = V$. Consider, for $v \in E_{\lambda_i}$, we have $Sv = \lambda_i v$. Then
$$TSv = T(\lambda_i v) \implies STv = \lambda_i Tv \implies S(Tv) = \lambda_i (Tv).$$
Thus $Tv \in E_{\lambda_i}$. Hence $E_{\lambda_i}$ is $T$-invariant. Then $T|_{E_{\lambda_i}}$ is a normal operator on $E_{\lambda_i}$, so by the spectral theorem, there is an orthonormal basis $v^{(i)}_1, \ldots, v^{(i)}_{l_i}$ for $E_{\lambda_i}$ consisting of eigenvectors of $T$. Then
$$\bigcup_{i=1}^{k} \{v^{(i)}_1, \ldots, v^{(i)}_{l_i}\}$$
is a basis of $V$ consisting of simultaneous eigenvectors of both $S$ and $T$.

Problem S10.4. (i) Let $A$ be a real symmetric $n \times n$ matrix such that $x^tAx \ge 0$ for every $x \in \mathbb{R}^n$. Prove that $\operatorname{trace}(A) = 0$ implies $A = 0$.
(ii) Let $T$ be a linear transformation in the complex finite dimensional vector space $V$ with an inner product. Suppose that $T^*T = 4T - 3I$ where $I$ is the identity transformation. Prove that $T$ is Hermitian positive definite and find all possible eigenvalues of $T$.

Solution. I have no idea why these two questions are grouped into one problem. As far as I can see, they have nothing to do with each other and have elementary solutions which are completely independent of the other.
(i) Let $e_i$ be the standard basis vectors. Then
$$e_i^t A e_i \ge 0 \implies a_{ii} \ge 0.$$
Then if the sum of the diagonal entries is zero, each entry must be zero; i.e., $a_{ii} = 0$, $i = 1, \ldots, n$. Next,
$$(e_i + e_j)^t A (e_i + e_j) \ge 0 \implies a_{ii} + a_{ij} + a_{ji} + a_{jj} \ge 0 \implies a_{ij} + a_{ji} \ge 0.$$
But since $A$ is symmetric, this implies $a_{ij} \ge 0$. Also
$$(e_i - e_j)^t A (e_i - e_j) = a_{ii} - a_{ij} - a_{ji} + a_{jj} \ge 0,$$
which gives $a_{ij} \le 0$. Thus $a_{ij} = 0$. Since $i, j$ were arbitrary, this gives $A = 0$.
(ii) We see $(T^*T)^* = (T^*)^*\,T^* \cdot{} \,$ hmm, more carefully, $(T^*T)^* = T^*(T^*)^* = T^*T$. Thus
$$(4T - 3I)^* = 4T - 3I \implies 4T^* - 3I = 4T - 3I \implies T^* = T,$$
so $T$ is Hermitian. Then
$$T = \tfrac{1}{4}T^*T + \tfrac{3}{4}I = \tfrac{1}{4}T^2 + \tfrac{3}{4}I.$$

Then for any $x \in V$,
$$(Tx, x) = \tfrac{1}{4}(T^2x, x) + \tfrac{3}{4}(x, x) = \tfrac{1}{4}(Tx, Tx) + \tfrac{3}{4}(x, x) \ge 0$$
since inner products are positive definite. Further, if $(Tx, x) = 0$ then $(Tx, Tx) = 0$ and $(x, x) = 0$. The latter can only happen if $x = 0$. Thus $T$ is positive definite.
The functional equation for $T$ gives
$$T^2 - 4T + 3I = 0 \implies (T - I)(T - 3I) = 0.$$
Then the minimal polynomial of $T$ must divide $(t - 1)(t - 3)$, and every eigenvalue of $T$ is a root of the minimal polynomial. Thus the only possible eigenvalues of $T$ are $1$ and $3$.

Problem S10.6. Let $A = \begin{pmatrix} 4 & -4 \\ 1 & 0 \end{pmatrix}$.
(i) Find a Jordan form $J$ of $A$ and a matrix $P$ such that $P^{-1}AP = J$.
(ii) Compute $A^{100}$ and $J^{100}$.
(iii) Find a formula for $a_n$ when $a_0 = a$, $a_1 = b$ and $a_{n+1} = 4a_n - 4a_{n-1}$.

Solution.
(i) The characteristic polynomial of $A$ is $p_A(x) = x(x - 4) + 4 = x^2 - 4x + 4 = (x - 2)^2$. Thus the sole eigenvalue of $A$ is $2$. We see $N = A - 2I$ is not the zero matrix. Thus there is $x \ne 0$ so that $Nx \ne 0$. We see by inspection that $x = (1, 0)^t$ gives $Nx = (2, 1)^t$. Put
$$P = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}.$$
Then
$$P^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix}$$
and
$$P^{-1}AP = \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} 4 & -4 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} =: J.$$
(ii) We see
$$J^2 = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 4 & 4 \\ 0 & 4 \end{pmatrix}$$
and
$$J^3 = \begin{pmatrix} 4 & 4 \\ 0 & 4 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 8 & 12 \\ 0 & 8 \end{pmatrix}.$$
From these, it is reasonable to guess that
$$J^n = \begin{pmatrix} 2^n & n2^{n-1} \\ 0 & 2^n \end{pmatrix}.$$
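The guess for $J^n$, and the resulting formula $A^n = PJ^nP^{-1}$, can be checked numerically; the following numpy sketch (my own addition) uses a modest power to avoid floating-point overflow.

```python
import numpy as np

# Verify J = P^{-1} A P and the closed form J^n = [[2^n, n*2^(n-1)], [0, 2^n]]
# for the matrix of Problem S10.6; the same recipe gives A^100 = P J^100 P^{-1}.
A = np.array([[4.0, -4.0], [1.0, 0.0]])
P = np.array([[2.0, 1.0], [1.0, 0.0]])

J = np.linalg.inv(P) @ A @ P
assert np.allclose(J, [[2.0, 1.0], [0.0, 2.0]])

n = 10                                             # small exponent for the check
Jn = np.array([[2.0**n, n * 2.0**(n - 1)], [0.0, 2.0**n]])
assert np.allclose(np.linalg.matrix_power(J, n), Jn)

An = P @ Jn @ np.linalg.inv(P)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```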