LINEAR ALGEBRA Fall 2013
The final exam. Almost all of the problems solved.

Exercise 1 Let $(V, \|\cdot\|)$ be a normed vector space. Prove $\big|\,\|x\| - \|y\|\,\big| \le \|x - y\|$ for all $x, y \in V$.

Everybody knows how to do this!

Exercise 2 If $V$ is a vector space, two norms $\|\cdot\|_1$, $\|\cdot\|_2$ are said to be equivalent iff there exist constants $a, b$, $0 < a < b$, such that
$$a\|x\|_1 \le \|x\|_2 \le b\|x\|_1 \quad \text{for all } x \in V.$$
We saw that for finite dimensional normed vector spaces all norms are equivalent. Verify this directly for the following norms of $\mathbb{R}^n$, defined as follows, if $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$:
$$\|x\|_p = \Big(\sum_{j=1}^n |x_j|^p\Big)^{1/p}, \ \text{if } 1 \le p < \infty, \qquad \|x\|_\infty = \max_{1 \le j \le n} |x_j|, \ \text{if } p = \infty.$$
That is, prove all these norms are equivalent without appealing to the theorem about equivalence of norms. In addition, prove or disprove: $\lim_{p \to \infty} \|x\|_p = \|x\|_\infty$ for all $x \in \mathbb{R}^n$.

At first I assumed that the equivalence of these norms was too trivial for me to add a proof, but a certain amount of confusion was visible in the answers. It was permissible to assume that equivalence of norms is an equivalence relation!; the proof of this fact being quite immediate. Therefore, one can select one of them (not two or three) and simply show the others are equivalent to it. The easiest one is $\|\cdot\|_\infty$. Then one can say: Let $1 \le p < \infty$. That $\|x\|_\infty \le \|x\|_p$ is obvious; on the other hand
$$\|x\|_p^p = \sum_{i=1}^n |x_i|^p \le \sum_{i=1}^n \|x\|_\infty^p = n\|x\|_\infty^p,$$
so that $\|x\|_p \le n^{1/p}\|x\|_\infty$. And we are done!

Concerning the last statement, here is a proof. Let $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ and let $\xi = \|x\|_\infty = \max_{1 \le j \le n} |x_j|$. If $\xi = 0$ then $x = 0$ and there is nothing to prove, so assume $\xi \ne 0$. Let $k$ be the number of indices for which $|x_i| = \xi$, so $1 \le k \le n$. Then, if $1 \le p < \infty$,
$$\|x\|_p = \xi\, k^{1/p}(1 + \epsilon_p)^{1/p}, \quad \text{where } \epsilon_p = \frac{1}{k}\sum\Big\{\Big|\frac{x_i}{\xi}\Big|^p : |x_i| < \xi\Big\} \to 0 \text{ as } p \to \infty.$$
A trivial application of the popularly named squeeze theorem proves $\lim_{p \to \infty} k^{1/p}(1 + \epsilon_p)^{1/p} = 1$, hence $\lim_{p \to \infty} \|x\|_p = \xi = \|x\|_\infty$.

Exercise 3 Let $V$ be a finite dimensional normed vector space and let $\{e_1, \ldots, e_n\}$ be a basis. Let $\{x_m\}$ be a sequence of elements of $V$ and write $x_m = \sum_{j=1}^n \xi_{mj} e_j$. Prove that the sequence $\{x_m\}$ converges to $x = \sum_{j=1}^n \xi_j e_j$ in $V$; that is,
$$\lim_{m \to \infty} \|x_m - x\| = 0,$$
if and only if each of the $n$ sequences of (real or complex) numbers $\{\xi_{mj}\}_{m=1}^\infty$ converges; more specifically, if and only if $\lim_{m \to \infty} \xi_{mj} = \xi_j$ for $j = 1, \ldots, n$. You may (and almost certainly should) use that in a finite dimensional vector space all norms are equivalent.

I assumed everybody would know how to do this, but I wasn't quite right. My sentence in the statement of the problem, "You may (and almost certainly should) use that in a finite dimensional vector space all norms are equivalent," should perhaps have been more assertive, say: "You may use (and cannot possibly avoid using) that in a finite dimensional vector space all norms are equivalent." It is not a consequence of linear independence that in a general vector space $\|\sum_{j=1}^n \xi_j e_j\|$ small (whatever "small" may mean) implies that the $|\xi_j|$ are small. It is the Heine-Borel theorem that does the trick. The equivalence of all norms!
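Not part of the exam, but the limit in Exercise 2 is easy to watch numerically. A minimal sketch in Python (numpy assumed; the test vector is my own example, with $\xi = 3$ attained $k = 2$ times):

```python
import numpy as np

# Exercise 2: watch ||x||_p approach ||x||_inf as p grows.
x = np.array([3.0, -1.0, 3.0, 2.0])   # xi = ||x||_inf = 3, attained k = 2 times

for p in [1, 2, 4, 8, 16, 32, 64]:
    norm_p = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(f"p = {p:2d}: ||x||_p = {norm_p:.6f}")   # decreases toward 3 * 2^(1/p) -> 3

print("||x||_inf =", np.max(np.abs(x)))
```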

Exercise 4 Let $V, W$ be finite dimensional normed vector spaces; if $T : V \to W$ is a linear operator define
$$\|T\| = \sup\{\|Tx\|_W : x \in V, \ \|x\|_V \le 1\}. \qquad (1)$$

1. Prove $\|T\| < \infty$ for all $T \in L(V, W)$ (this may require assuming all norms in a finite dimensional vector space are equivalent).

Let $\{e_1, \ldots, e_n\}$ be a basis of $V$. In $V$ we will also consider the norm $\|x\|_{V,1} = \sum_{j=1}^n |\xi_j|$, if $x = \sum_{j=1}^n \xi_j e_j$. By the equivalence of all norms in finite dimensional vector spaces, there exists $c > 0$ such that $\|x\|_{V,1} \le c\|x\|_V$ for all $x \in V$. Now let $b = \max\{\|Te_i\|_W : i = 1, \ldots, n\}$. Let $x = \sum_{j=1}^n \xi_j e_j \in V$. Then
$$\|Tx\|_W = \Big\|\sum_{j=1}^n \xi_j Te_j\Big\|_W \le \sum_{j=1}^n |\xi_j|\, \|Te_j\|_W \le b\sum_{j=1}^n |\xi_j| = b\|x\|_{V,1} \le bc\|x\|_V.$$
It follows that $\|T\| \le bc < \infty$.

2. Prove that with $\|T\|$ as defined for $T \in L(V, W)$, $L(V, W)$ becomes a normed vector space.

This follows from straightforward, essentially obvious, computations.

3. Assume $V = W$. Prove that this norm in $L(V)$ also satisfies $\|TS\| \le \|T\|\,\|S\|$ and $\|I\| = 1$.

Let $T, S \in L(V)$ and let $x \in V$. Then $\|TSx\| \le \|T\|\,\|Sx\| \le \|T\|\,\|S\|\,\|x\|$, and $\|TS\| \le \|T\|\,\|S\|$ follows. Concerning $I$, the statement that $\|I\| = 1$ is only 99.99% true. Clearly, $\|Ix\| = \|x\|$ for all $x$, and that implies $\|I\| \le 1$. But to get $\|I\| \ge 1$ we need to have $x \in V$, $\|x\| \le 1$, such that $\|Ix\| = 1$. Any $x \in V$ with $\|x\| = 1$ will do, assuming there is such an $x$! The statement is thus true as long as $V \ne \{0\}$; it fails if $V$ is the pathetic trivial space.

4. Let $V = \mathbb{C}^n$ with the Euclidean norm, which is usually denoted by the same symbol as the absolute value; that is, if $z = (z_1, \ldots, z_n) \in \mathbb{C}^n$ we set
$$|z| = \|z\|_2 = (z \cdot \bar{z})^{1/2} = \Big(\sum_{j=1}^n |z_j|^2\Big)^{1/2}.$$
As usual, we identify operators in $L(\mathbb{C}^n)$ with their matrices with respect to the canonical basis of $\mathbb{C}^n$; write $T = (t_{jk})$ if $T \in L(\mathbb{C}^n)$. In brief, we identify $L(\mathbb{C}^n)$ with $M_n(\mathbb{C})$. Now $M_n(\mathbb{C})$ can be identified with $\mathbb{C}^{n^2}$, so it now makes sense to define for an operator $T \in L(\mathbb{C}^n)$ the norm $\|T\|_p$ for $p \in [1, \infty]$. We only need this for $p = 2$, so just in case,
$$\|T\|_2 = \Big(\sum_{j,k=1}^n |t_{jk}|^2\Big)^{1/2}.$$
Another norm that one can define on $M_n(\mathbb{C})$ is
$$\|T\|_{2,\infty} = \max_{1 \le k \le n}\Big(\sum_{j=1}^n |t_{jk}|^2\Big)^{1/2}.$$
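As an aside, these three norms are easy to compare numerically before proving anything about them. A sketch (numpy assumed; the $4 \times 4$ random matrix and the seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op_norm = np.linalg.norm(T, 2)               # ||T||, the operator norm (1)
frob_norm = np.linalg.norm(T, 'fro')         # ||T||_2
col_norm = np.linalg.norm(T, axis=0).max()   # ||T||_{2,inf}, the max column 2-norm

print(col_norm <= op_norm <= frob_norm)      # True: the inequality proved in (a)
```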

(a) With $\|T\|$ being the operator norm defined by (1), prove $\|T\|_{2,\infty} \le \|T\| \le \|T\|_2$ for all $T \in L(\mathbb{C}^n)$. (There was originally a part (b) here, which I removed.)

This is a simple computation, but most everybody got it wrong. First of all recall that if $e_1 = (1, 0, \ldots, 0), \ldots, e_n = (0, \ldots, 0, 1)$ is the standard basis of $\mathbb{C}^n$, then $Te_k = (t_{1k}, \ldots, t_{nk})$ for $k = 1, \ldots, n$. Since $|e_k| = 1$ for all $k$, we have $|Te_k| \le \|T\|$ for all $k$, hence
$$\|T\| \ge \max_{1 \le k \le n} |Te_k| = \max_{1 \le k \le n}\Big(\sum_{j=1}^n |t_{jk}|^2\Big)^{1/2} = \|T\|_{2,\infty}.$$
Conversely, we will use that if $x = (x_1, \ldots, x_n) \in \mathbb{C}^n$, then, by Cauchy-Schwarz,
$$\Big|\sum_{k=1}^n t_{jk} x_k\Big|^2 \le \Big(\sum_{k=1}^n |t_{jk}|^2\Big)\Big(\sum_{k=1}^n |x_k|^2\Big) = |x|^2 \sum_{k=1}^n |t_{jk}|^2,$$
to get
$$|Tx|^2 = \Big|\Big(\sum_{k=1}^n t_{1k} x_k, \ldots, \sum_{k=1}^n t_{nk} x_k\Big)\Big|^2 = \sum_{j=1}^n \Big|\sum_{k=1}^n t_{jk} x_k\Big|^2 \le |x|^2 \sum_{j,k=1}^n |t_{jk}|^2 = \|T\|_2^2\, |x|^2,$$
proving $\|T\| \le \|T\|_2$.

Exercise 5 Let $V$ be a finite dimensional normed vector space, let $\{T_m\}$, $\{S_m\}$ be sequences of operators in $L(V)$, converging (in the operator norm (1)) to $T$, $S$ respectively.

1. Prove that $\{T_m + S_m\}$ converges to $T + S$ and that $\{T_m S_m\}$ converges to $TS$.

The proof of this is essentially the same one that works for sequences in $\mathbb{R}$.

2. Prove: If $S \in L(V)$ and $\|S\| < 1$, then $I - S$ is invertible.

As a finite dimensional normed vector space, $L(V)$ is complete. Let $S \in L(V)$ and assume $\|S\| < 1$. It is immediately verifiable that the sequence $\{T_n = I + S + \cdots + S^n\}$ is a Cauchy sequence in the operator norm, thus converges to some operator $T$. Now
$$T_n(I - S) = I + S + \cdots + S^n - (S + \cdots + S^{n+1}) = I - S^{n+1},$$
thus
$$\|T_n(I - S) - I\| = \|S^{n+1}\| \le \|S\|^{n+1} \to 0 \text{ as } n \to \infty.$$
Thus $\{T_n(I - S)\}$ converges to $I$. But by part 1, it converges to $T(I - S)$. Thus $T(I - S) = I$, proving $I - S$ invertible, with
$$(I - S)^{-1} = T = \sum_{n=0}^\infty S^n.$$

3. Let $G(V)$ be the group of invertible operators in $L(V)$. Prove it is an open subset of $L(V)$.

Let $T \in G(V)$. Let $\epsilon = 1/\|T^{-1}\|$. Assume $\|S - T\| < \epsilon$. Then $S = T - (T - S) = T(I - T^{-1}(T - S))$. Now $T$ is invertible, and since
$$\|T^{-1}(T - S)\| \le \|T^{-1}\|\,\|T - S\| < \epsilon\|T^{-1}\| = 1,$$
$I - T^{-1}(T - S)$ is also invertible. As the product of two invertible operators, so is $S$. The result follows.
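The geometric convergence of the partial sums in part 2 is easy to see numerically. A small sketch (numpy assumed; the dimension, the seed, and the rescaling to $\|S\| = 0.9$ are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
S *= 0.9 / np.linalg.norm(S, 2)       # rescale so that ||S|| = 0.9 < 1

# Partial sums T_n = I + S + ... + S^n of the Neumann series.
T_n = np.eye(5)
term = np.eye(5)
for _ in range(200):
    term = term @ S
    T_n += term

print(np.linalg.norm(T_n - np.linalg.inv(np.eye(5) - S), 2))  # tiny: tail is O(0.9^n)
```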

Exercise 6 Let $V$ be a normed vector space of finite dimension $n$. Let $T \in L(V)$.

1. Prove that the sequence of operators $\{I + T + \frac{1}{2!}T^2 + \cdots + \frac{1}{m!}T^m\} = \{\sum_{k=0}^m \frac{1}{k!}T^k\}$ converges in the normed space $L(V)$, so that it makes sense to define
$$e^T = \sum_{k=0}^\infty \frac{1}{k!}T^k.$$

Since $\sum_{k=0}^\infty \frac{1}{k!}\|T\|^k < \infty$, it is immediate that $\{\sum_{k=0}^m \frac{1}{k!}T^k\}$ is a Cauchy sequence in $L(V)$, thus converges.

2. Let $V = \mathbb{R}^3$ and consider the operator (matrix)
$$T = \begin{pmatrix} 1 & 3 & 5 \\ 1 & -1 & -1 \\ 4 & 4 & 2 \end{pmatrix}.$$
Evaluate $e^T$ explicitly.

We find the $S, N$ decomposition of the matrix. The characteristic polynomial of this matrix is $p(\lambda) = \lambda^3 - 2\lambda^2 - 20\lambda - 24 = (\lambda + 2)^2(\lambda - 6)$. Solving for eigenvectors, for $\lambda = -2$ we have to solve
$$3x + 3y + 5z = 0, \quad x + y - z = 0, \quad 4x + 4y + 4z = 0.$$
This solves to $x = -y$, $z = 0$, giving a one-dimensional eigenspace spanned, for example, by $(1, -1, 0)$. The generalized eigenspace, however, has dimension 2. We can now square $T + 2I$; the square will have 0 as an eigenvalue; one eigenvector will be the one we found, and a second, linearly independent one will complete the basis of the generalized eigenspace corresponding to $-2$. See the notes on the Jordan form for more details. I get $(1, 0, -1)$ as a possible second basis vector. Finally, one finds that $(1, 0, 1)$ is an eigenvector corresponding to the eigenvalue 6. Let
$$U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix};$$
then
$$U^{-1} = \begin{pmatrix} 1/2 & 1/2 & -1/2 \\ 0 & -1 & 0 \\ 1/2 & 1/2 & 1/2 \end{pmatrix}$$
and
$$U^{-1}TU = \begin{pmatrix} -2 & 0 & 0 \\ -2 & -2 & 0 \\ 0 & 0 & 6 \end{pmatrix} = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 6 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
which is the $S, N$ decomposition:
$$S = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 6 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & 0 & 0 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
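Arithmetic like this is easy to get wrong by hand; here is a quick numerical check of the decomposition just found (a sketch; numpy and scipy assumed):

```python
import numpy as np
from scipy.linalg import expm   # assumed available for the check at the end

T = np.array([[1., 3., 5.], [1., -1., -1.], [4., 4., 2.]])
U = np.array([[1., 1., 1.], [0., -1., 0.], [-1., 0., 1.]])
S = np.diag([-2., -2., 6.])
N = np.array([[0., 0., 0.], [-2., 0., 0.], [0., 0., 0.]])
Uinv = np.linalg.inv(U)

print(np.allclose(Uinv @ T @ U, S + N))              # the form found above
print(np.allclose(expm(T), U @ expm(S + N) @ Uinv))  # e^T = U e^{S+N} U^{-1}
```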

We see $SN = NS$ (as it should be) and, since $N^2 = 0$, we get
$$e^{S+N} = \sum_{m=0}^\infty \frac{1}{m!}(S + N)^m = \sum_{m=0}^\infty \frac{1}{m!}(S^m + mS^{m-1}N) = e^S(I + N) = \begin{pmatrix} e^{-2} & 0 & 0 \\ 0 & e^{-2} & 0 \\ 0 & 0 & e^6 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} e^{-2} & 0 & 0 \\ -2e^{-2} & e^{-2} & 0 \\ 0 & 0 & e^6 \end{pmatrix}.$$
Finally,
$$e^T = U e^{S+N} U^{-1} = \begin{pmatrix} \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^6 - 3e^{-2}) & \frac{1}{2}(e^6 + e^{-2}) \\ e^{-2} & 2e^{-2} & -e^{-2} \\ \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^6 + e^{-2}) \end{pmatrix}.$$

I usually write up the solutions before starting to grade; after grading the problem in question I might add some comments, or even change what I have written. Here I am adding a comment. If $A$ is a square $n \times n$ matrix, a lot of authors consider it as an operator on $F^n$ by defining $x \mapsto xA$. In this case the Jordan form is the transpose of what we called the Jordan form. It is interesting that all of you who used the word "Jordan" used a Jordan form with the 1's above the main diagonal. It seems that the only one who actually took the trouble to go through the main steps to calculate the exponential was I.

Exercise 7 Let $f : V \to K$ be linear, $y \in V$, and assume that $f(x) = (x, y)$ for all $x \in V$. Then $\|f\|$ can be defined by (1); that is, $\|f\| = \sup\{|f(x)| : x \in V, \|x\| \le 1\}$. Prove $\|f\| = \|y\|$.

Since $|f(x)| = |(x, y)| \le \|x\|\,\|y\| = \|y\|\,\|x\|$, we get $\|f\| \le \|y\|$. If $y = 0$, then $f = 0$ and $\|f\| = 0 = \|y\|$, so assume $y \ne 0$. Then $\|y/\|y\|\| = 1$ and $|f(y/\|y\|)| \le \|f\|$; i.e., $\|f\| \ge (y/\|y\|, y) = \|y\|$.

Exercise 8 Let $A \in L(V)$. Prove $\|A^*\| = \|A\|$.

Notice first that $A^{**} := (A^*)^* = A$. Now let $x \in V$, $\|x\| \le 1$. Then
$$\|A^*x\|^2 = (A^*x, A^*x) = (AA^*x, x) \le \|A\|\,\|A^*x\|\,\|x\| \le \|A\|\,\|A^*\|\,\|x\|^2,$$
thus $\|A^*\|^2 \le \|A\|\,\|A^*\|$, proving $\|A^*\| \le \|A\|$. But then
$$\|A\| = \|(A^*)^*\| \le \|A^*\|,$$
proving the converse inequality.

Exercise 9 Textbook, Exercise 4, §70, page 137.

To get $CA = B$, the obvious choice is $C = BA^{-1}$. But $A$ might not be invertible! Thinking a bit more carefully, we see that $C$, applied to anything of the form $Ax$, should be of the form $Bx$. So we make the obvious definition of $C : R(A) \to V$: $C(Ax) = Bx$. Is this well defined? The answer is yes; if $Ax = Ay$, then $x - y \in \ker(A) \subseteq \ker(B)$, thus $Bx = By$. Because it is well defined, it is also easy to see that $C$ is linear. The orthogonal complement of $R(A)$ is $\ker(A^*) = \ker(A)$; the tricky part is how to define $C$ on $\ker(A)$ so as to assure that $C^* = C$. Playing around with inner products, etc., one reaches the conclusion that the definition has to satisfy: If $x \in \ker(A)$, then
$$ACx = B^*x \quad \text{and} \quad (Cx, y) = (x, Cy)$$

6 for all x, y ker(a). Assuming for a moment we have achieved this, let x, y V. Write x = Au + x 2, y = Av + y 2, where u, v V and x 2, y 2 ker A. Now, using where convenient the self-adjointness of A, AB, (Cx, y) = (Bu + Cx 2, Av + y 2 ) = (Bu + Cx 2, Av) + (Bu + Cx 2, y 2 ) = (ABu + ACx 2, v) + (Bu + Cx 2, y 2 ) = (ABu, v) + (ACx 2, v) + (Bu, y 2 ) + (Cx 2, y 2 ), (x, Cy) = (Au + x 2, Bv + Cy 2 ) = (u, ABv + ACy 2 ) + (x 2, Bv + Cy 2 ) = (u, ABv) + (u, ACy 2 ) + (x 2, Bv + Cy 2 ) = (ABu, v) + (u, ACy 2 ) + (x 2, Bv) + (x 2, Cy 2 ). We see (Cx, y) = (x, Cy). In fact, (u, ABv) = (ABu, v) since AB is self-adjoint, (u, ACy 2 ) = (u, B y 2 ) = (Bu, y 2) by the assumption about AC since y 2 ker A, similarly (x 2, Bv) = (B x 2, v) = (ACx 2, v). Finally, (x 2, Cy 2 ) = (Cx 2, y 2 ) since x 2, y 2 ker A. Let us prove now that we can achieve to have C defined on ker A so the desired properties hold. Let {e,..., e m } be a basis of ker A. Since ker(a) ker(b) = R(B ), we have R(A) = ker(a ) = ker(a) R(B ); thus there exist y,..., y m V such that Ay j = e j for j =,..., m. Writing y j = y j + y j with y j R(A), y j ker(a), we have Ay j = Ay j = e j. Replacing the y j s by the y j s, we may assume y j R(A) for j =,..., n. Define C on ker(a) by Ce j = y j, j =,..., m. Then ACe j = B e j and it follows that AC = B on ker(a). Let x, y ker(a), say x = m c je j, y = m d je j. Then (Cx, y) = j,k c j dk (y j, e k ) = 0 since y j R(A) and e k ker(a) = R(A) for all j, k. Similarly (x, Cy) = 0 for x, y ker A. Thus (Cx, y) = 0 = (x, Cy) for x, y ker A. Only one person started this problem. I did not quite understand this person s solution. If right, it is shorter than what I wrote here. Exercise 0 Textbook, Exercise 6, 70, page 37. (Hint: No and yes; prove: If A is skew, A 2 is self-adjoint. Should take about half a second.) If A = 0, then A is both skew and symmetric, and the same is true for all of its powers. Assume thus A 0. Assume A is skew, A = A. Then (A 2 ) = (A ) 2 = ( A) 2 = A 2, so A 2 is symmetric. One could ask whether A 2 = 0, but for a skew matrix this implies that A = 0. Similarly, (A 3 ) = ( A) 3 = A 3, so A 3 is skew. Exercise Textbook, Exercise 0, 70, page 37. (a) det(a) = det(a ) = det( A) = ( ) n det(a), so that if dim V is odd, we get det A = det A, hence det A = 0. (b) Let A be skew adjoint in V. Write V = (ker(a)) (ker(v ) ); because A is skew adjoint it is easy (trivial?) to see that ker(v ) is A-invariant; as an operator in ker(v ), A is still skew-adjoint. It is also invertible, thus by part (a) we must have dim(ker(v ) ) even. since we are done. dim(r(a)) = dim V dim(ker(v )) = dim(ker(v ) ), Exercise 2 Textbook, Exercise 7, 74, page 45. (a) If x V and (A I)x = 0, then Ax = x and x 2 = (x, x) = (Ax, x) = (x, Ax) = (x, x) = x 2 so x = 0. It follows that A I is invertible.

(b) Let $U = (A + I)(A - I)^{-1}$, where $A$ is skew. Let $x \in V$. Then, setting $y = (A - I)^{-1}x$, we have
$$\|Ux\|^2 = \|(A + I)y\|^2 = \|Ay\|^2 + 2\,\mathrm{Re}\,(Ay, y) + \|y\|^2 = \|Ay\|^2 - 2\,\mathrm{Re}\,(y, Ay) + \|y\|^2.$$
Since $\mathrm{Re}\, z = \mathrm{Re}\, \bar{z}$ and $(y, Ay) = \overline{(Ay, y)}$, we have that
$$\|Ux\|^2 = \|Ay\|^2 - 2\,\mathrm{Re}\,(Ay, y) + \|y\|^2 = \|(A - I)y\|^2 = \|x\|^2.$$
It follows that $U$ is an isometry.

(c) If $U = (A + I)(A - I)^{-1}$ where $A$ is skew-adjoint, then $U - I$ is invertible. To see this, assume $x \in V$ and $Ux = x$. Applying $A - I$ to both sides (note that $A + I$ and $A - I$ commute, so $U = (A - I)^{-1}(A + I)$ as well), we get $(A + I)x = (A - I)x$, whence $x = -x$, hence $x = 0$.

(d) Assume $U$ is an isometry and $U - I$ is invertible. Let $A = (U + I)(U - I)^{-1}$. Since $U$ is an isometry, $(Uz, Uw) = (z, w)$ for all $z, w \in V$. Then for $x, y \in V$, setting $z = (U - I)^{-1}x$ and $w = (U - I)^{-1}y$,
$$(Ax, y) = ((U + I)z, (U - I)w) = (Uz, Uw) - (Uz, w) + (z, Uw) - (z, w) = (z, Uw) - (Uz, w),$$
while
$$(x, Ay) = ((U - I)z, (U + I)w) = (Uz, Uw) + (Uz, w) - (z, Uw) - (z, w) = (Uz, w) - (z, Uw) = -(Ax, y),$$
so that $A^* = -A$; that is, $A$ is skew-adjoint.

Exercise 13 Let $V$ be an inner product space and let $A$ be a self-adjoint operator on $V$. Prove: Let $\lambda = \sup\{(Ax, x) : x \in V, \|x\| = 1\}$. Then $\lambda = \max \sigma(A)$. Moreover, if $(Ax, x) = \lambda$, $\|x\| = 1$, then $Ax = \lambda x$.

The solution I had in mind was more or less as follows. There exists a sequence $\{x_m\}$ with $\|x_m\| = 1$ such that $\lambda = \lim_{m \to \infty} (Ax_m, x_m)$. Since we are in a finite dimensional vector space, Heine-Borel is valid; a bounded sequence has a convergent subsequence. Passing to this subsequence, we may as well assume $\{x_m\}$ converges; there is $x \in V$ such that $\lim_{m \to \infty} x_m = x$. It is easy to see that the map $y \mapsto \|y\| : V \to \mathbb{R}$ is continuous, implying $\|x\| = 1$, and the map $y \mapsto (Ay, y)$ is continuous, implying $(Ax, x) = \lambda$. It should be noted that we used implicitly that $A$ is self-adjoint (at least if the vector space is complex); $A$ self-adjoint guarantees that $(Ax, x)$ is real.

Suppose now $\|x\| = 1$ and $(Ax, x) = \lambda$. Let $y \in V$, $t \in \mathbb{R}$, and assume first $x + ty \ne 0$. Then $(x + ty)/\|x + ty\|$ is a vector of norm 1, hence, by the definition of $\lambda$,
$$\frac{(A(x + ty), x + ty)}{\|x + ty\|^2} = \Big(A\frac{x + ty}{\|x + ty\|}, \frac{x + ty}{\|x + ty\|}\Big) \le \lambda;$$
i.e.,
$$(A(x + ty), x + ty) \le \lambda\|x + ty\|^2. \qquad (2)$$
Since (2) is trivially true if $x + ty = 0$, it holds for all $t \in \mathbb{R}$, $y \in V$. Keep $y$ fixed for a while. Expanding the two sides of the inequality in (2), using that $(x, x) = \|x\|^2 = 1$ and $(Ax, x) = \lambda$ (and here we are using the fact that $A$ is self-adjoint to replace $(Ay, x)$ by $(y, Ax) = \overline{(Ax, y)}$), we get
$$(\lambda\|y\|^2 - (Ay, y))t^2 - 2\,\mathrm{Re}\,(Ax - \lambda x, y)\,t \ge 0. \qquad (3)$$
If in (3) we divide by $t > 0$ and let $t \to 0+$, we conclude that $\mathrm{Re}\,(Ax - \lambda x, y) \le 0$. Since $y \in V$ was arbitrary, we can replace $y$ by $-y$ to conclude that $\mathrm{Re}\,(Ax - \lambda x, y) \ge 0$, hence $\mathrm{Re}\,(Ax - \lambda x, y) = 0$. If the space is complex, we can replace next $y$ by $iy$ and conclude that $\mathrm{Im}\,(Ax - \lambda x, y) = 0$ as well. Thus $((A - \lambda)x, y) = 0$ for all $y \in V$, proving $Ax = \lambda x$.

Exercise 14 Let $V$ be a finite dimensional inner product space and let $A \in L(V)$. Then
$$\|A\|^2 = \max\{\lambda : \lambda \text{ is an eigenvalue of } A^*A\}.$$
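A numerical illustration of this identity before the proof (a sketch; numpy assumed, and the random matrix is my own example):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

op_norm_sq = np.linalg.norm(A, 2) ** 2               # ||A||^2
top_eig = np.linalg.eigvalsh(A.conj().T @ A).max()   # largest eigenvalue of A*A
print(np.isclose(op_norm_sq, top_eig))               # True
```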

This exercise is a simple consequence of the previous one. Let $\lambda_1$ be the largest eigenvalue of the positive (thus self-adjoint) operator $A^*A$. If $x \in V$, then $\|Ax\|^2 = (A^*Ax, x) \le \lambda_1\|x\|^2$ by the previous exercise. Thus $\|A\|^2 \le \lambda_1$. On the other hand, there is $x \in V$, $\|x\| = 1$, such that $A^*Ax = \lambda_1 x$; then $\|A\|^2 \ge \|Ax\|^2 = (A^*Ax, x) = \lambda_1$.

Exercise 15 Let $F$ be a field of characteristic 0. Consider the following two matrices in $M_n(F)$:
$$A = \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} n & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Prove they are similar. As a hint, if they are, then $B$ would have to be the Jordan normal form of $A$.

A field of characteristic 0 contains $\mathbb{Q}$ as a subfield, and we may as well assume $F = \mathbb{C}$; at least for a while. Clearly $A$ is self-adjoint. Trying to find eigenvalues and eigenvectors, I think it is easiest to just look at the equation $Ax = \lambda x$; if $x = (x_1, \ldots, x_n)$ this results in the $n$ equations
$$x_1 + \cdots + x_n = \lambda x_i, \quad i = 1, \ldots, n.$$
If $\lambda \ne 0$, the only way all these equations can hold is if $x_1 = x_2 = \cdots = x_n$. But then $\lambda x_1 = x_1 + \cdots + x_n = nx_1$, so $\lambda = n$. If $\lambda = 0$, on the other hand, we can easily find $n - 1$ linearly independent solutions. That is, with a minimum of effort we find the following set of eigenvectors, the first one corresponding to the eigenvalue $n$, the rest to the eigenvalue 0:
$$(1, 1, \ldots, 1), \ (1, -1, 0, \ldots, 0), \ (1, 0, -1, 0, \ldots, 0), \ \ldots, \ (1, 0, \ldots, 0, -1, 0), \ (1, 0, \ldots, 0, -1).$$
The first eigenvector is orthogonal to all the others, as it should be (eigenvectors of a self-adjoint matrix corresponding to distinct eigenvalues are orthogonal), and the basis could easily be replaced by an orthonormal one. In this basis the operator $A$ clearly has the matrix $B$, proving the two matrices to be similar. Finally, the matrix of eigenvectors
$$U = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & -1 & 0 & \cdots & 0 \\ 1 & 0 & -1 & & \vdots \\ \vdots & & & \ddots & 0 \\ 1 & 0 & \cdots & 0 & -1 \end{pmatrix}$$
makes sense in $\mathbb{Q}$ (as does its inverse), so we are done.

Exercise 16 Let $V$ be a vector space of finite dimension $n$ over a field $F$ (any field, for now). Let $E$ be a projection operator; that is, $E \in L(V)$ and $E^2 = E$.

1. Show that the minimum polynomial $p$ of $E$ has one of the following forms: $p(\lambda) = \lambda$, $p(\lambda) = \lambda - 1$, $p(\lambda) = \lambda(\lambda - 1)$.

We have $E(E - I) = 0$. If $E = 0$, it follows that setting $p(\lambda) = \lambda$ we have $p(E) = 0$, and $p(\lambda) = \lambda$ is the minimum polynomial. If $E = I$, then it is $p(\lambda) = \lambda - 1$. Otherwise let $p(\lambda) = \lambda(\lambda - 1)$. Since $p(E) = 0$, the minimum polynomial must be a factor of $p(\lambda)$. The only proper factors are $\lambda$ and $\lambda - 1$. Since $E \ne 0$, $E - I \ne 0$, neither factor vanishes on $E$, thus $p(\lambda) = \lambda(\lambda - 1)$ is the minimum polynomial.

2. Show that $\sigma(E) \subseteq \{0, 1\}$.

This is immediate from part 1.

Exercise 17 Show that an operator $T$ is nilpotent if and only if $\sigma(T) = \{0\}$.

We have to assume here that all zeros of the characteristic polynomial are in the field!! That a nilpotent operator cannot have any eigenvalue other than 0 is clear. Conversely, assume there is $x \ne 0$, $Tx = \lambda x$, $\lambda \ne 0$. Then $T^k x = \lambda^k x \ne 0$ for all $k \in \mathbb{N}$, so $T$ is not nilpotent.

Alternative proof: The only way that the spectrum of $T$ can reduce to $\{0\}$ is if the characteristic polynomial is $p(\lambda) = \lambda^n$, which happens if and only if $T^n = 0$.
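Returning to Exercise 15 for a moment: the similarity can be checked with exact integer arithmetic (a sketch; numpy assumed, and the choice $n = 4$ is mine). Verifying $AU = UB$ together with the invertibility of $U$ gives $U^{-1}AU = B$:

```python
import numpy as np

n = 4
A = np.ones((n, n), dtype=np.int64)     # the all-ones matrix
B = np.zeros((n, n), dtype=np.int64)
B[0, 0] = n

# Columns of U: (1,...,1), then (1, 0, ..., -1 in slot j, ..., 0) for j >= 1.
U = np.zeros((n, n), dtype=np.int64)
U[:, 0] = 1
U[0, 1:] = 1
U[np.arange(1, n), np.arange(1, n)] = -1

print((A @ U == U @ B).all())           # AU = UB exactly, over the integers
print(round(np.linalg.det(U)) != 0)     # U invertible, so U^{-1} A U = B
```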

Exercise 18 Prove: $A \in L(V)$ is normal if and only if $V$ has an orthonormal basis consisting of eigenvectors of $A$ (definitely assume $V$ is a complex, finite dimensional, inner product space). Prove that $A$ is self-adjoint if and only if it is normal and all its eigenvalues are real. $A$ is unitary if and only if it is normal and all its eigenvalues are complex numbers of absolute value 1.

If $A$ is normal, it has such a basis by the spectral theorem. Conversely, assume $A$ has such a basis and let $U$ be the matrix whose columns are the eigenvectors. One sees that $U^*U = I$, thus $U$ is unitary. Moreover, one also sees that $AU = UD$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$. Thus $A = UDU^*$, $A^* = UD^*U^*$, hence
$$A^*A = UD^*DU^* = UDD^*U^* = AA^*,$$
since diagonal matrices commute. The rest of the exercise is sort of obvious (possibly all of it is), so I won't write out a solution.

Exercise 19 Textbook, §80, #5 (p. 162).

We assume $A$ is normal, and we can write $A = \sum_{j=1}^r \lambda_j E_j$, where $\lambda_1, \ldots, \lambda_r$ are the distinct eigenvalues of $A$ and $E_1, \ldots, E_r$ are mutually orthogonal orthogonal projections.

(a) If $A = A^2$, then we must have $\lambda_j^2 = \lambda_j$ for $j = 1, \ldots, r$. Thus (the $\lambda_j$'s being distinct) we either have $r = 1$ and $\lambda_1 = 0$ or $\lambda_1 = 1$, or $r = 2$ and $\lambda_1 = 1$, $\lambda_2 = 0$. In all of these cases, $A = E$ is an orthogonal (in particular, self-adjoint) projection.

(b) If $A^k = 0$, then $\lambda_j^k = 0$ for $j = 1, \ldots, r$, hence $\lambda_j = 0$ for $j = 1, \ldots, r$, hence $A = 0$.

(c) If $A^3 = A^2$, then $\lambda_j^3 = \lambda_j^2$ for $j = 1, \ldots, r$, hence either $\lambda_j = 0$ or $\lambda_j = 1$ for each $j$. It follows that $A$ is an orthogonal projection. The conclusion is not true in general (that is, without normality); for example, there are non-zero operators $A$ such that $A^2 = 0$; then $A^3 = A^2 = 0$, but $A^2 \ne A$.

(d) Assume now $A^* = A$, thus $\lambda_j \in \mathbb{R}$ for all $j$. If $A^k = I$ for some $k \in \mathbb{N}$, then $\lambda_j^k = 1$ for $j = 1, \ldots, r$. This implies that either we have a single eigenvalue equal to either 1 or $-1$, or two eigenvalues, one equal to 1, the other one to $-1$. In all cases, $A^2 = I$.

Exercise 20 Textbook, §80, #6 (p. 162).

Yes. For a normal transformation $A$, the kernels of $A$ and of $A^*$ are the same. Thus $AB = 0$ implies that $R(B) \subseteq \ker(A) = \ker(A^*)$, hence $A^*B = 0$, hence $B^*A = (A^*B)^* = 0$; since $B$ is self-adjoint it now follows that $BA = 0$.

Exercise 21 Textbook, §80, #7 (p. 162).

Here is where we can use to good effect Theorem 2 of §74: there exists an orthonormal basis $\{e_1, \ldots, e_n\}$ of $V$ with respect to which the operator $A$ has a lower triangular matrix. The diagonal of that matrix consists of the eigenvalues of $A$, repeated according to their multiplicities. If we denote the entries of this matrix by $a_{ij}$, $1 \le i, j \le n$, then $a_{ii} = \lambda_i$ and $a_{ij} = 0$ if $j > i$. Now the matrix of $A^*A$ with respect to this basis will be $(\bar{a}_{ji})(a_{ij})$, hence
$$\mathrm{tr}(A^*A) = \sum_{i,j=1}^n |a_{ij}|^2 = \sum_{i=1}^n |a_{ii}|^2 + \sum_{(i,j)\,:\,1 \le j < i \le n} |a_{ij}|^2 \ge \sum_{i=1}^n |a_{ii}|^2 = \sum_{i=1}^n |\lambda_i|^2.$$
This proves the inequality. Now equality is achieved if and only if $\sum_{(i,j) : j < i} |a_{ij}|^2 = 0$; i.e., if and only if $a_{ij} = 0$ for $j < i$; i.e., if the matrix of $A$ is diagonal. Having a diagonal matrix with respect to an orthonormal basis is equivalent to being normal.

Generalities

Exercise 22 Here is a cute one. Let $A, B$ be $2 \times 2$ matrices with integer entries, such that the matrices $A$, $A + B$, $A + 2B$, $A + 3B$ and $A + 4B$ are all invertible, and the inverses have integer entries. Show that $A + 5B$ is also invertible with integer entries.

This was once a Putnam problem. The thing to keep in mind is that if $A$ is a matrix with integer entries, then $\det(A) \in \mathbb{Z}$; if both $A$ and $A^{-1}$ have integer entries, then $1 = \det(A)\det(A^{-1})$, and since the only divisors of 1 are 1 and $-1$, we conclude $\det(A) = \pm 1$. Conversely, if $\det A = \pm 1$, then the construction of the inverse matrix using the adjunct matrix shows that $A^{-1}$ has integer entries. Thus an integer matrix has an inverse that is also an integer matrix if and only if $\det A = \pm 1$.

Having established this, let us turn to the problem. For $t \in \mathbb{R}$, let $p(t) = \det(A + tB)$. The hypothesis implies that $p(0), p(1), p(2), p(3), p(4) \in \{1, -1\}$. This implies that $p$ assumes at least 3 times either the value 1 or the value $-1$. But $p$ is a polynomial of degree at most 2; if it assumes a value 3 times it must be constant, thus constantly equal to 1 or $-1$. Then $\det(A + 5B) = \pm 1$, hence $A + 5B$ is invertible and the inverse has integer entries.

Exercise 23 Let $V$ be a finite dimensional vector space over a field $F$ and let $\{e_1, \ldots, e_n\}$ be a basis of $V$. Consider the following linear maps from $V$ to $V$. They are called elementary linear transformations.

I. For $c \in F$, $1 \le i, j \le n$, $E_{i+c(j)}$ is the unique linear map such that $E_{i+c(j)} e_k = e_k$ if $k \ne i$ (even if $k = j$), $E_{i+c(j)} e_i = e_i + ce_j$.

II. For $c \in F$, $c \ne 0$, $1 \le i \le n$, $E_{c(i)}$ is the unique linear map such that $E_{c(i)} e_j = e_j$ if $j \ne i$, $E_{c(i)} e_i = ce_i$.

III. For $1 \le i, j \le n$, $i \ne j$, $E_{ij}$ is the unique linear map such that $E_{ij} e_k = e_k$ if $k \ne i, j$, $E_{ij} e_i = e_j$, $E_{ij} e_j = e_i$.

Prove that a linear operator $T \in L(V)$ is invertible if and only if $T$ is a product of elementary transformations.

Hint and comments. First the comments. Students in our rather elementary Matrix Theory course encounter the matrices of these transformations, called elementary matrices. They learn, with or without too much of a proof, that multiplying a matrix on the left by one of these elementary matrices acts on the rows of the matrix (on the right) by what is known as an elementary row operation. They learn that elementary row operations can be used to take a matrix down to canonical form, and the canonical form of a square invertible matrix is the identity matrix. If a number of these transformations take the matrix to the identity, then the same transformations applied to the identity matrix give you the inverse matrix. This is the row reduction method for inverting a matrix, probably the most efficient of the relatively straightforward inversion methods (a sketch of it in code appears after the hints below).

Now the hints. That all the elementary transformations are invertible is obvious. For the converse you need to show that if $\{f_1, \ldots, f_n\}$ is another basis of $V$, then a sequence of elementary linear transformations can take $\{e_1, \ldots, e_n\}$ to $\{f_1, \ldots, f_n\}$. If you can show that you can replace one of the $e_i$'s by $f_1$, you are on the way. It may not be possible to replace $e_1$ by $f_1$ (for example, $f_1$ could be equal to $e_2$). Maybe your induction hypothesis should be: For $1 \le i \le n$, there is a sequence of elementary linear transformations taking the basis $\{e_1, \ldots, e_n\}$ to a basis $\{f_1, \ldots, f_i, e'_{i+1}, \ldots, e'_n\}$. By a transformation of type III you can then start by taking the basis to a permuted one in which $f_1$ depends on $e_1$, allowing you to replace $e_1$ by $f_1$. Etc.

I am tired. I won't grade this exercise.
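Finally, here is the row reduction method for inverting a matrix, mentioned in the comments above, in executable form. A sketch in Python: exact arithmetic via `fractions`, the function name `invert` is my choice, the sample matrix is the one from Exercise 6, and the routine assumes its input is invertible (otherwise the pivot search fails).

```python
from fractions import Fraction

def invert(mat):
    """Invert a matrix by elementary row operations (types I, II, III above),
    applying the same operations to the identity: row reduction."""
    n = len(mat)
    a = [[Fraction(v) for v in row] for row in mat]
    inv = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)   # type III: swap rows
        a[col], a[piv] = a[piv], a[col]
        inv[col], inv[piv] = inv[piv], inv[col]
        c = a[col][col]                                          # type II: scale the row
        a[col] = [v / c for v in a[col]]
        inv[col] = [v / c for v in inv[col]]
        for r in range(n):                                       # type I: subtract multiples
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
                inv[r] = [v - f * w for v, w in zip(inv[r], inv[col])]
    return inv

print(invert([[1, 3, 5], [1, -1, -1], [4, 4, 2]]))   # the matrix T of Exercise 6
```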