Linear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky


Linear Algebra, Igor Yanovsky 2005.

Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I cannot be made responsible for any inaccuracies contained in this handbook.

Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability
2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram–Schmidt procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections
3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability
4 Determinants
  4.1 Characteristic Polynomial
5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps
6 Problems

1 Basic Theory

1.1 Linear Maps

Lemma. If A ∈ Mat_{m×n}(F) and B ∈ Mat_{n×m}(F), then tr(AB) = tr(BA).

Proof. The (i, i) entry in AB is Σ_{j=1}^n α_{ij} β_{ji}, while the (j, j) entry in BA is Σ_{i=1}^m β_{ji} α_{ij}. Thus

tr(AB) = Σ_{i=1}^m Σ_{j=1}^n α_{ij} β_{ji} = Σ_{j=1}^n Σ_{i=1}^m β_{ji} α_{ij} = tr(BA).

1.2 Linear Maps as Matrices

Example. Let P_n = {α_0 + α_1 t + ... + α_n t^n : α_0, α_1, ..., α_n ∈ F} be the space of polynomials of degree ≤ n and D : V → V the differentiation map D(α_0 + α_1 t + ... + α_n t^n) = α_1 + 2α_2 t + ... + n α_n t^{n-1}. If we use the basis 1, t, ..., t^n for V, then D(t^k) = k t^{k-1}, and the (n+1)×(n+1) matrix representation is computed via

[D(1) D(t) D(t^2) ... D(t^n)] = [0 1 2t ... n t^{n-1}] = [1 t t^2 ... t^n] \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & n \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.

1.3 Dimension and Isomorphism

A linear map L : V → W is an isomorphism if we can find K : W → V such that LK = I_W and KL = I_V.
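Both facts are easy to sanity-check numerically. A minimal numpy sketch (illustrative only; the variable names are ours):

```python
import numpy as np

# tr(AB) = tr(BA) for rectangular A, B of compatible shapes.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True

# Matrix of D = d/dt on polynomials of degree <= 3, basis 1, t, t^2, t^3:
# D(t^k) = k t^(k-1), so k sits in the k-th entry of the superdiagonal.
D = np.diag(np.arange(1.0, 4.0), k=1)
p = np.array([5.0, 2.0, 0.0, 1.0])   # coefficients of 5 + 2t + t^3
print(D @ p)                          # [2. 0. 3. 0.], i.e. 2 + 3t^2
```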

Theorem. V and W are isomorphic ⇔ there is a bijective linear map L : V → W.

Proof. (⇒) If V and W are isomorphic, we can find linear maps L : V → W and K : W → V so that LK = I_W and KL = I_V. Then for any y ∈ W, y = I_W(y) = L(K(y)), so we can let x = K(y), which means L is onto. If L(x_1) = L(x_2), then x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2, which means L is one-to-one.

(⇐) Assume L : V → W is linear and a bijection. Then we have an inverse map L^{-1} which satisfies L ∘ L^{-1} = I_W and L^{-1} ∘ L = I_V. In order for this inverse map to be allowable as K we need to check that it is linear. Select α_1, α_2 ∈ F and y_1, y_2 ∈ W. Let x_i = L^{-1}(y_i), so that L(x_i) = y_i. Then we have

L^{-1}(α_1 y_1 + α_2 y_2) = L^{-1}(α_1 L(x_1) + α_2 L(x_2)) = L^{-1}(L(α_1 x_1 + α_2 x_2)) = I_V(α_1 x_1 + α_2 x_2) = α_1 x_1 + α_2 x_2 = α_1 L^{-1}(y_1) + α_2 L^{-1}(y_2).

Theorem. If F^m and F^n are isomorphic over F, then n = m.

Proof. Suppose we have L : F^m → F^n and K : F^n → F^m such that LK = I_{F^n} and KL = I_{F^m}. Then L ∈ Mat_{n×m}(F) and K ∈ Mat_{m×n}(F), so

n = tr(I_{F^n}) = tr(LK) = tr(KL) = tr(I_{F^m}) = m.

Define the dimension of a vector space V over F as dim_F V = n if V is isomorphic to F^n.

Remark. dim_C C = 1, dim_R C = 2, dim_Q R = ∞.

The set of all linear maps {L : V → W} over F forms a vector space, denoted hom_F(V, W).

Corollary. If V and W are finite dimensional vector spaces over F, then hom_F(V, W) is also finite dimensional and

dim_F hom_F(V, W) = (dim_F W) · (dim_F V).

Proof. By choosing bases for V and W there is a natural map

hom_F(V, W) → Mat_{(dim_F W)×(dim_F V)}(F) ≅ F^{(dim_F W)·(dim_F V)}.

This map is both one-to-one and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.

1.4 Matrix Representations Redux

Let L : V → W, with bases x_1, ..., x_m for V and y_1, ..., y_n for W. The matrix for L interpreted as a linear map is [L] : F^m → F^n. The basis isomorphisms defined by the choices of basis for V and W are

[x_1 ... x_m] : F^m → V,  [y_1 ... y_n] : F^n → W,

where [x_1 ... x_m](α_1, ..., α_m)^t = α_1 x_1 + ... + α_m x_m. These satisfy

L ∘ [x_1 ... x_m] = [y_1 ... y_n] ∘ [L].

1.5 Subspaces

A nonempty subset M ⊂ V is a subspace if α, β ∈ F and x, y ∈ M imply αx + βy ∈ M. In particular, 0 ∈ M.

If M, N ⊂ V are subspaces, then we can form two new subspaces, the sum and the intersection:

M + N = {x + y : x ∈ M, y ∈ N},  M ∩ N = {x : x ∈ M, x ∈ N}.

M and N have trivial intersection if M ∩ N = {0}. M and N are transversal if M + N = V. Two subspaces are complementary if they are transversal and have trivial intersection. M, N form a direct sum decomposition of V if M ∩ N = {0} and M + N = V; we write V = M ⊕ N.

Example. V = R^2, M = {(x, 0) : x ∈ R} the x-axis, and N = {(0, y) : y ∈ R} the y-axis; then V = M ⊕ N.

Example. V = R^2, M = {(x, 0) : x ∈ R} the x-axis, and N = {(y, y) : y ∈ R} a diagonal. Note (x, y) = (x - y, 0) + (y, y), which gives V = M ⊕ N.

If we have a direct sum decomposition V = M ⊕ N, then we can construct the projection of V onto M along N. The map E : V → V is defined by writing each z ∈ V uniquely as z = x + y with x ∈ M, y ∈ N, and mapping z to x:

E(z) = E(x + y) = x.

Thus im(E) = M and ker(E) = N.

Definition. If V is a vector space, a projection of V is a linear operator E on V such that E^2 = E.
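To make the projection construction concrete, here is a small numpy sketch of the second example, with M the x-axis and N the diagonal; the matrix form E = B diag(1, 0) B^{-1} and all names below are our own illustration, not from the handbook:

```python
import numpy as np

# R^2 = M ⊕ N with M = span{(1,0)} (x-axis) and N = span{(1,1)} (diagonal).
B = np.column_stack([(1.0, 0.0), (1.0, 1.0)])  # basis: first column spans M

# E projects onto M along N: keep the M-coordinate, kill the N-coordinate.
E = B @ np.diag([1.0, 0.0]) @ np.linalg.inv(B)

z = np.array([3.0, 2.0])
print(E @ z)                  # [1. 0.], since (3,2) = (1,0) + (2,2)
print(np.allclose(E @ E, E))  # E^2 = E: True
```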

1.6 Linear Maps and Subspaces

Let L : V → W be a linear map over F. The kernel or nullspace of L is

ker(L) = N(L) = {x ∈ V : L(x) = 0}.

The image or range of L is

im(L) = R(L) = L(V) = {L(x) ∈ W : x ∈ V}.

Lemma. ker(L) is a subspace of V and im(L) is a subspace of W.

Proof. Assume that α_1, α_2 ∈ F and that x_1, x_2 ∈ ker(L). Then L(α_1 x_1 + α_2 x_2) = α_1 L(x_1) + α_2 L(x_2) = 0, so α_1 x_1 + α_2 x_2 ∈ ker(L). Assume α_1, α_2 ∈ F and x_1, x_2 ∈ V. Then α_1 L(x_1) + α_2 L(x_2) = L(α_1 x_1 + α_2 x_2) ∈ im(L).

Lemma. L is one-to-one ⇔ ker(L) = {0}.

Proof. (⇒) We know that L(0) = 0, so if L is one-to-one and L(x) = 0 = L(0), then x = 0. Hence ker(L) = {0}. (⇐) Assume that ker(L) = {0}. If L(x_1) = L(x_2), then linearity of L tells us that L(x_1 - x_2) = 0. Then ker(L) = {0} implies x_1 - x_2 = 0, which shows that x_1 = x_2, as desired.

Lemma. Suppose L : V → W and dim V = dim W. Then: L is one-to-one ⇔ L is onto ⇔ dim im(L) = dim V.

Proof. From the dimension formula, dim V = dim ker(L) + dim im(L). Hence

L is one-to-one ⇔ ker(L) = {0} ⇔ dim ker(L) = 0 ⇔ dim im(L) = dim V = dim W ⇔ im(L) = W,

that is, L is onto.

1.7 Dimension Formula

Theorem. Let V be finite dimensional and L : V → W a linear map, all over F. Then im(L) is finite dimensional and

dim_F V = dim_F ker(L) + dim_F im(L).

Proof. We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V - dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be one-to-one when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.

1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of R^2, β = {x_1 = (1, 1), x_2 = (1, 0)} and β′ = {y_1 = (4, 3), y_2 = (3, 2)}, we find the change-of-basis matrix P from β to β′.

Write y_1 as a linear combination of x_1 and x_2: y_1 = a x_1 + b x_2. (4, 3) = a(1, 1) + b(1, 0) ⇒ a = 3, b = 1 ⇒ y_1 = 3x_1 + x_2.
Write y_2 as a linear combination of x_1 and x_2: y_2 = a x_1 + b x_2. (3, 2) = a(1, 1) + b(1, 0) ⇒ a = 2, b = 1 ⇒ y_2 = 2x_1 + x_2.
Write the coordinates of y_1 and y_2 as columns of P:

P = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}.
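In coordinates, finding P is one linear solve per new basis vector. A minimal numpy check of the example (variable names ours):

```python
import numpy as np

# Change-of-basis example from Section 1.8: solve X @ P = Y, where the
# columns of X and Y are the two bases of R^2.
x1, x2 = np.array([1.0, 1.0]), np.array([1.0, 0.0])
y1, y2 = np.array([4.0, 3.0]), np.array([3.0, 2.0])

X = np.column_stack([x1, x2])
Y = np.column_stack([y1, y2])

P = np.linalg.solve(X, Y)   # coordinates of y1, y2 in the basis x1, x2
print(P)                    # [[3. 2.]
                            #  [1. 1.]]
```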

1.9 Diagonalizability

Definition. Let T be a linear operator on the finite-dimensional space V. T is diagonalizable if there is a basis for V consisting of eigenvectors of T.

Theorem. Let v_1, ..., v_n be nonzero eigenvectors of distinct eigenvalues λ_1, ..., λ_n. Then {v_1, ..., v_n} is linearly independent.

Alternative Statement. If L has n distinct eigenvalues λ_1, ..., λ_n, then L is diagonalizable. (Proof is in the exercises.)

Definition. Let L be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of L. Define

E_λ = {x ∈ V : L(x) = λx} = ker(L - λI_V).

The set E_λ is called the eigenspace of L corresponding to the eigenvalue λ. The algebraic multiplicity m of λ is defined to be the multiplicity of λ as a root of the characteristic polynomial of L, while the geometric multiplicity of λ is defined to be the dimension of its eigenspace, dim E_λ = dim(ker(L - λI_V)). Moreover, 1 ≤ dim(ker(L - λI_V)) ≤ m.

Eigenspaces. A vector v ≠ 0 with (A - λI)v = 0 is an eigenvector for λ.

Generalized Eigenspaces. Let λ be an eigenvalue of A with algebraic multiplicity m. A vector v ≠ 0 with (A - λI)^m v = 0 is a generalized eigenvector for λ.

2 Inner Product Spaces

2.1 Inner Products

The three important properties of complex inner products are:
1) (x|x) = ‖x‖^2 > 0 unless x = 0.
2) (x|y) = \overline{(y|x)}.
3) For each y ∈ V the map x ↦ (x|y) is linear.

The inner product on C^n is defined by (x|y) = x^t ȳ. Consequences:

(α_1 x_1 + α_2 x_2 | y) = α_1(x_1|y) + α_2(x_2|y),
(x | β_1 y_1 + β_2 y_2) = β̄_1(x|y_1) + β̄_2(x|y_2),
(αx | αx) = αᾱ(x|x) = |α|^2 (x|x).

2.2 Orthonormal Bases

Lemma. Let e_1, ..., e_n be orthonormal. Then e_1, ..., e_n are linearly independent, and any element x ∈ span{e_1, ..., e_n} has the expansion

x = (x|e_1)e_1 + ... + (x|e_n)e_n.

Proof. Note that if x = α_1 e_1 + ... + α_n e_n, then

(x|e_i) = (α_1 e_1 + ... + α_n e_n | e_i) = α_1(e_1|e_i) + ... + α_n(e_n|e_i) = α_1 δ_{1i} + ... + α_n δ_{ni} = α_i.

2.2.1 Gram–Schmidt procedure

Given a linearly independent set x_1, ..., x_m in an inner product space V, it is possible to construct an orthonormal collection e_1, ..., e_m such that span{x_1, ..., x_m} = span{e_1, ..., e_m}:

e_1 = x_1/‖x_1‖,
z_2 = x_2 - proj_{x_1}(x_2) = x_2 - proj_{e_1}(x_2) = x_2 - (x_2|e_1)e_1,  e_2 = z_2/‖z_2‖,
...
z_{k+1} = x_{k+1} - (x_{k+1}|e_1)e_1 - ... - (x_{k+1}|e_k)e_k,  e_{k+1} = z_{k+1}/‖z_{k+1}‖.

2.2.2 QR Factorization

A = [x_1 ... x_m] = [e_1 ... e_m] \begin{pmatrix} (x_1|e_1) & (x_2|e_1) & \cdots & (x_m|e_1) \\ 0 & (x_2|e_2) & \cdots & (x_m|e_2) \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & (x_m|e_m) \end{pmatrix} = QR.

Example. Consider the vectors x_1 = (1, 1, 0), x_2 = (1, 0, 1), x_3 = (0, 1, 1) in R^3. Perform Gram–Schmidt:

e_1 = x_1/‖x_1‖ = (1, 1, 0)/√2.
z_2 = (1, 0, 1) - (1/2)(1, 1, 0) = (1/2, -1/2, 1),  e_2 = z_2/‖z_2‖ = (1/2, -1/2, 1)/√(3/2) = (1/√6, -1/√6, 2/√6).
z_3 = x_3 - (x_3|e_1)e_1 - (x_3|e_2)e_2 = (0, 1, 1) - (1/2)(1, 1, 0) - (1/6)(1, -1, 2) = (-2/3, 2/3, 2/3),  e_3 = z_3/‖z_3‖ = (-1/√3, 1/√3, 1/√3).

2.3 Orthogonal Complements and Projections

The orthogonal projection of a vector x onto a nonzero vector y is defined by

proj_y(x) = (x | y/‖y‖) y/‖y‖ = ((x|y)/(y|y)) y,

and the length of this projection is ‖proj_y(x)‖ = |(x|y)|/‖y‖. The definition of proj_y(x) immediately implies that it is linear, from the linearity of the inner product.

The map x ↦ proj_y(x) is a projection.

Proof. We need to show proj_y(proj_y(x)) = proj_y(x):

proj_y(proj_y(x)) = proj_y(((x|y)/(y|y)) y) = ((x|y)/(y|y)) proj_y(y) = ((x|y)/(y|y)) ((y|y)/(y|y)) y = ((x|y)/(y|y)) y = proj_y(x).
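The procedure translates directly into code. A short numpy sketch (the function name gram_schmidt is ours) that reproduces the example and the resulting QR factorization:

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X (assumed linearly independent)."""
    Q = np.zeros_like(X, dtype=float)
    for k in range(X.shape[1]):
        # Subtract the projections onto the already-built e_1, ..., e_{k-1}.
        z = X[:, k] - Q[:, :k] @ (Q[:, :k].T @ X[:, k])
        Q[:, k] = z / np.linalg.norm(z)
    return Q

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]]).T  # columns are x1, x2, x3
Q = gram_schmidt(X)
R = Q.T @ X                        # upper triangular since (x_k|e_j) = 0 for j > k
print(np.allclose(Q @ R, X), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```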

Cauchy–Schwarz Inequality. Let V be a complex inner product space. Then

|(x|y)| ≤ ‖x‖ ‖y‖,  x, y ∈ V.

Proof. First show proj_y(x) ⊥ x - proj_y(x):

(proj_y(x) | x - proj_y(x)) = ( ((x|y)/(y|y)) y | x - ((x|y)/(y|y)) y )
= ((x|y)/(y|y)) (y|x) - ((x|y)/(y|y)) \overline{((x|y)/(y|y))} (y|y)
= (x|y)(y|x)/(y|y) - (x|y)\overline{(x|y)}/(y|y) = 0,

since (y|x) = \overline{(x|y)}. Hence, by Pythagoras,

‖x‖ ≥ ‖proj_y(x)‖ = ‖ ((x|y)/(y|y)) y ‖ = |(x|y)|/‖y‖,

which gives |(x|y)| ≤ ‖x‖‖y‖.

Triangle Inequality. Let V be a complex inner product space. Then ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Proof.

‖x + y‖^2 = (x + y | x + y) = ‖x‖^2 + 2Re(x|y) + ‖y‖^2 ≤ ‖x‖^2 + 2|(x|y)| + ‖y‖^2 ≤ ‖x‖^2 + 2‖x‖‖y‖ + ‖y‖^2 = (‖x‖ + ‖y‖)^2.

Let M ⊂ V be a finite dimensional subspace of an inner product space, and e_1, ..., e_m an orthonormal basis for M. Using that basis, define E : V → V by

E(x) = (x|e_1)e_1 + ... + (x|e_m)e_m.

Note that E(x) ∈ M and that if x ∈ M, then E(x) = x. Thus E^2(x) = E(x), implying that E is a projection whose image is M. If x ∈ ker(E), then

0 = E(x) = (x|e_1)e_1 + ... + (x|e_m)e_m ⇔ (x|e_1) = ... = (x|e_m) = 0.

This is equivalent to the condition (x|z) = 0 for all z ∈ M. The set of all such vectors is the orthogonal complement to M in V, denoted

M⊥ = {x ∈ V : (x|z) = 0 for all z ∈ M}.

Theorem. Let V be an inner product space. Assume V = M ⊕ M⊥. Then im(proj_M) = M and ker(proj_M) = M⊥. If M ⊂ V is finite dimensional, then V = M ⊕ M⊥ and

proj_M(x) = (x|e_1)e_1 + ... + (x|e_m)e_m

for any orthonormal basis e_1, ..., e_m for M.

Proof. For E defined as above, ker(E) = M⊥, x = E(x) + (I - E)(x), and (I - E)(x) ∈ ker(E) = M⊥. Choose z ∈ M. Since x - proj_M(x) ∈ M⊥ and proj_M(x) - z ∈ M,

‖x - proj_M(x)‖^2 ≤ ‖x - proj_M(x)‖^2 + ‖proj_M(x) - z‖^2 = ‖x - z‖^2,

where equality holds exactly when ‖proj_M(x) - z‖ = 0; i.e., proj_M(x) is the unique closest point to x among the points in M.
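A small numpy sketch of proj_M, computing an orthonormal basis of M with a QR factorization as in Section 2.2.2 (the helper name proj_onto is ours):

```python
import numpy as np

def proj_onto(M, x):
    """Orthogonal projection of x onto the column space of M.

    With an orthonormal basis Q for col(M): proj(x) = sum_i (x|q_i) q_i = Q Q^T x.
    """
    Q, _ = np.linalg.qr(M)
    return Q @ (Q.T @ x)

M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])            # M = span of the two columns in R^3
x = np.array([1.0, 2.0, 3.0])
p = proj_onto(M, x)
print(np.allclose(M.T @ (x - p), 0))  # residual is orthogonal to M: True
```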

Theorem. Let E : V → V be a projection onto M ⊂ V with the property that V = ker(E) ⊕ ker(E)⊥. Then the following conditions are equivalent:
1) E = proj_M;
2) im(E) = ker(E)⊥;
3) ‖E(x)‖ ≤ ‖x‖ for all x ∈ V.

Proof. We have already seen that (1) ⇔ (2). Also (1), (2) ⇒ (3), as x = E(x) + (I - E)(x) is an orthogonal decomposition, so ‖x‖^2 = ‖E(x)‖^2 + ‖(I - E)(x)‖^2 ≥ ‖E(x)‖^2. Thus we only need to show that (3) implies that E is the orthogonal projection. Choose x ∈ ker(E)⊥ and observe that E(x) = x - (I - E)(x), with (I - E)(x) ∈ ker(E), is an orthogonal decomposition. Thus

‖x‖^2 ≥ ‖E(x)‖^2 = ‖x - (I - E)(x)‖^2 = ‖x‖^2 + ‖(I - E)(x)‖^2 ≥ ‖x‖^2.

This means that (I - E)(x) = 0 and hence x = E(x) ∈ im(E), so ker(E)⊥ ⊂ im(E). Conversely, if z ∈ im(E) = M, then we can write z = x + y ∈ ker(E) ⊕ ker(E)⊥. This implies that z = E(z) = E(y) = y, where the last equality follows from ker(E)⊥ ⊂ im(E). This means that x = 0 and hence z = y ∈ ker(E)⊥.

3 Linear Maps on Inner Product Spaces

3.1 Adjoint Maps

The adjoint of A is the matrix A* such that (A*)_{ij} = \overline{a_{ji}}. If A : F^m → F^n, then A* : F^n → F^m, and

(Ax|y) = (Ax)^t ȳ = x^t A^t ȳ = x^t \overline{(Ā^t y)} = (x|A*y).

Note also that dim(M) + dim(M⊥) = dim(V), proved in part (c) below.

Theorem. Suppose S = {v_1, v_2, ..., v_k} is an orthonormal set in an n-dimensional inner product space V. Then
a) S can be extended to an orthonormal basis {v_1, v_2, ..., v_k, v_{k+1}, ..., v_n} for V.
b) If M = span(S), then S_1 = {v_{k+1}, ..., v_n} is an orthonormal basis for M⊥.
c) If M is any subspace of V, then dim(V) = dim(M) + dim(M⊥).

Proof. a) Extend S to a basis S′ = {v_1, v_2, ..., v_k, w_{k+1}, ..., w_n} for V and apply the Gram–Schmidt process to S′. The first k vectors resulting from this process are the vectors in S, and the resulting set spans V. Normalizing the last n - k vectors of this set produces an orthonormal set that spans V.
b) Because S_1 is a subset of a basis, it is linearly independent. Since S_1 is clearly a subset of M⊥, we need only show that it spans M⊥. For any x ∈ V, we have

x = Σ_{i=1}^n (x|v_i)v_i.

If x ∈ M⊥, then (x|v_i) = 0 for 1 ≤ i ≤ k. Therefore, x = Σ_{i=k+1}^n (x|v_i)v_i ∈ span(S_1).
c) Let M be a subspace of V. It is a finite-dimensional inner product space because V is, and so it has an orthonormal basis {v_1, v_2, ..., v_k}. By (a) and (b), we have

dim(V) = n = k + (n - k) = dim(M) + dim(M⊥).

Theorem. Let M be a subspace of V. Then V = M ⊕ M⊥.

Proof. By the Gram–Schmidt process, we can obtain an orthonormal basis {v_1, v_2, ..., v_k} of M, and by the theorem above, we can extend it to an orthonormal basis {v_1, v_2, ..., v_n} of V. Hence v_{k+1}, ..., v_n ∈ M⊥. If x ∈ V, then x = a_1v_1 + ... + a_nv_n, where a_1v_1 + ... + a_kv_k ∈ M and a_{k+1}v_{k+1} + ... + a_nv_n ∈ M⊥. Accordingly, V = M + M⊥. On the other hand, if x ∈ M ∩ M⊥, then (x|x) = 0. This yields x = 0. Hence M ∩ M⊥ = {0}.

Theorem. a) M ⊂ M⊥⊥. b) If M is a subspace of a finite-dimensional space V, then M⊥⊥ = M.

Proof. a) Let x ∈ M. Then (x|z) = 0 for all z ∈ M⊥; hence x ∈ M⊥⊥.
b) V = M ⊕ M⊥ and also V = M⊥ ⊕ M⊥⊥. Hence dim M = dim V - dim M⊥ and dim M⊥⊥ = dim V - dim M⊥. This yields dim M = dim M⊥⊥. Since M ⊂ M⊥⊥ by (a), we have M = M⊥⊥.

Fredholm Alternative. Let L : V → W be a linear map between finite dimensional inner product spaces. Then

ker(L) = im(L*)⊥,  ker(L*) = im(L)⊥,  ker(L)⊥ = im(L*),  ker(L*)⊥ = im(L).

Proof. Using L** = L and M⊥⊥ = M, the four statements are equivalent, so it suffices to prove the first. We have

ker L = {x ∈ V : Lx = 0},  im(L) = {Lx : x ∈ V},  im(L*) = {L*y : y ∈ W},
im(L*)⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.

If x ∈ ker L, then (Lx|y) = 0 for all y, so x ∈ im(L*)⊥. Conversely, if (Lx|y) = 0 for all y ∈ W, then Lx = 0, i.e. x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional inner product spaces. Then rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)⊥) + dim(im(L)) = dim V - dim(im(L*)) + dim(im(L)).

Corollary. For an n×m matrix A, the column rank equals the row rank.

Proof. Conjugation does not change the rank. rank(A) is the column rank; rank(A*) is the row rank of the conjugate of A.

Corollary. Let L : V → V be a linear operator on a finite dimensional inner product space. Then λ is an eigenvalue for L ⇔ λ̄ is an eigenvalue for L*. Moreover, these eigenvalue pairs have the same geometric multiplicity:

dim(ker(L - λI_V)) = dim(ker(L* - λ̄I_V)).

Proof. Note that (L - λI_V)* = L* - λ̄I_V. Thus we only need to show dim(ker(L)) = dim(ker(L*)):

dim(ker(L)) = dim V - dim(im(L)) = dim V - dim(im(L*)) = dim(ker(L*)).
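A numerical illustration, not a proof, of the corollary that column rank equals row rank, here on a complex matrix with a forced rank deficiency:

```python
import numpy as np

# rank(A) = rank(A*) on a random complex matrix with dependent columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] + A[:, 1]               # force column rank 2

print(np.linalg.matrix_rank(A))           # 2, the column rank
print(np.linalg.matrix_rank(A.conj().T))  # 2, the row rank of the conjugate
```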

3.2 Self-Adjoint Maps

A linear operator L : V → V is self-adjoint (Hermitian) if L* = L, and skew-adjoint if L* = -L.

Theorem. If L is a self-adjoint operator on a finite-dimensional inner product space V, then every eigenvalue of L is real.

Proof. Method I: Suppose L is a self-adjoint operator on V. Let λ be an eigenvalue of L, and let x be a nonzero vector in V such that Lx = λx. Then

λ(x|x) = (λx|x) = (Lx|x) = (x|L*x) = (x|Lx) = (x|λx) = λ̄(x|x).

Thus λ = λ̄, which means that λ is real.

Proof. Method II: Suppose that L(x) = λx for x ≠ 0. Because a self-adjoint operator is normal, if x is an eigenvector of L with eigenvalue λ, then x is also an eigenvector of L* with eigenvalue λ̄. Thus λx = L(x) = L*(x) = λ̄x, so λ = λ̄.

Proposition. If L is self- or skew-adjoint, then for each invariant subspace M ⊂ V the orthogonal complement is also invariant, i.e., if L(M) ⊂ M, then also L(M⊥) ⊂ M⊥.

Proof. Assume that L(M) ⊂ M. If x ∈ M and z ∈ M⊥, then since L(x) ∈ M we have

0 = (z|L(x)) = (L*(z)|x) = ±(L(z)|x).

Since this holds for all x ∈ M, it follows that L(z) ∈ M⊥.

3.3 Polarization and Isometries

Real inner product on V:

(x + y|x + y) = (x|x) + 2(x|y) + (y|y), so
(x|y) = (1/2)((x + y|x + y) - (x|x) - (y|y)) = (1/2)(‖x + y‖^2 - ‖x‖^2 - ‖y‖^2).

Complex inner products (which are only conjugate symmetric) on V:

(x + y|x + y) = (x|x) + 2Re(x|y) + (y|y), so Re(x|y) = (1/2)(‖x + y‖^2 - ‖x‖^2 - ‖y‖^2).

Since Re(x|iy) = Re(-i(x|y)) = Im(x|y), in particular we have Im(x|y) = (1/2)(‖x + iy‖^2 - ‖x‖^2 - ‖y‖^2).

We can use these ideas to check when linear operators L : V → V are zero. First note that L = 0 ⇔ (L(x)|y) = 0 for all x, y ∈ V. To check the "if" part, let y = L(x) to see that ‖L(x)‖^2 = 0 for all x ∈ V.

Theorem. Let L : V → V be self-adjoint. Then L = 0 ⇔ (L(x)|x) = 0 for all x ∈ V.

Proof. If L = 0, then (L(x)|x) = 0 for all x ∈ V. Conversely, assume that (L(x)|x) = 0 for all x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (y|L*(x)) = (L(x)|y) + (y|L(x)) = (L(x)|y) + \overline{(L(x)|y)} = 2Re(L(x)|y).

Now insert y = L(x) to see that 0 = Re(L(x)|L(x)) = ‖L(x)‖^2.

Theorem. Let L : V → V be a linear map on a complex inner product space. Then L = 0 ⇔ (L(x)|x) = 0 for all x ∈ V.

Proof. If L = 0, then (L(x)|x) = 0 for all x ∈ V. Conversely, assume that (L(x)|x) = 0 for all x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (L(y)|x),
0 = (L(x + iy)|x + iy) = (L(x)|x) + (L(x)|iy) + (L(iy)|x) + (L(iy)|iy) = -i(L(x)|y) + i(L(y)|x).

In matrix form,

\begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix} \begin{pmatrix} (L(x)|y) \\ (L(y)|x) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Since the columns of the matrix on the left are linearly independent, the only solution is the trivial one. In particular (L(x)|y) = 0 for all x, y ∈ V, so L = 0.

3.4 Unitary and Orthogonal Operators

A linear transformation A is orthogonal if AA^t = I, and unitary if AA* = I, i.e. A* = A^{-1}.

Theorem. Let L : V → W be a linear map between inner product spaces. The following are equivalent:
1) L*L = I_V (L is unitary);
2) (L(x)|L(y)) = (x|y) for all x, y ∈ V (L preserves inner products);
3) ‖L(x)‖ = ‖x‖ for all x ∈ V (L preserves lengths).

Proof. (1) ⇒ (2): L*L = I_V ⇒ (L(x)|L(y)) = (x|L*L(y)) = (x|Iy) = (x|y) for all x, y ∈ V. Note also that L takes orthonormal sets of vectors to orthonormal sets of vectors.
(2) ⇒ (3): (L(x)|L(y)) = (x|y) for all x, y ∈ V ⇒ ‖L(x)‖^2 = (L(x)|L(x)) = (x|x) = ‖x‖^2.
(3) ⇒ (1): ‖L(x)‖ = ‖x‖ for all x ∈ V ⇒ (L*L(x)|x) = (L(x)|L(x)) = (x|x) = (Ix|x), so ((L*L - I)(x)|x) = 0 for all x ∈ V. Since L*L - I is self-adjoint (check), L*L = I by the previous theorem.

Two inner product spaces V and W over F are isometric if we can find an isometry L : V → W, i.e. an isomorphism such that (L(x)|L(y)) = (x|y).

Theorem. Suppose L is unitary; then L is an isometry on V.

Proof. An isometry on V is a mapping which preserves distances. Since L is unitary,

‖L(x) - L(y)‖ = ‖L(x - y)‖ = ‖x - y‖.

Thus L is an isometry.

3.5 Spectral Theorem

Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then we can find a real eigenvalue λ for L.

Spectral Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors, i.e. L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n. Moreover, all eigenvalues λ_1, ..., λ_n are real.

Proof. We prove this by induction on dim V. Since L = L*, we can find v ∈ V, λ ∈ R such that L(v) = λv (e.g. via Lagrange multipliers). Let v⊥ = {x ∈ V : (x|v) = 0}, the orthogonal complement of v, so that dim v⊥ = dim V - 1. We show L leaves v⊥ invariant, i.e. L(v⊥) ⊂ v⊥. Let x ∈ v⊥; then

(L(x)|v) = (x|L*(v)) = (x|L(v)) = (x|λv) = λ(x|v) = 0,   using λ ∈ R.

Thus L|_{v⊥} : v⊥ → v⊥ is again self-adjoint, because (L(x)|y) = (x|L(y)) for all x, y ∈ V and hence for all x, y ∈ v⊥. Let e_1 = v/‖v‖, and by induction let e_2, ..., e_n be an orthonormal basis for v⊥ such that L(e_i) = λ_i e_i, i = 2, ..., n. Check: (e_1|e_i) = 0 for i ≠ 1, since e_i ∈ v⊥ = e_1⊥ for i = 2, ..., n.

Corollary. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors and a real n×n diagonal matrix D such that

L = [e_1 ... e_n] D [e_1 ... e_n]* = [e_1 ... e_n] \begin{pmatrix} λ_1 & & \\ & \ddots & \\ & & λ_n \end{pmatrix} [e_1 ... e_n]*.

3.6 Normal Operators

An operator L : V → V on an inner product space is normal if LL* = L*L. Self-adjoint, skew-adjoint and isometric operators are normal. These are precisely the operators that admit an orthonormal basis that diagonalizes them.

Proposition. LL* = L*L ⇔ ‖L(x)‖ = ‖L*(x)‖ for all x ∈ V.

Proof. ‖L(x)‖ = ‖L*(x)‖ ⇔ ‖L(x)‖^2 = ‖L*(x)‖^2 ⇔ (L(x)|L(x)) = (L*(x)|L*(x)) ⇔ (x|L*L(x)) = (x|LL*(x)) ⇔ (x|(L*L - LL*)(x)) = 0 ⇔ L*L - LL* = 0, since L*L - LL* is self-adjoint.

Theorem. If V is a complex inner product space and L : V → V is normal, then ker(L - λI_V) = ker(L* - λ̄I_V) for all λ ∈ C.

Proof. Observe that L - λI_V is normal and use the previous proposition to conclude that ‖(L - λI_V)(x)‖ = ‖(L* - λ̄I_V)(x)‖.
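Numerically, the spectral theorem for real symmetric matrices is exactly what np.linalg.eigh computes: real eigenvalues and an orthonormal eigenbasis. A quick illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # real symmetric

eigvals, Q = np.linalg.eigh(A)         # columns of Q: orthonormal eigenvectors
D = np.diag(eigvals)                   # real eigenvalues on the diagonal

print(np.allclose(Q @ D @ Q.T, A))     # A = Q D Q*: True
print(np.allclose(Q.T @ Q, np.eye(3))) # Q is orthogonal: True
```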

Spectral Theorem for Normal Operators. Let L : V → V be a normal operator on a complex inner product space. Then there exists an orthonormal basis e_1, ..., e_n such that L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n.

Proof. We prove this by induction on dim V. Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V \ {0} so that L(x) = λx; then L*(x) = λ̄x, since ker(L - λI_V) = ker(L* - λ̄I_V). Let x⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement of x. To apply induction, we need to show that x⊥ is invariant under L, i.e. L(x⊥) ⊂ x⊥. Let z ∈ x⊥ and show L(z) ∈ x⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Check that L|_{x⊥} is normal. Similarly, x⊥ is invariant under L*, i.e. L* : x⊥ → x⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

Finally, L*|_{x⊥} = (L|_{x⊥})*, since (L(z)|y) = (z|L*y) for all z, y ∈ x⊥.

3.7 Unitary Equivalence

Two n×n matrices A and B are unitarily equivalent if A = UBU*, where U is an n×n matrix such that U*U = UU* = I_{F^n}.

Corollary (n×n matrices).
1. A normal matrix is unitarily equivalent to a diagonal matrix.
2. A self-adjoint matrix is unitarily (or, in the real case, orthogonally) equivalent to a real diagonal matrix.
3. A skew-adjoint matrix is unitarily equivalent to a purely imaginary diagonal matrix.
4. A unitary matrix is unitarily equivalent to a diagonal matrix whose diagonal elements are unit scalars.

3.8 Triangulability

Schur's Theorem. Let L : V → V be a linear operator on a finite dimensional complex inner product space. Then we can find an orthonormal basis e_1, ..., e_n such that the matrix representation [L] is upper triangular in this basis, i.e.

L = [e_1 ... e_n] [L] [e_1 ... e_n]* = [e_1 ... e_n] \begin{pmatrix} α_{11} & α_{12} & \cdots & α_{1n} \\ 0 & α_{22} & \cdots & α_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & α_{nn} \end{pmatrix} [e_1 ... e_n]*.

Generalized Schur's Theorem. Let L : V → V be a linear operator on an n dimensional vector space over F. Assume that χ_L(t) = (t - λ_1) ... (t - λ_n) for λ_1, ..., λ_n ∈ F. Then V admits a basis x_1, ..., x_n such that the matrix representation with respect to x_1, ..., x_n is upper triangular.

Proof. The proof is by induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n-1)-dimensional inner product spaces whose characteristic polynomials split. We can assume that L* has a unit eigenvector z. Suppose that L*(z) = λz and that W = span({z}). We show that W⊥ is L-invariant. If y ∈ W⊥ and x = cz ∈ W, then

(L(y)|x) = (L(y)|cz) = (y|L*(cz)) = (y|cL*(z)) = (y|cλz) = \overline{cλ}(y|z) = \overline{cλ} · 0 = 0.

So L(y) ∈ W⊥. It is easy to show that the characteristic polynomial of L|_{W⊥} divides the characteristic polynomial of L and hence splits. Since dim(W⊥) = n - 1, we may apply the induction hypothesis to L|_{W⊥} and obtain an orthonormal basis γ of W⊥ such that [L|_{W⊥}]_γ is upper triangular. Clearly, β = γ ∪ {z} is an orthonormal basis for V such that [L]_β is upper triangular.

4 Determinants

4.1 Characteristic Polynomial

The characteristic polynomial of A is defined as χ_A(t) = t^n + α_{n-1}t^{n-1} + ... + α_1 t + α_0. The characteristic polynomial of L : V → V can be defined by χ_L(t) = det(L - tI_V).

Facts:
det L* = \overline{det L};  det A = det A^t.
If A is orthogonal, det A = ±1, since 1 = det(I) = det(AA^t) = det(A) det(A^t) = (det A)^2.
If A is unitary, |det A| = 1, and all |λ_i| = 1.

5 Linear Operators

5.1 Dual Spaces

For a vector space V over F, we define the dual space V* = hom(V, F) as the set of linear functions on V, i.e. V* = {f : V → F | f is linear}.

Let x_1, ..., x_n be a basis for V. For each i, there is a unique linear functional f_i on V such that f_i(x_j) = δ_ij. In this way we obtain from x_1, ..., x_n a set of n distinct linear functionals f_1, ..., f_n on V. These functionals are also linearly independent. For, suppose f = Σ_{i=1}^n c_i f_i. Then

f(x_j) = Σ_{i=1}^n c_i f_i(x_j) = Σ_{i=1}^n c_i δ_ij = c_j.

In particular, if f is the zero functional, then f(x_j) = 0 for each j and hence the scalars c_j are all 0. Now f_1, ..., f_n are n linearly independent functionals, and since we know that V* has dimension n, it must be that f_1, ..., f_n is a basis for V*. This basis is called the dual basis. We have shown that there exists a unique dual basis {f_1, ..., f_n} for V*.

If f is a linear functional on V, then f is some linear combination of the f_i, and the scalars c_j must be given by c_j = f(x_j):

f = Σ_{i=1}^n f(x_i) f_i.

Similarly, if x = Σ_{i=1}^n α_i x_i is a vector in V, then

f_j(x) = Σ_{i=1}^n α_i f_j(x_i) = Σ_{i=1}^n α_i δ_ij = α_j,

so that the unique expression for x as a linear combination of the x_i is

x = Σ_{i=1}^n f_i(x) x_i,   f_i(x) = α_i = the i-th coordinate of x.

Let M ⊂ V be a subspace and define the annihilator of M in V* as

M⁰ = {f ∈ V* : f(x) = 0 for all x ∈ M} = {f ∈ V* : f(M) = {0}} = {f ∈ V* : f|_M = 0}.

(The annihilator is the dual-space counterpart of the orthogonal complement.)

Example. Let β = {x_1, x_2} = {(2, 1), (3, 1)} be a basis for R^2, writing vectors in coordinates as (ξ_1, ξ_2). We find the dual basis of β, given by β* = {f_1, f_2}. To determine formulas for f_1 and f_2, we seek functionals f_1(ξ_1, ξ_2) = a_1ξ_1 + a_2ξ_2 and f_2(ξ_1, ξ_2) = b_1ξ_1 + b_2ξ_2 such that f_1(x_1) = 1, f_1(x_2) = 0, f_2(x_1) = 0, f_2(x_2) = 1. Thus

1 = f_1(x_1) = f_1(2, 1) = 2a_1 + a_2,    0 = f_2(x_1) = f_2(2, 1) = 2b_1 + b_2,
0 = f_1(x_2) = f_1(3, 1) = 3a_1 + a_2,    1 = f_2(x_2) = f_2(3, 1) = 3b_1 + b_2.

The solutions yield a_1 = -1, a_2 = 3 and b_1 = 1, b_2 = -2. Hence f_1(ξ_1, ξ_2) = -ξ_1 + 3ξ_2 and f_2(ξ_1, ξ_2) = ξ_1 - 2ξ_2, or f_1 = (-1, 3), f_2 = (1, -2), form the dual basis.

Example. Let β = {x_1, x_2, x_3} = {(1, 0, 1), (0, 1, 0), (-1, 0, 2)} be a basis for R^3, writing vectors as (ξ_1, ξ_2, ξ_3). We find the dual basis of β, given by β* = {f_1, f_2, f_3}. We seek functionals f_1 = a_1ξ_1 + a_2ξ_2 + a_3ξ_3, f_2 = b_1ξ_1 + b_2ξ_2 + b_3ξ_3, and f_3 = c_1ξ_1 + c_2ξ_2 + c_3ξ_3 such that f_i(x_j) = δ_ij:

1 = f_1(x_1) = a_1 + a_3,    0 = f_2(x_1) = b_1 + b_3,    0 = f_3(x_1) = c_1 + c_3,
0 = f_1(x_2) = a_2,          1 = f_2(x_2) = b_2,          0 = f_3(x_2) = c_2,
0 = f_1(x_3) = -a_1 + 2a_3,  0 = f_2(x_3) = -b_1 + 2b_3,  1 = f_3(x_3) = -c_1 + 2c_3.

Thus a_1 = 2/3, a_2 = 0, a_3 = 1/3; b_1 = 0, b_2 = 1, b_3 = 0; c_1 = -1/3, c_2 = 0, c_3 = 1/3. Hence f_1 = (2/3)ξ_1 + (1/3)ξ_3, f_2 = ξ_2, f_3 = -(1/3)ξ_1 + (1/3)ξ_3, or f_1 = (2/3, 0, 1/3), f_2 = (0, 1, 0), f_3 = (-1/3, 0, 1/3), form the dual basis.

Example. Let W be the subspace of R^4 spanned by x_1 = (1, 2, -3, 4) and x_2 = (0, 1, 4, -1). We find a basis for W⁰, the annihilator of W. It suffices to find a basis of the set of linear functionals f(ξ_1, ξ_2, ξ_3, ξ_4) = a_1ξ_1 + a_2ξ_2 + a_3ξ_3 + a_4ξ_4 for which f(x_1) = 0 and f(x_2) = 0:

f(1, 2, -3, 4) = a_1 + 2a_2 - 3a_3 + 4a_4 = 0,
f(0, 1, 4, -1) = a_2 + 4a_3 - a_4 = 0.

The system of equations in a_1, a_2, a_3, a_4 is in echelon form with free variables a_3 and a_4.
Set a_3 = 1, a_4 = 0 to obtain a_1 = 11, a_2 = -4, a_3 = 1, a_4 = 0 ⇒ f_1(ξ_1, ξ_2, ξ_3, ξ_4) = 11ξ_1 - 4ξ_2 + ξ_3.
Set a_3 = 0, a_4 = 1 to obtain a_1 = -6, a_2 = 1, a_3 = 0, a_4 = 1 ⇒ f_2(ξ_1, ξ_2, ξ_3, ξ_4) = -6ξ_1 + ξ_2 + ξ_4.
The set of linear functionals {f_1, f_2} is a basis of W⁰.
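In coordinates, computing a dual basis is a matrix inversion: if the columns of X are the basis vectors x_j, then the rows of X^{-1} are the dual functionals f_i, since (X^{-1}X)_{ij} = δ_ij. A one-line numpy check of the first example:

```python
import numpy as np

X = np.column_stack([(2.0, 1.0), (3.0, 1.0)])  # basis {(2,1), (3,1)} of R^2
F = np.linalg.inv(X)                           # rows are the dual functionals
print(F)   # [[-1.  3.]
           #  [ 1. -2.]]  i.e. f1 = (-1, 3), f2 = (1, -2), as above
```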

Example. Given the annihilator described by the three linear functionals on R^4

f_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + 2ξ_2 + 2ξ_3 + ξ_4,
f_2(ξ_1, ξ_2, ξ_3, ξ_4) = 2ξ_2 + ξ_4,
f_3(ξ_1, ξ_2, ξ_3, ξ_4) = -2ξ_1 - 4ξ_3 + 3ξ_4,

we find the subspace it annihilates. After row reduction, we find that the functionals

g_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + 2ξ_3,  g_2(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_2,  g_3(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_4

annihilate the same subspace. The subspace annihilated consists of the vectors with ξ_1 = -2ξ_3, ξ_2 = ξ_4 = 0. Thus the subspace that is annihilated is given by span{(-2, 0, 1, 0)}.

Proposition. If M ⊂ V is a subspace of a finite dimensional space and x_1, ..., x_n is a basis for V such that M = span{x_1, ..., x_m}, then M⁰ = span{f_{m+1}, ..., f_n}, where f_1, ..., f_n is the dual basis. In particular, we have

dim(M) + dim(M⁰) = dim(V) = dim(V*).

Proof. Let x_1, ..., x_m be a basis for M; M = span{x_1, ..., x_m}. Extend to {x_1, ..., x_n}, a basis for V. Construct a dual basis f_1, ..., f_n for V*, f_i(x_j) = δ_ij. We show that f_{m+1}, ..., f_n is a basis for M⁰. First, show M⁰ = span{f_{m+1}, ..., f_n}: let f ∈ M⁰ ⊂ V*; then

f = Σ_{i=1}^n c_i f_i = Σ_{i=1}^n f(x_i) f_i = Σ_{i=1}^m f(x_i) f_i + Σ_{i=m+1}^n f(x_i) f_i = Σ_{i=m+1}^n f(x_i) f_i ∈ span{f_{m+1}, ..., f_n},

since f(x_i) = 0 for i ≤ m. Second, {f_{m+1}, ..., f_n} are linearly independent, since they form a subset of a basis for V*. Thus, dim(M⁰) = n - m = dim(V) - dim(M).

Theorem. Let W_1 and W_2 be subspaces of a finite-dimensional vector space. Then W_1 = W_2 ⇔ W_1⁰ = W_2⁰.

Proof. If W_1 = W_2, then of course W_1⁰ = W_2⁰. If W_1 ≠ W_2, then one of the two subspaces contains a vector which is not in the other. Suppose there is a vector x ∈ W_2 but x ∉ W_1. There is a linear functional f such that f(z) = 0 for all z ∈ W_1, but f(x) ≠ 0. Then f ∈ W_1⁰ but f ∉ W_2⁰, and W_1⁰ ≠ W_2⁰.

Theorem. Let W be a subspace of a finite-dimensional vector space V. Then W⁰⁰ = W.

Proof. dim W + dim W⁰ = dim V and dim W⁰ + dim W⁰⁰ = dim V*, and since dim V = dim V* we have dim W = dim W⁰⁰. Since W ⊂ W⁰⁰, we see that W = W⁰⁰.

Proposition. Assume that the finite dimensional space V = M ⊕ N. Then also V* = M⁰ ⊕ N⁰, and the restriction maps V* → M* and V* → N* give isomorphisms M⁰ ≅ N* and N⁰ ≅ M*.

Proof. Select a basis x_1, ..., x_n for V such that M = span{x_1, ..., x_m} and N = span{x_{m+1}, ..., x_n}. Then let f_1, ..., f_n be the dual basis and simply observe that M⁰ = span{f_{m+1}, ..., f_n} and N⁰ = span{f_1, ..., f_m}. This proves that V* = M⁰ ⊕ N⁰. Next we note that

dim(M⁰) = dim(V) - dim(M) = dim(N) = dim(N*),

so at least M⁰ and N* have the same dimension. Also, if we restrict f_{m+1}, ..., f_n to N, then we still have f_i(x_j) = δ_ij for i, j = m+1, ..., n. As N = span{x_{m+1}, ..., x_n}, this means that f_{m+1}|_N, ..., f_n|_N form a basis for N*. Hence the restriction map M⁰ → N* is an isomorphism. The proof that N⁰ ≅ M* is similar.

5.2 Dual Maps

The dual space construction leads to a dual map L* : W* → V* for a linear map L : V → W. This dual map is a substitute for the adjoint of L and is related to the transpose of the matrix representation of L. The definition is L*(g) = g ∘ L: if g ∈ W*, we get a linear function g ∘ L : V → F, since L : V → W. The dual of L is often denoted L* = L^t. Writing (x|f) for f(x), the dual map satisfies (L(x)|g) = (x|L*(g)) for all x ∈ V and g ∈ W*.

Generalized Fredholm Alternative. Let L : V → W be a linear map between finite dimensional vector spaces. Then (identifying V** ≅ V and W** ≅ W)

ker(L) = im(L*)⁰,  ker(L*) = im(L)⁰,  ker(L)⁰ = im(L*),  ker(L*)⁰ = im(L).

Proof. Using L** = L and M⁰⁰ = M, the four statements are equivalent, so it suffices to prove the first. We have

ker L = {x ∈ V : Lx = 0},  im(L) = {Lx : x ∈ V},  im(L*) = {L*(g) : g ∈ W*},
im(L*)⁰ = {x ∈ V : (x|L*(g)) = 0 for all g ∈ W*} = {x ∈ V : g(L(x)) = 0 for all g ∈ W*}.

If x ∈ ker L, then g(L(x)) = g(0) = 0 for all g ∈ W*, so x ∈ im(L*)⁰. Conversely, if g(L(x)) = 0 for all g ∈ W*, then Lx = 0, i.e. x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional vector spaces. Then rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)⁰) + dim(im(L)) = dim V - dim(im(L*)) + dim(im(L)).

6 Problems

Cross Product. For a = (a_1, a_2, a_3), b = (b_1, b_2, b_3):

a × b = \begin{vmatrix} i & j & k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = (a_2b_3 - a_3b_2)i + (a_3b_1 - a_1b_3)j + (a_1b_2 - a_2b_1)k
= ( \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}, \begin{vmatrix} a_3 & a_1 \\ b_3 & b_1 \end{vmatrix}, \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} ).

Problem (F'03, #9). Consider a 3×3 real symmetric matrix with determinant 6. Assume (1, 2, 3) and (0, 3, -2) are eigenvectors with eigenvalues 1 and 2.
a) Give an eigenvector of the form (1, x, y), for some real x, y, which is linearly independent of the two vectors above.
b) What is the eigenvalue of this eigenvector?

Proof. a) Since A is real and symmetric, A is self-adjoint, and by the spectral theorem eigenvectors corresponding to distinct eigenvalues are orthogonal. Indeed, v_1 · v_2 = (1, 2, 3) · (0, 3, -2) = 0, so the two given vectors are orthogonal. We cross v_1 and v_2 to obtain a linearly independent vector v_3:

v_3 = v_1 × v_2 = \begin{vmatrix} i & j & k \\ 1 & 2 & 3 \\ 0 & 3 & -2 \end{vmatrix} = ((-4 - 9), -(-2 - 0), (3 - 0)) = (-13, 2, 3).

Thus the required vector is e_3 = v_3/(-13) = (1, -2/13, -3/13).

b) Since A is self-adjoint, by the spectral theorem there exists an orthonormal basis of eigenvectors e_1, e_2, e_3 and a real diagonal matrix D such that

A = ODO^{-1} = [e_1 e_2 e_3] \begin{pmatrix} λ_1 & & \\ & λ_2 & \\ & & λ_3 \end{pmatrix} [e_1 e_2 e_3]^{-1}.

Since O is orthogonal, OO^t = I, i.e. O^{-1} = O^t, and A = ODO^t. Note

det A = det(ODO^{-1}) = det(O) det(D) det(O^{-1}) = det(D) = λ_1 λ_2 λ_3.

Thus 6 = det A = det D = λ_1 λ_2 λ_3, and with λ_1 = 1, λ_2 = 2, we get λ_3 = 3.
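A numpy sketch (illustrative only) that rebuilds the matrix of this problem from its eigendata and confirms both answers:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 3.0, -2.0])
v3 = np.cross(v1, v2)                   # (-13, 2, 3), orthogonal to v1 and v2

Q = np.column_stack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T  # real symmetric, det = 1*2*3 = 6

print(np.allclose(A, A.T), np.isclose(np.linalg.det(A), 6.0))  # True True
print(np.allclose(A @ v3, 3.0 * v3))    # v3 has eigenvalue 3: True
```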

Problem (S, #9). Find the matrix representation in the standard basis for either rotation by an angle θ in the plane perpendicular to the subspace spanned by the vectors (1, 1, 1, 1) and (1, 1, 1, 0) in R^4.

Proof. Let x_1 = (1, 1, 1, 1), x_2 = (1, 1, 1, 0). Gram–Schmidt gives an orthonormal basis e_1, e_2 of span{x_1, x_2}:

e_1 = x_1/‖x_1‖ = (1, 1, 1, 1)/2,
z_2 = x_2 - (x_2|e_1)e_1 = (1, 1, 1, 0) - (3/4)(1, 1, 1, 1) = (1/4)(1, 1, 1, -3),  e_2 = z_2/‖z_2‖ = (1, 1, 1, -3)/(2√3).

The rotation happens in the orthogonal complement span{e_1, e_2}⊥, for which we pick the orthonormal basis

e_3 = (1, -1, 0, 0)/√2,  e_4 = (1, 1, -2, 0)/√6.

Then

T = [e_1 e_2 e_3 e_4] \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & cos θ & ∓sin θ \\ 0 & 0 & ±sin θ & cos θ \end{pmatrix} [e_1 e_2 e_3 e_4]^{-1},

where the sign of sin θ corresponds to the choice of rotation ("either rotation"), and [e_1 e_2 e_3 e_4]^{-1} = [e_1 e_2 e_3 e_4]^t since the basis is orthonormal.

Problem (F, #8). Let T : R^3 → R^3 be rotation by 60° counterclockwise about the plane perpendicular to (1, 1, 1), and S : R^3 → R^3 reflection about the plane perpendicular to (1, 0, -1). Determine the matrix representation of S ∘ T in the standard basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

Proof. Rotation:

T = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & cos θ & -sin θ \\ 0 & sin θ & cos θ \end{pmatrix} [e_1 e_2 e_3]^{-1} = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & -√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix} [e_1 e_2 e_3]^{-1} = P T_θ P^{-1},

where e_1, e_2, e_3 is an orthonormal basis with e_1 along the axis (so e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2):

e_1 = (1, 1, 1)/√3,  e_2 = (1, -1, 0)/√2,  e_3 = ±(1, 1, -2)/√6.

Check: e_3 = e_1 × e_2 = (1/√6)(1, 1, -2), ‖e_3‖ = 1.

Reflection:

S = [f_1 f_2 f_3] \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} [f_1 f_2 f_3]^{-1} = O S_0 O^{-1},

where f_1 is the unit normal to the mirror plane and f_2, f_3 span the plane:

f_1 = (1, 0, -1)/√2,  f_2 = (1, 1, 1)/√3,  f_3 = f_1 × f_2 = (1, -2, 1)/√6.

Check: ‖f_3‖ = 1. Then

S ∘ T = (O S_0 O^{-1})(P T_θ P^{-1}).
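The two factors can be assembled numerically. In the sketch below (the helper unit is ours), the reflection is built with the Householder formula I - 2nn^t, which equals [f_1 f_2 f_3] diag(-1, 1, 1) [f_1 f_2 f_3]^t:

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

theta = np.pi / 3                          # 60 degrees
e1 = unit([1, 1, 1])                       # rotation axis
e2 = unit([1, -1, 0])
e3 = np.cross(e1, e2)                      # (1, 1, -2)/sqrt(6)
P = np.column_stack([e1, e2, e3])
T_theta = np.array([[1, 0, 0],
                    [0, np.cos(theta), -np.sin(theta)],
                    [0, np.sin(theta),  np.cos(theta)]])
T = P @ T_theta @ P.T                      # P orthogonal, so P^{-1} = P^t

n = unit([1, 0, -1])                       # normal to the mirror plane
S = np.eye(3) - 2 * np.outer(n, n)         # Householder reflection

ST = S @ T
print(np.allclose(ST @ ST.T, np.eye(3)))   # S∘T is orthogonal: True
print(np.allclose(T @ e1, e1))             # the axis is fixed by T: True
```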

Problem (F, #8). Let T be the rotation by an angle of 60° counterclockwise about the origin in the plane perpendicular to (1, 1, 2) in R^3.
i) Find the matrix representation of T in the standard basis. Find all eigenvalues and eigenspaces of T.
ii) What are the eigenvalues and eigenspaces of T if R^3 is replaced by C^3?

Proof. i)

T = [e_1 e_2 e_3] \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & -√3/2 \\ 0 & √3/2 & 1/2 \end{pmatrix} [e_1 e_2 e_3]^{-1},

where e_1, e_2, e_3 is an orthonormal basis with e_1 along the axis (e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2):

e_1 = (1, 1, 2)/√6,  e_2 = (1, -1, 0)/√2,  e_3 = ±(1, 1, -1)/√3.

Check: e_3 = e_1 × e_2 = (1/√12)(2, 2, -2) = (1/√3, 1/√3, -1/√3), ‖e_3‖ = 1.

We know T(e_1) = e_1, so λ = 1 is an eigenvalue with eigenspace span{e_1}. If z ∈ R^3, write z = αe_1 + w with w ∈ span{e_2, e_3}; then T(z) = αe_1 + T(w), where T(w) ∈ span{e_2, e_3} is w rotated by 60°. So if T(z) = λz, we must have λ(αe_1 + w) = αe_1 + T(w), i.e. λαe_1 = αe_1 and T(w) = λw, which is impossible unless w = 0, since a 60° rotation in a plane moves every nonzero vector off its own line. There are no further real eigenvalues or eigenvectors.

Any 3-D rotation has 1 as an eigenvalue: any vector lying along the axis of the rotation is unchanged by the rotation, and is therefore an eigenvector corresponding to eigenvalue 1. The line formed by these eigenvectors is the axis of rotation. For the special case of the null rotation, every vector in 3-D space is an eigenvector corresponding to eigenvalue 1.

Any 3-D reflection has two eigenvalues: -1 and 1. Any vector orthogonal to the plane of the mirror is reversed in direction by the reflection, without its size being changed; that is, the reflected vector is -1 times the original, and so the vector is an eigenvector corresponding to eigenvalue -1. The set formed by all these eigenvectors is the line orthogonal to the plane of the mirror. On the other hand, any vector in the plane of the mirror is unchanged by the reflection: it is an eigenvector corresponding to eigenvalue 1. The set formed by all these eigenvectors is the plane of the mirror. Any vector that is neither in the plane of the mirror nor orthogonal to it is not an eigenvector of the reflection.

ii) For T : C^3 → C^3, χ_T(t) = (t - 1)(t^2 - t + 1), whose roots are t = 1, e^{iπ/3}, e^{-iπ/3}. These are the three distinct eigenvalues, each with a one-dimensional complex eigenspace. Find eigenvectors (α, β) and (γ, δ) of the rotation block \begin{pmatrix} 1/2 & -√3/2 \\ √3/2 & 1/2 \end{pmatrix} for e^{iπ/3} and e^{-iπ/3} respectively; then the eigenspaces of T : C^3 → C^3 are

span{e_1} for λ = 1,  span{αe_2 + βe_3} for λ = e^{iπ/3},  span{γe_2 + δe_3} for λ = e^{-iπ/3}.

Problem (S'03, #8; W, #9; F, #). Let V be an n-dimensional complex vector space and T : V → V a linear operator such that χ_T has n distinct roots. Show T is diagonalizable.

Let V be an n-dimensional complex vector space and T : V → V a linear operator. Let v_1, ..., v_n be nonzero eigenvectors of distinct eigenvalues in V. Prove that {v_1, ..., v_n} is linearly independent.

Proof. Since F = C, any root of χ_T is also an eigenvalue, so we have distinct eigenvalues λ_1, ..., λ_n. We prove linear independence by induction on n = dim V. For n = 1, a single nonzero vector is trivially linearly independent. For n > 1, let v_1, ..., v_n be nonzero vectors in V with distinct eigenvalues λ_1, ..., λ_n. If

α_1 v_1 + ... + α_n v_n = 0,   (6.1)

we want to show all α_i = 0. Applying T,

T(α_1 v_1 + ... + α_n v_n) = T(0) = 0,
α_1 T v_1 + ... + α_n T v_n = 0,
α_1 λ_1 v_1 + ... + α_n λ_n v_n = 0.   (6.2)

Multiplying (6.1) by λ_n and subtracting (6.2), we get

α_1(λ_n - λ_1)v_1 + ... + α_{n-1}(λ_n - λ_{n-1})v_{n-1} = 0.

Since {v_1, ..., v_{n-1}} are linearly independent (induction hypothesis) and λ_n - λ_i ≠ 0 for i ≠ n, we get α_1 = ... = α_{n-1} = 0. Then by (6.1), α_n v_n = 0, so α_n = 0, since v_n is nonzero. Thus α_1 = ... = α_n = 0, and {v_1, ..., v_n} are linearly independent.

Having shown {v_1, ..., v_n} linearly independent, they generate an n-dimensional subspace, which is then all of V. Hence {v_1, ..., v_n} gives a basis of eigenvectors, and T is diagonalizable.

Problem (F, #9). Let A be a real symmetric n×n matrix. Prove that there exists an invertible matrix P such that P^{-1}AP is diagonal.

Proof. Let V = R^n with the standard inner product. Since A is real symmetric, A^t = A, so A is self-adjoint. Let T be the linear operator on V which is represented by A in the standard ordered basis S. Since T is self-adjoint on V, there exists an orthonormal basis β = {v_1, ..., v_n} of eigenvectors of T: Tv_i = λ_i v_i, i = 1, ..., n, where the λ_i are the eigenvalues of T. Let D = [A]_β, and let P be the matrix with v_1, ..., v_n as column vectors. Then

[A]_β = P^{-1}[A]_S P = P^{-1}[A]_S [v_1 ... v_n] = P^{-1}[Av_1 ... Av_n] = P^{-1}[λ_1v_1 ... λ_nv_n].

With this choice, P has orthonormal columns and real entries, so det P = ±1 ≠ 0 (P is invertible) and P^{-1} = P^t. Hence

[A]_β = P^t[λ_1v_1 ... λ_nv_n] = \begin{pmatrix} v_1^t \\ \vdots \\ v_n^t \end{pmatrix} [λ_1v_1 ... λ_nv_n] = \begin{pmatrix} λ_1 & & \\ & \ddots & \\ & & λ_n \end{pmatrix},

since the v_i are orthonormal. Then D = P^t[A]_S P, i.e. D = P^{-1}AP.

Problem (S'03, #9). Let A ∈ M_3(R) satisfy det(A) = 1 and A^tA = AA^t = I_{R^3}. Prove that the characteristic polynomial of A has 1 as a root (i.e. 1 is an eigenvalue of A).

Proof. By the fundamental theorem of algebra, χ_A(t) = -t^3 + ... = -(t - λ_1)(t - λ_2)(t - λ_3) with λ_1, λ_2, λ_3 ∈ C. A real polynomial of odd degree has a real root, so we may take λ_1 ∈ R. Two cases: Case 1: λ_2, λ_3 ∈ R; Case 2: λ_3 = λ̄_2. Also det(A) = 1 = λ_1λ_2λ_3, since the determinant is the product of the eigenvalues. A^tA = AA^t = I_{R^3} means A is orthogonal, so A ∈ M_3(C) is unitary, since A* = Ā^t = A^t. The λ_i are eigenvalues of A as a unitary transformation U = A, so if Ax_i = λ_ix_i, then

(x_i|x_i) = (U*Ux_i|x_i) = (Ux_i|Ux_i) = (λ_ix_i|λ_ix_i) = |λ_i|^2(x_i|x_i)   (U is unitary),

so |λ_i| = 1.

Case 1: λ_2, λ_3 ∈ R ⇒ λ_i = ±1 and λ_1λ_2λ_3 = 1, so either one or all three of the eigenvalues equal +1.
Case 2: 1 = λ_1λ_2λ_3 = λ_1λ_2λ̄_2 = λ_1|λ_2|^2 = λ_1 ⇒ λ_1 = 1.
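A quick numerical illustration of the second problem, again not a proof, on a randomly generated rotation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                            # flip a column: det = +1

eigvals = np.linalg.eigvals(Q)
print(np.isclose(np.linalg.det(Q), 1.0))          # True
print(np.isclose(np.abs(eigvals), 1.0).all())     # all |lambda_i| = 1: True
print(np.isclose(eigvals, 1.0).any())             # 1 is an eigenvalue: True
```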

Problem (S'03, #). Let T : R^n → R^n be symmetric³ with tr(T^2) = 0. Show that T = 0.

Proof. By the spectral theorem, T = ODO^{-1}, where O is orthogonal and D is diagonal with real entries. Then

T^2 = ODO^{-1}ODO^{-1} = OD^2O^{-1},  where D^2 = \begin{pmatrix} λ_1^2 & & \\ & \ddots & \\ & & λ_n^2 \end{pmatrix},

0 = tr(T^2) = tr(OD^2O^{-1}) = tr(O^{-1}OD^2) = tr(D^2) = λ_1^2 + ... + λ_n^2.

Since each λ_i^2 ≥ 0, all λ_i = 0; hence D = 0 and T = ODO^{-1} = 0.

Problem (W, #). Let V be a finite dimensional complex inner product space and f : V → C a linear functional. Show f(x) = (x|y) for some y ∈ V.

Proof. Select an orthonormal basis e_1, ..., e_n and let y = \overline{f(e_1)}e_1 + ... + \overline{f(e_n)}e_n. Then

(x|y) = (x | \overline{f(e_1)}e_1 + ... + \overline{f(e_n)}e_n) = f(e_1)(x|e_1) + ... + f(e_n)(x|e_n) = f((x|e_1)e_1 + ... + (x|e_n)e_n) = f(x),

since f is linear. We can also show that y is unique. Suppose y′ is another vector in V for which f(x) = (x|y′) for every x ∈ V. Then (x|y) = (x|y′) for all x, so (y - y′|y - y′) = 0, and y = y′.

Problem (S, #). Let V be a finite dimensional real inner product space and T, S : V → V two commuting (i.e. ST = TS) self-adjoint linear operators. Show that there exists an orthonormal basis that simultaneously diagonalizes S and T.

Proof. Since T and S are self-adjoint, there is an ordered orthonormal basis {v_1, ..., v_n} of eigenvectors of T corresponding to eigenvalues λ_1, ..., λ_n, with v_i ∈ E_{λ_i}(T) and V = E_{λ_1}(T) ⊕ ... ⊕ E_{λ_n}(T), the sum taken over the distinct eigenvalues. If v_i ∈ E_{λ_i}(T), i.e. Tv_i = λ_iv_i, then

T(Sv_i) = S(Tv_i) = S(λ_iv_i) = λ_i(Sv_i),

so Sv_i ∈ E_{λ_i}(T). Thus E_{λ_i}(T) is invariant under S, i.e. S : E_{λ_i}(T) → E_{λ_i}(T). Since S|_{E_{λ_i}(T)} is self-adjoint, there is an ordered orthonormal basis β_i of eigenvectors of S for E_{λ_i}(T). Then β = ⋃_i β_i is an orthonormal basis of V consisting of simultaneous eigenvectors of S and T.

³ Symmetric in a real inner product space ⇔ self-adjoint ⇔ Hermitian.
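The proof of the last problem is constructive and can be followed verbatim in code: diagonalize T, then diagonalize S on each eigenspace of T. A numpy sketch (the function name simultaneous_diag is ours):

```python
import numpy as np

def simultaneous_diag(T, S, tol=1e-9):
    """Orthonormal basis diagonalizing both symmetric, commuting T and S."""
    lam, Q = np.linalg.eigh(T)
    P = np.zeros_like(Q)
    i = 0
    while i < len(lam):
        j = i
        while j < len(lam) and abs(lam[j] - lam[i]) < tol:
            j += 1
        B = Q[:, i:j]                       # orthonormal basis of one eigenspace of T
        _, U = np.linalg.eigh(B.T @ S @ B)  # diagonalize S on that eigenspace
        P[:, i:j] = B @ U
        i = j
    return P

T = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 5.0]])
S = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 3.0]])  # ST = TS
P = simultaneous_diag(T, S)
for M in (T, S):
    D = P.T @ M @ P
    print(np.allclose(D, np.diag(np.diag(D))))  # True True
```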

Problem (S, #). Let V be a complex inner product space and W a finite dimensional subspace. Let v ∈ V. Prove that there exists a unique vector v_W ∈ W such that

‖v - v_W‖ ≤ ‖v - w‖ for all w ∈ W.

Deduce that equality holds if and only if w = v_W.

Proof. v_W is the orthogonal projection of v onto W. Choose an orthonormal basis e_1, ..., e_n for W and define

proj_W(x) = (x|e_1)e_1 + ... + (x|e_n)e_n.

Claim: v_W = proj_W(v). First we show x - proj_W(x) ⊥ proj_W(x):

(x - (x|e_1)e_1 - ... - (x|e_n)e_n | (x|e_1)e_1 + ... + (x|e_n)e_n)
= (x | (x|e_1)e_1 + ... + (x|e_n)e_n) - ((x|e_1)e_1 + ... + (x|e_n)e_n | (x|e_1)e_1 + ... + (x|e_n)e_n)
= \overline{(x|e_1)}(x|e_1) + ... + \overline{(x|e_n)}(x|e_n) - Σ_{i=1}^n Σ_{j=1}^n (x|e_i)\overline{(x|e_j)}(e_i|e_j)
= Σ_i |(x|e_i)|^2 - Σ_i |(x|e_i)|^2 = 0,

since (e_i|e_j) = δ_ij. The same computation with w = (w|e_1)e_1 + ... + (w|e_n)e_n in the second slot shows x - proj_W(x) ⊥ w for every w ∈ W:

(x - (x|e_1)e_1 - ... - (x|e_n)e_n | (w|e_1)e_1 + ... + (w|e_n)e_n)
= \overline{(w|e_1)}(x|e_1) + ... + \overline{(w|e_n)}(x|e_n) - Σ_{i=1}^n Σ_{j=1}^n (x|e_i)\overline{(w|e_j)}(e_i|e_j)
= Σ_i \overline{(w|e_i)}(x|e_i) - Σ_i (x|e_i)\overline{(w|e_i)} = 0.

In particular, x - proj_W(x) ⊥ proj_W(x) - w for any w ∈ W, so by Pythagoras

‖x - w‖^2 = ‖x - proj_W(x)‖^2 + ‖proj_W(x) - w‖^2 ≥ ‖x - proj_W(x)‖^2,

i.e. ‖x - proj_W(x)‖ ≤ ‖x - w‖, with equality exactly when ‖proj_W(x) - w‖ = 0, i.e. when w = proj_W(x). This proves existence and uniqueness of v_W = proj_W(v).

Problem (F'03, #). a) Let t ∈ R such that t is not an integer multiple of π. For the matrix

A = \begin{pmatrix} cos t & sin t \\ -sin t & cos t \end{pmatrix}

prove there does not exist a real valued matrix B such that BAB^{-1} is a diagonal matrix.
b) Do the same for the matrix A = \begin{pmatrix} 1 & λ \\ 0 & 1 \end{pmatrix}, where λ ∈ R \ {0}.

Proof. a) det(A - λI) = \begin{vmatrix} cos t - λ & sin t \\ -sin t & cos t - λ \end{vmatrix} = λ^2 - 2λ cos t + 1, so λ_{1,2} = cos t ± √(cos^2 t - 1) = a ± ib with b ≠ 0 (t is not a multiple of π), i.e. λ_{1,2} ∉ R. Hence the eigenvalues, and with them the eigenvectors,

are not real, and there is no B ∈ M_2(R) such that BAB^{-1} is diagonal.

b) λ_{1,2} = 1. We find the eigenvectors: (A - I)w = 0 gives

\begin{pmatrix} 0 & λ \\ 0 & 0 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},

so w_2 = 0. Thus all eigenvectors are multiples of v = (1, 0), i.e. linearly dependent, and there does not exist a basis for R^2 consisting of eigenvectors of A. Therefore, there is no B ∈ M_2(R) such that BAB^{-1} is diagonal.

Problem (F, #) (Spectral Theorem for Normal Operators). Let A ∈ Mat_{n×n}(C) satisfy A*A = AA*, i.e. A is normal. Show that there is an orthonormal basis of eigenvectors of A.

Rephrased: the same statement for L : V → V, with V a complex finite dimensional inner product space.

Proof. We prove this by induction on dim V. Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V \ {0} so that L(x) = λx; then L*(x) = λ̄x, since ker(L - λI_V) = ker(L* - λ̄I_V) for normal L. Let x⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement of x. To apply induction, we need to show that x⊥ is invariant under L, i.e. L(x⊥) ⊂ x⊥. Let z ∈ x⊥ and show L(z) ∈ x⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Check that L|_{x⊥} is normal. Similarly, x⊥ is invariant under L*, i.e. L* : x⊥ → x⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

Finally, L*|_{x⊥} = (L|_{x⊥})*, since (L(z)|y) = (z|L*y) for all z, y ∈ x⊥; hence L|_{x⊥} is normal and the induction hypothesis applies.

Problem (W, #). Let V be a finite dimensional complex inner product space and L : V → V a linear transformation. Show that we can find an orthonormal basis so that [L] is upper triangular.

Proof. [L] is upper triangular with respect to e_1, ..., e_n when

[L(e_1) ... L(e_n)] = [e_1 ... e_n] \begin{pmatrix} α_{11} & α_{12} & \cdots & α_{1n} \\ 0 & α_{22} & \cdots & α_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & α_{nn} \end{pmatrix};

in particular L(e_1) = α_{11}e_1, so e_1 is an eigenvector with eigenvalue α_{11}.

Case dim V = 2: since L : V → V is complex, we can pick e_1 ∈ V, ‖e_1‖ = 1, so that Le_1 = α_{11}e_1; pick e_2 ⊥ e_1, ‖e_2‖ = 1. Then

[L(e_1) L(e_2)] = [e_1 e_2] \begin{pmatrix} α_{11} & α_{12} \\ 0 & α_{22} \end{pmatrix}

is upper triangular: L(e_1) = α_{11}e_1, L(e_2) = α_{12}e_1 + α_{22}e_2.

Observe: [L] is upper triangular ⇔ L(e_k) ∈ span{e_1, ..., e_k} ⇔ L(e_1), ..., L(e_k) ∈ span{e_1, ..., e_k} for each k. So it suffices to have a chain of subspaces

{0} = M_0 ⊂ M_1 ⊂ ... ⊂ M_{n-1} ⊂ M_n = V

with the property dim M_k = k and L(M_k) ⊂ M_k = span{e_1, ..., e_k}. It is enough to show that any linear transformation on an n-dimensional complex space has an (n-1)-dimensional invariant subspace M ⊂ V: applying this result repeatedly generates such an increasing sequence of subspaces, all invariant under L. Given the chain, set M_1 = span{e_1}, ‖e_1‖ = 1; pick the unit vector e_2 ∈ M_2 ∩ M_1⊥, the orthogonal complement of e_1 in M_2; and in general pick e_1, ..., e_k orthonormal with span{e_1, ..., e_k} = M_k. Then

L(e_1) ∈ M_1 ⇒ L(e_1) = α_{11}e_1,
L(e_2) ∈ M_2 ⇒ L(e_2) = α_{12}e_1 + α_{22}e_2,
...
L(e_k) ∈ M_k ⇒ L(e_k) = α_{1k}e_1 + α_{2k}e_2 + ... + α_{kk}e_k,

which is an upper triangular form for [L].

To construct the invariant subspace M ⊂ V of dimension n - 1, select x ∈ V \ {0} such that L*(x) = λx (L* : V → V is complex and linear, so by the Fundamental Theorem of Algebra we can find such x, λ). Then let M = x⊥, so dim M = n - 1. We have to show M is invariant under L: take z ∈ x⊥; then

(L(z)|x) = (z|L*(x)) = (z|λx) = λ̄(z|x) = 0,

so L(z) ⊥ x. Hence L(M) ⊂ M and M has dimension n - 1.

Problem (F, #7; F, #7). Let T : V → W be a linear transformation of finite dimensional real vector spaces. Define the transpose of T and then prove both of the following:
1) im(T)⁰ = ker(T^t);
2) rank(T) = rank(T^t) (i.e. dim im(T) = dim im(T^t)).

Proof. Transpose = dual map. Let T : V → W be linear and let T^t = T* : W* → V*, where X* = hom_R(X, R). T^t is linear and is defined by T^t(g) = g ∘ T (with T : V → W and g : W → R, so g ∘ T : V → R).

1) This is a proof of the Generalized Fredholm Alternative.

ker T^t = {g ∈ W* : T^t(g) = g ∘ T = 0},  im(T) = {T(x) : x ∈ V},
im(T)⁰ = {g ∈ W* : g(T(x)) = 0 for all x ∈ V}.

Now g(T(x)) = 0 for all x ∈ V ⇔ g ∘ T = 0 ⇔ g ∈ ker T^t. Hence im(T)⁰ = ker(T^t).

2) This is a proof of the Generalized Rank Theorem, with rank(T) = dim(im(T)) and rank(T^t) = dim(im(T^t)). Apply the dimension formula to T^t : W* → V* and use 1):

dim W* = dim(ker(T^t)) + dim(im(T^t)) = dim(im(T)⁰) + dim(im(T^t)) = dim W - dim(im(T)) + dim(im(T^t)),

and dim W* = dim W, so dim(im(T)) = dim(im(T^t)).

Problem (W, #8). Let T : V → W and S : W → X be linear transformations of finite dimensional real vector spaces. Prove that

rank(T) + rank(S) - dim(W) ≤ rank(S ∘ T) ≤ min{rank(T), rank(S)}.

Proof. Note V → W → X under T then S, and rank(S ∘ T) = rank(T) - dim(im T ∩ ker S), since ker(S|_{im T}) = im T ∩ ker S. For the lower bound,

rank(T) + rank(S) - dim(W) = rank(T) + rank(S) - (dim(ker S) + rank(S)) = rank(T) - dim(ker S) ≤ rank(T) - dim(im T ∩ ker S) = rank(S ∘ T).

For the upper bound, note that for a subspace M ⊂ V we have dim(L(M)) ≤ dim M, a consequence of the dimension formula. Hence

rank(S ∘ T) = dim((S ∘ T)(V)) = dim(S(T(V))) ≤ dim(T(V)) = rank(T).

Alternatively, to prove rank(S ∘ T) ≤ rank(S), note that since T(V) ⊂ W, we also have S(T(V)) ⊂ S(W) and so dim S(T(V)) ≤ dim S(W). Then

rank(S ∘ T) = dim((S ∘ T)(V)) = dim(S(T(V))) ≤ dim S(W) = rank(S).

Problem (S'03, #7). Let V be a finite dimensional real vector space. For a subspace W ⊂ V let W⁰ = {f : V → F linear | f = 0 on W}. Let W_1, W_2 ⊂ V be subspaces. Prove that W_1⁰ ∩ W_2⁰ = (W_1 + W_2)⁰.

Proof. Write W_1⁰ = {f ∈ V* : f|_{W_1} = 0}, with similar definitions for W_2⁰, (W_1 + W_2)⁰, and W_1⁰ ∩ W_2⁰, and make two observations.
1) W_1, W_2 ⊂ W_1 + W_2, so (W_1 + W_2)⁰ ⊂ W_1⁰ and (W_1 + W_2)⁰ ⊂ W_2⁰; hence (W_1 + W_2)⁰ ⊂ W_1⁰ ∩ W_2⁰.
2) Suppose f ∈ W_1⁰ ∩ W_2⁰; then f|_{W_1} = 0 and f|_{W_2} = 0, so for any w_1 + w_2 ∈ W_1 + W_2 we get f(w_1 + w_2) = f(w_1) + f(w_2) = 0, i.e. f ∈ (W_1 + W_2)⁰. Thus W_1⁰ ∩ W_2⁰ ⊂ (W_1 + W_2)⁰.

Problem (S, #8). Let V be a finite dimensional real vector space. Let M ⊂ V be a subspace and M⁰ = {f : V → F linear | f = 0 on M}. Prove that

dim(V) = dim(M) + dim(M⁰).

Proof. Let x_1, ..., x_m be a basis for M; M = span{x_1, ..., x_m}. Extend to {x_1, ..., x_n}, a basis for V. Construct a dual basis f_1, ..., f_n for V*, f_i(x_j) = δ_ij. We show that f_{m+1}, ..., f_n is a basis for M⁰. First, show M⁰ = span{f_{m+1}, ..., f_n}: let f ∈ M⁰ ⊂ V*; then

f = Σ_{i=1}^n c_i f_i = Σ_{i=1}^n f(x_i) f_i = Σ_{i=1}^m f(x_i) f_i + Σ_{i=m+1}^n f(x_i) f_i = Σ_{i=m+1}^n f(x_i) f_i ∈ span{f_{m+1}, ..., f_n}.

Second, {f_{m+1}, ..., f_n} are linearly independent, since they form a subset of a basis for V*. Thus, dim(M⁰) = n - m = dim(V) - dim(M).

Problem (F'03, #8). Prove the following three statements. You may choose an order of these statements and then use the earlier statements to prove the later statements.
a) If L : V → W is a linear transformation between two finite dimensional real vector spaces V, W, then dim im(L) = dim V - dim ker(L).
b) If L : V → V is a linear transformation on a finite dimensional real inner product space and L* is its adjoint, then im(L*) is the orthogonal complement of ker(L) in V.
c) Let A be an n×n real matrix. Then the maximal number of linearly independent rows (row rank) equals the maximal number of linearly independent columns (column rank).

Proof. We prove (a) and (b) separately; then (a), (b) ⇒ (c).

a) We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V - dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be one-to-one when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.

b) We want to show ker(L)⊥ = im(L*). Since M⊥⊥ = M, it suffices to prove ker(L) = im(L*)⊥. We have ker L = {x ∈ V : Lx = 0}, im(L*) = {L*y : y ∈ V}, and

im(L*)⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ V} = {x ∈ V : (Lx|y) = 0 for all y ∈ V}.

If x ∈ ker L, then (Lx|y) = 0 for all y, so x ∈ im(L*)⊥. Conversely, if (Lx|y) = 0 for all y ∈ V, then Lx = 0, so x ∈ ker L.

c) Using the dimension formula (a) and the Fredholm alternative (b), we obtain the Rank Theorem:

dim V = dim(ker(A)) + dim(im(A)) = dim(im(A*)⊥) + dim(im(A)) = dim V - dim(im(A*)) + dim(im(A)),

so rank(A) = rank(A*). Conjugation does not change the rank, so rank(A) = rank(A^t); rank(A) is the column rank and rank(A^t) is the row rank of A. Thus, row rank(A) = column rank(A).

We have not proved that conjugation does not change the rank; for a real matrix this issue does not arise, since A* = A^t. Indeed, (b) gives V = ker(L) ⊕ im(L*), so dim im(L*) = dim V - dim ker(L) = dim im(L) by (a), which establishes rank(A^t) = rank(A) directly.