LINEAR ALGEBRA REVIEW


JC

Stuff you should know for the exam.

1. Basics on vector spaces

(1) $F^n$ is the set of all $n$-tuples $(a_1, \ldots, a_n)$ with $a_i \in F$. It forms a VS with the operations of $+$ and scalar multiplication defined coordinatewise.

(2) A linear combination of vectors $v_1, \ldots, v_n$ is an expression $\lambda_1 v_1 + \cdots + \lambda_n v_n$ for some scalars $\lambda_i$. By convention the empty linear combination has value $0$. A linear combination of linear combinations is again a linear combination.

(3) If $W$ is a VS over $F$, a subspace is a nonempty $V \subseteq W$ such that $\lambda v \in V$ for all $\lambda \in F$, $v \in V$, and $V$ forms a VS under the inherited operations. $V$ is a subspace if and only if $0 \in V$ and $V$ is closed under linear combinations of two elements. A subspace is closed under all linear combinations.

(4) If $U$ and $V$ are subspaces of $W$ then $U + V$ and $U \cap V$ are subspaces, where $U + V = \{u + v : u \in U, v \in V\}$.

(5) If $W = U + V$ and $U \cap V = \{0\}$, then every element of $W$ has the form $u + v$ for a unique $u \in U$ and $v \in V$: for if $u_1 + v_1 = u_2 + v_2$, then $u_1 - u_2 = v_2 - v_1 \in U \cap V = \{0\}$. In this case $U$ and $V$ are called complementary.

2. Span and dependence

(1) Let $X \subseteq V$. The span of $X$ is the set of all linear combinations $\sum_{i=1}^n \lambda_i x_i$ with $\lambda_i \in F$, $x_i \in X$. Note that $X$ may be infinite, but each individual linear combination is finite; in particular, if a vector is in the span of $X$, it is in the span of a finite subset of $X$. The span of $X$ is the least subspace containing $X$.

(2) Let $v_1, \ldots, v_n$ be a sequence of vectors (repetitions are allowed). A dependence in the sequence is an equation of the form $\sum_{i=1}^n \lambda_i v_i = 0$, and the dependence is non-trivial if and only if at least one $\lambda_i$ is nonzero. If the sequence contains a zero or has a repetition, then there is a non-trivial dependence.

(3) A set $X$ of vectors is independent if and only if no finite sequence from $X$ without repetitions has a non-trivial dependence. $\emptyset$ is independent, and no independent set contains $0$.

(4) Given an equation $\sum_{i=1}^n \lambda_i v_i = 0$ with $\lambda_j \neq 0$, we may deduce that $v_j = -\sum_{i \neq j} \lambda_j^{-1} \lambda_i v_i$. This line of thought has several important consequences (see the numerical sketch below), among them:
(a) A set $X$ is independent iff $x \notin \operatorname{span}(X \setminus \{x\})$ for all $x \in X$.
(b) If $y \in \operatorname{span}(X \cup \{x\}) \setminus \operatorname{span}(X)$, then $x \in \operatorname{span}(X \cup \{y\})$.
(c) If $X$ is independent and $X \cup \{x\}$ is dependent, then $x \in \operatorname{span}(X)$.
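
These dependence criteria are easy to experiment with numerically over $F = \mathbb{R}$. A minimal sketch (my own illustration, not part of the notes), using the fact that a finite list of vectors is independent exactly when the matrix having those vectors as columns has rank equal to the number of vectors:

```python
# Independence via rank, and consequence (c) via a linear solve.
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
x = v1 + 2 * v2             # deliberately dependent on v1, v2

A = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(A))                              # 2: {v1, v2} is independent
print(np.linalg.matrix_rank(np.column_stack([v1, v2, x])))   # still 2: adding x creates a dependence

# Consequence (c): {v1, v2} is independent and {v1, v2, x} is dependent,
# so x lies in span{v1, v2}; lstsq recovers the coefficients.
coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
print(coeffs)               # approximately [1. 2.]
```

Here lstsq is just a convenient solver; any method of solving the linear system recovers the coefficients of the dependence.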

3. Basis and dimension

(1) A basis of $V$ is an independent subset of $V$ whose span is $V$.

(2) Let $B$ be a finite basis of size $n$; then all other bases are finite and have the same size. In this case we say $V$ is finite dimensional (FD) and call $n$ the dimension.

(3) $V$ has dimension $0$ if and only if $V = \{0\}$; in this case $\emptyset$ is the only basis.

(4) Related facts:
(a) Let $\dim(V) = n$ and let $I$ be an independent set. Then $|I| \le n$, and there is a basis containing $I$.
(b) Let $\dim(V) = n$ and let $X \subseteq V$ with $|X| = n + 1$. Then $X$ is dependent.
(c) If $I$ is an independent set of size $k$, then the span of $I$ is a space of dimension $k$ with $I$ as a basis.

(5) Let $\dim(W) = n$ and let $V$ be a subspace of $W$. Then $\dim(V) \le n$, and $\dim(V) = n$ iff $V = W$.

4. Linear maps, isomorphism

(1) Let $V, W$ be VS's over $F$. A function $T : V \to W$ is linear iff it preserves $+$ and scalar multiplication, equivalently $T(\lambda v + \mu w) = \lambda T(v) + \mu T(w)$. The image of $T$ is $\{Tv : v \in V\}$ and the nullspace is $\{v \in V : Tv = 0\}$. They are subspaces of $W$ and $V$ respectively.

(2) Let $T$ be linear. Then $Tv = Tv'$ iff $T(v - v') = 0$, that is, $v - v'$ is in the nullspace. So easily $T$ is injective if and only if the nullspace is $\{0\}$.

(3) Key example: if $v_1, \ldots, v_n$ are vectors in $V$ then the map $(\lambda_1, \ldots, \lambda_n) \mapsto \sum_i \lambda_i v_i$ is linear from $F^n$ to $V$.

(4) The composition of linear maps is linear. If $T$ is a linear bijection then the inverse function $T^{-1}$ is also linear. Linear bijections are called isomorphisms, and two spaces are isomorphic if and only if there is an isomorphism between them. Being isomorphic is an equivalence relation, because the identity is an isomorphism, composing isomorphisms gives an isomorphism, and the inverse of an isomorphism is an isomorphism. Isomorphic spaces are structurally the same; in particular they have the same dimension, and the image of a basis under an isomorphism is a basis.

(5) Let $v_1, \ldots, v_n$ be a list of vectors without repetitions. The map $(\lambda_1, \ldots, \lambda_n) \mapsto \sum_i \lambda_i v_i$ is injective if and only if there is no non-trivial dependence, and is surjective if and only if the $v_i$ span $V$. So this map is an isomorphism exactly when the $v_i$ form a basis. So:
(a) $V$ (a space over $F$) has dimension $n$ if and only if $V$ is isomorphic to $F^n$.
(b) Any two spaces over $F$ of dimension $n$ are isomorphic.

(6) Let $U$ and $V$ be complementary subspaces of $W$. Then the map $(u, v) \mapsto u + v$ is an isomorphism between $U \oplus V$ and $W$, where $U \oplus V$ is the VS of ordered pairs with coordinatewise operations.

(7) Let $W$ be a FD VS and let $U, V$ be subspaces. Then $\dim(U + V) = \dim(U) + \dim(V) - \dim(U \cap V)$ (checked numerically in the sketch below).
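
The dimension formula in (7) can be verified on concrete subspaces of $\mathbb{R}^4$. A minimal sketch (my own illustration, not from the notes; the identity $\dim(U \cap V) = \operatorname{nullity}([A \mid -B])$ used below assumes the columns of $A$ and the columns of $B$ are each independent):

```python
# Checking dim(U + V) = dim(U) + dim(V) - dim(U ∩ V) numerically.
import numpy as np

# Columns of A span U, columns of B span V.
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])   # U = span{e1, e2}
B = np.array([[0., 0.], [1., 0.], [0., 1.], [0., 0.]])   # V = span{e2, e3}

dim_U = np.linalg.matrix_rank(A)
dim_V = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))        # dim(U + V)

# dim(U ∩ V) computed independently: solutions of Ax = By form the nullspace
# of [A | -B], and (with independent columns on each side) its dimension
# equals dim(U ∩ V).
M = np.hstack([A, -B])
dim_cap = M.shape[1] - np.linalg.matrix_rank(M)

print(dim_sum, dim_U + dim_V - dim_cap)   # 3 3: the formula checks out
```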

5. Quotient space, rank, nullity

(1) Let $V$ be a subspace of $W$. Introduce an equivalence relation $a \mathrel{E} b \iff a - b \in V$. The equivalence classes are sets of the form $V + a = a + V$. We define $W/V$ to be the set of all equivalence classes. We make $W/V$ into a VS by defining $(V + a) + (V + b) = V + (a + b)$ and $\lambda(V + a) = V + \lambda a$. It can be checked that these operations are well-defined and satisfy the VS axioms.

(2) Let $T : V \to W$ be linear and let $N$ be the nullspace of $T$. Then $Tv = Tv' \iff T(v - v') = 0 \iff v - v' \in N \iff N + v = N + v'$, and we get a bijection between $V/N$ and $\operatorname{im}(T)$ in which $N + v$ maps to $Tv$. It is easy to see this bijection is linear, so the spaces $V/N$ and $\operatorname{im}(T)$ are isomorphic; in particular they have the same dimension.

(3) To put it another way: if $T : V \to W$ is linear and $Tv = b$, then the set of solutions of $Tw = b$ is precisely $N + v$, where $N$ is the nullspace.

(4) Let $W$ be FD of dimension $n$ and let $V$ be a subspace of dimension $m$. Let $v_1, \ldots, v_m$ be a basis of $V$ and extend it to get a basis $v_1, \ldots, v_n$ for $W$. A routine check shows that the set of $V + v_j$ for $m < j \le n$ forms a basis for $W/V$, so $\dim(W/V) = n - m = \dim(W) - \dim(V)$.

(5) Let $T : V \to W$ be a linear map between FD spaces. The rank of $T$ is the dimension of the image and the nullity is the dimension of the nullspace. By the considerations above, $\dim(V) = r + n$ where $r$ is the rank and $n$ is the nullity (see the sketch below).

(6) If $\dim(V) = \dim(W) = K$ and $T : V \to W$ is linear, then $T$ is injective iff the nullity is zero iff the rank is $K$ iff $T$ is surjective.

6. Matrix of a transformation, linear equations

(1) Let $T : V \to W$ be linear and let $B$ be a basis for $V$. Then easily $T$ is determined by $T|_B$ (the restriction of $T$ to $B$), and in fact any function from $B$ to $W$ is $T|_B$ for a unique linear $T$.

(2) Let $V, W$ be FD spaces over $F$ and fix bases $v_1, \ldots, v_m$ and $w_1, \ldots, w_n$. Let $T : V \to W$ be a linear map. The matrix of $T$ is the $n \times m$ array of elements of $F$ given by $Tv_j = \sum_i a_{ij} w_i$. It has $n$ rows and $m$ columns, and $a_{ij}$ is the entry in row $i$, column $j$. Rows correspond to elements of the basis of $W$, columns to elements of the basis of $V$; column $j$ gives the coefficients of the expansion of $Tv_j$ in the basis $w_1, \ldots, w_n$. Subtle point: formally the matrix depends on the order in which the bases are enumerated. This is not important as long as you stick to a fixed enumeration.

(3) If $A = (a_{ij})$ is $n \times m$ and $B = (b_{jk})$ is $m \times l$, then the matrix product $AB$ is $n \times l$ and has entries $(c_{ik})$ where $c_{ik} = \sum_{j=1}^m a_{ij} b_{jk}$. The product of matrices corresponds to composition of transformations.

(4) A column vector of height $k$ is a $k \times 1$ matrix. With the same notation as above, let $v \in V$ and expand $v = \sum_j b_j v_j$ and $Tv = \sum_i c_i w_i$. If $b, c$ are the column vectors formed from the $b$'s and $c$'s, then $c = Ab$.

(5) For a given linear $T : V \to W$ and $w \in W$, solving the equation $Tv = w$ amounts to solving a system of $n$ simultaneous equations in $m$ unknowns.
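
Rank and nullity are easy to compute for a concrete matrix. A minimal sketch (my own illustration, not from the notes), computing the rank with matrix_rank and a nullspace basis from the SVD:

```python
# Rank-nullity: dim(V) = rank + nullity for T : R^4 -> R^3.
import numpy as np

T = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])     # row 3 = row 1 + row 2, so rank 2

rank = np.linalg.matrix_rank(T)

# Nullspace basis: right singular vectors whose singular values are (near) zero.
_, s, Vt = np.linalg.svd(T)
tol = max(T.shape) * np.finfo(float).eps * s[0]
null_basis = Vt[np.sum(s > tol):]     # rows spanning the nullspace
nullity = null_basis.shape[0]

print(rank, nullity, T.shape[1])      # 2 2 4: dim(V) = rank + nullity
assert np.allclose(T @ null_basis.T, 0)
```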

7. Spaces of transformations

(1) The set of all linear maps from $V$ to $W$ is a vector space if we define $(\phi + \psi)(v) = \phi(v) + \psi(v)$ and $(\lambda\phi)(v) = \lambda(\phi(v))$.

(2) Let $V$ have a basis $v_1, \ldots, v_m$ and $W$ a basis $w_1, \ldots, w_n$. Then the space of linear maps from $V$ to $W$ has a basis of the form $\{\rho_{ij}\}$, where $\rho_{ij}$ is the unique linear map which takes $v_j$ to $w_i$ and all other basis elements to zero; explicitly $\rho_{ij}(\sum_k \lambda_k v_k) = \lambda_j w_i$. In particular the dimension of the space of maps is $mn$. Remark: $\rho_{ij}$ has a matrix with a one in row $i$, column $j$ and zeros elsewhere.

(3) $F$ is a vector space of dimension one over $F$. The space of all linear maps from $V$ to $F$ is called the dual space $V^*$. If $v_1, \ldots, v_m$ is a basis for $V$, the dual basis is $v_1^*, \ldots, v_m^*$ where $v_i^*(\sum_k \lambda_k v_k) = \lambda_i$.

(4) If $T : V \to W$ is linear, we induce a map $T^* : W^* \to V^*$ by $T^*\phi = \phi \circ T$. If the matrix of $T$ with some bases is the $n \times m$ matrix $A$, the matrix of $T^*$ with respect to the dual bases is the transpose, that is, the $m \times n$ matrix with $a_{ji}$ in row $i$, column $j$.

8. Multilinearity, tensor product

(1) Let $V_1, \ldots, V_n$ and $W$ be VS's over a field $k$. A function $\phi : V_1 \times \cdots \times V_n \to W$ is multilinear if and only if it is linear in each argument when the other arguments are held constant. The set of such multilinear maps forms a VS. If $\phi : V_1 \times \cdots \times V_n \to W$ is multilinear and $\psi : W \to W'$ is linear, then $\psi \circ \phi$ is multilinear.

(2) Given $V_1, \ldots, V_n$ we can construct the most general multilinear map with domain $V_1 \times \cdots \times V_n$. Put precisely, we can construct a space $V_1 \otimes \cdots \otimes V_n$ (the tensor product) and a multilinear map $\otimes : V_1 \times \cdots \times V_n \to V_1 \otimes \cdots \otimes V_n$ such that for every space $W$ and every multilinear $\phi : V_1 \times \cdots \times V_n \to W$ there is a unique linear $\psi : V_1 \otimes \cdots \otimes V_n \to W$ such that $\phi = \psi \circ \otimes$. We usually write $\otimes(v_1, \ldots, v_n) = v_1 \otimes \cdots \otimes v_n$, and in this notation the property $\phi = \psi \circ \otimes$ reads $\psi(v_1 \otimes \cdots \otimes v_n) = \phi(v_1, \ldots, v_n)$.

(3) From now on we assume that each $V_i$ is finite dimensional with dimension $d_i$ and a basis $B_i$. Then we can construct the tensor product easily: it has dimension $d_1 \cdots d_n$ with a basis of elements of the form $b_1 \otimes \cdots \otimes b_n$ with $b_i \in B_i$. This reflects the fact that any multilinear map on $V_1 \times \cdots \times V_n$ is determined by its values on $B_1 \times \cdots \times B_n$ (see the sketch below).
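
The universal property can be seen concretely for two factors over $\mathbb{R}$: the coordinates of $v \otimes w$ in the basis $\{b_i \otimes b_j\}$ are the entries of the outer product, and every bilinear form factors linearly through them. A minimal sketch (my own illustration; the bilinear map $\phi(v, w) = v^T M w$ is an arbitrary example, not from the notes):

```python
# A bilinear map phi factors as a LINEAR map psi on the d1*d2 tensor coordinates.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))          # defines a bilinear map R^3 x R^4 -> R
phi = lambda v, w: v @ M @ w

tensor = lambda v, w: np.outer(v, w).ravel()   # coordinates of v ⊗ w
psi = M.ravel()                                 # the induced linear map

v, w = rng.standard_normal(3), rng.standard_normal(4)
print(np.isclose(phi(v, w), psi @ tensor(v, w)))   # True: phi = psi ∘ ⊗
```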

9. Alternating maps, exterior power, determinants

(1) A multilinear map from $V^n$ to $W$ is alternating if and only if it is zero on any argument $(z_1, \ldots, z_n)$ such that $z_i = z_j$ for some $i < j$. If $\phi$ is alternating and $\sigma \in S_n$, then $\phi(z_{\sigma(1)}, \ldots, z_{\sigma(n)}) = (-1)^{\operatorname{parity}(\sigma)} \phi(z_1, \ldots, z_n)$.

(2) We can describe alternating multilinear maps using a construction similar to the tensor product, called the exterior power $\Lambda^n V$. This construction gives a space $\Lambda^n V$ and an alternating map $\wedge : V^n \to \Lambda^n V$ such that for any alternating $\phi : V^n \to W$ there is a unique linear $\psi : \Lambda^n V \to W$ with $\psi(v_1 \wedge \cdots \wedge v_n) = \phi(v_1, \ldots, v_n)$.

(3) We will assume that $V$ is FD with dimension $d$ and fix a basis $B$, enumerated as $b_1, \ldots, b_d$. It is easy to see that if $t > d$ then any alternating map on $V^t$ is zero (use multilinearity and the fact that any tuple in $B^t$ has a repetition). By convention $\Lambda^t V = 0$ in this case.

(4) When $t \le d$ we can describe $\Lambda^t V$ as follows: it is a space of dimension $\binom{d}{t}$ with a basis of elements of the form $b_{i_1} \wedge b_{i_2} \wedge \cdots \wedge b_{i_t}$ where $i_1 < i_2 < \cdots < i_t$. In particular, when $t = d$, $\Lambda^t V$ has dimension one.

(5) Let $T : V \to V$ be linear. The map $(z_1, \ldots, z_d) \mapsto Tz_1 \wedge \cdots \wedge Tz_d$ is alternating, and since $\Lambda^d V$ has dimension one there is a unique $\lambda \in F$ such that $Tz_1 \wedge \cdots \wedge Tz_d = \lambda(z_1 \wedge \cdots \wedge z_d)$ for all $z_i \in V$. This $\lambda$ is called the determinant of $T$. Routine calculation shows that if $T$ has matrix $A$ with respect to some basis, then the determinant of $T$ is the determinant of $A$.

(6) Some easy facts: if $I$ is the identity transformation, $\det(I) = 1$. $\det(ST) = \det(S)\det(T)$. $S$ is invertible iff $\det(S) \neq 0$. $\det(A)$ is unchanged if a multiple of one row of $A$ is subtracted from another, and is multiplied by $-1$ when two rows are swapped. The determinant of an upper triangular matrix is the product of its diagonal elements, so we can efficiently compute determinants by row reduction (see the sketch below). The identity transformation $I$ has matrix $I_n$ with ones down the diagonal and zeros elsewhere. If $A$ is a matrix with $\det(A) \neq 0$, there is a unique $B$ with $AB = BA = I_n$; $B$ is called the inverse matrix and written $A^{-1}$.
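
The row-reduction recipe in (6) is short to implement. A minimal sketch (my own illustration, not from the notes), with partial pivoting added for numerical stability:

```python
# Determinant by row reduction: reduce to upper triangular form, tracking
# only row swaps (each flips the sign); adding a multiple of one row to
# another leaves the determinant unchanged.
import numpy as np

def det_by_row_reduction(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(A[j:, j]))    # partial pivoting
        if np.isclose(A[p, j], 0.0):
            return 0.0                          # singular: det = 0
        if p != j:
            A[[j, p]] = A[[p, j]]
            sign = -sign                        # a swap multiplies det by -1
        A[j+1:] -= np.outer(A[j+1:, j] / A[j, j], A[j])  # eliminate below pivot
    return sign * np.prod(np.diag(A))           # product of diagonal entries

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
print(det_by_row_reduction(A), np.linalg.det(A))   # both 18.0
```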

10. Real inner product spaces

(1) Let $V$ be a real VS. An inner product on $V$ is a bilinear function $v \cdot w$ from $V^2$ to $\mathbb{R}$ such that $v \cdot w = w \cdot v$, $v \cdot v \ge 0$, and $v \cdot v = 0$ if and only if $v = 0$. A real inner product space is a real space equipped with an inner product.

(2) The standard inner product on $\mathbb{R}^n$ (sometimes called the dot product) is given by $(x_1, \ldots, x_n) \cdot (y_1, \ldots, y_n) = \sum_i x_i y_i$.

(3) By definition $\|v\| = \sqrt{v \cdot v}$. This is called the length or norm of $v$. Since $0 \le (v + \lambda w) \cdot (v + \lambda w) = \|v\|^2 + 2\lambda(v \cdot w) + \lambda^2 \|w\|^2$, it follows that $|v \cdot w| \le \|v\| \, \|w\|$ (the Cauchy-Schwarz inequality). From this we see easily that $\|v + w\| \le \|v\| + \|w\|$.

(4) $v$ and $w$ are orthogonal (a fancy word for perpendicular) if $v \cdot w = 0$. Only the zero vector is orthogonal to itself. If $v$ and $w$ are orthogonal then so are $\lambda v$ and $\mu w$ for any scalars $\lambda, \mu$.

(5) If $v \neq 0$ and $\lambda = \|v\|^{-1}$, then $\|\lambda v\| = 1$.

(6) A subset of a real IP space is orthonormal if and only if every element has length one and distinct elements are orthogonal. An ON set is independent, because if $\sum_i \lambda_i x_i = 0$ where the $x_i$ are distinct elements of an ON set, we can form the inner product with $x_i$ to conclude that $\lambda_i = 0$. More generally, we have (Pythagoras' theorem) that if the $x_i$ are distinct elements of an ON set then $\|\sum_i \lambda_i x_i\|^2 = \sum_i \lambda_i^2$.

(7) Any FD real IP space has an ON basis (the Gram-Schmidt process; see the sketch after section 12). To see this we start with any basis $w_1, \ldots, w_n$ and generate an ON set $v_1, \ldots, v_n$ such that the span of $v_1, \ldots, v_j$ equals the span of $w_1, \ldots, w_j$. Given $v_1, \ldots, v_j$ we first define $v'_{j+1} = w_{j+1} - \sum_{i=1}^{j} (w_{j+1} \cdot v_i) v_i$, and check that this is nonzero and orthogonal to $v_1, \ldots, v_j$. Then we scale it to get a vector $v_{j+1}$ of length one. Remark: if $v_1, \ldots, v_n$ is an ON basis then $(\sum_i \lambda_i v_i) \cdot (\sum_j \mu_j v_j) = \sum_i \lambda_i \mu_i$. So by a change of basis we can make any IP into the familiar dot product on $\mathbb{R}^n$.

(8) If $v_1, \ldots, v_n$ is an ON basis then the expansion of $v$ takes a very simple form: if $v = \sum_i \lambda_i v_i$, then taking the inner product with $v_j$ we see that $v \cdot v_j = \lambda_j$.

(9) If $V$ is a FD IP space then every element $\phi$ of the dual space $V^*$ has the form $v \mapsto v \cdot w$ for a unique $w$. This can be seen easily by letting $v_1, \ldots, v_n$ be an ON basis and writing $\phi$ in terms of the dual basis as $\phi = \sum_i \lambda_i v_i^*$. It is routine to see that $w = \sum_i \lambda_i v_i$ works, and in fact is the only possibility.

(10) If $T : V \to V$ is linear, then an adjoint for $T$ is a linear map $S : V \to V$ such that $Tv \cdot w = v \cdot Sw$ for all $v, w$. If it exists it is unique. For $V$ FD the adjoint $T^*$ always exists, and if we let $A$ be the matrix of $T$ in an ON basis, then the matrix of $T^*$ is the transpose of $A$. $T$ is self-adjoint if and only if $T = T^*$; in an ON basis this happens if and only if the matrix is symmetric.

(11) Let $W$ be a real inner product space and let $X \subseteq W$. Then $X^\perp$ is the set of $v$ such that $v \cdot x = 0$ for all $x \in X$. This is a subspace of $W$.

(12) Let $W$ be a FD real inner product space and let $V$ be a subspace of $W$. We claim that $V$ and $V^\perp$ are complementary. To see this let $v_1, \ldots, v_m$ be an ON basis of $V$, and extend it to get an ON basis $v_1, \ldots, v_n$ for $W$. Then easily $w \in V^\perp$ if and only if $v_j \cdot w = 0$ for $1 \le j \le m$, if and only if $w$ is a linear combination of $\{v_{m+1}, \ldots, v_n\}$. So $\{v_{m+1}, \ldots, v_n\}$ is a basis for $V^\perp$.

11. Projection on a subspace

(1) Let $W$ be a FD subspace of a real IP space $V$. Then for any $v \in V$ there is a unique $w \in W$ which is closest to $v$. This can be computed as follows: choose an ON basis $w_1, \ldots, w_m$ for $W$ and let $\lambda_i = v \cdot w_i$; then $w = \sum_i \lambda_i w_i$. The main point is that $v - w \in W^\perp$. The map which takes $v$ to $w$ as above is a linear map called orthogonal projection onto $W$.

(2) Assume that $V$ is FD. Then if $P$ is orthogonal projection onto $W$, we have $P^2 = P = P^*$. Conversely, if $P^2 = P = P^*$ for some linear map $P$, then $P$ is the orthogonal projection onto $W$, where $W$ is the image of $P$.

12. Complex numbers

(1) The field of complex numbers consists of expressions $a + bi$ for real $a$ and $b$, added and multiplied in the usual way (follow the usual laws of arithmetic with $i^2 = -1$).

(2) If $z = a + bi$, then the conjugate $\bar z$ is given by $\bar z = a - bi$. $\overline{z + w} = \bar z + \bar w$ and $\overline{zw} = \bar z \, \bar w$. $z \bar z = a^2 + b^2$, which is zero only when $z = 0$.

(3) Any complex polynomial can be written as a product of linear complex polynomials. In particular, any complex polynomial of degree greater than zero has a root.
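
Sections 10(7) and 11(2) translate directly into code. A minimal sketch (my own illustration, not from the notes) of Gram-Schmidt and of the projection identities $P^2 = P = P^*$ for a subspace of $\mathbb{R}^5$:

```python
# Gram-Schmidt, then orthogonal projection onto the resulting subspace.
import numpy as np

def gram_schmidt(W):
    """Orthonormalise the columns of W (assumed independent)."""
    vs = []
    for w in W.T:
        v = w - sum((w @ u) * u for u in vs)   # subtract projections onto earlier v's
        vs.append(v / np.linalg.norm(v))        # scale to length one
    return np.column_stack(vs)

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 3))                 # basis of a 3-dim subspace of R^5
Q = gram_schmidt(W)
print(np.allclose(Q.T @ Q, np.eye(3)))          # True: columns form an ON set

# Orthogonal projection onto the subspace: P = Q Q^T in an ON basis.
P = Q @ Q.T
print(np.allclose(P @ P, P), np.allclose(P, P.T))  # True True: P^2 = P = P*
v = rng.standard_normal(5)
print(np.allclose(Q.T @ (v - P @ v), 0))        # True: v - Pv is orthogonal to W
```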

13. Eigenvalues, characteristic equation, diagonalisation

(1) Let $T : V \to V$ be linear. An eigenvector of $T$ is $v \in V$ such that $v \neq 0$ and $Tv = \lambda v$ for some $\lambda \in F$; $\lambda$ is called the eigenvalue associated with $v$. If $\lambda$ is an eigenvalue then the subspace $\{w : Tw = \lambda w\}$ is the eigenspace; it is not $\{0\}$, and its nonzero members are precisely the eigenvectors with eigenvalue $\lambda$.

(2) Let $V$ be FD. Then $\lambda$ is an eigenvalue of $T$ if and only if the nullspace of $\lambda I - T$ is nonzero, which by the theory of determinants happens exactly when $\det(\lambda I - T) = 0$. The polynomial $\det(\lambda I - T)$ is the characteristic polynomial of $T$, and $\det(\lambda I - T) = 0$ is the characteristic equation of $T$. The characteristic polynomial of $T$ has degree $\dim(V)$, so the equation has at most $\dim(V)$ roots.

(3) Let $V$ be FD. A basis of $V$ consists of eigenvectors for $T$ if and only if the matrix of $T$ is diagonal in that basis.

(4) Let $v_1, \ldots, v_n$ be a family of eigenvectors of $T$, each one belonging to a different eigenvalue $\lambda_i$ of $T$. Then the $v_i$ are independent. To see this, take a dependence of minimal size, which (renumbering if necessary) may be assumed to have the form $\sum_{j=1}^m \mu_j v_j = 0$ with all $\mu_j \neq 0$, for some $m$ with $1 < m \le n$. Multiplying by $\lambda_1$ gives $\sum_{j=1}^m \lambda_1 \mu_j v_j = 0$. Applying $T$ gives $\sum_{j=1}^m \lambda_j \mu_j v_j = 0$. Subtracting, $\sum_{j=2}^m \mu_j(\lambda_j - \lambda_1) v_j = 0$, a non-trivial dependence of smaller size (each $\lambda_j - \lambda_1 \neq 0$). Contradiction, as we chose $m$ to be the minimal size for a dependence.

(5) Let $V$ be FD and let $T$ have $\dim(V)$ distinct eigenvalues. Then $V$ has a basis of eigenvectors (see the sketch below). Proof: choose one eigenvector for each eigenvalue; they form an independent set of size $\dim(V)$.

(6) If $V$ is a real inner product space and $T$ is a self-adjoint map from $V$ to $V$, then eigenvectors which belong to distinct eigenvalues are orthogonal. In fact, if $Tv = \lambda v$ and $Tw = \mu w$ we see that $\lambda v \cdot w = Tv \cdot w = v \cdot Tw = \mu v \cdot w$.
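
A minimal sketch (my own illustration, not from the notes) of this section for a small real matrix: the eigenvalues are the roots of the characteristic polynomial, and with distinct eigenvalues the eigenvector basis diagonalises $T$:

```python
# Eigenvalues as roots of det(lambda*I - T); diagonalisation when distinct.
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])

# np.poly(T) gives the coefficients of the characteristic polynomial of a
# square matrix; its roots are the eigenvalues.
print(np.roots(np.poly(T)))            # [3. 2.]

evals, V = np.linalg.eig(T)            # columns of V are eigenvectors
D = np.linalg.inv(V) @ T @ V           # change of basis to the eigenvector basis
print(np.allclose(D, np.diag(evals)))  # True: T is diagonal in that basis
```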

14. Complex IP spaces, diagonalisation of a self-adjoint map

(1) Let $V$ be a complex VS. An inner product on $V$ is a function $v \cdot w$ from $V^2$ to $\mathbb{C}$ such that:
(a) (Hermitian property) $v \cdot w = \overline{w \cdot v}$.
(b) (Sesquilinear property) $(v_1 + v_2) \cdot w = v_1 \cdot w + v_2 \cdot w$, $v \cdot (w_1 + w_2) = v \cdot w_1 + v \cdot w_2$, $(\lambda v) \cdot w = \lambda(v \cdot w)$, $v \cdot (\lambda w) = \bar\lambda(v \cdot w)$.
(c) (Positivity property) $v \cdot v \in \mathbb{R}$, $v \cdot v \ge 0$, and $v \cdot v = 0$ if and only if $v = 0$.
A complex inner product space is a complex space equipped with an inner product.

(2) The standard inner product on $\mathbb{C}^n$ (sometimes called the dot product) is given by $(x_1, \ldots, x_n) \cdot (y_1, \ldots, y_n) = \sum_i x_i \bar y_i$.

(3) The following facts have proofs similar to the real case, except that you have to be careful since the inner product is only linear in the first argument:
(a) Any FD complex IP space has an ON basis.
(b) If $v_1, \ldots, v_n$ is an ON basis then $v = \sum_{i=1}^n (v \cdot v_i) v_i$.
(c) Every element of $V^*$ is $v \mapsto v \cdot w$ for a unique $w$.
(d) An ON set is independent.

(4) It is also easy to see that if $V$ is a FD complex IP space and $T : V \to V$ is linear, there is a unique linear $T^*$ (the adjoint) such that $Tv \cdot w = v \cdot T^*w$. If $T$ has matrix $A = (a_{ij})$ with respect to some ON basis, the matrix of $T^*$ is the Hermitian conjugate of $A$, that is, the matrix with $\bar a_{ji}$ in row $i$, column $j$. $T$ is self-adjoint if and only if $T = T^*$; this corresponds to the matrix $A$ being Hermitian (equal to its Hermitian conjugate).

(5) When $V$ is FD and $W$ is a subspace of $V$, the orthogonal projection onto $W$ is a linear map $P$ with $P^2 = P = P^*$. As in the real case, the property $P^2 = P = P^*$ characterises the projection maps.

(6) If $T$ is self-adjoint then all eigenvalues of $T$ are real. To see this, observe that if $Tv = \lambda v$ with $v \neq 0$, then $\lambda (v \cdot v) = Tv \cdot v = v \cdot Tv = \bar\lambda (v \cdot v)$, so $\lambda = \bar\lambda$.

(7) If $T$ is self-adjoint then eigenvectors corresponding to different eigenvalues are orthogonal. The argument is similar to the real case and uses the fact that all eigenvalues are real.

(8) If $V$ is a FD complex space with $\dim(V) > 0$ and $S : V \to V$ is linear, then $S$ has an eigenvalue, because the characteristic polynomial has positive degree and so has at least one root.

(9) (Complex form of the spectral theorem) If $W$ is a FD complex IP space and $T$ is self-adjoint, then $W$ has an ON basis consisting of eigenvectors. To see this, let $\lambda_1, \ldots, \lambda_m$ be the eigenvalues, and choose for each $i$ an ON basis $B_i$ for the eigenspace $W_i$ corresponding to $\lambda_i$. Let $B = \bigcup_i B_i$, and let $V$ be the subspace spanned by the ON set $B$. It is easy to see that $v \in V \Rightarrow Tv \in V$, and $v \in V^\perp \Rightarrow Tv \in V^\perp$. Let $S : V^\perp \to V^\perp$ be the restriction of $T$ to $V^\perp$: we must have $V^\perp = \{0\}$, because otherwise $S$ would have an eigenvector $v$, and this would be a nonzero element of $V \cap V^\perp$. So $V = W$ and we are done. It is easy to see that the spectral theorem expresses $T$ in the form $\sum_i \lambda_i P_i$ where $P_i$ is orthogonal projection onto $W_i$.

(10) (Real form of the spectral theorem) Let $V$ be a real inner product space and let $T : V \to V$ be self-adjoint. Then $V$ has an ON basis consisting of eigenvectors of $T$. Proof: use complex numbers to show that a self-adjoint map on a real inner product space of dimension greater than zero has a (real!) eigenvalue, then argue as above. As in the complex case, $T$ is a linear combination of projections onto the eigenspaces, with coefficients the eigenvalues.

15. Orthogonal and unitary matrices

(1) A real square matrix $O$ is orthogonal if $O^T = O^{-1}$. Note that this is so if and only if the columns of $O$ form an ON set (look at $O^T O = I$), if and only if the rows of $O$ form an ON set (look at $O O^T = I$). In matrix language the real spectral theorem says that if $A$ is symmetric there is an orthogonal $O$ with $O^T A O = O^{-1} A O$ diagonal (see the sketch below).

(2) A complex square matrix $U$ is unitary if $U^H = U^{-1}$. Note that this is so if and only if the columns of $U$ form an ON set, if and only if the rows of $U$ form an ON set. In matrix language the complex spectral theorem says that if $A$ is Hermitian there is a unitary $U$ with $U^H A U = U^{-1} A U$ diagonal.
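
In matrix language the real spectral theorem is one call to a symmetric eigensolver. A minimal sketch (my own illustration, not from the notes; numpy's eigh returns an ON basis of eigenvectors for a symmetric matrix):

```python
# Real spectral theorem: O^T A O diagonal, and A = sum_i lambda_i P_i.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B + B.T                               # a symmetric matrix

evals, O = np.linalg.eigh(A)              # columns of O: ON eigenvector basis
print(np.allclose(O.T @ O, np.eye(4)))    # True: O is orthogonal
print(np.allclose(O.T @ A @ O, np.diag(evals)))   # True: diagonalised

# Spectral decomposition: each column o_i of O gives a rank-one projection.
A_rebuilt = sum(lam * np.outer(o, o) for lam, o in zip(evals, O.T))
print(np.allclose(A, A_rebuilt))          # True: A = sum_i lambda_i P_i
```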

16. Quadratic forms, positive (semi)definite matrices

(1) A real quadratic form in $n$ variables can be written in the form $x^T A x$ for a unique symmetric $A$. The spectral theorem gives a change of variables $x = Oy$ such that in the new variables the form becomes $y^T (O^T A O) y = \sum_i \lambda_i y_i^2$, where the $\lambda_i$ are the eigenvalues of $A$.

(2) In particular, $x^T A x \ge 0$ for all $x$ if and only if all the $\lambda_i$ are non-negative, and in this case we say $A$ is positive semidefinite. $x^T A x \ge 0$ for all $x$ with equality only at $x = 0$ if and only if all the $\lambda_i$ are positive, and in this case we say $A$ is positive definite. (See the sketch below.)
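
A minimal sketch (my own illustration, not from the notes) of the change of variables in (1) and the eigenvalue test in (2):

```python
# Diagonalising a quadratic form by x = Oy; definiteness from eigenvalue signs.
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])                      # symmetric, eigenvalues 1 and 3

evals, O = np.linalg.eigh(A)
print(evals)                                  # [1. 3.]: all positive, so A is positive definite

rng = np.random.default_rng(3)
y = rng.standard_normal(2)
x = O @ y                                     # the change of variables x = Oy
print(np.isclose(x @ A @ x, np.sum(evals * y**2)))   # True: x^T A x = sum_i lambda_i y_i^2
```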