Lecture Notes for Math 414: Linear Algebra II Fall 2015, Michigan State University


Matthew Hirn
December 11, 2015

Beginning of Lecture 1

1 Vector Spaces

What is this course about?

1. Understanding the structural properties of a wide class of spaces which all share a similar additive and multiplicative structure (structure = vector addition and scalar multiplication): vector spaces.

2. The study of linear maps on finite dimensional vector spaces.

We begin with vector spaces. First, two examples:

1. $\mathbb{R}^n$ = $n$-tuples of real numbers $x = (x_1, \ldots, x_n)$, $x_k \in \mathbb{R}$, with
   vector addition: $x + y = (x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n)$
   scalar multiplication: for $\lambda \in \mathbb{R}$, $\lambda x = \lambda(x_1, \ldots, x_n) = (\lambda x_1, \ldots, \lambda x_n)$

2. $\mathbb{C}^n$ [on your own: review 1.A on complex numbers]

1.B Definition of Vector Space

Scalars: a field $\mathbb{F}$ (assume $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$ unless otherwise stated). So the previous two vector spaces can be written as $\mathbb{F}^n$ with scalars $\mathbb{F}$. Let $V$ be a set (for now).

Definition 1 (Vector addition). For $u, v \in V$, assigns an element $u + v \in V$.

Definition 2 (Scalar multiplication). For $\lambda \in \mathbb{F}$, $v \in V$, assigns an element $\lambda v \in V$.

Definition 3 (Vector space). A set $V$ is a vector space over the field $\mathbb{F}$ if vector addition and scalar multiplication are defined, and the following properties hold (for all $u, v, w \in V$ and $a, b \in \mathbb{F}$):

1. Commutativity: $u + v = v + u$
2. Associativity: $(u + v) + w = u + (v + w)$ and $(ab)v = a(bv)$
3. Additive Identity: $\exists\, 0 \in V$ such that $v + 0 = v$
4. Additive Inverse: for every $v$ there exists $w$ such that $v + w = 0$
5. Multiplicative Identity: $1v = v$
6. Distributive Properties: $a(u + v) = au + av$ and $(a + b)v = av + bv$

If $\mathbb{F} = \mathbb{R}$: real vector space. If $\mathbb{F} = \mathbb{C}$: complex vector space. From here on out $V$ will always denote a vector space.

Two more examples of vector spaces:

1. $\mathbb{F}^\infty = \{x = (x_1, x_2, \ldots) : x_k \in \mathbb{F}\}$, just like $\mathbb{F}^n$
2. $\mathbb{F}^S$ = the set of functions $f : S \to \mathbb{F}$ from $S$ to $\mathbb{F}$ [check on your own]

Now for some important properties...

Proposition 1. The additive identity is unique.

Proof. Let $0_1$ and $0_2$ be any two additive identities. Then $0_1 = 0_1 + 0_2 = 0_2 + 0_1 = 0_2$, where the first equality holds because $0_2$ is an additive identity and the last because $0_1$ is.

Proposition 2. The additive inverse is unique.

Proof. Let $w_1$ and $w_2$ be two additive inverses of $v$. Then:

$$w_1 = w_1 + 0 = w_1 + (v + w_2) = (w_1 + v) + w_2 = 0 + w_2 = w_2$$

Now we can write $-v$ for the additive inverse of $v$ and define subtraction as $v - w = v + (-w)$. On the other hand, we still don't know that $(-1)v = -v$!

Notation: We have $0 \in \mathbb{F}$ and $0 \in V$. In the previous two propositions we dealt with $0 \in V$. Next we will handle $0 \in \mathbb{F}$. We just write $0$ for either and use the context to determine the meaning.

Proposition 3. $0_{\mathbb{F}}\, v = 0_V$ for every $v \in V$.

Proof. $0v = (0 + 0)v = 0v + 0v$; adding the additive inverse of $0v$ to both sides gives $0v = 0$.

Now the other way around...

Proposition 4. $\lambda 0 = 0$ for every $\lambda \in \mathbb{F}$.

Proposition 5. $(-1)v = -v$ for all $v \in V$.

Proof. $v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = 0$. Now use uniqueness of the additive inverse.

End of Lecture 1

Beginning of Lecture 2

Warmup: Is the empty set a vector space? Answer: No, since $0 \notin \emptyset$.

1.C Subspaces

A great way to find new vector spaces is to identify subsets of an existing vector space which are closed under addition and scalar multiplication.

Definition 4 (Subspace). $U \subseteq V$ is a subspace of $V$ if $U$ is also a vector space (using the same vector addition and scalar multiplication as $V$).

Proposition 6. $U \subseteq V$ is a subspace if and only if:

1. $0 \in U$
2. $u, w \in U \implies u + w \in U$
3. $\lambda \in \mathbb{F}$ and $u \in U \implies \lambda u \in U$

Now we can introduce more interesting examples of vector spaces, many of which are subspaces of $\mathbb{F}^S$ for some set $S$ [you should verify these are vector spaces]:

1. $\mathcal{P}(\mathbb{F}) = \{p : \mathbb{F} \to \mathbb{F} : p(z) = a_0 + a_1 z + \cdots + a_m z^m,\ a_k \in \mathbb{F}\ \forall k,\ m \in \mathbb{N}\}$, the polynomials (with $\deg(p) = m$ when $a_m \neq 0$)
2. $C(\mathbb{R}; \mathbb{R})$ = real valued continuous functions
3. $C^m(\mathbb{R}^n; \mathbb{R})$ = real valued functions with continuous partial derivatives up to order $m$
4. $\mathcal{R}([0,1]) = \{f : [0,1] \to \mathbb{R} : \int_0^1 |f(x)|\, dx < \infty\}$
5. $\mathbb{F}^{m,n}$ = the set of all $m \times n$ matrices with entries in $\mathbb{F}$
6. $S = \{x : [0,1] \to \mathbb{R}^n : x'(t) \text{ is continuous and } x'(t) = Ax(t), \text{ where } A \in \mathbb{R}^{n,n}\}$

Another convenient way to get new vector spaces is to add subspaces together (this is like the union of two sets, but for vector spaces!).

Definition 5 (Sum of subsets). Suppose $U_1, \ldots, U_m \subseteq V$. Then:

$$U_1 + \cdots + U_m := \{u_1 + \cdots + u_m : u_1 \in U_1, \ldots, u_m \in U_m\}$$

Proposition 7. Suppose $U_1, \ldots, U_m$ are subspaces of $V$. Then $U_1 + \cdots + U_m$ is the smallest subspace of $V$ containing $U_1, \ldots, U_m$.

An example:

$$U_1 = \{x \in \mathbb{R}^3 : x_1 + x_2 + x_3 = 0\}$$
$$U_2 = \{x \in \mathbb{R}^3 : x_3 = 0\}$$
$$U_1 + U_2 = \{x \in \mathbb{R}^3 : x = y + z,\ y_1 + y_2 + y_3 = 0 \text{ and } z_3 = 0\}$$
$$U_1 + U_2 = \{x \in \mathbb{R}^3 : x = a(-1, 0, 1) + b(1, -1, 0) + c(1, 0, 0) + d(0, 1, 0)\} \qquad (1)$$
$$U_1 + U_2 = \mathbb{R}^3$$

Note there is redundancy in (1). We will be especially interested in situations that avoid this redundancy, i.e., subspace summations $U_1 + \cdots + U_m$ in which the representation $u_1 + \cdots + u_m$ is unique.

Definition 6 (Direct sum). Suppose that $U_1, \ldots, U_m$ are subspaces of $V$. $U_1 + \cdots + U_m$ is a direct sum if each element of $U_1 + \cdots + U_m$ can be written in only one way as $u_1 + \cdots + u_m$ where $u_k \in U_k$. If $U_1 + \cdots + U_m$ is a direct sum, then we denote it as $U_1 \oplus \cdots \oplus U_m$.

Examples:

1. Let $U_k$ be the subspace of $\mathbb{F}^n$ in which only the $k$th coordinate is nonzero:
   $$U_k = \{(0, \ldots, 0, \underbrace{x}_{k\text{th}}, 0, \ldots, 0) : x \in \mathbb{F}\}$$
   Then $\mathbb{F}^n = U_1 \oplus \cdots \oplus U_n$.

2. Recall the previous example with redundancy. That is not a direct sum. We can change $U_2$ though to get a direct sum:
   $$U_1 = \{x \in \mathbb{R}^3 : x_1 + x_2 + x_3 = 0\}$$
   $$U_2 = \{x \in \mathbb{R}^3 : x_1 = x_2 = x_3\}$$
   $$\mathbb{R}^3 = U_1 \oplus U_2$$
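For a quick numerical look at this example (a Python/numpy sketch, not needed for the theory): the dimension of a span can be measured as the rank of the matrix whose rows are the spanning vectors.

```python
import numpy as np

# Spanning vectors from (1): the first two span U1, the last two span U2.
vecs = np.array([[-1,  0, 1],
                 [ 1, -1, 0],
                 [ 1,  0, 0],
                 [ 0,  1, 0]], dtype=float)

# The rank of the matrix is the dimension of the span of its rows.
print(np.linalg.matrix_rank(vecs))       # 3: U1 + U2 = R^3
print(np.linalg.matrix_rank(vecs[:3]))   # 3: three vectors already suffice (redundancy)

# Direct-sum version: a basis of U1 stacked with a basis of U2' = {x1 = x2 = x3}
# has rank 3, so U1 ∩ U2' = {0} and R^3 = U1 ⊕ U2'.
stacked = np.array([[-1, 0, 1], [1, -1, 0], [1, 1, 1]], dtype=float)
print(np.linalg.matrix_rank(stacked))    # 3
```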

Notice in the second example that $U_1 \cap U_2 = \{0\}$. This leads us to the following proposition.

Proposition 8. Let $U, W$ be subspaces of $V$. Then $U + W$ is a direct sum $\iff$ $U \cap W = \{0\}$.

The first example makes it tempting to propose the same pairwise intersection property for any number of subspaces, but this is not true! [try to come up with an example, then see the book] Instead we have the following proposition, which we can use to prove Proposition 8.

Proposition 9. Suppose $U_1, \ldots, U_m$ are subspaces of $V$. Then $U_1 + \cdots + U_m$ is a direct sum $\iff$ the only way to write $0 = u_1 + \cdots + u_m$ with $u_k \in U_k$ is with $u_k = 0$ for all $k$.

Proof. The $\Rightarrow$ direction is clear. For the $\Leftarrow$ direction, let $v \in U_1 + \cdots + U_m$ and suppose we have two representations:

$$v = u_1 + \cdots + u_m = w_1 + \cdots + w_m$$

Then $0 = (u_1 - w_1) + \cdots + (u_m - w_m)$. Since $u_k - w_k \in U_k$, we must have $u_k = w_k$ for each $k$.

[try to prove Proposition 8 on your own using Proposition 9, then see the book]

2 Finite Dimensional Vector Spaces

2.A Span and Linear Independence

We saw last time that summing subspaces gives rise to new vector spaces. Now we keep track of each of the vectors that generate these spaces.

Definition 7 (Linear combination). $w$ is a linear combination of the vectors $v_1, \ldots, v_m \in V$ if $\exists\, a_1, \ldots, a_m \in \mathbb{F}$ such that $w = a_1 v_1 + \cdots + a_m v_m$.

Definition 8 (Span). The span of $v_1, \ldots, v_m \in V$ is

$$\text{span}(v_1, \ldots, v_m) = \{a_1 v_1 + \cdots + a_m v_m : a_k \in \mathbb{F}\ \forall k\}$$

Analogous to the sum of subspaces, we have the following result.

Proposition 10. $\text{span}(v_1, \ldots, v_m)$ is the smallest subspace of $V$ containing $v_1, \ldots, v_m$.

Nomenclature: If $\text{span}(v_1, \ldots, v_m) = V$ then we say that $v_1, \ldots, v_m$ spans $V$.

Definition 9 (Finite dimensional vector space). $V$ is finite dimensional if there exists a finite number of vectors $v_1, \ldots, v_m$ (a list) such that $\text{span}(v_1, \ldots, v_m) = V$.

Definition 10 (Infinite dimensional vector space). $V$ is infinite dimensional if it is not finite dimensional.

End of Lecture 2

Beginning of Lecture 3

Warmup: Is this a vector space?

1. $\{f \in C((0,1); \mathbb{R}) : f(x) = x^p \text{ for some } p > 0\}$
   Answer: No (all three subspace properties fail)

2. $\{f \in C(\mathbb{R}; \mathbb{R}) : f \text{ is periodic of period } \sigma\}$
   Answer: Yes (contains the zero function, closed under addition and scalar multiplication)

Examples:

1. $\mathcal{P}(\mathbb{F})$ is infinite dimensional [see the proof in the book].

2. $\mathcal{P}_m(\mathbb{F}) = \{p \in \mathcal{P}(\mathbb{F}) : \deg(p) \leq m\}$ is finite dimensional: $\text{span}(1, z, z^2, \ldots, z^m) = \mathcal{P}_m(\mathbb{F})$.

3. $U = \{f \in C(\mathbb{R}; \mathbb{R}) : f \text{ is periodic of period } n \text{ for some } n \in \mathbb{N}\}$ is infinite dimensional.

   Proof. Let $L = v_1, \ldots, v_m$ be an arbitrary list from $U$, so that each $v_k$ has period $n_k \in \mathbb{N}$. If $l = \text{lcm}(n_1, \ldots, n_m)$, then any linear combination from $L$ has period at most $l$. Therefore if $p$ is a prime number such that $p > l$, then $\sin(\frac{2\pi}{p} x) \notin \text{span}(L)$, but $\sin(\frac{2\pi}{p} x) \in U$, and thus $\text{span}(L) \neq U$. Since $L$ was arbitrary, we can conclude that no finite list will span $U$.

It will be very useful to record whether a list of vectors $v_1, \ldots, v_m$ has no redundancy in its span, just as we isolated sums of subspaces with no redundancy by defining the direct sum.

Definition 11 (Linear independence). $v_1, \ldots, v_m \in V$ are linearly independent if whenever $0 = a_1 v_1 + \cdots + a_m v_m$, then necessarily $a_1 = \cdots = a_m = 0$.

Definition 12 (Linear dependence). $v_1, \ldots, v_m \in V$ are linearly dependent if $\exists\, a_1, \ldots, a_m$ with at least one $a_k \neq 0$ and $0 = a_1 v_1 + \cdots + a_m v_m$.

The notions of linear independence and linear dependence are extremely important! Examples:

1. $(1, 0, 0), (0, 1, 0)$ are linearly independent in $\mathbb{F}^3$.

2. $1, z, \ldots, z^m$ are linearly independent in $\mathcal{P}(\mathbb{F})$ [Why? Use the fact that a polynomial of degree $m$ has at most $m$ distinct zeros].

3. Recall the example from the sum of subspaces:
   $(-1, 0, 1), (1, -1, 0), (1, 0, 0), (0, 1, 0)$ are linearly dependent
   $(-1, 0, 1), (1, -1, 0), (1, 1, 1)$ are linearly independent

The following is a very useful lemma...

Lemma 1 (Linear Dependence Lemma, LDL). If $v_1, \ldots, v_m \in V$ are linearly dependent and $v_1 \neq 0$, then $\exists\, k \in \{2, \ldots, m\}$ such that:

1. $v_k \in \text{span}(v_1, \ldots, v_{k-1})$
2. If $v_k$ is removed from $v_1, \ldots, v_m$, then the resulting span is the same as the original.

Proof. Let $L = v_1, \ldots, v_m$. For #1, by definition of linear dependence $\exists\, a_1, \ldots, a_m$ not all zero such that $0 = a_1 v_1 + \cdots + a_m v_m$. Let $k \in \{2, \ldots, m\}$ be the largest index such that $a_k \neq 0$. Then:

$$v_k = -\frac{a_1}{a_k} v_1 - \cdots - \frac{a_{k-1}}{a_k} v_{k-1} \qquad (2)$$

For #2, let $L' = L \setminus \{v_k\}$. Since $L' \subseteq L$, $\text{span}(L') \subseteq \text{span}(L)$. Let $u \in \text{span}(L)$. Then:

$$u = a_1 v_1 + \cdots + a_{k-1} v_{k-1} + a_k v_k + a_{k+1} v_{k+1} + \cdots + a_m v_m$$

Substitute (2) in for $v_k$ and the sum is now in terms of $L'$, i.e., $u \in \text{span}(L')$. Thus $\text{span}(L) \subseteq \text{span}(L')$.

Now for our first theorem.

Theorem 1. If $V = \text{span}(v_1, \ldots, v_n)$ and $w_1, \ldots, w_m$ are linearly independent in $V$, then $m \leq n$.

Proof. We will use the two lists and make successive reductions and additions using Lemma 1. Note: $w_1, \ldots, w_m$ linearly independent $\implies w_k \neq 0\ \forall k$ [why?].

Add & reduce: Since $V = \text{span}(v_1, \ldots, v_n)$ and $w_1 \in V$, the list $w_1, v_1, \ldots, v_n$ is linearly dependent. So Lemma 1 says at least one of the $v_k$ can be removed. Up to a relabeling, we may assume it is $v_n$. So $\text{span}(w_1, v_1, \ldots, v_{n-1})$ is the same as $\text{span}(v_1, \ldots, v_n)$.

Now we can repeat: $w_2 \in V = \text{span}(w_1, v_1, \ldots, v_{n-1})$, so $w_2, w_1, v_1, \ldots, v_{n-1}$ are linearly dependent. Use Lemma 1 again, which says that one of them can be removed. The question is which? If it is $w_1$, then $w_1 \in \text{span}(w_2)$, which is a contradiction; so it must be one of the $v_1, \ldots, v_{n-1}$. Without loss of generality (WLOG), we may assume it is $v_{n-1}$, and so $\text{span}(w_2, w_1, v_1, \ldots, v_{n-2}) = \text{span}(w_2, w_1, v_1, \ldots, v_{n-1}) = V$.

Keep repeating. At each stage one of the $v_k$ must be removed, else Lemma 1 implies that $w_j \in \text{span}(w_1, \ldots, w_{j-1})$, which is a contradiction. The process stops when we run out of $w$'s ($m \leq n$) or we would run out of $v$'s ($m > n$). If $m > n$, then after $n$ steps $\text{span}(w_n, \ldots, w_1) = V$, and thus $w_{n+1} \in \text{span}(w_1, \ldots, w_n)$; but this contradicts the linear independence of $w_1, \ldots, w_m$. Hence $m \leq n$.

Proposition 11. If $V$ is finite dimensional and $U$ is a subspace of $V$, then $U$ is finite dimensional.

End of Lecture 3
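As a quick numerical companion to example 3 above (a sketch): a list in $\mathbb{R}^n$ is linearly independent exactly when the matrix with those vectors as rows has rank equal to the length of the list.

```python
import numpy as np

def linearly_independent(vectors):
    """v_1, ..., v_m in R^n are linearly independent iff the m x n matrix
    with rows v_k has rank m."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# The two lists from example 3:
print(linearly_independent([[-1, 0, 1], [1, -1, 0], [1, 0, 0], [0, 1, 0]]))  # False
print(linearly_independent([[-1, 0, 1], [1, -1, 0], [1, 1, 1]]))             # True
```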

Beginning of Lecture 4

2.B Bases

span + linear independence = basis

Definition 13. $v_1, \ldots, v_n \in V$ is a basis of $V$ if $\text{span}(v_1, \ldots, v_n) = V$ and $v_1, \ldots, v_n$ are linearly independent.

Proposition 12. $v_1, \ldots, v_n \in V$ is a basis of $V$ if and only if $\forall v \in V$, $\exists!\ a_1, \ldots, a_n \in \mathbb{F}$ such that $v = a_1 v_1 + \cdots + a_n v_n$.

The notion of a basis is extremely important because it allows us to define a coordinate system for our vector spaces!

Examples:

1. $(1, 0, \ldots, 0), (0, 1, 0, \ldots, 0), \ldots, (0, \ldots, 0, 1)$ is the standard basis of $\mathbb{F}^n$.

2. $1, z, \ldots, z^m$ is the standard basis for $\mathcal{P}_m(\mathbb{F})$.

3. Let $\mathbb{Z}_N = \{0, 1, \ldots, N-1\}$ (with addition mod $N$) and let $V = \{f : \mathbb{Z}_N \to \mathbb{C}\}$. The standard (time side) basis for $V$ is $\delta_0, \ldots, \delta_{N-1}$, where

   $$\delta_k(n) = \begin{cases} 1 & n = k \\ 0 & n \neq k \end{cases}$$

   Indeed, $f(n) = \sum_{k=0}^{N-1} f(k) \delta_k(n)$. Fourier analysis tells us that another (frequency side) basis for $V$ is $e_0, \ldots, e_{N-1}$, where

   $$e_k(n) = \frac{1}{\sqrt{N}}\, e^{2\pi i k n / N}$$

   and

   $$f(n) = \sum_{k=0}^{N-1} a_k e_k(n), \quad \text{with} \quad a_k = \hat{f}(k) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} f(n)\, e^{-2\pi i k n / N}$$

   The coefficients $a_k$ define the function $\hat{f}(k)$, which is the Fourier transform of $f$.

If $v_1, \ldots, v_n$ spans $V$, it should have enough vectors to make a basis. Indeed:

Proposition 13. If $L = v_1, \ldots, v_n$ spans $V$, then $L$ can be reduced to a basis.

Proof. If $L$ is linearly independent, then we are done. So assume it is not. We will selectively throw away vectors using the LDL.

Step 1: If $v_1 = 0$, remove $v_1$.
Step 2: If $v_2 \in \text{span}(v_1)$, remove $v_2$.
...
Step k: If $v_k \in \text{span}(v_1, \ldots, v_{k-1})$, remove $v_k$.

Stop at Step $n$, getting a new list $L' = w_1, \ldots, w_m$. We still have $\text{span}(L') = V$, since we only discarded vectors that were in the span of the other vectors. We also have the property:

$$w_k \notin \text{span}(w_1, \ldots, w_{k-1}), \quad k > 1$$

Thus by the contrapositive of the LDL, $L'$ is linearly independent, and hence a basis.

Corollary 1. If $V$ is finite dimensional, it has a basis.

We just removed stuff from a spanning set to get a basis. We can also add stuff to a linearly independent set to get a basis.

Proposition 14. If $L = u_1, \ldots, u_m \in V$ is linearly independent, then $L$ can be extended to a basis.

Proof. Let $w_1, \ldots, w_n$ be a basis of $V$. Thus $L' = u_1, \ldots, u_m, w_1, \ldots, w_n$ spans $V$. Apply the procedure in the proof of Proposition 13, and note that none of the $u$'s get deleted [why?].
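Returning to the Fourier basis example from earlier in this lecture: that $e_0, \ldots, e_{N-1}$ is a basis with coefficients $a_k = \hat{f}(k)$ can be checked numerically. A small sketch (assuming the $1/\sqrt{N}$ normalization written above):

```python
import numpy as np

N = 8
n = np.arange(N)
# Frequency-side basis vectors e_k(n) = exp(2*pi*i*k*n/N) / sqrt(N),
# stored as the columns of E.
E = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

f = np.random.randn(N) + 1j * np.random.randn(N)  # a "function" on Z_N
a = E.conj().T @ f             # a_k = (1/sqrt(N)) sum_n f(n) exp(-2*pi*i*k*n/N)
print(np.allclose(E @ a, f))   # True: f = sum_k a_k e_k
print(np.allclose(E.conj().T @ E, np.eye(N)))  # True: the e_k are orthonormal
```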

Now we show that every subspace $U$ has a complementary subspace $W$ such that the two direct sum to $V$.

Proposition 15. Suppose $V$ is finite dimensional and that $U$ is a subspace of $V$. Then there exists another subspace $W$ such that $V = U \oplus W$.

Proof. $V$ finite dimensional $\implies U$ finite dimensional $\implies U$ has a basis $u_1, \ldots, u_m$. By the previous proposition we can extend $u_1, \ldots, u_m$ to a basis of $V$, say $L = u_1, \ldots, u_m, w_1, \ldots, w_n$. We show that $W = \text{span}(w_1, \ldots, w_n)$ is the answer. We need to show: (1) $V = U + W$, and (2) $U \cap W = \{0\}$.

Since $L$ is a basis, for any $v \in V$ we have:

$$v = \underbrace{a_1 u_1 + \cdots + a_m u_m}_{u \in U} + \underbrace{b_1 w_1 + \cdots + b_n w_n}_{w \in W} = u + w \in U + W$$

Now suppose that $v \in U \cap W$. Then

$$v = a_1 u_1 + \cdots + a_m u_m = b_1 w_1 + \cdots + b_n w_n$$

which implies

$$a_1 u_1 + \cdots + a_m u_m - b_1 w_1 - \cdots - b_n w_n = 0$$

But $L$ is linearly independent, so $a_1 = \cdots = a_m = b_1 = \cdots = b_n = 0$, i.e., $v = 0$.

2.C Dimension

Since a basis gives a unique representation of each $v \in V$, we should be able to say that the number of vectors in a basis is the dimension of $V$. But to do so, we need to make sure every basis of $V$ has the same number of vectors. Indeed:

Theorem 2. Any two bases of a finite dimensional vector space have the same length.

Proof. Let $B_1 = v_1, \ldots, v_m$ and $B_2 = w_1, \ldots, w_n$ be two bases of $V$. Since $B_1$ is linearly independent and $B_2$ spans $V$, Theorem 1 gives $m \leq n$. Flipping the roles of $B_1$ and $B_2$, we get $n \leq m$.

Definition 14. The dimension of $V$, $\dim V$, is the length of $B$ for any basis $B$.

Proposition 16. If $U$ is a subspace of $V$, then $\dim U \leq \dim V$.

Examples:

1. $\dim \mathbb{F}^n = n$

   Remark: $\dim \mathbb{R}^2 = 2$ and $\dim \mathbb{C} = 1$, even though $\mathbb{R}^2$ can be identified with $\mathbb{C}$. The scalar field $\mathbb{F}$ cannot be ignored when computing the dimension of $V$!

2. $\dim \mathcal{P}_m(\mathbb{F}) = m + 1$

Let $L = v_1, \ldots, v_n$. If $\dim V = n$, then we need only check that $L$ is linearly independent OR that $\text{span}(L) = V$ to conclude that $L$ is a basis for $V$.

Proposition 17. Suppose $\dim V = n$ and let $L = v_1, \ldots, v_n$.

1. If $L$ is linearly independent, then $L$ is a basis.
2. If $\text{span}(L) = V$, then $L$ is a basis.

Proof. Use Proposition 14 for (1) and Proposition 13 for (2).

End of Lecture 4

Beginning of Lecture 5

Theorem 3. Suppose $\dim V < \infty$ and $U_1, U_2$ are subspaces of $V$. Then

$$\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$$

Proof. The proof will use 3 objects:

1. $B = u_1, \ldots, u_m$ = basis of $U_1 \cap U_2$
2. $L_1 = v_1, \ldots, v_j$ = extension of $B$ so that $B \cup L_1$ = basis for $U_1$
3. $L_2 = w_1, \ldots, w_k$ = extension of $B$ so that $B \cup L_2$ = basis for $U_2$

We will show that $L = B \cup L_1 \cup L_2$ is a basis for $U_1 + U_2$. This will complete the proof, since then

$$\dim(U_1 + U_2) = m + j + k = (m + j) + (m + k) - m = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$$

Clearly $L$ spans $U_1 + U_2$, since $\text{span}(L)$ contains both $U_1$ and $U_2$. Now we show linear independence. Suppose:

$$\sum_i a_i u_i + \sum_l b_l v_l + \sum_p c_p w_p = 0 \qquad (3)$$

Then:

$$\sum_p c_p w_p = -\sum_i a_i u_i - \sum_l b_l v_l \in U_1$$

But $w_p \in U_2$ by assumption, so $\sum_p c_p w_p \in U_1 \cap U_2$, and hence

$$\sum_p c_p w_p = \sum_q d_q u_q \quad \text{for some } d_q$$

Now, $u_1, \ldots, u_m, w_1, \ldots, w_k$ is a basis for $U_2$. Thus:

$$\sum_p c_p w_p - \sum_q d_q u_q = 0 \implies c_p = 0,\ d_q = 0\ \forall p, q$$

Therefore (3) reduces to

$$\sum_i a_i u_i + \sum_l b_l v_l = 0$$

Repeat the previous argument, now using that $B \cup L_1$ is a basis of $U_1$, to conclude $a_i = 0$ and $b_l = 0$ for all $i, l$.
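Theorem 3 can be sanity-checked numerically on the example subspaces from Lecture 2. A sketch (with subspaces encoded as column spans; the nullity trick for the intersection is an assumption that both bases have independent columns):

```python
import numpy as np

# Columns of B1, B2 are bases of U1 = {x : x1+x2+x3 = 0} and U2 = {x : x3 = 0}.
B1 = np.array([[1, 1], [-1, 0], [0, -1]], dtype=float)
B2 = np.array([[1, 0], [0, 1], [0, 0]], dtype=float)

dim_U1 = np.linalg.matrix_rank(B1)                      # 2
dim_U2 = np.linalg.matrix_rank(B2)                      # 2
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))    # 3: U1 + U2 = R^3

# dim(U1 ∩ U2) equals the nullity of [B1, -B2] when both bases have
# independent columns: a null vector (a, b) encodes B1 a = B2 b ∈ U1 ∩ U2.
M = np.hstack([B1, -B2])
dim_cap = M.shape[1] - np.linalg.matrix_rank(M)         # 4 - 3 = 1

print(dim_sum == dim_U1 + dim_U2 - dim_cap)             # True: 3 == 2 + 2 - 1
```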

3 Linear Maps

$V, W$ are always vector spaces.

3.A The Vector Space of Linear Maps

Definition 15. Let $V, W$ be vector spaces over the same field $\mathbb{F}$. A function $T : V \to W$ is a linear map if it has the following two properties:

1. additivity: $T(u + v) = Tu + Tv$, $\forall u, v \in V$
2. homogeneity: $T(\lambda v) = \lambda(Tv)$, $\forall \lambda \in \mathbb{F}$, $v \in V$

The set of all linear maps from $V$ to $W$ is denoted $\mathcal{L}(V, W)$.

Note: You could say $T$ is linear if it preserves the vector space structures of $V$ and $W$.

Examples (read the ones in the book too!):

Fix a point $x_0 \in \mathbb{R}$. Evaluation at $x_0$ is a linear map:
$$T : C(\mathbb{R}; \mathbb{R}) \to \mathbb{R}, \qquad Tv = v(x_0)$$

The anti-derivative is a linear map:
$$T : C(\mathbb{R}; \mathbb{R}) \to C^1(\mathbb{R}; \mathbb{R}), \qquad (Tv)(x) = \int_0^x v(y)\, dy$$

Fix $b \in \mathbb{F}$. Define the forward shift operator as:
$$T : \mathbb{F}^\infty \to \mathbb{F}^\infty, \qquad T(v_1, v_2, v_3, \ldots) = (b, v_1, v_2, v_3, \ldots)$$
$T$ is a linear map if and only if $b = 0$ [why?].

Next we show that we can always find a linear map that takes whatever values we want on a basis, and furthermore, that it is completely determined by these values.

Theorem 4. Let $v_1, \ldots, v_n$ be a basis for $V$ and let $w_1, \ldots, w_n \in W$. Then there exists a unique linear map $T : V \to W$ such that $Tv_k = w_k$ for all $k$.

Proof. Define $T : V \to W$ as

$$T(a_1 v_1 + \cdots + a_n v_n) = a_1 w_1 + \cdots + a_n w_n$$

Clearly $Tv_k = w_k$ for all $k$. It is easy to see that $T$ is linear as well [see the book]. For uniqueness, let $S : V \to W$ be another linear map such that $Sv_k = w_k$ for all $k$. Then:

$$S(a_1 v_1 + \cdots + a_n v_n) = \sum_k a_k S v_k = \sum_k a_k w_k = T(a_1 v_1 + \cdots + a_n v_n)$$

The previous theorem is elementary, but highlights the fact that amongst all the maps from $V$ to $W$, linear maps are very special.

Theorem 5. $\mathcal{L}(V, W)$ is a vector space with the following vector addition and scalar multiplication operations:

vector addition: for $S, T \in \mathcal{L}(V, W)$, $(S + T)(v) = Sv + Tv$ $\forall v \in V$
scalar multiplication: for $T \in \mathcal{L}(V, W)$ and $\lambda \in \mathbb{F}$, $(\lambda T)(v) = \lambda(Tv)$ $\forall v \in V$

Theorem 6. $\mathcal{L}(V, W)$ is finite dimensional and $\dim \mathcal{L}(V, W) = (\dim V)(\dim W)$.

Proof. Suppose $\dim V = n$ and $\dim W = m$, and let

$$B_V = v_1, \ldots, v_n \qquad B_W = w_1, \ldots, w_m$$

be bases for $V$ and $W$ respectively. Define the linear transform $E_{p,q} : V \to W$ as

$$E_{p,q}(v_k) = \begin{cases} 0 & k \neq q \\ w_p & k = q \end{cases}, \qquad p = 1, \ldots, m, \quad q = 1, \ldots, n$$

By Theorem 4, this uniquely defines each $E_{p,q}$. We are going to show that these $mn$ transformations $\{E_{p,q}\}_{p,q}$ form a basis for $\mathcal{L}(V, W)$.

Let $T : V \to W$ be a linear map. For each $1 \leq k \leq n$, let $a_{1,k}, \ldots, a_{m,k}$ be the coordinates of $Tv_k$ in the basis $B_W$:

$$Tv_k = \sum_{p=1}^m a_{p,k} w_p$$

To prove spanning, we wish to show that:

$$T = \sum_{p=1}^m \sum_{q=1}^n a_{p,q} E_{p,q} \qquad (4)$$

Let $S$ be the linear map on the right hand side of (4). Then for each $k$,

$$Sv_k = \sum_p \sum_q a_{p,q} E_{p,q} v_k = \sum_p a_{p,k} w_p = Tv_k$$

So $S = T$, and since $T$ was arbitrary, $\{E_{p,q}\}_{p,q}$ spans $\mathcal{L}(V, W)$.

To prove linear independence, suppose that

$$S = \sum_p \sum_q a_{p,q} E_{p,q} = 0$$

Then $Sv_k = 0$ for each $k$, so

$$\sum_p a_{p,k} w_p = 0 \quad \forall k$$

But $w_1, \ldots, w_m$ are linearly independent, so $a_{p,k} = 0$ for all $p$ and $k$.

End of Lecture 5

Beginning of Lecture 6

Warmup: Let $U, W$ be 5-dimensional subspaces of $\mathbb{R}^9$. Can $U \cap W = \{0\}$? Answer: No. First note that $\dim\{0\} = 0$. Then, using Theorem 3 and $U + W \subseteq \mathbb{R}^9$, we have:

$$9 = \dim \mathbb{R}^9 \geq \dim(U + W) = \dim U + \dim W - \dim(U \cap W) = 10 - \dim(U \cap W)$$

and therefore $\dim(U \cap W) \geq 1$.

Proposition 18. If $T : V \to W$ is a linear map, then $T(0) = 0$.

Proof. $T(0) = T(0 + 0) = T(0) + T(0) \implies T(0) = 0$.

Usually the product of a vector from one vector space with a vector from another vector space is not well defined. However, for some pairs of linear maps, it is useful to define their product.

Definition 16. If $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$, then the product $ST \in \mathcal{L}(U, W)$ is

$$(ST)(u) = S(Tu), \quad \forall u \in U$$

Note: You must make sure the range of $T$ is in the domain of $S$!

Another note: Multiplication of linear maps is not commutative! In other words, in general $ST \neq TS$.

3.B Null Spaces and Ranges

For a linear map $T$, the collection of vectors that get mapped to zero and the collection of those that do not are very important.

Definition 17. For $T \in \mathcal{L}(V, W)$, the null space of $T$, $\text{null}\, T$, is:

$$\text{null}\, T = \{v \in V : Tv = 0\}$$

See examples in the book.

Proposition 19. For $T \in \mathcal{L}(V, W)$, $\text{null}\, T$ is a subspace of $V$.

Proof. Check that it contains zero, is closed under addition, and is closed under scalar multiplication:

$T(0) = 0$, so $0 \in \text{null}\, T$
$u, v \in \text{null}\, T$: then $T(u + v) = Tu + Tv = 0 + 0 = 0$
$u \in \text{null}\, T$, $\lambda \in \mathbb{F}$: then $T(\lambda u) = \lambda Tu = \lambda 0 = 0$

Definition 18. A function $T : V \to W$ is injective if $Tu = Tv$ implies $u = v$.

Proposition 20. Let $T \in \mathcal{L}(V, W)$. Then $T$ is injective $\iff \text{null}\, T = \{0\}$.

Proof. For the $\Rightarrow$ direction, we already know that $0 \in \text{null}\, T$. If $v \in \text{null}\, T$, then $T(v) = 0 = T(0)$, but since $T$ is injective, $v = 0$. For the $\Leftarrow$ direction, we have:

$$Tu = Tv \implies T(u - v) = 0 \implies u - v = 0 \implies u = v$$

Definition 19. For $T : V \to W$, the range of $T$ is:

$$\text{range}\, T = \{Tv : v \in V\}$$

Proposition 21. If $T \in \mathcal{L}(V, W)$, then $\text{range}\, T$ is a subspace of $W$.

Definition 20. A function $T : V \to W$ is surjective if $\text{range}\, T = W$.

Theorem 7 (Rank-Nullity Theorem). Suppose $V$ is finite dimensional and $T \in \mathcal{L}(V, W)$. Then $\text{range}\, T$ is finite dimensional and

$$\dim V = \dim(\text{null}\, T) + \dim(\text{range}\, T)$$

Proof. Let $u_1, \ldots, u_m$ be a basis for $\text{null}\, T$, and extend it to a basis $u_1, \ldots, u_m, v_1, \ldots, v_n$ of $V$. So we need to show that $\dim \text{range}\, T = n$. To do so we prove that $Tv_1, \ldots, Tv_n$ is a basis for $\text{range}\, T$.

Let $v \in V$ and write:

$$v = a_1 u_1 + \cdots + a_m u_m + b_1 v_1 + \cdots + b_n v_n \implies Tv = b_1 Tv_1 + \cdots + b_n Tv_n$$

Thus $\text{span}(Tv_1, \ldots, Tv_n) = \text{range}\, T$.

Now we show that $Tv_1, \ldots, Tv_n$ are linearly independent. Suppose

$$c_1 Tv_1 + \cdots + c_n Tv_n = 0 \implies T(c_1 v_1 + \cdots + c_n v_n) = 0 \implies c_1 v_1 + \cdots + c_n v_n \in \text{null}\, T$$
$$\implies c_1 v_1 + \cdots + c_n v_n = d_1 u_1 + \cdots + d_m u_m$$

But $v_1, \ldots, v_n, u_1, \ldots, u_m$ are linearly independent, so $c_j = d_k = 0$ for all $j, k$. Thus $Tv_1, \ldots, Tv_n$ are linearly independent.

Corollary 2. Suppose $V, W$ are finite dimensional and let $T \in \mathcal{L}(V, W)$. Then:

1. If $\dim V > \dim W$ then $T$ is not injective.
2. If $\dim V < \dim W$ then $T$ is not surjective.

Proof. Use the Rank-Nullity Theorem:

1. $\dim \text{null}\, T = \dim V - \dim \text{range}\, T \geq \dim V - \dim W > 0$
2. $\dim \text{range}\, T = \dim V - \dim \text{null}\, T \leq \dim V < \dim W$

End of Lecture 6
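In numpy terms, the Rank-Nullity Theorem looks as follows (a sketch; the null space dimension is counted from the singular values):

```python
import numpy as np

m, n = 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))      # a linear map T : R^5 -> R^3

rank = np.linalg.matrix_rank(A)      # dim range T
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.sum(s > 1e-10)      # dim null T = n - (# of nonzero singular values)

print(rank + nullity == n)           # True: dim V = dim null T + dim range T
print(nullity > 0)                   # True here, since n > m forces a nontrivial null space
```

The second printout is exactly the phenomenon exploited in the next lecture for underdetermined systems of equations.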

Beginning of Lecture 7

Very important applications: systems of equations.

Homogeneous systems of equations, $m$ equations and $n$ unknowns:

$$\sum_{k=1}^n a_{1,k}\, x_k = 0$$
$$\vdots$$
$$\sum_{k=1}^n a_{m,k}\, x_k = 0 \qquad (5)$$

where $a_{j,k} \in \mathbb{F}$ and $x = (x_1, \ldots, x_n) \in \mathbb{F}^n$. Can you solve all $m$ equations simultaneously? Clearly $x = 0$ is a solution. Are there any others? Define $T : \mathbb{F}^n \to \mathbb{F}^m$:

$$T(x_1, \ldots, x_n) = \left( \sum_{k=1}^n a_{1,k}\, x_k,\ \ldots,\ \sum_{k=1}^n a_{m,k}\, x_k \right) \qquad (6)$$

Note: $T(0) = 0$ is equivalent to saying $0$ is a solution of (5). Furthermore,

Nontrivial solutions exist for (5) $\iff \dim \text{null}\, T > 0$

But by the Rank-Nullity Theorem:

$$\dim \text{null}\, T > 0 \iff \dim \mathbb{F}^n - \dim \text{range}\, T > 0$$

Since $\dim \text{range}\, T \leq m$:

$$n > m \implies \text{nontrivial solutions exist for (5)}$$

Inhomogeneous systems of equations: Let $c_k \in \mathbb{F}$ and consider:

$$\sum_{k=1}^n a_{1,k}\, x_k = c_1$$
$$\vdots$$
$$\sum_{k=1}^n a_{m,k}\, x_k = c_m \qquad (7)$$

New question: can you say that for every $c = (c_1, \ldots, c_m) \in \mathbb{F}^m$ there exists at least one solution to (7)? Using the same $T$ as defined in (6), we have:

A solution exists for (7) for every $c \in \mathbb{F}^m$
$\iff \forall c \in \mathbb{F}^m$, $\exists\, x \in \mathbb{F}^n$ s.t. $T(x) = c$
$\iff \text{range}\, T = \mathbb{F}^m$
$\iff \dim \text{range}\, T = m$
$\iff \dim \mathbb{F}^n - \dim \text{null}\, T = m$
$\iff \dim \text{null}\, T = n - m$

Since $\dim \text{null}\, T \geq 0$, if $n < m$ then certainly there exists $c \in \mathbb{F}^m$ such that no solution exists for (7).

3.C Matrices

Definition 21. Let $T \in \mathcal{L}(V, W)$ and let $B_V = v_1, \ldots, v_n$ and $B_W = w_1, \ldots, w_m$ be bases of $V$ and $W$ respectively. The matrix of $T$ with respect to $B_V$ and $B_W$ is the $m \times n$ matrix $M(T; B_V, B_W)$ (or just $M(T)$ when $B_V$ and $B_W$ are clear) with entries $A_{j,k}$ defined by:

$$Tv_k = \sum_{j=1}^m A_{j,k}\, w_j, \qquad k = 1, \ldots, n$$

Note: Recall the proof of the fact that $\dim \mathcal{L}(V, W) = mn$. In that proof we were implicitly using the matrix representation of $T$.

Another note: Recall the idea that a basis $B_V = v_1, \ldots, v_n$ for a vector space $V$ gives coordinates for $V$. That is, for all $v \in V$, there exist $a_1, \ldots, a_n \in \mathbb{F}$ such that

$$v = a_1 v_1 + \cdots + a_n v_n$$

So the $n$-tuple $(a_1, \ldots, a_n) \in \mathbb{F}^n$ is a coordinate representation of the vector $v$ in the basis $B_V$. If we change the basis, say to $B_V'$, we change the coordinate representation of $v$, say to $(a_1', \ldots, a_n')$, but we do not change $v$. Similarly, the matrix $M(T; B_V, B_W)$ can be thought of as a coordinate representation of the linear map $T \in \mathcal{L}(V, W)$ with respect to the bases $B_V$ and $B_W$. If we change the bases, we get a new matrix representation of $T$, but we do not change $T$; it is still the same linear map. [we will come back to this with an example later]

Definition 22. $\mathbb{F}^{m,n}$ is the set of all $m \times n$ matrices with entries in $\mathbb{F}$.

Proposition 22. $\mathbb{F}^{m,n}$ is a vector space with the standard matrix addition and scalar multiplication.

Proposition 23. $\dim \mathbb{F}^{m,n} = mn$.

We will derive matrix multiplication from the desire that $M(ST) = M(S)M(T)$ for all $S, T$ for which $ST$ makes sense. Suppose $T : U \to V$, $S : V \to W$, and that $B_V = \{v_r\}_{r=1}^n$ is a basis for $V$, $B_W = \{w_j\}_{j=1}^m$ is a basis for $W$, and $B_U = \{u_k\}_{k=1}^p$ is a basis for $U$. Let $M(S) = A$ and $M(T) = C$. Then for each $1 \leq k \leq p$:

$$(ST)u_k = S\left( \sum_{r=1}^n C_{r,k}\, v_r \right) = \sum_{r=1}^n C_{r,k}\, S v_r = \sum_{r=1}^n C_{r,k} \sum_{j=1}^m A_{j,r}\, w_j = \sum_{j=1}^m \left( \sum_{r=1}^n A_{j,r} C_{r,k} \right) w_j$$

Thus we define matrix multiplication as:

$$(AC)_{j,k} = \sum_{r=1}^n A_{j,r}\, C_{r,k}$$

[read the rest of 3.C on matrix multiplication on your own]

End of Lecture 7
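The formula just derived can be checked directly against numpy's built-in product; a small sketch:

```python
import numpy as np

def matmul_from_definition(A, C):
    """(AC)_{j,k} = sum_r A_{j,r} C_{r,k}, exactly as derived above."""
    m, n = A.shape
    n2, p = C.shape
    assert n == n2, "range of T must lie in the domain of S"
    AC = np.zeros((m, p))
    for j in range(m):
        for k in range(p):
            AC[j, k] = sum(A[j, r] * C[r, k] for r in range(n))
    return AC

A = np.arange(6.0).reshape(2, 3)    # M(S), a 2x3 matrix
C = np.arange(12.0).reshape(3, 4)   # M(T), a 3x4 matrix
print(np.allclose(matmul_from_definition(A, C), A @ C))  # True
```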

Beginning of Lecture 8

3.D Invertibility and Isomorphic Vector Spaces

Definition 23. A linear map that is both injective and surjective is called bijective.

Definition 24. A linear map $T \in \mathcal{L}(V, W)$ is invertible if $\exists\, S \in \mathcal{L}(W, V)$ such that $ST = I_V$ and $TS = I_W$. Such a map $S$ is an inverse of $T$.

Proposition 24. An invertible linear map has a unique inverse.

Proof. Let $S_1$ and $S_2$ be two inverses of $T \in \mathcal{L}(V, W)$. Then:

$$S_1 = S_1 I = S_1 (T S_2) = (S_1 T) S_2 = I S_2 = S_2$$

Notation: Thus we can denote the inverse of $T$ as $T^{-1} \in \mathcal{L}(W, V)$.

Theorem 8. $T \in \mathcal{L}(V, W)$ is invertible $\iff$ $T$ is bijective.

Proof. For the $\Rightarrow$ direction: we need to show $T$ is injective and surjective. Suppose:

$$Tv_1 = Tv_2 \implies T^{-1} T v_1 = T^{-1} T v_2 \implies v_1 = v_2$$

since $T^{-1} T = I$. Thus $T$ is injective. Now suppose $w \in W$. Then:

$$T T^{-1} w = w \implies T(\underbrace{T^{-1} w}_{\in V}) = w$$

and so $T$ is surjective.

Now for the $\Leftarrow$ direction: we need to show $T$ is invertible. To do so we define a map $S \in \mathcal{L}(W, V)$ and show that $ST = I$ and $TS = I$. Define $S : W \to V$ as:

$$Sw := \text{the unique } v \in V \text{ s.t. } Tv = w \quad (\text{i.e., } Sw = v \iff Tv = w)$$

Note $S$ is well defined only because $T$ is bijective! By construction we have $TS = I$. To show that $ST = I$, let $v \in V$; then:

$$T(STv) = (TS)(Tv) = Tv \implies ST = I \quad \text{since } T \text{ is injective}$$

Now we need to show that $S \in \mathcal{L}(W, V)$. For additivity, let $w_1, w_2 \in W$:

$$T(Sw_1 + Sw_2) = TSw_1 + TSw_2 = w_1 + w_2 \implies S(w_1 + w_2) = Sw_1 + Sw_2 \quad \text{by definition of } S$$

For homogeneity use a similar argument:

$$T(\lambda Sw) = \lambda T(Sw) = \lambda w \implies S(\lambda w) = \lambda Sw$$

We now want to formalize the notion of when two vector spaces are essentially the same.

Definition 25. Two parts:

An isomorphism is an invertible linear map (i.e., a bijection).
$V, W$ are isomorphic if there exists $T \in \mathcal{L}(V, W)$ such that $T$ is an isomorphism. We write $V \cong W$.

Theorem 9. $V \cong W \iff \dim V = \dim W$.

Proof. For the $\Rightarrow$ direction, we know there is a bijection $T \in \mathcal{L}(V, W)$. Thus $\text{null}\, T = \{0\}$ and $\text{range}\, T = W$, so by the Rank-Nullity Theorem:

$$\dim V = \dim \text{null}\, T + \dim \text{range}\, T = 0 + \dim W = \dim W$$

For the $\Leftarrow$ direction, let $v_1, \ldots, v_n$ be a basis for $V$ and let $w_1, \ldots, w_n$ be a basis for $W$. Define $T : V \to W$ as:

$$T(c_1 v_1 + \cdots + c_n v_n) = c_1 w_1 + \cdots + c_n w_n$$

It is easy to see that $T \in \mathcal{L}(V, W)$, $T$ is injective, and $T$ is surjective. Thus $T$ defines an isomorphism.

Corollary 3. If $\dim V = n$, then $V \cong \mathbb{F}^n$.

Remark: This proves that we can think of the coordinates of any $v \in V$ in a basis $B_V = v_1, \ldots, v_n$ as a unique representation in $\mathbb{F}^n$, with the vector space structure of $V$ carried over to $\mathbb{F}^n$. Indeed, define the matrix of $v \in V$ with respect to the basis $B_V$ as the $n \times 1$ matrix:

$$M(v; B_V) := \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}, \quad \text{where } v = c_1 v_1 + \cdots + c_n v_n$$

The linear map $M(\cdot\,; B_V) : V \to \mathbb{F}^n$ (note $\mathbb{F}^{n,1} \cong \mathbb{F}^n$ trivially) is an isomorphism.

Corollary 4. If $\dim V = n$ and $\dim W = m$, then $\mathcal{L}(V, W) \cong \mathbb{F}^{m,n}$.

Proof. This follows easily since we already proved that $\dim \mathcal{L}(V, W) = (\dim V)(\dim W)$.

Proposition 25. Let $B_V = v_1, \ldots, v_n$ be a basis of $V$ and let $B_W = w_1, \ldots, w_m$ be a basis of $W$. Then $M(\cdot\,; B_V, B_W) : \mathcal{L}(V, W) \to \mathbb{F}^{m,n}$ is an isomorphism.

Proposition 26. Let $T \in \mathcal{L}(V, W)$, let $v \in V$, and let $B_V$ and $B_W$ be bases of $V$ and $W$ respectively. Then:

$$M(Tv; B_W) = M(T; B_V, B_W)\, M(v; B_V)$$

[See the book for the proofs of the previous two propositions.]

Example: Let $D \in \mathcal{L}(\mathcal{P}_3(\mathbb{R}), \mathcal{P}_2(\mathbb{R}))$ be the differentiation operator, defined by $Dp = p'$. Let's compute the matrix $M(D)$ of $D$ with respect to the standard bases $B_3 = 1, x, x^2, x^3$ of $\mathcal{P}_3(\mathbb{R})$ and $B_2 = 1, x, x^2$ of $\mathcal{P}_2(\mathbb{R})$. Since $Dx^n = (x^n)' = n x^{n-1}$, we have:

$$M(D; B_3, B_2) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}$$

End of Lecture 8
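A numerical sketch of this example: encode $p \in \mathcal{P}_3(\mathbb{R})$ by its coordinates in $B_3$ and check that multiplying by $M(D; B_3, B_2)$ differentiates (Proposition 26 in action).

```python
import numpy as np

# M(D; B3, B2) for the standard bases 1, x, x^2, x^3 and 1, x, x^2:
# column k holds the B2-coordinates of D(x^k) = k x^{k-1}.
M_D = np.array([[0., 1., 0., 0.],
                [0., 0., 2., 0.],
                [0., 0., 0., 3.]])

p = np.array([2., 1., 3., 5.])   # p(x) = 2 + x + 3x^2 + 5x^3 in B3-coordinates
print(M_D @ p)                   # [1. 6. 15.]: coordinates of p'(x) = 1 + 6x + 15x^2
```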

Beginning of Lecture 9

Recall the example from last lecture: $D \in \mathcal{L}(\mathcal{P}_3(\mathbb{R}), \mathcal{P}_2(\mathbb{R}))$ is the differentiation operator $Dp = p'$, and with respect to the standard bases $B_3 = 1, x, x^2, x^3$ of $\mathcal{P}_3(\mathbb{R})$ and $B_2 = 1, x, x^2$ of $\mathcal{P}_2(\mathbb{R})$, since $Dx^n = (x^n)' = n x^{n-1}$,

$$M(D; B_3, B_2) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}$$

Now let's consider a different basis for $\mathcal{P}_3(\mathbb{R})$, for example $B_3' = 1 + x,\ x + x^2,\ x^2 + x^3,\ x^3$. Compute:

$$D(1 + x) = 1 \qquad D(x + x^2) = 1 + 2x \qquad D(x^2 + x^3) = 2x + 3x^2 \qquad D(x^3) = 3x^2$$

Thus:

$$M(D; B_3', B_2) = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 2 & 2 & 0 \\ 0 & 0 & 3 & 3 \end{pmatrix}$$

Now consider the specific polynomial $p \in \mathcal{P}_3(\mathbb{R})$,

$$p(x) = 2 + x + 3x^2 + 5x^3 \implies p'(x) = 1 + 6x + 15x^2$$

The coordinates of $p$ in $B_3$ and $B_3'$, as well as of $p'$ in $B_2$, are:

$$M(p; B_3) = \begin{pmatrix} 2 \\ 1 \\ 3 \\ 5 \end{pmatrix} \qquad M(p; B_3') = \begin{pmatrix} 2 \\ -1 \\ 4 \\ 1 \end{pmatrix} \qquad M(p'; B_2) = \begin{pmatrix} 1 \\ 6 \\ 15 \end{pmatrix}$$

Computing $Dp$ in terms of matrix multiplication with respect to $B_3$ and $B_2$, we should get back $M(p'; B_2)$; indeed:

$$M(Dp; B_2) = M(D; B_3, B_2)\, M(p; B_3) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \\ 3 \\ 5 \end{pmatrix} = \begin{pmatrix} 1 \\ 6 \\ 15 \end{pmatrix} = M(p'; B_2)$$

We should also be able to compute $Dp$ in terms of matrix multiplication with respect to $B_3'$ and $B_2$ and still get back $M(p'; B_2)$; indeed:

$$M(Dp; B_2) = M(D; B_3', B_2)\, M(p; B_3') = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 2 & 2 & 0 \\ 0 & 0 & 3 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ -1 \\ 4 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 6 \\ 15 \end{pmatrix} = M(p'; B_2)$$

Remark: As we said earlier, the choice of bases determines the matrix representation $M(T; B_V, B_W)$ of the linear map $T \in \mathcal{L}(V, W)$. Later on we will prove important results about choosing the bases that give the nicest possible matrix representation of $T$.

Definition 26. A linear map $T \in \mathcal{L}(V, V) =: \mathcal{L}(V)$ is an operator.

Remark: For the matrix of an operator $T \in \mathcal{L}(V)$, we assume that we take the same basis $B_V$ for both the domain $V$ and the range $V$, and thus write it as $M(T; B_V) := M(T; B_V, B_V)$. Furthermore, $M(T; B_V) \in \mathbb{F}^{n,n}$, where $\dim V = n$, and so we see that $M(T; B_V)$ is a square matrix.

Theorem 10. Suppose $V$ is finite dimensional and $T \in \mathcal{L}(V)$. Then the following are equivalent:

1. $T$ is bijective (i.e., invertible)
2. $T$ is surjective
3. $T$ is injective

Remark: Not true if $V$ is infinite dimensional!

Proof. We prove this by proving that $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 1$. Clearly $1 \Rightarrow 2$, so that part is done. Now suppose $T$ is surjective, i.e., $\text{range}\, T = V$. Then by the Rank-Nullity Theorem:

$$\dim V = \dim \text{null}\, T + \dim \text{range}\, T = \dim \text{null}\, T + \dim V \implies \dim \text{null}\, T = 0 \implies \text{null}\, T = \{0\}$$

so $T$ is injective. That takes care of $2 \Rightarrow 3$. Now suppose $T$ is injective. Then $\text{null}\, T = \{0\}$ and $\dim \text{null}\, T = 0$. Once again use the Rank-Nullity Theorem:

$$\dim V = \dim \text{null}\, T + \dim \text{range}\, T = 0 + \dim \text{range}\, T \implies \text{range}\, T = V$$

Thus $T$ is surjective. Since we assumed it was injective, this means $T$ is bijective, and so we have $3 \Rightarrow 1$ and we are done.

4 Polynomials

Read on your own!

5 Eigenvalues, Eigenvectors, and Invariant Subspaces

Extremely important subject matter that is at the heart of Linear Algebra and is used all over mathematics, applied mathematics, data science, and more.

For example, consider a graph $G = (V, E)$ consisting of vertices $V$ and edges $E$; see Figure 1 (a graph with 6 vertices and 7 edges; image not reproduced here). You can encode this graph with a $6 \times 6$ matrix $L$ so that:

$$L_{j,k} = \begin{cases} \text{degree of vertex } k, & j = k \\ -1, & j \neq k \text{ and there is an edge between vertices } j \text{ and } k \\ 0, & \text{otherwise} \end{cases}$$

This matrix is called the graph Laplacian, and it encodes connectivity properties of the graph through its eigenvalues and eigenvectors. If the nodes in the graph represent webpages, and the edges represent hyperlinks between the webpages, then a similar type of matrix represents the world wide web, and its eigenvectors and eigenvalues form the foundation of how Google computes search results!

5.A Invariant Subspaces

At the beginning of the course we defined a structure on sets $V$ through the notion of a vector space. We then examined this structure further through subspaces, bases, and related notions. We then extended our study through linear maps between vector spaces, culminating in the Rank-Nullity Theorem and the notion of an isomorphism between two vector spaces with the same structure. Now we examine the structure of linear operators. The idea is that we will study the structure of $T \in \mathcal{L}(V)$ by finding nice structural decompositions of $V$ relative to $T$.

Thought experiment: Let $T \in \mathcal{L}(V)$ and suppose

$$V = U_1 \oplus \cdots \oplus U_m$$

To understand $T$, we would need only understand $T_k = T|_{U_k}$ for each $k = 1, \ldots, m$. However, $T_k$ may not be in $\mathcal{L}(U_k)$; indeed, $T_k$ might map $U_k$ to some other part of $V$. This is a problem, since we would like each restricted linear map $T_k$ to be an operator itself on the subspace $U_k$. This leads us to the following definition.

Definition 27. Suppose $T \in \mathcal{L}(V)$. A subspace $U$ of $V$ is invariant under $T$ if $Tu \in U$ for all $u \in U$, i.e., $T|_U \in \mathcal{L}(U)$.

Examples: $\{0\}$, $V$, $\text{null}\, T$, $\text{range}\, T$.

Must an operator have any invariant subspaces other than $\{0\}$ and $V$? We will see... We begin with the study of one dimensional invariant subspaces.

End of Lecture 9
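As a concrete sketch of the graph Laplacian example above (the exact edge set of Figure 1 is not recoverable here, so the graph below is a hypothetical one with 6 vertices and 7 edges):

```python
import numpy as np

# A hypothetical connected graph on 6 vertices with 7 edges.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6

L = np.zeros((n, n))
for j, k in edges:
    L[j, k] = L[k, j] = -1.0   # -1 for each edge
    L[j, j] += 1.0             # diagonal accumulates the vertex degrees
    L[k, k] += 1.0

eigvals = np.linalg.eigvalsh(L)  # L is symmetric, so eigvalsh applies
print(np.round(eigvals, 3))
# The smallest eigenvalue is 0 (eigenvector = the constant vector); the number
# of zero eigenvalues equals the number of connected components of the graph.
```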

Beginning of Lecture 10

Definition 28. Suppose $T \in \mathcal{L}(V)$. A scalar $\lambda \in \mathbb{F}$ is an eigenvalue of $T$ if there exists $v \in V$, $v \neq 0$, such that $Tv = \lambda v$. Such a $v$ is called an eigenvector of $T$.

Proposition 27. $T \in \mathcal{L}(V)$ has a one dimensional invariant subspace if and only if $T$ has an eigenvalue.

Proof. First suppose that $T$ has a one dimensional invariant subspace, which we denote as $U$. Since $\dim U = 1$, $U$ must be of the form:

$$U = \{\lambda v : \lambda \in \mathbb{F}\} = \text{span}(v)$$

for some $v \in V$, $v \neq 0$. Since $U$ is invariant under $T$, $Tv \in U$. Thus there exists $\lambda \in \mathbb{F}$ such that $Tv = \lambda v$.

Now suppose that $T$ has an eigenvalue $\lambda \in \mathbb{F}$. Then there exists $v \in V$, $v \neq 0$, such that $Tv = \lambda v$. Then $U = \text{span}(v)$ is an invariant subspace under $T$.

Proposition 28. Suppose $V$ is finite dimensional, $T \in \mathcal{L}(V)$, and $\lambda \in \mathbb{F}$. The following are equivalent:

1. $\lambda$ is an eigenvalue of $T$
2. $T - \lambda I$ is not injective
3. $T - \lambda I$ is not surjective
4. $T - \lambda I$ is not invertible

Example: The Laplacian on $V = \{f \in C^\infty([-\pi, \pi]; \mathbb{C}) : f(-\pi) = f(\pi)\}$ is defined as:

$$\Delta f = -\frac{d^2 f}{dx^2}$$

The eigenvalues and eigenvectors of $\Delta$ are:

$$\lambda = k^2, \quad k \in \mathbb{Z}, \qquad v(x) = e^{ikx} = \cos kx + i \sin kx$$

Notice the similarity between the eigenvectors of $\Delta$ and the Fourier transform defined earlier on $\mathbb{Z}_N$...

Theorem 11. Let $T \in \mathcal{L}(V)$. If $\lambda_1, \ldots, \lambda_m$ are distinct eigenvalues of $T$ and $v_1, \ldots, v_m$ are corresponding eigenvectors, then $v_1, \ldots, v_m$ are linearly independent.

Proof. Proof by contradiction. Suppose $v_1, \ldots, v_m$ are linearly dependent. Using the LDL, let $k$ be the smallest index such that

$$v_k \in \text{span}(v_1, \ldots, v_{k-1}) \qquad (8)$$

Thus

$$v_k = a_1 v_1 + \cdots + a_{k-1} v_{k-1}$$

Applying $T$ to this equation gives

$$\lambda_k v_k = a_1 \lambda_1 v_1 + \cdots + a_{k-1} \lambda_{k-1} v_{k-1}$$

while multiplying it by $\lambda_k$ gives

$$\lambda_k v_k = a_1 \lambda_k v_1 + \cdots + a_{k-1} \lambda_k v_{k-1}$$

Combining the two expansions of $\lambda_k v_k$ yields:

$$0 = a_1 (\lambda_k - \lambda_1) v_1 + \cdots + a_{k-1} (\lambda_k - \lambda_{k-1}) v_{k-1}$$

Since $k$ is the smallest index satisfying (8), $v_1, \ldots, v_{k-1}$ must be linearly independent. Thus $a_1 = \cdots = a_{k-1} = 0$, since $\lambda_k - \lambda_j \neq 0$ for all $j \neq k$. But then $v_k = 0$, which is a contradiction.

Corollary 5. Suppose $V$ is finite dimensional. Then $T \in \mathcal{L}(V)$ has at most $\dim V$ distinct eigenvalues.

End of Lecture 10

Beginning of Lecture 11

5.B Eigenvectors and Upper-Triangular Matrices

One of the main differences between operators and general linear maps is that we can take powers of operators! This will lead to many interesting results...

Definition 29. Let $T \in \mathcal{L}(V)$ and let $m \in \mathbb{Z}$, $m > 0$.

$T^m = T \cdots T$ (composition $m$ times)
$T^0 = I$
If $T$ is invertible, then $T^{-m} = (T^{-1})^m$

Definition 30. Suppose $T \in \mathcal{L}(V)$ and let $p \in \mathcal{P}(\mathbb{F})$ be given by $p(z) = a_0 + a_1 z + a_2 z^2 + \cdots + a_m z^m$. Then $p(T) \in \mathcal{L}(V)$ is defined as:

$$p(T) = a_0 I + a_1 T + a_2 T^2 + \cdots + a_m T^m$$

Theorem 12. Let $V \neq \{0\}$ be a finite dimensional vector space over $\mathbb{C}$. Then every $T \in \mathcal{L}(V)$ has an eigenvalue.

Proof. Suppose $\dim V = n > 0$ and choose $v \in V$, $v \neq 0$. Then the list

$$L = v, Tv, T^2 v, \ldots, T^n v$$

is linearly dependent, because its length is $n + 1$. Thus there exist $a_0, \ldots, a_n \in \mathbb{C}$, not all zero, such that

$$0 = a_0 v + a_1 Tv + a_2 T^2 v + \cdots + a_n T^n v$$

Consider the polynomial $p \in \mathcal{P}(\mathbb{C})$ with coefficients given by $a_0, \ldots, a_n$. By the Fundamental Theorem of Algebra,

$$p(z) = a_0 + a_1 z + \cdots + a_n z^n = c(z - \lambda_1) \cdots (z - \lambda_m), \quad z \in \mathbb{C},$$

where $m \leq n$, $c \in \mathbb{C}$, $c \neq 0$, and $\lambda_k \in \mathbb{C}$. Thus:

$$0 = a_0 v + a_1 Tv + \cdots + a_n T^n v = (a_0 I + a_1 T + \cdots + a_n T^n)v = c(T - \lambda_1 I) \cdots (T - \lambda_m I)v$$

Since $v \neq 0$, at least one of the operators $T - \lambda_k I$ must fail to be injective, which implies that $\lambda_k$ is an eigenvalue of $T$.

Example: Theorem 12 is not true for real vector spaces! Take for example the operator $T \in \mathcal{L}(\mathbb{F}^2)$ defined as:

$$T(w, z) = (-z, w)$$

If $\mathbb{F} = \mathbb{R}$, then $T$ is a counterclockwise rotation by 90 degrees. Since a 90 degree rotation of any nonzero $v \in \mathbb{R}^2$ will never equal a scalar multiple of itself, $T$ has no eigenvalues! On the other hand, if $\mathbb{F} = \mathbb{C}$, then by Theorem 12 $T$ must have at least one eigenvalue. Indeed it has two, $\lambda = i$ and $\lambda = -i$ [see the book, p. 135].

Recall we want a nice decomposition of $V$ as $V = U_1 \oplus \cdots \oplus U_m$, where each $U_k$ is an invariant subspace of $T$, so that to understand $T \in \mathcal{L}(V)$ we only need to understand $T|_{U_k}$. We will accomplish this by finding bases of $V$ that yield matrices $M(T)$ with lots of zeros.

As a first baby step, let $V$ be a complex vector space. Then $T \in \mathcal{L}(V)$ must have at least one eigenvalue $\lambda$ and a corresponding eigenvector $v$. Extend $v$ to a basis of $V$:

$$B_V = v, v_2, \ldots, v_n$$

Then:

$$M(T; B_V) = \begin{pmatrix} \lambda & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ 0 & * & \cdots & * \end{pmatrix} \qquad (9)$$

Furthermore, if we define $U_1 = \text{span}(v)$ and $U_2 = \text{span}(v_2, \ldots, v_n)$, then $V = U_1 \oplus U_2$. The subspace $U_1$ is a one dimensional invariant subspace of $V$ under $T$, but $U_2$ is not necessarily. It is a start though! Now let's try to do better...

Definition 31. A matrix is upper triangular if all the entries below the diagonal equal 0:

$$\begin{pmatrix} \lambda_1 & & * \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$

There is a useful connection between upper triangular matrices and invariant subspaces:

Proposition 29. Suppose $T \in \mathcal{L}(V)$ and $B_V = v_1, \ldots, v_n$ is a basis for $V$. Then the following are equivalent:

1. $M(T; B_V)$ is upper triangular
2. $Tv_k \in \text{span}(v_1, \ldots, v_k)$ for each $k = 1, \ldots, n$
3. $\text{span}(v_1, \ldots, v_k)$ is invariant under $T$ for each $k = 1, \ldots, n$

Proof. First we prove $1 \iff 2$. Let $A = M(T; B_V)$. Then by the definition of $A$ we have:

$$Tv_k = \sum_{j=1}^n A_{j,k}\, v_j$$

But then

$$Tv_k \in \text{span}(v_1, \ldots, v_k) \iff \underbrace{A_{j,k} = 0\ \forall j > k}_{A \text{ is upper triangular}}$$

Clearly $3 \Rightarrow 2$. We finish the proof by showing $2 \Rightarrow 3$. Fix $k$. From 2 we have:

$$Tv_1 \in \text{span}(v_1) \subseteq \text{span}(v_1, \ldots, v_k)$$
$$Tv_2 \in \text{span}(v_1, v_2) \subseteq \text{span}(v_1, \ldots, v_k)$$
$$\vdots$$
$$Tv_k \in \text{span}(v_1, \ldots, v_k)$$

Thus if $v \in \text{span}(v_1, \ldots, v_k)$, then $Tv \in \text{span}(v_1, \ldots, v_k)$ as well.

Now we can improve upon our baby step (9) above by showing that given an eigenvector $v$ with eigenvalue $\lambda$, we can extend it to a basis $B_V$ such that $M(T; B_V)$ is upper triangular.

Theorem 13. Suppose $V$ is a finite dimensional complex vector space and $T \in \mathcal{L}(V)$. Then there exists a basis $B_V$ such that $M(T; B_V)$ is upper triangular.

End of Lecture 11
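Numerically, the rotation example from earlier in this lecture reads as follows (a sketch): over $\mathbb{R}$ the matrix has no real eigenvalues, while numpy, which works over $\mathbb{C}$, returns $\pm i$.

```python
import numpy as np

# T(w, z) = (-z, w): counterclockwise rotation by 90 degrees, standard basis.
T = np.array([[0., -1.],
              [1.,  0.]])

eigvals, eigvecs = np.linalg.eig(T)
print(eigvals)                   # [0.+1.j  0.-1.j]: the eigenvalues i and -i
print(np.isreal(eigvals).any())  # False: no real eigenvalues, as claimed
```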

Beginning of Lecture 12

Warmup: Suppose $T \in \mathcal{L}(V)$ and $6I - 5T + T^2 = 0$. What are the possible eigenvalues of $T$? Answer: $6I - 5T + T^2 = 0$ implies that $(T - 2I)(T - 3I) = 0$. Now let $v \neq 0$ be an eigenvector of $T$ with eigenvalue $\lambda$. Then $0 = (T - 2I)(T - 3I)v = (\lambda - 2)(\lambda - 3)v$, which implies that $\lambda = 2$ or $\lambda = 3$.

Theorem 14. Suppose $V$ is a finite dimensional complex vector space and $T \in \mathcal{L}(V)$. Then there exists a basis $B_V$ such that $M(T; B_V)$ is upper triangular.

Proof. Induction on $\dim V$. Clearly the result is true when $\dim V = 1$. Now suppose the result is true for all complex vector spaces with dimension $n - 1$ or less, and let $V$ be a complex vector space with $\dim V = n$. We know that $T$ has an eigenvalue $\lambda$ (Theorem 12). Define:

$$U = \text{range}(T - \lambda I)$$

Since $T - \lambda I$ is not surjective, $\dim U < \dim V$. Furthermore, $U$ is invariant under $T$; indeed, let $u \in U$:

$$Tu = \underbrace{(T - \lambda I)u}_{\in U} + \underbrace{\lambda u}_{\in U}$$

Thus $\tilde{T} = T|_U \in \mathcal{L}(U)$, and we can apply the induction hypothesis to $\tilde{T}$ and $U$. In particular, there exists a basis $B_U = u_1, \ldots, u_m$ of $U$ such that $M(\tilde{T}; B_U)$ is upper triangular. Extend $B_U$ to a basis for $V$:

$$B_V = u_1, \ldots, u_m, v_1, \ldots, v_l, \qquad l + m = n$$

Since $M(\tilde{T}; B_U)$ is upper triangular, by Proposition 29 we have $Tu_k = \tilde{T} u_k \in \text{span}(u_1, \ldots, u_k)$ for all $k = 1, \ldots, m$. Furthermore,

$$Tv_j = \underbrace{(T - \lambda I)v_j}_{\in U} + \underbrace{\lambda v_j}_{\in \text{span}(v_j)} \in \text{span}(u_1, \ldots, u_m, v_j) \subseteq \text{span}(u_1, \ldots, u_m, v_1, \ldots, v_j)$$

Thus $T$ and $B_V$ satisfy condition 2 of Proposition 29, and so $M(T; B_V)$ is upper triangular.

Upper triangular matrices are very useful for determining whether $T \in \mathcal{L}(V)$ is invertible...

Proposition 30. Let $T \in \mathcal{L}(V)$ and let $B$ be a basis for which $M(T; B)$ is upper triangular. Then:

$T$ is invertible $\iff$ all diagonal entries of $M(T; B)$ are nonzero

Proof. Let $B = v_1, \ldots, v_n$ and let $A = M(T; B)$. It is easier to prove not (a) $\iff$ not (b).

First suppose $T$ is not invertible; we want to show that some diagonal entry of $M(T; B)$ is zero. $T$ not invertible $\implies T$ not injective $\implies$ there exists $v \neq 0$ such that $Tv = 0$. Expand $v$ in $B$:

$$v = \sum_{j=1}^n c_j v_j$$

Let $k$ be the index satisfying the following: $c_k \neq 0$ and $c_j = 0$ for all $j > k$ (note that possibly $k = n$). If $k = 1$, then $v = c_1 v_1 \implies Tv_1 = 0 \implies A_{1,1} = 0$. If $k > 1$, then:

$$v = \sum_{j=1}^k c_j v_j \implies 0 = Tv = \sum_{j=1}^k c_j Tv_j \implies Tv_k = -\sum_{j=1}^{k-1} \frac{c_j}{c_k}\, Tv_j \in \text{span}(v_1, \ldots, v_{k-1}),$$

where in the last step we used Proposition 29. But also by Proposition 29,

$$\sum_{j=1}^{k-1} b_j v_j = Tv_k = \sum_{j=1}^k A_{j,k}\, v_j$$

and since $B$ is a basis we must have $A_{k,k} = 0$.

Now suppose some entry on the diagonal of $M(T; B)$ is zero. If $A_{1,1} = 0$, then $Tv_1 = 0$ and so $T$ is not injective, and hence not invertible. If $A_{k,k} = 0$ for $k > 1$, then by Proposition 29 we have:

$$Tv_k = \sum_{j=1}^k A_{j,k}\, v_j = \sum_{j=1}^{k-1} A_{j,k}\, v_j \in \text{span}(v_1, \ldots, v_{k-1}) \qquad (10)$$

Consider now the linear map $\tilde{T} = T|_{\text{span}(v_1, \ldots, v_k)}$. By (10),

$$\tilde{T} \in \mathcal{L}(\text{span}(v_1, \ldots, v_k),\ \text{span}(v_1, \ldots, v_{k-1}))$$

Thus $\tilde{T}$ cannot be injective, since it maps a $k$-dimensional vector space to a $(k-1)$-dimensional vector space. In particular, there exists $v \in \text{span}(v_1, \ldots, v_k)$, $v \neq 0$, such that $\tilde{T}v = 0$. But then $Tv = 0$, and so $T$ is not injective, and hence not invertible.

End of Lecture 12

Beginning of Lecture 13

Not only can upper triangular matrices tell us when $T \in \mathcal{L}(V)$ is invertible, they also tell us precisely what the eigenvalues of $T$ are!

Proposition 31. Let $T \in \mathcal{L}(V)$ and suppose $A = M(T)$ is upper triangular. Then:

$\lambda$ is an eigenvalue of $T$ $\iff$ $\lambda = A_{k,k}$ for some $k$

Proof. Let $A = M(T)$ have diagonal entries $A_{k,k} = \lambda_k$:

$$A = M(T) = \begin{pmatrix} \lambda_1 & & * \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$

Let $\lambda \in \mathbb{F}$. Then

$$M(T - \lambda I) = \begin{pmatrix} \lambda_1 - \lambda & & * \\ & \ddots & \\ 0 & & \lambda_n - \lambda \end{pmatrix}$$

Thus by Proposition 30, $T - \lambda I$ is not invertible (and hence $\lambda$ is an eigenvalue) if and only if $\lambda = \lambda_k$ for some $k$.

5.C Eigenspaces and Diagonal Matrices

Definition 32. A diagonal matrix is a square matrix that is 0 everywhere except possibly on the diagonal:

$$\begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$

Note: If $M(T; B)$ is diagonal, then the diagonal entries are precisely the eigenvalues of $T$ (since diagonal matrices are upper triangular).

Definition 33. Suppose $T \in \mathcal{L}(V)$ and $\lambda \in \mathbb{F}$. The eigenspace of $T$ corresponding to $\lambda$ is:

$$E(\lambda, T) = \text{null}(T - \lambda I)$$

Note: $T|_{E(\lambda, T)} = \lambda I$ (so eigenspaces are invariant subspaces).

Proposition 32. Suppose $V$ is finite dimensional and $T \in \mathcal{L}(V)$. Suppose also that $\lambda_1, \ldots, \lambda_m$ are distinct eigenvalues of $T$. Then

$$E(\lambda_1, T) + \cdots + E(\lambda_m, T) \qquad (11)$$

is a direct sum, and furthermore

$$\dim E(\lambda_1, T) + \cdots + \dim E(\lambda_m, T) \leq \dim V$$

Proof. Let $u_k \in E(\lambda_k, T)$ and suppose that

$$u_1 + \cdots + u_m = 0$$

Since eigenvectors corresponding to distinct eigenvalues are linearly independent, each $u_k = 0$, and so (11) is a direct sum. Furthermore, by #16 of 2.C (HW1),

$$\dim E(\lambda_1, T) + \cdots + \dim E(\lambda_m, T) = \dim(E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)) \leq \dim V$$

End of Lecture 13

Beginning of Lecture 14

Definition 34. An operator $T \in \mathcal{L}(V)$ is diagonalizable if there exists a basis $B$ such that $M(T; B)$ is diagonal.

Proposition 33. Suppose $V$ is finite dimensional and $T \in \mathcal{L}(V)$. Then: $T$ is diagonalizable $\iff$ $V$ has a basis of eigenvectors of $T$.

Proof. An operator $T \in \mathcal{L}(V)$ has a diagonal matrix with respect to a basis $B = v_1, \ldots, v_n$ if and only if $Tv_k = \lambda_k v_k$ for each $k$.

Example: Not every operator is diagonalizable, even over complex vector spaces! Consider $T \in \mathcal{L}(\mathbb{C}^2)$ defined as:

$$T(w, z) = (z, 0)$$

Then $T^2 = 0$. Now let $v \neq 0$ be an eigenvector with eigenvalue $\lambda$. Then $0 = T^2 v = T(Tv) = \lambda Tv = \lambda^2 v$. Thus $\lambda = 0$. Even though $\dim E(0, T^2) = 2$, we see that $E(0, T) = \{(w, 0) : w \in \mathbb{C}\}$, and so $\dim E(0, T) = 1$. Therefore $V$ does not have a basis of eigenvectors of $T$, and so $T$ is not diagonalizable. We will address examples like this much later with the notion of generalized eigenvectors...

On the other hand, if we have enough distinct eigenvalues, we know that $T$ is diagonalizable:

Proposition 34. If $\dim V < \infty$ and $T \in \mathcal{L}(V)$ has $\dim V$ distinct eigenvalues, then $T$ is diagonalizable.

Proof. Let $\dim V = n$ and suppose $T \in \mathcal{L}(V)$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ with corresponding eigenvectors $v_1, \ldots, v_n$. The eigenvectors are linearly independent because they correspond to distinct eigenvalues, and thus they form a basis for $V$. Thus $T$ is diagonalizable.

Note: The converse is not true! Take any diagonal matrix with non-unique entries on the diagonal.

Finally, our main result for this chapter. Namely, if $T$ is diagonalizable, then we can achieve our stated goal of decomposing $V$ as $V = U_1 \oplus \cdots \oplus U_n$, where each $U_k$ is an invariant subspace of $V$ under $T$ and $\dim U_k = 1$.
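Before the main theorem, a quick numerical look at the non-diagonalizable example above (a sketch): for $T(w, z) = (z, 0)$ the only eigenvalue is 0 and its eigenspace is one dimensional, so no eigenvector basis exists.

```python
import numpy as np

T = np.array([[0., 1.],
              [0., 0.]])            # T(w, z) = (z, 0) in the standard basis

eigvals, eigvecs = np.linalg.eig(T)
print(eigvals)                      # [0. 0.]: the only eigenvalue is 0
# dim E(0, T) = nullity of (T - 0I) = 2 - rank(T) = 1 < 2 = dim C^2,
# so C^2 has no basis consisting of eigenvectors of T.
print(2 - np.linalg.matrix_rank(T)) # 1
```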

Theorem 15. Suppose $V$ is finite dimensional and $T \in \mathcal{L}(V)$. Let $\lambda_1, \ldots, \lambda_m$ denote the distinct eigenvalues of $T$. Then the following are equivalent:

1. $T$ is diagonalizable
2. $V$ has a basis consisting of eigenvectors of $T$
3. There exist one dimensional invariant subspaces $U_1, \ldots, U_n$ of $V$ such that $V = U_1 \oplus \cdots \oplus U_n$
4. $V = E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)$
5. $\dim V = \dim E(\lambda_1, T) + \cdots + \dim E(\lambda_m, T)$

Proof. Many parts. The plan is: $1 \iff 2$, $2 \iff 3$, and $2 \Rightarrow 4 \Rightarrow 5 \Rightarrow 2$.

$1 \iff 2$: Simply Proposition 33.

$2 \Rightarrow 3$: Let $B = v_1, \ldots, v_n$ be a basis of eigenvectors of $V$. Define $U_k = \text{span}(v_k)$. Then each $U_k$ is a 1-dimensional invariant subspace of $V$ under $T$, and since $B$ is a basis it is clear that $V = U_1 \oplus \cdots \oplus U_n$.

$3 \Rightarrow 2$: For each $k$, let $v_k \in U_k$, $v_k \neq 0$. Since $U_k$ is a 1-dimensional invariant subspace under $T$, each $v_k$ is an eigenvector of $T$. Furthermore, each $v \in V$ can be written uniquely as:

$$v = u_1 + \cdots + u_n, \quad \text{where } u_k \in U_k$$

and therefore $u_k = a_k v_k$ for some $a_k \in \mathbb{F}$. Thus $v_1, \ldots, v_n$ is a basis for $V$.

$2 \Rightarrow 4$: Let $v_1, \ldots, v_n$ be a basis of eigenvectors for $V$, and subdivide the list according to the distinct eigenvalues of $T$, so that

$$v_1^{(l)}, \ldots, v_{k_l}^{(l)} \text{ corresponds to } \lambda_l, \quad l = 1, \ldots, m$$

and $k_1 + k_2 + \cdots + k_m = n$. Then any $v \in V$ can be written as:

$$v = \sum_{l=1}^m \underbrace{\sum_{j=1}^{k_l} a_{j,l}\, v_j^{(l)}}_{\in E(\lambda_l, T)} \in E(\lambda_1, T) \oplus \cdots \oplus E(\lambda_m, T)$$

$4 \Rightarrow 5$: This is simply 2.C #16, which you did for homework!

$5 \Rightarrow 2$: Choose a basis for each $E(\lambda_l, T)$, say $v_1^{(l)}, \ldots, v_{k_l}^{(l)}$, where $k_1 + \cdots + k_m = n$ by assumption. Let $L$ be the list of all of these vectors concatenated together. To show $L$ is linearly independent, suppose:

$$\sum_{l=1}^m \underbrace{\sum_{j=1}^{k_l} a_{j,l}\, v_j^{(l)}}_{u_l \in E(\lambda_l, T)} = 0 \implies \sum_{l=1}^m u_l = 0$$

Each nonzero $u_l$ is an eigenvector of $T$ corresponding to a distinct eigenvalue $\lambda_l$; thus we must have $u_l = 0$ for all $l$, since eigenvectors for distinct eigenvalues are linearly independent. But then $a_{j,l} = 0$ for all $j = 1, \ldots, k_l$ and for each $l$, since $v_1^{(l)}, \ldots, v_{k_l}^{(l)}$ are linearly independent. Thus $L$ is a linearly independent list of $n = \dim V$ vectors, hence a basis of eigenvectors.

End of Lecture 14
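Theorem 15 in numpy terms (a sketch, on a symmetric matrix chosen for illustration): when an eigenvector basis exists, stacking it as the columns of $P$ diagonalizes $T$, i.e. $M(T) = P D P^{-1}$.

```python
import numpy as np

T = np.array([[2., 1.],
              [1., 2.]])                         # distinct eigenvalues 1 and 3

eigvals, P = np.linalg.eig(T)                    # columns of P are eigenvectors
D = np.diag(eigvals)
print(np.linalg.matrix_rank(P) == 2)             # True: the eigenvectors form a basis
print(np.allclose(T, P @ D @ np.linalg.inv(P)))  # True: T is diagonalizable
```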

Beginning of Lecture 15

6 Inner Product Spaces

We now introduce geometrical aspects such as length and angle into the setting of abstract vector spaces.

6.A Inner Products and Norms

We begin by looking at $\mathbb{R}^n$.

Definition 35. The norm of $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ is:

$$\|x\| = \sqrt{x_1^2 + \cdots + x_n^2}$$

Definition 36. For $x, y \in \mathbb{R}^n$, the dot product of $x$ and $y$ is:

$$x \cdot y = x_1 y_1 + \cdots + x_n y_n$$

Notice that $\|x\|^2 = x \cdot x$.

Example: In $\mathbb{R}^2$, $\|x\| = \sqrt{x_1^2 + x_2^2}$, which is just the length of $x$, and

$$x \cdot y = \|x\| \|y\| \cos\theta,$$

where $\theta$ is the angle between $x$ and $y$.

Properties of the dot product:

$x \cdot x \geq 0$ $\forall x \in \mathbb{R}^n$
$x \cdot x = \|x\|^2$, and $x \cdot x = 0 \iff x = 0$
$x \cdot y = y \cdot x$
Fix $y \in \mathbb{R}^n$. Then $T_y(x) = x \cdot y$ is a linear map, i.e., $T_y \in \mathcal{L}(\mathbb{R}^n, \mathbb{R})$.

Now we want to generalize the dot product to abstract vector spaces. First let's consider $\mathbb{C}^n$. Let $\lambda = a + ib \in \mathbb{C}$ be a complex scalar. Recall that:

$$|\lambda| = \sqrt{a^2 + b^2}, \qquad |\lambda|^2 = \lambda \bar{\lambda}$$

For $z \in \mathbb{C}^n$, the norm is defined as:

$$\|z\| = \sqrt{|z_1|^2 + \cdots + |z_n|^2}$$

Note that:

$$\|z\|^2 = z_1 \bar{z}_1 + \cdots + z_n \bar{z}_n$$

If we want $z \cdot z = \|z\|^2$, then the previous line implies that we should define the dot product on $\mathbb{C}^n$ as:

$$w \cdot z = w_1 \bar{z}_1 + \cdots + w_n \bar{z}_n$$

This leads us to the generalization of the dot product to abstract vector spaces:

Definition 37. An inner product on $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F}$ that has the following properties:

1. Positive Definiteness:
   $\langle v, v \rangle \geq 0$ $\forall v \in V$
   $\langle v, v \rangle = 0 \iff v = 0$

2. Linearity in the first argument:
   $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ $\forall u, v, w \in V$
   $\langle \lambda u, v \rangle = \lambda \langle u, v \rangle$ $\forall \lambda \in \mathbb{F}$, $u, v \in V$

3. Conjugate Symmetry: $\langle u, v \rangle = \overline{\langle v, u \rangle}$ $\forall u, v \in V$

Examples:

1. Euclidean inner product on $\mathbb{F}^n$. Let $w = (w_1, \ldots, w_n), z = (z_1, \ldots, z_n) \in \mathbb{F}^n$:

   $$\langle w, z \rangle = w_1 \bar{z}_1 + \cdots + w_n \bar{z}_n$$

2. Weighted Euclidean inner product on $\mathbb{F}^n$. Fix $c = (c_1, \ldots, c_n) \in \mathbb{R}^n$ with $c_k > 0$. Then for $w, z \in \mathbb{F}^n$,

   $$\langle w, z \rangle_c = c_1 w_1 \bar{z}_1 + \cdots + c_n w_n \bar{z}_n$$

3. Define $V = L^2(\mathbb{R})$ as:

   $$L^2(\mathbb{R}) = \{f : \mathbb{R} \to \mathbb{R} : \int_{-\infty}^{\infty} |f(x)|^2\, dx < \infty\}$$

   One can verify this is a real vector space. Since it is a subset of the vector space of all functions mapping $\mathbb{R}$ to $\mathbb{R}$, we need to show (1) it contains an additive identity (zero), (2) it is closed under addition, and (3) it is closed under scalar multiplication. Indeed, $f \equiv 0 \in L^2(\mathbb{R})$, and furthermore if $f \in L^2(\mathbb{R})$ then $\lambda f \in L^2(\mathbb{R})$ for any $\lambda \in \mathbb{R}$, since

   $$\int_{-\infty}^{\infty} |\lambda f(x)|^2\, dx = \lambda^2 \int_{-\infty}^{\infty} |f(x)|^2\, dx < \infty$$

   The trickiest part is that it is closed under addition; i.e., if $f, g \in L^2(\mathbb{R})$, then $f + g \in L^2(\mathbb{R})$. First note:

   $$\int_{-\infty}^{\infty} |f(x) + g(x)|^2\, dx = \underbrace{\int_{-\infty}^{\infty} |f(x)|^2\, dx}_{I} + \underbrace{\int_{-\infty}^{\infty} |g(x)|^2\, dx}_{II} + 2 \underbrace{\int_{-\infty}^{\infty} f(x)\, g(x)\, dx}_{III}$$

   Since $f, g \in L^2(\mathbb{R})$, we know that the first two terms are finite. That leaves the third term. That it is finite follows from what's known in Real Analysis as Hölder's Inequality. However, we can in fact prove it with more elementary tools. First let $a, b \in \mathbb{R}$ and note that:

   $$(a - b)^2 \geq 0 \implies a^2 - 2ab + b^2 \geq 0 \implies ab \leq \frac{a^2}{2} + \frac{b^2}{2}$$

   Now let $a = |f(x)|$ and $b = |g(x)|$. Then:

   $$\int_{-\infty}^{\infty} |f(x)\, g(x)|\, dx \leq \int_{-\infty}^{\infty} \frac{|f(x)|^2}{2} + \frac{|g(x)|^2}{2}\, dx < \infty \qquad (12)$$

   Thus $L^2(\mathbb{R})$ is a vector space! We can make it an inner product space by defining the inner product as:

   $$\langle f, g \rangle = \int_{-\infty}^{\infty} f(x)\, g(x)\, dx$$

   By what we just showed in (12), the inner product is well defined. Furthermore, it is easy to verify that all of the properties of an inner product hold, except for the definiteness property: $\langle f, f \rangle = 0 \iff f = 0$. This is a bit technical but follows from Real Analysis. Now $L^2(\mathbb{R})$ is what we call an inner product space. Any inner product can always be used to define the norm of a vector. In this case, we get the $L^2$-norm:

   $$\|f\|_2 = \langle f, f \rangle^{1/2} = \left( \int_{-\infty}^{\infty} |f(x)|^2\, dx \right)^{1/2}$$

   In fact $L^2(\mathbb{R})$ is a special inner product space called a Hilbert space, but we leave that for more advanced math classes...

Definition 38. An inner product space is a vector space $V$ along with an inner product on $V$.

Important Note: For the rest of chapter 6, we assume $V$ is an inner product space.

Definition 39. For $v \in V$ an inner product space, the norm of $v$ is:

$$\|v\| = \sqrt{\langle v, v \rangle}$$

End of Lecture 15
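The Euclidean inner product on $\mathbb{C}^n$ and its defining properties can be checked numerically (a sketch; note the conjugate on the second argument):

```python
import numpy as np

def inner(w, z):
    """Euclidean inner product on C^n: <w, z> = w_1 conj(z_1) + ... + w_n conj(z_n)."""
    return np.sum(w * np.conj(z))

rng = np.random.default_rng(1)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)

print(np.isclose(inner(w, z), np.conj(inner(z, w))))            # conjugate symmetry
print(inner(w, w).real >= 0, np.isclose(inner(w, w).imag, 0))   # positivity, <w,w> real
print(np.isclose(np.sqrt(inner(w, w).real), np.linalg.norm(w))) # ||w|| = sqrt(<w,w>)
```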

Beginning of Lecture 16

Proposition 35. The following basic properties hold:

1. For each fixed $u \in V$, the function $T_u(v) = \langle v, u \rangle$ is linear, i.e., $T_u \in \mathcal{L}(V, \mathbb{F})$.
2. $\langle 0, v \rangle = 0$ $\forall v \in V$
3. $\langle v, 0 \rangle = 0$ $\forall v \in V$
4. $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$ $\forall u, v, w \in V$
5. $\langle u, \lambda v \rangle = \bar{\lambda} \langle u, v \rangle$ $\forall \lambda \in \mathbb{F}$ and $u, v \in V$
6. $\|v\| = 0 \iff v = 0$
7. $\|\lambda v\| = |\lambda| \|v\|$ $\forall \lambda \in \mathbb{F}$

Proof. The proofs are all very simple and in the book.

Definition 40. $u, v \in V$ are orthogonal if $\langle u, v \rangle = 0$.

In plane geometry, two vectors are orthogonal if they are perpendicular; see Figure 2 (two perpendicular line segments; image not reproduced here).

It is easy to see the following two basic facts:

$0$ is orthogonal to every $v \in V$
$0$ is the only vector in $V$ orthogonal to itself

Theorem 16 (Pythagorean Theorem). Suppose $u$ and $v$ are orthogonal vectors in $V$. Then:

$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$
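A last numerical check of the Pythagorean Theorem (a sketch, with a pair of orthogonal vectors chosen for illustration):

```python
import numpy as np

u = np.array([1., 2., 0.])
v = np.array([-2., 1., 3.])
assert np.isclose(u @ v, 0)   # u and v are orthogonal

lhs = np.linalg.norm(u + v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
print(np.isclose(lhs, rhs))   # True: ||u + v||^2 = ||u||^2 + ||v||^2
```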


More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

Abstract Vector Spaces

Abstract Vector Spaces CHAPTER 1 Abstract Vector Spaces 1.1 Vector Spaces Let K be a field, i.e. a number system where you can add, subtract, multiply and divide. In this course we will take K to be R, C or Q. Definition 1.1.

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

2 Eigenvectors and Eigenvalues in abstract spaces.

2 Eigenvectors and Eigenvalues in abstract spaces. MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Throughout these notes, F denotes a field (often called the scalars in this context). 1 Definition of a vector space Definition 1.1. A F -vector space or simply a vector space

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017 Final A Math115A Nadja Hempel 03/23/2017 nadja@math.ucla.edu Name: UID: Problem Points Score 1 10 2 20 3 5 4 5 5 9 6 5 7 7 8 13 9 16 10 10 Total 100 1 2 Exercise 1. (10pt) Let T : V V be a linear transformation.

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

( 9x + 3y. y 3y = (λ 9)x 3x + y = λy 9x + 3y = 3λy 9x + (λ 9)x = λ(λ 9)x. (λ 2 10λ)x = 0

( 9x + 3y. y 3y = (λ 9)x 3x + y = λy 9x + 3y = 3λy 9x + (λ 9)x = λ(λ 9)x. (λ 2 10λ)x = 0 Math 46 (Lesieutre Practice final ( minutes December 9, 8 Problem Consider the matrix M ( 9 a Prove that there is a basis for R consisting of orthonormal eigenvectors for M This is just the spectral theorem:

More information

Math 113 Solutions: Homework 8. November 28, 2007

Math 113 Solutions: Homework 8. November 28, 2007 Math 113 Solutions: Homework 8 November 28, 27 3) Define c j = ja j, d j = 1 j b j, for j = 1, 2,, n Let c = c 1 c 2 c n and d = vectors in R n Then the Cauchy-Schwarz inequality on Euclidean n-space gives

More information

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1)

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Travis Schedler Tue, Oct 18, 2011 (version: Tue, Oct 18, 6:00 PM) Goals (2) Solving systems of equations

More information

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007 Honors Algebra II MATH251 Course Notes by Dr Eyal Goren McGill University Winter 2007 Last updated: April 4, 2014 c All rights reserved to the author, Eyal Goren, Department of Mathematics and Statistics,

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

Math 115A: Homework 5

Math 115A: Homework 5 Math 115A: Homework 5 1 Suppose U, V, and W are finite-dimensional vector spaces over a field F, and that are linear a) Prove ker ST ) ker T ) b) Prove nullst ) nullt ) c) Prove imst ) im S T : U V, S

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler.

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler. Math 453 Exam 3 Review The syllabus for Exam 3 is Chapter 6 (pages -2), Chapter 7 through page 37, and Chapter 8 through page 82 in Axler.. You should be sure to know precise definition of the terms we

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition 6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition

More information

Elements of linear algebra

Elements of linear algebra Elements of linear algebra Elements of linear algebra A vector space S is a set (numbers, vectors, functions) which has addition and scalar multiplication defined, so that the linear combination c 1 v

More information

Math 113 Practice Final Solutions

Math 113 Practice Final Solutions Math 113 Practice Final Solutions 1 There are 9 problems; attempt all of them. Problem 9 (i) is a regular problem, but 9(ii)-(iii) are bonus problems, and they are not part of your regular score. So do

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

What is on this week. 1 Vector spaces (continued) 1.1 Null space and Column Space of a matrix

What is on this week. 1 Vector spaces (continued) 1.1 Null space and Column Space of a matrix Professor Joana Amorim, jamorim@bu.edu What is on this week Vector spaces (continued). Null space and Column Space of a matrix............................. Null Space...........................................2

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini DM554 Linear and Integer Programming Lecture 9 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. More on 2. 3. 2 Resume Linear transformations and

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church.

Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church. Math 113 Homework 5 Solutions (Starred problems) Solutions by Guanyang Wang, with edits by Tom Church. Exercise 5.C.1 Suppose T L(V ) is diagonalizable. Prove that V = null T range T. Proof. Let v 1,...,

More information

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V.

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V. MA322 Sathaye Final Preparations Spring 2017 The final MA 322 exams will be given as described in the course web site (following the Registrar s listing. You should check and verify that you do not have

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S 1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

More information

Definitions for Quizzes

Definitions for Quizzes Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Math 346 Notes on Linear Algebra

Math 346 Notes on Linear Algebra Math 346 Notes on Linear Algebra Ethan Akin Mathematics Department Fall, 2014 1 Vector Spaces Anton Chapter 4, Section 4.1 You should recall the definition of a vector as an object with magnitude and direction

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

ABSTRACT VECTOR SPACES AND THE CONCEPT OF ISOMORPHISM. Executive summary

ABSTRACT VECTOR SPACES AND THE CONCEPT OF ISOMORPHISM. Executive summary ABSTRACT VECTOR SPACES AND THE CONCEPT OF ISOMORPHISM MATH 196, SECTION 57 (VIPUL NAIK) Corresponding material in the book: Sections 4.1 and 4.2. General stuff... Executive summary (1) There is an abstract

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

1 Basics of vector space

1 Basics of vector space Linear Algebra- Review And Beyond Lecture 1 In this lecture, we will talk about the most basic and important concept of linear algebra vector space. After the basics of vector space, I will introduce dual

More information