Linear transformations: the basics
- Ambrose Chambers
The notion of a linear transformation is much older than matrix notation. Indeed, matrix notation was developed (essentially) for the needs of calculation with linear transformations over finite-dimensional vector spaces. The details of this are in these notes. The dear reader would be advised to take to heart, when getting to the calculational parts of these notes, the slogan for matrix multiplication: column-by-column-on-the-right. That is, if A is a matrix with k columns, listed as C_1, C_2, ..., C_k (in order), and A' is any matrix over the same field whose jth column is (a_1, ..., a_k)^T, then the jth column of AA' is a_1 C_1 + ... + a_k C_k. (I mean, seriously, get used to this.) She would also be well advised to forget until further notice that matrix multiplication is associative. An easy consequence of the column-by-column rule is the distributive law A(A' + A'') = AA' + AA'', whenever it makes sense; just look at the jth columns of both sides.

A linear transformation is a homomorphism between vector spaces (possibly the same one). The algebraic structure of a vector space over the field F consists of vector addition and multiplication of vectors by the scalars in F. Hence we have

DEFINITION 1 (LINEAR TRANSFORMATION). Suppose that V and W are vector spaces over the same field F. A function T : V → W is an F-linear transformation if

1. T(v_1 +_V v_2) = T v_1 +_W T v_2 for all v_1, v_2 ∈ V (T preserves, or respects, vector addition), and

2. T(α ·_{F,V} v) = α ·_{F,W} T v for any v ∈ V and α ∈ F (T preserves, or respects, scalar multiplication).

(As usual, we will leave off the subscripts whenever we can, which is just about always. I put them on in the definition just to emphasize where the operations were taking place. We will also leave off the F if it is understood. Please do not write "T is closed under addition" or "T is closed under scalar multiplication"; these phrases make no sense.)
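The column-by-column-on-the-right slogan can be checked numerically. A minimal sketch with numpy (the matrices here are arbitrary illustrative choices, not from the notes): the jth column of AA' is rebuilt by hand as a combination of A's columns weighted by the jth column of A'.

```python
import numpy as np

# Column-by-column-on-the-right: the j-th column of A @ B is the linear
# combination of A's columns weighted by the j-th column of B.
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # 3x2: columns C1, C2
B = np.array([[7., 8.],
              [9., 10.]])         # 2x2

product = A @ B

# Rebuild each column of the product from A's columns by hand.
rebuilt = np.column_stack(
    [sum(B[i, j] * A[:, i] for i in range(B.shape[0]))
     for j in range(B.shape[1])]
)

assert np.allclose(product, rebuilt)
print("column-by-column rule verified")
```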
In case V = W, a linear transformation from V to itself is usually called a linear operator on V. In case W = F, a linear transformation from V to F is called a linear functional on V. In case T is not only a linear transformation but also a bijection (a one-to-one and onto function) from V to W, it is an isomorphism of vector spaces.

The most basic kind of example of a linear transformation is this: Suppose that V = F^n and W = F^m for some field F and positive integers m and n. Let A be any m × n matrix with entries from F, and let T_A : F^n → F^m be given
by T_A v = A v (matrix multiplication). T_A preserves addition because matrix multiplication is distributive (A(v_1 + v_2) = A v_1 + A v_2). It preserves scalar multiplication since A(α v) = α(A v) for any α ∈ F and v ∈ F^n.

Here are some other examples. EXAMPLES.

1. For any V, W (vector spaces over F), 0_{V,W} (or just 0) is the zero transformation: 0_{V,W} v = 0_W for any v ∈ V. It is obviously linear.

2. For any V, the identity operator on V, denoted I_V (or just I), given by I_V v = v for every v ∈ V, is clearly linear. So is αI for any scalar α, given by (αI) v = α v; in case α ≠ 0, it is an isomorphism of V with itself (a so-called automorphism of V).

3. If V = M_n(F) and T = tr, the map that takes an n × n matrix A to its trace, the sum of its diagonal elements, then T is a linear functional on V. (So tr(A) = Σ_{j=1}^n a_{j,j}.)

4. If V = M_{m,n}(F) and W = M_{n,m}(F), the map A ↦ A^T is plainly linear, and an isomorphism.

5. Say V = M_n(C), where C is the complex numbers. If T A = Ā^T (the conjugate transpose, often denoted A*), then T easily preserves addition. But if we regard V as a vector space over C, it does not preserve scalar multiplication and is thus not a linear operator. (E.g., T(iI) = -iI, while iT(I) = iI.) But if we regard V as a real vector space, this map is linear, since trivially (rA)^* = r(Ā^T) for any real r.

6. Here's a map from R^2 to itself that preserves scalar multiplication but not addition. Any v ∈ R^2 that is not on the y-axis is uniquely representable as (a, ma)^T, where m is the slope of the line through the vector and the origin. Suppose we let T v = m v in case v is not on the y-axis, and T v = 0 if v is on the y-axis. It is not hard to see that T preserves scalar multiplication. But T((1, 0)^T + (0, 1)^T) = T(1, 1)^T = (1, 1)^T, while T(1, 0)^T + T(0, 1)^T = 0 + 0 = 0.

7. If V is any finite-dimensional vector space over F and B = (v_1, ..., v_n) is an ordered basis of V, then the map v ↦ [v]_B is an isomorphism of V with F^n.
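The two linearity conditions for T_A and for the trace can be spot-checked numerically. A sketch with numpy (random test vectors; the specific sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # T_A : R^2 -> R^3, T_A(v) = A v
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)
alpha = 2.5

# T_A preserves addition and scalar multiplication:
assert np.allclose(A @ (v1 + v2), A @ v1 + A @ v2)
assert np.allclose(A @ (alpha * v1), alpha * (A @ v1))

# The trace is a linear functional on M_n(F):
X, Y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.isclose(np.trace(X + Y), np.trace(X) + np.trace(Y))
assert np.isclose(np.trace(alpha * X), alpha * np.trace(X))
print("all linearity checks passed")
```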
We have already observed that [u + v]_B = [u]_B + [v]_B and [α v]_B = α[v]_B for any u, v ∈ V and α ∈ F; that is, the map is linear. It is clearly one-to-one, since if [u]_B = [v]_B = (a_1, ..., a_n)^T, then u = v = a_1 v_1 + ... + a_n v_n.
And it's onto, since if (b_1, ..., b_n)^T ∈ F^n, then v = b_1 v_1 + ... + b_n v_n ∈ V maps to this vector in F^n. (So, in a definite sense, the only n-dimensional vector space over F is F^n; but what the isomorphism is depends just as much on B as it does on V.)

8. Now, a couple of simple geometrical examples. Fix a line l through the origin in V = R^2. Let l⊥ be the line through 0 perpendicular to l. Evidently, any vector v ∈ V can be decomposed uniquely into Proj_l v + Proj_{l⊥} v, where Proj_l v is on l and Proj_{l⊥} v is on l⊥. It is easy to see that the map v ↦ Proj_l v is linear; it is called the orthogonal projection of v to l. Also, the map T_l sending v to Proj_l v - Proj_{l⊥} v is the reflection of v across l. It is also a linear map. (The notation T_l for this is fairly common, but not quite standard.)

9. Suppose that V is the vector space of differentiable functions on the reals. Let W be the (larger) space of functions which are derivatives of some differentiable function, and let D : V → W be given by Df = f′; as is well known, (f + g)′ = f′ + g′ and (rf)′ = r(f′) for any f, g ∈ V and r ∈ R, so D is linear. If we instead let V = C^∞(R), the space of functions with derivatives of all orders, and let D still be differentiation, then D is a linear operator on V.

10. Let V be the vector space of continuous functions on the reals (or some interval containing a) and a ∈ R. The indefinite integral operator on V, defined by T(f(x)) = ∫_a^x f(t) dt, is patently linear, by simple observations from calculus. (Indeed, the facts that ∫_a^x (f_1(t) + f_2(t)) dt = ∫_a^x f_1(t) dt + ∫_a^x f_2(t) dt and ∫_a^x r f(t) dt = r ∫_a^x f(t) dt may have been described to you as the linearity properties of the integral.)

11. A variation on the last example is when V is the space of continuous functions on [a, b] for some fixed a < b in R, and T : V → R takes f ∈ V to ∫_a^b f(t) dt; this is a linear functional on V.

As I hope you see, linear transformations show up all over the place.
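The coordinate isomorphism v ↦ [v]_B of example 7 amounts, computationally, to solving a linear system whose coefficient matrix has the basis vectors as columns. A small numpy sketch (the concrete basis of R^2 here is an illustrative choice):

```python
import numpy as np

# Ordered basis B = (v1, v2) of R^2, stored as the columns of M.
# The coordinate map v |-> [v]_B is computed by solving M x = v.
M = np.array([[3., -2.],
              [2.,  3.]])
v = np.array([1., 5.])

coords = np.linalg.solve(M, v)      # [v]_B
assert np.allclose(M @ coords, v)   # reconstruct v from its coordinates
print(coords)                       # here v = 1*v1 + 1*v2
```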
(As I hope somebody noticed, there are a lot of ways to say "something is obvious" in English, and people complain about mathematicians using more than one term for the same idea.) Here are some of the most clear properties of linear transformations.

OBSERVATIONS. Suppose that T : V → W is a linear transformation (where V and W are vector spaces over F). Then T 0_V = 0_W. Also, for any v_1, ..., v_k ∈ V and scalars α_1, ..., α_k ∈ F, we have that T(α_1 v_1 + ... + α_k v_k) = α_1 T v_1 + ... + α_k T v_k. (That is, T preserves dependence relations, and hence span. If V′ ⊆ V is Span(S), then Span{T v : v ∈ S} is none other than T(V′).)
On the other hand, a linear transformation doesn't always preserve independence. It does just in case the transformation is one-to-one. One direction of this is blatant: suppose the set {v_i : i ∈ I} ⊆ V is independent and T : V → W is linear and one-to-one. The claim is that {T v_i : i ∈ I} is also independent. (I will leave this as an exercise, basically in notation.) The other direction is also easy: if T is not one-to-one, say v ≠ w but T v = T w, then the singleton {v - w} is independent, but {T(v - w)} = {T v - T w} = {0_W} is not.

In an earlier set of notes (on definitions and examples of vector spaces) I went through the details to show that F^X (the set of all functions from X to F) is a vector space over F for any nonempty set X; the operations are defined pointwise. In case W is a vector space over F and X is nonempty, essentially the same work shows that W^X is also a vector space over F, again with operations defined pointwise. (That is, (f + g)(x) = f(x) +_W g(x) and (αf)(x) = α ·_{F,W} f(x).) Now consider the case where X = V is also a vector space over F. Consider the subset of W^V comprising the linear transformations from V to W; we will call this set Hom_F(V, W), although L(V, W) is also used. (The subscript may be omitted if the field is understood.)

I claim that Hom_F(V, W) is a subspace of W^V. That is, if T, T_1 and T_2 are linear transformations from V to W and α is a scalar, then T_1 + T_2 and αT are also linear. It is clear that the zero transformation is in Hom_F(V, W) and acts as the zero vector there. For the first, for any v_1, v_2 ∈ V, (T_1 + T_2)(v_1 + v_2) = T_1(v_1 + v_2) + T_2(v_1 + v_2) by the definition of T_1 + T_2. Because both T_1 and T_2 preserve addition, this is (T_1 v_1 + T_1 v_2) + (T_2 v_1 + T_2 v_2); using commutativity and associativity of addition in W, we see that this is (T_1 v_1 + T_2 v_1) + (T_1 v_2 + T_2 v_2), and by the definition of T_1 + T_2 again, this equals (T_1 + T_2) v_1 + (T_1 + T_2) v_2; hence T_1 + T_2 preserves addition.
Now for any v ∈ V and scalar α, (T_1 + T_2)(α v) = T_1(α v) + T_2(α v) by the definition of T_1 + T_2. As T_1 and T_2 preserve scalar multiplication, this is (α T_1 v) + (α T_2 v); by one of the distributive laws in W, this becomes α(T_1 v + T_2 v), and by the definition of T_1 + T_2 again, we see this is α[(T_1 + T_2) v], so T_1 + T_2 preserves scalar multiplication.

For αT, consider (αT)(v_1 + v_2) for v_1, v_2 ∈ V. By definition of αT, this is α(T(v_1 + v_2)), which equals α(T v_1 + T v_2) as T preserves addition. By the same distributive law as above, this is α(T v_1) + α(T v_2), and by definition of αT, this is (αT) v_1 + (αT) v_2, so αT preserves vector addition. Finally, if v ∈ V and β ∈ F, then (αT)(β v) = α(T(β v)) by definition of αT; this equals α(β T v) as T preserves scalar multiplication, which equals (αβ) T v by the associative law for scalar multiplication in W. By commutativity of multiplication in F, this is (βα) T v; by the associative law again, this is β(α T v), and by the definition of αT, this is β((αT) v), showing that αT preserves scalar multiplication.

[I did this in gruesome detail to show exactly how the definitions come in, and to emphasize how modular this verification is. Note that we use the vector space properties of W in several places, but never those of V.]

Now if U, V, and W are all vector spaces over F, and T_1 : U → V and
T_2 : V → W are any functions, it makes sense to talk about the composition T_2 ∘ T_1 : U → W defined by (T_2 ∘ T_1) u = T_2(T_1 u) for every u ∈ U. In case T_1 and T_2 are linear, it is routine to see that so is T_2 ∘ T_1; it's like the verifications above, but even easier. I will skip the details here. We usually write T_2 T_1 instead of T_2 ∘ T_1. It is clear that if T ∈ Hom_F(V, W), then for any F-spaces U and X, T 0_{U,V} = 0_{U,W} and 0_{W,X} T = 0_{V,X}. Also, I_W T = T I_V = T.

Recall that, whenever it makes sense to compose three functions, the composition is associative. For any sets U, V, W and X, and functions f : U → V, g : V → W and h : W → X, we have h ∘ (g ∘ f) = (h ∘ g) ∘ f, just from the definition. In particular, this holds for linear transformations between F-spaces. We use the following standard abbreviations in case T ∈ Hom_F(V, V) is a linear operator on V: T^0 = I_V, T^1 = T, T^2 = T T, T^3 = T T T, and so on (T^{n+1} = T^n T). With this, for any linear operator T on V and polynomial p(X) ∈ F[X], say p(X) = Σ_{j=0}^n a_j X^j, it makes sense to define p(T) = Σ_{j=0}^n a_j T^j; it will be a linear operator on V.

A word on the distributive laws; there are two, as composition of transformations is not usually commutative. (E.g., if A and B are noncommuting matrices, T_A T_B ≠ T_B T_A.) Consider the case where T_1 : U → V, and T_2, T_3 are any functions from V to W; here W (at least) is assumed to be a vector space. Then for any u ∈ U, [(T_2 + T_3) T_1] u = (T_2 + T_3)(T_1 u) by definition of composition; this is T_2(T_1 u) + T_3(T_1 u) by definition of T_2 + T_3, and again by definition of composition this is (T_2 T_1) u + (T_3 T_1) u = [T_2 T_1 + T_3 T_1] u by the definition of sums of functions. Thus (T_2 + T_3) T_1 = T_2 T_1 + T_3 T_1, and this verification does not use linearity of these functions at all. Now suppose that T_1, T_2 : U → V and T_3 : V → W and T_3 preserves addition. Then for any u ∈ U, [T_3(T_1 + T_2)] u = T_3[(T_1 + T_2) u] = T_3(T_1 u + T_2 u) by the definition of composition and addition of functions.
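Noncommutativity of composition is easy to witness with matrices. A quick numpy sketch (A and B are arbitrary noncommuting choices), which also spot-checks one distributive law:

```python
import numpy as np

# Composition of the operators T_A and T_B corresponds to the matrix
# products AB and BA, and these generally differ.
A = np.array([[1., 1.],
              [0., 1.]])
B = np.array([[1., 0.],
              [1., 1.]])

AB, BA = A @ B, B @ A
assert not np.allclose(AB, BA)          # T_A T_B != T_B T_A

# One of the two distributive laws: A(B + C) = AB + AC.
C = np.array([[2., 0.],
              [0., 3.]])
assert np.allclose(A @ (B + C), A @ B + A @ C)
print("noncommutativity and distributivity checked")
```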
Because T_3 preserves addition, this is T_3(T_1 u) + T_3(T_2 u) = (T_3 T_1) u + (T_3 T_2) u = (T_3 T_1 + T_3 T_2) u, again by the definitions of sums and compositions of functions. So we have both distributive laws in case the functions are linear, but only one of them requires any part of linearity.

You are responsible for knowing these rules in the generality given above, but I will summarize our properties in case V = W.

PROPOSITION 2. If V is a vector space over F, then the collection Hom_F(V, V) of all linear operators on V is a linear algebra with identity over F. The addition and scalar multiplication are defined pointwise, and the multiplication is function composition. We have:

1. Hom_F(V, V) is a vector space over F.
2. T_1(T_2 + T_3) = T_1 T_2 + T_1 T_3 for any T_1, T_2, T_3 in Hom_F(V, V).
3. (T_1 + T_2) T_3 = T_1 T_3 + T_2 T_3 for any T_1, T_2, T_3 in Hom_F(V, V).
6 4. T (αt 2 ) = α(t T 2 ) for any T, T 2 in Hom F (V, V ) and any scalar α in F. 5. T (T 2 T 3 ) = (T T 2 )T 3 for any T, T 2, T 3 in Hom F (V, V ). 6. There is an identity I V such that T I V = I V T = T for every T Hom F (V, V ). This algebra will only be commutative (T T 2 = T 2 T for any T, T 2 in Hom F (V, V )) in case dim(v ). There s little to say about the proof at this point. I haven t mention part (4) before; it follows easily from the fact that T preserves scalar multiplication. It also generalizes to any case where the compositions are defined. What about the comment on (non)commutativity? If dim(v ) =, it has only the zero vector and the space of operators is also trivial. If dim(v ) =, it s not hard to see that for any T Hom F (V, V ), there is a scalar α such that T v = α v for all v V. If the dimension is at least 2, say v and v 2 are independent vectors in V. We can find linear operators T and T 2 on V such that T v = v, T v 2 =, T 2 v = and T 2 v 2 = v. Then T T 2 v 2 = and T T 2 v 2 = v, so T T 2 T 2 T. OBSERVATION. Fix a linear operator T on the F -space V. Consider the map Φ : F [X] Hom F (V, V ) where Φ(p(X)) = p(t ) for each polynomial p(x) F [X]. Then Φ is a ring homomorphism. That is, if p (X), p 2 (X) are in F [X], then Φ(p (X) + p 2 (X)) = p (T ) + p 2 (T ) and Φ(p (X)p 2 (X)) = p (T )p 2 (T ). The additive property is kind of obvious. The multiplicative property less so, since the multiplication on the right side is function composition, but it follows easily from the distributive laws. The kernel of this homomorphism will interest us considerably. It will always be nontrivial if V is finite-dimensional, as we shall see. But onto more basic matters. It is also easily seen that it ˆT is any operator that commutes with T (i.e., T ˆT = ˆT T ), then ˆT commutes with p(t ) for any polynomial p(x). In particular, p (T )p 2 (T ) = p 2 (T )p (T ) for any polynomials p and p 2. 
In case a function f : V → W is one-to-one and onto, the inverse function f^{-1} : W → V exists; it will also be a bijection (i.e., one-to-one and onto). In case V and W are F-spaces, and T is an isomorphism, then I claim that T^{-1} is also an isomorphism of vector spaces. We just need to check that it is linear. So suppose that w_1 and w_2 are in W and T^{-1} w_j = v_j for j = 1, 2. Then T v_j = w_j for j = 1, 2. So w_1 + w_2 = T v_1 + T v_2 = T(v_1 + v_2), so T^{-1} w_1 + T^{-1} w_2 = v_1 + v_2 = T^{-1}(w_1 + w_2), and thus T^{-1} preserves addition. Now if w ∈ W and α ∈ F, say T^{-1} w = v; then T v = w, so α w = α T v = T(α v), and so T^{-1}(α w) = α v = α T^{-1} w. Thus T^{-1} is linear, and so an isomorphism. Note that if T_1 : U → V and T_2 : V → W are both isomorphisms, then (T_2 T_1)^{-1} = T_1^{-1} T_2^{-1}. In case V = W, we will often write T^{-2} for T^{-1} T^{-1}, and so on.
DEFINITION(S) 3 (KERNEL and IMAGE). Suppose that T : V → W is a linear transformation between the F-spaces V and W. The set {v ∈ V : T v = 0_W} is called the kernel of T and denoted ker(T). The set {w ∈ W : w = T v for some v ∈ V} is called the image (or range) of T and denoted im(T) (or ran(T)).

Note that if V = F^n, W = F^m, and T = T_A for some m × n matrix A, these are nothing new: ker(T_A) is the null space null(A), and im(T_A) = col(A), the column space of A.

PROPOSITION 4. With the notation of the definition, ker(T) is a subspace of V and im(T) is a subspace of W.

PROOF: We know that T 0_V = 0_W, so 0_V ∈ ker(T) and 0_W ∈ im(T). Now if v_1, v_2 ∈ ker(T), then T v_1 = T v_2 = 0_W. So T(v_1 + v_2) = T v_1 + T v_2 = 0_W + 0_W = 0_W, showing that v_1 + v_2 ∈ ker(T), and so ker(T) is closed under addition. If v ∈ ker(T) and α ∈ F, then T(α v) = α T v = α 0_W = 0_W, so α v ∈ ker(T) and thus ker(T) is closed under scalar multiplication. For the image, say w_1, w_2 ∈ im(T) and that T v_j = w_j. Then w_1 + w_2 = T v_1 + T v_2 = T(v_1 + v_2) ∈ im(T), so im(T) is closed under addition. Also, for any w ∈ im(T) and scalar α, w = T v for some (at least one) v ∈ V, so α w = α T v = T(α v) ∈ im(T), and im(T) is closed under scalar multiplication. That's it.

Obviously, T is onto (a surjection) if and only if im(T) = W. It is also easily seen that T is one-to-one (an injection) if and only if ker(T) is trivial. For if ker(T) is nontrivial, there is a nonzero v ∈ V such that T v = T 0_V = 0_W. Conversely, if T is not one-to-one, then T v_1 = T v_2 where v_1 ≠ v_2. So the nonzero vector v_1 - v_2 is in the kernel, since T(v_1 - v_2) = T v_1 - T v_2 = 0_W.

In a sense, the size of ker(T) measures how far T is from being one-to-one, and the size of im(T) measures how far it is from being onto. The first isomorphism theorem for vector spaces makes this precise. (Exercise?) So, in a way, does a related result to follow shortly. But first, some examples. Some of the examples above of transformations were isomorphisms.
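For T = T_A the kernel and image really are the familiar null space and column space. A small numpy sketch (the matrix A is an illustrative choice):

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 1.]])     # T_A : R^3 -> R^2

# im(T_A) = col(A): here A has rank 2, so the image is all of R^2.
assert np.linalg.matrix_rank(A) == 2

# ker(T_A) = null(A): one-dimensional, spanned by (1, -1, 1).
kernel_vec = np.array([1., -1., 1.])
assert np.allclose(A @ kernel_vec, 0)
print("kernel and image checks passed")
```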
(This includes the reflection T_l, where l is a line in R^2.) They all have trivial kernel, and the image is W. But the projection Proj_l is neither one-to-one nor onto. Its kernel is the line l⊥ and its image is just l. If T(A) = tr(A) for A ∈ M_n(F), then the image is F, and for n ≥ 2 the kernel is rather large: it consists of all matrices the sum of whose diagonal entries is zero. Clearly 0_{V,W} has kernel V and trivial image. If V = C^∞(R), and D : V → V is differentiation, ker(D) consists of all constant functions (so is 1-dimensional) and its image is all of V. This kind of thing can't happen for finite-dimensional V, as we shall see. If V = R[x] (the space of polynomial functions) and T f = ∫_0^x f(t) dt, then T : V → V has trivial kernel, but is not onto; im(T) = {f ∈ V : f(0) = 0}. Again, this cannot happen in the finite-dimensional case. On the other hand, if a < b and T f = ∫_a^b f(t) dt, then its image is R and its kernel is huge. Lots of functions have definite integral 0.
Here's a basic fact that generalizes "rank + nullity = number of columns" for matrices.

PROPOSITION 5. Suppose that T : V → W is a linear transformation of F-spaces. Suppose that {v_i : i ∈ I} is a basis for ker(T) and {w_j : j ∈ J} is a basis for im(T). Now suppose that for each j ∈ J we choose(!) u_j ∈ V such that T u_j = w_j. Then {v_i : i ∈ I} ∪ {u_j : j ∈ J} is a basis for V (with no redundancies). (We assume the bases for ker(T) and im(T) have no repetitions.) In particular, if V is finite-dimensional, dim(ker(T)) + dim(im(T)) = dim(V).

Proof: By "no redundancies" I mean of course that no u_j can be a v_i (this is clear, since T u_j = w_j ≠ 0_W = T v_i), and that if j ≠ k then u_j ≠ u_k (again clear, since T u_j = w_j ≠ w_k = T u_k). Let's see that the given union is independent. If there are distinct i_1, ..., i_m ∈ I and j_1, ..., j_n ∈ J and scalars a_{i_1}, ..., a_{i_m}, b_{j_1}, ..., b_{j_n} so that a_{i_1} v_{i_1} + ... + a_{i_m} v_{i_m} + b_{j_1} u_{j_1} + ... + b_{j_n} u_{j_n} = 0_V, we must show that all the a_i's and b_j's are zero. Let v = a_{i_1} v_{i_1} + ... + a_{i_m} v_{i_m} and u = b_{j_1} u_{j_1} + ... + b_{j_n} u_{j_n}. So T(v + u) = 0_W. Since T v = 0_W, too, we have T u = 0_W. But T u = b_{j_1} w_{j_1} + ... + b_{j_n} w_{j_n}; we assumed that the w_j's were independent, so this forces all the b_j's to be zero. So v = 0, and this forces all the a_i's to be 0, as well. Now for spanning. Given any v ∈ V, T v ∈ im(T), so there are j_1, ..., j_n ∈ J and scalars d_{j_1}, ..., d_{j_n} such that T v = Σ_{k=1}^n d_{j_k} w_{j_k}. Let v′ = Σ_{k=1}^n d_{j_k} u_{j_k}. T(v - v′) = T v - T v′ = 0_W, so v - v′ ∈ ker(T). Thus v - v′ is a linear combo of the v_i's; thus v is a linear combo of the v_i's together with the u_j's. This does it.

I trust the finite-dimensional version is now clear. Once one makes sense of the sum of infinite numbers, it generalizes using this proposition (including its use of the axiom of choice). Note that it follows that if V and W have the same finite dimension, then T is one-to-one if and only if it is onto.
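The rank + nullity identity can be observed numerically: the SVD gives a null-space basis, and its size plus the rank equals the number of columns. A numpy sketch (random matrix as an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))          # T_A : R^6 -> R^4

# Null-space basis from the SVD: the rows of Vt past the rank span ker(A).
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                   # each row is a kernel vector

assert np.allclose(A @ null_basis.T, 0.0, atol=1e-8)
assert rank + null_basis.shape[0] == A.shape[1]   # rank + nullity = n
print(rank, null_basis.shape[0])
```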
A linear operator on the space V is called nonsingular if it is one-to-one; in case V is finite-dimensional, this is equivalent to T being invertible.

Now we turn explicitly to the finite-dimensional case. Suppose that V and W are finite-dimensional vector spaces over F, and that T : V → W is a linear transformation. Suppose that B = (v_1, ..., v_n) is an ordered basis for V and that C = (w_1, ..., w_m) is an ordered basis for W.

DEFINITION 6 (MATRIX OF A TRANSFORMATION WITH RESPECT TO GIVEN ORDERED BASES). With the notation of the last paragraph, we define the matrix of T with respect to the ordered bases B and C as follows. It will be an m × n matrix, and its jth column (for j = 1, ..., n) will be [T v_j]_C. That is, we evaluate T at the jth vector in the basis B and express the result in terms of the basis C. We use the notation _C[T]_B for this matrix. (Others use different notation.) In the important special case where V = W and B = C, we will just write [T]_B instead of _B[T]_B, except occasionally for emphasis.

A trivial example is when T = 0_{V,W}. Whatever ordered bases B and C we
choose, we get _C[T]_B = 0_{m,n}. Also, if V = W and T = αI_V, then for any B, [T]_B = αI_n. Another simple observation is that if T = T_A for an m × n matrix A, V = F^n, W = F^m, and we choose B and C as the standard bases for each of V and W, then _C[T_A]_B is just, aw, you guessed it, A itself. (If we vary the bases, it probably won't be, though.)

Consider the projection Proj_l and reflection T_l (where l is a line through the origin in the real space R^2). Instead of choosing the standard basis here, let's suppose B = C is a basis consisting of a vector v_1 on l and a vector v_2 on l⊥. Then Proj_l v_1 = v_1 = 1 v_1 + 0 v_2 and Proj_l v_2 = 0. Also T_l v_1 = v_1 and T_l v_2 = -v_2. We get particularly simple matrices this way: [Proj_l]_B = [[1, 0], [0, 0]] and [T_l]_B = [[1, 0], [0, -1]]. If we want the matrices with respect to the standard basis (e_1, e_2), it still helps to refer to this nonstandard basis; well, it's nonstandard unless l is the x-axis.

Let's do this in the specific case where l is defined by y = (2/3)x. Let v_1 = (3, 2)^T on l. (I'm tempted to divide this by √13 for some reason, but let's forgo that for now.) Let v_2 = (-2, 3)^T on l⊥. Now e_1 = (3/13) v_1 - (2/13) v_2 and e_2 = (2/13) v_1 + (3/13) v_2. So Proj_l e_1 = (3/13) v_1 = (9/13, 6/13)^T, Proj_l e_2 = (2/13) v_1 = (6/13, 4/13)^T, T_l e_1 = (3/13) v_1 + (2/13) v_2 = (5/13, 12/13)^T, and T_l e_2 = (2/13) v_1 - (3/13) v_2 = (12/13, -5/13)^T. If S = (e_1, e_2), then [Proj_l]_S = (1/13) [[9, 6], [6, 4]] and [T_l]_S = (1/13) [[5, 12], [12, -5]]. Not only are these matrices less pretty, but to actually find them it seemed natural to go through the nonstandard basis. One recurring theme for us will be, given a linear operator on a finite-dimensional vector space, to choose a basis for the vector space so that its matrix with respect to that basis comes out nice; this will give us considerable information about the operator.

Suppose now that V = W = P_3(X), the space of polynomials over the reals with degree ≤ 3. Let B = (1, X, X^2, X^3) be the standard ordered basis for V and D : V → V be given by differentiation. As D1 = 0, DX = 1, DX^2 = 2X and DX^3 = 3X^2, we have [D]_B = [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]].
If we instead chose the basis B′ = (1, X, (1/2)X^2, (1/6)X^3), we'd get the slightly nicer matrix [D]_{B′} = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]], which is in Jordan canonical form (coming attractions).
We could also regard D as a transformation from V to W = P_2(X) and let B be as above, with C = (1, X, X^2); then _C[D]_B would look like [D]_B except it would miss that last row of zeroes.

What this matrix _C[T]_B is good for is that it reduces calculations about T to matrix multiplication. More specifically, if v ∈ V, then _C[T]_B [v]_B = [T v]_C. To see this, suppose that V is k-dimensional, B = (v_1, ..., v_k), [v]_B = (a_1, ..., a_k)^T, and the columns of _C[T]_B are, in order, C_1, ..., C_k. Then [T v]_C = [T(a_1 v_1 + ... + a_k v_k)]_C = [a_1 T v_1 + ... + a_k T v_k]_C = a_1 [T v_1]_C + ... + a_k [T v_k]_C = a_1 C_1 + ... + a_k C_k, which is just the matrix product _C[T]_B [v]_B by the column-by-column rule.

For instance, say T = T_l for the line l defined by y = (2/3)x, V = W = R^2, B = C = (v_1, v_2) = ((3, 2)^T, (-2, 3)^T), and v = (7, 22)^T = 5 v_1 + 4 v_2. _C[T]_B (or just [T]_B) is [[1, 0], [0, -1]] and [v]_B = (5, 4)^T. [T v]_C = _C[T]_B [v]_B = (5, -4)^T, so T v = 5 v_1 - 4 v_2 = (23, -2)^T. If we use B′ = C′, the standard basis (e_1, e_2), instead, then _{C′}[T]_{B′} = (1/13)[[5, 12], [12, -5]] and _{C′}[T]_{B′} [v]_{B′} = (1/13)[[5, 12], [12, -5]] (7, 22)^T = (1/13)(299, -26)^T = (23, -2)^T, which is reassuring.

Here's a second example, again for simplicity with V = W and B = C, but in this case V = P_3(X), B = (1, X, X^2, X^3) and T = D. For any polynomial p(X) = a_0 + a_1 X + a_2 X^2 + a_3 X^3 in V, we easily see that [p]_B = (a_0, a_1, a_2, a_3)^T and [D]_B [p]_B = (a_1, 2a_2, 3a_3, 0)^T. This corresponds to the easily verified fact that Dp(X) = a_1 + 2a_2 X + 3a_3 X^2. (Again, reassuring.)

One use of the matrix with respect to a basis is to do calculations involving kernels and images. I'll illustrate; suppose that V is the vector space M_n(F) of all n × n matrices over F, and A is any particular n × n matrix. Let T : V → V be defined by T X = AX - XA for any X ∈ V. Let's do the (easy) verification that any such T is linear. First, T(X_1 + X_2) = A(X_1 + X_2) - (X_1 + X_2)A =
(AX_1 + AX_2) - (X_1 A + X_2 A) = (AX_1 - X_1 A) + (AX_2 - X_2 A) = T X_1 + T X_2 for any X_1, X_2 ∈ V. Next, T(αX) = A(αX) - (αX)A = α(AX) - α(XA) = α(AX - XA) = α T X for any X ∈ V and scalar α. (We have of course used several basic properties of matrix algebra here.)

Now let's get specific. Say F = R, n = 2 and A = [[0, 1], [1, 0]]. It's not too hard to compute a basis for each of ker(T) and im(T) directly, but let's do it using [T]_B for some B. Let B = (E_{1,1}, E_{1,2}, E_{2,1}, E_{2,2}) be the standard ordered basis for V, where E_{i,j} has a 1 in position (i, j) and zeroes elsewhere. Before actually finding [T]_B, notice that it is not itself a 2 × 2 matrix; it's 4 × 4, and I really hope you know why. T E_{1,1} = -E_{1,2} + E_{2,1}, so the first column of [T]_B is (0, -1, 1, 0)^T. And so on: T E_{1,2} = -E_{1,1} + E_{2,2}, T E_{2,1} = E_{1,1} - E_{2,2}, and T E_{2,2} = E_{1,2} - E_{2,1}. So [T]_B = [[0, -1, 1, 0], [-1, 0, 0, 1], [1, 0, 0, -1], [0, 1, -1, 0]]. This row-reduces very easily; a basis for col([T]_B) is {(0, -1, 1, 0)^T, (-1, 0, 0, 1)^T}, and a basis for null([T]_B) is {(1, 0, 0, 1)^T, (0, 1, 1, 0)^T}. These column vectors are not in V, but they correspond to elements of V, and from them we can easily read off bases for ker(T) and im(T). These bases are {[[1, 0], [0, 1]], [[0, 1], [1, 0]]} and {[[0, -1], [1, 0]], [[-1, 0], [0, 1]]}, respectively. (I hope this is clear; I've had students forget to make this re-translation step.)

OBSERVATION. Assume all the data is fixed: (F, V, W, B, C). Then the map T ↦ _C[T]_B is an isomorphism between Hom_F(V, W) and M_{m,n}(F). In particular, _C[T_1 + T_2]_B = _C[T_1]_B + _C[T_2]_B and _C[αT]_B = α _C[T]_B for any T_1, T_2, T ∈ Hom_F(V, W) and α ∈ F.

At this stage, what I am going to say next is also more of a notational observation than anything else, but it is significant enough to be called a proposition and to earn its rather pompous name.

PROPOSITION 7 (THE REASON MATRIX MULTIPLICATION IS DE-
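The commutator example can be carried out mechanically: build the 4 × 4 matrix of T in the standard basis of M_2(R) and read off rank and kernel. A numpy sketch (the specific matrix A in this copy of the notes is garbled, so the concrete choice A = [[0, 1], [1, 0]] below is an assumption):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 0.]])          # assumed concrete choice

def T(X):
    """The commutator operator T(X) = A X - X A on M_2(R)."""
    return A @ X - X @ A

# Standard ordered basis (E11, E12, E21, E22); row-major flattening of a
# 2x2 matrix gives exactly its coordinate vector in this basis.
basis = [np.eye(2)[[i]].T @ np.eye(2)[[j]] for i in range(2) for j in range(2)]

# j-th column of [T]_B is the coordinate vector of T applied to the j-th
# basis matrix.
M = np.column_stack([T(E).reshape(-1) for E in basis])

# ker(T) contains everything that commutes with A, e.g. I and A itself.
assert np.allclose(M @ np.eye(2).reshape(-1), 0.0)
assert np.allclose(M @ A.reshape(-1), 0.0)
assert np.linalg.matrix_rank(M) == 2     # so dim ker(T) = 4 - 2 = 2
print(M)
```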
FINED THE WAY IT IS). Suppose that U, V and W are finite-dimensional vector spaces over F, B is an ordered basis for U, C is an ordered basis for V, and D is an ordered basis for W. Suppose also that T_1 : U → V and T_2 : V → W are linear transformations. Then _D[T_2 T_1]_B = _D[T_2]_C _C[T_1]_B.

Proof: Say B = (u_1, ..., u_k), C = (v_1, ..., v_l) and D = (w_1, ..., w_m). I trust it is clear that both _D[T_2 T_1]_B and _D[T_2]_C _C[T_1]_B are m × k matrices. We check that their jth columns are the same for each j. Suppose that the jth column of _C[T_1]_B is (a_1, ..., a_l)^T; this means that T_1 u_j = a_1 v_1 + ... + a_l v_l. Suppose that the columns of _D[T_2]_C are C_1, ..., C_l; for each i ≤ l, this means that C_i = [T_2 v_i]_D. Now the jth column of the product of _D[T_2]_C and _C[T_1]_B is a_1 C_1 + ... + a_l C_l. The jth column of _D[T_2 T_1]_B is [(T_2 T_1) u_j]_D. This is equal to [T_2(T_1 u_j)]_D = [T_2(a_1 v_1 + ... + a_l v_l)]_D = [a_1 T_2 v_1 + ... + a_l T_2 v_l]_D = a_1 [T_2 v_1]_D + ... + a_l [T_2 v_l]_D = a_1 C_1 + ... + a_l C_l. This completes the proof.

As a simple illustration, let U = V = W = P_3(X), B = C = D = (1, X, X^2, X^3), and T_1 = T_2 (differentiation, but I don't want to call it D in this context). _D[T_2]_C _C[T_1]_B = ([T_1]_B)^2 = [[0, 0, 2, 0], [0, 0, 0, 6], [0, 0, 0, 0], [0, 0, 0, 0]]. This reflects the simple fact that T_2 T_1 (a_0 + a_1 X + a_2 X^2 + a_3 X^3) = 2a_2 + 6a_3 X. (T_2 T_1 p(X) is of course the second derivative p″(X).)

Another example is provided by T = T_l for the line l above; it is obvious that ([T]_B)^2 = I, and an easy calculation shows that ([T]_S)^2 = I, too (no accident). This, um, reflects the fact that if we reflect across the same line twice, we get back to where we started.

Note that if T : V → W is an isomorphism of finite-dimensional vector spaces, then it preserves dimension, so for any ordered basis B of V and ordered basis C of W, _C[T]_B is a square matrix. So is _B[T^{-1}]_C, and it should be no surprise that the product of these is the identity. (_C[T]_B _B[T^{-1}]_C = _C[T T^{-1}]_C = [I]_C = I, and similarly in the other direction.)
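The second-derivative illustration is easy to check: square the differentiation matrix and apply it to a coordinate vector. A numpy sketch using the basis (1, X, X^2, X^3):

```python
import numpy as np

# [D]_B for differentiation on P_3(X) in the basis (1, X, X^2, X^3):
# the j-th column holds the coordinates of the derivative of X^(j-1).
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

# Composition corresponds to the matrix product: [D o D]_B = [D]_B^2.
D2 = D @ D

# Second derivative of 5 + 4X + 3X^2 + 2X^3 is 6 + 12X.
p = np.array([5., 4., 3., 2.])
assert np.allclose(D2 @ p, [6., 12., 0., 0.])
print(D2)
```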
Another very instructive illustration of Proposition 7 is this: Suppose that U = F^k, V = F^l, W = F^m and X = F^n, and we give each of these vector spaces its standard basis. Suppose that A_1 is an l × k matrix, A_2 is an m × l matrix, and A_3 is an n × m matrix, all with entries from F. Let T_j be T_{A_j} for j = 1, 2, 3. We leave off the subscripts on [T_j] because the basis is standard in all cases. We have A_3(A_2 A_1) = [T_3]([T_2][T_1]) = [T_3]([T_2 T_1]) = [T_3(T_2 T_1)] = [(T_3 T_2) T_1] =
([T_3 T_2])[T_1] = ([T_3][T_2])[T_1] = (A_3 A_2) A_1.

I wish to emphasize that this last paragraph, while it illustrates Proposition 7, is not an example or illustration of the fact that matrix multiplication is associative. It is a proof that matrix multiplication is associative. It is almost entirely conceptual; the only serious calculational thing it uses is the much easier column-by-column fact.

I've implicitly (in fact, explicitly in some of the examples) raised the issue of what happens to the matrix of a transformation in case we change the basis on one side or the other or both. Let's deal with this systematically. In the following, T : V → W is assumed to be a linear transformation between the finite-dimensional vector spaces V and W (over F). Also we assume that B and B′ are ordered bases for V, and C and C′ are ordered bases for W. Say W is m-dimensional and V is n-dimensional.

PROPOSITION 8. With the notation we just set up, _{C′}[T]_{B′} = _{C′}P_C _C[T]_B _B P_{B′}. In particular, if V = W, B = C, B′ = C′ and P = _B P_{B′}, then [T]_{B′} = P^{-1} [T]_B P.

Proof: Say a = (a_1, ..., a_n)^T is any vector in F^n; if B′ = (v′_1, ..., v′_n), let v = a_1 v′_1 + ... + a_n v′_n, so the given vector a is [v]_{B′}. Then _B P_{B′} a = [v]_B. _C[T]_B [v]_B = [T v]_C and _{C′}P_C [T v]_C = [T v]_{C′}. But _{C′}[T]_{B′} a = [T v]_{C′}, too. That is, multiplying any vector in F^n by _{C′}[T]_{B′} gives the same result as multiplying it by _{C′}P_C _C[T]_B _B P_{B′}; these two matrices must needs be the same, therefore.

The special case of a linear operator will most concern us in this course. We've done the calculations for the examples of Proj_l and T_l for our line defined by y = (2/3)x. (Incidentally, this line has no particular significance; it's just that it's nice to have a specific example to flog to death.) With B = C = (v_1, v_2) = ((3, 2)^T, (-2, 3)^T) and B′ = C′ = (e_1, e_2), we have _{B′}P_B = [[3, -2], [2, 3]], and P = _B P_{B′} is its inverse, (1/13)[[3, 2], [-2, 3]]. A straight matrix calculation shows that, if A = [[1, 0], [0, -1]] = [T_l]_B, then P^{-1} A P = (1/13)[[5, 12], [12, -5]] = [T_l]_{B′}.
If Â = [Proj_l]_B = [[1, 0], [0, 0]], then P^{-1} Â P = (1/13)[[9, 6], [6, 4]] = [Proj_l]_{B′}, as advertised. If V = W = P_3(X), B = C = (1, X, X^2, X^3) and B′ = C′ = (1, X, (1/2)X^2, (1/6)X^3),
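The similarity computation for the reflection is a one-liner in numpy. A sketch using the basis v_1 = (3, 2)^T, v_2 = (-2, 3)^T for the line y = (2/3)x:

```python
import numpy as np

# Change of basis for the reflection across the line y = (2/3)x in R^2.
# The columns of M are v1 (on the line) and v2 (perpendicular to it).
M = np.array([[3., -2.],
              [2.,  3.]])
T_B = np.diag([1., -1.])           # [T_l]_B : fix v1, negate v2

# Matrix in the standard basis: conjugate by the change-of-basis matrix.
T_S = M @ T_B @ np.linalg.inv(M)

assert np.allclose(T_S, np.array([[5., 12.], [12., -5.]]) / 13.)
assert np.allclose(T_S @ T_S, np.eye(2))   # reflecting twice is the identity
print(T_S)
```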
then

    P = B P B' = [ 1  0   0    0  ]
                 [ 0  1   0    0  ]
                 [ 0  0  1/2   0  ]
                 [ 0  0   0   1/6 ],

    [D]B = [ 0  1  0  0 ]
           [ 0  0  2  0 ]
           [ 0  0  0  3 ]
           [ 0  0  0  0 ]

and

    P^{-1} [D]B P = [ 0  1  0  0 ]
                    [ 0  0  1  0 ]
                    [ 0  0  0  1 ]
                    [ 0  0  0  0 ].

One reason for considering a nonstandard basis for a finite-dimensional vector space is that the matrix of a particular linear operator on the space may come out nicer, and easier to calculate with, if it is expressed in terms of the nonstandard basis. Ideally, we hope that an operator T on V may be diagonalizable, which means that there is some ordered basis B for V such that [T]B = A is a diagonal matrix. (That is, a_{j,k} = 0 for any j ≠ k.) As seen above, if l is a line in R^2 and Proj_l is the orthogonal projection onto l, and T_l the reflection across l, then both these transformations are diagonalizable. (0s are allowed on the diagonal; they are just required off the diagonal.) Maybe it's not obvious just yet, but the differentiation operator D : P_3(X) → P_3(X) is not diagonalizable. In a sense, the best we can do is the Jordan canonical form mentioned above. But this is more advanced material, and we'll get to it later.
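The P_3(X) computation can be checked the same way with exact rational arithmetic. Again, this is only a sketch; the helper `matmul` and the variable names are mine, not the notes'.

```python
from fractions import Fraction as F

def matmul(A, B):
    # entry (i, j) of A*B is sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# [D]B for differentiation on P_3(X), with B = (1, X, X^2, X^3):
# column j holds the B-coordinates of the derivative of the j-th basis vector
D_B = [[F(0), F(1), F(0), F(0)],
       [F(0), F(0), F(2), F(0)],
       [F(0), F(0), F(0), F(3)],
       [F(0), F(0), F(0), F(0)]]

# P = B P B' for B' = (1, X, X^2/2, X^3/6); it is diagonal, so easy to invert
P     = [[F(1), 0, 0, 0], [0, F(1), 0, 0],
         [0, 0, F(1, 2), 0], [0, 0, 0, F(1, 6)]]
P_inv = [[F(1), 0, 0, 0], [0, F(1), 0, 0],
         [0, 0, F(2), 0], [0, 0, 0, F(6)]]

D_Bprime = matmul(P_inv, matmul(D_B, P))
# the 4x4 Jordan block: 0s on the diagonal, 1s just above it
assert D_Bprime == [[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]]
```

Note that every diagonal entry of the result is 0, so 0 is the only eigenvalue of D; if D were diagonalizable it would therefore be similar to the zero matrix, hence the zero operator, which it plainly is not.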