Answers in blue. If you have questions or spot an error, let me know.

1. Find all 2×2 matrices that commute with A = [[3, -4], [4, 3]].

If we set B = [[a, b], [c, d]] and set AB = BA, comparing entries gives 3a + 4b = 3a - 4c, -4a + 3b = 3b - 4d, 3c + 4d = 4a + 3c, and -4c + 3d = 4b + 3d. Solving these equations, a = d and b = -c, so all matrices of the form B = [[a, -c], [c, a]] will commute with A, that is, all rotation-scaling matrices. (If you prefer, you could solve for c instead of for b and write B = [[a, b], [-b, a]].)

2. Let A = (1/2)[[√3, -1], [1, √3]]. Compute A^2015 by finding a pattern or by using geometry.

Geometrically, this is a rotation by π/6, so that A^6 = -I and A^12 = I. Since 2016 is a multiple of 12, this tells us that A^2016 = I, or A·A^2015 = I, so that A^2015 = A^{-1} = (1/2)[[√3, 1], [-1, √3]], the rotation by -π/6.

3. Let A = (1/5)[[3, 4], [4, -3]]. Compute A^2015 by finding a pattern or by using geometry.

Geometrically, this is a reflection across a line, so A^2 = I (or, just compute it). Thus, A^2015 = (A^2)^1007 · A = A = (1/5)[[3, 4], [4, -3]].

4. Let A be an n×n diagonal matrix with diagonal entries a11, ..., ann. Find a formula for A^t in terms of a11, ..., ann, where t is any positive integer.

Do this a couple of times to establish a pattern (or, if you have the formal background for it, prove it by induction by looking at A^t = A^{t-1}·A). You'll find that A^t is the n×n diagonal matrix with diagonal entries (a11)^t, ..., (ann)^t.

5. Are the following matrices invertible? If so, find their inverses. If not, explain how you know that they aren't: A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]], together with the given matrices B and C.

A is not invertible (as it has rank 2, so it does not row-reduce to the identity), while B and C are invertible; row-reducing [B | I] and [C | I] produces their inverses B^{-1} and C^{-1}.

6. For which values of k are the three given vectors in R^3 linearly dependent?

Consider a relation a·v1 + b·v2 + c·v3 = 0. From looking at the first two components (which do not involve k), it's clear that we must have (a, b, c) = λ(a0, b0, c0) for a single fixed nonzero triple (a0, b0, c0) and some scalar λ, and that this will be a
nontrivial relation whenever λ ≠ 0. Looking at the last components, we see that such a relation works for some λ ≠ 0 if and only if k = 2 or k = 4. Alternatively, put the three vectors in the columns of a matrix and see whether it row-reduces to the identity.

7. Consider the given matrix A (a matrix with five columns) together with rref(A).

(a) Find a basis for im(A).

We know that im(A) is spanned by the column vectors of A and that all column vectors corresponding to a column of rref(A) without a leading 1 are redundant (since the same relations among the columns of A also hold among the columns of rref(A) and vice versa), so im(A) is spanned by the two pivot columns of A, and these two columns form a basis.

(b) Find a basis for ker(A).

Using rref(A)·x = 0, solve for the pivot variables in terms of the three free variables r, s, t. Writing the general solution in the form x = r·w1 + s·w2 + t·w3 then shows that ker(A) = span(w1, w2, w3), and these three vectors are linearly independent (look at the entries in the free-variable positions). It's a good idea to use Rank-Nullity to check your work: dim(im(A)) + dim(ker(A)) = 2 + 3 = 5, the number of columns of A, as required.

8. Suppose that the vectors v1, ..., vn form a basis of R^n. Let A be the matrix with column vectors v1, ..., vn. Find im(A) and ker(A).

By Summary 3.3.10, im(A) = R^n and ker(A) = {0}.

9. Can you find a 2×2 matrix A such that im(A) = ker(A)? What about a 3×3 matrix? 4×4? n×n?

An example of a 2×2 matrix with this property is A = [[0, 1], [0, 0]], for which im(A) = ker(A) = span(e1). It is impossible to find a 3×3 matrix with this property, since then dim(im(A)) = dim(ker(A)) (i.e., rank(A) = nullity(A)), so that the Rank-Nullity Theorem tells us that rank(A) + rank(A) = 3, or rank(A) = 1.5. Since the rank of a matrix is always an integer, no matrix with this property exists. For a
4×4 matrix, one matrix with this property is A = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], for which im(A) = ker(A) = span(e1, e2). Generalizing this pattern, one can check that no such n×n matrix A exists if n is odd, and that one possibility when n is even is the matrix A whose first n/2 columns are 0 and whose last n/2 columns are e1, ..., e_{n/2}.

10. Let a, b, c ∈ R where a ≠ 0. Find a basis for the subspace V of R^3 defined by V = {[x, y, z] ∈ R^3 : ax + by + cz = 0}.

Note that V = ker(A) where A = [a b c]. Since a ≠ 0, we compute rref(A) = [1 b/a c/a], so that one possible basis is B = {[-b/a, 1, 0], [-c/a, 0, 1]}. Doing this by inspection, you can also note that V is a plane (and so of dimension 2), so that any two linearly independent solutions to ax + by + cz = 0 will form a basis.

11. Let A be an n×p matrix such that ker(A) = {0} and let B be a p×m matrix. How does ker(AB) relate to ker(B)?

We saw in Section 3.4, #39 that ker(B) ⊆ ker(AB) (see the HW solutions for details). Now, suppose that x ∈ ker(AB). Then ABx = 0. Since ker(A) = {0}, this can happen only if Bx = 0 (since A(Bx) = 0), or, in other words, if x ∈ ker(B), so that ker(AB) ⊆ ker(B) also. So, in this case, we have ker(AB) = ker(B).

12. Let V and W be subspaces of R^n and define V + W = {v + w ∈ R^n : v ∈ V, w ∈ W}. Show that V + W is a subspace of R^n.

Since V and W are subspaces of R^n, we know 0 ∈ V and 0 ∈ W. Thus, 0 = 0 + 0 ∈ V + W. Suppose that v1 + w1 ∈ V + W and v2 + w2 ∈ V + W for some vectors v1, v2 ∈ V and w1, w2 ∈ W. Then (v1 + w1) + (v2 + w2) = (v1 + v2) + (w1 + w2) ∈ V + W, since v1 + v2 ∈ V and w1 + w2 ∈ W (because V and W are subspaces of R^n). Finally, suppose that v + w ∈ V + W for some v ∈ V and w ∈ W, and let k ∈ R. Then k(v + w) = kv + kw ∈ V + W, since kv ∈ V and kw ∈ W. Thus, V + W is a subspace of R^n.

13. Let V be a subspace of R^n and define V⊥ = {x ∈ R^n : x · v = 0 for all v ∈ V}. Show that V⊥ is a subspace of R^n.

Note that 0 · v = 0 for all vectors v ∈ R^n, so 0 ∈ V⊥. Suppose that w1, w2 ∈ V⊥, that is, w1 · v = 0 and w2 · v = 0 for all vectors v ∈ V. Then (w1 + w2) · v = w1 · v + w2 · v = 0 + 0 = 0 for any vector v ∈ V (note that we proved the distributivity of the dot product on the review for the first exam).
Finally, suppose that w ∈ V⊥ (so that w · v = 0 for all v ∈ V) and let k ∈ R. Then (kw) · v = k(w · v) = k·0 = 0 for all v ∈ V (note that we proved that scalars pull out of the dot product on the review for the first exam). Thus, V⊥ is a subspace of R^n.

14. Let v1, ..., vm ∈ R^n and let T(x) be a linear transformation from R^n to R^p. If v1, ..., vm are linearly dependent, are T(v1), ..., T(vm) necessarily linearly dependent? If v1, ..., vm are linearly independent, are T(v1), ..., T(vm) necessarily linearly independent?

Most of the time, the best way (in my opinion) to answer questions about linear (in)dependence is to look at linear relations (although you may use any part of Summary 3.2.9 if you prefer, or Summary 3.3.10 in some cases, and you should at least know the other ways, as they're occasionally useful). So, if v1, ..., vm are linearly dependent, then we may suppose that c1 v1 + ... + cm vm = 0 where not all of the ci = 0. Taking T of both sides and using the properties of linear transformations, we see that c1 T(v1) + ... + cm T(vm) = 0 (note that T(0) = A·0 = 0, where A is the matrix of T), so that T(v1), ..., T(vm) are linearly dependent (since not all of the ci = 0).

For linear independence, the same trick won't work: it tells us that if we start with the trivial relation among v1, ..., vm, then we get the trivial relation among T(v1), ..., T(vm) also, but this doesn't prove that other relations don't exist. If we knew that T were invertible, we could try the same thing in reverse to transform a relation among T(v1), ..., T(vm) into a relation among v1, ..., vm and use the linear independence of v1, ..., vm to show the linear independence of T(v1), ..., T(vm). However, what if T isn't invertible? Let's try an example where T is not invertible (note that while examples are generally useless for proving that something is true, one example is all you need to prove that something is false): the simplest such example is the zero transformation T(x) = 0 for all x (from R^2 to R^2, say).
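This zero-map counterexample can be checked concretely. A minimal sketch in Python; the helper names and the determinant test for two vectors in R^2 are my own choices, not part of the review:

```python
# Counterexample sketch for #14: a non-invertible T can destroy independence.
# Here T is the zero transformation on R^2, represented by the zero matrix.

def apply(matrix, vector):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [matrix[0][0] * vector[0] + matrix[0][1] * vector[1],
            matrix[1][0] * vector[0] + matrix[1][1] * vector[1]]

def independent(u, v):
    """Two vectors in R^2 are linearly independent iff det[u v] != 0."""
    return u[0] * v[1] - u[1] * v[0] != 0

T = [[0, 0], [0, 0]]   # the zero transformation
e1, e2 = [1, 0], [0, 1]

assert independent(e1, e2)                           # e1, e2 are independent
assert not independent(apply(T, e1), apply(T, e2))   # their images are dependent
```

Any other singular matrix in place of the zero matrix exhibits the same failure; for instance, the projection [[1, 0], [0, 0]] sends e1, e2 to e1, 0.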
Then e1, e2 are linearly independent, but T(e1), T(e2) are both the zero vector and so not linearly independent. Thus, the answer to the second part is: no, they are not necessarily linearly independent (although we saw that they will be if T is invertible).

An informal way to think about this is to imagine that the coordinates (that is to say, coefficients) of a vector with respect to a set of linearly independent vectors carry information (while coordinates with respect to linearly dependent vectors don't carry the same amount of information, since there isn't a unique way to extract the coordinates). Then, these results say that applying a linear transformation to a set of vectors won't gain any new information, but may lose some of the existing information.

15. Consider two 3×3 matrices A and B and a vector v ∈ R^3 such that ABv = 0, BAv = 0, and A^2 v = 0, but Av ≠ 0 and B^2 v ≠ 0.

(a) Show that the vectors B = {Av, Bv, v} form a basis of R^3.

(b) Find the B-matrix of the transformation T(x) = Ax.

(a) By Summary 3.3.10, it's enough to show that these vectors are linearly independent. (Showing that they span R^3 would also be sufficient, but showing linear independence is usually easier.) As above, we'll start by looking at a relation of the form c1 Av + c2 Bv + c3 v = 0 and calculate that c1 = c2 = c3 = 0 to show linear independence. First, apply the matrix A to both sides on the left: A(c1 Av + c2 Bv + c3 v) = A·0, or c1 A^2 v + c2 ABv + c3 Av = 0,
which implies c3 Av = 0, since the other two terms vanish (A^2 v = 0 and ABv = 0). Since Av ≠ 0, this means that c3 = 0. Now, apply B to both sides of c1 Av + c2 Bv = 0 on the left: B(c1 Av + c2 Bv) = B·0, so that c2 B^2 v = 0, since BAv = 0. As B^2 v ≠ 0, this shows that c2 = 0. Thus, we are left with c1 Av = 0, and since Av ≠ 0 we have c1 = 0. So the only relation among the vectors in B is the trivial relation, and they are linearly independent.

(b) The easiest way to go about this is to use the (unnumbered) theorem following Definition 3.4.3 in your book to calculate the B-matrix column by column (using the other methods may be possible, but probably gets more than a bit messy). We compute [T(Av)]_B = [A^2 v]_B = [0]_B = [0, 0, 0], [T(Bv)]_B = [ABv]_B = [0]_B = [0, 0, 0], and [T(v)]_B = [Av]_B = [1, 0, 0], so that the B-matrix of T is B = [[0, 0, 1], [0, 0, 0], [0, 0, 0]].

16. We say that a set of vectors v1, ..., vm ∈ R^n is orthonormal if they are perpendicular unit vectors, that is, if vi · vj = 1 when i = j and vi · vj = 0 when i ≠ j. Show that if u1, ..., un ∈ R^n are orthonormal, then B = {u1, ..., un} forms a basis of R^n. (Hint: Let c1 u1 + ... + cn un = 0 and take the dot product of both sides with uj for j = 1, ..., n.)

Note that uj · (c1 u1 + ... + cn un) = uj · 0 simplifies to c1 (uj · u1) + ... + cn (uj · un) = 0, and that uj · ui = 1 when i = j and 0 when i ≠ j. So, this tells us that c1(0) + ... + cj(1) + ... + cn(0) = 0, or that cj = 0. Doing this for j = 1, ..., n, we see that c1 = ... = cn = 0, so that the only relation among the vectors of B is the trivial relation. As above, this is sufficient to show that B is a basis by Summary 3.3.10.

17. Let A be the given 3×3 matrix and let B = {b1, b2, b3} be the given basis of R^3. Find the B-matrix of the transformation T(x) = Ax.

With S = [b1 b2 b3], the B-matrix of T is B = S^{-1} A S = [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
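One way to sanity-check the answer to #15(b) is to build a concrete instance of its hypotheses. The matrices below are my own hypothetical choice, not taken from the review: A sends e3 to e1 and kills e1, e2; B fixes e2, sends e3 to e2, and kills e1; v = e3.

```python
# Hypothetical concrete instance of the hypotheses in #15 (my own choice).

def matvec(M, x):
    """Multiply a 3x3 matrix by a length-3 vector."""
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

A = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]   # A e3 = e1, A e1 = A e2 = 0
B = [[0, 0, 0], [0, 1, 1], [0, 0, 0]]   # B e1 = 0, B e2 = e2, B e3 = e2
v = [0, 0, 1]                           # v = e3

Av, Bv = matvec(A, v), matvec(B, v)
zero = [0, 0, 0]

# Hypotheses: ABv = BAv = A^2 v = 0, while Av != 0 and B^2 v != 0.
assert matvec(A, Bv) == zero and matvec(B, Av) == zero and matvec(A, Av) == zero
assert Av != zero and matvec(B, Bv) != zero

# Here {Av, Bv, v} happens to be the standard basis {e1, e2, e3}, so the
# B-matrix of T(x) = Ax is A itself, matching [[0, 0, 1], [0, 0, 0], [0, 0, 0]].
assert [Av, Bv, v] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Since the basis here is the standard one, the B-matrix coincides with A, in agreement with the column-by-column computation above.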
18. Let A be a 2×2 matrix of the form A = [[a, b], [b, -a]], where a^2 + b^2 = 1 and a ≠ -1, and let B = {[1 + a, b], [-b, 1 + a]} be a basis of R^2 (both vectors are nonzero since a ≠ -1). Find the B-matrix of the transformation T(x) = Ax. Interpret your answer geometrically.

The B-matrix is B = [[1, 0], [0, -1]]. Geometrically, A is the matrix of a reflection across the line spanned by [1 + a, b], and the vector [-b, 1 + a] is perpendicular to this line, so this result shows that if we write a vector as a linear combination of a vector on the line (i.e., x∥) and a vector perpendicular to the line (i.e., x⊥), then the vector on the line is unaltered by T, while the coefficient of the vector perpendicular to the line is multiplied by -1 (that is, its direction is replaced by its opposite). In other words, T(x∥ + x⊥) = x∥ - x⊥. Aside: It's easy to verify that B really is a basis of R^2, for example by #16 (after rescaling the two vectors to unit length).

19. Let A be a 2×2 matrix of the form A = [[u1^2, u1 u2], [u1 u2, u2^2]], where u1^2 + u2^2 = 1, and let B = {[u1, u2], [-u2, u1]} be a basis of R^2. Find the B-matrix of the transformation T(x) = Ax. Interpret your answer geometrically.

The B-matrix is B = [[1, 0], [0, 0]]. Geometrically, A is the matrix of an orthogonal projection onto the line spanned by [u1, u2], and [-u2, u1] is a vector perpendicular to this line. So, this result shows that if we write a vector x as a linear combination of a vector on the line (i.e., x∥) and a vector perpendicular to the line (i.e., x⊥), then the vector on the line is unaltered by T, while the vector perpendicular to the line is mapped to the zero vector. In other words, T(x∥ + x⊥) = x∥. Aside: It's easy to verify that B really is a basis of R^2, for example by #16.

20. True or False:

(a) Let B = {en, ..., e1} (the standard basis vectors in the opposite order) and let A = [v1 ... vn]. Then the B-matrix of T(x) = Ax is the matrix [vn ... v1] (the column vectors in the opposite order). False. Both the rows and the columns get reversed, so the B-matrix B is defined entrywise by b_{i,j} = a_{n+1-i, n+1-j}.

(b) Let A be an n×n matrix. If A^2 = A, then A = I_n. False.
For example, in the 2×2 case, let A be the matrix of an orthogonal projection onto a line; then A^2 = A but A ≠ I.

(c) Let A be an n×m matrix. Then dim(im(A)) + dim(ker(A)) = n. False. By Rank-Nullity, this sum equals m, the number of columns of A.

(d) If A is a 2×2 matrix representing an orthogonal projection onto a line, then A is similar to [[1, 0], [0, 0]]. True. See #19.

(e) Let A and B be n×n matrices. If A is similar to B, then A^t is similar to B^t for every positive integer t. True. See Example 7 in §3.4 of the text.
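The reason behind (e) is that if B = S^{-1} A S, then B^t = S^{-1} A^t S, so the same S witnesses the similarity of the powers. A quick exact-arithmetic sketch in Python (the particular matrices A and S are my own example, not from the review):

```python
# Sketch of (e): if B = S^{-1} A S, then A^3 and B^3 are similar via the same S.
from fractions import Fraction

def matmul(X, Y):
    """Multiply two square matrices of matching size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Exact inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]
S = [[1, 1], [1, 2]]              # any invertible S works
S_inv = inv2(S)
B = matmul(S_inv, matmul(A, S))   # B is similar to A by construction

A3 = matmul(A, matmul(A, A))
B3 = matmul(B, matmul(B, B))
assert matmul(S_inv, matmul(A3, S)) == B3   # S^{-1} A^3 S = B^3
```

The assertion holds because the inner S S^{-1} factors cancel when the product S^{-1} A S is raised to a power.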
(f) Let A and B be n×n matrices such that B is similar to A. If A is invertible, then B is invertible. True. Write B = S^{-1} A S and use Theorem 2.4.7 to show that B^{-1} = S^{-1} A^{-1} S. (Note: This also shows that B^{-1} is similar to A^{-1}.)

(g) If A and B are invertible n×n matrices, then AB is invertible and (AB)^{-1} = A^{-1} B^{-1}. False. By Theorem 2.4.7, (AB)^{-1} = B^{-1} A^{-1}, and the order matters in general.

(h) Let V be the set of all unit vectors in R^n (that is, the set of all vectors x such that x · x = 1). Then V is a subspace of R^n. False. This actually fails all three properties for being a subspace. Most easily, 0 · 0 = 0 ≠ 1, so 0 ∉ V.

(i) If V is a subspace of R^n, then dim(V) ≤ n. True. See, for example, Theorem 3.2.8. (Note: In the context of Chapter 4, this is a special case of the result that if V is a subspace of W, then dim(V) ≤ dim(W), but don't worry about that now, since we haven't even defined what this means in an abstract vector space yet.)

(j) If the vectors v1, ..., vm ∈ R^n are linearly independent and the vector v ∈ R^n is not in span(v1, ..., vm), then the vectors v1, ..., vm, v are linearly independent. True. We know that none of v1, ..., vm is redundant, since they are linearly independent, and v can't be redundant, since if it were, it would be in span(v1, ..., vm). Aside: This is one trick for finding a basis of a subspace of R^n: pick any nonzero vector in the space. If it doesn't span the space, pick a vector not in its span and add it to the set of vectors you've picked. Then repeat until you end up with a set of vectors that spans the subspace. Since you can find at most n linearly independent vectors in R^n, this process is guaranteed to terminate after finitely many steps, and it is constructed in such a way that the set of vectors you end up with is guaranteed both to span the subspace and to be linearly independent.

(k) If the ith column of an n×m matrix A is equal to the jth column of A for some i ≠ j, then the vector e_i - e_j is in the kernel of A. True.
If the columns of A are v_k for k = 1, ..., m (so that v_i = v_j), then A(e_i - e_j) = A e_i - A e_j = v_i - v_j = 0.

(l) Suppose the vectors v1, ..., vm ∈ R^n are linearly independent. Define the vectors w_j = v1 + ... + v_j for j = 1, ..., m to be the sum of the first j vectors from v1, ..., vm (so, for example, w1 = v1, w2 = v1 + v2, ..., wm = v1 + ... + vm). Then the vectors w1, ..., wm are linearly independent. True. Use the fact that v1, ..., v_j are linearly independent to show that the vector w_j is not a linear combination of w1, ..., w_{j-1} for j = 2, ..., m, and then apply (j) repeatedly.

(m) Let v1, ..., vm, v, w ∈ R^n. If v is not a linear combination of v1, ..., vm and w is not a linear combination of v1, ..., vm, then v + w is not a linear combination of v1, ..., vm. False. For example, let w be any vector that isn't a linear combination of v1, ..., vm and then let v = v1 - w; then neither v nor w is in span(v1, ..., vm), but v + w = v1 is.
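The counterexample in (m) can be made completely concrete. A small sketch in R^2 with v1 = e1 (my own choice of vectors, not from the review):

```python
# Sketch of the counterexample in (m): span(v1) is the x-axis, w = e2 is not
# on it, and v = v1 - w isn't either, yet v + w = v1 lands back in the span.

def in_span_of_e1(x):
    """Membership in span([1, 0]), the x-axis: second coordinate is zero."""
    return x[1] == 0

v1 = [1, 0]
w = [0, 1]
v = [v1[0] - w[0], v1[1] - w[1]]   # v = v1 - w = [1, -1]

assert not in_span_of_e1(w)                         # w is not a combination of v1
assert not in_span_of_e1(v)                         # neither is v
assert in_span_of_e1([v[0] + w[0], v[1] + w[1]])    # but v + w = v1 is
```

The same construction works for any v1, ..., vm: once w avoids the span, v = v1 - w must avoid it too, while their sum falls back in.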