1. Let A = [3 2; -5 5]. Find all (complex) eigenvalues and eigenvectors of A.

The eigenvalues are the roots of the characteristic polynomial, det(A - λI). We can compute

A - λI = [3-λ 2; -5 5-λ],

from which det(A - λI) = (3-λ)(5-λ) - (2)(-5) = λ^2 - 8λ + 15 + 10 = λ^2 - 8λ + 25. Set this equal to zero to get λ^2 - 8λ + 25 = 0. Use the quadratic formula and we have

λ = (-(-8) ± sqrt((-8)^2 - 4(1)(25))) / (2(1)) = (8 ± sqrt(-36)) / 2 = 4 ± 3i.

The eigenspace corresponding to λ = 4 + 3i is the null space of

A - (4 + 3i)I = [-1-3i 2; -5 1-3i].

Since λ is an eigenvalue of A, the matrix must be singular, so the second row is a scalar multiple of the first. The first row then gives us (-1-3i)x1 + 2x2 = 0, from which x2 = ((1+3i)/2)x1, and so [2; 1+3i] is an eigenvector corresponding to λ = 4 + 3i. We can replace i by -i everywhere to get that [2; 1-3i] is an eigenvector of A corresponding to λ = 4 - 3i. These are bases for their respective eigenspaces, so the eigenvectors corresponding to λ = 4 + 3i are all nonzero scalar multiples of [2; 1+3i]. Similarly, the eigenvectors corresponding to λ = 4 - 3i are all nonzero scalar multiples of [2; 1-3i]. The "nonzero" matters, as the zero vector is never an eigenvector of anything.

Full credit was given for finding one eigenvector corresponding to each eigenvalue.
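As a quick sanity check (an illustration added here, not part of the graded solution), the eigenpairs above can be verified in a few lines of Python, using the matrix from the problem:

```python
# Numerical check of Problem 1 with A = [[3, 2], [-5, 5]].
import cmath

A = [[3, 2], [-5, 5]]

# Roots of the characteristic polynomial λ^2 - 8λ + 25.
disc = (-8) ** 2 - 4 * 1 * 25
lam1 = (8 + cmath.sqrt(disc)) / 2   # 4 + 3i
lam2 = (8 - cmath.sqrt(disc)) / 2   # 4 - 3i

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Claimed eigenvectors [2, 1+3i] and [2, 1-3i].
v1 = [2, 1 + 3j]
v2 = [2, 1 - 3j]

# A v should equal λ v for each eigenpair.
for lam, v in [(lam1, v1), (lam2, v2)]:
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))

print(lam1, lam2)  # (4+3j) (4-3j)
```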
2. Find bases for the row space and column space of the matrix

3 3 3 5 3 4

We start by putting the matrix in row echelon form.

3 3 R 3 5 +R 3 4 3R 3 5 5 7 5 5 7 8 8 3.4 5 5 7 8 8 3.4 9. 3.4. 5R +8R 3.4 9. swap swap

From this, we can see that the first, second, and fourth columns are the pivot columns, so the corresponding columns in the original matrix form a basis for the column space:

3, 3, 3 5 4
The nonzero rows of the matrix in row echelon form are a basis for the row space. {[ 3 ], [.4], [ ]} The top three rows of the original matrix do not form a basis for the row space, however, as the first row is the sum of the second and third rows, so they are not linearly independent.

I was surprised at how many different answers people gave for the row space. Most of the answers were correct, too. The final answer depends on how far you go toward reduced row echelon form before deciding that it's obvious which columns are the pivot columns and stopping.
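The pivot-column procedure used above can be sketched in code. The matrix M below is a hypothetical stand-in (the exam's actual entries are not reproduced here), built so that, like the exam matrix, its first row is the sum of the second and third rows and its pivot columns come out to be the first, second, and fourth:

```python
# Sketch of the pivot-column computation from Problem 2, in exact arithmetic.
# NOTE: M is a stand-in matrix (an assumption), not the actual exam matrix.
from fractions import Fraction

def pivot_columns(M):
    """Row-reduce a copy of M to echelon form and return the pivot column indices."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        # Look for a usable (nonzero) pivot entry at or below row r.
        pr = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pr is None:
            continue  # no pivot in this column
        M[r], M[pr] = M[pr], M[r]      # row swap
        for i in range(r + 1, rows):   # eliminate entries below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots

M = [[1, 2, 3, 1],   # row 1 = row 2 + row 3
     [1, 1, 2, 0],
     [0, 1, 1, 1],
     [1, 2, 3, 3]]

print(pivot_columns(M))  # [0, 1, 3] -> columns 1, 2, and 4 are pivot columns
```

The corresponding columns of the original M (not of the echelon form) would then be a basis for the column space, exactly as in the solution above.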
3. Let L : P2 → P2 be the linear transformation defined by L(p(x)) = xp'(x), where p'(x) is the derivative of p(x). Let S = {1, 1 + x, 1 + x + x^2} be an ordered basis for P2. Find the matrix that represents L with respect to the basis S.

We apply L to the vectors in S and compute L(1) = 0, L(1 + x) = x, and L(1 + x + x^2) = x + 2x^2. The matrix should have columns consisting of [L(1)]_S, [L(1 + x)]_S, and [L(1 + x + x^2)]_S. The first of these is trivial, as [0]_S = [0; 0; 0]. For the others, we need to compute [x]_S and [x + 2x^2]_S. There are various ways to do this, and if you could write each of these as a linear combination of the vectors in S by hand, that was fine. A more systematic way is to apply the natural isomorphism M : P2 → R^3 given by M(a + bx + cx^2) = [a; b; c], so that we can do all computations in R^3. If we do this, then our basis S becomes [1; 0; 0], [1; 1; 0], and [1; 1; 1], L(1 + x) = x becomes [0; 1; 0], and L(1 + x + x^2) = x + 2x^2 becomes [0; 1; 2]. Thus, we wish to write [0; 1; 0] and [0; 1; 2] as linear combinations of [1; 0; 0], [1; 1; 0], and [1; 1; 1]. If we make a matrix A = [1 1 1; 0 1 1; 0 0 1], then this is equivalent to solving Ax = b for each of b = [0; 1; 0] and b = [0; 1; 2]. You can do this by row operations. It turns out that A^-1 is pretty easy to compute by cofactors, so we can compute A^-1 = [1 -1 0; 0 1 -1; 0 0 1], and then the solution is x = A^-1 b. From this, we compute

[L(1 + x)]_S = A^-1 [0; 1; 0] = [-1; 1; 0] and [L(1 + x + x^2)]_S = A^-1 [0; 1; 2] = [-1; -1; 2].
Now we have all of the columns for the matrix of L with respect to the basis S, so we make the matrix

[0 -1 -1; 0 1 -1; 0 0 2].
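The change-of-basis computation is easy to check mechanically. This short script (an illustration added here, not part of the original solution) represents polynomials by coefficient lists and rebuilds the matrix of L column by column:

```python
# Check of Problem 3: the matrix of L(p) = x p'(x) with respect to
# S = {1, 1+x, 1+x+x^2}, using coefficient lists [a, b, c] for a + bx + cx^2.

def L(p):
    """Apply L(p) = x * p'(x): p' = b + 2c x, so x p' = 0 + b x + 2c x^2."""
    a, b, c = p
    return [0, b, 2 * c]

S = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]  # coordinates of 1, 1+x, 1+x+x^2

def coords_in_S(v):
    """Solve c1*(1) + c2*(1+x) + c3*(1+x+x^2) = v by back substitution."""
    c3 = v[2]
    c2 = v[1] - c3
    c1 = v[0] - c2 - c3
    return [c1, c2, c3]

# The j-th column of the matrix is [L(S_j)]_S.
cols = [coords_in_S(L(p)) for p in S]
matrix = [[cols[j][i] for j in range(3)] for i in range(3)]
print(matrix)  # [[0, -1, -1], [0, 1, -1], [0, 0, 2]]
```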
4. Let A = [3 5 -3; 0 -2 -1; 0 0 2]. Find matrices P and D with D diagonal such that either A = PDP^-1 or D = P^-1AP. Be sure to specify whether you want A = PDP^-1 or D = P^-1AP. You are not required to compute P^-1.

Since A is upper triangular, we can read off the eigenvalues as the numbers on its diagonal and get λ = 3, -2, 2. The matrix D should be a diagonal matrix with the eigenvalues on the diagonal, so D = [3 0 0; 0 -2 0; 0 0 2]. The matrix P should have its columns be eigenvectors corresponding to λ = 3, -2, and 2, respectively. Since A is a 3 × 3 matrix with three distinct eigenvalues, each of the eigenspaces must have dimension 1, and it suffices to find an eigenvector for each eigenvalue.

For λ = 3, we have A - 3I = [0 5 -3; 0 -5 -1; 0 0 -1]. The first column is clearly not a pivot column, so x1 can be anything. Since we only need one eigenvector, let's take x1 = 1. Back substitution yields x3 = 0 and x2 = 0, from which we get that [1; 0; 0] is an eigenvector.

For λ = -2, we have A - (-2)I = [5 5 -3; 0 0 -1; 0 0 4]. From this, it is clear that the second column is not a pivot column, so x2 can be anything. Back substitution quickly yields x3 = 0, so the top equation gives us 5x1 + 5x2 + 0 = 0, and so x1 = -x2. If we set x2 = 1, we get x1 = -1, and [-1; 1; 0] is an eigenvector.

For λ = 2, we have A - 2I = [1 5 -3; 0 -4 -1; 0 0 0]. This time, the third column is not a pivot column, so x3 can be anything. The second row gives us -4x2 - x3 = 0, from which x3 = -4x2. One easy solution to this is x3 = 4, x2 = -1. The top row gives us x1 + 5x2 - 3x3 = 0, from which x1 = 3x3 - 5x2 = 3(4) - 5(-1) = 17. Thus, [17; -1; 4] is an eigenvector.

If we take P = [1 -1 17; 0 1 -1; 0 0 4], then we get AP = PD, from which D = P^-1AP.
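As a check on the claim AP = PD (again an add-on, not part of the graded solution, and using the entries of A, P, and D as given above), the two products can be multiplied out with exact integer arithmetic:

```python
# Check of Problem 4: verify AP = PD for the claimed P and D = diag(3, -2, 2).

def matmul(X, Y):
    """3x3 integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[3, 5, -3],
     [0, -2, -1],
     [0, 0, 2]]
P = [[1, -1, 17],
     [0, 1, -1],
     [0, 0, 4]]
D = [[3, 0, 0],
     [0, -2, 0],
     [0, 0, 2]]

# Column j of AP is A times the j-th eigenvector; column j of PD is
# the j-th eigenvector scaled by its eigenvalue. They should agree.
assert matmul(A, P) == matmul(P, D)
print(matmul(A, P))  # [[3, 2, 34], [0, -2, -2], [0, 0, 8]]
```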
5. Find the best fit line in the sense of ordinary least squares to the points (1, 2), (2, 1), and (3, 1).

The usual equation for a line is y = mx + b. If we plug in the three points, we get 2 = m + b, 1 = 2m + b, and 1 = 3m + b. These give us a system of equations

[1 1; 2 1; 3 1] [m; b] = [2; 1; 1].

The best solution in the sense of least squares is x̂ = (A^T A)^-1 A^T b, where A = [1 1; 2 1; 3 1] and b = [2; 1; 1]. We can compute

A^T A = [1 2 3; 1 1 1] [1 1; 2 1; 3 1] = [14 6; 6 3],
det(A^T A) = (14)(3) - (6)(6) = 6,
(A^T A)^-1 = (1/6) [3 -6; -6 14],
A^T b = [1 2 3; 1 1 1] [2; 1; 1] = [7; 4],

from which

x̂ = (A^T A)^-1 A^T b = (1/6) [3 -6; -6 14] [7; 4] = (1/6) [-3; 14] = [-1/2; 7/3].

Therefore, the constants for the best fit line are [m; b] = [-1/2; 7/3], so m = -1/2 and b = 7/3, and the line is y = -x/2 + 7/3.

Scoring on this problem ended up being close to binary, as it wasn't hard if you knew how, but a little under half of the class didn't.
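The normal-equation arithmetic can be double-checked with exact fractions. This snippet (an illustration, assuming the data points as printed above) reproduces m and b:

```python
# Check of Problem 5: solve the normal equations (A^T A) x = A^T b exactly
# for the model y = m x + b fitted to the points (1,2), (2,1), (3,1).
from fractions import Fraction

xs = [1, 2, 3]
ys = [2, 1, 1]

sxx = sum(x * x for x in xs)              # 14, the (1,1) entry of A^T A
sx = sum(xs)                              # 6, the off-diagonal entry
n = len(xs)                               # 3, the (2,2) entry
sxy = sum(x * y for x, y in zip(xs, ys))  # 7, first entry of A^T b
sy = sum(ys)                              # 4, second entry of A^T b

det = sxx * n - sx * sx                   # det(A^T A) = 6

# x_hat = (A^T A)^-1 A^T b, written out entrywise with the adjugate.
m = Fraction(n * sxy - sx * sy, det)
b = Fraction(-sx * sxy + sxx * sy, det)

print(m, b)  # -1/2 7/3
```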
6. Let L : R^n → R^m be a linear transformation with ker L = {0}. Show that m ≥ n.

By Theorem 6.3, L(x) = Ax for some m × n matrix A. The kernel of L is the null space of A, so ker L = {0} means that the null space of A is {0}. This means that Ax = 0 has only the trivial solution x = 0. Therefore, every column of A must be a pivot column. A has n columns, and hence n pivot columns. Each pivot column requires a pivot in a distinct row, so A has at least n rows. Since the number of rows of A is m, we have m ≥ n.

Another approach is to cite Theorem 6.6, which states that dim ker L + dim range L = dim V. From the setup, we have V = R^n, so dim V = n. If ker L = {0}, then dim ker L = dim {0} = 0. Plugging these in, we get dim range L = n. The range of L is a subspace of R^m, so we have dim range L ≤ dim R^m = m, from which n ≤ m.

The first solution was the intended solution to this problem, though I was aware that there are a number of ways to do the problem. A number of students tried something along the lines of the second solution, but mostly didn't catch that the range of L is a subspace of R^m.
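The contrapositive of the first argument says that when n > m, some nonzero vector must land in the kernel. A small illustration (the 2 × 3 matrix here is an arbitrary example, not from the exam):

```python
# Illustration for Problem 6: a 2x3 matrix (m = 2 < n = 3) cannot have
# trivial kernel; with at most 2 pivot columns, one variable is free.
from fractions import Fraction

A = [[1, 2, 3],
     [4, 5, 6]]

# Set the free variable x3 = 1 and solve the remaining 2x2 system
#   x1 + 2 x2 = -3
# 4 x1 + 5 x2 = -6
# by Cramer's rule.
det = Fraction(1 * 5 - 2 * 4)                      # -3
x1 = (Fraction(-3) * 5 - 2 * Fraction(-6)) / det   # 1
x2 = (1 * Fraction(-6) - Fraction(-3) * 4) / det   # -2
x = [x1, x2, Fraction(1)]

# A x = 0 with x nonzero, so the kernel is not {0}.
assert all(sum(A[i][j] * x[j] for j in range(3)) == 0 for i in range(2))
print(x)
```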
7. Let A be a matrix with only one eigenvalue, whether real or complex. Show that A is diagonalizable if and only if A is a scalar matrix.

Suppose first that A is diagonalizable. Then A = PDP^-1 for some diagonal matrix D. Every entry on the diagonal of D must be an eigenvalue of A. Since A has only one eigenvalue, say λ, all of the entries on the diagonal of D are the same. Therefore, D = λI is a scalar matrix. We can compute A = PDP^-1 = P(λI)P^-1 = P(λP^-1) = λ(PP^-1) = λI, so A is a scalar matrix.

For the converse, if A is a scalar matrix, then it is diagonal, and we can take D = A and P = I to get PDP^-1 = IAI^-1 = A, so A is diagonalizable.

Instead of showing that diagonalizable implies scalar, we can show that not scalar implies not diagonalizable. If A is not a scalar matrix, but has λ as its only eigenvalue, then A - λI ≠ O, for otherwise we would have A = λI. If A is n × n, then A - λI has rank at least 1, and hence nullity at most n - 1. Therefore, there are at most n - 1 linearly independent eigenvectors corresponding to the eigenvalue λ. Since λ is the only eigenvalue, there are not n linearly independent eigenvectors of A, and so A is not diagonalizable.

When I wrote this problem, I had initially thought of making it ask you to show that diagonalizable implies scalar for a matrix with only one eigenvalue. But then I thought, well, the converse is completely trivial, as you only need to observe that a scalar matrix is already diagonal, and hence diagonalizable. So the idea was to make it an if and only if problem to pad it with a few easy points. I didn't expect more students to be able to do the hard direction than the easy one.
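A concrete instance of the "not scalar implies not diagonalizable" direction (a standard illustrative example, not from the exam): a triangular 2 × 2 matrix with a repeated eigenvalue and a nonzero off-diagonal entry has only a one-dimensional eigenspace.

```python
# Illustration for Problem 7: A = [[5, 1], [0, 5]] is triangular, so its only
# eigenvalue is 5, but it is not scalar, so it should not be diagonalizable.
lam = 5
A = [[lam, 1], [0, lam]]

# A - lam*I is nonzero, so its rank is at least 1 and its nullity at most 1.
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]
assert B == [[0, 1], [0, 0]]

# Any eigenvector [x1, x2] satisfies B [x1, x2] = 0, which forces x2 = 0,
# so the eigenspace is spanned by [1, 0] alone: one vector short of a basis.
v = [1, 0]
assert [sum(B[i][j] * v[j] for j in range(2)) for i in range(2)] == [0, 0]
print("independent eigenvectors:", 1)
```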
8. Let A be a matrix. It can be shown that det(A^T A) and det(A A^T) are always well-defined. Either prove that det(A^T A) = det(A A^T) or else give a counterexample.

Let A = [1; 0]. We can compute A^T A = [1 0] [1; 0] = [1], from which det(A^T A) = 1. We can also compute A A^T = [1; 0] [1 0] = [1 0; 0 0], from which det(A A^T) = 0. Therefore, det(A^T A) ≠ det(A A^T).

If you assume that A is square, then it is always true that det(A^T A) = det(A A^T). However, the problem does not assert that A is square. Assuming that A is square and proceeding to prove that det(A^T A) = det(A A^T) would get you half credit.

Only three people got this problem right. If you pick A to be a non-square matrix whose rows or columns form a linearly independent set of vectors, it will be a counterexample. In particular, if you pick a non-square matrix A and fill in numbers at random, it will usually be a counterexample. So the statement isn't just barely false, but wildly false, unless you assume that all matrices are square, as most of the class did.
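The counterexample is quick to verify by direct multiplication (a check added here, using the same A = [1; 0]):

```python
# Check of Problem 8's counterexample with the 2x1 column A = [1; 0].
A = [[1], [0]]   # 2x1
At = [[1, 0]]    # its transpose, 1x2

def matmul(X, Y):
    """Generic matrix product for lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AtA = matmul(At, A)   # [[1]], a 1x1 matrix
AAt = matmul(A, At)   # [[1, 0], [0, 0]], a singular 2x2 matrix

det_AtA = AtA[0][0]
det_AAt = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]

assert det_AtA == 1 and det_AAt == 0
print(det_AtA, det_AAt)  # 1 0
```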