Solution Set 7, 18.06 Fall '12

1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy)

Solution. Let n ≥ 3, and let A be an n × n matrix whose i, j entry is i + j. To show that det A = 0, it suffices to show that A is singular (rule 8, page 248). To show that A is singular, it suffices to find a nonzero vector in the nullspace of A. Here it is: let

    x = (1, -2, 1, 0, ..., 0),

with n - 3 zeros at the end, and let's check that x is in N(A). The ith row of A is (i + 1, i + 2, i + 3, ..., i + n), and its dot product with x is

    (i + 1) · 1 + (i + 2) · (-2) + (i + 3) · 1 + 0 + ··· + 0,

again with n - 3 zeros, which equals 0, so indeed x is perpendicular to every row of A, and therefore x lies in N(A), as claimed. This proves that A is singular, and therefore that det A = 0.

2. Do Problem 29 from 5.1.

Solution. The proof is perfectly valid if A is an invertible matrix. However, the formula P = A(A^T A)^{-1} A^T is applicable for any matrix A of full column rank; note that A does not have to be a square matrix. If A is not a square matrix, then neither is A^T, and the matrices A and A^T do not have determinants, so the expression

    |A| · (1 / |A^T A|) · |A^T|

is utterly meaningless. (The formula det(AB) = det(A) det(B) applies only when A and B are both square matrices of the same size.)

3. Do Problem 1 from 5.2.

Solution. Compute the determinants using the big formula (equation 4 on page 257):

    det A = 1·1·1 + 2·2·3 + 3·3·2 - 1·2·2 - 2·3·1 - 3·1·3 = 1 + 12 + 18 - 4 - 6 - 9 = 12
    det B = 1·4·7 + 2·4·5 + 3·4·6 - 1·4·6 - 2·4·7 - 3·4·5 = 28 + 40 + 72 - 24 - 56 - 60 = 0
    det C = 1·1·0 + 1·0·1 + 1·1·0 - 1·0·0 - 1·1·0 - 1·1·1 = 0 + 0 + 0 - 0 - 0 - 1 = -1

Since det A ≠ 0, the matrix A is invertible (rule 8, page 248), which implies that the rows of A are independent. The rows of C are independent for the same reason. Since det B = 0, on the other hand, B is singular, so its rows are dependent. We can double-check this by noting that (1, 1, -1) lies in the left-nullspace of B.
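The Problem 1 argument is easy to confirm numerically. The sketch below (a verification aid, not part of the proof) builds the i + j matrix for the illustrative case n = 5 and checks both claims:

```python
import numpy as np

# Build the n x n matrix with entries A[i, j] = i + j (1-based indices), for n = 5.
n = 5
A = np.array([[i + j for j in range(1, n + 1)] for i in range(1, n + 1)], dtype=float)

# The vector x = (1, -2, 1, 0, ..., 0) should lie in the nullspace N(A).
x = np.zeros(n)
x[0], x[1], x[2] = 1.0, -2.0, 1.0

print(np.allclose(A @ x, 0))           # True: x is in N(A), so A is singular
print(abs(np.linalg.det(A)) < 1e-9)    # True: hence det A = 0 (up to roundoff)
```

Any n ≥ 3 works the same way, since each row's dot product with x telescopes to (i + 1) - 2(i + 2) + (i + 3) = 0.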
4. Do Problem 5 from 5.2.

Solution. First note that, by the big formula (compare example 5 on page 259), the four powers

    I, P, P^2, P^3    (*)

of the 4 × 4 cyclic permutation matrix P have determinants 1, -1, 1, -1 respectively; in particular, all four determinants are nonzero. These four matrices have disjoint supports: each of the 16 entry positions is nonzero in exactly one of them.

Let's consider the first task: place the smallest number of zeros in a 4 × 4 matrix A that will guarantee det A = 0. Four zeros are enough, if we place them all in one row (rule 6, page 247). However, that observation alone does not constitute a complete solution, because we must also prove that three zeros, no matter where they are placed, cannot force det A to be zero. To prove this, note that no matter which three entries in A are forced to be zero, we can still fill in the rest of the matrix A to form one of the four matrices in (*). Indeed, because the supports are disjoint, each of the three prescribed zeros prevents A from equaling only one of the four matrices in (*), so, no matter where the three prescribed zeros are placed, at least one of the four matrices in (*) can still be A. So 4 is the smallest number of zeros that can be placed in A to force det A = 0.

Now let's consider the second task: place as many zeros as possible while still allowing det A ≠ 0. The examples in (*) show that it's possible to place 12 zeros while still allowing det A ≠ 0. That again is not a complete solution, because we must also prove that 13 is too many. To do so, note that, if any 13 of the entries of A are zero, then the pigeonhole principle says that at least one of the four rows of A will be forced to be all zeros, and then det A is forced to be zero (rule 6, page 247). So 12 is the maximum number of zeros that A can have, if det A ≠ 0.

5. Do Problem 7 from 5.2.

Solution. The total number of 5 × 5 permutation matrices is 5! = 120. They are all obtained from the identity matrix by row swaps, so they all have determinant ±1. We claim that exactly 60 of them have determinant +1, and 60 of them have determinant -1.
To show this, let us partition the 120 permutation matrices into 60 pairs, where two permutation matrices form a pair if they're related to each other by the exchange of rows 1 and 2. For example, the identity matrix and the permutation matrix obtained from it by exchanging rows 1 and 2 are paired with each other. In each pair, the determinants of the two matrices have opposite signs (rule 2, page 246), so one of them equals +1 and the other equals -1. Since we have 60 pairs, there must be 60 permutation matrices of determinant +1, and 60 permutation matrices of determinant -1.
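The 60/60 split can also be confirmed by brute force. The short check below (a numerical aside, not part of the argument) builds all 120 permutation matrices and tallies their determinants:

```python
import numpy as np
from itertools import permutations

# Tally the determinants of all 5! = 120 permutation matrices of size 5 x 5.
counts = {1: 0, -1: 0}
for perm in permutations(range(5)):
    P = np.eye(5)[list(perm)]             # reorder the rows of I according to perm
    counts[round(np.linalg.det(P))] += 1  # determinant is exactly +1 or -1

print(counts)   # {1: 60, -1: 60}
```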
For the second part of the problem, consider the 5 × 5 cyclic permutation matrix

    A = [ 0 1 0 0 0 ]
        [ 0 0 1 0 0 ]
        [ 0 0 0 1 0 ]
        [ 0 0 0 0 1 ]
        [ 1 0 0 0 0 ].

If, starting from A, we exchange rows 1 and 5, then rows 2 and 5, then rows 3 and 5, and finally rows 4 and 5, we will arrive at the identity matrix, so det A = (-1)^4 det I = 1 (rule 2, page 246). This is not a complete solution, though, because we must also prove that fewer than 4 row exchanges cannot take us from A to the identity matrix. It is possible to prove this cleanly with a little bit of graph theory, but to avoid a lengthy digression, let us present an ad hoc argument. First note that A has no 1's at all along the main diagonal, and no row exchange can ever introduce more than two 1's onto the main diagonal where previously there were zeros. Since the identity matrix has five 1's on the main diagonal, we need at least 5/2, which rounds up to 3, row exchanges to transform A into the identity matrix. On the other hand, three row exchanges cannot possibly bring A to the identity, because that would imply that det A = (-1)^3 det I = -1, which is false. So indeed four exchanges are needed to go from A to the identity matrix.

6. Do Problem 17 from 5.2. (You are asked to show that the determinant of B_n is 1 for all n.)

Solution. We will prove by strong induction^1 that det B_n = 1 for all integers n ≥ 1. First note that

    det B_1 = det [ 1 ] = 1,    det B_2 = det [ 1 -1 ; -1 2 ] = 2 - 1 = 1,

so our claim is true for n ≤ 2. Now assume that n ≥ 3 and that our claim is true for B_1, B_2, ..., B_{n-1}; that is, det B_1 = det B_2 = ··· = det B_{n-1} = 1. We claim that, in this case, det B_n = 1 also. To show this, let's compute det B_n using cofactors in the last row: we have^2

    det B_n = (-1) · C_{n,n-1} + 2 · C_{n,n},    (*)

so let's compute C_{n,n-1} and C_{n,n}. For C_{n,n-1}, note that M_{n,n-1} has the block form

    M_{n,n-1} = [ B_{n-2}  0 ; ∗  -1 ]

^1 If the logical structure of a proof by induction is unfamiliar, please read, for example, http://en.wikipedia.org/wiki/Mathematical_induction#Complete_induction

^2 As usual, whenever 1 ≤ i, j ≤ n, we let M_{i,j} denote the submatrix of B_n obtained by throwing out row i and column j, and let C_{i,j} be the cofactor, i.e., C_{i,j} = (-1)^{i+j} det M_{i,j}.
where the ∗ is some 1 by (n - 2) block, which we don't care about. By cofactor expansion in the last column of M_{n,n-1}, we see that det M_{n,n-1} = (-1) · det B_{n-2}. Therefore, the cofactor C_{n,n-1} of our matrix B_n is given by

    C_{n,n-1} = (-1)^{n+(n-1)} det M_{n,n-1} = -det M_{n,n-1} = det B_{n-2}.    (**)

For C_{n,n}, note that M_{n,n} = B_{n-1}, so

    C_{n,n} = (-1)^{n+n} det M_{n,n} = det B_{n-1}.    (***)

Plugging (**) and (***) into (*), we find that

    det B_n = -det B_{n-2} + 2 det B_{n-1}.

By our induction hypothesis, det B_{n-2} = det B_{n-1} = 1, so we now know det B_n = -1 + 2 · 1 = 1, as claimed. This completes the induction and the proof.

7. Do Problem 2 from 5.3.

Solution. By Cramer's rule,

(a) y = det [ a 1 ; c 0 ] / det [ a b ; c d ] = -c / (ad - bc), and

(b) y = det [ a 1 c ; d 0 f ; g 0 i ] / det [ a b c ; d e f ; g h i ] = (fg - di) / D.

For the numerator in (b) it may be easiest to use cofactor expansion in the second column.

8. Do Problem 27 from 5.3.

Solution. The lengths of the two columns of

    [ cos θ  -r sin θ ; sin θ  r cos θ ]

are √(cos² θ + sin² θ) = 1 and √((-r sin θ)² + (r cos θ)²) = r. Since these two column vectors are also perpendicular, they form a 1 × r rectangle. Since the absolute value of J is the area of this rectangle, we know that J = ±r. In fact, a direct computation shows that J = +r.

9. Do Problem 39 from 5.3. (Hint. Try to relate the determinant of the cofactor matrix to the determinant of the actual matrix.)

Solution. Let C be the matrix of cofactors of A (page 270). (Please note that C can be defined for any square matrix A, invertible or not: the formula C_{ij} = (-1)^{i+j} det M_{ij} doesn't depend on the invertibility of any matrix.) In this problem, we are given the matrix C, and we want to find A. The answer is as follows. If C is invertible, then A = (det C)^{1/3} (C^T)^{-1}. If C is singular, then it is not possible to determine A exactly, but at least we know that A is singular; see the discussion at the end for more precise information about A. To justify our answer, we proceed in three steps. Step 1: find the determinant of A, as suggested in the hint.
We first claim that

    det A = (det C)^{1/3}, or, equivalently, det C = (det A)^3.    (*)
To prove this, first note that the formula (see page 271)

    A C^T = (det A) I    (**)

holds in general, whether A is invertible or not. Since det(A C^T) = (det A)(det C^T) and det(C^T) = det C (see rules 8 and 10, pages 248-249), we have

    (det A)(det C) = det(A C^T) = det((det A) I) = (det A)^4 det I.

(Pay close attention to the last step: det A is a scalar, and we are taking its 4th power because I is the 4 × 4 identity matrix in this context; see the last paragraph on page 246.) Since det I = 1, we get (det A)(det C) = (det A)^4, or in other words

    (det A)(det C - (det A)^3) = 0,    (***)

so either det A = 0 or det C = (det A)^3. If det A ≠ 0 then we know that (*) must hold, but, frustratingly, in the case that det A = 0, our equation (***) tells us nothing at all about det C. So if det A = 0, we must find another way to prove that (*) holds anyway, i.e., that det C = 0. We will present a proof by contradiction^3: suppose that det A = 0 but det C ≠ 0. Then det(C^T) ≠ 0, so C^T is an invertible 4 × 4 matrix. We may therefore multiply both sides of (**) by (C^T)^{-1} on the right:

    A = (det A)(C^T)^{-1}.

But we are assuming det A = 0, so this equation says that A is the zero matrix! Well, the cofactor matrix for the zero matrix is also the zero matrix, so C = 0, and in particular det C = 0, which contradicts our assumption that det C ≠ 0. So our supposition that det A = 0 but det C ≠ 0 was false, and in fact we know that, if det A = 0, then det C = 0 also. In summary, we have proven that (*) holds no matter what: if det A ≠ 0 then we proved this with (***), and if det A = 0 then we used a proof by contradiction.

Step 2: Solution in the case of invertible C. If det C ≠ 0, then from Step 1 we know that det A = (det C)^{1/3}, and we may multiply both sides of (**) by (C^T)^{-1} on the right to find

    A = (det A)(C^T)^{-1} = (det C)^{1/3} (C^T)^{-1}.

Step 3: Solution in the case of singular C. If det C = 0, then we know from Step 1 that det A = 0, but it is impossible to determine A exactly.
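Before analyzing the singular case further, the Step 2 formula lends itself to a quick numerical sanity check (an aside, not part of the argument); the matrix below is a randomly chosen, almost surely invertible, 4 × 4 example:

```python
import numpy as np

# Random (almost surely invertible) 4 x 4 matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Cofactor matrix: for invertible A, the identity A C^T = (det A) I
# gives C = det(A) * inverse(A)^T.
C = np.linalg.det(A) * np.linalg.inv(A).T

print(np.isclose(np.linalg.det(C), np.linalg.det(A) ** 3))   # True: det C = (det A)^3
A_recovered = np.cbrt(np.linalg.det(C)) * np.linalg.inv(C.T)
print(np.allclose(A_recovered, A))   # True: A = (det C)^{1/3} (C^T)^{-1}
```

Note that np.cbrt takes the real cube root, so the check works whether det C is positive or negative.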
Nevertheless, we can obtain some partial information about A. First we make a few observations:

• If A has rank at most 2, then we claim C = 0. Indeed, in this case, every 3 × 3 submatrix of A has rank at most 2, and therefore is singular and has determinant 0. That means all cofactors of A are 0, i.e., C = 0.

• If A has rank 3, then we claim C has rank 1. Indeed, in this case, some 3 × 3 submatrix of A is invertible (see problem 5 on problem set 3, i.e., problem 12 from 3.3), and the corresponding entry of C will be nonzero, so C has rank at least 1. On the other hand, since det A = 0, (**) says that A C^T = 0, so the column space of C^T is contained in the 1-dimensional nullspace of A, so C also has rank at most 1.

^3 If the logical structure of a proof by contradiction is unfamiliar, please read, for example, http://en.wikipedia.org/wiki/Proof_by_contradiction
We may turn these bullet points around to conclude the following. It is impossible for C to have rank 2 or 3. If C = 0, then A is a matrix of rank at most 2, but nothing more can possibly be determined about A. If C has rank 1, then A has rank 3, and, while it is not possible to determine A completely, we can say a few more things about it. Choose 4 × 4 permutation matrices P_r and P_c such that the entry of the matrix C_1 := P_r C P_c in row 4, column 4 is nonzero; say this entry equals k. (It is always possible to find such P_r and P_c because not all entries of C are 0.) It turns out that C_1 is the cofactor matrix for A_1 := (det P_r)(det P_c) P_r A P_c, but let us leave the proof of this as an exercise to the reader. Since C_1 has rank 1, there exists a unique (column) vector of the form u = (u_1, u_2, u_3, 1) in the column space of C_1, and a unique (column) vector of the form v = (v_1, v_2, v_3, 1) in the column space of C_1^T. Then it must be that C_1 = k u v^T. Let B be the 3 × 3 matrix formed by the first three rows and columns of A_1. Since C_1 is the cofactor matrix for A_1, we know det B = k, and we claim

    A_1 = [ 1 0 0 ; 0 1 0 ; 0 0 1 ; -u_1 -u_2 -u_3 ] · B · [ 1 0 0 -v_1 ; 0 1 0 -v_2 ; 0 0 1 -v_3 ],    (****)

where the first factor is 4 × 3 and the last factor is 3 × 4. Indeed, note that, since A_1 C_1^T = 0 by (**), we know the nullspace of A_1 contains the row space of C_1, which is spanned by v, so A_1 v = 0. For similar reasons one can show A_1^T C_1 = 0, so the left-nullspace of A_1 contains the column space of C_1, which is spanned by u, so A_1^T u = 0. Now it suffices to note that the right-hand side of (****) is the only 4 × 4 matrix with v in its nullspace, u in its left-nullspace, and B as the 3 × 3 matrix formed by its first three rows and columns. In sum, all we can conclude about A is that it has the form

    A = (det P_r)^{-1} (det P_c)^{-1} P_r^{-1} A_1 P_c^{-1}
      = (det P_r)(det P_c) P_r^T [ 1 0 0 ; 0 1 0 ; 0 0 1 ; -u_1 -u_2 -u_3 ] · B · [ 1 0 0 -v_1 ; 0 1 0 -v_2 ; 0 0 1 -v_3 ] P_c^T

for some 3 × 3 matrix B of determinant k.
No two choices for B result in the same A, and one can check that any matrix A of the above form really must have cofactor matrix C, and so it is not possible to determine A exactly. This is probably more information than you wanted to know, but there you have it.

10. Do Problem 24 from 6.1.
Solution. One could use the big formula to compute

    det(A - λI) = (2 - λ)^3 + 8 + 8 - 4(2 - λ) - 4(2 - λ) - 4(2 - λ)
                = (2 - λ)^3 - 12(2 - λ) + 16
                = 8 - 12λ + 6λ^2 - λ^3 - 24 + 12λ + 16
                = -λ^3 + 6λ^2 = λ^2(6 - λ)

and conclude that the eigenvalues of A are 0, 0, and 6, but this would be tedious. Instead, note that since A is a 3 × 3 matrix of rank 1, it has a 2-dimensional nullspace, and that nullspace is, by definition, the space of eigenvectors corresponding to the eigenvalue 0. Therefore, the eigenvalue 0 occurs with multiplicity at least 2 (corresponding to the dimension of the nullspace), and we may write λ_1 = λ_2 = 0. Recall also that the sum of all three eigenvalues of A equals the trace of A (equation 6, page 289):

    λ_1 + λ_2 + λ_3 = 2 + 2 + 2,

so the remaining eigenvalue must be λ_3 = 6.

To find eigenvectors corresponding to the eigenvalue 0, we just have to find a basis for the nullspace of A. That's the same as the nullspace of the 1 × 3 matrix [ 2 1 2 ]. The second and third columns are free columns, and they correspond to the special solutions (-1/2, 1, 0) and (-1, 0, 1). So we may set x_1 = (-1/2, 1, 0) and x_2 = (-1, 0, 1); these are our first two eigenvectors, corresponding to the eigenvalue λ_1 = λ_2 = 0. (Of course, other solutions are possible, too; any basis for N(A) will do.)

To find an eigenvector x_3 corresponding to the eigenvalue λ_3 = 6, first note that Ax_3 must lie in the column space of A, which is spanned by the vector (1, 2, 1), so Ax_3 is a multiple of (1, 2, 1) whether we like it or not. If Ax_3 = 6x_3, that just means x_3 must itself be a multiple of (1, 2, 1). Well then, we may as well set x_3 = (1, 2, 1), and check that Ax_3 = 6x_3 indeed. (Any nonzero multiple of this x_3 will do, also.)
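These conclusions are easy to verify numerically. The check below assumes A is the rank-1 matrix [[2, 1, 2], [4, 2, 4], [2, 1, 2]], the matrix consistent with all of the computations above (the problem statement itself is not reproduced here):

```python
import numpy as np

# Assumed matrix for Problem 24: rank 1, trace 6, every row a multiple of (2, 1, 2).
A = np.array([[2.0, 1.0, 2.0],
              [4.0, 2.0, 4.0],
              [2.0, 1.0, 2.0]])

# Eigenvalues should be 0, 0, 6.
print(np.allclose(np.sort(np.linalg.eigvals(A).real), [0, 0, 6]))   # True

# The claimed eigenvectors: two from N(A) for lambda = 0, one for lambda = 6.
for x, lam in [((-0.5, 1, 0), 0), ((-1, 0, 1), 0), ((1, 2, 1), 6)]:
    x = np.array(x, dtype=float)
    print(np.allclose(A @ x, lam * x))   # True each time
```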