Solution Set 7, Fall '12


Solution Set 7, 18.06 Fall '12

1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy)

Solution. Let n >= 3, and let A be an n x n matrix whose i, j entry is i + j. To show that det A = 0, it suffices to show that A is singular (rule 8, page 248). To show that A is singular, it suffices to find a nonzero vector in the nullspace of A. Here it is: let

    x = (1, -2, 1, 0, ..., 0),   with n - 3 zeros at the end,

and let's check that x is in N(A). The ith row of A is (i + 1, i + 2, i + 3, ..., i + n), and its dot product with x is

    (i + 1) * 1 + (i + 2) * (-2) + (i + 3) * 1 + 0 + ... + 0,

which equals 0, so indeed x is perpendicular to every row of A, and therefore x lies in N(A), as claimed. This proves that A is singular, and therefore that det A = 0.

2. Do Problem 29 from 5.1.

Solution. The proof is perfectly valid if A is an invertible matrix. However, the formula P = A(A^T A)^{-1} A^T is applicable for any matrix A of full column rank; note that A does not have to be a square matrix. If A is not a square matrix, then neither is A^T, and the matrices A and A^T do not have determinants, so the expression

    det(A) * (1 / det(A^T A)) * det(A^T)

is utterly meaningless. (The formula det(AB) = det(A) det(B) applies only when A and B are both square matrices of the same size.)

3. Do Problem 1 from 5.2.

Solution. Compute the determinants using the big formula (equation 4 on page 257):

    det A = 1*1*1 + 2*2*3 + 3*3*2 - 1*2*2 - 2*3*1 - 3*1*3 = 1 + 12 + 18 - 4 - 6 - 9 = 12
    det B = 1*4*7 + 2*4*5 + 3*4*6 - 1*4*6 - 2*4*7 - 3*4*5 = 28 + 40 + 72 - 24 - 56 - 60 = 0
    det C = 1*1*0 + 1*0*1 + 1*1*0 - 1*0*0 - 1*1*0 - 1*1*1 = 0 + 0 + 0 - 0 - 0 - 1 = -1

Since det A != 0, the matrix A is invertible (rule 8, page 248), which implies that the rows of A are independent. The rows of C are independent for the same reason. Since det B = 0, on the other hand, B is singular, so its rows are dependent. We can double-check this by noting that (1, 1, -1) lies in the left-nullspace of B.
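The nullspace argument in Problem 1 is easy to check numerically. The sketch below (mine, not part of the original solutions) builds the i + j matrix for several n, confirms Ax = 0 for the vector x from the solution, and confirms det A = 0 with an exact-arithmetic determinant:

```python
from fractions import Fraction

def det(m):
    """Determinant by Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)          # no pivot in this column: singular
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign                # a row swap flips the sign
        d *= m[col][col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    return sign * d

for n in range(3, 8):
    A = [[i + j for j in range(1, n + 1)] for i in range(1, n + 1)]
    x = [1, -2, 1] + [0] * (n - 3)      # the vector from the solution
    Ax = [sum(A[i][k] * x[k] for k in range(n)) for i in range(n)]
    assert Ax == [0] * n                # x lies in N(A)
    assert det(A) == 0                  # so A is singular
print("ok")
```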

4. Do Problem 5 from 5.2.

Solution. First note that, by the big formula (compare example 5 on page 259), the following four permutation matrices have determinants 1, -1, 1, -1, and their 1-entries occupy pairwise disjoint positions, covering all sixteen positions between them:

    [ 1 0 0 0 ]   [ 0 1 0 0 ]   [ 0 0 1 0 ]   [ 0 0 0 1 ]
    [ 0 1 0 0 ]   [ 0 0 1 0 ]   [ 0 0 0 1 ]   [ 1 0 0 0 ]
    [ 0 0 1 0 ]   [ 0 0 0 1 ]   [ 1 0 0 0 ]   [ 0 1 0 0 ]
    [ 0 0 0 1 ]   [ 1 0 0 0 ]   [ 0 1 0 0 ]   [ 0 0 1 0 ]   (*)

Let's consider the first task: place the smallest number of zeros in a 4 x 4 matrix A that will guarantee det A = 0. Four zeros are enough, if we place them all in one row (rule 6, page 247). However, that observation alone does not constitute a complete solution, because we must also prove that three zeros, no matter where they are placed, cannot force det A to be zero. To prove this, note that no matter which three entries in A are forced to be zero, we can still fill in the rest of the matrix A to form one of the four matrices in (*) above. Indeed, each of the three prescribed zeros prevents A from equaling only one of the four matrices in (*), so, no matter where the three prescribed zeros are placed, at least one of the four matrices in (*) can still be A. So 4 is the smallest number of zeros that can be placed in A to force det A = 0.

Now let's consider the second task: place as many zeros as possible while still allowing det A != 0. The examples in (*) show that it's possible to place 12 zeros while still allowing det A != 0. That again is not a complete solution, because we must also prove that 13 is too many. To do so, note that, if any 13 of the entries of A are zero, then the pigeonhole principle says that at least one of the four rows of A will be forced to be all zeros, and then det A is forced to be zero (rule 6, page 247). So 12 is the maximum number of zeros that A can have, if det A != 0.

5. Do Problem 7 from 5.2.

Solution. The total number of 5 x 5 permutation matrices is 5! = 120. They are all obtained from the identity matrix by row swaps, so they all have determinant +-1. We claim that exactly 60 of them have determinant +1, and 60 of them have determinant -1.
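The counting arguments in Problem 4 can be verified by brute force. This sketch (mine, not from the original) uses the four cyclic-shift 4 x 4 permutation matrices, one concrete choice of four permutation matrices with pairwise disjoint 1-positions, as the argument requires. It checks that every placement of three zeros leaves at least one of them available, and that 13 zeros always force an all-zero row:

```python
from itertools import combinations

n = 4
# Supports of the four cyclic shifts: the 1 in row i sits in column
# (i + k) % 4 for shift k = 0, 1, 2, 3.  Together these sixteen
# positions partition the whole 4 x 4 grid.
supports = [{(i, (i + k) % n) for i in range(n)} for k in range(n)]
all_cells = {(i, j) for i in range(n) for j in range(n)}
assert set.union(*supports) == all_cells
assert sum(len(s) for s in supports) == 16

# Three prescribed zeros can never rule out all four matrices,
# so three zeros cannot force det A = 0.
for zeros in combinations(all_cells, 3):
    assert any(s.isdisjoint(zeros) for s in supports)

# With 13 zeros, some row holds at least ceil(13/4) = 4 of them,
# i.e. an entire row of zeros, which forces det A = 0.
for zeros in combinations(all_cells, 13):
    per_row = [sum(1 for (i, j) in zeros if i == r) for r in range(n)]
    assert max(per_row) == 4
print("ok")
```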
To show this, let us partition the 120 permutation matrices into 60 pairs, where two permutation matrices form a pair if they're related to each other by the exchange of rows 1 and 2. For example,

    [ 1 0 0 0 0 ]       [ 0 1 0 0 0 ]
    [ 0 1 0 0 0 ]       [ 1 0 0 0 0 ]
    [ 0 0 1 0 0 ]  and  [ 0 0 1 0 0 ]
    [ 0 0 0 1 0 ]       [ 0 0 0 1 0 ]
    [ 0 0 0 0 1 ]       [ 0 0 0 0 1 ]

are paired with each other. In each pair, the determinants of the two matrices have opposite signs (rule 2, page 246), so one of them equals +1 and the other is -1. Since we have 60 pairs, there must be 60 permutation matrices of determinant +1, and 60 permutation matrices of determinant -1.
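The 60/60 split just proved can also be confirmed by direct enumeration; this quick check (mine, not part of the original solutions) computes each permutation's sign from its inversion count:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation: (-1) raised to the number of inversions."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

signs = [sign(p) for p in permutations(range(5))]
assert len(signs) == 120                 # 5! permutation matrices in all
assert signs.count(1) == 60              # determinant +1
assert signs.count(-1) == 60             # determinant -1
print("ok")
```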

For the second part of the problem, consider the matrix

        [ 0 1 0 0 0 ]
        [ 0 0 1 0 0 ]
    A = [ 0 0 0 1 0 ]
        [ 0 0 0 0 1 ]
        [ 1 0 0 0 0 ]

If, starting from A, we exchange rows 1 and 5, then rows 2 and 5, then rows 3 and 5, and finally rows 4 and 5, we will arrive at the identity matrix, so det A = (-1)^4 det I = 1 (rule 2, page 246). This is not a complete solution, though, because we must also prove that any fewer than 4 row exchanges cannot take us from A to the identity matrix. It is possible to prove this cleanly with a little bit of graph theory, but to avoid a lengthy digression, let us present an ad hoc argument. First note that A has no 1's at all along the main diagonal, and no row exchange can ever introduce more than two 1's onto the main diagonal where previously there were zeros. Since the identity matrix has five 1's on the main diagonal, we need at least 5/2, which rounds up to 3, row exchanges to transform A into the identity matrix. On the other hand, three row exchanges cannot possibly bring A to the identity, because that would imply that det A = (-1)^3 det I = -1, which is false. So indeed four exchanges are needed to go from A to the identity matrix.

6. Do Problem 17 from 5.2. (You are asked to show that the determinant of B_n is 1 for all n.)

Solution. We will prove by strong induction [1] that det B_n = 1 for all integers n >= 1. First note that

    det B_1 = det [ 1 ] = 1,   det B_2 = det [  1 -1 ] = 1,
                                             [ -1  2 ]

so our claim is true for n <= 2. Now assume that n >= 3 and that our claim is true for B_1, B_2, ..., B_{n-1}; that is,

    det B_1 = det B_2 = ... = det B_{n-1} = 1.

We claim that, in this case, det B_n = 1 also. To show this, let's compute det B_n using cofactors in the last row: we have [2]

    det B_n = -1 * C_{n,n-1} + 2 * C_{n,n},   (*)

so let's compute C_{n,n-1} and C_{n,n}. For C_{n,n-1}, note that M_{n,n-1} has the block form

    M_{n,n-1} = [ B_{n-2}   0 ]
                [    *     -1 ]

[1] If the logical structure of a proof by induction is unfamiliar, please read, for example, http://en.wikipedia.org/wiki/Mathematical_induction#Complete_induction
[2] As usual, whenever 1 <= i, j <= n, we let M_{i,j} denote the submatrix of B_n obtained by throwing out row i and column j, and let C_{i,j} be the cofactor, i.e., C_{i,j} = (-1)^{i+j} det M_{i,j}.

where the * is some 1 by (n - 2) block, which we don't care about. By cofactor expansion in the last column of M_{n,n-1}, we see that det M_{n,n-1} = -1 * det B_{n-2}. Therefore, the cofactor C_{n,n-1} of our matrix B_n is given by

    C_{n,n-1} = (-1)^{n+(n-1)} det M_{n,n-1} = -det M_{n,n-1} = det B_{n-2}.   (**)

For C_{n,n}, note that M_{n,n} = B_{n-1}, so

    C_{n,n} = (-1)^{n+n} det M_{n,n} = det B_{n-1}.   (***)

Plugging (**) and (***) into (*), we find that

    det B_n = -det B_{n-2} + 2 det B_{n-1}.

By our induction hypothesis, det B_{n-2} = det B_{n-1} = 1, so we now know det B_n = -1 + 2 * 1 = 1, as claimed. This completes the induction and the proof.

7. Do Problem 2 from 5.3.

Solution. (a) By Cramer's rule,

    y = det [ a 1 ]  /  det [ a b ]  =  -c / (ad - bc),
            [ c 0 ]         [ c d ]

and (b)

    y = det [ a 1 c ]  /  det [ a b c ]  =  (fg - di) / D.
            [ d 0 f ]         [ d e f ]
            [ g 0 i ]         [ g h i ]

For the numerator in (b) it may be easiest to use cofactor expansion in the second column.

8. Do Problem 27 from 5.3.

Solution. The lengths of the two columns of

    [ cos θ   -r sin θ ]
    [ sin θ    r cos θ ]

are sqrt(cos^2 θ + sin^2 θ) = 1 and sqrt((-r sin θ)^2 + (r cos θ)^2) = r. Since these two column vectors are also perpendicular, they form a 1 x r rectangle. Since the absolute value of J is the area of this rectangle, we know that J = +-r. In fact, a direct computation shows that J = +r.

9. Do Problem 39 from 5.3. (Hint: try to relate the determinant of the cofactor matrix to the determinant of the actual matrix.)

Solution. Let C be the matrix of cofactors of A (page 270). (Please note that C can be defined for any square matrix A, invertible or not: the formula C_ij = (-1)^{i+j} det M_ij doesn't depend on the invertibility of any matrix.) In this problem, we are given the matrix C, and we want to find A. The answer is as follows. If C is invertible, then A = (det C)^{1/3} (C^T)^{-1}. If C is singular, then it is not possible to determine A exactly, but at least we know that A is singular; see the discussion at the end for more precise information about A. To justify our answer, we proceed in three steps.

Step 1: find the determinant of A, as suggested in the hint.
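The relation this step establishes, det C = (det A)^3 for 4 x 4 matrices, along with the identity A C^T = (det A) I that drives the whole problem, can be sanity-checked numerically. In this sketch (mine; the test matrix and the helper `cofactor_matrix` are not from the text) the cofactor matrix is computed from scratch:

```python
def det(m):
    """Determinant by recursive cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cofactor_matrix(a):
    """C[i][j] = (-1)^(i+j) det M_ij, where M_ij drops row i and column j."""
    n = len(a)
    return [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
             for r, row in enumerate(a) if r != i])
             for j in range(n)] for i in range(n)]

# An arbitrary 4 x 4 integer matrix, chosen just for this example.
A = [[2, 1, 0, 3],
     [0, 1, 4, 1],
     [1, 0, 2, 0],
     [3, 2, 1, 1]]
C = cofactor_matrix(A)
dA = det(A)
assert det(C) == dA ** 3                 # det C = (det A)^3

# A C^T = (det A) I holds whether or not A is invertible:
for i in range(4):
    for j in range(4):
        entry = sum(A[i][k] * C[j][k] for k in range(4))  # (A C^T)_ij
        assert entry == (dA if i == j else 0)
print("det A =", dA)
```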
We first claim that

    det A = (det C)^{1/3}, or, equivalently, det C = (det A)^3.   (*)

To prove this, first note that the formula (see page 271)

    A C^T = (det A) I   (**)

holds in general, whether A is invertible or not. Since det(A C^T) = (det A)(det C^T) and det(C^T) = det C (see rules 8 and 10, pages 248-249), we have

    (det A)(det C) = det(A C^T) = det((det A) I) = (det A)^4 det I.

(Pay close attention to the last step: det A is a scalar, and we are taking its 4th power because I is the 4 x 4 identity matrix in this context; see the last paragraph on page 246.) Since det I = 1, we get (det A)(det C) = (det A)^4, or in other words

    (det A)(det C - (det A)^3) = 0,   (***)

so either det A = 0 or det C = (det A)^3. If det A != 0 then we know that (*) must hold, but, frustratingly, in the case that det A = 0, our equation (***) tells us nothing at all about det C. So if det A = 0, we must find another way to prove that (*) holds anyway, i.e., that det C = 0. We will present a proof by contradiction [3]: suppose that det A = 0 but det C != 0. Then det(C^T) != 0, so C^T is an invertible 4 x 4 matrix. We may therefore multiply both sides of (**) by (C^T)^{-1} on the right:

    A = (det A)(C^T)^{-1}.

But we are assuming det A = 0, so this equation says that A is the zero matrix! Well, the cofactor matrix for the zero matrix is also the zero matrix, so C = 0, and in particular det C = 0, which contradicts our assumption that det C != 0. So our supposition that det A = 0 but det C != 0 was false, and in fact we know that, if det A = 0, then det C = 0 also. In summary, we have proven that (*) holds no matter what: if det A != 0 then we proved this with (***), and if det A = 0 then we used a proof by contradiction.

Step 2: Solution in the case of invertible C. If det C != 0, then from step 1 we know that det A = (det C)^{1/3}, and we may multiply both sides of (**) by (C^T)^{-1} on the right to find

    A = (det A)(C^T)^{-1} = (det C)^{1/3} (C^T)^{-1}.

Step 3: Solution in the case of singular C. If det C = 0, then we know from step 1 that det A = 0, but it is impossible to determine A exactly.
Nevertheless, we can obtain some partial information about A. First we make a few observations:

- If A has rank at most 2, then we claim C = 0. Indeed, in this case, every 3 x 3 submatrix of A has rank at most 2, and therefore is singular and has determinant 0. That means all cofactors of A are 0, i.e., C = 0.

- If A has rank 3, then we claim C has rank 1. Indeed, in this case, some 3 x 3 submatrix of A is invertible (see problem 5 on problem set 3, i.e., problem 12 from 3.3), and the corresponding entry of C will be nonzero, so C has rank at least 1. On the other hand, since det A = 0, the formula A C^T = (det A) I says that A C^T = 0, so the column space of C^T is contained in the 1-dimensional nullspace of A, so C also has rank at most 1.

[3] If the logical structure of a proof by contradiction is unfamiliar, please read, for example, http://en.wikipedia.org/wiki/Proof_by_contradiction
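Both bullet observations can be spot-checked with a small computation. In this sketch (the example matrices are my own, not from the text), rank is computed over exact fractions:

```python
from fractions import Fraction

def rank(m):
    """Rank via Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for k in range(c, cols):
                m[i][k] -= f * m[r][k]
        r += 1
    return r

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cofactor4(a):
    """Cofactor matrix of a 4 x 4 matrix, via 3 x 3 determinants."""
    return [[(-1) ** (i + j) * det3([row[:j] + row[j + 1:]
             for r, row in enumerate(a) if r != i])
             for j in range(4)] for i in range(4)]

# Rank-3 example: last row is the sum of the first three.
A3 = [[1, 0, 0, 2],
      [0, 1, 0, 3],
      [0, 0, 1, 4],
      [1, 1, 1, 9]]
assert rank(A3) == 3 and rank(cofactor4(A3)) == 1

# Rank-2 example: its cofactor matrix is identically zero.
A2 = [[1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 1, 1, 1],
      [2, 2, 2, 2]]
assert rank(A2) == 2
assert all(x == 0 for row in cofactor4(A2) for x in row)
print("ok")
```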

We may turn these bullet points around to conclude the following.

- It is impossible for C to have rank 2 or 3.

- If C = 0, then A is a matrix of rank at most 2, but nothing more can possibly be determined about A.

- If C has rank 1, then A has rank 3, and, while it is not possible to determine A completely, we can say a few more things about it. Choose 4 x 4 permutation matrices P_r and P_c such that the entry of the matrix C_1 := P_r C P_c in row 4, column 4 is nonzero; say this entry equals k. (It is always possible to find such P_r and P_c because not all entries of C are 0.) It turns out that C_1 is the cofactor matrix for A_1 := (det P_r)(det P_c) P_r A P_c, but let us leave the proof of this as an exercise to the reader. Since C_1 has rank 1, there exists a unique (column) vector of the form u = (u_1, u_2, u_3, 1) in the column space of C_1, and a unique (column) vector of the form v = (v_1, v_2, v_3, 1) in the column space of C_1^T. Then it must be that C_1 = k u v^T. Let B be the 3 x 3 matrix formed by the first three rows and columns of A_1. Since C_1 is the cofactor matrix for A_1, we know det B = k, and we claim

          [   1    0    0  ]       [ 1 0 0 -v_1 ]
    A_1 = [   0    1    0  ]   B   [ 0 1 0 -v_2 ]   (****)
          [   0    0    1  ]       [ 0 0 1 -v_3 ]
          [ -u_1 -u_2 -u_3 ]

Indeed, note that, since A_1 C_1^T = 0 (apply the formula A C^T = (det A) I to A_1, whose determinant is 0), we know the nullspace of A_1 contains the row space of C_1, which is spanned by v, so A_1 v = 0. For similar reasons one can show A_1^T C_1 = 0, so the left-nullspace of A_1 contains the column space of C_1, which is spanned by u, so A_1^T u = 0. Now it suffices to note that the right-hand side of (****) is the only 4 x 4 matrix with v in its nullspace, u in its left-nullspace, and B as the 3 x 3 matrix formed by its first three rows and columns. In sum, all we can conclude about A is that it has the form

    A = (det P_r)^{-1} (det P_c)^{-1} P_r^{-1} A_1 P_c^{-1}

                              [   1    0    0  ]       [ 1 0 0 -v_1 ]
      = (det P_r)(det P_c) P_r^T [   0    1    0  ]  B  [ 0 1 0 -v_2 ]  P_c^T
                              [   0    0    1  ]       [ 0 0 1 -v_3 ]
                              [ -u_1 -u_2 -u_3 ]

for some 3 x 3 matrix B of determinant k.
No two choices for B result in the same A, and one can check that any matrix A of the above form really must have cofactor matrix C, and so it is not possible to determine A exactly. This is probably more information than you wanted to know, but there you have it.

10. Do Problem 24 from 6.1.

Solution. One could use the big formula to compute

    det(A - λI) = (2 - λ)^3 + 8 + 8 - 4(2 - λ) - 4(2 - λ) - 4(2 - λ)
                = (2 - λ)^3 - 12(2 - λ) + 16
                = 8 - 12λ + 6λ^2 - λ^3 - 24 + 12λ + 16
                = -λ^3 + 6λ^2
                = λ^2 (6 - λ)

and conclude that the eigenvalues of A are 0, 0, and 6, but this would be tedious. Instead, note that since A is a 3 x 3 matrix of rank 1, it has a 2-dimensional nullspace, and that nullspace is, by definition, the space of eigenvectors corresponding to the eigenvalue 0. Therefore, the eigenvalue 0 occurs with multiplicity at least 2 (corresponding to the dimension of the nullspace), and we may write λ_1 = λ_2 = 0. Recall also that the sum of all three eigenvalues of A equals the trace of A (equation 6, page 289):

    λ_1 + λ_2 + λ_3 = 2 + 2 + 2,

so the remaining eigenvalue must be λ_3 = 6.

To find eigenvectors corresponding to the eigenvalue 0, we just have to find a basis for the nullspace of A. That's the same as the nullspace of the 1 x 3 matrix [ 2 1 2 ]. The first column is the pivot column; the second and third columns are free columns, and they correspond to the special solutions (-1/2, 1, 0) and (-1, 0, 1). So we may set x_1 = (-1/2, 1, 0) and x_2 = (-1, 0, 1); these are our first two eigenvectors, corresponding to the eigenvalue λ_1 = λ_2 = 0. (Of course, other solutions are possible, too; any basis for N(A) will do.) To find an eigenvector x_3 corresponding to the eigenvalue λ_3 = 6, first note that Ax_3 must lie in the column space of A, which is spanned by the vector (1, 2, 1), so Ax_3 is a multiple of (1, 2, 1) whether we like it or not. If Ax_3 = 6x_3, that just means x_3 must itself be a multiple of (1, 2, 1). Well then, we may as well set x_3 = (1, 2, 1), and check that Ax_3 = 6x_3 indeed. (Any nonzero multiple of this x_3 will do, also.)
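The eigenvalue bookkeeping above can be confirmed in a few lines. This sketch (mine, not from the original) takes A to be the rank-1 product of the column (1, 2, 1) with the row (2, 1, 2), which is consistent with the trace, nullspace, and column space used in the solution:

```python
from fractions import Fraction

# Rank-1 matrix consistent with the solution: column (1,2,1) times row (2,1,2).
col, row = [1, 2, 1], [2, 1, 2]
A = [[c * r for r in row] for c in col]
assert [A[i][i] for i in range(3)] == [2, 2, 2]     # trace 2 + 2 + 2 = 6

def matvec(m, v):
    return [sum(Fraction(m[i][k]) * v[k] for k in range(3)) for i in range(3)]

x1 = [Fraction(-1, 2), Fraction(1), Fraction(0)]    # special solutions of [2 1 2]
x2 = [Fraction(-1), Fraction(0), Fraction(1)]
x3 = [Fraction(1), Fraction(2), Fraction(1)]        # spans the column space

assert matvec(A, x1) == [0, 0, 0]                   # eigenvalue 0
assert matvec(A, x2) == [0, 0, 0]                   # eigenvalue 0
assert matvec(A, x3) == [6 * x for x in x3]         # eigenvalue 6
print("ok")
```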