DETERMINANTS

1. Some Motivation

Today we're going to be talking about determinants. We'll see the definition in a minute, but before we get into details I just want to give you an idea of why we care about determinants. The big theorem we'll discuss today is that a square matrix has zero determinant if and only if it fails to be invertible. It is this property we'll exploit later in the class to find real numbers $\lambda$ and vectors $\vec{x}$ so that $A\vec{x} = \lambda\vec{x}$. The numbers $\lambda$ and vectors $\vec{x}$ which satisfy this equation are quite important for understanding how the matrix acts, but in order to find them we'll need determinants.

2. Determinants

We have already defined the determinant of a $2 \times 2$ matrix:
\[ \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc. \]
To define determinants for a general square matrix we'll need the following

Definition 2.1. The $ij$th minor of a matrix $A$, written $A_{ij}$, is the matrix one gets upon deleting the $i$th row and $j$th column of $A$.

Example. If $A$ is the matrix
\[ A = \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 7 \\ 3 & 6 & 9 \end{pmatrix}, \quad \text{then} \quad A_{22} = \begin{pmatrix} 1 & 7 \\ 3 & 9 \end{pmatrix} \quad \text{and} \quad A_{11} = \begin{pmatrix} 5 & 7 \\ 6 & 9 \end{pmatrix}. \]

We can now give a definition of the determinant.

Definition 2.2. If $A$ is an $n \times n$ matrix, then
\[ \det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j}). \]

Notice that our definition is recursive: finding the determinant of an $n \times n$ matrix requires us to compute the determinants of many $(n-1) \times (n-1)$ matrices, each of which requires us to compute the determinants of many $(n-2) \times (n-2)$ matrices, and so on.

Example. Here's an example of a determinant calculation:
\[ \det\begin{pmatrix} 1 & 3 & 9 \\ 3 & 7 & 5 \\ 1 & 1 & 4 \end{pmatrix} = 1\det\begin{pmatrix} 7 & 5 \\ 1 & 4 \end{pmatrix} - 3\det\begin{pmatrix} 3 & 5 \\ 1 & 4 \end{pmatrix} + 9\det\begin{pmatrix} 3 & 7 \\ 1 & 1 \end{pmatrix} = 1(28 - 5) - 3(12 - 5) + 9(3 - 7) = 23 - 21 - 36 = -34. \]

aschultz@stanford.edu http://math.stanford.edu/~aschultz/summer06/math103
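To make the recursion in Definition 2.2 concrete, here is a short Python sketch of first-row cofactor expansion. The function names `minor` and `det` are my own; the code uses 0-based indices, so the sign $(-1)^{1+j}$ becomes `(-1) ** j`.

```python
def minor(A, i, j):
    # A_ij: delete row i and column j (0-indexed)
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    # recursive cofactor expansion along the first row (Definition 2.2)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

# the worked example above
print(det([[1, 3, 9], [3, 7, 5], [1, 1, 4]]))  # → -34
```

Note that this direct recursion performs on the order of $n!$ multiplications, which is why Section 3 introduces a faster method.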
We won't be able to prove this in class today, but in fact one has the following

Theorem 2.1. The determinant can be computed as
\[ \det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}) \]
for a fixed $i$ (this is "expanding along the row $i$"). The determinant can also be computed as
\[ \det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}) \]
for a fixed $j$ (this is "expanding along the column $j$").

This theorem is awfully handy in computing determinants, because it lets us choose a row or column that simplifies calculations as much as possible.

Example. Find the determinant of
\[ A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \]

Solution. The definition of the determinant says we should expand along the first row, but since the first column has lots of zeroes I'm going to compute the determinant by expanding along it:
\[ \det(A) = 1\det\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} - 0 + 0 - 0. \]
I haven't bothered to write down the other three minors since their determinants won't count: they have a coefficient of $0$ in front! Now to compute the determinant of the residual $3 \times 3$ matrix I'll again choose to expand along the first column: it has lots of zeros which make calculations easy.
\[ \det\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} = 1\det\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} - 0 + 0 = 1 \cdot 1 - 0 = 1. \]
Putting all our calculations together we have $\det(A) = 1 \cdot 1 = 1$.

This example shows that the smartest way to calculate determinants is to expand along a row or column which is sparse (i.e., which has lots of zeros). It is also indicative of another result which is very handy:

Theorem 2.2. If $A$ is a lower or upper triangular matrix with diagonal entries $a_{11}, \ldots, a_{nn}$, then
\[ \det(A) = \prod_{i=1}^{n} a_{ii}. \]
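Theorem 2.1 can be tried out in code. Below is a small sketch (the helper names are my own) that expands along any chosen column; on the upper-triangular example above it reproduces the diagonal product promised by Theorem 2.2.

```python
from math import prod

def minor(A, i, j):
    # A_ij: delete row i and column j (0-indexed)
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det_expand_col(A, j=0):
    # Theorem 2.1: expand along column j, i.e. sum_i (-1)^(i+j) a_ij det(A_ij)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_expand_col(minor(A, i, j))
               for i in range(n))

U = [[1, 2, 3, 4],
     [0, 1, 2, 3],
     [0, 0, 1, 2],
     [0, 0, 0, 1]]

# expanding along the sparse first column agrees with the
# product of the diagonal entries (Theorem 2.2)
print(det_expand_col(U))                     # → 1
print(prod(U[i][i] for i in range(len(U))))  # → 1
```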
One final note is that in practice it can be hard, when expanding along a given row or column, to remember whether one should add or subtract the determinant of a given minor (i.e., it's sometimes hard to remember whether the coefficient $(-1)^{i+j}$ will be $1$ or $-1$). For this, it can be helpful to write down a "checkerboard" that keeps track of which minors have a coefficient $1$ and which have a coefficient $-1$. Just start by putting $+$ in the top left hand corner and then alternate. For instance, the checkerboard for $4 \times 4$ matrices is just
\[ \begin{pmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{pmatrix}. \]

3. Determinants by Row Reduction

A very reasonable question was asked in class: is there a way to compute determinants that is not recursive? There are actually a few good answers to this question, all in the affirmative, but we'll only talk about one today. (The other involves a far more abstract definition of the determinant than we'll get a chance to discuss in class.) The method we will talk about involves row reduction, which is pretty exciting pedagogically because row reduction has played such a huge role in this class all term long. In fact, with the exception of the Gram-Schmidt algorithm, I think all of our results have relied on being able to compute the reduced row echelon form of a matrix. The result we'll describe is motivated by the following

Example. Consider what happens to the determinant of a $2 \times 2$ matrix after an elementary row operation has been performed on it.

Solution. Let
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \]
and we'll consider the determinants of the matrices which result from performing an elementary row operation on $A$. We first consider the row operation of adding a scalar multiple of one row to another row. In this case we have
\[ \det\begin{pmatrix} a + kc & b + kd \\ c & d \end{pmatrix} = (a + kc)d - c(b + kd) = ad + kcd - cb - ckd = ad - bc = \det(A). \]
For the next row operation, scaling a row by a nonzero constant $k$, we have
\[ \det\begin{pmatrix} a & b \\ kc & kd \end{pmatrix} = akd - bkc = k(ad - bc) = k\det(A). \]
Finally we note the effect of swapping the two rows:
\[ \det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = cb - da = -(ad - bc) = -\det(A). \]
In fact these identities on $2 \times 2$ matrices carry over to arbitrary square matrices.
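The three identities above are easy to check numerically. A minimal sketch (`det2` is my own helper for the $2 \times 2$ formula, and the sample entries are arbitrary):

```python
def det2(M):
    # the 2x2 formula: ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

a, b, c, d, k = 3, 5, 2, 7, 4
A = [[a, b], [c, d]]

# adding k times row 2 to row 1 leaves the determinant unchanged
assert det2([[a + k*c, b + k*d], [c, d]]) == det2(A)
# scaling row 2 by k scales the determinant by k
assert det2([[a, b], [k*c, k*d]]) == k * det2(A)
# swapping the two rows flips the sign
assert det2([[c, d], [a, b]]) == -det2(A)
```

Of course a numerical check on one matrix is no substitute for the algebra above, but it is a quick sanity check when you are unsure of a sign.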
Theorem 3.1. Suppose that we reach a matrix $B$ by performing elementary row operations on a matrix $A$. Say the number of row swaps in these operations is $s$, and the rows were scaled by the nonzero constants $k_1, \ldots, k_r$. Then
\[ \det(B) = (-1)^s k_1 k_2 \cdots k_r \det(A). \]

This theorem lets us use elementary row operations to transform a matrix into a form convenient for computing the determinant (reduced row echelon form, which is always upper triangular, for instance). As long as we remember the operations we took to get to a convenient matrix form, we can calculate the determinant of the initial matrix.

Example. Suppose that $\mathrm{rref}(A) = I_n$ and that to move $A$ into reduced row echelon form we had to swap rows $7$ times and scale rows by the constants $k_1 = 2$, $k_2 = \frac{1}{2}$, $k_3 = 11$, and $k_4 = -2$. Then
\[ \det(A) = \frac{\det(I_n)}{(-1)^7 \cdot 2 \cdot \frac{1}{2} \cdot 11 \cdot (-2)} = \frac{1}{22}. \]

The previous theorem is not only computationally convenient: it is also theoretically quite useful. In fact, it proves the fact about determinants we care about most:

Theorem 3.2. A matrix $A$ has $\det(A) = 0$ if and only if $A$ is not invertible.

Proof. A square matrix is not invertible only if $\mathrm{rref}(A)$ has a last row which is the zero vector $\vec{0}$. Hence $\det(\mathrm{rref}(A)) = 0$, and since $0 = \det(\mathrm{rref}(A)) = k\det(A)$ with $k$ some nonzero constant, we have $\det(A) = 0$.

If $\det(A) = 0$, on the other hand, then $\det(\mathrm{rref}(A)) = k\det(A) = 0$. But a square matrix in reduced row echelon form is upper triangular, and so its determinant is the product of its diagonal entries. This product can be $0$ only if there is a $0$ entry on the diagonal of $\mathrm{rref}(A)$, which implies that the last diagonal entry of $\mathrm{rref}(A)$ is $0$. But this means that the last row of $\mathrm{rref}(A)$ is $\vec{0}$, and hence $A$ is not invertible.

There's one last comment about using row operations to find the determinant of a matrix. Calculating the determinant using row operations is, generally speaking, much quicker than calculating the determinant by expanding along a row or column. However, when one is attempting to find the determinant of a $2 \times 2$, a $3 \times 3$, or a $4 \times 4$ matrix, it is usually more convenient to just expand along a row or column.
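The row-reduction method suggested by Theorem 3.1 can be sketched in code: eliminate below each pivot (which never changes the determinant), count row swaps, and finish by multiplying the diagonal entries of the resulting upper triangular matrix. The function name `det_by_elimination` is my own, and `Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction

def det_by_elimination(A):
    # forward elimination to upper triangular form, tracking swaps;
    # det(A) = (-1)^s * (product of the pivots), by Theorems 3.1 and 2.2
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    sign = 1
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # no pivot in this column: det is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign            # each swap flips the sign
        for r in range(col + 1, n):
            # adding a multiple of the pivot row leaves det unchanged
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]
    return result

print(det_by_elimination([[1, 3, 9], [3, 7, 5], [1, 1, 4]]))  # → -34
```

This runs in $O(n^3)$ arithmetic operations rather than the $O(n!)$ of repeated cofactor expansion, which is the point of the comment above.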
4. Algebraic Properties of the Determinant

There are several algebraic properties of the determinant which will be useful.

Theorem 4.1. Suppose that $A$ and $B$ are two $n \times n$ matrices. Then $\det(AB) = \det(A)\det(B)$.

Proof. First we'll assume that $\det(A) = 0$. This means that $A$ is not invertible, and hence $\mathrm{im}(A) \neq \mathbb{R}^n$. But since $\mathrm{im}(AB) \subseteq \mathrm{im}(A) \subsetneq \mathbb{R}^n$, we have $\mathrm{im}(AB) \neq \mathbb{R}^n$, and so $AB$ is not invertible. Therefore we have $\det(AB) = 0$, and so $\det(AB) = \det(A)\det(B)$ as desired, no matter what $B$ is.

Now assume that $\det(A) \neq 0$. Suppose that to move $A$ into reduced row echelon form $\mathrm{rref}(A) = I_n$ (since $\det(A) \neq 0$ implies $A$ is invertible) we require $s$ row swaps and scalar multiplications by $k_1, \ldots, k_r$. Then we have
\[ 1 = \det(I_n) = (-1)^s k_1 \cdots k_r \det(A) \implies \det(A) = \frac{1}{(-1)^s k_1 \cdots k_r}. \]
But notice that if one applies the same row operations to the matrix $AB$, we wind up at the matrix $B$ (performing these row operations is like multiplying by $A^{-1}$ on the left). This means that
\[ \det(B) = (-1)^s k_1 \cdots k_r \det(AB) \implies \det(AB) = \frac{1}{(-1)^s k_1 \cdots k_r}\det(B) = \det(A)\det(B). \]

Corollary 4.2. For an invertible matrix $A$, $\det(A^{-1}) = \det(A)^{-1}$.

Proof. We know that $AA^{-1} = I_n$, so that
\[ 1 = \det(I_n) = \det(AA^{-1}) = \det(A)\det(A^{-1}). \]
Solving for $\det(A^{-1})$ gives the desired result.

There is another handy fact to know about determinants, though we won't prove it in class today.

Theorem 4.3. For a square matrix $A$, $\det(A) = \det(A^T)$.
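Theorems 4.1 and 4.3 can both be spot-checked numerically. A minimal sketch (helper names are my own, and the matrix $B$ is an arbitrary choice):

```python
def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    # first-row cofactor expansion (Definition 2.2)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 3, 9], [3, 7, 5], [1, 1, 4]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]

# multiplicativity (Theorem 4.1): det(AB) = det(A) det(B)
assert det(matmul(A, B)) == det(A) * det(B)
# transpose invariance (Theorem 4.3): det(A) = det(A^T)
assert det([list(row) for row in zip(*A)]) == det(A)
```

Since the arithmetic here is exact integer arithmetic, the assertions hold on the nose rather than merely up to floating-point error.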