Determinants

Recall that the 2×2 matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is invertible if and only if the quantity $ad - bc$ is nonzero. Since this quantity helps to determine the invertibility of the matrix, we call it the determinant. We will also write
$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$
We can find the determinant of a general 3×3 matrix $A$ by recognizing that to invert $A$, we apply the row reduction process to the augmented matrix $[\,A \mid I_3\,]$ to attempt to bring it to the form $[\,I_3 \mid A^{-1}\,]$. For this to be possible, we must be able to reduce $A$ to the identity matrix $I_3$. Let us study how this happens in the general case, assuming that the upper left entry is nonzero:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\xrightarrow{\text{scale rows 2, 3}}
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{11}a_{21} & a_{11}a_{22} & a_{11}a_{23} \\ a_{11}a_{31} & a_{11}a_{32} & a_{11}a_{33} \end{pmatrix}
\xrightarrow{\text{replace rows 2, 3}}
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{11}a_{22} - a_{12}a_{21} & a_{11}a_{23} - a_{13}a_{21} \\ 0 & a_{11}a_{32} - a_{12}a_{31} & a_{11}a_{33} - a_{13}a_{31} \end{pmatrix}$$

The entries in positions (2,2) and (3,2) of this last matrix cannot both be zero, since $A$ is invertible. In fact, swapping these two rows would yield the same matrix at this stage as if we had swapped rows 2 and 3 of $A$ at the outset. So let us suppose that the (2,2) entry, $a_{11}a_{22} - a_{12}a_{21}$, is not 0. Then we can scale row 3 by this nonzero value and use row 2 to eliminate the (3,2) entry with a row replacement. This brings $A$ to the row echelon form
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{11}a_{22} - a_{12}a_{21} & a_{11}a_{23} - a_{13}a_{21} \\ 0 & 0 & a_{11}\Delta \end{pmatrix},$$
where
$$\Delta = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}.$$
We know that all the pivot entries (on the diagonal) must be nonzero for $A$ to be invertible. As we have assumed that $a_{11}$ and $a_{11}a_{22} - a_{12}a_{21}$ are nonzero, this means that $\Delta$ must be nonzero as well. This leads us to define the determinant of $A$ to be this quantity $\Delta$, and it gives us the first case of the

Theorem. If $A$ is a 3×3 matrix, then $A$ is invertible if and only if its determinant $\Delta$ is nonzero.

Proof (cont'd). Had we needed to swap rows 2 and 3 above, we would have arrived at an expression similar to the one above for $\Delta$, but with every row index value 2 switched to a 3, and vice versa. Making this switch in the above expression for $\Delta$ yields precisely the expression $-\Delta$, which is nonzero exactly when $\Delta$ is, so the definition of $\Delta$ works in this other case, too. Finally, if $a_{11}$ were zero, we could not have begun the reduction procedure above. But at least one of $a_{21}$ or $a_{31}$ would then have to be nonzero (otherwise $A$, having a zero column, would not be invertible). We could then swap row 1 with row 2 or row 3 to put a nonzero entry in the upper left corner and proceed as before. At the end we would obtain an expression similar to $\Delta$ except that every row index value of 1 is switched to either a 2 or a 3, and vice versa. Making this switch in the expression we have for $\Delta$ above, we notice once again that this yields precisely $-\Delta$. This proves that the expression $\Delta$ defined above must be nonzero whenever $A$ is invertible, independent of these particular cases. That is, the theorem above is proved. //

Notice that

$$\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}$$
$$= a_{11}(a_{22}a_{33} - a_{23}a_{32}) + a_{12}(a_{23}a_{31} - a_{21}a_{33}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$
$$= a_{11}\det\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} - a_{12}\det\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} + a_{13}\det\begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}.$$
That is, the determinant of the 3×3 matrix $A$ can be computed as a linear combination of the entries of the first row, with weights which are, up to sign, 2×2 determinants built from the other entries of $A$. Further, the 2×2 determinant in each of the three terms is obtained from $A$ by deleting the entries in the same row and column as the entry it multiplies.

The quantities
$$C_{11} = \det\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix}, \qquad
C_{12} = -\det\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix}, \qquad
C_{13} = \det\begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}$$
are called the cofactors of the corresponding entries $a_{11}$, $a_{12}$, $a_{13}$ of $A$. The $(i, j)$ cofactor is the signed determinant of the 2×2 submatrix $A_{ij}$ of $A$ found by deleting row $i$ and column $j$ of $A$; the sign is $(-1)^{i+j}$ (equal to $+1$ when $i$ and $j$ have the same parity and $-1$ when they have different parity). With these definitions, we can express the determinant of $A$ as
$$\det A = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}.$$
It turns out (and is straightforward to verify) that the same procedure computes the determinant via a cofactor expansion along any row or any column of $A$.
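For concreteness, here is a small worked example (the matrix is ours, chosen only for illustration), expanded along the first row:
$$\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{pmatrix}
= 1\cdot\det\begin{pmatrix} 5 & 6 \\ 8 & 10 \end{pmatrix} - 2\cdot\det\begin{pmatrix} 4 & 6 \\ 7 & 10 \end{pmatrix} + 3\cdot\det\begin{pmatrix} 4 & 5 \\ 7 & 8 \end{pmatrix}
= 1(2) - 2(-2) + 3(-3) = -3.$$
The six-term formula for $\Delta$ gives the same value, $-3$.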

For instance, cofactor expansion along the second column of $A$ gives the correct formula
$$\det A = a_{12}C_{12} + a_{22}C_{22} + a_{32}C_{32}.$$
Cofactor expansion also suggests that we can define the determinant of a 4×4 matrix recursively in terms of 3×3 cofactor determinants, the determinant of a 5×5 matrix recursively in terms of 4×4 cofactor determinants, and so on. That is, we define the determinant of the n×n matrix $A = [a_{ij}]$ to be
$$\det A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n},$$
where $C_{ij} = (-1)^{i+j}\det A_{ij}$ is the $(i, j)$ cofactor, given in terms of the $(n-1)\times(n-1)$ determinant of the submatrix $A_{ij}$ of $A$ obtained by deleting row $i$ and column $j$ from $A$. It is also possible to compute determinants by cofactor expansion along other rows and columns of $A$. We state this in a theorem whose proof is straightforward but very tedious.

Theorem. If $A = [a_{ij}]$ is an n×n matrix, then $\det A$ can be computed by cofactor expansion along any row ($i = 1, 2, \dots, n$):
$$\det A = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in},$$
and along any column ($j = 1, 2, \dots, n$):
$$\det A = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}. \;\; //$$

Corollary. If $A$ is a triangular square matrix (either upper triangular or lower triangular), then $\det A$ is the product of the diagonal entries of $A$. //

It turns out that the computation of determinants via cofactor expansions is not terribly efficient. Once again, we turn to row operations to simplify our methods.
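Before turning to row operations, here is a minimal sketch of the recursive cofactor-expansion definition above in Python (the names det_cofactor and minor are ours, and matrices are represented as plain lists of lists). It expands along the first row and checks the corollary on a small upper triangular example.

def minor(A, i, j):
    # Submatrix A_ij: delete row i and column j (0-indexed here).
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det_cofactor(A):
    # Recursive cofactor expansion along the first row:
    # det A = a_11 C_11 + a_12 C_12 + ... + a_1n C_1n.
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_cofactor(minor(A, 0, j)) for j in range(n))

# Corollary check: for a triangular matrix, det A is the product of the diagonal entries.
U = [[2, 1, 5],
     [0, 3, 4],
     [0, 0, 7]]
assert det_cofactor(U) == 2 * 3 * 7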

Theorem. Suppose $A$ and $B$ are n×n matrices.
1. If $B$ is obtained from $A$ by a row operation that swaps a pair of rows, then $\det B = -\det A$;
2. if $B$ is obtained from $A$ by a row operation that scales a row by the scalar factor $k$, then $\det B = k \det A$; and
3. if $B$ is obtained from $A$ by a row operation that replaces one row by its sum with some multiple of another row, then $\det B = \det A$.

Proof. We proceed by induction on $n$, the case $n = 2$ coming from the following computations:
$$\det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = cb - ad = -\det\begin{pmatrix} a & b \\ c & d \end{pmatrix};$$
$$\det\begin{pmatrix} ka & kb \\ c & d \end{pmatrix} = \det\begin{pmatrix} a & b \\ kc & kd \end{pmatrix} = kad - kbc = k\det\begin{pmatrix} a & b \\ c & d \end{pmatrix};$$
and
$$\det\begin{pmatrix} a + kc & b + kd \\ c & d \end{pmatrix} = (a + kc)d - (b + kd)c = ad - bc = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
$$\det\begin{pmatrix} a & b \\ c + ka & d + kb \end{pmatrix} = a(d + kb) - b(c + ka) = ad - bc = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

We now assume that $n > 2$ and that the theorem is true for all matrices of size less than $n$.

The row operation is a swap. Cofactor expansion of $B$ along any row that is not swapped (as $n > 2$, there must be some such row) is identical to cofactor expansion of $A$ along the same row, except that each of the submatrices in the corresponding cofactors for $B$ has the same pair of rows swapped. By the induction hypothesis, these cofactors have opposite values. So $\det B = -\det A$.

The row operation is a scaling operation. Cofactor expansion of $B$ along the row that is being scaled shows directly that $\det B = k \det A$.

The row operation is a replacement. Suppose the operation replaces row $i$ by its sum with $k$ times row $j$. Cofactor expansion of $B$ along row $i$ shows that $\det B = \det A + k \det A'$, where $A'$ is the matrix obtained from $A$ by replacing row $i$ with row $j$. In particular, since $A'$ has two identical rows, swapping them produces the same matrix, whence by our result above $\det A' = -\det A'$. This means that $\det A' = 0$, so $\det B = \det A$. //

Recall that performing an elementary row operation on $A$ is identical to multiplying $A$ on the left by an elementary matrix $E$ (which is the matrix obtained from the identity matrix $I$ by

performing the same row operation). So the matrices denoted $B$ in the previous theorem all have the form $EA$ for some appropriate elementary matrix $E$. Since cofactor expansions allow us to conclude that $\det I = 1$ (regardless of the size of $I$), we find that
if $E$ is an elementary matrix which represents a row swap, then $\det E = -1$;
if $E$ is an elementary matrix which represents scaling a row by the factor $k$, then $\det E = k$; and
if $E$ is an elementary matrix which represents replacing one row by its sum with a multiple of another, then $\det E = 1$.
The theorem above can therefore be restated as

Theorem. Suppose $A$ and $E$ are n×n matrices with $E$ elementary. Then $\det(EA) = \det E \det A$. //

Corollary. Suppose $A$ is an n×n matrix that can be brought to row echelon form $U = [u_{ij}]$ by means of some number of row replacements, exactly $r$ row swaps, and no row scalings. (This is always possible.) Then
$$\det A = (-1)^r u_{11}u_{22}\cdots u_{nn}.$$
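Before the proof, here is a minimal sketch of this corollary in Python (det_by_row_reduction is our name; the matrix is assumed to be a square list of lists of floats). It reduces the matrix to echelon form using only row swaps and row replacements, counts the swaps, and returns $(-1)^r$ times the product of the diagonal entries.

def det_by_row_reduction(A):
    # Row reduce using only swaps and replacements (no scalings),
    # then apply det A = (-1)^r * u_11 * u_22 * ... * u_nn.
    U = [row[:] for row in A]          # work on a copy of A
    n = len(U)
    swaps = 0
    for k in range(n):
        # Find a nonzero pivot in column k at or below row k.
        pivot = next((i for i in range(k, n) if U[i][k] != 0), None)
        if pivot is None:
            return 0.0                 # no pivot in this column: A is not invertible
        if pivot != k:
            U[k], U[pivot] = U[pivot], U[k]
            swaps += 1
        for i in range(k + 1, n):      # replacements do not change the determinant
            m = U[i][k] / U[k][k]
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    det = float((-1) ** swaps)
    for k in range(n):
        det *= U[k][k]
    return det

A = [[0.0, 2.0, 1.0],
     [3.0, 1.0, 4.0],
     [1.0, 5.0, 9.0]]
print(det_by_row_reduction(A))         # -32.0, up to floating-point rounding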

Proof. Represent these row operations by elementary matrix multiplications: $U = E_p(\cdots(E_2(E_1A))\cdots)$. Repeated application of the previous theorem shows that $\det U = (-1)^r \det A$, so $\det A = (-1)^r \det U$. But $U$ is upper triangular, so $\det U$ can be computed by repeated cofactor expansions along the first column, showing that $\det A = (-1)^r u_{11}u_{22}\cdots u_{nn}$. //

Corollary. If $A$ is an n×n matrix, then $A$ is invertible if and only if $\det A \ne 0$. //

This shows us that our original definition of the determinant for n×n matrices with $n > 2$ accomplishes exactly what we want it to do: it determines when $A$ is invertible! The last theorem can be extended much more generally:

Theorem. Suppose $A$ and $B$ are n×n matrices. Then $\det(AB) = \det A \det B$.

Proof. If $A$ is not invertible, then neither is $AB$ (for if $AB$ had inverse $C$, then $A$ would have inverse $BC$, since $ABC = I$). So both $\det A$ and $\det(AB)$ are 0, and the identity $\det(AB) = \det A \det B$ is true. If $A$ is invertible, then there is a sequence of row operations, represented by elementary matrices $E_1, E_2, \dots, E_p$, that brings $A$ to the identity matrix $I$, whence $E_p\cdots E_2E_1A = I$. These elementary matrices are all invertible; let $F_1, F_2, \dots, F_p$ be their respective inverses (all of which are themselves elementary matrices). Then $A = F_1F_2\cdots F_p$, so
$$\det(AB) = \det(F_1F_2\cdots F_pB) = \det F_1(\det F_2(\cdots(\det F_p \det B)\cdots)) = \det F_1(\det F_2(\cdots(\det F_p)\cdots))\det B = \det(F_1F_2\cdots F_p)\det B = \det A \det B. \;\; //$$
It is important to note that it is not true in general that $\det(A + B) = \det A + \det B$; that is, the determinant is not a linear function of matrices. However, the determinant is linear with respect to the columns of a matrix. That is, if we define a function

$$\Delta(\mathbf{x}) = \det[\,\mathbf{a}_1 \; \cdots \; \mathbf{a}_{j-1} \;\; \mathbf{x} \;\; \mathbf{a}_{j+1} \; \cdots \; \mathbf{a}_n\,],$$
then
$$\Delta(r\mathbf{x} + s\mathbf{y}) = r\Delta(\mathbf{x}) + s\Delta(\mathbf{y}),$$
as can be seen by cofactor expansion of the matrix $[\,\mathbf{a}_1 \; \cdots \; \mathbf{a}_{j-1} \;\; r\mathbf{x} + s\mathbf{y} \;\; \mathbf{a}_{j+1} \; \cdots \; \mathbf{a}_n\,]$ along the $j$th column.

Finally, there is a simple relation between the determinant of a matrix and that of its transpose:

Theorem. If $A$ is an n×n matrix, then $\det A^T = \det A$.

Proof. By induction on $n$. If $n = 1$, the result is obvious. If $n > 1$, then the cofactor expansion of $\det A^T$ along the first row is identical to the cofactor expansion of $\det A$ along the first column, because the corresponding cofactors are equal by the induction hypothesis. //
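These last properties are easy to spot-check numerically. Here is a small sketch using NumPy (assuming NumPy is available; np.linalg.det is its built-in determinant) on random 4×4 matrices:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Multiplicativity: det(AB) = det A * det B.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# The determinant of the transpose equals the determinant of the matrix.
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# But the determinant is not additive in general: compare the two values printed.
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))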