Math 103, Summer 2006 Determinants July 25, 2006


DETERMINANTS

1. Some Motivation

Today we're going to be talking about determinants. We'll see the definition in a minute, but before we get into details I just want to give you an idea of why we care about determinants. The big theorem we'll discuss today is that a square matrix has zero determinant if and only if it fails to be invertible. It is this property we'll exploit later in the class to find real numbers λ and vectors x so that Ax = λx. The numbers λ and vectors x which satisfy this equation are quite important for understanding how the matrix acts, but in order to find them we'll need determinants.

2. Determinants

We have already defined the determinant of a 2 × 2 matrix:

    det [ a  b ]
        [ c  d ]  =  ad − bc.

To define determinants for a general square matrix we'll need the following

Definition 2.1. The ijth minor of a matrix A, written A_ij, is the matrix one gets upon deleting the ith row and jth column of A.

Example. If A is the matrix

    [ 1  4  7 ]
    [ 2  5  7 ]
    [ 3  6  9 ],

then

    A_22 = [ 1  7 ]      and      A_11 = [ 5  7 ]
           [ 3  9 ]                      [ 6  9 ].

We can now give a definition of the determinant.

Definition 2.2. If A is an n × n matrix, then

    det(A) = Σ_{j=1}^{n} (−1)^{1+j} a_{1j} det(A_{1j}).

Notice that our definition is recursive: finding the determinant of an n × n matrix requires us to compute the determinants of many (n−1) × (n−1) matrices, each of which requires us to compute the determinants of many (n−2) × (n−2) matrices, and so on.

Example. Here's an example of a determinant calculation:

    det [ 1  3  9 ]
        [ 3  7  5 ]  =  1 · det [ 7  5 ]  −  3 · det [ 3  5 ]  +  9 · det [ 3  7 ]
        [ 1  1  4 ]             [ 1  4 ]             [ 1  4 ]             [ 1  1 ]

                     =  1(28 − 5) − 3(12 − 5) + 9(3 − 7)

                     =  23 − 21 − 36  =  −34.

aschultz@stanford.edu http://math.stanford.edu/~aschultz/summer06/math103 Page 1 of 5
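Definition 2.2 translates directly into code. Here is a minimal Python sketch (the function names `minor` and `det` are my own, not from the notes) of first-row cofactor expansion, checked against the worked 3 × 3 example:

```python
def minor(A, i, j):
    """Return the matrix A_ij: A with row i and column j deleted (0-indexed)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row (Definition 2.2)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # with 0-indexing the sign (-1)^(1+j) becomes (-1)^j
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

# The worked example from the notes:
A = [[1, 3, 9],
     [3, 7, 5],
     [1, 1, 4]]
print(det(A))  # 1*(23) - 3*(7) + 9*(-4) = -34
```

Note how the recursion mirrors the definition: an n × n determinant spawns n determinants of size (n − 1) × (n − 1), so this takes on the order of n! work; the row-reduction method discussed later is far cheaper for large matrices.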

We won't be able to prove this in class today, but in fact one has the following

Theorem 2.1. The determinant can be computed as

    det(A) = Σ_{j=1}^{n} (−1)^{i+j} a_{ij} det(A_{ij})

for a fixed i; this is expanding along the ith row. The determinant can also be computed as

    det(A) = Σ_{i=1}^{n} (−1)^{i+j} a_{ij} det(A_{ij})

for a fixed j; this is expanding along the jth column.

This theorem is awfully handy in computing determinants, because it lets us choose a row or column that simplifies calculations as much as possible.

Example. Find the determinant of

    A = [ 1  2  3  4 ]
        [ 0  1  2  3 ]
        [ 0  0  1  2 ]
        [ 0  0  0  1 ].

Solution. The definition of the determinant says we should expand along the first row, but since the first column has lots of zeroes I'm going to compute the determinant by expanding along it:

    det(A) = 1 · det [ 1  2  3 ]  −  0 · (⋯)  +  0 · (⋯)  −  0 · (⋯).
                     [ 0  1  2 ]
                     [ 0  0  1 ]

I haven't bothered to write down the other three minors since their determinants will not count: they have a coefficient of 0 in front! Now to compute the determinant of the residual 3 × 3 matrix I'll again choose to expand along the first column, since it too has lots of zeros which make calculations easy:

    det [ 1  2  3 ]
        [ 0  1  2 ]  =  1 · det [ 1  2 ]  −  0 + 0  =  1 · 1 − 0  =  1.
        [ 0  0  1 ]             [ 0  1 ]

Putting all our calculations together, we have det(A) = 1 · 1 = 1.

This example shows that the smartest way to calculate determinants is to expand along a row or column which is sparse, i.e., which has lots of zeros. It is also indicative of another result which is very handy:

Theorem 2.2. If A is a lower or upper triangular matrix with diagonal entries a_11, …, a_nn, then

    det(A) = Π_{i=1}^{n} a_ii.
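Theorems 2.1 and 2.2 can be spot-checked on the triangular example in code (a sketch; the helper names are my own, and rows and columns are 0-indexed rather than 1-indexed as above):

```python
from functools import reduce

def minor(A, i, j):
    """A with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det_along_column(A, j):
    """Cofactor expansion down column j (the column form of Theorem 2.1)."""
    if len(A) == 1:
        return A[0][0]
    # (-1)^(i+j) has the same parity under 0- and 1-indexing
    return sum((-1) ** (i + j) * A[i][j] * det_along_column(minor(A, i, j), 0)
               for i in range(len(A)))

A = [[1, 2, 3, 4],
     [0, 1, 2, 3],
     [0, 0, 1, 2],
     [0, 0, 0, 1]]

# Every column gives the same answer, sparse or not ...
print([det_along_column(A, j) for j in range(4)])    # [1, 1, 1, 1]
# ... and it equals the product of the diagonal entries (Theorem 2.2):
print(reduce(lambda x, y: x * y, (A[i][i] for i in range(4))))  # 1
```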

One final note: in practice it can be hard, when expanding along a given row or column, to remember whether one should add or subtract the determinant of a given minor; i.e., it's sometimes hard to remember whether the coefficient (−1)^{i+j} will be +1 or −1. For this, it can be helpful to write down a "checkerboard" that keeps track of which minors have a coefficient +1 and which have a coefficient −1. Just start by putting + in the top left hand corner and then alternate. For instance, the checkerboard for 4 × 4 matrices is just

    [ +  −  +  − ]
    [ −  +  −  + ]
    [ +  −  +  − ]
    [ −  +  −  + ].

3. Determinants by Row Reduction

A very reasonable question was asked in class: is there a way to compute determinants that is not recursive? There are actually a few good answers to this question, all in the affirmative, but we'll only talk about one today. (The other involves a far more abstract definition of the determinant than we'll get a chance to discuss in class.) The method we will talk about involves row reduction, which is pretty exciting pedagogically because row reduction has played such a huge role in this class all term long. In fact, with the exception of the Gram-Schmidt algorithm, I think all of our results have relied on being able to compute the reduced row echelon form of a matrix.

The result we'll describe is motivated by the following

Example. Consider what happens to the determinant of a 2 × 2 matrix after an elementary row operation has been performed on it.

Solution. Let

    A = [ a  b ]
        [ c  d ],

and consider the determinants of the matrices which result from performing each elementary row operation on A.

We first consider the row operation of adding a scalar multiple of one row to another row. Here we have

    det [ a + kc  b + kd ]  =  (a + kc)d − c(b + kd)  =  ad + kcd − cb − ckd  =  ad − bc  =  det(A).
        [   c       d    ]

For the next row operation, scaling a row by a nonzero constant, we have

    det [  a   b  ]  =  a(kd) − b(kc)  =  k(ad − bc)  =  k · det(A).
        [ kc  kd  ]

Finally we notice the effect of swapping the two rows:

    det [ c  d ]  =  cb − da  =  −(ad − bc)  =  −det(A).
        [ a  b ]

In fact these identities on 2 × 2 matrices carry over to arbitrary square matrices.
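The three 2 × 2 identities above are easy to verify numerically; here is a quick Python check with made-up entries:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

a, b, c, d, k = 3.0, 5.0, 2.0, 7.0, 4.0
base = det2(a, b, c, d)   # 3*7 - 5*2 = 11

# Adding k times row 2 to row 1 leaves the determinant unchanged:
assert det2(a + k * c, b + k * d, c, d) == base
# Scaling row 2 by k multiplies the determinant by k:
assert det2(a, b, k * c, k * d) == k * base
# Swapping the two rows flips the sign:
assert det2(c, d, a, b) == -base
print("all three identities hold")
```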

Theorem 3.1. Suppose that we reach a matrix B by performing elementary row operations on a matrix A. Say the number of row swaps in these operations is s and the number of row scalings is r, with scaling constants k_1, …, k_r. Then

    det(B) = (−1)^s k_1 ⋯ k_r det(A).

This theorem lets us use elementary row operations to transform a matrix into a form that is convenient for computing the determinant (reduced row echelon form, which is always upper triangular, for instance). As long as we remember the operations we took to get to a convenient matrix form, we can recover the determinant of the initial matrix.

Example. Suppose that rref(A) = I_n and that to move A into reduced row echelon form we had to swap 7 rows and scale rows by the constants k_1 = 2, k_2 = 1/2, k_3 = 11, and k_4 = 2. Then

    det(I_n) = (−1)^7 · 2 · (1/2) · 11 · 2 · det(A),

so det(A) = −1/22.

The previous theorem is not only computationally convenient: it is also theoretically quite useful. In fact, it proves the fact about determinants we care about most:

Theorem 3.2. A matrix A has det(A) = 0 if and only if A is not invertible.

Proof. A square matrix is not invertible if and only if the last row of rref(A) is the zero vector. So suppose A is not invertible. Then det(rref(A)) = 0, and since det(rref(A)) = k · det(A) with k some nonzero constant, we have det(A) = 0.

If det(A) = 0, on the other hand, then det(rref(A)) = k · det(A) = 0. But a square matrix in reduced row echelon form is upper triangular, and so its determinant is the product of its diagonal entries. This product can be 0 only if there is a 0 entry on the diagonal of rref(A), which implies that the last diagonal entry of rref(A) is 0. But this means that the last row of rref(A) is the zero vector, and hence A is not invertible.

There's one last comment about using row operations to find the determinant of a matrix. Calculating the determinant using row operations is, generally speaking, much quicker than calculating the determinant by expanding along a row or column. However, when one is attempting to find the determinant of a 2 × 2, a 3 × 3 or a 4 × 4 matrix, it is usually more convenient to just expand along a row or column.
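Theorem 3.1 is exactly what makes row reduction a practical way to compute determinants. Here is a Python sketch (the function name is my own) that reduces a matrix to upper triangular form using only row swaps and row replacements, counts the swaps, and then applies Theorems 2.2 and 3.1:

```python
def det_by_elimination(A):
    """Determinant via Gaussian elimination: reduce to upper triangular
    form, then return (-1)^(number of swaps) times the product of pivots.
    Row replacements leave the determinant unchanged (Theorem 3.1)."""
    A = [row[:] for row in A]          # work on a copy
    n, swaps = len(A), 0
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0                 # no pivot: A is not invertible
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            swaps += 1                 # each swap flips the sign
        for r in range(col + 1, n):    # row replacement: det unchanged
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    prod = 1.0
    for i in range(n):                 # Theorem 2.2: product of the diagonal
        prod *= A[i][i]
    return (-1) ** swaps * prod

A = [[1, 3, 9],
     [3, 7, 5],
     [1, 1, 4]]
print(det_by_elimination(A))  # -34.0, matching the cofactor expansion
```

This does roughly n³ arithmetic operations rather than the n! of repeated cofactor expansion, which is why it is the method of choice for anything beyond small matrices.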
4. Algebraic Properties of the Determinant

There are several algebraic properties of the determinant which will be useful.

Theorem 4.1. Suppose that A and B are two n × n matrices. Then det(AB) = det(A) det(B).

Proof. First we'll assume that det(A) = 0. This means that A is not invertible, and hence im(A) ≠ R^n. But since im(AB) ⊆ im(A) ⊊ R^n, we have im(AB) ≠ R^n, and so AB is not invertible. Therefore we have det(AB) = 0, and so det(AB) = det(A) det(B) as desired, no matter what B is.

Now assume that det(A) ≠ 0. Suppose that to move A into reduced row echelon form rref(A) = I_n (which exists since det(A) ≠ 0 implies A is invertible) we require s row swaps and scalings by k_1, …, k_r. Then we have

    1 = det(I_n) = (−1)^s k_1 ⋯ k_r det(A),    so    det(A) = 1 / ((−1)^s k_1 ⋯ k_r).
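Theorem 4.1 is easy to spot-check numerically. In this Python sketch, the matrix B is my own example alongside the 3 × 3 matrix from the earlier cofactor calculation:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 3, 9], [3, 7, 5], [1, 1, 4]]   # det(A) = -34, from the earlier example
B = [[2, 0, 1], [1, 1, 0], [0, 3, 2]]   # det(B) = 7

# The two sides of Theorem 4.1 agree:
print(det3(matmul(A, B)), det3(A) * det3(B))  # -238 -238
```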

But notice that if one applies the same row operations to the matrix AB, we wind up at the matrix B, since performing these row operations is like multiplying by A^{-1} on the left. This means that

    det(B) = (−1)^s k_1 ⋯ k_r det(AB),    so    det(AB) = (1 / ((−1)^s k_1 ⋯ k_r)) det(B) = det(A) det(B).

Corollary 4.2. For an invertible matrix A, det(A^{-1}) = det(A)^{-1}.

Proof. We know that AA^{-1} = I_n, so that

    1 = det(I_n) = det(AA^{-1}) = det(A) det(A^{-1}).

Solving for det(A^{-1}) gives the desired result.

There is another handy fact to know about determinants, though we won't prove it in class today.

Theorem 4.3. For a square matrix A, det(A) = det(A^T).
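Corollary 4.2 and Theorem 4.3 can both be checked exactly on a small example (the matrix below is my own) using Python's fractions module to avoid floating-point noise:

```python
from fractions import Fraction

A = [[4, 7],
     [2, 6]]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]            # det(A) = 24 - 14 = 10

# Closed-form inverse of a 2x2 matrix [[a, b], [c, d]]: (1/det) [[d, -b], [-c, a]].
inv = [[Fraction(A[1][1], detA), Fraction(-A[0][1], detA)],
       [Fraction(-A[1][0], detA), Fraction(A[0][0], detA)]]
det_inv = inv[0][0] * inv[1][1] - inv[0][1] * inv[1][0]
assert det_inv == Fraction(1, detA)                     # Corollary 4.2

T = [[A[0][0], A[1][0]],
     [A[0][1], A[1][1]]]                                # the transpose of A
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]
assert detT == detA                                     # Theorem 4.3
print("both identities check out")
```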