Math S-21b Lecture Notes

This week is all about determinants. We'll discuss how to define them, how to calculate them, learn the all-important property known as multilinearity, and show that a square matrix $A$ is invertible if and only if its determinant is nonzero. We'll also derive some useful geometric applications that will allow us to not only calculate length, area, and volume, but also to define geometric content ($k$-volume) in higher dimensions. We will also give an interpretation of the determinant as an expansion factor for geometric content. We'll wrap it up with a few minor results (Cramer's Rule and a not-too-practical formula for the inverse of a matrix).

Defining the determinant

You are probably already familiar with the determinant in the case of $2\times 2$ and perhaps $3\times 3$ matrices. Let's start with those and reverse engineer the general definition for any square matrix.

$1\times 1$ matrix: Just for the sake of consistency, let's define $\det[a]=a$ for a $1\times 1$ matrix $[a]$.

$2\times 2$ matrix: We define $\det\begin{bmatrix}a&b\\c&d\end{bmatrix}=ad-bc$.

$3\times 3$ matrix: We define
$$\det\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}=a_{11}(a_{22}a_{33}-a_{23}a_{32})-a_{12}(a_{21}a_{33}-a_{23}a_{31})+a_{13}(a_{21}a_{32}-a_{22}a_{31})$$
$$=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}.$$

This definition is based on a fact that we have not yet established called the Laplace expansion, but let's take this as given and see what, if any, pattern it suggests. Note that there is just 1 term for the determinant of a $1\times 1$ matrix, 2 terms for a $2\times 2$ matrix (one positive, one negative), and $3!=6$ terms for a $3\times 3$ matrix (half of them positive and half negative). Also note that the number of factors in each term grows with the size of the matrix. A more subtle observation is that, at least as written for the $3\times 3$ case, all terms are of the form $a_{1x}a_{2y}a_{3z}$, and the choices of $x,y,z$ correspond precisely with the different ways of permuting the characters in $123$, i.e. $123, 132, 213, 231, 312, 321$. Finally, note that the sign of each term corresponds to whether this is an even permutation (positive if obtained by an even number of transpositions of the characters starting with $123$) or an odd permutation (negative if obtained by an odd number of transpositions).

Based on these observations, we might (correctly) speculate that for an $n\times n$ matrix we should define the determinant as follows:

Definition: Given an $n\times n$ matrix $A=[a_{ij}]$, we define
$$\det A=\sum_{\sigma\in P(n)}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)},$$
where $P(n)$ denotes the set of all permutations of the characters $1,2,\dots,n$; $\sigma$ denotes an individual permutation; $\sigma(i)$ denotes where the character $i$ is mapped under that permutation; and $\operatorname{sgn}(\sigma)=+1$ if $\sigma$ is an even permutation and $\operatorname{sgn}(\sigma)=-1$ if $\sigma$ is an odd permutation. There will be $n!$ terms in the sum, corresponding to the number of permutations in $P(n)$.

There are other ways to define the determinant, but this is a practical definition at least in the case of relatively small matrices.

Two simple observations: (1) If $A$ is either upper triangular or lower triangular, all but one of the terms in the determinant will vanish and the determinant will be simply the product of its diagonal entries. (2) For any matrix, $\det(A^T)=\det A$. [The sum is the same, just rearranged and with the same signs.]
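To make the permutation definition concrete, here is a short Python sketch (my own illustration, not part of the original notes) that evaluates $\det A$ directly as the sum over all $n!$ permutations.

```python
# Sketch (not from the notes): evaluating det A directly from the permutation definition,
#   det A = sum over permutations sigma of sgn(sigma) * a_{1,sigma(1)} * ... * a_{n,sigma(n)}.
from itertools import permutations

def sgn(perm):
    """Sign of a permutation given as a tuple of 0..n-1: +1 if even, -1 if odd."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]  # one transposition flips the sign
            sign = -sign
    return sign

def det_by_permutations(A):
    """Determinant via the n!-term permutation sum (practical only for small n)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sgn(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

print(det_by_permutations([[1, 2], [3, 4]]))  # -2, matching ad - bc = 1*4 - 2*3
```

As noted above, this is practical only for small matrices, since the number of terms grows as $n!$.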

Multilinearity

Note that the determinant is, in fact, a function $\det:\mathbb{R}^{n\times n}\to\mathbb{R}$ that takes any $n\times n$ matrix $A$ and yields the real number $\det A$. As a function from one linear space to another, the determinant is not linear. For example, if we were to scale a $2\times 2$ matrix $A=\begin{bmatrix}a&b\\c&d\end{bmatrix}$ (with $\det A=ad-bc$), we have $tA=\begin{bmatrix}ta&tb\\tc&td\end{bmatrix}$ and $\det(tA)=t^2ad-t^2bc=t^2(ad-bc)=t^2\det A$. More generally, for any $n\times n$ matrix, we have $\det(tA)=t^n\det A$. However, the determinant is linear in any single row or column. This is known as multilinearity. For example, if one row of a $3\times 3$ matrix has entries $(x_1,x_2,x_3)$ and the other rows are held fixed, then the determinant is a linear function of $(x_1,x_2,x_3)$.

The multilinearity property gives several immediate corollaries. In terms of the $j$-th column of a matrix:
$$\det[\mathbf{v}_1\ \cdots\ \mathbf{x}+\mathbf{y}\ \cdots\ \mathbf{v}_n]=\det[\mathbf{v}_1\ \cdots\ \mathbf{x}\ \cdots\ \mathbf{v}_n]+\det[\mathbf{v}_1\ \cdots\ \mathbf{y}\ \cdots\ \mathbf{v}_n]\quad\text{and}\quad\det[\mathbf{v}_1\ \cdots\ r\mathbf{x}\ \cdots\ \mathbf{v}_n]=r\det[\mathbf{v}_1\ \cdots\ \mathbf{x}\ \cdots\ \mathbf{v}_n].$$
The analogous identities hold in terms of the $i$-th row of a matrix, with $\mathbf{x}+\mathbf{y}$ or $r\mathbf{x}$ appearing as the $i$-th row.

This actually explains the Laplace expansion. Choose any row or column of the matrix $A$, and for each entry $a_{ij}$ of that row or column, let $A_{ij}$ be its minor, the $(n-1)\times(n-1)$ matrix obtained by deleting the $i$-th row and $j$-th column of the matrix $A$. Then, in terms of the $i$-th row,
$$\det A=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}\det A_{ij};$$
and in terms of the $j$-th column,
$$\det A=\sum_{i=1}^{n}(-1)^{i+j}a_{ij}\det A_{ij}.$$
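The Laplace expansion translates directly into a short recursive routine. The following Python sketch (my own illustration, not from the notes) expands along the first row; the derivation of why this works is spelled out for the $3\times 3$ case below.

```python
# Sketch (not from the notes): recursive Laplace (cofactor) expansion along the first row,
#   det A = sum_j (-1)^(1+j) * a_{1j} * det(A_{1j}).
def minor(A, i, j):
    """The (n-1) x (n-1) matrix obtained by deleting row i and column j of A."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det_by_cofactors(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_by_cofactors(minor(A, 0, j)) for j in range(n))

A = [[2, 0, 1],
     [1, 3, 0],
     [4, 1, 5]]
print(det_by_cofactors(A))  # 19; expanding along any other row or column gives the same value
```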

For example, in terms of the 1st row of a $3\times 3$ matrix $A=[a_{ij}]$, we can express $(a_{11},a_{12},a_{13})=a_{11}(1,0,0)+a_{12}(0,1,0)+a_{13}(0,0,1)$. Applying linearity in the 1st row, this gives
$$\det A=a_{11}\det\begin{bmatrix}1&0&0\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}+a_{12}\det\begin{bmatrix}0&1&0\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}+a_{13}\det\begin{bmatrix}0&0&1\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix},$$
and because of all the 0's in the first row of each, and some observations about even vs. odd permutations to determine the signs, this becomes
$$\det A=(+1)\,a_{11}\det\begin{bmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{bmatrix}+(-1)\,a_{12}\det\begin{bmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{bmatrix}+(+1)\,a_{13}\det\begin{bmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{bmatrix}=\sum_{j=1}^{3}(-1)^{1+j}a_{1j}\det A_{1j}.$$
The same idea applies to any choice of row or column with appropriate signs.

Example: Given a $3\times 3$ matrix with one or more zero entries, we can choose to expand along any row or column. We often choose a row or column with one or more 0's in order to minimize the number of nonzero terms in the sum, but not necessarily. Expanding along the 1st row, expanding along the 2nd row, and expanding along the 3rd column must all produce the same value for the determinant.

Effect of elementary row operations on the determinant

For any $n\times n$ matrix $A$, we have the following properties:
(a) If $B$ is obtained from $A$ by scaling a row by $k$ (with $k\ne 0$), then $\det B=k\det A$.
(b) If $B$ is obtained from $A$ by interchanging two rows, then $\det B=-\det A$.
(c) If $B$ is obtained from $A$ by adding a multiple of one row to another, then $\det B=\det A$.

Property (a) follows directly from linearity in any one row. Property (b) follows by observing that all the terms in the determinant will be the same except that even permutations will become odd and vice versa; this causes all the signs to be reversed. Property (b) also implies that if a matrix has two identical rows, then its determinant must be zero. Property (c) requires a small argument for justification. If row $j$ is replaced by $\mathbf{v}_j+k\mathbf{v}_i$, then by linearity in row $j$,
$$\det\begin{bmatrix}\vdots\\ \mathbf{v}_i\\ \vdots\\ \mathbf{v}_j+k\mathbf{v}_i\\ \vdots\end{bmatrix}=\det\begin{bmatrix}\vdots\\ \mathbf{v}_i\\ \vdots\\ \mathbf{v}_j\\ \vdots\end{bmatrix}+k\det\begin{bmatrix}\vdots\\ \mathbf{v}_i\\ \vdots\\ \mathbf{v}_i\\ \vdots\end{bmatrix}=\det A+k\cdot 0=\det A.$$
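Properties (a), (b), and (c) are easy to confirm numerically. The following Python sketch (my own check on an arbitrary example matrix, not from the notes) verifies each one with numpy.

```python
# Sketch (not from the notes): a numerical check of properties (a), (b), (c)
# on an arbitrary example matrix, using numpy's built-in determinant.
import numpy as np

A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 2.]])
d = np.linalg.det(A)

B = A.copy(); B[1] *= 7.0            # (a) scale row 2 by 7
C = A.copy(); C[[0, 2]] = C[[2, 0]]  # (b) interchange rows 1 and 3
D = A.copy(); D[2] += 4.0 * D[0]     # (c) add 4 times row 1 to row 3

print(np.isclose(np.linalg.det(B), 7 * d))  # True: the determinant is scaled by 7
print(np.isclose(np.linalg.det(C), -d))     # True: the determinant changes sign
print(np.isclose(np.linalg.det(D), d))      # True: the determinant is unchanged
```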

There are at least two significant results that flow from these observations. The first has to do with simplifying the calculation of a determinant by first doing some row reduction. The second will give a new criterion for invertibility of a matrix.

We can calculate the determinant of a matrix by carrying out row reduction while tracking the effect of each step on the value of the determinant. This is especially useful for larger matrices.

Example: To calculate $\det A$ for a $4\times 4$ matrix $A$, row reduce $A$ while recording each operation: every row swap contributes a factor of $-1$, every scaling of a row by $k$ contributes a factor of $k$, and adding a multiple of one row to another changes nothing. We can conclude the value of $\det A$ as soon as we obtain an upper triangular matrix, since at that point the determinant of the reduced matrix is just the product of its four diagonal entries. We could also have completed the row reduction all the way to reduced row-echelon form and, after accounting for all of the scalings used, obtained the same value for $\det A$.

Invertibility and the determinant

Suppose we began with an $n\times n$ matrix $A$ and carried out a sequence of steps to obtain $\operatorname{rref}(A)$. This sequence of steps would involve $s$ row swaps, which affect the determinant by multiplying by $(-1)^s$; $r$ row scalings by factors $k_1,k_2,\dots,k_r$ (where $k_1,\dots,k_r\ne 0$); and some number of steps where a multiple of a pivot row is added to another row. The effect of these row operations on the determinant then gives
$$\det[\operatorname{rref}(A)]=(-1)^s k_1k_2\cdots k_r\det(A),\qquad\text{so}\qquad\det(A)=\frac{(-1)^s}{k_1k_2\cdots k_r}\det[\operatorname{rref}(A)].$$

There are only two possible values for $\det[\operatorname{rref}(A)]$. If the matrix $A$ is invertible with rank $n$, then $\operatorname{rref}(A)=I_n$ and $\det[\operatorname{rref}(A)]=1$. If the matrix $A$ is not invertible, with rank less than $n$, then $\operatorname{rref}(A)$ will have at least one all-zero row and $\det[\operatorname{rref}(A)]=0$. From the result above, this gives the following important theorem:

Theorem: An $n\times n$ matrix $A$ is invertible if and only if $\det A\ne 0$.

There are a number of other facts about determinants of both practical and theoretical value.

Proposition: If $A$ and $B$ are $n\times n$ matrices, then $\det(AB)=(\det A)(\det B)$.

Proof: If the matrix $A$ is not invertible, then $AB$ will also not be invertible, so $\det A=0$ and $\det(AB)=0$, and the result holds in this case. A homework exercise shows that in the case where $A$ is invertible and $B$ is an arbitrary $n\times n$ matrix, $\operatorname{rref}[A\mid AB]=[I\mid B]$. Since the row reduction from $A$ to $I$ involves the same row operations as outlined previously, these same row operations would be applied in reducing $AB$ to $B$, so $\det(AB)=\frac{(-1)^s}{k_1\cdots k_r}\det(B)=\det(A)\det(B)$.

Proposition: If $A$ is invertible, then $\det(A^{-1})=\frac{1}{\det(A)}$; that is, $\det(A)$ and $\det(A^{-1})$ are reciprocals.

Proof: If $A$ is invertible, then $AA^{-1}=I$, so $\det(A)\det(A^{-1})=\det(I)=1$, and therefore $\det(A^{-1})=\frac{1}{\det(A)}$.
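The row-reduction method, and the product rule $\det(AB)=\det(A)\det(B)$, can both be demonstrated with a short Python sketch (my own illustration, not from the notes). It uses only row swaps and type (c) operations, so the determinant is the sign-adjusted product of the diagonal entries of the resulting triangular matrix.

```python
# Sketch (not from the notes): det A by row reduction, using only row swaps and
# type (c) operations, so det A is the sign-adjusted product of the pivots.
import numpy as np

def det_by_row_reduction(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting for numerical stability
        if np.isclose(U[p, k], 0.0):
            return 0.0                        # no pivot in this column: A is not invertible
        if p != k:
            U[[k, p]] = U[[p, k]]             # a row swap ...
            sign = -sign                      # ... flips the sign of the determinant
        for i in range(k + 1, n):
            U[i] -= (U[i, k] / U[k, k]) * U[k]  # type (c): determinant unchanged
    return sign * np.prod(np.diag(U))

A = np.array([[0., 2., 1.], [3., 1., 4.], [1., 5., 2.]])
B = np.array([[2., 0., 1.], [1., 1., 0.], [4., 3., 5.]])
print(np.isclose(det_by_row_reduction(A), np.linalg.det(A)))          # True
print(np.isclose(det_by_row_reduction(A @ B),
                 det_by_row_reduction(A) * det_by_row_reduction(B)))  # det(AB) = det(A) det(B)
```

This mirrors how numerical libraries typically compute determinants in practice (via an LU factorization) rather than by the $n!$-term permutation sum.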

Proposition: If two $n\times n$ matrices $A$ and $B$ are similar, then $\det B=\det A$.

Proof: Two matrices $A$ and $B$ are similar if and only if $B=S^{-1}AS$ for some invertible (change of basis) matrix $S$. Therefore $\det B=\det(S^{-1}AS)=\det(S^{-1})\det(A)\det(S)=\det(A)$.

This last proposition yields an important corollary:

Corollary: Suppose $V$ is a finite-dimensional vector space and $T:V\to V$ is a linear transformation. Then the determinant $\det(T)$ is well-defined. That is, if $\mathcal{B}$ is any basis for $V$, if $A=[T]_{\mathcal{B}}$ is the matrix of $T$ relative to this basis, and if we define $\det(T)=\det(A)$, then this value will be the same no matter what basis we choose.

Proof: If we choose any other basis, then the matrix of $T$ relative to this other basis will be $B=S^{-1}AS$ for some invertible (change of basis) matrix $S$. Therefore $\det(T)=\det B=\det A$ by the previous proposition.

Geometry and the determinant

If we merge some of the previous information about Gram-Schmidt orthogonalization and QR factorization with the current facts about determinants, we can derive some important and useful results. Recall that if $\mathbf{v}_1,\dots,\mathbf{v}_k$ are linearly independent and we write $A=[\mathbf{v}_1\ \mathbf{v}_2\ \cdots\ \mathbf{v}_k]$, then the Gram-Schmidt process gave
$$A=[\mathbf{v}_1\ \mathbf{v}_2\ \cdots\ \mathbf{v}_k]=[\mathbf{u}_1\ \mathbf{u}_2\ \cdots\ \mathbf{u}_k]\begin{bmatrix}r_{11}&r_{12}&\cdots&r_{1k}\\0&r_{22}&\cdots&r_{2k}\\\vdots&&\ddots&\vdots\\0&0&\cdots&r_{kk}\end{bmatrix}=QR,$$
where $A$ is the matrix with linearly independent columns, $Q$ is the matrix with orthonormal columns, and $R$ is an upper triangular matrix with nonzero diagonal entries. The columns of the matrix $A$ are the original vectors; the columns of the matrix $Q$ are those of the Gram-Schmidt basis; and the entries of the matrix $R$ capture all of the geometric aspects of the original basis, i.e. the lengths, areas, etc. and the non-orthogonality of the original vectors. The $k$-volume of the parallelepiped determined by $\mathbf{v}_1,\dots,\mathbf{v}_k$ is just the product of the diagonal entries of $R$, i.e. $r_{11}r_{22}\cdots r_{kk}=\det R$.

Note that with $A=QR$ we have $A^TA=(QR)^T(QR)=R^TQ^TQR=R^TIR=R^TR$. Therefore $\det(A^TA)=\det(R^TR)=\det(R^T)\det(R)=(\det R)^2=(k\text{-volume})^2$, so
$$k\text{-volume}=\sqrt{\det(A^TA)}.$$
This is a very handy way to calculate areas, volumes, and their higher-dimensional analogues.

Example: In $\mathbb{R}^3$, find the area of the parallelogram determined by the vectors $\mathbf{v}_1=\begin{bmatrix}1\\3\\2\end{bmatrix}$ and $\mathbf{v}_2=\begin{bmatrix}2\\1\\0\end{bmatrix}$.

Solution: In multivariable calculus, we would likely find the area of this parallelogram using the cross product. We would calculate $\mathbf{v}_1\times\mathbf{v}_2=\begin{bmatrix}-2\\4\\-5\end{bmatrix}$ and find its magnitude: $\text{Area}=\|\mathbf{v}_1\times\mathbf{v}_2\|=\sqrt{4+16+25}=\sqrt{45}=3\sqrt{5}$. Using our determinant method, we write $A=\begin{bmatrix}1&2\\3&1\\2&0\end{bmatrix}$ and calculate $A^TA=\begin{bmatrix}14&5\\5&5\end{bmatrix}$, so $\det(A^TA)=70-25=45$ and $\text{Area}=2\text{-volume}=\sqrt{\det(A^TA)}=\sqrt{45}=3\sqrt{5}$.

It is important to note that the cross product is only defined in $\mathbb{R}^3$, so any method involving cross products has very limited applicability.
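As a quick numerical check of this example, the following Python sketch (not part of the notes) computes the area both ways.

```python
# Sketch (not from the notes): the area of the parallelogram in the example above,
# computed both with the cross product and with sqrt(det(A^T A)).
import numpy as np

v1 = np.array([1., 3., 2.])
v2 = np.array([2., 1., 0.])

area_cross = np.linalg.norm(np.cross(v1, v2))  # cross product: only available in R^3

A = np.column_stack([v1, v2])                  # 3 x 2 matrix with columns v1, v2
area_gram = np.sqrt(np.linalg.det(A.T @ A))    # works for any k vectors in R^n

print(area_cross, area_gram)                   # both equal 3*sqrt(5), about 6.708
```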

Special Case: Determinant of an $n\times n$ matrix as an expansion factor

If $A=[\mathbf{v}_1\ \cdots\ \mathbf{v}_n]$ is an $n\times n$ matrix, then $\det(A^TA)=\det(A^T)\det(A)=(\det A)^2=(n\text{-volume})^2$, and the $n$-volume determined by the vectors $\mathbf{v}_1,\dots,\mathbf{v}_n$ is given by $\sqrt{\det(A^TA)}=\lvert\det A\rvert$. If we further note that $\mathbf{v}_1=A\mathbf{e}_1,\dots,\mathbf{v}_n=A\mathbf{e}_n$, we can observe that the unit $n$-cube determined by $\mathbf{e}_1,\dots,\mathbf{e}_n$ is mapped to the parallelepiped determined by $\mathbf{v}_1,\dots,\mathbf{v}_n$, so the volume is expanded from $1$ to $\lvert\det A\rvert$. This result extends to any region in the domain and enables us to think of $\lvert\det A\rvert$ as a volume expansion factor.

This provides a simple geometric interpretation of the fact that $\det(AB)=(\det A)(\det B)$ (and therefore $\lvert\det(AB)\rvert=\lvert\det A\rvert\,\lvert\det B\rvert$). The product of two matrices corresponds to the composition of linear transformations, and if applying the matrix $B$ scales volume by $\lvert\det B\rvert$, and this is followed by applying the matrix $A$, which scales volume by $\lvert\det A\rvert$, then the composition should scale volume by the product $\lvert\det A\rvert\,\lvert\det B\rvert$.

It's not hard to reason that the sign of the determinant will be positive if the linear transformation is orientation preserving and negative if the transformation is orientation reversing. Indeed, we can define these terms by the sign of the determinant.

Cramer's Rule

In the special case when a system of $n$ linear equations in $n$ variables has a unique solution, determinants provide a formula for this unique solution. This is known as Cramer's Rule.

Cramer's Rule: Suppose a linear system is represented as $A\mathbf{x}=\mathbf{b}$ where $A$ is an $n\times n$ matrix with rank $n$. Let $A_j$ be the matrix obtained by replacing the $j$-th column of $A$ with the column vector $\mathbf{b}$. If the solution to the system is $\mathbf{x}=(x_1,\dots,x_n)$, then $x_j=\dfrac{\det A_j}{\det A}$ for all $j$.

Proof: Suppose $\mathbf{x}$ solves $A\mathbf{x}=\mathbf{b}$. Then
$$\det A_j=\det[\mathbf{v}_1\ \cdots\ \mathbf{b}\ \cdots\ \mathbf{v}_n]=\det[\mathbf{v}_1\ \cdots\ A\mathbf{x}\ \cdots\ \mathbf{v}_n]=\det[\mathbf{v}_1\ \cdots\ (x_1\mathbf{v}_1+\cdots+x_n\mathbf{v}_n)\ \cdots\ \mathbf{v}_n]=x_j\det[\mathbf{v}_1\ \cdots\ \mathbf{v}_j\ \cdots\ \mathbf{v}_n]=x_j\det A,$$
where we have liberally applied several previous results (in the next-to-last step, every term other than the $x_j$ term vanishes because it has a repeated column). So $x_j=\dfrac{\det A_j}{\det A}$.

Example: To solve a $3\times 3$ linear system $A\mathbf{x}=\mathbf{b}$ using Cramer's Rule, we first calculate $\det A$; if it is nonzero, the system has a unique solution. We then write $A_1$, $A_2$, and $A_3$, obtained by replacing the 1st, 2nd, and 3rd columns of $A$ by $\mathbf{b}$, calculate their determinants, and read off $x_1=\dfrac{\det A_1}{\det A}$, $x_2=\dfrac{\det A_2}{\det A}$, and $x_3=\dfrac{\det A_3}{\det A}$.
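A short Python sketch of Cramer's Rule follows (my own illustration, not from the notes; the $3\times 3$ system used below is hypothetical, not the one worked in the original).

```python
# Sketch (not from the notes): Cramer's Rule for Ax = b. The 3 x 3 system below is
# my own illustration, not the system worked in the notes.
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via x_j = det(A_j) / det(A), where A_j has column j replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0: Cramer's Rule does not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                 # replace the j-th column of A by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = [[2., 1., 1.], [1., 3., 2.], [1., 0., 0.]]
b = [4., 5., 6.]
print(cramer_solve(A, b))            # agrees with np.linalg.solve(A, b)
```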

Cookbook recipe for finding the inverse of an invertible matrix

If you look carefully at Cramer's Rule, you may notice that it actually provides a formula for the inverse of any invertible matrix. The fact that $\det A$ should appear in the denominators is clear enough, and we omit most of the remaining details, but with a little effort we can arrive at the following (not particularly useful) result:

Recipe for $A^{-1}$: Given an $n\times n$ matrix $A$, we first calculate $\det A$. If $\det A=0$, stop: the matrix is not invertible. If $\det A\ne 0$, we continue. For each entry $a_{ij}$ of the matrix, let $A_{ij}$ be its minor, the $(n-1)\times(n-1)$ matrix obtained by deleting the $i$-th row and $j$-th column of the matrix $A$. We define the cofactors by $\operatorname{cof}(a_{ij})=(-1)^{i+j}\det A_{ij}$. If we assemble all of these cofactors into a matrix, we call this $\operatorname{cof}(A)$. We then transpose this matrix to get the adjoint (adjugate) matrix $\operatorname{adj}(A)=[\operatorname{cof}(A)]^T$. Then
$$A^{-1}=\frac{1}{\det(A)}\operatorname{adj}(A).$$

A simple procedure for carrying this out is to:
(a) Calculate the determinant of the given matrix. If it's nonzero, continue.
(b) Calculate the matrix consisting of the determinants of the respective minors for every entry of the given matrix.
(c) Adjust all the signs using the checkerboard pattern $\begin{bmatrix}+&-&+&\cdots\\-&+&-&\cdots\\+&-&+&\cdots\\\vdots&\vdots&\vdots&\ddots\end{bmatrix}$ to get the matrix of cofactors.
(d) Transpose the resulting matrix to get the adjoint.
(e) Multiply by the reciprocal of the determinant to get the inverse matrix.

Example: To find the inverse of the $3\times 3$ coefficient matrix $A$ from the Cramer's Rule example, we calculate $\det A$ and confirm it is nonzero, compute the $3\times 3$ matrix of determinants of the minors, adjust the signs to get the matrix of cofactors, transpose to get the adjoint, and multiply by $\frac{1}{\det A}$ to obtain $A^{-1}$.
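Steps (a) through (e) can be written out as a short Python sketch (my own illustration, not from the notes), applied to the same hypothetical matrix used in the Cramer's Rule sketch above.

```python
# Sketch (not from the notes): steps (a) through (e) of the cofactor/adjoint recipe,
# applied to the same illustrative matrix used in the Cramer's Rule sketch above.
import numpy as np

def minor_det(A, i, j):
    """Determinant of the minor A_ij (delete row i and column j of A)."""
    return np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))

def inverse_by_adjoint(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)                                    # step (a)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0: the matrix is not invertible")
    cof = np.array([[(-1) ** (i + j) * minor_det(A, i, j)   # steps (b) and (c)
                     for j in range(n)] for i in range(n)])
    adj = cof.T                                             # step (d): the adjoint (adjugate)
    return adj / d                                          # step (e)

A = [[2., 1., 1.], [1., 3., 2.], [1., 0., 0.]]
print(np.allclose(inverse_by_adjoint(A) @ np.asarray(A), np.eye(3)))  # True
```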

Had we proceeded this way, we would have solved the system in the previous example as $\mathbf{x}=A^{-1}\mathbf{b}=\frac{1}{\det A}\operatorname{adj}(A)\,\mathbf{b}$, reproducing the solution found with Cramer's Rule.

Note: The impracticality of this method starts to become clear when we look at $4\times 4$ matrices, which would involve the calculation of 16 determinants of $3\times 3$ matrices in addition to the original $4\times 4$ determinant, which itself requires the calculation of 4 more such determinants, bringing the total to 20 (in addition to the other calculations). For a $5\times 5$ matrix, we would have to calculate $25+5=30$ determinants of $4\times 4$ matrices, each of which would in turn require the calculation of smaller determinants. In general, it is far quicker to solve using row reduction methods, and row reduction has the additional advantage of yielding solutions in the case of consistent systems with rank less than $n$.

Notes by Robert Winters