
Determinants of Block Matrices

John R. Silvester

1 Introduction

Let us first consider the 2×2 matrices
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad\text{and}\quad N = \begin{pmatrix} e & f \\ g & h \end{pmatrix}.$$
Their sum and product are given by
$$M + N = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix} \quad\text{and}\quad MN = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}.$$
Here the entries $a, b, c, d, e, f, g, h$ can come from a field, such as the real numbers, or more generally from a ring, commutative or not. Indeed, if $F$ is a field, then the set $R = {}_nF_n$ of all $n \times n$ matrices over $F$ forms a ring (non-commutative if $n \ge 2$), because its elements can be added, subtracted and multiplied, and all the ring axioms (associativity, distributivity, etc.) hold. If $a, b, \dots, h$ are taken from this ring $R$, then $M, N$ can be thought of either as members of ${}_2R_2$ (2×2 matrices over $R$) or as members of ${}_{2n}F_{2n}$. It is a well-known fact, which we leave the reader to investigate, that whether we compute with these matrices as $2n \times 2n$ matrices, or as 2×2 "block" matrices (where the blocks $a, b, \dots$ are $n \times n$ matrices, i.e., elements of $R$), makes no difference as far as addition, subtraction and multiplication of matrices is concerned. (See for example [2], p. 4, or [6], pp. 100–106.) In symbols, the rings ${}_2R_2$ and ${}_{2n}F_{2n}$ can be treated as being identical: ${}_2R_2 = {}_{2n}F_{2n}$, or ${}_2({}_nF_n)_2 = {}_{2n}F_{2n}$. More generally, we can partition any $mn \times mn$ matrix as an $m \times m$ matrix of $n \times n$ blocks: ${}_m({}_nF_n)_m = {}_{mn}F_{mn}$.

The main point of this article is to look at determinants of partitioned (or block) matrices. If $a, b, c, d$ lie in a ring $R$, then provided that $R$ is commutative there is a determinant for $M$, which we shall write as $\det_R$, thus: $\det_R M = ad - bc$, which of course lies in $R$. If $R$ is not commutative, then the elements $ad - bc$, $ad - cb$, $da - bc$, $da - cb$ may not all be the same, and we do not then know which of them (if any) might be a suitable candidate for $\det_R M$.
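The well-known fact left to the reader above, that arithmetic with 2×2 block matrices agrees with arithmetic in ${}_{2n}F_{2n}$, is easy to spot-check numerically. Here is a minimal sketch in Python (the helper names are my own, not the article's):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def block2(A, B, C, D):
    """Assemble the 2x2 block matrix (A B; C D) from four n x n blocks."""
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

# Four 2x2 blocks for M and four for N (entries chosen arbitrarily).
A, B, C, D = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, 2]], [[1, 1], [0, 1]]
E, F, G, H = [[1, 0], [2, 1]], [[3, 1], [1, 3]], [[0, 2], [2, 0]], [[1, 2], [3, 4]]

M = block2(A, B, C, D)   # M as a 4x4 matrix over F
N = block2(E, F, G, H)

# The same product computed blockwise, treating the blocks as ring elements:
# MN = (AE+BG  AF+BH ; CE+DG  CF+DH)
blockwise = block2(matadd(matmul(A, E), matmul(B, G)),
                   matadd(matmul(A, F), matmul(B, H)),
                   matadd(matmul(C, E), matmul(D, G)),
                   matadd(matmul(C, F), matmul(D, H)))

assert blockwise == matmul(M, N)  # same answer either way
```

Any choice of integer blocks will do; the assertion is an instance of the general identification ${}_2({}_nF_n)_2 = {}_{2n}F_{2n}$.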
This is exactly the situation if $R = {}_nF_n$, where $F$ is a field (or a commutative ring) and $n \ge 2$; so to avoid the difficulty we take $R$ to be, not the whole of the matrix ring ${}_nF_n$, but some commutative subring $R \subseteq {}_nF_n$. The usual theory of determinants then works quite happily in ${}_2R_2$, or more generally in ${}_mR_m$, and for $M \in {}_mR_m$ we can work out $\det_R M$, which will be an element of $R$. But $R \subseteq {}_nF_n$, so $\det_R M$ is actually a matrix over $F$, and we can work out $\det_F(\det_R M)$, which will be an element of $F$. On the other hand, since $R \subseteq {}_nF_n$, we have $M \in {}_mR_m \subseteq {}_m({}_nF_n)_m = {}_{mn}F_{mn}$, so we can work out $\det_F M$, which will also be an element of $F$. Our main conclusion is that these two calculations give the same result:

Theorem 1. Let $R$ be a commutative subring of ${}_nF_n$, where $F$ is a field (or a commutative ring), and let $M \in {}_mR_m$. Then
$$\det{}_F M = \det{}_F(\det{}_R M). \tag{1}$$
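Theorem 1 can be illustrated numerically. In this sketch (pure Python; the helper names are my own) we take $m = n = 2$ and let $R = F[Q]$ be the subring generated by a single matrix $Q$, so that the four blocks commute pairwise and $\det_R M$ is simply $AD - BC$, itself a 2×2 matrix:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def block2(A, B, C, D):
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

Q = [[1, 2], [0, 1]]
I = [[1, 0], [0, 1]]

# Four blocks in the commutative subring F[Q] (polynomials in Q):
A = [[2 * q for q in row] for row in Q]                         # 2Q
B = matmul(Q, Q)                                                # Q^2
C = [[q + i for q, i in zip(rq, ri)] for rq, ri in zip(Q, I)]   # Q + I
D = [[3 * i for i in row] for row in I]                         # 3I

M = block2(A, B, C, D)                         # M as a 4x4 matrix over F
det_R_M = matsub(matmul(A, D), matmul(B, C))   # det_R M = AD - BC, a matrix in R

assert det(M) == det(det_R_M)  # Theorem 1: both sides equal 16
```

The outer determinant of the inner determinant really does recover the full 4×4 determinant, with no invertibility assumption on any block.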

For example, if $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, where $A, B, C, D$ are $n \times n$ matrices over $F$ which all commute with each other, then Theorem 1 says
$$\det{}_F M = \det{}_F(AD - BC). \tag{2}$$
Theorem 1 will be proved later. First, in section 2 we shall restrict attention to the case $m = 2$ and give some preliminary (and familiar) results about determinants of block-diagonal and block-triangular matrices which, as a by-product, yield a proof by block matrix techniques of the multiplicative property of determinants. In section 3 we shall prove something a little more general than Theorem 1 in the case $m = 2$; and Theorem 1 itself, for general $m$, will be proved in section 4.

2 The multiplicative property

Let $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, where $A, B, C, D \in {}_nF_n$, so that $M \in {}_{2n}F_{2n}$. As a first case, suppose $B = C = O$, the $n \times n$ zero matrix, so that $M = \begin{pmatrix} A & O \\ O & D \end{pmatrix}$, a block-diagonal matrix. It is a well-known fact that
$$\det{}_F \begin{pmatrix} A & O \\ O & D \end{pmatrix} = \det{}_F A \, \det{}_F D. \tag{3}$$
The keen-eyed reader will notice immediately that, since
$$\det{}_F A \, \det{}_F D = \det{}_F(AD), \tag{4}$$
equation (3) is just (2) in the special case where $B = C = O$. However, we postpone this step, because with care we can obtain a proof of the multiplicative property (4) as a by-product of the main argument. One way of proving (3) is to use the Laplace expansion of $\det_F M$ by the first $n$ rows, which gives the result immediately. A more elementary proof runs thus: generalize to the case where $A$ is $r \times r$ but $D$ is still $n \times n$. The result is now obvious if $r = 1$, by expanding by the first row. So use induction on $r$, expanding by the first row to perform the inductive step. (Details are left to the reader.) This result still holds if we know only that $B = O$, and the proof is exactly the same: we obtain
$$\det{}_F \begin{pmatrix} A & O \\ C & D \end{pmatrix} = \det{}_F A \, \det{}_F D. \tag{5}$$
By taking transposes, or by repeating the proof using columns instead of rows, we also obtain the result when $C = O$, namely,
$$\det{}_F \begin{pmatrix} A & B \\ O & D \end{pmatrix} = \det{}_F A \, \det{}_F D. \tag{6}$$
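Equations (3), (5) and (6) are easy to spot-check. The sketch below (helper names mine) verifies (3) and (5) for one 4×4 example built from 2×2 blocks:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def block2(A, B, C, D):
    """Assemble the 2x2 block matrix (A B; C D) from four n x n blocks."""
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

A = [[1, 2], [3, 4]]   # det A = -2
D = [[5, 1], [2, 3]]   # det D = 13
C = [[7, 0], [1, 6]]   # arbitrary; only B = O is required for (5)
O = [[0, 0], [0, 0]]

# Equation (5): block lower triangular.
assert det(block2(A, O, C, D)) == det(A) * det(D)
# Equation (3): block diagonal.
assert det(block2(A, O, O, D)) == det(A) * det(D)
```

Both determinants come out as $(-2)(13) = -26$, independently of the choice of $C$.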

In order to prove (4) we need to assume something about determinants, and we shall assume that adding a multiple of one row (respectively, column) to another row (respectively, column) of a matrix does not alter its determinant. Since multiplying a matrix on the left (respectively, right) by a unitriangular matrix corresponds to performing a number of such operations on the rows (respectively, columns), it does not alter the determinant. (A unitriangular matrix is a triangular matrix with all diagonal entries equal to 1.) We shall also assume that $\det_F I_n = 1$, where $I_n$ is the $n \times n$ identity matrix. So now observe that
$$\begin{pmatrix} I_n & -I_n \\ O & I_n \end{pmatrix}\begin{pmatrix} I_n & O \\ I_n & I_n \end{pmatrix}\begin{pmatrix} I_n & -I_n \\ O & I_n \end{pmatrix}\begin{pmatrix} A & AD \\ -I_n & O \end{pmatrix} = \begin{pmatrix} I_n & O \\ A & AD \end{pmatrix}, \tag{7}$$
whence $\det_F \begin{pmatrix} A & AD \\ -I_n & O \end{pmatrix} = \det_F \begin{pmatrix} I_n & O \\ A & AD \end{pmatrix}$, since the first three matrices on the left of (7) are unitriangular. From (5) and (6) it follows from this that
$$\det{}_F \begin{pmatrix} A & AD \\ -I_n & O \end{pmatrix} = \det{}_F I_n \, \det{}_F(AD); \quad\text{but also}\quad \begin{pmatrix} A & O \\ -I_n & D \end{pmatrix}\begin{pmatrix} I_n & D \\ O & I_n \end{pmatrix} = \begin{pmatrix} A & AD \\ -I_n & O \end{pmatrix}. \tag{8}$$
Here the second matrix on the left is unitriangular, so taking determinants and using (5) and the first part of (8), we have
$$\det{}_F A \, \det{}_F D = \det{}_F I_n \, \det{}_F(AD),$$
and since $\det_F I_n = 1$, the multiplicative law (4) for determinants in ${}_nF_n$ follows.

3 Determinants of 2×2 block matrices

Since we now know that $\det_F A \, \det_F D = \det_F(AD)$, then also $\det_F(-B)\,\det_F C = \det_F((-B)C) = \det_F(-BC) = \det_F(-CB)$. From (5), (6) and (8), we obtain:

Lemma 2. If $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, then
$$\det{}_F M = \det{}_F(AD - BC) \tag{9}$$
whenever at least one of the blocks $A$, $B$, $C$, $D$ is equal to $O$. (Compare this with (2).)

We shall now try to generalize somewhat. Suppose the blocks $C$ and $D$ commute, that is, $CD = DC$. Then
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} D & O \\ -C & I_n \end{pmatrix} = \begin{pmatrix} AD - BC & B \\ CD - DC & D \end{pmatrix} = \begin{pmatrix} AD - BC & B \\ O & D \end{pmatrix}. \tag{10}$$

We proved (4) in ${}_nF_n$ for all $n$, so we can apply it in ${}_{2n}F_{2n}$ to get, via (5) and (6),
$$\det{}_F M \, \det{}_F D = \det{}_F(AD - BC)\, \det{}_F D,$$
and so
$$\bigl(\det{}_F M - \det{}_F(AD - BC)\bigr)\det{}_F D = 0. \tag{11}$$
Now if $\det_F D$ is not zero (or, in the case where $F$ is a ring rather than a field, if $\det_F D$ is not a divisor of zero), then (9) follows immediately from (11); but we do not actually need this extra assumption, as we shall now show. Adjoin an indeterminate $x$ to $F$, and work in the polynomial ring $F[x]$. This is a(nother) commutative ring, and a typical element is a polynomial $a_0x^r + a_1x^{r-1} + \dots + a_{r-1}x + a_r$, where $a_i \in F$ for all $i$, and addition, subtraction and multiplication of polynomials is done in the obvious way. Now let us add to $D$ the scalar matrix $xI_n$, and for brevity write $D_x = xI_n + D$. Since a scalar matrix necessarily commutes with $C$ (because it commutes with every matrix), and also $D$ commutes with $C$, it follows that $D_x$ commutes with $C$. Thus, if we put $M_x = \begin{pmatrix} A & B \\ C & D_x \end{pmatrix}$, then working over $F[x]$ in place of $F$ yields equation (11) for $M_x$, that is,
$$\bigl(\det{}_F M_x - \det{}_F(AD_x - BC)\bigr)\det{}_F D_x = 0. \tag{12}$$
But here we have the product of two polynomials equal to the zero polynomial. The second polynomial $\det_F D_x = \det_F(xI_n + D)$ is certainly not the zero polynomial, but is monic of degree $n$, that is, it is of the form $x^n + {}$terms of lower degree. (It is in fact the characteristic polynomial of $-D$.) This means that the left-hand polynomial in (12), the bracketed expression, must be the zero polynomial. For if not, it is of the form $a_0x^r + {}$terms of lower degree, where $a_0 \in F$ and $a_0 \ne 0$. Multiplying the two polynomials, we get $a_0x^{r+n} + {}$terms of lower degree, which cannot be the zero polynomial, a contradiction. (This argument works even when $F$ is not a field, but merely a commutative ring, possibly with divisors of zero.) So now we have proved that $\det_F M_x - \det_F(AD_x - BC) = 0$; but $D_0 = D$ and $M_0 = M$, so we just have to put $x = 0$ and we obtain the following stronger version of Theorem 1:

Theorem 3.
If $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, where $A, B, C, D \in {}_nF_n$ and $CD = DC$, then
$$\det{}_F M = \det{}_F(AD - BC). \tag{13}$$

Other versions of Theorem 3 hold if different blocks of $M$ commute, so it is an exercise for the reader to show that

if $AC = CA$ then $\det_F M = \det_F(AD - CB)$; (14)
if $BD = DB$ then $\det_F M = \det_F(DA - BC)$; (15)
if $AB = BA$ then $\det_F M = \det_F(DA - CB)$. (16)

Of course, if all four blocks of $M$ commute pairwise, then all of (13), (14), (15), (16) hold, the expressions on the right being equal. As a particular case, if $R$ is a commutative subring of ${}_nF_n$ and $A, B, C, D \in R$, then any of (13)–(16) gives an expression for $\det_F M$, and this gives a proof of Theorem 1 in the case $m = 2$.
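The point of the polynomial trick is that Theorem 3 needs no invertibility. The sketch below (helper names mine) checks (13) with a singular $D$, where the usual Schur-complement route via $D^{-1}$ is unavailable:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def block2(A, B, C, D):
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [0, 1]]
D = [[0, 1], [0, 0]]   # singular: det D = 0, so (11) alone would prove nothing

assert matmul(C, D) == matmul(D, C)   # hypothesis of Theorem 3: CD = DC

M = block2(A, B, C, D)
# Equation (13): det M = det(AD - BC); here both sides equal -10.
assert det(M) == det(matsub(matmul(A, D), matmul(B, C)))
```

Note that the blocks $A$ and $B$ here commute with nothing in particular; only the hypothesis $CD = DC$ matters.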

4 Determinants of m×m block matrices

In this section we shall prove Theorem 1, but first we do some preliminary calculations. Let $R$ be any commutative ring, and let $M \in {}_mR_m$, where $m \ge 2$. We partition $M$ as follows:
$$M = \begin{pmatrix} A & b \\ c & d \end{pmatrix},$$
where $A \in {}_{m-1}R_{m-1}$, $b \in {}_{m-1}R$, $c \in R_{m-1}$, and $d \in R$. So here $b$ is a column vector, and $c$ is a row vector. Suppose $c = (c_1, c_2, \dots, c_{m-1})$, and consider the effect of multiplying this on the right by the scalar matrix $dI_{m-1}$. We have
$$c(dI_{m-1}) = (c_1, c_2, \dots, c_{m-1})\begin{pmatrix} d & 0 & \cdots & 0 \\ 0 & d & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d \end{pmatrix} = (c_1d, c_2d, \dots, c_{m-1}d) = (dc_1, dc_2, \dots, dc_{m-1}) = dc,$$
because $R$ is commutative. In a similar way, $A(dI_{m-1}) = dA$. So now we have
$$\begin{pmatrix} A & b \\ c & d \end{pmatrix}\begin{pmatrix} dI_{m-1} & 0 \\ -c & 1 \end{pmatrix} = \begin{pmatrix} A' & b \\ 0 & d \end{pmatrix}, \tag{17}$$
where $A' = dA - bc \in {}_{m-1}R_{m-1}$. (Notice here that $bc$, being the product of an $(m-1) \times 1$ matrix and a $1 \times (m-1)$ matrix, is an $(m-1) \times (m-1)$ matrix.) Applying $\det_R$ to each side of (17), we get
$$(\det{}_R M)\,d^{m-1} = (\det{}_R A')\,d. \tag{18}$$
If we now take $R$ to be a commutative subring of ${}_nF_n$, then (18) is an equation in ${}_nF_n$, so we can apply $\det_F$ to each side and get
$$\det{}_F(\det{}_R M)\,(\det{}_F d)^{m-1} = \det{}_F(\det{}_R A')\,(\det{}_F d). \tag{19}$$
Also, (17) is now an equation in ${}_{mn}F_{mn}$, so we can apply $\det_F$ to each side and obtain
$$(\det{}_F M)\,(\det{}_F d)^{m-1} = (\det{}_F A')\,(\det{}_F d). \tag{20}$$
We now have everything we need ready for a proof of Theorem 1.

Proof of Theorem 1. The result is trivial if $m = 1$, for then $\det_R M = M$, and there is nothing to prove. So assume $m \ge 2$, partition $M$ as above, and use induction on $m$. By the inductive hypothesis, $\det_F(\det_R A') = \det_F A'$, since $A' \in {}_{m-1}R_{m-1}$. From (19) and (20) we deduce
$$\bigl(\det{}_F M - \det{}_F(\det{}_R M)\bigr)(\det{}_F d)^{m-1} = 0. \tag{21}$$
Now if $\det_F d$ is not zero (or, in the case where $F$ is a commutative ring rather than a field, if $\det_F d$ is not a divisor of zero), then our result follows immediately. To deal with other cases, we repeat our trick with polynomials from the proof of Lemma 2, in section 3.

Adjoin an indeterminate $x$ to $F$, and work in the polynomial ring $F[x]$. Add the scalar matrix $xI_n$ to $d$ to define $d_x = xI_n + d$ (which may look strange, but $d$, being in $R$, really is an $n \times n$ matrix over $F$), and make the corresponding adjustment to $M$, by putting $M_x = \begin{pmatrix} A & b \\ c & d_x \end{pmatrix}$. Then exactly as for $M$, we have
$$\bigl(\det{}_F M_x - \det{}_F(\det{}_R M_x)\bigr)(\det{}_F d_x)^{m-1} = 0. \tag{22}$$
But $\det_F d_x$ is a monic polynomial of degree $n$, and we conclude that the other term in (22), $\det_F M_x - \det_F(\det_R M_x)$, is the zero polynomial. Putting $x = 0$ and noting that $d_0 = d$ and $M_0 = M$, we obtain $\det_F M = \det_F(\det_R M)$, and the proof of Theorem 1 is complete.

As an application, we can obtain the formula for the determinant of a tensor product of matrices rather easily. For example, if $P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix} \in {}_2F_2$ and $Q \in {}_nF_n$, the tensor product $P \otimes Q$ is defined to be the $2n \times 2n$ matrix $\begin{pmatrix} p_{11}Q & p_{12}Q \\ p_{21}Q & p_{22}Q \end{pmatrix}$. Here the blocks, all being scalar multiples of $Q$, commute pairwise; one could in fact take $R$ to be the subring of ${}_nF_n$ generated by $Q$, that is, the set $F[Q]$ of all polynomial expressions $a_0Q^r + a_1Q^{r-1} + \dots + a_{r-1}Q + a_rI_n$, where $r \ge 0$ and $a_i \in F$ for all $i$. This is certainly a commutative subring of ${}_nF_n$ and contains all four blocks $p_{ij}Q$. Applying Theorem 1, or Theorem 3, we have
$$\det{}_F(P \otimes Q) = \det{}_F\bigl((p_{11}Q)(p_{22}Q) - (p_{12}Q)(p_{21}Q)\bigr) = \det{}_F\bigl((p_{11}p_{22} - p_{12}p_{21})Q^2\bigr) = \det{}_F\bigl((\det{}_F P)Q^2\bigr) = (\det{}_F P)^n(\det{}_F Q)^2.$$
More generally:

Corollary. Let $P \in {}_mF_m$ and $Q \in {}_nF_n$. Then
$$\det{}_F(P \otimes Q) = (\det{}_F P)^n(\det{}_F Q)^m.$$

Proof. Take $R = F[Q]$, as above. If $P = (p_{ij})$, then by definition
$$P \otimes Q = \begin{pmatrix} p_{11}Q & p_{12}Q & \cdots & p_{1m}Q \\ p_{21}Q & p_{22}Q & \cdots & p_{2m}Q \\ \vdots & \vdots & & \vdots \\ p_{m1}Q & p_{m2}Q & \cdots & p_{mm}Q \end{pmatrix} \in {}_mR_m.$$
Then $\det_R(P \otimes Q)$ is a sum of $m!$ signed terms, each of the form
$$\pm(p_{1i_1}Q)(p_{2i_2}Q)\cdots(p_{mi_m}Q) = \pm\,p_{1i_1}p_{2i_2}\cdots p_{mi_m}Q^m$$

for some permutation $i_1, i_2, \dots, i_m$ of $1, 2, \dots, m$. Thus
$$\det{}_R(P \otimes Q) = (\det{}_F P)\,Q^m.$$
Then, by Theorem 1,
$$\det{}_F(P \otimes Q) = \det{}_F(\det{}_R(P \otimes Q)) = \det{}_F\bigl((\det{}_F P)Q^m\bigr) = (\det{}_F P)^n \det{}_F(Q^m) = (\det{}_F P)^n(\det{}_F Q)^m.$$

5 Acknowledgements

This paper began when I set my linear algebra class the following exercise: if $A, B, C, D \in {}_nF_n$ and $D$ is invertible, show that
$$\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BD^{-1}CD).$$
(The intended proof was via the equation
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} I_n & O \\ -D^{-1}C & I_n \end{pmatrix} = \begin{pmatrix} A - BD^{-1}C & B \\ O & D \end{pmatrix},$$
and the exercise took its inspiration from the theory of Dieudonné determinants for matrices over skew fields. See, for example, [1], chapter IV.) It was a small step to notice that, if also $CD = DC$, then (13) holds; but (13) does not mention $D^{-1}$, and so the natural question to ask was whether (13) would hold even if $D$ were not invertible (but still $CD = DC$). I eventually found a proof of this, when $F$ is a field, by using the principle of the irrelevance of algebraic inequalities. (See [3], chapter 6.) I passed this around friends and colleagues, and Dr W. Stephenson showed me how to use monic polynomials to shorten the argument, and extend it from fields to rings. At the same time, Dr A. D. Barnard suggested it might be possible to extend the result to the $m \times m$ case by assuming all the pairs of blocks commute. I am grateful to both of them for their help, and their interest. I also wish to thank the referee for directing my attention to some of the early literature on this subject, mentioned below. Theorem 1 is not, I think, a new result, and I have seen what appears to be an abstract version of it (without proof) at a much more advanced level. I have not been able to find an elementary statement and proof, as given above, in the literature. The block matrix proof of the multiplicative property of determinants is essentially that given in [2], chapter 4. The formula for the determinant of a tensor product first appears in the case $m = 4$, $n = 2$ in [11], and indeed is referred to in [7] as Zehfuss's theorem.
The first proof of the general case is probably that in [8], p. 117, though in [5], p. 82, the proof is attributed to [4], [9] and [10]. See also the references to these articles in [7], volume 2, pp. 102–104, and volume 4, pp. 42, 62 and 216.
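The Corollary on tensor products is also easy to test numerically. Here is a minimal sketch in Python (the helper names are mine), checking $\det_F(P \otimes Q) = (\det_F P)^n(\det_F Q)^m$ for a 2×2 matrix $P$ and a 3×3 matrix $Q$:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def kron(P, Q):
    """Tensor (Kronecker) product: the block matrix whose (i,j) block is p_ij * Q."""
    return [[P[i][j] * Q[k][l]
             for j in range(len(P[0])) for l in range(len(Q[0]))]
            for i in range(len(P)) for k in range(len(Q))]

P = [[1, 2], [3, 5]]                      # m = 2, det P = -1
Q = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]     # n = 3, det Q = 8
m, n = len(P), len(Q)

assert det(kron(P, Q)) == det(P) ** n * det(Q) ** m
```

Swapping in other integer matrices (or other sizes of $P$ and $Q$) exercises the same identity; the cofactor-expansion determinant is slow but adequate for small examples like these.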

References

[1] E. Artin, Geometric Algebra, Interscience 1957.
[2] Frank Ayres, Jr., Matrices, Schaum's Outline Series, McGraw-Hill 1962.
[3] P. M. Cohn, Algebra, Volume 1, Wiley 1982 (2nd edition).
[4] K. Hensel, Über die Darstellung der Determinante eines Systems, welches aus zwei anderen componirt ist, Acta Math. 14 (1889), 317–319.
[5] C. C. MacDuffee, The theory of matrices, Chelsea 1956.
[6] L. Mirsky, An introduction to linear algebra, Clarendon Press 1955.
[7] T. Muir, The theory of determinants in the historical order of development, Dover reprint 1960.
[8] T. Muir, A treatise on determinants, Macmillan 1881.
[9] E. Netto, Zwei Determinantensätze, Acta Math. 17 (1893), 199–204.
[10] R. D. von Sterneck, Beweis eines Satzes über Determinanten, Monatshefte f. Math. u. Phys. 6 (1895), 205–207.
[11] G. Zehfuss, Über eine gewisse Determinante, Zeitschrift f. Math. u. Phys. 3 (1858), 298–301.

Department of Mathematics
King's College
Strand
London WC2R 2LS
(jrs@kcl.ac.uk)

7 September 1999