DETERMINANTAL DIVISOR RANK OF AN INTEGRAL MATRIX


INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES
Volume 4, Number 1, Pages 8-14
© 2008 Institute for Scientific Computing and Information

DETERMINANTAL DIVISOR RANK OF AN INTEGRAL MATRIX

RAVI B. BAPAT

Abstract. We define the determinantal divisor rank of an integral matrix to be the number of invariant factors which equal 1. Some properties of the determinantal divisor rank are proved, which are analogous to known properties of the usual rank. These include the Frobenius inequality for the rank of a product and a relation between the rank of a submatrix of a matrix and that of its complementary submatrix in the inverse or a generalized inverse of the matrix.

Key Words. Integral matrix, determinantal divisor, invariant factor, generalized inverse.

1. Introduction and Preliminaries

We work with matrices over the integers Z, but all the results have obvious generalizations to matrices over a principal ideal ring. Let M_{m,n}(Z) be the set of m × n matrices over Z. We assume familiarity with basic facts concerning integral matrices; see, for example, [6]. For A ∈ M_{m,n}(Z) we denote the jth determinantal divisor (which by definition is the gcd of the j × j minors) of A by d_j(A), j = 1, …, min{m, n}, and the jth invariant factor by s_j(A), j = 1, …, rank A. Recall that

s_j(A) = d_j(A) / d_{j−1}(A), j = 1, …, rank A,

where we set d_0(A) = 1.

If A ∈ M_{m,n}(Z), then we define the determinantal divisor rank of A, denoted by δ(A), as the maximum integer k such that d_k(A) = 1 if such an integer exists, and zero otherwise. Equivalently, δ(A) is the number of invariant factors of A which equal 1. This definition is partly motivated by the following result obtained recently by Zhan [11]. We have modified the notation for consistency.

Theorem 1. Let r, s, n be positive integers with r, s ≤ n. The matrix B ∈ M_{r,s}(Z) is a submatrix of some unimodular matrix of order n if and only if δ(B) ≥ r + s − n.

Theorem 1 has an uncanny similarity to the following well-known result: an r × s matrix B over a field is a submatrix of a nonsingular matrix of order n if and only if the rank of B is at least r + s − n. We show that this similarity extends to some other results, such as the Frobenius inequality for a matrix product and a result of Fiedler and Markham [4] and Gustafson [5], and its extension to generalized inverses obtained in [1].

We now introduce some more definitions. If A and G are matrices of order m × n and n × m respectively, then G is called a generalized inverse of A if AGA = A. We say that G is a reflexive generalized inverse of A if AGA = A and GAG = G. For background material on generalized inverses see [2, 3].

Received by the editors January 26, 2006 and, in revised form, March 26, 2007.
2000 Mathematics Subject Classification. 35R35, 49J40, 60G40.
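Although not part of the paper, the definitions above translate directly into a short brute-force computation. The sketch below (function names are my own) computes d_j(A) as the gcd of all j × j minors and δ(A) as the largest k with d_k(A) = 1:

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by cofactor expansion (adequate for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def d(A, j):
    """j-th determinantal divisor: gcd of all j x j minors of A (0 if they all vanish)."""
    g = 0
    for rows in combinations(range(len(A)), j):
        for cols in combinations(range(len(A[0])), j):
            g = gcd(g, det([[A[i][c] for c in cols] for i in rows]))
    return g

def delta(A):
    """Determinantal divisor rank: the largest k with d_k(A) = 1, and 0 if d_1(A) != 1."""
    k = 0
    while k < min(len(A), len(A[0])) and d(A, k + 1) == 1:
        k += 1
    return k

# diag(1, 2, 6) has invariant factors 1, 2, 6, of which exactly one equals 1.
A = [[1, 0, 0], [0, 2, 0], [0, 0, 6]]
print(d(A, 1), d(A, 2), d(A, 3))  # 1 2 12
print(delta(A))                   # 1
```

Since d_j | d_{j+1}, stopping at the first divisor different from 1 is equivalent to the "maximum k" in the definition, and the result agrees with counting the invariant factors equal to 1.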

Generalized inverses of integral matrices have been extensively studied. We say that an integral matrix is regular if it admits an integral generalized inverse. It is well known (see, for example, [9]) that an integral matrix is regular if and only if all its invariant factors are equal to 1. Thus an integral matrix is regular if and only if its rank coincides with the determinantal divisor rank.

Let A ∈ M_{m,n}(Z) and suppose rank A = r > 0. We say that A admits a rank factorization if there exist matrices P ∈ M_{m,r}(Z) and Q ∈ M_{r,n}(Z) such that P has a left inverse in M_{r,m}(Z), Q has a right inverse in M_{n,r}(Z) and A = PQ. Then we have the following result. As remarked earlier, the equivalence of (i) and (iii) is known. The other implications are easy to prove.

Lemma 2. Let A ∈ M_{m,n}(Z). Then the following conditions are equivalent:
(i) A is regular.
(ii) A admits a rank factorization.
(iii) Each invariant factor of A is 1.
(iv) The determinantal divisor rank of A equals the rank of A.

We now prove the following:

Theorem 3. Let A ∈ M_{m,n}(Z) and let B ∈ M_{r,s}(Z) be a submatrix of A. Then
(i) δ(B) ≤ δ(A);
(ii) r + s − δ(B) ≤ m + n − δ(A).

Proof. First suppose that m = r + 1, n = s, and without loss of generality assume that B is constituted by the first r rows of A. Let p = min{r, s}. Let UBV = diag(s_1, …, s_p)_{r×s} = D be the Smith normal form, where U, V are unimodular matrices, s_1, …, s_k are the invariant factors and s_{k+1} = ⋯ = s_p = 0, k = rank B. Let

Ã = diag(U, 1) A V = [ D ; x^T ],

where [ D ; x^T ] denotes D stacked above the row vector x^T. In Ã, subtract x_i times the i-th row from the last row, i = 1, …, δ(B). Then Ã is reduced to a matrix in which the first δ(B) elements of the last row equal zero. Clearly, if in the reduced matrix the last row has an entry equal to 1, then δ(Ã) = δ(A) = δ(B) + 1. Otherwise, δ(A) = δ(B). Therefore (i) is proved in this case.

Let t = δ(A). If ã_{i_1 j_1}, …, ã_{i_t j_t} is a partial diagonal of Ã of length t, then it contains at least t − 1 entries of D. Thus any partial diagonal product of Ã of length t is either 0 or is divisible by s_{t−1}(B). Thus s_{t−1}(B) | d_t(A) = 1. It follows that s_1(B) = ⋯ = s_{t−1}(B) = 1 and hence δ(B) ≥ t − 1. Therefore (ii) is also proved in this case.

When A is obtained by appending a column to B, the proof is similar. The general case is proved by appending a row or a column at a time.

It is clear that the following assertion contained in Theorem 1 is a consequence of Theorem 3, (ii): Let A ∈ M_{n,n}(Z) be a unimodular matrix and let B ∈ M_{r,s}(Z) be a submatrix of A. Then δ(B) ≥ r + s − n.

2. Rank of product and sum

If A and B are matrices such that AB is defined, then rank AB ≤ min{rank A, rank B}. The corresponding result for δ is given next.

Lemma 4. Let A ∈ M_{m,n}(Z), B ∈ M_{n,p}(Z). Then δ(AB) ≤ min{δ(A), δ(B)}.
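Before the proofs, a quick numerical sanity check of Theorem 3 and Lemma 4 may be helpful; the brute-force helper below is mine, not the paper's:

```python
from itertools import combinations
from math import gcd

def delta(A):
    """Determinantal divisor rank: number of leading determinantal divisors equal to 1."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    m, n = len(A), len(A[0])
    k = 0
    for j in range(1, min(m, n) + 1):
        g = 0
        for rs in combinations(range(m), j):
            for cs in combinations(range(n), j):
                g = gcd(g, det([[A[i][c] for c in cs] for i in rs]))
        if g != 1:
            break
        k = j
    return k

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]  # delta(A) = 2: d_1 = d_2 = 1, d_3 = |det A| = 3
B = [row[:2] for row in A[:2]]          # leading 2 x 2 submatrix, delta(B) = 1
m, n, r, s = 3, 3, 2, 2

assert delta(B) <= delta(A)                  # Theorem 3 (i)
assert r + s - delta(B) <= m + n - delta(A)  # Theorem 3 (ii)

M = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
assert delta(matmul(A, M)) <= min(delta(A), delta(M))  # Lemma 4
```

Here both inequalities of Theorem 3 hold with room to spare (3 ≤ 4 in (ii)), and the product bound of Lemma 4 is satisfied as well.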

Proof. By the Cauchy-Binet formula, any t × t minor of AB is either zero or is an integral linear combination of t × t minors of A, t = 1, …, min{m, p}. Thus, if d_t(AB) is nonzero, then d_t(A) | d_t(AB). Putting t = δ(AB) we see that d_{δ(AB)}(A) = 1 and hence δ(AB) ≤ δ(A). It can be similarly shown that δ(AB) ≤ δ(B).

If A and B are matrices of order m × n and n × p respectively, then it is well known that rank AB ≥ rank A + rank B − n. We now prove the corresponding result for δ.

Theorem 5. Let A ∈ M_{m,n}(Z), B ∈ M_{n,p}(Z). Then

(1) δ(AB) ≥ δ(A) + δ(B) − n.

Proof. Let δ(A) = r, δ(B) = s. Let w = min{m, n}, z = min{n, p}. Let

A = U diag(h_1, …, h_r, h_{r+1}, …, h_w)_{m×n} V and B = U_1 diag(k_1, …, k_s, k_{s+1}, …, k_z)_{n×p} V_1

be Smith normal forms, where h_1 = ⋯ = h_r = k_1 = ⋯ = k_s = 1. Then

U^{−1} A B V_1^{−1} = diag(h_1, …, h_w)_{m×n} V U_1 diag(k_1, …, k_z)_{n×p}.

It follows from the preceding equation that the submatrix of V U_1 formed by the first r rows and the first s columns is a submatrix of U^{−1} A B V_1^{−1}. Since V U_1 is unimodular, it follows from Theorem 1 that this submatrix must have at least r + s − n invariant factors equal to 1. Thus by Theorem 3, (i),

δ(AB) = δ(U^{−1} A B V_1^{−1}) ≥ r + s − n = δ(A) + δ(B) − n,

and the proof is complete.

If A, X, B are matrices such that AXB is defined, then the well-known Frobenius inequality asserts that rank AXB ≥ rank AX + rank XB − rank X. The analogous result for δ is false, as seen from the following example. Let A = diag(1, 0, 1), X = diag(2, 2, 3), B = diag(0, 1, 1). Then δ(X) = 1, δ(AX) = 1, δ(XB) = 1 and δ(AXB) = 0. Thus δ(AXB) is less than δ(AX) + δ(XB) − δ(X). We now show that an analogue of the Frobenius inequality is true when X is regular.

Theorem 6. Let A ∈ M_{m,n}(Z), X ∈ M_{n,p}(Z) and B ∈ M_{p,q}(Z), and suppose X is regular. Then

(2) δ(AXB) ≥ δ(AX) + δ(XB) − δ(X).

Proof. Let rank X = r. Since X is regular, by Lemma 2 each of the r invariant factors of X equals 1. Let

X = U diag(I_r, 0)_{n×p} V

be the Smith normal form of X. Then

AXB = A U [ I_r ; 0 ]_{n×r} [ I_r  0 ]_{r×p} V B.

Let C = A U [ I_r ; 0 ]_{n×r} and D = [ I_r  0 ]_{r×p} V B. By Theorem 5,

(3) δ(CD) ≥ δ(C) + δ(D) − r.

Clearly δ(AX) = δ(C) and δ(XB) = δ(D), and the result follows from (3).

We now turn to the sum of two matrices. If A, B ∈ M_{m,n}(Z), then it is not true in general that δ(A + B) ≤ δ(A) + δ(B). For example, let A = diag(2, 0) and B = diag(0, 3). Then δ(A) = δ(B) = 0, whereas δ(A + B) = 1. However, if either A or B is regular, then δ(A + B) ≤ δ(A) + δ(B) holds, as we now show. We first prove a preliminary result. We include a proof for completeness, although it is essentially contained in the proof of Theorem 3.

Lemma 7. Let A ∈ M_{m,n}(Z) and let C ∈ M_{m+1,n}(Z) be obtained by augmenting A by a row vector. Then δ(C) ≤ δ(A) + 1.

Proof. Any t × t minor of C is an integral linear combination of (t − 1) × (t − 1) minors of A. Thus d_{t−1}(A) | d_t(C). Putting t = δ(C) we see that δ(A) ≥ δ(C) − 1, and the proof is complete.

Theorem 8. Let A, B ∈ M_{m,n}(Z), and suppose either A or B is regular. Then δ(A + B) ≤ δ(A) + δ(B).

Proof. First suppose B is regular. We assume, without loss of generality, that B is in Smith normal form. Since

A + B = [ I_m  I_m ] [ A ; B ],

where [ A ; B ] denotes A stacked above B, we see, in view of Lemma 4, that

δ(A + B) ≤ δ( [ A ; B ] ).

The result follows by a repeated application of Lemma 7, keeping in mind that the determinantal divisor rank is unchanged when a matrix is augmented by a zero row. The proof is similar when A is regular.

3. Complementary submatrices in a matrix and its inverse

It has been shown by Fiedler and Markham [4] and by Gustafson [5] that if A is a nonsingular n × n matrix over a field, then the nullity of any submatrix of A equals the nullity of the complementary submatrix in A^{−1}. The result was extended to the Moore-Penrose inverse by Robinson [10], and then an extension to any generalized inverse was given in [1]. We now prove the analogues of these results for the determinantal divisor rank.

Theorem 9. Let A ∈ M_{n,n}(Z) be a unimodular matrix, let B ∈ M_{n,n}(Z) be the inverse of A, and suppose A and B are partitioned as

A = [ A_{11}  A_{12} ; A_{21}  A_{22} ],  B = [ B_{11}  B_{12} ; B_{21}  B_{22} ],

where A_{11} is r × s, A_{22} is (n − r) × (n − s), B_{11} is s × r and B_{22} is (n − s) × (n − r). Then

r − δ(A_{11}) = n − s − δ(B_{22}).
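Theorem 9 is easy to verify on a small example. The sketch below (helper functions are my own, not the paper's) checks r − δ(A_{11}) = n − s − δ(B_{22}) for a concrete unimodular A with r = s = 1:

```python
from itertools import combinations
from math import gcd

def delta(A):
    """Determinantal divisor rank: number of leading determinantal divisors equal to 1."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    m, n = len(A), len(A[0])
    k = 0
    for j in range(1, min(m, n) + 1):
        g = 0
        for rs in combinations(range(m), j):
            for cs in combinations(range(n), j):
                g = gcd(g, det([[A[i][c] for c in cs] for i in rs]))
        if g != 1:
            break
        k = j
    return k

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1, 0], [1, 1, 0], [0, 0, 1]]    # det A = 1, so A is unimodular
B = [[1, -1, 0], [-1, 2, 0], [0, 0, 1]]  # B = A^{-1}
assert matmul(A, B) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

n, r, s = 3, 1, 1
A11 = [[A[0][0]]]                 # leading 1 x 1 block [2], so delta(A11) = 0
B22 = [row[1:] for row in B[1:]]  # complementary block diag(2, 1), delta(B22) = 1
assert r - delta(A11) == n - s - delta(B22)  # both sides equal 1
```

Note that δ drops below full rank on both sides: A_{11} = [2] has no unit divisor, and the matching deficiency shows up in the complementary block of the inverse, exactly as the theorem predicts.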

Proof. By Jacobi's identity, any t × t minor of A_{11} equals, up to sign, the complementary minor in B, 1 ≤ t ≤ min{r, s}. Consider the submatrix of B which is complementary to A_{11}; it has (s − t) + (n − s) rows, (r − t) + (n − r) columns, and contains B_{22} as its lower-right block:

(4) [ *  * ; *  B_{22} ].

First suppose that n − r − s + t > 0. It can be seen, by Laplace expansion, that the determinant of the matrix (4) can be expressed as an integral linear combination of minors of B_{22} of order n − s − r + t. Thus d_{n−s−r+t}(B_{22}) | d_t(A_{11}). Setting t = δ(A_{11}) we see that d_{n−s−r+δ(A_{11})}(B_{22}) divides 1, and therefore

(5) δ(B_{22}) ≥ n − s − r + δ(A_{11}).

If n − r − s + δ(A_{11}) ≤ 0, then (5) is obvious. Therefore

r − δ(A_{11}) ≥ n − s − δ(B_{22}).

Reversing the roles of A and B we can prove

r − δ(A_{11}) ≤ n − s − δ(B_{22}),

and hence the result is proved.

If A and G are matrices of order m × n and n × m respectively, then it will be convenient to assume that they are partitioned as follows:

(6) A = [ A_{11}  A_{12} ; A_{21}  A_{22} ],  G = [ G_{11}  G_{12} ; G_{21}  G_{22} ],

where the row blocks of A (and the column blocks of G) have sizes p_1 and p_2, the column blocks of A (and the row blocks of G) have sizes q_1 and q_2, and p_1 + p_2 = m, q_1 + q_2 = n.

The next result is similar to a result due to Nomakuchi [7] over a field. The proof is based on familiar ideas; see, for example, [8].

Theorem 10. Let A ∈ M_{m,n}(Z) be a regular matrix of rank r and let G ∈ M_{n,m}(Z) be a generalized inverse of A. Then there exist matrices B ∈ M_{m,m−r}(Z), Q ∈ M_{n−r,n}(Z) and T ∈ M_{n−r,m−r}(Z) such that the matrix

[ A  B ; Q  T ]

is unimodular and G is the submatrix formed by the first n rows and the first m columns of its inverse.

Proof. The matrices AG and GA are idempotent. Thus I_m − AG and I_n − GA are idempotent as well, and admit rank factorizations by Lemma 2. Let I_m − AG = BC and I_n − GA = PQ be rank factorizations. Let P^− be a left inverse of P and C^− a right inverse of C, and set T = −P^−(G − GAG)C^−. We have

(7) [ A  B ; Q  T ] [ G  P ; C  0 ] = [ AG + BC  AP ; QG + TC  QP ].

Since I_n − GA = PQ, we have APQ = 0 and hence AP = 0. Similarly we can verify that AG + BC = I_m, QG + TC = 0 and QP = I_{n−r}. Thus

[ A  B ; Q  T ] [ G  P ; C  0 ]

equals the identity matrix and the result is proved.

We now prove the main result of this section.

Theorem 11. Let A ∈ M_{m,n}(Z) be a regular matrix of rank r and let G ∈ M_{n,m}(Z) be a generalized inverse of A. Let A and G be partitioned as in (6). Then

(8) r − p_1 − q_1 ≤ δ(G_{22}) − δ(A_{11}) ≤ p_2 + q_2 − r.

Proof. By Theorem 10 there exist matrices B ∈ M_{m,m−r}(Z), Q ∈ M_{n−r,n}(Z) and T ∈ M_{n−r,m−r}(Z) such that the matrix

S = [ A  B ; Q  T ]

is unimodular and G is the submatrix formed by the first n rows and the first m columns of its inverse. Thus we may write

S = [ A_{11}  A_{12}  B_1 ; A_{21}  A_{22}  B_2 ; Q_1  Q_2  T ],  S^{−1} = [ G_{11}  G_{12}  P_1 ; G_{21}  G_{22}  P_2 ; C_1  C_2  0 ],

where the row blocks of S have p_1, p_2 and n − r rows and its column blocks have q_1, q_2 and m − r columns. Since S is unimodular of order m + n − r, we have, using Theorem 9 and Theorem 3, (i),

(9) p_1 − δ(A_{11}) = m + q_2 − r − δ( [ G_{22}  P_2 ; C_2  0 ] )

(10) ≤ m + q_2 − r − δ(G_{22}).

It follows that δ(G_{22}) − δ(A_{11}) ≤ p_2 + q_2 − r, and the proof of the second inequality in (8) is complete. We now prove the first inequality. By Theorem 3, (ii) we have

(11) p_2 + q_2 − δ(G_{22}) ≤ (q_2 + m − r) + (p_2 + n − r) − δ( [ G_{22}  P_2 ; C_2  0 ] ).

Combining (9) and (11) we see that the first inequality in (8) is true and the proof is complete.

References

[1] R. B. Bapat, Outer inverses: Jacobi type identities and nullities of submatrices, Linear Algebra and Its Applications, 361:107-120 (2003).
[2] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, 1974.
[3] S. L. Campbell and C. D. Meyer, Jr., Generalized Inverses of Linear Transformations, Pitman, 1979.
[4] M. Fiedler and T. L. Markham, Completing a matrix when certain entries of its inverse are specified, Linear Algebra and Its Applications, 74:225-237 (1986).
[5] W. H. Gustafson, A note on matrix inversion, Linear Algebra and Its Applications, 57:71-73 (1984).
[6] M. Newman, Integral Matrices, Academic Press, New York and London, 1972.
[7] K. Nomakuchi, On the characterization of generalized inverses by bordered matrices, Linear Algebra and Its Applications, 33:1-8 (1980).

[8] K. Manjunatha Prasad and K. P. S. Bhaskara Rao, On bordering of regular matrices, Linear Algebra and Its Applications, 234:245-259 (1996).
[9] K. P. S. Bhaskara Rao, On generalized inverses of matrices over principal ideal rings, Linear and Multilinear Algebra, 10(2):145-154 (1981).
[10] D. W. Robinson, Nullities of submatrices of the Moore-Penrose inverse, Linear Algebra and Its Applications, 94:127-132 (1987).
[11] Xingzhi Zhan, Completion of a partial integral matrix to a unimodular matrix, Linear Algebra and Its Applications, 414:373-377 (2006).

Indian Statistical Institute, New Delhi, 110016, India
E-mail: rbb@isid.ac.in
URL: http://www.isid.ac.in/~rbb/