ACI-matrices all of whose completions have the same rank


Zejun Huang, Xingzhi Zhan

Department of Mathematics, East China Normal University, Shanghai 200241, China

E-mail addresses: huangzejun@yahoo.cn (Z. Huang), zhan@math.ecnu.edu.cn (X. Zhan). This research is supported by the NSFC grant 10971070.

Abstract

We characterize the ACI-matrices all of whose completions have the same rank, determine the largest number of indeterminates in such partial matrices of a given size, and determine the partial matrices that attain this largest number.

AMS classifications: 15A03, 15A99

Keywords: partial matrix, ACI-matrix, completion, rank

1 Introduction

A partial matrix over a set Ω is a matrix in which some entries (all from Ω) are specified and the remaining entries are free to be chosen from Ω. A completion of a partial matrix is a specific choice of values from Ω for its unspecified entries; a completion may also mean the resulting completed matrix. We call the unspecified entries indeterminates, since they are free to range over Ω.

Let M_{m,n}(Ω) denote the set of m × n matrices whose entries are from a given set Ω, and let P_{m,n}(Ω) denote the set of m × n partial matrices over Ω. If m = n, we use the abbreviations M_n(Ω) and P_n(Ω) respectively. We call the elements of Ω constants and the matrices in M_{m,n}(Ω) constant matrices, in contrast to indeterminates and partial matrices respectively.

To study partial matrices, it is convenient for technical reasons to consider a larger class of matrices. Let F[x_1, ..., x_k] be the ring of polynomials in the indeterminates x_1, ..., x_k

with coefficients from a field F. We call a matrix A over F[x_1, ..., x_k] an affine column independent (abbreviated as ACI) matrix if each entry of A is a polynomial of degree at most one and no indeterminate appears in two distinct columns of A. Obviously, every submatrix of an ACI-matrix is again an ACI-matrix. Given a partial matrix, we may label its unspecified entries with distinct indeterminates; thus partial matrices are ACI-matrices. By a completion of a matrix over F[x_1, ..., x_k] we mean an assignment of values in F to the indeterminates x_1, ..., x_k. A completion may also mean the resulting completed matrix.

The ACI-matrices all of whose completions are nonsingular and the ACI-matrices all of whose completions are singular are characterized in [1]. The following problem was also posed in [1, Problem 5]:

Problem 1. Let F be a field. Characterize the ACI-matrices over F[x_1, ..., x_k] all of whose completions have the same rank.

We will solve this problem under a minor condition on the field F, and determine the maximum number of indeterminates in such partial matrices as well as the matrices attaining this maximum number.

2 Main Results

First we give some lemmas which will be used to prove our main results. Throughout, F is a field.

Lemma 1 ([1]). Let A be an m × n ACI-matrix over F[x_1, ..., x_k]. If T ∈ M_m(F) is a constant matrix and P ∈ M_n(F) is a permutation matrix, then TAP is an ACI-matrix over F[x_1, ..., x_k].

A proper ACI-matrix is an ACI-matrix containing at least one indeterminate, i.e., one which is not a constant matrix.

Lemma 2. Let A be an m × n proper ACI-matrix over F[x_1, ..., x_k]. Then there exist a nonsingular constant matrix T ∈ M_m(F) and a permutation matrix Q ∈ M_n(F) such that

$$TAQ = \begin{bmatrix}
b_1 & & & & \\
c_1^{(1)} & b_2 & & & \\
c_1^{(2)} & c_2^{(1)} & b_3 & & \\
\vdots & \vdots & & \ddots & \\
c_1^{(s-1)} & c_2^{(s-2)} & \cdots & b_s & \\
c_1^{(s)} & c_2^{(s-1)} & \cdots & c_s^{(1)} & B
\end{bmatrix} \tag{1}$$

where, for j = 1, ..., s, b_j is a column vector each of whose components is a polynomial of degree 1 in which there is an indeterminate that appears nowhere else in TAQ, the c_j^{(i)} are constant column

vectors for 1 ≤ j ≤ s, 1 ≤ i ≤ s − j + 1, and B is a constant matrix.

Proof. We use induction on n. The case n = 1 is easy to check. Assume that the result holds for all proper ACI-matrices with n − 1 columns and let A be an m × n proper ACI-matrix. Suppose A has an entry in position (r, t) that contains an indeterminate, say x_1. We interchange rows 1 and r, and then interchange columns 1 and t, to get a matrix A_1 = P_0 A Q_0 = (ã_{ij}), where ã_{ij} = ã_{ij}^{(0)} + Σ_{u=1}^{k} ã_{ij}^{(u)} x_u with ã_{11}^{(1)} ≠ 0, and P_0 ∈ M_m(F), Q_0 ∈ M_n(F) are permutation matrices. Adding −ã_{i1}^{(1)}/ã_{11}^{(1)} times the first row to the i-th row of A_1 for i = 2, ..., m successively, we get a matrix A_2 = T_1 A_1, where T_1 ∈ M_m(F) is the nonsingular matrix corresponding to these elementary row operations. Now x_1 appears only in the (1, 1) position of A_2.

If there is another indeterminate in the first column of A_2 but not in the (1, 1) position, say x_2 in the (i_1, 1) position, then interchanging rows i_1 and 2 we get a new matrix A_3 = P A_2, where P is a permutation matrix. Suppose the coefficient of x_2 in position (i, 1) of A_3 is u_{i1}, i = 1, 2, ..., m. Adding −u_{i1}/u_{21} times the second row to the i-th row for i = 1, 3, 4, ..., m, we get a new matrix A_4 = T_2 A_3, where T_2 is a nonsingular constant matrix. Now x_1 and x_2 appear only in the (1, 1) position and the (2, 1) position of A_4 respectively. If there is an indeterminate in the last m − 2 components of the first column of A_4, continue in this way until we get

$$A_5 = T_3 A_4 = \begin{bmatrix} b_1 & B_1 \\ c_1 & B_2 \end{bmatrix}$$

where c_1 is a constant column vector and b_1 is a column vector each of whose components is a polynomial of degree 1 in which there is an indeterminate that appears nowhere else in A_5. If c_1 is void in A_5 or B_2 is a constant matrix, the proof is already finished, since A_5 has the form in (1). Otherwise, let the length of c_1 be l ≥ 1 and assume that B_2 contains at least one indeterminate. Note that B_2 is an ACI-matrix by Lemma 1.

By the induction hypothesis there exist a nonsingular constant matrix T_4 ∈ M_l(F) and a permutation matrix Q_1 ∈ M_{n−1}(F) such that T_4 B_2 Q_1 has the form in (1), i.e.,

$$T_4 B_2 Q_1 = \begin{bmatrix}
b_2 & & & \\
c_2^{(1)} & b_3 & & \\
\vdots & & \ddots & \\
c_2^{(s-2)} & c_3^{(s-3)} & \cdots & b_s \\
c_2^{(s-1)} & c_3^{(s-2)} & \cdots & c_s^{(1)} \quad B
\end{bmatrix}$$

where b_2, ..., b_s are column vectors each of whose components is a polynomial of degree 1 in which there is an indeterminate that appears nowhere else in T_4 B_2 Q_1, the c_j^{(i)} are constant column vectors for 2 ≤ j ≤ s, 1 ≤ i ≤ s − j + 1, and B is a constant matrix. Denote by I_t the identity matrix of order t. Let

$$A_6 = (I_{m-l} \oplus T_4)\, A_5 \,(1 \oplus Q_1) = \begin{bmatrix} b_1 & B_1 Q_1 \\ T_4 c_1 & T_4 B_2 Q_1 \end{bmatrix}.$$

If some entries of B_1 Q_1 contain the same indeterminates as those in b_2, ..., b_s that appear only once in T_4 B_2 Q_1 mentioned above, then by elementary row operations on A_6 we can make these indeterminates vanish in B_1 Q_1; i.e., there exists a nonsingular matrix T_5 ∈ M_m(F) such that T_5 A_6 has the form (1). Set T = T_5 (I_{m−l} ⊕ T_4) T_3 T_2 P T_1 P_0 and Q = Q_0 (1 ⊕ Q_1). Then

$$TAQ = T_5 (I_{m-l} \oplus T_4) T_3 T_2 P T_1 P_0 \, A \, Q_0 (1 \oplus Q_1) = T_5 A_6$$

has the form (1). ∎

We use |S| to denote the cardinality of a set S.

Lemma 3. Let m ≥ n be positive integers, let F be a field with |F| ≥ m, and let A be an m × n ACI-matrix over F[x_1, ..., x_k]. If all the completions of A have rank n, then there exist a nonsingular constant matrix T ∈ M_m(F) and a permutation matrix Q ∈ M_n(F) such that

$$TAQ = \begin{bmatrix} \ast \\ U \end{bmatrix}$$

where U is an n × n upper triangular ACI-matrix with nonzero constant diagonal entries (the block ∗ is an (m − n) × n ACI-matrix, void when m = n).

Proof. We prove the lemma in the equivalent form: there exist a nonsingular constant matrix T ∈ M_m(F) and a permutation matrix Q such that

$$TAQ = \begin{bmatrix} L \\ \ast \end{bmatrix} \tag{2}$$

where L is an n × n lower triangular ACI-matrix with nonzero constant diagonal entries. (The two forms are equivalent: reversing the order of the rows and of the columns carries one into the other.)

We use induction on n to prove this equivalent version. For n = 1 the result follows from Lemma 2. Assume the result holds for all ACI-matrices over F[x_1, ..., x_k] with n − 1 columns and let A be an m × n ACI-matrix all of whose completions have rank n.

Case 1. A has a constant column, say the j-th column; since A cannot have zero columns, this column has a nonzero entry, say the t-th. Without loss of generality, A_0 = P_1 A Q_1 = (ã_{ij})

with ã_{11} ≠ 0, where P_1, Q_1 are permutation matrices of orders m and n respectively, and the entries in the first column of A_0 are constants. Adding −ã_{i1}/ã_{11} times the first row to the i-th row of A_0 for i = 2, ..., m successively, we get a matrix A_1 = T_1 A_0, where T_1 ∈ M_m(F) is the nonsingular matrix corresponding to these elementary row operations. Partition A_1 as

$$A_1 = T_1 A_0 = \begin{bmatrix} \tilde a_{11} & u^T \\ 0 & H \end{bmatrix}. \tag{3}$$

By Lemma 1, A_1 and hence H are ACI-matrices. Now all completions of A_1 have rank n, so all completions of H have rank n − 1. By the induction hypothesis applied to H, there exist a nonsingular constant matrix T_2 ∈ M_{m−1}(F) and a permutation matrix Q_2 ∈ M_{n−1}(F) such that T_2 H Q_2 is of the form (2). Denote by

$$Q_0 = \begin{bmatrix} 0 & I_{n-1} \\ 1 & 0 \end{bmatrix}$$

the basic circulant permutation matrix. Set T = (Q_0 ⊕ I_{m−n})(1 ⊕ T_2) T_1 P_1 and Q = Q_1 (1 ⊕ Q_2) Q_0^T. Then T ∈ M_m(F) is a nonsingular constant matrix, Q is a permutation matrix, and TAQ is of the form (2).

Case 2. A has no constant column. By Lemma 2 there exist a nonsingular constant matrix T_3 ∈ M_m(F) and a permutation matrix Q_3 such that T_3 A Q_3 has the form (1). Set

$$A_2 = T_3 A Q_3 = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}$$

where B_2 = (c_1^{(s)}, c_2^{(s−1)}, ..., c_s^{(1)}, B). We claim that B_2 is nonvoid and B_2 ≠ 0. Otherwise, in A_2, adding the j-th column to the first column for 2 ≤ j ≤ s and choosing suitable values for the indeterminates successively, we can make the sum of the first s columns a zero vector, by the property of b_1, ..., b_s stated in Lemma 2. This contradicts the fact that all completions of A, and hence all completions of A_2, have rank n.

Let B_2 be p × n and let the rank of B_2 be r ≥ 1. Then there exist a nonsingular constant matrix T_4 ∈ M_p(F) and a permutation matrix Q_4 ∈ M_n(F) such that

$$T_4 B_2 Q_4 = \begin{bmatrix} D & E \\ 0 & 0 \end{bmatrix}$$

where D = diag(d_1, ..., d_r) is a diagonal matrix with each d_i nonzero. Let T_5 = I_{m−p} ⊕ T_4 and

$$A_3 = T_5 A_2 Q_4 = T_5 T_3 A Q_3 Q_4 = \begin{bmatrix} B_1 Q_4 \\ T_4 B_2 Q_4 \end{bmatrix}.$$
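The reduction of the constant block B_2 to the form [D, E; 0, 0] above is ordinary Gaussian elimination followed by a column permutation. As an illustration only (this sketch is ours, not part of the paper; it works over the rationals as a stand-in for a general field F, and the function name is invented):

```python
from fractions import Fraction

def rank_normal_form(B):
    """Find a nonsingular T and a column order perm such that T*B with its
    columns permuted by perm has the block form [[D, E], [0, 0]], D diagonal
    and nonsingular, as used for B_2 above.  Returns (T, perm, R)."""
    m, n = len(B), len(B[0])
    A = [[Fraction(x) for x in row] for row in B]
    # T starts as the identity and records every row operation performed on A.
    T = [[Fraction(1 if i == j else 0) for j in range(m)] for i in range(m)]
    pivots, r = [], 0
    for j in range(n):
        piv = next((i for i in range(r, m) if A[i][j] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        T[r], T[piv] = T[piv], T[r]
        for i in range(m):                    # clear column j in every other row,
            if i != r and A[i][j] != 0:       # so the pivot block becomes diagonal
                f = A[i][j] / A[r][j]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
                T[i] = [a - f * b for a, b in zip(T[i], T[r])]
        pivots.append(j)
        r += 1
    perm = pivots + [j for j in range(n) if j not in pivots]  # pivot columns first
    R = [[A[i][j] for j in perm] for i in range(m)]
    return T, perm, R

T, perm, R = rank_normal_form([[0, 2, 1], [0, 4, 2], [1, 1, 0]])
# here R has D = diag(1, 4) in its top-left corner and a zero last row
```

Exact arithmetic with `Fraction` is used so that the pivoting decisions are not disturbed by floating-point error; over a finite field one would substitute the corresponding field arithmetic.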

If r = n, i.e., E is void, then by considering row permutations of A_3 we easily see that the conclusion holds. Next suppose r < n. We now prove that E has at least one zero row. Suppose to the contrary that E has no zero row. Let t = m − p and g = n − r. Partition B_1 Q_4 = (C_1, C_2), where C_1 is t × r and C_2 is t × g. Let

$$R_1 = \begin{bmatrix} I_r & -D^{-1} E \\ 0 & I_{n-r} \end{bmatrix}, \qquad A_4 = A_3 R_1 = \begin{bmatrix} C_1 & Z \\ D & 0 \\ 0 & 0 \end{bmatrix}$$

where Z = −C_1 D^{−1} E + C_2. By Lemma 2, each row of B_1 Q_4 = (C_1, C_2) contains an indeterminate that appears in only one position of A_2. Without loss of generality we may suppose that x_1, ..., x_t are these indeterminates and that x_i appears in the i-th row of (C_1, C_2), i = 1, ..., t. Since x_i appears in only one position of (C_1, C_2) and D^{−1}E has no zero row, x_i appears in the i-th row of Z = −C_1 D^{−1} E + C_2. Note that Z is t × g. Let

$$Z = \begin{bmatrix}
w_{11} x_1 + \cdots & w_{12} x_1 + \cdots & \cdots & w_{1g} x_1 + \cdots \\
w_{21} x_2 + \cdots & w_{22} x_2 + \cdots & \cdots & w_{2g} x_2 + \cdots \\
\vdots & \vdots & & \vdots \\
w_{t1} x_t + \cdots & w_{t2} x_t + \cdots & \cdots & w_{tg} x_t + \cdots
\end{bmatrix}$$

where w_{ij} ∈ F and, for each 1 ≤ i ≤ t, the w_{ij}, j = 1, ..., g, are not all zero. We show that there exist k_j ∈ F, j = 1, ..., g − 1, such that

$$w_{ig} + k_{g-1} w_{i,g-1} + \cdots + k_1 w_{i1} \ne 0 \quad \text{for } i = 1, \ldots, t. \tag{4}$$

In fact we may successively choose the k_j such that if w_{i_0, j} ≠ 0 for some i_0, j, then

$$w_{i_0, g} + k_{g-1} w_{i_0, g-1} + \cdots + k_j w_{i_0, j} \ne 0. \tag{5}$$

(4) will follow from (5), since for each 1 ≤ i ≤ t the w_{ij}, j = 1, ..., g, are not all zero. If w_{i,g−1} = 0 for all 1 ≤ i ≤ t, we choose k_{g−1} = 0. Otherwise let w_{i_1,g−1}, ..., w_{i_v,g−1} be the nonzero elements among w_{1,g−1}, ..., w_{t,g−1}. For every q = 1, ..., v, the equation w_{i_q,g} + y w_{i_q,g−1} = 0 has only one solution, namely y = −w_{i_q,g−1}^{−1} w_{i_q,g}. Since v ≤ t < m and |F| ≥ m, there exists k_{g−1} ∈ F such that w_{i_q,g} + k_{g−1} w_{i_q,g−1} ≠ 0 holds for all q = 1, ..., v. Next, if w_{i,g−2} = 0 for all 1 ≤ i ≤ t, we choose k_{g−2} = 0. Otherwise, as above there exists k_{g−2} ∈ F such that w_{ig} + k_{g−1} w_{i,g−1} + k_{g−2} w_{i,g−2} ≠ 0 holds for all those i with w_{i,g−2} ≠ 0. Continuing in this way we can find all of k_{g−1}, k_{g−2}, ..., k_1 satisfying (5).

In Z, adding k_j times column j to column g for 1 ≤ j ≤ g − 1, we get a new matrix G = Z R_2, where R_2 ∈ M_g(F) is the nonsingular matrix corresponding to these elementary column operations. Since x_i appears with a nonzero coefficient in the i-th component of the last

column of G for i = 1, ..., t, we can choose suitable values for x_1, ..., x_t successively to make the last column of G a zero column. This means that at least one completion of G has rank at most g − 1 = n − r − 1. On the other hand, all completions of A_4 = T_5 T_3 A Q_3 Q_4 R_1 have rank n, and therefore all completions of Z, and hence all completions of G = Z R_2, have rank n − r, which is a contradiction.

So E has a zero row, say the f-th. In A_3, interchanging the (t + f)-th row and the first row, and then interchanging the f-th column and the first column, we get a new ACI-matrix

$$A_5 = P_2 A_3 Q_5 = P_2 T_5 T_3 A Q_3 Q_4 Q_5 = \begin{bmatrix} d_f & 0 \\ \ast & H_2 \end{bmatrix}$$

where P_2 ∈ M_m(F) and Q_5 ∈ M_n(F) are the permutation matrices corresponding to these elementary row and column operations respectively. Since all completions of A_5 have rank n, all completions of H_2 have rank n − 1. By Lemma 1, H_2 is an ACI-matrix. Applying the induction hypothesis to H_2, there exist a nonsingular constant matrix T_6 ∈ M_{m−1}(F) and a permutation matrix Q_6 ∈ M_{n−1}(F) such that T_6 H_2 Q_6 is of the form (2). Set T = (1 ⊕ T_6) P_2 T_5 T_3 and Q = Q_3 Q_4 Q_5 (1 ⊕ Q_6). Then T ∈ M_m(F) is nonsingular, Q ∈ M_n(F) is a permutation matrix, and TAQ is of the form (2). This completes the proof. ∎

For a matrix G, we denote by G(i, j) the entry of G in position (i, j).

Lemma 4. Let F be a field with |F| ≥ n + 1 and let A be an m × n ACI-matrix over F[x_1, ..., x_k]. If all completions of A have the same rank r < n, then A has at least one column with only constant entries.

Proof. Suppose to the contrary that each column of A contains at least one indeterminate, which implies that the number of indeterminates in A is at least n. Without loss of generality we assume that x_j appears in the j-th column of A for j = 1, ..., n. Let A = (a_{ij}) with

$$a_{ij} = a_{ij}^{(0)} + b_{ij} x_j + \sum_{u \ne j} a_{ij}^{(u)} x_u,$$

where for each j the coefficients b_{ij}, i = 1, ..., m, are not all zero. We show that there exist t_i ∈ F, i = 2, ..., m, such that

$$b_{1j} + t_2 b_{2j} + \cdots + t_m b_{mj} \ne 0 \quad \text{for } j = 1, \ldots, n. \tag{6}$$

In fact we may successively choose t_2, ..., t_m such that if b_{i,j_0} ≠ 0 for some i, j_0, then

$$b_{1,j_0} + t_2 b_{2,j_0} + \cdots + t_i b_{i,j_0} \ne 0. \tag{7}$$

(6) will follow from (7) since the m × n matrix B = (b_{ij}) has no zero column. If the second row of B is a zero row, we choose t_2 = 0. Otherwise let b_{2,j_1}, ..., b_{2,j_s} be the nonzero entries in the second row of B. For every p = 1, ..., s, the equation b_{1,j_p} + y b_{2,j_p} = 0 has only one solution, namely y = −b_{2,j_p}^{−1} b_{1,j_p}. Since s ≤ n and |F| ≥ n + 1, there exists t_2 ∈ F such that b_{1,j_p} + t_2 b_{2,j_p} ≠ 0 holds for all p = 1, ..., s. Next, if the third row of B is a zero row, choose t_3 = 0. Otherwise, as above there exists t_3 ∈ F such that b_{1,j} + t_2 b_{2,j} + t_3 b_{3,j} ≠ 0 holds for all those j with b_{3,j} ≠ 0. Continuing in this way we can find all of t_2, t_3, ..., t_m satisfying (7).

Now, adding t_i times the i-th row of A to the first row for i = 2, ..., m, we get a matrix A_1 all of whose completions have the same rank r. By Lemma 1, A_1 is an ACI-matrix. By condition (6), x_j appears in A_1(1, j) with a nonzero coefficient. Obviously there is a choice of values for the indeterminates such that the first row of A_1 becomes a zero row. For this completion of A_1, suppose without loss of generality that the first r columns are linearly independent. Now in A_1 change the value of x_{r+1} so that A_1(1, r + 1) ≠ 0, keeping the values of the other indeterminates unchanged. For this second choice of values the completed matrix has rank r + 1, which is a contradiction. ∎

Now we are ready to characterize the ACI-matrices all of whose completions have the same rank. In a block matrix, the condition that some block B is s × t with s = 0 means that the rows in which B lies are void, i.e., they do not appear, and the condition that some block B is s × t with t = 0 means that the columns in which B lies are void.

Theorem 5. Let m, n be positive integers, let F be a field with |F| ≥ max{m, n + 1}, and let A be an m × n ACI-matrix over F[x_1, ..., x_k]. Then all completions of A have the same rank r if and only if there exist a nonsingular constant matrix T ∈ M_m(F) and a permutation matrix Q ∈ M_n(F) such that

$$TAQ = \begin{bmatrix} U_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix} \tag{8}$$

where U_1 and U_2 are square upper triangular ACI-matrices with nonzero constant diagonal entries, the sum of their orders equals r, and ∗ denotes an arbitrary ACI-block.

Proof. The sufficiency of the condition is obvious. To prove the necessity we use induction on n. For n = 1 the conclusion is easy to verify using Lemma 2. Now let n ≥ 2 and assume that the result holds for all ACI-matrices with n − 1 columns. Let A be an m × n ACI-matrix all of whose completions have rank r. If r = n, which implies m ≥ n, then the result follows from Lemma 3, with the first block row and the first two block columns in (8) void.

If A has a zero column, say column j, interchanging columns 1 and j we get a matrix A_0 = A Q_0 = (0, B), where Q_0 ∈ M_n(F) is a permutation matrix

and B is an m × (n − 1) ACI-matrix all of whose completions have rank r. By the induction hypothesis applied to B, there exist a nonsingular matrix T_0 ∈ M_m(F) and a permutation matrix Q_1 ∈ M_{n−1}(F) such that

$$T_0 B Q_1 = \begin{bmatrix} U_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix}$$

where U_1 and U_2 are r_1 × r_1 and r_2 × r_2 upper triangular ACI-matrices with nonzero constant diagonal entries respectively, r_1 + r_2 = r. We have

$$A_1 = T_0 A_0 (1 \oplus Q_1) = (0,\; T_0 B Q_1) = \begin{bmatrix} 0 & U_1 & \ast & \ast \\ 0 & 0 & 0 & \ast \\ 0 & 0 & 0 & U_2 \end{bmatrix}.$$

In A_1, interchange column 1 and the second block column, and denote the corresponding permutation matrix by Q_2. Set T = T_0 and Q = Q_0 (1 ⊕ Q_1) Q_2. Then

$$TAQ = T_0 A Q_0 (1 \oplus Q_1) Q_2 = A_1 Q_2$$

has the form (8).

Next we consider the case where r < n and A has no zero column. By Lemma 4, A has a column with only constant entries, which are not all zero; say the j-th column, with the i-th entry nonzero. Interchanging rows 1 and i, and then interchanging columns 1 and j, we get a new matrix A_2 = P_1 A Q_3 = (ã_{ij}) with ã_{11} ≠ 0 and ã_{i1} ∈ F for 1 ≤ i ≤ m, where P_1, Q_3 are permutation matrices. In A_2, adding −ã_{i1}/ã_{11} times the first row to the i-th row for i = 2, ..., m successively, we get a matrix A_3 = T_1 A_2, where T_1 ∈ M_m(F) is the nonsingular matrix corresponding to these elementary row operations. Partition A_3 as

$$A_3 = T_1 A_2 = \begin{bmatrix} \tilde a_{11} & u^T \\ 0 & H \end{bmatrix}.$$

By Lemma 1, A_3 is an ACI-matrix, and all of its completions have rank r. Therefore H is an (m − 1) × (n − 1) ACI-matrix all of whose completions have rank r − 1. By the induction hypothesis applied to H, there exist a nonsingular constant matrix T_2 ∈ M_{m−1}(F) and a permutation matrix Q_4 ∈ M_{n−1}(F) such that

$$T_2 H Q_4 = \begin{bmatrix} V_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix}$$

where V_1 and U_2 are r_1 × r_1 and r_2 × r_2 upper triangular ACI-matrices with nonzero constant diagonal entries respectively, r_1 + r_2 = r − 1.

Set T = (1 ⊕ T_2) T_1 P_1 and Q = Q_3 (1 ⊕ Q_4). Then T ∈ M_m(F) is a nonsingular constant matrix, Q ∈ M_n(F) is a permutation matrix, and

$$TAQ = \begin{bmatrix} \tilde a_{11} & \ast & \ast & \ast \\ 0 & V_1 & \ast & \ast \\ 0 & 0 & 0 & \ast \\ 0 & 0 & 0 & U_2 \end{bmatrix} = \begin{bmatrix} U_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix}, \qquad U_1 = \begin{bmatrix} \tilde a_{11} & \ast \\ 0 & V_1 \end{bmatrix}.$$

Let r_1' = r_1 + 1. Then U_1 and U_2 are r_1' × r_1' and r_2 × r_2 upper triangular ACI-matrices with nonzero constant diagonal entries and r_1' + r_2 = r_1 + r_2 + 1 = r. ∎

We remark that in the form (8) some block rows and/or block columns may be void. For example, (8) includes the following forms as special cases:

$$\begin{bmatrix} U_1 \\ 0 \end{bmatrix}, \qquad U_1, \qquad \begin{bmatrix} 0 & \ast \\ 0 & U_2 \end{bmatrix}.$$

Now we study the possible numbers of indeterminates in the partial matrices of a given size all of whose completions have the same rank. Obviously it suffices to determine the largest such number.

Theorem 6. Let m ≥ n be positive integers, let F be a field with |F| ≥ max{m, n + 1}, and let A be an m × n partial matrix over F all of whose completions have the same rank r. Then the number of indeterminates of A is less than or equal to mr − r(r + 1)/2. This maximum number is attained at A if and only if there exist permutation matrices P ∈ M_m(F), Q ∈ M_n(F) such that

$$PAQ = \begin{bmatrix} V_1 & C_1 & C_2 \\ 0 & 0 & C_3 \\ 0 & 0 & V_2 \end{bmatrix}$$

where C_1, C_2, C_3 are partial matrices all of whose entries are indeterminates, V_1 and V_2 are s × s and (r − s) × (r − s) upper triangular matrices with nonzero constant diagonal entries and with all the entries above the diagonal being indeterminates, and s = 0 when m > n.

Proof. By Theorem 5 there exist a nonsingular constant matrix T = (t_{ij}) ∈ M_m(F) and a permutation matrix Q ∈ M_n(F) such that

$$TAQ = \begin{bmatrix} U_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix} \tag{9}$$

where U_1 and U_2 are r_1 × r_1 and r_2 × r_2 upper triangular ACI-matrices with nonzero constant diagonal entries respectively, r_1 + r_2 = r. Let à = AQ = (ã_{ij}) and B = (b_{ij}) = TÃ = TAQ.

We assert that the j-th column of à contains at most j − 1 indeterminates for j = 1, ..., r_1, at most r_1 indeterminates for j = r_1 + 1, ..., n − r_2, and at most m − n + j − 1 indeterminates for j = n − r_2 + 1, ..., n. Suppose the j-th column of à has exactly p indeterminates, say ã_{i_1, j}, ã_{i_2, j}, ..., ã_{i_p, j}. From (9) we have

$$b_{ij} = \sum_{k=1}^{m} t_{ik} \tilde a_{kj} = \sum_{h=1}^{p} t_{i, i_h} \tilde a_{i_h, j} + d_{ij}, \qquad d_{ij} \in F.$$

Since b_{ij} is constant for 1 ≤ j ≤ r_1 and i ≥ j, we have t_{i, i_h} = 0 for j ≤ i ≤ m and 1 ≤ h ≤ p. So T has an (m − j + 1) × p zero submatrix. If p ≥ j, then (m − j + 1) + p ≥ m + 1 and, by the Frobenius–König theorem [2], det T = 0, which contradicts the fact that T is nonsingular. This shows that p ≤ j − 1 for 1 ≤ j ≤ r_1.

Since b_{ij} is constant for r_1 + 1 ≤ j ≤ n − r_2 and i ≥ r_1 + 1, we have t_{i, i_h} = 0 for r_1 + 1 ≤ i ≤ m and 1 ≤ h ≤ p. Thus T has an (m − r_1) × p zero submatrix. If p ≥ r_1 + 1, then (m − r_1) + p ≥ m + 1 and hence T is singular, a contradiction. This shows that p ≤ r_1 for r_1 + 1 ≤ j ≤ n − r_2.

Since b_{ij} is constant for n − r_2 + 1 ≤ j ≤ n and i ≥ m − (n − j), we have t_{i, i_h} = 0 for m − (n − j) ≤ i ≤ m and 1 ≤ h ≤ p. So T has an (n − j + 1) × p zero submatrix. If p ≥ m − n + j, then (n − j + 1) + p ≥ m + 1 and T is singular, a contradiction. This shows that p ≤ m − n + j − 1 for j ≥ n − r_2 + 1.

Denote by f(A) the number of indeterminates of A, which is equal to that of Ã. Using r_1 + r_2 = r we have

$$f(A) \le \sum_{j=1}^{r_1} (j-1) + (n - r_2 - r_1) r_1 + \sum_{j=n-r_2+1}^{n} (m - n + j - 1) = (m-n) r_2 + nr - \frac{r(r+1)}{2} \le (m-n) r + nr - \frac{r(r+1)}{2} = mr - \frac{r(r+1)}{2}.$$

The "if" part of the second conclusion is obvious. Now suppose that the number of indeterminates of A is equal to mr − r(r + 1)/2. From the above argument we see that:

i) if m = n, then the j-th column of à has exactly j − 1 indeterminates for j = 1, ..., r_1, n − r_2 + 1, ..., n, and exactly r_1 indeterminates for j = r_1 + 1, ..., n − r_2;

ii) if m > n, then r_1 = 0, r_2 = r, and the j-th column of à has no indeterminates for j = 1, ..., n − r and exactly m − n + j − 1 indeterminates for j = n − r + 1, ..., n.
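The closed forms in the displayed estimate can be checked mechanically. The following sketch (ours, not part of the paper; pure integer arithmetic, with both sides doubled to avoid fractions) sums the per-column bounds and compares them with (m − n)r_2 + nr − r(r + 1)/2 and mr − r(r + 1)/2 for all small sizes:

```python
def column_bound_total(m, n, r1, r2):
    """Sum the per-column upper bounds on the number of indeterminates of
    A~ derived above: j - 1 for the first r1 columns, r1 for the middle
    n - r1 - r2 columns, and m - n + j - 1 for the last r2 columns."""
    total = sum(j - 1 for j in range(1, r1 + 1))
    total += (n - r1 - r2) * r1
    total += sum(m - n + j - 1 for j in range(n - r2 + 1, n + 1))
    return total

# check f(A) <= (m-n)*r2 + n*r - r(r+1)/2 <= m*r - r(r+1)/2 for all m >= n
for m in range(1, 9):
    for n in range(1, m + 1):
        for r1 in range(n + 1):
            for r2 in range(n - r1 + 1):
                r = r1 + r2
                t = column_bound_total(m, n, r1, r2)
                assert 2 * t == 2 * (m - n) * r2 + 2 * n * r - r * (r + 1)
                assert 2 * t <= 2 * m * r - r * (r + 1)
```

The second inequality is an equality exactly when m = n or r_2 = r, which matches cases i) and ii) above.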

Note that à is a partial matrix. To complete the proof of Theorem 6, it suffices to prove the following two statements.

(S1) Let G ∈ P_n(F) be a partial matrix and T ∈ M_n(F) a nonsingular constant matrix such that

$$TG = \begin{bmatrix} U_1 & \ast & \ast \\ 0 & 0 & \ast \\ 0 & 0 & U_2 \end{bmatrix} \tag{10}$$

where U_1 and U_2 are r_1 × r_1 and r_2 × r_2 upper triangular matrices with nonzero constant diagonal entries. If the j-th column of G has exactly j − 1 indeterminates for j = 1, ..., r_1, n − r_2 + 1, ..., n and exactly r_1 indeterminates for j = r_1 + 1, ..., n − r_2, then there exists a permutation matrix P such that

$$PG = \begin{bmatrix} V_1 & C_1 & C_2 \\ 0 & 0 & C_3 \\ 0 & 0 & V_2 \end{bmatrix} \tag{11}$$

where C_1, C_2, C_3 are partial matrices all of whose entries are indeterminates, and V_1 and V_2 are r_1 × r_1 and r_2 × r_2 upper triangular matrices with nonzero constant diagonal entries and with all the entries above the diagonal being indeterminates.

(S2) Let m > n, let G ∈ P_{m,n}(F) be a partial matrix and T ∈ M_m(F) a nonsingular constant matrix such that

$$TG = \begin{bmatrix} 0 & \ast \\ 0 & U_2 \end{bmatrix} \tag{12}$$

where U_2 is an r × r upper triangular matrix with nonzero constant diagonal entries. If the j-th column of G has no indeterminates for 1 ≤ j ≤ n − r and exactly m − n + j − 1 indeterminates for n − r + 1 ≤ j ≤ n, then there exists a permutation matrix P such that

$$PG = \begin{bmatrix} 0 & C \\ 0 & U \end{bmatrix} \tag{13}$$

where C is a partial matrix all of whose entries are indeterminates and U is an r × r upper triangular matrix with nonzero constant diagonal entries and with all the entries above the diagonal being indeterminates.

Proof of (S1). We use induction on n. The statement holds trivially for n = 1. Next let n ≥ 2 and assume (S1) holds for all matrices of order n − 1. Let G be an n × n partial matrix which satisfies the hypotheses of (S1). The case r_1 = n is just the statement (S) in [1, proof of Theorem 12]. So next we suppose r_1 < n. There exists a permutation matrix P_1 ∈ M_n(F) such that, if we denote P_1 G = (g_{ij}), the following hold:

i) if r_2 = 0, then in the last column g_{1n}, g_{2n}, ..., g_{r_1, n} are distinct indeterminates and all the other entries g_{r_1+1, n}, g_{r_1+2, n}, ..., g_{nn} are constants;

ii) if r_2 ≥ 1, then in the last column g_{nn} is a constant and all the other entries g_{1n}, g_{2n}, ..., g_{n−1, n} are indeterminates.

Let T P_1^T = (t_{ij}) and denote V = TG = (T P_1^T)(P_1 G) = (v_{ij}). We consider the above two cases separately.

Case 1. r_2 = 0. Since

$$v_{in} = \sum_{k=1}^{r_1} t_{ik} g_{kn} + \sum_{k=r_1+1}^{n} t_{ik} g_{kn} = 0 \quad \text{for } i = r_1 + 1, \ldots, n,$$

we have t_{ij} = 0 for r_1 + 1 ≤ i ≤ n and 1 ≤ j ≤ r_1. Partition

$$T P_1^T = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix}, \qquad P_1 G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$$

where T_{11} ∈ M_{r_1}(F) and G_{11} ∈ P_{r_1}(F). Since T is nonsingular, T_{11} and T_{22} are nonsingular. Thus

$$V = (T P_1^T)(P_1 G) = \begin{bmatrix} U_1 & \ast \\ 0 & 0 \end{bmatrix}$$

implies G_{21} = 0 and G_{22} = 0, since T_{22} G_{21} = 0 and T_{22} G_{22} = 0. Clearly the j-th column of G_{11} has exactly j − 1 indeterminates for j = 1, ..., r_1. From

$$V = (T P_1^T)(P_1 G) = \begin{bmatrix} T_{11} G_{11} & \ast \\ 0 & 0 \end{bmatrix}$$

we deduce that T_{11} G_{11} = U_1 is upper triangular with nonzero constant diagonal entries. By the induction hypothesis there exists a permutation matrix P_2 of order r_1 such that P_2 G_{11} is upper triangular with nonzero constant diagonal entries and with all the entries above the diagonal being indeterminates. By assumption the j-th column of G, and hence of P_1 G, has r_1 indeterminates for j = r_1 + 1, ..., n. Since G_{22} = 0, we deduce that all the entries of G_{12} are indeterminates. Set P = (P_2 ⊕ I_{n−r_1}) P_1. Then P is a permutation matrix and

$$PG = \begin{bmatrix} P_2 G_{11} & P_2 G_{12} \\ 0 & 0 \end{bmatrix}$$

has the form (11).

Case 2. r_2 ≥ 1. Since

$$v_{nn} = \sum_{k=1}^{n-1} t_{nk} g_{kn} + t_{nn} g_{nn} \in F,$$

we have t_{nk} = 0 for 1 ≤ k ≤ n − 1, and the above equality reduces to v_{nn} = t_{nn} g_{nn}. Then v_{nn} ≠ 0 implies t_{nn} ≠ 0 and g_{nn} ≠ 0. Since U_2 is upper triangular, the first n − 1 entries in the last row of V are zero. From

$$0 = v_{nj} = \sum_{k=1}^{n} t_{nk} g_{kj} = t_{nn} g_{nj} \quad \text{for } j = 1, 2, \ldots, n-1,$$

we get g_{nj} = 0 for j = 1, 2, ..., n − 1. Partition

$$T P_1^T = \begin{bmatrix} T_1 & v \\ 0 & t_{nn} \end{bmatrix}, \qquad P_1 G = \begin{bmatrix} G_1 & w \\ 0 & g_{nn} \end{bmatrix}$$

where T_1 ∈ M_{n−1}(F) and G_1 ∈ P_{n−1}(F). Since T is nonsingular, T_1 is nonsingular. Clearly the j-th column of G_1 has exactly j − 1 indeterminates for j = 1, ..., r_1, (n − 1) − (r_2 − 1) + 1, ..., n − 1 and exactly r_1 indeterminates for j = r_1 + 1, ..., (n − 1) − (r_2 − 1). By condition ii), all the components of w are indeterminates. From

$$V = (T P_1^T)(P_1 G) = \begin{bmatrix} T_1 G_1 & \ast \\ 0 & t_{nn} g_{nn} \end{bmatrix}$$

we deduce that T_1 G_1 has the form (10). By the induction hypothesis there exists a permutation matrix P_2 of order n − 1 such that P_2 G_1 has the form (11). Set P = (P_2 ⊕ 1) P_1. Then P is a permutation matrix and

$$PG = \begin{bmatrix} P_2 G_1 & P_2 w \\ 0 & g_{nn} \end{bmatrix}$$

has the form (11). This completes the proof of (S1).

Proof of (S2). If r = 0 then G = 0, and the result holds trivially; it suffices to prove the case r ≥ 1. We can use induction on n, by an argument similar to that in the above proof of Case 2 of (S1); we omit the details. The starting step n = 1 needs some care. When m > n = r = 1, condition (12) says that TG is a column vector whose last component is a nonzero constant, and the second condition in (S2) says that G has exactly one constant component. The conclusion (13) states that the only constant component of G is nonzero. Suppose to the contrary that it is zero. Then each component of TG is either 0 or a polynomial of degree 1, which contradicts the condition that the last component of TG is a nonzero constant. ∎

For m × n partial matrices with m < n, we may apply Theorem 6 by considering their transposes. Note that the possible generalization of Theorem 6 to ACI-matrices does not make sense. Just consider the matrix

$$A = \begin{bmatrix} 1 & x_1 + x_2 + \cdots + x_k \\ 0 & 1 \end{bmatrix}.$$

Every completion of A has rank 2, yet the number k of indeterminates can be arbitrarily large.
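This obstruction is easy to verify computationally as well. The sketch below (ours, not part of the paper; rank over the rationals via row reduction) checks that every completion of this A has rank 2 for k = 4, although Theorem 6 would cap a 2 × 2 partial matrix of constant rank 2 at mr − r(r + 1)/2 = 1 indeterminate:

```python
from fractions import Fraction
from itertools import product

def rank(M):
    """Rank of a matrix over Q, computed by Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for j in range(n):
        piv = next((i for i in range(r, m) if A[i][j] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):        # eliminate below the pivot
            f = A[i][j] / A[r][j]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

k = 4  # number of indeterminates x_1, ..., x_k
# every completion of [[1, x_1 + ... + x_k], [0, 1]] has rank 2, regardless
# of k; here we sample all completions with values taken from {-1, 0, 1}
for vals in product([-1, 0, 1], repeat=k):
    A = [[1, sum(vals)], [0, 1]]
    assert rank(A) == 2
```

The point is that the ACI entry x_1 + ... + x_k packs arbitrarily many indeterminates into a single position, which a partial matrix cannot do.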

References

[1] R.A. Brualdi, Z. Huang and X. Zhan, Singular, nonsingular and bounded rank completions of ACI-matrices, Linear Algebra Appl. 433 (2010) 1452-1462.

[2] R.A. Brualdi and H.J. Ryser, Combinatorial Matrix Theory, Cambridge University Press, 1991.