Some Reviews on Ranks of Upper Triangular Block Matrices over a Skew Field


International Mathematical Forum, Vol. 13, 2018, no. 7, 323-335
HIKARI Ltd, www.m-hikari.com
https://doi.org/10.12988/imf.2018.8528

Some Reviews on Ranks of Upper Triangular Block Matrices over a Skew Field

Netsai Buaphim, Kawin Onsaard, Primporn So-ngoen and Thitarie Rungratgasame

Department of Mathematics, Faculty of Science, Srinakharinwirot University, Bangkok 10110, Thailand

Copyright © 2018 Netsai Buaphim, Kawin Onsaard, Primporn So-ngoen and Thitarie Rungratgasame. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this article is to go over necessary and sufficient conditions for the rank equation $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r(A) + r(B)$ of $2 \times 2$ upper triangular block matrices over a skew field.

Mathematics Subject Classification: 15A03

Keywords: block matrix, rank, triangular matrix, generalized inverse

1 Introduction

The basic method for finding the rank of a matrix over a field is to apply row operations to the matrix to obtain the associated row echelon matrix, which is an upper triangular matrix. The same method also works for a matrix over a skew field. For this reason, we are interested in finding the rank of an upper triangular block matrix. In particular, we focus on the case where the rank of an upper triangular block matrix is equal to the sum of the ranks of the matrices on its main diagonal.

Marsaglia and Styan in [3] provided many important inequalities for ranks of matrices over a field. In this work, we give some results similar to those in their work, but over a skew field $K$ instead.

We start with basic definitions and theorems for matrices over a skew field (for more details see [2]). Let $K$ denote a skew field, let $K^{m \times n}$ denote the set of all $m \times n$ matrices with entries in $K$, let $0_{m \times n}$ denote the $m \times n$ zero matrix, and let $I_n$ denote the $n \times n$ identity matrix, for any $m, n \in \mathbb{N}$.

Definition 1.1. Let $A \in K^{m \times n}$. The subspace of $K^{1 \times n}$ spanned by the row vectors of $A$ is called the row space of $A$, denoted by $R(A)$, and the subspace of $K^{m \times 1}$ spanned by the column vectors of $A$ is called the column space of $A$, denoted by $C(A)$. The dimension of the row space of $A$ is called the row rank of $A$, and the dimension of the column space of $A$ is called the column rank of $A$.

Theorem 1.2. Let $A$ be an arbitrary $m \times n$ matrix over a skew field $K$. Then the row rank of $A$ and the column rank of $A$ are equal; their common value is denoted by $r(A)$.

Proof. Let
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},$$
and denote by $R_i = (a_{i1}, a_{i2}, \ldots, a_{in})$ the $i$-th row of $A$ for $i = 1, 2, \ldots, m$. Suppose the row rank of $A$ equals $r$. Then there exist $r$ rows of $A$ forming a basis of the row space of $A$; without loss of generality, say $R_{i_1}, R_{i_2}, \ldots, R_{i_r}$, where $R_{i_k} = (a_{i_k 1}, a_{i_k 2}, \ldots, a_{i_k n})$ for $k = 1, 2, \ldots, r$ and $i_1 < i_2 < \cdots < i_r$. Then each row $R_i$ of $A$ is a linear combination of $R_{i_1}, R_{i_2}, \ldots, R_{i_r}$, so we obtain
$$\begin{aligned} R_1 &= \alpha_{11} R_{i_1} + \alpha_{12} R_{i_2} + \cdots + \alpha_{1r} R_{i_r}, \\ R_2 &= \alpha_{21} R_{i_1} + \alpha_{22} R_{i_2} + \cdots + \alpha_{2r} R_{i_r}, \\ &\;\;\vdots \\ R_m &= \alpha_{m1} R_{i_1} + \alpha_{m2} R_{i_2} + \cdots + \alpha_{mr} R_{i_r}. \end{aligned}$$
Substituting $(a_{i1}, a_{i2}, \ldots, a_{in})$ for each $R_i$ and $(a_{i_k 1}, a_{i_k 2}, \ldots, a_{i_k n})$ for each $R_{i_k}$, we obtain
$$a_{ij} = \alpha_{i1} a_{i_1 j} + \alpha_{i2} a_{i_2 j} + \cdots + \alpha_{ir} a_{i_r j}$$
for all $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.

We rewrite this as
$$\begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix} = \begin{bmatrix} \alpha_{11} \\ \alpha_{21} \\ \vdots \\ \alpha_{m1} \end{bmatrix} a_{i_1 j} + \begin{bmatrix} \alpha_{12} \\ \alpha_{22} \\ \vdots \\ \alpha_{m2} \end{bmatrix} a_{i_2 j} + \cdots + \begin{bmatrix} \alpha_{1r} \\ \alpha_{2r} \\ \vdots \\ \alpha_{mr} \end{bmatrix} a_{i_r j}.$$
Note that for each $k$, $(\alpha_{1k}, \alpha_{2k}, \ldots, \alpha_{mk})^T \neq 0_{m \times 1}$ because $\dim(R(A)) = r$. Thus each column of $A$ is a linear combination of $r$ nonzero column vectors in $K^{m \times 1}$, so $\dim(C(A)) \leq r$, i.e., the column rank of $A$ is less than or equal to $r$. Therefore, $\dim(C(A)) \leq \dim(R(A))$. Similarly, $\dim(R(A)) \leq \dim(C(A))$. Therefore, $\dim(C(A)) = \dim(R(A))$. $\square$

Row operations on a matrix over a skew field follow the same procedure as in the case of a matrix over a field. Consequently, an elementary row matrix is again defined to be a matrix obtained by applying exactly one row operation to an identity matrix, and the reduced-row matrix obtained by applying row operations to a matrix can be written as a product of elementary row matrices and the original matrix.

Theorem 1.3. For an arbitrary $m \times n$ matrix $A$ over a skew field $K$, the rank of $A$ is equal to the number of nonzero rows of the reduced-row echelon matrix of $A$.

Proof. Let $A$ be an $m \times n$ matrix over a skew field $K$, and let $A_{RR}$ denote the reduced-row echelon matrix of $A$, so that
$$A_{RR} = \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_r \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$
where $R_i \neq 0_{1 \times n}$ for all $i = 1, 2, \ldots, r$ and $0 < r \leq \min\{m, n\}$. We will show that $\{R_1, R_2, \ldots, R_r\}$ is linearly independent. Suppose $\{R_1, R_2, \ldots, R_r\}$ is not linearly independent, i.e., there exists $j \in \{1, 2, \ldots, r\}$ such that $R_j$ is a linear combination of the remaining rows, say
$$R_j = \alpha_1 R_1 + \alpha_2 R_2 + \cdots + \alpha_{j-1} R_{j-1} + \alpha_{j+1} R_{j+1} + \cdots + \alpha_r R_r. \tag{1.3.1}$$

Suppose $R_j = (0, 0, \ldots, 0, b, \ldots)$, where $b \neq 0$ is the leading entry of $R_j$ and lies in column $k$. Since $R_j$ is a row of $A_{RR}$, the other entries of column $k$, which are not leading entries, are all zero. Then the entry in column $k$ of (1.3.1) becomes
$$b = \alpha_1 0 + \alpha_2 0 + \cdots + \alpha_{j-1} 0 + \alpha_{j+1} 0 + \cdots + \alpha_r 0 = 0,$$
which contradicts the assumption that $b \neq 0$. Therefore, $R_1, R_2, \ldots, R_r$ are linearly independent, and hence $\{R_1, R_2, \ldots, R_r\}$ is a basis of the row space of $A$. Then $\dim(R(A)) = r$, i.e., $r(A) = r$ equals the number of nonzero rows of the reduced-row echelon matrix of $A$. $\square$

Next, we state the dimension theorem for the sum of two subspaces of a vector space over a skew field. Its proof is the same as for a vector space over a field, so we omit it.

Theorem 1.4. Let $V$ be a vector space over a skew field $K$, and let $U$ and $W$ be subspaces of $V$. Define the sum $U + W = \{u + w \mid u \in U, w \in W\}$. Then $U + W$ is the smallest subspace of $V$ containing both $U$ and $W$, and
$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W).$$

2 Rank of an Upper Triangular Block Matrix Related to Row Spaces and Column Spaces

In this section, we derive rank equations for a block matrix. For $A \in K^{m \times n}$ and $B \in K^{m \times t}$, we denote by $C\left(\begin{bmatrix} A & B \end{bmatrix}\right)$ the subspace of $K^{m \times 1}$ spanned by the columns of $A$ and $B$, and by $C(A) + C(B)$ the smallest subspace of $K^{m \times 1}$ containing both the column space of $A$ and the column space of $B$. On the other hand, for $A \in K^{m \times n}$ and $B \in K^{s \times n}$, we denote by $R\left(\begin{bmatrix} A \\ B \end{bmatrix}\right)$ the subspace of $K^{1 \times n}$ spanned by the rows of $A$ and $B$, and by $R(A) + R(B)$ the smallest subspace of $K^{1 \times n}$ containing both the row space of $A$ and the row space of $B$.

Lemma 2.1. [4] Let $A \in K^{s \times n}$ and $B \in K^{n \times m}$. Then $C(AB) \subseteq C(A)$ and $R(AB) \subseteq R(B)$. Moreover, $r(AB) \leq \min\{r(A), r(B)\}$.
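
For a quick illustration of Lemma 2.1, we may work over the real field $\mathbb{R}$, a commutative special case of a skew field. Take
$$A = \begin{bmatrix} 1 & 0 \end{bmatrix} \in \mathbb{R}^{1 \times 2}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in \mathbb{R}^{2 \times 1}.$$
Then $AB = \begin{bmatrix} 0 \end{bmatrix}$, so $C(AB) = \{\bar{0}\} \subseteq C(A)$ and $R(AB) = \{\bar{0}\} \subseteq R(B)$, and $r(AB) = 0 < 1 = \min\{r(A), r(B)\}$, showing that the inequality can be strict.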

Lemma 2.2. Let $A$ and $B$ be matrices over a skew field $K$.
1. If $A \in K^{m \times n}$ and $B \in K^{m \times t}$, then $C\left(\begin{bmatrix} A & B \end{bmatrix}\right) = C(A) + C(B)$. Moreover,
$$\dim(C(A) \cap C(B)) = r(A) + r(B) - r\left(\begin{bmatrix} A & B \end{bmatrix}\right).$$
2. If $A \in K^{m \times n}$ and $B \in K^{s \times n}$, then $R\left(\begin{bmatrix} A \\ B \end{bmatrix}\right) = R(A) + R(B)$. Moreover,
$$\dim(R(A) \cap R(B)) = r(A) + r(B) - r\left(\begin{bmatrix} A \\ B \end{bmatrix}\right).$$

Proof. 1. Let $A \in K^{m \times n}$ and $B \in K^{m \times t}$. By the defining property of $C(A) + C(B)$, we have $C(A) + C(B) \subseteq C\left(\begin{bmatrix} A & B \end{bmatrix}\right)$. Since every linear combination of columns of $A$ and $B$ must also lie in $C(A) + C(B)$, we have $C\left(\begin{bmatrix} A & B \end{bmatrix}\right) \subseteq C(A) + C(B)$. By Theorem 1.4,
$$r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = \dim(C(A) + C(B)) = r(A) + r(B) - \dim(C(A) \cap C(B)),$$
that is, $\dim(C(A) \cap C(B)) = r(A) + r(B) - r\left(\begin{bmatrix} A & B \end{bmatrix}\right)$.

2. Let $A \in K^{m \times n}$ and $B \in K^{s \times n}$. Then $A^T \in K^{n \times m}$ and $B^T \in K^{n \times s}$. Applying the result above, we obtain
$$R\left(\begin{bmatrix} A \\ B \end{bmatrix}\right) = C\left(\begin{bmatrix} A^T & B^T \end{bmatrix}\right) = C(A^T) + C(B^T) = R(A) + R(B).$$
Similarly, $\dim(R(A) \cap R(B)) = r(A) + r(B) - r\left(\begin{bmatrix} A \\ B \end{bmatrix}\right)$. $\square$

Lemma 2.3. Let $A$ and $B$ be matrices over a skew field $K$.
1. If $A \in K^{m \times n}$ and $B \in K^{m \times t}$, then $C(A) \cap C(B) \neq \{\bar{0}\}$ if and only if there are column vectors $\bar{\alpha}, \bar{\beta}$ such that $A\bar{\alpha} = B\bar{\beta} \neq \bar{0}$.
2. If $A \in K^{m \times n}$ and $B \in K^{s \times n}$, then $R(A) \cap R(B) \neq \{\bar{0}\}$ if and only if there are row vectors $\bar{\gamma}, \bar{\theta}$ such that $\bar{\gamma}A = \bar{\theta}B \neq \bar{0}$.

Proof. 1. Let $A \in K^{m \times n}$ and $B \in K^{m \times t}$ with $A = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}$ and $B = \begin{bmatrix} B_1 & B_2 & \cdots & B_t \end{bmatrix}$, where $A_i$ is the $i$-th column of $A$ for $i = 1, 2, \ldots, n$ and $B_j$ is the $j$-th column of $B$ for $j = 1, 2, \ldots, t$.

($\Rightarrow$) Assume $C(A) \cap C(B) \neq \{\bar{0}\}$, and let $\bar{v} \neq \bar{0}$ be a column matrix such that $\bar{v} \in C(A) \cap C(B)$. Then there exist $\alpha_1, \alpha_2, \ldots, \alpha_n \in K$ and $\beta_1, \beta_2, \ldots, \beta_t \in K$ such that $\bar{v} = A_1\alpha_1 + A_2\alpha_2 + \cdots + A_n\alpha_n$ and $\bar{v} = B_1\beta_1 + B_2\beta_2 + \cdots + B_t\beta_t$. Let $\bar{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n)^T$ and $\bar{\beta} = (\beta_1, \beta_2, \ldots, \beta_t)^T$. Then $A\bar{\alpha} = A_1\alpha_1 + A_2\alpha_2 + \cdots + A_n\alpha_n = \bar{v} = B_1\beta_1 + B_2\beta_2 + \cdots + B_t\beta_t = B\bar{\beta}$, so there exist column vectors $\bar{\alpha}, \bar{\beta}$ such that $A\bar{\alpha} = B\bar{\beta} \neq \bar{0}$.

($\Leftarrow$) Assume that there exist column vectors $\bar{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n)^T$ and $\bar{\beta} = (\beta_1, \beta_2, \ldots, \beta_t)^T$ such that $A\bar{\alpha} = B\bar{\beta} \neq \bar{0}$. Then $\bar{0} \neq A_1\alpha_1 + A_2\alpha_2 + \cdots + A_n\alpha_n = A\bar{\alpha} = B\bar{\beta} = B_1\beta_1 + B_2\beta_2 + \cdots + B_t\beta_t$. Let $\bar{v} = A_1\alpha_1 + A_2\alpha_2 + \cdots + A_n\alpha_n = B_1\beta_1 + B_2\beta_2 + \cdots + B_t\beta_t$. Then $\bar{v} \neq \bar{0}$ and $\bar{v} \in C(A) \cap C(B)$. Therefore, $C(A) \cap C(B) \neq \{\bar{0}\}$.

2. Let $A \in K^{m \times n}$ and $B \in K^{s \times n}$. Then $A^T \in K^{n \times m}$ and $B^T \in K^{n \times s}$. From the result proved in 1, we have $R(A) \cap R(B) = C(A^T) \cap C(B^T) \neq \{\bar{0}\}$ if and only if there are column vectors $\bar{\alpha}, \bar{\beta}$ such that $A^T\bar{\alpha} = B^T\bar{\beta} \neq \bar{0}$, if and only if there are column vectors $\bar{\alpha}, \bar{\beta}$ such that $(\bar{\alpha}^T A)^T = (\bar{\beta}^T B)^T \neq \bar{0}$, if and only if there are row vectors $\bar{\gamma} = \bar{\alpha}^T$, $\bar{\theta} = \bar{\beta}^T$ such that $\bar{\gamma}A = \bar{\theta}B \neq \bar{0}$. $\square$

By Lemma 2.2 and Lemma 2.3, we derive the following theorem.

Theorem 2.4. Let $A$ and $B$ be matrices over a skew field $K$.
1. If $A \in K^{m \times n}$ and $B \in K^{m \times t}$, then the following are equivalent:
(i) $r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = r(A) + r(B)$;
(ii) $C(A) \cap C(B) = \{\bar{0}\}$;
(iii) there are no column vectors $\bar{\alpha}, \bar{\beta}$ such that $A\bar{\alpha} = B\bar{\beta} \neq \bar{0}$.
2. If $A \in K^{m \times n}$ and $B \in K^{s \times n}$, then the following are equivalent:
(iv) $r\left(\begin{bmatrix} A \\ B \end{bmatrix}\right) = r(A) + r(B)$;
(v) $R(A) \cap R(B) = \{\bar{0}\}$;
(vi) there are no row vectors $\bar{\gamma}, \bar{\theta}$ such that $\bar{\gamma}A = \bar{\theta}B \neq \bar{0}$.
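
The equivalences in part 1 can be checked in a concrete case over the real field $\mathbb{R}$. For
$$A = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 \\ 0 \end{bmatrix},$$
we have $A \cdot 2 = B \cdot 1 \neq \bar{0}$, so (iii) fails; accordingly $C(A) \cap C(B) = C(A) \neq \{\bar{0}\}$ and $r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = 1 < 2 = r(A) + r(B)$. Replacing $B$ by $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ makes (i), (ii) and (iii) all hold, with $r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = 2 = r(A) + r(B)$.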

We also obtain inequalities for the rank of a $2 \times 2$ block matrix.

Theorem 2.5. Let $A \in K^{m \times n}$, $B \in K^{s \times t}$ and $C \in K^{m \times t}$.
1. $r(A) + r(B) \leq r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) \leq r(A) + r\left(\begin{bmatrix} C \\ B \end{bmatrix}\right)$.
2. $r(A) + r(B) \leq r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) \leq r\left(\begin{bmatrix} A & C \end{bmatrix}\right) + r(B)$.

Proof. Let $A \in K^{m \times n}$, $B \in K^{s \times t}$ and $C \in K^{m \times t}$. It is shown in [4] that
$$r(A) + r(B) = r\left(\begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}\right) \leq r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right).$$
By Lemma 2.2, we have $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) \leq r(A) + r\left(\begin{bmatrix} C \\ B \end{bmatrix}\right)$ and $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) \leq r\left(\begin{bmatrix} A & C \end{bmatrix}\right) + r(B)$. $\square$

By Theorem 2.4 and Theorem 2.5, we derive the following conditions for equalities of ranks of a block matrix.

Corollary 2.6. Let $A \in K^{m \times n}$, $B \in K^{s \times t}$ and $C \in K^{m \times t}$.
1. The following are equivalent:
(i) $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r(A) + r\left(\begin{bmatrix} C \\ B \end{bmatrix}\right)$;
(ii) $C\left(\begin{bmatrix} A \\ 0 \end{bmatrix}\right) \cap C\left(\begin{bmatrix} C \\ B \end{bmatrix}\right) = \left\{\begin{bmatrix} \bar{0} \\ \bar{0} \end{bmatrix}\right\}$;
(iii) there are no matrices $X, Y, Z, W$ such that $\begin{bmatrix} A \\ 0 \end{bmatrix} X = \begin{bmatrix} C \\ B \end{bmatrix} Z = \begin{bmatrix} Y \\ W \end{bmatrix} \neq \bar{0}$.
2. The following are equivalent:
(iv) $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r\left(\begin{bmatrix} A & C \end{bmatrix}\right) + r(B)$;
(v) $R\left(\begin{bmatrix} A & C \end{bmatrix}\right) \cap R\left(\begin{bmatrix} 0 & B \end{bmatrix}\right) = \left\{\begin{bmatrix} \bar{0} & \bar{0} \end{bmatrix}\right\}$;
(vi) there are no matrices $X, Y, Z, W$ such that $X \begin{bmatrix} A & C \end{bmatrix} = Z \begin{bmatrix} 0 & B \end{bmatrix} = \begin{bmatrix} Y & W \end{bmatrix} \neq \bar{0}$.
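
The bounds in Theorem 2.5 and the equalities in Corollary 2.6 can be seen together in a small example over $\mathbb{R}$. Take
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$
Then $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = 2$, while $r(A) + r(B) = 1$, so the lower bound of Theorem 2.5 is strict; on the other hand, $r(A) + r\left(\begin{bmatrix} C \\ B \end{bmatrix}\right) = 2$ and $r\left(\begin{bmatrix} A & C \end{bmatrix}\right) + r(B) = 2$, so both equalities (i) and (iv) of Corollary 2.6 hold for these matrices.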

In the last section, we use a generalized inverse to find necessary and sufficient conditions such that $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r(A) + r(B)$.

3 Rank of an Upper Triangular Block Matrix and Generalized Inverses

It is commonly known that the inverse of a square matrix $A$ is the unique matrix $A^{-1}$ such that $A^{-1}A = AA^{-1} = I$, where $I$ is the identity matrix. However, the inverse used here is of a different type from this usual inverse. To be precise, in our work an inverse is defined for a matrix of arbitrary size (not necessarily square) in a more generalized sense (see [1]). Most of the results in this section are analogues of results shown in [3], but for a block matrix over a skew field instead.

Definition 3.1. Let $A \in K^{m \times n}$. We call $B$ a generalized inverse (or weak inverse, or inner inverse) of $A$ if $ABA = A$. We write $A^-$ for an arbitrary generalized inverse of $A$.

Definition 3.2. Let $A \in K^{m \times n}$. If $U \in K^{n \times m}$ satisfies $UA = I_n$, then $U$ is called a left inverse of $A$. Similarly, if $V \in K^{n \times m}$ satisfies $AV = I_m$, then $V$ is called a right inverse of $A$.

It is known (e.g. see [3]) that $A$ has a left inverse if and only if $r(A) = n$, called full column rank, and that $A$ has a right inverse if and only if $r(A) = m$, called full row rank.

Theorem 3.3. [3] Let $A \in K^{m \times n}$.
(i) $A$ has a left inverse if and only if $r(A) = n = \min\{m, n\}$.
(ii) $A$ has a right inverse if and only if $r(A) = m = \min\{m, n\}$.
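
As a simple illustration of these notions over $\mathbb{R}$, the matrix $A = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \in \mathbb{R}^{2 \times 1}$ has full column rank, and $U = \begin{bmatrix} 1 & 0 \end{bmatrix}$ is a left inverse of $A$ since $UA = I_1$. Both $\begin{bmatrix} 1 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & \tfrac{1}{2} \end{bmatrix}$ satisfy $AA^-A = A$, so each is a generalized inverse of $A$; in particular, a generalized inverse need not be unique.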

Lemma 3.4 (Full rank decomposition). Let $A \in K^{m \times n}$ with $r(A) = r > 0$, where $r \leq \min\{m, n\}$. Then $A = CR$, where $C \in K^{m \times r}$ has a left inverse and $R \in K^{r \times n}$ has a right inverse.

Proof. Let
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},$$
and denote by $R_i = (a_{i1}, a_{i2}, \ldots, a_{in})$ the $i$-th row of $A$ for $i = 1, 2, \ldots, m$. Then there exist $r$ rows of $A$ forming a basis of the row space of $A$. We consider two cases.

Case 1: $R_1, R_2, \ldots, R_r$ form a basis of $R(A)$. Then each of $R_{r+1}, R_{r+2}, \ldots, R_m$ is a linear combination of $R_1, R_2, \ldots, R_r$, so $R_1 = R_1 + 0R_2 + \cdots + 0R_r$, $R_2 = 0R_1 + R_2 + \cdots + 0R_r$, $\ldots$, $R_r = 0R_1 + 0R_2 + \cdots + R_r$, $R_{r+1} = \alpha_{(r+1)1}R_1 + \alpha_{(r+1)2}R_2 + \cdots + \alpha_{(r+1)r}R_r$, $\ldots$, $R_m = \alpha_{m1}R_1 + \alpha_{m2}R_2 + \cdots + \alpha_{mr}R_r$ for some $\alpha_{ij} \in K$, where $i \in \{r+1, r+2, \ldots, m\}$ and $j \in \{1, 2, \ldots, r\}$. Then
$$A = \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_m \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \alpha_{(r+1)1} & \alpha_{(r+1)2} & \cdots & \alpha_{(r+1)r} \\ \alpha_{(r+2)1} & \alpha_{(r+2)2} & \cdots & \alpha_{(r+2)r} \\ \vdots & \vdots & & \vdots \\ \alpha_{m1} & \alpha_{m2} & \cdots & \alpha_{mr} \end{bmatrix} \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_r \end{bmatrix}.$$
Therefore, $A$ can be written as $A = CR$, where $C \in K^{m \times r}$ has a left inverse and $R \in K^{r \times n}$ has a right inverse.

Case 2: $R_{i_1}, R_{i_2}, \ldots, R_{i_r}$ form a basis of $R(A)$, where $i_1 < i_2 < \cdots < i_r$ and $i_1, i_2, \ldots, i_r \in \{1, 2, \ldots, m\}$. We rewrite $\{R_1, R_2, \ldots, R_m\}$ as the ordered set $\{R_{i_1}, R_{i_2}, \ldots, R_{i_r}, \ldots, R_{i_m}\}$ and let
$$\bar{A} = \begin{bmatrix} R_{i_1} \\ R_{i_2} \\ \vdots \\ R_{i_r} \\ \vdots \\ R_{i_m} \end{bmatrix}.$$
Then $A$ is row equivalent to $\bar{A}$, so there are elementary matrices $e_1, e_2, \ldots, e_k$ such that $\bar{A} = e_k e_{k-1} \cdots e_1 A$. By Case 1, there are matrices $\bar{C} \in K^{m \times r}$ and $\bar{R} \in K^{r \times n}$ such that $\bar{A} = \bar{C}\bar{R}$, where $\bar{C}$ has a left inverse and $\bar{R}$ has a right inverse. Then $A = e_1^{-1} e_2^{-1} \cdots e_k^{-1} \bar{A} = e_1^{-1} e_2^{-1} \cdots e_k^{-1} \bar{C}\bar{R}$. Choose $C = e_1^{-1} e_2^{-1} \cdots e_k^{-1} \bar{C}$; then $C$ also has a left inverse. Therefore, $A$ can be written as $A = C\bar{R}$, where $C \in K^{m \times r}$ has a left inverse and $\bar{R} \in K^{r \times n}$ has a right inverse.

We also exhibit another way to find a decomposition of $A$ (showing that such a decomposition is not unique).

Let $C_j = (a_{1j}, a_{2j}, \ldots, a_{mj})^T$ for all $j = 1, 2, \ldots, n$. Then there exist $r$ columns of $A$ forming a basis of the column space of $A$; without loss of generality, say $C_1, C_2, \ldots, C_r$. Hence each of $C_{r+1}, C_{r+2}, \ldots, C_n$ is a linear combination of $C_1, C_2, \ldots, C_r$, so $C_1 = C_1 + C_2 0 + \cdots + C_r 0$, $C_2 = C_1 0 + C_2 + \cdots + C_r 0$, $\ldots$, $C_r = C_1 0 + C_2 0 + \cdots + C_r$, $C_{r+1} = C_1\beta_{(r+1)1} + C_2\beta_{(r+1)2} + \cdots + C_r\beta_{(r+1)r}$, $\ldots$, $C_n = C_1\beta_{n1} + C_2\beta_{n2} + \cdots + C_r\beta_{nr}$ for some $\beta_{ij} \in K$, where $i \in \{r+1, r+2, \ldots, n\}$ and $j \in \{1, 2, \ldots, r\}$. Then
$$A = \begin{bmatrix} C_1 & C_2 & \cdots & C_n \end{bmatrix} = \begin{bmatrix} C_1 & C_2 & \cdots & C_r \end{bmatrix} \begin{bmatrix} 1 & 0 & \cdots & 0 & \beta_{(r+1)1} & \beta_{(r+2)1} & \cdots & \beta_{n1} \\ 0 & 1 & \cdots & 0 & \beta_{(r+1)2} & \beta_{(r+2)2} & \cdots & \beta_{n2} \\ \vdots & & \ddots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & \beta_{(r+1)r} & \beta_{(r+2)r} & \cdots & \beta_{nr} \end{bmatrix}.$$
Therefore, $A$ can be written as $A = CR$, where $C \in K^{m \times r}$ has a left inverse and $R \in K^{r \times n}$ has a right inverse. $\square$

Theorem 3.5 (Existence of a generalized inverse). For $A \in K^{m \times n}$ with $r(A) = r > 0$, where $r \leq \min\{m, n\}$, there are nonsingular matrices $F \in K^{m \times m}$ and $G \in K^{n \times n}$ such that
$$A = F \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} G,$$
where some bordering zero matrices are absent if $A$ has full rank. Moreover, $A$ has a generalized inverse
$$A^- = G^{-1} \begin{bmatrix} I_r & X \\ Y & Z \end{bmatrix} F^{-1}$$
for any $X \in K^{r \times (m-r)}$, $Y \in K^{(n-r) \times r}$ and $Z \in K^{(n-r) \times (m-r)}$.

Proof. Let $A \in K^{m \times n}$. By Lemma 3.4, there are a left invertible matrix $C \in K^{m \times r}$ and a right invertible matrix $R \in K^{r \times n}$ such that $A = CR$. Let $F = \begin{bmatrix} C & C_0 \end{bmatrix}$ and $G = \begin{bmatrix} R \\ R_0 \end{bmatrix}$, where the matrices $C_0$ and $R_0$ are added, if necessary, so that $F$ and $G$ are nonsingular. Then
$$A = \begin{bmatrix} C & C_0 \end{bmatrix} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} R \\ R_0 \end{bmatrix} = F \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} G.$$
It can be verified that for any $X, Y, Z$ of appropriate dimensions, $\begin{bmatrix} I_r & X \\ Y & Z \end{bmatrix}$ is a generalized inverse of $\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$. Then
$$A \left( G^{-1} \begin{bmatrix} I_r & X \\ Y & Z \end{bmatrix} F^{-1} \right) A = A,$$
and thus $G^{-1} \begin{bmatrix} I_r & X \\ Y & Z \end{bmatrix} F^{-1}$ is a generalized inverse of $A$. $\square$

From now on, for any matrix $A$, we write $A^-$ for an arbitrary generalized inverse of $A$.
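
A small worked instance of this construction over $\mathbb{R}$: let $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$, so $r(A) = 1$ and $A = CR$ with $C = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $R = \begin{bmatrix} 1 & 2 \end{bmatrix}$. Bordering to $F = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$ and $G = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$ gives $A = F \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} G$, and the choice $X = Y = Z = 0$ yields the generalized inverse
$$A^- = G^{-1} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} F^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},$$
for which one checks directly that $AA^-A = A$.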

The following lemma is easily derived.

Lemma 3.6. Let $A \in K^{m \times n}$. Then $(I_n - A^-A)^2 = I_n - A^-A$ and $(I_m - AA^-)^2 = I_m - AA^-$.

Theorem 3.7. Let $A$, $B$ and $C$ be conformable matrices. For any choice of generalized inverse $A^-$,
(i) $r\left(\begin{bmatrix} A & (I_m - AA^-)B \end{bmatrix}\right) = r(A) + r((I_m - AA^-)B)$;
(ii) $r\left(\begin{bmatrix} A \\ C(I_n - A^-A) \end{bmatrix}\right) = r(A) + r(C(I_n - A^-A))$.

Proof. (i) Let $\bar{v}$ be a column matrix such that $\bar{v} \in C(A) \cap C((I_m - AA^-)B)$. Then $A\bar{\alpha} = \bar{v} = (I_m - AA^-)B\bar{\beta}$, where $\bar{\alpha}, \bar{\beta}$ are column vectors. Since $(I_m - AA^-)^2 = I_m - AA^-$ (by Lemma 3.6),
$$\bar{v} = (I_m - AA^-)B\bar{\beta} = (I_m - AA^-)^2 B\bar{\beta} = (I_m - AA^-)A\bar{\alpha} = \bar{0}.$$
Therefore, there are no column vectors $\bar{\alpha}, \bar{\beta}$ such that $A\bar{\alpha} = (I_m - AA^-)B\bar{\beta} \neq \bar{0}$. Hence $r\left(\begin{bmatrix} A & (I_m - AA^-)B \end{bmatrix}\right) = r(A) + r((I_m - AA^-)B)$ by Theorem 2.4.

(ii) Let $\bar{w}$ be a row matrix such that $\bar{w} \in R(A) \cap R(C(I_n - A^-A))$. Then $\bar{\lambda}A = \bar{w} = \bar{\gamma}C(I_n - A^-A)$, where $\bar{\lambda}, \bar{\gamma}$ are row vectors. Since $(I_n - A^-A)^2 = I_n - A^-A$ (by Lemma 3.6),
$$\bar{w} = \bar{\gamma}C(I_n - A^-A) = \bar{\gamma}C(I_n - A^-A)^2 = \bar{\lambda}A(I_n - A^-A) = \bar{0}.$$
Therefore, there are no row vectors $\bar{\lambda}, \bar{\gamma}$ such that $\bar{\lambda}A = \bar{\gamma}C(I_n - A^-A) \neq \bar{0}$. By Theorem 2.4, $r\left(\begin{bmatrix} A \\ C(I_n - A^-A) \end{bmatrix}\right) = r(A) + r(C(I_n - A^-A))$. $\square$

Theorem 3.8 (Cancellation Rules). Let $C \in K^{m \times m}$ have a left inverse (full column rank) and let $B \in K^{n \times n}$ have a right inverse (full row rank). Then for any matrix $A \in K^{m \times n}$, $r(A) = r(CA) = r(AB)$.

Proof. Let $L$ be a left inverse of $C$ and $R$ a right inverse of $B$, and let $A \in K^{m \times n}$. By Lemma 2.1, $r(A) = r(LCA) \leq r(CA) \leq r(A)$ and $r(A) = r(ABR) \leq r(AB) \leq r(A)$. Therefore, $r(A) = r(CA) = r(AB)$. $\square$

Theorem 3.9. Let $A \in K^{m \times n}$, $B \in K^{m \times t}$ and $C \in K^{s \times n}$. For any choices of their generalized inverses $A^-, B^-, C^-$,
(i) $r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = r(A) + r((I_m - AA^-)B) = r((I_m - BB^-)A) + r(B)$;
(ii) $r\left(\begin{bmatrix} A \\ C \end{bmatrix}\right) = r(A) + r(C(I_n - A^-A)) = r(A(I_n - C^-C)) + r(C)$.

Proof. (i) Since $\begin{bmatrix} I_n & A^-B \\ 0 & I_t \end{bmatrix}$ is a right inverse of $\begin{bmatrix} I_n & -A^-B \\ 0 & I_t \end{bmatrix}$ and
$$\begin{bmatrix} A & B \end{bmatrix} \begin{bmatrix} I_n & -A^-B \\ 0 & I_t \end{bmatrix} = \begin{bmatrix} A & (I_m - AA^-)B \end{bmatrix},$$
by Theorem 3.8 and Theorem 3.7(i),
$$r\left(\begin{bmatrix} A & B \end{bmatrix}\right) = r\left(\begin{bmatrix} A & (I_m - AA^-)B \end{bmatrix}\right) = r(A) + r((I_m - AA^-)B).$$

(ii) Since $\begin{bmatrix} I_m & 0 \\ CA^- & I_s \end{bmatrix}$ is a left inverse of $\begin{bmatrix} I_m & 0 \\ -CA^- & I_s \end{bmatrix}$ and
$$\begin{bmatrix} I_m & 0 \\ -CA^- & I_s \end{bmatrix} \begin{bmatrix} A \\ C \end{bmatrix} = \begin{bmatrix} A \\ C(I_n - A^-A) \end{bmatrix},$$
by Theorem 3.8 and Theorem 3.7(ii), we have
$$r\left(\begin{bmatrix} A \\ C \end{bmatrix}\right) = r\left(\begin{bmatrix} A \\ C(I_n - A^-A) \end{bmatrix}\right) = r(A) + r(C(I_n - A^-A)). \qquad \square$$

Theorem 3.10. Let $A \in K^{m \times n}$, $B \in K^{s \times t}$ and $C \in K^{m \times t}$. Then $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r(A) + r(B)$ if and only if $(I_m - AA^-)C(I_t - B^-B) = 0_{m \times t}$.

Proof. Since $\begin{bmatrix} 0 \\ B^- \end{bmatrix}$ is a generalized inverse of $\begin{bmatrix} 0 & B \end{bmatrix}$, we have
$$I_{n+t} - \begin{bmatrix} 0 \\ B^- \end{bmatrix} \begin{bmatrix} 0 & B \end{bmatrix} = \begin{bmatrix} I_n & 0 \\ 0 & I_t - B^-B \end{bmatrix}$$
and
$$\begin{bmatrix} A & C \end{bmatrix} \left( I_{n+t} - \begin{bmatrix} 0 \\ B^- \end{bmatrix} \begin{bmatrix} 0 & B \end{bmatrix} \right) = \begin{bmatrix} A & C \end{bmatrix} \begin{bmatrix} I_n & 0 \\ 0 & I_t - B^-B \end{bmatrix} = \begin{bmatrix} A & C(I_t - B^-B) \end{bmatrix}.$$
That is, by Theorem 3.9,
$$\begin{aligned} r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) &= r\left(\begin{bmatrix} 0 & B \end{bmatrix}\right) + r\left(\begin{bmatrix} A & C \end{bmatrix}\left( I_{n+t} - \begin{bmatrix} 0 \\ B^- \end{bmatrix}\begin{bmatrix} 0 & B \end{bmatrix} \right)\right) \\ &= r(B) + r\left(\begin{bmatrix} A & C(I_t - B^-B) \end{bmatrix}\right) \\ &= r(B) + r(A) + r\left((I_m - AA^-)(C - CB^-B)\right) \\ &= r(A) + r(B) + r\left(C - CB^-B - AA^-C + AA^-CB^-B\right) \\ &= r(A) + r(B) + r\left((C - AA^-C)(I_t - B^-B)\right) \\ &= r(A) + r(B) + r\left((I_m - AA^-)C(I_t - B^-B)\right). \end{aligned}$$
Thus $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = r(A) + r(B)$ if and only if $(I_m - AA^-)C(I_t - B^-B) = 0_{m \times t}$. $\square$
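
The criterion of Theorem 3.10 is easy to test in a small example over $\mathbb{R}$. Take $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ with $A^- = A$ and $B = \begin{bmatrix} 0 \end{bmatrix}$ with $B^- = \begin{bmatrix} 0 \end{bmatrix}$. For $C = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$,
$$(I_2 - AA^-)C(I_1 - B^-B) = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix} = 0_{2 \times 1},$$
and indeed $r\left(\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}\right) = 1 = r(A) + r(B)$. For $C = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ the product is $\begin{bmatrix} 0 \\ 1 \end{bmatrix} \neq 0_{2 \times 1}$, and the block matrix has rank $2 > r(A) + r(B)$.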

Acknowledgements

We would like to express our gratitude to the anonymous reviewers for their comments on an earlier version of the manuscript.

References

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, 1974.

[2] A. Kleyn, Lectures on Linear Algebra over Division Ring, 2007. arXiv:math/0701238

[3] G. Marsaglia and G. P. H. Styan, Equalities and Inequalities for Ranks of Matrices, Linear and Multilinear Algebra, 2 (1974), no. 3, 269-292. https://doi.org/10.1080/03081087408817070

[4] J. Wang and J. Lu, Proof of Inequality of Rank of Matrix on Skew Field by Constructing Block Matrix, International Mathematical Forum, 4 (2009), no. 36, 1803-1808.

Received: May 19, 2018; Published: June 12, 2018