Re-nnd solutions of the matrix equation AXB = C


Dragana S. Cvetković-Ilić

Abstract. In this article we consider Re-nnd solutions of the equation AXB = C with respect to X, where A, B, C are given matrices. We give necessary and sufficient conditions for the existence of Re-nnd solutions and present a general form of such solutions. As a special case (B = I) we obtain the results of the paper of J. Gross (Explicit solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 289, 131-134 (1999)).

Key words and phrases: operator equations, Hermitian part, Re-nnd solutions.
2000 Mathematics Subject Classification: 47A62, 15A24.
Supported by Grant No. 1232 of the Ministry of Science, Technology and Development, Republic of Serbia.

1 Introduction

Let C^{n×m} denote the set of complex n×m matrices and let I_n denote the unit matrix of order n. By A^*, R(A), rank(A) and N(A) we denote the conjugate transpose, the range, the rank and the null space of A ∈ C^{n×m}. The Hermitian part of X is defined as H(X) = (X + X^*)/2. We say that X is Re-nnd (Re-nonnegative definite) if H(X) ≥ 0 and that X is Re-pd (Re-positive definite) if H(X) > 0. The symbol A^- stands for an arbitrary inner generalized inverse (g-inverse) of A, i.e. A^- satisfies AA^-A = A. By A^† we denote the Moore-Penrose inverse of A ∈ C^{n×m}, i.e. the unique matrix A^† ∈ C^{m×n} satisfying

    AA^†A = A,   A^†AA^† = A^†,   (AA^†)^* = AA^†,   (A^†A)^* = A^†A.
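As a quick numerical illustration of this notation (a NumPy sketch, not taken from any of the cited sources; the helper names herm and is_re_nnd are ad hoc), one can check the four Penrose conditions and test Re-nnd-ness through the smallest eigenvalue of the Hermitian part:

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(X):
    """Hermitian part H(X) = (X + X^*)/2."""
    return (X + X.conj().T) / 2

def is_re_nnd(X, tol=1e-10):
    """X is Re-nnd iff H(X) is nonnegative definite."""
    return np.min(np.linalg.eigvalsh(herm(X))) >= -tol

A = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
Ad = np.linalg.pinv(A)  # Moore-Penrose inverse A^dagger

# The four Penrose conditions.
assert np.allclose(A @ Ad @ A, A)
assert np.allclose(Ad @ A @ Ad, Ad)
assert np.allclose((A @ Ad).conj().T, A @ Ad)
assert np.allclose((Ad @ A).conj().T, Ad @ A)

# A matrix of the form (nonnegative definite) + (skew-Hermitian) is Re-nnd.
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
N = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
X = M @ M.conj().T + (N - N.conj().T)  # H(X) = M M^* >= 0
print(is_re_nnd(X))  # True
```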

For some important properties of generalized inverses see [5] and [6].

Many authors have studied the well-known equation

    AXB = C   (1)

with the unknown matrix X, where X is required to belong to some special class of matrices. For example, in [19] and [7] the existence of reflexive and antireflexive solutions of the matrix equation (1), with respect to a generalized reflection matrix P, was considered, while in [15], [9], [18] and [20] necessary and sufficient conditions for the existence of symmetric and antisymmetric solutions of the equation (1) were investigated. The Hermitian nonnegative definite solutions of the equation AXA^* = B were investigated by C.G. Khatri and S.K. Mitra [15], J.K. Baksalary [4], H. Dai and P. Lancaster [10], J. Gross [12], X. Zhang and M.Y. Cheng [24], and X. Zhang [25]. L. Wu [22] studied Re-positive definite solutions of the equation AX = B, and L. Wu and B. Cain [23] found the set of all complex Re-nnd matrices X such that XB = C and presented a criterion for Re-nndness. J. Gross [11] gave an alternative approach which simultaneously delivers explicit Re-nnd solutions, and gave a corrected version of some results from [23]. Some results from [23] were extended in the paper of Q. Wang and C. Yang [21], in which the authors presented criteria for 2×2 and 3×3 partitioned matrices to be Re-nnd, found necessary and sufficient conditions for the existence of Re-nnd solutions of the equation (1) and derived an expression for these solutions. In the paper of A. Dajić and J. Koliha [3], for the first time, a general form of the Re-nnd solutions of the equation AX = B is given, where A and B are given operators between Hilbert spaces. Besides these papers, many others have dealt with the problem of finding the Re-nnd and Re-pd solutions of some other types of equations.

In this paper, we first consider the matrix equation AXA^* = C, where A ∈ C^{n×m} and C ∈ C^{n×n}, and find necessary and sufficient conditions for the existence of Re-nnd solutions. Also, we present a general form of these solutions. Using this result, we obtain necessary and sufficient conditions for the equation AXB = C, where A ∈ C^{n×m}, B ∈ C^{m×n} and C ∈ C^{n×n}, to have a Re-nnd solution. In this way, the results of [23] and [11] follow as a corollary, and a general form of those solutions is given in addition.

As far as the author is aware, this is the first time that necessary and sufficient conditions for the existence of a Re-nnd solution of the equation AXB = C have been given in terms of g-inverses.

Now we state some well-known results which will be used frequently in the next section.

Theorem 1.1 ([17]) Let A ∈ C^{n×m}, B ∈ C^{p×r} and C ∈ C^{n×r}. Then the matrix equation AXB = C is consistent if and only if

    AA^-CB^-B = C,   (2)

for some A^-, B^-, in which case the general solution is

    X = A^-CB^- + Y - A^-AYBB^-,   (3)

for arbitrary Y ∈ C^{m×p}.

The following result was derived by Albert [1] for block matrices, by Cvetković-Ilić et al. [8] for C^*-algebras (for the special case of the Moore-Penrose inverse) and by Dajić and Koliha [3] for operators between different Hilbert spaces. Here we give the basic version proved in [1].

Theorem 1.2 Let M ∈ C^{(n+m)×(n+m)} be a Hermitian block matrix given by

    M = [ A  B ; B^*  D ],

where A ∈ C^{n×n} and D ∈ C^{m×m}. Then M ≥ 0 if and only if A ≥ 0, AA^-B = B and D - B^*A^-B ≥ 0.
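A minimal NumPy sketch of Theorem 1.1 (an illustration only, for a consistent instance generated from a random X_true, with the pseudoinverses taken as particular g-inverses):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, r = 4, 6, 5, 3

# A consistent instance: choose C = A X_true B.
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((p, r)) + 1j * rng.standard_normal((p, r))
X_true = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
C = A @ X_true @ B

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)  # particular g-inverses

# Criterion (2): A A^- C B^- B = C.
assert np.allclose(A @ Ad @ C @ Bd @ B, C)

# General solution (3): X = A^- C B^- + Y - A^- A Y B B^-, Y arbitrary.
Y = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
X = Ad @ C @ Bd + Y - Ad @ A @ Y @ B @ Bd
assert np.allclose(A @ X @ B, C)
```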

Anderson and Duffin [2] defined the parallel sum of a pair of matrices of the same order as

    A : B = A(A + B)^-B.

It is clear that, for this definition to be meaningful, the expression A(A + B)^-B must be independent of the choice of the g-inverse (A + B)^-. Hence, a pair of matrices A and B is said to be parallel summable if A(A + B)^-B is invariant under the choice of the g-inverse (A + B)^-, that is, if

    R(A) ⊆ R(A + B),   R(A^*) ⊆ R((A + B)^*),

or equivalently

    R(B) ⊆ R(A + B),   R(B^*) ⊆ R((A + B)^*).

Note that R(A) ⊆ R(B) if and only if BB^-A = A. By Theorem 2.1 of [13], R(A) ⊆ R(B) if and only if AA^* ≤ λ^2 BB^* for some λ ≥ 0, so for nonnegative definite matrices A and B we have A ≤ A + B, which gives R(A^{1/2}) ⊆ R((A + B)^{1/2}). This implies R(A) ⊆ R((A + B)^{1/2}), or equivalently

    (A + B)^{1/2}((A + B)^{1/2})^†A = A.

Now,

    (A + B)(A + B)^†A = ((A + B)^{1/2}((A + B)^{1/2})^†)^2 A = A,

which is equivalent to R(A) ⊆ R(A + B). Hence, nonnegative definite matrices A and B are parallel summable. Furthermore, in [2] it was proved that for a pair of parallel summable matrices A : B = B : A holds, i.e.

    A(A + B)^-B = B(A + B)^-A.   (4)
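The symmetry (4) is easy to observe numerically for singular nonnegative definite matrices, taking the Moore-Penrose inverse as one admissible g-inverse of A + B (a sketch, not a proof; the helper random_nnd is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_nnd(rank):
    M = rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank))
    return M @ M.conj().T  # nonnegative definite, rank-deficient

A, B = random_nnd(2), random_nnd(3)
Sd = np.linalg.pinv(A + B)  # one admissible g-inverse of A + B

# Parallel sum symmetry (4): A(A+B)^- B = B(A+B)^- A.
assert np.allclose(A @ Sd @ B, B @ Sd @ A)
print("rank of A : B =", np.linalg.matrix_rank(A @ Sd @ B))
```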

2 Results

The next result was first proved by L. Wu and B. Cain [23] and later restated by J. Gross [11]. It gives necessary and sufficient conditions for the matrix equation AX = B to have a Re-nnd solution X, where A and B are given matrices of suitable size, and presents a possible explicit expression for such an X.

Theorem 2.1 Let A, B ∈ C^{n×m}. There exists a Re-nnd matrix X ∈ C^{m×m} satisfying AX = B if and only if AA^†B = B and AB^* is Re-nnd.

From the proof of this theorem we can see that

    X_0 = A^†B - (A^†B)^* + A^†AB^*(A^†)^*

is one of the Re-nnd solutions of AX = B. Also, in [11] the author mentions that any matrix of the form X = X_0 + (I - A^†A)Y(I - A^†A), with Y ∈ C^{m×m} Re-nnd, is also a Re-nnd solution of AX = B whenever such solutions exist, but he did not present a general form of such solutions.
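The following NumPy sketch illustrates Theorem 2.1 together with the particular solution X_0 as written above (the test data, generated so that a Re-nnd solution is guaranteed to exist, are our own construction):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 5

def herm(X):
    return (X + X.conj().T) / 2

def min_herm_eig(X):
    return np.min(np.linalg.eigvalsh(herm(X)))

# Build B = A X_re with X_re Re-nnd, so a Re-nnd solution of AX = B exists.
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
N = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X_re = M @ M.conj().T + (N - N.conj().T)
B = A @ X_re

Ad = np.linalg.pinv(A)
assert np.allclose(A @ Ad @ B, B)               # A A^dagger B = B
assert min_herm_eig(A @ B.conj().T) > -1e-10    # A B^* is Re-nnd

# The particular solution X_0 = A^dagger B - (A^dagger B)^* + A^dagger A B^* (A^dagger)^*.
X0 = Ad @ B - (Ad @ B).conj().T + Ad @ A @ B.conj().T @ Ad.conj().T
assert np.allclose(A @ X0, B)
assert min_herm_eig(X0) > -1e-10                # X_0 is Re-nnd
```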

Our main aim is to generalize these results to the equation AXB = C and to present a general form of its Re-nnd solutions. First, we consider the equation

    AXA^* = C   (5)

and find necessary and sufficient conditions for the existence of Re-nnd solutions. The next auxiliary result presents a general form of a solution X of (5) which satisfies H(X) = 0.

Lemma 2.1 If A ∈ C^{n×m}, then X ∈ C^{m×m} is a solution of the equation

    AXA^* = 0   (6)

which satisfies H(X) = 0 if and only if

    X = W(I - A^†A) - (I - A^†A)W^*,   (7)

for some W ∈ C^{m×m}.

Proof. Let r = rank(A) and suppose that X is a solution of the equation (6) with H(X) = 0. Using a singular value decomposition A = U Diag(D, 0)V^*, where U ∈ C^{n×n} and V ∈ C^{m×m} are unitary and D ∈ C^{r×r} is invertible, we have that

    A^† = V Diag(D^{-1}, 0)U^*   and   X = V [ X_1  X_2 ; X_3  X_4 ] V^*,

for some X_1 ∈ C^{r×r} and X_4 ∈ C^{(m-r)×(m-r)}. From AXA^* = 0 we get X_1 = 0 and, by H(X) = 0, X_3 = -X_2^* and H(X_4) = 0. Hence

    X = V [ 0  X_2 ; -X_2^*  X_4 ] V^*.

Taking into account that H(X_4) = 0, for W = V [ I  X_2 ; 0  X_4/2 ] V^* we obtain X = W(I - A^†A) - (I - A^†A)W^*.

In the other direction, one can easily check that, for arbitrary W ∈ C^{m×m}, the matrix X defined by (7) is a solution of the equation (6) which satisfies H(X) = 0.
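A short numerical check of Lemma 2.1 (illustration only): every X of the form (7) solves AXA^* = 0 and has vanishing Hermitian part.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 5

A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
Ad = np.linalg.pinv(A)
P = np.eye(m) - Ad @ A          # orthogonal projector onto N(A)

W = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X = W @ P - P @ W.conj().T      # the form (7)

assert np.allclose(A @ X @ A.conj().T, 0)   # A X A^* = 0
assert np.allclose(X + X.conj().T, 0)       # H(X) = 0
```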

Theorem 2.2 Let A ∈ C^{n×m} and C ∈ C^{n×n} be given matrices such that the equation (5) is consistent, and let r = rank H(C). There exists a Re-nnd solution of the equation (5) if and only if C is Re-nnd. In this case the general Re-nnd solution is given by

    X = A^=C(A^=)^* + (I - A^-A)UU^*(I - A^-A)^* + W(I - A^†A) - (I - A^†A)W^*,   (8)

with

    A^= = A^- + (I - A^-A)Z((H(C))^{1/2})^-,   (9)

where A^- and ((H(C))^{1/2})^- are arbitrary but fixed generalized inverses of A and (H(C))^{1/2}, respectively, and Z ∈ C^{m×n}, U ∈ C^{m×(m-r)}, W ∈ C^{m×m} are arbitrary matrices.

Proof. If X is a Re-nnd solution of the equation (5), then AH(X)A^* = H(C) ≥ 0, so C is Re-nnd.

In the other direction, if C is Re-nnd, then X_0 = A^-C(A^-)^* is a Re-nnd solution of the equation (5). Let us prove that a representation of the general Re-nnd solution is given by (8). If X is defined by (8), then X is Re-nnd and AXA^* = AA^-C(AA^-)^* = C. If X is an arbitrary Re-nnd solution of (5), then H(X) is a Hermitian nonnegative definite solution of the equation

    AZA^* = H(C),

so, by Theorem 1 of [12],

    H(X) = A^=H(C)(A^=)^* + (I - A^-A)UU^*(I - A^-A)^*,

where A^= is given by (9), for some Z ∈ C^{m×n} and U ∈ C^{m×(m-r)}. Note that

    H(X) = H(A^=C(A^=)^* + (I - A^-A)UU^*(I - A^-A)^*),

implying

    X = A^=C(A^=)^* + (I - A^-A)UU^*(I - A^-A)^* + Z_0,

where H(Z_0) = 0 and AZ_0A^* = 0. Using Lemma 2.1, we have that Z_0 = W(I - A^†A) - (I - A^†A)W^* for some W ∈ C^{m×m}. Hence X has a representation as in (8).

Now, let us consider the equation

    AXB = C,   (10)

where A ∈ C^{n×m}, B ∈ C^{m×n} and C ∈ C^{n×n} are given matrices, and find necessary and sufficient conditions for the existence of a Re-nnd solution. Without loss of generality we may assume that n = m and that the matrices A and B are both nonnegative definite. This follows from the fact that, whenever the equation (10) is solvable, X is a solution of it if and only if X is a solution of the equation A^*AXBB^* = A^*CB^*. Hence, from now on, we assume that A and B are nonnegative definite matrices from C^{n×n}.
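The normalization just described is easy to verify numerically: a solution of AXB = C also solves the equation with the nonnegative definite coefficients A^*A and BB^* and right-hand side A^*CB^* (a sketch for one random solvable instance):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 6

A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
X_true = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
C = A @ X_true @ B                  # a solvable instance of AXB = C

A2 = A.conj().T @ A                 # A^*A  (m x m, nonnegative definite)
B2 = B @ B.conj().T                 # BB^*  (m x m, nonnegative definite)
C2 = A.conj().T @ C @ B.conj().T    # A^*CB^*

X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)   # one particular solution
assert np.allclose(A @ X @ B, C)
assert np.allclose(A2 @ X @ B2, C2)
```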

The next theorem is the main result of this paper; it presents necessary and sufficient conditions for the equation (10) to have a Re-nnd solution.

Theorem 2.3 Let A, B, C ∈ C^{n×n} be given matrices such that the equation (10) is solvable. There exists a Re-nnd solution of (10) if and only if

    T = B(A + B)^-C(A + B)^-A   (11)

is Re-nnd, where (A + B)^- is a g-inverse of A + B. In this case a general Re-nnd solution is given by

    X = (A + B)^=(C + Y + Z + W)((A + B)^=)^*
        + (I - (A + B)^-(A + B))UU^*(I - (A + B)^-(A + B))^*
        + Q(I - (A + B)^†(A + B)) - (I - (A + B)^†(A + B))Q^*,   (12)

where Y, Z, W are arbitrary solutions of the equations

    Y(A + B)^-B = C(A + B)^-A,   A(A + B)^-Z = B(A + B)^-C,   A(A + B)^-W(A + B)^-B = T,   (13)

such that C + Y + Z + W is Re-nnd, (A + B)^= is defined by

    (A + B)^= = (A + B)^- + (I - (A + B)^-(A + B))P((H(C + Y + Z + W))^{1/2})^-,

and U ∈ C^{n×(n-r)}, Q ∈ C^{n×n}, P ∈ C^{n×n} are arbitrary, with r = rank H(C + Y + Z + W).
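Before turning to the proof, here is a numerical illustration of the criterion (a NumPy sketch; the instance is generated so that a Re-nnd solution exists): the matrix T of (11) is computed with (A + B)^- taken as (A + B)^†, its Hermitian part is checked to be nonnegative definite, and a particular (not necessarily Re-nnd) solution of (10) is obtained from pseudoinverses.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5

def herm(X):
    return (X + X.conj().T) / 2

def random_nnd(rank):
    M = rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank))
    return M @ M.conj().T

# Nonnegative definite A, B and a solvable instance built from a Re-nnd "seed" X_re.
A, B = random_nnd(3), random_nnd(4)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X_re = M @ M.conj().T + (N - N.conj().T)    # Re-nnd
C = A @ X_re @ B

Sd = np.linalg.pinv(A + B)                  # g-inverse (A+B)^- taken as (A+B)^dagger
T = B @ Sd @ C @ Sd @ A                     # the matrix (11)
print(np.min(np.linalg.eigvalsh(herm(T))) >= -1e-10)   # True: T is Re-nnd

X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)           # a particular solution of (10)
print(np.allclose(A @ X @ B, C))                         # True
```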

Proof. Denote E = (A + B)^-B, F = C(A + B)^-A, K = A(A + B)^- and L = B(A + B)^-C. Then the equations (13) are equivalent to

    YE = F,   KZ = L,   KWE = T.   (14)

Using (4) and the fact that E is g-invertible with E^- = B^-(A + B), we have that

    FE^-E = C(A + B)^-AB^-(A + B)(A + B)^-B = C(A + B)^-AB^-B = CB^-B(A + B)^-AB^-B
          = CB^-A(A + B)^-BB^-B = CB^-A(A + B)^-B = CB^-B(A + B)^-A = C(A + B)^-A = F,

which implies that the equation YE = F is consistent. In a similar way one can prove that the other two equations from (14) are consistent. Furthermore, T^* = F^*E = KL^*, so whenever T is Re-nnd, Theorem 2.1 implies that the first two equations from (14) have Re-nnd solutions.

Now, suppose that the equation (10) has a Re-nnd solution X. Then

    H(T) = H(B(A + B)^-AXB(A + B)^-A) = (B(A + B)^-A)H(X)(B(A + B)^-A)^* ≥ 0.

Conversely, let T be Re-nnd. We can check that

    X_0 = (A + B)^-(C + Y + Z + W)(A + B)^-   (15)

is a solution of the equation (10), where Y, Z, W are arbitrary solutions of the equations (14). Indeed, using (14),

    AX_0B = (A + B)(A + B)^-C(A + B)^-(A + B) = (A + B)(A + B)^-AA^-CB^-B(A + B)^-(A + B) = AA^-CB^-B = C.

Now we have to prove that, for some choice of Y, Z, W, the matrix C + Y + Z + W is Re-nnd, which implies that X_0 is Re-nnd. Let

    Y = FE^† - (FE^†)^* + (E^†)^*F^*EE^† + (I - EE^†)(I - EE^†),
    Z = K^†L - (K^†L)^* + K^†KL^*(K^†)^* + (I - K^†K)Q(I - K^†K),
    W = K^†TE^† - (I - K^†K)S - S(I - EE^†),

where Q = (C - K^†TE^†)(C - K^†TE^†)^* and S = K^†KC + CEE^†. Obviously, Y, Z and W are solutions of the equations (14), and

    H(Y) = (E^†)^*H(T)E^† + (I - EE^†)(I - EE^†),
    H(Z) = K^†H(T)(K^†)^* + (I - K^†K)H(Q)(I - K^†K),
    H(W) = K^†TE^† + (E^†)^*T^*(K^†)^* - H(CEE^† + K^†KC - 2K^†TE^†).

Using

    K^†KK^†T^*E^† = K^†KK^†KL^*E^† = K^†KL^*E^† = K^†T^*E^†,
    K^†T^*E^†EE^† = K^†F^*EE^†EE^† = K^†F^*EE^† = K^†T^*E^†,
    KC^*E = KL^* = T^*,

we compute

    H(C + Y + Z + W) = ((E^†)^* + K^†)H(T)((E^†)^* + K^†)^*
                       + [ I - EE^†   I - K^†K ] D [ I - EE^† ; I - K^†K ],

where

    D = [ I   C^* - (E^†)^*T^*(K^†)^* ; C - K^†TE^†   H(Q) ].

By Theorem 1.2 it follows that D is nonnegative definite, so H(C + Y + Z + W) ≥ 0. Hence, with such a choice of Y, Z, W, the matrix X_0 defined by (15) is a Re-nnd solution of (10). This proves the sufficiency part of the theorem.

Let X be an arbitrary Re-nnd solution of (10). It is evident that Y = AXA, Z = BXB and W = BXA are solutions of (14), and that (A + B)X(A + B) = C + Y + Z + W is Re-nnd. Now, using Theorem 2.2, we get that X has the representation (12).

Let us mention that, if we apply Theorem 2.3 to the equation

    AX = C,   (16)

we get, as a corollary, Theorem 2.1 from [11]. Note that if the equation AX = C is consistent, then X is a solution of it if and only if A^*AX = A^*C. By Theorem 2.3, there exists a Re-nnd solution of the equation AX = C if and only if

    T = (A^*A + I)^{-1}A^*C(A^*A + I)^{-1}A^*A

is Re-nnd; note that in this case A^*A + I is an invertible matrix. Let us prove that T is Re-nnd if and only if CA^* is Re-nnd. By

    (A^*A + I)^{-1}A^*A = A^*A(A^*A + I)^{-1},

we have that

    T = (A^*A + I)^{-1}A^*(CA^*)((A^*A + I)^{-1}A^*)^*,

i.e.

    H(T) = ((A^*A + I)^{-1}A^*)H(CA^*)((A^*A + I)^{-1}A^*)^*.

From the last equality, H(CA^*) ≥ 0 implies H(T) ≥ 0. Now, suppose that H(T) ≥ 0. Because of the consistency of the equation AX = C, it follows that AA^†C = C, which implies that

    (A^†)^*(A^*A + I)T((A^†)^*(A^*A + I))^* = (A^†)^*A^*CA^*AA^† = AA^†CA^* = CA^*,

i.e.

    H(CA^*) = ((A^†)^*(A^*A + I))H(T)((A^†)^*(A^*A + I))^* ≥ 0.
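For the special case (16) the criterion is easy to test numerically; the sketch below (illustration only, with test data of our own construction) generates a consistent equation AX = C admitting a Re-nnd solution and confirms that both T and CA^* are Re-nnd:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 3, 5

def herm(X):
    return (X + X.conj().T) / 2

def min_herm_eig(X):
    return np.min(np.linalg.eigvalsh(herm(X)))

A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
N = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X_re = M @ M.conj().T + (N - N.conj().T)     # Re-nnd
C = A @ X_re                                 # consistent AX = C with a Re-nnd solution

G = A.conj().T @ A + np.eye(m)               # A^*A + I is invertible
Gi = np.linalg.inv(G)
T = Gi @ A.conj().T @ C @ Gi @ A.conj().T @ A

print(min_herm_eig(T) >= -1e-10)              # True: T is Re-nnd
print(min_herm_eig(C @ A.conj().T) >= -1e-10) # True: CA^* is Re-nnd
```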

References

[1] A. Albert, Conditions for positive and nonnegative definiteness in terms of pseudoinverses, SIAM J. Appl. Math. 17(2), 434-440 (1969).
[2] W.N. Anderson, R.J. Duffin, Series and parallel addition of matrices, J. Math. Anal. Appl. 26, 576-594 (1969).
[3] A. Dajić, J.J. Koliha, Equations ax = c and xb = d in rings with applications to Hilbert space operators, preprint.
[4] J.K. Baksalary, Nonnegative definite and positive definite solutions to the matrix equation AXA^* = B, Linear and Multilinear Algebra 16, 133-139 (1984).
[5] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, New York (1974).
[6] S.R. Caradus, Generalized Inverses and Operator Theory, Queen's Papers in Pure and Applied Mathematics, Queen's University, Kingston, Ontario (1978).
[7] D. Cvetković-Ilić, The reflexive solutions of the matrix equation AXB = C, Comput. Math. Appl. 51(6/7), 897-902 (2006).
[8] D. Cvetković-Ilić, D.S. Djordjević, V. Rakočević, Schur complements in C^*-algebras, Math. Nachr. 278, 1-7 (2005).
[9] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131, 1-7 (1990).

[10] H. Dai, P. Lancaster, Linear matrix equations from an inverse problem of vibration theory, Linear Algebra Appl. 246, 31-47 (1996).
[11] J. Gross, Explicit solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 289, 131-134 (1999).
[12] J. Gross, Nonnegative definite and positive definite solutions to the matrix equation AXA^* = B - revisited, Linear Algebra Appl. 321, 123-129 (2000).
[13] P.A. Fillmore, J.P. Williams, On operator ranges, Advances in Math. 7, 254-281 (1971).
[14] R. Harte, M. Mbekhta, On generalized inverses in C^*-algebras, Studia Math. 103, 71-77 (1992).
[15] C.G. Khatri, S.K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations, SIAM J. Appl. Math. 31, 579-585 (1976).
[16] J.J. Koliha, The Drazin and Moore-Penrose inverse in C^*-algebras, Mathematical Proceedings of the Royal Irish Academy 99A, 17-27 (1999).
[17] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51, 406-413 (1955).
[18] Z.Y. Peng, An iterative method for the least squares symmetric solution of the linear matrix equation AXB = C, Appl. Math. Comput. 170(1), 711-723 (2005).
[19] Z.Y. Peng, X.Y. Hu, The reflexive and anti-reflexive solutions of the matrix equation AX = B, Linear Algebra Appl. 375, 147-155 (2003).
[20] Y.X. Peng, X.Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160, 763-777 (2005).
[21] Q. Wang, C. Yang, The Re-nonnegative definite solutions to the matrix equation AXB = C, Comment. Math. Univ. Carolinae 39(1), 7-13 (1998).
[22] L. Wu, The Re-positive definite solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 174, 145-151 (1992).

[23] L. Wu, B. Cain, The Re-nonnegative definite solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 236, 137-146 (1996).
[24] X. Zhang, M.Y. Cheng, The rank-constrained Hermitian nonnegative definite and positive definite solutions to the matrix equation AXA^* = B, Linear Algebra Appl. 370, 163-174 (2003).
[25] X. Zhang, Hermitian nonnegative definite and positive definite solutions of the matrix equation AXB = C, Appl. Math. E-Notes 4, 40-47 (2004).

Address: Dragana S. Cvetković-Ilić, Faculty of Sciences and Mathematics, Department of Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
E-mail: gagamaka@ptt.yu