Diagonal and Monomial Solutions of the Matrix Equation AXB = C

Iranian Journal of Mathematical Sciences and Informatics
Vol. 9, No. 1 (2014), pp. 31-42

Massoud Aman
Department of Mathematics, Faculty of Sciences, Birjand University, Birjand, Iran
E-mail: mamann@birjand.ac.ir

Abstract. In this article, we consider the matrix equation $AXB = C$, where $A$, $B$, $C$ are given matrices, and give new necessary and sufficient conditions for the existence of diagonal solutions and monomial solutions of this equation. We also present a general form of such solutions. Moreover, we consider the least squares problem $\min_X \|C - AXB\|_F$ where $X$ is a diagonal or monomial matrix. Explicit expressions for the optimal solution and the minimum norm solution are both provided.

Keywords: Matrix equation, Diagonal matrix, Monomial matrix, Least squares problem.

2000 Mathematics subject classification: 15A24, 15A06.

Received 2 April 2012; Accepted 28 June 2013
© 2014 Academic Center for Education, Culture and Research TMU

1. Introduction

Let $\mathbb{R}^{m \times n}$ be the set of all $m \times n$ real matrices. For $A \in \mathbb{R}^{m \times n}$, let $A^T$, $A^-$, $A^-_L$, $A^+$ and $\|A\|_F$ denote the transpose, a generalized inverse (g-inverse), a least squares g-inverse, the Moore-Penrose g-inverse and the Frobenius norm of $A$, respectively. We denote by $I_n$ and $O_{m \times n}$ the $n \times n$ identity matrix and the $m \times n$ zero matrix, respectively. $A \otimes B$ and $\operatorname{vec}(A)$ stand for the Kronecker product of the matrices $A$ and $B$ and the vec of the matrix $A$, respectively (see [14], [1] or [8]). For $v \in \mathbb{R}^n$, $\|v\|_2$ represents the Euclidean norm of $v$.
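The computational backbone of everything that follows is the identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$ (see [14]). As a quick illustration (not part of the paper), here is a NumPy check; note that $\operatorname{vec}$ stacks columns, which is order="F" in NumPy:

```python
import numpy as np

# Check vec(A X B) = (B^T kron A) vec(X), with column-major vec.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 5))
lhs = (A @ X @ B).flatten(order="F")            # vec(AXB)
rhs = np.kron(B.T, A) @ X.flatten(order="F")    # (B^T (x) A) vec(X)
assert np.allclose(lhs, rhs)
```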

Several authors have studied the general solution of the matrix equation

$$AXB = C \qquad (1.1)$$

where $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ are known matrices. For instance, Dai [4], Magnus [10], Khatri and Mitra [6], Zhang [18] and Cvetković-Ilić [3] have derived the general symmetric, L-structured, Hermitian and nonnegative definite, Hermitian nonnegative definite and positive definite, and reflexive solutions of the matrix equation (1.1), respectively. The Re-nonnegative definite solutions of equation (1.1) were investigated by Wang and Yang [17] and Cvetković-Ilić [2]. Hou et al. [7] gave iterative methods for computing the least squares symmetric solution of equation (1.1), presenting an algorithm for the minimum Frobenius norm residual problem $\min \|AXB - C\|_F$ with unknown symmetric matrix $X$. Tian [16] gave the maximal and minimal ranks of solutions of equation (1.1).

Matrix equations play an important role in applied mathematics and the engineering sciences. For example, the matrix equation (1.1) arises in the multistatic antenna array processing problem in scattering theory (see [12], [15] and [9]); there one obtains a matrix equation of the form $AXB = C$ whose diagonal and monomial solutions are the feasible solutions of the problem. Lev-Ari [9] computes a solution of the multistatic antenna array processing problem using Kronecker, Khatri-Rao and Schur-Hadamard matrix products. Matrix equations also appear naturally in solving differential equations and controllable linear time-invariant systems (see [5] and [11]). When approximating solutions of matrix equations with unknown matrix $X \in \mathbb{R}^{m \times n}$, one can choose $N = n(m-1)$ or $M = m(n-1)$ of the unknowns arbitrarily and convert the matrix equation into a new one whose diagonal or monomial solutions are feasible [5].

In this article we use the singular value decomposition (SVD) to establish necessary and sufficient conditions for the existence of diagonal solutions and monomial solutions of the matrix equation (1.1). We also derive a representation of the general diagonal solutions and monomial solutions of this matrix equation. Moreover, we consider the least squares problem $\min_X \|C - AXB\|_F$ subject to the constraint that $X$ is a diagonal or monomial matrix, and compute the general solution and the unique solution of minimum 2-norm of this problem.

We now state some well-known results which are used in the next section.

Lemma 1.1. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ be given. Then:

(i) (see [1, p. 53] and [14]) the linear system $Ax = b$ is consistent if and only if $AA^-b = b$ for some g-inverse $A^-$,

in which case the general solution is

$$x = A^-b + (I_n - A^-A)h$$

for arbitrary $h \in \mathbb{R}^n$.

(ii) (see [1, p. 105] and [14, Section 6.5]) the general solution of the least squares problem $\min_{x \in \mathbb{R}^n} \|b - Ax\|_2$ is of the form

$$x = A^-_L b + (I_n - A^-A)y$$

where $A^-_L$ is a least squares g-inverse and $A^-$ is a g-inverse of $A$, and $y \in \mathbb{R}^n$ is arbitrary.

(iii) (see [8, p. 66]) the unique minimum 2-norm solution of the least squares problem $\min_{x \in \mathbb{R}^n} \|b - Ax\|_2$, that is, the minimizer of $\|x\|_2$ over all least squares solutions, is $x = A^+b$.

2. The diagonal solutions

In this section, we present the general diagonal solutions of the matrix equation (1.1). Suppose the diagonal matrix $D = \operatorname{diag}(d_1, d_2, \ldots, d_n)$ is a solution of equation (1.1), so that $ADB = C$. Using the vec notation and the Kronecker product, this relation can be written in the equivalent form

$$\sum_{i=1}^{n} (b_i^T \otimes a_i)\, d_i = \operatorname{vec}(C)$$

where $b_i$ and $a_i$ are the $i$th row of $B$ and the $i$th column of $A$, respectively. This shows that $D = \operatorname{diag}(d_1, d_2, \ldots, d_n)$ is a solution of equation (1.1) if and only if the vector $d = (d_1, d_2, \ldots, d_n)^T$ satisfies the linear system

$$Vd = \operatorname{vec}(C) \qquad (2.1)$$

where

$$V = \big[\, b_1^T \otimes a_1,\; b_2^T \otimes a_2,\; \ldots,\; b_n^T \otimes a_n \,\big]. \qquad (2.2)$$

Therefore, the matrix equation (1.1) has a diagonal solution if and only if the linear system (2.1) is consistent. By Lemma 1.1(i), (2.1) is consistent if and only if $VV^-\operatorname{vec}(C) = \operatorname{vec}(C)$. The following theorem summarizes these observations.
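The reduction (2.1)-(2.2) is directly computable. The following NumPy sketch (illustrative, not from the paper; the function name and tolerance are arbitrary choices) builds $V$ column by column, uses the Moore-Penrose inverse as the g-inverse in Lemma 1.1(i) to test consistency, and returns the general diagonal solution:

```python
import numpy as np

def diagonal_solution(A, B, C, h=None, tol=1e-10):
    """Solve A @ diag(d) @ B = C for d via the vectorized system (2.1)-(2.2).

    Column i of V is b_i^T (x) a_i, with b_i the i-th row of B and a_i the
    i-th column of A, so that V @ d = vec(C) (column-major vec)."""
    m, n = A.shape
    V = np.column_stack([np.kron(B[i, :], A[:, i]) for i in range(n)])
    c = C.flatten(order="F")          # vec(C)
    Vp = np.linalg.pinv(V)            # Moore-Penrose inverse is one g-inverse
    if np.linalg.norm(V @ Vp @ c - c) > tol * max(1.0, np.linalg.norm(c)):
        return None                   # (2.1) inconsistent: no diagonal solution
    if h is None:
        h = np.zeros(n)
    return Vp @ c + (np.eye(n) - Vp @ V) @ h   # general solution, Lemma 1.1(i)

# Example: plant a diagonal solution and recover one.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)); B = rng.standard_normal((3, 5))
C = A @ np.diag([1.0, -2.0, 0.5]) @ B
d = diagonal_solution(A, B, C)
assert np.allclose(A @ np.diag(d) @ B, C)
```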

Theorem 2.1. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ be given. Then equation (1.1) has a diagonal solution if and only if

$$VV^-\operatorname{vec}(C) = \operatorname{vec}(C) \qquad (2.3)$$

where $V$ is defined by (2.2), in which case the general diagonal solution is $D = \operatorname{diag}(d^T)$ where the vector $d$ satisfies

$$d = V^-\operatorname{vec}(C) + (I_n - V^-V)h \qquad (2.4)$$

for arbitrary $h \in \mathbb{R}^n$.

In the following, we describe the matrix $V$ and the solution of (2.1) in more detail. First, we give the following lemma.

Lemma 2.2. Given matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$, the matrix $V$ in (2.2) satisfies

$$V = (B^T \otimes A)PE \qquad (2.5)$$

where $P$ is a symmetric permutation matrix of size $n^2 \times n^2$ and $E$ is the $n^2 \times n$ matrix

$$E = \begin{bmatrix} I_n \\ O \end{bmatrix} \qquad (2.6)$$

where $O$ is the $(n^2 - n) \times n$ zero matrix.

Proof. The $i$th column of $V$ satisfies $b_i^T \otimes a_i = B^Te_i \otimes Ae_i = (B^T \otimes A)(e_i \otimes e_i)$, where $e_i$ is the $i$th column of the $n \times n$ identity matrix. Substituting this in (2.2) yields $V = (B^T \otimes A)F$, where $F = [\, e_1 \otimes e_1,\; e_2 \otimes e_2,\; \ldots,\; e_n \otimes e_n \,]$. The matrix $F$ can be transformed into the matrix $E$ in (2.6) by reordering its rows. This is achieved by premultiplying $F$ by the permutation matrix

$$P = \prod_{i=2}^{n} G_{i,\,(i-1)n+i}, \qquad (2.7)$$

where $G_{i,\,(i-1)n+i}$ is an interchange matrix (the identity matrix with its $i$th row and $((i-1)n+i)$th row interchanged). Thus we have $E = PF$. The matrix $P$ is orthogonal and also symmetric, since there is no overlap between the interchange matrices (see [13, p. 73]). Therefore we obtain $F = PE$, which completes the proof. □
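Lemma 2.2 is easy to verify numerically. A small sketch (illustrative; `selection_matrices` is a hypothetical helper name) that constructs $F$, the interchange product $P$ of (2.7), and $E$ of (2.6), then checks $E = PF$, $P = P^T$, and the factorization (2.5):

```python
import numpy as np

def selection_matrices(n):
    """Build F = [e_1(x)e_1, ..., e_n(x)e_n], the interchange product P of
    (2.7), and E = [I_n; O] of (2.6), so that E = P @ F and F = P @ E."""
    I = np.eye(n)
    F = np.column_stack([np.kron(I[:, i], I[:, i]) for i in range(n)])
    P = np.eye(n * n)
    for i in range(1, n):            # i = 2, ..., n in the paper's 1-based terms
        r, s = i, i * n + i          # 0-based rows i and (i-1)n+i
        P[[r, s]] = P[[s, r]]        # apply the interchange G_{i,(i-1)n+i}
    E = np.vstack([np.eye(n), np.zeros((n * n - n, n))])
    return F, P, E

n = 4
F, P, E = selection_matrices(n)
assert np.allclose(E, P @ F) and np.allclose(P, P.T)   # E = PF, P symmetric

# Lemma 2.2: (B^T (x) A) P E equals the column-wise construction (2.2) of V.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, n)); B = rng.standard_normal((n, 3))
V = np.column_stack([np.kron(B[i, :], A[:, i]) for i in range(n)])
assert np.allclose(V, np.kron(B.T, A) @ P @ E)
```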

Lemma 2.3. Let $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$ be given. Let

$$A = U_A^T \begin{bmatrix} \Sigma_A & O \\ O & O \end{bmatrix} W_A \quad\text{and}\quad B = U_B^T \begin{bmatrix} \Sigma_B & O \\ O & O \end{bmatrix} W_B \qquad (2.8)$$

be singular value decompositions of $A$ and $B$, respectively, where $\Sigma_A$ and $\Sigma_B$ are diagonal and nonsingular. Then there are two symmetric permutation matrices $P_1$ and $P_2$ such that

$$(W_B^T \otimes U_A^T)\, P_1 \begin{bmatrix} \Sigma_A \otimes \Sigma_B & O \\ O & O \end{bmatrix} P_2\, (U_B \otimes W_A) \qquad (2.9)$$

is a singular value decomposition of $B^T \otimes A$.

In the above lemma, the two permutation matrices $P_1$ and $P_2$ permute the rows and columns of the diagonal matrix

$$\begin{bmatrix} \Sigma_A & O \\ O & O \end{bmatrix} \otimes \begin{bmatrix} \Sigma_B & O \\ O & O \end{bmatrix}$$

so as to convert it to the matrix

$$\begin{bmatrix} \Sigma_A \otimes \Sigma_B & O \\ O & O \end{bmatrix}.$$

These permutations have no overlapping interchanges; therefore $P_1$ and $P_2$ are symmetric. This shows that the matrices $P_1$ and $P_2$ of the above lemma can be computed easily.

Theorem 2.4. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ be given. Let $\operatorname{rank}(A) = r_1$ and $\operatorname{rank}(B) = r_2$, and let the matrices $U_A$, $U_B$, $W_A$, $W_B$, $\Sigma_A$, $\Sigma_B$, $P_1$ and $P_2$ satisfy (2.8) and (2.9). Then equation (1.1) has a diagonal solution if and only if

$$\big(\tilde{U}_1^T \Sigma \tilde{W}_{11} \tilde{W}_{11}^- \Sigma^{-1} \tilde{U}_1 + \tilde{U}_1^T \Sigma \tilde{W}_{11} H \tilde{U}_2\big)\operatorname{vec}(C) = \operatorname{vec}(C) \qquad (2.10)$$

for arbitrary matrix $H \in \mathbb{R}^{n \times (mp - r_1r_2)}$, in which case the general diagonal solution is $D = \operatorname{diag}(d^T)$ where the vector $d$ satisfies

$$d = \big(\tilde{W}_{11}^- \Sigma^{-1} \tilde{U}_1 + H\tilde{U}_2\big)\operatorname{vec}(C) + (I_n - \tilde{W}_{11}^- \tilde{W}_{11})h \qquad (2.11)$$

for arbitrary $h \in \mathbb{R}^n$, where $\Sigma = \Sigma_A \otimes \Sigma_B$, $\tilde{U}_1$ is the $r_1r_2 \times mp$ matrix consisting of the first $r_1r_2$ rows of the matrix $P_1(W_B \otimes U_A)$, $\tilde{U}_2$ is the $(mp - r_1r_2) \times mp$ matrix consisting of the last $mp - r_1r_2$ rows of the matrix $P_1(W_B \otimes U_A)$, and $\tilde{W}_{11}$ is the $r_1r_2 \times n$ matrix whose $(i,j)$th entry is equal to the $(i, (j-1)n+j)$th entry of the matrix $P_2(U_B \otimes W_A)$.

Proof. Denote by $\tilde{U}$ the matrix $P_1(W_B \otimes U_A)$ and by $\tilde{W}$ the matrix $P_2(U_B \otimes W_A)$. Substituting (2.9) into (2.5), we obtain

$$V = \tilde{U}^T \begin{bmatrix} \Sigma & O \\ O & O \end{bmatrix} \tilde{W} P \begin{bmatrix} I_n \\ O \end{bmatrix}. \qquad (2.12)$$

We note that postmultiplying $\tilde{W}$ by the permutation matrix $P$ of (2.7) interchanges the columns $i$ and $(i-1)n+i$ of $\tilde{W}$ for $i = 2, 3, \ldots, n$. Therefore the product

$$\tilde{W} P \begin{bmatrix} I_n \\ O \end{bmatrix}$$

is equal to the $n^2 \times n$ matrix $\tilde{W}_1$ which consists of the $n$ columns $\tilde{w}_{(i-1)n+i}$, $i = 1, 2, \ldots, n$, of $\tilde{W}$. Now partition $\tilde{W}_1$ as

$$\begin{bmatrix} \tilde{W}_{11} \\ \tilde{W}_{21} \end{bmatrix} \qquad (2.13)$$

where $\tilde{W}_{11}$ is of size $r_1r_2 \times n$; then (2.12) becomes

$$V = \tilde{U}^T \begin{bmatrix} \Sigma & O_{r_1r_2 \times (n^2 - r_1r_2)} \\ O_{(mp - r_1r_2) \times r_1r_2} & O_{(mp - r_1r_2) \times (n^2 - r_1r_2)} \end{bmatrix} \begin{bmatrix} \tilde{W}_{11} \\ \tilde{W}_{21} \end{bmatrix} = \tilde{U}^T \begin{bmatrix} \Sigma \tilde{W}_{11} \\ O_{(mp - r_1r_2) \times n} \end{bmatrix} = \tilde{U}^T \begin{bmatrix} \Sigma & O_{r_1r_2 \times (mp - r_1r_2)} \\ O_{(mp - r_1r_2) \times r_1r_2} & I_{mp - r_1r_2} \end{bmatrix} \begin{bmatrix} \tilde{W}_{11} \\ O_{(mp - r_1r_2) \times n} \end{bmatrix}. \qquad (2.14)$$

We now compute a g-inverse of $V$ in terms of a g-inverse of $\tilde{W}_{11}$. To this end, we know that

$$\begin{bmatrix} \tilde{W}_{11} \\ O \end{bmatrix}^- \begin{bmatrix} \Sigma^{-1} & O \\ O & I \end{bmatrix} \tilde{U}$$

is a g-inverse of $V$ (see [1, p. 43]). On the other hand, it is easy to verify that

$$\begin{bmatrix} \tilde{W}_{11} \\ O \end{bmatrix}^- = \big[\, \tilde{W}_{11}^-,\; H \,\big]$$

for arbitrary matrix $H \in \mathbb{R}^{n \times (mp - r_1r_2)}$. Then we have

$$V^- = \big[\, \tilde{W}_{11}^-,\; H \,\big] \begin{bmatrix} \Sigma^{-1} & O \\ O & I \end{bmatrix} \tilde{U}. \qquad (2.15)$$

Substituting (2.14) and (2.15) into (2.3) gives

$$\tilde{U}^T \begin{bmatrix} \Sigma \tilde{W}_{11} \tilde{W}_{11}^- \Sigma^{-1} & \Sigma \tilde{W}_{11} H \\ O & O \end{bmatrix} \tilde{U} \operatorname{vec}(C) = \operatorname{vec}(C),$$

and substituting (2.14) and (2.15) into (2.4) gives

$$d = \big[\, \tilde{W}_{11}^- \Sigma^{-1},\; H \,\big] \tilde{U} \operatorname{vec}(C) + (I_n - \tilde{W}_{11}^- \tilde{W}_{11})h.$$

By partitioning the matrix $\tilde{U}$ as $\begin{bmatrix} \tilde{U}_1 \\ \tilde{U}_2 \end{bmatrix}$, where $\tilde{U}_1$ consists of the first $r_1r_2$ rows of $\tilde{U}$, and substituting this into the two relations above, we obtain (2.10) and (2.11). The proof is complete. □

Proposition 2.5. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ be given. If $D = \operatorname{diag}(d_1, d_2, \ldots, d_n)$ is a diagonal solution of (1.1), then the number of nonzero diagonal entries of $D$ is at least $r_c$, where $r_c$ is the rank of the matrix $C$.

Proof. The assertion is an immediate consequence of the fact that $\operatorname{rank}(ADB) \le \min\{\operatorname{rank}(A), \operatorname{rank}(D), \operatorname{rank}(B)\}$, since $\operatorname{rank}(D)$ equals the number of nonzero diagonal entries of $D$. □

Now, we consider the more general case where the linear system (2.1) may be inconsistent. In this case, we solve the least squares problem

$$\min_X \|C - AXB\|_F \quad \text{subject to: } X \in \mathbb{R}^{n \times n} \text{ and } X \text{ is diagonal}, \qquad (2.16)$$

where $A$, $B$ and $C$ are given matrices, or equivalently, the least squares problem

$$d_o = \arg\min_{d \in \mathbb{R}^n} \|\operatorname{vec}(C) - Vd\|_2 \qquad (2.17)$$

where $V$ is defined by (2.2). It is evident that $d_o$ is a solution of problem (2.17) if and only if $X_o = \operatorname{diag}(d_o^T)$ is a solution of problem (2.16). To solve the above least squares problems, we need the following algorithm.

Algorithm 1.
1. Input $\tilde{W}_{11} \in \mathbb{R}^{r \times n}$ and a nonsingular diagonal matrix $\Sigma$ of size $r \times r$.
2. Calculate the integer $r_w = \operatorname{rank}(\tilde{W}_{11})$.
3. Find permutation matrices $Q_1 \in \mathbb{R}^{r \times r}$ and $Q_2 \in \mathbb{R}^{n \times n}$ satisfying

$$\tilde{W}_{11} = Q_1 \begin{bmatrix} \hat{W}_1 & \hat{W}_2 \\ \hat{W}_3 & \hat{W}_4 \end{bmatrix} Q_2$$

where the $r_w \times r_w$ matrix $\hat{W}_1$ is of full rank.
4. Find the diagonal matrix $\tilde{\Sigma}$ satisfying $\Sigma Q_1 = Q_1 \tilde{\Sigma}$.

5. Partition $\tilde{\Sigma}$ as

$$\tilde{\Sigma} = \begin{bmatrix} \tilde{\Sigma}_1 & O \\ O & \tilde{\Sigma}_2 \end{bmatrix}$$

where the diagonal matrices $\tilde{\Sigma}_1$ and $\tilde{\Sigma}_2$ are of sizes $r_w \times r_w$ and $(r - r_w) \times (r - r_w)$, respectively.
6. Solve the matrix equation $Y(\tilde{\Sigma}_1 \hat{W}_1) = \tilde{\Sigma}_2 \hat{W}_3$.
7. Calculate the matrix $L_w$ according to

$$L_w = Q_2^T \begin{bmatrix} L_1 & L_2 \\ O & O \end{bmatrix} Q_1^T$$

where $L_1 = \hat{W}_1^{-1} \tilde{\Sigma}_1^{-1} (I_{r_w} + Y^TY)^{-1}$ and $L_2 = L_1 Y^T$.
8. Solve the matrix equation $\hat{W}_1 Z = \hat{W}_2$.
9. Calculate the matrix $M_w$ according to

$$M_w = Q_2^T \begin{bmatrix} I_{r_w} \\ Z^T \end{bmatrix} (I_{r_w} + ZZ^T)^{-1} L_1 \big[\, I_{r_w},\; Y^T \,\big] Q_1^T.$$

The matrices $(I_{r_w} + Y^TY)$ and $(I_{r_w} + ZZ^T)$ in the above algorithm are nonsingular, since these matrices are symmetric positive definite.

Lemma 2.6. Let $\tilde{W}_{11} \in \mathbb{R}^{r \times n}$ and a nonsingular diagonal matrix $\Sigma \in \mathbb{R}^{r \times r}$ be given. Let $L_w$ and $M_w$ be the matrices obtained from Algorithm 1. Then $L_w$ is a least squares g-inverse, and $M_w$ is the Moore-Penrose g-inverse, of $\Sigma \tilde{W}_{11}$.

Proof. By straightforward computations, it can be shown that $L_w$ satisfies

$$\Sigma \tilde{W}_{11} L_w \Sigma \tilde{W}_{11} = \Sigma \tilde{W}_{11} \quad\text{and}\quad (\Sigma \tilde{W}_{11} L_w)^T = \Sigma \tilde{W}_{11} L_w,$$

and $M_w$ satisfies

$$\Sigma \tilde{W}_{11} M_w \Sigma \tilde{W}_{11} = \Sigma \tilde{W}_{11}, \quad (\Sigma \tilde{W}_{11} M_w)^T = \Sigma \tilde{W}_{11} M_w, \quad M_w \Sigma \tilde{W}_{11} M_w = M_w \quad\text{and}\quad (M_w \Sigma \tilde{W}_{11})^T = M_w \Sigma \tilde{W}_{11}.$$

Therefore, $L_w$ is a least squares g-inverse of $\Sigma \tilde{W}_{11}$ and $(\Sigma \tilde{W}_{11})^+ = M_w$. □

Theorem 2.7. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ be given, and let the matrices $U_A$, $U_B$, $W_A$, $W_B$, $\Sigma_A$, $\Sigma_B$, $P_1$ and $P_2$ satisfy (2.8) and (2.9). Let $r_1$, $r_2$, $\Sigma$, $\tilde{U}$, $\tilde{W}$, $\tilde{U}_1$, $\tilde{U}_2$ and $\tilde{W}_{11}$ be as in Theorem 2.4. Then the general solution of the least squares problem (2.17) is of the form

$$d_o = L_w \tilde{U}_1 \operatorname{vec}(C) + (I - \tilde{W}_{11}^- \tilde{W}_{11})h,$$

for arbitrary $h \in \mathbb{R}^n$, where $L_w$ is a least squares g-inverse of $\Sigma \tilde{W}_{11}$ obtained from Algorithm 1.
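In exact arithmetic, Algorithm 1 produces $L_w$ and $M_w$ from the full-rank block partition of step 3. In floating point one would typically obtain the Moore-Penrose inverse from an SVD instead; the sketch below uses NumPy's `pinv` as such a stand-in (it is not an implementation of Algorithm 1) and checks the defining conditions of Lemma 2.6:

```python
import numpy as np

# Numerical check of Lemma 2.6's defining conditions, with pinv playing the
# role of M_w. (Algorithm 1 constructs the same inverse exactly from a
# full-rank block partition; this is only a floating-point stand-in.)
rng = np.random.default_rng(3)
r, n, rw = 5, 7, 3
W11 = rng.standard_normal((r, rw)) @ rng.standard_normal((rw, n))  # rank r_w
Sigma = np.diag(rng.uniform(1.0, 2.0, size=r))                     # nonsingular
S = Sigma @ W11
Mw = np.linalg.pinv(S)          # Moore-Penrose g-inverse of Sigma @ W11
# Four Penrose conditions; the first two alone make Mw a least squares g-inverse.
assert np.allclose(S @ Mw @ S, S)
assert np.allclose((S @ Mw).T, S @ Mw)
assert np.allclose(Mw @ S @ Mw, Mw)
assert np.allclose((Mw @ S).T, Mw @ S)
```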

In addition, the unique solution of minimum 2-norm is

$$d_o^m = M_w \tilde{U}_1 \operatorname{vec}(C)$$

where $M_w$ is the Moore-Penrose g-inverse of $\Sigma \tilde{W}_{11}$ obtained from Algorithm 1.

Proof. From (2.14), the above lemma and the definition of a least squares g-inverse, it follows that

$$\big[\, L_w,\; H \,\big] \tilde{U} \qquad (2.18)$$

is a least squares g-inverse of $V$, where $H \in \mathbb{R}^{n \times (mp - r_1r_2)}$ is any solution of the matrix equation

$$\tilde{W}_{11} H = O. \qquad (2.19)$$

We know that the general solution of (2.19) is $H = (I - \tilde{W}_{11}^- \tilde{W}_{11})\tilde{H}$, where $\tilde{W}_{11}^-$ is an arbitrary g-inverse of $\tilde{W}_{11}$ and $\tilde{H}$ is an arbitrary matrix in $\mathbb{R}^{n \times (mp - r_1r_2)}$ [14, p. 215]. By substituting this into (2.18), a least squares g-inverse of $V$ is obtained as

$$L_w \tilde{U}_1 + (I - \tilde{W}_{11}^- \tilde{W}_{11})\tilde{H} \tilde{U}_2. \qquad (2.20)$$

Similarly, it can be verified that

$$V^+ = M_w \tilde{U}_1. \qquad (2.21)$$

It follows from (2.15), (2.20) and Lemma 1.1(ii) that the general least squares solution of (2.17) is

$$d_o = L_w \tilde{U}_1 \operatorname{vec}(C) + (I - \tilde{W}_{11}^- \tilde{W}_{11})(\tilde{H} \tilde{U}_2 \operatorname{vec}(C) + y) \qquad (2.22)$$

where $\tilde{H} \in \mathbb{R}^{n \times (mp - r_1r_2)}$ and $y \in \mathbb{R}^n$ are arbitrary. It is easy to show that for every $h \in \mathbb{R}^n$ there are $\tilde{H} \in \mathbb{R}^{n \times (mp - r_1r_2)}$ and $y \in \mathbb{R}^n$ such that $h = \tilde{H} \tilde{U}_2 \operatorname{vec}(C) + y$. This proves the first part of the theorem. The second part of the theorem follows easily from (2.21) and Lemma 1.1(iii). □

3. The monomial solutions

In this section, we obtain the general monomial solutions of the matrix equation (1.1) by using the general diagonal solutions presented in the previous section. A square matrix is called monomial if each of its rows and columns contains at most one nonzero entry. It is easy to verify that a matrix $M$ is a monomial matrix if and only if it can be written as the product of a diagonal matrix $D$ and a permutation matrix $P$, i.e., $M = DP$.
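The factorization $M = DP$ can be computed by scanning the rows of $M$. A minimal sketch (assuming, for simplicity, exactly one nonzero entry per row and column, a slightly stronger condition than the "at most one" of the definition):

```python
import numpy as np

def monomial_factor(M, tol=1e-12):
    """Factor a monomial matrix as M = D @ P with D diagonal and P a
    permutation matrix; returns None if M is not monomial. Sketch under the
    convention that every row and column holds exactly one nonzero entry."""
    n = M.shape[0]
    D = np.zeros((n, n)); P = np.zeros((n, n))
    for i in range(n):
        nz = np.flatnonzero(np.abs(M[i]) > tol)
        if len(nz) != 1:
            return None               # row must hold exactly one nonzero entry
        j = nz[0]
        D[i, i] = M[i, j]             # the nonzero value goes into D
        P[i, j] = 1.0                 # its position goes into P
    if not np.allclose(P.sum(axis=0), 1.0):
        return None                   # some column held two nonzero entries
    return D, P

M = np.array([[0., 3., 0.], [0., 0., -1.], [2., 0., 0.]])
D, P = monomial_factor(M)
assert np.allclose(M, D @ P)
```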

If we denote by $\operatorname{monom}(m_{1,j_1}, m_{2,j_2}, \ldots, m_{n,j_n})$ the $n \times n$ monomial matrix whose entries are all zero except possibly the $(i, j_i)$th entries, $i = 1, 2, \ldots, n$, which equal $m_{i,j_i}$, then the above relation can be written as

$$\operatorname{monom}(m_{1,j_1}, m_{2,j_2}, \ldots, m_{n,j_n}) = \operatorname{diag}(m_{1,j_1}, m_{2,j_2}, \ldots, m_{n,j_n})\, P^T \qquad (3.1)$$

in which $P$ is the permutation matrix $[\, e_{j_1}, e_{j_2}, \ldots, e_{j_n} \,]$, where $e_{j_i}$ is the $j_i$th column of the $n \times n$ identity matrix.

Assume that the monomial matrix $\operatorname{monom}(m_{1,j_1}, \ldots, m_{n,j_n})$ is a solution of the matrix equation (1.1). By (3.1) we obtain

$$A D_m B_m = C \qquad (3.2)$$

where $B_m = [\, e_{j_1}, e_{j_2}, \ldots, e_{j_n} \,]^T B$ and $D_m = \operatorname{diag}(m_{1,j_1}, m_{2,j_2}, \ldots, m_{n,j_n})$. This shows that the monomial matrix $\operatorname{monom}(m_{1,j_1}, \ldots, m_{n,j_n})$ is a solution of (1.1) if and only if the diagonal matrix $D_m$ is a solution of (3.2). Therefore, by Lemma 2.2, (3.2) is equivalent to

$$(B_m^T \otimes A) P E\, d_m = \operatorname{vec}(C) \qquad (3.3)$$

where $P$ and $E$ are defined by (2.7) and (2.6), respectively, and $d_m$ is the column vector whose components are the diagonal entries of $D_m$ (i.e., $D_m = \operatorname{diag}(d_m^T)$). Replacing $B_m$ and $A$ by $[\, e_{j_1}, \ldots, e_{j_n} \,]^T B$ and $A I_n$, respectively, in (3.3) gives

$$(B^T \otimes A)\big([\, e_{j_1}, e_{j_2}, \ldots, e_{j_n} \,] \otimes I_n\big) P E\, d_m = \operatorname{vec}(C). \qquad (3.4)$$

Denote by $P_m$ the permutation matrix $([\, e_{j_1}, \ldots, e_{j_n} \,] \otimes I_n) P$ in (3.4). We now show that $P_m$ is not unique. Toward this end, we rewrite the relation (3.2) using the vec notation as follows:

$$\big[\, b_{j_1}^T \otimes a_1,\; b_{j_2}^T \otimes a_2,\; \ldots,\; b_{j_n}^T \otimes a_n \,\big] d_m = \operatorname{vec}(C) \qquad (3.5)$$

where $b_i$ and $a_i$ are the $i$th row of $B$ and the $i$th column of $A$, respectively. Then, similarly, we have

$$(B^T \otimes A)\big[\, e_{j_1} \otimes e_1,\; e_{j_2} \otimes e_2,\; \ldots,\; e_{j_n} \otimes e_n \,\big] d_m = \operatorname{vec}(C). \qquad (3.6)$$

To transform the matrix $[\, e_{j_1} \otimes e_1, \ldots, e_{j_n} \otimes e_n \,]$ into the matrix $E$ of (2.6), premultiply this matrix by the permutation matrix

$$\tilde{P}_m = \prod_{i=1}^{n} G_{i,\,(j_i-1)n+i}.$$

Thus, (3.6) becomes

$$(B^T \otimes A) \tilde{P}_m E\, d_m = \operatorname{vec}(C). \qquad (3.7)$$

Comparing this with (3.4) yields the desired result: the permutation matrix $P_m$ is not unique. However, the first $n$ columns of these permutation matrices are the same.
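For a fixed permutation $j_1, \ldots, j_n$, the system (3.5) is the diagonal problem of Section 2 in disguise, so it can be solved the same way. A NumPy sketch (illustrative names; 0-based indices) that recovers a monomial solution with a prescribed nonzero pattern:

```python
import numpy as np

def monomial_solution(A, B, C, j, tol=1e-10):
    """Given a pattern j (0-based: column j[i] holds row i's entry), solve
    A @ monom @ B = C via the diagonal system (3.5): V_m d_m = vec(C)."""
    m, n = A.shape
    # Column i of V_m is b_{j_i}^T (x) a_i, the reduction of (3.2) to (3.5).
    Vm = np.column_stack([np.kron(B[j[i], :], A[:, i]) for i in range(n)])
    c = C.flatten(order="F")
    Vp = np.linalg.pinv(Vm)
    if np.linalg.norm(Vm @ Vp @ c - c) > tol * max(1.0, np.linalg.norm(c)):
        return None                      # no monomial solution for this pattern
    d = Vp @ c                           # minimum-norm diagonal entries
    P = np.eye(n)[:, j]                  # P = [e_{j_1}, ..., e_{j_n}], as in (3.1)
    return np.diag(d) @ P.T              # monomial solution diag(d) P^T

# Example: plant a monomial solution and recover it.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3)); B = rng.standard_normal((3, 5))
j = [2, 0, 1]
X_true = np.diag([1.0, -1.0, 2.0]) @ np.eye(3)[:, j].T
C = A @ X_true @ B
X = monomial_solution(A, B, C, j)
assert X is not None and np.allclose(A @ X @ B, C)
```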

Now, by Theorem 2.1 and Lemma 2.3, and arguing as in Theorem 2.4, we obtain the following theorem.

Theorem 3.1. Suppose the hypotheses of Theorem 2.4 are satisfied. Then equation (1.1) has a monomial solution of the form (3.1) if and only if

$$\big(\tilde{U}_1^T \Sigma \tilde{W}_{m11} \tilde{W}_{m11}^- \Sigma^{-1} \tilde{U}_1 + \tilde{U}_1^T \Sigma \tilde{W}_{m11} H \tilde{U}_2\big)\operatorname{vec}(C) = \operatorname{vec}(C) \qquad (3.8)$$

for arbitrary matrix $H \in \mathbb{R}^{n \times (mp - r_1r_2)}$, in which case the general monomial solution of the form (3.1) is $\operatorname{diag}(d_m^T)P^T$ where the vector $d_m$ satisfies

$$d_m = \big(\tilde{W}_{m11}^- \Sigma^{-1} \tilde{U}_1 + H \tilde{U}_2\big)\operatorname{vec}(C) + (I_n - \tilde{W}_{m11}^- \tilde{W}_{m11})h \qquad (3.9)$$

for arbitrary $h \in \mathbb{R}^n$, where $\tilde{W}_{m11}$ is the $r_1r_2 \times n$ matrix whose $(i,t)$th entry is equal to the $(i, (j_t-1)n+t)$th entry of the matrix $P_2(U_B \otimes W_A)$.

The above theorem yields an algorithm for computing the general monomial solutions of the matrix equation (1.1). The algorithm depends on a permutation $\{j_1, j_2, \ldots, j_n\}$ of the set $\{1, 2, \ldots, n\}$; hence we can compute all of the monomial solutions of (1.1) by running this algorithm for all permutations of the set $\{1, 2, \ldots, n\}$.

Now, we consider the least squares problem

$$\min_X \|C - AXB\|_F \quad \text{subject to: } X \in \mathbb{R}^{n \times n} \text{ and } X = \operatorname{monom}(m_{1,j_1}, m_{2,j_2}, \ldots, m_{n,j_n}), \qquad (3.10)$$

where $A$, $B$ and $C$ are given matrices, or equivalently, the least squares problem

$$d_{mo} = \arg\min_{d_m \in \mathbb{R}^n} \|\operatorname{vec}(C) - V_m d_m\|_2 \qquad (3.11)$$

where $V_m = [\, b_{j_1}^T \otimes a_1,\; b_{j_2}^T \otimes a_2,\; \ldots,\; b_{j_n}^T \otimes a_n \,]$. It is evident that $d_{mo}$ is a solution of problem (3.11) if and only if $X_o = \operatorname{diag}(d_{mo}^T)P^T$ is a solution of problem (3.10). By an argument similar to that of Theorem 2.7, we can easily derive the following theorem.

Theorem 3.2. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ be given, and let the matrices $U_A$, $U_B$, $W_A$, $W_B$, $\Sigma_A$, $\Sigma_B$, $P_1$ and $P_2$ satisfy (2.8) and (2.9). Let $r_1$, $r_2$, $\Sigma$, $\tilde{U}$, $\tilde{W}$, $\tilde{U}_1$ and $\tilde{U}_2$ be as in Theorem 2.4, and let $\tilde{W}_{m11}$ be as in Theorem 3.1. Then the general solution of the least squares problem (3.11) is of the form

$$d_{mo} = L_{wm} \tilde{U}_1 \operatorname{vec}(C) + (I - \tilde{W}_{m11}^- \tilde{W}_{m11})h,$$

for arbitrary $h \in \mathbb{R}^n$, where $L_{wm}$ is a least squares g-inverse of $\Sigma \tilde{W}_{m11}$ obtained from Algorithm 1. In addition, the unique solution of minimum 2-norm is

$$d_{mo}^m = M_{wm} \tilde{U}_1 \operatorname{vec}(C)$$

where $M_{wm}$ is the Moore-Penrose g-inverse of $\Sigma \tilde{W}_{m11}$ obtained from Algorithm 1.
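Since Theorem 3.2 applies to one fixed permutation at a time, finding the best monomial least squares solution overall means comparing residuals across patterns. A brute-force sketch (feasible only for small $n$, as there are $n!$ patterns; it solves (3.11) with the pseudoinverse rather than with Algorithm 1):

```python
import numpy as np
from itertools import permutations

def best_monomial_lsq(A, B, C):
    """For each pattern j, solve the least squares problem (3.11) via the
    pseudoinverse (the minimum-norm solution of Lemma 1.1(iii)) and keep the
    pattern with the smallest residual. Exhaustive over n! patterns."""
    m, n = A.shape
    c = C.flatten(order="F")
    best = (np.inf, None)
    for jt in permutations(range(n)):
        j = list(jt)
        Vm = np.column_stack([np.kron(B[j[i], :], A[:, i]) for i in range(n)])
        d = np.linalg.pinv(Vm) @ c            # minimum-norm least squares d_m
        res = np.linalg.norm(c - Vm @ d)
        if res < best[0]:
            best = (res, np.diag(d) @ np.eye(n)[:, j].T)
    return best                               # (Frobenius residual, monomial X)

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3)); B = rng.standard_normal((3, 5))
C = rng.standard_normal((4, 5))               # generally inconsistent
res, X = best_monomial_lsq(A, B, C)
assert np.isclose(res, np.linalg.norm(C - A @ X @ B))
```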

Acknowledgment. The author sincerely thanks the referees for their careful reading of this paper and their valuable suggestions and comments.

References

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Second edition, Springer-Verlag, New York, 2003.
[2] D. S. Cvetković-Ilić, Re-nnd solutions of the matrix equation AXB = C, J. Aust. Math. Soc., 84, (2008), 63-72.
[3] D. S. Cvetković-Ilić, The reflexive solutions of the matrix equation AXB = C, Comput. Math. Appl., 51, (2006), 897-902.
[4] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl., 131, (1990), 1-7.
[5] S. M. Karbassi and F. Soltanian, A new approach to the solution of sensitivity minimization in linear state feedback control, Iranian Journal of Mathematical Sciences and Informatics, 2(1), (2007), 1-13.
[6] C. G. Khatri and S. K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations, SIAM J. Appl. Math., 31, (1976), 579-585.
[7] J. J. Hou, Z. Y. Peng, and X. L. Zhang, An iterative method for the least squares symmetric solution of matrix equation AXB = C, Numer. Algor., 42, (2006), 181-192.
[8] A. J. Laub, Matrix Analysis for Scientists and Engineers, SIAM, Philadelphia, PA, 2005.
[9] H. Lev-Ari, Efficient solution of linear matrix equations with applications to multistatic antenna array processing, Comm. Inform. Systems, 5, (2005), 123-130.
[10] J. R. Magnus, L-structured matrices and linear matrix equations, Linear and Multilinear Algebra, 14, (1983), 67-88.
[11] P. Mokhtary and S. Mohammad Hosseini, Some implementation aspects of the general linear methods with inherent Runge-Kutta stability, Iranian Journal of Mathematical Sciences and Informatics, 3(1), (2008), 63-76.
[12] C. Prada, S. Manneville, D. Spoliansky, and M. Fink, Decomposition of the time reversal operator: Detection and selective focusing on two scatterers, J. Acoust. Soc. Am., 99, (1996), 2067-2076.
[13] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, 1996.
[14] J. R. Schott, Matrix Analysis for Statistics, Wiley, New York, 1997.
[15] G. Shi and A. Nehorai, Maximum likelihood estimation of point scatterers for computational time-reversal imaging, Comm. Inform. Systems, 5, (2005), 227-256.
[16] Y. Tian, Ranks of solutions of the matrix equation AXB = C, Linear and Multilinear Algebra, 51, (2003), 111-125.
[17] Q. Wang and C. Yang, The Re-nonnegative definite solutions to the matrix equation AXB = C, Comment. Math. Univ. Carolin., 39, (1998), 7-13.
[18] X. Zhang, Hermitian nonnegative definite and positive definite solutions of the matrix equation AXB = C, Appl. Math. E-Notes, 4, (2004), 40-47.