The reflexive and anti-reflexive solutions of the matrix equation A^H XB = C


Journal of Computational and Applied Mathematics 200 (2007) 749-760. www.elsevier.com/locate/cam

The reflexive and anti-reflexive solutions of the matrix equation A^H XB = C

Xiang-yang Peng (a,b), Xi-yan Hu (a), Lei Zhang (a)
(a) College of Mathematics and Econometrics, Hunan University, Changsha 410082, PR China
(b) Department of Information and Computing Science, Changsha University, Changsha 410003, PR China

Received 10 June 2005; received in revised form 19 October 2005

Abstract

In this paper, necessary and sufficient conditions for the solvability of the matrix equation A^H XB = C over the reflexive or anti-reflexive matrices are given, and the general expression of the solution in the solvable case is obtained. Moreover, the associated optimal approximation problem is considered, and explicit expressions of both the optimal approximation solution and the minimum norm solution are provided. © 2006 Published by Elsevier B.V.

MSC: 15A24

Keywords: Reflexive matrix; Anti-reflexive matrix; Matrix equation; Optimal approximation matrix; Minimum norm solution

1. Introduction

Throughout the paper, we denote the complex n-vector space by C^n, the set of m×n complex matrices by C^{m×n}, the set of n×n nonsingular matrices by C_n^{n×n}, and the conjugate transpose of a complex matrix A by A^H. We define an inner product <A, B> = trace(B^H A) for all A, B ∈ C^{m×n}; then C^{m×n} is a Hilbert inner product space, and the norm generated by this inner product is the Frobenius norm. Let ||A|| be the Frobenius norm of the matrix A. The notation A ∗ B = (a_ij b_ij)_{m×n} denotes the Hadamard product of A = (a_ij)_{m×n} and B = (b_ij)_{m×n}. Chen and Chen [1], Chen [2,3], and Chen and Sameh [4] introduced the following two special classes of subspaces of C^{n×n}:

C_r^{n×n}(P) = { X ∈ C^{n×n} | X = PXP },    C_a^{n×n}(P) = { X ∈ C^{n×n} | X = −PXP },

where P is a generalized reflection matrix of size n, that is, P^H = P and P^2 = I.

Research supported by the National Natural Science Foundation of China (10571047).
Corresponding author: Department of Information and Computing Science, Changsha University, Changsha, Hunan 410003, PR China. Tel.: +86 0731 4261229. E-mail address: pxy396@yahoo.com.cn (X.-y. Peng). doi:10.1016/j.cam.2006.01.024

Let HC^{n×n} = { P ∈ C^{n×n} | P^H = P, P^2 = I }. A matrix X ∈ C_r^{n×n}(P) (or Y ∈ C_a^{n×n}(P)) is said to be a reflexive (or anti-reflexive) matrix with respect to the generalized reflection matrix P (abbreviated to reflexive (or anti-reflexive) matrix in this paper). Reflexive and anti-reflexive matrices have many special properties and are widely used in engineering and scientific computation [1-4]. In this paper, we mainly consider the following two problems.

Problem I. Given matrices A ∈ C^{n×m}, B ∈ C^{n×l} and C ∈ C^{m×l}, find X ∈ C_r^{n×n}(P) (or C_a^{n×n}(P)) such that

A^H XB = C.   (1.1)

Denote the solution set of Problem I by S_rE (or S_aE). If S_rE (or S_aE) is nonempty, we consider the associated optimal approximation problem, namely

Problem II. Given X* ∈ C^{n×n}, find an n×n matrix X̂ ∈ S_rE (or S_aE) such that

||X̂ − X*|| = min_{X ∈ S_rE (or S_aE)} ||X − X*||.   (1.2)

The matrix equation A^T XA = B, one special form of A^H XB = C, has been discussed in many papers with the unknown matrix X required to be symmetric, symmetric positive, centro-symmetric, or symmetric ortho-symmetric (see, for instance, Chu [5,6], Dai [8], He [10], Peng [12], Peng and Hu [14]); the matrix equation AX = B, another special form of A^H XB = C, has likewise been discussed with the unknown matrix X required to be reflexive, anti-reflexive, symmetric, re-positive definite, etc. (see, for instance, [13,7,9,15]). The matrix equation A^H XB = C with X required to be symmetric was studied by Dai [7]. In these papers, using the structure of the matrices in the required subset of C^{m×n} and the generalized singular value decomposition (GSVD), necessary and sufficient conditions for the existence of a solution and expressions for the solutions were given. Reflexive and anti-reflexive matrices have extensive applications in engineering and scientific computation.
However, the reflexive and anti-reflexive solutions of the matrix equation A^H XB = C have not yet been considered; Problem I settles this question. The optimal approximation problem arises in the testing or recovery of linear systems with incomplete data or in revising given data: a preliminary estimate X* of the unknown matrix X can be obtained from experimental observations and statistical information, and this is the Problem II we discuss in this paper. Finally, we obtain the expression of the minimum norm solution, which attains min_{X ∈ S_rE} ||X|| (or min_{X ∈ S_aE} ||X||).

The paper is organized as follows. In Section 2, necessary and sufficient conditions for the existence of, and expressions for, the reflexive and anti-reflexive solutions of Eq. (1.1) are derived. In Section 3, expressions for the optimal approximation solution of Problem II and the minimum norm solution of Problem I are obtained. In Section 4, a numerical example illustrates the results.

2. Solutions of Problem I

In this section, we first introduce some structural properties of the generalized reflection matrix P and of the subsets C_r^{n×n}(P) and C_a^{n×n}(P) of C^{n×n}. We then give necessary and sufficient conditions for the existence of, and expressions for, the reflexive and anti-reflexive solutions of Eq. (1.1) with respect to a generalized reflection matrix P.

Lemma 2.1 (Peng et al. [14]). If r = rank(I_n + P) and n − r = rank(I_n − P), then there exist column-orthonormal matrices U_1 ∈ C^{n×r} and U_2 ∈ C^{n×(n−r)} such that (1/2)(I_n + P) = U_1 U_1^H, (1/2)(I_n − P) = U_2 U_2^H and U_1^H U_2 = 0.

Let U = (U_1, U_2). It is easily verified from Lemma 2.1 that U is a unitary matrix and that P can be expressed as

P = U diag(I_r, −I_{n−r}) U^H.   (2.1)
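As a quick numerical illustration of Lemma 2.1 (not part of the original paper), U_1 and U_2 can be obtained from an eigendecomposition of P; the helper name `reflection_basis` and the toy sizes below are our own. A sketch in Python with NumPy:

```python
import numpy as np

def reflection_basis(P):
    """Given a generalized reflection matrix P (P^H = P, P^2 = I), return
    U1, U2 with P = (U1, U2) diag(I_r, -I_{n-r}) (U1, U2)^H as in Lemma 2.1."""
    w, V = np.linalg.eigh(P)      # eigenvalues of P are +1 and -1
    return V[:, w > 0], V[:, w < 0]

# toy example: build P from a random unitary Q with r = 2, n = 4
rng = np.random.default_rng(1)
n, r = 4, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
P = Q @ np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex) @ Q.conj().T

U1, U2 = reflection_basis(P)
U = np.hstack([U1, U2])
S = np.diag([1.0] * U1.shape[1] + [-1.0] * U2.shape[1])
assert np.allclose(U @ U.conj().T, np.eye(n))              # U is unitary
assert np.allclose(U @ S @ U.conj().T, P)                  # P = U diag(I_r, -I_{n-r}) U^H
assert np.allclose(U1 @ U1.conj().T, (np.eye(n) + P) / 2)  # (I_n + P)/2 = U1 U1^H
```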

Lemma 2.2 (Peng et al. [14]). X ∈ C_r^{n×n}(P) if and only if X can be expressed as

X = U diag(X_11, X_22) U^H,   (2.2)

where X_11 ∈ C^{r×r}, X_22 ∈ C^{(n−r)×(n−r)}, and U is as in (2.1).

Lemma 2.3 (Peng et al. [14]). X ∈ C_a^{n×n}(P) if and only if X can be expressed as

X = U [ 0     X_12 ]
      [ X_21  0    ] U^H,   (2.3)

where X_12 ∈ C^{r×(n−r)}, X_21 ∈ C^{(n−r)×r}, and U is as in (2.1).

Denote

U^H A = (A_1 ; A_2),  A_1 ∈ C^{r×m}, A_2 ∈ C^{(n−r)×m},   (2.4)
U^H B = (B_1 ; B_2),  B_1 ∈ C^{r×l}, B_2 ∈ C^{(n−r)×l}.   (2.5)

Applying the GSVD [11] to the matrix pair [A_1^H, A_2^H], we obtain

A_1^H = W_A Σ_1A U_A^H,  A_2^H = W_A Σ_2A V_A^H,   (2.6)

where W_A ∈ C_m^{m×m} is nonsingular, U_A ∈ C^{r×r} and V_A ∈ C^{(n−r)×(n−r)} are unitary, and

Σ_1A = [ I_1A  0     0 ]        Σ_2A = [ 0  0     0    ]
       [ 0     D_1A  0 ]               [ 0  D_2A  0    ]
       [ 0     0     0 ],              [ 0  0     I_2A ]
                                       [ 0  0     0    ],   (2.7)

where Σ_1A has row blocks of sizes k_1, s_1, m−k_1−s_1 and column blocks of sizes k_1, s_1, r−k_1−s_1, and Σ_2A has row blocks of sizes k_1, s_1, t_1−k_1−s_1, m−t_1 and column blocks of sizes n−r+k_1−t_1, s_1, t_1−k_1−s_1; here t_1 = rank(A_1^H, A_2^H), k_1 = t_1 − rank(A_2^H), s_1 = rank(A_1^H) + rank(A_2^H) − t_1, I_1A and I_2A are identity matrices, the remaining blocks are zero matrices, and D_1A = diag(λ_1A, λ_2A, ..., λ_{s_1,A}) > 0, D_2A = diag(μ_1A, μ_2A, ..., μ_{s_1,A}) > 0 with 1 > λ_1A ≥ λ_2A ≥ ... ≥ λ_{s_1,A} > 0, 0 < μ_1A ≤ μ_2A ≤ ... ≤ μ_{s_1,A} < 1 and λ_iA^2 + μ_iA^2 = 1 (i = 1, 2, ..., s_1).

Similarly, the GSVD of the matrix pair [B_1^H, B_2^H] is

B_1^H = W_B Σ_1B U_B^H,  B_2^H = W_B Σ_2B V_B^H,   (2.8)

where W_B ∈ C_l^{l×l} is nonsingular, U_B ∈ C^{r×r} and V_B ∈ C^{(n−r)×(n−r)} are unitary, and

Σ_1B = [ I_1B  0     0 ]        Σ_2B = [ 0  0     0    ]
       [ 0     D_1B  0 ]               [ 0  D_2B  0    ]
       [ 0     0     0 ],              [ 0  0     I_2B ]
                                       [ 0  0     0    ],   (2.9)

where Σ_1B has row blocks of sizes k_2, s_2, l−k_2−s_2 and column blocks of sizes k_2, s_2, r−k_2−s_2, and Σ_2B has row blocks of sizes k_2, s_2, t_2−k_2−s_2, l−t_2 and column blocks of sizes n−r+k_2−t_2, s_2, t_2−k_2−s_2; here t_2 = rank(B_1^H, B_2^H), k_2 = t_2 − rank(B_2^H), s_2 = rank(B_1^H) + rank(B_2^H) − t_2, I_1B and I_2B are identity matrices, the remaining blocks are zero matrices, and D_1B = diag(λ_1B, ..., λ_{s_2,B}) > 0, D_2B = diag(μ_1B, ..., μ_{s_2,B}) > 0 with 1 > λ_1B ≥ ... ≥ λ_{s_2,B} > 0, 0 < μ_1B ≤ ... ≤ μ_{s_2,B} < 1 and λ_iB^2 + μ_iB^2 = 1 (i = 1, 2, ..., s_2).
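Lemmas 2.2 and 2.3 can be checked numerically: conjugating a block-diagonal (resp. off-block-diagonal) matrix by any unitary U whose first r columns span the +1 eigenspace of P yields a reflexive (resp. anti-reflexive) matrix. A small NumPy sketch (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3
# any unitary U works; its first r columns span the +1 eigenspace of P
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
P = U @ np.diag([1.0] * r + [-1.0] * (n - r)) @ U.conj().T

# Lemma 2.2: X = U diag(X11, X22) U^H is reflexive, i.e. X = P X P
X11 = rng.standard_normal((r, r))
X22 = rng.standard_normal((n - r, n - r))
Xr = U @ np.block([[X11, np.zeros((r, n - r))],
                   [np.zeros((n - r, r)), X22]]) @ U.conj().T
assert np.allclose(P @ Xr @ P, Xr)

# Lemma 2.3: X = U [[0, X12], [X21, 0]] U^H is anti-reflexive, i.e. X = -P X P
X12 = rng.standard_normal((r, n - r))
X21 = rng.standard_normal((n - r, r))
Xa = U @ np.block([[np.zeros((r, r)), X12],
                   [X21, np.zeros((n - r, n - r))]]) @ U.conj().T
assert np.allclose(P @ Xa @ P, -Xa)
```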

We now state the following results. Let

W_A^{-1} C W_B^{-H} = [ C_11  C_12  C_13  C_14 ]
                      [ C_21  C_22  C_23  C_24 ]
                      [ C_31  C_32  C_33  C_34 ]
                      [ C_41  C_42  C_43  C_44 ],   (2.10)

with row blocks of sizes k_1, s_1, t_1−k_1−s_1, m−t_1 and column blocks of sizes k_2, s_2, t_2−k_2−s_2, l−t_2.

Theorem 2.1. Let A ∈ C^{n×m}, B ∈ C^{n×l} and C ∈ C^{m×l} be given as in Problem I, and let P ∈ HC^{n×n} be expressed as in (2.1). Partition U^H A and U^H B as in (2.4) and (2.5), let the GSVDs of the matrix pairs [A_1^H, A_2^H] and [B_1^H, B_2^H] be as in (2.6) and (2.8), respectively, and let W_A^{-1} C W_B^{-H} be partitioned as in (2.10). Then Eq. (1.1) has a solution X ∈ C_r^{n×n}(P) if and only if

C_13 = 0, C_14 = 0, C_24 = 0, C_31 = 0, C_34 = 0, C_41 = 0, C_42 = 0, C_43 = 0, C_44 = 0.   (2.11)

In that case the general solution can be expressed as

X = U diag( U_A [ C_11           C_12 D_1B^{-1}  X_13 ]
                [ D_1A^{-1} C_21  X_22           X_23 ]
                [ X_31           X_32            X_33 ] U_B^H ,
            V_A [ X_44  X_45                                        X_46           ]
                [ X_54  D_2A^{-1}(C_22 − D_1A X_22 D_1B) D_2B^{-1}  D_2A^{-1} C_23 ]
                [ X_64  C_32 D_2B^{-1}                              C_33           ] V_A... V_B^H ) U^H,   (2.12)

where X_13 ∈ C^{k_1×(r−k_2−s_2)}, X_22 ∈ C^{s_1×s_2}, X_23 ∈ C^{s_1×(r−k_2−s_2)}, X_31 ∈ C^{(r−k_1−s_1)×k_2}, X_32 ∈ C^{(r−k_1−s_1)×s_2}, X_33 ∈ C^{(r−k_1−s_1)×(r−k_2−s_2)}, X_44 ∈ C^{(n−r+k_1−t_1)×(n−r+k_2−t_2)}, X_45 ∈ C^{(n−r+k_1−t_1)×s_2}, X_46 ∈ C^{(n−r+k_1−t_1)×(t_2−k_2−s_2)}, X_54 ∈ C^{s_1×(n−r+k_2−t_2)} and X_64 ∈ C^{(t_1−k_1−s_1)×(n−r+k_2−t_2)} are arbitrary matrices.

Proof. We first prove the necessity of (2.11). If Eq. (1.1) has a reflexive solution X, then X satisfies (2.2), so Eq. (1.1) is equivalent to

(U^H A)^H diag(X_11, X_22) (U^H B) = C.   (2.13)

Substituting (2.4) and (2.5) into (2.13), we obtain

A_1^H X_11 B_1 + A_2^H X_22 B_2 = C.   (2.14)

Note that W_A and W_B are nonsingular. From (2.6) and (2.8) we get

Σ_1A (U_A^H X_11 U_B) Σ_1B^H + Σ_2A (V_A^H X_22 V_B) Σ_2B^H = W_A^{-1} C W_B^{-H}.   (2.15)

Let

U_A^H X_11 U_B = [ X_11  X_12  X_13 ]
                 [ X_21  X_22  X_23 ]
                 [ X_31  X_32  X_33 ],   (2.16)

with row blocks of sizes k_1, s_1, r−k_1−s_1 and column blocks of sizes k_2, s_2, r−k_2−s_2, and

V_A^H X_22 V_B = [ X_44  X_45  X_46 ]
                 [ X_54  X_55  X_56 ]
                 [ X_64  X_65  X_66 ],   (2.17)

with row blocks of sizes n−r+k_1−t_1, s_1, t_1−k_1−s_1 and column blocks of sizes n−r+k_2−t_2, s_2, t_2−k_2−s_2.

Substituting (2.16) and (2.17) into (2.15), we obtain

[ X_11        X_12 D_1B                         0          0 ]   [ C_11  C_12  C_13  C_14 ]
[ D_1A X_21   D_1A X_22 D_1B + D_2A X_55 D_2B   D_2A X_56  0 ] = [ C_21  C_22  C_23  C_24 ]
[ 0           X_65 D_2B                         X_66       0 ]   [ C_31  C_32  C_33  C_34 ]
[ 0           0                                 0          0 ]   [ C_41  C_42  C_43  C_44 ].   (2.18)

Therefore (2.18) holds if and only if (2.11) holds and

X_11 = C_11,  X_12 = C_12 D_1B^{-1},  X_21 = D_1A^{-1} C_21,  X_56 = D_2A^{-1} C_23,  X_65 = C_32 D_2B^{-1},  X_66 = C_33,  X_55 = D_2A^{-1}(C_22 − D_1A X_22 D_1B) D_2B^{-1}.

Substituting the above into (2.16), (2.17) and (2.2) yields formula (2.12).

Now we prove the sufficiency of (2.11). Let

X_11^{(0)} = U_A [ C_11            C_12 D_1B^{-1}  0 ]
                 [ D_1A^{-1} C_21  0               0 ]
                 [ 0               0               0 ] U_B^H,

X_22^{(0)} = V_A [ 0  0                             0              ]
                 [ 0  D_2A^{-1} C_22 D_2B^{-1}      D_2A^{-1} C_23 ]
                 [ 0  C_32 D_2B^{-1}                C_33           ] V_B^H.

Obviously, X_11^{(0)} ∈ C^{r×r} and X_22^{(0)} ∈ C^{(n−r)×(n−r)}. By Lemma 2.2, X_0 = U diag(X_11^{(0)}, X_22^{(0)}) U^H satisfies X_0 ∈ C_r^{n×n}(P). Hence

A^H X_0 B = A^H U diag(X_11^{(0)}, X_22^{(0)}) U^H B = A_1^H X_11^{(0)} B_1 + A_2^H X_22^{(0)} B_2 = C,

so X_0 is a reflexive solution of Eq. (1.1) with respect to the generalized reflection matrix P. The proof is completed.

Similarly, by Lemma 2.3 and formulas (2.4)-(2.10), we conclude the following.

Theorem 2.2. Let A ∈ C^{n×m}, B ∈ C^{n×l} and C ∈ C^{m×l} be given as in Problem I, and let P ∈ HC^{n×n} be expressed as in (2.1). Partition U^H A and U^H B as in (2.4) and (2.5), let the GSVDs of the matrix pairs [A_1^H, A_2^H] and [B_1^H, B_2^H] be as in (2.6) and (2.8), respectively, and let W_A^{-1} C W_B^{-H} be partitioned as in (2.10). Then Eq. (1.1) has a solution X ∈ C_a^{n×n}(P) if and only if

C_11 = 0, C_14 = 0, C_24 = 0, C_33 = 0, C_34 = 0, C_41 = 0, C_42 = 0, C_43 = 0, C_44 = 0.   (2.19)

In that case the general solution can be expressed as

X = U [ 0     X_12 ]
      [ X_21  0    ] U^H  with

X_12 = U_A [ X_14  C_12 D_2B^{-1}  C_13           ]
           [ X_24  X_25            D_1A^{-1} C_23 ]
           [ X_34  X_35            X_36           ] V_B^H,

X_21 = V_A [ X_41            X_42                                        X_43 ]
           [ D_2A^{-1} C_21  D_2A^{-1}(C_22 − D_1A X_25 D_2B) D_1B^{-1}  X_53 ]
           [ C_31            C_32 D_1B^{-1}                              X_63 ] U_B^H,   (2.20)
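For intuition, the reflexive solvability question can also be exercised by brute force: restrict X to the form of Lemma 2.2 and solve the resulting linear system in the two diagonal blocks by least squares. This is a numerical check only, not the paper's GSVD-based construction; all names and sizes in the sketch below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, l, r = 6, 4, 3, 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
P = U @ np.diag([1.0] * r + [-1.0] * (n - r)) @ U.conj().T
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((n, l)) + 1j * rng.standard_normal((n, l))

# manufacture a consistent right-hand side from a random reflexive X_true
Y1 = rng.standard_normal((r, r))
Y2 = rng.standard_normal((n - r, n - r))
X_true = U @ np.block([[Y1, np.zeros((r, n - r))],
                       [np.zeros((n - r, r)), Y2]]) @ U.conj().T
C = A.conj().T @ X_true @ B

# least squares over the parametrization X = U diag(Y1, Y2) U^H:
# A^H X B = A1^H Y1 B1 + A2^H Y2 B2, with (A1; A2) = U^H A and (B1; B2) = U^H B,
# vectorized via vec(M Y N) = (N^T kron M) vec(Y)
A1, A2 = np.vsplit(U.conj().T @ A, [r])
B1, B2 = np.vsplit(U.conj().T @ B, [r])
M1 = np.kron(B1.T, A1.conj().T)
M2 = np.kron(B2.T, A2.conj().T)
y, *_ = np.linalg.lstsq(np.hstack([M1, M2]), C.flatten(order="F"), rcond=None)
Y1h = y[:r * r].reshape((r, r), order="F")
Y2h = y[r * r:].reshape((n - r, n - r), order="F")
X = U @ np.block([[Y1h, np.zeros((r, n - r))],
                  [np.zeros((n - r, r)), Y2h]]) @ U.conj().T
assert np.allclose(A.conj().T @ X @ B, C)   # X solves A^H X B = C
assert np.allclose(P @ X @ P, X)            # and X is reflexive
```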

where X_14 ∈ C^{k_1×(n−r+k_2−t_2)}, X_24 ∈ C^{s_1×(n−r+k_2−t_2)}, X_34 ∈ C^{(r−k_1−s_1)×(n−r+k_2−t_2)}, X_35 ∈ C^{(r−k_1−s_1)×s_2}, X_36 ∈ C^{(r−k_1−s_1)×(t_2−k_2−s_2)}, X_41 ∈ C^{(n−r+k_1−t_1)×k_2}, X_42 ∈ C^{(n−r+k_1−t_1)×s_2}, X_43 ∈ C^{(n−r+k_1−t_1)×(r−k_2−s_2)}, X_53 ∈ C^{s_1×(r−k_2−s_2)}, X_63 ∈ C^{(t_1−k_1−s_1)×(r−k_2−s_2)} and X_25 ∈ C^{s_1×s_2} are arbitrary matrices.

3. The solutions of Problem II

In this section we first introduce a lemma and then obtain the general expression of the solutions of Problem II. Finally, we deduce the minimum norm solution, i.e. the X attaining min ||X|| subject to A^H XB = C.

Lemma 3.1. Let E ∈ C^{m×n}, F ∈ C^{m×n}, D_1 = diag(λ_1, λ_2, ..., λ_m) > 0 and D_2 = diag(μ_1, μ_2, ..., μ_n) > 0 be given, and let h(G) = ||G − E||^2 + ||D_1 G D_2 − F||^2. Then there exists a unique matrix Ĝ ∈ C^{m×n} satisfying h(Ĝ) = min_{G ∈ C^{m×n}} h(G), namely

Ĝ = Φ ∗ (E + D_1 F D_2),   (3.1)

where Φ = (φ_ij) ∈ C^{m×n}, φ_ij = 1/(1 + λ_i^2 μ_j^2) (i = 1, 2, ..., m; j = 1, 2, ..., n).

Proof. Write G = (g_ij) ∈ C^{m×n}, E = (e_ij) and F = (f_ij). We have

h(G) = Σ_{i=1}^{m} Σ_{j=1}^{n} ( |g_ij − e_ij|^2 + |λ_i μ_j g_ij − f_ij|^2 ).   (3.2)

Here h(G) is a continuously differentiable function of the mn variables g_ij. From the first-order necessary condition for a minimum we obtain

g_ij = (e_ij + λ_i f_ij μ_j) / (1 + λ_i^2 μ_j^2).   (3.3)

Eq. (3.1) then follows from Eq. (3.3). The proof is completed.

Theorem 3.1. Suppose that A, B and C are as given in Problem I, that the solution set S_rE ⊆ C_r^{n×n}(P) of A^H XB = C is nonempty, and that X* ∈ C^{n×n} is given. Let

U^H X* U = [ K*_11  K*_12 ]
           [ K*_21  K*_22 ],   (3.4)

with row and column blocks of sizes r and n−r,

U_A^H K*_11 U_B = [ X*_11  X*_12  X*_13 ]
                  [ X*_21  X*_22  X*_23 ]
                  [ X*_31  X*_32  X*_33 ],   (3.5)

with row blocks of sizes k_1, s_1, r−k_1−s_1 and column blocks of sizes k_2, s_2, r−k_2−s_2, and

V_A^H K*_22 V_B = [ X*_44  X*_45  X*_46 ]
                  [ X*_54  X*_55  X*_56 ]
                  [ X*_64  X*_65  X*_66 ],   (3.6)

with row blocks of sizes n−r+k_1−t_1, s_1, t_1−k_1−s_1 and column blocks of sizes n−r+k_2−t_2, s_2, t_2−k_2−s_2.

Then the problem ||X̂ − X*|| = min_{X ∈ S_rE} ||X − X*|| has a unique solution X̂ ∈ S_rE, given by

X̂ = U diag( U_A [ C_11            C_12 D_1B^{-1}  X*_13 ]
                [ D_1A^{-1} C_21  X̂_22            X*_23 ]
                [ X*_31           X*_32           X*_33 ] U_B^H ,
            V_A [ X*_44  X*_45                                       X*_46          ]
                [ X*_54  D_2A^{-1}(C_22 − D_1A X̂_22 D_1B) D_2B^{-1}  D_2A^{-1} C_23 ]
                [ X*_64  C_32 D_2B^{-1}                              C_33           ] V_B^H ) U^H   (3.7)

with

X̂_22 = Ψ ∗ (D_1A C_22 D_1B + D_2A^2 X*_22 D_2B^2 − D_1A D_2A X*_55 D_1B D_2B),   (3.8)

where Ψ = (ψ_ij) ∈ C^{s_1×s_2}, ψ_ij = 1/(λ_iA^2 λ_jB^2 + μ_iA^2 μ_jB^2) (i = 1, 2, ..., s_1; j = 1, 2, ..., s_2).

Proof. By the invariance of the Frobenius norm under unitary transformations, together with (2.12), (3.5) and (3.6), the problem ||X̂ − X*|| = min_{X ∈ S_rE} ||X − X*|| is equivalent to

||X̂_ij − X*_ij|| = min_{X_ij} ||X_ij − X*_ij||  for each free block (i,j) ∈ {13, 23, 31, 32, 33, 44, 45, 46, 54, 64}   (3.9)

and

h(X̂_22) = min_{X_22 ∈ C^{s_1×s_2}} h(X_22),   (3.10)

where h(X_22) = ||X_22 − X*_22||^2 + ||D_2A^{-1}(C_22 − D_1A X_22 D_1B) D_2B^{-1} − X*_55||^2. From (3.9) we have X̂_13 = X*_13, X̂_23 = X*_23, X̂_31 = X*_31, X̂_32 = X*_32, X̂_33 = X*_33, X̂_44 = X*_44, X̂_45 = X*_45, X̂_46 = X*_46, X̂_54 = X*_54 and X̂_64 = X*_64. Rewriting

h(X_22) = ||(D_2A^{-1} D_1A) X_22 (D_1B D_2B^{-1}) − (D_2A^{-1} C_22 D_2B^{-1} − X*_55)||^2 + ||X_22 − X*_22||^2   (3.11)

and applying Lemma 3.1 to (3.10) and (3.11) gives (3.8). Thus (3.7) and (3.8) hold.

Taking X* to be the zero matrix in Theorem 3.1, we can immediately deduce the following.

Corollary 3.1. The minimum norm solution X̂_min ∈ S_rE ⊆ C_r^{n×n}(P), i.e. the solution satisfying ||X̂_min|| = min_{X ∈ S_rE} ||X|| (and A^H X̂_min B = C), is

X̂_min = U diag( U_A [ C_11            C_12 D_1B^{-1}  0 ]
                    [ D_1A^{-1} C_21  X̂_22            0 ]
                    [ 0               0               0 ] U_B^H ,
                V_A [ 0  0                                          0              ]
                    [ 0  D_2A^{-1}(C_22 − D_1A X̂_22 D_1B) D_2B^{-1}  D_2A^{-1} C_23 ]
                    [ 0  C_32 D_2B^{-1}                             C_33           ] V_B^H ) U^H,

where X̂_22 = Ψ ∗ (D_1A C_22 D_1B).
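Lemma 3.1's closed-form minimizer is easy to sanity-check numerically: Ĝ = Φ ∗ (E + D_1 F D_2) should beat every nearby perturbation. A NumPy sketch of this check (ours, not from the paper; real data for simplicity, though the lemma is stated over C):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 5
E = rng.standard_normal((m, n))
F = rng.standard_normal((m, n))
lam = rng.uniform(0.5, 2.0, m)   # diagonal of D1 > 0
mu = rng.uniform(0.5, 2.0, n)    # diagonal of D2 > 0
D1, D2 = np.diag(lam), np.diag(mu)

def h(G):
    # objective of Lemma 3.1: ||G - E||^2 + ||D1 G D2 - F||^2 (Frobenius norms)
    return np.linalg.norm(G - E) ** 2 + np.linalg.norm(D1 @ G @ D2 - F) ** 2

# closed-form minimizer (3.1): Hadamard product with phi_ij = 1/(1 + lam_i^2 mu_j^2)
Phi = 1.0 / (1.0 + np.outer(lam ** 2, mu ** 2))
G_hat = Phi * (E + D1 @ F @ D2)

# G_hat beats random perturbations, as Lemma 3.1 predicts
for _ in range(100):
    assert h(G_hat) <= h(G_hat + 1e-3 * rng.standard_normal((m, n)))
```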

In a similar way, we can deduce the following.

Theorem 3.2. Suppose that A, B and C are as given in Problem I, that the solution set S_aE ⊆ C_a^{n×n}(P) of A^H XB = C is nonempty, and that X* ∈ C^{n×n} is given. With (3.4), let

U_A^H K*_12 V_B = [ X*_14  X*_15  X*_16 ]
                  [ X*_24  X*_25  X*_26 ]
                  [ X*_34  X*_35  X*_36 ],   (3.12)

with row blocks of sizes k_1, s_1, r−k_1−s_1 and column blocks of sizes n−r+k_2−t_2, s_2, t_2−k_2−s_2, and

V_A^H K*_21 U_B = [ X*_41  X*_42  X*_43 ]
                  [ X*_51  X*_52  X*_53 ]
                  [ X*_61  X*_62  X*_63 ],   (3.13)

with row blocks of sizes n−r+k_1−t_1, s_1, t_1−k_1−s_1 and column blocks of sizes k_2, s_2, r−k_2−s_2. Then the problem ||X̂ − X*|| = min_{X ∈ S_aE} ||X − X*|| has a unique solution X̂_A ∈ S_aE, given by

X̂_A = U [ 0     X̂_12 ]
        [ X̂_21  0    ] U^H  with

X̂_12 = U_A [ X*_14  C_12 D_2B^{-1}  C_13           ]
           [ X*_24  X̂_25            D_1A^{-1} C_23 ]
           [ X*_34  X*_35           X*_36          ] V_B^H,

X̂_21 = V_A [ X*_41           X*_42                                       X*_43 ]
           [ D_2A^{-1} C_21  D_2A^{-1}(C_22 − D_1A X̂_25 D_2B) D_1B^{-1}  X*_53 ]
           [ C_31            C_32 D_1B^{-1}                              X*_63 ] U_B^H,   (3.14)

where X̂_25 = Ω ∗ (D_1A C_22 D_2B + D_2A^2 X*_25 D_1B^2 − D_1A D_2A X*_52 D_1B D_2B), Ω = (ω_ij) ∈ R^{s_1×s_2}, ω_ij = 1/(λ_iA^2 μ_jB^2 + μ_iA^2 λ_jB^2), λ_iA^2 + μ_iA^2 = 1, λ_jB^2 + μ_jB^2 = 1 (i = 1, 2, ..., s_1; j = 1, 2, ..., s_2).

If X* is the zero matrix in Theorem 3.2, we can immediately obtain the following.

Corollary 3.2. The minimum norm solution X̂_A,min ∈ S_aE ⊆ C_a^{n×n}(P), i.e. the solution satisfying ||X̂_A,min|| = min_{X ∈ S_aE} ||X|| (and A^H X̂_A,min B = C), is

X̂_A,min = U [ 0     X̂_12 ]
            [ X̂_21  0    ] U^H  with

X̂_12 = U_A [ 0  C_12 D_2B^{-1}  C_13           ]
           [ 0  X̂_25            D_1A^{-1} C_23 ]
           [ 0  0               0              ] V_B^H,

X̂_21 = V_A [ 0               0                                           0 ]
           [ D_2A^{-1} C_21  D_2A^{-1}(C_22 − D_1A X̂_25 D_2B) D_1B^{-1}  0 ]
           [ C_31            C_32 D_1B^{-1}                              0 ] U_B^H,

where X̂_25 = Ω ∗ (D_1A C_22 D_2B), with Ω as in Theorem 3.2.

Based on Theorems 3.1 and 3.2, we propose the following algorithm for solving Problem II over S_rE or S_aE.

Algorithm.
1. Input X* ∈ C^{n×n} and a generalized reflection matrix P of size n.
2. Compute r and U by (2.1).
3. Compute A_1, A_2, B_1 and B_2 by (2.4) and (2.5).
4. Compute W_A, Σ_1A, Σ_2A, U_A and V_A by (2.6).
5. Compute W_B, Σ_1B, Σ_2B, U_B and V_B by (2.8).
6. Compute the blocks X*_ij (i, j = 1, 2, ..., 6) by (3.4), (3.5), (3.6), (3.12) and (3.13).

X.-y. Peng et al. / Journal of Computational and Applied Mathematics 200 (2007) 749 760 757 7. Compute partition matrix (C ij ) 4 4 by (2.10). 8. If C i4 = 0 (i = 1, 2, 3, 4) and C 4j = 0 (j = 1, 2, 3) and C 13 = 0 and C 31 = 0 then go to step 9, else go to step 11. 9. Compute ˆX S re by (3.7) and ˆX min S re by Corollary 3.1. 10. Go to step 13. 11. If C i4 = 0(i = 1, 2, 3, 4) and C 4j = 0(j = 1, 2, 3) and C 11 = 0 and C 33 = 0 then go to step 12, else go to step 13. 12. Compute ˆX A S ae by (3.14) and ˆX A min S ae by Corollary 3.2. 13. Stop. 4. Numerical experiments In this section, we will give a numerical example to illustrate our results. All the tests are performed by MATLAB6.5. Example. The matrices P, A, B and X are given by following 0.7025 0.3848 0.3132 0.3461 0.0204 0.0191 0.1502 0.2816 0.1910 0.0377 0.3848 0.2888 0.0096 0.3600 0.0409 0.1615 0.2182 0.2025 0.4281 0.5825 0.3132 0.0096 0.1648 0.0965 0.6190 0.0368 0.3884 0.5449 0.0136 0.1811 0.3461 0.3600 0.0965 0.2728 0.5305 0.0155 0.1560 0.2062 0.5176 0.2250 0.0204 0.0409 0.6190 0.5305 0.0408 0.1254 0.4564 0.0001 0.1944 0.2643 P =, 0.0191 0.1615 0.0368 0.0155 0.1254 0.9019 0.1135 0.2150 0.1980 0.2108 0.1502 0.2182 0.3884 0.1560 0.4564 0.1135 0.6361 0.0051 0.3189 0.1649 0.2816 0.2025 0.5449 0.2062 0.0001 0.2150 0.0051 0.2353 0.3917 0.5341 0.1910 0.4281 0.0136 0.5176 0.1944 0.1980 0.3189 0.3917 0.4231 0.0331 0.0377 0.5825 0.1811 0.2250 0.2643 0.2108 0.1649 0.5341 0.0331 0.3849 0.0312 0.0213 0.0571 0.0611 0.1344 0.1533 0.3473 0.3049 0.7876 0.9700 0.5988 0.3034 0.9823 0.9353 0.9274 0.6287 0.7229 0.7964 0.3359 0.2332 0.4507 0.4373 0.4613 0.6745 0.0902 0.3433 0.0286 0.2658 0.1277 0.1147 0.0794 0.2484 0.7497 0.5493 0.0716 0.0361 0.1012 0.0533 0.0843 0.6990 A =, 0.0426 0.0124 0.2885 0.0174 0.0008 0.0734 0.0162 0.1526 0.0537 0.1343 0.6114 0.2362 0.4917 0.4328 0.4157 0.0893 0.4875 0.4862 0.4795 0.3056 0.0585 0.1130 0.3284 0.2782 0.7200 0.7496 0.3310 0.2220 0.2856 0.3032 0.3298 0.7120 0.2761 0.0334 0.2441 0.1824 
0.1526 0.0240 0.1396 0.0024 0.0471 0.1624 0.2124 0.2108 0.3130 0.4500 0.6496 0.1144 0.2196 0.1297 0.0493 0.2194 0.0458 0.2282 0.2974 0.2981 0.2826 0.6621 0.5358 0.2719 0.5576 0.2612 0.0155 0.5931 0.0949 0.2127 0.1515 0.1661 B = 0.3111 0.1312 0.2338 0.4387 0.2230 0.4279 0.0378 0.2621 0.0076 0.4431 0.1140 0.1641 0.0339 0.0493, 0.0052 0.1073 0.3483 0.2266 0.3126 0.1489 0.1114 0.2240 0.1548 0.3010 0.3178 0.1408 0.1248 0.4628 0.5284 0.6772 0.6481 1.2460 1.1101 0.6787 1.2260 0.1323 0.0094 0.0803 0.2237 0.0479 0.0086 0.2303

758 X.-y. Peng et al. / Journal of Computational and Applied Mathematics 200 (2007) 749 760 0.4486 0.2248 0.6724 0.2319 0.8133 0.8995 0.3974 0.6376 0.1633 0.9544 0.5244 0.9089 0.9383 0.4787 0.9238 0.6928 0.3333 0.2513 0.2110 0.1311 0.1715 0.0073 0.3431 0.5265 0.1990 0.4397 0.9442 0.1443 0.2168 0.0683 0.1307 0.5887 0.5630 0.7927 0.6743 0.7010 0.8386 0.6516 0.6518 0.1252 X = 0.2188 0.5421 0.1189 0.1930 0.9271 0.6097 0.2584 0.9461 0.0528 0.1662 0.1055 0.6535 0.1690 0.9096 0.3438 0.2999 0.0429 0.8159 0.2293 0.9114. 0.1414 0.3134 0.2789 0.9222 0.5945 0.8560 0.0059 0.9302 0.6674 0.1363 0.4570 0.2312 0.5568 0.0133 0.6155 0.1121 0.5744 0.3099 0.3109 0.6170 0.7881 0.4161 0.4856 0.7675 0.0034 0.2916 0.7439 0.2688 0.3066 0.2690 0.2811 0.2988 0.9522 0.9473 0.9820 0.0974 0.8068 0.5365 0.7207 0.2207 If 2.0461 2.0657 2.5944 4.5493 3.6081 2.4642 4.3491 2.1421 2.3033 2.6053 4.8920 4.0182 2.5875 4.6500 1.0014 1.2829 0.9332 2.5802 2.3282 1.1873 2.4239 0.5310 0.7236 0.6122 1.3912 1.2802 0.7601 1.3764 C = 1.3838 1.5807 1.3866 3.2077 2.7395 1.4368 2.9053 1.4674 1.5807 1.5218 3.3285 2.7559 1.5077 3.0260 1.5336 1.6630 1.7668 3.4955 2.8836 1.7529 3.3107 1.7402 1.7919 2.0615 3.9187 3.1619 2.0045 3.6362 then partition matrix satisfies (2.11). Thus reflexive solution set S re of Eq. (1.1) is nonempty. 
The optimal approximation reflexive solution ˆX to X in reflexive solution set S re is 0.1708 0.2920 0.0289 0.1515 0.2379 0.2882 0.7576 0.2009 0.6976 0.3899 0.0811 0.0118 0.7399 0.1921 0.2705 0.5920 0.1341 0.0691 1.2106 0.3242 0.0520 0.2825 0.6040 0.7229 0.1570 0.4963 0.5059 0.0689 1.5102 0.2747 0.2966 0.3405 0.5306 0.7151 0.7005 0.7371 0.6479 0.1230 0.0201 0.3001 ˆX= 0.0853 0.9973 0.5801 0.0917 0.4351 0.4240 0.6128 0.0722 0.1184 0.1386 0.1579 0.3183 0.0876 0.5205 0.1806 0.2893 0.6583 0.8600 0.6749 0.1466 0.2682 0.7709 0.4796 0.7188 0.8461 0.7430 0.2117 0.0721 0.2193 0.1238 0.6274 0.5459 0.4828 0.2074 0.1500 0.4530 0.4431 0.0958 0.5936 0.0006 0.1384 1.3239 0.6012 0.9141 0.2374 0.5154 0.3535 0.3351 0.2126 0.5733 0.0429 0.1902 0.1417 0.0961 0.0170 0.0607 0.5496 0.4133 0.2812 0.3036

X.-y. Peng et al. / Journal of Computational and Applied Mathematics 200 (2007) 749 760 759 and ˆX X =2.9904. The minimum norm reflexive solution is 0.1673 0.6439 0.0841 0.2848 0.2147 0.0105 0.5614 0.2590 0.4956 0.0683 0.1960 0.0051 0.8171 0.1050 0.0922 0.1121 0.2080 0.1977 1.4038 0.1616 0.3042 0.1941 0.7475 0.3689 0.2617 0.0028 0.1740 0.1159 1.2573 0.2991 0.3218 0.2854 0.4500 0.5746 0.8792 0.5070 0.5815 0.0821 0.1511 0.3704 0.2888 0.8144 0.5136 0.4827 0.3553 0.1350 0.4244 0.0368 0.1864 0.0689 ˆX min = 0.1168 0.1297 0.2882 0.1199 0.0936 0.6149 0.2840 0.5223 0.1346 0.0442 0.0739 0.2998 0.4193 0.4259 0.5921 0.4893 0.0809 0.2920 0.2414 0.1159 0.6910 0.3018 0.4377 0.0662 0.2350 0.5587 0.1815 0.1068 0.6833 0.3950 0.1331 1.2530 0.5441 0.5359 0.4301 0.3190 0.2208 0.3048 0.2984 0.3023 0.2702 0.3176 0.0405 0.3287 0.2526 0.0381 0.2109 0.1221 0.1376 0.0175 and ˆX min =2.7622. If 1.9249 2.2466 2.6371 4.4983 3.8222 2.6937 4.4531 2.1878 2.4620 2.7363 5.0205 4.2071 2.7376 4.8245 0.9658 1.3490 0.9534 2.5723 2.4072 1.2683 2.4682 C = 0.6565 0.7580 0.7117 2.5pt507 1.3219 0.7537 1.4773 1.4154 1.6099 1.4247 3.2581 2.7746 1.4566 2.9506 1.3398 1.6372 1.4794 3.2108 2.8225 1.6095 3.0101 1.3847 1.7388 1.7244 3.3622 2.9726 1.8822 3.3002 1.7765 1.9458 2.1828 4.0348 3.3455 2.1526 3.8021 then partition matrix satisfies (2.19). Thus anti-reflexive solution set S ae of Eq. (1.1) is nonempty. 
The optimal approximation anti-reflexive solution ˆX A to X in anti-reflexive solution set S ae is 0.2228 0.3041 0.2794 0.2526 0.7881 0.0434 0.2659 0.1360 0.1599 0.0109 0.1856 0.4455 0.9399 0.2493 0.5279 0.3095 0.2856 0.0975 1.0299 0.1418 0.2648 0.1941 0.8154 0.1789 0.1013 0.1413 0.0060 0.0813 0.8646 0.0777 0.4910 0.3821 0.5514 0.7453 1.0112 0.0011 0.5471 0.0343 0.2643 0.4753 0.0834 0.0277 0.1012 0.1306 0.3428 0.0075 0.7536 0.0034 0.7128 0.1386 ˆX A = 0.0567 0.0202 0.0401 0.3770 0.4286 0.0479 0.2424 0.0327 0.1406 0.3348 0.1241 0.4756 0.2349 0.3844 0.2898 0.1759 0.4834 0.2520 0.4131 0.0871 0.3378 0.0026 0.3715 0.3768 0.2456 0.0531 0.2810 0.1013 0.3051 0.5273 0.0537 0.1922 0.4803 0.2657 0.7664 0.4569 0.1762 0.6476 0.5416 0.4264 0.1150 0.2293 0.1604 0.8032 0.4078 0.2192 0.0831 0.1569 0.0483 0.0451

and ||X̂_A − X*|| = 5.9564. The minimum norm anti-reflexive solution is

X̂_A,min =
0.0707 0.3454 0.1580 0.0781 0.8335 0.1439 0.3267 0.3274 0.0705 0.0832
0.0673 0.2762 0.6802 0.2379 0.3752 0.2639 0.0438 0.2056 1.1734 0.1365
0.2265 0.1405 0.5150 0.0733 0.0545 0.0360 0.3525 0.0412 1.0014 0.3120
0.1431 0.5272 0.4346 0.7876 0.8073 0.2264 0.7265 0.0656 0.2076 0.3327
0.0134 0.0300 0.2985 0.0117 0.5940 0.1441 0.2704 0.3696 0.6478 0.0730
0.0814 0.0289 0.0067 0.0971 0.2396 0.0157 0.1335 0.0694 0.0130 0.1601
0.1246 0.2452 0.1324 0.7302 0.2051 0.0413 0.2752 0.0618 0.3525 0.1074
0.3693 0.0563 0.3068 0.4228 0.2597 0.1068 0.2187 0.0046 0.3368 0.3418
0.1539 0.0651 0.3342 0.2090 0.4602 0.1223 0.0662 0.4914 0.7254 0.2983
0.0031 0.0420 0.0923 0.6476 0.3285 0.2364 0.0146 0.0684 0.0080 0.1999

and ||X̂_A,min|| = 2.3220.

5. Conclusions

In this paper, we characterized the solution sets of Eq. (1.1) over the reflexive and anti-reflexive matrices with respect to a given generalized reflection matrix P: solvability conditions and explicit formulas for the general solution were given. In each solution set, we also obtained the solution nearest, in the Frobenius norm, to a given matrix, as well as the minimum norm solution, and computed both for a numerical example.

References

[1] J.L. Chen, X.H. Chen, Special Matrices, Qinghua University Press, Beijing, China, 2001 (in Chinese).
[2] H.C. Chen, Generalized reflexive matrices: special properties and applications, SIAM J. Matrix Anal. Appl. 19 (1998) 140-153.
[3] H.C. Chen, The SAS Domain Decomposition Method for Structural Analysis, CSRD Technical Report 754, Center for Supercomputing Research and Development, University of Illinois, Urbana, IL, 1988.
[4] H.C. Chen, A. Sameh, Numerical linear algebra algorithms on the Cedar system, in: A.K. Noor (Ed.), Parallel Computations and Their Impact on Mechanics, AMD-vol.
86, The American Society of Mechanical Engineers, 1987, pp. 101-125.
[5] K.W.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl. 119 (1989) 35-50.
[6] K.W.E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations, Linear Algebra Appl. 88 (1987) 89-98.
[7] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990) 1-7.
[8] H. Dai, Linear matrix equations from an inverse problem of vibration theory, Linear Algebra Appl. 246 (1996) 31-47.
[9] F.J.H. Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl. 93 (1987) 1-7.
[10] C.N. He, Positive and symmetric solutions of linear matrix equations, Hunan Mathematic Annual 2 (1996) 84-93 (in Chinese).
[11] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal. 18 (1981) 398-405.
[12] Z.Y. Peng, Central symmetric solutions and the optimal approximation of the equation A^T XA = B, J. Changsha Electric Power College 2 (2002) 3-6 (in Chinese).
[13] Z.Y. Peng, X.Y. Hu, The reflexive and anti-reflexive solutions of the matrix equation AX = B, Linear Algebra Appl. 375 (2003) 147-155.
[14] Y.X. Peng, X.Y. Hu, L. Zhang, Symmetric ortho-symmetric solutions and the optimal approximation of the equation A^T XA = B, Numer. Math. J. Chinese Univ. 4 (2003) 372-377 (in Chinese).
[15] L. Wu, The re-positive definite solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 174 (1992) 145-151.