International Journal of Algebra, Vol. 5, 2011, no. 30, 1489-1504

The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equation $A^{T}XA = D$

D. Krishnaswamy
Department of Mathematics, Annamalai University, Annamalainagar 608 002, India
krishna_swamy2004@yahoo.co.in

G. Punithavalli
Mathematics Wing, Annamalai University, Annamalainagar 608 002, India
Punithavarman@gmail.com

Abstract

In this paper the following problems are discussed.

Problem 1. Given matrices $A \in C^{n\times m}$ and $D \in C^{m\times m}$, find $X \in SSC_P^n$ such that $A^{T}XA = D$, where $SSC_P^n = \{X \in SSC^{n\times n} \mid (PX)^{T} = PX$ for a given $P \in OC^{n\times n}$ satisfying $P^{T} = P\}$.

Problem 2. Given a matrix $X^{*} \in C^{n\times n}$, find $\hat{X} \in S_E$ such that
$$\|X^{*} - \hat{X}\| = \inf_{X \in S_E} \|X^{*} - X\|,$$
where $\|\cdot\|$ is the Frobenius norm and $S_E$ is the solution set of Problem 1.

Expressions for the general solution of Problem 1 are derived, and necessary and sufficient conditions for its solvability are determined. For Problem 2, an expression for the solution is given.

Mathematics Subject Classifications: 15A57, 65F10, 65F20

Keywords: Skew-symmetric ortho-symmetric matrix, matrix equation, matrix nearness problem, optimal approximation, least-squares solutions
1490 D. Krishnaswamy and G. Punithavalli

1 Introduction

Let $C^{n\times m}$ denote the set of all $n\times m$ complex matrices, and let $OC^{n\times n}$, $SC^{n\times n}$, $SSC^{n\times n}$ denote the set of all $n\times n$ orthogonal matrices, the set of all $n\times n$ complex symmetric matrices, and the set of all $n\times n$ complex skew-symmetric matrices, respectively. The symbol $I_k$ will stand for the identity matrix of order $k$, $A^{+}$ for the Moore-Penrose generalized inverse of a matrix $A$, and $\mathrm{rk}(A)$ for the rank of a matrix $A$. For matrices $A, B \in C^{n\times m}$, the expression $A * B$ will be the Hadamard product of $A$ and $B$; also $\|\cdot\|$ will denote the Frobenius norm. Defining the inner product $(A, B) = \mathrm{tr}(B^{T}A)$ for matrices $A, B \in C^{n\times m}$, the space $C^{n\times m}$ becomes a Hilbert space; the norm of a matrix generated by this inner product is the Frobenius norm. If $A = (a_{ij}) \in C^{n\times n}$, let $L_A = (l_{ij}) \in C^{n\times n}$ be defined as follows: $l_{ij} = a_{ij}$ whenever $i > j$ and $l_{ij} = 0$ otherwise $(i, j = 1, 2, \ldots, n)$. Let $e_i$ be the $i$-th column of the identity matrix $I_n$ $(i = 1, 2, \ldots, n)$ and set $S_n = (e_n, e_{n-1}, \ldots, e_1)$. It is easy to see that $S_n^{T} = S_n$ and $S_n^{T}S_n = I_n$.

An inverse problem [2]-[6] arising in the structural modification of the dynamic behaviour of a structure calls for the solution of the matrix equation
$$A^{T}XA = D, \qquad (1.1)$$
where $A \in C^{n\times m}$, $D \in C^{m\times m}$, and the unknown $X$ is required to be complex and symmetric, and positive semidefinite or possibly definite. No assumption is made about the relative sizes of $m$ and $n$, and it is assumed throughout that $A \neq 0$ and $D \neq 0$. Equation (1.1) is a special case of the matrix equation
$$AXB = C. \qquad (1.2)$$
Consistency conditions for equation (1.2) were given by Penrose [7] (see also [1]). When the equation is consistent, a solution can be obtained using generalized inverses. Khatri and Mitra [8] gave necessary and sufficient conditions for the existence of symmetric and positive semidefinite solutions, as well as explicit formulae using generalized inverses.
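Penrose's consistency condition for (1.2), cited above from [7], can be illustrated numerically with the Moore-Penrose inverse: $AXB = C$ is solvable if and only if $AA^{+}CB^{+}B = C$, in which case $X = A^{+}CB^{+}$ is one solution. The sketch below is our own illustration of this standard unconstrained result (not of Problem 1 itself); all sizes and data are arbitrary choices.

```python
import numpy as np

# Build a consistent instance of A X B = C, then recover a solution via
# Moore-Penrose inverses (Penrose's condition: A A^+ C B^+ B = C).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))
X_true = rng.standard_normal((6, 6))
C = A @ X_true @ B              # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
assert np.allclose(A @ Ap @ C @ Bp @ B, C)   # consistency condition holds
X = Ap @ C @ Bp                              # one particular solution
assert np.allclose(A @ X @ B, C)
```

Note that $X$ need not equal `X_true`: when $A$ or $B$ is rank-deficient in the relevant directions, (1.2) has infinitely many solutions.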
In [9], [10], solvability conditions for symmetric and positive definite solutions and general solutions of Equation (1.2) were obtained through the use of the generalized singular value decomposition [11]-[13]. For important results on the inverse problem $A^{T}XA = D$ associated with several kinds of different sets $S$, for instance symmetric matrices, symmetric nonnegative definite matrices, bisymmetric (same as persymmetric) matrices, bisymmetric nonnegative definite matrices, and so on, we refer the reader to [14]-[17]. For the case where the unknown $X$ is skew-symmetric ortho-symmetric, [18] has discussed the
Skew-symmetric ortho-symmetric solutions 1491

inverse problem $AX = B$. However, for this case, the inverse problem $A^{T}XA = D$ has not been dealt with yet. This problem will be considered here.

Definition 1.1. A matrix $P \in C^{n\times n}$ is said to be a symmetric orthogonal matrix if $P^{T} = P$ and $P^{T}P = I_n$.

In this paper, unless stated otherwise, we assume that $P$ is a given symmetric orthogonal matrix.

Definition 1.2. A matrix $X \in C^{n\times n}$ is said to be a skew-symmetric ortho-symmetric matrix if $X^{T} = -X$ and $(PX)^{T} = PX$. We denote the set of all $n\times n$ skew-symmetric ortho-symmetric matrices by $SSC_P^n$.

The problem studied in this paper can now be described as follows.

Problem 1. Given matrices $A \in C^{n\times m}$ and $D \in C^{m\times m}$, find a skew-symmetric ortho-symmetric matrix $X$ such that $A^{T}XA = D$. In this paper, we discuss the solvability of this problem and present an expression for its solution.

The optimal approximation problem of a matrix with the above matrix restriction arises in the testing or recovery of a linear system from incomplete data or in revising given data. A preliminary estimate $X^{*}$ of the unknown matrix $X$ can be obtained from experimental observation values and information on the statistical distribution. The optimal estimate of $X$ is a matrix $\hat{X}$ that satisfies the given matrix restriction and is the best approximation of $X^{*}$; see [19]-[21]. In this paper, we also consider the so-called optimal approximation problem associated with $A^{T}XA = D$. It reads as follows.

Problem 2. Given a matrix $X^{*} \in C^{n\times n}$, find $\hat{X} \in S_E$ such that
$$\|X^{*} - \hat{X}\| = \inf_{X \in S_E} \|X^{*} - X\|,$$
where $S_E$ is the solution set of Problem 1.

We point out that if Problem 1 is solvable, then Problem 2 has a unique solution, and in this case an expression for the solution can be derived.

The paper is organized as follows. In Section 2, we obtain the general form of $S_E$ and the necessary and sufficient conditions under which Problem 1 is solvable, mainly by using the structure of $SSC_P^n$ and orthogonal projection matrices.
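As an informal illustration of Definitions 1.1 and 1.2, the two membership conditions can be checked numerically. The helper names, tolerances, and the small example below are our own, not part of the paper.

```python
import numpy as np

def is_symmetric_orthogonal(P, tol=1e-10):
    """Definition 1.1: P^T = P and P^T P = I_n."""
    n = P.shape[0]
    return (np.allclose(P, P.T, atol=tol)
            and np.allclose(P.T @ P, np.eye(n), atol=tol))

def is_skew_ortho_symmetric(X, P, tol=1e-10):
    """Definition 1.2: X^T = -X and (P X)^T = P X."""
    return (np.allclose(X.T, -X, atol=tol)
            and np.allclose((P @ X).T, P @ X, atol=tol))

# A signature matrix is the simplest symmetric orthogonal example.
P = np.diag([1.0, 1.0, -1.0])
assert is_symmetric_orthogonal(P)

# For this P, a skew-symmetric X whose nonzeros couple the +1 and -1
# eigenspaces is skew-symmetric ortho-symmetric.
X = np.array([[0., 0., 2.],
              [0., 0., 3.],
              [-2., -3., 0.]])
assert is_skew_ortho_symmetric(X, P)
```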
In Section 3, the expression for the solution of the matrix nearness Problem 2 will be determined.
2 The expression of the general solution of Problem 1

In this section we first discuss some structural properties of symmetric orthogonal matrices. Then, given such a matrix $P$, we consider the structure of the subset $SSC_P^n$ of $C^{n\times n}$. Finally, we present necessary and sufficient conditions for the existence of, and expressions for, the skew-symmetric ortho-symmetric (with respect to the given $P$) solutions of Problem 1.

Lemma 2.1. Assume $P$ is a symmetric orthogonal matrix of size $n$, and let
$$P_1 = \frac{1}{2}(I_n + P), \qquad P_2 = \frac{1}{2}(I_n - P). \qquad (2.1)$$
Then $P_1$ and $P_2$ are orthogonal projection matrices satisfying $P_1 + P_2 = I_n$ and $P_1P_2 = 0$.

Proof. Since $P_1 = \frac{1}{2}(I_n + P)$ and $P_2 = \frac{1}{2}(I_n - P)$, we have
$$P_1 + P_2 = \frac{1}{2}(I_n + P) + \frac{1}{2}(I_n - P) = \frac{1}{2}(2I_n) = I_n,$$
and, using $P^2 = P^{T}P = I_n$,
$$P_1P_2 = \frac{1}{2}(I_n + P) \cdot \frac{1}{2}(I_n - P) = \frac{1}{4}(I_n - P + P - P^2) = \frac{1}{4}(I_n - I_n) = 0.$$
Moreover $P_1^{T} = P_1$ and $P_1^2 = \frac{1}{4}(I_n + 2P + P^2) = \frac{1}{2}(I_n + P) = P_1$, and similarly for $P_2$, so $P_1$ and $P_2$ are orthogonal projections.

Lemma 2.2. Assume $P_1$ and $P_2$ are defined as in (2.1) and $\mathrm{rank}(P_1) = r$. Then $\mathrm{rank}(P_2) = n - r$, and there exist column-orthonormal matrices $U_1 \in C^{n\times r}$ and $U_2 \in C^{n\times(n-r)}$ such that $P_1 = U_1U_1^{T}$, $P_2 = U_2U_2^{T}$ and $U_1^{T}U_2 = 0$; consequently $P = U_1U_1^{T} - U_2U_2^{T}$.

Proof. Since $P_1$ and $P_2$ are orthogonal projection matrices satisfying $P_1 + P_2 = I_n$ and $P_1P_2 = 0$, the column space $R(P_2)$ of the matrix $P_2$ is the orthogonal complement of the column space $R(P_1)$ of the matrix $P_1$; in other words, $R^n = R(P_1) \oplus R(P_2)$. Hence, if $\mathrm{rank}(P_1) = r$, then $\mathrm{rank}(P_2) = n - r$. Since $P_1$ and $P_2$ are orthogonal projections of ranks $r$ and $n - r$, there exist column-orthonormal matrices $U_1 \in C^{n\times r}$ and $U_2 \in C^{n\times(n-r)}$ such that $P_1 = U_1U_1^{T}$ and $P_2 = U_2U_2^{T}$. Using $R^n = R(P_1) \oplus R(P_2)$, we have $U_1^{T}U_2 = 0$. Substituting $P_1 = U_1U_1^{T}$ and $P_2 = U_2U_2^{T}$ into (2.1), we obtain $P = U_1U_1^{T} - U_2U_2^{T}$.
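Lemmas 2.1 and 2.2 can be checked numerically. In the sketch below we construct, as an assumption of the example, a symmetric orthogonal $P$ in the form $U\,\mathrm{diag}(I_r, -I_{n-r})\,U^{T}$ for a random orthogonal $U$; the sizes are arbitrary.

```python
import numpy as np

# Random symmetric orthogonal P built from an orthogonal U and a signature.
rng = np.random.default_rng(1)
n, r = 5, 3
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
P = U @ np.diag([1.0] * r + [-1.0] * (n - r)) @ U.T

P1 = (np.eye(n) + P) / 2          # (2.1)
P2 = (np.eye(n) - P) / 2
assert np.allclose(P1 + P2, np.eye(n))
assert np.allclose(P1 @ P2, np.zeros((n, n)))
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # projections
assert np.linalg.matrix_rank(P1) == r
assert np.linalg.matrix_rank(P2) == n - r

# Column-orthonormal U1, U2 as in Lemma 2.2.
U1, U2 = U[:, :r], U[:, r:]
assert np.allclose(U1 @ U1.T, P1) and np.allclose(U2 @ U2.T, P2)
assert np.allclose(U1.T @ U2, 0)
assert np.allclose(U1 @ U1.T - U2 @ U2.T, P)
```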
Elaborating on Lemma 2.2 and its proof, we note that $U = (U_1, U_2)$ is an orthogonal matrix and that the symmetric orthogonal matrix $P$ can be expressed as
$$P = U \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} U^{T}. \qquad (2.2)$$

Lemma 2.3. A matrix $X \in SSC_P^n$ if and only if $X$ can be expressed as
$$X = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T}, \qquad (2.3)$$
where $F \in C^{r\times(n-r)}$ and $U$ is the same as in (2.2).

Proof. Assume $X \in SSC_P^n$, so that $X^{T} = -X$ and $(PX)^{T} = PX$. From $(PX)^{T} = X^{T}P = -XP$ we get $PX = -XP$, and hence $PXP = -XP^2 = -X$. With $P_1$ and $P_2$ as in (2.1),
$$P_1XP_1 = \frac{1}{4}(X + PX + XP + PXP) = \frac{1}{4}(PX + XP),$$
$$P_2XP_2 = \frac{1}{4}(X - PX - XP + PXP) = -\frac{1}{4}(PX + XP),$$
so that $P_1XP_1 + P_2XP_2 = 0$. Hence
$$X = (P_1 + P_2)X(P_1 + P_2) = P_1XP_1 + P_1XP_2 + P_2XP_1 + P_2XP_2 = P_1XP_2 + P_2XP_1.$$
Since $P_1 = U_1U_1^{T}$ and $P_2 = U_2U_2^{T}$,
$$X = U_1U_1^{T}XU_2U_2^{T} + U_2U_2^{T}XU_1U_1^{T}.$$
Let $F = U_1^{T}XU_2$ and $G = U_2^{T}XU_1$. Then
$$F^{T} = (U_1^{T}XU_2)^{T} = U_2^{T}X^{T}U_1 = -U_2^{T}XU_1 = -G,$$
so $G = -F^{T}$, and therefore
$$X = U_1FU_2^{T} - U_2F^{T}U_1^{T} = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T}.$$

Conversely, for any $F \in C^{r\times(n-r)}$, let
$$X = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T}.$$
It is easy to verify that
$$X^{T} = U \begin{pmatrix} 0 & -F \\ F^{T} & 0 \end{pmatrix} U^{T} = -X,$$
and, using (2.2),
$$PXP = U \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} U^{T} = U \begin{pmatrix} 0 & -F \\ F^{T} & 0 \end{pmatrix} U^{T} = -X.$$
Multiplying $PXP = -X$ on the right by $P$ gives $PX = -XP$, so $(PX)^{T} = X^{T}P = -XP = PX$. Thus
$$X = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T} \in SSC_P^n.$$
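A quick numerical check of Lemma 2.3 follows; the sizes and the random block $F$ are our own arbitrary choices.

```python
import numpy as np

# Build X = U [[0, F], [-F^T, 0]] U^T and verify it lies in SSC_P^n
# for P = U diag(I_r, -I_{n-r}) U^T.
rng = np.random.default_rng(2)
n, r = 6, 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
P = U @ np.diag([1.0] * r + [-1.0] * (n - r)) @ U.T

F = rng.standard_normal((r, n - r))
Z = np.zeros
X = U @ np.block([[Z((r, r)), F], [-F.T, Z((n - r, n - r))]]) @ U.T

assert np.allclose(X.T, -X)               # skew-symmetric
assert np.allclose((P @ X).T, P @ X)      # ortho-symmetric w.r.t. P
# Conversely, the block F is recovered as U1^T X U2.
assert np.allclose(U[:, :r].T @ X @ U[:, r:], F)
```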
Lemma 2.4. Let $A \in C^{n\times n}$ and $D \in SSC^{n\times n}$, and assume $A - A^{T} = D$. Then there is precisely one $G \in SC^{n\times n}$ such that $A = L_D + G$, namely
$$G = \frac{1}{2}(A + A^{T}) - \frac{1}{2}(L_D + L_D^{T}).$$

Proof. Since $D$ is skew-symmetric, $D = L_D - L_D^{T}$. It is easy to verify that
$$G = \frac{1}{2}(A + A^{T}) - \frac{1}{2}(L_D + L_D^{T}) \in SC^{n\times n},$$
and, using $A - A^{T} = D$,
$$A = \frac{1}{2}(A - A^{T}) + \frac{1}{2}(A + A^{T}) = \frac{1}{2}(L_D - L_D^{T}) + \frac{1}{2}(A + A^{T}) = L_D + \Big[\frac{1}{2}(A + A^{T}) - \frac{1}{2}(L_D + L_D^{T})\Big] = L_D + G.$$
Conversely, if $A = L_D + G$ with $G \in SC^{n\times n}$, then $G = A - L_D$ is uniquely determined.

Let $A \in C^{n\times m}$ and $D \in C^{m\times m}$, with $U$ defined in (2.2). Set
$$U^{T}A = \begin{pmatrix} A_1 \\ A_2 \end{pmatrix}, \qquad A_1 \in C^{r\times m},\ A_2 \in C^{(n-r)\times m}. \qquad (2.4)$$
The generalized singular value decomposition (see [11], [12], [13]) of the matrix pair $[A_1^{T}, A_2^{T}]$ is
$$A_1^{T} = M\Sigma_{A_1}W^{T}, \qquad A_2^{T} = M\Sigma_{A_2}V^{T}, \qquad (2.5)$$
where $M \in C^{m\times m}$ is a nonsingular matrix, $W \in OC^{r\times r}$, $V \in OC^{(n-r)\times(n-r)}$, and
$$\Sigma_{A_1} = \begin{pmatrix} I_k & 0 & 0 \\ 0 & S_1 & 0 \\ 0 & 0 & O_1 \\ 0 & 0 & 0 \end{pmatrix} \begin{matrix} k \\ s \\ t-k-s \\ m-t \end{matrix} \qquad (2.6)$$
with column blocks of widths $k$, $s$, $r-k-s$,
$$\Sigma_{A_2} = \begin{pmatrix} O_2 & 0 & 0 \\ 0 & S_2 & 0 \\ 0 & 0 & I_{t-k-s} \\ 0 & 0 & 0 \end{pmatrix} \begin{matrix} k \\ s \\ t-k-s \\ m-t \end{matrix} \qquad (2.7)$$
with column blocks of widths $n-r+k-t$, $s$, $t-k-s$, where
$$t = \mathrm{rank}(A_1^{T}, A_2^{T}), \qquad k = t - \mathrm{rank}(A_2), \qquad s = \mathrm{rank}(A_1) + \mathrm{rank}(A_2) - t,$$
$$S_1 = \mathrm{diag}(\alpha_1, \ldots, \alpha_s), \qquad S_2 = \mathrm{diag}(\beta_1, \ldots, \beta_s),$$
with $1 > \alpha_1 \geq \cdots \geq \alpha_s > 0$, $0 < \beta_1 \leq \cdots \leq \beta_s < 1$, and $\alpha_i^2 + \beta_i^2 = 1$, $i = 1, \ldots, s$; here $O$, $O_1$ and $O_2$ are the corresponding zero submatrices.

Theorem 2.5. Given $A \in C^{n\times m}$ and $D \in C^{m\times m}$, let $U$ be defined by (2.2), let $U^{T}A$ have the partitioned form (2.4), and let the generalized singular value decomposition of the matrix pair $[A_1^{T}, A_2^{T}]$ be given by (2.5). Partition the matrix $M^{-1}DM^{-T}$ as
$$M^{-1}DM^{-T} = \begin{pmatrix} D_{11} & D_{12} & D_{13} & D_{14} \\ D_{21} & D_{22} & D_{23} & D_{24} \\ D_{31} & D_{32} & D_{33} & D_{34} \\ D_{41} & D_{42} & D_{43} & D_{44} \end{pmatrix} \begin{matrix} k \\ s \\ t-k-s \\ m-t \end{matrix}$$
with column blocks of widths $k$, $s$, $t-k-s$, $m-t$.
Then Problem 1 has a solution $X \in SSC_P^n$ if and only if
$$D = -D^{T}, \quad D_{11} = 0, \quad D_{33} = 0, \quad D_{41} = 0, \quad D_{42} = 0, \quad D_{43} = 0, \quad D_{44} = 0. \qquad (2.8)$$
In that case it has the general solution
$$X = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T}, \qquad (2.9)$$
where
$$F = W \begin{pmatrix} X_{11} & D_{12}S_2^{-1} & D_{13} \\ X_{21} & S_1^{-1}(L_{D_{22}} + G)S_2^{-1} & S_1^{-1}D_{23} \\ X_{31} & X_{32} & X_{33} \end{pmatrix} V^{T}, \qquad (2.10)$$
with $X_{11} \in C^{k\times(n-r+k-t)}$, $X_{21} \in C^{s\times(n-r+k-t)}$, $X_{31} \in C^{(r-k-s)\times(n-r+k-t)}$, $X_{32} \in C^{(r-k-s)\times s}$, $X_{33} \in C^{(r-k-s)\times(t-k-s)}$ and $G \in SC^{s\times s}$ arbitrary matrices.

Proof. Necessity. Assume equation (1.1) has a solution $X \in SSC_P^n$. Since $X^{T} = -X$,
$$D^{T} = (A^{T}XA)^{T} = A^{T}X^{T}A = -A^{T}XA = -D,$$
so $D = -D^{T}$. By Lemma 2.3, $X$ can be expressed as
$$X = U \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix} U^{T}, \qquad (2.11)$$
where $F \in C^{r\times(n-r)}$. Since $U$ is orthogonal, by the definition of $A_i$ $(i = 1, 2)$ in (2.4), equation (1.1) is equivalent to
$$A_1^{T}FA_2 - A_2^{T}F^{T}A_1 = D. \qquad (2.12)$$
Substituting (2.5) into (2.12) and multiplying on the left by $M^{-1}$ and on the right by $M^{-T}$, we obtain
$$\Sigma_{A_1}(W^{T}FV)\Sigma_{A_2}^{T} - \Sigma_{A_2}(W^{T}FV)^{T}\Sigma_{A_1}^{T} = M^{-1}DM^{-T}. \qquad (2.13)$$
Partition the matrix $W^{T}FV$ as
$$W^{T}FV = \begin{pmatrix} X_{11} & X_{12} & X_{13} \\ X_{21} & X_{22} & X_{23} \\ X_{31} & X_{32} & X_{33} \end{pmatrix}, \qquad (2.14)$$
where the row blocks have heights $k$, $s$, $r-k-s$ and the column blocks have widths $n-r+k-t$, $s$, $t-k-s$; in particular $X_{11} \in C^{k\times(n-r+k-t)}$, $X_{22} \in C^{s\times s}$, $X_{33} \in C^{(r-k-s)\times(t-k-s)}$. Substituting $W^{T}FV$ and $M^{-1}DM^{-T}$ into (2.13), we have
$$\begin{pmatrix} 0 & X_{12}S_2 & X_{13} & 0 \\ -S_2X_{12}^{T} & S_1X_{22}S_2 - (S_1X_{22}S_2)^{T} & S_1X_{23} & 0 \\ -X_{13}^{T} & -X_{23}^{T}S_1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} D_{11} & D_{12} & D_{13} & D_{14} \\ D_{21} & D_{22} & D_{23} & D_{24} \\ D_{31} & D_{32} & D_{33} & D_{34} \\ D_{41} & D_{42} & D_{43} & D_{44} \end{pmatrix}. \qquad (2.15)$$
Therefore (2.15) holds if and only if (2.8) holds together with
$$X_{12} = D_{12}S_2^{-1}, \qquad X_{13} = D_{13}, \qquad X_{23} = S_1^{-1}D_{23},$$
and
$$S_1X_{22}S_2 - (S_1X_{22}S_2)^{T} = D_{22}.$$
It follows from Lemma 2.4 that $S_1X_{22}S_2 = L_{D_{22}} + G$, i.e.
$$X_{22} = S_1^{-1}(L_{D_{22}} + G)S_2^{-1},$$
where $G \in SC^{s\times s}$ is an arbitrary symmetric matrix. Substituting the above into (2.14) and (2.11), we obtain the formulations (2.9) and (2.10).

Sufficiency. Suppose (2.8) holds, and let
$$F_0 = W \begin{pmatrix} X_{11} & D_{12}S_2^{-1} & D_{13} \\ X_{21} & S_1^{-1}(L_{D_{22}} + G)S_2^{-1} & S_1^{-1}D_{23} \\ X_{31} & X_{32} & X_{33} \end{pmatrix} V^{T}.$$
Obviously $F_0 \in C^{r\times(n-r)}$. By Lemma 2.3,
$$X_0 = U \begin{pmatrix} 0 & F_0 \\ -F_0^{T} & 0 \end{pmatrix} U^{T} \in SSC_P^n.$$
Hence
$$A^{T}X_0A = A^{T}UU^{T}X_0UU^{T}A = (A_1^{T}\ A_2^{T}) \begin{pmatrix} 0 & F_0 \\ -F_0^{T} & 0 \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = A_1^{T}F_0A_2 - A_2^{T}F_0^{T}A_1 = D,$$
where the last equality follows by reversing the computation (2.13)-(2.15). This implies that $X_0 \in SSC_P^n$ is a skew-symmetric ortho-symmetric solution of equation (1.1), which completes the proof.

3 The expression of the solution of Problem 2

To prepare for an explicit expression for the solution of the matrix nearness Problem 2, we first verify the following lemma.

Lemma 3.1. Suppose that $E, F \in C^{s\times s}$, and let $S_a = \mathrm{diag}(a_1, \ldots, a_s) > 0$ and $S_b = \mathrm{diag}(b_1, \ldots, b_s) > 0$. Then there exist a unique $S_s \in SC^{s\times s}$ and a unique $S_r \in SSC^{s\times s}$ minimizing
$$\|S_aSS_b - E\|^2 + \|S_aSS_b - F\|^2 \qquad (3.1)$$
over $SC^{s\times s}$ and $SSC^{s\times s}$ respectively, and
$$S_s = \Phi * [S_a(E + F)S_b + S_b(E + F)^{T}S_a], \qquad (3.2)$$
$$S_r = \Phi * [S_a(E + F)S_b - S_b(E + F)^{T}S_a], \qquad (3.3)$$
where $*$ denotes the Hadamard product and
$$\Phi = (\phi_{ij}) \in SC^{s\times s}, \qquad \phi_{ij} = \frac{1}{2(a_i^2b_j^2 + a_j^2b_i^2)}, \qquad 1 \leq i, j \leq s. \qquad (3.4)$$

Proof. We prove only the existence of $S_r$ and formula (3.3); the symmetric case is analogous. For any $S = (s_{ij}) \in SSC^{s\times s}$ and $E = (e_{ij}), F = (f_{ij}) \in C^{s\times s}$, since $s_{ii} = 0$ and $s_{ji} = -s_{ij}$,
$$g(S) := \|S_aSS_b - E\|^2 + \|S_aSS_b - F\|^2 = \sum_{1 \leq i, j \leq s} \big[(a_ib_js_{ij} - e_{ij})^2 + (a_ib_js_{ij} - f_{ij})^2\big]$$
$$= \sum_{1 \leq i < j \leq s} \big[(a_ib_js_{ij} - e_{ij})^2 + (a_jb_is_{ij} + e_{ji})^2 + (a_ib_js_{ij} - f_{ij})^2 + (a_jb_is_{ij} + f_{ji})^2\big] + \sum_{1 \leq i \leq s} (e_{ii}^2 + f_{ii}^2).$$
Setting $\partial g / \partial s_{ij} = 0$ $(1 \leq i < j \leq s)$, we have
$$2a_ib_j(a_ib_js_{ij} - e_{ij}) + 2a_jb_i(a_jb_is_{ij} + e_{ji}) + 2a_ib_j(a_ib_js_{ij} - f_{ij}) + 2a_jb_i(a_jb_is_{ij} + f_{ji}) = 0,$$
that is,
$$4(a_i^2b_j^2 + a_j^2b_i^2)s_{ij} = 2a_ib_j(e_{ij} + f_{ij}) - 2a_jb_i(e_{ji} + f_{ji}),$$
so
$$s_{ij} = \frac{a_ib_j(e_{ij} + f_{ij}) - a_jb_i(e_{ji} + f_{ji})}{2(a_i^2b_j^2 + a_j^2b_i^2)}, \qquad 1 \leq i, j \leq s.$$
Since $g$ is a strictly convex quadratic in the free entries $s_{ij}$ $(i < j)$, there exists a unique solution $S_r = (\hat{s}_{ij}) \in SSC^{s\times s}$ of (3.1) with
$$\hat{s}_{ij} = \frac{a_ib_j(e_{ij} + f_{ij}) - a_jb_i(e_{ji} + f_{ji})}{2(a_i^2b_j^2 + a_j^2b_i^2)},$$
that is, with $\Phi$ as in (3.4),
$$S_r = \Phi * [S_a(E + F)S_b - S_b(E + F)^{T}S_a].$$

Theorem 3.2. Let $X^{*} \in C^{n\times n}$, and let the generalized singular value decomposition of the matrix pair $[A_1^{T}, A_2^{T}]$ be given by (2.5). Write
$$U^{T}X^{*}U = \begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix}, \qquad Z_{11} \in C^{r\times r},\ Z_{22} \in C^{(n-r)\times(n-r)}, \qquad (3.5)$$
$$W^{T}Z_{12}V = \begin{pmatrix} X'_{11} & X'_{12} & X'_{13} \\ X'_{21} & X'_{22} & X'_{23} \\ X'_{31} & X'_{32} & X'_{33} \end{pmatrix}, \qquad W^{T}Z_{21}^{T}V = \begin{pmatrix} Y'_{11} & Y'_{12} & Y'_{13} \\ Y'_{21} & Y'_{22} & Y'_{23} \\ Y'_{31} & Y'_{32} & Y'_{33} \end{pmatrix}, \qquad (3.6)$$
both partitioned as in (2.14).
If Problem 1 is solvable, then Problem 2 has a unique solution $\hat{X}$, which can be expressed as
$$\hat{X} = U \begin{pmatrix} 0 & \hat{F} \\ -\hat{F}^{T} & 0 \end{pmatrix} U^{T}, \qquad (3.7)$$
where
$$\hat{F} = W \begin{pmatrix} \frac{1}{2}(X'_{11} - Y'_{11}) & D_{12}S_2^{-1} & D_{13} \\ \frac{1}{2}(X'_{21} - Y'_{21}) & S_1^{-1}(L_{D_{22}} + \hat{G})S_2^{-1} & S_1^{-1}D_{23} \\ \frac{1}{2}(X'_{31} - Y'_{31}) & \frac{1}{2}(X'_{32} - Y'_{32}) & \frac{1}{2}(X'_{33} - Y'_{33}) \end{pmatrix} V^{T},$$
$$\hat{G} = \Phi * \big[S_1^{-1}(X'_{22} - Y'_{22} - 2S_1^{-1}L_{D_{22}}S_2^{-1})S_2^{-1} + S_2^{-1}(X'_{22} - Y'_{22} - 2S_1^{-1}L_{D_{22}}S_2^{-1})^{T}S_1^{-1}\big],$$
with
$$\Phi = (\phi_{ij}), \qquad \phi_{ij} = \frac{\alpha_i^2\alpha_j^2\beta_i^2\beta_j^2}{2(\alpha_i^2\beta_j^2 + \alpha_j^2\beta_i^2)}, \qquad 1 \leq i, j \leq s.$$

Proof. Using the invariance of the Frobenius norm under orthogonal transformations, for any $X \in S_E$ given by (2.9) we have, by (3.5),
$$\|X^{*} - X\|^2 = \Big\|U^{T}X^{*}U - \begin{pmatrix} 0 & F \\ -F^{T} & 0 \end{pmatrix}\Big\|^2 = \|Z_{11}\|^2 + \|Z_{12} - F\|^2 + \|Z_{21} + F^{T}\|^2 + \|Z_{22}\|^2.$$
Since $\|Z_{21} + F^{T}\| = \|Z_{21}^{T} + F\|$, applying orthogonal invariance once more and substituting (2.10) and (3.6),
$$\|X^{*} - X\|^2 = \|Z_{11}\|^2 + \|Z_{22}\|^2 + \|W^{T}Z_{12}V - W^{T}FV\|^2 + \|W^{T}Z_{21}^{T}V + W^{T}FV\|^2.$$
Comparing blocks, $\|X^{*} - X\| = \min$ is equivalent to
$$\|X_{11} - X'_{11}\|^2 + \|X_{11} + Y'_{11}\|^2 = \min, \qquad \|X_{21} - X'_{21}\|^2 + \|X_{21} + Y'_{21}\|^2 = \min,$$
$$\|X_{31} - X'_{31}\|^2 + \|X_{31} + Y'_{31}\|^2 = \min, \qquad \|X_{32} - X'_{32}\|^2 + \|X_{32} + Y'_{32}\|^2 = \min,$$
$$\|X_{33} - X'_{33}\|^2 + \|X_{33} + Y'_{33}\|^2 = \min,$$
$$\|S_1^{-1}GS_2^{-1} - (X'_{22} - S_1^{-1}L_{D_{22}}S_2^{-1})\|^2 + \|S_1^{-1}GS_2^{-1} + (Y'_{22} + S_1^{-1}L_{D_{22}}S_2^{-1})\|^2 = \min.$$
The first five problems are solved by
$$X_{11} = \tfrac{1}{2}(X'_{11} - Y'_{11}), \quad X_{21} = \tfrac{1}{2}(X'_{21} - Y'_{21}), \quad X_{31} = \tfrac{1}{2}(X'_{31} - Y'_{31}), \quad X_{32} = \tfrac{1}{2}(X'_{32} - Y'_{32}), \quad X_{33} = \tfrac{1}{2}(X'_{33} - Y'_{33}),$$
and applying Lemma 3.1 to the last problem, with $S_a = S_1^{-1}$, $S_b = S_2^{-1}$, $E = X'_{22} - S_1^{-1}L_{D_{22}}S_2^{-1}$, $F = -(Y'_{22} + S_1^{-1}L_{D_{22}}S_2^{-1})$ and $G$ symmetric, gives
$$G = \Phi * \big[S_1^{-1}(X'_{22} - Y'_{22} - 2S_1^{-1}L_{D_{22}}S_2^{-1})S_2^{-1} + S_2^{-1}(X'_{22} - Y'_{22} - 2S_1^{-1}L_{D_{22}}S_2^{-1})^{T}S_1^{-1}\big].$$
Substituting these $X_{11}, X_{21}, X_{31}, X_{32}, X_{33}$ and $G$ into (2.9) and (2.10), we obtain the expression (3.7) for the solution of the matrix nearness Problem 2.
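The entrywise minimizer of Lemma 3.1, which drives the formula for $\hat{G}$ above, can be sanity-checked numerically. The sketch below (all data randomly generated, our own choices) builds $S_r$ from (3.3)-(3.4), verifies that the skew-symmetric part of the gradient of the objective vanishes at $S_r$, and confirms that random skew-symmetric perturbations do not decrease the objective.

```python
import numpy as np

# Check Lemma 3.1: S_r minimizes ||Sa S Sb - E||^2 + ||Sa S Sb - F||^2
# over skew-symmetric S.
rng = np.random.default_rng(3)
s = 4
a = rng.uniform(0.5, 2.0, s)
b = rng.uniform(0.5, 2.0, s)
Sa, Sb = np.diag(a), np.diag(b)
E = rng.standard_normal((s, s))
F = rng.standard_normal((s, s))

# Phi from (3.4); the Hadamard product * realizes (3.3).
Phi = 1.0 / (2.0 * (np.outer(a**2, b**2) + np.outer(b**2, a**2)))
M = E + F
Sr = Phi * (Sa @ M @ Sb - Sb @ M.T @ Sa)

assert np.allclose(Sr.T, -Sr)            # skew-symmetric

def f(S):
    return (np.linalg.norm(Sa @ S @ Sb - E)**2
            + np.linalg.norm(Sa @ S @ Sb - F)**2)

# Stationarity: the gradient 2 Sa (2 Sa S Sb - E - F) Sb, projected onto
# the subspace of skew-symmetric matrices, vanishes at S_r.
Grad = 2 * Sa @ (2 * Sa @ Sr @ Sb - E - F) @ Sb
assert np.allclose((Grad - Grad.T) / 2, 0)

# S_r beats nearby skew-symmetric perturbations (f is convex).
for _ in range(20):
    W = rng.standard_normal((s, s))
    K = (W - W.T) / 2
    assert f(Sr) <= f(Sr + 0.1 * K) + 1e-12
```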
References

[1] A. Ben-Israel and T.N.E. Greville. Generalized Inverses: Theory and Applications. Wiley, New York, 1974.

[2] A. Berman. Mass matrix correction using an incomplete set of measured modes. AIAA J., 17:1147-1148, 1979.

[3] A. Berman and E.J. Nagy. Improvement of a large analytical model using test data. AIAA J., 21:1168-1173, 1983.

[4] F.A. Gao, Z.H. Song, and R.Q. Liu. The classification and applications of inverse eigenvalue problems (in Chinese). Technical Report, presented at the First Conference on Inverse Eigenvalue Problems, Xian, China, 1986.

[5] J.G. Sun. Two kinds of inverse eigenvalue problems for real symmetric matrices (in Chinese). Math. Numer. Sinica, 10:282-290, 1988.

[6] F.S. Wei. Analytical dynamic model improvement using vibration test data. AIAA J., 28:175-177, 1990.

[7] R. Penrose. A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51:406-413, 1955.

[8] C.G. Khatri and S.K. Mitra. Hermitian and nonnegative definite solutions of linear matrix equations. SIAM J. Appl. Math., 31:579-585, 1976.

[9] K.W.E. Chu. Symmetric solutions of linear matrix equations by matrix decompositions. Linear Algebra Appl., 119:35-50, 1989.

[10] H. Dai. On the symmetric solutions of linear matrix equations. Linear Algebra Appl., 131:1-7, 1990.

[11] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins U.P., Baltimore, 1989.

[12] C.C. Paige and M.A. Saunders. Towards a generalized singular value decomposition. SIAM J. Numer. Anal., 18:398-405, 1981.

[13] G.W. Stewart. Computing the CS-decomposition of a partitioned orthogonal matrix. Numer. Math., 40:298-306, 1982.

[14] H. Dai and P. Lancaster. Linear matrix equations from an inverse problem of vibration theory. Linear Algebra Appl., 246:31-47, 1996.
[15] C.N. He. Positive definite and symmetric positive definite solutions of linear matrix equations (in Chinese). Hunan Annals of Mathematics, 84-93, 1996.

[16] Z.Y. Peng, X.Y. Hu, and L. Zhang. One kind of inverse problem for the bisymmetric matrices. Math. Numer. Sinica, 27:81-95, 2005.

[17] C.P. Guo, X.Y. Hu, and L. Zhang. A class of inverse problems of matrix equation $X^{T}AX = B$. Numerical Mathematics: A Journal of Chinese Universities, 26:222-229, 2004.

[18] F.Z. Zhou and X.Y. Hu. The inverse problem of anti-symmetric ortho-symmetric matrices. J. Math. (PRC), 25:179-184, 2005.

[19] Z. Jiang and Q. Lu. On optimal approximation of a matrix under spectral restriction. Math. Numer. Sinica, 1:47-52, 1988.

[20] N.J. Higham. Computing a nearest symmetric positive semidefinite matrix. Linear Algebra Appl., 103:103-118, 1988.

[21] L. Zhang. The approximation on the closed convex cone and its numerical application. Hunan Annals of Mathematics, 6:43-48, 1986.

[22] Qing-Feng Xiao, Xi-Yan Hu, and Lei Zhang. The anti-symmetric ortho-symmetric solutions of the matrix equation $A^{T}XA = D$. Electronic Journal of Linear Algebra, 18:21-29, 2009.

Received: January, 2011