An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C

Journal of Computational and Applied Mathematics (2008)

Guang-Xin Huang (a), Feng Yin (b), Ke Guo (a)

(a) College of Information and Management, Chengdu University of Technology, Chengdu, PR China
(b) Department of Mathematics, Sichuan University of Science and Engineering, Zigong, PR China

Received 7 May 2006; received in revised form 4 August 2006

Abstract

In this paper an iterative method is constructed to solve the linear matrix equation AXB = C over skew-symmetric matrices X. By this iterative method, the solvability of the equation AXB = C over skew-symmetric matrices can be determined automatically. When the equation AXB = C is consistent over skew-symmetric matrices, then for any skew-symmetric initial iterative matrix X_1 the skew-symmetric solution can be obtained within finitely many iterative steps in the absence of roundoff errors. The unique least-norm skew-symmetric iterative solution of AXB = C can be derived when an appropriate initial iterative matrix is chosen. A necessary and sufficient condition for the equation AXB = C to be inconsistent is given. Furthermore, the optimal approximate solution of AXB = C for a given matrix X_0 can be derived by finding the least-norm skew-symmetric solution of a new corresponding matrix equation A X̃ B = C̃. Finally, several numerical examples are given to illustrate that our iterative method is effective. © 2006 Elsevier B.V. All rights reserved.

MSC: 65F15; 65F10

Keywords: Iterative method; Skew-symmetric solution; Least-norm skew-symmetric solution; Optimal approximate solution

1. Introduction

The following notation is used throughout. Let R^{m×n} denote the set of all m×n real matrices, and let SSR^{n×n} denote the set of all n×n real skew-symmetric matrices in R^{n×n}. The superscripts T and + denote the transpose and the Moore–Penrose generalized inverse of a matrix, respectively.
In the matrix space R^{m×n}, define the inner product ⟨A, B⟩ = tr(B^T A) for all A, B ∈ R^{m×n}; ||A|| then denotes the Frobenius norm of A. R(A) denotes the column space of A. vec(·) denotes the vec operator, i.e., vec(A) = (a_1^T, a_2^T, ..., a_n^T)^T ∈ R^{mn} for the matrix A = (a_1, a_2, ..., a_n) ∈ R^{m×n}, a_i ∈ R^m, i = 1, ..., n. A ⊗ B stands for the Kronecker product of the matrices A and B. For further notation we refer to [3]. In this paper we consider the following two problems.

This work is supported by the Key Science Fund and the Youth Foundation of Chengdu University of Technology. E-mail addresses: huangx@cdut.edu.cn (G.-X. Huang), fyin@suse.edu.cn (F. Yin).
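These definitions can be checked numerically. The sketch below is our own illustration (not part of the paper): it verifies that ⟨A, B⟩ = tr(B^T A) agrees with the entrywise (Frobenius) inner product, and that the column-stacking vec operator satisfies the Kronecker identity vec(AXB) = (B^T ⊗ A) vec(X), which is used later in the proof of Theorem 2.3.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
X = rng.standard_normal((n, n))

def vec(M):
    # column-stacking vec operator, as defined above
    return M.reshape(-1, order="F")

# <A1, A2> = tr(A2^T A1) coincides with the entrywise (Frobenius) inner product
A1 = rng.standard_normal((m, n))
A2 = rng.standard_normal((m, n))
assert np.isclose(np.trace(A2.T @ A1), np.sum(A1 * A2))

# Kronecker identity: vec(A X B) = (B^T (x) A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
print("identities verified")
```

Both checks are exact identities, so they hold up to floating-point tolerance for any random instance.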

Problem I. Given matrices A ∈ R^{m×n}, B ∈ R^{n×p} and C ∈ R^{m×p}, find X ∈ SSR^{n×n} such that

AXB = C.  (1)

Problem II. When Problem I is consistent, let S_E denote the set of skew-symmetric solutions of Problem I. For a given matrix X_0 ∈ R^{n×n}, find X̂ ∈ S_E such that

||X̂ − X_0|| = min_{X ∈ S_E} ||X − X_0||.  (2)

In other words, Problem II asks for the optimal approximation in S_E to a given matrix X_0 ∈ R^{n×n}.

Problem I has been considered for other special solution structures, e.g. symmetric, triangular or diagonal X; we refer to Dai [5], Eric Chu [2], Don [4], Magnus [6], Morris and Odell [8] and Bjerhammar [1] for details. Mitra [7] considered common solutions to a pair of linear matrix equations A_1 X B_1 = C_1, A_2 X B_2 = C_2. In these papers the problem is treated by means of matrix decompositions, such as the singular value decomposition (SVD), the generalized SVD (GSVD), the quotient SVD (QSVD) and the canonical correlation decomposition (CCD). These methods, however, are difficult to apply to Problem II, and the resulting representations of the solutions of Problems I and II are complicated. Huang and Yin [11,12] recently solved the constrained inverse eigenproblem and the associated approximation problem for anti-Hermitian R-symmetric matrices, as well as the matrix inverse problem and its optimal approximation problem for R-symmetric matrices. Recently, Peng et al. [9] constructed an iterative method to solve the linear matrix equation AXB = C over symmetric matrices X, and Peng [10] gave an iterative method for the minimum Frobenius norm residual problem min ||AXB − C|| with unknown symmetric matrix X.
In this paper we consider the skew-symmetric solution of the linear matrix equation AXB = C. We construct an iterative method by which the solvability of Problem I can be determined automatically, a solution can be obtained within finitely many iterative steps whenever Problem I is consistent, and the solution of Problem II can be obtained by finding the least-norm skew-symmetric solution of a new matrix equation A X̃ B = C̃.

This paper is organized as follows. In Section 2 we solve Problem I by constructing an iterative method: for an arbitrary initial matrix X_1 ∈ SSR^{n×n}, if there exists a positive integer k such that R_k ≠ 0 and Q_k = 0, then Problem I is inconsistent, where R_k and Q_k (k = 1, 2, ...) are defined in Algorithm 2.1. If Problem I is consistent, then for an arbitrary initial matrix X_1 ∈ SSR^{n×n} we can obtain a solution X* ∈ SSR^{n×n} of Problem I within finitely many iterative steps in the absence of roundoff errors. Letting X_1 = A^T H^T B^T − B H A, where H is an arbitrary matrix in R^{p×m}, or more specially X_1 = 0 ∈ SSR^{n×n}, we obtain the unique least-norm solution of Problem I. Then, in Section 3, we give the optimal approximate solution of Problem II by finding the least-norm skew-symmetric solution of a corresponding new matrix equation. In Section 4 several numerical examples are given to illustrate the application of our iterative methods.

2. The solution of Problem I

In this section we first introduce an iterative method to solve Problem I and then prove that it is convergent.

Algorithm 2.1.
Step 1: Input matrices A ∈ R^{m×n}, B ∈ R^{n×p}, C ∈ R^{m×p};
Step 2: Choose an arbitrary matrix X_1 ∈ SSR^{n×n}; compute
R_1 = C − A X_1 B,
P_1 = A^T R_1 B^T,
Q_1 = (1/2)(P_1 − P_1^T),
k := 1;
Step 3: If R_1 = 0, or R_1 ≠ 0 and Q_1 = 0, stop; else go to Step 4;
Step 4: Compute
X_{k+1} = X_k + (||R_k||² / ||Q_k||²) Q_k,

R_{k+1} = C − A X_{k+1} B,
P_{k+1} = A^T R_{k+1} B^T,
Q_{k+1} = (1/2)(P_{k+1} − P_{k+1}^T) − (tr(P_{k+1}^T Q_k) / ||Q_k||²) Q_k;
Step 5: If R_{k+1} = 0, or R_{k+1} ≠ 0 and Q_{k+1} = 0, stop; else let k := k + 1 and go to Step 4.

Obviously, Q_i ∈ SSR^{n×n} and X_i ∈ SSR^{n×n} for i = 1, 2, ....

Lemma 2.1. For the sequences {R_i}, {P_i} and {Q_i} generated by Algorithm 2.1, we have

tr(R_{i+1}^T R_j) = tr(R_i^T R_j) + (||R_i||² / ||Q_i||²) tr(Q_i P_j).  (3)

Proof. Noting that Q_i^T = −Q_i, by Algorithm 2.1 we have

tr(R_{i+1}^T R_j) = tr[(C − A X_{i+1} B)^T R_j]
= tr{[C − A (X_i + (||R_i||²/||Q_i||²) Q_i) B]^T R_j}
= tr[(C − A X_i B)^T R_j] − (||R_i||²/||Q_i||²) tr[(A Q_i B)^T R_j]
= tr(R_i^T R_j) − (||R_i||²/||Q_i||²) tr(B^T Q_i^T A^T R_j)
= tr(R_i^T R_j) − (||R_i||²/||Q_i||²) tr(Q_i^T A^T R_j B^T)
= tr(R_i^T R_j) − (||R_i||²/||Q_i||²) tr(Q_i^T P_j)
= tr(R_i^T R_j) + (||R_i||²/||Q_i||²) tr(Q_i P_j).  □

Lemma 2.2. For the sequences {R_i} and {Q_i} generated by Algorithm 2.1 and k ≥ 2, we have

tr(R_i^T R_j) = 0,  tr(Q_i^T Q_j) = 0,  i, j = 1, ..., k, i ≠ j.  (4)

Proof. Since tr(R_i^T R_j) = tr(R_j^T R_i) and tr(Q_i^T Q_j) = tr(Q_j^T Q_i) for all i, j = 1, ..., k, we only need to prove that tr(R_i^T R_j) = 0 and tr(Q_i^T Q_j) = 0 for all 1 ≤ j < i ≤ k. We prove the conclusion by induction, in two steps.

Step 1: We show that

tr(R_{i+1}^T R_i) = 0,  tr(Q_{i+1}^T Q_i) = 0,  i = 1, ..., k − 1.  (5)

To prove this conclusion we also use induction.
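Deferring the rest of the proof for a moment, we note that Algorithm 2.1 translates directly into code. The following NumPy sketch is our own illustration: the function name and the tolerance-based stopping rule are our choices, replacing the paper's exact tests R_k = 0 and Q_k = 0, which roundoff makes impractical (cf. the examples in Section 4).

```python
import numpy as np

def skew_solve(A, B, C, X1=None, tol=1e-9, maxit=500):
    """Sketch of Algorithm 2.1: seek a skew-symmetric X with A X B = C.

    Returns (X, status): status True means ||R_k|| fell below tol (consistent
    case), False means Q_k vanished while R_k did not (inconsistent case,
    as characterized by Theorem 2.2) or maxit was reached.
    """
    n = A.shape[1]
    X = np.zeros((n, n)) if X1 is None else X1.copy()  # X1 must be skew-symmetric
    R = C - A @ X @ B                                  # residual R_1
    P = A.T @ R @ B.T
    Q = 0.5 * (P - P.T)                                # skew-symmetric direction Q_1
    for _ in range(maxit):
        if np.linalg.norm(R) < tol:
            return X, True
        if np.linalg.norm(Q) < tol:
            return X, False
        # Step 4 of Algorithm 2.1
        X = X + (np.linalg.norm(R)**2 / np.linalg.norm(Q)**2) * Q
        R = C - A @ X @ B
        P = A.T @ R @ B.T
        Q = 0.5 * (P - P.T) - (np.trace(P.T @ Q) / np.linalg.norm(Q)**2) * Q
    return X, False

# consistent test problem built from a known skew-symmetric matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 4))
S = rng.standard_normal((5, 5)); S = S - S.T
C = A @ S @ B
X, ok = skew_solve(A, B, C)
assert ok and np.allclose(X, -X.T) and np.allclose(A @ X @ B, C)
```

The two exit branches mirror Theorem 2.2 below: a vanishing Q_k with a non-vanishing R_k signals inconsistency.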

For i = 1, by Lemma 2.1 and Algorithm 2.1, noting that Q_1^T = −Q_1, it follows that

tr(R_2^T R_1) = tr(R_1^T R_1) + (||R_1||²/||Q_1||²) tr(Q_1 P_1)
= ||R_1||² + (||R_1||²/||Q_1||²) tr[Q_1 ((P_1 + P_1^T)/2 + (P_1 − P_1^T)/2)]
= ||R_1||² + (||R_1||²/||Q_1||²) tr(Q_1 Q_1)
= ||R_1||² − (||R_1||²/||Q_1||²) tr(Q_1^T Q_1) = 0

(the trace of the product of the skew-symmetric Q_1 with the symmetric part of P_1 vanishes), and

tr(Q_2^T Q_1) = tr{[(1/2)(P_2 − P_2^T) − (tr(P_2^T Q_1)/||Q_1||²) Q_1]^T Q_1}
= (1/2) tr[(P_2^T − P_2) Q_1] − (tr(P_2^T Q_1)/||Q_1||²) tr(Q_1^T Q_1)
= (1/2)[tr(P_2^T Q_1) + tr(P_2^T Q_1)] − tr(P_2^T Q_1) = 0,

where tr(P_2 Q_1) = −tr(P_2^T Q_1) because Q_1 is skew-symmetric.

Assume now that (5) holds for i = s − 1, i.e., tr(R_s^T R_{s−1}) = 0 and tr(Q_s^T Q_{s−1}) = 0. Noting that Q_s^T = −Q_s and that (P_s − P_s^T)/2 = Q_s + (tr(P_s^T Q_{s−1})/||Q_{s−1}||²) Q_{s−1} by the definition of Q_s, by Lemma 2.1 we have

tr(R_{s+1}^T R_s) = tr(R_s^T R_s) + (||R_s||²/||Q_s||²) tr(Q_s P_s)
= ||R_s||² − (||R_s||²/||Q_s||²) tr(Q_s^T P_s)
= ||R_s||² − (||R_s||²/||Q_s||²) tr[Q_s^T (P_s − P_s^T)/2]
= ||R_s||² − (||R_s||²/||Q_s||²) [tr(Q_s^T Q_s) + (tr(P_s^T Q_{s−1})/||Q_{s−1}||²) tr(Q_s^T Q_{s−1})]
= ||R_s||² − (||R_s||²/||Q_s||²) ||Q_s||² = 0

and

tr(Q_{s+1}^T Q_s) = tr{[(1/2)(P_{s+1} − P_{s+1}^T) − (tr(P_{s+1}^T Q_s)/||Q_s||²) Q_s]^T Q_s}
= (1/2) tr[(P_{s+1}^T − P_{s+1}) Q_s] − (tr(P_{s+1}^T Q_s)/||Q_s||²) tr(Q_s^T Q_s)
= tr(P_{s+1}^T Q_s) − tr(P_{s+1}^T Q_s) = 0,

where again tr(P_{s+1} Q_s) = −tr(P_{s+1}^T Q_s). Hence (5) holds for i = s, and therefore (5) holds by the principle of induction.

Step 2: Assume that tr(R_s^T R_j) = 0 and tr(Q_s^T Q_j) = 0 for j = 1, ..., s − 1; we show that

tr(R_{s+1}^T R_j) = 0,  tr(Q_{s+1}^T Q_j) = 0,  j = 1, ..., s.  (6)

In fact, by Lemma 2.1 and the induction hypothesis, for j = 1, ..., s − 1 we have

tr(R_{s+1}^T R_j) = tr(R_s^T R_j) + (||R_s||²/||Q_s||²) tr(Q_s P_j)
= −(||R_s||²/||Q_s||²) tr(Q_s^T P_j)
= −(||R_s||²/||Q_s||²) tr[Q_s^T (P_j − P_j^T)/2]
= −(||R_s||²/||Q_s||²) [tr(Q_s^T Q_j) + (tr(P_j^T Q_{j−1})/||Q_{j−1}||²) tr(Q_s^T Q_{j−1})] = 0

(for j = 1 simply (P_1 − P_1^T)/2 = Q_1), and

tr(Q_{s+1}^T Q_j) = tr{[(1/2)(P_{s+1} − P_{s+1}^T) − (tr(P_{s+1}^T Q_s)/||Q_s||²) Q_s]^T Q_j}
= (1/2) tr[(P_{s+1}^T − P_{s+1}) Q_j] − (tr(P_{s+1}^T Q_s)/||Q_s||²) tr(Q_s^T Q_j)

= (1/2) tr(P_{s+1}^T Q_j) − (1/2) tr(P_{s+1} Q_j)
= (1/2) tr(P_{s+1}^T Q_j) + (1/2) tr(P_{s+1}^T Q_j)
= tr(P_{s+1}^T Q_j) = −tr(P_{s+1} Q_j) = −tr(Q_j P_{s+1}).

Noting that tr(R_{s+1}^T R_j) = 0 and tr(R_{s+1}^T R_{j+1}) = 0, by Lemma 2.1 (applied with i = j and R_{s+1} in place of R_j) we have

tr(Q_{s+1}^T Q_j) = −tr(Q_j P_{s+1}) = −(||Q_j||²/||R_j||²) [tr(R_{j+1}^T R_{s+1}) − tr(R_j^T R_{s+1})] = 0.

By the principle of induction, (6) holds. Since (4) follows from Steps 1 and 2 by the principle of induction, the proof is complete.  □

Lemma 2.3. Let X̃ be an arbitrary solution of Problem I, i.e., A X̃ B = C and X̃^T = −X̃. Then

tr[(X̃ − X_k)^T Q_k] = ||R_k||²,  k = 1, 2, ...,  (7)

where the sequences {X_k}, {R_k}, {Q_k} are generated by Algorithm 2.1.

Proof. We prove the conclusion by induction. For k = 1, noting that X̃ − X_1 is skew-symmetric, so that tr[(X̃ − X_1)^T P_1^T] = tr[(X̃ − X_1) P_1] = −tr[(X̃ − X_1)^T P_1], we have

tr[(X̃ − X_1)^T Q_1] = tr[(X̃ − X_1)^T (1/2)(P_1 − P_1^T)]
= (1/2) tr[(X̃ − X_1)^T P_1] − (1/2) tr[(X̃ − X_1)^T P_1^T]
= tr[(X̃ − X_1)^T P_1]
= tr[(X̃ − X_1)^T A^T R_1 B^T]
= tr[B^T (X̃ − X_1)^T A^T R_1]
= tr{[A (X̃ − X_1) B]^T R_1}
= tr[(A X̃ B − A X_1 B)^T R_1]
= tr[(C − A X_1 B)^T R_1] = tr(R_1^T R_1) = ||R_1||².

Assume that (7) holds for k = s. Since

tr[(X̃ − X_{s+1})^T Q_s] = tr{[X̃ − X_s − (||R_s||²/||Q_s||²) Q_s]^T Q_s}
= tr[(X̃ − X_s)^T Q_s] − (||R_s||²/||Q_s||²) tr(Q_s^T Q_s)
= ||R_s||² − ||R_s||² = 0,

by Algorithm 2.1 we have

tr[(X̃ − X_{s+1})^T Q_{s+1}] = tr{(X̃ − X_{s+1})^T [(1/2)(P_{s+1} − P_{s+1}^T) − (tr(P_{s+1}^T Q_s)/||Q_s||²) Q_s]}
= tr[(X̃ − X_{s+1})^T P_{s+1}] − (tr(P_{s+1}^T Q_s)/||Q_s||²) tr[(X̃ − X_{s+1})^T Q_s]
= tr[(X̃ − X_{s+1})^T P_{s+1}]
= tr[(X̃ − X_{s+1})^T A^T R_{s+1} B^T]
= tr{[A (X̃ − X_{s+1}) B]^T R_{s+1}}
= tr[(A X̃ B − A X_{s+1} B)^T R_{s+1}]
= tr[(C − A X_{s+1} B)^T R_{s+1}] = tr(R_{s+1}^T R_{s+1}) = ||R_{s+1}||²,

where tr[(X̃ − X_{s+1})^T (P_{s+1} − P_{s+1}^T)/2] = tr[(X̃ − X_{s+1})^T P_{s+1}] as in the case k = 1. Therefore (7) holds for k = s + 1. By the principle of induction the proof is completed.  □

Theorem 2.1. Suppose that Problem I is consistent. Then for an arbitrary initial matrix X_1 ∈ SSR^{n×n}, a solution of Problem I can be obtained within finitely many iteration steps in the absence of roundoff errors, and the minimal number of steps, denoted t_0, satisfies t_0 ≤ min(mp, n²).

Proof. If R_i ≠ 0 for i = 1, ..., mp, then by Lemma 2.3 we have Q_i ≠ 0 for i = 1, ..., mp, and we can compute X_{mp+1} and R_{mp+1} by Algorithm 2.1. By Lemma 2.2 we have

tr(R_{mp+1}^T R_i) = 0,  i = 1, ..., mp,

and

tr(R_i^T R_j) = 0,  i, j = 1, ..., mp, i ≠ j.

Hence R_1, R_2, ..., R_mp form an orthogonal basis of the matrix space R^{m×p}, which implies R_{mp+1} = 0, i.e., X_{mp+1} is a solution of Problem I.

When Problem I is consistent one can verify in the same way that a solution of Problem I is obtained within t_0 ≤ min(mp, n²) iterative steps. In fact, if n² < mp and R_i ≠ 0 for i = 1, ..., n², then Q_i ≠ 0 for i = 1, ..., n², and we can compute X_{n²+1}, R_{n²+1} and Q_{n²+1} by Algorithm 2.1. Similarly to the previous argument we obtain Q_{n²+1} = 0, and then by Lemma 2.3 we have R_{n²+1} = 0, i.e., X_{n²+1} is a solution of Problem I.  □

From Theorem 2.1 we can easily obtain the following result.

Theorem 2.2. Problem I is inconsistent if and only if there exists a positive integer k such that R_k ≠ 0 and Q_k = 0 in the process of Algorithm 2.1.

Proof. Sufficiency: If there exists a positive integer k such that R_k ≠ 0 and Q_k = 0, then by Lemma 2.3 it is easy to see that Problem I is inconsistent.

Necessity: The inconsistency of Problem I implies that R_i ≠ 0 for every positive integer i. If Q_i ≠ 0 for every positive integer i as well, then Problem I has a solution by Theorem 2.1, which contradicts the inconsistency of Problem I. Therefore there exists a positive integer k such that R_k ≠ 0 and Q_k = 0.  □

Theorems 2.1 and 2.2 give the conditions under which Algorithm 2.1 terminates. To obtain the least-norm skew-symmetric solution of Problem I we first recall the following result.

Lemma 2.4 (See Peng et al. [9, Lemma 2.4]). Suppose that the consistent system of linear equations My = b has a solution y_0 ∈ R(M^T). Then y_0 is the least-norm solution of the system of linear equations.

By Lemma 2.4 the following result can be obtained.

Theorem 2.3. Suppose that Problem I is consistent.
If we choose the initial iterative matrix X_1 = A^T H^T B^T − B H A, where H is an arbitrary matrix in R^{p×m}, and especially X_1 = 0 ∈ SSR^{n×n}, then the unique least-norm skew-symmetric solution of Problem I is obtained within finitely many iterative steps in the absence of roundoff errors by Algorithm 2.1.

Proof. By Algorithm 2.1 and Theorem 2.1, if we let X_1 = A^T H^T B^T − B H A with H an arbitrary matrix in R^{p×m}, then we obtain a solution X* of Problem I within finitely many iterative steps in the absence of roundoff errors, and this solution can be represented as X* = A^T Y^T B^T − B Y A for some Y ∈ R^{p×m}. In the sequel we prove that X* is the least-norm solution of Problem I.

Consider the following system of matrix equations:

AXB = C,
B^T X A^T = −C^T.  (8)

If Problem I has a solution X_0 ∈ SSR^{n×n}, then X_0^T = −X_0, A X_0 B = C, and

B^T X_0 A^T = (A X_0^T B)^T = −(A X_0 B)^T = −C^T.

Hence the system of matrix equations (8) also has the solution X_0.
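Both directions of this equivalence can be illustrated numerically. The sketch below is our own construction, not from the paper: a skew-symmetric S solves both equations of the system (8); adding u u^T for u in the nullspace of A then yields a non-skew-symmetric solution of (8) whose skew-symmetric part recovers a solution of Problem I.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 3, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
S = rng.standard_normal((n, n)); S = S - S.T        # skew-symmetric solution of Problem I
C = A @ S @ B

# S solves both equations of system (8): A X B = C and B^T X A^T = -C^T
assert np.allclose(A @ S @ B, C)
assert np.allclose(B.T @ S @ A.T, -C.T)

# a non-skew solution of (8): add u u^T, where A u = 0, so that
# A (u u^T) B = 0 and B^T (u u^T) A^T = (B^T u)(A u)^T = 0
u = np.linalg.svd(A)[2][-1]                         # last right singular vector spans null(A)
Xbar = S + np.outer(u, u)
assert np.allclose(A @ Xbar @ B, C) and np.allclose(B.T @ Xbar @ A.T, -C.T)
assert not np.allclose(Xbar, -Xbar.T)               # Xbar itself is not skew-symmetric

# its skew-symmetric part recovers a solution of Problem I, as the proof argues
X0 = 0.5 * (Xbar - Xbar.T)
assert np.allclose(X0, -X0.T) and np.allclose(A @ X0 @ B, C)
```

Here the skew-symmetric part X0 coincides with S, since the symmetric perturbation u u^T drops out of (Xbar − Xbar^T)/2.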

Conversely, if the system of matrix equations (8) has a solution X̄ ∈ R^{n×n}, i.e., A X̄ B = C and B^T X̄ A^T = −C^T, let X_0 = (X̄ − X̄^T)/2. Then X_0 ∈ SSR^{n×n} and

A X_0 B = (1/2) A (X̄ − X̄^T) B = (1/2)(A X̄ B − A X̄^T B) = (1/2)(A X̄ B − (B^T X̄ A^T)^T) = (1/2)(C + C) = C.

Therefore X_0 is a solution of Problem I. So the solvability of Problem I is equivalent to that of the system of matrix equations (8), and every solution of Problem I is a solution of (8). Let S̄_E denote the set of all solutions of (8); then S_E ⊆ S̄_E, where S_E is the set of all solutions of Problem I. In order to prove that X* is the least-norm solution of Problem I, it is enough to prove that X* is the least-norm solution of the system of matrix equations (8).

Denote vec(X) = x, vec(Y) = y_1, vec(Y^T) = y_2, vec(C) = c_1, vec(C^T) = c_2. Then the system of matrix equations (8) is equivalent to the system of linear equations

[B^T ⊗ A; A ⊗ B^T] x = [c_1; −c_2].  (9)

Noting that

vec(X*) = vec(A^T Y^T B^T − B Y A) = (B ⊗ A^T) y_2 − (A^T ⊗ B) y_1 = [B^T ⊗ A; A ⊗ B^T]^T [y_2; −y_1],

we have vec(X*) ∈ R([B^T ⊗ A; A ⊗ B^T]^T). By Lemma 2.4, vec(X*) is the least-norm solution of the system of linear equations (9). Since the vec operator is an isomorphism, X* is the unique least-norm solution of the system of matrix equations (8), and hence the unique least-norm solution of Problem I.  □

3. The solution of Problem II

In this section we show that the optimal approximate solution of Problem II for a given matrix X_0 can be derived by finding the least-norm skew-symmetric solution of a new corresponding matrix equation A X̃ B = C̃.

For a given matrix X_0 ∈ R^{n×n}, since the symmetric part and the skew-symmetric part of a matrix are orthogonal to each other, for any X ∈ SSR^{n×n} we have

||X − X_0||² = ||X − (X_0 − X_0^T)/2 − (X_0 + X_0^T)/2||² = ||X − (X_0 − X_0^T)/2||² + ||(X_0 + X_0^T)/2||².

When Problem I is consistent, the set of solutions of Problem I, denoted by S_E, is not empty, and the linear equation AXB = C is equivalent to the equation

A [X − (X_0 − X_0^T)/2] B = C − A [(X_0 − X_0^T)/2] B.

Let X̃ = X − (X_0 − X_0^T)/2 and C̃ = C − A [(X_0 − X_0^T)/2] B. Then Problem II is equivalent to finding the least-norm skew-symmetric solution X̃* of the matrix equation

A X̃ B = C̃.  (10)

Using Algorithm 2.1 with the initial iterative matrix X̃_1 = A^T H^T B^T − B H A, or more specially X̃_1 = 0, we obtain the unique least-norm solution X̃* of the matrix equation (10), and then the solution X̂ of Problem II, which can be represented as X̂ = X̃* + (X_0 − X_0^T)/2.

4. Examples for the iterative methods

In this section we show several numerical examples to illustrate our results. All the tests are performed in MATLAB.

Example 1. Consider the skew-symmetric solution of the equation AXB = C, where

A =
B =
C =

We find the skew-symmetric solution of the matrix equation AXB = C by using Algorithm 2.1. Because of roundoff errors, the residual R_i is usually nonzero in the process of the iteration (i = 1, 2, ...). For any chosen positive number ε, however small (e.g. ε = 1.0e−010), the iteration is stopped whenever ||R_k|| < ε, and X_k is then regarded as a solution of the matrix equation AXB = C. Choose an initial iterative

matrix X_1 ∈ SSR^{5×5}, such as

X_1 =

By Algorithm 2.1 we have

X_14 = ,  ||R_14|| = 3.646e−011 < ε.

So we obtain a skew-symmetric solution of the matrix equation AXB = C as follows:

X =

Let

X_1 =

By Algorithm 2.1 we have

X_14 = ,  ||R_14|| = 9.8875e−011 < ε.

So we obtain a skew-symmetric solution of the matrix equation AXB = C as follows:

X =

Example 2. Consider the least-norm solution of the equation AXB = C in Example 1. Let

H =

and X_1 = A^T H^T B^T − B H A. By using Algorithm 2.1 we have

X_17 = ,  ||R_17|| = 8.116e−011 < ε.

So we obtain the least-norm solution of the matrix equation AXB = C as follows:

X =

Example 3. Consider the skew-symmetric solution of the equation AXB = C, where

A =
B =
C =

Let ε = 1.0e−005 and take the initial matrix X_1 = 0. By using Algorithm 2.1 we have

||R_6|| = 1.0408e ,  ||Q_6|| = 1.9143e−008 < ε.

Therefore, by Theorem 2.2, there is no skew-symmetric solution of the matrix equation AXB = C.

Example 4. Let S_E denote the set of all skew-symmetric solutions of the matrix equation AXB = C, where the matrices A, B and C are as in Example 1. Suppose

X_0 =

We find X̂ ∈ S_E such that

||X̂ − X_0|| = min_{X ∈ S_E} ||X − X_0||,

i.e., we find the optimal approximation to the matrix X_0 in S_E. Let X̄_0 = (X_0 − X_0^T)/2, X̃ = X − X̄_0 and C̃ = C − A X̄_0 B. By the method of Section 3 we obtain the least-norm skew-symmetric solution X̃* of the matrix equation A X̃ B = C̃ by choosing the initial iteration matrix X̃_1 = 0:

X̃_14 = ,  ||R_14|| = 7.0170e−011 < ε = 1.0e−010,

and

X̂ = X̃_14 + X̄_0 =

Acknowledgements

The authors are grateful to the anonymous referee for the constructive and helpful comments, and to Prof. Lothar Reichel for the communication.

References

[1] A. Bjerhammar, Rectangular reciprocal matrices with special reference to geodetic calculations, Kung. Tekn. Högsk. Handl. Stockholm.
[2] K.W.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl.
[3] G.H. Golub, C.F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD.
[4] F.J.H. Don, On the symmetric solution of a linear matrix equation, Linear Algebra Appl., 1–7.

[5] D. Hua, On the symmetric solutions of linear matrix equations, Linear Algebra Appl.
[6] J.R. Magnus, L-structured matrices and linear matrix equations, Linear and Multilinear Algebra.
[7] S.K. Mitra, Common solutions to a pair of linear matrix equations A_1 X B_1 = C_1, A_2 X B_2 = C_2, Proc. Cambridge Philos. Soc.
[8] G.R. Morris, P.L. Odell, Common solutions for n matrix equations with applications, J. Assoc. Comput. Mach.
[9] Y.X. Peng, X.Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput.
[10] Z.Y. Peng, An iterative method for the least squares symmetric solution of the matrix equation AXB = C, Appl. Math. Comput.
[11] G.-X. Huang, F. Yin, Constrained inverse eigenproblem and associated approximation problem for anti-Hermitian R-symmetric matrices, Appl. Math. Comput. (2006).
[12] G.-X. Huang, F. Yin, Matrix inverse problem and its optimal approximation problem for R-symmetric matrices, Appl. Math. Comput. (2006).


More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations. HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

A distance measure between cointegration spaces

A distance measure between cointegration spaces Economics Letters 70 (001) 1 7 www.elsevier.com/ locate/ econbase A distance measure between cointegration spaces Rolf Larsson, Mattias Villani* Department of Statistics, Stockholm University S-106 1 Stockholm,

More information

Group inverse for the block matrix with two identical subblocks over skew fields

Group inverse for the block matrix with two identical subblocks over skew fields Electronic Journal of Linear Algebra Volume 21 Volume 21 2010 Article 7 2010 Group inverse for the block matrix with two identical subblocks over skew fields Jiemei Zhao Changjiang Bu Follow this and additional

More information

Vector and Matrix Norms. Vector and Matrix Norms

Vector and Matrix Norms. Vector and Matrix Norms Vector and Matrix Norms Vector Space Algebra Matrix Algebra: We let x x and A A, where, if x is an element of an abstract vector space n, and A = A: n m, then x is a complex column vector of length n whose

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Inequalities Involving Khatri-Rao Products of Positive Semi-definite Matrices

Inequalities Involving Khatri-Rao Products of Positive Semi-definite Matrices Applied Mathematics E-Notes, (00), 117-14 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Inequalities Involving Khatri-Rao Products of Positive Semi-definite Matrices

More information

An error estimate for matrix equations

An error estimate for matrix equations Applied Numerical Mathematics 50 (2004) 395 407 www.elsevier.com/locate/apnum An error estimate for matrix equations Yang Cao, Linda Petzold 1 Department of Computer Science, University of California,

More information

Equivalence constants for certain matrix norms II

Equivalence constants for certain matrix norms II Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,

More information

arxiv: v1 [math.ra] 28 Jan 2016

arxiv: v1 [math.ra] 28 Jan 2016 The Moore-Penrose inverse in rings with involution arxiv:1601.07685v1 [math.ra] 28 Jan 2016 Sanzhang Xu and Jianlong Chen Department of Mathematics, Southeast University, Nanjing 210096, China Abstract:

More information

ABSOLUTELY FLAT IDEMPOTENTS

ABSOLUTELY FLAT IDEMPOTENTS ABSOLUTELY FLAT IDEMPOTENTS JONATHAN M GROVES, YONATAN HAREL, CHRISTOPHER J HILLAR, CHARLES R JOHNSON, AND PATRICK X RAULT Abstract A real n-by-n idempotent matrix A with all entries having the same absolute

More information

On the solvability of an equation involving the Smarandache function and Euler function

On the solvability of an equation involving the Smarandache function and Euler function Scientia Magna Vol. 008), No., 9-33 On the solvability of an equation involving the Smarandache function and Euler function Weiguo Duan and Yanrong Xue Department of Mathematics, Northwest University,

More information

Math 489AB Exercises for Chapter 2 Fall Section 2.3

Math 489AB Exercises for Chapter 2 Fall Section 2.3 Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial

More information

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

Research Article Eigenvector-Free Solutions to the Matrix Equation AXB H =E with Two Special Constraints

Research Article Eigenvector-Free Solutions to the Matrix Equation AXB H =E with Two Special Constraints Applied Mathematics Volume 03 Article ID 869705 7 pages http://dx.doi.org/0.55/03/869705 Research Article Eigenvector-Free Solutions to the Matrix Equation AXB =E with Two Special Constraints Yuyang Qiu

More information

Subset selection for matrices

Subset selection for matrices Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,

More information

KRONECKER PRODUCT AND LINEAR MATRIX EQUATIONS

KRONECKER PRODUCT AND LINEAR MATRIX EQUATIONS Proceedings of the Second International Conference on Nonlinear Systems (Bulletin of the Marathwada Mathematical Society Vol 8, No 2, December 27, Pages 78 9) KRONECKER PRODUCT AND LINEAR MATRIX EQUATIONS

More information

AN ITERATIVE METHOD FOR SYMMETRIC SOLUTIONS OF THE MATRIX EQUATION AXB + CXD = F

AN ITERATIVE METHOD FOR SYMMETRIC SOLUTIONS OF THE MATRIX EQUATION AXB + CXD = F 010 D 11 i ± Ψ E 7 3 "7 4 I Nov. 010 MATHEMATICA NUMERICA SINICA Vol.3 No.4 BD@: AXB + CXD = F >9A< =;C? * 867 Bο*c'Dfi $ΛDf ffl±fi 5300) 1 - meffi BΞ14OL) ψ_#1

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

Inner image-kernel (p, q)-inverses in rings

Inner image-kernel (p, q)-inverses in rings Inner image-kernel (p, q)-inverses in rings Dijana Mosić Dragan S. Djordjević Abstract We define study the inner image-kernel inverse as natural algebraic extension of the inner inverse with prescribed

More information

Solution to Homework 8, Math 2568

Solution to Homework 8, Math 2568 Solution to Homework 8, Math 568 S 5.4: No. 0. Use property of heorem 5 to test for linear independence in P 3 for the following set of cubic polynomials S = { x 3 x, x x, x, x 3 }. Solution: If we use

More information

The reflexive re-nonnegative definite solution to a quaternion matrix equation

The reflexive re-nonnegative definite solution to a quaternion matrix equation Electronic Journal of Linear Algebra Volume 17 Volume 17 28 Article 8 28 The reflexive re-nonnegative definite solution to a quaternion matrix equation Qing-Wen Wang wqw858@yahoo.com.cn Fei Zhang Follow

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

A property of orthogonal projectors

A property of orthogonal projectors Linear Algebra and its Applications 354 (2002) 35 39 www.elsevier.com/locate/laa A property of orthogonal projectors Jerzy K. Baksalary a,, Oskar Maria Baksalary b,tomaszszulc c a Department of Mathematics,

More information

Structured Condition Numbers of Symmetric Algebraic Riccati Equations

Structured Condition Numbers of Symmetric Algebraic Riccati Equations Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations

More information

arxiv: v1 [math.na] 21 Oct 2014

arxiv: v1 [math.na] 21 Oct 2014 Computing Symmetric Positive Definite Solutions of Three Types of Nonlinear Matrix Equations arxiv:1410.5559v1 [math.na] 21 Oct 2014 Negin Bagherpour a, Nezam Mahdavi-Amiri a, a Department of Mathematical

More information

arxiv: v4 [quant-ph] 8 Mar 2013

arxiv: v4 [quant-ph] 8 Mar 2013 Decomposition of unitary matrices and quantum gates Chi-Kwong Li, Rebecca Roberts Department of Mathematics, College of William and Mary, Williamsburg, VA 2387, USA. (E-mail: ckli@math.wm.edu, rlroberts@email.wm.edu)

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

On some linear combinations of hypergeneralized projectors

On some linear combinations of hypergeneralized projectors Linear Algebra and its Applications 413 (2006) 264 273 www.elsevier.com/locate/laa On some linear combinations of hypergeneralized projectors Jerzy K. Baksalary a, Oskar Maria Baksalary b,, Jürgen Groß

More information

On EP elements, normal elements and partial isometries in rings with involution

On EP elements, normal elements and partial isometries in rings with involution Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 39 2012 On EP elements, normal elements and partial isometries in rings with involution Weixing Chen wxchen5888@163.com Follow this

More information

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

More information

arxiv: v1 [math.na] 13 Mar 2019

arxiv: v1 [math.na] 13 Mar 2019 Relation between the T-congruence Sylvester equation and the generalized Sylvester equation Yuki Satake, Masaya Oozawa, Tomohiro Sogabe Yuto Miyatake, Tomoya Kemmochi, Shao-Liang Zhang arxiv:1903.05360v1

More information

Lecture 6. Numerical methods. Approximation of functions

Lecture 6. Numerical methods. Approximation of functions Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Journal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error

Journal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES

AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2017

Cheng Soon Ong & Christian Walder. Canberra February June 2017 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2017 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 141 Part III

More information

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level.

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level. Existence of Generalized Inverse: Ten Proofs and Some Remarks R B Bapat Introduction The theory of g-inverses has seen a substantial growth over the past few decades. It is an area of great theoretical

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

2 Computing complex square roots of a real matrix

2 Computing complex square roots of a real matrix On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel

On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel Wenwen Tu and Lifeng Lai Department of Electrical and Computer Engineering Worcester Polytechnic Institute Worcester,

More information

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES Int J Appl Math Comput Sci, 00, Vol 1, No 1, 5 1 A LINEAR PROGRAMMING BASED ANALYSIS OF HE CP-RANK OF COMPLEELY POSIIVE MARICES YINGBO LI, ANON KUMMER ANDREAS FROMMER Department of Electrical and Information

More information

EP elements and Strongly Regular Rings

EP elements and Strongly Regular Rings Filomat 32:1 (2018), 117 125 https://doi.org/10.2298/fil1801117y Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat EP elements and

More information

1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.

1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data. Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)

More information

Linear Algebra. Shan-Hung Wu. Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016

Linear Algebra. Shan-Hung Wu. Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016 Linear Algebra Shan-Hung Wu shwu@cs.nthu.edu.tw Department of Computer Science, National Tsing Hua University, Taiwan Large-Scale ML, Fall 2016 Shan-Hung Wu (CS, NTHU) Linear Algebra Large-Scale ML, Fall

More information

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

arxiv: v1 [math.ra] 24 Aug 2016

arxiv: v1 [math.ra] 24 Aug 2016 Characterizations and representations of core and dual core inverses arxiv:1608.06779v1 [math.ra] 24 Aug 2016 Jianlong Chen [1], Huihui Zhu [1,2], Pedro Patrício [2,3], Yulin Zhang [2,3] Abstract: In this

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Linear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices

Linear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices Linear maps preserving rank of tensor products of matrices Journal: Manuscript ID: GLMA-0-0 Manuscript Type: Original Article Date Submitted by the Author: -Aug-0 Complete List of Authors: Zheng, Baodong;

More information

The SVD-Fundamental Theorem of Linear Algebra

The SVD-Fundamental Theorem of Linear Algebra Nonlinear Analysis: Modelling and Control, 2006, Vol. 11, No. 2, 123 136 The SVD-Fundamental Theorem of Linear Algebra A. G. Akritas 1, G. I. Malaschonok 2, P. S. Vigklas 1 1 Department of Computer and

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Research Article Least Squares Pure Imaginary Solution and Real Solution of the Quaternion Matrix Equation AXB + CXD = E with the Least Norm

Research Article Least Squares Pure Imaginary Solution and Real Solution of the Quaternion Matrix Equation AXB + CXD = E with the Least Norm Applied Mathematics, Article ID 857081, 9 pages http://dx.doi.org/10.1155/014/857081 Research Article Least Squares Pure Imaginary Solution and Real Solution of the Quaternion Matrix Equation AXB + CXD

More information

Formulas for the Drazin Inverse of Matrices over Skew Fields

Formulas for the Drazin Inverse of Matrices over Skew Fields Filomat 30:12 2016 3377 3388 DOI 102298/FIL1612377S Published by Faculty of Sciences and Mathematics University of Niš Serbia Available at: http://wwwpmfniacrs/filomat Formulas for the Drazin Inverse of

More information

Linear Algebra Homework and Study Guide

Linear Algebra Homework and Study Guide Linear Algebra Homework and Study Guide Phil R. Smith, Ph.D. February 28, 20 Homework Problem Sets Organized by Learning Outcomes Test I: Systems of Linear Equations; Matrices Lesson. Give examples of

More information

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1... Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements

More information