AN ITERATIVE ALGORITHM FOR THE BEST APPROXIMATE (P,Q)-ORTHOGONAL SYMMETRIC AND SKEW-SYMMETRIC SOLUTION PAIR OF COUPLED MATRIX EQUATIONS


Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh

Department of Mathematics, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan, Iran; f.beik@vru.ac.ir, beikfatemeh@gmail.com
Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran; salkuyeh@gmail.com

Abstract. This paper deals with developing a robust iterative algorithm to find the least-squares (P,Q)-orthogonal symmetric and skew-symmetric solution sets of generalized coupled matrix equations. To this end, some properties of these types of matrices are first established. Furthermore, an approach is offered to determine the optimal approximate (P,Q)-orthogonal symmetric (skew-symmetric) solution pair corresponding to a given arbitrary matrix pair. Some numerical experiments are reported to confirm the validity of the theoretical results and to illustrate the effectiveness of the proposed algorithm.

Keywords: Matrix equation; Least-squares problem; (P,Q)-orthogonal symmetric matrix; (P,Q)-orthogonal skew-symmetric matrix; Iterative algorithm.

AMS Subject Classification: 65F10, 15A24.

1. Introduction

Let us first introduce some symbols exploited throughout the paper. The notation R^{m×n} stands for the set of all m×n real matrices. The trace and the transpose of a given matrix A are respectively denoted by tr(A) and A^T. The symbol SOR^{n×n} refers to the set of all symmetric orthogonal matrices in R^{n×n}, i.e.,

SOR^{n×n} = { P ∈ R^{n×n} | P^T = P, P^2 = I }.

For two given matrices P ∈ SOR^{n×n} and Q ∈ SOR^{n×n}, the matrix X ∈ R^{n×n} is called (P,Q)-orthogonal symmetric if (PXQ)^T = PXQ, and is said to be (P,Q)-orthogonal skew-symmetric if (PXQ)^T = −PXQ. The sets of all (P,Q)-orthogonal symmetric and (P,Q)-orthogonal skew-symmetric matrices of order n are respectively represented by SR_{PQ}^{n×n} and SSR_{PQ}^{n×n}. The permutation matrix with ones along the secondary diagonal and zeros elsewhere is called the exchange matrix.
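The defining identities above are easy to check numerically. The following Python/NumPy sketch (an illustration added here; the paper's own experiments use MATLAB, and all function names below are ours) builds two members of SOR^{n×n} — the exchange matrix and a Householder reflector — and verifies the (P,Q)-orthogonal symmetry and skew-symmetry conditions:

```python
import numpy as np

def is_sym_orth(P, tol=1e-10):
    """P is symmetric orthogonal: P^T = P and P^2 = I."""
    n = P.shape[0]
    return np.allclose(P, P.T, atol=tol) and np.allclose(P @ P, np.eye(n), atol=tol)

def is_PQ_orth_symmetric(X, P, Q, tol=1e-10):
    """(P,Q)-orthogonal symmetric: (P X Q)^T = P X Q."""
    M = P @ X @ Q
    return np.allclose(M, M.T, atol=tol)

def is_PQ_orth_skew(X, P, Q, tol=1e-10):
    """(P,Q)-orthogonal skew-symmetric: (P X Q)^T = -P X Q."""
    M = P @ X @ Q
    return np.allclose(M, -M.T, atol=tol)

rng = np.random.default_rng(0)
n = 5
J = np.fliplr(np.eye(n))                   # exchange matrix: ones on the secondary diagonal
v = rng.standard_normal((n, 1))
H = np.eye(n) - 2 * (v @ v.T) / (v.T @ v)  # Householder reflector, also in SOR^{n x n}

# If S is symmetric (skew-symmetric), then X = P S Q satisfies P X Q = S,
# so X is (P,Q)-orthogonal (skew-)symmetric by construction (P^2 = Q^2 = I).
S = rng.standard_normal((n, n))
X_sym = H @ (S + S.T) @ J
X_skew = H @ (S - S.T) @ J

assert is_sym_orth(J) and is_sym_orth(H)
assert is_PQ_orth_symmetric(X_sym, H, J)
assert is_PQ_orth_skew(X_skew, H, J)
```

The construction X = PSQ with S (skew-)symmetric also shows that SR_{PQ}^{n×n} and SSR_{PQ}^{n×n} are just the images of the symmetric and skew-symmetric matrices under the invertible map S ↦ PSQ.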
Definition 1.1 (Golub and Van Loan, 1996). Let A = (a_ij) be an n×n matrix. The matrix A is said to be a persymmetric matrix if

a_ij = a_{n−j+1, n−i+1}

(Corresponding author.)

for all 1 ≤ i, j ≤ n. Equivalently, A is called persymmetric if AJ = JA^T, where J is the exchange matrix.

Assume that A ∈ R^{n×n} and J is the exchange matrix of order n. It can be verified that JA^T, JAJ and A^T J are versions of the matrix A that have been rotated anticlockwise by 90, 180 and 270 degrees. Moreover, JA, JA^T J, AJ and A^T are versions of the matrix A that have been reflected in lines at 0, 45, 90 and 135 degrees to the horizontal, measured anticlockwise.

Remark 1.2. Assume that A ∈ R^{n×n} and J ∈ R^{n×n} is the exchange matrix. Note that A is persymmetric if (AJ)^T = AJ. In view of the fact that J ∈ SOR^{n×n}, it can be concluded that every persymmetric matrix is an (I,J)-orthogonal symmetric matrix, where I stands for the identity matrix of order n.

Definition 1.3. An n×n matrix H is said to be skew-Hamiltonian if ĴH is skew-symmetric, where n = 2k and

(1.1)   Ĵ = [ 0  I_k ; −I_k  0 ].

Hamiltonian and skew-Hamiltonian matrices play a key role in engineering, such as in linear quadratic optimal control, H∞ optimization and solving algebraic Riccati equations; see Laub (1991) and Zhou et al. (1995). Eigenvalues and eigenvectors of a Hamiltonian matrix are important for describing a physical system: the real eigenvalues correspond to the possible energy levels of the system and the related eigenvectors express the corresponding states; see, e.g., Reichl (2013). In view of the importance of constructing skew-Hamiltonian matrices, especially in inverse problems, solving matrix equations and the structured inverse eigenvalue problem over Hamiltonian matrices have been investigated by several researchers; see Qian and Tan (2013), Zhang et al. (2005) and the references therein.

Remark 1.4. Note that Ĵ^T = Ĵ^{−1} = −Ĵ, and ĴH is skew-symmetric if and only if HĴ is skew-symmetric. Without loss of generality, we say H is a skew-Hamiltonian matrix if (HĴ)^T = −HĴ.

Remark 1.5. We would like to comment here that, for a given matrix H, it is not difficult to see that H is a skew-Hamiltonian matrix if HJ̃ is an (I,J̄)-orthogonal
skew-symmetric matrix, where

J̃ = [ 0  I_k ; I_k  0 ]   and   J̄ = [ I_k  0 ; 0  −I_k ].

This follows from the fact that J̃Ĵ = −J̄. Notice that J̃, J̄ ∈ SOR^{n×n} (n = 2k).

Linear matrix equations play a cardinal role in various fields, such as control and system theory, stability theory and some other fields of applied and pure mathematics; see Ding and Chen (2006), Ding et al. (2008), Hajarian and Dehghan (2011), Wu et al. (2013) and the references therein. In recent years, several research works have discussed ways of obtaining explicit forms of the solution for different kinds of matrix equations. For example, Wu et al. (2011b) have presented an approach to resolve a general class of Sylvester-polynomial-conjugate matrix equations, which includes the Yakubovich-conjugate matrix equation as a special case. In

(Wu et al., 2010), several explicit parametric solutions have been derived for the generalized Sylvester-conjugate matrix equation. In (Zhao et al., 2010), an analytical expression is derived for the optimal approximate solution in the least-squares (P,Q)-orthogonal symmetric solution set of the matrix equation A^T X B = Z with respect to a given matrix. In addition, the authors have obtained an explicit expression of the minimum-norm least-squares (P,Q)-orthogonal symmetric solution of the matrix equation A^T X B = Z. In (Song et al., 2014), by using the coefficients of the characteristic polynomial of the coefficient matrix and the Leverrier algorithm, explicit solutions to the Sylvester-conjugate matrix equation have been constructed. Recently, Li et al. (2014) have derived the general expression of the least-squares solution of the matrix equation AXB + CYD = E with the least norm over symmetric arrowhead matrices.

Nevertheless, in practice, applying iterative algorithms is superior to computing explicit forms, due to the high computational costs the explicit forms require. As a matter of fact, to obtain solutions by means of the derived explicit forms, Kronecker products of matrices are required to be formed explicitly. In addition, the Moore-Penrose pseudoinverse needs to be computed, which is expensive for large-scale least-squares problems. In the literature, the applications of iterative algorithms have been successfully examined for solving several types of matrix equations; for further details see Beik and Salkuyeh (2011), Beik et al. (2014), Ding and Chen (2006), Ding et al. (2008), Hajarian (2014a), Salkuyeh and Toutounian (2006), Salkuyeh and Beik (2014), Wu et al. (2011a) and the references therein.

The extension of the LSQR method for solving constrained matrix equations has been widely studied. For instance, Li and Huang (2012) have presented a matrix form of the LSQR method to solve coupled matrix equations. Recently, Hajarian (2014b) has proposed a matrix form of the LSQR algorithm for
determining least-squares solutions of the linear operator equation AX = B. More recently, Karimi and Dehghan (2015) have developed the LSQR method to solve general constrained coupled matrix equations containing the transposes of the unknowns. The strategy proposed for constructing the algorithm in (Karimi and Dehghan, 2015) incorporates those given in (Li and Huang, 2012) and (Hajarian, 2014b). In addition, the performance of the proposed algorithm has been numerically compared with some previously examined iterative schemes.

There is also a growing interest in using the idea of the conjugate gradient method to construct iterative algorithms for determining the approximate least-squares solutions of various kinds of matrix equations; see for instance Beik and Salkuyeh (2013), Cai and Chen (2009), Cai et al. (2010), Hajarian (2014), Li et al. (2014), Peng et al. (2007), Peng and Liu (2009), Peng (2010), Peng (2013), Peng and Xin (2013), Peng (2015), Sheng and Chen (2007), Sheng and Chen (2010), and Su and Chen (2010). Lately, Peng (2015) has developed the conjugate gradient least-squares (CGLS) method to determine the (R,S)-symmetric solutions of the following least-squares problem:

Σ_{i=1}^{p} || Σ_{j=1}^{q} A_ij X_j B_ij − C_i ||^2 = min.

More recently, Li et al. (2015) have employed the CGLS method to construct a hybrid algorithm for solving the minimization problem

|| AX − B || = min,   subject to the matrix inequality constraint CXD ≥ E,

over (R,S)-symmetric solutions.

Before stating the main goal of the current work, we recollect some definitions and properties which are required throughout this paper. The inner product of X, Y ∈ R^{m×n} is defined by ⟨X,Y⟩ = tr(Y^T X); the induced norm ||·|| is the well-known Frobenius norm. Suppose that A = [a_ij]_{m×s} and B = [b_ij]_{n×q} are two real matrices; the Kronecker product of the matrices A and B is defined as the mn×sq matrix A ⊗ B = [a_ij B]. The vec operator transmutes a matrix A of size m×s into a vector a = vec(A) of size ms×1 by stacking its columns. In (Bernstein, 2009), it is demonstrated that the following relation holds:

(1.2)   vec(ABC) = (C^T ⊗ A) vec(B),

for arbitrary given matrices A, B and C with suitable dimensions.

1.1 Motivation and main contribution

More recently, Sarduvan et al. (2014) have focused on the solution set of the following least-squares problem:

(1.3)   || AXB − C || = min,

over (P,Q)-orthogonal skew-symmetric matrices, and studied the matrix nearness problem associated with (1.3). More precisely, the explicit form of X̂ ∈ S_EX is determined such that

(1.4)   || X̂ − X_0 || = min_{X ∈ S_EX} || X − X_0 ||,

where X_0 ∈ R^{n×n} is given and S_EX stands for the solution set of (1.3). By invoking (1.2) and establishing some theoretical results, Algorithm 1 is presented to obtain the explicit form of X̂. As can be seen, the computational cost of Algorithm 1 is high due to the use of the Kronecker product and the vec operator. As a matter of fact, the dimension of the matrix Ã defined in Step 3 of Algorithm 1 can become large even when the sizes of the coefficient matrices A and B are moderate. Consequently, Steps 4, 5 and 6 consume much CPU time. In addition, the extension of the presented approach to general types of matrix equations would be onerous. These drawbacks inspire us to examine an iterative algorithm
for solving coupled matrix equations and their corresponding least-squares problems over (P,Q)-orthogonal symmetric and (P,Q)-orthogonal skew-symmetric matrices. To this end, we first investigate some properties of the (P,Q)-orthogonal symmetric and (P,Q)-orthogonal skew-symmetric matrices, which are utilized for constructing and analyzing the proposed iterative scheme. In order to elaborate our results for more general cases, we focus on the following least-squares problem:

(1.5)   || [ M ; N ] − [ A_1 X B_1 + A_2 X^T B_2 + C_1 Y D_1 + C_2 Y^T D_2 ; E_1 X F_1 + E_2 X^T F_2 + G_1 Y H_1 + G_2 Y^T H_2 ] || = min.
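The cost argument against explicit vec-based formulas can be made concrete. The following Python/NumPy sketch (illustrative, with arbitrary small dimensions; not taken from the paper) checks the identity vec(ABC) = (C^T ⊗ A)vec(B) quoted above as (1.2), and shows how the Kronecker coefficient matrix blows up in size. Note that NumPy stores arrays row-major by default, so a column-stacking vec must use order="F":

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 6))

def vec(M):
    # column-stacking vec; NumPy defaults to row-major, hence order="F"
    return M.reshape(-1, 1, order="F")

# identity (1.2): vec(ABC) = (C^T kron A) vec(B)
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
assert np.allclose(lhs, rhs)

# The Kronecker coefficient matrix is (4*6) x (3*5) here; for an n x n unknown
# it is n^2 x n^2, the dimension blow-up that makes explicit formulas costly.
assert np.kron(C.T, A).shape == (24, 15)
```

For an unknown of order n = 1000, the matrix C^T ⊗ A would have 10^12 entries, which is exactly why the iterative approach below works with the matrices K(X,Y), L(X,Y) directly and never forms Kronecker products.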

Algorithm 1: Proposed scheme to find the best approximate (P,Q)-orthogonal skew-symmetric solution of AXB = C (Sarduvan et al., 2014).

Step 1. Input the matrices A ∈ R^{m×n}, B ∈ R^{n×r}, C ∈ R^{m×r}, P, Q ∈ SOR^{n×n} and X_0 ∈ R^{n×n} for finding the (P,Q)-orthogonal skew-symmetric solution.
Step 2. Compute the vectors c_1 = vec(C), c_2 = vec(C^T) and y_0 = vec( PX_0Q − (PX_0Q)^T ).
Step 3. Compute

Ã = [ (QB)^T ⊗ (AP) ; (AP) ⊗ (QB)^T ],   g = [ vec(C) ; −vec(C^T) ],

and set A_1 = Ã^T Ã, g_1 = Ã^T g.
Step 4. Form the spectral decomposition of A_1 as A_1 = V_{A_1} [ D  0 ; 0  0 ] V_{A_1}^T, where D is a diagonal matrix whose diagonal entries are the positive eigenvalues of A_1 and V_{A_1} is an orthogonal matrix formed by the corresponding linearly independent eigenvectors of A_1.
Step 5. Compute s = rank(Ã^T Ã), k = dim(Ã^T Ã) and

M = [ zeros(s,s)  zeros(s, k−s) ; zeros(k−s, s)  eye(k−s) ].

Step 6. Compute ŷ = Ã†g + V_{A_1} M V_{A_1}^T ( y_0 − Ã†g ), where the symbol † refers to the well-known Moore-Penrose generalized inverse.
Step 7. Compute Ŷ such that ŷ = vec(Ŷ).
Step 8. Compute X̂ = PŶQ.

Here we point out that matrix equations which include both the unknown matrix and its transpose arise, for instance, in solving time-invariant Hamiltonian systems; for further information see (Lancaster and Rozsa, 1983). They also appear in reducing a block anti-triangular matrix to a block anti-diagonal one, a reduction which is exploited in palindromic generalized eigenvalue problems (Kressner et al., 2009). The generalized inverse eigenvalue problem for persymmetric matrices consists of determining nontrivial persymmetric matrices A and B such that

(1.6)   AX = BXΛ,

where X and the diagonal matrix Λ are known: the columns of X are given eigenvectors and the diagonal entries of Λ are the given associated eigenvalues. As expressed in Remark 1.2, a persymmetric matrix belongs to the set of (I,J)-orthogonal symmetric matrices. Note that the least-squares problem (1.5) incorporates the least-squares problem associated with the matrix equation (1.6). Therefore, our proposed algorithm can be applied to solve generalized persymmetric
inverse eigenvalue problems (GPIEPs). Recently, Julio and Soto (2015) derived sufficient conditions for the persymmetric nonnegative inverse eigenvalue problem (PNIEP)

to have a solution. For a comprehensive survey on the applications of structured inverse eigenvalue problems in control theory over different kinds of matrices, including persymmetric matrices, one may refer to (Chu and Golub, 2002) and the references therein.

A matrix H is said to be a generalized skew-Hamiltonian matrix if HJ_1 is a skew-symmetric matrix, where J_1 ∈ R^{n×n} is an arbitrary orthogonal skew-symmetric matrix, i.e., J_1^{−1} = J_1^T = −J_1. Here it should be noticed that, without any change, our algorithm can compute the generalized skew-Hamiltonian solutions of the mentioned matrix equations by taking P_1 and P_2 to be identity matrices and Q_1 and Q_2 to be given real orthogonal skew-symmetric matrices. This follows from the fact that, for an arbitrary given matrix X, we may write X = H_1 + S_1 with ⟨H_1, S_1⟩ = 0, where H_1 = ½(X + J_1 X^T J_1) and S_1 = ½(X − J_1 X^T J_1). It is not difficult to see that H_1 and S_1 are generalized Hamiltonian and generalized skew-Hamiltonian matrices, respectively. The convergence properties of the algorithm for determining least-squares generalized skew-Hamiltonian solutions can be established with strategies similar to those used throughout this work; we do not discuss the details and leave them to the reader.

The ensuing subsection states the main problems with which the current paper is concerned.

1.2 Problem reformulation

For simplicity, throughout the paper we exploit the linear operators K : R^{p×p} × R^{r×r} → R^{m×n}, L : R^{p×p} × R^{r×r} → R^{ν×n}, K̃ : R^{m×n} × R^{ν×n} → R^{p×p} and L̃ : R^{m×n} × R^{ν×n} → R^{r×r} defined as follows:

K(X,Y) = A_1 X B_1 + A_2 X^T B_2 + C_1 Y D_1 + C_2 Y^T D_2,
L(X,Y) = E_1 X F_1 + E_2 X^T F_2 + G_1 Y H_1 + G_2 Y^T H_2,
K̃(Z,W) = A_1^T Z B_1^T + E_1^T W F_1^T + B_2 Z^T A_2 + F_2 W^T E_2,
L̃(Z,W) = C_1^T Z D_1^T + G_1^T W H_1^T + D_2 Z^T C_2 + H_2 W^T G_2.

In this paper, our goal is to propose an iterative algorithm to solve the following problems.

Problem 1. Let the linear operators K and L be defined as before. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r} are given. Find X̃ ∈ SR_{P_1Q_1}^{p×p}
and Ỹ ∈ SR_{P_2Q_2}^{r×r} such that

|| [ M ; N ] − [ K(X̃,Ỹ) ; L(X̃,Ỹ) ] || = min_{ (X,Y) : X ∈ SR_{P_1Q_1}^{p×p}, Y ∈ SR_{P_2Q_2}^{r×r} } || [ M ; N ] − [ K(X,Y) ; L(X,Y) ] ||.

Problem 2. Let S_s be the solution set of Problem 1. For two given matrices X_0 ∈ R^{p×p} and Y_0 ∈ R^{r×r}, find (X̄, Ȳ) ∈ S_s such that

|| X̄ − X_0 ||^2 + || Ȳ − Y_0 ||^2 = min_{ (X,Y) ∈ S_s } { || X − X_0 ||^2 + || Y − Y_0 ||^2 }.

Problem 3. Let the linear operators K and L be defined as before. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r} are given. Find

X̃ ∈ SSR_{P_1Q_1}^{p×p} and Ỹ ∈ SSR_{P_2Q_2}^{r×r} such that

|| [ M ; N ] − [ K(X̃,Ỹ) ; L(X̃,Ỹ) ] || = min_{ (X,Y) : X ∈ SSR_{P_1Q_1}^{p×p}, Y ∈ SSR_{P_2Q_2}^{r×r} } || [ M ; N ] − [ K(X,Y) ; L(X,Y) ] ||.

Problem 4. Let S_ss be the solution set of Problem 3. For two given matrices X_0 ∈ R^{p×p} and Y_0 ∈ R^{r×r}, find (X̄, Ȳ) ∈ S_ss such that

|| X̄ − X_0 ||^2 + || Ȳ − Y_0 ||^2 = min_{ (X,Y) ∈ S_ss } { || X − X_0 ||^2 + || Y − Y_0 ||^2 }.

The remainder of this paper is organized as follows. In Section 2, we prove some properties of the (P,Q)-orthogonal symmetric and (P,Q)-orthogonal skew-symmetric matrices. Using the established characteristics, Section 3 is devoted to constructing and analyzing an iterative algorithm to solve Problems 1, 2, 3 and 4. Numerical results which reveal the effectiveness of the algorithm are reported in Section 4. Finally, the paper ends with a brief conclusion in Section 5.

2. Useful theoretical results

This section is concerned with establishing some properties and theorems which provide fundamental tools for presenting our algorithm to solve Problems 1-4 and for analyzing its convergence properties. From the following theorem, it turns out that R^{m×m} can be written as a direct sum of the subspaces SR_{PQ}^{m×m} and SSR_{PQ}^{m×m}, where P ∈ SOR^{m×m} and Q ∈ SOR^{m×m}.

Theorem 2.1. Suppose that the matrices P ∈ SOR^{m×m} and Q ∈ SOR^{m×m} are given and the subspaces SR_{PQ}^{m×m} and SSR_{PQ}^{m×m} are defined as before. Then

R^{m×m} = SR_{PQ}^{m×m} ⊕ SSR_{PQ}^{m×m},

where ⊕ stands for the orthogonal direct sum with respect to the Frobenius inner product ⟨·,·⟩.

Proof. Before proving the validity of the assertion, we point out that straightforward computations reveal that X ∈ SR_{PQ}^{m×m} if and only if X = PQX^TPQ, and X ∈ SSR_{PQ}^{m×m} if and only if X = −PQX^TPQ. Here it should be noted that PQ is in general a nonsymmetric matrix satisfying (PQ)^T(PQ) = (PQ)(PQ)^T = I_m. For simplicity, the proof is separated into the following two steps.

Step 1. It should be noticed that, for an arbitrary given X ∈ R^{m×m}, we may write X = W_1 + W_2, where W_1 = ½(X + PQX^TPQ) and W_2 = ½(X − PQX^TPQ), and it is not difficult to see that W_1 ∈ SR_{PQ}^{m×m} and W_2 ∈ SSR_{PQ}^{m×m}. Therefore, an arbitrary m×
m matrix X can be written as the sum of two matrices W_1 and W_2, which are respectively (P,Q)-orthogonal symmetric and (P,Q)-orthogonal skew-symmetric matrices.

Step 2. In this step, it is shown that for two arbitrary given matrices Z_1 ∈ SR_{PQ}^{m×m} and Z_2 ∈ SSR_{PQ}^{m×m} we have ⟨Z_1, Z_2⟩ = 0. Note that PZ_1Q and PZ_2Q are respectively symmetric and skew-symmetric matrices. It is well known that symmetric and

skew-symmetric matrices are orthogonal to each other; therefore the result follows immediately from the equality

tr(Z_1^T Z_2) = tr(Q^T Z_1^T P^T P Z_2 Q) = tr( (PZ_1Q)^T (PZ_2Q) ).

Now the result follows immediately from the given explanations in Steps 1 and 2.

It is not difficult to establish the following proposition by using the well-known properties of the trace of matrices.

Proposition 2.2. Assume that the linear operators K, L, K̃ and L̃ are defined as before. Then

(2.1)   ⟨K(X,Y), Z⟩ + ⟨L(X,Y), W⟩ = ⟨X, K̃(Z,W)⟩ + ⟨Y, L̃(Z,W)⟩,

for all X ∈ R^{p×p}, Y ∈ R^{r×r}, Z ∈ R^{m×n} and W ∈ R^{ν×n}.

Proof. It is well known that

⟨K(X,Y), Z⟩ + ⟨L(X,Y), W⟩ = tr( Z^T ( A_1XB_1 + A_2X^TB_2 + C_1YD_1 + C_2Y^TD_2 ) ) + tr( W^T ( E_1XF_1 + E_2X^TF_2 + G_1YH_1 + G_2Y^TH_2 ) ).

Using the cyclic-permutation property of the trace, we have

⟨K(X,Y), Z⟩ + ⟨L(X,Y), W⟩ = tr(B_1Z^TA_1X) + tr(X^TB_2Z^TA_2) + tr(D_1Z^TC_1Y) + tr(Y^TD_2Z^TC_2) + tr(F_1W^TE_1X) + tr(X^TF_2W^TE_2) + tr(H_1W^TG_1Y) + tr(Y^TH_2W^TG_2).

Invoking the fact that the trace of a matrix is equal to the trace of its transpose, it can be derived that tr(B_1Z^TA_1X) = tr(X^TA_1^TZB_1^T), tr(D_1Z^TC_1Y) = tr(Y^TC_1^TZD_1^T), tr(F_1W^TE_1X) = tr(X^TE_1^TWF_1^T) and tr(H_1W^TG_1Y) = tr(Y^TG_1^TWH_1^T). The result follows immediately by substituting the above four equalities into the preceding relation.

In view of Theorem 2.1, the following two propositions can be concluded from Proposition 2.2 immediately. We only present the proof of the first proposition; the second one can be established in a similar manner.

Proposition 2.3. Given the matrices P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r}, assume that the linear operators K, L, K̃ and L̃ are defined as before. Then

⟨K(X,Y), Z⟩ + ⟨L(X,Y), W⟩ = ½ ⟨X, K̃(Z,W) + P_1Q_1 K̃(Z,W)^T P_1Q_1⟩ + ½ ⟨Y, L̃(Z,W) + P_2Q_2 L̃(Z,W)^T P_2Q_2⟩,

for all X ∈ SR_{P_1Q_1}^{p×p}, Y ∈ SR_{P_2Q_2}^{r×r}, Z ∈ R^{m×n} and W ∈ R^{ν×n}.
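The adjoint identity (2.1) is the workhorse of the convergence analysis, so it is worth sanity-checking numerically. The following Python/NumPy sketch (an illustration added here with arbitrary small dimensions; the operator definitions follow Subsection 1.2, everything else is ours) implements K, L and their adjoints K̃, L̃ and verifies the identity for random data:

```python
import numpy as np

rng = np.random.default_rng(0)
p, r, m, n, nu = 4, 3, 5, 6, 2
A1, A2 = rng.standard_normal((m, p)), rng.standard_normal((m, p))
B1, B2 = rng.standard_normal((p, n)), rng.standard_normal((p, n))
C1, C2 = rng.standard_normal((m, r)), rng.standard_normal((m, r))
D1, D2 = rng.standard_normal((r, n)), rng.standard_normal((r, n))
E1, E2 = rng.standard_normal((nu, p)), rng.standard_normal((nu, p))
F1, F2 = rng.standard_normal((p, n)), rng.standard_normal((p, n))
G1, G2 = rng.standard_normal((nu, r)), rng.standard_normal((nu, r))
H1, H2 = rng.standard_normal((r, n)), rng.standard_normal((r, n))

inner = lambda X, Y: np.trace(Y.T @ X)   # Frobenius inner product <X,Y> = tr(Y^T X)

K  = lambda X, Y: A1 @ X @ B1 + A2 @ X.T @ B2 + C1 @ Y @ D1 + C2 @ Y.T @ D2
L  = lambda X, Y: E1 @ X @ F1 + E2 @ X.T @ F2 + G1 @ Y @ H1 + G2 @ Y.T @ H2
Kt = lambda Z, W: A1.T @ Z @ B1.T + E1.T @ W @ F1.T + B2 @ Z.T @ A2 + F2 @ W.T @ E2
Lt = lambda Z, W: C1.T @ Z @ D1.T + G1.T @ W @ H1.T + D2 @ Z.T @ C2 + H2 @ W.T @ G2

X, Y = rng.standard_normal((p, p)), rng.standard_normal((r, r))
Z, W = rng.standard_normal((m, n)), rng.standard_normal((nu, n))

# identity (2.1): <K(X,Y),Z> + <L(X,Y),W> = <X,Kt(Z,W)> + <Y,Lt(Z,W)>
lhs = inner(K(X, Y), Z) + inner(L(X, Y), W)
rhs = inner(X, Kt(Z, W)) + inner(Y, Lt(Z, W))
assert abs(lhs - rhs) < 1e-9
```

Restricting X and Y to the (skew-)symmetric subspaces and projecting K̃, L̃ accordingly reproduces Propositions 2.3 and 2.4, since the discarded component is orthogonal to the subspace by Theorem 2.1.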

Proof. It is obvious that

K̃(Z,W) = ½( K̃(Z,W) + P_1Q_1 K̃(Z,W)^T P_1Q_1 ) + ½( K̃(Z,W) − P_1Q_1 K̃(Z,W)^T P_1Q_1 )

and

L̃(Z,W) = ½( L̃(Z,W) + P_2Q_2 L̃(Z,W)^T P_2Q_2 ) + ½( L̃(Z,W) − P_2Q_2 L̃(Z,W)^T P_2Q_2 ).

Substitute the two preceding equations into (2.1). By the assumption, X ∈ SR_{P_1Q_1}^{p×p} and Y ∈ SR_{P_2Q_2}^{r×r}; consequently, the assertion holds by the second step in the proof of Theorem 2.1.

Proposition 2.4. Given the matrices P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r}, assume that the linear operators K, L, K̃ and L̃ are defined as before. Then

⟨K(X,Y), Z⟩ + ⟨L(X,Y), W⟩ = ½ ⟨X, K̃(Z,W) − P_1Q_1 K̃(Z,W)^T P_1Q_1⟩ + ½ ⟨Y, L̃(Z,W) − P_2Q_2 L̃(Z,W)^T P_2Q_2⟩,

for all X ∈ SSR_{P_1Q_1}^{p×p}, Y ∈ SSR_{P_2Q_2}^{r×r}, Z ∈ R^{m×n} and W ∈ R^{ν×n}.

The following lemma, known as the Projection Theorem, is utilized to present some of our succeeding results; we refer the reader to (Wang, 2003) for its proof.

Lemma 2.5. Let W be a finite-dimensional inner product space, U be a subspace of W, and U^⊥ be the orthogonal complement subspace of U. For a given w ∈ W, there always exists u_0 ∈ U such that ||w − u_0|| ≤ ||w − u|| for any u ∈ U. More precisely, u_0 ∈ U is the unique minimization vector in U if and only if w − u_0 ∈ U^⊥. Here ||·|| is the norm corresponding to the inner product defined on W.

The following two useful propositions can be established by Lemma 2.5.

Proposition 2.6. Given the matrices P_1, Q_1 ∈ SOR^{p×p} and P_2, Q_2 ∈ SOR^{r×r}, let X ∈ R^{p×p} and Y ∈ R^{r×r} be arbitrary (P_1,Q_1)-orthogonal symmetric and (P_2,Q_2)-orthogonal symmetric matrices, respectively. Then the matrix pair (X,Y) ∈ SR_{P_1Q_1}^{p×p} × SR_{P_2Q_2}^{r×r} is the solution of Problem 1 if and only if, for all (X̄,Ȳ) ∈ SR_{P_1Q_1}^{p×p} × SR_{P_2Q_2}^{r×r},

⟨X̄, K̃(R_1,R_2) + P_1Q_1 K̃(R_1,R_2)^T P_1Q_1⟩ + ⟨Ȳ, L̃(R_1,R_2) + P_2Q_2 L̃(R_1,R_2)^T P_2Q_2⟩ = 0,

in which R_1 = M − K(X,Y) and R_2 = N − L(X,Y), where the linear operators K, L, K̃ and L̃ are defined as before.

Proof. Presume that

W = { [ V_1 ; V_2 ]  |  V_1 ∈ R^{m×n} and V_2 ∈ R^{ν×n} }.

Consider the following subspace of W:

U = { [ K(X,Y) ; L(X,Y) ]  |  X ∈ SR_{P_1Q_1}^{p×p} and Y ∈ SR_{P_2Q_2}^{r×r} }.

By Lemma 2.5, for X ∈ SR_{P_1Q_1}^{p×p} and Y ∈ SR_{P_2Q_2}^{r×r} we have

|| [ M ; N ] − [ K(X,Y) ; L(X,Y) ] || = min_{ (X̄,Ȳ) ∈ L̄ } || [ M ; N ] − [ K(X̄,Ȳ) ; L(X̄,Ȳ) ] ||,   L̄ := SR_{P_1Q_1}^{p×p} × SR_{P_2Q_2}^{r×r},

if and only if

[ M − K(X,Y) ; N − L(X,Y) ] ⊥ U,

or equivalently,

⟨K(X̄,Ȳ), R_1⟩ + ⟨L(X̄,Ȳ), R_2⟩ = 0,   for all (X̄,Ȳ) ∈ SR_{P_1Q_1}^{p×p} × SR_{P_2Q_2}^{r×r},

which completes the proof in view of Proposition 2.3.

In a manner similar to that used in the proof of Proposition 2.6, the following proposition can be established.

Proposition 2.7. Given the matrices P_1, Q_1 ∈ SOR^{p×p} and P_2, Q_2 ∈ SOR^{r×r}, let X ∈ R^{p×p} and Y ∈ R^{r×r} be arbitrary (P_1,Q_1)-orthogonal skew-symmetric and (P_2,Q_2)-orthogonal skew-symmetric matrices, respectively. Then the matrix pair (X,Y) ∈ SSR_{P_1Q_1}^{p×p} × SSR_{P_2Q_2}^{r×r} is the solution of Problem 3 if and only if, for all (X̄,Ȳ) ∈ SSR_{P_1Q_1}^{p×p} × SSR_{P_2Q_2}^{r×r},

⟨X̄, K̃(R_1,R_2) − P_1Q_1 K̃(R_1,R_2)^T P_1Q_1⟩ + ⟨Ȳ, L̃(R_1,R_2) − P_2Q_2 L̃(R_1,R_2)^T P_2Q_2⟩ = 0,

in which R_1 = M − K(X,Y) and R_2 = N − L(X,Y), where the linear operators K, L, K̃ and L̃ are defined as before.

3. Proposed algorithm and its convergence analysis

It is known that the conjugate gradient least-squares (CGLS) method is an effective algorithm to solve the following least-squares problem:

(3.1)   min_{x ∈ R^n} || b − Ax ||_2,

where ||·||_2 is the Euclidean vector norm, A is a given large and sparse m×n matrix and b ∈ R^m; for further details one may refer to Björck (1996) and Saad (1995). In this section, the main goal is to exploit the idea of the CGLS method to construct an iterative algorithm for solving Problems 1-4, using the results elaborated in the previous section. More precisely, the current section is divided into three subsections. In the first part we propose an iterative scheme to solve Problems 1 and 3. The second subsection is devoted to proving that the approximate solution pairs computed by the handled algorithm satisfy an optimality property at each iterate. In the last subsection it is briefly described how the presented algorithm can be employed to solve Problems 2 and 4.
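For readers less familiar with CGLS, the classical vector form for (3.1) can be sketched as follows (a minimal textbook-style Python/NumPy implementation added here for illustration; it is not the matrix-form algorithm of this paper):

```python
import numpy as np

def cgls(A, b, maxit=100, tol=1e-10):
    """Conjugate gradient least squares: minimizes ||b - A x||_2 by
    implicitly applying CG to the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # residual in the data space
    s = A.T @ r            # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        if np.sqrt(gamma) < tol:
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = cgls(A, b)
# a least-squares minimizer satisfies the normal equations A^T (b - A x) = 0
assert np.linalg.norm(A.T @ (b - A @ x)) < 1e-6
```

Algorithm 2 below follows exactly this template, with the vector x replaced by the matrix pair (X,Y), the map A replaced by the operator pair (K,L), and the transpose A^T replaced by the adjoints K̃, L̃ projected onto the constraint subspace via Propositions 2.3 and 2.4.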

3.1 Solving Problems 1 and 3

Utilizing the facts established in Section 2, we may present an extended form of the CGLS method for solving Problems 1 and 3 in Algorithm 2. In the sequel, the convergence properties of the algorithm are discussed.

Algorithm 2: The CGLS method for solving Problems 1 and 3.

1  Data: Input A_i, B_i, C_i, D_i, E_i, F_i, G_i, H_i, M, N and the symmetric orthogonal matrices P_i and Q_i for i = 1, 2. Set η = 0 (η = 1) for solving Problem 1 (Problem 3).
2  Initialization: For solving Problem 1 (Problem 3), choose the initial guess X(0) ∈ SR_{P_1Q_1}^{p×p} and Y(0) ∈ SR_{P_2Q_2}^{r×r} (X(0) ∈ SSR_{P_1Q_1}^{p×p} and Y(0) ∈ SSR_{P_2Q_2}^{r×r}).
3  begin
4    Compute
5      R_1(0) = M − K(X(0), Y(0));
6      R_2(0) = N − L(X(0), Y(0));
7      P_x(0) = ½ K̃(R_1(0), R_2(0)) + ½ (−1)^η P_1Q_1 K̃(R_1(0), R_2(0))^T P_1Q_1;
8      P_y(0) = ½ L̃(R_1(0), R_2(0)) + ½ (−1)^η P_2Q_2 L̃(R_1(0), R_2(0))^T P_2Q_2;
9    Set
10     Q_x(0) = P_x(0);
11     Q_y(0) = P_y(0);
12   for k = 0, 1, 2, ..., until convergence, do
13     K(k) = K(Q_x(k), Q_y(k));
14     L(k) = L(Q_x(k), Q_y(k));
15     X(k+1) = X(k) + [ (||P_x(k)||^2 + ||P_y(k)||^2) / (||K(k)||^2 + ||L(k)||^2) ] Q_x(k);
16     Y(k+1) = Y(k) + [ (||P_x(k)||^2 + ||P_y(k)||^2) / (||K(k)||^2 + ||L(k)||^2) ] Q_y(k);
17     R_1(k+1) = R_1(k) − [ (||P_x(k)||^2 + ||P_y(k)||^2) / (||K(k)||^2 + ||L(k)||^2) ] K(k);
18     R_2(k+1) = R_2(k) − [ (||P_x(k)||^2 + ||P_y(k)||^2) / (||K(k)||^2 + ||L(k)||^2) ] L(k);
19     P_x(k+1) = ½ K̃(R_1(k+1), R_2(k+1)) + ½ (−1)^η P_1Q_1 K̃(R_1(k+1), R_2(k+1))^T P_1Q_1;
20     P_y(k+1) = ½ L̃(R_1(k+1), R_2(k+1)) + ½ (−1)^η P_2Q_2 L̃(R_1(k+1), R_2(k+1))^T P_2Q_2;
21     Q_x(k+1) = P_x(k+1) + [ (||P_x(k+1)||^2 + ||P_y(k+1)||^2) / (||P_x(k)||^2 + ||P_y(k)||^2) ] Q_x(k);
22     Q_y(k+1) = P_y(k+1) + [ (||P_x(k+1)||^2 + ||P_y(k+1)||^2) / (||P_x(k)||^2 + ||P_y(k)||^2) ] Q_y(k);
23   end
24 end
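To make the listing concrete, the following Python/NumPy sketch specializes Algorithm 2 to the simplest instance covered by (1.5): one unknown X, one equation, K(X) = A_1 X B_1, L absent, and the symmetric case η = 0. This is an illustrative reduction written by us under those assumptions, not the full coupled implementation; it is checked on a consistent problem whose exact solution lies in SR_{PQ}:

```python
import numpy as np

def cgls_pq_sym(A1, B1, M, P, Q, maxit=200, tol=1e-10):
    """Algorithm 2 reduced to min ||M - A1 X B1|| over (P,Q)-orthogonal
    symmetric X (eta = 0), starting from the zero matrix."""
    p = P.shape[0]
    X = np.zeros((p, p))                              # initial guess in SR_PQ
    R = M - A1 @ X @ B1
    sym = lambda T: 0.5 * (T + P @ Q @ T.T @ P @ Q)   # projector onto SR_PQ
    Px = sym(A1.T @ R @ B1.T)                         # projected adjoint of residual
    Qx = Px.copy()
    for _ in range(maxit):
        if np.linalg.norm(Px) < tol:
            break
        Kq = A1 @ Qx @ B1
        alpha = np.linalg.norm(Px) ** 2 / np.linalg.norm(Kq) ** 2
        X = X + alpha * Qx
        R = R - alpha * Kq
        Px_new = sym(A1.T @ R @ B1.T)
        beta = np.linalg.norm(Px_new) ** 2 / np.linalg.norm(Px) ** 2
        Qx = Px_new + beta * Qx
        Px = Px_new
    return X

rng = np.random.default_rng(0)
p = 4
A1, B1 = rng.standard_normal((6, p)), rng.standard_normal((p, 5))
v = rng.standard_normal((p, 1))
P = np.eye(p) - 2 * (v @ v.T) / (v.T @ v)     # symmetric orthogonal
Q = np.fliplr(np.eye(p))                      # exchange matrix, also in SOR
S = rng.standard_normal((p, p)); S = S + S.T
Xstar = P @ S @ Q                             # a (P,Q)-orthogonal symmetric solution
M = A1 @ Xstar @ B1                           # consistent right-hand side
X = cgls_pq_sym(A1, B1, M, P, Q)
assert np.linalg.norm(M - A1 @ X @ B1) < 1e-6
T = P @ X @ Q
assert np.allclose(T, T.T, atol=1e-8)         # iterates stay in SR_PQ
```

Note that every iterate remains in the constraint subspace because the initial guess lies in SR_{PQ} and every search direction is an output of the projector sym(·); this mirrors Lines 7-8 and 19-20 of Algorithm 2.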

The following theorem presents some properties of the sequences produced by Algorithm 2. These properties play a basic role in studying the convergence behavior of the algorithm for resolving Problems 1-4.

Theorem 3.1. Suppose that k steps of Algorithm 2 have been performed. The sequences P_x(l), P_y(l), Q_x(l) and Q_y(l) (l = 0, 1, ..., k) produced by Algorithm 2 satisfy, for i, j = 0, 1, 2, ..., k with i ≠ j,

(3.2)   ⟨P_x(i), P_x(j)⟩ + ⟨P_y(i), P_y(j)⟩ = 0,
(3.3)   ⟨K(Q_x(i), Q_y(i)), K(Q_x(j), Q_y(j))⟩ + ⟨L(Q_x(i), Q_y(i)), L(Q_x(j), Q_y(j))⟩ = 0,
(3.4)   ⟨Q_x(i), P_x(j)⟩ + ⟨Q_y(i), P_y(j)⟩ = 0.

Note that the previous theorem can be established by mathematical induction; its proof is given in the Appendix.

Remark 3.2. Without loss of generality, assume that we apply the proposed algorithm to solve Problem 1. Suppose that k (k > 1) steps of Algorithm 2 have been performed. To continue the algorithm at the (k+1)-st step, we must have P_x(k) ≠ 0 or P_y(k) ≠ 0 in Lines 21 and 22. Note that if P_x(k) = 0 and P_y(k) = 0, then Proposition 2.6 implies that (X(k), Y(k)) is a solution pair of Problem 1. Considering Lines 15-18, it is required to presume that K(Q_x(k), Q_y(k)) ≠ 0 or L(Q_x(k), Q_y(k)) ≠ 0. Otherwise, using Proposition 2.3 and Eq. (3.4), we have

0 = ⟨K(Q_x(k), Q_y(k)), R_1(k)⟩ + ⟨L(Q_x(k), Q_y(k)), R_2(k)⟩ = ⟨Q_x(k), P_x(k)⟩ + ⟨Q_y(k), P_y(k)⟩ = ⟨P_x(k), P_x(k)⟩ + ⟨P_y(k), P_y(k)⟩,

which indicates that P_x(k) = 0 and P_y(k) = 0; as discussed earlier, this ensures that (X(k), Y(k)) is a solution pair of Problem 1.

Remark 3.3. In view of the orthogonality relation (3.2) presented in Theorem 3.1, Algorithm 2 can solve Problems 1 and 3 within a finite number of steps.

Theorem 3.4. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r} are given. Suppose that (X̃, Ỹ) is a solution pair of Problem 1 (Problem 3). If (X̂, Ŷ) is an alternative solution pair of Problem 1 (Problem 3), then there exist X_1 ∈ SR_{P_1Q_1}^{p×p} and Y_1 ∈ SR_{P_2Q_2}^{r×r} (X_1 ∈ SSR_{P_1Q_1}^{p×p} and Y_1 ∈ SSR_{P_2Q_2}^{r×r}) such that X̃ = X̂ + X_1 and Ỹ = Ŷ + Y_1, and

(3.5)   K(X_1, Y_1) = 0 and L(X_1, Y_1) = 0.

Proof. We prove the assertion only for the solutions of Problem 1; the same strategy can be handled to show that the conclusion is also valid for
the solutions of Problem 3. Presume that X_1 = X̃ − X̂ and Y_1 = Ỹ − Ŷ; it is not difficult to see that X_1 ∈ SR_{P_1Q_1}^{p×p} and Y_1 ∈ SR_{P_2Q_2}^{r×r}. By Propositions 2.3 and 2.6 and some straightforward computations, it turns out that

(3.6)   ⟨M − K(X̂,Ŷ), K(X_1,Y_1)⟩ + ⟨N − L(X̂,Ŷ), L(X_1,Y_1)⟩ = 0.

By the assumption, (X̃, Ỹ) and (X̂, Ŷ) are solutions of Problem 1. This hypothesis guarantees that

|| [ M ; N ] − [ K(X̃,Ỹ) ; L(X̃,Ỹ) ] || = || [ M ; N ] − [ K(X̂,Ŷ) ; L(X̂,Ŷ) ] ||,

which is equivalent to saying that

|| M − K(X̃,Ỹ) ||^2 + || N − L(X̃,Ỹ) ||^2 = || M − K(X̂,Ŷ) ||^2 + || N − L(X̂,Ŷ) ||^2.

Considering (3.6), we derive the following relation:

|| K(X_1,Y_1) ||^2 + || L(X_1,Y_1) ||^2 = || M − K(X̃,Ỹ) ||^2 + || N − L(X̃,Ỹ) ||^2 − || M − K(X̂,Ŷ) ||^2 − || N − L(X̂,Ŷ) ||^2 = 0,

which completes the proof.

By Theorem 3.4, in the rest of this subsection it is demonstrated that Algorithm 2 converges to the least-norm solution pair (X*, Y*) of Problem 1 (Problem 3). For two arbitrary given matrices Z ∈ R^{m×n} and W ∈ R^{ν×n}, we set

(3.7)   X(0) = ½ K̃(Z,W) + ½ (−1)^η P_1Q_1 K̃(Z,W)^T P_1Q_1,
(3.8)   Y(0) = ½ L̃(Z,W) + ½ (−1)^η P_2Q_2 L̃(Z,W)^T P_2Q_2,

where the value of η is respectively taken to be zero and one for solving Problem 1 and Problem 3. By applying Algorithm 2 with X(0) and Y(0) as shown by (3.7) and (3.8), Remark 3.3 implies that there exist Z* ∈ R^{m×n} and W* ∈ R^{ν×n} such that the solution pair (X*, Y*) solves Problem 1 for η = 0 (Problem 3 for η = 1), where

(3.9)    X* = ½ K̃(Z*,W*) + ½ (−1)^η P_1Q_1 K̃(Z*,W*)^T P_1Q_1,
(3.10)   Y* = ½ L̃(Z*,W*) + ½ (−1)^η P_2Q_2 L̃(Z*,W*)^T P_2Q_2.

Now, if (X̃, Ỹ) is a solution pair of Problem 1 (Problem 3), then Theorem 3.4 indicates the existence of (X_1, Y_1) ∈ SR_{P_1Q_1}^{p×p} × SR_{P_2Q_2}^{r×r} ((X_1, Y_1) ∈ SSR_{P_1Q_1}^{p×p} × SSR_{P_2Q_2}^{r×r}) such that X̃ = X* + X_1 and Ỹ = Y* + Y_1, and the relations given in (3.5) are satisfied. Evidently, we have

⟨X* + X_1, X* + X_1⟩ + ⟨Y* + Y_1, Y* + Y_1⟩ = ⟨X*, X*⟩ + ⟨Y*, Y*⟩ + 2⟨X*, X_1⟩ + 2⟨Y*, Y_1⟩ + ⟨X_1, X_1⟩ + ⟨Y_1, Y_1⟩.

In view of (3.5), (3.9) and (3.10), Proposition 2.3 (Proposition 2.4) implies that ⟨X*, X_1⟩ + ⟨Y*, Y_1⟩ = 0. Consequently, it can be deduced that

||X̃||^2 + ||Ỹ||^2 = ⟨X* + X_1, X* + X_1⟩ + ⟨Y* + Y_1, Y* + Y_1⟩ = ⟨X*, X*⟩ + ⟨Y*, Y*⟩ + ⟨X_1, X_1⟩ + ⟨Y_1, Y_1⟩.

Therefore, we obtain

||X*||^2 + ||Y*||^2 ≤ ||X̃||^2 + ||Ỹ||^2.

It should be noticed here that the inequality in the above relation holds strictly as long as X_1 ≠ 0 or Y_1 ≠ 0. As a result, (X*, Y*) is the unique least-norm solution pair of Problem 1 (Problem 3), which can be reached via Algorithm 2 by choosing X(0) and Y(0) as shown by (3.7) and (3.8) with η = 0 (η = 1).

3.2 Optimality property of the proposed algorithm

The minimization property of Algorithm 2 can be deduced from the following proposition.

Proposition 3.5. Suppose that k steps of Algorithm 2 have been performed. Then the sequence {R(l)}_{l=1}^{k} satisfies

(3.11)   ⟨R_1(k), K(Q_x(i), Q_y(i))⟩ + ⟨R_2(k), L(Q_x(i), Q_y(i))⟩ = 0,   for i = 0, 1, ..., k−1,

where R_1(k) = M − K(X(k), Y(k)), R_2(k) = N − L(X(k), Y(k)) and (X(k), Y(k)) is the k-th approximate solution pair computed by Algorithm 2.

Proof. Assume that η = 0 (η = 1). In view of Proposition 2.3 (Proposition 2.4), it is deduced that

⟨R_1(k), K(Q_x(i), Q_y(i))⟩ + ⟨R_2(k), L(Q_x(i), Q_y(i))⟩ = ⟨P_x(k), Q_x(i)⟩ + ⟨P_y(k), Q_y(i)⟩.

Now the result follows immediately from the property (3.4) given in Theorem 3.1.

Considering that the m-th step of Algorithm 2 has been performed, we elucidate the subspaces K_m and U_m as follows:

(3.12)   K_m = span{ [ Q_x(0) ; Q_y(0) ], ..., [ Q_x(m−1) ; Q_y(m−1) ] },
(3.13)   U_m = span{ [ K(Q_x(0), Q_y(0)) ; L(Q_x(0), Q_y(0)) ], ..., [ K(Q_x(m−1), Q_y(m−1)) ; L(Q_x(m−1), Q_y(m−1)) ] }.

Evidently, we can see that

[ X(m) ; Y(m) ] ∈ [ X(0) ; Y(0) ] + K_m.

By Proposition 3.5, it follows that

[ M − K(X(m), Y(m)) ; N − L(X(m), Y(m)) ] ⊥ U_m.

The following theorem reveals that the approximate solutions produced by Algorithm 2 satisfy an optimality property which demonstrates the minimization property of the algorithm. The result is a direct consequence of Lemma 2.5, hence we omit its proof.

Theorem 3.6. Suppose that m steps of Algorithm 2 have been performed. Presume that the subspaces K_m and U_m are defined by (3.12) and (3.13), respectively. Then

|| [ M ; N ] − [ K(X(m), Y(m)) ; L(X(m), Y(m)) ] || = min_{ [X;Y] ∈ L̄ } || [ M ; N ] − [ K(X,Y) ; L(X,Y) ] ||,

where L̄ := { [ X ; Y ]  |  [ X ; Y ] ∈ [ X(0) ; Y(0) ] + K_m }.

3.3 On the solution of Problems 2 and 4

In the current subsection we discuss an approach to solve Problems 2 and 4. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r} and Q_2 ∈ SOR^{r×r} are given. Let us denote the solution sets appearing in Problems 2 and 4 by S_s and S_ss, respectively; evidently, the sets S_s and S_ss are both nonempty. For two given X_0 ∈ R^{p×p} and Y_0 ∈ R^{r×r}, Theorem 2.1 ensures the existence of Z_i and W_i (i = 1, 2) with X_0 = Z_1 + Z_2 and Y_0 = W_1 + W_2 such that Z_1 ∈ SR_{P_1Q_1}^{p×p}, Z_2 ∈ SSR_{P_1Q_1}^{p×p}, W_1 ∈ SR_{P_2Q_2}^{r×r} and W_2 ∈ SSR_{P_2Q_2}^{r×r}.

Remark 3.7. It is not difficult to see that the following statements hold.

(3.14)   If (X,Y) ∈ S_s, then

⟨X − X_0, X − X_0⟩ + ⟨Y − Y_0, Y − Y_0⟩ = ⟨Z_1 − X, Z_1 − X⟩ + ⟨W_1 − Y, W_1 − Y⟩ + ⟨Z_2, Z_2⟩ + ⟨W_2, W_2⟩.

(3.15)   If (X,Y) ∈ S_ss, then

⟨X − X_0, X − X_0⟩ + ⟨Y − Y_0, Y − Y_0⟩ = ⟨Z_2 − X, Z_2 − X⟩ + ⟨W_2 − Y, W_2 − Y⟩ + ⟨Z_1, Z_1⟩ + ⟨W_1, W_1⟩.

Consequently, from (3.14), it can be concluded that

(3.16)   min_{(X,Y) ∈ S_s} { ||X − X_0||^2 + ||Y − Y_0||^2 } = min_{(X,Y) ∈ S_s} { ||Z_1 − X||^2 + ||W_1 − Y||^2 } + ||Z_2||^2 + ||W_2||^2.

In addition, (3.15) implies that

(3.17)   min_{(X,Y) ∈ S_ss} { ||X − X_0||^2 + ||Y − Y_0||^2 } = min_{(X,Y) ∈ S_ss} { ||Z_2 − X||^2 + ||W_2 − Y||^2 } + ||Z_1||^2 + ||W_1||^2.

From Remark 3.7 we infer that, for solving Problem 2 and Problem 4, it is sufficient to evaluate the least-norm solutions of the following new problems, respectively.

Problem 5. Let the linear operators K and L be defined as before. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r}, Q_2 ∈ SOR^{r×r}, X_0 ∈ R^{p×p} and Y_0 ∈ R^{r×r} are given. Find X̃ ∈ SR_{P_1Q_1}^{p×p} and Ỹ ∈ SR_{P_2Q_2}^{r×r} such that

|| [ M̃ ; Ñ ] − [ K(X̃,Ỹ) ; L(X̃,Ỹ) ] || = min_{ (X,Y) : X ∈ SR_{P_1Q_1}^{p×p}, Y ∈ SR_{P_2Q_2}^{r×r} } || [ M̃ ; Ñ ] − [ K(X,Y) ; L(X,Y) ] ||,

in which M̃ = M − K(Z_1, W_1) and Ñ = N − L(Z_1, W_1), where Z_1 = ½( X_0 + P_1Q_1 X_0^T P_1Q_1 ) and W_1 = ½( Y_0 + P_2Q_2 Y_0^T P_2Q_2 ).

Problem 6. Let the linear operators K and L be defined as before. Assume that P_1 ∈ SOR^{p×p}, Q_1 ∈ SOR^{p×p}, P_2 ∈ SOR^{r×r}, Q_2 ∈ SOR^{r×r}, X_0 ∈ R^{p×p} and Y_0 ∈ R^{r×r} are given. Find X̃ ∈ SSR_{P_1Q_1}^{p×p} and Ỹ ∈ SSR_{P_2Q_2}^{r×r} such that

|| [ M̃ ; Ñ ] − [ K(X̃,Ỹ) ; L(X̃,Ỹ) ] || = min_{ (X,Y) : X ∈ SSR_{P_1Q_1}^{p×p}, Y ∈ SSR_{P_2Q_2}^{r×r} } || [ M̃ ; Ñ ] − [ K(X,Y) ; L(X,Y) ] ||,

in which M̃ = M − K(Z_2, W_2) and Ñ = N − L(Z_2, W_2), where Z_2 = ½( X_0 − P_1Q_1 X_0^T P_1Q_1 ) and W_2 = ½( Y_0 − P_2Q_2 Y_0^T P_2Q_2 ).
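The splitting used in Problems 5 and 6 is exactly the orthogonal decomposition of Theorem 2.1 applied to the given pair (X_0, Y_0). A short Python/NumPy check (illustrative data; formulas as in the text) confirms that the two parts reproduce X_0, land in the right subspaces, and are orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
v = rng.standard_normal((p, 1))
P1 = np.eye(p) - 2 * (v @ v.T) / (v.T @ v)      # symmetric orthogonal
Q1 = np.fliplr(np.eye(p))                       # exchange matrix, also symmetric orthogonal

X0 = rng.standard_normal((p, p))                # arbitrary given matrix
Z1 = 0.5 * (X0 + P1 @ Q1 @ X0.T @ P1 @ Q1)      # (P1,Q1)-orthogonal symmetric part
Z2 = 0.5 * (X0 - P1 @ Q1 @ X0.T @ P1 @ Q1)      # (P1,Q1)-orthogonal skew-symmetric part

assert np.allclose(X0, Z1 + Z2)
S1, S2 = P1 @ Z1 @ Q1, P1 @ Z2 @ Q1
assert np.allclose(S1, S1.T)                    # Z1 in SR_{P1Q1}
assert np.allclose(S2, -S2.T)                   # Z2 in SSR_{P1Q1}
assert abs(np.trace(Z2.T @ Z1)) < 1e-10         # <Z1, Z2> = 0: orthogonal parts
```

The same computation with (P_2, Q_2) and Y_0 produces W_1 and W_2; shifting the right-hand sides to M̃, Ñ then reduces the nearness Problems 2 and 4 to least-norm instances of Problems 1 and 3.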

To determine the solution of Problem 2 (Problem 4), we first compute (X̃, Ỹ) as the least-norm solution pair of Problem 5 (Problem 6), using the approach described at the end of Subsection 3.1. Then the solution pair (X̄, Ȳ) of Problem 2 (Problem 4) is obtained by X̄ = X̃ + Z_1 and Ȳ = Ỹ + W_1 (X̄ = X̃ + Z_2 and Ȳ = Ỹ + W_2).

4. Numerical experiments

In this section, some numerical examples are examined to demonstrate the validity of the elaborated theoretical results and to illustrate the practicability of the proposed algorithm for solving Problems 1-4. More precisely, we consider matrix equations in cases where they are consistent or inconsistent over (P_1,Q_1)-orthogonal symmetric (and skew-symmetric) and Hamiltonian matrices. All reported experiments were performed on a 64-bit 2.45 GHz Core i7 processor with 8GB RAM, using MATLAB codes on MATLAB version 8.3.0.532.

Example 4.1. Consider the following least-squares problem:

(4.1)   || M − K(X) || = min,   X ∈ SR_{P_1Q_1}^{5×5},

where K(X) = A_1 X B_1, in which A_1, B_1, M ∈ R^{5×5} and P_1, Q_1 ∈ SOR^{5×5} are given matrices (their numerical entries are omitted here). In order to solve (4.1) by Algorithm 2, we exploit a zero matrix as the initial guess and the following stopping criterion:

ε(k) = || X(k) − X(k−1) || < tol,

for a prescribed tolerance tol.

The algorithm converges in 22 iterations, and the obtained approximate solution has residual norm η = 2.86e-11, where η_k = ‖R(k)‖ = ‖M − A_1 X(k) B_1‖. It is noted that there is an exact solution X* of K(X) = M which is (P_1, Q_1)-orthogonal symmetric; this reveals that the algorithm has provided an approximate solution to K(X) = M which is also an approximate solution to (4.1). The convergence history of the method is plotted in Figure 1 for more clarification.

We now utilize Algorithm 2 to compute the matrix X̂ such that

(4.2)  ‖X̂ − X̄‖ = min_{X ∈ S_s} ‖X − X̄‖,

where S_s is the solution set of (4.1) and X̄ is a given 5×5 matrix, specified in (4.3). According to Problem 5, we set

Z_1 = (1/2)(X̄ + P_1Q_1 X̄^T P_1Q_1)  and  M̃ = M − K(Z_1).

Applying Algorithm 2 to solve the ensuing least-squares problem

(4.4)  ‖M̃ − K(X̃)‖ = min_{X ∈ SR_{P_1Q_1}^{5×5}} ‖M̃ − K(X)‖,
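The reduction from (4.2) to (4.4) rests on the identity M − K(X̃ + Z_1) = M̃ − K(X̃), which holds for any linear operator K. A quick numerical check on hypothetical stand-in data, with K(X) = A_1XB_1 as in this example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A1, B1, M = (rng.standard_normal((n, n)) for _ in range(3))
K = lambda X: A1 @ X @ B1                     # the linear operator of Example 4.1

def householder(v):                           # a symmetric orthogonal matrix
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

P1, Q1 = householder(rng.standard_normal(n)), householder(rng.standard_normal(n))

Xbar = rng.standard_normal((n, n))            # the given target matrix
Z1 = (Xbar + P1 @ Q1 @ Xbar.T @ P1 @ Q1) / 2  # its (P1,Q1)-symmetric part
Mt = M - K(Z1)                                # shifted right-hand side M~

# the residual of X~ + Z1 in the original problem equals the residual of X~
# in the shifted problem, so the minimizers agree up to the shift Z1
Xt = rng.standard_normal((n, n))
assert np.allclose(M - K(Xt + Z1), Mt - K(Xt))
```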

we derive X̃(3) as an approximation of X̃. Hence, the approximate solution of (4.2) is determined by X̂ ≈ Z_1 + X̃(3). For more details, the history of log_10 of the error norm is displayed in Figure 1.

To demonstrate the effectiveness of the proposed algorithm in computing a (P_1, Q_1)-orthogonal skew-symmetric solution of the investigated problem, let us consider the following problems:

(4.5)  ‖M − K(X̂)‖ = min_{X ∈ SSR_{P_1Q_1}^{5×5}} ‖M − K(X)‖,

with a given right-hand side M, and

(4.6)  ‖X̂ − X̄‖ = min_{X ∈ S_ss} ‖X − X̄‖,

where S_ss is the set of solutions of problem (4.5). In a similar manner to that used in the orthogonal symmetric case, Algorithm 2 is implemented to compute approximate solutions to problems (4.5) and (4.6); all the other assumptions are as before. For both problems, the algorithm calculates an approximate solution X(13) in 13 iterations. Here, we have η_13 = 7.11e-12 for Problem 3. It is mentioned that the matrix X*

is a (P_1, Q_1)-orthogonal skew-symmetric matrix and X* ≈ X(13). The convergence behavior of the method applied to problems (4.5) and (4.6) is depicted in Figure 1.

Subsequently, we consider Algorithm 2 to solve (4.6), where X̄ is given by (4.3). Having Problem 6 in mind, we set

Z_2 = (1/2)(X̄ − P_1Q_1 X̄^T P_1Q_1)  and  M̃ = M − K(Z_2).

If we apply Algorithm 2 to solve the least-squares problem

(4.7)  ‖M̃ − K(X̃)‖ = min_{X ∈ SSR_{P_1Q_1}^{5×5}} ‖M̃ − K(X)‖,

we get X̃(13) as an approximation of X̃. Hence, the approximate solution of (4.6) is determined by X̂ ≈ Z_2 + X̃(13). For more details, the history of log_10 of the error norm is displayed in Figure 1.

We now set M = I, where I is the identity matrix of order n. In this case it is not difficult to see that the equation A_1XB_1 = M is inconsistent over SR_{P_1Q_1}^{m×n} and SSR_{P_1Q_1}^{m×n}. We employ Algorithm 2 to solve Problems 1, 2, 3 and 4; the results are given in Table 1, in which "Iters" stands for the number of iterations required for convergence. The convergence history of the methods is presented in Figure 2. Apparently, the numerical results show that the proposed algorithm is effective.
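For a single equation, the idea behind Algorithm 2 can be sketched as a CGLS iteration in which the normal-equation residual is projected onto the constraint set. The following is our own simplified reimplementation for min ‖M − A_1XB_1‖_F over (P_1,Q_1)-orthogonal symmetric X; the paper's Algorithm 2 treats the coupled pair of equations and differs in details such as the stopping rule:

```python
import numpy as np

def pq_sym_lstsq(A, B, M, P, Q, tol=1e-12, maxit=500):
    """CGLS-type iteration for min ||M - A X B||_F over (P,Q)-orthogonal
    symmetric X: plain CGLS in which the gradient A^T R B^T is projected
    onto the constraint subspace at every step."""
    proj = lambda X: (X + P @ Q @ X.T @ P @ Q) / 2  # orthogonal projector onto SR
    X = np.zeros((A.shape[1], B.shape[0]))
    R = M - A @ X @ B                               # equation residual
    G = proj(A.T @ R @ B.T)                         # projected gradient
    D, gg = G.copy(), np.sum(G * G)
    for _ in range(maxit):
        KD = A @ D @ B
        alpha = gg / np.sum(KD * KD)
        X += alpha * D
        R -= alpha * KD
        G = proj(A.T @ R @ B.T)
        gg_new = np.sum(G * G)
        if np.sqrt(gg_new) < tol:
            break
        D = G + (gg_new / gg) * D
        gg = gg_new
    return X

# sanity check on a consistent problem: the known constrained solution is recovered
rng = np.random.default_rng(2)
n = 4
v, w = rng.standard_normal(n), rng.standard_normal(n)
P = np.eye(n) - 2 * np.outer(v, v) / (v @ v)        # symmetric orthogonal
Q = np.eye(n) - 2 * np.outer(w, w) / (w @ w)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned data
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Xs = rng.standard_normal((n, n))
Xs = (Xs + P @ Q @ Xs.T @ P @ Q) / 2                # a (P,Q)-symmetric solution
X = pq_sym_lstsq(A, B, A @ Xs @ B, P, Q)
assert np.allclose(X, Xs, atol=1e-8)
```

The projection is self-adjoint and idempotent, so the iterates remain in the constraint subspace and the scheme is CG applied to the projected normal equations.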

Table 1. Numerical results for Example 4.1 in the inconsistent case
         Problem 1   Problem 2   Problem 3   Problem 4
η_k      8.17e-…     …e-11       9.5e-…      …e-1…
Iters    …           …           …           …

Figure 1. Convergence history for Example 4.1 (consistent case). Problem 1: top-left; Problem 2: top-right; Problem 3: bottom-left; Problem 4: bottom-right.

Example 4.2. In this example, we consider Problem 1 with

K(X, Y) = A_1XB_1 + X^T + C_1YD_1 + Y^T,
L(X, Y) = X + EX^TF + Y + GY^TH,

where

A_1 = pentadiag_n(·, ·, ·, 1, 1),   B_1 = pentadiag_n(1, ·, ·, 1, 1),
C_1 = tridiag_n(1, ·, 7),           D_1 = tridiag_n(·, 1, 4),
E = tridiag_n(1, 3, 1),             F = tridiag_n(1, 6, 3),
G = pentadiag_n(·, 1, 3, 1, 3),     H = pentadiag_n(·, ·, ·, 3, ·).
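The banded test matrices above can be generated with a small helper (a hypothetical name of ours; tridiag_n and pentadiag_n denote Toeplitz band matrices with the listed constant diagonals, and the coefficient 0 below merely stands in for an unspecified entry):

```python
import numpy as np

def band_toeplitz(n, coeffs):
    """Toeplitz band matrix of order n: coeffs lists the constant diagonals
    from the lowest subdiagonal up to the highest superdiagonal, so that
    tridiag_n(a, b, c) corresponds to band_toeplitz(n, [a, b, c]) and
    pentadiag_n takes five coefficients."""
    k = len(coeffs) // 2
    T = np.zeros((n, n))
    for offset, c in zip(range(-k, k + 1), coeffs):
        T += c * np.diag(np.ones(n - abs(offset)), offset)
    return T

F = band_toeplitz(5, [1, 6, 3])          # tridiag_5(1, 6, 3)
assert F[1, 0] == 1 and F[0, 0] == 6 and F[0, 1] == 3
G = band_toeplitz(6, [0, 1, 3, 1, 3])    # a pentadiagonal example (0 = stand-in)
assert G[0, 2] == 3 and G[2, 0] == 0
```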

Figure 2. Convergence history for Example 4.1 (inconsistent case). Problem 1: top-left; Problem 2: top-right; Problem 3: bottom-left; Problem 4: bottom-right.

Presume that p_1 = (1, 1, …, 1)^T, p_2 = (n, n−1, …, 1)^T, q_1 = (1, 2, …, n)^T and q_2 = (1, 2, …)^T; the matrices P_1, P_2, Q_1 and Q_2 are specified as follows:

P_i = I − 2 p_i p_i^T / (p_i^T p_i),   Q_i = I − 2 q_i q_i^T / (q_i^T q_i),   i = 1, 2,

where I signifies the identity matrix of order n. It is not difficult to verify that P_1, P_2, Q_1 and Q_2 are n×n symmetric orthogonal matrices. Suppose that X̄ = P_1(W_x + W_x^T)Q_1 and Ỹ = P_2(W_y + W_y^T)Q_2, where W_x = tridiag_n(1, ·, 1) and W_y = tridiag_n(1, 1, ·). Evidently, we have X̄ ∈ SR_{P_1Q_1}^{n×n} and Ỹ ∈ SR_{P_2Q_2}^{n×n}. The matrices M and N are selected so that M = K(X̄, Ỹ) and N = L(X̄, Ỹ). Note that in this case the system

(4.8)  K(X, Y) = M,   L(X, Y) = N,

is consistent over SR_{P_1Q_1}^{m×n} × SR_{P_2Q_2}^{m×n}. Now, let us focus on the solution of Problem 1 with the above data and n = 5. In order to handle Algorithm 2 for resolving Problem 1, we utilize (X(0), Y(0)) = (0, 0) as an initial guess and the stopping criterion

max{ ‖X(k) − X(k−1)‖, ‖Y(k) − Y(k−1)‖ } < ε.
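That each P_i and Q_i is symmetric orthogonal follows because it is a Householder reflection; a quick check:

```python
import numpy as np

def reflector(p):
    """P = I - 2 p p^T / (p^T p): symmetric (P^T = P) and involutory
    (P^2 = I), hence orthogonal."""
    p = np.asarray(p, dtype=float)
    return np.eye(len(p)) - 2.0 * np.outer(p, p) / (p @ p)

n = 6
p1 = np.ones(n)                          # p1 = (1, 1, ..., 1)^T
p2 = np.arange(n, 0, -1, dtype=float)    # p2 = (n, n-1, ..., 1)^T
q1 = np.arange(1, n + 1, dtype=float)    # q1 = (1, 2, ..., n)^T
for v in (p1, p2, q1):
    P = reflector(v)
    assert np.allclose(P, P.T)           # symmetric
    assert np.allclose(P @ P, np.eye(n))  # orthogonal, since P^{-1} = P^T = P
```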

Table 2. Numerical results for Example 4.2 in the inconsistent case
         Problem 1   Problem 2   Problem 3   Problem 4
η_k      9.89e-…     …           …e-11       8.7e-11
Iters    …           …           …           …

In this case the algorithm converges in 85 iterations and we have η_85 ≈ 7e-9, where

η_k = max{ ‖M − K(X(k), Y(k))‖, ‖N − L(X(k), Y(k))‖ }.

The graph of log_10 of the error norm is illustrated in Figure 3. To investigate the applicability of the algorithm to solve Problem 2, we choose X̄ = Ȳ = I, where I is the identity matrix of order n; all of the other assumptions are the same as before. In this case, Algorithm 2 converges in 85 iterations. The convergence performance of the method is illustrated in Figure 3.

To demonstrate the effectiveness of the algorithm for finding the solution pairs of Problems 3 and 4, we set X̄ = P_1(W_x − W_x^T)Q_1 and Ỹ = P_2(W_y − W_y^T)Q_2, where W_x = tridiag_n(1, ·, 5) and W_y = tridiag_n(1, 3, ·). Straightforward computations reveal that X̄ ∈ SSR_{P_1Q_1}^{n×n} and Ỹ ∈ SSR_{P_2Q_2}^{n×n}. The matrices M and N are chosen such that M = K(X̄, Ỹ) and N = L(X̄, Ỹ); therefore, the system (4.8) is consistent over SSR_{P_1Q_1}^{m×n} × SSR_{P_2Q_2}^{m×n}. All of the other hypotheses are assumed to be those of the orthogonal symmetric case. In this case, Algorithm 2 converges in 8 iterations for both Problem 3 and Problem 4; for Problem 3, we have η_8 ≈ 4.3e-9. In Figure 3, the convergence history of the method for these problems is displayed for further information.

Finally, we choose M = tridiag_n(1, 1, 1) and N = pentadiag_n(1, 1, ·, 1, 1). In this case, it can be verified that the system (4.8) is inconsistent over SR_{P_1Q_1}^{m×n} × SR_{P_2Q_2}^{m×n} and SSR_{P_1Q_1}^{m×n} × SSR_{P_2Q_2}^{m×n}. Algorithm 2 was employed to solve the mentioned main problems; the obtained results are disclosed in Table 2 and the convergence curves are delineated in Figure 4.

Example 4.3. In this example, we work on the inverse eigenvalue problem HX = XΛ over Hamiltonian matrices; i.e., the unknown matrix H is determined so that H is Hamiltonian and minimizes ‖HX − XΛ‖. This example has been previously examined in the literature; for instance, see Qian and Tan
(2013, Table 1). For m = 2l, we choose l eigenpairs (λ_1, y_1), …, (λ_l, y_l) of the n×n Hamiltonian matrix H of Example 16 from the benchmark examples for continuous-time algebraic Riccati equations (CARE) in Benner et al. (1995). We set X = [Y_1, ĴȲ_1] ∈ R^{n×m} and Λ = diag(Λ_1, Λ̄_1) ∈ R^{m×m}, where Y_1 = [y_1, y_2, …, y_l], Λ_1 = diag(λ_1, λ_2, …, λ_l) and Ĵ is defined by (1.1). Here our aim in Problem 1 is to determine the least-squares Hamiltonian solution H of HX = XΛ. Note that Problem 2 is equivalent to Problem 1 in Qian and Tan (2013). To solve Problem 2, we choose Ā = randn(n, n), and our goal is to find the least-squares Hamiltonian solution H of HX = XΛ such that ‖Ā − H‖ is minimized.
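Recall that H ∈ R^{2l×2l} is Hamiltonian precisely when ĴH is symmetric, with Ĵ = [[0, I_l], [−I_l, 0]] (the usual convention, which we assume matches the paper's (1.1)). A small check, including the familiar block characterization:

```python
import numpy as np

def J_hat(l):
    """J^ = [[0, I_l], [-I_l, 0]]."""
    Z, I = np.zeros((l, l)), np.eye(l)
    return np.block([[Z, I], [-I, Z]])

def is_hamiltonian(H, tol=1e-12):
    """H is Hamiltonian iff J^ H is symmetric."""
    JH = J_hat(H.shape[0] // 2) @ H
    return np.allclose(JH, JH.T, atol=tol)

# block characterization: H = [[A, G], [F, -A^T]] with G and F symmetric
rng = np.random.default_rng(3)
l = 3
A = rng.standard_normal((l, l))
G = rng.standard_normal((l, l)); G = G + G.T
F = rng.standard_normal((l, l)); F = F + F.T
H = np.block([[A, G], [F, -A.T]])
assert is_hamiltonian(H)
assert not is_hamiltonian(rng.standard_normal((2 * l, 2 * l)))
```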

Figure 3. Convergence history for Example 4.2 (consistent case). Problem 1: top-left; Problem 2: top-right; Problem 3: bottom-left; Problem 4: bottom-right.

Now, we use Algorithm 2 to compute approximate solutions of Problems 1 and 2. It is noted that, in the algorithm, the matrices P_1 and Q_1 should be replaced by I_n and Ĵ, respectively. Numerical results for different values of n and m are presented in Tables 3 and 4; in these tables, the number of iterations (Iters) and the CPU time (CPU) are reported, and H(k) denotes the computed solution at iteration k. From the reported numerical results in Tables 3 and 4, we can observe that Algorithm 2 computes good approximate solutions of the mentioned problems in short times and small numbers of iterations.

5. Conclusion

It has been established that the set of all real square matrices can be written as a direct sum of the set of all (P, Q)-orthogonal symmetric matrices and the set of all (P, Q)-orthogonal skew-symmetric matrices. Invoking this fact, we have scrutinized the extension of the well-known conjugate gradient least-squares (CGLS) method for finding the least-squares (P, Q)-orthogonal symmetric and skew-symmetric solution pairs of general coupled matrix equations. Some numerical examples have been examined to illustrate the good performance of the proposed algorithm on the considered problems.

Figure 4. Convergence history for Example 4.2 (inconsistent case). Problem 1: top-left; Problem 2: top-right; Problem 3: bottom-left; Problem 4: bottom-right.

Table 3. Numerical results of Example 4.3 for solving Problem 1
(n, m)    CPU    Iters    ‖H(k)X − XΛ‖    ‖H − H(k)‖
…         …      …        …               …

Acknowledgments. The authors would like to express their heartfelt thanks to three anonymous referees for their valuable suggestions and constructive comments, which have improved the quality of the paper.

Table 4. Numerical results of Example 4.3 for solving Problem 2
(n, m)    CPU    Iters    ‖H(k)X − XΛ‖    ‖Ā − H(k)‖
…         …      …        …               …

References

[Beik and Salkuyeh, 2011] Beik FPA and Salkuyeh DK (2011). On the global Krylov subspace methods for solving general coupled matrix equations. Computers and Mathematics with Applications 62(11).
[Beik and Salkuyeh, 2013] Beik FPA and Salkuyeh DK (2013). The coupled Sylvester-transpose matrix equations over generalized centro-symmetric matrices. International Journal of Computer Mathematics 90(7).
[Beik et al., 2014] Beik FPA, Salkuyeh DK and Moghadam MM (2014). Gradient-based iterative algorithm for solving the generalized coupled Sylvester-transpose and conjugate matrix equations over reflexive (anti-reflexive) matrices. Transactions of the Institute of Measurement and Control 36(1).
[Benner et al., 1995] Benner P, Laub AJ and Mehrmann V (1995). A collection of benchmark examples for the numerical solution of algebraic Riccati equations I: continuous-time case. Tech. Report SPC 95_22, Fak. f. Mathematik, TU Chemnitz-Zwickau, Chemnitz, FRG.
[Bernstein, 2009] Bernstein DS (2009). Matrix Mathematics: Theory, Facts, and Formulas. Second edition, Princeton University Press.
[Björck, 1996] Björck Å (1996). Numerical Methods for Least Squares Problems. SIAM, Philadelphia.
[Cai and Chen, 2009] Cai J and Chen G (2009). An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2. Mathematical and Computer Modelling 50(7-8).
[Cai et al., 2010] Cai J, Chen G and Liu Q (2010). An iterative method for the bisymmetric solutions of the consistent matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2. International Journal of Computer Mathematics 87(1).
[Chu and Golub, 2002] Chu MT and Golub GH (2002). Structured inverse eigenvalue problems. Acta Numerica 11: 1-71.
[Ding and Chen, 2006] Ding F and Chen T (2006). On iterative solutions of general coupled matrix equations. SIAM Journal on Control and Optimization 44(6).
[Ding et al., 2008] Ding F, Liu PX and Ding J (2008). Iterative solutions of the generalized Sylvester matrix
equations by using the hierarchical identification principle. Applied Mathematics and Computation 197(1): 41-50.
[Golub and Van Loan, 1996] Golub GH and Van Loan CF (1996). Matrix Computations. 3rd ed., Johns Hopkins University Press, Baltimore.
[Hajarian and Dehghan, 2011] Hajarian M and Dehghan M (2011). The generalized centro-symmetric and least squares generalized centro-symmetric solutions of the matrix equation AYB + CY^TD = E. Mathematical Methods in the Applied Sciences 34(13).

[Hajarian, 2014] Hajarian M (2014). Developing the CGLS algorithm for the least squares solutions of the general coupled matrix equations. Mathematical Methods in the Applied Sciences 37(17).
[Hajarian, 2014a] Hajarian M (2014a). A gradient-based iterative algorithm for generalized coupled Sylvester matrix equations over generalized centro-symmetric matrices. Transactions of the Institute of Measurement and Control 36(2).
[Hajarian, 2014b] Hajarian M (2014b). A matrix LSQR algorithm for solving constrained linear operator equations. Bulletin of the Iranian Mathematical Society 40(1).
[Julio and Soto, 2015] Julio AI and Soto RL (2015). Persymmetric and bisymmetric nonnegative inverse eigenvalue problem. Linear Algebra and its Applications 469.
[Karimi and Dehghan, 2015] Karimi S and Dehghan M (2015). A general iterative approach for solving the general constrained linear matrix equations system. Transactions of the Institute of Measurement and Control, DOI: 10.1177/…
[Kressner et al., 2009] Kressner D, Schröder C and Watkins DS (2009). Implicit QR algorithms for palindromic and even eigenvalue problems. Numerical Algorithms 51(2): 209-238.
[Lancaster and Rozsa, 1983] Lancaster P and Rozsa P (1983). On the matrix equation AX + X*A* = C. SIAM Journal on Algebraic and Discrete Methods 4(4).
[Laub, 1991] Laub AJ (1991). Invariant subspace methods for the numerical solution of Riccati equations. In: Bittanti S, Laub AJ and Willems JC (eds), The Riccati Equation, Springer, New York.
[Li et al., 2015] Li J, Li W and Peng P (2015). A hybrid algorithm for solving minimization problem over (R, S)-symmetric matrices with the matrix inequality constraint. Linear and Multilinear Algebra 63(5).
[Li et al., 2014] Li H, Gao Z and Zhao D (2014). Least squares solutions of the matrix equation AXB + CYD = E with the least norm for symmetric arrowhead matrices. Applied Mathematics and Computation 226.
[Li and Huang, 2012] Li SK and Huang TZ (2012). LSQR iterative method for generalized coupled Sylvester matrix equations. Applied Mathematical Modelling 36(8).
[Peng et al., 2007]
Peng ZH, Hu XY and Zhang L (2007). The bisymmetric solutions of the matrix equation A_1X_1B_1 + A_2X_2B_2 + ⋯ + A_lX_lB_l = C and its optimal approximation. Linear Algebra and its Applications 426(2-3).
[Peng and Liu, 2009] Peng ZH and Liu J (2009). The generalized bisymmetric solutions of the matrix equation A_1X_1B_1 + A_2X_2B_2 + ⋯ + A_lX_lB_l = C and its optimal approximation. Numerical Algorithms 50(2).
[Peng and Zhou, 2012] Peng ZH and Zhou ZJ (2012). An efficient algorithm for the submatrix constraint of the matrix equation A_1X_1B_1 + A_2X_2B_2 + ⋯ + A_lX_lB_l = C. International Journal of Computer Mathematics 89(12).
[Peng, 2013] Peng ZH (2013). The reflexive least squares solutions of the matrix equation A_1X_1B_1 + A_2X_2B_2 + ⋯ + A_lX_lB_l = C with a submatrix constraint. Numerical Algorithms 64(3).
[Peng and Xin, 2013] Peng ZH and Xin H (2013). The reflexive least squares solutions of the general coupled matrix equations with a submatrix constraint. Applied Mathematics and Computation 225.
[Peng, 2015] Peng ZH (2015). The (R, S)-symmetric least squares solutions of the general coupled matrix equations. Linear and Multilinear Algebra 63(6).
[Qian and Tan, 2013] Qian J and Tan RCE (2013). On some inverse eigenvalue problems for Hermitian and generalized Hamiltonian/skew-Hamiltonian matrices. Journal of Computational and Applied Mathematics 250: 28-38.
[Reichl, 2013] Reichl LE (2013). The Transition to Chaos: Conservative Classical Systems and Quantum Manifestations. Springer Science & Business Media.
[Saad, 1995] Saad Y (1995). Iterative Methods for Sparse Linear Systems. PWS Press, New York.
[Salkuyeh and Toutounian, 2006] Salkuyeh DK and Toutounian F (2006). New approaches for solving large Sylvester equations. Applied Mathematics and Computation 173(1): 9-18.
[Salkuyeh and Beik, 2014] Salkuyeh DK and Beik FPA (2014). An iterative method to solve symmetric positive definite matrix equations. Mathematical Reports 16(2): 271-283.

[Sarduvan et al., 2014] Sarduvan M, Şimşek S and Özdemir H (2014). On the best approximate (P, Q)-orthogonal symmetric and skew-symmetric solution of the matrix equation AXB = C. Journal of Numerical Mathematics 22(3).
[Sheng and Chen, 2007] Sheng X and Chen G (2007). A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F). Applied Mathematics and Computation 189(2).
[Sheng and Chen, 2010] Sheng X and Chen G (2010). An iterative method for the symmetric and skew symmetric solutions of a linear matrix equation AXB + CYD = E. Journal of Computational and Applied Mathematics 233(11).
[Song et al., 2014] Song C, Feng J and Zhao J (2014). A new technique for solving continuous Sylvester-conjugate matrix equations. Transactions of the Institute of Measurement and Control 36(11).
[Su and Chen, 2010] Su Y and Chen G (2010). Iterative methods for solving linear matrix equation and linear matrix system. International Journal of Computer Mathematics 87(4).
[Wang, 2003] Wang RS (2003). Functional Analysis and Optimization Theory. Beijing University of Aeronautics and Astronautics Press, Beijing.
[Wu et al., 2011a] Wu AG, Lv L and Duan GR (2011). Iterative algorithms for solving a class of complex conjugate and transpose matrix equations. Applied Mathematics and Computation 217(1).
[Wu et al., 2011b] Wu AG, Li B, Zhang Y and Duan GR (2011). Finite iterative solutions to coupled Sylvester-conjugate matrix equations. Applied Mathematical Modelling 35(3).
[Wu et al., 2012] Wu AG, Zhang E and Liu F (2012). On closed-form solutions to the generalized Sylvester-conjugate matrix equation. Applied Mathematics and Computation 218(19).
[Wu et al., 2013] Wu AG, Tong T and Duan GR (2013). Finite iterative algorithm for solving coupled Lyapunov equations appearing in continuous-time Markov jump linear systems. International Journal of Systems Science 44(11).
[Zhang et al., 2005] Zhang ZZ, Hu XY and Zhang L (2005). On the Hermitian-generalized Hamiltonian solutions of linear matrix equations. SIAM Journal on Matrix Analysis and Applications 27(1).
[Zhao et al., 2010] Zhao L, Chen G and Liu Q (2010). Least squares
(P, Q)-orthogonal symmetric solutions of the matrix equation and its optimal approximation. Electronic Journal of Linear Algebra 20.
[Zhou et al., 1995] Zhou K, Doyle JC and Glover K (1995). Robust and Optimal Control. Prentice-Hall, Upper Saddle River, New Jersey.

Appendix

In this part we prove Theorem 3.1 by mathematical induction. The theorem is established for (P, Q)-orthogonal symmetric matrices; for (P, Q)-orthogonal skew-symmetric matrices the proof follows from a similar strategy. In what follows, given P_1, Q_1 ∈ SOR^{p×p} and P_2, Q_2 ∈ SOR^{r×r}, for simplicity we use the linear operators D_x : R^{m×n} × R^{ν×n} → R^{p×p} and D_y : R^{m×n} × R^{ν×n} → R^{r×r}, defined as follows:

D_x(Z, W) = (1/2)( K(Z, W) + P_1Q_1 K(Z, W)^T P_1Q_1 ),
D_y(Z, W) = (1/2)( L(Z, W) + P_2Q_2 L(Z, W)^T P_2Q_2 ).

Proof of Theorem 3.1. For k = 0, 1, 2, …, we set

α_k = ( ‖P_x(k)‖² + ‖P_y(k)‖² ) / ( ‖K_k‖² + ‖L_k‖² ),
β_k = ( ‖P_x(k+1)‖² + ‖P_y(k+1)‖² ) / ( ‖P_x(k)‖² + ‖P_y(k)‖² ),

where K_k = K(Q_x(k), Q_y(k)) and L_k = L(Q_x(k), Q_y(k)). Because of the commutative property of the inner product ⟨·,·⟩, we only need to prove the validity of (3.2), (3.3) and (3.4) for i < j ≤ k.


More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

An Iterative algorithm for $\eta$-(anti)-hermitian least-squares solutions of quaternion matrix equations

An Iterative algorithm for $\eta$-(anti)-hermitian least-squares solutions of quaternion matrix equations Electronic Journal of Linear Algebra Volume 30 Volume 30 2015 Article 26 2015 An Iterative algorithm for $\eta$-anti-hermitian least-squares solutions of quaternion matrix equations Fatemeh Panjeh Ali

More information

The reflexive and anti-reflexive solutions of the matrix equation A H XB =C

The reflexive and anti-reflexive solutions of the matrix equation A H XB =C Journal of Computational and Applied Mathematics 200 (2007) 749 760 www.elsevier.com/locate/cam The reflexive and anti-reflexive solutions of the matrix equation A H XB =C Xiang-yang Peng a,b,, Xi-yan

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

An Even Order Symmetric B Tensor is Positive Definite

An Even Order Symmetric B Tensor is Positive Definite An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Factorized Solution of Sylvester Equations with Applications in Control

Factorized Solution of Sylvester Equations with Applications in Control Factorized Solution of Sylvester Equations with Applications in Control Peter Benner Abstract Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and

More information

Structured Condition Numbers of Symmetric Algebraic Riccati Equations

Structured Condition Numbers of Symmetric Algebraic Riccati Equations Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

2 Computing complex square roots of a real matrix

2 Computing complex square roots of a real matrix On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

Block preconditioners for saddle point systems arising from liquid crystal directors modeling

Block preconditioners for saddle point systems arising from liquid crystal directors modeling Noname manuscript No. (will be inserted by the editor) Block preconditioners for saddle point systems arising from liquid crystal directors modeling Fatemeh Panjeh Ali Beik Michele Benzi Received: date

More information

On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel

On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel Wenwen Tu and Lifeng Lai Department of Electrical and Computer Engineering Worcester Polytechnic Institute Worcester,

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

On the Eigenstructure of Hermitian Toeplitz Matrices with Prescribed Eigenpairs

On the Eigenstructure of Hermitian Toeplitz Matrices with Prescribed Eigenpairs The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp 298 305 On the Eigenstructure of Hermitian

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

THE RELATION BETWEEN THE QR AND LR ALGORITHMS

THE RELATION BETWEEN THE QR AND LR ALGORITHMS SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

A Note on the Pin-Pointing Solution of Ill-Conditioned Linear System of Equations

A Note on the Pin-Pointing Solution of Ill-Conditioned Linear System of Equations A Note on the Pin-Pointing Solution of Ill-Conditioned Linear System of Equations Davod Khojasteh Salkuyeh 1 and Mohsen Hasani 2 1,2 Department of Mathematics, University of Mohaghegh Ardabili, P. O. Box.

More information

A Note on Simple Nonzero Finite Generalized Singular Values

A Note on Simple Nonzero Finite Generalized Singular Values A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero

More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.6269 Performance Comparison of Relaxation Methods with Singular and Nonsingular

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

Linear Algebra and Eigenproblems

Linear Algebra and Eigenproblems Appendix A A Linear Algebra and Eigenproblems A working knowledge of linear algebra is key to understanding many of the issues raised in this work. In particular, many of the discussions of the details

More information

The Symmetric and Antipersymmetric Solutions of the Matrix Equation A 1 X 1 B 1 + A 2 X 2 B A l X l B l = C and Its Optimal Approximation

The Symmetric and Antipersymmetric Solutions of the Matrix Equation A 1 X 1 B 1 + A 2 X 2 B A l X l B l = C and Its Optimal Approximation The Symmetric Antipersymmetric Soutions of the Matrix Equation A 1 X 1 B 1 + A 2 X 2 B 2 + + A X B C Its Optima Approximation Ying Zhang Member IAENG Abstract A matrix A (a ij) R n n is said to be symmetric

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

A Note on Eigenvalues of Perturbed Hermitian Matrices

A Note on Eigenvalues of Perturbed Hermitian Matrices A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.

More information

Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion

Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion International Mathematics and Mathematical Sciences Volume 12, Article ID 134653, 11 pages doi:.1155/12/134653 Research Article A Rapid Numerical Algorithm to Compute Matrix Inversion F. Soleymani Department

More information

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1. J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E

Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012) Article 18 2012 Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E Shi-fang Yuan ysf301@yahoocomcn

More information

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Electronic Journal of Linear Algebra Volume 34 Volume 34 08) Article 39 08 Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Christian Mehl Technische Universitaet

More information

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B International Journal of Algebra, Vol. 6, 0, no. 9, 903-9 The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B Qingfeng Xiao Department of Basic Dongguan olytechnic Dongguan 53808, China

More information

ELA INTERVAL SYSTEM OF MATRIX EQUATIONS WITH TWO UNKNOWN MATRICES

ELA INTERVAL SYSTEM OF MATRIX EQUATIONS WITH TWO UNKNOWN MATRICES INTERVAL SYSTEM OF MATRIX EQUATIONS WITH TWO UNKNOWN MATRICES A RIVAZ, M MOHSENI MOGHADAM, AND S ZANGOEI ZADEH Abstract In this paper, we consider an interval system of matrix equations contained two equations

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information