AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES


Fatemeh Panjeh Ali Beik, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran (e-mail: fbeik@vru.ac.ir; beikfatemeh@gmail.com)
Davod Khojasteh Salkuyeh, Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran (e-mail: khojasteh@guilan.ac.ir; salkuyeh@gmail.com)

Abstract. This paper is concerned with exploiting an oblique projection technique to solve a general class of large and sparse least squares problems over symmetric arrowhead matrices. More precisely, we develop the conjugate gradient least squares (CGLS) algorithm to obtain the solutions of two main problems. The first problem consists of computing the least norm symmetric arrowhead solution group X = (X_1, ..., X_q) of the least squares problem

∑_{i=1}^{N} ‖ ∑_{l=1}^{p} ∑_{j=1}^{q} A_{ijl} X_j B_{ijl} − C_i ‖_F = min.

In the second problem, we compute the optimal approximate symmetric arrowhead solution group of the above least squares problem corresponding to a given arbitrary matrix group. Using the fact that the approximate solutions are obtained by a projection approach, the minimization property of the proposed algorithm is established. In addition, it is demonstrated that the proposed algorithm computes the exact solution of each of the mentioned problems within a finite number of iteration steps in exact arithmetic. Finally, some numerical experiments are reported which illustrate the applicability and feasibility of the proposed algorithm.

Keywords: Matrix equation, projection technique, iterative algorithm, least squares problem, arrowhead matrix.

AMS Subject Classification: 15A24, 65F10.

1. Introduction

We first introduce the symbols and notation used throughout this paper. We write tr(A), A^T, Range(A) and Null(A) for the trace, the transpose, the column space and the null space of a matrix A, respectively. The notation R^{m×n} stands for the set of all m × n real matrices. For a given matrix A, the symbol A^+ denotes the well-known Moore-Penrose inverse of A. A matrix A ∈ R^{n×n} is called symmetric (skew-symmetric) if A = A^T (A = −A^T), and the set of all n × n symmetric (skew-symmetric) matrices is denoted by SR^{n×n} (SSR^{n×n}). The notation I[1, p] refers to the set of all integers between 1 and p. Given X = (X_1, ..., X_q) with X_j ∈ R^{m_j×n_j} for j ∈ I[1, q], we call X a matrix group.

2 An algorithm to determine symmetric arrowhead solutions of matrix equations Linear matrix equations have a cardinal application in numerous fields, such as control theory, system theory, stability theory and some other areas of pure and applied mathematics Hitherto, several types of iterative algorithms to solve various kinds of matrix equations have been handled in the literature; for further details see [1, 2, 10, 11, 12, 16, 18, 19, 28, 30, 31, 34, 35, 37, 38, 43] and the references therein In addition the explicit forms of the solutions of some kinds of matrix equations have been derived For instance, Jiang and Wei [20] have studied the solutions of complex matrix equations X AXB = C and X A XB = C and obtained explicit solutions of the equations by the method of characteristic polynomial and a method of real representation of a complex matrix, respectively In [39], Wu et al have propounded an unified approach to resolve a general class of Sylvester-polynomial-conjugate matrix equations which include the Yakubovich-conjugate matrix equation as a special case Several explicit parametric solutions to the generalized Sylvester-conjugate matrix equation have been obtained in [40] The idea of conjugate gradient (CG) method [32] has been developed for constructing iterative algorithms to compute the solutions of different kinds of linear matrix equations over generalized reflexive and anti-reflexive, generalized bisymmetric, generalized centro-symmetric, mirror-symmetric, skew-symmetric and (P, Q)-reflexive matrices, for more details see [2, 7, 8, 9, 15, 21, 28, 38] and the references therein For instance, Peng et al [27] have proposed an iterative algorithm for finding the bisymmetric solutions of matrix equation A 1 X 1 B 1 + A 2 X 2 B 2 + + A l X l B l = C Recently, the conjugate gradient least squares (CGLS) [32] method has been extended to present iterative algorithms for solving least squares problems for various kinds of (coupled) matrix equations For example, Peng [30] has offered an algorithm for minimizing (11) A 1 X 1 B 1 + A 2 X 2 B 2 + + A l X l B l C F, where F is the Frobenius norm and X j (j I[1, l]) is a reflexive matrix with a specified central principal submatrix In [29], Peng and Zhou have offered an algorithm to minimize (11) where X j (j I[1, l]) is bisymmetric with a specified central principal In an alternative work, Peng and Xin [31] have extended the algorithm in [29, 30] to propose an iterative method for soveing the general coupled matrix equations l A ij X j B ij = C i, i = 1, 2,, t, j=1 where X j R n j n j (j I[1, l]) is a reflexive matrix with a specified central principal submatrix In [16], Hajarian has developed the CGLS method to determine the minimum norm solutions of the subsequent general least squares problem q j=1 A 1,jX j B 1,j C 1 q j=1 A 2,jX j B 2,j C 2 = min q j=1 A p,jx j B p,j C p In this paper we aim to develop the CGLS method for solving the least squares problem for a class of coupled matrix equations over symmetric arrowhead matrices The inspiration of presenting the current work is briefly expounded in the second section Here we would like to comment that the topic of constructing iterative algorithms to find the symmetric arrowhead solutions of coupled matrix equation has not been investigated so far Beside the earlier refereed comment, some of our utilized manners for analyzing the properties of the CGLS method are different to those employed in [16, 29, 29, 31] More

Beik and Salkuyeh 3 precisely, the applied strategy for demonstrating the minimization property of proposed algorithm 11 Preparatory concepts In the present subsection we recollect some definitions and properties which are used throughout this paper The inner product of X, Y R m n is defined by X, Y F = tr(y T X) The induced norm is the well-known Frobenius norm, ie, the norm of Y R m p is given by Y F = tr(y T Y ) It is natural to expound the inner product of the matrix groups X = (X 1,, X q ) and Y = (Y 1,, Y q ) by X, Y = X 1, Y 1 F + X 2, Y 2 F + + X q, Y q F, where X j, Y j R n j m j for j I[1, q] Therefore we may define the norm of the matrix group X = (X 1,, X q ) as follows: X 2 = X 1 2 F + X 2 2 F + + X q 2 F, where X j R n j m j for j I[1, q] Suppose that A = [a ij ] m s and B = [b ij ] n q are real matrices, the Kronecker product of the matrices A and B is defined as the mn sq matrix A B = [a ij B] The vec operator transmutes a matrix A of size m s to a vector a = vec(a) of size ms 1 via stacking the columns of A In this paper, the next relation is exploited (See [3]) vec(axb) = (B T A)vec(X) Definition 11 [23] A given matrix A R n n is called a symmetric arrowhead matrix if it has the following form: a 1 b 1 b 2 b n 1 b 1 a 2 0 0 A = b 2 0 a 3 0 b n 1 0 0 a n In practice arrowhead matrices occur in numerous areas, see [14, 25, 42] For instance, when the Lanczos method is exploited for solving the eigenvalue problem for large sparse matrices [24] In addition, the eigenstructure problems of arrowhead matrices emerge from applications in molecular physics [23] In [22], it is pointed that the symmetric arrowhead matrices can denote the parameter matrices of the control equations of nonlinear control systems The set of all n n symmetric arrowhead matrices is represented by SAR n n The matrix group X = (X 1, X 2,, X q ) is said to be a symmetric arrowhead matrix group if the matrices X 1, X 2,, X q are symmetric arrowhead matrices The outline of this paper is organized as follows In the next section we momentarily recount our goal for presenting the current work and state two main problems which are concerned in this paper In Section 3, we establish some useful theoretical results which capable us to analyze the properties of our proposed algorithm Section 4 is devoted to elaborating an algorithm for solving the mentioned problems and investigating the convergence properties of the algorithm Numerical experiments are reported to illustrate the effectively and applicability of the propounded algorithm in Section 5 Finally the paper is finished with a brief conclusion in Section 6
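As a concrete illustration of the notation of this subsection, the following short Python/NumPy sketch builds a symmetric arrowhead matrix from its diagonal and border, and numerically checks the Frobenius inner product ⟨X, Y⟩_F = tr(Y^T X) and the identity vec(AXB) = (B^T ⊗ A) vec(X). The experiments in Section 5 use MATLAB; Python is used here purely for illustration, and the helper names are ours, not the paper's.

```python
import numpy as np

def arrowhead(a, b):
    """Symmetric arrowhead matrix with diagonal a (length n) and first row/column border b (length n-1)."""
    n = len(a)
    X = np.diag(np.asarray(a, dtype=float))
    X[0, 1:] = b
    X[1:, 0] = b
    return X

rng = np.random.default_rng(0)
n = 5
X = arrowhead(rng.standard_normal(n), rng.standard_normal(n - 1))
Y = rng.standard_normal((n, n))

# Frobenius inner product <X, Y>_F = tr(Y^T X) and the induced norm
ip = np.trace(Y.T @ X)
assert np.isclose(ip, np.sum(X * Y))
assert np.isclose(np.sqrt(np.trace(X.T @ X)), np.linalg.norm(X, 'fro'))

# vec(A X B) = (B^T kron A) vec(X), with vec stacking the columns of its argument
A = rng.standard_normal((4, n))
B = rng.standard_normal((n, 3))
vec = lambda M: M.flatten(order='F')          # column stacking
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
print("arrowhead structure, <.,.>_F and the Kronecker/vec identity verified")
```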

4 An algorithm to determine symmetric arrowhead solutions of matrix equations 2 Motivations and main contribution In [5, 26], a specific inverse problem has been studied which consists of constructing the symmetric arrowhead matrix from spectral data The generalized inverse eigenvalue problem for symmetric arrowhead matrices consists of determining nontrivial symmetric arrowhead matrices A and B such that (21) AXΛ = BX, where X and the diagonal matrix Λ are known such that the columns of X are given eigenvectors and the diagonal elements of Λ are the given associated eigenvalues In [41], an approach has been offered to solve the generalized inverse eigenvalue problem (21) Moreover, the author has calculated the nearest solution of (21) to an arbitrary given symmetric arrowhead matrices à and B Nevertheless, it can be easily checked out that the proposed manner has high computational costs As seen (21) is a linear matrix equation and it can be a inspiration to investigate about the symmetric arrowhead solutions of matrix equations For instance, recently, Li et al [22] have derived the general expression of the least squares solution of the matrix equation AXB + CY D = E with the least norm for symmetric arrowhead matrices In fact the authors has considered the following problem Problem 1 Given A R m n, B R n s, C R m k, D R k s, E R m s, let S E = {[X, Y ] X SAR n n, Y SAR k k, AXB + CY D E = min} Find out [ ˆX, Ŷ ] S E such that ˆX 2 F + Ŷ 2 F = min [X,Y ] S E ( X 2 F + Y 2 F ) In [22], the Moore-Penrose inverse and the Kroneker product have been exploited to obtain the next algorithm for solving Problem 1 Note that in the following algorithm, the matrix H l (l = n, k) has the following structure H l = e 1 e 2 e 3 e l 1 e l 0 0 0 0 0 e 1 0 0 0 e 2 0 0 0 0 0 e 1 0 0 0 e 3 0 0, 0 0 0 e 1 0 0 0 e l 1 0 0 0 0 0 e 1 0 0 0 e l where e i is a unite vector with the ith entry is one and the others are zero Algorithm 1 (Li et al, [22]) Step 1 Input A R m n, B R n s, C R m k, D R k s, E R m s and H n, H k Step 2 Compute P 1 = (B T A)H n, P 2 = (D T C)H k Step 3 Compute R, G, Z, K 11, K 12, K 22 as follows: R = (I P 1 P + 1 )P 2 G = R + + (I R + R)ZP T 2 (P + 1 )T P + 1 (I P 2R + ), Z = (I + (I R + R)P T 2 (P + 1 )T P + 1 P 2(I R + R)) 1, K 11 = I P + 1 P 1 + P + 1 P 2Z(I R + R)P T 2 (P + 1 )T, K 12 = P + 1 P 2(I R + R)Z, K 22 = (I R + R)Z

Beik and Salkuyeh 5 Step 4 If the equality (P 1 P 1 + + RR+ )vec(e) = vec(e) holds, then compute [ ˆX, Ŷ ] by [ ] ( [ ] [ [ ]] ) + [ vec( ˆX) K11 K vec(ŷ ) = I H 12 K11 K K12 T H 12 P + K 22 K12 T H 1 P 1 + P ] 2G vec(e), K 22 G where H = diag(h n, H k ) There are mainly two reasons which motivate us to develop an efficient iterative algorithm to solve linear matrix equations over symmetric arrowhead matrices The first is that Algorithm 1 is too expensive in practice due to exploiting the Kronocker product and requiring the computation of the Moore-Penrose inverses On the other hand, to the best of our knowledge, application of iterative methods to resolve coupled linear matrix equations over symmetric arrowhead matrices has not been investigated so far 21 Problem reformulation In order to derive the algorithm for more general cases, in this paper we consider the following two main problems Problem 2 Assume that the matrices A ijl R r i n j, B ijl R r i n j and C i R r i s i are given for i I[1, N], j I[1, q] and l I[1, p] Moreover suppose that (22) N q S E = {X = (X 1,, X q ) X j SAR n j n j and A ijl X j B ijl C i F = min} i=1 l=1 j=1 Find the least norm symmetric arrowhead solution group ˆX = ( ˆX 1,, ˆX q ) S E such that ˆX = min X S E X Problem 3 For a given arbitrary matrix group X = ( X 1, X q ) Find X = (X 1,, X q ) S E such that X X = min X S E X X For simplicity, we exploit the subsequent linear operator where M : R n 1 n 1 R nq nq R r 1 s 1 R r N s N, X = (X 1,, X q ) M(X) := (M 1 (X),, M N (X)), M i (X) = l=1 j=1 q A ijl X j B ijl, i = 1, 2,, N As a matter of fact we consider the problem of determining the minimum norm symmetric arrowhead solutions of the general least squares problem (23) M(X) C = min, where C = (C 1,, C N ) 3 Preliminary theoretical results The current section concerns with proving some profitable theoretical results which capable us to analyze our proposed algorithm for solving Problems 2 and 3 Meanwhile, some notations are introduced

6 An algorithm to determine symmetric arrowhead solutions of matrix equations Let us consider the linear subspace CSAR n n on R n n defined as follows: 0 0 0 0 0 0 a 23 a 2n CSAR n n = X X = 0 a 32 0 and X = X T an 1n 0 a n2 a nn 1 0 Now we present the subsequent useful lemma which demonstrates that R n n can be written as a direct sum of its three subspaces, ie, SAR n n, CSAR n n and SSR n n Lemma 31 Assume that the linear subspaces SAR n n, CSAR n n and SSR n n are defined as before Then the following statement holds: R n n = SAR n n CSAR n n SSR n n Here refers to the orthogonal direct sum with respect to, F Proof It is well-known that That is, for an arbitrary Z R n n we have R n n = SR n n SSR n n Z = Z 1 + Z 2, where Z 1 = (Z + Z T )/2, Z 2 = (Z Z T )/2 and Z 1, Z 2 F = 0 Evidently the symmetric matrix Z 1 = [Z (1) ij ] be written as follows: Z 1 = Z 1 = Z 1 + Z 2, where Z 1 SARn n and Z 2 CSARn n are respectively defined by Z (1) 11 Z (1) 12 Z (1) 13 Z (1) 1n, Z (1) n1 0 0 Z nn (1) Z (1) 21 Z (1) 22 0 0 Z (1) 31 0 Z (1) 33 0 and Z 2 = Z 1 Z 1 It is not difficult to verify that Z 1, Z 2 = tr ( (Z F 1) T Z 2 ) = 0 From the above results, it can be deduced that SR n n = SAR n n CSAR n n On the other hand, the matrix Z 1 is symmetric and Z 2 is a skew-symmetric matrix which imply that Z 1, Z 2 = tr ( Z T F 2 Z 1 ) = tr ( Z 2 Z 1 ) ( ) = tr Z 1 Z 2 = tr ( (Z 1) T ) Z 2 = Z2, Z 1 F = Z 1, Z 2 Therefore Z 1, Z 2 F = 0 As Z 2 is a symmetric matrix, we may show that Z 2, Z 2 F = 0 in a similar manner Now the result can be concluded immediately F
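Lemma 3.1 can also be checked numerically. The following sketch (Python/NumPy; the splitting helper is ours) decomposes an arbitrary real matrix Z into its symmetric arrowhead part, the complementary symmetric part with zero first row, first column and diagonal, and its skew-symmetric part, and verifies that the three pieces are mutually orthogonal with respect to ⟨·,·⟩_F and sum to Z.

```python
import numpy as np

def split_sar_csar_ssr(Z):
    """Orthogonal splitting Z = Z_sar + Z_csar + Z_ssr as in Lemma 3.1 (helper name is ours)."""
    S = (Z + Z.T) / 2                     # symmetric part
    K = (Z - Z.T) / 2                     # skew-symmetric part (SSR)
    A = np.zeros_like(S)                  # arrowhead part of S (SAR)
    A[0, :], A[:, 0] = S[0, :], S[:, 0]
    np.fill_diagonal(A, np.diag(S))
    C = S - A                             # remainder: zero first row/column and diagonal (CSAR)
    return A, C, K

rng = np.random.default_rng(1)
Z = rng.standard_normal((6, 6))
Zs, Zc, Zk = split_sar_csar_ssr(Z)

frob = lambda U, V: np.trace(V.T @ U)     # <U, V>_F
assert np.allclose(Zs + Zc + Zk, Z)       # the direct sum reproduces Z
for U, V in [(Zs, Zc), (Zs, Zk), (Zc, Zk)]:
    assert abs(frob(U, V)) < 1e-12        # mutual orthogonality of the three parts
print("R^{n x n} = SAR + CSAR + SSR (orthogonal direct sum) verified numerically")
```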

Beik and Salkuyeh 7 In what follows we consider the linear operator A expounded by A : R n 1 n 1 R nq nq R n 1 n 1 R nq nq, ( A 1 + A T 1 X = (X 1,, X q ) A(X) :=,, A ) q + A T q, 2 2 where A l corresponding to X l = [X (l) ij ] n j n j is elucidated by A l = X (l) 11 X (l) 12 X (l) 13 X (l) 1n X (l) 21 X (l) 22 0 0 X (l) 31 0 X (l) 33 0, l I[1, q] X (l) n1 0 0 X nn (l) Now we establish the following proposition which provides the effective tools to scrutinize the properties of the sequence of approximate solutions produced by the presented main algorithm Proposition 32 Suppose that the linear operators A and M are defined as before Let P = (P 1,, P q ) be an arbitrary symmetric arrowhead matrix group Moreover assume that the linear operator M is given by M : R r 1 s 1 R r N s N R n 1 n 1 R nq nq, Y = (Y 1,, Y N ) M (Y ) := (M 1 (Y ),, M q (Y )), where N Mj (Y ) = A T ijl Y ibijl T, l=1 i=1 j I[1, q] Then for arbitrary given matrix groups X = (X 1,, X q ) and Y following statements hold = (Y 1,, Y p ), the (31) (32) (33) M(X), Y = X, M (Y ), P, M (Y ) = P, A(M (Y )), M(P ), Y = P, A(M (Y ))

8 An algorithm to determine symmetric arrowhead solutions of matrix equations Proof It is known that for given arbitrary matrices A, B, we have tr(ab) = tr(a T B T ) = tr(ba) Now straightforward computations turn out that N q M(X), Y = tr A ijl X j B ijl, i=1 N = tr = = = N Y T i i=1 l=1 j=1 i=1 l=1 j=1 N i=1 l=1 j=1 ( q tr j=1 l=1 j=1 q i (A ijl X j B ijl ), Y T q tr ( Y i Bijl T XT j A T ijl), q tr ( A T ijl Y ibijl T ) XT j, X T j = X, M (Y ) ( N i=1 l=1 )) A T ijl Y ibijl T, The relation (32) can be concluded from Lemma 31 The last equality follows from Eqs (31) and (32) The following lemma known as the Projection Theorem which is useful for obtaining the next results in this paper, the proof of theorem can be found in [36] Lemma 33 Let W be a finite dimensional space, U be a subspace of W, and U be the orthogonal complement subspace of U For a given w W, always, there exists u 0 U such that w u 0 w u for any u U More precisely, u 0 U is the unique minimization vector in U if and only if (w u 0 ) U Here is the norm corresponding to the inner product defined on W The following proposition is a direct conclusion of Proposition 32 and Lemma 33 Proposition 34 The symmetric arrowhead matrix group ˆX = ( ˆX 1,, ˆX q ) is the solution of Problem 2 if and only if M (C M( ˆX)), X = 0, or equivalently, A(M ( ˆR)), X = 0 where ˆR = C M( ˆX) and X = (X 1,, X q ) is an arbitrary given symmetric arrowhead matrix group Proof By Lemma 33, ˆX = ( ˆX1,, ˆX q ) is the solution of Problem 2 if and only if C M( ˆX) M(X), for any symmetric arrowhead matrix group X = (X 1,, X q ), ie, (34) C M( ˆX), M(X) = 0 Now the results follow immediately from Proposition 32
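In effect, Proposition 3.2 states that M^* is the adjoint of M with respect to ⟨·,·⟩ and that, when paired with symmetric arrowhead matrix groups, M^*(Y) may be replaced by its arrowhead projection A(M^*(Y)). A small numerical check of (3.1)-(3.3) for a single equation (N = p = q = 1) is sketched below in Python/NumPy; the helper names are ours.

```python
import numpy as np

def arrow_proj(X):
    """A(X): orthogonal projection of X onto the symmetric arrowhead matrices."""
    S = (X + X.T) / 2
    P = np.zeros_like(S)
    P[0, :], P[:, 0] = S[0, :], S[:, 0]
    np.fill_diagonal(P, np.diag(S))
    return P

rng = np.random.default_rng(2)
n, r, s = 5, 4, 3
A, B = rng.standard_normal((r, n)), rng.standard_normal((n, s))

M      = lambda X: A @ X @ B              # M(X)  = A X B
M_star = lambda Y: A.T @ Y @ B.T          # M*(Y) = A^T Y B^T
frob   = lambda U, V: np.trace(V.T @ U)   # <U, V>_F

X = rng.standard_normal((n, n))
Y = rng.standard_normal((r, s))
assert np.isclose(frob(M(X), Y), frob(X, M_star(Y)))                      # (3.1)

P = arrow_proj(rng.standard_normal((n, n)))                               # a symmetric arrowhead matrix
assert np.isclose(frob(P, M_star(Y)), frob(P, arrow_proj(M_star(Y))))     # (3.2)
assert np.isclose(frob(M(P), Y), frob(P, arrow_proj(M_star(Y))))          # (3.3)
print("adjoint and projection identities (3.1)-(3.3) verified")
```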

Beik and Salkuyeh 9 The following remark of the above proposition provides a sufficient condition for ˆX = ( ˆX 1,, ˆX q ) to be the solution of Problem 2 Remark 35 Suppose that ˆX = ( ˆX1,, ˆX q ) is a symmetric arrowhead matrix group and ˆR = C M( ˆX) = ( ˆR 1,, ˆR N ) If (35) A(M ( ˆR)) = 0, then Proposition 34 implies that ˆX is the solution of Problem 2 Proposition 36 Suppose that ˆX is the solution of Problem 2 and the set SE is determined by (22) Then the following statements hold (1) Assume that X = ˆX + X where X is a given symmetric arrowhead matrix group where M( X) = 0 Then X S E (2) Suppose that X S E, then M( X ˆX) = 0, ie, there exists a symmetric arrowhead matrix group X such that X = X ˆX and M( X) = 0 Proof Suppose that X is a given symmetric arrowhead matrix group such that M( X) = 0 Let ˆX be the solution of Problem 2 and X = ˆX + X Evidently, C M( X), C M( X) = C M( ˆX) M( X), C M( ˆX) M( X) = C M( ˆX), C M( ˆX) The above equality together with the fact that X SAR n n reveal that X S E which proves the first part Now we establish the second part Suppose that X S E and without loss of generality assume that X ˆX As ˆX, X S E, we may conclude that (36) C M( X), C M( X) = C M( ˆX), C M( ˆX) Note that with an analogous explanation for concluding (34), we may deduce that (37) C M( ˆX), M( X ˆX) = 0 By some easy computations, we derive that C M( X), C M( X) = C M( ˆX) M( X ˆX), C M( ˆX) M( X ˆX) = C M( ˆX), C M( ˆX) 2 C M( ˆX), M( X ˆX) + M( X ˆX), M( X ˆX) In view of (36) and (37), the above relation implies that M( X ˆX), M( X ˆX) = 0 which completes the proof 4 An algorithm and its analysis The conjugate gradient least squares (CGLS) method is an efficient algorithm for solving the large sparse least squares problem (41) min x R n b Ax 2, where 2 is the well-known Euclidean vector norm, A R m n and b R m are known; for further details see [4, 32]
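For reference, a minimal Python/NumPy sketch of the classical CGLS iteration for (4.1) is given below; Algorithm 2 in the next subsection mirrors exactly this structure, with vectors replaced by matrix groups, A by M, and A^T by the projected adjoint A(M^*(·)). The function and variable names are ours.

```python
import numpy as np

def cgls(A, b, tol=1e-10, maxit=1000):
    """Classical CGLS for min ||b - A x||_2 (a minimal sketch, no preconditioning)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                  # residual b - A x
    p = A.T @ r                    # p = A^T r, the normal-equations residual
    q = p.copy()                   # search direction
    while np.linalg.norm(p) > tol and maxit > 0:
        Aq    = A @ q
        alpha = np.dot(p, p) / np.dot(Aq, Aq)
        x    += alpha * q
        r    -= alpha * Aq
        p_new = A.T @ r
        beta  = np.dot(p_new, p_new) / np.dot(p, p)
        q     = p_new + beta * q
        p     = p_new
        maxit -= 1
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = cgls(A, b)
# sanity check against the dense least squares solution
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-8)
print("CGLS agrees with lstsq on a small random problem")
```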

In this section we develop the CGLS method to construct an iterative algorithm for solving Problems 2 and 3.

4.1. An iterative algorithm to resolve Problem 2. We now present an extended form of the CGLS method for finding the solution of the first problem.

Algorithm 2. An iterative algorithm for solving Problem 2.
1. Choose an arbitrary symmetric arrowhead matrix group X(0) and a tolerance ε (for simplicity we may set X(0) = 0).
2. Set k = 0 and compute
   R(0) = C − M(X(0)),  P(0) = A(M^*(R(0))),  Q(0) = P(0).
3. If P(k) = 0 or ‖R(k)‖ < ε, then stop; otherwise go to the next step.
4. Calculate
   X(k + 1) = X(k) + (‖P(k)‖² / ‖M(Q(k))‖²) Q(k),
   R(k + 1) = R(k) − (‖P(k)‖² / ‖M(Q(k))‖²) M(Q(k)),
   P(k + 1) = A(M^*(R(k + 1))),
   Q(k + 1) = P(k + 1) + (‖P(k + 1)‖² / ‖P(k)‖²) Q(k),
   k := k + 1.
5. Go to Step 3.

Remark 4.1. We would like to point out the following.
(1) At each iteration the matrix groups P(k) and Q(k) are evidently symmetric arrowhead matrix groups. Hence the approximate solution X(k) at each iteration is a symmetric arrowhead matrix group.
(2) If R(k) = 0, then the kth approximate solution X(k) is the exact solution of the coupled matrix equations M(X) = C over the symmetric arrowhead matrices.

We now present the following theorem, whose proof is given in Appendix A.

Theorem 4.2. Suppose that k steps of Algorithm 2 have been performed, i.e., P(l) ≠ 0 and M(Q(l)) ≠ 0 for l = 0, 1, ..., k. The sequences P(l) and Q(l) (l = 0, 1, ..., k) produced by Algorithm 2 satisfy
(1) ⟨P(i), P(j)⟩ = 0,
(2) ⟨M(Q(i)), M(Q(j))⟩ = 0,
(3) ⟨Q(i), P(j)⟩ = 0,
for i, j = 0, 1, 2, ..., k (i ≠ j).

In the following remark, we discuss the assumption P(l) ≠ 0 and M(Q(l)) ≠ 0 (l = 0, 1, ..., k) of the previous theorem. More precisely, we clarify that if there exists t such that P(t) = 0 or M(Q(t)) = 0, then X(t) is the exact solution of Problem 2.

Remark 4.3. Suppose that t (t ≥ 1) steps of Algorithm 2 have been performed, i.e., P(l) ≠ 0 and M(Q(l)) ≠ 0 for l = 0, 1, ..., t − 1.
(1) Consider the case that P(t) = 0 but R(t) ≠ 0. By Remark 3.5, P(t) = 0 implies that X(t) is the solution of Problem 2.

Beik and Salkuyeh 11 (2) Now let us consider the case that M(Q(t)) = 0 Note that Q(t) is a symmetric arrowhead matrix group Therefore, using Proposition 32 we deduce that 0 = M(Q(t)), R(t) = Q(t), M (R(t)) = Q(t), P (t) P (t) 2 = P (t) + Q(t 1), P (t) P (t 1) 2 = P (t), P (t) + P (t) 2 Q(t 1), P (t) P (t 1) 2 Now Theorem 42 implies that P (t), P (t) = 0 which is equivalent to say that P (t) = 0 That is M(Q(t)) = 0 indicates that X(t) is the solution of Problem 2 The following theorem turns out that the solution of Problem 2 can be computed by Algorithm 2 within finite number of steps for any initial symmetric arrowhead matrix group in the exact arithmetic Theorem 44 In the absence of roundoff errors, the solution of Problem 2 can be computed by Algorithm 2 within at most m + 1 iteration steps where m = ν 1 + + ν q and ν j stands for the dimension of the linear subspace SAR n j n j Proof Presume that P (l) 0 for l = 0, 1,, m 1 Therefore, X(m) can be obtained by Algorithm 2 Suppose that S denotes the matrix subspace consists of all matrices of the form E = (E 1,, E q ) where E i SAR n i n i for i = 1, 2,, q As established by Theorem 42, we have P (i), P (j) = 0 for i, j = 0, 1, 2,, m 1, (i j), which reveals that {P (0), P (1),, P (m 1)} is an orthogonal basis for S difficult to see that P (m) = 0, hence X(m) is the solution of Problem 2 It is not Consider the least squares problem (41) It is known that the solutions of the least squares problem (41) are computed by x = A + b+null(a) [3] Thence, x = A + b gives the minimum norm solution Invoking the fact that Range(A + ) = Range(A T ), it is concluded that x Range(A T ) has the smallest 2-norm solution of the linear system Ax = b On the other hand we have C M(X) 2 = = N i=1 N i=1 l=1 j=1 l=1 j=1 = Âx ĉ 2 2, q A ijl X j B ijl C i 2 F q (Bijl T A ijl)vec(x j ) vec(c i ) 2 2

12 An algorithm to determine symmetric arrowhead solutions of matrix equations where  = B11l T A 11l l=1 B1ql T A 1ql l=1 BN1l T A N1l BNql T A Nql l=1 l=1 and ĉ = vec(c 1 ) vec(c N ) vec(x 1 ), x =, vec(x q ) For an arbitrary matrix group W = [W 1,, W N ], it is not difficult to see that the matrix group vec( X) = vec(m (W )) belongs to Range(ÂT ) We deduce that if X(0) = A(M (W )) then at each iterate, say the kth iterate vec(x(k)) Range(ÂT ) As a result, the sequence of approximate solutions produced by Algorithm 2 converges to the least norm solution of the following the least squares problem (42) min X R n 1 n 1 R nq nq C M(X) That is Algorithm 2 converges to the solution of Problem 2 42 Minimization property of Algorithm 2 In order to demonstrate the minimization property of Algorithm 2, we need to establish the subsequent proposition Proposition 45 Suppose that k steps of Algorithm 2 have been performed, ie, P (l) 0 and M(Q(l)) 0 for l = 0, 1,, k 1 Then the kth residual R(k) satisfies (43) R(k), M(Q(i)) = 0, i = 0, 1,, k 1 Proof It is known that Q(i) is a symmetric arrowhead matrix group for i = 0, 1,, k 1 Now by Proposition 32 and Theorem 42, it reveals that R(k), M(Q(i)) = M (R(k)), Q(i) = A(M (R(k))), Q(i) = P (k), Q(i) = 0 Assume that m steps of Algorithm 2 have been performed, ie, P (l) 0 and M(Q(l)) 0 for l = 0, 1,, m 1 Let us define the subspace K m and MK m as follows: (44) K m = span{q(0),, Q(m 1)}, MK m = span{m(q(0)),, M(Q(m 1))} Now in view of Proposition 45, it reveals that Algorithm 2 is an oblique projection technique onto X(0)+K m (X(m) X(0)+K m ) with the orthogonality condition R(m) MK m where R(m) = C M(X(m)) The following theorem shows that the residual matrices associated with the approximate solutions produced by Algorithm 2 satisfy an optimality property Theorem 46 (The minimization property of Algorithm 2) Suppose that m steps of Algorithm 2 have been performed, ie, P (l) 0 and M(Q(l)) 0 for l = 0, 1,, m 1 Moreover presume that X(0) is an arbitrary given symmetric arrowhead matrix group

Beik and Salkuyeh 13 Presume that the subspaces K m and MK m are defined by (44) Assume that P (m) 0, then the m-th approximate solution obtained by Algorithm 2 satisfies C M(X(m)) = min C M(X) X X(0)+K m Proof It is not difficult to see that X(m) X(0) + K m γ (m) 0, γ (m) 1,, γ (m) m 1 such that Consequently, there exists m 1 X(m) = X(0) + γ (m) i Q(i) Suppose that X is an arbitrary member of X(0) + K m, ie, there exists γ 0, γ 1,, γ m 1 we have i=0 m 1 X = X(0) + γ i Q(i) Assume that R(m) = C M(X(m)) and R(X) = C M(X), it is not onerous to see that Thus m 1 R(X) = R(m) + i=0 i=0 (γ (m) i m 1 R(X), R(X) = R(m), R(m) 2 + m 1 i=0 (γ (m) i i=0 (γ (m) i γ i )M(Q(i)) m 1 γ i )M(Q(i)), γ i ) M(Q(i)), R(m) i=0 (γ (m) i γ i )M(Q(i)) In view of Theorem 42 and Proposition 45, the above relation reduces to m 1 R(X), R(X) = R(m), R(m) + (γ (m) i γ i ) 2 M(Q(i)), M(Q(i)), i=0 which completes the proof 43 On the solution of Problem 3 Let X = ( X 1,, X q ) be a given arbitrary matrix group By means of Lemma 31, we have X = W 1 + W 2 + W 3, where W 1 = (W (1) 1,, W q (1) ), W 2 = (W (2) 1,, W q (2) ) and W 3 = (W (3) 1,, W q (3) ) such that W (1) j SAR n j n j, W (2) j CAR n j n j and W (3) j SSR n j n j for j I[1, q] As a matter of fact W 1 = A( X) and Lemma 31 implies W 1 X, W 2 + W 3 = 0,

14 An algorithm to determine symmetric arrowhead solutions of matrix equations where X = (X 1,, X q ) is a symmetric arrowhead solution group Therefore, it is not difficult to show that X X, X X = (W 1 X) + (W 2 + W 3 ), (W 1 X) + (W 2 + W 3 ) Consequently, we deduce that = W 1 X, W 1 X + 2 W 1 X, W 2 + W 3 + W 2 + W 3, W 2 + W 3 = W 1 X, W 1 X + W 2 + W 3, W 2 + W 3 min X X = min X W 1 X S E X S E As a result, we may apply Algorithm 2 to find the least norm solution of the following least squares problem (45) min M(Y ) Ĉ, Y ŜE where Ŝ E = {Y = (Y 1,, Y q ) Y j SAR n j n j for j I[1, q]} and Ĉ = C M(W 1) To determine the solution of Problem 3, we first compute Y as the least norm solution of (45) using the approach described in the previous subsection Then the solution of Problem 3 is derived by X = Y + W 1 5 Numerical experiments In this section, some numerical experiments are reported to verify the validity of the presented theoretical results and the applicability of Algorithm 2 The execution of all numerical tests was performed in double precision with some Matlab codes on a Pentium 4 PC, with a 306 GHz CPU and 100GB of RAM Example 51 Consider the subsequent two-dimensional convection diffusion equation { u + 2α1 u x + 2α 2 u y 2α 3 u = f, in Ω, u = 0, x on Ω, where Ω = [0, 1] [0, 1] and Ω is the boundary of Ω We assume that α 1, α 2 and α 3 are nonnegative parameters Discretization of the Laplacian operator by the standard five-point stencil and the first-order derivatives by centered finite differences with mesh size h = 1/(n + 1) in both x- and y-directions yields the Sylvester matrix equation (51) AX + XB = C, where A and B are n n matrices of the form A = tridiag( 1 α 1 h, 2 α 3 h 2, 1 + α 1 h), B = tridiag( 1 α 2 h, 2 α 3 h 2, 1 + α 2 h) The Sylvester matrix equation (51) with the above data have been examined by several authors so far, for instance see [6, 13, 17, 33] The right-hand side matrix C is taken such

Beik and Salkuyeh 15 Table 1 Numerical results for Example 51 for different values of n and (α 1, α 2, α 3 ) α 1 = 10, α 2 = 20, α 3 = 10 n = 1000 n = 2000 n = 3000 n = 4000 n = 5000 Iters 25 24 22 22 22 E k 76e-10 70e-10 88e-10 63e-10 50e-10 CPU times 042 120 220 434 717 α 1 = 50, α 2 = 100, α 3 = 50 n = 1000 n = 2000 n = 3000 n = 4000 n = 5000 Iters 25 24 22 22 22 E k 82e-10 73e-10 91e-10 65e-10 51e-10 CPU times 042 123 222 427 709 α 1 = 100, α 2 = 100, α 3 = 0 n = 1000 n = 2000 n = 3000 n = 4000 n = 5000 Iters 25 24 22 22 22 E k 73e-10 69e-10 88e-10 63e-10 50e-10 CPU times 044 123 222 433 734 a way that the exact solution of (51) is the following symmetric arrowhead matrix 1 1 2 n 1 1 2 0 0 X = 2 0 3 0 n 1 0 0 n We apply Algorithm 2 to solve system (51) The initial guess is set to be X(0) = 0 and the succeeding stopping criterion is exploited, E k = R(k) R(0) < 10 9 Numerical results for different values of n (=1000,2000,3000, 4000, 5000) and three set of parameters (α 1, α 2, α 3 ) = (10, 20, 10), (α 1, α 2, α 3 ) = (50, 100, 50) and (α 1, α 2, α 3 ) = (100, 100, 0) are given in Table 1 In this table, the number of iterations for the convergence (Iters), CPU times (in seconds) and E k are presented As seen the proposed algorithm is effectual for computing the symmetric arrowhead solution of (51) Example 52 Let M = tridiag( 1, 2, 1) R n n and N = tridiag(05, 0, 05) R n n For every r R we define the matrix T r as T r = M + rn Consider the following coupled matrix equations { A11 X (52) 1 + X 2 B 12 = C 1, A 21 X 1 + X 2 B 22 = C 2, where A 11 = T 4, B 12 = T 4, A 21 = T 3 and B 22 = T 3 The right-hand matrices C 1 and C 2 are selected such that (X1, X 2 ) is the solution of (52) where X 1 and X 2 are the symmetric

16 An algorithm to determine symmetric arrowhead solutions of matrix equations Table 2 Numerical results for Example 52, system (52) n = 1000 n = 2000 n = 3000 n = 4000 n = 5000 Iters 45 48 50 47 48 E k 46e-10 58e-10 48e-10 48e-10 42e-10 CPU times 161 575 1359 2177 3395 Table 3 Numerical results for Example 52, system (54) n = 1000 n = 2000 n = 3000 n = 4000 n = 5000 Iters 105 105 108 108 114 E k 84e-10 84e-10 87e-10 85e-10 77e-10 CPU times 450 1392 3036 5256 8513 arrowhead matrices given by 1 1 2 n 1 1 2 0 0 X1 = 2 0 3 0, X 2 = n 1 0 0 n 1 1 2 n 1 1 2 0 0 2 0 3 0 n 1 0 0 n The application of Algorithm 2 is examined to compute a symmetric arrowhead solution to the system (52) We utilize (X 1 (0), X 2 (0)) = (0, 0) as the initial guess and (53) E k = max{ R 1(k) R 1 (0), R 2(k) R 2 (0) } < 10 9, as the stopping criterion where R(k) = (R 1 (k), R 2 (k)) Numerical results for different values on n are given in Table 2 All the assumptions are as the previous example Now consider the next coupled matrix equations { A11 X (54) 1 B 11 + A 12 X 2 B 12 = C 1, A 21 X 1 B 21 + A 22 X 2 B 22 = C 2, where A 11 = A 12 = T 3, A 21 = A 22 = T 1, B 11 = B 12 = T 7 and B 21 = B 22 = 8 Numerical results are reported in Table 3 We point here that all the assumptions are as the first part of this example As seen, the handled algorithm is suitable Example 53 This example is given to verify the validity of the presented results in Subsection 43 To do this, we consider the system { A11 X (55) 1 B 11 + A 12 X 2 B 12 = C 1, A 21 X 1 B 21 + A 22 X 2 B 22 = C 2, where A 11 = B 12 = 4 2 0 1 2 2 0 1 3 4 2 3 0 3 2 1 1 1, B 11 =, A 21 = 2 0 1 1 2 1 1 1 3 2 3 1 1 3 1 0 1 2, A 12 =, B 21 = 1 2 1 1 2 2 0 2 1 1 0 1 1 3 1 1 2 2,,

Beik and Salkuyeh 17 A 22 = 1 1 3 3 1 0 2 0 1 with the right-hand side matrices C 1 = 31 18 24 51 7 39 34 8 23, B 22 =, C 2 = 7 2 2 1 3 1 1 1 6 22 25 28 59 3 4 68 38 23 It is not difficult to see that Eq (55) has a unique symmetric arrowhead solution (see [1]) of the form (X1, X 2 ) where X1 = 1 1 2 1 2 0, X2 = 3 1 2 1 2 0 2 0 3 2 0 1 An initial guess (X(0), Y (0)) = (0, 0) and the stopping criterion E k < 10 4 are exploited where E k is defined in (53) The approximate solution X(56) = (X 1 (56), X 2 (56)) is provided by Algorithm 2 where 10002 10004 19996 29999 10001 20002 X 1 (56) = 10004 19994 0, X 2 (56) = 10001 20000 0 19996 0 30003 20002 0 09988 We now consider X 1 = 4 4 3 5 3 0 4 1 4, X2 = 3 4 4 4 2 5 5 2 4 The matrix group W 1 = (W (1) 1, W (2) 1 ), defined in subsection 43, is given by 4 45 35 W (1) 1 = 45 3 0, W (2) 1 = 3 4 45 4 2 0 35 0 4 45 0 4 Therefore, we have Ĉ = C M(W 1) = (Ĉ1, Ĉ2) where 945 345 46 14 585 64 Ĉ 1 = 395 23 145, Ĉ 2 = 875 255 105 41 15 295 35 175 425 In this case, the provided approximate solution of Eq (45) by Algorithm 2 is Y (44) = (Y 1 (44), Y 2 (44)) where 29989 35011 15001 00004 30000 24999 Y 1 (44) = 35011 09977 0, Y 2 (44) = 30000 00002 0 15001 0 09996 24999 0 30008 Hence X 1 (44) = W (1) 1 + Y 1 (44) = X 2 (44) = W (2) 1 + Y 2 (44) =, 10011 09989 19999 09989 20023 0 19999 0 30004 29996 10000 20001 10000 20002 0 20001 0 09992 As seen the proposed algorithm has obtained a suitable solution for Problem 3,
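To connect the reported experiments with the iteration itself, the following sketch implements Algorithm 2 for a single Sylvester equation AX + XB = C over symmetric arrowhead matrices, i.e. the setting of Example 5.1 but with a small problem size chosen here for illustration. The paper's experiments use MATLAB; the size, parameters and helper names below are ours. The iteration recovers the arrowhead matrix used to build the right-hand side.

```python
import numpy as np

def arrow_proj(X):
    """A(X): projection onto the symmetric arrowhead matrices."""
    S = (X + X.T) / 2
    P = np.zeros_like(S)
    P[0, :], P[:, 0] = S[0, :], S[:, 0]
    np.fill_diagonal(P, np.diag(S))
    return P

def tridiag(lo, d, up, n):
    return np.diag(np.full(n - 1, lo), -1) + np.diag(np.full(n, d)) + np.diag(np.full(n - 1, up), 1)

# Example 5.1-type data, with a small n for illustration
n, (a1, a2, a3) = 50, (10.0, 20.0, 10.0)
h = 1.0 / (n + 1)
A = tridiag(-1 - a1 * h, 2 - a3 * h**2, -1 + a1 * h, n)
B = tridiag(-1 - a2 * h, 2 - a3 * h**2, -1 + a2 * h, n)

# exact symmetric arrowhead solution with diagonal 1..n and border 1..n-1
X_true = np.zeros((n, n))
np.fill_diagonal(X_true, np.arange(1, n + 1, dtype=float))
X_true[0, 1:] = X_true[1:, 0] = np.arange(1, n, dtype=float)
C = A @ X_true + X_true @ B

M      = lambda X: A @ X + X @ B          # M(X)  = A X + X B
M_star = lambda Y: A.T @ Y + Y @ B.T      # M*(Y) = A^T Y + Y B^T
frob   = lambda U, V: np.sum(U * V)       # <U, V>_F

# Algorithm 2 for a single equation and a single unknown, as a sketch
X = np.zeros((n, n))
R = C - M(X)
P = arrow_proj(M_star(R)); Q = P.copy()
R0 = np.linalg.norm(R, 'fro')
for k in range(500):
    if np.linalg.norm(P, 'fro') == 0 or np.linalg.norm(R, 'fro') / R0 < 1e-9:
        break
    MQ = M(Q)
    alpha = frob(P, P) / frob(MQ, MQ)
    X = X + alpha * Q
    R = R - alpha * MQ
    P_new = arrow_proj(M_star(R))
    Q = P_new + (frob(P_new, P_new) / frob(P, P)) * Q
    P = P_new
print("iterations:", k,
      " relative error:", np.linalg.norm(X - X_true, 'fro') / np.linalg.norm(X_true, 'fro'))
```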

18 An algorithm to determine symmetric arrowhead solutions of matrix equations 6 Conclusion Recently, Li et al [Appl Math Comput (2014), 719 724] have derived the general expression for the least norm symmetric arrowhead solution of the least squares problem corresponding to the matrix equation AXB + CY D = E by exploiting Moore-Penrose inverse and the Kronecker product In this paper, an efficient iterative algorithm has been proposed to solve the least squares problem associated with a general class of coupled matrix equations over symmetric arrowhead matrices As a matter of fact, the conjugate gradient least squares (CGLS) method has been extended to construct the offered algorithm which obtains the symmetric arrowhead solution group of the considered problem within finite iteration steps for any initial symmetric arrowhead matrix group in the absence of roundoff errors In particular, the least norm symmetric arrowhead solution group of the mentioned least squares problem can be computed by choosing an appropriate initial symmetric arrowhead matrix group Scrutinized numerical experiments have confirmed the established theoretical results and reveal the good performance of the handled algorithm The superior features of our examined algorithm are that it is very simple, neat and inexpensive to handle for large an sparse problems Whereas, the propounded algorithm outperforms Li et al s approach even for small size problems as examined for several examples Appendix A The proof of Theorem 42 Without loss of generality, we only need to establish the validity of the assertions for 1 i < j k To this end, we exploit the mathematical induction In what follows, for simplicity, we set α k = Step 1 For k = 1, we have Also, it can be seen that P (k) 2 M(Q(k)) 2, β k = P (k + 1) 2 P (k) 2 P (0), P (1) = P (0), P (0) α 0 A(M (M(Q(0)))) = P (0) 2 α 0 P (0), M (M(Q(0))) = P (0) 2 α 0 M(Q(0)), M(Q(0)) = 0 M(Q(0)), M(Q(1)) = β 0 M(Q(0)) 2 + M(Q(0)), M(P (1)) As Q(0) = P (0) we conclude that = β 0 M(Q(0)) 2 + 1 α 0 M (R(0) R(1)), P (1) = β 0 M(Q(0)) 2 + 1 α 0 A(M (R(0) R(1))), P (1) = β 0 M(Q(0)) 2 + 1 α 0 P (0) P (1), P (1) = β 0 M(Q(0)) 2 1 α 0 P (1), P (1) = 0 Q(0), P (1) = P (0), P (1) = 0 Step 2 Suppose that the conclusions are true when k = l, ie, the following relations hold for i < l, P (i), P (l) = 0, M(Q(i)), M(Q(l)) = 0 and Q(i), P (l) = 0

Beik and Salkuyeh 19 We demonstrate that the validity of the contentions for k = l + 1 It can be seen that P (l + 1), P (l) = P (l) α l A(M (M(Q(l)))), P (l) = P (l), P (l) α l M(P (l)), M(Q(l)) = P (l), P (l) α l M(Q(l)) β l 1 M(Q(l 1)), M(Q(l)) = P (l), P (l) α l M(Q(l)), M(Q(l)) = 0 Moreover by some straightforward computations and Proposition 32, we have M(Q(l + 1)), M(Q(l)) = M(P (l + 1)) + β l M(Q(l)), M(Q(l)) = M(P (l + 1)), M(Q(l)) + β l M(Q(l), M(Q(l)) = 1 α l M(P (l + 1)), R(l) R(l + 1) + β l M(Q(l)), M(Q(l)) = 1 α l P (l + 1), P (l + 1) + β l M(Q(l), M(Q(l)) = 0 Note that Q(l 1) is a symmetric arrowhead matrix group In view of Proposition 32, it is not difficult to conclude that P (l + 1), Q(l) = P (l + 1), P (l) + β l 1 Q(l 1) = β l 1 P (l + 1), Q(l 1) = β l 1 P (l) α l A(M (M(Q(l)))), Q(l 1) = α l β l 1 M(Q(l)), M(Q(l 1)) = 0 For j = 1, 2,, l 1, by using the above results, we have: P (l + 1), P (j) = P (l) α l A(M (M(Q(l)))), P (j) = P (l), P (j) α l M(Q(l)), M(P (j)) = α l M(Q(l)), M(Q(j)) β j 1 M(Q(j 1)) = 0 From Proposition 32, it is seen that M(Q(l + 1)), M(Q(j)) = M(P (l + 1)) + β l M(Q(l)), M(Q(j)) = M(P (l + 1)), M(Q(j)) = 1 M(P (l + 1)), R(j) R(j + 1) α j = 1 α j P (l + 1), M (R(j) R(j + 1)) = 1 α j P (l + 1), P (j) P (j + 1) = 0 Invoking the fact that Q(j) is a symmetric arrowhead matrix group for each j, we derive: P (l + 1), Q(j) = P (l) α l A(M (M(Q(l)))), Q(j) = P (l), Q(j) α l M(Q(l)), M(Q(j)) = 0 From Steps 1 and 2, the proof is competed by the principle of the mathematical induction

20 An algorithm to determine symmetric arrowhead solutions of matrix equations References [1] F P A Beik and D K Salkuyeh, On the global Krylov subspace methods for solving general coupled matrix equation, Comput Math Appl 62 (2011), no 11, 4605 4613 [2] F P A Beik and D K Salkuyeh, The coupled Sylvester-transpose matrix equations over generalized centro-symmetric matrices, Int J Comput Math 90 (2013), no 7, 1546 1566 [3] D S Bernstein, Matrix Mathematics: theory, facts, and formulas, Second edition, Princeton University Press, 2009 [4] A Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996 [5] C F Borges, R Frezza and W B Gragg, Some inverse eigenproblems for Jacobi and arrow matrices, Numer Linear Algebra Appl 2 (1995) 195 203 [6] A Bouhamidi and K Jbilou, A note on the numerical approximate solutions for generalized Sylvester matrix equations with applications, Appl Math Comput 206 (2008), no 2, 687 694 [7] M Dehghan and M Hajarian, The general coupled matrix equations over generalized bisymmetric matrices, Linear Algebra Appl 432 (2010), no 6, 1531 1552 [8] M Dehghan and M Hajarian, An iterative algorithm for solving a pair of matrix equation AY B = E, CY D = F over generalized centro-symmetric matrices, Comput Math Appl 56 (2008), no 12, 3246 3260 [9] M Dehghan and M Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations, Applied Mathematical Modelling, 35 (2011), no 7, 3285 3300 [10] F Ding and T Chen, Gradiant based iterative algorithms for solving a class of matrix equations, IEEE Trans Autom Contr 50 (2005), no 8, 1216 1221 [11] F Ding and T Chen, On iterative solutions of general coupled matrix equations, SIAM J Control Optim 44 (2006), no 4, 2269 2284 [12] F Ding, PX Liu and J Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl Math Comput 197 (2008), no 1, 41 50 [13] A El Guennouni, K Jbilou and A J Riquet, Block Krylov subspace methods for solving large Sylvester equations, Numer Algorithms, 29 (2002), no 1 3, 75-96 [14] D Fisher, G Golub, O Hald, C Leiva and O Widlund, On FourierToeplitz methods for separable elliptic problems, Math Comp 28 (1974), no 126, 349-368 [15] M Hajarian and M Dehghan, The generalized centro-symmetric and least squares generalized centrosymmetric solutions of the matrix equation AY B + CY T D = E, Math Meth Appl Sci 34 (2011), no 13, 1562 1579 [16] M Hajarian, Developing the CGLS algorithm for the least squares solutions of the general coupled matrix equations, Math Meth Appl Sci DOI: 101002/mma3017 [17] DY Hu and L Reichel, Krylov-subspace methods for the Sylvester equation, Linear Algebra Appl 172 (1992) 283 313 [18] G X Huang, F Ying and K Gua, An iterative method for skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C, J Comput Appl Math 212 (2008), no 2, 231 244 [19] K Jbilou and A J Riquet, Projection methods for large Lyapunov matrix equations, Linear Algebra Appl 415 (2006), no 2, 344 358 [20] T Jiang and M Wei, On solutions of the matrix equations X AXB = C and X A XB = C, Linear Algebra Appl 367 (2003) 225 233 [21] J F Li, X Y Hu, X-F Duan and L Zhang, Iterative method for mirror-symmetric solution of matrix equation AXB + CY D = E, Bull Iran Math Soc 36 (2010), no 2, 35 55 [22] H Li, Z Gao and D Zhao, Least squares solutions of the matrix equation AXB + CY D = E with the least norm for symmetric arrowhead matrices, Appl Math Comput 226 (2014) 719 724 
[23] D P O leary and G Stewart, Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices, J Comput Phys 90 (1990) 497 405 [24] B Parlett and B Nour-Omid, The use of refined error bound when updating eigenvalues of tridiagonals, Linear Algebra Appl 68 (1985) 179-219 [25] B Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cilffs, 1980 [26] Z H Peng, X Y Hu and L Zhang, Two inverse eigenvalue problems for a special kind of matrices, Linear Algebra Appl 416 (2006), no 2, 336 347 [27] Z H Peng, X Y Hu and L Zhang, The bisymmetric solutions of the matrix equation A 1X 1B 1 + A 2X 2B 2 + + A l X l B l = C and its optimal approximation, Linear Algebra Appl 426 (2007), no 2, 583-595

Beik and Salkuyeh 21 [28] Z H Peng and J Liu, The generalized bisymmetric solutions of the matrix equation A 1X 1B 1 + A 2X 2B 2 + + A l X l B l = C and its optimal approximation, Numer Algorithms, 50 (2009), no 2, 127-144 [29] Z H Peng and Z J Zhou, An efficient algorithm for the submatrix constraint of the matrix equation A 1X 1B 1 + A 2X 2B 2 + + A l X l B l = C, Int J Comput Math 89 (2012), no 12, 1641-1662 [30] Z H Peng, The reflexive least squares solutions of the matrix equation A 1X 1B 1 + A 2X 2B 2 + + A l X l B l = C with a submatrix constraint, Numer Algorithms, 64 (2013), no 3, 455-480 [31] Z H Peng and H Xin, The reflexive least squares solutions of the general coupled matrix equations with a submatrix constraint, Appl Math Comput 225 (2013) 425 445 [32] Y Saad, Iterative Methods for Sparse Linear Systems, PWS Press, New York, 1995 [33] M Robbé and M Sadkane, A convergence analysis of GMRES and FOM methods for Sylvester equations, Numer Algorithms, 30 (2002), no 1, 71-89 [34] D K Salkuyeh and F Toutounian, New approaches for solving large Sylvester equations, Appl Math Comput 173 (2006), no 1, 9 18 [35] D K Salkuyeh and F P A Beik, An iterative method to solve symmetric positive definite matrix equations, Mathematical Reports, To appear [36] R S Wang, Functional Analysis and Optimization Theory, Beijing Univ of Aeronautics Astronautics, Beijing (2003) [37] A G Wu, G Feng, G R Duan and W J Wu, Finite iterative solutions to a class of complex matrix equations with conjugate and transpose unknowns, Math Comput Modelling, 52 (2010), no 9, 1463 1478 [38] A G Wu, L Lv and G R Duan, Iterative algorithms for solving a class of complex conjugate and transpose matrix equations, Appl Math Comput 217 (2011), no 21, 8343 8353 [39] A G Wu, B Li, Y Zhang and G-R Duan, Finite iterative solutions to coupled Sylvester-conjugate matrix equations, Applied Mathematical Modelling, 35 (2011), no 3, 1065 1080 [40] A G Wu, E Zhang and F Liu, On closed-form solutions to the generalized Sylvester-conjugate matrix equation, Appl Math Comput 218 (2012), no 19, 9730 9741 [41] Y Yuan, Generalized inverse eigenvalue problems for symmetric arrow-head matrices, International Journal of Computational and Mathematical Sciences 4 (2010) 268 271 [42] H Zha, A two-way chasing scheme for reducing a symmetric arrowhead matrix to tridiagonal form, J Numer Algebra Appl 1 (1992), no 1, 49-57 [43] B Zhou and G R Duan, On the generalized Sylvester mapping and matrix equation, Systems & Control Letters, 57 (2008), no 3, 200 208