ON PMHSS ITERATION METHODS FOR CONTINUOUS SYLVESTER EQUATIONS*


Journal of Computational Mathematics, Vol. 35, No. 5, 2017, 600-619. doi: /jcm.1607-m

Yongxin Dong and Chuanqing Gu
Department of Mathematics, School of Sciences, Shanghai University, Shanghai, China
Email: cqgu@staff.shu.edu.cn

Abstract

The modified Hermitian and skew-Hermitian splitting (MHSS) iteration method and the preconditioned MHSS (PMHSS) iteration method were introduced for complex symmetric linear systems. In this paper, on the basis of the MHSS iteration method, we present a PMHSS iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and complex symmetric positive definite/semi-definite matrices. Under suitable conditions, we prove the convergence of the PMHSS iteration method and discuss the spectral properties of the preconditioned matrix. Moreover, to reduce the computing cost, we establish an inexact variant of the PMHSS iteration method and analyze its convergence in detail. Numerical results show that the PMHSS iteration method and its inexact variant are efficient and robust solvers for this class of continuous Sylvester equations.

Mathematics subject classification: 65F10, 65F50.
Key words: Continuous Sylvester equation, PMHSS iteration, Inexact PMHSS iteration, Preconditioning, Convergence.

1. Introduction

For solving a class of complex symmetric linear systems (W + iT)x = b, where i = √(−1) and W, T ∈ R^{n×n} are real, symmetric, and positive semi-definite matrices with at least one of them positive definite, Bai et al. introduced the MHSS iteration method [1] and the PMHSS iteration method [2,3], respectively. Moreover, for solving large sparse continuous Sylvester equations with non-Hermitian and positive definite/semi-definite matrices, Bai presented a Hermitian and skew-Hermitian splitting (HSS) iteration method [4,5]. For more details on the HSS iteration method and theory, we refer to [5-11] and the references therein. Building on the HSS iteration method, Zhou et al. proposed an MHSS iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and complex symmetric positive definite/semi-definite matrices [12]. Recently, an MHSS iteration method was also presented for solving the complex linear matrix equation AXB = C [13,14].

In this paper, we consider the iterative solution of the continuous Sylvester equation

    AX + XB = F,    (1.1)

where A ∈ C^{m×m}, B ∈ C^{n×n} and F ∈ C^{m×n} are given complex matrices, assumed to be large and sparse. Let A = W + iT and B = U + iV, where W, T ∈ R^{m×m} and U, V ∈ R^{n×n} are real symmetric matrices, with W positive definite and T, U, V positive semi-definite. We assume T ≠ 0, which implies that A is non-Hermitian.

* Received April 21, 2016 / Revised version received July 12, 2016 / Accepted July 26, 2016 / Published online July 1, 2017

The continuous Sylvester equation (1.1) with A = W + iT and B = U + iV may arise from numerical solutions of PDEs with complex coefficients [1,2]. The Lyapunov equation is a special case of the Sylvester equation, with B = A* and F = F*. It is well known that the continuous Sylvester equation (1.1) has a unique solution under the assumption that A and −B have no common eigenvalue [4]. The continuous Sylvester equation (1.1) is mathematically equivalent to the system of linear equations

    𝒜x = f,    (1.2)

where 𝒜 = I ⊗ A + B^T ⊗ I, and the vectors x and f contain the concatenated columns of the matrices X and F, respectively, with ⊗ being the Kronecker product and B^T the transpose of the matrix B (this equivalence is illustrated by a short numerical check below). The matrix equation (1.1) plays an important role in numerical methods for differential equations with complex coefficients [1,2,4], iterative methods for algebraic Riccati equations [15-17], matrix nearness problems [18], image restoration [19] and other problems; see [4,12,20-31] and the references therein. Recent interest is directed more towards large and sparse matrices A and B, with F = CD* of very low rank, where C and D have only a few columns [32]. In these cases, the standard methods are often too expensive to be practical, and iterative methods become the more viable choice [14,25].

The standard direct method for solving (1.1) is due to Bartels and Stewart [26]. However, this method requires dense matrix operations such as the Schur decomposition, and is thus not applicable in large-scale settings. For such settings, iterative methods have been developed that take advantage of sparsity and low-rank structure. The two most common ones are the alternating direction implicit (ADI) method [25,30,32] and the (rational) Krylov projection methods [28]. Advantages of Krylov subspace based algorithms over ADI iterations are that no knowledge about the spectra of A and B is needed and (except for [27]) no linear systems with (shifted) A and B have to be solved. But ADI iterations often converge faster if (sub)optimal shifts for A and B can be estimated effectively [25]. Recently, Ding and Chen proposed a few simple iterative schemes, namely gradient based iterative (GI) algorithms, for matrix equations [29] (and others therein). These schemes, resembling the classical Jacobi and Gauss-Seidel iterations for linear systems, are easy to implement and cost little per step, but converge linearly at best [25,29,33,34].

In this paper, we mainly pursue the idea of the HSS based iteration methods to solve the continuous Sylvester equation (1.1); see, e.g., [1-14,17,19-23,33-35] and the references therein. Bai, Golub and Ng [5] first proposed the Hermitian and skew-Hermitian splitting (HSS) method for non-Hermitian positive-definite linear systems. Because of its effectiveness and robustness, the HSS method has been extensively studied and extended to other equations and settings; see, e.g., [6-9] and the references therein. A considerable advantage of the MHSS iteration [1,12] is that solving the shifted skew-Hermitian sub-systems of the continuous Sylvester equation, with coefficient matrices αI + iT and βI + iV, is avoided: only two linear sub-systems with real symmetric positive definite coefficient matrices need to be solved at each step [1,12]. Therefore, operations on these matrices can be carried out using real arithmetic only.
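As a concrete check of the equivalence between (1.1) and (1.2), the following small NumPy sketch (our illustration, not part of the original paper; the sizes and random data are arbitrary) assembles 𝒜 = I ⊗ A + B^T ⊗ I and verifies that it maps vec(X) to vec(AX + XB), where vec(·) stacks columns:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 3
    A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

    vec = lambda M: M.flatten(order="F")   # column stacking (Fortran order)

    # The mn-by-mn coefficient matrix of the expanded system (1.2).
    calA = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))

    # vec(AX + XB) = (I (x) A + B^T (x) I) vec(X)
    print(np.allclose(calA @ vec(X), vec(A @ X + X @ B)))   # True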
To the best of our knowledge, there is as yet no preconditioned MHSS iteration method for solving the continuous Sylvester equation (1.1) [2,9,36,37]. Motivated by this, we propose and analyze a new iteration approach, called PMHSS, for solving the continuous Sylvester equation (1.1).

The rest of this paper is organized as follows. In Section 2, after a brief review of the Smith method [17,38] and the MHSS iteration method [1,12,13], we present a PMHSS iteration approach for (1.1) and derive some convergence properties of the PMHSS iteration.

In Section 3, we further study the spectral properties of the PMHSS iteration under proper conditions. We establish an inexact PMHSS (IPMHSS) iteration for (1.1) in Section 4. In Section 5, the results of numerical experiments on a few model problems are discussed. In the Appendix, we give the proof of Theorem 4.1.

Notation. Throughout this paper, a matrix sequence {Y^(k)}_{k=0}^∞ ⊂ C^{m×n} is said to be convergent to a matrix Y ∈ C^{m×n} if the corresponding vector sequence {y^(k)}_{k=0}^∞ ⊂ C^{mn} is convergent to the corresponding vector y ∈ C^{mn}, where the vectors y^(k) and y contain the concatenated columns of the matrices Y^(k) and Y, respectively. If {Y^(k)}_{k=0}^∞ is convergent, then its convergence factor and convergence rate are defined as those of {y^(k)}_{k=0}^∞, correspondingly. In addition, we use λ(·) (or sp(·)), κ₂(·), ‖·‖₂, ‖·‖_F and null(·) to denote the spectrum, the spectral condition number, the spectral norm, the Frobenius norm, and the null space of a matrix, respectively. Note that ‖·‖₂ is also used for the 2-norm of a vector. We use vec(A) to denote the vector obtained by stacking the columns of a given matrix A, and ρ(·) for the spectral radius of a matrix.

2. The PMHSS Iteration

Let us start our discussion by first reviewing Smith's method [17,38] for finding or approximating the solution X of XA + BX = C, where X is an unknown m×n matrix and A, B, C are known matrices of sizes n×n, m×m and m×n, respectively. If E is the m×m identity matrix, I is the n×n identity matrix and q is a nonzero scalar, then XA + BX = C can be written as

    (qE − B)X(qI − A) − (qE + B)X(qI + A) = −2qC.

Premultiplying by (qE − B)^{-1} and postmultiplying by (qI − A)^{-1} gives X − UXV = W, where

    U = (qE − B)^{-1}(qE + B),  V = (qI + A)(qI − A)^{-1},  W = −2q(qE − B)^{-1}C(qI − A)^{-1}.

By inspection, the Smith iteration method for XA + BX = C is defined by

    X = Σ_{k=1}^∞ U^{k−1} W V^{k−1}.    (2.1)

Smith then observed that this linearly convergent sequence can be converted into a quadratically convergent one by squaring at each iteration step. If {Y_v} is the sequence of matrices defined iteratively by

    Y_0 = W,  Y_{v+1} = Y_v + U^{2^v} Y_v V^{2^v},

then it follows by induction that

    Y_v = Σ_{k=1}^{2^v} U^{k−1} W V^{k−1}    (2.2)

for all v. From (2.1) and (2.2), we see that Y_v → X very rapidly as v → ∞ [17,38].
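The following Python sketch implements the squared Smith iteration (2.2) (our illustration, not code from the paper); it assumes U, V and W have already been formed and that ρ(U)ρ(V) < 1, so that the series (2.1) converges, and the stopping rule is a hypothetical choice:

    import numpy as np

    def smith_squared(U, V, W, tol=1e-12, max_sweeps=60):
        """Approximate the solution X of X - U X V = W by squared Smith sweeps."""
        Y = W.copy()
        Up, Vp = U.copy(), V.copy()         # current powers U^{2^v} and V^{2^v}
        for _ in range(max_sweeps):
            correction = Up @ Y @ Vp        # doubles the number of terms of (2.1)
            Y = Y + correction
            if np.linalg.norm(correction, "fro") <= tol * np.linalg.norm(Y, "fro"):
                break
            Up, Vp = Up @ Up, Vp @ Vp       # square the powers for the next sweep
        return Y

Each sweep doubles the number of accumulated terms of the series (2.1), which is exactly the quadratic speed-up Smith observed.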

One class of methods for solving (1.1) is the HSS based methods proposed in [4,6-8,12,20,22]. For more details on HSS methods, we refer to [1,2,4,5,10,12,13,21,23] and the references therein. Evidently, the matrices A = W + iT ∈ C^{m×m} and B = U + iV ∈ C^{n×n} naturally admit the Hermitian and skew-Hermitian splittings

    H(A) = (1/2)(A + A*) = W,  S(A) = (1/2)(A − A*) = iT,
    H(B) = (1/2)(B + B*) = U,  S(B) = (1/2)(B − B*) = iV.

Now, based on the above discussion, we can establish the following PMHSS iteration approach for solving (1.1).

The PMHSS iteration method. Given an initial guess X^(0) ∈ C^{m×n}, compute X^(k+1) ∈ C^{m×n} for k = 0, 1, 2, ..., using the following iteration procedure until {X^(k)}_{k=0}^∞ satisfies the stopping criterion:

    (αV₁ + W)X^(k+1/2) + X^(k+1/2)(βV₂ + U) = (αV₁ − iT)X^(k) + X^(k)(βV₂ − iV) + F,
    (αV₁ + T)X^(k+1) + X^(k+1)(βV₂ + V) = (αV₁ + iW)X^(k+1/2) + X^(k+1/2)(βV₂ + iU) − iF,    (2.3)

where α, β are positive real numbers and V₁, V₂ are prescribed symmetric positive definite matrices.

The two half-steps involved in each step of the PMHSS iteration can be solved effectively using mostly real arithmetic [2]. Under our assumptions, there is no common eigenvalue between the matrices αV₁ + W and −(βV₂ + U), nor between the matrices αV₁ + T and −(βV₂ + V), so the fixed-point matrix equations (2.3) have unique solutions for all given right-hand side matrices.

Clearly, the PMHSS iteration approach reduces to the MHSS iteration method [12] when V₁ = I_m and V₂ = I_n, the identity matrices of orders m and n. In particular, when α = β, we have

    (αV₁ + W)X^(k+1/2) + X^(k+1/2)(αV₂ + U) = (αV₁ − iT)X^(k) + X^(k)(αV₂ − iV) + F,
    (αV₁ + T)X^(k+1) + X^(k+1)(αV₂ + V) = (αV₁ + iW)X^(k+1/2) + X^(k+1/2)(αV₂ + iU) − iF.    (2.4)

After vectorization, we obtain

    vec(X^(k+1)) = M(α) vec(X^(k)) + G(α) vec(F),

where

    M(V₁; V₂; α) = (αK + D)^{-1}(αK + iH)(αK + H)^{-1}(αK − iD),    (2.5)
    G(V₁; V₂; α) = (1 − i)α(αK + D)^{-1}K(αK + H)^{-1},    (2.6)

with H = I ⊗ W + U^T ⊗ I, D = I ⊗ T + V^T ⊗ I and K₁(α) = I ⊗ (αV₁) + (αV₂)^T ⊗ I = αK. In addition, if we introduce the matrices

    F₁(V₁; V₂; α) = ((1 + i)/(2α))(αK + H)K^{-1}(αK + D),
    G₁(V₁; V₂; α) = ((1 + i)/(2α))(αK + iH)K^{-1}(αK − iD),

then it holds that

    𝒜 = F₁(V₁; V₂; α) − G₁(V₁; V₂; α),  M(V₁; V₂; α) = F₁(V₁; V₂; α)^{-1} G₁(V₁; V₂; α).

In particular, when V₁ = W and V₂ = U, we have

    M(W; U; α) = ((α + i)/(α + 1))(αH + D)^{-1}(αH − iD),  G(W; U; α) = (α(1 − i)/(α + 1))(αH + D)^{-1}.
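Since each half-step of (2.4) is itself a Sylvester equation whose coefficient matrices αV₁ + W, αV₂ + U, αV₁ + T and αV₂ + V are real symmetric positive definite, one PMHSS sweep is easy to realize once a solver for such sub-problems is available. The following Python sketch (our illustration, not the authors' implementation) solves the sub-problems exactly by diagonalizing the symmetric positive definite coefficient matrices, which is viable only for moderate sizes:

    import numpy as np

    def spd_sylvester_solve(P, Q, R):
        """Solve P Y + Y Q = R, with P and Q real symmetric positive definite."""
        lp, Ep = np.linalg.eigh(P)
        lq, Eq = np.linalg.eigh(Q)
        S = Ep.T @ R @ Eq                    # transform R into the eigenbases
        return Ep @ (S / (lp[:, None] + lq[None, :])) @ Eq.T

    def pmhss_sweep(X, W, T, U, V, F, V1, V2, alpha):
        """One PMHSS step (2.4) for (W + iT) X + X (U + iV) = F."""
        rhs1 = (alpha * V1 - 1j * T) @ X + X @ (alpha * V2 - 1j * V) + F
        Xh = spd_sylvester_solve(alpha * V1 + W, alpha * V2 + U, rhs1)
        rhs2 = (alpha * V1 + 1j * W) @ Xh + Xh @ (alpha * V2 + 1j * U) - 1j * F
        return spd_sylvester_solve(alpha * V1 + T, alpha * V2 + V, rhs2)

In practice the sub-problems would instead be solved by a sparse direct or iterative Sylvester solver; Section 4 makes this inexactness precise.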

The PMHSS iteration scheme is then induced by the matrix splitting 𝒜 = F₁(α) − G₁(α), with

    F₁(α) = F₁(W; U; α) = ((α + 1)(1 + i)/(2α))(αH + D),    (2.7a)
    G₁(α) = G₁(W; U; α) = ((α + i)(1 + i)/(2α))(αH − iD).    (2.7b)

In this paper, we mainly focus on the PMHSS iteration approach (2.4). To prove that the PMHSS iteration converges to the unique solution of (1.1) for any initial guess, we perform the vec operation on (2.4) and get

    (I ⊗ (αV₁ + W) + (αV₂ + U)^T ⊗ I) vec(X^(k+1/2))
        = (I ⊗ (αV₁ − iT) + (αV₂ − iV)^T ⊗ I) vec(X^(k)) + vec(F),
    (I ⊗ (αV₁ + T) + (αV₂ + V)^T ⊗ I) vec(X^(k+1))
        = (I ⊗ (αV₁ + iW) + (αV₂ + iU)^T ⊗ I) vec(X^(k+1/2)) − i vec(F),

which can be arranged equivalently as

    (αK + H) vec(X^(k+1/2)) = (αK − iD) vec(X^(k)) + vec(F),
    (αK + D) vec(X^(k+1)) = (αK + iH) vec(X^(k+1/2)) − i vec(F).

After some manipulation, the PMHSS iteration (2.4) can thus be expressed as the stationary fixed-point iteration

    vec(X^(k+1)) = M(α) vec(X^(k)) + G(α) vec(F),

where M(α) and G(α) are defined in (2.5) and (2.6). We can easily verify that K and H are symmetric positive definite matrices and D is a symmetric positive semi-definite matrix. Note that H̃ = K^{-1/2} H K^{-1/2} and D̃ = K^{-1/2} D K^{-1/2} are similar to K^{-1}H and K^{-1}D, respectively. Then we have

    ρ(M(α)) = ρ((αK + D)^{-1}(αK + iH)(αK + H)^{-1}(αK − iD))
            = ρ((αK + iH)(αK + H)^{-1}(αK − iD)(αK + D)^{-1})
            = ρ(K^{-1/2}(αK + iH)(αK + H)^{-1}(αK − iD)(αK + D)^{-1}K^{1/2})
            = ρ((αI + iH̃)(αI + H̃)^{-1}(αI − iD̃)(αI + D̃)^{-1})
            ≤ ‖(αI + iH̃)(αI + H̃)^{-1}‖₂ ‖(αI − iD̃)(αI + D̃)^{-1}‖₂
            = max_{λ̃_j ∈ sp(K^{-1}H)} √(α² + λ̃_j²)/(α + λ̃_j) · max_{μ̃_j ∈ sp(K^{-1}D)} √(α² + μ̃_j²)/(α + μ̃_j)
            ≤ max_{λ̃_j ∈ sp(K^{-1}H)} √(α² + λ̃_j²)/(α + λ̃_j) =: σ(α) < 1, for all α > 0.    (2.8)

Therefore, the PMHSS iteration method converges unconditionally to the exact solution X* ∈ C^{m×n} of (1.1), with convergence factor ρ(M(α)). Equivalently, the PMHSS iteration method (2.4) converges unconditionally to the exact solution x* ∈ C^{mn} of the system of linear equations 𝒜x = f, i.e., (1.2).

In particular, for the choice

    α* = √(λ̃_min λ̃_max),    (2.9)

with λ̃_min and λ̃_max being the smallest and the largest eigenvalues of the matrix K^{-1}H, it holds that

    σ(α*) ≤ √(κ₂(K^{-1}H) + 1) / (√(κ₂(K^{-1}H)) + 1).

Evidently, the smaller the condition number of the matrix K^{-1}H is, the faster the asymptotic convergence rate of the PMHSS iteration will be. However, the practical usefulness of such estimates is questionable. First, the estimated value of α* usually depends on spectral information that may not be accessible. Second, the upper bound at α* does not always lead to the best choice of α when the stationary iteration is accelerated: the parameter α* only minimizes the upper bound σ(α) of the spectral radius ρ(M(α)) of the PMHSS iteration matrix, not the spectral radius ρ(M(α)) itself.

Moreover, when V₁ = W and V₂ = U it holds that

    ρ(M(α)) ≤ √(α² + 1)/(α + 1) < 1.

Note that this upper bound is a constant independent of both the data and the size of the problem. It implies that when F₁(α), defined in (2.7), is used to precondition the matrix 𝒜 ∈ C^{mn×mn}, the eigenvalues of the preconditioned matrix F₁(α)^{-1}𝒜 are clustered within the complex disk centered at 1 with radius √(α² + 1)/(α + 1), since F₁(α)^{-1}𝒜 = I − M(α). When α = 1, this radius becomes √2/2.

For this special case, we can further prove the convergence of the PMHSS iteration method under weaker conditions, without imposing the restriction that the matrix W ∈ R^{m×m} be positive definite [2]. By making use of Theorem 3.2 in [2], we can demonstrate the following convergence theorem for the PMHSS iteration method for solving (1.1) [2,3,12].

Theorem 2.1. Let A = W + iT and B = U + iV, where W, T ∈ R^{m×m} and U, V ∈ R^{n×n} are real symmetric positive semi-definite matrices, and let α be a positive constant. Write 𝒜 = H + iD with

    H = I ⊗ W + U^T ⊗ I,  D = I ⊗ T + V^T ⊗ I,  K₁(α) = I ⊗ (αV₁) + (αV₂)^T ⊗ I = αK,

where V₁ ∈ R^{m×m} and V₂ ∈ R^{n×n} are prescribed symmetric positive definite matrices, and let

    M(α) = (αK + D)^{-1}(αK + iH)(αK + H)^{-1}(αK − iD).

Then the following statements hold true:

(i) 𝒜 is nonsingular if and only if null(H) ∩ null(D) = {0};

(ii) if null(H) ∩ null(D) = {0}, the spectral radius of the PMHSS iteration matrix satisfies ρ(M(α)) ≤ σ(α), with

    σ(α) = (√(1 + α²)/(1 + α)) max_{μ^(α) ∈ sp(Z̃^(α))} √((1 + (μ^(α))²)/2),

where Z̃^(α) = (αH + D)^{-1}(αH − D). Therefore, it holds that

    ρ(M(α)) ≤ σ(α) ≤ √(1 + α²)/(1 + α) < 1, for all α > 0,

i.e., the PMHSS iteration converges unconditionally to the unique solution of (1.1) for any initial guess.

Proof. Note that the matrix 𝒜 is nonsingular if and only if the matrix 𝒜̂ = (1 − i)𝒜 is nonsingular. Evidently, 𝒜̂ = (H + D) − i(H − D), with its Hermitian part given by H + D. Hence, when both matrices H and D are symmetric positive semi-definite, we know that 𝒜̂ is nonsingular if and only if null(H) ∩ null(D) = {0}. This shows the validity of (i).

We now turn to the proof of (ii). For all α > 0, the facts that H and D are symmetric positive semi-definite and that null(H) ∩ null(D) = {0} readily imply that the matrix αH + D is symmetric positive definite. Therefore, by straightforward computations we have

    ρ(M(α)) = (√(α² + 1)/(α + 1)) ρ((αH + D)^{-1}(αH − iD))
            = (√(α² + 1)/(2(α + 1))) ρ((1 − i)(αH + D)^{-1}(αH − iD)(1 + i))
            = (√(α² + 1)/(2(α + 1))) ρ((1 − i)(αH + D)^{-1}[(αH + D) + i(αH − D)])
            = (√(α² + 1)/(2(α + 1))) ρ((1 − i)(I + iZ̃^(α)))
            = (√(α² + 1)/(2(α + 1))) max_{μ^(α) ∈ sp(Z̃^(α))} √2 √(1 + (μ^(α))²)
            = (√(α² + 1)/(α + 1)) max_{μ^(α) ∈ sp(Z̃^(α))} √((1 + (μ^(α))²)/2).

It easily follows from μ^(α) ∈ [−1, 1] that (1/2)(1 + (μ^(α))²) ≤ 1. Therefore,

    σ(α) ≤ √(α² + 1)/(α + 1) < 1.

This completes the proof.
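The bounds (2.8)-(2.9) and Theorem 2.1 are easy to confirm numerically. The following sketch (ours; random data of hypothetical small size) computes the spectrum of K^{-1}H as a generalized symmetric-definite eigenproblem, forms α* and the bound σ(α*), and compares them with the actual spectral radius ρ(M(α*)):

    import numpy as np
    from numpy.linalg import inv, eigvals
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    def rand_spd(k):
        G = rng.standard_normal((k, k))
        return G @ G.T + k * np.eye(k)

    m, n = 5, 4
    W, T, V1 = rand_spd(m), rand_spd(m), rand_spd(m)
    U, Vm, V2 = rand_spd(n), rand_spd(n), rand_spd(n)   # Vm plays the role of V

    Im, In = np.eye(m), np.eye(n)
    H = np.kron(In, W) + np.kron(U.T, Im)
    D = np.kron(In, T) + np.kron(Vm.T, Im)
    K = np.kron(In, V1) + np.kron(V2.T, Im)

    lam = eigh(H, K, eigvals_only=True)        # spectrum of K^{-1} H
    alpha = np.sqrt(lam.min() * lam.max())     # the choice (2.9)
    kappa = lam.max() / lam.min()
    sigma = np.sqrt(kappa + 1) / (np.sqrt(kappa) + 1)

    M = inv(alpha*K + D) @ (alpha*K + 1j*H) @ inv(alpha*K + H) @ (alpha*K - 1j*D)
    rho = np.abs(eigvals(M)).max()
    print(rho <= sigma + 1e-12, rho, sigma)    # rho(M(alpha*)) never exceeds sigma(alpha*)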

3. The Spectral Properties of the Preconditioned Matrix

Building on the convergence properties above, we can further study the spectral properties of the PMHSS preconditioned matrix F₁(α)^{-1}𝒜 [2,3] under proper conditions. The result is stated in the following theorem.

Theorem 3.1. Let 𝒜 = H + iD ∈ C^{mn×mn}, with H ∈ R^{mn×mn} and D ∈ R^{mn×mn} symmetric positive semi-definite matrices satisfying the condition in Theorem 2.1, and let α be a positive constant. Define

    Z^(α) = (αH + D)^{-1/2}(H − αD)(αH + D)^{-1/2}.

Denote by μ₁^(α), ..., μ_{mn}^(α) the eigenvalues of the symmetric matrix Z^(α) ∈ R^{mn×mn}, and by q₁^(α), ..., q_{mn}^(α) the corresponding orthonormal eigenvectors. Then the eigenvalues of the matrix F₁(α)^{-1}𝒜 are given by

    λ_j^(α) = α[(α + 1) − i(α − 1)](1 − iμ_j^(α)) / ((α + 1)(α² + 1)), j = 1, 2, ..., mn,

and the corresponding eigenvectors are given by

    x_j^(α) = (αH + D)^{-1/2} q_j^(α), j = 1, ..., mn.

Therefore, it holds that F₁(α)^{-1}𝒜 = X^(α) Λ^(α) (X^(α))^{-1}, where

    X^(α) = (x₁^(α), ..., x_{mn}^(α)) ∈ R^{mn×mn},  Λ^(α) = diag(λ₁^(α), ..., λ_{mn}^(α)) ∈ C^{mn×mn},

with κ₂(X^(α)) = √(κ₂(αH + D)).

Proof. Define the matrices Q^(α) = (q₁^(α), ..., q_{mn}^(α)) ∈ R^{mn×mn} and Ξ^(α) = diag(μ₁^(α), ..., μ_{mn}^(α)) ∈ R^{mn×mn}. Then it holds that Z^(α) = Q^(α) Ξ^(α) (Q^(α))^T. Here and in the sequel, (·)^T denotes the transpose of a real matrix or vector. By straightforward computations we have

    F₁(α)^{-1}𝒜 = (2α/((α + 1)(1 + i))) (αH + D)^{-1}(H + iD)
               = (2α/((α + 1)(1 + i)(α − i))) (αH + D)^{-1}((αH + D) − i(H − αD))
               = (2α/((α + 1)((α + 1) + i(α − 1)))) (I − i(αH + D)^{-1}(H − αD))
               = (2α/((α + 1)((α + 1) + i(α − 1)))) (αH + D)^{-1/2}(I − iZ^(α))(αH + D)^{1/2}
               = (2α/((α + 1)((α + 1) + i(α − 1)))) (αH + D)^{-1/2} Q^(α)(I − iΞ^(α))(Q^(α))^T (αH + D)^{1/2}
               = X^(α) Λ^(α) (X^(α))^{-1}.

Hence, the eigenvalues of the matrix F₁(α)^{-1}𝒜 are given by λ_j^(α) = α[(α + 1) − i(α − 1)](1 − iμ_j^(α))/((α + 1)(α² + 1)), j = 1, ..., mn, and the corresponding eigenvectors by x_j^(α) = (αH + D)^{-1/2} q_j^(α), j = 1, ..., mn. Besides, as X^(α) = (αH + D)^{-1/2} Q^(α) and Q^(α) ∈ R^{mn×mn} is orthogonal, we obtain

    ‖X^(α)‖₂ = ‖(αH + D)^{-1/2} Q^(α)‖₂ = ‖(αH + D)^{-1/2}‖₂,
    ‖(X^(α))^{-1}‖₂ = ‖(Q^(α))^T (αH + D)^{1/2}‖₂ = ‖(αH + D)^{1/2}‖₂.

It then follows that

    κ₂(X^(α)) = ‖X^(α)‖₂ ‖(X^(α))^{-1}‖₂ = ‖(αH + D)^{-1/2}‖₂ ‖(αH + D)^{1/2}‖₂ = √(κ₂(αH + D)).

This completes the proof of the theorem.

Remark 3.1. The previous result requires some comments. Because of the non-uniqueness of the eigenvectors, the condition number κ₂(X^(α)) of the eigenvector matrix is also not uniquely defined. One possibility is to replace it with the infimum over all possible choices of the eigenvector matrix X^(α). However, this quantity is not easily computable. As an approximation, we will use instead the condition number of the matrix formed with the normalized eigenvectors returned by the eig function in Matlab. When the eigenvectors are normalized in the 2-norm, X^(α) is replaced by X̂^(α) = X^(α) (D^(α))^{-1}, with

    D^(α) = diag(‖x₁^(α)‖₂, ‖x₂^(α)‖₂, ..., ‖x_{mn}^(α)‖₂)
          = diag((q₁^(α))^T(αH + D)^{-1}q₁^(α), (q₂^(α))^T(αH + D)^{-1}q₂^(α), ..., (q_{mn}^(α))^T(αH + D)^{-1}q_{mn}^(α))^{1/2},

leading to

    κ₂(X̂^(α)) = κ₂(D^(α) (Q^(α))^T (αH + D)^{1/2}).

In the special case when the coefficient matrix 𝒜 = H + iD ∈ C^{mn×mn} is normal, we can easily see that the PMHSS-preconditioned matrix F₁(α)^{-1}𝒜 is also normal. In this case the condition number of the normalized eigenvector matrix X̂^(α) is of course exactly equal to one [2]. This property is formally stated in the following theorem.

Theorem 3.2. Let the conditions of Theorem 3.1 be satisfied, and let the eigenvector matrix X^(α) be normalized as in Remark 3.1, with X̂^(α) the normalized matrix. Assume that UV = VU and WT = TW. Then it holds that κ₂(X̂^(α)) = 1. Moreover, the orthonormal eigenvectors q₁^(α), ..., q_{mn}^(α) of the matrix Z^(α) ∈ R^{mn×mn} are independent of the positive parameter α.

Proof. Because UV = VU and WT = TW, one can verify that 𝒜 = H + iD ∈ C^{mn×mn} is normal and that the matrices H, D ∈ R^{mn×mn} commute, i.e., HD = DH. Hence, there exists an orthogonal matrix Q ∈ R^{mn×mn} such that H = QΩQ^T and D = QΓQ^T, where Ω = diag(ω₁, ..., ω_{mn}) and Γ = diag(γ₁, ..., γ_{mn}) are diagonal matrices with ω_j, γ_j ≥ 0, j = 1, ..., mn. It follows that

    αH + D = Q(αΩ + Γ)Q^T,  H − αD = Q(Ω − αΓ)Q^T.

As

    (αH + D)^{-1/2} = Q(αΩ + Γ)^{-1/2} Q^T,

we obtain

    Z^(α) = (αH + D)^{-1/2}(H − αD)(αH + D)^{-1/2} = Q Ξ^(α) Q^T, with Ξ^(α) = (αΩ + Γ)^{-1}(Ω − αΓ).

Therefore, the eigenvectors of the matrix Z^(α) are given by the columns of the orthogonal matrix Q ∈ R^{mn×mn}, say q₁, q₂, ..., q_{mn}, which are independent of the positive parameter α. In addition, by straightforward computations we find

    q_j^T (αH + D)^{-1} q_j = q_j^T Q(αΩ + Γ)^{-1} Q^T q_j = e_j^T (αΩ + Γ)^{-1} e_j = (αω_j + γ_j)^{-1},

where e_j denotes the j-th unit vector in R^{mn}. Therefore, it holds that D^(α) = (αΩ + Γ)^{-1/2} and

    D^(α) Q^T (αH + D)^{1/2} (D^(α) Q^T (αH + D)^{1/2})^T = D^(α) Q^T (αH + D) Q D^(α) = I,

which immediately results in κ₂(X̂^(α)) = 1.
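Theorem 3.1 and the clustering result after (2.9) are again easy to visualize numerically. The sketch below (ours; generic dense test matrices of hypothetical size N stand in for the Kronecker-structured H and D) checks that every eigenvalue of F₁(α)^{-1}𝒜 lies in the disk centered at 1 with radius √(α² + 1)/(α + 1):

    import numpy as np
    from numpy.linalg import eigvals, solve

    rng = np.random.default_rng(2)
    N = 30
    G1 = rng.standard_normal((N, N)); H = G1 @ G1.T + N * np.eye(N)   # SPD
    G2 = rng.standard_normal((N, N)); D = G2 @ G2.T                   # symmetric PSD

    calA = H + 1j * D
    alpha = 1.0
    F1 = ((alpha + 1) * (1 + 1j) / (2 * alpha)) * (alpha * H + D)     # (2.7a)

    lam = eigvals(solve(F1, calA))
    radius = np.sqrt(alpha**2 + 1) / (alpha + 1)                      # = sqrt(2)/2 at alpha = 1
    print(np.abs(lam - 1).max() <= radius + 1e-10)                    # True: clustering holds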

Remark 3.2. If α = 1, then Theorem 2.1(ii) leads to σ(1) ≤ √2/2. This shows that when F := (1 + i)(H + D) is used to precondition the matrix 𝒜 ∈ C^{mn×mn} of (1.2), the eigenvalues of the preconditioned matrix F^{-1}𝒜 are clustered within the complex disk centered at 1 with radius √2/2. Moreover, Theorem 3.1 indicates that the matrix F^{-1}𝒜 is diagonalizable, with the matrix X^(1), formed by its eigenvectors, satisfying κ₂(X^(1)) = √(κ₂(H + D)). Hence, the preconditioned Krylov subspace iteration methods, when employed to solve the complex symmetric linear system (1.2), can be expected to converge rapidly, at least when κ₂(H + D) is not too large. As the previous theorem shows, this is guaranteed in the normal case. This in turn indicates that the iteration (2.4) can be expected to converge rapidly when α = 1.

Remark 3.3. (see [4]) In some situations, it is not advisable to apply the PMHSS iteration method directly to the expanded standard linear system (1.2) in order to obtain an approximate solution of the continuous Sylvester equation (1.1), for the following reasons. In the first place, for (1.1) we only need to deal with matrices of order m or n, whereas for the expanded linear system (1.2) we have to deal with a matrix of order mn. In the second place, the linear system (1.2) may not inherit some useful properties of the matrices A and B. Thirdly, a solution matrix X reconstructed from a solution vector x obtained by solving the expanded linear system (1.2) may lose certain important and useful properties possessed by the original solution matrix X* of the continuous Sylvester equation (1.1). For more details, we refer to Bai's paper [4].

4. The Inexact PMHSS Iteration

In the PMHSS process for (1.1), the two half-steps comprising each iteration require the exact solution of two Sylvester equations. This may be very costly and impractical in actual implementations, particularly when the sizes of the matrices involved are very large. To further improve the computational efficiency of the PMHSS iteration approach, we can solve the two sub-problems in (2.4) inexactly by employing certain effective iteration methods, e.g., Smith's method [17,38], the (block) SOR method [31], the ADI method [25,32] or Krylov subspace methods [28]; see [4,25] and the references therein. The convergence of the resulting method can be established in a fashion analogous to that of the inexact MHSS (IMHSS) iteration method, by making use of Theorem 3.1 in [12].

The inexact PMHSS iteration method. Given an initial guess X^(0) ∈ C^{m×n}, for k = 0, 1, 2, ... until {X^(k)}_{k=0}^∞ converges:

1. Approximate the solution of

    (αV₁ + W)Z^(k) + Z^(k)(αV₂ + U) = R^(k), with R^(k) = F − AX^(k) − X^(k)B,

by iterating until Z^(k) is such that the residual

    P^(k) = R^(k) − ((αV₁ + W)Z^(k) + Z^(k)(αV₂ + U))

satisfies

    ‖P^(k)‖_F ≤ ε_k ‖R^(k)‖_F,

and then compute X^(k+1/2) = X^(k) + Z^(k);

2. Approximate the solution of

    (αV₁ + T)Z^(k+1/2) + Z^(k+1/2)(αV₂ + V) = R^(k+1/2), with R^(k+1/2) = −iF + iAX^(k+1/2) + iX^(k+1/2)B,

by iterating until Z^(k+1/2) is such that the residual

    Q^(k+1/2) = R^(k+1/2) − ((αV₁ + T)Z^(k+1/2) + Z^(k+1/2)(αV₂ + V))

satisfies

    ‖Q^(k+1/2)‖_F ≤ η_k ‖R^(k+1/2)‖_F,

and then compute X^(k+1) = X^(k+1/2) + Z^(k+1/2).
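A skeleton of this procedure is sketched below in Python (our illustration; the routine inner_solve is a hypothetical placeholder for any inner iteration, e.g., Smith's method, that returns Z with ‖R − (PZ + ZQ)‖_F ≤ rtol ‖R‖_F):

    import numpy as np

    def ipmhss(F, W, T, U, V, V1, V2, alpha, inner_solve,
               eps=0.01, eta=0.01, tol=1e-6, max_it=500):
        """Inexact PMHSS iteration for (W + iT) X + X (U + iV) = F."""
        A, B = W + 1j * T, U + 1j * V
        X = np.zeros(F.shape, dtype=complex)
        r0 = np.linalg.norm(F, "fro")
        for _ in range(max_it):
            R = F - A @ X - X @ B                       # outer residual R^(k)
            if np.linalg.norm(R, "fro") <= tol * r0:
                break
            Z = inner_solve(alpha*V1 + W, alpha*V2 + U, R, eps)
            Xh = X + Z                                  # X^(k+1/2)
            Rh = -1j * (F - A @ Xh - Xh @ B)            # R^(k+1/2)
            Zh = inner_solve(alpha*V1 + T, alpha*V2 + V, Rh, eta)
            X = Xh + Zh
        return X

With an exact inner solver (rtol ignored) this reduces to the PMHSS iteration of Section 2, consistent with the remark after Theorem 4.1 that ε_k = η_k = 0 recovers the PMHSS convergence rate.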

Theorem 4.1. Let the conditions of Theorem 2.1 be satisfied. Let {X^(k)}_{k=0}^∞ ⊂ C^{m×n} be the iteration sequence defined by X^(k+1/2) = X^(k) + Z^(k), with P^(k) = R^(k) − ((αV₁ + W)Z^(k) + Z^(k)(αV₂ + U)) satisfying ‖P^(k)‖_F ≤ ε_k ‖R^(k)‖_F, where R^(k) = F − AX^(k) − X^(k)B, and by X^(k+1) = X^(k+1/2) + Z^(k+1/2), with Q^(k+1/2) = R^(k+1/2) − ((αV₁ + T)Z^(k+1/2) + Z^(k+1/2)(αV₂ + V)) satisfying ‖Q^(k+1/2)‖_F ≤ η_k ‖R^(k+1/2)‖_F, where R^(k+1/2) = −iF + iAX^(k+1/2) + iX^(k+1/2)B. Then {X^(k)} satisfies

    (αV₁ + W)X^(k+1/2) + X^(k+1/2)(αV₂ + U) = (αV₁ − iT)X^(k) + X^(k)(αV₂ − iV) + F − P^(k),
    (αV₁ + T)X^(k+1) + X^(k+1)(αV₂ + V) = (αV₁ + iW)X^(k+1/2) + X^(k+1/2)(αV₂ + iU) − iF − Q^(k+1/2).    (4.1)

Moreover, if X* ∈ C^{m×n} is the exact solution of the continuous Sylvester equation (1.1), then it holds that

    ‖X^(k+1) − X*‖_D ≤ (σ(α) + θρη_k)(1 + θε_k) ‖X^(k) − X*‖_D,    (4.2)

where the norm ‖·‖_D is defined as ‖Y‖_D = ‖(αV₁ + T)Y + Y(αV₂ + V)‖_F for any matrix Y ∈ C^{m×n}, and the constants ρ and θ are given by

    ρ = ‖(αK + D)(αK + H)^{-1}‖₂,  θ = ‖𝒜(αK + D)^{-1}‖₂,

where 𝒜 = H + iD, H = I ⊗ W + U^T ⊗ I, D = I ⊗ T + V^T ⊗ I and K = I ⊗ V₁ + V₂^T ⊗ I. In particular, if

    (σ(α) + θρη_max)(1 + θε_max) < 1,    (4.3)

then the iteration sequence {X^(k)}_{k=0}^∞ ⊂ C^{m×n} converges to X* ∈ C^{m×n}, where ε_max = max_k ε_k and η_max = max_k η_k.

The proof is similar to that of Theorem 3.1 in [12], with technical modifications; see the Appendix. The analysis follows the HSS iteration method and theory [4,5].

We remark that if the two sub-problems in (2.4) can be solved exactly in some applications, the corresponding quantities {ε_k} and {η_k}, and hence ε_max and η_max, can be set to zero; it then follows that the convergence rate of the IPMHSS iteration reduces to that of the PMHSS iteration. In general, Theorem 4.1 shows that, in order to guarantee the convergence of the IPMHSS iteration, it is not necessary for {ε_k} and {η_k} to approach zero as k increases [4,5,10]. All we need is that condition (4.3) be satisfied. Therefore, in actual applications we should choose the inner iteration tolerances {ε_k} and {η_k} such that the computational cost of the IPMHSS iteration method is minimized and the original convergence rate of the PMHSS iteration is asymptotically recovered. The following theorem presents one possible way of choosing the tolerances {ε_k} and {η_k} such that the original convergence rate of the two-step splitting iterative scheme is asymptotically recovered.

Theorem 4.2. Let the conditions of Theorem 2.1 be satisfied. Suppose that {τ₁(k)} and {τ₂(k)} are nondecreasing positive sequences satisfying τ₁(k) ≥ 1, τ₂(k) ≥ 1 and lim_{k→∞} τ₁(k) = lim_{k→∞} τ₂(k) = +∞, and that δ₁ and δ₂ are real constants in the interval (0, 1) satisfying

    ε_k ≤ c₁δ₁^{τ₁(k)} and η_k ≤ c₂δ₂^{τ₂(k)}, k = 0, 1, 2, ...,    (4.4)

where c₁ and c₂ are positive constants. Then it holds that

    ‖X^(k+1) − X*‖_D ≤ (√(σ(α)) + ωθδ^{τ(k)})² ‖X^(k) − X*‖_D,

where ρ = ‖(αK + D)(αK + H)^{-1}‖₂ and θ = ‖𝒜(αK + D)^{-1}‖₂, and τ(k), δ and ω are defined by

    τ(k) = min{τ₁(k), τ₂(k)},  δ = max{δ₁, δ₂},  ω = max{ √(ρc₁c₂), (c₁σ(α) + c₂ρ)/(2√(σ(α))) }.

In particular, we have

    limsup_{k→∞} ‖X^(k+1) − X*‖_D / ‖X^(k) − X*‖_D ≤ σ(α),

i.e., the convergence rate of the IPMHSS iteration method is asymptotically the same as that of the PMHSS iteration method.

Proof. The conclusion follows directly from Theorem 3.2 in [4,12] or Theorem 3.3 in [5]; see also [10,20,21].

Of course, besides (4.4) there may be other rules for which {ε_k} and {η_k} approach zero and the asymptotic convergence factor of the IPMHSS iteration tends to that of the PMHSS iteration [4,5,10].

5. Numerical Examples

In this section, we use several examples to further examine the effectiveness, and to show the advantage, of the PMHSS and IPMHSS methods over the HSS and IHSS methods [4,5,10], the MHSS and IMHSS methods [1,2,12], and the SOR method [31]. All runs are performed in MATLAB 2014a on an Intel Core i5 (4 GB RAM) Windows 7 system. All iterations in this section are started from the zero matrix and are terminated when the current iterate satisfies ‖R^(k)‖_F / ‖R^(0)‖_F ≤ 10^{-6}, where R^(k) = F − AX^(k) − X^(k)B is the residual of the k-th HSS-based iterate.

Table 5.1: Numerical results for different splitting iteration methods, Example 5.1. [The table reports α_exp, IT and CPU for the HSS, IHSS, MHSS, IMHSS, PMHSS and IPMHSS methods, applied to (1.1) and to (1.2), for increasing problem size m; the numerical entries are not recoverable from this copy.]

The continuous Sylvester equations are solved by the HSS, IHSS [4,5,10], MHSS, IMHSS [1,12], PMHSS, IPMHSS and SOR [31] methods. We also solve the continuous Sylvester equation (1.1) by applying the HSS [4,5], MHSS [1,12] and PMHSS [2,3] iteration methods to the standard system of linear equations (1.2). The number of iteration steps ("IT"), the computing time in seconds ("CPU") and the experimentally found optimal parameter α_exp (with β_exp = α_exp) are listed in the tables. For the IHSS, IMHSS and IPMHSS iterations, we report the number of inner iterations in the first half-steps (denoted "IT-1iner") and in the second half-steps (denoted "IT-2iner"). In the tables, "-" means that the computing time in seconds exceeds a prescribed limit. In particular, we set the iteration parameter α_exp to 1 for the matrix form of the PMHSS iteration, because the iteration counts and CPU times are then almost identical to those obtained with the experimentally found optimal parameter α_exp. In the IHSS, IMHSS and IPMHSS iteration methods, we set ε_k = η_k = 0.01, k = 0, 1, 2, ..., and use Smith's method [17,38] as the inner iteration scheme. In the PMHSS and IPMHSS iteration methods for the continuous Sylvester equation (1.1), we choose V₁ = W and V₂ = U.

Example 5.1. (See [35]) Let us consider the complex Helmholtz equation

    −Δu + σ₁u + iσ₂u = f,

where σ₁ and σ₂ are real coefficient functions and u satisfies Dirichlet boundary conditions in D = [0, 1] × [0, 1].

Table 5.2: Numerical results for the HSS, MHSS and PMHSS iteration methods, Example 5.2. [The table reports α_exp, IT and CPU for the three methods, applied to (1.1) and to (1.2), for increasing problem size m; the numerical entries are not recoverable from this copy.]

The above equation describes the propagation of damped time-harmonic waves. We take H to be the five-point centered difference matrix approximating the negative Laplacian operator on a uniform mesh with mesh-size h = 1/(m + 1). The matrix H ∈ R^{n×n} possesses the tensor-product form H = B_m ⊗ I + I ⊗ B_m, with B_m = h^{-2} tridiag(−1, 2, −1) ∈ R^{m×m}. Hence, H is an n×n block-tridiagonal matrix, with n = m². This leads to the complex symmetric linear system of the form

    [(H + σ₁I) + iσ₂I] x = b.    (5.1)

In addition, we set σ₁ = 100, σ₂ = 1, and take the right-hand side vector b = (1 + i)𝒜1, with 1 the vector of all entries equal to 1. As before, we normalize the system by multiplying both sides by h². The Sylvester equation form corresponding to (5.1) is

    (B_m + (σ₁/2)I + (i/2)σ₂I) X + X (B_m + (σ₁/2)I + (i/2)σ₂I) = F,

where F is reconstructed from b. In the vector form of the PMHSS iteration method, we choose H + σ₁I as the corresponding prescribed symmetric positive definite matrix.

The experimentally found optimal parameters α_exp, β_exp and the numerical results for Example 5.1 are listed in Table 5.1. Comparing the results in Table 5.1, we see that the matrix form of the PMHSS iteration approach is more effective than its vector form, as it requires much less computing time; the same phenomenon is observed for the HSS and MHSS methods. We also observe that the matrix forms of the IHSS, IMHSS and IPMHSS methods outperform the matrix forms of the HSS, MHSS and PMHSS methods.
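For reference, the following sketch (ours; a hypothetical grid size m = 32 and dense matrices for brevity, whereas the experiments exploit sparsity) assembles the Sylvester form of (5.1) together with its right-hand side:

    import numpy as np

    m = 32
    h = 1.0 / (m + 1)
    sigma1, sigma2 = 100.0, 1.0

    # B_m = h^{-2} tridiag(-1, 2, -1): the 1-D negative-Laplacian stencil.
    Bm = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

    # Coefficient matrix of the Sylvester form of (5.1); here A equals B.
    A = Bm + 0.5 * sigma1 * np.eye(m) + 0.5j * sigma2 * np.eye(m)

    # b = (1 + i) * calA @ ones, folded column-wise into the matrix F.
    calA = np.kron(np.eye(m), A) + np.kron(A.T, np.eye(m))
    b = (1 + 1j) * (calA @ np.ones(m * m))
    F = b.reshape((m, m), order="F")

    # Normalization used in the experiments: multiply both sides by h^2.
    A, F = h**2 * A, h**2 * F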

Table 5.3: Numerical results for the IHSS, IMHSS, IPMHSS and SOR iteration methods, Example 5.2. [The table reports α_exp, IT, IT-1iner, IT-2iner and CPU for the inexact methods, and ω, IT and CPU for SOR, for increasing problem size m; the numerical entries are not recoverable from this copy.]

Example 5.2. (See [1,2]) The system of linear equations (1.2) is of the form

    [ (K + ((3 − √3)/τ) I) + i (K + ((3 + √3)/τ) I) ] x = b,    (5.2)

where τ is the time step-size and K is the five-point centered difference matrix approximating the negative Laplacian operator L = −Δ with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square [0, 1] × [0, 1] with mesh-size h = 1/(m + 1). This complex symmetric system of linear equations arises in centered difference discretizations of the R22-Padé approximations in the time integration of parabolic partial differential equations; for more details, we refer to [39]. The matrix K ∈ R^{n×n} possesses the tensor-product form K = I ⊗ B_m + B_m ⊗ I, with B_m = h^{-2} tridiag(−1, 2, −1) ∈ R^{m×m}. Hence, K is an n×n block-tridiagonal matrix, with n = m². We take the right-hand side vector b with its j-th entry given by

    [b]_j = (1 − i)j / (τ(j + 1)²), j = 1, 2, ..., n.

Furthermore, we normalize the coefficient matrix and the right-hand side by multiplying both by h². The Sylvester equation form corresponding to (5.2) is

    (B_m + ((3 − √3)/τ) I + iB_m) X + X (B_m + i(B_m + ((3 + √3)/τ) I)) = F,

where F is reconstructed from b. In our tests we take τ = h. In the vector form of the PMHSS iteration method, we choose K + ((3 − √3)/τ) I as the corresponding prescribed symmetric positive definite matrix.

We report the numerical results for Example 5.2 in Tables 5.2 and 5.3. Comparing the results in Table 5.2, we observe that the matrix forms of the HSS, MHSS and PMHSS iterations are more effective than their respective vector forms, as they require less computing time; moreover, the matrix form of the PMHSS iteration can solve much larger problems than its vector form. In Table 5.3 we give the numerical results of the IHSS, IMHSS, IPMHSS and SOR methods. The IPMHSS method exhibits better numerical behavior than the IMHSS and IHSS methods, and it costs less computing time than the SOR iteration method when the problem size m becomes large. Evidently, the computing time can be significantly reduced by the IHSS, IMHSS and IPMHSS methods.

Appendix

The proof of Theorem 4.1 is given in this section.

Proof. Performing the vec operation on both sides of (4.1), we obtain the equalities

    (I ⊗ (αV₁ + W) + (αV₂ + U)^T ⊗ I) vec(X^(k+1/2))
        = (I ⊗ (αV₁ − iT) + (αV₂ − iV)^T ⊗ I) vec(X^(k)) + vec(F) − vec(P^(k)),
    (I ⊗ (αV₁ + T) + (αV₂ + V)^T ⊗ I) vec(X^(k+1))
        = (I ⊗ (αV₁ + iW) + (αV₂ + iU)^T ⊗ I) vec(X^(k+1/2)) − i vec(F) − vec(Q^(k+1/2)),

which can be arranged equivalently as

    (αK + H) vec(X^(k+1/2)) = (αK − iD) vec(X^(k)) + vec(F) − vec(P^(k)),
    (αK + D) vec(X^(k+1)) = (αK + iH) vec(X^(k+1/2)) − i vec(F) − vec(Q^(k+1/2)),

where H = I ⊗ W + U^T ⊗ I, D = I ⊗ T + V^T ⊗ I and K = I ⊗ V₁ + V₂^T ⊗ I. Then we get

    vec(X^(k+1/2)) = (αK + H)^{-1}((αK − iD) vec(X^(k)) + vec(F) − vec(P^(k))),    (6.1)
    vec(X^(k+1)) = (αK + D)^{-1}((αK + iH) vec(X^(k+1/2)) − i vec(F) − vec(Q^(k+1/2))).    (6.2)

Therefore, we have

    vec(X^(k+1)) = M(α) vec(X^(k)) + G(α) vec(F) − (αK + D)^{-1} vec(Q^(k+1/2))
                 − (αK + D)^{-1}(αK + iH)(αK + H)^{-1} vec(P^(k)),    (6.3)

where the iteration matrices M(α) and G(α) are defined in (2.5) and (2.6), respectively. If X* ∈ C^{m×n} is the exact solution of (1.1), then it holds that

    vec(X*) = (αK + H)^{-1}(αK − iD) vec(X*) + (αK + H)^{-1} vec(F),    (6.4)
    vec(X*) = M(α) vec(X*) + G(α) vec(F).    (6.5)

By subtracting (6.4) from (6.1) and (6.5) from (6.3), we get

    vec(X^(k+1/2) − X*) = (αK + H)^{-1}(αK − iD) vec(X^(k) − X*) − (αK + H)^{-1} vec(P^(k)),    (6.6)
    vec(X^(k+1) − X*) = M(α) vec(X^(k) − X*) − (αK + D)^{-1}(αK + iH)(αK + H)^{-1} vec(P^(k))
                      − (αK + D)^{-1} vec(Q^(k+1/2)).    (6.7)

By the definition of the norm ‖·‖_D, for any matrix Y ∈ C^{m×n},

    ‖Y‖_D = ‖(αV₁ + T)Y + Y(αV₂ + V)‖_F
          = ‖(I ⊗ (αV₁ + T) + (αV₂ + V)^T ⊗ I) vec(Y)‖₂
          = ‖(αK + D) vec(Y)‖₂ =: |||vec(Y)|||.

Noticing that

    ‖vec(R^(k))‖₂ = ‖R^(k)‖_F = ‖F − AX^(k) − X^(k)B‖_F
                  = ‖(AX* + X*B) − (AX^(k) + X^(k)B)‖_F
                  = ‖(I ⊗ A + B^T ⊗ I) vec(X^(k) − X*)‖₂
                  = ‖𝒜(αK + D)^{-1}(αK + D) vec(X^(k) − X*)‖₂
                  ≤ ‖𝒜(αK + D)^{-1}‖₂ ‖(αK + D) vec(X^(k) − X*)‖₂
                  = θ ‖X^(k) − X*‖_D,

where θ = ‖𝒜(αK + D)^{-1}‖₂, we immediately deduce that

    ‖vec(P^(k))‖₂ = ‖P^(k)‖_F ≤ ε_k ‖R^(k)‖_F ≤ ε_k θ ‖X^(k) − X*‖_D.

Taking the norm |||·||| on both sides of the identity (6.6), we have

    |||vec(X^(k+1/2) − X*)||| = |||(αK + H)^{-1}(αK − iD) vec(X^(k) − X*) − (αK + H)^{-1} vec(P^(k))|||
        ≤ |||(αK + H)^{-1}(αK − iD) vec(X^(k) − X*)||| + |||(αK + H)^{-1} vec(P^(k))|||
        = ‖(αK + D)(αK + H)^{-1}(αK − iD) vec(X^(k) − X*)‖₂ + ‖(αK + D)(αK + H)^{-1} vec(P^(k))‖₂
        ≤ ‖(αK + D)(αK + H)^{-1}‖₂ ‖(αK − iD)(αK + D)^{-1}‖₂ ‖(αK + D) vec(X^(k) − X*)‖₂
          + ‖(αK + D)(αK + H)^{-1}‖₂ ‖vec(P^(k))‖₂.

From (2.8), we know that ‖(αK − iD)(αK + D)^{-1}‖₂ < 1. It follows that

    |||vec(X^(k+1/2) − X*)||| ≤ ρ ‖X^(k) − X*‖_D + ρ ‖vec(P^(k))‖₂ ≤ ρ (1 + ε_k θ) ‖X^(k) − X*‖_D,    (6.8)

where ρ = ‖(αK + D)(αK + H)^{-1}‖₂. From (6.8), we obtain

    ‖vec(R^(k+1/2))‖₂ = ‖R^(k+1/2)‖_F = ‖−iF + iAX^(k+1/2) + iX^(k+1/2)B‖_F
                      = ‖(AX* + X*B) − (AX^(k+1/2) + X^(k+1/2)B)‖_F
                      = ‖(I ⊗ A + B^T ⊗ I) vec(X^(k+1/2) − X*)‖₂
                      = ‖𝒜(αK + D)^{-1}(αK + D) vec(X^(k+1/2) − X*)‖₂
                      ≤ ‖𝒜(αK + D)^{-1}‖₂ ‖(αK + D) vec(X^(k+1/2) − X*)‖₂
                      = θ ‖X^(k+1/2) − X*‖_D,

and therefore

    ‖Q^(k+1/2)‖_F ≤ η_k θ ‖X^(k+1/2) − X*‖_D ≤ η_k θ ρ (1 + ε_k θ) ‖X^(k) − X*‖_D.

Evidently, one can see from (2.8) that ‖(αK + iH)(αK + H)^{-1}‖₂ ≤ σ(α); then from the identity (6.7) we obtain

    ‖X^(k+1) − X*‖_D = |||vec(X^(k+1) − X*)|||
        ≤ |||M(α) vec(X^(k) − X*)||| + |||(αK + D)^{-1} vec(Q^(k+1/2))|||
          + |||(αK + D)^{-1}(αK + iH)(αK + H)^{-1} vec(P^(k))|||
        = ‖(αK + D)M(α)(αK + D)^{-1}(αK + D) vec(X^(k) − X*)‖₂ + ‖vec(Q^(k+1/2))‖₂
          + ‖(αK + iH)(αK + H)^{-1} vec(P^(k))‖₂
        ≤ σ(α) ‖X^(k) − X*‖_D + ‖vec(Q^(k+1/2))‖₂ + ‖(αK + iH)(αK + H)^{-1}‖₂ ‖vec(P^(k))‖₂
        ≤ (σ(α) + η_k θ ρ (1 + ε_k θ) + σ(α) ε_k θ) ‖X^(k) − X*‖_D
        = (σ(α) + θρη_k)(1 + θε_k) ‖X^(k) − X*‖_D,

for k = 0, 1, 2, .... This is exactly the estimate we set out to derive.

Acknowledgments. We would like to thank the referees very much for their valuable comments and suggestions. The work was supported by the Chinese Natural Science Foundation, the Innovation Major Project of the Shanghai Municipal Education Commission (13ZZ068), the Key Disciplines of Shanghai Municipality (S30104), and a grant of the Shanghai Science and Technology Commission.

References

[1] Z.-Z. Bai, M. Benzi and F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 87 (2010).
[2] Z.-Z. Bai, M. Benzi and F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 56 (2011).
[3] Z.-Z. Bai, M. Benzi, F. Chen and Z.-Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal., 33 (2013).
[4] Z.-Z. Bai, On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations, J. Comput. Math., 29 (2011).
[5] Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003).
[6] Z.-Z. Bai, G.H. Golub and C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices, Math. Comp., 76 (2007).
[7] Z.-Z. Bai, G.H. Golub and M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl., 14 (2007).
[8] Z.-Z. Bai, G.H. Golub, L.-Z. Lu and J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput., 26 (2005).
[9] Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., 98 (2004).
[10] Z.-Z. Bai, G.H. Golub and M.K. Ng, On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, Linear Algebra Appl., 428 (2008).
[11] Z.-Z. Bai, Rotated block triangular preconditioning based on PMHSS, Sci. China (Ser. A: Math), 56 (2013).

[12] D.-M. Zhou, G.-L. Chen and Q.-Y. Cai, On modified HSS iteration methods for continuous Sylvester equations, Appl. Math. Comput., 263 (2015).
[13] R. Zhou, X. Wang and P. Zhou, A modified HSS iteration method for solving the complex linear matrix equation AXB = C, J. Comput. Math., 34 (2016).
[14] Y.-B. Deng, Z.-Z. Bai and Y.-H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations, Numer. Linear Algebra Appl., 13 (2006).
[15] Z.-Z. Bai, X.-X. Guo and S.-F. Xu, Alternately linearized implicit iteration methods for the minimal nonnegative solutions of nonsymmetric algebraic Riccati equations, Numer. Linear Algebra Appl., 13 (2006).
[16] X.-X. Guo and Z.-Z. Bai, On the minimal nonnegative solution of nonsymmetric algebraic Riccati equation, J. Comput. Math., 23 (2005).
[17] Y.-H. Gao and Z.-Z. Bai, On inexact Newton methods based on doubling iteration scheme for non-symmetric algebraic Riccati equations, Numer. Linear Algebra Appl., 18 (2011).
[18] A.-P. Liao, Z.-Z. Bai and Y. Lei, Best approximate solution of matrix equation AXB + CYD = E, SIAM J. Matrix Anal. Appl., 27 (2005).
[19] D. Calvetti and L. Reichel, Application of ADI iterative methods to the restoration of noisy images, SIAM J. Matrix Anal. Appl., 17 (1996).
[20] X. Wang, W.-W. Li and L.-Z. Mao, On positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equation AX + XB = C, Comput. Math. Appl., 66 (2013).
[21] X. Wang, Y. Li and L. Dai, On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB = C, Comput. Math. Appl., 65 (2013).
[22] R. Zhou, X. Wang and X.-B. Tang, A generalization of the Hermitian and skew-Hermitian splitting iteration method for solving Sylvester equations, Appl. Math. Comput., 271 (2015).
[23] Q.-Q. Zheng and C.-F. Ma, On normal and skew-Hermitian splitting iteration methods for large sparse continuous Sylvester equations, J. Comput. Appl. Math., 268 (2014).
[24] G.H. Golub, S.G. Nash and C.F. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C, IEEE Trans. Automat. Control, 24 (1979).
[25] P. Benner, R.-C. Li and N. Truhar, On the ADI method for Sylvester equations, J. Comput. Appl. Math., 233 (2009).
[26] R.H. Bartels and G.W. Stewart, Solution of the matrix equation AX + XB = C: Algorithm 432, Commun. ACM, 15 (1972).
[27] V. Simoncini, A new iterative method for solving large-scale Lyapunov matrix equations, SIAM J. Sci. Comput., 29 (2007).
[28] D.-Y. Hu and L. Reichel, Krylov-subspace methods for the Sylvester equation, Linear Algebra Appl., 172 (1992).
[29] F. Ding and T.-W. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Automat. Control, 50 (2005).
[30] A. Lu and E.L. Wachspress, Solution of Lyapunov equations by alternating direction implicit iteration, Comput. Math. Appl., 21 (1991).
[31] G. Starke and W. Niethammer, SOR for AX − XB = C, Linear Algebra Appl., 154 (1991).
[32] J.-R. Li and J. White, Low rank solution of Lyapunov equations, SIAM J. Matrix Anal. Appl., 24 (2002).
[33] Q. Niu, X. Wang and L.-Z. Lu, A relaxed gradient based algorithm for solving Sylvester equations, Asian J. Control, 13 (2011).
[34] X. Wang, L. Dai and D. Liao, A modified gradient based algorithm for solving Sylvester equations, Appl. Math. Comput., 218 (2012).

[35] X. Li, A.-L. Yang and Y.-J. Wu, Lopsided PMHSS iteration method for a class of complex symmetric linear systems, Numer. Algorithms, 66 (2014).
[36] Y.-X. Dong and C.-Q. Gu, A class of generalized relaxed PSS preconditioners for generalized saddle point problems, Appl. Math. Lett., 58 (2016).
[37] J.-L. Zhang and C.-Q. Gu, A variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems, BIT, 56 (2016).
[38] R.A. Smith, Matrix equation XA + BX = C, SIAM J. Appl. Math., 16 (1968).
[39] O. Axelsson and A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl., 7 (2000).


More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2007-002 Block preconditioning for saddle point systems with indefinite (1,1) block by Michele Benzi, Jia Liu Mathematics and Computer Science EMORY UNIVERSITY International Journal

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem Kangkang Deng, Zheng Peng Abstract: The main task of genetic regulatory networks is to construct a

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

On matrix equations X ± A X 2 A = I

On matrix equations X ± A X 2 A = I Linear Algebra and its Applications 326 21 27 44 www.elsevier.com/locate/laa On matrix equations X ± A X 2 A = I I.G. Ivanov,V.I.Hasanov,B.V.Minchev Faculty of Mathematics and Informatics, Shoumen University,

More information

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Applied Mathematical Sciences, Vol. 4, 2010, no. 1, 21-30 Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Amer Kaabi Department of Basic Science

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

Available online: 19 Oct To link to this article:

Available online: 19 Oct To link to this article: This article was downloaded by: [Academy of Mathematics and System Sciences] On: 11 April 01, At: 00:11 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 107954

More information

A Continuation Approach to a Quadratic Matrix Equation

A Continuation Approach to a Quadratic Matrix Equation A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS DAVOD KHOJASTEH SALKUYEH and FATEMEH PANJEH ALI BEIK Communicated by the former editorial board Let A : R m n R m n be a symmetric

More information

Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems

Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems Mathematical Problems in Engineering Volume 2013, Article ID 326169, 6 pages http://dx.doi.org/10.1155/2013/326169 Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

GAUSS-SIDEL AND SUCCESSIVE OVER RELAXATION ITERATIVE METHODS FOR SOLVING SYSTEM OF FUZZY SYLVESTER EQUATIONS

GAUSS-SIDEL AND SUCCESSIVE OVER RELAXATION ITERATIVE METHODS FOR SOLVING SYSTEM OF FUZZY SYLVESTER EQUATIONS GAUSS-SIDEL AND SUCCESSIVE OVER RELAXATION ITERATIVE METHODS FOR SOLVING SYSTEM OF FUZZY SYLVESTER EQUATIONS AZIM RIVAZ 1 AND FATEMEH SALARY POUR SHARIF ABAD 2 1,2 DEPARTMENT OF MATHEMATICS, SHAHID BAHONAR

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

An Even Order Symmetric B Tensor is Positive Definite

An Even Order Symmetric B Tensor is Positive Definite An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS

ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS J. Appl. Math. & Informatics Vol. 36(208, No. 5-6, pp. 459-474 https://doi.org/0.437/jami.208.459 ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS DAVOD KHOJASTEH SALKUYEH, MARYAM ABDOLMALEKI, SAEED

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Exploiting off-diagonal rank structures in the solution of linear matrix equations

Exploiting off-diagonal rank structures in the solution of linear matrix equations Stefano Massei Exploiting off-diagonal rank structures in the solution of linear matrix equations Based on joint works with D. Kressner (EPFL), M. Mazza (IPP of Munich), D. Palitta (IDCTS of Magdeburg)

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Journal of Computational and Applied Mathematics. Multigrid method for solving convection-diffusion problems with dominant convection

Journal of Computational and Applied Mathematics. Multigrid method for solving convection-diffusion problems with dominant convection Journal of Computational and Applied Mathematics 226 (2009) 77 83 Contents lists available at ScienceDirect Journal of Computational and Applied Mathematics journal homepage: www.elsevier.com/locate/cam

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

IN this paper, we investigate spectral properties of block

IN this paper, we investigate spectral properties of block On the Eigenvalues Distribution of Preconditioned Block wo-by-two Matrix Mu-Zheng Zhu and a-e Qi Abstract he spectral properties of a class of block matrix are studied, which arise in the numercial solutions

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

arxiv: v1 [math.ra] 11 Aug 2014

arxiv: v1 [math.ra] 11 Aug 2014 Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Delayed Over-Relaxation in Iterative Schemes to Solve Rank Deficient Linear System of (Matrix) Equations

Delayed Over-Relaxation in Iterative Schemes to Solve Rank Deficient Linear System of (Matrix) Equations Filomat 32:9 (2018), 3181 3198 https://doi.org/10.2298/fil1809181a Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Delayed Over-Relaxation

More information

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS REVISTA DE LA UNIÓN MATEMÁTICA ARGENTINA Vol. 53, No. 1, 2012, 61 70 SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS JIAN-LEI LI AND TING-ZHU HUANG Abstract. Recently,

More information

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 4

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 4 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 4 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 12, 2012 Andre Tkacenko

More information

arxiv: v1 [math.na] 26 Dec 2013

arxiv: v1 [math.na] 26 Dec 2013 General constraint preconditioning iteration method for singular saddle-point problems Ai-Li Yang a,, Guo-Feng Zhang a, Yu-Jiang Wu a,b a School of Mathematics and Statistics, Lanzhou University, Lanzhou

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

S.F. Xu (Department of Mathematics, Peking University, Beijing)

S.F. Xu (Department of Mathematics, Peking University, Beijing) Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)

More information

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP.

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques

More information

A Newton-Galerkin-ADI Method for Large-Scale Algebraic Riccati Equations

A Newton-Galerkin-ADI Method for Large-Scale Algebraic Riccati Equations A Newton-Galerkin-ADI Method for Large-Scale Algebraic Riccati Equations Peter Benner Max-Planck-Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

Exponentials of Symmetric Matrices through Tridiagonal Reductions

Exponentials of Symmetric Matrices through Tridiagonal Reductions Exponentials of Symmetric Matrices through Tridiagonal Reductions Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A simple and efficient numerical algorithm

More information