A new iterative method for solving a class of complex symmetric system of linear equations


Davod Hezari · Davod Khojasteh Salkuyeh · Vahid Edalatpour

Received: date / Accepted: date

Abstract We present a new stationary iterative method, called the Scale-Splitting (SCSP) method, and investigate its convergence properties. The SCSP method naturally results in a simple matrix splitting preconditioner, called the SCSP preconditioner, for the original linear system. Some numerical comparisons are presented between the SCSP preconditioner and several available block preconditioners, such as the PGSOR preconditioner (Hezari et al., Numer. Linear Algebra Appl., 2015) and the rotated block triangular preconditioners (Bai, Sci. China Math. 56, 2013), when they are applied to expedite the convergence rate of Krylov subspace iteration methods for solving the original complex system and its block real formulation, respectively. Numerical experiments show that the SCSP preconditioner can compete with the PGSOR preconditioner and is even more effective than the rotated block triangular preconditioners.

Keywords Complex symmetric linear systems · Symmetric positive definite · Preconditioner

Mathematics Subject Classification 65F10 · 65F15

1 Introduction

We consider the nonsingular system of linear equations of the form

\[ Az = b, \quad \text{with} \quad A = W + iT, \quad z = x + iy \quad \text{and} \quad b = f + ig, \tag{1} \]

where $i = \sqrt{-1}$ is the imaginary unit, the $n \times n$ matrices $W$ and $T$ are real, and the vectors $x$, $y$, $f$ and $g$ are all in $\mathbb{R}^n$. Note that $A$ is nonsingular if and only if $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$ and $i$ is not a generalized eigenvalue of the matrix pair $(W, T)$ (i.e., $Tx \neq iWx$ for any nonzero $x$), where $\mathrm{null}(\cdot)$ denotes the null space of the corresponding matrix. Such systems appear in a variety of scientific computing and engineering applications, such as computational electrodynamics [1, 2], the FFT-based solution of certain time-dependent PDEs [3], structural dynamics [4], diffuse optical tomography [5], quantum mechanics [6] and molecular scattering [7]. For more applications of this class of complex symmetric systems, see [8] and the references therein.

D. Hezari, Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. E-mail: davodhezari@phd.guilan.ac.ir, d.hezari@gmail.com
D.K. Salkuyeh, Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. E-mail: khojasteh@guilan.ac.ir, salkuyeh@gmail.com
V. Edalatpour, Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. E-mail: vedalat.math@gmail.com

Bai et al. [9] described the Hermitian and skew-Hermitian splitting (HSS) method for solving non-Hermitian positive definite systems of linear equations. By modifying and preconditioning the HSS iteration method, Bai et al. [10, 11] presented a preconditioned and modified HSS (PMHSS) iteration method for solving the complex linear system (1). Moreover, a lopsided version of the PMHSS method has been presented by Li et al. [12]. Additionally, there are other solution techniques for the complex symmetric linear system (1) that are extensions of Krylov subspace methods, for example, the well-known conjugate orthogonal conjugate gradient (COCG) method [13], the quasi-minimal residual (QMR) method [14], the conjugate A-orthogonal conjugate residual (COCR) method [15], the symmetric complex bi-conjugate gradient (SCBiCG) method [16], and quasi-minimal residual variants of the COCG and COCR methods [17]. To avoid complex arithmetic and to construct efficient preconditioners, one can instead work with the $2n \times 2n$ equivalent real formulation

\[ \mathcal{R}u \equiv \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix} \equiv d; \tag{2} \]

see [8, 18, 19]. In [18], the foregoing PMHSS method is adapted to solve the block two-by-two linear system (2) using real arithmetic. The PMHSS methods naturally lead to matrix splitting preconditioners, called PMHSS preconditioners, for the coefficient matrices of (1) and (2), which have recently gained attention; see [20, 21]. When the matrices $W$ and $T$ are symmetric positive semidefinite and satisfy $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, the convergence analysis of these PMHSS methods and the spectral properties of the preconditioned matrices have been presented for the linear systems (1) and (2); see [18, 11]. Based on the PMHSS preconditioning matrix, Bai [22] proposed three rotated block triangular preconditioning matrices for the block two-by-two linear system (2), namely

\[ L(\omega) = \frac{1}{\omega + 1}\, G \begin{bmatrix} \omega W + T & 0 \\ \omega T - W & \omega W + T \end{bmatrix}, \tag{3} \]

\[ U(\omega) = \frac{1}{\omega + 1}\, G \begin{bmatrix} \omega W + T & W - \omega T \\ 0 & \omega W + T \end{bmatrix}, \]

and

\[ P(\omega) = \frac{1}{\omega + 1}\, G \begin{bmatrix} \omega W + T & 0 \\ \omega T - W & \omega W + T \end{bmatrix} \begin{bmatrix} (\omega W + T)^{-1} & 0 \\ 0 & (\omega W + T)^{-1} \end{bmatrix} \begin{bmatrix} \omega W + T & W - \omega T \\ 0 & \omega W + T \end{bmatrix}, \]

where $G = \frac{\sqrt{2}}{2} \begin{bmatrix} I & I \\ -I & I \end{bmatrix}$ is a block Givens rotation, $I$ is the identity matrix and $\omega$ is a real constant. The matrices $L(\omega)$, $U(\omega)$ and $P(\omega)$ are called the rotated block lower triangular (RBLT), rotated block upper triangular (RBUT) and rotated block triangular product (RBTP) preconditioners, respectively. The rotated block triangular preconditioners can be applied not only when the matrices $W$ and $T$ are symmetric positive semidefinite, but also when either $W$ or $T$ is nonsymmetric. At each step of the rotated block triangular preconditioning processes, the matrix $\omega W + T$ must be inverted exactly. To avoid this problem, inexact variants of the rotated block triangular preconditioners have been proposed in [23]; these use a nonsingular approximation of $\omega W + T$, for example an incomplete Cholesky factorization or an incomplete LU factorization when $\omega W + T$ is symmetric positive definite or nonsymmetric, respectively. Based on the successive overrelaxation (SOR) method, Salkuyeh et al. [19] recently applied the generalized SOR (GSOR) method to efficiently solve the block two-by-two linear system (2). After that, they proposed a preconditioned variant of the GSOR method (PGSOR) [24] and established conditions under which the PGSOR iteration method is more effective than the GSOR iteration method.
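To fix ideas, the following MATLAB sketch (ours, not taken from any of the cited papers) assembles the equivalent real formulation (2) from given sparse matrices W and T and vectors f and g; all variable names are our own, and the direct solve is included only as a reference.

n = size(W, 1);
R = [W, -T; T, W];              % 2n-by-2n real coefficient matrix of (2)
d = [f; g];                     % real right-hand side
u = R \ d;                      % reference direct solve of (2)
x = u(1:n);  y = u(n+1:2*n);    % recover z = x + i*y of the complex system (1)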

In this paper, a new stationary matrix splitting iteration method, called the Scale-Splitting (SCSP) method, is presented to solve the original complex system (1), and a convergence theory together with the spectral properties of the corresponding preconditioned matrix is established. The authors of [24] established the convergence analysis of the PGSOR method when both matrices $W$ and $T$ are symmetric positive semidefinite and at least one of them is positive definite. Here, we weaken these conditions to the requirement that both $W$ and $T$ are symmetric positive semidefinite and satisfy $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$. The SCSP and PGSOR iteration methods naturally lead to preconditioning matrices for the original complex linear system (1) and the block two-by-two linear system (2), respectively. Applying these two preconditioners involves solving linear subsystems with the coefficient matrix $\omega W + T$, which, depending on the structure of $W$ and $T$, can be solved exactly by the Cholesky or LU factorization. Moreover, similar to the inexact rotated block triangular preconditioners, inexact variants of the SCSP and PGSOR preconditioners can be used by applying a nonsingular approximation of $\omega W + T$, for example an incomplete Cholesky or incomplete LU factorization when $\omega W + T$ is symmetric positive definite or nonsymmetric, respectively. Numerical experiments show that Krylov subspace methods such as GMRES [25], or its restarted variant GMRES($\ell$) [25], incorporated with the exact or inexact SCSP and PGSOR preconditioners converge rapidly and tend to outperform the exact or inexact rotated block triangular preconditioners.

Throughout this paper, we use $(\cdot)^T$ and $\|\cdot\|$ to denote the transpose and the Euclidean norm of a vector or a matrix, and $\rho(\cdot)$ denotes the spectral radius of the corresponding matrix.

The outline of this paper is as follows. In Section 2 we describe the SCSP iteration method for solving the original complex system (1) and investigate its convergence properties, extending the analysis to the preconditioned GMRES method. In Section 3 we present and analyze the PGSOR iteration method for solving the block two-by-two linear system (2) and discuss the spectral properties of the corresponding preconditioned matrix. Section 4 presents some numerical examples that show the effectiveness of the SCSP iteration method and compare the SCSP, PGSOR and rotated block triangular preconditioners in accelerating Krylov subspace iteration methods such as GMRES and its restarted variant GMRES($\ell$).

2 Scale-Splitting iteration method

In this section, we present the SCSP iteration method for solving the original complex system (1). Let $\omega$ be a positive real constant such that the matrix $\omega W + T$ is nonsingular. Multiplying both sides of the complex system (1) by the complex number $\omega - i$, we obtain the equivalent system

\[ A_\omega z = (\omega - i) b, \tag{4} \]

where $A_\omega = (\omega W + T) + i(\omega T - W)$ and $i = \sqrt{-1}$. Rewriting (4) as the fixed-point equation

\[ (\omega W + T) z = i (W - \omega T) z + (\omega - i) b, \]

we obtain the SCSP iteration method for solving the equivalent complex system (4) as follows.

The SCSP iteration method. Given an initial guess $z^{(0)} \in \mathbb{C}^n$ and a positive constant $\omega$, for $k = 0, 1, 2, \ldots$, until $\{z^{(k)}\}$ converges, solve

\[ (\omega W + T)\, z^{(k+1)} = i (W - \omega T)\, z^{(k)} + (\omega - i) b \tag{5} \]

for $z^{(k+1)}$.
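As an illustration, a minimal MATLAB sketch of the SCSP iteration (5) might read as follows; it assumes W and T are sparse and that omega*W + T is symmetric positive definite, so a single Cholesky factorization can be reused in every step (the names omega, maxit and tol are ours).

C = chol(omega*W + T);                   % upper triangular: omega*W + T = C'*C
z = zeros(size(b));                      % initial guess z^(0) = 0
for k = 1:maxit
    rhs = 1i*(W - omega*T)*z + (omega - 1i)*b;    % right-hand side of (5)
    z = C \ (C' \ rhs);                           % z^(k+1)
    if norm(b - (W + 1i*T)*z) <= tol*norm(b), break; end
end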

The SCSP iteration method can be equivalently rewritten as

\[ z^{(k+1)} = G_\omega z^{(k)} + c_\omega, \qquad k = 0, 1, 2, \ldots, \tag{6} \]

where

\[ G_\omega = i(\omega W + T)^{-1}(W - \omega T) \quad \text{and} \quad c_\omega = (\omega - i)(\omega W + T)^{-1} b. \tag{7} \]

Here, $G_\omega$ is the iteration matrix of the SCSP method. In fact, (6) is also obtained from the splitting

\[ A = M_\omega - N_\omega \tag{8} \]

of the coefficient matrix $A$, with

\[ M_\omega = \frac{\omega + i}{\omega^2 + 1}(\omega W + T) \quad \text{and} \quad N_\omega = \frac{-1 + i\omega}{\omega^2 + 1}(W - \omega T). \]

Note that $G_\omega = M_\omega^{-1} N_\omega$, and $M_\omega$ can serve as a preconditioner for the system (1). The preconditioned system then takes the form

\[ M_\omega^{-1} A z = M_\omega^{-1} b. \tag{9} \]

At each iteration of the SCSP method, a linear system with coefficient matrix $\omega W + T$ must be solved. In particular, if $W$ and $T$ are symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, then $\omega W + T$ is symmetric positive definite. Hence, the SCSP method is well defined, and the linear system with coefficient matrix $\omega W + T$ can be solved exactly by the Cholesky factorization or inexactly by the CG algorithm. In what follows, we analyze the spectral radius of the iteration matrix $G_\omega$ and, based on the results obtained, investigate the convergence properties of the method and determine the optimal value of the parameter $\omega$.

Theorem 1 Let $W, T \in \mathbb{R}^{n \times n}$ be symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, let $\omega$ be a positive constant, and let $G_\omega$ be the iteration matrix (7) of the SCSP method. Then the following statements hold true:

(i) $\lambda$ is an eigenvalue of $G_\omega$ if

\[ \lambda = i\, \frac{1 - \omega\mu}{\omega + \mu}, \]

where $\mu$ is a generalized eigenvalue of the matrix pair $(W, T)$ and $i$ is the imaginary unit;

(ii) the spectral radius $\rho(G_\omega)$ of the iteration matrix $G_\omega$ satisfies

\[ \rho(G_\omega) = \max\left\{ \frac{|1 - \omega\mu_{\min}|}{\omega + \mu_{\min}},\ \frac{|\omega\mu_{\max} - 1|}{\omega + \mu_{\max}} \right\}; \tag{10} \]

furthermore, if

\[ \frac{1 - \mu_{\min}}{1 + \mu_{\min}} < \omega < \frac{1 + \mu_{\max}}{\mu_{\max} - 1} \quad \text{for } \mu_{\max} > 1, \qquad \frac{1 - \mu_{\min}}{1 + \mu_{\min}} < \omega \quad \text{for } \mu_{\max} \le 1, \]

then $\rho(G_\omega) < 1$, where $\mu_{\min}$ and $\mu_{\max}$ are the smallest and largest generalized eigenvalues of the matrix pair $(W, T)$, respectively;

(iii) the optimal parameter $\omega$ which minimizes $\rho(G_\omega)$ is given by

\[ \omega^{*} = \frac{1 - \mu_{\min}\mu_{\max} + \sqrt{(1 + \mu_{\min}^2)(1 + \mu_{\max}^2)}}{\mu_{\min} + \mu_{\max}}, \tag{11} \]

and the corresponding optimal convergence factor of the method is given by

\[ \rho(G_{\omega^{*}}) = \frac{1 - \omega^{*}\mu_{\min}}{\omega^{*} + \mu_{\min}} = \frac{\omega^{*}\mu_{\max} - 1}{\omega^{*} + \mu_{\max}}. \tag{12} \]

Proof To prove (i), let $x \in \mathbb{C}^n$ be an eigenvector associated with a generalized eigenvalue $\mu$ of the matrix pair $(W, T)$. We have

\[ T x = \mu W x, \tag{13} \]

which implies $(\omega W + T) x = (\omega + \mu) W x$. Since $\omega > 0$, and $W$ and $T$ are symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, it is easy to see that $\mu \ge 0$ and that $\omega W + T$ is symmetric positive definite. Thus, we can write

\[ (\omega W + T)^{-1} W x = \frac{1}{\omega + \mu}\, x. \tag{14} \]

By applying (13) and (14), we have

\[ G_\omega x = i(\omega W + T)^{-1}(W - \omega T) x = i(1 - \omega\mu)(\omega W + T)^{-1} W x = i\, \frac{1 - \omega\mu}{\omega + \mu}\, x. \]

We now turn to prove (ii). From (i) we can see that

\[ \rho(G_\omega) = \max_{\mu \in \sigma(W, T)} \frac{|1 - \omega\mu|}{\omega + \mu}, \tag{15} \]

where $\sigma(W, T)$ is the set of generalized eigenvalues of the matrix pair $(W, T)$. Invoking $\mu \ge 0$ and noting that $(1 - \omega\mu)/(\omega + \mu)$ is a decreasing function of $\mu$, it is easy to obtain (10). From (15) we find that $\rho(G_\omega) < 1$ if and only if

\[ -1 < \frac{1 - \omega\mu}{\omega + \mu} < 1. \tag{16} \]

The right inequality of (16) holds if and only if

\[ \frac{1 - \mu}{1 + \mu} < \omega \qquad \text{for all } \mu \in \sigma(W, T). \tag{17} \]

Let us define the function $f(\mu) = \dfrac{1 - \mu}{1 + \mu}$. Then inequality (17) holds if and only if

\[ \frac{1 - \mu_{\min}}{1 + \mu_{\min}} < \omega, \]

which follows from the fact that $f(\mu)$ is a decreasing function. The left inequality of (16) is equivalent to

\[ \omega(\mu - 1) < 1 + \mu. \tag{18} \]

If $\mu_{\max} \le 1$, then inequality (18) holds for every $\mu \in \sigma(W, T)$; if $\mu_{\max} > 1$, it is easy to find that (18) holds if and only if

\[ \omega < \frac{1 + \mu_{\max}}{\mu_{\max} - 1}, \]

which completes the proof of part (ii). Finally, to prove (iii), let us define the functions

\[ f_1(\omega) = \frac{1 - \omega\mu_{\min}}{\omega + \mu_{\min}} \quad \text{and} \quad f_2(\omega) = \frac{\omega\mu_{\max} - 1}{\omega + \mu_{\max}}. \]

From relation (10), and the fact that $f_1(\omega)$ and $f_2(\omega)$ are, respectively, decreasing and increasing functions of $\omega$ (see Figure 1), we deduce that $\rho(G_\omega)$ attains its minimum when $\omega$ satisfies

\[ \frac{1 - \omega\mu_{\min}}{\omega + \mu_{\min}} = \frac{\omega\mu_{\max} - 1}{\omega + \mu_{\max}}, \]

or, equivalently, at

\[ \omega^{*} = \frac{1 - \mu_{\min}\mu_{\max} + \sqrt{(1 + \mu_{\min}^2)(1 + \mu_{\max}^2)}}{\mu_{\min} + \mu_{\max}}, \]

at which we have (12).

When $W$ and $T$ are symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, we are ready to describe the spectral properties of the preconditioned matrix $M_{\omega^{*}}^{-1} A$, where $\omega^{*}$ is the optimal parameter introduced in Theorem 1. From (8) we have $M_\omega^{-1} A = I - G_\omega$, where $G_\omega = M_\omega^{-1} N_\omega$ is the iteration matrix of the SCSP method. Hence the eigenvalues of the preconditioned matrix $M_\omega^{-1} A$ are $1 - \lambda$, where $\lambda$ is an eigenvalue of the matrix $G_\omega$. From part (i) of Theorem 1, we see that the eigenvalues of $M_{\omega^{*}}^{-1} A$ are of the form

\[ 1 - i\, \frac{1 - \omega^{*}\mu}{\omega^{*} + \mu}, \]

where $\mu$ is a generalized eigenvalue of the matrix pair $(W, T)$. Since $\mu \ge 0$, the eigenvalues of the preconditioned matrix $M_{\omega^{*}}^{-1} A$ are located on a straight line orthogonal to the real axis at the point $(1, 0)$, and their imaginary parts are smaller than 1 in modulus. Therefore, it is expected that the preconditioned system (9) leads to considerable improvement over the unpreconditioned one.

Remark 1 If the smallest generalized eigenvalue of the matrix pair $(W, T)$ is positive, then from part (ii) of Theorem 1 the spectral radius of the SCSP iteration matrix with $\omega = 1$ is smaller than 1, which means that the SCSP iteration method with $\omega = 1$ is convergent. Therefore, as in the discussion above, the eigenvalues of the corresponding preconditioned matrix are located on a straight line orthogonal to the real axis at the point $(1, 0)$, with imaginary parts smaller than 1 in modulus.

Fig. 1 Graphs of $f_1(\omega)$ and $f_2(\omega)$.
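In practice, $\omega^{*}$ in (11) can be computed from the extreme generalized eigenvalues of the pair $(W, T)$. A small MATLAB sketch (ours; it additionally assumes W is positive definite so that eig(T, W) is well posed, and uses dense computation, so it is only meant for modest problem sizes) is:

mu = eig(full(T), full(W));        % generalized eigenvalues: T*x = mu*W*x
mu_min = min(mu);  mu_max = max(mu);
omega_star = (1 - mu_min*mu_max + sqrt((1 + mu_min^2)*(1 + mu_max^2))) ...
             / (mu_min + mu_max);                       % formula (11)
rho_star = (1 - omega_star*mu_min) / (omega_star + mu_min);   % optimal factor (12)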

3 PGSOR iteration method

The PGSOR iteration method, presented in [24] for solving the block two-by-two linear system (2), is summarized as follows.

The PGSOR iteration method. Given an initial guess $(x^{(0)T}, y^{(0)T})^T \in \mathbb{R}^{2n}$ and positive constants $\alpha$ and $\omega$, for $k = 0, 1, 2, \ldots$, until $\{(x^{(k)T}, y^{(k)T})^T\}$ converges, compute

\[ \begin{cases} (\omega W + T)\, x^{(k+1)} = (1 - \alpha)(\omega W + T)\, x^{(k)} + \alpha(\omega T - W)\, y^{(k)} + \alpha(\omega f + g), \\ (\omega W + T)\, y^{(k+1)} = -\alpha(\omega T - W)\, x^{(k+1)} + (1 - \alpha)(\omega W + T)\, y^{(k)} + \alpha(\omega g - f). \end{cases} \tag{19} \]

In matrix-vector form, the PGSOR iteration method can be equivalently rewritten as

\[ \begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = L(\omega, \alpha) \begin{bmatrix} x^{(k)} \\ y^{(k)} \end{bmatrix} + \alpha\, C(\omega, \alpha) \begin{bmatrix} \omega f + g \\ \omega g - f \end{bmatrix}, \tag{20} \]

where

\[ L(\omega, \alpha) = \begin{bmatrix} \omega W + T & 0 \\ \alpha(\omega T - W) & \omega W + T \end{bmatrix}^{-1} \begin{bmatrix} (1 - \alpha)(\omega W + T) & \alpha(\omega T - W) \\ 0 & (1 - \alpha)(\omega W + T) \end{bmatrix} \tag{21} \]

is the iteration matrix, and

\[ C(\omega, \alpha) = \begin{bmatrix} \omega W + T & 0 \\ \alpha(\omega T - W) & \omega W + T \end{bmatrix}^{-1}. \]

The PGSOR method is well defined when the matrix $\omega W + T$ is nonsingular. In particular, the authors of [24] investigated the convergence conditions of the PGSOR iteration method when both $W$ and $T$ are symmetric positive semidefinite, with at least one of them, say $W$, positive definite, and obtained the optimal values of the parameters $\alpha$ and $\omega$. Here, in the following theorem, we establish the convergence of the PGSOR iteration method under the weaker conditions that $W$ and $T$ are symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$.

Theorem 2 Let $W, T \in \mathbb{R}^{n \times n}$ be symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$. Also, let $\mu_{\min}$ and $\mu_{\max}$ be the smallest and largest generalized eigenvalues of the matrix pair $(W, T)$, respectively. Then:

(i) for every positive parameter $\omega$, the PGSOR iteration method is convergent if and only if

\[ 0 < \alpha < \frac{2}{1 + \rho(S(\omega))}, \tag{22} \]

where $S(\omega) = (\omega W + T)^{-1}(\omega T - W)$ and

\[ \rho(S(\omega)) = \max\left\{ \frac{|1 - \omega\mu_{\min}|}{\omega + \mu_{\min}},\ \frac{|\omega\mu_{\max} - 1|}{\omega + \mu_{\max}} \right\}. \tag{23} \]

Moreover, the optimal value of the relaxation parameter $\alpha$ and the corresponding optimal convergence factor of the PGSOR iteration method are given, respectively, by

\[ \alpha^{*} = \frac{2}{1 + \sqrt{1 + \rho(S(\omega))^2}} \quad \text{and} \quad \rho(L(\omega, \alpha^{*})) = 1 - \alpha^{*} = \frac{\sqrt{1 + \rho(S(\omega))^2} - 1}{\sqrt{1 + \rho(S(\omega))^2} + 1}; \tag{24} \]

(ii) the optimal value of the parameter $\omega$, which minimizes the spectral radius $\rho(L(\omega, \alpha^{*}))$, is given by

\[ \omega^{*} = \frac{1 - \mu_{\min}\mu_{\max} + \sqrt{(1 + \mu_{\min}^2)(1 + \mu_{\max}^2)}}{\mu_{\min} + \mu_{\max}}, \tag{25} \]

at which we have

\[ \alpha^{*} = \frac{2}{1 + \sqrt{1 + \xi^2}}, \quad \text{where} \quad \xi = \frac{1 - \omega^{*}\mu_{\min}}{\omega^{*} + \mu_{\min}} = \frac{\omega^{*}\mu_{\max} - 1}{\omega^{*} + \mu_{\max}}. \]

Moreover, we have $\rho(L(\omega^{*}, \alpha^{*})) < \xi$.

Proof The proof of relation (23) is rather similar to that of relation (10) in Theorem 1 and is omitted. Since the PGSOR iteration method coincides with the GSOR iteration method applied to the block two-by-two linear system obtained from (4), part (i) follows from Theorems 1 and 2 in [19]. The proof of (ii) is quite similar to those of Theorem 2.4 and Corollary 2.2 in [24] and is omitted.

Remark 2 Note that the optimal value of the parameter $\omega$ for the PGSOR iteration method is equal to that of the SCSP iteration method.

Considering the matrices

\[ \mathcal{P}(\omega, \alpha) = \frac{1}{\alpha(\omega^2 + 1)}\, Q \begin{bmatrix} \omega W + T & 0 \\ \alpha(\omega T - W) & \omega W + T \end{bmatrix} \tag{26} \]

and

\[ \mathcal{F}(\omega, \alpha) = \frac{1}{\alpha(\omega^2 + 1)}\, Q \begin{bmatrix} (1 - \alpha)(\omega W + T) & \alpha(\omega T - W) \\ 0 & (1 - \alpha)(\omega W + T) \end{bmatrix}, \]

with $Q = \begin{bmatrix} \omega I & -I \\ I & \omega I \end{bmatrix}$, we see that $\mathcal{R} = \mathcal{P}(\omega, \alpha) - \mathcal{F}(\omega, \alpha)$ and $L(\omega, \alpha) = \mathcal{P}^{-1}(\omega, \alpha)\, \mathcal{F}(\omega, \alpha)$. Therefore, the PGSOR method is the stationary iterative method induced by this matrix splitting. Hence, $\mathcal{P}(\omega, \alpha)$ can be used as a preconditioner for the block two-by-two system (2), which will be referred to as the PGSOR preconditioner. In the implementation of the preconditioner $\mathcal{P}(\omega, \alpha)$, we need to solve $\mathcal{P}(\omega, \alpha) v = r$ for the generalized residual vector $v = (v_a^T, v_b^T)^T$, where $r = (r_a^T, r_b^T)^T$ is the current residual vector. Owing to the multiplicative structure of the preconditioner $\mathcal{P}(\omega, \alpha)$, the generalized residual vector can be computed by the following procedure, which involves solving two linear subsystems with the same coefficient matrix $\omega W + T$:

1. Set $\tilde{r}_a = \omega r_a + r_b$ and $\tilde{r}_b = \omega r_b - r_a$;
2. Solve $(\omega W + T)\, v_a = \tilde{r}_a$ for $v_a$;
3. Solve $(\omega W + T)\, v_b = \alpha (W - \omega T)\, v_a + \tilde{r}_b$ for $v_b$.

Compared with the rotated block triangular preconditioners of [22], the action costs of the PGSOR preconditioner are essentially the same as those of the RBLT and RBUT preconditioners and smaller than that of the RBTP preconditioner.

Remark 3 If, in the PGSOR preconditioner (26), $\omega = 1$ and $\alpha = 1$, then $\mathcal{P}(\omega, \alpha)$ is a scalar multiple of the RBLT preconditioner with $\omega = 1$.
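For illustration, one application of the PGSOR preconditioner, $v = \mathcal{P}(\omega, \alpha)^{-1} r$ (up to a scalar factor, which is immaterial for Krylov preconditioning), can be coded in MATLAB along the three steps above; here C = chol(omega*W + T) is assumed to be computed once and reused, and the naming is ours rather than the authors' code.

ra = r(1:n);  rb = r(n+1:2*n);
ta = omega*ra + rb;                       % step 1
tb = omega*rb - ra;
va = C \ (C' \ ta);                       % step 2: (omega*W+T)*va = ta
vb = C \ (C' \ (alpha*(W - omega*T)*va + tb));   % step 3
v  = [va; vb];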

Remark 4 By writing the SCSP iteration (5) in real form, it is easy to see that the resulting iteration method is the block Jacobi iteration method for the block two-by-two linear system obtained from (4), and that the SCSP preconditioner $M_\omega$ has the real form

\[ H_\omega = \frac{1}{\omega^2 + 1}\, Q \begin{bmatrix} \omega W + T & 0 \\ 0 & \omega W + T \end{bmatrix}, \]

which can be used as a preconditioner for the block two-by-two linear system (2). It is noteworthy that the PGSOR iteration method and the corresponding preconditioner have no complex counterpart.

In the following, we describe the spectral properties of the preconditioned matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ for some values of $\alpha$ and $\omega$.

Remark 5 Let $\mathcal{P}(\omega, \alpha)$ be the PGSOR preconditioning matrix and $L(\omega, \alpha)$ be the iteration matrix of the PGSOR method, defined in (26) and (21), respectively. Then, according to Theorem 2, the following statements hold true.

(i) Since $\rho(L(\omega^{*}, \alpha^{*})) < 3 - 2\sqrt{2} \approx 0.17$, the eigenvalues of the preconditioned matrix $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ are contained within the complex disk centered at 1 with radius $3 - 2\sqrt{2}$, due to $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R} = I - L(\omega^{*}, \alpha^{*})$.

(ii) If $\omega = 1$, then the PGSOR iteration method is convergent for every $0 < \alpha < 1$, since $\rho(S(1)) < 1$. Also, from part (i) of Theorem 2 we see that the optimal parameter $\alpha^{*}$ corresponding to $\omega = 1$ belongs to the interval $(2(\sqrt{2} - 1), 1)$, and

\[ \rho(L(\omega^{*}, \alpha^{*})) \le \rho(L(1, \alpha^{*})) < 3 - 2\sqrt{2}. \]

Similarly to case (i), the eigenvalues of the preconditioned matrix $\mathcal{P}^{-1}(1, \alpha^{*})\mathcal{R}$ are also contained within the complex disk centered at 1 with radius $3 - 2\sqrt{2}$.

Since it may be difficult to find the optimal values of the parameters $\omega$ and $\alpha$, Hezari et al. [24] used the values $\omega = 1$ and $\alpha = 2 - \sqrt{2}$, and presented some reasons for this choice. Numerical experiments in [24] show that the performance of the PGSOR iteration method with $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is close to that obtained with the optimal parameters. In the following theorem, we give the exact spectral radius of the PGSOR iteration matrix $L(\omega, \alpha)$ and the spectral distribution of the preconditioned matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ for these parameter values, which is a strong reason why the corresponding PGSOR iteration method and the PGSOR preconditioner perform so well.

Theorem 3 Let $\mathcal{R}$ be the block two-by-two matrix defined in (2), with $W$ and $T$ symmetric positive semidefinite matrices satisfying $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$, and let $L(\omega, \alpha)$ be the iteration matrix of the PGSOR method and $\mathcal{P}(\omega, \alpha)$ be the PGSOR preconditioning matrix, defined in (21) and (26), respectively. If $\omega = 1$ and $\alpha = 2 - \sqrt{2}$, then

\[ \rho(L(1, 2 - \sqrt{2})) = \sqrt{2} - 1, \tag{27} \]

and the eigenvalues of the preconditioned matrix $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$ are contained within the complex disk centered at 1 with radius $\sqrt{2} - 1$.

Proof If $\omega = 1$, it follows from part (i) of Theorem 2 that the corresponding optimal parameter $\alpha^{*}$ is given by $\alpha^{*} = 2/(1 + \sqrt{1 + \rho(S(1))^2})$, where $S(1) = (W + T)^{-1}(T - W)$. From (23) and the fact that the generalized eigenvalues of the matrix pair $(W, T)$ are nonnegative, we have $\rho(S(1)) < 1$, and this implies that $2 - \sqrt{2} < \alpha^{*}$. Note that

the PGSOR iteration method with $\omega = 1$ is the same as the GSOR iteration method for solving the block two-by-two linear system obtained from (4); therefore, $L(1, \alpha)$ and $\alpha^{*}$ are the iteration matrix and the optimal relaxation parameter of the corresponding GSOR method, respectively. Following the proof of the optimal relaxation parameter for the GSOR method [19, Theorem 2], we find that for every $\alpha \le \alpha^{*}$ it holds that $\rho(L(1, \alpha)) = 1 - \alpha$. Therefore, for $\alpha = 2 - \sqrt{2} < \alpha^{*}$, we conclude (27). The second part of the theorem follows from $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R} = I - L(1, 2 - \sqrt{2})$.

Now we consider weaker conditions on the matrices $W$ and $T$ and, similarly to [22, Theorem 3.1], estimate a region containing the eigenvalues of the preconditioned matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$.

Theorem 4 Let $\omega$ be a real constant and $\alpha \in (0, 1]$. Also, let $\mathcal{R}$ be the nonsingular block two-by-two matrix defined in (2), with $W$ and $T$ real square matrices such that $\omega W + T$ is nonsingular. Then the eigenvalues of $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ are located within the complex disk centered at 1 with radius

\[ \delta(\omega) = \|V(\omega)\|\bigl(1 + \alpha\|V(\omega)\| + 1 - \alpha\bigr), \quad \text{where} \quad V(\omega) = (\omega W + T)^{-1}(W - \omega T). \]

Proof Let

\[ Y(\omega) = \begin{bmatrix} 0 & V(\omega) \\ 0 & \alpha V(\omega)^2 \end{bmatrix} \quad \text{and} \quad Z(\omega) = \begin{bmatrix} 0 & 0 \\ (\alpha - 1)V(\omega) & 0 \end{bmatrix}. \]

Then straightforward computations show that

\[ \mathcal{P}^{-1}(\omega, \alpha)\mathcal{R} = \begin{bmatrix} I & V(\omega) \\ (\alpha - 1)V(\omega) & I + \alpha V(\omega)^2 \end{bmatrix} = I + Y(\omega) + Z(\omega), \]

and we can further obtain

\[ \|Y(\omega) + Z(\omega)\| \le \|Y(\omega)\| + \|Z(\omega)\| \le \|V(\omega)\|\bigl(1 + \alpha\|V(\omega)\|\bigr) + (1 - \alpha)\|V(\omega)\|. \]

Now, by making use of Lemma 3.2 in [26], we deduce that the eigenvalues of $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ are located within the complex disk centered at 1 with radius $\delta(\omega)$.

Now, when $\alpha = 1$, we can obtain sharper properties of the preconditioned matrix $\mathcal{P}^{-1}(\omega, 1)\mathcal{R}$.

Theorem 5 Let $\omega$ be a real constant and $\mathcal{R}$ be the nonsingular block two-by-two matrix defined in (2), with $W$ and $T$ real square matrices such that $\omega W + T$ is nonsingular. Then, for the PGSOR preconditioning matrix $\mathcal{P}(\omega, 1)$, it holds that:

(i) the eigenvalues of $\mathcal{P}^{-1}(\omega, 1)\mathcal{R}$ are 1 and $1 + \mu^2$, with $\mu$ an eigenvalue of the matrix $V(\omega)$; hence they are contained within the complex disk centered at 1 with radius $\|V(\omega)\|^2$;

(ii) let $\ell$ be the degree of the minimal polynomial of the matrix $V(\omega)$; then the degree of the minimal polynomial of the matrix $\mathcal{P}^{-1}(\omega, 1)\mathcal{R}$ is at most $\ell + 1$.

Proof Following the proof of Theorem 4, we know that

\[ \mathcal{P}^{-1}(\omega, 1)\mathcal{R} = \begin{bmatrix} I & V(\omega) \\ 0 & I + V(\omega)^2 \end{bmatrix}, \]

which immediately yields (i). Moreover, following the proof of Proposition 2.1 in [27] and using the structure of the matrix $\mathcal{P}^{-1}(\omega, 1)\mathcal{R}$, we find that (ii) holds true.

From Theorem 5 we conclude that when a Krylov subspace iteration method with an optimal or Galerkin property, such as GMRES, is applied to the linear system preconditioned by $\mathcal{P}(\omega, 1)$, it converges to the exact solution in at most $\ell + 1$ iteration steps, in exact arithmetic. When the matrix $V(\omega)$ is diagonalizable, similarly to Theorem 3.3 in [22], we can give analytical expressions for the eigenvalues and eigenvectors of the preconditioned matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$.
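A quick numerical sanity check of Theorem 5(i) on a small random problem can be carried out in MATLAB as follows (our sketch; the block structure of $\mathcal{P}^{-1}(\omega, 1)\mathcal{R}$ is taken from the proof above, and the sizes and seed are arbitrary):

n = 50;  rng(1);
W = randn(n);  W = W*W' + n*eye(n);     % symmetric positive definite
T = randn(n);  T = T*T';                % symmetric positive semidefinite
omega = 1;
V = (omega*W + T) \ (W - omega*T);      % V(omega)
lam = eig([eye(n), V; zeros(n), eye(n) + V^2]);   % spectrum of P(omega,1)^{-1}*R
ref = [ones(n,1); 1 + eig(V).^2];                 % predicted spectrum: 1 and 1+mu^2
disp(norm(sort(real(lam)) - sort(real(ref))))     % should be near machine precision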

Theorem 6 Let $\omega$ and $\alpha$ be real constants, and let $\mathcal{R}$ be the nonsingular block two-by-two matrix defined in (2), with $W$ and $T$ real square matrices such that $\omega W + T$ is nonsingular. Let $\Psi(\omega)$ be a nonsingular matrix and $\Upsilon(\omega)$ a diagonal matrix such that $V(\omega)\Psi(\omega) = \Psi(\omega)\Upsilon(\omega)$, with $\psi_j := \psi_j(\omega)$ the $j$-th column of $\Psi(\omega)$ and $v_j := v_j(\omega)$ the $j$-th diagonal element of $\Upsilon(\omega)$. Denote by $s_j = \mathrm{sign}(v_j)$ the sign of $v_j$. Then the eigenvalues $\lambda_j$, $j = 1, 2, \ldots, 2n$, of the matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ are

\[ \lambda_j = \lambda_j^{+} \quad \text{and} \quad \lambda_{n+j} = \lambda_j^{-}, \qquad j = 1, 2, \ldots, n, \]

where

\[ \lambda_j^{\pm} = 1 + \frac{\alpha v_j^2 \pm s_j v_j \sqrt{\alpha^2 v_j^2 - 4(1 - \alpha)}}{2}, \]

and the corresponding eigenvectors are

\[ x_j = \frac{1}{\sqrt{1 + \beta_j^2}} \begin{bmatrix} \psi_j \\ \beta_j \psi_j \end{bmatrix}, \qquad x_{n+j} = \frac{1}{\sqrt{1 + \gamma_j^2}} \begin{bmatrix} \psi_j \\ \gamma_j \psi_j \end{bmatrix}, \qquad j = 1, 2, \ldots, n, \]

where

\[ \beta_j = \frac{\alpha v_j + s_j \sqrt{\alpha^2 v_j^2 - 4(1 - \alpha)}}{2}, \qquad \gamma_j = \frac{\alpha v_j - s_j \sqrt{\alpha^2 v_j^2 - 4(1 - \alpha)}}{2}. \]

Proof For convenience, let

\[ \widetilde{\Psi}(\omega) = \begin{bmatrix} \Psi(\omega) & 0 \\ 0 & \Psi(\omega) \end{bmatrix}, \qquad K(\omega, \alpha) = \begin{bmatrix} I & \Upsilon(\omega) \\ (\alpha - 1)\Upsilon(\omega) & I + \alpha\Upsilon(\omega)^2 \end{bmatrix}. \]

From the proof of Theorem 4, we observe that

\[ \mathcal{P}^{-1}(\omega, \alpha)\mathcal{R} = \begin{bmatrix} I & V(\omega) \\ (\alpha - 1)V(\omega) & I + \alpha V(\omega)^2 \end{bmatrix} = \widetilde{\Psi}(\omega)\, K(\omega, \alpha)\, \widetilde{\Psi}^{-1}(\omega). \]

Since the sub-blocks of $K(\omega, \alpha)$ are square diagonal matrices, by following the proof of Theorem 3.3 in [22] we can obtain the spectral decomposition of $K(\omega, \alpha)$, and after straightforward computations the eigenvalues and eigenvectors of the matrix $\mathcal{P}^{-1}(\omega, \alpha)\mathcal{R}$ stated above are derived.

4 Numerical experiments

In this section, we use three examples of complex linear systems of the form (1) with real matrices $W$ and $T$ that are symmetric positive definite. Using these examples, we illustrate the feasibility and effectiveness of the SCSP iteration method when it is employed as a solver for the complex symmetric system (1). We compare the performance of the SCSP iteration method with that of the PGSOR iteration method, which is applied to the equivalent block two-by-two linear system (2), in terms of both the number of iterations (denoted by IT) and the total computing time in seconds (denoted by CPU). Besides these three complex linear systems, a fourth example, taken from [22], is a nonsymmetric variant of the complex linear system (1). Using these examples, we examine the numerical behavior of the SCSP, PGSOR and RBLT preconditioning matrices and the corresponding preconditioned GMRES methods, termed briefly SCSP-GMRES, PGSOR-GMRES and RBLT-GMRES. For a comprehensive comparison, we also solve the original complex system (1) by MATLAB's sparse direct solver (denoted A\b) and by unpreconditioned or ILU-preconditioned GMRES. Note that, since the numerical experiments in [22] show that the performance of the RBLT,

RBUT and RBTP preconditioning matrices is almost the same, and the structure of the RBLT preconditioning matrix is similar to that of the PGSOR preconditioning matrix, the comparison is done only with the RBLT preconditioning matrix. The SCSP, PGSOR and RBLT preconditioning processes require solving linear subsystems with the coefficient matrix $\omega W + T$; depending on the structure of the matrices $W$ and $T$, this can be done exactly by means of the Cholesky or LU factorization. Moreover, in analogy with the inexact RBLT (IRBLT) preconditioning matrix of [23], inexact variants of the SCSP and PGSOR preconditioning matrices, and the corresponding preconditioned restarted GMRES($\ell$) methods, termed briefly ISCSP-GMRES($\ell$), IPGSOR-GMRES($\ell$) and IRBLT-GMRES($\ell$), can be used by applying a nonsingular approximation of the matrix $\omega W + T$. Depending on the structure of $W$ and $T$, we use an incomplete Cholesky or an incomplete LU factorization of $\omega W + T$ with dropping tolerance 0.1. The reported CPU times are the sum of the CPU time for the convergence of the method and the CPU time for computing the complete or incomplete Cholesky or LU factorization. To solve symmetric positive definite and nonsymmetric systems of linear equations, we use the sparse Cholesky and LU factorizations incorporated with the symmetric and column approximate minimum degree reorderings, respectively, via the symamd.m and colamd.m commands of MATLAB. It should be noted that in all tests, IT for the preconditioned or unpreconditioned GMRES($\ell$) stands for the number of restarts. All tests are performed in MATLAB (64-bit) on an ASUS laptop (Intel Core i7, 1.8 GHz, 6 GB RAM). We use a null vector as the initial guess, and the stopping criterion is

\[ \|b - A x^{(k)}\| < 10^{-6}\, \|b\|. \]

For Examples 1, 2 and 4, the optimal values of the parameters $\omega$ and $\alpha$ for the PGSOR method are determined by the formulas given in Theorem 2, and, according to Remark 2, the optimal value of the parameter $\omega$ for the SCSP iteration method is the same as that of the PGSOR iteration method. Since Example 3 is a nonsymmetric variant of the complex linear system (1), the parameters $\omega$ and $\alpha$ adopted in the exact preconditioners are the experimentally found optimal ones minimizing the total number of iteration steps of the GMRES iteration process; they are listed in Table 1 for different values of $m$ and denoted by $\alpha^{*}$ and $\omega^{*}$. These optimal parameters are also used in the inexact preconditioners and the corresponding restarted GMRES($\ell$) methods. If the optimal parameters $\omega$ and $\alpha$ are not single points, we choose $\omega$ as the one closest to 1 and $\alpha$ as the smallest one. Moreover, we use an incomplete LU factorization of $\omega W + T$ with dropping tolerance 0.1 when applying ISCSP-GMRES($\ell$), IPGSOR-GMRES($\ell$) and IRBLT-GMRES($\ell$) to this example.

Example 1 ([24]) Consider the linear system of equations (1) of the form

\[ \left[ \left( K + \frac{3 - \sqrt{3}}{\tau}\, I \right) + i \left( K + \frac{3 + \sqrt{3}}{\tau}\, I \right) \right] x = b, \tag{28} \]

where $\tau$ is the time step-size and $K$ is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0, 1] \times [0, 1]$ with mesh-size $h = 1/(m+1)$. The matrix $K \in \mathbb{R}^{n \times n}$ possesses the tensor-product form $K = I \otimes V_m + V_m \otimes I$, with $V_m = h^{-2}\, \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m \times m}$. Hence, $K$ is an $n \times n$ block-tridiagonal matrix, with $n = m^2$. We take

\[ W = K + \frac{3 - \sqrt{3}}{\tau}\, I \quad \text{and} \quad T = K + \frac{3 + \sqrt{3}}{\tau}\, I, \]

and the right-hand side vector $b$ with $j$-th entry

\[ b_j = \frac{(1 - i)\, j}{\tau (j + 1)^2}, \qquad j = 1, 2, \ldots, n. \]

In our tests, we take $\tau = h$.
Furthermore, we normalize the coefficient matrix and the right-hand side by multiplying both by $h^2$.
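For reference, Example 1 can be assembled in MATLAB roughly as follows (our sketch of the definitions above, not the authors' code):

h  = 1/(m+1);  tau = h;  n = m^2;
Vm = (1/h^2) * spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m);   % tridiag(-1,2,-1)/h^2
K  = kron(speye(m), Vm) + kron(Vm, speye(m));              % 2-D negative Laplacian
W  = h^2 * (K + ((3 - sqrt(3))/tau) * speye(n));           % normalized by h^2
T  = h^2 * (K + ((3 + sqrt(3))/tau) * speye(n));
j  = (1:n)';
b  = h^2 * (1 - 1i) .* j ./ (tau*(j + 1).^2);              % normalized right-hand side
A  = W + 1i*T;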

Table 1 The experimental optimal parameters for preconditioned GMRES for Example 3, obtained by minimizing the number of iteration steps.

Method | m = 64 | m = 128 | m = 256 | m = 512 | m = 1024
SCSP-GMRES, ω* | [0.1, 0.9] | [0.1, 1.0] | [0.5, 1.4] | [0.7, 0.99] | [0.7, 1.1]
PGSOR-GMRES, ω* | [0.2, 0.48] | [0.4, 0.8] | [0.74, 0.91] | [0.7, 1.1] | [0.85, 1.1]
PGSOR-GMRES, α* | [0.9, 1.0] | [0.9, 1.3] | [0.9, 1.0] | [0.8, 1.5] | [0.8, 1.5]
RBLT-GMRES, ω* | [0.8, 1.4] | [0.83, 1.6] | [0.95, 1.3] | [0.97, 1.3] | [0.98, 1.0]

Table 2 IT and CPU for the SCSP and PGSOR methods for Examples 1 and 2 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Example 2 ([4]) Consider the linear system of equations (1) of the form

\[ \left[ (-\theta^2 M + K) + i(\theta C_V + C_H) \right] x = b, \tag{29} \]

where $M$ and $K$ are the inertia and the stiffness matrices, $C_V$ and $C_H$ are the viscous and the hysteretic damping matrices, respectively, and $\theta$ is the driving circular frequency. We take $C_H = \mu K$ with $\mu$ a damping coefficient, $M = I$, $C_V = 10 I$, and $K$ the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0, 1] \times [0, 1]$ with mesh-size $h = 1/(m+1)$. The matrix $K \in \mathbb{R}^{n \times n}$ possesses the tensor-product form $K = I \otimes V_m + V_m \otimes I$, with $V_m = h^{-2}\, \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m \times m}$. Hence, $K$ is an $n \times n$ block-tridiagonal matrix, with $n = m^2$. In addition, we set $\theta = \pi$, $\mu = 0.02$, and the right-hand side vector $b = (1 + i) A \mathbf{1}$, with $\mathbf{1}$ the vector of all entries equal to 1. As before, we normalize the system by multiplying both sides by $h^2$.

The number of iteration steps and the CPU times of the PGSOR and SCSP methods for Examples 1, 2 and 4, for different values of the problem size $m$, are listed in Tables 2-5, where the CPU times are shown in parentheses. Tables 2 and 4 report IT and CPU when the optimal parameters $\omega^{*}$ and $\alpha^{*}$ are adopted, while Tables 3 and 5 report IT and CPU when the iteration parameters $\omega$ and $\alpha$ are set to 1 and $2 - \sqrt{2}$, respectively. Note that the SCSP method is employed for solving the original complex system (1), while the PGSOR method is employed for solving the equivalent block two-by-two system (2). Each step of the PGSOR method requires solving two subsystems with the coefficient matrix $\omega W + T$, while the SCSP method requires only one subsystem with the same coefficient matrix. In Table 2 we see that the iteration counts of the SCSP method for Example 1 are almost twice those of the PGSOR method, but their computing times are close; overall, the PGSOR iteration method outperforms the SCSP iteration method. In addition, when the results for $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ are compared with those for the optimal values, for each $m$ the ITs and CPUs of the PGSOR and SCSP methods in Tables 3 and 5 are larger than those in Tables 2 and 4, respectively. However, for Example 1 the performance with $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is acceptable and close to that of the optimal values; likewise, for Examples 2 and 4 the performance of the PGSOR method with $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is acceptable and close to that of the optimal values, while the performance of the SCSP method with $\omega = 1$ is far from that of the optimal values.
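Analogously, a sketch (ours) of the assembly of Example 2:

h  = 1/(m+1);  n = m^2;  theta = pi;  mu = 0.02;
Vm = (1/h^2) * spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m);
K  = kron(speye(m), Vm) + kron(Vm, speye(m));
W  = h^2 * (K - theta^2*speye(n));        % -theta^2*M + K with M = I, scaled by h^2
T  = h^2 * (theta*10*speye(n) + mu*K);    % theta*C_V + C_H, C_V = 10*I, C_H = mu*K
A  = W + 1i*T;
b  = (1 + 1i) * (A * ones(n,1));          % b = (1+i)*A*1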

Table 3 IT and CPU for the SCSP and PGSOR methods for Examples 1 and 2 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

Table 4 IT and CPU for the SCSP and PGSOR methods for Example 4 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Table 5 IT and CPU for the SCSP and PGSOR methods for Example 4 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

Table 6 IT and CPU for GMRES, GMRES($\ell$), ILU-GMRES, ILU-GMRES($\ell$) and A\b for Example 1.

Tables 6-8 present results for Example 1, where the CPU times are shown in parentheses. Table 7 reports IT and CPU when the optimal parameters $\omega^{*}$ and $\alpha^{*}$ are adopted, while Table 8 reports IT and CPU when the iteration parameters $\omega$ and $\alpha$ are set to 1 and $2 - \sqrt{2}$, respectively. The symbol "–" indicates that the computing time exceeded one hour. As seen, the direct method is slower than PGSOR-GMRES except for $m = 64$, and also slower than SCSP-GMRES except for $m = 64, 128$. Both SCSP-GMRES and PGSOR-GMRES are very robust in all cases and exhibit mesh-independent convergence. From Table 7 we see that, among the exact preconditioned GMRES methods and among the inexact preconditioned GMRES($\ell$) methods, PGSOR-GMRES and ISCSP-GMRES($\ell$) need the fewest iteration steps and the least CPU time, respectively. In addition, when the results for $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ are compared with those for the optimal values, for each $m$ the ITs of the SCSP-GMRES method in Table 8 are the same as those in Table 7, and the CPUs are smaller than those in Table 7. The ITs and CPUs of RBLT-GMRES and IRBLT-GMRES($\ell$) in Table 8 are exactly the same as those in Table 7. Moreover, the ITs and CPUs of PGSOR-GMRES in Table 8 are larger than those in Table 7, and the ITs of IPGSOR-GMRES($\ell$) and ISCSP-GMRES($\ell$) in Table 8 are exactly the same as those in Table 7, except for IPGSOR-GMRES($\ell$) when $m = 1024$. Thus, it can be said that the performance of $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is acceptable and close to that of the optimal values.
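To give a concrete picture of the inexact variants discussed in this section, the following MATLAB sketch (ours) applies the inexact SCSP preconditioner inside restarted GMRES, replacing the exact Cholesky factor of omega*W + T by an incomplete one; the restart length 10 and iteration limit are arbitrary illustrative choices, not values taken from the paper.

B    = omega*W + T;
Lf   = ichol(B, struct('type','ict','droptol',0.1));   % B is approximately Lf*Lf'
prec = @(x) (omega - 1i) * (Lf' \ (Lf \ x));           % approximates M_omega^{-1}*x
[z, flag, relres, iter] = gmres(W + 1i*T, b, 10, 1e-6, 100, prec);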

Table 7 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 1 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Table 8 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 1 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

Table 9 IT and CPU for GMRES, GMRES($\ell$), ILU-GMRES, ILU-GMRES($\ell$) and A\b for Example 2.

Tables 9-11 present results for Example 2, where the CPU times are shown in parentheses. Table 10 reports IT and CPU when the optimal parameters $\omega^{*}$ and $\alpha^{*}$ are adopted, while Table 11 reports IT and CPU when the iteration parameters $\omega$ and $\alpha$ are set to 1 and $2 - \sqrt{2}$, respectively. The symbol "–" indicates that the computing time exceeded one hour. As seen, the direct method is slower than SCSP-GMRES except for $m = 64$, and slower than PGSOR-GMRES for $m = 512, 1024$. Both SCSP-GMRES and PGSOR-GMRES are very robust in all cases and exhibit mesh-independent convergence. From Table 10 we see that, among the exact preconditioned GMRES methods, the performance of SCSP- and PGSOR-GMRES is better than that of RBLT-GMRES in terms of both the iteration count and the CPU time. The ITs of the SCSP-GMRES method exceed those of the PGSOR-GMRES method by one iteration step, except for $m = 64$, in which case the IT of the SCSP-GMRES method is exactly the same as that of the PGSOR-GMRES method. It is important to note that, among the exact preconditioned GMRES methods, the computing time of SCSP-GMRES is the least. Among the inexact preconditioned GMRES($\ell$) methods, the ITs and CPUs of ISCSP-GMRES($\ell$) are the least. In addition, when the results for $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ are compared with those for the optimal values, for each $m$ the ITs of the SCSP- and RBLT-GMRES and of the ISCSP- and IRBLT-GMRES($\ell$) methods in Table 11 are exactly the same as those in Table 10. The ITs and CPUs of IPGSOR-GMRES($\ell$) in Table 11 are larger than those in Table 10, except for $m = 64$. It is important to note that, among the exact preconditioned GMRES and the inexact preconditioned GMRES($\ell$) methods in Table 11, the ITs and CPUs of SCSP-GMRES and ISCSP-GMRES($\ell$) are the least, respectively.

Example 3 ([22], nonsymmetric BBC problem) Consider the linear system of equations $(W + iT)x = b$, with

\[ W = I \otimes L + L \otimes I \quad \text{and} \quad T = 10\,(I \otimes L_c + L_c \otimes I) + 9\,(e_1 e_m^T + e_m e_1^T) \otimes I, \]

where $L = \mathrm{tridiag}(-1 - \theta, 2, -1 + \theta) \in \mathbb{R}^{m \times m}$ with $\theta = \frac{1}{m+1}$, $L_c = L - e_1 e_m^T - e_m e_1^T \in \mathbb{R}^{m \times m}$, and $e_1$ and $e_m$ are the first and last unit vectors in $\mathbb{R}^m$, respectively. We take the right-hand side vector $b = (1 + i) A \mathbf{1}$, with $\mathbf{1}$ the vector of all entries equal to 1.
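A sketch (ours) of the assembly of the BBC matrices of Example 3, mirroring the definitions just given:

h  = 1/(m+1);  theta = h;  n = m^2;
e1 = sparse(1, 1, 1, m, 1);  em = sparse(m, 1, 1, m, 1);
L  = spdiags(ones(m,1)*[-1-theta, 2, -1+theta], -1:1, m, m);
Lc = L - e1*em' - em*e1';
Im = speye(m);
W  = kron(Im, L) + kron(L, Im);
T  = 10*(kron(Im, Lc) + kron(Lc, Im)) + 9*kron(e1*em' + em*e1', Im);
A  = W + 1i*T;
b  = (1 + 1i) * (A * ones(n,1));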

Table 10 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 2 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Table 11 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 2 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

Fig. 2 Eigenvalue distributions of the matrix $A$ (left), the preconditioned matrix $M_{\omega^{*}}^{-1} A$ (middle) and $M_{1}^{-1} A$ (right) for Example 1 with $m = 32$.

Fig. 3 Eigenvalue distributions of the block two-by-two matrix $\mathcal{R}$ (left), the preconditioned matrix $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ (middle) and the preconditioned matrix $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$ (right) for Example 1 with $m = 32$.

Example 4 ([28]) Consider the 3-D Helmholtz equation

\[ \begin{cases} -\Delta u - k^2 u + i\sigma u = f(x, y, z), & (x, y, z) \in \Omega = [0, 1] \times [0, 1] \times [0, 1], \\ u|_{\Gamma} = g(x, y, z), & (x, y, z) \in \Gamma, \end{cases} \]

Fig. 4 Eigenvalue distributions of the matrix $A$ (left), the preconditioned matrix $M_{\omega^{*}}^{-1} A$ (middle) and $M_{1}^{-1} A$ (right) for Example 2 with $m = 32$.

Fig. 5 Eigenvalue distributions of the block two-by-two matrix $\mathcal{R}$ (left), the preconditioned matrix $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ (middle) and the preconditioned matrix $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$ (right) for Example 2 with $m = 32$.

Fig. 6 Eigenvalue distributions of the matrix $A$ (left), the preconditioned matrix $M_{\omega^{*}}^{-1} A$ (middle) and $M_{1}^{-1} A$ (right) for Example 3 with $m = 32$.

where $k$ and $\Gamma$ denote the wave number and the boundary of $\Omega = [0, 1] \times [0, 1] \times [0, 1]$, respectively. We set $\sigma = 0.1$ and $k = 1$. Applying the centered-difference scheme to the above 3-D Helmholtz equation with mesh-size $h = \frac{1}{m+1}$, we obtain $A = W + iT$, where

\[ W = T_m \otimes I_m \otimes I_m + I_m \otimes T_m \otimes I_m + I_m \otimes I_m \otimes T_m - k^2 h^2\, (I_m \otimes I_m \otimes I_m) \]

and

\[ T = \sigma h^2\, (I_m \otimes I_m \otimes I_m). \]

Here, $I_m \in \mathbb{R}^{m \times m}$ is the identity matrix and $T_m = \mathrm{tridiag}(-1, 2, -1)$ is tridiagonal. We take the right-hand side vector $b = (1 + i) A \mathbf{1}$, with $\mathbf{1}$ the vector of all entries equal to 1.

Tables 12-14 present results for Example 3, where the CPU times are shown in parentheses. Table 13 reports IT and CPU when the optimal parameters $\omega^{*}$ and $\alpha^{*}$ are adopted, while Table 14 reports IT and CPU when the iteration parameters $\omega$ and $\alpha$ are set to 1 and $2 - \sqrt{2}$, respectively. The symbol "–" indicates that the computing time exceeded one hour. As seen, the direct method is sometimes slightly faster, but it is slower than SCSP-GMRES and PGSOR-GMRES for $m = 512, 1024$. From Table 13 we see that, among the exact preconditioned GMRES methods, the performance of SCSP- and PGSOR-GMRES is better than that of RBLT-GMRES in terms of both the iteration count and the CPU time. It is important to note that, among the exact preconditioned GMRES methods, the computing time of PGSOR-GMRES is, for large $m$, the least. Among the inexact preconditioned GMRES($\ell$) methods, the ITs and CPUs of ISCSP-GMRES($\ell$) are the least. In addition, when the results for $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ are compared with those for the optimal values, for each $m$ the ITs and CPUs of the RBLT-GMRES and IRBLT-GMRES($\ell$) methods in Table 14 are exactly the same as those in Table 13.

Fig. 7 Eigenvalue distributions of the block two-by-two matrix $\mathcal{R}$ (left), the preconditioned matrix $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ (middle) and the preconditioned matrix $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$ (right) for Example 3 with $m = 32$.

Fig. 8 Eigenvalue distributions of the block two-by-two matrix $\mathcal{R}$ (left), the preconditioned matrix $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ (middle) and the preconditioned matrix $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$ (right) for Example 4 with $m = 15$.
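A sketch (ours) of the assembly of Example 4, with sigma = 0.1 and k = 1 as above:

h  = 1/(m+1);  n = m^3;  sigma = 0.1;  k = 1;
Tm = spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m);
Im = speye(m);  In = speye(n);
W  = kron(kron(Tm, Im), Im) + kron(kron(Im, Tm), Im) + kron(kron(Im, Im), Tm) ...
     - k^2*h^2*In;
T  = sigma*h^2*In;
A  = W + 1i*T;
b  = (1 + 1i) * (A * ones(n,1));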

Table 12 IT and CPU for GMRES, GMRES($\ell$), ILU-GMRES, ILU-GMRES($\ell$) and A\b for Example 3.

Table 13 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 3 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Also, the ITs and CPUs of the SCSP-GMRES method in Table 14 are exactly the same as those in Table 13, except for $m = 64, 512$, in which cases the difference in IT is only one. The results of the PGSOR-GMRES method in Table 14 are close to those in Table 13, and in some cases the results are the same. The ITs of the ISCSP-GMRES($\ell$) method in Table 14 are the same as those in Table 13, except for $m = 512$, in which case the difference in IT is only one. Thus, it can be said that the performance of $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is acceptable and close to that of the optimal values, especially for the SCSP preconditioner.

Tables 15-17 present results for Example 4, where the CPU times are shown in parentheses. Table 16 reports IT and CPU when the optimal parameters $\omega^{*}$ and $\alpha^{*}$ are adopted, while Table 17 reports IT and CPU when the iteration parameters $\omega$ and $\alpha$ are set to 1 and $2 - \sqrt{2}$, respectively. The symbol "–" indicates that the computing time exceeded one hour. As seen, the direct method is slower than the SCSP-GMRES and PGSOR-GMRES methods. From Table 16 we see that, among the exact preconditioned GMRES methods, the performance of SCSP- and PGSOR-GMRES is better than that of RBLT-GMRES in terms of both the iteration count and the CPU time. It is important to note that, among the exact preconditioned GMRES methods, the computing time of SCSP-GMRES is the least, except for $m = 25$, in which case that of PGSOR-GMRES is the least. Among the inexact preconditioned GMRES($\ell$) methods, the ITs and CPUs of ISCSP-GMRES($\ell$) are the least. In addition, when the results for $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ are compared with those for the optimal values, for each $m$ we see that, among the exact preconditioned GMRES methods, the ITs of RBLT-GMRES and SCSP-GMRES in Table 17 are exactly the same as those in Table 16; also, among the inexact preconditioned GMRES($\ell$) methods, the ITs of all of them in Table 17 are exactly the same as those in Table 16. Thus, it can be said that the performance of $\omega = 1$ and $\alpha = 2 - \sqrt{2}$ is acceptable and close to that of the optimal values.

In Figures 2-7 we depict the eigenvalue distributions of the coefficient matrix $A$ and the SCSP-preconditioned matrices $M_\omega^{-1} A$ for $\omega = \omega^{*}$ and $\omega = 1$, as well as the eigenvalue distributions of the block two-by-two coefficient matrix $\mathcal{R}$ and the PGSOR-preconditioned matrices $\mathcal{P}^{-1}(\omega^{*}, \alpha^{*})\mathcal{R}$ and $\mathcal{P}^{-1}(1, 2 - \sqrt{2})\mathcal{R}$, with $m = 32$, for Examples 1-3. It is evident that the systems preconditioned by the SCSP and PGSOR methods have well-clustered spectra around $(1, 0)$. These observations imply that when SCSP and PGSOR are applied as preconditioners for GMRES, the rate of convergence can be improved considerably. This fact is further confirmed by the numerical results presented in Tables 6-17.
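The spectra shown in these figures can be reproduced in spirit with a few lines of MATLAB (our sketch, for a small instance where dense eigenvalue computation is affordable):

Af   = full(W + 1i*T);
lamA = eig(Af);                                      % spectrum of A
lamM = eig((omega*W + T) \ ((omega - 1i)*Af));       % spectrum of M_omega^{-1}*A
subplot(1,2,1), plot(real(lamA), imag(lamA), '.'), title('A')
subplot(1,2,2), plot(real(lamM), imag(lamM), '.'), title('M_\omega^{-1}A')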

Table 14 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 3 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

Table 15 IT and CPU for GMRES, GMRES($\ell$), ILU-GMRES, ILU-GMRES($\ell$) and A\b for Example 4.

Table 16 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 4 when $\omega = \omega^{*}$ and $\alpha = \alpha^{*}$.

Table 17 IT and CPU for the exact SCSP-, PGSOR- and RBLT-preconditioned GMRES methods and the inexact SCSP-, PGSOR- and RBLT-preconditioned GMRES($\ell$) methods for Example 4 when $\omega = 1$ and $\alpha = 2 - \sqrt{2}$.

5 Conclusion

In this paper we have presented a new stationary iterative method, called the Scale-Splitting (SCSP) method, for solving the original complex linear system (1), and we have investigated its convergence properties. We have further explored algebraic and convergence properties of the PGSOR iteration method for solving the equivalent block two-by-two linear system (2).


More information

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned

More information

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Residual iterative schemes for largescale linear systems

Residual iterative schemes for largescale linear systems Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Zhong-Zhi Bai State Key Laboratory of Scientific/Engineering Computing Institute of Computational Mathematics

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University

More information

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,

More information

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction Journal of Computational Mathematics Vol.xx, No.x, 200x, 1 9. http://www.global-sci.org/jcm doi:10.4208/jcm.1401-cr7 ON AUGMENED LAGRANGIAN MEHODS FOR SADDLE-POIN LINEAR SYSEMS WIH SINGULAR OR SEMIDEFINIE

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 36, pp. 39-53, 009-010. Copyright 009,. ISSN 1068-9613. ETNA P-REGULAR SPLITTING ITERATIVE METHODS FOR NON-HERMITIAN POSITIVE DEFINITE LINEAR SYSTEMS

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Lecture 3: Inexact inverse iteration with preconditioning

Lecture 3: Inexact inverse iteration with preconditioning Lecture 3: Department of Mathematical Sciences CLAPDE, Durham, July 2008 Joint work with M. Freitag (Bath), and M. Robbé & M. Sadkane (Brest) 1 Introduction 2 Preconditioned GMRES for Inverse Power Method

More information

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP.

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

FEM and Sparse Linear System Solving

FEM and Sparse Linear System Solving FEM & sparse system solving, Lecture 7, Nov 3, 2017 1/46 Lecture 7, Nov 3, 2015: Introduction to Iterative Solvers: Stationary Methods http://people.inf.ethz.ch/arbenz/fem16 Peter Arbenz Computer Science

More information

Notes on PCG for Sparse Linear Systems

Notes on PCG for Sparse Linear Systems Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58 Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms

More information

A parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations

A parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations A parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations Akira IMAKURA Center for Computational Sciences, University of Tsukuba Joint work with

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems

Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems Mathematical Problems in Engineering Volume 2013, Article ID 326169, 6 pages http://dx.doi.org/10.1155/2013/326169 Research Article Convergence of a Generalized USOR Iterative Method for Augmented Systems

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT ESTIMATES OF THE FIELD OF VALUES

CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT ESTIMATES OF THE FIELD OF VALUES European Conference on Computational Fluid Dynamics ECCOMAS CFD 2006 P. Wesseling, E. Oñate and J. Périaux (Eds) c TU Delft, The Netherlands, 2006 CONVERGENCE BOUNDS FOR PRECONDITIONED GMRES USING ELEMENT-BY-ELEMENT

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2007-002 Block preconditioning for saddle point systems with indefinite (1,1) block by Michele Benzi, Jia Liu Mathematics and Computer Science EMORY UNIVERSITY International Journal

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to A DISSERTATION Extensions of the Conjugate Residual Method ( ) by Tomohiro Sogabe Presented to Department of Applied Physics, The University of Tokyo Contents 1 Introduction 1 2 Krylov subspace methods

More information

4.6 Iterative Solvers for Linear Systems

4.6 Iterative Solvers for Linear Systems 4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

The flexible incomplete LU preconditioner for large nonsymmetric linear systems. Takatoshi Nakamura Takashi Nodera

The flexible incomplete LU preconditioner for large nonsymmetric linear systems. Takatoshi Nakamura Takashi Nodera Research Report KSTS/RR-15/006 The flexible incomplete LU preconditioner for large nonsymmetric linear systems by Takatoshi Nakamura Takashi Nodera Takatoshi Nakamura School of Fundamental Science and

More information

Delayed Over-Relaxation in Iterative Schemes to Solve Rank Deficient Linear System of (Matrix) Equations

Delayed Over-Relaxation in Iterative Schemes to Solve Rank Deficient Linear System of (Matrix) Equations Filomat 32:9 (2018), 3181 3198 https://doi.org/10.2298/fil1809181a Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Delayed Over-Relaxation

More information

Domain decomposition on different levels of the Jacobi-Davidson method

Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Introduction to Scientific Computing

Introduction to Scientific Computing (Lecture 5: Linear system of equations / Matrix Splitting) Bojana Rosić, Thilo Moshagen Institute of Scientific Computing Motivation Let us resolve the problem scheme by using Kirchhoff s laws: the algebraic

More information

On the accuracy of saddle point solvers

On the accuracy of saddle point solvers On the accuracy of saddle point solvers Miro Rozložník joint results with Valeria Simoncini and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic Seminar at

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018

Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018 Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x

More information

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS DAVOD KHOJASTEH SALKUYEH and FATEMEH PANJEH ALI BEIK Communicated by the former editorial board Let A : R m n R m n be a symmetric

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information