Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems


Zhong-Zhi Bai
State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, P.R. China

Gene H. Golub
Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA 94305, USA

Lin-Zhang Lu
Department of Mathematics, Xiamen University, Xiamen 361005, P.R. China

Jun-Feng Yin
State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, P.R. China

May 2, 2003

Subsidized by The Special Funds For Major State Basic Research Projects G1999032803. Research supported, in part, by DOE-FC02-01ER4177. Supported by the National Natural Science Foundation of China.

Abstract

By further generalizing the concept of Hermitian (or normal) and skew-Hermitian splitting for a non-Hermitian and positive-definite matrix, we introduce a new splitting, called the positive-definite and skew-Hermitian (PS) splitting, and then establish a class of positive-definite and skew-Hermitian splitting (PSS) methods, similar to the Hermitian (or normal) and skew-Hermitian splitting (HSS or NSS) method, for iteratively solving positive definite systems of linear equations. Theoretical analysis shows that the PSS method converges

unconditionally to the exact solution of the linear system, with the upper bound of its convergence factor dependent only on the spectrum of the positive-definite splitting matrix, and independent of the spectrum of the skew-Hermitian splitting matrix as well as of the eigenvectors of all matrices involved. When we specialize the PS splitting to the block-triangular (or triangular) and skew-Hermitian (BTS, or TS) splitting, the PSS method naturally leads to a block-triangular (or triangular) and skew-Hermitian splitting (BTSS, or TSS) iteration method, which may be more practical and efficient than the HSS and the NSS iteration methods. Applications of the BTSS method to linear systems of block two-by-two structure are discussed in detail. Numerical experiments further show the effectiveness of our new methods and the correctness of the corresponding theories.

Keywords. Non-Hermitian matrix, positive definite matrix, triangular matrix, block triangular matrix, Hermitian and skew-Hermitian splitting, splitting iteration method.

AMS subject classifications: 65F10, 65F15, 65F50.

1 Introduction

We consider iterative solution of the large sparse system of linear equations

$$Ax = b, \quad A = (a_{k,j}) \in \mathbb{C}^{n \times n} \text{ nonsingular and } b \in \mathbb{C}^n, \qquad (1)$$

where $A \in \mathbb{C}^{n \times n}$ is a positive definite complex matrix, which may be either Hermitian or non-Hermitian. Denote by $H = \frac{1}{2}(A + A^*)$ and $S = \frac{1}{2}(A - A^*)$ the Hermitian and the skew-Hermitian parts of the matrix $A$, respectively. Then it immediately holds that $A = H + S$, which naturally leads to the Hermitian and skew-Hermitian (HS) splitting of the matrix $A$ [1, 7]. Based on the HS splitting and motivated by the classical alternating direction implicit (ADI) iteration technique [8], Bai, Golub and Ng [1] recently presented the following Hermitian and skew-Hermitian splitting (HSS) iteration method for solving the non-Hermitian positive definite system of linear equations (1):

$$(\alpha I + H) x^{(k+\frac{1}{2})} = (\alpha I - S) x^{(k)} + b, \qquad (\alpha I + S) x^{(k+1)} = (\alpha I - H) x^{(k+\frac{1}{2})} + b,$$

where $\alpha$ is a given positive constant. Then, by further generalizing the HS splitting to the normal and skew-Hermitian (NS) splitting

$$A = N + S_0, \quad N \in \mathbb{C}^{n \times n} \text{ normal and } S_0 \in \mathbb{C}^{n \times n} \text{ skew-Hermitian},$$

they obtained the normal and skew-Hermitian splitting (NSS) iteration method [2], which is of an analogous form to the above HSS scheme, except that the splitting matrices $H$ and $S$ in the HSS iteration are replaced by $N$ and $S_0$, respectively.
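For concreteness, a minimal sketch of the HSS scheme (in Python/NumPy, not part of the original experiments, which were done in MATLAB; dense solves and the helper name are ours, standing in for whatever inner solver one prefers) is:

    import numpy as np

    def hss(A, b, alpha, x0, tol=1e-5, max_it=1000):
        """A minimal HSS sketch: alternate the two shifted half-steps.
        Dense solves are used only for clarity; in practice each half-step
        is handled by a direct or inner iterative method."""
        H = 0.5 * (A + A.conj().T)              # Hermitian part of A
        S = 0.5 * (A - A.conj().T)              # skew-Hermitian part of A
        I = np.eye(A.shape[0])
        x = x0.astype(complex)
        r0 = np.linalg.norm(b - A @ x)
        for k in range(max_it):
            # (alpha*I + H) x^{k+1/2} = (alpha*I - S) x^k + b
            x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
            # (alpha*I + S) x^{k+1} = (alpha*I - H) x^{k+1/2} + b
            x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
            if np.linalg.norm(b - A @ x) <= tol * r0:
                return x, k + 1
        return x, max_it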

It was demonstrated in [1, 2] that both the HSS and the NSS iteration methods converge unconditionally to the unique solution of the non-Hermitian positive definite system of linear equations (1), with bounds on their convergence rates about the same as those of the conjugate gradient [9] (for HSS) and GMRES [10, 11] (for NSS) methods when applied to Hermitian and normal matrices, respectively. Moreover, the upper bounds of their contraction factors depend only on the spectra of the Hermitian (for HSS) and the normal (for NSS) parts, and are independent of the spectra of the skew-Hermitian parts as well as of the eigenvectors of all matrices involved. Numerical results show that both HSS and NSS iteration methods are very effective and robust when used to solve the large sparse positive definite system of linear equations (1). A noticeable property of this class of methods is that, by making use of matrix splittings, they split a general non-Hermitian positive definite linear system into two special sub-systems of linear equations which can be solved effectively by either direct methods (e.g., Cholesky factorization and Bunch decomposition [5, 6]) or iterative methods (e.g., Krylov subspace methods [10], multigrid and multilevel methods). Hence, the HSS and the NSS iteration techniques build a bridge between the classical splitting iteration methods and the modern subspace projection and grid correction methods.

More generally, we know that any Hermitian or non-Hermitian positive definite matrix $A \in \mathbb{C}^{n \times n}$ also possesses a splitting of the form

$$A = P + S, \quad P \in \mathbb{C}^{n \times n} \text{ positive-definite and } S \in \mathbb{C}^{n \times n} \text{ skew-Hermitian}. \qquad (2)$$

That is to say, $A$ has a positive-definite and skew-Hermitian splitting (2), or in short, a PS splitting. By applying the technique used to construct the HSS and NSS iterations, we can establish a class of positive-definite and skew-Hermitian splitting (PSS) iteration methods for solving the positive definite system of linear equations (1). Differently from both HSS and NSS methods, the new PSS method can be used to effectively compute iterative solutions of both Hermitian and non-Hermitian positive definite linear systems. Theoretical analysis shows that the PSS iteration method preserves all properties of both HSS and NSS iteration methods. By specializing the PS splitting to block-triangular (or triangular) and skew-Hermitian (BTS, or TS) splittings, the PSS method directly results in a class of block-triangular (or triangular) and skew-Hermitian splitting (BTSS, or TSS) iteration methods for solving the system of linear equations (1), which may be more practical and efficient than both HSS and NSS iteration methods: for the BTSS (or TSS) method we only need to solve block-triangular (or triangular) linear sub-systems at the first half of each iteration step, rather than to invert shifted positive-definite matrices as in the PSS method or shifted Hermitian (normal) positive-definite matrices as in the HSS (NSS) method. In addition, applications of the BTSS method to linear systems of block two-by-two structure are discussed in detail, and several numerical examples are used to show the effectiveness of our new methods and to examine the correctness of the corresponding theories.

This paper is organized as follows: We establish the PSS iteration method and its convergence theory in Section 2, and describe the BTSS iteration methods in Section 3.
Applications of the BTSS methods to linear systems of block two-by-two structure are discussed in Section 4, and numerical results are listed and analyzed in Section 5. Finally, in Section 6 we briefly give some concluding remarks.

Abbreviated Terms and the Descriptive Phrases

Notation   Description
HS         Hermitian and skew-Hermitian
HSS        Hermitian and skew-Hermitian splitting
NS         normal and skew-Hermitian
NSS        normal and skew-Hermitian splitting
PS         positive-definite and skew-Hermitian
PSS        positive-definite and skew-Hermitian splitting
TS         triangular and skew-Hermitian
TSS        triangular and skew-Hermitian splitting
BTS        block-triangular and skew-Hermitian
BTSS       block-triangular and skew-Hermitian splitting

2 PSS iteration method and its convergence

More precisely, the above-mentioned PSS iteration scheme based on the PS splitting (2) for solving the positive definite system of linear equations (1) can be described as follows:

The PSS Iteration Method. Given an initial guess $x^{(0)} \in \mathbb{C}^n$. For $k = 0, 1, 2, \ldots$ until $\{x^{(k)}\}$ converges, compute

$$(\alpha I + P) x^{(k+\frac{1}{2})} = (\alpha I - S) x^{(k)} + b, \qquad (\alpha I + S) x^{(k+1)} = (\alpha I - P) x^{(k+\frac{1}{2})} + b,$$

where $\alpha$ is a given positive constant.

Evidently, just like the HSS and NSS methods, each step of the PSS iteration alternates between the positive-definite matrix $P$ and the skew-Hermitian matrix $S$, analogously to the classical ADI iteration for partial differential equations [8, 12]. In fact, we can reverse the roles of the matrices $P$ and $S$ in the above PSS iteration, so that we first solve the linear system with coefficient matrix $\alpha I + S$ and then the linear system with coefficient matrix $\alpha I + P$. We easily see that when $P \in \mathbb{C}^{n \times n}$ is normal or Hermitian, the above PSS iteration method reduces to the NSS or the HSS iteration method, accordingly.

In matrix-vector form, the PSS iteration can be equivalently rewritten as

$$x^{(k+1)} = M(\alpha) x^{(k)} + G(\alpha) b, \qquad (3)$$

where

$$M(\alpha) = (\alpha I + S)^{-1} (\alpha I - P)(\alpha I + P)^{-1} (\alpha I - S), \qquad G(\alpha) = 2\alpha (\alpha I + S)^{-1} (\alpha I + P)^{-1}. \qquad (4)$$

Thus, $M(\alpha)$ is the iteration matrix of the PSS iteration.
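As a quick numerical sanity check (a sketch under our own naming; the random splitting below is illustrative, not from the paper), one can verify that the two half-steps reproduce the fixed-point form (3) with $M(\alpha)$ and $G(\alpha)$ as in (4):

    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha = 6, 1.0
    I = np.eye(n)
    R = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))
    Z = rng.standard_normal((n, n))
    P = R @ R.T + n * I + 0.5 * (K - K.T)   # positive definite (its Hermitian part is SPD)
    S = 0.5 * (Z - Z.T)                     # skew-Hermitian (here real skew-symmetric)
    A = P + S                               # a PS splitting (2) of A
    b = rng.standard_normal(n)

    # iteration matrix and generator from (4)
    M = np.linalg.solve(alpha * I + S,
                        (alpha * I - P) @ np.linalg.solve(alpha * I + P, alpha * I - S))
    G = 2 * alpha * np.linalg.solve(alpha * I + S, np.linalg.inv(alpha * I + P))

    x = np.zeros(n)
    for _ in range(5):
        x_half = np.linalg.solve(alpha * I + P, (alpha * I - S) @ x + b)
        x_new = np.linalg.solve(alpha * I + S, (alpha * I - P) @ x_half + b)
        assert np.allclose(x_new, M @ x + G @ b)    # (3) and (4) agree
        x = x_new
    # spectral radius below one, consistent with the convergence theory below
    assert np.max(np.abs(np.linalg.eigvals(M))) < 1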

As a matter of fact, (3) may also result from the splitting

$$A = B(\alpha) - C(\alpha)$$

of the coefficient matrix $A$, with

$$B(\alpha) = \frac{1}{2\alpha}(\alpha I + P)(\alpha I + S) \equiv G(\alpha)^{-1}, \qquad C(\alpha) = \frac{1}{2\alpha}(\alpha I - P)(\alpha I - S) \equiv G(\alpha)^{-1} M(\alpha).$$

To prove convergence of the PSS iteration method, we first demonstrate the following two lemmas.

Lemma 2.1 Let $P \in \mathbb{C}^{n \times n}$ be a positive definite matrix, and define

$$V(\alpha) = (\alpha I - P)(\alpha I + P)^{-1}. \qquad (5)$$

Then it holds that $\|V(\alpha)\|_2 < 1$, $\forall \alpha > 0$.

Proof. Because $P \in \mathbb{C}^{n \times n}$ is a positive definite matrix, i.e., $\frac{1}{2}(P + P^*)$, the Hermitian part of $P$, is a Hermitian positive definite matrix, we know that for any $y \in \mathbb{C}^n \setminus \{0\}$,

$$\langle 2\alpha(P + P^*) y, y \rangle > 0 \quad \text{or} \quad \langle (\alpha P + \alpha P^*) y, y \rangle > \langle (-\alpha P - \alpha P^*) y, y \rangle$$

holds. Here, $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{C}^n$. It then straightforwardly follows that

$$\langle (\alpha^2 I + \alpha P + \alpha P^* + P^* P) y, y \rangle > \langle (\alpha^2 I - \alpha P - \alpha P^* + P^* P) y, y \rangle,$$

or equivalently,

$$\langle (\alpha I + P) y, (\alpha I + P) y \rangle > \langle (\alpha I - P) y, (\alpha I - P) y \rangle. \qquad (6)$$

Let $x = (\alpha I + P) y$. Then we have $x \neq 0$, since $\alpha I + P$ is nonsingular and $y \neq 0$. Now, (6) can be equivalently written as

$$\langle x, x \rangle > \langle V(\alpha) x, V(\alpha) x \rangle.$$

That is to say, it holds that

$$\frac{\|V(\alpha) x\|_2}{\|x\|_2} < 1, \quad \forall \alpha > 0,$$

or in other words,

$$\|V(\alpha)\|_2 < 1, \quad \forall \alpha > 0.$$

Lemma 2.2 Let $S \in \mathbb{C}^{n \times n}$ be a skew-Hermitian matrix. Then for any $\alpha > 0$, (a) $\alpha I + S$ is a non-Hermitian positive definite matrix; and (b) the Cayley transform $Q(\alpha) \equiv (\alpha I - S)(\alpha I + S)^{-1}$ of $S$ is a unitary matrix.

Proof. See [1, 2].

With Lemmas 2.1 and 2.2, we are now ready to prove convergence of the PSS iteration method.

Theorem 2.1 Let $A \in \mathbb{C}^{n \times n}$ be a positive definite matrix, let $M(\alpha)$ defined in (4) be the iteration matrix of the PSS iteration, and let $V(\alpha)$ be the matrix defined in (5). Then the spectral radius $\rho(M(\alpha))$ of $M(\alpha)$ is bounded by $\|V(\alpha)\|_2$. Therefore, it holds that

$$\rho(M(\alpha)) \leq \|V(\alpha)\|_2 < 1, \quad \forall \alpha > 0,$$

i.e., the PSS iteration converges to the exact solution $x_* \in \mathbb{C}^n$ of the system of linear equations (1).

Proof. Because $S \in \mathbb{C}^{n \times n}$ is a skew-Hermitian matrix, from the definition of the PS splitting (2) we know that $P + P^* = A + A^*$. Hence, $P \in \mathbb{C}^{n \times n}$ is also a positive definite matrix. Moreover, by Lemma 2.2 we see that $Q(\alpha) = (\alpha I - S)(\alpha I + S)^{-1}$ is a unitary matrix. Let

$$\widetilde{M}(\alpha) = V(\alpha)(\alpha I - S)(\alpha I + S)^{-1}.$$

Then $\widetilde{M}(\alpha)$ is similar to $M(\alpha)$. Therefore, by applying Lemma 2.1 we have

$$\rho(M(\alpha)) = \rho(\widetilde{M}(\alpha)) \leq \|\widetilde{M}(\alpha)\|_2 = \|V(\alpha)(\alpha I - S)(\alpha I + S)^{-1}\|_2 \leq \|V(\alpha)\|_2 \|Q(\alpha)\|_2 = \|V(\alpha)\|_2 < 1.$$

Theorem 2.1 shows that the PSS iteration method converges unconditionally to the exact solution of the positive definite system of linear equations (1). Moreover, the upper bound of its contraction factor depends only on the spectrum of the positive definite part $P$, and is independent of the spectrum of the skew-Hermitian part $S$ as well as of the eigenvectors of the matrices $P$, $S$ and $A$.

We should point out that two important problems need to be further studied for the PSS iteration method. One is the choice of the skew-Hermitian matrix $S$, i.e., of the splitting of the matrix $A$; the other is the choice of the acceleration parameter $\alpha$. Theoretically, by Theorem 2.1 we can choose $S$ to be any skew-Hermitian matrix such that the matrix $P = A - S$ is positive definite, and $\alpha$ to be any positive constant. Practically, however, besides the above requirements we have to choose the skew-Hermitian matrix $S$ such that the linear systems with coefficient matrices $\alpha I + P$ and $\alpha I + S$ can be solved easily and effectively, and to choose the positive constant $\alpha$ such that the PSS iteration converges very fast. Evidently, these two problems may be very difficult and, usually, their solutions strongly depend on the concrete structures and properties of the coefficient matrix $A$ as well as of the splitting matrices $P$ and $S$.

For a positive definite matrix $A \in \mathbb{C}^{n \times n}$, whether Hermitian or non-Hermitian, one useful choice of its PS splitting matrices $P$ and $S$ is as follows: Let $A = H + S$ be the HS splitting of the matrix $A$, and let $H = D + L_H + L_H^*$ be the (block) triangular splitting of the Hermitian positive definite matrix $H$, where $D$ is the (block) diagonal matrix and $L_H$ is the strictly (block) lower triangular matrix of $H$. Then we can define $P$ and $S$ to be

$$P = D + 2L_H \quad \text{and} \quad S = L_H^* - L_H + S,$$

respectively. This PS splitting directly leads to a special PSS iteration method for solving the system of linear equations (1), which only solves (block) lower triangular linear systems at the first half of each of its iteration steps.
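A minimal sketch of this particular construction (pointwise version; the helper name is ours) which also checks its defining properties:

    import numpy as np

    def triangular_ps_splitting(A):
        """Build P = D + 2*L_H and S = L_H^* - L_H + S_A from the HS splitting
        A = H + S_A, with H = D + L_H + L_H^* (pointwise version)."""
        H = 0.5 * (A + A.conj().T)          # Hermitian part of A
        SA = 0.5 * (A - A.conj().T)         # skew-Hermitian part of A
        D = np.diag(np.diag(H))
        LH = np.tril(H, -1)                 # strictly lower triangular part of H
        P = D + 2.0 * LH                    # lower triangular; Hermitian part is H
        S = LH.conj().T - LH + SA           # skew-Hermitian by construction
        assert np.allclose(P + S, A)        # P + S recovers A
        return P, S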

Alternatively, we can also define $P$ and $S$ to be

$$P = D + 2L_H^* \quad \text{and} \quad S = L_H - L_H^* + S,$$

respectively, which immediately leads to a similar PSS iteration method. In the next section, we will give another two practical choices of the PS splitting. These two special kinds of PS splittings are very basic; technical combinations of them can yield a variety of new positive-definite splittings, and hence many practical PSS iteration methods.

Concerning the positive constant $\alpha$: if $P \in \mathbb{C}^{n \times n}$ is a normal matrix, then we can compute $\alpha^* = \arg\min_{\alpha > 0} \{\|V(\alpha)\|_2\}$ by making use of the formula in Theorem 2.2 of [2]; if $P \in \mathbb{C}^{n \times n}$ is a general positive definite matrix, we do not have any formula for computing a usable $\alpha^*$ and, hence, for the upper bound $\|V(\alpha^*)\|_2$. Usually, it holds that

$$\alpha^* \neq \alpha_{opt} \equiv \arg\min_{\alpha > 0} \{\rho(M(\alpha))\} \quad \text{and} \quad \rho(M(\alpha^*)) > \rho(M(\alpha_{opt})).$$

Therefore, it is important to know how to compute an approximation of $\alpha_{opt}$ as accurately as possible to improve the convergence speed of the method; this is a hard task that needs further in-depth study from the viewpoint of both theory and computations.

3 BTSS iteration methods

Without loss of generality, we assume in this section that the coefficient matrix $A \in \mathbb{C}^{n \times n}$ of the system of linear equations (1) is partitioned into the following block m-by-m form:

$$A = (A_{l,j})_{m \times m} \in \mathbb{C}^{n \times n}, \quad A_{l,j} \in \mathbb{C}^{n_l \times n_j}, \quad l, j = 1, 2, \ldots, m,$$

where $n_l$, $l = 1, 2, \ldots, m$, are positive integers satisfying $\sum_{l=1}^{m} n_l = n$. Let $D$, $L$ and $U$ be the block diagonal, the strictly block lower triangular and the strictly block upper triangular parts of the block matrix $A$, respectively. Then we have

$$A = (L + D + U^*) + (U - U^*) \equiv T_1 + S_1 = (L^* + D + U) + (L - L^*) \equiv T_2 + S_2. \qquad (7)$$

Clearly, $T_1$ and $T_2$ are block lower triangular and block upper triangular matrices, respectively, and both $S_1$ and $S_2$ are skew-Hermitian matrices. We will call the two splittings in (7) block-triangular and skew-Hermitian (BTS) splittings of the matrix $A$. We remark that these two splittings are both PS splittings, because $T_l + T_l^* = A + A^*$ ($l = 1, 2$) and $A \in \mathbb{C}^{n \times n}$ is positive definite. If we make technical combinations of the BTS splitting with the HS or the NS splitting, other interesting and practical cases of the PS splitting can be obtained. For example,

$$A = \left(L + \tfrac{1}{2}(D + D^*) + U^*\right) + \left(\tfrac{1}{2}(D - D^*) + U - U^*\right) \equiv T_3 + S_3 = \left(L^* + \tfrac{1}{2}(D + D^*) + U\right) + \left(\tfrac{1}{2}(D - D^*) + L - L^*\right) \equiv T_4 + S_4 \qquad (8)$$

are two BTS splittings, which come from combinations of the BTS splittings of the matrix $A$ in (7) and the HS splitting of the matrix $D$.
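In the pointwise case (the TS splittings named below), the two splittings in (7) reduce to the following construction (a sketch; the function name is ours):

    import numpy as np

    def ts_splitting(A, l=1):
        """Pointwise instance of (7): l = 1 gives T1 = L + D + U^*, S1 = U - U^*
        (T1 lower triangular); l = 2 gives T2 = L^* + D + U, S2 = L - L^*."""
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)                  # strictly lower triangular part
        U = np.triu(A, 1)                   # strictly upper triangular part
        if l == 1:
            T, S = L + D + U.conj().T, U - U.conj().T
        else:
            T, S = L.conj().T + D + U, L - L.conj().T
        assert np.allclose(T + S, A) and np.allclose(S.conj().T, -S)
        return T, S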

Now, with the choices

$$P = T_l, \quad S = S_l, \quad l = 1, 2, 3, 4,$$

we can immediately define the corresponding block-triangular and skew-Hermitian splitting (BTSS) iteration methods for solving the positive definite system of linear equations (1). We note that for these four BTSS iteration methods, we only need to solve block-triangular linear sub-systems, rather than to invert shifted positive-definite matrices as in the PSS iteration method or shifted Hermitian (normal) positive-definite matrices as in the HSS (NSS) iteration method. Moreover, the block-triangular linear sub-systems can be solved recursively through solutions of the systems of linear equations

$$(\alpha I + A_{j,j}) x_j = r_j, \quad j = 1, 2, \ldots, m \qquad (9)$$

for the BTSS iteration methods associated with the splittings in (7), and

$$\left(\alpha I + \tfrac{1}{2}(A_{j,j} + A_{j,j}^*)\right) x_j = r_j, \quad j = 1, 2, \ldots, m \qquad (10)$$

for those associated with the splittings in (8). Because the splitting matrices $T_l$ ($l = 1, 2, 3, 4$) are positive definite, the block sub-matrices $A_{j,j}$ ($j = 1, 2, \ldots, m$) are also positive definite; in particular, $\tfrac{1}{2}(A_{j,j} + A_{j,j}^*)$ ($j = 1, 2, \ldots, m$) are all Hermitian positive definite matrices. Therefore, we may employ another BTSS iteration to solve the linear sub-systems (9), and the conjugate gradient iteration to solve the linear sub-systems (10), if necessary.

In addition, the matrices $T_l$, $l = 1, 2, 3, 4$, may be much sparser than the matrices $H$ and $N$ in the HSS and NSS methods. For instance, when the matrix $A$ is an upper Hessenberg matrix, $T_l$ and $S_l$, $l = 1, 2, 3, 4$, in the BTSS (or TSS) splittings are still very sparse, but $H$, $S$, and $N$, $S_0$ in the HS and the NS splittings may be very dense. Therefore, the BTSS iteration methods may save considerably more computing cost than both the HSS and the NSS iteration methods. Another advantage of the BTSS iteration methods is that they can be used to solve both Hermitian and strongly non-Hermitian positive definite systems of linear equations more effectively than both HSS and NSS iteration methods. For example, consider the non-Hermitian positive definite system of linear equations $(\alpha I + \hat{S}) z = r$ arising from the HSS, NSS and TSS iteration methods, where $\hat{S} \in \mathbb{C}^{n \times n}$ is a skew-Hermitian matrix, $\alpha$ is a positive constant, and $r \in \mathbb{C}^n$ is a given right-hand-side vector. Both HSS and NSS iteration methods cannot be used to solve it; the BTSS iteration method, however, may solve it very efficiently. This shows that the BTSS iteration methods have a large application area.

When $D$, $L$ and $U$ are the (pointwise) diagonal, the (pointwise) strictly lower triangular and the (pointwise) strictly upper triangular parts of the matrix $A$, we call the BTS splitting a triangular and skew-Hermitian (TS) splitting, and the BTSS iteration method a triangular and skew-Hermitian splitting (TSS) iteration method. We remark that both BTSS and TSS iteration methods are, in general, different from the HSS and the NSS iteration methods. Only when $D$ is Hermitian (normal) and $L + U^* = 0$ do the BTSS and the TSS methods give the same scheme as the HSS (NSS) method.
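The recursion behind (9) is ordinary block forward substitution; the sketch below (our own helper, written for a block lower triangular T as in the l = 1 splitting) makes the per-block sub-systems explicit:

    import numpy as np

    def solve_shifted_block_lower(alpha, T, sizes, r):
        """Solve (alpha*I + T) x = r for block lower triangular T by block
        forward substitution; each diagonal solve is a sub-system (9)."""
        offs = np.concatenate(([0], np.cumsum(sizes)))
        x = np.zeros(len(r), dtype=complex)
        for j in range(len(sizes)):
            i0, i1 = offs[j], offs[j + 1]
            rhs = r[i0:i1] - T[i0:i1, :i0] @ x[:i0]   # already-computed blocks
            Tjj = T[i0:i1, i0:i1]                      # diagonal block A_{j,j}
            x[i0:i1] = np.linalg.solve(alpha * np.eye(i1 - i0) + Tjj, rhs)
        return x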

The following investigation describes formulae for approximately estimating the acceleration parameter $\alpha^*$ for the BTSS iteration methods. For $l = 1, 2, 3, 4$, let $T_l = D_l + G_l$, where

$$D_l = \begin{cases} D, & \text{for } l = 1, 2, \\ \tfrac{1}{2}(D + D^*), & \text{for } l = 3, 4, \end{cases} \qquad G_l = \begin{cases} L + U^*, & \text{for } l = 1, 3, \\ L^* + U, & \text{for } l = 2, 4. \end{cases}$$

Obviously, $D_l$ ($l = 1, 2, 3, 4$) are block diagonal matrices, and $G_l$ ($l = 1, 2, 3, 4$) are strictly block lower (for $l = 1, 3$) or upper (for $l = 2, 4$) triangular matrices. Therefore,

$$(\alpha I + T_l)^{-1} = ((\alpha I + D_l) + G_l)^{-1} = \left[\left(I + G_l(\alpha I + D_l)^{-1}\right)(\alpha I + D_l)\right]^{-1} = \left[(\alpha I + D_l)\left(I + (\alpha I + D_l)^{-1} G_l\right)\right]^{-1}$$
$$= (\alpha I + D_l)^{-1} \sum_{j=0}^{m-1} (-1)^j \left[G_l(\alpha I + D_l)^{-1}\right]^j = \sum_{j=0}^{m-1} (-1)^j \left[(\alpha I + D_l)^{-1} G_l\right]^j (\alpha I + D_l)^{-1}.$$

Here, the last two equalities hold since $\left[G_l(\alpha I + D_l)^{-1}\right]^m = \left[(\alpha I + D_l)^{-1} G_l\right]^m = 0$. It then follows that

$$V_l(\alpha) \equiv (\alpha I - T_l)(\alpha I + T_l)^{-1} = I - 2T_l(\alpha I + T_l)^{-1} = I - 2(D_l + G_l)(\alpha I + T_l)^{-1}$$
$$= (\alpha I - D_l)(\alpha I + D_l)^{-1} + 2\alpha \sum_{j=1}^{m-1} (-1)^j (\alpha I + D_l)^{-1} \left[G_l(\alpha I + D_l)^{-1}\right]^j \approx (\alpha I - D_l)(\alpha I + D_l)^{-1}$$

(a first-order approximation), and

$$\|V_l(\alpha)\|_2 \approx \left\|(\alpha I - D_l)(\alpha I + D_l)^{-1}\right\|_2 = \begin{cases} \max\limits_{1 \le j \le m} \left\|(\alpha I - A_{j,j})(\alpha I + A_{j,j})^{-1}\right\|_2, & \text{for } l = 1, 2, \\ \max\limits_{1 \le j \le m} \left\|\left(\alpha I - \tfrac{1}{2}(A_{j,j} + A_{j,j}^*)\right)\left(\alpha I + \tfrac{1}{2}(A_{j,j} + A_{j,j}^*)\right)^{-1}\right\|_2, & \text{for } l = 3, 4. \end{cases}$$

For $l = 3, 4$, because $\tfrac{1}{2}(A_{j,j} + A_{j,j}^*)$ ($j = 1, 2, \ldots, m$) are Hermitian matrices, we immediately have

$$\alpha^* = \arg\min_{\alpha > 0} \|V_l(\alpha)\|_2 \quad (l = 3, 4)$$
$$\approx \arg\min_{\alpha > 0} \left\{ \max_{1 \le j \le m} \left\|\left(\alpha I - \tfrac{1}{2}(A_{j,j} + A_{j,j}^*)\right)\left(\alpha I + \tfrac{1}{2}(A_{j,j} + A_{j,j}^*)\right)^{-1}\right\|_2 \right\}$$
$$= \arg\min_{\alpha > 0} \left\{ \max_{1 \le j \le m} \max_{1 \le k \le n_j} \frac{|\alpha - \lambda_k^{(j)}|}{\alpha + \lambda_k^{(j)}} \right\} = \sqrt{\lambda_{\min}^{(j_0)} \lambda_{\max}^{(j_0)}},$$

where $\lambda_k^{(j)}$ ($k = 1, 2, \ldots, n_j$) are the eigenvalues of the matrix $\tfrac{1}{2}(A_{j,j} + A_{j,j}^*)$, $\lambda_{\min}^{(j_0)}$ and $\lambda_{\max}^{(j_0)}$ are the smallest and the largest eigenvalues of the matrix $\tfrac{1}{2}(A_{j_0,j_0} + A_{j_0,j_0}^*)$, $j_0 = \arg\max_{1 \le j \le m} \kappa(\tfrac{1}{2}(A_{j,j} + A_{j,j}^*))$, and $\kappa(\cdot)$ denotes the spectral condition number of the corresponding matrix. In particular, when $m = n$ and the $A_{j,j}$ ($j = 1, 2, \ldots, m$) are scalar entries, we have for $l = 1, 2, 3, 4$ that

$$\alpha^* = \arg\min_{\alpha > 0} \max_{1 \le j \le n} \frac{|\alpha - a_{j,j}|}{\alpha + a_{j,j}} = \sqrt{a_{\min} a_{\max}}, \quad \text{where} \quad a_{\min} = \min_{1 \le j \le n} \{a_{j,j}\}, \quad a_{\max} = \max_{1 \le j \le n} \{a_{j,j}\}.$$
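These closed-form estimates are cheap to evaluate; a sketch (with our own helper names) for both the pointwise case and the blocked l = 3, 4 case:

    import numpy as np

    def alpha_star_pointwise(A):
        """alpha* = sqrt(a_min * a_max) over the diagonal entries (m = n case)."""
        d = np.real(np.diag(A))
        return np.sqrt(d.min() * d.max())

    def alpha_star_blocks(A, sizes):
        """alpha* = sqrt(lambda_min * lambda_max) of the Hermitian part of the
        worst-conditioned diagonal block (the l = 3, 4 estimate above)."""
        offs = np.concatenate(([0], np.cumsum(sizes)))
        kappa_max, alpha = -np.inf, None
        for j in range(len(sizes)):
            Ajj = A[offs[j]:offs[j + 1], offs[j]:offs[j + 1]]
            lam = np.linalg.eigvalsh(0.5 * (Ajj + Ajj.conj().T))
            if lam.max() / lam.min() > kappa_max:
                kappa_max = lam.max() / lam.min()
                alpha = np.sqrt(lam.min() * lam.max())
        return alpha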

4 Applications

We now give applications of the BTSS iteration methods to systems of linear equations (1) whose coefficient matrices possess the block two-by-two structure

$$A = \begin{bmatrix} W & F \\ E & N \end{bmatrix},$$

where $W \in \mathbb{C}^{q \times q}$ and $N \in \mathbb{C}^{(n-q) \times (n-q)}$ are positive definite complex sub-matrices such that $A \in \mathbb{C}^{n \times n}$ is a positive definite matrix. Without loss of generality, we may further assume that both $W$ and $N$ are Hermitian, because otherwise we can consider the BTSS iteration methods induced by the BTS splittings in (8). According to (7) we know that the BTS splittings of the block two-by-two matrix $A \in \mathbb{C}^{n \times n}$ are

$$A = \begin{bmatrix} W & 0 \\ E + F^* & N \end{bmatrix} + \begin{bmatrix} 0 & F \\ -F^* & 0 \end{bmatrix} \equiv T_1 + S_1 = \begin{bmatrix} W & E^* + F \\ 0 & N \end{bmatrix} + \begin{bmatrix} 0 & -E^* \\ E & 0 \end{bmatrix} \equiv T_2 + S_2$$
$$= \begin{bmatrix} \frac{1}{2}(W + W^*) & 0 \\ E + F^* & \frac{1}{2}(N + N^*) \end{bmatrix} + \begin{bmatrix} \frac{1}{2}(W - W^*) & F \\ -F^* & \frac{1}{2}(N - N^*) \end{bmatrix} \equiv T_3 + S_3 = \begin{bmatrix} \frac{1}{2}(W + W^*) & E^* + F \\ 0 & \frac{1}{2}(N + N^*) \end{bmatrix} + \begin{bmatrix} \frac{1}{2}(W - W^*) & -E^* \\ E & \frac{1}{2}(N - N^*) \end{bmatrix} \equiv T_4 + S_4.$$

Given an initial guess $x^{(0)} \in \mathbb{C}^n$, the BTSS iterations compute the sequence $\{x^{(k)}\}$ as follows:

$$(\alpha I + T_l) x^{(k+\frac{1}{2})} = (\alpha I - S_l) x^{(k)} + b, \qquad (\alpha I + S_l) x^{(k+1)} = (\alpha I - T_l) x^{(k+\frac{1}{2})} + b, \qquad l = 1, 2, 3, 4.$$

As an example, in the following we only investigate the BTSS iteration for the case $l = 1$, because the other three cases can be discussed analogously. Moreover, without loss of generality, we could assume that both $W$ and $N$ are Hermitian if necessary, because otherwise we can consider the BTSS iteration methods induced by the BTS splittings for $l = 3, 4$. Let $x$ and $b$ be correspondingly partitioned into blocks as

$$x = \begin{bmatrix} u \\ p \end{bmatrix}, \qquad b = \begin{bmatrix} f \\ g \end{bmatrix},$$

where $u, f \in \mathbb{C}^q$ and $p, g \in \mathbb{C}^{n-q}$. The first half-step of the BTSS iteration requires the solution of linear systems of the form

$$(\alpha I + W) u^{(k+\frac{1}{2})} = \alpha u^{(k)} + f - F p^{(k)}. \qquad (11)$$

Once the solution $u^{(k+\frac{1}{2})}$ of (11) has been obtained, we compute

$$(\alpha I + N) p^{(k+\frac{1}{2})} = -(E + F^*) u^{(k+\frac{1}{2})} + \alpha p^{(k)} + g + F^* u^{(k)}. \qquad (12)$$

Note that the coefficient matrices in (11) and (12) are positive definite or Hermitian positive definite if the matrices $W$ and $N$ are, respectively. Therefore, the linear systems (11) and (12) can be solved by any algorithm for positive definite systems (e.g., a PSS method) when the matrices $W$ and $N$ are positive definite, or by any algorithm for Hermitian positive definite systems (e.g., a sparse Cholesky factorization or the conjugate gradient method [9]) when the matrices $W$ and $N$ are Hermitian positive definite.

The second half-step of the BTSS iteration requires the solution of linear systems of the form

$$\begin{cases} \alpha u^{(k+1)} + F p^{(k+1)} = (\alpha I - W) u^{(k+\frac{1}{2})} + f, \\ -F^* u^{(k+1)} + \alpha p^{(k+1)} = -(E + F^*) u^{(k+\frac{1}{2})} + (\alpha I - N) p^{(k+\frac{1}{2})} + g. \end{cases} \qquad (13)$$

This linear system can be solved in various ways, including the CG-like method discussed in [7, 10] and the BTSS (or TSS) method itself. Besides, when $n \leq 2q$, we may first solve the Hermitian positive definite system of linear equations

$$(\alpha^2 I + F^* F) p^{(k+1)} = -(\alpha E + F^* W) u^{(k+\frac{1}{2})} + \alpha(\alpha I - N) p^{(k+\frac{1}{2})} + F^* f + \alpha g,$$

and then compute

$$u^{(k+1)} = \frac{1}{\alpha}\left(-F p^{(k+1)} + (\alpha I - W) u^{(k+\frac{1}{2})} + f\right).$$
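One full BTSS sweep (l = 1) for this block two-by-two case, using the small Schur-type elimination just described for the second half-step, might look as follows (a sketch; names and the dense solves are ours):

    import numpy as np

    def btss_sweep_l1(W, F, E, N, u, p, f, g, alpha):
        """(11)-(13) for A = [[W, F], [E, N]]; the second half-step solves the
        (n - q)-dimensional Hermitian positive definite system in p (n <= 2q)."""
        Iq, Im = np.eye(W.shape[0]), np.eye(N.shape[0])
        FH = F.conj().T
        # (11): (alpha*I + W) u_half = alpha*u + f - F p
        u_half = np.linalg.solve(alpha * Iq + W, alpha * u + f - F @ p)
        # (12): (alpha*I + N) p_half = -(E + F^*) u_half + alpha*p + g + F^* u
        p_half = np.linalg.solve(alpha * Im + N,
                                 -(E + FH) @ u_half + alpha * p + g + FH @ u)
        # eliminated form of (13): (alpha^2 I + F^* F) p_new = ...
        rhs = (-(alpha * E + FH @ W) @ u_half
               + alpha * ((alpha * Im - N) @ p_half) + FH @ f + alpha * g)
        p_new = np.linalg.solve(alpha**2 * Im + FH @ F, rhs)
        u_new = (-(F @ p_new) + (alpha * Iq - W) @ u_half + f) / alpha
        return u_new, p_new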

And when $n \geq 2q$, we may first solve the Hermitian positive definite system of linear equations

$$(\alpha^2 I + F F^*) u^{(k+1)} = \left(\alpha(\alpha I - W) + F(E + F^*)\right) u^{(k+\frac{1}{2})} - F(\alpha I - N) p^{(k+\frac{1}{2})} + \alpha f - F g,$$

and then compute

$$p^{(k+1)} = \frac{1}{\alpha}\left(F^* u^{(k+1)} - (E + F^*) u^{(k+\frac{1}{2})} + (\alpha I - N) p^{(k+\frac{1}{2})} + g\right).$$

We remark that, unlike the HSS and the PHSS iteration methods in [3, 4], the BTSS iteration methods are divergent for any $\alpha > 0$ when they are applied to solve the saddle-point problem

$$Ax \equiv \begin{bmatrix} W & F \\ -F^* & 0 \end{bmatrix} \begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix} \equiv b,$$

where $W \in \mathbb{C}^{q \times q}$ is Hermitian positive definite, $F \in \mathbb{C}^{q \times (n-q)}$ is of full column rank, and $2q \geq n$. More concretely, when $W = I$, following an analogous derivation to that in [3] we obtain that the eigenvalues of the BTSS iteration matrix

$$M_1(\alpha) = (\alpha I + S_1)^{-1} (\alpha I - T_1)(\alpha I + T_1)^{-1} (\alpha I - S_1)$$

are $\frac{\alpha}{\alpha+1}$ with multiplicity $2q - n$, and

$$\frac{1}{(\alpha+1)(\alpha^2 + \sigma_k^2)} \left( \alpha(\alpha^2 + 3\sigma_k^2) \pm \sqrt{(\alpha^2 + \sigma_k^2)^2 + 4\alpha^2\sigma_k^2(\alpha^2 + 2\sigma_k^2)} \right), \quad k = 1, 2, \ldots, n-q,$$

where $\sigma_k$ ($k = 1, 2, \ldots, n-q$) are the positive singular values of the matrix $F$. Therefore,

$$\rho(M_1(\alpha)) = \max_{1 \le k \le n-q} \frac{\alpha(\alpha^2 + 3\sigma_k^2) + \sqrt{(\alpha^2 + \sigma_k^2)^2 + 4\alpha^2\sigma_k^2(\alpha^2 + 2\sigma_k^2)}}{(\alpha+1)(\alpha^2 + \sigma_k^2)} > 1$$

for any $\alpha > 0$. However, we see that all eigenvalues of the iteration matrix $M_1(\alpha)$ are real: the eigenvalues $\frac{\alpha}{\alpha+1}$ and

$$\frac{1}{(\alpha+1)(\alpha^2 + \sigma_k^2)} \left( \alpha(\alpha^2 + 3\sigma_k^2) - \sqrt{(\alpha^2 + \sigma_k^2)^2 + 4\alpha^2\sigma_k^2(\alpha^2 + 2\sigma_k^2)} \right), \quad k = 1, 2, \ldots, n-q,$$

lie within the interval $(-1, 1)$, while the other eigenvalues

$$\frac{1}{(\alpha+1)(\alpha^2 + \sigma_k^2)} \left( \alpha(\alpha^2 + 3\sigma_k^2) + \sqrt{(\alpha^2 + \sigma_k^2)^2 + 4\alpha^2\sigma_k^2(\alpha^2 + 2\sigma_k^2)} \right), \quad k = 1, 2, \ldots, n-q,$$

lie within the interval $(1, +\infty)$.

5 Numerical experiments

We use three examples to numerically examine the feasibility and effectiveness of our new methods. All our tests are started from random vectors, performed in MATLAB with machine precision $10^{-16}$, and terminated when the current iterate satisfies $\|r^{(k)}\|_2 / \|r^{(0)}\|_2 < 10^{-5}$, where $r^{(k)}$ is the residual of the current, say $k$-th, iteration. The right-hand-side vector $b$ is computed from $b = A x_*$, where the exact solution $x_*$ of the system of linear equations (1) is a randomly generated vector.
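For concreteness, this experimental protocol can be phrased as a small harness (a sketch with our own names, the original experiments being in MATLAB; `sweep` stands for one full two-half-step iteration of whichever method is tested):

    import numpy as np

    def run_test(A, sweep, tol=1e-5, max_it=10000, seed=0):
        """Random exact solution and starting vector, b = A x_*, stop when
        ||r_k||_2 / ||r_0||_2 < 1e-5; returns the final iterate and IT."""
        rng = np.random.default_rng(seed)
        x_star = rng.standard_normal(A.shape[0])
        b = A @ x_star
        x = rng.standard_normal(A.shape[0])
        r0 = np.linalg.norm(b - A @ x)
        for k in range(1, max_it + 1):
            x = sweep(x, b)
            if np.linalg.norm(b - A @ x) < tol * r0:
                return x, k
        return x, max_it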

Example 5.1 [1] Consider the system of linear equations (1), for which $A \in \mathbb{R}^{n \times n}$ is the upwind difference matrix of the two-dimensional convection-diffusion equation

$$-(u_{xx} + u_{yy}) + q \exp(x+y)(x u_x + y u_y) = f(x, y)$$

on the unit square $\Omega = [0,1] \times [0,1]$ with homogeneous Dirichlet boundary conditions. The stepsizes along the $x$ and $y$ directions are the same, i.e., $h = \frac{1}{m+1}$. In our computations, we use the TSS iteration resulting from the TS splitting $A = T_1 + S_1$ in (7) as the test method.

In Figures 1 and 2, we plot the eigenvalues of the iteration matrices of both HSS and TSS methods when $q = 1$ and $m = 24, 32$, respectively. It is clear that the eigenvalue distributions of the HSS and the TSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are tightly clustered around the real axis and a circular arc in the complex plane, while those of the TSS iteration matrix are closely clustered around the real axis and a circle; however, the distribution domains of the eigenvalues of the two iteration matrices are very similar. This observation is further confirmed by Figures 3 and 4, for which $m = 32$ and $q = 6, 9$, respectively. In particular, from these figures we can see that when $q$ becomes larger, the shapes and domains of the eigenvalue distributions of the HSS and the TSS iteration matrices become more similar.

In Table 1, we list the experimentally optimal parameters $\alpha_{exp}$ and the corresponding spectral radii $\rho(M(\alpha_{exp}))$ of the iteration matrices $M(\alpha_{exp})$ of both TSS and HSS methods for several $m$ when $q = 1$. The data show that as $m$ increases, $\alpha_{exp}$ decreases while $\rho(M(\alpha_{exp}))$ increases, for both TSS and HSS methods. Both $\alpha_{exp}$ and $\rho(M(\alpha_{exp}))$ of TSS are larger than those of HSS, correspondingly, for all of our tested $m$. This straightforwardly implies that the number of iteration steps (IT) of TSS may be larger than that of HSS. However, because TSS has much less computational workload than HSS at each iteration step, the actual computing time (CPU) of TSS may be less than that of HSS. These facts are confirmed by the numerical results in Table 2. In fact, the speed-up of TSS with respect to HSS is quite noticeable, where we define

$$\text{speed-up} = \frac{\text{CPU of HSS method}}{\text{CPU of TSS (or BTSS) method}}.$$

In Table 2, the speed-up is at least 1.52 for $m = 64$, and it even achieves 3.76 for $m = 24$.

Table 1: $\alpha_{exp}$ versus $\rho(M(\alpha_{exp}))$ for Example 5.1 ($q = 1$). Columns: $m$; TSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$; HSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$.
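For reference, one standard first-order upwind discretization of this problem is sketched below (the h^2 scaling, the backward differences for the nonnegative convection coefficients, and the row-wise ordering are our assumptions; the paper does not spell these details out):

    import numpy as np

    def convdiff_upwind(m, qcoef):
        """Five-point Laplacian plus first-order upwind convection on an
        m-by-m interior grid of the unit square, h = 1/(m+1)."""
        h = 1.0 / (m + 1)
        n = m * m
        A = np.zeros((n, n))
        idx = lambda i, j: i * m + j            # natural (row-wise) ordering
        for i in range(m):                      # y-index
            for j in range(m):                  # x-index
                x, y = (j + 1) * h, (i + 1) * h
                cx = qcoef * np.exp(x + y) * x  # nonnegative => backward difference
                cy = qcoef * np.exp(x + y) * y
                k = idx(i, j)
                A[k, k] = 4.0 / h**2 + (cx + cy) / h
                if j > 0:
                    A[k, idx(i, j - 1)] = -1.0 / h**2 - cx / h
                if j < m - 1:
                    A[k, idx(i, j + 1)] = -1.0 / h**2
                if i > 0:
                    A[k, idx(i - 1, j)] = -1.0 / h**2 - cy / h
                if i < m - 1:
                    A[k, idx(i + 1, j)] = -1.0 / h**2
        return A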

Figure 1: The eigenvalue distributions of the HSS iteration matrices when $q = 1$, and $m = 24$ (left) and $m = 32$ (right), for Example 5.1.

Figure 2: The eigenvalue distributions of the TSS iteration matrices when $q = 1$, and $m = 24$ (left) and $m = 32$ (right), for Example 5.1.

As we have mentioned in Sections 2 and 3, computing the optimal parameter $\alpha_{opt}$ for both HSS and TSS is very difficult and usually problem-dependent. In Figure 5 we depict the curves of $\rho(M(\alpha))$ with respect to $\alpha$ and of $\rho(M(\alpha_{exp}))$ with respect to $q$, and in Figure 6 we depict the curves of $\alpha_{exp}$ with respect to $q$, when $m = 32$, for both TSS and HSS iteration methods, to intuitively show these functional relationships. Evidently, from Figure 5 we see that $\rho(M(\alpha))$ attains its minimum at about $\alpha = \alpha_{exp}$, that $\rho(M(\alpha_{exp}))$ monotonically decreases as $q$ increases, and that both $\rho(M(\alpha))$ and $\rho(M(\alpha_{exp}))$ for TSS are larger than those for HSS for all $\alpha$ and $q$, respectively. These facts are further confirmed by the numerical results listed in Table 3. It then follows that IT of TSS will be larger than that of HSS. However, because TSS has much less computational workload than HSS at each iteration step, the actual CPU of TSS may be less than that of HSS, which straightforwardly implies that TSS is, practically, more efficient than HSS. These facts are confirmed by the numerical results in Table 4. In fact, in Table 4 the speed-up of TSS with respect to HSS is quite noticeable; it is at least 1.78 for all tested values of $q$ except for $q = 5$, whose speed-up is 1.5, and it even achieves 2.8 for $q = 3$.

Figure 3: The eigenvalue distributions of the HSS iteration matrices when $m = 32$, and $q = 6$ (left) and $q = 9$ (right), for Example 5.1.

Figure 4: The eigenvalue distributions of the TSS iteration matrices when $m = 32$, and $q = 6$ (left) and $q = 9$ (right), for Example 5.1.

In addition, from Figure 6 we observe that $\alpha_{exp}$ monotonically increases as $q$ increases for both TSS and HSS iteration methods, that $\alpha_{exp}$ for TSS is larger than that for HSS for all $q$, and that the gap between the $\alpha_{exp}$'s for TSS and HSS also becomes larger as $q$ becomes larger.

In Figure 7 we depict the curves of IT and CPU with respect to $q$ for both TSS and HSS iteration methods. We see that for small $q$, say $q < 4$, IT of TSS is less than that of HSS, while for large $q$, say $q \geq 4$, the situation is just reversed. However, CPU of TSS is always less than that of HSS for all of our tested $q$. This clearly shows that TSS is much more effective than HSS in actual computations.

Table 2: IT and CPU for Example 5.1 ($q = 1$). Columns: $m$; TSS IT, CPU; HSS IT, CPU; speed-up.

Figure 5: Curves of $\rho(M(\alpha))$ versus $\alpha$ with $q = 1$ (left) and of $\rho(M(\alpha_{exp}))$ versus $q$ (right) for the HSS and the TSS iteration matrices when $m = 32$, for Example 5.1.

Example 5.2 Consider the system of linear equations (1), for which $A = (a_{k,j}) \in \mathbb{R}^{n \times n}$ is defined as follows:

$$a_{k,j} = \begin{cases} \dfrac{1}{(k+j-1)(|k-j|+1)}, & \text{for } \max\{1, k - l_b\} \leq j \leq \min\{n, k + l_u\}, \\ 0, & \text{otherwise}, \end{cases} \qquad k, j = 1, 2, \ldots, n,$$

where $l_b$ and $l_u$ are the lower and the upper half-bandwidths of the matrix $A$. Note that $A \in \mathbb{R}^{n \times n}$ is a truncated Hilbert-like matrix, and it is non-symmetric positive definite when $l_b \neq l_u$. When $l_b > l_u$ (or $l_b < l_u$), we use the TSS iteration method induced by the TS splitting with $l = 1$ (or $l = 2$) in (7) to solve the system of linear equations (1). In our computations, we choose $l_u = 1$ and $l_b = n$, so that the coefficient matrix $A$ is an $n$-by-$n$ lower Hessenberg matrix.
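Assembling this test matrix is straightforward (a sketch following the entry formula as reconstructed above; indices are 1-based in the formula, 0-based in the code):

    import numpy as np

    def hilbert_like(n, lb, lu):
        """Truncated Hilbert-like matrix of Example 5.2:
        a_kj = 1/((k+j-1)*(|k-j|+1)) inside the band, zero outside."""
        A = np.zeros((n, n))
        for k in range(1, n + 1):
            for j in range(max(1, k - lb), min(n, k + lu) + 1):
                A[k - 1, j - 1] = 1.0 / ((k + j - 1) * (abs(k - j) + 1))
        return A

    A = hilbert_like(8, lb=8, lu=1)    # lower Hessenberg when lu = 1, lb = n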

Figure 6: Curves of $\alpha_{exp}$ versus $q$ for the HSS and the TSS iteration matrices when $m = 32$, for Example 5.1.

Table 3: $\alpha_{exp}$ and $\rho(M(\alpha_{exp}))$ when $m = 32$ for Example 5.1. Columns: $q$; TSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$; HSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$.

In Figures 8 and 9, we plot the eigenvalues of the iteration matrices of both HSS and TSS methods when $n = 4$ and $n = 8$, respectively. Again, it is clear that the eigenvalue distributions of the HSS and the TSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are clustered around the real axis and a circle in the complex plane, while those of the TSS iteration matrix are clustered around the real axis and several circles; however, the distribution domain of the eigenvalues of the TSS iteration matrix is considerably smaller than that of the HSS iteration matrix, in particular, along the direction of the imaginary axis.

In Table 5, we list the experimentally optimal parameters $\alpha_{exp}$ and the corresponding spectral radii $\rho(M(\alpha_{exp}))$ of the iteration matrices $M(\alpha_{exp})$ of both TSS and HSS methods for several $n$. The data show that as $n$ increases, $\alpha_{exp}$ decreases while $\rho(M(\alpha_{exp}))$ increases, for both TSS and HSS methods. The $\rho(M(\alpha_{exp}))$ of TSS are slightly larger than those of HSS, correspondingly, for all our tested $n$. This implies that IT of TSS may be comparable with that of HSS. Because TSS has a much smaller computational workload than HSS at each iteration step, the actual CPU of TSS may be much less than that of HSS. Therefore, TSS will be much more efficient than HSS. This fact is further confirmed by the numerical results in Table 6. In Table 6 the speed-up is at least 1.45 for $n = 8$, and it even achieves 1.77.

In Figure 10 we depict the curves of $\rho(M(\alpha))$ with respect to $\alpha$ when $n = 8$, for both TSS and HSS iteration methods, to intuitively show this functional relationship. Evidently, we see that $\rho(M(\alpha))$ attains its minimum at about $\alpha = \alpha_{exp}$, and that the spectral radius of the TSS iteration matrix is almost equal to that of the HSS iteration matrix once $\alpha$ becomes larger than the $\alpha_{exp}$ of TSS.

Table 4: IT and CPU when $m = 32$ for Example 5.1. Columns: $q$; TSS IT, CPU; HSS IT, CPU; speed-up.

Figure 7: Curves of IT versus $q$ (left) and CPU versus $q$ (right) for the HSS and the TSS iteration methods when $\alpha = \alpha_{exp}$ and $m = 32$, for Example 5.1.

Example 5.3 Consider the system of linear equations (1), for which

$$A = \begin{bmatrix} W & F\Omega \\ -F^T & N \end{bmatrix}, \quad \text{where } W \in \mathbb{R}^{q \times q} \text{ and } N, \Omega \in \mathbb{R}^{(n-q) \times (n-q)}, \text{ with } 2q > n.$$

We define the matrices $W = (w_{k,j})$, $N = (n_{k,j})$, $F = (f_{k,j})$ and $\Omega = \mathrm{diag}(\omega_1, \ldots, \omega_{n-q})$ as follows:

$$w_{k,j} = \begin{cases} k + 1, & \text{for } j = k, \\ 1, & \text{for } |k - j| = 1, \\ 0, & \text{otherwise}, \end{cases} \qquad k, j = 1, 2, \ldots, q,$$

$$n_{k,j} = \begin{cases} k + 1, & \text{for } j = k, \\ 1, & \text{for } |k - j| = 1, \\ 0, & \text{otherwise}, \end{cases} \qquad k, j = 1, 2, \ldots, n - q,$$

$$f_{k,j} = \begin{cases} j, & \text{for } k = j + 2q - n, \\ 0, & \text{otherwise}, \end{cases} \qquad k = 1, 2, \ldots, q; \; j = 1, 2, \ldots, n - q,$$

and $\omega_k = \frac{1}{k}$, $k = 1, 2, \ldots, n - q$.

Figure 8: The eigenvalue distributions of the HSS iteration matrices when $n = 4$ (left) and $n = 8$ (right), for Example 5.2.

Figure 9: The eigenvalue distributions of the TSS iteration matrices when $n = 4$ (left) and $n = 8$ (right), for Example 5.2.

Note that $A \in \mathbb{R}^{n \times n}$ is a non-symmetric positive definite matrix. We use the BTSS iteration method defined by (11)-(13) to solve the system of linear equations (1). In our computations, we choose $q = \frac{9}{10} n$.
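A minimal assembly sketch of this test matrix (the function name is ours; the placement of $\Omega$ in the (1,2) block follows our reconstruction of the display above and should be read as an assumption):

    import numpy as np

    def example53_matrix(n, q):
        """Assemble A = [[W, F*Omega], [-F^T, N]] with the stencils above."""
        m = n - q
        W = np.diag(np.arange(2.0, q + 2)) + np.eye(q, k=1) + np.eye(q, k=-1)
        N = np.diag(np.arange(2.0, m + 2)) + np.eye(m, k=1) + np.eye(m, k=-1)
        F = np.zeros((q, m))
        for j in range(1, m + 1):
            k = j + 2 * q - n              # f_{k,j} = j when k = j + 2q - n
            if 1 <= k <= q:                # always true when 2q > n
                F[k - 1, j - 1] = j
        Omega = np.diag(1.0 / np.arange(1.0, m + 1))
        return np.block([[W, F @ Omega], [-F.T, N]])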

Table 5: $\alpha_{exp}$ and $\rho(M(\alpha_{exp}))$ for Example 5.2. Columns: $n$; TSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$; HSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$.

Table 6: IT and CPU for Example 5.2. Columns: $n$; TSS IT, CPU; HSS IT, CPU; speed-up.

In Figures 11 and 12, we plot the eigenvalues of the iteration matrices of both HSS and BTSS methods when $n = 4$ and $n = 8$, respectively. Analogously, it is clear that the eigenvalue distributions of the HSS and the BTSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are clustered on the real axis and a circular arc in the complex plane, while those of the BTSS iteration matrix are clustered on the real axis and a circular arc as well as located in a triangular area; however, the distribution domain of the eigenvalues of the BTSS iteration matrix is considerably smaller than that of the HSS iteration matrix, in particular, along the direction of the imaginary axis.

In Table 7, we list the experimentally optimal parameters $\alpha_{exp}$ and the corresponding spectral radii $\rho(M(\alpha_{exp}))$ of the iteration matrices $M(\alpha_{exp})$ of both BTSS and HSS methods for several $n$. The data show that as $n$ increases, $\alpha_{exp}$ and $\rho(M(\alpha_{exp}))$ increase for both BTSS and HSS methods. The $\rho(M(\alpha_{exp}))$ of BTSS is slightly larger than that of HSS, correspondingly, for all our tested $n$. This straightforwardly implies that IT of BTSS may be comparable to that of HSS. Because BTSS has much less computational workload than HSS at each iteration step, the actual CPU of BTSS may be much less than that of HSS. Therefore, BTSS will be much more efficient than HSS. This fact is further confirmed by the numerical results in Table 8. In Table 8 the speed-up is at least 1.75 for $n = 8$, and it even achieves 2.4 for $n = 2$.

Table 7: $\alpha_{exp}$ and $\rho(M(\alpha_{exp}))$ for Example 5.3. Columns: $n$; BTSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$; HSS $\alpha_{exp}$, $\rho(M(\alpha_{exp}))$.

In Figure 13 we depict the curves of $\rho(M(\alpha))$ with respect to $\alpha$ when $n = 8$, for both BTSS and HSS iteration methods, to intuitively show this functional relationship. Evidently, we see that $\rho(M(\alpha))$ attains its minimum at about $\alpha = \alpha_{exp}$, and that the spectral radius of the BTSS iteration matrix is almost equal to that of the HSS iteration matrix once $\alpha$ becomes larger than the $\alpha_{exp}$ of BTSS.

Figure 10: Curves of $\rho(M(\alpha))$ versus $\alpha$ for the HSS and the TSS iteration matrices when $n = 8$, for Example 5.2.

Table 8: IT and CPU for Example 5.3. Columns: $n$; BTSS IT, CPU; HSS IT, CPU; speed-up.

6 Concluding remarks

We have further developed the HSS and the NSS iteration methods, and established a more general framework of iteration methods based on the PS splitting of the positive definite coefficient matrix of the system of linear equations. We have proved the convergence of the PSS iteration method, and shown that it inherits all advantages of both HSS and NSS iteration methods. Several special examples of the PSS method, i.e., the TSS (or BTSS) methods, have been described, and numerical examples have been implemented to show that the TSS/BTSS iteration methods are much more practical and effective than the HSS iteration method in the sense of computer storage and CPU time. However, we should mention that how to choose the optimal parameters for these iteration methods is a very practical and interesting problem that needs further in-depth study.

Figure 11: The eigenvalue distributions of the HSS iteration matrices when $n = 4$ (left) and $n = 8$ (right), for Example 5.3.

Figure 12: The eigenvalue distributions of the BTSS iteration matrices when $n = 4$ (left) and $n = 8$ (right), for Example 5.3.

Figure 13: Curves of $\rho(M(\alpha))$ versus $\alpha$ for the HSS and the BTSS iteration matrices when $n = 8$, for Example 5.3.

References

[1] Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24:3(2003), 603-626.

[2] Z.-Z. Bai, G.H. Golub and M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iteration, BIT, submitted.

[3] Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Technical Report SCCM-02-12, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, 2002.

[4] M. Benzi and G.H. Golub, An iterative method for generalized saddle-point problems, Technical Report SCCM-02-14, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, 2002.

[5] J.R. Bunch, A note on the stable decomposition of skew-symmetric matrices, Math. Comput., 38:158(1982), 475-479.

[6] J.R. Bunch, Stable algorithms for solving symmetric and skew-symmetric systems, Bull. Austral. Math. Soc., 26:1(1982), 107-119.

[7] P. Concus and G.H. Golub, A generalized conjugate gradient method for non-symmetric systems of linear equations, in Computing Methods in Applied Sciences and Engineering, Lecture Notes in Econom. and Math. Systems 134, R. Glowinski and J.-L. Lions, eds., Springer-Verlag, Berlin, 1976, pp. 56-65.

[8] J. Douglas, Jr. and H. Rachford, Jr., Alternating direction methods for three space variables, Numer. Math., 4(1956).

[9] G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press, Baltimore, MD, 1996.

[10] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, 1996.

[11] Y. Saad and M.H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7(1986), 856-869.

[12] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.


More information

2 Computing complex square roots of a real matrix

2 Computing complex square roots of a real matrix On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Cholesky factorisations of linear systems coming from a finite difference method applied to singularly perturbed problems

Cholesky factorisations of linear systems coming from a finite difference method applied to singularly perturbed problems Cholesky factorisations of linear systems coming from a finite difference method applied to singularly perturbed problems Thái Anh Nhan and Niall Madden The Boundary and Interior Layers - Computational

More information

On the loss of orthogonality in the Gram-Schmidt orthogonalization process

On the loss of orthogonality in the Gram-Schmidt orthogonalization process CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical

More information

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen Parallel Iterative Methods for Sparse Linear Systems Lehrstuhl für Hochleistungsrechnen www.sc.rwth-aachen.de RWTH Aachen Large and Sparse Small and Dense Outline Problem with Direct Methods Iterative

More information

Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning

Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning Adv Comput Math DOI 10.1007/s10444-013-9330-3 Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning Mohammad Khorsand Zak Faezeh Toutounian Received:

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Singular Value Inequalities for Real and Imaginary Parts of Matrices

Singular Value Inequalities for Real and Imaginary Parts of Matrices Filomat 3:1 16, 63 69 DOI 1.98/FIL16163C Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Singular Value Inequalities for Real Imaginary

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

Preconditioning Techniques Analysis for CG Method

Preconditioning Techniques Analysis for CG Method Preconditioning Techniques Analysis for CG Method Huaguang Song Department of Computer Science University of California, Davis hso@ucdavis.edu Abstract Matrix computation issue for solve linear system

More information

Residual iterative schemes for largescale linear systems

Residual iterative schemes for largescale linear systems Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

First, we review some important facts on the location of eigenvalues of matrices.

First, we review some important facts on the location of eigenvalues of matrices. BLOCK NORMAL MATRICES AND GERSHGORIN-TYPE DISCS JAKUB KIERZKOWSKI AND ALICJA SMOKTUNOWICZ Abstract The block analogues of the theorems on inclusion regions for the eigenvalues of normal matrices are given

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

On the Schur Complement of Diagonally Dominant Matrices

On the Schur Complement of Diagonally Dominant Matrices On the Schur Complement of Diagonally Dominant Matrices T.-G. Lei, C.-W. Woo,J.-Z.Liu, and F. Zhang 1 Introduction In 1979, Carlson and Markham proved that the Schur complements of strictly diagonally

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

R, 1 i 1,i 2,...,i m n.

R, 1 i 1,i 2,...,i m n. SIAM J. MATRIX ANAL. APPL. Vol. 31 No. 3 pp. 1090 1099 c 2009 Society for Industrial and Applied Mathematics FINDING THE LARGEST EIGENVALUE OF A NONNEGATIVE TENSOR MICHAEL NG LIQUN QI AND GUANGLU ZHOU

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2007-002 Block preconditioning for saddle point systems with indefinite (1,1) block by Michele Benzi, Jia Liu Mathematics and Computer Science EMORY UNIVERSITY International Journal

More information

On the performance of the Algebraic Optimized Schwarz Methods with applications

On the performance of the Algebraic Optimized Schwarz Methods with applications On the performance of the Algebraic Optimized Schwarz Methods with applications Report 12-1-18 Department of Mathematics Temple University October 212 This report is available at http://www.math.temple.edu/~szyld

More information

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,

More information

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure:

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure: SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 1, pp. 20 41 c 2004 Society for Industrial and Applied Mathematics A PRECONDITIONER FOR GENERALIZED SADDLE POINT PROBLEMS MICHELE BENZI AND GENE H. GOLUB Abstract.

More information

A Review of Preconditioning Techniques for Steady Incompressible Flow

A Review of Preconditioning Techniques for Steady Incompressible Flow Zeist 2009 p. 1/43 A Review of Preconditioning Techniques for Steady Incompressible Flow David Silvester School of Mathematics University of Manchester Zeist 2009 p. 2/43 PDEs Review : 1984 2005 Update

More information

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems Applied Mathematics Letters 19 (2006) 1029 1036 wwwelseviercom/locate/aml The upper Jacobi upper Gauss Seidel type iterative methods for preconditioned linear systems Zhuan-De Wang, Ting-Zhu Huang School

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER

More information

When is the hermitian/skew-hermitian part of a matrix a potent matrix?

When is the hermitian/skew-hermitian part of a matrix a potent matrix? Electronic Journal of Linear Algebra Volume 24 Volume 24 (2012/2013) Article 9 2012 When is the hermitian/skew-hermitian part of a matrix a potent matrix? Dijana Ilisevic Nestor Thome njthome@mat.upv.es

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

WHEN studying distributed simulations of power systems,

WHEN studying distributed simulations of power systems, 1096 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 21, NO 3, AUGUST 2006 A Jacobian-Free Newton-GMRES(m) Method with Adaptive Preconditioner and Its Application for Power Flow Calculations Ying Chen and Chen

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Solving large scale eigenvalue problems

Solving large scale eigenvalue problems arge scale eigenvalue problems, Lecture 4, March 14, 2018 1/41 Lecture 4, March 14, 2018: The QR algorithm http://people.inf.ethz.ch/arbenz/ewp/ Peter Arbenz Computer Science Department, ETH Zürich E-mail:

More information

Partitioned Methods for Multifield Problems

Partitioned Methods for Multifield Problems Partitioned Methods for Multifield Problems Joachim Rang, 10.5.2017 10.5.2017 Joachim Rang Partitioned Methods for Multifield Problems Seite 1 Contents Blockform of linear iteration schemes Examples 10.5.2017

More information

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 0 (207), 4822 4833 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa An accelerated Newton method

More information

Boundary Value Problems and Iterative Methods for Linear Systems

Boundary Value Problems and Iterative Methods for Linear Systems Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 6, No., pp. 377 389 c 004 Society for Industrial and Applied Mathematics SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT

More information