POSITIVE DEFINITE AND SEMI-DEFINITE SPLITTING METHODS FOR NON-HERMITIAN POSITIVE DEFINITE LINEAR SYSTEMS
Journal of Computational Mathematics, Vol. 34, No. 3, 2016, doi: /jcm.1511-m

POSITIVE DEFINITE AND SEMI-DEFINITE SPLITTING METHODS FOR NON-HERMITIAN POSITIVE DEFINITE LINEAR SYSTEMS *

Na Huang and Changfeng Ma
School of Mathematics and Computer Science, Fujian Normal University, Fuzhou, China
Email: macf@fjnu.edu.cn

Abstract

In this paper, we further generalize the technique for constructing the normal (or positive definite) and skew-Hermitian splitting iteration method for solving large sparse non-Hermitian positive definite systems of linear equations. By introducing a new splitting, we establish a class of efficient iteration methods, called positive definite and semi-definite splitting (PPS) methods, and prove that the sequence produced by the PPS method converges unconditionally to the unique solution of the system. Moreover, we propose two typical practical choices of the PPS method and study the upper bound of the spectral radius of the iteration matrix. In addition, we derive the optimal parameters for which the spectral radius attains its minimum under certain conditions. Finally, some numerical examples are given to demonstrate the effectiveness of the considered methods.

Mathematics subject classification: 65F10, 65F30, 65F50.
Key words: Linear systems, Splitting method, Non-Hermitian matrix, Positive definite matrix, Positive semi-definite matrix, Convergence analysis.

1. Introduction

Many problems in scientific computing give rise to a system of linear equations

    Ax = b,   A ∈ C^{n×n},   x, b ∈ C^n,                                   (1.1)

with A being a large sparse non-Hermitian but positive definite matrix. We call a matrix B positive definite (resp. positive semi-definite) if B + B^* is Hermitian positive definite (resp. positive semi-definite), i.e., for all 0 ≠ x ∈ C^n, x^*(B + B^*)x > 0 (resp. x^*(B + B^*)x ≥ 0), where B^* denotes the complex conjugate transpose of B. Let D = diag(a_11, a_22, …, a_nn) be the diagonal part of A and e_i = (0, …, 0, 1, 0, …, 0)^T be the i-th standard basis vector.
Since the coefficient matrix A is positive definite, we have e_i^*(A + A^*)e_i = a_ii + conj(a_ii) = 2 Re(a_ii) > 0. This shows that D is positive definite. The linear system (1.1) has many important practical applications, such as diffuse optical tomography, molecular scattering and lattice quantum chromodynamics (see, e.g., [1,7,8,22,24,38,39,41]). Many researchers have devoted themselves to the numerical solution of (1.1) (see, e.g., [2-4,10,11,18,21,25,27,28,36,37,40,42,45-47] and the references therein) and have proposed various iteration methods for solving the system, among which splitting iteration methods (see, e.g., [9,13-17,19,29-31,35,44]) and Krylov subspace methods (see, e.g., [5,20,23,26,32,43])

* Received May 25, 2015 / Revised version received October 13, 2015 / Accepted November 4, 2015 / Published online May 3, 2016 /
attract a lot of attention. In [16], the authors presented a Hermitian and skew-Hermitian splitting (HSS) iteration method for solving (1.1) and showed that the HSS method converges unconditionally to the unique solution of the system. Many researchers have since focused on the HSS method and proposed various iteration methods based on Hermitian and skew-Hermitian splitting (see, e.g., [6,9,12,13,17]). In recent years, other kinds of splitting iteration methods have also been studied (see, e.g., [14,15,33-35,44]). Normal and skew-Hermitian splitting (NSS) iteration methods for solving large sparse non-Hermitian positive definite linear systems were studied in [15]. Based on block triangular and skew-Hermitian splitting, a class of iteration methods for solving positive definite linear systems was established in [14]. Krukier et al. proposed generalized skew-Hermitian triangular splitting iteration methods to solve (1.1) and applied them to saddle-point linear systems (see [34]). In this work, we further generalize the technique for constructing the normal (or positive definite) and skew-Hermitian splitting iteration method for solving (1.1).

Throughout this paper, we use the following notation: C^{m×n} is the set of m×n complex matrices and C^m = C^{m×1}. We use C and R to denote the sets of complex and real numbers, respectively. For any a ∈ C, we write Re(a) and Im(a) for the real and imaginary parts of a. For B ∈ C^{n×n}, we write B^{-1}, ||B||_2, Λ(B) and ρ(B) for the inverse, the 2-norm, the spectrum and the spectral radius of B, respectively. I denotes the identity matrix of size implied by context, and i = sqrt(−1) denotes the imaginary unit.

The organization of this paper is as follows.
In Section 2, we present the positive definite and semi-definite splitting methods for solving non-Hermitian positive definite linear systems and study the convergence properties of the PPS iteration. In Section 3, we establish two typical practical choices of the PPS method and study the upper bound of the spectral radius of the iteration matrix. Numerical experiments are presented in Section 4 to show the effectiveness of our methods.

2. The Positive Definite and Semi-definite Splitting Method

In this section, we study efficient iterative methods for solving (1.1) based on the positive definite and semi-definite splitting (PPS for short) of the coefficient matrix A, and establish the convergence analysis of the new methods. For this purpose, we split A into positive definite and positive semi-definite parts as follows:

    A = M + N,                                                             (2.1)

where M is a positive definite matrix and N is a positive semi-definite matrix. Then it is easy to see that

    A = αI + M − (αI − N) = αI + N − (αI − M).

This implies that the system (1.1) can be reformulated equivalently as

    (αI + M)x = (αI − N)x + b,   or   (αI + N)x = (αI − M)x + b.

From these two fixed-point equations, we obtain the following iteration method:

    (αI + M)x^(k+1/2) = (αI − N)x^(k) + b,
    (αI + N)x^(k+1)   = (αI − M)x^(k+1/2) + b.                             (2.2)
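The two-half-step scheme (2.2) is straightforward to implement once a splitting A = M + N is available. The following Python sketch (an illustration using NumPy and dense direct solves; the particular M, N, b and the parameter α below are our own assumptions, not prescribed by the method) carries out the iteration (2.2):

```python
import numpy as np

def pps_iteration(M, N, b, alpha, tol=1e-10, max_iter=500):
    """One PPS sweep per loop, as in (2.2):
    solve (alpha*I + M) x^{k+1/2} = (alpha*I - N) x^k + b,
    then  (alpha*I + N) x^{k+1}   = (alpha*I - M) x^{k+1/2} + b."""
    n = M.shape[0]
    I = np.eye(n, dtype=complex)
    A = M + N
    x = np.zeros(n, dtype=complex)
    for k in range(max_iter):
        x_half = np.linalg.solve(alpha * I + M, (alpha * I - N) @ x + b)
        x = np.linalg.solve(alpha * I + N, (alpha * I - M) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# Illustrative 2x2 splitting: M positive definite, N skew-Hermitian
# (hence positive semi-definite in the sense used above).
M = np.array([[4.0, 1.0], [1.0, 3.0]], dtype=complex)
N = np.array([[0.0, 2.0], [-2.0, 0.0]], dtype=complex)
b = np.array([1.0, 1.0], dtype=complex)
x, its = pps_iteration(M, N, b, alpha=3.0)
```

By the convergence theorem below (Theorem 2.1), the iteration converges for every α > 0, so the choice of α only affects the speed, not whether the method converges.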
For convenience, we call the scheme (2.2) the positive definite and semi-definite splitting (PPS) iteration method. It is worth mentioning that the normal and skew-Hermitian splitting iteration method proposed in [15] and the block triangular and skew-Hermitian splitting iteration method proposed in [14] are both special cases of the PPS iteration method. Indeed, if A = M + N with N skew-Hermitian, then

    M + M^* = A − N + (A − N)^* = A + A^*,

which shows that M is positive definite. This, together with the fact that every skew-Hermitian matrix is positive semi-definite, implies that both of the splittings mentioned above belong to the class of positive definite and semi-definite splittings.

In the following, we establish the convergence theorem of the PPS method. To this end, we rewrite the scheme (2.2) equivalently. From (2.2) we obtain

    x^(k+1/2) = (αI + M)^{-1}(αI − N)x^(k) + (αI + M)^{-1} b,              (2.3)

and

    x^(k+1) = (αI + N)^{-1}(αI − M)x^(k+1/2) + (αI + N)^{-1} b,            (2.4)

which together yield

    x^(k+1) = S(α)x^(k) + T(α)b,                                           (2.5)

where

    S(α) = (αI + N)^{-1}(αI − M)(αI + M)^{-1}(αI − N),                     (2.6a)
    T(α) = 2α[(αI + M)(αI + N)]^{-1}.                                      (2.6b)

Therefore, the PPS method is convergent if and only if the spectral radius of the iteration matrix S(α) is less than 1. To this end, we present the following lemma.

Lemma 2.1. For any positive semi-definite matrix P ∈ C^{n×n}, it holds that

    ||(αI − P)(αI + P)^{-1}||_2 ≤ 1   for all α > 0.

Furthermore, if P is positive definite, then

    ||(αI − P)(αI + P)^{-1}||_2 < 1   for all α > 0.

Proof. Let σ(α) := ||(αI − P)(αI + P)^{-1}||_2. By the definition of the 2-norm and the similarity invariance of the matrix spectrum, we have

    σ(α)² = ρ( (αI + P)^{-*}(αI − P)^*(αI − P)(αI + P)^{-1} )
          = ρ( (αI − P)^*(αI − P)(αI + P)^{-1}(αI + P)^{-*} ).

Note that the matrix (αI + P)^{-*}(αI − P)^*(αI − P)(αI + P)^{-1} is Hermitian positive semi-definite. It follows that all the eigenvalues of (αI − P)^*(αI − P)(αI + P)^{-1}(αI + P)^{-*} are
nonnegative. This, together with the identities

    (αI − P)^*(αI − P)[(αI + P)^*(αI + P)]^{-1}
      = [α²I + P^*P − α(P + P^*)][α²I + P^*P + α(P + P^*)]^{-1}
      = [α²I + P^*P + α(P + P^*) − 2α(P + P^*)][α²I + P^*P + α(P + P^*)]^{-1}
      = I − 2α(P + P^*)[α²I + P^*P + α(P + P^*)]^{-1}
      = I − 2α(P + P^*)(αI + P)^{-1}(αI + P)^{-*}
      =: I − L(α),

yields that all the eigenvalues of the matrix I − L(α) are nonnegative and

    σ(α)² = ρ(I − L(α)).                                                   (2.7)

For any λ ∈ Λ(L(α)), it is easy to see that 1 − λ ∈ Λ(I − L(α)), which shows that 1 − λ ≥ 0. Moreover, for any positive semi-definite matrix P and α > 0, the matrix 2α(P + P^*) is Hermitian positive semi-definite. This, along with the fact that L(α) is similar to 2α(αI + P)^{-*}(P + P^*)(αI + P)^{-1}, leads to λ ≥ 0. Hence

    0 ≤ 1 − λ ≤ 1.

Combining this with (2.7), we get

    σ(α)² = max_{λ∈Λ(L(α))} (1 − λ) ≤ 1.

Furthermore, if P is positive definite, then 2α(αI + P)^{-*}(P + P^*)(αI + P)^{-1} is Hermitian positive definite, which shows that λ > 0. In this case, 0 ≤ 1 − λ < 1, which leads to σ(α) < 1 immediately. Then the result follows. □

Now we establish the convergence theorem for the PPS iteration method.

Theorem 2.1. Let A = M + N be a positive definite and semi-definite splitting of A. Then for any positive constant α, the spectral radius ρ(S(α)) of the iteration matrix S(α) defined in (2.6a) satisfies ρ(S(α)) < 1. Therefore, the PPS iteration method converges to the exact solution x ∈ C^n of the linear system (1.1).

Proof. Note that N is positive semi-definite and M is positive definite. Then by Lemma 2.1 and the similarity invariance of the spectrum, we get

    ρ(S(α)) = ρ((αI + N)^{-1}(αI − M)(αI + M)^{-1}(αI − N))
            = ρ((αI − M)(αI + M)^{-1}(αI − N)(αI + N)^{-1})
            ≤ ||(αI − M)(αI + M)^{-1}||_2 · ||(αI − N)(αI + N)^{-1}||_2
            ≤ ||(αI − M)(αI + M)^{-1}||_2 < 1,

which completes the proof. □

We emphasize that the scheme (2.5) induces a class of preconditioners for solving the system (1.1). Indeed, let

    R(α) = (1/(2α)) (αI − M)(αI − N),
where M, N are defined as in (2.1). Then, by the definition of T(α), we see that

    A = T(α)^{-1} − R(α)                                                   (2.8)

defines a new splitting of the coefficient matrix A in (1.1). It is easy to see that the PPS iteration method can also be induced by the matrix splitting (2.8). Moreover, the matrix T(α)^{-1} can alternatively be used to precondition Krylov subspace iteration methods for solving the system (1.1).

3. Two Typical Practical Choices of the PPS Splitting

In this section, we give two typical practical choices of the PPS method: one based on the Hermitian and skew-Hermitian splitting, the other based on a triangular splitting. To this end, let

    H(A) = (1/2)(A + A^*),   S(A) = (1/2)(A − A^*)

be the Hermitian and skew-Hermitian parts of A, respectively. For any real constant η, let

    M = H(A) + iηI,   N = S(A) − iηI.                                      (3.1)

It is easy to verify that M + N = A, that

    M + M^* = H(A) + iηI + H(A) − iηI = 2H(A)

is positive definite, and that

    N + N^* = S(A) − iηI − S(A) + iηI = 0

is positive semi-definite. Thereby, we get the following PPS iteration method:

    (αI + H(A) + iηI)x^(k+1/2) = (αI − S(A) + iηI)x^(k) + b,
    (αI + S(A) − iηI)x^(k+1)   = (αI − H(A) − iηI)x^(k+1/2) + b.           (3.2)

If η = 0, the iteration scheme (3.2) reduces to the Hermitian and skew-Hermitian splitting (HSS) iteration method proposed in [16]. This implies that the HSS iteration method is a special case of the PPS iteration method. In the following, we establish the convergence analysis of (3.2). For this purpose, we introduce the quantities

    V(α, η) = (αI − H(A) − iηI)(αI + H(A) + iηI)^{-1},
    W(α, η) = (αI − S(A) + iηI)(αI + S(A) − iηI)^{-1},
    S(α, η) = (αI + S(A) − iηI)^{-1} V(α, η)(αI − S(A) + iηI),             (3.3)
    T(α, η) = 2α[(αI + H(A) + iηI)(αI + S(A) − iηI)]^{-1}.                 (3.4)

Then it is easy to verify that the iteration scheme (3.2) is equivalent to

    x^(k+1) = S(α, η)x^(k) + T(α, η)b.                                     (3.5)
This shows that the method (3.2) is convergent if and only if the spectral radius of the iteration matrix S(α, η) is less than 1.

Theorem 3.1. Let A ∈ C^{n×n} be a positive definite matrix, let M and N be defined as in (3.1), and let λ_1 and λ_n be the maximum and minimum eigenvalues of H(A), respectively. Then for any η ∈ R and α > 0,
(1) the sequence generated by the iteration scheme (3.2) converges to the exact solution x ∈ C^n of the system (1.1);

(2) the spectral radius ρ(S(α, η)) of the iteration matrix S(α, η) is bounded by

    ρ(S(α, η)) ≤ sqrt(1 − φ(α, η)),                                        (3.6a)

where

    φ(α, η) = min{ 4αλ_1/((α + λ_1)² + η²),  4αλ_n/((α + λ_n)² + η²) }.    (3.6b)

Proof. Since A and M are positive definite and N is positive semi-definite, the first result follows from Theorem 2.1. Now we estimate the bound on ρ(S(α, η)). It is easy to see that

    (αI − S(A) + iηI)^{-1} S(A) = S(A)(αI − S(A) + iηI)^{-1}.

Then we find that

    W(α, η)^* W(α, η)
      = (αI − S(A) + iηI)^{-1}(αI + S(A) − iηI)(αI − S(A) + iηI)(αI + S(A) − iηI)^{-1}
      = (αI − S(A) + iηI)^{-1}(αI − S(A) + iηI)(αI + S(A) − iηI)(αI + S(A) − iηI)^{-1}
      = I.

Similarly, W(α, η)W(α, η)^* = I. This shows that W(α, η) is a unitary matrix and ||W(α, η)||_2 = 1. This, along with the fact that S(α, η) is similar to V(α, η)W(α, η), leads to

    ρ(S(α, η)) = ρ(V(α, η)W(α, η)) ≤ ||V(α, η)||_2 ||W(α, η)||_2 = ||V(α, η)||_2.   (3.7)

On the other hand, by the Rayleigh-Ritz theorem, it follows that

    ||V(α, η)||_2² = ρ(V(α, η)^* V(α, η))
      = ρ( (αI + H(A) − iηI)^{-1}(αI − H(A) + iηI)(αI − H(A) − iηI)(αI + H(A) + iηI)^{-1} )
      = max_{λ∈Λ(H(A))} [(α − λ + iη)(α − λ − iη)] / [(α + λ − iη)(α + λ + iη)]
      = max_{λ∈Λ(H(A))} ((α − λ)² + η²) / ((α + λ)² + η²).                 (3.8)

Noticing that for any λ ∈ Λ(H(A)),

    ((α − λ)² + η²)/((α + λ)² + η²)
      = (α² − 2λα + λ² + η²)/(α² + 2λα + λ² + η²)
      = 1 − 4λα/(α² + 2λα + λ² + η²)
      = 1 − 4α/((α² + η²)/λ + λ + 2α).                                     (3.9)

Since α > 0 and α² + η² > 0, the function λ ↦ (α² + η²)/λ + λ is convex for λ > 0, so its maximum over Λ(H(A)) ⊂ [λ_n, λ_1] is attained at an extreme eigenvalue:

    max_{λ∈Λ(H(A))} [(α² + η²)/λ + λ] = max{ (α² + η²)/λ_1 + λ_1,  (α² + η²)/λ_n + λ_n }.
This, together with α > 0, (3.8) and (3.9), yields

    ||V(α, η)||_2² = 1 − min_{λ∈Λ(H(A))} 4α/((α² + η²)/λ + λ + 2α)
      = 1 − min{ 4α/((α² + η²)/λ_1 + λ_1 + 2α),  4α/((α² + η²)/λ_n + λ_n + 2α) }
      = 1 − min{ 4αλ_1/((α + λ_1)² + η²),  4αλ_n/((α + λ_n)² + η²) }.

Then the second result follows. □

From the above theorem, we see that a minimizer (α*, η*) of the bound on ρ(S(α, η)) makes φ(α, η) maximal. Hence, to find the optimal parameters we only need to solve the optimization problem

    max_{α>0, η∈R} φ(α, η).

Theorem 3.2. Under the same settings and conditions as in Theorem 3.1, it holds that

    (α*, η*) = arg max_{α>0, η∈R} φ(α, η) = (sqrt(λ_1 λ_n), 0),            (3.10)
    φ(α*, η*) = 4 sqrt(λ_1 λ_n) / (sqrt(λ_1) + sqrt(λ_n))².                (3.11)

Proof. From (3.6b), the function φ(α, η) decreases monotonically with respect to η² for any α > 0. This shows that

    η* = arg max_{η∈R} φ(α, η) = 0,                                        (3.12a)
    φ(α, 0) = min{ 4αλ_1/(α + λ_1)²,  4αλ_n/(α + λ_n)² }.                  (3.12b)

To find the maximum point of φ(α, 0), we derive the α that minimizes the function 1/φ(α, 0). It follows from (3.12b) that

    1/φ(α, 0) = max{ (α + λ_1)²/(4αλ_1),  (α + λ_n)²/(4αλ_n) }
              = max{ α/(4λ_1) + λ_1/(4α),  α/(4λ_n) + λ_n/(4α) } + 1/2.    (3.13)

Since H(A) is Hermitian positive definite, we have λ_1 > 0 and λ_n > 0. Then it is easy to see that α* = sqrt(λ_1 λ_n), at which the two terms in the maximum coincide. This, along with the independence of α and η, leads to (3.10), which immediately yields (3.11). □

Remark 3.1. We emphasize that (sqrt(λ_1 λ_n), 0) is only the minimum point of the upper bound sqrt(1 − φ(α, η)), rather than that of ρ(S(α, η)) itself. Therefore, from Theorem 3.2 we see that (α*, η*) is a quasi-optimal parameter pair for ρ(S(α, η)), and the quasi-optimal spectral radius satisfies

    ρ(S(α*, η*)) ≤ sqrt(1 − φ(α*, η*)) = (sqrt(λ_1) − sqrt(λ_n)) / (sqrt(λ_1) + sqrt(λ_n)).   (3.14)
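The quasi-optimal parameter of Theorem 3.2 is easy to evaluate numerically. The sketch below (illustrative: A is a small random positive definite test matrix of our own choosing, and η = 0) forms the splitting (3.1), computes α* = sqrt(λ_1 λ_n) from the extreme eigenvalues of H(A), and checks the bound (3.14):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n) + 0.5 * (B - B.T)  # positive definite test matrix

H = 0.5 * (A + A.conj().T)                     # Hermitian part H(A)
S_skew = 0.5 * (A - A.conj().T)                # skew-Hermitian part S(A)
eta = 0.0                                      # eta = 0 recovers the HSS method
M = H + 1j * eta * np.eye(n)
N = S_skew - 1j * eta * np.eye(n)

lam = np.linalg.eigvalsh(H)                    # real eigenvalues, ascending order
lam_n, lam_1 = lam[0], lam[-1]                 # minimum and maximum eigenvalues
alpha_star = np.sqrt(lam_1 * lam_n)            # quasi-optimal alpha, cf. (3.10)

I = np.eye(n)
S_mat = np.linalg.inv(alpha_star * I + N) @ (alpha_star * I - M) \
        @ np.linalg.inv(alpha_star * I + M) @ (alpha_star * I - N)
rho = max(abs(np.linalg.eigvals(S_mat)))       # true spectral radius
bound = (np.sqrt(lam_1) - np.sqrt(lam_n)) / (np.sqrt(lam_1) + np.sqrt(lam_n))
```

As Remark 3.1 stresses, `bound` minimizes only the upper estimate; the true spectral radius `rho` is merely guaranteed to lie at or below it.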
In the following, we study another typical practical choice of the PPS method. Let

    A = D + L + U,                                                         (3.15)

where D, L and U are the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of the matrix A. For any ω ∈ R, let

    M = D + L + U^* + iωI,   N = U − U^* − iωI,                            (3.16a)

or

    M = D + L^* + U + iωI,   N = L − L^* − iωI.                            (3.16b)

It is easy to verify that A = M + N in both cases. Noticing that

    M + M^* = D + L + U^* + iωI + D^* + L^* + U − iωI = A + A^*,           (3.17a)
    M + M^* = D + L^* + U + iωI + D^* + L + U^* − iωI = A + A^*,           (3.17b)

we see that the matrix M defined above is positive definite in both cases. On the other hand, since N + N^* = 0, N is positive semi-definite. Then we get the PPS iteration methods

    (αI + D + L + U^* + iωI)x^(k+1/2) = (αI − U + U^* + iωI)x^(k) + b,
    (αI + U − U^* − iωI)x^(k+1) = (αI − D − L − U^* − iωI)x^(k+1/2) + b,   (3.18)

or

    (αI + D + L^* + U + iωI)x^(k+1/2) = (αI − L + L^* + iωI)x^(k) + b,
    (αI + L − L^* − iωI)x^(k+1) = (αI − D − L^* − U − iωI)x^(k+1/2) + b.   (3.19)

In particular, if ω = 0, the above iteration formulas reduce to the triangular and skew-Hermitian splitting (TSS) iteration method proposed in [14]. This shows that the TSS iteration method is a special case of the PPS iteration method. In addition, the iterative formulas (3.18) and (3.19) have the same structure and properties, so we only analyze the convergence of (3.18) in the following. Let

    V(α, ω) = (αI − D − L − U^* − iωI)(αI + D + L + U^* + iωI)^{-1},
    W(α, ω) = (αI − U + U^* + iωI)(αI + U − U^* − iωI)^{-1},
    S(α, ω) = (αI + U − U^* − iωI)^{-1} V(α, ω)(αI − U + U^* + iωI),       (3.20)
    T(α, ω) = 2α[(αI + D + L + U^* + iωI)(αI + U − U^* − iωI)]^{-1}.       (3.21)

Then the iteration scheme (3.18) can be reformulated equivalently as

    x^(k+1) = S(α, ω)x^(k) + T(α, ω)b.                                     (3.22)

We first present the following lemmas.

Lemma 3.1. Let D, L, U be defined as in (3.15), and let t_1 = α − iω, t_2 = α + iω, Q = L + U^*.
Then it holds that

    V(α, ω) = (t_1 I − D)(t_2 I + D)^{-1}
              + 2α(t_2 I + D)^{-1} Σ_{j=1}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j.   (3.23)
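The finite expansion (3.23) is exact because Q(t_2 I + D)^{-1} is strictly lower triangular, hence nilpotent, so the Neumann series terminates after n − 1 terms. This can be confirmed numerically; the sketch below (with a randomly generated complex matrix, purely for illustration) compares (3.23) against the direct definition of V(α, ω):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) + n * np.eye(n)

D = np.diag(np.diag(A))           # diagonal part
L = np.tril(A, -1)                # strictly lower-triangular part
U = np.triu(A, 1)                 # strictly upper-triangular part
alpha, omega = 1.3, 0.7
t1, t2 = alpha - 1j * omega, alpha + 1j * omega
I = np.eye(n)
Q = L + U.conj().T                # strictly lower triangular

# Direct definition V = (t1*I - D - Q)(t2*I + D + Q)^{-1}, cf. (3.24).
V_direct = (t1 * I - D - Q) @ np.linalg.inv(t2 * I + D + Q)

# Finite expansion (3.23); K = Q (t2*I + D)^{-1} is nilpotent (K^n = 0).
K = Q @ np.linalg.inv(t2 * I + D)
series = sum((-1) ** j * np.linalg.matrix_power(K, j) for j in range(1, n))
V_series = (t1 * I - D) @ np.linalg.inv(t2 * I + D) \
           + 2 * alpha * np.linalg.inv(t2 * I + D) @ series
err = np.linalg.norm(V_direct - V_series)
```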
Proof. By the definitions of t_1, t_2 and Q, we have

    V(α, ω) = (t_1 I − D − Q)(t_2 I + D + Q)^{-1}.                         (3.24)

It is easy to verify that t_2 I + D + Q = [I + Q(t_2 I + D)^{-1}](t_2 I + D). Consequently,

    (t_2 I + D + Q)^{-1} = (t_2 I + D)^{-1}[I + Q(t_2 I + D)^{-1}]^{-1}
                         = (t_2 I + D)^{-1} Σ_{j≥0} (−1)^j [Q(t_2 I + D)^{-1}]^j.

Since Q(t_2 I + D)^{-1} is strictly lower triangular, all of its eigenvalues are 0, so its characteristic polynomial is f(λ) = λ^n. This, together with the Cayley-Hamilton theorem, yields [Q(t_2 I + D)^{-1}]^n = 0. Then we have

    (t_2 I + D + Q)^{-1} = (t_2 I + D)^{-1} Σ_{j=0}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j.   (3.25)

Using the definitions of t_1, t_2 and (3.24), we obtain

    V(α, ω) = I − [(t_2 − t_1)I + 2(D + Q)](t_2 I + D + Q)^{-1}
            = I − 2(iωI + D + Q)(t_2 I + D + Q)^{-1}
            = I − 2(iωI + D + Q)(t_2 I + D)^{-1} Σ_{j=0}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j
            = I − 2(iωI + D)(t_2 I + D)^{-1} Σ_{j=0}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j
              − 2Q(t_2 I + D)^{-1} Σ_{j=0}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j
            = I − 2(iωI + D)(t_2 I + D)^{-1}
              − 2(iωI + D)(t_2 I + D)^{-1} Σ_{j=1}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j
              + 2 Σ_{j=1}^{n} (−1)^j [Q(t_2 I + D)^{-1}]^j
            = (t_1 I − D)(t_2 I + D)^{-1}
              + 2α(t_2 I + D)^{-1} Σ_{j=1}^{n−1} (−1)^j [Q(t_2 I + D)^{-1}]^j,   (3.26)

where the last equality uses [Q(t_2 I + D)^{-1}]^n = 0 and the identity I − (iωI + D)(t_2 I + D)^{-1} = α(t_2 I + D)^{-1}. This completes the proof. □

By Lemma 3.1, we can take (t_1 I − D)(t_2 I + D)^{-1} as a first-order approximation of the matrix V(α, ω), which gives

    ||V(α, ω)||_2 ≈ ||(t_1 I − D)(t_2 I + D)^{-1}||_2.

Lemma 3.2. For any α > 0 and ω ∈ R, the matrix W(α, ω) is unitary (since N = U − U^* − iωI is skew-Hermitian, the argument for W(α, η) in the proof of Theorem 3.1 applies verbatim), and hence ||W(α, ω)||_2 = 1.

By the above analysis, we are now in a position to present the following theorem.

Theorem 3.3. Assume A ∈ C^{n×n} is a positive definite matrix. Let D = diag(a_11, a_22, …, a_nn), L and U be the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of the matrix A. Then for any α > 0 and ω ∈ R,
(1) the sequence generated by the iteration scheme (3.18) converges to the exact solution x ∈ C^n of the system (1.1);

(2) the spectral radius ρ(S(α, ω)) of the iteration matrix S(α, ω) is bounded approximately by

    ρ(S(α, ω)) ≲ max_{1≤i≤n} |α − iω − a_ii| / |α + iω + a_ii|;            (3.27)

(3) if the a_ii, i = 1, …, n, are all real, with a_min and a_max the minimum and maximum of the a_ii, respectively, then

    (α*, ω*) = arg min_{α>0, ω∈R} ψ(α, ω) = (sqrt(a_min a_max), 0),        (3.28a)
    ψ(α*, ω*) = (sqrt(a_max) − sqrt(a_min)) / (sqrt(a_max) + sqrt(a_min)), (3.28b)

where

    ψ(α, ω) = max_{1≤i≤n} |α − iω − a_ii| / |α + iω + a_ii|.               (3.28c)

Proof. Since the iteration scheme (3.18) is a special case of the PPS iteration method, the first result follows from Theorem 2.1. Since A is positive definite, we have Re(a_ii) > 0, i = 1, …, n. It follows from Lemmas 3.1 and 3.2 that

    ρ(S(α, ω)) = ρ(V(α, ω)W(α, ω)) ≤ ||V(α, ω)||_2 ||W(α, ω)||_2
               ≈ ||(t_1 I − D)(t_2 I + D)^{-1}||_2
               = max_{1≤i≤n} |t_1 − a_ii| / |t_2 + a_ii|
               = max_{1≤i≤n} |α − iω − a_ii| / |α + iω + a_ii| < 1,        (3.29)

where the last equality holds by the definitions of t_1 and t_2. Then the second result follows. Furthermore, if the a_ii are all real, then a_ii > 0, and

    ψ(α, ω) = max_{1≤i≤n} sqrt( ((α − a_ii)² + ω²) / ((α + a_ii)² + ω²) ).  (3.30)

For any i = 1, …, n, since

    ((α − a_ii)² + ω²)/((α + a_ii)² + ω²) = 1 − 4αa_ii/((α + a_ii)² + ω²),

we know that ψ(α, ω) increases monotonically with respect to ω² for any α > 0. Then

    ω* = arg min_{ω∈R} ψ(α, ω) = 0,                                        (3.31)

which immediately gives

    ψ(α, ω*) = max_{1≤i≤n} |α − a_ii|/(α + a_ii)
             = max{ |α − a_min|/(α + a_min),  |α − a_max|/(α + a_max) }.   (3.32)

If α* is a minimum point of ψ(α, ω*), then it must satisfy α* − a_min > 0, α* − a_max < 0, and

    (α* − a_min)/(α* + a_min) = (a_max − α*)/(α* + a_max).
By a simple calculation, we get

    α* = arg min_{α>0} ψ(α, ω*) = sqrt(a_min a_max).

This, along with the independence of α and ω, leads to (3.28a) and (3.28b). □

Remark 3.2. We emphasize that the assumption of the third result on the diagonal elements is necessary; otherwise we cannot derive a specific formula for ψ(α, ω). For example, let A be a 3×3 positive definite matrix with diagonal entries a_11 = 1 + i, a_22 = 2 and a_33 = 1 − (1/2)i. Then for α = 1 and ω = 0, we have

    |1 − a_11|/|1 + a_11| = 1/sqrt(5),   |1 − a_22|/|1 + a_22| = 1/3,
    |1 − a_33|/|1 + a_33| = 1/sqrt(17).

This shows that ψ(1, 0) = |1 − a_11|/|1 + a_11|. However, a_11 has no distinguishing characteristic among the diagonal entries, such as maximum or minimum modulus. Therefore, for general problems we cannot determine the expression of ψ(α, ω), and hence cannot calculate its minimum.

4. Numerical Experiments

In this section, we use two examples to numerically examine the feasibility and effectiveness of the new methods. All tests start from the zero vector and terminate when the current iterate satisfies

    ||r_k||_2 / ||r_0||_2 < 10^{-5},                                       (4.1)

where r_k is the residual of the current (k-th) iteration. We report the number of required iterations (IT), the norm of the relative residual (4.1) when the process is stopped (Res), and the required CPU time (CPU). All algorithms were coded in MATLAB 2011b and run on a PC with a 2.20 GHz Pentium(R) Dual-Core CPU and 2.00 GB RAM.

Example 4.1. Consider the system of linear equations (1.1), for which

    A = tridiag(−1 + i, 10, 1 − i),   b = (1, 1, …, 1)^T.

Since A + A^* = tridiag(2i, 20, −2i), A is non-Hermitian positive definite. We split A into M + N with

    M = tridiag(0, 5, 1 − i),   and   N = tridiag(−1 + i, 5, 0).

This shows that

    M + M^* = tridiag(1 + i, 10, 1 − i),   and   N + N^* = tridiag(−1 + i, 10, −1 − i).
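The splitting used in Example 4.1 can be verified programmatically. The sketch below (with dimension n = 50 chosen only for illustration) checks that M + N = A and that the Hermitian parts of M and N are positive definite:

```python
import numpy as np

def tridiag(n, low, diag, up):
    """Tridiagonal Toeplitz matrix with constant sub-, main and super-diagonal."""
    return (low * np.eye(n, k=-1) + diag * np.eye(n) + up * np.eye(n, k=1)) + 0j

n = 50
A = tridiag(n, -1 + 1j, 10, 1 - 1j)
M = tridiag(n, 0, 5, 1 - 1j)
N = tridiag(n, -1 + 1j, 5, 0)

split_ok = np.allclose(M + N, A)
# Eigenvalues of the Hermitian parts; both spectra lie strictly to the right
# of zero (in fact inside (10 - 2*sqrt(2), 10 + 2*sqrt(2))).
eig_M = np.linalg.eigvalsh(M + M.conj().T)
eig_N = np.linalg.eigvalsh(N + N.conj().T)
```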
Table 4.1: Numerical results of Example 4.1 (IT, CPU and Res for the HSS, TSS and PPS methods at various values of α).

Fig. 4.1. ρ(M(α)) versus α for the PPS, the HSS and the TSS iteration matrices for Example 4.1.

Hence both M and N are positive definite. In our test, we compare the PPS method based on this splitting with the HSS method and the TSS method. We take the dimension n = …. The test results are listed in Table 4.1, and in Figure 4.1 we depict the curves of ρ(M(α)) with respect to α for the PPS, HSS and TSS iteration methods to show this functional relationship intuitively. From Table 4.1, we see that all the methods stopped regularly and that the optimal parameter α_opt of the PPS method lies in the interval [5, 6], while the optimal parameters of the HSS method and the TSS method are both near 10. This is further confirmed by Figure 4.1. As shown in Table 4.1, for every method the number of iterations and the actual computing time vary considerably with the parameter α. Furthermore, whatever parameter α is chosen, the CPU time of the PPS method is always the least.
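To reproduce the flavour of Table 4.1 at small scale, the sketch below (dimension n = 100 and the single value α = 5, both illustrative choices on our part) runs the PPS iteration of Section 2 on Example 4.1 under the stopping rule (4.1):

```python
import numpy as np

n, alpha, tol = 100, 5.0, 1e-5

def tridiag(low, diag, up):
    return (low * np.eye(n, k=-1) + diag * np.eye(n) + up * np.eye(n, k=1)) + 0j

M = tridiag(0, 5, 1 - 1j)
N = tridiag(-1 + 1j, 5, 0)
A = M + N
b = np.ones(n, dtype=complex)
I = np.eye(n, dtype=complex)

x = np.zeros(n, dtype=complex)
r0 = np.linalg.norm(b - A @ x)
its = 0
while np.linalg.norm(b - A @ x) / r0 >= tol and its < 1000:
    # One PPS sweep, cf. (2.2).
    x_half = np.linalg.solve(alpha * I + M, (alpha * I - N) @ x + b)
    x = np.linalg.solve(alpha * I + N, (alpha * I - M) @ x_half + b)
    its += 1

rel_res = np.linalg.norm(b - A @ x) / r0
```

Because both M and N here are positive definite, Theorem 2.1 guarantees convergence for every α > 0; only the iteration count depends on α.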
Table 4.2: Numerical results of Example 4.2 (α_opt, IT, CPU and Res for the HPPS and TPPS methods at various values of η/ω).

Example 4.2. (see [14]) Consider the system of linear equations (1.1), for which

    A = ( W      FΩ )
        ( −F^T    N ),

where W ∈ R^{q×q}, N, Ω ∈ R^{(n−q)×(n−q)} and F ∈ R^{q×(n−q)}, with 2q > n, and b = Ae with e = (1, …, 1)^T. In particular, the matrices W = (w_kj), N = (n_kj), F = (f_kj) and Ω = diag(ν_1, …, ν_{n−q}) are defined as follows:

    w_kj = k + 1 for j = k;  −1 for |k − j| = 1;  0 otherwise   (k, j = 1, …, q),
    n_kj = k + 1 for j = k;  −1 for |k − j| = 1;  0 otherwise   (k, j = 1, …, n−q),
    f_kj = j for k = j + 2q − n;  0 otherwise   (k = 1, …, q, j = 1, …, n−q),
    ν_k = 1/k   (k = 1, …, n−q).

In this example, we compare the iteration scheme (3.2) (HPPS for short) with the iteration scheme (3.19) (TPPS for short) for different values of the parameter η or ω. In our computations, we choose n = 200 and q = (9/10)n. The test results are listed in Table 4.2. In Figure 4.2 (resp. Figure 4.3) we depict the surface of ρ(S(α, η)) (resp. ρ(S(α, ω))) with respect to α and η (resp. ω) for the HPPS (resp. TPPS) iteration method to show this functional relationship intuitively. Moreover, in Figure 4.4 (resp. Figure 4.5) we depict the eigenvalue distributions of the HPPS (resp. TPPS) iteration matrices for different values of η (resp. ω) with the corresponding optimal parameter α_opt.

As can be seen from Table 4.2, when η = 0 the number of iterations and the CPU time of the HPPS method are the best, and the HPPS method may be slowed down by increasing the modulus of η. Moreover, the numerical results of the HPPS method are much the same when the moduli of η are equal, and the optimal parameter α_opt is the same when the moduli of η are equal. This is further confirmed by Figure 4.2. For the TPPS method we observe the same behavior, as can be seen from Table 4.2 and Figure 4.3. As shown in Figure 4.4, the distribution domain of the eigenvalues of the HPPS iteration matrix
Fig. 4.2. Surface of ρ(S(α, η)) versus α and η for the HPPS iteration matrix for Example 4.2.

Fig. 4.3. Surface of ρ(S(α, ω)) versus α and ω for the TPPS iteration matrix for Example 4.2.

Fig. 4.4. The eigenvalue distributions of the HPPS iteration matrices when η = −4, 0, 4 for Example 4.2.
Fig. 4.5. The eigenvalue distributions of the TPPS iteration matrices when ω = −4, 0, 4 for Example 4.2.

Fig. 4.6. Curves of ρ(S(α)) versus α for the HPPS and the TPPS iteration matrices for Example 4.2 with η = ω = 0.25.

when η = 0 is considerably smaller than that when η ≠ 0, in particular along the direction of the imaginary axis. From Figure 4.5, we see the same result for the TPPS iteration matrix. In addition, as Table 4.2 shows, the numerical results of the TPPS method are a little better than those of the HPPS method, which can also be seen from Figure 4.6.

Acknowledgments. The authors thank the anonymous reviewers for their valuable comments and suggestions that helped improve the quality of this paper. The project was supported by the National Natural Science Foundation of China (Grant Nos. , ) and the Fujian Natural Science Foundation (Grant No. 2016J01005).

References

[1] S.R. Arridge, Optical tomography in medical imaging, Inverse Probl., 15 (1999).
[2] O. Axelsson, A generalized SSOR method, BIT, 12 (1972).
[3] O. Axelsson, Z.-Z. Bai and S.-X. Qiu, A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part, Numer. Algorithms, 35 (2004).
[4] K. Arrow, L. Hurwicz and H. Uzawa, Studies in Nonlinear Programming, Stanford University Press, Stanford, CA.
[5] Z.-Z. Bai, Sharp error bounds of some Krylov subspace methods for non-Hermitian linear systems, Appl. Math. Comput., 109 (2000).
[6] Z.-Z. Bai, Optimal parameters in the HSS-like methods for saddle point problems, Numer. Linear Algebra Appl., 16 (2009).
[7] A. Bendali, Numerical analysis of the exterior boundary value problem for the time-harmonic Maxwell equations by a boundary finite element method, Math. Comp., 43 (1984).
[8] M. Benzi and D. Bertaccini, Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal., 28 (2008).
[9] Z.-Z. Bai, M. Benzi and F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 87 (2010).
[10] Z.-Z. Bai, M. Benzi and F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 56 (2011).
[11] Z.-Z. Bai, I. Duff and A.J. Wathen, A class of incomplete orthogonal factorization methods I: Methods and theories, BIT, 41 (2001).
[12] Z.-Z. Bai and G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle point problems, IMA J. Numer. Anal., 27 (2007).
[13] Z.-Z. Bai, G.H. Golub and C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices, Math. Comp., 76 (2007).
[14] Z.-Z. Bai, G.H. Golub, L.-Z. Lu and J.-F. Yin, Block triangular and skew-Hermitian splitting method for positive-definite linear systems, SIAM J. Sci. Comput., 26 (2005).
[15] Z.-Z. Bai, G.H. Golub and M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl., 14 (2007).
[16] Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003).
[17] Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., 98 (2004).
[18] M.A. Bochev and L.A. Krukier, On an iteration solution for strongly nonsymmetric systems of linear algebraic equations, Comp. Math. Math. Phys., 37 (1997).
[19] M. Benzi and D. Szyld, Existence and uniqueness of splittings of stationary iterative methods with applications to alternating methods, Numer. Math., 76 (1997).
[20] A. Bunse-Gerstner and R. Stöver, On a conjugate gradient-type method for solving complex symmetric linear systems, Linear Algebra Appl., 287 (1999).
[21] Z.-Z. Bai and Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl., 428 (2008).
[22] S.H. Christiansen, Discrete Fredholm properties and convergence estimates for the electric field integral equation, Math. Comp., 73 (2004).
[23] M. Clemens and T. Weiland, Comparison of Krylov-type methods for complex linear systems applied to high-voltage problems, IEEE Trans. Magn., 34 (1998).
[24] D.D. Day and M.A. Heroux, Solving complex-valued linear systems via equivalent real formulations, SIAM J. Sci. Comput., 23 (2001).
[25] M. Eiermann, W. Niethammer and R.S. Varga, Acceleration of relaxation methods for non-Hermitian linear systems, SIAM J. Matrix Anal. Appl., 13 (1992).
[26] R.W. Freund, Conjugate gradient-type methods for linear systems with complex symmetric coefficient matrices, SIAM J. Sci. Stat. Comput., 13 (1992).
[27] A. Hadjidimos, Accelerated overrelaxation method, Math. Comp., 32 (1978).
[28] N. Huang and C.-F. Ma, Parallel multisplitting iteration methods based on M-splitting for the PageRank problem, Appl. Math. Comput., 271 (2015).
[29] N. Huang and C.-F. Ma, The BGS-Uzawa and BJ-Uzawa iterative methods for solving the saddle point problem, Appl. Math. Comput., 256 (2015).
[30] N. Huang and C.-F. Ma, The nonlinear inexact Uzawa hybrid algorithms based on one-step Newton method for solving nonlinear saddle-point problems, Appl. Math. Comput., 270 (2015).
[31] N. Huang, C.-F. Ma and Y.-J. Xie, An inexact relaxed DPSS preconditioner for saddle point problem, Appl. Math. Comput., 265 (2015).
[32] M.R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952).
[33] L.A. Krukier, L.G. Chikina and T.V. Belokon, Triangular skew-symmetric iterative solvers for strongly nonsymmetric positive real linear systems of equations, Appl. Numer. Math., 41 (2002).
[34] L.A. Krukier, B.L. Krukier and Z.-R. Ren, Generalized skew-Hermitian triangular splitting iteration methods for saddle-point linear systems, Numer. Linear Algebra Appl., 21 (2014).
[35] L.A. Krukier, T. Martynova and Z.-Z. Bai, Product-type skew-Hermitian triangular splitting iteration methods for strongly non-Hermitian positive definite linear systems, J. Comput. Appl. Math., 232 (2009).
[36] J. Lu and Z. Zhang, A modified nonlinear inexact Uzawa algorithm with a variable relaxation parameter for the stabilized saddle point problem, SIAM J. Matrix Anal. Appl., 31 (2010).
[37] T. Manteuffel, An incomplete factorization technique for positive definite linear systems, Math. Comp., 34 (1980).
[38] G. Moro and J.H. Freed, Calculation of ESR spectra and related Fokker-Planck forms by the use of the Lanczos algorithm, J. Chem. Phys., 74 (1981).
[39] B. Poirier, Efficient preconditioning scheme for block partitioned matrices with structured sparsity, Numer. Linear Algebra Appl., 7 (2000).
[40] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia.
[41] D. Schmitt, B. Steffen and T. Weiland, 2D and 3D computations of lossy eigenvalue problems, IEEE Trans. Magn., 30 (1994).
[42] T. Sogabe and S.-L. Zhang, A COCR method for solving complex symmetric linear systems, J. Comput. Appl. Math., 199 (2007).
[43] H. Van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge.
[44] L. Wang and Z.-Z. Bai, Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts, BIT, 44 (2004).
[45] W.-W. Li and X. Wang, A modified GPSS method for non-Hermitian positive definite linear systems, Appl. Math. Comput., 234 (2014).
[46] X. Wang, Y. Li and L. Dai, On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB = C, Comput. Math. Appl., 65 (2013).
[47] X. Wang, W.-W. Li and L.-Z. Miao, On positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equation AX + XB = C, Comput. Math. Appl., 66 (2013).