Journal of Mathematical Sciences: Advances and Applications
Volume 1, Number 2, 2008, Pages 311-322

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

Department of Mathematics
Taiyuan Normal University
Taiyuan 030012, Shanxi Province, P. R. China
e-mail: wenrp@163.com

Abstract

In 1991, Gunawardena et al. reported that the convergence rate of the Gauss-Seidel method with a preconditioning matrix I + S is superior to that of the basic iterative method. In this paper, we use the preconditioning matrix I + Ŝ. If the coefficient matrix A is a nonsingular L-matrix, the modified method yields considerable improvement in the rate of convergence of the iterative method. Finally, a numerical example shows the advantage of this method.

1. Introduction

For solving the linear system

Ax = b,  A = (a_ij) ∈ R^{n×n} nonsingular, b ∈ R^n,  (1.1)

let A = M − N, where M is nonsingular. Then the basic iterative scheme for Eq. (1.1) is

2000 Mathematics Subject Classification: 65F10.
Keywords and phrases: L-matrix, iterative method, convergence.
This work is supported by the NSF of Shanxi Province, 2005009.
Received July 6, 2008
© 2008 Scientific Advances Publishers
x^{(k+1)} = Tx^{(k)} + c,  k = 0, 1, ...,

where T = M^{-1}N and c = M^{-1}b. Assume A = I − L − U, where I is the identity matrix, and L and U are the strictly lower and strictly upper triangular parts of A, respectively. Taking M = I − L and N = U leads to the classical Gauss-Seidel method. The iteration matrices of the classical Jacobi and classical Gauss-Seidel methods are J = L + U and T = (I − L)^{-1}U, respectively.

We consider a preconditioned system of (1.1),

PAx = Pb.  (1.2)

The corresponding basic iterative method is given in general by

x^{(k+1)} = M_P^{-1} N_P x^{(k)} + M_P^{-1} Pb,  k = 0, 1, ...,

where PA = M_P − N_P is a splitting of PA.

Some preconditioning techniques (see [3, 4, 6]) which improve the rate of convergence of the classical iterative methods have been developed. For example, a simple modified Gauss-Seidel (MGS) method was first proposed by Gunawardena et al. (see [1]), with P = I + S and

S = (s_ij), where s_ij = −a_ij if j = i + 1, and s_ij = 0 otherwise.  (1.3)

The performance of this method on some matrices is investigated in [1]. Kohno et al. (see [3]) extended the work of Gunawardena et al. to a more general case and presented a new MGS method using the preconditioner P = I + S_α, where

(S_α)_ij = −α_i a_ij if j = i + 1, (S_α)_ij = 0 otherwise, with 0 ≤ α_i ≤ 1.

The performance of this method on some matrices is investigated in [3]. Later, Li et al. (see [4]) studied the convergence of this method on Z-matrices.
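For readers who want to experiment, the following Python snippet is a minimal sketch (the 3 × 3 matrix is a made-up example, not one from this paper) that builds the Gunawardena preconditioner P = I + S of (1.3) and compares the spectral radii of the classical and preconditioned Gauss-Seidel iteration matrices.

```python
import numpy as np

def gs_matrix(A):
    """Gauss-Seidel iteration matrix M^{-1}N for the splitting A = M - N,
    where M is the lower triangle of A (diagonal included)."""
    M = np.tril(A)
    return np.linalg.solve(M, M - A)

def gunawardena_S(A):
    """S of (1.3): s_{i,i+1} = -a_{i,i+1}, all other entries zero."""
    n = A.shape[0]
    S = np.zeros_like(A)
    i = np.arange(n - 1)
    S[i, i + 1] = -A[i, i + 1]
    return S

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# a made-up L-matrix with unit diagonal, i.e., A = I - L - U
A = np.array([[ 1.0, -0.3, -0.2],
              [-0.4,  1.0, -0.3],
              [-0.2, -0.1,  1.0]])

A_tilde = (np.eye(3) + gunawardena_S(A)) @ A   # preconditioned system (1.2)

rho_classic = spectral_radius(gs_matrix(A))        # rho(T)
rho_pre     = spectral_radius(gs_matrix(A_tilde))  # rho of the MGS matrix
print(rho_classic, rho_pre)
```

Note that gs_matrix(A_tilde) coincides with the MGS iteration matrix T̃ of (2.2) below, because the lower triangle of Ã = (I + S)A is exactly I − L − SL; for this sample matrix the spectral radius drops from about 0.27 to about 0.08.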
In this paper, we propose another preconditioned form for the MGS and Jacobi methods on L-matrices. Theorems 3.1 and 3.2 provide the corresponding convergence results.

2. Preliminaries

For A ∈ R^{n×n}, the directed graph G(A) of A is defined to be the pair (V, E), where V = {1, 2, ..., n} is the set of vertices and E = {(i, j) : a_ij ≠ 0, 1 ≤ i, j ≤ n} is the set of edges. A path from i_1 to i_p is an ordered tuple of vertices (i_1, i_2, ..., i_p) such that (i_k, i_{k+1}) ∈ E for each k. A directed graph is said to be strongly connected if for each pair (i, j) of vertices there is a path from i to j. The reflexive transitive closure of the graph G(A) is the graph, denoted by G̅(A), given by the smallest reflexive and transitive relation which includes the relation G(A). The matrix A is said to be irreducible if G(A) is strongly connected (see [5]).

A matrix A is called an L-matrix if a_ij ≤ 0 for all i ≠ j and a_ii > 0 for all i (see [2]). A matrix A is said to be nonnegative if each entry of A is nonnegative, and is said to be a positive matrix if each entry is positive; we denote this by A ≥ 0 and A > 0, respectively. The spectral radius of A is denoted by ρ(A).

Now let us summarize the modified Gauss-Seidel method of [1] with the preconditioner P = I + S. Let all elements a_{i,i+1} of S be nonzero. Then we have

Ãx = (I + S)Ax = [I − L − SL − (U − S + SU)]x,  b̃ = (I + S)b.  (2.1)

Whenever a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n − 1, (I − SL − L)^{-1} exists, and hence it is possible to define the Gauss-Seidel iteration matrix for Ã, namely
T̃ = (I − SL − L)^{-1}(U − S + SU).  (2.2)

This iteration matrix T̃ is called the modified Gauss-Seidel iteration matrix. In what follows, when A = (a_ij) is an L-matrix we shall assume that a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n − 1.

We next propose a new preconditioning matrix P = I + Ŝ, where

Ŝ = (ŝ_ij), with ŝ_ij = −a_ij if j = i + 1, and ŝ_ij = 0 otherwise.  (2.3)

Let all elements a_{i,i+1} of Ŝ be nonzero. Thus we obtain

Âx = (I + Ŝ)Ax = [I − L − ŜL − (U − Ŝ + ŜU)]x,  b̂ = (I + Ŝ)b.  (2.4)

Whenever a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n − 1, (I − ŜL − L)^{-1} exists, and hence it is possible to define the Gauss-Seidel iteration matrix for Â, namely

T̂ = (I − ŜL − L)^{-1}(U − Ŝ + ŜU).  (2.5)

Remark 1. The usual splitting of A is A = D − L − U, where D, L, and U are the diagonal, strictly lower, and strictly upper triangular parts of A, respectively. Since A is an L-matrix, let us consider the new matrix Â = D^{-1}A = I − D^{-1}L − D^{-1}U. Clearly the system Âx = b̂ = D^{-1}b is equivalent to the system Ax = b. Without loss of generality, we may therefore assume that A has a splitting of the form A = I − L − U when A is an L-matrix.

Remark 2. As for the standard Gauss-Seidel iteration matrix T = (I − L)^{-1}U, the first column of (2.5) is zero (in
addition, the first two columns of (2.5) are zero when a_{12} a_{21} ≠ 1). Thus we may partition T and T̂ so that

T = [0, T₀; 0, T₁]  and  T̂ = [0, T̂₀; 0, T̂₁],  (2.6)

where T₁ and T̂₁ are both (n − 1) × (n − 1) matrices.

Theorem 2.1 (Perron-Frobenius) (see [2]). (a) If A is a positive matrix, then ρ(A) is a simple eigenvalue of A.

(b) If A is nonnegative and irreducible, then ρ(A) is a simple eigenvalue of A. Furthermore, any eigenvalue of A with the same modulus as ρ(A) is also simple, and A has a positive eigenvector x corresponding to the eigenvalue ρ(A). Any other positive eigenvector of A is a multiple of x.

Theorem 2.2 (see [5]). Let A be a nonnegative matrix. Then:

(a) If αx ≤ Ax for some nonnegative vector x, x ≠ 0, then α ≤ ρ(A).

(b) If Ax ≤ βx for some positive vector x, then ρ(A) ≤ β. Moreover, if A is irreducible and if 0 ≤ αx ≤ Ax ≤ βx, with αx ≠ Ax and Ax ≠ βx, for some nonnegative vector x, then α < ρ(A) < β and x is a positive vector.

Theorem 2.3 (see [5]). Let A be a nonnegative irreducible matrix. If M is the maximum row sum of A and m is the minimum row sum of A, then m ≤ ρ(A) ≤ M.

Theorem 2.4 (see [5]). Let A = I − L − U be an L-matrix with the usual splitting, and let T = (I − L)^{-1}U and J = L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. Then exactly one of the following holds:

(a) ρ(T) = ρ(J) = 0;

(b) 0 < ρ(T) < ρ(J) < 1;
(c) ρ(T) = ρ(J) = 1;

(d) 1 < ρ(J) < ρ(T).

Lemma 2.5. Let A = I − L − U be an L-matrix with the usual splitting, and let T = (I − L)^{-1}U and J = L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. If a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n − 1, then T₁ and T̂₁ are irreducible matrices (where T₁ and T̂₁ are defined as in (2.6)).

Proof. We will show that T₁ is irreducible by showing that (2, 3, ..., n, 2) is a cycle in G(T); since this cycle passes through every vertex of G(T₁), it follows that G(T₁) is strongly connected. Since L is a nonnegative nilpotent matrix, ρ(L) = 0. Thus

(I − L)^{-1} = I + L + L² + ... + L^{n−1}.

Therefore T = (I + L + L² + ... + L^{n−1})U, so every edge of G(T) arises from a path in G(L) followed by an edge of G(U). The condition a_{i,i+1} a_{i+1,i} > 0 implies that (1, 2, ..., n) is a path in G(U) and (n, n − 1, ..., 1) is a path in G(L). Also G(U) ⊆ G(T), since

G(T) = G(U) ∪ G(LU) ∪ G(L²U) ∪ ... ∪ G(L^{n−1}U).

In particular, (2, 3, ..., n) is a path in G(T), so we only need to show that (n, 2) is an edge in G(T). For this, note that (n, 1) is an edge in G(L^{n−1}) and (1, 2) is an edge in G(U). Consequently (n, 2) is an edge in G(L^{n−1}U), and hence in G(T). Similarly, it can be shown that T̂₁ is irreducible.

Lemma 2.6. Let A be an L-matrix such that a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n − 1. Then ρ(T) = 1 implies ρ(T̂) = 1. (All matrices considered here are defined as in Remark 1.)
Proof. Clearly ρ(T) = 1 implies ρ(T₁) = 1. From Lemma 2.5, T₁ is irreducible. By Theorem 2.1 there exists a positive vector ω₁ such that T₁ω₁ = ω₁. Now define

ω = (T₀ω₁, ω₁)ᵀ.

Then clearly Tω = ω. Since T₀ ≥ 0 and T₀ ≠ 0, the number T₀ω₁ is a positive scalar, and hence ω is a positive vector. Consider

T̂ω = (I − L − ŜL)^{-1}(U − Ŝ + ŜU)ω.

By factoring (I − L)^{-1} from the right-hand side we get

T̂ω = [I − (I − L)^{-1}ŜL]^{-1}(I − L)^{-1}(U − Ŝ + ŜU)ω
    = [I − (I − L)^{-1}ŜL]^{-1}[(I − L)^{-1}Uω − (I − L)^{-1}Ŝω + (I − L)^{-1}ŜUω].

Now (I − L)^{-1}Uω = ω implies Uω = (I − L)ω. Therefore we get

T̂ω = [I − (I − L)^{-1}ŜL]^{-1}[ω − (I − L)^{-1}Ŝω + (I − L)^{-1}Ŝ(I − L)ω]
    = [I − (I − L)^{-1}ŜL]^{-1}[ω − (I − L)^{-1}ŜLω]
    = [I − (I − L)^{-1}ŜL]^{-1}[I − (I − L)^{-1}ŜL]ω = ω.

Thus by Theorem 2.2, ρ(T̂) = 1.

Lemma 2.7. Suppose A = I − L − U is an L-matrix such that a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n − 1, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Then the standard Jacobi iteration matrix J = L + U and the modified Jacobi iteration matrix Ĵ = (I − ŜL)^{-1}(L + U − Ŝ + ŜU) are both irreducible.

Proof. It follows from the condition a_{i,i+1} a_{i+1,i} > 0 that (1, 2, ..., n, n − 1, ..., 2, 1) is a cycle in G(L + U). Hence G(J) is strongly connected, and so J is irreducible.
Next we show that Ĵ is an irreducible matrix. Note that

G(Ĵ) ⊇ G(L + U − Ŝ + ŜU) = G(L) ∪ G(U − Ŝ) ∪ G(ŜU),

where the first inclusion follows from the fact that (I − ŜL)^{-1} is a nonnegative matrix with a positive diagonal. Recall that G(L) contains the path (n, n − 1, ..., 1), and note that G(ŜU) contains the edges (1, 3), (2, 4), ..., (n − 2, n). Hence G(Ĵ) is strongly connected, and Ĵ is irreducible.

3. Main Results on L-matrices

Theorem 3.1. Suppose A = I − L − U is an L-matrix such that a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n − 1, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let T = (I − L)^{-1}U and T̂ = (I − ŜL − L)^{-1}(U − Ŝ + ŜU) be the classical and modified Gauss-Seidel iteration matrices, respectively. Then

(a) ρ(T̂) < ρ(T) if ρ(T) < 1;

(b) ρ(T̂) = ρ(T) if ρ(T) = 1;

(c) ρ(T̂) > ρ(T) if ρ(T) > 1.

Proof. Part (b) follows from Lemma 2.6. To prove (a) and (c), first note that there exists a positive vector ω such that

Tω = λω,  (3.1)

where λ = ρ(T). Now consider

T̂ω = (I − L − ŜL)^{-1}(U − Ŝ + ŜU)ω
    = (I − L − ŜL)^{-1}(I + Ŝ)Uω − (I − L − ŜL)^{-1}Ŝω
    = (I − L − ŜL)^{-1}(I + Ŝ)λ(I − L)ω − (I − L − ŜL)^{-1}Ŝω,

by (3.1). Therefore

T̂ω − Tω = (I − L − ŜL)^{-1}[λ(I + Ŝ)(I − L)ω − Ŝω − (I − L − ŜL)(I − L)^{-1}Uω]
         = (I − L − ŜL)^{-1}[λ(I − L)ω + λŜω − λŜLω − Ŝω − λ(I − L)ω + λŜLω]
         = (I − L − ŜL)^{-1}(λ − 1)Ŝω.

Write (I − L − ŜL)^{-1} = D₁ + L₁ for some positive diagonal matrix D₁ and a nonnegative strictly lower triangular matrix L₁. Then

(I − L − ŜL)^{-1}Ŝω = (D₁ + L₁)Ŝω ≥ 0,

since Ŝω ≥ 0. Also, since D₁Ŝω ≠ 0, (I − L − ŜL)^{-1}Ŝω is a nonzero, nonnegative vector.

If λ < 1, then T̂ω − Tω ≤ 0 and T̂ω − Tω ≠ 0. Therefore

T̂ω ≤ λω,  T̂ω ≠ λω.

By using the partitioned form of T̂ = [0, T̂₀; 0, T̂₁] introduced in Remark 2, we get ρ(T̂₁) < λ by Theorem 2.2, and hence ρ(T̂) < λ = ρ(T).

If λ > 1, then T̂ω − Tω ≥ 0 and T̂ω − Tω ≠ 0. Hence T̂ω ≥ λω, T̂ω ≠ λω, and this implies T̂₁ω₁ ≥ λω₁, T̂₁ω₁ ≠ λω₁, where ω = (ω₀, ω₁)ᵀ with ω₁ > 0. Therefore ρ(T̂₁) > λ by Theorem 2.2, and hence ρ(T̂) > λ = ρ(T).

Theorem 3.2. Let A = I − L − U be an L-matrix, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let J = L + U and Ĵ = (I − ŜL)^{-1}(L + U − Ŝ + ŜU) be the classical and
modified Jacobi iteration matrices, respectively. Further assume that J and Ĵ are irreducible matrices. Then

(a) ρ(Ĵ) < ρ(J) if ρ(J) < 1;

(b) ρ(Ĵ) = ρ(J) if ρ(J) = 1;

(c) ρ(Ĵ) > ρ(J) if ρ(J) > 1.

Proof. Since J is irreducible, by Theorem 2.1 there exists a positive vector ω such that Jω = λω, where λ = ρ(J). This implies

(L + U)ω = λω.  (3.2)

Now consider

Ĵω − Jω = (I − ŜL)^{-1}[L + U − Ŝ + ŜU − (I − ŜL)(L + U)]ω
        = (I − ŜL)^{-1}Ŝ[−I + U + L² + LU]ω
        = (I − ŜL)^{-1}Ŝ[−I + U + λL]ω   (by (3.2))
        = (I − ŜL)^{-1}Ŝ(−ω + λω − Lω + λLω)
        = (I − ŜL)^{-1}Ŝ(I + L)(λ − 1)ω.  (3.3)

If ρ(J) < 1, then Ĵω ≤ ρ(J)ω and Ĵω ≠ ρ(J)ω by (3.3), and so, from Theorem 2.2, we obtain ρ(Ĵ) < ρ(J). Similarly we can get ρ(Ĵ) > ρ(J) if ρ(J) > 1, and (b) also follows from Theorem 2.2.

Corollary 3.3. Let A = I − L − U, J, and Ĵ be as defined in Theorem 3.2, with the assumption that J and Ĵ are irreducible replaced by the condition a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n − 1. Then the conclusion of Theorem 3.2 holds.

Remark. We have tested many examples with random entries. Finally, we present some matrices to show that the modified iteration matrix
T̂ has a faster convergence rate when the iteration matrices T and T̃ are both convergent, and the modified iteration matrix T̂ diverges even faster when T and T̃ are both divergent. (The proof of Corollary 3.3 follows from Lemma 2.7.)

4. Numerical Example

We present some matrices to illustrate our results. Let A = I − L − U be a 5 × 5 nonsingular L-matrix with unit diagonal, where L and U are the strictly lower and strictly upper triangular parts of A, respectively (the individual entries of A are not legible in the available copy), and let T = (I − L)^{-1}U, with T̃ and T̂ defined as in (2.2) and (2.5), respectively. Then

ρ(T) = 0.96,  ρ(T̃) = 0.9505,  ρ(T̂) = 0.703.

For a second 5 × 5 nonsingular L-matrix A = I − L − U with unit diagonal (entries again not legible),

ρ(T) = 0.6897,  ρ(T̃) = 0.560,  ρ(T̂) = 0.3832.

In both cases ρ(T̂) < ρ(T̃) < ρ(T) < 1; the comparison between ρ(T̂) and ρ(T) is the one guaranteed by Theorem 3.1(a).
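The experiment is easy to repeat for matrices with random entries. The Python sketch below is illustrative only: it takes Ŝ to be the first-superdiagonal matrix of (2.3) (an assumption where the scanned source is ambiguous) and uses its own randomly generated test matrices rather than the two matrices above, checking cases (a) and (c) of Theorem 3.1.

```python
import numpy as np

def gs_matrix(A):
    """Gauss-Seidel iteration matrix M^{-1}N, M = lower triangle of A."""
    M = np.tril(A)
    return np.linalg.solve(M, M - A)

def S_hat(A):
    """S-hat of (2.3), read here as the first superdiagonal of -A."""
    n = A.shape[0]
    S = np.zeros_like(A)
    i = np.arange(n - 1)
    S[i, i + 1] = -A[i, i + 1]
    return S

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def radii(A):
    """rho(T) for A and rho(T-hat) for the preconditioned system (2.4)."""
    A_hat = (np.eye(A.shape[0]) + S_hat(A)) @ A
    return spectral_radius(gs_matrix(A)), spectral_radius(gs_matrix(A_hat))

rng = np.random.default_rng(1)

# case (a): a strictly diagonally dominant random L-matrix, so rho(T) < 1
B = 0.15 * rng.random((5, 5))
A_conv = np.eye(5) + np.diag(np.diag(B)) - B   # unit diagonal, off-diagonal <= 0
rho_T, rho_That = radii(A_conv)
print(rho_That, "<", rho_T, "< 1")

# case (c): large off-diagonal entries force rho(T) > 1
A_div = np.eye(5) - 0.8 * (np.ones((5, 5)) - np.eye(5))
rho_T2, rho_That2 = radii(A_div)
print(rho_That2, ">", rho_T2, "> 1")
```

In the convergent case the preconditioned radius drops below the classical one, and in the divergent case it grows beyond it, matching the trichotomy of Theorem 3.1.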
References

[1] A. D. Gunawardena, S. K. Jain and L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991), 123-143.

[2] Jia-Gan Hu, Iterative Methods for the Solution of Linear Systems, Science Press, 1999.

[3] T. Kohno, H. Kotakemori, H. Niki and M. Usui, Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997), 113-123.

[4] W. Li and W. W. Sun, Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices, Linear Algebra Appl. 317 (2000), 227-240.

[5] R. S. Varga, Matrix Iterative Analysis, Second Edition, Springer-Verlag, Berlin, Heidelberg, 2000.

[6] Wen Li, Preconditioned AOR iterative methods for linear systems, Int. J. Comput. Math. 79 (2002), 89-101.