A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

Similar documents
Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices

Computers and Mathematics with Applications. Convergence analysis of the preconditioned Gauss Seidel method for H-matrices

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

On optimal improvements of classical iterative schemes for Z-matrices

A generalization of the Gauss-Seidel iteration method for solving absolute value equations

Improving the Modified Gauss-Seidel Method for Z-Matrices

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

CONVERGENCE OF MULTISPLITTING METHOD FOR A SYMMETRIC POSITIVE DEFINITE MATRIX

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Iterative Methods. Splitting Methods

Computational Methods. Systems of Linear Equations

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Perron Frobenius Theory

A LINEAR SYSTEMS OF EQUATIONS. By : Dewi Rachmatin

Solving Linear Systems

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Some bounds for the spectral radius of the Hadamard product of matrices

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

On the Schur Complement of Diagonally Dominant Matrices

Computational Linear Algebra

Lecture 18 Classical Iterative Methods

CAAM 454/554: Stationary Iterative Methods

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

9. Iterative Methods for Large Linear Systems

Chapter 7 Iterative Techniques in Matrix Algebra

9.1 Preconditioned Krylov Subspace Methods

c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March

Z-Pencils. November 20, Abstract

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices

A Refinement of Gauss-Seidel Method for Solving. of Linear System of Equations

Iterative Methods for Solving A x = b

Up to this point, our main theoretical tools for finding eigenvalues without using det{a λi} = 0 have been the trace and determinant formulas

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

ON A HOMOTOPY BASED METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

First, we review some important facts on the location of eigenvalues of matrices.

Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes

Algebra C Numerical Linear Algebra Sample Exam Problems

Abed Elhashash and Daniel B. Szyld. Report August 2007

An Extrapolated Gauss-Seidel Iteration

The Solution of Linear Systems AX = B

Lecture Summaries for Linear Algebra M51A

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Generalizations of M-matrices which may not have a nonnegative inverse

Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018

Notes on Linear Algebra and Matrix Theory

Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White

Markov Chains, Stochastic Processes, and Matrix Decompositions

Numerical Methods - Numerical Linear Algebra

Department of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in. NUMERICAL ANALYSIS Spring 2015

Homework 2 Foundations of Computational Math 2 Spring 2019

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

Numerical Linear Algebra And Its Applications

MATH36001 Perron Frobenius Theory 2015

Review of Basic Concepts in Linear Algebra

Properties for the Perron complement of three known subclasses of H-matrices

Strictly nonnegative tensors and nonnegative tensor partition

2 Two-Point Boundary Value Problems

Scientific Computing WS 2018/2019. Lecture 10. Jürgen Fuhrmann Lecture 10 Slide 1

Computational Economics and Finance

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

To appear in Monatsh. Math. WHEN IS THE UNION OF TWO UNIT INTERVALS A SELF-SIMILAR SET SATISFYING THE OPEN SET CONDITION? 1.

Applicable Analysis and Discrete Mathematics available online at GRAPHS WITH TWO MAIN AND TWO PLAIN EIGENVALUES

CLASSICAL ITERATIVE METHODS

Next topics: Solving systems of linear equations

Comparison results between Jacobi and other iterative methods

Two Characterizations of Matrices with the Perron-Frobenius Property

Properties of Matrices and Operations on Matrices

JACOBI S ITERATION METHOD

Fast Verified Solutions of Sparse Linear Systems with H-matrices

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Spectral Properties of Matrix Polynomials in the Max Algebra

CHAPTER 5. Basic Iterative Methods

Analysis of Iterative Methods for Solving Sparse Linear Systems C. David Levermore 9 May 2013

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

Classical iterative methods for linear systems

arxiv: v1 [math.ra] 11 Aug 2014

Permutation transformations of tensors with an application

Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo. Department of Mathematics Iowa State University

Iterative Solution methods

Basic Concepts in Linear Algebra

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

Numerical Linear Algebra Homework Assignment - Week 2

Iterative Methods for Ax=b

Discrete mathematical models in the analysis of splitting iterative methods for linear systems

Fiedler s Theorems on Nodal Domains

Interlacing Inequalities for Totally Nonnegative Matrices

COURSE Iterative methods for solving linear systems

Splitting Iteration Methods for Positive Definite Linear Systems

Solving Linear Systems of Equations

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Dense LU factorization and its error analysis

MAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to :

Transcription:

Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University Taiyuan 03002 Shanxi Province, P. R. China e-mail: wenrp@63.com Abstract In 99 A.D. Gunawardena et al. reported that the convergence rate of the Gauss-Seidel method with a preconditioning matrix I + S is superior to that of the basic iterative method. In this paper, we use the preconditioning matrix I + Ŝ. If a coefficient matrix A is a nonsingular L- matrix, the modified method yields considerable improvement in the rate of convergence for the iterative method. Finally, a numerical example shows the advantage of this method. For solving the linear system. Introduction Ax n n n b, A ( a ij ) R nonsingular and b R. (.) Let A M N, and M is nonsingular. Then the basic iterative scheme for Eq. (.) is 2000 Mathematics Subject Classification: 65F0. Keywords and phrases: L-matrix, iterative method, convergence. This work is supported by NSF of Shanxi Province, 2005009. Received July 6, 2008 2008 Scientific Advances Publishers

32 xk+ T xk + c, k 0,,, where T M N, c M b. Assuming A I L U, where I is the identity matrix, and L and U are strictly lower and strictly upper triangular parts of A, respectively. M I L, N U leads to the classical Gauss-Seidel method. Then the iteration matrices of the classical Jacobi and classical Gauss-Seidel methods are J L + U and U T ( I L ), respectively. We consider a preconditioned system of (.) PAx Pb. (.2) The corresponding basic iterative method is given in general by ( k+ ) ( k) x M p N pr + M p Pb, k 0,,, where PA M p N is a splitting of PA. p Some techniques of preconditioning (see [3, 4, 6]) which improve the rate of convergence of the classical iterative methods have been developed. For example, a simple modified Gauss-Seidel (MGS) method was first proposed by Gunawardena et al. (see []) with P I + S and S aij, i j, ( sij ) (.3) 0, i j. The performance of this method on some matrices is investigated in []. Kohno et al. (see [3]) extend Gunawardena et al.'s work to a more general case and presented a new MGS method by using the preconditioned matrix P I + S, where α Sα αiaij, i 0, i j, j 0 αi. The performance of this method on some matrices is investigated in [3]. Later, Li et al. (see [5]) studied convergence of this method on Z- matrices.

A NEW EFFECTIVE PRECONDITIONED METHOD 33 In this paper, we propose another preconditioned form for MGS and Jacobi methods on L-matrices. Theorem 3. and Theorem 3.2 provide convergence results, respectively. 2. Preliminaries n n For A R, the directed graph G ( A) of A is defined to be the pair ( V, E), where V {, 2,, n} is the set of vertices and E {( i, j ) aij 0, i, j n} is the set of edges. A path from i to i p is an ordered tuple of vertices ( i, i2,, i p ) such that for each k, ( ik, ik+ ) E. A directed graph is said to be strongly connected if for each pair ( i, j ) of vertices, there is a path from i to j. The reflexive transitive closure of the graph G ( A) is a graph denoted by G ( A). It is the smallest reflexive and transitive relation which includes the relation G ( A). The matrix A is said to be irreducible if G ( A) is strongly connected (see [5]). A matrix A is called an L-matrix if a 0 for all i, j such that i j and a ii > 0 for all i (see [2]). A matrix A is said to be nonnegative if each entry of A is nonnegative, and is said to be a positive matrix if each entry is positive. We shall denote this by A 0 and A > 0, respectively. The spectral radius of A is denoted by ρ( A). Now let us summarize the modified Gauss-Seidel method [] with the preconditioner P I + S. Let all elements a ii+ of S be nonzero. Then we have Whenever ~ A x + ( I + S ) Ax [ I L SL ( U S SU )] x, ~ b ( I + S ) b. (2.) a for i, 2,, n. ii+ a i + i ( I SL L ) exists, and hence it is possible to define the Gauss-Seidel iteration matrix for A, namely ij

34 ~ T ( I SL L ) ( U S + SU ). (2.2) This iteration matrix T ~ is called the modified Gauss-Seidel iteration matrix. In what follows, when A ( a ij ) is an L-matrix we shall assume that a 0 for i, 2,, n. ii+ a i + i > is We next propose a new preconditioned matrix P I + Ŝ, where Ŝ Sˆ aij, i j +, ( sˆ ij ) (2.3) 0, i j +. Let all elements ai i + of Ŝ be nonzero. Thus we obtain A ˆx ˆ ˆ ˆ + ( I + S ) Ax [ I L SL ( U S SU )] x, b ˆ ( I + Sˆ ) b. (2.4) ˆ Whenever a i+ i for i, 2,, n, ( ˆ I SL L ) exists, and hence it is possible to define the Gauss-Seidel iteration matrix for Â, namely ˆ ( ˆ T I SL L ) ( U Sˆ + SU ˆ ). (2.5) Remark. The usual splitting of A is A D L U, where D, L, and U are the diagonal, strictly lower, and strictly upper triangular parts of A. Since A is an L-matrix, let us consider the new matrix ˆ A D A I D L D U. Clearly the system Aˆ x bˆ D b is equivalent to the system Ax b. Without loss of generality we may assume A has the splitting of the form A I L U when A is an L-matrix. T Remark 2. As the standard Gauss-Seidel iteration matrix U ( I L ), the first column of (2.5) are zero when a2 a2 (In

A NEW EFFECTIVE PRECONDITIONED METHOD 35 additional, the first two columns of (2.5) are zero when a 2 a 2.). Thus we may partition T and Tˆ so that 0 T 0 ˆ 0 and ˆ T0 T T, 0 0 ˆ (2.6) T T where T and T ˆ are both ( n ) ( n ) matrices. Theorem 2. (Perron-Frobenius) (see [2]). (a) If A is a positive matrix, then ρ ( A) is a simple eigenvalue of A. (b) If A is nonnegative and irreducible, then ρ ( A) is a simple eigenvalue of A. Furthermore, any eigenvalue with the same modulus as ρ ( A) is also simple, and A has a positive eigenvector x corresponding to the eigenvalue ρ ( A). Any other positive eigenvector of A is a multiple of x. Theorem 2.2 (see [5]). Let A be a nonnegative matrix. Then: (a) If α x Ax for some nonnegative vector x, x 0, then α ρ( A). (b) If Ax βx for some positive vector x, then ρ( A ) β. Moreover, if A is irreducible and if 0 αx Ax βx for some nonnegative vector x, then α ρ( A ) β and x is a positive vector. Theorem 2.3 (see [5]). Let A be irreducible. If M is the maximum row sum of A and m is the minimum row sum of A, then m ρ( A) M. Theorem 2.4 (see [5]). Let A I L U be an L-matrix with usual splitting. Let T ( I L ) U and J L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. Then exactly one of the following holds: (a) ρ( T ) ρ( J ) 0, (b) 0 < ρ( T ) < ρ( J ) <,

36 (c) ρ( T ) ρ( J ), (d) < ρ( J ) < ρ( T ). Lemma 2.5. Let A I L U be an L-matrix with usual splitting. Let T ( I L ) U and J L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. If a i+ i, then T and T ˆ are irreducible matrices (where T and T ˆ are defined as in (2.6)). Proof. We will show that T is irreducible by showing that ( 2, 3,, n, 2) is a cycle in G ( T ); consequently it is also a cycle in G( T ), so G ( T ) is strongly connected. Since L is a nonnegative nilpotent matrix, ρ( L ) 0. Thus 2 n ( I L ) I + L + L + + L. 2 n Therefore T ( I + L + L + + L )U and G ( T ) G( L) G( U ). The condition a 0 implies that (, 2,, n) is a path in G( U ) ii+ a i + i > and ( n, n,, ) is a path in G ( L). Also G( U ) G( T ), since 2 n G( T ) G( U ) G( LU ) G( L U ) G( L U ). In particular, ( 2, 3,, n) is a path in G ( T ), so we only need to show that ( n, 2) is an edge in G ( T ). For this note that ( n, ) is an edge in G ( L) and (, 2) is an edge in G ( U ). Consequently ( n, 2) is an edge in G( T ). Similarly, it can be shown that T ˆ is irreducible. Lemma 2.6. Let A be an L-matrix such that a for i+ i i, 2,, n. Then ρ( T ) implies ρ( T ˆ ). (All matrices considered here are definited as in Remark.)

A NEW EFFECTIVE PRECONDITIONED METHOD 37 Proof. Clearly ρ( T ) implies ρ( T ). From Lemma 2.5, T is irreducible. By Theorem 2. there exists a positive vector ω such that T ω ω. Now define T0ω ω. ω Then clearly T ω ω. Since T 0 0, T0ω is a positive scalar and hence ω is a positive vector. Consider By factoring ˆ Tω ( I L SL ) ( U S + SU ) ω. I L from the right hand side we get Tˆ ω [ I ( I L ) SL ] ( I L ) ( U S + SU )ω [ I ( I L ) SL ] [( I L ) Uω ( I L ) Sω + ( I L ) SUω]. I U Now ( L ) ω ω implies U ω ( I L ) ω. Therefore we get Tˆ ω [ I ( I L ) SL ] [ ω ( I L ) Sω + ( I L ) S( I L ) ω] [ I ( I L ) SL ] [ ω ( I L ) SLω] [ I ( I L ) SL ] [ I ( I L ) SLω] ω. Thus by Theorem 2.2, ρ( Tˆ ). Lemma 2.7. Suppose A I L U is an L-matrix such that a i+ i, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Then the standard Jacobi iteration matrix J L + U and the modified Jacobi iteration matrix Tˆ ( I SL ˆ ) ( L + U Sˆ + SU ˆ ) are both irreducible. Proof. It follows from the condition a a 0 that i+ i ii+ > (, 2,, n, n, n,, 2, ) is a path in G ( L + U ). Hence G(J) is strongly connected, and so J is irreducible.

38 Next we show that Ĵ is an irreducible matrix. Note that G( J ˆ ) G( L + U Sˆ + SU ˆ ) G( L) G( U Sˆ ) G( Sˆ U ), where the first line follows from the fact that ( ˆ I SL ) is a nonnegative matrix with a positive diagonal. Recall that G ( L) contains the path ( n, n,, ), and note that G( SU ˆ ) contains the edges (, 3), ( 2, 4),, ( n 2, n). Hence G( J ˆ ) is strongly connected and Ĵ is irreducible. Theorem 3.. Suppose 3. Main Results on L-matrices A I L U is an L-matrix such that a i+ i, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let T ( I L ) U and Tˆ ( I SL ˆ L ) ( U Sˆ + SU ˆ ) be the classical and modified Gauss- Seidel iteration matrices, respectively. Then (a) ρ( T ˆ ) < ρ( T ) if ρ( T ) <, (b) ρ( T ˆ ) ρ( T ) if ρ( T ), (c) ρ( T ˆ ) > ρ( T ) if ρ( T ) >. Proof. Part (b) follows from Lemma 2.6. Now we prove (a) and (c), first we note there exists a positive vector ω such that where λ ρ( T ). Now consider Tˆ ω ( I L SL ˆ ) ( U Sˆ + SU ˆ )ω T ω λω, (3.) ( I L SL ˆ ) ( I + Sˆ ) U ω ( I L SL ˆ ) Sˆ ω

A NEW EFFECTIVE PRECONDITIONED METHOD 39 ( I L SL ˆ ) ( I + Sˆ ) λ( I L ) ω ( I L SL ˆ ) Sˆ ω by (3.). Therefore Tˆ ω Tω ˆ ˆ ˆ ˆ ( I L SL ) [ λ ( I + S )( I L ) ω Sω ( I L SL )( I L ) U ω] ˆ ( I L SL ) [ λ( I L ) ω + ( λ ) Sω U ω] ( ˆ I L SL ) ( λ ) Sˆ ω. Write ( I L SL ˆ ) D + L for some positive diagonal matrix D and a nonnegative strictly lower triangular matrix L. Then ( ˆ I L SL ) Sˆ ω ( D + L ) Sˆ ω 0, since D Sˆ ω 0. Also, since DS ˆ ω ( I L SL ˆ 0, ) S ˆ ω is a nonzero, nonnegative vector. If λ <, then T ˆω Tω 0. Therefore By using the partitioned form of ˆ λω. Tω ˆ Tˆ 0 0 Tˆ 0 Tˆ introduced in Remark 2, we get ρ( T ˆ ) < λ, by Theorem 2.2. If λ >, then T ˆω Tω 0 but not equal to 0. Hence T ˆ ω λω, and this implies T ˆ ω λω, where ω α ω, ω > 0. Therefore ( T ˆ ) > λ by Theorem2.2. Hence ρ( Tˆ ) > λ. ρ Theorem 3.2. Let A I L U be an L-matrix, where L and U are the strictly lower and strictly upper triangular parts of A respectively. Let J L + U and Jˆ ( I SL ˆ ) ( L + U Sˆ + SU ˆ ) be the classical and

320 modified Jacobi iteration matrices, respectively. Further assume that J and Ĵ are irreducible matrices. Then (a) ρ( Jˆ ) < ρ( J ) if ρ( J ) <, (b) ρ( Jˆ ) ρ( J ) if ρ( J ), (c) ρ( Jˆ ) > ρ( J ) if ρ( J ) >. Proof. Since J is irreducible, by Theorem2. there exists a positive vector ω such that J ω λω, where λ ρ( J ). This implies Now consider ( L + U ) ω λω. (3.2) Jˆ ω Jω ( I SL ˆ ) [ L + U Sˆ + SU ˆ ( I SL ˆ )( L + U )]ω ( I SL ˆ ) Sˆ 2 [ I + U + L + LU ]ω ˆ ˆ ( I SL ) S[ I + U + λl] ω by ( 3.2) ( I SL ˆ ) Sˆ ( ω + λω Lω + λlω) ( ˆ I SL ) Sˆ ( I + L )( λ ) ω. (3.3) If ρ( J ) <, then Tˆ ω ρ( J )ω by (3.3), and so, from Theorem 2.2, we obtain ρ( J ˆ ) < ρ( J ). Similarly we can get ρ( J ˆ ) > ρ( J ) if ρ( J ) >, and (b) also follows from Theorem 2.2. Corollary 3.3. Let A I L U, J, Jˆ be as definite in Theorem 3.2. Replace J, J ˆ are irreducible matrices by the condition a i+ i. Then the conclusion of Theorem 3.2 holds. Remark. We have tested many examples with random entries. Finally, we present some matrices to show: the modified iteration matrix

A NEW EFFECTIVE PRECONDITIONED METHOD 32 Tˆ has a faster convergence rate when the iteration matrix T and T ~ are both convergent, and the modified iteration matrix Tˆ diverges even faster when the iteration matrix T and T ~ are both divergent. The proof follows from Lemma 2.7. 4. Numerical Example We present some matrices to illustrate our results. Let 0.2 A 0.3 0. 0.2 0.2 0.2 0. 0.3 0. 0.3 0. 0.4 0.4 0. 0. 0.3 0.2 0.6 0.6 I 0.0 L U, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. If ~ T ( I L) U, T and Tˆ are defined as (2.2) and (2.5), respectively. ~ Then ρ( T ) 0.96, ρ( T ) 0. 9505 and ρ( Tˆ ) 0.703. Let 0.289 A 0.424 0.3454 0.0363 0.0089 0.3383 0.3384 0.45 0.305 0.4724 0.4843 0.33680 0.0679 0.2938 0.0972 0.266 0.0252 0.3628 0.0290 I L U, 0.2982 ~ then ρ( T ) 0.6897, ρ( T ) 0. 560 and ρ( Tˆ ) 0.3832.

322 References [] A. D. Gunawardena, S. K. Jain and L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 54-56 (99), 23-43. [2] Jia-Gan Hu, Iterative Methods for the Solution of Linear Systems, Science Press, 999. [3] T. Kohno, H. Kotakemori, H. Niki and M. Usui, Improving modified iterative methods for Z- matrices, Linear Algebra Appl. 267 (997), 3-23. [4] W. Li and W. W. Sun, Modified Gauss-Seidel type methods and Jacobi type methods for Z- matrices, Linear Algebra Appl. 37 (2000), 227-240. [5] R. S. Varga, Matrix Iterative Analysis, Second Edition, Springer-Verlag, Berlin, Heidelberg, 2000. [6] Li Wen, Preconditioned AOR iterative methods for linear systems, Inter. J. Comp. Math. 79 (2002), 89-0. g