
Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp. 169-184

A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

ADEL H. AL-RABTAH

Abstract. The Jacobi and Gauss-Seidel iterative methods are among the classical iterative methods for solving linear systems of equations. In this paper a new iterative method is introduced; it is based on a linear combination of the old and the most recently calculated solutions. The new method can be considered a general method, with the Jacobi and Gauss-Seidel methods as two special cases of it. Some convergence properties are studied, and numerical examples are given to show the effectiveness of the new method. When the Jacobi method converges, the new method can be used to accelerate the convergence. In special cases, when one of the two iterative methods, Jacobi or Gauss-Seidel, diverges, the new method can be used to obtain convergence.

2000 Mathematics Subject Classification: 65F10, 93C05.
Key words and phrases: Jacobi method; Gauss-Seidel method; linear systems; iterative methods; convergence.
Copyright © Deanship of Research and Graduate Studies, Yarmouk University, Irbid, Jordan.
Received: May 12, 2011. Accepted: Jan 31, 2012.

1. Introduction

Direct methods for solving linear systems Ax = b, based on the triangularization of the matrix A, become prohibitive in terms of computer time and storage if the matrix A is quite large [1]. On the other hand, there are practical situations, such as the discretization of partial differential equations, where the matrix size can be as large as several hundred thousand. For such problems, direct methods become

impractical. Furthermore, most large problems are sparse, and the sparsity is lost to a considerable extent during the triangularization procedure, so that in the end we have to deal with a very large matrix with too many nonzero entries, and storage becomes a crucial issue. Iterative methods are well suited to such problems.

The Jacobi and Gauss-Seidel iterative methods are classical iterative methods for solving linear systems of equations. They are used primarily to solve very large, sparse systems Ax = b. Iterative methods are seldom used for systems of small dimension, since the time required to reach sufficient accuracy exceeds that required by direct methods such as Gaussian elimination. For large systems with a high percentage of zero entries, however, these methods are efficient in terms of both computer storage and computation. Such systems arise in engineering applications, as in circuit analysis and in the numerical solution of boundary-value problems and partial differential equations [2].

There are many other iterative methods, such as the Successive Over-Relaxation (SOR) method, of which the Gauss-Seidel method is a special case. The GMRES [3] and Bi-CGSTAB [4] algorithms are also used to solve linear systems of equations. The Conjugate Gradient method is used to solve symmetric positive definite systems [1]. The Jacobi and Gauss-Seidel methods converge for strictly diagonally dominant matrices [1, 5]. The Gauss-Seidel method also converges for symmetric positive definite systems [1]. For special types of matrices, some useful properties for the convergence of the Jacobi, Gauss-Seidel, and SOR methods are presented in [6].

2. Preliminaries

The basic idea behind an iterative method is first to convert the n × n linear system

(2.1)    Ax = b

into an equivalent system of the form

(2.2)    x = Tx + c,

for some fixed matrix T and vector c. After an initial approximation x^(0) to the solution vector x is selected, a sequence of approximations {x^(k)} is generated by computing

(2.3)    x^(k) = T x^(k-1) + c,    k = 1, 2, ...,

with the hope that the sequence {x^(k)} converges to the solution x of the system Ax = b as k → ∞.
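To make the scheme (2.3) concrete, the following is a minimal NumPy sketch (not part of the original paper; the function name, tolerance, and iteration cap are illustrative) of the generic iteration:

    import numpy as np

    def fixed_point_iteration(T, c, x0, tol=1e-5, max_iter=1000):
        """Iterate x^(k) = T x^(k-1) + c; stop when successive iterates
        agree to within tol in the infinity norm."""
        x = np.asarray(x0, dtype=float)
        for k in range(1, max_iter + 1):
            x_new = T @ x + c
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new, k
            x = x_new
        return x, max_iter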

To implement this process, consider the linear system (2.1), where A ∈ R^(n×n) is a nonsingular matrix and x, b ∈ R^n. The method works under the assumption that a_ii ≠ 0 for each i = 1, ..., n; to guarantee this, we convert the system (2.1) into an equivalent system PAx = Pb, where P is a permutation matrix. For simplicity, we assume that the matrix A satisfies this assumption. Thus, write the matrix A in the form

A = D - L - U,

where D is the diagonal matrix whose diagonal entries are those of A, L is the negative of the strictly lower-triangular part of A, and U is the negative of the strictly upper-triangular part of A; i.e.,

D = [ a_11   0    ...   0   ]
    [  0    a_22  ...   0   ]
    [  :     :          :   ]
    [  0     0    ...  a_nn ],

L = [   0      0    ...      0      ]
    [ -a_21    0    ...      0      ]
    [   :      :             :      ]
    [ -a_n1   ...  -a_n,n-1  0      ],

U = [ 0  -a_12  ...  -a_1n   ]
    [ 0    0    ...  -a_2n   ]
    [ :    :         -a_n-1,n ]
    [ 0    0    ...    0     ],

so we obtain the equivalent system (D - L - U)x = b.

The Jacobi iterative method is defined as

x^(k) = D^(-1)(L + U) x^(k-1) + D^(-1) b,    k = 1, 2, ....

We call T_j = D^(-1)(L + U) the Jacobi iteration matrix, and c_j = D^(-1) b the Jacobi vector.

The Gauss-Seidel method is a modification of the Jacobi method, defined as

x^(k) = (D - L)^(-1) U x^(k-1) + (D - L)^(-1) b,    k = 1, 2, ....

We call T_g = (D - L)^(-1) U the Gauss-Seidel iteration matrix, and c_g = (D - L)^(-1) b the Gauss-Seidel vector.
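For the small dense examples in Section 4, the splitting and the two classical iteration pairs (T, c) can be formed explicitly. A sketch (again ours, following the sign convention A = D - L - U above; forming T explicitly is only reasonable for small matrices):

    import numpy as np

    def splitting(A):
        """Return D, L, U with A = D - L - U: D is the diagonal of A,
        and L (U) is the negated strictly lower (upper) triangular part of A."""
        D = np.diag(np.diag(A))
        L = -np.tril(A, k=-1)
        U = -np.triu(A, k=1)
        return D, L, U

    def jacobi_matrices(A, b):
        """T_j = D^(-1)(L + U), c_j = D^(-1) b."""
        D, L, U = splitting(A)
        return np.linalg.solve(D, L + U), np.linalg.solve(D, b)

    def gauss_seidel_matrices(A, b):
        """T_g = (D - L)^(-1) U, c_g = (D - L)^(-1) b."""
        D, L, U = splitting(A)
        return np.linalg.solve(D - L, U), np.linalg.solve(D - L, b)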

3. A New Iterative Method for Solving Linear Systems of Equations

Consider the linear system of equations

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
  ..................................
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n,

where a_ii ≠ 0 for each i = 1, ..., n. To implement the new method, solve the i-th equation of the above system for x_i, generating each x_i^(k) from the components of x^(k-1) and from the most recently calculated values of x^(k), for k ≥ 1. That is,

x_i^(k) = (1/a_ii) [ - Σ_{j=1}^{i-1} a_ij ( µ x_j^(k) + (1-µ) x_j^(k-1) ) - Σ_{j=i+1}^{n} a_ij x_j^(k-1) + b_i ],

for each i = 1, ..., n, where µ ∈ [0, 1].

To write the method in matrix notation, consider the linear system Ax = b, where A is a nonsingular matrix with nonzero diagonal entries, and let A = D - L - U as described earlier. The new iterative method can be written as

(3.1)    x^(k) = T_µ x^(k-1) + c_µ,    k = 1, 2, ...,

where T_µ = (D - µL)^(-1) [(1-µ)L + U] and c_µ = (D - µL)^(-1) b. Notice that (D - µL)^(-1) exists, since D - µL is a lower triangular matrix with nonzero diagonal entries. Note that if µ = 0, the iterative method (3.1) reduces to the Jacobi method, and if µ = 1, it reduces to the Gauss-Seidel method.
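In code, the iteration (3.1) needs only one more routine. The sketch below (ours, reusing numpy and the splitting helper from the previous sketch) builds T_µ and c_µ:

    def new_method_matrices(A, b, mu):
        """T_mu = (D - mu*L)^(-1) [(1 - mu)*L + U], c_mu = (D - mu*L)^(-1) b.
        D - mu*L is lower triangular with nonzero diagonal, hence invertible.
        mu = 0 gives the Jacobi pair, mu = 1 the Gauss-Seidel pair."""
        D, L, U = splitting(A)   # splitting() as in the previous sketch
        M = D - mu * L
        return np.linalg.solve(M, (1.0 - mu) * L + U), np.linalg.solve(M, b)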

Definition 3.1. The quantity ρ(A), defined by ρ(A) = max_i |λ_i|, where λ_1, ..., λ_n are the eigenvalues of A, is called the spectral radius of A.

Theorem 3.1. If the spectral radius ρ(T) satisfies ρ(T) < 1, then (I - T)^(-1) exists, and

(I - T)^(-1) = I + T + T^2 + ... = Σ_{j=0}^{∞} T^j.

Theorem 3.2. For any x^(0) ∈ R^n, the sequence {x^(k)}_{k=0}^{∞} defined by

x^(k) = T x^(k-1) + c,    k ≥ 1,

converges to the unique solution of x = Tx + c if and only if ρ(T) < 1.

Definition 3.2. An n × n matrix A is said to be strictly row diagonally dominant (SRDD) if

|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|,    i = 1, ..., n.

Theorem 3.3. If A is a strictly row diagonally dominant matrix, then the iterative method (3.1) converges for any arbitrary choice of x^(0).

Proof. Let x be a nonzero vector such that T_µ x = λx, where λ is an eigenvalue of T_µ. This gives

[(1-µ)L + U] x = λ (D - µL) x,

or, equivalently,

λ a_ii x_i = -λµ Σ_{j=1}^{i-1} a_ij x_j - (1-µ) Σ_{j=1}^{i-1} a_ij x_j - Σ_{j=i+1}^{n} a_ij x_j.

Let x_k be the component of x with the largest magnitude, scaled so that |x_k| = 1. Then

|λ| |a_kk| ≤ |λ| µ Σ_{j=1}^{k-1} |a_kj| + (1-µ) Σ_{j=1}^{k-1} |a_kj| + Σ_{j=k+1}^{n} |a_kj|.

Thus

(3.2)    |λ| ≤ [ (1-µ) Σ_{j=1}^{k-1} |a_kj| + Σ_{j=k+1}^{n} |a_kj| ] / [ |a_kk| - µ Σ_{j=1}^{k-1} |a_kj| ].

Since A is strictly diagonally dominant,

|a_kk| > Σ_{j=1, j≠k}^{n} |a_kj| = (µ + (1-µ)) Σ_{j=1}^{k-1} |a_kj| + Σ_{j=k+1}^{n} |a_kj|,

and hence

|a_kk| - µ Σ_{j=1}^{k-1} |a_kj| > (1-µ) Σ_{j=1}^{k-1} |a_kj| + Σ_{j=k+1}^{n} |a_kj|.

From the inequality (3.2) we conclude that |λ| < 1, so ρ(T_µ) < 1, which implies that T_µ is a convergent matrix. Therefore the iterative method (3.1) converges by Theorem 3.2.
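Theorems 3.2 and 3.3 suggest simple numerical checks for the small examples in Section 4: form the iteration matrix and inspect its spectral radius, or test strict row diagonal dominance directly. A sketch (ours, with numpy as before):

    def spectral_radius(T):
        """rho(T) = max |lambda_i| over the eigenvalues of T (Definition 3.1)."""
        return float(np.max(np.abs(np.linalg.eigvals(T))))

    def is_srdd(A):
        """Strict row diagonal dominance: |a_ii| > sum of |a_ij|, j != i (Definition 3.2)."""
        d = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - d
        return bool(np.all(d > off))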

Theorem 3.4. If A is a symmetric positive definite matrix, then the iterative method (3.1), in the case µ = 1, converges for any arbitrary choice of x^(0).

Proof. Since A is symmetric, A = D - L - U = D - L - L^T. We need to show that ρ(T_µ) < 1, where

T_µ = (D - µL)^(-1) [(1-µ)L + U] = (D - µL)^(-1) [(1-µ)L + L^T].

Let λ be an eigenvalue of T_µ, and u its associated eigenvector. Then

(D - µL)^(-1) [(1-µ)L + L^T] u = λu,

and

[(1-µ)L + L^T] u = λ (D - µL) u.

Using the identity L^T = D - L - A, the last equation becomes

[D - µL - A] u = λ (D - µL) u.

Multiplying both sides by u*, we can write

(3.3)    u* A u = (1 - λ) u* (D - µL) u.

Taking the conjugate transpose, we obtain

(3.4)    u* A u = (1 - λ̄) u* (D - µL^T) u.

(Note that λ ≠ 1, since λ = 1 would force Au = 0 with u ≠ 0, contradicting the nonsingularity of A.) From equations (3.3) and (3.4), using L + L^T = D - A, we obtain

( 1/(1-λ) + 1/(1-λ̄) ) u* A u = (2 - µ) u* D u + µ u* A u.

If µ = 1 (the Gauss-Seidel method), then, since u* D u > 0,

( 1/(1-λ) + 1/(1-λ̄) ) u* A u > u* A u.

This implies

1/(1-λ) + 1/(1-λ̄) > 1,    or    (2 - λ - λ̄) / ((1-λ)(1-λ̄)) > 1.

Let λ = α + iβ; then λ̄ = α - iβ, so the last inequality becomes 2(1-α)/((1-α)^2 + β^2) > 1, from which it follows that α^2 + β^2 < 1. That is, ρ(T_g) < 1, because |λ| = sqrt(α^2 + β^2). Therefore the iterative method (3.1), in the case µ = 1, converges for any arbitrary choice of x^(0), by Theorem 3.2.

Definition 3.3. A real matrix A of order N is an L-matrix if a_ii > 0, i = 1, 2, ..., N, and a_ij ≤ 0, i ≠ j, i, j = 1, 2, ..., N.

In the next theorem, we let |B| denote the matrix whose elements are the moduli of the corresponding elements of B.

Theorem 3.5. If |A| ≤ B, then ρ(A) ≤ ρ(B).

The proof of Theorem 3.5 is given by Oldenburger [7].

Theorem 3.6. If A ≥ 0, then ρ(A) is an eigenvalue of A, and there exists a nonnegative eigenvector of A associated with ρ(A).

The proof of Theorem 3.6 is given by Frobenius [8].

Theorem 3.7. If A is an L-matrix, then
a. If ρ(T_j) < 1 and ρ(T_µ) < 1, then ρ(T_µ) ≤ ρ(T_j) < 1.
b. If ρ(T_j) ≥ 1 and ρ(T_µ) ≥ 1, then ρ(T_µ) ≥ ρ(T_j) ≥ 1.

Proof. Let L̃ = D^(-1)L and Ũ = D^(-1)U, so that the iteration matrices of the Jacobi method and of the new method can be rewritten, respectively, as

T_j = L̃ + Ũ,    T_µ = (I - µL̃)^(-1) [(1-µ)L̃ + Ũ].

Since L̃ is a strictly lower triangular matrix, L̃^N = 0 for some positive integer N, and since A is an L-matrix and 0 ≤ µ ≤ 1, we have

(I - µL̃)^(-1) = I + µL̃ + µ^2 L̃^2 + ... + µ^(N-1) L̃^(N-1) ≥ 0,

and

T_µ = (I - µL̃)^(-1) [(1-µ)L̃ + Ũ] ≥ 0.

Let λ = ρ(T_µ) and λ̃ = ρ(T_j). By Theorem 3.6, λ is an eigenvalue of T_µ, and for some w ≥ 0, w ≠ 0, we have T_µ w = λw, and

[(1 - µ + µλ) L̃ + Ũ] w = λw.

Since λ is an eigenvalue of (1 - µ + µλ)L̃ + Ũ, we have

λ ≤ ρ( (1 - µ + µλ) L̃ + Ũ ).

If λ ≤ 1, then ρ( (1 - µ + µλ)L̃ + Ũ ) ≤ ρ( L̃ + Ũ ) = λ̃ by Theorem 3.5, and so λ ≤ λ̃. On the other hand, if λ ≥ 1, then

λ ≤ ρ( (1 - µ + µλ)L̃ + Ũ ) ≤ ρ( (λ - µλ + µλ)L̃ + λŨ ) = λ ρ( L̃ + Ũ ) = λ λ̃,

and λ̃ ≥ 1. We have thus shown:

i) If λ ≤ 1, then λ ≤ λ̃.
ii) If λ ≥ 1, then λ̃ ≥ 1, which implies
iii) If λ̃ < 1, then λ < 1.

Since T_j = L̃ + Ũ ≥ 0, it follows by Theorem 3.6 that λ̃ is an eigenvalue of T_j. Therefore, for some x ≥ 0, x ≠ 0, we have (L̃ + Ũ)x = λ̃x, and

( I - (µ/λ̃) L̃ )^(-1) [ (1-µ)L̃ + Ũ ] x = λ̃ x,

so that

λ̃ ≤ ρ( ( I - (µ/λ̃) L̃ )^(-1) [ (1-µ)L̃ + Ũ ] ).

If λ̃ ≥ 1, then, since µ/λ̃ ≤ µ, we have

( I - (µ/λ̃) L̃ )^(-1) = I + (µ/λ̃)L̃ + ... + (µ/λ̃)^(N-1) L̃^(N-1) ≤ I + µL̃ + ... + µ^(N-1) L̃^(N-1) = (I - µL̃)^(-1),

and

( I - (µ/λ̃) L̃ )^(-1) [ (1-µ)L̃ + Ũ ] ≤ ( I - µL̃ )^(-1) [ (1-µ)L̃ + Ũ ] = T_µ. Hence

λ̃ ≤ λ, by Theorem 3.5. Thus we have shown:

iv) If λ̃ ≥ 1, then λ̃ ≤ λ.

By i) and iii) we have a). By ii) and iv) we have b).

Theorem 3.8. If A is an L-matrix, then
a. If ρ(T_g) < 1 and ρ(T_µ) < 1, then ρ(T_g) ≤ ρ(T_µ) < 1.
b. If ρ(T_g) ≥ 1 and ρ(T_µ) ≥ 1, then ρ(T_g) ≥ ρ(T_µ) ≥ 1.

Proof. Let L̃ = D^(-1)L and Ũ = D^(-1)U, so that the iteration matrices of the Gauss-Seidel method and of the new method can be rewritten, respectively, as

T_g = (I - L̃)^(-1) Ũ,    T_µ = (I - µL̃)^(-1) [(1-µ)L̃ + Ũ],

and, as in the proof of Theorem 3.7, we have

T_µ = (I - µL̃)^(-1) [(1-µ)L̃ + Ũ] ≥ 0.

Let λ = ρ(T_µ) and λ̂ = ρ(T_g). By Theorem 3.6, λ is an eigenvalue of T_µ, and for some v ≥ 0, v ≠ 0, we have T_µ v = λv, and therefore

[ I - (µ + (1-µ)/λ) L̃ ]^(-1) Ũ v = λ v,

so that

λ ≤ ρ( [ I - (µ + (1-µ)/λ) L̃ ]^(-1) Ũ ).

If λ < 1, then, since µ + (1-µ)/λ ≥ 1, we have

[ I - (µ + (1-µ)/λ) L̃ ]^(-1) = I + (µ + (1-µ)/λ) L̃ + ... + (µ + (1-µ)/λ)^(N-1) L̃^(N-1)
 ≥ I + L̃ + ... + L̃^(N-1) = (I - L̃)^(-1),

and

[ I - (µ + (1-µ)/λ) L̃ ]^(-1) Ũ ≥ (I - L̃)^(-1) Ũ,

and hence λ ≥ λ̂. Thus we have shown:

i) If λ < 1, then λ̂ ≤ λ < 1.

But if λ ≥ 1, then, since µ + (1-µ)/λ ≤ 1, we have

[ I - (µ + (1-µ)/λ) L̃ ]^(-1) = I + (µ + (1-µ)/λ) L̃ + ... + (µ + (1-µ)/λ)^(N-1) L̃^(N-1)
 ≤ I + L̃ + ... + L̃^(N-1) = (I - L̃)^(-1),

and

[ I - (µ + (1-µ)/λ) L̃ ]^(-1) Ũ ≤ (I - L̃)^(-1) Ũ = T_g,

and hence λ ≤ λ̂. We have thus shown:

ii) If λ ≥ 1, then λ̂ ≥ λ ≥ 1.

By i) we have a). By ii) we have b).

Corollary 3.1. If A is an L-matrix, then
a. If ρ(T_j) < 1, ρ(T_µ) < 1, and ρ(T_g) < 1, then ρ(T_g) ≤ ρ(T_µ) ≤ ρ(T_j) < 1.
b. If ρ(T_j) ≥ 1, ρ(T_µ) ≥ 1, and ρ(T_g) ≥ 1, then ρ(T_g) ≥ ρ(T_µ) ≥ ρ(T_j) ≥ 1.

Proof. The proof follows from Theorems 3.7 and 3.8.

4. Numerical Examples

Example 4.1. Consider the following linear system:

x_1 + 2x_2 - 2x_3 = 7
x_1 +  x_2 +  x_3 = 2
2x_1 + 2x_2 + x_3 = 5

The exact solution is (1, 2, -1)^t. The Gauss-Seidel method does not converge in this case, since ρ(T_g) = 2 > 1. On the contrary, the method (3.1) with µ = 0.15 converges, where ρ(T_µ) = 0.9378; we can choose any value of µ close to zero. The convergence is clearly slow, but it avoids the failure of the Gauss-Seidel method. See Table 1. The stopping criterion ||x - x^(k)|| ≤ 10^(-5) was used, the initial solution was taken to be the zero vector, the approximations are rounded, and the norm criterion is met at iteration 204.

Iteration k      1           2           3        ...      203        204
x_1           7.00000    10.33000     4.86970    ...    1.00000    1.00000
x_2           0.95000    -8.11450     1.66870    ...    2.00001    1.99999
x_3           2.61500    -9.17965    -0.72787    ...   -0.99999   -1.00001

Table 1. Solution of Example 4.1 using the method (3.1) with µ = 0.15.
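Example 4.1 is easy to verify with the sketches above. In the snippet below (ours; note that the paper measures the error against the exact solution, while fixed_point_iteration compares successive iterates, so the iteration count can differ slightly):

    A = np.array([[1.0, 2.0, -2.0],
                  [1.0, 1.0,  1.0],
                  [2.0, 2.0,  1.0]])
    b = np.array([7.0, 2.0, 5.0])

    T_g, _ = new_method_matrices(A, b, mu=1.0)      # Gauss-Seidel
    T_mu, c_mu = new_method_matrices(A, b, mu=0.15)
    print(spectral_radius(T_g))     # ~2.0    -> Gauss-Seidel diverges
    print(spectral_radius(T_mu))    # ~0.9378 -> method (3.1) converges
    x, k = fixed_point_iteration(T_mu, c_mu, np.zeros(3))
    print(x, k)                     # x close to (1, 2, -1) after roughly 200 iterations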

Example 4.2. Consider the following linear system:

2x_1 -  x_2 +  x_3 = -1
2x_1 + 2x_2 + 2x_3 = 4
-x_1 -  x_2 + 2x_3 = -5

The exact solution is (1, 2, -1)^t. The Jacobi method does not converge in this case, since ρ(T_j) = sqrt(5)/2 > 1. On the contrary, the method (3.1) with µ = 0.5 converges, where ρ(T_µ) = 0.7588; we can choose any value of µ close to one. We obtained an excellent approximation in 45 iterations. See Table 2. The stopping criterion ||x - x^(k)|| ≤ 10^(-5) was used, the initial solution was taken to be the zero vector, the approximations are rounded, and the norm criterion is met at iteration 45.

Iteration k      1           2           3        ...       44         45
x_1          -0.50000     1.65625     1.63086    ...     0.99999    1.00000
x_2           2.25000     3.48438     1.13379    ...     1.99999    2.00000
x_3          -2.06250    -0.77734    -0.52368    ...    -1.00000   -1.00000

Table 2. Solution of Example 4.2 using the method (3.1) with µ = 0.5.
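A similar check for Example 4.2 (our snippet, same helpers) reproduces ρ(T_j) = sqrt(5)/2 ≈ 1.1180 and ρ(T_µ) ≈ 0.7588 for µ = 0.5:

    A = np.array([[ 2.0, -1.0,  1.0],
                  [ 2.0,  2.0,  2.0],
                  [-1.0, -1.0,  2.0]])
    b = np.array([-1.0, 4.0, -5.0])

    T_j, _ = new_method_matrices(A, b, mu=0.0)      # Jacobi
    T_mu, c_mu = new_method_matrices(A, b, mu=0.5)
    print(spectral_radius(T_j))     # ~1.1180 = sqrt(5)/2 -> Jacobi diverges
    print(spectral_radius(T_mu))    # ~0.7588             -> method (3.1) converges
    x, k = fixed_point_iteration(T_mu, c_mu, np.zeros(3))
    print(x, k)                     # x close to (1, 2, -1) in about 45 iterations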

A NEW ITERATIVE METHOD 183 where ρt µ ) = 0.375, we can choose any value for µ in 0, 1). See Table 3, and Table 4. The stopping criterion x x k) 10 5 was used, and the initial solution was taken to be the zero vector. Notice that the coefficient matrix A is a strictly row diagonally dominant, symmetric positive definite, and it is an L-matrix. Iteration i x 1 2 3 11 12 1 1.25000 0.73438 0.97227 0.99998 0.99999 2-0.53125-0.05547-0.04955-0.00002-0.00001 3-1.53125-1.05547-1.04955-1.00002-1.00001 4 1.88906 1.90090 1.97434 1.99999 2.00000 Table 3. Solution of Example 4.3 using the method 3.1) with µ = 0.7. Iteration i x 1 2 3 17 18 1 1.25000 0.62500 1.06250 1.00000 0.99999 2-0.75000 0.12500-0.18750-0.00001 0.00000 3-1.75000-0.87500-1.18750-1.00001-1.00000 4 2.25000 1.62500 2.06250 2.00000 1.99999 Table 4. Solution of Example 4.3 using the Jacobi method. 5. Conclusions A new general iterative method, by linear combination of old and most recent calculated solutions, was introduced. It works as a general method where the Jacobi and Gauss-Seidel methods are special cases of it. We have proved some convergence properties. some numerical examples where presented to show the effectiveness of

5. Conclusions

A new general iterative method, based on a linear combination of the old and the most recently calculated solutions, was introduced. It works as a general method of which the Jacobi and Gauss-Seidel methods are special cases. We proved some convergence properties, and some numerical examples were presented to show the effectiveness of the new method. In the special case where the matrix A is an L-matrix, we showed theoretically and numerically that the new method converges faster than the Jacobi method when the spectral radii of both iteration matrices are less than one. When compared with the Gauss-Seidel method in the case of divergence, we saw that the divergence is more pronounced for the Gauss-Seidel method. When the Jacobi method converges, the new method can be used to accelerate the convergence. In special cases, when one of the two iterative methods, Jacobi or Gauss-Seidel, diverges, the new method can be used to obtain convergence.

References

[1] B. N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole Publishing Company, 1995.
[2] R. L. Burden and J. D. Faires, Numerical Analysis, Eighth Edition, Thomson Brooks/Cole, 2005.
[3] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7 (1986), 856-869.
[4] H. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 13 (1992), 631-644.
[5] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Press, New York, 1995.
[6] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, Inc., 1971.
[7] R. Oldenburger, Infinite powers of matrices and characteristic roots, Duke Math. J. 6 (1940), 357-361.
[8] G. Frobenius, Über Matrizen aus positiven Elementen, S.-B. Preuss. Akad. Wiss. Berlin (1908), 471-476.

Department of Mathematics and Statistics, Mutah University, P.O. Box 7, Al-Karak, Jordan
E-mail address: rabtah@mutah.edu.jo