The upper Jacobi and upper Gauss–Seidel type iterative methods for preconditioned linear systems


Applied Mathematics Letters 19 (2006) 1029–1036
www.elsevier.com/locate/aml

Zhuan-De Wang, Ting-Zhu Huang
School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu, Sichuan, PR China

Received 15 March 2004; received in revised form 21 October 2005; accepted 27 October 2005
Corresponding author. E-mail address: tzhuang@uestc.edu.cn (T.-Z. Huang).
© 2005 Elsevier Ltd. All rights reserved.

Abstract

The preconditioner for solving the linear system $Ax = b$ introduced in [D.J. Evans, M.M. Martins, M.E. Trigo, The AOR iterative method for new preconditioned linear systems, J. Comput. Appl. Math. 132 (2001)] is generalized. Results obtained in this paper show that the convergence rate of the Jacobi and Gauss–Seidel type methods can be increased by using the preconditioned method when $A$ is an M-matrix.

Keywords: Preconditioned iterative method; Upper Jacobi and upper Gauss–Seidel type method; Spectral radius

1. Introduction

For solving the linear system $Ax = b$, many preconditioners have been proposed [1-3,7-11]. In 2001 Evans et al. [1] proposed the preconditioner

$$
P = I + S = \begin{bmatrix}
1 & 0 & \cdots & 0 & -a_{1n} \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}
$$

and showed that the preconditioned AOR method is better than the original one. In this paper, we generalize the preconditioner as follows:

$$
P(\alpha) = I + S(\alpha) = \begin{bmatrix}
1 & 0 & \cdots & 0 & -\alpha a_{1n} \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}
\tag{1.1}
$$

and consider the convergence of the upper Jacobi and upper Gauss–Seidel type iterative methods for the preconditioned linear systems.
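Forming the preconditioned matrix $\tilde{A}(\alpha) = (I + S(\alpha))A$ is a one-line operation, since $S(\alpha)$ has a single nonzero entry. The sketch below is only an illustration of (1.1) (NumPy assumed; the small test matrix `A` is our own choice, not taken from the paper).

```python
import numpy as np

def precondition(A, alpha):
    """Return (I + S(alpha)) A, where S(alpha) has the single entry
    -alpha * a_{1n} in position (1, n); alpha = 1 recovers P of Evans et al. [1]."""
    n = A.shape[0]
    S = np.zeros((n, n))
    S[0, -1] = -alpha * A[0, -1]          # -alpha * a_{1n}
    return (np.eye(n) + S) @ A

# A small nonsingular M-matrix (unit diagonal, nonpositive off-diagonal entries)
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
print(precondition(A, alpha=0.9))
```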

2. Convergence of the upper Jacobi and upper Gauss–Seidel iterative methods

Consider the usual splitting of $A$, namely

$$A = D - L - U, \tag{2.1}$$

where $D$ is a diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. Throughout this paper we assume that $A$ is a nonsingular M-matrix, so, without loss of generality, we replace (2.1) by

$$A = I - L - U. \tag{2.2}$$

Applying $P(\alpha)$ of (1.1) we obtain the equivalent linear system $\tilde{A}(\alpha)x = \tilde{b}(\alpha)$ with

$$\tilde{A}(\alpha) = (I + S(\alpha))A \quad \text{and} \quad \tilde{b}(\alpha) = (I + S(\alpha))b, \tag{2.3}$$

where, if needed, we will write

$$\tilde{A}(\alpha) = \tilde{D}(\alpha) - \tilde{L}(\alpha) - \tilde{U}(\alpha), \tag{2.4}$$

where $\tilde{D}(\alpha)$ is diagonal and $\tilde{L}(\alpha)$, $\tilde{U}(\alpha)$ are strictly lower and strictly upper triangular matrices. By the equalities above we have $\tilde{A}(\alpha) = I - L - U + S(\alpha) - S(\alpha)L - S(\alpha)U$, with $S(\alpha)U = 0$. The elements $\tilde{a}_{ij}(\alpha)$ of $\tilde{A}(\alpha)$ are given by the expression

$$
\tilde{a}_{ij}(\alpha) =
\begin{cases}
a_{ij}, & 2 \le i \le n, \\
(1-\alpha)a_{1n}, & i = 1,\ j = n, \\
a_{1j} - \alpha a_{1n}a_{nj}, & i = 1,\ j \ne n.
\end{cases}
$$

Requesting that $\tilde{a}_{1n}(\alpha) = (1-\alpha)a_{1n} \le 0$, i.e. $\alpha \le 1$, the nonpositivity of all the off-diagonal elements is preserved, and so is the Z-matrix character of $\tilde{A}(\alpha)$; moreover $\tilde{a}_{11}(\alpha) = 1 - \alpha a_{1n}a_{n1} > 0$. For $\alpha \in [0,1]$, define the matrices

$$D_\alpha := \operatorname{diag}(\alpha a_{1n}a_{n1}, 0, \ldots, 0) \tag{2.5}$$

and

$$S(\alpha)L = (P(\alpha) - I)L := D_\alpha + U_\alpha, \tag{2.6}$$

where $U_\alpha$ is the strictly upper triangular component of $S(\alpha)L$. By the fact that $S(\alpha)U = 0$ and the preceding discussion, the three matrices on the right side of (2.4) are given by

$$\tilde{D}(\alpha) = I - D_\alpha, \qquad \tilde{L}(\alpha) = L, \qquad \tilde{U}(\alpha) = U - S(\alpha) + U_\alpha. \tag{2.7}$$

The diagonal elements of $\tilde{D}(\alpha)$ are positive while those of $\tilde{L}(\alpha)$ and $\tilde{U}(\alpha)$ are non-negative.

Definition 2.1. Let $B$ be any $n \times n$ matrix with zero diagonal entries. We call $B = U + L$ and $\mathcal{L}_\omega := (I - \omega U)^{-1}\{\omega L + (1-\omega)I\}$ the upper Jacobi and the upper successive overrelaxation (SOR) matrix, respectively. In particular, we call $\mathcal{L}_1$ the upper Gauss–Seidel matrix.
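Definition 2.1 translates directly into a few lines of code. The following sketch (NumPy assumed; the function and variable names are ours) extracts $L$ and $U$ from a matrix with unit diagonal and forms the upper Jacobi matrix $B = U + L$ and the upper SOR matrix $\mathcal{L}_\omega$; taking $\omega = 1$ gives the upper Gauss–Seidel matrix.

```python
import numpy as np

def upper_jacobi_and_sor(A, omega=1.0):
    """For A = I - L - U (unit diagonal), return B = U + L and
    L_omega = (I - omega*U)^{-1} (omega*L + (1 - omega)*I)."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)          # strictly lower triangular part, negated
    U = -np.triu(A, 1)           # strictly upper triangular part, negated
    B = U + L                                               # upper Jacobi matrix
    L_omega = np.linalg.solve(I - omega * U, omega * L + (1 - omega) * I)
    return B, L_omega

rho = lambda M: max(abs(np.linalg.eigvals(M)))
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
B, H = upper_jacobi_and_sor(A, omega=1.0)   # H is the upper Gauss-Seidel matrix L_1
print(rho(B), rho(H))                       # both are < 1 for this nonsingular M-matrix
```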

For the needs of one of our main statements the following splittings will be considered:

$$
\begin{aligned}
\tilde{A}(\alpha) &= M(\alpha) - N(\alpha) = (I + S(\alpha)) - (I + S(\alpha))(L + U),\\
\tilde{A}(\alpha) &= M^*(\alpha) - N^*(\alpha) = I - (U - S(\alpha) + U_\alpha + D_\alpha + L),\\
\tilde{A}(\alpha) &= \tilde{M}(\alpha) - \tilde{N}(\alpha) = (I - D_\alpha) - (U - S(\alpha) + U_\alpha + L).
\end{aligned}
\tag{2.8}
$$

Below we define the upper Jacobi type iteration matrices associated with the above splittings:

$$
\begin{aligned}
B(\alpha) = B &:= M^{-1}(\alpha)N(\alpha) = U + L,\\
B^*(\alpha) &:= {M^*}^{-1}(\alpha)N^*(\alpha) = I - (I + S(\alpha))A = U - S(\alpha) + U_\alpha + D_\alpha + L,\\
\tilde{B}(\alpha) &:= \tilde{M}^{-1}(\alpha)\tilde{N}(\alpha) = (I - D_\alpha)^{-1}\bigl(I - (I + S(\alpha))A - D_\alpha\bigr) = (I - D_\alpha)^{-1}(U - S(\alpha) + U_\alpha + L),
\end{aligned}
\tag{2.9}
$$

as well as the splittings that define the upper Gauss–Seidel type matrices:

$$
\begin{aligned}
\tilde{A}(\alpha) &= M(\alpha) - N(\alpha) = \bigl(I - (U - S(\alpha))\bigr) - (I + S(\alpha))L,\\
\tilde{A}(\alpha) &= M^*(\alpha) - N^*(\alpha) = \bigl((I - (U - S(\alpha))) - U_\alpha\bigr) - (D_\alpha + L),\\
\tilde{A}(\alpha) &= \tilde{M}(\alpha) - \tilde{N}(\alpha) = \bigl((I - (U - S(\alpha))) - U_\alpha - D_\alpha\bigr) - L,
\end{aligned}
\tag{2.10}
$$

and

$$
\begin{aligned}
H(\alpha) = H &:= (I - U)^{-1}L,\\
H^*(\alpha) &:= \bigl((I - (U - S(\alpha))) - U_\alpha\bigr)^{-1}(D_\alpha + L),\\
\tilde{H}(\alpha) &:= \bigl((I - (U - S(\alpha))) - U_\alpha - D_\alpha\bigr)^{-1}L.
\end{aligned}
\tag{2.11}
$$

Lemma 2.1. Let the upper Jacobi matrix $B := U + L$ be a non-negative $n \times n$ matrix with zero diagonal entries, and let $\mathcal{L}_1$ be the upper Gauss–Seidel matrix, the special case $\omega = 1$ of $\mathcal{L}_\omega$. Then one and only one of the following relations is valid:

1. $\rho(B) = \rho(\mathcal{L}_1) = 0$;
2. $0 < \rho(\mathcal{L}_1) < \rho(B) < 1$;
3. $1 = \rho(B) = \rho(\mathcal{L}_1)$;
4. $1 < \rho(B) < \rho(\mathcal{L}_1)$.

(Remark: Thus the upper Jacobi matrix $B$ and the upper Gauss–Seidel matrix $\mathcal{L}_1$ are either both convergent, or both divergent.)

Proof. Similar to the proof of the Stein–Rosenberg theorem in [4-6], the proof of this lemma is easy. □

Theorem 2.1. Let $A$ be a nonsingular M-matrix.

(a) For any $\alpha \in [0,1]$ we have the following: there exists $y \in \mathbb{R}^n$, with $y \ge 0$, such that

$$B^*(\alpha)y \le By; \tag{2.12}$$

moreover,

$$\rho(\tilde{B}(\alpha)) \le \rho(B^*(\alpha)) \le 1, \tag{2.13}$$

$$\rho(\tilde{H}(\alpha)) \le \rho(H^*(\alpha)) \le \rho(H) < 1, \tag{2.14}$$

$$\rho(\tilde{H}(\alpha)) \le \rho(\tilde{B}(\alpha)), \quad \rho(H^*(\alpha)) \le \rho(B^*(\alpha)), \quad \rho(H) < \rho(B) < 1. \tag{2.15}$$

(b) Suppose, in addition, that $A$ is irreducible. Then:

(i) For $\alpha \in [0,1)$, provided that $\alpha \ne 0$, the matrices $\tilde{B}(\alpha)$, $B^*(\alpha)$ and $B$ are irreducible and all the inequalities in (2.13)-(2.15) are strict. Moreover, there holds

$$\rho(B^*(\alpha)) \le \rho(B); \tag{2.16}$$

(ii) For $\alpha = 1$, the $(n-1)\times(n-1)$ matrices $\tilde{B}_1(1)$ and $B^*_1(1)$ of the top left corner of $\tilde{B}(1)$ and $B^*(1)$ are irreducible and all the inequalities in (2.13)-(2.16) are strict.
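Before turning to the proof, we note that the iteration matrices in (2.9) and (2.11) are easy to assemble numerically, which makes the inequalities of Theorem 2.1 simple to observe on examples. The sketch below follows our reading of (2.5)-(2.11) (NumPy assumed; the test matrix is our own, not from the paper).

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

def iteration_matrices(A, alpha):
    """B*(alpha), B~(alpha), H*(alpha), H~(alpha) of (2.9) and (2.11),
    for A = I - L - U with unit diagonal."""
    n = A.shape[0]
    I = np.eye(n)
    L, U = -np.tril(A, -1), -np.triu(A, 1)
    S = np.zeros((n, n)); S[0, -1] = -alpha * A[0, -1]       # S(alpha)
    SL = S @ L                                               # = D_alpha + U_alpha
    D_a, U_a = np.diag(np.diag(SL)), np.triu(SL, 1)
    B_star  = U - S + U_a + D_a + L
    B_tilde = np.linalg.solve(I - D_a, U - S + U_a + L)
    H_star  = np.linalg.solve(I - (U - S) - U_a, D_a + L)
    H_tilde = np.linalg.solve(I - (U - S) - U_a - D_a, L)
    return B_star, B_tilde, H_star, H_tilde

A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
L, U = -np.tril(A, -1), -np.triu(A, 1)
B, H = U + L, np.linalg.solve(np.eye(3) - U, L)              # unpreconditioned matrices
Bs, Bt, Hs, Ht = iteration_matrices(A, alpha=0.9)
print(rho(Bt), rho(Bs), rho(B))   # ordered as in (2.13) and (2.16)
print(rho(Ht), rho(Hs), rho(H))   # ordered as in (2.14)
```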

Proof. (a) For (2.12): To prove (2.12) we need the expressions of the non-negative elements of the two Jacobi iteration matrices involved. Below we give the elements of all three matrices in (2.9):

$$b_{ii} = 0, \quad i \in N, \qquad b_{ij} = -a_{ij}, \quad i, j \in N,\ j \ne i, \tag{2.17}$$

$$
\begin{aligned}
b^*_{11}(\alpha) &= \alpha a_{1n}a_{n1} = \alpha b_{1n}b_{n1},\\
b^*_{1j}(\alpha) &= \alpha a_{1n}a_{nj} - a_{1j} = \alpha b_{1n}b_{nj} + b_{1j}, \quad 2 \le j \le n-1,\\
b^*_{1n}(\alpha) &= (\alpha - 1)a_{1n} = (1-\alpha)b_{1n},\\
b^*_{ii}(\alpha) &= 0, \quad 2 \le i \le n, \qquad b^*_{ij}(\alpha) = -a_{ij} = b_{ij}, \quad 2 \le i \le n,\ i \ne j,
\end{aligned}
\tag{2.18}
$$

and

$$
\begin{aligned}
\tilde{b}_{ii}(\alpha) &= 0, \quad i \in N,\\
\tilde{b}_{1j}(\alpha) &= \frac{\alpha a_{1n}a_{nj} - a_{1j}}{1 - \alpha a_{1n}a_{n1}} = \frac{\alpha b_{1n}b_{nj} + b_{1j}}{1 - \alpha b_{1n}b_{n1}}, \quad 2 \le j \le n-1,\\
\tilde{b}_{1n}(\alpha) &= \frac{(\alpha - 1)a_{1n}}{1 - \alpha a_{1n}a_{n1}} = \frac{(1-\alpha)b_{1n}}{1 - \alpha b_{1n}b_{n1}},\\
\tilde{b}_{ij}(\alpha) &= -a_{ij} = b_{ij}, \quad 2 \le i \le n,\ i \ne j.
\end{aligned}
\tag{2.19}
$$

For the non-negative Jacobi iteration matrix $B$ there exists a non-negative vector $y$ such that $By = \rho(B)y$. Equating the first rows of the two vectors, and replacing the elements $b_{1j}$ of $B$ in terms of the elements $b^*_{1j}(\alpha)$ of $B^*(\alpha)$ using (2.17) and (2.18), we successively obtain

$$
\begin{aligned}
\rho(B)y_1 &= \sum_{j=2}^{n} b_{1j}y_j = b_{1n}y_n + \sum_{j=2}^{n-1} b_{1j}y_j\\
&= \bigl(b^*_{1n}(\alpha) + \alpha b_{1n}\bigr)y_n + \sum_{j=2}^{n-1}\bigl(b^*_{1j}(\alpha) - \alpha b_{1n}b_{nj}\bigr)y_j\\
&= \bigl(b^*_{1n}(\alpha) + \alpha b_{1n}\bigr)y_n + \sum_{j=2}^{n-1}\bigl(b^*_{1j}(\alpha) - \alpha b_{1n}b_{nj}\bigr)y_j + b^*_{11}(\alpha)y_1 - b^*_{11}(\alpha)y_1\\
&= \sum_{j=1}^{n} b^*_{1j}(\alpha)y_j - \alpha b_{1n}\sum_{j=2}^{n-1} b_{nj}y_j + \alpha b_{1n}y_n - \alpha b_{1n}b_{n1}y_1\\
&= \sum_{j=1}^{n} b^*_{1j}(\alpha)y_j - \alpha b_{1n}\sum_{j=1}^{n-1} b_{nj}y_j + \alpha b_{1n}y_n.
\end{aligned}
\tag{2.20}
$$

By the fact that $\rho(B)y_n = \sum_{j=1}^{n-1} b_{nj}y_j$, and replacing in (2.20), we have

$$\rho(B)y_1 = \sum_{j=1}^{n} b^*_{1j}(\alpha)y_j + \alpha b_{1n}\Bigl(\frac{1}{\rho(B)} - 1\Bigr)\sum_{j=1}^{n-1} b_{nj}y_j. \tag{2.21}$$

Since the second term of the sum in (2.21) is non-negative,

$$\sum_{j=1}^{n} b^*_{1j}(\alpha)y_j \le \rho(B)y_1 = \sum_{j=1}^{n} b_{1j}y_j. \tag{2.22}$$

Then, since the remaining rows of $B^*(\alpha)$ and $B$ coincide (see (2.18)), (2.12) follows from (2.22).
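Inequality (2.12) can be checked numerically by computing a Perron eigenvector $y$ of the non-negative matrix $B$ and comparing $B^*(\alpha)y$ with $By$ componentwise. A small sketch of such a check (NumPy assumed; the matrices are built exactly as in the earlier sketches):

```python
import numpy as np

A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
n, alpha = A.shape[0], 0.9
L, U = -np.tril(A, -1), -np.triu(A, 1)
B = U + L                                     # non-negative upper Jacobi matrix
S = np.zeros((n, n)); S[0, -1] = -alpha * A[0, -1]
SL = S @ L
B_star = U - S + np.triu(SL, 1) + np.diag(np.diag(SL)) + L

# Perron eigenpair of B: eigenvalue of largest modulus with a non-negative eigenvector
vals, vecs = np.linalg.eig(B)
k = np.argmax(np.abs(vals))
y = np.real(vecs[:, k])
y = y if (y >= 0).all() else -y               # fix the sign of the eigenvector
print(np.all(B_star @ y <= B @ y + 1e-12))    # inequality (2.12): True
```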

For (2.13): For a Z-matrix $A$, the statement "$A$ is a nonsingular M-matrix" is equivalent to the statement "there exists a positive vector $y > 0$ in $\mathbb{R}^n$ such that $Ay > 0$" (see Theorem 6.2.3, condition $I_{27}$, of [6]). But $P(\alpha) = I + S(\alpha) \ge 0$ implies that $\tilde{A}(\alpha)y = P(\alpha)Ay > 0$. Consequently $\tilde{A}(\alpha)$, which is a Z-matrix, is a nonsingular M-matrix. So the last two splittings in (2.8) are regular ones, because ${M^*}^{-1}(\alpha) = I^{-1} = I \ge 0$, $N^*(\alpha) \ge 0$ and $\tilde{M}^{-1}(\alpha) = (I - D_\alpha)^{-1} \ge 0$, $\tilde{N}(\alpha) \ge 0$, and so they are convergent. For a Z-matrix, the statement "$A$ is a nonsingular M-matrix" is equivalent to the statement "all the principal minors of $A$ are positive" (see Theorem 6.2.2, condition $(A_1)$, of [6]). So we have $0 \le a_{1n}a_{n1} < 1$. Thus $0 \le \alpha a_{1n}a_{n1} < 1$, $0 < 1 - \alpha a_{1n}a_{n1} \le 1$ and ${M^*}^{-1}(\alpha) \le \tilde{M}^{-1}(\alpha)$, and the left inequality in (2.13) is true.

For (2.14): Consider the splittings (2.10) that define the iteration matrices in (2.11). The matrix $M(\alpha) = I - (U - S(\alpha))$ of the first splitting is upper triangular with units on the diagonal; its element in the first row and the last column is $(1-\alpha)a_{1n}$, and the remaining ones are those of the strictly upper triangular part of $A$. So all the off-diagonal elements of $M(\alpha)$ are nonpositive, and therefore $M(\alpha)$ is a nonsingular M-matrix, which implies that $M^{-1}(\alpha) \ge 0$. Also $(I + S(\alpha))L \ge 0$, so the first splitting in (2.10) is a regular one. $M^*(\alpha)$ can be written as $M^*(\alpha) = M(\alpha) - U_\alpha = M(\alpha)(I - M^{-1}(\alpha)U_\alpha)$, and setting $\bar{U} = M^{-1}(\alpha)U_\alpha \ge 0$ we have

$${M^*}^{-1}(\alpha) = (I - \bar{U})^{-1}M^{-1}(\alpha) = (I + \bar{U} + \bar{U}^2 + \cdots + \bar{U}^{n-1})M^{-1}(\alpha) \ge 0. \tag{2.23}$$

Since $N^*(\alpha) = D_\alpha + L \ge 0$, the second splitting in (2.10) is also a regular one. The last splitting is a regular one since $\tilde{A}(\alpha)$ is a nonsingular M-matrix and so is $\tilde{M}(\alpha)$, the latter being derived from the former by setting some off-diagonal elements equal to zero, and $\tilde{N}(\alpha) = L \ge 0$. The inequalities in (2.14) are established because we notice that

$$N(\alpha) = U_\alpha + D_\alpha + L \ \ge\ N^*(\alpha) = D_\alpha + L \ \ge\ \tilde{N}(\alpha) = L.$$

For (2.15): Since $A$ is a nonsingular M-matrix, the rightmost inequality is a straightforward implication of Lemma 2.1, as was mentioned before. The other two inequalities in (2.15) are implied directly by the facts that $\tilde{A}(\alpha)$ is a nonsingular M-matrix and that the last two pairs of splittings in (2.8) and (2.10), from which the four matrices $\tilde{H}(\alpha)$, $\tilde{B}(\alpha)$, $H^*(\alpha)$ and $B^*(\alpha)$ are produced, are regular ones with

$$U - S(\alpha) + U_\alpha + L \ \ge\ L \qquad \text{and} \qquad U - S(\alpha) + U_\alpha + D_\alpha + L \ \ge\ D_\alpha + L.$$
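The M-matrix characterizations invoked in this part of the proof are easy to test in code; in particular, a Z-matrix is a nonsingular M-matrix exactly when it is inverse-nonnegative (one of the equivalent conditions collected in [6]). A minimal sketch (NumPy assumed; the example matrix is our own) checks this for $A$ and for $\tilde{A}(\alpha) = P(\alpha)A$:

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-12):
    """Z-matrix pattern plus inverse-nonnegativity (one of the equivalent
    characterizations of nonsingular M-matrices collected in [6])."""
    off_diag = A - np.diag(np.diag(A))
    if (off_diag > tol).any():                 # off-diagonal entries must be <= 0
        return False
    try:
        return bool((np.linalg.inv(A) >= -tol).all())
    except np.linalg.LinAlgError:
        return False

A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
alpha = 0.9
P = np.eye(3); P[0, -1] = -alpha * A[0, -1]    # P(alpha) = I + S(alpha)
print(is_nonsingular_M_matrix(A), is_nonsingular_M_matrix(P @ A))   # True True
```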

(b): For $\alpha \in [0,1)$, $\tilde{A}(\alpha)$ is irreducible because it inherits the nonzero structure of the irreducible matrix $A$.

(i) of (b): For (2.13)-(2.16): By virtue of the irreducibility of the corresponding matrices involved, the theorems used previously can also be applied to prove the strict inequalities in (2.13)-(2.15). Similarly to Theorem 2.2 of [7], (2.16) is easily proved.

(ii) of (b): We consider the block partitions

$$
A = \begin{bmatrix} A_1 & a_h \\ a_v^T & 1 \end{bmatrix}, \qquad
P(1) = \begin{bmatrix} I_1 & -a_h \\ 0_{n-1}^T & 1 \end{bmatrix} \qquad \text{and} \qquad
\tilde{A}(1) = \begin{bmatrix} \tilde{A}_1(1) & 0_{n-1} \\ a_v^T & 1 \end{bmatrix}. \tag{2.24}
$$

Then the associated block upper Jacobi and upper Gauss–Seidel iteration matrices will be

$$
B = \begin{bmatrix} B_1 & -a_h \\ -a_v^T & 0 \end{bmatrix}, \qquad
B^*(1) = \begin{bmatrix} B^*_1(1) & 0_{n-1} \\ -a_v^T & 0 \end{bmatrix}, \qquad
\tilde{B}(1) = \begin{bmatrix} \tilde{B}_1(1) & 0_{n-1} \\ -a_v^T & 0 \end{bmatrix} \tag{2.25}
$$

and

$$
H = \begin{bmatrix} H_1 & 0_{n-1} \\ -a_v^T & 0 \end{bmatrix}, \qquad
H^*(1) = \begin{bmatrix} H^*_1(1) & 0_{n-1} \\ -a_v^T & 0 \end{bmatrix}, \qquad
\tilde{H}(1) = \begin{bmatrix} \tilde{H}_1(1) & 0_{n-1} \\ -a_v^T & 0 \end{bmatrix}. \tag{2.26}
$$

For (2.13)-(2.16): By studying the structure of the matrices $B_1$, $B^*_1(1)$, $\tilde{B}_1(1)$, $H_1$, $H^*_1(1)$ and $\tilde{H}_1(1)$, we can find out that the associated irreducibility properties hold for these matrices. So the theorems used previously can also be applied in each case to prove the strict inequalities in (2.13)-(2.15). Similarly to Theorem 2.2 of [7], (2.16) is easily proved. □

Lemma 2.2 ([8]). Let $A_1, A_2 \in \mathbb{R}^{n,n}$ and $A_i = M_i - N_i$, $i = 1, 2$, be weak splittings ($T_i = M_i^{-1}N_i \ge 0$, $i = 1, 2$). If the Perron eigenvector $z_2\ (\ge 0)$ of $T_2$ satisfies $T_1 z_2 \le T_2 z_2$, then $\rho(T_1) \le \rho(T_2)$.

Theorem 2.2. Let $A$ be a nonsingular M-matrix. Then, for $0 \le \alpha \le \alpha' \le 1$, we have

$$\rho(\tilde{B}(\alpha')) \le \rho(\tilde{B}(\alpha)) \qquad \text{and} \qquad \rho(\tilde{H}(\alpha')) \le \rho(\tilde{H}(\alpha)). \tag{2.27}$$

Proof. Note that the upper Jacobi and upper Gauss–Seidel iteration matrices associated with any $A = D - L - U$ are the same as those associated with $D^{-1}A = I - D^{-1}L - D^{-1}U$. Observe that, by virtue of Lemma 2.2, the nature of the vector $y$ in (2.12), and (2.13), we have $\rho(\tilde{B}(\alpha)) \le \rho(B)$, and by (2.14), $\rho(\tilde{H}(\alpha)) \le \rho(H)$. Therefore the Jacobi and the Gauss–Seidel iterative methods associated with a preconditioned matrix $\tilde{A}(\alpha)$ are no worse than the corresponding ones for the unpreconditioned matrix $A$. Since $\tilde{D}^{-1}\tilde{A}$ has the same Jacobi and Gauss–Seidel iteration matrices as $\tilde{A}$, its elements, denoted by the same symbols as those of $\tilde{A}$, are

$$
\tilde{a}_{ii} = a_{ii} = 1, \quad 1 \le i \le n, \qquad \tilde{a}_{ij} = a_{ij}, \quad i \ne 1, \qquad
\tilde{a}_{1j} = \frac{a_{1j} - \alpha a_{1n}a_{nj}}{1 - \alpha a_{1n}a_{n1}}, \quad 2 \le j \le n-1, \qquad
\tilde{a}_{1n} = \frac{(1-\alpha)a_{1n}}{1 - \alpha a_{1n}a_{n1}}. \tag{2.28}
$$

Consider $\beta$, defined by $\beta = 0$ if $\alpha = 1$ and $\beta = \dfrac{\alpha' - \alpha}{1 - \alpha}$ if $\alpha \ne 1$. Apply to $\tilde{D}^{-1}\tilde{A}$ the preconditioner $P(\beta)$. The upper Jacobi and the upper Gauss–Seidel iterative methods associated with the new preconditioned matrix $P(\beta)\tilde{D}^{-1}\tilde{A}$ will be no worse than the ones corresponding to $\tilde{D}^{-1}\tilde{A}$. The elements of the diagonally normalized new matrix, which we denote by $\hat{a}_{ij}$, are given by the same expressions as those in (2.28), with the $a_{ij}$ replaced by $\tilde{a}_{ij}$ and $\alpha$ by $\beta$; that is,

$$
\hat{a}_{ii} = 1, \quad 1 \le i \le n, \qquad \hat{a}_{ij} = \tilde{a}_{ij}, \quad i \ne 1, \qquad
\hat{a}_{1j} = \frac{\tilde{a}_{1j} - \beta \tilde{a}_{1n}\tilde{a}_{nj}}{1 - \beta \tilde{a}_{1n}\tilde{a}_{n1}}, \quad 2 \le j \le n-1, \qquad
\hat{a}_{1n} = \frac{(1-\beta)\tilde{a}_{1n}}{1 - \beta \tilde{a}_{1n}\tilde{a}_{n1}}. \tag{2.29}
$$

Substituting in (2.29) the $\tilde{a}_{ij}$ and $\beta$, after some simple calculation we obtain

$$
\hat{a}_{ii} = 1, \quad 1 \le i \le n, \qquad \hat{a}_{ij} = \tilde{a}_{ij}, \quad i \ne 1, \qquad
\hat{a}_{1j} = \frac{a_{1j} - \alpha' a_{1n}a_{nj}}{1 - \alpha' a_{1n}a_{n1}}, \quad 2 \le j \le n-1, \qquad
\hat{a}_{1n} = \frac{(1-\alpha')a_{1n}}{1 - \alpha' a_{1n}a_{n1}},
$$

which proves (2.27). □
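Theorem 2.2 predicts that $\rho(\tilde{B}(\alpha))$ and $\rho(\tilde{H}(\alpha))$ do not increase as $\alpha$ grows on $[0,1]$. A short sweep over $\alpha$ (NumPy assumed; the test matrix is our own) illustrates this monotonicity:

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

def rho_tilde(A, alpha):
    """Spectral radii of B~(alpha) and H~(alpha) for A = I - L - U."""
    n = A.shape[0]
    I = np.eye(n)
    L, U = -np.tril(A, -1), -np.triu(A, 1)
    S = np.zeros((n, n)); S[0, -1] = -alpha * A[0, -1]
    SL = S @ L
    D_a, U_a = np.diag(np.diag(SL)), np.triu(SL, 1)
    B_tilde = np.linalg.solve(I - D_a, U - S + U_a + L)
    H_tilde = np.linalg.solve(I - (U - S) - U_a - D_a, L)
    return rho(B_tilde), rho(H_tilde)

A = np.array([[ 1.0, -0.2, -0.3],
              [-0.1,  1.0, -0.4],
              [-0.3, -0.2,  1.0]])
for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(a, rho_tilde(A, a))     # both radii are non-increasing in alpha (Theorem 2.2)
```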

3. Numerical examples

Fig. 1. Interior mesh point of the five-point difference approximation.

Example 1. In order to obtain the numerical solution of the Laplace equation

$$\frac{\partial^2 u(x,y)}{\partial x^2} + \frac{\partial^2 u(x,y)}{\partial y^2} = u_{xx}(x,y) + u_{yy}(x,y) = 0,$$

under a uniform square mesh of five-point difference approximations, with the interior mesh points as shown in Fig. 1, we can obtain the linear system $Ax = b$, where $A$ is the corresponding five-point difference matrix. If we use the preconditioned method, for $\alpha' = 0.9$ we have $\rho(B(\alpha')) = 0.7239$ and $\rho(\tilde{B}(\alpha')) = 0.7176$. Analogously, for the upper Gauss–Seidel type matrices we obtain $\rho(H(\alpha')) = 0.5241$. For $\alpha = 0.5$ we obtain $\rho(\tilde{H}(\alpha')) < \rho(\tilde{H}(\alpha))$ and $\rho(\tilde{B}(\alpha')) < \rho(\tilde{B}(\alpha))$, in accordance with Theorem 2.2.
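The exact coefficient matrix of Example 1 is not reproduced above, but a standard five-point difference matrix for the Laplace equation on a uniform square mesh, normalized to unit diagonal, can be generated as in the following sketch (NumPy assumed; the grid size `k = 4` is our choice, not necessarily the one used in the paper):

```python
import numpy as np

def five_point_laplacian(k):
    """Five-point difference matrix on a k-by-k interior grid,
    scaled to unit diagonal so that A = I - L - U."""
    n = k * k
    A = np.eye(n)
    for i in range(k):
        for j in range(k):
            p = i * k + j
            if j > 0:     A[p, p - 1] = -0.25
            if j < k - 1: A[p, p + 1] = -0.25
            if i > 0:     A[p, p - k] = -0.25
            if i < k - 1: A[p, p + k] = -0.25
    return A

rho = lambda M: max(abs(np.linalg.eigvals(M)))
A = five_point_laplacian(4)                        # 16 interior mesh points (our choice)
n = A.shape[0]
L, U = -np.tril(A, -1), -np.triu(A, 1)
B = U + L                                          # upper Jacobi matrix
H = np.linalg.solve(np.eye(n) - U, L)              # upper Gauss-Seidel matrix
print(rho(B), rho(H))
```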

Example 2. Let the coefficient matrix $A$ of the linear system $Ax = b$ be the $n \times n$ matrix, taken from [11], with unit diagonal and off-diagonal entries $-q$, $-r$ and $-s$, where $q = p/n$, $r = p/(n+1)$ and $s = p/(n+2)$. Here we let $n = 10$ and $p = 1$. If we use the preconditioned method, for $\alpha' = 0.9$ we have $\rho(B(\alpha')) = 0.8227$, and analogously, for the upper Gauss–Seidel type matrices, $\rho(H(\alpha')) = 0.6804$. For $\alpha = 0.5$ we again obtain $\rho(\tilde{H}(\alpha')) < \rho(\tilde{H}(\alpha))$ and $\rho(\tilde{B}(\alpha')) < \rho(\tilde{B}(\alpha))$.

Acknowledgement

The second author was supported by the NCET in universities of China and the foundation of the national key laboratory, Beijing Institute of Applied Physics and Computational Mathematics.

References

[1] D.J. Evans, M.M. Martins, M.E. Trigo, The AOR iterative method for new preconditioned linear systems, J. Comput. Appl. Math. 132 (2001).
[2] A. Hadjidimos, Accelerated overrelaxation method, Math. Comp. 32 (1978).
[3] A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991).
[4] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 2000.
[5] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, London, 1971.
[6] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979; reprinted by SIAM, Philadelphia, PA, 1994.
[7] J.P. Milaszewicz, Improving Jacobi and Gauss–Seidel iterations, Linear Algebra Appl. 93 (1987).
[8] A. Hadjidimos, D. Noutsos, M. Tzoumas, More on modifications and improvements of classical iterative schemes for M-matrices, Linear Algebra Appl. 364 (2003).
[9] W. Li, W. Sun, Modified Gauss–Seidel type methods and Jacobi type methods for Z-matrices, Linear Algebra Appl. 317 (2000).
[10] T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving the Gauss–Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997).
[11] M. Usui, H. Niki, T. Kohno, Adaptive Gauss–Seidel method for linear systems, Int. J. Comput. Math. 51 (1994).
