Computers and Mathematics with Applications. Convergence analysis of the preconditioned Gauss-Seidel method for H-matrices

Computers and Mathematics with Applications 56 (2008)

Contents lists available at ScienceDirect
Computers and Mathematics with Applications
journal homepage: www.elsevier.com/locate/camwa

Convergence analysis of the preconditioned Gauss-Seidel method for H-matrices

Qingbing Liu a,b, Guoliang Chen a,*, Jing Cai a,c

a Department of Mathematics, East China Normal University, Shanghai, PR China
b Department of Mathematics, Zhejiang Wanli University, Ningbo, PR China
c School of Science, Huzhou Teachers College, Huzhou, PR China

This project was granted financial support by the Shanghai Science and Technology Committee and the Shanghai Priority Academic Discipline Foundation (no. ).
* Corresponding author. E-mail addresses: lqb_2008@hotmail.com (Q. Liu), glchen@math.ecnu.edu.cn (G. Chen).

ARTICLE INFO

Article history:
Received 8 August 2011
Received in revised form 25 February 2012
Accepted 5 March 2012

Keywords:
H-matrix
Preconditioner
Preconditioned iterative method
Gauss-Seidel method
H-splitting

ABSTRACT

In 1997, Kohno et al. [Toshiyuki Kohno, Hisashi Kotakemori, Hiroshi Niki, Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997)] proved that the convergence rate of the preconditioned Gauss-Seidel method for irreducibly diagonally dominant Z-matrices with a preconditioner $I + S_\alpha$ is superior to that of the basic iterative method. In this paper, we present a new preconditioner $I + K_\beta$ which is different from the one given by Kohno et al. and prove convergence theory for the two preconditioned iterative methods when the coefficient matrix is an H-matrix. Meanwhile, two novel sufficient conditions for guaranteeing the convergence of the preconditioned iterative methods are given.

Crown Copyright 2008. Published by Elsevier Ltd. All rights reserved.

1. Introduction

We consider the linear system

$$Ax = b, \qquad (1)$$

where $A$ is a complex $n \times n$ matrix and $x$, $b$ are $n$-dimensional vectors. For any splitting $A = M - N$ with a nonsingular matrix $M$, the basic iterative method for solving the linear system (1) is

$$x_{i+1} = M^{-1} N x_i + M^{-1} b, \quad i = 0, 1, 2, \ldots. \qquad (2)$$

Some techniques of preconditioning which improve the rate of convergence of these iterative methods have been developed. Let us consider a preconditioned system of (1),

$$PAx = Pb,$$

where $P$ is a nonsingular matrix. The corresponding basic iterative method is given in general by

$$x_{i+1} = M_P^{-1} N_P x_i + M_P^{-1} P b, \quad i = 0, 1, 2, \ldots,$$

where $PA = M_P - N_P$ is a splitting of $PA$.
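As a concrete illustration of the splitting iteration (2), the following minimal Python/NumPy sketch runs $x_{i+1} = M^{-1}(N x_i + b)$ for the classical Gauss-Seidel splitting; the 3-by-3 test system, tolerance and iteration cap are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the basic splitting iteration (2): x_{k+1} = M^{-1}(N x_k + b).
# The 3x3 test system, tolerance and iteration cap are illustrative choices.
import numpy as np

def splitting_iteration(M, N, b, x0, tol=1e-12, maxit=500):
    A = M - N
    x = x0.astype(float)
    for k in range(maxit):
        x = np.linalg.solve(M, N @ x + b)      # one sweep of the splitting method
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x, k + 1

# Gauss-Seidel: M is the lower triangular part of A (including the diagonal).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
M = np.tril(A)
x, its = splitting_iteration(M, M - A, b, np.zeros(3))
print(x, its, np.allclose(A @ x, b))
```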

In 1997, Kohno et al. [1] proposed a general method for improving the preconditioned Gauss-Seidel method with the preconditioner $P = I + S_\alpha$, if $A$ is a nonsingular diagonally dominant Z-matrix satisfying some conditions, where

$$S_\alpha = \begin{pmatrix}
0 & -\alpha_1 a_{1,2} & & & \\
  & 0 & -\alpha_2 a_{2,3} & & \\
  & & \ddots & \ddots & \\
  & & & 0 & -\alpha_{n-1} a_{n-1,n} \\
  & & & & 0
\end{pmatrix}.$$

They showed numerically that the preconditioned Gauss-Seidel method is superior to the original iterative method if the parameters $\alpha_i \geq 0$ ($i = 1, 2, \ldots, n-1$) are chosen appropriately. Many other researchers have considered left preconditioners applied to the linear system (1) that make the associated Jacobi and Gauss-Seidel methods converge faster than the original ones. Such modifications or improvements based on prechosen preconditioners were considered by Milaszewicz [2], who based his ideas on previous ones (see, e.g., [3]), by Gunawardena et al. [4], and very recently by Li and Sun [5], who extended the class of matrices considered in [1]; by other researchers (see, e.g., [6-12]) many results for more general preconditioned iterative methods were obtained.

In this paper, besides the above preconditioned method, we will consider the following preconditioned linear system

$$A_\beta x = b_\beta, \qquad (3)$$

where $A_\beta = (I + K_\beta)A$ and $b_\beta = (I + K_\beta)b$ with

$$K_\beta = \begin{pmatrix}
0 & & & & \\
-\beta_1 a_{2,1} & 0 & & & \\
 & -\beta_2 a_{3,2} & 0 & & \\
 & & \ddots & \ddots & \\
 & & & -\beta_{n-1} a_{n,n-1} & 0
\end{pmatrix},$$

where $\beta_i \geq 0$ ($i = 1, 2, \ldots, n-1$). Our work gives the convergence analysis of the above two preconditioned Gauss-Seidel methods for the case when the coefficient matrix $A$ is an H-matrix and obtains two sufficient conditions for guaranteeing the convergence of the two preconditioned iterative methods.

2. Preliminaries

Without loss of generality, let the matrix $A$ of the linear system (1) be $A = I - L - U$, where $I$ is the identity matrix and $L$ and $U$ are the strictly lower and strictly upper triangular matrices obtained from $A$, respectively. We assume $a_{i,i+1} \neq 0$; considering the preconditioner $P = I + S_\alpha$, we then have

$$A_\alpha = (I + S_\alpha)A = I - L - S_\alpha L - (U - S_\alpha + S_\alpha U), \qquad b_\alpha = (I + S_\alpha)b.$$

Whenever $\alpha_i a_{i,i+1} a_{i+1,i} \neq 1$ for $i = 1, 2, \ldots, n-1$, the inverse $(I - L - S_\alpha L)^{-1}$ exists. Hence it is possible to define the Gauss-Seidel iteration matrix for $A_\alpha$, namely

$$T_\alpha = (I - L - S_\alpha L)^{-1}(U - S_\alpha + S_\alpha U). \qquad (4)$$

Similarly, if $a_{i,i-1} \neq 0$, considering the preconditioner $P = I + K_\beta$ we have

$$A_\beta = (I + K_\beta)A = I - L + K_\beta - K_\beta L - (U + K_\beta U), \qquad b_\beta = (I + K_\beta)b,$$

and we define the Gauss-Seidel iteration matrix for $A_\beta$, namely

$$T_\beta = (I - L + K_\beta - K_\beta L)^{-1}(U + K_\beta U). \qquad (5)$$
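To make the two preconditioners concrete, here is a hedged NumPy sketch that builds $I + S_\alpha$ and $I + K_\beta$ and the Gauss-Seidel iteration matrices $T_\alpha$ and $T_\beta$ of (4) and (5); the small H-matrix and the parameter values $\alpha_i = \beta_i = 0.8$ are illustrative assumptions, not values from the paper.

```python
# Sketch of the preconditioners I + S_alpha and I + K_beta and the iteration
# matrices (4)-(5). The example matrix and the parameters alpha, beta are assumptions.
import numpy as np

def S_alpha(A, alpha):
    """First superdiagonal: (S_alpha)_{i,i+1} = -alpha_i * a_{i,i+1}."""
    S = np.zeros_like(A)
    for i in range(len(A) - 1):
        S[i, i + 1] = -alpha[i] * A[i, i + 1]
    return S

def K_beta(A, beta):
    """First subdiagonal: (K_beta)_{i+1,i} = -beta_i * a_{i+1,i}."""
    K = np.zeros_like(A)
    for i in range(len(A) - 1):
        K[i + 1, i] = -beta[i] * A[i + 1, i]
    return K

def T_alpha(A, alpha):
    """T_alpha = (I - L - S_alpha L)^{-1}(U - S_alpha + S_alpha U); here
    M_alpha is the lower triangle (with diagonal) of A_alpha = (I + S_alpha)A."""
    A_a = (np.eye(len(A)) + S_alpha(A, alpha)) @ A
    M = np.tril(A_a)
    return np.linalg.solve(M, M - A_a)

def T_beta(A, beta):
    """T_beta = (I - L + K_beta - K_beta L)^{-1}(U + K_beta U); note that M_beta
    keeps the unit diagonal, so the diagonal of K_beta U stays in N_beta."""
    A_b = (np.eye(len(A)) + K_beta(A, beta)) @ A
    M = np.eye(len(A)) + np.tril(A_b, -1)
    return np.linalg.solve(M, M - A_b)

A = np.array([[ 1.0, -0.3, -0.2],
              [-0.4,  1.0, -0.3],
              [-0.2, -0.4,  1.0]])        # an H-matrix with unit diagonal
alpha = beta = np.array([0.8, 0.8])        # illustrative parameter choices
for T in (T_alpha(A, alpha), T_beta(A, beta)):
    print(max(abs(np.linalg.eigvals(T))))  # both spectral radii are < 1 here
```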

We first recall the following. A real vector $x = (x_1, x_2, \ldots, x_n)^T$ is called nonnegative (positive), denoted by $x \geq 0$ ($x > 0$), if $x_i \geq 0$ ($x_i > 0$) for all $i$. Similarly, a real matrix $A = (a_{i,j})$ is called nonnegative, denoted by $A \geq 0$ ($A > 0$), if $a_{i,j} \geq 0$ ($a_{i,j} > 0$) for all $i, j$; the absolute value of $A$ is denoted by $|A| = (|a_{i,j}|)$.

Definition 2.1 ([13]). A real matrix $A$ is called an M-matrix if $A = sI - B$ with $B \geq 0$ and $s > \rho(B)$, where $\rho(B)$ denotes the spectral radius of $B$.

Definition 2.2 ([13]). A complex matrix $A = (a_{i,j})$ is an H-matrix if its comparison matrix $\langle A \rangle = (\bar{a}_{i,j})$ is an M-matrix, where $\bar{a}_{i,i} = |a_{i,i}|$ and $\bar{a}_{i,j} = -|a_{i,j}|$ for $i \neq j$.

Definition 2.3 ([14]). The splitting $A = M - N$ is called an H-splitting if $\langle M \rangle - |N|$ is an M-matrix.

Lemma 2.1 ([14]). Let $A = M - N$ be a splitting. If it is an H-splitting, then $A$ and $M$ are H-matrices and $\rho(M^{-1}N) \leq \rho(\langle M \rangle^{-1}|N|) < 1$.

Lemma 2.2 ([15]). Let $A$ have nonpositive off-diagonal entries. Then the real matrix $A$ is an M-matrix if and only if there exists some positive vector $u = (u_1, \ldots, u_n)^T > 0$ such that $Au > 0$.
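A short sketch of these notions follows: the comparison matrix of Definition 2.2, an M-matrix test in the spectral-radius form of Definition 2.1, and the positive-vector certificate of Lemma 2.2. The numerical tolerance, the choice $s = \max_i \bar{a}_{i,i}$ and the sample matrix are assumptions made for illustration.

```python
# Sketch of <A> (Definition 2.2) and two M-matrix tests: Definition 2.1 via a
# spectral radius, and Lemma 2.2 via a positive vector u with <A>u > 0.
import numpy as np

def comparison(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(C, tol=1e-12):
    """Definition 2.1: write C = sI - B with B >= 0 and test rho(B) < s
    (for a Z-matrix C, taking s = max_i c_ii keeps B nonnegative)."""
    s = np.max(np.diag(C))
    B = s * np.eye(len(C)) - C
    return np.all(B >= -tol) and max(abs(np.linalg.eigvals(B))) < s - tol

def is_H_matrix(A):
    """Definition 2.2: A is an H-matrix iff <A> is an M-matrix."""
    return is_M_matrix(comparison(A))

A = np.array([[ 1.0, -0.3, -0.2],
              [-0.4,  1.0, -0.3],
              [-0.2, -0.4,  1.0]])
print(is_H_matrix(A))                           # True: A is strictly diagonally dominant
u = np.linalg.solve(comparison(A), np.ones(3))  # Lemma 2.2 certificate: u > 0, <A>u = e > 0
print(np.all(u > 0), comparison(A) @ u)
```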

3. Convergence results

Theorem 3.1. Let $A$ be an H-matrix with unit diagonal elements, and let $A_\alpha = (I + S_\alpha)A = M_\alpha - N_\alpha$ with $M_\alpha = I - L - S_\alpha L$ and $N_\alpha = U - S_\alpha + S_\alpha U$. Let $u = (u_1, \ldots, u_n)^T$ be a positive vector such that $\langle A \rangle u > 0$. Assume that $a_{i,i+1} \neq 0$ for $i = 1, 2, \ldots, n-1$, and set

$$\bar{\alpha}_i = \frac{u_i - \sum_{j=1}^{i-1} |a_{i,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j}| u_j + |a_{i,i+1}| u_{i+1}}{|a_{i,i+1}| \sum_{j=1}^{n} |a_{i+1,j}| u_j}.$$

Then $\bar{\alpha}_i > 1$ for $i = 1, 2, \ldots, n-1$, and for $0 \leq \alpha_i < \bar{\alpha}_i$ the splitting $A_\alpha = M_\alpha - N_\alpha$ is an H-splitting and $\rho(M_\alpha^{-1} N_\alpha) < 1$, so that the iteration (2) converges to the solution of (1).

Proof. By assumption, the positive vector $u > 0$ satisfies $\langle A \rangle u > 0$; from the definition of $\langle A \rangle$ we have

$$u_i - \sum_{j \neq i} |a_{i,j}| u_j > 0 \quad \text{for } i = 1, 2, \ldots, n.$$

Therefore, for $i = 1, 2, \ldots, n-1$ we have

$$u_i - \sum_{j=1}^{i-1} |a_{i,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j}| u_j + |a_{i,i+1}| u_{i+1} = \Big(u_i - \sum_{j \neq i} |a_{i,j}| u_j\Big) + 2|a_{i,i+1}| u_{i+1} > 2|a_{i,i+1}| u_{i+1}.$$

Observe that $u_{i+1} - \sum_{j \neq i+1} |a_{i+1,j}| u_j > 0$ and $|a_{i+1,i+1}| = 1$; then we have

$$2|a_{i,i+1}| u_{i+1} > |a_{i,i+1}|\Big(u_{i+1} + \sum_{j \neq i+1} |a_{i+1,j}| u_j\Big) = |a_{i,i+1}| \sum_{j=1}^{n} |a_{i+1,j}| u_j > 0,$$

so that

$$u_i - \sum_{j=1}^{i-1} |a_{i,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j}| u_j + |a_{i,i+1}| u_{i+1} > |a_{i,i+1}| \sum_{j=1}^{n} |a_{i+1,j}| u_j > 0 \quad \text{for } i = 1, 2, \ldots, n-1.$$

This implies

$$\bar{\alpha}_i = \frac{u_i - \sum_{j=1}^{i-1} |a_{i,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j}| u_j + |a_{i,i+1}| u_{i+1}}{|a_{i,i+1}| \sum_{j=1}^{n} |a_{i+1,j}| u_j} > 1 \quad \text{for } i = 1, 2, \ldots, n-1.$$

Hence $\bar{\alpha}_i > 1$ for $i = 1, 2, \ldots, n-1$.

In order to prove that $\rho(M_\alpha^{-1} N_\alpha) < 1$, we first show that $\langle M_\alpha \rangle - |N_\alpha|$ is an M-matrix. Let $[(\langle M_\alpha \rangle - |N_\alpha|)u]_i$ denote the $i$th element of the vector $(\langle M_\alpha \rangle - |N_\alpha|)u$. Then, for $i = 1, 2, \ldots, n-1$,

$$[(\langle M_\alpha \rangle - |N_\alpha|)u]_i = |1 - \alpha_i a_{i,i+1} a_{i+1,i}| u_i - \sum_{j=1}^{i-1} |a_{i,j} - \alpha_i a_{i,i+1} a_{i+1,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j} - \alpha_i a_{i,i+1} a_{i+1,j}| u_j - |1 - \alpha_i|\,|a_{i,i+1}| u_{i+1}$$
$$\geq u_i - \alpha_i |a_{i,i+1}|\,|a_{i+1,i}| u_i - \sum_{j=1}^{i-1} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - \sum_{j=i+2}^{n} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - |1 - \alpha_i|\,|a_{i,i+1}| u_{i+1}, \qquad (6)$$

and

$$[(\langle M_\alpha \rangle - |N_\alpha|)u]_n = u_n - \sum_{j \neq n} |a_{n,j}| u_j > 0. \qquad (7)$$

If $0 \leq \alpha_i \leq 1$ for $i = 1, 2, \ldots, n-1$, then from (6) we have

$$[(\langle M_\alpha \rangle - |N_\alpha|)u]_i \geq u_i - \alpha_i |a_{i,i+1}|\,|a_{i+1,i}| u_i - \sum_{j=1}^{i-1} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - \sum_{j=i+2}^{n} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - (1 - \alpha_i) |a_{i,i+1}| u_{i+1}$$
$$= \Big(u_i - \sum_{j \neq i} |a_{i,j}| u_j\Big) + \alpha_i |a_{i,i+1}| \Big(u_{i+1} - \sum_{j \neq i+1} |a_{i+1,j}| u_j\Big). \qquad (8)$$

Since $u_i - \sum_{j \neq i} |a_{i,j}| u_j > 0$ and $u_{i+1} - \sum_{j \neq i+1} |a_{i+1,j}| u_j > 0$, we have

$$[(\langle M_\alpha \rangle - |N_\alpha|)u]_i > 0 \quad \text{for } i = 1, 2, \ldots, n-1. \qquad (9)$$

If $1 < \alpha_i < \bar{\alpha}_i$ for $i = 1, 2, \ldots, n-1$, then from (6) and the definition of $\bar{\alpha}_i$ we have

$$[(\langle M_\alpha \rangle - |N_\alpha|)u]_i \geq u_i - \alpha_i |a_{i,i+1}|\,|a_{i+1,i}| u_i - \sum_{j=1}^{i-1} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - \sum_{j=i+2}^{n} \big(|a_{i,j}| + \alpha_i |a_{i,i+1}|\,|a_{i+1,j}|\big) u_j - (\alpha_i - 1) |a_{i,i+1}| u_{i+1}$$
$$= u_i - \sum_{j=1}^{i-1} |a_{i,j}| u_j - \sum_{j=i+2}^{n} |a_{i,j}| u_j + |a_{i,i+1}| u_{i+1} - \alpha_i |a_{i,i+1}| \sum_{j=1}^{n} |a_{i+1,j}| u_j > 0. \qquad (10)$$

Therefore, from (7)-(10) we have $(\langle M_\alpha \rangle - |N_\alpha|)u > 0$ for $0 \leq \alpha_i < \bar{\alpha}_i$. By Lemma 2.2, $\langle M_\alpha \rangle - |N_\alpha|$ is an M-matrix for $0 \leq \alpha_i < \bar{\alpha}_i$ ($i = 1, 2, \ldots, n-1$), and from Definition 2.3, $A_\alpha = M_\alpha - N_\alpha$ is an H-splitting for $0 \leq \alpha_i < \bar{\alpha}_i$ ($i = 1, 2, \ldots, n-1$). Hence, according to Lemma 2.1, $A_\alpha$ and $M_\alpha$ are H-matrices and $\rho(M_\alpha^{-1} N_\alpha) \leq \rho(\langle M_\alpha \rangle^{-1} |N_\alpha|) < 1$ for $0 \leq \alpha_i < \bar{\alpha}_i$ ($i = 1, 2, \ldots, n-1$).
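The following sketch checks Theorem 3.1 numerically on a small example: it computes the bounds $\bar{\alpha}_i$, chooses $\alpha_i < \bar{\alpha}_i$, and verifies the certificate $(\langle M_\alpha \rangle - |N_\alpha|)u > 0$ used in the proof together with $\rho(M_\alpha^{-1} N_\alpha) < 1$. The test H-matrix and the choice of $u$ as the solution of $\langle A \rangle u = (1, \ldots, 1)^T$ are illustrative assumptions.

```python
# Numerical check of Theorem 3.1 on a small example (illustrative assumptions:
# the H-matrix A and the vector u solving <A>u = e, so that u > 0 and <A>u > 0).
import numpy as np

def comparison(A):
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[ 1.0, -0.3, -0.2],
              [-0.4,  1.0, -0.3],
              [-0.2, -0.4,  1.0]])       # H-matrix with unit diagonal
n = len(A)
u = np.linalg.solve(comparison(A), np.ones(n))
B = np.abs(A)

# alpha_bar_i of Theorem 3.1 for i = 1, ..., n-1 (0-based index i in the code)
alpha_bar = np.array([
    (u[i] - B[i, :i] @ u[:i] - B[i, i+2:] @ u[i+2:] + B[i, i+1] * u[i+1])
    / (B[i, i+1] * (B[i+1, :] @ u))
    for i in range(n - 1)])
print("alpha_bar =", alpha_bar)          # each entry is > 1, as the theorem states

alpha = 0.95 * alpha_bar                 # any 0 <= alpha_i < alpha_bar_i is admissible
S = np.zeros_like(A)
for i in range(n - 1):
    S[i, i + 1] = -alpha[i] * A[i, i + 1]
A_alpha = (np.eye(n) + S) @ A
M_alpha = np.tril(A_alpha)               # M_alpha = I - L - S_alpha L
N_alpha = M_alpha - A_alpha              # N_alpha = U - S_alpha + S_alpha U

# H-splitting certificate from the proof: (<M_alpha> - |N_alpha|) u > 0 componentwise,
# and the preconditioned Gauss-Seidel iteration matrix has spectral radius < 1.
print((comparison(M_alpha) - np.abs(N_alpha)) @ u)
print(max(abs(np.linalg.eigvals(np.linalg.solve(M_alpha, N_alpha)))))
```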

Theorem 3.2. Let $A$ be an H-matrix with unit diagonal elements, and let $A_\beta = (I + K_\beta)A = M_\beta - N_\beta$ with $M_\beta = I - L + K_\beta - K_\beta L$ and $N_\beta = U + K_\beta U$. Let $v = (v_1, \ldots, v_n)^T$ be a positive vector such that $\langle A \rangle v > 0$. Assume that $a_{i,i-1} \neq 0$ for $i = 2, \ldots, n$, and set

$$\bar{\beta}_{i-1} = \frac{v_i - \sum_{j=1}^{i-2} |a_{i,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j}| v_j + |a_{i,i-1}| v_{i-1}}{|a_{i,i-1}| \sum_{j=1}^{n} |a_{i-1,j}| v_j}.$$

Then $\bar{\beta}_{i-1} > 1$ for $i = 2, \ldots, n$, and for $0 \leq \beta_{i-1} < \bar{\beta}_{i-1}$ the splitting $A_\beta = M_\beta - N_\beta$ is an H-splitting and $\rho(M_\beta^{-1} N_\beta) < 1$, so that the iteration (2) converges to the solution of (1).

Proof. From the conditions of the theorem, there exists a positive vector $v > 0$ satisfying $\langle A \rangle v > 0$; then we have

$$v_i - \sum_{j \neq i} |a_{i,j}| v_j > 0 \quad \text{for } i = 1, 2, \ldots, n.$$

Therefore, for $i = 2, \ldots, n$ we have

$$v_i - \sum_{j=1}^{i-2} |a_{i,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j}| v_j + |a_{i,i-1}| v_{i-1} = \Big(v_i - \sum_{j \neq i} |a_{i,j}| v_j\Big) + 2|a_{i,i-1}| v_{i-1} > 2|a_{i,i-1}| v_{i-1}.$$

Since $v_{i-1} - \sum_{j \neq i-1} |a_{i-1,j}| v_j > 0$ and $|a_{i-1,i-1}| = 1$, we have

$$2|a_{i,i-1}| v_{i-1} > |a_{i,i-1}|\Big(v_{i-1} + \sum_{j \neq i-1} |a_{i-1,j}| v_j\Big) = |a_{i,i-1}| \sum_{j=1}^{n} |a_{i-1,j}| v_j > 0.$$

So it is obvious that

$$v_i - \sum_{j=1}^{i-2} |a_{i,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j}| v_j + |a_{i,i-1}| v_{i-1} > |a_{i,i-1}| \sum_{j=1}^{n} |a_{i-1,j}| v_j > 0 \quad \text{for } i = 2, \ldots, n.$$

This implies

$$\bar{\beta}_{i-1} = \frac{v_i - \sum_{j=1}^{i-2} |a_{i,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j}| v_j + |a_{i,i-1}| v_{i-1}}{|a_{i,i-1}| \sum_{j=1}^{n} |a_{i-1,j}| v_j} > 1 \quad \text{for } i = 2, \ldots, n.$$

Namely, $\bar{\beta}_{i-1} > 1$ for $i = 2, \ldots, n$.

In order to prove that $\rho(M_\beta^{-1} N_\beta) < 1$, we first show that $\langle M_\beta \rangle - |N_\beta|$ is an M-matrix. Let $[(\langle M_\beta \rangle - |N_\beta|)v]_i$ denote the $i$th element of the vector $(\langle M_\beta \rangle - |N_\beta|)v$. Then, for $i = 2, \ldots, n$,

$$[(\langle M_\beta \rangle - |N_\beta|)v]_i = v_i - \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,i}| v_i - \sum_{j=1}^{i-2} |a_{i,j} - \beta_{i-1} a_{i,i-1} a_{i-1,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j} - \beta_{i-1} a_{i,i-1} a_{i-1,j}| v_j - |1 - \beta_{i-1}|\,|a_{i,i-1}| v_{i-1}$$
$$\geq v_i - \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,i}| v_i - \sum_{j=1}^{i-2} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - \sum_{j=i+1}^{n} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - |1 - \beta_{i-1}|\,|a_{i,i-1}| v_{i-1}, \qquad (11)$$

and

$$[(\langle M_\beta \rangle - |N_\beta|)v]_1 = v_1 - \sum_{j=2}^{n} |a_{1,j}| v_j > 0. \qquad (12)$$

If $0 \leq \beta_{i-1} \leq 1$ for $i = 2, \ldots, n$, then from (11) we have

$$[(\langle M_\beta \rangle - |N_\beta|)v]_i \geq v_i - \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,i}| v_i - \sum_{j=1}^{i-2} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - \sum_{j=i+1}^{n} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - (1 - \beta_{i-1}) |a_{i,i-1}| v_{i-1}$$
$$= \Big(v_i - \sum_{j \neq i} |a_{i,j}| v_j\Big) + \beta_{i-1} |a_{i,i-1}| \Big(v_{i-1} - \sum_{j \neq i-1} |a_{i-1,j}| v_j\Big). \qquad (13)$$

Observe that $v_i - \sum_{j \neq i} |a_{i,j}| v_j > 0$ and $v_{i-1} - \sum_{j \neq i-1} |a_{i-1,j}| v_j > 0$; then we have

$$[(\langle M_\beta \rangle - |N_\beta|)v]_i > 0 \quad \text{for } i = 2, 3, \ldots, n. \qquad (14)$$

If $1 < \beta_{i-1} < \bar{\beta}_{i-1}$ for $i = 2, \ldots, n$, then from (11) and the definition of $\bar{\beta}_{i-1}$ we have

$$[(\langle M_\beta \rangle - |N_\beta|)v]_i \geq v_i - \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,i}| v_i - \sum_{j=1}^{i-2} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - \sum_{j=i+1}^{n} \big(|a_{i,j}| + \beta_{i-1} |a_{i,i-1}|\,|a_{i-1,j}|\big) v_j - (\beta_{i-1} - 1) |a_{i,i-1}| v_{i-1}$$
$$= v_i - \sum_{j=1}^{i-2} |a_{i,j}| v_j - \sum_{j=i+1}^{n} |a_{i,j}| v_j + |a_{i,i-1}| v_{i-1} - \beta_{i-1} |a_{i,i-1}| \sum_{j=1}^{n} |a_{i-1,j}| v_j > 0. \qquad (15)$$

Therefore, from (12)-(15) we have $(\langle M_\beta \rangle - |N_\beta|)v > 0$ for $0 \leq \beta_{i-1} < \bar{\beta}_{i-1}$. By Lemma 2.2, $\langle M_\beta \rangle - |N_\beta|$ is an M-matrix for $0 \leq \beta_{i-1} < \bar{\beta}_{i-1}$ ($i = 2, \ldots, n$), and from Definition 2.3, $A_\beta = M_\beta - N_\beta$ is an H-splitting for $0 \leq \beta_{i-1} < \bar{\beta}_{i-1}$ ($i = 2, \ldots, n$). Hence, from Lemma 2.1, $A_\beta$ and $M_\beta$ are H-matrices and $\rho(M_\beta^{-1} N_\beta) \leq \rho(\langle M_\beta \rangle^{-1} |N_\beta|) < 1$ for $0 \leq \beta_{i-1} < \bar{\beta}_{i-1}$ ($i = 2, \ldots, n$).
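An analogous sketch checks Theorem 3.2: it computes the bounds $\bar{\beta}_{i-1}$, chooses $\beta_{i-1} < \bar{\beta}_{i-1}$, and verifies $(\langle M_\beta \rangle - |N_\beta|)v > 0$ and $\rho(M_\beta^{-1} N_\beta) < 1$. Again the test matrix and the choice of $v$ from $\langle A \rangle v = (1, \ldots, 1)^T$ are illustrative assumptions.

```python
# Numerical check of Theorem 3.2 (illustrative assumptions: the H-matrix A and
# the vector v solving <A>v = e, so that v > 0 and <A>v > 0).
import numpy as np

def comparison(A):
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[ 1.0, -0.3, -0.2],
              [-0.4,  1.0, -0.3],
              [-0.2, -0.4,  1.0]])       # H-matrix with unit diagonal
n = len(A)
v = np.linalg.solve(comparison(A), np.ones(n))
B = np.abs(A)

# beta_bar_{i-1} of Theorem 3.2 for rows i = 2, ..., n (0-based row index i here)
beta_bar = np.array([
    (v[i] - B[i, :i-1] @ v[:i-1] - B[i, i+1:] @ v[i+1:] + B[i, i-1] * v[i-1])
    / (B[i, i-1] * (B[i-1, :] @ v))
    for i in range(1, n)])
print("beta_bar =", beta_bar)            # each entry is > 1, as the theorem states

beta = 0.95 * beta_bar                   # any 0 <= beta_{i-1} < beta_bar_{i-1} works
K = np.zeros_like(A)
for i in range(n - 1):
    K[i + 1, i] = -beta[i] * A[i + 1, i]
A_beta = (np.eye(n) + K) @ A
M_beta = np.eye(n) + np.tril(A_beta, -1) # M_beta = I - L + K_beta - K_beta L (unit diagonal)
N_beta = M_beta - A_beta                 # N_beta = U + K_beta U (includes a diagonal part)

# H-splitting certificate from the proof: (<M_beta> - |N_beta|) v > 0 componentwise,
# and the preconditioned Gauss-Seidel iteration matrix has spectral radius < 1.
print((comparison(M_beta) - np.abs(N_beta)) @ v)
print(max(abs(np.linalg.eigvals(np.linalg.solve(M_beta, N_beta)))))
```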

Remark 3.1. From Theorems 3.1 and 3.2, we observe that the two preconditioned iterative methods converge to the solution of the linear system (1) whenever the coefficient matrix $A$ is an H-matrix satisfying the conditions of the corresponding theorem. Hence the two theorems provide sufficient conditions for guaranteeing the convergence of the preconditioned iterative methods.

Acknowledgements

The authors wish to express their thanks to Professor E.Y. Rodin and the two referees for helpful suggestions and comments.

References

[1] Toshiyuki Kohno, Hisashi Kotakemori, Hiroshi Niki, Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997).
[2] J.P. Milaszewicz, Improving Jacobi and Gauss-Seidel iterations, Linear Algebra Appl. 93 (1987).
[3] J.P. Milaszewicz, On modified Jacobi linear operators, Linear Algebra Appl. 51 (1983).
[4] A. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 149/156 (1991).
[5] W. Li, W.W. Sun, Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices, Linear Algebra Appl. 317 (2000).
[6] A. Hadjidimos, D. Noutsos, M. Tzoumas, More on modifications and improvements of classical iterative schemes for M-matrices, Linear Algebra Appl. 364 (2003).
[7] D. Noutsos, M. Tzoumas, On optimal improvements of classical iterative schemes for Z-matrices, J. Comput. Appl. Math. 188 (2006).
[8] M. Neumann, R.J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Linear Algebra Appl. 88/89 (1987).
[9] W. Li, A note on the preconditioned Gauss-Seidel (GS) method for linear systems, J. Comput. Appl. Math. 182 (2005).
[10] X.Z. Wang, T.Z. Huang, Y.D. Fu, Comparison results on preconditioned SOR-type methods for Z-matrix linear systems, J. Comput. Appl. Math. 206 (2007).
[11] M.J. Wu, L. Wang, Y.Z. Song, Preconditioned AOR iterative method for linear systems, Appl. Numer. Math. 57 (2007).
[12] L.Y. Sun, Some extensions of the improved modified Gauss-Seidel iterative method for H-matrices, Numer. Linear Algebra Appl. 13 (2006).
[13] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.
[14] A. Frommer, D.B. Szyld, H-splittings and two-stage iterative methods, Numer. Math. 63 (1992).
[15] K.Y. Fan, Topological proofs for certain theorems on matrices with non-negative elements, Monatsh. Math. 62 (1958).
