A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

Journal of Mathematical Sciences: Advances and Applications
Volume 1, Number 2, 2008, Pages 311-322

Department of Mathematics
Taiyuan Normal University
Taiyuan, Shanxi Province, P. R. China
wenrp@163.com

Abstract

In 1991, Gunawardena et al. reported that the convergence rate of the Gauss-Seidel method with the preconditioning matrix I + S is superior to that of the basic iterative method. In this paper, we use the preconditioning matrix I + Ŝ. If the coefficient matrix A is a nonsingular L-matrix, the modified method yields a considerable improvement in the rate of convergence of the iterative method. Finally, a numerical example shows the advantage of this method.

2000 Mathematics Subject Classification: 65F10.
Keywords and phrases: L-matrix, iterative method, convergence.
This work is supported by the NSF of Shanxi Province.
Received July 6
© 2008 Scientific Advances Publishers

1. Introduction

Consider the linear system

    Ax = b,  A = (a_{ij}) ∈ R^{n×n} nonsingular,  b ∈ R^n.  (1.1)

Let A = M - N, where M is nonsingular. Then the basic iterative scheme for Eq. (1.1) is

    x_{k+1} = T x_k + c,  k = 0, 1, 2, ...,

where T = M^{-1}N and c = M^{-1}b. Assume A = I - L - U, where I is the identity matrix and L and U are the strictly lower and strictly upper triangular parts of A, respectively. Taking M = I - L and N = U leads to the classical Gauss-Seidel method. The iteration matrices of the classical Jacobi and classical Gauss-Seidel methods are J = L + U and T = (I - L)^{-1}U, respectively.

We consider a preconditioned form of (1.1),

    PAx = Pb.  (1.2)

The corresponding basic iterative method is given in general by

    x^{(k+1)} = M_P^{-1} N_P x^{(k)} + M_P^{-1} Pb,  k = 0, 1, ...,

where PA = M_P - N_P is a splitting of PA.

Several preconditioning techniques that improve the rate of convergence of the classical iterative methods have been developed (see [3, 4, 6]). For example, a simple modified Gauss-Seidel (MGS) method was first proposed by Gunawardena et al. (see [1]), with P = I + S and

    S = (s_{ij}),  s_{ij} = -a_{ij} if i = j - 1, and s_{ij} = 0 otherwise.  (1.3)

The performance of this method on some matrices is investigated in [1]. Kohno et al. (see [3]) extended the work of Gunawardena et al. to a more general case and presented a new MGS method using the preconditioning matrix P = I + S_α, where

    (S_α)_{ij} = -α_i a_{ij} if i = j - 1, and (S_α)_{ij} = 0 otherwise, with α_i ≥ 0.

The performance of this method on some matrices is investigated in [3]. Later, Li and Sun (see [4]) studied the convergence of this method on Z-matrices.
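The effect of the Gunawardena preconditioner P = I + S can be checked numerically. The following sketch (NumPy; the 3×3 test matrix is our own illustration, not one of the paper's examples) builds T and T̃ for a strictly diagonally dominant L-matrix and compares spectral radii:

```python
import numpy as np

def spectral_radius(M):
    """rho(M): largest modulus among the eigenvalues of M."""
    return max(abs(np.linalg.eigvals(M)))

# A hypothetical 3x3 L-matrix with unit diagonal, so that A = I - L - U.
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.4,  1.0, -0.1],
              [-0.2, -0.3,  1.0]])
n = A.shape[0]
L = -np.tril(A, -1)            # strictly lower part of A, negated
U = -np.triu(A,  1)            # strictly upper part of A, negated

# Classical Gauss-Seidel iteration matrix T = (I - L)^{-1} U.
T = np.linalg.solve(np.eye(n) - L, U)

# Preconditioner P = I + S with s_{i,i+1} = -a_{i,i+1}, as in Eq. (1.3).
S = np.zeros_like(A)
for i in range(n - 1):
    S[i, i + 1] = -A[i, i + 1]

At = (np.eye(n) + S) @ A       # preconditioned matrix (I + S)A
Mt = np.tril(At)               # lower triangle of (I + S)A, incl. diagonal
Nt = Mt - At                   # so that At = Mt - Nt
Tt = np.linalg.solve(Mt, Nt)   # modified Gauss-Seidel matrix T~

rt, rtt = spectral_radius(T), spectral_radius(Tt)
print(f"rho(T)  = {rt:.4f}")   # about 0.224
print(f"rho(T~) = {rtt:.4f}")  # about 0.115 -- smaller, as the theory predicts
```

Here M̃ = I - L - SL is exactly the lower triangle of (I + S)A, so forming the preconditioned splitting costs one extra sparse product at setup time, not per iteration.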

In this paper, we propose another preconditioned form for the MGS and Jacobi methods on L-matrices. Theorem 3.1 and Theorem 3.2 provide the respective convergence results.

2. Preliminaries

For A ∈ R^{n×n}, the directed graph G(A) of A is defined to be the pair (V, E), where V = {1, 2, ..., n} is the set of vertices and E = {(i, j) : a_{ij} ≠ 0, 1 ≤ i, j ≤ n} is the set of edges. A path from i_1 to i_p is an ordered tuple of vertices (i_1, i_2, ..., i_p) such that (i_k, i_{k+1}) ∈ E for each k. A directed graph is said to be strongly connected if for each pair (i, j) of vertices there is a path from i to j. The reflexive and transitive closure of G(A), denoted G*(A), is the smallest reflexive and transitive relation that includes the relation G(A). The matrix A is said to be irreducible if G(A) is strongly connected (see [5]).

A matrix A is called an L-matrix if a_{ij} ≤ 0 for all i ≠ j and a_{ii} > 0 for all i (see [2]). A matrix A is said to be nonnegative if each entry of A is nonnegative, and positive if each entry is positive; we write A ≥ 0 and A > 0, respectively. The spectral radius of A is denoted by ρ(A).

Let us now summarize the modified Gauss-Seidel method [1] with the preconditioner P = I + S. Let all elements a_{i,i+1} of S be nonzero. Then we have

    Ã = (I + S)A = I - L - SL - (U - S + SU),  b̃ = (I + S)b.  (2.1)

Whenever a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n - 1, the inverse (I - SL - L)^{-1} exists, and hence it is possible to define the Gauss-Seidel iteration matrix for Ã, namely

    T̃ = (I - SL - L)^{-1}(U - S + SU).  (2.2)

This iteration matrix T̃ is called the modified Gauss-Seidel iteration matrix. In what follows, when A = (a_{ij}) is an L-matrix we shall assume that a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n - 1.

We next propose a new preconditioning matrix P = I + Ŝ, where

    Ŝ = (ŝ_{ij}),  ŝ_{ij} = -a_{ij} if i = j + 1, and ŝ_{ij} = 0 otherwise.  (2.3)

Let all elements a_{i+1,i} of Ŝ be nonzero. Thus we obtain

    Â = (I + Ŝ)A = I - L - ŜL - (U - Ŝ + ŜU),  b̂ = (I + Ŝ)b.  (2.4)

Whenever a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n - 1, the inverse (I - ŜL - L)^{-1} exists, and hence it is possible to define the Gauss-Seidel iteration matrix for Â, namely

    T̂ = (I - ŜL - L)^{-1}(U - Ŝ + ŜU).  (2.5)

Remark 1. The usual splitting of A is A = D - L - U, where D, L, and U are the diagonal, strictly lower, and strictly upper triangular parts of A. Since A is an L-matrix, consider the new matrix Ā = D^{-1}A = I - D^{-1}L - D^{-1}U. Clearly the system Āx = D^{-1}b is equivalent to the system Ax = b. Hence, without loss of generality, we may assume that A has a splitting of the form A = I - L - U when A is an L-matrix.

Remark 2. As with the standard Gauss-Seidel iteration matrix T = (I - L)^{-1}U, the first column of (2.5) is zero when a_{12} a_{21} ≠ 1 (in addition, the first two columns of (2.5) are zero when a_{12} = a_{21} = 0). Thus we may partition T and T̂ so that

    T = [0  T_0; 0  T_1]  and  T̂ = [0  T̂_0; 0  T̂_1],  (2.6)

where T_1 and T̂_1 are both (n - 1) × (n - 1) matrices.

Theorem 2.1 (Perron-Frobenius) (see [2]). (a) If A is a positive matrix, then ρ(A) is a simple eigenvalue of A. (b) If A is nonnegative and irreducible, then ρ(A) is a simple eigenvalue of A. Furthermore, any eigenvalue with the same modulus as ρ(A) is also simple, and A has a positive eigenvector x corresponding to the eigenvalue ρ(A). Any other positive eigenvector of A is a multiple of x.

Theorem 2.2 (see [5]). Let A be a nonnegative matrix. Then: (a) If αx ≤ Ax for some nonnegative vector x, x ≠ 0, then α ≤ ρ(A). (b) If Ax ≤ βx for some positive vector x, then ρ(A) ≤ β. Moreover, if A is irreducible and if 0 ≠ αx ≤ Ax ≤ βx for some nonnegative vector x, then α ≤ ρ(A) ≤ β and x is a positive vector.

Theorem 2.3 (see [5]). Let A be irreducible. If M is the maximum row sum of A and m is the minimum row sum of A, then m ≤ ρ(A) ≤ M.

Theorem 2.4 (see [5]). Let A = I - L - U be an L-matrix with the usual splitting. Let T = (I - L)^{-1}U and J = L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. Then exactly one of the following holds:

(a) ρ(T) = ρ(J) = 0;
(b) 0 < ρ(T) < ρ(J) < 1;
(c) ρ(T) = ρ(J) = 1;
(d) 1 < ρ(J) < ρ(T).

Lemma 2.5. Let A = I - L - U be an L-matrix with the usual splitting. Let T = (I - L)^{-1}U and J = L + U be the Gauss-Seidel and Jacobi iteration matrices of A, respectively. If a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n - 1, then T_1 and T̂_1 are irreducible matrices (where T_1 and T̂_1 are defined as in (2.6)).

Proof. We show that T_1 is irreducible by showing that (2, 3, ..., n, 2) is a cycle in G(T); consequently it is also a cycle in G(T_1), so G(T_1) is strongly connected. Since L is a nonnegative nilpotent matrix, ρ(L) = 0, and thus

    (I - L)^{-1} = I + L + L^2 + ... + L^{n-1}.

Therefore T = (I + L + L^2 + ... + L^{n-1})U. The standing assumption a_{i,i+1} a_{i+1,i} > 0 implies that (1, 2, ..., n) is a path in G(U) and (n, n - 1, ..., 1) is a path in G(L). Also G(U) ⊆ G(T), since

    G(T) = G(U) ∪ G(LU) ∪ G(L^2 U) ∪ ... ∪ G(L^{n-1} U).

In particular, (2, 3, ..., n) is a path in G(T), so we only need to show that (n, 2) is an edge in G(T). For this, note that (n, 1) is an edge in G(L) and (1, 2) is an edge in G(U); consequently (n, 2) is an edge in G(LU) ⊆ G(T). Similarly, it can be shown that T̂_1 is irreducible. □

Lemma 2.6. Let A be an L-matrix such that a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n - 1. Then ρ(T) = 1 implies ρ(T̂) = 1. (All matrices considered here are defined as in Remark 1.)

Proof. Clearly ρ(T) = 1 implies ρ(T_1) = 1. From Lemma 2.5, T_1 is irreducible. By Theorem 2.1 there exists a positive vector ω_1 such that T_1 ω_1 = ω_1. Now define

    ω = (T_0 ω_1, ω_1)^T.

Then clearly Tω = ω. Since T_0 ≥ 0 and T_0 ≠ 0, the number T_0 ω_1 is a positive scalar, and hence ω is a positive vector. Consider

    T̂ω = (I - L - ŜL)^{-1}(U - Ŝ + ŜU)ω.

Factoring I - L out of I - L - ŜL = (I - L)[I - (I - L)^{-1}ŜL], we get

    T̂ω = [I - (I - L)^{-1}ŜL]^{-1}(I - L)^{-1}(U - Ŝ + ŜU)ω
        = [I - (I - L)^{-1}ŜL]^{-1}[(I - L)^{-1}Uω - (I - L)^{-1}Ŝω + (I - L)^{-1}ŜUω].

Now (I - L)^{-1}Uω = ω implies Uω = (I - L)ω. Therefore we get

    T̂ω = [I - (I - L)^{-1}ŜL]^{-1}[ω - (I - L)^{-1}Ŝω + (I - L)^{-1}Ŝ(I - L)ω]
        = [I - (I - L)^{-1}ŜL]^{-1}[ω - (I - L)^{-1}ŜLω]
        = [I - (I - L)^{-1}ŜL]^{-1}[I - (I - L)^{-1}ŜL]ω = ω.

Thus by Theorem 2.2, ρ(T̂) = 1. □

Lemma 2.7. Suppose A = I - L - U is an L-matrix such that a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n - 1, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Then the standard Jacobi iteration matrix J = L + U and the modified Jacobi iteration matrix Ĵ = (I - ŜL)^{-1}(L + U - Ŝ + ŜU) are both irreducible.

Proof. It follows from the condition a_{i,i+1} a_{i+1,i} > 0 that (1, 2, ..., n, n - 1, ..., 2, 1) is a closed path in G(L + U). Hence G(J) is strongly connected, and so J is irreducible.

Next we show that Ĵ is an irreducible matrix. Note that

    G(Ĵ) = G((I - ŜL)^{-1}(L + U - Ŝ + ŜU)) ⊇ G(L + U - Ŝ + ŜU) ⊇ G(L) ∪ G(U - Ŝ) ∪ G(ŜU),

where the first inclusion follows from the fact that (I - ŜL)^{-1} is a nonnegative matrix with a positive diagonal. Recall that G(L) contains the path (n, n - 1, ..., 1), and note that G(ŜU) contains the edges (1, 3), (2, 4), ..., (n - 2, n). Hence G(Ĵ) is strongly connected and Ĵ is irreducible. □

3. Main Results on L-matrices

Theorem 3.1. Suppose A = I - L - U is an L-matrix such that a_{i,i+1} a_{i+1,i} ≠ 1 for i = 1, 2, ..., n - 1, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let T = (I - L)^{-1}U and T̂ = (I - ŜL - L)^{-1}(U - Ŝ + ŜU) be the classical and modified Gauss-Seidel iteration matrices, respectively. Then

(a) ρ(T̂) < ρ(T) if ρ(T) < 1;
(b) ρ(T̂) = ρ(T) if ρ(T) = 1;
(c) ρ(T̂) > ρ(T) if ρ(T) > 1.

Proof. Part (b) follows from Lemma 2.6. We now prove (a) and (c). First note that there exists a positive vector ω such that

    Tω = λω,  (3.1)

where λ = ρ(T). Now consider

    T̂ω = (I - L - ŜL)^{-1}(U - Ŝ + ŜU)ω
        = (I - L - ŜL)^{-1}(I + Ŝ)Uω - (I - L - ŜL)^{-1}Ŝω
        = (I - L - ŜL)^{-1}(I + Ŝ)λ(I - L)ω - (I - L - ŜL)^{-1}Ŝω

by (3.1). Therefore

    T̂ω - Tω = (I - L - ŜL)^{-1}[λ(I + Ŝ)(I - L)ω - Ŝω - λ(I - L - ŜL)ω]
             = (I - L - ŜL)^{-1}(λ - 1)Ŝω.

Write (I - L - ŜL)^{-1} = D_1 + L_1 for some positive diagonal matrix D_1 and a nonnegative strictly lower triangular matrix L_1. Then

    (I - L - ŜL)^{-1}Ŝω = (D_1 + L_1)Ŝω ≥ 0,

since D_1 Ŝω ≥ 0. Also, since D_1 Ŝω ≠ 0, the vector (I - L - ŜL)^{-1}Ŝω is a nonzero, nonnegative vector.

If λ < 1, then T̂ω - Tω ≤ 0, and therefore T̂ω ≤ λω. By using the partitioned form of T̂ introduced in Remark 2, we get ρ(T̂) < λ by Theorem 2.2.

If λ > 1, then T̂ω - Tω ≥ 0 but not equal to 0. Hence T̂ω ≥ λω with T̂ω ≠ λω, and this implies T̂_1 ω_1 ≥ λω_1, where ω = (α_1, ω_1)^T with ω_1 > 0. Therefore ρ(T̂_1) > λ by Theorem 2.2. Hence ρ(T̂) > λ. □

Theorem 3.2. Let A = I - L - U be an L-matrix, where L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let J = L + U and Ĵ = (I - ŜL)^{-1}(L + U - Ŝ + ŜU) be the classical and modified Jacobi iteration matrices, respectively. Further assume that J and Ĵ are irreducible matrices. Then

(a) ρ(Ĵ) < ρ(J) if ρ(J) < 1;
(b) ρ(Ĵ) = ρ(J) if ρ(J) = 1;
(c) ρ(Ĵ) > ρ(J) if ρ(J) > 1.

Proof. Since J is irreducible, by Theorem 2.1 there exists a positive vector ω such that Jω = λω, where λ = ρ(J). This implies

    (L + U)ω = λω.  (3.2)

Now consider

    Ĵω - Jω = (I - ŜL)^{-1}[L + U - Ŝ + ŜU - (I - ŜL)(L + U)]ω
            = -(I - ŜL)^{-1}Ŝ[I - U - L^2 - LU]ω
            = -(I - ŜL)^{-1}Ŝ[ω - Uω - λLω]   (by (3.2))
            = -(I - ŜL)^{-1}Ŝ(ω - λω + Lω - λLω)
            = (I - ŜL)^{-1}Ŝ(I + L)(λ - 1)ω.  (3.3)

If ρ(J) < 1, then Ĵω ≤ ρ(J)ω by (3.3), and so, from Theorem 2.2, we obtain ρ(Ĵ) < ρ(J). Similarly we get ρ(Ĵ) > ρ(J) if ρ(J) > 1, and (b) also follows from Theorem 2.2. □

Corollary 3.3. Let A = I - L - U, J, and Ĵ be as defined in Theorem 3.2, with the assumption that J and Ĵ are irreducible replaced by the condition a_{i,i+1} a_{i+1,i} > 0 for i = 1, 2, ..., n - 1. Then the conclusion of Theorem 3.2 holds.

Proof. The proof follows from Lemma 2.7. □

Remark. We have tested many examples with random entries. Below we present some matrices to show that the modified iteration matrix

T̂ has a faster convergence rate when the iteration matrices T and T̃ are both convergent, and that T̂ diverges even faster when T and T̃ are both divergent.

4. Numerical Example

We present two matrices to illustrate our results. Let A = I - L - U, where L and U are the strictly lower and strictly upper triangular parts of A, respectively; let T = (I - L)^{-1}U, and let T̃ and T̂ be defined as in (2.2) and (2.5), respectively. For the first matrix we obtained ρ(T) = 0.96... with ρ(T̂) < ρ(T̃) < ρ(T) < 1, so the proposed method converges fastest. For the second matrix all three spectral radii exceed 1, with ρ(T̂) the largest, so the proposed method diverges fastest.
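The test matrices above are not reproduced here, so the comparison can only be illustrated on substitute data. The sketch below (NumPy; the randomly generated L-matrix and the helper names are ours, and it assumes the reading of (2.3) in which Ŝ collects the subdiagonal entries) builds T, T̃, and T̂, checks the splitting identity (2.4), and prints the three spectral radii:

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def gs_matrix(B):
    """Gauss-Seidel iteration matrix of B: M^{-1}N with M = tril(B), N = M - B."""
    M = np.tril(B)
    return np.linalg.solve(M, M - B)

# Random strictly diagonally dominant L-matrix A = I - L - U (substitute data).
rng = np.random.default_rng(0)
n = 8
off = rng.uniform(0.1, 1.0, (n, n))
np.fill_diagonal(off, 0.0)
off *= 0.9 / off.sum(axis=1, keepdims=True)   # off-diagonal row sums = 0.9 < 1
A = np.eye(n) - off
I = np.eye(n)
L, U = -np.tril(A, -1), -np.triu(A, 1)

# S from Eq. (1.3) (superdiagonal) and S^ from Eq. (2.3) (read as i = j + 1).
S  = np.diag(-np.diag(A,  1),  1)
Sh = np.diag(-np.diag(A, -1), -1)

# Splitting identity (2.4): (I + S^)A = (I - L - S^L) - (U - S^ + S^U).
Ah = (I + Sh) @ A
assert np.allclose(Ah, (I - L - Sh @ L) - (U - Sh + Sh @ U))

T  = gs_matrix(A)                                       # classical Gauss-Seidel
Tt = gs_matrix((I + S) @ A)                             # MGS of Gunawardena et al.
Th = np.linalg.solve(I - Sh @ L - L, U - Sh + Sh @ U)   # T^ of Eq. (2.5)

rt, rtt, rh = (spectral_radius(X) for X in (T, Tt, Th))
print(f"rho(T)  = {rt:.4f}")
print(f"rho(T~) = {rtt:.4f}")
print(f"rho(T^) = {rh:.4f}")
```

Whether ρ(T̂) improves on ρ(T̃) depends on the matrix at hand; the identity check, however, holds for any Ŝ, since (I + Ŝ)(I - L - U) = (I - L - ŜL) - (U - Ŝ + ŜU) is an exact rearrangement.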

References

[1] A. D. Gunawardena, S. K. Jain and L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991), 123-143.

[2] Jia-Gan Hu, Iterative Methods for the Solution of Linear Systems, Science Press, 1999.

[3] T. Kohno, H. Kotakemori, H. Niki and M. Usui, Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997), 113-123.

[4] W. Li and W. W. Sun, Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices, Linear Algebra Appl. 317 (2000), 227-240.

[5] R. S. Varga, Matrix Iterative Analysis, Second Edition, Springer-Verlag, Berlin, Heidelberg, 2000.

[6] Wen Li, Preconditioned AOR iterative methods for linear systems, Int. J. Comput. Math. 79 (2002).


In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

Properties for the Perron complement of three known subclasses of H-matrices

Properties for the Perron complement of three known subclasses of H-matrices Wang et al Journal of Inequalities and Applications 2015) 2015:9 DOI 101186/s13660-014-0531-1 R E S E A R C H Open Access Properties for the Perron complement of three known subclasses of H-matrices Leilei

More information

Strictly nonnegative tensors and nonnegative tensor partition

Strictly nonnegative tensors and nonnegative tensor partition SCIENCE CHINA Mathematics. ARTICLES. January 2012 Vol. 55 No. 1: 1 XX doi: Strictly nonnegative tensors and nonnegative tensor partition Hu Shenglong 1, Huang Zheng-Hai 2,3, & Qi Liqun 4 1 Department of

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

Scientific Computing WS 2018/2019. Lecture 10. Jürgen Fuhrmann Lecture 10 Slide 1

Scientific Computing WS 2018/2019. Lecture 10. Jürgen Fuhrmann Lecture 10 Slide 1 Scientific Computing WS 2018/2019 Lecture 10 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 10 Slide 1 Homework assessment Lecture 10 Slide 2 General Please apologize terse answers - on the bright

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018 1 Linear Systems Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March, 018 Consider the system 4x y + z = 7 4x 8y + z = 1 x + y + 5z = 15. We then obtain x = 1 4 (7 + y z)

More information

To appear in Monatsh. Math. WHEN IS THE UNION OF TWO UNIT INTERVALS A SELF-SIMILAR SET SATISFYING THE OPEN SET CONDITION? 1.

To appear in Monatsh. Math. WHEN IS THE UNION OF TWO UNIT INTERVALS A SELF-SIMILAR SET SATISFYING THE OPEN SET CONDITION? 1. To appear in Monatsh. Math. WHEN IS THE UNION OF TWO UNIT INTERVALS A SELF-SIMILAR SET SATISFYING THE OPEN SET CONDITION? DE-JUN FENG, SU HUA, AND YUAN JI Abstract. Let U λ be the union of two unit intervals

More information

Applicable Analysis and Discrete Mathematics available online at GRAPHS WITH TWO MAIN AND TWO PLAIN EIGENVALUES

Applicable Analysis and Discrete Mathematics available online at   GRAPHS WITH TWO MAIN AND TWO PLAIN EIGENVALUES Applicable Analysis and Discrete Mathematics available online at http://pefmath.etf.rs Appl. Anal. Discrete Math. 11 (2017), 244 257. https://doi.org/10.2298/aadm1702244h GRAPHS WITH TWO MAIN AND TWO PLAIN

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

Comparison results between Jacobi and other iterative methods

Comparison results between Jacobi and other iterative methods Journal of Computational and Applied Mathematics 169 (2004) 45 51 www.elsevier.com/locate/cam Comparison results between Jacobi and other iterative methods Zhuan-De Wang, Ting-Zhu Huang Faculty of Applied

More information

Two Characterizations of Matrices with the Perron-Frobenius Property

Two Characterizations of Matrices with the Perron-Frobenius Property NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2009;??:1 6 [Version: 2002/09/18 v1.02] Two Characterizations of Matrices with the Perron-Frobenius Property Abed Elhashash and Daniel

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Fast Verified Solutions of Sparse Linear Systems with H-matrices

Fast Verified Solutions of Sparse Linear Systems with H-matrices Fast Verified Solutions of Sparse Linear Systems with H-matrices A. Minamihata Graduate School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan aminamihata@moegi.waseda.jp K. Sekine

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

Spectral Properties of Matrix Polynomials in the Max Algebra

Spectral Properties of Matrix Polynomials in the Max Algebra Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract

More information

CHAPTER 5. Basic Iterative Methods

CHAPTER 5. Basic Iterative Methods Basic Iterative Methods CHAPTER 5 Solve Ax = f where A is large and sparse (and nonsingular. Let A be split as A = M N in which M is nonsingular, and solving systems of the form Mz = r is much easier than

More information

Analysis of Iterative Methods for Solving Sparse Linear Systems C. David Levermore 9 May 2013

Analysis of Iterative Methods for Solving Sparse Linear Systems C. David Levermore 9 May 2013 Analysis of Iterative Methods for Solving Sparse Linear Systems C. David Levermore 9 May 2013 1. General Iterative Methods 1.1. Introduction. Many applications lead to N N linear algebraic systems of the

More information

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method 54 CHAPTER 10 NUMERICAL METHODS 10. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

More information

Classical iterative methods for linear systems

Classical iterative methods for linear systems Classical iterative methods for linear systems Ed Bueler MATH 615 Numerical Analysis of Differential Equations 27 February 1 March, 2017 Ed Bueler (MATH 615 NADEs) Classical iterative methods for linear

More information

arxiv: v1 [math.ra] 11 Aug 2014

arxiv: v1 [math.ra] 11 Aug 2014 Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo. Department of Mathematics Iowa State University

Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo. Department of Mathematics Iowa State University Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo Department of Mathematics Iowa State University MATH 481 Numerical Methods for Differential Equations Songting Luo ( Department

More information

Iterative Solution methods

Iterative Solution methods p. 1/28 TDB NLA Parallel Algorithms for Scientific Computing Iterative Solution methods p. 2/28 TDB NLA Parallel Algorithms for Scientific Computing Basic Iterative Solution methods The ideas to use iterative

More information

Basic Concepts in Linear Algebra

Basic Concepts in Linear Algebra Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Iterative Methods for Ax=b

Iterative Methods for Ax=b 1 FUNDAMENTALS 1 Iterative Methods for Ax=b 1 Fundamentals consider the solution of the set of simultaneous equations Ax = b where A is a square matrix, n n and b is a right hand vector. We write the iterative

More information

Discrete mathematical models in the analysis of splitting iterative methods for linear systems

Discrete mathematical models in the analysis of splitting iterative methods for linear systems Computers and Mathematics with Applications 56 2008) 727 732 wwwelseviercom/locate/camwa Discrete mathematical models in the analysis of splitting iterative methods for linear systems Begoña Cantó, Carmen

More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A Spielman September 9, 202 7 About these notes These notes are not necessarily an accurate representation of what happened in

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

COURSE Iterative methods for solving linear systems

COURSE Iterative methods for solving linear systems COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme

More information

Splitting Iteration Methods for Positive Definite Linear Systems

Splitting Iteration Methods for Positive Definite Linear Systems Splitting Iteration Methods for Positive Definite Linear Systems Zhong-Zhi Bai a State Key Lab. of Sci./Engrg. Computing Inst. of Comput. Math. & Sci./Engrg. Computing Academy of Mathematics and System

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.6269 Performance Comparison of Relaxation Methods with Singular and Nonsingular

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

MAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to :

MAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to : MAC 0 Module Systems of Linear Equations and Matrices II Learning Objectives Upon completing this module, you should be able to :. Find the inverse of a square matrix.. Determine whether a matrix is invertible..

More information